kubectl get pods --all-namespaces -o wide

NAMESPACE                      NAME                                           READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
kube-system                    calico-kube-controllers-6f8f7fd457-qt6cj       1/1     Running   0          7h59m   192.168.0.166   master   <none>           <none>
kube-system                    calico-node-lgqcb                              1/1     Running   0          7h59m   192.168.0.155   node1    <none>           <none>
kube-system                    calico-node-qqjz8                              1/1     Running   0          7h59m   192.168.0.144   node2    <none>           <none>
kube-system                    calico-node-tvsfh                              1/1     Running   1          7h59m   192.168.0.166   master   <none>           <none>
kube-system                    coredns-7f9d8dc6c8-k6dkg                       1/1     Running   0          7h59m   10.233.70.1     master   <none>           <none>
kube-system                    dns-autoscaler-796f4ddddf-2f8mf                1/1     Running   0          7h59m   10.233.70.2     master   <none>           <none>
kube-system                    kube-apiserver-master                          1/1     Running   0          8h      192.168.0.166   master   <none>           <none>
kube-system                    kube-controller-manager-master                 1/1     Running   1          8h      192.168.0.166   master   <none>           <none>
kube-system                    kube-proxy-299jd                               1/1     Running   0          8h      192.168.0.144   node2    <none>           <none>
kube-system                    kube-proxy-5qjxd                               1/1     Running   0          8h      192.168.0.166   master   <none>           <none>
kube-system                    kube-proxy-7r9p2                               1/1     Running   0          8h      192.168.0.155   node1    <none>           <none>
kube-system                    kube-scheduler-master                          1/1     Running   1          8h      192.168.0.166   master   <none>           <none>
kube-system                    nodelocaldns-h577n                             1/1     Running   0          7h59m   192.168.0.166   master   <none>           <none>
kube-system                    nodelocaldns-qnt28                             1/1     Running   0          7h59m   192.168.0.155   node1    <none>           <none>
kube-system                    nodelocaldns-sj8tp                             1/1     Running   0          7h59m   192.168.0.144   node2    <none>           <none>
kube-system                    openebs-localpv-provisioner-77fbd6858d-gpczv   1/1     Running   2          7h36m   10.233.90.2     node1    <none>           <none>
kube-system                    openebs-ndm-ms2ps                              1/1     Running   0          7h36m   192.168.0.155   node1    <none>           <none>
kube-system                    openebs-ndm-n54r5                              1/1     Running   0          7h23m   192.168.0.144   node2    <none>           <none>
kube-system                    openebs-ndm-operator-59c75c96fc-4rhwv          1/1     Running   1          7h36m   10.233.90.3     node1    <none>           <none>
kube-system                    tiller-deploy-79b566b5ff-8glxm                 1/1     Running   0          7h59m   10.233.90.1     node1    <none>           <none>
kubesphere-controls-system     default-http-backend-5d464dd566-426kq          1/1     Running   0          7h25m   10.233.90.5     node1    <none>           <none>
kubesphere-controls-system     kubectl-admin-6c664db975-fbzh8                 1/1     Running   0          7h25m   10.233.90.8     node1    <none>           <none>
kubesphere-monitoring-system   kube-state-metrics-566cdbcb48-jn9ll            4/4     Running   0          7h25m   10.233.90.7     node1    <none>           <none>
kubesphere-monitoring-system   node-exporter-4gxcq                            2/2     Running   0          7h25m   192.168.0.144   node2    <none>           <none>
kubesphere-monitoring-system   node-exporter-f7b2m                            2/2     Running   0          7h25m   192.168.0.166   master   <none>           <none>
kubesphere-monitoring-system   node-exporter-hn9g9                            2/2     Running   0          7h25m   192.168.0.155   node1    <none>           <none>
kubesphere-monitoring-system   prometheus-k8s-0                               3/3     Running   1          7h25m   10.233.90.14    node1    <none>           <none>
kubesphere-monitoring-system   prometheus-k8s-1                               3/3     Running   1          7h25m   10.233.90.13    node1    <none>           <none>
kubesphere-monitoring-system   prometheus-k8s-system-0                        3/3     Running   1          7h25m   10.233.90.17    node1    <none>           <none>
kubesphere-monitoring-system   prometheus-k8s-system-1                        3/3     Running   1          7h25m   10.233.90.18    node1    <none>           <none>
kubesphere-monitoring-system   prometheus-operator-6b97679cfd-kxtm7           1/1     Running   0          7h25m   10.233.90.6     node1    <none>           <none>
kubesphere-system              ks-account-596657f8c6-c97dv                    1/1     Running   0          7h25m   10.233.70.9     master   <none>           <none>
kubesphere-system              ks-apigateway-78bcdc8ffc-9nrnn                 1/1     Running   0          7h25m   10.233.70.7     master   <none>           <none>
kubesphere-system              ks-apiserver-5b548d7c5c-v45b2                  1/1     Running   0          7h25m   10.233.70.8     master   <none>           <none>
kubesphere-system              ks-console-78bcf96dbf-zqq59                    1/1     Running   0          7h25m   10.233.70.11    master   <none>           <none>
kubesphere-system              ks-controller-manager-696986f8d9-sndh2         1/1     Running   1          7h25m   10.233.70.10    master   <none>           <none>
kubesphere-system              ks-installer-7d9fb945c7-dgxg5                  1/1     Running   0          7h36m   10.233.90.4     node1    <none>           <none>
kubesphere-system              openldap-0                                     1/1     Running   0          7h26m   10.233.70.6     master   <none>           <none>
kubesphere-system              redis-6fd6c6d6f9-pt5d8                         1/1     Running   0          7h26m   10.233.70.5     master   <none>           <none>

All the pods are in the Running state, but port 30880 is unreachable on every node. What on earth is going on?
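A generic first check for this symptom (sketched under the assumption of a default KubeSphere layout, where the console is exposed as the ks-console NodePort service in the kubesphere-system namespace) is to confirm that the NodePort really is 30880 and that something answers on it from a node:

# Confirm the console service and the NodePort it was assigned
kubectl get svc -n kubesphere-system ks-console

# From the master itself, see whether anything answers on the port
curl -I http://192.168.0.166:30880

# Check whether kube-proxy has the NodePort open on this node
ss -lntp | grep 30880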

The push refers to repository [192.168.0.166:5000/kubesphere/elasticsearch-oss]
c573321b5d86: Pushed
46cd2571f1c6: Pushed
fc56d8e86bb4: Pushed
743117a68886: Pushed
2e5badaeb57f: Pushed
32b15aee3e49: Pushed
9b0e1f384d5d: Retrying in 1 second
d69483a6face: Pushed
received unexpected HTTP status: 500 Internal Server Error
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json186586334: no space left on device
192.168.0.166:5000/k8scsi/csi-attacher:v2.0.0
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json999432357: no space left on device
The push refers to repository [192.168.0.166:5000/k8scsi/csi-attacher]
94f49fb5c15d: Retrying in 1 second
932da5156413: Retrying in 1 second
received unexpected HTTP status: 500 Internal Server Error
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json514450280: no space left on device
192.168.0.166:5000/k8scsi/csi-node-driver-registrar:v1.2.0
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json340647847: no space left on device
The push refers to repository [192.168.0.166:5000/k8scsi/csi-node-driver-registrar]
e242ebe3c0e7: Retrying in 1 second
932da5156413: Retrying in 1 second
received unexpected HTTP status: 500 Internal Server Error
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json173247938: no space left on device
192.168.0.166:5000/kubesphere/cloud-controller-manager:v1.4.0
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json383248953: no space left on device
The push refers to repository [192.168.0.166:5000/kubesphere/cloud-controller-manager]
7371592b8bed: Retrying in 1 second
68b0cbfdd0ed: Retrying in 1 second
73046094a9b8: Retrying in 1 second
received unexpected HTTP status: 500 Internal Server Error
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json114389572: no space left on device
192.168.0.166:5000/google-containers/k8s-dns-node-cache:1.15.5
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json034605779: no space left on device
The push refers to repository [192.168.0.166:5000/google-containers/k8s-dns-node-cache]
5d024027846e: Retrying in 1 second
a95807b0aa21: Retrying in 1 second
fe9a8b4f1dcc: Retrying in 1 second
received unexpected HTTP status: 500 Internal Server Error
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json765821462: no space left on device
192.168.0.166:5000/library/redis:5.0.5-alpine
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json243390077: no space left on device
The push refers to repository [192.168.0.166:5000/library/redis]
76ff8be8279a: Retrying in 1 second
9559709fdf7f: Retrying in 1 second
b499b26b07f7: Retrying in 1 second
1ac7839ac772: Retrying in 1 second
b34cd2e3555a: Retrying in 1 second
03901b4a2ea8: Waiting
received unexpected HTTP status: 500 Internal Server Error
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json796660152: no space left on device
192.168.0.166:5000/kubesphere/configmap-reload:v0.3.0
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json859707831: no space left on device
The push refers to repository [192.168.0.166:5000/kubesphere/configmap-reload]
f78d3758f4e1: Retrying in 2 seconds


It is erroring out again and I am close to a breakdown. Where exactly is this going wrong?

    TAO The error message states it clearly: you have run out of storage space.
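    As a rough illustration of how to confirm and relieve that on the registry host (the path assumes the default Docker data root, and the prune is destructive, so only run it if the cached images are expendable):

    # See which filesystem filled up (the push errors point at /var/lib/docker)
    df -h /var/lib/docker

    # Show how much of that space Docker itself is using
    docker system df

    # Reclaim space: removes stopped containers, unused images and networks
    docker system prune -a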


      fnag_huna That is indeed the case. The package used to sit under the root directory, and the root partition was only allocated 50 GB; the extracted tarball alone took up roughly 60% of it, and the partition filled up completely as soon as the installer started. I then moved the tarball to the home directory, which has 400 GB of space, but the next run failed with an SSH connection problem: to use the 'ssh' connection type with passwords, you must install the sshpass program

        TAO Did you uninstall the KubeSphere that was installed under root earlier? If it is telling you to install sshpass, install it and try again.
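        For reference, installing sshpass is a one-liner; the package names below are assumptions for CentOS/RHEL and Ubuntu/Debian respectively:

        # CentOS / RHEL (sshpass ships in the EPEL repository)
        yum install -y epel-release && yum install -y sshpass

        # Ubuntu / Debian
        apt-get update && apt-get install -y sshpass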


          fnag_huna I have installed k8s about 10 times, and only on the very first install did I ever see the login page, but logging in immediately threw an error. For every reinstall after that I reinstalled the entire OS, and I have tried both QingCloud and building the cluster locally.

            TAO Are you installing k8s or KubeSphere? This offline package installs k8s along with KubeSphere; all it needs is a clean environment. Also, have you disabled both the firewall and SELinux?
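            For reference, a minimal sketch of turning both off persistently on a CentOS-style node (adjust for your distro):

            # Stop firewalld now and keep it from starting on boot
            systemctl stop firewalld && systemctl disable firewalld

            # Switch SELinux to permissive mode immediately...
            setenforce 0
            # ...and disable it across reboots
            sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config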


              fnag_huna At first I set up k8s myself and then installed KubeSphere on top of it, and ran into problems with Redis and MySQL.
              The errors were as follows

              I could not find a solution online, so I switched to installing in Multi-Node mode, which also failed; the errors above are all from an online install in Multi-Node mode.

              The firewall was turned off every single time, all of it verified, and SELinux was disabled with setenforce 0 as well, permanently in every case.

              Feynman These are the errors from installing 2.1.1 in Multi-Node mode, and two or three retries all ended the same way. How do I install the pip-related dependencies?

                TAO
                The '...ignoring' messages above can be ignored.

                TAO

                Once all the pods are Running, stop reinstalling. If you cannot log in, first check the environment's networking: firewall, security groups, and so on.


                  `TASK [etcd : Configure | Check if etcd cluster is healthy] ***********************************************************************************************************************************************
                  Wednesday 09 September 2020 21:47:08 +0800 (0:00:00.108) 0:04:45.957 ***
                  fatal: [master]: FAILED! => {
                  "changed": false,
                  "cmd": "/usr/local/bin/etcdctl --endpoints=https://192.168.0.166:2379 cluster-health | grep -q 'cluster is healthy'",
                  "delta": "0:00:00.011060",
                  "end": "2020-09-09 21:47:08.939047",
                  "rc": 1,
                  "start": "2020-09-09 21:47:08.927987"
                  }

                  STDERR:

                  Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 192.168.0.166:2379: getsockopt: connection refused

                  error #0: dial tcp 192.168.0.166:2379: getsockopt: connection refused

                  MSG:

                  non-zero return code

                  `

                  And what kind of error is this? I get that the '...ignoring' ones can be skipped, but what is happening here?
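                  The "connection refused" on 192.168.0.166:2379 means nothing is listening on the etcd client port on the master when the check runs. A rough way to look into that, assuming etcd was set up as a systemd service (which is how this installer normally deploys it):

                  # Is the etcd service running on the master, and what do its last logs say?
                  systemctl status etcd
                  journalctl -u etcd --no-pager | tail -n 50

                  # Is anything listening on the client port?
                  ss -lntp | grep 2379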


                  This is what the install ended up like, but no matter how I try I cannot reach it; even the login page will not load. To state it again, the firewall is off, and curl from inside the k8s machines fails as well.

                  Windows workstation pinging the k8s cluster's master, node1 and node2 nodes:

                  Firewall status on the master node:

                  The three nodes pinging one another:

                  Swap is turned off everywhere as well:

                  The security policy has been changed too:

                  Finally, this is the state of the pods on my master node:

                  All nodes:

                  Yet after going through the entire procedure I still cannot access it.

                    fnag_huna 'disabled' means SELinux prompts nothing at all, while 'permissive' only prints warnings, so this should not be the problem.

                      TAO

                      Try curl 192.168.0.166:30880; if it returns normally, the login page should open.
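                      And if that curl hangs or is refused, a quick way to see where the chain breaks (service and deployment names assume the default ks-console setup):

                      # Is the NodePort service there, and does it have healthy endpoints?
                      kubectl get svc,ep -n kubesphere-system ks-console

                      # Anything telling in the console logs?
                      kubectl logs -n kubesphere-system deploy/ks-console --tail=50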
