TAO: Your machine is missing the Pip-related dependencies. 3.0 is already GA and its installation no longer needs Python or Pip; if you need an offline install, I suggest following the 3.0 offline installation guide: https://kubesphere.com.cn/forum/d/2034-kubekey-kubesphere-v3-0-0
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
After checking the logs:
```
TASK [common : Setting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]
TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]
TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.269034", "end": "2020-09-07 17:43:19.182103", "msg": "non-zero return code", "rc": 1, "start": "2020-09-07 17:43:18.913069", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}
...ignoring
TASK [common : Kubesphere | Setting mysql db pv size] **************************
skipping: [localhost]
TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system redis-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.264322", "end": "2020-09-07 17:43:19.580897", "msg": "non-zero return code", "rc": 1, "start": "2020-09-07 17:43:19.316575", "stderr": "Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found"], "stdout": "", "stdout_lines": []}
...ignoring
TASK [common : Kubesphere | Setting redis db pv size] **************************
skipping: [localhost]
TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.267661", "end": "2020-09-07 17:43:19.981956", "msg": "non-zero return code", "rc": 1, "start": "2020-09-07 17:43:19.714295", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
...ignoring
TASK [common : Kubesphere | Setting minio pv size] *****************************
skipping: [localhost]
TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system openldap-pvc-openldap-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.269183", "end": "2020-09-07 17:43:20.384573", "msg": "non-zero return code", "rc": 1, "start": "2020-09-07 17:43:20.115390", "stderr": "Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found"], "stdout": "", "stdout_lines": []}
...ignoring
TASK [common : Kubesphere | Setting openldap pv size] **************************
skipping: [localhost]
TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.266951", "end": "2020-09-07 17:43:20.786619", "msg": "non-zero return code", "rc": 1, "start": "2020-09-07 17:43:20.519668", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}
...ignoring
TASK [common : Kubesphere | Setting etcd pv size] ******************************
skipping: [localhost]
TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.266343", "end": "2020-09-07 17:43:21.193700", "msg": "non-zero return code", "rc": 1, "start": "2020-09-07 17:43:20.927357", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
...ignoring
TASK [common : Kubesphere | Setting redis ha pv size] **************************
```
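These PVC checks fail only because the claims have not been created yet, so the ...ignoring results are expected on a first install. If you want to confirm the storage side is healthy before blaming the installer, a minimal check with plain kubectl (nothing KubeSphere-specific is assumed) looks like this:

```bash
# List storage classes; the installer relies on a default StorageClass
# (here the OpenEBS local-pv provisioner) so that its PVCs can be bound.
kubectl get sc

# List the PVCs that exist in kubesphere-system and verify they reach Bound.
kubectl get pvc -n kubesphere-system
```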
In the end, accessing http://192.168.0.166:30880/ gets no response either.
kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-6f8f7fd457-qt6cj 1/1 Running 0 7h59m 192.168.0.166 master <none> <none>
kube-system calico-node-lgqcb 1/1 Running 0 7h59m 192.168.0.155 node1 <none> <none>
kube-system calico-node-qqjz8 1/1 Running 0 7h59m 192.168.0.144 node2 <none> <none>
kube-system calico-node-tvsfh 1/1 Running 1 7h59m 192.168.0.166 master <none> <none>
kube-system coredns-7f9d8dc6c8-k6dkg 1/1 Running 0 7h59m 10.233.70.1 master <none> <none>
kube-system dns-autoscaler-796f4ddddf-2f8mf 1/1 Running 0 7h59m 10.233.70.2 master <none> <none>
kube-system kube-apiserver-master 1/1 Running 0 8h 192.168.0.166 master <none> <none>
kube-system kube-controller-manager-master 1/1 Running 1 8h 192.168.0.166 master <none> <none>
kube-system kube-proxy-299jd 1/1 Running 0 8h 192.168.0.144 node2 <none> <none>
kube-system kube-proxy-5qjxd 1/1 Running 0 8h 192.168.0.166 master <none> <none>
kube-system kube-proxy-7r9p2 1/1 Running 0 8h 192.168.0.155 node1 <none> <none>
kube-system kube-scheduler-master 1/1 Running 1 8h 192.168.0.166 master <none> <none>
kube-system nodelocaldns-h577n 1/1 Running 0 7h59m 192.168.0.166 master <none> <none>
kube-system nodelocaldns-qnt28 1/1 Running 0 7h59m 192.168.0.155 node1 <none> <none>
kube-system nodelocaldns-sj8tp 1/1 Running 0 7h59m 192.168.0.144 node2 <none> <none>
kube-system openebs-localpv-provisioner-77fbd6858d-gpczv 1/1 Running 2 7h36m 10.233.90.2 node1 <none> <none>
kube-system openebs-ndm-ms2ps 1/1 Running 0 7h36m 192.168.0.155 node1 <none> <none>
kube-system openebs-ndm-n54r5 1/1 Running 0 7h23m 192.168.0.144 node2 <none> <none>
kube-system openebs-ndm-operator-59c75c96fc-4rhwv 1/1 Running 1 7h36m 10.233.90.3 node1 <none> <none>
kube-system tiller-deploy-79b566b5ff-8glxm 1/1 Running 0 7h59m 10.233.90.1 node1 <none> <none>
kubesphere-controls-system default-http-backend-5d464dd566-426kq 1/1 Running 0 7h25m 10.233.90.5 node1 <none> <none>
kubesphere-controls-system kubectl-admin-6c664db975-fbzh8 1/1 Running 0 7h25m 10.233.90.8 node1 <none> <none>
kubesphere-monitoring-system kube-state-metrics-566cdbcb48-jn9ll 4/4 Running 0 7h25m 10.233.90.7 node1 <none> <none>
kubesphere-monitoring-system node-exporter-4gxcq 2/2 Running 0 7h25m 192.168.0.144 node2 <none> <none>
kubesphere-monitoring-system node-exporter-f7b2m 2/2 Running 0 7h25m 192.168.0.166 master <none> <none>
kubesphere-monitoring-system node-exporter-hn9g9 2/2 Running 0 7h25m 192.168.0.155 node1 <none> <none>
kubesphere-monitoring-system prometheus-k8s-0 3/3 Running 1 7h25m 10.233.90.14 node1 <none> <none>
kubesphere-monitoring-system prometheus-k8s-1 3/3 Running 1 7h25m 10.233.90.13 node1 <none> <none>
kubesphere-monitoring-system prometheus-k8s-system-0 3/3 Running 1 7h25m 10.233.90.17 node1 <none> <none>
kubesphere-monitoring-system prometheus-k8s-system-1 3/3 Running 1 7h25m 10.233.90.18 node1 <none> <none>
kubesphere-monitoring-system prometheus-operator-6b97679cfd-kxtm7 1/1 Running 0 7h25m 10.233.90.6 node1 <none> <none>
kubesphere-system ks-account-596657f8c6-c97dv 1/1 Running 0 7h25m 10.233.70.9 master <none> <none>
kubesphere-system ks-apigateway-78bcdc8ffc-9nrnn 1/1 Running 0 7h25m 10.233.70.7 master <none> <none>
kubesphere-system ks-apiserver-5b548d7c5c-v45b2 1/1 Running 0 7h25m 10.233.70.8 master <none> <none>
kubesphere-system ks-console-78bcf96dbf-zqq59 1/1 Running 0 7h25m 10.233.70.11 master <none> <none>
kubesphere-system ks-controller-manager-696986f8d9-sndh2 1/1 Running 1 7h25m 10.233.70.10 master <none> <none>
kubesphere-system ks-installer-7d9fb945c7-dgxg5 1/1 Running 0 7h36m 10.233.90.4 node1 <none> <none>
kubesphere-system openldap-0 1/1 Running 0 7h26m 10.233.70.6 master <none> <none>
kubesphere-system redis-6fd6c6d6f9-pt5d8 1/1 Running 0 7h26m 10.233.70.5 master <none> <none>
All the pods are in Running state, but port 30880 is unreachable on every node. What on earth is going on?
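When every pod is Running but the console port does not answer, it is worth checking whether the ks-console service is actually exposed on NodePort 30880 and reachable from inside the cluster before suspecting the network. A rough troubleshooting sketch using only standard kubectl and curl (the ks-console service/deployment name matches the pod shown in the listing above; everything else is generic):

```bash
# Which NodePort does the console service actually expose?
kubectl get svc -n kubesphere-system ks-console

# Any startup errors in the console itself?
kubectl -n kubesphere-system logs deploy/ks-console --tail=50

# Try the service from inside the cluster first, then via the node IP.
curl -I "http://$(kubectl get svc -n kubesphere-system ks-console -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')"
curl -I http://192.168.0.166:30880
```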
The push refers to repository [192.168.0.166:5000/kubesphere/elasticsearch-oss]
c573321b5d86: Pushed
46cd2571f1c6: Pushed
fc56d8e86bb4: Pushed
743117a68886: Pushed
2e5badaeb57f: Pushed
32b15aee3e49: Pushed
9b0e1f384d5d: Retrying in 1 second
d69483a6face: Pushed
received unexpected HTTP status: 500 Internal Server Error
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json186586334: no space left on device
192.168.0.166:5000/k8scsi/csi-attacher:v2.0.0
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json999432357: no space left on device
The push refers to repository [192.168.0.166:5000/k8scsi/csi-attacher]
94f49fb5c15d: Retrying in 1 second
932da5156413: Retrying in 1 second
received unexpected HTTP status: 500 Internal Server Error
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json514450280: no space left on device
192.168.0.166:5000/k8scsi/csi-node-driver-registrar:v1.2.0
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json340647847: no space left on device
The push refers to repository [192.168.0.166:5000/k8scsi/csi-node-driver-registrar]
e242ebe3c0e7: Retrying in 1 second
932da5156413: Retrying in 1 second
received unexpected HTTP status: 500 Internal Server Error
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json173247938: no space left on device
192.168.0.166:5000/kubesphere/cloud-controller-manager:v1.4.0
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json383248953: no space left on device
The push refers to repository [192.168.0.166:5000/kubesphere/cloud-controller-manager]
7371592b8bed: Retrying in 1 second
68b0cbfdd0ed: Retrying in 1 second
73046094a9b8: Retrying in 1 second
received unexpected HTTP status: 500 Internal Server Error
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json114389572: no space left on device
192.168.0.166:5000/google-containers/k8s-dns-node-cache:1.15.5
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json034605779: no space left on device
The push refers to repository [192.168.0.166:5000/google-containers/k8s-dns-node-cache]
5d024027846e: Retrying in 1 second
a95807b0aa21: Retrying in 1 second
fe9a8b4f1dcc: Retrying in 1 second
received unexpected HTTP status: 500 Internal Server Error
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json765821462: no space left on device
192.168.0.166:5000/library/redis:5.0.5-alpine
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json243390077: no space left on device
The push refers to repository [192.168.0.166:5000/library/redis]
76ff8be8279a: Retrying in 1 second
9559709fdf7f: Retrying in 1 second
b499b26b07f7: Retrying in 1 second
1ac7839ac772: Retrying in 1 second
b34cd2e3555a: Retrying in 1 second
03901b4a2ea8: Waiting
received unexpected HTTP status: 500 Internal Server Error
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json796660152: no space left on device
192.168.0.166:5000/kubesphere/configmap-reload:v0.3.0
Error response from daemon: open /var/lib/docker/image/overlay2/.tmp-repositories.json859707831: no space left on device
The push refers to repository [192.168.0.166:5000/kubesphere/configmap-reload]
f78d3758f4e1: Retrying in 2 seconds
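The pushes are failing because the filesystem backing /var/lib/docker (and the local registry at 192.168.0.166:5000) is full. A minimal sketch for confirming and reclaiming space before re-running the image push, assuming the default Docker data root:

```bash
# Which filesystem is out of space?
df -h /var/lib/docker

# How much of it is Docker's own data (images, containers, build cache)?
docker system df

# Reclaim space; -a also removes images not referenced by any container.
docker system prune -a
```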
It failed again, and I'm about to lose my mind. Where exactly is this going wrong?
TAO: The error message states it clearly: you have run out of disk space.
TAO: Did you uninstall the KubeSphere you previously installed under root? If it prompts you to install, then try installing it.
TAO: Are you installing k8s or KubeSphere? This offline package installs k8s together with KubeSphere; all it needs is a clean environment. Also, have you turned off the firewall and SELinux?
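For reference, this is the usual way to turn both off on a CentOS/RHEL node before installing (assuming firewalld and the standard /etc/selinux/config; adjust for your distribution):

```bash
# Stop the firewall now and keep it off across reboots
systemctl stop firewalld && systemctl disable firewalld

# Switch SELinux to permissive immediately...
setenforce 0
# ...and disable it permanently so the setting survives a reboot
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# Verify
getenforce
```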
```
TASK [etcd : Configure | Check if etcd cluster is healthy] ***********************************************************************************************************************************************
Wednesday 09 September 2020 21:47:08 +0800 (0:00:00.108) 0:04:45.957 ***
fatal: [master]: FAILED! => {
"changed": false,
"cmd": "/usr/local/bin/etcdctl --endpoints=https://192.168.0.166:2379 cluster-health | grep -q 'cluster is healthy'",
"delta": "0:00:00.011060",
"end": "2020-09-09 21:47:08.939047",
"rc": 1,
"start": "2020-09-09 21:47:08.927987"
}
STDERR:
Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 192.168.0.166:2379: getsockopt: connection refused
error #0: dial tcp 192.168.0.166:2379: getsockopt: connection refused
MSG:
non-zero return code
```
Now what is this error? I know the tasks marked ...ignoring can be skipped, but what is actually happening here?
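The connection-refused message means nothing is listening on 2379 on the master, i.e. etcd itself has not started (or has crashed), so the health check rightly fails. A minimal sketch for looking at etcd on the master node, assuming the installer set etcd up as a systemd service (which this kind of install normally does):

```bash
# Is the etcd service running at all?
systemctl status etcd

# Recent etcd logs usually show why it refused connections
journalctl -u etcd --no-pager -n 50

# Is anything listening on the client port 2379?
ss -ltnp | grep 2379
```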
That is what the installation result looks like, but no matter how I try I cannot reach it; even the login page will not load. To repeat: the firewall is off, and curl from inside the k8s machines cannot reach it either.
Pinging the k8s cluster's master, node1 and node2 nodes from my Windows workstation
Firewall status on the master node
The three nodes pinging each other
Swap is disabled on all of them as well
The security policy has been changed too
Finally, this is the state of all the pods on my master node
All nodes
Yet after following the whole procedure I still cannot access it.
TAO: SELinux has to be set to disabled...