TASK [common : Kubesphere | Creating manifests] ********************************
changed: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'})
TASK [common : Kubesphere | Deploy minio] **************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-minio /etc/kubesphere/minio-ha -f /etc/kubesphere/custom-values-minio.yaml --set fullnameOverride=minio --namespace kubesphere-system --wait --timeout 1800\n", "delta": "0:30:03.512983", "end": "2019-12-30 01:32:38.488456", "msg": "non-zero return code", "rc": 1, "start": "2019-12-30 01:02:34.975473", "stderr": "Error: UPGRADE FAILED: timed out waiting for the condition", "stderr_lines": ["Error: UPGRADE FAILED: timed out waiting for the condition"], "stdout": "", "stdout_lines": []}
…ignoring
TASK [common : debug] **********************************************************
ok: [localhost] => {
"msg": [
"1. check the storage configuration and storage server",
"2. execute 'helm del --purge ks-minio'",
"3. Restart the installer pod in kubesphere-system namespace"
]
}
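The three steps suggested by the installer can be sketched as the following shell sequence. This is a sketch, not the installer's own recovery script: the `helm del --purge` syntax assumes Helm v2 (which this installer version uses), and the installer pod name `ks-installer-7987c659d6-wxfsp` is taken from the pod listing further down in this post.

```shell
# 1. Check the storage configuration and storage server:
#    unbound PVCs here usually mean a storage-class or backend problem.
kubectl get storageclass
kubectl get pvc -n kubesphere-system

# 2. Remove the failed release so the next attempt starts clean (Helm v2 syntax).
helm del --purge ks-minio

# 3. Restart the installer pod; its Deployment recreates it and the
#    installation playbook runs again from the top.
kubectl delete pod ks-installer-7987c659d6-wxfsp -n kubesphere-system
```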
TASK [common : fail] ***********************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "It is suggested to refer to the above methods for troubleshooting problems."}
PLAY RECAP *********************************************************************
localhost : ok=24 changed=18 unreachable=0 failed=1 skipped=66 rescued=0 ignored=1
[root@ks-allinone ~]# journalctl -xe
Dec 30 09:34:15 ks-allinone kubelet[14168]: E1230 09:34:15.607414 14168 pod_workers.go:190] Error syncing pod 8257092f-6048-4419-b9ad
Dec 30 09:34:16 ks-allinone kubelet[14168]: E1230 09:34:16.608442 14168 pod_workers.go:190] Error syncing pod 6526cc32-483a-4894-8191
Dec 30 09:34:17 ks-allinone kubelet[14168]: I1230 09:34:17.015825 14168 setters.go:73] Using node IP: "172.18.248.238"
Dec 30 09:34:21 ks-allinone kubelet[14168]: W1230 09:34:21.313814 14168 reflector.go:302] object-"istio-system"/"istio": watch of *v1
Dec 30 09:34:22 ks-allinone kubelet[14168]: I1230 09:34:22.803304 14168 prober.go:112] Readiness probe for "demo01-yo5naj-6b49d689c5-
Dec 30 09:34:23 ks-allinone kubelet[14168]: I1230 09:34:23.135132 14168 prober.go:112] Liveness probe for "demo01-yo5naj-6b49d689c5-s
Dec 30 09:34:24 ks-allinone dockerd[13850]: time="2019-12-30T09:34:24.790000331+08:00" level=info msg="shim containerd-shim started" ad
Dec 30 09:34:25 ks-allinone kubelet[14168]: W1230 09:34:25.421926 14168 reflector.go:302] object-"kubesphere-system"/"sample-bookinfo
Dec 30 09:34:26 ks-allinone kubelet[14168]: I1230 09:34:26.544329 14168 kubelet.go:1933] SyncLoop (PLEG): "demo01-krizt7-544f8444bf-9
Dec 30 09:34:26 ks-allinone kubelet[14168]: I1230 09:34:26.774169 14168 prober.go:112] Liveness probe for "demo01-krizt7-544f8444bf-9
Dec 30 09:34:27 ks-allinone kubelet[14168]: I1230 09:34:27.172338 14168 setters.go:73] Using node IP: "172.18.248.238"
Dec 30 09:34:27 ks-allinone kubelet[14168]: W1230 09:34:27.336924 14168 reflector.go:302] object-"kubesphere-devops-system"/"ks-gitla
Dec 30 09:34:29 ks-allinone kubelet[14168]: E1230 09:34:29.604934 14168 pod_workers.go:190] Error syncing pod 6526cc32-483a-4894-8191
Dec 30 09:34:30 ks-allinone kubelet[14168]: E1230 09:34:30.604099 14168 pod_workers.go:190] Error syncing pod 8257092f-6048-4419-b9ad
Dec 30 09:34:31 ks-allinone kubelet[14168]: W1230 09:34:31.095429 14168 reflector.go:302] object-"kubesphere-devops-system"/"ks-gitla
Dec 30 09:34:31 ks-allinone kubelet[14168]: I1230 09:34:31.812934 14168 prober.go:112] Readiness probe for "demo01-krizt7-544f8444bf-
Dec 30 09:34:32 ks-allinone kubelet[14168]: I1230 09:34:32.811098 14168 prober.go:112] Readiness probe for "demo01-yo5naj-6b49d689c5-
Dec 30 09:34:33 ks-allinone kubelet[14168]: I1230 09:34:33.132735 14168 prober.go:112] Liveness probe for "demo01-yo5naj-6b49d689c5-s
Dec 30 09:34:33 ks-allinone kubelet[14168]: I1230 09:34:33.132802 14168 kubelet.go:1966] SyncLoop (container unhealthy): "demo01-yo5n
Dec 30 09:34:33 ks-allinone kubelet[14168]: I1230 09:34:33.133659 14168 kuberuntime_manager.go:595] Container "xudongyang0718devops-j
Dec 30 09:34:33 ks-allinone kubelet[14168]: I1230 09:34:33.133717 14168 kuberuntime_container.go:581] Killing container "docker://5f0
Dec 30 09:34:36 ks-allinone kubelet[14168]: I1230 09:34:36.774225 14168 prober.go:112] Liveness probe for "demo01-krizt7-544f8444bf-9
Dec 30 09:34:36 ks-allinone kubelet[14168]: W1230 09:34:36.896948 14168 reflector.go:302] object-"demo-project"/"harbor-q160ne-harbor
Dec 30 09:34:37 ks-allinone kubelet[14168]: I1230 09:34:37.321914 14168 setters.go:73] Using node IP: "172.18.248.238"
Dec 30 09:34:41 ks-allinone kubelet[14168]: I1230 09:34:41.812885 14168 prober.go:112] Readiness probe for "demo01-krizt7-544f8444bf-
Dec 30 09:34:42 ks-allinone kubelet[14168]: I1230 09:34:42.799413 14168 prober.go:112] Readiness probe for "demo01-yo5naj-6b49d689c5-
Dec 30 09:34:44 ks-allinone kubelet[14168]: E1230 09:34:44.629646 14168 pod_workers.go:190] Error syncing pod 8257092f-6048-4419-b9ad
Dec 30 09:34:44 ks-allinone kubelet[14168]: E1230 09:34:44.630803 14168 pod_workers.go:190] Error syncing pod 6526cc32-483a-4894-8191
[root@ks-allinone ~]# kubectl get pod -n kubesphere-monitoring-system
NAME READY STATUS RESTARTS AGE
grafana-7cf848f4f7-zhxcr 1/1 Running 2 6d1h
kube-state-metrics-868fcf6b48-vl9z4 4/4 Running 8 9d
node-exporter-7b244 2/2 Running 4 10d
prometheus-k8s-0 3/3 Running 7 10d
prometheus-k8s-system-0 3/3 Running 7 10d
prometheus-operator-685bc484cb-7sh8t 1/1 Running 2 10d
[root@ks-allinone ~]# kubectl get pod -n kubesphere-system
NAME READY STATUS RESTARTS AGE
etcd-555778878f-6p5c9 1/1 Running 2 10d
ks-account-64ffdf4688-bgdzk 1/1 Running 1 4d18h
ks-apigateway-65dd54f989-pt6hr 1/1 Running 3 4d18h
ks-apiserver-5d98f5d7cb-xq569 1/1 Running 3 4d18h
ks-console-6f7f75bb48-5jt6t 1/1 Running 1 4d18h
ks-controller-manager-6dd9b76d75-qdgqq 1/1 Running 1 4d18h
ks-installer-7987c659d6-wxfsp 1/1 Running 6 10d
minio-8cd46c8d9-9bpkv 1/1 Running 2 10d
minio-make-bucket-job-tkbp4 0/1 Pending 0 32m
mysql-b5597d996-bwmdn 1/1 Running 2 10d
openldap-0 1/1 Running 2 10d
redis-5d4844b947-rdk2l 1/1 Running 2 10d
[root@ks-allinone ~]#
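Note that `minio-make-bucket-job-tkbp4` has been `Pending` for 32 minutes, which lines up with the 30-minute `--wait --timeout 1800` window of the failed helm command above; a job pod that never gets scheduled is the likely reason the upgrade timed out. A sketch of how one might inspect it (pod name taken from the listing above):

```shell
# The Events section at the bottom of describe shows the scheduler's reason
# for leaving the pod Pending (e.g. unbound PVC, insufficient resources).
kubectl describe pod minio-make-bucket-job-tkbp4 -n kubesphere-system

# An unbound PersistentVolumeClaim is a common cause on all-in-one installs.
kubectl get pvc -n kubesphere-system
```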