kumu

  1. Open the required ports in the firewalls or security groups between the machines, or allow all traffic
  2. Make sure the machines meet the installation requirements (CPU, memory, disk)
  3. If you run into problems, post the logs
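For point 2, a quick way to check a machine is to print its CPU, memory, and disk figures and compare them against the official requirements for your KubeSphere version (a minimal sketch; the commands only report the figures, the actual thresholds are not asserted here):

```shell
# Report the figures relevant to the install requirements (Linux, coreutils/procps).
echo "CPU cores : $(nproc)"
echo "Memory    : $(free -h | awk '/^Mem:/ {print $2}')"
echo "Free disk : $(df -h / | awk 'NR==2 {print $4}') on /"
```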
  • kumu replied to this post

    [root@node1 ~]# kubectl get pod -A
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system calico-kube-controllers-677cbc8557-zktgk 1/1 Running 2 19h
    kube-system calico-node-57zrc 1/1 Running 5 18h
    kube-system calico-node-wc59s 0/1 CrashLoopBackOff 29 18h
    kube-system calico-node-wjq9f 1/1 Running 2 19h
    kube-system coredns-79878cb9c9-6pntq 1/1 Running 2 19h
    kube-system coredns-79878cb9c9-wst8c 1/1 Running 2 19h
    kube-system kube-apiserver-node1 1/1 Running 3 19h
    kube-system kube-apiserver-node3 1/1 Running 6 18h
    kube-system kube-controller-manager-node1 1/1 Running 7 19h
    kube-system kube-controller-manager-node3 1/1 Running 6 18h
    kube-system kube-proxy-hlt7n 1/1 Running 6 19h
    kube-system kube-proxy-mdb7b 1/1 Running 4 19h
    kube-system kube-proxy-v6wsj 1/1 Running 6 18h
    kube-system kube-scheduler-node1 1/1 Running 6 19h
    kube-system kube-scheduler-node3 1/1 Running 4 18h
    kube-system metrics-server-98546f9bd-8qt4w 1/1 Running 7 18h
    kube-system nodelocaldns-56kxk 1/1 Running 4 19h
    kube-system nodelocaldns-m7z8p 1/1 Running 4 18h
    kube-system nodelocaldns-trpjx 1/1 Running 2 19h
    kube-system openebs-localpv-provisioner-5cd9579c5-r554k 0/1 ContainerCreating 0 18h
    kube-system openebs-ndm-588jj 1/1 Running 4 19h
    kube-system openebs-ndm-cjjpv 1/1 Running 5 18h
    kube-system openebs-ndm-mjvv6 0/1 CrashLoopBackOff 27 18h
    kube-system openebs-ndm-operator-6656f85b86-9q476 1/1 Running 3 19h
    kube-system snapshot-controller-0 0/1 ContainerCreating 0 49m
    kubesphere-system ks-installer-78745765f5-cl7qq 1/1 Running 2 19h
    kubesphere-system minio-8ccf8886f-2n8pg 0/1 Pending 0 46m
    kubesphere-system openldap-0 0/1 Pending 0 47m
    kubesphere-system redis-ha-haproxy-765c9f6946-62bd5 1/1 Running 0 47m
    kubesphere-system redis-ha-haproxy-765c9f6946-b6hgw 0/1 Init:0/1 0 47m
    kubesphere-system redis-ha-haproxy-765c9f6946-xnwcx 1/1 Running 2 47m
    kubesphere-system redis-ha-server-0 0/2 Pending 0 47m
    [root@node1 ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
    2021-01-05T01:25:20-05:00 INFO : shell-operator latest
    2021-01-05T01:25:20-05:00 INFO : HTTP SERVER Listening on 0.0.0.0:9115
    2021-01-05T01:25:20-05:00 INFO : Use temporary dir: /tmp/shell-operator
    2021-01-05T01:25:20-05:00 INFO : Initialize hooks manager ...
    2021-01-05T01:25:20-05:00 INFO : Search and load hooks ...
    2021-01-05T01:25:20-05:00 INFO : Load hook config from '/hooks/kubesphere/installRunner.py'
    2021-01-05T01:25:50-05:00 INFO : Load hook config from '/hooks/kubesphere/schedule.sh'
    2021-01-05T01:25:50-05:00 INFO : Initializing schedule manager ...
    2021-01-05T01:25:50-05:00 INFO : KUBE Init Kubernetes client
    2021-01-05T01:25:50-05:00 INFO : KUBE-INIT Kubernetes client is configured successfully
    2021-01-05T01:25:50-05:00 INFO : MAIN: run main loop
    2021-01-05T01:25:50-05:00 INFO : MAIN: add onStartup tasks
    2021-01-05T01:25:50-05:00 INFO : QUEUE add all HookRun@OnStartup
    2021-01-05T01:25:50-05:00 INFO : Running schedule manager ...
    2021-01-05T01:25:50-05:00 INFO : MSTOR Create new metric shell_operator_live_ticks
    2021-01-05T01:25:50-05:00 INFO : MSTOR Create new metric shell_operator_tasks_queue_length
    2021-01-05T01:25:50-05:00 INFO : GVR for kind 'ClusterConfiguration' is installer.kubesphere.io/v1alpha1, Resource=clusterconfigurations
    2021-01-05T01:25:51-05:00 INFO : EVENT Kube event '6f808fc5-9f76-4357-a3a5-73cf15d27688'
    2021-01-05T01:25:51-05:00 INFO : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
    2021-01-05T01:25:53-05:00 INFO : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
    2021-01-05T01:25:53-05:00 INFO : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
    [WARNING]: No inventory was parsed, only implicit localhost is available
    [WARNING]: provided hosts list is empty, only localhost is available. Note that
    the implicit localhost does not match 'all'

    PLAY [localhost] ***************************************************************

    TASK [download : include_tasks] ************************************************
    skipping: [localhost]

    TASK [download : Download items] ***********************************************
    skipping: [localhost]

    TASK [download : Sync container] ***********************************************
    skipping: [localhost]

    TASK [kubesphere-defaults : Configure defaults] ********************************
    ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
    }

    TASK [preinstall : check k8s version] ******************************************
    changed: [localhost]

    TASK [preinstall : init k8s version] *******************************************
    ok: [localhost]

    TASK [preinstall : Stop if kubernetes version is nonsupport] *******************
    ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
    }

    TASK [preinstall : check storage class] ****************************************
    changed: [localhost]

    TASK [preinstall : Stop if StorageClass was not found] *************************
    skipping: [localhost]

    TASK [preinstall : check default storage class] ********************************
    changed: [localhost]

    TASK [preinstall : Stop if defaultStorageClass was not found] ******************
    ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
    }

    TASK [preinstall : Kubesphere | Checking kubesphere component] *****************
    changed: [localhost]

    TASK [preinstall : Kubesphere | Get kubesphere component version] **************
    skipping: [localhost]

    TASK [preinstall : Kubesphere | Get kubesphere component version] **************
    skipping: [localhost] => (item=ks-openldap)
    skipping: [localhost] => (item=ks-redis)
    skipping: [localhost] => (item=ks-minio)
    skipping: [localhost] => (item=ks-openpitrix)
    skipping: [localhost] => (item=elasticsearch-logging)
    skipping: [localhost] => (item=elasticsearch-logging-curator)
    skipping: [localhost] => (item=istio)
    skipping: [localhost] => (item=istio-init)
    skipping: [localhost] => (item=jaeger-operator)
    skipping: [localhost] => (item=ks-jenkins)
    skipping: [localhost] => (item=ks-sonarqube)
    skipping: [localhost] => (item=logging-fluentbit-operator)
    skipping: [localhost] => (item=uc)
    skipping: [localhost] => (item=metrics-server)

    PLAY RECAP *********************************************************************
    localhost : ok=8 changed=4 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0

    [WARNING]: No inventory was parsed, only implicit localhost is available
    [WARNING]: provided hosts list is empty, only localhost is available. Note that
    the implicit localhost does not match ‘all’

    PLAY [localhost] ***************************************************************

    TASK [download : include_tasks] ************************************************
    skipping: [localhost]

    TASK [download : Download items] ***********************************************
    skipping: [localhost]

    TASK [download : Sync container] ***********************************************
    skipping: [localhost]

    TASK [kubesphere-defaults : Configure defaults] ********************************
    ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
    }

    TASK [metrics-server : Metrics-Server | Checking old installation files] *******
    skipping: [localhost]

    TASK [metrics-server : Metrics-Server | deleting old metrics-server] ***********
    skipping: [localhost]

    TASK [metrics-server : Metrics-Server | deleting old metrics-server files] *****
    skipping: [localhost] => (item=metrics-server)

    TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
    skipping: [localhost]

    TASK [metrics-server : Metrics-Server | Creating manifests] ********************
    skipping: [localhost] => (item={'name': 'values', 'file': 'values.yaml', 'type': 'config'})

    TASK [metrics-server : Metrics-Server | Check Metrics-Server] ******************
    skipping: [localhost]

    TASK [metrics-server : Metrics-Server | Installing metrics-server] *************
    skipping: [localhost]

    TASK [metrics-server : Metrics-Server | Installing metrics-server retry] *******
    skipping: [localhost]

    TASK [metrics-server : Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready] ***
    skipping: [localhost]

    TASK [metrics-server : Metrics-Server | import metrics-server status] **********
    skipping: [localhost]

    PLAY RECAP *********************************************************************
    localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0

    [WARNING]: No inventory was parsed, only implicit localhost is available
    [WARNING]: provided hosts list is empty, only localhost is available. Note that
    the implicit localhost does not match ‘all’

    PLAY [localhost] ***************************************************************

    TASK [download : include_tasks] ************************************************
    skipping: [localhost]

    TASK [download : Download items] ***********************************************
    skipping: [localhost]

    TASK [download : Sync container] ***********************************************
    skipping: [localhost]

    TASK [kubesphere-defaults : Configure defaults] ********************************
    ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
    }

    TASK [common : Kubesphere | Check kube-node-lease namespace] *******************
    changed: [localhost]

    TASK [common : KubeSphere | Get system namespaces] *****************************
    ok: [localhost]

    TASK [common : set_fact] *******************************************************
    ok: [localhost]

    TASK [common : debug] **********************************************************
    ok: [localhost] => {
    "msg": [
    "kubesphere-system",
    "kubesphere-controls-system",
    "kubesphere-monitoring-system",
    "kube-node-lease",
    "kubesphere-logging-system",
    "openpitrix-system",
    "kubesphere-devops-system",
    "istio-system",
    "kubesphere-alerting-system",
    "istio-system"
    ]
    }

    TASK [common : KubeSphere | Create kubesphere namespace] ***********************
    changed: [localhost] => (item=kubesphere-system)
    changed: [localhost] => (item=kubesphere-controls-system)
    changed: [localhost] => (item=kubesphere-monitoring-system)
    changed: [localhost] => (item=kube-node-lease)
    changed: [localhost] => (item=kubesphere-logging-system)
    changed: [localhost] => (item=openpitrix-system)
    changed: [localhost] => (item=kubesphere-devops-system)
    changed: [localhost] => (item=istio-system)
    changed: [localhost] => (item=kubesphere-alerting-system)
    changed: [localhost] => (item=istio-system)

    TASK [common : KubeSphere | Labeling system-workspace] *************************
    changed: [localhost] => (item=default)
    changed: [localhost] => (item=kube-public)
    changed: [localhost] => (item=kube-system)
    changed: [localhost] => (item=kubesphere-system)
    changed: [localhost] => (item=kubesphere-controls-system)
    changed: [localhost] => (item=kubesphere-monitoring-system)
    changed: [localhost] => (item=kube-node-lease)
    changed: [localhost] => (item=kubesphere-logging-system)
    changed: [localhost] => (item=openpitrix-system)
    changed: [localhost] => (item=kubesphere-devops-system)
    changed: [localhost] => (item=istio-system)
    changed: [localhost] => (item=kubesphere-alerting-system)
    changed: [localhost] => (item=istio-system)

    TASK [common : KubeSphere | Create ImagePullSecrets] ***************************
    changed: [localhost] => (item=default)
    changed: [localhost] => (item=kube-public)
    changed: [localhost] => (item=kube-system)
    changed: [localhost] => (item=kubesphere-system)
    changed: [localhost] => (item=kubesphere-controls-system)
    changed: [localhost] => (item=kubesphere-monitoring-system)
    changed: [localhost] => (item=kube-node-lease)
    changed: [localhost] => (item=kubesphere-logging-system)
    changed: [localhost] => (item=openpitrix-system)
    changed: [localhost] => (item=kubesphere-devops-system)
    changed: [localhost] => (item=istio-system)
    changed: [localhost] => (item=kubesphere-alerting-system)
    changed: [localhost] => (item=istio-system)

    TASK [common : Kubesphere | Label namespace for network policy] ****************
    changed: [localhost]

    TASK [common : KubeSphere | Getting kubernetes master num] *********************
    changed: [localhost]

    TASK [common : KubeSphere | Setting master num] ********************************
    ok: [localhost]

    TASK [common : Kubesphere | Getting common component installation files] *******
    changed: [localhost] => (item=common)
    changed: [localhost] => (item=ks-crds)

    TASK [common : KubeSphere | Create KubeSphere crds] ****************************
    changed: [localhost]

    TASK [common : KubeSphere | Recreate KubeSphere crds] **************************
    changed: [localhost]

    TASK [common : KubeSphere | check k8s version] *********************************
    changed: [localhost]

    TASK [common : Kubesphere | Getting common component installation files] *******
    changed: [localhost] => (item=snapshot-controller)

    TASK [common : Kubesphere | Creating snapshot controller values] ***************
    changed: [localhost] => (item={'name': 'custom-values-snapshot-controller', 'file': 'custom-values-snapshot-controller.yaml'})

    TASK [common : Kubesphere | Remove old snapshot crd] ***************************
    changed: [localhost]

    TASK [common : Kubesphere | Deploy snapshot controller] ************************
    changed: [localhost]

    TASK [common : Kubesphere | Checking openpitrix common component] **************
    changed: [localhost]

    TASK [common : include_tasks] **************************************************
    skipping: [localhost] => (item={'op': 'openpitrix-db', 'ks': 'mysql-pvc'})
    skipping: [localhost] => (item={'op': 'openpitrix-etcd', 'ks': 'etcd-pvc'})

    TASK [common : Getting PersistentVolumeName (mysql)] ***************************
    skipping: [localhost]

    TASK [common : Getting PersistentVolumeSize (mysql)] ***************************
    skipping: [localhost]

    TASK [common : Setting PersistentVolumeName (mysql)] ***************************
    skipping: [localhost]

    TASK [common : Setting PersistentVolumeSize (mysql)] ***************************
    skipping: [localhost]

    TASK [common : Getting PersistentVolumeName (etcd)] ****************************
    skipping: [localhost]

    TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
    skipping: [localhost]

    TASK [common : Setting PersistentVolumeName (etcd)] ****************************
    skipping: [localhost]

    TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.404304", "end": "2021-01-05 01:29:39.511053", "msg": "non-zero return code", "rc": 1, "start": "2021-01-05 01:29:39.106749", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}
    ...ignoring

    TASK [common : Kubesphere | Setting mysql db pv size] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system redis-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.308463", "end": "2021-01-05 01:29:40.576754", "msg": "non-zero return code", "rc": 1, "start": "2021-01-05 01:29:40.268291", "stderr": "Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found"], "stdout": "", "stdout_lines": []}
    ...ignoring

    TASK [common : Kubesphere | Setting redis db pv size] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.565084", "end": "2021-01-05 01:29:42.187760", "msg": "non-zero return code", "rc": 1, "start": "2021-01-05 01:29:41.622676", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
    ...ignoring

    TASK [common : Kubesphere | Setting minio pv size] *****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system openldap-pvc-openldap-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.135581", "end": "2021-01-05 01:29:42.938780", "msg": "non-zero return code", "rc": 1, "start": "2021-01-05 01:29:42.803199", "stderr": "Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found"], "stdout": "", "stdout_lines": []}
    ...ignoring

    TASK [common : Kubesphere | Setting openldap pv size] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.130601", "end": "2021-01-05 01:29:43.813909", "msg": "non-zero return code", "rc": 1, "start": "2021-01-05 01:29:43.683308", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}
    ...ignoring

    TASK [common : Kubesphere | Setting etcd pv size] ******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.655197", "end": "2021-01-05 01:29:45.485754", "msg": "non-zero return code", "rc": 1, "start": "2021-01-05 01:29:44.830557", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
    ...ignoring

    TASK [common : Kubesphere | Setting redis ha pv size] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check es-master PersistentVolumeClaim] *************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-logging-system data-elasticsearch-logging-discovery-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.287793", "end": "2021-01-05 01:29:47.307024", "msg": "non-zero return code", "rc": 1, "start": "2021-01-05 01:29:47.019231", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-elasticsearch-logging-discovery-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-elasticsearch-logging-discovery-0\" not found"], "stdout": "", "stdout_lines": []}
    ...ignoring

    TASK [common : Kubesphere | Setting es master pv size] *************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check es data PersistentVolumeClaim] ***************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-logging-system data-elasticsearch-logging-data-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.410326", "end": "2021-01-05 01:29:48.675401", "msg": "non-zero return code", "rc": 1, "start": "2021-01-05 01:29:48.265075", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-elasticsearch-logging-data-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-elasticsearch-logging-data-0\" not found"], "stdout": "", "stdout_lines": []}
    ...ignoring

    TASK [common : Kubesphere | Setting es data pv size] ***************************
    skipping: [localhost]

    TASK [common : Kubesphere | Creating common component manifests] ***************
    changed: [localhost] => (item={'path': 'etcd', 'file': 'etcd.yaml'})
    changed: [localhost] => (item={'name': 'mysql', 'file': 'mysql.yaml'})
    changed: [localhost] => (item={'path': 'redis', 'file': 'redis.yaml'})

    TASK [common : Kubesphere | Creating mysql sercet] *****************************
    changed: [localhost]

    TASK [common : Kubesphere | Deploying etcd and mysql] **************************
    skipping: [localhost] => (item=etcd.yaml)
    skipping: [localhost] => (item=mysql.yaml)

    TASK [common : Kubesphere | Getting minio installation files] ******************
    skipping: [localhost] => (item=minio-ha)

    TASK [common : Kubesphere | Creating manifests] ********************************
    skipping: [localhost] => (item={'name': 'custom-values-minio', 'file': 'custom-values-minio.yaml'})

    TASK [common : Kubesphere | Check minio] ***************************************
    skipping: [localhost]

    TASK [common : Kubesphere | Deploy minio] **************************************
    skipping: [localhost]

    TASK [common : debug] **********************************************************
    skipping: [localhost]

    TASK [common : fail] ***********************************************************
    skipping: [localhost]

    TASK [common : Kubesphere | create minio config directory] *********************
    skipping: [localhost]

    TASK [common : Kubesphere | Creating common component manifests] ***************
    skipping: [localhost] => (item={'path': '/root/.config/rclone', 'file': 'rclone.conf'})

    TASK [common : include_tasks] **************************************************
    skipping: [localhost] => (item=helm)
    skipping: [localhost] => (item=vmbased)

    TASK [common : Kubesphere | import minio status] *******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check ha-redis] ************************************
    skipping: [localhost]

    TASK [common : Kubesphere | Getting redis installation files] ******************
    skipping: [localhost] => (item=redis-ha)

    TASK [common : Kubesphere | Creating manifests] ********************************
    skipping: [localhost] => (item={'name': 'custom-values-redis', 'file': 'custom-values-redis.yaml'})

    TASK [common : Kubesphere | Check old redis status] ****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Delete and backup old redis svc] *******************
    skipping: [localhost]

    TASK [common : Kubesphere | Deploying redis] ***********************************
    skipping: [localhost]

    TASK [common : Kubesphere | Getting redis PodIp] *******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Creating redis migration script] *******************
    skipping: [localhost] => (item={'path': '/etc/kubesphere', 'file': 'redisMigrate.py'})

    TASK [common : Kubesphere | Check redis-ha status] *****************************
    skipping: [localhost]

    TASK [common : ks-logging | Migrating redis data] ******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Disable old redis] *********************************
    skipping: [localhost]

    TASK [common : Kubesphere | Deploying redis] ***********************************
    skipping: [localhost] => (item=redis.yaml)

    TASK [common : Kubesphere | import redis status] *******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Getting openldap installation files] ***************
    skipping: [localhost] => (item=openldap-ha)

    TASK [common : Kubesphere | Creating manifests] ********************************
    skipping: [localhost] => (item={'name': 'custom-values-openldap', 'file': 'custom-values-openldap.yaml'})

    TASK [common : Kubesphere | Check old openldap status] *************************
    skipping: [localhost]

    TASK [common : KubeSphere | Shutdown ks-account] *******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
    skipping: [localhost]

    TASK [common : Kubesphere | Check openldap] ************************************
    skipping: [localhost]

    TASK [common : Kubesphere | Deploy openldap] ***********************************
    skipping: [localhost]

    TASK [common : Kubesphere | Load old openldap data] ****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check openldap-ha status] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Get openldap-ha pod list] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Get old openldap data] *****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Migrating openldap data] ***************************
    skipping: [localhost]

    TASK [common : Kubesphere | Disable old openldap] ******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Restart openldap] **********************************
    skipping: [localhost]

    TASK [common : KubeSphere | Restarting ks-account] *****************************
    skipping: [localhost]

    TASK [common : Kubesphere | import openldap status] ****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check ha-redis] ************************************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm list -n kubesphere-system | grep \"ks-redis\"\n", "delta": "0:00:00.329842", "end": "2021-01-05 01:30:02.598401", "msg": "non-zero return code", "rc": 1, "start": "2021-01-05 01:30:02.268559", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
    ...ignoring

    TASK [common : Kubesphere | Getting redis installation files] ******************
    changed: [localhost] => (item=redis-ha)

    TASK [common : Kubesphere | Creating manifests] ********************************
    changed: [localhost] => (item={'name': 'custom-values-redis', 'file': 'custom-values-redis.yaml'})

    TASK [common : Kubesphere | Check old redis status] ****************************
    changed: [localhost]

    TASK [common : Kubesphere | Delete and backup old redis svc] *******************
    skipping: [localhost]

    TASK [common : Kubesphere | Deploying redis] ***********************************
    changed: [localhost]

    TASK [common : Kubesphere | Getting redis PodIp] *******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Creating redis migration script] *******************
    skipping: [localhost] => (item={'path': '/etc/kubesphere', 'file': 'redisMigrate.py'})

    TASK [common : Kubesphere | Check redis-ha status] *****************************
    skipping: [localhost]

    TASK [common : ks-logging | Migrating redis data] ******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Disable old redis] *********************************
    skipping: [localhost]

    TASK [common : Kubesphere | Deploying redis] ***********************************
    skipping: [localhost] => (item=redis.yaml)

    TASK [common : Kubesphere | import redis status] *******************************
    changed: [localhost]

    TASK [common : Kubesphere | Getting openldap installation files] ***************
    changed: [localhost] => (item=openldap-ha)

    TASK [common : Kubesphere | Creating manifests] ********************************
    changed: [localhost] => (item={'name': 'custom-values-openldap', 'file': 'custom-values-openldap.yaml'})

    TASK [common : Kubesphere | Check old openldap status] *************************
    changed: [localhost]

    TASK [common : KubeSphere | Shutdown ks-account] *******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
    skipping: [localhost]

    TASK [common : Kubesphere | Check openldap] ************************************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm list -n kubesphere-system | grep \"ks-openldap\"\n", "delta": "0:00:00.497047", "end": "2021-01-05 01:31:28.690847", "msg": "non-zero return code", "rc": 1, "start": "2021-01-05 01:31:28.193800", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
    ...ignoring

    TASK [common : Kubesphere | Deploy openldap] ***********************************
    changed: [localhost]

    TASK [common : Kubesphere | Load old openldap data] ****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Check openldap-ha status] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Get openldap-ha pod list] **************************
    skipping: [localhost]

    TASK [common : Kubesphere | Get old openldap data] *****************************
    skipping: [localhost]

    TASK [common : Kubesphere | Migrating openldap data] ***************************
    skipping: [localhost]

    TASK [common : Kubesphere | Disable old openldap] ******************************
    skipping: [localhost]

    TASK [common : Kubesphere | Restart openldap] **********************************
    skipping: [localhost]

    TASK [common : KubeSphere | Restarting ks-account] *****************************
    skipping: [localhost]

    TASK [common : Kubesphere | import openldap status] ****************************
    changed: [localhost]

    TASK [common : Kubesphere | Getting minio installation files] ******************
    changed: [localhost] => (item=minio-ha)

    TASK [common : Kubesphere | Creating manifests] ********************************
    changed: [localhost] => (item={'name': 'custom-values-minio', 'file': 'custom-values-minio.yaml'})

    TASK [common : Kubesphere | Check minio] ***************************************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm list -n kubesphere-system | grep \"ks-minio\"\n", "delta": "0:00:00.694638", "end": "2021-01-05 01:32:33.662385", "msg": "non-zero return code", "rc": 1, "start": "2021-01-05 01:32:32.967747", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
    ...ignoring

    TASK [common : Kubesphere | Deploy minio] **************************************
    fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-minio /kubesphere/kubesphere/minio-ha -f /kubesphere/kubesphere/custom-values-minio.yaml --set fullnameOverride=minio --namespace kubesphere-system --wait --timeout 1800s\n", "delta": "0:30:08.250062", "end": "2021-01-05 02:02:42.506954", "msg": "non-zero return code", "rc": 1, "start": "2021-01-05 01:32:34.256892", "stderr": "Error: timed out waiting for the condition", "stderr_lines": ["Error: timed out waiting for the condition"], "stdout": "Release \"ks-minio\" does not exist. Installing it now.", "stdout_lines": ["Release \"ks-minio\" does not exist. Installing it now."]}
    ...ignoring

    TASK [common : debug] **********************************************************
    ok: [localhost] => {
    "msg": [
    "1. check the storage configuration and storage server",
    "2. make sure the DNS address in /etc/resolv.conf is available",
    "3. execute 'kubectl logs -n kubesphere-system -l job-name=minio-make-bucket-job' to watch logs",
    "4. execute 'helm -n kubesphere-system uninstall ks-minio && kubectl -n kubesphere-system delete job minio-make-bucket-job'",
    "5. Restart the installer pod in kubesphere-system namespace"
    ]
    }

    TASK [common : fail] ***********************************************************
    fatal: [localhost]: FAILED! => {"changed": false, "msg": "It is suggested to refer to the above methods for troubleshooting problems."}

    PLAY RECAP *********************************************************************
    localhost : ok=47 changed=41 unreachable=0 failed=1 skipped=77 rescued=0 ignored=12

      kumu
      What exactly was the problem? Could you share some pointers?

      kumu Please open a separate post, describe the problem clearly, and paste the error messages.

        It was an operating-system problem. I tried CentOS 7 (CentOS-7-x86_64-Minimal-1810.iso) many times without success, with the stock 3.10 kernel as well as 4.4 and 5.8.
        This time I switched to Ubuntu 18.04 (ubuntu-18.04.2-live-server-amd64.iso), kernel Linux 4.15.0-128-generic,
        and the installation succeeded, though a few components still have problems.

        I installed on VMs: three VirtualBox machines, each with 10 GB of RAM and 8 CPU cores.
        Setting up this environment has eaten a lot of time without success; every attempt takes a whole day...

        Cauchy So it really does fail on VirtualBox. My goodness, so much wasted time...
        The road of learning is a lonely one...
        What I don't understand is that the CPU core count and memory shown inside the VirtualBox VMs match exactly what I configured.

          5 days later

          Solved.

          Adding nodes offline on Ubuntu 18.04 with ./kk add nodes; the original cluster uses the flannel network.
          [Problem description]
          1. The ./kk add nodes installation log reported success.
          2. kube-flannel on the new node stayed in CrashLoopBackOff, with the following logs:

          bglab@master:~/csz$ kubectl logs kube-flannel-ds-7kcfr -n kube-system
          I0113 02:34:56.117311       1 main.go:514] Determining IP address of default interface
          E0113 02:34:56.117589       1 main.go:202] Failed to find any valid interface to use: failed to get default interface: Unable to find default route

          A possible approach found online: https://blog.csdn.net/qingyafan/article/details/93519196
          Troubleshooting process:
          Method 1: check which NIC the flannel config uses, and pin the NIC by adding the parameter - --iface=ens32 to the config.
          Original startup command in the yaml file:

            containers:
            - args:
              - --ip-masq
              - --kube-subnet-mgr
              command:
              - /opt/bin/flanneld

          (1) Tried editing the pod's yaml directly:
          kubectl edit pod kube-flannel-ds-7kcfr -n kube-system -o yaml
          It turns out a pod's configuration cannot be modified in place this way.
          (2) Tried editing the DaemonSet's yaml:
          kubectl edit daemonset kube-flannel-ds -n kube-system -o yaml
          But this would also affect flannel on the cluster's existing hosts, so this path was a dead end.
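          For reference, pinning the interface (had the DaemonSet-wide change been acceptable) would amount to appending an `--iface` flag to the container args shown above, e.g. with a JSON patch. This is only a sketch: `ens32` is an example NIC name, and, as noted, the patch would change the DaemonSet for every node.

```shell
# Build a JSON patch that appends --iface=<nic> to the flannel container's
# args. NIC is an example name -- use the node's actual default interface.
NIC="ens32"
PATCH="[{\"op\": \"add\", \"path\": \"/spec/template/spec/containers/0/args/-\", \"value\": \"--iface=${NIC}\"}]"
printf '%s\n' "$PATCH"
# To apply (note: this affects ALL nodes running the DaemonSet):
#   kubectl -n kube-system patch daemonset kube-flannel-ds --type=json -p "$PATCH"
```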

          Method 2: the other hosts can find the default NIC, so why can't this one?
          From an original cluster host, ping www.baidu.com succeeds.
          From the new host, ping www.baidu.com fails.

          So I began to suspect the NIC configuration; on Ubuntu 18.04 the NIC config lives under /etc/netplan.
          Original config:

                   eno2:
                       addresses:
                       - 10.34.76.242/24
                       #gateway4: 10.34.76.254
                       nameservers:
                           addresses:
                           - 8.8.8.8

          New config:

                   eno2:
                       addresses:
                       - 10.34.76.242/24
                       gateway4: 10.34.76.254
                       nameservers:
                           addresses:
                           - 8.8.8.8  
          After uncommenting gateway4, ping www.baidu.com succeeded; I then deleted the kube-flannel-ds pod so it would be recreated, and it started successfully.
          1 month later

          kumu It is also quite likely a NIC problem, caused by the two NICs talking to each other directly. Out of habit I give each VM two NICs: one host-only and one bridged for internet access. The LAN router caps throughput at about 10 MB/s, so copying files to the VM over the host-only NIC is faster.

          kumu
          48 GB is more than enough to run KubeSphere itself; resource sizing for the actual workloads has to be planned according to your real usage.

          Can kubesphere-all-v3.0.0-offline-linux-amd64.tar.gz be used for an online upgrade? Is there a tutorial for that?

          After installing the environment offline, deploying Bookinfo as described in the tutorial fails; it seems to be stuck pulling these images without success. Does an account/password still need to be configured somewhere?

          The bookinfo demo images are as follows:

              - image: kubesphere/examples-bookinfo-details-v1:1.13.0
              - image: kubesphere/examples-bookinfo-productpage-v1:1.13.0
              - image: kubesphere/examples-bookinfo-ratings-v1:1.13.0
              - image: kubesphere/examples-bookinfo-reviews-v1:1.13.0

          Check whether your registry is missing these images.

            zackzhang The self-signed image registry was created with kk. If the bookinfo images are missing from this registry, won't they be pulled from Docker Hub? And how do I get the bookinfo images into the registry?
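            One way to get them in (a sketch, not an official kk procedure): pull the images on a host with internet access, retag them for the private registry, and push. `dockerhub.kubekey.local` is assumed here as kk's default self-signed registry address; substitute yours, and make sure the Docker daemon trusts the registry's certificate (or lists it under insecure-registries). The block defaults to a dry run that only prints the commands; set `DRY_RUN=0` to actually execute them.

```shell
# Mirror the bookinfo demo images into a private registry.
# REGISTRY is assumed to be kk's default self-signed registry address.
REGISTRY="${REGISTRY:-dockerhub.kubekey.local}"
DRY_RUN="${DRY_RUN:-1}"   # 1 = just print the commands; 0 = run them

run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

for img in \
    kubesphere/examples-bookinfo-details-v1:1.13.0 \
    kubesphere/examples-bookinfo-productpage-v1:1.13.0 \
    kubesphere/examples-bookinfo-ratings-v1:1.13.0 \
    kubesphere/examples-bookinfo-reviews-v1:1.13.0
do
    run docker pull "$img"                     # fetch from Docker Hub
    run docker tag "$img" "$REGISTRY/$img"     # retag for the private registry
    run docker push "$REGISTRY/$img"           # push into the registry
done
```

            After pushing, the Bookinfo deployment should find the images under the registry-prefixed names the cluster is configured to use.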