Below is the Mixer log. Can you tell what the cause is from it?

Mixer started with

MaxMessageSize: 1048576

MaxConcurrentStreams: 1024

APIWorkerPoolSize: 1024

AdapterWorkerPoolSize: 1024

APIPort: 9091

APIAddress: unix:///sock/mixer.socket

MonitoringPort: 15014

EnableProfiling: true

SingleThreaded: false

NumCheckCacheEntries: 1500000

ConfigStoreURL: mcp://istio-galley.istio-system.svc:9901

CertificateFile: /etc/certs/cert-chain.pem

KeyFile: /etc/certs/key.pem

CACertificateFile: /etc/certs/root-cert.pem

ConfigDefaultNamespace: istio-system

ConfigWaitTimeout: 2m0s

LoggingOptions: log.Options{OutputPaths:[]string{"stdout"}, ErrorOutputPaths:[]string{"stderr"}, RotateOutputPath:"", RotationMaxSize:104857600, RotationMaxAge:30, RotationMaxBackups:1000, JSONEncoding:false, LogGrpc:true, outputLevels:"default:info", logCallers:"", stackTraceLevels:"default:none"}

TracingOptions: tracing.Options{ZipkinURL:"http://jaeger-collector.istio-system.svc:9411/api/v1/spans", JaegerURL:"", LogTraceSpans:false, SamplingRate:0}

IntrospectionOptions: ctrlz.Options{Port:0x2694, Address:"localhost"}

UseTemplateCRDs: true

LoadSheddingOptions: loadshedding.Options{Mode:2, AverageLatencyThreshold:100000000, SamplesPerSecond:1.7976931348623157e+308, SampleHalfLife:1000000000, LatencyEnforcementThreshold:100, MaxRequestsPerSecond:0, BurstSize:0}

UseAdapterCRDs: false

2021-01-21T14:24:52.591100Z info mcp Requesting following collections:

2021-01-21T14:24:52.591135Z info mcp [0] istio/config/v1alpha2/legacy/authorizations

2021-01-21T14:24:52.591140Z info mcp [1] istio/config/v1alpha2/legacy/edges

2021-01-21T14:24:52.591144Z info mcp [2] istio/config/v1alpha2/legacy/reportnothings

2021-01-21T14:24:52.591146Z info mcp [3] istio/config/v1alpha2/templates

2021-01-21T14:24:52.591149Z info mcp [4] istio/config/v1alpha2/adapters

2021-01-21T14:24:52.591152Z info mcp [5] istio/config/v1alpha2/legacy/listentries

2021-01-21T14:24:52.591155Z info mcp [6] istio/config/v1alpha2/legacy/metrics

2021-01-21T14:24:52.591158Z info mcp [7] istio/config/v1alpha2/legacy/tracespans

2021-01-21T14:24:52.591172Z info mcp [8] istio/policy/v1beta1/instances

2021-01-21T14:24:52.591175Z info mcp [9] istio/policy/v1beta1/handlers

2021-01-21T14:24:52.591177Z info mcp [10] istio/config/v1alpha2/legacy/kuberneteses

2021-01-21T14:24:52.591180Z info mcp [11] istio/config/v1alpha2/legacy/logentries

2021-01-21T14:24:52.591184Z info mcp [12] istio/config/v1alpha2/legacy/quotas

2021-01-21T14:24:52.591187Z info mcp [13] istio/policy/v1beta1/rules

2021-01-21T14:24:52.591190Z info mcp [14] istio/config/v1alpha2/legacy/apikeys

2021-01-21T14:24:52.591205Z info mcp [15] istio/config/v1alpha2/legacy/checknothings

2021-01-21T14:24:52.591208Z info mcp [16] istio/policy/v1beta1/attributemanifests

2021-01-21T14:24:52.591238Z info parsed scheme: ""

2021-01-21T14:24:52.591252Z info scheme "" not registered, fallback to default scheme

2021-01-21T14:24:52.591300Z info ccResolverWrapper: sending update to cc: {[{istio-galley.istio-system.svc:9901 0 }] }

2021-01-21T14:24:52.591312Z info ClientConn switching balancer to "pick_first"

2021-01-21T14:24:52.591367Z info Awaiting for config store sync...

2021-01-21T14:24:52.591475Z info pickfirstBalancer: HandleSubConnStateChange: 0xc000521850, CONNECTING

2021-01-21T14:24:52.592099Z info mcp (re)trying to establish new MCP sink stream

2021-01-21T14:24:52.596199Z info pickfirstBalancer: HandleSubConnStateChange: 0xc000521850, READY

2021-01-21T14:24:52.596306Z info mcp New MCP sink stream created

    DivXPro: log + screenshot

    kubectl -n istio-system get pods

      zackzhang

      root@master1:~# kubectl -n istio-system get pods
      NAME                                      READY   STATUS             RESTARTS   AGE
      istio-citadel-6dc9fd8fc-2r4rm             1/1     Running            0          9h
      istio-galley-6bc47887d7-hvv29             1/1     Running            0          9h
      istio-ingressgateway-5bf76c6574-d2kvh     0/1     Running            0          9h
      istio-init-crd-10-1.4.8-nbl7d             0/1     Completed          0          9h
      istio-init-crd-11-1.4.8-92ccv             0/1     Completed          0          9h
      istio-init-crd-12-1.4.8-chqfk             0/1     Completed          0          9h
      istio-init-crd-14-1.4.8-8jpn8             0/1     Completed          0          9h
      istio-pilot-86f4d4847b-fjg52              2/2     Running            3          9h
      istio-policy-64b69ffd4c-btctb             1/2     CrashLoopBackOff   207        9h
      istio-sidecar-injector-77dcf6f548-n7nh7   1/1     Running            0          9h
      istio-telemetry-5c896cc8b8-mbh2n          2/2     Running            2          9h
      jaeger-collector-578456b5bf-kptww         1/1     Running            0          9h
      jaeger-operator-587999bcb9-fx7dz          1/1     Running            0          9h
      jaeger-query-7bb67b557f-6v9sf             2/2     Running            0          9h
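
      The pod listing shows only istio-policy in CrashLoopBackOff, and the Mixer log above ends at "New MCP sink stream created", which looks like normal startup. The crash reason is more likely visible in the pod's events and in the previously terminated container instance. A minimal sketch (the pod and container names are taken from the listing above and will differ in your cluster):

```shell
# Hedged sketch: pod/container names come from the listing above and will differ per cluster.
# Guard so the snippet is a no-op where kubectl or a cluster is unavailable.
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not available; skipping"; exit 0; }

# Show events and the last termination reason (e.g. OOMKilled, failed liveness probe).
kubectl -n istio-system describe pod istio-policy-64b69ffd4c-btctb || true

# Logs of the container instance that crashed, not the freshly restarted one.
kubectl -n istio-system logs istio-policy-64b69ffd4c-btctb -c mixer --previous || true
```

      With 207 restarts over 9 hours, the `Last State` / `Reason` fields in `describe pod` usually narrow the cause down quickly.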

      After re-running ks-installer as in your earlier post, only these are left:

      root@master1:~# kubectl -n istio-system get pods
      NAME                                READY   STATUS    RESTARTS   AGE
      jaeger-collector-578456b5bf-kptww   1/1     Running   0          9h
      jaeger-operator-587999bcb9-fx7dz    1/1     Running   0          9h
      jaeger-query-7bb67b557f-6v9sf       2/2     Running   0          9h

      The logs don't seem to show anything useful.

      root@master1:~# kubectl -n kubesphere-system get po | grep ks-installer
      ks-installer-7cb866bd-tnx48             1/1     Running   0          9h
      root@master1:~# kubectl -n kubesphere-system exec -ti ks-installer-7cb866bd-tnx48 sh
      kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
      /kubesphere $ helm delete istio-init -n istio-system
      release "istio-init" uninstalled
      /kubesphere $ helm delete istio -n istio-system
      release "istio" uninstalled
      /kubesphere $ kubectl -n kubesphere-system rollout restart deploy/ks-installer
      deployment.apps/ks-installer restarted
      /kubesphere $ command terminated with exit code 137
      root@master1:~# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
      2021-01-22T09:44:26+08:00 INFO     : shell-operator latest
      2021-01-22T09:44:26+08:00 INFO     : HTTP SERVER Listening on 0.0.0.0:9115
      2021-01-22T09:44:26+08:00 INFO     : Use temporary dir: /tmp/shell-operator
      2021-01-22T09:44:26+08:00 INFO     : Initialize hooks manager ...
      2021-01-22T09:44:26+08:00 INFO     : Search and load hooks ...
      2021-01-22T09:44:26+08:00 INFO     : Load hook config from '/hooks/kubesphere/installRunner.py'
      2021-01-22T09:44:27+08:00 INFO     : Load hook config from '/hooks/kubesphere/schedule.sh'
      2021-01-22T09:44:27+08:00 INFO     : Initializing schedule manager ...
      2021-01-22T09:44:27+08:00 INFO     : KUBE Init Kubernetes client
      2021-01-22T09:44:27+08:00 INFO     : KUBE-INIT Kubernetes client is configured successfully
      2021-01-22T09:44:27+08:00 INFO     : MAIN: run main loop
      2021-01-22T09:44:27+08:00 INFO     : MAIN: add onStartup tasks
      2021-01-22T09:44:27+08:00 INFO     : QUEUE add all HookRun@OnStartup
      2021-01-22T09:44:27+08:00 INFO     : Running schedule manager ...
      2021-01-22T09:44:27+08:00 INFO     : MSTOR Create new metric shell_operator_tasks_queue_length
      2021-01-22T09:44:27+08:00 INFO     : MSTOR Create new metric shell_operator_live_ticks
      2021-01-22T09:44:27+08:00 INFO     : GVR for kind 'ClusterConfiguration' is installer.kubesphere.io/v1alpha1, Resource=clusterconfigurations
      2021-01-22T09:44:27+08:00 INFO     : EVENT Kube event '5d22e560-51f9-4a4d-836c-1f7e05b83fe5'
      2021-01-22T09:44:27+08:00 INFO     : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
      2021-01-22T09:44:30+08:00 INFO     : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
      2021-01-22T09:44:30+08:00 INFO     : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
      [WARNING]: No inventory was parsed, only implicit localhost is available
      [WARNING]: provided hosts list is empty, only localhost is available. Note that
      the implicit localhost does not match 'all'
      
      PLAY [localhost] ***************************************************************
      
      TASK [download : include_tasks] ************************************************
      skipping: [localhost]
      
      TASK [download : Download items] ***********************************************
      skipping: [localhost]
      
      TASK [download : Sync container] ***********************************************
      skipping: [localhost]
      
      TASK [kubesphere-defaults : Configure defaults] ********************************
      ok: [localhost] => {
          "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
      }
      
      TASK [preinstall : check k8s version] ******************************************
      changed: [localhost]
      
      TASK [preinstall : init k8s version] *******************************************
      ok: [localhost]
      
      TASK [preinstall : Stop if kubernetes version is nonsupport] *******************
      ok: [localhost] => {
          "changed": false,
          "msg": "All assertions passed"
      }
      
      TASK [preinstall : check storage class] ****************************************
      changed: [localhost]
      
      TASK [preinstall : Stop if StorageClass was not found] *************************
      skipping: [localhost]
      
      TASK [preinstall : check default storage class] ********************************
      changed: [localhost]
      
      TASK [preinstall : Stop if defaultStorageClass was not found] ******************
      ok: [localhost] => {
          "changed": false,
          "msg": "All assertions passed"
      }
      
      TASK [preinstall : Kubesphere | Checking kubesphere component] *****************
      changed: [localhost]
      
      TASK [preinstall : Kubesphere | Get kubesphere component version] **************
      changed: [localhost]
      
      TASK [preinstall : Kubesphere | Get kubesphere component version] **************
      skipping: [localhost] => (item=ks-openldap) 
      skipping: [localhost] => (item=ks-redis) 
      skipping: [localhost] => (item=ks-minio) 
      skipping: [localhost] => (item=ks-openpitrix) 
      skipping: [localhost] => (item=elasticsearch-logging) 
      skipping: [localhost] => (item=elasticsearch-logging-curator) 
      skipping: [localhost] => (item=istio) 
      skipping: [localhost] => (item=istio-init) 
      skipping: [localhost] => (item=jaeger-operator) 
      skipping: [localhost] => (item=ks-jenkins) 
      skipping: [localhost] => (item=ks-sonarqube) 
      skipping: [localhost] => (item=logging-fluentbit-operator) 
      skipping: [localhost] => (item=uc) 
      skipping: [localhost] => (item=metrics-server) 
      PLAY RECAP *********************************************************************
      localhost                  : ok=9    changed=5    unreachable=0    failed=0    skipped=5    rescued=0    ignored=0   
      
      [WARNING]: No inventory was parsed, only implicit localhost is available
      [WARNING]: provided hosts list is empty, only localhost is available. Note that
      the implicit localhost does not match 'all'
      
      PLAY [localhost] ***************************************************************
      
      TASK [download : include_tasks] ************************************************
      skipping: [localhost]
      
      TASK [download : Download items] ***********************************************
      skipping: [localhost]
      
      TASK [download : Sync container] ***********************************************
      skipping: [localhost]
      
      TASK [kubesphere-defaults : Configure defaults] ********************************
      ok: [localhost] => {
          "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
      }
      
      TASK [metrics-server : Metrics-Server | Checking old installation files] *******
      skipping: [localhost]
      
      TASK [metrics-server : Metrics-Server | deleting old metrics-server] ***********
      skipping: [localhost]
      
      TASK [metrics-server : Metrics-Server | deleting old metrics-server files] *****
      skipping: [localhost] => (item=metrics-server) 
      
      TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
      skipping: [localhost]
      
      TASK [metrics-server : Metrics-Server | Creating manifests] ********************
      skipping: [localhost] => (item={'name': 'values', 'file': 'values.yaml', 'type': 'config'}) 
      
      TASK [metrics-server : Metrics-Server | Check Metrics-Server] ******************
      skipping: [localhost]
      
      TASK [metrics-server : Metrics-Server | Installing metrics-server] *************
      skipping: [localhost]
      
      TASK [metrics-server : Metrics-Server | Installing metrics-server retry] *******
      skipping: [localhost]
      
      TASK [metrics-server : Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready] ***
      skipping: [localhost]
      
      TASK [metrics-server : Metrics-Server | import metrics-server status] **********
      skipping: [localhost]
      
      PLAY RECAP *********************************************************************
      localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=13   rescued=0    ignored=0   
      
      [WARNING]: No inventory was parsed, only implicit localhost is available
      [WARNING]: provided hosts list is empty, only localhost is available. Note that
      the implicit localhost does not match 'all'
      
      PLAY [localhost] ***************************************************************
      
      TASK [download : include_tasks] ************************************************
      skipping: [localhost]
      
      TASK [download : Download items] ***********************************************
      skipping: [localhost]
      
      TASK [download : Sync container] ***********************************************
      skipping: [localhost]
      
      TASK [kubesphere-defaults : Configure defaults] ********************************
      ok: [localhost] => {
          "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
      }
      
      TASK [common : Kubesphere | Check kube-node-lease namespace] *******************
      changed: [localhost]
      
      TASK [common : KubeSphere | Get system namespaces] *****************************
      ok: [localhost]
      
      TASK [common : set_fact] *******************************************************
      ok: [localhost]
      
      TASK [common : debug] **********************************************************
      ok: [localhost] => {
          "msg": [
              "kubesphere-system",
              "kubesphere-controls-system",
              "kubesphere-monitoring-system",
              "kube-node-lease",
              "kubesphere-logging-system",
              "istio-system",
              "kubesphere-alerting-system",
              "istio-system"
          ]
      }
      
      TASK [common : KubeSphere | Create kubesphere namespace] ***********************
      changed: [localhost] => (item=kubesphere-system)
      changed: [localhost] => (item=kubesphere-controls-system)
      changed: [localhost] => (item=kubesphere-monitoring-system)
      changed: [localhost] => (item=kube-node-lease)
      changed: [localhost] => (item=kubesphere-logging-system)
      changed: [localhost] => (item=istio-system)
      changed: [localhost] => (item=kubesphere-alerting-system)
      changed: [localhost] => (item=istio-system)
      
      TASK [common : KubeSphere | Labeling system-workspace] *************************
      changed: [localhost] => (item=default)
      changed: [localhost] => (item=kube-public)
      changed: [localhost] => (item=kube-system)
      changed: [localhost] => (item=kubesphere-system)
      changed: [localhost] => (item=kubesphere-controls-system)
      changed: [localhost] => (item=kubesphere-monitoring-system)
      changed: [localhost] => (item=kube-node-lease)
      changed: [localhost] => (item=kubesphere-logging-system)
      changed: [localhost] => (item=istio-system)
      changed: [localhost] => (item=kubesphere-alerting-system)
      changed: [localhost] => (item=istio-system)
      
      TASK [common : KubeSphere | Create ImagePullSecrets] ***************************
      changed: [localhost] => (item=default)
      changed: [localhost] => (item=kube-public)
      changed: [localhost] => (item=kube-system)
      changed: [localhost] => (item=kubesphere-system)
      changed: [localhost] => (item=kubesphere-controls-system)
      changed: [localhost] => (item=kubesphere-monitoring-system)
      changed: [localhost] => (item=kube-node-lease)
      changed: [localhost] => (item=kubesphere-logging-system)
      changed: [localhost] => (item=istio-system)
      changed: [localhost] => (item=kubesphere-alerting-system)
      changed: [localhost] => (item=istio-system)
      
      TASK [common : Kubesphere | Label namespace for network policy] ****************
      changed: [localhost]
      
      TASK [common : KubeSphere | Getting kubernetes master num] *********************
      changed: [localhost]
      
      TASK [common : KubeSphere | Setting master num] ********************************
      ok: [localhost]
      
      TASK [common : Kubesphere | Getting common component installation files] *******
      changed: [localhost] => (item=common)
      changed: [localhost] => (item=ks-crds)
      
      TASK [common : KubeSphere | Create KubeSphere crds] ****************************
      changed: [localhost]
      
      TASK [common : KubeSphere | Recreate KubeSphere crds] **************************
      changed: [localhost]
      
      TASK [common : KubeSphere | check k8s version] *********************************
      changed: [localhost]
      
      TASK [common : Kubesphere | Getting common component installation files] *******
      changed: [localhost] => (item=snapshot-controller)
      
      TASK [common : Kubesphere | Creating snapshot controller values] ***************
      changed: [localhost] => (item={'name': 'custom-values-snapshot-controller', 'file': 'custom-values-snapshot-controller.yaml'})
      
      TASK [common : Kubesphere | Remove old snapshot crd] ***************************
      changed: [localhost]
      
      TASK [common : Kubesphere | Deploy snapshot controller] ************************
      changed: [localhost]
      
      TASK [common : Kubesphere | Checking openpitrix common component] **************
      changed: [localhost]
      
      TASK [common : include_tasks] **************************************************
      skipping: [localhost] => (item={'op': 'openpitrix-db', 'ks': 'mysql-pvc'}) 
      skipping: [localhost] => (item={'op': 'openpitrix-etcd', 'ks': 'etcd-pvc'}) 
      
      TASK [common : Getting PersistentVolumeName (mysql)] ***************************
      skipping: [localhost]
      
      TASK [common : Getting PersistentVolumeSize (mysql)] ***************************
      skipping: [localhost]
      
      TASK [common : Setting PersistentVolumeName (mysql)] ***************************
      skipping: [localhost]
      
      TASK [common : Setting PersistentVolumeSize (mysql)] ***************************
      skipping: [localhost]
      
      TASK [common : Getting PersistentVolumeName (etcd)] ****************************
      skipping: [localhost]
      
      TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
      skipping: [localhost]
      
      TASK [common : Setting PersistentVolumeName (etcd)] ****************************
      skipping: [localhost]
      
      TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
      changed: [localhost]
      
      TASK [common : Kubesphere | Setting mysql db pv size] **************************
      ok: [localhost]
      
      TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
      changed: [localhost]
      
      TASK [common : Kubesphere | Setting redis db pv size] **************************
      ok: [localhost]
      
      TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
      fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.070949", "end": "2021-01-22 09:45:23.880921", "msg": "non-zero return code", "rc": 1, "start": "2021-01-22 09:45:23.809972", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
      ...ignoring
      
      TASK [common : Kubesphere | Setting minio pv size] *****************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
      changed: [localhost]
      
      TASK [common : Kubesphere | Setting openldap pv size] **************************
      ok: [localhost]
      
      TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
      changed: [localhost]
      
      TASK [common : Kubesphere | Setting etcd pv size] ******************************
      ok: [localhost]
      
      TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
      fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.072366", "end": "2021-01-22 09:45:24.881844", "msg": "non-zero return code", "rc": 1, "start": "2021-01-22 09:45:24.809478", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
      ...ignoring
      
      TASK [common : Kubesphere | Setting redis ha pv size] **************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Check es-master PersistentVolumeClaim] *************
      fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-logging-system data-elasticsearch-logging-discovery-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.066043", "end": "2021-01-22 09:45:25.202645", "msg": "non-zero return code", "rc": 1, "start": "2021-01-22 09:45:25.136602", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-elasticsearch-logging-discovery-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-elasticsearch-logging-discovery-0\" not found"], "stdout": "", "stdout_lines": []}
      ...ignoring
      
      TASK [common : Kubesphere | Setting es master pv size] *************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Check es data PersistentVolumeClaim] ***************
      fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-logging-system data-elasticsearch-logging-data-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.065157", "end": "2021-01-22 09:45:25.520820", "msg": "non-zero return code", "rc": 1, "start": "2021-01-22 09:45:25.455663", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-elasticsearch-logging-data-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-elasticsearch-logging-data-0\" not found"], "stdout": "", "stdout_lines": []}
      ...ignoring
      
      TASK [common : Kubesphere | Setting es data pv size] ***************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Creating common component manifests] ***************
      changed: [localhost] => (item={'path': 'etcd', 'file': 'etcd.yaml'})
      changed: [localhost] => (item={'name': 'mysql', 'file': 'mysql.yaml'})
      changed: [localhost] => (item={'path': 'redis', 'file': 'redis.yaml'})
      
      TASK [common : Kubesphere | Creating mysql sercet] *****************************
      changed: [localhost]
      
      TASK [common : Kubesphere | Deploying etcd and mysql] **************************
      skipping: [localhost] => (item=etcd.yaml) 
      skipping: [localhost] => (item=mysql.yaml) 
      
      TASK [common : Kubesphere | Getting minio installation files] ******************
      skipping: [localhost] => (item=minio-ha) 
      
      TASK [common : Kubesphere | Creating manifests] ********************************
      skipping: [localhost] => (item={'name': 'custom-values-minio', 'file': 'custom-values-minio.yaml'}) 
      
      TASK [common : Kubesphere | Check minio] ***************************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deploy minio] **************************************
      skipping: [localhost]
      
      TASK [common : debug] **********************************************************
      skipping: [localhost]
      
      TASK [common : fail] ***********************************************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | create minio config directory] *********************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Creating common component manifests] ***************
      skipping: [localhost] => (item={'path': '/root/.config/rclone', 'file': 'rclone.conf'}) 
      
      TASK [common : include_tasks] **************************************************
      skipping: [localhost] => (item=helm) 
      skipping: [localhost] => (item=vmbased) 
      
      TASK [common : Kubesphere | import minio status] *******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Check ha-redis] ************************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Getting redis installation files] ******************
      skipping: [localhost] => (item=redis-ha) 
      
      TASK [common : Kubesphere | Creating manifests] ********************************
      skipping: [localhost] => (item={'name': 'custom-values-redis', 'file': 'custom-values-redis.yaml'}) 
      
      TASK [common : Kubesphere | Check old redis status] ****************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Delete and backup old redis svc] *******************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deploying redis] ***********************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Getting redis PodIp] *******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Creating redis migration script] *******************
      skipping: [localhost] => (item={'path': '/etc/kubesphere', 'file': 'redisMigrate.py'}) 
      
      TASK [common : Kubesphere | Check redis-ha status] *****************************
      skipping: [localhost]
      
      TASK [common : ks-logging | Migrating redis data] ******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Disable old redis] *********************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deploying redis] ***********************************
      skipping: [localhost] => (item=redis.yaml) 
      
      TASK [common : Kubesphere | import redis status] *******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Getting openldap installation files] ***************
      skipping: [localhost] => (item=openldap-ha) 
      
      TASK [common : Kubesphere | Creating manifests] ********************************
      skipping: [localhost] => (item={'name': 'custom-values-openldap', 'file': 'custom-values-openldap.yaml'}) 
      
      TASK [common : Kubesphere | Check old openldap status] *************************
      skipping: [localhost]
      
      TASK [common : KubeSphere | Shutdown ks-account] *******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Check openldap] ************************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deploy openldap] ***********************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Load old openldap data] ****************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Check openldap-ha status] **************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Get openldap-ha pod list] **************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Get old openldap data] *****************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Migrating openldap data] ***************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Disable old openldap] ******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Restart openldap] **********************************
      skipping: [localhost]
      
      TASK [common : KubeSphere | Restarting ks-account] *****************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | import openldap status] ****************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Check ha-redis] ************************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Getting redis installation files] ******************
      skipping: [localhost] => (item=redis-ha) 
      
      TASK [common : Kubesphere | Creating manifests] ********************************
      skipping: [localhost] => (item={'name': 'custom-values-redis', 'file': 'custom-values-redis.yaml'}) 
      
      TASK [common : Kubesphere | Check old redis status] ****************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Delete and backup old redis svc] *******************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deploying redis] ***********************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Getting redis PodIp] *******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Creating redis migration script] *******************
      skipping: [localhost] => (item={'path': '/etc/kubesphere', 'file': 'redisMigrate.py'}) 
      
      TASK [common : Kubesphere | Check redis-ha status] *****************************
      skipping: [localhost]
      
      TASK [common : ks-logging | Migrating redis data] ******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Disable old redis] *********************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deploying redis] ***********************************
      skipping: [localhost] => (item=redis.yaml) 
      
      TASK [common : Kubesphere | import redis status] *******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Getting openldap installation files] ***************
      skipping: [localhost] => (item=openldap-ha) 
      
      TASK [common : Kubesphere | Creating manifests] ********************************
      skipping: [localhost] => (item={'name': 'custom-values-openldap', 'file': 'custom-values-openldap.yaml'}) 
      
      TASK [common : Kubesphere | Check old openldap status] *************************
      skipping: [localhost]
      
      TASK [common : KubeSphere | Shutdown ks-account] *******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Check openldap] ************************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deploy openldap] ***********************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Load old openldap data] ****************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Check openldap-ha status] **************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Get openldap-ha pod list] **************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Get old openldap data] *****************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Migrating openldap data] ***************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Disable old openldap] ******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Restart openldap] **********************************
      skipping: [localhost]
      
      TASK [common : KubeSphere | Restarting ks-account] *****************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | import openldap status] ****************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Getting minio installation files] ******************
      skipping: [localhost] => (item=minio-ha) 
      
      TASK [common : Kubesphere | Creating manifests] ********************************
      skipping: [localhost] => (item={'name': 'custom-values-minio', 'file': 'custom-values-minio.yaml'}) 
      
      TASK [common : Kubesphere | Check minio] ***************************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deploy minio] **************************************
      skipping: [localhost]
      
      TASK [common : debug] **********************************************************
      skipping: [localhost]
      
      TASK [common : fail] ***********************************************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | create minio config directory] *********************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Creating common component manifests] ***************
      skipping: [localhost] => (item={'path': '/root/.config/rclone', 'file': 'rclone.conf'}) 
      
      TASK [common : include_tasks] **************************************************
      skipping: [localhost] => (item=helm) 
      skipping: [localhost] => (item=vmbased) 
      
      TASK [common : Kubesphere | import minio status] *******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deploying common component] ************************
      skipping: [localhost] => (item=mysql.yaml) 
      
      TASK [common : Kubesphere | import mysql status] *******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deploying common component] ************************
      changed: [localhost] => (item=etcd.yaml)
      
      TASK [common : Kubesphere | Getting elasticsearch and curator installation files] ***
      changed: [localhost]
      
      TASK [common : Kubesphere | Creating custom manifests] *************************
      changed: [localhost] => (item={'name': 'custom-values-elasticsearch', 'file': 'custom-values-elasticsearch.yaml'})
      changed: [localhost] => (item={'name': 'custom-values-elasticsearch-curator', 'file': 'custom-values-elasticsearch-curator.yaml'})
      
      TASK [common : Kubesphere | Check elasticsearch data StatefulSet] **************
      changed: [localhost]
      
      TASK [common : Kubesphere | Check elasticsearch storageclass] ******************
      changed: [localhost]
      
      TASK [common : Kubesphere | Comment elasticsearch storageclass parameter] ******
      skipping: [localhost]
      
      TASK [common : KubeSphere | Check internal es] *********************************
      changed: [localhost]
      
      TASK [common : Kubesphere | Deploy elasticsearch-logging] **********************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Get PersistentVolume Name] *************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Patch PersistentVolume (persistentVolumeReclaimPolicy)] ***
      skipping: [localhost]
      
      TASK [common : Kubesphere | Delete elasticsearch] ******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Waiting for seconds] *******************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deploy elasticsearch-logging] **********************
      skipping: [localhost]
      
      TASK [common : Kubesphere | import es status] **********************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deploy elasticsearch-logging-curator] **************
      changed: [localhost]
      
      TASK [common : Kubesphere | Getting elasticsearch and curator installation files] ***
      skipping: [localhost]
      
      TASK [common : Kubesphere | Creating custom manifests] *************************
      skipping: [localhost] => (item={'path': 'fluentbit', 'file': 'custom-fluentbit-fluentBit.yaml'}) 
      skipping: [localhost] => (item={'path': 'init', 'file': 'custom-fluentbit-operator-deployment.yaml'}) 
      skipping: [localhost] => (item={'path': 'migrator', 'file': 'custom-migrator-job.yaml'}) 
      
      TASK [common : Kubesphere | Checking fluentbit-version] ************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Backup old fluentbit crd] **************************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deleting old fluentbit operator] *******************
      skipping: [localhost] => (item={'type': 'deploy', 'name': 'logging-fluentbit-operator'}) 
      skipping: [localhost] => (item={'type': 'fluentbits.logging.kubesphere.io', 'name': 'fluent-bit'}) 
      skipping: [localhost] => (item={'type': 'ds', 'name': 'fluent-bit'}) 
      skipping: [localhost] => (item={'type': 'crd', 'name': 'fluentbits.logging.kubesphere.io'}) 
      
      TASK [common : Kubesphere | Prepare fluentbit operator setup] ******************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Migrate fluentbit operator old config] *************
      skipping: [localhost]
      
      TASK [common : Kubesphere | Deploy new fluentbit operator] *********************
      skipping: [localhost]
      
      TASK [common : Kubesphere | import fluentbit status] ***************************
      skipping: [localhost]
      
      TASK [common : Setting persistentVolumeReclaimPolicy (mysql)] ******************
      skipping: [localhost]
      
      TASK [common : Setting persistentVolumeReclaimPolicy (etcd)] *******************
      skipping: [localhost]
      
      PLAY RECAP *********************************************************************
      localhost                  : ok=41   changed=32   unreachable=0    failed=0    skipped=116  rescued=0    ignored=4   
      
      
      [WARNING]: No inventory was parsed, only implicit localhost is available
      [WARNING]: provided hosts list is empty, only localhost is available. Note that
      the implicit localhost does not match 'all'
      
      PLAY [localhost] ***************************************************************
      
      TASK [download : include_tasks] ************************************************
      skipping: [localhost]
      
      TASK [download : Download items] ***********************************************
      skipping: [localhost]
      
      TASK [download : Sync container] ***********************************************
      skipping: [localhost]
      
      TASK [kubesphere-defaults : Configure defaults] ********************************
      ok: [localhost] => {
          "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
      }
      
      TASK [ks-core/prepare : KubeSphere | Check core components (1)] ****************
      skipping: [localhost]
      
      TASK [ks-core/prepare : KubeSphere | Check core components (2)] ****************
      skipping: [localhost]
      
      TASK [ks-core/prepare : KubeSphere | Check core components (3)] ****************
      skipping: [localhost]
      
      TASK [ks-core/prepare : KubeSphere | Check core components (4)] ****************
      skipping: [localhost]
      
      TASK [ks-core/prepare : KubeSphere | Update ks-core status] ********************
      skipping: [localhost]
      
      TASK [ks-core/prepare : set_fact] **********************************************
      skipping: [localhost]
      
      TASK [ks-core/prepare : KubeSphere | Create KubeSphere dir] ********************
      skipping: [localhost]
      
      TASK [ks-core/prepare : KubeSphere | Getting installation init files] **********
      skipping: [localhost] => (item=ks-init) 
      
      TASK [ks-core/prepare : Kubesphere | Checking account init] ********************
      skipping: [localhost]
      
      TASK [ks-core/prepare : Kubesphere | Init account] *****************************
      skipping: [localhost]
      
      TASK [ks-core/prepare : KubeSphere | Init KubeSphere] **************************
      skipping: [localhost] => (item=iam-accounts.yaml) 
      skipping: [localhost] => (item=webhook-secret.yaml) 
      skipping: [localhost] => (item=users.iam.kubesphere.io.yaml) 
      
      TASK [ks-core/prepare : KubeSphere | Getting controls-system file] *************
      skipping: [localhost] => (item={'name': 'kubesphere-controls-system', 'file': 'kubesphere-controls-system.yaml'}) 
      
      TASK [ks-core/prepare : KubeSphere | Installing controls-system] ***************
      skipping: [localhost]
      
      TASK [ks-core/prepare : KubeSphere | Generate kubeconfig-admin] ****************
      skipping: [localhost]
      
      TASK [ks-core/init-token : KubeSphere | Create KubeSphere dir] *****************
      ok: [localhost]
      
      TASK [ks-core/init-token : KubeSphere | Getting installation init files] *******
      changed: [localhost] => (item=jwt-script)
      
      TASK [ks-core/init-token : KubeSphere | Creating KubeSphere Secret] ************
      changed: [localhost]
      
      TASK [ks-core/init-token : KubeSphere | Creating KubeSphere Secret] ************
      ok: [localhost]
      
      TASK [ks-core/init-token : KubeSphere | Creating KubeSphere Secret] ************
      skipping: [localhost]
      
      TASK [ks-core/init-token : KubeSphere | Enable Token Script] *******************
      changed: [localhost]
      
      TASK [ks-core/init-token : KubeSphere | Getting KS Token] **********************
      changed: [localhost]
      
      TASK [ks-core/init-token : Kubesphere | Checking kubesphere secrets] ***********
      changed: [localhost]
      
      TASK [ks-core/init-token : Kubesphere | Delete kubesphere secret] **************
      skipping: [localhost]
      
      TASK [ks-core/init-token : KubeSphere | Create components token] ***************
      changed: [localhost]
      
      TASK [ks-core/ks-core : KubeSphere | Getting kubernetes version] ***************
      skipping: [localhost]
      
      TASK [ks-core/ks-core : KubeSphere | Setting kubernetes version] ***************
      skipping: [localhost]
      
      TASK [ks-core/ks-core : KubeSphere | Getting kubernetes master num] ************
      skipping: [localhost]
      
      TASK [ks-core/ks-core : KubeSphere | Setting master num] ***********************
      skipping: [localhost]
      
      TASK [ks-core/ks-core : ks-console | Checking ks-console svc] ******************
      skipping: [localhost]
      
      TASK [ks-core/ks-core : ks-console | Getting ks-console svc port] **************
      skipping: [localhost]
      
      TASK [ks-core/ks-core : ks-console | Setting console_port] *********************
      skipping: [localhost]
      
      TASK [ks-core/ks-core : KubeSphere | Getting Ingress installation files] *******
      skipping: [localhost] => (item=ingress) 
      skipping: [localhost] => (item=ks-apiserver) 
      skipping: [localhost] => (item=ks-console) 
      skipping: [localhost] => (item=ks-controller-manager) 
      
      TASK [ks-core/ks-core : KubeSphere | Creating manifests] ***********************
      skipping: [localhost] => (item={'path': 'ingress', 'file': 'ingress-controller.yaml', 'type': 'config'}) 
      skipping: [localhost] => (item={'path': 'ks-apiserver', 'file': 'ks-apiserver.yml', 'type': 'deploy'}) 
      skipping: [localhost] => (item={'path': 'ks-controller-manager', 'file': 'ks-controller-manager.yaml', 'type': 'deploy'}) 
      skipping: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-config.yml', 'type': 'config'}) 
      skipping: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-deployment.yml', 'type': 'deploy'}) 
      skipping: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-svc.yml', 'type': 'svc'}) 
      skipping: [localhost] => (item={'path': 'ks-console', 'file': 'sample-bookinfo-configmap.yaml', 'type': 'config'}) 
      
      TASK [ks-core/ks-core : KubeSphere | Delete Ingress-controller configmap] ******
      skipping: [localhost]
      
      TASK [ks-core/ks-core : KubeSphere | Creating Ingress-controller configmap] ****
      skipping: [localhost]
      
      TASK [ks-core/ks-core : KubeSphere | Creating ks-core] *************************
      skipping: [localhost] => (item={'path': 'ks-apiserver', 'file': 'ks-apiserver.yml'}) 
      skipping: [localhost] => (item={'path': 'ks-controller-manager', 'file': 'ks-controller-manager.yaml'}) 
      skipping: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-config.yml'}) 
      skipping: [localhost] => (item={'path': 'ks-console', 'file': 'sample-bookinfo-configmap.yaml'}) 
      skipping: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-deployment.yml'}) 
      
      TASK [ks-core/ks-core : KubeSphere | Check ks-console svc] *********************
      skipping: [localhost]
      
      TASK [ks-core/ks-core : KubeSphere | Creating ks-console svc] ******************
      skipping: [localhost] => (item={'path': 'ks-console', 'file': 'ks-console-svc.yml'}) 
      
      TASK [ks-core/ks-core : KubeSphere | Patch ks-console svc] *********************
      skipping: [localhost]
      
      TASK [ks-core/ks-core : KubeSphere | import ks-core status] ********************
      skipping: [localhost]
      
      PLAY RECAP *********************************************************************
      localhost                  : ok=9    changed=6    unreachable=0    failed=0    skipped=35   rescued=0    ignored=0   
      
      Start installing monitoring
      Start installing multicluster
      Start installing alerting
      Start installing auditing
      Start installing events
      Start installing logging
      Start installing notification
      Start installing servicemesh
      **************************************************
      task monitoring status is running
      task multicluster status is successful
      task alerting status is successful
      task auditing status is successful
      task events status is successful
      task logging status is successful
      task notification status is successful
      task servicemesh status is successful
      total: 8     completed:7
      **************************************************
      task monitoring status is running
      task multicluster status is successful
      task alerting status is successful
      task auditing status is successful
      task events status is successful
      task logging status is successful
      task notification status is successful
      task servicemesh status is successful
      total: 8     completed:7
      **************************************************
      task monitoring status is running
      task multicluster status is successful
      task alerting status is successful
      task auditing status is successful
      task events status is successful
      task logging status is successful
      task notification status is successful
      task servicemesh status is successful
      total: 8     completed:7
      **************************************************
      task monitoring status is running
      task multicluster status is successful
      task alerting status is successful
      task auditing status is successful
      task events status is successful
      task logging status is successful
      task notification status is successful
      task servicemesh status is successful
      total: 8     completed:7
      **************************************************
      task monitoring status is running
      task multicluster status is successful
      task alerting status is successful
      task auditing status is successful
      task events status is successful
      task logging status is successful
      task notification status is successful
      task servicemesh status is successful
      total: 8     completed:7
      **************************************************
      task monitoring status is successful
      task multicluster status is successful
      task alerting status is successful
      task auditing status is successful
      task events status is successful
      task logging status is successful
      task notification status is successful
      task servicemesh status is successful
      total: 8     completed:8
      **************************************************
      #####################################################
      ###              Welcome to KubeSphere!           ###
      #####################################################
      
      Console: http://192.168.0.7:30880
      Account: admin
      Password: P@88w0rd
      
      NOTES:
        1. After logging into the console, please check the
           monitoring status of service components in
           the "Cluster Management". If any service is not
           ready, please wait patiently until all components 
           are ready.
        2. Please modify the default password after login.
      
      #####################################################
      https://kubesphere.io             2021-01-22 09:47:29
      #####################################################

  DivXPro The installation itself went fine; it's just that some pods are unhealthy. Run `kubectl describe pod` and `kubectl get events` to see what's causing it, or delete those pods so they get rescheduled onto other nodes. First get all the pods back to a healthy state.
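The steps above can be sketched as a small helper script. The namespace and pod names passed in are placeholders for whichever pods are unhealthy:

```shell
#!/bin/sh
# Diagnose an unhealthy pod, then optionally delete it so the controller
# recreates it and the scheduler may place it on another node.
# Arguments (namespace, pod name) are placeholders -- substitute real names.
diagnose_pod() {
  ns="$1"; pod="$2"
  kubectl -n "$ns" describe pod "$pod"
  # Events for just this pod, newest last
  kubectl -n "$ns" get events \
    --field-selector "involvedObject.name=$pod" \
    --sort-by=.lastTimestamp
}

reschedule_pod() {
  ns="$1"; pod="$2"
  # A pod managed by a Deployment/StatefulSet is recreated on delete;
  # the new copy may land on a different node.
  kubectl -n "$ns" delete pod "$pod"
}
```

Usage would look like `diagnose_pod istio-system istio-telemetry-xxxx` (the pod name here is hypothetical).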

    zackzhang
    There are some abnormal events on istio-telemetry, and the mixer keeps restarting.

    istio-proxy reports:
          2021-01-22T14:36:22.641766Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected

    After moving it to a different node, the mixer recovered, but istio-proxy is still failing. Rescheduling it to yet another node made no difference.
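For the "config not received from Pilot" error above, the first thing to verify is that Pilot itself is up and serving. A minimal sketch; the `istio=pilot` label and the `discovery` container name assume the classic (pre-1.5) Istio layout this install appears to use, so adjust if they differ:

```shell
#!/bin/sh
# Quick checks when Envoy reports "config not received from Pilot".
check_pilot() {
  # Is Pilot itself running, and on which node?
  kubectl -n istio-system get pods -l istio=pilot -o wide
  # Recent Pilot logs often show why an xDS client gets no config
  kubectl -n istio-system logs -l istio=pilot -c discovery --tail=50
}
```

If `istioctl` is available, `istioctl proxy-status` also gives a per-sidecar summary of which proxies are out of sync with Pilot.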

          DivXPro

    Check which Pod this is and why it can't connect; check the network.

    DivXPro This pod shouldn't have the sidecar injected; check whether the template carries the annotation sidecar.istio.io/inject: "true".
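That annotation check can be done with a jsonpath query; the namespace and deployment name below are placeholders:

```shell
#!/bin/sh
# Print the value of the sidecar injection annotation on a workload's
# pod template (empty output means the annotation is absent).
get_inject_annotation() {
  ns="$1"; deploy="$2"
  # Dots inside the annotation key must be escaped in jsonpath
  kubectl -n "$ns" get deploy "$deploy" \
    -o jsonpath='{.spec.template.metadata.annotations.sidecar\.istio\.io/inject}'
}
```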

    Also, try rescheduling the kubectl-admin Pod to another node, or better yet, onto the same node as the istio-sidecar-injector pod.
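To co-locate a pod with the injector, first look up which node istio-sidecar-injector is on; pinning would then be done via `spec.nodeName` or a nodeSelector on the workload. This is only a lookup sketch, assuming the standard `istio=sidecar-injector` label:

```shell
#!/bin/sh
# Print the node that istio-sidecar-injector is currently scheduled on.
injector_node() {
  kubectl -n istio-system get pod -l istio=sidecar-injector \
    -o jsonpath='{.items[0].spec.nodeName}'
}
```

With the node name in hand, deleting the target pod after pinning its template to that node forces the co-location.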