I installed KubeSphere 3.0 earlier, but the official GA isn't until the end of the month and I need it right now, so I uninstalled 3.0 and installed 2.1.1 instead. I was on Helm v3 before and have now downgraded to v2 as well, but the minimal installation fails. The install log is below; could someone please take a look?
`2020-07-17T09:38:38Z INFO : shell-operator v1.0.0-beta.5
2020-07-17T09:38:38Z INFO : HTTP SERVER Listening on 0.0.0.0:9115
2020-07-17T09:38:38Z INFO : Use temporary dir: /tmp/shell-operator
2020-07-17T09:38:38Z INFO : Initialize hooks manager …
2020-07-17T09:38:38Z INFO : Search and load hooks …
2020-07-17T09:38:38Z INFO : Load hook config from '/hooks/kubesphere/installRunner.py'
2020-07-17T09:38:38Z INFO : Initializing schedule manager …
2020-07-17T09:38:38Z INFO : KUBE Init Kubernetes client
2020-07-17T09:38:38Z INFO : KUBE-INIT Kubernetes client is configured successfully
2020-07-17T09:38:38Z INFO : MAIN: run main loop
2020-07-17T09:38:38Z INFO : MAIN: add onStartup tasks
2020-07-17T09:38:38Z INFO : QUEUE add all HookRun@OnStartup
2020-07-17T09:38:38Z INFO : Running schedule manager …
2020-07-17T09:38:38Z INFO : MSTOR Create new metric shell_operator_live_ticks
2020-07-17T09:38:38Z INFO : MSTOR Create new metric shell_operator_tasks_queue_length
2020-07-17T09:38:38Z INFO : GVR for kind 'ConfigMap' is /v1, Resource=configmaps
2020-07-17T09:38:38Z INFO : EVENT Kube event '72fdbd18-c565-487f-adcd-7c5b85828395'
2020-07-17T09:38:38Z INFO : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
2020-07-17T09:38:41Z INFO : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
2020-07-17T09:38:41Z INFO : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' …
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Download items] ***********************************************
skipping: [localhost]

TASK [download : Sync container] ***********************************************
skipping: [localhost]

TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
"msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}

TASK [preinstall : check k8s version] ******************************************
changed: [localhost]

TASK [preinstall : init k8s version] *******************************************
ok: [localhost]

TASK [preinstall : Stop if kuernetes version is nonsupport] ********************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [preinstall : check helm status] ******************************************
changed: [localhost]

TASK [preinstall : Stop if Helm is not available] ******************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [preinstall : check storage class] ****************************************
changed: [localhost]

TASK [preinstall : Stop if StorageClass was not found] *************************
skipping: [localhost]

TASK [preinstall : check default storage class] ********************************
changed: [localhost]

TASK [preinstall : Stop if defaultStorageClass was not found] ******************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

PLAY RECAP *********************************************************************
localhost : ok=9 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Download items] ***********************************************
skipping: [localhost]

TASK [download : Sync container] ***********************************************
skipping: [localhost]

TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
"msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}

TASK [metrics-server : Metrics-Server | Checking old installation files] *******
ok: [localhost]

TASK [metrics-server : Metrics-Server | deleting old prometheus-operator] ******
skipping: [localhost]

TASK [metrics-server : Metrics-Server | deleting old metrics-server files] *****
[DEPRECATION WARNING]: evaluating {'failed': False, u'stat': {u'exists':
False}, u'changed': False} as a bare variable, this behaviour will go away and
you might need to add |bool to the expression in the future. Also see
CONDITIONAL_BARE_VARS configuration toggle.. This feature will be removed in
version 2.12. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
ok: [localhost] => (item=metrics-server)

TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
changed: [localhost]

TASK [metrics-server : Metrics-Server | Creating manifests] ********************
changed: [localhost] => (item={u'type': u'config', u'name': u'values', u'file': u'values.yaml'})

TASK [metrics-server : Metrics-Server | Check Metrics-Server] ******************
changed: [localhost]

TASK [metrics-server : Metrics-Server | Installing metrics-server] *************
skipping: [localhost]

TASK [metrics-server : Metrics-Server | Installing metrics-server retry] *******
skipping: [localhost]

TASK [metrics-server : Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready] ***
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Download items] ***********************************************
skipping: [localhost]

TASK [download : Sync container] ***********************************************
skipping: [localhost]

TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
“msg”: “Check roles/kubesphere-defaults/defaults/main.yml”
}

TASK [common : Kubesphere | Check kube-node-lease namespace] *******************
changed: [localhost]

TASK [common : KubeSphere | Get system namespaces] *****************************
ok: [localhost]

TASK [common : set_fact] *******************************************************
ok: [localhost]

TASK [common : debug] **********************************************************
ok: [localhost] => {
"msg": [
"kubesphere-system",
"kubesphere-controls-system",
"kubesphere-monitoring-system",
"kube-node-lease"
]
}

TASK [common : KubeSphere | Create kubesphere namespace] ***********************
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)

TASK [common : KubeSphere | Labeling system-workspace] *************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)

TASK [common : KubeSphere | Create ImagePullSecrets] ***************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)

TASK [common : KubeSphere | Getting kubernetes master num] *********************
changed: [localhost]

TASK [common : KubeSphere | Setting master num] ********************************
ok: [localhost]

TASK [common : Kubesphere | Getting common component installation files] *******
changed: [localhost] => (item=common)
changed: [localhost] => (item=ks-crds)

TASK [common : KubeSphere | Create KubeSphere crds] ****************************
changed: [localhost]

TASK [common : Kubesphere | Checking openpitrix common component] **************
changed: [localhost]

TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item={u'ks': u'mysql-pvc', u'op': u'openpitrix-db'})
skipping: [localhost] => (item={u'ks': u'etcd-pvc', u'op': u'openpitrix-etcd'})

TASK [common : Getting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]

TASK [common : Getting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]

TASK [common : Getting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]

TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.706385", "end": "2020-07-17 09:39:40.089396", "msg": "non-zero return code", "rc": 1, "start": "2020-07-17 09:39:39.383011", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}
…ignoring

TASK [common : Kubesphere | Setting mysql db pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system redis-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.691090", "end": "2020-07-17 09:39:41.073139", "msg": "non-zero return code", "rc": 1, "start": "2020-07-17 09:39:40.382049", "stderr": "Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found"], "stdout": "", "stdout_lines": []}
…ignoring

TASK [common : Kubesphere | Setting redis db pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.706604", "end": "2020-07-17 09:39:42.083565", "msg": "non-zero return code", "rc": 1, "start": "2020-07-17 09:39:41.376961", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
…ignoring

TASK [common : Kubesphere | Setting minio pv size] *****************************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system openldap-pvc-openldap-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.707600", "end": "2020-07-17 09:39:43.076481", "msg": "non-zero return code", "rc": 1, "start": "2020-07-17 09:39:42.368881", "stderr": "Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found"], "stdout": "", "stdout_lines": []}
…ignoring

TASK [common : Kubesphere | Setting openldap pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.711714", "end": "2020-07-17 09:39:44.068008", "msg": "non-zero return code", "rc": 1, "start": "2020-07-17 09:39:43.356294", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}
…ignoring

TASK [common : Kubesphere | Setting etcd pv size] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.696871", "end": "2020-07-17 09:39:45.066185", "msg": "non-zero return code", "rc": 1, "start": "2020-07-17 09:39:44.369314", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
…ignoring

TASK [common : Kubesphere | Setting redis ha pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Creating common component manifests] ***************
changed: [localhost] => (item={u'path': u'etcd', u'file': u'etcd.yaml'})
changed: [localhost] => (item={u'name': u'mysql', u'file': u'mysql.yaml'})
changed: [localhost] => (item={u'path': u'redis', u'file': u'redis.yaml'})

TASK [common : Kubesphere | Creating mysql sercet] *****************************
changed: [localhost]

TASK [common : Kubesphere | Deploying etcd and mysql] **************************
skipping: [localhost] => (item=etcd.yaml)
skipping: [localhost] => (item=mysql.yaml)

TASK [common : Kubesphere | Getting minio installation files] ******************
skipping: [localhost] => (item=minio-ha)

TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'})

TASK [common : Kubesphere | Check minio] ***************************************
skipping: [localhost]

TASK [common : Kubesphere | Deploy minio] **************************************
skipping: [localhost]

TASK [common : debug] **********************************************************
skipping: [localhost]

TASK [common : fail] ***********************************************************
skipping: [localhost]

TASK [common : Kubesphere | create minio config directory] *********************
skipping: [localhost]

TASK [common : Kubesphere | Creating common component manifests] ***************
skipping: [localhost] => (item={u'path': u'/root/.config/rclone', u'file': u'rclone.conf'})

TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item=helm)
skipping: [localhost] => (item=vmbased)

TASK [common : Kubesphere | Check ha-redis] ************************************
skipping: [localhost]

TASK [common : Kubesphere | Getting redis installation files] ******************
skipping: [localhost] => (item=redis-ha)

TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'})

TASK [common : Kubesphere | Check old redis status] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Delete and backup old redis svc] *******************
skipping: [localhost]

TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost]

TASK [common : Kubesphere | Getting redis PodIp] *******************************
skipping: [localhost]

TASK [common : Kubesphere | Creating redis migration script] *******************
skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'})

TASK [common : Kubesphere | Check redis-ha status] *****************************
skipping: [localhost]

TASK [common : ks-logging | Migrating redis data] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Disable old redis] *********************************
skipping: [localhost]

TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost] => (item=redis.yaml)

TASK [common : Kubesphere | Getting openldap installation files] ***************
skipping: [localhost] => (item=openldap-ha)

TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'})

TASK [common : Kubesphere | Check old openldap status] *************************
skipping: [localhost]

TASK [common : KubeSphere | Shutdown ks-account] *******************************
skipping: [localhost]

TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap] ************************************
skipping: [localhost]

TASK [common : Kubesphere | Deploy openldap] ***********************************
skipping: [localhost]

TASK [common : Kubesphere | Load old openldap data] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap-ha status] **************************
skipping: [localhost]

TASK [common : Kubesphere | Get openldap-ha pod list] **************************
skipping: [localhost]

TASK [common : Kubesphere | Get old openldap data] *****************************
skipping: [localhost]

TASK [common : Kubesphere | Migrating openldap data] ***************************
skipping: [localhost]

TASK [common : Kubesphere | Disable old openldap] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Restart openldap] **********************************
skipping: [localhost]

TASK [common : KubeSphere | Restarting ks-account] *****************************
skipping: [localhost]

TASK [common : Kubesphere | Check ha-redis] ************************************
changed: [localhost]

TASK [common : Kubesphere | Getting redis installation files] ******************
changed: [localhost] => (item=redis-ha)

TASK [common : Kubesphere | Creating manifests] ********************************
changed: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'})

TASK [common : Kubesphere | Check old redis status] ****************************
changed: [localhost]

TASK [common : Kubesphere | Delete and backup old redis svc] *******************
skipping: [localhost]

TASK [common : Kubesphere | Deploying redis] ***********************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-redis /etc/kubesphere/redis-ha -f /etc/kubesphere/custom-values-redis.yaml --set fullnameOverride=redis-ha --namespace kubesphere-system\n", "delta": "0:00:00.817659", "end": "2020-07-17 09:40:00.569627", "msg": "non-zero return code", "rc": 1, "start": "2020-07-17 09:39:59.751968", "stderr": "Error: UPGRADE FAILED: \"ks-redis\" has no deployed releases", "stderr_lines": ["Error: UPGRADE FAILED: \"ks-redis\" has no deployed releases"], "stdout": "", "stdout_lines": []}

PLAY RECAP *********************************************************************
localhost : ok=25 changed=20 unreachable=0 failed=1 skipped=56 rescued=0 ignored=6 `


    Best answer (selected by Jeff)

    It works now. It was a Helm version problem: v2.16.9 is broken, and switching to v2.16.1 fixed it. To sum up: when downgrading from a higher version to a lower one, remember to wipe all the leftover config files, and make sure you pick the right Helm version; v2.16.9 is broken! I've stepped on this mine so you don't have to. A rough sketch of the downgrade is below.
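
    Roughly, the client/Tiller downgrade can be done like this (a sketch only; the download URL and the gcr.io Tiller image are assumptions, substitute the mirror you normally use):

    # replace the Helm v2 client binary
    curl -LO https://get.helm.sh/helm-v2.16.1-linux-amd64.tar.gz
    tar -zxvf helm-v2.16.1-linux-amd64.tar.gz
    cp linux-amd64/helm /usr/local/bin/helm

    # downgrade Tiller in the cluster to the matching version, then verify
    helm init --upgrade --force-upgrade --tiller-image gcr.io/kubernetes-helm/tiller:v2.16.1
    helm version    # client and server should both report v2.16.1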

zackzhang I already ran that uninstall script and deleted everything. Just to be safe, I also went through and deleted every namespace, and I cleaned up everything that had been installed via Helm. After running the install script, there were two releases at first; the redis-ha one failed to install, and after I deleted it manually and tried installing again, that redis never showed up again:

So what should I do now?

Check the ks-installer installation log:

root@node1:/root # kubectl get po -A | grep ks-installer
kubesphere-system              ks-installer-6b9fc66575-4gzhl                                     1/1     Running            0          9d

Check that pod's logs. If something failed, it will show up in the logs. See which step failed; if a component simply wasn't installed, it was probably skipped. Take a look at the execution result.

    zackzhang The log is the same as before. I ran kubectl logs ks-installer-59fb465b7-f5pcm -n kubesphere-system to check; the end of the screenshot is the ks-installer log, and it just stops there:

    From what I can see, the error is that Helm keeps failing while installing redis-ha. Exec directly into this Pod and run that command yourself; if it still errors out, work through the link I posted above (that's the redis installation procedure) and analyze the cause. If you still can't solve it in the end, it's very likely caused by leftover junk from the earlier uninstall, in which case I'd suggest re-initializing the system environment and reinstalling step by step.

      Delete the ks-installer pod; it will rerun the installation, then check the logs again. For example, something like the commands below.
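
      A sketch, using the pod name from your earlier output (substitute the actual name in your cluster):

      kubectl -n kubesphere-system delete pod ks-installer-59fb465b7-f5pcm
      # the Deployment recreates the pod; find the new name and follow its logs
      kubectl -n kubesphere-system get po | grep ks-installer
      kubectl -n kubesphere-system logs -f <new-ks-installer-pod>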

        zackzhang After deleting the pod (and even the deployment), it's still the same. My question now is: at which step are these paths copied over: /etc/kubesphere/redis-ha and /etc/kubesphere/custom-values-redis.yaml? I looked under /etc on my own host and there is no kubesphere directory there at all.

        Those paths are inside the ks-installer Pod.

        [root@node1 ~]# kubectl  -n kubesphere-system get po |grep ks-installer
        
        ks-installer-7c59d944ff-xd7tq            1/1     Running   0          2d20h
        [root@node1 ~]# kubectl  -n kubesphere-system exec -ti ks-installer-7c59d944ff-xd7tq sh
        / # ls /etc/kubesphere/redis-ha
        Chart.yaml   OWNERS       README.md    ci           templates    values.yaml
        / #

          Did you run the steps above inside that Pod? For example, following the error message, run helm manually to do the install and look at the error. Also, have you re-initialized your environment and reinstalled?

            zackzhang Yes. It reports that ks-redis is not installed:
            [root@master1 ~]# kubectl -n kubesphere-system exec -ti ks-installer-59fb465b7-qsrd5 sh
            / # ls /etc/kubesphere/redis-ha
            Chart.yaml OWNERS README.md ci templates values.yaml
            / # /usr/local/bin/helm upgrade --install ks-redis /etc/kubesphere/redis-ha -f /etc/kubesphere/custom-values-redis.yaml --set fullnameOverride=redis-ha --namespace kubesphere-system
            Error: UPGRADE FAILED: "ks-redis" has no deployed releases
            / #

            Check your Helm version; with Helm 2 that command should work fine.

              zackzhang Installing redis locally with helm works fine for me though:
              `[root@master1 ]# helm version
              Client: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
              Server: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
              [root@master1 ]# helm install aliyun/redis
              NAME: geared-sheep
              LAST DEPLOYED: Mon Jul 20 10:47:17 2020
              NAMESPACE: default
              STATUS: DEPLOYED

              RESOURCES:
              ==> v1/PersistentVolumeClaim
              NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
              geared-sheep-redis Bound pvc-f98f8aba-7787-455f-9c7a-20b2aa66758a 8Gi RWO managed-nfs-storage 0s

              ==> v1/Pod(related)
              NAME READY STATUS RESTARTS AGE
              geared-sheep-redis-59585b47c6-zvmjg 0/1 Pending 0 0s

              ==> v1/Secret
              NAME TYPE DATA AGE
              geared-sheep-redis Opaque 1 0s

              ==> v1/Service
              NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
              geared-sheep-redis ClusterIP 10.68.73.170 <none> 6379/TCP 0s

              ==> v1beta1/Deployment
              NAME READY UP-TO-DATE AVAILABLE AGE
              geared-sheep-redis 0/1 1 0 0s

              NOTES:
              Redis can be accessed via port 6379 on the following DNS name from within your cluster:
              geared-sheep-redis.default.svc.cluster.local
              To get your password run:

              REDIS_PASSWORD=$(kubectl get secret --namespace default geared-sheep-redis -o jsonpath="{.data.redis-password}" | base64 --decode)

              To connect to your Redis server:

              1. Run a Redis pod that you can use as a client:

                kubectl run --namespace default geared-sheep-redis-client --rm --tty -i \
                --env REDIS_PASSWORD=$REDIS_PASSWORD \
                --image bitnami/redis:4.0.8-r2 -- bash

              2. Connect using the Redis CLI:

                redis-cli -h geared-sheep-redis -a $REDIS_PASSWORD

              [root@master1 ]# helm list

              NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE

              geared-sheep 1 Mon Jul 20 10:47:17 2020 DEPLOYED redis-1.1.15 4.0.8 default

              metrics-server 1 Fri Jul 17 16:41:39 2020 DEPLOYED metrics-server-2.5.0 0.3.1-0217 kube-system`

              zackzhang It looks like the version in there doesn't match, and that repo doesn't seem to exist in the repo list either. Could that be the problem?

              [root@master1 ~]# kubectl -n kubesphere-system exec -ti ks-installer-59fb465b7-qsrd5 sh
              / # /usr/local/bin/helm version
              Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
              Server: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
              / # /usr/local/bin/helm install aliyun/redis
              Error: failed to download "aliyun/redis" (hint: running
              helm repo updatemay help)
              / #

                xingye311 Just now I thought it was a Helm problem, but after I installed the 3.0 one it installed successfully again. Hard to put into words:

                zackzhang The earlier Helm cleanup wasn't complete. After removing everything cleanly and reinstalling, the redis-ha release that Helm installs comes out in FAILED status. How should I go about debugging this?

                [root@master1 ]# helm list

                NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE

                ks-redis 1 Mon Jul 20 14:07:52 2020 FAILED redis-ha-3.9.0 5.0.5 kubesphere-system
                metrics-server 1 Fri Jul 17 16:41:39 2020 DEPLOYED metrics-server-2.5.0 0.3.1-0217 kube-system
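
                For a FAILED Helm 2 release, a usual way to dig in and clear it (a general sketch, not KubeSphere-specific) is roughly:

                helm status ks-redis            # shows what the last deploy attempt complained about
                helm delete --purge ks-redis    # removes the FAILED record so upgrade --install can recreate it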

                zackzhang The redis part installs now; the problem was leftover configuration in redis-ha.conf, and after deleting that it works. But now the openldap installation has a problem:

                fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-openldap /etc/kubesphere/openldap-ha -f /etc/kubesphere/custom-values-openldap.yaml --set fullnameOverride=openldap --namespace kubesphere-system\n", "delta": "0:00:00.765785", "end": "2020-07-20 07:03:46.721472", "msg": "non-zero return code", "rc": 1, "start": "2020-07-20 07:03:45.955687", "stderr": "Error: render error in \"openldap-ha/templates/statefulset.yaml\": template: openldap-ha/templates/statefulset.yaml:25:9: executing \"openldap-ha/templates/statefulset.yaml\" at <(.Values.ldap.replication) and eq .Values.ldap.replication \"true\">: can't give argument to non-function .Values.ldap.replication", "stderr_lines": ["Error: render error in \"openldap-ha/templates/statefulset.yaml\": template: openldap-ha/templates/statefulset.yaml:25:9: executing \"openldap-ha/templates/statefulset.yaml\" at <(.Values.ldap.replication) and eq .Values.ldap.replication \"true\">: can't give argument to non-function .Values.ldap.replication"], "stdout": "Release \"ks-openldap\" does not exist. Installing it now.", "stdout_lines": ["Release \"ks-openldap\" does not exist. Installing it now."]}

                Is there something wrong with this statefulset.yaml? Please help take a look. I need to get a cluster up within the next couple of days and it's urgent. Thanks.
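
                One way to narrow this down (a sketch; run it inside the ks-installer pod, where the chart and values file live, and substitute your actual pod name) is to check what ldap.replication is set to and render the chart locally with the same values:

                kubectl -n kubesphere-system exec -ti <ks-installer-pod> sh
                grep -A 3 'ldap' /etc/kubesphere/custom-values-openldap.yaml
                helm template /etc/kubesphere/openldap-ha -f /etc/kubesphere/custom-values-openldap.yaml --name ks-openldap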