How should I change this?

I think there are really two approaches: one is to change the Ansible playbook variables inside the installer container.

The other is to create the PV manually. I tried both approaches yesterday and neither worked. Any pointers would be appreciated.

After changing the installer config, you also need to delete the es section from the status, otherwise the PVC settings won't be re-applied.
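
One way to do that, as a sketch (assuming the ClusterConfiguration CRD exposes status on the main resource rather than behind a status subresource, so a JSON patch can remove it):

kubectl patch cc ks-installer -n kubesphere-system --type=json \
  -p '[{"op": "remove", "path": "/status/es"}]'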

    wanjunlei I exec'd into the installer container to make the change, but I don't have write permission inside it. How do I go about this?

    kubectl edit cc -n kubesphere-system ks-installer

    Make the change there.

      wanjunlei
      Thanks for the tip. But how do I make the change inside the installer container? I don't have permission to edit the YAML there.
      Neither of these two places can be modified; exec'ing into the container gives no write access:

      /kubesphere/installer/roles/common/defaults/main.yaml:14: elasticsearchMasterVolumeSize: 4Gi

      /kubesphere/installer/roles/common/templates/custom-values-elasticsearch.yaml.j2:86: size: {% if es_master_pv_size is defined %}{{ es_master_pv_size }}{% else %}{{ common.es.master.volumeSize | default("4Gi") }}{% endif %}

      wanjunlei

      I updated the cc, but it still creates a 4Gi volume. Is there a way to create the PV by hand here? The Ansible route doesn't seem to be working.

      Manually creating a 4Gi PV would also work for me; I just don't know what the manifest should look like. If there's a default template, I'll give it a try.

      Alibaba Cloud's 20Gi minimum cloud-disk size is a real trap: anything under 20Gi can only be hand-written in YAML, because the default StorageClass enforces the 20Gi floor.
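
      (For reference, a minimal hand-written PV sketch under those constraints. Everything here is an assumption apart from the driver and StorageClass names, which appear in the PVC dump later in this thread: the volumeHandle is a placeholder for a disk pre-created in the node's zone, and claimRef pre-binds the volume to the pending claim.)

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: es-discovery-pv-0              # hypothetical name
      spec:
        capacity:
          storage: 4Gi                       # matches the pending claim
        accessModes:
        - ReadWriteOnce
        persistentVolumeReclaimPolicy: Retain
        storageClassName: alicloud-disk-efficiency
        claimRef:                            # pre-bind to the pending PVC
          name: data-elasticsearch-logging-discovery-0
          namespace: kubesphere-logging-system
        csi:
          driver: diskplugin.csi.alibabacloud.com
          volumeHandle: d-xxxxxxxxxxxxxxxx   # placeholder: ID of a pre-created cloud disk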

      elasticsearchMasterVolumeSize sets the PVC size for the master nodes; for the data nodes you need elasticsearchDataVolumeSize.
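
      In the ClusterConfiguration spec, both fields live under common.es (matching the cc dump later in this thread):

      spec:
        common:
          es:
            elasticsearchDataVolumeSize: 20Gi    # data-node PVCs
            elasticsearchMasterVolumeSize: 20Gi  # master/discovery-node PVCs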

      If it still doesn't take effect, delete the PVCs that were already created first.

      And remember to delete the es section from the status.
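
      For example, using the PVC name from this thread:

      kubectl delete pvc data-elasticsearch-logging-discovery-0 -n kubesphere-logging-system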

        wanjunlei
        Steps taken so far: uninstalled all logging components, deleted the es section from the cc status, deleted the cluster's PVs and PVCs, and set both elasticsearchDataVolumeSize and elasticsearchMasterVolumeSize to 20Gi (verified in the cc). The PVC generated for data-elasticsearch-logging-discovery-0 still cannot be changed; it comes out at 4Gi.

        1. The default ES PV size in KubeSphere is 20Gi.

        2. Run kubectl get statefulsets.apps -n kubesphere-logging-system elasticsearch-logging-data -o jsonpath='{.spec.volumeClaimTemplates[0].spec.resources.requests.storage}' to confirm the PVC size set in the sts.

        3. exec into the installer pod and cat /kubesphere/kubesphere/elasticsearch/custom-values-elasticsearch.yaml to confirm the PVC size configured in the ElasticSearch Helm values. (A combined sketch of all three checks follows.)
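
        A sketch that runs all three checks in one go (the app=ks-installer pod label is an assumption):

        # 1. PVC sizes actually created in the logging namespace
        kubectl get pvc -n kubesphere-logging-system
        # 2. PVC size templated into the data sts
        kubectl get statefulsets.apps -n kubesphere-logging-system elasticsearch-logging-data \
          -o jsonpath='{.spec.volumeClaimTemplates[0].spec.resources.requests.storage}'; echo
        # 3. sizes rendered into the Helm values inside the installer pod
        POD=$(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}')
        kubectl exec -n kubesphere-system "$POD" -- \
          grep -n 'size:' /kubesphere/kubesphere/elasticsearch/custom-values-elasticsearch.yaml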

          wanjunlei

          1. Confirmed: the PVC in the cluster is data-elasticsearch-logging-discovery-0, 4Gi, Pending.

          2. The elasticsearch-logging-data statefulset is set to 20Gi:

            updateStrategy:
              type: OnDelete
            volumeClaimTemplates:
            - apiVersion: v1
              kind: PersistentVolumeClaim
              metadata:
                creationTimestamp: null
                name: data
              spec:
                accessModes:
                - ReadWriteOnce
                resources:
                  requests:
                    storage: 20Gi
                volumeMode: Filesystem
              status:
                phase: Pending

          3. The PVC size configured in the ElasticSearch Helm values still looks like 4Gi (see master.persistence.size below):

          bash-5.1$ cat /kubesphere/kubesphere/elasticsearch/custom-values-elasticsearch.yaml 
          # Default values for elasticsearch.
          # This is a YAML-formatted file.
          # Declare variables to be passed into your templates.
          appVersion: "6.8.22"
          
          ## Define serviceAccount names for components. Defaults to component's fully qualified name.
          ##
          serviceAccounts:
            client:
              create: false
              name:
            master:
              create: true
              name:
            data:
              create: true
              name:
          
          ## Specify if a Pod Security Policy for node-exporter must be created
          ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
          ##
          podSecurityPolicy:
            enabled: false
            annotations: {}
              ## Specify pod annotations
              ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
              ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
              ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
              ##
              # seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
              # seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
              # apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
          
          image:
            repository: kubesphere/elasticsearch-oss
            tag: 6.8.22
            pullPolicy: "IfNotPresent"
            # If specified, use these secrets to access the image
            # pullSecrets:
            #   - registry-secret
          
          initImage:
            repository: alpine
            tag: 3.14
            pullPolicy: "IfNotPresent"
          
          cluster:
            name: "elasticsearch"
            # If you want X-Pack installed, switch to an image that includes it, enable this option and toggle the features you want
            # enabled in the environment variables outlined in the README
            xpackEnable: false
            # Some settings must be placed in a keystore, so they need to be mounted in from a secret.
            # Use this setting to specify the name of the secret
            # keystoreSecret: eskeystore
            config: {}
            # Custom parameters, as string, to be added to ES_JAVA_OPTS environment variable
            additionalJavaOpts: ""
            # Command to run at the end of deployment
            bootstrapShellCommand: ""
            env:
              # IMPORTANT: https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#minimum_master_nodes
              # To prevent data loss, it is vital to configure the discovery.zen.minimum_master_nodes setting so that each master-eligible
              # node knows the minimum number of master-eligible nodes that must be visible in order to form a cluster.
              # MINIMUM_MASTER_NODES: master.replicas/2 +1 
              # EXPECTED_MASTER_NODES: master.replicas
              # EXPECTED_DATA_NODES: data.replicas
              # RECOVER_AFTER_MASTER_NODES: master.replicas/2 +1
              # RECOVER_AFTER_DATA_NODES: data.replicas/2 +1
            # List of plugins to install via dedicated init container
            # plugins: []
          
          master:
            name: master
            exposeHttp: false
            replicas: 1
            heapSize: "512m"
            # additionalJavaOpts: "-XX:MaxRAM=512m"
            persistence:
              enabled: true
              accessMode: ReadWriteOnce
              name: data
              size: 4Gi
            readinessProbe:
              httpGet:
                path: /_cluster/health?local=true
                port: 9200
              initialDelaySeconds: 5
            antiAffinity: "soft"
            nodeAffinity: {}
            nodeSelector: {}
            tolerations: []
            initResources: {}
              # limits:
              #   cpu: "25m"
              #   # memory: "128Mi"
              # requests:
              #   cpu: "25m"
              #   memory: "128Mi"
            resources:
              limits:
                cpu: 2
                memory: 1024Mi
              requests:
                cpu: 25m
                memory: 512Mi
            priorityClassName: ""
            ## (dict) If specified, apply these annotations to each master Pod
            # podAnnotations:
            #   example: master-foo
            podDisruptionBudget:
              enabled: false
              minAvailable: 1  # Same as `cluster.env.MINIMUM_MASTER_NODES`
              # maxUnavailable: 1
            updateStrategy:
              type: OnDelete
          
          data:
            name: data
            exposeHttp: true
            serviceType: ClusterIP
            loadBalancerIP: {}
            loadBalancerSourceRanges: {}
            replicas: 2
            heapSize: "1536m"
            # additionalJavaOpts: "-XX:MaxRAM=1536m"
            persistence:
              enabled: true
              accessMode: ReadWriteOnce
              name: data
              size: 20Gi
            readinessProbe:
              httpGet:
                path: /_cluster/health?local=true
                port: 9200
              initialDelaySeconds: 5
            terminationGracePeriodSeconds: 120
            antiAffinity: "soft"
            nodeAffinity: {}
            nodeSelector: {}
            tolerations: [{key: "CriticalAddonsOnly", operator: "Exists"}, {key: "dedicated", value: "log", effect: "NoSchedule"}]
            initResources: {}
              # limits:
              #   cpu: "25m"
              #   # memory: "128Mi"
              # requests:
              #   cpu: "25m"
              #   memory: "128Mi"
            resources:
              limits:
                cpu: 4
                memory: 2048Mi
              requests:
                cpu: 25m
                memory: 1536Mi
            priorityClassName: ""
            ## (dict) If specified, apply these annotations to each data Pod
            # podAnnotations:
            #   example: data-foo
            podDisruptionBudget:
              enabled: false
              # minAvailable: 1
              maxUnavailable: 1
            updateStrategy:
              type: OnDelete
            hooks:  # post-start and pre-stop hooks
              drain:  # drain the node before stopping it and re-integrate it into the cluster after start
                enabled: true
          
          ## Sysctl init container to setup vm.max_map_count
          # see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
          # and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall
          sysctlInitContainer:
            enabled: true
          ## Additional init containers
          extraInitContainers: |

          Uninstall ES:

          helm uninstall elasticsearch-logging -n kubesphere-logging-system

          Make sure the PVC data-elasticsearch-logging-discovery-0 is deleted, then clear the es section from the cc status and restart ks-installer.

          Then try reinstalling.
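
          Putting the whole reset together as one sketch (assumptions: status is editable on the main resource, and the installer runs as a Deployment named ks-installer):

          helm uninstall elasticsearch-logging -n kubesphere-logging-system
          kubectl delete pvc data-elasticsearch-logging-discovery-0 -n kubesphere-logging-system
          kubectl patch cc ks-installer -n kubesphere-system --type=json \
            -p '[{"op": "remove", "path": "/status/es"}]'
          kubectl rollout restart deployment ks-installer -n kubesphere-system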

            wanjunlei

            Still no luck; the PVC comes out at 4Gi again. Is there really no way to create the PV by hand here? If nothing else works, I'm happy to create it manually.

            apiVersion: v1
            kind: PersistentVolumeClaim
            metadata:
              annotations:
                volume.beta.kubernetes.io/storage-provisioner: diskplugin.csi.alibabacloud.com
              creationTimestamp: "2022-08-09T03:19:48Z"
              finalizers:
              - kubernetes.io/pvc-protection
              labels:
                app: elasticsearch
                component: master
                release: elasticsearch-logging
                role: master
              name: data-elasticsearch-logging-discovery-0
              namespace: kubesphere-logging-system
              resourceVersion: "291967623"
              uid: 09f42aaf-0ffd-486a-9b9a-75442782c754
            spec:
              accessModes:
              - ReadWriteOnce
              resources:
                requests:
                  storage: 4Gi
              storageClassName: alicloud-disk-efficiency
              volumeMode: Filesystem
            status:
              phase: Pending

            wanjunlei Thanks for your patience. Here is the current cc:

            apiVersion: installer.kubesphere.io/v1alpha1
            kind: ClusterConfiguration
            metadata:
              annotations:
                kubectl.kubernetes.io/last-applied-configuration: |
                  {"apiVersion":"installer.kubesphere.io/v1alpha1","kind":"ClusterConfiguration","metadata":{"annotations":{},"labels":{"version":"v3.3.0"},"name":"ks-installer","namespace":"kubesphere-system"},"spec":{"alerting":{"enabled":true},"auditing":{"enabled":true},"authentication":{"jwtSecret":""},"common":{"core":{"console":{"enableMultiLogin":true,"port":30880,"type":"NodePort"}},"es":{"basicAuth":{"enabled":false,"password":"","username":""},"elkPrefix":"logstash","externalElasticsearchHost":"","externalElasticsearchPort":"","logMaxAge":7},"gpu":{"kinds":[{"default":true,"resourceName":"nvidia.com/gpu","resourceType":"GPU"}]},"minio":{"volumeSize":"20Gi"},"monitoring":{"GPUMonitoring":{"enabled":false},"endpoint":"http://prometheus-operated.kubesphere-monitoring-system.svc:9090"},"openldap":{"enabled":true,"volumeSize":"20Gi"},"redis":{"enableHA":false,"enabled":false,"volumeSize":"2Gi"}},"devops":{"enabled":true,"jenkinsJavaOpts_MaxRAM":"2g","jenkinsJavaOpts_Xms":"1200m","jenkinsJavaOpts_Xmx":"1600m","jenkinsMemoryLim":"2Gi","jenkinsMemoryReq":"1500Mi","jenkinsVolumeSize":"8Gi"},"edgeruntime":{"enabled":false,"kubeedge":{"cloudCore":{"cloudHub":{"advertiseAddress":[""]},"service":{"cloudhubHttpsNodePort":"30002","cloudhubNodePort":"30000","cloudhubQuicNodePort":"30001","cloudstreamNodePort":"30003","tunnelNodePort":"30004"}},"enabled":false,"iptables-manager":{"enabled":true,"mode":"external"}}},"etcd":{"endpointIps":"localhost","monitoring":false,"port":2379,"tlsEnable":true},"events":{"enabled":true},"gatekeeper":{"enabled":false},"local_registry":"","logging":{"enabled":true,"logsidecar":{"enabled":true,"replicas":2}},"metrics_server":{"enabled":false},"monitoring":{"gpu":{"nvidia_dcgm_exporter":{"enabled":false}},"node_exporter":{"port":9100},"storageClass":""},"multicluster":{"clusterRole":"none"},"network":{"ippool":{"type":"none"},"networkpolicy":{"enabled":true},"topology":{"type":"none"}},"openpitrix":{"store":{"enabled":true}},"persistence":{"storageClass":""},"servicemesh":{"enabled":true,"istio":{"components":{"cni":{"enabled":true},"ingressGateways":[{"enabled":true,"name":"istio-ingressgateway"}]}}},"terminal":{"timeout":600}}}
              creationTimestamp: "2022-07-14T09:37:06Z"
              generation: 283
              labels:
                version: v3.3.0
              name: ks-installer
              namespace: kubesphere-system
              resourceVersion: "291972596"
              uid: edb1f6f7-cca0-41e9-861e-4388c8c48195
            spec:
              alerting:
                enabled: true
              auditing:
                enabled: true
              authentication:
                jwtSecret: ""
              common:
                core:
                  console:
                    enableMultiLogin: true
                    port: 30880
                    type: NodePort
                es:
                  basicAuth:
                    enabled: true
                    password: #####
                    username: ####
                  elasticsearchDataVolumeSize: 20Gi
                  elasticsearchMasterVolumeSize: 20Gi
                  elkPrefix: logstash
                  externalElasticsearchHost: ""
                  externalElasticsearchPort: ""
                  logMaxAge: 7
                gpu:
                  kinds:
                  - default: true
                    resourceName: nvidia.com/gpu
                    resourceType: GPU
                minio:
                  volumeSize: 20Gi
                monitoring:
                  GPUMonitoring:
                    enabled: false
                  endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
                openldap:
                  enabled: true
                  volumeSize: 20Gi
                redis:
                  enableHA: false
                  enabled: false
                  volumeSize: 20Gi
              devops:
                enabled: true
                jenkinsJavaOpts_MaxRAM: 4g
                jenkinsJavaOpts_Xms: 1200m
                jenkinsJavaOpts_Xmx: 1800m
                jenkinsMemoryLim: 4Gi
                jenkinsMemoryReq: 2Gi
                jenkinsVolumeSize: 20Gi
              edgeruntime:
                enabled: false
                kubeedge:
                  cloudCore:
                    cloudHub:
                      advertiseAddress:
                      - ""
                    service:
                      cloudhubHttpsNodePort: "30002"
                      cloudhubNodePort: "30000"
                      cloudhubQuicNodePort: "30001"
                      cloudstreamNodePort: "30003"
                      tunnelNodePort: "30004"
                  enabled: false
                  iptables-manager:
                    enabled: true
                    mode: external
              etcd:
                endpointIps: localhost
                monitoring: false
                port: 2379
                tlsEnable: true
              events:
                enabled: true
              gatekeeper:
                enabled: false
              local_registry: ""
              logging:
                enabled: true
                logsidecar:
                  enabled: true
                  replicas: 2
              metrics_server:
                enabled: false
              monitoring:
                gpu:
                  nvidia_dcgm_exporter:
                    enabled: false
                node_exporter:
                  port: 9100
                storageClass: ""
              multicluster:
                clusterRole: none
              network:
                ippool:
                  type: none
                networkpolicy:
                  enabled: true
                topology:
                  type: none
              openpitrix:
                store:
                  enabled: true
              persistence:
                storageClass: ""
              servicemesh:
                enabled: true
                istio:
                  components:
                    cni:
                      enabled: true
                    ingressGateways:
                    - enabled: true
                      name: istio-ingressgateway
              terminal:
                timeout: 600
            status:
              alerting:
                enabledTime: 2022-08-09T10:25:04CST
                status: enabled
              auditing:
                enabledTime: 2022-08-09T10:30:48CST
                status: enabled
              clusterId: 9ec78457-6add-4690-8bea-dbac41ce82fa-1660011937
              core:
                enabledTime: 2022-08-09T11:20:52CST
                status: enabled
                version: v3.3.0
              devops:
                enabledTime: 2022-08-09T10:23:00CST
                status: enabled
              es:
                enabledTime: 2022-08-09T11:19:49CST
                status: enabled
              events:
                enabledTime: 2022-08-09T10:31:25CST
                status: enabled
              fluentbit:
                enabledTime: 2022-08-09T10:27:02CST
                status: enabled
              logging:
                enabledTime: 2022-08-09T10:31:18CST
                status: enabled
              minio:
                enabledTime: 2022-08-09T10:26:42CST
                status: enabled
              monitoring:
                enabledTime: 2022-08-09T11:23:51CST
                status: enabled
              openldap:
                enabledTime: 2022-08-09T10:26:32CST
                status: enabled
              servicemesh:
                enabledTime: 2022-08-09T10:22:00CST
                status: enabled

            Just delete the elasticsearch-logging-discovery sts, then recreate it from that sts's exported YAML, changing the PVC size to 20Gi before applying (volumeClaimTemplates are immutable, so the sts has to be deleted and recreated).
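
            A sketch of that export-edit-recreate flow (sts name inferred from the pod name in this thread):

            kubectl get sts elasticsearch-logging-discovery -n kubesphere-logging-system -o yaml > discovery-sts.yaml
            # edit discovery-sts.yaml: set volumeClaimTemplates storage to 20Gi and strip
            # server-managed fields (status, resourceVersion, uid, creationTimestamp)
            kubectl delete sts elasticsearch-logging-discovery -n kubesphere-logging-system
            kubectl delete pvc data-elasticsearch-logging-discovery-0 -n kubesphere-logging-system
            kubectl apply -f discovery-sts.yaml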

              wanjunlei That did it. Exported the sts YAML, deleted the sts and the pending PVC, recreated it with 20Gi,
              and watched the PV and PVC get created normally, then pod elasticsearch-logging-discovery-0 come up. Thanks!