Environment Preparation

Environment details:

  • Operating system: Rocky Linux release 9.0 (Blue Onyx)
  • Private image registry: Harbor (HTTP) at http://192.168.1.3
  • Sample host IP: 192.168.1.2
  • Sample host specs: 8 vCPU, 16 GB memory, 100 GB disk

Image Preparation

Download the following images and import them into the local private Harbor registry. Create the corresponding project for each image and set it to public; for example, kubesphere/openpitrix-jobs:v3.2.1 requires a project named kubesphere. A push sketch follows the list.

kubesphere/openpitrix-jobs:v3.2.1
kubesphere/kube-apiserver:v1.23.9
kubesphere/kube-controller-manager:v1.23.9
kubesphere/kube-proxy:v1.23.9
kubesphere/kube-scheduler:v1.23.9
openebs/provisioner-localpv:3.3.0
openebs/linux-utils:3.3.0
kubesphere/ks-installer:v3.3.0
calico/kube-controllers:v3.23.2
calico/cni:v3.23.2
calico/pod2daemon-flexvol:v3.23.2
calico/node:v3.23.2
kubesphere/ks-controller-manager:v3.3.0
kubesphere/ks-apiserver:v3.3.0
kubesphere/ks-console:v3.3.0
kubesphere/ks-jenkins:v3.3.0-2.319.1
kubesphere/fluent-bit:v1.8.11
kubesphere/s2ioperator:v3.2.1
argoproj/argocd:v2.3.3
kubesphere/prometheus-config-reloader:v0.55.1
kubesphere/prometheus-operator:v0.55.1
prom/prometheus:v2.34.0
kubesphere/fluentbit-operator:v0.13.0
argoproj/argocd-applicationset:v0.4.1
kubesphere/kube-events-ruler:v0.4.0
kubesphere/kube-events-operator:v0.4.0
kubesphere/kube-events-exporter:v0.4.0
kubesphere/elasticsearch-oss:6.8.22
kubesphere/kube-state-metrics:v2.3.0
prom/node-exporter:v1.3.1
library/redis:6.2.6-alpine
dexidp/dex:v2.30.2
library/alpine:3.14
kubesphere/kubectl:v1.22.0
kubesphere/notification-manager:v1.4.0
jaegertracing/jaeger-operator:1.27
coredns/coredns:1.8.6
jaegertracing/jaeger-collector:1.27
jaegertracing/jaeger-query:1.27
jaegertracing/jaeger-agent:1.27
kubesphere/notification-tenant-sidecar:v3.2.0
kubesphere/notification-manager-operator:v1.4.0
kubesphere/pause:3.6
prom/alertmanager:v0.23.0
istio/pilot:1.11.1
kubesphere/kube-auditing-operator:v0.2.0
kubesphere/kube-auditing-webhook:v0.2.0
kubesphere/kube-rbac-proxy:v0.11.0
kubesphere/kiali-operator:v1.38.1
kubesphere/kiali:v1.38
kubesphere/metrics-server:v0.4.2
jimmidyson/configmap-reload:v0.5.0
csiplugin/snapshot-controller:v4.0.0
kubesphere/kube-rbac-proxy:v0.8.0
library/docker:19.03
kubesphere/log-sidecar-injector:1.1
osixia/openldap:1.3.0
kubesphere/k8s-dns-node-cache:1.15.12
minio/mc:RELEASE.2019-08-07T23-14-43Z
minio/minio:RELEASE.2019-08-07T01-59-21Z
mirrorgooglecontainers/defaultbackend-amd64:1.4
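
A minimal import sketch, assuming an internet-connected machine that can also reach harbor.wl.io; images.txt is a hypothetical file holding the list above, one image per line. The Docker daemon on that machine must list harbor.wl.io under insecure-registries, since this Harbor instance is served over plain HTTP.

REGISTRY=harbor.wl.io            # private Harbor address (HTTP)
docker login "${REGISTRY}"       # prompts for Harbor credentials

while read -r image; do
  [ -z "${image}" ] && continue
  # Pull from Docker Hub, retag for the private registry, and push.
  # The project (the part before the first "/", e.g. "kubesphere") must
  # already exist in Harbor and be set to public, as noted above.
  docker pull "docker.io/${image}"
  docker tag  "docker.io/${image}" "${REGISTRY}/${image}"
  docker push "${REGISTRY}/${image}"
done < images.txt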

Installation Media Preparation

  1. Download the installation media required for offline installation

kubekey-v2.3.0-rc.1-linux-amd64.tar.gz

  2. Create the working directory and upload the installation media

The directory layout is as follows (a preparation sketch follows the tree):

/work
├── kubekey
│   ├── cni
│   │   └── v0.9.1
│   │       └── amd64
│   │           └── cni-plugins-linux-amd64-v0.9.1.tgz
│   ├── crictl
│   │   └── v1.24.0
│   │       └── amd64
│   │           └── crictl-v1.24.0-linux-amd64.tar.gz
│   ├── docker
│   │   └── 20.10.8
│   │       └── amd64
│   │           └── docker-20.10.8.tgz
│   ├── etcd
│   │   └── v3.4.13
│   │       └── amd64
│   │           └── etcd-v3.4.13-linux-amd64.tar.gz
│   ├── helm
│   │   └── v3.9.0
│   │       └── amd64
│   │           └── helm-v3.9.0-linux-amd64.tar.gz
│   ├── kube
│   │   └── v1.23.9
│   │       └── amd64
│   │           ├── kubeadm
│   │           ├── kubectl
│   │           └── kubelet
└── kubekey-v2.3.0-rc.1-linux-amd64.tar.gz
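
A minimal preparation sketch for laying out this tree on the offline host. The binaries and tarballs themselves are downloaded on an internet-connected machine (upstream URLs omitted here) and copied into the matching directories.

$ mkdir -p /work/kubekey/cni/v0.9.1/amd64 \
           /work/kubekey/crictl/v1.24.0/amd64 \
           /work/kubekey/docker/20.10.8/amd64 \
           /work/kubekey/etcd/v3.4.13/amd64 \
           /work/kubekey/helm/v3.9.0/amd64 \
           /work/kubekey/kube/v1.23.9/amd64
# After copying the artifacts in, verify the layout:
$ find /work -type f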
  3. Extract part of the media
$ cd /work/kubekey/helm/v3.9.0/amd64 && tar -zxf helm-v3.9.0-linux-amd64.tar.gz && mv linux-amd64/helm . && rm -rf *linux-amd64* && cd -
$ cd /work && tar zxvf kubekey-v2.3.0-rc.1-linux-amd64.tar.gz

Configure the Local Package Repository

  1. Attach the Rocky Linux installation DVD (ISO) to the host

  2. Mount it locally

$ mount -o loop /dev/cdrom /media
  3. Configure the local repository
$ rm -rf /etc/yum.repos.d/*
$ tee /etc/yum.repos.d/media.repo <<'EOF'
[media-baseos]
name=Rocky Linux $releasever - Media - BaseOS
baseurl=file:///media/BaseOS
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Rocky-9
 
[media-appstream]
name=Rocky Linux $releasever - Media - AppStream
baseurl=file:///media/AppStream
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Rocky-9
EOF
  4. Build the metadata cache
$ dnf makecache
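
To confirm that both media repositories are visible before proceeding:

$ dnf repolist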

Installation and Deployment

  1. Install dependencies
$ dnf install conntrack socat chrony ipvsadm -y
  2. Generate the initial configuration file
$ ./kk create config --with-kubesphere v3.3.0
  3. Adjust the configuration

The sample below has been sanitized and is for illustration only. The changes are:

  • Configure the hosts and role groups (hosts, roleGroups)
  • Configure the private registry (privateRegistry, insecureRegistries)
  • Comment out controlPlaneEndpoint
  • Enable the following components:
    • alerting
    • auditing
    • devops
    • events
    • logging
    • metrics_server
    • openpitrix
    • servicemesh
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: ubuntu, password: "Qcloud@123"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
  #controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy

   # domain: lb.kubesphere.local
   # address: ""
   # port: 6443
  kubernetes:
    version: v1.23.9
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: "harbor.wl.io"
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: ["harbor.wl.io"]
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: true
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: true
    # resources: {}
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 1200m
    jenkinsJavaOpts_Xmx: 1600m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: true
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: true
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: true
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
  4. Configure Harbor host resolution

Since Harbor uses an internal (fake) domain name, a custom hosts entry is required:

$ echo "192.168.1.3 harbor.wl.io" >> /etc/hosts
  5. Create the DNS configuration file
$ touch /etc/resolv.conf

Otherwise pod sandbox creation fails with errors like the following:

Sep 15 08:48:39 node1 kubelet[35254]: E0915 08:48:39.708357   35254 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-node1_kube-system(868ca46a733b98e2a3523d80b3c75243)\" with CreatePodSandboxError: \"Failed to generate sandbox config for pod \\\"kube-scheduler-node1_kube-system(868ca46a733b98e2a3523d80b3c75243)\\\": open /etc/resolv.conf: no such file or directory\"" pod="kube-system/kube-scheduler-node1" podUID=868ca46a733b98e2a3523d80b3c75243
  6. Initialize the cluster
$ ./kk create cluster --with-kubesphere v3.3.0 -f config-sample.yaml -y
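
Installation takes a while. One way to follow progress is to tail the ks-installer log:

$ kubectl logs -n kubesphere-system deploy/ks-installer -f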
  7. Install kubectl shell completion
$ dnf install -y bash-completion
$ source /usr/share/bash-completion/bash_completion
$ source <(kubectl completion bash)
$ echo "source <(kubectl completion bash)" >> ~/.bashrc
  8. Configure internal DNS (optional)

Set the DNS server:

$ nmcli connection modify ens192 ipv4.dns "10.10.1.254"
$ nmcli connection up ens192
  • ens192: network interface name

  • 10.10.1.254: DNS server address

  9. Configure CoreDNS resolution

Add a custom host entry:

$ kubectl edit configmap coredns -n kube-system

Before:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2022-09-15T00:48:59Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "232"
  uid: 4a4a69f2-b151-4323-b5b2-ae9d2867e58f

After:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
          192.168.1.3 harbor.wl.io
          fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2022-09-15T00:48:59Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "232"
  uid: 4a4a69f2-b151-4323-b5b2-ae9d2867e58f

Reload:

$ kubectl rollout restart deploy coredns -n kube-system
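
To verify that the hosts block took effect, query CoreDNS directly at the cluster DNS service address (10.233.0.3, the address nodelocaldns forwards to in the next step). A sketch using the alpine image already mirrored into the private registry:

$ kubectl run dns-check --rm -it --restart=Never \
    --image=harbor.wl.io/library/alpine:3.14 -- nslookup harbor.wl.io 10.233.0.3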
  10. Modify nodelocaldns
$ kubectl edit cm -n kube-system nodelocaldns

Before:

apiVersion: v1
data:
  Corefile: |
    cluster.local:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
        health 169.254.25.10:9254
    }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . /etc/resolv.conf
        prometheus :9253
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":"cluster.local:53 {\n    errors\n    cache {\n        success 9984 30\n        denial 9984 5\n    }\n    reload\n    loop\n    bind 169.254.25.10\n    forward . 10.233.0.3 {\n        force_tcp\n    }\n    prometheus :9253\n    health 169.254.25.10:9254\n}\nin-addr.arpa:53 {\n    errors\n    cache 30\n    reload\n    loop\n    bind 169.254.25.10\n    forward . 10.233.0.3 {\n        force_tcp\n    }\n    prometheus :9253\n}\nip6.arpa:53 {\n    errors\n    cache 30\n    reload\n    loop\n    bind 169.254.25.10\n    forward . 10.233.0.3 {\n        force_tcp\n    }\n    prometheus :9253\n}\n.:53 {\n    errors\n    cache 30\n    reload\n    loop\n    bind 169.254.25.10\n    forward . /etc/resolv.conf\n    prometheus :9253\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"nodelocaldns","namespace":"kube-system"}}
  creationTimestamp: "2022-09-15T00:49:03Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: nodelocaldns
  namespace: kube-system
  resourceVersion: "368"
  uid: adb09cd0-b5c1-4939-98bf-b48bfb5418ce

After:

apiVersion: v1
data:
  Corefile: |
    cluster.local:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
        health 169.254.25.10:9254
    }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":"cluster.local:53 {\n    errors\n    cache {\n        success 9984 30\n        denial 9984 5\n    }\n    reload\n    loop\n    bind 169.254.25.10\n    forward . 10.233.0.3 {\n        force_tcp\n    }\n    prometheus :9253\n    health 169.254.25.10:9254\n}\nin-addr.arpa:53 {\n    errors\n    cache 30\n    reload\n    loop\n    bind 169.254.25.10\n    forward . 10.233.0.3 {\n        force_tcp\n    }\n    prometheus :9253\n}\nip6.arpa:53 {\n    errors\n    cache 30\n    reload\n    loop\n    bind 169.254.25.10\n    forward . 10.233.0.3 {\n        force_tcp\n    }\n    prometheus :9253\n}\n.:53 {\n    errors\n    cache 30\n    reload\n    loop\n    bind 169.254.25.10\n    forward . /etc/resolv.conf\n    prometheus :9253\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"nodelocaldns","namespace":"kube-system"}}
  creationTimestamp: "2022-09-15T00:49:03Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: nodelocaldns
  namespace: kube-system
  resourceVersion: "6905"
  uid: adb09cd0-b5c1-4939-98bf-b48bfb5418ce

In other words, only the .:53 {} block changes: it now forwards to the cluster DNS at 10.233.0.3 instead of the node's /etc/resolv.conf.

Reload:

$ kubectl rollout restart ds nodelocaldns -n kube-system
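
To confirm that the node-local cache now resolves the Harbor domain as well (dig is provided by the bind-utils package available from the local repository):

$ dnf install -y bind-utils
$ dig +short @169.254.25.10 harbor.wl.io   # expected to return 192.168.1.3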

At this point, installation and configuration are complete! (A quick console check follows the pod listing below.)

[root@localhost work]# kubectl get pod -A
NAMESPACE                      NAME                                                      READY   STATUS      RESTARTS        AGE
argocd                         devops-argocd-application-controller-0                    1/1     Running     0               3h59m
argocd                         devops-argocd-applicationset-controller-87f75668b-vgbwh   1/1     Running     0               3h18m
argocd                         devops-argocd-dex-server-65d7f957b8-hfrsf                 1/1     Running     0               3h33m
argocd                         devops-argocd-notifications-controller-ddc6b8968-jvg6z    1/1     Running     0               3h59m
argocd                         devops-argocd-redis-55cddfffb8-x2n27                      1/1     Running     0               3h59m
argocd                         devops-argocd-repo-server-7b5f976848-ng858                1/1     Running     0               3h59m
argocd                         devops-argocd-server-7b479f5496-qm9dc                     1/1     Running     0               3h59m
istio-system                   istiod-1-11-2-5878b46b6b-tfnwz                            1/1     Running     0               3h59m
istio-system                   jaeger-collector-5c87ff484b-9ss9p                         1/1     Running     0               3h58m
istio-system                   jaeger-operator-7b5d6948b9-mzwbl                          1/1     Running     0               3h58m
istio-system                   kiali-588d8cddb5-7tmj6                                    1/1     Running     0               3h48m
istio-system                   kiali-operator-7cb5964894-kvzd6                           1/1     Running     0               3h58m
kube-system                    calico-kube-controllers-7dbbd76bf5-w6zks                  1/1     Running     0               4h27m
kube-system                    calico-node-h7wc7                                         1/1     Running     0               4h27m
kube-system                    coredns-56c488fff9-59h2f                                  1/1     Running     0               3h13m
kube-system                    coredns-56c488fff9-66bkf                                  1/1     Running     0               3h13m
kube-system                    kube-apiserver-node1                                      1/1     Running     0               4h28m
kube-system                    kube-controller-manager-node1                             1/1     Running     0               4h28m
kube-system                    kube-proxy-4b6jj                                          1/1     Running     0               4h28m
kube-system                    kube-scheduler-node1                                      1/1     Running     0               4h28m
kube-system                    metrics-server-6ddd7b648d-f6kfn                           1/1     Running     0               4h3m
kube-system                    nodelocaldns-ch6rv                                        1/1     Running     0               3h13m
kube-system                    openebs-localpv-provisioner-55d4d7984b-gq2rv              1/1     Running     0               4h27m
kube-system                    snapshot-controller-0                                     1/1     Running     0               4h27m
kubesphere-controls-system     default-http-backend-5779498df7-9272b                     1/1     Running     0               4h25m
kubesphere-controls-system     kubectl-admin-59d497c48f-cgn5r                            1/1     Running     0               4h23m
kubesphere-devops-system       devops-27720300-5qpnb                                     0/1     Completed   0               61m
kubesphere-devops-system       devops-27720330-2krn5                                     0/1     Completed   0               31m
kubesphere-devops-system       devops-27720360-nqtcn                                     0/1     Completed   0               119s
kubesphere-devops-system       devops-apiserver-8597c6f59f-522d5                         1/1     Running     0               3h58m
kubesphere-devops-system       devops-controller-65c9c777f5-zgtp2                        1/1     Running     0               3h31m
kubesphere-devops-system       devops-jenkins-d7ddb745d-8t4bq                            1/1     Running     0               3h13m
kubesphere-devops-system       s2ioperator-0                                             1/1     Running     0               3h13m
kubesphere-logging-system      elasticsearch-logging-data-0                              1/1     Running     0               4h1m
kubesphere-logging-system      elasticsearch-logging-discovery-0                         1/1     Running     0               4h1m
kubesphere-logging-system      fluent-bit-r945s                                          1/1     Running     0               4h1m
kubesphere-logging-system      fluentbit-operator-7b67679899-srxsn                       1/1     Running     0               4h1m
kubesphere-logging-system      ks-events-exporter-766fb9854b-bz7xl                       2/2     Running     0               3h59m
kubesphere-logging-system      ks-events-operator-66899cd64d-6gn6j                       1/1     Running     0               3h59m
kubesphere-logging-system      ks-events-ruler-8464b99bf7-9ztlh                          2/2     Running     0               3h59m
kubesphere-logging-system      ks-events-ruler-8464b99bf7-f9xw7                          2/2     Running     0               3h59m
kubesphere-logging-system      kube-auditing-operator-79557b8487-d29fx                   1/1     Running     0               3h59m
kubesphere-logging-system      kube-auditing-webhook-deploy-6bcbcdd5dd-5f9df             1/1     Running     0               3h59m
kubesphere-logging-system      kube-auditing-webhook-deploy-6bcbcdd5dd-njwmt             1/1     Running     0               3h59m
kubesphere-logging-system      logsidecar-injector-deploy-79d56bd69b-b2jzt               2/2     Running     0               3h59m
kubesphere-logging-system      logsidecar-injector-deploy-79d56bd69b-qxd7w               2/2     Running     0               3h59m
kubesphere-monitoring-system   alertmanager-main-0                                       2/2     Running     0               4h24m
kubesphere-monitoring-system   kube-state-metrics-9cbd8b569-n99bd                        3/3     Running     0               4h24m
kubesphere-monitoring-system   node-exporter-gdg7h                                       2/2     Running     0               4h24m
kubesphere-monitoring-system   notification-manager-deployment-854756b49b-nl79r          2/2     Running     0               4h23m
kubesphere-monitoring-system   notification-manager-operator-679ccb4c98-nskdt            2/2     Running     0               4h23m
kubesphere-monitoring-system   prometheus-k8s-0                                          2/2     Running     0               3h57m
kubesphere-monitoring-system   prometheus-operator-55ccf9574-xr9m4                       2/2     Running     0               4h24m
kubesphere-monitoring-system   thanos-ruler-kubesphere-0                                 0/2     Pending     0               3h56m
kubesphere-system              ks-apiserver-678bbbdf95-sfldl                             1/1     Running     0               4h25m
kubesphere-system              ks-console-6df65c7df-znnms                                1/1     Running     0               4h25m
kubesphere-system              ks-controller-manager-85f4c54dcd-75762                    1/1     Running     2 (3h53m ago)   4h25m
kubesphere-system              ks-installer-7bd5bd4dc9-fwqds                             1/1     Running     0               4h3m
kubesphere-system              minio-58f86fcc4f-kmjkf                                    1/1     Running     0               4h2m
kubesphere-system              openldap-0                                                1/1     Running     1 (4h2m ago)    4h2m
kubesphere-system              openpitrix-import-job-7m66d                               0/1     Completed   0               4h
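
With all pods up, the KubeSphere console should be reachable on the NodePort configured earlier (30880). A quick check against the sample host:

$ kubectl get svc -n kubesphere-system ks-console
$ curl -sI http://192.168.1.2:30880 | head -n 1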