• Suggestion / Feedback
  • After a namespace (NS) in KubeSphere is added to a workspace, the NS is automatically deleted

When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer. Administrators reserve the right to close issues that do not follow the template.
Make sure the post is clearly formatted and readable, and use markdown code block syntax to format code blocks.
If you only spend one minute creating an issue, you cannot expect others to spend half an hour answering it.

Operating system / environment information

[root@k8s-master01 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS                     ROLES    AGE   VERSION
k8s-master01   Ready,SchedulingDisabled   <none>   51d   v1.22.0
k8s-master02   Ready,SchedulingDisabled   <none>   51d   v1.22.0
k8s-master03   Ready,SchedulingDisabled   <none>   51d   v1.22.0
k8s-node01     Ready                      <none>   51d   v1.22.0
k8s-node02     Ready                      <none>   51d   v1.22.0
k8s-node03     Ready                      <none>   51d   v1.22.0
k8s-node04     Ready                      <none>   12d   v1.22.0
k8s-node05     Ready                      <none>   12d   v1.22.0
[root@k8s-master01 ~]# kubectl get cluster
NAME        FEDERATED   PROVIDER     ACTIVE   VERSION
host        true        kubesphere   true     v1.22.0
test-host   true                              v1.22.0

1. Background / Problem Description

The project has two Kubernetes environments, production and testing. I did a minimal deployment of KubeSphere on the production Kubernetes cluster and set it as the host cluster, with the testing cluster set as a member.

Just before going live, the customer decided to add node VM resources and did so by directly cloning two existing node VMs as new nodes. Unexpectedly, this broke the entire KubeSphere cluster (nodes could not be recognized and went into a not-ready state). After fiddling with it for a long time I could not fix it, and rebooting the operating systems did not help either. In desperation I uninstalled KubeSphere from Kubernetes and reinstalled it. After the reinstall things temporarily returned to normal, and I re-added both clusters to KubeSphere.

Assuming everything was fine, I added a production namespace to a workspace. To my shock the NS was automatically deleted, and all the production data in that namespace was lost. I then created a test workspace and a test NS, added the test NS to the test workspace, and it was automatically deleted in the same way. I was completely stunned; my heart sank. The KubeSphere logs show no error messages at all. The problem has existed since July 23 and is still unresolved.

2. Fault / Problem Demonstration:

To simulate the fault, I created an NS named testv1 and added it to the ws1 workspace; the operation is shown in the screenshots below.

Check the kubefed-controller-manager logs:

#kubectl -n kube-federation-system get pod
NAME                                          READY   STATUS    RESTARTS      AGE
kubefed-admission-webhook-6f9f5dcbbf-8krrp    1/1     Running   1 (13d ago)   13d
kubefed-controller-manager-78c4dbc5f8-bbqj6   1/1     Running   0             11d
kubefed-controller-manager-78c4dbc5f8-qvsrb   1/1     Running   0             11d
#kubectl -n kube-federation-system logs -f  kubefed-controller-manager-78c4dbc5f8-qvsrb

While simulating adding the NS to the workspace, stream the kubefed-controller-manager logs in real time.
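A minimal way to follow those logs during the reproduction (a sketch; the pod names are the ones listed above, only the current leader replica actually emits the sync events, and the grep filter simply narrows the output to the test namespace):

# Follow both controller-manager replicas and keep only lines mentioning testv1
kubectl -n kube-federation-system logs -f kubefed-controller-manager-78c4dbc5f8-bbqj6 | grep -i testv1 &
kubectl -n kube-federation-system logs -f kubefed-controller-manager-78c4dbc5f8-qvsrb | grep -i testv1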

As the logs and screenshots above show, the moment the testv1 namespace was added to the workspace, the NS was automatically deleted. It is genuinely alarming.

Export the workspace / workspacetemplate YAML from KubeSphere


#kubectl get workspace ws1 -oyaml
apiVersion: tenant.kubesphere.io/v1alpha1
kind: Workspace
metadata:
  creationTimestamp: "2023-08-05T01:33:31Z"
  deletionGracePeriodSeconds: 0
  deletionTimestamp: "2023-08-05T01:33:31Z"
  finalizers:
  - finalizers.tenant.kubesphere.io
  generation: 3
  labels:
    kubefed.io/managed: "true"
  name: ws1
  resourceVersion: "107426819"
  uid: 1246fae0-86b5-4cc5-8bdc-c815792a5253
spec:
  manager: admin
status: {}

The exported workspacetemplate YAML:

[root@k8s-master01 ~]# kubectl get workspacetemplate ws1 -oyaml
apiVersion: tenant.kubesphere.io/v1alpha2
kind: WorkspaceTemplate
metadata:
  annotations:
    kubesphere.io/creator: admin
    kubesphere.io/description: 测试
  creationTimestamp: "2023-07-24T13:10:11Z"
  finalizers:
  - finalizers.workspacetemplate.kubesphere.io
  generation: 1
  labels:
    kubefed.io/managed: "false"
  name: ws1
  resourceVersion: "20088641"
  uid: 14a1b713-75a7-4a3a-8cc5-75c2b0b369f5
spec:
  placement:
    clusters:
    - name: test-host
    - name: host
  template:
    metadata: {}
    spec:
      manager: admin

In addition, kubectl get workspace shows that my workspace status is unstable: the workspace keeps disappearing, usually within seconds. I suspect this is related to the same problem.
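One simple way to capture that churn (a sketch; -w and --show-labels are standard kubectl options, ws1 is the workspace used above):

# Watch the workspace live; rapid create/delete events confirm the reconcile loop
kubectl get workspace ws1 -w --show-labels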

In closing

That is all I have been able to investigate so far. Although we went live on July 23, the problem still has not been resolved, and I no longer dare to add any NS to a workspace. Very few people around me know KubeSphere, but I am very interested in the product and am determined to get to the bottom of this. I would rather pay for the knowledge than leave the root cause unexplained and unfixed. I hope the experts in the community can point me in the right direction.

  • [account deleted]

@bixiaoyu

I tried to reproduce the problem you are seeing. From the symptoms, the workspace in the host cluster keeps being created and deleted because this host cluster is currently managed by more than one kubefed, and the two are in conflict.

One of them is the kubefed in the current cluster: because the workspace you created is placed on the host cluster, this kubefed creates the workspace on the host.

Before that, however, this host cluster must have been managed by another kubefed. Since the created workspace carries the kubefed.io/managed: "true" label, the resulting conflict causes the workspace to be removed by that other kubefed.

What you need to check now is which other cluster's kubefed, besides the one in the current cluster, is affecting this cluster.

You can verify whether another cluster's kubefed is interfering as follows:

First, stop the kubefed controller in the current cluster (scaling the deployment to 0 is enough; see the command sketch further below),

then manually create a workspace carrying the kubefed.io/managed: "true" label:

apiVersion: tenant.kubesphere.io/v1alpha1
kind: Workspace
metadata:
  name: test-xxxx
  labels:
    kubefed.io/managed: "true"
spec:
  manager: admin
status: {}

If you find that this workspace is still deleted, you can conclude that another kubefed is affecting this cluster.

Similarly, you can set the label on the workspace above to kubefed.io/managed: "false" and create it again. If that workspace is no longer deleted, you can be sure the conflict comes from another kubefed.
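Putting the two checks together, a minimal sketch of the verification (the deployment name and replica count match the pods listed earlier; the manifest file name is just a placeholder for the YAML above):

# 1. Stop the local kubefed controller so only an external kubefed can touch workspaces
kubectl -n kube-federation-system scale deployment kubefed-controller-manager --replicas=0

# 2. Create the probe workspace with kubefed.io/managed: "true" and watch what happens
kubectl apply -f workspace-managed-true.yaml   # placeholder file containing the manifest above
kubectl get workspace test-xxxx -w             # deleted again => another kubefed is interfering

# 3. Repeat with kubefed.io/managed: "false"; if that one survives, the conflict is confirmed

# 4. Restore the local controller afterwards
kubectl -n kube-federation-system scale deployment kubefed-controller-manager --replicas=2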

How to fix it: find the kubefed that is affecting the current host cluster and remove the current cluster from it.
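Once the stale host cluster has been identified, a rough sketch of that removal (the kubeconfig path is a placeholder; run this against the stale host, not the affected one; removing the member cluster through KubeSphere's own multi-cluster management, if that UI is still reachable, is the cleaner route):

# On the STALE host cluster, list its registered clusters and delete the entry
# that points at the affected host cluster
kubectl --kubeconfig /path/to/stale-host.kubeconfig -n kube-federation-system \
  get kubefedclusters.core.kubefed.io
kubectl --kubeconfig /path/to/stale-host.kubeconfig -n kube-federation-system \
  delete kubefedclusters.core.kubefed.io <affected-cluster-name>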

    [account deleted] Thank you for taking the time to reply. I have two questions about the description above:

    1. How do I find the kubefed that manages the current host cluster and remove it? I tried kubectl get kubefed, but that is not a resource type, and searching Baidu did not turn up any relevant method either. Please advise.

    2. Will removing the kubefed affect the existing production workloads? Is that risk controllable?

    @bixiaoyu The logic here is as follows. The multi-cluster management feature in KubeSphere 3.x uses the kubefed component. On the problematic host cluster, kubectl get kubefedclusters.core.kubefed.io -n kube-federation-system should show two clusters: the current host cluster and the member cluster. Both the host cluster (you can think of it as the kubefed deployed in the cluster managing itself) and the member cluster are managed by the kubefed deployed in the host cluster and administered through KubeSphere.

    If you have run the verification from my earlier reply, creating one workspace with kubefed.io/managed: "true" and one with kubefed.io/managed: "false", and only the "true" one was removed, then you can conclude that some kubefed other than the one in the current host cluster is affecting this host cluster. So you need to determine whether this cluster was previously managed as a member cluster by another cluster and was never correctly removed from that other host cluster; that is what causes the conflict.

    1. On any other candidate cluster (one that has kubefed installed and may have had this problematic host cluster registered), run kubectl get kubefedclusters.core.kubefed.io -n kube-federation-system -o yaml and use the API endpoints to check whether the affected host cluster is listed (see the sketch after this list).
    2. Alternatively, enable auditing on the affected host cluster; when the kubefed.io/managed: "true" workspace gets deleted during the test, the corresponding delete audit log will reveal the caller's IP.
    3. You can also re-issue the certificates of the current host cluster (kubeadm certs renew), revoking the old credentials so that unknown clients can no longer affect this cluster.
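    A quick way to compare just the names and API endpoints on each candidate cluster (a sketch; custom-columns is a standard kubectl output option, the column names are arbitrary):

    kubectl -n kube-federation-system get kubefedclusters.core.kubefed.io \
      -o custom-columns=NAME:.metadata.name,ENDPOINT:.spec.apiEndpoint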

      [account deleted]

      Following your guidance I checked and indeed found an old host cluster (produce-trt); that is the previous host. After the earlier failure I reinstalled and renamed the cluster to "host", so the current environment actually contains three clusters, just as you suspected. I now roughly understand the root cause: after the earlier failure I only ran the uninstall script from the official docs, so the old environment was never fully cleaned up and the old host kept running. With the new host also running, the two hosts conflicted, which ultimately caused the NS to be deleted when added to a workspace.

      # kubectl get kubefedclusters.core.kubefed.io -n kube-federation-system

      A guess: could things be restored to normal simply by deleting the old cluster directly?

      Below is the YAML I exported. Strangely, the API endpoint of the current host cluster turns out to be the in-cluster Kubernetes service address (apiEndpoint: https://192.168.0.1:443), whereas it should normally be 192.168.65.236:8443.

      #kubectl get kubefedclusters.core.kubefed.io -n kube-federation-system -o yaml

      apiVersion: v1
      items:
      - apiVersion: core.kubefed.io/v1beta1
        kind: KubeFedCluster
        metadata:
          creationTimestamp: "2023-07-23T00:42:06Z"
          generation: 1
          labels:
            cluster-role.kubesphere.io/host: ""
            cluster.kubesphere.io/visibility: private
            kubesphere.io/managed: "true"
          name: host
          namespace: kube-federation-system
          resourceVersion: "123141071"
          uid: da2629b9-7fa1-4cf3-a425-37c6a3b005f5
        spec:
          apiEndpoint: https://192.168.0.1:443
          caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1RENDQXN5Z0F3SUJBZ0lVWFMwRjRua1A0UjVrMFZodnZqTlpRb0Nxc0RJd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2R6RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEV6QVJCZ05WQkFvVENrdDFZbVZ5Ym1WMFpYTXhHakFZQmdOVkJBc1RFVXQxWW1WeWJtVjBaWE10CmJXRnVkV0ZzTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1DQVhEVEl6TURZeE5EQXpNRGd3TUZvWUR6SXgKTWpNd05USXhNRE13T0RBd1dqQjNNUXN3Q1FZRFZRUUdFd0pEVGpFUU1BNEdBMVVFQ0JNSFFtVnBhbWx1WnpFUQpNQTRHQTFVRUJ4TUhRbVZwYW1sdVp6RVRNQkVHQTFVRUNoTUtTM1ZpWlhKdVpYUmxjekVhTUJnR0ExVUVDeE1SClMzVmlaWEp1WlhSbGN5MXRZVzUxWVd3eEV6QVJCZ05WQkFNVENtdDFZbVZ5Ym1WMFpYTXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRREhGTHhySVBGMzc5SGs5L2FKWkxFeTV1VEh0aVFDOUNwYgphcm85RFR6cG96VG1MS2tZQWlrc2xMVWhxdW04ZmpSUGtIMDByWWU3b3FPNzBVbkZIUG03cTZMVHI0TVI3TXFPCk1pNUh1TThhT0NCL0FrYWFHK2hYdVdHZ2xSM0JEK1BUMXdCeVcyMVA3RjhGQ01kRXdLbStZMC9Dc0xlMFFjbWEKK2J2bTRqc21yYWZ2VXJjUmJwUlN3Qm9KT1ZudGxuYlU0RGtPNzJqNHZQVklvd1Y0dVZBN0lFSlliYWJXVm5kKwplYTIyMVA2WStRNE9CYXNBa0VrcW9TZnN4cDZtbHFWaUxldmxQT1lRNjgvV1lHdXhQNDV1WWtwRW1zRlRvUk1DCjBIWTVCV2pzSnQ3aEQ5clppQ3ZrVko0c2tGVVpPYWEyVzY3UC81dnVFQlRJODJrRFluOVpBZ01CQUFHalpqQmsKTUE0R0ExVWREd0VCL3dRRUF3SUJCakFTQmdOVkhSTUJBZjhFQ0RBR0FRSC9BZ0VDTUIwR0ExVWREZ1FXQkJSdQpiOW1PcWVyNkZzNk40d0pXRFpzdWUrVXF5ekFmQmdOVkhTTUVHREFXZ0JSdWI5bU9xZXI2RnM2TjR3SldEWnN1CmUrVXF5ekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBcGF3VE4yKzNQOUUrdWs0UHBqTVhtVG5nK21sTUhQbFYKaG1vampNNitRb04xN2pDY2lXbTl2OGl3N1ZmRUVTN25rRGQ2cjJTRDhlQWF2Q1JEMzZ4SUttc0o3Y2NwTGFLMQpKMlBReHNsWExWeGdwRml1MmRjYyt2N3MzSmU0TVhOU3NkM0owRUUyNWNkUENLNk5ndGtjR0NJU1pUSUxnYzZiCkRvcVJVSjh1VVE0bTdOSVN5VzNWd3MvUEhvVlFaZE1ETTBoZFVkMTh2cnd4WGdoWjVZc3JYYTQ4ZkpOMVNUbGIKcXpZOVMzcVROdTBwT2ZqSWRoai93NUxFakdPUUE4N3l3dzFyWFBjR3JTWnZBQjlmZWEyMlBSellHaVdsamtPKwp5dmRmMG5yZEdBWlAzVTBpWHVTVW5tNlFyYXZmNSsyYVcrYWhHa1h3L2dIdVFpMkhkYXl1ZHc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
          proxyURL: ""
          secretRef:
            name: host-secret
        status:
          conditions:
          - lastProbeTime: "2023-08-07T03:30:28Z"
            lastTransitionTime: "2023-07-24T09:35:07Z"
            message: /healthz responded with ok
            reason: ClusterReady
            status: "True"
            type: Ready
      - apiVersion: core.kubefed.io/v1beta1
        kind: KubeFedCluster
        metadata:
          creationTimestamp: "2023-06-15T06:40:07Z"
          generation: 1
          labels:
            cluster-role.kubesphere.io/host: ""
            cluster.kubesphere.io/group: production
          name: produce-trt
          namespace: kube-federation-system
          resourceVersion: "123141081"
          uid: b6394397-2112-46b6-a63a-e9afc9f0429b
        spec:
          apiEndpoint: https://192.168.65.236:8443
          caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1RENDQXN5Z0F3SUJBZ0lVWFMwRjRua1A0UjVrMFZodnZqTlpRb0Nxc0RJd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2R6RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEV6QVJCZ05WQkFvVENrdDFZbVZ5Ym1WMFpYTXhHakFZQmdOVkJBc1RFVXQxWW1WeWJtVjBaWE10CmJXRnVkV0ZzTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1DQVhEVEl6TURZeE5EQXpNRGd3TUZvWUR6SXgKTWpNd05USXhNRE13T0RBd1dqQjNNUXN3Q1FZRFZRUUdFd0pEVGpFUU1BNEdBMVVFQ0JNSFFtVnBhbWx1WnpFUQpNQTRHQTFVRUJ4TUhRbVZwYW1sdVp6RVRNQkVHQTFVRUNoTUtTM1ZpWlhKdVpYUmxjekVhTUJnR0ExVUVDeE1SClMzVmlaWEp1WlhSbGN5MXRZVzUxWVd3eEV6QVJCZ05WQkFNVENtdDFZbVZ5Ym1WMFpYTXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRREhGTHhySVBGMzc5SGs5L2FKWkxFeTV1VEh0aVFDOUNwYgphcm85RFR6cG96VG1MS2tZQWlrc2xMVWhxdW04ZmpSUGtIMDByWWU3b3FPNzBVbkZIUG03cTZMVHI0TVI3TXFPCk1pNUh1TThhT0NCL0FrYWFHK2hYdVdHZ2xSM0JEK1BUMXdCeVcyMVA3RjhGQ01kRXdLbStZMC9Dc0xlMFFjbWEKK2J2bTRqc21yYWZ2VXJjUmJwUlN3Qm9KT1ZudGxuYlU0RGtPNzJqNHZQVklvd1Y0dVZBN0lFSlliYWJXVm5kKwplYTIyMVA2WStRNE9CYXNBa0VrcW9TZnN4cDZtbHFWaUxldmxQT1lRNjgvV1lHdXhQNDV1WWtwRW1zRlRvUk1DCjBIWTVCV2pzSnQ3aEQ5clppQ3ZrVko0c2tGVVpPYWEyVzY3UC81dnVFQlRJODJrRFluOVpBZ01CQUFHalpqQmsKTUE0R0ExVWREd0VCL3dRRUF3SUJCakFTQmdOVkhSTUJBZjhFQ0RBR0FRSC9BZ0VDTUIwR0ExVWREZ1FXQkJSdQpiOW1PcWVyNkZzNk40d0pXRFpzdWUrVXF5ekFmQmdOVkhTTUVHREFXZ0JSdWI5bU9xZXI2RnM2TjR3SldEWnN1CmUrVXF5ekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBcGF3VE4yKzNQOUUrdWs0UHBqTVhtVG5nK21sTUhQbFYKaG1vampNNitRb04xN2pDY2lXbTl2OGl3N1ZmRUVTN25rRGQ2cjJTRDhlQWF2Q1JEMzZ4SUttc0o3Y2NwTGFLMQpKMlBReHNsWExWeGdwRml1MmRjYyt2N3MzSmU0TVhOU3NkM0owRUUyNWNkUENLNk5ndGtjR0NJU1pUSUxnYzZiCkRvcVJVSjh1VVE0bTdOSVN5VzNWd3MvUEhvVlFaZE1ETTBoZFVkMTh2cnd4WGdoWjVZc3JYYTQ4ZkpOMVNUbGIKcXpZOVMzcVROdTBwT2ZqSWRoai93NUxFakdPUUE4N3l3dzFyWFBjR3JTWnZBQjlmZWEyMlBSellHaVdsamtPKwp5dmRmMG5yZEdBWlAzVTBpWHVTVW5tNlFyYXZmNSsyYVcrYWhHa1h3L2dIdVFpMkhkYXl1ZHc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
          proxyURL: ""
          secretRef:
            name: produce-trt-secret
        status:
          conditions:
          - lastProbeTime: "2023-08-07T03:30:28Z"
            lastTransitionTime: "2023-07-27T17:14:12Z"
            message: /healthz responded with ok
            reason: ClusterReady
            status: "True"
            type: Ready
      - apiVersion: core.kubefed.io/v1beta1
        kind: KubeFedCluster
        metadata:
          creationTimestamp: "2023-07-08T07:10:45Z"
          generation: 1
          labels:
            cluster.kubesphere.io/group: testing
          name: test-host
          namespace: kube-federation-system
          resourceVersion: "123141073"
          uid: 6ebe8158-90f4-4234-981a-4c6e389bb111
        spec:
          apiEndpoint: https://192.168.40.236:8443
          caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1RENDQXN5Z0F3SUJBZ0lVT0g3UHozZE52MFJvL1pwVUhFTXMvMjBGZVZjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2R6RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEV6QVJCZ05WQkFvVENrdDFZbVZ5Ym1WMFpYTXhHakFZQmdOVkJBc1RFVXQxWW1WeWJtVjBaWE10CmJXRnVkV0ZzTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1DQVhEVEl6TURjd09EQXlOREF3TUZvWUR6SXgKTWpNd05qRTBNREkwTURBd1dqQjNNUXN3Q1FZRFZRUUdFd0pEVGpFUU1BNEdBMVVFQ0JNSFFtVnBhbWx1WnpFUQpNQTRHQTFVRUJ4TUhRbVZwYW1sdVp6RVRNQkVHQTFVRUNoTUtTM1ZpWlhKdVpYUmxjekVhTUJnR0ExVUVDeE1SClMzVmlaWEp1WlhSbGN5MXRZVzUxWVd3eEV6QVJCZ05WQkFNVENtdDFZbVZ5Ym1WMFpYTXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRHFrazFTdXNJT0h3aGJOdUtPOXVabWNWdWl0VEhmYXM5SgprR2tYcDdTWVhVWUxrQXZ5NGFUTHpCcTVQT0lRbXhnUXVOYng4SGY0VjdOZGluZW1FS1pIS2dDYlc1bFBJdVFlClZQMDlCcUVIL1ZvSk00SDNWUjhTM1g1UFBhS0tRSVlDQUxXUWh2U1ZvOTBPbFVFTk5Tck1ZbkdMSWMwcGI1Y3kKMUJGaFIvck1kYmNMQkpvQUFMMFAxNlQyTjI3THBSYWxUOFE5Q3FoMzlQUklTM3JqdGh6YXE3MlgzUUZNNjV4RApWc2t5alhqaGMzMlNxbEJxaEVaSWtaYy9rblFxZWhGTy83U1c2WXFuVVpkZ3RCT0IvZ1FsSlVZNTEveGVkNkt0Cit4MkxXNURxQTVLSDhMbEZMQmxWUGVLWXNydFh6YVdaVTN1Z0djY0RaaHU5WE5aVGJiSEJBZ01CQUFHalpqQmsKTUE0R0ExVWREd0VCL3dRRUF3SUJCakFTQmdOVkhSTUJBZjhFQ0RBR0FRSC9BZ0VDTUIwR0ExVWREZ1FXQkJRSwo3TDhDM2x3VVRZTzNxV3R6dXVuT1R5Wjl1VEFmQmdOVkhTTUVHREFXZ0JRSzdMOEMzbHdVVFlPM3FXdHp1dW5PClR5Wjl1VEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBTEw5ZndGS3IrTmk1djlBdTlJN1UyRC9PY1h5Zno0YlQKaXhqOTVoMi93M24vSGVrOHZVY1RSb2dXZU9lakU3cmN2TUJFMitLUmRabDVnY3I1NThGK0FBb1pOSktGK2JHVwp2SnkvOStxSHN5MXhLcG5pSGtLTVRUR2JHNWpwUHp0R0R4aXFuYjhmWGpxY2JvblNjZjZRYXlTbk8zRkxUbUdZCkh0R1lnSk81TU9XTzI3QnhsYVFKNzN5UEZhRDJCS2ZaVklQQThWWGNkV1NhcWRsd0dxdTR5K3BXb214MGM2U0wKZkRJNXNxTkNhSlpjZExRQ3F0RDM5QStFUVVEMHYzdTVST3JqdzQvVGYxbXFONUZON1p1QW13NGw1R2hOekltNQpsOXhpTXBJVnZlcGZpTjFDdXFwbFJjYkdhNE90U1g2VkRuOURPMkNkdm5RT2U1TzlZOWJvWVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
          proxyURL: ""
          secretRef:
            name: test-host-secret
        status:
          conditions:
          - lastProbeTime: "2023-08-07T03:30:28Z"
            lastTransitionTime: "2023-07-25T03:48:28Z"
            message: /healthz responded with ok
            reason: ClusterReady
            status: "True"
            type: Ready
      kind: List
      metadata:
        resourceVersion: ""
        selfLink: ""