Error encountered: The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

Fix:

sudo vi /var/lib/kubelet/config.yaml

Change the following, then restart kubelet (sketch below):

Before:

clusterDNS:

- 169.254.25.10

After:

clusterDNS:

- 10.233.0.10
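
After saving the change, kubelet has to pick up the new config; a minimal sketch, assuming kubelet runs under systemd:

# Apply the edited /var/lib/kubelet/config.yaml and confirm the new value
sudo systemctl restart kubelet
sudo grep -A1 clusterDNS /var/lib/kubelet/config.yaml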

    @gs80140 The registry.k8s.io/pause:3.8 image is defined in containerd's config file. If Harbor is installed on this node, the containerd config file will not be created. If you have other nodes, copy /etc/containerd/config.toml over from one of them and restart containerd, then run kk delete cluster -f xxx.yaml and re-run the install (a sketch of these steps follows below).
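
      A rough sketch of that suggestion, assuming the other node is reachable as node2 (a placeholder) and xxx.yaml is your kk config file:

      # Copy a known-good containerd config from another node and restart containerd
      sudo scp root@node2:/etc/containerd/config.toml /etc/containerd/config.toml
      sudo systemctl restart containerd
      # Clean up the failed install, then retry
      ./kk delete cluster -f xxx.yaml
      ./kk create cluster -f xxx.yaml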

      Manually download the image and retag it instead:

      docker pull registry.aliyuncs.com/google_containers/pause:3.8

      docker tag registry.aliyuncs.com/google_containers/pause:3.8 registry.k8s.io/pause:3.8
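
      Note that on a node where kubelet talks to containerd directly, an image pulled with docker is not visible to the CRI runtime; a hedged equivalent using ctr and containerd's k8s.io namespace (the namespace kubelet uses by default) would be:

      # Pull the pause image into containerd's store and retag it as registry.k8s.io/pause:3.8
      sudo ctr -n k8s.io images pull registry.aliyuncs.com/google_containers/pause:3.8
      sudo ctr -n k8s.io images tag registry.aliyuncs.com/google_containers/pause:3.8 registry.k8s.io/pause:3.8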

      Cauchy

      This machine does have containerd; it just has no pause image configured. Harbor is installed on a different machine, because installing it on the same machine would affect Harbor.

      gs80140 This file has the same mistake; fix it as well: sudo vi /etc/kubernetes/kubeadm-config.yaml

      @gs80140 Then configure the containerd sandbox image on these nodes to point at the image in the offline registry (see the sketch below).
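
      One way to do that is the sandbox_image field under the CRI plugin section of /etc/containerd/config.toml; a sketch, assuming the offline registry and image path follow the dockerhub.kubekey.local/kubesphereio naming used elsewhere in this thread:

      # Point the CRI plugin's pause image at the offline registry, then restart containerd
      # (the key lives under [plugins."io.containerd.grpc.v1.cri"] in containerd 1.x)
      sudo sed -i 's#^\([[:space:]]*sandbox_image = \).*#\1"dockerhub.kubekey.local/kubesphereio/pause:3.8"#' /etc/containerd/config.toml
      sudo systemctl restart containerd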

      gs80140

      sudo vi /var/lib/kubelet/config.yaml

      sudo vi kubeadm-config.yaml

      Change the following in both files (a quick check follows below):

      Before:

      clusterDNS:

      - 169.254.25.10

      After:

      clusterDNS:

      - 10.233.0.10
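
      A quick way to confirm both files now carry the same value (paths taken from the commands above):

      # Show the clusterDNS entries in the kubelet config and the kubeadm config
      sudo grep -n -A1 clusterDNS /var/lib/kubelet/config.yaml /etc/kubernetes/kubeadm-config.yaml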

      Which component owns port 8080, and why is nothing listening on it?

      14:54:35 CST stdout: [master]
      E1211 14:54:35.007262 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      E1211 14:54:35.009045 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      E1211 14:54:35.009898 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      E1211 14:54:35.011881 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      E1211 14:54:35.012490 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      The connection to the server localhost:8080 was refused - did you specify the right host or port?
      14:54:35 CST message: [master]
      get kubernetes cluster info failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"
      E1211 14:54:35.007262 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      E1211 14:54:35.009045 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      E1211 14:54:35.009898 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      E1211 14:54:35.011881 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      E1211 14:54:35.012490 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      The connection to the server localhost:8080 was refused - did you specify the right host or port?: Process exited with status 1
      14:54:35 CST retry: [master]
      14:54:40 CST stdout: [master]

      API Server default ports

      1. Secure port (--secure-port)

        • Default: 6443
        • This is the port normally used; it serves HTTPS and requires authentication and authorization.

      2. Insecure port (--insecure-port)

        • Default: 8080

        • Since Kubernetes 1.20 the default is 0 (disabled).

        • Not recommended; use only the secure port.
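
      The localhost:8080 errors above are what kubectl prints when it finds no kubeconfig and falls back to the long-disabled insecure port; a minimal sketch of pointing it at the admin kubeconfig, assuming a standard kubeadm layout:

      # kubectl defaults to localhost:8080 only when no kubeconfig is configured;
      # copy the kubeadm-generated admin credentials into place instead
      mkdir -p $HOME/.kube
      sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
      kubectl get nodes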

      cat /etc/kubernetes/manifests/kube-apiserver.yaml

      apiVersion: v1
      kind: Pod
      metadata:
        annotations:
          kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 172.16.21.35:6443
        creationTimestamp: null
        labels:
          component: kube-apiserver
          tier: control-plane
        name: kube-apiserver
        namespace: kube-system
      spec:
        containers:
        - command:
          - kube-apiserver
          - --advertise-address=172.16.21.35
          - --allow-privileged=true
          - --authorization-mode=Node,RBAC
          - --bind-address=0.0.0.0
          - --client-ca-file=/etc/kubernetes/pki/ca.crt
          - --enable-admission-plugins=NodeRestriction
          - --enable-bootstrap-token-auth=true
          - --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem
          - --etcd-certfile=/etc/ssl/etcd/ssl/node-master.pem
          - --etcd-keyfile=/etc/ssl/etcd/ssl/node-master-key.pem
          - --etcd-servers=https://172.16.21.35:2379
          - --feature-gates=RotateKubeletServerCertificate=true
          - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
          - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
          - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
          - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
          - --requestheader-allowed-names=front-proxy-client
          - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
          - --requestheader-extra-headers-prefix=X-Remote-Extra-
          - --requestheader-group-headers=X-Remote-Group
          - --requestheader-username-headers=X-Remote-User
          - --secure-port=6443
          - --service-account-issuer=https://kubernetes.default.svc.cluster.local
          - --service-account-key-file=/etc/kubernetes/pki/sa.pub
          - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
          - --service-cluster-ip-range=10.233.0.0/18
          - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
          - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
          image: dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 8
            httpGet:
              host: 172.16.21.35
              path: /livez
              port: 6443
              scheme: HTTPS
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 15
          name: kube-apiserver
          readinessProbe:
            failureThreshold: 3
            httpGet:
              host: 172.16.21.35
              path: /readyz
              port: 6443
              scheme: HTTPS
            periodSeconds: 1
            timeoutSeconds: 15
          resources:
            requests:
              cpu: 250m
          startupProbe:
            failureThreshold: 24
            httpGet:
              host: 172.16.21.35
              path: /livez
              port: 6443
              scheme: HTTPS
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 15
          volumeMounts:
          - mountPath: /etc/ssl/certs
            name: ca-certs
            readOnly: true
          - mountPath: /etc/ca-certificates
            name: etc-ca-certificates
            readOnly: true
          - mountPath: /etc/pki
            name: etc-pki
            readOnly: true
          - mountPath: /etc/ssl/etcd/ssl
            name: etcd-certs-0
            readOnly: true
          - mountPath: /etc/kubernetes/pki
            name: k8s-certs
            readOnly: true
          - mountPath: /usr/local/share/ca-certificates
            name: usr-local-share-ca-certificates
            readOnly: true
          - mountPath: /usr/share/ca-certificates
            name: usr-share-ca-certificates
            readOnly: true
        hostNetwork: true
        priority: 2000001000
        priorityClassName: system-node-critical
        securityContext:
          seccompProfile:
            type: RuntimeDefault
        volumes:
        - hostPath:
            path: /etc/ssl/certs
            type: DirectoryOrCreate
          name: ca-certs
        - hostPath:
            path: /etc/ca-certificates
            type: DirectoryOrCreate
          name: etc-ca-certificates
        - hostPath:
            path: /etc/pki
            type: DirectoryOrCreate
          name: etc-pki
        - hostPath:
            path: /etc/ssl/etcd/ssl
            type: DirectoryOrCreate
          name: etcd-certs-0
        - hostPath:
            path: /etc/kubernetes/pki
            type: DirectoryOrCreate
          name: k8s-certs
        - hostPath:
            path: /usr/local/share/ca-certificates
            type: DirectoryOrCreate
          name: usr-local-share-ca-certificates
        - hostPath:
            path: /usr/share/ca-certificates
            type: DirectoryOrCreate
          name: usr-share-ca-certificates
      status: {}

      gs80140
      Can the pause image be pulled normally now? If so, run ./kk delete cluster -f xxx.yaml first to clean up the environment, then re-run create cluster and see what happens.

      Dec 11 16:48:36 master kubelet[43735]: E1211 16:48:36.464582 43735 remote_image.go:171] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-scheduler/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12"
      Dec 11 16:48:36 master kubelet[43735]: E1211 16:48:36.464615 43735 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-scheduler/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12"
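
      The x509 error means containerd does not trust the certificate presented by dockerhub.kubekey.local. A hedged sketch of teaching the CRI registry config about the registry's CA (the ca_file path is an assumption for illustration; use wherever your registry CA actually lives, and only append this section if it is not already present):

      # Make containerd trust the private registry's certificate (CA path is an assumption)
      printf '%s\n' \
        '[plugins."io.containerd.grpc.v1.cri".registry.configs."dockerhub.kubekey.local".tls]' \
        '  ca_file = "/etc/docker/certs.d/dockerhub.kubekey.local/ca.crt"' \
        | sudo tee -a /etc/containerd/config.toml
      sudo systemctl restart containerd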

      Pull the images manually instead:

      docker pull dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12

      docker pull dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12

      The error then changes to:

      Dec 11 16:55:45 master kubelet[43735]: E1211 16:55:45.453268 43735 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12\\\"\"" pod="kube-system/kube-apiserver-master" podUID=9eb830c8cce30bfcab1dc46488c4c23e
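
      The back-off is expected here: images pulled with docker land in dockerd's store, not containerd's, so the kubelet still cannot see them. A couple of hedged checks of what the CRI runtime actually has:

      # List the images containerd/CRI can actually use
      sudo crictl images | grep kube-apiserver
      sudo ctr -n k8s.io images ls | grep kube-apiserver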

      Gave up on the offline install and switched to an online install:

      export KKZONE=cn

      ./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.4.1

      Much better than the offline install indeed; at least all the Docker images got created.

      2 months later

      Cauchy
      Hi, I did the same thing: copied /etc/containerd/config.toml over to the node in the offline environment, ran delete cluster, and reinstalled, but the images still are not being pulled.

      h00283@coverity-ms:~/kubesphere$ sudo systemctl status containerd
      ● containerd.service - containerd container runtime
           Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
           Active: active (running) since Wed 2025-02-12 09:01:18 CST; 12min ago
             Docs: https://containerd.io
         Main PID: 57855 (containerd)
            Tasks: 20
           Memory: 32.1M
              CPU: 3.259s
           CGroup: /system.slice/containerd.service
                   └─57855 /usr/bin/containerd
      
      Feb 12 09:10:18 coverity-ms containerd[57855]: time="2025-02-12T09:10:18.030777307+08:00" level=info msg="PullImage \"kubesphere/kube-scheduler:v1.28.0\""
      Feb 12 09:10:53 coverity-ms containerd[57855]: time="2025-02-12T09:10:53.048661264+08:00" level=error msg="PullImage \"kubesphere/kube-scheduler:v1.28.0\" failed" error="failed to pull >
      Feb 12 09:10:53 coverity-ms containerd[57855]: time="2025-02-12T09:10:53.048737870+08:00" level=info msg="stop pulling image docker.io/kubesphere/kube-scheduler:v1.28.0: active requests>
      Feb 12 09:10:53 coverity-ms containerd[57855]: time="2025-02-12T09:10:53.067791408+08:00" level=info msg="PullImage \"kubesphere/kube-scheduler:v1.28.0\""
      Feb 12 09:11:47 coverity-ms containerd[57855]: time="2025-02-12T09:11:47.921987128+08:00" level=error msg="PullImage \"kubesphere/kube-scheduler:v1.28.0\" failed" error="failed to pull >
      Feb 12 09:11:47 coverity-ms containerd[57855]: time="2025-02-12T09:11:47.922058874+08:00" level=info msg="stop pulling image docker.io/kubesphere/kube-scheduler:v1.28.0: active requests>
      Feb 12 09:11:47 coverity-ms containerd[57855]: time="2025-02-12T09:11:47.959575620+08:00" level=info msg="PullImage \"kubesphere/kube-proxy:v1.28.0\""
      Feb 12 09:12:42 coverity-ms containerd[57855]: time="2025-02-12T09:12:42.900990255+08:00" level=error msg="PullImage \"kubesphere/kube-proxy:v1.28.0\" failed" error="failed to pull and >
      Feb 12 09:12:42 coverity-ms containerd[57855]: time="2025-02-12T09:12:42.901048488+08:00" level=info msg="stop pulling image docker.io/kubesphere/kube-proxy:v1.28.0: active requests=0, >
      Feb 12 09:12:42 coverity-ms containerd[57855]: time="2025-02-12T09:12:42.920106926+08:00" level=info msg="PullImage \"kubesphere/kube-proxy:v1.28.0\""
      lines 1-21/21 (END)
      h00283@coverity-ms:~/kubesphere$ sudo systemctl status containerd
      ● containerd.service - containerd container runtime
           Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
           Active: active (running) since Wed 2025-02-12 09:01:18 CST; 24min ago
             Docs: https://containerd.io
         Main PID: 57855 (containerd)
            Tasks: 20
           Memory: 28.1M
              CPU: 6.470s
           CGroup: /system.slice/containerd.service
                   └─57855 /usr/bin/containerd
      
      Feb 12 09:25:08 coverity-ms containerd[57855]: time="2025-02-12T09:25:08.007196587+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86>
      Feb 12 09:25:08 coverity-ms containerd[57855]: time="2025-02-12T09:25:08.007258612+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
      Feb 12 09:25:14 coverity-ms containerd[57855]: time="2025-02-12T09:25:14.006521370+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
      Feb 12 09:25:14 coverity-ms containerd[57855]: time="2025-02-12T09:25:14.006963140+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms>
      Feb 12 09:25:14 coverity-ms containerd[57855]: time="2025-02-12T09:25:14.007017828+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
      Feb 12 09:25:19 coverity-ms containerd[57855]: time="2025-02-12T09:25:19.006177756+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86a>
      Feb 12 09:25:22 coverity-ms containerd[57855]: time="2025-02-12T09:25:22.006859372+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
      Feb 12 09:25:22 coverity-ms containerd[57855]: time="2025-02-12T09:25:22.007282985+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b>
      Feb 12 09:25:22 coverity-ms containerd[57855]: time="2025-02-12T09:25:22.007336669+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
      Feb 12 09:25:25 coverity-ms containerd[57855]: time="2025-02-12T09:25:25.006213130+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms,>
      
      h00283@coverity-ms:~/kubesphere$ sudo journalctl -xeu containerd
      Feb 12 09:23:50 coverity-ms containerd[57855]: time="2025-02-12T09:23:50.007470945+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b>
      Feb 12 09:23:50 coverity-ms containerd[57855]: time="2025-02-12T09:23:50.007543865+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
      Feb 12 09:23:57 coverity-ms containerd[57855]: time="2025-02-12T09:23:57.005215126+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86a>
      Feb 12 09:24:02 coverity-ms containerd[57855]: time="2025-02-12T09:24:02.005641258+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms,>
      Feb 12 09:24:05 coverity-ms containerd[57855]: time="2025-02-12T09:24:05.005768903+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b9>
      Feb 12 09:24:15 coverity-ms containerd[57855]: time="2025-02-12T09:24:15.014632688+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
      Feb 12 09:24:15 coverity-ms containerd[57855]: time="2025-02-12T09:24:15.015099837+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b>
      Feb 12 09:24:15 coverity-ms containerd[57855]: time="2025-02-12T09:24:15.015168391+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
      Feb 12 09:24:27 coverity-ms containerd[57855]: time="2025-02-12T09:24:27.005963308+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
      Feb 12 09:24:27 coverity-ms containerd[57855]: time="2025-02-12T09:24:27.006362809+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86>
      Feb 12 09:24:27 coverity-ms containerd[57855]: time="2025-02-12T09:24:27.006405634+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
      Feb 12 09:24:28 coverity-ms containerd[57855]: time="2025-02-12T09:24:28.005946694+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b9>
      Feb 12 09:24:32 coverity-ms containerd[57855]: time="2025-02-12T09:24:32.007109598+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
      Feb 12 09:24:32 coverity-ms containerd[57855]: time="2025-02-12T09:24:32.007513724+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms>
      Feb 12 09:24:32 coverity-ms containerd[57855]: time="2025-02-12T09:24:32.007574457+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
      Feb 12 09:24:38 coverity-ms containerd[57855]: time="2025-02-12T09:24:38.006011082+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86a>
      Feb 12 09:24:38 coverity-ms containerd[57855]: time="2025-02-12T09:24:38.011167166+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
      Feb 12 09:24:38 coverity-ms containerd[57855]: time="2025-02-12T09:24:38.011456251+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b>
      Feb 12 09:24:38 coverity-ms containerd[57855]: time="2025-02-12T09:24:38.011516448+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
      Feb 12 09:24:44 coverity-ms containerd[57855]: time="2025-02-12T09:24:44.005319589+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms,>
      Feb 12 09:24:52 coverity-ms containerd[57855]: time="2025-02-12T09:24:52.005626755+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b9>
      Feb 12 09:25:08 coverity-ms containerd[57855]: time="2025-02-12T09:25:08.006747234+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
      Feb 12 09:25:08 coverity-ms containerd[57855]: time="2025-02-12T09:25:08.007196587+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86>
      Feb 12 09:25:08 coverity-ms containerd[57855]: time="2025-02-12T09:25:08.007258612+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
      Feb 12 09:25:14 coverity-ms containerd[57855]: time="2025-02-12T09:25:14.006521370+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
      Feb 12 09:25:14 coverity-ms containerd[57855]: time="2025-02-12T09:25:14.006963140+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms>
      Feb 12 09:25:14 coverity-ms containerd[57855]: time="2025-02-12T09:25:14.007017828+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
      Feb 12 09:25:19 coverity-ms containerd[57855]: time="2025-02-12T09:25:19.006177756+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86a>
      Feb 12 09:25:22 coverity-ms containerd[57855]: time="2025-02-12T09:25:22.006859372+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
      Feb 12 09:25:22 coverity-ms containerd[57855]: time="2025-02-12T09:25:22.007282985+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b>
      Feb 12 09:25:22 coverity-ms containerd[57855]: time="2025-02-12T09:25:22.007336669+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
      Feb 12 09:25:25 coverity-ms containerd[57855]: time="2025-02-12T09:25:25.006213130+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms,>
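
      From the log, containerd is still resolving kubesphere/kube-scheduler against docker.io and the sandbox image against registry.k8s.io, which suggests the config it loaded does not carry the offline registry settings (or a different config file is being loaded). A couple of hedged checks:

      # Dump the config containerd actually resolved and look for the registry/pause settings
      sudo containerd config dump | grep -E 'sandbox_image|registry'
      # Cross-check the CRI runtime's view of its configuration
      sudo crictl info | grep -iE 'sandboxImage|mirrors'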