When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer. If an issue is not created according to the template, the administrators reserve the right to close it.
Make sure the post is clearly formatted and easy to read; use markdown code block syntax to format code blocks.
If you spend only one minute creating the question, you cannot expect others to spend half an hour answering it.

Operating system information

Virtual machine, Ubuntu 22.04, 24 cores / 64 GB RAM

Kubernetes version information
{
  "clientVersion": {
    "major": "1",
    "minor": "26",
    "gitVersion": "v1.26.12",
    "gitCommit": "df63cd7cd818dd2262473d2170f4957c6735ba53",
    "gitTreeState": "clean",
    "buildDate": "2023-12-19T13:43:37Z",
    "goVersion": "go1.20.12",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v4.5.7"
}

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Container runtime
Paste the output of docker version / crictl version / nerdctl version below.

Client: Docker Engine - Community
 Version:        27.3.1
 API version:    1.47
 Go version:     go1.22.7
 Git commit:     ce12230
 Built:          Fri Sep 20 11:41:03 2024
 OS/Arch:        linux/amd64
 Context:        default

Server: Docker Engine - Community
 Engine:
  Version:       27.3.1
  API version:   1.47 (minimum version 1.24)
  Go version:    go1.22.7
  Git commit:    41ca978
  Built:         Fri Sep 20 11:41:03 2024
  OS/Arch:       linux/amd64
  Experimental:  false
 containerd:
  Version:       1.7.23
  GitCommit:     57f17b0a6295a39009d861b89e3b3b87b005ca27
 runc:
  Version:       1.1.14
  GitCommit:     v1.1.14-0-g2c9f560
 docker-init:
  Version:       0.19.0
  GitCommit:     de40ad0

sudo crictl version
Version:            0.1.0
RuntimeName:        containerd
RuntimeVersion:     1.7.23
RuntimeApiVersion:  v1

nerdctl version
nerdctl: command not found

KubeSphere version information
For example: v2.1.1/v3.0.0. Offline or online installation? Installed on an existing K8s cluster or with kk?

v4.1.2, offline installation, installed with kk

./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-local-storage

What is the problem

Kubernetes fails to start with the errors below. Even though this is an offline installation, it still tries to pull the external image registry.k8s.io/pause:3.8. I imported that image manually, but the error persists.

Dec 11 11:14:45 master kubelet[48502]: E1211 11:14:45.286340 48502 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.8\": dial tcp 34.96.108.209:443: i/o timeout"
Dec 11 11:14:45 master kubelet[48502]: E1211 11:14:45.286463 48502 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.8\": dial tcp 34.96.108.209:443: i/o timeout" pod="kube-system/kube-scheduler-master"
Dec 11 11:14:45 master kubelet[48502]: E1211 11:14:45.286493 48502 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.8\": dial tcp 34.96.108.209:443: i/o timeout" pod="kube-system/kube-scheduler-master"
Dec 11 11:14:45 master kubelet[48502]: E1211 11:14:45.286613 48502 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-master_kube-system(4ca7fb2db07d0f724baa8308d590dcb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-master_kube-system(4ca7fb2db07d0f724baa8308d590dcb6)\\\": rpc error: code = DeadlineExceeded desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.8\\\": failed to pull image \\\"registry.k8s.io/pause:3.8\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.8\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.8\\\": failed to do request: Head \\\"https://registry.k8s.io/v2/pause/manifests/3.8\\\": dial tcp 34.96.108.209:443: i/o timeout\"" pod="kube-system/kube-scheduler-master" podUID=4ca7fb2db07d0f724baa8308d590dcb6

Screenshot of the imported image

How can this be resolved?
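A quick check worth running here (an editorial sketch, not from the thread; the tarball name is hypothetical): the kubelet in this setup pulls through containerd, so an image imported with docker load lands in Docker's store, not in containerd's.

```bash
# The kubelet pulls via containerd (CRI), so the pause image must be visible in
# containerd's k8s.io namespace, not just in Docker's image store.
sudo crictl images | grep pause
sudo ctr -n k8s.io images ls | grep pause
# If it is missing there, import the tarball directly into containerd, e.g.:
#   sudo ctr -n k8s.io images import pause-3.8.tar   # hypothetical file name
```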

    The k8s API server's port 8080 is not up.

    error: Pipeline[CreateClusterPipeline] execute failed: Module[KubernetesStatusModule] exec failed:

    failed: [master] [GetClusterStatus] exec failed after 3 retries: get kubernetes cluster info failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"
    E1211 11:52:53.437505 28235 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    E1211 11:52:53.438186 28235 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    E1211 11:52:53.439674 28235 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    E1211 11:52:53.439892 28235 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
    E1211 11:52:53.441515 28235 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused

    The connection to the server localhost:8080 was refused - did you specify the right host or port?: Process exited with status 1

    The reset process does not clean your kubeconfig files and you must remove them manually.

    Please, check the contents of the $HOME/.kube/config file.

    13:50:39 CST message: [master]

    init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
    W1211 13:46:33.866866 31746 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

    [init] Using Kubernetes version: v1.26.12

    [preflight] Running pre-flight checks

    [preflight] Pulling images required for setting up a Kubernetes cluster

    [preflight] This might take a minute or two, depending on the speed of your internet connection

    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

        [WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12: output: E1211 13:46:34.465341   31899 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \\"dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12\\": failed to resolve reference \\"dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12\\": failed to do request: Head \\"https://dockerhub.kubekey.local/v2/kubesphereio/kube-apiserver/manifests/v1.26.12\\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12"

    time="2024-12-11T13:46:34+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-apiserver/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority"

    , error: exit status 1

        [WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.26.12: output: E1211 13:46:34.801358   31989 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \\"dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.26.12\\": failed to resolve reference \\"dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.26.12\\": failed to do request: Head \\"https://dockerhub.kubekey.local/v2/kubesphereio/kube-controller-manager/manifests/v1.26.12\\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.26.12"

    time="2024-12-11T13:46:34+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-controller-manager/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority"

    , error: exit status 1

        [WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12: output: E1211 13:46:35.112048   32092 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \\"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\\": failed to resolve reference \\"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\\": failed to do request: Head \\"https://dockerhub.kubekey.local/v2/kubesphereio/kube-scheduler/manifests/v1.26.12\\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12"

    time="2024-12-11T13:46:35+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-scheduler/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority"

    , error: exit status 1

        [WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.12: output: E1211 13:46:35.393414   32177 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \\"dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.12\\": failed to resolve reference \\"dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.12\\": failed to do request: Head \\"https://dockerhub.kubekey.local/v2/kubesphereio/kube-proxy/manifests/v1.26.12\\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.12"

    time="2024-12-11T13:46:35+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-proxy/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority"

    , error: exit status 1

        [WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/pause:3.9: output: E1211 13:46:35.701777   32277 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \\"dockerhub.kubekey.local/kubesphereio/pause:3.9\\": failed to resolve reference \\"dockerhub.kubekey.local/kubesphereio/pause:3.9\\": failed to do request: Head \\"https://dockerhub.kubekey.local/v2/kubesphereio/pause/manifests/3.9\\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/pause:3.9"

    time="2024-12-11T13:46:35+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/pause:3.9\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/pause:3.9\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/pause/manifests/3.9\": tls: failed to verify certificate: x509: certificate signed by unknown authority"

    , error: exit status 1

        [WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/coredns:1.9.3: output: E1211 13:46:36.012321   32371 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \\"dockerhub.kubekey.local/kubesphereio/coredns:1.9.3\\": failed to resolve reference \\"dockerhub.kubekey.local/kubesphereio/coredns:1.9.3\\": failed to do request: Head \\"https://dockerhub.kubekey.local/v2/kubesphereio/coredns/manifests/1.9.3\\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/coredns:1.9.3"

    time="2024-12-11T13:46:36+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/coredns:1.9.3\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/coredns:1.9.3\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/coredns/manifests/1.9.3\": tls: failed to verify certificate: x509: certificate signed by unknown authority"

    , error: exit status 1

    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [codebase codebase.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master master.cluster.local] and IPs [10.233.0.1 172.16.21.35 127.0.0.1 172.16.20.20]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] External etcd mode: Skipping etcd/ca certificate authority generation
    [certs] External etcd mode: Skipping etcd/server certificate generation
    [certs] External etcd mode: Skipping etcd/peer certificate generation
    [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
    [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:

        timed out waiting for the condition

    This error is likely caused by:

        - The kubelet is not running
    
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

        - 'systemctl status kubelet'
    
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.

    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all running Kubernetes containers by using crictl:

        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    
        Once you have found the failing container, you can inspect its logs with:
    
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'

    error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

    To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1

    13:50:39 CST retry: [master]

    I noticed this error: The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

    Solution:

    sudo vi /var/lib/kubelet/config.yaml

    Change:

    clusterDNS:
    - 169.254.25.10

    to:

    clusterDNS:
    - 10.233.0.10
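A small follow-up (editorial note, not from the thread): the kubelet only re-reads this file on restart, so restart it after the edit. A minimal sketch:

```bash
# Restart the kubelet so the edited /var/lib/kubelet/config.yaml takes effect,
# then watch its log to confirm the clusterDNS warning is gone.
sudo systemctl restart kubelet
sudo journalctl -u kubelet -f
```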

      @gs80140 The registry.k8s.io/pause:3.8 image is defined in containerd's configuration file. If Harbor is installed on the node, the containerd config file is not created. If you have other nodes, copy /etc/containerd/config.toml over from one of them and restart containerd. Then run kk delete cluster -f xxx.yaml and re-run the installation (see the command sketch right below).
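A minimal sketch of those steps, using the config file name from this thread (the source node name is a placeholder):

```bash
# Copy a working containerd config from another node, restart containerd,
# then tear down and re-create the cluster with kk.
sudo scp root@other-node:/etc/containerd/config.toml /etc/containerd/config.toml
sudo systemctl restart containerd
./kk delete cluster -f config-sample.yaml
./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-local-storage
```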

        Manually download it and re-tag it:

        docker pull registry.aliyuncs.com/google_containers/pause:3.8

        docker tag registry.aliyuncs.com/google_containers/pause:3.8 registry.k8s.io/pause:3.8
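A caveat worth adding here (editorial note, not from the thread): the kubelet in this setup talks to containerd over CRI, so an image pulled and tagged with docker only lands in Docker's image store. A sketch of doing the same thing directly in containerd's k8s.io namespace:

```bash
# Pull the pause image from the mirror and re-tag it inside containerd's
# k8s.io namespace, which is the store the kubelet actually uses.
sudo ctr -n k8s.io images pull registry.aliyuncs.com/google_containers/pause:3.8
sudo ctr -n k8s.io images tag registry.aliyuncs.com/google_containers/pause:3.8 registry.k8s.io/pause:3.8
```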

        Cauchy

        This machine does have containerd; it just has no pause image configured. Harbor is installed on a separate machine, because installing it on the same machine would affect Harbor.

        gs80140 This is also wrong here; fix it as well: sudo vi /etc/kubernetes/kubeadm-config.yaml

        @gs80140 Then configure the containerd sandbox image on these nodes to point to the image in the offline registry (a config sketch follows below).
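A minimal sketch of that change, assuming the offline registry hosts kubesphereio/pause:3.9 as shown in the kubeadm output above (the sed expression is illustrative; editing the file by hand works just as well):

```bash
# Point containerd's CRI sandbox image at the offline registry instead of
# registry.k8s.io. The setting lives under [plugins."io.containerd.grpc.v1.cri"]
# in /etc/containerd/config.toml, e.g.:
#   sandbox_image = "dockerhub.kubekey.local/kubesphereio/pause:3.9"
sudo sed -i 's#^\(\s*sandbox_image = \).*#\1"dockerhub.kubekey.local/kubesphereio/pause:3.9"#' /etc/containerd/config.toml
sudo systemctl restart containerd
containerd config dump | grep sandbox_image   # verify the value that was actually loaded
```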

        gs80140

        sudo vi /var/lib/kubelet/config.yaml
        sudo vi kubeadm-config.yaml

        In both files, change:

        clusterDNS:
        - 169.254.25.10

        to:

        clusterDNS:
        - 10.233.0.10

        Which component does port 8080 belong to, and why isn't it up?

        14:54:35 CST stdout: [master]

        E1211 14:54:35.007262 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
        E1211 14:54:35.009045 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
        E1211 14:54:35.009898 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
        E1211 14:54:35.011881 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
        E1211 14:54:35.012490 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused

        The connection to the server localhost:8080 was refused - did you specify the right host or port?

        14:54:35 CST message: [master]

        get kubernetes cluster info failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl --no-headers=true get nodes -o custom-columns=:metadata.name,:status.nodeInfo.kubeletVersion,:status.addresses"

        E1211 14:54:35.007262 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
        E1211 14:54:35.009045 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
        E1211 14:54:35.009898 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
        E1211 14:54:35.011881 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
        E1211 14:54:35.012490 39398 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused

        The connection to the server localhost:8080 was refused - did you specify the right host or port?: Process exited with status 1

        14:54:35 CST retry: [master]

        14:54:40 CST stdout: [master]

        API Server default ports

        1. Secure port (--secure-port)

          • Default: 6443
          • This is the port normally used; it serves HTTPS and requires authentication and authorization.

        2. Insecure port (--insecure-port)

          • Default: 8080

          • Since Kubernetes 1.20 the default is 0 (disabled).

          • Not recommended; use only the secure port.

        kubectl falls back to http://localhost:8080 only when it cannot find a kubeconfig, which is why the errors above mention that port (see the sketch below).
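A minimal sketch of pointing kubectl at the admin kubeconfig that kubeadm writes (standard kubeadm paths):

```bash
# Give kubectl the admin kubeconfig generated by kubeadm init.
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
kubectl get nodes
# This only helps once kube-apiserver is actually listening on 6443; in this
# thread the apiserver never starts because its image cannot be pulled.
```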

        cat /etc/kubernetes/manifests/kube-apiserver.yaml

        apiVersion: v1
        kind: Pod
        metadata:
          annotations:
            kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 172.16.21.35:6443
          creationTimestamp: null
          labels:
            component: kube-apiserver
            tier: control-plane
          name: kube-apiserver
          namespace: kube-system
        spec:
          containers:
          - command:
            - kube-apiserver
            - --advertise-address=172.16.21.35
            - --allow-privileged=true
            - --authorization-mode=Node,RBAC
            - --bind-address=0.0.0.0
            - --client-ca-file=/etc/kubernetes/pki/ca.crt
            - --enable-admission-plugins=NodeRestriction
            - --enable-bootstrap-token-auth=true
            - --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem
            - --etcd-certfile=/etc/ssl/etcd/ssl/node-master.pem
            - --etcd-keyfile=/etc/ssl/etcd/ssl/node-master-key.pem
            - --etcd-servers=https://172.16.21.35:2379
            - --feature-gates=RotateKubeletServerCertificate=true
            - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
            - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
            - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
            - --requestheader-allowed-names=front-proxy-client
            - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
            - --requestheader-extra-headers-prefix=X-Remote-Extra-
            - --requestheader-group-headers=X-Remote-Group
            - --requestheader-username-headers=X-Remote-User
            - --secure-port=6443
            - --service-account-issuer=https://kubernetes.default.svc.cluster.local
            - --service-account-key-file=/etc/kubernetes/pki/sa.pub
            - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
            - --service-cluster-ip-range=10.233.0.0/18
            - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
            - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
            image: dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 8
              httpGet:
                host: 172.16.21.35
                path: /livez
                port: 6443
                scheme: HTTPS
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 15
            name: kube-apiserver
            readinessProbe:
              failureThreshold: 3
              httpGet:
                host: 172.16.21.35
                path: /readyz
                port: 6443
                scheme: HTTPS
              periodSeconds: 1
              timeoutSeconds: 15
            resources:
              requests:
                cpu: 250m
            startupProbe:
              failureThreshold: 24
              httpGet:
                host: 172.16.21.35
                path: /livez
                port: 6443
                scheme: HTTPS
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 15
            volumeMounts:
            - mountPath: /etc/ssl/certs
              name: ca-certs
              readOnly: true
            - mountPath: /etc/ca-certificates
              name: etc-ca-certificates
              readOnly: true
            - mountPath: /etc/pki
              name: etc-pki
              readOnly: true
            - mountPath: /etc/ssl/etcd/ssl
              name: etcd-certs-0
              readOnly: true
            - mountPath: /etc/kubernetes/pki
              name: k8s-certs
              readOnly: true
            - mountPath: /usr/local/share/ca-certificates
              name: usr-local-share-ca-certificates
              readOnly: true
            - mountPath: /usr/share/ca-certificates
              name: usr-share-ca-certificates
              readOnly: true
          hostNetwork: true
          priority: 2000001000
          priorityClassName: system-node-critical
          securityContext:
            seccompProfile:
              type: RuntimeDefault
          volumes:
          - hostPath:
              path: /etc/ssl/certs
              type: DirectoryOrCreate
            name: ca-certs
          - hostPath:
              path: /etc/ca-certificates
              type: DirectoryOrCreate
            name: etc-ca-certificates
          - hostPath:
              path: /etc/pki
              type: DirectoryOrCreate
            name: etc-pki
          - hostPath:
              path: /etc/ssl/etcd/ssl
              type: DirectoryOrCreate
            name: etcd-certs-0
          - hostPath:
              path: /etc/kubernetes/pki
              type: DirectoryOrCreate
            name: k8s-certs
          - hostPath:
              path: /usr/local/share/ca-certificates
              type: DirectoryOrCreate
            name: usr-local-share-ca-certificates
          - hostPath:
              path: /usr/share/ca-certificates
              type: DirectoryOrCreate
            name: usr-share-ca-certificates
        status: {}

        gs80140
        Can the pause image be pulled normally now? If so, first run ./kk delete cluster -f xxx.yaml to clean up the environment, then re-run create cluster and see what happens.

        Dec 11 16:48:36 master kubelet[43735]: E1211 16:48:36.464582 43735 remote_image.go:171] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-scheduler/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12"
        Dec 11 16:48:36 master kubelet[43735]: E1211 16:48:36.464615 43735 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-scheduler/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12"

        Pulled manually instead:

        docker pull dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12

        docker pull dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12

        After that, the error changed to:

        Dec 11 16:55:45 master kubelet[43735]: E1211 16:55:45.453268 43735 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12\\\"\"" pod="kube-system/kube-apiserver-master" podUID=9eb830c8cce30bfcab1dc46488c4c23e
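An editorial note on the x509 error above (not from the thread): docker pull likely succeeds because Docker trusts a registry CA placed under /etc/docker/certs.d/, while containerd, which the kubelet actually uses, has its own trust configuration. One possible fix is to declare the registry CA (or, as a last resort, skip verification) for dockerhub.kubekey.local in containerd's CRI registry config; the CA path below is an assumption, adjust it to wherever the KubeKey registry certificate actually lives:

```bash
# Sketch: tell containerd's CRI plugin how to trust the private registry.
# In /etc/containerd/config.toml add, for example:
#   [plugins."io.containerd.grpc.v1.cri".registry.configs."dockerhub.kubekey.local".tls]
#     ca_file = "/etc/docker/certs.d/dockerhub.kubekey.local/ca.crt"   # assumed path
#     # insecure_skip_verify = true                                    # last resort
# Then restart containerd and test a pull through CRI:
sudo systemctl restart containerd
sudo crictl pull dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12
```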

        Gave up on the offline installation and switched to an online installation:

        export KKZONE=cn

        ./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.4.1

        It did indeed go much better than the offline installation; at least all the Docker images were created.

        2 months later

        Cauchy
        Hi, I did the same thing and copied /etc/containerd/config.toml to the node in the offline environment; after delete cluster I reinstalled, but it still does not pull the images (see the check suggested after the logs below).

        h00283@coverity-ms:~/kubesphere$ sudo systemctl status containerd
        ● containerd.service - containerd container runtime
             Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
             Active: active (running) since Wed 2025-02-12 09:01:18 CST; 12min ago
               Docs: https://containerd.io
           Main PID: 57855 (containerd)
              Tasks: 20
             Memory: 32.1M
                CPU: 3.259s
             CGroup: /system.slice/containerd.service
                     └─57855 /usr/bin/containerd
        
        Feb 12 09:10:18 coverity-ms containerd[57855]: time="2025-02-12T09:10:18.030777307+08:00" level=info msg="PullImage \"kubesphere/kube-scheduler:v1.28.0\""
        Feb 12 09:10:53 coverity-ms containerd[57855]: time="2025-02-12T09:10:53.048661264+08:00" level=error msg="PullImage \"kubesphere/kube-scheduler:v1.28.0\" failed" error="failed to pull >
        Feb 12 09:10:53 coverity-ms containerd[57855]: time="2025-02-12T09:10:53.048737870+08:00" level=info msg="stop pulling image docker.io/kubesphere/kube-scheduler:v1.28.0: active requests>
        Feb 12 09:10:53 coverity-ms containerd[57855]: time="2025-02-12T09:10:53.067791408+08:00" level=info msg="PullImage \"kubesphere/kube-scheduler:v1.28.0\""
        Feb 12 09:11:47 coverity-ms containerd[57855]: time="2025-02-12T09:11:47.921987128+08:00" level=error msg="PullImage \"kubesphere/kube-scheduler:v1.28.0\" failed" error="failed to pull >
        Feb 12 09:11:47 coverity-ms containerd[57855]: time="2025-02-12T09:11:47.922058874+08:00" level=info msg="stop pulling image docker.io/kubesphere/kube-scheduler:v1.28.0: active requests>
        Feb 12 09:11:47 coverity-ms containerd[57855]: time="2025-02-12T09:11:47.959575620+08:00" level=info msg="PullImage \"kubesphere/kube-proxy:v1.28.0\""
        Feb 12 09:12:42 coverity-ms containerd[57855]: time="2025-02-12T09:12:42.900990255+08:00" level=error msg="PullImage \"kubesphere/kube-proxy:v1.28.0\" failed" error="failed to pull and >
        Feb 12 09:12:42 coverity-ms containerd[57855]: time="2025-02-12T09:12:42.901048488+08:00" level=info msg="stop pulling image docker.io/kubesphere/kube-proxy:v1.28.0: active requests=0, >
        Feb 12 09:12:42 coverity-ms containerd[57855]: time="2025-02-12T09:12:42.920106926+08:00" level=info msg="PullImage \"kubesphere/kube-proxy:v1.28.0\""
        lines 1-21/21 (END)
        h00283@coverity-ms:~/kubesphere$ sudo systemctl status containerd
        ● containerd.service - containerd container runtime
             Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
             Active: active (running) since Wed 2025-02-12 09:01:18 CST; 24min ago
               Docs: https://containerd.io
           Main PID: 57855 (containerd)
              Tasks: 20
             Memory: 28.1M
                CPU: 6.470s
             CGroup: /system.slice/containerd.service
                     └─57855 /usr/bin/containerd
        
        Feb 12 09:25:08 coverity-ms containerd[57855]: time="2025-02-12T09:25:08.007196587+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86>
        Feb 12 09:25:08 coverity-ms containerd[57855]: time="2025-02-12T09:25:08.007258612+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
        Feb 12 09:25:14 coverity-ms containerd[57855]: time="2025-02-12T09:25:14.006521370+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
        Feb 12 09:25:14 coverity-ms containerd[57855]: time="2025-02-12T09:25:14.006963140+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms>
        Feb 12 09:25:14 coverity-ms containerd[57855]: time="2025-02-12T09:25:14.007017828+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
        Feb 12 09:25:19 coverity-ms containerd[57855]: time="2025-02-12T09:25:19.006177756+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86a>
        Feb 12 09:25:22 coverity-ms containerd[57855]: time="2025-02-12T09:25:22.006859372+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
        Feb 12 09:25:22 coverity-ms containerd[57855]: time="2025-02-12T09:25:22.007282985+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b>
        Feb 12 09:25:22 coverity-ms containerd[57855]: time="2025-02-12T09:25:22.007336669+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
        Feb 12 09:25:25 coverity-ms containerd[57855]: time="2025-02-12T09:25:25.006213130+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms,>
        
        h00283@coverity-ms:~/kubesphere$ sudo journalctl -xeu containerd
        Feb 12 09:23:50 coverity-ms containerd[57855]: time="2025-02-12T09:23:50.007470945+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b>
        Feb 12 09:23:50 coverity-ms containerd[57855]: time="2025-02-12T09:23:50.007543865+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
        Feb 12 09:23:57 coverity-ms containerd[57855]: time="2025-02-12T09:23:57.005215126+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86a>
        Feb 12 09:24:02 coverity-ms containerd[57855]: time="2025-02-12T09:24:02.005641258+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms,>
        Feb 12 09:24:05 coverity-ms containerd[57855]: time="2025-02-12T09:24:05.005768903+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b9>
        Feb 12 09:24:15 coverity-ms containerd[57855]: time="2025-02-12T09:24:15.014632688+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
        Feb 12 09:24:15 coverity-ms containerd[57855]: time="2025-02-12T09:24:15.015099837+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b>
        Feb 12 09:24:15 coverity-ms containerd[57855]: time="2025-02-12T09:24:15.015168391+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
        Feb 12 09:24:27 coverity-ms containerd[57855]: time="2025-02-12T09:24:27.005963308+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
        Feb 12 09:24:27 coverity-ms containerd[57855]: time="2025-02-12T09:24:27.006362809+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86>
        Feb 12 09:24:27 coverity-ms containerd[57855]: time="2025-02-12T09:24:27.006405634+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
        Feb 12 09:24:28 coverity-ms containerd[57855]: time="2025-02-12T09:24:28.005946694+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b9>
        Feb 12 09:24:32 coverity-ms containerd[57855]: time="2025-02-12T09:24:32.007109598+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
        Feb 12 09:24:32 coverity-ms containerd[57855]: time="2025-02-12T09:24:32.007513724+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms>
        Feb 12 09:24:32 coverity-ms containerd[57855]: time="2025-02-12T09:24:32.007574457+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
        Feb 12 09:24:38 coverity-ms containerd[57855]: time="2025-02-12T09:24:38.006011082+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86a>
        Feb 12 09:24:38 coverity-ms containerd[57855]: time="2025-02-12T09:24:38.011167166+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
        Feb 12 09:24:38 coverity-ms containerd[57855]: time="2025-02-12T09:24:38.011456251+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b>
        Feb 12 09:24:38 coverity-ms containerd[57855]: time="2025-02-12T09:24:38.011516448+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
        Feb 12 09:24:44 coverity-ms containerd[57855]: time="2025-02-12T09:24:44.005319589+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms,>
        Feb 12 09:24:52 coverity-ms containerd[57855]: time="2025-02-12T09:24:52.005626755+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b9>
        Feb 12 09:25:08 coverity-ms containerd[57855]: time="2025-02-12T09:25:08.006747234+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
        Feb 12 09:25:08 coverity-ms containerd[57855]: time="2025-02-12T09:25:08.007196587+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86>
        Feb 12 09:25:08 coverity-ms containerd[57855]: time="2025-02-12T09:25:08.007258612+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
        Feb 12 09:25:14 coverity-ms containerd[57855]: time="2025-02-12T09:25:14.006521370+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
        Feb 12 09:25:14 coverity-ms containerd[57855]: time="2025-02-12T09:25:14.006963140+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms>
        Feb 12 09:25:14 coverity-ms containerd[57855]: time="2025-02-12T09:25:14.007017828+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
        Feb 12 09:25:19 coverity-ms containerd[57855]: time="2025-02-12T09:25:19.006177756+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-coverity-ms,Uid:9c86a>
        Feb 12 09:25:22 coverity-ms containerd[57855]: time="2025-02-12T09:25:22.006859372+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v>
        Feb 12 09:25:22 coverity-ms containerd[57855]: time="2025-02-12T09:25:22.007282985+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-coverity-ms,Uid:b68b>
        Feb 12 09:25:22 coverity-ms containerd[57855]: time="2025-02-12T09:25:22.007336669+08:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
        Feb 12 09:25:25 coverity-ms containerd[57855]: time="2025-02-12T09:25:25.006213130+08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-coverity-ms,>
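The logs above still show containerd resolving kubesphere/* from docker.io and the sandbox image from registry.k8s.io, which suggests the copied config either was not loaded or does not point the sandbox image and registry mirrors at the offline registry. A quick check worth running (an editorial sketch, not from the thread):

```bash
# Confirm which configuration containerd actually loaded after the restart and
# whether the sandbox image still points at registry.k8s.io.
containerd config dump | grep -E 'sandbox_image|config_path'
# If the values are not what you expect, fix /etc/containerd/config.toml and restart:
sudo systemctl restart containerd
```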