When filing a deployment issue, please follow the template below:
OS information, e.g. virtual machine or physical machine. Here: a KVM virtual machine running k3OS.

Kubernetes version information, e.g. v1.18.6. Single node or multi-node.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
node-10-120-127-237 Ready control-plane,etcd 18d v1.20.4+k3s1
node-10-120-13-20 Ready control-plane,etcd,master 6d20h v1.20.4+k3s1
node-10-120-13-21 Ready <none> 6d20h v1.20.4+k3s1
node-10-120-13-235 Ready control-plane,etcd,master 18d v1.20.4+k3s1

KubeSphere version information, e.g. v2.1.1/v3.0.0. Offline or online installation. Installed on an existing Kubernetes cluster, or a full all-in-one install.

What is the problem and what do the error logs say? Screenshots are helpful.
node-10-120-13-235 [~]# kubectl describe pods ks-installer-6f4c9fcfcf-pl2cr -n kubesphere-system
Name: ks-installer-6f4c9fcfcf-pl2cr
Namespace: kubesphere-system
Priority: 0
Node: node-10-120-13-21/10.120.13.21
Start Time: Tue, 01 Jun 2021 03:04:53 +0000
Labels: app=ks-install
pod-template-hash=6f4c9fcfcf
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
“name”: "",
“interface”: “eth0”,
“ips”: [
“10.52.3.12”
],
“mac”: “86:14:ff:96:6a:c3”,
“default”: true,
“dns”: {}
}]
k8s.v1.cni.cncf.io/networks-status:
[{
“name”: "",
“interface”: “eth0”,
“ips”: [
“10.52.3.12”
],
“mac”: “86:14:ff:96:6a:c3”,
“default”: true,
“dns”: {}
}]
Status: Running
IP: 10.52.3.12
IPs:
IP: 10.52.3.12
Controlled By: ReplicaSet/ks-installer-6f4c9fcfcf
Containers:
installer:
Container ID: containerd://a09cea2678602863f5c9ce139915bcedfc054b09e32b3af5a9e6defafc6b77ea
Image: kubesphere/ks-installer:v3.1.0
Image ID: docker.io/kubesphere/ks-installer@sha256:eeaf9c965577601392e17cb0ce2d3bb9ab6c8b4f4cff462c272e31503e88e4ef
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: StartError
Message: failed to create containerd task: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container
init caused: rootfs_linux.go:59: mounting "/etc/localtime" to rootfs at "/run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/a09cea2678602863f5c9ce139915bcedfc054b09e32b3af5a9e6defafc6b77ea/rootfs/usr/share/zoneinfo/Asia/Shanghai" caused: not a directory: unknown
Exit Code: 128
Started: Thu, 01 Jan 1970 00:00:00 +0000
Finished: Tue, 01 Jun 2021 03:07:55 +0000
Ready: False
Restart Count: 5
Limits:
cpu: 1
memory: 1Gi
Requests:
cpu: 20m
memory: 100Mi
Environment: <none>
Mounts:
/etc/localtime from host-time (rw)
/var/run/secrets/kubernetes.io/serviceaccount from ks-installer-token-dlnjt (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
host-time:
Type: HostPath (bare host directory volume)
Path: /etc/localtime
HostPathType:
ks-installer-token-dlnjt:
Type: Secret (a volume populated by a Secret)
SecretName: ks-installer-token-dlnjt
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Normal Scheduled 4m33s default-scheduler Successfully assigned kubesphere-system/ks-installer-6f4c9fcfcf-pl2cr to node-10-120-13-21
Normal Pulled 3m8s kubelet Successfully pulled image "kubesphere/ks-installer:v3.1.0" in 952.765851ms
Normal Pulled 3m6s kubelet Successfully pulled image "kubesphere/ks-installer:v3.1.0" in 955.877685ms
Normal Pulled 2m50s kubelet Successfully pulled image "kubesphere/ks-installer:v3.1.0" in 943.515454ms
Normal Created 2m22s (x4 over 3m8s) kubelet Created container installer
Warning Failed 2m22s (x4 over 3m8s) kubelet Error: failed to create containerd task: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:59: mounting "/etc/localtime" to rootfs at "/run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/installer/rootfs/usr/share/zoneinfo/Asia/Shanghai" caused: not a directory: unknown
Normal Pulled 2m22s kubelet Successfully pulled image "kubesphere/ks-installer:v3.1.0" in 948.216899ms
Warning BackOff 103s (x8 over 3m5s) kubelet Back-off restarting failed container
Normal Pulling 90s (x5 over 3m9s) kubelet Pulling image "kubesphere/ks-installer:v3.1.0"
node-10-120-13-235 [~]# kubectl get pods -n kubesphere-system
NAME READY STATUS RESTARTS AGE
ks-installer-6f4c9fcfcf-pl2cr 0/1 RunContainerError 6 4m28s
node-10-120-13-235 [~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node-10-120-127-237 Ready control-plane,etcd 18d v1.20.4+k3s1
node-10-120-13-20 Ready control-plane,etcd,master 6d20h v1.20.4+k3s1
node-10-120-13-21 Ready <none> 6d20h v1.20.4+k3s1
node-10-120-13-235 Ready control-plane,etcd,master 18d v1.20.4+k3s1
node-10-120-13-235 [~]# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
node-10-120-13-235 [~]# k3s
NAME:
k3s - Kubernetes, but small and simple

USAGE:
k3s [global options] command [command options] [arguments...]

VERSION:
v1.20.4+k3s1 (838a906a)

COMMANDS:
server Run management server
agent Run node agent
kubectl Run kubectl
crictl Run crictl
ctr Run ctr
check-config Run config check
etcd-snapshot Trigger an immediate etcd snapshot
help, h Shows a list of commands or help for one command

GLOBAL OPTIONS:
--debug (logging) Turn on debug logs [$K3S_DEBUG]
--data-dir value, -d value (data) Folder to hold state default /var/lib/rancher/k3s or ${HOME}/.rancher/k3s if not root
--help, -h show help
--version, -v print the version
node-10-120-13-235 [~]#

    tscswcn First, check whether the file "/etc/localtime" exists on your host. If it does not, create it:
    ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    Then restart ks-installer.

    It exists, but it is a directory: an empty one.

    Is k3OS different from other Linux distributions in this respect?

    Try deleting that empty directory, then run ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime, and restart the installer.
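The fix above can be sketched as a small shell helper. The zoneinfo path follows the thread (Asia/Shanghai); the function is parameterized so it can be exercised outside the node, and the pod label comes from the pod description earlier in the thread:

```shell
# fix_localtime TARGET ZONEFILE
# k3OS bind-mounts /etc/localtime into the installer pod; if the host path
# is an empty directory rather than a file or symlink, mounting it onto a
# file inside the image fails with "not a directory". This replaces an
# empty directory at TARGET with a symlink to ZONEFILE.
fix_localtime() {
    target=$1; zonefile=$2
    if [ -d "$target" ] && [ ! -L "$target" ]; then
        rmdir "$target" || return 1   # refuses to remove a non-empty dir
        ln -s "$zonefile" "$target"
    fi
}

# On the node:
#   fix_localtime /etc/localtime /usr/share/zoneinfo/Asia/Shanghai
# Then delete the installer pod so it is recreated and the mount retried:
#   kubectl -n kubesphere-system delete pod -l app=ks-install
```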

    tscswcn You can restart ks-installer and check again. If the error persists, exec into the ks-installer pod and look at the log:
    cat /kubesphere/results/result-info/result/stdout

    FAILED - RETRYING: KubeSphere | Waiting for ks-apiserver (11 retries left).
    FAILED - RETRYING: KubeSphere | Waiting for ks-apiserver (10 retries left).
    FAILED - RETRYING: KubeSphere | Waiting for ks-apiserver (9 retries left).
    FAILED - RETRYING: KubeSphere | Waiting for ks-apiserver (8 retries left).
    FAILED - RETRYING: KubeSphere | Waiting for ks-apiserver (7 retries left).
    FAILED - RETRYING: KubeSphere | Waiting for ks-apiserver (6 retries left).
    FAILED - RETRYING: KubeSphere | Waiting for ks-apiserver (5 retries left).
    FAILED - RETRYING: KubeSphere | Waiting for ks-apiserver (4 retries left).
    FAILED - RETRYING: KubeSphere | Waiting for ks-apiserver (3 retries left).
    FAILED - RETRYING: KubeSphere | Waiting for ks-apiserver (2 retries left).
    FAILED - RETRYING: KubeSphere | Waiting for ks-apiserver (1 retries left).
    fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "/usr/local/bin/kubectl get pod -n kubesphere-system -o wide | grep ks-apiserver | awk '{print $3}'", "delta": "0:00:00.094682", "end": "2021-06-03 17:25:46.448578", "rc": 0, "start": "2021-06-03 17:25:46.353896", "stderr": "", "stderr_lines": [], "stdout": "ImagePullBackOff\nImagePullBackOff", "stdout_lines": ["ImagePullBackOff", "ImagePullBackOff"]}
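The "ImagePullBackOff" in the installer's stdout means the ks-apiserver pods could not pull their images. A quick way to list every stuck pod, sketched as a filter function (assumes the standard `kubectl get pods -A --no-headers` column order NAMESPACE NAME READY STATUS RESTARTS AGE):

```shell
# failing_pods: reads `kubectl get pods -A --no-headers` on stdin and prints
# namespace/name for pods whose STATUS column shows an image-pull failure.
failing_pods() {
    awk '$4 ~ /ImagePullBackOff|ErrImagePull/ {print $1 "/" $2}'
}

# Usage on the cluster:
#   kubectl get pods -A --no-headers | failing_pods
```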

    node-10-120-13-21 [~]# ctr images pull redis:5.0.5-alpine
    ctr: failed to resolve reference "redis:5.0.5-alpine": parse "dummy://redis:5.0.5-alpine": invalid port ":5.0.5-alpine" after host
    node-10-120-13-21 [~]# crictl pull redis:5.0.5-alpine
    FATA[2021-06-04T13:57:27.039676370+08:00] pulling image: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/redis:5.0.5-alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/redis/manifests/sha256:a606eaca41c3c69c7d2c8a142ec445e71156bae8526ae7970f62b6399e57761c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
    node-10-120-13-21 [~]#
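Note that the `ctr` failure above is a different problem from the rate limit: unlike `crictl`, `ctr` does not apply Docker's default registry and `library/` namespace, so the reference must be fully qualified, e.g. `docker.io/library/redis:5.0.5-alpine`. A small helper for the common cases (a sketch; it does not handle references that already carry a registry host):

```shell
# qualify_image: expand a Docker-style short image name into the fully
# qualified reference that `ctr images pull` expects.
qualify_image() {
    case $1 in
        */*) echo "docker.io/$1" ;;          # user/repo:tag
        *)   echo "docker.io/library/$1" ;;  # official image
    esac
}

# ctr images pull "$(qualify_image redis:5.0.5-alpine)"
```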

    I don't know how to get the image pulled down here.

      tscswcn "You have reached your pull rate limit." Docker Hub now rate-limits image pulls, and your IP has hit the limit. You can either wait 24 hours, or configure a registry mirror.
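Since this cluster is k3s, the mirror is configured for the embedded containerd rather than for the Docker daemon. A sketch, assuming the standard k3s location /etc/rancher/k3s/registries.yaml; the mirror URL is a placeholder you must replace with a real mirror:

```shell
# write_mirror_config DIR MIRROR_URL: write a k3s registries.yaml that sends
# docker.io pulls through MIRROR_URL. k3s reads this file from
# /etc/rancher/k3s/ when the service starts.
write_mirror_config() {
    dir=$1; mirror=$2
    mkdir -p "$dir"
    cat > "$dir/registries.yaml" <<EOF
mirrors:
  docker.io:
    endpoint:
      - "$mirror"
EOF
}

# On each node (then restart the k3s service so containerd reloads the
# registry configuration):
#   write_mirror_config /etc/rancher/k3s https://registry-mirror.example.com
```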

      Your cluster may actually have installed successfully already. Run kubectl get pod -A to check whether all the pods are up.

      You can also log in to Docker on the affected node to raise the pull limit :P

        The ks-installer log now shows the following; how should I diagnose this?

        E0605 16:14:56.425265 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        E0605 16:14:57.425821 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        E0605 16:14:58.426458 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        E0605 16:14:59.427048 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        E0605 16:15:00.427603 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        E0605 16:15:01.428221 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        E0605 16:15:02.428889 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        E0605 16:15:03.429471 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        E0605 16:15:04.430089 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        E0605 16:15:05.430631 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        E0605 16:15:06.431235 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        E0605 16:15:07.431839 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        E0605 16:15:08.432435 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        E0605 16:15:09.434703 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: the server is currently unable to handle the request
        node-10-120-13-235 [~]#

        Turns out this is not a problem after all; I can log in now. Thanks!