• Installation & Deployment
  • After adding worker nodes with kk, 2 of the original nodes became NotReady

When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer. Administrators reserve the right to close issues that do not follow the template.
Keep the post readable: format code with markdown code block syntax.
If you only spend one minute creating an issue, you cannot expect others to spend half an hour answering it.

Operating system
e.g. VM/bare metal, CentOS 7.5/Ubuntu 18.04, 4C/8G
CentOS 7.9, 2C/8G
Kubernetes version
Paste the output of kubectl version below
1.21.5
Container runtime
Paste the output of docker version / crictl version / nerdctl version below
20.10.8
KubeSphere version
e.g. v2.1.1/v3.0.0. Offline or online install. Installed on an existing K8s cluster or with kk.
v3.2.1, online install; both K8s and KubeSphere were installed with kk
What is the problem
What is the error log? A screenshot is best.
./kk add nodes -f config.yaml
Adding node8 and node9 reported no errors,
but afterwards 2 of the worker nodes became NotReady.

ryuhai083

The taints were added automatically by the cluster because the node is not Ready; they prevent pods from being scheduled onto it.

You can refer to the post https://kubesphere.com.cn/forum/d/3044-k8snotready and check the status of kubelet.
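A quick way to do that on each NotReady node (a minimal sketch; it assumes kubelet runs as a systemd service, which is how kk installs it):

```shell
# Check whether kubelet is running and inspect its recent logs.
# Each probe is guarded with '|| true' so the script keeps going
# (and exits 0) even if a command is unavailable or kubelet is down.
systemctl is-active kubelet || true          # expect "active" on a healthy node
systemctl status kubelet --no-pager || true
journalctl -u kubelet --no-pager -n 50 || true
```

If kubelet is dead or crash-looping, the journal output usually shows why it stopped posting node status.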

    [root@k8s-master1 ~]# kubectl describe node k8s-node3
    Name:               k8s-node3
    Roles:              worker
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/arch=amd64
                        kubernetes.io/hostname=k8s-node3
                        kubernetes.io/os=linux
                        node-role.kubernetes.io/worker=
    Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Fri, 28 Jan 2022 00:02:48 +0800
    Taints:             node.kubernetes.io/unreachable:NoExecute
                        node.kubernetes.io/unreachable:NoSchedule
    Unschedulable:      false
    Lease:
      HolderIdentity:  k8s-node3
      AcquireTime:     <unset>
      RenewTime:       Fri, 28 Jan 2022 00:03:00 +0800
    Conditions:
      Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason             Message
      ----             ------    -----------------                 ------------------                ------             -------
      MemoryPressure   Unknown   Fri, 28 Jan 2022 00:02:49 +0800   Fri, 28 Jan 2022 00:03:44 +0800   NodeStatusUnknown  Kubelet stopped posting node status.
      DiskPressure     Unknown   Fri, 28 Jan 2022 00:02:49 +0800   Fri, 28 Jan 2022 00:03:44 +0800   NodeStatusUnknown  Kubelet stopped posting node status.
      PIDPressure      Unknown   Fri, 28 Jan 2022 00:02:49 +0800   Fri, 28 Jan 2022 00:03:44 +0800   NodeStatusUnknown  Kubelet stopped posting node status.
      Ready            Unknown   Fri, 28 Jan 2022 00:02:49 +0800   Fri, 28 Jan 2022 00:03:44 +0800   NodeStatusUnknown  Kubelet stopped posting node status.
    Addresses:
      InternalIP:  192.168.1.98
      Hostname:    k8s-node3
    Capacity:
      cpu:                2
      ephemeral-storage:  49250820Ki
      hugepages-1Gi:      0
      hugepages-2Mi:      0
      memory:             7990048Ki
      pods:               110
    Allocatable:
      cpu:                1600m
      ephemeral-storage:  49250820Ki
      hugepages-1Gi:      0
      hugepages-2Mi:      0
      memory:             7248430689
      pods:               110
    System Info:
      Machine ID:                 696084e5c5e44e429eed2dd11ca48ec6
      System UUID:                66CD4D56-FD52-D15D-DB6C-670F01F8186A
      Boot ID:                    ef481640-7d33-462d-b4fc-548dee067ec5
      Kernel Version:             3.10.0-1160.53.1.el7.x86_64
      OS Image:                   CentOS Linux 7 (Core)
      Operating System:           linux
      Architecture:               amd64
      Container Runtime Version:  docker://20.10.8
      Kubelet Version:            v1.21.5
      Kube-Proxy Version:         v1.21.5
    PodCIDR:   10.233.67.0/24
    PodCIDRs:  10.233.67.0/24
    Non-terminated Pods:  (5 in total)
      Namespace                     Name                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
      ---------                     ----                 ------------  ----------  ---------------  -------------  ---
      kube-system                   calico-node-pg959    250m (15%)    0 (0%)      0 (0%)           0 (0%)         92m
      kube-system                   haproxy-k8s-node3    25m (1%)      0 (0%)      32M (0%)         0 (0%)         92m
      kube-system                   kube-proxy-zd6mb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         92m
      kube-system                   nodelocaldns-zd5ps   100m (6%)     0 (0%)      70Mi (1%)        170Mi (2%)     93m
      kubesphere-monitoring-system  node-exporter-mqt5g  112m (7%)     2 (125%)    200Mi (2%)       600Mi (8%)     88m
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource           Requests        Limits
      --------           --------        ------
      cpu                487m (30%)      2 (125%)
      memory             315115520 (4%)  770Mi (11%)
      ephemeral-storage  0 (0%)          0 (0%)
      hugepages-1Gi      0 (0%)          0 (0%)
      hugepages-2Mi      0 (0%)          0 (0%)
    Events:              <none>

Now I removed k8s-node1 from the cluster, and when joining it back to the cluster with the command below I get the following error:

    [root@k8s-node1 manifests]# kubeadm join lb.kubesphere.local:6443 --token 6j2vf9.5knky8076jc2rotm --discovery-token-ca-cert-hash sha256:5c8ad38f05d8cad0fc8b71edfb08bc8cc70dfcbfb43e967d2d1bb3deda4a7c50
    [preflight] Running pre-flight checks
    error execution phase preflight: couldn't validate the identity of the API Server: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
    To see the stack trace of this error execute with --v=5 or higher

    [root@k8s-node1 manifests]# cat /etc/hosts
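The join error dials 127.0.0.1:6443 even though the command targets lb.kubesphere.local, which suggests /etc/hosts on the node still maps that name to loopback. With kk's internal haproxy load balancer, that mapping only works while the node's local haproxy pod is forwarding :6443 to the control plane, and a node that has left the cluster no longer runs it. A minimal sketch of the check (the function name `check_lb_hosts` is illustrative, and any replacement control-plane IP would be your own master's address, not something from this post):

```shell
# Flag a loopback mapping for lb.kubesphere.local in a hosts file.
# The file path argument exists only so the check can be tried on a
# sample file; on a real node you would pass /etc/hosts.
check_lb_hosts() {
  local hosts_file="${1:-/etc/hosts}"
  if grep -qE '^127\.0\.0\.1[[:space:]].*lb\.kubesphere\.local' "$hosts_file"; then
    echo "lb.kubesphere.local points at 127.0.0.1; point it at a reachable control-plane IP before joining"
  else
    echo "lb.kubesphere.local mapping looks usable"
  fi
}

check_lb_hosts /etc/hosts
```

If the check flags loopback, editing the entry to a real control-plane address (or restoring the local haproxy) should let kubeadm join reach the API server.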

    [root@k8s-master1 ~]# kubectl logs haproxy-k8s-node4 -n kube-system
    Error from server (InternalError): Internal error occurred: Authorization error (user=kube-apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)