Feynman changed the title to "Deploying KubeSphere v3.0.0 in an Offline Environment with KubeKey (Testing)".

When will the Kubernetes offline deployment package and solution be released?

I got an error during installation:
TASK [common : Kubesphere | Check minio] ***************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm list -n kubesphere-system | grep \"ks-minio\"\n", "delta": "0:00:00.139724", "end": "2020-09-21 06:45:08.530778", "msg": "non-zero return code", "rc": 1, "start": "2020-09-21 06:45:08.391054", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Deploy minio] **************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-minio /kubesphere/kubesphere/minio-ha -f /kubesphere/kubesphere/custom-values-minio.yaml --set fullnameOverride=minio --namespace kubesphere-system --wait --timeout 1800s\n", "delta": "0:30:00.885918", "end": "2020-09-21 07:15:09.927753", "msg": "non-zero return code", "rc": 1, "start": "2020-09-21 06:45:09.041835", "stderr": "Error: timed out waiting for the condition", "stderr_lines": ["Error: timed out waiting for the condition"], "stdout": "Release \"ks-minio\" does not exist. Installing it now.", "stdout_lines": ["Release \"ks-minio\" does not exist. Installing it now."]}
...ignoring

TASK [common : debug] **********************************************************
ok: [localhost] => {
"msg": [
"1. check the storage configuration and storage server",
"2. make sure the DNS address in /etc/resolv.conf is available",
"3. execute 'kubectl logs -n kubesphere-system -l job-name=minio-make-bucket-job' to watch logs",
"4. execute 'helm -n kubesphere-system uninstall ks-minio && kubectl -n kubesphere-system delete job minio-make-bucket-job'",
"5. Restart the installer pod in kubesphere-system namespace"
]
}

TASK [common : fail] ***********************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "It is suggested to refer to the above methods for troubleshooting problems ."}

PLAY RECAP *********************************************************************
localhost

    dongweibh The log hints are already quite explicit; you can follow them to troubleshoot. Most likely it is a storage problem (a few quick checks are sketched after the quoted hints below):

    "msg": [
    "1. check the storage configuration and storage server",
    "2. make sure the DNS address in /etc/resolv.conf is available",
    "3. execute 'kubectl logs -n kubesphere-system -l job-name=minio-make-bucket-job' to watch logs",
    "4. execute 'helm -n kubesphere-system uninstall ks-minio && kubectl -n kubesphere-system delete job minio-make-bucket-job'",
    "5. Restart the installer pod in kubesphere-system namespace"
    ]
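
    For step 1, a quick way to look at the storage side is roughly the following (commands are illustrative, adjust them to your environment):

    kubectl get sc                                    # is a (default) StorageClass configured?
    kubectl get pvc -n kubesphere-system              # are minio's PVCs Bound or stuck in Pending?
    kubectl logs -n kubesphere-system -l job-name=minio-make-bucket-job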

    The offline deployment finished, the log printed the account name and password, all containers are running, and the login page opens. But after entering the username and password the page just flashes and returns to the login form. How can I dig further into the logs?

      liziyang

      kubectl get po -A

      Check whether any pods are abnormal.

      kubectl -n kubesphere-system logs -l app=ks-console
      kubectl -n kubesphere-system logs -l app=ks-apiserver
      kubectl -n kubesphere-system logs -l app=ks-controller-manager

      Check whether there are any error logs.

      All pods look normal.
      I think the cause is in the output of kubectl -n kubesphere-system logs -l app=ks-controller-manager; could an expert take a look and help analyze it?

      Kubernetes version: 1.18.6

      In the end I restarted the ks-apiserver pod and was able to log in. Thanks.
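
      (For anyone who hits the same symptom, one way to restart it is to let the Deployment recreate the pod; the commands below are just an illustration:)

      kubectl -n kubesphere-system rollout restart deployment ks-apiserver
      # or delete the pod directly and let the Deployment recreate it:
      kubectl -n kubesphere-system delete pod -l app=ks-apiserver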

      I need some advice; I ran into this problem:
      ./kk init os -f config-sample.yaml -s ./dependencies/

      Error: unknown command "init" for "kk"
      Run 'kk --help' for usage.
      unknown command "init" for "kk"

        dongweibh Run ./kk --help and you will see: kk has no init subcommand. Did you mix something up? It should be create.
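
        For reference, the offline package in this thread creates the cluster with:

        ./kk create cluster -f config-sample.yaml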

        Could anyone advise: when I add nfs-client to the installation file the installation fails, but removing nfs-client makes it work again.

          hongming
          It was configured following the example. A CentOS 7.6 cluster installs without problems, but the RedHat 7.6 cluster keeps failing at the nfs-client step.
          This is the CentOS 7.6 configuration file:

          This is the RedHat 7.6 configuration file:

          In the end, the installation on RedHat succeeded in three steps: first, comment out the ClusterConfiguration part of the configuration and keep only the Cluster section; then manually run helm install to install nfs-client (see the sketch below); finally, restore the ClusterConfiguration content and run ./kk create cluster -f XXXX again.
          That is how it succeeded.
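
          For reference, the manual helm install step might look roughly like the following; the chart path, namespace and NFS parameters here are placeholders for illustration, use the nfs-client chart shipped with your offline package:

          helm install nfs-client ./nfs-client-provisioner \
            --namespace kube-system \
            --set nfs.server=<nfs-server-ip> \
            --set nfs.path=</exported/path> \
            --set storageClass.defaultClass=true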

          13 days later

          The installation succeeded, but checking pod status shows the calico-node pods keep restarting. I have been stuck on this for several days.

          [root@node1 ~]# kubectl get pods -n kube-system -o wide
          NAME                                       READY   STATUS             RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
          calico-kube-controllers-677cbc8557-zdsst   1/1     Running            5          9d    10.233.90.18     node1   <none>           <none>
          calico-node-6947s                          0/1     CrashLoopBackOff   109        9d    192.168.56.110   node3   <none>           <none>
          calico-node-pkm5b                          1/1     Running            5          9d    192.168.56.108   node1   <none>           <none>
          calico-node-xljh2                          0/1     CrashLoopBackOff   109        9d    192.168.56.109   node2   <none>           <none>
          coredns-79878cb9c9-g9cfk                   1/1     Running            5          9d    10.233.90.16     node1   <none>           <none>
          coredns-79878cb9c9-hvpc8                   1/1     Running            5          9d    10.233.90.17     node1   <none>           <none>
          kube-apiserver-node1                       1/1     Running            5          9d    192.168.56.108   node1   <none>           <none>
          kube-controller-manager-node1              1/1     Running            6          9d    192.168.56.108   node1   <none>           <none>
          kube-proxy-2m2n8                           1/1     Running            10         9d    192.168.56.108   node1   <none>           <none>
          kube-proxy-7nft6                           1/1     Running            10         9d    192.168.56.109   node2   <none>           <none>
          kube-proxy-j8vs8                           1/1     Running            1          9d    192.168.56.110   node3   <none>           <none>
          kube-scheduler-node1                       1/1     Running            6          9d    192.168.56.108   node1   <none>           <none>
          nodelocaldns-jsq9w                         1/1     Running            5          9d    192.168.56.108   node1   <none>           <none>
          nodelocaldns-pmqlq                         1/1     Running            4          9d    192.168.56.110   node3   <none>           <none>
          nodelocaldns-zxkjb                         1/1     Running            5          9d    192.168.56.109   node2   <none>           <none>

          Looking at the logs of the two calico-node pods, they cannot reach the address 10.233.0.1:443/api/v1/nodes/foo.

          [root@node1 ~]# kubectl logs calico-node-6947s -n kube-system
          2020-10-08 14:16:26.014 [INFO][8] startup/startup.go 299: Early log level set to info
          2020-10-08 14:16:26.015 [INFO][8] startup/startup.go 315: Using NODENAME environment for node name
          2020-10-08 14:16:26.015 [INFO][8] startup/startup.go 327: Determined node name: node3
          2020-10-08 14:16:26.017 [INFO][8] startup/startup.go 359: Checking datastore connection
          2020-10-08 14:16:56.018 [INFO][8] startup/startup.go 374: Hit error connecting to datastore - retry error=Get https://10.233.0.1:443/api/v1/nodes/foo: dial tcp 10.233.0.1:443: i/o timeout
          2020-10-08 14:17:27.021 [INFO][8] startup/startup.go 374: Hit error connecting to datastore - retry error=Get https://10.233.0.1:443/api/v1/nodes/foo: dial tcp 10.233.0.1:443: i/o timeout
          
          [root@node1 ~]# kubectl logs calico-node-xljh2  -n kube-system 
          2020-10-12 01:08:37.085 [INFO][8] startup/startup.go 299: Early log level set to info
          2020-10-12 01:08:37.086 [INFO][8] startup/startup.go 315: Using NODENAME environment for node name
          2020-10-12 01:08:37.086 [INFO][8] startup/startup.go 327: Determined node name: node2
          2020-10-12 01:08:37.154 [INFO][8] startup/startup.go 359: Checking datastore connection
          2020-10-12 01:09:07.155 [INFO][8] startup/startup.go 374: Hit error connecting to datastore - retry error=Get https://10.233.0.1:443/api/v1/nodes/foo: dial tcp 10.233.0.1:443: i/o timeout
          2020-10-12 01:09:38.158 [INFO][8] startup/startup.go 374: Hit error connecting to datastore - retry error=Get https://10.233.0.1:443/api/v1/nodes/foo: dial tcp 10.233.0.1:443: i/o timeout

          I have confirmed that firewalld and SELinux are both disabled.
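
          Since 10.233.0.1 is the kubernetes Service VIP, it may also be worth checking from node2/node3 whether kube-proxy has programmed that VIP (illustrative commands, assuming curl, iptables and ipvsadm are available on the nodes):

          curl -vk https://10.233.0.1:443/version      # any TLS/HTTP response means the VIP is reachable
          ip route get 10.233.0.1                      # which interface the node uses to reach the VIP
          iptables-save | grep 10.233.0.1              # kube-proxy rules in iptables mode
          ipvsadm -Ln | grep -A 3 10.233.0.1           # kube-proxy rules in IPVS mode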

          Attachment:
          config-sample.yaml configuration file

          [root@localhost kubesphere-all-v3.0.0-offline-linux-amd64]# cat  config-sample.yaml 
          apiVersion: kubekey.kubesphere.io/v1alpha1
          kind: Cluster
          metadata:
            name: sample
          spec:
            hosts:
            - {name: node1, address: 192.168.56.108, internalAddress: 192.168.56.108, user: root, password: kkroot}
            - {name: node2, address: 192.168.56.109, internalAddress: 192.168.56.109, user: root, password: kkroot}
            - {name: node3, address: 192.168.56.110, internalAddress: 192.168.56.110, user: root, password: kkroot}
            roleGroups:
              etcd:
              - node1
              master: 
              - node1
              worker:
              - node1
              - node2
              - node3
            controlPlaneEndpoint:
              domain: lb.kubesphere.local
              address: ""
              port: "6443"
            kubernetes:
              version: v1.17.9
              imageRepo: kubesphere
              clusterName: cluster.local
            network:
              plugin: calico
              kubePodsCIDR: 10.233.64.0/18
              kubeServiceCIDR: 10.233.0.0/18
            registry:
              registryMirrors: []
              insecureRegistries: []
              privateRegistry: dockerhub.kubekey.local
            addons: []

          ./kk create cluster installation log

          [root@node1 kubesphere-all-v3.0.0-offline-linux-amd64]# ./kk create cluster -f config-sample.yaml
          +-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
          | name  | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
          +-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
          | node3 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:53:47 |
          | node1 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:53:47 |
          | node2 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | EDT 10:53:47 |
          +-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
          
          This is a simple check of your environment.
          Before installation, you should ensure that your machines meet all requirements specified at
          https://github.com/kubesphere/kubekey#requirements-and-recommendations
          
          Continue this installation? [yes/no]: yes
          INFO[10:53:49 EDT] Downloading Installation Files               
          INFO[10:53:49 EDT] Downloading kubeadm ...                      
          INFO[10:53:49 EDT] Downloading kubelet ...                      
          INFO[10:53:50 EDT] Downloading kubectl ...                      
          INFO[10:53:50 EDT] Downloading kubecni ...                      
          INFO[10:53:50 EDT] Downloading helm ...                         
          INFO[10:53:51 EDT] Configurating operating system ...           
          [node2 192.168.56.109] MSG:
          net.ipv4.ip_forward = 1
          net.bridge.bridge-nf-call-arptables = 1
          net.bridge.bridge-nf-call-ip6tables = 1
          net.bridge.bridge-nf-call-iptables = 1
          net.ipv4.ip_local_reserved_ports = 30000-32767
          [node1 192.168.56.108] MSG:
          net.ipv4.ip_forward = 1
          net.bridge.bridge-nf-call-arptables = 1
          net.bridge.bridge-nf-call-ip6tables = 1
          net.bridge.bridge-nf-call-iptables = 1
          net.ipv4.ip_local_reserved_ports = 30000-32767
          [node3 192.168.56.110] MSG:
          net.ipv4.ip_forward = 1
          net.bridge.bridge-nf-call-arptables = 1
          net.bridge.bridge-nf-call-ip6tables = 1
          net.bridge.bridge-nf-call-iptables = 1
          net.ipv4.ip_local_reserved_ports = 30000-32767
          INFO[10:53:54 EDT] Installing docker ...                        
          INFO[10:53:55 EDT] Start to download images on all nodes        
          [node1] Downloading image: dockerhub.kubekey.local/kubesphere/etcd:v3.3.12
          [node3] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
          [node2] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
          [node1] Downloading image: dockerhub.kubekey.local/kubesphere/pause:3.1
          [node3] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
          [node2] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
          [node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.17.9
          [node3] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
          [node2] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
          [node3] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
          [node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.17.9
          [node2] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
          [node2] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
          [node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.17.9
          [node3] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
          [node2] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
          [node1] Downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.17.9
          [node3] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
          [node2] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
          [node3] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
          [node1] Downloading image: dockerhub.kubekey.local/coredns/coredns:1.6.9
          [node1] Downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
          [node1] Downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.15.1
          [node1] Downloading image: dockerhub.kubekey.local/calico/cni:v3.15.1
          [node1] Downloading image: dockerhub.kubekey.local/calico/node:v3.15.1
          [node1] Downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1
          INFO[10:53:59 EDT] Generating etcd certs                        
          INFO[10:54:01 EDT] Synchronizing etcd certs                     
          INFO[10:54:01 EDT] Creating etcd service                        
          INFO[10:54:05 EDT] Starting etcd cluster                        
          [node1 192.168.56.108] MSG:
          Configuration file already exists
          Waiting for etcd to start
          INFO[10:54:13 EDT] Refreshing etcd configuration                
          INFO[10:54:13 EDT] Backup etcd data regularly                   
          INFO[10:54:14 EDT] Get cluster status                           
          [node1 192.168.56.108] MSG:
          Cluster will be created.
          INFO[10:54:14 EDT] Installing kube binaries                     
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.108:/tmp/kubekey/kubeadm   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.110:/tmp/kubekey/kubeadm   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubeadm to 192.168.56.109:/tmp/kubekey/kubeadm   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.108:/tmp/kubekey/kubelet   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.108:/tmp/kubekey/kubectl   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.108:/tmp/kubekey/helm   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.110:/tmp/kubekey/kubelet   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.108:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubelet to 192.168.56.109:/tmp/kubekey/kubelet   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.110:/tmp/kubekey/kubectl   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/kubectl to 192.168.56.109:/tmp/kubekey/kubectl   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.110:/tmp/kubekey/helm   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/helm to 192.168.56.109:/tmp/kubekey/helm   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.109:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
          Push /root/kubesphere-all-v3.0.0-offline-linux-amd64/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.56.110:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
          INFO[10:54:32 EDT] Initializing kubernetes cluster              
          [node1 192.168.56.108] MSG:
          W1002 10:54:33.546978    7304 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
          W1002 10:54:33.547575    7304 validation.go:28] Cannot validate kube-proxy config - no validator is available
          W1002 10:54:33.547601    7304 validation.go:28] Cannot validate kubelet config - no validator is available
          [init] Using Kubernetes version: v1.17.9
          [preflight] Running pre-flight checks
                  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
          [preflight] Pulling images required for setting up a Kubernetes cluster
          [preflight] This might take a minute or two, depending on the speed of your internet connection
          [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
          [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
          [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
          [kubelet-start] Starting the kubelet
          [certs] Using certificateDir folder "/etc/kubernetes/pki"
          [certs] Generating "ca" certificate and key
          [certs] Generating "apiserver" certificate and key
          [certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local node1 node1.cluster.local node2 node2.cluster.local node3 node3.cluster.local] and IPs [10.233.0.1 10.0.2.15 127.0.0.1 192.168.56.108 192.168.56.109 192.168.56.110 10.233.0.1]
          [certs] Generating "apiserver-kubelet-client" certificate and key
          [certs] Generating "front-proxy-ca" certificate and key
          [certs] Generating "front-proxy-client" certificate and key
          [certs] External etcd mode: Skipping etcd/ca certificate authority generation
          [certs] External etcd mode: Skipping etcd/server certificate generation
          [certs] External etcd mode: Skipping etcd/peer certificate generation
          [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
          [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
          [certs] Generating "sa" key and public key
          [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
          [kubeconfig] Writing "admin.conf" kubeconfig file
          [kubeconfig] Writing "kubelet.conf" kubeconfig file
          [kubeconfig] Writing "controller-manager.conf" kubeconfig file
          [kubeconfig] Writing "scheduler.conf" kubeconfig file
          [control-plane] Using manifest folder "/etc/kubernetes/manifests"
          [control-plane] Creating static Pod manifest for "kube-apiserver"
          [controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
          W1002 10:54:39.078002    7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
          [control-plane] Creating static Pod manifest for "kube-controller-manager"
          [controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
          W1002 10:54:39.089428    7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
          [control-plane] Creating static Pod manifest for "kube-scheduler"
          [controlplane] Adding extra host path mount "host-time" to "kube-controller-manager"
          W1002 10:54:39.091411    7304 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
          [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
          [apiclient] All control plane components are healthy after 26.007113 seconds
          [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
          [kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
          [upload-certs] Skipping phase. Please see --upload-certs
          [mark-control-plane] Marking the node node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
          [mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
          [bootstrap-token] Using token: rajfez.t9320hox3sddbowz
          [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
          [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
          [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
          [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
          [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
          [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
          [addons] Applied essential addon: CoreDNS
          [addons] Applied essential addon: kube-proxy
          
          Your Kubernetes control-plane has initialized successfully!
          
          To start using your cluster, you need to run the following as a regular user:
          
            mkdir -p $HOME/.kube
            sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
            sudo chown $(id -u):$(id -g) $HOME/.kube/config
          
          You should now deploy a pod network to the cluster.
          Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
            https://kubernetes.io/docs/concepts/cluster-administration/addons/
          
          You can now join any number of control-plane nodes by copying certificate authorities
          and service account keys on each node and then running the following as root:
          
            kubeadm join lb.kubesphere.local:6443 --token rajfez.t9320hox3sddbowz \
              --discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2 \
              --control-plane 
          
          Then you can join any number of worker nodes by running the following on each as root:
          
          kubeadm join lb.kubesphere.local:6443 --token rajfez.t9320hox3sddbowz \
              --discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2
          [node1 192.168.56.108] MSG:
          node/node1 untainted
          [node1 192.168.56.108] MSG:
          node/node1 labeled
          [node1 192.168.56.108] MSG:
          service "kube-dns" deleted
          [node1 192.168.56.108] MSG:
          service/coredns created
          [node1 192.168.56.108] MSG:
          serviceaccount/nodelocaldns created
          daemonset.apps/nodelocaldns created
          [node1 192.168.56.108] MSG:
          configmap/nodelocaldns created
          [node1 192.168.56.108] MSG:
          I1002 10:55:34.720063    9901 version.go:251] remote version is much newer: v1.19.2; falling back to: stable-1.17
          W1002 10:55:36.884062    9901 validation.go:28] Cannot validate kube-proxy config - no validator is available
          W1002 10:55:36.884090    9901 validation.go:28] Cannot validate kubelet config - no validator is available
          [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
          [upload-certs] Using certificate key:
          a9a0daeedbefb4b9a014f4b258b9916403f7136bea20d28ec03aa926c41fcb3e
          [node1 192.168.56.108] MSG:
          secret/kubeadm-certs patched
          [node1 192.168.56.108] MSG:
          secret/kubeadm-certs patched
          [node1 192.168.56.108] MSG:
          secret/kubeadm-certs patched
          [node1 192.168.56.108] MSG:
          W1002 10:55:37.738867   10303 validation.go:28] Cannot validate kube-proxy config - no validator is available
          W1002 10:55:37.738964   10303 validation.go:28] Cannot validate kubelet config - no validator is available
          kubeadm join lb.kubesphere.local:6443 --token 025byf.2t2mvldlr9wm1ycx     --discovery-token-ca-cert-hash sha256:99f5f95e912acb458719c9cbaa6d4acb5d36ca0e38dccb00c56d69c2f0ef7fa2
          [node1 192.168.56.108] MSG:
          NAME    STATUS     ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
          node1   NotReady   master,worker   34s   v1.17.9   192.168.56.108   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.4
          INFO[10:55:38 EDT] Deploying network plugin ...                 
          [node1 192.168.56.108] MSG:
          configmap/calico-config created
          customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
          customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
          clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
          clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
          clusterrole.rbac.authorization.k8s.io/calico-node created
          clusterrolebinding.rbac.authorization.k8s.io/calico-node created
          daemonset.apps/calico-node created
          serviceaccount/calico-node created
          deployment.apps/calico-kube-controllers created
          serviceaccount/calico-kube-controllers created
          INFO[10:55:40 EDT] Joining nodes to cluster                     
          [node3 192.168.56.110] MSG:
          W1002 10:55:41.544472   12557 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
          [preflight] Running pre-flight checks
          [preflight] Reading configuration from the cluster...
          [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
          W1002 10:55:43.067290   12557 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
          [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
          [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
          [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
          [kubelet-start] Starting the kubelet
          [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
          
          This node has joined the cluster:
          * Certificate signing request was sent to apiserver and a response was received.
          * The Kubelet was informed of the new secure connection details.
          
          Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
          [node2 192.168.56.109] MSG:
          W1002 10:55:41.963749    8533 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
          [preflight] Running pre-flight checks
          [preflight] Reading configuration from the cluster...
          [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
          W1002 10:55:43.520053    8533 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
          [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
          [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
          [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
          [kubelet-start] Starting the kubelet
          [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
          
          This node has joined the cluster:
          * Certificate signing request was sent to apiserver and a response was received.
          * The Kubelet was informed of the new secure connection details.
          
          Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
          [node3 192.168.56.110] MSG:
          node/node3 labeled
          [node2 192.168.56.109] MSG:
          node/node2 labeled
          INFO[10:55:54 EDT] Congradulations! Installation is successful. 

            Hi, could you paste the events of the calico pods?
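
            For a single pod, the events can also be viewed inline with, for example:

            kubectl -n kube-system describe pod calico-node-6947s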

              yunkunrao pod/calico-node-xljh2 and pod/calico-node-6947s

              [root@node1 ~]# kubectl get events
              LAST SEEN   TYPE     REASON                    OBJECT       MESSAGE
              3h5m        Normal   Starting                  node/node1   Starting kubelet.
              3h5m        Normal   NodeHasSufficientMemory   node/node1   Node node1 status is now: NodeHasSufficientMemory
              3h5m        Normal   NodeHasNoDiskPressure     node/node1   Node node1 status is now: NodeHasNoDiskPressure
              3h5m        Normal   NodeHasSufficientPID      node/node1   Node node1 status is now: NodeHasSufficientPID
              3h5m        Normal   NodeAllocatableEnforced   node/node1   Updated Node Allocatable limit across pods
              3h4m        Normal   Starting                  node/node1   Starting kube-proxy.
              3h4m        Normal   RegisteredNode            node/node1   Node node1 event: Registered Node node1 in Controller
              3h4m        Normal   Starting                  node/node1   Starting kube-proxy.
              7m17s       Normal   Starting                  node/node1   Starting kubelet.
              7m16s       Normal   NodeHasSufficientMemory   node/node1   Node node1 status is now: NodeHasSufficientMemory
              7m16s       Normal   NodeHasNoDiskPressure     node/node1   Node node1 status is now: NodeHasNoDiskPressure
              7m16s       Normal   NodeHasSufficientPID      node/node1   Node node1 status is now: NodeHasSufficientPID
              7m17s       Normal   NodeAllocatableEnforced   node/node1   Updated Node Allocatable limit across pods
              5m53s       Normal   Starting                  node/node1   Starting kube-proxy.
              5m38s       Normal   Starting                  node/node1   Starting kube-proxy.
              5m32s       Normal   RegisteredNode            node/node1   Node node1 event: Registered Node node1 in Controller
              3h4m        Normal   RegisteredNode            node/node2   Node node2 event: Registered Node node2 in Controller
              3h3m        Normal   NodeNotReady              node/node2   Node node2 status is now: NodeNotReady
              6m49s       Normal   Starting                  node/node2   Starting kubelet.
              6m35s       Normal   NodeHasSufficientMemory   node/node2   Node node2 status is now: NodeHasSufficientMemory
              6m42s       Normal   NodeHasNoDiskPressure     node/node2   Node node2 status is now: NodeHasNoDiskPressure
              6m35s       Normal   NodeHasSufficientPID      node/node2   Node node2 status is now: NodeHasSufficientPID
              6m49s       Normal   NodeAllocatableEnforced   node/node2   Updated Node Allocatable limit across pods
              5m51s       Normal   Starting                  node/node2   Starting kube-proxy.
              5m38s       Normal   Starting                  node/node2   Starting kube-proxy.
              5m32s       Normal   RegisteredNode            node/node2   Node node2 event: Registered Node node2 in Controller
              3h4m        Normal   RegisteredNode            node/node3   Node node3 event: Registered Node node3 in Controller
              3h3m        Normal   NodeNotReady              node/node3   Node node3 status is now: NodeNotReady
              6m43s       Normal   Starting                  node/node3   Starting kubelet.
              6m29s       Normal   NodeHasSufficientMemory   node/node3   Node node3 status is now: NodeHasSufficientMemory
              6m36s       Normal   NodeHasNoDiskPressure     node/node3   Node node3 status is now: NodeHasNoDiskPressure
              6m29s       Normal   NodeHasSufficientPID      node/node3   Node node3 status is now: NodeHasSufficientPID
              6m42s       Normal   NodeAllocatableEnforced   node/node3   Updated Node Allocatable limit across pods
              5m51s       Normal   Starting                  node/node3   Starting kube-proxy.
              5m39s       Normal   Starting                  node/node3   Starting kube-proxy.
              5m32s       Normal   RegisteredNode            node/node3   Node node3 event: Registered Node node3 in Controller
              [root@node1 ~]# 
              
              [root@node1 ~]# kubectl get events -n kube-system |grep  calico-node-6947s
              4h1m        Warning   Unhealthy                pod/calico-node-6947s                          Readiness probe failed: calico/node is not ready: BIRD is not ready: Failed to stat() nodename file: stat /var/lib/calico/nodename: no such file or directory
              3h52m       Warning   BackOff                  pod/calico-node-6947s                          Back-off restarting failed container
              15m         Warning   FailedMount              pod/calico-node-6947s                          MountVolume.SetUp failed for volume "calico-node-token-qtlkr" : failed to sync secret cache: timed out waiting for the condition
              15m         Normal    SandboxChanged           pod/calico-node-6947s                          Pod sandbox changed, it will be killed and re-created.
              15m         Normal    Pulled                   pod/calico-node-6947s                          Container image "dockerhub.kubekey.local/calico/cni:v3.15.1" already present on machine
              15m         Normal    Created                  pod/calico-node-6947s                          Created container upgrade-ipam
              15m         Normal    Started                  pod/calico-node-6947s                          Started container upgrade-ipam
              15m         Normal    Pulled                   pod/calico-node-6947s                          Container image "dockerhub.kubekey.local/calico/cni:v3.15.1" already present on machine
              15m         Normal    Created                  pod/calico-node-6947s                          Created container install-cni
              15m         Normal    Started                  pod/calico-node-6947s                          Started container install-cni
              15m         Normal    Pulled                   pod/calico-node-6947s                          Container image "dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1" already present on machine
              15m         Normal    Created                  pod/calico-node-6947s                          Created container flexvol-driver
              15m         Normal    Started                  pod/calico-node-6947s                          Started container flexvol-driver
              5m55s       Normal    Pulled                   pod/calico-node-6947s                          Container image "dockerhub.kubekey.local/calico/node:v3.15.1" already present on machine
              15m         Normal    Created                  pod/calico-node-6947s                          Created container calico-node
              15m         Normal    Started                  pod/calico-node-6947s                          Started container calico-node
              14m         Warning   Unhealthy                pod/calico-node-6947s                          Readiness probe failed: calico/node is not ready: BIRD is not ready: Failed to stat() nodename file: stat /var/lib/calico/nodename: no such file or directory
              14m         Warning   Unhealthy                pod/calico-node-6947s                          Liveness probe failed: calico/node is not ready: bird/confd is not live: exit status 1
              55s         Warning   BackOff                  pod/calico-node-6947s                          Back-off restarting failed container
              [root@node1 ~]# 
              
              [root@node1 ~]# kubectl get events -n kube-system |grep  calico-node-xljh2 
              4h8m        Normal    Pulled                   pod/calico-node-xljh2                          Container image "dockerhub.kubekey.local/calico/node:v3.15.1" already present on machine
              3h53m       Warning   Unhealthy                pod/calico-node-xljh2                          Readiness probe failed: calico/node is not ready: BIRD is not ready: Failed to stat() nodename file: stat /var/lib/calico/nodename: no such file or directory
              3h58m       Warning   BackOff                  pod/calico-node-xljh2                          Back-off restarting failed container
              17m         Normal    SandboxChanged           pod/calico-node-xljh2                          Pod sandbox changed, it will be killed and re-created.
              17m         Normal    Pulled                   pod/calico-node-xljh2                          Container image "dockerhub.kubekey.local/calico/cni:v3.15.1" already present on machine
              17m         Normal    Created                  pod/calico-node-xljh2                          Created container upgrade-ipam
              17m         Normal    Started                  pod/calico-node-xljh2                          Started container upgrade-ipam
              16m         Normal    Pulled                   pod/calico-node-xljh2                          Container image "dockerhub.kubekey.local/calico/cni:v3.15.1" already present on machine
              16m         Normal    Created                  pod/calico-node-xljh2                          Created container install-cni
              16m         Normal    Started                  pod/calico-node-xljh2                          Started container install-cni
              16m         Normal    Pulled                   pod/calico-node-xljh2                          Container image "dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.15.1" already present on machine
              16m         Normal    Created                  pod/calico-node-xljh2                          Created container flexvol-driver
              16m         Normal    Started                  pod/calico-node-xljh2                          Started container flexvol-driver
              16m         Normal    Pulled                   pod/calico-node-xljh2                          Container image "dockerhub.kubekey.local/calico/node:v3.15.1" already present on machine
              16m         Normal    Created                  pod/calico-node-xljh2                          Created container calico-node
              16m         Normal    Started                  pod/calico-node-xljh2                          Started container calico-node
              6m53s       Warning   Unhealthy                pod/calico-node-xljh2                          Readiness probe failed: calico/node is not ready: BIRD is not ready: Failed to stat() nodename file: stat /var/lib/calico/nodename: no such file or directory
              15m         Warning   Unhealthy                pod/calico-node-xljh2                          Liveness probe failed: calico/node is not ready: bird/confd is not live: exit status 1
              15m         Normal    Killing                  pod/calico-node-xljh2                          Container calico-node failed liveness probe, will be restarted
              2m1s        Warning   BackOff                  pod/calico-node-xljh2                          Back-off restarting failed container
              [root@node1 ~]# 
