KubeSphere v4.1.2: installing the KubeSphere Multi-Cluster Agent Connection

I followed the documentation.

Host cluster:

ets-pst-001@etspst001-hp-compaq-8000-elite-cmt-pc:~/ks$ kubectl -n kubesphere-system get svc
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
extensions-museum       ClusterIP      10.233.44.228   <none>        443/TCP          5m38s
ks-apiserver            ClusterIP      10.233.18.206   <none>        80/TCP           5m38s
ks-console              NodePort       10.233.22.249   <none>        80:30880/TCP     5m38s
ks-controller-manager   ClusterIP      10.233.49.228   <none>        443/TCP          5m38s
tower                   LoadBalancer   10.233.22.234   <pending>     8080:31771/TCP   81s
ets-pst-001@etspst001-hp-compaq-8000-elite-cmt-pc:~/ks$ kubectl -n kubesphere-system edit cm kubesphere-config
configmap/kubesphere-config edited
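
The edit points tower's published proxy address at an address the member cluster can reach. A rough sketch of the relevant part of kubesphere-config (the multicluster.proxyPublishAddress field is my reading of the multi-cluster docs, not verified; the port has to be tower's NodePort from the service listing above):

multicluster:
  # address the member-side agent will dial; must be reachable from the member
  proxyPublishAddress: http://<host-ip>:<tower-nodeport>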

The cluster to be joined:

After following every step, the cluster still shows as unready.

5 days later

cici

On the host cluster, kubectl get cluster <cluster> -o yaml shows the current error state.

On the member cluster, run helm -n kubesphere-system list -a to check the deployment status of ks-agent.

Run kubectl -n kubesphere-system get po to see whether any pod failed to start, and track down the cause.
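
Collected into one sequence (replace <cluster> with the cluster's name as registered in KubeSphere):

# on the host cluster: inspect the Cluster object's status conditions
kubectl get cluster <cluster> -o yaml

# on the member cluster: check the ks-agent Helm release
helm -n kubesphere-system list -a

# on the member cluster: look for pods that failed to start
kubectl -n kubesphere-system get po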

  • cici replied to this post

    hongming
    I redid everything from scratch.

    On the member cluster, kubectl config view --raw

    Output:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJVWFKNnp0cmFpSGt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFeU1qY3dNRFEzTWpsYUZ3MHpOREV5TWpVd01EVXlNamxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURaaFRObmpqR2s5eUlOOGxLMjlmYXo5a3BVVXNBc3BKTlJkM0VkN0JTajlOQVpzODNjVlF4N05sdEcKcVBseS9rOGRORDhsaDRVY1lqNUo1RncrMGR3Nmh0Nlk4aXBTSzU4SExuVkxEODhZYnBSbVNzZWR5WG5mRDZFNwo4bHlPbnQ0TSt1NytqQmRabHkzVTFKOUFOYWhpR0ZqTEY0cmVaU1dlZ0pMSFkzYzZVV0EzemFENVd6Umxta2NyClNxUDk1OXU5R1F2OHZBejN6TERSZWluVU9CR2YxdmFUYmRMVENuTEFJdHFna3AwOXZyWnVNeEo4QytiaUtIM3kKTHQ5dXdSZ3dGdjVZdU9vTDhXTW94U2RvNU8rVXlzdnZDZk9EZVJyYzgwSmtwQS83eWo0MEMvcXJ6bzBqTmRGVApYSUtxenJWQ3YrSDNvbG5uM1M5V2dSWkl0Ri9wQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSMkV0Z1kyR24vZGhJVWdOKzlWaU9uTjk3dml6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQlc5cFVZQjlEYwpJU3JQNVRhRERuQnhmMUw2SjdBNDFYSkJoMHRmVWlYTVB2T2hBNHJpeXBET0tnYzFnVHRHUlIvVEM5a2s4N240ClhQK3ZKUmRtcytFVmVXZEJPbDVFK1M1NVIwRVRjUmlPVUVHdDk4TG81aXZKVUUwUDl4QXdZdENKZVd0ck1OeFcKc0JrcDlQVjVJc25ZTWlKTnFLOHFTckVvWWZqWDY3MVp5TWJrNTVRM25UdEU3WkUzM1ZiN3g2YVg4cFp4aTRPdwo1UXpLWkRmMCtsWTlBNUZEOTJmRjNkbHFJTy9GSFpTRitsV0p3VllGcS96b25FZ2d6dkxEbnUyZ3ZLRGdGZmRXClBzYTdDNXhGREx2TEpnVFpzcVl0N05XQkUyTGZSNFhXQ2t5bTZFTldSaEc4U1lJdCsxOG92ZWhaN3NTc3huNDMKTU44MlZQV0Q0MTVrCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
        server: https://lb.kubesphere.local:6443
      name: cluster.local
    contexts:
    - context:
        cluster: cluster.local
        user: kubernetes-admin
      name: kubernetes-admin@cluster.local
    current-context: kubernetes-admin@cluster.local
    kind: Config
    preferences: {}
    users:
    - name: kubernetes-admin
      user:
        client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJUlJNdmdOTk9LMjB3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFeU1qY3dNRFEzTWpsYUZ3MHlOVEV5TWpjd01EVXlNekZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXJNbUM1S2NjWHoxK1l2NnoKdTU3dEFNTUhwSzlZUyt3b2k5UHhjVVZPdlI0Y1dOOWhIaFpUQ3d0WGFJNWtacDBLMFNOWmFqK21TWnRBMXRhZwowcGFaUnJLRS95aGk2NWZRd25vRVp0SjhpSjZFb1UwNmFrckp1dVNxaW16Q0dIcFQ1R25NamUrMFB4TkNlZnQ0CkFNZGduemdSeXJpOUFnMW52SDdoaXdTRjYyYkZ0cVN3VytxSGxmMWpLREdlNEw5cUd5QjNMVVhOMGM1elVjKzUKRVFnRDUzVUdhd0J2UDNCVEJ6NVlQekEzc1dIRkZIZ2hub1hYemsxK1daYW9PemE4S0ZTbGFmMnUyeFVCRFZ1SQpwVXkveUZ1a1JNaHJURnMxU2V2RVRZVGJOUWdvQ2ExZXZhemNkczJQUW5IeXZmNGR1MnRocXZuSWw4SnhESVVxCnZpWWNMd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JSMkV0Z1kyR24vZGhJVWdOKzlWaU9uTjk3dgppekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBRy85UHlQZWYyalZFTjhTUEpQWERBNlh3MEk0YlhReXJzWVVUCmQrWGROS3RSVmN0MStoSW9jckxXS3YxWitlMTVtRmh1bmJmSk9wR3NVbXJ1ODY1RzJlem52eVY5Qkw1YWZjcmwKVmJYTWFCQW9DRU5GYlFlRFNnaWNkd3A2ZmdCTGFhdWxjdkxWanBEZTFhbDNCS3ZMemZMS25lZ2JiL2UxSk5MUgp0SERCMnRFcVRrYkRlczJDMzEzcGdFcVZhbjVKRllrN0llM21FcVh0Y1hvbWhob3JTWDhMRE1lMWE4RzF3RGVVCk9xenpaM3A2WWxCMmlRampCNTJkNEM0b2FyanZMbmxBZGN3Q1hRajc1R0k0WDdhbVNSeDRTREd3THhXUTFhOFEKNGpJUkgzcXpoOTNSbjFTd3FzREtaSUVCQURuL0MyNVRGR3N0ZXBkSkdZS0R6Q0tlMnc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
        client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBck1tQzVLY2NYejErWXY2enU1N3RBTU1IcEs5WVMrd29pOVB4Y1VWT3ZSNGNXTjloCkhoWlRDd3RYYUk1a1pwMEswU05aYWorbVNadEExdGFnMHBhWlJyS0UveWhpNjVmUXdub0VadEo4aUo2RW9VMDYKYWtySnV1U3FpbXpDR0hwVDVHbk1qZSswUHhOQ2VmdDRBTWRnbnpnUnlyaTlBZzFudkg3aGl3U0Y2MmJGdHFTdwpXK3FIbGYxaktER2U0TDlxR3lCM0xVWE4wYzV6VWMrNUVRZ0Q1M1VHYXdCdlAzQlRCejVZUHpBM3NXSEZGSGdoCm5vWFh6azErV1phb096YThLRlNsYWYydTJ4VUJEVnVJcFV5L3lGdWtSTWhyVEZzMVNldkVUWVRiTlFnb0NhMWUKdmF6Y2RzMlBRbkh5dmY0ZHUydGhxdm5JbDhKeERJVXF2aVljTHdJREFRQUJBb0lCQVFDUTNlYmJsR3lLUVlHTQp1R1d5NFoxdDdSYWtjY3NDNUw1ZDlkWFJsVDFkL0RmaUgyOUtqWWNVbEc2MW4rVDN6NlU1RVgwdlFxeEZ2R0JSCmYrT1lqR0Y0VDhhSU03RTBPN1h3eCtLVzN0VkFxajhqQ2gvMjdEdFVjZEcyZTFjRXROTlBoNURVVSt2NGtrcEQKQVo1c2NMMUc0UGl4MGMvT1A3VUE2aFJwdk9JWEtWT3FIYmZLZHFWNzFxOUJ6cWVWUDBDMVNaOGd1Ujl3TnpOMApDR0xYdlVWTUsvcXoxU2RUSjRwQThlaUtWTklHTWJIc0hQK2RjUGRsL1hFUDFtRE1kdlNJWXMyVkQ5QlU3dS9QCmJyK0YzeHlrbWZOZUVYcUtDcVMxanMzQjJXNWZNRUpDc3VoY0ZZdTZrdGh2NDRIcEVFZEE4aHRaTzhmeXIxS1QKd2t3cDh5V2hBb0dCQU5uUDJmd1ZLdXNFVWc4eEJGUDZXaU01dXRxSmdjak53MWdGOWloQVBTNW9IU2hyaGhNVApvbUd1a3FESVVJK3NZUUlkRDAxNWkxYnB2MWxuNDhreWppUG8vS1NOeVkyU2M1ZVpIdjA1WlVqaDR3SEFmVk9XCmQwY25CcHRZNnVwSFJaYk5Db1VNZlVFUjZzK3I2ZmxYNFk2UjIraU5YaVg5MXFoK1pvMlJCcHNSQW9HQkFNc1UKeXhpVFFkS1RCUU9UaFVSdkJ2azYvM2w5aHlBUWlBSHcwN2xOVVlmNmRQbzM5dnZjM2ZmS3pwc2M3ZWJwYXdyKworeFRyNzJ6UUE2bmYya0Y0ZkdLWTJNL1ZjamxLKzJsbGU3L1JtNkd3c0hEZXVLUndXNm4yNFAwYUFnSkxkSGRBCjdTTnVRNWNwd29rb1lldVBFcXFuZmhNc3BGMkhaVHZpY0h1ZTZNTS9Bb0dBYVZUZjJNMHZ4Um1BeVlIdDB1SzEKNW1VTG5KVjA0dlBHclBHdEdjZi9Ea3NoRnFQdzRaYlVKeUx5RzdqalpLZDYvamVwWjlFSWRrNXh6NzJ5NVdDQwpacEZNWkJPQlRlcHQ1ZmtSaUduU05rMnVwdkU4YWtqUWcrTTJpYmVWV3hoK2FhL2NqM3o3c1pVRmxjcFFTdG1aCjVZVlo4SHMwOVhCczkyeXhFWEw1MjlFQ2dZRUFwKzNrMThpRlNJeDhPKzNUNkxmcXkxTW5DSjV0aTIxQUdtSzcKb1dJM1Jqc3NXZkRIVXBQY0ZOaG1xa3RzeW1KQU90S0lhMCtDSjdlSElBVFVwUWp0eWxaY0N0aVU0SjJKY2lrUwpBSmRpbTN6UkdqU0IrTEJVakNKeS83aHU2dGpjVVBTbVk0TDliMVYvNFEwOGs1NDJzRmxhWHA2dXVBeUxBTThKCmRwLzhGOThDZ1lFQW5mdWRwZmdydVhCSGJQL0dmd3R2bjVvTW44RzZITG5ldnp3dXBRaVFzbXMxTnJtbUNIM3QKTi9KNTh1V2pNTXV3L01YUWVHeVNJOWNXSVpldjBOWUxlOVptOEIxaGFXV1lFS0srbHB4SDN1YVhhRXJ5NWNIRwpLdEU3bFZiMkU3QVJxQUF3N0NSdThrUitPbEplYWlRcnlWUU50YXJIRGJOU3h5d2xGU0p0ZCswPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

    For the cluster.server field in the kubeconfig above, should I change it to this machine's IP, or to something else?

    I changed it to the member's IP:6443.
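
    Concretely, the edited line became (using the member IP noted further down):

    server: https://172.20.192.55:6443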

    Next I followed the join-cluster steps, selected the agent connection, and created agent.yaml on the member as instructed, then applied it.
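
    The apply step (a sketch; agent.yaml is the file created above):

    kubectl apply -f agent.yaml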
    The member's current state:

    h00283@harbor-poc:~/kubesphere$ kubectl -n kubesphere-system get pod
    NAME                             READY   STATUS    RESTARTS   AGE
    cluster-agent-5cd5fb86c4-k2bcm   1/1     Running   0          13m
    h00283@harbor-poc:~/kubesphere$ helm -n kubesphere-system list -a
    NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION

    Still unready.

    host: 192.168.1.3
    member to be joined: 172.20.192.55

    Everything was done on bare-metal machines.

    Member agent log:

    h00283@harbor-poc:~/kubesphere$ kubectl -n kubesphere-system logs cluster-agent-5cd5fb86c4-k2bcm
    W1231 02:45:26.463075       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout
    W1231 02:45:26.463244       1 agent.go:173] Retrying in 100ms...
    W1231 02:46:11.565215       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 1)
    W1231 02:46:11.565279       1 agent.go:173] Retrying in 140ms...
    W1231 02:46:56.706739       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 2)
    W1231 02:46:56.706781       1 agent.go:173] Retrying in 195.999999ms...
    W1231 02:47:41.904411       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 3)
    W1231 02:47:41.904457       1 agent.go:173] Retrying in 274.399999ms...
    W1231 02:48:27.180983       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 4)
    W1231 02:48:27.181045       1 agent.go:173] Retrying in 384.159999ms...
    W1231 02:49:12.566843       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 5)
    W1231 02:49:12.566888       1 agent.go:173] Retrying in 537.823999ms...
    W1231 02:49:58.107589       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 6)
    W1231 02:49:58.107626       1 agent.go:173] Retrying in 752.953599ms...
    W1231 02:50:43.864439       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 7)
    W1231 02:50:43.864487       1 agent.go:173] Retrying in 1.054135039s...
    W1231 02:51:29.920940       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 8)
    W1231 02:51:29.921299       1 agent.go:173] Retrying in 1.475789055s...
    W1231 02:52:16.398630       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 9)
    W1231 02:52:16.398692       1 agent.go:173] Retrying in 2.066104678s...
    W1231 02:53:03.466992       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 10)
    W1231 02:53:03.467045       1 agent.go:173] Retrying in 2.892546549s...
    W1231 02:53:51.360559       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 11)
    W1231 02:53:51.360601       1 agent.go:173] Retrying in 4.049565169s...
    W1231 02:54:40.413363       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 12)
    W1231 02:54:40.413405       1 agent.go:173] Retrying in 5.669391237s...
    W1231 02:55:31.083595       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 13)
    W1231 02:55:31.083770       1 agent.go:173] Retrying in 7.937147732s...
    W1231 02:56:24.027535       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 14)
    W1231 02:56:24.027581       1 agent.go:173] Retrying in 11.112006825s...
    W1231 02:57:20.147236       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 15)
    W1231 02:57:20.147279       1 agent.go:173] Retrying in 15.556809555s...
    W1231 02:58:20.710673       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 16)
    W1231 02:58:20.711016       1 agent.go:173] Retrying in 21.779533378s...
    W1231 02:59:27.492944       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 17)
    W1231 02:59:27.492985       1 agent.go:173] Retrying in 30.491346729s...
    W1231 03:00:42.988578       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 18)
    W1231 03:00:42.989028       1 agent.go:173] Retrying in 42.687885421s...
    W1231 03:02:10.684908       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 19)
    W1231 03:02:10.684934       1 agent.go:173] Retrying in 59.763039589s...
    W1231 03:03:55.450910       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 20)
    W1231 03:03:55.450952       1 agent.go:173] Retrying in 1m23.668255425s...
    W1231 03:06:04.121546       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 21)
    W1231 03:06:04.121582       1 agent.go:173] Retrying in 1m57.135557595s...
    W1231 03:08:46.259200       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 22)
    W1231 03:08:46.259235       1 agent.go:173] Retrying in 2m0s...
    W1231 03:11:31.261487       1 agent.go:168] Connection error: dial tcp 192.168.1.3:30424: i/o timeout (Attempt: 23)
    W1231 03:11:31.261528       1 agent.go:173] Retrying in 2m0s...
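
    The agent keeps timing out while dialing 192.168.1.3:30424 from the member. A quick reachability check, as a sketch (nc from netcat is assumed to be available on the member node):

    # from the member node: is the host's proxy port reachable at all?
    nc -vz 192.168.1.3 30424

    # on the host: confirm which NodePort tower is actually publishing
    kubectl -n kubesphere-system get svc tower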