When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer. Moderators may close issues that do not follow the template.
Make sure the post is clearly formatted and readable, and format code with markdown code block syntax.
If you spend only one minute writing your question, you cannot expect others to spend half an hour answering it.

Operating system information
e.g. physical machine / Raspberry Pi, Ubuntu 20.04, 4 GB RAM

Kubernetes version information
Paste the output of `kubectl version` below.

```
root@pi:/opt/kubesphere# kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.12", GitCommit:"b058e1760c79f46a834ba59bd7a3486ecf28237d", GitTreeState:"clean", BuildDate:"2022-07-13T14:59:18Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.12", GitCommit:"b058e1760c79f46a834ba59bd7a3486ecf28237d", GitTreeState:"clean", BuildDate:"2022-07-13T14:53:39Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/arm64"}
```

Container runtime
Paste the output of `docker version` / `crictl version` / `nerdctl version` below.

```
root@pi:/opt/kubesphere# docker version
Client: Docker Engine - Community
 Version:           19.03.9
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        9d98839
 Built:             Fri May 15 00:25:48 2020
 OS/Arch:           linux/arm64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.9
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       9d98839
  Built:            Fri May 15 00:24:20 2020
  OS/Arch:          linux/arm64
  Experimental:     true
 containerd:
  Version:          1.6.26
  GitCommit:        3dd1e886e55dd695541fdcd67420c2888645a495
 runc:
  Version:          1.1.10
  GitCommit:        v1.1.10-0-g18a0cb0
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
```

KubeSphere version information

Installed with the All-in-One one-click installer:

```shell
./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.4.1
```

What is the problem
What does the error log say? Include a screenshot if possible.

After installation, the kubeedge plugin was enabled for edge nodes. The YAML of the ks-installer `clusterconfiguration` custom resource is as follows:

```yaml
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"installer.kubesphere.io/v1alpha1","kind":"ClusterConfiguration","metadata":{"annotations":{},"labels":{"version":"v3.4.1"},"name":"ks-installer","namespace":"kubesphere-system"},"spec":{"alerting":{"enabled":false},"auditing":{"enabled":false},"authentication":{"jwtSecret":""},"common":{"core":{"console":{"enableMultiLogin":true,"port":30880,"type":"NodePort"}},"es":{"basicAuth":{"enabled":false,"password":"","username":""},"elkPrefix":"logstash","enabled":false,"externalElasticsearchHost":"","externalElasticsearchPort":"","logMaxAge":7},"gpu":{"kinds":[{"default":true,"resourceName":"nvidia.com/gpu","resourceType":"GPU"}]},"minio":{"volumeSize":"20Gi"},"monitoring":{"GPUMonitoring":{"enabled":false},"endpoint":"http://prometheus-operated.kubesphere-monitoring-system.svc:9090"},"openldap":{"enabled":false,"volumeSize":"2Gi"},"opensearch":{"basicAuth":{"enabled":true,"password":"admin","username":"admin"},"dashboard":{"enabled":false},"enabled":true,"externalOpensearchHost":"","externalOpensearchPort":"","logMaxAge":7,"opensearchPrefix":"whizard"},"redis":{"enableHA":false,"enabled":false,"volumeSize":"2Gi"}},"devops":{"enabled":false,"jenkinsCpuLim":1,"jenkinsCpuReq":0.5,"jenkinsMemoryLim":"4Gi","jenkinsMemoryReq":"4Gi","jenkinsVolumeSize":"16Gi"},"edgeruntime":{"enabled":false,"kubeedge":{"cloudCore":{"cloudHub":{"advertiseAddress":[""]},"service":{"cloudhubHttpsNodePort":"30002","cloudhubNodePort":"30000","cloudhubQuicNodePort":"30001","cloudstreamNodePort":"30003","tunnelNodePort":"30004"}},"enabled":false,"iptables-manager":{"enabled":true,"mode":"external"}}},"etcd":{"endpointIps":"192.168.1.8","monitoring":false,"port":2379,"tlsEnable":true},"events":{"enabled":false,"ruler":{"enabled":true,"replicas":2}},"gatekeeper":{"enabled":false},"logging":{"enabled":false,"logsidecar":{"enabled":true,"replicas":2}},"metrics_server":{"enabled":false},"monitoring":{"gpu":{"nvidia_dcgm_exporter":{"enabled":false}},"node_exporter":{"port":9100},"storageClass":""},"multicluster":{"clusterRole":"none"},"network":{"ippool":{"type":"none"},"networkpolicy":{"enabled":false},"topology":{"type":"none"}},"openpitrix":{"store":{"enabled":false}},"persistence":{"storageClass":""},"servicemesh":{"enabled":false,"istio":{"components":{"cni":{"enabled":false},"ingressGateways":[{"enabled":false,"name":"istio-ingressgateway"}]}}},"terminal":{"timeout":600}}}
  labels:
    version: v3.4.1
  name: ks-installer
  namespace: kubesphere-system
spec:
  alerting:
    enabled: false
  auditing:
    enabled: false
  authentication:
    jwtSecret: ''
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    es:
      basicAuth:
        enabled: false
        password: ''
        username: ''
      elkPrefix: logstash
      enabled: false
      externalElasticsearchHost: ''
      externalElasticsearchPort: ''
      logMaxAge: 7
    gpu:
      kinds:
        - default: true
          resourceName: nvidia.com/gpu
          resourceType: GPU
    minio:
      volumeSize: 20Gi
    monitoring:
      GPUMonitoring:
        enabled: false
      endpoint: 'http://prometheus-operated.kubesphere-monitoring-system.svc:9090'
    openldap:
      enabled: false
      volumeSize: 2Gi
    opensearch:
      basicAuth:
        enabled: true
        password: admin
        username: admin
      dashboard:
        enabled: false
      enabled: true
      externalOpensearchHost: ''
      externalOpensearchPort: ''
      logMaxAge: 7
      opensearchPrefix: whizard
    redis:
      enableHA: false
      enabled: false
      volumeSize: 2Gi
  devops:
    enabled: false
    jenkinsCpuLim: 1
    jenkinsCpuReq: 0.5
    jenkinsMemoryLim: 4Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 16Gi
  edgeruntime:
    enabled: true
    kubeedge:
      cloudCore:
        cloudHub:
          advertiseAddress:
            - 192.168.1.14
        service:
          cloudhubHttpsNodePort: '30002'
          cloudhubNodePort: '30000'
          cloudhubQuicNodePort: '30001'
          cloudstreamNodePort: '30003'
          tunnelNodePort: '30004'
      enabled: true
      iptables-manager:
        enabled: true
        mode: external
  etcd:
    endpointIps: 192.168.1.8
    monitoring: false
    port: 2379
    tlsEnable: true
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  gatekeeper:
    enabled: false
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
    node_exporter:
      port: 9100
    storageClass: ''
  multicluster:
    clusterRole: none
  network:
    ippool:
      type: none
    networkpolicy:
      enabled: false
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  persistence:
    storageClass: ''
  servicemesh:
    enabled: false
    istio:
      components:
        cni:
          enabled: false
        ingressGateways:
          - enabled: false
            name: istio-ingressgateway
  terminal:
    timeout: 600
status:
  clusterId: 48058a46-436b-48a3-98ed-0453e7bccd6e-1702955248
  core:
    enabledTime: '2023-12-19T11:54:22CST'
    status: enabled
    version: v3.4.1
  edgeruntime:
    enabledTime: '2023-12-19T11:18:41CST'
    status: enabled
  monitoring:
    enabledTime: '2023-12-19T11:58:12CST'
    status: enabled
```
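For reference, only a small subset of the ClusterConfiguration above actually drives the KubeEdge install. A minimal patch with just those fields (values copied from the resource above) would look like:

```yaml
# Minimal subset of the ClusterConfiguration spec that enables KubeEdge.
# Values are taken from the full resource above; advertiseAddress must be
# an address the edge nodes can use to reach cloudcore.
spec:
  edgeruntime:
    enabled: true
    kubeedge:
      enabled: true
      cloudCore:
        cloudHub:
          advertiseAddress:
            - 192.168.1.14
```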

After enabling the kubeedge plugin for edge nodes, the pods in the kubeedge namespace:

```
root@pi:/opt/kubesphere# kubectl get pod -n kubeedge
NAME                           READY   STATUS    RESTARTS        AGE
cloud-iptables-manager-22w8v   1/1     Running   1 (5h43m ago)   6h11m
cloudcore-88d6dd9b5-tl4cz      1/1     Running   1 (5h43m ago)   6h11m
edgeservice-78d6dffb9d-g2v2h   1/1     Running   1 (5h43m ago)   6h11m
```

Viewing pods in all namespaces on the node:

```
root@pi:/opt/kubesphere# kubectl get pod --all-namespaces
NAMESPACE                      NAME                                               READY   STATUS             RESTARTS         AGE
kube-system                    calico-kube-controllers-5bf6854bb9-r9d9c           1/1     Running            4 (5h43m ago)    5h46m
kube-system                    calico-node-qgc9w                                  1/1     Running            0                5h46m
kube-system                    coredns-5495dd7c88-8v255                           1/1     Running            1 (6h8m ago)     7h13m
kube-system                    coredns-5495dd7c88-gpchj                           1/1     Running            1 (6h8m ago)     7h13m
kube-system                    kube-apiserver-pi                                  1/1     Running            1 (6h8m ago)     7h13m
kube-system                    kube-controller-manager-pi                         1/1     Running            1 (6h8m ago)     7h13m
kube-system                    kube-proxy-hw96v                                   1/1     Running            1 (6h8m ago)     7h13m
kube-system                    kube-scheduler-pi                                  1/1     Running            1 (6h8m ago)     7h13m
kube-system                    nodelocaldns-5mvbm                                 1/1     Running            1 (6h8m ago)     7h13m
kube-system                    openebs-localpv-provisioner-58d9ff469c-bwltm       1/1     Running            1 (6h8m ago)     7h13m
kube-system                    snapshot-controller-0                              0/1     CrashLoopBackOff   88 (2m41s ago)   7h8m
kubeedge                       cloud-iptables-manager-22w8v                       1/1     Running            1 (6h8m ago)     6h35m
kubeedge                       cloudcore-88d6dd9b5-tl4cz                          1/1     Running            1 (6h8m ago)     6h35m
kubeedge                       edgeservice-78d6dffb9d-g2v2h                       1/1     Running            1 (6h8m ago)     6h35m
kubesphere-controls-system     default-http-backend-5bf68ff9b8-kxw8n              0/1     CrashLoopBackOff   83 (2m16s ago)   7h1m
kubesphere-controls-system     kubectl-admin-6dbcb94855-4lcm7                     1/1     Running            1 (6h8m ago)     6h48m
kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running            2 (6h8m ago)     6h53m
kubesphere-monitoring-system   kube-state-metrics-554c8c5d65-jwjsl                3/3     Running            3 (6h8m ago)     6h54m
kubesphere-monitoring-system   node-exporter-cbn6k                                2/2     Running            2 (6h8m ago)     6h54m
kubesphere-monitoring-system   notification-manager-deployment-566fb6ccf5-gl4bx   2/2     Running            2 (6h8m ago)     6h51m
kubesphere-monitoring-system   notification-manager-operator-8694799c76-nl2vl     2/2     Running            2 (6h8m ago)     6h51m
kubesphere-monitoring-system   prometheus-k8s-0                                   2/2     Running            2 (6h8m ago)     6h53m
kubesphere-monitoring-system   prometheus-operator-8955bbd98-5s9s5                2/2     Running            2 (6h8m ago)     6h55m
kubesphere-system              ks-apiserver-7fd66f7885-hmlcs                      1/1     Running            1 (6h8m ago)     7h1m
kubesphere-system              ks-console-85c97b6d7d-dqbxb                        1/1     Running            1 (6h8m ago)     7h1m
kubesphere-system              ks-controller-manager-798444f496-nbrjz             1/1     Running            1 (6h8m ago)     7h1m
kubesphere-system              ks-installer-5594ffc86d-qfw4c                      1/1     Running            1 (6h8m ago)     7h13m
```

When the edge node tries to join the cluster, it fails with:

```
root@pi:/opt/kubesphere# ./keadm join --kubeedge-version=1.13.0 --cloudcore-ipport=192.168.1.14:10000 --quicport 10001 --certport 10002 --tunnelport 10004 --edgenode-name pi --edgenode-ip 192.168.1.8 --token afae656b64dd4fb11fff10a78870ba975ba5b0ed4656d727a8efb675634003b9.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE3MDMwNjI0MDN9.8XUVPVdU1UukWph_JhWjHO8ZHDOwo9aRh2dihwWYMn0
I1219 17:37:17.092989   54992 command.go:845] 1. Check KubeEdge edgecore process status
I1219 17:37:17.140614   54992 command.go:845] 2. Check if the management directory is clean
I1219 17:37:17.140777   54992 join.go:110] 3. Create the necessary directories
I1219 17:37:17.159681   54992 join.go:202] 4. Pull Images
Pulling kubeedge/installation-package:v1.13.0 ...
Successfully pulled kubeedge/installation-package:v1.13.0
Pulling eclipse-mosquitto:1.6.15 ...
Successfully pulled eclipse-mosquitto:1.6.15
Pulling kubeedge/pause:3.6 ...
Successfully pulled kubeedge/pause:3.6
I1219 17:37:17.165565   54992 join.go:202] 5. Copy resources from the image to the management directory
E1219 17:37:18.311616   54992 remote_runtime.go:198] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e56a74dc143e864b00446840aba11236e8247905d6b23c491b9be391542aa0\": plugin type=\"calico\" failed (add): pods \"edgecore\" not found"
Error: edge node join failed: copy resources failed: rpc error: code = Unknown desc = failed to setup network for sandbox "88e56a74dc143e864b00446840aba11236e8247905d6b23c491b9be391542aa0": plugin type="calico" failed (add): pods "edgecore" not found
execute keadm command failed:  edge node join failed: copy resources failed: rpc error: code = Unknown desc = failed to setup network for sandbox "88e56a74dc143e864b00446840aba11236e8247905d6b23c491b9be391542aa0": plugin type="calico" failed (add): pods "edgecore" not found
```

A fix found online suggests downloading calico.yaml and re-applying it, but that did not work for me; the same error persists.
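The error message suggests that the runtime keadm talked to handed the edgecore sandbox to the cluster's Calico CNI plugin, which then looked up a pod named "edgecore" that does not exist in the apiserver. A hedged diagnostic sketch to see which CNI configs the runtime can pick up (`/etc/cni/net.d` is the conventional CNI config directory, an assumption; adjust if your runtime is configured differently):

```shell
# Hedged diagnostic sketch: list CNI configs that a CRI runtime such as
# containerd would hand new sandboxes. A calico conflist here explains the
# "plugin type=\"calico\" failed (add)" error during keadm join.
CNI_DIR="${CNI_DIR:-/etc/cni/net.d}"
if [ -d "$CNI_DIR" ]; then
  ls -l "$CNI_DIR"
  found=yes
else
  echo "no CNI config directory at $CNI_DIR"
  found=no
fi
```

If a calico config shows up here on an all-in-one node, any sandbox created through the CRI runtime, including keadm's, will go through Calico.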


Solved: a parameter needs to be appended to the join command. For example, the original command was:

```shell
arch=$(uname -m); if [[ $arch != x86_64 ]]; then arch='arm64'; fi;  curl -LO https://kubeedge.pek3b.qingstor.com/bin/v1.13.0/$arch/keadm-v1.13.0-linux-$arch.tar.gz  && tar xvf keadm-v1.13.0-linux-$arch.tar.gz && chmod +x keadm && ./keadm join --kubeedge-version=1.13.0 --cloudcore-ipport=X.X.X.X:10000 --quicport 10001 --certport 10002 --tunnelport 10004 --edgenode-name test --edgenode-ip 192.168.0.2 --token 05123da1aecece9e1000df10a628470bfb72f8a6fb93d80adb8f1ef635ff085f.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE3MDU2MTUwNjR9.tVLJ54T69Sk83kKb7VsHtbal9czSJGigKLxvZiAfVfE
```

Append `--runtimetype=docker` to the end of the command line:

```shell
arch=$(uname -m); if [[ $arch != x86_64 ]]; then arch='arm64'; fi;  curl -LO https://kubeedge.pek3b.qingstor.com/bin/v1.13.0/$arch/keadm-v1.13.0-linux-$arch.tar.gz  && tar xvf keadm-v1.13.0-linux-$arch.tar.gz && chmod +x keadm && ./keadm join --kubeedge-version=1.13.0 --cloudcore-ipport=X.X.X.X:10000 --quicport 10001 --certport 10002 --tunnelport 10004 --edgenode-name test --edgenode-ip 192.168.0.2 --token 05123da1aecece9e1000df10a628470bfb72f8a6fb93d80adb8f1ef635ff085f.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE3MDU2MTUwNjR9.tVLJ54T69Sk83kKb7VsHtbal9czSJGigKLxvZiAfVfE --runtimetype=docker
```
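A plausible explanation, offered as an assumption rather than a confirmed cause: without `--runtimetype=docker`, keadm v1.13 talks to the remote (CRI/containerd) runtime, which on this all-in-one node shares the cluster's Calico CNI config, so sandbox creation fails with the "pods \"edgecore\" not found" error; forcing the docker runtime sidesteps that path. A small sketch for picking the flag value from what is actually running on the node:

```shell
# Hedged sketch: choose the keadm --runtimetype value by probing the node.
# The probe is an assumption about the environment, not part of keadm itself.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  runtimetype=docker   # Docker Engine is up; match what kubelet uses here
else
  runtimetype=remote   # CRI runtimes such as containerd
fi
echo "use: ./keadm join ... --runtimetype=$runtimetype"
```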