• Installation & Deployment
  • kk-deployed k8s 1.23.7 + KubeSphere 3.3.0 with the Cilium network plugin: cilium containers fail to start

When creating a deployment question, please follow the template below. The more information you provide, the easier it is to get a timely answer.
If you spend only one minute writing your question, you cannot expect others to spend half an hour answering it.
Before posting, click the Preview (👀) button next to the Post button to make sure the post is formatted correctly.

Operating system information
For example: VM/bare metal, CentOS 7.9, 2C/4G

Kubernetes version information
For example: v1.23.7, 3 nodes

Container runtime
For example: docker/containerd, and which version, e.g. 1.20

KubeSphere version information
For example: v3.3.0, online installation, full installation without an existing K8s cluster.

What is the problem

Using kk to deploy k8s v1.23.7 and KubeSphere 3.3.0 with the Cilium network plugin; the cilium containers fail to start.

The detailed logs are below:

```
kubectl logs -n kube-system cilium-cxtzv

level=info msg="Cilium 1.8.3 54cf3810d 2020-09-04T14:01:53+02:00 go version go1.14.7 linux/amd64" subsys=daemon
level=info msg="cilium-envoy version: 0a9743dda269a0b0039c9db3cf7e0a637caad7a9/1.13.3/Modified/RELEASE/BoringSSL" subsys=daemon
level=info msg="clang (10.0.0) and kernel (5.19.0) versions: OK!" subsys=linux-datapath
level=info msg="linking environment: OK!" subsys=linux-datapath
level=warning msg="BPF system config check: NOT OK." error="CONFIG_BPF kernel parameter is required" subsys=linux-datapath
level=warning msg="BPF filesystem is going to be mounted automatically in /run/cilium/bpffs. However, it probably means that Cilium is running inside container and BPFFS is not mounted on the host. for more information, see: https://cilium.link/err-bpf-mount" subsys=bpf
level=warning msg="================================= WARNING ==========================================" subsys=bpf
level=warning msg="BPF filesystem is not mounted. This will lead to network disruption when Cilium pods" subsys=bpf
level=warning msg="are restarted. Ensure that the BPF filesystem is mounted in the host." subsys=bpf
level=warning msg="https://docs.cilium.io/en/stable/kubernetes/requirements/#mounted-bpf-filesystem" subsys=bpf
level=warning msg="====================================================================================" subsys=bpf
level=info msg="Mounting BPF filesystem at /run/cilium/bpffs" subsys=bpf
level=info msg="Detected mounted BPF filesystem at /run/cilium/bpffs" subsys=bpf
level=info msg="Valid label prefix configuration:" subsys=labels-filter
level=info msg=" - :io.kubernetes.pod.namespace" subsys=labels-filter
level=info msg=" - :io.cilium.k8s.namespace.labels" subsys=labels-filter
level=info msg=" - :app.kubernetes.io" subsys=labels-filter
level=info msg=" - !:io.kubernetes" subsys=labels-filter
level=info msg=" - !:kubernetes.io" subsys=labels-filter
level=info msg=" - !:.*beta.kubernetes.io" subsys=labels-filter
level=info msg=" - !:k8s.io" subsys=labels-filter
level=info msg=" - !:pod-template-generation" subsys=labels-filter
level=info msg=" - !:pod-template-hash" subsys=labels-filter
level=info msg=" - !:controller-revision-hash" subsys=labels-filter
level=info msg=" - !:annotation.*" subsys=labels-filter
level=info msg=" - !:etcd_node" subsys=labels-filter
level=info msg="Auto-disabling \"enable-bpf-clock-probe\" feature since KERNEL_HZ cannot be determined" error="Cannot probe CONFIG_HZ" subsys=daemon
level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.45.0.0/16
level=info msg="Initializing daemon" subsys=daemon
level=info msg="Establishing connection to apiserver" host="https://10.233.0.1:443" subsys=k8s
level=info msg="Connected to apiserver" subsys=k8s
level=info msg="Inheriting MTU from external network interface" device=ens192 ipAddr=172.16.20.45 mtu=1500 subsys=mtu
level=info msg="Trying to auto-enable \"enable-node-port\", \"enable-external-ips\", \"enable-host-reachable-services\", \"enable-host-port\", \"enable-session-affinity\" features" subsys=daemon
level=info msg="Restored services from maps" failed=0 restored=0 subsys=service
level=info msg="Creating CRD (CustomResourceDefinition)..." name=CiliumNetworkPolicy/v2 subsys=k8s
level=info msg="Creating CRD (CustomResourceDefinition)..." name=v2.CiliumNode subsys=k8s
level=info msg="Creating CRD (CustomResourceDefinition)..." name=CiliumClusterwideNetworkPolicy/v2 subsys=k8s
level=info msg="Creating CRD (CustomResourceDefinition)..." name=v2.CiliumIdentity subsys=k8s
level=info msg="Creating CRD (CustomResourceDefinition)..." name=v2.CiliumEndpoint subsys=k8s
level=fatal msg="Unable to register CRDs" error="Unable to create custom resource definition: the server could not find the requested resource" subsys=daemon
```
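The `level=fatal` line at the end is the actual failure. A likely explanation (an assumption worth verifying, not stated in the logs): Cilium 1.8 creates its CRDs through the `apiextensions.k8s.io/v1beta1` API, which Kubernetes removed in v1.22, so on a v1.23.7 cluster the create call fails with "the server could not find the requested resource". A minimal sketch of that version check:

```shell
# Sketch of the suspected version mismatch: CRDs under
# apiextensions.k8s.io/v1beta1 were removed in Kubernetes v1.22,
# so older Cilium releases that still use v1beta1 cannot register
# their CRDs on v1.22+ clusters.
k8s_minor=23   # minor version of the cluster in this thread (v1.23.7)

if [ "$k8s_minor" -ge 22 ]; then
  echo "apiextensions.k8s.io/v1beta1 no longer served; a newer Cilium is required"
else
  echo "apiextensions.k8s.io/v1beta1 still served"
fi
```

On a live cluster, `kubectl api-versions | grep apiextensions` shows which CRD API versions the apiserver actually serves.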

17 days later

The Cilium version is too old.

kubesphere/kubesphere#4318

Thanks for the pointer to the cilium CLI. I was able to deploy a new Cilium into KubeSphere with it, but I had to reinstall KubeSphere from scratch, because the cilium CLI does not accept cluster names containing a dot, and the default config-sample.yaml has clusterName: cluster.local.

```
kk delete cluster -f config-sample.yaml --debug
kk create cluster -f config-sample.yaml --with-kubesphere v3.1.0
kubectl delete serviceaccounts cilium -n kube-system
kubectl delete serviceaccounts cilium-operator -n kube-system
kubectl delete clusterroles cilium-operator -n kube-system
kubectl delete clusterroles cilium -n kube-system
kubectl delete clusterrolebindings cilium -n kube-system
kubectl delete clusterrolebindings cilium-operator -n kube-system
kubectl delete configmap cilium-config -n kube-system
kubectl delete daemonsets cilium -n kube-system
kubectl delete deployments cilium-operator -n kube-system
cilium install cluster-dev
cilium status

    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet       cilium           Desired: 1, Ready: 1/1, Available: 1/1
Deployment      cilium-operator  Desired: 1, Ready: 1/1, Available: 1/1
Containers:     cilium           Running: 1
                cilium-operator  Running: 1
Cluster Pods:   16/16 managed by Cilium
Image versions  cilium           quay.io/cilium/cilium:v1.10.4: 1
                cilium-operator  quay.io/cilium/operator-generic:v1.10.4: 1
```
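The dot problem described above can also be avoided up front by editing the cluster name in the generated sample before running `kk create cluster`. A hedged fragment (field names as in a typical kk config-sample.yaml, so verify against your own generated file; `clusterdev` is a made-up name):

```yaml
# Fragment of config-sample.yaml (kk's generated sample).
# The default is clusterName: cluster.local, which the cilium CLI
# rejects because of the dot.
spec:
  kubernetes:
    version: v1.23.7
    clusterName: clusterdev   # hypothetical dot-free name
```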
