DehaoCheng
Swap is disabled and the firewall is disabled as well; here is the iptables output.
I'm also pasting the kubelet logs below.
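For reference, the checks above and the log capture were done with commands roughly along these lines (the exact invocations are approximate, not copied from the shell history):

free -h && swapon --show            # confirm swap shows 0 / no entries
systemctl status firewalld          # confirm the firewall service is inactive
iptables -L -n -v                   # the iptables rules referenced above
journalctl -u kubelet --no-pager    # the kubelet log pasted below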
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com systemd[1]: Stopping kubelet: The Kubernetes Node Agent…
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com systemd[1]: Started kubelet: The Kubernetes Node Agent.
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: W1116 16:11:43.960052 20504 feature_gate.go:237] Setting GA feature gate TTLAfterFinished=true. It will be removed in a future release.
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: W1116 16:11:43.960133 20504 feature_gate.go:237] Setting GA feature gate TTLAfterFinished=true. It will be removed in a future release.
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:43.978739 20504 server.go:446] “Kubelet version” kubeletVersion=“v1.23.7”
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: W1116 16:11:43.978919 20504 feature_gate.go:237] Setting GA feature gate TTLAfterFinished=true. It will be removed in a future release.
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: W1116 16:11:43.979139 20504 feature_gate.go:237] Setting GA feature gate TTLAfterFinished=true. It will be removed in a future release.
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:43.979480 20504 server.go:874] “Client rotation is on, will bootstrap in background”
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:43.984920 20504 certificate_store.go:130] Loading cert/key pair from “/var/lib/kubelet/pki/kubelet-client-current.pem”.
11月 16 16:11:43 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:43.986307 20504 dynamic_cafile_content.go:156] “Starting controller” name=“client-ca-bundle::/etc/kubernetes/pki/ca.crt”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.086264 20504 server.go:693] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.086541 20504 container_manager_linux.go:281] “Container manager verified user specified cgroup-root exists” cgroupRoot=[]
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.086692 20504 container_manager_linux.go:286] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:262144000 scale:0} d:{Dec:<nil>} s:250Mi Format:BinarySI}] SystemReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:262144000 scale:0} d:{Dec:<nil>} s:250Mi Format:BinarySI}] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:pid.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.086721 20504 topology_manager.go:133] “Creating topology manager with policy per scope” topologyPolicyName=“none” topologyScopeName=“container”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.086737 20504 container_manager_linux.go:321] “Creating device plugin manager” devicePluginEnabled=true
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.086787 20504 state_mem.go:36] “Initialized new in-memory state store”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.086849 20504 kubelet.go:313] “Using dockershim is deprecated, please consider using a full-fledged CRI implementation”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.086879 20504 client.go:80] “Connecting to docker on the dockerEndpoint” endpoint=“unix:///var/run/docker.sock”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.086895 20504 client.go:99] “Start docker client with request timeout” timeout=“2m0s”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.098479 20504 docker_service.go:571] “Hairpin mode is set but kubenet is not enabled, falling back to HairpinVeth” hairpinMode=promiscuous-bridge
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.098524 20504 docker_service.go:243] “Hairpin mode is set” hairpinMode=hairpin-veth
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.208182 20504 docker_service.go:258] “Docker cri networking managed by the network plugin” networkPluginName=“cni”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.223264 20504 docker_service.go:264] "Docker Info" dockerInfo=&{ID:LTG3:PIJ6:ZHNR:B247:JYBF:DRHW:PNVZ:EPTO:ZLNE:ZWVM:IMZK:RMSM Containers:22 ContainersRunning:9 ContainersPaused:0 ContainersStopped:13 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:71 SystemTime:2023-11-16T16:11:44.209557257+08:00 LoggingDriver:json-file CgroupDriver:systemd CgroupVersion:1 NEventsListener:0 KernelVersion:3.10.0-957.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSVersion:7 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000bdc2a0 NCPU:20 MemTotal:32968003584 GenericResources:[] DockerRootDir:/data/docker HTTPProxy: HTTPSProxy: NoProxy: Name:bj100-bcld-k8sslave185.bcld.com Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} io.containerd.runtime.v1.linux:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: DefaultAddressPools:[] Warnings:[]}
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.223310 20504 docker_service.go:279] “Setting cgroupDriver” cgroupDriver=“systemd”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.244005 20504 kubelet.go:416] “Attempting to sync node with API server”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.244053 20504 kubelet.go:278] “Adding static pod path” path=“/etc/kubernetes/manifests”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.244128 20504 kubelet.go:289] “Adding apiserver pod source”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.244156 20504 apiserver.go:42] “Waiting for node sync before watching apiserver pods”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.258267 20504 kuberuntime_manager.go:249] “Container runtime initialized” containerRuntime=“docker” version=“20.10.8” apiVersion=“1.41.0”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.259319 20504 server.go:1244] “Started kubelet”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:11:44.259468 20504 kubelet.go:1351] “Image garbage collection failed once. Stats initialization may not have completed yet” err=“failed to get imageFs info: unable to find data in memory cache”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.259552 20504 server.go:150] “Starting to listen” address=“0.0.0.0” port=10250
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.261436 20504 fs_resource_analyzer.go:67] “Starting FS ResourceAnalyzer”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.261558 20504 volume_manager.go:291] “Starting Kubelet Volume Manager”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.261564 20504 desired_state_of_world_populator.go:147] “Desired state populator starts to run”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.266234 20504 server.go:410] “Adding debug handlers to kubelet server”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.305290 20504 kubelet_network_linux.go:57] “Initialized protocol iptables rules.” protocol=IPv4
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.334525 20504 kubelet_network_linux.go:57] “Initialized protocol iptables rules.” protocol=IPv6
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.334588 20504 status_manager.go:161] “Starting to sync pod status with apiserver”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.334619 20504 kubelet.go:2016] “Starting kubelet main sync loop”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:11:44.334702 20504 kubelet.go:2040] “Skipping pod synchronization” err=“[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.362028 20504 kuberuntime_manager.go:1105] “Updating runtime config through cri with podcidr” CIDR=“10.233.72.0/24”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.362759 20504 docker_service.go:364] “Docker cri received runtime config” runtimeConfig=“&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.233.72.0/24,},}”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.363064 20504 kubelet_network.go:76] “Updating Pod CIDR” originalPodCIDR="" newPodCIDR=“10.233.72.0/24”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.378554 20504 kubelet_node_status.go:70] “Attempting to register node” node=“bj100-bcld-k8sslave185.bcld.com”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.386691 20504 kubelet_node_status.go:108] “Node was previously registered” node=“bj100-bcld-k8sslave185.bcld.com”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.386831 20504 kubelet_node_status.go:73] “Successfully registered node” node=“bj100-bcld-k8sslave185.bcld.com”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.402948 20504 setters.go:578] “Node became not ready” node=“bj100-bcld-k8sslave185.bcld.com” condition={Type:Ready Status:False LastHeartbeatTime:2023-11-16 16:11:44.402827489 +0800 CST m=+0.496776019 LastTransitionTime:2023-11-16 16:11:44.402827489 +0800 CST m=+0.496776019 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.414448 20504 docker_sandbox.go:402] “Failed to read pod IP from plugin/docker” err="networkPlugin cni failed on the status hook for pod \“fluent-bit-nx8sr_kubesphere-logging-system\”: CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb\”"
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:11:44.435105 20504 kubelet.go:2040] “Skipping pod synchronization” err=“container runtime status check may not have completed yet”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.443342 20504 cpu_manager.go:213] “Starting CPU manager” policy=“none”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.443371 20504 cpu_manager.go:214] “Reconciling” reconcilePeriod=“10s”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.443397 20504 state_mem.go:36] “Initialized new in-memory state store”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.443670 20504 state_mem.go:88] “Updated default CPUSet” cpuSet=""
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.443692 20504 state_mem.go:96] “Updated CPUSet assignments” assignments=map[]
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.443704 20504 policy_none.go:49] “None policy: Start”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.450902 20504 memory_manager.go:168] “Starting memorymanager” policy=“None”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.450950 20504 state_mem.go:35] “Initializing new in-memory state store”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.451226 20504 state_mem.go:75] “Updated machine memory state”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.456555 20504 manager.go:610] “Failed to read data from checkpoint” checkpoint=“kubelet_internal_checkpoint” err=“checkpoint is not found”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.456946 20504 plugin_manager.go:114] “Starting Kubelet Plugin Manager”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.635903 20504 pod_container_deletor.go:79] “Container not found in pod’s containers” containerID=“7bbe045b9b2b5aea8c2c1319cb23528e7ece38817ea764ac3121efa5e4b79eee”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.635977 20504 pod_container_deletor.go:79] “Container not found in pod’s containers” containerID=“2b3e30368caadbe02e306e0b1d3a5202fdd4bddc44ebc567883507af6d2e4096”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.636005 20504 pod_container_deletor.go:79] “Container not found in pod’s containers” containerID=“37846bd8876a3e4340df9500bb17c10cca820e6f6847a5a9e7a12d75604d9db7”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.636076 20504 pod_container_deletor.go:79] “Container not found in pod’s containers” containerID=“065836204be8e44c5dfb71c82c738f81618fc7dacc06ae4cb9e78cf48da4de0f”
11月 16 16:11:44 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:44.636104 20504 pod_container_deletor.go:79] “Container not found in pod’s containers” containerID=“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.244384 20504 apiserver.go:52] “Watching apiserver”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.248995 20504 topology_manager.go:200] “Topology Admit Handler”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.249318 20504 topology_manager.go:200] “Topology Admit Handler”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.249499 20504 topology_manager.go:200] “Topology Admit Handler”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.249638 20504 topology_manager.go:200] “Topology Admit Handler”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.249946 20504 topology_manager.go:200] “Topology Admit Handler”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.250221 20504 topology_manager.go:200] “Topology Admit Handler”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.267712 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“config-volume\” (UniqueName: \“kubernetes.io/configmap/ee873de2-1c2d-4d7a-aa34-a0a947f88435-config-volume\”) pod \“nodelocaldns-llbz4\” (UID: \“ee873de2-1c2d-4d7a-aa34-a0a947f88435\”) " pod=“kube-system/nodelocaldns-llbz4”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.267801 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“config\” (UniqueName: \“kubernetes.io/secret/e7a3329e-f527-4dc9-9328-a57656edef0b-config\”) pod \“fluent-bit-nx8sr\” (UID: \“e7a3329e-f527-4dc9-9328-a57656edef0b\”) " pod=“kubesphere-logging-system/fluent-bit-nx8sr”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.267858 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“lib-modules\” (UniqueName: \“kubernetes.io/host-path/7b238e96-c869-4183-b9a7-07b33d36153f-lib-modules\”) pod \“calico-node-m7847\” (UID: \“7b238e96-c869-4183-b9a7-07b33d36153f\”) " pod=“kube-system/calico-node-m7847”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.267981 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“sysfs\” (UniqueName: \“kubernetes.io/host-path/7b238e96-c869-4183-b9a7-07b33d36153f-sysfs\”) pod \“calico-node-m7847\” (UID: \“7b238e96-c869-4183-b9a7-07b33d36153f\”) " pod=“kube-system/calico-node-m7847”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.268056 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“proc\” (UniqueName: \“kubernetes.io/host-path/30048163-5b3f-4097-ac20-d6f1dba9212b-proc\”) pod \“node-exporter-xjlvg\” (UID: \“30048163-5b3f-4097-ac20-d6f1dba9212b\”) " pod=“kubesphere-monitoring-system/node-exporter-xjlvg”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.268106 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“root\” (UniqueName: \“kubernetes.io/host-path/30048163-5b3f-4097-ac20-d6f1dba9212b-root\”) pod \“node-exporter-xjlvg\” (UID: \“30048163-5b3f-4097-ac20-d6f1dba9212b\”) " pod=“kubesphere-monitoring-system/node-exporter-xjlvg”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.268162 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“varlibcontainers\” (UniqueName: \“kubernetes.io/host-path/e7a3329e-f527-4dc9-9328-a57656edef0b-varlibcontainers\”) pod \“fluent-bit-nx8sr\” (UID: \“e7a3329e-f527-4dc9-9328-a57656edef0b\”) " pod=“kubesphere-logging-system/fluent-bit-nx8sr”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.268209 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“positions\” (UniqueName: \“kubernetes.io/empty-dir/e7a3329e-f527-4dc9-9328-a57656edef0b-positions\”) pod \“fluent-bit-nx8sr\” (UID: \“e7a3329e-f527-4dc9-9328-a57656edef0b\”) " pod=“kubesphere-logging-system/fluent-bit-nx8sr”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.268278 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“var-run-calico\” (UniqueName: \“kubernetes.io/host-path/7b238e96-c869-4183-b9a7-07b33d36153f-var-run-calico\”) pod \“calico-node-m7847\” (UID: \“7b238e96-c869-4183-b9a7-07b33d36153f\”) " pod=“kube-system/calico-node-m7847”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.268333 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“cni-bin-dir\” (UniqueName: \“kubernetes.io/host-path/7b238e96-c869-4183-b9a7-07b33d36153f-cni-bin-dir\”) pod \“calico-node-m7847\” (UID: \“7b238e96-c869-4183-b9a7-07b33d36153f\”) " pod=“kube-system/calico-node-m7847”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.268378 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“cni-log-dir\” (UniqueName: \“kubernetes.io/host-path/7b238e96-c869-4183-b9a7-07b33d36153f-cni-log-dir\”) pod \“calico-node-m7847\” (UID: \“7b238e96-c869-4183-b9a7-07b33d36153f\”) " pod=“kube-system/calico-node-m7847”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.268466 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“xtables-lock\” (UniqueName: \“kubernetes.io/host-path/ee873de2-1c2d-4d7a-aa34-a0a947f88435-xtables-lock\”) pod \“nodelocaldns-llbz4\” (UID: \“ee873de2-1c2d-4d7a-aa34-a0a947f88435\”) " pod=“kube-system/nodelocaldns-llbz4”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.270051 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“kube-api-access-9ns9d\” (UniqueName: \“kubernetes.io/projected/ee873de2-1c2d-4d7a-aa34-a0a947f88435-kube-api-access-9ns9d\”) pod \“nodelocaldns-llbz4\” (UID: \“ee873de2-1c2d-4d7a-aa34-a0a947f88435\”) " pod=“kube-system/nodelocaldns-llbz4”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.270173 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“sys\” (UniqueName: \“kubernetes.io/host-path/30048163-5b3f-4097-ac20-d6f1dba9212b-sys\”) pod \“node-exporter-xjlvg\” (UID: \“30048163-5b3f-4097-ac20-d6f1dba9212b\”) " pod=“kubesphere-monitoring-system/node-exporter-xjlvg”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.270298 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“kube-api-access-vhglp\” (UniqueName: \“kubernetes.io/projected/30048163-5b3f-4097-ac20-d6f1dba9212b-kube-api-access-vhglp\”) pod \“node-exporter-xjlvg\” (UID: \“30048163-5b3f-4097-ac20-d6f1dba9212b\”) " pod=“kubesphere-monitoring-system/node-exporter-xjlvg”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.270406 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“systemd\” (UniqueName: \“kubernetes.io/host-path/e7a3329e-f527-4dc9-9328-a57656edef0b-systemd\”) pod \“fluent-bit-nx8sr\” (UID: \“e7a3329e-f527-4dc9-9328-a57656edef0b\”) " pod=“kubesphere-logging-system/fluent-bit-nx8sr”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.270618 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“kube-api-access-ddvmz\” (UniqueName: \“kubernetes.io/projected/e7a3329e-f527-4dc9-9328-a57656edef0b-kube-api-access-ddvmz\”) pod \“fluent-bit-nx8sr\” (UID: \“e7a3329e-f527-4dc9-9328-a57656edef0b\”) " pod=“kubesphere-logging-system/fluent-bit-nx8sr”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.270827 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“xtables-lock\” (UniqueName: \“kubernetes.io/host-path/7b238e96-c869-4183-b9a7-07b33d36153f-xtables-lock\”) pod \“calico-node-m7847\” (UID: \“7b238e96-c869-4183-b9a7-07b33d36153f\”) " pod=“kube-system/calico-node-m7847”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.270963 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“varlogs\” (UniqueName: \“kubernetes.io/host-path/e7a3329e-f527-4dc9-9328-a57656edef0b-varlogs\”) pod \“fluent-bit-nx8sr\” (UID: \“e7a3329e-f527-4dc9-9328-a57656edef0b\”) " pod=“kubesphere-logging-system/fluent-bit-nx8sr”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.271082 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“var-lib-calico\” (UniqueName: \“kubernetes.io/host-path/7b238e96-c869-4183-b9a7-07b33d36153f-var-lib-calico\”) pod \“calico-node-m7847\” (UID: \“7b238e96-c869-4183-b9a7-07b33d36153f\”) " pod=“kube-system/calico-node-m7847”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.271191 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“cni-net-dir\” (UniqueName: \“kubernetes.io/host-path/7b238e96-c869-4183-b9a7-07b33d36153f-cni-net-dir\”) pod \“calico-node-m7847\” (UID: \“7b238e96-c869-4183-b9a7-07b33d36153f\”) " pod=“kube-system/calico-node-m7847”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.371799 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“lib-modules\” (UniqueName: \“kubernetes.io/host-path/128a1966-6e31-48b2-b699-68ba40ea9fc6-lib-modules\”) pod \“kube-proxy-l97tq\” (UID: \“128a1966-6e31-48b2-b699-68ba40ea9fc6\”) " pod=“kube-system/kube-proxy-l97tq”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.371890 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“kube-api-access-hzp5p\” (UniqueName: \“kubernetes.io/projected/128a1966-6e31-48b2-b699-68ba40ea9fc6-kube-api-access-hzp5p\”) pod \“kube-proxy-l97tq\” (UID: \“128a1966-6e31-48b2-b699-68ba40ea9fc6\”) " pod=“kube-system/kube-proxy-l97tq”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.372064 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“host-local-net-dir\” (UniqueName: \“kubernetes.io/host-path/7b238e96-c869-4183-b9a7-07b33d36153f-host-local-net-dir\”) pod \“calico-node-m7847\” (UID: \“7b238e96-c869-4183-b9a7-07b33d36153f\”) " pod=“kube-system/calico-node-m7847”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.372137 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“kube-api-access-qcd5p\” (UniqueName: \“kubernetes.io/projected/7b238e96-c869-4183-b9a7-07b33d36153f-kube-api-access-qcd5p\”) pod \“calico-node-m7847\” (UID: \“7b238e96-c869-4183-b9a7-07b33d36153f\”) " pod=“kube-system/calico-node-m7847”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.372333 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“policysync\” (UniqueName: \“kubernetes.io/host-path/7b238e96-c869-4183-b9a7-07b33d36153f-policysync\”) pod \“calico-node-m7847\” (UID: \“7b238e96-c869-4183-b9a7-07b33d36153f\”) " pod=“kube-system/calico-node-m7847”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.372385 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“kube-proxy\” (UniqueName: \“kubernetes.io/configmap/128a1966-6e31-48b2-b699-68ba40ea9fc6-kube-proxy\”) pod \“kube-proxy-l97tq\” (UID: \“128a1966-6e31-48b2-b699-68ba40ea9fc6\”) " pod=“kube-system/kube-proxy-l97tq”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.372432 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“xtables-lock\” (UniqueName: \“kubernetes.io/host-path/128a1966-6e31-48b2-b699-68ba40ea9fc6-xtables-lock\”) pod \“kube-proxy-l97tq\” (UID: \“128a1966-6e31-48b2-b699-68ba40ea9fc6\”) " pod=“kube-system/kube-proxy-l97tq”
11月 16 16:11:45 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:45.372710 20504 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \“flexvol-driver-host\” (UniqueName: \“kubernetes.io/host-path/7b238e96-c869-4183-b9a7-07b33d36153f-flexvol-driver-host\”) pod \“calico-node-m7847\” (UID: \“7b238e96-c869-4183-b9a7-07b33d36153f\”) " pod=“kube-system/calico-node-m7847”
11月 16 16:11:46 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:46.445216 20504 request.go:665] Waited for 1.188705252s due to client-side throttling, not priority and fairness, request: GET:https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/pods/nodelocaldns-llbz4
11月 16 16:11:47 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:47.353564 20504 cni.go:334] “CNI failed to retrieve network namespace path” err="cannot find network namespace for the terminated container \“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb\”"
11月 16 16:11:50 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:11:50.917641 20504 reconciler.go:157] “Reconciler: start to sync state”
11月 16 16:12:17 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:12:17.452932 20504 cni.go:381] “Error deleting pod from network” err="error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb} podNetnsPath="" networkType=“calico” networkName=“k8s-pod-network”
11月 16 16:12:17 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:12:17.453927 20504 remote_runtime.go:245] “StopPodSandbox from runtime service failed” err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \“fluent-bit-nx8sr_kubesphere-logging-system\” network: error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" podSandboxID=“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb”
11月 16 16:12:17 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:12:17.453986 20504 kuberuntime_manager.go:1013] “Failed to stop sandbox” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb}
11月 16 16:12:17 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:12:17.454062 20504 kuberuntime_manager.go:756] “killPodWithSyncResult failed” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\""
11月 16 16:12:17 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:12:17.454122 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\"" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podUID=e7a3329e-f527-4dc9-9328-a57656edef0b
11月 16 16:12:28 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:12:28.338960 20504 cni.go:334] “CNI failed to retrieve network namespace path” err="cannot find network namespace for the terminated container \“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb\”"
11月 16 16:12:39 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:12:39.959398 20504 scope.go:110] “RemoveContainer” containerID=“0504a24c3be72bbcb9293a25b54b822d05f43387b3a245b6331fa06088562e22”
11月 16 16:12:58 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:12:58.439192 20504 cni.go:381] “Error deleting pod from network” err="error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb} podNetnsPath="" networkType=“calico” networkName=“k8s-pod-network”
11月 16 16:12:58 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:12:58.440059 20504 remote_runtime.go:245] “StopPodSandbox from runtime service failed” err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \“fluent-bit-nx8sr_kubesphere-logging-system\” network: error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" podSandboxID=“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb”
11月 16 16:12:58 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:12:58.440106 20504 kuberuntime_manager.go:1013] “Failed to stop sandbox” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb}
11月 16 16:12:58 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:12:58.440177 20504 kuberuntime_manager.go:756] “killPodWithSyncResult failed” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\""
11月 16 16:12:58 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:12:58.440235 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\"" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podUID=e7a3329e-f527-4dc9-9328-a57656edef0b
11月 16 16:13:09 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:13:09.337657 20504 cni.go:334] “CNI failed to retrieve network namespace path” err="cannot find network namespace for the terminated container \“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb\”"
11月 16 16:13:39 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:13:39.436684 20504 cni.go:381] “Error deleting pod from network” err="error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb} podNetnsPath="" networkType=“calico” networkName=“k8s-pod-network”
11月 16 16:13:39 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:13:39.437571 20504 remote_runtime.go:245] “StopPodSandbox from runtime service failed” err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \“fluent-bit-nx8sr_kubesphere-logging-system\” network: error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" podSandboxID=“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb”
11月 16 16:13:39 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:13:39.437621 20504 kuberuntime_manager.go:1013] “Failed to stop sandbox” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb}
11月 16 16:13:39 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:13:39.437694 20504 kuberuntime_manager.go:756] “killPodWithSyncResult failed” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\""
11月 16 16:13:39 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:13:39.437753 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\"" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podUID=e7a3329e-f527-4dc9-9328-a57656edef0b
11月 16 16:13:49 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:13:49.812474 20504 scope.go:110] “RemoveContainer” containerID=“38471c1796afb3b048fe26fb862434bb3899af91cfe2aca979d8f3bc9f3906e2”
11月 16 16:13:51 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:13:51.337880 20504 cni.go:334] “CNI failed to retrieve network namespace path” err="cannot find network namespace for the terminated container \“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb\”"
11月 16 16:14:21 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:14:21.439475 20504 cni.go:381] “Error deleting pod from network” err="error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb} podNetnsPath="" networkType=“calico” networkName=“k8s-pod-network”
11月 16 16:14:21 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:14:21.440354 20504 remote_runtime.go:245] “StopPodSandbox from runtime service failed” err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \“fluent-bit-nx8sr_kubesphere-logging-system\” network: error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" podSandboxID=“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb”
11月 16 16:14:21 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:14:21.440410 20504 kuberuntime_manager.go:1013] “Failed to stop sandbox” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb}
11月 16 16:14:21 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:14:21.440485 20504 kuberuntime_manager.go:756] “killPodWithSyncResult failed” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\""
11月 16 16:14:21 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:14:21.440543 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\"" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podUID=e7a3329e-f527-4dc9-9328-a57656edef0b
11月 16 16:14:34 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:14:34.339333 20504 cni.go:334] “CNI failed to retrieve network namespace path” err="cannot find network namespace for the terminated container \“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb\”"
11月 16 16:14:59 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:14:59.518575 20504 scope.go:110] “RemoveContainer” containerID=“147a0fdb467db2e8fc9802cb219dd595aabede32561907b587386e34a285152f”
11月 16 16:15:04 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:15:04.438821 20504 cni.go:381] “Error deleting pod from network” err="error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb} podNetnsPath="" networkType=“calico” networkName=“k8s-pod-network”
11月 16 16:15:04 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:15:04.439745 20504 remote_runtime.go:245] “StopPodSandbox from runtime service failed” err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \“fluent-bit-nx8sr_kubesphere-logging-system\” network: error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" podSandboxID=“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb”
11月 16 16:15:04 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:15:04.439796 20504 kuberuntime_manager.go:1013] “Failed to stop sandbox” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb}
11月 16 16:15:04 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:15:04.439871 20504 kuberuntime_manager.go:756] “killPodWithSyncResult failed” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\""
11月 16 16:15:04 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:15:04.439928 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\"" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podUID=e7a3329e-f527-4dc9-9328-a57656edef0b
11月 16 16:15:19 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:15:19.337683 20504 cni.go:334] “CNI failed to retrieve network namespace path” err="cannot find network namespace for the terminated container \“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb\”"
11月 16 16:15:49 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:15:49.437114 20504 cni.go:381] “Error deleting pod from network” err="error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb} podNetnsPath="" networkType=“calico” networkName=“k8s-pod-network”
11月 16 16:15:49 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:15:49.437937 20504 remote_runtime.go:245] “StopPodSandbox from runtime service failed” err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \“fluent-bit-nx8sr_kubesphere-logging-system\” network: error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" podSandboxID=“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb”
11月 16 16:15:49 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:15:49.437998 20504 kuberuntime_manager.go:1013] “Failed to stop sandbox” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb}
11月 16 16:15:49 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:15:49.438076 20504 kuberuntime_manager.go:756] “killPodWithSyncResult failed” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\""
11月 16 16:15:49 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:15:49.438137 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\"" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podUID=e7a3329e-f527-4dc9-9328-a57656edef0b
11月 16 16:16:02 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:16:02.338623 20504 cni.go:334] “CNI failed to retrieve network namespace path” err="cannot find network namespace for the terminated container \“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb\”"
11月 16 16:16:10 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:16:10.219328 20504 scope.go:110] “RemoveContainer” containerID=“0df929f91db79686e2de56c9133315b3016f171aff200fe0716ff8bb8c54feb8”
11月 16 16:16:32 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:16:32.439111 20504 cni.go:381] “Error deleting pod from network” err="error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb} podNetnsPath="" networkType=“calico” networkName=“k8s-pod-network”
11月 16 16:16:32 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:16:32.439926 20504 remote_runtime.go:245] “StopPodSandbox from runtime service failed” err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \“fluent-bit-nx8sr_kubesphere-logging-system\” network: error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" podSandboxID=“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb”
11月 16 16:16:32 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:16:32.439986 20504 kuberuntime_manager.go:1013] “Failed to stop sandbox” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb}
11月 16 16:16:32 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:16:32.440060 20504 kuberuntime_manager.go:756] “killPodWithSyncResult failed” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\""
11月 16 16:16:32 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:16:32.440126 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\"" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podUID=e7a3329e-f527-4dc9-9328-a57656edef0b
11月 16 16:16:47 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:16:47.338157 20504 cni.go:334] “CNI failed to retrieve network namespace path” err="cannot find network namespace for the terminated container \“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb\”"
11月 16 16:17:17 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:17.435510 20504 cni.go:381] “Error deleting pod from network” err="error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb} podNetnsPath="" networkType=“calico” networkName=“k8s-pod-network”
11月 16 16:17:17 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:17.436431 20504 remote_runtime.go:245] “StopPodSandbox from runtime service failed” err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \“fluent-bit-nx8sr_kubesphere-logging-system\” network: error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" podSandboxID=“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb”
11月 16 16:17:17 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:17.436487 20504 kuberuntime_manager.go:1013] “Failed to stop sandbox” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb}
11月 16 16:17:17 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:17.436562 20504 kuberuntime_manager.go:756] “killPodWithSyncResult failed” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\""
11月 16 16:17:17 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:17.436624 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\"" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podUID=e7a3329e-f527-4dc9-9328-a57656edef0b
11月 16 16:17:19 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:19.538690 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“StartContainer\” for \“calico-node\” with CrashLoopBackOff: \“back-off 1m20s restarting failed container=calico-node pod=calico-node-m7847_kube-system(7b238e96-c869-4183-b9a7-07b33d36153f)\”" pod=“kube-system/calico-node-m7847” podUID=7b238e96-c869-4183-b9a7-07b33d36153f
11月 16 16:17:19 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:17:19.908376 20504 scope.go:110] “RemoveContainer” containerID=“42cdc92f4c77560b29bd060cdbfbc1e24dbae8a061b88e04bd42857181bb06f5”
11月 16 16:17:19 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:17:19.908972 20504 scope.go:110] “RemoveContainer” containerID=“136deb2867fa653a2080f634df76cdcc58cf4713aa65ce54a23db3049009e05b”
11月 16 16:17:19 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:19.909918 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“StartContainer\” for \“calico-node\” with CrashLoopBackOff: \“back-off 1m20s restarting failed container=calico-node pod=calico-node-m7847_kube-system(7b238e96-c869-4183-b9a7-07b33d36153f)\”" pod=“kube-system/calico-node-m7847” podUID=7b238e96-c869-4183-b9a7-07b33d36153f
11月 16 16:17:28 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:17:28.338294 20504 cni.go:334] “CNI failed to retrieve network namespace path” err="cannot find network namespace for the terminated container \“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb\”"
11月 16 16:17:32 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:17:32.336221 20504 scope.go:110] “RemoveContainer” containerID=“136deb2867fa653a2080f634df76cdcc58cf4713aa65ce54a23db3049009e05b”
11月 16 16:17:32 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:32.337312 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“StartContainer\” for \“calico-node\” with CrashLoopBackOff: \“back-off 1m20s restarting failed container=calico-node pod=calico-node-m7847_kube-system(7b238e96-c869-4183-b9a7-07b33d36153f)\”" pod=“kube-system/calico-node-m7847” podUID=7b238e96-c869-4183-b9a7-07b33d36153f
11月 16 16:17:46 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:17:46.336341 20504 scope.go:110] “RemoveContainer” containerID=“136deb2867fa653a2080f634df76cdcc58cf4713aa65ce54a23db3049009e05b”
11月 16 16:17:46 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:46.337379 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“StartContainer\” for \“calico-node\” with CrashLoopBackOff: \“back-off 1m20s restarting failed container=calico-node pod=calico-node-m7847_kube-system(7b238e96-c869-4183-b9a7-07b33d36153f)\”" pod=“kube-system/calico-node-m7847” podUID=7b238e96-c869-4183-b9a7-07b33d36153f
11月 16 16:17:57 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:17:57.336280 20504 scope.go:110] “RemoveContainer” containerID=“136deb2867fa653a2080f634df76cdcc58cf4713aa65ce54a23db3049009e05b”
11月 16 16:17:57 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:57.337283 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“StartContainer\” for \“calico-node\” with CrashLoopBackOff: \“back-off 1m20s restarting failed container=calico-node pod=calico-node-m7847_kube-system(7b238e96-c869-4183-b9a7-07b33d36153f)\”" pod=“kube-system/calico-node-m7847” podUID=7b238e96-c869-4183-b9a7-07b33d36153f
11月 16 16:17:58 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:58.429702 20504 cni.go:381] “Error deleting pod from network” err="error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb} podNetnsPath="" networkType=“calico” networkName=“k8s-pod-network”
11月 16 16:17:58 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:58.430721 20504 remote_runtime.go:245] “StopPodSandbox from runtime service failed” err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \“fluent-bit-nx8sr_kubesphere-logging-system\” network: error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" podSandboxID=“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb”
11月 16 16:17:58 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:58.430789 20504 kuberuntime_manager.go:1013] “Failed to stop sandbox” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb}
11月 16 16:17:58 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:58.430866 20504 kuberuntime_manager.go:756] “killPodWithSyncResult failed” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\""
11月 16 16:17:58 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:17:58.430933 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\"" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podUID=e7a3329e-f527-4dc9-9328-a57656edef0b
11月 16 16:18:11 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:18:11.335522 20504 scope.go:110] “RemoveContainer” containerID=“136deb2867fa653a2080f634df76cdcc58cf4713aa65ce54a23db3049009e05b”
11月 16 16:18:11 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:18:11.336571 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“StartContainer\” for \“calico-node\” with CrashLoopBackOff: \“back-off 1m20s restarting failed container=calico-node pod=calico-node-m7847_kube-system(7b238e96-c869-4183-b9a7-07b33d36153f)\”" pod=“kube-system/calico-node-m7847” podUID=7b238e96-c869-4183-b9a7-07b33d36153f
11月 16 16:18:11 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:18:11.338042 20504 cni.go:334] “CNI failed to retrieve network namespace path” err="cannot find network namespace for the terminated container \“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb\”"
11月 16 16:18:23 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:18:23.336465 20504 scope.go:110] “RemoveContainer” containerID=“136deb2867fa653a2080f634df76cdcc58cf4713aa65ce54a23db3049009e05b”
11月 16 16:18:23 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:18:23.337465 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“StartContainer\” for \“calico-node\” with CrashLoopBackOff: \“back-off 1m20s restarting failed container=calico-node pod=calico-node-m7847_kube-system(7b238e96-c869-4183-b9a7-07b33d36153f)\”" pod=“kube-system/calico-node-m7847” podUID=7b238e96-c869-4183-b9a7-07b33d36153f
11月 16 16:18:35 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:18:35.336314 20504 scope.go:110] “RemoveContainer” containerID=“136deb2867fa653a2080f634df76cdcc58cf4713aa65ce54a23db3049009e05b”
11月 16 16:18:35 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:18:35.337326 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“StartContainer\” for \“calico-node\” with CrashLoopBackOff: \“back-off 1m20s restarting failed container=calico-node pod=calico-node-m7847_kube-system(7b238e96-c869-4183-b9a7-07b33d36153f)\”" pod=“kube-system/calico-node-m7847” podUID=7b238e96-c869-4183-b9a7-07b33d36153f
11月 16 16:18:41 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:18:41.435477 20504 cni.go:381] “Error deleting pod from network” err="error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb} podNetnsPath="" networkType=“calico” networkName=“k8s-pod-network”
11月 16 16:18:41 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:18:41.436318 20504 remote_runtime.go:245] “StopPodSandbox from runtime service failed” err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \“fluent-bit-nx8sr_kubesphere-logging-system\” network: error getting ClusterInformation: Get \“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\”: dial tcp 10.233.0.1:443: i/o timeout" podSandboxID=“c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb”
11月 16 16:18:41 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:18:41.436366 20504 kuberuntime_manager.go:1013] “Failed to stop sandbox” podSandboxID={Type:docker ID:c454cdf6bb8d804646747c889aa63fcadfa2890845321b4c62671eae7fad7fdb}
11月 16 16:18:41 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:18:41.436444 20504 kuberuntime_manager.go:756] “killPodWithSyncResult failed” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\""
11月 16 16:18:41 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: E1116 16:18:41.436505 20504 pod_workers.go:951] “Error syncing pod, skipping” err="failed to \“KillPodSandbox\” for \“e7a3329e-f527-4dc9-9328-a57656edef0b\” with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\“fluent-bit-nx8sr_kubesphere-logging-system\\\” network: error getting ClusterInformation: Get \\\“https://[10.233.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\”: dial tcp 10.233.0.1:443: i/o timeout\"" pod=“kubesphere-logging-system/fluent-bit-nx8sr” podUID=e7a3329e-f527-4dc9-9328-a57656edef0b
11月 16 16:18:48 bj100-bcld-k8sslave185.bcld.com kubelet[20504]: I1116 16:18:48.336340 20504 scope.go:110] “RemoveContainer” containerID=“136deb2867fa653a2080f634df76cdcc58cf4713aa65ce54a23db3049009e05b”