kevendeng ShadowOvO I remember you said you couldn't ping 114 from inside the Pod, right? If the Pod can't reach the external network at all, the root cause isn't CoreDNS but the network path as a whole. The ip route output from inside your Pod looks fine, so run iptables-save on the node and check whether the expected interception and forwarding rules are in place.
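As a rough illustration of what to look for in the iptables-save output, here is a grep over a sample NAT excerpt containing the MASQUERADE rules flannel typically installs for a 10.244.0.0/16 network (the sample rules are illustrative, not taken from the affected node):

```shell
# Illustrative excerpt of `iptables-save -t nat` on a healthy flannel node
# (assumed rules for a 10.244.0.0/16 network, not from the affected node).
cat > /tmp/nat-sample.txt <<'EOF'
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/24 -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
EOF
# Outbound Pod traffic is only masqueraded when its *source* falls inside
# the configured network; a Pod on any other subnet matches no rule here.
grep 'MASQUERADE' /tmp/nat-sample.txt
```

If `iptables-save -t nat | grep MASQUERADE` on the node shows rules whose source range does not cover the Pod's address, outbound NAT never happens.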
laminar You can capture packets on the host to watch the Pod's outbound traffic, and check whether flannel's /run/flannel/subnet.env matches the address on the flannel.1 interface. It's also worth asking Alibaba Cloud support to verify the related network configuration, such as security groups and firewall rules.
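For reference, a healthy subnet.env looks roughly like this (the values below are assumed, not from the affected node); FLANNEL_NETWORK should match flannel's configured cluster network, and FLANNEL_SUBNET should match the address on flannel.1 (compare with `ip -4 addr show flannel.1` on the node):

```shell
# Illustrative /run/flannel/subnet.env contents (assumed values).
# The file is plain KEY=VALUE, so it can simply be sourced.
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
. /tmp/subnet.env
# FLANNEL_SUBNET is the per-node range; it must sit inside FLANNEL_NETWORK,
# and FLANNEL_IPMASQ must be true for Pods to reach the internet via NAT.
echo "network=$FLANNEL_NETWORK subnet=$FLANNEL_SUBNET ipmasq=$FLANNEL_IPMASQ"
```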
kevendeng ShadowOvO Your flannel cluster network is configured as 10.244.0.0/16, but this node was allocated the subnet 10.1.0.0/24, which is outside that range. In the iptables rules, flannel has already installed the MASQUERADE rules it is supposed to, but they don't cover the node's subnet, so masquerading fails and Pods can't reach the external network. Even if you add a MASQUERADE rule by hand, your current configuration can only run a single-node cluster; as soon as you add another node the network will break again. Why the node was allocated a subnet outside the configured cluster range is worth investigating.
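The containment check described above can be scripted. A minimal sketch in plain shell, using the two CIDR values from this thread (the helper functions are ad-hoc, not part of any tool):

```shell
# Check whether the node's allocated subnet falls inside the cluster network.
cluster=10.244.0.0/16   # flannel's configured Network (from this thread)
node=10.1.0.0/24        # the subnet this node actually received

ip2int() {                       # dotted quad -> 32-bit integer
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

cidr_contains() {                # cidr_contains BIG SMALL
  local big_ip=${1%/*} big_len=${1#*/} small_ip=${2%/*} small_len=${2#*/}
  [ "$small_len" -ge "$big_len" ] || return 1
  local mask=$(( (0xffffffff << (32 - big_len)) & 0xffffffff ))
  [ $(( $(ip2int "$big_ip") & mask )) -eq $(( $(ip2int "$small_ip") & mask )) ]
}

if cidr_contains "$cluster" "$node"; then
  echo "node subnet is inside the cluster network"
else
  echo "node subnet is OUTSIDE the cluster network -> flannel's MASQUERADE rules miss it"
fi
```

For 10.1.0.0/24 against 10.244.0.0/16 the masked prefixes differ, which is exactly why the Pod traffic is never masqueraded.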
ShadowOvO kevendeng I also ran into another problem, not sure whether you've seen it. I inserted a MASQUERADE rule as the first rule, written in IP/mask form, but when I list the rules it shows up in hostname form, and the IP-form rule doesn't take effect and shows no traffic. What's going on?
kevendeng ShadowOvO You're running iptables -L, right? By default it does a reverse DNS lookup on IP addresses and displays hostnames; just use iptables -L -n. As for the kubespheredev hostname, that's presumably something you configured yourself, and it has no effect on how the iptables rules behave. If a rule doesn't behave as expected, the rule itself was probably written wrong.
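To see the kind of reverse lookup iptables -L performs, getent reproduces it against the local hosts database (the commented iptables line is what to actually run on the node, as root):

```shell
# `iptables -L` reverse-resolves every address, so an IP with a matching
# /etc/hosts or PTR record is printed as a hostname (e.g. "kubespheredev").
# The lookup itself can be reproduced with getent:
getent hosts 127.0.0.1 | awk '{print $2}'
# On the node, list the NAT rules numerically instead (requires root):
#   iptables -t nat -L POSTROUTING -n -v --line-numbers
```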
kevendeng ShadowOvO Your screenshot only shows that your local client can reach port 30880 on the host. The full path still has to traverse the Service, NAT, flannel's overlay network, and the Pod, and then the reply has to come back the same way; and your flannel configuration appears to be the problem.
ShadowOvO kevendeng Right, access over the internal network works without any problem.

-------- flannel config --------

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-06-07T07:19:24Z"
  generateName: kube-flannel-ds-
  labels:
    app: flannel
    controller-revision-hash: 7fb8b954f9
    pod-template-generation: "1"
    tier: node
  name: kube-flannel-ds-zckq2
  namespace: kube-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: kube-flannel-ds
    uid: eeffaee4-c706-4902-943a-dc674ed5fac9
  resourceVersion: "45705"
  selfLink: /api/v1/namespaces/kube-system/pods/kube-flannel-ds-zckq2
  uid: 107e0185-230e-44ca-b6b7-25a153ed91d0
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values:
            - kubernetesdev
  containers:
  - args:
    - --ip-masq
    - --kube-subnet-mgr
    command:
    - /opt/bin/flanneld
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    image: quay.io/coreos/flannel:v0.14.0
    imagePullPolicy: IfNotPresent
    name: kube-flannel
    resources:
      limits:
        cpu: 100m
        memory: 50Mi
      requests:
        cpu: 100m
        memory: 50Mi
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
      privileged: false
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /run/flannel
      name: run
    - mountPath: /etc/kube-flannel/
      name: flannel-cfg
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: flannel-token-6nqmq
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostNetwork: true
  initContainers:
  - args:
    - -f
    - /etc/kube-flannel/cni-conf.json
    - /etc/cni/net.d/10-flannel.conflist
    command:
    - cp
    image: quay.io/coreos/flannel:v0.14.0
    imagePullPolicy: IfNotPresent
    name: install-cni
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/cni/net.d
      name: cni
    - mountPath: /etc/kube-flannel/
      name: flannel-cfg
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: flannel-token-6nqmq
      readOnly: true
  nodeName: kubernetesdev
  preemptionPolicy: PreemptLowerPriority
  priority: 2000001000
  priorityClassName: system-node-critical
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: flannel
  serviceAccountName: flannel
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoSchedule
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/disk-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/pid-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/unschedulable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/network-unavailable
    operator: Exists
  volumes:
  - hostPath:
      path: /run/flannel
      type: ""
    name: run
  - hostPath:
      path: /etc/cni/net.d
      type: ""
    name: cni
  - configMap:
      defaultMode: 420
      name: kube-flannel-cfg
    name: flannel-cfg
  - name: flannel-token-6nqmq
    secret:
      defaultMode: 420
      secretName: flannel-token-6nqmq
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-06-07T07:19:25Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-06-07T07:22:49Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-06-07T07:22:49Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-06-07T07:19:24Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://37ae778489c6ee9202dbb9e0cc376afe12555f5bb6102052c332872532a3bb43
    image: quay.io/coreos/flannel:v0.14.0
    imageID: docker-pullable://quay.io/coreos/flannel@sha256:4a330b2f2e74046e493b2edc30d61fdebbdddaaedcb32d62736f25be8d3c64d5
    lastState:
      terminated:
        containerID: docker://0adbc7924866769ed23b88816c7f5cf02d397154a0eb44c5ed767427edf16b94
        exitCode: 0
        finishedAt: "2021-06-07T07:19:30Z"
        reason: Completed
        startedAt: "2021-06-07T07:19:25Z"
    name: kube-flannel
    ready: true
    restartCount: 1
    started: true
    state:
      running:
        startedAt: "2021-06-07T07:22:47Z"
  hostIP: 172.27.200.160
  initContainerStatuses:
  - containerID: docker://7d1e1d33afcf0eed98928c71c89e5381c7af2fbd28b413ebaf0a405d641df44d
    image: quay.io/coreos/flannel:v0.14.0
    imageID: docker-pullable://quay.io/coreos/flannel@sha256:4a330b2f2e74046e493b2edc30d61fdebbdddaaedcb32d62736f25be8d3c64d5
    lastState: {}
    name: install-cni
    ready: true
    restartCount: 1
    state:
      terminated:
        containerID: docker://7d1e1d33afcf0eed98928c71c89e5381c7af2fbd28b413ebaf0a405d641df44d
        exitCode: 0
        finishedAt: "2021-06-07T07:22:46Z"
        reason: Completed
        startedAt: "2021-06-07T07:22:46Z"
  phase: Running
  podIP: 172.27.200.160
  podIPs:
  - ip: 172.27.200.160
  qosClass: Burstable
  startTime: "2021-06-07T07:19:24Z"
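One detail in this manifest worth noting: flanneld runs with --kube-subnet-mgr, so each node's subnet comes from the Node object's spec.podCIDR (allocated by kube-controller-manager from its --cluster-cidr), not from etcd. The following commands (assuming kubectl access to this cluster; they are a sketch, not verified against it) compare that allocation with flannel's configured Network:

```shell
# Where the node's subnet comes from when flannel runs with --kube-subnet-mgr:
kubectl get node kubernetesdev -o jsonpath='{.spec.podCIDR}'; echo
# The cluster Network flannel was told to use:
kubectl -n kube-system get configmap kube-flannel-cfg \
  -o jsonpath='{.data.net-conf\.json}'
# If spec.podCIDR is not inside the Network printed above, the mismatch was
# created at node-allocation time: check the controller-manager's
# --cluster-cidr (kubeadm: networking.podSubnet), not flannel itself.
```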