OS information
Cloud VM, Ubuntu 22.04, 4 vCPU / 16 GB RAM
Kubernetes version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:34:27Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:27:46Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
Container runtime
Server: Docker Engine - Community
 Engine:
  Version:          24.0.6
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.7
  Git commit:       1a79695
  Built:            Mon Sep 4 12:32:17 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.7.3
  GitCommit:        7880925980b188f4c97b462f709d0db8e8962aff
 runc:
  Version:          1.1.9
  GitCommit:        v1.1.9-0-gccaecfc
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
KubeSphere version
v3.4.1, online installation using kk (KubeKey).
What is the problem
Service CIDR: 10.188.0.0/18 (range 10.188.0.0 - 10.188.63.255)
Initial Pod CIDR: 10.188.192.0/18 (range 10.188.192.0 - 10.188.255.255)
After installation I enabled the Pod IP Pool feature and created a new pool, ippool-dev, in the console.
ippool-dev CIDR: 10.189.0.0/20 (range 10.189.0.0 - 10.189.15.255)
The Pod IP Pools page now shows two pools: default-ipv4-ippool and ippool-dev.
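For completeness, both pools can also be listed from the command line; a minimal sketch, assuming the standard KubeSphere and Calico CRD resource names:

# KubeSphere-level pools (what the console page shows)
kubectl get ippools.network.kubesphere.io

# Calico pools backing them (Kubernetes datastore)
kubectl get ippools.crd.projectcalico.org

Note that ippool-dev (10.189.0.0/20) lies entirely outside the kubePodsCIDR 10.188.192.0/18 the cluster was installed with (see the kk config excerpt below), while default-ipv4-ippool lies inside it.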
Pod A calls Pod B through a Service:
Pod A → Service → Pod B
Pod B runs nginx. Checking its access log:
If Pod A got its IP from default-ipv4-ippool, the log shows the pod IP.
If Pod A got its IP from the new ippool-dev pool, the log shows one of two things:
1. the node's own eth0 IP (when Pod A and Pod B are on the same node);
2. the IP of Calico's virtual tunnel (tun) device (when Pod A and Pod B are on different nodes).
The pod's source IP is lost: the traffic is being SNAT'd.
This does not match the usual behavior: in-cluster calls to a Service should not be NAT'd.
And why does everything work as expected when the default default-ipv4-ippool pool is used?
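To narrow down where the SNAT is added, here is a minimal check sketch (assuming kube-proxy in iptables mode and Calico IPIP, which the tun device suggests; run on the node hosting Pod B):

# Do the Calico pools have outgoing NAT enabled, and what are their CIDRs?
kubectl get ippools.crd.projectcalico.org -o custom-columns=NAME:.metadata.name,CIDR:.spec.cidr,NAT:.spec.natOutgoing

# Which masquerade rules are present on the node?
iptables -t nat -S | grep -i MASQUERADE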
I have tried placing both pods in the same project (namespace); the problem is the same.
What have I misconfigured, and what do I need to change so that the new pool behaves the same as default-ipv4-ippool? Thanks.
Partial configuration:
Service:

kind: Service
apiVersion: v1
metadata:
  name: font
  namespace: dev
  labels:
    app: font
  annotations:
    kubesphere.io/creator: admin
spec:
  ports:
    - name: tcp-80
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: font
  clusterIP: 10.188.21.35
  clusterIPs:
    - 10.188.21.35
  type: ClusterIP
  sessionAffinity: None
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
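To confirm that the SNAT happens specifically on the Service path, a quick comparison sketch (pod names pod-a/pod-b and <pod-b-ip> are hypothetical placeholders):

# From Pod A, hit Pod B directly by pod IP, then via the ClusterIP,
# then compare the client address in nginx's access log
kubectl -n dev exec pod-a -- curl -s -o /dev/null http://<pod-b-ip>/
kubectl -n dev exec pod-a -- curl -s -o /dev/null http://10.188.21.35/
kubectl -n dev logs pod-b --tail=2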
IP pools:

apiVersion: network.kubesphere.io/v1alpha1
kind: IPPool
metadata:
  annotations:
    kubesphere.io/creator: admin
    kubesphere.io/description: test environment
  finalizers:
    - finalizers.network.kubesphere.io/ippool
  labels:
    ippool.network.kubesphere.io/default: ''
    ippool.network.kubesphere.io/id: '4099'
    ippool.network.kubesphere.io/name: ippool-xft-dev
    ippool.network.kubesphere.io/type: calico
  name: ippool-dev
spec:
  cidr: 10.189.0.0/20
  dns: {}
  type: calico
  vlanConfig:
    master: ''
    vlanId: 0
---
apiVersion: network.kubesphere.io/v1alpha1
kind: IPPool
metadata:
  finalizers:
    - finalizers.network.kubesphere.io/ippool
  labels:
    ippool.network.kubesphere.io/default: ''
    ippool.network.kubesphere.io/id: '4099'
    ippool.network.kubesphere.io/name: default-ipv4-ippool
    ippool.network.kubesphere.io/type: calico
  name: default-ipv4-ippool
spec:
  blockSize: 24
  cidr: 10.188.192.0/18
  dns: {}
  type: calico
  vlanConfig:
    master: ''
    vlanId: 0
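Since both KubeSphere pools are type: calico, the Calico-level objects they map to are worth dumping as well; a sketch, assuming KubeSphere creates the Calico pool under the same name:

kubectl get ippools.crd.projectcalico.org ippool-dev -o yaml
kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -o yaml

From Calico's point of view, the fields that could differ between the two pools are spec.natOutgoing, spec.ipipMode, and spec.blockSize.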
kk config (excerpt):

network:
  plugin: calico
  kubePodsCIDR: 10.188.192.0/18
  kubeServiceCIDR: 10.188.0.0/18
  ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
  multusCNI:
    enabled: false
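One thing that stands out here: kube-proxy's clusterCIDR is normally derived from kubePodsCIDR, i.e. 10.188.192.0/18, which does not contain the new pool 10.189.0.0/20, and kube-proxy masquerades Service traffic whose source is outside its --cluster-cidr, which would match the symptom above. A minimal check sketch:

# What clusterCIDR did kube-proxy actually get?
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -i clustercidr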
Any help would be greatly appreciated~