E1021 09:30:42.664368 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)

panic: runtime error: invalid memory address or nil pointer dereference [recovered]

I'm seeing this error.

No idea what is going on.

4 months later

Reopening this issue.

The host cluster is deployed in a private cloud, installed with KubeKey (KK) in all-in-one mode.

After adding a Huawei Cloud CCE cluster as a member, the controller-manager started panicking and restarting.

KubeFed controller-manager version: version.Info{Version:"v0.0.1-alpha.0", GitCommit:"unknown", GitTreeState:"unknown", BuildDate:"unknown", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}

I0307 11:38:54.705958 1 controller-manager.go:398] FLAG: --add_dir_header="false"

I0307 11:38:54.706098 1 controller-manager.go:398] FLAG: --alsologtostderr="false"

I0307 11:38:54.706129 1 controller-manager.go:398] FLAG: --healthz-addr=":8080"

I0307 11:38:54.706135 1 controller-manager.go:398] FLAG: --help="false"

I0307 11:38:54.706139 1 controller-manager.go:398] FLAG: --kubeconfig=""

I0307 11:38:54.706142 1 controller-manager.go:398] FLAG: --kubefed-config=""

I0307 11:38:54.706144 1 controller-manager.go:398] FLAG: --kubefed-namespace=""

I0307 11:38:54.706147 1 controller-manager.go:398] FLAG: --log-flush-frequency="5s"

I0307 11:38:54.706151 1 controller-manager.go:398] FLAG: --log_backtrace_at=":0"

I0307 11:38:54.706165 1 controller-manager.go:398] FLAG: --log_dir=""

I0307 11:38:54.706184 1 controller-manager.go:398] FLAG: --log_file=""

I0307 11:38:54.706186 1 controller-manager.go:398] FLAG: --log_file_max_size="1800"

I0307 11:38:54.706189 1 controller-manager.go:398] FLAG: --logtostderr="true"

I0307 11:38:54.706192 1 controller-manager.go:398] FLAG: --master=""

I0307 11:38:54.706195 1 controller-manager.go:398] FLAG: --metrics-addr=":9090"

I0307 11:38:54.706197 1 controller-manager.go:398] FLAG: --one_output="false"

I0307 11:38:54.706199 1 controller-manager.go:398] FLAG: --rest-config-burst="100"

I0307 11:38:54.706212 1 controller-manager.go:398] FLAG: --rest-config-qps="50"

I0307 11:38:54.706237 1 controller-manager.go:398] FLAG: --skip_headers="false"

I0307 11:38:54.706240 1 controller-manager.go:398] FLAG: --skip_log_headers="false"

I0307 11:38:54.706242 1 controller-manager.go:398] FLAG: --stderrthreshold="2"

I0307 11:38:54.706245 1 controller-manager.go:398] FLAG: --v="2"

I0307 11:38:54.706248 1 controller-manager.go:398] FLAG: --version="false"

I0307 11:38:54.706250 1 controller-manager.go:398] FLAG: --vmodule=""

W0307 11:38:54.706354 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.

I0307 11:38:54.706691 1 controller-manager.go:428] starting metrics server path /metrics

I0307 11:38:54.799705 1 controller-manager.go:232] Setting Options with KubeFedConfig "kube-federation-system/kubefed"

I0307 11:38:54.799756 1 controller-manager.go:360] Using valid KubeFedConfig "kube-federation-system/kubefed"

I0307 11:38:54.799770 1 controller-manager.go:392] "feature-gates" will be set to map[PushReconciler:true RawResourceStatusCollection:false SchedulerPreferences:true]

I0307 11:38:54.799814 1 feature_gate.go:243] feature gates: &{map[PushReconciler:true RawResourceStatusCollection:false SchedulerPreferences:true]}

I0307 11:38:54.799829 1 controller-manager.go:162] KubeFed will target all namespaces

I0307 11:38:54.800730 1 leaderelection.go:243] attempting to acquire leader lease kube-federation-system/kubefed-controller-manager...

I0307 11:39:10.424905 1 leaderelection.go:253] successfully acquired lease kube-federation-system/kubefed-controller-manager

I0307 11:39:10.425134 1 leaderelection.go:76] promoted as leader

I0307 11:39:10.482106 1 controller.go:90] Starting cluster controller

I0307 11:39:10.485386 1 controller.go:182] ClusterController observed a new cluster: aks-cloudhub-main-pre-cneast2-1

I0307 11:39:10.495282 1 controller.go:182] ClusterController observed a new cluster: host

I0307 11:39:10.496945 1 controller.go:73] Starting scheduling manager

I0307 11:39:10.498067 1 controller.go:182] ClusterController observed a new cluster: cce-cloudhub-main-hwapsoutheast3-pre-001

I0307 11:39:10.500030 1 cluster_util.go:96] Cluster cce-cloudhub-main-hwapsoutheast3-pre-001 will use a custom transport for TLS certificate validation

I0307 11:39:10.500075 1 controller.go:182] ClusterController observed a new cluster: aks-ascloud-main-pre-cneast2-1

E0307 11:39:10.500278 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)

goroutine 789 [running]:

k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1bce7a0, 0x2d883a0)

/go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/runtime/runtime.go:74 +0x95

k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)

/go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/runtime/runtime.go:48 +0x86

panic(0x1bce7a0, 0x2d883a0)

/usr/local/go/src/runtime/panic.go:965 +0x1b9

sigs.k8s.io/kubefed/pkg/controller/util.CustomizeCertificateValidation(0xc0004c1e60, 0x0, 0x0, 0x0)

/app/pkg/controller/util/cluster_util.go:155 +0x37

sigs.k8s.io/kubefed/pkg/controller/util.CustomizeTLSTransport(0xc0004c1e60, 0xc000439200, 0x0, 0x0)

/app/pkg/controller/util/cluster_util.go:127 +0x23a

sigs.k8s.io/kubefed/pkg/controller/util.BuildClusterConfig(0xc0004c1e60, 0x20e6fb8, 0xc0009839f0, 0xc0001390c8, 0x16, 0x0, 0x40e278, 0xb6)

/app/pkg/controller/util/cluster_util.go:97 +0x49a

sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.NewClusterClientSet(0xc0004c1e60, 0x20e6fb8, 0xc0009839f0, 0xc0001390c8, 0x16, 0xb2d05e00, 0x1e81ebd, 0x2c, 0xc00073b900)

/app/pkg/controller/kubefedcluster/clusterclient.go:70 +0x5d

sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.(*ClusterController).addToClusterSet(0xc0001f8620, 0xc0004c1e60)

/app/pkg/controller/kubefedcluster/controller.go:185 +0x1f0

sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.newClusterController.func2(0x1e08800, 0xc0004c1e60)

/app/pkg/controller/kubefedcluster/controller.go:139 +0x45

k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)

/go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:231

k8s.io/client-go/tools/cache.newInformer.func1(0x1bf3cc0, 0xc000327ed8, 0x1, 0xc000327ed8)

/go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:407 +0x198

k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc0007200a0, 0xc0009c4510, 0x0, 0x0, 0x0, 0x0)

/go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/delta_fifo.go:544 +0x322

k8s.io/client-go/tools/cache.(*controller).processLoop(0xc000980120)

/go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:183 +0x42

k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000690f90)

/go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:155 +0x5f

k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c53f90, 0x20998e0, 0xc0009c4db0, 0xc000a17001, 0xc0002f9020)

/go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:156 +0x9b

k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000690f90, 0x3b9aca00, 0x0, 0xc00045c301, 0xc0002f9020)

/go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:133 +0x98

k8s.io/apimachinery/pkg/util/wait.Until(...)

/go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:90

k8s.io/client-go/tools/cache.(*controller).Run(0xc000980120, 0xc0002f9020)

/go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:154 +0x2e5

created by sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.(*ClusterController).Run

/app/pkg/controller/kubefedcluster/controller.go:197 +0x88

E0307 11:39:10.500337 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)

(goroutine 789 stack trace repeated, identical to the one above except for an extra recovered HandleCrash/panic frame pair at the top)

panic: runtime error: invalid memory address or nil pointer dereference [recovered]

panic: runtime error: invalid memory address or nil pointer dereference [recovered]

panic: runtime error: invalid memory address or nil pointer dereference

[signal SIGSEGV: segmentation violation code=0x1 addr=0xa0 pc=0x18e2e77]

goroutine 789 [running]:

(final fatal stack identical again: two HandleCrash/panic frame pairs, then sigs.k8s.io/kubefed/pkg/controller/util.CustomizeCertificateValidation at cluster_util.go:155 down through the informer machinery to cache.(*controller).Run, created by kubefedcluster.(*ClusterController).Run at controller.go:197)
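
For what it's worth, the trace itself hints at the cause: the second argument printed for util.CustomizeCertificateValidation is 0x0, i.e. a nil *tls.Config, so cluster_util.go:155 dereferences nil. In client-go, rest.TLSConfigFor returns (nil, nil) when a rest.Config carries no TLS material at all (no CA bundle, no client certificates, Insecure unset), which is plausibly what kubefed ends up with when the member was joined from a kubeconfig that relied on insecure-skip-tls-verify, since no CA bundle lands in the cluster secret. A minimal Go sketch of that failure mode follows; this is my reading of the trace, not kubefed's actual code, and the endpoint is hypothetical:

package main

import (
	"crypto/tls"
	"fmt"

	"k8s.io/client-go/rest"
)

func main() {
	// A rest.Config like one built from a cluster secret that has a
	// token but an empty CA bundle (assumption based on the trace).
	cfg := &rest.Config{
		Host:        "https://cce-member.example:5443", // hypothetical endpoint
		BearerToken: "dummy-token",
	}

	// With no CA data, no client certs, and Insecure unset,
	// rest.TLSConfigFor returns (nil, nil).
	tlsCfg, err := rest.TLSConfigFor(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("tls.Config is nil:", tlsCfg == nil) // prints: true

	// An unguarded write here is the shape of the crash in the log:
	// tlsCfg.MinVersion = tls.VersionTLS12 // panics: nil pointer dereference

	// Defensive version: only customize a config that actually exists.
	if tlsCfg == nil {
		tlsCfg = &tls.Config{}
	}
	tlsCfg.MinVersion = tls.VersionTLS12
	fmt.Println("customized without panicking")
}

The nil guard at the end is the shape of fix one would expect a later release to carry; the sketch only demonstrates why a token-only rest.Config makes any unguarded tls.Config customization panic.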

    jianxinzzw Which version of KubeSphere are you using?

    Try upgrading KubeSphere to 3.2.x; the 3.2 release upgraded kubefed from v0.7.0 to v0.8.1.

    2 months later
    1 month later
    4 months later

    GuoRui66 Either use a kubeconfig that does not rely on insecure-skip-tls-verify, or upgrade kubefed to the latest version, v0.10.0; both solve the problem. I tested the v0.10.0 upgrade myself via https://github.com/kubernetes-sigs/kubefed/tree/master/charts/kubefed. The helm chart upgrade has one small catch, though: you need to manually apply the controllermanager CRDs from https://github.com/kubernetes-sigs/kubefed/blob/v0.10.0/charts/kubefed/charts/controllermanager/crds/crds.yaml.
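
    To tell in advance whether an existing member kubeconfig is the problematic kind, a small standalone check like the one below works. It is a hedged helper of my own, not part of kubefed; it only inspects the rest.Config that clientcmd builds from the file:

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	if len(os.Args) < 2 {
    		fmt.Println("usage: kubeconfig-check <path-to-member-kubeconfig>")
    		os.Exit(1)
    	}
    	// Build a rest.Config the same way client-go tooling would.
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1])
    	if err != nil {
    		panic(err)
    	}
    	switch {
    	case cfg.TLSClientConfig.Insecure:
    		fmt.Println("kubeconfig relies on insecure-skip-tls-verify: the trigger described above")
    	case len(cfg.TLSClientConfig.CAData) == 0 && cfg.TLSClientConfig.CAFile == "":
    		fmt.Println("kubeconfig has no CA bundle: kubefed would build a rest.Config with no TLS material")
    	default:
    		fmt.Println("kubeconfig carries a CA bundle: should be safe to join")
    	}
    }

    If it reports insecure-skip-tls-verify or a missing CA bundle, re-create the kubeconfig with certificate-authority-data set before joining, or take the v0.10.0 upgrade route described above.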