• Installation & Deployment KubeSphere-2.1
  • When using a sub-account, KubeSphere cannot view or operate on workspace namespaces in a member cluster; the console redirects to the login page

bangbangzheng Take a look at the ks-apiserver logs on the member cluster and check whether you are hitting this issue: https://kubesphere.com.cn/docs/faq/access-control/session-timeout/. Then check whether federated resource sync is working.
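
A quick way to pull those logs (a sketch, assuming ks-apiserver runs in the default kubesphere-system namespace with the app=ks-apiserver label):

kubectl -n kubesphere-system logs -l app=ks-apiserver --tail=200

To check the sync, inspect the federated user: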

kubectl get federatedusers.types.kubefed.io <xxx> -o yaml

    hongming

    root@master1:~# kubectl get federatedusers.types.kubefed.io itsupport -o yaml

    apiVersion: types.kubefed.io/v1beta1
    kind: FederatedUser
    metadata:
      creationTimestamp: "2022-06-09T03:29:09Z"
      generation: 106
      name: itsupport
      ownerReferences:
      - apiVersion: iam.kubesphere.io/v1alpha2
        blockOwnerDeletion: true
        controller: true
        kind: User
        name: itsupport
        uid: 7509c069-bb0d-4e5f-b1c6-f968c92dea21
      resourceVersion: "30343408"
      uid: 4873c9f9-945d-4cb4-8dfc-bbe5fdfb6fd3
    spec:
      placement:
        clusterSelector: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            kubefed.io/managed: "false"
        spec:
          email: itsupport@ecoflow.com
          groups:
          - rd-frontendgbqhd
          password: $2a$10$6eh8bV0rpqIenhaAGVQclOMP6kULQNQVuO9u42NyY9h.52qBPV/I6
        status:
          lastLoginTime: "2022-06-09T07:11:33Z"
          lastTransitionTime: "2022-06-09T03:29:09Z"
          state: Active

    hongming Does this error have any impact?

    E0609 07:57:10.807068 1 upgradeaware.go:387] Error proxying data from client to backend: read tcp 10.84.133.203:9090->10.84.164.94:46958: read: connection reset by peer
    E0609 07:59:02.712330 1 upgradeaware.go:387] Error proxying data from client to backend: read tcp 10.84.133.203:9090->10.84.164.94:47690: read: connection reset by peer
    E0609 08:00:07.321176 1 upgradeaware.go:387] Error proxying data from client to backend: read tcp 10.84.133.203:9090->10.84.156.179:51348: read: connection reset by peer
    E0609 08:01:02.746039 1 upgradeaware.go:387] Error proxying data from client to backend: read tcp 10.84.133.203:9090->10.84.164.94:48308: read: connection reset by peer
    E0609 08:01:02.746544 1 upgradeaware.go:387] Error proxying data from client to backend: read tcp 10.84.133.203:9090->10.84.156.179:51342: read: connection reset by peer

      bangbangzheng

      It looks like the multi-cluster sync is broken. Check whether all the pods in the kube-federation-system namespace on the host cluster are healthy.
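
      For example, to list those pods:

      kubectl -n kube-federation-system get pods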

      @hongming kubefed-controller-manager is reporting errors.

      Container logs:

      KubeFed controller-manager version: version.Info{Version:"v0.0.1-alpha.0", GitCommit:"unknown", GitTreeState:"unknown", BuildDate:"unknown", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
      I0609 08:17:08.586828 1 controller-manager.go:398] FLAG: --add_dir_header="false"
      I0609 08:17:08.586923 1 controller-manager.go:398] FLAG: --alsologtostderr="false"
      I0609 08:17:08.586928 1 controller-manager.go:398] FLAG: --healthz-addr=":8080"
      I0609 08:17:08.586941 1 controller-manager.go:398] FLAG: --help="false"
      I0609 08:17:08.586951 1 controller-manager.go:398] FLAG: --kubeconfig=""
      I0609 08:17:08.586961 1 controller-manager.go:398] FLAG: --kubefed-config=""
      I0609 08:17:08.586970 1 controller-manager.go:398] FLAG: --kubefed-namespace=""
      I0609 08:17:08.586983 1 controller-manager.go:398] FLAG: --log-flush-frequency="5s"
      I0609 08:17:08.586992 1 controller-manager.go:398] FLAG: --log_backtrace_at=":0"
      I0609 08:17:08.587003 1 controller-manager.go:398] FLAG: --log_dir=""
      I0609 08:17:08.587007 1 controller-manager.go:398] FLAG: --log_file=""
      I0609 08:17:08.587017 1 controller-manager.go:398] FLAG: --log_file_max_size="1800"
      I0609 08:17:08.587024 1 controller-manager.go:398] FLAG: --logtostderr="true"
      I0609 08:17:08.587028 1 controller-manager.go:398] FLAG: --master=""
      I0609 08:17:08.587032 1 controller-manager.go:398] FLAG: --metrics-addr=":9090"
      I0609 08:17:08.587037 1 controller-manager.go:398] FLAG: --one_output="false"
      I0609 08:17:08.587040 1 controller-manager.go:398] FLAG: --rest-config-burst="100"
      I0609 08:17:08.587049 1 controller-manager.go:398] FLAG: --rest-config-qps="50"
      I0609 08:17:08.587057 1 controller-manager.go:398] FLAG: --skip_headers="false"
      I0609 08:17:08.587061 1 controller-manager.go:398] FLAG: --skip_log_headers="false"
      I0609 08:17:08.587065 1 controller-manager.go:398] FLAG: --stderrthreshold="2"
      I0609 08:17:08.587072 1 controller-manager.go:398] FLAG: --v="2"
      I0609 08:17:08.587075 1 controller-manager.go:398] FLAG: --version="false"
      I0609 08:17:08.587083 1 controller-manager.go:398] FLAG: --vmodule=""
      W0609 08:17:08.587173 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
      I0609 08:17:08.587484 1 controller-manager.go:428] starting metrics server path /metrics
      I0609 08:17:08.748456 1 controller-manager.go:232] Setting Options with KubeFedConfig "kube-federation-system/kubefed"
      I0609 08:17:08.748499 1 controller-manager.go:360] Using valid KubeFedConfig "kube-federation-system/kubefed"
      I0609 08:17:08.748512 1 controller-manager.go:392] "feature-gates" will be set to map[PushReconciler:true RawResourceStatusCollection:false SchedulerPreferences:true]
      I0609 08:17:08.748541 1 feature_gate.go:243] feature gates: &{map[PushReconciler:true RawResourceStatusCollection:false SchedulerPreferences:true]}
      I0609 08:17:08.748553 1 controller-manager.go:162] KubeFed will target all namespaces
      I0609 08:17:08.749418 1 leaderelection.go:243] attempting to acquire leader lease kube-federation-system/kubefed-controller-manager...
      I0609 08:17:24.210693 1 leaderelection.go:253] successfully acquired lease kube-federation-system/kubefed-controller-manager
      I0609 08:17:24.210831 1 leaderelection.go:76] promoted as leader
      I0609 08:17:24.243265 1 controller.go:90] Starting cluster controller
      I0609 08:17:24.249139 1 controller.go:182] ClusterController observed a new cluster: ecoflow
      I0609 08:17:24.258832 1 controller.go:73] Starting scheduling manager
      I0609 08:17:24.259951 1 controller.go:182] ClusterController observed a new cluster: iot-factory
      I0609 08:17:24.265557 1 controller.go:182] ClusterController observed a new cluster: iot-prod
      I0609 08:17:24.274193 1 controller.go:182] ClusterController observed a new cluster: aws-prod-jp
      I0609 08:17:24.281279 1 controller.go:182] ClusterController observed a new cluster: aws-prod-us
      I0609 08:17:24.287515 1 controller.go:182] ClusterController observed a new cluster: iot-uat
      I0609 08:17:24.294025 1 cluster_util.go:96] Cluster iot-uat will use a custom transport for TLS certificate validation

      E0609 08:17:24.294184 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
      goroutine 961 [running]:
      k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1bce7a0, 0x2d883a0)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/runtime/runtime.go:74 +0x95
      k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/runtime/runtime.go:48 +0x86
      panic(0x1bce7a0, 0x2d883a0)
      /usr/local/go/src/runtime/panic.go:965 +0x1b9
      sigs.k8s.io/kubefed/pkg/controller/util.CustomizeCertificateValidation(0xc000502010, 0x0, 0x0, 0x0)
      /app/pkg/controller/util/cluster_util.go:155 +0x37
      sigs.k8s.io/kubefed/pkg/controller/util.CustomizeTLSTransport(0xc000502010, 0xc00040a900, 0x0, 0x0)
      /app/pkg/controller/util/cluster_util.go:127 +0x23a
      sigs.k8s.io/kubefed/pkg/controller/util.BuildClusterConfig(0xc000502010, 0x20e6fb8, 0xc00099f090, 0xc000048900, 0x16, 0x0, 0x40e278, 0xb6)
      /app/pkg/controller/util/cluster_util.go:97 +0x49a
      sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.NewClusterClientSet(0xc000502010, 0x20e6fb8, 0xc00099f090, 0xc000048900, 0x16, 0xb2d05e00, 0x1e81ebd, 0x2c, 0xc000f55aa0)
      /app/pkg/controller/kubefedcluster/clusterclient.go:70 +0x5d
      sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.(*ClusterController).addToClusterSet(0xc000688d20, 0xc000502010)
      /app/pkg/controller/kubefedcluster/controller.go:185 +0x1f0
      sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.newClusterController.func2(0x1e08800, 0xc000502010)
      /app/pkg/controller/kubefedcluster/controller.go:139 +0x45
      k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:231
      k8s.io/client-go/tools/cache.newInformer.func1(0x1bf3cc0, 0xc0005ae540, 0x1, 0xc0005ae540)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:407 +0x198
      k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc000b866e0, 0xc000bbe6f0, 0x0, 0x0, 0x0, 0x0)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/delta_fifo.go:544 +0x322
      k8s.io/client-go/tools/cache.(*controller).processLoop(0xc000ea3170)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:183 +0x42
      k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000af8f90)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:155 +0x5f
      k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00115bf90, 0x20998e0, 0xc000b888d0, 0xc000d2dc01, 0xc000a8d140)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:156 +0x9b
      k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000af8f90, 0x3b9aca00, 0x0, 0xc00069c301, 0xc000a8d140)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:133 +0x98
      k8s.io/apimachinery/pkg/util/wait.Until(...)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:90
      k8s.io/client-go/tools/cache.(*controller).Run(0xc000ea3170, 0xc000a8d140)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:154 +0x2e5
      created by sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.(*ClusterController).Run
      /app/pkg/controller/kubefedcluster/controller.go:197 +0x88

      I0609 08:17:24.294243 1 controller.go:182] ClusterController observed a new cluster: iot-uat
      E0609 08:17:24.294250 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
      goroutine 961 [running]:
      k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1bce7a0, 0x2d883a0)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/runtime/runtime.go:74 +0x95
      k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/runtime/runtime.go:48 +0x86
      panic(0x1bce7a0, 0x2d883a0)
      /usr/local/go/src/runtime/panic.go:965 +0x1b9
      k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/runtime/runtime.go:55 +0x109
      panic(0x1bce7a0, 0x2d883a0)
      /usr/local/go/src/runtime/panic.go:965 +0x1b9
      sigs.k8s.io/kubefed/pkg/controller/util.CustomizeCertificateValidation(0xc000502010, 0x0, 0x0, 0x0)
      /app/pkg/controller/util/cluster_util.go:155 +0x37
      sigs.k8s.io/kubefed/pkg/controller/util.CustomizeTLSTransport(0xc000502010, 0xc00040a900, 0x0, 0x0)
      /app/pkg/controller/util/cluster_util.go:127 +0x23a
      sigs.k8s.io/kubefed/pkg/controller/util.BuildClusterConfig(0xc000502010, 0x20e6fb8, 0xc00099f090, 0xc000048900, 0x16, 0x0, 0x40e278, 0xb6)
      /app/pkg/controller/util/cluster_util.go:97 +0x49a
      sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.NewClusterClientSet(0xc000502010, 0x20e6fb8, 0xc00099f090, 0xc000048900, 0x16, 0xb2d05e00, 0x1e81ebd, 0x2c, 0xc000f55aa0)
      /app/pkg/controller/kubefedcluster/clusterclient.go:70 +0x5d
      sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.(*ClusterController).addToClusterSet(0xc000688d20, 0xc000502010)
      /app/pkg/controller/kubefedcluster/controller.go:185 +0x1f0
      sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.newClusterController.func2(0x1e08800, 0xc000502010)
      /app/pkg/controller/kubefedcluster/controller.go:139 +0x45
      k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:231
      k8s.io/client-go/tools/cache.newInformer.func1(0x1bf3cc0, 0xc0005ae540, 0x1, 0xc0005ae540)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:407 +0x198
      k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc000b866e0, 0xc000bbe6f0, 0x0, 0x0, 0x0, 0x0)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/delta_fifo.go:544 +0x322
      k8s.io/client-go/tools/cache.(*controller).processLoop(0xc000ea3170)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:183 +0x42
      k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000af8f90)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:155 +0x5f
      k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00115bf90, 0x20998e0, 0xc000b888d0, 0xc000d2dc01, 0xc000a8d140)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:156 +0x9b
      k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000af8f90, 0x3b9aca00, 0x0, 0xc00069c301, 0xc000a8d140)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:133 +0x98
      k8s.io/apimachinery/pkg/util/wait.Until(...)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:90
      k8s.io/client-go/tools/cache.(*controller).Run(0xc000ea3170, 0xc000a8d140)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:154 +0x2e5
      created by sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.(*ClusterController).Run
      /app/pkg/controller/kubefedcluster/controller.go:197 +0x88

      panic: runtime error: invalid memory address or nil pointer dereference [recovered]
      panic: runtime error: invalid memory address or nil pointer dereference [recovered]
      panic: runtime error: invalid memory address or nil pointer dereference
      [signal SIGSEGV: segmentation violation code=0x1 addr=0xa0 pc=0x18e2e77]
      goroutine 961 [running]:
      k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/runtime/runtime.go:55 +0x109
      panic(0x1bce7a0, 0x2d883a0)
      /usr/local/go/src/runtime/panic.go:965 +0x1b9
      k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/runtime/runtime.go:55 +0x109
      panic(0x1bce7a0, 0x2d883a0)
      /usr/local/go/src/runtime/panic.go:965 +0x1b9
      sigs.k8s.io/kubefed/pkg/controller/util.CustomizeCertificateValidation(0xc000502010, 0x0, 0x0, 0x0)
      /app/pkg/controller/util/cluster_util.go:155 +0x37
      sigs.k8s.io/kubefed/pkg/controller/util.CustomizeTLSTransport(0xc000502010, 0xc00040a900, 0x0, 0x0)
      /app/pkg/controller/util/cluster_util.go:127 +0x23a
      sigs.k8s.io/kubefed/pkg/controller/util.BuildClusterConfig(0xc000502010, 0x20e6fb8, 0xc00099f090, 0xc000048900, 0x16, 0x0, 0x40e278, 0xb6)
      /app/pkg/controller/util/cluster_util.go:97 +0x49a
      sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.NewClusterClientSet(0xc000502010, 0x20e6fb8, 0xc00099f090, 0xc000048900, 0x16, 0xb2d05e00, 0x1e81ebd, 0x2c, 0xc000f55aa0)
      /app/pkg/controller/kubefedcluster/clusterclient.go:70 +0x5d
      sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.(*ClusterController).addToClusterSet(0xc000688d20, 0xc000502010)
      /app/pkg/controller/kubefedcluster/controller.go:185 +0x1f0
      sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.newClusterController.func2(0x1e08800, 0xc000502010)
      /app/pkg/controller/kubefedcluster/controller.go:139 +0x45
      k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:231
      k8s.io/client-go/tools/cache.newInformer.func1(0x1bf3cc0, 0xc0005ae540, 0x1, 0xc0005ae540)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:407 +0x198
      k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc000b866e0, 0xc000bbe6f0, 0x0, 0x0, 0x0, 0x0)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/delta_fifo.go:544 +0x322
      k8s.io/client-go/tools/cache.(*controller).processLoop(0xc000ea3170)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:183 +0x42
      k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000af8f90)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:155 +0x5f
      k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00115bf90, 0x20998e0, 0xc000b888d0, 0xc000d2dc01, 0xc000a8d140)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:156 +0x9b
      k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000af8f90, 0x3b9aca00, 0x0, 0xc00069c301, 0xc000a8d140)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:133 +0x98
      k8s.io/apimachinery/pkg/util/wait.Until(...)
      /go/pkg/mod/k8s.io/apimachinery@v0.21.2/pkg/util/wait/wait.go:90
      k8s.io/client-go/tools/cache.(*controller).Run(0xc000ea3170, 0xc000a8d140)
      /go/pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/controller.go:154 +0x2e5
      created by sigs.k8s.io/kubefed/pkg/controller/kubefedcluster.(*ClusterController).Run
      /app/pkg/controller/kubefedcluster/controller.go:197 +0x88

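
      The trace dies in CustomizeCertificateValidation every time, immediately after the "Cluster iot-uat will use a custom transport for TLS certificate validation" line, so the TLS settings on the iot-uat cluster entry (likely its disabledTLSValidations / caBundle fields) are the probable trigger. A minimal way to inspect them, assuming the standard kubefed resource name:

      kubectl -n kube-federation-system get kubefedcluster iot-uat -o yaml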