ks 3.0 all-in-one mode: after enabling the H (host) cluster, the kube-federation-system/kubefed-controller-manager pod is in CrashLoopBackOff
Jeff
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: kubefed
    meta.helm.sh/release-namespace: kube-federation-system
  creationTimestamp: "2020-08-28T02:22:42Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
    kubefed-control-plane: controller-manager
  name: kubefed-controller-manager
  namespace: kube-federation-system
  resourceVersion: "208827"
  selfLink: /apis/apps/v1/namespaces/kube-federation-system/deployments/kubefed-controller-manager
  uid: d6aaa3d8-9e8b-480c-91d6-998a173a2237
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      kubefed-control-plane: controller-manager
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubefed-control-plane: controller-manager
    spec:
      containers:
      - command:
        - /hyperfed/controller-manager
        image: kubesphere/kubefed:v0.3.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 3
        name: controller-manager
        ports:
        - containerPort: 9090
          name: metrics
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 64Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 1001
      serviceAccount: kubefed-controller
      serviceAccountName: kubefed-controller
      terminationGracePeriodSeconds: 10
status:
  conditions:
Jeff

apiVersion: core.kubefed.io/v1beta1
kind: KubeFedConfig
metadata:
  creationTimestamp: "2020-08-28T02:25:44Z"
  generation: 1
  name: kubefed
  namespace: kube-federation-system
  resourceVersion: "200048"
  selfLink: /apis/core.kubefed.io/v1beta1/namespaces/kube-federation-system/kubefedconfigs/kubefed
  uid: 3548395d-f9a9-4640-9fdb-c8c0c855aebf
spec:
  scope: ""
I checked, and indeed the ks-installer-c8f4f5f65-tfmrz pod's log contains many errors.
It should be the latest version; I only downloaded the kk binary yesterday.
Could we connect on WeChat? The log is too large to paste here. My WeChat ID: yeshihihi
erbiao3k Can this environment be accessed externally? Please send the login details to kubesphere@yunify.com and we will take a look. TeamViewer 12 or 向日葵 (Sunflower remote desktop) is preferred; SSH access would be best.
Hi, I've run into the same problem. After enabling multi-cluster management mode, the pod just won't start, and it has been bothering me for days. Did you manage to solve it?
1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0804 02:47:49.610058 1 controller-manager.go:428] starting metrics server path /metrics
I0804 02:47:49.660695 1 controller-manager.go:225] Cannot retrieve KubeFedConfig "kube-federation-system/kubefed": kubefedconfigs.core.kubefed.io "kubefed" not found. Default options will be used.
I0804 02:47:49.660721 1 controller-manager.go:328] Creating KubeFedConfig "kube-federation-system/kubefed" with default values
F0804 02:47:49.676414 1 controller-manager.go:299] Error creating KubeFedConfig "kube-federation-system/kubefed": Internal error occurred: failed calling webhook "kubefedconfigs.core.kubefed.io": Post "https://kubefed-admission-webhook.kube-federation-system.svc:443/validate-kubefedconfig?timeout=10s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubefed-admission-webhook-ca")
goroutine 1 [running]:
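The fatal line above points at the root cause: the controller-manager cannot create the default KubeFedConfig because the kubefed-admission-webhook is serving a certificate that is not signed by the CA bundle registered in the webhook configuration, so the API server rejects the call with an x509 error. A rough way to confirm the mismatch, assuming the validating webhook configuration is named `validations.core.kubefed.io` as in upstream KubeFed (names and labels may differ in your install):

```shell
# Find the kubefed webhook configurations (these are cluster-scoped objects).
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep -i kubefed

# Dump the CA bundle the API server uses to verify the webhook's serving cert.
# "validations.core.kubefed.io" is the upstream default name; adjust if yours differs.
kubectl get validatingwebhookconfiguration validations.core.kubefed.io \
  -o jsonpath='{.webhooks[0].clientConfig.caBundle}' \
  | base64 -d | openssl x509 -noout -subject -enddate

# If this CA does not match what the webhook pod actually serves, a common
# workaround is to delete the stale webhook configuration and the webhook pod
# so they are recreated with a fresh, matching certificate. The pod label
# below is an assumption; check with `kubectl get pods --show-labels` first.
kubectl delete validatingwebhookconfiguration validations.core.kubefed.io
kubectl -n kube-federation-system delete pod -l kubefed-admission-webhook=true
```

After the webhook comes back up, restarting kubefed-controller-manager (deleting its pod) lets it retry creating the KubeFedConfig.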