It's stuck here. I tried to restart the upgrade from scratch, but that doesn't seem to work either; it's still stuck.
Stress-free Transition! A Complete Guide to Smoothly Upgrading KubeSphere v3.4.x to v4.x
hongming
You can interrupt it with Ctrl + C.
hongming And after interrupting it, what next? I see the pod is in Pending state; how do I keep troubleshooting the error?
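A minimal first check for a Pending pod (assuming it sits in the kubesphere-system namespace, as elsewhere in this thread):

kubectl -n kubesphere-system get pods
kubectl -n kubesphere-system describe pod <pod-name>   # the Events section usually explains why the pod is Pending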
hongming
hongming The script is identical; I compared it with the script described in the documentation.
hongming
xingxing122 upgrade.sh is safe to re-run. You haven't modified the script, have you? Don't remove set -e; a normal run will definitely reach the command I listed above.
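For context, set -e is what makes the script stop at the first failing command instead of silently continuing; a minimal illustration:

#!/usr/bin/env bash
set -e            # exit immediately when any command fails
false             # with set -e the script aborts here
echo unreachable  # never printed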
hongming I previously used patch to add the Helm labels; let me try removing them.
for r in globalroles.iam.kubesphere.io globalrolebindings.iam.kubesphere.io workspacetemplates.tenant.kubesphere.io clusterroles.iam.kubesphere.io clusterrolebindings.iam.kubesphere.io; do \
  kubectl get $r -o name | xargs -I{} kubectl label {} app.kubernetes.io/managed-by=Helm --overwrite; \
  kubectl get $r -o name | xargs -I{} kubectl annotate {} meta.helm.sh/release-name=ks-core meta.helm.sh/release-namespace=kubesphere-system --overwrite; \
done
The script has not been modified. I'll remove the labels I added and run the script again.
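As a side note, a quick way to confirm whether the Helm ownership metadata is actually present on one of these resources (GlobalRole anonymous is the object the error below complains about):

kubectl get globalrole anonymous -o yaml | grep -E 'managed-by|release-name|release-namespace'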
The command did run, but it looks like it errored:
apply CRDs
Error: invalid argument "server" for "--dry-run" flag: strconv.ParseBool: parsing "server": invalid syntax
error: no objects passed to apply
customresourcedefinition.apiextensions.k8s.io/applications.app.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/applicationreleases.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/applications.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/applicationversions.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/categories.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/repos.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/clusters.cluster.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/labels.cluster.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/apiservices.extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/extensionentries.extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/jsbundles.extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/reverseproxies.extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/ingressclassscopes.gateway.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/builtinroles.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/categories.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/clusterrolebindings.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/clusterroles.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/globalrolebindings.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/globalroles.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/groupbindings.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/groups.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/loginrecords.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/rolebindings.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/roles.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/roletemplates.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/users.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/workspacerolebindings.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/workspaceroles.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/categories.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/extensionversions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/installplans.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/repositories.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/serviceaccounts.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/resourcequotas.quota.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/provisionercapabilities.storage.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/storageclasscapabilities.storage.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/workspaces.tenant.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/workspacetemplates.tenant.kubesphere.io unchanged
review your upgrade values.yaml and make sure the extension configs matches the extension you published, you have 10 seconds before upgrade starts.
upgrade.go:142: [debug] preparing upgrade for ks-core
upgrade.go:150: [debug] performing update for ks-core
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: GlobalRole "anonymous" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ks-core"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kubesphere-system"
helm.go:84: [debug] GlobalRole "anonymous" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ks-core"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kubesphere-system"
rendered manifests contain a resource that already exists. Unable to continue with update
helm.sh/helm/v3/pkg/action.(*Upgrade).performUpgrade
helm.sh/helm/v3/pkg/action/upgrade.go:301
helm.sh/helm/v3/pkg/action.(*Upgrade).RunWithContext
helm.sh/helm/v3/pkg/action/upgrade.go:151
main.newUpgradeCmd.func2
helm.sh/helm/v3/cmd/helm/upgrade.go:199
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.5.0/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.5.0/command.go:990
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.5.0/command.go:918
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:250
runtime.goexit
runtime/asm_arm64.s:1172
UPGRADE FAILED
main.newUpgradeCmd.func2
helm.sh/helm/v3/cmd/helm/upgrade.go:201
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.5.0/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.5.0/command.go:990
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.5.0/command.go:918
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:250
runtime.goexit
runtime/asm_arm64.s:1172
Doesn't the script need some polishing? There also seems to be an issue with the Helm version.
hongming
Yes, Helm 3.13+ is required; we'll adjust the script.
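As a related check (the earlier --dry-run error usually points at an older kubectl client that only accepted a boolean --dry-run), the client versions can be verified with:

helm version --short        # should report v3.13.0 or newer
kubectl version --client    # --dry-run=server needs a reasonably recent kubectl (v1.18+)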
Once it's updated, could you let me know? Thanks; the community really delivers.
hongming OK, thanks.
I0421 13:14:14.563237 1 filepath.go:71] [Storage] LocalFileStorage File directory /tmp/ks-upgrade already exists
I0421 13:14:14.563378 1 executor.go:158] [Job] kubeedge is disabled
I0421 13:14:14.563390 1 executor.go:158] [Job] kubefed is disabled
I0421 13:14:14.563394 1 executor.go:158] [Job] servicemesh is disabled
I0421 13:14:14.563396 1 executor.go:158] [Job] storage-utils is disabled
I0421 13:14:14.563397 1 executor.go:158] [Job] tower is disabled
I0421 13:14:14.563399 1 executor.go:158] [Job] whizard-telemetry is disabled
I0421 13:14:14.563401 1 executor.go:158] [Job] whizard-alerting is disabled
I0421 13:14:14.563409 1 executor.go:155] [Job] application is enabled, priority 100
I0421 13:14:14.563413 1 executor.go:155] [Job] devops is enabled, priority 800
I0421 13:14:14.563421 1 executor.go:155] [Job] iam is enabled, priority 999
I0421 13:14:14.563428 1 executor.go:158] [Job] whizard-logging is disabled
I0421 13:14:14.563431 1 executor.go:158] [Job] metrics-server is disabled
I0421 13:14:14.563435 1 executor.go:155] [Job] network is enabled, priority 100
I0421 13:14:14.563443 1 executor.go:155] [Job] core is enabled, priority 10000
I0421 13:14:14.563446 1 executor.go:158] [Job] whizard-events is disabled
I0421 13:14:14.563457 1 executor.go:155] [Job] gateway is enabled, priority 90
I0421 13:14:14.563460 1 executor.go:158] [Job] opensearch is disabled
I0421 13:14:14.563462 1 executor.go:158] [Job] vector is disabled
I0421 13:14:14.563464 1 executor.go:158] [Job] whizard-monitoring is disabled
I0421 13:14:14.563466 1 executor.go:158] [Job] whizard-notification is disabled
I0421 13:14:14.568650 1 helm.go:145] getting history for release [ks-core]
I0421 13:14:14.633176 1 validator.go:57] [Validator] Current release's version is v3.3.2
I0421 13:14:14.633200 1 executor.go:220] [Job] core prepare-upgrade start
I0421 13:14:14.633209 1 executor.go:58] [Job] Detected that the plugin core is true
I0421 13:14:14.658523 1 core.go:314] scale down deployment kubesphere-system/ks-apiserver unchanged
I0421 13:14:14.668097 1 core.go:314] scale down deployment kubesphere-system/ks-console unchanged
I0421 13:14:14.680227 1 core.go:314] scale down deployment kubesphere-system/ks-controller-manager unchanged
I0421 13:14:14.690029 1 core.go:314] scale down deployment kubesphere-system/ks-installer unchanged
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x2025ccf]
goroutine 1 [running]:
kubesphere.io/ks-upgrade/pkg/jobs/core.(*upgradeJob).deleteKubeSphereWebhook(0xc0009def00, {0x2ba2f40, 0x40e7c00})
/workspace/pkg/jobs/core/core.go:429 +0x22f
kubesphere.io/ks-upgrade/pkg/jobs/core.(*upgradeJob).PrepareUpgrade(0xc0009def00, {0x2ba2f40, 0x40e7c00})
/workspace/pkg/jobs/core/core.go:118 +0xcc
kubesphere.io/ks-upgrade/pkg/executor.(*Executor).PrepareUpgrade(0xc000403560, {0x2ba2f40, 0x40e7c00})
/workspace/pkg/executor/executor.go:227 +0x275
main.init.func5(0xc000158600?, {0x26e5683?, 0x4?, 0x26e5687?})
/workspace/cmd/ks-upgrade.go:102 +0x26
github.com/spf13/cobra.(*Command).execute(0x4095a80, {0xc00081ce40, 0x3, 0x3})
/workspace/vendor/github.com/spf13/cobra/command.go:985 +0xaaa
github.com/spf13/cobra.(*Command).ExecuteC(0x4094c20)
/workspace/vendor/github.com/spf13/cobra/command.go:1117 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
/workspace/vendor/github.com/spf13/cobra/command.go:1041
main.main()
/workspace/cmd/ks-upgrade.go:136 +0x4e
After upgrading Helm to 3.17, running it again fails with this error.
hongming
kubectl get validatingwebhookconfiguration -o json | jq '.items[] | .webhooks[] | select(.clientConfig.service == null)'
Check with the command above. For those validatingwebhookconfigurations, you can first change the url in clientConfig to a ServiceReference (service).
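A rough sketch of such a change (the webhook name, array index, service name, namespace, port, and path are all placeholders; they must be taken from the actual webhook, and the caBundle may also need to match the service's serving certificate):

kubectl patch validatingwebhookconfiguration <name> --type=json -p '[
  {"op": "remove", "path": "/webhooks/0/clientConfig/url"},
  {"op": "add", "path": "/webhooks/0/clientConfig/service",
   "value": {"name": "<service-name>", "namespace": "<namespace>", "port": <port>, "path": "<path>"}}
]'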
A fix has been submitted: kubesphere/ks-upgrade#27
I'm connecting to the cluster with a kubeconfig file, and the url contains the cluster name. When changing it to a ServiceReference, the url has a format like "https://gatekeeper-webhook-service.XXXtke集群.svc.cluster.local:18443/v1/admit"; how should I change that?
hongming
Change the imagePullPolicy of ks-upgrade and pull the image again.
Edit ks-core-values.yaml and set pullPolicy to Always:
upgrade:
  enabled: true
  image:
    registry: ""
    repository: kubesphere/ks-upgrade
    tag: ""
    pullPolicy: Always
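With pullPolicy set to Always, re-running the upgrade script as before should recreate the prepare-upgrade Job so the pod pulls the ks-upgrade image afresh (a hedged note; the invocation is the same script used earlier in this thread). The new run can then be followed with:

kubectl -n kubesphere-system get pods
kubectl -n kubesphere-system logs -f job/prepare-upgrade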
hongming After updating this, what command do I need to run again to reload it, or how do I redo the upgrade?
I re-ran the upgrade and it still fails. My machine is a Mac M1; that shouldn't matter regardless of what system the customer runs, right? Looking at the error, it's still a nil pointer. What's wrong?
kubectl logs -f -n kubesphere-system prepare-upgrade-rnxs8
I0422 05:07:44.886017 1 filepath.go:71] [Storage] LocalFileStorage File directory /tmp/ks-upgrade already exists
I0422 05:07:44.886187 1 executor.go:158] [Job] whizard-alerting is disabled
I0422 05:07:44.886201 1 executor.go:158] [Job] whizard-logging is disabled
I0422 05:07:44.886205 1 executor.go:158] [Job] whizard-notification is disabled
I0422 05:07:44.886209 1 executor.go:158] [Job] tower is disabled
I0422 05:07:44.886212 1 executor.go:158] [Job] whizard-telemetry is disabled
I0422 05:07:44.886216 1 executor.go:158] [Job] whizard-events is disabled
I0422 05:07:44.886220 1 executor.go:158] [Job] kubefed is disabled
I0422 05:07:44.886224 1 executor.go:158] [Job] servicemesh is disabled
I0422 05:07:44.886227 1 executor.go:158] [Job] storage-utils is disabled
I0422 05:07:44.886240 1 executor.go:155] [Job] devops is enabled, priority 800
I0422 05:07:44.886261 1 executor.go:155] [Job] iam is enabled, priority 999
I0422 05:07:44.886272 1 executor.go:158] [Job] metrics-server is disabled
I0422 05:07:44.886276 1 executor.go:158] [Job] opensearch is disabled
I0422 05:07:44.886279 1 executor.go:158] [Job] whizard-monitoring is disabled
I0422 05:07:44.886287 1 executor.go:155] [Job] network is enabled, priority 100
I0422 05:07:44.886295 1 executor.go:158] [Job] vector is disabled
I0422 05:07:44.886304 1 executor.go:155] [Job] application is enabled, priority 100
I0422 05:07:44.886311 1 executor.go:155] [Job] core is enabled, priority 10000
I0422 05:07:44.886323 1 executor.go:155] [Job] gateway is enabled, priority 90
I0422 05:07:44.886327 1 executor.go:158] [Job] kubeedge is disabled
I0422 05:07:44.898462 1 helm.go:145] getting history for release [ks-core]
I0422 05:07:44.951846 1 validator.go:57] [Validator] Current release's version is v3.3.2
I0422 05:07:44.951869 1 executor.go:220] [Job] core prepare-upgrade start
I0422 05:07:44.951878 1 executor.go:58] [Job] Detected that the plugin core is true
I0422 05:07:44.977148 1 core.go:314] scale down deployment kubesphere-system/ks-apiserver unchanged
I0422 05:07:45.000332 1 core.go:314] scale down deployment kubesphere-system/ks-console unchanged
I0422 05:07:45.006025 1 core.go:314] scale down deployment kubesphere-system/ks-controller-manager unchanged
I0422 05:07:45.028711 1 core.go:314] scale down deployment kubesphere-system/ks-installer unchanged
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x2025ccf]
goroutine 1 [running]:
kubesphere.io/ks-upgrade/pkg/jobs/core.(*upgradeJob).deleteKubeSphereWebhook(0xc000a2f630, {0x2ba2f40, 0x40e7c00})
/workspace/pkg/jobs/core/core.go:429 +0x22f
kubesphere.io/ks-upgrade/pkg/jobs/core.(*upgradeJob).PrepareUpgrade(0xc000a2f630, {0x2ba2f40, 0x40e7c00})
/workspace/pkg/jobs/core/core.go:118 +0xcc
kubesphere.io/ks-upgrade/pkg/executor.(*Executor).PrepareUpgrade(0xc000491710, {0x2ba2f40, 0x40e7c00})
/workspace/pkg/executor/executor.go:227 +0x275
main.init.func5(0xc00021a800?, {0x26e5683?, 0x4?, 0x26e5687?})
/workspace/cmd/ks-upgrade.go:102 +0x26
github.com/spf13/cobra.(*Command).execute(0x4095a80, {0xc000898420, 0x3, 0x3})
/workspace/vendor/github.com/spf13/cobra/command.go:985 +0xaaa
github.com/spf13/cobra.(*Command).ExecuteC(0x4094c20)
/workspace/vendor/github.com/spf13/cobra/command.go:1117 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
/workspace/vendor/github.com/spf13/cobra/command.go:1041
main.main()
/workspace/cmd/ks-upgrade.go:136 +0x4e
The script just gets stuck here:
deployment.apps/ks-installer scaled
etcd endpointIps is empty or localhost, will be filled with
clusterconfiguration.installer.kubesphere.io/ks-installer patched (no change)
remove redis
No resources found
No resources found
No resources found
No resources found
No resources found
No resources found
apply CRDs
job.batch "prepare-upgrade" deleted
configmap/ks-upgrade-prepare-config unchanged
job.batch/prepare-upgrade created
Checking the status of the resources: