No-Stress Transition! The Complete Guide to Smoothly Upgrading KubeSphere v3.4.x to v4.x
I'm getting an error during the upgrade; these resources are not managed by Helm:
upgrade.go:142: [debug] preparing upgrade for ks-core
upgrade.go:150: [debug] performing update for ks-core
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: GlobalRole "anonymous" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ks-core"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kubesphere-system"
helm.go:84: [debug] GlobalRole "anonymous" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ks-core"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kubesphere-system"
rendered manifests contain a resource that already exists. Unable to continue with update
helm.sh/helm/v3/pkg/action.(*Upgrade).performUpgrade
helm.sh/helm/v3/pkg/action/upgrade.go:301
helm.sh/helm/v3/pkg/action.(*Upgrade).RunWithContext
helm.sh/helm/v3/pkg/action/upgrade.go:151
main.newUpgradeCmd.func2
helm.sh/helm/v3/cmd/helm/upgrade.go:199
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.5.0/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.5.0/command.go:990
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.5.0/command.go:918
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:250
runtime.goexit
runtime/asm_arm64.s:1172
UPGRADE FAILED
main.newUpgradeCmd.func2
helm.sh/helm/v3/cmd/helm/upgrade.go:201
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.5.0/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.5.0/command.go:990
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.5.0/command.go:918
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:250
runtime.goexit
runtime/asm_arm64.s:1172
Is there a script to fix all of these non-Helm-managed resources in one go?
hongming
Check the log: cat host-upgrade.log | grep "apply CRDs" -A 20
The upgrade process includes a prepare-upgrade step; take a look at https://github.com/kubesphere/ks-installer/blob/release-4.1/scripts/upgrade.sh#L180-L185
My cluster is on TKE, and that's where I'm running the upgrade.
cat host-upgrade.log | grep "apply CRDs" -A 20
apply CRDs
customresourcedefinition.apiextensions.k8s.io/applications.app.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/applicationreleases.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/applications.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/applicationversions.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/categories.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/repos.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/clusters.cluster.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/labels.cluster.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/apiservices.extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/extensionentries.extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/jsbundles.extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/reverseproxies.extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/ingressclassscopes.gateway.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/builtinroles.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/categories.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/clusterrolebindings.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/clusterroles.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/globalrolebindings.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/globalroles.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/groupbindings.iam.kubesphere.io unchanged
hongming
Check why this command wasn't executed: https://github.com/kubesphere/ks-installer/blob/release-4.1/scripts/upgrade.sh#L180-L185. Is your script content identical?
helm template -s templates/prepare-upgrade-job.yaml -n kubesphere-system --release-name \
--set upgrade.prepare=true,upgrade.image.registry=$IMAGE_REGISTRY,upgrade.image.tag=$KS_UPGRADE_TAG \
$EXTENSION_REGISTRY_ARG \
--set global.imageRegistry=$IMAGE_REGISTRY,global.tag=$TAG \
-f ks-core-values.yaml \
$chart --dry-run=server | kubectl -n kubesphere-system apply --wait -f - && kubectl -n kubesphere-system wait --for=condition=complete --timeout=600s job/prepare-upgrade
During a normal upgrade you should see logs like the following:
apply CRDs
configmap/ks-upgrade-prepare-config created
job.batch/prepare-upgrade created
persistentvolumeclaim/ks-upgrade created
job.batch/prepare-upgrade condition met
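If those lines are missing, the job can also be checked directly on the cluster. A minimal sketch (assumes kubectl access to the cluster being upgraded and the prepare-upgrade job name shown in the log above):

```shell
# Check whether the prepare-upgrade job was created and what it did.
# Guarded so the snippet is a no-op on machines without kubectl.
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n kubesphere-system get job prepare-upgrade
  kubectl -n kubesphere-system logs job/prepare-upgrade --tail=50
fi
```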
Then something is wrong with my upgrade: the script didn't execute this part. Strange. The upgrade is now half-done; how do I run it again?
It's stuck here. I'd like to restart the upgrade from scratch, but that doesn't seem to work either; it just hangs.
hongmingK零SK壹S
You can interrupt it with Ctrl + C.
hongming After interrupting, I see the pod is in Pending state. How do I keep digging into the error?
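For a Pending pod, the reason usually shows up in its events. A generic triage sketch (the label selector is a guess based on the prepare-upgrade job named earlier in this thread; assumes kubectl access to the cluster):

```shell
# List pods and surface scheduling events; Pending is typically an
# unbound PVC, an unschedulable node, or insufficient resources.
kubectl -n kubesphere-system get pods
kubectl -n kubesphere-system describe pod -l job-name=prepare-upgrade | tail -n 20
kubectl -n kubesphere-system get events --sort-by=.lastTimestamp | tail -n 20
```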
hongming
hongming The script is identical; I compared it against the one described in the download docs.
hongming
xingxing122 upgrade.sh can be run repeatedly. You haven't modified the script, have you? Don't remove the set -e; a normal run will definitely reach the command I listed above.
hongming I previously used patch to add the Helm labels; let me try undoing that. for r in globalroles.iam.kubesphere.io globalrolebindings.iam.kubesphere.io workspacetemplates.tenant.kubesphere.io clusterroles.iam.kubesphere.io clusterrolebindings.iam.kubesphere.io; do \
kubectl get $r -o name | xargs -I{} kubectl label {} app.kubernetes.io/managed-by=Helm --overwrite; \
kubectl get $r -o name | xargs -I{} kubectl annotate {} meta.helm.sh/release-name=ks-core meta.helm.sh/release-namespace=kubesphere-system --overwrite; \
done
The script hasn't been modified. I'll remove the labels I added and re-run it.
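Undoing those patches can be sketched the same way: with kubectl, a trailing "-" on a label or annotation key deletes it instead of setting it (same resource list as the loop above; assumes cluster access):

```shell
# Remove the Helm ownership metadata added earlier; the trailing "-"
# on each key tells kubectl to delete that label/annotation.
for r in globalroles.iam.kubesphere.io globalrolebindings.iam.kubesphere.io \
         workspacetemplates.tenant.kubesphere.io clusterroles.iam.kubesphere.io \
         clusterrolebindings.iam.kubesphere.io; do
  kubectl get "$r" -o name | xargs -I{} kubectl label {} app.kubernetes.io/managed-by-
  kubectl get "$r" -o name | xargs -I{} kubectl annotate {} \
    meta.helm.sh/release-name- meta.helm.sh/release-namespace-
done
```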
The command did run, but it looks like it errored:
apply CRDs
Error: invalid argument "server" for "--dry-run" flag: strconv.ParseBool: parsing "server": invalid syntax
error: no objects passed to apply
customresourcedefinition.apiextensions.k8s.io/applications.app.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/applicationreleases.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/applications.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/applicationversions.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/categories.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/repos.application.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/clusters.cluster.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/labels.cluster.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/apiservices.extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/extensionentries.extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/jsbundles.extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/reverseproxies.extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/ingressclassscopes.gateway.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/builtinroles.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/categories.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/clusterrolebindings.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/clusterroles.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/globalrolebindings.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/globalroles.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/groupbindings.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/groups.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/loginrecords.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/rolebindings.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/roles.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/roletemplates.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/users.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/workspacerolebindings.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/workspaceroles.iam.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/categories.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/extensions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/extensionversions.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/installplans.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/repositories.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/serviceaccounts.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/resourcequotas.quota.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/provisionercapabilities.storage.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/storageclasscapabilities.storage.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/workspaces.tenant.kubesphere.io unchanged
customresourcedefinition.apiextensions.k8s.io/workspacetemplates.tenant.kubesphere.io unchanged
review your upgrade values.yaml and make sure the extension configs matches the extension you published, you have 10 seconds before upgrade starts.
upgrade.go:142: [debug] preparing upgrade for ks-core
upgrade.go:150: [debug] performing update for ks-core
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: GlobalRole "anonymous" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ks-core"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kubesphere-system"
helm.go:84: [debug] GlobalRole "anonymous" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ks-core"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kubesphere-system"
rendered manifests contain a resource that already exists. Unable to continue with update
helm.sh/helm/v3/pkg/action.(*Upgrade).performUpgrade
helm.sh/helm/v3/pkg/action/upgrade.go:301
helm.sh/helm/v3/pkg/action.(*Upgrade).RunWithContext
helm.sh/helm/v3/pkg/action/upgrade.go:151
main.newUpgradeCmd.func2
helm.sh/helm/v3/cmd/helm/upgrade.go:199
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.5.0/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.5.0/command.go:990
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.5.0/command.go:918
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:250
runtime.goexit
runtime/asm_arm64.s:1172
UPGRADE FAILED
main.newUpgradeCmd.func2
helm.sh/helm/v3/cmd/helm/upgrade.go:201
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.5.0/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.5.0/command.go:990
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.5.0/command.go:918
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:250
runtime.goexit
runtime/asm_arm64.s:1172
Shouldn't the script be hardened a bit? And there's presumably a Helm version issue as well.
hongming
Yes, Helm 3.13+ is required; we'll adjust the script.
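The failing flag is the giveaway: `--dry-run=server` only accepts a value since Helm 3.13, while older clients parse `--dry-run` as a boolean, producing exactly the strconv.ParseBool error above. A minimal client-side guard for upgrade.sh could look like this (a sketch; the version_ge helper is an assumption, not part of the real script):

```shell
#!/bin/sh
# Sketch: refuse to run with a Helm client older than 3.13, since the
# prepare-upgrade step relies on `helm template --dry-run=server`.

# version_ge A B: succeeds when version A >= version B (relies on sort -V)
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required="3.13.0"
# Only query helm when it is on PATH, so the sketch is runnable anywhere
if command -v helm >/dev/null 2>&1; then
  current="$(helm version --template '{{.Version}}' | sed 's/^v//')"
  if ! version_ge "$current" "$required"; then
    echo "Helm $current is too old; >= $required is required" >&2
  fi
fi
```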
The update is done now, just reporting back. Thanks, the community really delivers.
hongming Got it, thanks.