I'm hitting an error during the upgrade; these resources are not managed by Helm:

upgrade.go:142: [debug] preparing upgrade for ks-core
upgrade.go:150: [debug] performing update for ks-core
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: GlobalRole "anonymous" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ks-core"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kubesphere-system"
helm.go:84: [debug] GlobalRole "anonymous" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ks-core"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kubesphere-system"
rendered manifests contain a resource that already exists. Unable to continue with update
helm.sh/helm/v3/pkg/action.(*Upgrade).performUpgrade
	helm.sh/helm/v3/pkg/action/upgrade.go:301
helm.sh/helm/v3/pkg/action.(*Upgrade).RunWithContext
	helm.sh/helm/v3/pkg/action/upgrade.go:151
main.newUpgradeCmd.func2
	helm.sh/helm/v3/cmd/helm/upgrade.go:199
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/cobra@v1.5.0/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/cobra@v1.5.0/command.go:990
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/cobra@v1.5.0/command.go:918
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
	runtime/proc.go:250
runtime.goexit
	runtime/asm_arm64.s:1172
UPGRADE FAILED
main.newUpgradeCmd.func2
	helm.sh/helm/v3/cmd/helm/upgrade.go:201
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/cobra@v1.5.0/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/cobra@v1.5.0/command.go:990
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/cobra@v1.5.0/command.go:918
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
	runtime/proc.go:250
runtime.goexit
	runtime/asm_arm64.s:1172

    Is there a script that can fix all of these Helm-unmanaged resources in one pass?
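    For reference, a minimal sketch of such a one-off pass (not an official script; the resource types listed are assumptions based on the error above). It stamps onto each existing object the ownership metadata that Helm validates:

    for r in globalroles.iam.kubesphere.io globalrolebindings.iam.kubesphere.io; do
      kubectl get "$r" -o name | while read -r obj; do
        # Helm refuses to adopt an object unless this label and these annotations are present
        kubectl label "$obj" app.kubernetes.io/managed-by=Helm --overwrite
        kubectl annotate "$obj" meta.helm.sh/release-name=ks-core meta.helm.sh/release-namespace=kubesphere-system --overwrite
      done
    done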

    My cluster is on TKE, and that's where I'm running the upgrade.

    cat host-upgrade.log | grep "apply CRDs" -A 20

    apply CRDs
    customresourcedefinition.apiextensions.k8s.io/applications.app.k8s.io configured
    customresourcedefinition.apiextensions.k8s.io/applicationreleases.application.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/applications.application.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/applicationversions.application.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/categories.application.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/repos.application.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/clusters.cluster.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/labels.cluster.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/apiservices.extensions.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/extensionentries.extensions.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/jsbundles.extensions.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/reverseproxies.extensions.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/ingressclassscopes.gateway.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/builtinroles.iam.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/categories.iam.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/clusterrolebindings.iam.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/clusterroles.iam.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/globalrolebindings.iam.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/globalroles.iam.kubesphere.io unchanged
    customresourcedefinition.apiextensions.k8s.io/groupbindings.iam.kubesphere.io unchanged
      • hongming

      xingxing122 Check why this command was not executed: https://github.com/kubesphere/ks-installer/blob/release-4.1/scripts/upgrade.sh#L180-L185. Is the script content identical to yours?

      helm template -s templates/prepare-upgrade-job.yaml -n kubesphere-system --release-name \
              --set upgrade.prepare=true,upgrade.image.registry=$IMAGE_REGISTRY,upgrade.image.tag=$KS_UPGRADE_TAG \
              $EXTENSION_REGISTRY_ARG \
              --set global.imageRegistry=$IMAGE_REGISTRY,global.tag=$TAG \
              -f ks-core-values.yaml \
              $chart --dry-run=server | kubectl -n kubesphere-system apply --wait -f - && kubectl -n kubesphere-system wait --for=condition=complete --timeout=600s job/prepare-upgrade

      During a normal upgrade you should see logs like the following:

      apply CRDs
      configmap/ks-upgrade-prepare-config created
      job.batch/prepare-upgrade created
      persistentvolumeclaim/ks-upgrade created
      job.batch/prepare-upgrade condition met
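      To confirm whether this step ever ran in your cluster, you can look for the objects it creates, using the names from the expected log above (sketch):

      kubectl -n kubesphere-system get configmap/ks-upgrade-prepare-config job/prepare-upgrade pvc/ks-upgrade
      kubectl -n kubesphere-system logs job/prepare-upgrade --tail=50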

      Then something is wrong with my upgrade: the script never executed this part. Odd. The upgrade is now halfway through; how do I run it again?

      It's stuck here. I tried to restart the upgrade from scratch, but that doesn't seem to work either; it just hangs.

        hongming What about after I interrupt it? I see the pod is in Pending state. How do I continue troubleshooting?
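        A Pending pod can usually be narrowed down from its events (sketch; the pod name is a placeholder, and an unbound PVC such as ks-upgrade is a common cause of Pending):

        kubectl -n kubesphere-system get pods
        kubectl -n kubesphere-system describe pod <pending-pod-name>   # read the Events section at the bottom
        kubectl -n kubesphere-system describe pvc ks-upgrade           # check whether the PVC the job mounts is Bound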

          hongming The script is identical. I compared it against the one from the download instructions in the docs.

          xingxing122 upgrade.sh can be run repeatedly. You haven't modified the script, right? Don't remove set -e; a normal run will definitely reach the command I listed above.
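          That matters because with set -e the script aborts at the first failing command instead of skipping past it; a minimal illustration (not taken from upgrade.sh):

          set -e
          false                 # the first failing command stops the script here
          echo "never reached"  # with set -e this line never runs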

            hongming Earlier I used patch to add the Helm labels myself; this is what I ran. Let me try undoing it:

            for r in globalroles.iam.kubesphere.io globalrolebindings.iam.kubesphere.io workspacetemplates.tenant.kubesphere.io clusterroles.iam.kubesphere.io clusterrolebindings.iam.kubesphere.io; do \

            kubectl get $r -o name | xargs -I{} kubectl label {} app.kubernetes.io/managed-by=Helm --overwrite; \

            kubectl get $r -o name | xargs -I{} kubectl annotate {} meta.helm.sh/release-name=ks-core meta.helm.sh/release-namespace=kubesphere-system --overwrite; \

            done

            The script hasn't been modified. I'll remove the labels I added and re-run it.
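            For the record, the undo works the same way; a trailing "-" on kubectl label/annotate removes the key instead of setting it (sketch):

            for r in globalroles.iam.kubesphere.io globalrolebindings.iam.kubesphere.io workspacetemplates.tenant.kubesphere.io clusterroles.iam.kubesphere.io clusterrolebindings.iam.kubesphere.io; do \
              kubectl get $r -o name | xargs -I{} kubectl label {} app.kubernetes.io/managed-by-; \
              kubectl get $r -o name | xargs -I{} kubectl annotate {} meta.helm.sh/release-name- meta.helm.sh/release-namespace-; \
            done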

            The command did run this time, but it looks like it errored:

            apply CRDs
            Error: invalid argument "server" for "--dry-run" flag: strconv.ParseBool: parsing "server": invalid syntax
            error: no objects passed to apply
            customresourcedefinition.apiextensions.k8s.io/applications.app.k8s.io configured
            customresourcedefinition.apiextensions.k8s.io/applicationreleases.application.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/applications.application.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/applicationversions.application.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/categories.application.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/repos.application.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/clusters.cluster.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/labels.cluster.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/apiservices.extensions.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/extensionentries.extensions.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/jsbundles.extensions.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/reverseproxies.extensions.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/ingressclassscopes.gateway.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/builtinroles.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/categories.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/clusterrolebindings.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/clusterroles.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/globalrolebindings.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/globalroles.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/groupbindings.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/groups.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/loginrecords.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/rolebindings.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/roles.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/roletemplates.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/users.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/workspacerolebindings.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/workspaceroles.iam.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/categories.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/extensions.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/extensionversions.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/installplans.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/repositories.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/serviceaccounts.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/resourcequotas.quota.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/provisionercapabilities.storage.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/storageclasscapabilities.storage.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/workspaces.tenant.kubesphere.io unchanged
            customresourcedefinition.apiextensions.k8s.io/workspacetemplates.tenant.kubesphere.io unchanged
            review your upgrade values.yaml and make sure the extension configs matches the extension you published, you have 10 seconds before upgrade starts.
            upgrade.go:142: [debug] preparing upgrade for ks-core
            upgrade.go:150: [debug] performing update for ks-core
            Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: GlobalRole "anonymous" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ks-core"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kubesphere-system"
            helm.go:84: [debug] GlobalRole "anonymous" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ks-core"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kubesphere-system"
            rendered manifests contain a resource that already exists. Unable to continue with update
            helm.sh/helm/v3/pkg/action.(*Upgrade).performUpgrade
            	helm.sh/helm/v3/pkg/action/upgrade.go:301
            helm.sh/helm/v3/pkg/action.(*Upgrade).RunWithContext
            	helm.sh/helm/v3/pkg/action/upgrade.go:151
            main.newUpgradeCmd.func2
            	helm.sh/helm/v3/cmd/helm/upgrade.go:199
            github.com/spf13/cobra.(*Command).execute
            	github.com/spf13/cobra@v1.5.0/command.go:872
            github.com/spf13/cobra.(*Command).ExecuteC
            	github.com/spf13/cobra@v1.5.0/command.go:990
            github.com/spf13/cobra.(*Command).Execute
            	github.com/spf13/cobra@v1.5.0/command.go:918
            main.main
            	helm.sh/helm/v3/cmd/helm/helm.go:83
            runtime.main
            	runtime/proc.go:250
            runtime.goexit
            	runtime/asm_arm64.s:1172
            UPGRADE FAILED
            main.newUpgradeCmd.func2
            	helm.sh/helm/v3/cmd/helm/upgrade.go:201
            github.com/spf13/cobra.(*Command).execute
            	github.com/spf13/cobra@v1.5.0/command.go:872
            github.com/spf13/cobra.(*Command).ExecuteC
            	github.com/spf13/cobra@v1.5.0/command.go:990
            github.com/spf13/cobra.(*Command).Execute
            	github.com/spf13/cobra@v1.5.0/command.go:918
            main.main
            	helm.sh/helm/v3/cmd/helm/helm.go:83
            runtime.main
            	runtime/proc.go:250
            runtime.goexit
            	runtime/asm_arm64.s:1172

            Doesn't the script need some polishing? It looks like there's a Helm version problem too, right?

              xingxing122 Yes, the Helm version needs to be 3.13+. We'll adjust the script.
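              A quick way to verify is whether your local Helm parses --dry-run as a string; older releases treat it as a boolean, which is exactly the strconv.ParseBool error above (sketch):

              helm version --short
              helm template --help | grep -- '--dry-run'   # on 3.13+ the flag accepts string values such as "server"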

              Once it's updated, please let me know. Thanks, the community really delivers.

                I0421 13:14:14.563237       1 filepath.go:71] [Storage] LocalFileStorage File directory /tmp/ks-upgrade already exists
                I0421 13:14:14.563378       1 executor.go:158] [Job] kubeedge is disabled
                I0421 13:14:14.563390       1 executor.go:158] [Job] kubefed is disabled
                I0421 13:14:14.563394       1 executor.go:158] [Job] servicemesh is disabled
                I0421 13:14:14.563396       1 executor.go:158] [Job] storage-utils is disabled
                I0421 13:14:14.563397       1 executor.go:158] [Job] tower is disabled
                I0421 13:14:14.563399       1 executor.go:158] [Job] whizard-telemetry is disabled
                I0421 13:14:14.563401       1 executor.go:158] [Job] whizard-alerting is disabled
                I0421 13:14:14.563409       1 executor.go:155] [Job] application is enabled, priority 100
                I0421 13:14:14.563413       1 executor.go:155] [Job] devops is enabled, priority 800
                I0421 13:14:14.563421       1 executor.go:155] [Job] iam is enabled, priority 999
                I0421 13:14:14.563428       1 executor.go:158] [Job] whizard-logging is disabled
                I0421 13:14:14.563431       1 executor.go:158] [Job] metrics-server is disabled
                I0421 13:14:14.563435       1 executor.go:155] [Job] network is enabled, priority 100
                I0421 13:14:14.563443       1 executor.go:155] [Job] core is enabled, priority 10000
                I0421 13:14:14.563446       1 executor.go:158] [Job] whizard-events is disabled
                I0421 13:14:14.563457       1 executor.go:155] [Job] gateway is enabled, priority 90
                I0421 13:14:14.563460       1 executor.go:158] [Job] opensearch is disabled
                I0421 13:14:14.563462       1 executor.go:158] [Job] vector is disabled
                I0421 13:14:14.563464       1 executor.go:158] [Job] whizard-monitoring is disabled
                I0421 13:14:14.563466       1 executor.go:158] [Job] whizard-notification is disabled
                I0421 13:14:14.568650       1 helm.go:145] getting history for release [ks-core]
                I0421 13:14:14.633176       1 validator.go:57] [Validator] Current release's version is v3.3.2
                I0421 13:14:14.633200       1 executor.go:220] [Job] core prepare-upgrade start
                I0421 13:14:14.633209       1 executor.go:58] [Job] Detected that the plugin core is true
                I0421 13:14:14.658523       1 core.go:314] scale down deployment kubesphere-system/ks-apiserver unchanged
                I0421 13:14:14.668097       1 core.go:314] scale down deployment kubesphere-system/ks-console unchanged
                I0421 13:14:14.680227       1 core.go:314] scale down deployment kubesphere-system/ks-controller-manager unchanged
                I0421 13:14:14.690029       1 core.go:314] scale down deployment kubesphere-system/ks-installer unchanged
                panic: runtime error: invalid memory address or nil pointer dereference
                [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x2025ccf]
                goroutine 1 [running]:
                kubesphere.io/ks-upgrade/pkg/jobs/core.(*upgradeJob).deleteKubeSphereWebhook(0xc0009def00, {0x2ba2f40, 0x40e7c00})
                	/workspace/pkg/jobs/core/core.go:429 +0x22f
                kubesphere.io/ks-upgrade/pkg/jobs/core.(*upgradeJob).PrepareUpgrade(0xc0009def00, {0x2ba2f40, 0x40e7c00})
                	/workspace/pkg/jobs/core/core.go:118 +0xcc
                kubesphere.io/ks-upgrade/pkg/executor.(*Executor).PrepareUpgrade(0xc000403560, {0x2ba2f40, 0x40e7c00})
                	/workspace/pkg/executor/executor.go:227 +0x275
                main.init.func5(0xc000158600?, {0x26e5683?, 0x4?, 0x26e5687?})
                	/workspace/cmd/ks-upgrade.go:102 +0x26
                github.com/spf13/cobra.(*Command).execute(0x4095a80, {0xc00081ce40, 0x3, 0x3})
                	/workspace/vendor/github.com/spf13/cobra/command.go:985 +0xaaa
                github.com/spf13/cobra.(*Command).ExecuteC(0x4094c20)
                	/workspace/vendor/github.com/spf13/cobra/command.go:1117 +0x3ff
                github.com/spf13/cobra.(*Command).Execute(...)
                	/workspace/vendor/github.com/spf13/cobra/command.go:1041
                main.main()
                	/workspace/cmd/ks-upgrade.go:136 +0x4e

                After updating Helm to 3.17 and re-running, it fails with the panic above.
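                The trace points at deleteKubeSphereWebhook dereferencing a nil pointer, so a first debugging step (a guess from the trace, not a fix) is to check which KubeSphere webhook configurations actually exist:

                kubectl get validatingwebhookconfigurations | grep -i kubesphere
                kubectl get mutatingwebhookconfigurations | grep -i kubesphere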