• Installation & Deployment
  • k8s v1.23.6 & ks v4.1.2: WhizardTelemetry monitoring plugin fails to install


Operating system information
Virtual machine / CentOS 7.9 / 4C/16G / x86 architecture

Kubernetes version
k8s v1.23.6

Container runtime
docker 20.10.12

KubeSphere version
v4.1.2, deployed with Helm on an existing Kubernetes cluster

What is the problem
Installing the WhizardTelemetry monitoring extension fails. The helm install debug log is below:

```
2024-12-12T08:18:39.720650986Z WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: kube.config
2024-12-12T08:18:39.720728775Z WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: kube.config
2024-12-12T08:18:39.741057735Z history.go:56: [debug] getting history for release whizard-monitoring-agent
2024-12-12T08:18:39.747817090Z install.go:214: [debug] Original chart version: ""
2024-12-12T08:18:39.748104380Z Release "whizard-monitoring-agent" does not exist. Installing it now.
2024-12-12T08:18:39.764997537Z install.go:231: [debug] CHART PATH: /tmp/helm-executor/repository/whizard-monitoring-1.1.1.tgz
2024-12-12T08:18:39.765023424Z
2024-12-12T08:18:40.323956007Z client.go:142: [debug] creating 1 resource(s)
2024-12-12T08:18:40.411399272Z install.go:168: [debug] CRD alertmanagerconfigs.monitoring.coreos.com is already present. Skipping.
2024-12-12T08:18:40.624665138Z client.go:142: [debug] creating 1 resource(s)
2024-12-12T08:18:40.729359518Z install.go:168: [debug] CRD alertmanagers.monitoring.coreos.com is already present. Skipping.
2024-12-12T08:18:40.821137915Z client.go:142: [debug] creating 1 resource(s)
2024-12-12T08:18:40.875912148Z install.go:168: [debug] CRD podmonitors.monitoring.coreos.com is already present. Skipping.
2024-12-12T08:18:40.928055906Z client.go:142: [debug] creating 1 resource(s)
2024-12-12T08:18:40.957296671Z install.go:168: [debug] CRD probes.monitoring.coreos.com is already present. Skipping.
2024-12-12T08:18:41.232759892Z client.go:142: [debug] creating 1 resource(s)
2024-12-12T08:18:41.729336123Z client.go:142: [debug] creating 1 resource(s)
2024-12-12T08:18:41.858969135Z install.go:168: [debug] CRD prometheuses.monitoring.coreos.com is already present. Skipping.
2024-12-12T08:18:41.918006728Z client.go:142: [debug] creating 1 resource(s)
2024-12-12T08:18:41.947157328Z install.go:168: [debug] CRD prometheusrules.monitoring.coreos.com is already present. Skipping.
2024-12-12T08:18:42.124356739Z client.go:142: [debug] creating 1 resource(s)
2024-12-12T08:18:42.256105671Z client.go:142: [debug] creating 1 resource(s)
2024-12-12T08:18:42.330081582Z install.go:168: [debug] CRD servicemonitors.monitoring.coreos.com is already present. Skipping.
2024-12-12T08:18:42.520442927Z client.go:142: [debug] creating 1 resource(s)
2024-12-12T08:18:42.644925361Z install.go:168: [debug] CRD thanosrulers.monitoring.coreos.com is already present. Skipping.
2024-12-12T08:18:42.680985665Z client.go:142: [debug] creating 1 resource(s)
2024-12-12T08:18:42.740697545Z client.go:142: [debug] creating 1 resource(s)
2024-12-12T08:18:42.758400944Z wait.go:48: [debug] beginning wait for 4 resources with timeout of 1m0s
2024-12-12T08:18:43.165348660Z install.go:205: [debug] Clearing REST mapper cache
2024-12-12T08:18:51.327332986Z Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Prometheus.spec): unknown field "automountServiceAccountToken" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "hostNetwork" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "scrapeConfigNamespaceSelector" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "scrapeConfigSelector" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "tsdb" in com.coreos.monitoring.v1.Prometheus.spec]
2024-12-12T08:18:51.327423564Z helm.go:84: [debug] error validating "": error validating data: [ValidationError(Prometheus.spec): unknown field "automountServiceAccountToken" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "hostNetwork" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "scrapeConfigNamespaceSelector" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "scrapeConfigSelector" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "tsdb" in com.coreos.monitoring.v1.Prometheus.spec]
2024-12-12T08:18:51.327436603Z helm.sh/helm/v3/pkg/kube.scrubValidationError
2024-12-12T08:18:51.327443435Z helm.sh/helm/v3/pkg/kube/client.go:815
2024-12-12T08:18:51.327450630Z helm.sh/helm/v3/pkg/kube.(*Client).Build
2024-12-12T08:18:51.327457441Z helm.sh/helm/v3/pkg/kube/client.go:358
2024-12-12T08:18:51.327464260Z helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
2024-12-12T08:18:51.327471027Z helm.sh/helm/v3/pkg/action/install.go:320
```
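
The validation error names exactly which `Prometheus.spec` fields the installed CRD schema does not recognize; pulling them out of the error text makes the mismatch easy to read (a small parsing sketch over the message above):

```shell
# Extract the unknown field names from the helm validation error text.
ERR='error validating data: [ValidationError(Prometheus.spec): unknown field "automountServiceAccountToken" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "hostNetwork" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "scrapeConfigNamespaceSelector" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "scrapeConfigSelector" in com.coreos.monitoring.v1.Prometheus.spec, ValidationError(Prometheus.spec): unknown field "tsdb" in com.coreos.monitoring.v1.Prometheus.spec]'
FIELDS=$(echo "$ERR" | grep -o 'unknown field "[^"]*"' | cut -d'"' -f2 | sort -u)
echo "$FIELDS"
```

These fields only exist in newer prometheus-operator CRD schemas, which is consistent with stale CRDs left behind by an earlier installation.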

lydeng

Monitoring installation failed with a CR validation error (most commonly seen after manually upgrading the open-source version).

Root cause: the upgrade step that deletes the old prometheus-operator CRDs was skipped, or the associated CR resources were not deleted successfully, so the cluster still carries CRD schemas that predate the new `Prometheus.spec` fields.

Fix: force-apply the CRDs of the matching version:

```bash
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.75.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.75.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.75.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.75.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.75.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusagents.yaml
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.75.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.75.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.75.0/example/prometheus-operator-crd/monitoring.coreos.com_scrapeconfigs.yaml
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.75.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.75.0/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml
```
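
The ten commands above differ only in the CRD name, so they can also be generated from a list (a sketch; the `v0.75.0` tag is taken from the commands above and should match the prometheus-operator version your KubeSphere release expects):

```shell
# Build the apply command for every prometheus-operator CRD in one loop.
# Adjust VERSION if your KubeSphere release bundles a different
# prometheus-operator version.
VERSION="v0.75.0"
BASE="https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/${VERSION}/example/prometheus-operator-crd"
CRDS="alertmanagerconfigs alertmanagers podmonitors probes prometheusagents prometheuses prometheusrules scrapeconfigs servicemonitors thanosrulers"
CMDS=$(for crd in $CRDS; do
  echo "kubectl apply --server-side --force-conflicts -f ${BASE}/monitoring.coreos.com_${crd}.yaml"
done)
# Review the generated list first, then pipe it to sh to execute.
echo "$CMDS"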