Requirement: a single Prometheus instance is no longer enough, so I want to deploy a second Prometheus in the kubesphere-monitoring-system namespace (the KubeSphere monitoring system).
The error is as follows:
Internal Server Error
there is more than one prometheus custom resource in kubesphere-monitoring-system
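To confirm the conflict, you can list the Prometheus custom resources in that namespace (a diagnostic sketch; KubeSphere's monitoring API expects exactly one Prometheus CR there and refuses to choose between several):

```shell
# List all Prometheus custom resources in the KubeSphere monitoring namespace.
# Two entries here (e.g. the stock "k8s" instance plus "k8s-business")
# reproduce the "more than one prometheus custom resource" error.
kubectl get prometheus -n kubesphere-monitoring-system
```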

The corresponding YAML:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s-business
  name: k8s-business
  namespace: kubesphere-monitoring-system
spec:
  additionalScrapeConfigs:
    key: prometheus-additional.yaml
    name: additional-scrape-configs
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: node-role.kubernetes.io/monitoring
            operator: Exists
        weight: 100
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: kubesphere-monitoring-system
      port: web
  evaluationInterval: 5m
  image: 'prom/prometheus:v2.26.0'
  logLevel: debug
  nodeSelector:
    kubernetes.io/os: linux
  podMonitorNamespaceSelector:
    matchLabels:
      kubesphere.io/namespace: prom-exporter
  podMonitorSelector: {}
  probeNamespaceSelector:
    matchLabels:
      kubesphere.io/namespace: prom-exporter
  probeSelector: {}
  query:
    maxConcurrency: 1000
  replicas: 1
  resources:
    limits:
      cpu: '4'
      memory: 8Gi
    requests:
      cpu: 200m
      memory: 500Mi
  retention: 3d
  ruleSelector:
    matchLabels:
      prometheus: k8s-business
      role: alert-rules-business
  scrapeInterval: 1m
  secrets:
  - kube-etcd-client-certs
  securityContext:
    fsGroup: 0
    runAsNonRoot: false
    runAsUser: 0
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector:
    matchLabels:
      kubesphere.io/namespace: prom-exporter
  serviceMonitorSelector: {}
  storage:
    volumeClaimTemplate:
      spec:
        resources:
          requests:
            storage: 20Gi
  thanos:
    baseImage: quay.io/thanos/thanos
    version: v0.8.1
  tolerations:
  - effect: NoSchedule
    key: dedicated
    operator: Equal
    value: monitoring
  version: v2.26.0
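One possible workaround (an assumption on my part, not something stated by the error message) is to leave the stock Prometheus alone in kubesphere-monitoring-system and create the business instance in a namespace of its own, so KubeSphere still sees exactly one Prometheus CR in its monitoring namespace. A minimal sketch, where `monitoring-business` is a hypothetical namespace name:

```yaml
# Hypothetical sketch: the second Prometheus lives outside
# kubesphere-monitoring-system to avoid the "more than one
# prometheus custom resource" conflict.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s-business
  name: k8s-business
  namespace: monitoring-business   # example namespace, not from the source
spec:
  # ...same spec as above...
```

Note that namespaced dependencies referenced by the spec (the `additional-scrape-configs` Secret, the `kube-etcd-client-certs` Secret, and the `prometheus-k8s` ServiceAccount with its RBAC bindings) would also need to exist in the new namespace.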
Given this situation, what would you recommend?
—
Official reply: kubesphere/kubesphere#3880