When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer. If an issue is not created according to the template, the administrators reserve the right to close it.
Make sure the post is clearly formatted and easy to read; use markdown code block syntax to format code and logs.
If you spend only one minute writing your question, you cannot expect someone else to spend half an hour answering it.
Operating system information
e.g., virtual machine / physical machine, CentOS 7.5, 4C/8G
Kubernetes version information
Paste the output of the `kubectl version` command below.
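For reference, a minimal sketch of how this information can be collected (the `kubectl get nodes` line is an optional extra, not required by the template):

```bash
# Client and server versions (paste the full output)
kubectl version
# Optional: node list with kubelet/OS versions, useful for multi-node clusters
kubectl get nodes -o wide
```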
Container runtime
Paste the output of `docker version` / `crictl version` / `nerdctl version` below.
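For example (only the runtime that the cluster actually uses needs to be reported; the other commands can be skipped if the binaries are not installed):

```bash
docker version
crictl version
nerdctl version
```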
KubeSphere version information
v3.2.1. Online installation. Fresh installation.
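For context, the install presumably followed the standard ks-installer online procedure on an existing Kubernetes cluster. The exact commands were not recorded, so the sketch below assumes the stock v3.2.1 manifests from the official release; a customized cluster-configuration.yaml may have been used instead:

```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
# The installer log below was taken from the ks-installer pod
kubectl logs -n kubesphere-system deploy/ks-installer -f
```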
What is the problem
```
task openpitrix status is successful (2/4)
task multicluster status is successful (3/4)
task monitoring status is failed (4/4)
**************************************************
Collecting installation results ...
Task 'monitoring' failed:
******************************************************************************************************************************************************
{
"counter": 71,
"created": "2022-04-09T04:47:53.049525",
"end_line": 69,
"event": "runner_on_failed",
"event_data": {
"duration": 151.119823,
"end": "2022-04-09T04:47:53.049320",
"event_loop": null,
"host": "localhost",
"ignore_errors": null,
"play": "localhost",
"play_pattern": "localhost",
"play_uuid": "0eec1041-0a37-70fc-6492-000000000005",
"playbook": "/kubesphere/playbooks/monitoring.yaml",
"playbook_uuid": "aaff0868-a185-497d-ac03-da8e62b3ddbe",
"remote_addr": "127.0.0.1",
"res": {
"_ansible_no_log": false,
"changed": true,
"cmd": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/node-exporter --force",
"delta": "0:02:30.793922",
"end": "2022-04-09 12:47:53.011382",
"invocation": {
"module_args": {
"_raw_params": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/node-exporter --force",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"msg": "non-zero return code",
"rc": 1,
"start": "2022-04-09 12:45:22.217460",
"stderr": "Error from server (InternalError): error when creating \\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceAccount.yaml\\": Internal error occurred: resource quota evaluation timed out\\nError from server (Timeout): error when retrieving current configuration of:\\nResource: \\"monitoring.coreos.com/v1, Resource=servicemonitors\\", GroupVersionKind: \\"monitoring.coreos.com/v1, Kind=ServiceMonitor\\"\\nName: \\"node-exporter\\", Namespace: \\"kubesphere-monitoring-system\\"\\nfrom server for: \\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceMonitor.yaml\\": the server was unable to return a response in the time allotted, but may still be processing the request (get servicemonitors.monitoring.coreos.com node-exporter)",
"stderr_lines": [
"Error from server (InternalError): error when creating \\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceAccount.yaml\\": Internal error occurred: resource quota evaluation timed out",
"Error from server (Timeout): error when retrieving current configuration of:",
"Resource: \\"monitoring.coreos.com/v1, Resource=servicemonitors\\", GroupVersionKind: \\"monitoring.coreos.com/v1, Kind=ServiceMonitor\\"",
"Name: \\"node-exporter\\", Namespace: \\"kubesphere-monitoring-system\\"",
"from server for: \\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceMonitor.yaml\\": the server was unable to return a response in the time allotted, but may still be processing the request (get servicemonitors.monitoring.coreos.com node-exporter)"
],
"stdout": "clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter created\\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter created\\ndaemonset.apps/node-exporter created\\nservice/node-exporter created",
"stdout_lines": [
"clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter created",
"clusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter created",
"daemonset.apps/node-exporter created",
"service/node-exporter created"
]
},
"role": "ks-monitor",
"start": "2022-04-09T04:45:21.929497",
"task": "Monitoring | Installing node-exporter",
"task_action": "command",
"task_args": "",
"task_path": "/kubesphere/installer/roles/ks-monitor/tasks/node-exporter.yaml:2",
"task_uuid": "0eec1041-0a37-70fc-6492-000000000035",
"uuid": "0401bb2f-a927-4808-8039-fee3bec32ecd"
},
"parent_uuid": "0eec1041-0a37-70fc-6492-000000000035",
"pid": 4577,
"runner_ident": "monitoring",
"start_line": 68,
"stdout": "fatal: [localhost]: FAILED! => {\"changed\": true, \"cmd\": \"/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/node-exporter --force\", \"delta\": \"0:02:30.793922\", \"end\": \"2022-04-09 12:47:53.011382\", \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2022-04-09 12:45:22.217460\", \"stderr\": \"Error from server (InternalError): error when creating \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceAccount.yaml\\\": Internal error occurred: resource quota evaluation timed out\\nError from server (Timeout): error when retrieving current configuration of:\\nResource: \\\"monitoring.coreos.com/v1, Resource=servicemonitors\\\", GroupVersionKind: \\\"monitoring.coreos.com/v1, Kind=ServiceMonitor\\\"\\nName: \\\"node-exporter\\\", Namespace: \\\"kubesphere-monitoring-system\\\"\\nfrom server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceMonitor.yaml\\\": the server was unable to return a response in the time allotted, but may still be processing the request (get servicemonitors.monitoring.coreos.com node-exporter)\", \"stderr_lines\": [\"Error from server (InternalError): error when creating \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceAccount.yaml\\\": Internal error occurred: resource quota evaluation timed out\", \"Error from server (Timeout): error when retrieving current configuration of:\", \"Resource: \\\"monitoring.coreos.com/v1, Resource=servicemonitors\\\", GroupVersionKind: \\\"monitoring.coreos.com/v1, Kind=ServiceMonitor\\\"\", \"Name: \\\"node-exporter\\\", Namespace: \\\"kubesphere-monitoring-system\\\"\", \"from server for: \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceMonitor.yaml\\\": the server was unable to return a response in the time allotted, but may still be processing the request (get servicemonitors.monitoring.coreos.com node-exporter)\"], \"stdout\": \"clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter created\\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter created\\ndaemonset.apps/node-exporter created\\nservice/node-exporter created\", \"stdout_lines\": [\"clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter created\", \"clusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter created\", \"daemonset.apps/node-exporter created\", \"service/node-exporter created\"]}",
"uuid": "0401bb2f-a927-4808-8039-fee3bec32ecd"
}
******************************************************************************************************************************************************
Failed to ansible-playbook ks-config.yaml
E0409 12:53:15.185983 1 reflector.go:284] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to watch *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&resourceVersion=1389&timeoutSeconds=499&watch=true": dial tcp 10.233.0.1:443: connect: connection refused
E0409 12:53:16.188006 1 reflector.go:131] pkg/mod/k8s.io/client-go@v0.0.0-20190411052641-7a6b4715b709/tools/cache/reflector.go:99: Failed to list *unstructured.Unstructured: Get "https://10.233.0.1:443/apis/installer.kubesphere.io/v1alpha1/namespaces/kubesphere-system/clusterconfigurations?fieldSelector=metadata.name%3Dks-installer&limit=500&resourceVersion=0": dial tcp 10.233.0.1:443: connect: connection refused
```
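The errors above point at the kube-apiserver itself: resource quota evaluation timed out while applying the node-exporter manifests, and shortly afterwards the installer could no longer reach the cluster service IP 10.233.0.1:443 at all (connection refused). Below is a minimal sketch of generic follow-up checks, assuming kubectl access from a control-plane node; these are suggestions for narrowing the problem down, not steps that have already been run:

```bash
# Is the apiserver (and etcd) running and not restarting?
kubectl get pods -n kube-system -o wide | grep -E 'kube-apiserver|etcd'

# Node-level resource pressure that could explain apiserver timeouts
# (kubectl top requires metrics-server to be installed)
kubectl top nodes
kubectl describe nodes | grep -A 8 'Conditions:'

# Did the monitoring objects get created despite the timeout?
kubectl get all -n kubesphere-monitoring-system

# Once the apiserver is stable again, re-run the installer to retry the failed tasks
kubectl -n kubesphere-system rollout restart deployment ks-installer
```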