freemankevin
Jenkins broke during an update and no backup was made, so I plan to reinstall. What are the standard steps? Any advice would be appreciated.
freemankevin
I've already set it to false in the cluster yaml and am ready to reinstall.
freemankevin
@Feynman Could you please take a look?
shaowenchen
Are you reinstalling now, or trying to fix the Jenkins issue?
Cauchy
Did you change devops.enabled to false?
To reinstall devops, follow these steps (a scripted sketch follows the list):
1. Delete the old Jenkins: helm del ks-jenkins -n kubesphere-devops-system
2. Delete the devops status from the cc: kubectl edit cc -n kubesphere-system ks-installer and remove status.devops
3. If devops.enabled in the cc spec is false, change it to true; if devops.enabled is already true, restart ks-installer after deleting the devops status:
kubectl rollout restart deploy -n kubesphere-system ks-installer
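The same steps can be run non-interactively. A minimal sketch, assuming the ClusterConfiguration is named ks-installer, the short name cc resolves on the cluster, and its status is not served as a separate subresource:

# 1. Remove the old Helm release (skip if Jenkins was not installed via Helm)
helm del ks-jenkins -n kubesphere-devops-system

# 2. Drop the recorded devops status with a JSON patch instead of kubectl edit
kubectl patch cc ks-installer -n kubesphere-system --type=json \
  -p='[{"op": "remove", "path": "/status/devops"}]'

# 3. Ensure devops is enabled in the spec, then restart the installer
kubectl patch cc ks-installer -n kubesphere-system --type=merge \
  -p='{"spec": {"devops": {"enabled": true}}}'
kubectl rollout restart deploy -n kubesphere-system ks-installer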
freemankevin
shaowenchen Repairing it looks like a hassle. A fix would be ideal if possible, but for now the plan is to reset.
shaowenchen
What are the symptoms right now? I haven't seen a description of what you actually did. If there's no data, a reset is faster.
freemankevin
shaowenchen Hi, I've been resetting per the advice above, but it isn't going smoothly. We're still in a testing phase with no data, so it's fine to reset at will.
freemankevin
shaowenchen Since the local Jenkins was installed from the yaml in the KS 3.0 docs rather than via Helm, I only ran:
"2. Delete the devops status from the cc: kubectl edit cc -n kubesphere-system ks-installer and remove status.devops
3. If devops.enabled in the cc spec is false, change it to true; if devops.enabled is already true, restart ks-installer after deleting the devops status:
kubectl rollout restart deploy -n kubesphere-system ks-installer"
But Jenkins never reappeared; now it is simply gone.
shaowenchen
First, check whether all workloads under kubesphere-devops-system have been cleared; if they have, you can delete the namespace outright.
Then check kubectl edit cc -n kubesphere-system ks-installer, remove the devops status, and change enabled to true.
Finally, restart ks-installer (see the sketch below).
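A minimal sketch of that cleanup, assuming nothing in kubesphere-devops-system needs to be preserved:

# Check whether any workloads are still left in the devops namespace
kubectl get all -n kubesphere-devops-system

# If it is empty, delete the namespace outright
kubectl delete ns kubesphere-devops-system

# Clear the devops status and re-enable it (see the patch commands above),
# then restart the installer
kubectl rollout restart deploy -n kubesphere-system ks-installer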
freemankevin
**************************************************
task monitoring status is failed
task multicluster status is successful
task alerting status is successful
task auditing status is successful
task devops status is successful
task events status is successful
task logging status is successful
task notification status is successful
task openpitrix status is successful
task servicemesh status is failed
total: 10 completed:10
**************************************************
Task 'monitoring' failed:
******************************************************************************************************************************************************
{
"counter": 105,
"created": "2020-12-16T01:56:31.718653",
"end_line": 104,
"event": "runner_on_failed",
"event_data": {
"duration": 41.486759,
"end": "2020-12-16T01:56:31.718415",
"event_loop": null,
"host": "localhost",
"ignore_errors": null,
"play": "localhost",
"play_pattern": "localhost",
"play_uuid": "e270d635-7838-eebd-949f-000000000005",
"playbook": "/kubesphere/playbooks/monitoring.yaml",
"playbook_uuid": "4389965f-d35d-42fa-91b7-338aba092893",
"remote_addr": "127.0.0.1",
"res": {
"changed": true,
"msg": "All items completed",
"results": [
{
"_ansible_item_label": "prometheus",
"_ansible_no_log": false,
"ansible_loop_var": "item",
"attempts": 5,
"changed": true,
"cmd": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus",
"delta": "0:00:00.455121",
"end": "2020-12-16 09:56:09.511499",
"failed": true,
"failed_when_result": true,
"invocation": {
"module_args": {
"_raw_params": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"item": "prometheus",
"msg": "non-zero return code",
"rc": 1,
"start": "2020-12-16 09:56:09.056378",
"stderr": "The servicemonitors \"kube-scheduler\" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update",
"stderr_lines": [
"The servicemonitors \"kube-scheduler\" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update"
],
"stdout": "secret/additional-scrape-configs unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nservicemonitor.monitoring.coreos.com/prometheus-operator unchanged\nprometheus.monitoring.coreos.com/k8s unchanged\nrolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nrole.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nservice/prometheus-k8s unchanged\nserviceaccount/prometheus-k8s unchanged\nservice/kube-controller-manager-svc unchanged\nservice/kube-scheduler-svc unchanged\nservicemonitor.monitoring.coreos.com/prometheus unchanged\nservicemonitor.monitoring.coreos.com/kube-apiserver unchanged\nservicemonitor.monitoring.coreos.com/coredns unchanged\nservicemonitor.monitoring.coreos.com/kube-controller-manager unchanged\nservicemonitor.monitoring.coreos.com/kubelet unchanged",
"stdout_lines": [
"secret/additional-scrape-configs unchanged",
"clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged",
"clusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged",
"servicemonitor.monitoring.coreos.com/prometheus-operator unchanged",
"prometheus.monitoring.coreos.com/k8s unchanged",
"rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged",
"role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged",
"service/prometheus-k8s unchanged",
"serviceaccount/prometheus-k8s unchanged",
"service/kube-controller-manager-svc unchanged",
"service/kube-scheduler-svc unchanged",
"servicemonitor.monitoring.coreos.com/prometheus unchanged",
"servicemonitor.monitoring.coreos.com/kube-apiserver unchanged",
"servicemonitor.monitoring.coreos.com/coredns unchanged",
"servicemonitor.monitoring.coreos.com/kube-controller-manager unchanged",
"servicemonitor.monitoring.coreos.com/kubelet unchanged"
]
},
{
"_ansible_item_label": "prometheus",
"_ansible_no_log": false,
"ansible_loop_var": "item",
"attempts": 5,
"changed": true,
"cmd": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus",
"delta": "0:00:00.482267",
"end": "2020-12-16 09:56:31.671258",
"failed": true,
"failed_when_result": true,
"invocation": {
"module_args": {
"_raw_params": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"item": "prometheus",
"msg": "non-zero return code",
"rc": 1,
"start": "2020-12-16 09:56:31.188991",
"stderr": "The servicemonitors \"kube-scheduler\" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update",
"stderr_lines": [
"The servicemonitors \"kube-scheduler\" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update"
],
"stdout": "secret/additional-scrape-configs unchanged\nclusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nservicemonitor.monitoring.coreos.com/prometheus-operator unchanged\nprometheus.monitoring.coreos.com/k8s unchanged\nrolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nrole.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nservice/prometheus-k8s unchanged\nserviceaccount/prometheus-k8s unchanged\nservice/kube-controller-manager-svc unchanged\nservice/kube-scheduler-svc unchanged\nservicemonitor.monitoring.coreos.com/prometheus unchanged\nservicemonitor.monitoring.coreos.com/kube-apiserver unchanged\nservicemonitor.monitoring.coreos.com/coredns unchanged\nservicemonitor.monitoring.coreos.com/kube-controller-manager unchanged\nservicemonitor.monitoring.coreos.com/kubelet unchanged",
"stdout_lines": [
"secret/additional-scrape-configs unchanged",
"clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged",
"clusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged",
"servicemonitor.monitoring.coreos.com/prometheus-operator unchanged",
"prometheus.monitoring.coreos.com/k8s unchanged",
"rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged",
"role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged",
"service/prometheus-k8s unchanged",
"serviceaccount/prometheus-k8s unchanged",
"service/kube-controller-manager-svc unchanged",
"service/kube-scheduler-svc unchanged",
"servicemonitor.monitoring.coreos.com/prometheus unchanged",
"servicemonitor.monitoring.coreos.com/kube-apiserver unchanged",
"servicemonitor.monitoring.coreos.com/coredns unchanged",
"servicemonitor.monitoring.coreos.com/kube-controller-manager unchanged",
"servicemonitor.monitoring.coreos.com/kubelet unchanged"
]
}
]
},
"role": "ks-monitor",
"start": "2020-12-16T01:55:50.231656",
"task": "ks-monitor | Installing prometheus",
"task_action": "shell",
"task_args": "",
"task_path": "/kubesphere/installer/roles/ks-monitor/tasks/prometheus.yaml:2",
"task_uuid": "e270d635-7838-eebd-949f-000000000042",
"uuid": "65dfbe17-d2fd-4f57-93d8-00b40dadf993"
},
"parent_uuid": "e270d635-7838-eebd-949f-000000000042",
"pid": 3117,
"runner_ident": "monitoring",
"start_line": 104,
"stdout": "",
"uuid": "65dfbe17-d2fd-4f57-93d8-00b40dadf993"
}
******************************************************************************************************************************************************
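The stderr above points at a stale kube-scheduler ServiceMonitor that kubectl apply can no longer update. A common workaround, as a sketch (assuming KubeSphere's ServiceMonitors live in kubesphere-monitoring-system and that recreating this one is acceptable):

# Delete the ServiceMonitor that apply chokes on; ks-installer will recreate it
kubectl delete servicemonitor kube-scheduler -n kubesphere-monitoring-system
kubectl rollout restart deploy -n kubesphere-system ks-installer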
Task 'servicemesh' failed:
******************************************************************************************************************************************************
{
"counter": 134,
"created": "2020-12-16T02:13:01.734597",
"end_line": 133,
"event": "runner_on_failed",
"event_data": {
"duration": 947.761468,
"end": "2020-12-16T02:13:01.734370",
"event_loop": null,
"host": "localhost",
"ignore_errors": null,
"play": "localhost",
"play_pattern": "localhost",
"play_uuid": "e270d635-7838-1e3b-110f-000000000005",
"playbook": "/kubesphere/playbooks/servicemesh.yaml",
"playbook_uuid": "35b30adc-6982-49ac-9f00-5b0143bc7d57",
"remote_addr": "127.0.0.1",
"res": {
"_ansible_no_log": false,
"attempts": 90,
"changed": true,
"cmd": "/usr/local/bin/kubectl -n istio-system get pod | grep istio-init-crd-10 | awk '{print $3}'",
"delta": "0:00:00.124853",
"end": "2020-12-16 10:13:01.694967",
"invocation": {
"module_args": {
"_raw_params": "/usr/local/bin/kubectl -n istio-system get pod | grep istio-init-crd-10 | awk '{print $3}'",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2020-12-16 10:13:01.570114",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": []
},
"role": "ks-istio",
"start": "2020-12-16T01:57:13.972902",
"task": "istio | Waiting for istio-init-crd-10",
"task_action": "command",
"task_args": "",
"task_path": "/kubesphere/installer/roles/ks-istio/tasks/main.yaml:48",
"task_uuid": "e270d635-7838-1e3b-110f-00000000001c",
"uuid": "7cc647c1-e7a3-4367-aea1-633481d3f55e"
},
"parent_uuid": "e270d635-7838-1e3b-110f-00000000001c",
"pid": 3131,
"runner_ident": "servicemesh",
"start_line": 132,
"stdout": "fatal: [localhost]: FAILED! => {\"attempts\": 90, \"changed\": true, \"cmd\": \"/usr/local/bin/kubectl -n istio-system get pod | grep istio-init-crd-10 | awk '{print $3}'\", \"delta\": \"0:00:00.124853\", \"end\": \"2020-12-16 10:13:01.694967\", \"rc\": 0, \"start\": \"2020-12-16 10:13:01.570114\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}",
"uuid": "7cc647c1-e7a3-4367-aea1-633481d3f55e"
}
******************************************************************************************************************************************************
The reinstall did not succeed.
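The servicemesh failure above is a timeout: after 90 attempts the grep for istio-init-crd-10 returned nothing, meaning the pod never appeared. A diagnostic sketch (the resource names are assumed from the grep pattern in the log):

# See what actually exists in istio-system and why the CRD init pod is missing
kubectl -n istio-system get pods
kubectl -n istio-system get jobs
# Describe the init jobs to surface scheduling or image-pull errors
kubectl -n istio-system describe jobs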
shaowenchen
Why did monitoring and servicemesh get reinstalled? Weren't you just reinstalling devops?
freemankevin
shaowenchen Following the instructions, this is what the ks-installer log output looked like after the restart.
shaowenchen
First check whether any component is abnormal and whether the UI is usable; devops looks like it came up. (A quick check sketch below.)
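A quick health sweep, as a sketch:

# List pod rows that are not Running or Completed across all namespaces
kubectl get pods -A | grep -Ev 'Running|Completed'

# Then confirm the devops workloads specifically
kubectl get pods -n kubesphere-devops-system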
freemankevin
The UI is accessible. It was like this before as well; then after I modified a plugin I could no longer get in, and it threw a pile of errors.
shaowenchen
Plugins should not be added or modified casually. Take a look at the 3.0 operations doc first: https://kubesphere.com.cn/forum/d/2408-kubesphere-devops-30 section 3.12
freemankevin
shaowenchen Right, at first I only wanted to upgrade the Jenkins agent, i.e. the nodejs version. After swapping it in I found it unusable, and Jenkins got broken as well. I couldn't fix it, so I planned to reinstall, which led to the errors above.