Is the logging switch in the ks-installer ConfigMap enable-only, i.e. it can be turned on but not off?
I set it to false and also tried 0, but the pods stayed exactly as they were.
One more question:
I have disabled Istio. When I restart the logging components, does Istio have to be enabled? Logging was originally installed while Istio was still on, so the sidecar has presumably already been injected into those components. Do I need to delete the whole logging namespace and reinstall it?
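For reference, on KubeSphere 2.1.x the pluggable components are toggled with a boolean `enabled` field in the ks-installer ConfigMap rather than 0/1, and ks-installer only acts on a flag when it is switched on; flipping it back to false does not uninstall workloads that were already deployed, which matches what you are seeing. A minimal sketch, assuming the default `kubesphere-system/ks-installer` location and the usual `ks-config.yaml` data key (names can differ between releases):

```
# Open the installer ConfigMap for editing (default location in 2.1.x)
kubectl edit cm -n kubesphere-system ks-installer

# Inside ks-config.yaml the logging switch is a boolean, e.g.:
#
#   logging:
#     enabled: false      # true/false, not 0/1
```

On the Istio question: whether anything needs to be reinstalled depends on whether the logging pods actually carry an injected sidecar. A generic way to check is to list the containers of each pod; an `istio-proxy` entry means the sidecar is there. If it is, deleting the whole namespace is usually unnecessary, since turning injection off and restarting those workloads re-creates them without the sidecar:

```
# List the containers in each logging pod; "istio-proxy" means the sidecar was injected
kubectl get pods -n kubesphere-logging-system \
  -o custom-columns='POD:.metadata.name,CONTAINERS:.spec.containers[*].name'
```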
jerli, could you provide a TeamViewer session so we can take a look remotely? kubesphere@yunify.com
Has this issue been resolved?
```
# kubectl get po -A -o wide |grep -v Running
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-anxi-tea service-cxj9e4-6ff554d6c6-56gwj 0/1 ImagePullBackOff 0 24h 10.233.92.152 node3 <none> <none>
demo-yanxuan service-q150tc-758bb7cf86-mtks9 0/1 ImagePullBackOff 0 24h 10.233.92.197 node3 <none> <none>
demo-yinfeng auth-hmwbrx-659c88b57b-5l6k4 0/1 ImagePullBackOff 0 24h 10.233.92.165 node3 <none> <none>
demo-yinfeng gateway-tjhkr1-7b4cf66964-z9rwf 0/1 ImagePullBackOff 0 24h 10.233.92.145 node3 <none> <none>
demo-yinfeng track-sbi554-7f8876686d-lth5h 0/1 ImagePullBackOff 0 24h 10.233.92.166 node3 <none> <none>
demo-zhibao auth-pujrt3-78d64cc7c7-dd66q 0/1 ImagePullBackOff 0 24h 10.233.92.173 node3 <none> <none>
demo-zhibao gateway-5dbd87cbb7-6kwbn 0/1 ImagePullBackOff 0 24h 10.233.92.179 node3 <none> <none>
demo-zhibao zhibao-e9r2q8-679bd5c6-f2sv9 0/1 ImagePullBackOff 0 24h 10.233.92.187 node3 <none> <none>
gago-sonarqube sonarqube-1-v7-8699bc689c-hhhwv 1/2 CrashLoopBackOff 289 24h 10.233.92.220 node3 <none> <none>
istio-system jaeger-collector-79b8876d7c-mwckz 0/1 CrashLoopBackOff 28 125m 10.233.96.48 node2 <none> <none>
istio-system jaeger-collector-8698b58b55-8hfp7 0/1 CrashLoopBackOff 28 125m 10.233.96.246 node2 <none> <none>
istio-system jaeger-query-6f9d8c8cdb-ccsv5 1/2 CrashLoopBackOff 29 126m 10.233.96.186 node2 <none> <none>
istio-system jaeger-query-7f9c7c84c-9s469 1/2 CrashLoopBackOff 28 125m 10.233.96.154 node2 <none> <none>
jl3rd service-1-5c59fc669b-jv77g 0/1 ErrImagePull 0 24h 10.233.92.180 node3 <none> <none>
jl3rd web-1-568cc584bd-wtx9m 0/1 ImagePullBackOff 0 24h 10.233.92.198 node3 <none> <none>
kubesphere-alerting-system alerting-db-ctrl-job-2xv2h 0/1 Completed 0 94m 10.233.96.125 node2 <none> <none>
kubesphere-alerting-system alerting-db-init-job-szvkk 0/1 Completed 0 94m 10.233.96.182 node2 <none> <none>
kubesphere-alerting-system notification-db-ctrl-job-vwqr5 0/1 Completed 0 94m 10.233.96.184 node2 <none> <none>
kubesphere-alerting-system notification-db-init-job-pksn4 0/1 Completed 0 94m 10.233.96.219 node2 <none> <none>
kubesphere-devops-system ks-devops-db-ctrl-job-hkqzb 0/1 Completed 0 96m 10.233.96.174 node2 <none> <none>
kubesphere-devops-system ks-devops-db-init-job-hfnll 0/1 Completed 0 97m 10.233.96.211 node2 <none> <none>
kubesphere-logging-system elasticsearch-logging-curator-elasticsearch-curator-159961hjsjl 0/1 Completed 0 7h23m 10.233.96.1 node2 <none> <none>
```
```
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master Ready master 287d v1.16.7 192.168.8.4 <none> CentOS Linux 7 (Core) 3.10.0-693.2.2.el7.x86_64 docker://19.3.5
node1 Ready worker 287d v1.16.7 192.168.8.5 <none> CentOS Linux 7 (Core) 3.10.0-693.2.2.el7.x86_64 docker://19.3.5
node2 Ready worker 287d v1.16.7 192.168.8.6 <none> CentOS Linux 7 (Core) 3.10.0-693.2.2.el7.x86_64 docker://19.3.5
node3 Ready worker 30h v1.16.7 192.168.8.15 <none> CentOS Linux 7 (Core) 3.10.0-957.21.3.el7.x86_64 docker://19.3.12
```
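One observation on the output above: all of the ImagePullBackOff/ErrImagePull pods are scheduled on node3, which only joined about 30 hours ago, so the pull failures are likely node-local (registry access, credentials, or proxy configuration on that node) rather than a KubeSphere issue. A generic way to see the exact pull error, using one of the failing pods from the listing above:

```
# Check the Events section at the bottom for the exact image pull error
kubectl describe pod -n demo-anxi-tea service-cxj9e4-6ff554d6c6-56gwj

# Optionally, try pulling the same image directly on node3 to rule out
# registry or credential problems on that node:
# docker pull <image>    # <image> is the image name shown in the events
```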
I have set the logging and istio plugins to false.
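After changing those flags, you can confirm whether ks-installer actually picked up the new settings (and see what it is currently doing, e.g. with the jaeger pods above) by following the installer logs. A sketch, assuming the installer Deployment is named ks-installer in kubesphere-system, which is the default:

```
# Follow the ks-installer logs to confirm the new logging/istio settings were reconciled
kubectl logs -n kubesphere-system deploy/ks-installer -f --tail=100
```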