Original supplementary logs:
/var/log/messages
Dec 27 15:32:12 k8s-master-01 kubelet: I1227 15:32:12.850288 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:32:14 k8s-master-01 etcd: 2019-12-27 07:32:14.038036 I | mvcc: store.index: compact 1417630
Dec 27 15:32:14 k8s-master-01 etcd: 2019-12-27 07:32:14.043797 I | mvcc: finished scheduled compaction at 1417630 (took 3.447693ms)
Dec 27 15:32:22 k8s-master-01 kubelet: I1227 15:32:22.884407 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:32:32 k8s-master-01 kubelet: I1227 15:32:32.912608 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:32:42 k8s-master-01 kubelet: I1227 15:32:42.942571 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:32:50 k8s-master-01 kubelet: I1227 15:32:50.829231 16620 container_manager_linux.go:457] [ContainerManager]: Discovered runtime cgroups name: /systemd/system.slice
Dec 27 15:32:52 k8s-master-01 kubelet: I1227 15:32:52.981012 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:33:03 k8s-master-01 kubelet: I1227 15:33:03.018194 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:33:13 k8s-master-01 kubelet: I1227 15:33:13.050434 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:33:23 k8s-master-01 kubelet: I1227 15:33:23.087517 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:33:33 k8s-master-01 kubelet: I1227 15:33:33.115882 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:33:43 k8s-master-01 kubelet: I1227 15:33:43.149908 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:33:53 k8s-master-01 kubelet: I1227 15:33:53.177542 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:34:03 k8s-master-01 kubelet: I1227 15:34:03.207790 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:34:13 k8s-master-01 kubelet: I1227 15:34:13.247204 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:34:23 k8s-master-01 kubelet: I1227 15:34:23.280909 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:34:33 k8s-master-01 kubelet: I1227 15:34:33.328219 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:34:43 k8s-master-01 kubelet: I1227 15:34:43.365624 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:34:53 k8s-master-01 kubelet: I1227 15:34:53.415273 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:35:03 k8s-master-01 kubelet: I1227 15:35:03.465946 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:35:13 k8s-master-01 kubelet: I1227 15:35:13.496189 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:35:23 k8s-master-01 kubelet: I1227 15:35:23.524022 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:35:33 k8s-master-01 kubelet: I1227 15:35:33.557483 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:35:43 k8s-master-01 kubelet: I1227 15:35:43.590727 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:35:53 k8s-master-01 kubelet: I1227 15:35:53.619431 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:36:03 k8s-master-01 kubelet: I1227 15:36:03.655416 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:36:13 k8s-master-01 kubelet: I1227 15:36:13.687568 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:36:23 k8s-master-01 kubelet: I1227 15:36:23.718562 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:36:33 k8s-master-01 kubelet: I1227 15:36:33.748267 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:36:43 k8s-master-01 kubelet: I1227 15:36:43.782118 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:36:53 k8s-master-01 kubelet: I1227 15:36:53.816435 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:37:03 k8s-master-01 kubelet: I1227 15:37:03.852241 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:37:13 k8s-master-01 kubelet: I1227 15:37:13.880346 16620 setters.go:73] Using node IP: "192.168.23.46"
Dec 27 15:37:14 k8s-master-01 etcd: 2019-12-27 07:37:14.067666 I | mvcc: store.index: compact 1418469
Dec 27 15:37:14 k8s-master-01 etcd: 2019-12-27 07:37:14.071479 I | mvcc: finished scheduled compaction at 1418469 (took 2.539292ms)
kubectl logs etcd-555778878f-n9s94 -n kubesphere-system
2019-12-21 13:14:26.434478 W | pkg/flags: unrecognized environment variable ETCD_PORT_2379_TCP_ADDR=10.223.18.7
2019-12-21 13:14:26.434509 W | pkg/flags: unrecognized environment variable ETCD_PORT_2379_TCP_PORT=2379
2019-12-21 13:14:26.434520 W | pkg/flags: unrecognized environment variable ETCD_PORT=tcp://10.223.18.7:2379
2019-12-21 13:14:26.434558 I | etcdmain: etcd Version: 3.2.18
2019-12-21 13:14:26.434581 I | etcdmain: Git SHA: eddf599c6
2019-12-21 13:14:26.434589 I | etcdmain: Go Version: go1.8.7
2019-12-21 13:14:26.434596 I | etcdmain: Go OS/Arch: linux/amd64
2019-12-21 13:14:26.434608 I | etcdmain: setting maximum number of CPUs to 16, total number of available CPUs is 16
2019-12-21 13:14:26.437347 I | embed: listening for peers on http://localhost:2380
2019-12-21 13:14:26.437504 I | embed: listening for client requests on 0.0.0.0:2379
2019-12-21 13:14:26.468192 I | etcdserver: name = default
2019-12-21 13:14:26.468244 I | etcdserver: data dir = /data
2019-12-21 13:14:26.468262 I | etcdserver: member dir = /data/member
2019-12-21 13:14:26.468273 I | etcdserver: heartbeat = 100ms
2019-12-21 13:14:26.468284 I | etcdserver: election = 1000ms
2019-12-21 13:14:26.468297 I | etcdserver: snapshot count = 100000
2019-12-21 13:14:26.468331 I | etcdserver: advertise client URLs = http://etcd.kubesphere-system.svc:2379
2019-12-21 13:14:26.468349 I | etcdserver: initial advertise peer URLs = http://localhost:2380
2019-12-21 13:14:26.468367 I | etcdserver: initial cluster = default=http://localhost:2380
2019-12-21 13:14:26.495570 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
2019-12-21 13:14:26.495781 I | raft: 8e9e05c52164694d became follower at term 0
2019-12-21 13:14:26.495813 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2019-12-21 13:14:26.495824 I | raft: 8e9e05c52164694d became follower at term 1
2019-12-21 13:14:26.533891 W | auth: simple token is not cryptographically signed
2019-12-21 13:14:26.557816 I | etcdserver: starting server... [version: 3.2.18, cluster version: to_be_decided]
2019-12-21 13:14:26.559024 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
2019-12-21 13:14:26.559820 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
2019-12-21 13:14:27.400091 I | raft: 8e9e05c52164694d is starting a new election at term 1
2019-12-21 13:14:27.400168 I | raft: 8e9e05c52164694d became candidate at term 2
2019-12-21 13:14:27.400210 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
2019-12-21 13:14:27.400248 I | raft: 8e9e05c52164694d became leader at term 2
2019-12-21 13:14:27.400268 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
2019-12-21 13:14:27.401042 I | etcdserver: setting up the initial cluster version to 3.2
2019-12-21 13:14:27.401170 I | embed: ready to serve client requests
2019-12-21 13:14:27.401438 I | etcdserver: published {Name:default ClientURLs:[http://etcd.kubesphere-system.svc:2379]} to cluster cdf818194e3a8c32
2019-12-21 13:14:27.401890 N | embed: serving insecure client requests on [::]:2379, this is strongly discouraged!
2019-12-21 13:14:27.406834 N | etcdserver/membership: set the initial cluster version to 3.2
2019-12-21 13:14:27.406969 I | etcdserver/api: enabled capabilities for version 3.2
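The etcd pod log above shows a clean single-node startup; the only notable entries are the W/N-level lines (unrecognized ETCD_* environment variables and the insecure-serving notice). etcd's capnslog format puts the level in the third field, so those lines can be isolated with a small awk filter. A sketch over an illustrative sample; the helper name is my own:

```shell
# etcd (capnslog) lines look like: "2019-12-21 13:14:26.434478 W | pkg/flags: ..."
# The third field is the level; keep only warnings (W) and notices (N).
filter_etcd_warnings() {
  awk '$3 == "W" || $3 == "N"' "$1"
}

# Illustrative sample (trimmed copies of the lines above):
cat > /tmp/etcd.sample <<'EOF'
2019-12-21 13:14:26.434478 W | pkg/flags: unrecognized environment variable ETCD_PORT_2379_TCP_ADDR=10.223.18.7
2019-12-21 13:14:26.434558 I | etcdmain: etcd Version: 3.2.18
2019-12-21 13:14:27.401890 N | embed: serving insecure client requests on [::]:2379, this is strongly discouraged!
EOF
filter_etcd_warnings /tmp/etcd.sample
```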
kubectl logs ks-installer-7987c659d6-5fkf9 -n kubesphere-system
task servicemesh status is successful
task metrics-server status is successful
total: 8 completed:7
**************************************************
task monitoring status is successful
task notification status is successful
task alerting status is successful
task logging status is successful
task openpitrix status is successful
task servicemesh status is successful
task metrics-server status is successful
total: 8 completed:7
**************************************************
task monitoring status is successful
task notification status is successful
task devops status is successful
task alerting status is successful
task logging status is successful
task openpitrix status is successful
task servicemesh status is successful
task metrics-server status is successful
total: 8 completed:8
**************************************************
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.23.46:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After logging into the console, please check the
monitoring status of service components in
the "Cluster Status". If the service is not
ready, please wait patiently. You can start
to use when all components are ready.
2. Please modify the default password after login.
#####################################################
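The ks-installer log above polls task status until all 8 components complete ("total: 8 completed:8"). Since the counter line repeats on every poll, progress can be tracked by extracting the most recent one. A small sketch; the helper name and sample are illustrative:

```shell
# Print the most recent "total: N completed:M" progress line from an
# installer log excerpt.
last_progress() {
  grep -E '^total: [0-9]+ completed:[0-9]+' "$1" | tail -n1
}

# Illustrative sample (mirrors the polling output above):
cat > /tmp/installer.sample <<'EOF'
total: 8 completed:7
task devops status is successful
total: 8 completed:8
EOF
last_progress /tmp/installer.sample
```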
kubectl logs ks-controller-manager-6dd9b76d75-2s5lz -n kubesphere-system
E1221 13:19:51.813583 1 routers.go:183] open /etc/kubesphere/ingress-controller: no such file or directory
E1221 13:19:51.814273 1 routers.go:61] open /etc/kubesphere/ingress-controller: no such file or directory
W1221 13:19:51.822007 1 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1221 13:19:51.892511 1 server.go:132] setting up manager
I1221 13:19:51.971339 1 server.go:138] setting up scheme
I1221 13:19:51.971758 1 server.go:143] Setting up controllers
I1221 13:19:51.975233 1 server.go:152] Starting the Cmd.
I1221 13:19:52.391972 1 application_controller.go:157] starting application controller
I1221 13:19:52.392067 1 s2ibinary_controller.go:149] starting s2ibinary controller
I1221 13:19:52.392101 1 virtualservice_controller.go:153] starting virtualservice controller
I1221 13:19:52.391979 1 job_controller.go:102] starting job controller
I1221 13:19:52.392140 1 destinationrule_controller.go:156] starting destinationrule controller
I1221 13:19:52.392156 1 s2irun_controller.go:158] starting s2irun controller
E1221 13:20:12.793669 1 namespace_controller.go:376] create runtime, namespace: kube-public, error: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.223.2.156:9103: i/o timeout"
E1221 13:20:13.794325 1 namespace_controller.go:376] create runtime, namespace: kubesphere-system, error: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.223.2.156:9103: i/o timeout"
E1221 13:20:14.901337 1 namespace_controller.go:403] create runtime, namespace: kubesphere-controls-system, error: rpc error: code = InvalidArgument desc = unsupported parameter [provider] value [kubernetes]
E1221 13:20:15.908782 1 namespace_controller.go:403] create runtime, namespace: kubesphere-logging-system, error: rpc error: code = InvalidArgument desc = unsupported parameter [provider] value [kubernetes]
E1221 13:20:18.260604 1 namespace_controller.go:419] create runtime, namespace: kube-system, error: rpc error: code = InvalidArgument desc = create resources failed: rpc error: code = AlreadyExists desc = namespace [kube-system] exists: namespace [kube-system] annotations openpitrix_runtime:runtime-9WPO9rEBzLox already exist
E1222 01:00:03.096421 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1576976400": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1576976400
E1222 01:00:17.912020 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1576976400": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1576976400
E1223 01:00:01.772803 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577062800": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577062800
E1224 01:00:05.226515 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577149200": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577149200
E1225 01:00:08.956404 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577235600": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577235600
E1225 01:00:11.053974 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577235600": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577235600
E1226 01:00:01.004930 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577322000": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577322000
E1226 09:48:17.304047 1 namespace_controller.go:231] resource name may not be empty
E1226 09:48:18.321299 1 namespace_controller.go:231] resource name may not be empty
E1226 09:50:56.641065 1 namespace_controller.go:231] resource name may not be empty
E1226 09:51:42.363459 1 namespace_controller.go:231] resource name may not be empty
E1226 09:51:43.377675 1 namespace_controller.go:231] resource name may not be empty
E1226 09:52:34.495460 1 namespace_controller.go:231] resource name may not be empty
E1226 09:52:35.505588 1 namespace_controller.go:231] resource name may not be empty
E1226 09:52:36.523131 1 namespace_controller.go:324] creating role binding namespace: tfsmy-platform, role binding: viewer, error: rolebindings.rbac.authorization.k8s.io "viewer" already exists
E1226 09:53:03.156908 1 namespace_controller.go:231] resource name may not be empty
E1226 09:54:38.987856 1 namespace_controller.go:231] resource name may not be empty
E1226 09:54:40.001096 1 namespace_controller.go:231] resource name may not be empty
E1226 09:54:41.028606 1 namespace_controller.go:276] creating role binding namespace: tfsmy-tools, role binding: admin, error: rolebindings.rbac.authorization.k8s.io "admin" already exists
E1227 01:00:04.516548 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577408400": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577408400
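The recurring errors in this controller-manager log reduce to a handful of distinct code paths (namespace_controller.go and job_controller.go). Summarizing the log by klog source location (file.go:line) makes that repetition obvious at a glance. A sketch; the helper name and the sample excerpt are illustrative:

```shell
# Count error lines per klog source location (file.go:line) to see which
# code paths are failing repeatedly.
summarize_error_sources() {
  grep -E '^E[0-9]{4} ' "$1" | grep -oE '[a-z_]+\.go:[0-9]+' | sort | uniq -c | sort -rn
}

# Illustrative sample (trimmed copies of the lines above):
cat > /tmp/ksctl.sample <<'EOF'
E1221 13:19:51.813583 1 routers.go:183] open /etc/kubesphere/ingress-controller: no such file or directory
E1226 09:48:17.304047 1 namespace_controller.go:231] resource name may not be empty
E1226 09:48:18.321299 1 namespace_controller.go:231] resource name may not be empty
EOF
summarize_error_sources /tmp/ksctl.sample
```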
Logs from the other controller-manager instance:
E1221 13:19:51.881377 1 routers.go:183] open /etc/kubesphere/ingress-controller: no such file or directory
E1221 13:19:51.883039 1 routers.go:61] open /etc/kubesphere/ingress-controller: no such file or directory
W1221 13:19:51.887141 1 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1221 13:19:51.889646 1 server.go:132] setting up manager
I1221 13:19:52.027085 1 server.go:138] setting up scheme
I1221 13:19:52.027680 1 server.go:143] Setting up controllers
I1221 13:19:52.031687 1 server.go:152] Starting the Cmd.
I1221 13:19:52.462070 1 s2ibinary_controller.go:149] starting s2ibinary controller
I1221 13:19:52.462259 1 s2irun_controller.go:158] starting s2irun controller
I1221 13:19:52.462382 1 virtualservice_controller.go:153] starting virtualservice controller
I1221 13:19:52.462420 1 destinationrule_controller.go:156] starting destinationrule controller
I1221 13:19:52.462444 1 application_controller.go:157] starting application controller
I1221 13:19:52.462469 1 job_controller.go:102] starting job controller
E1221 13:19:52.760489 1 namespace_controller.go:276] creating role binding namespace: kubesphere-devops-system, role binding: admin, error: rolebindings.rbac.authorization.k8s.io "admin" already exists
E1221 13:20:08.812064 1 namespace_controller.go:376] create runtime, namespace: istio-system, error: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.223.2.156:9103: connect: connection refused"
E1221 13:20:09.812701 1 namespace_controller.go:376] create runtime, namespace: kubesphere-alerting-system, error: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.223.2.156:9103: connect: connection refused"
E1221 13:20:12.828050 1 namespace_controller.go:376] create runtime, namespace: default, error: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.223.2.156:9103: connect: connection refused"
E1221 13:20:13.828872 1 namespace_controller.go:376] create runtime, namespace: kube-node-lease, error: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.223.2.156:9103: connect: connection refused"
E1221 13:20:15.037279 1 namespace_controller.go:403] create runtime, namespace: kubesphere-system, error: rpc error: code = InvalidArgument desc = unsupported parameter [provider] value [kubernetes]
E1221 13:20:16.098848 1 namespace_controller.go:403] create runtime, namespace: kubesphere-monitoring-system, error: rpc error: code = InvalidArgument desc = unsupported parameter [provider] value [kubernetes]
E1221 13:20:18.298762 1 namespace_controller.go:419] create runtime, namespace: kubesphere-controls-system, error: rpc error: code = InvalidArgument desc = create resources failed: rpc error: code = AlreadyExists desc = namespace [kubesphere-controls-system] exists: namespace [kubesphere-controls-system] annotations openpitrix_runtime:runtime-KVKjL9AnYqVY already exist
E1221 13:20:40.851379 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "hyperpitrix-release-app-job": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: openpitrix-system, name: hyperpitrix-release-app-job
E1223 01:00:01.772582 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577062800": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577062800
E1223 01:00:04.433311 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577062800": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577062800
E1224 01:00:05.230900 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577149200": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577149200
E1224 01:00:07.910746 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577149200": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577149200
E1225 01:00:08.960110 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577235600": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577235600
E1225 01:00:11.054061 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577235600": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577235600
E1226 01:00:01.012279 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577322000": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577322000
E1226 01:00:01.058497 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577322000": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577322000
E1226 01:00:04.261002 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577322000": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577322000
E1226 09:48:18.304236 1 namespace_controller.go:231] resource name may not be empty
E1226 09:48:19.359253 1 namespace_controller.go:324] creating role binding namespace: tfsmy-nodejs, role binding: viewer, error: rolebindings.rbac.authorization.k8s.io "viewer" already exists
E1226 09:50:55.684309 1 namespace_controller.go:231] resource name may not be empty
E1226 09:50:56.712067 1 namespace_controller.go:231] resource name may not be empty
E1226 09:50:57.739800 1 namespace_controller.go:419] create runtime, namespace: tfsmy-frontend, error: rpc error: code = InvalidArgument desc = create resources failed: rpc error: code = AlreadyExists desc = namespace [tfsmy-frontend] exists: namespace [tfsmy-frontend] annotations openpitrix_runtime:runtime-gwyL1xp9rXEJ already exist
E1226 09:51:43.366690 1 namespace_controller.go:220] roles.rbac.authorization.k8s.io "operator" already exists
E1226 09:51:44.390731 1 namespace_controller.go:276] creating role binding namespace: tfsmy-springboot, role binding: admin, error: rolebindings.rbac.authorization.k8s.io "admin" already exists
E1226 09:52:35.500132 1 namespace_controller.go:220] roles.rbac.authorization.k8s.io "operator" already exists
E1226 09:52:36.521752 1 namespace_controller.go:276] creating role binding namespace: tfsmy-platform, role binding: admin, error: rolebindings.rbac.authorization.k8s.io "admin" already exists
E1226 09:53:03.165850 1 namespace_controller.go:220] roles.rbac.authorization.k8s.io "operator" already exists
E1226 09:53:04.183823 1 namespace_controller.go:276] creating role binding namespace: tfsmy-service, role binding: admin, error: rolebindings.rbac.authorization.k8s.io "admin" already exists
E1226 09:54:39.986617 1 namespace_controller.go:231] resource name may not be empty
E1227 01:00:02.206368 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577408400": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577408400
E1227 01:00:04.523228 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577408400": the object has been modified; please apply your changes to the latest version and try again; make job revision failed, namespace: kubesphere-logging-system, name: elasticsearch-logging-curator-elasticsearch-curator-1577408400
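Both controller-manager instances log the same nightly "object has been modified" conflict against the elasticsearch-curator jobs, which is most likely an optimistic-concurrency retry (two writers updating the same Job object) rather than data loss. To see which jobs are actually affected, the distinct job names can be extracted from the conflict lines. A sketch; the helper name and the sample file are illustrative:

```shell
# List the distinct jobs.batch names mentioned in conflict errors.
list_conflicted_jobs() {
  grep -oE 'jobs\.batch "[^"]+"' "$1" | sort -u
}

# Illustrative sample (trimmed copies of the lines above):
cat > /tmp/conflicts.sample <<'EOF'
E1227 01:00:02.206368 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577408400": the object has been modified
E1227 01:00:04.523228 1 job_controller.go:172] Operation cannot be fulfilled on jobs.batch "elasticsearch-logging-curator-elasticsearch-curator-1577408400": the object has been modified
EOF
list_conflicted_jobs /tmp/conflicts.sample
```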