huanggze
Still getting errors. This is what I see in the pod elasticsearch-logging-curator-elasticsearch-curator-157429m64sn.

    jerli curator can be ignored: it's the CronJob that deletes old logs on a schedule. As long as the curator runs starting the day after the fix no longer report errors, you're fine. How is the logging feature now?
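
    For example, to confirm the CronJob is still scheduled and to see what its latest run reported (namespace per the default install, pod name taken from above):

    kubectl get cronjob -n kubesphere-logging-system
    kubectl logs -n kubesphere-logging-system elasticsearch-logging-curator-elasticsearch-curator-157429m64sn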

      jerli OK. Sorry, I'm in a training session this afternoon; I'll reply to your email as soon as I can.

      3 months later
      7 months later

      huanggze Hi, my cluster, which I upgraded to 2.1.1 after adding a new node, is hitting the same problem. Could you help take a look?

        huanggze The logging discovery service won't start, and two istio services won't start either. I just disabled the logging feature, but istio still won't come up; after disabling istio too, I can't log in to the cluster anymore... Now docker on the master node won't restart either... I'll try rebooting the server.

        After the reboot, with logging and istio turned off, there seem to be two problems now:
        1. After logging in, this view keeps spinning and nothing renders;
        2. In some projects, clicking Redeploy has no effect.

          hetao

          Could you check what errors ks-apiserver in the kubesphere-system project is reporting?

            hetao There's a problem with the cluster network; the cluster IP is unreachable. Check the cluster status:

            kubectl get po -A -o wide
            kubectl get nodes -o wide
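
            To test whether a cluster IP answers at all, one rough check is to hit the kubernetes apiserver Service from a node (the IP placeholder below is whatever kubectl get svc reports, not a fixed value):

            kubectl get svc kubernetes -n default
            curl -k https://<cluster-ip-of-kubernetes-svc>:443/version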

              hongming

              # kubectl get po -A -o wide |grep -v Running
              NAMESPACE                      NAME                                                              READY   STATUS             RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
              demo-anxi-tea                  service-cxj9e4-6ff554d6c6-56gwj                                   0/1     ImagePullBackOff   0          24h     10.233.92.152   node3    <none>           <none>
              demo-yanxuan                   service-q150tc-758bb7cf86-mtks9                                   0/1     ImagePullBackOff   0          24h     10.233.92.197   node3    <none>           <none>
              demo-yinfeng                   auth-hmwbrx-659c88b57b-5l6k4                                      0/1     ImagePullBackOff   0          24h     10.233.92.165   node3    <none>           <none>
              demo-yinfeng                   gateway-tjhkr1-7b4cf66964-z9rwf                                   0/1     ImagePullBackOff   0          24h     10.233.92.145   node3    <none>           <none>
              demo-yinfeng                   track-sbi554-7f8876686d-lth5h                                     0/1     ImagePullBackOff   0          24h     10.233.92.166   node3    <none>           <none>
              demo-zhibao                    auth-pujrt3-78d64cc7c7-dd66q                                      0/1     ImagePullBackOff   0          24h     10.233.92.173   node3    <none>           <none>
              demo-zhibao                    gateway-5dbd87cbb7-6kwbn                                          0/1     ImagePullBackOff   0          24h     10.233.92.179   node3    <none>           <none>
              demo-zhibao                    zhibao-e9r2q8-679bd5c6-f2sv9                                      0/1     ImagePullBackOff   0          24h     10.233.92.187   node3    <none>           <none>
              gago-sonarqube                 sonarqube-1-v7-8699bc689c-hhhwv                                   1/2     CrashLoopBackOff   289        24h     10.233.92.220   node3    <none>           <none>
              istio-system                   jaeger-collector-79b8876d7c-mwckz                                 0/1     CrashLoopBackOff   28         125m    10.233.96.48    node2    <none>           <none>
              istio-system                   jaeger-collector-8698b58b55-8hfp7                                 0/1     CrashLoopBackOff   28         125m    10.233.96.246   node2    <none>           <none>
              istio-system                   jaeger-query-6f9d8c8cdb-ccsv5                                     1/2     CrashLoopBackOff   29         126m    10.233.96.186   node2    <none>           <none>
              istio-system                   jaeger-query-7f9c7c84c-9s469                                      1/2     CrashLoopBackOff   28         125m    10.233.96.154   node2    <none>           <none>
              jl3rd                          service-1-5c59fc669b-jv77g                                        0/1     ErrImagePull       0          24h     10.233.92.180   node3    <none>           <none>
              jl3rd                          web-1-568cc584bd-wtx9m                                            0/1     ImagePullBackOff   0          24h     10.233.92.198   node3    <none>           <none>
              kubesphere-alerting-system     alerting-db-ctrl-job-2xv2h                                        0/1     Completed          0          94m     10.233.96.125   node2    <none>           <none>
              kubesphere-alerting-system     alerting-db-init-job-szvkk                                        0/1     Completed          0          94m     10.233.96.182   node2    <none>           <none>
              kubesphere-alerting-system     notification-db-ctrl-job-vwqr5                                    0/1     Completed          0          94m     10.233.96.184   node2    <none>           <none>
              kubesphere-alerting-system     notification-db-init-job-pksn4                                    0/1     Completed          0          94m     10.233.96.219   node2    <none>           <none>
              kubesphere-devops-system       ks-devops-db-ctrl-job-hkqzb                                       0/1     Completed          0          96m     10.233.96.174   node2    <none>           <none>
              kubesphere-devops-system       ks-devops-db-init-job-hfnll                                       0/1     Completed          0          97m     10.233.96.211   node2    <none>           <none>
              kubesphere-logging-system      elasticsearch-logging-curator-elasticsearch-curator-159961hjsjl   0/1     Completed          0          7h23m   10.233.96.1     node2    <none>           <none>
              
              # kubectl get nodes -o wide
              NAME     STATUS   ROLES    AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
              master   Ready    master   287d   v1.16.7   192.168.8.4    <none>        CentOS Linux 7 (Core)   3.10.0-693.2.2.el7.x86_64    docker://19.3.5
              node1    Ready    worker   287d   v1.16.7   192.168.8.5    <none>        CentOS Linux 7 (Core)   3.10.0-693.2.2.el7.x86_64    docker://19.3.5
              node2    Ready    worker   287d   v1.16.7   192.168.8.6    <none>        CentOS Linux 7 (Core)   3.10.0-693.2.2.el7.x86_64    docker://19.3.5
              node3    Ready    worker   30h    v1.16.7   192.168.8.15   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://19.3.12

              I've set the logging and istio plugins to false.

              @hetao The components under kube-system and kubesphere-system are all still healthy. Next, look at the logs of the ks-apigateway / ks-apiserver / ks-account components; the logs pasted above contain a lot of connection refused and connection reset by peer errors, so check whether the node network is working properly.
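
              A quick node-network check (assuming the default Calico CNI the installer ships) is to verify that the CNI and kube-proxy pods are healthy and that a pod IP on another node answers, e.g. one taken from the kubectl get po -o wide output above:

              kubectl -n kube-system get po -o wide | grep -E 'calico|kube-proxy'
              ping -c 3 10.233.96.121   # a pod IP on node2, pinged from another node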

                hongming

                # kubectl logs ks-apigateway-94687746b-89h9n -n kubesphere-system |tail -n 100 |grep ERROR
                2020/09/09 08:35:22 [ERROR] failed to copy buffer:  read tcp 10.233.96.121:2018->10.233.96.201:33476: use of closed network connection
                2020/09/09 08:35:25 [ERROR] failed to copy buffer:  read tcp 10.233.96.121:2018->10.233.96.201:33700: use of closed network connection
                2020/09/09 08:35:27 [ERROR] failed to copy buffer:  read tcp 10.233.96.121:2018->10.233.96.201:33834: use of closed network connection
                2020/09/09 08:36:26 [ERROR] failed to copy buffer:  read tcp 10.233.96.121:2018->10.233.96.201:33946: use of closed network connection
                2020/09/09 08:36:28 [ERROR] failed to copy buffer:  read tcp 10.233.96.121:2018->10.233.96.201:35558: use of closed network connection
                2020/09/09 08:36:34 [ERROR] failed to copy buffer:  read tcp 10.233.96.121:2018->10.233.96.201:35630: use of closed network connection
                2020/09/09 08:37:26 [ERROR] failed to copy buffer:  read tcp 10.233.96.121:2018->10.233.96.201:35804: use of closed network connection
                2020/09/09 08:37:30 [ERROR] failed to copy buffer:  read tcp 10.233.96.121:2018->10.233.96.201:37170: use of closed network connection
                2020/09/09 08:38:30 [ERROR] failed to copy buffer:  readfrom tcp 10.233.96.121:2018->10.233.96.201:37306: read tcp 10.233.96.121:35332->10.233.0.1:443: use of closed network connection
                
                # kubectl logs ks-apiserver-74b4876f95-fcc52 -n kubesphere-system |tail -n 100
                E0909 07:11:12.649654       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://10.233.0.1:443/api/v1/configmaps?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.650768       1 reflector.go:134] kubesphere.io/kubesphere/pkg/client/informers/externalversions/factory.go:120: Failed to list *v1alpha2.ServicePolicy: Get https://10.233.0.1:443/apis/servicemesh.kubesphere.io/v1alpha2/servicepolicies?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.651777       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Role: Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/roles?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.652874       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.Ingress: Get https://10.233.0.1:443/apis/extensions/v1beta1/ingresses?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.653897       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://10.233.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.654998       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ControllerRevision: Get https://10.233.0.1:443/apis/apps/v1/controllerrevisions?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.656008       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.DaemonSet: Get https://10.233.0.1:443/apis/apps/v1/daemonsets?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.657119       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.RoleBinding: Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/rolebindings?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.658163       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Deployment: Get https://10.233.0.1:443/apis/apps/v1/deployments?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.659282       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://10.233.0.1:443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.660340       1 reflector.go:134] kubesphere.io/kubesphere/pkg/client/informers/externalversions/factory.go:120: Failed to list *v1alpha2.Strategy: Get https://10.233.0.1:443/apis/servicemesh.kubesphere.io/v1alpha2/strategies?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.661385       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ClusterRoleBinding: Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.662483       1 reflector.go:134] sigs.k8s.io/application/pkg/client/informers/externalversions/factory.go:117: Failed to list *v1beta1.Application: Get https://10.233.0.1:443/apis/app.k8s.io/v1beta1/applications?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.663487       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://10.233.0.1:443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.664595       1 reflector.go:134] kubesphere.io/kubesphere/pkg/client/informers/externalversions/factory.go:120: Failed to list *v1alpha1.Workspace: Get https://10.233.0.1:443/apis/tenant.kubesphere.io/v1alpha1/workspaces?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.665609       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://10.233.0.1:443/api/v1/secrets?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.666674       1 reflector.go:134] kubesphere.io/kubesphere/pkg/client/informers/externalversions/factory.go:120: Failed to list *v1alpha1.S2iBinary: Get https://10.233.0.1:443/apis/devops.kubesphere.io/v1alpha1/s2ibinaries?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:12.667738       1 reflector.go:134] github.com/kubesphere/s2ioperator/pkg/client/informers/externalversions/factory.go:116: Failed to list *v1alpha1.S2iBuilderTemplate: Get https://10.233.0.1:443/apis/devops.kubesphere.io/v1alpha1/s2ibuildertemplates?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.624755       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: Get https://10.233.0.1:443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.638788       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.CronJob: Get https://10.233.0.1:443/apis/batch/v1beta1/cronjobs?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.639815       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v2beta2.HorizontalPodAutoscaler: Get https://10.233.0.1:443/apis/autoscaling/v2beta2/horizontalpodautoscalers?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.640842       1 reflector.go:134] github.com/kubesphere/s2ioperator/pkg/client/informers/externalversions/factory.go:116: Failed to list *v1alpha1.S2iBuilder: Get https://10.233.0.1:443/apis/devops.kubesphere.io/v1alpha1/s2ibuilders?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.641864       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ResourceQuota: Get https://10.233.0.1:443/api/v1/resourcequotas?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.642908       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: Get https://10.233.0.1:443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.643965       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Job: Get https://10.233.0.1:443/apis/batch/v1/jobs?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.645002       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ClusterRole: Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.646053       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Namespace: Get https://10.233.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.647095       1 reflector.go:134] github.com/kubesphere/s2ioperator/pkg/client/informers/externalversions/factory.go:116: Failed to list *v1alpha1.S2iRun: Get https://10.233.0.1:443/apis/devops.kubesphere.io/v1alpha1/s2iruns?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.648199       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://10.233.0.1:443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.649208       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Pod: Get https://10.233.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.650336       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://10.233.0.1:443/api/v1/configmaps?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.651384       1 reflector.go:134] kubesphere.io/kubesphere/pkg/client/informers/externalversions/factory.go:120: Failed to list *v1alpha2.ServicePolicy: Get https://10.233.0.1:443/apis/servicemesh.kubesphere.io/v1alpha2/servicepolicies?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.652397       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Role: Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/roles?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.653470       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.Ingress: Get https://10.233.0.1:443/apis/extensions/v1beta1/ingresses?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.654498       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://10.233.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.655559       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ControllerRevision: Get https://10.233.0.1:443/apis/apps/v1/controllerrevisions?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.656641       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.DaemonSet: Get https://10.233.0.1:443/apis/apps/v1/daemonsets?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.657724       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.RoleBinding: Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/rolebindings?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.658774       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Deployment: Get https://10.233.0.1:443/apis/apps/v1/deployments?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.659837       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://10.233.0.1:443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.660900       1 reflector.go:134] kubesphere.io/kubesphere/pkg/client/informers/externalversions/factory.go:120: Failed to list *v1alpha2.Strategy: Get https://10.233.0.1:443/apis/servicemesh.kubesphere.io/v1alpha2/strategies?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.661974       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ClusterRoleBinding: Get https://10.233.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.663035       1 reflector.go:134] sigs.k8s.io/application/pkg/client/informers/externalversions/factory.go:117: Failed to list *v1beta1.Application: Get https://10.233.0.1:443/apis/app.k8s.io/v1beta1/applications?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.664104       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://10.233.0.1:443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.665148       1 reflector.go:134] kubesphere.io/kubesphere/pkg/client/informers/externalversions/factory.go:120: Failed to list *v1alpha1.Workspace: Get https://10.233.0.1:443/apis/tenant.kubesphere.io/v1alpha1/workspaces?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.666184       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://10.233.0.1:443/api/v1/secrets?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.667289       1 reflector.go:134] kubesphere.io/kubesphere/pkg/client/informers/externalversions/factory.go:120: Failed to list *v1alpha1.S2iBinary: Get https://10.233.0.1:443/apis/devops.kubesphere.io/v1alpha1/s2ibinaries?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:13.668351       1 reflector.go:134] github.com/kubesphere/s2ioperator/pkg/client/informers/externalversions/factory.go:116: Failed to list *v1alpha1.S2iBuilderTemplate: Get https://10.233.0.1:443/apis/devops.kubesphere.io/v1alpha1/s2ibuildertemplates?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
                E0909 07:11:17.080510       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:serviceaccount:kubesphere-system:kubesphere" cannot list resource "replicasets" in API group "apps" at the cluster scope
                E0909 07:31:01.680206       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:34036-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                E0909 07:32:09.264097       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:34034-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                E0909 07:32:42.992128       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:34038-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                E0909 07:33:09.232241       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:34040-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                E0909 07:33:43.088089       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:34026-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                E0909 07:34:53.168116       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:34032-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                E0909 07:37:50.540461       1 v2.go:105] websocket: close 1001 (going away)
                W0909 07:37:58.542511       1 terminal.go:133] 1Process exited
                E0909 07:42:15.021883       1 v2.go:105] websocket: close 1001 (going away)
                W0909 07:42:23.023352       1 terminal.go:133] 1Process exited
                E0909 07:43:35.536174       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:34030-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                E0909 07:45:26.320262       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:34028-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                E0909 07:46:26.352092       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:38520-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                E0909 07:47:32.272127       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:40288-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                E0909 07:52:01.072128       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:47680-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                E0909 07:52:53.328232       1 v2.go:105] websocket: close 1006 (abnormal closure): unexpected EOF
                E0909 07:53:01.104153       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:40838-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                W0909 07:53:01.329598       1 terminal.go:133] 1Process exited
                E0909 07:57:38.032086       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:49300-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                E0909 07:59:38.736128       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:47676-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                E0909 08:00:38.704101       1 metrics.go:706] status: 500,message: {
                 "message": "unable to read LDAP response packet: read tcp 10.233.96.232:49298-\u003e10.233.70.66:389: read: connection reset by peer"
                }
                
                # kubectl logs ks-account-7f67d5966d-6td8z -n kubesphere-system |tail -n 100 
                W0909 07:13:07.174268       1 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
                I0909 07:13:07.789468       1 server.go:113] Server listening on 0.0.0.0:9090 
                E0909 07:31:01.679962       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:34036->10.233.70.66:389: read: connection reset by peer
                E0909 07:31:01.680002       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:34036->10.233.70.66:389: read: connection reset by peer
                E0909 07:32:09.263903       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:34034->10.233.70.66:389: read: connection reset by peer
                E0909 07:32:09.263925       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:34034->10.233.70.66:389: read: connection reset by peer
                E0909 07:32:42.991906       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:34038->10.233.70.66:389: read: connection reset by peer
                E0909 07:32:42.991930       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:34038->10.233.70.66:389: read: connection reset by peer
                E0909 07:33:09.231945       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:34040->10.233.70.66:389: read: connection reset by peer
                E0909 07:33:09.231974       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:34040->10.233.70.66:389: read: connection reset by peer
                E0909 07:33:43.087867       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:34026->10.233.70.66:389: read: connection reset by peer
                E0909 07:33:43.087890       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:34026->10.233.70.66:389: read: connection reset by peer
                E0909 07:34:53.167920       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:34032->10.233.70.66:389: read: connection reset by peer
                E0909 07:34:53.167942       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:34032->10.233.70.66:389: read: connection reset by peer
                E0909 07:43:35.535953       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:34030->10.233.70.66:389: read: connection reset by peer
                E0909 07:43:35.535978       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:34030->10.233.70.66:389: read: connection reset by peer
                E0909 07:45:26.320046       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:34028->10.233.70.66:389: read: connection reset by peer
                E0909 07:45:26.320069       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:34028->10.233.70.66:389: read: connection reset by peer
                E0909 07:46:26.351887       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:38520->10.233.70.66:389: read: connection reset by peer
                E0909 07:46:26.351907       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:38520->10.233.70.66:389: read: connection reset by peer
                E0909 07:47:32.271931       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:40288->10.233.70.66:389: read: connection reset by peer
                E0909 07:47:32.271955       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:40288->10.233.70.66:389: read: connection reset by peer
                E0909 07:52:01.071933       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:47680->10.233.70.66:389: read: connection reset by peer
                E0909 07:52:01.071954       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:47680->10.233.70.66:389: read: connection reset by peer
                E0909 07:53:01.103948       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:40838->10.233.70.66:389: read: connection reset by peer
                E0909 07:53:01.103968       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:40838->10.233.70.66:389: read: connection reset by peer
                E0909 07:57:38.031909       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:49300->10.233.70.66:389: read: connection reset by peer
                E0909 07:57:38.031925       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:49300->10.233.70.66:389: read: connection reset by peer
                E0909 07:59:38.735929       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:47676->10.233.70.66:389: read: connection reset by peer
                E0909 07:59:38.735951       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:47676->10.233.70.66:389: read: connection reset by peer
                E0909 08:00:38.703909       1 im.go:586] search user unable to read LDAP response packet: read tcp 10.233.96.232:49298->10.233.70.66:389: read: connection reset by peer
                E0909 08:00:38.703932       1 im.go:312] unable to read LDAP response packet: read tcp 10.233.96.232:49298->10.233.70.66:389: read: connection reset by peer
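
                Both error patterns above point at the pod network: 10.233.0.1:443 is the kubernetes apiserver Service, and 10.233.70.66:389 is presumably the openldap backend that ks-account queries. A rough connectivity check, reusing the addresses from these logs, could be:

                kubectl get svc kubernetes -n default
                kubectl -n kubesphere-system get po -o wide | grep ldap
                curl -v telnet://10.233.70.66:389   # plain TCP connect test to the LDAP port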

                What I did this afternoon, in order:
                Restarting docker on the master failed, so I rebooted the master server. Logging and istio were still abnormal after that, so I ran

                kubectl edit cm -n kubesphere-system ks-installer

                to disable logging and istio.
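
                For reference, in the 2.1.x ks-installer ConfigMap the toggles live in the embedded ks-config.yaml, and the edit amounts to flipping the enabled flags. Roughly (section names can differ slightly between releases, so treat these keys as an assumption and match what your ConfigMap actually contains):

                logging:
                  enabled: False
                servicemesh:
                  enabled: False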