ks-console occasionally fails to log in: entering the username and password and clicking Login gives no response at all.

I tried resetting the password by running kubectl apply with the following manifest:

apiVersion: iam.kubesphere.io/v1alpha2
kind: User
metadata:
  annotations:
    iam.kubesphere.io/password-encrypted: "true"
  finalizers:
  - finalizers.kubesphere.io/users
  labels:
    kubefed.io/managed: "false"
  name: admin
spec:
  email: admin@kubesphere.io
  password: $2a$10$uJd.JOmYNTeDcBeN9idI0e7lVh1YolZ3LosKUW6MDCU5oHmGc8lHG
status:
  lastTransitionTime: "2020-10-12T08:48:03Z"
  state: Active

Restarting ks-console and ks-controller-manager did not help either.
ks-controller-manager logs:

[root@jenkins kubersphere]# kubectl -n kubesphere-system logs -f ks-controller-manager-6c98f848-57r72
W1125 17:10:17.329271       1 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1125 17:10:17.674866       1 server.go:175] setting up manager
I1125 17:10:18.145592       1 user_controller.go:134] Setting up event handlers
I1125 17:10:18.145768       1 certificatesigningrequest_controller.go:90] Setting up event handlers
I1125 17:10:18.145871       1 clusterrolebinding_controller.go:94] Setting up event handlers
I1125 17:10:18.145985       1 globalrole_controller.go:95] Setting up event handlers
I1125 17:10:18.154604       1 workspacerole.go:106] Setting up event handlers
I1125 17:10:18.154687       1 globalrolebinding_controller.go:101] Setting up event handlers
I1125 17:10:18.154772       1 workspacerolebinding_controller.go:106] Setting up event handlers
I1125 17:10:18.154854       1 workspacetemplate_controller.go:122] Setting up event handlers
I1125 17:10:18.154945       1 server.go:225] Starting cache resource from apiserver...
I1125 17:10:18.155045       1 server.go:236] Starting the controllers.
I1125 17:10:18.155087       1 leaderelection.go:242] attempting to acquire leader lease  kubesphere-system/ks-controller-manager-leader-election...
E1125 17:13:25.723293       1 user_controller.go:146] Failed to enqueue login object, error: Operation cannot be fulfilled on users.iam.kubesphere.io "admin": the object has been modified; please apply your changes to the latest version and try again
E1125 17:15:49.648252       1 user_controller.go:146] Failed to enqueue login object, error: Operation cannot be fulfilled on users.iam.kubesphere.io "admin": the object has been modified; please apply your changes to the latest version and try again
E1125 17:16:03.387568       1 user_controller.go:146] Failed to enqueue login object, error: Operation cannot be fulfilled on users.iam.kubesphere.io "admin": the object has been modified; please apply your changes to the latest version and try again
E1125 17:24:10.637100       1 user_controller.go:146] Failed to enqueue login object, error: Operation cannot be fulfilled on users.iam.kubesphere.io "admin": the object has been modified; please apply your changes to the latest version and try again

ks-console logs:

# kubectl -n kubesphere-system logs -f ks-console-786b9846d4-dc9cs
{ UnauthorizedError: Not Login
    at Object.throw (/opt/kubesphere/console/server/server.js:31701:11)
    at getCurrentUser (/opt/kubesphere/console/server/server.js:9037:14)
    at renderView (/opt/kubesphere/console/server/server.js:23231:46)
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
    at next (/opt/kubesphere/console/server/server.js:6871:18)
    at /opt/kubesphere/console/server/server.js:70183:16
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
    at next (/opt/kubesphere/console/server/server.js:6871:18)
    at /opt/kubesphere/console/server/server.js:77986:37
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
    at next (/opt/kubesphere/console/server/server.js:6871:18)
    at /opt/kubesphere/console/server/server.js:70183:16
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
    at next (/opt/kubesphere/console/server/server.js:6871:18)
    at /opt/kubesphere/console/server/server.js:77986:37
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32) message: 'Not Login' }
  --> GET / 302 1ms 43b 2020/11/25T17:47:37.744
  <-- GET /login 2020/11/25T17:47:37.744
  --> GET /login 200 28ms 14.82kb 2020/11/25T17:47:37.772
  <-- GET / 2020/11/25T17:47:47.743
{ UnauthorizedError: Not Login
    at Object.throw (/opt/kubesphere/console/server/server.js:31701:11)
    at getCurrentUser (/opt/kubesphere/console/server/server.js:9037:14)
    at renderView (/opt/kubesphere/console/server/server.js:23231:46)
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
    at next (/opt/kubesphere/console/server/server.js:6871:18)
    at /opt/kubesphere/console/server/server.js:70183:16
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
    at next (/opt/kubesphere/console/server/server.js:6871:18)
    at /opt/kubesphere/console/server/server.js:77986:37
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
    at next (/opt/kubesphere/console/server/server.js:6871:18)
    at /opt/kubesphere/console/server/server.js:70183:16
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
    at next (/opt/kubesphere/console/server/server.js:6871:18)
    at /opt/kubesphere/console/server/server.js:77986:37
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32) message: 'Not Login' }
  --> GET / 302 1ms 43b 2020/11/25T17:47:47.744
  <-- GET /login 2020/11/25T17:47:47.744
  --> GET /login 200 44ms 14.82kb 2020/11/25T17:47:47.788
  <-- GET / 2020/11/25T17:47:57.743
{ UnauthorizedError: Not Login
    at Object.throw (/opt/kubesphere/console/server/server.js:31701:11)
    at getCurrentUser (/opt/kubesphere/console/server/server.js:9037:14)
    at renderView (/opt/kubesphere/console/server/server.js:23231:46)
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
    at next (/opt/kubesphere/console/server/server.js:6871:18)
    at /opt/kubesphere/console/server/server.js:70183:16
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
    at next (/opt/kubesphere/console/server/server.js:6871:18)
    at /opt/kubesphere/console/server/server.js:77986:37
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
    at next (/opt/kubesphere/console/server/server.js:6871:18)
    at /opt/kubesphere/console/server/server.js:70183:16
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
    at next (/opt/kubesphere/console/server/server.js:6871:18)
    at /opt/kubesphere/console/server/server.js:77986:37
    at dispatch (/opt/kubesphere/console/server/server.js:6870:32) message: 'Not Login' }
  --> GET / 302 1ms 43b 2020/11/25T17:47:57.744
  <-- GET /login 2020/11/25T17:47:57.744
  --> GET /login 200 6ms 14.82kb 2020/11/25T17:47:57.750
^C
  • willqy The root cause is that the node clocks in the cluster are out of sync; fix that first. The few EOF errors in the logs were most likely emitted while redis-ha-haproxy was restarting abnormally.

    willqy A failed login should show an error message. Check the browser console for errors, and the ks-console / ks-apiserver logs.

    After testing, it takes 3 to 5 attempts before I can log in. When login fails, the login page shows no prompt or reaction at all; the screenshot below shows the page after a failed login.

    After clearing the browser cache, the first login attempt reports a wrong password, the next few attempts get no response at all, and then login suddenly succeeds. This cycle keeps repeating.

      willqy Check the ks-apiserver logs. Is the redis pod in kubesphere-system healthy? Is there a reverse proxy in front of the console (try accessing it directly via NodePort)?

        No proxy is involved; ks-console is already exposed via NodePort, and all pods are running normally.

        [root@jenkins kubersphere]# kubectl get pods -A |grep kubesphere   
        kubesphere-controls-system     default-http-backend-857d7b6856-s9dtb                1/1     Running   0          9d
        kubesphere-controls-system     kubectl-admin-58f985d8f6-5lgj9                       1/1     Running   1          33d
        kubesphere-controls-system     kubesphere-router-demo-namespace-559594cddb-4qrdq    1/1     Running   1          9d
        kubesphere-devops-system       ks-jenkins-54455f5db8-glhbs                          1/1     Running   1          33d
        kubesphere-devops-system       s2ioperator-0                                        1/1     Running   2          33d
        kubesphere-devops-system       uc-jenkins-update-center-cd9464fff-r5txz             1/1     Running   0          9d
        kubesphere-monitoring-system   alertmanager-main-0                                  2/2     Running   0          9d
        kubesphere-monitoring-system   alertmanager-main-1                                  2/2     Running   4          35d
        kubesphere-monitoring-system   alertmanager-main-2                                  2/2     Running   4          33d
        kubesphere-monitoring-system   kube-state-metrics-95c974544-8fjd8                   3/3     Running   3          33d
        kubesphere-monitoring-system   node-exporter-mdqvj                                  2/2     Running   4          35d
        kubesphere-monitoring-system   node-exporter-p8glr                                  2/2     Running   4          35d
        kubesphere-monitoring-system   node-exporter-s8ffl                                  2/2     Running   4          35d
        kubesphere-monitoring-system   node-exporter-vsjkp                                  2/2     Running   6          33d
        kubesphere-monitoring-system   notification-manager-deployment-7c8df68d94-bdm25     1/1     Running   1          33d
        kubesphere-monitoring-system   notification-manager-deployment-7c8df68d94-k6c2l     1/1     Running   2          35d
        kubesphere-monitoring-system   notification-manager-operator-6958786cd6-lqtkq       2/2     Running   8          35d
        kubesphere-monitoring-system   prometheus-k8s-0                                     3/3     Running   36         33d
        kubesphere-monitoring-system   prometheus-k8s-1                                     3/3     Running   0          9d
        kubesphere-monitoring-system   prometheus-operator-84d58bf775-g7hv8                 2/2     Running   0          9d
        kubesphere-system              etcd-65796969c7-g4ds6                                1/1     Running   14         35d
        kubesphere-system              ks-apiserver-6b49b49dd5-4p66x                        1/1     Running   7          9d
        kubesphere-system              ks-apiserver-6b49b49dd5-kph6g                        1/1     Running   7          9d
        kubesphere-system              ks-apiserver-6b49b49dd5-zrp8b                        1/1     Running   7          9d
        kubesphere-system              ks-console-786b9846d4-dc9cs                          1/1     Running   0          55m
        kubesphere-system              ks-console-786b9846d4-lnjtr                          1/1     Running   0          55m
        kubesphere-system              ks-console-786b9846d4-s9r28                          1/1     Running   0          55m
        kubesphere-system              ks-controller-manager-6c98f848-57r72                 1/1     Running   0          57m
        kubesphere-system              ks-controller-manager-6c98f848-jblbp                 1/1     Running   0          57m
        kubesphere-system              ks-controller-manager-6c98f848-x8dvs                 1/1     Running   0          57m
        kubesphere-system              ks-installer-5f9ff5bb56-2nv98                        1/1     Running   1          33d
        kubesphere-system              minio-7bfdb5968b-f6c67                               1/1     Running   1          33d
        kubesphere-system              mysql-7f64d9f584-zwzmz                               1/1     Running   1          33d
        kubesphere-system              openldap-0                                           1/1     Running   1          33d
        kubesphere-system              openldap-1                                           1/1     Running   1          33d
        kubesphere-system              redis-ha-haproxy-5c6559d588-2f6x8                    1/1     Running   3          33d
        kubesphere-system              redis-ha-haproxy-5c6559d588-4x69j                    1/1     Running   287        34d
        kubesphere-system              redis-ha-haproxy-5c6559d588-rrxv7                    1/1     Running   0          9d
        kubesphere-system              redis-ha-server-0                                    2/2     Running   0          67m
        kubesphere-system              redis-ha-server-1                                    2/2     Running   0          67m
        kubesphere-system              redis-ha-server-2                                    2/2     Running   0          66m

        ks-apiserver logs:

        
        [root@jenkins kubersphere]# kubectl -n kubesphere-system logs --tail 1000 ks-apiserver-6b49b49dd5-4p66x   
        W1116 09:48:36.460603       1 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
        I1116 09:48:37.082483       1 apiserver.go:300] Start cache objects
        I1116 09:48:38.133390       1 apiserver.go:502] Finished caching objects
        I1116 09:48:38.133418       1 apiserver.go:232] Start listening on :9090
        E1117 10:31:46.276599       1 token.go:65] EOF
        E1117 10:31:46.283219       1 jwt_token.go:45] EOF
        E1117 10:31:46.311069       1 authentication.go:60] Unable to authenticate the request due to error: EOF
        I1125 10:39:39.376402       1 apiserver.go:539] 100.87.223.26 - "GET /kapis/devops.kubesphere.io/v1alpha2/devops/demo-devops4t6ff/pipelines/demo-pipeline/sonarstatus HTTP/1.1" 404 19 10ms
        I1125 11:57:40.355449       1 apiserver.go:539] 100.111.156.128 - "GET /kapis/devops.kubesphere.io/v1alpha2/devops/demo-devops4t6ff/pipelines/demo-pipeline/sonarstatus HTTP/1.1" 404 19 0ms
        E1125 17:03:46.992627       1 token.go:142] EOF
        E1125 17:03:46.992655       1 token.go:103] EOF
        E1125 17:03:46.992665       1 handler.go:299] EOF
        I1125 17:03:46.992679       1 apiserver.go:539] 100.87.223.23 - "POST /oauth/token HTTP/1.1" 500 28 88ms
        E1125 17:07:12.416702       1 handler.go:275] incorrect password
        I1125 17:07:12.546243       1 apiserver.go:539] 100.105.225.38 - "POST /oauth/token HTTP/1.1" 401 32 198ms
        E1125 17:42:15.555425       1 handler.go:275] incorrect password
        I1125 17:42:15.587489       1 apiserver.go:539] 100.105.225.51 - "POST /oauth/token HTTP/1.1" 401 32 103ms
        
        
        
        
        [root@jenkins kubersphere]# kubectl -n kubesphere-system logs --tail 1000 ks-apiserver-6b49b49dd5-kph6g 
        W1116 09:47:22.676506       1 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
        I1116 09:47:24.961818       1 apiserver.go:300] Start cache objects
        I1116 09:47:27.201329       1 apiserver.go:502] Finished caching objects
        I1116 09:47:27.201343       1 apiserver.go:232] Start listening on :9090
        E1117 09:34:18.153713       1 handler.go:275] incorrect password
        I1117 09:34:18.275595       1 apiserver.go:539] 100.86.135.210 - "POST /oauth/token HTTP/1.1" 401 32 195ms
        E1117 10:31:47.034315       1 token.go:65] EOF
        E1117 10:31:47.034333       1 jwt_token.go:45] EOF
        E1117 10:31:47.034342       1 authentication.go:60] Unable to authenticate the request due to error: EOF
        E1125 10:37:29.935984       1 handler.go:275] incorrect password
        I1125 10:37:30.030289       1 apiserver.go:539] 100.111.156.128 - "POST /oauth/token HTTP/1.1" 401 32 94ms
        E1125 14:07:19.876748       1 jwt.go:51] token is expired by 15m43s
        E1125 14:07:19.876762       1 token.go:57] token is expired by 15m43s
        E1125 14:07:19.876765       1 jwt_token.go:45] token is expired by 15m43s
        E1125 14:07:19.876769       1 authentication.go:60] Unable to authenticate the request due to error: token is expired by 15m43s
        E1125 17:03:45.124594       1 token.go:142] EOF
        E1125 17:03:45.124614       1 token.go:103] EOF
        E1125 17:03:45.124630       1 handler.go:299] EOF
        I1125 17:03:45.124639       1 apiserver.go:539] 100.87.223.23 - "POST /oauth/token HTTP/1.1" 500 28 70ms
        
        
        
        
        [root@jenkins kubersphere]# kubectl -n kubesphere-system logs --tail 1000 ks-apiserver-6b49b49dd5-zrp8b  
        W1116 09:40:41.882950       1 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
        I1116 09:40:44.370243       1 apiserver.go:300] Start cache objects
        I1116 09:40:45.982826       1 apiserver.go:502] Finished caching objects
        I1116 09:40:45.982861       1 apiserver.go:232] Start listening on :9090
        E1117 09:28:08.691316       1 jwt.go:51] token is not valid yet
        E1117 09:28:08.725781       1 token.go:57] token is not valid yet
        E1117 09:28:08.725824       1 jwt_token.go:45] token is not valid yet
        E1117 09:28:08.726391       1 authentication.go:60] Unable to authenticate the request due to error: token is not valid yet
        E1117 09:28:26.694903       1 jwt.go:51] token is not valid yet
        E1117 09:28:26.695101       1 token.go:57] token is not valid yet
        E1117 09:28:26.695185       1 jwt_token.go:45] token is not valid yet
        E1117 09:28:26.695266       1 authentication.go:60] Unable to authenticate the request due to error: token is not valid yet
        E1117 09:28:54.160965       1 jwt.go:51] token is not valid yet
        E1117 09:28:54.160985       1 token.go:57] token is not valid yet
        E1117 09:28:54.160993       1 jwt_token.go:45] token is not valid yet
        E1117 09:28:54.161002       1 authentication.go:60] Unable to authenticate the request due to error: token 

          This looks like the same issue. Check whether your redis pods are healthy and whether any of them have restarted abnormally.

          willqy

          1. The EOF (redis) errors in the logs: redis-ha-haproxy-5c6559d588-4x69j should show the corresponding abnormal logs.
          2. The jwt.go:51] token is not valid yet errors in the ks-apiserver logs are caused by unsynchronized node clocks in the cluster; please check that.

          You can configure the allowed clock drift via the maximumClockSkew setting in kubesphere-config.
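To illustrate why unsynchronized clocks produce both the "token is not valid yet" errors (issued-at in the future) and the "token is expired" errors seen in the logs, here is a minimal stdlib sketch of the time checks a JWT validator performs. The function and parameter names are illustrative, not KubeSphere's actual code; max_clock_skew plays the role of the maximumClockSkew setting:

```python
import time

# Illustrative sketch of JWT time-claim validation (not KubeSphere's code).
# iat/exp are Unix timestamps from the token; max_clock_skew widens the
# tolerance window the same way the maximumClockSkew setting does.
def check_token_times(iat, exp, now=None, max_clock_skew=10.0):
    now = time.time() if now is None else now
    if iat > now + max_clock_skew:
        # The issuing node's clock is ahead of ours by more than the skew.
        return "token is not valid yet"
    if exp < now - max_clock_skew:
        return "token is expired"
    return "ok"

# A node 60s ahead issues a token; a validator with a 10s tolerance
# rejects it as "not valid yet" even though the token is fresh.
```

Syncing the node clocks (e.g. with chrony/NTP) removes the offset at the source; raising maximumClockSkew only papers over small drifts.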

          But redis is healthy right now, without any error logs at all.

            willqy The root cause is that the node clocks in the cluster are out of sync; fix that first. The few EOF errors in the logs were most likely emitted while redis-ha-haproxy was restarting abnormally.

              hongming

              Thanks! Confirmed it was a clock synchronization problem; after syncing the node clocks, login works normally again.