Great, very nice. It would also help to add how to uninstall the old 1.4 release once the upgrade to 1.6 has been tested and works.


    helm -n istio-system uninstall istio-init
    helm -n istio-system uninstall istio
    kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io istio-sidecar-injector
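
    To double-check that nothing from the old 1.4 release is left behind, a quick sanity check (not part of the original reply) could look like this:

    # remaining Helm releases in istio-system; istio and istio-init should be gone
    helm -n istio-system list
    # the sidecar injector webhook should no longer exist
    kubectl get mutatingwebhookconfigurations | grep istio
    # any leftover 1.4 pods still running in the namespace
    kubectl -n istio-system get pods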
    2 months later
    13 days later

    When will KubeSphere officially update Istio to a newer version?

    KubeSphere 3.1 will move to istio-1.6.10. Our development build already supports it, so you can try kubespheredev/ks-installer:latest; after switching to this image, the upgrade happens directly. Commands:

    kubectl -n kubesphere-system patch cc ks-installer --type merge --patch '{"status":{"servicemesh":{"status":"none"}}}'
    kubectl -n kubesphere-system set image deployment/ks-installer installer=kubespheredev/ks-installer
    kubectl -n kubesphere-system rollout restart deploy/ks-installer
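
    Once the installer pod restarts, one way to follow the upgrade is to tail its logs; the second command is only a suggested check and assumes the 1.6 control plane lands in istio-system under the default istiod Deployment name:

    # watch the installer run that performs the servicemesh upgrade
    kubectl -n kubesphere-system logs -f deploy/ks-installer
    # afterwards, confirm the new control plane is up (deployment name is an assumption)
    kubectl -n istio-system get deploy istiod -o wide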

      zackzhang Hi, I deployed with the kubespheredev/ks-installer:latest image, but the ks-core components fail to start, the multi-cluster components are abnormal as well, and pods started individually also have problems. It looks like it is caused by Istio's admission policy. Is any extra configuration needed for the deployment to work?

        weekyuan Could you provide more details?

        1. Which pods exactly are abnormal; their logs; describe the pods and check the corresponding events and failure reasons.
        2. The environment: multi-cluster or single-cluster, and whether any other operations were performed.

          zackzhang I first hit this when deploying multiple nodes directly from the config file; after installing All-in-One and then adding nodes, everything came up smoothly with no problems.

          Also, with the new image, logging in through LDAP authentication reports the following error. Please take a look!
          The page shows: Internal Server Error
          request to http://ks-apiserver.kubesphere-system.svc/kapis/iam.kubesphere.io/v1alpha2/users failed, reason: socket hang up

          Here is the error from ks-apiserver:
          2020/12/24 00:02:07 http: panic serving 192.168.17.21:51934: assignment to entry in nil map
          goroutine 4032 [running]:
          net/http.(*conn).serve.func1(0xc0005d0320)
              /opt/hostedtoolcache/go/1.13.15/x64/src/net/http/server.go:1795 +0x139
          panic(0x2d6b7e0, 0x38941e0)
              /opt/hostedtoolcache/go/1.13.15/x64/src/runtime/panic.go:679 +0x1b2
          kubesphere.io/kubesphere/pkg/apiserver/auditing.(*auditing).LogRequestObject(0xc000f4af90, 0xc000e00200, 0xc000aca000, 0x2b15c20)
              /home/runner/work/kubesphere/kubesphere/pkg/apiserver/auditing/types.go:187 +0x89a
          kubesphere.io/kubesphere/pkg/apiserver/filters.WithAuditing.func1(0x393b1a0, 0xc000bdc2a0, 0xc000e00200)
              /home/runner/work/kubesphere/kubesphere/pkg/apiserver/filters/auditing.go:51 +0x100
          net/http.HandlerFunc.ServeHTTP(0xc000f4afc0, 0x393b1a0, 0xc000bdc2a0, 0xc000e00200)
              /opt/hostedtoolcache/go/1.13.15/x64/src/net/http/server.go:2036 +0x44
          kubesphere.io/kubesphere/pkg/apiserver/filters.WithAuthorization.func1(0x393b1a0, 0xc000bdc2a0, 0xc000e00200)
              /home/runner/work/kubesphere/kubesphere/pkg/apiserver/filters/authorization.go:50 +0x37c
          net/http.HandlerFunc.ServeHTTP(0xc0000c9740, 0x393b1a0, 0xc000bdc2a0, 0xc000e00200)
              /opt/hostedtoolcache/go/1.13.15/x64/src/net/http/server.go:2036 +0x44
          kubesphere.io/kubesphere/pkg/apiserver/filters.WithMultipleClusterDispatcher.func1(0x393b1a0, 0xc000bdc2a0, 0xc000e00200)
              /home/runner/work/kubesphere/kubesphere/pkg/apiserver/filters/dispatch.go:43 +0xd9
          net/http.HandlerFunc.ServeHTTP(0xc000f4b830, 0x393b1a0, 0xc000bdc2a0, 0xc000e00200)
              /opt/hostedtoolcache/go/1.13.15/x64/src/net/http/server.go:2036 +0x44
          kubesphere.io/kubesphere/pkg/apiserver/filters.WithAuthentication.func1(0x393b1a0, 0xc000bdc2a0, 0xc000e00200)
              /home/runner/work/kubesphere/kubesphere/pkg/apiserver/filters/authentication.go:68 +0x5ce
          net/http.HandlerFunc.ServeHTTP(0xc0000c9800, 0x393b1a0, 0xc000bdc2a0, 0xc00221f200)
              /opt/hostedtoolcache/go/1.13.15/x64/src/net/http/server.go:2036 +0x44
          kubesphere.io/kubesphere/pkg/apiserver/filters.WithRequestInfo.func1(0x393b1a0, 0xc000bdc2a0, 0xc00221e000)
              /home/runner/work/kubesphere/kubesphere/pkg/apiserver/filters/requestinfo.go:67 +0x3c5
          net/http.HandlerFunc.ServeHTTP(0xc000f7a000, 0x393b1a0, 0xc000bdc2a0, 0xc00221e000)
              /opt/hostedtoolcache/go/1.13.15/x64/src/net/http/server.go:2036 +0x44
          net/http.serverHandler.ServeHTTP(0xc0009fc0e0, 0x393b1a0, 0xc000bdc2a0, 0xc00221e000)
              /opt/hostedtoolcache/go/1.13.15/x64/src/net/http/server.go:2831 +0xa4
          net/http.(*conn).serve(0xc0005d0320, 0x394da20, 0xc002a16480)
              /opt/hostedtoolcache/go/1.13.15/x64/src/net/http/server.go:1919 +0x875
          created by net/http.(*Server).Serve
              /opt/hostedtoolcache/go/1.13.15/x64/src/net/http/server.go:2957 +0x384
          2020/12/24 00:02:07 http: panic serving 192.168.17.21:51942: assignment to entry in nil map

            weekyuan The cause of this is that the http server failed when binding the IP and port; check whether 192.168.17.21:51934 is available and whether the port is already occupied.

              zackzhang Thanks for the reply. The error log above is from ks-apiserver, and the 192.168.17.21:51934 it refers to is actually the console pod's address. The console only listens on port 8000; nothing listens on 51934. In principle, whatever address ks-apiserver needs to bind should not be the console's IP.

              Also attached is the console error log. According to it, the login itself succeeds, but the call to ks-apiserver during the confirm step ends with a socket hang up:
              <-- POST /login 2020/12/24T11:15:43.055
              --> POST /login 302 88ms 59b 2020/12/24T11:15:43.143
              <-- GET /login/confirm 2020/12/24T11:15:43.169
              --> GET /login/confirm 200 2ms 11.97kb 2020/12/24T11:15:43.171
              <-- GET /login/confirm 2020/12/24T11:15:43.209
              --> GET /login/confirm 200 3ms 11.97kb 2020/12/24T11:15:43.212
              <-- GET /kapis/iam.kubesphere.io/v1alpha2/users/yuanyuan 2020/12/24T11:15:46.305
              --> GET /kapis/iam.kubesphere.io/v1alpha2/users/yuanyuan 200 6ms 15b 2020/12/24T11:15:46.311
              <-- GET / 2020/12/24T11:15:48.538
              UnauthorizedError: Not Login
              at Object.throw (/opt/kubesphere/console/server/server.js:23953:11)
              at getCurrentUser (/opt/kubesphere/console/server/server.js:7995:14)
              at renderView (/opt/kubesphere/console/server/server.js:93770:7)
              at dispatch (/opt/kubesphere/console/server/server.js:5198:32)
              at next (/opt/kubesphere/console/server/server.js:5199:18)
              at /opt/kubesphere/console/server/server.js:64227:16
              at dispatch (/opt/kubesphere/console/server/server.js:5198:32)
              at next (/opt/kubesphere/console/server/server.js:5199:18)
              at /opt/kubesphere/console/server/server.js:72222:37
              at dispatch (/opt/kubesphere/console/server/server.js:5198:32)
              at next (/opt/kubesphere/console/server/server.js:5199:18)
              at /opt/kubesphere/console/server/server.js:64227:16
              at dispatch (/opt/kubesphere/console/server/server.js:5198:32)
              at next (/opt/kubesphere/console/server/server.js:5199:18)
              at /opt/kubesphere/console/server/server.js:72222:37
              at dispatch (/opt/kubesphere/console/server/server.js:5198:32)
              --> GET / 302 2ms 43b 2020/12/24T11:15:48.540
              <-- GET /login 2020/12/24T11:15:48.541
              --> GET /login 200 9ms 11.92kb 2020/12/24T11:15:48.549
              <-- GET /kapis/iam.kubesphere.io/v1alpha2/users?email=yuanyuan-g%40360.cn 2020/12/24T11:15:49.171
              <-- GET /kapis/iam.kubesphere.io/v1alpha2/users/yuanyuan 2020/12/24T11:15:49.173
              --> GET /kapis/iam.kubesphere.io/v1alpha2/users?email=yuanyuan-g%40360.cn 200 7ms 15b 2020/12/24T11:15:49.178
              --> GET /kapis/iam.kubesphere.io/v1alpha2/users/yuanyuan 200 6ms 15b 2020/12/24T11:15:49.179
              <-- POST /login/confirm 2020/12/24T11:15:49.192
              FetchError: request to http://ks-apiserver.kubesphere-system.svc/kapis/iam.kubesphere.io/v1alpha2/users failed, reason: socket hang up
              at ClientRequest.<anonymous> (/opt/kubesphere/console/server/server.js:74611:11)
              at ClientRequest.emit (events.js:314:20)
              at Socket.socketOnEnd (_http_client.js:458:9)
              at Socket.emit (events.js:326:22)
              at endReadableNT (_stream_readable.js:1241:12)
              at processTicksAndRejections (internal/process/task_queues.js:84:21) {
              type: 'system',
              errno: 'ECONNRESET',
              code: 'ECONNRESET'
              }
              --> POST /login/confirm 500 5ms 193b 2020/12/24T11:15:49.197
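
              For what it's worth, one way to check whether ks-apiserver itself is resetting connections (independently of the console) is to call the service from a throwaway pod inside the cluster; even a 401 response would show the HTTP server answers instead of hanging up (the curl image is only an example):

              kubectl -n kubesphere-system run curl-test --rm -it --restart=Never \
                --image=curlimages/curl --command -- \
                curl -sv http://ks-apiserver.kubesphere-system.svc/kapis/iam.kubesphere.io/v1alpha2/users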

                13 days later

                weekyuan

                Change the image pull policy of ks-installer and ks-apiserver to Always, then:

                kubectl -n kubesphere-system patch cc ks-installer --type merge --patch '{"status":{"servicemesh":{"status":"none"}}}' 
                kubectl -n kubesphere-system rollout restart deploy/ks-installer
                kubectl -n kubesphere-system rollout restart deploy/ks-apiserver
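
                If you prefer to flip the pull policy from the command line instead of editing the two Deployments by hand, a JSON patch along these lines should work (it assumes the container to patch is the first one in each Deployment):

                kubectl -n kubesphere-system patch deploy ks-installer --type json \
                  -p '[{"op":"add","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"Always"}]'
                kubectl -n kubesphere-system patch deploy ks-apiserver --type json \
                  -p '[{"op":"add","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"Always"}]'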
                24 days later

                @zackzhang Hi, I changed the ks-installer image to kubespheredev/ks-installer:latest as you described, set the pull policy of the ks-installer and ks-apiserver images to Always, and ran the commands you listed. Now the ks-installer pod cannot start; it stays in CrashLoopBackOff and produces no logs.

                  dylan I just verified this image and it works fine.

                  1. Run kubectl describe deploy/ks-installer -n kubesphere-system and check for any hints.

                  2. Check the kubelet logs with journalctl -xe for errors.

                  3. Check the health of the nodes (the checks are collected in the sketch below).
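
                  Collected in one place, the checks might look like this (the --previous flag is only a suggestion for pulling logs from the crashed container attempt):

                  kubectl -n kubesphere-system describe deploy/ks-installer
                  kubectl -n kubesphere-system logs deploy/ks-installer --previous
                  journalctl -u kubelet -xe
                  kubectl get nodes -o wide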

                  dylan Thanks. I uninstalled KubeSphere, replaced the image directly in the YAML, and reinstalling worked. I will try the online replacement on a 3.0 installation later.

                  @zackzhang The error above was probably caused by copying the command from your post verbatim; as shown in the screenshot, it should be ks-installer.

                  One more question: if I use image: kubespheredev/ks-installer:latest, do other components such as ks-apiserver also need to switch to the dev images, or should they keep the original release images?

                    dylan Thanks for pointing that out; it has been fixed. Changing only ks-installer to the latest image is enough to complete the upgrade.
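
                    To confirm which image each component actually ended up with after the upgrade, you can read it back from the Deployments (just a quick check, not part of the original answer):

                    kubectl -n kubesphere-system get deploy ks-installer ks-apiserver \
                      -o custom-columns=NAME:.metadata.name,IMAGE:.spec.template.spec.containers[0].image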