• Development
  • [Live Stream] A Hands-On Guide to Setting Up a Local KubeSphere Front-End and Back-End Development Environment

zwkdhm Try accessing the API from inside the cluster to see whether it is reachable, and check the ks-console logs.
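
A minimal sketch of both checks (the throwaway curl pod and its image are assumptions, not part of the original setup; the URL is the oauth config endpoint that ks-console requests at login):

    # probe ks-apiserver from inside the cluster with a throwaway pod
    kubectl -n kubesphere-system run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
        curl -v http://ks-apiserver.kubesphere-system.svc/kapis/config.kubesphere.io/v1alpha2/configs/oauth

    # tail the ks-console logs
    kubectl -n kubesphere-system logs -f deploy/ks-console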

    Jeff, I added the --method inject-tcp flag to telepresence and it seems to work now. What is the reason? Is it because of the headless svc?
    root@k8s-01:~# kubectl get svc -A

    NAMESPACE                      NAME                                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
    default                        kubernetes                                ClusterIP   10.233.0.1      <none>        443/TCP                        190d
    kube-system                    coredns                                   ClusterIP   10.233.0.3      <none>        53/UDP,53/TCP,9153/TCP         190d
    kube-system                    etcd                                      ClusterIP   None            <none>        2379/TCP                       190d
    kube-system                    kube-controller-manager-svc               ClusterIP   None            <none>        10252/TCP                      190d
    kube-system                    kube-scheduler-svc                        ClusterIP   None            <none>        10251/TCP                      190d
    kube-system                    kubelet                                   ClusterIP   None            <none>        10250/TCP,10255/TCP,4194/TCP   190d
    kube-system                    metrics-server                            ClusterIP   10.233.24.198   <none>        443/TCP                        190d
    kubesphere-controls-system     default-http-backend                      ClusterIP   10.233.47.72    <none>        80/TCP                         190d
    kubesphere-monitoring-system   alertmanager-main                         ClusterIP   10.233.44.235   <none>        9093/TCP                       190d
    kubesphere-monitoring-system   alertmanager-operated                     ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP     190d
    kubesphere-monitoring-system   kube-state-metrics                        ClusterIP   None            <none>        8443/TCP,9443/TCP              190d
    kubesphere-monitoring-system   node-exporter                             ClusterIP   None            <none>        9100/TCP                       190d
    kubesphere-monitoring-system   notification-manager-controller-metrics   ClusterIP   10.233.53.154   <none>        8443/TCP                       190d
    kubesphere-monitoring-system   notification-manager-svc                  ClusterIP   10.233.9.248    <none>        19093/TCP                      190d
    kubesphere-monitoring-system   prometheus-k8s                            ClusterIP   10.233.48.130   <none>        9090/TCP                       190d
    kubesphere-monitoring-system   prometheus-operated                       ClusterIP   None            <none>        9090/TCP                       190d
    kubesphere-monitoring-system   prometheus-operator                       ClusterIP   None            <none>        8443/TCP                       190d
    kubesphere-system              ks-apiserver                              ClusterIP   10.233.21.82    <none>        80/TCP                         190d
    kubesphere-system              ks-console                                NodePort    10.233.24.152   <none>        80:30880/TCP                   190d
    kubesphere-system              ks-controller-manager                     ClusterIP   10.233.35.26    <none>        443/TCP                        190d
    kubesphere-system              openldap                                  ClusterIP   None            <none>        389/TCP                        190d
    kubesphere-system              redis                                     ClusterIP   10.233.59.1     <none>        6379/TCP                       190d
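
    One plausible explanation (consistent with telepresence's own warning quoted further down in this thread): the default vpn-tcp method does not route headless services, i.e. those with CLUSTER-IP None such as openldap above, unless they are named explicitly with --also-proxy, whereas inject-tcp intercepts the swapped process's network calls directly. A sketch of both variants:

    # keep the default vpn-tcp method but name the headless service explicitly
    sudo telepresence --namespace kubesphere-system --swap-deployment ks-apiserver \
        --also-proxy openldap.kubesphere-system.svc

    # or switch methods, as in the post above
    sudo telepresence --namespace kubesphere-system --swap-deployment ks-apiserver \
        --method inject-tcp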

    The earlier ks-console logs:

    {"log":"{ FetchError: request to http://ks-apiserver.kubesphere-system.svc/kapis/config.kubesphere.io/v1alpha2/configs/oauth failed, reason: socket hang up\n","stream":"stderr","time":"2021-08-12T08:17:25.154628598Z"}
    {"log":"    at ClientRequest.\u003canonymous\u003e (/opt/kubesphere/console/server/server.js:80604:11)\n","stream":"stderr","time":"2021-08-12T08:17:25.15465047Z"}
    {"log":"    at emitOne (events.js:116:13)\n","stream":"stderr","time":"2021-08-12T08:17:25.154656974Z"}
    {"log":"    at ClientRequest.emit (events.js:211:7)\n","stream":"stderr","time":"2021-08-12T08:17:25.154661551Z"}
    {"log":"    at Socket.socketOnEnd (_http_client.js:437:9)\n","stream":"stderr","time":"2021-08-12T08:17:25.15466586Z"}
    {"log":"    at emitNone (events.js:111:20)\n","stream":"stderr","time":"2021-08-12T08:17:25.154670278Z"}
    {"log":"    at Socket.emit (events.js:208:7)\n","stream":"stderr","time":"2021-08-12T08:17:25.154674514Z"}
    {"log":"    at endReadableNT (_stream_readable.js:1064:12)\n","stream":"stderr","time":"2021-08-12T08:17:25.154678527Z"}
    {"log":"    at _combinedTickCallback (internal/process/next_tick.js:139:11)\n","stream":"stderr","time":"2021-08-12T08:17:25.154682732Z"}
    {"log":"    at process._tickCallback (internal/process/next_tick.js:181:9)\n","stream":"stderr","time":"2021-08-12T08:17:25.154687012Z"}
    {"log":"  message: 'request to http://ks-apiserver.kubesphere-system.svc/kapis/config.kubesphere.io/v1alpha2/configs/oauth failed, reason: socket hang up',\n","stream":"stderr","time":"2021-08-12T08:17:25.154691412Z"}
    {"log":"  type: 'system',\n","stream":"stderr","time":"2021-08-12T08:17:25.154696141Z"}
    {"log":"  errno: 'ECONNRESET',\n","stream":"stderr","time":"2021-08-12T08:17:25.154700064Z"}
    {"log":"  code: 'ECONNRESET' }\n","stream":"stderr","time":"2021-08-12T08:17:25.154704175Z"}
    {"log":"  --\u003e GET /login 200 9ms 14.82kb 2021/08/12T16:17:25.159\n","stream":"stdout","time":"2021-08-12T08:17:25.159315736Z"}
    {"log":"  \u003c-- GET /kapis/resources.kubesphere.io/v1alpha2/components 2021/08/12T16:17:27.421\n","stream":"stdout","time":"2021-08-12T08:17:27.421992195Z"}
    {"log":"  \u003c-- GET /kapis/resources.kubesphere.io/v1alpha3/deployments?sortBy=updateTime\u0026limit=10 2021/08/12T16:17:29.688\n","stream":"stdout","time":"2021-08-12T08:17:29.689260211Z"}
    {"log":"  \u003c-- GET / 2021/08/12T16:17:35.147\n","stream":"stdout","time":"2021-08-12T08:17:35.148138272Z"}
    3 months later

    After installing 3.1.1 I cannot find the kubesphere.yaml file. With an all-in-one installation, kubesphere.yaml does not exist under the /etc/kubesphere directory.
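
    In that case the running configuration may only exist in the cluster, not on the node's filesystem. A sketch for recovering it (the ConfigMap name kubesphere-config and its kubesphere.yaml key are assumptions based on a typical 3.x install):

    # dump the in-cluster config to the path ks-apiserver looks for locally
    sudo mkdir -p /etc/kubesphere
    kubectl -n kubesphere-system get cm kubesphere-config \
        -o jsonpath='{.data.kubesphere\.yaml}' | sudo tee /etc/kubesphere/kubesphere.yaml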

      I get an error when starting telepresence. What could be the cause?

      kubesphere git:(master) ✗ sudo telepresence --namespace kubesphere-system --swap-deployment ks-apiserver --also-proxy redis.kubesphere-system.svc --also-proxy openldap.kubesphere-system.svc
      T: Using a Pod instead of a Deployment for the Telepresence proxy. If you experience problems, please file an issue!
      T: Set the environment variable TELEPRESENCE_USE_DEPLOYMENT to any non-empty value to force the old behavior, e.g.,
      T:     env TELEPRESENCE_USE_DEPLOYMENT=1 telepresence --run curl hello
      
      T: Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected, only one telepresence can run per machine, and you can't use other VPNs. You may need to add cloud hosts and headless 
      T: services with --also-proxy. For a full list of method limitations see https://telepresence.io/reference/methods.html
      T: Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details.
      T: Starting network proxy to cluster by swapping out Deployment ks-apiserver with a proxy Pod
      T: Forwarding remote port 9090 to local port 9090.
      
      
      Looks like there's a bug in our code. Sorry about that!
      
      Background process (SSH port forward (exposed ports)) exited with return code 255. Command was:
        ssh -N -oServerAliveInterval=1 -oServerAliveCountMax=10 -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oConnectTimeout=5 -q -p 51702 telepresence@127.0.0.1 -R '*:9090:127.0.0.1:9090'
      
      
      Background process (SSH port forward (socks and proxy poll)) exited with return code 255. Command was:
        ssh -N -oServerAliveInterval=1 -oServerAliveCountMax=10 -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oConnectTimeout=5 -q -p 51702 telepresence@127.0.0.1 -L127.0.0.1:51712:127.0.0.1:9050 -R9055:127.0.0.1:51713
      
      
      Here are the last few lines of the logfile (see /Users/zimingli/github/lesterhnu/kubesphere/telepresence.log for the complete logs):
      
        18.0  21 | c : DNS request from ('10.2.30.237', 13778) to None: 35 bytes
        18.5  21 | c : DNS request from ('10.2.30.237', 27803) to None: 35 bytes
        18.5 TEL | [17] SSH port forward (exposed ports): exit 255
        18.5 TEL | [18] SSH port forward (socks and proxy poll): exit 255
        19.1 TEL | [32] timed out after 5.01 secs.
        19.1 TEL | [33] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-5.a.sanity.check.telepresence.io")'
        19.1  21 | c : DNS request from ('10.2.30.237', 65254) to None: 68 bytes
        19.4  21 | c : DNS request from ('10.2.30.237', 60471) to None: 35 bytes
        20.1 TEL | [33] timed out after 1.01 secs.
        20.2  21 | c : DNS request from ('10.2.30.237', 65254) to None: 68 bytes
        20.4  21 | c : DNS request from ('10.2.30.237', 60471) to None: 35 bytes
        21.4  21 | c : DNS request from ('10.2.30.237', 64923) to None: 37 bytes
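
      The log shows both SSH port forwards exiting with code 255 and the hellotelepresence DNS sanity check timing out, which points at vpn-tcp's DNS/VPN capture failing on this machine. An earlier post in this thread reported that switching methods helped; a sketch under that assumption:

      # bypass vpn-tcp's DNS capture by injecting at the process level
      sudo telepresence --namespace kubesphere-system --swap-deployment ks-apiserver \
          --method inject-tcp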

        lesterhnu

        ➜  kubesphere git:(master) ✗ ./bin/cmd/ks-apiserver --kubeconfig ~/.kube/config
        W1111 15:02:32.189924   55531 metricsserver.go:238] Metrics API not available.
        W1111 15:02:32.190132   55531 options.go:183] ks-apiserver starts without redis provided, it will use in memory cache. This may cause inconsistencies when running ks-apiserver with multiple replicas.
        I1111 15:02:32.412198   55531 interface.go:60] start helm repo informer
        W1111 15:02:32.432146   55531 routers.go:175] open /etc/kubesphere/ingress-controller: no such file or directory
        E1111 15:02:32.432186   55531 routers.go:70] error happened during loading external yamls, open /etc/kubesphere/ingress-controller: no such file or directory
        I1111 15:02:32.447266   55531 apiserver.go:356] Start cache objects
        W1111 15:02:32.797402   55531 apiserver.go:509] resource snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses not exists in the cluster
        W1111 15:02:32.797432   55531 apiserver.go:509] resource snapshot.storage.k8s.io/v1, Resource=volumesnapshots not exists in the cluster
        W1111 15:02:32.797445   55531 apiserver.go:509] resource snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents not exists in the cluster
        I1111 15:02:33.206151   55531 apiserver.go:562] Finished caching objects
        I1111 15:02:33.206182   55531 apiserver.go:278] Start listening on :9090
        W1111 15:04:48.461319   55531 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StatefulSet ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
        W1111 15:04:48.461273   55531 reflector.go:436] pkg/client/informers/externalversions/factory.go:128: watch of *v1alpha2.Strategy ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
        W1111 15:04:48.461273   55531 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
        W1111 15:04:48.461318   55531 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
        W1111 15:04:48.461852   55531 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.RoleBinding ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
        20 days later
        1 month later

        Setting up a secondary-development environment for kubesphere-3.2.1. I have already proxied the ks-apiserver traffic with telepresence,

        but logging in through the console keeps reporting an authentication failure, even with the default admin password P@88w0rd.
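
        A common cause when a locally run ks-apiserver replaces the in-cluster one is that the local kubesphere.yaml carries a different authentication jwtSecret than the cluster, so tokens issued by one side are rejected by the other. A hedged check (ConfigMap name assumed as above):

        # compare the cluster's jwtSecret with the one in your local kubesphere.yaml
        kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep jwtSecret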

          2 months later
          5 months later

          I am using KubeSphere v3.3.0, but I get an error when starting telepresence. What does this mean? Could someone help take a look?

            After it comes up, the traffic-agent container keeps reporting errors.

            Could anyone advise how to resolve this?

            @Jeff @Feynman When I run go build -o ks-apiserver cmd/ks-apiserver/apiserver.go I get an error. How should I fix it?
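
            Building a single file of a multi-file main package fails whenever that file references siblings in the same package; the idiomatic fix is to build the package directory (the bin/cmd output path below mirrors the binary used earlier in this thread):

            # build the whole package, not a single file
            go build -o bin/cmd/ks-apiserver ./cmd/ks-apiserver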

            Is nobody answering questions on this forum anymore? 😅

            @Jeff Hello, when starting the code locally for debugging, do both apiserver.go and controller-manager.go need to be started?
            Is it OK to start only apiserver.go?

            I started apiserver.go in GoLand, but controller-manager.go reported an error. I tried calling several APIs, and they all return something like:

            {
                "kind": "Status",
                "apiVersion": "v1",
                "metadata": {},
                "status": "Failure",
                "message": "namespaces.resources.kubesphere.io is forbidden: User \"system:anonymous\" cannot list resource \"namespaces\" in API group \"resources.kubesphere.io\" at the cluster scope",
                "reason": "Forbidden",
                "details": {
                    "group": "resources.kubesphere.io",
                    "kind": "namespaces"
                },
                "code": 403
            }

            I tried adding parameters to the headers, but it did not help.

            How should these two situations be handled?
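
            On the first question: the REST endpoints are served by ks-apiserver, so starting only apiserver.go is generally enough for calling APIs; controller-manager is needed only when resources must actually be reconciled. On the 403: "system:anonymous" means the request carried no valid token. A sketch of authenticating first (the /oauth/token endpoint and the kubesphere client credentials are assumptions based on a default kubesphere.yaml; port 9090 matches the listen address logged above):

            # obtain an access token from the locally running ks-apiserver
            curl -s http://127.0.0.1:9090/oauth/token \
                -d grant_type=password -d username=admin -d 'password=P@88w0rd' \
                -d client_id=kubesphere -d client_secret=kubesphere

            # then repeat the failing request with the returned access_token
            curl -s -H 'Authorization: Bearer <access_token>' \
                http://127.0.0.1:9090/kapis/resources.kubesphere.io/v1alpha2/namespaces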

            7 months later

            Same question here. My environment is a KubeSphere 3.3.2 all-in-one install; enabling the alerting component does not help either.

            15 days later
            8 months later