• Installation & Deployment
  When installing KubeSphere 3.0 on a self-built Kubernetes cluster, monitoring status is failed

Everything else installed normally, so it should not be a problem with the self-signed CA certificate.

The error output is as follows:

      "stderr_lines": [
        "error: error when retrieving current configuration of:",
        "Resource: \"monitoring.coreos.com/v1, Resource=servicemonitors\", GroupVersionKind: \"monitoring.coreos.com/v1, Kind=ServiceMonitor\"",
        "Name: \"node-exporter\", Namespace: \"kubesphere-monitoring-system\"",
        "Object: &{map[\"apiVersion\":\"monitoring.coreos.com/v1\" \"kind\":\"ServiceMonitor\" \"metadata\":map[\"annotations\":map[\"kubectl.kubernetes.io/last-applied-configuration\":\"\"] \"labels\":map[\"app.kubernetes.io/name\":\"node-exporter\" \"app.kubernetes.io/version\":\"ks-v0.18.1\"] \"name\":\"node-exporter\" \"namespace\":\"kubesphere-monitoring-system\"] \"spec\":map[\"endpoints\":[map[\"bearerTokenFile\":\"/var/run/secrets/kubernetes.io/serviceaccount/token\" \"interval\":\"1m\" \"metricRelabelings\":[map[\"action\":\"keep\" \"regex\":\"node_cpu_.+|node_memory_Mem.+_bytes|node_memory_SReclaimable_bytes|node_memory_Cached_bytes|node_memory_Buffers_bytes|node_network_(.+_bytes_total|up)|node_network_.+_errs_total|node_nf_conntrack_entries.*|node_disk_.+_completed_total|node_disk_.+_bytes_total|node_filesystem_files|node_filesystem_files_free|node_filesystem_avail_bytes|node_filesystem_size_bytes|node_filesystem_free_bytes|node_filesystem_readonly|node_load.+|node_timex_offset_seconds\" \"sourceLabels\":[\"__name__\"]]] \"port\":\"https\" \"relabelings\":[map[\"action\":\"labeldrop\" \"regex\":\"(service|endpoint)\"] map[\"action\":\"replace\" \"regex\":\"(.*)\" \"replacement\":\"$1\" \"sourceLabels\":[\"__meta_kubernetes_pod_node_name\"] \"targetLabel\":\"instance\"]] \"scheme\":\"https\" \"tlsConfig\":map[\"insecureSkipVerify\":%!q(bool=true)]]] \"jobLabel\":\"app.kubernetes.io/name\" \"selector\":map[\"matchLabels\":map[\"app.kubernetes.io/name\":\"node-exporter\"]]]]}",
        "from server for: \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceMonitor.yaml\": Get https://10.254.0.1:443/apis/monitoring.coreos.com/v1/namespaces/kubesphere-monitoring-system/servicemonitors/node-exporter: stream error: stream ID 15; INTERNAL_ERROR"
      ],
      "stdout": "clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged\ndaemonset.apps/node-exporter configured\nservice/node-exporter unchanged\nserviceaccount/node-exporter unchanged",
      "stdout_lines": [
        "clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged",
        "clusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter unchanged",
        "daemonset.apps/node-exporter configured",
        "service/node-exporter unchanged",
        "serviceaccount/node-exporter unchanged"
      ]
    },
    "role": "ks-monitor",
    "start": "2020-08-17T09:14:27.416729",
    "task": "ks-monitor | Installing node-exporter",
    "task_action": "command",
    "task_args": "",
    "task_path": "/kubesphere/installer/roles/ks-monitor/tasks/node-exporter.yaml:2",
    "task_uuid": "0242ac1e-9005-0b95-fea9-000000000037",
    "uuid": "b7b6232b-c8cc-4690-b7d7-14d979849c8f"
**************************************************
task monitoring status is running
task multicluster status is successful
task notification status is successful
task openpitrix status is successful
total: 4     completed:3
**************************************************
task monitoring status is failed
task multicluster status is successful
task notification status is successful
task openpitrix status is successful
total: 4     completed:4
**************************************************

1. Kubernetes 1.16, installed natively from binaries; 1 master + 2 nodes, 3 GB RAM each (3 machines in total)

2. metrics-server was installed manually before this. Could it conflict? They are not in the same namespace, so normally there should be no conflict (see the quick check sketched below).
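
A quick check (my own sketch, not something run in the thread): metrics-server registers the metrics.k8s.io APIService, while prometheus-operator only adds CRDs under monitoring.coreos.com, so the two should not overlap. Listing both makes that easy to confirm:

    # metrics-server sits behind an APIService; prometheus-operator behind CRDs.
    # Neither list should contain entries from the other API group.
    kubectl get apiservices | grep metrics.k8s.io
    kubectl get crd | grep monitoring.coreos.com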

    Unauthorized error

    [root@ford-k8s03 ~]# curl https://10.254.0.1:443/apis/monitoring.coreos.com/v1/namespaces/kubesphere-monitoring-system/servicemonitors/node-exporter --insecure
    {
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {
        
      },
      "status": "Failure",
      "message": "Unauthorized",
      "reason": "Unauthorized",
      "code": 401
    }
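
    For context: the 401 above only means the anonymous curl carried no credentials; on its own it does not indicate a cluster problem. A minimal authenticated sketch (mine, assuming it is run inside a pod where the default ServiceAccount token is mounted at the standard path):

    # repeat the request with a bearer token so a plain authentication failure can be ruled out
    TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    curl -H "Authorization: Bearer ${TOKEN}" \
         --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
         https://10.254.0.1:443/apis/monitoring.coreos.com/v1/namespaces/kubesphere-monitoring-system/servicemonitors/node-exporter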

      The error comes from the prometheus-operator-78c5cdbc8f-bgzfc pod

      kubesphere-monitoring-system   node-exporter-qpzwd                                  2/2     Running             2          47h
      kubesphere-monitoring-system   node-exporter-qqwwk                                  2/2     Running             2          47h
      kubesphere-monitoring-system   prometheus-operator-78c5cdbc8f-bgzfc                 1/2     CrashLoopBackOff    354        30h
      kubesphere-system              etcd-85c98fb695-kxn6r                                1/1     Running             1          22h
      kubesphere-system              ks-apiserver-6f7db44647-drpjn                        1/1     Running             2          21m

      prometheus-operator-78c5cdbc8f-bgzfc fails on startup

      [root@ford-k8s01 ~]# kubectl logs -f prometheus-operator-78c5cdbc8f-bgzfc -n kubesphere-monitoring-system -c prometheus-operator
      ts=2020-08-18T07:57:52.383110188Z caller=main.go:188 msg="Starting Prometheus Operator version '0.38.3'."
      ts=2020-08-18T07:57:52.390588254Z caller=main.go:98 msg="Staring insecure server on :8080"
      level=info ts=2020-08-18T07:57:52.479019182Z caller=operator.go:308 component=thanosoperator msg="connection established" cluster-version=v1.16.0
      level=info ts=2020-08-18T07:57:52.479320007Z caller=operator.go:464 component=prometheusoperator msg="connection established" cluster-version=v1.16.0
      level=info ts=2020-08-18T07:57:52.483166451Z caller=operator.go:213 component=alertmanageroperator msg="connection established" cluster-version=v1.16.0
      level=info ts=2020-08-18T07:57:53.984129044Z caller=operator.go:718 component=thanosoperator msg="CRD updated" crd=ThanosRuler
      level=info ts=2020-08-18T07:57:54.086096814Z caller=operator.go:643 component=alertmanageroperator msg="CRD updated" crd=Alertmanager
      level=info ts=2020-08-18T07:57:54.183643325Z caller=operator.go:1941 component=prometheusoperator msg="CRD updated" crd=Prometheus
      level=info ts=2020-08-18T07:57:54.214103964Z caller=operator.go:1941 component=prometheusoperator msg="CRD updated" crd=ServiceMonitor
      level=info ts=2020-08-18T07:57:54.234581568Z caller=operator.go:1941 component=prometheusoperator msg="CRD updated" crd=PodMonitor
      level=info ts=2020-08-18T07:57:54.275259673Z caller=operator.go:1941 component=prometheusoperator msg="CRD updated" crd=PrometheusRule
      ts=2020-08-18T07:57:57.012671774Z caller=main.go:306 msg="Unhandled error received. Exiting..." err="creating CRDs failed: waiting for ThanosRuler crd failed: timed out waiting for Custom Resource: failed to list CRD: Get \"https://10.254.0.1:443/apis/monitoring.coreos.com/v1/prometheuses?limit=500\": stream error: stream ID 31; INTERNAL_ERROR"

      Jeff Thanks. Here are the results when accessing with the client certificate:

      # curl --cert ./client.pem --key ./client-key.pem --cacert ./ca.pem https://10.254.0.1:443/apis/monitoring.coreos.com/v1/naespaces/kubesphere-monitoring-system/servicemonitors/node-exporter  --insecure
      curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)
      # curl --cert ./client.pem --key ./client-key.pem --cacert ./ca.pem  https://10.254.0.1:443/apis/monitoring.coreos.com/v1/alertmanagers?limit=500   --insecure
      curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)
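
      One extra data point worth collecting (my suggestion, not something tried in the thread): force HTTP/1.1 on the same request. If the failure is on the server side, the request still fails, but the error is no longer reported as an HTTP/2 stream reset:

      # same request over HTTP/1.1, to separate an HTTP/2 framing problem from a server-side failure;
      # the cert/key/CA paths are the same ones used above
      curl --http1.1 --cert ./client.pem --key ./client-key.pem --cacert ./ca.pem \
           "https://10.254.0.1:443/apis/monitoring.coreos.com/v1/alertmanagers?limit=500" --insecure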

        cloudnativelab

        Can you use kubectl? Run kubectl get crd | grep monitoring and show the result.

        In the first curl command: naespaces -> namespaces

          Listing the alertmanagers CRD objects also fails

          failed to list CRD: Get \"https://10.254.0.1:443/apis/monitoring.coreos.com/v1/alertmanagers?limit=500\"


          curl --cert ./client.pem --key ./client-key.pem --cacert ./ca.pem https://10.254.0.1:443/apis/monitoring.coreos.com/v1/alertmanagers?limit=500 --insecure

          curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)


          I'm not familiar with KubeSphere's architecture; where should I look next?


            Jeff thx

            # kubectl get crd | grep monitoring
            alertmanagers.monitoring.coreos.com              2020-08-16T08:45:18Z
            podmonitors.monitoring.coreos.com                2020-08-16T08:45:19Z
            prometheuses.monitoring.coreos.com               2020-08-16T08:45:19Z
            prometheusrules.monitoring.coreos.com            2020-08-16T08:45:19Z
            servicemonitors.monitoring.coreos.com            2020-08-16T08:45:20Z
            thanosrulers.monitoring.coreos.com               2020-08-16T08:45:20Z

              cloudnativelab Can you share your machine details: which OS, which version, and which kernel version?

              uname -a

              cloudnativelab The problem is no longer related to KubeSphere; it's related to your Kubernetes. Did you install it with KubeKey, or via ks-installer?

              1. Installed via ks-installer

              2. Kubernetes installed natively from binaries; version info below
              Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
              Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

              3. CentOS 8

              CentOS Linux release 8.0.1905 (Core) 
              Derived from Red Hat Enterprise Linux 8.0 (Source)
              NAME="CentOS Linux"
              VERSION="8 (Core)"
              ID="centos"
              ID_LIKE="rhel fedora"
              VERSION_ID="8"
              PLATFORM_ID="platform:el8"
              PRETTY_NAME="CentOS Linux 8 (Core)"
              ANSI_COLOR="0;31"
              CPE_NAME="cpe:/o:centos:centos:8"
              HOME_URL="https://www.centos.org/"
              BUG_REPORT_URL="https://bugs.centos.org/"
              
              CENTOS_MANTISBT_PROJECT="CentOS-8"
              CENTOS_MANTISBT_PROJECT_VERSION="8"
              REDHAT_SUPPORT_PRODUCT="centos"
              REDHAT_SUPPORT_PRODUCT_VERSION="8"
              
              CentOS Linux release 8.0.1905 (Core) 
              CentOS Linux release 8.0.1905 (Core) 
              cpe:/o:centos:centos:8

              4. Kernel:
              4.18.0-80.el8.x86_64 #1 SMP Tue Jun 4 09:19:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux


                cloudnativelab kube-apiserver panicked. This is probably a Kubernetes bug; we will try to reproduce it. You can upgrade your Kubernetes version and try again (see the verification sketch after the log below). Also, your cluster nodes are too small; 2 GB of memory is not enough.

                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: E0820 17:41:08.830174     777 wrap.go:39] apiserver panic'd on GET /apis/monitoring.coreos.com/v1/prometheuses?limit=500&resourceVersion=0
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: I0820 17:41:08.830292     777 log.go:172] http2: panic serving 192.168.2.181:59284: runtime error: invalid memory address or nil pointer dereference
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: goroutine 1021814 [running]:
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1.1(0xc011edad20)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:107 +0x107
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: panic(0x3ce83e0, 0xaa83850)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/usr/local/go/src/runtime/panic.go:522 +0x1b5
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAudit.func1.1(0xc01dd90280, 0x7f3932468800, 0xc000c8c3c0, 0xaad6b78, 0x0, 0x0, 0x0, 0x0)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/audit.go:88 +0x1e0
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: panic(0x3ce83e0, 0xaa83850)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/usr/local/go/src/runtime/panic.go:522 +0x1b5
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema.(*Structural).Unfold.func1(0xc0293c01b0, 0x0)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/unfold.go:38 +0xa2
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema.(*Visitor).visitStructural(0xc0283e7638, 0xc0293c01b0, 0xc0283e6c00)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/visitor.go:41 +0x48e
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema.(*Visitor).visitStructural(0xc0283e7638, 0xc0293bfef0, 0xc0283e6d00)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/visitor.go:48 +0x173
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema.(*Visitor).visitStructural(0xc0283e7638, 0xc0293bfe60, 0xc0283e6f60)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/visitor.go:48 +0x173
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema.(*Visitor).visitStructural(0xc0283e7638, 0xc0293bfdd0, 0xc0283e70d0)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/visitor.go:48 +0x173
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema.(*Visitor).visitStructural(0xc0283e7638, 0xc029356990, 0xc0283e7240)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/visitor.go:48 +0x173
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema.(*Visitor).visitStructural(0xc0283e7638, 0xc0293bfc20, 0xc0283e7300)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/visitor.go:45 +0x478
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema.(*Visitor).visitStructural(0xc0283e7638, 0xc0293bfb90, 0xc0283e7520)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/visitor.go:48 +0x173
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema.(*Visitor).visitStructural(0xc0283e7638, 0xc0292ebef0, 0xc0292ebef0)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/visitor.go:48 +0x173
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema.(*Visitor).Visit(...)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/visitor.go:35
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema.(*Structural).Unfold(0xc0292ebef0, 0xc0292ebef0)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/unfold.go:60 +0x58
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder.BuildSwagger(0xc00cbf7080, 0xc01110e1f6, 0x2, 0x1010100, 0x0, 0x0, 0xd0)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder/builder.go:105 +0x1ade
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver.buildOpenAPIModelsForApply(0xc000a1a500, 0xc00cbf7080, 0xc01110e1f6, 0x2, 0xc0283b0cb8, 0x0)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler.go:1239 +0x177
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver.(*crdHandler).getOrCreateServingInfoFor(0xc000ca5550, 0xc00f3d9fb0, 0x24, 0xc00f3d9f80, 0x22, 0x0, 0x0, 0x0)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler.go:647 +0x3f7
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver.(*crdHandler).ServeHTTP(0xc000ca5550, 0x7b10ca0, 0xc00f864d00, 0xc0247ee000)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler.go:301 +0x2f1
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00c1cec80, 0x7b10ca0, 0xc00f864d00, 0xc0247ee000)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x38d
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00079ee00, 0x7b10ca0, 0xc00f864d00, 0xc0247ee000)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x85
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x44ee88f, 0x17, 0xc000726a20, 0xc00079ee00, 0x7b10ca0, 0xc00f864d00, 0xc0247ee000)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:154 +0x6c3
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc012bdb340, 0x7b10ca0, 0xc00f864d00, 0xc0247ee000)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:254 +0x1f7
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0015810a0, 0x7b10ca0, 0xc00f864d00, 0xc0247ee000)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x85
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x44cd233, 0xe, 0xc00092c2d0, 0xc0015810a0, 0x7b10ca0, 0xc00f864d00, 0xc0247ee000)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:154 +0x6c3
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc009723ea0, 0x7b10ca0, 0xc00f864d00, 0xc0247ee000)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go:118 +0x162
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010d5a600, 0x7b10ca0, 0xc00f864d00, 0xc0247ee000)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x38d
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009cf8000, 0x7b10ca0, 0xc00f864d00, 0xc0247ee000)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x85
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x44d03ff, 0xf, 0xc008dd81b0, 0xc009cf8000, 0x7b10ca0, 0xc00f864d00, 0xc0247ee000)
                Aug 20 17:41:08 ford-k8s02 kube-apiserver[777]: #011/workspace/anago-v1.16.0-rc.2.1+2bd9643cee5b3b/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:154 +0x6c3
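
                A small verification sketch (mine, assuming the upgrade advice above is followed): after moving to a newer 1.16.x patch release, re-check the server build and replay the exact list call that triggered the panic:

                # confirm the apiserver build actually changed, then hit the failing path again
                kubectl version --short
                kubectl get --raw "/apis/monitoring.coreos.com/v1/prometheuses?limit=500" >/dev/null && echo OK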