Operating system information
e.g.: VM, CentOS 7, 4C/8G

Kubernetes version information
Paste the output of kubectl version below

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:10:45Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.13-eks-84b4fe6", GitCommit:"e1318dce57b3e319a2e3fecf343677d1c4d4aa75", GitTreeState:"clean", BuildDate:"2022-06-09T18:22:07Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}

Container runtime
Paste the output of docker version / crictl version / nerdctl version below

KubeSphere version information
KubeSphere version: v3.3.0

I followed the notification-manager GitHub QuickStart (https://github.com/kubesphere/notification-manager) and ran into some problems deploying the Config and Receiver. Right now Alertmanager alerts are not being delivered to Feishu.

notification-manager has already detected that the Receiver was updated.
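
Whether notification-manager actually picked up the Config and Receiver can be checked with something like the commands below; a rough sketch, assuming the default KubeSphere install in kubesphere-monitoring-system (the deployment name may differ in other setups):

    # List the notification-manager custom resources that should exist after applying the QuickStart manifests
    kubectl get receivers.notification.kubesphere.io
    kubectl get configs.notification.kubesphere.io

    # Follow the notification-manager logs while a test alert fires, to see reload and send errors
    kubectl -n kubesphere-monitoring-system logs deploy/notification-manager-deployment -f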

wanjunlei

Can KubeSphere 3.3 + notification-manager 2.1 not send to Feishu?

As far as I can see, notification-manager 2.1 supports Feishu.

Also, our alerts are sent from Alertmanager to notification-manager, not from inside KubeSphere.

Please post your global-receiver-feishu so we can take a look. From the logs you pasted, the current problem is that notification-manager is not receiving the alerts. Are your other notification channels getting messages?

    8 days later

    wanjunlei

    This is the receiver configuration. The other channel (Feishu) can receive the alerts pushed by Alertmanager,

    but notification-manager did not push them successfully.

    - name: notification-manager
      webhook_configs:
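
    For context, a complete Alertmanager receiver of this kind would look roughly like the sketch below; the webhook URL is an assumption based on the default notification-manager service, port, and path in kubesphere-monitoring-system and may differ in other setups:

        route:
          receiver: notification-manager
        receivers:
          - name: notification-manager
            webhook_configs:
              # Alertmanager forwards alerts to notification-manager's alerts endpoint
              - url: http://notification-manager-svc.kubesphere-monitoring-system.svc:19093/api/v2/alerts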

    5 months later

    wanjunlei

    I have recently been wanting to use Feishu notifications as well, but the default notification-manager in KubeSphere 3.2 is v1.4.0. How do I manually upgrade notification-manager? Is there any reference documentation?

    1 month later

    wanjunlei Thanks for the reply. One more question: can member clusters also be upgraded directly like this? I am using federated cluster management, and the member clusters also have Receiver configurations that come from the host cluster. Will upgrading directly delete or overwrite those resources? I would also like to hook up a Feishu chatbot; is there any example configuration documentation?

    Receivers and Configs on the host cluster are automatically synced to the member clusters. This is a KubeSphere mechanism and does not affect the upgrade.

    To integrate a Feishu chatbot, configure it as follows; keywords and secret are optional depending on how the chatbot is set up:

        chatbot:
          keywords: []
          secret:
          webhook:
            value: https://open.feishu.cn/open-apis/bot/v2/hook/xxxxxxx
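
    As a rough sketch, this chatbot block sits under spec.feishu of a Receiver object. The API version below is taken from the error log later in this thread (v2beta2); the labels are assumed to match the global receiver selector of your NotificationManager CR, and the webhook URL is a placeholder:

        apiVersion: notification.kubesphere.io/v2beta2
        kind: Receiver
        metadata:
          name: global-receiver-feishu
          labels:
            app: notification-manager
            type: global
        spec:
          feishu:
            chatbot:
              keywords: []             # optional, per the chatbot's keyword setting
              secret:                  # optional, per the chatbot's signature setting
              webhook:
                value: https://open.feishu.cn/open-apis/bot/v2/hook/xxxxxxx
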
      5 days later

      wanjunlei After upgrading notification-manager to 2.2.0, ks-apiserver keeps restarting.

      W0314 05:54:01.836313 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.

      W0314 05:54:01.839840 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.

      W0314 05:54:01.851445 1 options.go:191] ks-apiserver starts without redis provided, it will use in memory cache. This may cause inconsistencies when running ks-apiserver with multiple replicas.

      I0314 05:54:01.851492 1 interface.go:50] start helm repo informer

      I0314 05:54:02.438490 1 apiserver.go:417] Start cache objects

      E0314 05:54:03.349258 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v2beta1.Receiver: failed to list *v2beta1.Receiver: conversion webhook for notification.kubesphere.io/v2beta2, Kind=Receiver failed: Post "https://notification-manager-webhook.kubesphere-monitoring-system.svc:443/convert?timeout=30s": x509: certificate signed by unknown authority

      Did helm run successfully during the upgrade?

      You can use the following script to fix it:

# Reuse the CA bundle from the validating webhook configuration created by notification-manager
caBundle=$(kubectl get validatingWebhookConfiguration notification-manager-validating-webhook -o jsonpath='{.webhooks[0].clientConfig.caBundle}')

# Patch the conversion webhook of the notification-manager CRDs with that CA bundle
cat > /tmp/patch.yaml <<EOF
spec:
  conversion:
    webhook:
      clientConfig:
        caBundle: ${caBundle}
        service:
          namespace: kubesphere-monitoring-system
EOF

kubectl patch crd configs.notification.kubesphere.io --type=merge --patch-file /tmp/patch.yaml
kubectl patch crd receivers.notification.kubesphere.io --type=merge --patch-file /tmp/patch.yaml
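
To confirm the patch took effect, the CA bundle on the CRDs can be compared against the one on the validating webhook configuration, e.g.:

# Both should print the same base64 CA bundle as the validating webhook configuration
kubectl get crd receivers.notification.kubesphere.io -o jsonpath='{.spec.conversion.webhook.clientConfig.caBundle}'
kubectl get crd configs.notification.kubesphere.io -o jsonpath='{.spec.conversion.webhook.clientConfig.caBundle}'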

        wanjunlei
        The log from running helm at the time was as follows:

        $ kubectl apply -f https://github.com/kubesphere/notification-manager/releases/download/v2.2.0/bundle.yaml

        customresourcedefinition.apiextensions.k8s.io/configs.notification.kubesphere.io configured
        customresourcedefinition.apiextensions.k8s.io/notificationmanagers.notification.kubesphere.io configured
        customresourcedefinition.apiextensions.k8s.io/receivers.notification.kubesphere.io configured
        customresourcedefinition.apiextensions.k8s.io/routers.notification.kubesphere.io configured
        customresourcedefinition.apiextensions.k8s.io/silences.notification.kubesphere.io configured
        serviceaccount/notification-manager-sa unchanged
        Warning: resource roles/notification-manager-leader-election-role is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
        role.rbac.authorization.k8s.io/notification-manager-leader-election-role configured
        Warning: resource clusterroles/notification-manager-controller-role is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
        clusterrole.rbac.authorization.k8s.io/notification-manager-controller-role configured
        Warning: resource clusterroles/notification-manager-metrics-reader is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
        clusterrole.rbac.authorization.k8s.io/notification-manager-metrics-reader configured
        Warning: resource clusterroles/notification-manager-proxy-role is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
        clusterrole.rbac.authorization.k8s.io/notification-manager-proxy-role configured
        Warning: resource rolebindings/notification-manager-leader-election-rolebinding is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
        rolebinding.rbac.authorization.k8s.io/notification-manager-leader-election-rolebinding configured
        Warning: resource clusterrolebindings/notification-manager-controller-rolebinding is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
        clusterrolebinding.rbac.authorization.k8s.io/notification-manager-controller-rolebinding configured
        Warning: resource clusterrolebindings/notification-manager-proxy-rolebinding is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
        clusterrolebinding.rbac.authorization.k8s.io/notification-manager-proxy-rolebinding configured
        secret/notification-manager-webhook-server-cert unchanged
        service/notification-manager-controller-metrics unchanged
        service/notification-manager-webhook unchanged
        deployment.apps/notification-manager-operator configured
        validatingwebhookconfiguration.admissionregistration.k8s.io/notification-manager-validating-webhook configured

        helm upgrade notification-manager -n kubesphere-monitoring-system notification-manager.tgz --set kubesphere=true --set notificationmanager.replicas=2

        Release "notification-manager" has been upgraded. Happy Helming! NAME: notification-manager LAST DEPLOYED: Fri Mar 10 11:54:59 2023 NAMESPACE: kubesphere-monitoring-system STATUS: deployed REVISION: 2 TEST SUITE: None