(Part 2) ACK prometheus-operator: Configuring Monitoring for Custom Components

Summary: With ack-prometheus-operator on an Alibaba Cloud ACK dedicated cluster, monitoring data for control-plane components such as etcd, scheduler, kube-controller-manager (KCM), cloud-controller-manager (CCM), and kube-proxy is not collected by default; certificates and scrape configuration have to be set up manually. This article aims to resolve the monitoring failures caused by incorrect configuration, and along the way demystify what "updating the Prometheus Server configuration prometheus.yml" actually means, step by step, in the operator ecosystem.

This article uses etcd monitoring as the primary example and covers two configuration approaches: dynamic discovery via ServiceMonitor, and static configuration via prometheus-operator's additionalScrapeConfigs. Along the way it points out the symptoms caused by common misconfigurations and the things to watch out for.


Five groups of components are configured in total, each with its own special handling:

etcd: client-certificate TLS authentication; both ServiceMonitor and additionalScrapeConfigs approaches

scheduler/KCM: change the local-only listen address; ServiceMonitor configuration

kube-proxy: change the local-only listen address; try a PodMonitor configuration

CCM: enable the metrics listen port; try a PodMonitor configuration

net-exporter: pick whichever approach is most convenient.


This article focuses on the concrete configuration steps. If Alibaba Cloud later publishes official configuration documentation, the official docs take precedence.


Environment:

The environment used here is an Alibaba Cloud ACK dedicated cluster running 1.22, with the ack-prometheus-operator chart 12.0.0 from the ACK App Catalog.



1. etcd monitoring on an ACK dedicated cluster


On an ACK dedicated cluster, etcd differs from the other components in two ways:

  1. etcd runs directly on the master nodes, not as pods.
  2. etcd has its own client certificates on the master nodes. The authentication details are covered separately in another article and not repeated here; for the configuration below, the etcd certificates must first be packaged into a Secret.


Method 1: ServiceMonitor-based etcd service discovery

The ack-prometheus-operator Helm chart installed from the ACK App Catalog does not enable the etcd scrape job by default. Because etcd does not run as pods, configuring it through Helm means scraping etcd metrics via a ServiceMonitor; prom-operator automatically creates the associated Service/Endpoints resources.



1. Create a Secret with access to the etcd metrics

On an ACK dedicated cluster, etcd certificates are generated on the master nodes by default:

/etc/kubernetes/pki/etcd/ca.pem
/etc/kubernetes/pki/etcd/etcd-client-key.pem
/etc/kubernetes/pki/etcd/etcd-client.pem


On any master node where etcd runs, execute the following command to create a Secret from the certificates:

kubectl create secret generic etcd-certs \
  --from-file=/etc/kubernetes/pki/etcd/etcd-client-key.pem \
  --from-file=/etc/kubernetes/pki/etcd/etcd-client.pem \
  --from-file=/etc/kubernetes/pki/etcd/ca.pem \
  -n monitoring
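
A quick sanity check that the Secret contains all three files:

kubectl describe secret etcd-certs -n monitoring
# Data should list ca.pem, etcd-client.pem and etcd-client-key.pem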


2. Helm update: mount the Secret into the Prometheus pod via volumeMounts.


Location to modify: prometheus/prometheusSpec:

volumeMounts:
  - mountPath: /var/run/secrets/kubernetes.io/k8s-certs/etcd/
    name: etcd-certs
volumes:
  - name: etcd-certs
    secret:
      secretName: etcd-certs


As shown in the figure:


This Helm values update is automatically reflected in the Prometheus resource (if you are not using Helm, modify it manually):


$ kubectl get prometheus ack-prometheus-operator-prometheus -n monitoring -oyaml


The Prometheus resource above drives the Prometheus pods to pick up the volume mount automatically. In this environment the Prometheus pods belong to the StatefulSet prometheus-ack-prometheus-operator-prometheus (no manual changes needed):


// Note: the StatefulSet is controlled by the Prometheus CRD resource, so manually editing the StatefulSet YAML will not take effect.

$ kubectl get sts -n monitoring prometheus-ack-prometheus-operator-prometheus -oyaml
# irrelevant lines removed; only the relevant configuration is shown
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus-ack-prometheus-operator-prometheus
  namespace: monitoring
spec:
  template:
    metadata:
    spec:
      containers:
        name: prometheus
        volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/k8s-certs/etcd/
          name: etcd-certs
# irrelevant lines removed; only the relevant configuration is shown
      volumes:
      - name: etcd-certs
        secret:
          defaultMode: 420
          secretName: etcd-certs


3. Helm update: configure the kubeEtcd scrape job


Set endpoints to the master node IPs, and point at the certificate files mounted in the previous step. If endpoints are not specified, no Endpoints resource can be generated automatically, because etcd itself does not run as pods.

Note: on an ACK dedicated cluster, etcd listens on port 2379.

kubeEtcd:
  enabled: true
  endpoints:
  - 10.0.0.9
  - 10.0.0.8
  - 10.0.0.7
  service:
    port: 2379
    targetPort: 2379
  serviceMonitor:
    caFile: "/var/run/secrets/kubernetes.io/k8s-certs/etcd/ca.pem"
    certFile: "/var/run/secrets/kubernetes.io/k8s-certs/etcd/etcd-client.pem"
    insecureSkipVerify: false
    interval: ""
    keyFile: "/var/run/secrets/kubernetes.io/k8s-certs/etcd/etcd-client-key.pem"
    metricRelabelings: []
    relabelings: []
    scheme: https
    serverName: ""


As shown in the figure:


  • After the Helm update, prom-operator creates the etcd ServiceMonitor, Service, and Endpoints resources by default; there is no need to create them manually.


# Source: ack-prometheus-operator/templates/exporters/kube-etcd/servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ack-prometheus-operator-kube-etcd
  namespace: monitoring
  labels:
    app: ack-prometheus-operator-kube-etcd
    chart: ack-prometheus-operator-12.0.0
    release: "ack-prometheus-operator"
    heritage: "Helm"
spec:
  jobLabel: jobLabel
  selector:
    matchLabels:
      app: ack-prometheus-operator-kube-etcd
      release: "ack-prometheus-operator"
  namespaceSelector:
    matchNames:
      - "kube-system"
  endpoints:
  - port: http-metrics
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    scheme: https
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/k8s-certs/etcd/ca.pem
      certFile: /var/run/secrets/kubernetes.io/k8s-certs/etcd/etcd-client.pem
      keyFile: /var/run/secrets/kubernetes.io/k8s-certs/etcd/etcd-client-key.pem
      insecureSkipVerify: false


Note:

Although a Service is used, the corresponding Kubernetes Service has clusterIP: None and no selector, so no Endpoints are generated automatically (there are no etcd pods, after all). The Endpoints here are created by prom-operator from the endpoints list in the Helm values (without Helm you would create them by hand).

This pattern of manually creating a Service plus Endpoints for a non-pod backend is worth keeping in your toolbox. For example, a MySQL instance outside the cluster can be exposed the same way, so that it can be reached conveniently via a Service from inside the cluster.
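
A minimal sketch of that pattern for an external backend (the name external-mysql and the address 10.0.1.100 are made-up examples, not part of this setup):

apiVersion: v1
kind: Service
metadata:
  name: external-mysql        # hypothetical name
  namespace: default
spec:
  clusterIP: None             # headless, and deliberately no selector
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql        # must match the Service name
  namespace: default
subsets:
- addresses:
  - ip: 10.0.1.100            # the external MySQL address (example value)
  ports:
  - name: mysql
    port: 3306

With this pair in place, in-cluster clients can reach the external instance through external-mysql.default.svc:3306.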




Method 2: configure the scrape job via additionalScrapeConfigsSecret


Some future custom scrape jobs may not be convenient to express as a ServiceMonitor or PodMonitor. For those cases there is a more advanced option: package all scrape jobs, written in the standard Prometheus configuration format, into one Secret and add them directly through additionalScrapeConfigsSecret.

Note: do not use both methods at the same time, or the data will be collected twice. For method 2, disable kubeEtcd in the Helm values:

kubeEtcd:
  enabled: false


1. Create a Secret with access to the etcd metrics

Same as step 1 above (etcd only).


2. Helm update: mount the Secret via volumeMounts

Same as step 2 above (etcd only).


3. Configure prometheus-operator's additionalScrapeConfigsSecret

Refer to the upstream documentation:

https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md


// If the job configuration is simple, you can skip this step and edit prometheus/prometheusSpec/additionalScrapeConfigs in the Helm values directly.

Since more additional jobs will be configured later, this article packages everything into a Secret to keep the Helm values small.


Steps:

First, write the scrape job, using static_configs here: vi prometheus-additional-etcd.yaml

Pitfalls:

  • The CA/cert/key paths here must be the paths that were volume-mounted in the earlier step.
  • The first line starts directly with job_name; no wrapper is needed around the job body.
  • When packaging multiple jobs, separate them with a blank line.


- job_name: 'k8s-etcd-yibei'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/ca.pem
    cert_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/etcd-client.pem
    key_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/etcd-client-key.pem
  static_configs:
  - targets: ['{master_ip1}:2379','{master_ip2}:2379','{master_ip3}:2379']

Here {master_ip1}, {master_ip2}, and {master_ip3} are the private IPs of the master nodes running etcd.


Next, wrap the configuration in a Secret:


kubectl create secret generic additional-scrape-configs-etcd --from-file=prometheus-additional-etcd.yaml --dry-run=client  -oyaml > additional-scrape-configs-etcd.yaml
kubectl apply -f additional-scrape-configs-etcd.yaml -n monitoring
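
To sanity-check what ended up in the Secret (optional):

kubectl get secret additional-scrape-configs-etcd -n monitoring \
  -o jsonpath='{.data.prometheus-additional-etcd\.yaml}' | base64 -d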

4. Helm update: wire up the scrape job


Since the scrape jobs are packaged in a Secret, the location to modify is prometheus/prometheusSpec/additionalScrapeConfigsSecret:

// key is the data key inside the Secret above, i.e. the --from-file filename used when creating the Secret; name is the Secret name.

additionalScrapeConfigsSecret:
  enabled: true
  key: prometheus-additional-etcd.yaml
  name: additional-scrape-configs-etcd




The additionalScrapeConfigs field is added to the Prometheus CRD resource's spec automatically; no manual change is needed (without Helm you would edit the CRD here yourself).
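
A quick way to confirm (the resource name matches this environment):

kubectl get prometheus ack-prometheus-operator-prometheus -n monitoring \
  -o jsonpath='{.spec.additionalScrapeConfigs}'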



Appendix:

If the job configuration is simple and does not need to be packaged into a Secret, you can skip step 3 and edit prometheus/prometheusSpec/additionalScrapeConfigs in the Helm values directly instead. However, additionalScrapeConfigs and additionalScrapeConfigsSecret cannot be used at the same time.

https://github.com/prometheus-community/helm-charts/blob/8b45bdbdabd9b54766c4beb3c562b766b268a034/charts/kube-prometheus-stack/values.yaml#L2691


Example:



Verification

  1. Verify the configuration:


Whichever method you use, once prometheus-operator watches a change to a ServiceMonitor or to the Prometheus CRD, it regenerates the configuration and packs it into a Secret. That Secret is mounted into the Prometheus server StatefulSet, which is how the updated configuration reaches the Prometheus server. The full configuration-reload flow is out of scope here; let's just look at the Secret:


The Secret prometheus-ack-prometheus-operator-prometheus stores a .gz archive.



Decoding that .gz yields the standard Prometheus configuration format:


kubectl get secret prometheus-ack-prometheus-operator-prometheus   -n monitoring -ojson|jq -r '.data["prometheus.yaml.gz"]' |base64 -d| gunzip


You can also inspect it directly in the UI (expose the Prometheus Service via an Ingress, a LoadBalancer Service, or kubectl proxy/port-forward).
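
For a quick look without exposing anything, a local port-forward is enough (a convenience sketch; the Service name follows this environment's naming, adjust if yours differs):

kubectl port-forward svc/ack-prometheus-operator-prometheus -n monitoring 9090:9090
# then open http://localhost:9090/config and http://localhost:9090/targets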


The etcd scrape job produced by the ServiceMonitor configuration looks like this:

- job_name: monitoring/ack-prometheus-operator-kube-etcd/0
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: https
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/ca.pem
    cert_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/etcd-client.pem
    key_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/etcd-client-key.pem
    insecure_skip_verify: false
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    separator: ;
    regex: ack-prometheus-operator-kube-etcd
    replacement: $1
    action: keep
# ~100 lines omitted
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - kube-system


The job from the static configuration in additionalScrapeConfigs looks like this:

- job_name: k8s-etcd-yibei
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/ca.pem
    cert_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/etcd-client.pem
    key_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/etcd-client-key.pem
    insecure_skip_verify: false
  static_configs:
  - targets:
    - 10.0.0.7:2379
    - 10.0.0.8:2379
    - 10.0.0.9:2379



  2. Check the targets


  3. Query the metrics:

The demo environment has both jobs configured; to avoid collecting the data twice, delete one of them in a real setup.


  4. Grafana dashboard


Metric reference

https://help.aliyun.com/document_detail/445445.html


Or fetch the HELP text of the metrics manually:
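
For example, piping the appendix curl below through grep shows only the HELP lines (run on a master node, same certificate paths as in the appendix):

curl -s --cacert /etc/kubernetes/pki/etcd/ca.pem \
  --cert /etc/kubernetes/pki/etcd/etcd-client.pem \
  --key /etc/kubernetes/pki/etcd/etcd-client-key.pem \
  https://10.0.0.9:2379/metrics | grep '# HELP'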



Appendix: how to fetch etcd metrics manually


This is useful for troubleshooting when the Prometheus metrics look wrong; just run curl, using the certificates on the etcd node. The TLS authentication details are covered in a separate article.


curl --cacert  /etc/kubernetes/pki/etcd/ca.pem --cert /etc/kubernetes/pki/etcd/etcd-client.pem  --key  /etc/kubernetes/pki/etcd/etcd-client-key.pem -X GET https://10.0.0.9:2379/metrics



2. KCM/Scheduler monitoring on an ACK dedicated cluster


Since KCM runs as pods in the cluster, a ServiceMonitor-based configuration is relatively straightforward. The setup is also simpler than etcd's: the Prometheus serviceaccount's permissions are enough to authenticate against the HTTPS port. Authentication when scraping HTTPS endpoints is covered in a separate article.


Note that many online guides for scraping KCM/scheduler over an HTTP port do not apply to the ACK environment. For example, the scheduler listens only on HTTPS 10259; if you follow a self-managed-Kubernetes guide and scrape HTTP 10251, you will get connection refused.


A typical error looks like this:


  1. Change KCM/Scheduler from the default local-only listen address to 0.0.0.0


On an ACK dedicated cluster, KCM and the scheduler listen on HTTPS bound to localhost by default, so clients such as Prometheus cannot reach the metrics.


KCM, for example:


Modify the bind address in the static pod manifest files; the KCM/scheduler pods restart, and after the restart they listen as follows:

sed -e "s/- --bind-address=127.0.0.1/- --bind-address=0.0.0.0/" -i /etc/kubernetes/manifests/kube-controller-manager.yaml 
sed -e "s/- --bind-address=127.0.0.1/- --bind-address=0.0.0.0/" -i /etc/kubernetes/manifests/kube-scheduler.yaml



After changing the listen address, the HTTPS endpoint is reachable via the node IP:
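
A quick reachability check (a sketch, assuming it runs inside a pod whose serviceaccount is allowed to GET /metrics, e.g. the Prometheus pod; 10.0.0.9 is one of the master IPs used earlier):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# kube-controller-manager
curl -sk -H "Authorization: Bearer ${TOKEN}" https://10.0.0.9:10257/metrics | head
# kube-scheduler
curl -sk -H "Authorization: Bearer ${TOKEN}" https://10.0.0.9:10259/metrics | head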




  2. Enable the KCM/scheduler scrape jobs in Helm.


Note that this step must target the HTTPS listener; for TLS/authentication, the serviceaccount's default credentials are sufficient.


After the Helm update, the KCM/scheduler Services are created automatically. Because the Services select the KCM/scheduler pods via a selector, the Endpoints are generated automatically and do not need to be created by hand. Note the clusterIP: None.

# Source: ack-prometheus-operator/templates/exporters/kube-scheduler/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: ack-prometheus-operator-kube-scheduler
  labels:
    app: ack-prometheus-operator-kube-scheduler
    jobLabel: kube-scheduler
    chart: ack-prometheus-operator-12.0.0
    release: "ack-prometheus-operator"
    heritage: "Helm"
  namespace: kube-system
spec:
  clusterIP: None
  ports:
    - name: http-metrics
      port: 10259
      protocol: TCP
      targetPort: 10259
  selector:
    component: kube-scheduler
  type: ClusterIP


apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: ack-prometheus-operator
    meta.helm.sh/release-namespace: monitoring
  labels:
    app: ack-prometheus-operator-kube-controller-manager
    chart: ack-prometheus-operator-12.0.0
    heritage: Helm
    jobLabel: kube-controller-manager
    release: ack-prometheus-operator
  name: ack-prometheus-operator-kube-controller-manager
  namespace: kube-system
spec:
  clusterIP: None
  ports:
    - name: http-metrics
      port: 10257
      protocol: TCP
      targetPort: 10257
  selector:
    component: kube-controller-manager
  type: ClusterIP



  3. Configure the KCM/scheduler ServiceMonitors

Oddly, the Helm update did not deploy the KCM and scheduler ServiceMonitor resources automatically; something to investigate later. Only these ServiceMonitors were created by default:


Pitfall tips: a fair amount of trial and error went into this, so straight to the points to watch:

  • The ServiceMonitor's labels must match the serviceMonitorSelector defined in the Prometheus resource.
  • port in the ServiceMonitor's endpoints refers to the port name in the Kubernetes Service, not the port number.
  • The ServiceMonitor's selector.matchLabels must match the labels on the Kubernetes Service.
  • Create the ServiceMonitor in Prometheus's namespace, and use namespaceSelector to match the namespace of the Service being monitored.

Check the definitions in the Prometheus CRD first, then write the ServiceMonitor:

- apiVersion: monitoring.coreos.com/v1
  kind: Prometheus
  metadata:
    name: ack-prometheus-operator-prometheus
    namespace: monitoring
  spec:
    serviceMonitorNamespaceSelector: {}
    serviceMonitorSelector:
      matchLabels:
        release: ack-prometheus-operator



apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: kube-controller-manager
    release: ack-prometheus-operator
  name: ack-prom-kube-controller-manager-yibei
  namespace: monitoring
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 30s
    port: http-metrics
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  jobLabel: app.kubernetes.io/name
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      app: ack-prometheus-operator-kube-controller-manager


apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: kube-scheduler
    release: ack-prometheus-operator
  name: ack-prom-kube-scheduler-yibei
  namespace: monitoring
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 30s
    port: http-metrics
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  jobLabel: app.kubernetes.io/name
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      app: ack-prometheus-operator-kube-scheduler


Reference:

https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack/templates/exporters


  4. Verification
  • Configuration check
scrape_configs:
- job_name: monitoring/ack-prom-kube-controller-manager-yibei/0
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: https
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    separator: ;
    regex: ack-prometheus-operator-kube-controller-manager
    replacement: $1
    action: keep
  # ~100 lines omitted
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - kube-system
- job_name: monitoring/ack-prom-kube-scheduler-yibei/0
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: https
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    separator: ;
    regex: ack-prometheus-operator-kube-scheduler
    replacement: $1
    action: keep
  # ~100 lines omitted
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - kube-system



  • Targets are up

// In the demo, the listen address was changed on only one master node, so only one target is up.


  • Query the metrics


  5. Metric reference

https://help.aliyun.com/document_detail/445448.html

https://help.aliyun.com/document_detail/445447.html


3. kube-proxy monitoring on an ACK dedicated cluster


On an ACK cluster, kube-proxy listens on HTTP 10249 bound to localhost by default, and the metrics can be fetched without TLS authentication. But as with scheduler/KCM, the local-only listen address has to be dealt with. kube-proxy runs as a DaemonSet on ACK, so the scrape job can be configured with either a ServiceMonitor or a PodMonitor.



curl http://localhost:10249/metrics


  1. Change the listen address

One approach found online is to modify the container startup flags in the DaemonSets; in my testing this does not work in the ACK environment.


kubectl edit ds kube-proxy-master -n kube-system
kubectl edit ds kube-proxy-worker -n kube-system

Adding the startup flag - --metrics-bind-address=0.0.0.0 has no effect in the ACK environment.

spec:
  template:
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --metrics-bind-address=0.0.0.0


The correct approach is to modify kube-proxy's config file. On ACK the config file is mounted into the pods from a ConfigMap; add metricsBindAddress: 0.0.0.0:10249 to config.conf:

kubectl edit cm -n kube-system kube-proxy-worker
kubectl edit cm -n kube-system kube-proxy-master
#example:
apiVersion: v1
data:
  config.conf: |
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0:10249



Then restart the kube-proxy pods:

kubectl -n kube-system rollout restart daemonset kube-proxy-master
kubectl -n kube-system rollout restart daemonset kube-proxy-worker


You can see it now listens on :::10249.


[root@ack ~]# netstat -nlpt |grep 10249
tcp6     0    0 :::10249     :::*      LISTEN     1145610/kube-proxy
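
With the listener opened up, the metrics can now be pulled over the node IP from elsewhere (10.0.2.136 is a node IP from this environment, also used in the metric listing further below):

curl -s http://10.0.2.136:10249/metrics | head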



  2. Create a PodMonitor

Again, updating Helm did not produce a ServiceMonitor and the operator logged no errors, which is odd; rather than dig into it, let's create things by hand. The previous components used a ServiceMonitor, and you could manually create a Service plus ServiceMonitor here too, but let's try something different for variety: a PodMonitor this time.


Note: this is purely to try out PodMonitor; in a real environment, follow the scheduler approach and use a ServiceMonitor.


Pitfall tips: again, a fair amount of trial and error, so straight to the points to watch:

  • The PodMonitor's labels must match the podMonitorSelector defined in the Prometheus resource.
  • spec.podMetricsEndpoints.port must be the port name defined in the pod, not a port number.
  • The PodMonitor's selector.matchLabels must match the labels on the pods.
  • Create the PodMonitor in Prometheus's namespace, and use namespaceSelector to match the namespace of the pods being monitored.


Since the PodMonitor must reference a port name on the kube-proxy pods, we hit a snag, which is also why PodMonitor is not the preferred choice here:

The ACK kube-proxy-* DaemonSets have no ports definition by default, so what port name would you reference? You have to add the ports section yourself.


// The KCM/Scheduler pod YAML has no ports definition either, but since those used a ServiceMonitor, the port name comes from the Service, so the pod YAML did not need touching. Here kube-proxy uses a PodMonitor; in production, modifying the Deployment/DaemonSet YAML is not recommended; create a Service and a ServiceMonitor instead.


Reference: https://github.com/prometheus-operator/kube-prometheus/issues/1603#issuecomment-1030815589

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy-master
  namespace: kube-system
spec:
    spec:
      containers:
        - command:
            - /usr/local/bin/kube-proxy
            - '--config=/var/lib/kube-proxy/config.conf'
          name: kube-proxy-master
          ports:
          - containerPort: 10249
            hostPort: 10249
            name: metrics
            protocol: TCP

Check the pod labels and the podMonitorSelector defined in the Prometheus CRD before writing the PodMonitor:


- apiVersion: monitoring.coreos.com/v1
  kind: Prometheus
  metadata:
    name: ack-prometheus-operator-prometheus
    namespace: monitoring
  spec:
    podMonitorNamespaceSelector: {}
    podMonitorSelector:
      matchLabels:
        release: ack-prometheus-operator

A working PodMonitor configuration


Note: because kube-proxy-master and kube-proxy-worker carry different labels and matchLabels is an AND, we use matchExpressions with an In operator (OR semantics) to match both labels.

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  labels:
    k8s-app: kube-proxy
    release: ack-prometheus-operator
  name: kube-proxy
  namespace: monitoring
spec:
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - kube-system
  podMetricsEndpoints:
  - honorLabels: true
    relabelings:
    - action: replace
      regex: (.*)
      replacement: $1
      sourceLabels:
      - __meta_kubernetes_pod_node_name
      targetLabel: instance
    port: "metrics"
  selector:
    matchExpressions:
      - key: k8s-app
        operator: In
        values:
          - kube-proxy-master
          - kube-proxy-worker



  3. Verification

Configuration:

- job_name: monitoring/kube-proxy-master/0
  honor_labels: true
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_k8s_app]
    separator: ;
    regex: kube-proxy-master|kube-proxy-worker
    replacement: $1
    action: keep
#~100 lines omitted
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names:
      - kube-system


Targets:

Metrics:


  4. Metric reference:

There is currently no documentation for kube-proxy's metrics, so fetch the HELP text yourself:

[root@ack ~]# curl  10.0.2.136:10249/metrics -s|grep HELP

# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# HELP go_goroutines Number of goroutines that currently exist.
# HELP go_info Information about the Go environment.
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# HELP go_memstats_frees_total Total number of frees.
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# HELP go_memstats_heap_objects Number of allocated objects.
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# HELP go_memstats_lookups_total Total number of pointer lookups.
# HELP go_memstats_mallocs_total Total number of mallocs.
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# HELP go_threads Number of OS threads created.
# HELP kubeproxy_network_programming_duration_seconds [ALPHA] In Cluster Network Programming Latency in seconds
# HELP kubeproxy_sync_proxy_rules_duration_seconds [ALPHA] SyncProxyRules latency in seconds
# HELP kubeproxy_sync_proxy_rules_endpoint_changes_pending [ALPHA] Pending proxy rules Endpoint changes
# HELP kubeproxy_sync_proxy_rules_endpoint_changes_total [ALPHA] Cumulative proxy rules Endpoint changes
# HELP kubeproxy_sync_proxy_rules_iptables_restore_failures_total [ALPHA] Cumulative proxy iptables restore failures
# HELP kubeproxy_sync_proxy_rules_last_queued_timestamp_seconds [ALPHA] The last time a sync of proxy rules was queued
# HELP kubeproxy_sync_proxy_rules_last_timestamp_seconds [ALPHA] The last time proxy rules were successfully synced
# HELP kubeproxy_sync_proxy_rules_service_changes_pending [ALPHA] Pending proxy rules Service changes
# HELP kubeproxy_sync_proxy_rules_service_changes_total [ALPHA] Cumulative proxy rules Service changes
# HELP kubernetes_build_info [ALPHA] A metric with a constant '1' value labeled by major, minor, git version, git commit, git tree state, build date, Go version, and compiler from which Kubernetes was built, and platform on which it is running.
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# HELP process_max_fds Maximum number of open file descriptors.
# HELP process_open_fds Number of open file descriptors.
# HELP process_resident_memory_bytes Resident memory size in bytes.
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# HELP rest_client_exec_plugin_certificate_rotation_age [ALPHA] Histogram of the number of seconds the last auth exec plugin client certificate lived before being rotated. If auth exec plugin client certificates are unused, histogram will contain no data.
# HELP rest_client_exec_plugin_ttl_seconds [ALPHA] Gauge of the shortest TTL (time-to-live) of the client certificate(s) managed by the auth exec plugin. The value is in seconds until certificate expiry (negative if already expired). If auth exec plugins are unused or manage no TLS certificates, the value will be +INF.
# HELP rest_client_rate_limiter_duration_seconds [ALPHA] Client side rate limiter latency in seconds. Broken down by verb and URL.
# HELP rest_client_request_duration_seconds [ALPHA] Request latency in seconds. Broken down by verb and URL.
# HELP rest_client_requests_total [ALPHA] Number of HTTP requests, partitioned by status code, method, and host.


4. CCM / custom workload pod monitoring on an ACK dedicated cluster

The Helm chart has no CCM section, so everything has to be created by hand. Since CCM on ACK runs as a DaemonSet and has no default Service, we skip the ServiceMonitor and use a PodMonitor here as well. This part does not rely on Helm; the same steps apply to your own workload pods.

https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/running-exporters.md


  1. Preparation

Check the CCM pod labels:


Check the listen port:


Although it listens on HTTP, a direct curl does not work. Why? Because port 10258 does not serve a /metrics endpoint.


The DaemonSet has no ports definition either, and in this version (2.1.0) the startup flag disables metrics: --metrics-bind-addr=0


See the CCM documentation:

https://help.aliyun.com/document_detail/94925.html


  2. Modify the CCM DaemonSet:

To stay consistent with the documentation, set metrics to 8080 as well. Port 8080 can easily conflict with another process on the node, but we'll leave it for now; in my test, reusing 10258 for metrics made the container fail to start, and I did not dig further.

While we're at it, add a port name for the PodMonitor configuration.

Note: the KCM/Scheduler pod YAML has no ports definition either, but since those used a ServiceMonitor, the port name comes from the Service and the pod YAML did not need touching. CCM here uses a PodMonitor; in production, modifying the Deployment/DaemonSet YAML is not recommended; create a Service and a ServiceMonitor instead.
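
A sketch of the relevant part of the DaemonSet after the change (the DaemonSet name, the exact flag value, and the port name are assumptions based on the description above; verify against the CCM documentation):

# kubectl edit ds cloud-controller-manager -n kube-system   (name assumed)
spec:
  template:
    spec:
      containers:
      - name: cloud-controller-manager
        args:
        # ...other container args unchanged and omitted here
        # changed from --metrics-bind-addr=0 (metrics disabled) to serve metrics on 8080,
        # following the CCM documentation referenced above
        - --metrics-bind-addr=8080
        ports:
        - containerPort: 8080
          name: metrics          # port name referenced by the PodMonitor below
          protocol: TCP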


Now the metrics can be fetched.
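
A quick check (a sketch; the label app=cloud-controller-manager is the one matched by the PodMonitor below):

CCM_POD_IP=$(kubectl get pod -n kube-system -l app=cloud-controller-manager \
  -o jsonpath='{.items[0].status.podIP}')
curl -s http://${CCM_POD_IP}:8080/metrics | grep '# HELP' | head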


  3. Configure the PodMonitor
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  labels:
    k8s-app: ack-ccm-yibei
    release: ack-prometheus-operator
  name: ack-ccm-yibei
  namespace: monitoring
spec:
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - kube-system
  podMetricsEndpoints:
  - honorLabels: true
    relabelings:
    - action: replace
      regex: (.*)
      replacement: $1
      sourceLabels:
      - __meta_kubernetes_pod_node_name
      targetLabel: instance
    port: "metrics"
  selector:
    matchExpressions:
      - key: app
        operator: In
        values:
          - cloud-controller-manager


  4. Verification

Configuration:

- job_name: monitoring/ack-ccm-yibei/0
  honor_labels: true
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    separator: ;
    regex: cloud-controller-manager
    replacement: $1
    action: keep
#~100 lines omitted
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names:
      - kube-system


The targets are now up. Here, the spec.jobLabel field in the PodMonitor determines the job name.


  5. Metric reference

https://help.aliyun.com/document_detail/445450.html

Note: newer CCM versions already listen on HTTPS; keep an eye on the Alibaba Cloud documentation for updates.


5. net-exporter monitoring configuration


https://help.aliyun.com/document_detail/449682.html

For self-managed Prometheus, the documentation configures the scrape job statically with fixed IPs, which is not really the best option. There are plenty of alternatives, such as a ServiceMonitor or additionalScrapeConfigs.

Given the approaches used above, let's see which is simplest here. net-exporter runs as a DaemonSet and has no Service by default.


  1. PodMonitor: not preferred

A PodMonitor has to match a port name in the pod, but the net-exporter YAML defines no ports field, and we'd rather not modify the YAML.

  2. Hand-write a standard prometheus.yml dynamic-discovery config and package it into additionalScrapeConfigs or additionalScrapeConfigsSecret.

Not friendly: hard to write by hand, and a copied config may not even work. This approach is better suited to the standard static-IP config format, as in the next option.


  3. Static configuration with fixed pod IPs, as the documentation does:

Not preferred. DaemonSet pod IPs change whenever nodes scale in or out, so there is no dynamic discovery; if the config is not updated promptly, Prometheus keeps scraping IPs that no longer exist. Hardly worth it.


  4. ServiceMonitor: simple, readable configuration with dynamic discovery. This is the one.
  • Confirm it listens on HTTP 9102, with no HTTPS authentication required.
  • Create the Service (a sketch follows below).


Because the Service has a pod selector, the Endpoints are generated automatically.
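
A minimal Service sketch for the step above (the pod label app: net-exporter is an assumption taken from the ServiceMonitor selector below; verify it with kubectl get pod -n kube-system --show-labels):

apiVersion: v1
kind: Service
metadata:
  name: net-exporter
  namespace: kube-system
  labels:
    app: net-exporter          # matched by the ServiceMonitor's selector below
spec:
  clusterIP: None              # headless is enough; only endpoint discovery is needed
  selector:
    app: net-exporter          # assumed pod label
  ports:
  - name: metrics              # port name referenced by the ServiceMonitor
    port: 9102
    targetPort: 9102
    protocol: TCP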


  • Then create the ServiceMonitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: net-exporter
    release: ack-prometheus-operator
  name: ack-net-exporter-yibei
  namespace: monitoring
spec:
  endpoints:
  - interval: 30s
    port: metrics
    scheme: http
  jobLabel: app.kubernetes.io/name
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      app: net-exporter


At this point the targets were not picked up; it turned out the namespaceSelector in my ServiceMonitor was filled in incorrectly.


After correcting the mistake:

The operator has translated the ServiceMonitor into a standard dynamic-discovery configuration; no need to write it by hand:

- job_name: monitoring/ack-net-exporter-yibei/0
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    separator: ;
    regex: net-exporter
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: metrics
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Node;(.*)
    target_label: node
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Pod;(.*)
    target_label: pod
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_container_name]
    separator: ;
    regex: (.*)
    target_label: container
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
    separator: ;
    regex: (.+)
    target_label: job
    replacement: ${1}
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: metrics
    action: replace
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - kube-system




Summary:

There are many ways to configure scrape jobs with prometheus-operator; pick whichever fits. The above is only my personal experiment log; defer to the official Alibaba Cloud documentation where it exists.



Further thoughts:


  1. Why are the Services matched by ServiceMonitors headless?



In short, a ServiceMonitor only matches the Service labels in order to find the Endpoint IPs behind the Service; it does not need the Service to perform load balancing, so a ClusterIP is simply unnecessary.


If there are other reasons, feel free to add them.


2. etcd has no pods, yet a ServiceMonitor still works. Why?


It all comes down to giving Prometheus Endpoint IPs to use as targets; manually creating the Service and Endpoints to satisfy the match is enough.

The same approach also works for an in-cluster Prometheus monitoring services outside the cluster.


  3. Why not just add the prometheus.io/scrape: 'true' annotation to the Service for automatic discovery?


Note: the operator does not support simply adding the prometheus.io/scrape: 'true' annotation to a Service, so there is no shortcut. That said, you could add such a label to your Services and have a single ServiceMonitor match it, monitoring everything that carries the label with one job. Not impossible.

https://github.com/prometheus-operator/prometheus-operator/issues/1547
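
A sketch of that label-based catch-all idea (the label key, the port name metrics, and the all-namespace selector are illustrative assumptions; every matched Service would need a port named metrics):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: catch-all-by-label           # hypothetical name
  namespace: monitoring
  labels:
    release: ack-prometheus-operator # must match the Prometheus serviceMonitorSelector
spec:
  namespaceSelector:
    any: true                        # look at Services in all namespaces
  selector:
    matchLabels:
      prometheus.io/scrape: "true"   # added as a label (not an annotation) on each Service
  endpoints:
  - port: metrics                    # every matched Service needs a port with this name
    interval: 30s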
