Integrating ASM with a Self-Managed Prometheus for Mesh Monitoring


1 Deploy a self-managed Prometheus

Deploy Prometheus

Run the following command to create the Prometheus instance:

# ISTIO_SRC is the path to the Istio source code
kubectl apply -f ${ISTIO_SRC}/samples/addons/prometheus.yaml

The expected output is as follows:

serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created
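
As a quick check, you can confirm that the Prometheus workload comes up before continuing (a sketch, assuming the addon deploys into the istio-system namespace as in the Istio 1.6 samples):

# Wait until the Prometheus deployment reports all replicas as ready.
kubectl -n istio-system rollout status deployment/prometheus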

Deploy Grafana

Run the following command to create the Grafana instance:

# ISTIO_SRC is the path to the Istio source code
kubectl apply -f ${ISTIO_SRC}/samples/addons/grafana.yaml

The expected output is as follows:

serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
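
Similarly, you can verify that Grafana is running (again assuming the istio-system namespace):

# Wait until the Grafana deployment reports all replicas as ready.
kubectl -n istio-system rollout status deployment/grafana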

Modify the Prometheus configuration

Create a file named prometheus.yaml and save the following content to it.

apiVersion: v1
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
    scrape_configs:
    # Mixer scraping. Defaults to Prometheus and mixer on the same namespace.
    - job_name: 'istio-mesh'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-telemetry;prometheus

    # Scrape config for envoy stats
    - job_name: 'envoy-stats'
      metrics_path: /stats/prometheus
      kubernetes_sd_configs:
      - role: pod

      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        action: keep
        regex: '.*-envoy-prom'
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:15090
        target_label: __address__
      - action: labeldrop
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name

    - job_name: 'istio-policy'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system


      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-policy;http-policy-monitoring

    - job_name: 'istio-telemetry'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-telemetry;http-monitoring

    - job_name: 'pilot'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istiod;http-monitoring
      - source_labels: [__meta_kubernetes_service_label_app]
        target_label: app
    - job_name: 'galley'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-galley;http-monitoring

    - job_name: 'citadel'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-citadel;http-monitoring

    - job_name: 'sidecar-injector'

      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-sidecar-injector;http-monitoring

    # scrape config for API servers
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - default
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: kubernetes;https

    # scrape config for nodes (kubelet)
    - job_name: 'kubernetes-nodes'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics

    # Scrape config for Kubelet cAdvisor.
    #
    # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
    # (those whose names begin with 'container_') have been removed from the
    # Kubelet metrics endpoint.  This job scrapes the cAdvisor endpoint to
    # retrieve those metrics.
    #
    # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
    # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
    # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
    # the --cadvisor-port=0 Kubelet flag).
    #
    # This job is not necessary and should be removed in Kubernetes 1.6 and
    # earlier versions, or it will cause the metrics to be scraped twice.
    - job_name: 'kubernetes-cadvisor'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    # scrape config for service endpoints.
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:  # If the first two labels are present, the pod should be scraped by the istio-secure job.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status]
        action: drop
        regex: (.+)
      - source_labels: [__meta_kubernetes_pod_annotation_istio_mtls]
        action: drop
        regex: (true)
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name
    - job_name: 'kubernetes-pods-istio-secure'
      scheme: https
      tls_config:
        ca_file: /etc/istio-certs/root-cert.pem
        cert_file: /etc/istio-certs/cert-chain.pem
        key_file: /etc/istio-certs/key.pem
        insecure_skip_verify: true  # prometheus does not support secure naming.
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # sidecar status annotation is added by sidecar injector and
      # istio_workload_mtls_ability can be specifically placed on a pod to indicate its ability to receive mtls traffic.
      - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_istio_mtls]
        action: keep
        regex: (([^;]+);([^;]*))|(([^;]*);(true))
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__]  # Only keep address that is host:port
        action: keep    # otherwise an extra target with ':443' is added for https scheme
        regex: ([^:]+):(\d+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name
kind: ConfigMap
metadata:
  labels:
    app: prometheus
    chart: prometheus-11.0.2
    component: server
    heritage: Helm
    release: prometheus
  name: prometheus
  namespace: istio-system

Update the Prometheus configuration with the following command:

kubectl apply -f prometheus.yaml

The expected output is as follows:

configmap/prometheus configured
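
Prometheus may take a few minutes to pick up the updated ConfigMap, and some setups never reload it automatically. If the new scrape jobs do not appear, restarting the deployment is a safe way to force a reload (a sketch, assuming the deployment created above):

# Restart Prometheus so it re-reads the updated configuration.
kubectl -n istio-system rollout restart deployment/prometheus
kubectl -n istio-system rollout status deployment/prometheus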

2 Create EnvoyFilters

Go to the ASM instance page. On the EnvoyFilter panel under the control plane, click Create and create the following four EnvoyFilters.

metadata-exchange-1.6.yaml

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: metadata-exchange-1.6
  namespace: istio-system
spec:
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: ANY
        listener:
          filterChain:
            filter:
              name: envoy.http_connection_manager
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.metadata_exchange
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
            value:
              config:
                configuration: |
                  {}
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.metadata_exchange
                  runtime: envoy.wasm.runtime.null

stats-filter-1.6.yaml

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: stats-filter-1.6
  namespace: istio-system
spec:
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_OUTBOUND
        listener:
          filterChain:
            filter:
              name: envoy.http_connection_manager
              subFilter:
                name: envoy.router
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.stats
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
            value:
              config:
                configuration: |
                  {
                    "debug": "true",
                    "stat_prefix": "istio"
                  }
                root_id: stats_outbound
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.stats
                  runtime: envoy.wasm.runtime.null
                  vm_id: stats_outbound
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: envoy.http_connection_manager
              subFilter:
                name: envoy.router
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.stats
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
            value:
              config:
                configuration: "{\n  \"debug\": \"true\",\n  \"stat_prefix\": \"istio\",\n  \"metrics\": [\n     {\n       \"name\": \"requests_total\",\n        \"dimensions\": {\n            \"destination_port\": \"string(destination.port)\",\n            \"request_host\": \"request.host\"\n        }     \n    }\n  ]\n}\n"
                root_id: stats_inbound
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.stats
                  runtime: envoy.wasm.runtime.null
                  vm_id: stats_inbound
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: envoy.http_connection_manager
              subFilter:
                name: envoy.router
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.stats
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
            value:
              config:
                configuration: |
                  {
                    "debug": "true",
                    "stat_prefix": "istio",
                    "disable_host_header_fallback": true
                  }
                root_id: stats_outbound
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.stats
                  runtime: envoy.wasm.runtime.null
                  vm_id: stats_outbound

tcp-metadata-exchange-1.6.yaml

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: tcp-metadata-exchange-1.6
  namespace: istio-system
spec:
  configPatches:
    - applyTo: NETWORK_FILTER
      match:
        context: SIDECAR_INBOUND
        listener: {}
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.metadata_exchange
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange
            value:
              protocol: istio-peer-exchange
    - applyTo: CLUSTER
      match:
        cluster: {}
        context: SIDECAR_OUTBOUND
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: MERGE
        value:
          filters:
            - name: istio.metadata_exchange
              typed_config:
                '@type': type.googleapis.com/udpa.type.v1.TypedStruct
                type_url: type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange
                value:
                  protocol: istio-peer-exchange
    - applyTo: CLUSTER
      match:
        cluster: {}
        context: GATEWAY
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: MERGE
        value:
          filters:
            - name: istio.metadata_exchange
              typed_config:
                '@type': type.googleapis.com/udpa.type.v1.TypedStruct
                type_url: type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange
                value:
                  protocol: istio-peer-exchange

tcp-stats-filter-1.6.yaml

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: tcp-stats-filter-1.6
  namespace: istio-system
spec:
  configPatches:
    - applyTo: NETWORK_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: envoy.tcp_proxy
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.stats
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm
            value:
              config:
                configuration: |
                  {
                    "debug": "true",
                    "stat_prefix": "istio"
                  }
                root_id: stats_inbound
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.stats
                  runtime: envoy.wasm.runtime.null
                  vm_id: tcp_stats_inbound
    - applyTo: NETWORK_FILTER
      match:
        context: SIDECAR_OUTBOUND
        listener:
          filterChain:
            filter:
              name: envoy.tcp_proxy
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.stats
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm
            value:
              config:
                configuration: |
                  {
                    "debug": "true",
                    "stat_prefix": "istio"
                  }
                root_id: stats_outbound
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.stats
                  runtime: envoy.wasm.runtime.null
                  vm_id: tcp_stats_outbound
    - applyTo: NETWORK_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: envoy.tcp_proxy
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.stats
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm
            value:
              config:
                configuration: |
                  {
                    "debug": "true",
                    "stat_prefix": "istio"
                  }
                root_id: stats_outbound
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.stats
                  runtime: envoy.wasm.runtime.null

The result is shown in the following figure.

asm-cp.png
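
If you keep the four manifests as local files, you can also apply them with kubectl instead of the console (a sketch, assuming KUBECONFIG points to the ASM instance's kubeconfig rather than the ACK cluster's):

# Apply the four EnvoyFilters against the ASM control plane.
kubectl apply -f metadata-exchange-1.6.yaml -f stats-filter-1.6.yaml -f tcp-metadata-exchange-1.6.yaml -f tcp-stats-filter-1.6.yaml
# Verify that all four EnvoyFilters exist.
kubectl -n istio-system get envoyfilter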

3 Create SLB instances

prometheus

As shown in the following figure, go to the ACK instance page, search for prometheus in the services list, and click Update on the matching entry.
slb-prometheus.png

On the Update Service page, set the type to Server Load Balancer and modify the service port, as shown in the following figure.
slb-prometheus-2.png

grafana

Similarly, update the grafana service. The result is shown in the following figure.

slb-grafana.png
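
If you prefer the command line to the console, switching the service type can be sketched with kubectl patch (assuming the default service names in istio-system; the console flow above is the documented path):

# Expose both services through an SLB by changing their type to LoadBalancer.
kubectl -n istio-system patch svc prometheus -p '{"spec":{"type":"LoadBalancer"}}'
kubectl -n istio-system patch svc grafana -p '{"spec":{"type":"LoadBalancer"}}'
# Read the assigned external addresses from the EXTERNAL-IP column.
kubectl -n istio-system get svc prometheus grafana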

4 Generate monitoring data

Open the productpage of the Bookinfo sample (see Service Mesh (ASM) > Quick Start) and refresh the page several times to generate monitoring data.

bookinfo.png
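
Instead of refreshing in a browser, you can generate traffic from the command line (a sketch, assuming Bookinfo is exposed through the default istio-ingressgateway service):

# Resolve the ingress gateway's external IP.
GATEWAY_IP=$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Send 100 requests to productpage to produce mesh metrics.
for i in $(seq 1 100); do curl -s -o /dev/null "http://${GATEWAY_IP}/productpage"; done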

5 Verify

Open the Prometheus page, enter istio_requests_total, and click Execute. The expected result is shown in the following figure.
prometheus-result.png
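
Because the inbound stats filter above adds the destination_port and request_host dimensions to requests_total, you can also query the metric through the Prometheus HTTP API (a sketch; replace PROMETHEUS_IP with the SLB address assigned in step 3):

# Sum request counts by destination service and the custom request_host dimension.
curl -s "http://PROMETHEUS_IP:9090/api/v1/query" --data-urlencode 'query=sum(istio_requests_total) by (destination_service, request_host)'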
