Integrating a Self-Managed Prometheus with ASM for Mesh Monitoring

1 Deploy a self-managed Prometheus

Deploy Prometheus

Run the following command to create a Prometheus instance:

# ISTIO_SRC: path to the Istio source code
kubectl apply -f ${ISTIO_SRC}/samples/addons/prometheus.yaml

The expected output is as follows:

serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created
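
Before moving on, you can confirm that the deployment is ready. This is a minimal check, assuming the sample manifest's defaults (a deployment named prometheus in the istio-system namespace, labeled app=prometheus):

# Wait until the Prometheus deployment finishes rolling out
kubectl -n istio-system rollout status deployment/prometheus

# Inspect the pod created by the sample manifest
kubectl -n istio-system get pods -l app=prometheus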

Deploy Grafana

Run the following command to create a Grafana instance:

# ISTIO_SRC: path to the Istio source code
kubectl apply -f ${ISTIO_SRC}/samples/addons/grafana.yaml

The expected output is as follows:

serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
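
Before exposing Grafana through an SLB in step 3, you can verify that it serves locally with a port-forward. A quick sketch, assuming the sample chart's default service port of 3000:

# Forward the Grafana service to localhost, then open http://localhost:3000
kubectl -n istio-system port-forward svc/grafana 3000:3000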

Update the Prometheus configuration

Create a file named prometheus.yaml and save the following content to it.

apiVersion: v1
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
    scrape_configs:
    # Mixer scraping. Defaults to Prometheus and Mixer in the same namespace.
    - job_name: 'istio-mesh'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-telemetry;prometheus

    # Scrape config for envoy stats
    - job_name: 'envoy-stats'
      metrics_path: /stats/prometheus
      kubernetes_sd_configs:
      - role: pod

      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        action: keep
        regex: '.*-envoy-prom'
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:15090
        target_label: __address__
      - action: labeldrop
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name

    - job_name: 'istio-policy'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system


      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-policy;http-policy-monitoring

    - job_name: 'istio-telemetry'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-telemetry;http-monitoring

    - job_name: 'pilot'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istiod;http-monitoring
      - source_labels: [__meta_kubernetes_service_label_app]
        target_label: app
    - job_name: 'galley'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-galley;http-monitoring

    - job_name: 'citadel'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-citadel;http-monitoring

    - job_name: 'sidecar-injector'

      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-sidecar-injector;http-monitoring

    # scrape config for API servers
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - default
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: kubernetes;https

    # scrape config for nodes (kubelet)
    - job_name: 'kubernetes-nodes'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics

    # Scrape config for Kubelet cAdvisor.
    #
    # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
    # (those whose names begin with 'container_') have been removed from the
    # Kubelet metrics endpoint.  This job scrapes the cAdvisor endpoint to
    # retrieve those metrics.
    #
    # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
    # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
    # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
    # the --cadvisor-port=0 Kubelet flag).
    #
    # This job is not necessary and should be removed in Kubernetes 1.6 and
    # earlier versions, or it will cause the metrics to be scraped twice.
    - job_name: 'kubernetes-cadvisor'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    # scrape config for service endpoints.
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:  # If the first two labels are present, the pod should be scraped by the istio-secure job.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status]
        action: drop
        regex: (.+)
      - source_labels: [__meta_kubernetes_pod_annotation_istio_mtls]
        action: drop
        regex: (true)
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name
    - job_name: 'kubernetes-pods-istio-secure'
      scheme: https
      tls_config:
        ca_file: /etc/istio-certs/root-cert.pem
        cert_file: /etc/istio-certs/cert-chain.pem
        key_file: /etc/istio-certs/key.pem
        insecure_skip_verify: true  # prometheus does not support secure naming.
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # sidecar status annotation is added by sidecar injector and
      # istio_workload_mtls_ability can be specifically placed on a pod to indicate its ability to receive mtls traffic.
      - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_istio_mtls]
        action: keep
        regex: (([^;]+);([^;]*))|(([^;]*);(true))
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__]  # Only keep address that is host:port
        action: keep    # otherwise an extra target with ':443' is added for https scheme
        regex: ([^:]+):(\d+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name
kind: ConfigMap
metadata:
  labels:
    app: prometheus
    chart: prometheus-11.0.2
    component: server
    heritage: Helm
    release: prometheus
  name: prometheus
  namespace: istio-system

Run the following command to update the Prometheus configuration:

kubectl apply -f prometheus.yaml

The expected output is as follows:

configmap/prometheus configured
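
Prometheus may take a minute or two to pick up the updated ConfigMap. If the new scrape jobs do not appear on the Status > Configuration page, restarting the deployment forces a reload; a sketch, assuming the deployment is named prometheus:

# Force Prometheus to restart with the updated configuration
kubectl -n istio-system rollout restart deployment/prometheus
kubectl -n istio-system rollout status deployment/prometheus

After the restart, the new jobs should be visible under Status > Targets in the Prometheus UI.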

2 EnvoyFilters

Go to the ASM instance page. On the EnvoyFilter panel under Control Plane, click Create to create the following four EnvoyFilters.

metadata-exchange-1.6.yaml

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: metadata-exchange-1.6
  namespace: istio-system
spec:
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: ANY
        listener:
          filterChain:
            filter:
              name: envoy.http_connection_manager
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.metadata_exchange
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
            value:
              config:
                configuration: |
                  {}
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.metadata_exchange
                  runtime: envoy.wasm.runtime.null

stats-filter-1.6.yaml

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: stats-filter-1.6
  namespace: istio-system
spec:
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_OUTBOUND
        listener:
          filterChain:
            filter:
              name: envoy.http_connection_manager
              subFilter:
                name: envoy.router
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.stats
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
            value:
              config:
                configuration: |
                  {
                    "debug": "true",
                    "stat_prefix": "istio"
                  }
                root_id: stats_outbound
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.stats
                  runtime: envoy.wasm.runtime.null
                  vm_id: stats_outbound
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: envoy.http_connection_manager
              subFilter:
                name: envoy.router
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.stats
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
            value:
              config:
                configuration: |
                  {
                    "debug": "true",
                    "stat_prefix": "istio",
                    "metrics": [
                      {
                        "name": "requests_total",
                        "dimensions": {
                          "destination_port": "string(destination.port)",
                          "request_host": "request.host"
                        }
                      }
                    ]
                  }
                root_id: stats_inbound
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.stats
                  runtime: envoy.wasm.runtime.null
                  vm_id: stats_inbound
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: envoy.http_connection_manager
              subFilter:
                name: envoy.router
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.stats
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
            value:
              config:
                configuration: |
                  {
                    "debug": "true",
                    "stat_prefix": "istio",
                    "disable_host_header_fallback": true
                  }
                root_id: stats_outbound
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.stats
                  runtime: envoy.wasm.runtime.null
                  vm_id: stats_outbound

tcp-metadata-exchange-1.6.yaml

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: tcp-metadata-exchange-1.6
  namespace: istio-system
spec:
  configPatches:
    - applyTo: NETWORK_FILTER
      match:
        context: SIDECAR_INBOUND
        listener: {}
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.metadata_exchange
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange
            value:
              protocol: istio-peer-exchange
    - applyTo: CLUSTER
      match:
        cluster: {}
        context: SIDECAR_OUTBOUND
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: MERGE
        value:
          filters:
            - name: istio.metadata_exchange
              typed_config:
                '@type': type.googleapis.com/udpa.type.v1.TypedStruct
                type_url: type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange
                value:
                  protocol: istio-peer-exchange
    - applyTo: CLUSTER
      match:
        cluster: {}
        context: GATEWAY
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: MERGE
        value:
          filters:
            - name: istio.metadata_exchange
              typed_config:
                '@type': type.googleapis.com/udpa.type.v1.TypedStruct
                type_url: type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange
                value:
                  protocol: istio-peer-exchange

tcp-stats-filter-1.6.yaml

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: tcp-stats-filter-1.6
  namespace: istio-system
spec:
  configPatches:
    - applyTo: NETWORK_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: envoy.tcp_proxy
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.stats
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm
            value:
              config:
                configuration: |
                  {
                    "debug": "true",
                    "stat_prefix": "istio"
                  }
                root_id: stats_inbound
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.stats
                  runtime: envoy.wasm.runtime.null
                  vm_id: tcp_stats_inbound
    - applyTo: NETWORK_FILTER
      match:
        context: SIDECAR_OUTBOUND
        listener:
          filterChain:
            filter:
              name: envoy.tcp_proxy
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.stats
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm
            value:
              config:
                configuration: |
                  {
                    "debug": "true",
                    "stat_prefix": "istio"
                  }
                root_id: stats_outbound
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.stats
                  runtime: envoy.wasm.runtime.null
                  vm_id: tcp_stats_outbound
    - applyTo: NETWORK_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: envoy.tcp_proxy
        proxy:
          proxyVersion: ^1\.6.*
      patch:
        operation: INSERT_BEFORE
        value:
          name: istio.stats
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm
            value:
              config:
                configuration: |
                  {
                    "debug": "true",
                    "stat_prefix": "istio"
                  }
                root_id: stats_outbound
                vm_config:
                  code:
                    local:
                      inline_string: envoy.wasm.stats
                  runtime: envoy.wasm.runtime.null

The result is shown in the following figure.

asm-cp.png
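
If you prefer the command line to the console, the same four resources can be applied with kubectl, assuming your kubeconfig points to the ASM instance's control plane and the files are named as in the listing above:

# Apply the four EnvoyFilters to the ASM control plane
kubectl apply -f metadata-exchange-1.6.yaml
kubectl apply -f stats-filter-1.6.yaml
kubectl apply -f tcp-metadata-exchange-1.6.yaml
kubectl apply -f tcp-stats-filter-1.6.yaml

# Verify that all four were created
kubectl -n istio-system get envoyfilters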

3 Create SLB instances

prometheus

As shown in the following figure, go to the ACK instance page, search for prometheus in the services list, and click Update in the result list.
slb-prometheus.png

On the Update Service page, set the type to Server Load Balancer and adjust the service port, as shown in the following figure.
slb-prometheus-2.png
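
The console steps above amount to switching the service type to LoadBalancer, for which ACK provisions an SLB automatically. A kubectl sketch of the same change:

# Expose the prometheus service through an SLB
kubectl -n istio-system patch svc prometheus -p '{"spec": {"type": "LoadBalancer"}}'

# Retrieve the external address once it has been assigned
kubectl -n istio-system get svc prometheus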

grafana

Similarly, update the grafana service. The result is shown in the following figure.

slb-grafana.png
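
The kubectl equivalent, mirroring the prometheus sketch above:

# Expose the grafana service through an SLB
kubectl -n istio-system patch svc grafana -p '{"spec": {"type": "LoadBalancer"}}'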

4 Refresh data

Open the productpage of the Bookinfo sample (see Service Mesh ASM > Quick Start) and refresh it several times to generate monitoring data.

bookinfo.png
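
Refreshing by hand works, but a short loop against the ingress gateway produces steadier data. A sketch, assuming a standard istio-ingressgateway service fronting Bookinfo; GATEWAY_IP is a placeholder, not a value from this guide:

# Look up the external IP of the ingress gateway (assumed service name)
GATEWAY_IP=$(kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Send 100 requests to the Bookinfo product page
for i in $(seq 1 100); do
  curl -s -o /dev/null "http://${GATEWAY_IP}/productpage"
done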

5 Verify

Open the Prometheus page, enter istio_requests_total, and click Execute. The expected result is shown in the following figure.
prometheus-result.png
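
The same metric can also be queried over the HTTP API instead of the UI. Because the inbound stats filter from step 2 added destination_port and request_host dimensions, those should appear as labels on the results. A sketch, where PROMETHEUS_IP stands for the SLB address created in step 3:

# Query the metric; the response is JSON with one entry per label set
curl -s "http://${PROMETHEUS_IP}:9090/api/v1/query?query=istio_requests_total"

# Narrow down to series carrying the custom request_host dimension
curl -s "http://${PROMETHEUS_IP}:9090/api/v1/query" \
  --data-urlencode 'query=istio_requests_total{request_host!=""}'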
