Cloud Native | Kubernetes | Building a Stable and Efficient EFK Logging Stack (Part 2)

Summary: Cloud Native | Kubernetes | Building a Stable and Efficient EFK Logging Stack

es-sts-deploy.yaml — the Elasticsearch cluster StatefulSet manifest:


cat << EOF > es-sts-deploy.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.8.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: k8s-logs
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.seed_hosts
            value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
          - name: cluster.initial_master_nodes
            value: "es-cluster-0,es-cluster-1,es-cluster-2"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 10Gi
EOF
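
With the manifest written out, apply it. This assumes the kube-logging namespace and the headless elasticsearch Service from the previous part already exist:

kubectl apply -f es-sts-deploy.yaml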

OK, after a few minutes the Elasticsearch cluster should be up. Check whether the pods and the service are healthy:

[root@k8s-master ~]# k get po,svc -n kube-logging
NAME               READY   STATUS    RESTARTS   AGE
pod/es-cluster-0   1/1     Running   0          6m44s
pod/es-cluster-1   1/1     Running   0          6m37s
pod/es-cluster-2   1/1     Running   0          6m30s
NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
service/elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   3m51s
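
To confirm that the three pods really formed a single cluster, you can query the health API from inside one of them (the official 7.8.0 image ships with curl, so this should work as-is):

kubectl exec -n kube-logging es-cluster-0 -- curl -s http://localhost:9200/_cluster/health?pretty

The output should report "number_of_nodes" : 3 and a "green" status.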

4. Deploying Kibana


There is not much to explain here, just do it. One thing to note: the image version must match the Elasticsearch version above, i.e. 7.8.0.

There are two manifest files in total. The first is the Service, which exposes a node port; if you have an Ingress controller installed, this Service can be a headless Service instead of type NodePort.

The second file deploys the pod itself. In it, value: http://elasticsearch:9200 points at port 9200 of the headless Service. Suppose — and this is only a supposition — the headless Service were named myes; then the value here would be http://myes:9200. In short, this environment variable is what ties Kibana to the Elasticsearch cluster. (Note: Kibana 6.x read ELASTICSEARCH_URL, but the 7.x Docker image expects ELASTICSEARCH_HOSTS, so that is what the manifest below uses.)
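
If you want to verify how the Service name resolves before deploying Kibana, a throwaway pod is a quick sketch (busybox:1.28 is used here because nslookup in newer busybox builds is unreliable):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -n kube-logging -- nslookup elasticsearch

For a headless Service this returns the individual pod IPs rather than a single cluster IP.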

kibana-svc.yaml


cat << EOF > kibana-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  type: NodePort
  ports:
  - port: 5601
  selector:
    app: kibana
EOF

kibana-deploy.yaml


cat << EOF > kibana-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.8.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_HOSTS
            value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
EOF
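
Apply both files and follow the Deployment's logs (deploy/kibana works because there is a single Deployment named kibana):

kubectl apply -f kibana-svc.yaml -f kibana-deploy.yaml
kubectl logs -f deploy/kibana -n kube-logging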

Wait roughly five minutes and watch Kibana's logs until the line http server running at http://0:5601 appears, which means the Kibana deployment is complete.

","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-10-15T11:45:36Z","tags":["info","plugins","taskManager","taskManager"],"pid":6,"message":"TaskManager is identified by the Kibana UUID: dd9bcb6f-4353-4861-81e5-fe3ac42bb157"}
{"type":"log","@timestamp":"2022-10-15T11:45:36Z","tags":["status","plugin:task_manager@7.8.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-10-15T11:45:36Z","tags":["status","plugin:encryptedSavedObjects@7.8.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-10-15T11:45:36Z","tags":["status","plugin:apm_oss@7.8.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-10-15T11:45:36Z","tags":["status","plugin:console_legacy@7.8.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-10-15T11:45:36Z","tags":["status","plugin:region_map@7.8.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-10-15T11:45:36Z","tags":["status","plugin:ui_metric@7.8.0","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-10-15T11:45:36Z","tags":["listening","info"],"pid":6,"message":"Server running at http://0:5601"}
{"type":"log","@timestamp":"2022-10-15T11:45:38Z","tags":["info","http","server","Kibana"],"pid":6,"message":"http server running at http://0:5601"}

Check that the Kibana pod and service are healthy:

[root@k8s-master ~]# k get po,svc -n kube-logging
NAME                          READY   STATUS    RESTARTS   AGE
pod/es-cluster-0              1/1     Running   0          12m
pod/es-cluster-1              1/1     Running   0          12m
pod/es-cluster-2              1/1     Running   0          12m
pod/kibana-588d597485-wljbr   1/1     Running   0          49s
NAME                    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
service/elasticsearch   ClusterIP   None          <none>        9200/TCP,9300/TCP   9m21s
service/kibana          NodePort    10.0.132.94   <none>        5601:32042/TCP      49s

Open a browser and visit any node's IP on port 32042 to reach the Kibana UI.
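
The node port (32042 here) is assigned randomly, so yours will likely differ; you can read it straight off the Service with a jsonpath query:

kubectl get svc kibana -n kube-logging -o jsonpath='{.spec.ports[0].nodePort}'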


5. Deploying the Fluentd Collector


The ServiceAccount manifest:

cat << EOF > fluentd-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
EOF

Fluentd's RBAC:

cat << EOF > fluentd-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
EOF

The Fluentd DaemonSet manifest:

Configuration notes:

The host's /var/log and /var/lib/docker/containers directories are mounted into the Fluentd container so it can read the logs containers write to stdout and stderr, as well as the Kubernetes component logs.

Adjust the resource limits to your environment so that Fluentd does not consume too many resources.

An environment variable sets the address of the Elasticsearch service, using its fully qualified in-cluster name: elasticsearch.kube-logging.svc.cluster.local.

cat << EOF > fluentd-deploy.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
        imagePullPolicy: IfNotPresent
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch.kube-logging.svc.cluster.local"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENTD_SYSTEMD_CONF
            value: disable
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
EOF
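
Apply the three Fluentd manifests, then confirm that one Fluentd pod is running on every node (the DaemonSet's DESIRED and READY counts should match your node count):

kubectl apply -f fluentd-sa.yaml -f fluentd-rbac.yaml -f fluentd-deploy.yaml
kubectl get ds,po -n kube-logging -l app=fluentd -o wide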

So which variables does Fluentd recognize? Exec into a pod and inspect the Fluentd configuration file: it references variables such as FLUENT_ELASTICSEARCH_HOST and FLUENT_ELASTICSEARCH_PORT, along with settings such as the log level (log_level, info here), and so on.

root@fluentd-d58br:/fluentd/etc# cat fluent.conf 
# AUTOMATICALLY GENERATED
# DO NOT EDIT THIS FILE DIRECTLY, USE /templates/conf/fluent.conf.erb
@include "#{ENV['FLUENTD_SYSTEMD_CONF'] || 'systemd'}.conf"
@include "#{ENV['FLUENTD_PROMETHEUS_CONF'] || 'prometheus'}.conf"
@include kubernetes.conf
@include conf.d/*.conf
<match **>
   @type elasticsearch
   @id out_es
   @log_level info
   include_tag_key true
   host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
   port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
   path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
   scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
   ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
   ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1'}"
   reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
   reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
   reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
   log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
   logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
   logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
   index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'logstash'}"
   type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
   <buffer>
     flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
     flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
     chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
     queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
     retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
     retry_forever true
   </buffer>
</match>
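
Every default in this template can be overridden from the DaemonSet. For example, to have indices named k8s-YYYY.MM.DD instead of logstash-YYYY.MM.DD, you could add one more entry to the env list (a sketch; the variable name comes straight from the template above):

          - name: FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX
            value: "k8s"

Once Fluentd starts shipping logs, create an index pattern such as logstash-* (or k8s-* after the override) in Kibana to browse them.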