1. Deploying the ECK platform
Official docs: https://www.elastic.co/guide/en/cloud-on-k8s/2.16/k8s-install-helm.html
helm repo add elastic https://helm.elastic.co
helm repo update
# Install an ECK-managed Elasticsearch and Kibana using the default values, which deploys the quickstart examples.
# Here I use helm pull to download the chart first, then run helm install from the unpacked directory.
helm pull elastic/eck-stack --untar
cd eck-stack
helm install es-kb-quickstart . -n elastic-stack --create-namespace
[root@lavm-g8l2vxjypl dockerfile]# kubectl get po -n elastic-system
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 0 25h
That completes the installation.
We deployed with Helm here rather than applying the operator manifests directly.
To use the operator manifests instead, see: https://www.elastic.co/guide/en/cloud-on-k8s/2.16/k8s-deploy-eck.html
Try not to change the default deployment namespaces.
2. Deploying Elasticsearch
First follow the default quickstart at https://www.elastic.co/guide/en/cloud-on-k8s/2.16/k8s-deploy-elasticsearch.html to deploy an ES cluster; here count is the number of nodes in the cluster. Don't apply the example straight from the docs page; save it as a YAML file first and then run kubectl apply -f on it.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.19.4
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
count is the number of nodes in the node set, and version is the ES version; the Elasticsearch, Kibana, and Filebeat versions should all match. To use your own image, set the image field (see kubectl explain Elasticsearch.spec). Also, Elasticsearch, Kibana, and Filebeat need to be deployed in the same namespace, or the default Kibana setup will not find ES. If memory is tight, limit the JVM heap through the ES_JAVA_OPTS environment variable.
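As a sketch, overriding the default image via spec.image might look like the following (the registry path is a placeholder, not a real one):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.19.4
  # spec.image replaces the default docker.elastic.co image; the tag
  # should still match spec.version (this registry path is hypothetical)
  image: registry.example.com/elasticsearch/elasticsearch:8.19.4
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
```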
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 8.11.1
  # auth:
  #   fileRealm:
  #   - secretName: elasticsearch-bootstrap-password
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
      # es_java_opts: "-Xms1g -Xmx1g"
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms1g -Xmx1g"
          resources:
            requests: {}
            limits: {}
            # memory: 2Gi
            # cpu: 2
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: managed-nfs-storage
After deployment, ES has a default user, elastic; its password is stored in a secret, by default elasticsearch-es-elastic-user (the name follows the pattern <es-name>-es-elastic-user).
View it with kubectl get secret elasticsearch-es-elastic-user -n log -o yaml. You can change the password with kubectl edit secret elasticsearch-es-elastic-user -n log; after editing, delete the pods so they restart with the new value.
curl -u elastic:password -k https://10.97.29.6:9200/_cluster/health?pretty checks whether the ES cluster is healthy; if the response contains no error, all is well.
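Putting the two steps together, a sketch of the check (the namespace log and the service IP come from the notes above; the decoded value shown is only an illustration, not a real password):

```shell
# Read the elastic user's password from the operator-managed secret,
# then query cluster health (requires a running cluster):
#   PASSWORD=$(kubectl get secret elasticsearch-es-elastic-user -n log \
#     -o go-template='{{.data.elastic | base64decode}}')
#   curl -u "elastic:$PASSWORD" -k "https://10.97.29.6:9200/_cluster/health?pretty"
# Secret values are base64-encoded; decoding by hand works the same way:
echo -n 'Y2hhbmdlbWU=' | base64 -d    # prints "changeme" (sample value)
```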
3. Deploying Kibana
First see https://www.elastic.co/guide/en/cloud-on-k8s/2.16/k8s-deploy-kibana.html for how to quickly deploy Kibana, and kubectl explain Kibana for the available fields. The name under elasticsearchRef must match the name of the Elasticsearch resource deployed in section 2, and the two must be in the same namespace, otherwise the ECK operator cannot find the ES resource and Kibana cannot connect.
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 8.19.4
  count: 1
  elasticsearchRef:
    name: quickstart
https://www.elastic.co/guide/en/cloud-on-k8s/2.16/k8s-kibana-es.html covers more Kibana customization, including connecting to an existing Elasticsearch cluster that was not deployed with ECK. One more setting to note: disable HTTPS, or the default Filebeat log collection will run into problems. The official example says setting http.tls.selfSignedCertificate.disabled=true is enough, but in my testing server.ssl.enabled: false was also required, otherwise Kibana still served HTTPS.
- http.tls.selfSignedCertificate.disabled=true: disables the self-signed certificate that ECK generates for Kibana
- server.ssl.enabled: false: disables SSL/TLS on the Kibana server entirely
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 8.11.1
  count: 1
  elasticsearchRef:
    name: elasticsearch
  config:
    server.host: "0.0.0.0"
    server.ssl.enabled: false  # explicitly disable SSL
    server.shutdownTimeout: "5s"
  http:
    tls:
      selfSignedCertificate:
        disabled: true
    service:
      spec:
        type: NodePort
        ports:
        - name: http
          port: 5601
          targetPort: 5601
          nodePort: 30601  # optional: pin a specific NodePort (range 30000-32767)
  podTemplate:
    spec:
      containers:
      - name: kibana
        resources:
          requests: {}
          limits: {}
4. Deploying Filebeat
First see https://www.elastic.co/guide/en/cloud-on-k8s/2.16/k8s-beat-quickstart.html for how to quickly deploy a Beat, and kubectl explain Beat for the available fields. Again, the name under elasticsearchRef must match the Elasticsearch resource deployed in section 2, and both must be in the same namespace, or the ECK operator cannot find the ES resource. The Beat may fail to start at first; just wait a while. type selects the collector: the resource is called Beat because several Beat components exist for shipping data, and here we use Filebeat.
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: quickstart
spec:
  type: filebeat
  version: 8.19.4
  elasticsearchRef:
    name: quickstart
  config:
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
  daemonSet:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
        - name: filebeat
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
Filebeat here collects logs in three main ways:
- Use autodiscover to pick up pod logs automatically; this collects all pod logs and enriches them with pod name, labels, namespace, and so on.
- For non-cloud-native apps, mount a hostPath volume at a fixed directory and have Filebeat collect logs from that directory.
- Inject a sidecar: share the log directory between the app and a Filebeat container via an emptyDir, and let Filebeat collect the logs directly.
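A hedged sketch of the sidecar pattern (all names and images here are illustrative, not from the original notes): the app writes into an emptyDir and a Filebeat container tails the same directory.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                  # hypothetical application
spec:
  replicas: 1
  selector:
    matchLabels: { app: demo-app }
  template:
    metadata:
      labels: { app: demo-app }
    spec:
      containers:
      - name: app
        image: registry.example.com/demo-app:latest   # placeholder image
        volumeMounts:
        - name: app-logs
          mountPath: /app/logs    # the app writes its logs here
      - name: filebeat            # sidecar reading the same directory
        image: docker.elastic.co/beats/filebeat:8.11.1
        args: ["-c", "/etc/filebeat.yml", "-e"]
        volumeMounts:
        - name: app-logs
          mountPath: /app/logs
          readOnly: true
        # in a real setup filebeat.yml (inputs + ES output) would be
        # mounted from a ConfigMap; omitted here for brevity
      volumes:
      - name: app-logs
        emptyDir: {}              # shared between the two containers
```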
For the autodiscover approach, see:
https://www.elastic.co/guide/en/cloud-on-k8s/2.16/k8s-beat-configuration-examples.html#k8s_filebeat_with_autodiscover
kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/filebeat_autodiscover.yaml
Collecting logs this way requires a ServiceAccount with permissions to read pod and node information, and that SA must be in the same namespace as the Beat.
kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/filebeat_autodiscover_by_metadata.yaml
The difference from the recipe above is that this one collects logs only from pods in specific namespaces or with specific labels, which is better suited for production; collecting everything at once makes ES store a lot of data.
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
  namespace: log
spec:
  type: filebeat
  version: 8.11.1
  elasticsearchRef:
    name: elasticsearch
  kibanaRef:
    name: kibana
  config:
    filebeat:
      # system log input
      inputs:
      - type: log
        paths:
        - /var/log/messages
        fields:
          log_type: "system"
        fields_under_root: true
      # application log input
      - type: log
        paths:
        - /app/logs/*.log
        - /app/logs/*/*.log
        fields:
          log_type: "application"
        fields_under_root: true
      # autodiscover for Kubernetes container logs
      autodiscover:
        providers:
        - type: kubernetes
          node: ${NODE_NAME}
          # optionally filter namespaces by label
          #namespace_labels:
          #  log: "true"
          hints:
            default_config:
              enabled: "false"
          templates:
          - condition.equals.kubernetes.namespace: default
            config:
            - paths: ["/var/log/containers/*${data.kubernetes.container.id}.log"]
              type: container
              fields:
                log_type: "pod"
              fields_under_root: true
          - condition.equals.kubernetes.labels.log: "true"
            config:
            - paths: ["/var/log/containers/*${data.kubernetes.container.id}.log"]
              type: container
              fields:
                log_type: "pod"
              fields_under_root: true
    # output config and index routing
    output:
      elasticsearch:
        #hosts: ["elasticsearch-es-http.log.svc.cluster.local:9200"]
        indices:
        - index: "system-%{+yyyy.MM.dd}"
          when:
            equals:
              # fields_under_root: true puts log_type at the event root,
              # so match on log_type, not fields.log_type
              log_type: "system"
        - index: "application-%{+yyyy.MM.dd}"
          when:
            equals:
              log_type: "application"
        # default index for container logs
        - index: "filebeat-%{+yyyy.MM.dd}"
    processors:
    - add_cloud_metadata: {}
    - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        containers:
        - name: filebeat
          securityContext:
            runAsUser: 0
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
          - name: varlog
            mountPath: /var/log
          - name: applogs
            mountPath: /app/logs
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        - name: applogs
          hostPath:
            path: /app/logs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups: ["batch"]
  resources:
  - jobs
  verbs:
  - get
  - list
  - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: log
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: log
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
Adding a sidecar to every workload by hand is tedious; consider something like OpenKruise's SidecarSet to inject the sidecar globally.