ELK 7.6 + Filebeat Cluster Deployment

Summary: cluster deployment of ELK 7.6 with Filebeat.

I. Deployment Environment and Software Versions

Software          Version      OS               Kernel
Elasticsearch     7.6.2        CentOS 7.5.1804  3.10.0-862.el7
Logstash          7.6.2        CentOS 7.5.1804  3.10.0-862.el7
Kibana            7.6.2        CentOS 7.5.1804  3.10.0-862.el7
Filebeat          7.6.2        CentOS 7.5.1804  3.10.0-862.el7
JDK               11.0.7       CentOS 7.5.1804  3.10.0-862.el7
Kafka/ZooKeeper   2.12-2.3.1   CentOS 7.5.1804  3.10.0-862.el7

II. Install the JDK (on all servers)

tar xf jdk-11.0.7_linux-x64_bin.tar.gz -C /usr/local/
vim /etc/profile.d/java.sh  #set the environment variables; if a JVM environment is already configured on the server, remove it first
export JAVA_HOME=/usr/local/jdk-11.0.7/
export PATH=$PATH:$JAVA_HOME/bin
# Note: dt.jar and tools.jar were removed in JDK 9+, so the legacy CLASSPATH export is no longer needed
source /etc/profile.d/java.sh
java -version   #verify
java version "11.0.7" 2020-04-14 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.7+8-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.7+8-LTS, mixed mode)

III. Install and Configure the ES Cluster (on the ES nodes)

1. Install and configure Elasticsearch

tar xf elasticsearch-7.6.2-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local/
ln -sv elasticsearch-7.6.2/ elasticsearch
cd elasticsearch/config/
grep "^[a-Z]"  /usr/local/elasticsearch/config/elasticsearch.yml #修改ES配置如下
cluster.name: pwb-elk-cluster #集群名称,所有机器相同 
node.name: node-2  #当前服务器的node名称,集群中保持唯一
path.data: /Data/es/data
path.logs: /Data/es/log
bootstrap.memory_lock: true
network.host: 172.16.150.158  #当前主机IP地址
http.port: 9200
discovery.seed_hosts: ["172.16.150.157", "172.16.150.158","172.16.150.159"] #集群主机IP
cluster.initial_master_nodes: ["172.16.150.157", "172.16.150.158","172.16.150.159"] #集群中首次启动时可被选举为master的节点
discovery.zen.minimum_master_nodes: 2  #最少有两个节点存活才可以选举master
gateway.recover_after_nodes: 2 #最少两个节点存活在开始数据存活

The other nodes use the same configuration; only the following per-node settings differ (see the example after this list):

network.host:   #IP address of the local machine

node.name:   #node name assigned to this node
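For example, judging from the node names and IPs that appear in the startup log and curl output later in this article, the other two nodes would look like this (the exact name-to-IP mapping is an assumption):

#node-1 (172.16.150.157)
node.name: node-1
network.host: 172.16.150.157

#node-3 (172.16.150.159)
node.name: node-3
network.host: 172.16.150.159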

2. Create the service user and the data/log directories

mkdir -pv /Data/es/

useradd elastic

chown -R elastic:elastic /Data/es/

chown -R elastic:elastic /usr/local/elasticsearch-7.6.2/

3. Configure system parameters

tail /etc/security/limits.conf  #add or modify the following entries
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
* soft memlock unlimited
* hard memlock unlimited
echo "vm.max_map_count=262144 "  >>    /etc/sysctl.conf
sysctl -p
reboot
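After the reboot, it is worth verifying that the kernel and limit settings actually took effect. These verification commands are an addition to the original steps:

sysctl vm.max_map_count   #should print vm.max_map_count = 262144
su - elastic -c "ulimit -n"   #open-file limit, should be 65536
su - elastic -c "ulimit -l"   #locked-memory limit, should be unlimited (required by bootstrap.memory_lock)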

4. Start the service (start all three nodes together, since a master must be elected)

su - elastic

cd /usr/local/elasticsearch

nohup ./bin/elasticsearch > /tmp/elastic.log &

tail -f /tmp/elastic.log

Make sure the log contains a line like the following:

master node changed {previous [], current [{node-2}{TA9XcpyMS8yH1YIkq7fN-Q}{FPgTcZnNRgSiKnHfrjsd-A}{172.16.150.158}{172.16.150.158:9300}

5. Check service status

netstat -tnlp | grep -E "9200|9300"
curl http://172.16.150.159:9200/  #any node's IP address works
{
  "name" : "node-3",
  "cluster_name" : "pwb-elk-cluster",
  "cluster_uuid" : "mSE1bV1UTh-p1VSPLLQLLQ",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
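To further confirm that all three nodes joined and the cluster is healthy, the standard cluster APIs can be queried as well (a supplementary check, not part of the original steps):

curl http://172.16.150.157:9200/_cluster/health?pretty   #"status" should be "green" and "number_of_nodes" should be 3
curl http://172.16.150.157:9200/_cat/nodes?v   #lists all nodes; the elected master is marked with *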

IV. Install Kibana

1. Install and configure Kibana (on the Kibana server)

tar xf kibana-7.6.2-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local/
ln -sv kibana-7.6.2-linux-x86_64/ kibana
cd kibana/config
grep "^[a-Z]" /usr/local/kibana/config/kibana.yml 
server.port: 5601  #服务器端口,默认5601 必须
server.host: "172.16.150.159"   #主机IP地址  必须
elasticsearch.hosts: ["http://172.16.150.157:9200"]  #ES地址 必须
i18n.locale: "zh-CN"  #7版本支持中文,按需配置

2. Start the service

nohup /usr/local/kibana/bin/kibana --allow-root > /tmp/kibana.log &

tail -f /tmp/kibana.log  #make sure the following message appears

"tags":["listening","info"],"pid":13922,"message":"Server running at http://172.16.150.159:5601"}

3. Access Kibana

Open http://172.16.150.159:5601 in a web browser.
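Kibana also exposes a status endpoint that can be checked from the command line (an extra check, not part of the original steps):

curl http://172.16.150.159:5601/api/status   #the overall state in the returned JSON should be "green"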

V. Install ZooKeeper/Kafka (on the zk/kafka cluster)

1. Install and configure ZooKeeper

tar xf kafka_2.12-2.3.1.tgz  -C /usr/local/
cd /usr/local/
ln -sv kafka_2.12-2.3.1 kafka
cd kafka/config/
grep "^[a-Z]" /usr/local/kafka/config/zookeeper.properties 
dataDir=/Data/zookeeper
clientPort=2181
maxClientCnxns=0
tickTime=2000
initLimit=20
syncLimit=10
server.1=172.16.150.164:2888:3888
server.2=172.16.150.165:2888:3888
server.3=172.16.150.166:2888:3888
mkdir -pv /Data/zookeeper  #create the log/snapshot directory
echo "1" > /Data/zookeeper/myid  #create the myid file

2. Install and configure Kafka

grep "^[a-Z]" /usr/local/kafka/config/server.properties 
broker.id=1
listeners=PLAINTEXT://172.16.150.164:9092 #服务器IP地址和端口
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/Data/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=172.16.150.164:2181,172.16.150.165:2181,172.16.150.166:2181  #ZooKeeper server IPs and ports
zookeeper.connection.timeout.ms=20000
group.initial.rebalance.delay.ms=0

The other nodes use the same configuration, except for the following:

1) ZooKeeper

echo "x" > /Data/zookeeper/myid  #x must be unique per node, matching the server.x entries above

2) Kafka

broker.id=x  #unique per node

listeners=PLAINTEXT://<local IP>:9092  #use each broker's own IP address

3. Start ZooKeeper

nohup /usr/local/kafka/bin/zookeeper-server-start.sh  /usr/local/kafka/config/zookeeper.properties &

netstat -nlpt | grep -E "2181|2888|3888"  #the leader node is the one listening on port 2888
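Each node's role can also be queried directly. This assumes ZooKeeper's four-letter-word commands are enabled, which is the default in the 3.4.x version bundled with Kafka 2.3.1:

echo stat | nc 172.16.150.164 2181 | grep Mode   #prints "Mode: leader" or "Mode: follower"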

4. Start Kafka

vim /etc/hosts  #edit the hosts file and add a 127.0.0.1 entry for the current hostname

/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &
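Once started, each broker registers itself in ZooKeeper, which can be verified with the bundled shell (a supplementary check added here):

netstat -tnlp | grep 9092   #the broker port should be listening
/usr/local/kafka/bin/zookeeper-shell.sh 172.16.150.164:2181 ls /brokers/ids   #should print something like [1, 2, 3]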

5. Test

/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 172.16.150.164:2181 --replication-factor 2 --partitions 1 --topic summer  #create a test topic
/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 172.16.150.164:2181  #list the created topics
/usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper 172.16.150.164:2181 --topic summer  #show topic details
/usr/local/kafka/bin/kafka-console-producer.sh --broker-list 172.16.150.164:9092 --topic summer  #simulate a producer sending messages to the summer topic
#in another terminal
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 172.16.150.165:9092 --topic summer --from-beginning  #check that messages on the summer topic can be read

VI. Install and Configure Filebeat (on the log clients)

1. Install and configure Filebeat

tar xf filebeat-7.6.2-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local/
ln -sv filebeat-7.6.2-linux-x86_64/ filebeat  #symlink so that the /usr/local/filebeat paths used below work
cd filebeat
vim filebeat.yml  #the relevant parts of the config:
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log   #log input type

  # Change to true to enable this input configuration.
  enabled: true
  json.keys_under_root: true  #place parsed JSON fields at the root of the event
  json.overwrite_keys: true   #on key conflicts, overwrite the original values
  fields_under_root: true     #place custom fields at the root of the event

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/logs/nginx/access.log  #log file path
#  document_type: dev-nginx-access
  fields:
    type: log
    log_topic: dev-nginx-access  #topic name for this log

name: dev-nginx-150-153

output.kafka:
  # Boolean flag to enable or disable the output module.
  enabled: true

  # The list of Kafka broker addresses from which to fetch the cluster metadata.
  # The cluster metadata contain the actual Kafka brokers events are published
  # to.
  hosts: ["172.16.150.164:9092","172.16.150.165:9092","172.16.150.166:9092"]  #kafka cluster addresses

  # The Kafka topic used for produced events. The setting can be a format string
  # using any event field. To set the topic from document type use `%{[type]}`.
  topic: '%{[log_topic]}'  #the value defined in fields.log_topic
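Filebeat can validate the configuration and the Kafka output before going live; the `test` subcommands below exist in Filebeat 7.x, and this check is an addition to the original steps:

cd /usr/local/filebeat
./filebeat test config -c filebeat.yml   #syntax check, should print "Config OK"
./filebeat test output -c filebeat.yml   #checks connectivity to each Kafka broker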

2. Install Nginx on the client and switch its log format to JSON

Nginx installation steps are omitted.

vim nginx.conf  #add the following to the Nginx config file
    log_format json '{"@timestamp":"$time_iso8601",'
        '"@version":"1",'
        '"client_ip":"$remote_addr",'
        '"status":"$status",'
        '"host":"$server_addr",'
        '"url":"$request_uri",'
        '"domain":"$host",'
        '"size":"$body_bytes_sent",'
        '"responsetime":"$request_time",'
        '"referer":"$scheme://$server_addr$request_uri",'
        '"user_agent":"$http_user_agent"'
    '}';
    access_log  /opt/logs/nginx/access.log json;

/usr/local/nginx/sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf  #test the config
/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf  #start Nginx
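To confirm the new format produces valid JSON, generate a request and validate the last log line (a quick check added here; it assumes Python is available on the host, as it is by default on CentOS 7):

curl -s http://127.0.0.1/ > /dev/null
tail -n 1 /opt/logs/nginx/access.log | python -m json.tool   #fails with an error if the line is not valid JSON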

3. Start Filebeat

Start with nohup:

nohup /usr/local/filebeat/filebeat -e -c /usr/local/filebeat/filebeat.yml > /tmp/filebeat.log &

Or manage it with systemd:

vim /etc/systemd/system/filebeat.service
[Unit]
Description=filebeat server daemon
Documentation=https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Wants=network-online.target
After=network-online.target
[Service]
User=root
Group=root
ExecStart=/usr/local/filebeat/filebeat -c /usr/local/filebeat/filebeat.yml --path.logs /usr/local/filebeat/logs
Restart=always
[Install]
WantedBy=multi-user.target
systemctl daemon-reload  #reload unit files after creating the service
systemctl restart filebeat.service

Check whether the topic was created on Kafka:

/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 172.16.150.164:2181
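If the topic is listed, messages can be sampled from it to confirm that Filebeat is actually shipping events (an extra verification step):

/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 172.16.150.164:9092 --topic dev-nginx-access --from-beginning --max-messages 5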

VII. Install Logstash (on the Logstash cluster)

tar xf logstash-7.6.2.tar.gz -C /usr/local/
cd /usr/local/logstash-7.6.2/config/
vim messages.conf
input {
    kafka {
        bootstrap_servers => "172.16.150.164:9092,172.16.150.165:9092,172.16.150.166:9092"  #kafka cluster addresses
        topics => ["dev-nginx-access"]  #topic(s) to consume
        codec => "json"  #decode format
        consumer_threads => 5  #number of consumer threads
        decorate_events => true  #attach topic, offset, group, and partition metadata to each event
    }
}
output {
    elasticsearch {
        hosts => ["172.16.150.157:9200","172.16.150.158:9200"]  #ES cluster addresses
        index => "dev-nginx-access-%{+YYYY-MM-dd}"  #daily index rollover is recommended
  }
}
../bin/logstash -f messages.conf -t  #test the config syntax first (-t = --config.test_and_exit)
nohup /usr/local/logstash-7.6.2/bin/logstash -f messages.conf > /tmp/logstash.log &

Verification:

Open the Kibana UI and check whether the dev-nginx-access-* index exists.
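The index can also be confirmed from the command line against any ES node (an extra check):

curl "http://172.16.150.157:9200/_cat/indices/dev-nginx-access-*?v"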
