I. Environment
Server layout:
192.168.2.1: elasticsearch
192.168.2.2: filebeat + nginx
192.168.2.3: zookeeper + kafka
192.168.2.4: logstash
192.168.2.5: kibana
Data flow: nginx access log → filebeat → kafka → logstash → elasticsearch → kibana
II. Installing the services
Download elasticsearch + filebeat + logstash + kibana (6.6.0) from the Tsinghua mirror: https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/6.6.0/
Download zookeeper from the official site: https://zookeeper.apache.org/releases.html
Download kafka from the official site: https://kafka.apache.org/downloads
1. Configuring elasticsearch
1) Verify the Java environment (skip if already installed)
java -version                                  # check for an existing Java runtime
yum -y install java-1.8.0-openjdk.x86_64       # install JDK 1.8 if missing
2) Install elasticsearch
rpm -ivh /mnt/elk-6.6/elasticsearch-6.6.0.rpm
3) Edit the configuration file
vi /etc/elasticsearch/elasticsearch.yml
Modify the following:
node.name: node-1                        # this node's name in the cluster
network.host: 192.168.2.1,127.0.0.1      # addresses to listen on
http.port: 9200
4) Start elasticsearch
systemctl start elasticsearch
5) Verify that it is running
[root@localhost ~]# netstat -anpt | grep java
tcp6   0   0   192.168.2.1:9200   :::*                LISTEN        12564/java
tcp6   0   0   127.0.0.1:9200     :::*                LISTEN        12564/java
tcp6   0   0   192.168.2.1:9300   :::*                LISTEN        12564/java
tcp6   0   0   127.0.0.1:9300     :::*                LISTEN        12564/java
tcp6   0   0   192.168.2.1:9200   192.168.2.4:34428   ESTABLISHED   12564/java
tcp6   0   0   192.168.2.1:9200   192.168.2.4:34436   ESTABLISHED   12564/java
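Beyond netstat, the cluster health API gives a quick functional check. A minimal sketch, assuming the service answers on 192.168.2.1:9200; the `response` value below is a hypothetical sample of the JSON that `curl` would return:

```shell
# On the es host you would capture the real response with:
#   response=$(curl -s http://192.168.2.1:9200/_cluster/health)
# Hypothetical sample of the health API's JSON:
response='{"cluster_name":"elasticsearch","status":"green","number_of_nodes":1}'
# "green" (or "yellow" on a single node without replicas) means the cluster is usable
status=$(echo "$response" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "cluster status: $status"
```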
2. Configuring filebeat + nginx
1) Install nginx
yum -y install nginx
2) Install filebeat
rpm -ivh /mnt/elk-6.6/filebeat-6.6.0-x86_64.rpm
3) Edit the filebeat configuration file
[root@localhost ~]# vi /etc/filebeat/filebeat.yml
Add the following:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
output.kafka:
  enabled: true
  hosts: ["192.168.2.3:9092"]    # kafka address and port
  topic: test1                   # kafka topic to write to
4) Start the services
systemctl start nginx
systemctl start filebeat
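Each line that nginx appends to /var/log/nginx/access.log becomes one filebeat event. As a sketch of what those lines look like, here is a made-up example in nginx's default combined format, with the two fields you will most often filter on:

```shell
# Hypothetical access-log line in nginx's default combined format
line='192.168.2.10 - - [12/Mar/2019:10:00:00 +0800] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0"'
client_ip=$(echo "$line" | awk '{print $1}')     # field 1: client address
status_code=$(echo "$line" | awk '{print $9}')   # field 9: HTTP status
echo "$client_ip -> $status_code"
```

Issuing a request such as `curl http://192.168.2.2/` from any host should append a similar line, which filebeat then forwards to the test1 topic.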
3. Configuring the kafka environment
1) Install the Java environment (skip if already installed)
yum -y install java-1.8.0-openjdk.x86_64
2) Install zookeeper
tar xf /mnt/zookeeper-3.4.9.tar.gz -C /usr/local/
mv /usr/local/zookeeper-3.4.9/ /usr/local/zookeeper
cd /usr/local/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
1) Edit the zookeeper configuration file
vi zoo.cfg
Add the following:
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/logs
server.1=192.168.2.3:3188:3288
Save and exit, then create the directories and the node id file:
cd /usr/local/zookeeper
mkdir data logs
echo 1 > data/myid
2) Start zookeeper
/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/zookeeper/bin/zkServer.sh status
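A common zookeeper startup failure is a myid value that does not match any server.N line in zoo.cfg. A minimal self-contained check of that invariant, using temporary stand-in files for /usr/local/zookeeper/conf/zoo.cfg and data/myid:

```shell
# Stand-in files mirroring the config written above
zoo_cfg=$(mktemp)
myid_file=$(mktemp)
echo 'server.1=192.168.2.3:3188:3288' > "$zoo_cfg"
echo 1 > "$myid_file"
# The number in myid must appear as server.<id>= in zoo.cfg
id=$(cat "$myid_file")
if grep -q "^server\.$id=" "$zoo_cfg"; then
  echo "myid $id matches zoo.cfg"
else
  echo "myid $id has no server.$id entry" >&2
fi
```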
3) Install kafka
tar xf /mnt/kafka_2.11-2.2.1.tgz -C /usr/local/
mv /usr/local/kafka_2.11-2.2.1/ /usr/local/kafka
4) Configure kafka
cd /usr/local/kafka/config/
cp server.properties server.properties.bak
vi server.properties
Modify the following:
broker.id=1
listeners=PLAINTEXT://192.168.2.3:9092
zookeeper.connect=192.168.2.3:2181
5) Start kafka
cd /usr/local/kafka/
./bin/kafka-server-start.sh ./config/server.properties &
6) Create a kafka topic
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test1    # create a topic named test1
./bin/kafka-topics.sh --list --zookeeper localhost:2181    # list the existing topics
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test1 --from-beginning    # print the messages in test1
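The consumer output is not raw nginx lines: filebeat wraps each log line in a JSON envelope. The event below is a hypothetical example of that shape (exact field names vary by filebeat version), with the original nginx line carried in the `message` field:

```shell
# Hypothetical filebeat event as read back from the test1 topic
sample='{"@timestamp":"2019-03-12T02:00:00.000Z","message":"192.168.2.10 - - \"GET / HTTP/1.1\" 200 612","source":"/var/log/nginx/access.log"}'
# Pull out the source-file field to confirm where the line came from
src=$(echo "$sample" | grep -o '"source":"[^"]*"')
echo "$src"
```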
4. Configuring logstash
1) Install the Java environment (skip if already installed)
yum -y install java-1.8.0-openjdk.x86_64
2) Install logstash
rpm -ivh /mnt/elk-6.6/logstash-6.6.0.rpm
3) Write the configuration file
vi /etc/logstash/conf.d/kafka.conf
input {
  kafka {
    bootstrap_servers => ["192.168.2.3:9092"]
    group_id => "es-test"
    topics => ["test1"]    # must match the topic filebeat writes to
    codec => json
  }
}
output {
  elasticsearch {
    hosts => "http://192.168.2.1:9200"
    index => "kafka-%{+YYYY.MM.dd}"
  }
}
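The `%{+YYYY.MM.dd}` in the index name is expanded from each event's @timestamp, so logstash creates one index per day. A quick sketch of the resulting names using GNU date (the two dates are arbitrary examples):

```shell
# Index names produced for events stamped on two example days
idx1=$(date -u -d '2019-03-12' '+kafka-%Y.%m.%d')
idx2=$(date -u -d '2019-03-13' '+kafka-%Y.%m.%d')
echo "$idx1 $idx2"
```

This daily rollover is why the kibana index pattern is typically `kafka-*` rather than a single index name.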
4) Start the service
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka.conf
5. Configuring kibana
1) Install kibana
rpm -ihv /mnt/elk-6.6/kibana-6.6.0-x86_64.rpm
2) Configure kibana
vim /etc/kibana/kibana.yml
Modify:
server.port: 5601
server.host: "192.168.2.5"
server.name: "db01"
elasticsearch.hosts: ["http://192.168.2.1:9200"]    # the es server address kibana reads log data from
3) Start the kibana service
systemctl start kibana
III. Collecting logs
1. Viewing the logs in kibana
1) Add the log index (an index pattern of kafka-* matches the daily indices created above)
2) Select the log format
3) View the log entries