EFK Log Collection
Elasticsearch: the data store; indexes and stores the data (Java)
logstash: log collection and filtering (Java)
kibana: analysis, filtering, and visualization (Node.js)
filebeat: collects logs and ships them to ES or logstash (Go)
filebeat official documentation:
https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Environment:
es host: 192.168.8.10 (RAM: 4G)
    elasticsearch
    kibana
    (redis and logstash are added on this host later)
nginx host: 192.168.8.20
    nginx
    filebeat
##################################################################
Install on the es host: 192.168.8.10
1. Install elasticsearch:
Prerequisite: jdk-1.8.0
Copy elasticsearch-6.6.0.rpm to the virtual machine
rpm -ivh elasticsearch-6.6.0.rpm
2. Edit the configuration file:
vim /etc/elasticsearch/elasticsearch.yml
node.name: node-1
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 192.168.8.10,127.0.0.1
http.port: 9200
3. Create the data directory and set its ownership
mkdir -p /data/elasticsearch
chown -R elasticsearch.elasticsearch /data/elasticsearch/
4. Configure the JVM heap (the memory to be locked):
vim /etc/elasticsearch/jvm.options
-Xms1g    # minimum heap size
-Xmx1g    # maximum heap size; officially recommended to be half of physical RAM, but no more than 32G
5. With memory locking enabled, elasticsearch fails to restart; fix it as follows:
systemctl edit elasticsearch
Add:
[Service]
LimitMEMLOCK=infinity
Save and exit, then:
systemctl daemon-reload
systemctl restart elasticsearch
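A quick sanity check, assuming elasticsearch came up on 192.168.8.10:9200 as configured above: the node should answer over HTTP, and the node info should show that memory locking took effect.
curl http://192.168.8.10:9200                                          # returns cluster name / version JSON
curl 'http://192.168.8.10:9200/_nodes?filter_path=**.mlockall&pretty'  # should report "mlockall" : true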
##################################################################
Install kibana on the es host
(1) Install kibana
rpm -ivh kibana-6.6.0-x86_64.rpm
(2) Edit the configuration file
vim /etc/kibana/kibana.yml
Modify:
server.port: 5601
server.host: "192.168.8.10"
server.name: "db01"                                # hostname of this machine
elasticsearch.hosts: ["http://192.168.8.10:9200"]  # address of the es server kibana pulls the log data from
Save and exit
(3) Start kibana
systemctl start kibana
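To confirm kibana is up (assuming the default port 5601 configured above), check that the port is listening and open http://192.168.8.10:5601 in a browser:
ss -lntp | grep 5601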
###################################################################
Install filebeat on the nginx host (192.168.8.20)
1. Install filebeat
rpm -ivh filebeat-6.6.0-x86_64.rpm
2. Edit the configuration file
vim /etc/filebeat/filebeat.yml
Modify:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
output.elasticsearch:
  hosts: ["192.168.8.10:9200"]
Save and exit
3. Start filebeat
systemctl start filebeat
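Filebeat 6.x ships built-in self tests; a quick way to validate the configuration and the connection to elasticsearch before generating traffic:
filebeat test config    # validates /etc/filebeat/filebeat.yml
filebeat test output    # checks connectivity to 192.168.8.10:9200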
######################################################################
Install httpd-tools on the es host
1. Configure a yum repository and install httpd-tools
yum -y install httpd-tools
2. Use the ab load-testing tool to generate test traffic
ab -c 1000 -n 20000 http://192.168.8.20/
3. Check the filebeat index and its data in the ES browser plugin; a curl-based alternative is shown below
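If no browser plugin (such as elasticsearch-head) is available, the same check can be done with curl; with the default settings filebeat 6.6 writes to an index named filebeat-6.6.0-<date>:
curl 'http://192.168.8.10:9200/_cat/indices?v'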
4. Add the index pattern in kibana
Management -- Index Patterns -- Create index pattern
Discover -- top-right time picker -- select Today
5. Change the nginx log format to JSON
vim /etc/nginx/nginx.conf
Add inside the http {} block:
log_format log_json '{ "@timestamp": "$time_local", '
                    '"remote_addr": "$remote_addr", '
                    '"referer": "$http_referer", '
                    '"request": "$request", '
                    '"status": $status, '
                    '"bytes": $body_bytes_sent, '
                    '"agent": "$http_user_agent", '
                    '"x_forwarded": "$http_x_forwarded_for", '
                    '"up_addr": "$upstream_addr",'
                    '"up_host": "$upstream_http_host",'
                    '"up_resp_time": "$upstream_response_time",'
                    '"request_time": "$request_time"'
                    ' }';
access_log /var/log/nginx/access.log log_json;
Save and exit
systemctl restart nginx
6. Clear the old (non-JSON) log: open /var/log/nginx/access.log in vim and delete its contents (or simply truncate it: > /var/log/nginx/access.log)
Re-run the ab test to generate JSON-format log entries
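To confirm the new entries really are valid JSON, pipe the last line through any JSON parser, for example:
tail -1 /var/log/nginx/access.log | python -m json.tool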
7. Edit the filebeat configuration file
vim /etc/filebeat/filebeat.yml
Change it to:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
output.elasticsearch:
  hosts: ["192.168.8.10:9200"]
  index: "nginx-%{+yyyy.MM.dd}"
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
Save and exit
Restart the service: systemctl restart filebeat
8. Configure access.log and error.log to go to separate indices
vim /etc/filebeat/filebeat.yml
Change it to:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
output.elasticsearch:
  hosts: ["192.168.8.10:9200"]
  indices:
    - index: "nginx-access-%{+yyyy.MM.dd}"
      when.contains:
        tags: "access"
    - index: "nginx-error-%{+yyyy.MM.dd}"
      when.contains:
        tags: "error"
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
Save and exit
Restart the service: systemctl restart filebeat
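After generating a little more traffic with ab, both tagged indices should appear (names as configured above):
curl 'http://192.168.8.10:9200/_cat/indices/nginx-*?v'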
===============================================================
Kibana charts:
Log in -- left panel, choose Visualize -- click the "+" button -- choose a chart type -- choose the index -- Buckets -- X-Axis -- Aggregation (choose Terms) --
Field (remote_addr.keyword) -- Size (5) -- click the apply (triangle) button at the top
Kibana monitoring (x-pack):
Log in -- left panel -- Monitoring -- enable monitoring
===============================================================
Build the filebeat + redis + logstash + es + kibana architecture
1. Install redis and start it
(1) Prepare the install and data directories
mkdir -p /data/soft
mkdir -p /opt/redis_cluster/redis_6379/{conf,logs,pid}
(2) Download the redis source package
cd /data/soft
wget http://download.redis.io/releases/redis-5.0.7.tar.gz
(3) Extract redis into /opt/redis_cluster/
tar xf redis-5.0.7.tar.gz -C /opt/redis_cluster/
ln -s /opt/redis_cluster/redis-5.0.7 /opt/redis_cluster/redis
(4) Change into the directory and build/install redis
cd /opt/redis_cluster/redis
make && make install
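A quick way to confirm the build and install succeeded (redis-server should now be on the PATH):
redis-server --version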
(5) Write the configuration file
vim /opt/redis_cluster/redis_6379/conf/6379.conf
Add:
bind 127.0.0.1 192.168.8.10
port 6379
daemonize yes
pidfile /opt/redis_cluster/redis_6379/pid/redis_6379.pid
logfile /opt/redis_cluster/redis_6379/logs/redis_6379.log
databases 16
dbfilename redis.rdb
dir /opt/redis_cluster/redis_6379
Save and exit
(6) Start the redis service
redis-server /opt/redis_cluster/redis_6379/conf/6379.conf
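A quick liveness check (redis is bound to both 127.0.0.1 and 192.168.8.10, so either address should answer PONG):
redis-cli -h 192.168.8.10 ping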
2. Edit the filebeat configuration so that output goes to redis
(Reference: https://www.elastic.co/guide/en/beats/filebeat/6.6/index.html)
(1) Point the filebeat output at redis, then restart
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.redis:
  hosts: ["192.168.8.10"]
  key: "filebeat"
  db: 0
  timeout: 5
Save and exit
Restart the service: systemctl restart filebeat
(2) Generate some traffic against the site, then log in to redis and inspect the key
redis-cli               # connect to redis
keys *                  # list all keys
type filebeat           # "filebeat" is the key written by filebeat
LLEN filebeat           # length of the list
LRANGE filebeat 0 -1    # print the entire list
3. Install logstash to pull the logs from redis and submit them to es
(1) Install logstash (the rpm package was placed in /data/soft in advance)
cd /data/soft/
rpm -ivh logstash-6.6.0.rpm
(2) Edit the logstash configuration to keep the access and error logs separate
vim /etc/logstash/conf.d/redis.conf
Add:
input {
  redis {
    host      => "192.168.8.10"
    port      => "6379"
    db        => "0"
    key       => "filebeat"
    data_type => "list"
  }
}
filter {
  mutate {
    convert => ["up_resp_time","float"]    # field name as defined in the nginx log_json format
    convert => ["request_time","float"]
  }
}
output {
  stdout {}
  if "access" in [tags] {
    elasticsearch {
      hosts           => ["http://192.168.8.10:9200"]
      index           => "nginx_access-%{+YYYY.MM.dd}"
      manage_template => false
    }
  }
  if "error" in [tags] {
    elasticsearch {
      hosts           => ["http://192.168.8.10:9200"]
      index           => "nginx_error-%{+YYYY.MM.dd}"
      manage_template => false
    }
  }
}
Save and exit
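Before starting, the pipeline file can be syntax-checked with logstash's config-test option:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf -t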
Start logstash:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf
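Running the binary in the foreground like this is convenient for debugging, since stdout {} prints every event to the terminal. For day-to-day operation the logstash rpm also installs a systemd unit whose default pipeline loads every file under /etc/logstash/conf.d/, so the same pipeline can usually be run with:
systemctl start logstash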