This article is an extension of the previous one; please read it alongside that article: Deploying an enterprise-grade log analysis platform with containers, ELK 7.10.1 (Elasticsearch + Filebeat + Redis + Logstash + Kibana).
- Verification 1:
When an Nginx log file is larger than the log size limit configured in filebeat.yml, Filebeat will not collect that log.
Prepare the Nginx logs
```
[root@es-master21 ~]# ll -h /var/log/nginx/
total 69M
-rwxr-xr-x. 1 root root  68M Dec 13 15:15 access.log
-rwxr-xr-x. 1 root root 1.8M Dec  7 23:09 error.log
```
You can see that in the /var/log/nginx/ directory the access.log file is currently 68M, while error.log is 1.8M.
Set up Filebeat
Note:
The log directory that Filebeat needs to collect from must be mounted into the Filebeat container, otherwise collection may not work properly.
```
[root@es-master21 ~]# mkdir -p /mnt/filebeat/
[root@es-master21 ~]# cd /mnt/
[root@es-master21 mnt]# vim docker-compose.yml
......
......
  filebeat:
    image: store/elastic/filebeat:7.10.1    # keep the image version consistent with the rest of the ELK stack
    container_name: filebeat
    restart: always
    volumes:
      - /var/log/nginx/:/var/log/nginx                                   # Nginx log directory that Filebeat will collect from (must be mounted into the container)
      - /mnt/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro   # Filebeat configuration file
      - /etc/localtime:/etc/localtime
```
Write the filebeat.yml file
```
[root@es-master21 mnt]# cd filebeat/
[root@es-master21 filebeat]# vim filebeat.yml    (remove the # comments when you actually use this file, otherwise the YAML will not parse correctly)
filebeat.inputs:                    # "inputs" is plural: more than one input type can be defined
- type: log                         # input type
  access:
  enabled: true                     # enable this input
  max_bytes: 20480                  # size limit for a single log entry; setting a limit is recommended (the default is 10M; with the setting above, logs over 20M will not be collected)
  paths:
    - /var/log/nginx/access.log     # monitor the Nginx access log; when starting out, it is best to collect the access log of a single Nginx instance first
  fields:                           # extra field: every event Filebeat collects gets an additional "source" field with the value nginx-access-21; the Logstash output uses it to tell where the log came from and write to the corresponding Elasticsearch index, which also makes filtering in Kibana easier later (see the screenshots at the end)
    source: nginx-access-21
- type: log
  access:
  enabled: true
  paths:
    - /var/log/nginx/error.log
  fields:
    source: nginx-error-21          # extra field: same as above, but the value is nginx-error-21
setup.ilm.enabled: false
output.redis:                       # output to Redis
  hosts: ["192.168.1.21:6379"]      # Redis address and port
  password: "123456"                # Redis password
  db: 0                             # Redis database to use
  key: "nginx_log"                  # key name to write to in Redis

[root@es-node22 mnt]# docker-compose -f docker-compose.yml up -d
[root@es-master21 mnt]# docker-compose ps
       Name                     Command               State                                         Ports
-----------------------------------------------------------------------------------------------------------------------------------------
elasticsearch        /tini -- /usr/local/bin/do ...   Up      0.0.0.0:9200->9200/tcp,:::9200->9200/tcp, 0.0.0.0:9300->9300/tcp,:::9300->9300/tcp
elasticsearch-head   /bin/sh -c grunt server          Up      0.0.0.0:9100->9100/tcp,:::9100->9100/tcp
filebeat             /usr/local/bin/docker-entr ...   Up
kibana               /usr/local/bin/dumb-init - ...   Up      0.0.0.0:5601->5601/tcp,:::5601->5601/tcp
logstash             /usr/local/bin/docker-entr ...   Up      0.0.0.0:5044->5044/tcp,:::5044->5044/tcp, 9600/tcp
redis                docker-entrypoint.sh redis ...   Up      0.0.0.0:6379->6379/tcp,:::6379->6379/tcp
```
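Before looking at the collection results, it can be worth double-checking that the container really sees the mounted log directory and that the configuration file parses. The commands below are a minimal sketch and are not part of the original steps; they assume the container name filebeat from the docker-compose.yml above.

```
[root@es-master21 mnt]# docker exec -it filebeat ls -lh /var/log/nginx/    # the mounted Nginx log directory should be visible inside the container
[root@es-master21 mnt]# docker exec -it filebeat filebeat test config      # validates the mounted filebeat.yml; prints "Config OK" on success
```

filebeat test output can additionally be used to probe the connection to the configured output, although not every output type supports this test.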
Note:
Because filebeat.yml limits the maximum size of the logs Filebeat will collect to 20M, Filebeat will not collect access.log.
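As a quick illustration (not part of the original article), a plain find shows which files in the log directory are over the 20M threshold the article works with:

```
# List files under /var/log/nginx/ larger than 20M
[root@es-master21 mnt]# find /var/log/nginx/ -type f -size +20M -exec ls -lh {} \;
```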
Verification 1:
1. Count the lines in the error.log file
```
[root@es-master21 mnt]# cat /var/log/nginx/error.log | wc -l
7166
```
2. Check the Nginx log data that Filebeat has collected and written into Redis
```
[root@es-master21 mnt]# docker exec -it redis bash
root@d5a4be90c7f6:/# redis-cli -h 192.168.1.21
192.168.1.21:6379> AUTH 123456
OK
192.168.1.21:6379> SELECT 0
OK
192.168.1.21:6379> KEYS *            # the key that was created is nginx_log
1) "nginx_log"
192.168.1.21:6379> LLEN nginx_log
(integer) 7166                       # the nginx_log list in Redis holds 7166 entries that have not been consumed yet
```
You can see that only 7166 entries were written into Redis, which shows that Filebeat collected only the error.log data.
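Besides counting the entries, you can peek at one of them from the same redis-cli session to confirm that the extra fields.source value is present; this is just an illustrative sketch, not one of the original steps.

```
# Show the first (oldest) entry in the list without removing it; each entry is a JSON
# event produced by Filebeat and should carry "fields":{"source":"nginx-error-21"}
192.168.1.21:6379> LRANGE nginx_log 0 0
```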
3. Check the Logstash logs, which print the log data consumed from Redis and processed
```
[root@es-master21 mnt]# docker-compose logs -f logstash
... ...
logstash    |        "message" => "2021/10/21 15:27:19 [error] 27216#0: *357254 open() \"/usr/local/nginx/html/dist/.well-known/security.txt\" failed (2: No such file or directory), client: 11.64.24.136, server: localhost, request: \"GET /.well-known/security.txt HTTP/1.1\", host: \"32.91.142.93\", referrer: \"http://32.91.142.93/.well-known/security.txt\""
logstash    | }
logstash    | {
logstash    |            "log" => {
logstash    |           "file" => {
logstash    |             "path" => "/var/log/nginx/error.log"    # the collected log is error.log
logstash    |         },
logstash    |         "offset" => 1863165
logstash    |     },
logstash    |     "@timestamp" => 2021-12-27T09:22:37.192Z,
logstash    |           "host" => {
logstash    |         "name" => "7fddb131a96b"
logstash    |     },
logstash    |         "fields" => {
logstash    |         "source" => "nginx-error-21"
logstash    |     },
logstash    |       "@version" => "1",
logstash    |          "input" => {
logstash    |         "type" => "log"
logstash    |     },
logstash    |            "ecs" => {
logstash    |         "version" => "1.6.0"
logstash    |     },
logstash    |          "agent" => {
logstash    |            "hostname" => "7fddb131a96b",
logstash    |             "version" => "7.10.1",
logstash    |                "name" => "7fddb131a96b",
logstash    |                  "id" => "95b7e319-228c-4eed-b9c8-26fad0c59d3b",
logstash    |                "type" => "filebeat",
logstash    |        "ephemeral_id" => "76ec6bc4-39e9-4364-94fb-20f0a7f49122"
logstash    |     },
......
......
```
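The fields.source value visible above is what the Logstash pipeline from the previous article keys on when choosing the Elasticsearch index. As a reminder of how that routing works, the output section looks roughly like the sketch below; the index names and the Elasticsearch address are illustrative, so check the actual logstash.conf from the previous article.

```
output {
  if [fields][source] == "nginx-access-21" {
    elasticsearch {
      hosts => ["192.168.1.21:9200"]               # illustrative ES address
      index => "nginx-access-21-%{+YYYY.MM.dd}"    # illustrative index name
    }
  }
  if [fields][source] == "nginx-error-21" {
    elasticsearch {
      hosts => ["192.168.1.21:9200"]
      index => "nginx-error-21-%{+YYYY.MM.dd}"
    }
  }
}
```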
4. Check how many log entries Logstash has written into the Elasticsearch cluster
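The original article shows this step with screenshots (elasticsearch-head and Kibana). The same information can also be read from the command line with the _cat and _count APIs; a minimal sketch, assuming Elasticsearch is reachable on 192.168.1.21:9200 as mapped in docker-compose.yml and that the index pattern matches your Logstash output:

```
# List all indices along with their document counts and sizes
[root@es-master21 mnt]# curl -s 'http://192.168.1.21:9200/_cat/indices?v'
# Count only the documents written for the error log (adjust the index pattern to your setup)
[root@es-master21 mnt]# curl -s 'http://192.168.1.21:9200/nginx-error-21-*/_count?pretty'
```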
5. Open Kibana, create an index pattern, and display the Nginx log data stored in ES
Conclusion:
When an Nginx log file is larger than the maximum log size configured in filebeat.yml, Filebeat will not collect that log file.
- Verification 2:
Cut the Nginx access.log down so that it is smaller than the 20M limit in filebeat.yml, then verify the collection result again.
1. Trim the access.log file
Since this is only a test, I simply backed up access.log and deleted part of its content so that it is smaller than 20M. (In production it is recommended to rotate the Nginx logs automatically with a script plus a signal to Nginx, as sketched below.)
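For reference, a minimal rotation sketch (paths and file names are illustrative and should be adapted to your installation): rename the active logs, then send the Nginx master process the USR1 signal so that it reopens its log files; Filebeat will then start reading the fresh files from the beginning.

```
#!/bin/bash
# Minimal Nginx log rotation sketch
LOG_DIR=/var/log/nginx
DATE=$(date +%Y%m%d%H%M)

# Rename the active log files; Nginx keeps writing to the renamed files until it reopens them
mv ${LOG_DIR}/access.log ${LOG_DIR}/access.log.${DATE}
mv ${LOG_DIR}/error.log  ${LOG_DIR}/error.log.${DATE}

# USR1 tells the Nginx master process to reopen its log files (adjust the pid file path)
kill -USR1 $(cat /usr/local/nginx/logs/nginx.pid)
```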
```
[root@es-master21 mnt]# ll -h /var/log/nginx/
total 19M
-rwxr-xr-x. 1 root root  17M Dec 30 09:09 access.log
-rwxr-xr-x. 1 root root 1.8M Dec 30 08:58 error.log
```
2. Check the Logstash logs, which print the log data consumed from Redis and processed
```
[root@es-master21 mnt]# docker-compose logs -f logstash
... ...
logstash    |        "message" => "42.75.14.10 - - [06/Aug/2021:07:16:41 +0800] \"HEAD /\\xD7\\xEE\\xD0\\xC2.txt HTTP/1.1\" 301 0 \"-\" \"Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)\""
logstash    | }
logstash    | {
logstash    |            "log" => {
logstash    |           "file" => {
logstash    |             "path" => "/var/log/nginx/access.log"
logstash    |         },
logstash    |         "offset" => 61173368
logstash    |     },
logstash    |     "@timestamp" => 2021-12-30T01:08:23.839Z,
logstash    |           "host" => {
logstash    |         "name" => "7fddb131a96b"
logstash    |     },
logstash    |         "fields" => {
logstash    |         "source" => "nginx-access-21"
logstash    |     },
logstash    |       "@version" => "1",
logstash    |          "input" => {
logstash    |         "type" => "log"
logstash    |     },
logstash    |          "agent" => {
logstash    |            "hostname" => "7fddb131a96b",
logstash    |             "version" => "7.10.1",
logstash    |                "name" => "7fddb131a96b",
logstash    |                  "id" => "95b7e319-228c-4eed-b9c8-26fad0c59d3b",
logstash    |                "type" => "filebeat",
logstash    |        "ephemeral_id" => "2d9ae3f6-e506-4332-a986-dff8b4aa5e47"
logstash    |     },
logstash    |            "ecs" => {
logstash    |         "version" => "1.6.0"
logstash    |     },
```
3. Check how many log entries Logstash has written into the Elasticsearch cluster
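As in verification 1, the document count can also be read from the command line and compared with the number of lines left in the trimmed access.log; the index pattern below is illustrative.

```
# Lines currently in the trimmed access.log
[root@es-master21 mnt]# wc -l < /var/log/nginx/access.log
# Documents written for the access log (adjust the index pattern to your Logstash output)
[root@es-master21 mnt]# curl -s 'http://192.168.1.21:9200/nginx-access-21-*/_count?pretty'
```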
4. Open Kibana, create an index pattern, and display the Nginx log data stored in ES
Conclusion:
After access.log was trimmed to below the 20M limit configured in filebeat.yml, Filebeat immediately started collecting it and writing the data into Redis for Logstash to consume; Logstash then wrote it into the ES cluster, and finally Kibana displayed it.