Visualizing nginx logs with filebeat + logstash + Elasticsearch + kibana


 


 

Test environment

Win7 64-bit

CentOS-7-x86_64-DVD-1503-01.iso (kibana host)

CentOS 6.5-x86_64 (all other components)

 

nginx-1.10.0

 

filebeat-5.5.2-linux-x86_64.tar.gz

Download:

https://pan.baidu.com/s/1dEBkIuH

https://www.elastic.co/downloads/beats/filebeat#ga-release

https://www.elastic.co/start

 

 

kibana-5.5.0-linux-x86_64.tar.gz

Download:

https://pan.baidu.com/s/1dEBkIuH

https://www.elastic.co/start

 

 

logstash-5.5.2.tar.gz

Download:

https://pan.baidu.com/s/1dEBkIuH

https://www.elastic.co/downloads/logstash

 

 

elasticsearch-5.5.2

Download:

https://pan.baidu.com/s/1dEBkIuH

https://www.elastic.co/downloads/elasticsearch#preview-release

 

 

Installing Nginx

Nginx log configuration

http {

   include       mime.types;

   default_type  application/octet-stream;

 

   log_format  main  '$remote_addr - $remote_user [$time_local] "$request" $status $request_time $upstream_response_time $request_length $bytes_sent $body_bytes_sent $gzip_ratio $connection_requests "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';

 

   access_log  logs/access.log  main;
   ...
}

Running nginx
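The start command itself is not shown; a sketch, assuming the usual source-install prefix /usr/local/nginx (adjust to wherever nginx-1.10.0 was actually installed):

```shell
# Validate the config (including the log_format edit), then start nginx.
/usr/local/nginx/sbin/nginx -t
/usr/local/nginx/sbin/nginx
# After issuing a request, confirm the access log uses the new format:
tail -n 1 /usr/local/nginx/logs/access.log
```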

 

 

Installing Java

Reference:

http://blog.sina.com.cn/s/blog_13cc013b50102w01m.html#_Toc438402186

 

 

[root@bogon ~]# java -version

java version "1.8.0_65"

 

Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)

 

Note: logstash requires Java 8; Java 9 is not supported.

 

https://www.elastic.co/guide/en/logstash/current/installing-logstash.html

 

Installing logstash

# tar -xzvf logstash-5.5.2.tar.gz

# ls

logstash-5.5.2  logstash-5.5.2.tar.gz

# mkdir -p /usr/local/logstash

# mv logstash-5.5.2 /usr/local/logstash/

 

Configuring logstash

 

# vim /usr/local/logstash/logstash-5.5.2/logstash.conf

input { stdin {} }

output {

  elasticsearch { hosts => ["192.168.1.101:9200"] }

  stdout { codec => rubydebug }

}

 

Notes:

input { stdin {} } reads events from standard input

192.168.1.101:9200 is the Elasticsearch host IP and its HTTP listen port

stdout { codec => rubydebug } pretty-prints each event to the console

 

Reference:

https://www.elastic.co/guide/en/logstash/current/config-examples.html

 

Running logstash

# cd /usr/local/logstash/logstash-5.5.2/

# bin/logstash -f logstash.conf

……

The stdin plugin is now waiting for input:

[2017-07-14T03:40:50,373][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

hello world

{

   "@timestamp" => 2017-07-13T19:59:53.848Z,

     "@version" => "1",

         "host" => "0.0.0.0",

      "message" => "hello world"

}

 

Note: with logstash running, type hello world at the prompt. Once the console prints the event shown above, search for it in Elasticsearch:

 

GET /logstash-2017.07.13/_search
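The same check can be made from the shell with curl against Elasticsearch's search API (a sketch; the index name carries the UTC date, so adjust it to the day of the test):

```shell
# Search the day's logstash index for the test event (host/index as used above)
curl -s 'http://192.168.1.101:9200/logstash-2017.07.13/_search?q=message:hello&pretty'
```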

 

 

If the search returns the input event, the pipeline is working.

 

Stopping logstash

Press CTRL+D (closes stdin, which shuts the pipeline down)

 

Reference:

https://www.elastic.co/guide/en/logstash/current/first-event.html

 

Installing Elasticsearch
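The install steps themselves are not shown; presumably the same tar-and-move pattern as the other components. Two points matter for this topology (both assumptions, based on the IPs used elsewhere in this article and standard Elasticsearch 5.x behavior): ES refuses to run as root, so create a dedicated user first, and it must bind to an address the logstash and kibana hosts can reach, e.g. in config/elasticsearch.yml:

```yaml
# config/elasticsearch.yml (fragment) -- assumed values matching this article
network.host: 192.168.1.101   # the address logstash and kibana point at
http.port: 9200
```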

Installing kibana

# mkdir -p /usr/local/kibana

# tar -xvzf kibana-5.5.0-linux-x86_64.tar.gz

# mv kibana-5.5.0-linux-x86_64 /usr/local/kibana/

 

Reference:

https://www.elastic.co/guide/en/kibana/current/targz.html

 

 

Configuring kibana

# cd /usr/local/kibana/kibana-5.5.0-linux-x86_64/config/

# vim kibana.yml

server.host: "192.168.1.104"

elasticsearch.url: "http://192.168.1.101:9200"

 

Reference:

https://www.elastic.co/guide/en/kibana/current/settings.html

 

 

Running kibana

# cd /usr/local/kibana/kibana-5.5.0-linux-x86_64/

# ./bin/kibana

 log   [23:51:04.051] [info][status][plugin:kibana@5.5.0] Status changed from uninitialized to green - Ready

 log   [23:51:04.510] [info][status][plugin:elasticsearch@5.5.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch

 log   [23:51:04.594] [info][status][plugin:console@5.5.0] Status changed from uninitialized to green - Ready

 log   [23:51:04.617] [warning] You're running Kibana 5.5.0 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v5.5.2 @ 192.168.1.101:9200 (192.168.1.101)

 log   [23:51:04.674] [info][status][plugin:metrics@5.5.0] Status changed from uninitialized to green - Ready

 log   [23:51:04.706] [info][status][plugin:elasticsearch@5.5.0] Status changed from yellow to green - Kibana index ready

 log   [23:51:06.992] [info][status][plugin:timelion@5.5.0] Status changed from uninitialized to green - Ready

 log   [23:51:07.032] [info][listening] Server running at http://192.168.1.104:5601

 log   [23:51:07.037] [info][status][ui settings] Status changed from uninitialized to green - Ready

 

 

Verification

Open http://192.168.1.104:5601/status in a browser.

If the page does not load, the firewall on the kibana host is blocking port 5601.

Workaround: stop the firewall (fine on a test box; opening only port 5601 is the safer option):

# systemctl stop firewalld.service

 

 

Reload the page; the status page should now appear.

 

 

 

Reference:

https://www.elastic.co/guide/en/kibana/current/access.html

 

 

Configuring an index pattern

Kibana requires at least one index pattern. An index pattern tells Kibana which Elasticsearch indices to run searches and analytics against.

Index name or pattern

The index name or pattern to match; patterns may contain the * wildcard, e.g. logstash-*

Time Filter field name

The field used as the time filter, so data can be filtered by time range on the Discover page.

To change these later: Management -> Index Patterns -> Create Index Pattern

 

 

 

Reference:

https://www.elastic.co/guide/en/kibana/current/connect-to-elasticsearch.html

 

Installing filebeat

# tar -xvzf filebeat-5.5.2-linux-x86_64.tar.gz

# mkdir -p /usr/local/filebeat

# mv filebeat-5.5.2-linux-x86_64 /usr/local/filebeat/

 

Configuration

# vim /usr/local/filebeat/filebeat-5.5.2-linux-x86_64/filebeat.yml

 

 

Configure the log file paths

Under filebeat.prospectors, specific files can be listed by name:

   - /usr/local/ngnix/logs/access.log
   - /usr/local/ngnix/logs/error.log

or a wildcard can be used to match every .log file under /usr/local/ngnix/logs/:

   - /usr/local/ngnix/logs/*.log

 

Configure the logstash output

Note: YAML requires a space after hosts: before the value; without it filebeat fails to start with a config error.
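The screenshots of filebeat.yml did not survive; a minimal sketch of the two edits described above (the logstash host and port are assumptions, taken from the connection errors shown later in this article):

```yaml
# filebeat.yml (fragment) -- which files to ship, and logstash as the output
filebeat.prospectors:
- input_type: log
  paths:
    - /usr/local/ngnix/logs/*.log   # or list access.log / error.log individually

# Comment out the default elasticsearch output and enable logstash instead.
output.logstash:
  hosts: ["192.168.1.103:9400"]     # note the space after "hosts:"
```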

 

Testing the configuration

# cd /usr/local/filebeat/filebeat-5.5.2-linux-x86_64/

# ./filebeat -configtest -e

2017/08/17 23:55:32.651228 beat.go:285: INFO Home path: [/usr/local/filebeat/filebeat-5.5.2-linux-x86_64] Config path: [/usr/local/filebeat/filebeat-5.5.2-linux-x86_64] Data path: [/usr/local/filebeat/filebeat-5.5.2-linux-x86_64/data] Logs path: [/usr/local/filebeat/filebeat-5.5.2-linux-x86_64/logs]

2017/08/17 23:55:32.651335 beat.go:186: INFO Setup Beat: filebeat; Version: 5.5.2

2017/08/17 23:55:32.651564 logstash.go:90: INFO Max Retries set to: 3

2017/08/17 23:55:32.652006 outputs.go:108: INFO Activated logstash as output plugin.

2017/08/17 23:55:32.652250 metrics.go:23: INFO Metrics logging every 30s

2017/08/17 23:55:32.662026 publish.go:295: INFO Publisher name: bogon

2017/08/17 23:55:32.698907 async.go:63: INFO Flush Interval set to: 1s

2017/08/17 23:55:32.699214 async.go:64: INFO Max Bulk Size set to: 2048

Config OK

 

 

Running filebeat

# ./filebeat -e -c filebeat.yml -d "publish"

 

 

Reference:

https://www.elastic.co/guide/en/beats/filebeat/5.5/filebeat-starting.html

 

https://www.elastic.co/guide/en/beats/filebeat/5.5/config-filebeat-logstash.html

 

Updating the logstash configuration

[root@bogon logstash-5.5.2]# vim logstash.conf

 

 

input {

  beats {

        port => "9400"

  }

}

filter{

  grok {

      match => { "message" => "%{IP:remote_addr} - %{USER:remote_user} \[%{HTTPDATE:time_local}\] \"%{WORD:method} %{URIPATHPARAM:request} %{DATA:http_version}\" %{NUMBER:status:int} %{NUMBER:request_time:float} %{NUMBER:upstream_response_time:float} %{NUMBER:request_length:int} %{NUMBER:bytes_sent:int} %{NUMBER:body_bytes_sent:int} %{DATA:gzip_ratio:float} %{NUMBER:connection_requests:int} \"%{DATA:http_referer}\" %{QUOTEDSTRING:http_user_agent} \"%{DATA:http_x_forwarded_for}\"" }

  }

}

output {

  elasticsearch { hosts => ["192.168.1.101:9200"] }

  stdout { codec => rubydebug }

}


A sample log line (the message field) looks like:

192.168.1.101 - - [15/Sep/2017:01:04:51 +0800] "GET /zentaopms/www/theme/default/zh-cn.default.css?v=8.0 HTTP/1.1" 304 0.006 0.006 652 141 0 - 1 "http://192.168.1.102:8080/zentaopms/www/index.php?m=user&f=login&referer=L3plbnRhb3Btcy93d3cvaW5kZXgucGhw" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:55.0) Gecko/20100101 Firefox/55.0" "-"
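As a quick sanity check that the pattern's shape fits this log format, the line can be run through a simplified extended regex with grep (a sketch only: grok compiles its patterns to regexes internally, and field names are lost here, but a shape mismatch usually means a _grokparsefailure tag in logstash):

```shell
# Sample access-log line in the format above (referer shortened for brevity).
sample='192.168.1.101 - - [15/Sep/2017:01:04:51 +0800] "GET /zentaopms/www/theme/default/zh-cn.default.css?v=8.0 HTTP/1.1" 304 0.006 0.006 652 141 0 - 1 "http://192.168.1.102:8080/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:55.0) Gecko/20100101 Firefox/55.0" "-"'
# Rough shape of the grok pattern: addr, user, [time], "request", numbers, quoted strings.
shape='^[0-9.]+ - [^ ]+ \[[^]]+\] "[^"]+" [0-9]+ [0-9.]+ [0-9.]+ [0-9]+ [0-9]+ [0-9]+ [^ ]+ [0-9]+ "[^"]*" "[^"]*" "[^"]*"$'
if printf '%s\n' "$sample" | grep -Eq "$shape"; then
  echo "shape OK"
else
  echo "shape mismatch"
fi
```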

 

                                                       

Testing the configuration

# cd /usr/local/logstash/logstash-5.5.2/

# bin/logstash -f logstash.conf --config.test_and_exit

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.

Sending Logstash's logs to /usr/local/logstash/logstash-5.5.2/logs which is now configured via log4j2.properties

Configuration OK

[2017-08-31T00:14:15,049][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

 

Running logstash

Note: when logstash is started with --config.reload.automatic, configuration changes are picked up automatically; there is no need to restart logstash after editing logstash.conf.

 

# bin/logstash -f logstash.conf --config.reload.automatic

Meanwhile, filebeat cannot deliver events because port 9400 on the logstash host (192.168.1.103) is blocked:

2017/08/18 00:53:20.024649 output.go:109: DBG  output worker: publish 323 events

2017/08/18 00:53:20.075676 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 192.168.1.103:9400: getsockopt: no route to host

2017/08/18 00:53:21.109983 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 192.168.1.103:9400: getsockopt: no route to host

2017/08/18 00:53:23.270575 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 192.168.1.103:9400: getsockopt: no route to host

2017/08/18 00:53:27.467576 single.go:140: ERR Connecting error publishing

……

 

Fix: open port 9400 in the firewall on the logstash host

#  firewall-cmd --permanent --zone=public --add-port=9400/tcp

success

# firewall-cmd --reload

success

 

[root@bogon logstash-5.5.2]# bin/logstash -f logstash.conf --config.test_and_exit

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.

Sending Logstash's logs to /usr/local/logstash/logstash-5.5.2/logs which is now configured via log4j2.properties

Configuration OK

[2017-09-03T17:56:46,275][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

 

# bin/logstash -f logstash.conf --config.reload.automatic

 

Fields: after the grok filter, each event carries remote_addr, remote_user, time_local, method, request, http_version, status, request_time, upstream_response_time, request_length, bytes_sent, body_bytes_sent, gzip_ratio, connection_requests, http_referer, http_user_agent and http_x_forwarded_for, all usable directly in Kibana visualizations.

 

 

 

Reference:

https://github.com/elastic/logstash/blob/v1.1.9/patterns/grok-patterns

https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html

 

 

 

 

 
