In operations work, we often run into the following situations:

① Developers cannot log in to production servers to inspect detailed logs

② Every system produces its own logs; the data is scattered and hard to search

③ Log volumes are large, queries are slow, and the data is not real-time enough

④ A single call spans multiple systems, making it hard to quickly locate the relevant entries across their logs

We can use the currently popular ELK Stack to address these needs.


Components of the ELK Stack:




The overall data flow is shown below:


System environment:

[root@node01 ~]# cat /etc/redhat-release
CentOS release 6.4 (Final)
[root@node01 ~]# uname -a
Linux node01 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@node01 ~]#
IP address     Services               Role
10.0.0.41      elasticsearch          node01
10.0.0.42      elasticsearch          node02
10.0.0.43      elasticsearch          node03
10.0.0.44      redis, kibana          kibana
10.0.0.9       nginx, logstash        web01
10.0.0.10      logstash               logstash

Hands-on steps:

① Download the packages, install the Java environment, and configure the environment variables:

[root@node01 tools]# ll
total 289196
-rw-r--r-- 1 root root  28487351 Mar 24 11:29 elasticsearch-1.7.5.tar.gz
-rw-r--r-- 1 root root 173271626 Mar 24 11:19 jdk-8u45-linux-x64.tar.gz
-rw-r--r-- 1 root root  18560531 Mar 24 11:00 kibana-4.1.6-linux-x64.tar.gz
-rw-r--r-- 1 root root  74433726 Mar 24 11:06 logstash-2.1.3.tar.gz
-rw-r--r-- 1 root root   1375200 Mar 24 11:03 redis-3.0.7.tar.gz
[root@node01 tools]#


Install the JDK (the same steps are performed on every server in this lab; node01 is shown as the example):

[root@node01 tools]# tar xf jdk-8u45-linux-x64.tar.gz
[root@node01 tools]# ll
[root@node01 tools]# mv jdk1.8.0_45 /usr/java/
[root@node01 tools]# vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_45
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
[root@node01 ~]#
[root@node01 ~]# source /etc/profile
[root@node01 ~]# java -version
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
[root@node01 ~]#

② Install and configure Elasticsearch (perform the same steps on node01, node02, and node03)

[root@node01 tools]# tar xf elasticsearch-1.7.5.tar.gz
[root@node01 tools]# mv elasticsearch-1.7.5 /usr/local/elasticsearch
[root@node01 tools]#
[root@node01 tools]# cd /usr/local/elasticsearch/
[root@node01 elasticsearch]# ll
total 40
drwxr-xr-x 2 root root  4096 Mar 24 11:34 bin
drwxr-xr-x 2 root root  4096 Mar 24 11:34 config
drwxr-xr-x 3 root root  4096 Mar 24 11:34 lib
-rw-rw-r-- 1 root root 11358 Feb  2 17:24 LICENSE.txt
-rw-rw-r-- 1 root root   150 Feb  2 17:24 NOTICE.txt
-rw-rw-r-- 1 root root  8700 Feb  2 17:24 README.textile
[root@node01 elasticsearch]#
[root@node01 elasticsearch]# cd config/
[root@node01 config]# ll
total 20
-rw-rw-r-- 1 root root 13476 Feb  2 17:24 elasticsearch.yml
-rw-rw-r-- 1 root root  2054 Feb  2 17:24 logging.yml
[root@node01 config]#


Modify the configuration file as follows:

[root@node01 config]#
[root@node01 config]# grep -vE '^#|^$' elasticsearch.yml
cluster.name: elasticsearch
node.name: "node01"
node.master: true
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1
path.conf: /usr/local/elasticsearch/config
path.data: /usr/local/elasticsearch/data
path.work: /usr/local/elasticsearch/work
path.logs: /usr/local/elasticsearch/logs
path.plugins: /usr/local/elasticsearch/plugins
bootstrap.mlockall: true
[root@node01 config]#
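Elasticsearch 1.7 uses multicast discovery by default, so nodes that share the same cluster.name usually find each other automatically. If multicast is blocked on your network, a minimal sketch (IPs taken from the environment table above; adjust for your setup) is to list the nodes explicitly:

# optional sketch: append unicast discovery settings to elasticsearch.yml on each node
cat >> /usr/local/elasticsearch/config/elasticsearch.yml <<'EOF'
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.41", "10.0.0.42", "10.0.0.43"]
EOF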


Create the corresponding directories:

[root@node01 elasticsearch]# mkdir -p /usr/local/elasticsearch/{data,work,logs,plugins}
[root@node01 elasticsearch]# ll
total 56
drwxr-xr-x 2 root root  4096 Mar 24 11:34 bin
drwxr-xr-x 2 root root  4096 Mar 24 12:52 config
drwxr-xr-x 3 root root  4096 Mar 24 12:51 data
drwxr-xr-x 3 root root  4096 Mar 24 11:34 lib
-rw-rw-r-- 1 root root 11358 Feb  2 17:24 LICENSE.txt
drwxr-xr-x 2 root root  4096 Mar 24 13:00 logs
-rw-rw-r-- 1 root root   150 Feb  2 17:24 NOTICE.txt
drwxr-xr-x 2 root root  4096 Mar 24 13:01 plugins
-rw-rw-r-- 1 root root  8700 Feb  2 17:24 README.textile
drwxr-xr-x 2 root root  4096 Mar 24 13:00 work
[root@node01 elasticsearch]#


 

③ Start the Elasticsearch service

[root@node01 elasticsearch]# pwd
/usr/local/elasticsearch
[root@node01 elasticsearch]# ll
total 44
drwxr-xr-x 2 root root  4096 Mar 24 11:34 bin
drwxr-xr-x 2 root root  4096 Mar 24 12:52 config
drwxr-xr-x 3 root root  4096 Mar 24 12:51 data
drwxr-xr-x 3 root root  4096 Mar 24 11:34 lib
-rw-rw-r-- 1 root root 11358 Feb  2 17:24 LICENSE.txt
-rw-rw-r-- 1 root root   150 Feb  2 17:24 NOTICE.txt
-rw-rw-r-- 1 root root  8700 Feb  2 17:24 README.textile
[root@node01 elasticsearch]# /usr/local/elasticsearch/bin/elasticsearch
[root@node01 elasticsearch]


To run Elasticsearch in the background, simply add -d:

[root@node01 elasticsearch]# /usr/local/elasticsearch/bin/elasticsearch -d


After startup, check the listening ports:

[root@node01 elasticsearch]# netstat -lpnut|grep java
tcp        0      0 :::9300                     :::*                        LISTEN      26868/java
tcp        0      0 :::9200                     :::*                        LISTEN      26868/java
udp        0      0 :::54328                    :::*                                    26868/java
[root@node01 elasticsearch]#


Query the node to check its status:

[root@node01 elasticsearch]# curl http://10.0.0.41:9200
{
  "status" : 200,
  "name" : "node01",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.5",
    "build_hash" : "00f95f4ffca6de89d68b7ccaf80d148f1f70e4d4",
    "build_timestamp" : "2016-02-02T09:55:30Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
[root@node01 elasticsearch]#


④ Management script for the Elasticsearch service (this part is optional)

[root@node01 elasticsearch]# git clone https://github.com/elastic/elasticsearch-servicewrapper
Initialized empty Git repository in /usr/local/elasticsearch/elasticsearch-servicewrapper/.git/
remote: Counting objects: 184, done.
remote: Total 184 (delta 0), reused 0 (delta 0), pack-reused 184
Receiving objects: 100% (184/184), 4.55 MiB | 46 KiB/s, done.
Resolving deltas: 100% (53/53), done.
[root@node01 elasticsearch]#
[root@node01 elasticsearch]# mv elasticsearch-servicewrapper/service/ /usr/local/elasticsearch/bin/
[root@node01 elasticsearch]# /usr/local/elasticsearch/bin/service/elasticsearch
Usage: /usr/local/elasticsearch/bin/service/elasticsearch [ console | start | stop | restart | condrestart | status | install | remove | dump ]
Commands:
  console      Launch in the current console.
  start        Start in the background as a daemon process.
  stop         Stop if running as a daemon or in another console.
  restart      Stop if running and then start.
  condrestart  Restart only if already running.
  status       Query the current status.
  install      Install to start automatically when system boots.
  remove       Uninstall.
  dump         Request a Java thread dump if running.
[root@node01 elasticsearch]#


Following the usage hint, install it as a system startup service:

[root@node01 elasticsearch]# /usr/local/elasticsearch/bin/service/elasticsearch install
Detected RHEL or Fedora:
Installing the Elasticsearch daemon..
[root@node01 elasticsearch]# chkconfig --list|grep elas
elasticsearch   0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@node01 elasticsearch]#


Start the Elasticsearch service via the service command:

[root@node01 logs]# service elasticsearch start
Starting Elasticsearch...
Waiting for Elasticsearch......
running: PID:28084
[root@node01 logs]#


If startup fails, you will see output like the following:

[root@node01 service]# service elasticsearch start
Starting Elasticsearch...
Waiting for Elasticsearch......................
WARNING: Elasticsearch may have failed to start.


Troubleshoot via the startup log:

[root@node01 logs]# pwd
/usr/local/elasticsearch/logs
[root@node01 logs]# ll
total 4
-rw-r--r-- 1 root root 3909 Mar 24 13:32 service.log
[root@node01 logs]# cat service.log
The configured wrapper.java.command could not be found, attempting to launch anyway: java
Launching a JVM...
VM...
jvm 3    | VM...
wrapper  |
wrapper  | ------------------------------------------------------------------------
wrapper  | Advice:
wrapper  | Usually when the Wrapper fails to start the JVM process, it is because
wrapper  | of a problem with the value of the configured Java command.  Currently:
wrapper  | wrapper.java.command=java
wrapper  | Please make sure that the PATH or any other referenced environment
wrapper  | variables are correctly defined for the current environment.
wrapper  | ------------------------------------------------------------------------
wrapper  |
wrapper  | The configured wrapper.java.command could not be found, attempting to launch anyway: java
wrapper  | Launching a JVM...


Based on the advice above, edit the wrapper configuration file so that the Java-related parameters match your actual environment (a sketch follows), then restart:

[root@node01 service]# pwd
/usr/local/elasticsearch/bin/service
[root@node01 service]#
[root@node01 service]# ll elasticsearch.conf
-rw-r--r-- 1 root root 4768 Mar 24 13:32 elasticsearch.conf
[root@node01 service]#


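A minimal sketch of the values to check in bin/service/elasticsearch.conf, assuming the JDK path and install directory used earlier in this article (the exact values depend on your environment):

cd /usr/local/elasticsearch/bin/service
# after editing, these entries should point at the real JDK and ES home, for example:
grep -E 'set.default.ES_HOME|set.default.ES_HEAP_SIZE|wrapper.java.command' elasticsearch.conf
# set.default.ES_HOME=/usr/local/elasticsearch
# set.default.ES_HEAP_SIZE=1024
# wrapper.java.command=/usr/java/jdk1.8.0_45/bin/java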


[root@node01 logs]# service elasticsearch start
Starting Elasticsearch...
Waiting for Elasticsearch......
running: PID:28084
[root@node01 logs]#


Check that it is running:

[root@node01 service]# netstat -lnput
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      968/sshd
tcp        0      0 127.0.0.1:32000             0.0.0.0:*                   LISTEN      28086/java
tcp        0      0 :::9300                     :::*                        LISTEN      28086/java
tcp        0      0 :::22                       :::*                        LISTEN      968/sshd
tcp        0      0 :::9200                     :::*                        LISTEN      28086/java
udp        0      0 :::54328                    :::*                                    28086/java
[root@node01 service]# curl http://10.0.0.41:9200
{
  "status" : 200,
  "name" : "node01",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.5",
    "build_hash" : "00f95f4ffca6de89d68b7ccaf80d148f1f70e4d4",
    "build_timestamp" : "2016-02-02T09:55:30Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
[root@node01 service]#


⑤ Java API

Node client

Transport client

⑥ RESTful API

⑦ Other language clients: JavaScript, .NET, PHP, Perl, Python, Ruby

Query example:

[root@node01 service]# curl -XGET 'http://10.0.0.41:9200/_count?pretty' -d '{"query":{"match_all":{}}}'
{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}
[root@node01 service]#
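As a quick RESTful API example (the index, type, and field names below are arbitrary, chosen only for illustration), you can index a document and read it back:

# index a test document, then fetch it by id
curl -XPUT 'http://10.0.0.41:9200/test-index/test-type/1' -d '{"user":"node01","message":"hello elk"}'
curl -XGET 'http://10.0.0.41:9200/test-index/test-type/1?pretty'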


Plugins extend Elasticsearch with additional functionality of various kinds. There are three types of plugins:

    1. Java plugins

These contain only JAR files. They must be installed on every node in the cluster, and a restart is required for the plugin to take effect.

    2. Site plugins

These contain static web content (js, css, html, etc.) served directly by Elasticsearch, such as the head plugin. They only need to be installed on one node and do not require a restart. They can be accessed through a URL of the form http://node-ip:9200/_plugin/plugin_name

    3. Hybrid plugins

As the name suggests, these combine both of the above.

Install the Marvel plugin for Elasticsearch:

[root@node01 service]# /usr/local/elasticsearch/bin/plugin -i elasticsearch/marvel/latest
-> Installing elasticsearch/marvel/latest...
Trying http://download.elasticsearch.org/elasticsearch/marvel/marvel-latest.zip...
Downloading ............................................DONE
Installed elasticsearch/marvel/latest into /usr/local/elasticsearch/plugins/marvel
[root@node01 service]#


After installation, view the cluster status in a browser:

http://10.0.0.41:9200/_plugin/marvel/kibana/index.html#/dashboard/file/marvel.overview.json


Below are the cluster nodes and index status:



Select the Dashboards menu:


Click Sense to enter the following interface:




Write some test content:

Note the id generated on the right, then query by that id on the left:



For further queries, you can do the following:




⑧ Install the cluster management plugin (head)

[root@node01 service]# /usr/local/elasticsearch/bin/plugin -i elasticsearch/marvel/latest
-> Installing elasticsearch/marvel/latest...
Trying http://download.elasticsearch.org/elasticsearch/marvel/marvel-latest.zip...
Downloading ............................................DONE
Installed elasticsearch/marvel/latest into /usr/local/elasticsearch/plugins/marvel
[root@node01 service]# /usr/local/elasticsearch/bin/plugin -i mobz/elasticsearch-head
-> Installing mobz/elasticsearch-head...
Trying https://github.com/mobz/elasticsearch-head/archive/master.zip...
Downloading ............................................DONE
Installed mobz/elasticsearch-head into /usr/local/elasticsearch/plugins/head
[root@node01 service]#


After installation, access it in a browser:


http://10.0.0.41:9200/_plugin/head/


To deploy another node, node02, simply change node.name to "node02" in the Elasticsearch configuration file; everything else stays the same as node01.

[root@node02 tools]# cd
[root@node02 ~]# grep '^[a-z]' /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: elasticsearch
node.name: "node02"
node.master: true
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1
path.conf: /usr/local/elasticsearch/config
path.data: /usr/local/elasticsearch/data
path.work: /usr/local/elasticsearch/work
path.logs: /usr/local/elasticsearch/logs
path.plugins: /usr/local/elasticsearch/plugins
bootstrap.mlockall: true
[root@node02 ~]#
[root@node02 ~]# mkdir -p /usr/local/elasticsearch/{data,work,logs.plugins}
[root@node02 ~]# ll /usr/local/elasticsearch/
total 56
drwxr-xr-x 2 root root  4096 Mar 24 14:23 bin
drwxr-xr-x 2 root root  4096 Mar 24 14:23 config
drwxr-xr-x 2 root root  4096 Mar 24 14:31 data
drwxr-xr-x 4 root root  4096 Mar 24 14:27 elasticsearch-servicewrapper
drwxr-xr-x 3 root root  4096 Mar 24 14:23 lib
-rw-rw-r-- 1 root root 11358 Feb  2 17:24 LICENSE.txt
drwxr-xr-x 2 root root  4096 Mar 24 14:31 logs.plugins
-rw-rw-r-- 1 root root   150 Feb  2 17:24 NOTICE.txt
-rw-rw-r-- 1 root root  8700 Feb  2 17:24 README.textile
drwxr-xr-x 2 root root  4096 Mar 24 14:31 work
[root@node02 ~]#

(Note the typo in the mkdir command above: it should be {data,work,logs,plugins}, so that logs and plugins are created as separate directories.)


Install the elasticsearch-servicewrapper management tool for Elasticsearch:

[root@node02 elasticsearch]# git clone https://github.com/elastic/elasticsearch-servicewrapper
Initialized empty Git repository in /usr/local/elasticsearch/elasticsearch-servicewrapper/.git/
remote: Counting objects: 184, done.
remote: Total 184 (delta 0), reused 0 (delta 0), pack-reused 184
Receiving objects: 100% (184/184), 4.55 MiB | 10 KiB/s, done.
Resolving deltas: 100% (53/53), done.
[root@node02 elasticsearch]# ll
total 80
drwxr-xr-x 3 root root  4096 Mar 24 14:33 bin
drwxr-xr-x 2 root root  4096 Mar 24 14:23 config
drwxr-xr-x 2 root root  4096 Mar 24 14:31 data
drwxr-xr-x 3 root root  4096 Mar 24 14:33 elasticsearch-servicewrapper
drwxr-xr-x 3 root root  4096 Mar 24 14:23 lib
-rw-rw-r-- 1 root root 11358 Feb  2 17:24 LICENSE.txt
drwxr-xr-x 2 root root  4096 Mar 24 14:31 logs
-rw-rw-r-- 1 root root   150 Feb  2 17:24 NOTICE.txt
drwxr-xr-x 2 root root  4096 Mar 24 14:36 plugins
-rw-rw-r-- 1 root root  8700 Feb  2 17:24 README.textile
drwxr-xr-x 2 root root  4096 Mar 24 14:31 work
-rw-r--r-- 1 root root 18208 Mar 24 14:35 wrapper.log
[root@node02 elasticsearch]#
[root@node02 elasticsearch]# mv elasticsearch-servicewrapper/service /usr/local/elasticsearch/bin/
[root@node02 elasticsearch]# /usr/local/elasticsearch/bin/service/elasticsearch
Usage: /usr/local/elasticsearch/bin/service/elasticsearch [ console | start | stop | restart | condrestart | status | install | remove | dump ]
Commands:
  console      Launch in the current console.
  start        Start in the background as a daemon process.
  stop         Stop if running as a daemon or in another console.
  restart      Stop if running and then start.
  condrestart  Restart only if already running.
  status       Query the current status.
  install      Install to start automatically when system boots.
  remove       Uninstall.
  dump         Request a Java thread dump if running.
[root@node02 elasticsearch]# /usr/local/elasticsearch/bin/service/elasticsearch install
Detected RHEL or Fedora:
Installing the Elasticsearch daemon..
[root@node02 elasticsearch]# /etc/init.d/elasticsearch start
Starting Elasticsearch...
Waiting for Elasticsearch..........................
running: PID:26753
[root@node02 elasticsearch]#
[root@node02 service]# pwd
/usr/local/elasticsearch/bin/service
[root@node02 service]# vim elasticsearch.conf




Tip: set set.default.ES_HEAP_SIZE to a value smaller than the server's physical memory. If it is set equal to the actual memory, startup will fail.
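A quick sanity check (the 512 value below is only an example; pick something below what free -m reports):

free -m
grep set.default.ES_HEAP_SIZE /usr/local/elasticsearch/bin/service/elasticsearch.conf
# e.g. set.default.ES_HEAP_SIZE=512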

Continuing from above, refresh http://10.0.0.42:9200/_plugin/head/


View node01's information:


Overview:



Indices:



Browse data:


Basic information:



Compound query:




Perform the steps above on each of the three nodes: node01, node02, and node03.
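Once all three nodes are running, a quick way to confirm they joined the same cluster (besides the head/Marvel UI) is the cluster health API:

curl 'http://10.0.0.41:9200/_cluster/health?pretty'
# expect "number_of_nodes" : 3 and "status" : "green" when all nodes have joined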


Logstash quick start

 

The official documentation recommends installing via yum:

Download and install the public signing key:

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

Add the following in your /etc/yum.repos.d/ directory in a file with a .repo suffix, for example logstash.repo

[logstash-2.2]
name=Logstash repository for 2.2.x packages
baseurl=http://packages.elastic.co/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

And your repository is ready for use. You can install it with:

yum install logstash


Here I install from the source tarball instead. nginx and Logstash are deployed on web01:

[root@web01 tools]# tar xf logstash-2.1.3.tar.gz
[root@web01 tools]# mv logstash-2.1.3 /usr/local/logstash
[root@web01 tools]# cd /usr/local/logstash/
[root@web01 logstash]# ll
total 152
drwxr-xr-x 2 root root   4096 Mar 24 15:50 bin
-rw-rw-r-- 1 root root 100805 Feb 17 05:00 CHANGELOG.md
-rw-rw-r-- 1 root root   2249 Feb 17 05:00 CONTRIBUTORS
-rw-rw-r-- 1 root root   3771 Feb 17 05:05 Gemfile
-rw-rw-r-- 1 root root  21970 Feb 17 05:00 Gemfile.jruby-1.9.lock
drwxr-xr-x 4 root root   4096 Mar 24 15:50 lib
-rw-rw-r-- 1 root root    589 Feb 17 05:00 LICENSE
-rw-rw-r-- 1 root root    149 Feb 17 05:00 NOTICE.TXT
drwxr-xr-x 4 root root   4096 Mar 24 15:50 vendor
[root@web01 logstash]#


Logstash configuration file format:

input { stdin { } }
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}


Configuration file structure:

# This is a comment. You should use comments to describe

# parts of your configuration.

input {
  ...
}
filter {
  ...
}
output {
  ...
}


Plugin structure:

input {
  file {
    path => "/var/log/messages"
    type => "syslog"
  }
  file {
    path => "/var/log/apache/access.log"
    type => "apache"
  }
}


Plugin value types:

Array

Example:

  path => [ "/var/log/messages", "/var/log/*.log" ]

  path => "/data/mysql/mysql.log"

Boolean

Example:

  ssl_enable => true

Bytes

Examples:

  my_bytes => "1113"   # 1113 bytes

  my_bytes => "10MiB"  # 10485760 bytes

  my_bytes => "100kib" # 102400 bytes

  my_bytes => "180 mb" # 180000000 bytes

Codec

Example:

  codec => "json"



Hash

Example:

match => {

  "field1" => "value1"

  "field2" => "value2"

  ...

}

Number

Numbers must be valid numeric values (floating point or integer).

Example:

  port => 33

Password

A password is a string with a single value that is not logged or printed.

Example:

  my_password => "password"

Path

A path is a string that represents a valid operating system path.

Example:

  my_path => "/tmp/logstash"

String

A string must be a single character sequence. Note that string values are enclosed in quotes, either double or single. Literal quotes in the string need to be escaped with a backslash if they are of the same kind as the string delimiter, i.e. single quotes within a single-quoted string need to be escaped as well as double quotes within a double-quoted string.

Example:

  name => "Hello world"

  name => 'It\'s a beautiful day'

Comments

Comments are the same as in perl, ruby, and python. A comment starts with a # character, and does not need to be at the beginning of a line. For example:

# this is a comment

input { # comments can appear at the end of a line, too

  # ...

}




Plugins differ between Logstash versions; here I am using logstash-2.1.3.tar.gz:

https://www.elastic.co/guide/en/logstash/2.1/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-hosts 
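To confirm which plugins ship with this version, the bundled plugin manager can list them (a sketch; in Logstash 2.x the tool is bin/plugin):

/usr/local/logstash/bin/plugin list | grep -E 'redis|elasticsearch|file'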

Logstash is started like this:

/usr/local/logstash/bin/logstash  -f /usr/local/logstash/logstash.conf  &

The full command-line usage is as follows:

[root@node01 ~]# /usr/local/logstash/bin/logstash -t
Error: Usage:
    /bin/logstash agent [OPTIONS]
Options:
    -f, --config CONFIG_PATH      Load the logstash config from a specific file
                                  or directory.  If a directory is given, all
                                  files in that directory will be concatenated
                                  in lexicographical order and then parsed as a
                                  single config file. You can also specify
                                  wildcards (globs) and any matched files will
                                  be loaded in the order described above.
    -e CONFIG_STRING              Use the given string as the configuration
                                  data. Same syntax as the config file. If no
                                  input is specified, then the following is
                                  used as the default input:
                                  "input { stdin { type => stdin } }"
                                  and if no output is specified, then the
                                  following is used as the default output:
                                  "output { stdout { codec => rubydebug } }"
                                  If you wish to use both defaults, please use
                                  the empty string for the '-e' flag.
                                   (default: "")
    -w, --filterworkers COUNT     Sets the number of filter workers to run.
                                   (default: 0)
    -l, --log FILE                Write logstash internal logs to the given
                                  file. Without this flag, logstash will emit
                                  logs to standard output.
    -v                            Increase verbosity of logstash internal logs.
                                  Specifying once will show 'informational'
                                  logs. Specifying twice will show 'debug'
                                  logs. This flag is deprecated. You should use
                                  --verbose or --debug instead.
    --quiet                       Quieter logstash logging. This causes only
                                  errors to be emitted.
    --verbose                     More verbose logging. This causes 'info'
                                  level logs to be emitted.
    --debug                       Most verbose logging. This causes 'debug'
                                  level logs to be emitted.
    -V, --version                 Emit the version of logstash and its friends,
                                  then exit.
    -p, --pluginpath PATH         A path of where to find plugins. This flag
                                  can be given multiple times to include
                                  multiple paths. Plugins are expected to be
                                  in a specific directory hierarchy:
                                  'PATH/logstash/TYPE/NAME.rb' where TYPE is
                                  'inputs' 'filters', 'outputs' or 'codecs'
                                  and NAME is the name of the plugin.
    -t, --configtest              Check configuration for valid syntax and then exit.
    --[no-]allow-unsafe-shutdown  Force logstash to exit during shutdown even
                                  if there are still inflight events in memory.
                                  By default, logstash will refuse to quit until all
                                  received events have been pushed to the outputs.
                                   (default: false)
    -h, --help                    print help
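Before writing a config file, the -e flag described above gives a quick smoke test: type a line on stdin and Logstash should echo a structured event (Ctrl-C to stop):

/usr/local/logstash/bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'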


07-logstash-file-redis-es


Installing and configuring Redis:

[root@kibana ~]# tar xf redis-3.0.7.tar.gz
[root@kibana ~]# cd redis-3.0.7
[root@kibana ~]# make MALLOC=jemalloc
[root@kibana ~]# make PREFIX=/application/redis-3.0.7 install
[root@kibana ~]# ln -s /application/redis-3.0.7 /application/redis
[root@kibana ~]# echo "export PATH=/application/redis/bin:$PATH" >>/etc/profile
[root@kibana ~]# source /etc/profile
[root@kibana ~]# mkdir -p /application/redis/conf
[root@kibana ~]# cp redis.conf /application/redis/conf/
[root@kibana ~]# vim /application/redis/conf/redis.conf


Modify the bind address to the host's own IP address, then start Redis:

[root@kibana ~]#redis-server /application/redis/conf/redis.conf &

Connect to the Redis service (10.0.0.44) to verify it is running:

[root@kibana ~]# redis-cli -h 10.0.0.44 -p 6379
10.0.0.44:6379> info
# Server
redis_version:3.0.7
redis_git_sha1:00000000


Modify the Logstash configuration file on web01 as follows:

[root@web01 ~]# vim /usr/local/logstash/conf/logstash.conf
input {
    file {
        #type => "accesslog"
        path => "/application/nginx/logs/access.log"
        start_position => "beginning"
    }
}
output {
    redis {
        data_type => "list"
        key => "system-messages"
        host => "10.0.0.44"
        port => "6379"
        db => "0"
    }
}
[root@web01 ~]#


The nginx access log is used as the test input here: the input file is defined as "/application/nginx/logs/access.log", and the output is stored in Redis with key "system-messages", host 10.0.0.44, port 6379, database db=0.
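A minimal sketch for bringing this shipper up on web01: test the config syntax with -t first, then start Logstash in the background (paths as configured above):

/usr/local/logstash/bin/logstash -f /usr/local/logstash/conf/logstash.conf -t
/usr/local/logstash/bin/logstash -f /usr/local/logstash/conf/logstash.conf &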

Make sure nginx is running and generating access log entries:

[root@web01 ~]# ps -ef|grep nginx
root      1109     1  0 12:59 ?        00:00:00 nginx: master process /application/nginx/sbin/nginx
nobody    1113  1109  0 12:59 ?        00:00:00 nginx: worker process
nobody    1114  1109  0 12:59 ?        00:00:00 nginx: worker process
nobody    1115  1109  0 12:59 ?        00:00:00 nginx: worker process
nobody    1116  1109  0 12:59 ?        00:00:00 nginx: worker process
root      1572  1185  0 14:06 pts/0    00:00:00 grep nginx
[root@web01 ~]#
[root@web01 ~]# tail -f /application/nginx/logs/access.log
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
10.0.0.10 - - [17/Aug/2016:14:10:02 +0800] "GET /index.html HTTP/1.0" 200 612 "-" "ApacheBench/2.3"
[root@web01 ~]#

Log in to Redis and check whether the data has arrived:

[root@kibana ~]# redis-cli
127.0.0.1:6379> keys *
1) "system-messages"
127.0.0.1:6379>
127.0.0.1:6379>
127.0.0.1:6379> keys *
1) "system-messages"
127.0.0.1:6379>

The access logs generated by these requests have indeed been stored in Redis. This completes the first hop; the next step is to ship the logs from Redis into Elasticsearch via Logstash.
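Besides keys *, you can check how many events are queued and peek at one of them (key name from the Logstash config above):

redis-cli -h 10.0.0.44 -p 6379 LLEN system-messages
redis-cli -h 10.0.0.44 -p 6379 LRANGE system-messages 0 0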




So far the data flows from the source (access.log) into Redis; next we move it from Redis into Elasticsearch.

Start a Logstash instance on the logstash host, with the configuration file /usr/local/logstash/logstash.conf:

[root@logstash ~]# cd /usr/local/logstash
[root@logstash logstash]# pwd
/usr/local/logstash
[root@logstash logstash]#
[root@node02 conf]# ps -ef|grep logstash
root      43072  42030  0 15:46 pts/1    00:00:00 grep logstash
[root@node02 conf]# cat /usr/local/logstash/logstash.conf
input {
    redis {
        data_type => "list"
        key => "system-messages"
        host => "10.0.0.44"
        port => "6379"
        db => "0"
    }
}
output {
    elasticsearch {
        hosts => "10.0.0.41"
        #protocol => "http"
        index => "redis-messages-%{+YYYY.MM.dd}"
    }
}
[root@logstash conf]#


Explanation: the input is defined as Redis, with the key, host, and port as follows:

 

redis {
    data_type => "list"
    key => "system-messages"
    host => "10.0.0.44"
    port => "6379"
    db => "0"
}

The data is output to Elasticsearch with the following configuration:

elasticsearch {
    hosts => "10.0.0.41"
    index => "system-redis-messages-%{+YYYY.MM.dd}"
}


Start Logstash to begin collecting logs:

[root@logstash conf]# /usr/local/logstash/bin/logstash -f /usr/local/logstash/conf/logstash.conf &
[1] 43097
[root@node02 conf]#
[root@node02 conf]#
[root@node02 conf]#
[root@node02 conf]#
[root@node02 conf]# ps -ef|grep logstash
[root@logstash logstash]# ps -ef|grep logstash
root      1169  1141  7 13:11 pts/0    00:04:30 /usr/java/jdk1.8.0_73/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xss2048k -Djffi.boot.library.path=/usr/local/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/logstash/heapdump.hprof -Xbootclasspath/a:/usr/local/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/local/logstash/vendor/jruby -Djruby.lib=/usr/local/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /usr/local/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /usr/local/logstash/conf/logstash.conf
root      1339  1141  0 14:14 pts/0    00:00:00 grep logstash
[root@logstash logstash]#

Now check the Redis database again to confirm the data has been drained (i.e., forwarded to Elasticsearch):

127.0.0.1:6379> LLEN system-messages
(integer) 0
127.0.0.1:6379>
127.0.0.1:6379>
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379>


The output above shows that the data has been shipped on to Elasticsearch; let's verify:
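A command-line check is to list the indices and look for the one created by the config above (index name pattern redis-messages-YYYY.MM.dd):

curl 'http://10.0.0.41:9200/_cat/indices?v'
# a redis-messages-<date> index should appear with a growing docs.count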


Drill down into the log entries:


Next, have Logstash collect nginx logs in JSON format and forward them to Elasticsearch.

nginx log_format configuration for JSON-formatted access logs:

log_format logstash_json '{ "@timestamp": "$time_local", '
                         '"@fields": { '
                         '"remote_addr": "$remote_addr", '
                         '"remote_user": "$remote_user", '
                         '"body_bytes_sent": "$body_bytes_sent", '
                         '"request_time": "$request_time", '
                         '"status": "$status", '
                         '"request": "$request", '
                         '"request_method": "$request_method", '
                         '"http_referrer": "$http_referer", '
                         '"body_bytes_sent":"$body_bytes_sent", '
                         '"http_x_forwarded_for": "$http_x_forwarded_for", '
                         '"http_user_agent": "$http_user_agent" } }';

The modified nginx configuration file:

[root@web01 conf]# pwd
/application/nginx/conf
[root@web01 conf]# cat nginx.conf
worker_processes  4;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format logstash_json '{ "@timestamp": "$time_local", '
                         '"@fields": { '
                         '"remote_addr": "$remote_addr", '
                         '"remote_user": "$remote_user", '
                         '"body_bytes_sent": "$body_bytes_sent", '
                         '"request_time": "$request_time", '
                         '"status": "$status", '
                         '"request": "$request", '
                         '"request_method": "$request_method", '
                         '"http_referrer": "$http_referer", '
                         '"body_bytes_sent":"$body_bytes_sent", '
                         '"http_x_forwarded_for": "$http_x_forwarded_for", '
                         '"http_user_agent": "$http_user_agent" } }';
    sendfile        on;
    keepalive_timeout  65;
    server {
        listen       80;
        server_name  localhost;
        access_log  logs/access_json.log logstash_json;
        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
[root@node01 conf]#


Restart nginx:

[root@web01 nginx]# killall nginx
nginx: no process killed
[root@web01 nginx]#
[root@web01 nginx]# /application/nginx/sbin/nginx -t
nginx: the configuration file /application/nginx-1.6.0/conf/nginx.conf syntax is ok
nginx: configuration file /application/nginx-1.6.0/conf/nginx.conf test is successful
[root@web01 nginx]#
[root@web01 nginx]# cd -
/application/nginx/logs
[root@web01 logs]# ll
total 9976
-rw-r--r--  1 root root        0 Aug 17 14:27 access_json.log
-rw-r--r--. 1 root root 10056370 Aug 17 14:26 access.log
-rw-r--r--. 1 root root   146445 Aug 17 00:15 error.log
-rw-r--r--  1 root root        5 Aug 17 14:27 nginx.pid
[root@web01 logs]#
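The syntax test above passed, but nginx itself still needs to be started again; a minimal follow-up using the same binary path as above:

/application/nginx/sbin/nginx
ps -ef | grep '[n]ginx'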


Use ab to generate requests against the local nginx service:

[root@logstash conf]# ab -n 10000 -c 50 http://10.0.0.9/index.html
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 10.0.0.9 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:        nginx/1.6.2
Server Hostname:        10.0.0.9
Server Port:            80
Document Path:          /index.html
Document Length:        612 bytes
Concurrency Level:      50
Time taken for tests:   2.277 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      8447596 bytes
HTML transferred:       6125508 bytes
Requests per second:    4391.57 [#/sec] (mean)
Time per request:       11.385 [ms] (mean)
Time per request:       0.228 [ms] (mean, across all concurrent requests)
Transfer rate:          3622.87 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   2.5      2      27
Processing:     2    9   4.9      9      50
Waiting:        1    8   5.1      8      49
Total:          3   11   4.8     11      54
Percentage of the requests served within a certain time (ms)
  50%     11
  66%     11
  75%     12
  80%     13
  90%     15
  95%     19
  98%     26
  99%     33
 100%     54 (longest request)
[root@logstash conf]#


The nginx access log entries produced:

[root@web01 logs]# head -n 10 access_json.log
{ "@timestamp": "17/Aug/2016:14:27:49 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.006", "status": "200", "request": "GET /index.html HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }
{ "@timestamp": "17/Aug/2016:14:27:49 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.000", "status": "200", "request": "GET /index.html HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }
{ "@timestamp": "17/Aug/2016:14:27:49 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.000", "status": "200", "request": "GET /index.html HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }
{ "@timestamp": "17/Aug/2016:14:27:49 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.007", "status": "200", "request": "GET /index.html HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }
{ "@timestamp": "17/Aug/2016:14:27:49 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.007", "status": "200", "request": "GET /index.html HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }
{ "@timestamp": "17/Aug/2016:14:27:49 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.007", "status": "200", "request": "GET /index.html HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }
{ "@timestamp": "17/Aug/2016:14:27:49 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.007", "status": "200", "request": "GET /index.html HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }
{ "@timestamp": "17/Aug/2016:14:27:49 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.007", "status": "200", "request": "GET /index.html HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }
{ "@timestamp": "17/Aug/2016:14:27:49 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.007", "status": "200", "request": "GET /index.html HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }
{ "@timestamp": "17/Aug/2016:14:27:49 +0800", "@fields": { "remote_addr": "10.0.0.10", "remote_user": "-", "body_bytes_sent": "612", "request_time": "0.007", "status": "200", "request": "GET /index.html HTTP/1.0", "request_method": "GET", "http_referrer": "-", "body_bytes_sent":"612", "http_x_forwarded_for": "-", "http_user_agent": "ApacheBench/2.3" } }
[root@web01 logs]#
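The nginx log_format that produces these entries (presumably set up earlier in the nginx configuration) can be reconstructed from the field names visible above, including the duplicated body_bytes_sent entry; it is likely something roughly like this, not necessarily the author's exact definition:

log_format json '{ "@timestamp": "$time_local", "@fields": { '
                '"remote_addr": "$remote_addr", "remote_user": "$remote_user", '
                '"body_bytes_sent": "$body_bytes_sent", "request_time": "$request_time", '
                '"status": "$status", "request": "$request", "request_method": "$request_method", '
                '"http_referrer": "$http_referer", "body_bytes_sent": "$body_bytes_sent", '
                '"http_x_forwarded_for": "$http_x_forwarded_for", "http_user_agent": "$http_user_agent" } }';
access_log /application/nginx/logs/access_json.log json;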

The logstash shipper configuration on web01 is:

[root@web01 logs]# vim /usr/local/logstash/conf/logstash.conf
input {
    file {
         #type => "accesslog"
         path => "/application/nginx/logs/access_json.log"
         start_position => "beginning"
    }
}
output {
    redis {
         data_type => "list"
         key => "system-messages"
         host => "10.0.0.44"
         port => "6379"
         db => "0"
    }
}
[root@web01 logs]#
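Because the nginx log lines are already JSON, the file input could optionally declare a JSON codec so each event reaches redis with its fields already parsed. This is a hedged variation rather than the author's configuration; note that the @timestamp value emitted by nginx is not in ISO8601 form, so a date filter may still be needed downstream:

input {
    file {
         path => "/application/nginx/logs/access_json.log"
         start_position => "beginning"
         codec => "json"          # decode each line into structured fields
    }
}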


Restart logstash on web01:

[root@web01 logs]# /usr/local/logstash/bin/logstash -f /usr/local/logstash/conf/logstash.conf &
[1] 20328
[root@web01 logs]# ps -ef|grep logstash
root      1638  1185 99 14:32 pts/0    00:03:56 /usr/java/jdk1.8.0_73/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xss2048k -Djffi.boot.library.path=/usr/local/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/logstash/heapdump.hprof -Xbootclasspath/a:/usr/local/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/local/logstash/vendor/jruby -Djruby.lib=/usr/local/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /usr/local/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /usr/local/logstash/conf/logstash.conf
root      1678  1185  0 14:34 pts/0    00:00:00 grep logstash
[root@web01 ~]#



Connect to redis to check whether any log entries have been written. Note that the shipper writes to db 0, so the `select 2` below shows an empty database; no traffic has been generated yet either:

[root@kibana]# redis-cli -h 10.0.0.44 -p 6379
10.0.0.44:6379> select 2
OK
10.0.0.44:6379[2]> keys *
(empty list or set)
10.0.0.44:6379[2]>
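A convenient way to watch the queue depth (not part of the original session) is to poll the list length in db 0:

[root@kibana]# redis-cli -h 10.0.0.44 -p 6379 -n 0 llen system-messages
(integer) 0        # hypothetical output; the number grows while the shipper is pushing events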


Generate some traffic against nginx with ab:

[root@node01 ~]# ab -n1000 -c 100 http://10.0.0.9/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.0.0.10 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software:        nginx/1.6.2
Server Hostname:        10.0.0.9
Server Port:            80

Document Path:          /
Document Length:        612 bytes

Concurrency Level:      100
Time taken for tests:   0.274 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      881136 bytes
HTML transferred:       638928 bytes
Requests per second:    3649.85 [#/sec] (mean)
Time per request:       27.398 [ms] (mean)
Time per request:       0.274 [ms] (mean, across all concurrent requests)
Transfer rate:          3140.64 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    8   4.3      9      15
Processing:     4   18   8.6     16      51
Waiting:        1   15   9.3     12      49
Total:         12   26   5.7     25      52

Percentage of the requests served within a certain time (ms)
  50%     25
  66%     27
  75%     28
  80%     29
  90%     34
  95%     38
  98%     42
  99%     42
 100%     52 (longest request)
[root@node01 ~]#


Check redis again to confirm that data is now being written (this time in db 0):

[root@kibana]# redis-cli -h 10.0.0.44 -p 6379
10.0.0.44:6379> keys *
(empty list or set)
10.0.0.44:6379> keys *
1) "system-messages"
10.0.0.44:6379> keys *
1) "system-messages"
10.0.0.44:6379> keys *
1) "system-messages"
10.0.0.44:6379> keys *
1) "system-messages"
10.0.0.44:6379>


The data is being written to redis successfully. Next, ship the data from redis into elasticsearch.

On the logstash host (10.0.0.10), write the indexer configuration:

[root@logstash]# cd /usr/local/logstash/conf
[root@logstash conf]# cat logstash.conf
input {
     redis {
         data_type => "list"
         key => "system-messages"
         host => "10.0.0.44"
         port => "6379"
         db => "0"
     }
}
output {
   elasticsearch {
      hosts => "10.0.0.41"
      index => "nginx-access-log-%{+YYYY.MM.dd}"
  }
}
[root@logstash conf]#
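One optional hardening tweak (my own suggestion, assuming the default HTTP port 9200 on each node): list all three elasticsearch nodes in the output so indexing keeps working if node01 is down:

output {
   elasticsearch {
      # any reachable node in the cluster can accept the bulk writes
      hosts => ["10.0.0.41:9200", "10.0.0.42:9200", "10.0.0.43:9200"]
      index => "nginx-access-log-%{+YYYY.MM.dd}"
  }
}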


Test that the configuration file is valid, then start logstash:

[root@logstash conf]# /usr/local/logstash/bin/logstash -t -f /usr/local/logstash/conf/logstash.conf
Configuration OK
[root@logstash conf]# /usr/local/logstash/bin/logstash -f /usr/local/logstash/conf/logstash.conf &
[1] 44494
[root@logstash conf]#

 

Run another ab test to generate more traffic:

[root@logstash ~]# ab -n 10000 -c100 http://10.0.0.41/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.0.0.10 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests

Server Software:        nginx/1.6.2
Server Hostname:        10.0.0.10
Server Port:            80

Document Path:          /
Document Length:        612 bytes

Concurrency Level:      100
Time taken for tests:   2.117 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      8510052 bytes
HTML transferred:       6170796 bytes
Requests per second:    4722.57 [#/sec] (mean)
Time per request:       21.175 [ms] (mean)
Time per request:       0.212 [ms] (mean, across all concurrent requests)
Transfer rate:          3924.74 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    4   3.7      3      25
Processing:     2   17  10.8     15      75
Waiting:        1   15  10.3     12      75
Total:          5   21  10.9     18      75

Percentage of the requests served within a certain time (ms)
  50%     18
  66%     24
  75%     27
  80%     29
  90%     36
  95%     44
  98%     49
  99%     57
 100%     75 (longest request)
[root@node01 ~]#


 

Check redis again. The key appears while events are queued and disappears once the indexer has drained the list:

[root@kibana ~]# redis-cli -h 10.0.0.44 -p 6379
10.0.0.44:6379> keys *
(empty list or set)
10.0.0.44:6379> keys *
1) "system-messages"
10.0.0.44:6379> keys *
1) "system-messages"
10.0.0.44:6379> keys *
1) "system-messages"
10.0.0.44:6379> keys *
(empty list or set)
10.0.0.44:6379>


Check the data in elasticsearch:

[Screenshot: the nginx-access-log index in elasticsearch]

Drill into the data:

[Screenshot: viewing an individual document]

The documents are stored as key/value pairs, i.e. in JSON format.

[Screenshot: document fields shown as JSON key/value pairs]
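The same check can also be made from the command line with elasticsearch's REST API (my own illustration; the document count and sizes shown are hypothetical):

[root@node01 ~]# curl -s 'http://10.0.0.41:9200/_cat/indices?v'
health status index                       pri rep docs.count docs.deleted store.size pri.store.size
green  open   nginx-access-log-2016.08.17   5   1      11000            0      5.1mb          2.6mb
[root@node01 ~]# curl -s 'http://10.0.0.41:9200/nginx-access-log-*/_search?size=1&pretty'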


Now use kibana to visualize the data collected by the logstash + redis + elasticsearch pipeline.

Unpack kibana:

[root@kibana ~]# cd /opt/tools
[root@kibana tools]# tar xf kibana-4.1.6-linux-x64.tar.gz
[root@kibana tools]# mv kibana-4.1.6-linux-x64 /usr/local/kibana
[root@kibana tools]# cd /usr/local/kibana/
[root@kibana kibana]#


Configure it:

[root@kibana config]# pwd
/usr/local/kibana/config
[root@kibana config]# ll
total 4
-rw-r--r-- 1 logstash games 2933 Mar 10 03:29 kibana.yml
[root@kibana config]# vim kibana.yml


[Screenshot: default kibana.yml]

Change it to:

[Screenshot: modified kibana.yml]
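Since the screenshot does not survive here, the relevant lines of kibana.yml for Kibana 4.1 probably look roughly like the following (a hedged reconstruction: the key names are the 4.1 defaults, and the elasticsearch address points at node01 as in the logstash output above):

# kibana.yml (Kibana 4.1.x option names; values assumed from this lab's topology)
port: 5601
host: "0.0.0.0"
elasticsearch_url: "http://10.0.0.41:9200"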


[root@kibana config]# cd ..
[root@kibana kibana]# pwd
/usr/local/kibana
[root@kibana kibana]# ll
total 28
drwxr-xr-x 2 logstash games 4096 Mar 28 23:24 bin
drwxr-xr-x 2 logstash games 4096 Mar 28 23:31 config
-rw-r--r-- 1 logstash games  563 Mar 10 03:29 LICENSE.txt
drwxr-xr-x 6 logstash games 4096 Mar 28 23:24 node
drwxr-xr-x 2 logstash games 4096 Mar 28 23:24 plugins
-rw-r--r-- 1 logstash games 2510 Mar 10 03:29 README.txt
drwxr-xr-x 9 logstash games 4096 Mar 10 03:29 src
[root@kibana kibana]# ./bin/kibana -h

  Usage: kibana [options]

  Kibana is an open source (Apache Licensed), browser based analytics and search dashboard for Elasticsearch.

  Options:
    -h, --help                 output usage information
    -V, --version              output the version number
    -e, --elasticsearch <uri>  Elasticsearch instance
    -c, --config <path>        Path to the config file
    -p, --port <port>          The port to bind to
    -q, --quiet                Turns off logging
    -H, --host <host>          The host to bind to
    -l, --log-file <path>      The file to log to
    --plugins <path>           Path to scan for plugins
[root@kibana kibana]#


Run kibana in the background:

[root@kibana kibana]# nohup ./bin/kibana &
[1] 20765
[root@kibana kibana]# nohup: ignoring input and appending output to `nohup.out'
[root@kibana kibana]#


Verify that it started and is listening:

[root@kibana kibana]# netstat -pnutl|grep 5601
tcp        0      0 0.0.0.0:5601                0.0.0.0:*                   LISTEN      20765/./bin/../node
[root@kibana kibana]# ps -ef|grep kibana
root      20765  20572  4 23:34 pts/6    00:00:02 ./bin/../node/bin/node ./bin/../src/bin/kibana.js
root      20780  20572  0 23:35 pts/6    00:00:00 grep kibana
[root@kibana kibana]#
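As an extra sanity check (my own addition), the port can also be probed over HTTP from any host; the exact status line may vary, but getting a response confirms kibana is serving:

[root@node01 ~]# curl -sI http://10.0.0.44:5601/ | head -n 1
HTTP/1.1 200 OK          # hypothetical output; a 3xx redirect into the app would also be fine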


Open http://10.0.0.44:5601/ (the kibana host) in a browser.


[Screenshot: kibana index pattern creation page]

Click "Create" to create the nginx-access-log index pattern.


Switch to the Discover tab.

[Screenshot: kibana Discover tab]

In the time picker you can choose ranges such as "Today" or "Last 15 minutes"; try "Last 15 minutes" first.

Click "Last 15 minutes":

[Screenshot: Discover with the Last 15 minutes range, showing no results] Because the indexed events do not fall inside the last 15 minutes, Kibana reports that no results were found. Choose "Today" instead, so the time range is wide enough, and look again.

Click "Today":

[Screenshot: Discover results with the Today time range]

The steps below come from an earlier run of this setup and are listed for reference only.

All fields are displayed by default; to show only the columns you need, use "add" next to the desired fields.


[Screenshot: selecting the fields to display with "add"]


The search bar lets you filter events, for example all requests with status 404:

[Screenshot: search results for status 404]


And all requests with status 200:

[Screenshot: search results for status 200]
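The search box accepts Lucene query-string syntax. A few hedged examples, assuming the JSON fields (status, request_method, remote_addr) have been parsed into the events:

status:404
status:200 AND request_method:"GET"
status:404 AND remote_addr:"10.0.0.10"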


Summary of the ELK Stack workflow

Source data (log files from tomcat, Apache, PHP and other services) --> the logstash shipper writes the raw events into redis --> the logstash indexer reads them from redis and writes them into elasticsearch --> kibana analyzes and organizes the data in elasticsearch and presents it.
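Rendered as text for this lab's hosts, the pipeline is:

nginx on web01 (10.0.0.9) writes access_json.log
        |
        v  logstash shipper (file input -> redis output)
redis list "system-messages" on 10.0.0.44, db 0
        |
        v  logstash indexer on 10.0.0.10 (redis input -> elasticsearch output)
elasticsearch cluster node01/node02/node03 (index nginx-access-log-YYYY.MM.dd)
        |
        v
kibana on 10.0.0.44:5601 for search and visualization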


[Diagram: ELK Stack data flow]

Appendix: annotated elasticsearch configuration options

The elasticsearch config folder contains two files: elasticsearch.yml and logging.yml. The first is the main elasticsearch configuration; the second configures logging. Elasticsearch uses log4j for logging, so logging.yml is written like an ordinary log4j configuration file. The options below are the ones that can be set in elasticsearch.yml.

cluster.name: elasticsearch
    Cluster name, default "elasticsearch". Nodes on the same network segment discover each other automatically; this property is what separates multiple clusters running on the same segment.
node.name: "Franz Kafka"
    Node name. By default a random name is picked from a list shipped inside the es jar's config folder, which contains many amusing names added by the authors.
node.master: true
    Whether the node is eligible to be elected master, default true. The first machine in the cluster becomes master by default; if it goes down a new master is elected.
node.data: true
    Whether the node stores index data, default true.
index.number_of_shards: 5
    Default number of primary shards per index, default 5.
index.number_of_replicas: 1
    Default number of replicas per index, default 1.
path.conf: /path/to/conf
    Path to configuration files, default the config folder under the es root directory.
path.data: /path/to/data
    Path where index data is stored, default the data folder under the es root. Multiple paths may be given, comma-separated, e.g.:
    path.data: /path/to/data1,/path/to/data2
path.work: /path/to/work
    Path for temporary files, default the work folder under the es root.
path.logs: /path/to/logs
    Path for log files, default the logs folder under the es root.
path.plugins: /path/to/plugins
    Path for plugins, default the plugins folder under the es root.
bootstrap.mlockall: true
    Set to true to lock the process memory. Elasticsearch becomes inefficient once the JVM starts swapping, so make sure it never swaps: set ES_MIN_MEM and ES_MAX_MEM to the same value, make sure the machine has enough memory for es, and allow the elasticsearch process to lock memory (on Linux, e.g. with `ulimit -l unlimited`).
network.bind_host: 192.168.0.1
    IP address to bind to, IPv4 or IPv6, default 0.0.0.0.
network.publish_host: 192.168.0.1
    Address other nodes use to reach this node. If unset it is determined automatically; the value must be a real IP address.
network.host: 192.168.0.1
    Sets both bind_host and publish_host at once.
transport.tcp.port: 9300
    TCP port for inter-node communication, default 9300.
transport.tcp.compress: true
    Whether to compress data on the TCP transport, default false (no compression).
http.port: 9200
    HTTP port for external access, default 9200.
http.max_content_length: 100mb
    Maximum HTTP content size, default 100mb.
http.enabled: false
    Whether to expose the HTTP API, default true (enabled).
gateway.type: local
    Gateway type, default local (the local filesystem). It can also be set to a distributed filesystem, Hadoop HDFS, or Amazon S3; other gateway types are not covered here.
gateway.recover_after_nodes: 1
    Start data recovery once N nodes in the cluster are up, default 1.
gateway.recover_after_time: 5m
    Timeout before the initial recovery process starts, default 5 minutes.
gateway.expected_nodes: 2
    Expected number of nodes in the cluster, default 2; once this many nodes are up, recovery starts immediately.
cluster.routing.allocation.node_initial_primaries_recoveries: 4
    Number of concurrent recovery threads during initial data recovery, default 4.
cluster.routing.allocation.node_concurrent_recoveries: 2
    Number of concurrent recovery threads when adding/removing nodes or rebalancing, default 4.
indices.recovery.max_size_per_sec: 0
    Bandwidth limit during data recovery, e.g. 100mb; default 0, i.e. unlimited.
indices.recovery.concurrent_streams: 5
    Maximum number of concurrent streams opened when recovering data from other shards, default 5.
discovery.zen.minimum_master_nodes: 1
    Number of master-eligible nodes a node must be able to see in the cluster. Default 1; for large clusters set it higher (2-4).
discovery.zen.ping.timeout: 3s
    Ping timeout for automatic node discovery, default 3 seconds; raise it on unreliable networks to avoid discovery errors.
discovery.zen.ping.multicast.enabled: false
    Whether multicast discovery is enabled, default true.
discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3[portX-portY]"]
    Initial list of master nodes, used to discover nodes newly joining the cluster.
Slow-query log settings:
index.search.slowlog.level: TRACE
index.search.slowlog.threshold.query.warn: 10s
index.search.slowlog.threshold.query.info: 5s
index.search.slowlog.threshold.query.debug: 2s
index.search.slowlog.threshold.query.trace: 500ms
index.search.slowlog.threshold.fetch.warn: 1s
index.search.slowlog.threshold.fetch.info: 800ms
index.search.slowlog.threshold.fetch.debug: 500ms
index.search.slowlog.threshold.fetch.trace: 200ms
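Applied to this lab's three-node cluster, a minimal elasticsearch.yml might look like the sketch below. This is my own illustration, not the author's actual file: the cluster name is assumed, and unicast discovery is used so the nodes find each other without multicast:

# /usr/local/elasticsearch/config/elasticsearch.yml on node01
# (change node.name and network.host accordingly on node02/node03)
cluster.name: elk-cluster                                  # assumed name; any value shared by all three nodes
node.name: "node01"
network.host: 10.0.0.41
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.41", "10.0.0.42", "10.0.0.43"]
discovery.zen.minimum_master_nodes: 2                      # majority of three master-eligible nodes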