[Big Data Development & Operations Solutions] Elasticsearch + Logstash + Kibana (6.7.1) Installation and Deployment


For this walkthrough, Elasticsearch, Logstash, and Kibana are all installed on a single virtual machine for personal study use.

I. Deploying Elasticsearch

1. Download the installation package

Official download address:
ES official download site
Select the Elasticsearch component

2. Create an es user

Elasticsearch will not start as root, so a dedicated user and group are needed.


[root@s133061 elasticsearch-6.7.1]# useradd es
[root@s133061 elasticsearch-6.7.1]# passwd es
Changing password for user es.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.

3. Upload and extract the installation package

[root@s133061 elk]# pwd
/hadoop/elk
[root@s133061 elk]# ls
elasticsearch-6.7.1.tar.gz  kibana-6.7.1-linux-x86_64.tar.gz  logstash-6.7.1.tar.gz

[root@s133061 elk]# tar -xvf elasticsearch-6.7.1.tar.gz -C /home/es/

4. Change ownership of the Elasticsearch directory to the es user and group

[root@s133061 elk]# cd /home/es/
[root@s133061 es]# ls
elasticsearch-6.7.1
[root@s133061 es]# chown -R es:es elasticsearch-6.7.1
[root@s133061 es]# ll
total 0
drwxr-xr-x 8 es es 143 Apr  3  2019 elasticsearch-6.7.1

5. Edit the configuration file

[root@s133061 es]# cd elasticsearch-6.7.1/
[root@s133061 elasticsearch-6.7.1]# ls
bin  config  lib  LICENSE.txt  logs  modules  NOTICE.txt  plugins  README.textile
[root@s133061 elasticsearch-6.7.1]# cd config/
[root@s133061 config]# ls
elasticsearch.yml  jvm.options  log4j2.properties  role_mapping.yml  roles.yml  users  users_roles
Switch to the es user, then create the log and data directories first:
[es@s133061 elasticsearch-6.7.1]$ mkdir -p data/es
[es@s133061 elasticsearch-6.7.1]$ mkdir -p data/logs/es
Next, edit the configuration file:
[es@s133061 elasticsearch-6.7.1]$ pwd
/home/es/elasticsearch-6.7.1
[es@s133061 elasticsearch-6.7.1]$ cd config/
[es@s133061 config]$ vim elasticsearch.yml
Change the following settings:
path.data: /home/es/elasticsearch-6.7.1/data/es
path.logs: /home/es/elasticsearch-6.7.1/data/logs/es
network.host: 10.241.133.61
http.port: 9200
# the two CORS settings below allow ES to accept cross-origin requests
http.cors.enabled: true
http.cors.allow-origin: '*'
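
These settings can be verified over the REST API once the node is up (step 7 below). A quick check, assuming the node is reachable at the address configured above:

[es@s133061 config]$ curl -s "http://10.241.133.61:9200/_nodes/settings?pretty" | grep -A 2 cors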

6. Adjust Linux system resource limits

The errors below are the ones I hit on startup after making the configuration changes above, along with their solutions. I suggest applying these settings to your own system before starting ES.

a. max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
Edit /etc/security/limits.conf and add the following to raise the maximum number of files the es user's processes may have open at once (note the soft limit must be at least 65536, or the check still fails):
es soft nofile 65536
es hard nofile 65537
Switch to the es user and check the current limits with these two commands:
ulimit -Hn
ulimit -Sn
Note: the new limits only take effect after the user logs out and back in.

b. max number of threads [3818] for user [es] is too low, increase to at least [4096]
The maximum thread count is too low. Edit /etc/security/limits.conf and add:
es - nproc 4096
# or equivalently
es soft nproc 4096
es hard nproc 4096
Switch to the es user and check the current maximum thread counts with these two commands:
ulimit -Hu
ulimit -Su
Note: the new limits only take effect after the user logs out and back in.
c. max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Edit /etc/sysctl.conf and append:
vm.max_map_count=262144
Run sysctl -p to apply it.
d. memory locking requested for elasticsearch process but memory is not locked
This error only appears when bootstrap.memory_lock: true is enabled in elasticsearch.yml. Edit /etc/security/limits.conf and add:
* soft memlock unlimited
* hard memlock unlimited
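
Putting errors (a), (b), and (d) together, the additions to /etc/security/limits.conf look roughly like this:

es soft nofile 65536
es hard nofile 65537
es soft nproc 4096
es hard nproc 4096
* soft memlock unlimited
* hard memlock unlimited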

7. Start Elasticsearch

[es@s133061 elasticsearch-6.7.1]$ cd bin/
[es@s133061 bin]$ nohup ./elasticsearch &
[1] 23324
[es@s133061 bin]$ nohup: ignoring input and appending output to ‘nohup.out’

Visit the URL: http://10.241.133.61:9200/
If the following response appears, ES started successfully:

{
  "name" : "GRQFxQq",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "j916An9yQ9GSJ6eA5qt_-w",
  "version" : {
    "number" : "6.7.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "2f32220",
    "build_date" : "2019-04-02T15:59:27.961366Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

8. Stop command

netstat -ntlp | grep 9200 | awk '{print $7}' | awk -F '/' '{print $1}' | xargs kill -9 
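
Note that kill -9 skips Elasticsearch's graceful shutdown. A gentler variant of the same pipeline sends the default SIGTERM, which gives the node a chance to shut down cleanly:

netstat -ntlp | grep 9200 | awk '{print $7}' | awk -F '/' '{print $1}' | xargs kill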

II. Deploying Logstash

1. Download the installation package

Official download address:
ES official download site
Select the Logstash component

2. Extract the archive

[root@s133061 elk]# pwd
/hadoop/elk
[root@s133061 elk]# tar -xvf logstash-6.7.1.tar.gz
[root@s133061 elk]# ls
elasticsearch-6.7.1.tar.gz  kibana-6.7.1-linux-x86_64.tar.gz  logstash-6.7.1  logstash-6.7.1.tar.gz
[root@s133061 elk]# cd logstash-6.7.1/
[root@s133061 logstash-6.7.1]# ls
bin  config  CONTRIBUTORS  data  Gemfile  Gemfile.lock  lib  LICENSE.txt  logstash-core  logstash-core-plugin-api  modules  NOTICE.TXT  tools  vendor  x-pack
[root@s133061 logstash-6.7.1]# mkdir zhaoyd
[root@s133061 logstash-6.7.1]# cd zhaoyd

3. Create an index in ES

Create the index:
[root@s133061 zhaoyd]#  curl -XPUT http://10.241.133.61:9200/zydtest
{"acknowledged":true,"shards_acknowledged":true,"index":"zydtest"}
Query the newly created index:
[root@s133061 zhaoyd]# curl -XGET "http://10.241.133.61:9200/zydtest/_search?pretty"
{
  "took" : 85,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : null,
    "hits" : [ ]
  }
}
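
The index above was created with all defaults; the _shards block shows the 6.x default of 5 primary shards. If you want to control sharding up front, settings can be passed in the creation request. A sketch, using a hypothetical index name zydtest2:

curl -XPUT "http://10.241.133.61:9200/zydtest2" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}'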

4. Test loading data from Logstash into Elasticsearch

Create some test data:

[root@s133061 zhaoyd]# cat log.log
2020-12-14T10:50:58.000+00:00 INFO this is a test! 10.241.133.13

Create the pipeline configuration file:

[root@s133061 zhaoyd]# cat test.conf
input{
        file{
                path => "/hadoop/elk/logstash-6.7.1/zhaoyd/log.log"
                start_position => "beginning"
        }
}
filter{
        grok{
                match=>{ "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} %{DATA:message} %{IP:address}" }
        }
}
output{
        elasticsearch{
                hosts => "10.241.133.61:9200"
                index => "zydtest"
        }
        stdout {}
}
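
Before wiring the pipeline to ES, the grok pattern can be sanity-checked with a throwaway pipeline that reads from stdin and prints the parsed event. A sketch, saved under a hypothetical name such as zhaoyd/grok_test.conf (the pattern is copied verbatim from test.conf above):

input{
        stdin{}
}
filter{
        grok{
                match=>{ "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} %{DATA:message} %{IP:address}" }
        }
}
output{
        stdout { codec => rubydebug }
}

Run ./bin/logstash -f zhaoyd/grok_test.conf and paste in the sample log line; the parsed fields should match those shown in the output below.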

Run the import job to load the contents of log.log into ES:

[root@s133061 logstash-6.7.1]# pwd
/hadoop/elk/logstash-6.7.1
[root@s133061 logstash-6.7.1]# ./bin/logstash -f zhaoyd/test.conf
Sending Logstash logs to /hadoop/elk/logstash-6.7.1/logs which is now configured via log4j2.properties
[2020-12-16T16:36:34,297][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-12-16T16:36:34,319][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.7.1"}
[2020-12-16T16:36:44,171][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2020-12-16T16:36:44,792][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.241.133.61:9200/]}}
[2020-12-16T16:36:45,063][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.241.133.61:9200/"}
[2020-12-16T16:36:45,147][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2020-12-16T16:36:45,152][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2020-12-16T16:36:45,195][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.241.133.61:9200"]}
[2020-12-16T16:36:45,217][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2020-12-16T16:36:45,315][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2020-12-16T16:36:45,429][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2020-12-16T16:36:46,042][INFO ][logstash.inputs.file     ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/hadoop/elk/logstash-6.7.1/data/plugins/inputs/file/.sincedb_a1dc436238800e668fd2df0a021e5d89", :path=>["/hadoop/elk/logstash-6.7.1/zhaoyd/log.log"]}
[2020-12-16T16:36:46,106][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x1fe96bd7 run>"}
[2020-12-16T16:36:46,197][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2020-12-16T16:36:46,204][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-12-16T16:36:46,834][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
/hadoop/elk/logstash-6.7.1/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
       "message" => [
        [0] "2020-12-14T10:50:58.000+00:00 INFO this is a test! 10.241.133.13",
        [1] "this is a test!"
    ],
    "@timestamp" => 2020-12-16T08:36:47.058Z,
          "host" => "s133061",
     "log-level" => "INFO",
       "address" => "10.241.133.13",
      "@version" => "1",
     "timestamp" => "2020-12-14T10:50:58.000+00:00",
          "path" => "/hadoop/elk/logstash-6.7.1/zhaoyd/log.log"
}
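
Note that message shows up as an array in the event above: the grok pattern captures %{DATA:message} into a field that already exists (the raw input line), and by default grok appends to an existing field rather than replacing it. If only the captured text is wanted, grok's overwrite option handles that; a sketch of the adjusted filter:

filter{
        grok{
                match=>{ "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} %{DATA:message} %{IP:address}" }
                overwrite => [ "message" ]
        }
}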

The import job completed successfully; now query the index data:

[root@s133061 logstash-6.7.1]# curl -XGET "http://10.241.133.61:9200/zydtest/_search?pretty"
{
  "took" : 25,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "zydtest",
        "_type" : "doc",
        "_id" : "ZR6xanYBLtVoEIHt0n-X",
        "_score" : 1.0,
        "_source" : {
          "message" : [
            "2020-12-14T10:50:58.000+00:00 INFO this is a test! 10.241.133.13",
            "this is a test!"
          ],
          "@timestamp" : "2020-12-16T08:36:47.058Z",
          "host" : "s133061",
          "log-level" : "INFO",
          "address" : "10.241.133.13",
          "@version" : "1",
          "timestamp" : "2020-12-14T10:50:58.000+00:00",
          "path" : "/hadoop/elk/logstash-6.7.1/zhaoyd/log.log"
        }
      }
    ]
  }
}

The data was imported correctly, which confirms the installation works.

III. Installing and Deploying Kibana

1. Download the installation package

Official download address:
ES official download site
Select the Kibana component

2. Extract the archive

[root@s133061 elk]# pwd
/hadoop/elk
[root@s133061 elk]# tar -xvf kibana-6.7.1-linux-x86_64.tar.gz
[root@s133061 elk]# mv kibana-6.7.1-linux-x86_64 kibana-6.7.1

3. Edit the configuration file

[root@s133061 elk]# cd kibana-6.7.1/
[root@s133061 kibana-6.7.1]# ls
bin  built_assets  config  data  LICENSE.txt  node  node_modules  NOTICE.txt  optimize  package.json  plugins  README.txt  src  target  webpackShims
[root@s133061 kibana-6.7.1]# cd config/
[root@s133061 config]# vim kibana.yml
# Uncomment and change the settings below
server.port: 5601
server.host: "10.241.133.61"     # allow remote access
elasticsearch.hosts: ["http://10.241.133.61:9200"]    # change to your own cluster address
kibana.index: ".kibana"

4. Start Kibana

[root@s133061 bin]# pwd
/hadoop/elk/kibana-6.7.1/bin
[root@s133061 bin]# nohup ./kibana &
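
Once Kibana finishes starting (it can take a short while), open http://10.241.133.61:5601 in a browser. The status endpoint can also be probed from the shell; a quick check, assuming the host and port configured above:

[root@s133061 bin]# curl -s http://10.241.133.61:5601/api/status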