Basic environment
Operating system: CentOS 7.6
Host information:
hostname | ip
---------|------------
hadoop1  | 10.0.2.9
hadoop2  | 10.0.2.78
hadoop3  | 10.0.2.211
Download and installation
# run on all three nodes
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.2-x86_64.rpm
yum install elasticsearch-7.16.2-x86_64.rpm
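If you want to verify the download, Elastic publishes a SHA-512 checksum next to each package on artifacts.elastic.co; a minimal sketch, assuming the standard .sha512 naming:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.2-x86_64.rpm.sha512
sha512sum -c elasticsearch-7.16.2-x86_64.rpm.sha512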
Configuration
This article uses /data/elasticsearch as the Elasticsearch data directory, so create it first. The RPM runs Elasticsearch as the elasticsearch user, so the directory must also be owned by that user:
mkdir -p /data/elasticsearch
chown elasticsearch:elasticsearch /data/elasticsearch
vim /etc/elasticsearch/elasticsearch.yml
Common configuration (identical on all nodes)
# must be identical on every node for them to join the same cluster
cluster.name: es-cluster
discovery.seed_hosts: ["10.0.2.9", "10.0.2.78", "10.0.2.211"]
cluster.initial_master_nodes: ["hadoop1", "hadoop2", "hadoop3"]
## how often each node is pinged for fault detection
## master election / inter-node communication timeouts (tune these to your environment)
discovery.zen.fd.ping_interval: 30s
## timeout for each ping
discovery.zen.fd.ping_timeout: 120s
## number of failed pings after which a node is considered down
discovery.zen.fd.ping_retries: 6
## recommended log and data directories
path.logs: /var/log/elasticsearch
path.data: /data/elasticsearch
## split-brain protection
## minimum number of master-eligible nodes that must be visible, normally (n/2 + 1)
discovery.zen.minimum_master_nodes: 2
## begin recovering cluster state once this many nodes have joined
gateway.recover_after_nodes: 2
## number of nodes expected in the cluster; shard recovery starts immediately once they have all joined
gateway.expected_nodes: 3
## if expected_nodes is not reached, start recovery after this delay anyway
gateway.recover_after_time: 1m
node.master: true
node.data: true
node.ingest: true
http.cors.enabled: true
## allowed origins; "*" permits any origin
http.cors.allow-origin: "*"
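Several of the names above are legacy: the discovery.zen.* settings date from pre-7.0 Zen discovery (7.x manages master election and the voting quorum itself, so minimum_master_nodes no longer influences elections), and node.master/node.data/node.ingest were superseded by node.roles in 7.9. They are kept here because the cluster accepts them, but 7.16 logs deprecation warnings. A sketch of the closer-to-current equivalents, worth double-checking against the 7.16 reference before adopting:
## master/follower fault detection (replaces discovery.zen.fd.*)
cluster.fault_detection.follower_check.interval: 30s
cluster.fault_detection.follower_check.timeout: 120s
cluster.fault_detection.follower_check.retry_count: 6
## state recovery trigger counted in data nodes (replaces gateway.recover_after_nodes / expected_nodes)
gateway.recover_after_data_nodes: 2
gateway.expected_data_nodes: 3
## combined role list (replaces node.master / node.data / node.ingest)
node.roles: [ master, data, ingest ]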
Per-node configuration

On hadoop1:
# node.name must be unique on each node
node.name: hadoop1
## bind the node's own IP, otherwise it cannot be reached remotely
network.host: 10.0.2.9

On hadoop2:
node.name: hadoop2
network.host: 10.0.2.78

On hadoop3:
node.name: hadoop3
network.host: 10.0.2.211
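As a quick sanity check before starting, you can list only the active (non-comment) settings on each node:
grep -Ev '^[[:space:]]*#|^[[:space:]]*$' /etc/elasticsearch/elasticsearch.yml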
Startup
systemctl start elasticsearch
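Run this on all three nodes. Optionally enable the service at boot, and check the service status and logs if a node does not come up; a short sketch (the cluster log file is named after cluster.name under path.logs):
systemctl enable elasticsearch        # start automatically on boot (optional)
systemctl status elasticsearch        # confirm the service is active
journalctl -u elasticsearch -n 50     # recent systemd log entries
tail -n 50 /var/log/elasticsearch/es-cluster.log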
Access from a browser
Open http://hadoop1:9200, http://hadoop2:9200, or http://hadoop3:9200 in a browser and you should get a response like:
{
  "name": "hadoop3",
  "cluster_name": "es-cluster",
  "cluster_uuid": "5C1j6QEHRTmRYlhM8tifxA",
  "version": {
    "number": "7.16.2",
    "build_flavor": "default",
    "build_type": "rpm",
    "build_hash": "2b937c44140b6559905130a8650c64dbd0879cfb",
    "build_date": "2021-12-18T19:42:46.604893745Z",
    "build_snapshot": false,
    "lucene_version": "8.10.1",
    "minimum_wire_compatibility_version": "6.8.0",
    "minimum_index_compatibility_version": "6.0.0-beta1"
  },
  "tagline": "You Know, for Search"
}
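The same information is available from the command line, and the cluster health and _cat APIs are a quicker way to confirm that all three nodes have joined:
curl "http://hadoop1:9200/_cat/nodes?v"             # one line per node, elected master marked with *
curl "http://hadoop1:9200/_cluster/health?pretty"   # expect "number_of_nodes" : 3 and "status" : "green"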
Configure username/password login
Generate the certificate
Run the following command on the hadoop1 node:
/usr/share/elasticsearch/bin/elasticsearch-certutil cert -out /etc/elasticsearch/elasticsearch-certificates.p12 -pass ""
Copy the certificate /etc/elasticsearch/elasticsearch-certificates.p12 to the hadoop2 and hadoop3 nodes:
scp /etc/elasticsearch/elasticsearch-certificates.p12 hadoop2:/etc/elasticsearch/
scp /etc/elasticsearch/elasticsearch-certificates.p12 hadoop3:/etc/elasticsearch/
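The keystore was generated as root, while the RPM runs Elasticsearch as the elasticsearch user, so make sure the file is readable by that user; run this on all three nodes (group name per the standard RPM layout):
chown root:elasticsearch /etc/elasticsearch/elasticsearch-certificates.p12
chmod 640 /etc/elasticsearch/elasticsearch-certificates.p12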
On all three nodes, add the following to elasticsearch.yml:
# enable authentication and transport-layer TLS
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elasticsearch-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elasticsearch-certificates.p12
Restart Elasticsearch
Run this on all three nodes:
systemctl restart elasticsearch
Set the account passwords
After Elasticsearch has restarted on every node, run the following command on one of the nodes to set the passwords:
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
The command prompts you for passwords for the elastic user and the other built-in users; enter them as requested.
From this point on, accessing port 9200 of the Elasticsearch cluster from a browser requires the username and password.
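The same check from the command line, authenticating as the elastic superuser (curl prompts for the password set above):
curl -u elastic "http://hadoop1:9200"
curl -u elastic "http://hadoop1:9200/_cluster/health?pretty"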