Configuring TLS
The Elastic Stack security features let you encrypt traffic to and from your Elasticsearch cluster. Use Transport Layer Security (TLS) to secure the connections.
- Generate a certificate on each node (the steps below are run as the es_user user)

```shell
cd /opt/server/elasticsearch
# Create a directory to hold the certificates
mkdir config/certs
# Generate the CA file (default name: elastic-stack-ca.p12).
# You will be asked for a password -- write it down somewhere safe.
./bin/elasticsearch-certutil ca
# Use the CA to generate the node certificate (default name: elastic-certificates.p12).
# You will again be asked for a password -- write it down somewhere safe.
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
# Move the generated certificate into the certs directory
mv elastic-certificates.p12 config/certs/
```
- Update the configuration

```shell
vim /opt/server/elasticsearch/config/elasticsearch.yml
```

Add the following settings to enable TLS and tell Elasticsearch where to find the node certificate:

```yaml
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
```
- Configure the keystore

```shell
# Enter the password you used when generating the certificate
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
```
- Restart

```shell
# Find the Elasticsearch process and stop it (prefer a plain kill; use -9 only if it refuses to exit)
ps aux | grep java | grep elastic
kill <pid>
nohup /opt/server/elasticsearch/bin/elasticsearch > /opt/server/elasticsearch/logs/std.out 2>&1 &
```

Note: the other two nodes must use the same certificate file (elastic-certificates.p12), so copy it from this server to the other two and repeat the steps there.
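Distributing the certificate can be scripted. A minimal sketch, assuming the other nodes are reachable over SSH as es_user; the node IPs below are placeholders, and the command is printed as a dry run so nothing is copied until you remove the `echo`:

```shell
# Placeholder node IPs -- substitute your actual cluster members
NODES="192.168.1.14 192.168.1.15"
CERT=/opt/server/elasticsearch/config/certs/elastic-certificates.p12
for node in $NODES; do
  # Dry run: prints the copy command; drop "echo" to perform the transfer
  echo scp "$CERT" "es_user@${node}:${CERT}"
done
```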
Configuring HTTP SSL (optional)
With HTTPS enabled, clients talking to Elasticsearch must connect over https and trust the certificate the cluster presents.
- Update the configuration

```yaml
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: certs/elastic-certificates.p12
```
- Add the keys

```shell
# Use the same password as when the p12 certificate was generated
bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
bin/elasticsearch-keystore add xpack.security.http.ssl.truststore.secure_password
```
- Restart
Setting passwords
Run `./bin/elasticsearch-setup-passwords interactive` and the following interactive session appears:

```
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y

Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Passwords do not match.
Try again.
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
```
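Once the passwords are set, it is worth confirming that authentication actually works before moving on. A hedged sketch; the host IP and password are placeholders, and the command is printed as a dry run so nothing is sent until you remove the `echo`:

```shell
# Placeholder host and password -- substitute your own
ES_HOST=http://192.168.1.13:9200
# Dry run: prints the curl invocation; drop "echo" to query the cluster
echo curl -u "elastic:your_password" "$ES_HOST/_cluster/health?pretty"
```

An unauthenticated request against a secured cluster returns a 401, so a successful health response confirms both TLS and the password.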
Every server has to go through the whole routine, which by this point had thoroughly worn me out.
Installing the Chinese analysis plugin
Download the plugin
Releases: https://github.com/medcl/elasticsearch-analysis-ik/releases
Download the plugin version that matches your Elasticsearch version.
Install

```shell
# Create an ik directory under plugins
cd plugins
mkdir ik
# Unzip the Chinese analysis plugin into it
unzip elasticsearch-analysis-ik-7.11.1.zip -d ik
```

Then restart Elasticsearch and you're done.
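After the restart, you can check that the analyzer is loaded by calling the `_analyze` API. A sketch with placeholder host and credentials, printed as a dry run; drop the `echo` to actually run it:

```shell
# Placeholder host and credentials -- substitute your own
ES_HOST=http://192.168.1.13:9200
# Dry run: prints the curl invocation; a working plugin returns tokenized terms
echo curl -u elastic:your_password -H "Content-Type: application/json" \
  -d '{"analyzer":"ik_smart","text":"中华人民共和国"}' "$ES_HOST/_analyze"
```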
Installing Kibana
Download and unpack the archive

```shell
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.11.1-linux-aarch64.tar.gz
# Unpack
tar -xf kibana-7.11.1-linux-aarch64.tar.gz
# Rename
mv kibana-7.11.1-linux-aarch64 kibana
```
Update the configuration

```shell
vim kibana/config/kibana.yml
```

```yaml
server.port: 5601
server.host: "192.168.1.11"
elasticsearch.hosts: ["http://192.168.1.13:9200","http://192.168.1.14:9200","http://192.168.1.15:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "your_password"
i18n.locale: "zh-CN"
```
Configuring SSL (optional)
If Elasticsearch has HTTP SSL enabled, Kibana needs matching configuration.
- Copy the Elasticsearch CA certificate to the Kibana server
- Kibana is written in Node.js and does not accept p12 certificates, so convert the CA to PEM format

```shell
openssl pkcs12 -in elastic-stack-ca.p12 -out elastic-stack-ca.pem
```
- Update the configuration

```yaml
# Switch the hosts to https
elasticsearch.hosts: ["https://192.168.1.13:9200","https://192.168.1.14:9200","https://192.168.1.15:9200"]
elasticsearch.ssl.certificateAuthorities: [ "/opt/server/kibana/config/elastic-stack-ca.pem" ]
elasticsearch.ssl.verificationMode: certificate
# Encryption keys; each must be at least 32 characters
xpack.encryptedSavedObjects.encryptionKey: 1234567891112131415161718192021222324
xpack.security.encryptionKey: 1234567891112131415161718192021222324
xpack.reporting.encryptionKey: 1234567891112131415161718192021222324
```
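The p12-to-PEM conversion can be exercised end to end with a throwaway self-signed certificate before touching the real cluster CA. All file names here are scratch files of my choosing, not the real ones:

```shell
# Generate a throwaway key and self-signed certificate
openssl req -x509 -newkey rsa:2048 -keyout demo-key.pem -out demo-cert.pem \
  -days 1 -nodes -subj "/CN=demo-ca"
# Bundle them into a p12, roughly what elasticsearch-certutil produces
openssl pkcs12 -export -inkey demo-key.pem -in demo-cert.pem \
  -out demo.p12 -passout pass:changeit
# Convert the p12 back to PEM, mirroring the command used for the real CA
openssl pkcs12 -in demo.p12 -out demo-ca.pem -nodes -passin pass:changeit
# Inspect the result; the subject should show CN=demo-ca
openssl x509 -in demo-ca.pem -noout -subject
```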
Start

```shell
# Create a dedicated user
adduser kibana_user
# Give it ownership of the Kibana directory
chown -R kibana_user kibana
# Switch to that user
su kibana_user
# Start Kibana
nohup ./bin/kibana 2>&1 &
```
Open the console
In this version the kibana user is deprecated for console login; it can only be used for Kibana-to-Elasticsearch communication, so log in with the elastic account instead.
On using Kibana itself
Installing Filebeat
Download and unpack

```shell
cd /opt/server
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.11.1-linux-x86_64.tar.gz
tar -xf filebeat-7.11.1-linux-x86_64.tar.gz
mv filebeat-7.11.1-linux-x86_64 filebeat
```
Update the configuration

```yaml
filebeat.inputs:
- type: log
  # Enable this input
  enabled: true
  # Paths of the logs to collect
  paths:
    - /opt/docker_volumes/ih-front-center/logs/ih-front-center/log.out
  # Drop lines matching this regex; here, lines starting with DBG
  exclude_lines: ['^DBG']
  # Keep only lines matching this regex; here, lines containing hello.
  # If both include_lines and exclude_lines are set, include_lines runs first.
  include_lines: ['hello']
  # Skip files matching this regex
  prospector.scanner.exclude_files: ['.gz$']
  # Defaults to false, which nests parsed JSON under a "json" key.
  # Set to true to lift all keys to the root of the event.
  # If the logs are not JSON, set this to false.
  json.keys_under_root: true
  # Let parsed JSON keys overwrite Filebeat's default fields.
  # If the logs are not JSON, set this to false.
  json.overwrite_keys: true
  # Custom fields; index here is a user-defined field
  fields:
    index: 'ih-front-center-ljey-staging'
  # Multiline matching: this regex matches lines starting with whitespace followed
  # by "at" or "...", or lines starting with "Caused by:" -- i.e. Java stack traces --
  # and merges them into the previous line.
  multiline.pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
  # Whether to negate the pattern above (match directly, or match and invert)
  multiline.negate: false
  # How matched lines are merged into an event: after or before.
  # Taken together, these three settings fold stack-trace continuation lines into
  # the preceding line; filtering rules apply after the merge.
  multiline.match: after
- type: log
  enabled: true
  paths:
    - /opt/logs/ih-inquiry-center/ih-inquiry-center/log.out
  exclude_lines: ['DEBUG']
  include_lines: ['^ERR', '^WARN']
  prospector.scanner.exclude_files: ['.gz$']
  json.keys_under_root: true
  fields:
    index: 'ih-inquiry-center-staging'
  multiline.pattern: '^\s+(at|\.{3})\b|^Caused by:'
  multiline.negate: true
  multiline.match: after
# Module configuration path
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
# Number of shards
setup.template.settings:
  index.number_of_shards: 6
# Kibana address
setup.kibana:
  host: "192.168.1.11:5601"
# Elasticsearch addresses
output.elasticsearch:
  hosts: ["192.168.1.13:9200","192.168.1.14:9200","192.168.1.15:9200"]
  # Default index, used when none of the indices rules below match
  index: "ihis-staging-%{+yyyy.MM.dd}"
  indices:
    # Index name
    - index: "ih-front-center-ljey-staging-%{+yyyy.MM.dd}"
      # Applies when fields.index matches the value set on the input above
      when.contains:
        fields:
          index: 'ih-front-center-ljey-staging'
    - index: "ih-inquiry-center-staging-%{+yyyy.MM.dd}"
      when.contains:
        fields:
          index: 'ih-inquiry-center-staging'
  username: "elastic"
  password: "elastic"
# Custom index template
setup.template.name: "his-log"
setup.template.pattern: "his-log-*"
setup.template.enabled: true
setup.template.overwrite: true
# Disable ILM, otherwise the custom template cannot be used
setup.ilm.enabled: false
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```
Start

```shell
nohup ./filebeat -e -c filebeat.yml 2>&1 &
```