Logstash Official Documentation Translation (Part 1)

Summary: A translation of the official Logstash documentation.

Setting Up and Running Logstash

Logstash Directory Layout

This section describes the default directory structure that is created when you unpack the Logstash installation packages.

Directory Layout of .zip and .tar.gz Archives

The .zip and .tar.gz packages are entirely self-contained. By default, all files and directories are contained within the home directory, which is the directory created when unpacking the archive.

This is very convenient because you do not have to create any directories to start using Logstash, and uninstalling Logstash is as simple as removing the home directory. However, it is advisable to change the default locations of the config and logs directories so that you do not delete important data later on.

Type | Description | Default Location | Setting
home | Home directory of the Logstash installation | {extract.path} (the directory created by unpacking the archive) |
bin | Binary scripts, including logstash to start Logstash and logstash-plugin to install plugins | {extract.path}/bin |
settings | Configuration files, including logstash.yml and jvm.options | {extract.path}/config | path.settings
logs | Log files | {extract.path}/logs | path.logs
plugins | Local, non Ruby-Gem plugin files. Each plugin is contained in a subdirectory. Recommended for development only. | {extract.path}/plugins | path.plugins
data | Data files used by Logstash and its plugins for any persistence needs | {extract.path}/data | path.data

Directory Layout of Debian and RPM Packages

Type | Description | Default Location | Setting
home | Home directory of the Logstash installation | /usr/share/logstash |
bin | Binary scripts, including logstash to start Logstash and logstash-plugin to install plugins | /usr/share/logstash/bin |
settings | Configuration files, including logstash.yml, jvm.options, and startup.options | /etc/logstash | path.settings
conf | Logstash pipeline configuration files | /etc/logstash/conf.d/*.conf | See /etc/logstash/pipelines.yml
logs | Log files | /var/log/logstash | path.logs
plugins | Local, non Ruby-Gem plugin files. Each plugin is contained in a subdirectory. Recommended for development only. | /usr/share/logstash/plugins | path.plugins
data | Data files used by Logstash and its plugins for any persistence needs | /var/lib/logstash | path.data

Directory Layout of the Docker Image

The Docker image is created from the .tar.gz package and follows a similar directory layout.

Type | Description | Default Location | Setting
home | Home directory of the Logstash installation | /usr/share/logstash |
bin | Binary scripts, including logstash to start Logstash and logstash-plugin to install plugins | /usr/share/logstash/bin |
settings | Configuration files, including logstash.yml and jvm.options | /usr/share/logstash/config | path.settings
conf | Logstash pipeline configuration files | /usr/share/logstash/pipeline | path.config
plugins | Local, non Ruby-Gem plugin files. Each plugin is contained in a subdirectory. Recommended for development only. | /usr/share/logstash/plugins | path.plugins
data | Data files used by Logstash and its plugins for any persistence needs | /usr/share/logstash/data | path.data

Logstash Configuration Files

Logstash has two types of configuration files: pipeline configuration files, which define the Logstash processing pipeline, and settings files, which specify options that control Logstash startup and execution.

Pipeline Configuration Files

You create pipeline configuration files when you define the stages of your Logstash processing pipeline. For deb and rpm installations, place the pipeline configuration files in the /etc/logstash/conf.d directory. Logstash tries to load only files with a .conf extension from /etc/logstash/conf.d and ignores all other files.
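For illustration, here is a minimal pipeline configuration sketch. The file name example.conf and the specific plugins used are assumptions chosen for the example, not taken from the original text; it reads events from stdin, tags them with a mutate filter, and prints them to stdout:

# /etc/logstash/conf.d/example.conf (hypothetical file name)
input {
  stdin { }
}

filter {
  mutate {
    # add a tag to every event passing through the pipeline
    add_tag => [ "example" ]
  }
}

output {
  # print events in a human-readable form
  stdout { codec => rubydebug }
}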

Settings Files

The settings files are already defined as part of the Logstash installation. Logstash includes the following settings files:

logstash.yml

Contains Logstash configuration flags. You can set flags in this file instead of passing them at the command line. Any flags that you set at the command line override the corresponding settings in logstash.yml. See logstash.yml below for more information.
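For example (a hedged sketch; the config path and values below are assumptions), starting Logstash like this overrides whatever pipeline.workers and log.level say in logstash.yml:

bin/logstash -f /etc/logstash/conf.d/example.conf --pipeline.workers 2 --log.level debug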

pipelines.yml

Contains the framework and instructions for running multiple pipelines in a single Logstash instance. For more information, see Multiple Pipelines.
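A minimal pipelines.yml sketch, assuming two hypothetical pipelines (the ids and config paths are illustrative, not from the original text):

# each entry defines one pipeline and may override settings from logstash.yml
- pipeline.id: apache-logs
  path.config: "/etc/logstash/conf.d/apache/*.conf"
  pipeline.workers: 2
- pipeline.id: audit-events
  path.config: "/etc/logstash/conf.d/audit/*.conf"
  queue.type: persisted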

jvm.options

Contains JVM configuration flags. Use this file to set the initial and maximum values for total heap space. You can also use this file to set the Logstash locale. Specify each flag on a separate line. All other settings in this file are considered expert settings.
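For example, a hedged jvm.options sketch that sets the initial and maximum heap to the same size and pins the JVM locale (the 1g value and the locale are assumptions, not recommendations):

# initial and maximum total heap space
-Xms1g
-Xmx1g
# locale settings (optional)
-Duser.language=en
-Duser.country=US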

log4j2.properties

Contains default settings for the log4j2 library. For more information, see Log4j2 Configuration.
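As a hedged sketch of the log4j2 properties format (the appender name and layout pattern below are illustrative assumptions, not the defaults shipped with Logstash):

status = error
# a single console appender
appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
# route the root logger to it at info level
rootLogger.level = info
rootLogger.appenderRef.console.ref = plain_console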

startup.options (Linux)

Contains options used by the system-install script in /usr/share/logstash/bin to build the appropriate startup script for your system. When you install the Logstash package, the system-install script runs at the end of the installation process and uses the settings specified in startup.options to set options such as the user, group, service name, and service description. By default, the Logstash service is installed under the user logstash. The startup.options file makes it easier to install multiple instances of the Logstash service: you can copy the file and change the values of specific settings. Note that the startup.options file is not read at startup. If you want to change the Logstash startup script (for example, to change the Logstash user or read from a different configuration path), you must re-run the system-install script (as root) to pass in the new settings.
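A hedged sketch of the kind of variable assignments startup.options contains (the exact variable names and defaults may differ between Logstash versions; treat these as assumptions):

# user and group the service runs as
LS_USER=logstash
LS_GROUP=logstash
# where the settings files live
LS_SETTINGS_DIR=/etc/logstash
# name and description used for the generated service
SERVICE_NAME=logstash
SERVICE_DESCRIPTION="logstash"

After editing a copy of this file, re-run the system-install script (as root) so the changes are picked up in the generated startup script.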

logstash.yml

You can set options in the Logstash settings file, logstash.yml, to control Logstash execution. For example, you can specify pipeline settings, the location of configuration files, logging options, and other settings. Most of the settings in the logstash.yml file are also available as command-line flags when you run Logstash. Any flags that you set at the command line override the corresponding settings in the logstash.yml file.

The logstash.yml file is written in YAML. Its location varies by platform (see Logstash Directory Layout). You can specify settings in hierarchical form or use flat keys. For example, to set the pipeline batch size and batch delay using hierarchical form, you would specify:

pipeline:
  batch:
    size: 125
    delay: 50

To express the same values with flat keys, you would specify:

pipeline.batch.size: 125
pipeline.batch.delay: 50

The logstash.yml file also supports bash-style interpolation of environment variables and keystore secrets in setting values:

pipeline:
  batch:
    size: ${BATCH_SIZE}
    delay: ${BATCH_DELAY:50}
node:
  name: "node_${LS_NODE_NAME}"
path:
  queue: "/tmp/${QUEUE_DIR:queue}"

Note that the ${VAR_NAME:default_value} notation is supported; in the example above it sets a default batch delay of 50 and a default path.queue of /tmp/queue.

Modules may also be specified in the logstash.yml file. Module definitions have this format:

modules:
  - name: MODULE_NAME1
    var.PLUGIN_TYPE1.PLUGIN_NAME1.KEY1: VALUE
    var.PLUGIN_TYPE1.PLUGIN_NAME1.KEY2: VALUE
    var.PLUGIN_TYPE2.PLUGIN_NAME2.KEY1: VALUE
    var.PLUGIN_TYPE3.PLUGIN_NAME3.KEY1: VALUE
  - name: MODULE_NAME2
    var.PLUGIN_TYPE1.PLUGIN_NAME1.KEY1: VALUE
    var.PLUGIN_TYPE1.PLUGIN_NAME1.KEY2: VALUE

If the --modules command-line flag is used, any modules defined in the logstash.yml file are ignored.

The logstash.yml file includes the following settings:

Setting | Description | Default value
node.name | A descriptive name for the node. | Machine's hostname
path.data | The directory that Logstash and its plugins use for any persistent needs. | LOGSTASH_HOME/data
pipeline.id | The ID of the pipeline. | main
pipeline.java_execution | Use the Java execution engine. | true
pipeline.workers | The number of workers that will, in parallel, execute the filter and output stages of the pipeline. This setting uses the java.lang.Runtime.getRuntime.availableProcessors value as a default if not overridden by pipeline.workers in pipelines.yml or pipeline.workers from logstash.yml. If you have modified this setting and see that events are backing up, or that the CPU is not saturated, consider increasing this number to better utilize machine processing power. | Number of the host's CPU cores
pipeline.batch.size | The maximum number of events an individual worker thread will collect from inputs before attempting to execute its filters and outputs. Larger batch sizes are generally more efficient, but come at the cost of increased memory overhead. You may need to increase JVM heap space in the jvm.options config file. See Logstash Configuration Files for more info. | 125
pipeline.batch.delay | When creating pipeline event batches, how long in milliseconds to wait for each event before dispatching an undersized batch to pipeline workers. | 50
pipeline.unsafe_shutdown | When set to true, forces Logstash to exit during shutdown even if there are still inflight events in memory. By default, Logstash will refuse to quit until all received events have been pushed to the outputs. Enabling this option can lead to data loss during shutdown. | false
pipeline.plugin_classloaders | (Beta) Load Java plugins in independent classloaders to isolate their dependencies. | false
pipeline.ordered | Set the pipeline event ordering. Valid options are auto, true, and false. auto will automatically enable ordering if the pipeline.workers setting is also set to 1. true will enforce ordering on the pipeline and prevent Logstash from starting if there are multiple workers. false will disable the processing required to preserve order; ordering will not be guaranteed, but you save the processing cost of preserving order. | auto
pipeline.ecs_compatibility | Sets the pipeline's default value for ecs_compatibility, a setting that is available to plugins that implement an ECS compatibility mode for use with the Elastic Common Schema. Possible values are disabled, v1, and v8. This option allows the early opt-in (or preemptive opt-out) of ECS compatibility modes in plugins, which is scheduled to be on-by-default in a future major release of Logstash. Values other than disabled are currently considered BETA and may produce unintended consequences when upgrading Logstash. | disabled
path.config | The path to the Logstash configuration for the main pipeline. If you specify a directory or wildcards, config files are read from the directory in alphabetical order. | Platform-specific. See Logstash Directory Layout.
config.string | A string that contains the pipeline configuration to use for the main pipeline. Use the same syntax as the config file. | None
config.test_and_exit | When set to true, checks that the configuration is valid and then exits. Note that grok patterns are not checked for correctness with this setting. Logstash can read multiple config files from a directory. If you combine this setting with log.level: debug, Logstash will log the combined config file, annotating each config block with the source file it came from. | false
config.reload.automatic | When set to true, periodically checks if the configuration has changed and reloads the configuration whenever it is changed. This can also be triggered manually through the SIGHUP signal. | false
config.reload.interval | How often in seconds Logstash checks the config files for changes. Note that the unit qualifier (s) is required. | 3s
config.debug | When set to true, shows the fully compiled configuration as a debug log message. You must also set log.level: debug. WARNING: The log message will include any password options passed to plugin configs as plaintext, and may result in plaintext passwords appearing in your logs! | false
config.support_escapes | When set to true, quoted strings will process the following escape sequences: \n becomes a literal newline (ASCII 10). \r becomes a literal carriage return (ASCII 13). \t becomes a literal tab (ASCII 9). \\ becomes a literal backslash \. \" becomes a literal double quotation mark. \' becomes a literal quotation mark. | false
modules | When configured, modules must be in the nested YAML structure described above this table. | None
queue.type | The internal queuing model to use for event buffering. Specify memory for legacy in-memory based queuing, or persisted for disk-based ACKed queueing (persistent queues). | memory
path.queue | The directory path where the data files will be stored when persistent queues are enabled (queue.type: persisted). | path.data/queue
queue.page_capacity | The size of the page data files used when persistent queues are enabled (queue.type: persisted). The queue data consists of append-only data files separated into pages. | 64mb
queue.max_events | The maximum number of unread events in the queue when persistent queues are enabled (queue.type: persisted). | 0 (unlimited)
queue.max_bytes | The total capacity of the queue in number of bytes. Make sure the capacity of your disk drive is greater than the value you specify here. If both queue.max_events and queue.max_bytes are specified, Logstash uses whichever criteria is reached first. | 1024mb (1g)
queue.checkpoint.acks | The maximum number of ACKed events before forcing a checkpoint when persistent queues are enabled (queue.type: persisted). Specify queue.checkpoint.acks: 0 to set this value to unlimited. | 1024
queue.checkpoint.writes | The maximum number of written events before forcing a checkpoint when persistent queues are enabled (queue.type: persisted). Specify queue.checkpoint.writes: 0 to set this value to unlimited. | 1024
queue.checkpoint.retry | When enabled, Logstash will retry once per attempted checkpoint write for any checkpoint writes that fail. Any subsequent errors are not retried. This is a workaround for failed checkpoint writes that have been seen only on filesystems with non-standard behavior such as SANs and is not recommended except in those specific circumstances. | false
queue.drain | When enabled, Logstash waits until the persistent queue is drained before shutting down. | false
dead_letter_queue.enable | Flag to instruct Logstash to enable the DLQ feature supported by plugins. | false
dead_letter_queue.max_bytes | The maximum size of each dead letter queue. Entries will be dropped if they would increase the size of the dead letter queue beyond this setting. | 1024mb
path.dead_letter_queue | The directory path where the data files will be stored for the dead-letter queue. | path.data/dead_letter_queue
http.host | The bind address for the metrics REST endpoint. | "127.0.0.1"
http.port | The bind port for the metrics REST endpoint. | 9600
log.level | The log level. Valid options are fatal, error, warn, info, debug, and trace. | info
log.format | The log format. Set to json to log in JSON format, or plain to use the Object#.inspect format. | plain
path.logs | The directory where Logstash will write its log to. | LOGSTASH_HOME/logs
pipeline.separate_logs | This is a boolean setting to enable separation of logs per pipeline into different log files. If enabled, Logstash will create a different log file for each pipeline, using the pipeline.id as the name of the file. The destination directory is taken from the path.logs setting. When there are many pipelines configured in Logstash, separating the log lines per pipeline can be helpful when you need to troubleshoot what is happening in a single pipeline without interference from the others. | false
path.plugins | Where to find custom plugins. You can specify this setting multiple times to include multiple paths. Plugins are expected to be in a specific directory hierarchy: PATH/logstash/TYPE/NAME.rb where TYPE is inputs, filters, outputs, or codecs, and NAME is the name of the plugin. | Platform-specific. See Logstash Directory Layout.
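To illustrate how several of these settings combine in a single logstash.yml, here is a hedged sketch (the node name and values are assumptions chosen for the example, not recommendations):

# identify this instance and cap its parallelism
node.name: "docs-example"
pipeline.workers: 2
pipeline.batch.size: 125
# switch from in-memory buffering to the persistent queue
queue.type: persisted
queue.max_bytes: 2gb
# write logs outside the installation directory
path.logs: /var/log/logstash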

Secrets Keystore for Secure Settings

When you configure Logstash, you might need to specify sensitive settings or configuration, such as passwords. Rather than relying on file system permissions to protect these values, you can use the Logstash keystore to securely store secret values for use in configuration settings.

After you add a key and its secret value to the keystore, you can use the key in place of the secret value when you configure sensitive settings.

The syntax for referencing keys is identical to the syntax for environment variables:

${KEY}

where KEY is the name of the key.
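As a hedged sketch (the key name ES_PWD and the elasticsearch output shown are illustrative assumptions, not from the original text), you would first create the keystore and add a key with bin/logstash-keystore, then reference the key instead of a plaintext password:

bin/logstash-keystore create
bin/logstash-keystore add ES_PWD

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    user => "logstash_writer"
    # resolved from the keystore (or from an environment variable of the same name)
    password => "${ES_PWD}"
  }
}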
