02. Spark Streaming Real-Time Stream Processing: Flume, a Distributed Log Collection Framework

2. Flume, a Distributed Log Collection Framework

2.1 Analyzing the Business Scenario

(Figure: many systems and services continuously generating log data)
As the figure above shows, log data from a large number of systems and services is generated continuously. A business with good ideas wants to make full use of these logs, for example for user-behavior analysis or user-journey tracking.
How do we get these logs onto a Hadoop cluster?
What problems does each candidate approach have, and what are its advantages?

  • Option 1: how do we eliminate issues such as lack of fault tolerance, lack of load balancing, and high latency?
  • Option 2: the Flume framework

2.2 Flume Overview

Flume website: http://flume.apache.org
Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows for online analytic application.

Flume, originally developed by Cloudera, is a distributed, highly reliable, and highly available service for efficiently collecting, aggregating, and moving massive amounts of log data.
Flume's design goals:

  • Reliability
  • Scalability
  • Manageability (agents can be managed effectively)

Comparison with similar products:

  • Flume (*): Cloudera/Apache, Java
  • Scribe: Facebook, C/C++, no longer maintained
  • Chukwa: Yahoo/Apache, Java, no longer maintained
  • Fluentd: Ruby
  • Logstash (*): part of the ELK stack (Elasticsearch, Logstash, Kibana)

Flume history:

  • Cloudera 0.9.2: Flume OG
  • FLUME-728: the Flume NG refactoring; the project moved to Apache
  • July 2012: 1.0
  • May 2015: 1.6 (*)
  • Current release at the time of writing: around 1.8

2.3 Flume Architecture and Core Components

(Figure: Flume agent architecture with Source, Channel, and Sink)

  1. Source (collects data from an external origin)
  2. Channel (buffers events between the source and the sink)
  3. Sink (delivers events to the destination)

Multi-agent flow

(Figure: two agents chained together; the first agent's Avro sink feeds the second agent's Avro source)
In order to flow the data across multiple agents or hops, the sink of the previous agent and source of the current hop need to be avro type with the sink pointing to the hostname (or IP address) and port of the source.
A very common scenario in log collection is a large number of log producing clients sending data to a few consumer agents that are attached to the storage subsystem. For example, logs collected from hundreds of web servers sent to a dozen agents that write to an HDFS cluster.

(Figure: consolidation of events from many first-tier agents into a single second-tier agent)
This can be achieved in Flume by configuring a number of first tier agents with an avro sink, all pointing to an avro source of single agent (Again you could use the thrift sources/sinks/clients in such a scenario). This source on the second tier agent consolidates the received events into a single channel which is consumed by a sink to its final destination.
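
A minimal sketch of what such a second-tier (consolidating) agent might look like; the agent name collector, the port, and the HDFS path/namenode host below are illustrative assumptions, not values from this article:

# collector.conf (sketch): second-tier agent that fans in Avro events and writes them to HDFS
collector.sources = avro-in
collector.channels = mem
collector.sinks = hdfs-out

# Avro source: every first-tier agent points its avro sink at this host/port
collector.sources.avro-in.type = avro
collector.sources.avro-in.bind = 0.0.0.0
collector.sources.avro-in.port = 44444

# Memory channel buffering events between the source and the sink
collector.channels.mem.type = memory
collector.channels.mem.capacity = 10000
collector.channels.mem.transactionCapacity = 100

# HDFS sink: writes the consolidated events as plain text files
collector.sinks.hdfs-out.type = hdfs
collector.sinks.hdfs-out.hdfs.path = hdfs://namenode:8020/flume/weblogs
collector.sinks.hdfs-out.hdfs.fileType = DataStream
collector.sinks.hdfs-out.hdfs.writeFormat = Text

# Wire the components together
collector.sources.avro-in.channels = mem
collector.sinks.hdfs-out.channel = mem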

Multiplexing the flow

Flume supports multiplexing the event flow to one or more destinations. This is achieved by defining a flow multiplexer that can replicate or selectively route an event to one or more channels.
(Figure: a source on agent foo fanning out events to three channels)
The above example shows a source from agent “foo” fanning out the flow to three different channels. This fan out can be replicating or multiplexing. In case of replicating flow, each event is sent to all three channels. For the multiplexing case, an event is delivered to a subset of available channels when an event’s attribute matches a preconfigured value. For example, if an event attribute called “txnType” is set to “customer”, then it should go to channel1 and channel3, if it’s “vendor” then it should go to channel2, otherwise channel3. The mapping can be set in the agent’s configuration file.
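
In configuration terms, the txnType routing above is expressed with a multiplexing channel selector on the source. A rough sketch, reusing agent foo and channels channel1, channel2, channel3 from the example (the source name r1, its avro type, and the port are assumptions; sinks are omitted):

# Sketch: fan out events from one source, routed by the txnType header
foo.sources = r1
foo.channels = channel1 channel2 channel3

foo.sources.r1.type = avro
foo.sources.r1.bind = 0.0.0.0
foo.sources.r1.port = 41414

foo.channels.channel1.type = memory
foo.channels.channel2.type = memory
foo.channels.channel3.type = memory

# Multiplexing selector: events are routed by the value of the txnType header
foo.sources.r1.selector.type = multiplexing
foo.sources.r1.selector.header = txnType
foo.sources.r1.selector.mapping.customer = channel1 channel3
foo.sources.r1.selector.mapping.vendor = channel2
foo.sources.r1.selector.default = channel3
foo.sources.r1.channels = channel1 channel2 channel3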

2.4 Flume Environment Setup

Prerequisites

  • Java Runtime Environment - Java 1.8 or later
  • Memory - Sufficient memory for configurations used by sources, channels or sinks
  • Disk Space - Sufficient disk space for configurations used by channels or sinks
  • Directory Permissions - Read/Write permissions for directories used by agent

Install the JDK

  • Download the JDK package
  • Extract the JDK package:
tar -zxvf jdk-8u162-linux-x64.tar.gz -C [install dir]
  • Configure the Java environment variables by editing /etc/profile or ~/.bash_profile:
export JAVA_HOME=[jdk install dir]
export PATH=$JAVA_HOME/bin:$PATH
Run source /etc/profile (or source ~/.bash_profile) to apply the changes, then run java -version to verify that the environment is configured correctly.

Install Flume

  • Download the Flume package:
wget http://www.apache.org/dist/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz
  • Extract the Flume package:
tar -zxvf apache-flume-1.7.0-bin.tar.gz -C [install dir]
  • Configure the Flume environment variables by editing /etc/profile or ~/.bash_profile:
export FLUME_HOME=[flume install dir]
export PATH=$FLUME_HOME/bin:$PATH
Run source /etc/profile (or source ~/.bash_profile) to apply the changes.
  • Edit the flume-env.sh script under $FLUME_HOME/conf (copy it from flume-env.sh.template if it does not already exist) and set:
export JAVA_HOME=[jdk install dir]
Run flume-ng version to verify the installation.

2.5 Hands-On with Flume

  • Requirement 1: collect data from a given network port and print it to the console

The key to using Flume is writing the configuration file:

  1. Configure the source
  2. Configure the channel
  3. Configure the sink
  4. Wire the three components together

a1: the agent name
r1: the source name
k1: the sink name
c1: the channel name

Single-node Flume configuration

# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start the Flume agent

flume-ng agent \
--name a1 \
--conf  $FLUME_HOME/conf    \
--conf-file  $FLUME_HOME/conf/example.conf \
-Dflume.root.logger=INFO,console

Test with telnet or nc:

telnet [hostname] [port]
nc [hostname] [port]
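
For example, a quick test session against the agent above (assuming it is running on the local machine) might look like this; each line you type becomes one Flume event, and by default the netcat source answers each line with an OK acknowledgment:

nc localhost 44444
this is a test payload
OK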

Event = optional headers + byte array (the event body)

Event: { headers:{} body: 74 68 69 73 20 69 73 20 61 20 74 65 73 74 20 70 this is a test p }

  • Requirement 2: monitor a file and collect newly appended data in real time, printing it to the console
    Agent (component) selection: exec source + memory channel + logger sink

# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f  /root/data/data.log
a1.sources.r1.shell = /bin/bash -c

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start the Flume agent

flume-ng agent \
--name a1 \
--conf  $FLUME_HOME/conf    \
--conf-file  $FLUME_HOME/conf/example.conf \
-Dflume.root.logger=INFO,console

Append some lines to the monitored file and check whether the data shows up on the console:

echo hello >> /root/data/data.log
echo world >> /root/data/data.log
echo welcome >> /root/data/data.log

Console output:

2018-09-02 03:55:00,672 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 68 65 6C 6C 6F                                  hello }
2018-09-02 03:55:06,748 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 77 6F 72 6C 64                                  world }
2018-09-02 03:55:22,280 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 77 65 6C 63 6F 6D 65                            welcome }

At this point, requirement 2 is working.

  • Requirement 3 (*): collect logs from server A and deliver them to server B in real time (the key pattern to master)
    Agent (component) selection:

    Agent on server A: exec source + memory channel + avro sink
    Agent on server B: avro source + memory channel + logger sink
(Figure: the agent on server A forwards events over Avro to the agent on server B)

# exec-memory-avro.conf: A single-node Flume configuration

# Name the components on this agent
exec-memory-avro.sources = exec-source
exec-memory-avro.sinks = avro-sink
exec-memory-avro.channels = memory-channel

# Describe/configure the source
exec-memory-avro.sources.exec-source.type = exec
exec-memory-avro.sources.exec-source.command = tail -f  /root/data/data.log
exec-memory-avro.sources.exec-source.shell = /bin/bash -c

# Describe the sink
exec-memory-avro.sinks.avro-sink.type = avro
exec-memory-avro.sinks.avro-sink.hostname = c7-master
exec-memory-avro.sinks.avro-sink.port = 44444

# Use a channel which buffers events in memory
exec-memory-avro.channels.memory-channel.type = memory
exec-memory-avro.channels.memory-channel.capacity = 1000
exec-memory-avro.channels.memory-channel.transactionCapacity = 100

# Bind the source and sink to the channel
exec-memory-avro.sources.exec-source.channels = memory-channel
exec-memory-avro.sinks.avro-sink.channel = memory-channel

The second configuration file, for the agent on server B that receives the Avro events and logs them to the console:

# avro-memory-logger.conf: A single-node Flume configuration

# Name the components on this agent
avro-memory-logger.sources = avro-source
avro-memory-logger.sinks = logger-sink
avro-memory-logger.channels = memory-channel

# Describe/configure the source
avro-memory-logger.sources.avro-source.type = avro
avro-memory-logger.sources.avro-source.bind = c7-master
avro-memory-logger.sources.avro-source.port = 44444

# Describe the sink
avro-memory-logger.sinks.logger-sink.type = logger

# Use a channel which buffers events in memory
avro-memory-logger.channels.memory-channel.type = memory
avro-memory-logger.channels.memory-channel.capacity = 1000
avro-memory-logger.channels.memory-channel.transactionCapacity = 100

# Bind the source and sink to the channel
avro-memory-logger.sources.avro-source.channels = memory-channel
avro-memory-logger.sinks.logger-sink.channel = memory-channel

Start the avro-memory-logger agent first, so that its Avro source is already listening when the Avro sink connects:

flume-ng agent \
--name avro-memory-logger \
--conf  $FLUME_HOME/conf    \
--conf-file  $FLUME_HOME/conf/avro-memory-logger.conf \
-Dflume.root.logger=INFO,console

Then start the exec-memory-avro agent:

flume-ng agent \
--name exec-memory-avro \
--conf  $FLUME_HOME/conf    \
--conf-file  $FLUME_HOME/conf/exec-memory-avro.conf \
-Dflume.root.logger=INFO,console

The log collection flow:
1) The agent on machine A monitors a file; when users visit the main site, user-behavior logs are appended to access.log.
2) The avro sink forwards the newly produced log lines to the hostname:port on which the matching avro source is listening.
3) The agent hosting the avro source then writes the logs to the console.
