This article is a case study of applying Kafka together with Flume, examining in depth how real-time data is streamed from an RDBMS into Hive tables on HDFS.
For enterprises that want to ingest data into Hadoop quickly, Kafka is an excellent choice. What is Kafka? Kafka is a distributed, scalable, reliable messaging system that integrates applications and data streams through a publish-subscribe model. It is also a key component of the Hadoop technology stack, with strong support for real-time data analytics and the monetization of IoT data.
This article is written for a technical audience. It walks through how Kafka moves a data stream from an RDBMS (relational database management system) into Hive, illustrated with a real-time analytics use case. For reference, the component versions used in this article are Hive 1.2.1, Flume 1.6, and Kafka 0.9.
Where Kafka Fits: The Overall Solution Architecture
The figure below shows the overall architecture of the solution: Kafka combined with Flume, plus Hive's transaction capability, delivers the RDBMS transaction data into the target Hive table.
Seven Steps to Real-Time Data Ingestion into Hadoop
Now let's dive into the details of the solution and walk through how to stream data into Hadoop in a few steps.
1. Extract data from the RDBMS
All relational databases keep a log file that records the most recent transactions. The first step of the solution is to obtain these transactions, in a format that Hadoop can accept.
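As a hypothetical illustration (the extraction itself is outside the scope of this article), a change-data-capture tool such as the GoldenGate setup mentioned in step 6 would flatten each committed row into one delimited line whose fields match the Hive table and Flume serializer configured later, for example:
- 1,"Nero Morris","porttitor.interdum@Sedcongue.edu","P.O. Box 871, 5313 Quis Ave","Sodales Company"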
2. Set up a Kafka producer
A producer is the process that publishes messages to a Kafka topic. A topic is a category of messages that Kafka maintains; the RDBMS data will be published to a Kafka topic. For this example, we set up a topic for the sales team's database, to which all of that database's transactions will be published. The following steps create the topic that the producer will publish to:
- $ cd /usr/hdp/2.4.0.0-169/kafka
- $ bin/kafka-topics.sh --create --zookeeper sandbox.hortonworks.com:2181 --replication-factor 1 --partitions 1 --topic SalesDBTransactions
- Created topic "SalesDBTransactions".
- $ bin/kafka-topics.sh --list --zookeeper sandbox.hortonworks.com:2181
- SalesDBTransactions
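As an optional sanity check (not part of the original walkthrough), the same tool can describe the topic's partition and replication settings:
- $ bin/kafka-topics.sh --describe --zookeeper sandbox.hortonworks.com:2181 --topic SalesDBTransactions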
3. Set up Hive
Next, create a Hive table to receive the sales team's database transactions. In this example we create a customers table. Note that Hive transactions require the table to be bucketed and stored as ORC:
- [bedrock@sandbox ~]$ beeline -u jdbc:hive2:// -n hive -p hive
- 0: jdbc:hive2://> use raj;
- create table customers (id string, name string, email string, street_address string, company string)
- partitioned by (time string)
- clustered by (id) into 5 buckets stored as orc
- location '/user/bedrock/salescust'
- TBLPROPERTIES ('transactional'='true');
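Optionally, you can confirm from beeline that the table was created as transactional (a quick check, not in the original steps; the transactional flag appears under the table parameters):
- 0: jdbc:hive2://> describe formatted customers;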
To ensure that Hive can handle transactional data, the following setting is required in the Hive configuration:
- hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
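In a typical Hive 1.2 deployment, ACID tables also need concurrency support and a running compactor. The following companion properties are standard Hive settings, although the article itself lists only the transaction manager:
- hive.support.concurrency = true
- hive.compactor.initiator.on = true
- hive.compactor.worker.threads = 1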
4. Set up a Flume agent for the Kafka-to-Hive data flow
Now let's look at how to create a Flume agent that collects data from the Kafka topic and sends it to the Hive table.
Before starting the Flume agent, set up the runtime environment with the following steps:
- $ pwd
- /home/bedrock/streamingdemo
- $ mkdir flume/checkpoint
- $ mkdir flume/data
- $ chmod 777 -R flume
- $ export HIVE_HOME=/usr/hdp/current/hive-server2
- $ export HCAT_HOME=/usr/hdp/current/hive-webhcat
- $ pwd
- /home/bedrock/streamingdemo/flume
- $ mkdir logs
Then create a log4j properties file as shown below:
- [bedrock@sandbox conf]$ vi log4j.properties
- flume.root.logger=INFO,LOGFILE
- flume.log.dir=/home/bedrock/streamingdemo/flume/logs
- flume.log.file=flume.log
Then create the following configuration file for the Flume agent:
- $ vi flumetohive.conf
- flumeagent1.sources = source_from_kafka
- flumeagent1.channels = mem_channel
- flumeagent1.sinks = hive_sink
- # Define / Configure source
- flumeagent1.sources.source_from_kafka.type = org.apache.flume.source.kafka.KafkaSource
- flumeagent1.sources.source_from_kafka.zookeeperConnect = sandbox.hortonworks.com:2181
- flumeagent1.sources.source_from_kafka.topic = SalesDBTransactions
- flumeagent1.sources.source_from_kafka.groupId = flume
- flumeagent1.sources.source_from_kafka.channels = mem_channel
- flumeagent1.sources.source_from_kafka.interceptors = i1
- flumeagent1.sources.source_from_kafka.interceptors.i1.type = timestamp
- flumeagent1.sources.source_from_kafka.kafka.consumer.timeout.ms = 1000
- # Hive Sink
- flumeagent1.sinks.hive_sink.type = hive
- flumeagent1.sinks.hive_sink.hive.metastore = thrift://sandbox.hortonworks.com:9083
- flumeagent1.sinks.hive_sink.hive.database = raj
- flumeagent1.sinks.hive_sink.hive.table = customers
- flumeagent1.sinks.hive_sink.hive.txnsPerBatchAsk = 2
- flumeagent1.sinks.hive_sink.hive.partition = %y-%m-%d-%H-%M
- flumeagent1.sinks.hive_sink.batchSize = 10
- flumeagent1.sinks.hive_sink.serializer = DELIMITED
- flumeagent1.sinks.hive_sink.serializer.delimiter = ,
- flumeagent1.sinks.hive_sink.serializer.fieldnames = id,name,email,street_address,company
- # Use a channel which buffers events in memory
- flumeagent1.channels.mem_channel.type = memory
- flumeagent1.channels.mem_channel.capacity = 10000
- flumeagent1.channels.mem_channel.transactionCapacity = 100
- # Bind the source and sink to the channel
- flumeagent1.sources.source_from_kafka.channels = mem_channel
- flumeagent1.sinks.hive_sink.channel = mem_channel
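Note that a memory channel is fast but loses buffered events if the agent dies. The flume/checkpoint and flume/data directories created earlier suggest a durable file channel as an alternative; a minimal sketch using Flume's standard file-channel properties would be:
- flumeagent1.channels.mem_channel.type = file
- flumeagent1.channels.mem_channel.checkpointDir = /home/bedrock/streamingdemo/flume/checkpoint
- flumeagent1.channels.mem_channel.dataDirs = /home/bedrock/streamingdemo/flume/data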
5. Start the Flume agent
Start the Flume agent with the following command:
- $ /usr/hdp/apache-flume-1.6.0/bin/flume-ng agent -n flumeagent1 -f ~/streamingdemo/flume/conf/flumetohive.conf
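Once the agent is running, you can tail the log file configured in the log4j properties above to confirm that the source and sink have started (an optional check):
- $ tail -f /home/bedrock/streamingdemo/flume/logs/flume.log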
6. Start the Kafka stream
Below is a simulated set of transaction messages used as an example; in a real system these would be generated by the source database. For instance, they might come from an Oracle stream replaying the SQL transactions committed to the database, or from GoldenGate.
- $ cd /usr/hdp/2.4.0.0-169/kafka
- $ bin/kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic SalesDBTransactions
- 1,"Nero Morris","porttitor.interdum@Sedcongue.edu","P.O. Box 871, 5313 Quis Ave","Sodales Company"
- 2,"Cody Bond","ante.lectus.convallis@antebibendumullamcorper.ca","232-513 Molestie Road","Aenean Eget Magna Incorporated"
- 3,"Holmes Cannon","a@metusAliquam.edu","P.O. Box 726, 7682 Bibendum Rd.","Velit Cras LLP"
- 4,"Alexander Lewis","risus@urna.edu","Ap #375-9675 Lacus Av.","Ut Aliquam Iaculis Inc."
- 5,"Gavin Ortiz","sit.amet@aliquameu.net","Ap #453-1440 Urna. St.","Libero Nec Ltd"
- 6,"Ralph Fleming","sociis.natoque.penatibus@quismassaMauris.edu","363-6976 Lacus. St.","Quisque Fringilla PC"
- 7,"Merrill Norton","at.sem@elementum.net","P.O. Box 452, 6951 Egestas. St.","Nec Metus Institute"
- 8,"Nathaniel Carrillo","eget@massa.co.uk","Ap #438-604 Tellus St.","Blandit Viverra Corporation"
- 9,"Warren Valenzuela","tempus.scelerisque.lorem@ornare.co.uk","Ap #590-320 Nulla Av.","Ligula Aliquam Erat Incorporated"
- 10,"Donovan Hill","facilisi@augue.org","979-6729 Donec Road","Turpis In Condimentum Associates"
- 11,"Kamal Matthews","augue.ut@necleoMorbi.org","Ap #530-8214 Convallis, St.","Tristique Senectus Et Foundation"
7. Receive the data in Hive
With all the steps above in place, you can now send data from Kafka and watch it land in the Hive table within a few seconds.
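To verify, query the table from beeline (a simple check; the time partition reflects the timestamp that the Flume interceptor attached to each event):
- 0: jdbc:hive2://> select * from customers;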
Author: Anonymous
Source: 51CTO