Beam Programming Series: Apache Beam WordCount Examples (MinimalWordCount, WordCount, Debugging WordCount, WindowedWordCount) (the official site's recommended walkthrough)

Overview:
https://beam.apache.org/get-started/wordcount-example/

  From the official site:

The WordCount examples demonstrate how to set up a processing pipeline that can read text, tokenize the text lines into individual words, and perform a frequency count on each of those words. The Beam SDKs contain a series of these four successively more detailed WordCount examples that build on each other. The input text for all the examples is a set of Shakespeare’s texts.

Each WordCount example introduces different concepts in the Beam programming model. Begin by understanding Minimal WordCount, the simplest of the examples. Once you feel comfortable with the basic principles in building a pipeline, continue on to learn more concepts in the other examples.

  • Minimal WordCount demonstrates the basic principles involved in building a pipeline.
  • WordCount introduces some of the more common best practices in creating re-usable and maintainable pipelines.
  • Debugging WordCount introduces logging and debugging practices.
  • Windowed WordCount demonstrates how you can use Beam’s programming model to handle both bounded and unbounded datasets.

  Of the four, I will only walk through Minimal WordCount here.

  First, a note: for simplicity, I explicitly set the PipelineRunner in the code. The snippet looks like this:

PipelineOptions options = PipelineOptionsFactory.create();
options.setRunner(DirectRunner.class);

  To deploy to a server instead, you can specify the PipelineRunner on the command line. For example, to run on a Spark cluster, the command looks something like this:

spark-submit --class org.shirdrn.beam.examples.MinimalWordCountBasedSparkRunner --master spark://myserver:7077 target/my-beam-apps-0.0.1-SNAPSHOT-shaded.jar --runner=SparkRunner
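
  For the trailing --runner=SparkRunner argument to take effect, the program must build its PipelineOptions from the command-line arguments rather than hard-coding the runner. A minimal sketch using the standard PipelineOptionsFactory.fromArgs idiom:

PipelineOptions options = PipelineOptionsFactory.fromArgs(args)
        .withValidation() // fail fast on unknown or malformed options
        .create();
Pipeline pipeline = Pipeline.create(options); // the runner now comes from --runner=...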

  Next, let's walk through a few typical examples (adapted from the examples shipped with Apache Beam) to see how Apache Beam builds a Pipeline and runs it on the specified PipelineRunner:

  • WordCount (Count/Source/Sink)

  Starting from Apache Beam's MinimalWordCount example code, let's look at how to build a Pipeline and finally execute it. The MinimalWordCount implementation is shown below:

package org.shirdrn.beam.examples;

import org.apache.beam.runners.direct.DirectRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.values.KV;

public class MinimalWordCount {

    @SuppressWarnings("serial")
    public static void main(String[] args) {

        PipelineOptions options = PipelineOptionsFactory.create();
        options.setRunner(DirectRunner.class); // Explicitly set the PipelineRunner: DirectRunner (local mode)

        Pipeline pipeline = Pipeline.create(options);

        pipeline.apply(TextIO.Read.from("/tmp/dataset/apache_beam.txt")) // Read a local file; this builds the first PTransform
                .apply("ExtractWords", ParDo.of(new DoFn<String, String>() { // Process each line, splitting it into words

                    @ProcessElement
                    public void processElement(ProcessContext c) {
                        for (String word : c.element().split("[\\s:\\,\\.\\-]+")) {
                            if (!word.isEmpty()) {
                                c.output(word);
                            }
                        }
                    }

                }))
                .apply(Count.<String> perElement()) // Count the occurrences of each word
                .apply("ConcatResultKVs", MapElements.via( // Format the final output (key: word, value: count)
                        new SimpleFunction<KV<String, Long>, String>() {

                    @Override
                    public String apply(KV<String, Long> input) {
                        return input.getKey() + ": " + input.getValue();
                    }

                }))
                .apply(TextIO.Write.to("wordcount")); // Write the results

        pipeline.run().waitUntilFinish();
    }
}
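
  Running this pipeline with the DirectRunner writes sharded output files with names like wordcount-00000-of-00003. Each shard holds lines in the "word: count" format produced by ConcatResultKVs; for a hypothetical input the contents might look like:

apache: 3
beam: 5
pipeline: 2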

  The meaning of each Pipeline step is explained in the code comments above. Next, let's consider HDFS as the data Source; the first PTransform is built as in the following snippet:

PCollection<KV<LongWritable, Text>> resultCollection = pipeline.apply(HDFSFileSource.readFrom(
        "hdfs://myserver:8020/data/ds/beam.txt",
        TextInputFormat.class, LongWritable.class, Text.class));

  As you can see, this returns a collection of KV objects whose keys and values are of type LongWritable and Text, respectively; the subsequent processing is similar to the logic above. If you build the project with Maven, add the following dependency (beam.version can be the latest release, 0.4.0):

<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-sdks-java-io-hdfs</artifactId>
    <version>${beam.version}</version>
</dependency>
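
  Since the downstream WordCount logic expects a PCollection<String>, you still need to unwrap each HDFS record. A sketch of that bridging step (reusing MapElements and SimpleFunction from the example above; the transform name "ExtractLines" is mine):

PCollection<String> lines = resultCollection.apply("ExtractLines",
        MapElements.via(new SimpleFunction<KV<LongWritable, Text>, String>() {
            @Override
            public String apply(KV<LongWritable, Text> input) {
                return input.getValue().toString(); // the Text value holds the line content
            }
        }));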

  • Deduplication (Distinct)

Deduplication is another common operation on datasets. Implemented with Apache Beam, the example code is as follows:

package org.shirdrn.beam.examples;

import org.apache.beam.runners.direct.DirectRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Distinct;

public class DistinctExample {

    public static void main(String[] args) throws Exception {

         PipelineOptions options = PipelineOptionsFactory.create();
         options.setRunner(DirectRunner.class); // Explicitly set the PipelineRunner: DirectRunner (local mode)

         Pipeline pipeline = Pipeline.create(options);
         pipeline.apply(TextIO.Read.from("/tmp/dataset/MY_ID_FILE.txt"))
             .apply(Distinct.<String> create()) // Apply the Distinct PTransform over Strings
             .apply(TextIO.Write.to("deduped.txt")); // Write the results
         pipeline.run().waitUntilFinish();
    }
}
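
  For instance, given a hypothetical MY_ID_FILE.txt containing duplicates:

35451605324179
35236905298306
35451605324179
35236905298306

  the output (sharded as deduped.txt-00000-of-0000N) would contain each ID exactly once.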

  • Grouping (GroupByKey)

Grouping data is also a very common operation. Here is an example built on the most basic grouping PTransform, GroupByKey:

package org.shirdrn.beam.examples;

import org.apache.beam.runners.direct.DirectRunner;
import com.google.common.base.Joiner; // use Guava's Joiner directly, not the runner's repackaged copy
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.values.KV;

public class GroupByKeyExample {

    @SuppressWarnings("serial")
    public static void main(String[] args) {

        PipelineOptions options = PipelineOptionsFactory.create();
        options.setRunner(DirectRunner.class); // Explicitly set the PipelineRunner: DirectRunner (local mode)

        Pipeline pipeline = Pipeline.create(options);

        pipeline.apply(TextIO.Read.from("/tmp/dataset/MY_INFO_FILE.txt"))
            .apply("ExtractFields", ParDo.of(new DoFn<String, KV<String, String>>() {

                @ProcessElement
                public void processElement(ProcessContext c) {
                    // file format example: 35451605324179    3G    CMCC
                    String[] values = c.element().split("\t");
                    if(values.length == 3) {
                        c.output(KV.of(values[1], values[0]));
                    }
                }
            }))
            .apply("GroupByKey", GroupByKey.<String, String>create()) // Apply the GroupByKey PTransform
            .apply("ConcatResults", MapElements.via(
                    new SimpleFunction<KV<String, Iterable<String>>, String>() {

                        @Override
                        public String apply(KV<String, Iterable<String>> input) {
                            return new StringBuffer()
                                    .append(input.getKey()).append("\t")
                                    .append(Joiner.on(",").join(input.getValue()))
                                    .toString();
                        }

            }))
            .apply(TextIO.Write.to("groupedResults"));

        pipeline.run().waitUntilFinish();

    }
}

  Run with the DirectRunner, the output is sharded into files with names like groupedResults-00000-of-00002, groupedResults-00001-of-00002, and so on.
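
  To make the grouping concrete: ExtractFields keys each record by the network field, so with hypothetical input lines such as

35451605324179    3G    CMCC
35236905519469    3G    CMCC
35236905298306    2G    CMCC

  the grouped output would contain lines like

3G    35451605324179,35236905519469
2G    35236905298306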

  • Join

  Finally, let's implement a Join example. The users' basic information consists of an ID and a name, in a file with the following format:

35451605324179    Jack
35236905298306    Jim
35236905519469    John
35237005022314    Linda

  The other file holds information about each user's mobile network, with the following format:

35451605324179    3G    China Mobile
35236905298306    2G    China Telecom
35236905519469    4G    China Mobile

  After the Join, we want to know which network each user uses (user name + network). The Apache Beam implementation is as follows:

package org.shirdrn.beam.examples;

import org.apache.beam.runners.direct.DirectRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.transforms.join.CoGbkResult;
import org.apache.beam.sdk.transforms.join.CoGroupByKey;
import org.apache.beam.sdk.transforms.join.KeyedPCollectionTuple;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TupleTag;

public class JoinExample {

    @SuppressWarnings("serial")
    public static void main(String[] args) {

        PipelineOptions options = PipelineOptionsFactory.create();
        options.setRunner(DirectRunner.class);  // Explicitly set the PipelineRunner: DirectRunner (local mode)

        Pipeline pipeline = Pipeline.create(options);

        // create ID info collection
        final PCollection<KV<String, String>> idInfoCollection = pipeline
                .apply(TextIO.Read.from("/tmp/dataset/MY_ID_INFO_FILE.txt"))
                .apply("CreateUserIdInfoPairs", MapElements.via(
                        new SimpleFunction<String, KV<String, String>>() {

                    @Override
                    public KV<String, String> apply(String input) {
                        // line format example: 35451605324179    Jack
                        String[] values = input.split("\t");
                        return KV.of(values[0], values[1]);
                    }

                }));

        // create operation collection
        final PCollection<KV<String, String>> opCollection = pipeline
                .apply(TextIO.Read.from("/tmp/dataset/MY_ID_OP_INFO_FILE.txt"))
                .apply("CreateIdOperationPairs", MapElements.via(
                        new SimpleFunction<String, KV<String, String>>() {

                    @Override
                    public KV<String, String> apply(String input) {
                        // line format example: 35237005342309    3G    CMCC
                        String[] values = input.split("\t");
                        return KV.of(values[0], values[1]);
                    }

                }));

        final TupleTag<String> idInfoTag = new TupleTag<String>();
        final TupleTag<String> opInfoTag = new TupleTag<String>();

        final PCollection<KV<String, CoGbkResult>> cogrouppedCollection = KeyedPCollectionTuple
                .of(idInfoTag, idInfoCollection)
                .and(opInfoTag, opCollection)
                .apply(CoGroupByKey.<String>create());

        final PCollection<KV<String, String>> finalResultCollection = cogrouppedCollection
                .apply("CreateJoinedIdInfoPairs", ParDo.of(new DoFn<KV<String, CoGbkResult>, KV<String, String>>() {

                @ProcessElement
                public void processElement(ProcessContext c) {
                    KV<String, CoGbkResult> e = c.element();
                    String id = e.getKey();
                    String name = e.getValue().getOnly(idInfoTag);
                    for (String opInfo : c.element().getValue().getAll(opInfoTag)) {
                      // Generate a string that combines information from both collection values
                      c.output(KV.of(id, "\t" + name + "\t" + opInfo));
                    }
                }
        }));

        PCollection<String> formattedResults = finalResultCollection
                .apply("FormatFinalResults", ParDo.of(new DoFn<KV<String, String>, String>() {
                  @ProcessElement
                  public void processElement(ProcessContext c) {
                    c.output(c.element().getKey() + "\t" + c.element().getValue());
                  }
                }));

         formattedResults.apply(TextIO.Write.to("joinedResults"));
         pipeline.run().waitUntilFinish();

    }
}
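
  Tracing the two sample files through this pipeline: each ID is joined with its name and its network, while Linda produces no output line because she has no network record (the loop over opInfoTag is empty for her). The expected output (sharded as joinedResults-00000-of-0000N; tab-separated, and note the formatting step actually emits a double tab after the ID) would look like:

35451605324179        Jack    3G
35236905298306        Jim     2G
35236905519469        John    4G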
This post is reproduced from the cnblogs blog 大数据躺过的坑. Original link: http://www.cnblogs.com/zlslch/p/7610420.html. Please contact the original author if you want to republish it.