Why Apache Beam? A data Artisans perspective

Overview:

https://cloud.google.com/dataflow/blog/dataflow-beam-and-spark-comparison

https://github.com/apache/incubator-beam

https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-101

https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-102

http://data-artisans.com/why-apache-beam/

 

As the Dataflow SDK and the runners were moving to the Apache Incubator as Apache Beam, we were asked by Google to bring the Flink runner into the Beam codebase and to become committers and PMC members in the new project. We decided to go full st(r)eam ahead with this opportunity, as we believe that

(1) the Beam model is the future reference programming model for writing data applications in both stream and batch, and

(2) Flink is the definitive platform to execute these data applications. As Beam is now taking shape, Flink is currently the only practical execution engine for Beam programs outside Google’s Cloud.

In short: Beam consists of an SDK part and a runner part, and outside Google's cloud, Flink is currently the best choice of runner.

 

Flink and Beam are completely aligned in their concepts, which makes the translation of Beam programs to Flink jobs both straightforward and very efficient.
With full support for concepts such as event time, watermarks, and triggers, and with the new features we are contributing to Flink, we believe that the superiority of the Flink runner will persist for the foreseeable future.

In short: Flink and Beam are very close in design and concepts, so translating Beam programs into Flink jobs is direct and efficient.
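
To make that alignment concrete, here is a minimal sketch of event time and watermarks expressed directly in the Flink DataStream API; these are the very primitives the Beam model is built on, which is why the translation is cheap. This is an illustrative example against the Flink 1.x API (class names may differ slightly across versions), not code from the original post:

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;

    public class EventTimeSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Use event time (when each event occurred), not processing time
            // (when the system happens to observe it).
            env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

            // (word, event-timestamp-in-millis) pairs standing in for a real source.
            DataStream<Tuple2<String, Long>> events =
                env.fromElements(Tuple2.of("a", 1000L), Tuple2.of("b", 2000L));

            // Assign timestamps and generate watermarks: the same watermark
            // concept Beam uses to decide when an event-time window is complete.
            events.assignTimestampsAndWatermarks(
                new AscendingTimestampExtractor<Tuple2<String, Long>>() {
                    @Override
                    public long extractAscendingTimestamp(Tuple2<String, Long> e) {
                        return e.f1;
                    }
                })
                .print();

            env.execute("event-time sketch");
        }
    }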

 

One question remains: what is the relationship between Flink's own native API (DataStream) and the Beam API?

Will both continue to be supported, and is it confusing for developers to have two different APIs that in the end generate Flink jobs?
Our committers at data Artisans will continue to fully support both the Flink DataStream API (which is, as of Flink 1.0, stable and backwards compatible) and the Beam API, as it evolves, for Beam programs that run on Flink.

The differences between the two APIs are largely syntactic (and a matter of taste), and we are working together with Google towards unifying the APIs, with the end goal of making the Beam and Flink APIs source compatible.
We believe that the two communities can learn from each other, and we encourage users to use either of the two APIs to implement their Flink jobs for stream data processing.

With the native Flink DataStream API you get an already mature and backwards-compatible API, built-in libraries (e.g., CEP and upcoming SQL), mature tooling and connectors, key-value state (with the ability to query that state in the future), and an API which fully utilizes all the features of Flink’s powerful engine. 
With the Beam API, you get the option of portability down the line as more Beam runners mature.

What is the difference between the Flink API and the Beam API?
The differences are mainly syntactic; the two sides are working to unify the interfaces, but each will keep its own value.
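
To make the syntactic comparison concrete, consider a per-word count over one-minute event-time windows written against both APIs. This is a hedged sketch (the inputs words and wordsPc are assumed to exist upstream; transform names follow the Flink 1.x and Beam Java SDKs):

    // Flink DataStream API: 'words' is an assumed DataStream<Tuple2<String, Integer>>
    // of (word, 1) pairs with timestamps and watermarks already assigned.
    DataStream<Tuple2<String, Integer>> flinkCounts = words
        .keyBy(0)                          // key by the word
        .timeWindow(Time.minutes(1))       // fixed one-minute event-time windows
        .sum(1);                           // sum the ones per word and window

    // Beam API: 'wordsPc' is an assumed PCollection<String> of timestamped words.
    PCollection<KV<String, Long>> beamCounts = wordsPc
        .apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1))))
        .apply(Count.<String>perElement());

The shape of the program (key, window, aggregate) is the same in both; only the surface syntax differs.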

API, model, and engine

To clarify our points above, we would like to explain what we mean by choice of API, choice of programming model, and choice of execution engine.

Currently, Beam has three available runners: Google's proprietary Cloud Dataflow runner, as well as the Flink and Spark runners included in the open source Apache Beam project. Let us look at this ecosystem, and add Flink and Spark themselves with their native APIs, as well as Storm:

[Figure: the Beam ecosystem, showing the Beam API and model on top of the Dataflow, Flink, and Spark runners, alongside the native APIs and engines of Flink, Spark, and Storm]

The figure shows the key benefit of Beam: the API and the model stay unified even though the underlying engine can differ.

 

What's more, Google and data Artisans are working together to make the two APIs semantically equivalent, ironing out any minor inconsistencies.
This means that users of either API can switch with relatively low effort. 
Our long term goal is to make the Beam and Flink DataStream APIs source-compatible, so that programs written in one can natively run on the other with no code changes.

If you choose to invest in the Beam programming model now, you have two options:

  1. Use the Flink DataStream API in Java and Scala
  2. Use the Beam API directly in Java (and soon Python) with the Flink runner, as sketched below
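
For option 2, pointing an existing Beam pipeline at Flink is mostly a matter of pipeline options. A minimal sketch, assuming the runner class names of current Beam releases (they were still shifting while Beam was incubating, where the class was called FlinkPipelineRunner):

    import org.apache.beam.runners.flink.FlinkPipelineOptions;
    import org.apache.beam.runners.flink.FlinkRunner;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class RunOnFlink {
        public static void main(String[] args) {
            FlinkPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
                .withValidation()
                .as(FlinkPipelineOptions.class);
            // The only Flink-specific line: choose the execution engine.
            options.setRunner(FlinkRunner.class);

            Pipeline p = Pipeline.create(options);
            // ... apply the usual Beam transforms to p here ...
            p.run();
        }
    }

The same pipeline can then target another engine simply by passing a different runner in the options.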

 

In the long run the Beam and Flink APIs should become compatible, so if you want the Beam programming model you have two choices: use the Flink DataStream API directly, or use the Beam API with the Flink runner.

 

We recommend option 1 to users that want to get started immediately with an already mature and backwards-compatible API, access to libraries (e.g., the existing CEP library and the upcoming SQL functionality), mature tooling and connectors (e.g., to Kafka), as well as an API that fully and natively utilizes all the existing and upcoming features of the Flink engine. In addition, we recommend the native Flink API for use cases that rely on Flink's key-value state abstraction and, in the future, on Flink's facilities for querying that state.
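
As a concrete illustration of that key-value state abstraction, here is a hypothetical sketch of a running per-key sum kept in Flink-managed, fault-tolerant keyed state (written against the Flink 1.x state API; names and signatures may differ in other versions):

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.util.Collector;

    // Emits a running sum per key, stored in Flink's checkpointed keyed state.
    public class RunningSum extends RichFlatMapFunction<Tuple2<String, Long>, Tuple2<String, Long>> {

        private transient ValueState<Long> sum;

        @Override
        public void open(Configuration parameters) {
            // One state cell per key; Flink scopes access to the current key.
            sum = getRuntimeContext().getState(new ValueStateDescriptor<>("sum", Long.class));
        }

        @Override
        public void flatMap(Tuple2<String, Long> in, Collector<Tuple2<String, Long>> out) throws Exception {
            Long current = sum.value();
            long updated = (current == null ? 0L : current) + in.f1;
            sum.update(updated);
            out.collect(Tuple2.of(in.f0, updated));
        }
    }

Applied as stream.keyBy(0).flatMap(new RunningSum()), this is exactly the state that the planned queryable-state facilities would expose from outside the job.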

We recommend option 2 to users that want to keep the option of engine portability (as other Beam runners progress).

The only real benefit of choosing the Beam API is portability across runners; the Flink API, by contrast, is clearly richer and more mature.

 

In a recent blog post, Google compared Beam and Spark Streaming from a programming-model perspective. They took a mobile gaming scenario and implemented several use cases in Beam and Spark Streaming, focusing their analysis on how well the following concerns are separated in the code (a Beam sketch follows the list):

  • What results are calculated? Sums, joins, histograms, machine learning models?
  • Where in event time are results calculated? Does the time each event originally occurred affect results? Are results aggregated in fixed windows, sessions, or a single global window?
  • When in processing time are results materialized? Does the time each event is observed within the system affect results? When are results emitted? Speculatively, as data evolve? When data arrive late and results must be revised? Some combination of these?
  • How do refinements of results relate? If additional data arrive and results change, are they independent and distinct, do they build upon one another, etc.?
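
To show how the four questions separate in Beam code, here is a hedged sketch of the windowing and triggering portion of such a gaming pipeline. The input scores is an assumed PCollection<KV<String, Integer>> of timestamped per-team points; the transforms are from the Beam Java SDK:

    PCollection<KV<String, Integer>> teamScores = scores
        .apply(Window.<KV<String, Integer>>into(
                FixedWindows.of(Duration.standardMinutes(2)))        // Where: fixed two-minute event-time windows
            .triggering(AfterWatermark.pastEndOfWindow()             // When: at the watermark...
                .withEarlyFirings(AfterProcessingTime
                    .pastFirstElementInPane()
                    .plusDelayOf(Duration.standardMinutes(1)))       // ...plus speculative early results
                .withLateFirings(AfterPane.elementCountAtLeast(1)))  // ...plus a revision per late record
            .withAllowedLateness(Duration.standardMinutes(30))
            .accumulatingFiredPanes())                               // How: each refinement builds on the last
        .apply(Sum.integersPerKey());                                // What: a per-team sum

Each of the four questions is answered by its own, clearly separated piece of the code, which is exactly the separation of concerns the comparison measures.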

 

[Figure: the mobile gaming example code, with colors marking which parts of the code answer each of the four questions]

 

A concrete example comparing the APIs: the differently colored parts of the code each address one of the four questions above.

 

Conclusion

We firmly believe that the Beam model is the correct programming model for streaming and batch data processing.

We encourage users to adopt this model for their future data applications, embodied in either the Beam API itself or the Flink DataStream API.

Further, we believe that Flink, with its current features and roadmap, is currently the most advanced open source stream processor, and at the same time the only practical solution for deploying Beam programs in production on on-premises or non-GCP clusters. We look forward to continuing to push the envelope in stream processing and to enabling enterprises to use stream processing technology for their data applications.
