Apache Tez Design

Introduction

Tez aims to be a general-purpose execution runtime that enhances various scenarios not well served by classic Map-Reduce. 
In the short term, the major focus is to support Hive and Pig, specifically to enable performance improvements for batch and ad-hoc interactive queries.


What services will Tez provide

Tez is compatible with traditional map-reduce jobs, but its main focus is on DAG-based jobs and the corresponding APIs and primitives. 

Tez provides runtime components:

  • An execution environment that can handle traditional map-reduce jobs
  • An execution environment that handles DAG-based jobs comprising various built-in and extendable primitives
  • Cluster-side determination of input splits
  • Runtime planning such as task cardinality determination and dynamic modification to the DAG structure


Tez provides APIs to access these services:

  • Traditional map-reduce functionality is accessed via Java classes written to the Job interface: org.apache.hadoop.mapred.Job and/or org.apache.hadoop.mapreduce.v2.app.job.Job; 
    and by specifying in yarn-site that the map-reduce framework should be Tez (see the configuration sketch after this list).
  • DAG-based execution is accessed via the new Tez DAG API: org.apache.tez.dag.api.*, org.apache.tez.engine.api.*.
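
The design document leaves the exact switch unspecified; as a minimal sketch, later Tez releases route map-reduce jobs onto Tez via the mapreduce.framework.name property. The property placement and value below are an assumption, shown as a site-configuration excerpt:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn-tez</value>
</property>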


Tez provides pre-made primitives for use with the DAG API (org.apache.tez.engine.common.*)

  • Vertex Input
  • Vertex Output
  • Sorting
  • Shuffling
  • Merging
  • Data transfer

 

Tez-YARN architecture

In the figure below, Tez is represented by the red components: the client-side API, an AppMaster, and multiple containers that execute child processes under the control of the AppMaster.

[Figure: Tez-YARN architecture; Tez components shown in red]

Three separate software stacks are involved in the execution of a Tez job, each using components from the client application, Tez, and YARN:

[Figure: the three software stacks of a Tez job, with components from the client application, Tez, and YARN]

 

DAG topologies and scenarios

The following terminology is used:

Job Vertex: A “stage” in the job plan; a logical vertex. 
Job Edge: The logical connection between Job Vertices. 
Vertex: A materialized stage at runtime, comprising a certain number of materialized tasks that run in parallel. 
Edge: Represents actual data movement between tasks; a physical edge. 
Task: A process performing computation within a YARN container; a single execution node. 
Task cardinality: The number of materialized tasks in a Vertex, i.e. the Vertex's parallelism. 
Static plan: Planning decisions fixed before job submission. 
Dynamic plan: Planning decisions made at runtime in the AppMaster process.

 

Tez API

The Tez API comprises several services that enable applications to run DAG-style jobs. An application that uses Tez needs to: 
1. Create a job plan (the DAG) comprising vertices, edges, and data source references 
2. Create task implementations that perform computations and interact with the DAG AppMaster 
3. Configure YARN and Tez appropriately

DAG definition API

The API for defining an abstract DAG:

public class DAG {
    DAG();
    void addVertex(Vertex vertex);
    void addEdge(Edge edge);
    void addConfiguration(String key, String value);
    void setName(String name);
    void verify();
    DAGPlan createDag();
}

public class Vertex {
    Vertex(String vertexName, String processorName, int parallelism);
    void setTaskResource(Resource resource);
    void setTaskLocationsHint(TaskLocationHint[] hints);
    void setJavaOpts(String javaOpts);
    String getVertexName();
    String getProcessorName();
    int getParallelism();
    Resource getTaskResource();
    TaskLocationHint[] getTaskLocationsHint();
    String getJavaOpts();
}

public class Edge {
    Edge(Vertex inputVertex, Vertex outputVertex, EdgeProperty edgeProperty);
    String getInputVertex();
    String getOutputVertex();
    EdgeProperty getEdgeProperty();
    String getId();
}
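
As a minimal sketch of how these classes compose, the following assembles a two-vertex DAG. The processor names, parallelism values, and the shuffleEdgeProperty() helper are hypothetical placeholders; the design document does not define EdgeProperty's shape:

// A minimal sketch, assuming the DAG, Vertex, and Edge classes above.
DAG dag = new DAG();
dag.setName("wordcount");

Vertex mapper  = new Vertex("map",    "MapProcessor",    4);  // 4 parallel tasks
Vertex reducer = new Vertex("reduce", "ReduceProcessor", 2);  // 2 parallel tasks

dag.addVertex(mapper);
dag.addVertex(reducer);

EdgeProperty shuffle = shuffleEdgeProperty();  // hypothetical helper describing a shuffle edge
dag.addEdge(new Edge(mapper, reducer, shuffle));

dag.verify();                    // validate the topology before submission
DAGPlan plan = dag.createDag();  // serialize the plan for the AppMaster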

Execution APIs

A Task is the unit of execution in Tez; it follows the input, output, processor pattern.

// A context object for task execution; currently only a stub.
public interface Master {
}

public interface Input {
    void initialize(Configuration conf, Master master);
    boolean hasNext();
    Object getNextKey();
    Iterable<Object> getNextValues();
    float getProgress();
    void close();
}
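
The interface suggests a key-grouped read loop. A minimal sketch of a consumer, assuming the Input interface above (whether hasNext() advances to the next key group is our reading, not stated in the document):

class InputReader {
    static void drain(Input in) {
        while (in.hasNext()) {
            Object key = in.getNextKey();
            for (Object value : in.getNextValues()) {
                // consume (key, value) here
            }
        }
        in.close();
    }
}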

public interface Output {
    void initialize(Configuration conf, Master master);
    void write(Object key, Object value);
    OutputContext getOutputContext();
    void close();
}

public interface Partitioner {
    int getPartition(Object key, Object value, int numPartitions);
}
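
A minimal example implementation, mirroring the classic hash partitioner; the class name is ours:

public class HashPartitioner implements Partitioner {
    public int getPartition(Object key, Object value, int numPartitions) {
        // Mask the sign bit so the partition index is non-negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}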

public interface Processor {
    void initialize(Configuration conf, Master master);
    void process(Input[] in, Output[] out);
    void close();
}
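
Putting Input, Output, and Processor together, a pass-through processor might look as follows; the class name and the copy-everything behavior are illustrative sketches, not from the design document:

public class PassThroughProcessor implements Processor {
    public void initialize(Configuration conf, Master master) {
        // Stateless: nothing to set up.
    }

    public void process(Input[] in, Output[] out) {
        // Copy every (key, value) pair from each input to every output.
        for (Input input : in) {
            while (input.hasNext()) {
                Object key = input.getNextKey();
                for (Object value : input.getNextValues()) {
                    for (Output output : out) {
                        output.write(key, value);
                    }
                }
            }
        }
    }

    public void close() {
    }
}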

public interface Task {
    void initialize(Configuration conf, Master master);
    Input[] getInputs();
    Processor getProcessor();
    Output[] getOutputs();
    void run();
    void close();
}
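
Read together, the interfaces suggest a simple task driver: initialize inputs, outputs, and the processor, let the processor run, then tear everything down. A minimal sketch; the wiring and ordering are our reading, not spelled out in the document:

public class SimpleTask implements Task {
    private final Input[] inputs;
    private final Processor processor;
    private final Output[] outputs;

    public SimpleTask(Input[] inputs, Processor processor, Output[] outputs) {
        this.inputs = inputs;
        this.processor = processor;
        this.outputs = outputs;
    }

    public void initialize(Configuration conf, Master master) {
        for (Input in : inputs)    in.initialize(conf, master);
        for (Output out : outputs) out.initialize(conf, master);
        processor.initialize(conf, master);
    }

    public Input[] getInputs()      { return inputs; }
    public Processor getProcessor() { return processor; }
    public Output[] getOutputs()    { return outputs; }

    public void run() {
        // Delegate the computation to the processor.
        processor.process(inputs, outputs);
    }

    public void close() {
        processor.close();
        for (Output out : outputs) out.close();
        for (Input in : inputs)    in.close();
    }
}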

This article is excerpted from 博客园 (Cnblogs); original publication date: 2013-10-19.