Apache Tez Design

Introduction:

Tez aims to be a general-purpose execution runtime for scenarios that are not well served by classic Map-Reduce. 
In the short term the major focus is to support Hive and Pig, specifically to enable performance improvements to batch and ad-hoc interactive queries.

[figure]

 

What services will Tez provide

Tez remains compatible with traditional map-reduce jobs, but its main focus is DAG-based jobs together with the corresponding APIs and primitives.

Tez provides runtime components:

  • An execution environment that can handle traditional map-reduce jobs
  • An execution environment that handles DAG-based jobs comprising various built-in and extendable primitives
  • Cluster-side determination of input pieces
  • Runtime planning such as task cardinality determination and dynamic modification to the DAG structure


Tez provides APIs to access these services:

  • Traditional map-reduce functionality is accessed via Java classes written to the Job interface: org.apache.hadoop.mapred.Job and/or org.apache.hadoop.mapreduce.v2.app.job.Job, 
    and by specifying in yarn-site that the map-reduce framework should be Tez.
  • DAG-based execution is accessed via the new Tez DAG API: org.apache.tez.dag.api.*, org.apache.tez.engine.api.*.
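As an illustration of the first access path, routing classic map-reduce jobs through Tez comes down to a single framework-name property. A minimal configuration fragment might look like the following; note that in released Tez versions this property typically lives in mapred-site.xml rather than yarn-site.xml, so treat the file placement described above as historical:

```xml
<!-- Route classic MapReduce jobs through the Tez runtime. -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn-tez</value>
</property>
```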


Tez provides pre-made primitives for use with the DAG API (org.apache.tez.engine.common.*):

  • Vertex Input
  • Vertex Output
  • Sorting
  • Shuffling
  • Merging
  • Data transfer

 

Tez-YARN architecture

In the architecture figure Tez is represented by the red components: the client-side API, an AppMaster, and multiple containers that execute child processes under the control of the AppMaster.

[figure: Tez-YARN architecture]

Three separate software stacks are involved in the execution of a Tez job, each using components from the client application, Tez, and YARN:

[figure]

 

DAG topologies and scenarios

The following terminology is used:

Job Vertex: A “stage” in the job plan; a logical vertex. 
Job Edge: The logical connection between Job Vertices. 
Vertex: A materialized stage at runtime, comprising a certain number of materialized tasks that run in parallel. 
Edge: Represents actual data movement between tasks. 
Task: A process performing computation within a YARN container. 
Task cardinality: The number of materialized tasks in a Vertex, i.e. the Vertex's degree of parallelism. 
Static plan: Planning decisions fixed before job submission. 
Dynamic plan: Planning decisions made at runtime in the AppMaster process.

 

Tez API

The Tez API comprises the services that applications use to run DAG-style jobs. An application that makes use of Tez needs to: 
1. Create a job plan (the DAG) comprising vertices, edges, and data source references 
2. Create task implementations that perform computations and interact with the DAG AppMaster 
3. Configure YARN and Tez appropriately

DAG definition API

The interface for defining the abstract DAG:

public class DAG {
    DAG();
    void addVertex(Vertex);
    void addEdge(Edge);
    void addConfiguration(String, String);
    void setName(String);
    void verify();
    DAGPlan createDag();
}

public class Vertex {
    Vertex(String vertexName, String processorName, int parallelism);
    void setTaskResource(Resource);
    void setTaskLocationsHint(TaskLocationHint[]);
    void setJavaOpts(String);
    String getVertexName();
    String getProcessorName();
    int getParallelism();
    Resource getTaskResource();
    TaskLocationHint[] getTaskLocationsHint();
    String getJavaOpts();
}

public class Edge {
    Edge(Vertex inputVertex, Vertex outputVertex, EdgeProperty edgeProperty);
    String getInputVertex();
    String getOutputVertex();
    EdgeProperty getEdgeProperty();
    String getId();
}
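To show how these classes fit together, here is a self-contained sketch of assembling a two-stage job plan (a map stage feeding a reduce stage). The DAG, Vertex, and Edge classes below are simplified stand-ins for the API declared above — only the members the example needs are mocked, the EdgeProperty argument is omitted, and the processor names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for the DAG-definition classes declared above.
class Vertex {
    private final String vertexName, processorName;
    private final int parallelism;
    Vertex(String vertexName, String processorName, int parallelism) {
        this.vertexName = vertexName;
        this.processorName = processorName;
        this.parallelism = parallelism;
    }
    String getVertexName() { return vertexName; }
    String getProcessorName() { return processorName; }
    int getParallelism() { return parallelism; }
}

class Edge {
    private final Vertex inputVertex, outputVertex;
    Edge(Vertex inputVertex, Vertex outputVertex) {
        this.inputVertex = inputVertex;
        this.outputVertex = outputVertex;
    }
    String getInputVertex() { return inputVertex.getVertexName(); }
    String getOutputVertex() { return outputVertex.getVertexName(); }
}

class DAG {
    private final List<Vertex> vertices = new ArrayList<>();
    private final List<Edge> edges = new ArrayList<>();
    void addVertex(Vertex v) { vertices.add(v); }
    void addEdge(Edge e) { edges.add(e); }
    // verify(): every edge must connect vertices that belong to the DAG.
    void verify() {
        for (Edge e : edges) {
            boolean inOk = false, outOk = false;
            for (Vertex v : vertices) {
                if (v.getVertexName().equals(e.getInputVertex())) inOk = true;
                if (v.getVertexName().equals(e.getOutputVertex())) outOk = true;
            }
            if (!inOk || !outOk) throw new IllegalStateException("dangling edge");
        }
    }
    int vertexCount() { return vertices.size(); }
}

class JobPlanExample {
    // Build a two-stage plan: 4 map tasks feeding 2 reduce tasks.
    static DAG buildPlan() {
        DAG dag = new DAG();
        Vertex map = new Vertex("map", "MapProcessor", 4);          // task cardinality 4
        Vertex reduce = new Vertex("reduce", "ReduceProcessor", 2); // task cardinality 2
        dag.addVertex(map);
        dag.addVertex(reduce);
        dag.addEdge(new Edge(map, reduce)); // the shuffle edge between the stages
        dag.verify();
        return dag;
    }
}
```

Here the task cardinality of each stage is fixed in the plan (a static plan); as described earlier, Tez may also revise cardinality and DAG structure at runtime (a dynamic plan).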

Execution APIs

A Task is the unit of execution in Tez and follows the input, processor, output pattern:

public interface Master {
    // A context object for task execution; currently only a stub.
}

public interface Input {
    void initialize(Configuration conf, Master master);
    boolean hasNext();
    Object getNextKey();
    Iterable<Object> getNextValues();
    float getProgress();
    void close();
}

public interface Output{
    void initialize(Configuration conf, Master master);
    void write(Object key, Object value);
    OutputContext getOutputContext();
    void close();
}

public interface Partitioner {
    int getPartition(Object key, Object value, int numPartitions);
}
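A trivial implementation of this interface can follow the same key-hashing scheme as Hadoop's default HashPartitioner; the class name below is hypothetical, and the interface is repeated so the snippet compiles on its own:

```java
// The Partitioner interface from above, repeated for self-containment.
interface Partitioner {
    int getPartition(Object key, Object value, int numPartitions);
}

// Mirrors Hadoop's default HashPartitioner: hash the key, mask the sign bit
// so the result is non-negative, then take it modulo the partition count.
class HashPartitioner implements Partitioner {
    public int getPartition(Object key, Object value, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
```

Because the partition depends only on the key's hash, all values for a given key land in the same partition — the property the shuffle primitive relies on.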

public interface Processor {
    void initialize(Configuration conf, Master master);
    void process(Input[] in, Output[] out);
    void close();
}

public interface Task {
    void initialize(Configuration conf, Master master);
    Input[] getInputs();
    Processor getProcessor();
    Output[] getOutputs();
    void run();
    void close();
}
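To illustrate the input → processor → output flow, here is a self-contained sketch of a processor that sums the values of each key (the classic word-count reduce step). The interfaces are simplified views of the ones above — initialize/close plumbing and the Input[]/Output[] arrays are omitted — and every class name is hypothetical:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified views of the Input/Output interfaces above (init/close omitted).
interface SimpleInput {
    boolean hasNext();          // advances to the next key group
    Object getNextKey();
    Iterable<Object> getNextValues();
}

interface SimpleOutput {
    void write(Object key, Object value);
}

// A processor that emits (key, sum of that key's values).
class SumProcessor {
    void process(SimpleInput in, SimpleOutput out) {
        while (in.hasNext()) {
            Object key = in.getNextKey();
            long sum = 0;
            for (Object v : in.getNextValues()) {
                sum += ((Number) v).longValue();
            }
            out.write(key, sum);
        }
    }
}

// In-memory test doubles for the two interfaces.
class ListInput implements SimpleInput {
    private final Iterator<Map.Entry<Object, List<Object>>> it;
    private Map.Entry<Object, List<Object>> current;
    ListInput(LinkedHashMap<Object, List<Object>> data) {
        this.it = data.entrySet().iterator();
    }
    // hasNext() both tests for and moves to the next key group, matching
    // the one-call-per-iteration pattern in the process loop above.
    public boolean hasNext() {
        if (it.hasNext()) { current = it.next(); return true; }
        return false;
    }
    public Object getNextKey() { return current.getKey(); }
    public Iterable<Object> getNextValues() { return current.getValue(); }
}

class MapOutput implements SimpleOutput {
    final LinkedHashMap<Object, Object> results = new LinkedHashMap<>();
    public void write(Object key, Object value) { results.put(key, value); }
}
```

In a real Tez task the AppMaster would wire concrete Input and Output implementations (shuffle, sort, merge) around such a processor; the in-memory doubles here only stand in for that wiring.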

This article is excerpted from 博客园 (cnblogs); original publication date: 2013-10-19.