Kylin Cube Build and Job Monitoring

Cube Build

First of all, make sure that you have permission on the cube you want to build.

  1. On the Models page, click the Action drop-down button on the right of the cube row and select the Build operation.

  2. A pop-up window appears after the selection; click the END DATE input box to select the end date of this incremental cube build.

  3. Click Submit to send the build request. After it succeeds, you will see the new job on the Monitor page.

  4. The new job starts in “pending” status; after a while it starts running, and you can follow its progress by refreshing the web page or clicking the refresh button.

  5. Wait for the job to finish. In the meantime, if you want to discard it, click Actions -> Discard.

  6. After the job is 100% finished, the cube’s status becomes “Ready”, meaning it is ready to serve SQL queries. On the Model tab, find the cube and click its name to expand the section; the “HBase” tab lists the cube segments. Each segment has a start/end time, and its underlying HBase table information is also listed.

If you have more source data, repeat the steps above to build it into the cube. The same build request can also be submitted from a script through Kylin’s REST API, as sketched below.
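The sketch below is a minimal example of triggering an incremental build over the REST API rather than through the web UI. It assumes a Kylin server on localhost:7070, the default ADMIN/KYLIN account, and a hypothetical cube named my_cube; the exact resource path (/build in older releases, /rebuild in newer ones) and the end-date value depend on your Kylin version and data, so treat this as a starting point, not a definitive recipe.

```python
# Minimal sketch: submit an incremental cube build through Kylin's REST API.
# Assumptions: Kylin at localhost:7070, default ADMIN/KYLIN account, and a
# hypothetical cube named "my_cube" -- adjust all three for your environment.
# Older Kylin releases expose /cubes/{cube}/build; newer ones use /rebuild.
import requests

KYLIN_API = "http://localhost:7070/kylin/api"
AUTH = ("ADMIN", "KYLIN")            # default credentials; change in production
CUBE_NAME = "my_cube"                # hypothetical cube name

payload = {
    "startTime": 0,                  # epoch milliseconds; 0 = build from the beginning
    "endTime": 1388534400000,        # END DATE of this incremental build
    "buildType": "BUILD",            # "BUILD" = incremental; MERGE/REFRESH also exist
}

resp = requests.put(f"{KYLIN_API}/cubes/{CUBE_NAME}/build", json=payload, auth=AUTH)
resp.raise_for_status()
job = resp.json()
print("Submitted build job:", job.get("uuid"))   # job id to watch on the Monitor page
```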

Job Monitoring

On the Monitor page, click the job detail button to see detailed information shown on the right side.

The detail view of a job provides a step-by-step record for tracing it. You can hover over a step’s status icon to see its basic status and information.

Click the icon buttons shown in each step to see the details: Parameters, Log, and MRJob. A sketch of polling the overall job status over the REST API follows the list below.

  • Parameters

  • Log

  • MRJob (MapReduce Job)
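Instead of refreshing the Monitor page by hand, a build job can also be polled from a script. The sketch below uses the same assumed server and credentials as the build example above; JOB_ID is the uuid returned by the build request, and field names such as job_status and progress follow the Kylin REST API but may differ slightly between versions. Recent releases also expose PUT /kylin/api/jobs/{jobId}/cancel, which mirrors the Actions -> Discard button.

```python
# Minimal sketch: poll a Kylin build job until it reaches a terminal state,
# instead of refreshing the Monitor page. Assumes the same server and
# credentials as the build sketch above; JOB_ID is the uuid returned by the
# build request. Field names ("job_status", "progress") may vary by version.
import time

import requests

KYLIN_API = "http://localhost:7070/kylin/api"
AUTH = ("ADMIN", "KYLIN")
JOB_ID = "replace-with-the-uuid-from-the-build-response"   # placeholder

while True:
    resp = requests.get(f"{KYLIN_API}/jobs/{JOB_ID}", auth=AUTH)
    resp.raise_for_status()
    job = resp.json()
    status = job.get("job_status")
    print(f"status={status} progress={job.get('progress')}%")
    if status in ("FINISHED", "ERROR", "DISCARDED"):
        break                        # terminal states: done, failed, or discarded
    time.sleep(30)                   # poll every 30 seconds
```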
