Kylin Cube Build and Job Monitoring

Cube Build

First of all, make sure you have permission on the cube you want to build.

  1. On the Models page, click the Action drop-down button to the right of a cube and select Build.

  2. A pop-up window appears after the selection; click the END DATE input box to select the end date of this incremental cube build.

  3. Click Submit to send the build request. On success, you will see the new job on the Monitor page.

  4. The new job starts in “pending” status; after a while it will start running, and you can follow its progress by refreshing the web page or clicking the refresh button.

  5. Wait for the job to finish. In the meantime, if you want to discard it, click Actions -> Discard.

  6. After the job is 100% finished, the cube’s status becomes “Ready”, which means it is ready to serve SQL queries. In the Model tab, find the cube and click its name to expand the section; the “HBase” tab lists the cube segments. Each segment has a start/end time, and its underlying HBase table information is also listed (the same details can be read over the REST API, as sketched below).
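
The segment list and its backing HBase tables can also be read programmatically. Below is a minimal Python sketch, assuming a Kylin server at localhost:7070, the default ADMIN/KYLIN account, and a hypothetical cube named sales_cube; the field names follow the commonly documented REST API response and may differ between Kylin versions.

    # Minimal sketch: list a cube's segments over the REST API instead of the
    # "HBase" tab in the web UI. Server address, credentials and cube name are
    # hypothetical placeholders; field names may vary by Kylin version.
    import requests

    KYLIN = "http://localhost:7070/kylin/api"
    AUTH = ("ADMIN", "KYLIN")          # Kylin accepts HTTP Basic auth on API calls

    cubes = requests.get(f"{KYLIN}/cubes",
                         params={"cubeName": "sales_cube"}, auth=AUTH).json()
    for cube in cubes:
        print(cube.get("name"), cube.get("status"))
        for seg in cube.get("segments", []):
            # date_range_start/_end are epoch milliseconds; the storage location
            # identifier is the backing HBase table shown in the UI.
            print("  segment:", seg.get("name"),
                  seg.get("date_range_start"), "-", seg.get("date_range_end"),
                  "table:", seg.get("storage_location_identifier"))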

If you have more source data, repeat the steps above to build it into the cube.
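
If you rebuild the cube on a schedule, the same build request can be submitted through Kylin's REST API instead of the web UI. The sketch below is a minimal Python example, assuming a Kylin server at localhost:7070, the default ADMIN/KYLIN account, and a hypothetical cube named sales_cube; the endpoint path and payload fields follow the commonly documented REST API and may vary slightly between Kylin versions.

    # Minimal sketch: trigger an incremental cube build through Kylin's REST API.
    # The server address, ADMIN/KYLIN credentials and cube name "sales_cube" are
    # hypothetical placeholders; adjust them for your deployment.
    import requests

    KYLIN = "http://localhost:7070/kylin/api"
    AUTH = ("ADMIN", "KYLIN")            # Kylin accepts HTTP Basic auth on API calls

    payload = {
        "startTime": 0,                  # epoch ms; the new segment follows the last one
        "endTime": 1388563200000,        # END DATE of this incremental build (2014-01-01 UTC)
        "buildType": "BUILD",            # same as submitting a Build from the UI
    }

    # Some Kylin releases expose this endpoint as .../cubes/{name}/build instead.
    resp = requests.put(f"{KYLIN}/cubes/sales_cube/rebuild", json=payload, auth=AUTH)
    resp.raise_for_status()
    job = resp.json()
    print("submitted job:", job.get("uuid"), "initial status:", job.get("job_status"))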

Job Monitoring

On the Monitor page, click the job detail button to see detailed information displayed on the right side.

The detail view of a job provides a step-by-step record for tracing it. You can hover over a step’s status icon to see its basic status and information.

Click the icon buttons shown in each step to see the details: Parameters, Log, MRJob.

  • Parameters

  • Log

  • MRJob (MapReduce Job)
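
Job status can also be polled, and a job discarded, through the REST API, mirroring the Monitor page and the Actions -> Discard button. The sketch below uses the same hypothetical server and credentials as the earlier examples; the /jobs/{jobId} and /jobs/{jobId}/cancel endpoints and the job_status/progress fields follow the commonly documented API and should be verified against your Kylin version.

    # Minimal sketch: poll a Kylin job until it reaches a terminal state, and
    # discard it if needed. Same hypothetical assumptions as the sketches above.
    import time
    import requests

    KYLIN = "http://localhost:7070/kylin/api"
    AUTH = ("ADMIN", "KYLIN")

    def wait_for_job(job_id, poll_seconds=30):
        """Poll GET /jobs/{jobId} until the job finishes, errors out, or is discarded."""
        while True:
            job = requests.get(f"{KYLIN}/jobs/{job_id}", auth=AUTH).json()
            status = job.get("job_status")
            print(f"job {job_id}: {status} (progress: {job.get('progress')})")
            if status in ("FINISHED", "ERROR", "DISCARDED"):
                return status
            time.sleep(poll_seconds)

    def discard_job(job_id):
        """Rough equivalent of Actions -> Discard in the web UI."""
        requests.put(f"{KYLIN}/jobs/{job_id}/cancel", auth=AUTH).raise_for_status()

Together with the build request sketched earlier, this is enough to script the whole build-and-monitor loop described on this page.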
