Iceberg Metadata Files and How to Inspect Them

Overview: understanding Iceberg's file structure.

Official documentation: https://iceberg.apache.org/spec/#manifests

1. Understanding Iceberg's file structure

[Figure: Iceberg's file structure (metadata.json, manifest list, manifest files, data files)]

2. Create an Iceberg table

CREATE TABLE sample7 (
  `id` BIGINT,
  `data` STRING,
  `name` STRING,
  `category` STRING)
USING iceberg
PARTITIONED BY (category, name)
TBLPROPERTIES ("format-version"="2");


Insert 3 rows of data, covering 2 partitions ([xc1, pt2] and [xc3, pt3]); see the sketch below.
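The original INSERT statements are not shown; a minimal sketch that would produce these rows:

INSERT INTO sample7 VALUES
  (1, '1', 'xc1', 'pt2'),
  (2, '2', 'xc1', 'pt2'),
  (2, '2', 'xc3', 'pt3');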

spark-sql> select * from sample7;
22/08/09 20:17:18 WARN conf.HiveConf: HiveConf of name hive.merge.mapfile does not exist
22/08/09 20:17:18 WARN conf.HiveConf: HiveConf hive.mapjoin.smalltable.filesize expects LONG type value
1       1       xc1     pt2
2       2       xc1     pt2
2       2       xc3     pt3


3. How to inspect Iceberg file information

1. Run show create table sample8; (note: the rest of this walkthrough uses sample8, an existing table that already has several snapshots)

which returns:

CREATE TABLE spark_catalog.zdk.sample8 (
  `id` BIGINT,
  `data` STRING,
  `name` STRING,
  `category` STRING)
USING iceberg
PARTITIONED BY (category)
LOCATION 'hdfs://ns1/dtInsight/hive/warehouse/zdk.db/sample8'
TBLPROPERTIES(
  'current-snapshot-id' = '3329158953122539010',
  'format' = 'iceberg/parquet')
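The current metadata.json can be located by listing the table's metadata directory on HDFS (a sketch; exact file naming varies by catalog implementation, but the entry with the highest version prefix is the current one):

hdfs dfs -ls hdfs://ns1/dtInsight/hive/warehouse/zdk.db/sample8/metadata/ | grep metadata.json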

2. Use the metadata_location table property to locate metadata.json (the HDFS listing sketched above is an alternative way to find it);

  * metadata.json records every snapshot of the table;

  * read the current-snapshot-id value from the file, e.g. 3329158953122539010;

  * look up the snapshot entry with that snapshot-id in the same file:

{
    "sequence-number" : 6,
    "snapshot-id" : 3329158953122539010,
    "parent-snapshot-id" : 6402031869467028731,
    "timestamp-ms" : 1660186257707,
    "summary" : {
      "operation" : "append",
      "spark.app.id" : "local-1660183319013",
      "added-data-files" : "1",
      "added-records" : "1",
      "added-files-size" : "1120",
      "changed-partition-count" : "1",
      "total-records" : "8",
      "total-files-size" : "7997",
      "total-data-files" : "7",
      "total-delete-files" : "0",
      "total-position-deletes" : "0",
      "total-equality-deletes" : "0"
    },
    "manifest-list" : "hdfs://ns1/dtInsight/hive/warehouse/zdk.db/sample8/metadata/snap-3329158953122539010-1-f4d2b1bd-22d3-4957-a5fe-f84118663407.avro",
    "schema-id" : 0
  } 
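Reading metadata.json by hand is instructive, but the same snapshot information is also exposed through Iceberg's Spark metadata tables (a convenience sketch, assuming the Iceberg Spark runtime is on the classpath):

SELECT snapshot_id, parent_id, operation, manifest_list
FROM zdk.sample8.snapshots;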

 * Take the manifest-list location: hdfs://ns1/dtInsight/hive/warehouse/zdk.db/sample8/metadata/snap-3329158953122539010-1-f4d2b1bd-22d3-4957-a5fe-f84118663407.avro;

 * download the Avro CLI tool, avro-tools-1.9.2.jar;

 * copy the file locally (e.g. with hdfs dfs -get) and dump it to JSON: java -jar avro-tools-1.9.2.jar tojson snap-3329158953122539010-1-f4d2b1bd-22d3-4957-a5fe-f84118663407.avro

{"manifest_path":"hdfs://ns1/dtInsight/hive/warehouse/zdk.db/sample8/metadata/9cfc5807-02d1-4cb4-a705-863982a58b05-m1.avro","manifest_length":7173,"partition_spec_id":1,"content":0,"sequence_number":5,"min_sequence_number":3,"added_snapshot_id":6402031869467028731,"added_data_files_count":0,"existing_data_files_count":2,"deleted_data_files_count":0,"added_rows_count":0,"existing_rows_count":2,"deleted_rows_count":0,"partitions":{"array":[{"contains_null":false,"contains_nan":{"boolean":false},"lower_bound":{"bytes":"pt3"},"upper_bound":{"bytes":"pt3"}},{"contains_null":false,"contains_nan":{"boolean":false},"lower_bound":{"bytes":"xc3"},"upper_bound":{"bytes":"xc3"}}]}}
{"manifest_path":"hdfs://ns1/dtInsight/hive/warehouse/zdk.db/sample8/metadata/f4d2b1bd-22d3-4957-a5fe-f84118663407-m0.avro","manifest_length":6888,"partition_spec_id":0,"content":0,"sequence_number":6,"min_sequence_number":6,"added_snapshot_id":3329158953122539010,"added_data_files_count":1,"existing_data_files_count":0,"deleted_data_files_count":0,"added_rows_count":1,"existing_rows_count":0,"deleted_rows_count":0,"partitions":{"array":[{"contains_null":false,"contains_nan":{"boolean":false},"lower_bound":{"bytes":"pt3"},"upper_bound":{"bytes":"pt3"}}]}}
{"manifest_path":"hdfs://ns1/dtInsight/hive/warehouse/zdk.db/sample8/metadata/9cfc5807-02d1-4cb4-a705-863982a58b05-m0.avro","manifest_length":6890,"partition_spec_id":0,"content":0,"sequence_number":5,"min_sequence_number":5,"added_snapshot_id":6402031869467028731,"added_data_files_count":1,"existing_data_files_count":0,"deleted_data_files_count":0,"added_rows_count":1,"existing_rows_count":0,"deleted_rows_count":0,"partitions":{"array":[{"contains_null":false,"contains_nan":{"boolean":false},"lower_bound":{"bytes":"pt3"},"upper_bound":{"bytes":"pt3"}}]}}
{"manifest_path":"hdfs://ns1/dtInsight/hive/warehouse/zdk.db/sample8/metadata/a56dfef2-4592-4d1d-a642-e16f4467dac2-m1.avro","manifest_length":7020,"partition_spec_id":0,"content":0,"sequence_number":3,"min_sequence_number":1,"added_snapshot_id":7048362733565878395,"added_data_files_count":0,"existing_data_files_count":3,"deleted_data_files_count":0,"added_rows_count":0,"existing_rows_count":4,"deleted_rows_count":0,"partitions":{"array":[{"contains_null":false,"contains_nan":{"boolean":false},"lower_bound":{"bytes":"pt2"},"upper_bound":{"bytes":"pt3"}}]}}

The manifest list above contains 4 manifest entries. Take one manifest_path, e.g. hdfs://ns1/dtInsight/hive/warehouse/zdk.db/sample8/metadata/9cfc5807-02d1-4cb4-a705-863982a58b05-m1.avro, and dump it with the same avro-tools tojson command:

{"status":0,"snapshot_id":{"long":32144809447347390},"sequence_number":{"long":4},"data_file":{"content":0,"file_path":"hdfs://ns1/dtInsight/hive/warehouse/zdk.db/sample8/data/category=pt3/name=xc3/00000-0-9f003a4c-d5c0-4b56-9b8f-aab4f1d3a42d-00001.parquet","file_format":"PARQUET","partition":{"category":{"string":"pt3"},"name":{"string":"xc3"}},"record_count":1,"file_size_in_bytes":1120,"column_sizes":{"array":[{"key":1,"value":46},{"key":2,"value":48},{"key":3,"value":50},{"key":4,"value":50}]},"value_counts":{"array":[{"key":1,"value":1},{"key":2,"value":1},{"key":3,"value":1},{"key":4,"value":1}]},"null_value_counts":{"array":[{"key":1,"value":0},{"key":2,"value":0},{"key":3,"value":0},{"key":4,"value":0}]},"nan_value_counts":{"array":[]},"lower_bounds":{"array":[{"key":1,"value":"\u0005\u0000\u0000\u0000\u0000\u0000\u0000\u0000"},{"key":2,"value":"6"},{"key":3,"value":"xc3"},{"key":4,"value":"pt3"}]},"upper_bounds":{"array":[{"key":1,"value":"\u0005\u0000\u0000\u0000\u0000\u0000\u0000\u0000"},{"key":2,"value":"6"},{"key":3,"value":"xc3"},{"key":4,"value":"pt3"}]},"key_metadata":null,"split_offsets":{"array":[4]},"equality_ids":null,"sort_order_id":{"int":0}}}
{"status":0,"snapshot_id":{"long":7048362733565878395},"sequence_number":{"long":3},"data_file":{"content":0,"file_path":"hdfs://ns1/dtInsight/hive/warehouse/zdk.db/sample8/data/category=pt3/name=xc3/00000-3-817aac21-1f4a-49ae-97aa-337b3c77a0f8-00001.parquet","file_format":"PARQUET","partition":{"category":{"string":"pt3"},"name":{"string":"xc3"}},"record_count":1,"file_size_in_bytes":1119,"column_sizes":{"array":[{"key":1,"value":45},{"key":2,"value":48},{"key":3,"value":50},{"key":4,"value":50}]},"value_counts":{"array":[{"key":1,"value":1},{"key":2,"value":1},{"key":3,"value":1},{"key":4,"value":1}]},"null_value_counts":{"array":[{"key":1,"value":0},{"key":2,"value":0},{"key":3,"value":0},{"key":4,"value":0}]},"nan_value_counts":{"array":[]},"lower_bounds":{"array":[{"key":1,"value":"\u0004\u0000\u0000\u0000\u0000\u0000\u0000\u0000"},{"key":2,"value":"5"},{"key":3,"value":"xc3"},{"key":4,"value":"pt3"}]},"upper_bounds":{"array":[{"key":1,"value":"\u0004\u0000\u0000\u0000\u0000\u0000\u0000\u0000"},{"key":2,"value":"5"},{"key":3,"value":"xc3"},{"key":4,"value":"pt3"}]},"key_metadata":null,"split_offsets":{"array":[4]},"equality_ids":null,"sort_order_id":{"int":0}}}

This manifest contains two data-file entries; each data file's path is recorded in the data_file.file_path field, e.g.:

hdfs://ns1/dtInsight/hive/warehouse/zdk.db/sample8/data/category=pt3/name=xc3/00000-0-9f003a4c-d5c0-4b56-9b8f-aab4f1d3a42d-00001.parquet
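All live data files can also be listed without touching Avro at all, via the files metadata table (a sketch under the same assumption):

SELECT file_path, record_count, file_size_in_bytes
FROM zdk.sample8.files;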

The manifest entry also records per-column statistics for each Parquet file: column sizes, value/null/NaN counts, and lower/upper bounds. (As a worked example: the id column's lower_bound bytes \u0005\u0000\u0000\u0000\u0000\u0000\u0000\u0000 above are the 8-byte little-endian encoding of the BIGINT value 5.)

[Figure: column statistics (column_sizes, value_counts, lower/upper bounds) recorded in the manifest entry]

How Iceberg actually uses these lower_bounds/upper_bounds statistics will be analyzed in a later post; as a quick preview, they let the scan planner skip whole files, as sketched below.
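A hedged sketch of the idea (not the later analysis itself): for a point predicate, any data file whose recorded bounds for the column exclude the value can be pruned without being read.

-- with the bounds above, files whose id range excludes 5 are skipped
SELECT * FROM zdk.sample8 WHERE id = 5;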
