Apache Doris Profile & Explain in Depth

1. Overview

Running EXPLAIN + SQL in Apache Doris produces the Query Plan for that SQL statement. Combined with the Apache Doris Profile, this shows how Doris actually processes the statement, which helps you locate performance bottlenecks in a query or schema, choose better indexes, and write better-optimized queries.

2. Plan Analysis

2.1 SQL preparation

Take tpcds query96.sql as an example:

explain
-- explain graph renders the execution plan as a diagram
select  count(*)
from store_sales
    ,household_demographics
    ,time_dim
    , store
where ss_sold_time_sk = time_dim.t_time_sk
    and ss_hdemo_sk = household_demographics.hd_demo_sk
    and ss_store_sk = s_store_sk
    and time_dim.t_hour = 8
    and time_dim.t_minute >= 30
    and household_demographics.hd_dep_count = 5
    and store.s_store_name = 'ese'
order by count(*) limit 100;

2.2 Analyzing the EXPLAIN output

A Query Plan can be divided into a logical plan (Logical Query Plan) and a physical plan (Physical Query Plan); in this article, "Query Plan" refers to the logical plan by default. The Query Plan for tpcds query96.sql is shown below.

-- graph 
                                         ┌───────────────┐
                                        │[8: ResultSink]│
                                        │[Fragment: 4]  │
                                        │RESULT SINK    │
                                        └───────────────┘
                                         ┌─────────────┐
                                         │[8: TOP-N]   │
                                         │[Fragment: 4]│
                                         └─────────────┘
                               ┌────────────────────────────────┐
                               │[13: AGGREGATE (merge finalize)]│
                               │[Fragment: 4]                   │
                               └────────────────────────────────┘
                                        ┌──────────────┐
                                        │[12: EXCHANGE]│
                                        │[Fragment: 4] │
                                        └──────────────┘
                                     ┌────────────────────┐
                                     │[12: DataStreamSink]│
                                     │[Fragment: 0]       │
                                     │STREAM DATA SINK    │
                                     │  EXCHANGE ID: 12   │
                                     │  UNPARTITIONED     │
                                     └────────────────────┘
                               ┌─────────────────────────────────┐
                               │[7: AGGREGATE (update serialize)]│
                               │[Fragment: 0]                    │
                               └─────────────────────────────────┘
                                ┌───────────────────────────────┐
                                │[6: HASH JOIN]                 │
                                │[Fragment: 0]                  │
                                │join op: INNER JOIN (BROADCAST)│
                                └───────────────────────────────┘
                                    ┌───────────┴─────────────────────────────────────┐
                                    │                                                 │
                    ┌───────────────────────────────┐                         ┌──────────────┐
                    │[4: HASH JOIN]                 │                         │[11: EXCHANGE]│
                    │[Fragment: 0]                  │                         │[Fragment: 0] │
                    │join op: INNER JOIN (BROADCAST)│                         └──────────────┘
                    └───────────────────────────────┘                                 │
                    ┌───────────────┴─────────────────────┐                           │
                    │                                     │                ┌────────────────────┐
    ┌───────────────────────────────┐             ┌──────────────┐         │[11: DataStreamSink]│
    │[2: HASH JOIN]                 │             │[10: EXCHANGE]│         │[Fragment: 3]       │
    │[Fragment: 0]                  │             │[Fragment: 0] │         │STREAM DATA SINK    │
    │join op: INNER JOIN (BROADCAST)│             └──────────────┘         │  EXCHANGE ID: 11   │
    └───────────────────────────────┘                     │                │  UNPARTITIONED     │
          ┌─────────┴──────────┐                          │                └────────────────────┘
          │                    │               ┌────────────────────┐                ┌┘
┌──────────────────┐    ┌─────────────┐        │[10: DataStreamSink]│                │
│[0: OlapScanNode] │    │[9: EXCHANGE]│        │[Fragment: 2]       │       ┌─────────────────┐
│[Fragment: 0]     │    │[Fragment: 0]│        │STREAM DATA SINK    │       │[5: OlapScanNode]│
│TABLE: store_sales│    └─────────────┘        │  EXCHANGE ID: 10   │       │[Fragment: 3]    │
└──────────────────┘           │               │  UNPARTITIONED     │       │TABLE: store     │
                               │               └────────────────────┘       └─────────────────┘
                     ┌───────────────────┐                │
                     │[9: DataStreamSink]│                │
                     │[Fragment: 1]      │ ┌─────────────────────────────┐
                     │STREAM DATA SINK   │ │[3: OlapScanNode]            │
                     │  EXCHANGE ID: 09  │ │[Fragment: 2]                │
                     │  UNPARTITIONED    │ │TABLE: household_demographics│
                     └───────────────────┘ └─────────────────────────────┘
                      ┌─────────────────┐
                      │[1: OlapScanNode]│
                      │[Fragment: 1]    │
                      │TABLE: time_dim  │
                      └─────────────────┘
-- non-graph (plain text) form
PLAN FRAGMENT 0
 OUTPUT EXPRS:<slot 11> <slot 10> count(*)
  PARTITION: UNPARTITIONED
  RESULT SINK
  8:TOP-N
  |  order by: <slot 11> <slot 10> count(*) ASC
  |  offset: 0
  |  limit: 100
  |  
  13:AGGREGATE (merge finalize)
  |  output: count(<slot 10> count(*))
  |  group by: 
  |  cardinality=-1
  |  
  12:EXCHANGE
PLAN FRAGMENT 1
 OUTPUT EXPRS:
  PARTITION: HASH_PARTITIONED: `default_cluster:tpcds`.`store_sales`.`ss_item_sk`, `default_cluster:tpcds`.`store_sales`.`ss_ticket_number`
  STREAM DATA SINK
    EXCHANGE ID: 12
    UNPARTITIONED
  7:AGGREGATE (update serialize)
  |  output: count(*)
  |  group by: 
  |  cardinality=1
  |  
  6:HASH JOIN
  |  join op: INNER JOIN (BROADCAST)
  |  hash predicates:
  |  colocate: false, reason: Tables are not in the same group
  |  equal join conjunct: `ss_store_sk` = `s_store_sk`
  |  runtime filters: RF000[in] <- `s_store_sk`
  |  cardinality=2880403
  |  
  |----11:EXCHANGE
  |    
  4:HASH JOIN
  |  join op: INNER JOIN (BROADCAST)
  |  hash predicates:
  |  colocate: false, reason: Tables are not in the same group
  |  equal join conjunct: `ss_hdemo_sk` = `household_demographics`.`hd_demo_sk`
  |  runtime filters: RF001[in] <- `household_demographics`.`hd_demo_sk`
  |  cardinality=2880403
  |  
  |----10:EXCHANGE
  |    
  2:HASH JOIN
  |  join op: INNER JOIN (BROADCAST)
  |  hash predicates:
  |  colocate: false, reason: Tables are not in the same group
  |  equal join conjunct: `ss_sold_time_sk` = `time_dim`.`t_time_sk`
  |  runtime filters: RF002[in] <- `time_dim`.`t_time_sk`
  |  cardinality=2880403
  |  
  |----9:EXCHANGE
  |    
  0:OlapScanNode
     TABLE: store_sales
     PREAGGREGATION: OFF. Reason: conjunct on `ss_sold_time_sk` which is StorageEngine value column
     PREDICATES: `default_cluster:tpcds.store_sales`.`__DORIS_DELETE_SIGN__` = 0
     runtime filters: RF000[in] -> `ss_store_sk`, RF001[in] -> `ss_hdemo_sk`, RF002[in] -> `ss_sold_time_sk`
     partitions=1/1
     rollup: store_sales
     tabletRatio=3/3
     tabletList=20968,20972,20976
     cardinality=2880403
     avgRowSize=67.95811
     numNodes=3
PLAN FRAGMENT 2
 OUTPUT EXPRS:
  PARTITION: HASH_PARTITIONED: `default_cluster:tpcds`.`store`.`s_store_sk`
  STREAM DATA SINK
    EXCHANGE ID: 11
    UNPARTITIONED
  5:OlapScanNode
     TABLE: store
     PREAGGREGATION: OFF. Reason: null
     PREDICATES: `store`.`s_store_name` = 'ese', `default_cluster:tpcds.store`.`__DORIS_DELETE_SIGN__` = 0
     partitions=1/1
     rollup: store
     tabletRatio=3/3
     tabletList=20773,20777,20781
     cardinality=23
     avgRowSize=1798.8695
     numNodes=3
PLAN FRAGMENT 3
 OUTPUT EXPRS:
  PARTITION: HASH_PARTITIONED: `default_cluster:tpcds`.`household_demographics`.`hd_demo_sk`
  STREAM DATA SINK
    EXCHANGE ID: 10
    UNPARTITIONED
  3:OlapScanNode
     TABLE: household_demographics
     PREAGGREGATION: OFF. Reason: null
     PREDICATES: `household_demographics`.`hd_dep_count` = 5, `default_cluster:tpcds.household_demographics`.`__DORIS_DELETE_SIGN__` = 0
     partitions=1/1
     rollup: household_demographics
     tabletRatio=3/3
     tabletList=20848,20852,20856
     cardinality=14399
     avgRowSize=2.8781166
     numNodes=3
PLAN FRAGMENT 4
 OUTPUT EXPRS:
  PARTITION: HASH_PARTITIONED: `default_cluster:tpcds`.`time_dim`.`t_time_sk`
  STREAM DATA SINK
    EXCHANGE ID: 09
    UNPARTITIONED
  1:OlapScanNode
     TABLE: time_dim
     PREAGGREGATION: OFF. Reason: null
     PREDICATES: `time_dim`.`t_hour` = 8, `time_dim`.`t_minute` >= 30, `default_cluster:tpcds.time_dim`.`__DORIS_DELETE_SIGN__` = 0
     partitions=1/1
     rollup: time_dim
     tabletRatio=3/3
     tabletList=20713,20717,20721
     cardinality=172799
     avgRowSize=11.671202
     numNodes=3

2.2.1 Common attributes

Colocate Join suits scenarios where several tables are bucketed on the same column and are frequently joined on that column. For example, many e-commerce applications bucket their tables by merchant ID and join on merchant ID at high frequency.
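As a minimal sketch of how such colocation is declared (the table names, columns, and group name here are illustrative, not from query96), both tables join the same colocation group with an identical bucketing scheme:

```sql
-- Both tables declare the same colocation group and the same
-- bucketing column and bucket count, so joins on merchant_id
-- can be executed locally on each node.
CREATE TABLE orders (
    merchant_id BIGINT,
    order_id    BIGINT,
    amount      DECIMAL(10, 2)
)
DUPLICATE KEY(merchant_id, order_id)
DISTRIBUTED BY HASH(merchant_id) BUCKETS 8
PROPERTIES ("colocate_with" = "merchant_group");

CREATE TABLE merchants (
    merchant_id BIGINT,
    name        VARCHAR(64)
)
DUPLICATE KEY(merchant_id)
DISTRIBUTED BY HASH(merchant_id) BUCKETS 8
PROPERTIES ("colocate_with" = "merchant_group");
```

With both tables in the same group, an EXPLAIN of a join on merchant_id should report `colocate: true` instead of a broadcast or shuffle exchange.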

2.2.2 Plan analysis

  • Query96's Query Plan is split into five Plan Fragments, numbered 0 through 4.
  • A Query Plan is best read bottom-up, analyzing one fragment at a time.
  • The bottom-most Plan Fragment is Fragment 4, so start the analysis there.
  • It is mainly responsible for scanning the time_dim table and applying the relevant filter conditions early, i.e. predicate pushdown.
  • For Aggregate Key tables, Doris decides per query whether to enable PREAGGREGATION. In the plan above, pre-aggregation is OFF for time_dim; in that state all of time_dim's dimension columns are read, which can become a key performance factor when the table has many dimension columns.
  • If the time_dim table used Range Partitioning, the partitions field in the Query Plan would show how many partitions the query hits; irrelevant partitions are pruned automatically, which effectively reduces the amount of data scanned.
  • If materialized views exist, Doris automatically selects one based on the query; otherwise the query hits the base table, which is what rollup: time_dim above indicates. See the Doris materialized view tests for details.
  • Once the time_dim scan completes, Fragment 4 finishes and hands the scanned data to other fragments; EXCHANGE ID: 09 means the data is sent to the receiving node numbered 9, which can be traced in the graph output.
  • In Query96's Query Plan, Fragments 2, 3, and 4 play similar roles and differ only in which table they scan; the query's Order/Aggregation/Join operators all run in Fragment 1, so Fragment 1 deserves the closest look.
  • Fragment 1 runs all three Join operators, each using the default BROADCAST strategy, i.e. the smaller table is broadcast to the larger one. If both sides of a join are large tables, the SHUFFLE strategy is recommended instead.
  • Doris currently supports only HASH JOIN, i.e. joins executed via a hash algorithm.
  • The colocate field indicates whether the two joined tables share the same partitioning/bucketing scheme; when they do, the join can be executed locally without moving any data.
  • After the joins complete, the upper-level Aggregation, Order by, and TOP-N operators run.
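The broadcast-versus-shuffle choice described above can also be forced with a join hint. As a hedged sketch (assuming a Doris version that accepts bracketed join hints), the largest join in query96 could be pinned to a shuffle join like this:

```sql
-- [shuffle] forces a partitioned (shuffle) join in place of the
-- default broadcast; [broadcast] forces the opposite.
EXPLAIN
SELECT count(*)
FROM store_sales JOIN [shuffle] time_dim
    ON ss_sold_time_sk = time_dim.t_time_sk
WHERE time_dim.t_hour = 8
  AND time_dim.t_minute >= 30;
```

Comparing the EXPLAIN output with and without the hint shows how the exchange nodes change between the two strategies.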

3. Doris Profile Overview

Query execution details can be viewed in the QueryProfile module of the FE web UI on port 8030. Below is part of the QueryProfile from an actual run of query96.sql; for the meaning of each metric, see "Apache Doris Query Analysis".
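Note that a profile is only collected when reporting is enabled for the session. As a minimal sketch (the session variable is named is_report_success in the 0.15 line shown below; later Doris versions renamed it enable_profile):

```sql
-- Ask the BEs to report runtime profiles back to the FE.
-- Doris 0.x uses is_report_success; newer releases use enable_profile.
SET is_report_success = true;

-- Run the query to be inspected; its profile then appears in the
-- FE web UI (port 8030) under the QueryProfile page.
SELECT count(*) FROM store_sales;
```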

Query:
    Summary:
          -  Query  ID:  7dd4ba245012441c-b0aadbed39f80f20
          -  Start  Time:  2022-04-15  15:52:22
          -  End  Time:  2022-04-15  15:52:22
          -  Total:  611ms
          -  Query  Type:  Query
          -  Query  State:  EOF
          -  Doris  Version:  0.15.0-rc04
          -  User:  root
          -  Default  Db:  default_cluster:tpcds
          -  Sql  Statement:  /*  ApplicationName=DBeaver  Enterprise  7.0.0  -  SQLEditor  <20220321常用命令-doris.sql>  */  select    count(*)
from  store_sales
        ,household_demographics
        ,time_dim
        ,  store
where  ss_sold_time_sk  =  time_dim.t_time_sk
        and  ss_hdemo_sk  =  household_demographics.hd_demo_sk
        and  ss_store_sk  =  s_store_sk
        and  time_dim.t_hour  =  8
        and  time_dim.t_minute  >=  30
        and  household_demographics.hd_dep_count  =  5
        and  store.s_store_name  =  'ese'
order  by  count(*)  limit  100
          -  Is  Cached:  No
        Execution  Summary:
              -  Analysis  Time:  636.648us
              -  Plan  Time:  19.230ms
              -  Schedule  Time:  125.121ms
              -  Wait  and  Fetch  Result  Time:  466.30ms
    Execution  Profile  7dd4ba245012441c-b0aadbed39f80f20:(Active:  611.44ms,  %  non-child:  100.00%)
        Fragment  0:
            Instance  7dd4ba245012441c-b0aadbed39f80f2d  (host=TNetworkAddress(hostname:10.192.119.70,  port:9060)):(Active:  586.950ms,  %  non-child:  0.00%)
                  -  FragmentCpuTime:  756.962us
                  -  MemoryLimit:  2.00  GB
                  -  PeakMemoryUsage:  48.01  KB
                  -  PeakReservation:  0.00  
                  -  PeakUsedReservation:  0.00  
                  -  RowsProduced:  1
                BlockMgr:
                      -  BlockWritesOutstanding:  0
                      -  BlocksCreated:  0
                      -  BlocksRecycled:  0
                      -  BufferedPins:  0
                      -  BytesWritten:  0.00  
                      -  MaxBlockSize:  8.00  MB
                      -  TotalBufferWaitTime:  0ns
                      -  TotalEncryptionTime:  0ns
                      -  TotalIntegrityCheckTime:  0ns
                      -  TotalReadBlockTime:  0ns
                DataBufferSender  (dst_fragment_instance_id=7dd4ba245012441c-b0aadbed39f80f2d):
                      -  AppendBatchTime:  124.481us
                          -  ResultSendTime:  119.257us
                          -  TupleConvertTime:  4.217us
                      -  NumSentRows:  1
                SORT_NODE  (id=8):(Active:  587.36ms,  %  non-child:  0.01%)
                      -  PeakMemoryUsage:  16.00  KB
                      -  RowsReturned:  1
                      -  RowsReturnedRate:  1
                    AGGREGATION_NODE  (id=13):(Active:  586.958ms,  %  non-child:  0.10%)
                          -  Probe  Method:  HashTable  Linear  Probing
                          -  BuildTime:  10.533us
                          -  GetResultsTime:  0ns
                          -  HTResize:  0
                          -  HTResizeTime:  0ns
                          -  HashBuckets:  0
                          -  HashCollisions:  0
                          -  HashFailedProbe:  0
                          -  HashFilledBuckets:  0
                          -  HashProbe:  0
                          -  HashTravelLength:  0
                          -  LargestPartitionPercent:  0
                          -  MaxPartitionLevel:  0
                          -  NumRepartitions:  0
                          -  PartitionsCreated:  0
                          -  PeakMemoryUsage:  28.00  KB
                          -  RowsProcessed:  0
                          -  RowsRepartitioned:  0
                          -  RowsReturned:  1
                          -  RowsReturnedRate:  1
                          -  SpilledPartitions:  0
                        EXCHANGE_NODE  (id=12):(Active:  586.364ms,  %  non-child:  95.96%)
                              -  BytesReceived:  32.00  B
                              -  ConvertRowBatchTime:  7.320us
                              -  DataArrivalWaitTime:  586.282ms
                              -  DeserializeRowBatchTimer:  22.637us
                              -  FirstBatchArrivalWaitTime:  349.530ms
                              -  PeakMemoryUsage:  12.01  KB
                              -  RowsReturned:  3
                              -  RowsReturnedRate:  5
                              -  SendersBlockedTotalTimer(*):  0ns
        Fragment  1:
            Instance  7dd4ba245012441c-b0aadbed39f80f23  (host=TNetworkAddress(hostname:10.192.119.68,  port:9060)):(Active:  472.511ms,  %  non-child:  0.10%)
                  -  FragmentCpuTime:  5.714ms
                  -  MemoryLimit:  2.00  GB
                  -  PeakMemoryUsage:  610.00  KB
                  -  PeakReservation:  0.00  
                  -  PeakUsedReservation:  0.00  
                  -  RowsProduced:  1
                BlockMgr:
                      -  BlockWritesOutstanding:  0
                      -  BlocksCreated:  0
                      -  BlocksRecycled:  0
                      -  BufferedPins:  0
                      -  BytesWritten:  0.00  
                      -  MaxBlockSize:  8.00  MB
                      -  TotalBufferWaitTime:  0ns
                      -  TotalEncryptionTime:  0ns
                      -  TotalIntegrityCheckTime:  0ns
                      -  TotalReadBlockTime:  0ns
                DataStreamSender  (dst_id=12,  dst_fragments=[7dd4ba245012441c-b0aadbed39f80f2d]):(Active:  186.357us,  %  non-child:  0.03%)
                      -  BytesSent:  16.00  B
                      -  IgnoreRows:  0
                      -  LocalBytesSent:  0.00  
                      -  OverallThroughput:  83.84375  KB/sec
                      -  PeakMemoryUsage:  16.00  KB
                      -  SerializeBatchTime:  7.0us
                      -  UncompressedRowBatchSize:  16.00  B
                AGGREGATION_NODE  (id=7):(Active:  471.713ms,  %  non-child:  0.14%)
                      -  Probe  Method:  HashTable  Linear  Probing
                      -  BuildTime:  45.223us
                      -  GetResultsTime:  0ns
                      -  HTResize:  0
                      -  HTResizeTime:  0ns
                      -  HashBuckets:  0
                      -  HashCollisions:  0
                      -  HashFailedProbe:  0
                      -  HashFilledBuckets:  0
                      -  HashProbe:  0
                      -  HashTravelLength:  0
                      -  LargestPartitionPercent:  0
                      -  MaxPartitionLevel:  0
                      -  NumRepartitions:  0
                      -  PartitionsCreated:  0
                      -  PeakMemoryUsage:  280.00  KB
                      -  RowsProcessed:  0
                      -  RowsRepartitioned:  0
                      -  RowsReturned:  1
                      -  RowsReturnedRate:  2
                      -  SpilledPartitions:  0
                    HASH_JOIN_NODE  (id=6):(Active:  470.881ms,  %  non-child:  0.08%)
                          -  ExecOption:  Hash  Table  Built  Asynchronously
                          -  BuildBuckets:  1.024K  (1024)
                          -  BuildRows:  1
                          -  BuildTime:  1.129ms
                          -  HashTableMaxList:  1
                          -  HashTableMinList:  1
                          -  LoadFactor:  4562146422526312400.00
                          -  PeakMemoryUsage:  308.00  KB
                          -  ProbeRows:  341
                          -  ProbeTime:  34.697us
                          -  PushDownComputeTime:  156.171us
                          -  PushDownTime:  4.423us
                          -  RowsReturned:  341
                          -  RowsReturnedRate:  724
  • Active: the execution time of this node, including all of its child nodes
  • BuildTime: time spent scanning the right-hand table and building the hash table
  • ProbeTime: time spent reading the left-hand table, probing the hash table for matches, and producing output