HybridDB · Performance Optimization · Several Ways Count Distinct Is Implemented


Preface

Recently, several customers running performance tests on HybridDB have hit the same tuning problem: Count Distinct. This article summarizes the ways HybridDB processes Count Distinct.

We will use one customer's case as the running example. Their typical business scenario is counting actions per category in a user-behavior log: there are a few thousand categories, and the total number of distinct actions is large, in the tens of millions. To analyze behavior, they count the distinct actions per category within a time window, using the following query (the DDL for the test table is given in the appendix):

select category, count(distinct actionId) as ct from test_user_log
where receivetime between '2017-03-07 11:00:00' and '2017-03-07 12:00:00' group by category
order by ct desc limit 10;

Using this query, let's look at how Count Distinct is processed.

The Basic Processing of Count Distinct

Run explain analyze to see how the query executes:

test=# explain analyze select category, count(distinct actionId) as ct from test_user_log
where receivetime between '2017-03-07 11:00:00' and '2017-03-07 12:00:00' group by category
order by ct desc limit 10;
                                                                                            QUERY PLAN

----------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..431.00 rows=10 width=16)
 Gather Motion 16:1  (slice2; segments: 16)  (cost=5968.98..5968.99 rows=1 width=40)
   Merge Key: ct
   Rows out:  745 rows at destination with 2469 ms to end, start offset by 77 ms.
   ->  Sort  (cost=5968.98..5968.99 rows=1 width=40)
         Sort Key: ct
         Rows out:  Avg 46.6 rows x 16 workers.  Max 55 rows (seg0) with 2461 ms to end, start offset by 85 ms.
         Executor memory:  58K bytes avg, 58K bytes max (seg0).
         Work_mem used:  58K bytes avg, 58K bytes max (seg0). Workfile: (0 spilling, 0 reused)
         ->  GroupAggregate  (cost=5968.94..5968.97 rows=1 width=40)
               Group By: public.test_user_log.category
               Rows out:  Avg 46.6 rows x 16 workers.  Max 55 rows (seg0) with 2460 ms to end, start offset by 85 ms.
               ->  Sort  (cost=5968.94..5968.94 rows=1 width=40)
                     Sort Key: public.test_user_log.category
                     Rows out:  Avg 461.6 rows x 16 workers.  Max 572 rows (seg4) with 2458 ms to end, start offset by 88 ms.
                     Executor memory:  85K bytes avg, 145K bytes max (seg4).
                     Work_mem used:  85K bytes avg, 145K bytes max (seg4). Workfile: (0 spilling, 0 reused)
                     ->  Redistribute Motion 16:16  (slice1; segments: 16)  (cost=5960.60..5968.93 rows=1 width=40)
                           Hash Key: public.test_user_log.category
                           Rows out:  Avg 461.6 rows x 16 workers at destination.  Max 572 rows (seg4) with 2316 ms to first row, 2458 ms to end, start offset by 88 ms.
                           ->  GroupAggregate  (cost=5960.60..5968.91 rows=1 width=40)
                                 Group By: public.test_user_log.category
                                 Rows out:  Avg 461.6 rows x 16 workers.  Max 472 rows (seg7) with 536 ms to first row, 2455 ms to end, start offset by 89 ms.
                                 Executor memory:  318587K bytes avg, 330108K bytes max (seg7).
                                 Work_mem used:  8544K bytes avg, 8544K bytes max (seg0).
                                 Work_mem wanted: 8414K bytes avg, 8472K bytes max (seg14) to lessen workfile I/O affecting 16 workers.
                                 ->  Sort  (cost=5960.60..5963.37 rows=70 width=64)
                                       Sort Key: public.test_user_log.category
                                       Rows out:  Avg 367982.3 rows x 16 workers.  Max 369230 rows (seg8) with 527 ms to first row, 625 ms to end, start offset by 90 ms.
                                       Executor memory:  61433K bytes avg, 61433K bytes max (seg0).
                                       Work_mem used:  61433K bytes avg, 61433K bytes max (seg0). Workfile: (0 spilling, 0 reused)
                                       ->  Append  (cost=0.00..5904.72 rows=70 width=64)
                                              Rows out:  Avg 367982.3 rows x 16 workers.  Max 369230 rows (seg8) with 2.710 ms to first row, 265 ms to end, start offset by 91 ms.
                                             ->  Append-only Columnar Scan on test_user_log_1_prt_usual test_user_log  (cost=0.00..0.00 rows=1 width=64)
                                                    Filter: receivetime >= '2017-03-07 11:00:00'::timestamp without time zone AND receivetime <= '2017-03-07 12:00:00'::timestamp without time zone
                                                   Rows out:  0 rows (seg0) with 0.542 ms to end, start offset by 93 ms.
                                             ->  Append-only Columnar Scan on test_user_log_1_prt_157 test_user_log  (cost=0.00..3134.45 rows=37 width=64)
                                                    Filter: receivetime >= '2017-03-07 11:00:00'::timestamp without time zone AND receivetime <= '2017-03-07 12:00:00'::timestamp without time zone
                                                    Rows out:  Avg 367882.1 rows x 16 workers.  Max 369131 rows (seg8) with 2.178 ms to first row, 132 ms to end, start offset by 91 ms.
                                             ->  Append-only Columnar Scan on test_user_log_1_prt_158 test_user_log  (cost=0.00..2770.27 rows=33 width=64)
                                                    Filter: receivetime >= '2017-03-07 11:00:00'::timestamp without time zone AND receivetime <= '2017-03-07 12:00:00'::timestamp without time zone
                                                    Rows out:  Avg 100.2 rows x 16 workers.  Max 124 rows (seg11) with 2.135 ms to first row, 73 ms to end, start offset by 394 ms.
 Slice statistics:
   Settings:  effective_cache_size=8GB; gp_statistics_use_fkeys=on; optimizer=off
 Optimizer status: legacy query optimizer
 Total runtime: 2546.862 ms
(51 rows)


As you can see, this seemingly simple query has a fairly involved execution flow, roughly as follows:

Scan (Columnar Scan + Append) -> Sort(category) -> Group by(category) -> Redistribute -> Sort(category) -> Group by(category) -> Sort -> Gather

The per-node execution times show that most of the time goes into the first three steps; once they finish, the intermediate result is down to only a few hundred rows, so those three steps deserve the most attention. Their logic is easy to follow: the scanned rows are sorted, and the sorted output is then scanned once more while the aggregate is computed.

One detail worth noting: the table test_user_log is distributed by actionId, so rows with the same actionId always land on the same segment. After each segment groups its local rows by category, each group records the distinct actionId values that appear in it, and the final aggregate output is each category together with the corresponding count of actionIds.
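
To double-check a table's distribution key, you can query the catalog. Below is a minimal sketch assuming the Greenplum 4.x/5-style catalog that HybridDB is built on, where gp_distribution_policy records the distribution columns by attribute number:

-- Sketch (assumes the Greenplum 4.x/5 catalog layout used by HybridDB):
-- list the distribution-key column(s) of test_user_log.
select a.attname
from gp_distribution_policy p
join pg_attribute a
  on a.attrelid = p.localoid and a.attnum = any (p.attrnums)
where p.localoid = 'test_user_log'::regclass;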

This raises a question: category has very few distinct values (only a few hundred), which makes it an ideal candidate for hash aggregation. Why did the planner choose a sort-based strategy instead of hashing?

Look at the estimate for the partition test_user_log_1_prt_157 in the plan above ((cost=0.00..3134.45 rows=37 width=64)): the planner expected only 37 rows, while the actual output was Rows out: Avg 367882.1 rows, i.e. over 360,000. The table's statistics are clearly out of date.
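
Before re-collecting statistics, you can compare the planner's idea of partition size with reality. A minimal check using the standard pg_class catalog (reltuples is the row-count estimate the planner works from):

-- Sketch: row counts the planner currently believes for the two hot partitions.
select relname, reltuples::bigint as estimated_rows
from pg_class
where relname in ('test_user_log_1_prt_157', 'test_user_log_1_prt_158');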

Run Analyze to refresh the statistics:

test=# analyze test_user_log_1_prt_157;
ANALYZE
test=# analyze test_user_log_1_prt_158;
ANALYZE

With fresh statistics, running the query again indeed produces a different plan: the sort-based aggregation has turned into hash aggregation! The runtime also drops from 2546.862 ms to 2099.144 ms.

                                                                                               QUERY PLAN

----------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..431.00 rows=10 width=16)
   Gather Motion 16:1  (slice2; segments: 16)  (cost=320695.70..320695.92 rows=10 width=40)
         Merge Key: ct
         ->  Limit  (cost=320695.70..320695.72 rows=1 width=40)
               ->  Sort  (cost=320695.70..320695.94 rows=7 width=40)
                     Sort Key (Limit): ct
                     ->  HashAggregate  (cost=320691.23..320692.46 rows=7 width=40)
                           Group By: partial_aggregation.category
                           ->  HashAggregate  (cost=303756.92..311454.34 rows=38488 width=64)
                                 Group By: public.test_user_log.category, public.test_user_log.actionId
                                 ->  Redistribute Motion 16:16  (slice1; segments: 16)  (cost=280664.69..292980.55 rows=38488 width=64)
                                       Hash Key: public.test_user_log.category
                                       ->  HashAggregate  (cost=280664.69..280664.69 rows=38488 width=64)
                                             Group By: public.test_user_log.category, public.test_user_log.actionId
                                             ->  Append  (cost=0.00..236527.23 rows=367813 width=43)
                                                   ->  Append-only Columnar Scan on test_user_log_1_prt_usual test_user_log  (cost=0.00..0.00 rows=1 width=64)
                                                          Filter: receivetime >= '2017-03-07 11:00:00'::timestamp without time zone AND receivetime <= '2017-03-07 12:00:00'::timestamp without time zone
                                                   ->  Append-only Columnar Scan on test_user_log_1_prt_157 test_user_log  (cost=0.00..124267.69 rows=367813 width=42)
                                                          Filter: receivetime >= '2017-03-07 11:00:00'::timestamp without time zone AND receivetime <= '2017-03-07 12:00:00'::timestamp without time zone
                                                   ->  Append-only Columnar Scan on test_user_log_1_prt_158 test_user_log  (cost=0.00..112259.54 rows=1 width=42)
                                                          Filter: receivetime >= '2017-03-07 11:00:00'::timestamp without time zone AND receivetime <= '2017-03-07 12:00:00'::timestamp without time zone

The main flow of this plan is shown below. Both Group by(Hash(category, actionId)) steps exist to remove duplicates, guaranteeing that each (category, actionId) combination appears only once.

Scan (Columnar Scan + Append) -> Group by(Hash(category,actionId)) -> Redistribute(category) -> Group by(Hash(category, actionId)) -> Group by(Hash(category)) -> Sort -> Gather
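
The two-level deduplication in this plan corresponds roughly to the following hand-written rewrite; this is only an illustrative sketch of what the optimizer derives internally, not something you need to run:

-- Equivalent two-phase form: first deduplicate (category, actionId) pairs,
-- then count the surviving rows per category.
-- (Assumes actionId is never NULL; count(distinct ...) ignores NULLs,
-- while a NULL group produced by GROUP BY would still be counted.)
select category, count(*) as ct
from (
    select category, actionId
    from test_user_log
    where receivetime between '2017-03-07 11:00:00' and '2017-03-07 12:00:00'
    group by category, actionId
) dedup
group by category
order by ct desc limit 10;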

Are there other possible strategies? HybridDB also ships the newer orca optimizer, which considers a broader space of execution plans. Let's have orca generate the plan:

test=# set optimizer=on;
SET
test=# explain analyze select category, count(distinct actionId) as ct from test_user_log
where receivetime between '2017-03-07 11:00:00' and '2017-03-07 12:00:00' group by category
order by ct desc limit 10;
                                                                                                  QUERY PLAN

----------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..431.00 rows=1 width=16)
   Rows out:  10 rows with 2690 ms to end, start offset by 0.500 ms.
   ->  Gather Motion 16:1  (slice2; segments: 16)  (cost=0.00..431.00 rows=1 width=16)
         Merge Key: ct
         Rows out:  10 rows at destination with 2690 ms to end, start offset by 0.501 ms.
         ->  Sort  (cost=0.00..431.00 rows=1 width=16)
               Sort Key: ct
               Rows out:  Avg 46.6 rows x 16 workers.  Max 55 rows (seg0) with 2688 ms to end, start offset by 2.356 ms.
               Executor memory:  33K bytes avg, 33K bytes max (seg0).
               Work_mem used:  33K bytes avg, 33K bytes max (seg0). Workfile: (0 spilling, 0 reused)
               ->  GroupAggregate  (cost=0.00..431.00 rows=1 width=16)
                     Group By: category
                     Rows out:  Avg 46.6 rows x 16 workers.  Max 55 rows (seg0) with 2687 ms to first row, 2688 ms to end, start offset by 2.372 ms.
                     ->  Sort  (cost=0.00..431.00 rows=1 width=16)
                           Sort Key: category
                           Rows out:  Avg 461.6 rows x 16 workers.  Max 572 rows (seg4) with 2687 ms to end, start offset by 3.104 ms.
                           Executor memory:  85K bytes avg, 145K bytes max (seg4).
                           Work_mem used:  85K bytes avg, 145K bytes max (seg4). Workfile: (0 spilling, 0 reused)
                           ->  Redistribute Motion 16:16  (slice1; segments: 16)  (cost=0.00..431.00 rows=1 width=16)
                                 Hash Key: category
                                  Rows out:  Avg 461.6 rows x 16 workers at destination.  Max 572 rows (seg4) with 2442 ms to first row, 2687 ms to end, start offset by 3.113 ms.
                                 ->  Result  (cost=0.00..431.00 rows=1 width=16)
                                       Rows out:  Avg 461.6 rows x 16 workers.  Max 472 rows (seg7) with 1070 ms to first row, 2583 ms to end, start offset by 3.898 ms.
                                       ->  GroupAggregate  (cost=0.00..431.00 rows=1 width=16)
                                             Group By: category
                                             Rows out:  Avg 461.6 rows x 16 workers.  Max 472 rows (seg7) with 1070 ms to first row, 2583 ms to end, start offset by 3.898 ms.
                                             Executor memory:  316808K bytes avg, 328245K bytes max (seg7).
                                             Work_mem used:  8184K bytes avg, 8184K bytes max (seg0).
                                             Work_mem wanted: 8414K bytes avg, 8472K bytes max (seg14) to lessen workfile I/O affecting 16 workers.
                                             ->  Sort  (cost=0.00..431.00 rows=1 width=16)
                                                   Sort Key: category, actionId
                                                    Rows out:  Avg 367982.3 rows x 16 workers.  Max 369230 rows (seg8) with 1064 ms to first row, 1143 ms to end, start offset by 3.812 ms.
                                                   Executor memory:  143353K bytes avg, 143353K bytes max (seg0).
                                                   Work_mem used:  143353K bytes avg, 143353K bytes max (seg0). Workfile: (0 spilling, 0 reused)
                                                   ->  Sequence  (cost=0.00..431.00 rows=1 width=24)
                                                          Rows out:  Avg 367982.3 rows x 16 workers.  Max 369230 rows (seg8) with 1.905 ms to first row, 241 ms to end, start offset by 4.032 ms.
                                                         ->  Partition Selector for test_user_log (dynamic scan id: 1)  (cost=10.00..100.00 rows=7 width=4)
                                                                Filter: receivetime >= '2017-03-07 11:00:00'::timestamp without time zone AND receivetime <= '2017-03-07 12:00:00'::timestamp without time zone
                                                               Partitions selected:  3 (out of 745)
                                                               Rows out:  0 rows (seg0) with 0.013 ms to end, start offset by 4.033 ms.
                                                         ->  Dynamic Table Scan on test_user_log (dynamic scan id: 1)  (cost=0.00..431.00 rows=1 width=24)
                                                                Filter: receivetime >= '2017-03-07 11:00:00'::timestamp without time zone AND receivetime <= '2017-03-07 12:00:00'::timestamp without time zone
                                                                Rows out:  Avg 367982.3 rows x 16 workers.  Max 369230 rows (seg8) with 1.891 ms to first row, 202 ms to end, start offset by 4.046 ms.
                                                               Partitions scanned:  Avg 3.0 (out of 745) x 16 workers.  Max 3 parts (seg0).
 Slice statistics:
   (slice0)    Executor memory: 351K bytes.
   (slice1)  * Executor memory: 152057K bytes avg x 16 workers, 152057K bytes max (seg0).  Work_mem: 143353K bytes max, 8472K bytes wanted.
   (slice2)    Executor memory: 363K bytes avg x 16 workers, 423K bytes max (seg4).  Work_mem: 145K bytes max.
 Statement statistics:
   Memory used: 2047000K bytes
   Memory wanted: 34684K bytes
 Settings:  effective_cache_size=8GB; gp_statistics_use_fkeys=on; optimizer=on
 Optimizer status: PQO version 1.609
 Total runtime: 2690.675 ms
(54 rows)

 

The plan generated by orca goes back to Sort + GroupAggregate for the aggregation. (This is because our Analyze only refreshed the statistics of the child partitions, while orca only considers statistics on the root table; to make orca's plan switch to the hash strategy, you would need to Analyze the root table. We won't pursue that here.) Still, the orca plan differs substantially from the default optimizer's. It follows this flow:

Scan (Dynamic Scan) -> Sort (category, actionId) -> Group by (category) -> Redistribute -> Sort (category) -> Group by(category) -> Sort -> Gather

Note that the first Sort uses the two-column combination (category, actionId), but the subsequent Group by uses only category! This is a special aggregation strategy: for each distinct category it only needs to keep a running count of actionIds, instead of retaining every distinct actionId value per category as the default optimizer's plan does, which saves the time of building a hash-table-like structure. However, because the Sort runs on two columns, it costs more than sorting on one, so this plan ends up slower overall than the one from the default optimizer.
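
As noted in the parenthesis above, nudging orca toward the hash strategy would first require refreshing statistics on the root table; the step we skipped would simply be:

test=# analyze test_user_log;
ANALYZE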

Further Discussion

In the example above, the table happens to be distributed by the very column inside Count Distinct (actionId). With a different distribution key the query plans would look different, but the underlying principles are similar.
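
For instance, a hypothetical variant of the table distributed by category (not part of the original test) would co-locate each category's rows on one segment, so the redistribution step could be avoided, at the risk of data skew across segments:

-- Hypothetical: same columns, but distributed by category instead of actionId.
create table test_user_log_by_category (like test_user_log)
with (APPENDONLY=true, ORIENTATION=column)
distributed by (category);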

We also have not covered queries with Count Distinct on multiple columns; readers are encouraged to experiment with that themselves.
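
For instance, an illustrative shape (column names taken from the DDL in the appendix):

-- Two Count Distincts in one query; the resulting plans differ again.
select category,
       count(distinct actionId) as action_ct,
       count(distinct code)     as code_ct
from test_user_log
where receivetime between '2017-03-07 11:00:00' and '2017-03-07 12:00:00'
group by category;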

Appendix

  • Table DDL
create table test_user_log
(
        actionId text,
        code text,
        receiveTime timestamp,
        gmtCreate timestamp,
        category text,
        version text,
        tag text,
        siteId int4
)
with (APPENDONLY=true, ORIENTATION=column, BLOCKSIZE=524288)
distributed by (actionId)
PARTITION BY RANGE (receivetime)
-- END assumed to be '2017-04-07': an END equal to START would define no range,
-- and the plans above show 745 partitions (744 hourly partitions plus the default).
(START ('2017-03-07') INCLUSIVE END ('2017-04-07') EXCLUSIVE EVERY (INTERVAL '1 hour'), DEFAULT PARTITION usual);
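
To reproduce the experiments you also need data. The loader below is purely hypothetical, with row counts and value ranges invented to roughly match the cardinalities described above (a few thousand categories, a large number of distinct actions):

-- Hypothetical sample-data loader; adjust the volume to taste.
insert into test_user_log (actionId, code, receivetime, category)
select md5(random()::text),                          -- near-unique action ids
       'code-' || (random() * 100)::int,
       timestamp '2017-03-07 00:00:00' + (random() * 744) * interval '1 hour',
       'category-' || (random() * 2000)::int
from generate_series(1, 6000000);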
