Tags
PostgreSQL, CPU parallelism, SMP parallelism, parallel computing, GPU parallelism, parallel procedure support
Background
The PostgreSQL 11 optimizer already supports parallelism in a very wide range of cases. By a rough count, more than 27 parallel-computation scenarios are supported:
parallel seq scan
parallel index scan
parallel index only scan
parallel bitmap scan
parallel filter
parallel hash agg
parallel group agg
parallel cte
parallel subquery
parallel create table
parallel create index
parallel select into
parallel CREATE MATERIALIZED VIEW
parallel sort : gather merge
parallel nestloop join
parallel hash join
parallel merge join
parallel custom aggregate
parallel custom UDF
parallel append
parallel append merge
parallel union
parallel fdw table scan
parallel partition join
parallel partition agg
parallel gather
parallel gather merge
parallel under read committed (RC)
parallel under repeatable read (RR)
parallel GPU computing
parallel unlogged table
lead parallel
Each of these will be introduced one by one.
Please review the following key background first:
1. Optimizer automatic parallel-degree algorithm (CBO)
《PostgreSQL 11 parallel computing: algorithms, parameters, and forcing the degree of parallelism》
parallel append merge
Multiple segments executed in parallel, with merged sorted output
For example, with a partitioned table, when a query touches multiple partitions, the work for each partition forms an independent segment; the partitions can be executed in parallel, and the optimizer supports appending their results in parallel (Parallel Append).
If the combined result must be ordered, the optimizer can have each segment return sorted output and then merge the sorted streams, similar to a merge sort (Gather Merge); this combination is what parallel append merge refers to.
When Parallel Append appears together with Gather Merge, the behavior is analogous to MergeAppend: several pre-sorted input streams are merged into one globally ordered stream, as the MergeAppend implementation notes below describe.
src/backend/executor/nodeMergeAppend.c
/* INTERFACE ROUTINES
* ExecInitMergeAppend - initialize the MergeAppend node
* ExecMergeAppend - retrieve the next tuple from the node
* ExecEndMergeAppend - shut down the MergeAppend node
* ExecReScanMergeAppend - rescan the MergeAppend node
*
* NOTES
* A MergeAppend node contains a list of one or more subplans.
* These are each expected to deliver tuples that are sorted according
* to a common sort key. The MergeAppend node merges these streams
* to produce output sorted the same way.
*
* MergeAppend nodes don't make use of their left and right
* subtrees, rather they maintain a list of subplans so
* a typical MergeAppend node looks like this in the plan tree:
*
* ...
* /
* MergeAppend---+------+------+--- nil
* / \ | | |
* nil nil ... ... ...
* subplans
*/
Data volume: 1 billion rows
Scenario | Rows | Parallelism off | Parallelism on | Degree of parallelism | Speedup with parallelism |
---|---|---|---|---|---|
parallel append merge | 1 billion | 99.4 s | 5.87 s | 24 | 16.93x |
Because Parallel Append may be stacked on top of parallel scans of the partitions (multiplying the number of workers), you can avoid this by setting parallel_workers to 0 on every partition and then using a hint to force the parallel degree on the parent table; the degree you set then applies to the Parallel Append itself.
https://www.openscg.com/bigsql/docs/hintplan/
git clone https://github.com/ossc-db/pg_hint_plan
cd pg_hint_plan/
git checkout PG11
USE_PGXS=1 make
USE_PGXS=1 make install
vi $PGDATA/postgresql.conf
shared_preload_libraries = 'pg_hint_plan'
pg_ctl restart -m fast
Example: a HASH-partitioned table with 24 partitions.
CREATE unlogged TABLE ccc (
order_id bigint not null,
cust_id bigint not null,
status text
) PARTITION BY HASH (cust_id);
do language plpgsql $$
declare
begin
for i in 0..23 loop
execute format('CREATE unlogged TABLE %s%s PARTITION OF %s FOR VALUES WITH (MODULUS %s, REMAINDER %s)', 'ccc', i, 'ccc', 24, i);
execute format('alter table %s%s set(parallel_workers =0)', 'ccc',i);
end loop;
end;
$$;
postgres=# \d ccc
Unlogged table "public.ccc"
Column | Type | Collation | Nullable | Default
----------+--------+-----------+----------+---------
order_id | bigint | | not null |
cust_id | bigint | | not null |
status | text | | |
Partition key: HASH (cust_id)
Number of partitions: 24 (Use \d+ to list them.)
Insert 1 billion rows of test data.
insert into ccc select i, random()*960 from generate_series(1,1000000000) t(i);
vacuum (analyze,verbose) ccc;
postgres=# show max_worker_processes ;
max_worker_processes
----------------------
128
(1 row)
postgres=# set min_parallel_table_scan_size =0;
postgres=# set min_parallel_index_scan_size =0;
postgres=# set parallel_tuple_cost =0;
postgres=# set parallel_setup_cost =0;
postgres=# set max_parallel_workers=128;
postgres=# set max_parallel_workers_per_gather =24;
postgres=# set enable_parallel_hash =on;
postgres=# set enable_parallel_append =on;
postgres=# set enable_partitionwise_aggregate =off;
postgres=# set work_mem ='128MB';
1. Parallelism disabled, elapsed time: 99.4 seconds.
postgres=# set max_parallel_workers_per_gather =0;
postgres=# set enable_parallel_append =off;
postgres=# explain select * from ccc order by order_id limit 10;
QUERY PLAN
------------------------------------------------------------------------------------
Limit (cost=42015064.65..42015064.67 rows=10 width=48)
-> Sort (cost=42015064.65..44515064.93 rows=1000000114 width=48)
Sort Key: ccc0.order_id
-> Append (cost=0.00..20405421.71 rows=1000000114 width=48)
-> Seq Scan on ccc0 (cost=0.00..641839.96 rows=41663296 width=48)
-> Seq Scan on ccc1 (cost=0.00..625842.88 rows=40624888 width=48)
-> Seq Scan on ccc2 (cost=0.00..722107.36 rows=46873636 width=48)
-> Seq Scan on ccc3 (cost=0.00..545575.32 rows=35414332 width=48)
-> Seq Scan on ccc4 (cost=0.00..657705.92 rows=42693192 width=48)
-> Seq Scan on ccc5 (cost=0.00..609836.16 rows=39585616 width=48)
-> Seq Scan on ccc6 (cost=0.00..625934.32 rows=40630732 width=48)
-> Seq Scan on ccc7 (cost=0.00..673876.80 rows=43742880 width=48)
-> Seq Scan on ccc8 (cost=0.00..601729.04 rows=39059604 width=48)
-> Seq Scan on ccc9 (cost=0.00..609919.96 rows=39591296 width=48)
-> Seq Scan on ccc10 (cost=0.00..674124.76 rows=43758976 width=48)
-> Seq Scan on ccc11 (cost=0.00..529544.24 rows=34373924 width=48)
-> Seq Scan on ccc12 (cost=0.00..818443.04 rows=53127004 width=48)
-> Seq Scan on ccc13 (cost=0.00..674104.80 rows=43757680 width=48)
-> Seq Scan on ccc14 (cost=0.00..786195.28 rows=51033728 width=48)
-> Seq Scan on ccc15 (cost=0.00..609709.04 rows=39577604 width=48)
-> Seq Scan on ccc16 (cost=0.00..633745.96 rows=41137896 width=48)
-> Seq Scan on ccc17 (cost=0.00..673951.76 rows=43747376 width=48)
-> Seq Scan on ccc18 (cost=0.00..802394.72 rows=52085272 width=48)
-> Seq Scan on ccc19 (cost=0.00..529621.20 rows=34378920 width=48)
-> Seq Scan on ccc20 (cost=0.00..642042.32 rows=41676432 width=48)
-> Seq Scan on ccc21 (cost=0.00..401251.50 rows=26046150 width=48)
-> Seq Scan on ccc22 (cost=0.00..673891.04 rows=43743804 width=48)
-> Seq Scan on ccc23 (cost=0.00..642033.76 rows=41675876 width=48)
(28 rows)
postgres=# select * from ccc order by order_id limit 10;
order_id | cust_id | status
----------+---------+--------
1 | 649 |
2 | 226 |
3 | 816 |
4 | 844 |
5 | 827 |
6 | 456 |
7 | 810 |
8 | 365 |
9 | 49 |
10 | 75 |
(10 rows)
Time: 99416.529 ms (01:39.417)
2. Parallelism enabled, elapsed time: 5.87 seconds.
postgres=# set max_parallel_workers_per_gather =24;
postgres=# set enable_parallel_append =on;
postgres=# set client_min_messages =debug;
postgres=# set pg_hint_plan.debug_print =on;
postgres=# set pg_hint_plan.enable_hint=on;
postgres=# set pg_hint_plan.message_level =debug;
postgres=# explain /*+ Parallel(ccc 24 hard) */ select * from ccc order by order_id limit 10;
DEBUG: pg_hint_plan:
used hint:
Parallel(ccc 24 hard)
not used hint:
duplication hint:
error hint:
QUERY PLAN
------------------------------------------------------------------------------------------
Limit (cost=4321929.13..4321929.39 rows=10 width=48)
-> Gather Merge (cost=4321929.13..128274490.70 rows=4800000504 width=48)
Workers Planned: 24
-> Sort (cost=4321928.55..4821928.60 rows=200000021 width=48)
Sort Key: ccc12.order_id
-> Parallel Append (cost=0.00..0.00 rows=200000021 width=48)
-> Seq Scan on ccc12 (cost=0.00..818443.04 rows=53127004 width=48)
-> Seq Scan on ccc18 (cost=0.00..802394.72 rows=52085272 width=48)
-> Seq Scan on ccc14 (cost=0.00..786195.28 rows=51033728 width=48)
-> Seq Scan on ccc2 (cost=0.00..722107.36 rows=46873636 width=48)
-> Seq Scan on ccc10 (cost=0.00..674124.76 rows=43758976 width=48)
-> Seq Scan on ccc13 (cost=0.00..674104.80 rows=43757680 width=48)
-> Seq Scan on ccc17 (cost=0.00..673951.76 rows=43747376 width=48)
-> Seq Scan on ccc22 (cost=0.00..673891.04 rows=43743804 width=48)
-> Seq Scan on ccc7 (cost=0.00..673876.80 rows=43742880 width=48)
-> Seq Scan on ccc4 (cost=0.00..657705.92 rows=42693192 width=48)
-> Seq Scan on ccc20 (cost=0.00..642042.32 rows=41676432 width=48)
-> Seq Scan on ccc23 (cost=0.00..642033.76 rows=41675876 width=48)
-> Seq Scan on ccc0 (cost=0.00..641839.96 rows=41663296 width=48)
-> Seq Scan on ccc16 (cost=0.00..633745.96 rows=41137896 width=48)
-> Seq Scan on ccc6 (cost=0.00..625934.32 rows=40630732 width=48)
-> Seq Scan on ccc1 (cost=0.00..625842.88 rows=40624888 width=48)
-> Seq Scan on ccc9 (cost=0.00..609919.96 rows=39591296 width=48)
-> Seq Scan on ccc5 (cost=0.00..609836.16 rows=39585616 width=48)
-> Seq Scan on ccc15 (cost=0.00..609709.04 rows=39577604 width=48)
-> Seq Scan on ccc8 (cost=0.00..601729.04 rows=39059604 width=48)
-> Seq Scan on ccc3 (cost=0.00..545575.32 rows=35414332 width=48)
-> Seq Scan on ccc19 (cost=0.00..529621.20 rows=34378920 width=48)
-> Seq Scan on ccc11 (cost=0.00..529544.24 rows=34373924 width=48)
-> Seq Scan on ccc21 (cost=0.00..401251.50 rows=26046150 width=48)
(30 rows)
postgres=# /*+ Parallel(ccc 24 hard) */ select * from ccc order by order_id limit 10;
DEBUG: pg_hint_plan:
used hint:
Parallel(ccc 24 hard)
not used hint:
duplication hint:
error hint:
order_id | cust_id | status
----------+---------+--------
1 | 649 |
2 | 226 |
3 | 816 |
4 | 844 |
5 | 827 |
6 | 456 |
7 | 810 |
8 | 365 |
9 | 49 |
10 | 75 |
(10 rows)
Time: 5868.558 ms (00:05.869)
As shown above, because Parallel Append can be stacked on top of parallel scans, setting parallel_workers to 0 on the partitions and forcing the parallel degree on the parent table with a hint is currently the way to pin down the degree of the Parallel Append itself.
Hopefully future PostgreSQL versions will provide a better parameter for controlling the Parallel Append degree, or a better way to avoid multiplying it with parallel scan workers.
Other notes
1. Optimizer automatic parallel-degree algorithm (CBO)
《PostgreSQL 11 parallel computing: algorithms, parameters, and forcing the degree of parallelism》
2. Checking whether a function or operator supports parallelism
postgres=# select proparallel,proname from pg_proc;
proparallel | proname
-------------+----------------------------------------------
s | boolin
s | boolout
s | byteain
s | byteaout
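The proparallel flag means: s = parallel safe, r = parallel restricted, u = parallel unsafe. A minimal sketch of checking a specific function or operator (the names queried here are only examples):
select proname, proparallel from pg_proc where proname = 'lower';
-- for an operator, look up the parallel safety of its implementing function
select o.oprname, p.proname, p.proparallel
from pg_operator o join pg_proc p on p.oid = o.oprcode
where o.oprname = '+' limit 10;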
3. Subquery "mapreduce" with unlogged tables
In some cases, if you want to ease the optimizer's burden of parallel-optimizing an extremely complex SQL statement, you can split the SQL into several stages yourself and store the intermediate results in unlogged tables, similar to the map-reduce idea; unlogged tables also support parallel computation. A sketch of this staged approach follows.
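A minimal sketch, assuming hypothetical source tables t1 and t2; each stage on its own is simple enough for the optimizer to plan in parallel:
-- stage 1: pre-aggregate each source in parallel, keeping intermediate results in unlogged tables
create unlogged table stage1 as select cust_id, count(*) as cnt from t1 group by cust_id;
create unlogged table stage2 as select cust_id, sum(amount) as amt from t2 group by cust_id;
-- stage 2: combine the much smaller intermediate results
select s1.cust_id, s1.cnt, s2.amt from stage1 s1 join stage2 s2 using (cust_id);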
4. vacuum: parallel garbage collection.
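For example, multiple tables (or partitions) can be vacuumed in parallel with vacuumdb; a sketch, where the job count and database name are only examples:
vacuumdb --jobs=24 --analyze -d postgres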
5. Asynchronous parallelism with dblink
《PostgreSQL VOPS vectorized computing + dblink asynchronous parallelism - aggregating 1 billion rows on a single instance in under 2 seconds》
《PostgreSQL similarity search distributed architecture: design and practice - dblink asynchronous calls and multi-host parallelism (remote cursor + record UDF example)》
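A minimal sketch of the dblink asynchronous pattern (the connection strings are placeholders; the queried partitions reuse the table created above):
create extension if not exists dblink;
select dblink_connect('c1', 'dbname=postgres');
select dblink_connect('c2', 'dbname=postgres');
-- fire both queries without waiting; they run concurrently in two backends
select dblink_send_query('c1', 'select count(*) from ccc0');
select dblink_send_query('c2', 'select count(*) from ccc1');
-- collect the results
select * from dblink_get_result('c1') as t(cnt bigint);
select * from dblink_get_result('c2') as t(cnt bigint);
select dblink_disconnect('c1');
select dblink_disconnect('c2');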
Scenarios where parallelism is currently not allowed (PostgreSQL will keep expanding support):
1. Statements that modify or lock rows, except CREATE TABLE AS, SELECT INTO, and CREATE MATERIALIZED VIEW, which can use parallelism.
2. Queries that can be suspended and resumed, such as cursors or loops in PL/pgSQL; because rows are consumed piecemeal, these do not use parallel plans.
3. PARALLEL UNSAFE UDFs; such UDFs are never parallelized (see the sketch after this list).
4. Nested parallelism (a UDF whose internal query runs in parallel): the outer SQL calling such a UDF does not itself run in parallel (mainly to avoid an excessive number of parallel workers).
5. The SERIALIZABLE (SSI) isolation level.
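A minimal sketch of how parallel safety is declared on a UDF (the function names are only examples); a query that calls the UNSAFE variant never gets a parallel plan, while the SAFE variant does not block parallelism:
-- blocks parallelism in any query that calls it
create or replace function add_one_unsafe(i int) returns int as $$ select i + 1 $$ language sql parallel unsafe;
-- equivalent logic, but allows the calling query to use a parallel plan
create or replace function add_one_safe(i int) returns int as $$ select i + 1 $$ language sql parallel safe;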
References
https://www.postgresql.org/docs/11/parallel-plans.html
《PostgreSQL 11 parallel computing: algorithms, parameters, and forcing the degree of parallelism》
《PostgreSQL 11 preview - roundup of parallel computing enhancements》
《PostgreSQL 10 custom parallel aggregate functions: principles and practice (including an example of merging multiple arrays into a single one-dimensional array with array_agg)》