PostgreSQL 11 preview - kernel-level paging optimization - index scan OFFSET optimization (using the visibility map file to skip heap scans)

Tags

PostgreSQL, visibility map, offset, skip heap scan, index only scan


Background

OFFSET ... LIMIT is the standard way to implement paging. Many people have noticed that paging gets slower and slower the further back you go.

The reason is that before the database can return rows, it must scan past as many qualifying tuples as the OFFSET specifies just to find where the result set starts.

For example:

1. During an index scan, the database cannot tell how many valid records an index page holds (index entries carry no version information, so a heap visit is required to determine whether a record is visible to the current transaction).

Optimization approach:

Starting with PostgreSQL 9.1, index only scan is supported: if the query references only indexed columns, and the heap page that an index item points to is all-visible (as recorded in the visibility map), there is no need to touch the tuple in the heap page at all. When the OFFSET is large, this can be used to speed up paging.

OFFSET: index only scan vs. index scan

1. Create a test table

postgres=# create table t_only (id int primary key, info text);  
CREATE TABLE  

2. Load test data

postgres=# insert into t_only select t, 'test' from generate_series(1,10000000) t ;  

3. Generate the visibility map file

postgres=# vacuum ANALYZE t_only ;  
VACUUM  
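
Before comparing plans, you can verify that the visibility map is populated. A minimal sketch using standard catalog queries (output omitted; the numbers and the on-disk path depend on your installation):

postgres=# -- relallvisible is the number of heap pages marked all-visible in the VM  
postgres=# select relpages, relallvisible from pg_class where relname = 't_only';  
postgres=# -- the VM itself lives in the "_vm" fork next to the relation's main file  
postgres=# select pg_relation_filepath('t_only');  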

4. Using an index only scan, an OFFSET of 100,000 records touches only 277 blocks (a figure that includes visibility map blocks touched multiple times).

postgres=# explain (analyze,verbose,timing,costs,buffers) select id from t_only order by id offset 100000 limit 10;  
                                                                       QUERY PLAN                                                                         
--------------------------------------------------------------------------------------------------------------------------------------------------------  
 Limit  (cost=1774.64..1774.82 rows=10 width=4) (actual time=20.556..20.558 rows=10 loops=1)  
   Output: id  
   Buffers: shared hit=277  
   ->  Index Only Scan using t_only_pkey on public.t_only  (cost=0.43..177422.89 rows=10000097 width=4) (actual time=0.031..12.721 rows=100010 loops=1)  
         Output: id  
         Heap Fetches: 0  
         Buffers: shared hit=277  
 Planning time: 0.146 ms  
 Execution time: 20.579 ms  
(9 rows)  

5. With index only scan disabled, 817 blocks have to be scanned, because a large number of heap fetches are now involved.

postgres=# set enable_indexonlyscan =off;  
SET  
postgres=# explain (analyze,verbose,timing,costs,buffers) select * from t_only order by id offset 100000 limit 10;  
                                                                    QUERY PLAN                                                                      
--------------------------------------------------------------------------------------------------------------------------------------------------  
 Limit  (cost=2315.20..2315.43 rows=10 width=9) (actual time=27.388..27.391 rows=10 loops=1)  
   Output: id, info  
   Buffers: shared hit=817  
   ->  Index Scan using t_only_pkey on public.t_only  (cost=0.43..231475.26 rows=9999922 width=9) (actual time=0.022..19.013 rows=100010 loops=1)  
         Output: id, info  
         Buffers: shared hit=817  
 Planning time: 0.081 ms  
 Execution time: 27.414 ms  
(8 rows)  

As noted above, index only scan applies only when every column the SELECT references is in the index; if a referenced column is not in the index, an index only scan is out of the question.
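
As an aside, PostgreSQL 11 also adds covering indexes (CREATE INDEX ... INCLUDE), which store non-key columns in the index and so widen the set of queries eligible for index only scan. A minimal sketch, assuming a PostgreSQL 11 server (the index name is illustrative):

postgres=# -- carry info in the index as a non-key column  
postgres=# create index idx_t_only_cover on t_only using btree (id) include (info);  
postgres=# -- select id, info ... can now be answered by an index only scan  
postgres=# explain (analyze,verbose,timing,costs,buffers) select id, info from t_only order by id offset 100000 limit 10;  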

Even without that, we can get the benefit of an index only scan by rewriting the SQL.

SQL-level paging optimization (forcing an index only scan)

For example, suppose we need to return two columns while the index covers only the id column. The SQL can be rewritten as follows.

1. The rewritten SQL:

postgres=# explain (analyze,verbose,timing,costs,buffers)   
select * from t_only   
where id= any(  
  array(select id from t_only order by id offset 100000 limit 10)  -- this subquery uses an index only scan  
);  
                                                                               QUERY PLAN                                                                                 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------  
 Index Scan using t_only_pkey on public.t_only  (cost=1775.26..1790.35 rows=10 width=9) (actual time=22.359..22.371 rows=10 loops=1)  
   Output: t_only.id, t_only.info  
   Index Cond: (t_only.id = ANY ($0))  
   Buffers: shared hit=311  
   InitPlan 1 (returns $0)  
     ->  Limit  (cost=1774.65..1774.82 rows=10 width=4) (actual time=22.311..22.314 rows=10 loops=1)  
           Output: t_only_1.id  
           Buffers: shared hit=277  
           ->  Index Only Scan using t_only_pkey on public.t_only t_only_1  (cost=0.43..177420.27 rows=9999922 width=4) (actual time=0.026..13.821 rows=100010 loops=1)  
                 Output: t_only_1.id  
                 Heap Fetches: 0  
                 Buffers: shared hit=277  
 Planning time: 0.165 ms  
 Execution time: 22.400 ms  
(14 rows)  
  
postgres=# select * from t_only where id= any(array(select id from t_only order by id offset 100000 limit 10));  
   id   | info   
--------+------  
 100001 | test  
 100002 | test  
 100003 | test  
 100004 | test  
 100005 | test  
 100006 | test  
 100007 | test  
 100008 | test  
 100009 | test  
 100010 | test  
(10 rows)  

With this rewrite, the OFFSET step runs as an index only scan, so visibility for most tuples is determined without heap fetches (note Heap Fetches: 0, and 311 buffers versus 817).
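
One caveat when generalizing this rewrite: = ANY(array(...)) makes no promise about output order, so it is safer to repeat the ORDER BY in the outer query. A minimal sketch:

postgres=# select * from t_only  
postgres-# where id = any(array(select id from t_only order by id offset 100000 limit 10))  
postgres-# order by id;  -- repeat the sort; = ANY does not guarantee ordering  

In the plan above the rows happened to come back in id order because the outer scan also used the primary key index, but that is an implementation detail, not a guarantee.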

2. The ordinary SQL, by contrast, runs a plain index scan, which must visit the heap to check each tuple's visibility and therefore scans many more blocks.

postgres=# explain (analyze,verbose,timing,costs,buffers) select * from t_only order by id offset 100000 limit 10;  
                                                                    QUERY PLAN                                                                      
--------------------------------------------------------------------------------------------------------------------------------------------------  
 Limit  (cost=2315.20..2315.43 rows=10 width=9) (actual time=27.388..27.391 rows=10 loops=1)  
   Output: id, info  
   Buffers: shared hit=817  
   ->  Index Scan using t_only_pkey on public.t_only  (cost=0.43..231475.26 rows=9999922 width=9) (actual time=0.022..19.013 rows=100010 loops=1)  
         Output: id, info  
         Buffers: shared hit=817  
 Planning time: 0.081 ms  
 Execution time: 27.414 ms  
(8 rows)  
  
postgres=# select * from t_only order by id offset 100000 limit 10;  
   id   | info   
--------+------  
 100001 | test  
 100002 | test  
 100003 | test  
 100004 | test  
 100005 | test  
 100006 | test  
 100007 | test  
 100008 | test  
 100009 | test  
 100010 | test  
(10 rows)  

Kernel optimization: using the index only scan technique for OFFSET

Since the manual rewrite avoids heap fetches for visibility checks, the kernel can apply the same technique internally.

The patch:

https://commitfest.postgresql.org/17/1513/

https://www.postgresql.org/message-id/flat/CANtu0oi3a1Rf1PVsBufQbm+g9ytSv75+kp7kJYvK5C1qF_0Siw@mail.gmail.com#CANtu0oi3a1Rf1PVsBufQbm+g9ytSv75+kp7kJYvK5C1qF_0Siw@mail.gmail.com

Hello.  
  
WIP-Patch for optimisation of OFFSET + IndexScan using visibility map.  
Patch based on idea of Maxim Boguk [1] with some inspiration from Douglas  
Doole [2].  
---------  
Everyone knows - using OFFSET (especially a big one) is not good practice.  
But in reality they widely used mostly for paging (because it is simple).  
  
Typical situation is some table (for example tickets) with indexes used for  
paging/sorting:  
  
VACUUM FULL;  
VACUUM ANALYZE ticket;  
SET work_mem = '512MB';  
SET random_page_cost = 1.0;  
  
CREATE TABLE ticket AS  
SELECT  
id,  
TRUNC(RANDOM() * 100 + 1) AS project_id,  
NOW() + (RANDOM() * (NOW()+'365 days' - NOW())) AS created_date,  
repeat((TRUNC(RANDOM() * 100 + 1)::text), 1000) as payload  
FROM GENERATE_SERIES(1, 1000000) AS g(id);  
  
CREATE INDEX simple_index ON ticket using btree(project_id, created_date);  
  
And some typical query to do offset on tickets of some project with paging,  
some filtration (based on index) and sorting:  
  
SELECT * FROM ticket  
WHERE project_id = ?  
AND created_date > '20.06.2017'  
ORDER BY created_date offset 500 limit 100;  
  
At the current moment IndexScan node will be required to do 600 heap  
fetches to execute the query.  
But first 500 of them are just dropped by the NodeLimit.  
  
The idea of the patch is to push down offset information in  
ExecSetTupleBound (like it done for Top-sort) to IndexScan in case  
of simple scan (without projection, reordering and qual). In such situation  
we could use some kind of index only scan  
(even better because we don't need index data) to avoid fetching tuples  
while they are just thrown away by nodeLimit.  
  
Patch is also available on Github:  
https://github.com/michail-nikolaev/postgres/commit/a368c3483250e4c02046d418a27091678cb963f4?diff=split  
And some test here:  
https://gist.github.com/michail-nikolaev/b7cbe1d6f463788407ebcaec8917d1e0  
  
So, at the moment everything seems to work (check-world is ok too) and I  
got the following results for the test ticket table:  
| offset | master  | patch  
| 100    | ~1.3ms  | ~0.7ms  
| 1000   | ~5.6ms  | ~1.1ms  
| 10000  | ~46.7ms | ~3.6ms  
  
To continue development I have following questions:  
0) Maybe I missed something huge...  
1) Is it required to support non-MVCC (dirty) snapshots? They are not  
supported for IndexOnlyScan - not sure about IndexScan.  
2) Should I try to pass information about such optimisation to  
planner/optimizer? It is not too easy with the current design but seems  
possible.  
3) If so, should I add something to EXPLAIN?  
4) If so, should I add some counters to EXPLAIN ANALYZE? (for example  
number of heap fetch avoided).  
5) Should I add description of optimisation to  
https://www.postgresql.org/docs/10/static/queries-limit.html ?  
6) Maybe you have some ideas for additional tests I need to add.  
  
Thanks a lot.  
  
[1]  
https://www.postgresql.org/message-id/CAK-MWwQpZobHfuTtHj9%2B9G%2B5%3Dck%2BaX-ANWHtBK_0_D_qHYxWuw%40mail.gmail.com  
[2]  
https://www.postgresql.org/message-id/CADE5jYLuugnEEUsyW6Q_4mZFYTxHxaVCQmGAsF0yiY8ZDggi-w%40mail.gmail.com  

Summary

Besides the OFFSET optimization discussed in this article, the visibility map file is also used by vacuum, vacuum freeze, and index only scan: it lets them skip pages that need no garbage collection, pages that need no freezing, and pages whose tuples need no heap visibility check.

In the visibility map, each heap page is represented by 2 bits:

/* Flags for bit map */  
#define VISIBILITYMAP_ALL_VISIBLE       0x01  
#define VISIBILITYMAP_ALL_FROZEN        0x02  
#define VISIBILITYMAP_VALID_BITS        0x03    /* OR of all valid visibilitymap  
                                                 * flags bits */  
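
These bits can be inspected from SQL with the pg_visibility extension (shipped with PostgreSQL since 9.6); a minimal sketch:

postgres=# create extension pg_visibility;  
postgres=# -- per-page all-visible / all-frozen bits  
postgres=# select * from pg_visibility_map('t_only') limit 5;  
postgres=# -- counts of all-visible / all-frozen pages over the whole relation  
postgres=# select * from pg_visibility_map_summary('t_only');  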

Other paging optimization tricks, some even more effective than index only scan (every page can be silky smooth):

《论count与offset使用不当的罪名 和 分页的优化》

《妙用explain Plan Rows快速估算行 - 分页数估算》

《分页优化 - order by limit x offset y performance tuning》

《分页优化, add max_tag column speedup Query in max match enviroment》

《PostgreSQL's Cursor USAGE with SQL MODE - 分页优化》

References

https://commitfest.postgresql.org/17/1513/

https://www.postgresql.org/message-id/flat/CANtu0oi3a1Rf1PVsBufQbm+g9ytSv75+kp7kJYvK5C1qF_0Siw@mail.gmail.com#CANtu0oi3a1Rf1PVsBufQbm+g9ytSv75+kp7kJYvK5C1qF_0Siw@mail.gmail.com
