PostgreSQL Parallel Query TPC-H Testing and Optimization Analysis


Author

digoal

Date

2016-11-08

Tags

PostgreSQL , Parallel Query , TPC-H


Background

PostgreSQL 9.6 introduces, for the first time, parallel execution for aggregation, sequential scans, hash joins, and nested loop joins.

https://www.postgresql.org/docs/9.6/static/release-9-6.html

Parallel queries (Robert Haas, Amit Kapila, David Rowley, many others)   

With 9.6, PostgreSQL introduces initial support for parallel execution of large queries.   

Only strictly read-only queries where the driving table is accessed via a sequential scan can be parallelized.   

Hash joins and nested loops can be performed in parallel, as can aggregation (for supported aggregates).   

Much remains to be done, but this is already a useful set of features.   

Parallel query execution is not (yet) enabled by default. To allow it, set the new configuration parameter max_parallel_workers_per_gather to a value larger than zero.   

Additional control over use of parallelism is available through other new configuration parameters force_parallel_mode, parallel_setup_cost, parallel_tuple_cost, and min_parallel_relation_size.   

Provide infrastructure for marking the parallel-safety status of functions (Robert Haas, Amit Kapila)   
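As a quick illustration of the knobs mentioned in the release notes, enabling and tuning parallel query in 9.6 looks like this (a minimal sketch; the 9.6 defaults are shown for the cost parameters, and the worker count is just an example, not a recommendation):

```sql
-- Parallel query is off by default in 9.6; allow up to 4 workers
-- per Gather node.
SET max_parallel_workers_per_gather = 4;

-- Planner cost knobs for parallelism (9.6 defaults shown).
SET parallel_setup_cost = 1000;          -- cost of launching workers
SET parallel_tuple_cost = 0.1;           -- cost of passing one tuple to the leader
SET min_parallel_relation_size = '8MB';  -- smallest table considered for parallel scan

-- Force a parallel plan even when the planner would not choose one;
-- useful only for testing.
SET force_parallel_mode = on;
```

The same settings can be placed in postgresql.conf to apply cluster-wide.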

So how much does this improve TPC-H performance?

Robert's notes on his PostgreSQL 9.6 TPC-H test

With the degree of parallelism set to 4, 17 of the 22 queries used a parallel plan.

Of those, 15 ran faster than single-process execution (11 of them at least twice as fast), 1 ran at the same speed, and 1 got slower.

I decided to try out parallel query, as implemented in PostgreSQL 9.6devel, on the TPC-H queries.

To do this, I followed the directions at https://github.com/tvondra/pg_tpch - thanks to Tomas Vondra for those instructions.

I did the test on an IBM POWER7 server provided to the PostgreSQL community by IBM.

I scaled the database to use 10GB of input data; the resulting database size was 22GB, of which 8GB was indexes.

I tried out each query just once without really tuning the database at all, except for increasing shared_buffers to 8GB.

Then I tested them again after enabling parallel query by configuring max_parallel_degree = 4.

Of the 22 queries, 17 switched to a parallel plan, while the plans for the other 5 were unchanged.

Of the 17 queries where the plan changed, 15 got faster, 1 ran at the same speed, and 1 got slower.

11 of the queries ran at least twice as fast with parallelism as they did without parallelism.

Here are the comparative results for the queries where the plan changed (parallel vs. single-process execution):

Q1: 229 seconds → 45 seconds (5.0x)  

Q3: 45 seconds → 17 seconds (2.6x)  

Q4: 12 seconds → 3 seconds (4.0x)  

Q5: 38 seconds → 17 seconds (2.2x)  

Q6: 17 seconds → 6 seconds (2.8x)  

Q7: 41 seconds → 12 seconds (3.4x)  

Q8: 10 seconds → 4 seconds (2.5x)  

Q9: 81 seconds → 61 seconds (1.3x)  

Q10: 37 seconds → 18 seconds (2.0x)  

Q12: 34 seconds → 7 seconds (4.8x)  

Q15: 33 seconds → 24 seconds (1.3x)  

Q16: 17 seconds → 16 seconds (1.0x)  

Q17: 140 seconds → 55 seconds (2.5x)  

Q19: 2 seconds → 1 second (2.0x)  

Q20: 70 seconds → 70 seconds (1.0x)  

Q21: 80 seconds → 99 seconds (0.8x)  

Q22: 4 seconds → 3 seconds (1.3x)  

Linear scaling with a leader process and 4 workers would mean a 5.0x speedup, which we achieved in only one case.
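The speedup figures quoted above appear to be the single-process runtime divided by the parallel runtime, truncated to one decimal place; a quick check of a few of them (plain arithmetic, no PostgreSQL required):

```python
import math

def speedup(before_s, after_s):
    """Ratio of single-process to parallel runtime, truncated to 1 decimal."""
    return math.floor(before_s / after_s * 10) / 10

# A few of the results quoted above:
print(speedup(229, 45))  # Q1  -> 5.0 (the one case of linear scaling)
print(speedup(34, 7))    # Q12 -> 4.8
print(speedup(80, 99))   # Q21 -> 0.8 (slower with parallelism)
```

With a leader process plus 4 workers, 5.0x is the theoretical ceiling, which only Q1 reaches.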

However, for many users, that won't matter: if you have CPUs that would otherwise be sitting idle, it's better to get some speedup than no speedup at all.

Of course, I couldn't resist analyzing what went wrong here, especially for Q21, which actually got slower.

Q21 got slower because of the work_mem setting, and because of a limitation in the current parallel hash join implementation.

To some degree, that's down to misconfiguration:

I ran this test with the default value of work_mem=4MB, but Q21 chooses a plan that builds a hash table on the largest table in the database, which is about 9.5GB in this test.

Therefore, it ends up doing a 1024-batch hash join, which is somewhat painful under the best of circumstances.

With work_mem=1GB, the regression disappears, and it's 6% faster with parallel query than without.
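PostgreSQL splits a hash join into batches when the projected hash table will not fit in work_mem, rounding the batch count up to a power of two. Its real estimate depends on tuple counts, widths, and overheads (which is why it arrived at 1024 batches here), so the sketch below is only a toy model of the power-of-two rounding, illustrating why a ~9.5GB build side against the default work_mem of 4MB explodes into a huge batch count:

```python
def hash_join_batches(build_size_bytes, work_mem_bytes):
    """Toy estimate: smallest power of two such that each batch's share
    of the build side fits in work_mem. Mirrors the power-of-two rounding
    PostgreSQL uses, not its actual costing."""
    nbatch = 1
    while build_size_bytes / nbatch > work_mem_bytes:
        nbatch *= 2
    return nbatch

MB = 1024 * 1024
GB = 1024 * MB

# 9.5GB of build-side data against the default work_mem of 4MB:
print(hash_join_batches(9.5 * GB, 4 * MB))  # -> 4096 in this toy model
# Raising work_mem to 1GB collapses it to a handful of batches:
print(hash_join_batches(9.5 * GB, 1 * GB))  # -> 16
```

Each batch beyond the first means spilling tuples to disk and re-reading them, which is why a 1024-batch join is "somewhat painful under the best of circumstances."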

With the current parallel hash join, every worker must build its own copy of the hash table; if the large table is the one being hashed, CPU and memory costs for that table are multiplied N-fold.

Hashing the small table instead mitigates this problem.

However, there's a deeper problem, which is that while PostgreSQL 9.6 can perform a hash join in parallel, each process must build its own copy of the hash table.

That means we use N times the CPU and N times the memory, and we may induce I/O contention, locking contention, or memory pressure as well.

It would be better to have the ability to build a shared hash table, and EnterpriseDB is working on that as a feature, but it won't be ready in time for PostgreSQL 9.6, which is already in feature freeze.

Since Q21 needs a giant hash table, this limitation really stings.
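To make the per-worker duplication concrete, here is a toy model in plain Python (not PostgreSQL internals): the outer relation is partitioned across workers, but each worker builds its own full hash table over the inner relation, so build work and memory scale with the worker count:

```python
def toy_parallel_hash_join(inner, outer, n_workers):
    """Join on key. Each 'worker' hashes the ENTIRE inner relation,
    as PostgreSQL 9.6 parallel hash join does, then probes only its
    slice of the outer relation."""
    results = []
    tables_built = 0
    for w in range(n_workers):
        hash_table = {}                     # private copy per worker
        for key, payload in inner:          # full build, repeated N times
            hash_table.setdefault(key, []).append(payload)
        tables_built += 1
        for key, payload in outer[w::n_workers]:  # this worker's slice
            for match in hash_table.get(key, []):
                results.append((key, match, payload))
    return results, tables_built

inner = [(1, 'a'), (2, 'b')]
outer = [(1, 'x'), (2, 'y'), (1, 'z'), (3, 'w')]
rows, builds = toy_parallel_hash_join(inner, outer, n_workers=4)
print(sorted(rows))  # [(1, 'a', 'x'), (1, 'a', 'z'), (2, 'b', 'y')]
print(builds)        # 4 -- the inner relation was hashed once per worker
```

With a shared hash table, the build would happen once regardless of the worker count, which is exactly the improvement EnterpriseDB was working on.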

One point where hash joins can be improved: use a shared hash table instead of having each worker process build its own copy.

This will probably not arrive until PostgreSQL 10.0.

In fact, there are a number of queries here where it seems like building a shared hash table would speed things up significantly: Q3, Q5, Q7, Q8, and Q21.

An even more widespread problem is that, at present, the driving table for a parallel query must be accessed via a parallel sequential scan;

that's the only operation we have that can partition the input data.

Another point for improvement is bitmap scan: several queries bottleneck on bitmap scans, but parallel query does not yet support them.

Many of these queries - Q4, Q5, Q6, Q7, Q14, Q15, and Q20 - would have been better off using a bitmap index scan on the driving table, but unfortunately that's not supported in PostgreSQL 9.6.

We still come out ahead on these queries in terms of runtime because the system simply substitutes raw power for finesse:

with enough workers, we can scan the whole table quicker than a single process can scan the portion identified as relevant by the index.

However, it would clearly be nice to do better.

Four queries - Q2, Q15, Q16, Q22 - were parallelized either not at all or only to a limited degree due to restrictions related to the handling of subqueries,

about which the current implementation of parallel query is not always smart.

Three queries - Q2, Q13, and Q15 - made no or limited use of parallelism because the optimal join strategy is a merge join, which can't be made parallel in a trivial way.

One query - Q17 - managed to perform the same expensive sort twice, once in the workers and then again in the leader.

This is because the Gather operation reads tuples from the workers in an arbitrary and not necessarily predictable order;

so even if each worker's stream of tuples is sorted, the way those streams get merged together will probably destroy the sort ordering.
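A small illustration (toy code, not the actual Gather node) of why this happens: if each worker's output is sorted but the leader reads from whichever worker has a tuple ready, the combined stream is generally no longer sorted, so the sort must be redone in the leader:

```python
import heapq
from itertools import zip_longest

# Two workers each produce a sorted stream of tuples.
worker_a = [1, 4, 7]
worker_b = [2, 3, 9]

# Gather reads in an arbitrary, interleaved order -- simulated here
# as round-robin -- and the combined result is not sorted.
gathered = [t for pair in zip_longest(worker_a, worker_b)
            for t in pair if t is not None]
print(gathered)                     # [1, 2, 4, 3, 7, 9] -- order lost
print(gathered == sorted(gathered)) # False

# An order-preserving merge of the sorted streams would avoid the
# second sort (this is what a merging gather would do):
print(list(heapq.merge(worker_a, worker_b)))  # [1, 2, 3, 4, 7, 9]
```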

There are no doubt other issues here that I haven't found yet, but on the whole I find these results pretty encouraging.

Parallel query basically works, and makes queries that someone thought were representative of real workloads significantly faster.

There's a lot of room for further improvement, but that's likely to be true of the first version of almost any large feature.

Areas where parallel query still needs improvement

Hash join: use a shared hash table instead of having each worker process build its own copy; this will probably not arrive until PostgreSQL 10.0.

Bitmap scan: several queries bottleneck on bitmap scans, but parallel query does not yet support them.

Merge join: add support for parallel merge joins.
