Xu Chunyang published an article exposing what he considers a major flaw in MySQL 8.0's hash join.
The core argument of the article is this: when joining multiple tables (say, three of them), the optimizer simply places the smaller tables first as driving tables and the largest table last, which can produce an enormous Cartesian-product-like intermediate result set, potentially exhausting CPU and disk space.
I ran a test of my own to examine this behavior.
1. Prepare the test environment with the TPC-H tools
The TPC-H tools can be downloaded from http://www.tpc.org/tpch/default5.asp. They do not support MySQL out of the box and need some manual adjustments; see https://imysql.com/2012/12/21/tpch-for-mysql-manual.html.
In this test I set the Scale Factor to 10, i.e.:
[root@yejr.run dbgen]# ./dbgen -s 10 && ls -l *tbl
-rw-r--r-- 1 root root  244847642 Apr 14 09:52 customer.tbl
-rw-r--r-- 1 root root 7775727688 Apr 14 09:52 lineitem.tbl
-rw-r--r-- 1 root root       2224 Apr 14 09:52 nation.tbl
-rw-r--r-- 1 root root 1749195031 Apr 14 09:52 orders.tbl
-rw-r--r-- 1 root root  243336157 Apr 14 09:52 part.tbl
-rw-r--r-- 1 root root 1204850769 Apr 14 09:52 partsupp.tbl
-rw-r--r-- 1 root root        389 Apr 14 09:52 region.tbl
-rw-r--r-- 1 root root   14176368 Apr 14 09:52 supplier.tbl
2. Create the test tables and load the test data
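The exact DDL and load statements are not reproduced here. As a minimal sketch, creating one of the tables with no indexes at all and bulk-loading the dbgen output could look like this (the file path is a placeholder; the DDL follows the standard TPC-H schema):

-- Sketch only, not the exact DDL used in this test: the standard TPC-H
-- region table, deliberately with no PRIMARY KEY and no secondary indexes.
CREATE TABLE region (
  R_REGIONKEY INT NOT NULL,
  R_NAME      CHAR(25) NOT NULL,
  R_COMMENT   VARCHAR(152)
) ENGINE=InnoDB;

-- dbgen emits '|'-separated rows; note each row also ends with a trailing
-- '|', which may need to be stripped from the .tbl files beforehand.
LOAD DATA LOCAL INFILE '/path/to/region.tbl'
INTO TABLE region
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n';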
The sizes of the tables are as follows:
+----------+------------+----------+----------------+-------------+--------------+
| Name     | Row_format | Rows     | Avg_row_length | Data_length | Index_length |
+----------+------------+----------+----------------+-------------+--------------+
| customer | Dynamic    |  1476605 |            197 |   291258368 |            0 |
| lineitem | Dynamic    | 59431418 |            152 |  9035579392 |            0 |
| nation   | Dynamic    |       25 |            655 |       16384 |            0 |
| orders   | Dynamic    | 14442405 |            137 |  1992294400 |            0 |
| part     | Dynamic    |  1980917 |            165 |   327991296 |            0 |
| partsupp | Dynamic    |  9464104 |            199 |  1885339648 |            0 |
| region   | Dynamic    |        5 |           3276 |       16384 |            0 |
| supplier | Dynamic    |    99517 |            184 |    18366464 |            0 |
+----------+------------+----------+----------------+-------------+--------------+
Note: none of the test tables have any indexes, not even a primary key; that is why Index_length is 0 for every table above.
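A quick way to double-check (a trivial verification, not part of the original write-up):

-- Should return an empty set for each of the eight tables.
SHOW INDEX FROM lineitem;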
3. Run the test SQL
The MySQL version used here is 8.0.19:
[root@yejr.run]> \s
...
Server version: 8.0.19-commercial MySQL Enterprise Server - Commercial
...
Xu ran into the problem with TPC-H Q5, so this test uses the same query.
However, since the goal here is to test hash join specifically, the GROUP BY and ORDER BY clauses were removed.
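For reference, the stock Q5 from the TPC-H specification (shown here with the same substitution parameters as the trimmed query below) aggregates revenue per nation and sorts it:

select
    n_name,
    sum(l_extendedprice * (1 - l_discount)) as revenue
from
    customer, orders, lineitem, supplier, nation, region
where
    c_custkey = o_custkey
    and l_orderkey = o_orderkey
    and l_suppkey = s_suppkey
    and c_nationkey = s_nationkey
    and s_nationkey = n_nationkey
    and n_regionkey = r_regionkey
    and r_name = 'AMERICA'
    and o_orderdate >= date '1993-01-01'
    and o_orderdate < date '1993-01-01' + interval '1' year
group by
    n_name
order by
    revenue desc;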
Let's look at the execution plan first. Every table is a full scan. Scary...
[root@yejr.run]> desc select count(*)
    -> from
    ->     customer,
    ->     orders,
    ->     lineitem,
    ->     supplier,
    ->     nation,
    ->     region
    -> where
    ->     c_custkey = o_custkey
    ->     and l_orderkey = o_orderkey
    ->     and l_suppkey = s_suppkey
    ->     and c_nationkey = s_nationkey
    ->     and s_nationkey = n_nationkey
    ->     and n_regionkey = r_regionkey
    ->     and r_name = 'AMERICA'
    ->     and o_orderdate >= date '1993-01-01'
    ->     and o_orderdate < date '1993-01-01' + interval '1' year;
+----------+------+----------+----------+----------------------------------------------------+
| table    | type | rows     | filtered | Extra                                              |
+----------+------+----------+----------+----------------------------------------------------+
| region   | ALL  |        5 |    20.00 | Using where                                        |
| nation   | ALL  |       25 |    10.00 | Using where; Using join buffer (Block Nested Loop) |
| supplier | ALL  |    98705 |    10.00 | Using where; Using join buffer (Block Nested Loop) |
| customer | ALL  |  1485216 |    10.00 | Using where; Using join buffer (Block Nested Loop) |
| orders   | ALL  | 14932433 |     1.11 | Using where; Using join buffer (Block Nested Loop) |
| lineitem | ALL  | 59386314 |     1.00 | Using where; Using join buffer (Block Nested Loop) |
+----------+------+----------+----------+----------------------------------------------------+
Now add format=tree and look again (quite a spectacle...):
*************************** 1. row ***************************
EXPLAIN: -> Aggregate: count(0)
    -> Inner hash join (lineitem.L_SUPPKEY = supplier.S_SUPPKEY), (lineitem.L_ORDERKEY = orders.O_ORDERKEY)  (cost=40107736685515472896.00 rows=4010763818487343104)
        -> Table scan on lineitem  (cost=0.07 rows=59386314)
        -> Hash
            -> Inner hash join (orders.O_CUSTKEY = customer.C_CUSTKEY)  (cost=60799566599072.12 rows=6753683238538)
                -> Filter: ((orders.O_ORDERDATE >= DATE'1993-01-01') and (orders.O_ORDERDATE < <cache>((DATE'1993-01-01' + interval '1' year))))  (cost=0.16 rows=165883)
                    -> Table scan on orders  (cost=0.16 rows=14932433)
                -> Hash
                    -> Inner hash join (customer.C_NATIONKEY = nation.N_NATIONKEY)  (cost=3664985889.79 rows=3664956624)
                        -> Table scan on customer  (cost=0.79 rows=1485216)
                        -> Hash
                            -> Inner hash join (supplier.S_NATIONKEY = nation.N_NATIONKEY)  (cost=24976.50 rows=24676)
                                -> Table scan on supplier  (cost=513.52 rows=98705)
                                -> Hash
                                    -> Inner hash join (nation.N_REGIONKEY = region.R_REGIONKEY)  (cost=3.50 rows=3)
                                        -> Table scan on nation  (cost=0.50 rows=25)
                                        -> Hash
                                            -> Filter: (region.R_NAME = 'AMERICA')  (cost=0.75 rows=1)
                                                -> Table scan on region  (cost=0.75 rows=5)
It does indeed look like the smallest tables are placed first and the largest one last.
Before kicking off the run, let's first look at the manual's description of hash join. One passage reads:
Memory usage by hash joins can be controlled using the join_buffer_size system variable; a hash join cannot use more memory than this amount. When the memory required for a hash join exceeds the amount available, MySQL handles this by using files on disk. If this happens, you should be aware that the join may not succeed if a hash join cannot fit into memory and it creates more files than set for open_files_limit. To avoid such problems, make either of the following changes:
- Increase join_buffer_size so that the hash join does not spill over to disk.
- Increase open_files_limit.
In short, when join_buffer_size is not big enough, the hash join spills to disk during execution (the hash table is partitioned into many small files on disk, which are then read back into memory one by one to complete the join), so the manual recommends raising join_buffer_size, or raising the open_files_limit cap.
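For instance, bumping the buffer for the current session only (the 1GB figure mirrors what is used below; open_files_limit, by contrast, is not a dynamic variable and has to be raised in my.cnf before server startup):

-- Raise the per-join buffer for this session only (sketch; size is illustrative).
SET SESSION join_buffer_size = 1024 * 1024 * 1024;  -- 1GB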
So, before the formal run, I raised join_buffer_size to 1GB, and while at it checked a few other parameter values:
[root@yejr.run]> select @@join_buffer_size, @@tmp_table_size, @@innodb_buffer_pool_size;
+--------------------+------------------+---------------------------+
| @@join_buffer_size | @@tmp_table_size | @@innodb_buffer_pool_size |
+--------------------+------------------+---------------------------+
|         1073741824 |         16777216 |               10737418240 |
+--------------------+------------------+---------------------------+
And to be extra safe, I also set join_buffer_size on the statement itself with a SET_VAR hint (a new feature in 8.0). Off we go.
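The hinted statement is not shown in the original; a sketch of what it would look like (join_buffer_size is among the system variables SET_VAR can set, and the hint applies to this one statement only):

select /*+ SET_VAR(join_buffer_size = 1073741824) */ count(*)
from customer, orders, lineitem, supplier, nation, region
where c_custkey = o_custkey
  and l_orderkey = o_orderkey
  and l_suppkey = s_suppkey
  and c_nationkey = s_nationkey
  and s_nationkey = n_nationkey
  and n_regionkey = r_regionkey
  and r_name = 'AMERICA'
  and o_orderdate >= date '1993-01-01'
  and o_orderdate < date '1993-01-01' + interval '1' year;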