The Impact of the PostgreSQL Compile Option -g (--enable-debug)

Background

--enable-debug is one of PostgreSQL's configure options. Its purpose is to pass a flag to the compiler at build time that tells it to emit symbols used for debugging.

But does it affect performance?

With the gcc compiler, debug mode and optimization can be enabled at the same time, so the performance impact is tiny, though obviously not zero (is -O2 combined with -g any less effective than -O2 on its own?).

With other compilers, debug mode cannot be enabled together with optimization, so the performance impact is much larger. Use it with caution.

--enable-debug
Compiles all programs and libraries with debugging symbols. 

This means that you can run the programs in a debugger to analyze problems. 

This enlarges the size of the installed executables considerably, and on non-GCC compilers it usually also disables compiler optimization, causing slowdowns. 

However, having the symbols available is extremely helpful for dealing with any problems that might arise. 

Currently, this option is recommended for production installations only if you use GCC. 

But you should always have it on if you are doing development work or running a beta version.
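If you are unsure which flags an existing PostgreSQL installation was built with, pg_config reports both the configure arguments and the resulting compiler flags. A quick check (the installation path below is just an example):

# pg_config ships with every PostgreSQL installation; point it at the build you want to inspect
/home/digoal/pgsql_debug/bin/pg_config --configure
/home/digoal/pgsql_debug/bin/pg_config --cflags
# For a gcc build configured with --enable-debug, CFLAGS should contain both -O2 and -g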

In PostgreSQL's configure script you can see that --enable-debug simply adds the -g option:

# supply -g if --enable-debug
if test "$enable_debug" = yes && test "$ac_cv_prog_cc_g" = yes; then
  CFLAGS="$CFLAGS -g"
fi
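
A sketch of an alternative (not from the original article): configure also honors a user-supplied CFLAGS, so on gcc roughly the same effect can be obtained by passing the flags explicitly:

# Roughly equivalent to --enable-debug when building with gcc:
# keep the default -O2 optimization and add -g explicitly
./configure --prefix=/home/digoal/pgsql_debug CFLAGS="-O2 -g"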

What -g means, according to the gcc manual:
man gcc

   Options for Debugging Your Program or GCC
       GCC has various special options that are used for debugging either your program or GCC:

       -g  Produce debugging information in the operating system's native format 
            (stabs, COFF, XCOFF, or DWARF 2).  GDB can work with this debugging information.

           On most systems that use stabs format, -g enables use of extra debugging information that only GDB can use; 
           this extra information makes debugging work better in GDB but probably makes other debuggers crash or refuse to read the program.  
           If you want to control for certain whether to generate the extra information, use -gstabs+, -gstabs, -gxcoff+, -gxcoff, or -gvms (see below).  

           GCC allows you to use -g with -O.  
           The shortcuts taken by optimized code may occasionally produce surprising results: 
           some variables you declared may not exist at all; 
           flow of control may briefly move where you did not expect it; 
           some statements may not be executed because they compute constant results or their values are already at hand; 
            some statements may execute in different places because they have been moved out of loops.

           Nevertheless it proves possible to debug optimized output. 
           This makes it reasonable to use the optimizer for programs that might have bugs.

           The following options are useful when GCC is generated with the capability for more than one debugging format.
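
As a small standalone illustration of the point above (demo.c is a hypothetical source file): -g can be combined with -O2, and on gcc it should not change the generated code, only the on-disk size of the binary:

# Build the same source with and without -g, keeping -O2 in both cases
gcc -O2    -o demo_nodebug demo.c
gcc -O2 -g -o demo_debug   demo.c

# The text (code) segment should be identical; the -g build is only larger on disk
size  demo_nodebug demo_debug
ls -l demo_nodebug demo_debug

# The -g build can be stepped through in gdb
gdb -q ./demo_debug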

Benchmark: comparing performance with and without debug mode

CentOS 7.x x64
> uname -a
Linux iZ28tqoemgtZ 3.10.0-123.9.3.el7.x86_64 #1 SMP Thu Nov 6 15:06:03 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

> gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-initfini-array --disable-libgcj --with-isl=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/isl-install --with-cloog=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/cloog-install --enable-gnu-indirect-function --with-tune=generic --with-arch_32=x86-64 --build=x86_64-redhat-linux
Thread model: posix
gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) 

32 cores, 64 GB RAM
> lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    1
Core(s) per socket:    32
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
Stepping:              2
CPU MHz:               2494.224
BogoMIPS:              4988.44
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              30720K
NUMA node0 CPU(s):     0-31

Build two PostgreSQL installations: one configured with --enable-debug and one without.

./configure --prefix=/home/digoal/pgsql_debug --enable-debug
make world -j 32 && make install-world -j 32

./configure --prefix=/home/digoal/pgsql_nodebug 
make world -j 32 && make install-world -j 32
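
The two builds should behave identically except for the debug information; a quick way to see the difference (using the install prefixes above) is to compare the server binaries:

# The -g binary is considerably larger on disk ...
ls -lh /home/digoal/pgsql_debug/bin/postgres /home/digoal/pgsql_nodebug/bin/postgres

# ... but the code (text) segment is the same size; the extra bytes live in .debug_* sections
size /home/digoal/pgsql_debug/bin/postgres /home/digoal/pgsql_nodebug/bin/postgres
objdump -h /home/digoal/pgsql_debug/bin/postgres | grep debug

# "not stripped" (and, with newer versions of file, "with debug_info") confirms the symbols are present
file /home/digoal/pgsql_debug/bin/postgres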

Database configuration

listen_addresses = '0.0.0.0'            # what IP address(es) to listen on;
port = 1921                             # (change requires restart)
max_connections = 100                   # (change requires restart)
superuser_reserved_connections = 3      # (change requires restart)
unix_socket_directories = '.'   # comma-separated list of directories
tcp_keepalives_idle = 10                # TCP_KEEPIDLE, in seconds;
tcp_keepalives_interval = 10            # TCP_KEEPINTVL, in seconds;
tcp_keepalives_count = 60               # TCP_KEEPCNT;
shared_buffers = 8GB                    # min 128kB
dynamic_shared_memory_type = posix      # the default is the first option
shared_preload_libraries = ''           # (change requires restart)
wal_level = hot_standby                 # minimal, archive, hot_standby, or logical
fsync = on                              # turns forced synchronization on or off
synchronous_commit = off                # synchronization level;
full_page_writes = off                  # recover from partial page writes
wal_buffers = 16MB                      # min 32kB, -1 sets based on shared_buffers
wal_writer_delay = 10ms         # 1-10000 milliseconds
max_wal_senders = 10            # max number of walsender processes
wal_keep_segments = 100         # in logfile segments, 16MB each; 0 disables
hot_standby = on                        # "on" allows queries during recovery
wal_receiver_status_interval = 1s       # send replies at least this often
hot_standby_feedback = on               # send info from standby to prevent
wal_retrieve_retry_interval = 1s        # time to wait before retrying to
random_page_cost = 1.5                  # same scale as above
log_destination = 'csvlog'              # Valid values are combinations of
logging_collector = on          # Enable capturing of stderr and csvlog
log_truncate_on_rotation = on           # If on, an existing log file with the
log_timezone = 'PRC'
autovacuum = on                 # Enable autovacuum subprocess?  'on'
autovacuum_vacuum_cost_delay = 0ms      # default vacuum cost delay for
autovacuum_vacuum_cost_limit = 0        # default vacuum cost limit for
datestyle = 'iso, mdy'
timezone = 'PRC'
lc_messages = 'C'                       # locale for system error message
lc_monetary = 'C'                       # locale for monetary formatting
lc_numeric = 'C'                        # locale for number formatting
lc_time = 'C'                           # locale for time formatting
default_text_search_config = 'pg_catalog.english'
max_locks_per_transaction = 10000               # min 10
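
To reproduce the setup, each build gets its own data directory with the configuration above; a minimal sketch (the data directory paths are assumptions, and only one instance is started at a time since both listen on port 1921):

# no-debug instance
/home/digoal/pgsql_nodebug/bin/initdb -D /home/digoal/pgdata_nodebug -E UTF8 --locale=C
cp postgresql.conf /home/digoal/pgdata_nodebug/postgresql.conf
/home/digoal/pgsql_nodebug/bin/pg_ctl start -D /home/digoal/pgdata_nodebug

# debug instance (started later, after stopping the first one)
/home/digoal/pgsql_debug/bin/initdb -D /home/digoal/pgdata_debug -E UTF8 --locale=C
cp postgresql.conf /home/digoal/pgdata_debug/postgresql.conf
/home/digoal/pgsql_debug/bin/pg_ctl start -D /home/digoal/pgdata_debug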

Test table

postgres=# create table test01(id int primary key,info text,crt_time timestamp);
CREATE TABLE
postgres=# insert into test01 select generate_series(1,1000000),md5(random()::text),clock_timestamp();
INSERT 0 1000000
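
Before benchmarking, it does not hurt to confirm the row count and the size of the table being hit (standard catalog functions, not part of the original test):

postgres=# select count(*) from test01;
postgres=# select pg_size_pretty(pg_total_relation_size('test01'));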

Test method

vi test.sql

\setrandom id 1 1000000
select * from test01 where id=:id;

pgbench -M prepared -n -r -P 1 -f ./test.sql -c 48 -j 48 -T 100
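
For reference, the pgbench flags used here mean the following (same command, annotated):

# -M prepared   : use prepared statements (lowest per-query overhead)
# -n            : skip vacuuming the standard pgbench tables (a custom script is used)
# -r            : report per-statement latencies at the end
# -P 1          : print a progress line every second
# -f ./test.sql : run the custom script above
# -c 48 -j 48   : 48 client connections driven by 48 worker threads
# -T 100        : run for 100 seconds
pgbench -M prepared -n -r -P 1 -f ./test.sql -c 48 -j 48 -T 100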

Test results

Results without --enable-debug

Start the database with /home/digoal/pgsql_nodebug/bin/postgres
Run the benchmark with /home/digoal/pgsql_nodebug/bin/pgbench

nohup pgbench -M prepared -n -r -P 1 -f ./test.sql -c 48 -j 48 -T 100 > ./nodebug.log 2>&1 &

progress: 1.0 s, 272818.1 tps, lat 0.171 ms stddev 0.177
progress: 2.0 s, 279225.3 tps, lat 0.170 ms stddev 0.251
progress: 3.0 s, 290603.4 tps, lat 0.163 ms stddev 0.122
progress: 4.0 s, 294643.2 tps, lat 0.161 ms stddev 0.149
progress: 5.0 s, 290804.4 tps, lat 0.163 ms stddev 0.140
progress: 6.0 s, 294559.7 tps, lat 0.161 ms stddev 0.219
......
transaction type: Custom query
scaling factor: 1
query mode: prepared
number of clients: 48
number of threads: 48
duration: 100 s
number of transactions actually processed: 29298968
latency average: 0.162 ms
latency stddev: 0.247 ms
tps = 292968.715159 (including connections establishing)
tps = 292996.921785 (excluding connections establishing)
statement latencies in milliseconds:
        0.003216        \setrandom id 1 1000000
        0.157631        select * from test01 where id=:id;

Results with --enable-debug

Start the database with /home/digoal/pgsql_debug/bin/postgres
Run the benchmark with /home/digoal/pgsql_debug/bin/pgbench

nohup pgbench -M prepared -n -r -P 1 -f ./test.sql -c 48 -j 48 -T 100 > ./debug.log 2>&1 &

progress: 1.0 s, 262350.8 tps, lat 0.177 ms stddev 0.129
progress: 2.0 s, 281712.9 tps, lat 0.168 ms stddev 0.267
progress: 3.0 s, 289221.4 tps, lat 0.164 ms stddev 0.160
progress: 4.0 s, 285303.5 tps, lat 0.166 ms stddev 0.225
progress: 5.0 s, 284841.1 tps, lat 0.166 ms stddev 0.285
progress: 6.0 s, 288347.7 tps, lat 0.165 ms stddev 0.279
progress: 7.0 s, 288152.3 tps, lat 0.165 ms stddev 0.208
......
transaction type: Custom query
scaling factor: 1
query mode: prepared
number of clients: 48
number of threads: 48
duration: 100 s
number of transactions actually processed: 28685127
latency average: 0.165 ms
latency stddev: 0.337 ms
tps = 286832.327593 (including connections establishing)
tps = 286860.764434 (excluding connections establishing)
statement latencies in milliseconds:
        0.003302        \setrandom id 1 1000000
        0.161113        select * from test01 where id=:id;

The QPS difference is about 2.14%:

postgres=# select (292996.921785 -286860.764434)/286860.764434 ;
        ?column?        
------------------------
 0.02139071672317106756
(1 row)

The impact is small, and if the bottleneck is not the CPU, the difference may be even smaller.

Have fun, everyone. You are welcome to drop by Alibaba Cloud any time for a good long chat about your business needs.

Keep it up, fellow Alibaba Cloud folks, and keep working toward the most down-to-earth cloud database.
