MySQL: abnormal server restarts during a mysqldump export, plus DROP stack traces

1. Symptom

While running mysqldump, the server threw an error and restarted as soon as one particular table in the database was accessed. The stack trace in the error log was:

stack_bottom = 7f656f792e28 thread_stack 0x30000
/dbdata/mysql5600/bin/mysqld(my_print_stacktrace+0x35)[0x905a25]
/dbdata/mysql5600/bin/mysqld(handle_fatal_signal+0x43b)[0x65c50b]
/dbdata/mysql5600/bin/mysqld(_Z29page_find_rec_max_not_deletedPKh+0xa0)[0x9b5570]
/dbdata/mysql5600/bin/mysqld[0x9fdca0]
/dbdata/mysql5600/bin/mysqld[0x967202]
/dbdata/mysql5600/bin/mysqld[0x9682f9]
/dbdata/mysql5600/bin/mysqld(_ZN7handler7ha_openEP5TABLEPKcii+0x3e)[0x599f3e]
/dbdata/mysql5600/bin/mysqld(_Z21open_table_from_shareP3THDP11TABLE_SHAREPKcjjjP5TABLEb+0x68c)[0x77dedc]
/dbdata/mysql5600/bin/mysqld(_Z10open_tableP3THDP10TABLE_LISTP18Open_table_context+0xcaf)[0x699fff]
/dbdata/mysql5600/bin/mysqld(_Z11open_tablesP3THDPP10TABLE_LISTPjjP19Prelocking_strategy+0xf50)[0x69bbf0]
/dbdata/mysql5600/bin/mysqld(_Z30open_normal_and_derived_tablesP3THDP10TABLE_LISTj+0x48)[0x69bd48]
/dbdata/mysql5600/bin/mysqld[0x71dc3f]
/dbdata/mysql5600/bin/mysqld(_Z14get_all_tablesP3THDP10TABLE_LISTP4Item+0x738)[0x72d388]
/dbdata/mysql5600/bin/mysqld(_Z24get_schema_tables_resultP4JOIN23enum_schema_table_state+0x2e1)[0x718c71]
/dbdata/mysql5600/bin/mysqld(_ZN4JOIN14prepare_resultEPP4ListI4ItemE+0x9d)[0x70c94d]
/dbdata/mysql5600/bin/mysqld(_ZN4JOIN4execEv+0xdc)[0x6c5abc]
/dbdata/mysql5600/bin/mysqld(_Z12mysql_selectP3THDP10TABLE_LISTjR4ListI4ItemEPS4_P10SQL_I_ListI8st_orderESB_S7_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x218)[0x70e3c8]
/dbdata/mysql5600/bin/mysqld(_Z13handle_selectP3THDP13select_resultm+0x17f)[0x70ecbf]
/dbdata/mysql5600/bin/mysqld[0x6e6b05]
/dbdata/mysql5600/bin/mysqld(_Z21mysql_execute_commandP3THD+0x26ce)[0x6eb6ce]
/dbdata/mysql5600/bin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x598)[0x6ee818]
/dbdata/mysql5600/bin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1766)[0x6f0026]
/dbdata/mysql5600/bin/mysqld(_Z24do_handle_one_connectionP3THD+0x115)[0x6b6e95]
/dbdata/mysql5600/bin/mysqld(handle_one_connection+0x42)[0x6b7012]
/dbdata/mysql5600/bin/mysqld(pfs_spawn_thread+0x127)[0x941627]
libc.so.6(clone+0x6d)[0x34582e8b5d]

2. Analysis

A quick look at the trace shows that the failing function is page_find_rec_max_not_deleted, and that it is reached while the table is being opened. Why would opening a table need to touch an actual data page? I set a breakpoint on page_find_rec_max_not_deleted in a debug build; the backtrace there is:

#0  page_find_rec_max_not_deleted (page=0x7fffb055c000 "\213\032\063", <incomplete sequence \316>)
    at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/page/page0page.cc:2762
#1  0x0000000001b505be in row_search_get_max_rec (index=0x7fff249e8cf0, mtr=0x7ffff02d77b0)
    at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/row/row0sel.cc:6335
#2  0x0000000001b506f5 in row_search_max_autoinc (index=0x7fff249e8cf0, col_name=0x7fff249e2551 "id", value=0x7ffff02d7d10)
    at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/row/row0sel.cc:6373
#3  0x00000000019ac668 in ha_innobase::innobase_initialize_autoinc (this=0x7fff249e28a0)
    at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:6118
#4  0x00000000019ad722 in ha_innobase::open (this=0x7fff249e28a0, name=0x7fff249eac00 "./test/mch_pay_rate", mode=2, test_if_locked=2)
    at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:6533
#5  0x0000000000f664be in handler::ha_open (this=0x7fff249e28a0, table_arg=0x7fff249e43a0, name=0x7fff249eac00 "./test/mch_pay_rate", mode=2, test_if_locked=2)
    at /root/mysql5.7.14/percona-server-5.7.14-7/sql/handler.cc:2904
#6  0x00000000016a1542 in open_table_from_share (thd=0x7fff24000b70, share=0x7fff249ea820, alias=0x7fff24005a38 "mch_pay_rate", db_stat=39, prgflag=8, 
    ha_open_flags=0, outparam=0x7fff249e43a0, is_create_table=false) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/table.cc:3335
#7  0x00000000015186cc in open_table (thd=0x7fff24000b70, table_list=0x7ffff02d9600, ot_ctx=0x7ffff02d8d40)
    at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_base.cc:3560
#8  0x000000000151b24f in open_and_process_table (thd=0x7fff24000b70, lex=0x7fff24003150, tables=0x7ffff02d9600, counter=0x7fff24003210, flags=1024, 
    prelocking_strategy=0x7ffff02d8e70, has_prelocking_list=false, ot_ctx=0x7ffff02d8d40) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_base.cc:5171
#9  0x000000000151c3ab in open_tables (thd=0x7fff24000b70, start=0x7ffff02d8e30, counter=0x7fff24003210, flags=1024, prelocking_strategy=0x7ffff02d8e70)
    at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_base.cc:5789
#10 0x000000000151d7e5 in open_tables_for_query (thd=0x7fff24000b70, tables=0x7ffff02d9600, flags=1024)
    at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_base.cc:6564
#11 0x000000000160cda1 in mysqld_list_fields (thd=0x7fff24000b70, table_list=0x7ffff02d9600, wild=0x7fff24006388 "")
    at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_show.cc:1092
#12 0x00000000015a248f in dispatch_command (thd=0x7fff24000b70, com_data=0x7ffff02d9d70, command=COM_FIELD_LIST)
    at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1622
#13 0x00000000015a09c6 in do_command (thd=0x7fff24000b70) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1010
#14 0x00000000016e29d0 in handle_connection (arg=0x387b9e0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/conn_handler/connection_handler_per_thread.cc:312
#15 0x0000000001d7b4b0 in pfs_spawn_thread (arg=0x380b6e0) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/perfschema/pfs.cc:2188
#16 0x0000003f74807aa1 in start_thread () from /lib64/libpthread.so.0
#17 0x0000003f740e8bcd in clone () from /lib64/libc.so.6

The cause is now obvious: when a table with an auto-increment column is opened, InnoDB reads the last page of the index that holds the auto-increment column in order to initialize the auto-increment counter (ha_innobase::innobase_initialize_autoinc -> row_search_max_autoinc). Because that page in the data file was corrupted, simply opening the table was enough to crash and restart the server.
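
To make the call chain above concrete, here is a minimal, self-contained C++ sketch of what the open path conceptually does: walk the rightmost leaf page backwards and seed the counter with the largest value that is not delete-marked. The struct and function names are invented for illustration only; the real work is done by row_search_max_autoinc() and page_find_rec_max_not_deleted() against a real index page.

#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical, simplified model of one leaf page: each record carries the
// auto-increment column value plus a delete-mark flag, which is what
// page_find_rec_max_not_deleted() inspects on the real page.
struct Rec {
    uint64_t id;
    bool     deleted;
};

// Conceptual stand-in for row_search_max_autoinc(): scan the last page from
// the end and return the largest non-delete-marked value. In the crash above
// this scan hit a corrupted page and aborted the server.
uint64_t max_autoinc_from_last_page(const std::vector<Rec>& last_page) {
    for (auto it = last_page.rbegin(); it != last_page.rend(); ++it) {
        if (!it->deleted) {
            return it->id;
        }
    }
    return 0;  // empty page: start from the column's initial value
}

int main() {
    std::vector<Rec> last_page = {{100, false}, {101, false}, {102, true}};
    // The next auto-increment value handed out would be max + 1.
    std::cout << "next autoinc = "
              << max_autoinc_from_last_page(last_page) + 1  // prints 102
              << std::endl;
    return 0;
}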

3. Fix

Fortunately the table was no longer needed, so a simple DROP TABLE was enough, and the next mysqldump run completed normally. MySQL 8.0 is said to have improved auto-increment initialization: the counter is persisted instead of being recomputed from the data file when the table is opened, so this particular problem probably no longer exists in 8.0.

4. Appendix: DROP TABLE and DROP DATABASE stack traces

Sometimes DROP TABLE can still go through even when the .ibd file no longer exists. Below are the two stack traces.

DROP DATABASE:

#0  open_table (thd=0x7fff20000b70, table_list=0x7ffff02d5ec0, ot_ctx=0x7ffff02d5da0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_base.cc:3612
#1  0x000000000151d21c in open_ltable (thd=0x7fff20000b70, table_list=0x7ffff02d5ec0, lock_type=TL_WRITE, lock_flags=2048)
    at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_base.cc:6400
#2  0x000000000152692f in open_system_table_for_update (thd=0x7fff20000b70, one_table=0x7ffff02d5ec0)
    at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_base.cc:10710
#3  0x00000000014d90e9 in open_proc_table_for_update (thd=0x7fff20000b70) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sp.cc:485
#4  0x00000000014dc840 in sp_drop_db_routines (thd=0x7fff20000b70, db=0x7fff20006388 "employees") at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sp.cc:1634
#5  0x0000000001551203 in mysql_rm_db (thd=0x7fff20000b70, db=..., if_exists=false, silent=false) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_db.cc:891
#6  0x00000000015a86e0 in mysql_execute_command (thd=0x7fff20000b70, first_level=true) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:4007
#7  0x00000000015adcd6 in mysql_parse (thd=0x7fff20000b70, parser_state=0x7ffff02d9600) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:5836
#8  0x00000000015a1b95 in dispatch_command (thd=0x7fff20000b70, com_data=0x7ffff02d9d70, command=COM_QUERY)
    at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1447
#9  0x00000000015a09c6 in do_command (thd=0x7fff20000b70) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1010
#10 0x00000000016e29d0 in handle_connection (arg=0x387d890) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/conn_handler/connection_handler_per_thread.cc:312
#11 0x0000000001d7b4b0 in pfs_spawn_thread (arg=0x3866c20) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/perfschema/pfs.cc:2188
#12 0x0000003f74807aa1 in start_thread () from /lib64/libpthread.so.0
#13 0x0000003f740e8bcd in clone () from /lib64/libc.so.6


DROP TABLE:

(gdb) bt
#0  os_file_handle_error_no_exit (name=0x7fff24028dd8 "./test/t1.ibd", operation=0x22e1538 "delete", on_error_silent=false)
    at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/os/os0file.cc:5946
#1  0x0000000001a7b154 in os_file_delete_func (name=0x7fff24028dd8 "./test/t1.ibd") at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/os/os0file.cc:3849
#2  0x0000000001cd6d8d in pfs_os_file_delete_func (key=46, name=0x7fff24028dd8 "./test/t1.ibd", 
    src_file=0x2364a38 "/root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/fil/fil0fil.cc", src_line=2896)
    at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/include/os0file.ic:470
#3  0x0000000001cdf3ae in fil_delete_tablespace (id=617, buf_remove=BUF_REMOVE_FLUSH_NO_WRITE)
    at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/fil/fil0fil.cc:2896
#4  0x0000000001b1686a in row_drop_single_table_tablespace (space_id=617, tablename=0x7fff24010e00 "test/t1", filepath=0x7fff2403b258 "./test/t1.ibd", is_temp=false, 
    is_encrypted=false, trx=0x7ffff2f2e5d0) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/row/row0mysql.cc:4217
#5  0x0000000001b17be2 in row_drop_table_for_mysql (name=0x7ffff02d63d0 "test/t1", trx=0x7ffff2f2e5d0, drop_db=false, nonatomic=true, handler=0x0)
    at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/row/row0mysql.cc:4751
#6  0x00000000019b9e1a in ha_innobase::delete_table (this=0x7fff24006b30, name=0x7ffff02d7840 "./test/t1")
    at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:12955
#7  0x0000000000f6c2da in handler::ha_delete_table (this=0x7fff24006b30, name=0x7ffff02d7840 "./test/t1")
    at /root/mysql5.7.14/percona-server-5.7.14-7/sql/handler.cc:5071
#8  0x0000000000f65ba5 in ha_delete_table (thd=0x7fff24000b70, table_type=0x2e9edd0, path=0x7ffff02d7840 "./test/t1", db=0x7fff24006938 "test", 
    alias=0x7fff24006378 "t1", generate_warning=true) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/handler.cc:2722
#9  0x00000000016347d9 in mysql_rm_table_no_locks (thd=0x7fff24000b70, tables=0x7fff240063b0, if_exists=false, drop_temporary=false, drop_view=false, 
    dont_log_query=false) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_table.cc:2644
#10 0x0000000001633572 in mysql_rm_table (thd=0x7fff24000b70, tables=0x7fff240063b0, if_exists=0 '\000', drop_temporary=0 '\000')
    at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_table.cc:2207
#11 0x00000000015a78df in mysql_execute_command (thd=0x7fff24000b70, first_level=true) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:3742
#12 0x00000000015adcd6 in mysql_parse (thd=0x7fff24000b70, parser_state=0x7ffff02d9600) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:5836
#13 0x00000000015a1b95 in dispatch_command (thd=0x7fff24000b70, com_data=0x7ffff02d9d70, command=COM_QUERY)
    at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1447
#14 0x00000000015a09c6 in do_command (thd=0x7fff24000b70) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1010
#15 0x00000000016e29d0 in handle_connection (arg=0x3855a50) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/conn_handler/connection_handler_per_thread.cc:312
#16 0x0000000001d7b4b0 in pfs_spawn_thread (arg=0x37ef3b0) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/perfschema/pfs.cc:2188
#17 0x0000003f74807aa1 in start_thread () from /lib64/libpthread.so.0
#18 0x0000003f740e8bcd in clone () from /lib64/libc.so.6

row_drop_single_table_tablespace is the function where InnoDB deletes the physical file. When the file does not exist, the call path row_drop_single_table_tablespace -> fil_delete_tablespace reaches the following code:

if (!os_file_delete(innodb_data_file_key, path)
    && !os_file_delete_if_exists(
            innodb_data_file_key, path, NULL)) {

        /* Note: This is because we have removed the
        tablespace instance from the cache. */

        err = DB_IO_ERROR;
}
  • First condition: os_file_delete() returns false. It goes through os_file_handle_error_no_exit -> os_file_handle_error_cond_exit, which simply returns false. Inside os_file_handle_error_cond_exit the error value is:

(gdb) p err
$24 = 71

(71 should correspond to OS_FILE_NOT_FOUND in os0file.h), and an error message is logged.

  • Second condition: os_file_delete_if_exists() returns:

(gdb) p result
$26 = true

That is, os_file_delete_if_exists_func() returns true even when the file does not exist. The second condition therefore does not fire, err is never set to DB_IO_ERROR, and the DROP TABLE continues even though the .ibd file is missing.
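
As a sanity check on that condition, here is a small, self-contained C++ sketch that reproduces the boolean logic observed in gdb. The *_sim functions are made-up stubs standing in for the real os_file_delete / os_file_delete_if_exists calls; they simply return what the debugger showed for a missing .ibd file, so the combined condition never sets DB_IO_ERROR and the drop keeps going.

#include <iostream>

// Stub for os_file_delete(): for a missing file it logs a warning and
// reports failure (this is the err = 71 path seen in gdb).
bool os_file_delete_sim(const char* path) {
    std::cerr << "warning: cannot delete " << path << " (file not found)\n";
    return false;
}

// Stub for os_file_delete_if_exists(): a file that is already gone counts
// as successfully deleted, so it reports success ($26 = true in gdb).
bool os_file_delete_if_exists_sim(const char* path) {
    (void)path;
    return true;
}

int main() {
    const char* path = "./test/t1.ibd";  // the tablespace file that is missing
    bool io_error = false;

    // Same shape as the check in fil_delete_tablespace() quoted above:
    // DB_IO_ERROR is raised only when BOTH delete attempts fail.
    if (!os_file_delete_sim(path) && !os_file_delete_if_exists_sim(path)) {
        io_error = true;  // would be err = DB_IO_ERROR in InnoDB
    }

    std::cout << "DB_IO_ERROR raised: " << std::boolalpha << io_error
              << std::endl;  // prints false, so DROP TABLE continues
    return 0;
}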


本文探讨了数据库插入操作的基础知识、批量插入的优势与挑战,以及如何确定合适的插入数据量。通过面试对话的形式,详细解析了单条插入与批量插入的区别,磁盘I/O、内存使用、事务大小和锁策略等关键因素。最后,结合MyBatis框架,提供了实际应用中的批量插入策略和优化建议。希望读者不仅能掌握技术细节,还能理解背后的原理,从而更好地优化数据库性能。