[20161002] Slow impdp import.txt


--If the table and its indexes already exist before the import, an impdp import with TABLE_EXISTS_ACTION=append has to maintain the indexes,
--which generates a large amount of redo and is much slower than importing when the table does not exist. The following examples illustrate this.

1. Environment:
SCOTT@test01p> @ ver1
PORT_STRING                    VERSION        BANNER                                                                               CON_ID
------------------------------ -------------- -------------------------------------------------------------------------------- ----------
IBMPC/WIN_NT64-9.1.0           12.1.0.1.0     Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production              0

create table t (id number ,name varchar2(20),pad varchar2(100));
insert into t select rownum,rownum||'test',lpad('x',100,'x') from dual connect by level <=1e5;
create unique index pk_t on t(id);
create index i_t_name on t(name);
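
--//As an optional sanity check (not part of the original test), the size of the freshly loaded table and its two indexes can be read
--//from user_segments; this should roughly match the 14 MB BLOCKS estimate reported by expdp below:
select segment_name, segment_type, round(bytes/1024/1024, 1) mb
  from user_segments
 where segment_name in ('T', 'PK_T', 'I_T_NAME');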

SYS@test> alter system set log_checkpoints_to_alert=true;
System altered.
--//Set log_checkpoints_to_alert=true so that checkpoints are recorded in the alert log; this gives a simple way to measure how much redo is generated.
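
--//A complementary way to measure the redo (not used in this note) is to snapshot the system-wide 'redo size' statistic before and after
--//the import on an otherwise quiet instance; the delta is the redo generated in bytes. Since the Data Pump load runs in the DW00 worker
--//process rather than your own session, v$sysstat is easier to use here than v$mystat:
select name, value from v$sysstat where name = 'redo size';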

2. Export:
d:\blog>expdp scott/btbtms@test01p directory=TMP_EXPDP  dumpfile=t.dp logfile=t.log tables=(t)
Export: Release 12.1.0.1.0 - Production on Sun Oct 2 22:45:19 2016
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/a**@test01p directory=TMP_EXPDP dumpfile=t.dp logfile=t.log tables=(t)
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 14 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "SCOTT"."T"                                 11.42 MB  100000 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  D:\TMP\EXPDP\T.DP
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at Sun Oct 2 22:45:56 2016 elapsed 0 00:00:36


3. Import:
SCOTT@test01p> truncate table t drop storage;
Table truncated.
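
--//The truncate empties the table but leaves both indexes in place, which is exactly why the append import below has to maintain them.
--//A quick check (not shown in the original run):
select index_name, status from user_indexes where table_name = 'T';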

alter system archive log current;
alter system checkpoint;
impdp scott/btbtms@test01p directory=TMP_EXPDP dumpfile=t.dp logfile=t1.log tables=(t) TABLE_EXISTS_ACTION=append
alter system checkpoint;

--The output at the time:
d:\blog>impdp scott/btbtms@test01p directory=TMP_EXPDP dumpfile=t.dp logfile=t1.log tables=(t) TABLE_EXISTS_ACTION=append
Import: Release 12.1.0.1.0 - Production on Sun Oct 2 22:33:29 2016
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SCOTT"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_TABLE_01":  scott/a*@test01p directory=TMP_EXPDP dumpfile=t.dp logfile=t1.log tables=(t) TABLE_EXISTS_ACTION=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "SCOTT"."T" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."T"                                 11.42 MB  100000 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SCOTT"."SYS_IMPORT_TABLE_01" successfully completed at Sun Oct 2 22:33:34 2016 elapsed 0 00:00:04

--Check how much redo was generated (from the alert log):
ALTER SYSTEM ARCHIVE LOG
Sun Oct 02 22:33:14 2016
Beginning log switch checkpoint up to RBA [0x565.2.10], SCN: 24349292
Sun Oct 02 22:33:14 2016
Thread 1 advanced to log sequence 1381 (LGWR switch)
  Current log# 2 seq# 1381 mem# 0: D:\APP\ORACLE\ORADATA\TEST\REDO02.LOG
Sun Oct 02 22:33:15 2016
Archived Log entry 7 added for thread 1 sequence 1380 ID 0x7e537b63 dest 1:
Sun Oct 02 22:33:26 2016
Beginning global checkpoint up to RBA [0x565.9.10], SCN: 24349299
Completed checkpoint up to RBA [0x565.9.10], SCN: 24349299
Completed checkpoint up to RBA [0x565.2.10], SCN: 24349292
Sun Oct 02 22:33:30 2016
DM00 started with pid=66, OS id=7620, job SCOTT.SYS_IMPORT_TABLE_01
Sun Oct 02 22:33:31 2016

DW00 started with pid=67, OS id=7576, wid=1, job SCOTT.SYS_IMPORT_TABLE_01
Sun Oct 02 22:33:43 2016
Beginning global checkpoint up to RBA [0x565.f34a.10], SCN: 24350612
Completed checkpoint up to RBA [0x565.f34a.10], SCN: 24350612

--0xf34a = 62282; 62282 - 2 = 62280 redo blocks (the middle field of the RBA is the 512-byte block number within log sequence 0x565, which starts writing at block 2), i.e. roughly 62280*512 ≈ 30.4 MB of redo with the table and indexes in place.
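
--//The same block arithmetic can be done in SQL; the hex format mask and the 512-byte redo block size on this platform are the only assumptions:
select (to_number('f34a', 'xxxxx') - 2) * 512 / 1024 / 1024 as redo_mb from dual;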

4. Test the case where the table does not exist:
drop table t purge;

alter system archive log current;
alter system checkpoint;
impdp scott/btbtms@test01p directory=TMP_EXPDP dumpfile=t.dp logfile=t1.log tables=(t) TABLE_EXISTS_ACTION=append
alter system checkpoint;

--Check how much redo was generated:
Sun Oct 02 22:35:55 2016
ALTER SYSTEM ARCHIVE LOG
Sun Oct 02 22:35:55 2016
Beginning log switch checkpoint up to RBA [0x566.2.10], SCN: 24350915
Sun Oct 02 22:35:55 2016
Thread 1 advanced to log sequence 1382 (LGWR switch)
  Current log# 1 seq# 1382 mem# 0: D:\APP\ORACLE\ORADATA\TEST\REDO01.LOG
Sun Oct 02 22:35:56 2016
Archived Log entry 8 added for thread 1 sequence 1381 ID 0x7e537b63 dest 1:
Sun Oct 02 22:36:02 2016
Beginning global checkpoint up to RBA [0x566.2.10], SCN: 24350918
Completed checkpoint up to RBA [0x566.2.10], SCN: 24350918
Completed checkpoint up to RBA [0x566.2.10], SCN: 24350915
Sun Oct 02 22:36:07 2016
DM00 started with pid=66, OS id=944, job SCOTT.SYS_IMPORT_TABLE_01
Sun Oct 02 22:36:08 2016

DW00 started with pid=67, OS id=7612, wid=1, job SCOTT.SYS_IMPORT_TABLE_01
Sun Oct 02 22:36:41 2016
Beginning global checkpoint up to RBA [0x566.a38e.10], SCN: 24352610
Completed checkpoint up to RBA [0x566.a38e.10], SCN: 24352610

-- 0xa38e = 41870; 41870 - 2 = 41868 redo blocks, noticeably fewer than before.
-- 62280 - 41868 = 20412 blocks
-- 20412*512/1024/1024 = 9.966796875, so the pre-existing table and indexes cost roughly 10 MB of extra redo.

--The table here is not large, only about 14 MB, yet the extra redo already amounts to about 10 MB. With many indexes the problem would be far worse.
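
--//The 512 bytes per redo block used in the calculation above is the redo log block size on this platform; it can be confirmed
--//(a check not done in the original run) from the archived log view:
select distinct block_size from v$archived_log;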

5. Test the case where the table exists but the indexes do not:
SCOTT@test01p> truncate table t drop storage;
Table truncated.

SCOTT@test01p> drop index I_T_NAME;
Index dropped.

SCOTT@test01p> drop index pk_t;
Index dropped.

alter system archive log current;
alter system checkpoint;
impdp scott/btbtms@test01p directory=TMP_EXPDP dumpfile=t.dp logfile=t1.log tables=(t) TABLE_EXISTS_ACTION=append
alter system checkpoint;

--Check how much redo was generated:
ALTER SYSTEM ARCHIVE LOG
Sun Oct 02 22:49:44 2016
Beginning log switch checkpoint up to RBA [0x568.2.10], SCN: 24356138
Sun Oct 02 22:49:44 2016
Thread 1 advanced to log sequence 1384 (LGWR switch)
  Current log# 2 seq# 1384 mem# 0: D:\APP\ORACLE\ORADATA\TEST\REDO02.LOG
Sun Oct 02 22:49:44 2016
Archived Log entry 10 added for thread 1 sequence 1383 ID 0x7e537b63 dest 1:
Sun Oct 02 22:49:49 2016
Beginning global checkpoint up to RBA [0x568.8.10], SCN: 24356142
Completed checkpoint up to RBA [0x568.8.10], SCN: 24356142
Completed checkpoint up to RBA [0x568.2.10], SCN: 24356138
Sun Oct 02 22:49:56 2016
DM00 started with pid=68, OS id=3856, job SCOTT.SYS_IMPORT_TABLE_01
Sun Oct 02 22:49:57 2016

DW00 started with pid=69, OS id=7772, wid=1, job SCOTT.SYS_IMPORT_TABLE_01
Sun Oct 02 22:50:01 2016
Incremental checkpoint up to RBA [0x568.15.0], current log tail at RBA [0x568.7f45.0]
Beginning global checkpoint up to RBA [0x568.7f71.10], SCN: 24356938
Completed checkpoint up to RBA [0x568.7f71.10], SCN: 24356938

-- 0x7f71 = 32625 redo blocks, the smallest amount of the three cases. Strangely, the indexes are not recreated in this case, something to watch out for in future work!!

SCOTT@test01p> select * from user_indexes where table_name='T';
no rows selected

--//With TABLE_EXISTS_ACTION=replace, the table is replaced and the indexes are created; the redo generated should be about the same as when the table does not exist.
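
--//If the index DDL is needed after an append import like the one above, it can be pulled out of the dump file with a SQLFILE run,
--//which writes the CREATE INDEX statements to a script instead of executing them (t_idx.sql is just an illustrative file name):
impdp scott/btbtms@test01p directory=TMP_EXPDP dumpfile=t.dp sqlfile=t_idx.sql include=INDEX
--//the resulting t_idx.sql can then be run manually to recreate the indexes after the load.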

6. Summary:

--Be careful with this kind of export/import work, especially on large tables: the target table may carry many indexes, and maintaining them during the load makes the import much slower.
--In short, when importing into pre-created empty tables, watch out for large tables that carry many indexes.