[20170908] A simple exploration of the imp buffer parameter.txt

--//exp/imp are rarely used any more; they still come up if 8i is around. Someone hit a very slow import during an upgrade because the buffer parameter was not added (8i).
--//There are of course many other factors, e.g. LOB columns, but here is a simple exploration of the buffer parameter.

1.Environment:
SCOTT@book> @ &r/ver1

PORT_STRING                    VERSION        BANNER
------------------------------ -------------- --------------------------------------------------------------------------------
x86_64/Linux 2.4.xx            11.2.0.4.0     Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

alter system set pga_aggregate_target=4G;
--//It was previously set too small (256M); in this test environment the connect by table population often errored out, so set it larger.

create table t(x number, x2 varchar2(1000),x3 varchar2(1000))  SEGMENT CREATION IMMEDIATE;
insert into t select level, rpad(' ', 100, ' '),rpad('a',100,'a') from dual connect by level <= 1e6;
commit ;
exec sys.dbms_stats.gather_table_stats ( OwnName => 'SCOTT',TabName => 't',Estimate_Percent => NULL,Method_Opt => 'FOR ALL COLUMNS SIZE 1 ',Cascade => True ,No_Invalidate => false);

SCOTT@book> @ &r/tpt/seg2 scott.t
    SEG_MB OWNER SEGMENT_NAME SEG_PART_NAME SEGMENT_TYPE SEG_TABLESPACE_NAME BLOCKS     HDRFIL     HDRBLK
---------- ----- ------------ ------------- ------------ ------------------- ------ ---------- ----------
       234 SCOTT T                          TABLE        USERS                29952          4        546

--//234M.


2.Export:
$ exp scott/book tables=T file=t.dmp direct=y buffer=1280000
Export: Release 11.2.0.4.0 - Production on Fri Sep 8 11:48:29 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in ZHS16GBK character set and AL16UTF16 NCHAR character set
About to export specified tables via Direct Path ...
. . exporting table                              T    1000000 rows exported
Export terminated successfully without warnings.

3.Testing the import:

--//Searching v$sql for NESTED_TABLE_SET_REFS shows the statement imp executes during the import:
SCOTT@book> select sql_id,sql_text,executions from v$sql where sql_id='62m8tgc8mhwr2';
SQL_ID        SQL_TEXT                                                       EXECUTIONS
------------- ------------------------------------------------------------ ------------
62m8tgc8mhwr2 INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T" ("X", "X2", "X3"         1935
              ) VALUES (:1, :2, :3)

SCOTT@book> alter system flush shared_pool;
System altered.

SCOTT@book> select sql_id,sql_text,executions from v$sql where sql_id='62m8tgc8mhwr2';
no rows selected

--//Now test with different buffer sizes:
alter table t rename to t1;
//drop table t purge ;
alter system flush shared_pool;

$ imp scott/book tables=T file=t.dmp  buffer=N

SCOTT@book> select sql_id,sql_text,executions from v$sql where sql_id='62m8tgc8mhwr2';
SQL_ID        SQL_TEXT                                                       EXECUTIONS
------------- ------------------------------------------------------------ ------------
62m8tgc8mhwr2 INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T" ("X", "X2", "X3"         1935
              ) VALUES (:1, :2, :3)

--//The output above only shows the N=1048576 case. Record the results in a table:

N             EXECUTIONS
----------------------------------
8388608       242
4194304       484
2097152       968
1048576       1935
524288        3876
-----------------------------------

--//One pattern stands out: each time N is halved, the number of INSERT executions (EXECUTIONS) doubles.
--//Also, setting an even larger buffer does not appear to speed up the import.
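--//A quick sanity check (added sketch, using only the figures from the table above): dividing the 1e6 rows
--//by each EXECUTIONS value gives the rows handled per INSERT execution, which tracks N almost linearly.
select n, executions, round(1e6/executions) rows_per_exec
  from (select 8388608 n,  242 executions from dual union all
        select 4194304,    484 from dual union all
        select 2097152,    968 from dual union all
        select 1048576,   1935 from dual union all
        select  524288,   3876 from dual);
--//rows_per_exec comes out at roughly 4132, 2066, 1033, 517 and 258: each halving of N halves the batch size.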

4.Rough analysis:
--//My working assumption: the buffer parameter is just a staging buffer; rows are assembled in that memory area and then inserted as an array, so the execution count seen in v$sql is not 1e6 but depends on the buffer parameter.
--//Take N=1048576 as an example. If this 1M were the amount of raw table data buffered per insert, the 234M table recorded earlier would need only about 234 executions, far too big a gap from what was observed.

--//1000000/1935 = 516.795865633074

--//So roughly 517 rows per INSERT execution.

select DBMS_ROWID.ROWID_BLOCK_NUMBER (rowid) ,count(*) from t group by DBMS_ROWID.ROWID_BLOCK_NUMBER (rowid);
SCOTT@book> select * from (select DBMS_ROWID.ROWID_BLOCK_NUMBER (rowid) ,count(*) from t group by DBMS_ROWID.ROWID_BLOCK_NUMBER (rowid) ) where rownum<=5;
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)     COUNT(*)
------------------------------------ ------------
                                1035           34
                                1036           34
                                1037           34
                                1038           34
                                1039           34

--//So basically 34 rows per block.
--//517/34=15.2058823529411764758, so 517 rows occupy 15-16 blocks. This analysis does not fit... try a different angle.

SCOTT@book> select table_name,blocks,avg_row_len from dba_tables where owner=user and table_name in ('T');
TABLE_NAME       BLOCKS  AVG_ROW_LEN
---------- ------------ ------------
T                 29477          207

--//1024*1024/207=5065.5845410628193236714, which does not fit either.

--//A NUMBER value takes at most 22 bytes; this can be confirmed via the V$SQL_BIND_CAPTURE view.
22+1000+1000=2022
1024*1024/2022=518.5835806132542375865

--//Calculated this way, a 1M buffer holds about 518 rows, very close to the roughly 517 rows per insert computed earlier.
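--//The same estimate can be taken from the data dictionary. A minimal sketch (my own query, assuming byte
--//length semantics, so DATA_LENGTH is 22 for the NUMBER column and 1000 for each VARCHAR2(1000) column):
select sum(data_length) max_row_width,
       floor(1048576/sum(data_length)) rows_per_batch,
       ceil(1e6/floor(1048576/sum(data_length))) estimated_executions
  from dba_tab_columns
 where owner=user and table_name='T';
--//Here sum(data_length)=2022, giving about 518 rows per batch and roughly 1931 estimated executions,
--//close to (though not exactly) the 1935 observed.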

5.Applying the preceding analysis:
--//If the table is defined as below (one column changed to length 2000):
1024*1024/3022=346.9808074123972865651
1000000/346.9808074123972865651=2882.0037841789624875591
--//By this reasoning the insert should run about 2882 times; verify whether that is correct.

//drop table t purge;
//drop table t1 purge;
create table t(x number, x2 varchar2(2000),x3 varchar2(1000))  SEGMENT CREATION IMMEDIATE;
insert into t select level, rpad(' ', 100, ' '),rpad('a',100,'a') from dual connect by level <= 1e6;
commit ;
exec sys.dbms_stats.gather_table_stats ( OwnName => 'SCOTT',TabName => 't',Estimate_Percent => NULL,Method_Opt => 'FOR ALL COLUMNS SIZE 1 ',Cascade => True ,No_Invalidate => false);

$ exp scott/book tables=T file=t.dmp direct=y buffer=1280000
...

alter table t rename to t1;
//drop table t purge ;
alter system flush shared_pool;

$ imp scott/book tables=T file=t.dmp  buffer=1048576

SCOTT@book> select sql_id,sql_text,executions from v$sql where sql_id='62m8tgc8mhwr2';
SQL_ID        SQL_TEXT                                                       EXECUTIONS
------------- ------------------------------------------------------------ ------------
62m8tgc8mhwr2 INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T" ("X", "X2", "X3"         2891
              ) VALUES (:1, :2, :3)
--//The execution count is 2891, very close to the predicted value.

--//If the table is defined as below (two columns changed to length 2000):
1024*1024/4022=260.7100944803580308343
1000000/260.7100944803580308343=3835.6781005859374999431
--//By this reasoning the insert should run roughly 3835 times; verify whether that is correct.

//drop table t purge;
//drop table t1 purge;
create table t(x number, x2 varchar2(2000),x3 varchar2(2000))  SEGMENT CREATION IMMEDIATE;
insert into t select level, rpad(' ', 100, ' '),rpad('a',100,'a') from dual connect by level <= 1e6;
commit ;
exec sys.dbms_stats.gather_table_stats ( OwnName => 'SCOTT',TabName => 't',Estimate_Percent => NULL,Method_Opt => 'FOR ALL COLUMNS SIZE 1 ',Cascade => True ,No_Invalidate => false);

$ exp scott/book tables=T file=t.dmp direct=y buffer=1280000
...

alter table t rename to t1;
//drop table t purge ;
alter system flush shared_pool;

$ imp scott/book tables=T file=t.dmp  buffer=1048576

SCOTT@book> select sql_id,sql_text,executions from v$sql where sql_id='62m8tgc8mhwr2';
SQL_ID        SQL_TEXT                                                       EXECUTIONS
------------- ------------------------------------------------------------ ------------
62m8tgc8mhwr2 INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T" ("X", "X2", "X3"         3847
              ) VALUES (:1, :2, :3)

--//The execution count is 3847, very close to the predicted value.
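--//Pulling the three layouts together (added sketch: predicted = ceil(1e6/floor(1048576/max_row_width)),
--//with max_row_width taken as 2022, 3022 and 4022 as above; observed values are from v$sql):
select max_row_width,
       ceil(1e6/floor(1048576/max_row_width)) predicted,
       observed
  from (select 2022 max_row_width, 1935 observed from dual union all
        select 3022, 2891 from dual union all
        select 4022, 3847 from dual);
--//predicted comes out at 1931, 2891 and 3847, so buffer/max-row-width is a good first-order model of the
--//array size imp uses.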
--//Haven't written a blog post in a while; feeling a bit bored....
