9. Reading Notes on 《收获不止Oracle》: Table Design
2017-11-03 22:45
Tables live in the data files of the physical storage structure.
In the logical structure, a table is a segment; if the table has indexes, each index is also a segment.
1. Ordinary Heap Tables
Drawbacks:
- Updates incur redo log overhead
- DELETE operations have shortcomings
- Retrieval is slow when the table is very large
- Reading back to the table from an index is expensive
- Rows inserted in order are hard to read out in order
Choose the right technique for the right situation. There is no such thing as the most advanced technique, only the most suitable one.
1.1 Observing the Redo Overhead of Table Updates
SQL> select a.name, b.value from v$statname a, v$mystat b
     where a.statistic# = b.statistic# and a.name = 'redo size';
NAME                                                                  VALUE
--------------------------------------------------------------------------
redo size                                                               712
The v$statname and v$mystat dynamic views are used to track the amount of redo generated by the current session's operations.
drop table t purge;
create table t as select * from dba_objects;
Create a view to make it easy to check redo growth later:
SQL> create or replace view v_redo_size as
     select a.name, b.value from v$statname a, v$mystat b
     where a.statistic# = b.statistic# and a.name = 'redo size';
SQL> grant select on v_redo_size to public;
SQL> grant select on v_$statname to toad;
SQL> grant select on v_$mystat to toad;
SQL> select * from sys.v_redo_size;
NAME VALUE
--------------------------------------------------------------------------
redo size 728
SQL> delete from t;
91661 rows deleted.
SQL> select * from sys.v_redo_size;
NAME VALUE
--------------------------------------------------------------------------
redo size 38935360
Redo grew by roughly 38 MB. That is a lot, isn't it?
Now insert again:
SQL> insert into t select * from dba_objects;
91662 rows created.
SQL> select * from sys.v_redo_size;
NAME VALUE
--------------------------------------------------------------------------
redo size 51219460
Redo grew by roughly another 12 MB (51219460 − 38935360 = 12284100 bytes).
Now run an update:
SQL> update t set object_id=rownum;
91662 rows updated.
SQL> select * from sys.v_redo_size;
NAME VALUE
--------------------------------------------------------------------------
redo size 67699932
Another 16 MB of redo.
Deletes, inserts, and updates all generate redo.
Deletes generate the most redo, followed by updates, then inserts.
That is because deletes generate the most undo, and undo is itself protected by redo.
1.2 DELETE Does Not Release Space
DELETE is the most expensive of these operations: it generates the most undo, and undo must be protected by redo, so many performance problems trace back to DELETE. Experiment:
SQL> drop table t purge;
Table dropped.
SQL> create table t as select * from dba_objects;
Table created.
SQL> set autotrace on
SQL> select count(*) from t;
COUNT(*)
----------
91662
Execution Plan
----------------------------------------------------------
Plan hash value: 2966233522
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------
| 0| SELECT STATEMENT | | 1 | 429 (1)| 00:00:01 |
| 1| SORT AGGREGATE | | 1 | | |
| 2| TABLE ACCESS FULL| T |91662 | 429 (1)| 00:00:01 |
-------------------------------------------------------------------
Statistics
----------------------------------------------------------
2 recursive calls
0 db block gets
1542 consistent gets
1538 physical reads
0 redo size
544 bytes sent via SQL*Net to client
551 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Then run a DELETE:
SQL> set autotrace off
SQL> delete from t;
91662 rows deleted.
SQL> commit;
Commit complete.
SQL> set autotrace on
SQL> select count(*) from t;
COUNT(*)
----------
0
Execution Plan
----------------------------------------------------------
Plan hash value: 2966233522
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------
| 0| SELECT STATEMENT | | 1 | 429 (1)| 00:00:01 |
| 1| SORT AGGREGATE | | 1 | | |
| 2| TABLE ACCESS FULL| T |91662 | 429 (1)| 00:00:01 |
-------------------------------------------------------------------
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1543 consistent gets
0 physical reads
0 redo size
541 bytes sent via SQL*Net to client
551 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
The logical reads have not decreased at all, even though the table is now empty. Let's try the truncate command.
SQL> truncate table t;
Table truncated.
SQL> select count(*) from t;
COUNT(*)
----------
0
Execution Plan
----------------------------------------------------------
Plan hash value: 2966233522
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------
| 0| SELECT STATEMENT | | 1 | 429 (1)| 00:00:01 |
| 1| SORT AGGREGATE | | 1 | | |
| 2| TABLE ACCESS FULL| T |91662 | 429 (1)| 00:00:01 |
-------------------------------------------------------------------
Statistics
----------------------------------------------------------
1 recursive calls
1 db block gets
5 consistent gets
0 physical reads
104 redo size
541 bytes sent via SQL*Net to client
551 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
After truncate, the logical reads drop to almost nothing.
DELETE does not release space: it removes the rows from the blocks, but the emptied blocks remain below the high-water mark, and Oracle still scans them on a full table scan. TRUNCATE lowers the high-water mark, so those empty blocks are reclaimed and the space is actually released.
TRUNCATE is a DDL operation, not DML. It cannot take a WHERE condition, but it can be applied to a single partition.
When a large DELETE is followed by a large INSERT, Oracle fills the emptied blocks first, so an application that frequently deletes and then frequently inserts will not accumulate an excessive number of empty blocks.
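As a quick sketch of truncating a single partition (the table and partition names here are made up for illustration, not from the book):

```sql
-- Hypothetical range-partitioned table
create table sales (
  sale_day date,
  amount   number
)
partition by range (sale_day) (
  partition p2017q1 values less than (to_date('2017-04-01','yyyy-mm-dd')),
  partition p2017q2 values less than (to_date('2017-07-01','yyyy-mm-dd'))
);

-- DDL: no WHERE clause is allowed, but it can target one partition,
-- resetting that partition's high-water mark while leaving the rest intact.
alter table sales truncate partition p2017q1;
```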
1.3 Large Tables Are Slow to Search
To shorten the access path as much as possible, Oracle provides two main techniques: indexing and partitioning.
1.3.1 Reading Back to the Table via an Index Is Expensive
SQL> drop table t purge;
Table dropped.
SQL> create table t as select * from dba_objects where rownum<=200;
Table created.
SQL> create index idx_obj_id on t (object_id);
Index created.
SQL> set linesize 1000
SQL> set autotrace traceonly
SQL> select * from t where object_id<=10;
9 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3784017797
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------
| 0| SELECT STATEMENT | | 9 | 819 | 2 (0)| 00:00:01 |
| 1| TABLE ACCESS BY INDEX ROWID BATCHED| T | 9 | 819 | 2 (0)| 00:00:01 |
|* 2| INDEX RANGE SCAN | IDX_OBJ_ID | 9 | | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("OBJECT_ID"<=10)
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
2654 bytes sent via SQL*Net to client
551 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
9 rows processed
When retrieving rows through an index, Oracle first finds the matching entries in the index, then uses the ROWID stored with each entry to locate the row in the table and return the columns that are not in the index. That is the TABLE ACCESS BY INDEX ROWID step.
Continuing the experiment:
SQL> select object_id from t where object_id<=10;
9 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 188501954
-------------------------------------------------------------------------------
|Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 9 | 36 | 1 (0)| 00:00:01 |
|* 1 | INDEX RANGE SCAN| IDX_OBJ_ID | 9 | 36 | 1 (0)|00:00:01 |
-------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("OBJECT_ID"<=10)
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2 consistent gets
0 physical reads
0 redo size
653 bytes sent via SQL*Net to client
551 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
9 rows processed
The TABLE ACCESS BY INDEX ROWID step is gone. Logical reads drop from 4 to 2, and the cost drops from 2 to 1: since only the indexed column is selected, there is no need to go back to the table for columns outside the index.
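A common extension of this idea (a sketch, not from the book's transcript; the index name is illustrative) is to add the other queried columns to the index so the query is answered from the index alone:

```sql
-- A composite index covering both columns the query needs
create index idx_obj_id_name on t (object_id, object_name);

-- Both columns now come from the index entries themselves,
-- so the plan should again avoid TABLE ACCESS BY INDEX ROWID:
select object_id, object_name from t where object_id <= 10;
```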
1.3.2 Rows Inserted in Order Are Hard to Read Out in Order
SQL> drop table t purge;
Table dropped.
SQL> create table t
  2  (a int,
  3   b varchar2(4000) default rpad('*',4000,'*'),
  4   c varchar2(3000) default rpad('*',3000,'*')
  5  );
Table created.
SQL> insert into t (a) values (1);
SQL> insert into t (a) values (2);
SQL> insert into t (a) values (3);
SQL> select A from t;
A
----------
1
2
3
SQL> delete from t where a=2;
1 row deleted.
SQL> insert into t (a) values (4);
1 row created.
SQL> commit;
Commit complete.
SQL> select A from t;
A
----------
1
4
3
Because the default block size is 8 KB, columns B and C are padded with rpad('*',4000,'*') and rpad('*',3000,'*'), guaranteeing that each block holds only one row. Row 4 reuses the block freed by deleting row 2, which is why the full scan returns 1, 4, 3 rather than insertion order.
To display the rows in order you must add ORDER BY, but that has a significant cost.
SQL> set linesize 1000
SQL> set autotrace traceonly
SQL> select A from t;
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0| SELECT STATEMENT | | 3 | 39 | 3 (0)| 00:00:01 |
| 1| TABLE ACCESS FULL| T | 3 | 39 | 3 (0)| 00:00:01 |
--------------------------------------------------------------------------
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
7 consistent gets
0 physical reads
0 redo size
597 bytes sent via SQL*Net to client
551 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
SQL> select A from t order by A;
Execution Plan
----------------------------------------------------------
Plan hash value: 961378228
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0| SELECT STATEMENT | | 3 | 39 | 4 (25)| 00:00:01 |
| 1| SORT ORDER BY | | 3 | 39 | 4 (25)| 00:00:01 |
| 2| TABLE ACCESS FULL| T | 3 | 39 | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
15 consistent gets
0 physical reads
0 redo size
597 bytes sent via SQL*Net to client
551 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
3 rows processed
For the sorted query, the statistics show 1 under sorts (memory), indicating a sort took place, and the execution plan contains a SORT ORDER BY step. There is a real performance difference, and with large data volumes it becomes very pronounced.
There are two ways to avoid the sort: build an index on the ORDER BY column, or convert the ordinary table into a sorted hash cluster so that rows are stored in order and ORDER BY needs no sort step when displaying them.
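As a sketch of the first approach (the index name is illustrative), an index on A can let the optimizer walk the index in key order instead of sorting. Since A is nullable here, an IS NOT NULL predicate is one way to make the index usable:

```sql
create index idx_t_a on t (a);

-- With the NOT NULL predicate the optimizer can read idx_t_a in order,
-- typically replacing SORT ORDER BY with an index scan:
select a from t where a is not null order by a;
```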
1.4 Global Temporary Tables
In practice, some table operations do not need to be recoverable, for example intermediate result sets produced during a computation; a global temporary table is a good fit there.
1.4.1 Types of Global Temporary Tables
Global temporary tables come in two types: session-based (on commit preserve rows) and transaction-based (on commit delete rows).
SQL> create global temporary table t_tmp_session on commit preserve rows as select * from dba_objects where 1=2;
Table created.
SQL> select table_name, temporary, duration from user_tables where table_name='T_TMP_SESSION';
TABLE_NAME      T DURATION
--------------- - ---------------
T_TMP_SESSION   Y SYS$SESSION
SQL> create global temporary table t_tmp_transaction on commit delete rows as select * from dba_objects where 1=2;
Table created.
SQL> select table_name, temporary, duration from user_tables where table_name='T_TMP_TRANSACTION';
TABLE_NAME        T DURATION
----------------- - ---------------
T_TMP_TRANSACTION Y SYS$TRANSACTION
Now observe the redo changes caused by operating on global temporary tables.
First check the current redo figure:
SQL> select * from sys.v_redo_size;
NAME VALUE
--------------------------------------------------------------------------
redo size 292004
PS: this is the v_redo_size view created earlier.
Whether inserting, updating, or deleting, an ordinary table generates more redo than a global temporary table.
Here is the ordinary-table side of the comparison:
SQL> drop table t purge;
Table dropped.
SQL> create table t as select * from dba_objects where 1=2;
Table created.
SQL> select * from sys.v_redo_size;
NAME VALUE
--------------------------------------------------------------------------
redo size 1853048
SQL> insert into t select * from dba_objects;
91669 rows created.
SQL> select * from sys.v_redo_size;
NAME VALUE
--------------------------------------------------------------------------
redo size 14421480
Redo from the insert: 14421480 − 1853048 = 12568432 bytes (about 12 MB).
SQL> update t set object_id=rownum;
91669 rows updated.
SQL> select * from sys.v_redo_size;
NAME VALUE
--------------------------------------------------------------------------
redo size 28701856
Redo from updating the ordinary table: 28701856 − 14421480 = 14280376 bytes (about 14 MB).
SQL> delete from t;
91669 rows deleted.
SQL> select * from sys.v_redo_size;
NAME VALUE
--------------------------------------------------------------------------
redo size 67639048
Redo from the delete: 67639048 − 28701856 = 38937192 bytes (about 37 MB).
In every case this is far more than the same operation generates on a global temporary table, as the next experiment shows.
1.4.2 Two Important Properties of Global Temporary Tables
- Efficient record deletion
In a transaction-based global temporary table, rows are deleted automatically on COMMIT or when the session exits. In a session-based global temporary table, rows are deleted automatically when the session exits.
SQL> select count(*) from t_tmp_transaction;
COUNT(*)
----------
0
SQL> select * from sys.v_redo_size;
NAME VALUE
--------------------------------------------------------------------------
redo size 67639048
SQL> insert into t_tmp_transaction select * from dba_objects;
91669 rows created.
SQL> select * from sys.v_redo_size;
NAME VALUE
--------------------------------------------------------------------------
redo size 68243388
SQL> commit;
Commit complete.
SQL> select * from sys.v_redo_size;
NAME VALUE
--------------------------------------------------------------------------
redo size 68243564
SQL> select count(*) from t_tmp_transaction;
COUNT(*)
----------
0
The session-based table behaves analogously: exit the session, reconnect, and the rows are gone.
If the temporary table must be emptied and repopulated several times within a single program call, choose the transaction-based kind: COMMIT clears the results quickly. Otherwise the session-based kind is simpler, since even the COMMIT can be skipped.
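A minimal sketch of the transaction-based pattern (the loop body is illustrative; t_tmp_transaction is the on-commit-delete-rows table created above). Each COMMIT empties the staging table before the next batch, with no explicit DELETE needed:

```sql
begin
  for i in 1 .. 3 loop
    insert into t_tmp_transaction
      select * from dba_objects where owner = 'SYS';
    -- ... work with the intermediate result set here ...
    commit;  -- on commit delete rows: the table is instantly empty again
  end loop;
end;
/
```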
- Independence between sessions
When two sessions use the same global temporary table, each session's rows are kept completely separate from the other's, which is another remarkably powerful property.
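A sketch of that session independence, using the session-based t_tmp_session created earlier (the session labels are illustrative):

```sql
-- Session 1:
insert into t_tmp_session select * from dba_objects where rownum <= 10;
select count(*) from t_tmp_session;   -- 10

-- Session 2 (a different connection, at the same time):
select count(*) from t_tmp_session;   -- 0: session 1's rows are invisible here
```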