Oracle 2013. 4. 9. 09:36
Michael -- Thanks for the question regarding "Timestamp vs. Date - is Date still appropriate for new systems?", version 10.2
Submitted on 26-Nov-2006 2:49 UTC
Last updated 18-Feb-2010 9:54
You Asked: The new Timestamp datatypes are described as extensions of the Date datatype in Oracle's
documentation.
They are clearly more powerful and include capabilities such as tracking times down to
milliseconds, incorporating time zones, etc.
It's also generally accepted that you should minimize the complexity of a system where
possible, and that one way of doing this is not to use redundant entities (in the general
sense, not the ER diagram sense).
Given these points, is there any reason to still use Dates in new system development?
Are there issues with performance, compatibility, etc? Or is this like the switch from
LONG and LONG RAW to LOBs where all new development should use the LOB? Oracle's
decision not to extend Date to also include Date With Timezone and Date With Local
Timezone suggests that Oracle doesn't see the point in extending Date... presumably
because it will be deprecated and replaced with Timestamp.
-----------------------
BTW, on a closely related topic, I don't understand the logic behind making Timestamp
With Time Zone and Timestamp With Local Time Zone datatypes.
Seems to be that a database (or any other system) should store all Date / Times in a
canonical form - presumably seconds since an agreed on starting second. Then all of this
Time Zone stuff is just a matter of Locale and client settings - my client should specify
its Timezone to the Server and then all dates / times should be converted to the
canonical form using that timezone unless I specify another time zone together with the
date / time.
The With Local Timezone and With Timezone datatypes seem to describe how to present the
information to clients more than what the datatype is.
and we said... I think dates will be here for a long time to come.
Timestamp is an ANSI thing, for compliance.
The choice is yours.
And you cannot really store things in a canonical format - there are too many variations.
Dates are 7 bytes always
Timestamps are also 7 (no fractional seconds)
or 11 (fractional seconds)
or 13 (fractional seconds with timezone)
Probably - just using timestamp would be good going forward (there might be tools that
cannot "deal" correctly with a timestamp yet - they would cause it to convert to a date -
you'll use to_timestamp and such with them).
But in reality, date is just coming out of my fingers still :) I typically have no need
for the fractional seconds nor timezone.
What about variations? November 26, 2006 - 10am UTC
Reviewer: Mike Friedman from Shenzhen, China
Mind explaining this a bit more?
"And you cannot really store things in a canonical format - there are too many variations."
Does this mean that Oracle doesn't store dates internally as seconds since a reference date / time?
I know Windows does... and they have to deal with as many variations as Oracle, don't they?
Followup November 26, 2006 - 11am UTC: The date datatype does, but not a "seconds since something".
(windows cannot do a wide range of dates? hmmm, you might not be entirely accurate on that seconds
thing, I'll bet windows uses more than one format to store dates, or they cannot store dates that
fall outside a very very very small range!)
<quote src=Expert Oracle Database Architecture>
DATE Type
The DATE type is a fixed-width 7-byte date/time datatype. It will always contain the seven
attributes of the century, the year within the century, the month, the day of the month, the hour,
the minute, and the second. Oracle uses an internal format to represent that information, so it is
not really storing 20, 05, 06, 25, 12, 01, 00 for June 25, 2005, at 12:01:00. Using the built-in
DUMP function, we can see what Oracle really stores:
ops$tkyte@ORA10G> create table t ( x date );
Table created.
ops$tkyte@ORA10G> insert into t (x) values
2 ( to_date( '25-jun-2005 12:01:00',
3 'dd-mon-yyyy hh24:mi:ss' ) );
1 row created.
ops$tkyte@ORA10G> select x, dump(x,10) d from t;
X D
--------- -----------------------------------
25-JUN-05 Typ=12 Len=7: 120,105,6,25,13,2,1
The century and year bytes (the 120,105 in the DUMP output) are stored in an excess-100 notation.
You would have to subtract 100 from them to determine the correct century and year. The reason for
the excess-100 notation is support of BC and AD dates. If you subtract 100 from the century byte
and get a negative number, it is a BC date, for example:
ops$tkyte@ORA10G> insert into t (x) values
2 ( to_date( '01-jan-4712bc',
3 'dd-mon-yyyybc hh24:mi:ss' ) );
1 row created.
ops$tkyte@ORA10G> select x, dump(x,10) d from t;
X D
--------- -----------------------------------
25-JUN-05 Typ=12 Len=7: 120,105,6,25,13,2,1
01-JAN-12 Typ=12 Len=7: 53,88,1,1,1,1,1
So, when we insert 01-JAN-4712BC, the century byte is 53, and 53 - 100 = -47, the century we
inserted. Because it is negative, we know that it is a BC date. This storage format also allows the
dates to be naturally sortable in a binary sense. Since 4712 BC is less than 4710 BC, we'd like a
binary representation that supports that. By dumping those two dates, we can see that 01-JAN-4710BC
is larger than the same day in 4712 BC, so they will sort and compare nicely:
ops$tkyte@ORA10G> insert into t (x) values
2 ( to_date( '01-jan-4710bc',
3 'dd-mon-yyyybc hh24:mi:ss' ) );
1 row created.
ops$tkyte@ORA10G> select x, dump(x,10) d from t;
X D
--------- -----------------------------------
25-JUN-05 Typ=12 Len=7: 120,105,6,25,13,2,1
01-JAN-12 Typ=12 Len=7: 53,88,1,1,1,1,1
01-JAN-10 Typ=12 Len=7: 53,90,1,1,1,1,1
The month and day bytes, the next two fields, are stored naturally, without any modification. So,
June 25 used a month byte of 6 and a day byte of 25. The hour, minute, and second fields are stored
in excess-1 notation, meaning we must subtract 1 from each component to see what time it really
was. Hence midnight is represented as 1,1,1 in the date field.
This 7-byte format is naturally sortable, as you have seen: it is a 7-byte field that can be sorted
in a binary fashion from smaller to larger (or vice versa) very efficiently. Additionally, its
structure allows for easy truncation, without converting the date into some other format. For
example, truncating the date we just stored, 25-JUN-2005 12:01:00, to the day (remove the hours,
minutes, seconds) is very straightforward. Just set the trailing three bytes to 1,1,1 and the time
component is as good as erased.
</quote>
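To make the excess-100 / excess-1 arithmetic in the quoted passage concrete, here is a small Python sketch that reproduces the DUMP byte values shown above. This is an illustration of the documented layout only, not Oracle's actual code; the function names are invented.

```python
def oracle_date_bytes(year, month, day, hour=0, minute=0, second=0):
    """Encode a date into the 7-byte layout shown by DUMP above.

    year is positive for AD, negative for BC (e.g. -4712 for 4712 BC).
    Century and year-within-century use excess-100 notation; month and
    day are stored as-is; hour/minute/second use excess-1 notation."""
    sign = 1 if year >= 0 else -1
    y = abs(year)
    century, yy = sign * (y // 100), sign * (y % 100)
    return [100 + century, 100 + yy, month, day, hour + 1, minute + 1, second + 1]

def trunc_to_day(b):
    """TRUNC(date): erase the time by setting the trailing three bytes
    to 1,1,1 (excess-1 midnight), as the passage describes."""
    return b[:4] + [1, 1, 1]

# 25-JUN-2005 12:01:00 dumps as 120,105,6,25,13,2,1
assert oracle_date_bytes(2005, 6, 25, 12, 1, 0) == [120, 105, 6, 25, 13, 2, 1]
# 01-JAN-4712BC dumps as 53,88,1,1,1,1,1 and sorts below 01-JAN-4710BC
assert oracle_date_bytes(-4712, 1, 1) == [53, 88, 1, 1, 1, 1, 1]
assert oracle_date_bytes(-4712, 1, 1) < oracle_date_bytes(-4710, 1, 1)
assert trunc_to_day([120, 105, 6, 25, 13, 2, 1]) == [120, 105, 6, 25, 1, 1, 1]
```

Note how byte-wise (lexicographic) comparison of the encoded lists gives the natural date order, which is exactly the binary-sortable property the book calls out.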
Timestamps get more complicated - there are timestamps without timezones, with local timezones -
you cannot store those in "one fashion" - one preserves the original timezone for example (so you
cannot put it into UTC, you need to remember what timezone it used to be) and the other puts it
into the database timezone.
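The distinction is easy to see with a small Python sketch (illustrative only; the zone offsets are made up for the example). Normalizing everything to one zone preserves the instant but discards which zone the value was entered in, which is exactly why one canonical form cannot serve both timestamp variants:

```python
from datetime import datetime, timezone, timedelta

tokyo = timezone(timedelta(hours=9))
newyork = timezone(timedelta(hours=-5))

# Two entries made in different zones that denote the same instant.
t1 = datetime(2006, 11, 26, 19, 0, tzinfo=tokyo)
t2 = datetime(2006, 11, 26, 5, 0, tzinfo=newyork)
assert t1 == t2  # same point in time

# "WITH LOCAL TIME ZONE"-style storage: normalize both to one zone.
# The instant survives, but t1 and t2 become indistinguishable -
# you can no longer tell that t1 was originally a Tokyo value.
assert t1.astimezone(timezone.utc) == t2.astimezone(timezone.utc)

# "WITH TIME ZONE"-style storage: keep the original offset alongside
# the value, so the entered zone remains recoverable.
assert t1.utcoffset() != t2.utcoffset()
```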
Thanks November 26, 2006 - 12pm UTC
Reviewer: Michael Friedman from Shenzhen, China
to_timestamp September 10, 2008 - 8pm UTC
Reviewer: Paras Bansal from Milpitas, CA
Can you explain this -
SELECT cast(to_timestamp('01-may-2005 23:59:59.501','dd-mon-yyyy hh24:mi:ss:ff') AS DATE) FROM
dual;
It returns me May 2nd as a DATE. Can we do something so that it truncates the millisecond
portion and we get only May 1st?
regards,
Paras Bansal
Followup September 16, 2008 - 11am UTC: substr the string you are trying to convert and just use to_date.
ops$tkyte%ORA10GR2> SELECT to_date(substr( '01-may-2005 23:59:59.501', 1, 20),'dd-mon-yyyy
hh24:mi:ss') from dual;
TO_DATE(SUBSTR('01-M
--------------------
01-may-2005 23:59:59
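As a sanity check on the substr bounds (a Python sketch, not part of the original answer): characters 1 through 20 of the literal cover exactly the date and time through the seconds, leaving the fractional part behind.

```python
s = '01-may-2005 23:59:59.501'
# substr(s, 1, 20) in SQL terms corresponds to s[:20] in Python
assert s[:20] == '01-may-2005 23:59:59'
```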
curious TO_TIMESTAMPing around September 16, 2008 - 2pm UTC
Reviewer: Duke Ganote from Amelia, Ohio USA
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Sep 16 13:35:17 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the OLAP and Data Mining Scoring Engine options
[glogin.sql includes:
alter session set nls_date_format='yyyy-mm-dd HH24:MI:SS';
alter session set nls_timestamp_format = 'yyyy-mm-dd HH24:MI:SS.FF';
]
select cast(ts1 as date) dt1
, ts1
, cast(ts2 as date) dt2
, ts2
from
(
select to_timestamp('01-may-2005 23:59:59.501'
,'dd-mon-yyyy hh24:mi:ss:ff') ts1
, systimestamp ts2
from dual
)
/
DT1
TS1
DT2
TS2
--------------------------------------------------
2005-05-02 00:00:00
2005-05-01 23:59:59.501000000
2008-09-16 13:33:39
16-SEP-08 01.33.38.512485 PM -04:00
Followup September 16, 2008 - 11pm UTC: do you have access to support to file a bug?
where is the bug ? September 17, 2008 - 4am UTC
Reviewer: Sokrates
sorry, I don't see it.
Can you please explain ?
What was expected output ?
Followup September 17, 2008 - 9am UTC: when you cast a to_timestamp value as a date, the time component disappeared. When you cast a timestamp value as a date, the time component did NOT disappear.
Isn't it just rounding February 16, 2010 - 8pm UTC
Reviewer: Warwick from Sydney, Australia
Hi Tom,
Isn't this due to rounding? I tried this using both 9i and 11r1
alter session set nls_date_format='yyyy-mm-dd HH24:MI:SS';
alter session set nls_timestamp_format = 'yyyy-mm-dd HH24:MI:SS.FF';
Running the original query returned the results above. Changing it to
select cast(ts1 as date) dt1
, ts1
, cast(ts2 as date) dt2
, ts2
from
(
select to_timestamp('01-may-2005 23:59:58.501'
,'dd-mon-yyyy hh24:mi:ss:ff') ts1
, systimestamp ts2
from dual
)
/
DT1
TS1
DT2
TS2
---------------------
2005-05-01 23:59:59
2005-05-01 23:59:58.501000000
2010-02-17 12:49:57
17-FEB-10 12.49.56.986028 PM +11:00
Cheers Warwick
Followup February 17, 2010 - 8am UTC: 9i and 10g (and 11gR1) all lose the time component:
ops$tkyte%ORA9IR2> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
PL/SQL Release 9.2.0.8.0 - Production
CORE 9.2.0.8.0 Production
TNS for Linux: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production
ops$tkyte%ORA9IR2> alter session set nls_date_format='yyyy-mm-dd HH24:MI:SS';
Session altered.
ops$tkyte%ORA9IR2> alter session set nls_timestamp_format = 'yyyy-mm-dd HH24:MI:SS.FF';
Session altered.
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> select cast(ts1 as date) dt1
2 , ts1
3 , cast(ts2 as date) dt2
4 , ts2
5 from
6 (
7 select to_timestamp('01-may-2005 23:59:59.501'
8 ,'dd-mon-yyyy hh24:mi:ss:ff') ts1
9 , systimestamp ts2
10 from dual
11 )
12 /
DT1
-------------------
TS1
---------------------------------------------------------------------------
DT2
-------------------
TS2
---------------------------------------------------------------------------
2005-05-02 00:00:00
2005-05-01 23:59:59.501000000
2010-02-17 09:54:28
17-FEB-10 09.54.27.579168 AM -05:00
10g:
ops$tkyte%ORA10GR2> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
ops$tkyte%ORA10GR2> select cast(ts1 as date) dt1
2 , ts1
3 , cast(ts2 as date) dt2
4 , ts2
5 from
6 (
7 select to_timestamp('01-may-2005 23:59:59.501'
8 ,'dd-mon-yyyy hh24:mi:ss:ff') ts1
9 , systimestamp ts2
10 from dual
11 )
12 /
DT1
-------------------
TS1
---------------------------------------------------------------------------
DT2
-------------------
TS2
---------------------------------------------------------------------------
2005-05-02 00:00:00
2005-05-01 23:59:59.501000000
2010-02-17 09:55:29
17-FEB-10 09.55.28.592716 AM -05:00
and 11gR1:
ops$tkyte%ORA11GR1> select * from v$version;
BANNER
-------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
ops$tkyte%ORA11GR1> alter session set nls_date_format='yyyy-mm-dd HH24:MI:SS';
Session altered.
ops$tkyte%ORA11GR1> alter session set nls_timestamp_format = 'yyyy-mm-dd HH24:MI:SS.FF';
Session altered.
ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> select cast(ts1 as date) dt1
2 , ts1
3 , cast(ts2 as date) dt2
4 , ts2
5 from
6 (
7 select to_timestamp('01-may-2005 23:59:59.501'
8 ,'dd-mon-yyyy hh24:mi:ss:ff') ts1
9 , systimestamp ts2
10 from dual
11 )
12 /
DT1
-------------------
TS1
---------------------------------------------------------------------------
DT2
-------------------
TS2
---------------------------------------------------------------------------
2005-05-02 00:00:00
2005-05-01 23:59:59.501000000
2010-02-17 09:56:01
17-FEB-10 09.56.01.087210 AM -05:00
but 11gR2:
ops$tkyte%ORA11GR2> select * from v$version;
BANNER
-------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
ops$tkyte%ORA11GR2> alter session set nls_date_format='yyyy-mm-dd HH24:MI:SS';
Session altered.
ops$tkyte%ORA11GR2> alter session set nls_timestamp_format = 'yyyy-mm-dd HH24:MI:SS.FF';
Session altered.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select cast(ts1 as date) dt1
2 , ts1
3 , cast(ts2 as date) dt2
4 , ts2
5 from
6 (
7 select to_timestamp('01-may-2005 23:59:59.501'
8 ,'dd-mon-yyyy hh24:mi:ss:ff') ts1
9 , systimestamp ts2
10 from dual
11 )
12 /
DT1
-------------------
TS1
---------------------------------------------------------------------------
DT2
-------------------
TS2
---------------------------------------------------------------------------
2005-05-01 23:59:59
2005-05-01 23:59:59.501000000
2010-02-17 09:57:22
17-FEB-10 09.57.22.406995 AM -05:00
Isn't it rounding to nearest second? February 18, 2010 - 9am UTC
Reviewer: John Abate from Missouri, USA
It looks to me that it is rounding the .501 fraction of a second to the nearest whole second
01-may-2005 23:59:59.501 rounds to 02-may-2005 00:00:00
Or am I missing something?
Followup February 18, 2010 - 9am UTC: doh, you are correct, I didn't notice the DATE flipped! And I now do vaguely recall that the behavior changed - the cast truncates rather than rounds as of 11gR2. You are correct - I did not see the forest for all of the trees. I was so concentrated on the time going to 00:00:00 that I did not even really look at what the time was beforehand...
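The version difference observed in this thread boils down to rounding versus truncation of the fractional seconds. A Python sketch of the observed behavior (illustrative; function names are invented, and this is not Oracle's code):

```python
from datetime import datetime, timedelta

def cast_pre_11gr2(ts):
    """Behavior observed in 9i/10g/11gR1: round to the nearest second."""
    if ts.microsecond >= 500_000:
        ts += timedelta(seconds=1)
    return ts.replace(microsecond=0)

def cast_11gr2(ts):
    """Behavior observed in 11gR2: truncate the fractional seconds."""
    return ts.replace(microsecond=0)

ts = datetime(2005, 5, 1, 23, 59, 59, 501000)
# .501 rounds up, spilling into the next day...
assert cast_pre_11gr2(ts) == datetime(2005, 5, 2, 0, 0, 0)
# ...while truncation keeps May 1st.
assert cast_11gr2(ts) == datetime(2005, 5, 1, 23, 59, 59)
# Warwick's 23:59:58.501 case rounds to 23:59:59, staying on May 1st.
assert cast_pre_11gr2(datetime(2005, 5, 1, 23, 59, 58, 501000)) == \
       datetime(2005, 5, 1, 23, 59, 59)
```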
Reference : http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:77562635789106
Oracle 2013. 3. 6. 15:35
This is an introductory Oracle PL/SQL training document written by the Oracle Club.
See the attached file for details.
pl_sql.doc
Oracle 2013. 1. 21. 17:47
How can we export the data of a particular table in an Oracle DB to a separate text file per record?
Using the UTL_FILE package, PL/SQL can read from and write to files.
This can also be used when you want PL/SQL to write log output to separate files.
First, an Oracle directory must be created.
▣ Create Directory
Log in to Oracle as a DBA account, create the directory, and grant privileges on it to the account that will use it.
1. CREATE DIRECTORY exp_text AS 'c:\temp'; -- designate the temp folder as a directory named exp_text
2. GRANT READ, WRITE ON DIRECTORY exp_text TO scott; -- grant the scott account read and write privileges on the exp_text directory
Once the directory has been created and the privileges granted as above, the scott account can extract data to text files with the simple PL/SQL below.
▣ Export to Text File
create or replace procedure Export2Text IS
  vHandle    UTL_FILE.FILE_TYPE;
  vDirectory varchar2(100);
  vFile      varchar2(256);
  vOutput    varchar2(32767);
  cursor c_cur is select empno, ename, hiredate, sal from emp;
begin
  vDirectory := 'exp_text';
  for r_cur in c_cur loop
    if r_cur.empno > 0 then
      vFile := r_cur.empno || '.txt';
      vHandle := UTL_FILE.FOPEN(vDirectory, vFile, 'W', 32767);
      vOutput := r_cur.ename || ', ' || r_cur.hiredate || ', ' || r_cur.sal;
      UTL_FILE.PUT_LINE(vHandle, vOutput);
      UTL_FILE.FCLOSE(vHandle);
    end if;
  end loop;
exception
  when others then
    UTL_FILE.FCLOSE_ALL;
    raise;
end;
/
Below is PL/SQL that can be used to read a file that exists in the directory.
▣ Import from Text File
create or replace procedure Import2Text is
  vHandle    UTL_FILE.FILE_TYPE;
  vDirectory varchar2(100);
  vFile      varchar2(256);
  vInput     varchar2(32767);
begin
  vDirectory := 'exp_text';
  vFile := 'test.txt';
  vHandle := UTL_FILE.FOPEN(vDirectory, vFile, 'R', 32767);
  begin
    loop
      UTL_FILE.GET_LINE(vHandle, vInput, 32767);
      DBMS_OUTPUT.PUT_LINE(vInput);
    end loop;
  exception
    when NO_DATA_FOUND then
      dbms_output.put_line('last line of file');
  end;
  UTL_FILE.FCLOSE(vHandle);
exception
  when others then
    UTL_FILE.FCLOSE_ALL;
    raise;
end;
/
Oracle 2012. 7. 26. 13:19
Oracle 11g introduces the Compound Triggers feature, so there is no longer any need to create a separate trigger for each timing event.
In earlier versions of Oracle, triggers were database objects that could collide with the current state of an object and had to maintain session state across firings, which kept database developers on their toes when designing them. Oracle 11g offers considerable improvements in database triggers. These additions have not only enhanced the language's usability but also promise better performance in a real application environment.
Oracle 11g fixes major issues by the enhancements as listed below.
- Trigger firing sequence can be set in Oracle 11g using FOLLOWS keyword
- Compound Triggers to encapsulate multiple triggering timings in one body
- Triggers can be created in ENABLED/DISABLED mode
- DML triggers in 11g are reported to be 25% faster than in earlier versions in terms of compilation, execution, and firing
In this tutorial, you will learn how compound triggers work and their benefits to users.
1. Compound Triggers: Introduction
Compound triggers are database DML triggers which 'compound', or combine, all the triggering timings under one trigger body. The triggering timings available for database triggers are BEFORE STATEMENT, AFTER STATEMENT, BEFORE EACH ROW, and AFTER EACH ROW. Trigger-level variables can be defined which are visible and accessible in all the timing blocks.
A compound trigger body has no exception section of its own; instead, each timing block should contain its own exception handler in an exception section.
2. Compound Triggers: Reaping the Benefits
Compound triggers resolve a few big issues which have been a nightmare for developers in the past.
- Mutating table error ORA-04091
- State maintenance: variables defined in the declaration section of the compound trigger persist until the statement execution finishes.
- Enhanced performance in bulk operations
- Supports encapsulation of multiple program units; thus enhances code interactivity
3. Compound Triggers: Usage Guidelines
As a new addition to the database family, there are some introductory guidelines which must be followed when working with compound triggers:
- Compound triggers are meant only for DML operations. There is still no support for DDL and system operations.
- Exception handling has to be done in each timing block.
- A compound trigger remains passive if the DML does not change any rows.
- The :OLD and :NEW variable identifiers can exist only in row-level blocks, and :NEW values can only be modified in the BEFORE EACH ROW block.
- No support for the WHEN clause. Earlier, the WHEN clause could be used to impose a condition on the trigger action.
- No support for PRAGMA AUTONOMOUS_TRANSACTION.
There are many debates over the above guidelines, which many identify as compound trigger restrictions. But from a developer's perspective, the benefits of compound triggers far outweigh these restrictions. The assumptions and restrictions can be absorbed into coding practice as 'usage guidelines'.
4. Compound Triggers: Syntax
The syntax shows the four timing blocks in a set order.
Example Syntax [1a]: For DML Triggers
Sample Code
CREATE OR REPLACE TRIGGER [TRIGGER NAME]
FOR [DML] ON [TABLE NAME]
COMPOUND TRIGGER

  -- Initial section
  -- Declarations
  -- Subprograms

  -- Optional section
  BEFORE STATEMENT IS
    ...;

  -- Optional section
  AFTER STATEMENT IS
    ...;

  -- Optional section
  BEFORE EACH ROW IS
    ...;

  -- Optional section
  AFTER EACH ROW IS
    ...;

END;
Copyright exforsys.com
In the syntax, note that the compound trigger offers a single declaration section, whose declarations are locally available in all the timing blocks.
Example Syntax [1b]: For database views
INSTEAD OF EACH ROW IS
  ...;
END;
Note that none of the timing blocks may be duplicated. The Oracle server raises exception PLS-00676 if duplicate timing blocks are found in the compound trigger definition.
5. Compound Triggers: Additional Notes
5.1. The USER_TRIGGERS dictionary view has been restructured to include metadata for compound triggers. New columns specify whether a given timing block is present in the compound trigger body.
5.2. A new triggering event, 'COMPOUND', has been introduced for compound triggers. For a compound trigger such as TRG_COMP_DEMO, USER_TRIGGERS reports this value in TRIGGERING_EVENT, alongside TRIGGER_TYPE and TABLE_NAME.
5.3. The columns BEFORE_STATEMENT, BEFORE_ROW, AFTER_ROW, and AFTER_STATEMENT are set to YES or NO based on the timing blocks present in the compound trigger. For example, if the trigger TRG_COMP_DEMO contains all the timing blocks in its body, all four columns show YES.
6. Applications of Compound Triggers
A compound trigger is best suited to achieve two objectives:
- Yield better performance when a second table must be loaded simultaneously with a running transaction, using its values.
- Resolve the mutating table error (ORA-04091).
I will illustrate both in the steps below.
6.1. Demonstrating performance gains during simultaneous loading
1. Two tables were created
Sample Code
SQL> CREATE TABLE ORDERS (ORD_ID NUMBER,
       ITEM_CODE VARCHAR2(100),
       ORD_QTY NUMBER,
       ORD_DATE DATE);

TABLE created.

SQL> CREATE TABLE ORDER_ARCHIVE(ORD_ID NUMBER,
       ORD_CODE VARCHAR2(100));

TABLE created.
2. To demonstrate the performance difference between versions, I first create a normal FOR EACH ROW trigger on the ORDERS table to insert data into the ORDER_ARCHIVE table.
Sample Code
SQL> CREATE OR REPLACE TRIGGER TRG_ORDERS
BEFORE INSERT ON ORDERS
FOR EACH ROW
BEGIN
  DBMS_OUTPUT.PUT_LINE('Insert order '||:NEW.ORD_ID||' into ORDER_ARCHIVE');
  INSERT INTO ORDER_ARCHIVE VALUES (:NEW.ORD_ID, :NEW.ITEM_CODE);
END;
/

TRIGGER created.
3. Now, I will insert test data into the ORDERS table using a SELECT statement.
Sample Code
SQL> INSERT INTO orders(ord_id, item_code, ord_date)
  2  SELECT (rownum+1)*100, 'ITEM_'||rownum, sysdate-rownum
  3  FROM DUAL
  4* CONNECT BY ROWNUM < 15;
INSERT ORDER 1 INTO ORDER_ARCHIVE
INSERT ORDER 2 INTO ORDER_ARCHIVE
INSERT ORDER 3 INTO ORDER_ARCHIVE
INSERT ORDER 4 INTO ORDER_ARCHIVE
INSERT ORDER 5 INTO ORDER_ARCHIVE
INSERT ORDER 6 INTO ORDER_ARCHIVE
INSERT ORDER 7 INTO ORDER_ARCHIVE
INSERT ORDER 8 INTO ORDER_ARCHIVE
INSERT ORDER 9 INTO ORDER_ARCHIVE
INSERT ORDER 10 INTO ORDER_ARCHIVE
INSERT ORDER 11 INTO ORDER_ARCHIVE
INSERT ORDER 12 INTO ORDER_ARCHIVE
INSERT ORDER 13 INTO ORDER_ARCHIVE
INSERT ORDER 14 INTO ORDER_ARCHIVE

14 rows created.
Note the output of the above INSERT. For each of the 14 records inserted into the ORDERS table, the trigger makes a corresponding single-row insert into the ORDER_ARCHIVE table.
4. Compound triggers in Oracle 11g can perform the simultaneous loading as a bulk insert. I will create a compound trigger (named TRG_COMP_SAL in the listing below) to implement the bulk loading process. Note the use of an associative array, which holds the data to be inserted; it is flushed into the ORDER_ARCHIVE table whenever the index count reaches 20, and again when the statement execution completes.
Sample Code
SQL> CREATE OR REPLACE TRIGGER TRG_COMP_SAL
FOR INSERT ON ORDERS
COMPOUND TRIGGER
  TYPE ORDER_T IS TABLE OF ORDER_ARCHIVE%ROWTYPE
    INDEX BY PLS_INTEGER;
  L_ORDERS ORDER_T;
  I NUMBER := 0;

  AFTER EACH ROW IS
  BEGIN
    I := I+1;
    L_ORDERS(I).ORD_ID := :NEW.ORD_ID;
    L_ORDERS(I).ORD_CODE := :NEW.ITEM_CODE;
    IF I >= 20 THEN
      DBMS_OUTPUT.PUT_LINE('Bulk Load for 20 orders');
      FOR J IN 1..I LOOP
        INSERT INTO ORDER_ARCHIVE VALUES L_ORDERS(J);
      END LOOP;
      L_ORDERS.DELETE;
      I := 0;
    END IF;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    DBMS_OUTPUT.PUT_LINE('Statement level loading');
    FORALL J IN 1..L_ORDERS.COUNT
      INSERT INTO ORDER_ARCHIVE VALUES L_ORDERS(J);
    L_ORDERS.DELETE;
    I := 0;
  END AFTER STATEMENT;
END;
/

TRIGGER created.
5. This time I will insert 64 rows (instead of just 14) into the ORDERS table using a SELECT query. The simultaneous inserts into the ORDER_ARCHIVE table are achieved in only three bulk loads of 20 rows each, plus a statement-level load of the remaining four.
Sample Code
SQL> INSERT INTO orders(ord_id, item_code, ord_date)
  2  SELECT (rownum+1)*100, 'ITEM_'||rownum, sysdate-rownum
  3  FROM DUAL
  4  connect BY rownum < 65
  5  /
Bulk LOAD FOR 20 orders
Bulk LOAD FOR 20 orders
Bulk LOAD FOR 20 orders
Statement level loading

64 rows created.
The above results clearly differentiate the compound trigger from conventional triggers in terms of performance during simultaneous data loading.
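The buffering pattern the compound trigger uses is language-independent. Here is a minimal Python analogue of the flush-at-20 logic (class and method names are invented for illustration; this is a sketch of the pattern, not the trigger itself):

```python
class BufferedArchiver:
    """Python analogue of the compound trigger above: buffer each row
    in the AFTER EACH ROW step, flush every `batch` rows, and flush
    the remainder in the AFTER STATEMENT step."""
    def __init__(self, sink, batch=20):
        self.sink, self.batch = sink, batch
        self.buf, self.flushes = [], 0

    def after_each_row(self, row):
        self.buf.append(row)
        if len(self.buf) >= self.batch:
            self._flush()

    def after_statement(self):
        self._flush()

    def _flush(self):
        if self.buf:
            self.sink.extend(self.buf)  # stands in for one bulk INSERT
            self.buf.clear()
            self.flushes += 1

archive = []
arch = BufferedArchiver(archive)
for i in range(64):                 # the 64-row INSERT ... SELECT
    arch.after_each_row('ITEM_%d' % i)
arch.after_statement()
# 64 rows arrive in 3 bulk loads of 20 plus one final load of 4.
assert len(archive) == 64 and arch.flushes == 4
```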
6.2. Demonstration of Mutating table solution using Compound triggers
Now, let us look at the most celebrated capability of compound triggers: their ability to tackle the mutating table error (ORA-04091).
A mutating table error occurs when a table is referenced while it is in a state of flux, that is, while it participates in a running transaction. Earlier, this had to be guarded against during coding, or its effects were diluted using workarounds. A few of the more efficient workarounds are listed below.
- Change in logic implementation and code design
- Using PRAGMA_AUTONOMOUS_TRANSACTION
- Conversion of row level trigger to statement level
- Defining a package to hold variables at session level; these variables hold the values, which are inserted later, once the statement execution finishes.
How does a compound trigger fix the problem?
A compound trigger contains timing blocks and variables which persist until the statement completes and are visible in all the timing blocks of the trigger. Collection-type variables can hold a snapshot of the table's data, taken before the table enters flux mode. The table is locked once it is in flux, but the backup data in the collection variables persists and can be referenced within the trigger.
The logic is thus the same as in the earlier workarounds, but, being a single compiled unit, it is more convenient and more easily maintained.
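The snapshot idea can be sketched outside the database as well (a Python analogue with invented names; a sketch of the pattern, not Oracle code): capture the values before the statement mutates the data, then consult the snapshot from the per-row step instead of the mutating table.

```python
class SnapshotTrigger:
    """Analogue of the mutating-table workaround: BEFORE STATEMENT
    takes a snapshot; AFTER EACH ROW reads the snapshot instead of
    the (mutating) table."""
    def __init__(self, table):
        self.table = table
        self.snapshot = []

    def before_statement(self, ord_id):
        # BULK COLLECT of the pre-update quantities
        self.snapshot = [r["qty"] for r in self.table if r["id"] == ord_id]

    def after_each_row(self):
        # Safe: reads the snapshot, not the table being updated
        return self.snapshot[0]

orders = [{"id": 600, "qty": 90}]
trg = SnapshotTrigger(orders)
trg.before_statement(600)
orders[0]["qty"] = 150              # the UPDATE mutates the table
assert trg.after_each_row() == 90   # the old value is still available
```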
In the example code below, the compound trigger fires on update of the ORDERS table. It queries the ORDERS table to fetch and display the old quantity. For a conventional DML trigger, this situation is a prime candidate for the ORA-04091 mutating table error.
A user updates the order quantity for order id 600. Before the UPDATE statement executes, the BEFORE STATEMENT block fetches the old order quantity value (90) and stores it in a collection-type variable. When the statement executes, it updates the order quantity to 150. The AFTER EACH ROW block in the trigger displays the old quantity value.
Sample Code
CREATE OR REPLACE TRIGGER TRG_ORDERS
FOR UPDATE ON ORDERS
COMPOUND TRIGGER
  TYPE ORD_QTY_T IS TABLE OF ORDERS.ORD_QTY%TYPE;
  L_ORD_QTY ORD_QTY_T;

  BEFORE STATEMENT IS
  BEGIN
    SELECT ORD_QTY
    BULK COLLECT INTO L_ORD_QTY
    FROM ORDERS WHERE ORD_ID=600;
  END BEFORE STATEMENT;

  AFTER EACH ROW IS
  BEGIN
    DBMS_OUTPUT.PUT_LINE('Old Quantity Value:'||L_ORD_QTY(L_ORD_QTY.FIRST));
  END AFTER EACH ROW;
END;
/

SQL> UPDATE ORDERS
  2  SET ORD_QTY = 150
  3  WHERE ORD_ID = 600;
Old Quantity Value:90

1 row updated.
7. Conclusion
A compound trigger combines the properties of both statement-level and row-level triggers. Logic which was earlier maintained separately at row and statement level can now be placed in one single body, with state persisting in its variables across the timing points.
Source: http://www.exforsys.com/tutorials/oracle-11g/compound-triggers-in-oracle-11g.html
Oracle 2012. 5. 15. 11:01
"ORA-20023: Missing start and end values for time model
stat"
This error is related to a parameter setting introduced by Oracle 11g's licensing policy.
You should therefore check the value of the CONTROL_MANAGEMENT_PACK_ACCESS parameter.
In 11g the default value is DIAGNOSTIC+TUNING; if it is set to NONE, you lose the right to query the snapshots.
To change the parameter value, run the following:
alter system set
CONTROL_MANAGEMENT_PACK_ACCESS="DIAGNOSTIC+TUNING" scope=both;
Even though snapshot information continues to be generated automatically, it cannot be used for analysis unless the parameter is set as above.
For more detailed information, see the link below:
http://docs.oracle.com/cd/E11882_01/server.112/e25513/initparams037.htm
Oracle 2011. 8. 11. 15:24
Bug attributes
Type: B - Defect
Fixed in product version: - (no fixed version)
Severity: 2 - Severe Loss of Service
Product version: 10.2.0.2
Status: 44 - Not Feasible to fix, to Filer
Platform: 46 - Linux x86
Created: 02-Apr-2007
Platform version: LINUX AS4.0
Updated: 06-Aug-2008
Base bug: -
Database version: 10.2.0.2
Affected platforms: Generic
Product source: Oracle
Related products
Line: Oracle Database Products
Product family: Oracle Database
Area: Oracle Database
Product: 5 - Oracle Server - Enterprise Edition
Hdr: 5969934 10.2.0.2 RDBMS 10.2.0.2 DATA PUMP EXP PRODID-5 PORTID-46
Abstract: EXPDP CLIENT GETS UDE-00008 ORA-31626 WHILE THE SERVER SIDE EXPORT IS OK
*** 04/02/07 10:22 pm ***
TAR:
----
PROBLEM:
--------
1. Clear description of the problem encountered:
EXPDP client gets UDE-00008 and ORA-31626 while the server side export is
OK.
The export log from the server side looks OK.
// client console log
------------------------------
UDE-00008: operation generated ORACLE error 31626
ORA-31626: job does not exist
ORA-39086: cannot retrieve job information
ORA-6512: at "SYS.DBMS_DATAPUMP", line 2745
ORA-6512: at "SYS.DBMS_DATAPUMP", line 3712
ORA-6512: at line 1
------------------------------
2. Pertinent configuration information (MTS/OPS/distributed/etc)
2-node RAC
3. Indication of the frequency and predictability of the problem
100% reproducible at the customer's site.
4. Sequence of events leading to the problem
Please see attached trace files.
DIAGNOSTIC ANALYSIS:
--------------------
No error in alert log on both nodes.
The following is the output of expdp with trace=EB0300.
Deleting queue from the master table fails, same as BUG 5663241.
// expdep command line
expdp \'sys/XXX as sysdba\' trace=EB0300
DUMPFILE=work_app_dmp:20070330_expdp_err_test.dmp
FULL=y LOGFILE=work_app_dmp:20070330_expdp_err_test.log
// trace of Master Control Process
------------------------------
KUPM: 17:54:40.520: Closing job....
KUPM: 17:54:40.520: Final state in Close_job is: COMPLETED
KUPM: 17:54:40.520: dropping master since job never started
KUPM: 17:54:40.520: keep_master_flag = FALSE
KUPM: 17:54:40.520: Delete files = FALSE
KUPM: 17:54:40.520: File subsystem shutdown
KUPM: 17:54:40.520: In set_longops
KUPM: 17:54:40.521: Work so far is: 66.80495548248291015625
KUPM: 17:54:40.521: Master delete flag is: TRUE
KUPV: 17:54:40.521: Delete request for job: SYS.SYS_EXPORT_FULL_01
KUPV: 17:54:40.535: Deleting FT job entry
KUPV: 17:54:40.535: Deleting queues
KUPC: 17:54:40.966: Deleted queue SYS.KUPC$C_1_20070330174720.
KUPC: 17:54:40.968: Error Code: -24018
KUPC: 17:54:40.968: Error Text: deleteQueue: ORA-24018: STOP_QUEUE on
SYS.KUPC$S_1_20070330174720 failed, outstanding transactions found
KUPV: 17:54:40.968: Delete request for MT: SYS.SYS_EXPORT_FULL_01
*** 17:54:51.426
KUPM: 17:54:51.426: Fixed views cleaned up
KUPM: 17:54:51.427: Log file is closed.
KUPV: 17:54:51.427: Detach request
KUPV: 17:54:51.434: Deleting FT att entry
------------------------------
Importing a table (TABLES=test_tbl_2) with the dump file
(20070330_expdp_err_test.dmp)
is successfully done, which means that the export is OK regarding the table
(test_tbl_2).
// impdp command line
impdp \'sys/XXX as sysdba\' DUMPFILE=work_app_dmp:20070330_expdp_err_test.dmp
TABLES=test_tbl_2 LOGFILE=work_app_dmp:20070330_impdp_test.log
WORKAROUND:
-----------
None.
RELATED BUGS:
-------------
BUG 5416274
BUG 5663241 (duplicate bug of BUG 5416274)
REPRODUCIBILITY:
----------------
Reproduced twice at the customer's site.
The customer says this phenomenon occurred every time expdp was run.
TEST CASE:
----------
STACK TRACE:
------------
SUPPORTING INFORMATION:
-----------------------
I will be loading the following files:
dbram01_dm00_19485.trc
dbram01_dw01_19569.trc
expdp.txt
imp.txt
24 HOUR CONTACT INFORMATION FOR P1 BUGS:
----------------------------------------
DIAL-IN INFORMATION:
--------------------
IMPACT DATE:
------------
In the 10.2.0.2 release, there were a number of problems that caused the
expdp and impdp clients to exit prematurely, interpreting a nonfatal error as
a fatal one, giving the appearance that the job had failed when it hadn't. In
fact, inspection of the log file, if one was specified for the job, showed
that the job ran successfully to completion. Often a trace file written by
one of the Data Pump processes would provide more detail on the error that
had been misinterpreted as a fatal one. Many of these errors involved the
queues used for communication between the Data Pump processes, but there were
other issues as well.
With each subsequent release, these problems have been addressed, and the
client has become more robust and rarely, if ever, runs into situations like
this. However, this is the result of dozens of bug fixes in subsequent
releases, some in Data Pump and some in supporting layers. It's impossible to
know, at this point, what combination of bug fixes would address this
specific failure, and even if that was possible, it wouldn't address other
possible failures that look very similar on the client side.
Relying on information in the log file is one way to verify that the job
actually completed successfully. Problems like this one became much more
intermittent by the 10.2.0.4 and 10.2.0.5 releases and are rarely, if ever,
seen in 11.1 or later.
Source: Oracle Support
Oracle 2011. 8. 11. 15:19
Last Revised: 29-APR-2010   Type: PROBLEM   Status: PUBLISHED
Applies to:Oracle Server - Enterprise Edition - Version: 10.2.0.2 to 11.1.0.7 - Release: 10.2 to 11.1
Information in this document applies to any platform.
Symptoms
Datapump export from the client side completes with errors. Example:
expdp system/password full=y dumpfile=full_exp.dmp logfile=full_explog
...
. . exported "CALYPSO"."KICKOFF_CFG" 0 KB 0 rows
. . exported "CALYPSO"."LE_AGR_CHILD" 0 KB 0 rows
. . exported "CALYPSO"."LE_ATTR_CODE" 0 KB 0 rows
UDE-00008: operation generated ORACLE error 31626
ORA-31626: job does not exist
ORA-39086: cannot retrieve job information
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2745
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3712
ORA-06512: at line 1
However, reviewing the logfile shows that the "job successfully completed"
Cause
This issue has been discussed in Bug 5969934 EXPDP CLIENT GETS UDE-00008 ORA-31626 WHILE THE SERVER SIDE EXPORT IS OK.
Solution
The expdp client makes calls to the DBMS_DATAPUMP package to start and monitor the export job. Once the export job is underway, the client just monitors the job status by issuing DBMS_DATAPUMP.GET_STATUS.
Therefore, if the export logfile says "job successfully completed", the dump file generated by the job should be fine.
You can simply ignore the errors, since the dump file is still valid for an import.
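Since the dump file is valid whenever the server-side log reports success, one pragmatic check is to grep the log rather than trust the expdp client's exit status. A minimal sketch, assuming a log file path of your own; the log contents below are illustrative, not taken from this note:

```shell
# Write a sample Data Pump log (illustrative contents only).
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
. . exported "CALYPSO"."LE_ATTR_CODE"  0 KB  0 rows
Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at 12:34:56
EOF

# Trust the server-side log: if it says the job completed,
# the dump file is usable even if the client reported UDE-00008.
if grep -q "successfully completed" "$LOG"; then
  status="DUMP_OK"
else
  status="DUMP_SUSPECT"
fi
echo "$status"
rm -f "$LOG"
```

In practice you would point LOG at the LOGFILE you passed to expdp instead of the temporary sample written here.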
In the 10.2.0.2 release, there were a number of problems that caused the expdp and impdp clients to exit prematurely, interpreting a nonfatal error as a fatal one, giving the appearance that the job had failed when it hadn't. In fact, inspection of the log file, if one was specified for the job, showed that the job ran successfully to completion. Often a trace file written by one of the Data Pump processes would provide more detail on the error that had been misinterpreted as a fatal one. Many of these errors involved the queues used for communication between the Data Pump processes, but there were other issues as well.
With each subsequent release, these problems have been addressed, and the client has become more robust and rarely, if ever, runs into situations like this. However, this is the result of many bug fixes in subsequent releases, some in Data Pump and some in supporting layers. It's impossible to know, at this point, what combination of bug fixes would address this specific failure, and even if that was possible, it wouldn't address other possible failures that look very similar on the client side.
Relying on information in the log file is one way to verify that the job actually completed successfully. Problems like this one became much more intermittent by the 10.2.0.4 release and are rarely, if ever, seen in 11.1 or later.
References
BUG:5969934 - EXPDP CLIENT GETS UDE-00008 ORA-31626 WHILE THE SERVER SIDE EXPORT IS OK
Related materials
Source: Oracle Support
Data Pump has more bugs than you might expect. Even so, its performance makes it efficient enough to be worth living with these bugs, so learn to work around them and use it well.
Oracle 2011. 8. 11. 11:43
Bug Attributes
Type: B - Defect
Fixed in Product Version: 11.1
Severity: 2 - Severe Loss of Service
Product Version: 10.2.0.2.0
Status: 80 - Development to Q/A
Platform: 212 - IBM AIX on POWER Systems (64-bit)
Created: 12-Jul-2006
Platform Version: -
Updated: 16-Mar-2011
Base Bug: -
Database Version: 10.2.0.2.0
Affected Platforms: Generic
Product Source: Oracle
Related Products
Line: Oracle Database Products
Family: Oracle Database
Area: Oracle Database
Product: 5 - Oracle Server - Enterprise Edition
Hdr: 5387030 10.2.0.2.0 RDBMS 10.2.0.2.0 TXN MGMT LOCAL PRODID-5 PORTID-212
Abstract: AUTOMATIC TUNING OF UNDO_RETENTION CAUSING SPACE PROBLEMS
*** 07/12/06 08:56 am REQUEST TEXT ***
So when:
a) not using GUARANTEED retention at the undo tablespace level, and
b) not using auto-extensible data files (fixed-size data files),
then tuned_undoretention stays high to satisfy the maxquerylen at that time and
causes an out-of-undo-space error. The tuned_undoretention value did not return
to a lower value even after 36 hours of quiet time (not a single query was
submitted).
Then we changed the undo tablespace's data files to be auto-extensible, but did
not allow them to actually extend any further: the max size was set to the
currently allocated size.
With this change, tuned_undoretention returned to the value set for the
undo_retention parameter immediately after the long-running query completed.
This is the same behavior we have seen in 10g R1, without the need for
auto-extensible data files.
And that is the difference between 10g R2 and R1. This change forces the bogus
use of the auto-extensible option.
I will upload a text file for you to review in support of what I mentioned
here.
*** 07/12/06 08:56 am BUSINESS NEED ***
It causes confusion and is misleading to customers.
So the workaround is not a big deal, but 10g R2 is not backward compatible
with 10g R1 in this respect, and it is not clearly documented as such. If there is
no chance of changing this to match what 10g R1 did, then I think a Metalink
note to that effect would clear things up for anyone who runs into this
issue.
*** 07/12/06 08:57 am RESPONSE ***
SQL> show parameter undo
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDO_TBS
SQL>
SQL> alter session set nls_date_format='mm/dd/yy hh24:mi:ss';
Session altered.
SQL> set lines 132
SQL> get f1.sql
1 select begin_time
2 , end_time
3 , undoblks
4 , tuned_undoretention
5 , maxquerylen
6 , maxqueryid
7* from v$undostat
SQL> /
BEGIN_TIME        END_TIME          UNDOBLKS TUNED_UNDORETENTION MAXQUERYLEN MAXQUERYID
----------------- ----------------- -------- ------------------- ----------- -------------
07/05/06 14:32:23 07/05/06 14:37:25        1                 900           0
07/05/06 14:22:23 07/05/06 14:32:23        3                 900           0
07/05/06 14:12:23 07/05/06 14:22:23        4                 900           0
07/05/06 14:02:23 07/05/06 14:12:23        2                 900           0
07/05/06 13:52:23 07/05/06 14:02:23       50                 900           0
07/05/06 13:42:23 07/05/06 13:52:23        5                 900           0
07/05/06 13:32:23 07/05/06 13:42:23        2                 900           0
07/05/06 13:22:23 07/05/06 13:32:23        4                 900           0
07/05/06 13:12:23 07/05/06 13:22:23        6                 900           0
07/05/06 13:02:23 07/05/06 13:12:23        0                 900           0
07/05/06 12:52:23 07/05/06 13:02:23       63                 900           0
07/05/06 12:42:23 07/05/06 12:52:23      265                1800         962 271b023vduant
07/05/06 12:32:23 07/05/06 12:42:23      449                1201         362 271b023vduant
07/05/06 12:22:23 07/05/06 12:32:23      404                2948        2108 271b023vduant
07/05/06 12:12:23 07/05/06 12:22:23      372                2348        1506 271b023vduant
07/05/06 12:02:23 07/05/06 12:12:23      353                1747         906 271b023vduant
07/05/06 11:52:23 07/05/06 12:02:23      513                1421         582 271b023vduant
07/05/06 11:42:23 07/05/06 11:52:23      169                1005         164 fby3wmzn3hzvw
07/05/06 11:32:23 07/05/06 11:42:23        3                 900           0
07/05/06 11:22:23 07/05/06 11:32:23        2                 900           0
07/05/06 11:12:23 07/05/06 11:22:23       11                 900           0
07/05/06 11:02:23 07/05/06 11:12:23        4                 900           0
*** 07/12/06 08:58 am RESPONSE ***
SQL> get f2.sql
1 select tablespace_name
2 , retention
3 from dba_tablespaces
4* where tablespace_name = 'UNDO_TBS'
SQL> /
TABLESPACE_NAME RETENTION
------------------------------ -----------
UNDO_TBS NOGUARANTEE
SQL> col file_name for a45
SQL>
SQL> get f3.sql
1 select file_name
2 , autoextensible
3 , bytes/1048576 MB
4 , increment_by
5 , maxbytes/1048576 Max_MB
6 from dba_data_files
7* where tablespace_name = 'UNDO_TBS'
SQL> /
FILE_NAME                                     AUT     MB INCREMENT_BY MAX_MB
--------------------------------------------- --- ------ ------------ ------
/u10/oradata/VZCT/undo_tbs_02.dbf             YES   3000          128   3000
/u11/oradata/VZCT/undo_tbs_01.dbf             YES   2000          128   2000
/u13/oradata/VZCT/undo_tbs_03.dbf             YES   1000          128   1000
TAR: 5337754.992
----
PROBLEM:
--------
1) Unused undo segments are not being released. Originally the database was
running on 10.1 without problems, but after the upgrade to release 10.2.0.2 the
problem started.
2) The UNDO_RETENTION parameter was correctly set to 900.
3) The undo tablespace was fixed-size; in other words, the undo tablespace's
datafiles were not enabled with the autoextensible option.
4) Also, the undo tablespace was never enabled with the guaranteed retention
option.
5) So, after the upgrade to 10.2, the problem is that TUNED_UNDORETENTION stays
very high, causing the undo tablespace to run out of disk space, and unused
undo segments are not released.
6) The Oracle Undo Advisor always asks to increase the undo tablespace
(e.g. 17032 MB), which is not acceptable.
7) The TUNED_UNDORETENTION column (v$undostat) is always increasing and never
returns to 900 as it did on 10gR1:
=============================================================
SQL> get f1.sql
1 select begin_time, end_time, undoblks, tuned_undoretention, maxquerylen,
maxqueryid
2* from v$undostat
SQL> /
BEGIN_TIME        END_TIME          UNDOBLKS TUNED_UNDORETENTION MAXQUERYLEN MAXQUERYID
----------------- ----------------- -------- ------------------- ----------- -------------
04/15/06 12:37:38 04/15/06 12:40:00     1443               61233       36031 gf37j3d3q2h15
04/15/06 12:27:38 04/15/06 12:37:38     6285               61261       36031 gf37j3d3q2h15
04/15/06 12:17:38 04/15/06 12:27:38     5450               60206       35430 gf37j3d3q2h15
04/15/06 12:07:38 04/15/06 12:17:38     6348               60110       34830 gf37j3d3q2h15
04/15/06 11:57:38 04/15/06 12:07:38     5005               60094       34229 gf37j3d3q2h15
04/15/06 11:47:38 04/15/06 11:57:38     5440               59954       33628 gf37j3d3q2h15
04/15/06 11:37:38 04/15/06 11:47:38     6662               58994       33028 gf37j3d3q2h15
04/15/06 11:27:38 04/15/06 11:37:38     5197               58995       32427 gf37j3d3q2h15
04/15/06 11:17:38 04/15/06 11:27:38     5597               59184       31826 gf37j3d3q2h15
04/15/06 11:07:38 04/15/06 11:17:38     6035               59943       31225 gf37j3d3q2h15
04/15/06 06:47:38 04/15/06 06:57:38     5185               56068       15607 gf37j3d3q2h15
04/15/06 06:37:38 04/15/06 06:47:38     5002               54669       15007 gf37j3d3q2h15
04/15/06 06:27:38 04/15/06 06:37:38     4905               54491       14406 gf37j3d3q2h15
04/13/06 21:57:38 04/13/06 22:07:38     3474              345600         213 0mdbrtbvdd9gu
04/13/06 21:47:38 04/13/06 21:57:38       25              345600           0
04/13/06 21:37:38 04/13/06 21:47:38       22              345600           0
04/13/06 21:27:38 04/13/06 21:37:38       96              345600           0
236 rows selected.
=============================================================
DIAGNOSTIC ANALYSIS:
--------------------
1) This behavior was diagnosed through the v$undostat view (OWC session)
2) Please check the undo_retention_too_much.txt log file.
WORKAROUND:
-----------
None.
RELATED BUGS:
-------------
None
REPRODUCIBILITY:
----------------
It is always reproducible in the customer's environment.
TEST CASE:
----------
None
STACK TRACE:
------------
SUPPORTING INFORMATION:
-----------------------
Analyst: Esteban Bernal
AIM: ebernalus
24 HOUR CONTACT INFORMATION FOR P1 BUGS:
----------------------------------------
DIAL-IN INFORMATION:
--------------------
IMPACT DATE:
------------
REDISCOVERY INFORMATION:
Look for:
- System Managed Undo
undo_management=auto in init.ora
- Fixed size undo tablespace
select autoextensible from dba_data_files
where tablespace_name='<current_undo_ts>'
returns "No" for all the undo ts datafiles.
- The undo tablespace is already sized such that
it always has more than enough space to store
all the undo generated within the undo_retention
time, and the in-use undo space never exceeds the
undo tablespace warning alert threshold (see
below for the query to show the thresholds).
- The tablespace threshold alerts recommend that
the DBA add more space to the undo tablespace:
select creation_time, metric_value, message_type,
reason, suggested_action
from dba_outstanding_alerts
where object_name='<current_undo_ts>';
returns a suggested action of:
"Add space to the tablespace"
Or, this recommendation has been reported in the
past but the condition has now cleared:
select creation_time, metric_value, message_type,
reason, suggested_action, resolution
from dba_alert_history
where object_name='<current_undo_ts>';
- The undo tablespace in-use space exceeded the
warning alert threshold at some point in time:
To see the warning alert percentage threshold:
select object_type, object_name,
warning_value, critical_value
from dba_thresholds
where object_type='TABLESPACE';
To see the (current) undo tablespace percent of
space in-use:
select
((select (nvl(sum(bytes),0))
from dba_undo_extents
where tablespace_name='<current_undo_ts>'
and status in ('ACTIVE','UNEXPIRED')) *100) /
(select sum(bytes)
from dba_data_files
where tablespace_name='<current_undo_ts>')
"PCT_INUSE"
from dual;
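The percentage that query reports is simply in-use (ACTIVE/UNEXPIRED) undo bytes over total undo-tablespace bytes, times 100. As a quick arithmetic sanity check of the same formula, with made-up sizes rather than values from this record:

```shell
# Example figures only: 1500 MB of ACTIVE/UNEXPIRED undo
# in a 6000 MB undo tablespace.
inuse_bytes=$(( 1500 * 1048576 ))
total_bytes=$(( 6000 * 1048576 ))

# Integer percentage of undo space in use, mirroring the PCT_INUSE query.
pct_inuse=$(( inuse_bytes * 100 / total_bytes ))
echo "PCT_INUSE=${pct_inuse}"
```

With those example sizes the tablespace is 25% in use, well under a typical warning threshold.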
WORKAROUND:
There are 3 possible alternate workarounds (any one of
these should resolve the problem of the alerts triggering
unnecessarily):
1. Set the autoextend and maxsize attributes of each datafile
in the undo ts so it is autoextensible and its maxsize is
equal to its current size, so the undo tablespace now has
the autoextend attribute but does not autoextend:
alter database datafile '<datafile_filename>'
autoextend on maxsize <current_size>;
With this setting, v$undostat.tuned_undoretention is not
calculated based on a percentage of the undo tablespace
size, instead v$undostat.tuned_undoretention is set to
the maximum of:
maxquerylen secs + 300
undo_retention specified in init.ora
2. Set the following hidden parameter:
in init.ora: _smu_debug_mode=33554432
or use: alter system set "_smu_debug_mode" = 33554432;
With this setting, v$undostat.tuned_undoretention is not
calculated based on a percentage of the fixed size undo
tablespace, instead v$undostat.tuned_undoretention is
set to the maximum of:
maxquerylen secs + 300
undo_retention specified in init.ora
3. Set the following hidden parameter:
in init.ora: _undo_autotune = false
or use: alter system set "_undo_autotune" = false;
With this setting, v$undostat (and therefore
v$undostat.tuned_undoretention) is not maintained, and
the undo_retention used is the one specified in
init.ora
RELEASE NOTES:
]]In 10.2, for a non-autoextend SMU undo tablespace which
]]is already adequately sized to handle the transaction
]]workload for the specified undo_retention time, the
]]tablespace threshold alert mechanism may still recommend
]]that the DBA add more space to the undo tablespace.
Base bugs referenced in the record: 7145055, 7117498, 7285182.
Related bugs referenced in the record: 7342997, 5471994, 4367986, 6085625,
5648872, 5863277, 5177766, 5254759, 6452485, 5476873, 5599293, 5922239,
5081798, 5057695, 5010657, 6859515.
Oracle 2011. 8. 11. 11:39
Subject: Automatic Tuning of Undo_retention Causes Space Problems
Doc ID: Note:420525.1
Type: PROBLEM
Last Revision Date: 26-MAY-2008
Status: PUBLISHED
Applies to:Oracle Server - Enterprise Edition - Version: 10.2 to 11.0
This problem can occur on any platform.
Symptoms
Look for:
1.) System Managed Undo
undo_management=auto in init.ora file
2.) Fixed size undo tablespace
SQL> select autoextensible from dba_data_files where tablespace_name='<current_undo_tablespace>';
returns "No" for all the undo tablespace datafiles.
3.) The undo tablespace is already sized such that it always has more than enough space to store all the undo generated within the undo_retention time, and the in-use undo space never exceeds the undo tablespace warning alert threshold (see below for the query to show the thresholds).
4.) The tablespace threshold alerts recommend that the DBA add more space to the undo tablespace:
SQL> select creation_time, metric_value, message_type,reason, suggested_action from dba_outstanding_alerts where object_name='<current_undo_ts>';
returns a suggested action of: "Add space to the tablespace"
Or,
This recommendation has been reported in the past but the condition has now cleared:
SQL> select creation_time, metric_value, message_type, reason, suggested_action, resolution from dba_alert_history where object_name='<current_undo_ts>';
5.) The undo tablespace in-use space exceeded the warning alert threshold at some point in time:
To see the warning alert percentage threshold:
SQL> select object_type, object_name, warning_value, critical_value from dba_thresholds where object_type='TABLESPACE';
To see the (current) undo tablespace percent of space in-use:
SQL> select
((select (nvl(sum(bytes),0))
from dba_undo_extents
where tablespace_name='<current_undo_ts>'
and status in ('ACTIVE','UNEXPIRED')) *100) /
(select sum(bytes)
from dba_data_files
where tablespace_name='<current_undo_ts>')
"PCT_INUSE"
from dual;
Cause
This is due to Bug 5387030, which is fixed in release 11.1 of the RDBMS server.
Solution
Apply the patch for Bug 5387030. Check MetaLink for patch availability for your platform and RDBMS version.
Workaround
There are 3 possible alternate workarounds (any one of these should resolve the problem of the alerts triggering unnecessarily):
1.) Set the autoextend and maxsize attributes of each datafile in the undo ts so it is autoextensible and its maxsize is equal to its current size, so the undo tablespace now has the autoextend attribute but does not autoextend:
SQL> alter database datafile '<datafile_filename>'
autoextend on maxsize <current_size>;
With this setting, v$undostat.tuned_undoretention is not calculated based on a percentage of the undo tablespace size; instead, v$undostat.tuned_undoretention is set to the maximum of (maxquerylen + 300) seconds and the undo_retention specified in the init.ora file.
2.) Set the following hidden parameter in init.ora file:
_smu_debug_mode=33554432
or
SQL> Alter system set "_smu_debug_mode" = 33554432;
With this setting, v$undostat.tuned_undoretention is not calculated based on a percentage of the fixed-size undo tablespace; instead, v$undostat.tuned_undoretention is set to the maximum of (maxquerylen + 300) seconds and the undo_retention specified in the init.ora file.
3.) Set the following hidden parameter in init.ora:
_undo_autotune = false
or
SQL> Alter system set "_undo_autotune" = false;
With this setting, v$undostat (and therefore v$undostat.tuned_undoretention) is not maintained, and the undo_retention used is the one specified in the init.ora file.
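Under workarounds 1 and 2, the tuned value therefore reduces to max(maxquerylen + 300, undo_retention). A quick sketch of that rule with example numbers (962 s is the longest query length from the first v$undostat listing in the bug record; 900 s is the undo_retention setting):

```shell
# Sketch of the tuned_undoretention rule under workarounds 1 and 2:
# tuned = max(maxquerylen + 300, undo_retention), all in seconds.
maxquerylen=962        # example: longest query seen in v$undostat
undo_retention=900     # example: undo_retention from init.ora

tuned=$(( maxquerylen + 300 ))
if [ "$tuned" -lt "$undo_retention" ]; then
  tuned=$undo_retention
fi
echo "tuned_undoretention=${tuned}"
```

With these inputs the tuned retention settles at 1262 seconds, instead of being inflated to fill a percentage of the fixed-size tablespace.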
References
Bug 5387030 - AUTOMATIC TUNING OF UNDO_RETENTION CAUSING SPACE PROBLEMS
Source: Oracle Support
alter system set "_IN_MEMORY_UNDO"=false;
Alter system set "_undo_autotune" = false;
Oracle 2011. 8. 11. 11:30
Last Revised: 20-MAY-2010   Type: PROBLEM   Status: PUBLISHED
Applies to:Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 10.2.0.4 - Release: 10.2 to 10.2
Information in this document applies to any platform.
Symptoms
Running an expdp on a certain table usually takes an hour.
However, it now runs for over a day. The problem appears to be with one table (the largest in the database).
When the expdp is started the first table that it processes has an estimated process time of over 24hrs.
When the same export is run from two other servers, it shows the elapsed time for the same table as around 20 minutes.
ERROR
-----------------------
Processing object type SCHEMA_EXPORT/TABLE/MATERIALIZED_VIEW_LOG
ORA-31693: Table data object "MERIDIAN"."LINEPAST" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 1 with name "_SYSSMU1$" too small
The table does not have any LOB columns.
Increasing the undo_retention parameter did not help.
Changes
Most of the rows in the table were recently updated.
Cause
Updating the rows in the table has caused high fragmentation (chained/migrated rows), which significantly increased the time to export this table.
Solution
1. Recreate the fragmented table:
SQL> create table <new_table_name> as (select * from <fragmented_table>);
then drop the fragmented table.
You may rename <new_table_name> to the old table name.
2. Perform the expdp job. (This time it finished in a few minutes.)
You may also refer to Note 746778.1 How to Identify, Avoid and Eliminate Chained and Migrated Rows ?
References
NOTE:746778.1 - How to Identify, Avoid and Eliminate Chained and Migrated Rows
Related materials
Source: Oracle Support
Contents of NOTE:746778.1: