Oracle 2013. 11. 15. 18:39
TRIM
Syntax: TRIM([ { { LEADING | TRAILING | BOTH } [ trim_character ] | trim_character } FROM ] trim_source)
Purpose

TRIM enables you to trim leading or trailing characters (or both) from a character string. If trim_character or trim_source is a character literal, then you must enclose it in single quotation marks.

If you specify LEADING, then Oracle Database removes any leading characters equal to trim_character. If you specify TRAILING, then Oracle removes any trailing characters equal to trim_character. If you specify BOTH or none of the three, then Oracle removes leading and trailing characters equal to trim_character. If you do not specify trim_character, then the default value is a blank space. If you specify only trim_source, then Oracle removes leading and trailing blank spaces. The function returns a value with datatype VARCHAR2. The maximum length of the value is the length of trim_source. If either trim_source or trim_character is null, then the TRIM function returns null.

Both trim_character and trim_source can be VARCHAR2 or any datatype that can be implicitly converted to VARCHAR2. The string returned is of VARCHAR2 datatype if trim_source is a character datatype and a LOB if trim_source is a LOB datatype. The return string is in the same character set as trim_source.

Examples

This example trims leading zeros from the hire date of the employees in the hr schema:

SELECT employee_id,
TO_CHAR(TRIM(LEADING 0 FROM hire_date))
FROM employees
WHERE department_id = 60
ORDER BY employee_id;
EMPLOYEE_ID TO_CHAR(T
----------- ---------
103 3-JAN-90
104 21-MAY-91
105 25-JUN-97
106 5-FEB-98
107 7-FEB-99
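To make the LEADING, TRAILING, and BOTH options concrete, here is a small sketch of my own (plain string literals, not from the reference):

SELECT TRIM(LEADING  'x' FROM 'xxABCxx') lead_trim,   -- 'ABCxx'
       TRIM(TRAILING 'x' FROM 'xxABCxx') trail_trim,  -- 'xxABC'
       TRIM(BOTH     'x' FROM 'xxABCxx') both_trim,   -- 'ABC'
       TRIM('   padded   ')              blank_trim   -- 'padded' (default: blanks, both sides)
  FROM DUAL;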
LTRIM

Syntax: LTRIM(char [, set ])
Purpose

LTRIM removes from the left end of char all of the characters contained in set. If you do not specify set, then it defaults to a single blank. If char is a character literal, then you must enclose it in single quotation marks. Oracle Database begins scanning char from its first character and removes all characters that appear in set until reaching a character not in set, and then returns the result.

Both char and set can be any of the datatypes CHAR, VARCHAR2, NCHAR, NVARCHAR2, CLOB, or NCLOB. The string returned is of VARCHAR2 datatype if char is a character datatype, NVARCHAR2 if char is a national character datatype, and a LOB if char is a LOB datatype.

Example

The following example trims the redundant first word from a group of product names in the oe.products table:

SELECT product_name, LTRIM(product_name, 'Monitor ') "Short Name"
FROM products
WHERE product_name LIKE 'Monitor%';
PRODUCT_NAME Short Name
-------------------- ---------------
Monitor 17/HR 17/HR
Monitor 17/HR/F 17/HR/F
Monitor 17/SD 17/SD
Monitor 19/SD 19/SD
Monitor 19/SD/M 19/SD/M
Monitor 21/D 21/D
Monitor 21/HR 21/HR
Monitor 21/HR/M 21/HR/M
Monitor 21/SD 21/SD
Monitor Hinge - HD Hinge - HD
Monitor Hinge - STD Hinge - STD
RTRIM

Syntax: RTRIM(char [, set ])

Purpose

RTRIM removes from the right end of char all of the characters that appear in set. This function is useful for formatting the output of a query.

If you do not specify set, then it defaults to a single blank. If char is a character literal, then you must enclose it in single quotation marks. RTRIM works similarly to LTRIM. Both char and set can be any of the datatypes CHAR, VARCHAR2, NCHAR, NVARCHAR2, CLOB, or NCLOB. The string returned is of VARCHAR2 datatype if char is a character datatype, NVARCHAR2 if char is a national character datatype, and a LOB if char is a LOB datatype.

Example

The following example trims all the right-most occurrences of period, slash, and equals sign from a string:

SELECT RTRIM('BROWNING: ./=./=./=./=./=.=','/=.') "RTRIM example" FROM DUAL;
RTRIM exam
----------
BROWNING:
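One detail worth remembering for both LTRIM and RTRIM: set is treated as a set of individual characters, not as a whole substring. A quick sketch of my own:

SELECT LTRIM('yxyxABCxy', 'xy') l_result,  -- 'ABCxy'
       RTRIM('yxyxABCxy', 'xy') r_result   -- 'yxyxABC'
  FROM DUAL;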
reference : Oracle® Database SQL Language Reference - Oracle11gR1
Looks like I have been relying only on RTRIM and LTRIM far too much until now...
Oracle 2013. 8. 21. 18:14
Oracle Database 12c : Tom’s Top 12 Things About the Latest Generation of Database Technology
Oracle 2013. 4. 9. 09:50
A Comparison of Oracle's DATE and TIMESTAMP Datatypes
By James Koopmann
If you want to store date and time information in Oracle, you really only have two different options for the column's datatype. Let's take a quick look at these two datatypes and what they offer.
DATE datatype
This is the datatype that we are all too familiar with when we think about representing date and time values. It has the ability to store the month, day, year, century, hours, minutes, and seconds. It is typically good for representing when something has happened or should happen in the future. The problem with the DATE datatype is its granularity when trying to determine a time interval between two events that happen within a second of each other. This issue is solved later in this article when we discuss the TIMESTAMP datatype. In order to represent the stored date in a more readable format, the TO_CHAR function has traditionally been wrapped around the date, as in Listing A.
LISTING A:
Formatting a date
SQL> SELECT TO_CHAR(date1,'MM/DD/YYYY HH24:MI:SS') "Date" FROM date_table;
Date
---------------------------
06/20/2003 16:55:14
06/26/2003 11:16:36
About the only trouble I have seen people get into when using the DATE datatype is doing arithmetic on the column in order to figure out the number of years, weeks, days, hours, and seconds between two dates. What needs to be realized when doing the calculation is that when you do subtraction between dates, you get a number that represents the number of days. You should then multiply that number by the number of seconds in a day (86400) before you continue with calculations to determine the interval with which you are concerned. Check out Listing B for my solution on how to extract the individual time intervals for a subtraction of two dates. I am aware that the fractions could be reduced but I wanted to show all the numbers to emphasize the calculation.
LISTING B:
Determine the interval breakdown between two dates for a DATE datatype
1 SELECT TO_CHAR(date1,'MMDDYYYY:HH24:MI:SS') date1,
2 TO_CHAR(date2,'MMDDYYYY:HH24:MI:SS') date2,
3 trunc(86400*(date2-date1))-
4 60*(trunc((86400*(date2-date1))/60)) seconds,
5 trunc((86400*(date2-date1))/60)-
6 60*(trunc(((86400*(date2-date1))/60)/60)) minutes,
7 trunc(((86400*(date2-date1))/60)/60)-
8 24*(trunc((((86400*(date2-date1))/60)/60)/24)) hours,
9 trunc((((86400*(date2-date1))/60)/60)/24) days,
10 trunc(((((86400*(date2-date1))/60)/60)/24)/7) weeks
11* FROM date_table
DATE1 DATE2 SECONDS MINUTES HOURS DAYS WEEKS
----------------- ----------------- ---------- ---------- ---------- ---------- ----------
06202003:16:55:14 07082003:11:22:57 43 27 18 17 2
06262003:11:16:36 07082003:11:22:57 21 6 0 12 1
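The key fact behind Listing B is that subtracting two DATE values returns the difference as a number of days, including a fractional part. A minimal check of my own against the same (hypothetical) date_table:

SELECT date1,
       date2,
       (date2 - date1)         diff_in_days,
       (date2 - date1) * 86400 diff_in_seconds
FROM   date_table;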
TIMESTAMP datatype
One of the main problems with the DATE datatype was its inability to be granular enough to determine which event might have happened first in relation to another event. Oracle has expanded on the DATE datatype and has given us the TIMESTAMP datatype, which stores all the information that the DATE datatype stores but also includes fractional seconds.

If you want to convert a DATE datatype to a TIMESTAMP datatype format, just use the CAST function as I do in Listing C. As you can see, there is a fractional seconds part of '.000000' on the end of this conversion. This is only because, when converting from the DATE datatype, which has no fractional seconds, they default to zeros and the display defaults to the default timestamp format (NLS_TIMESTAMP_FORMAT). If you are moving a DATE datatype column from one table to a TIMESTAMP datatype column of another table, all you need to do is a straight INSERT ... SELECT FROM and Oracle will do the conversion for you.

Look at Listing D for a formatting of the new TIMESTAMP datatype, where everything is the same as formatting the DATE datatype as we did in Listing A. Beware: while the TO_CHAR function works with both datatypes, the TRUNC function will not work with a datatype of TIMESTAMP. This is a clear indication that the TIMESTAMP datatype is intended for dates and times where the difference in time is of utmost importance, so much so that Oracle won't even let you compare like values. If you want to show the fractional seconds within a TIMESTAMP datatype, look at Listing E. In Listing E, we are only showing 3 digits of fractional seconds.
LISTING C:
Convert DATE datatype to TIMESTAMP datatype
SQL> SELECT CAST(date1 AS TIMESTAMP) "Date" FROM t;
Date
-----------------------------------------------------
20-JUN-03 04.55.14.000000 PM
26-JUN-03 11.16.36.000000 AM
LISTING D:
Formatting of the TIMESTAMP datatype
1 SELECT TO_CHAR(time1,'MM/DD/YYYY HH24:MI:SS') "Date" FROM date_table
Date
-------------------
06/20/2003 16:55:14
06/26/2003 11:16:36
LISTING E:
Formatting of the TIMESTAMP datatype with fractional seconds
1 SELECT TO_CHAR(time1,'MM/DD/YYYY HH24:MI:SS:FF3') "Date" FROM date_table
Date
-----------------------
06/20/2003 16:55:14:000
06/26/2003 11:16:36:000
Calculating the time difference between two TIMESTAMP datatypes is much easier than with the old DATE datatype. Look at what happens when you just do straight subtraction of the columns in Listing F. As you can see, the results are much easier to recognize: 17 days, 18 hours, 27 minutes, and 43 seconds for the first row of output. This means no more worrying about how many seconds are in a day and all those cumbersome calculations. The calculations for getting the weeks, days, hours, minutes, and seconds then become a matter of picking out the number by using the SUBSTR function, as can be seen in Listing G.
LISTING F:
Straight subtraction of two TIMESTAMP datatypes
1 SELECT time1, time2, (time2-time1)
2* FROM date_table
TIME1 TIME2 (TIME2-TIME1)
------------------------------ ---------------------------- ----------------------
06/20/2003:16:55:14:000000 07/08/2003:11:22:57:000000 +000000017 18:27:43.000000
06/26/2003:11:16:36:000000 07/08/2003:11:22:57:000000 +000000012 00:06:21.000000
LISTING G:
Determine the interval breakdown between two dates for a TIMESTAMP datatype
1 SELECT time1,
2 time2,
3 substr((time2-time1),instr((time2-time1),' ')+7,2) seconds,
4 substr((time2-time1),instr((time2-time1),' ')+4,2) minutes,
5 substr((time2-time1),instr((time2-time1),' ')+1,2) hours,
6 trunc(to_number(substr((time2-time1),1,instr(time2-time1,' ')))) days,
7 trunc(to_number(substr((time2-time1),1,instr(time2-time1,' ')))/7) weeks
8* FROM date_table
TIME1 TIME2 SECONDS MINUTES HOURS DAYS WEEKS
------------------------- -------------------------- ------- ------- ----- ---- -----
06/20/2003:16:55:14:000000 07/08/2003:11:22:57:000000 43 27 18 17 2
06/26/2003:11:16:36:000000 07/08/2003:11:22:57:000000 21 06 00 12 1
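As an alternative to the SUBSTR/INSTR parsing in Listing G (my addition, not part of the original article), the EXTRACT function can pull each component directly out of the INTERVAL DAY TO SECOND value produced by the subtraction:

SELECT EXTRACT(DAY    FROM (time2 - time1)) days,
       EXTRACT(HOUR   FROM (time2 - time1)) hours,
       EXTRACT(MINUTE FROM (time2 - time1)) minutes,
       EXTRACT(SECOND FROM (time2 - time1)) seconds
FROM   date_table;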
System Date and Time
In order to get the system date and time returned in a DATE datatype, you can use the SYSDATE function, such as:
SQL> SELECT SYSDATE FROM DUAL;
In order to get the system date and time returned in a TIMESTAMP datatype, you can use the SYSTIMESTAMP function such as:
SQL> SELECT SYSTIMESTAMP FROM DUAL;
You can set the initialization parameter FIXED_DATE to return a constant value for what is returned from the SYSDATE function. This is a great tool for testing date and time sensitive code. Just beware that this parameter has no effect on the SYSTIMESTAMP function. This can be seen in Listing H.
LISTING H:
Setting FIXED_DATE and effects on SYSDATE and SYSTIMESTAMP
SQL> ALTER SYSTEM SET fixed_date = '2003-01-01-10:00:00';
System altered.
SQL> select sysdate from dual;
SYSDATE
---------
01-JAN-03
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------
09-JUL-03 11.05.02.519000 AM -06:00
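If you experiment with FIXED_DATE, remember to clear it afterwards; as far as I know, setting it to NONE restores the normal SYSDATE behavior:

SQL> ALTER SYSTEM SET fixed_date = NONE;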
When working with date and time, the options are clear. You have at your disposal the DATE and TIMESTAMP datatypes. Just be aware that while there are similarities, there are also differences that could create havoc if you try to convert to the more powerful TIMESTAMP datatype. Each of the two has its own strengths in simplicity and granularity. Choose wisely.
Oracle 2013. 4. 9. 09:36
Michael -- Thanks for the question regarding "Timestamp vs. Date - is Date still appropriate for new systems?", version 10.2
Submitted on 26-Nov-2006 2:49 UTC
Last updated 18-Feb-2010 9:54
You Asked
The new Timestamp datatypes are described as extensions of the Date datatype in Oracle's
documentation.
They are clearly more powerful and include capabilities such as tracking times down to
milliseconds, incorporating time zones, etc.
It's also generally accepted that you should minimize the complexity of a system where
possible, and that one way of doing this is not to use redundant entities (in the general
sense, not the ER diagram sense).
Given these points, is there any reason to still use Dates in new system development?
Are there issues with performance, compatibility, etc? Or is this like the switch from
LONG and LONG RAW to LOBs where all new development should use the LOB? Oracle's
decision not to extend Date to also include Date With Timezone and Date With Local
Timezone suggests that Oracle doesn't see the point in extending Date... presumably
because it will be deprecated and replaced with Timestamp.
-----------------------
BTW, on a closely related topic, I don't understand the logic behind making Timestamp
With Time Zone and Timestamp With Local Time Zone datatypes.
Seems to be that a database (or any other system) should store all Date / Times in a
canonical form - presumably seconds since an agreed on starting second. Then all of this
Time Zone stuff is just a matter of Locale and client settings - my client should specify
its Timezone to the Server and then all dates / times should be converted to the
canonical form using that timezone unless I specify another time zone together with the
date / time.
The With Local Timezone and With Timezone datatypes seem to describe how to present the
information to clients more than what the datatype is.
and we said...I think dates will be here for a long time to come.
Timestamp is an ANSI thing, for compliance.
The choice is yours.
And you cannot really store things in a canonical format - there are too many variations.
Dates are 7 bytes always
Timestamps are also 7 (no fractional seconds)
or 11 (fractional seconds)
or 13 (fractional seconds with timezone)
Probably - just using timestamp would be good going forward (there might be tools that
cannot "deal" correctly with a timestamp yet - they would cause it to convert to a date -
you'll use to_timestamp and such with them).
But in reality, date is just coming out of my fingers still :) I typically have no need
for the fractional seconds nor timezone.
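A quick way to see those byte counts for yourself (my sketch, not Tom's; exact output varies by version and value) is to DUMP the stored values:

create table size_demo ( d date, ts timestamp, tstz timestamp with time zone );
insert into size_demo values ( sysdate, systimestamp, systimestamp );
select dump(d) d, dump(ts) ts, dump(tstz) tstz from size_demo;
-- typically: DATE -> Len=7, TIMESTAMP -> Len=11 (when fractional seconds are present),
-- TIMESTAMP WITH TIME ZONE -> Len=13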
What about variations? November 26, 2006 - 10am UTC
Reviewer: Mike Friedman from Shenzhen, China
Mind explaining this a bit more?
"And you cannot really store things in a canonical format - there are too many variations."
Does this mean that Oracle doesn't store dates internally as seconds since a reference date / time?
I know Windows does... and they have to deal with as many variations as Oracle, don't they?
Followup November 26, 2006 - 11am UTC: The date datatype does, but not a "seconds since something".
(windows cannot do a wide range of dates? hmmm, you might not be entirely accurate on that seconds
thing, I'll bet windows uses more than one format to store dates, or they cannot store dates that
fall outside a very very very small range!)
<quote src=Expert Oracle Database Architecture>
DATE Type
The DATE type is a fixed-width 7-byte date/time datatype. It will always contain the seven
attributes of the century, the year within the century, the month, the day of the month, the hour,
the minute, and the second. Oracle uses an internal format to represent that information, so it is
not really storing 20, 05, 06, 25, 12, 01, 00 for June 25, 2005, at 12:01:00. Using the built-in
DUMP function, we can see what Oracle really stores:
ops$tkyte@ORA10G> create table t ( x date );
Table created.
ops$tkyte@ORA10G> insert into t (x) values
2 ( to_date( '25-jun-2005 12:01:00',
3 'dd-mon-yyyy hh24:mi:ss' ) );
1 row created.
ops$tkyte@ORA10G> select x, dump(x,10) d from t;
X D
--------- -----------------------------------
25-JUN-05 Typ=12 Len=7: 120,105,6,25,13,2,1
The century and year bytes (the 120,105 in the DUMP output) are stored in an excess-100 notation.
You would have to subtract 100 from them to determine the correct century and year. The reason for
the excess-100 notation is support of BC and AD dates. If you subtract 100 from the century byte
and get a negative number, it is a BC date, for example:
ops$tkyte@ORA10G> insert into t (x) values
2 ( to_date( '01-jan-4712bc',
3 'dd-mon-yyyybc hh24:mi:ss' ) );
1 row created.
ops$tkyte@ORA10G> select x, dump(x,10) d from t;
X D
--------- -----------------------------------
25-JUN-05 Typ=12 Len=7: 120,105,6,25,13,2,1
01-JAN-12 Typ=12 Len=7: 53,88,1,1,1,1,1
So, when we insert 01-JAN-4712BC, the century byte is 53, and 53 - 100 = -47, the century we
inserted. Because it is negative, we know that it is a BC date. This storage format also allows the
dates to be naturally sortable in a binary sense. Since 4712 BC is less than 4710 BC, we'd like a
binary representation that supports that. By dumping those two dates, we can see that 01-JAN-4710BC
is larger than the same day in 4712 BC, so they will sort and compare nicely:
ops$tkyte@ORA10G> insert into t (x) values
2 ( to_date( '01-jan-4710bc',
3 'dd-mon-yyyybc hh24:mi:ss' ) );
1 row created.
ops$tkyte@ORA10G> select x, dump(x,10) d from t;
X D
--------- -----------------------------------
25-JUN-05 Typ=12 Len=7: 120,105,6,25,13,2,1
01-JAN-12 Typ=12 Len=7: 53,88,1,1,1,1,1
01-JAN-10 Typ=12 Len=7: 53,90,1,1,1,1,1
The month and day bytes, the next two fields, are stored naturally, without any modification. So,
June 25 used a month byte of 6 and a day byte of 25. The hour, minute, and second fields are stored
in excess-1 notation, meaning we must subtract 1 from each component to see what time it really
was. Hence midnight is represented as 1,1,1 in the date field.
This 7-byte format is naturally sortable, as you have seen: it is a 7-byte field that can be sorted
in a binary fashion from smaller to larger (or vice versa) very efficiently. Additionally, its
structure allows for easy truncation, without converting the date into some other format. For
example, truncating the date we just stored, 25-JUN-2005 12:01:00, to the day (removing the hours,
minutes, and seconds) is very straightforward: just set the trailing three bytes to 1,1,1 and the
time component is as good as erased.
</quote>
Timestamps get more complicated - there are timestamps without timezones, with local timezones -
you cannot store those in "one fashion" - one preserves the original timezone for example (so you
cannot put it into UTC, you need to remember what timezone it used to be) and the other puts it
into the database timezone.
Thanks November 26, 2006 - 12pm UTC
Reviewer: Michael Friedman from Shenzhen, China
to_timestamp September 10, 2008 - 8pm UTC
Reviewer: Paras Bansal from Milpitas, CA
Can you explain this -
SELECT cast(to_timestamp('01-may-2005 23:59:59.501','dd-mon-yyyy hh24:mi:ss:ff') AS DATE) FROM
dual;
It returns me 2nd may as DATE. Can we do something so that it truncs the millisec portion and we
get only 1st May.
regards,
Paras Bansal
Followup September 16, 2008 - 11am UTC: substr the string you are trying to convert and just use to_date.
ops$tkyte%ORA10GR2> SELECT to_date(substr('01-may-2005 23:59:59.501', 1, 20),'dd-mon-yyyy hh24:mi:ss') from dual;
TO_DATE(SUBSTR('01-M
--------------------
01-may-2005 23:59:59
curious TO_TIMESTAMPing around September 16, 2008 - 2pm UTC
Reviewer: Duke Ganote from Amelia, Ohio USA
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Sep 16 13:35:17 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the OLAP and Data Mining Scoring Engine options
[glogin.sql includes:
alter session set nls_date_format='yyyy-mm-dd HH24:MI:SS';
alter session set nls_timestamp_format = 'yyyy-mm-dd HH24:MI:SS.FF';
]
select cast(ts1 as date) dt1
, ts1
, cast(ts2 as date) dt2
, ts2
from
(
select to_timestamp('01-may-2005 23:59:59.501'
,'dd-mon-yyyy hh24:mi:ss:ff') ts1
, systimestamp ts2
from dual
)
/
DT1
TS1
DT2
TS2
--------------------------------------------------
2005-05-02 00:00:00
2005-05-01 23:59:59.501000000
2008-09-16 13:33:39
16-SEP-08 01.33.38.512485 PM -04:00
Followup September 16, 2008 - 11pm UTC: do you have access to support to file a bug?
where is the bug ? September 17, 2008 - 4am UTC
Reviewer: Sokrates
sorry, I don't see it.
Can you please explain ?
What was expected output ?
Followup September 17, 2008 - 9am UTC: when you cast a to_timestamp value as a date - the time component disappeared. when you cast a timestamp value as a date - the time component did NOT disappear.
Isn't it just rounding February 16, 2010 - 8pm UTC
Reviewer: Warwick from Sydney, Australia
Hi Tom,
Isn't this due to rounding? I tried this using both 9i and 11gR1.
alter session set nls_date_format='yyyy-mm-dd HH24:MI:SS';
alter session set nls_timestamp_format = 'yyyy-mm-dd HH24:MI:SS.FF';
Running the original query returned the results above. Changing it to
select cast(ts1 as date) dt1
, ts1
, cast(ts2 as date) dt2
, ts2
from
(
select to_timestamp('01-may-2005 23:59:58.501'
,'dd-mon-yyyy hh24:mi:ss:ff') ts1
, systimestamp ts2
from dual
)
/
DT1
TS1
DT2
TS2
---------------------
2005-05-01 23:59:59
2005-05-01 23:59:58.501000000
2010-02-17 12:49:57
17-FEB-10 12.49.56.986028 PM +11:00
Cheers Warwick
Followup February 17, 2010 - 8am UTC: 9i and 10g (and 11gR1) all lose the time component:
ops$tkyte%ORA9IR2> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
PL/SQL Release 9.2.0.8.0 - Production
CORE 9.2.0.8.0 Production
TNS for Linux: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production
ops$tkyte%ORA9IR2> alter session set nls_date_format='yyyy-mm-dd HH24:MI:SS';
Session altered.
ops$tkyte%ORA9IR2> alter session set nls_timestamp_format = 'yyyy-mm-dd HH24:MI:SS.FF';
Session altered.
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> select cast(ts1 as date) dt1
2 , ts1
3 , cast(ts2 as date) dt2
4 , ts2
5 from
6 (
7 select to_timestamp('01-may-2005 23:59:59.501'
8 ,'dd-mon-yyyy hh24:mi:ss:ff') ts1
9 , systimestamp ts2
10 from dual
11 )
12 /
DT1
-------------------
TS1
---------------------------------------------------------------------------
DT2
-------------------
TS2
---------------------------------------------------------------------------
2005-05-02 00:00:00
2005-05-01 23:59:59.501000000
2010-02-17 09:54:28
17-FEB-10 09.54.27.579168 AM -05:00
10g:
ops$tkyte%ORA10GR2> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
ops$tkyte%ORA10GR2> select cast(ts1 as date) dt1
2 , ts1
3 , cast(ts2 as date) dt2
4 , ts2
5 from
6 (
7 select to_timestamp('01-may-2005 23:59:59.501'
8 ,'dd-mon-yyyy hh24:mi:ss:ff') ts1
9 , systimestamp ts2
10 from dual
11 )
12 /
DT1
-------------------
TS1
---------------------------------------------------------------------------
DT2
-------------------
TS2
---------------------------------------------------------------------------
2005-05-02 00:00:00
2005-05-01 23:59:59.501000000
2010-02-17 09:55:29
17-FEB-10 09.55.28.592716 AM -05:00
and 11gR1:
ops$tkyte%ORA11GR1> select * from v$version;
BANNER
-------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
ops$tkyte%ORA11GR1> alter session set nls_date_format='yyyy-mm-dd HH24:MI:SS';
Session altered.
ops$tkyte%ORA11GR1> alter session set nls_timestamp_format = 'yyyy-mm-dd HH24:MI:SS.FF';
Session altered.
ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> select cast(ts1 as date) dt1
2 , ts1
3 , cast(ts2 as date) dt2
4 , ts2
5 from
6 (
7 select to_timestamp('01-may-2005 23:59:59.501'
8 ,'dd-mon-yyyy hh24:mi:ss:ff') ts1
9 , systimestamp ts2
10 from dual
11 )
12 /
DT1
-------------------
TS1
---------------------------------------------------------------------------
DT2
-------------------
TS2
---------------------------------------------------------------------------
2005-05-02 00:00:00
2005-05-01 23:59:59.501000000
2010-02-17 09:56:01
17-FEB-10 09.56.01.087210 AM -05:00
but 11gR2:
ops$tkyte%ORA11GR2> select * from v$version;
BANNER
-------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
ops$tkyte%ORA11GR2> alter session set nls_date_format='yyyy-mm-dd HH24:MI:SS';
Session altered.
ops$tkyte%ORA11GR2> alter session set nls_timestamp_format = 'yyyy-mm-dd HH24:MI:SS.FF';
Session altered.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select cast(ts1 as date) dt1
2 , ts1
3 , cast(ts2 as date) dt2
4 , ts2
5 from
6 (
7 select to_timestamp('01-may-2005 23:59:59.501'
8 ,'dd-mon-yyyy hh24:mi:ss:ff') ts1
9 , systimestamp ts2
10 from dual
11 )
12 /
DT1
-------------------
TS1
---------------------------------------------------------------------------
DT2
-------------------
TS2
---------------------------------------------------------------------------
2005-05-01 23:59:59
2005-05-01 23:59:59.501000000
2010-02-17 09:57:22
17-FEB-10 09.57.22.406995 AM -05:00
Isn't it rounding to nearest second? February 18, 2010 - 9am UTC
Reviewer: John Abate from Missouri, USA
It looks to me that it is rounding the .501 fraction of a second to the nearest whole second
01-may-2005 23:59:59.501 rounds to 02-may-2005 00:00:00
Or am I missing something?
Followup February 18, 2010 - 9am UTC: doh, you are correct, I didn't notice the DATE flipped! and I now do vaguely recall that the behavior changed - in that the timestamp would trunc() not round in 11g. You are correct - I did not see the forest for all of the trees. I was so concentrated on the time going to 00:00:00 that I did not even really look at what the time was beforehand...
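If you need the truncating behavior on every version, one workaround in the spirit of Tom's earlier substr/to_date suggestion is to format the timestamp without the FF digits and convert that back to a DATE (my sketch, not from the thread; verify on your own version):

select to_date( to_char(ts1, 'yyyy-mm-dd hh24:mi:ss'),
                'yyyy-mm-dd hh24:mi:ss' ) dt1_no_rounding
from ( select to_timestamp('01-may-2005 23:59:59.501',
                           'dd-mon-yyyy hh24:mi:ss:ff') ts1
       from dual );
-- expected 2005-05-01 23:59:59 regardless of version, since the fractional
-- seconds are never formatted and so cannot round the value up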
Reference : http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:77562635789106
Oracle 2013. 3. 6. 15:35
This is an introductory Oracle PL/SQL training document written by Oracle Club.
See the attached file for the details.
pl_sql.doc
Oracle 2013. 1. 21. 17:47
How can you export the data of a specific table in an Oracle DB to a separate text file per record?

Using the UTL_FILE package, PL/SQL can read from and write to operating-system files.
This can also be used when you want PL/SQL to write its log output to separate files.

First, you need to create an Oracle directory object.

▣ Create Directory

Log in to Oracle with a DBA account, create the directory, and grant it to the account that will use it.

1. CREATE DIRECTORY exp_text AS 'c:\temp';  -- register the temp folder as a directory object named exp_text
2. GRANT READ, WRITE ON DIRECTORY exp_text TO scott;  -- grant the scott account read and write privileges on the exp_text directory

Once the directory has been created and the privileges granted as above, the scott account can extract data to text files with a simple piece of PL/SQL like the one below.
▣ Export to Text File
create or replace procedure Export2Text is
  vHandle    UTL_FILE.FILE_TYPE;
  vDirectory varchar2(100);
  vFile      varchar2(256);
  vOutput    varchar2(32767);
  cursor c_cur is
    select empno, ename, hiredate, sal from emp;
begin
  vDirectory := 'EXP_TEXT';  -- directory object name; FOPEN is case-sensitive, so use the uppercase name
  for r_cur in c_cur loop
    if r_cur.empno > 0 then
      vFile   := r_cur.empno || '.txt';   -- one file per record
      vHandle := UTL_FILE.FOPEN(vDirectory, vFile, 'W', 32767);
      vOutput := r_cur.ename || ', ' || r_cur.hiredate || ', ' || r_cur.sal;
      UTL_FILE.PUT_LINE(vHandle, vOutput);
      UTL_FILE.FCLOSE(vHandle);
    end if;
  end loop;
exception
  when others then
    UTL_FILE.FCLOSE_ALL;
    raise;
end;
/
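A minimal way to exercise the procedure, assuming the scott.emp demo table and the exp_text directory above exist (my example, not part of the original post):

SQL> SET SERVEROUTPUT ON
SQL> EXEC Export2Text;

-- One <empno>.txt file is written into c:\temp per EMP row, each containing a single line
-- such as: SMITH, 17-DEC-80, 800 (the date format depends on NLS_DATE_FORMAT)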
The following is PL/SQL that can be used when you want to read a file that exists in the directory.
▣ Import from Text File
create or replace procedure Import2Text is
  vHandle    UTL_FILE.FILE_TYPE;
  vDirectory varchar2(100);
  vFile      varchar2(256);
  vInput     varchar2(32767);
begin
  vDirectory := 'EXP_TEXT';  -- directory object name; FOPEN is case-sensitive, so use the uppercase name
  vFile      := 'test.txt';
  vHandle    := UTL_FILE.FOPEN(vDirectory, vFile, 'R', 32767);
  begin
    loop
      UTL_FILE.GET_LINE(vHandle, vInput, 32767);
      DBMS_OUTPUT.PUT_LINE(vInput);
    end loop;
  exception
    when NO_DATA_FOUND then
      dbms_output.put_line('last line of file');
  end;
  UTL_FILE.FCLOSE(vHandle);
exception
  when others then
    UTL_FILE.FCLOSE_ALL;
    raise;
end;
/
Oracle 2012. 7. 26. 13:19
Oracle 11g introduces the Compound Trigger feature, so there is no longer any need to create a separate trigger for each triggering event.
In earlier versions of Oracle, triggers were the database objects whose tendency to collide with the current state of the object being modified, and whose awkward session-state maintenance, kept database developers on alert while designing them. Oracle 11g offers considerable improvements in database triggers. These additions have not only enhanced the language's usability but also promise better performance in a real application environment.
Oracle 11g fixes major issues with the enhancements listed below.
- Trigger firing sequence can be set in Oracle 11g using FOLLOWS keyword
- Compound Triggers to encapsulate multiple triggering timings in one body
- Triggers can be created in ENABLED/DISABLED mode
- The DML triggers in 11g are 25% faster than their earlier versions in terms of compilation, execution and firing
In this tutorial, you will learn how compound triggers work and what benefits they offer.
1. Compound Triggers: Introduction
Compound triggers are database DML triggers that 'compound', or combine, all the triggering timings under one trigger body. The triggering timings available for database triggers are BEFORE STATEMENT, AFTER STATEMENT, BEFORE EACH ROW, and AFTER EACH ROW. Trigger-level variables can be defined which are visible and accessible in all the timing blocks.
The compound trigger body itself does not have an exception section; instead, each timing block is expected to contain its own exception handler in its own exception section.
2. Compound Triggers: Reaping the Benefits
Compound triggers resolve a few big issues that have been nightmares for developers in the past.
- Mutating table error ORA-04091
- Trigger-level variables maintain a persistent state across the timing blocks until the statement execution finishes. These are defined in the declaration section of the compound trigger.
- Enhanced performance in bulk operations
- Supports encapsulation of multiple program units; thus enhances code interactivity
3. Compound Triggers: Usage Guidelines
As a new addition to the database family, there are some introductory guidelines that must be followed while working with compound triggers:
- Compound triggers are meant only for DML operations; there is still no support for DDL and system operations.
- Exception handling has to be done separately for each timing block.
- A compound trigger remains passive if the DML statement does not change any rows.
- The :OLD and :NEW variable identifiers can exist only in the row-level blocks, and :NEW values can only be modified in the BEFORE EACH ROW block.
- No support for the WHEN clause. Earlier, the WHEN clause could be used to impose a condition on the trigger action.
- No support for PRAGMA AUTONOMOUS_TRANSACTION.
There is plenty of debate over the above guidelines, which many regard as compound trigger restrictions. From a developer's perspective, however, the benefits of compound triggers far outweigh these restrictions, which can simply be absorbed into coding practice as 'usage guidelines'.
4. Compound Triggers: Syntax
The syntax shows the four timing blocks in a set order.
Example Syntax [1a]: For DML Triggers
Sample Code

CREATE OR REPLACE TRIGGER [TRIGGER NAME]
FOR [DML] ON [TABLE NAME]
COMPOUND TRIGGER

  -- Initial section
  -- Declarations
  -- Subprograms

  -- Optional section
  BEFORE STATEMENT IS
    ...;

  -- Optional section
  BEFORE EACH ROW IS
    ...;

  -- Optional section
  AFTER EACH ROW IS
    ...;

  -- Optional section
  AFTER STATEMENT IS
    ...;

END;
In the syntax, note that the compound trigger offers a single declaration section, whose declarations are then available locally in all the timing blocks.
Example Syntax [1b]: For database views
INSTEAD OF EACH ROW IS
  ...;
END;
Note that none of the timing blocks should be duplicated. Oracle raises exception PLS-00676 if duplicate timing blocks are found in the compound trigger definition.
5. Compound Triggers: Additional Notes
5.1. USER_TRIGGERS dictionary view structure has been restructured to include the metadata for compound triggers. The new columns specify whether a timing block is available in the compound trigger body.
5.2. A new triggering event, 'COMPOUND', has been introduced for compound triggers. The TRIGGER_TYPE, TRIGGERING_EVENT, and TABLE_NAME columns for the trigger TRG_COMP_DEMO reflect this.
5.3. The columns BEFORE_STATEMENT, BEFORE_ROW, AFTER_ROW, and AFTER_STATEMENT are set to YES or NO based on the timing blocks present in the compound trigger. For example, since the trigger TRG_COMP_DEMO contains all the timing blocks in its body, the columns can be queried as below.
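A query along the following lines returns the same metadata the original article showed in its screenshots (it assumes the TRG_COMP_DEMO compound trigger exists in the current schema):

SELECT trigger_name, trigger_type, triggering_event, table_name,
       before_statement, before_row, after_row, after_statement
FROM   user_triggers
WHERE  trigger_name = 'TRG_COMP_DEMO';
-- A compound trigger shows TRIGGER_TYPE = 'COMPOUND', and each timing column
-- is YES or NO depending on which blocks are present in the trigger body.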
6. Applications of Compound Triggers
A compound trigger is best suited to achieve two objectives.
- Yield better performance while loading a table simultaneously with a running transaction and using its values.
- Resolve mutating table error (ORA-04091)
I will illustrate both accomplishments in the steps below.
6.1. Demonstrating performance gains during simultaneous loading
1. Two tables were created
Sample Code

SQL> CREATE TABLE ORDERS (ORD_ID    NUMBER,
                          ITEM_CODE VARCHAR2(100),
                          ORD_QTY   NUMBER,
                          ORD_DATE  DATE);

Table created.

SQL> CREATE TABLE ORDER_ARCHIVE (ORD_ID   NUMBER,
                                 ORD_CODE VARCHAR2(100));

Table created.
2. To demonstrate the performance difference, I first create a normal FOR EACH ROW trigger on the ORDERS table that inserts data into the ORDER_ARCHIVE table.
Sample Code

SQL> CREATE OR REPLACE TRIGGER TRG_ORDERS
     BEFORE INSERT ON ORDERS
     FOR EACH ROW
     BEGIN
       DBMS_OUTPUT.PUT_LINE('Insert order '||:NEW.ORD_ID||' into ORDER_ARCHIVE');
       INSERT INTO ORDER_ARCHIVE VALUES (:NEW.ORD_ID, :NEW.ITEM_CODE);
     END;
     /

Trigger created.
3. Now, I will insert the test data into the ORDERS table using a SELECT statement.
Sample Code

SQL> INSERT INTO orders(ord_id, item_code, ord_date)
  2  SELECT (rownum+1)*100, 'ITEM_'||rownum, sysdate-rownum
  3  FROM DUAL
  4* CONNECT BY ROWNUM < 15;
Insert order 1 into ORDER_ARCHIVE
Insert order 2 into ORDER_ARCHIVE
Insert order 3 into ORDER_ARCHIVE
Insert order 4 into ORDER_ARCHIVE
Insert order 5 into ORDER_ARCHIVE
Insert order 6 into ORDER_ARCHIVE
Insert order 7 into ORDER_ARCHIVE
Insert order 8 into ORDER_ARCHIVE
Insert order 9 into ORDER_ARCHIVE
Insert order 10 into ORDER_ARCHIVE
Insert order 11 into ORDER_ARCHIVE
Insert order 12 into ORDER_ARCHIVE
Insert order 13 into ORDER_ARCHIVE
Insert order 14 into ORDER_ARCHIVE

14 rows created.
Note the output of the above INSERT. For each of the 14 records inserted into the ORDERS table, the trigger performs a separate single-row insert into the ORDER_ARCHIVE table.
4. A compound trigger in Oracle 11g can instead perform the secondary inserts as bulk operations. I will create the compound trigger TRG_COMP_ORDERS to implement the bulk loading. Note the use of an associative array, which holds the rows to be inserted; it is flushed into the ORDER_ARCHIVE table whenever the index count reaches 20, and once more when the statement execution completes.
Sample Code

SQL> CREATE OR REPLACE TRIGGER TRG_COMP_ORDERS
     FOR INSERT ON ORDERS
     COMPOUND TRIGGER
       TYPE ORDER_T IS TABLE OF ORDER_ARCHIVE%ROWTYPE
         INDEX BY PLS_INTEGER;
       L_ORDERS ORDER_T;
       I NUMBER := 0;

       AFTER EACH ROW IS
       BEGIN
         I := I + 1;
         L_ORDERS(I).ORD_ID   := :NEW.ORD_ID;
         L_ORDERS(I).ORD_CODE := :NEW.ITEM_CODE;
         IF I >= 20 THEN
           DBMS_OUTPUT.PUT_LINE('Bulk Load for 20 orders');
           FOR J IN 1..I
           LOOP
             INSERT INTO ORDER_ARCHIVE VALUES L_ORDERS(J);
           END LOOP;
           L_ORDERS.DELETE;
           I := 0;
         END IF;
       END AFTER EACH ROW;

       AFTER STATEMENT IS
       BEGIN
         DBMS_OUTPUT.PUT_LINE('Statement level loading');
         FORALL J IN 1..L_ORDERS.COUNT
           INSERT INTO ORDER_ARCHIVE VALUES L_ORDERS(J);
         L_ORDERS.DELETE;
         I := 0;
       END AFTER STATEMENT;
     END;
     /

Trigger created.
5. I will insert 64 rows (instead of just 14) into the ORDERS table using a SELECT query. The inserts into the ORDER_ARCHIVE table now happen in only three bulk-insert passes, plus a final statement-level load for the remaining rows.
Sample Code

SQL> INSERT INTO orders(ord_id, item_code, ord_date)
  2  SELECT (rownum+1)*100, 'ITEM_'||rownum, sysdate-rownum
  3  FROM DUAL
  4  CONNECT BY rownum < 65
  5  /
Bulk Load for 20 orders
Bulk Load for 20 orders
Bulk Load for 20 orders
Statement level loading

64 rows created.
The above results clearly differentiate the compound trigger from conventional triggers in terms of performance during such data loading.
6.2. Demonstration of Mutating table solution using Compound triggers
Now, let us look at the most celebrated achievement of compound triggers, namely their ability to tackle the mutating table error (ORA-04091).
A mutating table error occurs when a table is referenced while it is in a state of flux, that is, while it is participating in the running transaction that fired the trigger. In the past, this was either avoided by careful coding or its effects were diluted using workaround solutions. A few of the more efficient workarounds are listed below.
- Change in logic implementation and code design
- Using PRAGMA_AUTONOMOUS_TRANSACTION
- Conversion of row level trigger to statement level
- Defining package to hold variables at session level; these variables would hold the values and later would be inserted once the statement execution finishes.
How does a compound trigger fix the problem?
A compound trigger contains timing blocks and variables that persist until the statement completes and are visible in all the timing blocks of the trigger. These variables can be of a collection type that holds the table data captured before the table enters its mutating state. Once the table is mutating it can no longer be queried from row-level trigger code, but the backup data in the collection variables remains available and can be used for reference within the trigger.
Therefore, the logic used is same as earlier, but being a single compiled unit, it is more convenient and easily maintained.
In the example code below, the compound trigger fires on update of the ORDERS table. It queries the ORDERS table to fetch and display the old quantity. For a conventional DML trigger, this situation is the classic candidate for the ORA-04091 mutating table error.
A user updates the order quantity for order id 600. Before the UPDATE statement executes, the BEFORE STATEMENT block fetches the old order quantity value (90) and stores it in a collection-type variable. When the statement executes, it updates the order quantity to 150, and the AFTER EACH ROW block in the trigger displays the old quantity value.
Sample Code

CREATE OR REPLACE TRIGGER TRG_ORDERS
FOR UPDATE ON ORDERS
COMPOUND TRIGGER
  TYPE ORD_QTY_T IS TABLE OF ORDERS.ORD_QTY%TYPE;
  L_ORD_QTY ORD_QTY_T;

  BEFORE STATEMENT IS
  BEGIN
    SELECT ORD_QTY
    BULK COLLECT INTO L_ORD_QTY
    FROM ORDERS WHERE ORD_ID = 600;
  END BEFORE STATEMENT;

  AFTER EACH ROW IS
  BEGIN
    DBMS_OUTPUT.PUT_LINE('Old Quantity Value:'||L_ORD_QTY(L_ORD_QTY.FIRST));
  END AFTER EACH ROW;
END;
/

SQL> UPDATE ORDERS
  2  SET ORD_QTY = 150
  3  WHERE ORD_ID = 600;
Old Quantity Value:90

1 row updated.
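For contrast, here is a sketch of my own (not from the original article) of the conventional row-level trigger that the compound trigger replaces; querying ORDERS from FOR EACH ROW code like this raises the mutating table error at run time:

CREATE OR REPLACE TRIGGER TRG_ORDERS_ROW
BEFORE UPDATE ON ORDERS
FOR EACH ROW
DECLARE
  l_old_qty ORDERS.ORD_QTY%TYPE;
BEGIN
  SELECT ORD_QTY INTO l_old_qty
  FROM   ORDERS
  WHERE  ORD_ID = :OLD.ORD_ID;   -- ORDERS is mutating at this point
  DBMS_OUTPUT.PUT_LINE('Old Quantity Value:'||l_old_qty);
END;
/
-- UPDATE ORDERS SET ORD_QTY = 150 WHERE ORD_ID = 600;
-- ORA-04091: table ORDERS is mutating, trigger/function may not see it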
7. Conclusion
A compound trigger combines the properties of both statement-level and row-level triggers. Logic that was previously maintained in separate row-level and statement-level triggers can now be placed in one single body, with state and data persisting across the timing points.
Source: http://www.exforsys.com/tutorials/oracle-11g/compound-triggers-in-oracle-11g.html
Oracle 2012. 5. 15. 11:01
"ORA-20023: Missing start and end values for time model
stat"
This error is related to a parameter introduced by Oracle 11g's licensing policy.
You therefore need to check the value of the CONTROL_MANAGEMENT_PACK_ACCESS parameter.
In 11g the default value is DIAGNOSTIC+TUNING; if it is set to NONE, you lose the ability to query the AWR snapshots.
To change the parameter value, run the following:

alter system set CONTROL_MANAGEMENT_PACK_ACCESS="DIAGNOSTIC+TUNING" scope=both;
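To check the current setting first, a query like the following should work (standard dictionary view, not part of the original post):

SELECT name, value
FROM   v$parameter
WHERE  name = 'control_management_pack_access';

-- or, in SQL*Plus: SHOW PARAMETER control_management_pack_access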
Even if snapshot information keeps being generated automatically, you cannot use it for analysis unless the parameter is set as above.
For more details, see the link below.
http://docs.oracle.com/cd/E11882_01/server.112/e25513/initparams037.htm
Oracle 2011. 8. 11. 15:19
Last Updated: 29-APR-2010   Type: PROBLEM   Status: PUBLISHED
Applies to: Oracle Server - Enterprise Edition - Version: 10.2.0.2 to 11.1.0.7 - Release: 10.2 to 11.1
Information in this document applies to any platform.
Symptoms
Datapump export from the client side completes with errors. Example:
expdp system/password full=y dumpfile=full_exp.dmp logfile=full_exp.log
...
. . exported "CALYPSO"."KICKOFF_CFG" 0 KB 0 rows
. . exported "CALYPSO"."LE_AGR_CHILD" 0 KB 0 rows
. . exported "CALYPSO"."LE_ATTR_CODE" 0 KB 0 rows
UDE-00008: operation generated ORACLE error 31626
ORA-31626: job does not exist
ORA-39086: cannot retrieve job information
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2745
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3712
ORA-06512: at line 1
However, reviewing the logfile shows that the "job successfully completed"
Cause
This issue has been discussed in Bug 5969934 - EXPDP CLIENT GETS UDE-00008 ORA-31626 WHILE THE SERVER SIDE EXPORT IS OK.
Solution
The expdp client makes calls to the DBMS_DATAPUMP package to start and monitor the export job. Once the export job is underway, the client just monitors the job status by issuing DBMS_DATAPUMP.GET_STATUS.
Therefore, if the export logfile says "job successfully completed", the dump file generated by the job should be fine.
You can simply ignore the errors, since the dump file is still valid for an import.
In the 10.2.0.2 release, there were a number of problems that caused the expdp and impdp clients to exit prematurely, interpreting a nonfatal error as a fatal one, giving the appearance that the job had failed when it hadn't. In fact, inspection of the log file, if one was specified for the job, showed that the job ran successfully to completion. Often a trace file written by one of the Data Pump processes would provide more detail on the error that had been misinterpreted as a fatal one. Many of these errors involved the queues used for communication between the Data Pump processes, but there were other issues as well.
.
With each subsequent release, these problems have been addressed, and the client has become more robust and rarely, if ever, runs into situations like this. However, this is the result of many bug fixes in subsequent releases, some in Data Pump and some in supporting layers. It's impossible to know, at this point, what combination of bug fixes would address this specific failure, and even if that was possible, it wouldn't address other possible failures that look very similar on the client side.
.
Relying on information in the log file is one way to verify that the job actually completed successfully. Problems like this one became much more intermittent by the 10.2.0.4 release and are rarely, if ever, seen in 11.1 or later.
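If you want more assurance than the log file alone, one option (my suggestion, using the standard Data Pump SQLFILE parameter) is to run impdp against the dump in SQLFILE mode, which reads the whole dump and writes the DDL to a script without importing anything:

impdp system/password full=y dumpfile=full_exp.dmp sqlfile=verify_ddl.sql logfile=verify_imp.log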
References
BUG:5969934 - EXPDP CLIENT GETS UDE-00008 ORA-31626 WHILE THE SERVER SIDE EXPORT IS OK
Source: Oracle Support
Data Pump has more bugs than you might expect. Still, its performance makes it efficient enough to be worth putting up with these bugs, so learn to work around them and use it.
Oracle 2011. 8. 11. 11:39
Subject: Automatic Tuning of Undo_retention Causes Space Problems
Doc ID: Note:420525.1
Type: PROBLEM
Last Revision Date: 26-MAY-2008
Status: PUBLISHED
Applies to: Oracle Server - Enterprise Edition - Version: 10.2 to 11.0
This problem can occur on any platform.
Symptoms
Look for:
1.) System Managed Undo
undo_management=auto in init.ora file
2.) Fixed size undo tablespace
SQL> select autoextensible from dba_data_files where tablespace_name='<current_undo_tablespace>'
returns "No" for all the undo tablespace datafiles.
3.) The undo tablespace is already sized such that it always has more than enough space to store all the undo generated within the undo_retention time, and the in-use undo space never exceeds the undo tablespace warning alert threshold (see below for the query to show the thresholds).
4.) The tablespace threshold alerts recommend that the DBA add more space to the undo tablespace:
SQL> select creation_time, metric_value, message_type,reason, suggested_action from dba_outstanding_alerts where object_name='<current_undo_ts>';
returns a suggested action of: "Add space to the tablespace"
Or,
This recommendation has been reported in the past but the condition has now cleared:
SQL> select creation_time, metric_value, message_type, reason, suggested_action, resolution from dba_alert_history where object_name='<current_undo_ts>';
5.) The undo tablespace in-use space exceeded the warning alert threshold at some point in time:
To see the warning alert percentage threshold:
SQL> select object_type, object_name, warning_value, critical_value from dba_thresholds where object_type='TABLESPACE';
To see the (current) undo tablespace percent of space in-use:
SQL> select
((select (nvl(sum(bytes),0))
from dba_undo_extents
where tablespace_name='<current_undo_ts>'
and status in ('ACTIVE','UNEXPIRED')) *100) /
(select sum(bytes)
from dba_data_files
where tablespace_name='<current_undo_ts>')
"PCT_INUSE"
from dual;
CauseThis is due to Bug 5387030 that is fixed in 11.1 version of RDBMS server.
Solution
Apply the patch for the Bug 5387030. Check metalink for patch availability for your platform and RDBMS version.
Workaround
There are 3 possible alternate workarounds (any one of these should resolve the problem of the alerts triggering unnecessarily):
1.) Set the autoextend and maxsize attributes of each datafile in the undo tablespace so that it is autoextensible and its maxsize equals its current size. The undo tablespace then has the autoextend attribute but cannot actually extend:
SQL> alter database datafile '<datafile_flename>'
autoextend on maxsize <current_size>;
With this setting, v$undostat.tuned_undoretention is not calculated based on a percentage of the undo tablespace size; instead, v$undostat.tuned_undoretention is set to the maximum of (maxquerylen + 300) seconds and the undo_retention specified in the init.ora file.
2.) Set the following hidden parameter in init.ora file:
_smu_debug_mode=33554432
or
SQL> Alter system set "_smu_debug_mode" = 33554432;
With this setting, v$undostat.tuned_undoretention is not calculated based on a percentage of the fixed-size undo tablespace; instead, v$undostat.tuned_undoretention is set to the maximum of (maxquerylen + 300) seconds and the undo_retention specified in the init.ora file.
3.) Set the following hidden parameter in init.ora:
_undo_autotune = false
or
SQL> Alter system set "_undo_autotune" = false;
With this setting, v$undostat (and therefore v$undostat.tuned_undoretention) is not maintained, and the undo_retention used is the one specified in the init.ora file.
ReferencesBug 5387030 - AUTOMATIC TUNING OF UNDO_RETENTION CAUSING SPACE PROBLEMS
Source: Oracle Support
alter system set "_IN_MEMORY_UNDO"=false;
Alter system set "_undo_autotune" = false;