In all my time working with DB links I had never run into this error message, perhaps because I did not use them much. I came across it today, so I checked the Oracle documentation and am writing this down for the record.
ORA-02020 too many database links in use
Cause: The current session has exceeded the INIT.ORA open_links maximum.
Action: Increase the open_links limit, or free up some open links by committing or rolling back the transaction and canceling
open cursors that reference remote databases.
If you run into this error message, it means you should check the OPEN_LINKS parameter setting.
OPEN_LINKS
Property           Description
----------------   -----------
Parameter type     Integer
Default value      4
Modifiable         No
Range of values    0 to 255
OPEN_LINKS specifies the maximum number of concurrent open connections to remote databases in one session. These connections include database links, as well as external procedures and cartridges, each of which uses a separate process.
The default value of this parameter is 4. In short, the error message above appears because the number of DB links in use exceeds 4. So, if you need to open more links, adjust this value.
To check the current setting, you can run a query such as "select name, value from v$parameter where name='open_links';" (in SQL*Plus, "show parameter open_links" also works).
You can adjust the value either directly in the init.ora file, or with a statement such as "alter system set open_links=10 scope=spfile;".
Set the value as high as you need, keeping in mind the maximum of 255 shown above.
Once the value is changed, remember that the DBMS must be restarted for it to take effect.
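Putting the steps above together, a minimal sequence might look like this (assuming an spfile-based instance; the value 10 is just an example):

```sql
-- Check the current setting
SELECT name, value FROM v$parameter WHERE name = 'open_links';

-- Raise the limit in the spfile (takes effect only after restart)
ALTER SYSTEM SET open_links = 10 SCOPE = SPFILE;

-- Restart the instance so the new value is picked up
SHUTDOWN IMMEDIATE
STARTUP
```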
For reference, parameter and restriction information related to DB links is listed below.
dblink
Specify the complete or partial name of the database link. If you specify only the database name, then Oracle Database implicitly appends the database domain of the local database.
Use only ASCII characters for dblink. Multibyte characters are not supported. The database link name is case insensitive and is stored in uppercase ASCII characters. If you specify the database name as a quoted identifier, then the quotation marks are silently ignored.
If the value of the GLOBAL_NAMES initialization parameter is TRUE, then the database link must have the same name as the database to which it connects. If the value of GLOBAL_NAMES is FALSE, and if you have changed the global name of the database, then you can specify the global name.
The maximum number of database links that can be open in one session or one instance of an Oracle RAC configuration depends on the value of the OPEN_LINKS and OPEN_LINKS_PER_INSTANCE initialization parameters.
Restriction on Creating Database Links
You cannot create a database link in another user's schema, and you cannot qualify dblink with the name of a schema. Periods are permitted in names of database links, so Oracle Database interprets the entire name, such as ralph.linktosales, as the name of a database link in your schema rather than as a database link named linktosales in the schema ralph.
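For illustration, a private link in your own schema might be created like this (link name, credentials, and service name are hypothetical):

```sql
-- Creates a private database link in the current schema.
-- If GLOBAL_NAMES=TRUE, the link name must match the remote database's global name.
CREATE DATABASE LINK sales_link
  CONNECT TO remote_user IDENTIFIED BY remote_pwd
  USING 'sales_db';

-- Query a remote table through the link
SELECT COUNT(*) FROM orders@sales_link;
```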
Oracle Net Services - versions 11.1.0.6 to 11.2.0.3 [Release 11.1 to 11.2]. The contents of this document apply to all platforms.
Purpose
The ORA-609 error is reported in the alert.log. It occurs intermittently and may not appear for days at a time.
Mon Oct 12 10:03:39 2009
Errors in file e:\app\oracle\diag\rdbms\center\center\trace\center_ora_7464.trc:
ORA-00609: could not attach to incoming connection
ORA-12537: TNS:connection closed
ORA-609 : opiodr aborting process unknown ospid (2436_7464)
The error may also appear in the sqlnet.log file generated locally on the database server:
Fatal NI connect error 12537, connecting to: (LOCAL=NO)
VERSION INFORMATION:
TNS for 64-bit Windows: Version 11.1.0.7.0 - Production
Oracle Bequeath NT Protocol Adapter for 64-bit Windows: Version 11.1.0.7.0 - Production
Windows NT TCP/IP NT Protocol Adapter for 64-bit Windows: Version 11.1.0.7.0 - Production
Time: 12-OCT-2009 10:03:39
Tracing to file: E:\app\oracle\product\11.1.0\db_1\NETWORK\trace\svr1_7464.trc
Tns error struct:
ns main err code: 12537
TNS-12537: TNS:connection closed
ns secondary err code: 12560
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
The listener.log shows the connection being established without any obvious error. This is because the connection failed after the listener had already handed it off to a server process.
Fatal NI connect error 12537, connecting to: (LOCAL=NO)
VERSION INFORMATION:
TNS for Solaris: Version 11.2.0.2.0 - Production
Oracle Bequeath NT Protocol Adapter for Solaris: Version 11.2.0.2.0 - Production
TCP/IP NT Protocol Adapter for Solaris: Version 11.2.0.2.0 - Production
Time: 21-DEC-2009 15:52:15
Tracing not turned on.
Tns error struct:
ns main err code: 12537
TNS-12537: TNS:connection closed
ns secondary err code: 12560
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
A matching entry can be seen inside the Oracle Net server trace.
Diagnostic Steps
1. Identify the connecting client from the listener.log. The alert log will show an ORA-609 error similar to the following:
Mon Oct 05 12:41:49 2009
ORA-609 : opiodr aborting process unknown ospid (21131406_1)
Go to the listener.log and look for the entry matching this connection. The listener.log entry will look like the example below:
In the example above, note that the client address is "sample.com". One option is to locate a few clients at that site and enable client tracing. On the client side, check the files under $ORACLE_HOME/network/log, looking specifically for timeout errors that occurred at the same timestamp.
2. Enable Oracle Net tracing at level 16 on the client. Add the following to the client SQLNET.ORA file:
DIAG_ADR_ENABLED = off                 # Disable ADR if version 11g
TRACE_LEVEL_CLIENT = 16                # Enable level 16 trace
TRACE_TIMESTAMP_CLIENT = ON            # Set timestamp in the trace files
TRACE_DIRECTORY_CLIENT = <DIRECTORY>   # Control trace file location
TRACE_FILELEN_CLIENT = <n>             # Control size of each trace file in kilobytes, e.g. 20480
TRACE_FILENO_CLIENT = <n>              # Control number of trace files per process
If the connection model is JDBC Thin, Java tracing is required on the client side; refer to Document 793415.1 "How to Perform the Equivalent of SQL*Net Client Tracing with Oracle JDBC Thin Driver". If an 11.2 JDBC Thin client is in use, the following note may be useful: Document 1050942.1 "How to Trace the Network Packets Exchanged Between JDBC and the RDBMS in Release 11.2".
3. Enable Oracle Net tracing at level 16 on the server side. Add the following to the server SQLNET.ORA file:
DIAG_ADR_ENABLED = off                 # Disable ADR if version 11g
TRACE_LEVEL_SERVER = 16                # Enable level 16 trace
TRACE_TIMESTAMP_SERVER = ON            # Set timestamp in the trace files
TRACE_DIRECTORY_SERVER = <DIRECTORY>   # Control trace file location
TRACE_FILELEN_SERVER = <n>             # Control size of each trace file in kilobytes, e.g. 20480
TRACE_FILENO_SERVER = <n>              # Control number of trace files per process
Cyclic tracing lets you control the number and size of the trace files generated.
The TRACE_FILELEN parameter sets the size of each trace file; the TRACE_FILENO parameter sets the number of trace files per process.
Important note:
The SQLNET.ORA file is read only once, at process creation. RDBMS background processes and shared server dispatchers must be restarted for sqlnet.ora parameter changes to take effect. Once a process has started with tracing enabled, tracing does not stop until the process stops.
Enabling Oracle Net server tracing can generate a large volume of trace in a short time. Even with cyclic tracing, each process will generate as many trace files as specified by TRACE_FILENO_SERVER. The best workflow is to enable tracing, reproduce the problem, and then disable tracing, thereby limiting the amount of time tracing is active.
Setting TRACE_FILENO_SERVER to 1 and TRACE_FILELEN_SERVER to 20480 is one way to keep the amount of trace generated per process down. Keep in mind that with this setting trace files get overwritten, so important data from the moment of failure may be lost.
4. Errorstack: set an errorstack trace for when the error occurs. This applies when capturing an Oracle Net client trace is not practical.
SQL> alter session set events '609 errorstack(3)';
Collect several traces while the error reproduces.
SQL> alter session set events '609 off';
When you hit the error, do the following:
Review the SQLNET.LOG file on the server.
Look for matching entries in the ALERT.LOG by comparing timestamps.
From the SQLNET.LOG entry, the Oracle Net server trace name can be found on the "Tracing to file" line.
Open the server trace and grep or search for the Connection ID value.
Then search the client trace directory for the same Connection ID value.
You should find a matching pair of client and server traces. This procedure is described in detail in Document 374116.1 "How to Match Oracle Net Client and Server Trace Files".
Upload the following for review:
The matching Oracle Net client and server traces, or the matching Javanet and server trace files.
The ALERT.LOG and LISTENER.LOG files (only the portion covering the time of the issue, not the entire logs).
The SQLNET.LOG file under the server ORACLE_HOME/network/log.
The errorstack trace files.
Known issues:
The ORA-609 error often occurs when the client disconnects before the connection is fully established. Review the timeout parameters INBOUND_CONNECT_TIMEOUT_<listener_name> in LISTENER.ORA and SQLNET.INBOUND_CONNECT_TIMEOUT in the SQLNET.ORA file. If the default of 60 seconds is in effect (no explicit setting), these parameters may need to be increased.
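For example, the two timeout parameters might be raised like this (120 seconds is an illustrative value; the listener is assumed to be named LISTENER):

```
# listener.ora on the server
INBOUND_CONNECT_TIMEOUT_LISTENER = 120

# sqlnet.ora on the server
SQLNET.INBOUND_CONNECT_TIMEOUT = 120
```

After editing listener.ora, reload the listener (lsnrctl reload) so the new value takes effect.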
Check the network parameter settings on the server machine running the database. Verify that the settings are all correct and that the DNS server is available.
If the server platform is Microsoft Windows, check the Windows services for the database and the TNS listener to confirm that both services were started normally under the same account.
REFERENCES
NOTE:793415.1 - How to Perform the Equivalent of SQL*Net Client Tracing with Oracle JDBC Thin Driver
NOTE:1050942.1 - How to Trace the Network Packets Exchanged Between JDBC and the RDBMS in Release 11.2
NOTE:609.1 - ORA-609 TNS-12537 and TNS-12547 in 11g Alert.log
TRIM enables you to trim leading or trailing characters (or both) from a character string. If trim_character or trim_source is a character literal, then you must enclose it in single quotation marks.
If you specify LEADING, then Oracle Database removes any leading characters equal to trim_character.
If you specify TRAILING, then Oracle removes any trailing characters equal to trim_character.
If you specify BOTH or none of the three, then Oracle removes leading and trailing characters equal to trim_character.
If you do not specify trim_character, then the default value is a blank space.
If you specify only trim_source, then Oracle removes leading and trailing blank spaces.
The function returns a value with datatype VARCHAR2. The maximum length of the value is the length of trim_source.
If either trim_source or trim_character is null, then the TRIM function returns null.
Both trim_character and trim_source can be VARCHAR2 or any datatype that can be implicitly converted to VARCHAR2. The string returned is of VARCHAR2 datatype if trim_source is a character datatype and a LOB if trim_source is a LOB datatype. The return string is in the same character set as trim_source.
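The rules above can be illustrated with a few simple calls of my own, run against DUAL:

```sql
SELECT TRIM(LEADING 'x' FROM 'xxOraclexx')  FROM DUAL;  -- 'Oraclexx'
SELECT TRIM(TRAILING 'x' FROM 'xxOraclexx') FROM DUAL;  -- 'xxOracle'
SELECT TRIM(BOTH 'x' FROM 'xxOraclexx')     FROM DUAL;  -- 'Oracle'
SELECT TRIM('  Oracle  ')                   FROM DUAL;  -- 'Oracle' (leading/trailing blanks removed)
```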
Examples
This example trims leading zeros from the hire date of the employees in the hr schema:
SELECT employee_id,
TO_CHAR(TRIM(LEADING 0 FROM hire_date))
FROM employees
WHERE department_id = 60
ORDER BY employee_id;
EMPLOYEE_ID TO_CHAR(T
----------- ---------
103 3-JAN-90
104 21-MAY-91
105 25-JUN-97
106 5-FEB-98
107 7-FEB-99
LTRIM removes from the left end of char all of the characters contained in set. If you do not specify set, then it defaults to a single blank. If char is a character literal, then you must enclose it in single quotation marks. Oracle Database begins scanning char from its first character and removes all characters that appear in set until reaching a character not in set and then returns the result.
Both char and set can be any of the datatypes CHAR, VARCHAR2, NCHAR, NVARCHAR2, CLOB, or NCLOB. The string returned is of VARCHAR2 datatype if char is a character datatype, NVARCHAR2 if char is a national character datatype, and a LOB if char is a LOB datatype.
Examples
The following example trims the redundant first word from a group of product names in the oe.products table:
SELECT product_name, LTRIM(product_name, 'Monitor ') "Short Name"
FROM products
WHERE product_name LIKE 'Monitor%';
PRODUCT_NAME Short Name
-------------------- ---------------
Monitor 17/HR 17/HR
Monitor 17/HR/F 17/HR/F
Monitor 17/SD 17/SD
Monitor 19/SD 19/SD
Monitor 19/SD/M 19/SD/M
Monitor 21/D 21/D
Monitor 21/HR 21/HR
Monitor 21/HR/M 21/HR/M
Monitor 21/SD 21/SD
Monitor Hinge - HD Hinge - HD
Monitor Hinge - STD Hinge - STD
RTRIM removes from the right end of char all of the characters that appear in set. This function is useful for formatting the output of a query.
If you do not specify set, then it defaults to a single blank. If char is a character literal, then you must enclose it in single quotation marks. RTRIM works similarly to LTRIM.
Both char and set can be any of the datatypes CHAR, VARCHAR2, NCHAR, NVARCHAR2, CLOB, or NCLOB. The string returned is of VARCHAR2 datatype if char is a character datatype, NVARCHAR2 if char is a national character datatype, and a LOB if char is a LOB datatype.
Examples
The following example trims all the right-most occurrences of period, slash, and equal sign from a string:
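The example itself did not survive in this copy; the following is a sketch in the spirit of the description (the input string is made up). Note that removal stops at the blank after the colon, since a blank is not in the trim set:

```sql
SELECT RTRIM('BROWNING: ./=./=./=./=.=./=.=./=', './=') "RTRIM example"
  FROM DUAL;
-- RTRIM example
-- -------------
-- BROWNING:
```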
I still find confusion out there about the LOGGING and NOLOGGING clauses when performing DML and DDL operations. The reality is that the NOLOGGING clause takes effect only under particular conditions; all regular inserts, updates, and deletes will still log their operations.
The benefits of the NOLOGGING option are:
Will save disk space when the archive option is enabled.
Will largely reduce I/O on the redologs.
Will reduce the time it takes to complete the operation.
Please note that NOLOGGING operations will only reduce -not eliminate- the logging.
Let's see an example:
-- First: we create an empty table with the nologging clause
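Only the first comment line of the original listing survives above. As a rough reconstruction, a test of this kind might look as follows (table name, row counts, and the redo-size query are my own; treat this as a sketch, not the author's original listing):

```sql
-- First: create an empty table with the NOLOGGING clause
CREATE TABLE test_nolog (id NUMBER, txt VARCHAR2(100)) NOLOGGING;

-- Helper query: redo generated so far by this session
SELECT ms.value AS redo_size
  FROM v$mystat ms
  JOIN v$statname sn ON sn.statistic# = ms.statistic#
 WHERE sn.name = 'redo size';

-- Conventional insert: still generates full redo despite NOLOGGING
INSERT INTO test_nolog
SELECT ROWNUM, 'x' FROM dual CONNECT BY LEVEL <= 10000;
COMMIT;

-- Direct-path insert: redo is largely skipped
-- (assuming the database is not in FORCE LOGGING mode)
INSERT /*+ APPEND */ INTO test_nolog
SELECT ROWNUM, 'x' FROM dual CONNECT BY LEVEL <= 10000;
COMMIT;
```

Run the redo-size query before and after each insert and compare the deltas to see the difference.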
We can see that there is a big difference in the redo size generated by each insert. There are many posts and articles on the internet that show the speed benefits of the NOLOGGING option, but here I mainly want to clarify that a regular insert (no APPEND hint) will still generate redo even if the table has been created with the NOLOGGING option.
What happens to the data after a restore when a nologging operation was performed on it?
I will present some scenarios to show the consequences when we need to perform a restore after a nologging transaction, this way we will know what to expect and we can better prepare ourselves in case of a disaster.
I took a full database backup, now I will create several tables with different options to see what happens after a restore, you might see some surprises here!
Scenarios:
Table “create as select” with the nologging option (table_ctas_nologging).
Regular table “create as select” (table_ctas_logging)
A nologging table created empty, and a regular (logging) insert (table_ctas_nologging_insert)
Table created with nologging, then two inserts, one with and one without logging (table_insert_mixed)
Regular logging table, with a nologging index (table_ctas_index_nologging)
ORA-01110: data file 4: '+DATA/orcl/datafile/users.259.719792191'
ORA-26040: Data block was loaded using the NOLOGGING option
That doesn't look good; let's see the next table.
SQL> select count(*) from table_ctas_logging;
COUNT(*)
----------
50863
Good, no problem here. The next scenario is more interesting: the table was created with the NOLOGGING option, but the inserts were done without the APPEND hint.
ORA-01110: data file 4: '+DATA/orcl/datafile/users.259.719792191'
ORA-26040: Data block was loaded using the NOLOGGING option
I tried to rebuild the index but I kept getting the same error message; in the end I was forced to drop and recreate it.
Conclusions:
Use the NOLOGGING option only on temporary/working/staging tables.
Always perform a backup after a NOLOGGING operation.
Unless explicitly indicated, DDLs like CTAS and DMLs like inserts will log all operations.
FROM ORACLE DOCUMENTATION:
NOLOGGING is supported in only a subset of the locations that support LOGGING. Only the following operations support the NOLOGGING mode:
DML:
Direct-path INSERT (serial or parallel) resulting either from an INSERT or a MERGE statement. NOLOGGING is not applicable to any UPDATE operations resulting from the MERGE statement.
ALTER TABLE ... LOB_storage_clause ... LOB_parameters ... NOCACHE | CACHE READS (to specify logging of newly created LOB columns)
ALTER TABLE ... modify_LOB_storage_clause ... modify_LOB_parameters ... NOCACHE | CACHE READS (to change logging of existing LOB columns)
ALTER TABLE ... MOVE
ALTER TABLE ... (all partition operations that involve data movement)
ALTER TABLE ... ADD PARTITION (hash partition only)
ALTER TABLE ... MERGE PARTITIONS
ALTER TABLE ... SPLIT PARTITION
ALTER TABLE ... MOVE PARTITION
ALTER TABLE ... MODIFY PARTITION ... ADD SUBPARTITION
ALTER TABLE ... MODIFY PARTITION ... COALESCE SUBPARTITION
CREATE INDEX
ALTER INDEX ... REBUILD
ALTER INDEX ... REBUILD [SUB]PARTITION
ALTER INDEX ... SPLIT PARTITION
For objects other than LOBs, if you omit this clause, then the logging attribute of the object defaults to the logging attribute of the tablespace in which it resides.
For LOBs, if you omit this clause, then:
If you specify CACHE, then LOGGING is used (because you cannot have CACHE NOLOGGING).
If you specify NOCACHE or CACHE READS, then the logging attribute defaults to the logging attribute of the tablespace in which it resides.
NOLOGGING does not apply to LOBs that are stored internally (in the table with row data). If you specify NOLOGGING for LOBs with values less than 4000 bytes and you have not disabled STORAGE IN ROW, then Oracle ignores the NOLOGGING specification and treats the LOB data the same as other table data.
Microsoft Windows is ONLY certified on Oracle Clusterware
Windows Vista, 7, Server 2008, and Server 2008 R2:
Do not install Oracle Database 10g Release 2 on Windows Server 2008, Server 2008 R2, 7, or Vista using the 10.2.0.1 version. You must use the 10.2.0.4 (64-bit) x64 version specifically created for these operating systems. After installation, for Windows 7 and Windows Server 2008 R2, patchset 10.2.0.5 or higher must be applied. For more information on installation, see the release notes sections on the installation media:
The matrix below summarizes client and server combinations that are supported for the most commonly used product versions. A full matrix appears at the end of this document.
New interoperability problems will only be investigated if BOTH releases involved are covered by a valid support contract at the time that the issue is reported. E.g.:
An 11.1.0 client to an 11.2.0 server issue requires the customer to have a valid Extended Support contract for the 11.1.0 client in order for Oracle to investigate it.
Limited extended support previously applied to 9.2 releases but has now ended.
ES  - Supported, but fixes are only possible for customers with Extended Support.
Was - Was a supported combination, but one of the releases is no longer covered by Premier Support, Primary Error Correction support, Extended Support, or Extended Maintenance Support, so fixes are no longer possible.
#2 - An ORA-3134 error is incorrectly reported if a 10g client tries to connect to an 8.1.7.3 or lower server. See Note:3437884.8.
#3 - An ORA-3134 error is correctly reported when attempting to connect to this version.
#4 - There are problems connecting from a 10g client to 8i/9i where one is EBCDIC based. See Note:3564573.8.
#5 - For connections between 10.2 (or higher) and 9.2, the 9.2 end MUST be at 9.2.0.4 or higher. Connections between 10.2 (or higher) and 9.2.0.1, 9.2.0.2 or 9.2.0.3 have never been supported.
#6 - For connections between an 11.1 (or higher) database server and a 10.1 / 10.2 database server across a database link, the 10g end MUST be at 10.1.0.5 / 10.2.0.2 (or higher) respectively in order to use PL/SQL between those versions. (This does not apply to normal 11g clients to 10g database servers, only server to server and a few very specific client products, such as Oracle Forms.) See Note:4511371.8 for more details.
#7 - For database link connections between 11.1 (or higher) and 10.2, the 10g end MUST be at 10.2.0.2 (or higher) in order to use PL/SQL between those versions. See Note:4511371.8 for more details.
#8 - Attempting to connect from 9.2 to 12.1 will fail with an "ORA-28040: No matching authentication protocol" error.
General Notes:
For database links between different Oracle versions, connections must be supported in BOTH directions in the matrix above. E.g.: as 11.2 -> 10.1 is not supported, database links between these versions are not supported in either direction.
Unsupported combinations may appear to work but can encounter errors for particular operations. The fact that they appear to work should not be relied upon - issues on unsupported combinations will not be investigated.
Since new database servers are compatible with a limited set of older OCI clients, it may not be necessary to upgrade the client software when upgrading the database. However, some new features may not work without upgrading the client software. For example, an Oracle 10.2 client is able to connect to an 11.2 database, but is not able to take advantage of newer features such as Client Result Cache (introduced in 11.1).
Oracle Applications , or other Oracle products, may have supported configurations not listed in the matrix above.
The matrix above also applies between different platforms and between 32/64 bit releases of Oracle client / server except where any Oracle platform desupport notice indicates otherwise .
Unix BEQUEATH (BEQ) connections are NOT supported between different releases. E.g.: a 10.2 client is not supported making an Oracle Net connection to an 11.2 server using the BEQ protocol adapter, regardless of the interoperability support listed above. See Note:364252.1 for more details.
The above is a summary of interoperability between Oracle client versions and the Oracle DB server versions they support.
Oracle Database 12c : Tom’s Top 12 Things About the Latest Generation of Database Technology
Functions (and procedures used within those functions) can be defined in the WITH clause. Performance boost compared to regular unit definition. A pragma allows regular functions to benefit from these performance gains.
Default value of column can use a sequence.nextval.
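For example (sequence and table names are made up):

```sql
CREATE SEQUENCE emp_seq;

CREATE TABLE emp_12c (
  id   NUMBER DEFAULT emp_seq.NEXTVAL,  -- new in 12c: a sequence in a column DEFAULT
  name VARCHAR2(50)
);

INSERT INTO emp_12c (name) VALUES ('Scott');  -- id is filled in automatically
```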
Identity columns: multiple levels of control over how they are used. Can use simple or more complex syntax.
Metadata-only default of optional columns. In previous versions this was possible only for mandatory columns.
VARCHAR2(32767) in the database. Less than 4K is stored inline. More than 4K is stored out of line, similar to LOB, but simpler. Not available by default.
Top-N now uses the row limiting clause, e.g. "OFFSET 10 ROWS FETCH FIRST 10 ROWS ONLY". Similar to MySQL syntax.
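A quick sketch of the new syntax (employees is a stand-in table):

```sql
-- Skip the first 10 rows, then return the next 10 (rows 11-20 by salary)
SELECT employee_id, salary
  FROM employees
 ORDER BY salary DESC
OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY;
```

FETCH FIRST and FETCH NEXT are interchangeable here.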
Row pattern matching. Quite a lot of new analytic syntax here.
Partitioning Improvements: - Asynchronous global index maintenance for DROP and TRUNCATE. The command returns instantly, but index cleanup happens later. - Cascade for TRUNCATE and EXCHANGE partition. - Multiple partition operations in a single DDL. - Online move of a partition (without DBMS_REDEFINITION). - Interval + reference partitioning.
Adaptive Execution Plans: - If the optimizer notices the cardinality is not what was expected, so the current plan is not optimal, it can alter subsequent plan operations to allow for the differences between the estimated and actual cardinalities. - The stats gathered during this process are persisted as adaptive statistics, so future decisions can benefit from them. - You will see STATISTICS COLLECTOR steps in the SQL trace. This can make the trace harder to read, as it can contain information about both the expected plan and the actual plan.
Enhanced Statistics: - Some dynamic sampling operations are persistent, so they are not lost when the SQL is aged out. - Hybrid histograms. When the number of distinct values is greater than 254, "almost popular" values can get "lost" in the mix. A single bucket can now store the popularity of that value, effectively increasing the number of buckets without actually increasing it. - Possibly the maximum number of buckets can be increased via a parameter (demo grounds). - Statistics gathered during loads: CTAS and INSERT ... SELECT automatically compute stats. - Global temporary tables can have "session private statistics". Previously, we had one-size-fits-all.
Temporary Undo (ALTER SESSION SET temp_undo_enabled=true): - UNDO for temporary tables can now be managed in TEMP, rather than the regular UNDO tablespace. - Reduces contents of regular UNDO, allowing better flashback operations. - Reduces the size of redo associated with recovering the regular UNDO tablespace.
Data Optimization: - Information Lifecycle Management: Uses heat map. Colder data is compressed and moved to lower tier storage. Controlled by declarative DDL policy.
Transaction Guard: - If a failure happens, your application may not know the actual status of a transaction. If it was successful, issuing it again could cause a duplicate transaction. - In these cases, you can mark a transaction with an "unknown" state (as far as the application is concerned) as failed, so even if it was successful, it will never be considered or recovered. You've guaranteed the outcome.
Pluggable database: - Oracle-provided metadata and data is kept in the container database (CDB). - User metadata and data is kept in the pluggable database (PDB). - One container can have multiple pluggable databases. - No namespace clashes. Allows public synonyms and database links at the PDB level, rather than the CDB level. - Cloning is quick and simple, as only user metadata and data needs to be cloned. - Upgrades have the potential to be just unplugging from the old version (12cR1) and plugging into the new version (12cR2). - Total resource usage is reduced on lower-use databases.
Oracle Database Optimizer: An Insider's View of How the Optimizer Works
- Oracle Database 12c is the first step on the way to an adaptive, or self-learning, optimizer.
- Alternative subplans are precomputed and stored in the cursor, so no new hard parsing is needed as part of adapting an already executing plan.
- Statistics collectors are included in the plan execution. If the collectors cross a threshold, the plan might switch during execution from a nested loop to a hash join.
- You can see information about the adaptive actions that have occurred using the DBMS_XPLAN package, with the format "+all_dyn_plan +adaptive". If a plan has been adapted, it is indicated in the v$sql.is_resolved_dynamic_plan column.
- If this functionality scares you, you can turn it off using the OPTIMIZER_ADAPTIVE_REPORTING_ONLY parameter. The same work is done, but no actual adaptive action is taken.
- During parallel execution, collectors can influence the distribution method (HASH > Distribution). Shown in the plan as the HYBRID HASH operations.
- Dynamic statistics replace dynamic sampling. The resulting stats are cached as SHARED DYNAMIC STATS specific to the statement, including the bind values. This information is used by any session running the same statement.
- Cardinality feedback can be used to re-optimize subsequent operations. Join statistics are monitored. Works with adaptive cursor sharing. Persisted on disk. The new column v$sql.is_reoptimizable shows that a subsequent run will take this into consideration. Collectors are kept even if the SQL statement is killed part way through. The plan shows that cardinality feedback is used.
- SQL plan directives are based on a SQL phrase (a specific join) rather than the whole statement. Cached in the directive cache, but persisted in the SYSAUX tablespace. Managed using the DBMS_SPD package.
Information gathered by the optimizer may prompt automatic creation of column groups, so the next time stats are gathered, the extended stats will be gathered.
What’s New in Security in the Latest Generation of Database Technology
Privilege Analysis: - Track direct privileges and privileges via roles being used, so you can determine the least privileges needed. - Monitoring controlled using DBMS_PRIVILEGE_CAPTURE. - Report what is used and what is not used.
Data Redaction: a variation on the column masking of VPD, but it doesn't just blank the value, and it still allows queries against the column in the WHERE clause.
Enhanced Security of Audit Trail: - Single unified audit trail. - Extension of the audit management package. - Multiple audit management privileges.
Encryption Enhancements: - Allow SQL creation and management of wallets, rather than command line utilities. Allows easier remote management. - Export and import wallets/keys between plugable databases. - Storage of wallets in ASM. - Much more…
Code-Based Access Control (CBAC): - A PL/SQL unit can have roles granted to it. - When the unit runs, any dynamic SQL running can have the privileges granted via the role. - Doesn't affect compile time, so focusing very much on dynamic SQL. - Useful with invoker rights, since now the PL/SQL can run with user privileges and explicitly granted roles for the unit.
Invoker Rights: - INHERITED RIGHTS: control accidental privilege escalation when a privileged user calls an invoker rights unit containing malicious code. - Invoker rights for views.
Separation of Duties: - SYSDBA – God - SYSOPER – More limited than SYSDBA, but still very powerful. - SYSBACKUP – Just enough to do a backup. - SYSDG – Just enough for data guard administration. - SYSKM – Just enough to perform basic key management tasks. - Roles for audit management.
If you want to store date and time information in Oracle, you really have only two options for the column's datatype. Let's take a quick look at these two datatypes and what they offer.
DATE datatype
This is the datatype that we are all too familiar with when we think about representing date and time values. It has the ability to store the month, day, year, century, hours, minutes, and seconds. It is typically good for recording when something has happened or should happen in the future. The problem with the DATE datatype is its granularity when trying to determine the time interval between two events that happen within a second of each other. This issue is addressed later in this article when we discuss the TIMESTAMP datatype. To represent the stored date in a more readable format, the TO_CHAR function has traditionally been wrapped around the date, as in Listing A.
LISTING A:
Formatting a date
SQL> SELECT TO_CHAR(date1,'MM/DD/YYYY HH24:MI:SS') "Date" FROM date_table;
Date
---------------------------
06/20/2003 16:55:14
06/26/2003 11:16:36
About the only trouble I have seen people get into when using the DATE datatype is doing arithmetic on the column in order to figure out the number of years, weeks, days, hours, and seconds between two dates. What needs to be realized when doing the calculation is that when you subtract two dates, you get a number that represents the number of days. You should then multiply that number by the number of seconds in a day (86400) before you continue with calculations to determine the interval with which you are concerned. Check out Listing B for my solution on how to extract the individual time intervals for a subtraction of two dates. I am aware that the fractions could be reduced, but I wanted to show all the numbers to emphasize the calculation.
LISTING B:
Determine the interval breakdown between two dates for a DATE datatype
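The listing itself is missing from this copy; the calculation it describes might be sketched like this (date1 and date2 are hypothetical DATE columns in date_table, with date1 the later of the two):

```sql
-- Break a DATE subtraction (a number of days) into days/hours/minutes/seconds
SELECT TRUNC(date1 - date2)                              AS days,
       TRUNC(MOD((date1 - date2) * 86400, 86400) / 3600) AS hours,
       TRUNC(MOD((date1 - date2) * 86400, 3600) / 60)    AS minutes,
       MOD((date1 - date2) * 86400, 60)                  AS seconds
  FROM date_table;
```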
One of the main problems with the DATE datatype was its inability to be granular enough to determine which event might have happened first in relation to another event. Oracle has expanded on the DATE datatype and has given us the TIMESTAMP datatype, which stores all the information that the DATE datatype stores but also includes fractional seconds. If you want to convert a DATE datatype to a TIMESTAMP datatype format, just use the CAST function as I do in Listing C. As you can see, there is a fractional seconds part of '.000000' on the end of this conversion. This is only because when converting from the DATE datatype, which does not have fractional seconds, they default to zeros, and the display defaults to the default timestamp format (NLS_TIMESTAMP_FORMAT). If you are moving a DATE datatype column from one table to a TIMESTAMP datatype column of another table, all you need to do is a straight INSERT ... SELECT FROM and Oracle will do the conversion for you. Look at Listing D for a formatting of the new TIMESTAMP datatype, where everything is the same as formatting the DATE datatype as we did in Listing A. Beware: while the TO_CHAR function works with both datatypes, the TRUNC function will not work with a datatype of TIMESTAMP. This is a clear indication that the TIMESTAMP datatype should explicitly be used for dates and times where a difference in time is of utmost importance, such that Oracle won't even let you compare like values. If you want to show the fractional seconds within a TIMESTAMP datatype, look at Listing E. In Listing E, we are only showing 3 place holders for the fractional seconds.
LISTING C:
Convert DATE datatype to TIMESTAMP datatype
SQL> SELECT CAST(date1 AS TIMESTAMP) "Date" FROM t;
Date
-----------------------------------------------------
20-JUN-03 04.55.14.000000 PM
26-JUN-03 11.16.36.000000 AM
LISTING D:
Formatting of the TIMESTAMP datatype
SELECT TO_CHAR(time1,'MM/DD/YYYY HH24:MI:SS') "Date" FROM date_table
Date
-------------------
06/20/2003 16:55:14
06/26/2003 11:16:36
LISTING E:
Formatting of the TIMESTAMP datatype with fractional seconds
SELECT TO_CHAR(time1,'MM/DD/YYYY HH24:MI:SS:FF3') "Date" FROM date_table
Date
-----------------------
06/20/2003 16:55:14:000
06/26/2003 11:16:36:000
Calculating the time difference between two TIMESTAMP datatypes is much easier than with the old DATE datatype. Look at what happens when you just do straight subtraction of the columns in Listing F. As you can see, the results are much easier to recognize: 17 days, 18 hours, 27 minutes, and 43 seconds for the first row of output. This means no more worries about how many seconds are in a day and all those cumbersome calculations. The calculation for getting the weeks, days, hours, minutes, and seconds becomes a matter of picking out the numbers with the SUBSTR function, as can be seen in Listing G.
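Listings F and G are not reproduced in this copy, but the idea can be sketched as follows (time1/time2 are hypothetical TIMESTAMP columns; the article's Listing G uses SUBSTR on the string form of the interval, while EXTRACT shown here is an equivalent alternative):

```sql
-- Straight subtraction of two TIMESTAMPs yields an INTERVAL DAY TO SECOND,
-- displayed in a form like +000000017 18:27:43.000000
SELECT time2 - time1 AS diff FROM date_table;

-- Individual fields can then be pulled out of the interval
SELECT EXTRACT(DAY    FROM (time2 - time1)) AS days,
       EXTRACT(HOUR   FROM (time2 - time1)) AS hours,
       EXTRACT(MINUTE FROM (time2 - time1)) AS minutes,
       EXTRACT(SECOND FROM (time2 - time1)) AS seconds
  FROM date_table;
```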
In order to get the system date and time returned in a DATE datatype, you can use the SYSDATE function such as :
SQL> SELECT SYSDATE FROM DUAL;
In order to get the system date and time returned in a TIMESTAMP datatype, you can use the SYSTIMESTAMP function such as:
SQL> SELECT SYSTIMESTAMP FROM DUAL;
You can set the initialization parameter FIXED_DATE to return a constant value from the SYSDATE function. This is a great tool for testing date- and time-sensitive code. Just beware that this parameter has no effect on the SYSTIMESTAMP function, as can be seen in Listing H.
LISTING H:
Setting FIXED_DATE and effects on SYSDATE and SYSTIMESTAMP
SQL> ALTER SYSTEM SET fixed_date = '2003-01-01-10:00:00';
System altered.
SQL> select sysdate from dual;
SYSDATE
---------
01-JAN-03
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------
09-JUL-03 11.05.02.519000 AM -06:00
When working with date and time, the options are clear: you have at your disposal the DATE and TIMESTAMP datatypes. Just be aware that while there are similarities, there are also differences that could create havoc if you try to convert to the more powerful TIMESTAMP datatype. Each of the two has its strengths in simplicity and granularity. Choose wisely.