Oracle 2011. 8. 11. 15:24
Bug Attributes
Type | B - Defect
Fixed in Product Version | - (no fix version)
Severity | 2 - Severe Loss of Service
Product Version | 10.2.0.2
Status | 44 - Not Feasible to fix, to Filer
Platform | 46 - Linux x86
Created | 02-Apr-2007
Platform Version | LINUX AS4.0
Updated | 06-Aug-2008
Base Bug | -
Database Version | 10.2.0.2
Affected Platforms | Generic
Product Source | Oracle

Related Products
Line | Oracle Database Products
Family | Oracle Database
Area | Oracle Database
Product | 5 - Oracle Server - Enterprise Edition
Hdr: 5969934 10.2.0.2 RDBMS 10.2.0.2 DATA PUMP EXP PRODID-5 PORTID-46
Abstract: EXPDP CLIENT GETS UDE-00008 ORA-31626 WHILE THE SERVER SIDE EXPORT IS OK
*** 04/02/07 10:22 pm ***
TAR:
----
PROBLEM:
--------
1. Clear description of the problem encountered:
The expdp client gets UDE-00008 and ORA-31626 while the server-side export is OK.
The export log from the server side looks fine.
// client console log
------------------------------
UDE-00008: operation generated ORACLE error 31626
ORA-31626: job does not exist
ORA-39086: cannot retrieve job information
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2745
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3712
ORA-06512: at line 1
------------------------------
2. Pertinent configuration information (MTS/OPS/distributed/etc)
2-node RAC
3. Indication of the frequency and predictability of the problem
100% reproducible at the customer's site.
4. Sequence of events leading to the problem
Please see attached trace files.
DIAGNOSTIC ANALYSIS:
--------------------
No errors in the alert logs on either node.
The following is the output of expdp with trace=EB0300.
Deleting queue from the master table fails, same as BUG 5663241.
// expdp command line
expdp \'sys/XXX as sysdba\' trace=EB0300
DUMPFILE=work_app_dmp:20070330_expdp_err_test.dmp
FULL=y LOGFILE=work_app_dmp:20070330_expdp_err_test.log
// trace of Master Control Process
------------------------------
KUPM: 17:54:40.520: Closing job....
KUPM: 17:54:40.520: Final state in Close_job is: COMPLETED
KUPM: 17:54:40.520: dropping master since job never started
KUPM: 17:54:40.520: keep_master_flag = FALSE
KUPM: 17:54:40.520: Delete files = FALSE
KUPM: 17:54:40.520: File subsystem shutdown
KUPM: 17:54:40.520: In set_longops
KUPM: 17:54:40.521: Work so far is: 66.80495548248291015625
KUPM: 17:54:40.521: Master delete flag is: TRUE
KUPV: 17:54:40.521: Delete request for job: SYS.SYS_EXPORT_FULL_01
KUPV: 17:54:40.535: Deleting FT job entry
KUPV: 17:54:40.535: Deleting queues
KUPC: 17:54:40.966: Deleted queue SYS.KUPC$C_1_20070330174720.
KUPC: 17:54:40.968: Error Code: -24018
KUPC: 17:54:40.968: Error Text: deleteQueue: ORA-24018: STOP_QUEUE on
SYS.KUPC$S_1_20070330174720 failed, outstanding transactions found
KUPV: 17:54:40.968: Delete request for MT: SYS.SYS_EXPORT_FULL_01
*** 17:54:51.426
KUPM: 17:54:51.426: Fixed views cleaned up
KUPM: 17:54:51.427: Log file is closed.
KUPV: 17:54:51.427: Detach request
KUPV: 17:54:51.434: Deleting FT att entry
------------------------------
Importing a table (TABLES=test_tbl_2) from the dump file (20070330_expdp_err_test.dmp)
completes successfully, which shows that the export is valid at least for that table
(test_tbl_2).
// impdp command line
impdp \'sys/XXX as sysdba\' DUMPFILE=work_app_dmp:20070330_expdp_err_test.dmp
TABLES=test_tbl_2 LOGFILE=work_app_dmp:20070330_impdp_test.log
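As an additional check (illustrative only, not part of the original bug record), the entire dump can be validated without loading any data by doing a metadata-only pass with the impdp SQLFILE parameter; the sqlfile and log file names below are placeholders:
// illustrative impdp command: writes the DDL contained in the dump to a SQL
// script instead of executing it, so no data is loaded
impdp \'sys/XXX as sysdba\' DUMPFILE=work_app_dmp:20070330_expdp_err_test.dmp
FULL=y SQLFILE=work_app_dmp:verify_ddl.sql LOGFILE=work_app_dmp:verify_sqlfile.log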
WORKAROUND:
-----------
None.
RELATED BUGS:
-------------
BUG 5416274
BUG 5663241 (duplicate bug of BUG 5416274)
REPRODUCIBILITY:
----------------
Reproduced twice at the customer's site.
The customer reports that this occurred every time expdp was run.
TEST CASE:
----------
STACK TRACE:
------------
SUPPORTING INFORMATION:
-----------------------
I will be loading the following files:
dbram01_dm00_19485.trc
dbram01_dw01_19569.trc
expdp.txt
imp.txt
24 HOUR CONTACT INFORMATION FOR P1 BUGS:
----------------------------------------
DIAL-IN INFORMATION:
--------------------
IMPACT DATE:
------------
*** 04/10/07 01:20 am *** (CHG: Sta->11)
*** 08/06/08 09:21 am *** (CHG: Sta->44)
*** 08/06/08 09:21 am ***
In the 10.2.0.2 release, there were a number of problems that caused the
expdp and impdp clients to exit prematurely, interpreting a nonfatal error as
a fatal one, giving the appearance that the job had failed when it hadn't. In
fact, inspection of the log file, if one was specified for the job, showed
that the job ran successfully to completion. Often a trace file written by
one of the Data Pump processes would provide more detail on the error that
had been misinterpreted as a fatal one. Many of these errors involved the
queues used for communication between the Data Pump processes, but there were
other issues as well.
With each subsequent release, these problems have been addressed, and the
client has become more robust and rarely, if ever, runs into situations like
this. However, this is the result of dozens of bug fixes in subsequent
releases, some in Data Pump and some in supporting layers. It's impossible to
know, at this point, what combination of bug fixes would address this
specific failure, and even if that was possible, it wouldn't address other
possible failures that look very similar on the client side.
Relying on information in the log file is one way to verify that the job
actually completed successfully. Problems like this one became much more
intermittent by the 10.2.0.4 and 10.2.0.5 releases and are rarely, if ever,
seen in 11.1 or later.
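The following is a practical illustration of that check, not part of the bug record (the log path is a placeholder; the job name is the one shown in the trace above):
-- Jobs that still have a master table are listed here; a job that completed
-- cleanly normally disappears from the view (or shows STATE = 'NOT RUNNING').
SELECT owner_name, job_name, operation, job_mode, state
  FROM dba_datapump_jobs;
-- On the OS side, the last line of the server-side export log is the authoritative check:
-- $ tail -1 /path/to/20070330_expdp_err_test.log
-- expected: Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at <timestamp>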
Source: Oracle Support
Oracle 2011. 8. 11. 15:19
Modified: 29-APR-2010 | Type: PROBLEM | Status: PUBLISHED
Applies to: Oracle Server - Enterprise Edition - Version: 10.2.0.2 to 11.1.0.7 - Release: 10.2 to 11.1
Information in this document applies to any platform.
Symptoms
Data Pump export from the client side completes with errors. Example:
expdp system/password full=y dumpfile=full_exp.dmp logfile=full_explog
...
. . exported "CALYPSO"."KICKOFF_CFG" 0 KB 0 rows
. . exported "CALYPSO"."LE_AGR_CHILD" 0 KB 0 rows
. . exported "CALYPSO"."LE_ATTR_CODE" 0 KB 0 rows
UDE-00008: operation generated ORACLE error 31626
ORA-31626: job does not exist
ORA-39086: cannot retrieve job information
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2745
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3712
ORA-06512: at line 1
However, reviewing the logfile shows that the job completed successfully ("job successfully completed").
Cause
This issue has been discussed in Bug 5969934 - EXPDP CLIENT GETS UDE-00008 ORA-31626 WHILE THE SERVER SIDE EXPORT IS OK.
Solution
The expdp client makes calls to the DBMS_DATAPUMP package to start and monitor the export job. Once the export job is underway, the client simply monitors the job status by calling DBMS_DATAPUMP.GET_STATUS.
Therefore, if the export logfile says "job successfully completed", the dump file generated by the job should be fine.
You can simply ignore the errors, since the dump file is still valid for an import.
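For reference, the server-side job that expdp drives can also be defined and monitored directly through DBMS_DATAPUMP. The sketch below is illustrative only and not part of the original note (the directory object DATA_PUMP_DIR, the job name FULL_EXP_JOB and the file names are assumptions); it shows that the client is merely an observer of the job state:
-- Minimal PL/SQL sketch: define a full export, start it, and wait for its final state.
-- Assumes a valid directory object and sufficient privileges; all names are illustrative.
DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'FULL',
                          job_name  => 'FULL_EXP_JOB');
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'full_exp.dmp',
                         directory => 'DATA_PUMP_DIR',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'full_exp.log',
                         directory => 'DATA_PUMP_DIR',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  DBMS_DATAPUMP.START_JOB(h);
  -- The export itself runs in the server processes; the caller only waits on status.
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
  DBMS_OUTPUT.PUT_LINE('Final job state: ' || job_state);
END;
/
Because the work happens entirely inside the database, a UDE-00008/ORA-31626 raised on the client after the log already reports successful completion indicates a problem retrieving job status, not a failed export.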
In the 10.2.0.2 release, there were a number of problems that caused the expdp and impdp clients to exit prematurely, interpreting a nonfatal error as a fatal one, giving the appearance that the job had failed when it hadn't. In fact, inspection of the log file, if one was specified for the job, showed that the job ran successfully to completion. Often a trace file written by one of the Data Pump processes would provide more detail on the error that had been misinterpreted as a fatal one. Many of these errors involved the queues used for communication between the Data Pump processes, but there were other issues as well.
With each subsequent release, these problems have been addressed, and the client has become more robust and rarely, if ever, runs into situations like this. However, this is the result of many bug fixes in subsequent releases, some in Data Pump and some in supporting layers. It's impossible to know, at this point, what combination of bug fixes would address this specific failure, and even if that was possible, it wouldn't address other possible failures that look very similar on the client side.
Relying on information in the log file is one way to verify that the job actually completed successfully. Problems like this one became much more intermittent by the 10.2.0.4 release and are rarely, if ever, seen in 11.1 or later.
References
BUG:5969934 - EXPDP CLIENT GETS UDE-00008 ORA-31626 WHILE THE SERVER SIDE EXPORT IS OK
Related Resources
Source: Oracle Support
Data Pump has more bugs than one might expect. Performance-wise, though, it is efficient enough to be worth tolerating these bugs, so use it while steering clear of them.
Oracle 2011. 8. 11. 11:30
Modified: 20-MAY-2010 | Type: PROBLEM | Status: PUBLISHED
Applies to: Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 10.2.0.4 - Release: 10.2 to 10.2
Information in this document applies to any platform.
Symptoms
Running an expdp on a certain table usually takes an hour.
However, it now runs for over a day. The problem appears to be with one table (the largest in the database).
When the expdp is started, the first table that it processes has an estimated processing time of over 24 hours.
When the same export is run from two other servers, the elapsed time for the same table is around 20 minutes.
ERROR
-----------------------
Processing object type SCHEMA_EXPORT/TABLE/MATERIALIZED_VIEW_LOG
ORA-31693: Table data object "MERIDIAN"."LINEPAST" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 1 with name "_SYSSMU1$" too small
The table does not have any LOB columns.
Increasing the undo_retention parameter did not help.
Changes
Most of the rows in the table were recently updated.
Cause
Updating the rows in the table has caused heavy fragmentation (chained/migrated rows), which significantly increased the time needed to export this table.
Solution
1. Recreate the fragmented table:
SQL> create table <new_table_name> as (select * from <fragmented_table>);
then drop the fragmented table.
You may rename <new_table_name> to the old table name.
2. Perform the expdp job again (this time it completed in a few minutes); a worked sketch of the full sequence follows below.
You may also refer to Note 746778.1 - How to Identify, Avoid and Eliminate Chained and Migrated Rows?
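A minimal SQL sketch of the rebuild sequence above, using the table named in the ORA-31693 message as an illustration (the intermediate name LINEPAST_NEW is hypothetical, and any indexes, constraints, grants and triggers on the original table must be recreated separately before it is dropped):
-- Optional: quantify chained/migrated rows first (ANALYZE populates CHAIN_CNT)
ANALYZE TABLE meridian.linepast COMPUTE STATISTICS;
SELECT num_rows, chain_cnt
  FROM dba_tables
 WHERE owner = 'MERIDIAN'
   AND table_name = 'LINEPAST';
-- Step 1: rebuild the table into a fresh segment, then swap the names
CREATE TABLE meridian.linepast_new AS SELECT * FROM meridian.linepast;
DROP TABLE meridian.linepast;
ALTER TABLE meridian.linepast_new RENAME TO linepast;
-- Step 2: re-run the export for the rebuilt table
-- $ expdp system/password tables=MERIDIAN.LINEPAST dumpfile=linepast.dmp logfile=linepast.log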
References
NOTE:746778.1 - How to Identify, Avoid and Eliminate Chained and Migrated Rows?
Related Resources
Source: Oracle Support
Contents of NOTE:746778.1