Blog dedicated to Oracle Applications (E-Business Suite) Technology; covers Apps Architecture, Administration and third party bolt-ons to Apps
Tuesday, January 27, 2009
ASCP plan uses a lot of temp tablespace after OATM
The problem was traced to the materialized view INV.MTL_SYS_ITEMS_SN and the following query:
SELECT /*+ OPAQUE_TRANSFORM */
"INVENTORY_ITEM_ID", "ORGANIZATION_ID", "USING_ORGANIZATION_ID",
"ASL_ID", "PROCESSING_LEAD_TIME", "MINIMUM_ORDER_QUANTITY",
"FIXED_LOT_MULTIPLE", "DELIVERY_CALENDAR_CODE",
"PURCHASING_UNIT_OF_MEASURE", "VENDOR_ID", "VENDOR_SITE_ID",
"ENABLE_VMI_FLAG", "VMI_MIN_QTY", "VMI_MAX_QTY",
"ENABLE_VMI_AUTO_REPLENISH_FLAG", "VMI_REPLENISHMENT_APPROVAL",
"ATTRIBUTE1", "ATTRIBUTE2", "ATTRIBUTE3", "ATTRIBUTE4", "ATTRIBUTE5",
"ATTRIBUTE6", "ATTRIBUTE7", "ATTRIBUTE8", "ATTRIBUTE9", "ATTRIBUTE10",
"ATTRIBUTE11", "ATTRIBUTE12", "ATTRIBUTE13", "ATTRIBUTE14",
"ATTRIBUTE15", "ITEM_PRICE", "PRIMARY_VENDOR_ITEM",
"REPLENISHMENT_METHOD", "MIN_MINMAX_DAYS", "MAX_MINMAX_DAYS",
"FORECAST_HORIZON", "FIXED_ORDER_QUANTITY"
FROM "MRP_AP_PO_SUPPLIERS_V" "X"
WHERE "ORGANIZATION_ID" = 1234
OR "ORGANIZATION_ID" = 2341
OR "ORGANIZATION_ID" = 2929
OR "ORGANIZATION_ID" = 2382
When traced, the query showed a lot of 'direct path write temp' wait events and filled up the temp tablespace:
WAIT #35: nam='direct path write temp' ela= 5078 file number=4017 first dba=18341
WAIT #35: nam='direct path write temp' ela= 2611 file number=4017 first dba=18354
WAIT #35: nam='direct path write temp' ela= 2617 file number=4017 first dba=18353
WAIT #35: nam='direct path write temp' ela= 1880 file number=4017 first dba=18360
WAIT #35: nam='direct path write temp' ela= 26 file number=4017 first dba=183641
WAIT #35: nam='direct path write temp' ela= 1721 file number=4017 first dba=18363
WAIT #35: nam='direct path write temp' ela= 24 file number=4017 first dba=183721
WAIT #35: nam='direct path write temp' ela= 1922 file number=4017 first dba=18378
WAIT #35: nam='direct path write temp' ela= 6247 file number=4017 first dba=18377
WAIT #35: nam='direct path write temp' ela= 1887 file number=4017 first dba=18383
WAIT #35: nam='direct path write temp' ela= 5088 file number=4017 first dba=18383
WAIT #35: nam='direct path write temp' ela= 4958 file number=4017 first dba=18396
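Wait lines in this format come from a raw SQL trace. For reference, a minimal way to capture such a trace for another session on 10g is DBMS_MONITOR; the SID and serial# below are placeholders, not values from this system:
-- Enable a SQL trace with wait events for the session running the query
exec dbms_monitor.session_trace_enable(session_id => 1234, serial_num => 56789, waits => TRUE, binds => FALSE);
-- ... let the statement run for a while, then switch the trace off
exec dbms_monitor.session_trace_disable(session_id => 1234, serial_num => 56789);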
The corresponding OS call trace showed the writes hitting the temp files, with the shadow process writing the same WAIT lines to its trace file:
/8: lwp_park(0x00000000, 0) = 0
/8: pwrite(423, "\bA2\0\0\f03 M YA9 b P _".., 122880, 0x69AB2000) = 122880
/1: kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF) Err#22 EINVAL
/1: write(5, " W A I T # 3 5 : n a".., 127) = 127
/1: write(5, "\n", 1) = 1
/1: times(0xFFFFFFFF7FFF8F70) = 1994720337
/1: semctl(469762093, 12, SETVAL, 1) = 0
/1: semtimedop(570425393, 0xFFFFFFFF7FFF7E04, 1, 0xFFFFFFFF7FFF7DF0) = 0
/1: write(5, " W A I T # 3 5 : n a".., 99) = 99
/1: write(5, "\n", 1) = 1
/1: semctl(469762093, 12, SETVAL, 1) = 0
/1: semtimedop(570425393, 0xFFFFFFFF7FFF7E04, 1, 0xFFFFFFFF7FFF7DF0) = 0
/1: write(5, " W A I T # 3 5 : n a".., 99) = 99
/1: write(5, "\n", 1) = 1
/1: semctl(469762093, 12, SETVAL, 1) = 0
/1: semtimedop(570425393, 0xFFFFFFFF7FFF7E04, 1, 0xFFFFFFFF7FFF7DF0) = 0
/1: write(5, " W A I T # 3 5 : n a".., 99) = 99
/1: write(5, "\n", 1) = 1
/1: times(0xFFFFFFFF7FFF8D50) = 1994720339
/1: lwp_unpark(9) = 0
/9: lwp_park(0x00000000, 0) = 0
/1: kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF) Err#22 EINVAL
/9: pwrite(422, "\bA2\0\0\vC3E3D9A9 b P _".., 122880, 0x7C7B2000) = 122880
/9: kaio(AIONOTIFY, 0) = 0
/1: kaio(AIOWAIT, 0xFFFFFFFF7FFF8E30) = 1
/1: write(5, " W A I T # 3 5 : n a".., 127) = 127
/1: write(5, "\n", 1) = 1
/1: times(0xFFFFFFFF7FFF8F70) = 1994720340
/1: times(0xFFFFFFFF7FFF8D50) = 1994720340
/1: lwp_unpark(10) = 0
/10: lwp_park(0x00000000, 0) = 0
/1: kaio(AIOWAIT, 0xFFFFFFFFFFFFFFFF) Err#22 EINVAL
/10: pwrite(423, "\bA2\0\0\f03 M iA9 b P _".., 122880, 0x69AD2000) = 122880
/10: kaio(AIONOTIFY, 0) = 0
/1: kaio(AIOWAIT, 0xFFFFFFFF7FFF8E30) = 1
/1: write(5, " W A I T # 3 5 : n a".., 128) = 128
/1: write(5, "\n", 1) = 1
/1: times(0xFFFFFFFF7FFF8F70) = 1994720342
/1: semctl(469762093, 12, SETVAL, 1) = 0
/1: semtimedop(570425393, 0xFFFFFFFF7FFF7E04, 1, 0xFFFFFFFF7FFF7DF0) = 0
/1: write(5, " W A I T # 3 5 : n a".., 99) = 99
/1: write(5, "\n", 1) = 1
/1: semctl(469762093, 12, SETVAL, 1) = 0
/1: semtimedop(570425393, 0xFFFFFFFF7FFF7E04, 1, 0xFFFFFFFF7FFF7DF0) = 0
/1: write(5, " W A I T # 3 5 : n a".., 99) = 99
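While the plan was running, temp consumption per session could also be watched from v$tempseg_usage; this is a generic monitoring sketch, not something specific to ASCP (blocks are database blocks, so multiply by your block size for bytes):
select s.sid, s.serial#, s.module, u.tablespace, u.segtype, u.blocks
from v$session s, v$tempseg_usage u
where s.saddr = u.session_addr
order by u.blocks desc;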
The issue was solved by gathering fresh statistics with the following steps:
execute dbms_stats.gather_fixed_objects_stats;
execute dbms_stats.gather_dictionary_stats();
execute dbms_stats.gather_schema_stats('APPS',cascade=>TRUE,degree=>48);
execute dbms_stats.gather_schema_stats('APPLSYS',cascade=>TRUE,degree=>48);
execute dbms_stats.gather_schema_stats('INV',cascade=>TRUE,degree=>48);
execute dbms_stats.gather_schema_stats('PO',cascade=>TRUE,degree=>48);
execute dbms_stats.gather_schema_stats('MSC',cascade=>TRUE,degree=>48);
execute dbms_stats.gather_schema_stats('ASO',cascade=>TRUE,degree=>48);
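To confirm the objects involved picked up fresh statistics after the gathers above, a quick dictionary check is enough; the snapshot name is the one identified earlier, and you can add any other suspect objects to the list:
select owner, table_name, num_rows, last_analyzed
from dba_tables
where table_name = 'MTL_SYS_ITEMS_SN';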
Wednesday, January 21, 2009
OATM export import testcase
Friday, January 16, 2009
OATM fails to migrate data in long columns
The fndmlong log showed this JDBC error during the migration of LONG columns:
Io exception: Connection refused(DESCRIPTION=(TMP=)(VSNNUM=168822016)(ERR=12505)(ERROR_STACK=(ERROR=(CODE=12505)(EMFI=4))))
The issue was logged with Oracle Support; the problem statement read:
TAR Number (If customer issued)
------------------------------
7238244.993
.
Problem Statement
-----------------
As a result of an issue similar to Bug 5646392 / Note 402717.1, data in long columns was not moved over to the new tables in the new tablespace.
.
apps.fnd_ts_mig_cmds does not show any failure, but fndmlong<>.log shows an error like:
Io exception: Connection refused(DESCRIPTION=(TMP=)(VSNNUM=168822016)(ERR=12505)(ERROR_STACK=(ERROR=(CODE=12505)(EMFI=4))))
Exception occured in obtaining the connection.
.
The staging tables holding the long data <><> do not exist.
.
At this point all the data is lost. We are looking for a way to salvage the data from the backup taken before OATM.
.
Keep in mind there is new data in the system, so a restore is a last resort to be used only if there is no other option.
.
Environment and Access Information
----------------------------------
.
Steps to reproduce the issue
----------------------------
- 11.5.9 ATG PF_H.RUP4 OATM Migration. DB Version 10.2.0.2 64 bit
- Migration completed fine, status shows success for all the tables
- When the system was released to users, flexfield-related errors were encountered
- Investigation revealed that some tables were missing data in long columns
- Further investigation revealed that all the tables having long columns had no data in these columns
On the bug, they gave this possible cause:
The error the customer is getting shows that there is an error connecting to the database using JDBC. In one similar issue, development stated: the long tables are migrated using a Java program and a JDBC connection.
It is this jdbc connection that is failing and producing the error message.
This error can happen if you use the SID instead of the service name when you connect to the database. Please retry with the correct service name.
This is probably what needs to be attempted in the customer's case, but we would like to see the customer's tnsnames.ora file first.
Please upload tnsnames.ora file.
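For context, the SID-versus-service-name difference the bug text refers to shows up in the JDBC thin URL; the host, port, and names below are placeholders:
SID style (can raise ORA-12505 if the listener does not know the SID): jdbc:oracle:thin:@dbhost:1521:PRODSID
Service name style: jdbc:oracle:thin:@//dbhost:1521/prod_service_name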
I investigated this error in the current test iteration of OATM, and sure enough we had one table that was missing long data:
fndmlong20090116041440.log:Exception while moving data of column USER_EXIT from table APPLSYS.FND_FLEX_VALIDATION_EVENTS to table APPLSYS.FND_FLEX_VALIDATION_EVENTS5101
fndmlong20090116041440.log:Thread 2 : Exception : java.sql.SQLException: Io exception: invalid arguments in call
fndmlong20090116041440.log:Io exception: invalid arguments in call
fndmlong20090116041440.log:Thread 2 : Exception : Bigger type length than Maximum
From a previous run we got this:
Thread 4 : Exception : java.sql.SQLException: Io exception: Bigger type length than Maximum
Exception while moving data of column SQL_STATEMENT_TEXT from table ALR.ALR_ALERTS to table ALR.ALR_ALERTS5357
Thread 2 : Exception : java.lang.ArrayIndexOutOfBoundsException: 3
Io exception: Bigger type length than Maximum
As per the bug there are 168 tables with long columns.
Another bug, 5934043, says:
OATM uses a Java program to migrate tables with long columns. It seems that the "Protocol Violation" error you got is thrown from the JDBC driver. The exception was thrown when the JDBC driver read something from the RDBMS that it didn't expect.
A possible reason could be that the protocol engine in the thin driver and the protocol engine in the RDBMS are out of sync. Please verify that the upgrade finished successfully. It looks like this is an environment issue, not an OATM-specific issue.
As a workaround, we planned the following to protect the data:
Export the 170 tables which have LONG or LONG RAW columns before starting OATM (a query for identifying them is sketched after this list).
Take a row count of all tables where the long column is not null.
Do the OATM migration.
Verify that the data migrated correctly by checking the row counts.
If any data is missing in any table, truncate it and import the table back.
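A minimal sketch for identifying the tables to export, using only the data dictionary (adjust the owner filter for your instance):
select c.owner, c.table_name, c.column_name, c.data_type
from dba_tab_columns c, dba_tables t
where c.owner = t.owner
and c.table_name = t.table_name
and c.data_type in ('LONG', 'LONG RAW')
and c.owner not in ('SYS', 'SYSTEM')
order by c.owner, c.table_name;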
Oracle still hasn't given a satisfactory answer on the cause of this error, but thankfully we didn't face this issue during our production run.
Tuesday, December 9, 2008
OATM migration post steps
After the migration, we checked for segments left outside the OATM and standard tablespaces with this query:
select distinct tablespace_name from dba_segments
where tablespace_name not like 'APPS%' and
tablespace_name not IN ('SYSTEM','SYSAUX','TEMP','OWAPUB','CTXD');
Using the following query, we also found some seeded tablespaces which did not have any objects:
select name from v$tablespace
minus
select tablespace_name from dba_segments;
We dropped all the tablespaces which were returned by this query, as they were empty and taking up valuable disk space.
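A sketch of the kind of statement involved; OLD_EMPTY_TS is a placeholder, and the INCLUDING CONTENTS AND DATAFILES clause is one option (it also removes the datafiles from disk), not necessarily what was run here:
DROP TABLESPACE OLD_EMPTY_TS INCLUDING CONTENTS AND DATAFILES;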
Thursday, December 4, 2008
Resume OATM migration after errors
2. Bounce the DB to be sure locks are released
3. Restart the migration with the next steps:
3.1. Make sure the OATM migration is stopped by running the following at the OS level:
ps -ef | grep fndtsmig
3.2. Stop the migration queue by running the following from sqlplus as the apps user:
exec dbms_aqadm.stop_queue('SYSTEM.TBLMIG_MESSAGEQUE');
3.3. Restart the OATM migration one more time:
perl fndtsmig.pl
You should also check Metalink Note 463271.1:
1) Run the 'Migration Error Report' and correct the errors that cause the migration to fail.
2) Make sure that queue 'TBLMIG_MESSAGEQUE' is started
select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
from dba_queues
where owner ='SYSTEM'
and name = 'TBLMIG_MESSAGEQUE' ;
In case the queue is not started, run from sqlplus as the system user:
exec dbms_aqadm.start_queue('SYSTEM.TBLMIG_MESSAGEQUE');
3) Run the migration command again; the migration utility will try to move the objects which are still in the old tablespaces.
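Before and after re-running it, a quick way to see what is still sitting in the old tablespaces is a variation of the query from the 'OATM migration post steps' entry above (the tablespace filter is an assumption based on our naming):
select owner, segment_name, segment_type, tablespace_name
from dba_segments
where tablespace_name not like 'APPS%'
and tablespace_name not in ('SYSTEM','SYSAUX','TEMP','OWAPUB','CTXD')
order by owner, segment_name;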
Wednesday, November 5, 2008
OATM doesn't drop some seeded tablespaces
Thursday, October 23, 2008
Pre-steps during OATM conversion not in metalink
Wednesday, June 25, 2008
The 13th OATM tablespace APPS_TS_TOOLS
Customers who have migrated to the new Oracle Applications Tablespace Model and have applied Oracle Applications Technology Family Pack 11i.ATG_PF.H Rollup 4 may encounter the following error when running the AD Splicer to introduce a new product schema:
alter user xxx quota unlimited on APPS_TS_TOOLS
AD Splicer error:
The following ORACLE error:
ORA-00959: tablespace 'APPS_TS_TOOLS' does not exist
To correct this error, create the APPS_TS_TOOLS tablespace, either manually through SQL*Plus or by invoking the Tablespace Migration Utility menu. For more information, see: Understanding the Tablespace Migration Utility Main Menu, Oracle Applications System Administrator's Guide - Configuration.
The default initial size for the APPS_TS_TOOLS tablespace is 500 MB, with auto segment management enabled. If you create the APPS_TS_TOOLS tablespace through SQL*Plus, outside of the OATM utility, you can use the following statement:
CREATE TABLESPACE APPS_TS_TOOLS DATAFILE
'datafile_name.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
You can override these storage recommendations with storage parameters to suit the expected size required for objects that will be in the Tools tablespace at your site.
APPS_TS_TOOLS is used to store objects from several tools components like SSO, OID, LDAP, Portal, Discoverer, etc.
This tablespace must exist before using the Release 12 patching tools like adpatch.
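A quick way to check whether APPS_TS_TOOLS already exists before running AD Splicer or adpatch is a plain dictionary query (nothing OATM-specific):
select tablespace_name, status
from dba_tablespaces
where tablespace_name = 'APPS_TS_TOOLS';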
Originally there were 12 tablespaces in OATM, as per the OATM FAQ:
How many tablespaces are introduced in OATM?
OATM contains twelve consolidated tablespaces for all products, including temporary tablespace, system tablespace, and undo segments:
Tablespace Type | Tablespace Name | Contents |
Transaction Tables | APPS_TS_TX_DATA | Tables that contain transaction data. |
Transaction Indexes | APPS_TS_TX_IDX | Indexes on transaction tables. |
Reference | APPS_TS_SEED | Reference and set-up data and indexes. |
Interface | APPS_TS_INTERFACE | Interface and temporary data and indexes. |
Summary | APPS_TS_SUMMARY | Summary management objects, such as materialized views, and other objects that record summary information. |
Nologging | APPS_TS_NOLOGGING | Materialized views not used for summary management and temporary objects. |
Advanced Queuing/AQ | APPS_TS_QUEUES | Advanced Queuing and dependent tables and indexes. |
Media | APPS_TS_MEDIA | Multimedia objects, such as text, video, sound, graphics, and spatial data. |
Archive | APPS_TS_ARCHIVE | Archive-purge-related objects |
Undo | UNDO | Automatic Undo Management (AUM) tablespace. UNDO segments are identical to ROLLBACK segments when AUM is enabled. |
Temp | TEMP | Temporary tablespace for global temporary table, sorts, and hash joins. |
System | SYSTEM | System tablespace. |
How are objects classified in OATM?
OATM relies on specific explicit and implicit classification rules. These rules are determined based on storage considerations for the object type in question. Tablespace classifications are not relevant for code objects and other objects without storage implications, such as views, policies, contexts, and triggers. Objects like tables are explicitly classified based on their I/O characteristics.
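As an illustration of this classification, you can check where a transaction table and its indexes landed after migration; MTL_SYSTEM_ITEMS_B is just an example object, and on a migrated instance the table should show up in APPS_TS_TX_DATA with its indexes in APPS_TS_TX_IDX:
select segment_name, segment_type, tablespace_name
from dba_segments
where owner = 'INV'
and segment_name like 'MTL_SYSTEM_ITEMS_B%';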
What object types are implicitly classified in OATM?
OATM implicitly classifies the following object types:
Object type | Tablespace Type |
Index Organized Tables | Transaction_tables |
Global Temporary Tables | The database automatically creates these objects in the TEMP tablespace. |
Advanced Queuing tables | Advanced Queuing/AQ |
Materialized View | Summary |
Materialized View Logs | Transaction_tables |
Domain Indexes | Transaction_indexes |
Indexes | With the exception of indexes on tables classified as 'Transaction_Tables', all indexes share the same tablespace type as the base object (table/MV). |