I came across a very frightening bug today: Bug 7628795. It says that even though the migration report states that tables with long columns have migrated successfully, you should go through $APPL_TOP/admin/$TWO_TASK/log/fndm<timestamp>.log to check whether errors like this are reported:
Io exception: Connection refused(DESCRIPTION=(TMP=)(VSNNUM=168822016)(ERR=12505)(ERROR_STACK=(ERROR=(CODE=12505)(EMFI=4))))
TAR Number(If customer issued)
------------------------------
7238244.993
.
Problem Statement
-----------------
As a result of a similar issue in Bug 5646392 or Note 402717.1,
data in long columns was not moved over to the new table in the new TS.
.
apps.fnd_ts_mig_cmds does not show any failure, but
fndmlong<>.log shows an error like:
Io exception: Connection refused(DESCRIPTION=(TMP=)(VSNNUM=168822016)(ERR=12505)(ERROR_STACK=(ERROR=(CODE=12505)(EMFI=4))))
Exception occurred in obtaining the connection.
.
Staging tables holding the long data <><> do not exist.
.
At this point all the data is lost. We are looking for a way to salvage the data
from a backup taken before OATM.
.
Keep in mind there is new data in the system, so a restore is the last resort
unless there is no other option.
.
Environment and Access Information
----------------------------------
.
Steps to reproduce the issue
----------------------------
- 11.5.9 ATG PF_H.RUP4 OATM Migration. DB Version 10.2.0.2 64 bit
- Migration completed fine, status shows success for all the tables
- When the system was released to users, flexfield-related errors were encountered
- Investigation revealed that some tables are missing data in long columns
- Further investigation revealed that all the tables having long columns have
no data in these columns.
On the bug, they have given this possible cause:
The error the customer is getting shows that there is an error connecting to the database using JDBC. In one similar issue, development stated that the long tables are migrated using a Java program over a JDBC connection.
It is this JDBC connection that is failing and producing the error message.
This error can happen if you use the SID instead of the service name when you connect to the database. Please retry with the correct service name.
This is probably what needs to be attempted in the customer's case, but we would like to see the customer's tnsnames.ora file first.
Please upload the tnsnames.ora file.
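If you want to rule out the SID-versus-service-name angle yourself before uploading anything, a quick check from SQL*Plus (my own sketch, not something prescribed in the bug) is to compare the instance name with the registered service names, and then confirm that tnsnames.ora and any JDBC thin URL use the service name form (jdbc:oracle:thin:@//host:port/service_name) rather than the SID form (host:port:SID):
-- Sketch: compare the instance SID with the service names the database registers
SELECT instance_name FROM v$instance;
SELECT value AS service_names FROM v$parameter WHERE name = 'service_names';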
I investigated this error in the current test iteration of OATM and, sure enough, we had one table which was missing long data:
fndmlong20090116041440.log:Exception while moving data of column USER_EXIT from table APPLSYS.FND_FLEX_VALIDATION_EVENTS to table APPLSYS.FND_FLEX_VALIDATION_EVENTS5101
fndmlong20090116041440.log:Thread 2 : Exception : java.sql.SQLException: Io exception: invalid arguments in call
fndmlong20090116041440.log:Io exception: invalid arguments in call
fndmlong20090116041440.log:Thread 2 : Exception : Bigger type length than Maximum
From a previous run we got this:
Thread 4 : Exception : java.sql.SQLException: Io exception: Bigger type length than Maximum
Exception while moving data of column SQL_STATEMENT_TEXT from table ALR.ALR_ALERTS to table ALR.ALR_ALERTS5357
Thread 2 : Exception : java.lang.ArrayIndexOutOfBoundsException: 3
Io exception: Bigger type length than Maximum
As per the bug, there are 168 tables with long columns.
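If you would rather build the list yourself than rely on the bug, a dictionary query along these lines (my own sketch; run as a DBA user and adjust the owner filter to taste) shows every real table that still has a LONG or LONG RAW column:
-- Sketch: list application tables that have LONG or LONG RAW columns
SELECT c.owner, c.table_name, c.column_name, c.data_type
  FROM dba_tab_columns c, dba_tables t
 WHERE c.owner = t.owner
   AND c.table_name = t.table_name
   AND c.data_type IN ('LONG', 'LONG RAW')
   AND c.owner NOT IN ('SYS', 'SYSTEM')
 ORDER BY c.owner, c.table_name;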
Another bug, 5934043, says:
OATM uses a Java program to migrate tables with long columns. It seems that the "Protocol Violation" error you got is thrown from the JDBC driver. The exception was thrown when the JDBC driver read something from the RDBMS that it didn't expect.
A possible reason could be that the protocol engine in the thin driver and the protocol engine in the RDBMS are out of sync. Please verify if the upgrade finished successfully. It looks like this is an environment issue, not an OATM-specific issue.
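The "verify if the upgrade finished successfully" part can at least be sanity-checked from SQL*Plus. This is my own quick check, not something the bug spells out:
-- Sketch: confirm registered components are VALID and look for invalid objects
SELECT comp_name, version, status FROM dba_registry;
SELECT owner, COUNT(*) AS invalid_objects
  FROM dba_objects
 WHERE status = 'INVALID'
 GROUP BY owner
 ORDER BY owner;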
As a workaround, we planned this to protect the data:
- Export the 170 tables which have LONG or LONG RAW columns, before starting OATM.
- Take a row count of all tables where the long column is not null.
- Do the OATM migration.
- Verify that the data migrated correctly by checking the row counts (a sketch of such a check follows after this list).
- If any data is missing in any table, truncate it and import the table back.
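Because LONG columns cannot be referenced in a WHERE clause, the simplest comparison we could script is a plain before/after row count for every table that owns a LONG or LONG RAW column. This is only a rough sketch of that check (table-level counts; it does not by itself prove the LONG values survived, which is why the export comes first):
-- Sketch: record a row count for every table that owns a LONG / LONG RAW column
SET SERVEROUTPUT ON
DECLARE
  l_count NUMBER;
BEGIN
  FOR t IN (SELECT DISTINCT c.owner, c.table_name
              FROM dba_tab_columns c, dba_tables tb
             WHERE c.owner = tb.owner
               AND c.table_name = tb.table_name
               AND c.data_type IN ('LONG', 'LONG RAW')
               AND c.owner NOT IN ('SYS', 'SYSTEM'))
  LOOP
    EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM "' || t.owner || '"."' || t.table_name || '"'
      INTO l_count;
    DBMS_OUTPUT.PUT_LINE(t.owner || '.' || t.table_name || ' = ' || l_count);
  END LOOP;
END;
/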
Oracle still hasn't given a satisfactory answer on the cause of this error. But thankfully, during our Production run, we didn't face this issue.
Friday, January 16, 2009
Comments:
I would be interested in an update regarding the solution to this situation. I have had the same problem and no resolution from Oracle.
Oracle suggested two things:
1. Correct your CLASSPATH. If you are using JDK 1.6, include charsets.jar, tools.jar, dt.jar and rt.jar.
2. Bounce the database just before you start the OATM migration.
I have updated the post with more information.
- Vikram
Hi Vikram,
How should I find out the tables with LONG, RAW, ... columns in my database?
What are the 170 tables that need to be exported?
Do you have any SQL to identify tables with the LONG type?
Please let me know.
Thanks
rachana
reachme217@gmail.com
Hello Vikram,
I faced the same issue and a few long tables were not migrated.
I stopped the migration queue:
exec dbms_aqadm.stop_queue('SYSTEM.TBLMIG_MESSAGEQUE');
Then restart the database and execute the migration command for the schema, e.g. BEN, AZ ...
Hope this will help ....
Pouler