Blog dedicated to Oracle Applications (E-Business Suite) Technology; covers Apps Architecture, Administration and third party bolt-ons to Apps

Tuesday, December 30, 2008

End of Life for Sun fire V490, V890, E2900, E4900, E6900, E20k, E25k

Sun announced the End-of-life (EOL) of the Sun Fire V490, Sun Fire V890, Sun Fire E2900, Sun Fire E4900, Sun Fire E6900, Sun Fire E20K, Sun Fire E25K servers with Last Order Date of January 8, 2009 and Last Ship Date of April 8, 2009. In order to smooth your transition into the next generation of products, Sun is offering several upgrade promotions to help you migrate to the newest Enterprise servers, the Sun SPARC Enterprise M-series.

More details here: http://www.sun.com/servers/sunfire_eol.jsp

Monday, December 29, 2008

You have encountered an unexpected error. Please contact the System Administrator for assistance.

Vickie pinged me today and told me that after cloning an instance, she was getting this error whenever she tried to login through self service:

"You have encountered an unexpected error. Please contact the System Administrator for assistance."

An examination of Apache and Jserv logs did not show any errors.  However $IAS_ORACLE_HOME/Apache/Jserv/logs/jvm/OACoreGroup.0.stderr had these errors:

[Dec 29, 2008 1:48:40 PM EST]:1230576520693:Thread[Thread-29,10,main]:-1:-1:dev.justanexample.com:192.168.10.14:8239:16070UNEXPECTED:[fnd.common.logging.FileHandler.publish]:Please check File Permission/Disk Space for: aferror.log, defaulting Logging to STDERR

[Dec 29, 2008 1:51:24 PM EST]:1230576684451:Thread[Thread-101,10,main]:-1:-1:dev.justanexample.com:192.168.10.14:8239:16070UNEXPECTED:[fnd.framework.OAException]:Application: FND, Message Name: FND_GENERIC_MESSAGE. Tokens: MESSAGE = oracle.apps.fnd.cache.CacheException;

[Dec 29, 2008 1:51:57 PM EST]:1230576717380:Thread[Thread-115,10,main]:-1:-1:dev.justanexample.com:192.168.10.14:8239:16070UNEXPECTED:[fnd.framework.OAException]:Application: FND, Message Name: FND_GENERIC_MESSAGE. Tokens: MESSAGE = oracle.apps.fnd.cache.CacheException;

[Dec 29, 2008 1:55:39 PM EST]:1230576939360:Thread[Thread-192,10,main]:-1:-1:dev.justanexample.com:192.168.10.14:8239:16070UNEXPECTED:[fnd.framework.OAException]:Application: FND, Message Name: FND_GENERIC_MESSAGE. Tokens: MESSAGE = oracle.apps.fnd.cache.CacheException;

[Dec 29, 2008 2:00:40 PM EST]:1230577240310:Thread[Thread-295,10,main]:-1:-1:dev.justanexample.com:192.168.10.14:8239:16070UNEXPECTED:[fnd.framework.OAException]:Application: FND, Message Name: FND_GENERIC_MESSAGE. Tokens: MESSAGE = oracle.apps.fnd.cache.CacheException;

[Dec 29, 2008 2:27:54 PM EST]:1230578874623:Thread[Thread-145,10,main]:-1:-1:dev.justanexample.com:192.168.10.14:8239:16070UNEXPECTED:[fnd.common.logging.DebugEventManager.handlerException]:java.lang.Exception: /aferror.log (Permission denied)

        at oracle.apps.fnd.common.logging.FileHandler.&lt;init&gt;(FileHandler.java:103)
        at oracle.apps.fnd.common.logging.FileHandler.&lt;init&gt;(FileHandler.java:80)
        at oracle.apps.fnd.common.logging.DebugEventManager.registerHandlers(DebugEventManager.java:1241)
        at oracle.apps.fnd.common.logging.DebugEventManager.init(DebugEventManager.java:904)
        at oracle.apps.fnd.common.logging.DebugEventManager.&lt;init&gt;(DebugEventManager.java:194)
        at oracle.apps.fnd.common.AppsLog.&lt;init&gt;(AppsLog.java:519)
        at oracle.apps.fnd.common.AppsLog.&lt;init&gt;(AppsLog.java:489)
        at oracle.apps.fnd.common.AppsLog.&lt;init&gt;(AppsLog.java:469)
        at oracle.apps.fnd.common.AppsContext.makeLog(AppsContext.java:1153)
        at oracle.apps.fnd.common.Context.setLog(Context.java:1400)
        at oracle.apps.fnd.common.Context.setLog(Context.java:1381)
        at oracle.apps.fnd.common.Context.getLog(Context.java:1353)
        at oracle.apps.fnd.common.Context.isLoggingEnabled(Context.java:2119)
        at oracle.apps.fnd.common.Context.makeConnectionManager(Context.java:655)
        at oracle.apps.fnd.common.AppsContext.makeConnectionManager(AppsContext.java:1186)
        at oracle.apps.fnd.common.Context.setConnectionManager(Context.java:615)
        at oracle.apps.fnd.common.Context.setConnectionManager(Context.java:599)
        at oracle.apps.fnd.common.AppsContext.initializeContext(AppsContext.java:880)
        at oracle.apps.fnd.common.AppsContext.&lt;init&gt;(AppsContext.java:742)
        at oracle.apps.mwa.container.ApplicationsObjectLibrary.AOLInit(ApplicationsObjectLibrary.java:174)
        at oracle.apps.mwa.container.ApplicationsObjectLibrary.&lt;init&gt;(ApplicationsObjectLibrary.java:78)
        at oracle.apps.mwa.container.MWALib.setObjectLibrary(MWALib.java:339)
        at oracle.apps.mwa.wap.engine.WapServlet.init(WapServlet.java:158)
        at org.apache.jserv.JServServletManager.load_init(JServServletManager.java:755)
        at org.apache.jserv.JServServletManager.loadServlet(JServServletManager.java:659)
        at org.apache.jserv.JServServletManager.loadStartupServlets(JServServletManager.java:789)
        at org.apache.jserv.JServServletManager.init(JServServletManager.java:447)
        at org.apache.jserv.JServ.start(JServ.java:625)
        at org.apache.jserv.JServ.main(JServ.java:234)
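The FileHandler message points at exactly two things to verify: write permission and free space wherever aferror.log is created (typically the working directory of the OACoreGroup JVM). A minimal sketch against a throwaway directory, since the real path varies per instance:

```shell
LOGDIR=/tmp/demo_oacore            # stand-in for the JVM's working directory
mkdir -p "$LOGDIR"
# 1) File permission: can the JVM's OS user write here?
if [ -w "$LOGDIR" ]; then echo "log dir writable"; else echo "log dir NOT writable"; fi
# 2) Disk space on the filesystem holding the log:
df -k "$LOGDIR" | tail -1
```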

A search on Metalink turned up Note 418130.1, which gives this cause and solution:

Cause: Un-synchronized data in APPLSYS.WF_LOCAL_USER_ROLES:HZ_PARTY

Solution:

For non-OATM instances:

SQL>
begin apps.wf_local_synch.BULKSYNCHRONIZATION(P_ORIG_SYSTEM=>'ALL',
P_PARALLEL_PROCESSES=>2,
P_LOGGING=>'LOGGING', P_RAISEERRORS=>TRUE,
P_TEMPTABLESPACE=>'APPLSYSX');
END;
/

For OATM instances:

SQL>
begin apps.wf_local_synch.BULKSYNCHRONIZATION(P_ORIG_SYSTEM=>'ALL',
P_PARALLEL_PROCESSES=>2,
P_LOGGING=>'LOGGING', P_RAISEERRORS=>TRUE,
P_TEMPTABLESPACE=>'APPS_TS_TX_DATA');
END;
/

Since we were on an OATM instance, we ran this PL/SQL block after logging in as apps:

SQL> begin apps.wf_local_synch.BULKSYNCHRONIZATION(P_ORIG_SYSTEM=>'ALL',
2 P_PARALLEL_PROCESSES=>2,
3 P_LOGGING=>'LOGGING', P_RAISEERRORS=>TRUE,
4 P_TEMPTABLESPACE=>'APPS_TS_TX_DATA'); END;
5 /

PL/SQL procedure successfully completed.

SQL>

There was no need to bounce Apache. After running the above PL/SQL block, we were able to log in without issues.

Saturday, December 27, 2008

Solaris refuses to boot

Recently, one of our Solaris servers, which had been taken down for maintenance by the Unix team, refused to boot.

While searching for a solution to this problem, I came across Nick Grossman's article "How to get a Solaris Box Online", which covers something anyone working with Solaris should know. I am pasting it below:

How to get a Solaris Box Online

Notes:
  1. I've done this many times on Solaris 2.6. Other versions may or may not be different and may or may not work.
  2. The following assumes a static IP address, a full-time Ethernet network connection, and the desire to use DNS for name resolution.
  3. This information is provided as a courtesy. It may or may not be accurate and it may or may not work for you. Use at your own risk.


1. Log in as root

2. cd /etc

3. Edit /etc/hostname.hme0

This file should contain the hostname of your machine, without the domain name. So, if the FQDN (Fully Qualified Domain Name = hostname + domain name) is solaris-box-1.yourdomain.com, the file should look like this:

# cat hostname.hme0
solaris-box-1

This assumes that your Ethernet adapter is hme0, the default device name for the first Ethernet adapter; the filename format is hostname.<interface>. On Solaris 10 it may be iprb0, in which case you edit /etc/hostname.iprb0 instead. (Thanks to Robin Joinson for supplying the Solaris 10 info!)

4. Set up your /etc/hosts file.

It should look something like this:

127.0.0.1       localhost
192.168.0.150 solaris-box-1 solaris-box-1.yourdomain.com loghost

Make sure the second line has the correct IP address, hostname, and FQDN. The first line is the loopback entry. Along with step 5, this is how Solaris learns its name and IP address.

5. Create or edit /etc/nodename

Contains the hostname of the computer (same as hostname.hme0).

# cat nodename
solaris-box-1


6. Create or edit /etc/defaultrouter

Surprisingly enough, this file contains the IP address of your default router (or gateway). When you reboot and the network initialization scripts run, it will add a default route to the IP address in this file.

# cat defaultrouter
192.168.0.1


Now it knows about its own network parameters, and it's time to get it to use DNS. The next steps will tell it to use other nameservers. If you want to set up a nameserver on the local machine, see the man page for named.


7. Edit /etc/nsswitch.conf

If you haven't edited this file before, it should look suspiciously like this:

passwd:     files
group: files
hosts: files
networks: files
protocols: files
rpc: files
ethers: files
netmasks: files
bootparams: files
publickey: files
# At present there isn't a 'files' backend for netgroup; the system will
# figure it out pretty quickly, and won't use netgroups at all.
netgroup: files
automount: files
aliases: files
services: files
sendmailvars: files

Add the word 'dns' after 'files' on the 'hosts' line:

passwd:     files
group: files
hosts: files dns
networks: files
protocols: files
rpc: files
ethers: files
netmasks: files
bootparams: files
publickey: files
# At present there isn't a 'files' backend for netgroup; the system will
# figure it out pretty quickly, and won't use netgroups at all.
netgroup: files
automount: files
aliases: files
services: files
sendmailvars: files

Now it knows to query a nameserver when looking up a name. Next we have to tell it what nameservers to use:


8. Create or edit /etc/resolv.conf

Here you tell it three things:

  1. What domain we're in
  2. Specify any additional search domains
  3. What the nameservers are (it will use them in the order you put them in the file)

When you're done it should look something like this:

# cat resolv.conf
domain yourdomain.com
search yourdomain.com
search client1.com
nameserver 192.168.0.9
nameserver 192.168.0.11

For more information, see the man page for resolv.conf.


9. Reboot or re-initialize networking

# sync;sync;sync
# reboot
OR
# /etc/init.d/network
# ifconfig -a
(Thanks to Michael Hicks for the non-rebooting method!)

You should now be online. If not, check everything above; if it's all correct, try

  1. ping your gateway
  2. ping a machine on your local network
  3. ping your nameservers
  4. ping a well-known host, like www.yahoo.com
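Steps 3 through 8 above amount to writing a handful of small files. A sketch that writes the article's example values into a scratch directory (on a real box these live in /etc; note that resolv.conf takes all search domains on a single search line):

```shell
ROOT=/tmp/demo_etc                 # scratch stand-in for /etc
mkdir -p "$ROOT"
echo "solaris-box-1" > "$ROOT/hostname.hme0"   # step 3: hostname for the interface
printf '127.0.0.1\tlocalhost\n192.168.0.150\tsolaris-box-1 solaris-box-1.yourdomain.com loghost\n' > "$ROOT/hosts"   # step 4
echo "solaris-box-1" > "$ROOT/nodename"        # step 5: same name as hostname.hme0
echo "192.168.0.1" > "$ROOT/defaultrouter"     # step 6: default gateway
printf 'domain yourdomain.com\nsearch yourdomain.com client1.com\nnameserver 192.168.0.9\nnameserver 192.168.0.11\n' > "$ROOT/resolv.conf"   # step 8
# step 7 is editing the hosts: line of nsswitch.conf to read "files dns"
cat "$ROOT/defaultrouter"
```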

10. Go to www.sunfreeware.com and download some stuff.

They maintain a fairly comprehensive archive of free software precompiled for various versions of Solaris. It'll make your life a lot easier, especially if your home environment is Linux.

Packages I suggest are:

Program   Function
joe       editor
pico      editor (required to run pine w/o any additional configuration)
pine      email client
gzip      file compression
tar       GNU tar, more flexible and includes gzip file compression
gcc       GNU C compiler
gdb       GNU debugger
top       memory monitoring tool
perl      scripting language
bash      shell
apache    web server
less      text paginator (use instead of more)

That's about it. If you have any comments or corrections, please email me at nick@pantsblazing.com.



Author: Nick Grossman
Last Updated: 5/30/2000

Tuesday, December 23, 2008

TNS 12546, 12560, 00516, Solaris Error 13

Srinivas Reddy mailed me the following error:

$ lsnrctl start erp11i

LSNRCTL for Solaris: Version 10.2.0.3.0 - Production on 30-DEC-2008 17:16:23

Copyright (c) 1991, 2006, Oracle.  All rights reserved.

Starting /erp11i/oracle/10.2.0/bin/tnslsnr: please wait...

TNSLSNR for Solaris: Version 10.2.0.3.0 - Production System parameter file is /erp11i/oracle/10.2.0/network/admin/erp11i_erp11i/listener.ora
Log messages written to /erp11i/oracle/10.2.0/network/admin/erp11i.log
Error listening on: (ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROCerp11i))
TNS-12546: TNS:permission denied
 TNS-12560: TNS:protocol adapter error
  TNS-00516: Permission denied
   Solaris Error: 13: Permission denied

Listener failed to start. See the error message(s) above...

I ran "truss lsnrctl start erp11i" and found this in the output:

uname(0xFFFFFFFF7F2DACC8)                       = 1
access("/var/tmp/.oracle", F_OK)                = 0
chmod("/var/tmp/.oracle", 01777)                Err#1 EPERM [ALL]
so_socket(PF_UNIX, SOCK_STREAM, 0, "", SOV_DEFAULT) = 4
access("/var/tmp/.oracle/sEXTPROCerp11i", F_OK) = 0
connect(4, 0xFFFFFFFF7FFF7AE0, 110, SOV_DEFAULT) Err#146 ECONNREFUSED
access("/var/tmp/.oracle/sEXTPROCerp11i", F_OK) = 0
pollsys(0x00000000, 0, 0xFFFFFFFF7FFF7910, 0x00000000) = 0
close(4)                                        = 0
so_socket(PF_UNIX, SOCK_STREAM, 0, "", SOV_DEFAULT) = 4
connect(4, 0xFFFFFFFF7FFF7AE0, 110, SOV_DEFAULT) Err#146 ECONNREFUSED
access("/var/tmp/.oracle/sEXTPROCerp11i", F_OK) = 0
pollsys(0x00000000, 0, 0xFFFFFFFF7FFF7910, 0x00000000) = 0
close(4)                                        = 0
so_socket(PF_UNIX, SOCK_STREAM, 0, "", SOV_DEFAULT) = 4
connect(4, 0xFFFFFFFF7FFF7AE0, 110, SOV_DEFAULT) Err#146 ECONNREFUSED
access("/var/tmp/.oracle/sEXTPROCerp11i", F_OK) = 0

I checked the ownership of /var/tmp/.oracle/sEXTPROCerp11i :

$ ls -ld /var/tmp/.oracle/sEXTPROCerp11i
srwxrwxrwx   1 oraprod    dbaprod          0 Oct 25 20:05 /var/tmp/.oracle/sEXTPROCerp11i

The correct owner for this instance was oraerp:dbaerp.

$ file /var/tmp/.oracle/sEXTPROCerp11i
/var/tmp/.oracle/sEXTPROCerp11i:      socket

Because this socket is owned by oraprod:dbaprod, the socket can't be accessed by oraerp. 
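Before removing anything, the mismatch can be confirmed programmatically. A sketch using a throwaway file (on the real system, point SOCK at /var/tmp/.oracle/sEXTPROCerp11i; stat -c is GNU syntax, so on Solaris use ls -l instead):

```shell
SOCK=/tmp/demo_socket_check
touch "$SOCK"                      # stand-in for the listener's IPC socket file
OWNER=$(stat -c '%U' "$SOCK")      # owner of the socket file
ME=$(id -un)                       # user trying to start the listener
if [ "$OWNER" = "$ME" ]; then
    echo "owner matches: $OWNER"
else
    echo "stale socket owned by $OWNER; remove it before starting the listener"
fi
```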

The simple solution is to log in as oraerp, remove the stale socket, and then start the listener:

sudo -iu oraerp
rm /var/tmp/.oracle/sEXTPROCerp11i

$ lsnrctl start erp11i

LSNRCTL for Solaris: Version 10.2.0.3.0 - Production on 30-DEC-2008 18:10:00

Copyright (c) 1991, 2006, Oracle.  All rights reserved.

Starting /erp11i/oracle/10.2.0/bin/tnslsnr: please wait...

TNSLSNR for Solaris: Version 10.2.0.3.0 - Production System parameter file is /erp11i/oracle/10.2.0/network/admin/erp11i_erp11i/listener.ora
Log messages written to /erp11i/oracle/10.2.0/network/admin/erp11i.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROCerp11i)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=tsgsd1003.energy.ge.com)(PORT=1589)))

Connecting to (ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROCerp11i))
STATUS of the LISTENER
------------------------
Alias                     erp11i
Version                   TNSLSNR for Solaris: Version 10.2.0.3.0 - Production
Start Date                30-DEC-2008 18:10:01
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /erp11i/oracle/10.2.0/network/admin/erp11i_erp11i/listener.ora
Listener Log File         /erp11i/oracle/10.2.0/network/admin/erp11i.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROCerp11i)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=erp11i.justanexample.com)(PORT=1589)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "erp11i" has 1 instance(s).
  Instance "erp11i", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

This resolved the issue.

Monday, December 22, 2008

SP2-0544: Command "drop" disabled in Product User Profile

Anand Reddy pinged me today saying that he was getting the following error in a cloned instance while dropping database links:

SP2-0544: Command "drop" disabled in Product User Profile

This happens because the Production instance is configured to disallow the DROP command, to prevent its use by any developer code.

This is done through the PRODUCT_USER_PROFILE table. Metalink Note 197611.1 has an example:

The PRODUCT_USER_PROFILE table resides in the SYSTEM account.
It provides product-level security that supplements the user-level security
provided by the SQL commands GRANT and REVOKE, and can be used with one's own
applications as well as with other Oracle products.
The table is created by running the command file pupbld.sql under the schema
SYSTEM.
Note:
-----
This will prevent you from dropping any schema object using SQL*Plus only.
It will not work via Svrmgrl or any other tool.
Note: The userid, attribute and char_value must be in uppercase.

For Example:

SQL> connect system/manager
SQL> insert into product_user_profile (product,userid,attribute,char_value)
values('SQL*Plus','SCOTT','DROP','DISABLED');

PRODUCT   USERID  ATTRIBUTE  CHAR_VALUE
--------- ------- ---------- ----------------
SQL*Plus  SCOTT   DROP       DISABLED

If, for example, user SCOTT then attempts to drop an object, he would receive this error:

SQL> drop table xyz;

SP2-0544: invalid command: drop.

To re-enable commands, delete the row containing the restriction.

*** This will prevent user SCOTT from dropping his own objects.
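To re-enable DROP for SCOTT later, the restricting row is simply deleted; a sketch mirroring the INSERT example above (run as SYSTEM):

```sql
-- Remove the restricting row to re-enable DROP for SCOTT in SQL*Plus:
DELETE FROM product_user_profile
 WHERE product = 'SQL*Plus'
   AND userid = 'SCOTT'
   AND attribute = 'DROP';
COMMIT;
```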

Underscore in hostname or domain name not allowed

Recently, several Apps experts spent two days trying to decipher why this error appeared whenever anyone tried to log in to a new instance on new servers with new hostnames:

You have insufficient privileges for the current operation. Please contact your  System Administrator.

Strangely, this error appeared only when IE6 was the browser.  The error did not appear when Firefox 3, Mozilla, Google Chrome or Opera 9 were used to access the instance.

In the end, it was discovered that IETF standards do not allow the underscore character in a hostname or domain name.  If a non-standard hostname or domain name containing an underscore is used, cookies are not saved by IE.  If IE doesn't save cookies, this error results.  Metalink Note 305316.1 describes this problem and some more solutions:

Cause

Cookie domain is not being set correctly, or is not being recognized correctly

Solution

Follow these steps:
1a) Check you are not using an underscore character "_" in your hostname or domain name.
If so, this must be changed, as it is not an IETF-compliant and therefore not a supported character
and is known to cause problems. Refer RFC 952 (http://www.ietf.org/rfc/rfc0952.txt) for more
information.
This is also documented in Microsoft article Q275033 "Cookies Are Not Saved If the Host Name
Is Invalid", this feature was introduced from IE 5.01 SP1
http://support.microsoft.com/default.aspx?scid=kb;en-us;275033

1b) If your domain name has two characters, then you may be running into the Microsoft IE 6 bug
"Internet Explorer Does Not Set a Cookie for Two-Letter Domains" described in Microsoft
article Q310676 (http://support.microsoft.com/default.aspx?scid=kb;en-us;310676)
For example you use http://myHost.ab.hk/ or http://myHost.mydomain.hk/
The fix for this issue is described in the Microsoft article

If neither of the above situations apply to your site, then proceed with these steps:-

2) Run ADADMIN and compile of all the Menus, using the FORCE option when prompted.

3) Recompile Security
a. Logon as System Administrator
b. Navigate to Request, Run
c. Select the following request:
d. Compile Security
e. Enter "Yes" for every parameter

4) Retest for the problem

5) Run the SQL below
select SESSION_COOKIE_DOMAIN from ICX_PARAMETERS;
If this returns a NULL value then set this to be your domain name, with a leading full stop.
For example
update ICX_PARAMETERS set SESSION_COOKIE_DOMAIN = '.oracle.com';
You need to commit this update

6) Ensure the Applications Server URL is added to "trusted sites" zone in Microsoft Internet Explorer (Refer to Note 285218.1 for specific instructions if required)

7) Retest for the problem

8) Check the value of the profile option 'Self Service Personal Home Page Mode'
Set this to "Framework Only" if not set to this already.

9) Retest for the problem

10) Set the SESSION_COOKIE_DOMAIN to be the hostname.domain of the middle tier server
For example
update ICX_PARAMETERS set SESSION_COOKIE_DOMAIN = 'www.oracle.com';
You need to commit this update

11) Retest for the problem

12) Set the SESSION_COOKIE_DOMAIN to a null value
For example
update ICX_PARAMETERS set SESSION_COOKIE_DOMAIN = null;
You need to commit this update

13) Retest for the problem

If the problem still occurs, you have not run into the problems described in this note. In this case Oracle Support would likely want to know the following information
a) Is this fresh install or upgrade ?
b) If upgraded, what was the previous version ?
c) What menus work/do not work ?
d) In 11.5.10 the function security changed, so it is possible that there was some failure loading seeded data if the system has just been upgraded. Because of this you need to check the upgrade log files for errors, and confirm whether any errors were noticed when running the upgrade, for example any skipped jobs. Were all post-upgrade steps followed?

Friday, December 19, 2008

adadmin_adogbf2(), ERROR creating new appsborg2.zip

Bimal called me today and reported this error while regenerating jar files:

AD Administration error:
adadmin_adogbf2(), ERROR creating new appsborg2.zip.

After going through the log file I found that adjava command was failing with this error:
"invalid option -Xmx512m"

This instance was still using JRE 1.1.8.  JRE 1.1.8 doesn't understand -Xmx; it understands -mx. I advised Bimal to run

grep s_adjreopts $CONTEXT_FILE

This returned Xmx512

I asked him to change it to mx512 and run AutoConfig.  After changing the value of this variable, regenerating the jar files worked fine without issues.
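A sketch of the fix on a throwaway copy of the setting (the XML element shown is illustrative, not the real context-file layout; on a real instance you would edit $CONTEXT_FILE itself and then run AutoConfig; sed -i is GNU syntax):

```shell
CONTEXT_FILE=/tmp/demo_context.xml
# Hypothetical context-file line holding the JVM options:
echo '<adjreopts oa_var="s_adjreopts">-Xmx512m</adjreopts>' > "$CONTEXT_FILE"
# JRE 1.1.8 understands -mx but not -Xmx, so swap the flag:
sed -i 's/-Xmx/-mx/' "$CONTEXT_FILE"
grep s_adjreopts "$CONTEXT_FILE"
```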


Thursday, December 18, 2008

Invalid format on HDTV on connecting Laptop HDMI port to HDTV HDMI port

I was at Shantanu's place recently.  He had bought a new HP laptop with an HDMI port.  Whenever he tried to connect the laptop's HDMI port to his Sony Bravia HDTV's HDMI port, the TV gave the error "Invalid Format".  We googled the issue, sure that it was a known one, but didn't find any answers.  On a hunch, we checked the display resolution setting in the Display properties of his laptop.  It was set to 1280x1024.  Since his TV is 720p (1280x720), we reduced the resolution to 1280x720.  The "Invalid Format" error promptly went away and we were able to see the laptop display on the TV.  Wikipedia has a useful list of TV resolutions:

Televisions are of the following resolutions:
SDTV: 480i (NTSC, 720×480 split into two 240-line fields)
SDTV: 576i (PAL, 720×576 split into two 288-line fields)
EDTV: 480p (NTSC, 720×480)
HDTV: 720p (1280×720)
HDTV: 1080i (1280×1080, 1440×1080, or 1920×1080 split into two 540-line fields)
HDTV: 1080p (1920×1080 progressive scan)

Tuesday, December 16, 2008

Use a single tnsnames.ora on your Windows PC across 8i, 9i and 10g

One of the problems I have encountered over the years is multiple Oracle client versions on Windows machines, each with its own tnsnames.ora file.  For example, a PC may have Oracle 8i, Oracle 9i and Oracle 10g clients installed, and each home will use its own tnsnames.ora file.  This causes a lot of confusion, and sometimes we don't even know which tnsnames.ora file is being used.

An elegant solution to this problem is the environment variable TNS_ADMIN.  In Windows you can set environment variables by following these steps:

Right click on My Computer
Click on Properties
Click on Advanced tab
Click on Environment variables button.

You can set a new System environment variable called TNS_ADMIN and give the value as a common directory which will contain tnsnames.ora file.  For example TNS_ADMIN=c:\tns

Once you set this variable, irrespective of the Oracle client being used, c:\tns\tnsnames.ora will be the single tnsnames.ora file used to resolve oracle names.
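As a sketch, the same setting can be made from the command line instead of the GUI (the Windows command is shown as a comment; the runnable lines are the Unix equivalent, and both paths are illustrative):

```shell
# On Windows, from cmd.exe ("setx" persists the value for new sessions):
#   setx TNS_ADMIN "c:\tns"
# Unix/Linux equivalent for the current shell session:
export TNS_ADMIN=/tmp/tns
echo "TNS_ADMIN=$TNS_ADMIN"
```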

Monday, December 15, 2008

BIN$ objects

Object names starting with BIN$ are related to the recycle bin. Whenever an object (table, index, etc.) is dropped, an entry with a name starting with BIN$ is created in the recycle bin, if the recycle bin is enabled for the database. Such objects show up in

select * from recyclebin;
or
show recyclebin

If you do not want dropped tables to go to the recycle bin and instead want them deleted permanently, you should use

DROP TABLE myTable PURGE;

To purge a user's recycle bin, log in as that user and use the command:

purge recyclebin;

To purge the recycle bins of all users, log in as SYS or SYSTEM and issue the command:

purge dba_recyclebin;
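The reason dropped objects are kept at all is that they can be restored. A short sketch of the companion command (10g and later):

```sql
DROP TABLE myTable;                      -- goes to the recycle bin under a BIN$ name
FLASHBACK TABLE myTable TO BEFORE DROP;  -- brings the table (and its data) back
```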

Saturday, December 13, 2008

\xff\xf4\xff\xfd\x06 in extranet server's apache log

While troubleshooting a problem in Apache, I tried sending requests to Apache manually from the command line:

telnet localhost 80
GET /OA_HTML/jsp/fnd/aoljtest.jsp
Ctrl+C

The GET didn't show up, but the Ctrl+C did:

127.0.0.1 - - [13/Dec/2008:15:38:49 -0500] "\xff\xf4\xff\xfd\x06" 302 - 0

These bytes are telnet control sequences, not Apache codes: pressing Ctrl+C in a telnet session sends the telnet IAC (Interpret As Command) sequence \xff\xf4\xff\xfd\x06 (IAC Interrupt Process, IAC DO Timing Mark), and Apache logs the raw bytes. This third-party Ruby snippet shows the same sequences being matched (\xff\xed is IAC SUSP, sent for Ctrl+Z):

case line
when /\xff\xf4\xff\xfd\x06/
        @sock.write("[*] Caught ^C, closing the socket...\n")
        @sock.close
        return

when /\xff\xed\xff\xfd\x06/
        @sock.write("[*] Caught ^Z\n")
        return
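The Ctrl+C byte sequence above can be reproduced and inspected locally; a small sketch (octal escapes are used for portability, and the bytes are exactly what Apache logged):

```shell
# Ctrl+C over telnet arrives as IAC IP IAC DO TIMING-MARK, not as a literal ^C:
printf '\377\364\377\375\006' | od -An -tx1   # \377=0xff \364=0xf4 \375=0xfd \006=0x06
```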


Thursday, December 11, 2008

Map unix shared mounts on windows

You can mount Unix shared file systems on a Windows box by installing Microsoft Windows Services for UNIX, which is available for download from Microsoft. Once you have downloaded it, if the NFS client for Windows is the only thing you need, you can follow these steps outlined in a support article:

Installing Client for NFS from the command line
Windows Services for UNIX version 3.0 uses Microsoft Installer for installation. As a result, you can install individual modules of the product from the command line. If previous components of Windows Services for UNIX have been installed, you must include these components in the addlocal parameter of the installation command line, separated by a comma (,). If you do not do so, these products are removed during the installation of Client for NFS.

To install Client for NFS from the command line:
1. Log on to the Windows computer by using an administrative-level account.
2. Click Start, click Run, type cmd, and then click OK.
3. Insert the Windows Services for UNIX version 3.0 CD in the CD drive (this example uses drive D).
4. From the command prompt, run the following command to install Client for NFS:
   msiexec /I D:\sfusetup.msi /qb addlocal="NFSClient" [sfudir="install path"]
Note: The default installation path is \SFU.
To include the product key in the command line, add PidKey=key, where key is the 25-character product key.
If you are using either Windows 2000 or Windows XP, you do not have to restart the computer.

Here are the commands to execute on the Unix side (tested on Solaris 8/9/10):

share -F nfs -o rw=<client_ip> <directory>

For example, to share the directory /erp11i/common/share on server cmtier.justanexample.com, execute this command at the Solaris command prompt, logged in as root:

share -F nfs -o rw=192.168.3.2 /erp11i/common/share

Here is the command to mount this share, to be executed on the Windows client:

mount <server>:<shared_directory> <drive_letter>:

For example, to mount the directory shared above on our client with IP 192.168.3.2:

mount cmtier.justanexample.com:/erp11i/common/share X:

Wednesday, December 10, 2008

Oracle Redo log

Here's the Wikipedia definition: Oracle redo logfiles log the history of all changes made to the database. Each redo log file consists of redo records. A redo record, also called a redo entry, holds a group of change-vectors, each of which describes or represents a change made to a single block in the database.

For example, if a user UPDATEs a salary-value in an employee-table, the DBMS generates a redo record containing change-vectors that describe changes to the data segment block for the table. And if the user then COMMITs the update, Oracle generates another redo record and assigns the change a "system change number" (SCN).

A single transaction may involve multiple changes to data blocks, so it may have more than one redo record.
Redo log files contain redo entries for both committed and uncommitted transactions.
Oracle redo log files contain the following information about database changes made by transactions:
indicators specifying when the transaction started
a transaction-identifier
the name of the data object updated (for example, an application table)
the "before image" of the transaction, i.e. the data as it existed before the changes

Two queries help find sessions that generate lots of redo:

1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which
indicates how many blocks have been changed by the session. High values
indicate a session generating lots of redo.

The query you can use is:
SQL> SELECT s.sid, s.serial#, s.username, s.program,
2 i.block_changes
3 FROM v$session s, v$sess_io i
4 WHERE s.sid = i.sid
5 ORDER BY 5 desc, 1, 2, 3, 4;

Run the query multiple times and examine the delta between each occurrence
of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.

2) Query V$TRANSACTION. This view contains information about the amount of
undo blocks and undo records accessed by the transaction (as found in the
USED_UBLK and USED_UREC columns).

The query you can use is:
SQL> SELECT s.sid, s.serial#, s.username, s.program,
2 t.used_ublk, t.used_urec
3 FROM v$session s, v$transaction t
4 WHERE s.taddr = t.addr
5 ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;

Run the query multiple times and examine the delta between each occurrence
of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by
the session.

You use the first query when you need to check for programs generating lots of
redo when these programs activate more than one transaction. The latter query
can be used to find out which particular transactions are generating redo.


In Orafaq.com's whitepaper on the redolog, available at http://www.orafaq.com/papers/redolog.pdf, they mention: The redolog is one of the most powerful features of the Oracle database, since it is the mechanism by which Oracle guarantees to be able to recover the database to the last committed transaction (provided the database is in archive-log mode). What most DBAs do not know is that the redolog is also one of the most powerful debugging tools available, allowing the DBA to see the actual transactions, including the data, that were executed against the database.

Where this differs from the well-known trace facilities is that the redolog is always on. Unlike tracing, which has to be turned on in advance of a known problem, redologs faithfully capture every transaction, making them particularly useful in regulated production environments, or where problems cannot be recreated on demand. Oracle now offers a product called Log Miner for use with Oracle 8i and higher, but for those of us still wanting to go deep down into Oracle internals, this article provides an introduction to how to leverage some of the untapped power of the redolog.

The redolog is written in a condensed binary form, unsuitable for text editors. The first step, then, is to locate and convert the logfile into ANSI format.

Metalink Note 1031381.6 describes how to dump redo log file information:

PURPOSE
This article explains how to obtain a dump of the header information in the
online redo log file(s), as well as obtaining selected information from the
online or archived redo log files.

SCOPE & APPLICATION
Informational

You are working with Oracle Technical Support. As part of the diagnostic
process, you have been asked to take a dump of the redo log files. The
information in the logs is often used to help diagnose corruption issues.

The following commands will be used in this process:

1. The 'alter session' command is used to dump redo headers.

2. Use the 'alter system dump logfile' to dump log file contents.

This command requires 'ALTER SYSTEM' system privilege. The database can be in
mount, nomount or open state when the command is issued. An online log file
or an archived log file can be dumped. It is even possible to dump a
file from another database, as long as the operating systems are the same.

Output from the command is put into the session's trace file.

The following ways of dumping a redo log file are covered:

1. To dump records based on DBA (Data Block Address)
2. To dump records based on RBA (Redo Block Address)
3. To dump records based on SCN
4. To dump records based on time
5. To dump records based on layer and opcode
6. Dump the file header information
7. Dump an entire log file:


1. To dump records based on DBA (Data Block Address)
--------------------------------------------------

This will dump all redo records for the range of data
blocks specified for a given file # and block # range.

From sqlplus (sqldba or svrmgr for older versions), issue the following command:

ALTER SYSTEM DUMP LOGFILE 'filename'
DBA MIN fileno . blockno
DBA MAX fileno . blockno;

Example:
========
ALTER SYSTEM DUMP LOGFILE 'u01/oracle/V7323/dbs/arch1_76.dbf'
DBA MIN 5 . 31125
DBA MAX 5 . 31150;

This will cause all the changes to the specified range of data blocks to be
dumped to the trace file. In the example given, all redo records for file #5,
blocks 31125 through 31150 are dumped.

Note
====
For 10g:
ALTER SYSTEM DUMP LOGFILE 'u01/oracle/V7323/dbs/arch1_76.dbf'
DBA MIN 5 . 31125 DBA MAX 5 . 31150;

will raise:
ORA-01963: Must specify a block number

In 10g we need to omit the dot '.' when doing redo dumps:
ALTER SYSTEM DUMP LOGFILE 'u01/oracle/V7323/dbs/arch1_76.dbf'
DBA MIN 5 31125 DBA MAX 5 31150;


2. To dump records based on RBA (Redo Block Address)
-------------------------------------------------

This will dump all redo records for the range of redo
addresses specified for the given sequence number and block number.

Syntax:
ALTER SYSTEM DUMP LOGFILE 'filename'
RBA MIN seqno . blockno
RBA MAX seqno . blockno;

Example:
ALTER SYSTEM DUMP LOGFILE 'u01/oracle/V7323/dbs/arch1_76.dbf'
RBA MIN 2050 . 13255
RBA MAX 2255 . 15555;

3. To dump records based on SCN
----------------------------

Using this option will cause redo records owning changes within the SCN range
specified to be dumped to the trace file.

ALTER SYSTEM DUMP LOGFILE 'filename'
SCN MIN minscn
SCN MAX maxscn;

Example:
ALTER SYSTEM DUMP LOGFILE 'u01/oracle/V7323/dbs/arch1_76.dbf'
SCN MIN 103243
SCN MAX 103294;

If the purpose is only to validate the file, you can instead run:
SQL> ALTER SYSTEM DUMP LOGFILE 'filename' SCN MIN 1 SCN MAX 1;

If the above completes successfully, it ensures there are no issues with the archived log.


4. To dump records based on time.
------------------------------

Using this option will cause redo records created within the time range
specified to be dumped to the trace file.

From sqlplus (sqldba or svrmgr for older versions), issue the following command:

ALTER SYSTEM DUMP LOGFILE 'filename'
TIME MIN value
TIME MAX value;

Example:
========
ALTER SYSTEM DUMP LOGFILE 'u01/oracle/V7323/dbs/arch1_76.dbf'
TIME MIN 299425687
TIME MAX 299458800;


Please Note: the time value is given in REDO DUMP TIME
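The TIME value is not a Unix epoch or an Oracle date; it uses the packed redo dump time encoding. As a sketch (the formula below is the commonly documented encoding for these dumps — verify it against a header dump from your own instance before relying on it), a converter in Python:

```python
# Redo dump TIME encoding (assumed, per common documentation of Note
# 1031381.6-style dumps): seconds packed from a base year of 1988,
# with months counted 0-11 and every month treated as 31 days.

def to_redo_time(yyyy, mm, dd, hh, mi, ss):
    """Convert a calendar timestamp to a redo dump TIME value."""
    v = (yyyy - 1988) * 12 + (mm - 1)
    v = v * 31 + (dd - 1)
    v = v * 24 + hh
    v = v * 60 + mi
    v = v * 60 + ss
    return v

def from_redo_time(v):
    """Decode a redo dump TIME value back to (year, month, day, h, m, s)."""
    v, ss = divmod(v, 60)
    v, mi = divmod(v, 60)
    v, hh = divmod(v, 24)
    v, dd = divmod(v, 31)
    v, mm = divmod(v, 12)
    return (v + 1988, mm + 1, dd + 1, hh, mi, ss)
```

With this encoding, the example value 299425687 above decodes to 25-Apr-1997 13:48:07.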



5. To dump records based on layer and opcode.
------------------------------------------

LAYER and OPCODE are used to dump all log records for a particular type of
redo record, such as all dropped row pieces.

From sqlplus (sqldba or svrmgr for older versions), issue the following command:

ALTER SYSTEM DUMP LOGFILE 'filename'
LAYER value
OPCODE value;

Example:
========
ALTER SYSTEM DUMP LOGFILE 'u01/oracle/V7323/dbs/arch1_76.dbf'
LAYER 11
OPCODE 3;

6. Dump the file header information:
---------------------------------

This will dump file header information for every
online redo log file.

From sqlplus (sqldba or svrmgr for older versions), issue the following command:

alter session set events 'immediate trace name redohdr level 10';

For dumping the archived log header, issue the following command:

ALTER SYSTEM DUMP LOGFILE 'filename' RBA MIN 1 1 RBA MAX 1 1;

7. Dump an entire log file:
------------------------

From sqlplus (sqldba or svrmgr for older versions), issue the following command:

ALTER SYSTEM DUMP LOGFILE 'filename';

Please note:
Fully qualify the filename, and include the single quotes.


Example:
========
ALTER SYSTEM DUMP LOGFILE 'u01/oracle/V7323/dbs/arch1_76.dbf';

The dump of the logfile will be written to a trace file in the udump destination.
Use the command 'show parameters dump' within a SQL*Plus session; the
output will show the location of the udump destination where
the trace file exists.
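Since the dump goes to the session's trace file, the file you want is usually the most recently modified .trc file under the udump destination. A small convenience sketch (not an Oracle utility — the directory path comes from 'show parameters dump'):

```python
import glob
import os

def newest_trace(udump_dir):
    """Return the most recently modified .trc file under udump_dir, or None."""
    traces = glob.glob(os.path.join(udump_dir, "*.trc"))
    return max(traces, key=os.path.getmtime) if traces else None
```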

Tuesday, December 9, 2008

OATM migration post steps

After converting to OATM (Oracle Applications Tablespace Model), we have to drop the old tablespaces from which OATM migrated all data. However, we noticed that some seeded tablespaces were not picked up by OATM's drop script. These can be found with this query:

select distinct tablespace_name from dba_segments
where tablespace_name not like 'APPS%' and
tablespace_name not IN ('SYSTEM','SYSAUX','TEMP','OWAPUB','CTXD');

We also found, through this query, some seeded tablespaces that did not contain any objects:

select name from v$tablespace
minus
select tablespace_name from dba_segments;

We dropped all the tablespaces returned by this query, as they were empty and taking up valuable disk space.

Monday, December 8, 2008

In which tablespace are procedures and functions stored

I was under the impression that the procedures, functions and packages belonging to a user are stored in the default tablespace allocated to that user. However, that is not the case: the code is stored in the SYSTEM tablespace (in data dictionary tables such as SYS.SOURCE$). Here's a test I did to prove that the code is not stored in the user's default tablespace:

The obvious choice for this test is the SCOTT schema, but SCOTT is not part of an ERP install. Here's how to create it:

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
conn / as sysdba
@utlsampl.sql

This will create the SCOTT schema. But SCOTT's default tablespace is SYSTEM, so we'll change that:

CREATE TABLESPACE SCOTTD DATAFILE '/stage11i/dbdata/data1/scottd1.dbf' SIZE 10M;

ALTER USER SCOTT DEFAULT TABLESPACE SCOTTD;

CREATE OR REPLACE PACKAGE PACK1 AS
PROCEDURE PROC1;
FUNCTION FUN1 RETURN VARCHAR2;
END PACK1;
/

CREATE OR REPLACE PACKAGE BODY PACK1 AS
        PROCEDURE PROC1 IS
                BEGIN
                DBMS_OUTPUT.PUT_LINE('Hi a message from procedure PROC1');
                END PROC1;
        FUNCTION FUN1 RETURN VARCHAR2 IS
                BEGIN
                RETURN ('Hello from function FUN1');
        END FUN1;
END PACK1;
/

set serveroutput on

SQL> EXEC PACK1.PROC1
Hi a message from procedure PROC1

PL/SQL procedure successfully completed.

SQL> select pack1.fun1 from dual;

FUN1
--------------------------------------------------------------------------------
Hello from function FUN1

SQL>


CONN / AS SYSDBA
DROP TABLESPACE SCOTTD INCLUDING CONTENTS AND DATAFILES;

conn scott/tiger

SQL> EXEC PACK1.PROC1
Hi a message from procedure PROC1

PL/SQL procedure successfully completed.

SQL> select pack1.fun1 from dual;

FUN1
--------------------------------------------------------------------------------
Hello from function FUN1

SQL>

Even after dropping SCOTT's default tablespace, the package PACK1 still exists. So stored source code is not kept in a schema's default tablespace; it is stored in the SYSTEM tablespace.

Friday, December 5, 2008

Long running queries

A query to check long-running SQL. The mintogo column gives an estimate of how many more minutes the SQL will run:

select s.sid,o.opname,s.sql_hash_value,o.sofar,o.totalwork,o.elapsed_seconds,
round(o.time_remaining/60,2) mintogo,
round(o.sofar/o.totalwork * 100,0) pctdone, o.message
from v$session_longops o, v$session s
where o.sid = s.sid
and sofar < totalwork
and (o.sid = &trgtsid or &trgtsid = 0)
/
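The time_remaining value behind mintogo is essentially a linear extrapolation from the work done so far. A sketch of the arithmetic (assuming a constant rate of work, which is how v$session_longops estimates it):

```python
def minutes_to_go(sofar, totalwork, elapsed_seconds):
    """Estimate remaining minutes the way v$session_longops.time_remaining
    does: extrapolate linearly from the rate of work observed so far."""
    if sofar == 0:
        return None  # no progress yet; cannot extrapolate
    remaining_seconds = elapsed_seconds * (totalwork - sofar) / sofar
    return round(remaining_seconds / 60, 2)
```

For example, a job that has done 50 of 100 units in 600 seconds is estimated to have 10 more minutes to run.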

In-built Data purge concurrent programs

As per metalink note 387459.1:

The ATG / FND supplied data purge requests are the following:
- Purge Concurrent Request and/or Manager Data [FNDCPPUR]
- Purge Obsolete Workflow Runtime Data [FNDWFPR]
- Purge Signon Audit data [FNDSCPRG.sql]
- Purge Obsolete Generic File Manager Data [FNDGFMPR]
- Purge Debug Log and System Alerts [FNDLGPRG]
- Purge Rule Executions [FNDDWPUR]
- Purge Concurrent Processing Setup Data for Cloning [FNDCPCLN]


Metalink Note 732713.1 describes purging strategy for E-Business Suite 11i:

There is no single archive/purge routine that is called by all modules within E-Business Suite; instead, each module has module-specific archive/purge procedures.

Concurrent Jobs to purge data

  • Purge Obsolete Workflow Runtime Data (FNDWFPR)

 Oracle Applications System Administrator’s Guide - Maintenance Release 11i (Part No. B13924-04)
 Note 132254.1  Speeding up and Purging Workflow 
 Note 277124.1  FAQ on Purging Oracle Workflow Data
 Note 337923.1  A closer examination of the Concurrent Program Purge Obsolete Workflow Runtime Data

  • Purge Debug Log and System Alerts (FNDLGPRG)

  Note 332103.1  Purge Debug Log And System Alerts Performance Issues

  • Purge Signon Audit data (FNDSCPRG)

  Note 1016344.102   What Tables Does the Purge Signon Audit Data Concurrent Program Affect?
  Note 388088.1   How To Clear The Unsuccessful Logins

  • Purge Concurrent Request and/or Manager Data (FNDCPPUR)

Oracle Applications System Administrator’s Guide - Maintenance Release 11i (Part No. B13924-04)
   Note 565942.1   Which Table Column And Timing Period Does The FNDCPPUR Purge Program Use
   Note 104282.1  Concurrent Processing Tables and Purge Concurrent Request and/or Manager Data Program (FNDCPPUR)
 Note 92333.1   How to Optimize the Process of Running Purge Concurrent Request and/or Manager Data (FNDCPPUR)

  • Delete Diagnostic Logs (DELDIAGLOG)

 Note 466593.1   How To Delete Diagnostic Logs and Statistics?

  • Delete Diagnostic Statistics (DELDIAGSTAT)

 Note 466593.1   How To Delete Diagnostic Logs and Statistics?

  • Purge FND_STATS History Records (FNDPGHST)

  Oracle Applications System Administrator’s Guide - Configuration Release 11i (Part No. B13925-06)
   Note 423177.1  Date Parameters For "Purge Fnd_stats History Records" Do Not Auto-Increment

  • Page Access Tracking Purge Data (PATPURGE)

  Note 413795.1   Page Access Tracking Data Purge Concurrent Request Fails With Ora-942
  Note 461897.1   Which Tables store the Page Access Tracking Data?
  Note 402116.1   Page Access Tracking in Oracle Applications Release 12

  • Purge Obsolete Generic File Manager Data (FNDGFMPR)

 Oracle Applications System Administrator’s Guide - Configuration Release 11i (Part No. B13925-06)
  Note 298698.1   Avoiding abnormal growth of FND_LOBS table in Application
  Note 555463.1   How to Purge Generic or Purchasing Attachments from the FND_LOBS Table

  • Summarize and Purge Concurrent Request Statistics (FNDCPCRS)

(no references found)

  • Purge Inactive Sessions (ICXDLTMP)

  Note 397118.1  Where Is 'Delete Data From Temporary Table'  Concurrent Program - ICXDLTMP.SQL

  • Purge Obsolete ECX Data (FNDECXPR)

  Note 553711.1   Purge Obsolete Ecx Data Error ORA-06533: Subscript Beyond Count
  Note 338523.1   Cannot Find ''Purge Obsolete Ecx Data'' Concurrent Request
  Note 444524.1   About Oracle Applications Technology ATG_PF.H Rollup 6

  • Purge Rule Executions (FNDDWPURG)

(no references found)

Additional Notes

You can monitor and run purging programs through OAM by navigating to Site Map --> Maintenance --> Purge.


This note also references a white paper, Note 752322.1, "Reducing Your Oracle E-Business Suite Data Footprint using Archiving, Purging, and Information Lifecycle Management".

Thursday, December 4, 2008

Resume OATM migration after errors

OATM migration process details are stored in a table FND_TS_MIG_CMDS.  This is the structure of the table:

SQL> desc fnd_ts_mig_cmds
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 LINENO                                    NOT NULL NUMBER
 SUBOBJECT_TYPE                                     VARCHAR2(30)
 START_DATE                                         DATE
 OWNER                                     NOT NULL VARCHAR2(30)
 OBJECT_TYPE                                        VARCHAR2(30)
 OBJECT_NAME                                        VARCHAR2(80)
 OLD_TABLESPACE                                     VARCHAR2(30)
 NEW_TABLESPACE                                     VARCHAR2(30)
 MIGRATION_CMD                                      VARCHAR2(4000)
 MIGRATION_STATUS                                   VARCHAR2(30)
 ERROR_TEXT                                         VARCHAR2(4000)
 LAST_UPDATE_DATE                                   DATE
 GENERATION_DATE                                    DATE
 END_DATE                                           DATE
 TOTAL_BLOCKS                                       NUMBER(15)
 INDEX_PARALLEL                                     VARCHAR2(10)
 PARENT_OBJECT_NAME                                 VARCHAR2(30)
 PARENT_LINENO                                      NUMBER
 EXECUTION_MODE                                     VARCHAR2(1)
 PARTITIONED                                        VARCHAR2(3)
 PARENT_OWNER                                       VARCHAR2(30)

The MIGRATION_STATUS column stores the status of the object to be migrated.  The valid values for MIGRATION_STATUS column are:

GENERATED : Migration command has been generated but not executed.
SUCCESS   : Migration completed successfully.
ERROR     : Migration errored out.

You can check the details of an error by querying the ERROR_TEXT column of the table.

To resume an OATM migration, follow the steps described in Metalink Note 369198.1:
1. Stop all services except the Database

2. Bounce DB to be sure locks are released

3. Restart migration with next steps :

3.1. Make sure the OATM migration is stopped by running the following at the OS level:

ps -ef | grep fndtsmig

3.2. Stop the migration queue by running the following from SQL*Plus as the APPS user:

exec dbms_aqadm.stop_queue('SYSTEM.TBLMIG_MESSAGEQUE');

3.3. Restart the OATM migration one more time :

perl fndtsmig.pl

You should also check Metalink Note 463271.1:

1) Run the 'Migration Error Report' and correct the errors that caused the migration to fail.

2) Make sure that queue 'TBLMIG_MESSAGEQUE' is started

select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
from dba_queues
where owner ='SYSTEM'
and name = 'TBLMIG_MESSAGEQUE' ;
In case the queue is not started, run from sqlplus as the system user:

exec dbms_aqadm.start_queue('SYSTEM.TBLMIG_MESSAGEQUE');  

3) Run the migration command again; the migration utility will try to move the objects that are still in the old tablespaces.


Wednesday, December 3, 2008

Tablespace scripts

From Jonathan Lewis:
select ts.tablespace_name, 
to_char(sum(nvl(fs.bytes,0))/1048576, '99,999,990.99') as MB_FREE, 
count(*) as FRAGMENTS, 
to_char(max(nvl(fs.bytes,0))/1048576, '99,999,990.99') as BIGGEST_BIT 
from dba_free_space fs, dba_tablespaces ts 
where fs.tablespace_name(+) = ts.tablespace_name 
group by rollup (ts.tablespace_name);

From Metalink Note: 1019999.6

SET ECHO off
REM NAME: TFSTSNFO.SQL
REM USAGE:"@path/tfstsnfo"
REM ------------------------------------------------------------------------
REM REQUIREMENTS:
REM SELECT on DBA_DATA_FILES, DBA_FREE_SPACE
REM ------------------------------------------------------------------------
REM PURPOSE:
REM Produces a brief report showing the filenames, free space,
REM size and status of all tablespaces in the database.
REM ------------------------------------------------------------------------
REM DISCLAIMER:
REM This script is provided for educational purposes only. It is NOT
REM supported by Oracle World Wide Technical Support.
REM The script has been tested and appears to work as intended.
REM You should always run new scripts on a test instance initially.
REM ------------------------------------------------------------------------
REM Main text of script follows:

set echo off
break on table_space on free
column free format 999,999,999,990
column bytes format 999,999,999,990
set pagesize 66
set lines 100
spool tblsp.out
select substr(df.tablespace_name,1,15) table_space,
sum(fs.bytes) free,
substr(df.file_name,1,35) Name_of_File,
df.bytes,
substr(df.status,1,5) stat
from dba_data_files df ,dba_free_space fs
where df.tablespace_name = fs.tablespace_name
group by substr(df.tablespace_name,1,15),
substr(df.file_name,1,35),
df.bytes,substr(df.status,1,5)
/
spool off
set echo on

Monday, December 1, 2008

Oracle E-Business Suite System Survey

You can use the $FND_TOP/patch/115/bin/txkInventory.pl Perl script to generate a report of techstack components and their versions. This is described in Metalink Note 601736.1.

Metalink Note 392782.1 contains a system survey questionnaire that Oracle Applications development uses from a performance standpoint. Its output helps in understanding the hardware configuration and related environment of an Oracle E-Business Suite instance:

Oracle E-Business Suite System Survey

The first 6 questions should be answered manually; the rest can be gathered by script.

1. Hardware Information 
* DB server
- System Model : 
- OS version : 
- # CPUs : 
- CPU speed : 
- MEMORY : 
- Disk I/O : 
- Etc : 

* CM (Concurrent Manager) server
- System Model : 
- OS version : 
- # CPUs : 
- CPU speed : 
- MEMORY : 
- Disk I/O : 
- Etc : 

* AP (Application) server
- System Model : 
- OS version : 
- # CPUs : 
- CPU speed : 
- MEMORY : 
- Disk I/O : 
- Etc : 

2. environment information (# tiers, network setup, RAC etc..) 
3. # of total and avg. concurrent users during peak times. 
4. DB size (including version) 
5. modules being used. 
6. # of employees 

==== 

The following SQL script will gather additional information
from the production database. After logging in to SQL*Plus as the APPS user,
please run system.sql and collect the spool output file, system.txt,
from the current directory.

--File name : system.sql 
spool system.txt 
set linesize 180 
set pagesize 600 

prompt Apps Version

SELECT release_name from fnd_product_groups;

prompt DB SERVER INFORMATION 
col host_name format a20 
COL VERSION FORMAT A15 
COL STATUS FORMAT A10 
COL THREAD# FORMAT 99 
select 
INST_ID, 
INSTANCE_NUMBER , 
INSTANCE_NAME , 
HOST_NAME , 
VERSION , 
STATUS , 
PARALLEL , 
THREAD# 
from gv$instance ; 

PROMPT MAJOR TABLE LIST 

select owner, table_name, num_rows, LAST_ANALYZED 
From dba_tables 
where table_name in ( 
'AP_INVOICES_ALL', 'AP_INVOICE_DISTRIBUTIONS_ALL', 'AR_PAYMENT_SCHEDULES_ALL', 
'RA_CUSTOMER_TRX_ALL', 'RA_CUSTOMER_TRX_LINES_ALL' , 
'HZ_PARTIES', 'HZ_CUST_ACCOUNTS', 
'AS_SALES_LEADS', 'AS_ACCESSES_ALL_ALL', 
'BOM_STRUCTURES_B', 'BOM_COMPONENTS_B', 
'CS_INCIDENTS_ALL_B', 
'FA_ADJUSTMENTS', 'FA_DEPRN_DETAIL', 'FA_DEPRN_SUMMARY', 
'FND_USERS', 
'GL_JE_HEADERS', 'GL_JE_LINES', 
'MRP_GROSS_REQUIREMENTS', 'MRP_RECOMMENDATIONS', 'MRP_FULL_PEGGING', 
'MRP_BOM_COMPONENTS', 'MTL_MATERIAL_TRANSACTIONS', 
'MTL_TRANSACTION_ACCOUNTS', 'MTL_SYSTEM_ITEMS_B', 
'HR_ORGANIZATION_INFORMATION', 'HR_OPERATING_UNITS', 
'MTL_PARAMETERS', 
'OE_ORDER_HEADERS_ALL', 'OE_ORDER_LINES_ALL', 
'PO_HEADERS_ALL', 'PO_LINES_ALL', 'PO_VENDORS', 
'WF_ITEM_ACTIVITY_STATUSES', 'WF_ITEM_ATTRIBUTE_VALUES', 
'WF_NOTIFICATIONS', 'WF_NOTIFICATION_ATTRIBUTES' , 
'WSH_DELIVERY_DETAILS' , 'WSH_DELIVERY_ASSIGNMENTS', 
'WSH_NEW_DELIVERIES', 'WSH_DELIVERY_LEGS', 
'WSH_TRIP_STOPS', 'WSH_TRIPS' ) 
order by table_name ; 


PROMPT number of daily concurrent requests. 

SELECT trunc(REQUESTED_START_DATE), count(*) 
FROM FND_CONCURRENT_REQUESTS 
WHERE REQUESTED_START_DATE BETWEEN sysdate-30 AND sysdate 
group by rollup(trunc(REQUESTED_START_DATE)) ; 


PROMPT Applications versions and family pack versions. 

COL APPLICATION_NAME FORMAT A60 
COL SHT_NAME FORMAT A10 
col PATCH_LEVEL FORMAT A20 
SELECT A.APPLICATION_SHORT_NAME SHT_NAME,T.APPLICATION_NAME, I.STATUS, 
NVL(I.PATCH_LEVEL, 'n/a') PATCH_LEVEL, I.DB_STATUS 
FROM FND_PRODUCT_INSTALLATIONS I, 
FND_APPLICATION A, 
FND_APPLICATION_TL T 
WHERE A.APPLICATION_ID = I.APPLICATION_ID 
AND A.APPLICATION_ID = T.APPLICATION_ID 
AND T.LANGUAGE = USERENV('LANG') 
ORDER BY 1 ; 


PROMPT Multi-org being used. 

select MULTI_ORG_FLAG org, MULTI_LINGUAL_FLAG lingual, MULTI_CURRENCY_FLAG currency 
from FND_PRODUCT_GROUPS ; 

PROMPT DB size with Tablespace 

set head on 
set pagesize 30 
select NVL(tablespace_name,'** Total **') "Tablespace Name", 
sum("allocated") "Allocated(M)", 
sum("used") "Used(M)", 
sum("free") "Free(M)", 
sum(df_cnt) "#(File)" 
from 
( select a.tablespace_name, trunc(b.assigned/1048576) "allocated", 
trunc((b.assigned-a.free)/1048576) "used", 
trunc(a.free/1048576) "free", 
df_cnt 
from 
( select tablespace_name, sum(bytes) free 
from dba_free_space 
group by tablespace_name ) a, 
( select tablespace_name, sum(bytes) assigned, count(*) df_cnt 
from dba_data_files 
group by tablespace_name ) b 
where a.tablespace_name = b.tablespace_name 
UNION ALL 
SELECT tablespace_name||'[TMP]', trunc(sum(bytes)/1048576), null, null, count(*) df_cnt 
from dba_temp_files 
group by tablespace_name ) 
group by rollup(tablespace_name) ; 

spool off 

Tuesday, November 25, 2008

Calculate number of concurrent users of an existing instance

The view v$license keeps track of concurrent sessions and users.

SQL> desc v$license
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 SESSIONS_MAX                                       NUMBER
 SESSIONS_WARNING                                   NUMBER
 SESSIONS_CURRENT                                   NUMBER
 SESSIONS_HIGHWATER                                 NUMBER
 USERS_MAX                                          NUMBER
 CPU_COUNT_CURRENT                                  NUMBER
 CPU_CORE_COUNT_CURRENT                             NUMBER
 CPU_SOCKET_COUNT_CURRENT                           NUMBER
 CPU_COUNT_HIGHWATER                                NUMBER
 CPU_CORE_COUNT_HIGHWATER                           NUMBER
 CPU_SOCKET_COUNT_HIGHWATER                         NUMBER

select sessions_current from v$license;

The above query gives you the number of concurrent users right now.

You can write a small job that captures this information every hour for a week. Once you have that data, take an average to estimate the typical number of concurrent users.
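Once the hourly samples are captured, the summary is simple arithmetic. A minimal sketch (the sampling job itself — e.g. DBMS_JOB or cron selecting sessions_current from v$license into a table — is left out):

```python
def average_concurrent_users(samples):
    """Average of hourly sessions_current samples captured from v$license."""
    return sum(samples) / len(samples) if samples else 0

def peak_concurrent_users(samples):
    """Highest observed sample, comparable to sessions_highwater."""
    return max(samples) if samples else 0
```

Comparing the computed peak against v$license.sessions_highwater is a quick sanity check that the sampling job is not missing busy periods.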

Monday, November 24, 2008

FND_STATS doesn't collect statistics on external tables

Recently we ran Gather Schema Statistics for all tables:

1. Log in to Oracle Applications with the System Administrator responsibility.
2. Navigate to the Submit Request window (Request > Run).
3. Submit the Gather Schema Statistics program.
Set the schema name to ALL to gather statistics for all Oracle Applications schemas (those having an entry in the FND_PRODUCT_INSTALLATIONS table). In addition to gathering index and table-level statistics, the program gathers column-level histogram statistics for all columns listed in the FND_HISTOGRAM_COLS table.

4. Use the GATHER_AUTO option in the program. This option gathers statistics for objects that are either lacking statistics or whose rows have changed significantly (default 10%) since the last time statistics were gathered.

GSS got stuck when it was collecting statistics on external tables. GSS is not designed to gather statistics on external tables; patch 5876047 needs to be applied to solve this problem. The patch modifies the FND_STATS package to exclude external tables from statistics gathering. However, even after applying this patch, GSS can again fail with ORA-1 errors.

Metalink Note 470556.1 gives this workaround:

Take a backup of the FND_STATS_HIST table
Truncate table FND_STATS_HIST
Re-submit Gather Schema Statistics program.

You do NOT need to truncate this table after each run of Gather Schema Statistics. Doing it once is enough, because each run of GSS uses a new request_id.

Thursday, November 20, 2008

Bugs introduced in 10.2.0.4 which will be fixed in 10.2.0.5

Metalink Note 555579.1 describes 30 bugs newly introduced in 10.2.0.4. However, if I go to Metalink advanced search and set Classification to Any, I see more than 500 bugs. Some of these will be fixed in the next point release, 10.2.0.5. The list of recommended patches is much shorter; you should aim to get your instance patched with at least the recommended ones:

7164095 (For RAC instances)
7375644
7496435
7573282 (For RAC instances)

Oracle has since published another article which has list of all the bugs fixed in 10.2.0.5:

10.2.0.5 Patch Set - List of Bug Fixes by Problem Type [ID 1088172.1]

Wednesday, November 19, 2008

NUMA and 10.2.0.4

NUMA stands for Non-Uniform Memory Access. It is a computer memory design used in multiprocessor systems, where memory access time depends on the memory's location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors.

Here's a real world analogy from www.techgalaxy.net:

Imagine that you are baking a cake. You have a group of ingredients (=memory pages) that you need to complete the recipe(=process). Some of the ingredients you may have in your cabinet(=local memory), but some of the ingredients you might not have, and have to ask a neighbor for(=remote memory). The general idea is to try and have as many of the ingredients in your own cabinet as possible, since this reduces your time and effort in making the cake.
You also have to remember that your cabinets can only hold a fixed amount of ingredients(=physical nodal memory). If you try and buy more, but you have no room to store it, you may have to ask your neighbor to keep it in his/her cabinet until you need it(=local memory full, so allocate pages remotely).

Bug # 7171446 (Metalink Note 7171446.8) describes the problem of 10.2.0.4 with NUMA:

In 10.2.0.4, NUMA features are enabled by default, and this can lead to a number of unexpected problems such as skewed CPU usage and ORA-600 errors.

Workaround:
Unless the system has been specifically set up and tuned for NUMA, disable Oracle NUMA optimizations by setting the following in the pfile/spfile/init.ora used to start the instances:
   
   _enable_NUMA_optimization=FALSE 
   _db_block_numa=1

Bug # 68680680 (Metalink Note 68680680.8) says:

ORA-4031 can occur when the NUMA option is used due to imbalance between shared pool subpools.  Subpool 1 shows memory allocated to "NUMA pool X" where X is a number.

This is fixed in 10.2.0.5.  Strangely, patch 68680680 is available for 10.2.0.3 but not for 10.2.0.4.

Tuesday, November 18, 2008

Fatal error: Don't know how to make target

Recently the DBAs faced this issue when they were applying the TXK Rollup S patch and it was trying to relink:

Undefined first referenced
symbol in file
wfmlrsumpl 
$FND_TOP/lib/libfnd.a(wfmlr.o)
wfmlrgpl 
$FND_TOP/lib/libfnd.a(wfmlr.o)
ld: fatal: Symbol referencing errors. No output written to 

$ALR_TOP/bin/ALPPIM
*** Error code 1
make: Fatal error: Command failed for target 

`$ALR_TOP/bin/ALPPIM'
Done with link of alr executable 'ALPPIM' on Mon Nov 17 14:05:36 EST 2008

Relink of module "ALPPIM" failed.


make: Fatal error: Don't know how to make target 
`$ALR_TOP/bin/ALPPNR'
Done with link of alr executable 'ALPPNR' on Tue Nov 18 22:50:26 EST 2008
Relink of module "ALPPNR" failed.

make: Fatal error: Don't know how to make target 
`$ALR_TOP/bin/ALPPWF'
Done with link of alr executable 'ALPPWF' on Tue Nov 18 22:50:26 EST 2008
Relink of module "ALPPWF" failed

make: Fatal error: Don't know how to make target 
`$FA_TOP/bin/FAGDA'
Done with link of fa executable 'FAGDA' on Tue Nov 18 22:50:31 EST 2008
Relink of module "FAGDA" failed.

make: Fatal error: Don't know how to make target 

`$FA_TOP/bin/FAMPRET'
Done with link of fa executable 'FAMPRET' on Tue Nov 18 22:50:32 EST 2008
Relink of module "FAMPRET" failed.

make: Fatal error: Don't know how to make target 
`$FA_TOP/bin/FAMPSLTFR'
Done with link of fa executable 'FAMPSLTFR' on Tue Nov 18 22:50:32 EST 2008
Relink of module "FAMPSLTFR" failed.

make: Fatal error: Don't know how to make target 
`$FA_TOP/bin/FAMPTFR'
Done with link of fa executable 'FAMPTFR' on Tue Nov 18 22:50:32 EST 2008
Relink of module "FAMPTFR" failed.

make: Fatal error: Don't know how to make target 
`$FA_TOP/bin/FAMRCL'
Done with link of fa executable 'FAMRCL' on Tue Nov 18 22:50:32 EST 2008
Relink of module "FAMRCL" failed.

make: Fatal error: Don't know how to make target 
`$FA_TOP/bin/FAMRST'
Done with link of fa executable 'FAMRST' on Tue Nov 18 22:50:32 EST 2008
Relink of module "FAMRST" failed.

make: Fatal error: Don't know how to make target 
`$FA_TOP/bin/FATAXUP'
Done with link of fa executable 'FATAXUP' on Tue Nov 18 22:50:33 EST 2008
Relink of module "FATAXUP" failed.

$ORACLE_HOME/rdbms/lib/ssbbded.o 
Undefined first referenced
symbol in file
inltev 
$WIP_TOP/lib/libwip.a(wilctv.o)
inltvu 
$WIP_TOP/lib/libwip.a(wilctv.o)
ld: fatal: Symbol referencing errors. No output written to 

$WSM_TOP/bin/WSCMTI
*** Error code 1
make: Fatal error: Command failed for target 

`$WSM_TOP/bin/WSCMTI'

make: Fatal error: Don't know how to make target 
`$WSM_TOP/bin/WSCMTM'
Done with link of wsm executable 'WSCMTM' on Tue Nov 18 22:51:05 EST 2008
Relink of module "WSCMTM" failed.

The following undefined symbols were appearing: inltev and inltvu in $WIP_TOP/lib/libwip.a(wilctv.o), and wfmlrsumpl and wfmlrgpl in $FND_TOP/lib/libfnd.a(wfmlr.o).

When we relinked them with the force option, they relinked fine:

adrelink.sh "force=y" "WSM WSCMTI"
adrelink.sh "force=y" "ALR ALPPIM"

The other binaries still failed.  On a hunch we checked the make file used to relink the binaries:

grep FAGDA $FA_TOP/lib/fa.mk
grep WSCMTM $WSM_TOP/lib/wsm.mk

The makefiles had no entries for the binaries that were failing to relink, so the relink command could not build them.
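The manual grep check can be sketched as a small script that reports which failing binaries have no entry in the product makefile (the names and the matching rule here are illustrative; adjust to your makefile's conventions):

```python
import re

def missing_targets(makefile_text, binaries):
    """Return the binaries that do not appear anywhere in the makefile text.

    A binary is counted as present if its name occurs as a whole word;
    a stricter check could require the name followed by a colon at the
    start of a line (a make target).
    """
    return [b for b in binaries
            if not re.search(r"\b" + re.escape(b) + r"\b", makefile_text)]
```

Running this over fa.mk with the list of binaries from the patch log would have pointed straight at the stale makefile.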

We took a backup of the existing makefiles and copied a higher version of the makefiles from a different instance:

$ adident Header $FA_TOP/lib/fa.mk.old
fa.mk.old:
$Header fa.mk 115.3 99/07/16 01:53:07 porting ship $

$ adident Header $FA_TOP/lib/fa.mk
fa.mk:
$Header fa.mk 115.9 2002/05/20 03:43:30 pkm ship    $

The new makefiles contained the binaries that had been failing to relink.

When we relinked again after copying the makefiles, everything went fine.

Monday, November 17, 2008

alter java class "oracle/apps/per/proxy/client/forms/UrlThread" compile hangs

Mohan pinged me today and said:

Two workers are hung and log file says 
Loading contents of archive file into database with arguments
-Xmx512M -XX:+UseParallelGC -XX:ParallelGCThreads=4 -XX:LargePageSizeInBytes=4M oracle.aurora.server.tools.loadjava.LoadJavaMain -f -thin -user "APPS/*****@someserver.justanexample.com:1521:dba" /$APPLCSF/$APPLOUT/p001ldjva.jar 

I asked him to check v$session_wait, and he reported back:

COUNT(*) EVENT
-------- --------------------------------------------------------
       1 Streams AQ: waiting for messages in the queue
       2 row cache lock
       4 Streams AQ: qmn slave idle wait
       3 library cache lock
       1 Streams AQ: qmn coordinator idle wait
       1 Streams AQ: waiting for time management or cleanup tasks

SQL_TEXT
----------------------------------------------------------------
alter java class "oracle/apps/per/proxy/client/forms/UrlThread" compile 

I asked him to check the DB alert log.  This is what he found:

WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! 

There are a lot of Metalink notes if you search for the above phrase. All of them indicate row cache enqueue contention of some sort.

I asked Mohan to stop the patch, bounce the database, and restart the patch.

After bouncing the database and restarting the patch, the issue went away.
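Scanning the alert log for this message can be scripted as a quick health check. The sketch below fabricates a sample alert log for illustration; on a real system you would point ALERT_LOG at the actual file under your background_dump_dest (typically alert_$ORACLE_SID.log).

```shell
#!/bin/sh
# Sketch: count row cache enqueue timeouts in the database alert log.
# The log contents here are made up; substitute your real alert log path.
ALERT_LOG=$(mktemp)
cat > "$ALERT_LOG" <<'EOF'
Mon Nov 17 10:02:11 2008
WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=21
Mon Nov 17 10:05:43 2008
WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=34
EOF

count=$(grep -c "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK" "$ALERT_LOG")
echo "row cache enqueue timeouts found: $count"
rm -f "$ALERT_LOG"
```

A non-zero count during a patch run is a strong hint to stop the patch and bounce the database, as described above.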

Sunday, November 16, 2008

Unable to Flush Restart File in AIURWL()

Yesterday, Akhilesh pinged me about an adpatch error while applying a patch.

After I renamed the $APPL_TOP/admin/log/$TWO_TASK/restart directory, the error changed to:

Adpatch error Unable to Flush Restart File in AIURWL()

After a lot of trial and error I noticed that there was a file in the $APPL_TOP/admin/log/$TWO_TASK directory called adpatch_sessions_to_clean.txt.  On a hunch I renamed this file to adpatch_sessions_to_clean.txt.old.

Voila!  The patch proceeded without any issues and was applied successfully.

There is no reference to this file anywhere on Metalink.  The file contained the session numbers of some old adpatch sessions.
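The workaround above amounts to moving the stale restart artifacts out of adpatch's way. Here it is as a sketch, with a temporary directory standing in for the real $APPL_TOP tree so the snippet is self-contained:

```shell
#!/bin/sh
# Sketch of the fix: move the stale restart directory and the
# adpatch_sessions_to_clean.txt file aside before rerunning adpatch.
# APPL_TOP and TWO_TASK are simulated with a temp tree for illustration.
APPL_TOP=$(mktemp -d)
TWO_TASK=DEV
mkdir -p "$APPL_TOP/admin/log/$TWO_TASK/restart"
echo "12345" > "$APPL_TOP/admin/log/$TWO_TASK/adpatch_sessions_to_clean.txt"

cd "$APPL_TOP/admin/log/$TWO_TASK"
mv restart restart.old
mv adpatch_sessions_to_clean.txt adpatch_sessions_to_clean.txt.old

ls -1          # only the .old copies remain; adpatch will start clean
cd /
rm -rf "$APPL_TOP"
```

Renaming rather than deleting keeps the old session data available in case Oracle Support asks for it later.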


Friday, November 14, 2008

Script to check what sqls are being run by users

Metalink Note 135749.1 gives a great script to check which SQLs are being run by users at any given time:

set pagesize 66
col c1 for a9
col c1 heading "OS User"
col c2 for a9
col c2 heading "Oracle User"
col b1 for a9
col b1 heading "Unix PID"
col b2 for 9999 justify left
col b2 heading "SID"
col b3 for 99999 justify left
col b3 heading "SERIAL#"
col sql_text for a35
break on b1 nodup on c1 nodup on c2 nodup on b2 nodup on b3 skip 3
select c.spid b1, b.osuser c1, b.username c2, b.sid b2, b.serial# b3,
a.sql_text
  from v$sqltext a, v$session b, v$process c
   where a.address    = b.sql_address
--   and b.status     = 'ACTIVE' /* YOU CAN CHOOSE THIS OPTION ONLY TO SEE
--                                  ACTIVE TRANSACTIONS AT THAT MOMENT */
   and b.paddr      = c.addr
   and a.hash_value = b.sql_hash_value
 order by c.spid,a.hash_value,a.piece
/        

Check Metalink Note 131704.1 for Oracle's Database scripts library index

Thursday, November 13, 2008

Pasta mandatory for printing reports with UTF8 characterset

Sanjeev called today to find out when Pasta printing is required.

Metalink Note 99495.1 says:

1. If the database character set is UTF8, this Postscript setup document CANNOT
   be utilized.

   select value from nls_database_parameters 
   where parameter = 'NLS_CHARACTERSET';

   Pasta and the IX Library are required for UTF8 character set environments and/or 
   multi-lingual environments. The printing chapter of the Oracle Applications
   11i System Administrator's Guide states, "In order to print reports with the 
   UTF8 character set, you 'must' configure PASTA". See the following documents 
   for more details. 
   for more details. 

   Note 240864.1 "Activating and Configuring IX Library"
   Note 189708.1 "Oracle Reports 6i Setup Guide for Oracle Applications 11i"
   Note 239196.1 "PASTA 3.0 Release Information" -- in particular, the "Pasta User's Guide 3.0"
   Note 356501.1 "How to Setup Pasta Quickly and Effectively"

Wednesday, November 12, 2008

Node id does not exist for the current application server id.

I am doing sanity checks on an instance which has been converted to a shared application file system.  The DBA was working on it until late night, so he is not available now.  I was unable to access the home page, and I saw some old Apache and Jserv processes.  When I tried to stop Apache, I got the following error:

$ adapcctl.sh stop

adapcctl.sh version 115.55

Node id does not exist for the current application server id.
Apache Web Server Listener is not running.
Node id does not exist for the current application server id.
Node id does not exist for the current application server id.
Apache Web Server Listener (PLSQL) is not running.
Node id does not exist for the current application server id.

A search on Metalink yielded Note 359998.1, which suggested a mismatch between fnd_nodes.server_id and the APPLICATION_SERVER_ID variable in the dbc file.  Running adgendbc.sh was suggested as the solution, so I manually ran $COMMON_TOP/admin/install/adgendbc.sh.  This recreated the dbc file.  After this I was able to successfully stop and restart Apache and get to the instance.
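To confirm the mismatch before regenerating anything, you can pull the server id out of the dbc file and compare it with what the database holds. The sketch below fabricates a dbc file in a temp location for illustration, and assumes the parameter is named APPL_SERVER_ID, as it typically is in 11i dbc files; the id value shown is made up.

```shell
#!/bin/sh
# Sketch: extract the server id from a dbc file so it can be compared
# against fnd_nodes.server_id. A real dbc file lives under $FND_TOP/secure;
# this sample (and its id value) is invented for illustration.
DBC=$(mktemp)
cat > "$DBC" <<'EOF'
#DB Settings
APPS_JDBC_DRIVER_TYPE=THIN
GWYUID=APPLSYSPUB/PUB
APPL_SERVER_ID=8A62E1F0C4D2SAMPLE
EOF

server_id=$(awk -F= '$1 == "APPL_SERVER_ID" {print $2}' "$DBC")
echo "dbc server id: $server_id"
rm -f "$DBC"
```

The printed value can then be compared with the output of `select node_name, server_id from fnd_nodes;` run as APPS; a mismatch is exactly the condition adgendbc.sh repairs.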

Tuesday, November 11, 2008

How to generate server_id for a new node manually

During a server move, Anand asked me this question:

How do you generate a server_id for a new server, to which the instance has moved, without running autoconfig?  The answer is the AdminAppServer utility.  As per Metalink Note 93590.1:

The AdminAppServer class will perform three functions:

- Create an encoded SERVER_ID for this machine
- Create a _.dbc file in the $FND_TOP/secure
directory that contains the SERVER_ID as well as other machine - and
database-specific information.
- Add a row in the database table FND_APPLICATION_SERVERS with the
SERVER_ID and other machine-specific information.

As per Metalink Note 150475.1:

There are 3 'AdminAppServer' commands in the adgendbc.sh script. The order in the adgendbc.sh file (from the beginning of the file) should be:

AdminAppServer DELETE
AdminAppServer ADD
AdminAppServer UPDATE

Metalink note 187403.1 says:

Adding, Updating or Deleting Server IDs

Application servers can be registered as trusted machines with
a database server. This works by generating a large random ID
number and storing that in both the application server and the
database. When the application server attempts to connect to the
database it will supply its server ID and the database will verify
that it matches a trusted machine. The server ID is stored as a
parameter in the DBC file for each application server. It can
also be defined as an environment variable. The AdminAppServer
utility is used to generate server IDs and then register them
with a database server.

To add a server ID:

Server ID values are generated by the AdminAppServer utility, and
therefore cannot be supplied on the command line. They will be
added to the database automatically when the AdminAppServer is used
to create a dbc file:

jre oracle.apps.fnd.security.AdminAppServer apps/apps \
ADD [SECURE_PATH=$FND_TOP/secure] \
DB_HOST=<host> \
DB_PORT=<port> \
DB_NAME=<sid>

See the section on Creating DBC files in Administering Oracle
Applications Security in Release 11i for more details.

However, the above command fails.  We can get the correct command if we open $COMMON_TOP/admin/install/adgendbc.sh:

$COMMON_TOP/util/jdk1.6.0_05/bin/java \
  -classpath "$COMMON_TOP/util/jdk1.6.0_05/lib/dt.jar:$COMMON_TOP/util/jdk1.6.0_05/lib/tools.jar:$COMMON_TOP/util/jdk1.6.0_05/jre/lib/rt.jar:$COMMON_TOP/util/jdk1.6.0_05/jre/lib/charsets.jar:$COMMON_TOP/java/appsborg2.zip:$COMMON_TOP/java/apps.zip:$ORACLE_HOME/forms60/java:$COMMON_TOP/java" \
  oracle.apps.fnd.security.AdminAppServer ADD \
  FNDNAM=$USERNAME \
  GWYUID=APPLSYSPUB/PUB \
  "$DBC_CONNECT_ID" \
  TWO_TASK=STAGE11I \
  GUEST_USER_PWD=GUEST/ORACLE \
  SECURE_PATH=$FND_TOP/secure \
  APPS_JDBC_DRIVER_TYPE=THIN \
  DB_PORT=1521 \
  DB_HOST=stage11i.justanexample.com \
  NODE_NAME=$HOST_NAME \
  FND_MAX_JDBC_CONNECTIONS=500 \
  FND_JDBC_STMT_CACHE_SIZE=200 \
  -nopromptmsg