Blog dedicated to Oracle Applications (E-Business Suite) Technology; covers Apps Architecture, Administration and third party bolt-ons to Apps

Wednesday, March 19, 2008

ksdpec: called for event 13740 prior to event group initialization

Anand pinged me today to ask why the following lines were appearing in alert log:

SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization

The answer is given in Metalink Note 342405.1:

"Event 13740 is an event id used when tracing ASH. It is detected when mapping the SGA during startup, so this message is only informational. There is no need to worry. There is no way
currently in the code of disabling this message"

ASH is Active Session History, which is a new feature in 10g. More details about this feature are available in Metalink note 243132.1:

Performance tuning and problem diagnosis are the two most challenging and important management tasks that any database administrator performs. In line with the primary drive of the server manageability effort, the Automatic Database Diagnostic Monitor (ADDM) attempts to make performing these two tasks a lot simpler and easier. ADDM employs an iterative top-down approach and drives a rule-based expert system to identify bottlenecks in a system and suggest relevant recommendations to tackle them.

ASH acquires the information it requires to sample the active sessions' activity from the database kernel's session state objects. The quantity of information sampled by ASH could be quite voluminous, and therefore ASH maintains a fixed-size circular buffer in the database System Global Area (SGA). The fixed-size circular buffer is allocated at database start-up time. Since the information collected by the ASH infrastructure complements the information present in SWRF snapshots, and can be used for drill-down purposes during problem diagnosis or performance tuning, the ASH data is also periodically flushed to disk.
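The "fixed-size circular buffer" here is the classic ring-buffer pattern: once the buffer fills, each new sample silently overwrites the oldest. A minimal Python illustration of the idea (the buffer capacity and samples are made up; Oracle's actual buffer is an SGA structure, not a deque):

```python
from collections import deque

# A deque with maxlen behaves like ASH's fixed-size circular buffer:
# once full, each new sample silently overwrites the oldest one.
ash_buffer = deque(maxlen=4)          # tiny capacity, for illustration only
for sample_id in range(6):            # push six samples into a 4-slot buffer
    ash_buffer.append(sample_id)
print(list(ash_buffer))               # the two oldest samples are gone
```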

The flushing and purging policies of ASH, including the way ASH respects SWRF baselines, are completely tied to SWRF policies. Still, flushing the entire contents of ASH to disk would be too voluminous to be feasible, and therefore only one out of every ten active session samples is flushed to disk.

In addition to ADDM using the ASH to achieve its objectives, the ASH contents will also be displayed in the Oracle Enterprise Manager (EM) / Mozart [OEM-SWRF] performance screen. The graph that summarizes the ASH contents in the EM performance screen will be a stacked graph showing the distribution of the elapsed database time onto various wait times and CPU time, during every minute.

The ASH infrastructure will provide a single V$ view (V$ACTIVE_SESSION_HISTORY), to view the in-memory contents of ASH. The ASH infrastructure will also provide a way to dump the in-memory contents of its circular buffer onto an external file in a human readable format. The ASH dump file can be transported to another system, imported onto a user table, and analyzed using the ADDM in that system.

ASH Memory Size
Size of ASH Circular Buffer = Max [Min [ #CPUs * 2 MB, 5% of Shared Pool Size, 30MB ], 1MB ]
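In code, the sizing rule is a simple min/max clamp. A small Python sketch (the function name is mine; the 2 MB-per-CPU, 5%, 30 MB and 1 MB figures come from the formula above):

```python
def ash_buffer_size_mb(num_cpus, shared_pool_mb):
    """Sketch of the ASH circular-buffer sizing rule:
    Max[Min[#CPUs * 2MB, 5% of shared pool, 30MB], 1MB]."""
    candidate = min(num_cpus * 2, shared_pool_mb * 0.05, 30)
    return max(candidate, 1)   # never below the 1 MB floor

# e.g. an 8-CPU box with a 400 MB shared pool:
# min(16, 20, 30) = 16 MB, comfortably above the 1 MB floor
print(ash_buffer_size_mb(8, 400))
```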

Categorising the Active Session
A sampled session is counted as active when all of the following hold:
i. Present inside a user call
ii. Not a recursive session
iii. Not waiting for the ‘IDLE’ wait-event
iv. If it is a background process, not waiting for its usual timer-event
v. If it is a parallel slave, not waiting for the PX_IDLE wait event.

So you will not see any info for a process that is waiting on "SQL*Net message from client".
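The checks above (including the SQL*Net idle wait just mentioned) amount to a simple predicate. A hypothetical Python sketch, where the dict keys and the idle-event list are invented for illustration and bear no relation to Oracle's internal session state objects:

```python
# Illustrative only: field names and the idle-event list are made up.
IDLE_EVENTS = {
    "SQL*Net message from client",  # the classic client-idle wait
    "rdbms ipc message",            # a usual background timer wait
    "PX Idle Wait",                 # parallel-slave idle wait
}

def is_active(session):
    if not session["in_user_call"]:           # i. must be inside a user call
        return False
    if session["is_recursive"]:               # ii. recursive sessions excluded
        return False
    if session["wait_event"] in IDLE_EVENTS:  # iii-v. idle/timer waits excluded
        return False
    return True
```

A session blocked on a row lock would pass all the checks and therefore show up in ASH samples, while one idling on SQL*Net would not.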

Let's analyze a case with ASH.

Let's start with ONLINE analysis.
I have opened two sessions. The first one (SID 16) is running a DML statement like:

SQL> delete test where rowid='AAAKB9AAEAAAAAiAAA';

From the second session (SID 15), run the same DML; obviously the second session will wait for the first session to commit.

Let's check the info in V$ACTIVE_SESSION_HISTORY.
Run the following script (the SELECT list is reconstructed here to match the output columns shown below):

select ash.session_id, enm.name, ash.p1, ash.p2, ash.p3, ash.wait_time,
       ash.current_obj#, ash.current_file#, ash.current_block#
from v$active_session_history ash, v$event_name enm
where ash.event#=enm.event#
and ash.session_id=&SID and ash.sample_time>=(sysdate-&minute/(24*60));

Input is

Enter value for sid: 15
Enter value for minute: 1 /* How many minutes activity you want to see */

The output is 59 lines, as the session has been waiting for more than 1 minute:

---------- ------------------------------ ---------- ---------- ---------- ---------- ------------ ------------- --------------
15 enq: TX - row lock contention 1415053318 589825 143 0 41085 4 34
15 enq: TX - row lock contention 1415053318 589825 143 0 41085 4 34
15 enq: TX - row lock contention 1415053318 589825 143 0 41085 4 34
continue .............
15 enq: TX - row lock contention 1415053318 589825 143 0 41085 4 34
15 enq: TX - row lock contention 1415053318 589825 143 0 41085 4 34
15 enq: TX - row lock contention 1415053318 589825 143 0 41085 4 34

So you have the object details along with the problem wait event.

Let's do the OFFLINE analysis of ASH.
If you are not able to find the problem online, you can dump the ASH buffer to a trace file.

The command would be like below, where the level means minutes of history. Let's dump 10 minutes of history; any of the following works:

1. SQL> alter session set events 'immediate trace name ashdump level 10';
2. SQL> alter system set events 'immediate trace name ashdump level 10';
3. SQL> oradebug setmypid
SQL> oradebug dump ashdump 10

So you will get the trace file in udump.

The trace output would look like:

2594829169,1,161390,"07-18-2003 16:05:21.098717000",13,1,0,"",65535,0,0,2,0,0,0,4294967295,0,0,2,35,100,0,0,1005855,0,"oracle@usunrat21 (MMNL)","","",""
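Each dump record is plain CSV, so before reaching for the supplied loader you can split a row with a few lines of Python. The field meanings are not documented here, so the snippet only tokenizes the sample record shown above:

```python
import csv
import io

# The ashdump record shown above, as one CSV line.
line = ('2594829169,1,161390,"07-18-2003 16:05:21.098717000",13,1,0,"",'
        '65535,0,0,2,0,0,0,4294967295,0,0,2,35,100,0,0,1005855,0,'
        '"oracle@usunrat21 (MMNL)","","",""')

# csv handles the quoted fields (the timestamp and the program name)
# correctly, which a naive line.split(",") would not if a quoted field
# ever contained a comma.
fields = next(csv.reader(io.StringIO(line)))
print(len(fields), fields[3], fields[25])
```

Fields 4 and 26 are recognisably the sample timestamp and the program name; the loader scripts below map the full record onto table columns.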

Oracle has provided a utility under $ORACLE_HOME/rdbms/demo (location may change), with which you can load the ASH trace dump into a database table and do the analysis.

Script Details
1. "ashdump_loader" -> The main script to run, with one argument: the name of the trace file.
2. "ashdump_table.sql" -> Called by the main script to create the table.
3. "ashdump_sqlldr.ctl" -> The SQL*Loader control file, called by the main script to load the trace into the table.

You must use the same Oracle version when comparing the OFFLINE trace data with the online views.


Raj J said...

Very nice article, I didn't know about this ashdump part. Will test it out.


Anonymous said...

Excellent post!
Thanks for sharing with all. Learned new things about reading the udump file.