Friday, July 13, 2007

Modifying sprepins.sql for Custom Statspack Reports III

Rows per Exec

There is a section titled "SQL ordered by Gets" in Statspack reports. This section lists the SQL statements performing the most buffer gets.

                                                     CPU      Elapsd
  Buffer Gets    Executions  Gets per Exec  %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------


One important thing missing from these columns is the number of rows processed per execution for each SQL statement. With a simple change we can add a column named "Rows per Exec" showing this information. By knowing how many rows the SQL processes per execution, I can decide whether the number of buffer gets is acceptable or not. If I have a SQL statement that processes a few rows but performs lots of buffer gets, then I need to see if something can be done about that SQL.


If the SQL involves calculating aggregates, like GROUP BY statements, this information may not be that useful; you need to do a more detailed analysis of the SQL statement. But for simple SQL statements with no aggregates it can be helpful.
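
For a quick look at the same ratios outside of Statspack, something similar can be calculated from v$sql for the statements currently cached in the shared pool. This is only a minimal sketch along the same lines, not part of sprepins.sql; it assumes you can query v$sql:

-- Minimal sketch (not part of sprepins.sql): gets per exec and rows per exec
-- for statements currently cached in the shared pool, ordered by buffer gets.
select *
  from ( select buffer_gets
              , executions
              , round(buffer_gets / executions, 1)    gets_per_exec
              , round(rows_processed / executions, 1) rows_per_exec
              , hash_value
           from v$sql
          where executions > 0
          order by buffer_gets desc )
 where rownum <= 10;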


After this change that section looks like the following:


                                                                    CPU      Elapsd
  Buffer Gets    Executions  Gets per Exec  Rows per Exec  %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- -------------- ------ -------- --------- ----------


To make this change, find this line in sprepins.sql.


-- SQL statements ordered by buffer gets


Remove the lines between that line and this line.


-- SQL statements ordered by physical reads


In between these two lines insert the following code. The additions to the original code are the "Rows per Exec" column in the heading and the lines following the comment "-- Reports rows processed per exec".

ttitle lef 'SQL ordered by Gets for ' -
'DB: ' db_name ' Instance: ' inst_name ' '-
'Snaps: ' format 999999 begin_snap ' -' format 999999 end_snap -
skip 1 -
'-> End Buffer Gets Threshold: ' ebgt -
skip 1 -
'-> Note that resources reported for PL/SQL includes the ' -
'resources used by' skip 1 -
' all SQL statements called within the PL/SQL code. As ' -
'individual SQL' skip 1 -
' statements are also reported, it is possible and valid ' -
'for the summed' skip 1 -
' total % to exceed 100' -
skip 2;

-- Bug 1313544 requires this rather bizarre SQL statement
set lines 95
set underline off;
col aa format a196 heading -
'                                                                    CPU      Elapsd|  Buffer Gets    Executions  Gets per Exec  Rows per Exec  %Total Time (s)  Time (s) Hash Value|--------------- ------------ -------------- -------------- ------ -------- --------- ----------'

column hv noprint;
break on hv skip 1;

select aa, hv
from ( select /*+ ordered use_nl (b st) */
decode( st.piece
, 0
, lpad(to_char((e.buffer_gets - nvl(b.buffer_gets,0))
,'99,999,999,999')
,15)||' '||
lpad(to_char((e.executions - nvl(b.executions,0))
,'999,999,999')
,12)||' '||
lpad((to_char(decode(e.executions - nvl(b.executions,0)
,0, to_number(null)
,(e.buffer_gets - nvl(b.buffer_gets,0)) /
(e.executions - nvl(b.executions,0)))
,'999,999,990.0'))
,14) ||' '||
-- Reports rows processed per exec (delta rows / delta executions)
lpad((to_char((e.rows_processed - nvl(b.rows_processed,0))/(e.executions - nvl(b.executions,0))
,'999,999,990.0'))
, 14)
||' '||
lpad((to_char(100*(e.buffer_gets - nvl(b.buffer_gets,0))/:gets
,'990.0'))
, 6) ||' '||
lpad( nvl(to_char( (e.cpu_time - nvl(b.cpu_time,0))/1000000
, '9990.00')
, ' '),8) || ' ' ||
lpad( nvl(to_char( (e.elapsed_time - nvl(b.elapsed_time,0))/1000000
, '99990.00')
, ' '),9) || ' ' ||
lpad(e.hash_value,10)||''||
decode(e.module,null,st.sql_text
,rpad('Module: '||e.module,95)||st.sql_text)
, st.sql_text) aa
, e.hash_value hv
from stats$sql_summary e
, stats$sql_summary b
, stats$sqltext st
where b.snap_id(+) = :bid
and b.dbid(+) = e.dbid
and b.instance_number(+) = e.instance_number
and b.hash_value(+) = e.hash_value
and b.address(+) = e.address
and b.text_subset(+) = e.text_subset
and e.snap_id = :eid
and e.dbid = :dbid
and e.instance_number = :inst_num
and e.hash_value = st.hash_value
and e.text_subset = st.text_subset
and st.piece < &&num_rows_per_hash
and e.executions > nvl(b.executions,0)
order by (e.buffer_gets - nvl(b.buffer_gets,0)) desc, e.hash_value, st.piece
)
where rownum < &&top_n_sql;



Rollback Segments

The sections named "Rollback Segment Stats" and "Rollback Segment Storage" provide information about individual rollback segments. The first one gives information like undo bytes written, wraps, extends, shrinks, and so on. The second one gives information about the average, maximum and optimum sizes of the rollback segments. I do not remember an occasion where I used these sections for databases with automatic undo management. So I remove these sections, as I do not need them with automatic undo. This change reduces my report file size from about 500KB to about 100KB. It is not a major change, but it helps me keep unused data out of the report.

To make this change, find the lines "-- Rollback segment" and "-- Undo Segment" and comment out or remove the lines between them.
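
Before removing these sections, it may be worth confirming that the instance really runs with automatic undo management. The following is just a quick sanity check, not part of sprepins.sql, assuming you can query v$parameter:

-- Sketch: confirm automatic undo management before dropping the
-- rollback segment sections from the report.
select name, value
  from v$parameter
 where name in ('undo_management', 'undo_tablespace');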


The last three posts were about modifications to sprepins.sql for custom Statspack reports. Actually there are more changes I use to customize my reports to my needs, but I think these three posts can help you get started if you want to customize sprepins.sql yourself.


3 comments:

  1. This comment has been removed by the author.

  2. Thanks for posting these. Many DBAs never even consider updating the Statspack system to suit their needs better.

    I automatically generate and archive Statspacks in case we have a question about past performance. I have posted an article on my blog called "Changing spreport to run StatsPacks from the command line…" http://www.ba6.us/?p=40 .

    -- Dave

  3. It's really amazing how those simple improvements to "out of the box" Statspack reporting make life so much easier. Thanks a lot for posting it! I'll be using it in my day-to-day job. I put a link to your blog at the top of my modified sprepins.sql script, so that due credit goes to you.

    Cheers,

    Kuba D
