Tuesday, April 10, 2007

The Read Consistency Trap in SQL Tracing

There are some important things to consider when interpreting SQL trace files. One of the most important is what the documentation calls the read consistency trap.

If you see a different number of block reads for the same data at different times when you trace a SQL statement, this can be one of the reasons. A test case showing what this means follows:


SQL> create table t as select * from all_objects;

Table created.

SQL> alter session set sql_trace=true;

Session altered.

SQL> begin
2 for t in (select * from t) loop
3 null;
4 end loop;
5 end;
6 /

PL/SQL procedure successfully completed.
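
As an aside, alter session set sql_trace=true only works for your own session. To trace another session, something like the 10g DBMS_MONITOR package can be used instead; a minimal sketch, where the SID and SERIAL# values are placeholders you would look up in V$SESSION first:

SQL> exec dbms_monitor.session_trace_enable(session_id => 141, serial_num => 3, waits => true, binds => false);

SQL> exec dbms_monitor.session_trace_disable(session_id => 141, serial_num => 3);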

After doing a full table scan from the above session, update all rows from another session and do not commit:

SQL> update t set object_name='test';

23840 rows updated.

Execute the last SQL statement from the first session again:

SQL> /

PL/SQL procedure successfully completed.


When you generate the tkprof output with aggregate=no, you can see the statistics for the two executions of the same statement separately, without aggregation.

tkprof test_ora_6273.trc test.txt aggregate=no

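Finding the right trace file under user_dump_dest can be the tedious part. One option, assuming your version supports the tracefile_identifier parameter, is to set it before enabling the trace so the file name carries a recognizable tag:

SQL> alter session set tracefile_identifier = 'read_consistency';

Session altered.

The trace file name then ends with something like _read_consistency.trc instead of only the process id.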

In test.txt you can see the two executions of the statement:

SELECT *
FROM
T


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    23841      0.82       0.71        331      23844          0       23840
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    23843      0.82       0.71        331      23844          0       23840

SELECT *
FROM
T


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    23841      1.19       1.00          0      48015          0       23840
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    23843      1.19       1.00          0      48015          0       23840

Why did the numbers in the "query" column approximately double when the number of rows and the number of fetches did not change? Because there is an active transaction on the table that updated all the rows. In order to get a read-consistent view of the table, the query has to visit undo blocks as well as table blocks.
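
You can also see this extra work in the session statistics. A minimal sketch, assuming the statistic names below exist under these names in your version; run it from the first session before and after the second execution and compare the values:

SQL> select n.name, s.value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and n.name in ('consistent gets',
  5  'data blocks consistent reads - undo records applied');

The second statistic counts the undo records applied to build read-consistent copies of the blocks; it should stay near zero for the first execution and jump for the second one.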

What this suggests is that when you trace an application matters. If the application you are trying to tune normally runs while there are transactions on the tables it accesses, then you need to trace it in that context. If you trace it when there are no transactions on its tables, you may come to the conclusion that the number of block reads is appropriate, or you may focus on a SQL statement that takes up much of the execution time when the statement you actually need to tune is another one.
