

OOW 2010 Session Stats with Confidence

Thank you very much to all those who attended my session "Stats with Confidence". Unfortunately I was delayed by the keynote running late, and with the big party coming up, I appreciate the spirit of those brave souls who stayed back. The late start didn't allow me to show the demo completely. But here are the scripts; I hope you will be able to follow them and run them on your own infrastructure.

It contains the presentation as well. Thanks for attending, and I hope you find it useful.


A Tool to Enable Stats Collection for Future Sessions for Application Profiling

The other day I was putting together my presentation for Oracle Open World on Application Profiling in RAC. I was going to describe a methodology for putting a face to an app by measuring how it behaves in a database – a sort of signature of that application. I was going to use the now-ubiquitous 10046 trace for wait events and other activities inside the database. For resource consumption such as redo generated, logical I/Os, etc., I used v$sesstat; but then I was stuck. How would I collect the stats of a session when the session has not even started and I don't know the SID? That problem led to the development of this tool, where the stats of a future session can be recorded based on identifying factors such as username, module, etc. I hope this helps in your performance management efforts.

The Problem

Suppose you want to find out the resources consumed by a session. The resources could be redo generation, CPU used, logical I/O, undo records generated; the list is endless. This is required for a lot of analyses. Consider a case where you want to find out which apps are generating the most redo; you would issue a query like this:

select sid, value
from v$sesstat s, v$statname n
where n.statistic# = s.statistic#
and n.name = 'redo size'
/


The value column will show the redo generated. From the SID you can identify the session. Your next stop is v$session to get the other relevant information such as username, module, authentication scheme, etc. Problem solved, right?

Not so fast. Look at the above query: it selects from v$sesstat, and when a session disconnects, its entries disappear from v$sesstat. If you run the query after that, you will not find the session. You would have to select from v$sesstat constantly, hoping to capture each session's stats before it disconnects, and even that is not guaranteed; some short sessions will fall between collection samples. Even if you are lucky enough to capture some stats of a short session, the other relevant information from v$session will be gone.
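For reference, the query you would be polling looks something like the sketch below. It joins to v$session for the session attributes; the columns used are standard ones in 10g/11g dictionary views.

select s.sid, e.username, e.module, e.client_identifier, s.value redo_size
from v$sesstat s, v$statname n, v$session e
where n.statistic# = s.statistic#
and s.sid = e.sid
and n.name = 'redo size'
order by s.value desc
/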

Oracle provides a package, dbms_monitor, in which a procedure named client_id_stat_enable allows you to enable stats collection on future sessions whose client_id matches a specific value, e.g. CLIENT1. Here is an example:

execute dbms_monitor.client_id_stat_enable('CLIENT1');

However there are three issues:

(1) It collects only about 27 stats, out of 400+

(2) It offers only three choices for selecting sessions – client_id, module_name and service_name.

(3) It aggregates the stats, summing all values for a specific client_id. That is pretty much useless without session-level detail (see the query below).
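For reference, the aggregated values land in the V$CLIENT_STATS view. A quick look (a sketch; the exact set of statistic names varies by version) shows both issue (1) and issue (3) at once: a short, fixed list of stats, with one aggregated value per client_id and no session detail.

select stat_name, value
from v$client_stats
where client_identifier = 'CLIENT1'
order by stat_name
/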

So, in short, I didn’t have a readily available solution.

Solution

Well, necessity is the mother of invention: when you can't find a decent tool, you build one, and so did I. I built this tool to capture the stats. This is version 1 of the tool. It has some limitations, shown at the end, but they do not apply to all situations, so the tool should be useful in a majority of cases. Later I will expand the tool to overcome these limitations.

Concept

The fundamental problem, as you recall, is not a dearth of data (v$sesstat has plenty); it is that the sessions of interest lie in the future. To capture those sessions, the tool relies on a post-logon database trigger to record the values.

The second problem was persistence. V$SESSTAT is a dynamic performance view, which means a session's records are gone once the session disappears. So the tool relies on a table to store the data.

The third problem is getting the values at the very end of the session; the difference between the values captured at the end and at the beginning of the session constitutes the session's stats. To capture the values at the very end, and not any time before, the tool relies on a pre-logoff database trigger.

The fourth challenge is the identification of sessions. The SID of a session is not unique; it can be reused for a new session, and it will definitely be reused when the database is restarted. So the tool uses a column named CAPTURE_ID, a sequentially incremented number for each capture. Since we capture once at the beginning of a session and once at the end, the same capture_id must be used for both, so I store it in a package variable.

Finally, the tool allows you to enable stats collection based on session attributes such as username, client_id, module, service_name, etc. For instance, you may want to enable stats for any session where the username is 'SCOTT', or where the os_user is 'ananda'. These preferences are stored in a table reserved for that purpose.

Construction

Now that you understand how the tool is structured, let me show the actual code and scripts to create the tool.

(1) First, we create the table that holds the preferences. Let's call it RECSTATS_ENABLED. This table holds the various session attributes (IP address, username, module, etc.) that can enable stats collection in a session.

CREATE TABLE SYS.RECSTATS_ENABLED
(
SESSION_ATTRIBUTE VARCHAR2(200 BYTE),
ATTRIBUTE_VALUE VARCHAR2(2000 BYTE)
)
/

If you want to enable stats collection for sessions based on a session attribute, insert a record into this table with the attribute and its value. Here are some examples. Say I want to collect stats on all sessions where client_info matches 'MY_CLIENT_INFO1'; I would insert a record like this:

insert into recstats_enabled values ('CLIENT_INFO','MY_CLIENT_INFO1');

Here are some more examples. All sessions where ACTION is ‘MY_ACTION1’:

insert into recstats_enabled values ('ACTION','MY_ACTION1');

Those of user SCOTT:

insert into recstats_enabled values ('SESSION_USER','SCOTT');

Those with service name APP:

insert into recstats_enabled values ('SERVICE_NAME','APP');

You can insert as many preferences as you want. You can even insert multiple values for a specific attribute. For instance, to enable stats on sessions with service names APP1 and APP2, insert two records.
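For example:

insert into recstats_enabled values ('SERVICE_NAME','APP1');
insert into recstats_enabled values ('SERVICE_NAME','APP2');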

Important: the session attribute names follow the naming convention of the USERENV context used in the SYS_CONTEXT function.
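If you are unsure what value a session presents for an attribute, you can query the context directly; for example, for the service name:

select sys_context('USERENV','SERVICE_NAME') from dual;

Whatever this returns for a session is the value to store in RECSTATS_ENABLED.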

(2) Next, we create a table to hold the statistics:

CREATE TABLE SYS.RECSTATS
(
CAPTURE_ID NUMBER,
CAPTURE_POINT VARCHAR2(10 BYTE),
SID NUMBER,
SERIAL# NUMBER,
ACTION VARCHAR2(2000 BYTE),
CLIENT_IDENTIFIER VARCHAR2(2000 BYTE),
CLIENT_INFO VARCHAR2(2000 BYTE),
CURRENT_EDITION_NAME VARCHAR2(2000 BYTE),
CURRENT_SCHEMA VARCHAR2(2000 BYTE),
CURRENT_USER VARCHAR2(2000 BYTE),
DATABASE_ROLE VARCHAR2(2000 BYTE),
HOST VARCHAR2(2000 BYTE),
IDENTIFICATION_TYPE VARCHAR2(2000 BYTE),
IP_ADDRESS VARCHAR2(2000 BYTE),
ISDBA VARCHAR2(2000 BYTE),
MODULE VARCHAR2(2000 BYTE),
OS_USER VARCHAR2(2000 BYTE),
SERVICE_NAME VARCHAR2(2000 BYTE),
SESSION_USER VARCHAR2(2000 BYTE),
TERMINAL VARCHAR2(2000 BYTE),
STATISTIC_NAME VARCHAR2(2000 BYTE),
STATISTIC_VALUE NUMBER
)
TABLESPACE USERS
/


Note that I used the tablespace USERS, because I don't want this table, which can potentially grow to a huge size, to overwhelm the SYSTEM tablespace. The STATISTIC_NAME and STATISTIC_VALUE columns record the stats collected; the other columns record the relevant session attributes. All the attributes are shown as VARCHAR2(2000) for simplicity; of course they don't need that much space. In a future version I will put more meaningful limits on them, but 2000 does not hurt, since a VARCHAR2 column consumes only the space it actually uses.

The capture point shows when the values were captured: at the START or the END of the session.

(3) We also need a sequence to identify the captures. Each session will have 400+ stats, one set at the beginning and another at the end. We could choose the SID as an identifier, but SIDs can be reused; we need something truly unique: a sequence number. This is recorded in the CAPTURE_ID column of the stats table.

SQL> create sequence seq_recstats;

(4) To store the capture ID when the post-logon trigger fires, so that it can be used later inside the pre-logoff trigger, we need a variable visible to the entire session. A package variable is best for that:

create or replace package pkg_recstats
is
g_recstats_id number;
end;
/

(5) Finally, we come to the meat of the tool: the triggers. First, the post-logon trigger to capture the stats at the beginning of the session:

CREATE OR REPLACE TRIGGER SYS.tr_post_logon_recstats
after logon on database
declare
l_stmt varchar2(32000);
l_attr_val recstats_enabled.attribute_value%TYPE;
l_capture_point recstats.capture_point%type := 'START';
l_matched boolean := FALSE;
begin
pkg_recstats.g_recstats_id := null;
for r in (
select session_attribute, attribute_value
from recstats_enabled
order by session_attribute
)
loop
exit when l_matched;
-- we select from userenv; but null values may cause
-- problems in matching, so let's use an NVL default
-- that will never occur as a real value: !_!_!_!
l_stmt := 'select nvl(sys_context(''USERENV'','''||
r.session_attribute||'''),''!_!_!_!'') from dual';
execute immediate l_stmt into l_attr_val;
if l_attr_val = r.attribute_value then
-- match; we should record the stats
-- and exit the loop; since stats should
-- be recorded only for one match.
l_matched := TRUE;
select seq_recstats.nextval
into pkg_recstats.g_recstats_id
from dual;
insert into recstats
select
pkg_recstats.g_recstats_id,
l_capture_point,
sys_context('USERENV','SID'),
null,
sys_context('USERENV','ACTION'),
sys_context('USERENV','CLIENT_IDENTIFIER'),
sys_context('USERENV','CLIENT_INFO'),
sys_context('USERENV','CURRENT_EDITION_NAME'),
sys_context('USERENV','CURRENT_SCHEMA'),
sys_context('USERENV','CURRENT_USER'),
sys_context('USERENV','DATABASE_ROLE'),
sys_context('USERENV','HOST'),
sys_context('USERENV','IDENTIFICATION_TYPE'),
sys_context('USERENV','IP_ADDRESS'),
sys_context('USERENV','ISDBA'),
sys_context('USERENV','MODULE'),
sys_context('USERENV','OS_USER'),
sys_context('USERENV','SERVICE_NAME'),
sys_context('USERENV','SESSION_USER'),
sys_context('USERENV','TERMINAL'),
n.name,
s.value
from v$mystat s, v$statname n
where s.statistic# = n.statistic#;
end if;
end loop;
end;
/

The code is mostly self-explanatory; I have added comments where more explanation is needed.

(6) Next, the pre-logoff trigger to capture the stats at the end of the session:

CREATE OR REPLACE TRIGGER SYS.tr_pre_logoff_recstats
before logoff on database
declare
l_capture_point recstats.capture_point%type := 'END';
begin
if (pkg_recstats.g_recstats_id is not null) then
insert into recstats
select
pkg_recstats.g_recstats_id,
l_capture_point,
sys_context('USERENV','SID'),
null,
sys_context('USERENV','ACTION'),
sys_context('USERENV','CLIENT_IDENTIFIER'),
sys_context('USERENV','CLIENT_INFO'),
sys_context('USERENV','CURRENT_EDITION_NAME'),
sys_context('USERENV','CURRENT_SCHEMA'),
sys_context('USERENV','CURRENT_USER'),
sys_context('USERENV','DATABASE_ROLE'),
sys_context('USERENV','HOST'),
sys_context('USERENV','IDENTIFICATION_TYPE'),
sys_context('USERENV','IP_ADDRESS'),
sys_context('USERENV','ISDBA'),
sys_context('USERENV','MODULE'),
sys_context('USERENV','OS_USER'),
sys_context('USERENV','SERVICE_NAME'),
sys_context('USERENV','SESSION_USER'),
sys_context('USERENV','TERMINAL'),
n.name,
s.value
from v$mystat s, v$statname n
where s.statistic# = n.statistic#;
commit;
end if;
end;
/

Again, the code is self-explanatory. We capture the stats only if the global capture ID has been set by the post-logon trigger. Without that check, every session would record stats at completion, whether or not it matched a preference.

Execution

Now that the setup is complete, let’s perform a test by connecting as a user with the service name APP:

SQL> connect arup/arup@app

In this session, perform some actions that will generate a lot of activity. The following SQL will do nicely:

SQL> create table t as select * from all_objects;


SQL> exit

Now check the RECSTATS table to see the stats for this capture_id, which happens to be 1330:

col name format a60
col value format 999,999,999
select a.statistic_name name, b.statistic_value - a.statistic_value value
from recstats a, recstats b
where a.capture_id = 1330
and a.capture_id = b.capture_id
and a.statistic_name = b.statistic_name
and a.capture_point = 'START'
and b.capture_point = 'END'
and (b.statistic_value - a.statistic_value) != 0
order by 2
/

Here is the output:

NAME                                                                VALUE
------------------------------------------------------------ ------------
workarea memory allocated -2
change write time 1
parse time cpu 1
table scans (long tables) 1
cursor authentications 1
sorts (memory) 1
user commits 2
opened cursors current 2
IMU Flushes 2
index scans kdiixs1 2
parse count (hard) 2
workarea executions - optimal 2
redo synch writes 2
redo synch time 3
rows fetched via callback 5
table fetch by rowid 5
parse time elapsed 5
recursive cpu usage 8
switch current to new buffer 10
cluster key scan block gets 10
cluster key scans 10
deferred (CURRENT) block cleanout applications 10
Heap Segment Array Updates 10
table scans (short tables) 12
messages sent 13
index fetch by key 15
physical read total multi block requests 15
SQL*Net roundtrips to/from client 18
session cursor cache hits 19
session cursor cache count 19
user calls 25
CPU used by this session 28
CPU used when call started 29
buffer is not pinned count 33
execute count 34
parse count (total) 35
opened cursors cumulative 36
physical read total IO requests 39
physical read IO requests 39
calls to get snapshot scn: kcmgss 45
non-idle wait count 67
user I/O wait time 116
non-idle wait time 120
redo ordering marks 120
calls to kcmgas 143
enqueue releases 144
enqueue requests 144
DB time 149
hot buffers moved to head of LRU 270
recursive calls 349
active txn count during cleanout 842
cleanout - number of ktugct calls 842
consistent gets - examination 879
IMU undo allocation size 968
physical reads cache prefetch 997
physical reads 1,036
physical reads cache 1,036
table scan blocks gotten 1,048
commit cleanouts 1,048
commit cleanouts successfully completed 1,048
no work - consistent read gets 1,060
redo subscn max counts 1,124
Heap Segment Array Inserts 1,905
calls to kcmgcs 2,149
consistent gets from cache (fastpath) 2,153
free buffer requested 2,182
free buffer inspected 2,244
HSC Heap Segment Block Changes 2,519
db block gets from cache (fastpath) 2,522
consistent gets 3,067
consistent gets from cache 3,067
bytes received via SQL*Net from client 3,284
bytes sent via SQL*Net to client 5,589
redo entries 6,448
db block changes 9,150
db block gets 10,194
db block gets from cache 10,194
session logical reads 13,261
IMU Redo allocation size 16,076
table scan rows gotten 72,291
session uga memory 88,264
session pga memory 131,072
session uga memory max 168,956
undo change vector size 318,640
session pga memory max 589,824
physical read total bytes 8,486,912
cell physical IO interconnect bytes 8,486,912
physical read bytes 8,486,912
redo size 8,677,104


This clearly shows all the stats of that session. Of course, the table recorded all the other details of the session as well, such as username, client_id, etc., which are useful later for more detailed analysis. You can now perform aggregations as well. Here is an example of aggregating the stats collected for redo size:

select session_user, sum(STATISTIC_VALUE) STVAL
from recstats
where STATISTIC_NAME = 'redo size'
group by session_user
/

Output:

SESSION_USER STVAL
------------ ---------
ARUP 278616
APEX 4589343
… and so on …

You can also break the aggregates down by several attributes. Here is an example where you want to find the redo generated by different users coming from different client machines:

select session_user, host, sum(STATISTIC_VALUE) stval
from recstats
where STATISTIC_NAME = 'redo size'
group by session_user, host
/

Output:

SESSION_USER HOST STVAL
------------ ----------- -------
ARUP oradba2 12356
ARUP oradba1 264567
APEX oradba2 34567
… and so on …


Granularity like this shows how the application behaved from different client servers, not just per username.



Limitations

As I mentioned, there are some limitations you should be aware of. I will address them in the next iterations of the tool. They are not serious and apply only in certain cases; as long as you don't encounter those cases, you should be fine.

(1) The logoff trigger does not fire when the user exits the session ungracefully, for example by closing the SQL*Plus window or killing the program before exiting. In such cases the stats at the end of the session are not recorded. In most application infrastructures this does not happen, but it can happen for ad hoc user sessions, such as people connecting through TOAD.

(2) Session attributes such as module, client_id and action can be altered within the session (see the sketch after this list). If that happens, this tool does not record the change, since there is no triggering event; the logoff trigger records the module, action and client_id in effect at that moment. These attributes are not usually changed in application code, so this may not apply to your case.

(3) Parallel query sessions pose a special issue, since no logoff trigger fires for them; so in the case of parallel queries, you will not see any differential stats. If you don't use PQ, as is true of most OLTP applications, you will not be affected.

(4) If a session just sits there without disconnecting, the logoff trigger never fires and the end-of-session stats are never captured. Of course, they will eventually be captured when the session exits.
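As an aside on limitation (2): applications typically change these attributes mid-session through calls like the ones sketched below (the module, action and client identifier values are hypothetical); none of these calls fires a database trigger, which is why the tool cannot see the change.

begin
   -- change module and action mid-session; no trigger fires here
   dbms_application_info.set_module(module_name => 'PAYROLL', action_name => 'CALC');
   -- change the client identifier; again, no triggering event
   dbms_session.set_identifier('CLIENT1');
end;
/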

Once again, these limitations apply only in certain situations. As long as you are aware of these caveats, you will be able to use this tool to profile many of your applications.

Happy Profiling!

Query transformation – Part 1

Query transformation is a set of techniques used by the optimizer to rewrite a query so it can be optimized better. New optimization paths open up to the optimizer after query transformation. Some query transformations must be costed to be chosen and some need not be. For example, if a table can be eliminated completely from a join, that transformation is applied outright, and the need to cost it is minimal.

Test case

We will use the following test case to illustrate the concepts behind query transformation. Some of the optimizations shown here work from version 11gR1 onwards, so these test cases might not work in 10g and below.


create table backup.t1 (n1 number not null primary key, n2 number not null, n3 varchar2(256) );
create table backup.t2 (n1 number not null primary key, n2 number not null, n3 varchar2(256) );
alter table backup.t1 add constraint t1_fk foreign key (n2) references backup.t2(n1);
insert into backup.t2 select n1, n1, lpad ( n1, 250,'x') from (select level n1 from dual connect by level <=100);
insert into backup.t1 select n1, n1, lpad ( n1, 250,'x') from (select level n1 from dual connect by level <=100);

Join elimination (JE)

JE is a technique in which one or more tables can be eliminated from the execution plan without altering functional behavior. In Listing 1-1, the query selects columns from table t1 only, but there is a join predicate between t1 and t2. No columns are selected from t2; the join to t2 serves only to verify the existence of the foreign key values. The enabled foreign key constraint between the two tables establishes that existence already, so there is no need for an explicit existence check in the query, and the join to t2 can safely be eliminated by the optimizer.

select /*+ gather_plan_statistics */ t1.* from t1, t2 where t1.n2 = t2.n1;
select *  from table(dbms_xplan.display_cursor('','','allstats last'))
------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------
|   1 |  TABLE ACCESS FULL| T1   |      1 |     82 |    100 |00:00:00.01 |      13 |
-----------------------------------------------------------------------------------

Listing 1-1 :JE example case 1

As you can see in Listing 1-1, table T2 is removed from the execution plan: since there is a valid foreign key constraint, the optimizer eliminated the join to table t2.

Let's discuss another join elimination test case. In Listing 1-2, the predicate is "t1.n2 not in (select t2.n1 from t2)". The enabled foreign key constraint dictates that this predicate will always be false, so no rows can be returned. The optimizer promptly identified this condition and added a filter predicate in step 1 with "NULL IS NOT NULL". Step 1 is evaluated before step 2, and step 2 is never executed; the value of the Starts column for it is zero in the execution plan.

SQL_ID  d09kmzum9wgta, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ t1.* from t1 where t1.n2 not in
(select t2.n1 from t2 )

Plan hash value: 3332582666
---------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      0 |00:00:00.01 |
|*  1 |  FILTER            |      |      1 |        |      0 |00:00:00.01 |
|   2 |   TABLE ACCESS FULL| T1   |      0 |    100 |      0 |00:00:00.01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(NULL IS NOT NULL)

Listing 1-2 :JE example case 2

Listing 1-3 provides another variation of JE.

select /*+ gather_plan_statistics */ t1.* from t1 where t1.n2 in
(select t2.n1 from t2 )

Plan hash value: 3617692013
------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |    100 |00:00:00.01 |      14 |
|   1 |  TABLE ACCESS FULL| T1   |      1 |    100 |    100 |00:00:00.01 |      14 |
------------------------------------------------------------------------------------
Listing 1-3: JE example case 3

The following output shows the relevant lines from the 10053 trace file:
JE: Considering Join Elimination on query block SEL$5DA710D3 (#1)
*************************
Join Elimination (JE)
*************************
JE: cfro: T1 objn:74684 col#:2 dfro:T2 dcol#:2
JE: cfro: T1 objn:74684 col#:2 dfro:T2 dcol#:2
Query block (0E0D43D0) before join elimination:
SQL:******* UNPARSED QUERY IS *******
SELECT "T1"."N1" "N1","T1"."N2" "N2","T1"."N3" "N3" FROM "CBO3"."T2" "T2","CBO3"."T1" "T1" WHERE "T1"."N2"="T2"."N1"
JE: eliminate table: T2
Registered qb: SEL$14EF7918 0xe0d43d0 (JOIN REMOVED FROM QUERY BLOCK SEL$5DA710D3; SEL$5DA710D3; "T2"@"SEL$2")

Filter Predicate(s) Generation from constraints

Various filter predicates are generated and added to the execution plan using enabled and validated constraints (check, not null constraints).

In Listing 1-4, columns n1 and n2 have enabled and validated NOT NULL constraints, which preclude null values in those columns. The query in Listing 1-4 has the predicate "n1 is null or n2 is null", which can never be true. The optimizer uses this fact to improve the execution plan: the always-false filter predicate (NULL IS NOT NULL) is added in step 1, so step 2 is never executed (its Starts column shows 0) and table T1 is never accessed.

select /*+ gather_plan_statistics */ * from t1 where n1 is null or n2 is null;
select *  from table(dbms_xplan.display_cursor('','','allstats last'));
---------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   |
---------------------------------------------------------------------------
|*  1 |  FILTER            |      |      1 |        |      0 |00:00:00.01 |
|   2 |   TABLE ACCESS FULL| T1   |      0 |    100 |      0 |00:00:00.01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(NULL IS NOT NULL)

Listing 1-4: Filter predicate generation from NOT NULL constraint.

Let's add a check constraint to this column to explore this further. In Listing 1-5 a check constraint is added specifying that n1 < 200. The query predicate n1 > 200 and the constraint-generated predicate n1 < 200 nullify each other, leading to an always FALSE condition. The optimizer identified this condition and added the filter predicate NULL IS NOT NULL in step 1.


alter table t1 add constraint t1_n1_lt_150 check (n1 < 200);
select /*+ gather_plan_statistics */ * from t1 where n1 > 200;
select *  from table(dbms_xplan.display_cursor('','','allstats last'));
---------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   |
---------------------------------------------------------------------------
|*  1 |  FILTER            |      |      1 |        |      0 |00:00:00.01 |
|*  2 |   TABLE ACCESS FULL| T1   |      0 |     20 |      0 |00:00:00.01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(NULL IS NOT NULL)
   2 - filter("N1">200)

Listing 1-5:Filter predicate generation from a check constraint

The following lines from the trace file generated by event 10053 show that the predicate n1 < 200 was added; this auto-generated predicate and the existing predicate cancel each other, leading to an always FALSE condition.

kkogcp: try to generate transitive predicate from check constraints for SEL$5DA710D3 (#0)
constraint: "T1"."N1"<200
predicates with check contraints: "T1"."N2"="T2"."N1" AND "T1"."N1"<200
after transitive predicate generation: "T1"."N2"="T2"."N1" AND "T1"."N1"<200
finally: "T1"."N2"="T2"."N1"
apadrv-start: call(in-use=1056, alloc=16344), compile(in-use=44792, alloc=46272)
kkoqbc-start

SJC: Set to Join Conversion

In some cases, the optimizer can convert a set operator to a join operator. Interestingly, this feature is not enabled by default (up to 11gR1). In Listing 1-6, we enable it by setting the underscore parameter, and a MINUS set operation is converted to a join operation.

alter session set "_convert_set_to_join"=true;
select /*+ gather_plan_statistics  */ n2 from t1 minus select n1 from t2
Plan hash value: 3050591313
------------------------------------------------------------------------------------...
| Id  | Operation           | Name         | Starts | E-Rows | A-Rows |   A-Time   |...
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |              |      1 |        |      0 |00:00:00.01 |...
|   1 |  HASH UNIQUE        |              |      1 |     99 |      0 |00:00:00.01 |
|   2 |   NESTED LOOPS ANTI |              |      1 |     99 |      0 |00:00:00.01 |
|   3 |    TABLE ACCESS FULL| T1           |      1 |    100 |    100 |00:00:00.01 |
|*  4 |    INDEX UNIQUE SCAN| SYS_C0010995 |    100 |      1 |    100 |00:00:00.01 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("N2"="N1")

Listing 1-6: SJC

There is also a new hint, set_to_join, associated with this feature:


/*+
  ...
      OPT_PARAM('_convert_set_to_join' 'true')
  ...
      SET_TO_JOIN(@"SET$1")
  ...
  */

SU: Subquery Unnesting

Subqueries can be unnested into joins. Listing 1-7 shows a subquery unnested into a view, which is then joined to the other row sources. In this listing, a correlated subquery is moved into the view VW_SQ_1, unnested and then joined using the nested loops join technique. Many different variations of subquery unnesting are possible, but the crux of the matter is that subqueries can be unnested, joined and then costed.

 select /*+ gather_plan_statistics  */ n1 from t1 where n1 >
	(select max(n2) from t2 where t2.n1 = t1.n1)
Plan hash value: 2311753844
-----------------------------------------------------------------------------------...
| Id  | Operation                       | Name         | Starts | E-Rows | A-Rows |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |              |      1 |        |      0 |
|   1 |  NESTED LOOPS                   |              |      1 |      1 |      0 |
|   2 |   VIEW                          | VW_SQ_1      |      1 |      5 |    100 |
|*  3 |    FILTER                       |              |      1 |        |    100 |
|   4 |     HASH GROUP BY               |              |      1 |      5 |    100 |
|   5 |      TABLE ACCESS BY INDEX ROWID| T2           |      1 |    100 |    100 |
|*  6 |       INDEX RANGE SCAN          | SYS_C0010995 |      1 |    100 |    100 |
|*  7 |   INDEX UNIQUE SCAN             | SYS_C0010992 |    100 |      1 |      0 |
-----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(MAX("N2")<200)
   6 - access("T2"."N1"<200)
   7 - access("T1"."N1">"MAX(N2)")

Listing 1-7: SU example

SU is one reason why many performance issues appear after a database upgrade to 10g and above. The cost of the unnested subquery can go up or down, leading to the unfortunate choice of a not-so-optimal execution plan.
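If a specific query regresses because of unnesting, one workaround (a sketch; verify against your own plans) is to disable SU for just that subquery with the NO_UNNEST hint placed inside the subquery:

select n1 from t1 where n1 >
       (select /*+ NO_UNNEST */ max(n2) from t2 where t2.n1 = t1.n1);

This forces the subquery to be evaluated as a filter, as in pre-unnesting releases, so use it only after confirming that the unnested plan really is the problem.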

Use of the ORDERED hint can really play havoc with the SU feature too. For example, in Listing 1-8, the hint asks for the join between t1 and t2 first, followed by the other joins, so you would expect T1 to be the leading table in the join; but the leading row source is VW_SQ_1.

select /*+ gather_plan_statistics ORDERED  */ t1.n1, t2.n1 from t1 , t2
where t1.n1 = t2.n1 and t1.n1 > (select max(n2) from t2 where t2.n1 =t1.n1)
Plan hash value: 3904485247
------------------------------------------------------------------------------------...
| Id  | Operation                        | Name         | Starts | E-Rows | A-Rows |
------------------------------------------------------------------------------------...
|   0 | SELECT STATEMENT                 |              |      1 |        |      0 |...
|   1 |  NESTED LOOPS                    |              |      1 |      1 |      0 |
|   2 |   NESTED LOOPS                   |              |      1 |      1 |      0 |
|   3 |    VIEW                          | VW_SQ_1      |      1 |      5 |    100 |
|*  4 |     FILTER                       |              |      1 |        |    100 |
|   5 |      HASH GROUP BY               |              |      1 |      5 |    100 |
|   6 |       TABLE ACCESS BY INDEX ROWID| T2           |      1 |    100 |    100 |
|*  7 |        INDEX RANGE SCAN          | SYS_C0010995 |      1 |    100 |    100 |
|*  8 |    INDEX UNIQUE SCAN             | SYS_C0010992 |    100 |      1 |      0 |
|*  9 |   INDEX UNIQUE SCAN              | SYS_C0010995 |      0 |      1 |      0 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(MAX("N2")<200)
   7 - access("T2"."N1"<200)
   8 - access("T1"."N1">"MAX(N2)")
   9 - access("T1"."N1"="T2"."N1")
       filter("T2"."N1"<200)

Is the CBO not honoring our hint? It is. The ORDERED hint was applied after the SU transformation, so the unnested view ends up as the leading row source. A variation of the transformed query from the 10053 trace file is printed below; with the ORDERED hint, the CBO must choose the unnested view as the leading row source. Use the LEADING hint instead of the ORDERED hint if necessary.

SELECT /*+ ORDERED */ "T1"."N1" "N1","T2"."N1" "N1" FROM
(SELECT MAX("T2"."N2") "MAX(N2)","T2"."N1" "ITEM_1" FROM "CBO3"."T2" "T2" GROUP BY "T2"."N1") "VW_SQ_2",
"CBO3"."T1" "T1","CBO3"."T2" "T2" WHERE "T1"."N1"="T2"."N1" AND "T1"."N1">"VW_SQ_2"."MAX(N2)" AND "VW_SQ_2"."ITEM_1"="T1"."N1"
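For comparison, a LEADING version of the statement might look like the sketch below. Unlike ORDERED, LEADING names the desired row sources explicitly, so the requested join order is expressed in terms of your aliases rather than the post-transformation FROM clause:

select /*+ gather_plan_statistics LEADING(t1 t2) */ t1.n1, t2.n1 from t1, t2
where t1.n1 = t2.n1 and t1.n1 > (select max(n2) from t2 where t2.n1 = t1.n1);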

Summary

There are too many techniques to cover in one blog entry. We will discuss these features further in upcoming blogs. This post can also be read in a more conventional format: query_transformation_part_1_orainternals


RAC object remastering ( Dynamic remastering )

In RAC, every data block is mastered by an instance. Mastering a block simply means that the master instance keeps track of the state of the block until the next reconfiguration event (due to instance restart or otherwise).

Hash to the master

These data blocks are mastered in block ranges. For example, the range of blocks starting from file 10, block 1 through block 128 may be mastered by instance 1, blocks from file 10, block 129 through block 256 by instance 2, and so on. Of course, there are differences between versions 10g, 11g, etc., but the idea is that block ranges are uniformly mastered among the instances so that global cache grants are evenly distributed. Interestingly, the length of the block range is 128 from 10g onwards (Julian Dyke mentioned that it is 1089 in 9i, but I have not personally tested that). Incidentally, Support recommends leaving db_file_multiblock_read_count unset, in which case it is auto-adjusted to 128, meaning a full block range can be read with fewer GC messages, I suppose. I digress.

Further, Michael Möller pointed out that this hash algorithm is further optimized: the hash algorithm used when initially computing the master node from the DBA results in a "virtual master", which is then translated to a real (online and open) master by a lookup table, whose length is the maximum number of possible nodes (128?). This means that when one node goes offline or online, RAC does NOT have to recalculate the hash for all blocks, but only distribute the new hash-to-node table. (One can later visualize dynamic remastering as an additional lookup table between the hash value and the node; this table also needs redistributing on node changes.)

The following SQL is helpful in showing the masters and owners of blocks. It joins x$kjbl with x$le to retrieve the resource name. If you are familiar with Oracle's locking strategy, you will probably recognize the format of these cache fusion (formerly PCM) locks: the lock type here is BL, id1 is the block# and id2 is the file_id. The column kjblname2 provides the lock resource in decimal format.

Please observe the output below:

  1. Block range: file 1, blocks 6360-6376 are mastered by node 3 and also owned by node 3.
  2. Block range: file 1, blocks up to 10709 are mastered by instance 0 and owned by instance 3.
  3. The next block range, from 10770, is mastered by instance 2 and owned by instance 3.

Also, note that this output is generated from a database with no remastering done yet.

REM In kjblname2, the first entry before ',' is the block# and the second entry is file_id*65536 for BL locks

select kj.*, le.le_Addr from (
select kjblname, kjblname2, kjblowner, kjblmaster, kjbllockp,
substr ( kjblname2,  instr(kjblname2,',')+1,   instr(kjblname2,',',1,2)-instr(kjblname2,',',1,1)-1)/65536 fl,
substr ( kjblname2, 1, instr(kjblname2,',')-1) blk
 from x$kjbl
) kj, x$le le
where le.le_kjbl = kj.kjbllockp
 order by le.le_addr
/

KJBLNAME               KJBLNAME2       KJBLOWNER KJBLMASTER   FL    BLK LE_ADDR
---------------------- --------------- ---------- ----------- --- ----- ----------------
[0x18d8][0x10000],[BL] 6360,65536,BL           3          3     1  6360 000000038DF9AB08
...
[0x18e7][0x10000],[BL] 6375,65536,BL           3          3     1  6375 000000038DFBF818 => case #2
[0x18e8][0x10000],[BL] 6376,65536,BL           3          3     1  6376 000000038DFD3BA0
...
[0x29d1][0x10000],[BL] 10705,65536,BL          3          0     1 10705 00000005D6FE9230
...
[0x29d5][0x10000],[BL] 10709,65536,BL          3          0     1 10709 000000038EFBB990
[0x2a12][0x10000],[BL] 10770,65536,BL          3          2     1 10770 000000075FFE3C18
...
[0x2a18][0x10000],[BL] 10776,65536,BL          2          2     1 10776 000000038EFA8488 => case #1
[0x2a19][0x10000],[BL] 10777,65536,BL          3          2     1 10777 000000038EFB7F90
[0x2a1a][0x10000],[BL] 10778,65536,BL          1          2     1 10778 000000038EFCC318

Let's consider three simple cases of a SELECT statement running on instance 3:

  1. A session is trying to access file 1, block 10776, but that block is mastered by instance 2 and also owned by instance 2 (meaning it is in instance 2's cache). So, instance 3 will send a PR (Protected Read) mode BL lock request on that block to instance 2. Ignoring additional complexities, instance 2 will grant a PR mode lock to instance 3 and transfer the block. Obviously, this involves multiple GC messages, grants and a block transfer; the statistic 'gc remote grants' is incremented too.
  2. Let's consider a session trying to access another block: file 1, block 6375. That block is mastered by instance 3 and also owned by instance 3. At this point, no additional GCS/GES processing is needed; the session pins that buffer and continues its work.
  3. Let's consider a third case. A session is trying to access file 1, block 6374. That block is not in any buffer cache, but instance 3 is the master of the block, so local affinity locks are acquired with minimal GC messaging and waits. The block is then read from disk into the buffer cache.

In cases #2 and #3 above, the requesting instance is also the master node of the block or block range. In these cases, the statistic 'gc local grants' is incremented, and cheaper local affinity locks on those block ranges are acquired, avoiding many global cache messages.
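You can check the system-wide balance between the two grant types with a quick query (a sanity-check sketch; both statistic names appear in 10g and 11g):

select name, value
from v$sysstat
where name in ('gc local grants', 'gc remote grants')
/

A heavy skew toward remote grants on an instance that dominates access to certain objects is exactly the situation dynamic remastering is meant to help.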

So far so good. But what if one instance, say instance 1, is reading an object (table, index, etc.) aggressively, while the other instances are not reading it at all (through some sort of application node partitioning, or just the plain workload)? Does it make sense for the instance accessing that object aggressively to request a grant from the remote instance(s) for every OPEN on that object's blocks? Especially if the blocks are read into the buffer cache only to disappear from it soon after? Wouldn't it be better if the instance reading that object aggressively were also the master of that object, as in cases #2 and #3 above?

In addition, if a block must be thrown out of the buffer cache (a close of the BL lock), or if the block needs to be written, that involves additional overhead and messaging between the master instance and the owner instance, since the change of ownership needs to be communicated back to the master of the block.

Enter Object remastering.

Object Remastering

There are many new features in the 10g/11g RAC stack. One of them is the object remastering feature. It was implemented in 10gR1, improved in 10gR2 and further enhanced in 11g. (I realize the underlying parameters exist in 9i also, but I don't think the feature worked as intended there.)

With the object remastering feature, if an object is accessed aggressively by one instance, that instance becomes the master of the object, reducing gc remote grants and improving application performance. I used the word "accessed" in the previous sentence, but it is a loose term; more precisely, if an instance requests many BL locks on an object, that object can be remastered. In an ideal world, even if the application is not partitioned, remastering the objects accessed aggressively from one instance means the cheaper local-instance affinity locks are acquired, and the effective RAC tax will be minimal.

Well, I said, in an ideal world :-) There are few issues here:

  1. Instances do not remember prior mastership across restarts. This means that an instance needs to re-learn the object mastership map after every restart. I can see the complexities of remembering mastership, but it should be possible to implement.
  2. Remastering is not exactly cheap. The instance GRD is frozen during reconfiguration, and in a very busy instance this can take many seconds, freezing the instance for that long. While 10gR2 introduced parallel reconfiguration (the _rcfg_parallel_Replay parameter controls this behavior) using all LMS processes to complete the reconfiguration, several seconds of freeze is still not acceptable in many environments.
  3. I advise my clients to keep the LMS process count low (3 to 5 at most), but reducing the number of LMS processes also reduces the effective parallelism of instance reconfiguration.
  4. Last, but not least: the default values of the few parameters that trigger remastering events are quite low for busy environments, causing frequent remastering of objects. In an E-Business Suite environment, minor mismanagement in the manager configuration can lead to massive reconfiguration issues.

Parameters, views and internals

A few parameters control this behavior. They are not well documented, and my test case results are not entirely conclusive either, but these parameters give us a picture of what is going on internally. They apply to 10gR2 and below; from 11g onwards a whole different set of parameters comes into play, and I will blog about the differences in another entry.

X$object_affinity_statistics maintains statistics about objects and the OPENs on those objects. It is important to understand the difference between an OPEN and a buffer access. If the block is already in the cache in a suitable mode, there is no need for a BL open on that block; so, if sessions access the same block repeatedly without requesting additional BL locks, the count is not incremented. In short, OPENs is simply the number of BL lock requests initiated in an ephemeral time frame.
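A quick helper to spot the busiest objects per node from that view (run as SYS; the column names match the x$ output shown later in this post):

select object, node, sum(opens) total_opens
from x$object_affinity_statistics
group by object, node
order by total_opens desc
/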

The LCK0 process maintains these object affinity statistics. If an instance opens 50 more OPENs on an object than the other instance (controlled by the _gc_affinity_limit parameter), that object becomes a candidate for remastering. The object is queued; LMD0 reads the queue and initiates a GRD freeze; LMON performs the reconfiguration of buffer cache locks, working with the LMS processes. All of this is visible in the LMD0/LMON trace files. The parameter _gc_affinity_time controls how often the queue is checked to see whether remastering must be triggered, with a default of 10 minutes.

Now, you don't want just any object to become a candidate for remastering: if instance 1 opened 101 BL locks on an object and instance 2 opened 50, you don't want to trigger remastering. Only objects with a higher rate of BL lock requests should be queued for remastering. That threshold seems to be controlled by another parameter, _gc_affinity_minimum, defined as the "minimum amount of dynamic affinity activity per minute" for an object to be a remastering candidate. It defaults to 2500, which I think is too low for a busy environment.
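To check the current values of these hidden parameters in your environment, the usual x$ksppi/x$ksppcv query works (connect as SYS; a commonly used sketch):

select i.ksppinm name, v.ksppstvl value
from x$ksppi i, x$ksppcv v
where i.indx = v.indx
and i.ksppinm in ('_gc_affinity_limit', '_gc_affinity_time', '_gc_affinity_minimum')
/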

A few lines from an LMD0 trace file showing LMD0 reading a request queue:

* kjdrchkdrm: found an RM request in the request queue
  Transfer pkey 6589904 to node 3
*** 2009-10-12 11:41:20.257

How bad can it get?

Performance can suffer if there are remastering issues. The following AWR report shows a few instances freezing due to a DRM reconfiguration issue; the same type of freeze was visible on all the other nodes too. The gc buffer busy waits are a side effect of the DRM freeze (not always, but in this case).

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
gc buffer busy                    1,826,073     152,415     83   62.0    Cluster
CPU time                                         30,192          12.3
enq: TX - index contention           34,332      15,535    453    6.3 Concurrenc
gcs drm freeze in enter server       22,789      11,279    495    4.6      Other
enq: TX - row lock contention        46,926       4,493     96    1.8 Applicatio

At the same time, a storm of DRM operations started. This led to repetitive DRM freezes and instance reconfiguration, causing massive performance issues:

* received DRM start msg from 2 (cnt 5, last 1, rmno 404)
Rcvd DRM(404) Transfer pkey 1598477 from 3 to 2 oscan 1.1
Rcvd DRM(404) Dissolve pkey 6100030 from 2 oscan 0.1
Rcvd DRM(404) Dissolve pkey 5679943 from 2 oscan 0.1
Rcvd DRM(404) Transfer pkey 6561036 from 0 to 2 oscan 1.1
Rcvd DRM(404) Transfer pkey 5095243 to 2 oscan 0.1
...

A small test case

Let's walk through a test case that shows DRM in play. The query uses an index access path to read nearly all blocks from a big index.

Session #1:
select data_object_id from dba_objects where object_name='WMS_N1';
DATA_OBJECT_ID
-------------
6099472

REM No affinity statistics yet.
select * from x$object_affinity_statistics where object=6099472;
no rows selected.
REM executing a costly select statement
select /*+ index(a WMS_N1) */ count(*) from big_table1 a;

Session #2: I was monitoring the DRM tables:
REM DRM operations completed so far: 409. We will keep track of this count to see the remastering count increasing. There are a few
REM other interesting fields in this view.
select drms from X$KJDRMAFNSTATS;
DRM
----
409
REM I see 23442 opens on that index already since session #1 started running.
select * from x$object_affinity_statistics where object=6099472;
ADDR                   INDX    INST_ID     OBJECT       NODE      OPENS
---------------- ---------- ---------- ---------- ---------- ----------
FFFFFFFF7C05BFA8         14          1    6099472          1      23442
REM No mastering has kicked in for that object.
select * from v$gcspfmaster_info where object_id=6099472;
no rows selected
REM few seconds later, open count increased from 23442 -> 33344.
select * from x$object_affinity_statistics where object=6099472;
ADDR                   INDX    INST_ID     OBJECT       NODE      OPENS
---------------- ---------- ---------- ---------- ---------- ----------
FFFFFFFF7C05BFA8         16          1    6099472          1      33344

REM No remastering has kicked in for that object.
select * from v$gcspfmaster_info where object_id=6099472;
no rows selected

REM Surprisingly, while session #1 is still executing, the counter for OPENS was zeroed out even though DRM has not been triggered yet.
REM OPENS increased to 1229 from 0 since session #1 is still executing.
ADDR                   INDX    INST_ID     OBJECT       NODE      OPENS
---------------- ---------- ---------- ---------- ---------- ----------
FFFFFFFF7C05BFA8          0          1    6099472          1       1229
REM Approximately 10 minutes or so later, Remastering kicked in..
REM # of DRMS increased from 409 to 411
select drms from X$KJDRMAFNSTATS;
DRM
----
411
REM Remastering of that index happened. Now the master is 0 which is instance 1.
select * from v$gcspfmaster_info where object_id=6099472;
   FILE_ID  OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
         0    6099472              0           32767            1
REM Opens are still increasing but remastering has already occurred.
select * from x$object_affinity_statistics where object=6099472;
ADDR                   INDX    INST_ID     OBJECT       NODE      OPENS
---------------- ---------- ---------- ---------- ---------- ----------
FFFFFFFF7C05AF78          2          1    6099472          1      42335
FFFFFFFF7C05AF78          3          1    6099472          2          1
REM LMON trace files also indicate the transfer of that pkey. Notice that the pkey is the same as the object_id.
*** 2010-03-23 10:41:57.355
Begin DRM(411) - transfer pkey 6099472 to 0 oscan 0.0
 ftd received from node 1 (4/0.30.0)
 all ftds received
REM few seconds later, opens have been reset again.
select * from x$object_affinity_statistics where object=6099472;
ADDR                   INDX    INST_ID     OBJECT       NODE      OPENS
---------------- ---------- ---------- ---------- ---------- ----------
FFFFFFFF7C05BFA8          0          1    6099472          1       7437
REM Still the master is instance 1.
select * from v$gcspfmaster_info where object_id=6099472;
   FILE_ID  OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
         0    6099472              0           32767            1

Essentially, the object was remastered after excessive BL locking requests (in loose terms, accesses) on that index.

undo and affinity

Mastering of undo segments differs from non-undo segment mastering. With non-undo segments, all blocks are mastered by the hash technique, spreading mastership among the instances for each segment, and only after an instance opens BL locks aggressively on a segment is that segment remastered. For undo segments, however, the instance that activates an undo segment masters it immediately. This makes sense, since that undo segment will in most cases be used by the instance that brought it online. The parameter _gc_undo_affinity controls whether this dynamic undo remastering is enabled.

Since undo segments do not have real object_ids, dummy object_ids above 4294950912 are used. For example, undo segment 1 (usn=1) has an object_id of 4294950913, usn=2 has 4294950914, and so on.
[ 4294950912 = power(2,32) - power(2,14) = 0xFFFFC000 ]

select objecT_id, object_id-4294950912 usn, current_master, previous_master,
remaster_cnt  from v$gcspfmaster_info where object_id>4294950912

 OBJECT_ID        USN CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
4294950913          1              0           32767            1
4294950914          2              0           32767            1
4294950915          3              0           32767            1
4294950916          4              0           32767            1
4294950917          5              0           32767            1
4294950918          6              0           32767            1
4294950919          7              0           32767            1
4294950920          8              0           32767            1
4294950921          9              0           32767            1
4294950922         10              0           32767            1
4294950923         11              1           32767            1
4294950924         12              1           32767            1
4294950925         13              1           32767            1
...

REM Notice that usn 0 appears in both instances. That is the system undo segment.
REM As you can see below, the first 10 undo segments are mastered by instance 1 and the next 3 are mastered by instance 2.

select inst_id, usn, gets from gv$rollstat where usn <=13
order by inst_id, usn
   INST_ID        USN       GETS
---------- ---------- ----------
         1          0       3130
         1          1     108407
         1          2      42640
         1          3      43985
         1          4      41743
         1          5     165166
         1          6      43485
         1          7     109044
         1          8      23982
         1          9      39279
         1         10      48552
         2          0       4433
         2         11     231147
         2         12      99785
         2         13      1883

I was not successful in triggering another undo segment remastering event. I created one active transaction generating 200K undo blocks on one node while another node was reading that table, and I could see enormous waits for those undo blocks; but I did not see any remastering events for that undo segment. I am not sure why it did not work; perhaps the conditions for undo segment remastering are different.

[ PS: I am able to manually remaster the undo segment using the lkdebug command discussed below, so the code must be able to remaster undo segments automatically too; perhaps some other conditions must be met.

  1* select * from v$gcspfmaster_info where object_id=431+4294950912
   FILE_ID  OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
         0 4294951343              0           32767            1
oradebug lkdebug -m pkey 4294951343

* kjdrchkdrm: found an RM request in the request queue
  Transfer pkey 4294951343 to node 1
*** 2010-03-24 12:47:29.011
Begin DRM(520) - transfer pkey 4294951343 to 1 oscan 0.1
 ftd received from node 0 (4/0.31.0)
 all ftds received

select * from v$gcspfmaster_info where object_id=431+4294950912
SQL> /
   FILE_ID  OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
         0 4294951343              1               0            2
]

I am not preaching that you should modify these undocumented parameters. Far from it. Understand the parameters, and if you run into wait events such as 'gc remaster' or 'gcs freeze for instance reconfiguration', work out whether the default values are too low for your workload. Then work with Oracle Support and see if they can be tuned.

Manual remastering

You can manually remaster an object with an oradebug command:
oradebug lkdebug -m pkey <object_id>

This enqueues an object remaster request, which LMD0 and LMON then complete:
*** 2010-01-08 23:25:54.948
* received DRM start msg from 1 (cnt 1, last 1, rmno 191)
Rcvd DRM(191) Transfer pkey 6984154 from 0 to 1 oscan 0.0
 ftd received from node 1 (8/0.30.0)
 ftd received from node 0 (8/0.30.0)
 ftd received from node 3 (8/0.30.0)
Current_master starts from 0.
 1* select * from v$gcspfmaster_info where object_id=6984154
SQL> /
   FILE_ID  OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
         0    6984154              1               0            2
SQL> oradebug lkdebug -m pkey 6984154
Statement processed.
SQL>  select * from v$gcspfmaster_info where object_id=6984154
 /
   FILE_ID  OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
         0    6984154              2               1            3

Summary

In summary, remastering is a great feature; it is a pity that we sometimes fall victim to its side effects. If you run into issues with remastering, don't disable it; see whether you can tune those parameters upward so as to control the frequency of remastering events. If you still want to disable DRM completely, I would recommend setting _gc_affinity_limit and _gc_affinity_minimum to a much higher value, say 10 million. Setting the parameter _gc_affinity_time to 0 will completely disable DRM, but that also means you cannot manually remaster objects. Further, Arup mentioned that x$object_affinity_statistics is not maintained if DRM is disabled.

Again, these are undocumented parameters. Before you change these parameters make sure that Oracle Support agrees with you.


Update 1:

From 11g onwards, affinity management is renamed to policy management. For example, x$object_affinity_statistics is renamed to x$object_policy_statistics, and the initialization parameters are renamed too: _gc_affinity_limit becomes _gc_policy_limit and _gc_affinity_time becomes _gc_policy_time. A new view, v$policy_history, is introduced, and all rows with policy_event = 'initiate_affinity' are associated with DRM events.
The other details in this blog still hold, except that the default of _gc_policy_limit is lowered to 1500, which in theory means 11g may have more DRM events. YMMV.

 1* select * from  v$policy_history
   INST_ID POLICY_EVENT         DATA_OBJECT_ID TARGET_INSTANCE_NUMBER  EVENT_DATE
---------- -------------------- -------------- ----------------------  --------------------
         2 glru_on                           0                      1  10/15/2010 10:58:28
         2 glru_on                           0                      1  10/15/2010 11:21:32
         2 initiate_affinity             74809                      1  10/15/2010 13:27:44

[ Many thanks to Arup Nanda and Michael Möller (aka "M2") for reviewing this blog entry; they contributed heavily to some of this discussion. Any mistakes in this document are solely mine. ]
This blog is available in a pdf format from Dynamic_remastering_RAC_orainternals.

rshamsud's picture

RMOUG 2010: My presentations

It is very disappointing to me that I had to cancel my trip to the RMOUG Training Days. I am sick and was not able to catch my flight because of that.

But I can always share my presentations here. I had two presentations planned for this training day; they can be accessed below:

Advanced RAC troubleshooting
Riyaj_Advanced_rac_troubleshooting_RMOUG_2010_doc
Riyaj_Advanced_rac_troubleshooting_RMOUG_2010_ppt

Why optimizer hates my sql
Riyaj_Why_optimizer_hates_my_sql_2010

RMOUG training days audience: Please accept my sincere apologies.

rshamsud's picture

HOTSOS SYMPOSIUM ’2010

I will be presenting on two topics at HOTSOS ’2010, an Oracle performance focused conference, in my home town of Dallas, Texas. You can read about my presentation topics in HOTSOS’ 2010 Riyaj. This symposium is a valued conference for performance engineers, DBAs, and developers who are interested in learning about performance. There are many great speakers presenting at this conference, and the main page for the conference is HOTSOS ’2010. BTW, my friend Alex Gorbachev interviewed Gary Goodman and posted a video on his blog too.

Tanel Poder is conducting the HOTSOS training day this year. You can’t miss his training day, and I heard that he is working on MOTS (Mother Of all Tuning Script) and planning to release it at HOTSOS ’2010.

On behalf of the Dallasites, I invite you to visit Dallas and attend this great conference.

I wish you a Happy and prosperous New Year ’2010!

rshamsud's picture

RAC performance tuning: Understanding Global cache performance

Global cache performance metrics are often measured incorrectly, and they are not clearly understood either. There are even a few blogs and web pages disseminating incorrect information. This blog entry is an attempt to offer a few methods and scripts to understand global cache performance.

Always review all instances

It is very important to review the performance metrics from all instances in the RAC cluster, not just the one instance you are connected to. If you have access to AWR reports, then it is critical to generate AWR reports (or statspack reports) from all instances. The problem is that DBAs tend to generate AWR reports by logging in to each instance iteratively, entering a couple of parameters, and producing the report there. Not exactly a convenient practice.

  REM connect to each instance separately, type in the beginning snap_id and ending snap_id for each node etc..
   sqlplus mydba@proddb1
   @$ORACLE_HOME/rdbms/admin/awrrpt.sql
   exit;
   sqlplus mydba@proddb2
   @$ORACLE_HOME/rdbms/admin/awrrpt.sql
   exit;
   sqlplus mydba@proddb3
   @$ORACLE_HOME/rdbms/admin/awrrpt.sql
   exit;

There are a few issues with this approach. It is cumbersome if the instance count is high. In addition, all of the AWR reports access the same underlying AWR tables. Physically, rows from all instances are stored together in the same blocks, so executing these reports while connected to various instances itself increases global cache traffic. If the database is suffering from global cache (GC) performance issues, then generating reports by connecting to each instance in turn is probably not a grand idea.

I have created a few basic scripts to generate AWR reports from all instances in a single step. The script awrrpt_all_gen.sql queries the AWR tables and gv$instance to find the last snap_id and the various instance_ids. It then generates AWR reports for all instances in the cluster, for the last AWR snap, in the current directory. This script is quite handy while debugging RAC performance issues: with the AWR reports from all instances in one directory, we can easily grep these files to compare metrics across instances.

@awrrpt_all_gen.sql

awrrpt_all_gen.sql v1.00 by Riyaj Shamsudeen @orainternals.com

To generate AWR Report from all RAC instances concurrently.
Creates reports using last two snap_ids.

...Generating awrrpt_all.sql script.... Please wait....
...Completed script generation.

Executing awrrpt_all.sql to generate AWR reports.
...Generates AWR reports with file name format awrrpt___.txt for each instance.
...Please wait for few minutes...

...AWR report created for instance 1. Please wait..
...AWR report created for instance 2. Please wait..
...AWR report created for instance 3. Please wait..
...AWR report created for instance 4. Please wait..

AWR reports created.

If you would like the ability to choose the snap_ids, use the script awrrpt_all_range_gen.sql.
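If you don't have the scripts handy, the same idea can be sketched directly with dbms_workload_repository. This is only an illustration of the approach, not the awrrpt_all_gen.sql script itself (it prints the reports through dbms_output instead of spooling one file per instance):

set serveroutput on size unlimited
declare
  l_dbid  number;
  l_begin number;
  l_end   number;
begin
  select dbid into l_dbid from v$database;
  -- last two snapshots (assumes at least two exist)
  select max(snap_id) - 1, max(snap_id) into l_begin, l_end
  from dba_hist_snapshot;
  for i in (select instance_number from gv$instance order by instance_number) loop
    dbms_output.put_line('### AWR report for instance '||i.instance_number||' ###');
    for r in (select output
              from table(dbms_workload_repository.awr_report_text
                           (l_dbid, i.instance_number, l_begin, l_end))) loop
      dbms_output.put_line(r.output);
    end loop;
  end loop;
end;
/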

Use a granular approach to GC measurements

It is important to measure global cache performance metrics correctly, especially those damned averages. Incorrect measurement of averages can lead to faulty analysis. For example, the following query retrieves the average global cache current block receive time from the gv$sysstat view. The problem with this statement is that it averages over the entire period since instance restart. Global cache performance is quite sensitive to workload, and querying over such a wide time range can lead to faulty analysis.


REM This is really a bad idea!!
REM (time statistics are in centiseconds; *10 converts the average to ms)

select b1.inst_id, b2.value "RECEIVED",
       b1.value "RECEIVE TIME",
       ((b1.value / b2.value) * 10) "AVG RECEIVE TIME (ms)"
from gv$sysstat b1, gv$sysstat b2
where b1.name = 'gc current block receive time'
and b2.name = 'gc current blocks received' and b1.inst_id = b2.inst_id;

As an analogy, the above query is equivalent to this patient-doctor conversation:

Patient: Doctor, I don’t feel good. I think I have a fever.
Doctor : Nope. Your average body temperature since the beginning of this year is around 98. So, you are doing excellent.

It is essential to review global cache performance metrics at a granular level. Use AWR reports or statspack reports covering a short period of time. Better yet, if you want to quickly review GC performance metrics, you can use my script gc_traffic_print.sql. This script captures various statistics relevant to global cache performance into a PL/SQL table, sleeps for N seconds (default N=60), and re-captures the statistics after the sleep. It then prints the deltas in an easily readable tabular format. Very useful for quickly reviewing global cache performance.

SQL> @gc_traffic_print.sql

gc_traffic_print.sql v1.20 by Riyaj Shamsudeen @orainternals.com

...Calculating GC Rx and Tx timing and blocks..
...Default collection period is 60 seconds.... Please wait for at least 60 seconds...
Enter value for sleep: 30
---------|--------------|---------|----------------|----------|---------------|---------------|-------------|
Inst     | CR blocks Rx | CR time |  CUR blocks Rx | CUR time |  CR blocks Tx | CUR blocks Tx |Tot blocks   |
---------|--------------|---------|----------------|----------|---------------|---------------|-------------|
1        |         29288|    12.17|          14914|      4.58|          27059|           17439|        88700|
2        |         21201|     5.75|          12835|      5.19|          26831|           16741|        77608|
3        |         22942|     4.64|          11751|      3.82|          23918|            9205|        67816|
4        |         32075|     4.78|          19260|      4.55|          31046|           19331|       101712|
---------|--------------|---------|----------------|----------|---------------|---------------|-------------|

PL/SQL procedure successfully completed.

Column key for the above script:
CR blocks Rx : GC CR blocks received
CR time : average GC CR receive time (ms)
CUR blocks Rx : GC current blocks received
CUR time : average GC current receive time (ms)
CR blocks Tx : GC CR blocks transmitted
CUR blocks Tx : GC current blocks transmitted
Tot blocks : transmitted + received for both CR and current traffic
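The mechanics of the script can be sketched for a single statistic. This is a hypothetical simplification (the real script covers many statistics and prints the tabular format above; dbms_lock.sleep requires execute privilege on dbms_lock):

set serveroutput on
declare
  type t_num_tab is table of number index by pls_integer;
  l_before t_num_tab;
begin
  -- capture the starting values per instance
  for r in (select inst_id, value from gv$sysstat
            where name = 'gc cr blocks received') loop
    l_before(r.inst_id) := r.value;
  end loop;
  dbms_lock.sleep(60);   -- collection period in seconds
  -- re-capture and print the delta
  for r in (select inst_id, value from gv$sysstat
            where name = 'gc cr blocks received') loop
    dbms_output.put_line('Inst '||r.inst_id||': CR blocks received in 60s = '||
                         (r.value - l_before(r.inst_id)));
  end loop;
end;
/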

Breakup: global cache performance

Global cache processing performance can be broken down into its subcomponents (excluding wire latency, which can, however, be derived). This breakdown must also be done at a granular level. We can use yet another script, gc_traffic_processing.sql, to review these metrics for the past N seconds. For example, in the output printed below, global cache CR processing time is broken into the various subcomponents of global cache processing. Remember that these metrics are on the SEND side of global cache processing: inefficient global cache processing in one instance affects the global cache receive time of the other instances, not its own. For example, in the table above, GC CR receive performance was 12.17ms for instance 1. That is due to another instance (probably node 4 in this case) suffering from high GC processing time (more specifically, a high 'CR block flush time' in node 4). It is far more accurate to review global cache performance issues at this granular level. One caution though: specify a collection period of at least a minute so that these numbers are consistent.

@gc_traffic_processing.sql
gc_traffic_processing.sql v1.00 by Riyaj Shamsudeen @orainternals.com

...Prints various timing related information for the past N seconds
...Default collection period is 60 seconds.... Please wait for at least 60 seconds...
Enter value for sleep:

---------|-----------|---------|-----------|----------|------------|------------|------------|----------|
Inst     | CR blk Tx | CR bld  | CR fls tm | CR snd tm| CUR blk TX | CUR pin tm | CUR fls tm |CUR snd tm|
---------|-----------|---------|-----------|----------|------------|------------|------------|----------|
1        |      62732|       .2|       1.25|       .21|       48920|        .96|         .22|        .22|
2        |      46037|      .13|        .76|        .2|       42019|        .68|         .17|         .2|
3        |      72237|      .63|        .79|       .19|       22697|        .44|         .29|        .23|
4        |     113816|      .13|       2.04|       .15|       51020|        .87|         .12|        .13|
--------------------------------------------------------------------------------------------------------

PL/SQL procedure successfully completed.

Here are the column definitions for the script output:
CR blk Tx : GC CR blocks served
CR bld : average GC CR build time per CR block served
CR fls tm : average GC CR flush time per CR block served
CR snd tm : average GC CR send time per CR block served
CUR blk Tx : GC current blocks served
CUR pin tm : average GC current pin time per current block served
CUR fls tm : average GC current flush time per current block served
CUR snd tm : average GC current send time per current block served
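As an illustration of where these averages come from, the since-startup version of one of them can be computed from gv$sysstat (statistic names as of 10g/11g; time statistics are in centiseconds, hence the *10). The script computes the same ratio over a short interval instead, as recommended above:

select f.inst_id,
       round(f.value * 10 / nullif(s.value, 0), 2) "Avg CR flush tm (ms)"
from gv$sysstat f, gv$sysstat s
where f.name = 'gc cr block flush time'
  and s.name = 'gc cr blocks served'
  and f.inst_id = s.inst_id
order by f.inst_id;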

Review histograms for GC events

Averages can be misleading; revealing the performance issues hiding beneath an average is an important task. For example, say we see that the 'gc cr block receive time' average is high. Let's review the histograms for the GC events related to global cache receive time. My script gc_event_histogram.sql is quite handy for reviewing event histograms: it computes the difference in the histogram statistics between two samples and prints it in a readable format. In this case, instance 1 had 8,578 waits under 2ms for the event 'gc cr block 2-way' and 9,851 waits between 2 and 4ms. Only a few waits took longer than 16ms.

@gc_event_histogram.sql

gc_event_histogram.sql v1.0 by Riyaj Shamsudeen @orainternals.com

Default collection period is sleep seconds. Please wait..
Enter value for event: gc cr block 2-way
Enter value for sleep: 60
---------|-----------------------|----------------|----------|
Inst id  | Event                 |wait time milli |wait cnt  |
---------|-----------------------|----------------|----------|
1        |gc cr block 2-way      |               1|        98|
1        |gc cr block 2-way      |               2|      8578|
1        |gc cr block 2-way      |               4|      9851|
1        |gc cr block 2-way      |               8|      2291|
1        |gc cr block 2-way      |              16|       117|
1        |gc cr block 2-way      |              32|         6|
1        |gc cr block 2-way      |              64|         1|
1        |gc cr block 2-way      |             128|         2|
1        |gc cr block 2-way      |             256|         0|
1        |gc cr block 2-way      |             512|         0|
1        |gc cr block 2-way      |            1024|         0|
1        |gc cr block 2-way      |            2048|         0|
1        |gc cr block 2-way      |            4096|         0|
1        |gc cr block 2-way      |            8192|         0|
1        |gc cr block 2-way      |           16384|         0|
2        |gc cr block 2-way      |               1|       155|
2        |gc cr block 2-way      |               2|     10792|
2        |gc cr block 2-way      |               4|     14201|
2        |gc cr block 2-way      |               8|      3887|
2        |gc cr block 2-way      |              16|       204|
...

But we also need to see 3-way waits. Let's review them with the same script, simply supplying 'gc cr block 3-way' as the wait event. In this case, we see that there were 12,062 waits between 4 and 8ms. Of course, this must be improved.


@gc_event_histogram.sql

gc_event_histogram.sql v1.0 by Riyaj Shamsudeen @orainternals.com

Default collection period is sleep seconds. Please wait..
Enter value for event: gc cr block 3-way
Enter value for sleep: 60
---------|-----------------------|----------------|----------|
Inst id  | Event                 |wait time milli |wait cnt  |
---------|-----------------------|----------------|----------|
1        |gc cr block 3-way      |               1|         3|
1        |gc cr block 3-way      |               2|       320|
1        |gc cr block 3-way      |               4|     17017|
1        |gc cr block 3-way      |               8|     12062|
1        |gc cr block 3-way      |              16|      1725|
1        |gc cr block 3-way      |              32|       113|
1        |gc cr block 3-way      |              64|         0|
1        |gc cr block 3-way      |             128|         0|
1        |gc cr block 3-way      |             256|         0|
1        |gc cr block 3-way      |             512|         0|
1        |gc cr block 3-way      |            1024|         0|
1        |gc cr block 3-way      |            2048|         0|
1        |gc cr block 3-way      |            4096|         0|
1        |gc cr block 3-way      |            8192|         0|
...
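For reference, the histograms the script samples come from gv$event_histogram. They can be queried directly as below, but keep in mind this shows cumulative counts since instance startup; the script's sample-to-sample difference is far more useful:

select inst_id, event, wait_time_milli, wait_count
from gv$event_histogram
where event = 'gc cr block 3-way'
order by inst_id, wait_time_milli;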

In essence, these four scripts will be useful in debugging global cache performance issues. Please let me know if you encounter errors in them. We will use these scripts in future discussions to show how we resolved a few RAC performance issues.

rshamsud's picture

Is plan_hash_value a final say?

I was reviewing a performance issue with a client recently. The problem: increased global cache waits were causing an application slowdown, affecting a few critical business functions. Using one of my scripts, gc_traffic.sql, and graphing the results with an Excel spreadsheet, it was established that there was a marked increase in GC traffic compared to a week earlier. Similar jobs run every day, so comparing two weekdays is sufficient to show the increase. The graph plots total blocks against AWR snap time at 30-minute intervals. [Click the picture below to review the graph clearly.]

[Figure: gc comparison]

Identifying the object creating this increased GC traffic was essential to finding the root cause. Using ADDM and AWR reports, we quickly determined that the increase in GC traffic was localized around a few SQL statements. We decided to focus on one SQL with an obvious increase in elapsed time compared to the prior week. So, the first question asked: is there a change in the plan? The plan_hash_value was reviewed, and we quickly determined that it had not changed.

A little bit of history: a few changes had been performed by the vendor over the weekend as part of some bug fixes. The vendor’s argument was that since there was no change to the plan_hash_value, the SQL access plan did not change, and so this can’t be due to the vendor changes. My client’s argument was that there were no changes to the environment, and the problem started right after the application changes.

There are many different things that can go wrong without a change to the execution plan. We can ignore those conditions for now since (a) there were no changes to the environment, (b) there were no visible changes to the data, and (c) there were no error messages, and the average CR receive time was consistent with prior weeks. Well, let’s cut to the chase. It boiled down to one question: “Can a SQL plan change without a change in plan_hash_value?” What do you think? Please answer in the poll below.

plan_hash_value and hash_value

The hash_value of a SQL statement is generated from the text of the statement, and the plan_hash_value is generated from the execution plan of that statement [more accurately, from that child cursor's execution plan; exactly what goes into generating plan_hash_value is not published]. It is a general belief that plan_hash_value will change with even the slightest change in the execution plan. But that is not always the case!
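For illustration, both values can be compared side by side across child cursors in v$sql (the sql_text filter below is hypothetical; substitute your own statement):

select sql_id, child_number, hash_value, plan_hash_value
from v$sql
where sql_text like 'select * from t1 where n1=%'
order by sql_id, child_number;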

Test case

We will use a small table to explore this issue.

prompt Test case #1: Initial test case with index on columns (n1, n2).
prompt ==========
create table t1 (n1 number, n2 number, v1 varchar2(100));
create index t1_n1 on t1(n1,n2);

explain plan for select * from t1 where n1=:b1 and n2=:b2;
select * from table (dbms_xplan.display);
prompt plan hash value in this case is 626762252

Plan hash value: 626762252
-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    53 |  4134 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |    53 |  4134 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_N1 |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("N1"=TO_NUMBER(:B1) AND "N2"=TO_NUMBER(:B2))

So, we got the plan_hash_value as highlighted above. In the following test case, we will recreate the index with the columns reordered as (n2, n1).

prompt Test case #2: Index with re-ordered columns
prompt ==========
drop index t1_n1;
create index t1_n1 on t1(n2,n1);

explain plan for select * from t1 where n1=:b1 and n2=:b2;
select * from table (dbms_xplan.display);

Plan hash value: 626762252

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    53 |  4134 |     9   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |    53 |  4134 |     9   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_N1 |  5298 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("N2"=TO_NUMBER(:B2) AND "N1"=TO_NUMBER(:B1))

Notice in the result above that the access predicate changed, reflecting the reordered index columns, but the plan_hash_value did not change. The point is that the execution plan can change without a change in plan_hash_value (due to a change in the underlying objects).

Let’s modify that index, dropping a column, in the next test case.

prompt Test case #3: dropping a column from index t1_n1.
prompt ==========
drop index t1_n1;
create index t1_n1 on t1(n1);

explain plan for select * from t1 where n1=:b1 and n2=:b2;
select * from table (dbms_xplan.display);

Plan hash value: 626762252

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    53 |  4134 |    41   (3)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| T1    |    53 |  4134 |    41   (3)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_N1 |  2119 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N2"=TO_NUMBER(:B2))
   2 - access("N1"=TO_NUMBER(:B1))

We see that the predicates changed, but the plan_hash_value did not. In the next test case, we will convert the index into a function-based index and test the SQL statement again. A few more self-evident test cases follow.

prompt Test case #4: Index is a function based index. Still, no change in the plan_hash_value.
prompt ==========
drop index t1_n1;
create index t1_n1 on t1(to_char(n1));

explain plan for select * from t1 where to_char(n1)=:b1 and n2=:b2;
select * from table (dbms_xplan.display);

Plan hash value: 626762252
-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    53 |  4134 |   127   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| T1    |    53 |  4134 |   127   (1)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_N1 |  2119 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N2"=TO_NUMBER(:B2))
   2 - access(TO_CHAR("N1")=:B1)

prompt Test case #5: Different schema, same SQL. plan_hash_value did not change.

Plan hash value: 626762252
-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    78 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    78 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_N1 |     3 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("N2"=TO_NUMBER(:B2))
   2 - access("N1"=TO_NUMBER(:B1))

plan_hash_value is also case sensitive to index names. In the test cases below, we will create indexes with case-sensitive names.

prompt Test case #6: Lower-case index name.. plan_hash_value changed.
drop index t1_n1;
create index "t1_n1" on t1(n1,n2);
explain plan for select * from t1 where n1=:b1 and n2=:b2;
select * from table (dbms_xplan.display);

Plan hash value: 2252949961
-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    53 |  4134 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |    53 |  4134 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | t1_n1 |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

prompt Test case #7: Upper-case index name.. plan_hash_value did not change.
drop index "t1_n1";
create index "T1_N1" on t1(n1,n2);
explain plan for select * from t1 where n1=:b1 and n2=:b2;
select * from table (dbms_xplan.display);
Plan hash value: 626762252

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    53 |  4134 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |    53 |  4134 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_N1 |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

prompt Test case #8: A space in the index name changes the plan_hash_value though.
drop index "T1_N1";
create index "T1_N1 " on t1(n1,n2);
explain plan for select * from t1 where n1=:b1 and n2=:b2;
select * from table (dbms_xplan.display);
Plan hash value: 1377541522
--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |    53 |  4134 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1     |    53 |  4134 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_N1  |     1 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Summary

In summary, plan_hash_value is a very good indicator of whether the plan changed, but it is not the final say. Don’t get me wrong, I use plan_hash_value too, but in addition to comparing CPU time and elapsed time. The execution plan can change even when there is no change to the plan_hash_value. Salient points from these test cases:

  1. Plan_hash_value depends on a partial execution plan, not the complete execution plan.
  2. If a predicate moves from the filter predicates to the access predicates, or vice versa, it does not affect the plan_hash_value.
  3. Changes in the parallelism of a query do not affect the plan_hash_value. For example, if a query used 4 parallel slaves today and 16 parallel slaves yesterday, that change is not visible through the plan_hash_value. This behavior is expected, as parallel_adaptive_multiuser can allocate slaves depending upon the state of that instance.
  4. Plan_hash_value is case-sensitive to index/table names, and also sensitive to white-space characters.
  5. Plan_hash_value is not sensitive to index types. For example, if the index is a function-based index, as in our test case #4, the plan_hash_value remains the same as long as the index name does not change.
  6. Plan_hash_value is not sensitive to the schema either. The same SQL statement accessing different schemas can have the same plan_hash_value.

Back to our problem. One of the indexes had been recreated with a column removed, which caused the optimizer to apply a filter predicate at the table level, increasing the number of accesses to table blocks tremendously and leading to more logical reads, more global cache waits, etc. The problem was amplified because this SQL was executed very frequently and concurrently from all RAC instances.

In my client’s defense, this application change was tested thoroughly. But, alas, the test data chosen for the performance test did not probe this specific issue, so the problem did not show up in development. Essentially, the data chosen for the performance benchmark suite was not a good choice.

As they say in Gaelic, “Go raibh maith agat” (“thank you”) to my client for allowing me to post this blog.
This can be read in traditional format in
plan_hash_value_and_gc
Update 1: Added the document and also corrected a typo.

rshamsud's picture

Shared pool freelists (and durations)

My earlier blog about shared pool durations got an offline response from one of my readers:
“So, you say that durations, aka mini-heaps, have been introduced from 10g onwards. I have been using Steve Adams’ script shared_pool_free_lists.sql. Is that not accurate anymore?”

Shared pool free lists

I have great respect for Steve Adams. In many ways, he has been a great virtual mentor, and his insights are remarkable.

Coming back to the question: I have used Steve’s script before, and it is applicable prior to Oracle version 9i. In 9i, sub-heaps were introduced, and shared pool durations were then introduced in version 10g. So his script may not be applicable from version 9i onwards. We will probe this further in this blog.

This is the problem with writing about internals: they tend to change from version to version, and in many cases our work can become obsolete in future releases (including this blog!).

In version 9i, each sub-heap of the shared pool has its own free list. In versions 10g and 11g, each duration within a sub-heap has its own free list. This is visible through x$ksmsp: the column x$ksmsp.ksmchdur indicates the duration a chunk belongs to. In 9i, that column always has a value of 1 (at least in my experiments so far). In 10g and 11g (up to 11.1.0.7), there are exactly four durations in each sub-heap, and ksmchdur ranges from 1 to 4. Each duration has its own free list.
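To see the durations for yourself, here is a minimal sketch. Connect as SYS, and be warned that scanning x$ksmsp on a busy instance can itself stress the shared pool, so use it sparingly:

REM free chunks per duration (10g/11g; in 9i ksmchdur is always 1)
select ksmchdur, count(*) chunks, sum(ksmchsiz) bytes
from x$ksmsp
where ksmchcls = 'free'
group by ksmchdur
order by ksmchdur;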

Shared_pool_free_list.sql script

rshamsud's picture

ORA-4031 and Shared Pool Duration

After reading my earlier post on the shared pool, A stroll through shared pool heap, one of my clients contacted me with an interesting ORA-4031 issue. The client was getting ORA-4031 errors even though the shared pool size was over 4GB (in a RAC environment). The client DBA queried v$sgastat and showed that there was plenty of free memory in the shared pool. We researched the issue, and it is worth blogging about: the DBA was confused as to how there can be ORA-4031 errors when the shared pool free memory is a few GBs.

Heapdump Analysis

At this point, it is imperative to take a heap dump at level 2; level 2 is the shared pool heap dump. [Please be warned that it is not advisable to take shared pool heap dumps excessively, as that itself can cause a performance issue. During an offline conversation, Tanel Poder mentioned that a heapdump can freeze the instance, as some of his clients have experienced.] This will create a trace file in the user_dump_dest destination, and that trace file is quite useful for analyzing the contents of the shared pool heap. Tanel Poder has an excellent script, heapdump_analyzer. I modified that script, adding code for aggregation at heap, extent, and type levels to debug this issue further; it is available as heapdump_dissect.ksh (published with special permission from Tanel).
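For reference, a level 2 heap dump can be taken as shown below. Again, heed the warning above: do this sparingly, and preferably on a test system first:

REM level 2 = shared pool heap dump; trace file goes to user_dump_dest
alter session set events 'immediate trace name heapdump level 2';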

Shared pool review
