Internals

Posted by tanelpoder

LOBREAD SQL Trace entry in Oracle 11.2 (and tracing OPI calls with event 10051)

A few days ago I looked into a SQL Tracefile of some LOB access code and saw a LOBREAD entry there. This is a really welcome improvement (or should I say, bugfix of a lacking feature) for understanding resource consumption by LOB access OPI calls. Check the bottom of the output below:

*** 2011-03-17 14:34:37.242
WAIT #47112801352808: nam='SQL*Net message from client' ela= 189021 driver id=1413697536 #bytes=1 p3=0 obj#=99584 tim=1300390477242725
WAIT #0: nam='gc cr multi block request' ela= 309 file#=10 block#=20447903 class#=1 obj#=99585 tim=1300390477243368
WAIT #0: nam='cell multiblock physical read' ela= 283 cellhash#=379339958 diskhash#=787888372 bytes=32768 obj#=99585 tim=1300390477243790
WAIT #0: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=99585 tim=1300390477243865
[...snipped...]
WAIT #0: nam='SQL*Net more data to client' ela= 2 driver id=1413697536 #bytes=2048 p3=0 obj#=99585 tim=1300390477244205
WAIT #0: nam='SQL*Net more data to client' ela= 4 driver id=1413697536 #bytes=2048 p3=0 obj#=99585 tim=1300390477244221
WAIT #0: nam='gc cr multi block request' ela= 232 file#=10 block#=20447911 class#=1 obj#=99585 tim=1300390477244560
WAIT #0: nam='cell multiblock physical read' ela= 882 cellhash#=379339958 diskhash#=787888372 bytes=32768 obj#=99585 tim=1300390477245579
WAIT #0: nam='SQL*Net more data to client' ela= 16 driver id=1413697536 #bytes=2020 p3=0 obj#=99585 tim=1300390477245685
WAIT #0: nam='SQL*Net more data to client' ela= 6 driver id=1413697536 #bytes=2048 p3=0 obj#=99585 tim=1300390477245706
WAIT #0: nam='SQL*Net more data to client' ela= 5 driver id=1413697536 #bytes=1792 p3=0 obj#=99585 tim=1300390477245720
LOBREAD: c=1000,e=2915,p=8,cr=5,cu=0,tim=1300390477245735

In past versions of Oracle the CPU usage figures (c=) and other stats, like the number of physical/logical reads of the LOB chunk read OPI call, were just lost – they were never reported in the tracefile. In the past only the most common OPI calls, like PARSE, EXEC, BIND, FETCH (and recently CLOSE cursor), were instrumented with SQL Tracing. But since 11.2(.0.2?) the LOBREADs are printed out too. This is good, as it reduces the amount of guesswork needed to figure out what those WAITs for cursor #0 – which is really a pseudocursor – are about.

Why cursor #0? It’s because normally, with PARSE/EXEC/BIND/FETCH, you always had to specify the cursor slot number you operated on (if you fetch from cursor #5, it means that the Oracle process went to slot #5 in the open cursor array in your session’s UGA and followed the pointers to the shared cursor’s executable parts in the library cache from there). But the LOB interface works differently – if you select a LOB column using your query (cursor), all your application gets is a LOB locator (sort of a pointer with the LOB item ID and a consistent read/version SCN). It is then your application which must issue another OPI call (LOBREAD) to read the chunks of that LOB out of the database. And the LOB locator is independent of any cursors; it doesn’t follow the same cursor API as regular SQL statements (as it requires quite different functionality compared to a regular select or update statement).

So, whenever a wait happens in your session due to an access through a LOB locator, there’s no specific cursor responsible for it (as far as Oracle sees internally), and that’s why the fake pseudocursor #0 is used.
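To make the two-step locator-then-read behaviour concrete, here is a hypothetical way to reproduce a trace like the one above from SQL*Plus (the table name t_lob matches the statement visible in the trace further below; the CLOB contents and sizes are made up):

SQL> create table t_lob (c clob);
SQL> insert into t_lob values (to_clob(rpad('x',4000,'x')) || rpad('y',4000,'y'));
SQL> commit;
SQL> set long 1000000
SQL> select * from t_lob;

SET LONG matters here – by default SQL*Plus reads only the first 80 bytes of a LOB through the locator, so without it you would barely see any LOBREAD activity in the trace.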

Note that on versions earlier than 11.2(.0.2?), where the LOBREAD line isn’t printed to the trace, you can use OPI call tracing (OPI stands for Oracle Program Interface and is the server-side counterpart to the OCI API on the client side) via event 10051. First enable SQL Trace and then event 10051 (or the other way around if you like):

SQL> @oerr 10051

ORA-10051: trace OPI calls

SQL> alter session set events '10051 trace name context forever, level 1';

Session altered.
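For reference, extended SQL trace with wait events can be enabled in the same session for example like this (one common way – DBMS_MONITOR.SESSION_TRACE_ENABLE works too):

SQL> alter session set events '10046 trace name context forever, level 8';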

Now run some LOB access code and check the tracefile:

*** 2011-03-17 14:37:07.178
WAIT #47112806168696: nam='SQL*Net message from client' ela= 6491763 driver id=1413697536 #bytes=1 p3=0 obj#=99585 tim=1300390627178602
OPI CALL: type=105 argc= 2 cursor=  0 name=Cursor close all
CLOSE #47112806168696:c=0,e=45,dep=0,type=1,tim=1300390627178731
OPI CALL: type=94 argc=28 cursor=  0 name=V8 Bundled Exec
=====================
PARSING IN CURSOR #47112802701552 len=19 dep=0 uid=93 oct=3 lid=93 tim=1300390627179807 hv=1918872834 ad='271cc1480' sqlid='3wg0udjt5zb82'
select * from t_lob
END OF STMT
PARSE #47112802701552:c=1000,e=1027,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=3547887701,tim=1300390627179805
EXEC #47112802701552:c=0,e=29,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3547887701,tim=1300390627179884
WAIT #47112802701552: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=99585 tim=1300390627179939
WAIT #47112802701552: nam='SQL*Net message from client' ela= 238812 driver id=1413697536 #bytes=1 p3=0 obj#=99585 tim=1300390627418785
OPI CALL: type= 5 argc= 2 cursor= 26 name=FETCH
WAIT #47112802701552: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=99585 tim=1300390627418945
FETCH #47112802701552:c=0,e=93,p=0,cr=5,cu=0,mis=0,r=1,dep=0,og=1,plh=3547887701,tim=1300390627418963
WAIT #47112802701552: nam='SQL*Net message from client' ela= 257633 driver id=1413697536 #bytes=1 p3=0 obj#=99585 tim=1300390627676629
OPI CALL: type=96 argc=21 cursor=  0 name=LOB/FILE operations
WAIT #0: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=99585 tim=1300390627676788
[...snip...]
WAIT #0: nam='SQL*Net more data to client' ela= 2 driver id=1413697536 #bytes=1792 p3=0 obj#=99585 tim=1300390627677054
LOBREAD: c=0,e=321,p=0,cr=5,cu=0,tim=1300390627677064

Check the LOBREAD line and especially the “LOB/FILE operations” OPI call above. Tracing OPI calls gives you some extra detail about what kind of tasks are executed in the session. The “LOB/FILE operations” call indicates that whatever lines come after it are done for this OPI call, until the next OPI call is printed out (unlike SQL trace call lines, where all the activity happens before the call line is printed – with some exceptions, of course). OPI call tracing should work even on ancient database versions…

By the way, if you are wondering what the cursor number 47112801352808 in the “WAIT #47112801352808” above is – shouldn’t cursor numbers be small numbers?

Well, in 11.2.0.2 this was also changed. Before that, the X in CURSOR #X (and PARSE #X, BIND #X, EXEC #X, FETCH #X) represented the slot number in your open cursor array (controlled by open_cursors) in your session’s UGA. Now the tracefile dumps out the actual address of that cursor. 47112801352808 in hex is 2AD94DC9FC68, and it happens to reside in the UGA of my session.
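You can double-check the decimal-to-hex conversion with a quick query, for example:

SQL> select to_char(47112801352808, 'XXXXXXXXXXXX') addr_hex from dual;

which returns 2AD94DC9FC68.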

Naturally I asked Cary Millsap whether he had spotted this LOBREAD already and yes, Cary’s way ahead of me – he said that Method R’s mrskew tool v2.0, which will be out soon, will support it too.

It’s hard not to end up talking about Cary’s work when talking about performance profiling and especially Oracle SQL trace, so here are a few very useful bits which you should know about:

If you want to understand the SQL trace & profiling stuff more, then the absolute must-read document is Cary’s paper on the subject, Mastering Performance with Extended SQL Trace.

Also, if you like to optimize your work like me (in other words: you’re proactively lazy ;-) and you want to avoid the boring “where-the-heck-is-this-tracefile-now” and “scp-copy-it-over-to-my-pc-for-analysis” work, then check out Cary’s MrTrace plugin for SQL Developer (costs ~50 bucks and has a 30-day trial). I’ve ended up using it myself regularly, although I still tend to avoid GUIs.


Posted by tanelpoder

Implicit datatype conversion in the parsing phase – something new I learned today!

Wow, I wasn’t aware that Oracle can also do implicit datatype conversion for literal strings during the parsing phase!

SQL> @desc t
           Name                            Null?    Type
           ------------------------------- -------- ----------------------------
    1      A                                        NUMBER(38)

SQL> select * from t where a = '1' || RPAD('0',5,'0');

no rows selected

SQL> @x
Display execution plan for last statement for this session from library cache...

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------
SQL_ID  d7r6md8wfu74d, child number 0
-------------------------------------
select * from t where a = '1' || RPAD('0',5,'0')

Plan hash value: 1601196873

--------------------------------------------------------
| Id  | Operation         | Name | E-Rows | Cost (%CPU)|
--------------------------------------------------------
|   0 | SELECT STATEMENT  |      |        |     2 (100)|
|*  1 |  TABLE ACCESS FULL| T    |      1 |     2   (0)|
--------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("A"=100000)

Do you see what happened? The expression '1' || RPAD('0',5,'0') has been evaluated, which returns a string. And this string '100000' has been converted to the NUMBER 100000 during the parse phase – otherwise you would see quotes around the number above, with a TO_NUMBER() function around it (so that Oracle could compare the NUMBER column “A” to a value of the same datatype)…

I’ll add a TO_CHAR() around the column A just to demonstrate that a VARCHAR2 value (which is what the literal in my original query is) will be shown with quotes, like every normal string:

SQL> select * from t where to_char(a) = '1'||rpad('0',5,'0');

no rows selected

SQL> @x
Display execution plan for last statement for this session from library cache...

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------
SQL_ID  7yf6j8fdyrvk7, child number 0
-------------------------------------
select * from t where to_char(a) = '1'||rpad('0',5,'0')

Plan hash value: 1601196873

--------------------------------------------------------
| Id  | Operation         | Name | E-Rows | Cost (%CPU)|
--------------------------------------------------------
|   0 | SELECT STATEMENT  |      |        |     2 (100)|
|*  1 |  TABLE ACCESS FULL| T    |      1 |     2   (0)|
--------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_CHAR("A")='100000')

Let’s see whether this trick is somehow done also for bind variables:

SQL> var x varchar2(10)
SQL> exec :x:= '1' || RPAD('0',5,'0');

PL/SQL procedure successfully completed.

SQL> print x

X
--------------------------------
100000

SQL> select * from t where a = :x;

no rows selected

SQL> @x
Display execution plan for last statement for this session from library cache...

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------
SQL_ID  45f39y7580bdp, child number 2
-------------------------------------
select * from t where a = :x

Plan hash value: 1601196873

--------------------------------------------------------
| Id  | Operation         | Name | E-Rows | Cost (%CPU)|
--------------------------------------------------------
|   0 | SELECT STATEMENT  |      |        |     2 (100)|
|*  1 |  TABLE ACCESS FULL| T    |      1 |     2   (0)|
--------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

   1 - (VARCHAR2(30), CSID=873): '100000'

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("A"=TO_NUMBER(:X))

Apparently not! And this kind of makes sense – if this string-to-number conversion were done during the parse phase, Oracle wouldn’t know the actual value yet (in the bind variable memory), so it can’t convert it to a number in advance :-)

This is an interesting little detail… I didn’t know that in addition to the implicit datatype conversion during query execution (using TO_CHAR, TO_NUMBER functions etc.), Oracle can sometimes convert a string literal to the NUMBER datatype under the hood at parse time!

P.S. I tested this on Oracle 11.2.0.2 with optimizer_features_enable set from 11.2.0.2 all the way back to 8.0.0 and the behavior was the same. I didn’t find any mention of this conversion in the CBO tracefile, although after a filter pushdown transformation (FPD) the string literal was already shown as a number datatype. If anyone still has access to ancient Oracle database versions (like 9.2 and 10.1 ;-) then let me know whether you see the same results!
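For anyone wanting to repeat the test, the optimizer downgrade can be done at the session level, for example:

SQL> alter session set optimizer_features_enable = '8.0.0';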


Posted by tanelpoder

ORA-4031 errors, contention, cursor management issues and shared pool fragmentation – free secret seminar!

Free stuff! Free stuff! Free stuff! :-)

The awesome dudes at E2SN have done it again! (and yes, Tom, this time the “we at E2SN Ltd” doesn’t mean only me alone ;-)

On Tuesday 22nd March I’ll hold two (yes, two) Secret Oracle Hacking Sessions – about ORA-04031: unable to allocate x bytes of shared memory errors, cursor management issues and other shared pool related problems (like fragmentation). This event is free for all! You’ll just need to be fast enough to register – both events have a 100 attendee limit (due to my GotoWebinar account limitations).

I am going to run this online event twice, so a total of 200 people can attend (don’t register for both events, please). One event is in the morning (my time) to cater for the APAC/EMEA region and the other session is for the EMEA/US/Americas audience.

The content will be the same in both sessions. There will be no slides (you can’t fix your shared pool problems with slides!) but there will be demos, scripts, live examples and fun (for the geeks among us anyway – others can go and read some slides instead ;-)!


Posted by tanelpoder

Expert Oracle Exadata book – Alpha chapters available for purchase!

Hi,

Apress has made the draft versions of our Expert Oracle Exadata book available for purchase.

How this works is:

  1. You purchase the “alpha” version of the Expert Oracle Exadata book
  2. You get access to draft/alpha PDF versions of some chapters now!
  3. As more chapters are added and existing ones updated, you’ll receive an email and can download those too
  4. You will get a PDF copy of the final book once it’s out!

This is an awesome deal if you can’t wait until the final launch and want to get ahead of the curve with your Exadata skills ;-)

Buy the alpha version of our Expert Oracle Exadata book from Apress here!

If you haven’t heard about this book before – I’m one of the 3 authors, writing it together with Kerry Osborne and Randy Johnson from Enkitec; our official tech reviewer is none other than THE Kevin Closson, and we are also getting some (unofficial) feedback from Oracle database junkie Arup Nanda.

So this book will absolutely rock and if you want a piece of it now, order the alpha book above!

P.S. This hopefully also explains why I’ve been so quiet with my blogging lately – can’t write a book and do many other things at the same time… (at least if you want to do it well…)


Posted by tanelpoder

Advanced Oracle Troubleshooting v2.0 Online Deep Dives in April and May 2011

Due to a lot of interest I’m going to do another run of my Advanced Oracle Troubleshooting v2.0 Online Deep Dive seminars in April and May (initially I had planned to do it no earlier than Sep/Oct…).

Check the dates & additional info out here:

P.S. People who already attended the AOT2 seminars last year – I will schedule the follow-up Q&A sessions in mid-March!


Posted by tanelpoder

Is this valid SQL syntax? :-)

I’m talking about this:

select-1from from dual;

Looks invalid, right? Well, let’s run it:

SQL> select-1from from dual;

       ROM
----------
 -1.0E+000

This is because:

  1. Oracle doesn’t need whitespace for tokenizing the SQL statement (differences in character classes will do – as I’ve explained here)
  2. The first from “keyword” in the statement above is broken into two tokens, because an “F” right after a digit means that the preceding number is a FLOAT (and “D” means DOUBLE) – so the tokenizer stops right there, knowing that whatever comes after this character (“ROM”) is the next token, which according to Oracle SQL syntax rules gets assigned as the output column alias

The following funky-looking SQL statements are also valid:

SQL> select.1e2ffrom dual;

     .1E2F
----------
  1.0E+001

SQL> select.1e2fas"."from dual;

         .
----------
  1.0E+001

In the upper example, “.1e2f” means the number .1 * 10^2 (scientific notation) represented internally as a FLOAT, and in the lower one I’ve just added a column alias with the “AS” keyword to make the SQL look a bit crazier.
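For comparison, the same literal suffixes (FLOAT “f” and DOUBLE “d”, i.e. BINARY_FLOAT/BINARY_DOUBLE literals in 10g+) of course also work when written with normal whitespace – a minimal example:

SQL> select 1f as float_val, 1d as double_val from dual;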

:-)


Posted by mwidlake

You can explain an invalid SQL statement

I’m in “nightmare weekend before presenting” mode. I’m up to my eyes at work (and have been for ages, thus the quiet blog) and my recent weekends have been full of normal {and abnormal} life.

As is the way, when up against it and putting together my proofs for wild claims, everything breaks subtly and makes my wild claims look a little, well, wild – even though they are real issues I’ve seen, worked through and fixed in the day job. *sigh*. It does not help when you come across little oddities you have never seen before and end up spending valuable time looking into them.

So here is one. I’m just putting together a very, very simple demo of how the number of rows the CBO expects to see drops off as you move outside the known range. In the below you can see the statement I am using (I keep passing in different days of the month and watching the expected number of rows drop until I hit 1 expected row), but look at how it progresses to the last entry…
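As an aside, the DATE_TEST_FLAT table itself isn’t shown here; a purely hypothetical setup consistent with the description further down (10,000 rows spread over 100 days, with statistics gathered) could look something like this:

create table date_test_flat as
select trunc(sysdate) - mod(rownum, 100) date_1   -- 100 distinct days, 100 rows per day
     , rownum                             id
from   dual
connect by level <= 10000;

exec dbms_stats.gather_table_stats(user, 'DATE_TEST_FLAT')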

mdw11> select count(*) from date_test_flat where date_1=to_date('&day-02-2011','DD-MM-YYYY')
  2  /
Enter value for day: 01

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    16 |   128 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> /
Enter value for day: 15

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     2 |    16 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-15 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> /
Enter value for day: 21

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> /
Enter value for day: 30

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE('30-02-2011','DD-MM-YYYY'))

mdw11>

The expected number of rows drops, becomes 1 – and then has shot up to 99 again (which is the expected number in the known range, as I have 10,000 rows spread over 100 days). My immediate thought is “Wow! Maybe Oracle have put some odd fix in where, when you go well out of range, it reverts to expecting an average number of rows”. Nope. It is because I asked for the data for the 30th of February. And I did not get an error.

I think it is because I have set autotrace traceonly explain. This causes the SQL statement not to be executed {if it is just a select, not an insert, update or delete}. It seems the costing section of the CBO is not so good at spotting duff dates – and it then gets the costing wrong.

I’ve spotted that the format of the filter also changes when the date is invalid. I really want to check that out – but I’d better continue failing to write the presentation!

I know, it’s pretty pointless knowing this, but it just amused me. Below is just a quick continuation to show that if the statement is to be executed you get an error and no plan, and that utterly duff dates can be passed in.

mdw11> /
Enter value for day: 28

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> SET AUTOTRACE ON
mdw11> /
Enter value for day: 20
any key>

  COUNT(*)
----------
         0

1 row selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-20 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        821  consistent gets
          0  physical reads
          0  redo size
        421  bytes sent via SQL*Net to client
        415  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

mdw11> /
Enter value for day: 30
select count(*) from date_test_flat where date_1=to_date('30-02-2011','DD-MM-YYYY')
                                                         *
ERROR at line 1:
ORA-01839: date not valid for month specified

mdw11> set autotrace traceonly explain
mdw11> /
Enter value for day: 30

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE('30-02-2011','DD-MM-YYYY'))

mdw11> /
Enter value for day: 45

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE('45-02-2011','DD-MM-YYYY'))

Posted by tanelpoder

Asynch descriptor resize wait event in Oracle

A lot of people have started seeing the “asynch descriptor resize” wait event in Oracle 11gR2. Here’s my understanding of what it is. Note that I didn’t spend too much time researching it, so some details may not be completely accurate, but my explanation will at least give you an idea of why the heck you suddenly see this event in your database.

FYI, there’s a short but incomplete explanation of this wait event also documented in MOS Note 1081977.1.



The “direct path loader” (KCBL) module is used for performing direct path IO in Oracle, such as direct path segment scans and reading/writing spilled over workareas in temporary tablespace. Direct path IO is used whenever you see “direct path read/write*” wait events reported in your session. This means that IOs aren’t done from/to buffer cache, but from/to PGA directly, bypassing the buffer cache.



This KCBL module tries to dynamically scale up the number of asynch IO descriptors (AIO descriptors are the OS kernel structures, which keep track of asynch IO requests) to match the number of direct path IO slots a process uses. In other words, if the PGA workarea and/or spilled-over hash area in temp tablespace gets larger, Oracle also scales up the number of direct IO slots. Direct IO slots are PGA memory structures helping to do direct IO between files and PGA.



In order to be able to perform this direct IO asynchronously, Oracle also dynamically scales up the number of OS asynch IO descriptors, one for each slot (up to 4096 descriptors per process). When Oracle doesn’t need the direct IO slots anymore (when the direct path table scan has ended or a workarea/tempseg gets cancelled) then it scales down the number of direct IO slots and asynch IO descriptors. Scaling asynch IO descriptors up/down requires issuing syscalls to OS (as the AIO descriptors are OS kernel structures).



I guess this is supposed to be an optimization to avoid running out of OS AIO descriptors, by releasing them when they’re not needed, but as that Metalink note mentioned, the resize apparently sucks on Linux. Perhaps that’s why other ports also suffer and have seen the same wait event.



The “asynch descriptor resize” event itself is really an IO wait event (recorded in the wait class Other though), waiting for reaping outstanding IOs. Once this wait is over, then the OS call to change the amount of asynch IO descriptors (allocated to that process) is made. There’s no wait event recorded for the actual “resize” OS call as it shouldn’t block.



So, the more direct IO you do, especially when sorting/hashing to temp with frequent workarea closing/opening, the more of this event you’ll see (and it’s probably the same for regular tablespace direct path IO too).



This problem wouldn’t be noticeable if Oracle kept the async IO descriptors cached and didn’t constantly allocate/free them. Of course then you might end up running out of AIO descriptors in the whole server more easily. Also, I don’t know whether there would be some OS issues with reusing cached AIO descriptors – perhaps there is a good reason why such caching isn’t done.



Nevertheless, what’s causing this wait event is too-frequent AIO descriptor resizing due to changes in the direct IO slot count (due to changes in PGA workareas/temp segments, and perhaps also when doing frequent direct path scans through lots of tables/partitions).



So, the obvious question here is: what to do about this wait event? Well, first you should check how big a part of your total response time this event takes at all (a quick instance-wide sanity check is shown after the list below).



  1. If it’s something like 1% of your response time, then this is not your problem anyway and troubleshooting it further would not be practical – it’s just how Oracle works :)
  2. If it’s something like 20% or more of your response time, then it’s clearly a problem and you’d need to talk to Oracle Support to sort out the bug
  3. If it’s anything in between, make sure you don’t have an IO problem first, before declaring this a bug. In one recent example I saw direct path reads take over a second on average when this problem popped up. The asynch descriptor resize wait event may well disappear from the radar once you fix the root cause – slow IO (or SQL doing too much IO). Remember, the asynch descriptor resize wait event, at least on Linux, is actually an IO wait event: the process is waiting for outstanding IO completion before the descriptor count increase/decrease can take place.
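As a rough instance-wide check (V$SESSION_EVENT or ASH give the per-session picture), something like this shows how much time has gone into the event and how it compares to total DB time – a minimal sketch:

select event, total_waits, round(time_waited_micro/1e6, 1) seconds_waited
from   v$system_event
where  event = 'asynch descriptor resize';

select round(value/1e6, 1) db_time_seconds
from   v$sys_time_model
where  stat_name = 'DB time';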


Posted by tanelpoder

A little new feature for shared pool geeks :-)

If you’ve taken any shared pool dumps from 11g+ databases lately, you might have wondered about this:

http://tech.e2sn.com/oracle/troubleshooting/shared-pool
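If you want to take such a dump yourself, one common way is the heapdump event (level 2 dumps the SGA/shared pool heap) – use it with care on a busy production instance, as dumping a large shared pool can take a while:

SQL> alter session set events 'immediate trace name heapdump level 2';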


Posted by tanelpoder

Another (secret) hacking session with me – using Oracle Session Snapper for flexible troubleshooting (and fun)

And this time we have audio !!! (Wow!)

Following the huge success of my last hacking session, planned while drinking beer at Graham Wood’s OOW pre-party and delivered from Miracle’s massive Oracle Closed World event in Thirsty Bear (between drinking beers), I’m announcing another hacking session:

What: Using Session Snapper for flexible Oracle Performance Troubleshooting

When: Wednesday 27th Oct 9:00-10:00 AM PDT (US West coast / California time). Check what this is in your time zone here

Where: Internet! -> Sign up here: http://tech.e2sn.com/secret

You’ll need to register fast and be “there” on time as my current GotoWebinar account only allows 100 attendees to log on… last time over 100 people signed up, but “luckily” fewer actually showed up, so nobody got left outside!

BTW, I have figured out what went wrong with the audio last time (and caused my voice to disappear at the end of the presentation). A program, which I accidentally launched via a keyboard shortcut, grabbed my mic input, so GotoWebinar’s app couldn’t access it anymore.

See you soon!

