Execution plans

Not NULL

Here’s a little detail that I’ve known for years – but keep forgetting until I embarrass myself by rediscovering it (usually in front of a client). I’ll start with a demonstration of a useful little feature of mandatory columns:


drop table t1 purge;

create table t1
as
select
	*
from
	all_objects
where
	rownum <= 10000
;

execute dbms_stats.gather_table_stats(user,'t1')

create index t1_i1 on t1(object_name);

set autotrace traceonly explain

select count(*) from t1;

/*

-----------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |    13   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |            |          |
|   2 |   INDEX FAST FULL SCAN| T1_I1 | 10000 |    13   (0)| 00:00:01 |
-----------------------------------------------------------------------

*/

Oracle can use the index on column object_name to count the number of rows in the table because the column has been declared NOT NULL, so every row in the table also has to appear in the index. Let’s just demonstrate that by changing the column definition:


alter table t1 modify object_name null;
select count(*) from t1;

/*

-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    40   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |   TABLE ACCESS FULL| T1   | 10000 |    40   (0)| 00:00:01 |
-------------------------------------------------------------------

*/
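
The change shows up in the data dictionary as well: a quick query against user_tab_columns (just one way of checking) should now report the NULLABLE flag for the column as 'Y':


select
	column_name, nullable
from
	user_tab_columns
where
	table_name  = 'T1'
and	column_name = 'OBJECT_NAME'
;
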

Now let’s make the column mandatory again – by adding a constraint:
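
One way of doing that is with a table-level check constraint; the following is just a sketch (the constraint name t1_objname_nn is invented for the example), and once the constraint is in place it's worth re-running the count(*) to see whether the optimizer treats it the same way as the column-level declaration:


alter table t1 add constraint t1_objname_nn check (object_name is not null);

select count(*) from t1;
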

Joins – MJ

The final join mechanism in my “all joins are nested loop joins” argument is the Merge Join – a join mechanism that depends on both its row sources being pre-sorted on the join columns. The note on hash joins pointed out that a “traditional” nested loop join may result in repeated visits to data blocks [...]
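
As a quick illustration of the mechanism (a sketch only; the t1/t2 table names and the hints are purely for demonstration) you can push the optimizer towards a merge join with the use_merge hint; the resulting plan typically shows a MERGE JOIN operation with a SORT JOIN above each row source, unless one of the row sources already arrives in the right order:


select
	/*+ leading(t1) use_merge(t2) */
	t1.owner, t2.object_name
from
	t1, t2
where
	t2.object_name = t1.object_name
;
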

Joins – HJ

In the second note on my thesis that “all joins are nested loop joins with different startup costs” I want to look at hash joins, and I’ll start by going back to the execution plan I posted on “Joins – NLJ”. [...]

Joins – NLJ

This is part one of my thesis that “all joins are nested loop joins – it’s just the startup overheads that vary”; there will be a note on “Joins – HJ” and “Joins – MJ” to follow. In some ways, the claim is trivially obvious – a join simply takes two row sources and compares [...]

double trouble

In the latest Quiz Night, I asked how you could make a query more efficient by changing a two table join into a three table join – with the clue that my third table was a repeat of the first table. Gary Myers, in comment 4, provided the type of answer I was looking for. Sometimes [...]

10053 viewer

I’ve been trying to find a way to post an executable for several weeks because I’ve been sent a simple viewer for 10053 trace files written by Hans-Peter Sloot of Atos Origin and Robert van der Ende. They wrote this viewer because trace files from event 10053 can be enormous, and scrolling back and forth [...]

Rule Rules

Everybody knows you shouldn’t be using the Rule-based optimizer (RBO) any more – everyone, that is, except some of the folks at Oracle Corp. I had a conversation a few weeks ago with someone who was having a problem with their standby database on 10.2 because a query against v$archive_gap was taking a very long [...]

Analyze This – 2

If you’ve run the scripts from Analyze This, I hope you found that the query gave you two different execution plans. This is my output from the test (with a little cosmetic tidying):

=============
Using Analyze
=============

Execution Plan
----------------------------------------------------------
Plan hash value: 1838229974

-------------------------------------------------------------------
| Id | Operation | Name | [...]
