statistics

Frequency Histogram 5

In an earlier post on frequency histograms I described how Oracle creates an approximate histogram when dealing with character columns, and warned you that the strategy could lead to a couple of anomalies if you were unlucky. This note describes one such anomaly. We start with a slightly curious data set:

create table t1  (v1 varchar2(42));

insert	into t1
select	'short'
from	all_objects
where 	rownum <= 100
;

insert into t1
select	'shorter'
from 	all_objects
where 	rownum <= 300
;

insert into t1
select 	'shortest'
from 	all_objects
where 	rownum <= 500
;

insert into t1
select 	'shortest_thing_in_the_recorded_data_not'
from 	all_objects
where 	rownum <= 700
;

insert into t1
select 	'shortest_thing_in_the_recorded_data_really'
from 	all_objects
where 	rownum <= 900
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'t1',
		estimate_percent => 100,
		method_opt 	 => 'for all columns size 254'
	);
end;
/

If we run the script from the earlier posting to see what Oracle has stored, we get the following:

ENDPOINT_NUMBER  FREQUENCY HEX_VAL                         CHR(TO ENDPOINT_ACTUAL_VALUE
--------------- ---------- ------------------------------- ------ ------------------------------------------
            100        100  73686F72740018721DBD93B2E00000 short  short
            400        300  73686F727465871679E2CF40400000 shorte shorter
            900        500  73686F727465871679E2CF40400000 shorte shortest
           2500       1600  73686F727465871679E2CF40400000 shorte shortest_thing_in_the_recorded_d

Because two of our values were identical in their first 32 characters, Oracle has recorded them as the same value and stored them under a single (incomplete) entry. So what happens when you query for a value that isn’t in the table and doesn’t get mentioned in the histogram – but looks similar to the two long items?


set autotrace traceonly explain

select
	count(*)
from
	t1
where
	v1 = 'shortest_thing_in_the_recorded_data'
;

select
	count(*)
from
	t1
where
	v1 = 'shortest_thing_in_the_data'
;

set autotrace off

-----------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    29 |     4 |
|   1 |  SORT AGGREGATE    |      |     1 |    29 |       |
|*  2 |   TABLE ACCESS FULL| T1   |  1600 | 46400 |     4 |
-----------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("V1"='shortest_thing_in_the_recorded_data')

-----------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    29 |     4 |
|   1 |  SORT AGGREGATE    |      |     1 |    29 |       |
|*  2 |   TABLE ACCESS FULL| T1   |    50 |  1450 |     4 |
-----------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("V1"='shortest_thing_in_the_data')

Neither of the two strings in my predicate appears in the data or in the histogram – but the first string (after applying the histogram algorithm) is a match for an entry in the histogram, so Oracle reports the value derived from the matching endpoint_number as the cardinality. The second string doesn’t appear even after applying the algorithm – so Oracle has supplied a cardinality of 50, which is half the cardinality of the least frequently occurring value that it can find in the histogram. (Note – if you are running 10.2.0.3 or earlier the cardinality for this case would be 1; the optimizer changed in 10.2.0.4.)
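The decision process can be summarised in a few lines of Python (a sketch of my own, not Oracle's code, using the frequencies from the histogram above – the dictionary keys are the values as the histogram sees them, i.e. truncated to 32 characters):

```python
# Cardinality estimate for an equality predicate against a frequency
# histogram (10.2.0.4+ behaviour), as described in the text above.

histogram = {
    "short":    100,
    "shorter":  300,
    "shortest": 500,
    "shortest_thing_in_the_recorded_d": 1600,  # two real values merged
}

def estimated_cardinality(value: str) -> float:
    key = value[:32]                     # the histogram only sees 32 characters
    if key in histogram:
        return histogram[key]            # matches a (possibly merged) endpoint
    return min(histogram.values()) / 2   # half the least frequent value

print(estimated_cardinality("shortest_thing_in_the_recorded_data"))  # 1600
print(estimated_cardinality("shortest_thing_in_the_data"))           # 50.0
```

The two results reproduce the 1,600 and 50 seen in the execution plans above.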

This means we have demonstrated a situation where two “rare” values that should be treated identically end up with dramatically different cardinality estimates and could, therefore, end up being subject to dramatically different execution plans.

Conclusion: If you have fairly long, and similar, strings in a column that is a good candidate for a frequency histogram (e.g. a very descriptive status column) then you have a problem if a value that is very rare looks identical to a very popular value up to the first 32 characters. You may find that the only solution is to change the list of legal values (although various strategies involving virtual columns or function-based indexes can bypass the problem).

Frequency Histogram 4

In an earlier note on interpreting the content of frequency histograms I made a throwaway comment about the extra complexity of interpreting frequency histograms on character-based columns. This note starts to examine some of the complications.

The driving problem behind character columns is that they can get quite large – up to 4,000 bytes – so the content of an “accurate histogram” could become quite large, and Oracle seems to have taken a strategic decision (at some point in history) to minimise this storage. As a result we can see an algorithm that works roughly as follows:

  • Take the first six bytes of the string (after padding it to 20 characters with nulls (varchar) or spaces (char))
  • View this as a hexadecimal number, and convert to decimal
  • Round to 15 significant digits and store as the endpoint_value
  • If this produces duplicate values, store the first 32 bytes of each string as the endpoint_actual_value

Given this algorithm, we can do an approximate reversal (which will only be needed when the endpoint_actual_value is not available) by formatting the endpoint_value into a hex string, extracting the first six pairs of digits, converting to numeric and applying the chr() function to get a character value. (You’ll have to fiddle with this bit of code to handle multibyte character sets, of course).
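As a sanity check, the bullet points and the approximate reversal can be modelled in a few lines of Python (a sketch of my own – the 15-byte padding and the use of Python's round() are assumptions about the mechanism, not Oracle's actual code):

```python
# Model of the character-histogram encoding described above, plus the
# approximate reversal.  This is an illustration, not Oracle's code.

def endpoint_value(s: str) -> int:
    """Pad/truncate the string to 15 bytes (nulls, as for varchar2),
    read the bytes as one big number, round to 15 significant digits."""
    raw = s.encode("ascii")[:15].ljust(15, b"\x00")
    n = int.from_bytes(raw, "big")
    digits = len(str(n))
    return round(n, 15 - digits) if digits > 15 else n

def approx_reverse(v: int) -> str:
    """Format the stored value as 30 hex digits, take the first six byte
    pairs and map them back to characters (single-byte charset assumed)."""
    hx = f"{v:030X}"
    return "".join(chr(int(hx[i:i + 2], 16)) for i in range(0, 12, 2))

# The first five characters survive the rounding; the sixth may be off by one
print(approx_reverse(endpoint_value("CLOSED")))
print(approx_reverse(endpoint_value("DELIVERED")))
```

Rounding to 15 significant decimal digits discards roughly the last nine bytes of information, which is why long strings that agree in their leading bytes can collapse to the same endpoint_value.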

With a nice friendly single-byte character code, the first 5 characters will be extracted correctly, and the sixth will be pretty close to the original. Here’s an example (which also includes the logic to convert the endpoint_number into a frequency):


rem
rem     How to read a frequency histogram on a character column
rem

select
        endpoint_number,
        endpoint_number - nvl(prev_endpoint,0)  frequency,
        hex_val,
        chr(to_number(substr(hex_val, 2,2),'XX')) ||
        chr(to_number(substr(hex_val, 4,2),'XX')) ||
        chr(to_number(substr(hex_val, 6,2),'XX')) ||
        chr(to_number(substr(hex_val, 8,2),'XX')) ||
        chr(to_number(substr(hex_val,10,2),'XX')) ||
        chr(to_number(substr(hex_val,12,2),'XX')),
        endpoint_actual_value
from    (
        select
                endpoint_number,
                lag(endpoint_number,1) over(
                        order by endpoint_number
                )                                                       prev_endpoint,
                to_char(endpoint_value,'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')hex_val,
                endpoint_actual_value
        from
                dba_tab_histograms
        where
                owner = 'XXX'
        and     table_name = 'YYY'
        and     column_name = 'STATUS_COLUMN'
        )
order by
        endpoint_number
;

set doc off
doc

ENDPOINT_NUMBER  FREQUENCY HEX_VAL                         CHR(TO ENDPOINT_ACTUAL_VALUE
--------------- ---------- ------------------------------- ------ ------------------------------------------
          40254      40254  434C4F534543E9175A7D6A7DC00000 CLOSEC CLOSED
          40467        213  434F4E4649524E7E0D374A58200000 CONFIR CONFIRMED
          40592        125  44454C49564550D642CA2965000000 DELIVE DELIVERED
          41304        712  494E564F49432991BF41C99E800000 INVOIC INVOICED
          41336         32  4E4556FFFFFFF1D5FBDBC624E00000 NEVÿÿÿ NEW
          41434         98  5041494400000C08C1A415AD800000 PAID   PAID
          41435          1  5041594D454E5B08040F761BE00000 PAYMEN PAYMENT OVERDUE
          41478         43  5049434B4544013F0FF93F6EC00000 PICKED PICKED
          41479          1  524546554E4436441DE2A321000000 REFUND REFUND MADE
          41480          1  524546554E4436441DE2A321000000 REFUND REFUND PENDING
          41482          2  52455455524E2F6693F753B6C00000 RETURN RETURNED

11 rows selected.

#

You’ll notice from the sample output that “REFUND MADE” and “REFUND PENDING” are identical in their numeric representation, and that’s why all the actual values have been stored. You can also see how rounding problems have converted CLOSED to CLOSEC, and the padding applied to short strings (combined with rounding errors) has converted NEW to NEVÿÿÿ.

There are a number of side effects to the 6 bytes / 32 character limits that Oracle has imposed for histograms – and I’ll pick up a couple of those in further posts.

Footnote: It’s interesting to note that space utilisation isn’t considered a threat in 11g when looking at the ‘synopsis’ approach of creating the ‘approximate NDV’ for columns. The difference may be due to the passage of time, of course; on the other hand, the threat from synopses is largely limited to disc space, whereas histograms have to take up memory (in the dictionary cache / row cache) whenever they are used.

Frequency Histogram 3

Here’s the output I get from querying dba_tab_histograms for a column that has been given a frequency histogram by a call to dbms_stats.gather_table_stats().

ENDPOINT_NUMBER ENDPOINT_VALUE
--------------- --------------
              2             -1
           3190              0
           3813              1
           4310              4
           4480              5
           7507            999
          17210           2000
          17212           2002
          17213           2004
          17215           2006
          17729           2008
          17750           2009
          17752           2010
          17904           2011
          17906           2012
          17909           2017
          17994           5000
          18006           5001
          18009           5002
          18023           5003
          18062           6001
          39885          11000

In an earlier blog I wrote about a query that turned the figures from a frequency histogram into a list of column values and column frequencies, and if I had used that query against the histogram data I would have found that the value 11,000 appears 21,827 times – according to the histogram.

But there’s a problem with this histogram: the value 11,000 actually appeared roughly 2 million times in the real data set – and when I enabled autotrace on the query that I had used to count the number of times that 11,000 appeared, the optimizer’s prediction (the cardinality in the execution plan) matched the actual value fairly closely. So how did the optimizer manage to get from the histogram to the correct cardinality?

Frequency Histograms 2

I find it convenient occasionally to “translate” a frequency histogram into a report of the underlying data (sometimes to see how closely the histogram matches the real data). To demonstrate the type of query I use I’ve created a data set with a small number of distinct values and generated a frequency histogram on the data set. This is what the data and histogram look like:


SQL> desc t1
 Name                    Null?    Type
 ----------------------- -------- ----------
 N1                      NOT NULL NUMBER

SQL> select n1, count(*) from t1 group by n1 order by n1;

        N1   COUNT(*)
---------- ----------
        10          4
        20          3
        30          6
        40         38
        50          4
        60         13
        70          6
        80         41
        90          2
       100          3

SQL> select
  2     endpoint_number, endpoint_value
  3  from
  4     user_tab_histograms
  5  where
  6     table_name  = 'T1'
  7  and        column_name = 'N1'
  8  order
  9     by endpoint_number
 10  ;

ENDPOINT_NUMBER ENDPOINT_VALUE
--------------- --------------
              4             10
              7             20
             13             30
             51             40
             55             50
             68             60
             74             70
            115             80
            117             90
            120            100

If you look at the histogram data you’ll see that the “endpoint_value” column holds a list of the values that appear in column n1, and the “endpoint_number” is the ‘cumulative frequency’ for the appearances of the endpoint_value – the number 10 appears 4 times, the number 20 appears 7 – 4 = 3 times, the value 30 appears 13 – 7 = 6 times, and so on.
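The lag()-style subtraction is trivial to model outside the database; here is a short Python sketch of my own, using the histogram figures above, of the cumulative-to-frequency translation:

```python
# Translate cumulative endpoint_numbers into per-value frequencies,
# exactly the arithmetic described in the text.

endpoints = [(4, 10), (7, 20), (13, 30), (51, 40), (55, 50),
             (68, 60), (74, 70), (115, 80), (117, 90), (120, 100)]

frequency = {}
prev = 0                                   # nvl(lag(...), 0) equivalent
for endpoint_number, endpoint_value in endpoints:
    frequency[endpoint_value] = endpoint_number - prev
    prev = endpoint_number

print(frequency)   # {10: 4, 20: 3, 30: 6, 40: 38, ...}
```

The resulting dictionary matches the `select n1, count(*)` output shown at the top of the post.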

Statistically summarize Oracle Performance data

Here’s the draft of this post http://karlarao.tiddlyspot.com/#dbms_stat_funcs , expounded version coming up!

dba_tab_modifications

In case you don’t follow the link to Martin Widlake’s blog (see right) very often, he’s done a couple of recent posts on dba_tab_modifications that are worth reading. (And I’ve just discovered the ‘gutter=”false”;’ option for the ‘sourcecode’ tag in one of the comments on Martin’s blog – and that’s also a helpful feature.) Filed [...]

Extended statistics and function-based indexes

About a year ago I discovered a nice feature of the Oracle 10gR2 CBO: to overcome an issue with the calculated selectivity for predicates on multiple columns, it can use the DISTINCT_KEYS of an available index. A couple of weeks ago the bug fix mentioned in the OTN thread actually helped to solve a performance issue of a query. And [...]

Frequency histograms - edge cases

Oracle introduced with the 10.2.0.4 patch set a significant change in how non-existing values are treated when there is a frequency histogram on a column / expression. See Jonathan Lewis' blog post, which is probably the most concise description of the issue. In a nutshell the change is about the following (quoted from Jonathan's post): "If the value you supply does not appear in the histogram, but is inside the low/high range of the histogram then the cardinality will be half the cardinality of the least frequently occurring value that is in the histogram".

I'm still a bit puzzled why Oracle introduced such a significant change to the optimizer with a patch set, but one of the most obvious reasons might be that the change allows Oracle to generate frequency histograms using a rather low sample size, because missing one of the existing values is no longer as much of a threat as it used to be (previously a value that didn't appear in the histogram got an estimate of 1).

In fact, when using the default DBMS_STATS.AUTO_SAMPLE_SIZE, Oracle does exactly this: it uses by default a quite low sample size to perform the additional runs required for each histogram – probably an attempt to minimize the additional work that needs to be done for histogram generation.

It is, however, quite interesting to see how exactly this behaviour, together with the new treatment of non-existing values, can turn into a threat, as a recent thread on OTN demonstrated.

Consider the following scenario: You have a column with a highly skewed data distribution; there is a single, very popular value, and a few other, very unpopular values.

Now you have a query type that filters on this column and frequently searches for non-existing values. In order to speed up the query an index has been created on the column, and in order to make the optimizer aware that the data distribution is highly skewed a histogram is generated, so that the optimizer should favor the index only for the unpopular or non-existing values.

The following test case (run on 11.1.0.7) emulates this scenario:

create table t1 (
  id        number(*, 0)
, t_status  number(*, 0)
, vc        varchar2(100)
);

-- 1 million rows
-- One very popular value
-- Two very unpopular values
-- in column T_STATUS
insert /*+ append */ into t1 (id, t_status, vc)
with generator as (
    select /*+ materialize */
        level as id
    from
        dual
    connect by
        level <= 10000
)
select /*+ use_nl(v1, v2) */
    rownum as id
  , case
        when rownum <= 10 then 1
        when rownum <= 20 then 2
        else 0
    end as t_status
  , rpad('x', 100) as vc
from
    generator v1
  , generator v2
where
    rownum <= 1000000;

commit;

create index t1_idx on t1 (t_status);

exec dbms_stats.gather_table_stats(null, 'T1', method_opt => 'for all columns size 1 for columns t_status size 254')

The first interesting point are the generated column statistics:

SQL>
SQL> -- Note that the basic column statistics
SQL> -- are generated with a much higher sample size
SQL> -- The histogram however was created with a sample size
SQL> -- of 5,000 rows only
SQL> -- Therefore we get three distinct values but only a single bucket
SQL> -- But not always => Potential instability issue
SQL> select
2 column_name
3 , num_distinct
4 , sample_size
5 from
6 user_tab_col_statistics
7 where
8 table_name = 'T1';

COLUMN_NAME     NUM_DISTINCT SAMPLE_SIZE
--------------- ------------ -----------
ID                   1000000     1000000
T_STATUS                   3        5499
VC                         1     1000000

SQL>
SQL> -- This results in a single bucket
SQL> -- But not always => Potential instability issue
SQL> select
2 column_name
3 , endpoint_number
4 , endpoint_value
5 from
6 user_tab_histograms
7 where
8 table_name = 'T1'
9 and column_name = 'T_STATUS';

COLUMN_NAME     ENDPOINT_NUMBER ENDPOINT_VALUE
--------------- --------------- --------------
T_STATUS                   5499              0

Notice the inconsistency: The basic column statistics (number of distinct values, low value, high value) obviously have been generated with a much higher sample size (in fact a compute in this case) than the histogram on T_STATUS. The histogram therefore misses the unpopular values and consists only of a single value - the very popular one.

Now watch closely what happens to the cardinality estimates of the non-existing values:

SQL>
SQL> -- The popular value
SQL> explain plan for
2 select
3 *
4 from
5 t1
6 where
7 t_status = 0
8 ;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   999K|   102M|  4319   (2)| 00:00:52 |
|*  1 |  TABLE ACCESS FULL| T1   |   999K|   102M|  4319   (2)| 00:00:52 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("T_STATUS"=0)

13 rows selected.

SQL>
SQL> -- A non-existing value
SQL> explain plan for
2 select
3 *
4 from
5 t1
6 where
7 t_status = 1000
8 ;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   500K|    51M|  4316   (2)| 00:00:52 |
|*  1 |  TABLE ACCESS FULL| T1   |   500K|    51M|  4316   (2)| 00:00:52 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("T_STATUS"=1000)

13 rows selected.

SQL>
SQL> -- Two non-existing values
SQL> explain plan for
2 select
3 *
4 from
5 t1
6 where
7 t_status in (1000, 2000)
8 ;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  1000K|   102M|  4325   (3)| 00:00:52 |
|*  1 |  TABLE ACCESS FULL| T1   |  1000K|   102M|  4325   (3)| 00:00:52 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("T_STATUS"=1000 OR "T_STATUS"=2000)

13 rows selected.

SQL>

Ouch, ouch: Whereas the estimate for the popular value is correct, the estimates for the non-existing values are way off – in fact using an IN clause with two non-existing values gets an estimate of all the rows in the table.

The explanation: since the single bucket covers virtually all rows and is at the same time the least frequently occurring value in the histogram, half of its cardinality is still 50% of the total – so two non-existing values in an IN clause end up with a combined selectivity of 1.
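The arithmetic is easy to verify (a sketch of my own using the figures from this test case):

```python
# Selectivity arithmetic behind the plans above: the histogram's single
# bucket is also its least popular value.

num_rows    = 1_000_000
sample_size = 5_499          # rows sampled for the histogram
buckets     = {0: 5_499}     # T_STATUS = 0 was the only value caught

least_popular = min(buckets.values())
sel_missing   = (least_popular / sample_size) / 2   # half the least popular: 0.5

est_single = round(num_rows * sel_missing)              # one missing value
est_inlist = round(num_rows * min(2 * sel_missing, 1))  # IN (1000, 2000)

print(est_single, est_inlist)   # 500000 1000000
```

These are the 500K and 1000K row estimates shown in the plans.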

Furthermore, the usual decay applied to values outside the low/high column range doesn't apply here either – no matter what non-existing values get used, the overestimate stays the same.

These overestimates obviously have a significant impact – here the suitable index doesn't get used – and more complex plans might even turn into a complete disaster.

The default behaviour of generating histograms with a much lower sample size than the basic column statistics also introduces a kind of instability – try running the example several times. Sometimes one of the unpopular values might be caught by the default histogram generation, sometimes not. The effect is dramatic: if one of the unpopular values gets caught, the estimates will be reasonable again, since the least popular value in the histogram then does have a very low cardinality – if not, you get exactly the opposite result, as just demonstrated.
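A rough probability estimate (my own arithmetic, not from the original post) shows why this instability is more or less guaranteed: with a 5,500-row sample out of 1,000,000 rows, a value that occurs only 10 times is missed most of the time, but not always:

```python
# Chance that a random sample of 5,500 rows out of 1,000,000 misses all
# 10 occurrences of an unpopular value (approximating the draws as
# independent).

sample_rate = 5_500 / 1_000_000
p_miss_all  = (1 - sample_rate) ** 10

print(round(p_miss_all, 3))   # ~0.946: usually missed, occasionally caught
```

So roughly one run in twenty will catch an unpopular value and produce completely different estimates.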

If the old behaviour gets re-activated, the results are as expected with the same set of statistics:

SQL>
SQL> -- Switch off new behaviour
SQL> alter session set "_fix_control" = '5483301:off';

Session altered.

SQL>
SQL> -- The popular value
SQL> explain plan for
2 select
3 *
4 from
5 t1
6 where
7 t_status = 0
8 ;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   999K|   102M|  4319   (2)| 00:00:52 |
|*  1 |  TABLE ACCESS FULL| T1   |   999K|   102M|  4319   (2)| 00:00:52 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("T_STATUS"=0)

13 rows selected.

SQL>
SQL> -- A non-existing value
SQL> explain plan for
2 select
3 *
4 from
5 t1
6 where
7 t_status = 1000
8 ;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 546753835

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |   107 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1     |     1 |   107 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IDX |     1 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("T_STATUS"=1000)

14 rows selected.

SQL>
SQL> -- Two non-existing values
SQL> explain plan for
2 select
3 *
4 from
5 t1
6 where
7 t_status in (1000, 2000)
8 ;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3743026710

---------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |     1 |   107 |     5   (0)| 00:00:01 |
|   1 |  INLIST ITERATOR             |        |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1     |     1 |   107 |     5   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_IDX |     1 |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("T_STATUS"=1000 OR "T_STATUS"=2000)

15 rows selected.

Switching back to the former treatment – non-existing values leading to an estimate of 1 – will fix this issue; however, since this has a potential impact on every execution plan, thorough regression testing would be required before using it as a global setting.

Increasing the sample size is another option, provided it ensures that the unpopular values get caught by the histogram generation, so that the least popular value in the histogram is one with a low cardinality. Note however that this will increase the amount of work necessary to gather the statistics for the histograms.

SQL>
SQL> -- Gather statistics with 100% sample size
SQL> exec dbms_stats.gather_table_stats(null, 'T1', estimate_percent => null, method_opt => 'for all columns size 1 for columns t_status size 254')

PL/SQL procedure successfully completed.

SQL>
SQL> -- Now the statistics are more representative
SQL> select
2 column_name
3 , num_distinct
4 , sample_size
5 from
6 user_tab_col_statistics
7 where
8 table_name = 'T1';

COLUMN_NAME     NUM_DISTINCT SAMPLE_SIZE
--------------- ------------ -----------
ID                   1000000     1000000
T_STATUS                   3     1000000
VC                         1     1000000

SQL>
SQL> -- The histogram now covers also the unpopular values
SQL> select
2 column_name
3 , endpoint_number
4 , endpoint_value
5 from
6 user_tab_histograms
7 where
8 table_name = 'T1'
9 and column_name = 'T_STATUS';

COLUMN_NAME     ENDPOINT_NUMBER ENDPOINT_VALUE
--------------- --------------- --------------
T_STATUS                 999980              0
T_STATUS                 999990              1
T_STATUS                1000000              2

SQL>
SQL> -- Switch back on new behaviour
SQL> alter session set "_fix_control" = '5483301:on';

Session altered.

SQL>
SQL> -- A non-existing value
SQL> explain plan for
2 select
3 *
4 from
5 t1
6 where
7 t_status = 1000
8 ;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 546753835

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |   107 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1     |     1 |   107 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IDX |     1 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("T_STATUS"=1000)

14 rows selected.

SQL>
SQL> -- Two non-existing values
SQL> explain plan for
2 select
3 *
4 from
5 t1
6 where
7 t_status in (1000, 2000)
8 ;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3743026710

---------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |     1 |   107 |     5   (0)| 00:00:01 |
|   1 |  INLIST ITERATOR             |        |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1     |     1 |   107 |     5   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_IDX |     1 |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("T_STATUS"=1000 OR "T_STATUS"=2000)

15 rows selected.

As suggested by Jonathan Lewis in the OTN thread, another elegant solution to the problem of searching for non-existing values would be to add a virtual column that filters out the popular values. This approach has several advantages if applicable: the index to be maintained is very small, which minimizes the maintenance effort (and might also address index-efficiency issues in the case of frequent updates to the column), and it solves the histogram issue, since the histogram will only cover the unpopular values:

SQL>
SQL> -- Add a virtual column (the same could be achieved using a function-based index instead for pre-11g versions)
SQL> alter table t1 add (t_status_unpop as (case when t_status != 0 then t_status end));

Table altered.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 'T1', method_opt => 'for columns t_status_unpop size 254')

PL/SQL procedure successfully completed.

SQL>
SQL> -- Since the virtual column only covers
SQL> -- the unpopular values, the histogram
SQL> -- will be very precise even with a low sample size
SQL> select
2 column_name
3 , num_distinct
4 , sample_size
5 from
6 user_tab_col_statistics
7 where
8 table_name = 'T1';

COLUMN_NAME     NUM_DISTINCT SAMPLE_SIZE
--------------- ------------ -----------
ID                   1000000     1000000
T_STATUS                   3     1000000
VC                         1     1000000
T_STATUS_UNPOP             2          20

SQL>
SQL> -- The histogram only covers the unpopular values
SQL> select
2 column_name
3 , endpoint_number
4 , endpoint_value
5 from
6 user_tab_histograms
7 where
8 table_name = 'T1'
9 and column_name = 'T_STATUS_UNPOP';

COLUMN_NAME     ENDPOINT_NUMBER ENDPOINT_VALUE
--------------- --------------- --------------
T_STATUS_UNPOP               10              1
T_STATUS_UNPOP               20              2

SQL>
SQL> -- Two non-existing values
SQL> -- So we don't run into the same problem here
SQL> -- The estimate is reasonable for non-existing values
SQL> explain plan for
2 select
3 *
4 from
5 t1
6 where
7 t_status_unpop in (1000, 2000)
8 ;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |   108 |  4470   (6)| 00:00:54 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |   108 |  4470   (6)| 00:00:54 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("T_STATUS_UNPOP"=1000 OR "T_STATUS_UNPOP"=2000)

13 rows selected.

I haven't created an index on the virtual column, but the estimate is correct and a suitable index would get used if it existed.

Daylight Saving Time Causes Performance Issues!

Since I’m starting employment with Pythian on Monday, this is the last technical post that is exclusive to this blog. Future technical tips, discoveries and stories will appear on the Pythian blog, and either be copied here or linked to from here. BTW, I already made my first post on the Pythian blog today, as the editor of the Log Buffer – and I’m not even an employee yet :)

So, let’s end this with a bang: a real-life story of how Daylight Saving Time caused one query to take twice as long as before! And it’s not even an April Fools’ joke :)

Analyze This – 2

If you’ve run the scripts from Analyze This, I hope you found that the query gave you two different execution plans. This is my output from the test (with a little cosmetic tidying):

=============
Using Analyze
=============

Execution Plan
----------------------------------------------------------
Plan hash value: 1838229974

-------------------------------------------------------------------
| Id | Operation | Name | [...]
