partitioning


Deferred Segment Creation as of 11.2.0.2

One year ago I wrote a post entitled Deferred Segment Creation. If you read the comments related to it you can see that several people were concerned by the fact that it was not possible to easily get rid of segments associated with empty tables. That was with version 11.2.0.1. Now we have version 11.2.0.2 [...]


Cleanup of partitioned objects residing in offlined tablespaces

Consider the following scenario: a large database containing historical data - typically a data warehouse with partitioned tables whose partitions reside in different tablespaces depending on their age - is supposed to be duplicated, but the target environment (typically a lower environment, for example a UAT copy of Production) doesn't have sufficient space to hold the complete database and in particular all the historical partitions. In that case you somehow need to deal with a partial clone.

Of course, in an ideal world the lower environment is supposed to have sufficient space to hold the complete database, but we don't live in an ideal world. Sometimes, depending on the storage technology, you might be able to do some kind of temporary "split mirror" operation that allows you to start up a complete clone of the database and drop all the affected objects / tablespaces before putting the cleaned up / partial database on the final target system, but this is not always applicable, for instance when using ASM-based storage.

The issue in that particular case is that you can do for instance a partial RMAN clone that omits the historical tablespaces, but the resulting database then obviously has some tablespaces that are unavailable and can't be onlined since the datafiles are simply missing.

In the case of objects that reside completely in the offlined tablespaces the solution is simple and straightforward: the objects can be dropped, or the affected tablespaces can simply be dropped with the "including contents" clause.

Things get more complicated, however, with partitioned objects where only some of the partitions reside in the offlined tablespaces.

In that case you can't simply drop the offlined tablespaces - you end up, for example, with the Oracle error message "ORA-14404: partitioned table contains partitions in a different tablespace", which tells you that objects exist that have some of their partitions residing outside the offlined tablespace, and Oracle obviously won't do the cleanup job for you in that case.
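To make the situation concrete, here is a minimal sketch using made-up names (TS_HIST being an offlined tablespace that holds only some partitions of a partitioned table):

SQL> drop tablespace ts_hist including contents and datafiles;
drop tablespace ts_hist including contents and datafiles
*
ERROR at line 1:
ORA-14404: partitioned table contains partitions in a different tablespace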

Oracle allows you to drop partitions of a table that reside in offlined tablespaces, but there is an important limitation: you can only do so if no index has been defined on the table. Otherwise you get the error message "ORA-14117/ORA-14630: [sub]partition resides in offlined tablespace" when attempting to drop the partition. It's interesting to note that the mere existence of indexes is sufficient - it doesn't matter whether an index resides in an offlined or onlined tablespace, or whether it is unusable or not: as soon as at least one index is defined on the table, the drop partition operation errors out. It looks like a hard-coded "if index exists then error" code path. I'm not sure why exactly this restriction is in place, but it has serious consequences if you need to do this sort of cleanup for a large database with hundreds of thousands of partitions.
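The following sketch illustrates the point with hypothetical objects (SALES is a partitioned table with some partitions residing in the offlined tablespace TS_HIST):

SQL> -- Without any index defined on SALES this works
SQL> alter table sales drop partition p_2005_q1;

Table altered.

SQL> -- As soon as any index is defined on SALES - usable or not, in whatever tablespace - it fails
SQL> alter table sales drop partition p_2005_q2;
alter table sales drop partition p_2005_q2
*
ERROR at line 1:
ORA-14117: partition resides in offlined tablespace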

By the way, it is interesting to know that Oracle allows you to drop segments from offlined tablespaces that are locally managed - think about the consequence: the local bitmaps that represent the allocated and free space in the tablespace cannot be updated at the time of the drop operation. This is one of the few disadvantages compared to the old-fashioned dictionary managed tablespaces, where an operation like that could potentially have been limited to a pure dictionary operation.

The approach Oracle chooses is straightforward: the dropped segments are converted into "temporary" objects (a kind of special "recyclebin", if you will). If the offlined tablespace gets onlined again the local bitmaps will be updated according to these temporary objects and the temporary objects will finally be removed from the dictionary. If the offlined tablespace gets dropped instead, the corresponding temporary objects can consequently be removed as well.

Tanel Poder recently also blogged about this in the context of read-only tablespaces.

So you could now drop all indexes from the affected tables and recreate them afterwards, but this is not a good idea for several reasons:
* If the remaining partitions still represent multiple terabytes it will take quite a long time to rebuild all indexes
* Things might go wrong and you end up with missing indexes

Since these are partitioned objects, another Oracle best practice might come to the rescue: exchange the affected partitions with an exchange table excluding indexes - this allows you to drop the exchanged partition residing in an offlined tablespace without any error message. Note that this means that after you've exchanged the partition you need to drop the unpartitioned object (the one that doesn't have indexes) in order to get rid of the affected segment.
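As a sketch, with hypothetical names (EXCH_SALES being an index-less copy of SALES created in an available tablespace):

-- The defective segment moves into EXCH_SALES, an available segment moves into the partition
alter table sales exchange partition p_2005_q1 with table exch_sales excluding indexes without validation;

-- Getting rid of the segment from the offlined tablespace now means dropping the standalone table
drop table exch_sales;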

Now if only a few objects are affected, or only a single partition needs to be cleaned up per table, then you're basically done. But if you need to clean up multiple partitions and the number of partitions is high then this approach is not feasible, because dropping / creating a table for each partition that needs to be cleaned up is quite an overhead - in particular when talking about hundreds of thousands of partitions.

The approach of creating a table to exchange partitions with adds another complexity to the problem: many large databases make use of Oracle's option to set columns unused instead of actually dropping them. The former is a simple metadata operation in the dictionary (it renames the column and marks it as hidden), is independent of the data volume and is therefore almost instant, whereas the latter requires Oracle to process the whole segment, which can take quite some time for huge segments (and is not a very efficient operation by the way, but that is something for a different post).

Now, creating a table that can be exchanged with such an evolved table containing unused columns cannot be accomplished by simply doing a handy CTAS operation - the unused columns will be missing from the copy and therefore the exchange operation will fail with "column mismatch" errors.
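A sketch of that failure mode, again with hypothetical names and assuming SALES contains at least one unused column:

SQL> create table exch_sales as select * from sales where 1 = 0;

Table created.

SQL> alter table sales exchange partition p_2005_q1 with table exch_sales excluding indexes without validation;
alter table sales exchange partition p_2005_q1 with table exch_sales excluding indexes without validation
*
ERROR at line 1:
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION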

So you need to deal with these unused columns somehow the moment you have to create such an exchange table dynamically - and you will have to do that whenever more than a single partition per table needs to be dropped.

The only other viable option that I'm aware of is to maintain this exchange table permanently, as a partitioned table itself - which means creating two exact copies (in terms of dictionary metadata) of every potentially affected table and making sure to apply the same DDL in the same order, which ensures that the exchange operation with those tables will succeed.

You will need two copies - one unpartitioned and one partitioned - if you want to avoid the drop and dynamic create table operation, since Oracle allows partitions to be exchanged only with unpartitioned tables. So it takes two exchange operations per affected partition: the first with the unpartitioned table, and the second with the partitioned table. Both exchange operations have to be done excluding indexes, and at least the final partitioned table has to be created excluding indexes - note that Oracle allows the exchange operation even if one table is missing a potentially created primary key constraint, provided you use the "EXCLUDING INDEXES" clause.

Then you can drop the partition that has been exchanged into the second table since the table doesn't have any indexes defined on it.

This allows you to accomplish two objectives:

1. Avoid a costly (and potentially complex in case of unused columns) drop / create table operation per partition
2. Manage the drop / create cycle by simply dropping and adding partitions in the second table, which is far less overhead than a drop / create table operation - important when performance matters

However, maintaining two copies of every table that potentially needs to be cleaned up sounds like a huge overhead, and even in Oracle 11.2 with deferred segment creation at least the partitioned table copy will allocate space, since deferred segment creation is not (yet) applicable to partitioned objects.

So here are a few ideas on how to accomplish such a cleanup operation:

1. For each table to process create the required table copies on the fly dynamically once. If you don't have to deal with unused columns then a simple CTAS operation can be used to create those tables. If you need to handle unused columns, a more complex approach is required. You will have to use custom queries against the data dictionary to extract the required information since the official DBMS_METADATA interface as far as I know doesn't expose information about unused columns.

The following query could be used as a starting point to extract the column definition of a table:

-- Get the column list including hidden columns, but ignoring virtual columns added by function-based indexes
-- TODO: What about virtual columns (11g or later)?
select
        decode(t.hidden_column,
          'YES', 'SYS$' || internal_column_id || '$UNUSED',
          t.column_name
        ) ||
        ' ' ||
        lower(data_type) ||
        decode(substr(data_type, 1, 9),
          'TIMESTAMP', null,
          'INTERVAL ', null,
          decode(t.data_type,
            'NUMBER', decode(t.data_precision || t.data_scale,
                        null, null,
                        '(' ||
                        decode(t.data_precision,
                          null, '*',
                          t.data_precision
                        ) ||
                        decode(t.data_scale,
                          null, null,
                          ',' || t.data_scale
                        ) ||
                        ')'
                      ),
            'FLOAT', '(' || t.data_precision || ')',
            'LONG', null,
            'LONG RAW', null,
            'BLOB', null,
            'CLOB', null,
            'NCLOB', null,
            'BFILE', null,
            'CFILE', null,
            'BINARY_FLOAT', null,
            'BINARY_DOUBLE', null,
            'MLSLABEL', null,
            'ROWID', null,
            'UROWID', null,
            'DATE', null,
            '(' ||
            nvl(
              nvl(
                decode(t.char_length,
                  0, to_number(null),
                  t.char_length
                ),
                t.char_col_decl_length
              ),
              t.data_length
            ) ||
            decode(t.character_set_name,
              'CHAR_CS', decode(t.char_used,
                           'C', ' char',
                           'B', ' byte'
                         )
            ) ||
            ')'
          )
        ) ||
        ' ' ||
        decode(nullable,
          'N', 'not null',
          null
        ) ||
        ',' as column_def
from
        all_tab_cols t
where
        -- :owner and :table_name are placeholder binds for the source table
        t.owner = :owner
and     t.table_name = :table_name
and     t.segment_column_id is not null
order by
        t.internal_column_id;

This will generate SYS$<internal_column_id>$UNUSED placeholder columns for all columns that have been set to unused in the source table; these can subsequently be set to unused in the newly created table so that it has the same layout as the source table.
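For illustration only (hypothetical table and columns): the generated column list is used to create the exchange table including the placeholder columns, which are then immediately set to unused so that the internal column layout - including hidden columns - matches the source table:

create table exch_sales (
  product_id   number,
  sys$3$unused date,          -- placeholder with the same data type as the original, now unused, column
  amount_sold  number(10,2)
);

alter table exch_sales set unused (sys$3$unused);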

Note that the query hasn't been tested with virtual columns (11g or later) yet, but it should deal with most of the available data types and also handles columns using char semantics correctly (important for NCHAR / NVARCHAR based columns and databases running with multi-byte character sets, in particular AL32UTF8). Virtual columns added via function-based indexes are handled by simply excluding them, since they don't have a segment column.

Of course the extraction process potentially needs to handle much more than shown here, for instance checking table properties like index-organized tables, partition keys etc.
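A couple of simple dictionary queries (again with placeholder binds) can serve as starting points for such checks:

-- Is the table index-organized, and is it partitioned at all?
select iot_type, partitioned
from   all_tables
where  owner = :owner
and    table_name = :table_name;

-- Which columns make up the partition key?
-- (ALL_SUBPART_KEY_COLUMNS holds the subpartition keys)
select object_type, column_name, column_position
from   all_part_key_columns
where  owner = :owner
and    name = :table_name;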

2. Process each affected table partition using the following approach:

* Add a new "clean" partition to the partitioned copy table in a tablespace that is online. The partition key is a "dummy" key that can be the same for each iteration of this process. This is the segment that is going to end up in the affected partitioned table and will finally be dropped, since it then resides in an available tablespace. (Remember, we want to get rid of those partitions residing in unavailable tablespaces, since we do not have sufficient space in the clone of the database to make them available.)

* Exchange the affected partition with the unpartitioned copy table EXCLUDING INDEXES (The unpartitioned table copy has also been initially created in an available tablespace)

* Exchange the unpartitioned copy table (which now holds the defective partition) with the newly created "clean" partition of the partitioned copy table EXCLUDING INDEXES

* Now the defective partition resides in the partitioned copy table and can simply be dropped, since we don't have any indexes defined on this table

* Finally we can now drop the partition from the affected table (its segment is the one that was added to the partitioned copy table in the previous iteration of the process), since it now resides in an available tablespace

The same process can be applied to a subpartitioned table. There are of course some more details to consider - for example, you can't drop the last remaining subpartition of a partition or the last partition of a table - but since we assume that our affected table in general still has some partitions left that reside in current and available tablespaces it should be possible to handle these scenarios.

Here is a code snippet / template that applies this process:

/**
 * Drop a (sub-)partition residing in an offline tablespace
 * @param p_object                   The original object to clean up
 * @param p_temp_exc_table_name      The name of the working table
 * @param p_clean_tablespace_name    The tablespace where to create the cleanup partition
 * @param p_partition_name           The name of the (sub-)partition to drop
 * @param p_segment_type             The type of the segment (TABLE[ ][SUB][PARTITION])
 * @param p_last_partition_indicator For subpartitions indicate that this is
 *                                   the last subpartition - this will drop the whole partition
 **/
procedure drop_partition(
  p_object                   in r_oracle_object
, p_temp_exc_table_name      in out oracle_object
, p_clean_tablespace_name    in oracle_object
, p_partition_name           in oracle_object
, p_segment_type             in oracle_object
, p_last_partition_indicator in varchar2
)
as
  s_sql                     large_sql;
  s_partition_type          varchar2(255);
  s_exchange_table          oracle_object;
  e_partition_in_offline_ts exception;
  pragma exception_init(e_partition_in_offline_ts, -14117);
  -- We do this in an autonomous transaction to keep it separate
  -- from any outside transactions like the "queue" transaction
  -- used by the parallel clean up threads
  pragma autonomous_transaction;
begin
  -- Determine if this is a partition or subpartition
  s_partition_type := substr(p_segment_type, 7, 255);

  -- Step 1: Add a clean partition to the temporary cleanup table
  s_sql := 'alter table ' || p_temp_exc_table_name || ' add partition p_cleanup values (42) tablespace ' || p_clean_tablespace_name;

  execute(p_object.owner, s_sql);

  -- Get the name of the unpartitioned table copy
  s_exchange_table := get_exchange_table(p_object);

  -- Step 2: Exchange the defective (sub-)partition with the unpartitioned copy table EXCLUDING indexes
  s_sql := '
alter table ' || p_object.segment_name || '
exchange ' || s_partition_type || ' ' || p_partition_name || '
with table ' || s_exchange_table || '
excluding indexes without validation';

  execute(p_object.owner, s_sql);

  -- Step 3: Exchange the unpartitioned copy table with the partitioned copy table partition again EXCLUDING indexes
  s_sql := '
alter table ' || p_temp_exc_table_name || '
exchange partition p_cleanup
with table ' || s_exchange_table || '
excluding indexes without validation';

  execute(p_object.owner, s_sql);

  -- Step 4: Drop the defective partition now residing in the partitioned "waste-bin" table partition
  -- Since we don't have any indexes this is possible now
  s_sql := 'alter table ' || p_temp_exc_table_name || ' drop partition p_cleanup';

  begin
    execute(p_object.owner, s_sql);
  exception
    -- Any tables with LOBs will cause an exception since they will implicitly have a LOB INDEX added
    -- Therefore the DROP PARTITION will fail with ORA-14117
    -- The simplest solution is to drop and recreate the entire working table - note the serious side effects
    -- of the underlying ORA-01110 error in 11.1.0.7. See below comments about the potential impact
    -- of the health check monitoring
    -- A smoother way of handling this to avoid these potential side effects is to check if the
    -- table has any indexes defined on it and then do not attempt to drop the partition but
    -- immediately drop the entire table
    when e_partition_in_offline_ts then
      drop_temporary_exc_table(p_object, p_temp_exc_table_name);
      p_temp_exc_table_name := create_temporary_exc_table(p_object, p_clean_tablespace_name, p_segment_type);
  end;

  -- Step 5: Drop the partition / subpartition which completes the cleanup
  -- TODO: Are really all subpartitions of a partition affected?
  -- If not, this logic needs to be revised
  if (s_partition_type = 'SUBPARTITION' and p_last_partition_indicator = 'Y') then
    s_sql := 'alter table ' || p_object.segment_name || ' drop partition ' || get_parent_partition(p_object, p_partition_name);

    execute(p_object.owner, s_sql);
  elsif s_partition_type = 'PARTITION' then
    s_sql := 'alter table ' || p_object.segment_name || ' drop partition ' || p_partition_name;

    execute(p_object.owner, s_sql);
  end if;
end drop_partition;

Note that tables with LOBs need special treatment, since the logic used for tables without LOBs fails with an error stating that the partition resides in an offline tablespace when the "defective" partition is dropped from the temporary cleanup table.

This is the same error that is raised when attempting to drop a partition from a table that has indexes defined on it. Very likely this is caused by the fact that a LOB always has a corresponding LOB INDEX (which will always be exchanged when doing an EXCHANGE PARTITION, even if EXCLUDING INDEXES is specified).

Since this index cannot be dropped from the temporary cleanup table we actually do not gain anything by performing the exchange exercise.
The simplest solution to the problem is to drop and recreate the cleanup table which is possible without the ORA-14117 error message being raised.

Note that attempting to drop the partition and hitting the ORA-14117 error has a serious side effect in 11.1.0.7 and 11.2.0.1: if the "Health Monitoring" detects a "flooding" ORA-01110 error (something is wrong with a datafile), any subsequent operation on these datafiles seems to have tracing turned on. This leads to the odd situation that onlining such a tablespace again (which we can't do in our actual case since the datafiles are not there, but which I've done for testing purposes) takes minutes and generates trace files between 20 and 500 MB (!) in size.

I haven't investigated much into this, but in my tests this couldn't be avoided by setting the undocumented parameter "_disable_health_checks=TRUE".

The MMON_SLAVE process will dump something similar to this:

========= Dump for error ORA 1110 (no incident) ========
----- DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -----

Also for the sessions that cause the error something like this will be written to trace files:

DDE rules only execution for: ORA 1110
----- START Event Driven Actions Dump ----
---- END Event Driven Actions Dump ----
----- START DDE Actions Dump -----
Executing SYNC actions
----- START DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -----
DDE Action 'DB_STRUCTURE_INTEGRITY_CHECK' was flood controlled
----- END DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (FLOOD CONTROLLED, 17 csec) -----
Executing ASYNC actions
----- END DDE Actions Dump (total 0 csec) -----

A smoother way of handling this to avoid these potential side-effects is to check if the table has any indexes defined on it and then do not attempt to drop the partition but immediately drop the entire table.
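Such a check can be as simple as the following query (placeholder binds; LOB indexes show up here as well, which is exactly what we want):

select count(*) as index_count
from   all_indexes
where  table_owner = :owner
and    table_name = :table_name;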

Since each iteration of this process takes some time (in a database with hundreds of thousands of partitions this can take a few seconds per partition due to the underlying recursive data dictionary operations) and is, according to some traces, mostly CPU bound, you can speed up the processing by using multiple processes if you have multiple CPUs available.

Using an approach with Advanced Queueing and multiple consumers (for instance as many consumers as CPUs) we were able to clean up more than 200,000 partitions (table plus index (sub)partitions) in less than one hour, which was within the available timeframe for that operation.
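The actual implementation was based on Advanced Queueing; purely to illustrate the parallelization idea (this is not the code that was used), a set of DBMS_SCHEDULER jobs could each consume their share of the partition list, assuming a hypothetical consumer procedure cleanup_pkg.process_queue:

begin
  for i in 1 .. 8 loop  -- for instance, one job per available CPU
    dbms_scheduler.create_job(
      job_name   => 'PART_CLEANUP_' || i
    , job_type   => 'PLSQL_BLOCK'
    , job_action => 'begin cleanup_pkg.process_queue; end;'
    , enabled    => true
    );
  end loop;
end;
/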


Partially Index a Table

Recently the following question was posted on oracle-l (I paraphrase…):
With Oracle Database, is it possible to create something similar to Teradata's sparse indexes?
Since the question is an interesting one, I decided to write this short post.
First of all, I have to say that such a feature is not supported by the CREATE INDEX statement [...]


Partition-Wise Join of List-Partitioned Tables

When two tables are equi-partitioned on their join keys, the query optimizer is able to take advantage of partition-wise joins. To make sure that the tables are equi-partitioned, as of Oracle Database 11g reference partitioning can be used. In fact, per definition, with reference partitioning all “related” tables have exactly the same partitioning schema. If [...]


Compression Restrictions

When using table or index compression certain restrictions apply. Since I always find it hard to gather that information from the manuals, and some of the restrictions are not documented properly or not at all, I'll attempt to summarize the restrictions that I'm aware of here:

1. Compression attribute on subpartitions
Note that you can't define COMPRESSION at subpartition level - the subpartitions inherit this attribute from the parent partition.

It is however perfectly valid and legal to exchange a subpartition with a compressed table and that way introduce individual compression at subpartition level. It is unclear why this is only possible using this indirect method - it simply looks like an implementation restriction. Since compressed data is most suitable for data warehouse environments, and these ought to use EXCHANGE [SUB]PARTITION technology anyway, this restriction is probably not that crucial.
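As a sketch with hypothetical names, individual subpartition-level compression could be introduced like this:

-- Build a compressed copy of the subpartition's data
create table t_comp compress as
select * from t_part subpartition (p_2010_sp1);

-- Swap the compressed segment into the subpartition
alter table t_part exchange subpartition p_2010_sp1 with table t_comp without validation;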

2. Number of columns
Basic heap table compression and the new OLTP compression (in 11.1 called "for all operations") are both limited to 255 columns. If you attempt to compress a table with more than 255 columns then no compression will be performed, although you don't get an error message. Interestingly, the COMPRESSION column in the dictionary still shows "ENABLED", which is certainly misleading.
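In 11.2 the DBMS_COMPRESSION package can be used to check what actually happened to the rows instead of trusting the dictionary attribute. A hedged example against a hypothetical wide table MANY_COLS_TAB:

declare
  n_type number;
  r_id   rowid;
begin
  select rowid into r_id from many_cols_tab where rownum = 1;
  -- Returns the actual compression type of the given row
  n_type := dbms_compression.get_compression_type(user, 'MANY_COLS_TAB', r_id);
  if n_type = dbms_compression.comp_nocompress then
    dbms_output.put_line('row is not compressed despite COMPRESSION = ENABLED');
  else
    dbms_output.put_line('row compression type code: ' || n_type);
  end if;
end;
/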

3. DDL Restrictions - modification of table structure
As soon as a segment contains compressed data or has been marked for compression (does not apply in all cases, see below for more information), the following restrictions regarding the modification of the table structure apply:

a. Basic heap table compression will not prevent the addition of more than 255 columns via ADD COLUMN - but if you update the segment the affected blocks will be silently decompressed during the update operation. In 9.2 no schema evolution is possible with compressed segments, see the next paragraph.

b. In Oracle 9.2 a lot of modifications to the table are no longer possible with compression enabled (and this applies to partitioned tables as well if at least one partition is compressed):

- You cannot add or drop any columns, and you can't even set any column unused. The error messages are, to say the least, confusing, since they tell you something about virtual and object columns, but in fact the problem is the enabled compression. These are serious limitations in terms of schema evolution. I'll address below what can be done to overcome this issue.

Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

SQL> create table t_part
2 compress
3 partition by list (user_id) (
4 partition p_default values (default)
5 )
6 as
7 select * from all_users;

Table created.

SQL> alter table t_part set unused (created);
alter table t_part set unused (created)
*
ERROR at line 1:
ORA-12996: cannot drop system-generated virtual column

SQL> alter table t_part drop unused columns;

Table altered.

SQL> alter table t_part drop column username;
alter table t_part drop column username
*
ERROR at line 1:
ORA-12996: cannot drop system-generated virtual column

SQL> alter table t_part add test_col varchar2(10);
alter table t_part add test_col varchar2(10)
*
ERROR at line 1:
ORA-22856: cannot add columns to object tables

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-22856: cannot add columns to object tables

SQL>

c. In Oracle 10.2 some restrictions have been lifted but there are still a couple of limitations:

- You can now add columns and you can set columns unused but you can't drop columns nor can you add columns with default values

This allows schema evolution but has interesting side effects if you attempt, for example, to use a Create Table As Select (CTAS) statement to dynamically create a copy of a table and then perform an EXCHANGE PARTITION operation with that newly created table: if there are unused columns in the source table then these won't be copied over to the table created via CTAS, and therefore the EXCHANGE PARTITION operation will fail with column mismatch error messages. Since you can't drop the unused columns as long as compression is enabled on the table (more on that later, since it is a bit more complex with partitioned tables), you effectively can't use that approach. In order to perform EXCHANGE PARTITION with compressed tables and evolved schemas that include unused columns you basically have three choices:

- Maintain the "EXCHANGE" table along with the target table which means apply the same DDL in the same order to both tables in order to keep them synchronized

- Extract the hidden columns from the dictionary and create an exchange table with the correct hidden/unused columns - which can turn into a complex operation if you need to cover all possible column data types and features including LOBs etc.

- Perform some kind of rebuild operation on the target table to get rid of the unused columns. Some ideas regarding this point are going to follow further below

Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table t_part
2 compress
3 partition by list (user_id) (
4 partition p_default values (default)
5 )
6 as
7 select * from all_users;

Table created.

SQL> alter table t_part set unused (created);

Table altered.

SQL> alter table t_part drop unused columns;
alter table t_part drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop column username;
alter table t_part drop column username
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part add test_col varchar2(10);

Table altered.

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL>

d. Oracle 11 introduces Advanced Compression which is COMPRESS FOR ALL OPERATIONS in 11.1 or OLTP compression in 11.2 (don't confuse this with HCC compression levels)

Basic compression in 11.1 has the same limitations as 10.2, but with advanced compression (called COMPRESS FOR ALL OPERATIONS in 11.1) all of the schema evolution restrictions mentioned are lifted, as can be seen below.

Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table t_part
2 --compress
3 compress for all operations
4 partition by list (user_id) (
5 partition p_default values (default)
6 )
7 as
8 select * from all_users;

Table created.

SQL> alter table t_part set unused (created);

Table altered.

SQL> alter table t_part drop unused columns;

Table altered.

SQL> alter table t_part drop column username;

Table altered.

SQL> alter table t_part add test_col varchar2(10);

Table altered.

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL>

This is more of a theoretical nature since you usually won't mix basic with advanced compression, but things become more interesting when mixing basic and advanced compression in different partitions of the same table:

Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table t_part
2 compress
3 --compress for all operations
4 partition by list (user_id) (
5 partition p_default values (default)
6 )
7 as
8 select * from all_users;

Table created.

SQL> alter table t_part set unused (created);

Table altered.

SQL> alter table t_part drop unused columns;
alter table t_part drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop column username;
alter table t_part drop column username
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part add test_col varchar2(10);

Table altered.

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part split partition p_default values (123)
2 into (partition p_123 compress for all operations, partition p_default);

Table altered.

SQL> select
2 partition_name
3 , compression
4 , compress_for
5 from
6 user_tab_partitions
7 where
8 table_name = 'T_PART';

PARTITION_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------------
P_DEFAULT ENABLED DIRECT LOAD ONLY
P_123 ENABLED FOR ALL OPERATIONS

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL> alter table t_part drop unused columns;
alter table t_part drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop column username;
alter table t_part drop column username
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL>

Notice the inconsistent behaviour: The first DDL command after having added a partition with advanced compression succeeds, but the following DDLs fail with an error message.

If the order of these DDLs is changed, it becomes obvious that it is always the first DDL that succeeds, no matter which of the otherwise failing statements comes first.

Furthermore, in 11.1.0.7, if you have a mixture of Advanced Compression (COMPRESS FOR ALL OPERATIONS) and NOCOMPRESS partitions the above restrictions still apply - only if you have set all partitions to COMPRESS FOR ALL OPERATIONS are the restrictions lifted, which looks more like a bug than a feature.

I'll go into more detail below about which settings influence this behaviour.

e. Oracle 11.2 introduces Hybrid Columnar Compression (HCC), which is only available in conjunction with Exadata

In 11.2 basic compression still has the same restrictions as mentioned above. If at least one partition of a partitioned table has advanced compression enabled then some of the restrictions are lifted - even with a mixture of basic and advanced compression, or a mixture of no compression and advanced compression. However, adding a column with a default value is still only supported if all partitions are compressed with Advanced Compression. I'm not sure this is really intended behaviour, since at least when mixing Advanced Compression with no compression adding a column with a default value ought to be possible.

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table t_part
2 compress
3 --compress for all operations
4 partition by list (user_id) (
5 partition p_default values (default)
6 )
7 as
8 select * from all_users;

Table created.

SQL> alter table t_part set unused (created);

Table altered.

SQL> alter table t_part drop unused columns;
alter table t_part drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop column username;
alter table t_part drop column username
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part add test_col varchar2(10);

Table altered.

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part split partition p_default values (123)
2 into (partition p_123 compress for all operations, partition p_default);

Table altered.

SQL> select
2 partition_name
3 , compression
4 , compress_for
5 from
6 user_tab_partitions
7 where
8 table_name = 'T_PART';

PARTITION_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------
P_123 ENABLED OLTP
P_DEFAULT ENABLED BASIC

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop unused columns;

Table altered.

SQL> alter table t_part drop column username;

Table altered.

SQL> alter table t_part move partition p_default compress for all operations;

Table altered.

SQL> select
2 partition_name
3 , compression
4 , compress_for
5 from
6 user_tab_partitions
7 where
8 table_name = 'T_PART';

PARTITION_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------
P_123 ENABLED OLTP
P_DEFAULT ENABLED OLTP

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL>

When using HCC the most important difference is that it is not restricted to 255 columns - it successfully compresses segments with any number of columns (up to the hard limit of 1,000 columns).

HCC can only be used with direct-path operations; any conventional DML will decompress the affected blocks and use OLTP compression instead - which means that if the segment has more than 255 columns the compression will effectively be disabled, since any non-HCC compression supports only up to 255 columns.

HCC has only a few restrictions regarding schema evolution - the most noticeable one is that DROP COLUMN is always turned into a SET UNUSED operation, so with HCC enabled you can't actually drop columns, since DROP UNUSED COLUMNS is not supported and raises an error. This means that with HCC enabled dropping / adding columns eats into the hard limit of 1,000 columns.

Again this has the same side effect as described above - a simple CTAS operation to create a table to be exchanged with won't work any longer with unused columns.

SQL> create table t_part
2 compress for query high
3 --compress for all operations
4 partition by list (user_id) (
5 partition p_default values (default)
6 )
7 as
8 select * from all_users;

Table created.

SQL> alter table t_part set unused (created);

Table altered.

SQL> alter table t_part drop unused columns;
alter table t_part drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop column username;

Table altered.

SQL> alter table t_part add test_col varchar2(10);

Table altered.

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL> alter table t_part split partition p_default values (123)
2 into (partition p_123 compress for all operations, partition p_default);

Table altered.

SQL> select
2 partition_name
3 , compression
4 , compress_for
5 from
6 user_tab_partitions
7 where
8 table_name = 'T_PART';

PARTITION_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------
P_123 ENABLED OLTP
P_DEFAULT ENABLED QUERY HIGH

SQL> alter table t_part add test_col3 varchar2(10) default 'BLA';

Table altered.

SQL> alter table t_part drop unused columns;
alter table t_part drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop column username;
alter table t_part drop column username
*
ERROR at line 1:
ORA-00904: "USERNAME": invalid identifier

SQL> alter table t_part move partition p_default compress for all operations;

Table altered.

SQL> select
2 partition_name
3 , compression
4 , compress_for
5 from
6 user_tab_partitions
7 where
8 table_name = 'T_PART';

PARTITION_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------
P_123 ENABLED OLTP
P_DEFAULT ENABLED OLTP

SQL> alter table t_part add test_col4 varchar2(10) default 'BLA';

Table altered.

SQL>

And here is what happens if you're already hitting the 1,000 columns limit and attempt to use DROP [UNUSED] COLUMNS with HCC enabled:

SQL> alter table many_x_cols drop (col256);

Table altered.

SQL> alter table many_x_cols drop unused columns;
alter table many_x_cols drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table many_x_cols add (col256 varchar2(10));
alter table many_x_cols add (col256 varchar2(10))
*
ERROR at line 1:
ORA-01792: maximum number of columns in a table or view is 1000

SQL> alter table many_x_cols move pctfree 0 nocompress;

Table altered.

SQL> alter table many_x_cols drop unused columns;

Table altered.

SQL> alter table many_x_cols add (col256 varchar2(10));

Table altered.

SQL>

As you can see, dropping columns is only supported with HCC disabled, which in this particular case allows another column to be added after dropping the hidden/unused one.

- Index restrictions

* You cannot compress a table or a partition of a table (or exchange a compressed partition into it) if there are usable bitmap indexes defined on the table. This is due to the different Hakan factor that is reset when using compression - with compressed segments more rows fit into a single block, which leads to a different Hakan factor being used for the bitmap indexes. The bitmap indexes need to be set unusable (all partitions in case of a partitioned table/index) or dropped before enabling the compression, and rebuilt / recreated after compressing the heap segment. This is also officially documented. It is however interesting to note that this restriction is only enforced when switching from uncompressed to compressed segments / partitions. Changing the compression, for example from basic compression to HCC compression, is possible without hitting this restriction, but due to the different compression levels this in my opinion potentially leads again to significantly different Hakan factors - so actively resetting the Hakan factor using "ALTER TABLE ... MINIMIZE RECORDS_PER_BLOCK" and rebuilding the bitmap indexes might be in order when moving between different compression levels.

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter table t_part move partition p_default nocompress;

Table altered.

SQL> alter table t_part move partition p_123 nocompress;

Table altered.

SQL> alter table t_part modify default attributes nocompress;

Table altered.

SQL> alter table t_part nocompress;

Table altered.

SQL> create bitmap index t_part_idx_bitmap on t_part (test_col) local;

Index created.

SQL> alter table t_part move partition p_default compress basic;
alter table t_part move partition p_default compress basic
*
ERROR at line 1:
ORA-14646: Specified alter table operation involving compression cannot be
performed in the presence of usable bitmap indexes

SQL> alter table t_part minimize records_per_block;
alter table t_part minimize records_per_block
*
ERROR at line 1:
ORA-28602: statement not permitted on tables containing bitmap indexes

SQL> alter index t_part_idx_bitmap modify partition p_default unusable;

Index altered.

SQL> alter table t_part move partition p_default compress basic;
alter table t_part move partition p_default compress basic
*
ERROR at line 1:
ORA-14646: Specified alter table operation involving compression cannot be
performed in the presence of usable bitmap indexes

SQL> alter table t_part minimize records_per_block;
alter table t_part minimize records_per_block
*
ERROR at line 1:
ORA-28602: statement not permitted on tables containing bitmap indexes

SQL> alter index t_part_idx_bitmap unusable;

Index altered.

SQL> alter table t_part move partition p_default compress basic;

Table altered.

SQL> alter table t_part minimize records_per_block;

Table altered.

SQL> alter index t_part_idx_bitmap rebuild partition p_default;

Index altered.

SQL> alter index t_part_idx_bitmap rebuild partition p_123;

Index altered.

SQL> alter table t_part move partition p_default nocompress update indexes;

Table altered.

As you can see switching on the compression is only allowed if all partitions of the bitmap index(es) are unusable - the opposite is not true: You can uncompress afterwards without any restrictions.

* This is related to B*Tree index compression and again more of a theoretical nature, but it might become relevant when trying to introduce B*Tree index compression partition-wise via EXCHANGE PARTITION: You can't compress a single partition of a partitioned B*Tree index (or exchange a partition with a compressed index into the table) if the whole index has not initially been created with compression enabled. If you drop the index and recreate the whole index with compression enabled then you can turn the index compression individually on and off per index partition.

Very likely this is related to the restriction that the B*Tree index compression prefix length needs to be the same across all index partitions - this can only be defined at global level, and I assume this is the reason why the index needs to be created compressed first. Since you usually want all partitions of a partitioned index to be compressed the same way, this is probably a rather irrelevant restriction (except for the mentioned case regarding EXCHANGE PARTITION).

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create index t_part_idx on t_part (user_id, test_col) nocompress local;

Index created.

SQL> alter index t_part_idx rebuild partition p_default compress;
alter index t_part_idx rebuild partition p_default compress
*
ERROR at line 1:
ORA-28659: COMPRESS must be specified at object level first

SQL> select
2 partition_name
3 , compression
4 from
5 user_ind_partitions
6 where
7 index_name = 'T_PART_IDX';

PARTITION_NAME COMPRESS
------------------------------ --------
P_123 DISABLED
P_DEFAULT DISABLED

SQL> drop index t_part_idx;

Index dropped.

SQL> create index t_part_idx on t_part (user_id, test_col) compress local;

Index created.

SQL> alter index t_part_idx rebuild partition p_default nocompress;

Index altered.

SQL> select
2 partition_name
3 , compression
4 from
5 user_ind_partitions
6 where
7 index_name = 'T_PART_IDX';

PARTITION_NAME COMPRESS
------------------------------ --------
P_123 ENABLED
P_DEFAULT DISABLED

SQL> drop index t_part_idx;

Index dropped.

SQL> create index t_part_idx on t_part (user_id, test_col) nocompress global
2 partition by range(user_id) (
3 partition p_default values less than (maxvalue)
4 );

Index created.

SQL> alter index t_part_idx rebuild partition p_default compress;
alter index t_part_idx rebuild partition p_default compress
*
ERROR at line 1:
ORA-28659: COMPRESS must be specified at object level first

SQL> drop index t_part_idx;

Index dropped.

SQL> create index t_part_idx on t_part (user_id, test_col) compress global
2 partition by range(user_id) (
3 partition p_default values less than (maxvalue)
4 );

Index created.

SQL> alter index t_part_idx rebuild partition p_default nocompress;

Index altered.

SQL> select
2 partition_name
3 , compression
4 from
5 user_ind_partitions
6 where
7 index_name = 'T_PART_IDX';

PARTITION_NAME COMPRESS
------------------------------ --------
P_DEFAULT DISABLED

SQL>

- Overcoming any of the schema evolution restrictions by temporarily uncompressing / disabling enabled compression

As already mentioned above there are different options for overcoming the schema evolution restrictions. One obvious option, although very likely not applicable in many cases simply due to space and load constraints, is to temporarily uncompress the segments and disable the compression. It is interesting to note that it is not as simple as it seems to undo compression with partitioned tables, as I will demonstrate - I think I've even read somewhere that it is not possible at all to revert a table into a state where the mentioned restrictions no longer apply. This is definitely not true.

First of all it is important to note that, in addition to the obvious decompression of each partition that contains compressed data, the "COMPRESS" attribute that determines whether a suitable operation will generate compressed data also needs to be reset to NOCOMPRESS (but see below for "oddities"). So there is a significant difference between "ALTER ... [NO]COMPRESS" and "ALTER ... MOVE [NO]COMPRESS". The latter reorganizes the segment / partition and compresses/uncompresses the data while doing so; the former just "marks" the segment / partition as "enabled/disabled for compression" but doesn't touch the data and leaves it in whatever state it is at present. As a side note, the "COMPRESSION" column in the dictionary cannot be used to distinguish between these two states - the column can show "DISABLED" with data still compressed, it can show "ENABLED" with no compressed data, and you can end up with a mixture of both, since not every block of a segment is necessarily in a compressed state.

Furthermore the "COMPRESSION" attribute can be defined on multiple levels with partitioned tables:

- On the global TABLE level: ALTER TABLE ... [NO]COMPRESS

- On the partition level as default attributes: ALTER TABLE ... MODIFY DEFAULT ATTRIBUTES [NO]COMPRESS

- On the partition level to mark for [NO]COMPRESSION: ALTER TABLE ... MODIFY PARTITION [NO]COMPRESS

- On the partition level with MOVE: ALTER TABLE ... MOVE PARTITION [NO]COMPRESS ... [UPDATE [GLOBAL] INDEXES]

Now in order to be able to overcome the schema evolution restrictions all levels except the partition default attribute level need to be addressed: Any compressed data needs to be uncompressed, but also any partition that has been marked for compression (but doesn't necessarily contain compressed data) needs to be modified as well - finally the global table level also needs to be reset.

The setting on global table level seems to be the crucial step. It looks like Oracle checks during this operation (ALTER TABLE ... NOCOMPRESS) if all subordinate (sub-)partitions have a flag set that marks them as uncompressed. Only if that is the case some internal flag on global table level can also be reset so that the restrictions no longer apply.

There are however some oddities that need to be considered when doing so:

- You need to "move" any partition (ALTER TABLE ... MOVE PARTITION NOCOMPRESS) that has been marked for compression if any other partition actually held compressed data - simply "unmarking" it using "ALTER TABLE ... MODIFY PARTITION NOCOMPRESS" is not sufficient, even if it does not hold any compressed data (for example, if it has only been marked with "ALTER TABLE ... MODIFY PARTITION COMPRESS").

- If you have BITMAP INDEXES defined on the table with compressed partitions you need to either set the whole indexes unusable or drop them before the COMPRESSION can be effectively disabled on global table level using ALTER TABLE ... NOCOMPRESS

See the following test case for all steps to unwind this:

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table t_part
2 nocompress
3 partition by list (user_id) (
4 partition p_default values (default)
5 )
6 as
7 select * from all_users;

Table created.

SQL> alter table t_part split partition p_default values (123)
2 into (partition p_123, partition p_default);

Table altered.

SQL> -- Compress one partition
SQL> alter table t_part move partition p_default compress;

Table altered.

SQL> -- And uncompress it again
SQL> alter table t_part move partition p_default nocompress;

Table altered.

SQL> -- Doesn't work!
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> -- Compress one partition
SQL> alter table t_part move partition p_default compress;

Table altered.

SQL> -- And uncompress it again
SQL> alter table t_part move partition p_default nocompress;

Table altered.

SQL> -- The global table level was missing!
SQL> alter table t_part nocompress;

Table altered.

SQL> -- Now it works
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL> -- Start again
SQL> alter table t_part drop column test_col2;

Table altered.

SQL> -- Compress one partition
SQL> alter table t_part move partition p_default compress;

Table altered.

SQL> -- Mark in addition another partition for compression (note: no data gets compressed!)
SQL> alter table t_part modify partition p_123 compress;

Table altered.

SQL> -- And uncompress the compressed partition again
SQL> alter table t_part move partition p_default nocompress;

Table altered.

SQL> -- Unmark the other partition marked for compression (note: no data was compressed!)
SQL> alter table t_part modify partition p_123 nocompress;

Table altered.

SQL> -- The global table level
SQL> alter table t_part nocompress;

Table altered.

SQL> -- But: Doesn't work!
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> -- Oddity 1: Although it doesn't contain compressed data
SQL> -- this partition needs to be reorganized as well
SQL> alter table t_part move partition p_123 nocompress;

Table altered.

SQL> -- Don't forget the global table level, otherwise below doesn't work
SQL> alter table t_part nocompress;

Table altered.

SQL> -- Now it works
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL> -- Start again
SQL> alter table t_part drop column test_col2;

Table altered.

SQL> -- Compress one partition
SQL> alter table t_part move partition p_default compress;

Table altered.

SQL> -- Add a bitmap index
SQL> create bitmap index t_part_idx_bitmap on t_part (username) local;

Index created.

SQL> -- And uncompress the compressed partition again
SQL> alter table t_part move partition p_default nocompress update indexes;

Table altered.

SQL> -- The global table level
SQL> alter table t_part nocompress;

Table altered.

SQL> -- But: Doesn't work!
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> -- Oddity 2: Bitmap indexes on compressed tables need to be set unusable or dropped
SQL> -- Note it is not sufficient to set single partitions unusable
SQL> alter index t_part_idx_bitmap unusable;

Index altered.

SQL> -- But still: Doesn't work because the global level has not been touched again
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> -- The global table level
SQL> alter table t_part nocompress;

Table altered.

SQL> -- Now it works
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL>


Index Rebuild ?

While searching on Metalink for clues about an oddity relating to an index rebuild, I came across Bug 6767655  – reported in 10.2.0.3 and fixed in 11.2.  The problem is described as follows: When having two sessions running concurrently, one doing inserts to a partitioned table, and the other doing partition maintenance operations, there is [...]


Online Materialized View Complete Refresh With Partitions

The other day I was posed an interesting problem regarding the refresh of materialized views. First, a little background. MVs are nothing new in Oracle; they have been around since Oracle 7 (called snapshots then). Essentially they are similar to tables (as they actually store data) but are populated by running a query on some source tables. This query is the defining query of the MV. So, the user can select from the MV instead of executing the defining query – reducing the execution time significantly.

However, the MV and the query would both give the same result if the source tables have not changed. If the data has changed, the MV will not have known about it and will give a stale result. Therefore, you need to refresh the MV from time to time. Usually the MVs are refreshed by the following method:

begin
   dbms_mview.refresh('<mv_name>');   -- <mv_name> is a placeholder for the MV to refresh
end;
/

So, what's the problem? The problem is that during complete refreshes the MV remains unavailable to the users. The duration of the refresh depends on the size of the MV, the execution time of the underlying query and so on; and the unavailability is a nuisance for most applications. In some cases the applications are designed to time out after some wait; and they do so quite consistently in this case – making it more than just a nuisance.

So, I was asked to devise a solution to make the MVs available during the complete refreshes. I accomplished that with a not-so-common use of partitions. Here is how I did it.

Prebuilt Table

First, as an architectural standard, I advocate the use of prebuilt tables in MVs. Here is how an MV is usually created:

create materialized view mv_bookings_01
refresh complete on demand
enable query rewrite
as
select hotel_id, sum(book_txn) tot_txn
from bookings
group by hotel_id
/

This creates the segment mv_bookings_01 during this process. Instead of this approach, I recommend first creating and populating a regular table and then converting that to an MV. In this revised approach, you would first create the table:

create table mv_bookings_01
as
select HOTEL_ID, sum(book_txn) tot_txn
from bookings
group by HOTEL_ID
/


Then, you would create the MV using the prebuilt table option:

create materialized view mv_bookings_01
on prebuilt table
refresh complete on demand
enable query rewrite
as
select HOTEL_ID, sum(book_txn) tot_txn
from bookings
group by HOTEL_ID
/

This does not change the functionality of the MV itself. The MV looks, feels and smells just as it did before; but with two significant advantages:

(1) You can easily alter the columns
(2) You can manipulate the table

I blogged about it at http://arup.blogspot.com/2009/01/ultra-fast-mv-alteration-using-prebuilt.html. Anyway, back to the issue at hand. Using the prebuilt trick, I can create a partitioned MV as well. In this specific case, I will add a column to partition on. Remember, this column does not actually participate in the application; it simply creates an anchor point for the partition.

create table mv_bookings_01
(
ref_type number,
hotel_id number(3),
tot_txn number
)
partition by list (ref_type)
(
partition p0 values (0),
partition p1 values (1)
)
/

After the table is created, I can insert data into it:

insert /*+ append */ into mv_bookings_01
select 0, hotel_id, sum(book_txn) tot_txn
from bookings
group by 0, hotel_id
/

Of course I could have selected the data from the source tables directly while creating the prebuilt table; but I just wanted to show a different approach to data loading. The column REF_TYPE is not part of the application; I have added it to give the table something to be list-partitioned on. The column can hold only two values – 0 and 1, and hence the table has only two partitions.

Once the table is created, I can use the prebuilt table option to create the MV as shown above. However, the presence of the new column makes it a little different. My MV creation script now looks like this:

create materialized view mv_bookings_01
on prebuilt table
enable query rewrite
as
select 0 as ref_type, HOTEL_ID, sum(book_txn) tot_txn
from bookings
group by 0, HOTEL_ID
/

Note that I have used “0 as ref_type” in the select clause, i.e. ref_type will always be 0. This is not going to be a problem, as you will see later. When the MV is first created, the value of ref_type is 0; hence partition p0 is the one that is populated, not partition p1. We can confirm that:

SQL> select count(1) from mv_bookings_01 partition (p0);

COUNT(1)
----------
2

SQL> select count(1) from mv_bookings_01 partition (p1);

COUNT(1)
----------
0
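Another way to look at the same thing – this query is not part of the original demonstration – is to inspect the partition layout in the data dictionary:

select partition_name, high_value
from user_tab_partitions
where table_name = 'MV_BOOKINGS_01';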

Now that the partitions are in place, let’s see how we refresh this MV. We will no longer use the dbms_mview approach. Rather, we will use this:

(1) Create a temporary table
(2) Create all the indexes
(3) Exchange the partition that is not used with this temporary table

Step 1: Temporary table

First we create a temporary table that is identical in structure to the MV’s prebuilt table, but not partitioned. For the ref_type column we will use 0 or 1 – exactly the opposite of the value already in the table. Since the table currently holds 0, we will use 1 in the temporary table. This temporary table will contain the freshly computed data that we want the MV to show after the refresh.

create table tmp1
as
select 1 as ref_type, hotel_id, sum(book_txn) tot_txn
from bookings
group by 1, hotel_id
/

We also need to create another temporary table, this one with no data; it will receive the current contents of partition p0 when that partition is exchanged out.

create table tmp0
(
ref_type number,
hotel_id number(3),
tot_txn number
)
/

Step 2: Create Indexes

In this case we don’t have any indexes on the MV; but if we had, we would have created them here on the TMP0 and TMP1 tables.
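Purely as an illustration – assuming the MV’s prebuilt table had a local index on hotel_id; the index names below are hypothetical – the matching indexes on the standalone tables would be created like this, so that the “including indexes” clause in the next step can swap them along with the data:

create index tmp1_hotel_id_ix on tmp1 (hotel_id);
create index tmp0_hotel_id_ix on tmp0 (hotel_id);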

Step 3: Exchange Partitions

When the temporary tables are ready, we can exchange the partitions with them – the inactive partition P1 with the freshly built TMP1, and the current partition P0 with the empty TMP0.

alter table mv_bookings_01
exchange partition p1
with table tmp1
including indexes
without validation
update global indexes
/
alter table mv_bookings_01
exchange partition p0
with table tmp0
including indexes
without validation
update global indexes
/

This SQL performs a switch: the segment that was partition P1 of mv_bookings_01 is now the standalone table TMP1, and the segment that was TMP1 is now partition P1 of mv_bookings_01. The switch occurs at the data dictionary level; no data is actually moved from one segment to the other, so it is very quick. The table is locked only for that instant – such an insignificant amount of time that it may not even be noticeable. Similarly, the segment that used to be partition P0 is now the table TMP0, and the former TMP0 is now partition P0.

There are several key things to note here. Note the clause “without validation”, which tells Oracle not to bother checking that the rows in the incoming table conform to the partition specification. This is critical for performance. Since we built TMP1 with the hardcoded value 1, we know that its REF_TYPE column contains only 1, satisfying the partition key requirement for P1; a further check is not required.

The “including indexes” clause switches the local index partitions as well. If there were a global index on the MV’s prebuilt table, it would normally be invalidated by the exchange and need rebuilding afterwards; we avoided that invalidation by adding the “update global indexes” clause to the alter statement.
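Had “update global indexes” been left out, any global index on the prebuilt table would have had to be rebuilt after the exchange, roughly like this (the index name is hypothetical):

alter index mv_bookings_01_gix rebuild;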

Once the switchover is complete, we can check the MV to confirm that the new data is visible.

SQL> select count(1) from mv_bookings_01 partition (p0)
2 /

COUNT(1)
----------
0

SQL> select count(1) from mv_bookings_01 partition (p1)
2 /

COUNT(1)
----------
2

This completes the refresh of the MV, and the new data is visible to the end users. Let’s review the timeline:

(1) Building of temporary table
(2) Building of indexes on the temporary table
(3) Exchange partitions

Steps 1 and 2 take most of the time, but they are done offline and do not touch the MV, so however long they take, the availability of the MV is not affected. Step 3 is where availability is impacted, but that window is minuscule.

The table TMP0 now has the rows from partition P0 of the MV. To confirm that:

SQL> select count(1) from tmp0;

COUNT(1)
----------
2

You can drop the table with “drop table tmp0 purge”. You should also drop TMP1, since it now holds the previous contents of partition P1, which are no longer needed.

The next time this MV needs refreshing, you repeat the process, with one very important difference – this time it is partition P0 that needs to be exchanged. I used a script that checks the current value of REF_TYPE in the MV and passes the other value to the refresh SQL script as a positional parameter that accepts 0 or 1, as sketched below. Using this approach you can refresh an MV very quickly.
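Here is a rough sketch of that idea – not the author’s actual script, and the refresh script name is made up – using a SQL*Plus substitution variable to derive the opposite value and pass it on:

column next_ref_type new_value next_ref_type
select decode(min(ref_type), 0, 1, 0) as next_ref_type
from mv_bookings_01;

-- pass the derived value (0 or 1) to the refresh script as its positional parameter
@refresh_mv.sql &next_ref_type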

Oracle 11g Variation

In Oracle 11g there is a new feature – virtual columns. You can define a column of a table as virtual: it is not actually stored in the table but is calculated at runtime. What’s more, a virtual column can also be used as a partition key, so REF_TYPE could be defined as a virtual column in that case.
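Purely as an illustration of the feature – the table, columns and expression below are made up and not tied to the refresh technique above – a virtual column can serve as the list-partitioning key like this:

create table bookings_vc
(
hotel_id   number(3),
book_txn   number,
region_id  number,
region_grp number generated always as (mod(region_id, 2)) virtual
)
partition by list (region_grp)
(
partition p0 values (0),
partition p1 values (1)
)
/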

Takeaways

(1) Just because there is a Materialized View, it does not have to be refreshed by the usual approach, i.e. dbms_mview.refresh. There are other options as well.
(2) Partitioning is not just for performance or purge reasons; it can be used in clever ways such as MV refreshes.
(3) In this specific case, we didn’t have a column that could have been used as a partitioning key; so we had to resort to adding a column. In many cases, the table may already have a column for that purpose, making the process even simpler.

Greg Rahn's picture

The Core Performance Fundamentals Of Oracle Data Warehousing – Partitioning

[back to Introduction] Partitioning is an essential performance feature for an Oracle data warehouse because partition elimination (or partition pruning) generally removes a significant amount of table data from the scan. This means fewer system resources are needed and query performance improves. Someone once told me “the fastest I/O is the one that never happens.” This is precisely the reason that partitioning is a must for Oracle data warehouses – it’s a huge I/O eliminator. I frequently refer to partition elimination as the anti-index: an index is used to find the small amount of data that is required, while partitioning is used to eliminate the vast amounts of data that are not required. The main reasons to use partitioning in an Oracle data warehouse fall into four areas: data elimination, partition-wise joins, manageability (partition exchange load, local indexes, etc.) and Information Lifecycle Management (ILM). The most common partitioning design pattern found in Oracle data warehouses is to partition the fact tables by range (or interval) on the event date/time column. This allows queries to eliminate all partitions outside the desired time window. For example: If I have a [...]
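As a hedged sketch of that design pattern – the table and column names below are made up, not taken from the post – a fact table interval-partitioned by the event date, where a query with a date-range predicate touches only the matching partitions:

create table sales_fact
(
sale_date  date,
product_id number,
amount     number
)
partition by range (sale_date)
interval (numtoyminterval(1, 'MONTH'))
(
partition p_initial values less than (date '2011-01-01')
)
/

-- only the partitions covering January 2011 are scanned
select sum(amount)
from sales_fact
where sale_date >= date '2011-01-01'
and sale_date <  date '2011-02-01';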

Chris Antognini's picture

Does CREATE INDEX Gather Global Statistics?

You can add the COMPUTE STATISTICS clause to the CREATE INDEX statement. It instructs the SQL statement to gather and store index statistics in the data dictionary, while creating the index. This is useful because the overhead associated with the gathering of statistics while executing this SQL statement is negligible. In Oracle9i, the gathering of [...]
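For reference, a minimal example of the clause – the table and index names are made up; note that in later releases index statistics are gathered automatically during index creation, so the clause matters mainly in older versions:

create index bookings_hotel_id_ix on bookings (hotel_id) compute statistics;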

Chris Antognini's picture

Zero-Size Unusable Indexes and the Query Optimizer

Zero-size unusable indexes and index partitions are a small but useful feature of Oracle Database 11g Release 2. Simply put, the aim is to save space in the database by immediately releasing the segment associated with unusable indexes or index partitions. To illustrate this, let’s have a look at an example…

Create a partitioned table, insert [...]
