Oakies Blog Aggregator

dbakevlar's picture

SQLd360 with the AWR Warehouse

Mauro decided to give me another challenge: run SQLd360 against the AWR Warehouse and let him know how it does straight “out of the box”.  The installation process is simpler than SQLTXPLAIN’s; you simply unzip and run, with no packages to install.

I chose a SQL_ID from one of the loaded source databases and ran it for a 31-day history:

SQL> @sqld360.sql 8ymbm9h6ndphq T 31

I saw a few errors scroll by on the screen as the process to collect the data and zip it up ran.

There isn’t the same kind of log file to review parameters after a successful run, but the SQL is very straightforward and sits right in the sql directory, so it’s easy enough to see where the DBID is populated as part of sqld360_0b_pre.sql:

-- get dbid
COL sqld360_dbid NEW_V sqld360_dbid;
SELECT TRIM(TO_CHAR(dbid)) sqld360_dbid FROM v$database;

As this is pulled from the V$ objects, it’s going to populate incorrectly for the AWR Warehouse, which needs this information queried from the AWR instead, preferably from DBA_HIST_DATABASE_INSTANCE.

SQL> desc dba_hist_database_instance;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 DBID                                      NOT NULL NUMBER
 INSTANCE_NUMBER                           NOT NULL NUMBER
 STARTUP_TIME                              NOT NULL TIMESTAMP(3)
 PARALLEL                                  NOT NULL VARCHAR2(3)
 VERSION                                   NOT NULL VARCHAR2(17)
 DB_NAME                                            VARCHAR2(9)
 INSTANCE_NAME                                      VARCHAR2(16)
 HOST_NAME                                          VARCHAR2(64)
 LAST_ASH_SAMPLE_ID                        NOT NULL NUMBER
 PLATFORM_NAME                                      VARCHAR2(101)
 CON_ID                                             NUMBER
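
A minimal sketch of what a warehouse-aware version of that lookup could look like, assuming the source database name is supplied via a hypothetical substitution variable (&&source_db_name. is my placeholder, not something SQLd360 provides today):

-- hedged sketch: resolve the source database's DBID from AWR instead of v$database
COL sqld360_dbid NEW_V sqld360_dbid;
SELECT TRIM(TO_CHAR(MAX(dbid))) sqld360_dbid
  FROM dba_hist_database_instance
 WHERE db_name = UPPER('&&source_db_name.');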

Then the pre run script also queries the plan_table for the SQL_ID:

-- check if SQLD360 is getting called by EDB360
COL from_edb360 NEW_V from_edb360;
SELECT CASE WHEN count(*) > 0 THEN '--' END from_edb360
 FROM plan_table
 WHERE statement_id = 'SQLD360_SQLID' -- SQL IDs list flag
 AND operation = '&&sqld360_sqlid.'
 AND rownum = 1;

We would need it to query the DBA_HIST version of the plan table if we wanted this data, but it does have a flag so that it can continue on as a standalone run, so it’s not a game stopper.

We then get tripped up again when it looks for the instance number:

-- get instance number
COL connect_instance_number NEW_V connect_instance_number;
SELECT TO_CHAR(instance_number) connect_instance_number FROM v$instance;

Again, with the AWR Warehouse we need to call upon the DBA_HIST_DATABASE_INSTANCE view, adding a where clause on the database name.

SELECT TO_CHAR(instance_number) connect_instance_number
  FROM dba_hist_database_instance
 WHERE db_name = p_dbname;   -- p_dbname: the source database name passed in

As with SQLTXPLAIN, we are shot on get_database_name_short, get_host_name and get_rdbms_version, too:

-- get database name (up to 10, stop before first '.', no special characters)
COL database_name_short NEW_V database_name_short FOR A10;
SELECT LOWER(SUBSTR(SYS_CONTEXT('USERENV', 'DB_NAME'), 1, 10)) database_name_short FROM DUAL;
SELECT SUBSTR('&&database_name_short.', 1, INSTR('&&database_name_short..', '.') - 1) database_name_short FROM DUAL;
SELECT TRANSLATE('&&database_name_short.',
'abcdefghijklmnopqrstuvwxyz0123456789-_ ''`~!@#$%&*()=+[]{}\|;:",.<>/?'||CHR(0)||CHR(9)||CHR(10)||CHR(13)||CHR(38),
'abcdefghijklmnopqrstuvwxyz0123456789-_') database_name_short FROM DUAL;
-- get host name (up to 30, stop before first '.', no special characters)
COL host_name_short NEW_V host_name_short FOR A30;
SELECT LOWER(SUBSTR(SYS_CONTEXT('USERENV', 'SERVER_HOST'), 1, 30)) host_name_short FROM DUAL;
SELECT SUBSTR('&&host_name_short.', 1, INSTR('&&host_name_short..', '.') - 1) host_name_short FROM DUAL;
SELECT TRANSLATE('&&host_name_short.',
'abcdefghijklmnopqrstuvwxyz0123456789-_ ''`~!@#$%&*()=+[]{}\|;:",.<>/?'||CHR(0)||CHR(9)||CHR(10)||CHR(13)||CHR(38),
'abcdefghijklmnopqrstuvwxyz0123456789-_') host_name_short FROM DUAL;
-- get rdbms version
COL db_version NEW_V db_version;
SELECT version db_version FROM v$instance;
DEF skip_10g = '';
COL skip_10g NEW_V skip_10g;
SELECT '--' skip_10g FROM v$instance WHERE version LIKE '10%';
DEF skip_11r1 = '';
COL skip_11r1 NEW_V skip_11r1;
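
Here is a rough sketch of how those values could be sourced from the warehouse instead, assuming &&sqld360_dbid. has already been repointed at the source database’s DBID as shown earlier (my assumption, not current SQLd360 behaviour). The existing TRANSLATE and skip_10g/skip_11r1 logic could then stay as-is, since it only operates on these substitution variables:

-- hedged sketch: source database name, host and version from DBA_HIST_DATABASE_INSTANCE
COL database_name_short NEW_V database_name_short FOR A10;
COL host_name_short NEW_V host_name_short FOR A30;
COL db_version NEW_V db_version;
SELECT LOWER(SUBSTR(db_name, 1, 10))   database_name_short,
       LOWER(SUBSTR(host_name, 1, 30)) host_name_short,
       version                         db_version
  FROM dba_hist_database_instance
 WHERE dbid = &&sqld360_dbid.
   AND ROWNUM = 1;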

You can see how all of this is going to return information about the AWR Warehouse repository (the actual database it’s run on) instead of the source database the SQL_ID came from.

CPU and RAC Info

We have the same issue when it goes to gather the CPU and RAC information:

-- get average number of CPUs
COL avg_cpu_count NEW_V avg_cpu_count FOR A3;
SELECT ROUND(AVG(TO_NUMBER(value))) avg_cpu_count FROM gv$system_parameter2 WHERE name = 'cpu_count';
-- get total number of CPUs
COL sum_cpu_count NEW_V sum_cpu_count FOR A3;
SELECT SUM(TO_NUMBER(value)) sum_cpu_count FROM gv$system_parameter2 WHERE name = 'cpu_count';
-- determine if rac or single instance (null means rac)
COL is_single_instance NEW_V is_single_instance FOR A1;
SELECT CASE COUNT(*) WHEN 1 THEN 'Y' END is_single_instance FROM gv$instance;

The first two queries just need to be updated to use the CPU_COUNT and CPU_CORE_COUNT columns of WRI$_DBU_CPU_USAGE.

And the final one can easily be updated to use DBA_HIST_DATABASE_INSTANCE once the db_name is passed in.
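
A hedged sketch of both changes, assuming WRI$_DBU_CPU_USAGE carries a DBID column per source database and that &&sqld360_dbid. has been repointed as described earlier (both are my assumptions, not current SQLd360 code):

-- hedged sketch: CPU counts for the source database from the warehouse usage table
COL avg_cpu_count NEW_V avg_cpu_count FOR A3;
SELECT TO_CHAR(ROUND(AVG(cpu_count))) avg_cpu_count
  FROM wri$_dbu_cpu_usage
 WHERE dbid = &&sqld360_dbid.;

-- hedged sketch: single instance vs. RAC from DBA_HIST_DATABASE_INSTANCE (null means RAC)
COL is_single_instance NEW_V is_single_instance FOR A1;
SELECT CASE COUNT(DISTINCT instance_number) WHEN 1 THEN 'Y' END is_single_instance
  FROM dba_hist_database_instance
 WHERE dbid = &&sqld360_dbid.;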

What does make SQLd360 great is that, upon quick inspection, a change to the “pre” script to pass in the DBID and query the DBA_HIST views instead of the V$/GV$ views is all that is really required to make most of SQLd360 AWR Warehouse compliant.

SQLd360 Wins

Without any changes, as is, the product runs and simply provides some wrong info when it comes to stats data, the environment and such.  The actual SQL is viable, and the code disregards the DBID when it queries the AWR objects.  It will fail on the ASH pull, but that shouldn’t be expected to work unless they want to hit DBA_HIST_ACTIVE_SESSION_HISTORY at some point… :)

Thanks to Mauro for the great idea to try SQLd360 against the AWR Warehouse and review it for changes to see what would be required.  Hope this helps, guys!

 

 

 

 



Copyright © DBA Kevlar [SQLd360 with the AWR Warehouse], All Right Reserved. 2015.

dbakevlar's picture

SQLTXPLAIN and the AWR Warehouse- Part II

I finally have a moment to look into my “hacked” SQLT XPREXT/XTRACT runs and see if the changes I made were enough to run properly with the AWR Warehouse.

The answer is yes and no… and maybe “it depends”… :)

The Results

Pulling the data for the appropriate SQL_ID from the correct source database in the AWR Warehouse repository worked, but the environment information still populates incorrectly, as I didn’t update anything outside of the get_database_id function in the SQLT$A package and the executables that call it, as documented in my earlier blog post.

The first indication that my assumption was right, that additional functions would need to be updated, came when I noted the environment values in the main HTML page, the main entry point into the zip file provided as output of any SQLT run:

[screenshot: sqlt8]

 

The SQL_ID was run against the SH database, but the report instead shows the information for the AWR Warehouse, which is incorrect.  The issue is that it’s populating the database name, DBID and the short name from the local views (v$instance, v$database, etc.).

The CBO environment and system statistics are also wrong, as SQLT pulls this information from the local database, which is the AWR Warehouse repository, not the source database the SQL_ID actually ran in:

[screenshot: sqlt10]

 

Yes, the statistics and initialization parameters currently have no way to pull from the DBA_HIST tables, which means we have the wrong data for most of the higher-level info on the main HTML page.

Where It’s Right

Where I did make changes, minimal as they were compared to what is actually required, the report displays correctly.  The cursor sharing and SQL information, along with the SQL_ID data, are all correct for the statement at hand.

[screenshot: sqlt11]

The bind peek info is correct, too:

[screenshot: sqlt12]

You can quickly see that this data is pulled from the DBA_HIST_SQL_PLAN table, and since I’m pushing the correct SQL_ID and DBID into the query, it returns the correct information.

I get results in any section that sources from DBA_HIST_SQL_PLAN, and distinct failures wherever GV$SQL_PLAN is queried.  This should be expected: the AWR Warehouse only houses the SH database’s information for this SQL_ID in the DBA_HIST objects.  The GV$ objects aren’t going to know anything about the SH source database.
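
The pattern that works in the warehouse is easy to spot.  Here is a hedged illustration of it (the &&source_dbid. and &&sql_id. substitution variables are my own placeholders, not SQLT’s):

-- every DBA_HIST query has to carry the source DBID alongside the SQL_ID
SELECT plan_hash_value, id, operation, options, object_name
  FROM dba_hist_sql_plan
 WHERE dbid   = &&source_dbid.
   AND sql_id = '&&sql_id.'
 ORDER BY plan_hash_value, id;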

Success Rate

So how successful was I?  About 30%… :)

Any objects, including the r_rows objects populated as part of the SQLT runs, were successful if they sourced from the Automatic Workload Repository (the WRH$, WRI$ and WRM$ objects, which feed the DBA_HIST* views) and failed when they sourced from a GV$ or V$ object.  I also had to ensure the appropriate DBID was passed along with the SQL_ID; without that added element, failures also occurred.

Recommendations for Change

Additional changes needed to complete the transition to AWR Warehouse compliance would include:

At installation time, add an option for not just the Tuning Pack, but also “A” for AWR Warehouse.  If this option is chosen, a second version of the SQLT$A package would then be deployed, along with the executables for the XPREXT and XTRACT SQL statements.

The SQLT$A package would require updates to the following functions (a rough sketch of the first one follows the list):

  1. get_database_id
  2. get_database_name
  3. get_sid
  4. get_instance_number
  5. get_instance_name
  6. get_host_name
  7. get_host_name_short
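
This is not SQLT’s actual code, just a minimal sketch of the idea for get_database_id, assuming the source database name is handed in as a new p_dbname parameter:

-- hedged sketch only: resolve the source DBID from the warehouse by database name
CREATE OR REPLACE FUNCTION get_database_id (p_dbname IN VARCHAR2)
RETURN NUMBER
IS
  l_dbid NUMBER;
BEGIN
  SELECT MAX(dbid)
    INTO l_dbid
    FROM dba_hist_database_instance
   WHERE db_name = UPPER(p_dbname);
  RETURN l_dbid;
END get_database_id;
/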

XPREXT and XTRACT would then require one additional parameter: the dbname.

So instead of:

>START /u01/home/sqlt/run/sqltxprext.sql aycb49d3343xq 

it would be:

>START /u01/home/oraha/kellyn/sqlt/run/sqltxprext.sql aycb49d3343xq <dbname>

Carlos and Mauro were wonderful and introduced me to Abel, so I’ll be passing this along to him.  Cross your fingers and let’s see if we can get this valuable tool working with the AWR Warehouse… and soon! :)

 

 



Copyright © DBA Kevlar [SQLTXPLAIN and the AWR Warehouse- Part II], All Right Reserved. 2015.

mwidlake's picture

My Oracle Life of Clubbing & Harmony

Last year I promised myself I would do more conferences & presenting and that it would include more events further afield, such as in Europe. I can’t say I managed it in 2014 (Liverpool for the UKOUG Tech14 did not count as a foreign country for me, even if I found a couple of the locals hard to understand) but 2015 is proving more successful. I attended the OUG Ireland conference 2 weeks ago, for my first trip to that country, and I learnt recently that I have papers accepted for Harmony 2015. This conference is a joint event between the Oracle user group of Finland, the Oracle user group of Estonia and the Latvian Oracle user group.

The conference is on the 11th and 12th of June in Tallinn, Estonia. I know that a few friends I’ve met in the flesh will be there, but also some people I only know online and who I’m looking forward to meeting for real {and one who I am not sure if I have met in the flesh or not!!!}. That’s part of why I like going to conferences; it is nice to get to know people via electronic means, but there is nothing like actually being in the same room and chatting, especially if it is relaxing over a coffee, beer or a meal.

However, I am particularly happy to be going to Tallinn as my wife has been there and loves it. We are trying to organise it so that she can come over as well, but she has her own travel commitments that vary from week to week. Sue knows how to say “can you punch my ticket” in Estonian – and she assures me this is not a euphemism for anything.

In case Sue cannot make it, she has given me the book she learnt from, so I can learn Estonian myself:

Learn Estonian - in Russian!

Learn Estonian – in Russian!

First I have to learn Russian though… Yes, it’s a Russian “How to learn Estonian” book.

Have you any idea how much pleasure she took in doing that to me?

So that is the Harmony. What about the Clubbing? That would be Club Oracle London, which is a user group I mention each time there is a meeting. It is in London in the evening and there are 3 talks, beer, pizza and lots of chat between the crowd & the presenters. I’m doing my Disasters talk at the next meeting on the 30th April. Click that link to register and secure your place, it’s free. The other presenters are Svetoslav Gyurov and Dominic Giles. Dom is being particularly brave and is offering to answer any questions people have about the database {“as honestly as I can”}. I’ve known Dom for years, he used to come over to the place I worked when we were doing a lot of beta testing of Oracle. He secured his place in my admiration by not only thoroughly knowing his stuff but also when he told me off for being pathetic and not pushing the new tech and that I was being a wimp. Utter honesty from the vendor works for me.

I’ve currently got nothing else organised for 2015 conference-wise (apart from the small issue of helping define the technical content for UKOUG Tech15! So I guess I will be there. Oh, and probably a couple of SIGs). I keep saying I’ll try to do Bulgaria but again I’d like to get that to work with going with Sue. And of course, I could put in for Oracle Open World 15, but it’s a loooong way to go and costs me a lot. And Larry does not seem to want to talk to me anymore.

randolf.geist's picture

DOAG Expertenseminar "Parallel Execution Masterclass" (German)

In two weeks the expert seminar "Parallel Execution Masterclass", which I am running together with DOAG, takes place in Berlin.

There are still a few places available, so if you have the time and inclination to come to Berlin and pick up exclusive knowledge about (and not only about) the Parallel Execution feature of the Oracle database, I would be very pleased to welcome you there along with the other participants and to spend a good and productive time together!

If you are interested, please get in touch with the DOAG contacts given in the link; there you will also find a more detailed description of the seminar.

Chris Antognini's picture

Ad: Trivadis DK Opening Event

Trivadis, the company I work for, just opened a new branch in Denmark (the press release, in German, is available here). The opening event takes place on the 14th of April (09:00-15:00) at the Oracle office in Ballerup. Two colleagues of mine and I will be there to deliver the following presentations:

Big Data versus Conventional Techniques – Decision Criteria (Peter Welker, Senior Principal Consultant)

Should Big Data technologies be used in the Business Intelligence and Data Warehouse environment or not? The presentation explains “classic” and “new” requirements for data analysis and compares them against technical and other criteria: latency, throughput, data volume, information quality and structure are not enough on their own. User friendliness, maturity, stability, maintainability, or simply the available know-how and the bias of developers, admins and users are just as important. And of course the costs are not to be overlooked.

The Oracle Database In-Memory Option: Challenges and Possibilities (Christian Antognini, Senior Principal Consultant)

Oracle Database In-Memory, which was introduced with version 12.1.0.2, promises to deliver in-memory performance without modifying the application’s code, increasing the complexity of database administration, or jeopardizing the utilization of other Oracle Database functionalities.
The aim of this presentation, after explaining the key concepts the Oracle Database In-Memory option is based on, is to review what you can expect from this new technology. Specifically, we’ll take a look at the situations where you can take advantage of it and what kind of overhead you should expect when enabling it.

The Power To Perform – Managed Services Made Easy (Gerald Klump, Head of Managed Services & Outsourcing)

How many ideas, new business models or process optimisation measures are just waiting to be implemented in your company? Too many? Because your IT department is already at full capacity handling its day-to-day tasks and a host of other projects?
The diversity of challenges that have to be faced today means that involving external IT service providers is the best way, from a business point of view, of mastering all these tasks as a whole on time. The quality of the provider, alongside the quality of the collaboration itself, are the key to successfully achieving this goal. But what are the characteristics of effectively cooperating with an external service provider?
How can you effortlessly tailor services that require support to precisely accommodate your own needs? Trivadis, as a leading IT service provider, can offer you the solutions that you’re looking for. Based on concrete examples of our projects for small to medium-sized companies and key account customers such as Audi, we’ll show you what made-to-measure services actually look like in practice, what the benefits for your company will be, and what you will have to take into consideration.
With our Managed Services, you will enhance your performance as well.

Sound interesting? Or just want to stop by to say hi? Then, do not hesitate to let us know that you want to participate by sending an email to this address. Further information about the event, in Danish, is available here.

fritshoogland's picture

Using the Oracle Grid Infrastructure Agents for GoldenGate on Oracle restart

I was testing Oracle GoldenGate on a non-clustered Oracle 12.1.0.2 database with ASM. With ASM, you need to have the grid infrastructure installed. The clusterware for a single-node install is called ‘Oracle Restart’.

The most convenient way I could find to have GoldenGate running at startup was to use the Oracle Grid Infrastructure Agents. These agents are not installed by default; you need to download them from the Oracle Technology Network. The download sits with the grid infrastructure downloads in the database section.

The installation is very simple: unzip the xagpack_6.zip file and run the xagsetup.sh script with the ‘--install’ and ‘--directory’ arguments. The directory argument needs to point to a directory outside of the grid infrastructure home. My first choice would have been to put it in the infrastructure home, but the install script does not allow that.

$ ./xagsetup.sh --install --directory /u01/app/oracle/product/xag
Installing Oracle Grid Infrastructure Agents on: ogg-dest
Done.

The control utility, ‘agctl’, can be run without any environment variables set to point to the directory where the utility is installed or to the grid infrastructure home.

$ /u01/app/oracle/product/xag/bin/agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 6.1.1

To add a golden gate resource, add it via the agctl utility:

$ /u01/app/oracle/product/xag/bin/agctl add goldengate gg1 --gg_home /u01/app/oracle/product/12.1.2/oggcore_1 --instance_type target --oracle_home /u01/app/oracle/product/12.1.0.2/dbhome_1 --databases ora.dest.db

This is a very simple example, in which a cluster resource called ‘gg1’ is created, pointing out the GoldenGate home, the database ORACLE_HOME and the database cluster resource (not the database name, as some Oracle documentation suggests).

In my case, GoldenGate was not running. After the GoldenGate cluster resource was added, it shows as offline:

xag.gg1.goldengate
      1        OFFLINE OFFLINE                               STABLE

Now start the resource using the ‘agctl’ utility:

/u01/app/oracle/product/xag/bin/agctl start goldengate gg1

(please mind you could do exactly the same with crsctl: crsctl start res xag.gg1.goldengate)
Starting does not result in any message. Let’s look at the golden gate cluster resource:

xag.gg1.goldengate
      1        ONLINE  ONLINE       ogg-dest                 STABLE

What it does is start the GoldenGate manager (mgr) process:

$ /u01/app/oracle/product/12.1.2/oggcore_1/ggsci << H
> info all
> H

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.



GGSCI (ogg-dest.local) 1>
Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    ABENDED     REP1        00:00:00      17:55:53

Let’s add monitoring of the replicat. In real-life scenarios it’s the extracts or replicats that are important; these perform the actual work GoldenGate is supposed to do!

/u01/app/oracle/product/xag/bin/agctl modify goldengate gg1 --monitor_replicats rep1

The clusterware checks the GoldenGate resource status every 30 seconds, so if you look at the cluster immediately after modifying the GoldenGate resource, it may not yet have performed a check that includes the replicat.
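
The tabular fragments quoted in this post look like clusterware status output; one standard way to check the resource state yourself once the interval has passed (my own example, not a command the post shows) is:

$ crsctl status resource xag.gg1.goldengate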

xag.gg1.goldengate
      1        ONLINE  INTERMEDIATE ogg-dest                 ER(s) not running :
                                                             REP1,STABLE

It now neatly reports the replicat group rep1 not running! I guess ‘ER’ means Extract and Replicat.
Now start the replicat, and check the status again:

xag.gg1.goldengate
      1        ONLINE  ONLINE       ogg-dest                 STABLE

A few final notes. This is the simplest possible setup. I couldn’t find a lot of information on the infrastructure agents beyond some Oracle-provided documents, which is why I created this blog post. If you apply this on a cluster, I would urge you to look into those Oracle-provided documents and create the full setup, with GoldenGate writing its files on a cluster filesystem, using an application VIP, etcetera, so GoldenGate can be started on or failed over to another host.

GoldenGate is not the only thing these agents can incorporate into the clusterware. The current version (6.1.1) supports, besides GoldenGate: Tomcat, Apache, JD Edwards (JDE), MySQL, various types of PeopleSoft servers, Siebel and WebLogic.

Deinstalling is very simple, but might catch you on the wrong foot. The agents are deinstalled by running the setup script (xagsetup.sh) with the ‘--deinstall’ argument. That is of course quite normal, but it deinstalls the agents from the location where you call xagsetup.sh. This means that if you run xagsetup.sh from the location where you unzipped it for installation, it will ‘deinstall’ (that is, clean out) that directory, leaving the true installation alone. The documentation states that upgrades to the agents are done by removing them and installing the new version.
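
So, based on that behaviour, removing the real installation means calling the script from the installation directory itself; a minimal example using the path from the install earlier:

$ cd /u01/app/oracle/product/xag
$ ./xagsetup.sh --deinstall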

Tagged: clusterware, grid, grid infra, grid infrastructure agents, install, xag

martin.bach's picture

Little things worth knowing: exp/imp vs expdp and impdp for HCC in Exadata

Do you know the difference between exp/imp and expdp/impdp when it comes to importing HCC compressed data in Exadata?

If not, then follow me through two examples. This is on 11.2.0.3/11.2.3.3.1 but applies to all database releases you can have on Exadata. The task at hand is to export a table (which happens to be non-partitioned and HCC compressed for query high) and import it into a different user’s schema. This is quite a common approach when migrating data from a non-Exadata system into an Exadata system. You could for example pre-create the DDL for the tables and implement HCC before even importing a single row. When importing the data, the partitions’ HCC attributes will be honoured and data will be inserted compressed. Or won’t it?

The table

The table I want to export resides on our V2 system. Since I am (trying to be) a good citizen I want to use dbfs for the dump files. Beginning with 12.1.0.2 Grid Infrastructure you can also have ACFS by the way. Let’s start by creating the directory needed and some meta-information about the source table:

SQL> create or replace directory DBM01 as '/dbfs_direct/FS1/mbach/';

Directory created.

SQL> select owner, table_name, partitioned, num_rows, compress_for from dba_tables where table_name = 'T1_QH';

OWNER                          TABLE_NAME                     PAR   NUM_ROWS COMPRESS_FOR
------------------------------ ------------------------------ --- ---------- ------------
MARTIN                         T1_QH                          NO             QUERY HIGH

SQL> select bytes/power(1024,2) m, blocks from dba_segments where segment_name = 'T1_QH';

         M     BLOCKS
---------- ----------
        72       9216

The table is 72 MB in size, and HCC compressed (I actually ensured that it was by issuing an “alter table t1_qh move;” before the export started).

Data Pump

The first export uses expdp, followed by impdp to get the data back. I am remapping the schema so I don’t have to risk overwriting my source data.

[enkdb01:oracle:DBM011] /home/oracle/mbach
> expdp martin/xxxxxx directory=DBM01 dumpfile=exp_t1_qh.dmp logfile=exp_t1_qh.log tables=t1_qh

Export: Release 11.2.0.3.0 - Production on Tue Mar 24 06:15:34 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "MARTIN"."SYS_EXPORT_TABLE_02":  martin/******** directory=DBM01 dumpfile=exp_t1_qh.dmp logfile=exp_t1_qh.log tables=t1_qh
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 72 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "MARTIN"."T1_QH"                            9.675 GB 10000000 rows
Master table "MARTIN"."SYS_EXPORT_TABLE_02" successfully loaded/unloaded
******************************************************************************
Dump file set for MARTIN.SYS_EXPORT_TABLE_02 is:
  /dbfs_direct/FS1/mbach/exp_t1_qh.dmp
Job "MARTIN"."SYS_EXPORT_TABLE_02" successfully completed at 06:17:16

The interesting bit here is that the table on disk occupies around 72 MB, and yet expdp tells me the 10000000 rows occupy 9.7 GB. Can anyone guess why?

[enkdb01:oracle:DBM011] /home/oracle/mbach
> ls -lh /dbfs_direct/FS1/mbach/exp_t1_qh.dmp
-rw-r----- 1 oracle dba 9.7G Mar 24 06:17 /dbfs_direct/FS1/mbach/exp_t1_qh.dmp

Yes, 10GB. Data apparently is not exported in its compressed form. Now this table is going to be imported:

[enkdb01:oracle:DBM011] /home/oracle/mbach
> impdp imptest/xxxxxx directory=DBM01 dumpfile=exp_t1_qh.dmp logfile=imp_t1_qh.log \
> tables=martin.t1_qh remap_schema=martin:imptest 

Import: Release 11.2.0.3.0 - Production on Tue Mar 24 06:23:44 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table "IMPTEST"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "IMPTEST"."SYS_IMPORT_TABLE_01":  imptest/******** directory=DBM01
  dumpfile=exp_t1_qh.dmp logfile=imp_t1_qh.log tables=martin.t1_qh remap_schema=martin:imptest
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "IMPTEST"."T1_QH"                           9.675 GB 10000000 rows
Job "IMPTEST"."SYS_IMPORT_TABLE_01" successfully completed at 06:29:53

This looks all right: all the data is back (the number of rows imported matches the number exported). What about the segment size?

SQL> select owner, table_name, partitioned, num_rows, compress_for from dba_tables where table_name = 'T1_QH';

OWNER                          TABLE_NAME                     PAR   NUM_ROWS COMPRESS_FOR
------------------------------ ------------------------------ --- ---------- ------------
MARTIN                         T1_QH                          NO             QUERY HIGH
IMPTEST                        T1_QH                          NO             QUERY HIGH

2 rows selected.

SQL> select bytes/power(1024,2) m, blocks from dba_segments where segment_name = 'T1_QH';

         M     BLOCKS
---------- ----------
        72       9216
        72       9216

2 rows selected.

Identical down to the block.

Traditional Export/Import

Despite the fact that exp/imp are deprecated, they are still included with 12.1.0.2, the current release at the time of writing. What if you did the same process with these instead? After all, many DBAs “grew up” with those tools and can use them in their sleep. This, plus some initial deficits of Data Pump in 10g, keeps exp/imp high up in the list of tools we like.

Let’s export the table:

[enkdb01:oracle:DBM011] /home/oracle/mbach
> exp martin/xxxxxxx file=/dbfs_direct/FS1/mbach/exp_t1_qh_classic.dmp log=/dbfs_direct/FS1/mbach/exp_t1_qh_classic.log tables=t1_qh

Export: Release 11.2.0.3.0 - Production on Tue Mar 24 06:42:34 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Tes
Export done in US7ASCII character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
. . exporting table                          T1_QH   10000000 rows exported
Export terminated successfully without warnings.

The files are more or less identical in size to the ones created before by Data Pump:

[enkdb01:oracle:DBM011] /home/oracle/mbach
> ls -lh /dbfs_direct/FS1/mbach/*classic*
-rw-r--r-- 1 oracle dba 9.7G Mar 24 07:01 /dbfs_direct/FS1/mbach/exp_t1_qh_classic.dmp
-rw-r--r-- 1 oracle dba  537 Mar 24 07:01 /dbfs_direct/FS1/mbach/exp_t1_qh_classic.log

What about the import?

[enkdb01:oracle:DBM011] /home/oracle/mbach
> imp imptest/xxxxxx file=/dbfs_direct/FS1/mbach/exp_t1_qh_classic.dmp log=/dbfs_direct/FS1/mbach/imp_t1_qh_classic.log \
> FROMUSER=martin TOUSER=imptest tables=t1_qh

Import: Release 11.2.0.3.0 - Production on Tue Mar 24 07:27:42 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Tes

Export file created by EXPORT:V11.02.00 via conventional path

Warning: the objects were exported by MARTIN, not by you

import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing MARTIN's objects into IMPTEST
. . importing table                        "T1_QH"

And this takes a loooong time. What’s happening in the background? I first checked what the session was doing. That’s simple: have a look at the output.

SQL> @scripts/as

   INST_ID   SID    SERIAL# USERNAME      PROG       SQL_ID         CHILD PLAN_HASH_VALUE      EXECS   AVG_ETIME OFF SQL_TEXT
---------- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- --- -----------------------------------------
         1   594         23 IMPTEST       imp@enkdb0 fgft1tcrr12ga      0               0      22723         .00 No  INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "

The #execs count was interesting, and indeed, after a couple of minutes that counter had gone up a lot:

   INST_ID   SID    SERIAL# USERNAME      PROG       SQL_ID         CHILD PLAN_HASH_VALUE      EXECS   AVG_ETIME OFF SQL_TEXT
---------- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- --- -----------------------------------------
         1   594         23 IMPTEST       imp@enkdb0 fgft1tcrr12ga      0               0     639818         .00 Yes INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "

You can probably guess what’s coming next. After quite a while the import finished:

> imp imptest/xxxxxxx file=/dbfs_direct/FS1/mbach/exp_t1_qh_classic.dmp log=/dbfs_direct/FS1/mbach/imp_t1_qh_classic.log \
> FROMUSER=martin TOUSER=imptest tables=t1_qh

Import: Release 11.2.0.3.0 - Production on Tue Mar 24 07:27:42 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Tes

Export file created by EXPORT:V11.02.00 via conventional path

Warning: the objects were exported by MARTIN, not by you

import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing MARTIN's objects into IMPTEST
. . importing table                        "T1_QH"   10000000 rows imported
Import terminated successfully without warnings.

And now for the reason for this blog post:

SQL> select owner, bytes/power(1024,2) m, blocks from dba_segments where segment_name = 'T1_QH';

OWNER                                   M     BLOCKS
------------------------------ ---------- ----------
MARTIN                                 72       9216
IMPTEST                             11227    1437056

The newly imported table was not compressed at all when imported using the traditional path: the row-by-row conventional inserts that imp performs are not direct-path loads, so HCC compression is not applied.
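
If you do end up in this situation, one way back to HCC is a segment rebuild after the fact. A minimal sketch, assuming Exadata storage, enough space for the move, and that any indexes on the table get rebuilt afterwards:

-- re-create the segment compressed for query high after a conventional-path import
ALTER TABLE imptest.t1_qh MOVE COMPRESS FOR QUERY HIGH;
-- indexes on the moved table are left unusable and would need a rebuild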

marco's picture

Presentation material & E-learning videos – In-Memory Column Store Workshop with Maria Colgan

You can now download and have another look at the presentations used during the In-Memory…

Richard Foote's picture

Oracle Database In-Memory Test Drive Workshop: Canberra 28 April 2015

I’ll be running a free Oracle Database In-Memory Test Drive Workshop locally here in Canberra on Tuesday, 28th April 2015. Just bring a laptop with at least 8G of RAM and I’ll supply a VirtualBox image with the Oracle Database 12c In-Memory environment. Together we’ll go through a number of hands-on labs that cover: Configuring the Product Easily […]

tanelpoder's picture

Sqlplus is my second home, part 8: Embedding multiple sqlplus arguments into one variable

I’ve updated some of my ASH scripts to use these 4 arguments in a standard way:

  1. What ASH columns to display (and aggregate by)
  2. Which ASH rows to use for the report (filter)
  3. Time range start
  4. Time range end

So this means that whenever I run ashtop (or dashtop), for example, I need to type in all 4 parameters. The example below shows the top SQL_IDs only for user SOE sessions from the last hour of ASH samples:

SQL> @ashtop sql_id username='SOE' sysdate-1/24 sysdate

    Total
  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
     2271      .6   21% | 56pwkjspvmg3h 2015-03-29 13:13:16 2015-03-29 13:43:34               145
     2045      .6   19% | gkxxkghxubh1a 2015-03-29 13:13:16 2015-03-29 13:43:14               149
     1224      .3   11% | 29qp10usqkqh0 2015-03-29 13:13:25 2015-03-29 13:43:32               132
      959      .3    9% | c13sma6rkr27c 2015-03-29 13:13:19 2015-03-29 13:43:34               958
      758      .2    7% |               2015-03-29 13:13:16 2015-03-29 13:43:31                 1

When I want more control and specify a fixed time range, I can just use the ANSI TIMESTAMP (or TO_DATE) syntax:

SQL> @ashtop sql_id username='SOE' "TIMESTAMP'2015-03-29 13:00:00'" "TIMESTAMP'2015-03-29 13:15:00'"

    Total
  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
      153      .2   22% | 56pwkjspvmg3h 2015-03-29 13:13:29 2015-03-29 13:14:59                 9
      132      .1   19% | gkxxkghxubh1a 2015-03-29 13:13:29 2015-03-29 13:14:59                 8
       95      .1   14% | 29qp10usqkqh0 2015-03-29 13:13:29 2015-03-29 13:14:52                 7
       69      .1   10% | c13sma6rkr27c 2015-03-29 13:13:31 2015-03-29 13:14:58                69
       41      .0    6% |               2015-03-29 13:13:34 2015-03-29 13:14:59                 1

Note that arguments 3 & 4 above are in double quotes, as there’s a space within each timestamp value. Without the double quotes, sqlplus would think the script has a total of 6 arguments due to the spaces.

I don’t like to type too much though (every character counts!), so I was happy to see that the following sqlplus hack works. I just defined pairs of arguments as sqlplus DEFINE variables, as seen below (also in init.sql now):

  -- geeky shorcuts for producing date ranges for various ASH scripts
  define     min="sysdate-1/24/60 sysdate"
  define  minute="sysdate-1/24/60 sysdate"
  define    5min="sysdate-1/24/12 sysdate"
  define    hour="sysdate-1/24 sysdate"
  define   2hours="sysdate-1/12 sysdate"
  define  24hours="sysdate-1 sysdate"
  define      day="sysdate-1 sysdate"
  define    today="TRUNC(sysdate) sysdate"

And now I can type just 3 arguments instead of 4 when I run some of my scripts and want some predefined behavior like seeing last 5 minutes’ activity:

SQL> @ashtop sql_id username='SOE' &5min

    Total
  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
      368     1.2   23% | gkxxkghxubh1a 2015-03-29 13:39:34 2015-03-29 13:44:33                37
      241      .8   15% | 56pwkjspvmg3h 2015-03-29 13:40:05 2015-03-29 13:44:33                25
      185      .6   12% | 29qp10usqkqh0 2015-03-29 13:39:40 2015-03-29 13:44:33                24
      129      .4    8% | c13sma6rkr27c 2015-03-29 13:39:35 2015-03-29 13:44:32               129
      107      .4    7% |               2015-03-29 13:39:34 2015-03-29 13:44:33                 1
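
To see why this works, here is a tiny test script of my own (demo.sql is a hypothetical example, not part of the ashtop toolkit). The single &5min reference is substituted into two space-separated words before sqlplus parses the arguments, so the called script sees two positional parameters:

-- demo.sql: echo the first two sqlplus arguments
SELECT '&1' arg1, '&2' arg2 FROM dual;

Calling it as “@demo &5min” should then print sysdate-1/24/12 as arg1 and sysdate as arg2.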

That’s it, I hope this hack helps :-)

By the way – if you’re a command line & sqlplus fan, check out the SQLCL command line “new sqlplus” tool from the SQL Developer team! (you can download it from the SQL Dev early adopter page for now).