Oakies Blog Aggregator

Franck Pachot

Start/Stop your Autonomous Databases

The ATLAS experiment in LEGO®

Here are two blog posts on the Databases at CERN blog:

  1. Oracle Cloud Infrastructure API Keys and OCID
  2. Oracle Cloud: start/stop automatically the Autonomous Databases

The idea is to control the Autonomous Databases from the command line, with no installation: just a simple function downloaded from the documentation and customized with environment variables, in order to automate starting and stopping the services. The first post shows how to get all the required OCIDs, and the second shows how to use this oci-curl() function to stop all started ADW or ATP services.

In summary, define the following variables with your values:

# private API key and its fingerprint
privateKeyPath=~/.oci/oci_api_key.pem
keyFingerprint="05:56:ee:89:19:e7:16:03:9b:00:c3:91:cf:a6:9d:e9"
# OCIDs of the API user, the tenancy, and the compartment (here the root compartment, hence a tenancy OCID)
authUserId=ocid1.user.oc1..aaaaaaaaflzrbfegsz2dynqh7nsea2bxm5...
tenancyId=ocid1.tenancy.oc1..aaaaaaaac6guna6l6lpy2s6cm3kguijf...
compartmentId=ocid1.tenancy.oc1..aaaaaaaac6guna6l6lpy2s6...
# Database API endpoint for your region, and the API version
endpoint=database.us-ashburn-1.oraclecloud.com
apiVersion=20160918
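
If you don't have the key fingerprint at hand, it can be computed from the private key itself – assuming openssl is available and the key is the one referenced by privateKeyPath:

openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c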

Load the oci-curl() function:

source <( curl https://docs.cloud.oracle.com/iaas/Content/Resources/Assets/signing_sample_bash.txt | grep -vE "(local tenancyId|local authUserId=|local keyFingerprint|local privateKeyPath=)" | sed -e '1s/^.*#/#/' )

List your autonomous services to check that all is correctly set:

for service in autonomousDatabases autonomousDataWarehouses ; do oci-curl $endpoint get "/$apiVersion/$service?compartmentId=$compartmentId" | jq -r '.[] | [ .dbName , .lifecycleState , .id ]  | @tsv'; done

And the result is one line per service, with its name, lifecycle state, and OCID.
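
Stopping everything that is up is then a small extension of the same loop. Here is a minimal sketch, assuming the documented oci-curl() signature for POST calls (a body file – /dev/null here for an empty body – followed by the request target) and the /actions/stop endpoint of the Database API:

# stop every AVAILABLE ADW/ATP service in the compartment
for service in autonomousDatabases autonomousDataWarehouses ; do
 for id in $( oci-curl $endpoint get "/$apiVersion/$service?compartmentId=$compartmentId" | jq -r '.[] | select(.lifecycleState=="AVAILABLE") | .id' ) ; do
  oci-curl $endpoint post /dev/null "/$apiVersion/$service/$id/actions/stop"
 done
done

The same sketch with /actions/start brings the services back up.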

All details are on the Databases at CERN blog:

  1. Oracle Cloud Infrastructure API Keys and OCID
  2. Oracle Cloud: start/stop automatically the Autonomous Databases

The usage of this oci-curl() function was inspired by Yasin Baskan's blog post:

https://blogs.oracle.com/datawarehousing/managing-autonomous-data-warehouse-using-oci-curl

jonah.harris

Revisiting Embedded InnoDB

Many people these days don’t know that InnoDB was originally developed as an independent database engine, apart from MySQL. Its author, Heikki Tuuri, modeled InnoDB after Transaction Processing: Concepts and Techniques, the seminal transaction processing book authored by Turing Award laureate James “Jim” Gray and Andreas Reuter. It wasn’t until later that InnoDB was integrated with MySQL. While InnoDB […]

jonah.harris

Open Source ODBC Drivers for Oracle

In the good old days, database-agnostic applications were written using drivers that implemented the Microsoft Open Database Connectivity (ODBC) API, especially on Windows. Much like JDBC, ODBC provided developers with a single, interoperable, C-based programming language interface that made it possible for applications to access data from a variety of database management systems. When developing an […]

Kamil Stawiarski

ODBV3 – more comfortable usage

It has been a crazy few months – organizing POUG2018 took a lot of energy, but it was satisfying as hell!

connor_mc_d

Partial indexing – get the dictionary definitions right

Just a quick post to clear up some confusion about the partial indexing capabilities in Oracle Database 12c and above. I was at an event at the Oracle Perth office last week, and an attendee mentioned that they thought the feature was not working, or was not enabled on their database, and presented the following demonstration to prove their case:



SQL> create table t ( x int, y int )
  2  partition by range ( x )
  3  (
  4    partition p1 values less than ( 1000 ) indexing on,
  5    partition p2 values less than ( 2000 ) indexing off
  6  );

Table created.

SQL> insert into t
  2  select rownum, rownum from dual connect by level < 2000;

1999 rows created.

SQL> create index ix on t ( x ) local;

Index created.

SQL> select segment_name, partition_name
  2  from user_segments
  3  where segment_name = 'IX';

SEGMENT_NAME                   PARTITION_NAME
------------------------------ ------------------------------
IX                             P1
IX                             P2

SQL> select partition_name, status
  2  from user_ind_partitions
  3  where index_name = 'IX';

PARTITION_NAME                 STATUS
------------------------------ --------
P1                             USABLE
P2                             USABLE

At first glance, this looks counter-intuitive. I have explicitly specified that I do not want indexing on partition P2, yet after creating a local index, I still have 2 segments, one for each partition, and double-checking USER_IND_PARTITIONS tells me that they are both “fully-fledged” usable index partitions.

As per the documentation linked above, nominating the INDEXING ON / OFF at the partition level on the table definition is not the whole story. When you create the index, you need to inform the database that you wish a particular index to respect that intent. This is because you might want some indexes to be partial and others not to be.

So by slightly changing my CREATE INDEX statement, I can get the desired outcome.


SQL> drop index ix;

Index dropped.

SQL> create index ix on t ( x ) local indexing partial;

Index created.

SQL> select segment_name, partition_name
  2  from user_segments
  3  where segment_name = 'IX';

SEGMENT_NAME                   PARTITION_NAME
------------------------------ ------------------------------
IX                             P1

SQL> select partition_name, status
  2  from user_ind_partitions
  3  where index_name = 'IX';

PARTITION_NAME                 STATUS
------------------------------ --------
P1                             USABLE
P2                             UNUSABLE
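
As a side note – a sketch, not a tested recommendation – the indexing_clause of ALTER INDEX should let you switch an existing index between full and partial without the drop and re-create above (affected partitions may still need a rebuild):

SQL> alter index ix indexing partial;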

Pete

Oracle Core Audit - Do you Audit your Core database engine for breach?

Oracle's core database audit is a useful tool to monitor activity of the core database engine or applications and to detect potential abuse. It seems to be a sad fact that with a lot of companies that I visit and from....[Read More]

Posted by Pete On 15/09/18 At 08:28 AM

Franck Pachot

How to gather Oracle optimizer statistics with minimal risks of regression

The ATLAS experiment control room

Here is, on the Databases at CERN blog, an example of how to lower the risks when you need to gather statistics in production:

  • use pending stats to be able to test them before publishing
  • be ready to restore the old ones if a critical regression shows up after publishing

The whole demo (I encourage you to follow the Databases at CERN blog):

https://db-blog.web.cern.ch/blog/franck-pachot/2018-09-optimizer-statistics-gathering-pending-and-history

Here is a summary of commands used, for an easy copy/paste:

-- gather the new statistics as pending (not visible to the optimizer yet)
exec dbms_stats.set_table_prefs('&&OWNER','&&TABLE','publish','false');
exec dbms_stats.gather_table_stats('&&OWNER','&&TABLE');
-- test the pending statistics in this session only
alter session set optimizer_use_pending_statistics=true;
select /*+ gather_plan_statistics */ count(*) from &&OWNER..&&TABLE;
select * from table(dbms_xplan.display_cursor(format=>'basic +rows +rowstats last'));
alter session set optimizer_use_pending_statistics=false;
-- then either discard the pending statistics...
exec dbms_stats.delete_pending_stats('&&OWNER','&&TABLE');
-- ...or publish them (no_invalidate=>false invalidates cursors immediately)
exec dbms_stats.publish_pending_stats('&&OWNER','&&TABLE',no_invalidate=>false);
-- compare with the stats history, and restore the old statistics if needed
select report from table(dbms_stats.diff_table_stats_in_history('&&OWNER','&&TABLE',sysdate-1,sysdate,0));
exec dbms_stats.restore_table_stats('&&OWNER','&&TABLE',sysdate-1,no_invalidate=>false);

connor_mc_d

Connor and Chris at OpenWorld

Chris and I will be at OpenWorld next month. Our session details are below, and you can click the links to add entries to your calendar so you don’t miss us. Don’t forget to use the official Schedule Builder to book your spot at our sessions!

If you can’t get to our sessions, you might catch us wandering the halls or at The Hub. Feel free to come and say Hello and talk tech!

See you at OpenWorld!

Calendar ICS files

Tune ANY SQL in 20 minutes (.ics)

Fast Lane to Database Success (.ics)

Developers – Don’t Be The Person That Discombobulates Your Database (.ics)

18 Things Developers Will Love About Database 18c (.ics)

Why Isn’t My Query Using An Index (.ics)

Session Details

[Session detail images]

Jonathan Lewis

Column Stats

A little while ago I added a postscript about gathering stats on a virtual column to a note I’d written five years ago, and then updated it with a reference to a problem on the Oracle database forum that complained that stats collection had taken much longer after the addition of a function-based index. The problem related to the fact that the function-based index was supported by a virtual column that applied an instr() function to a CLOB (XML) column – and gathering stats on the virtual column meant applying the function to every CLOB in the table.

So my postscript, added about a month ago, suggested adding a preference (dbms_stats.set_table_prefs) to avoid gathering stats on that column. There’s a problem with this suggestion – it doesn’t work.
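
For reference, the suggested preference would look something like this – a minimal sketch, assuming the demo table T1 below, and assuming your version's set_table_prefs accepts a column-level method_opt (it mirrors the method_opt used in the gather call later in this post):

exec dbms_stats.set_table_prefs(user,'t1','method_opt','for columns size 1 id v1')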

Oracle doesn’t play nicely when you try to limit the stats collection to a few columns – even in version 18.3. Here’s a demonstration of the effect. First we create a table that includes a column group (extended stats), a virtual column, and a function-based index – i.e. the three different ways of generating user-related virtual columns.


rem
rem     Script:         stats_struggle_06.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2018
rem

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,10,'0')             v1,
        lpad(rownum,10,'0')             v2
from
        generator       v1
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

execute dbms_stats.delete_table_stats(user,'t1')

begin
        dbms_output.put_line(
                dbms_stats.create_extended_stats(
                        ownname         => user,
                        tabname         => 'T1',
                        extension       => '(v1, v2)'
                )
        );
end;
/

alter table t1 add id_12 
        generated always as (mod(id,12)) virtual
;

create index t1_id on t1(mod(id,10));


Since I’ve run this on 12c and 18c I’ve included a call to delete table stats after creating the table. So the next step is to enable SQL trace and see what Oracle does under the covers when we try to gather stats on just a couple of columns in the table:


alter session set events '10046 trace name context forever';

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 't1',
                method_opt  => 'for columns size 1 id v1',
                cascade     => false
        );
end;
/

alter session set events '10046 trace name context off';

column column_name  format a32
column data_default format a32

select 
        column_name, data_default,
        num_nulls, num_distinct, to_char(last_analyzed,'hh24:mi:ss') gathered
from    user_tab_cols 
where   table_name = 'T1' 
order by 
        internal_column_id
;

COLUMN_NAME                      DATA_DEFAULT                      NUM_NULLS NUM_DISTINCT GATHERED
-------------------------------- -------------------------------- ---------- ------------ --------
ID                                                                         0        10000 16:13:12
V1                                                                         0        10000 16:13:12
V2
SYS_STUIBQVZ_50PU9_NIQ6_G6_2Y7   SYS_OP_COMBINED_HASH("V1","V2")
ID_12                            MOD("ID",12)
SYS_NC00006$                     MOD("ID",10)

According to the output of the last query we’ve gathered stats only on the two columns specified. But have we really avoided the work ? Here, with some cosmetic tidying, is the SQL executed by the package:

select 
        /*+
                full(t) no_parallel(t) no_parallel_index(t) dbms_stats
                cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
                xmlindex_sel_idx_tbl no_substrb_pad 
         */
        to_char(count(ID)),
        substrb(dump(min(ID),16,0,64),1,240),
        substrb(dump(max(ID),16,0,64),1,240),
        to_char(count(V1)),
        substrb(dump(min(V1),16,0,64),1,240),
        substrb(dump(max(V1),16,0,64),1,240),
        to_char(count(V2)),
        to_char(count(SYS_STUIBQVZ_50PU9_NIQ6_G6_2Y7)),
        to_char(count(ID_12)),
        to_char(count(SYS_NC00006$))
from
        TEST_USER.T1 t  /* NDV,NIL,NIL,NDV,NIL,NIL,ACL,ACL,ACL,ACL*/

We can see that Oracle has done a count(), min() and max() on id and v1, and the “comment” at the end of the text tells us that it has applied the approximate_ndv mechanism to the first two columns queried but not to the rest. However, it has count()ed all the other columns – which means it has evaluated their underlying expressions. So if you were hoping that limiting the columns gathered would avoid a really expensive function call, bad luck.

Threat / Bug alert

A further irritation showed up when I ran a test case that used a deterministic PL/SQL function to generate a virtual column: in 12.1.0.2 the function was called once per row (possibly because every row had a different value) whether or not it was in the list of columns for gathering stats; in 18.3 the function was called nearly twice per row when I didn’t specify stats gathering for the column, and nearly 4 times per row when I did. This looks like it might be a change (possibly accidental) to how deterministic functions can cache their inputs and outputs – possibly something as “minor” as the size of the cache. To be continued when time permits …
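
For anyone who wants to experiment, the shape of such a test case would be something like the following sketch (all names hypothetical) – it exploits the fact that Oracle doesn’t enforce the DETERMINISTIC declaration, so the function can maintain a session-level call counter:

create or replace package call_count as
        n number := 0;
end;
/

create or replace function f_det(p number) return number
deterministic
as
begin
        -- count how often stats gathering invokes us
        call_count.n := call_count.n + 1;
        return mod(p,12);
end;
/

alter table t1 add id_det generated always as (f_det(id)) virtual;

set serveroutput on
exec call_count.n := 0
exec dbms_stats.gather_table_stats(user,'t1',method_opt=>'for columns size 1 id v1',cascade=>false)
exec dbms_output.put_line('f_det calls: '||call_count.n)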


connor_mc_d

The full stack developer….is BACK for 2018!

Last year, I flexed my technology muscles by building on the fine groundwork of Lucas Jellema, using some Node, some REST, and some JSON to extract the full Oracle OpenWorld speaker catalogue, then adding some JSON parsing in the database and some SQL, and slapping a nice helping of Application Express on top of it all, to end up with an Application Express rendition of the Oracle OpenWorld speaker catalogue.

So it is that time of year again, and whilst some of the bits and pieces have changed, I’ve managed to stumble my way through all of the different pieces once again, and made some enhancements along the way to give you the new and improved 2018 version!

It’s waiting there on apex.oracle.com for you to explore.

https://tinyurl.com/oow18catalog

Enjoy!
