Oakies Blog Aggregator

glennfawcett's picture

CockroachDB… true distributed system-of-record for the cloud

After exploring the analytics end of distributed NewSQL databases, I decided to explore a true system-of-record in the cloud. Without a doubt, the future is moving to the cloud. CockroachDB is at the forefront of this journey to be the backbone database for new microservices in the cloud. Escape the legacy database rat's nest that wasn't designed to be truly distributed or to run in the cloud.

With CockroachDB, you can domicile data for regulatory reasons and ensure the best possible performance by accessing data locally. CockroachDB allows you to create a SINGLE distributed database across multiple cloud providers and on-prem. Our self-healing not only helps keep your data consistent, but it also automates cloud-to-cloud migration without downtime.

cockroach geo-distributed cluster
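For a flavour of what domiciling data looks like in practice, here is a minimal SQL sketch using CockroachDB partitioning and replication zones (table, column, and region names are illustrative, and list partitioning is an enterprise feature):

-- Partition a table on a locality column (the column must be a prefix of the primary key)
ALTER TABLE users PARTITION BY LIST (country) (
    PARTITION europe VALUES IN ('DE', 'FR'),
    PARTITION usa    VALUES IN ('US')
);

-- Pin each partition's replicas to nodes in a given region
ALTER PARTITION europe OF TABLE users
    CONFIGURE ZONE USING constraints = '[+region=eu-west-1]';
ALTER PARTITION usa OF TABLE users
    CONFIGURE ZONE USING constraints = '[+region=us-east-1]';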

It should be an exciting new journey. If you are interested, take a look at our architecture, download, and take the roach out for a spin!

dbakevlar's picture

Create an Oracle VM on Azure in Less than 5 Minutes

If there's one thing I've been able to prove this week, it's that even with the sweet 4G LTE Wi-Fi setup in my RV, Montana still has the worst Wi-Fi coverage in the US.  Lucky for me, I work in the cloud and automate everything, because if there's one thing I love about automating with scripts, it's that I can build out a deployment faster, and with fewer resources, than anyone can from the portal.

Oracle Virtual Machines in Azure

When you build out an Azure VM with Oracle, you'll also need to have the supporting structure and a sufficiently sized additional disk for your database.  This can be a lot of clicks inside a portal, but from a script, a few questions and bam, you have everything you need.

One of my customers wanted a script to deploy ANY of the Oracle VM images from the catalog, so I created a new script to do so (the previous one was focused on just the 12c-18c database offerings).  This required a switch from using the sku and the version to build the URN to having the user supply the URN directly. Knowing this would be used by a more advanced DBA or Azure administrator to begin with, the script remains simple to use, but the responses are more cryptic (i.e. what Azure needs to build out under the covers).
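As a rough illustration of what happens under the covers, here is a hand-written Azure CLI sketch, not the script itself, with placeholder names, sizes, and URN:

az vm image list --publisher Oracle --all --output table      # list every Oracle URN in the catalog
URN="<publisher:offer:sku:version copied from the listing above>"
az group create --name ora-demo-rg --location eastus
az vm create --resource-group ora-demo-rg --name ora-demo-vm \
  --image "$URN" --size Standard_D4s_v3 --generate-ssh-keys
az vm disk attach --resource-group ora-demo-rg --vm-name ora-demo-vm \
  --name ora-demo-data --new --size-gb 512                     # additional disk for the database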

The deployment results in a separate resource group with an Oracle VM of any version available in the Azure catalog, two disks of any available drive type, in any US location, along with all supporting resources:

[Screenshot: the deployed Oracle VM and its supporting resources in the Azure portal]

The above is a Red Hat 7.6 OS installation, prepared for an Oracle database.  Note that the virtual network, etc. has been created as part of the script deployment.  The disks can be resized from the initial deployment and, depending on the version (URN) chosen from the catalog, Oracle may already be installed.  Building out scripts to download and install Oracle on those that don't have it is my next task. :)

Oh yeah and even on my slow network, it deployed in about 3 minutes for me.

It’s Yours

If you're interested in working with this bash script to deploy an Azure VM either prepared for Oracle or with Oracle already installed on it, the mk_oravm.sh script can be pulled from GitHub.

 




martin.bach's picture

RBAL (ospid: nnn): terminating the instance due to error 27625 after patching Oracle Restart

I have come across an odd behaviour trying to patch an Oracle Restart environment to 12.1.0.2 with the January 2019 PSU. Based on a Twitter conversation this isn't necessarily limited to my patch combination; there might be others as well. I have used opatchauto to apply patch 28813884 to both the RDBMS and GRID homes, plus its corresponding OJVM (Java) patch. Before diving into details, this is the environment I have been working with:

  • Oracle Restart 12.1.0.2 with an earlier PSU/OJVM combo applied
  • Separation of duties with oracle as the RDBMS owner, and grid owning the GRID infrastructure installation. This is a key point!
  • Patches to be applied
    • Patch 28813884 (GI PSU)
    • Patch 28790654 (corresponding OJVM patch)
  • OPatch version as installed in the GRID|RDBMS home
    • opatchauto: 13.9.4.2.0
    • opatch: 12.2.0.1.17

Following the instructions in the readme file I upgraded OPatch to the required version. The file I downloaded was named p6880880_121010_Linux-x86-64.zip. I double-checked the readme, and to me this is the correct file. After upgrading OPatch in the RDBMS and GRID homes, I started patching.
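For reference, the patching itself was driven by opatchauto, along these lines (run as root; the paths are illustrative):

# as root, with OPatch from the Grid home first in the PATH
export PATH=/u01/app/12.1.0.2/grid/OPatch:$PATH
opatchauto apply /u01/stage/28813884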

After this process completed, I wanted to start the database in UPGRADE mode as required by the OJVM patch. This needs to be done via sqlplus since srvctl does not support an “upgrade” option in “start database”.

And that’s where it hit me: whenever I tried to open the database in UPGRADE mode, it threw an error:

 SQL*Plus: Release 12.1.0.2.0 Production on Tue Jul 16 12:15:34 2019

 Copyright (c) 1982, 2014, Oracle.  All rights reserved.

 Connected to an idle instance.
 
 SQL> startup upgrade
 
 ORACLE instance started.
 
 Total System Global Area 1073741824 bytes
 Fixed Size                  2932632 bytes
 Variable Size             713031784 bytes
 Database Buffers          352321536 bytes
 Redo Buffers                5455872 bytes
 ORA-03113: end-of-file on communication channel
 Process ID: 22354
 Session ID: 14 Serial number: 41958

Looking at the alert log I can see that RBAL terminated the instance:

ALTER DATABASE   MOUNT
 ALTER SYSTEM SET local_listener=' (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.113)(PORT=1521))' SCOPE=MEMORY SID='ORCL';
 2019-07-16 12:15:43.404000 -04:00
 NOTE: ASMB mounting group 1 (DATA)
 WARNING: cellinit.ora is missing. RBAL is terminating the instance.
 RBAL (ospid: 22332): terminating the instance due to error 27625
 System state dump requested by (instance=1, osid=22332 (RBAL)), summary=[abnormal instance termination].
 System State dumped to trace file /u01/app/oracle/diag/rdbms/orcl/ORCL/trace/ORCL_diag_22306_20190716121543.trc
 Dumping diagnostic data in directory=[cdmp_20190716121543], requested by (instance=1, osid=22332 (RBAL)), summary=[abnormal instance termination].
 Instance terminated by RBAL, pid = 22332
 2019-07-16 12:15:49.164000 -04:00

But since this is Oracle Restart, Clusterware will simply restart the database. Unless of course you’ve been changing the default behaviour. And funny enough, this works (see further down in the article as to why). Although I’d appreciate this in most cases, the automatic restart isn’t appropriate in my situation: when started by Clusterware, the database is not in upgrade mode:

 SQL> select status from v$instance;

 STATUS
 ------------
 OPEN

Which is a problem for me. A MOS search about error 27625 didn't reveal anything too useful, and it took me quite a while to realise the problem has to do with permissions. An Internet search finally gave me the right answer: a fellow blogger had pointed it out a little while ago …

So what exactly is the problem? The RDBMS “oracle” binary needs a specific set of permissions and ownership/group membership for Oracle Restart with separation of duties enabled. This is what it looked like before applying the patch:

$ ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x. 1 oracle asmadmin 324518992 Jul 16 12:15 /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle

The file permissions are ‘6751’ with oracle owning the file but it belongs to the asmadmin group. Remember, I am using a separate user for Grid Infrastructure with its own specific operating system groups. After running opatch, this changed to:

$ ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x. 1 oracle oinstall 324518992 Jul 16 12:29 /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle

Permissions remained the same; however, the group changed from asmadmin to oinstall. Not quite what I had in mind, and it reproducibly causes instance crashes. There is a simple solution: make sure the permissions are set correctly! Using "Database Creation on 11.2/12.1/12.2 Grid Infrastructure with Role Separation ( ORA-15025, KFSG-00312, ORA-15081 ) (Doc ID 1084186.1)" got me on the right track.
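A minimal sketch of the fix is simply restoring the pre-patch ownership and mode shown above (the MOS note describes the supported way of doing this):

# as root: restore the asmadmin group and the 6751 mode on the oracle binary
chgrp asmadmin $ORACLE_HOME/bin/oracle
chmod 6751 $ORACLE_HOME/bin/oracle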

 $ ls -l $ORACLE_HOME/bin/oracle
 -rwsr-s--x. 1 oracle asmadmin 324518992 Jul 16 12:29 /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle

 $ sqlplus / as sysdba

 SQL*Plus: Release 12.1.0.2.0 Production on Tue Jul 16 12:34:45 2019

 Copyright (c) 1982, 2014, Oracle.  All rights reserved.

 Connected to an idle instance.

 SQL> startup upgrade
 ORACLE instance started.

 Total System Global Area 1073741824 bytes
 Fixed Size                  2932632 bytes
 Variable Size             713031784 bytes
 Database Buffers          352321536 bytes
 Redo Buffers                5455872 bytes
 Database mounted.
 Database opened.

 SQL> select status from v$instance;

 STATUS
 ------------
 OPEN MIGRATE

 SQL> 

With the database in upgrade mode (I believe the "MIGRATE" status is a leftover from the 9i days) I am reassured that running datapatch (the OJVM part, actually) works as advertised.

By the way, Clusterware corrects the group setting when you issue a "srvctl start database -db …" command, as documented in "Starting the database using srvctl changes the group setting for oracle binary (Doc ID 1508027.1)". Which didn't solve my problem, as I can't start the database in upgrade mode using srvctl.

Hope this helps you one day!

Franck Pachot's picture

No risk to activate Active Data Guard by mistake with SQL Developer SQLcl

If you have a Data Guard configuration without the Active Data Guard license, you can:

  • apply the redo to keep the physical standby synchronized
  • or open the database read-only to query it

but not at the same time.

Risk with sqlplus “startup”

Being open READ ONLY WITH APPLY requires the Active Data Guard option. But this may happen by mistake. For example, in sqlplus you just type "startup" instead of "startup mount". The standby database is opened read-only. Then the Data Guard broker (with state APPLY-ON) starts MRP, and the primary database records that you are using Active Data Guard. And then DBA_FEATURE_USAGE_STATISTICS flags the usage of "Active Data Guard — Real-Time Query on Physical Standby". And the LMS auditors will count the option.
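You can check on the primary whether the feature has already been flagged, for example:

select name, detected_usages, currently_used, last_usage_date
from dba_feature_usage_statistics
where name like 'Active Data Guard%';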

The ways to prevent it are unsupported:

alter system set "_query_on_physical"=false scope=spfile;

Active Data Guard's Real Time Query - avoid usage if not licensed

No problem with SQLcl

I'm pretty careful when I work on production databases, but not when I'm on a lab: there, errors are a nice way to learn new things. On a sandbox database on the Oracle Cloud ( https://medium.com/@FranckPachot/oracle-19c-data-guard-sandbox-created-by-dbca-createduplicatedb-88aab20ea0ab) I restarted the standby with a quick "startup force" and had the nice surprise of seeing the startup stop in the mount state. Look:

ORA-16003: standby database is restricted to read-only access

ORACLE_SID=CDB1B sql / as sysdba
SQLcl: Release 19.1 Production on Tue Jul 16 18:15:34 2019
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
ORA-16003: standby database is restricted to read-only access

The message "ORA-16003: standby database is restricted to read-only access" is not new. This is what we get when we try an "alter database open read write;" on a standby database.

Actually, I started writing this blog post thinking it was a new feature in 19c. Only when re-reading the paragraph above did I realize that I was using SQLcl, and that maybe it implements the startup in two steps: "startup mount" + "alter database open read write". That is one of the reasons I try to write a blog post for each thing I discover. When the goal is to publish it, I spend more time thinking about the reasons, the context, the questions that can arise…

If you want to avoid the risk of activating Active Data Guard inadvertently, forget about sqlplus and use SQLcl. The "startup" command will stop in the mount state for a standby database. You need to type "startup open read only" to open it explicitly, or issue an "alter database open;" after the "startup" has returned ORA-16003.

Update 17-JUL-2019

Note that in multitenant, since 18c, Active Data Guard usage is detected only when a pluggable database is opened. So there's no risk in opening CDB$ROOT read-only, and it is even recommended for correct replication when cloning PDBs. This means that with SQLcl you should "startup open read only" rather than simply "startup". More info about it:

18c: No Active Data Guard required (and detected) when only CDB$ROOT and PDB$SEED are opened in read-only - Blog dbi services

Franck Pachot's picture

Passwordless Data Pump 19c

That's a very minor bug with a very simple workaround, but it may require a little change in scripts. If you use passwordless authentication (an external password file or OS authentication) with Data Pump in 19c, it will ask for a password. The solution is just to answer whatever you want, because the external authentication will be used anyway.

Example

I create the wallet

mkstore -wrl $ORACLE_HOME/network/admin -create <<CREATE
w4ll3t-P455w0rd
w4ll3t-P455w0rd
CREATE

I create a tnsnames.ora entry that I’ll use to connect:

cat >> $ORACLE_HOME/network/admin/tnsnames.ora <<CAT
CDB1A_SYSTEM=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=CDB1A))(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))
CAT

I add a credential for this entry — here SYSTEM user and its password:

mkstore -wrl $ORACLE_HOME/network/admin -createCredential \
CDB1A_SYSTEM SYSTEM <<CREATE
user-P455w0rd
user-P455w0rd
w4ll3t-P455w0rd
CREATE

I also add a credential for Easy Connect Plus (see this previous post for the dummy parameter):

mkstore -wrl $ORACLE_HOME/network/admin -createCredential \
//localhost/PDB1?_user=system SYSTEM <<CREATE
user-P455w0rd
user-P455w0rd
w4ll3t-P455w0rd
CREATE

Finally, I set the sqlnet.ora for this:

cat >> $ORACLE_HOME/network/admin/sqlnet.ora <<'CAT'
WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY="$ORACLE_HOME/network/admin")))
SQLNET.WALLET_OVERRIDE=TRUE
CAT

Then, I can connect passwordless

connect /@CDB1A_SYSTEM
show user
show con_name
connect /@//localhost/PDB1?_user=system
show user
show con_name

Nothing new here. It is an old feature and very simple to set up. In 2019 we do not see any password in clear text in scripts or on the command line, right?

Data Pump expdp/impdp

But let’s try to use the same with Data Pump

expdp /@CDB1A_SYSTEM

I want to connect passwordless because the password is in the wallet, but Data Pump asks me for the password. Don't worry: it asks, but it doesn't care. The connection will use the wallet credentials anyway. So the solution is to send /dev/null as stdin:
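For example (the directory and dumpfile parameters are only illustrative):

expdp /@CDB1A_SYSTEM directory=DATA_PUMP_DIR dumpfile=demo.dmp schemas=DEMO < /dev/null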


‘“/ as sysdba”’

The same happens when using OS authentication, like running expdp or impdp connected as SYSDBA. Note that this is not a recommendation: running Data Pump as SYSDBA is not a good idea. But if you don't want to show the password and you are too lazy to set up a wallet, this was working until 19c. The only thing needed was to double-quote the connect string so that Data Pump takes it as one parameter, and to single-quote around that so that the shell does not interpret your double quotes:

expdp '"/ as sysdba"'

Of course, another reason not to use this is that OS authentication connects you to the CDB$ROOT where you should not have a lot to export…

Fixed in 20.1

After writing this I realized that there's now a MOS note about it, a patch, and a solution (upgrade to 20.1 when available): https://support.oracle.com/epmos/faces/DocContentDisplay?id=2556152.1

dbakevlar's picture

Best Practices for Oracle Data Guard on Azure

I keep saying I'm going to start sharing what I'm doing in the Analytics space soon, but heck, there's too much I need to keep adding in the Oracle on Azure arena!

So, as most people know, I'm not a big fan of Oracle RAC (Real Application Clusters).  My opinion is that it is often sold for use cases it doesn't serve (such as HA), and that the resource demands between the nodes, as well as what happens to the remaining nodes when a node is evicted, are not in the best interest of most use cases.  On the other hand, I LOVE Oracle Data Guard; active or standard, doesn't matter, the product is great and it's an awesome option for those migrating their Oracle databases to Azure VMs.

What is Oracle Data Guard

Oracle Data Guard is a cross between SQL Server Always On Availability Groups and log shipping.  It "ships" the redo from the primary database to a standby database (or an active standby that can be used for reporting, hence the similarity to AGs) with a sync process.  It is a complex configuration, but a surprisingly simple failover process for the business to switch to one of the standbys if there is a failure on the primary.

The product has received excellent support from Oracle and the community, and it's something I highly recommend for businesses that are now migrating databases over to Azure but want something similar to what they had on-premises.  Unlike RAC, I can put a standby (secondary replica, for you Microsoft folks) in a second location, then use a Far Sync instance as a "pass-through" from the primary to the standby vs. using the built-in sync directly to the standby database.

Why Use a Far Sync Instance

Oracle has done some great research on the benefit of a Far Sync instance, and the truth is, it improves the performance of the Data Guard sync on a VM by over 12% vs. using a standard sync between a primary and a standby in Data Guard.

[Chart: Data Guard sync performance on Azure VMs with and without a Far Sync instance]

Twelve percent may not seem like a high number, but when you're looking to decrease latency any way you can, and you're working between two different locations in the cloud, this can be a significant improvement.

A Far Sync instance is just a small allocation of memory for the Oracle background processes, a matching number of redo logs (+1 per thread used by the Far Sync), and no actual datafiles.  The primary then "ships" the redo logs to the Far Sync instance, and the Far Sync can focus on its one task: sending the redo logs wherever they need to go, no matter where the standby is located.

All of this is built out on Azure Virtual Machines (VMs) with the appropriate version of Oracle installed on each VM.  Of course, the Far Sync instance can be placed on a very low VM sizing tier, as it doesn't require many resources, or it can be on a shared server if the existing product sharing the VM isn't very "chatty".  The Oracle SGA for the Far Sync can be as small as 300MB with CPU_COUNT=1.  Remember, though, it is pushing redo to and from it all day, so it's going to be busy on that front.
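For context, a minimal sketch of wiring this up on the primary looks like this (service and unique names are illustrative; a Data Guard broker configuration would normally manage the destinations for you):

-- Create the control file that bootstraps the Far Sync instance
ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/farsync.ctl';

-- Ship redo synchronously from the primary to the Far Sync
-- (the Far Sync is then configured to forward it ASYNC to the remote standby)
ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=orclfs SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orclfs';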

The Architecture

[Diagram: Oracle Data Guard on Azure with a Far Sync instance between the primary and the remote standby]

The above diagram demonstrates the best practices of both Oracle and Azure for an Oracle database using Oracle Data Guard, with the standby in a secondary location.  By using the Far Sync instance, we're able to ensure that the standby acknowledgment of the redo is timely enough for the primary (as this is part of the Oracle Data Guard configuration).

In this configuration, you'll notice that to support Oracle Data Guard in Azure, we've included ExpressRoute to ensure solid performance for the users since, as we all know, the network is the last bottleneck.

The Details

There are a few best practices to consider when building out this solution, including:

  • Use Async compression to decrease latency on the Far Sync transfers.
  • Configure redundant network links on the customer side of the ExpressRoute to tolerate outages.
  • Choose IOPS for the standby VM that are faster than the redo generation on the primary VM, to keep up with the sync (something you might not consider for a standby server, but it's necessary for this configuration).
  • The Far Sync instance should have the same number of redo logs as the primary, +1 for every thread (a quick check is sketched below the list).
  • Instead of using RMAN, use Azure Site Recovery to take snapshots of each of the VMs, including the Far Sync VM, eliminating extra stress on the database tier.
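As a quick sanity check of that redo log rule, compare the group counts per thread on the primary with the standby redo logs on the Far Sync:

-- online redo log groups per thread (primary)
select thread#, count(*) as groups from v$log group by thread#;
-- standby redo log groups per thread (Far Sync / standby)
select thread#, count(*) as groups from v$standby_log group by thread#;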

If you've wondered how you would design an Oracle Data Guard environment, either standard or active, in Azure, this is how I recommend my customers do it.

 




Franck Pachot's picture

Ideas for Event Sourcing in Oracle

LogMiner, or are there other alternatives?

With microservices, the architects want to dismantle the monolithic database and replicate data rather than share it. This raises the need to audit the changes where the modifications are done (like the C in CQRS). The Oracle database already does that for recovery purposes, building the redo records before modifying the data blocks, but that's a physical change vector. We need something logical, with more possibilities to filter and transform. There are multiple possible methods for this. But unfortunately, the ones that were built into the database have been slowly removed since Oracle acquired GoldenGate and sells it separately.

And deprecated became desupported, and even removed in later releases, like Continuous Mine in 19c — the final patchset of 12cR2:

ORA-44609: CONTINOUS_MINE is desupported for use with DBMS_LOGMNR.START_LOGMNR

The doc says that the replacement is GoldenGate, but that's another product to buy: very powerful but expensive (and there's no Standard Edition of it).

Debezium DBZ-137

Debezium, an open source distributed platform for change data capture, is working on an Oracle Database connector. Many ideas are mentioned in https://issues.jboss.org/browse/DBZ-137 and in this post I give my thoughts about them.

[DBZ-137] Ingest change data from Oracle databases using LogMiner - JBoss Issue Tracker

Oracle XStreams

The perfect solution, as it has minimal overhead on the source and is very efficient. But it requires GoldenGate licensing, and is therefore probably not the best basis for an open source product.

Oracle LogMiner

LogMiner is included in all Editions, reads the redo stream (archived and online redo logs) and extracts all information. When enabling Supplemental Logging, we have enough information to build the logical change information. Many solutions are already based on that. But I see two problems with it.

LogMiner Limits: Basically, LogMiner was not made for replication. The idea was more to give a troubleshooting tool to understand what happened on the data: what is generating too much redo? Who deleted some data? Which sessions were locking rows?… There are limits, like unsupported datatypes. And it is not designed to be efficient. But there’s also the possibility to mine on another system. However, I think that those limits can be acceptable for an Open Source solution on simple databases with low rate of changes.

LogMiner Future: What is more worrying is how Oracle removes the features that may provide an alternative to GoldenGate. In 19c, CONTINUOUS_MINE was removed. This means that we need to constantly re-open and read the whole of the redo logs. And who knows what Oracle will remove in future versions when they see a robust open source product that competes with GoldenGate?

In DBZ-137 there are some remarks about RAC, which is more complex because there are multiple redo threads. I don't think that RAC is in scope for this. RAC is an expensive option that is required only on large databases with very high load; that fits more into the GoldenGate scope.

Note that we can parse the SQL_REDO and SQL_UNDO from V$LOGMNR_CONTENTS, but there's also the possibility to extract individual column values with dbms_logmnr.mine_value.
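For illustration, a minimal LogMiner session looks something like this (the archived log name and the schema are illustrative, and supplemental logging must already be enabled):

exec dbms_logmnr.add_logfile('/u01/arch/1_123_456789.arc', dbms_logmnr.new);
exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);

select scn, operation, seg_owner, seg_name, sql_redo, sql_undo
from v$logmnr_contents
where seg_owner = 'DEMO';

exec dbms_logmnr.end_logmnr;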

Mining the binary log stream

There are some attempts to mine the binary redo logs. Some well known commercial products and some Open Source attempts. That’s very complex, but that’s also fun for an open source community. The redo log structure is proprietary but Oracle will not change it too often because all availability features (recovery, standby,…) are based on it. However, there may be a legal issue to open source this mining as it exposes the proprietary format of the redo. Reverse engineering is clearly forbidden by the Oracle license.

Continuous Query Notification

I studied the usage of dbms_change_notification as a CDC alternative: https://blog.dbi-services.com/event-sourcing-cqn-is-not-a-replacement-for-cdc. This feature is aimed at nearly static data, in order to invalidate and refresh a cache. It is not designed for a high change rate and is not efficient at all for this.

Client Result Cache

In the same idea as refreshing a cache from data that do not change often, one can think about querying with client result cache as it has a mechanism to invalidate the cache when a modification occurs. However, the granularity is bad here as any change on the table will invalidate all queries on it.

Materialized View Logs

All changes can be logged in materialized view logs. This feature is built for materialized view fast refresh, which is a kind of replication, and has nothing to do with the redo log used by LogMiner. With materialized view logs, the changes are stored in a table and must be deleted once consumed. This feature has existed for a long time and is widely used. However, I would seriously question the architecture if there's a general need to write the data twice, then read it and delete it, just to put the same data into another place.
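For example, a materialized view log capturing all DML on a table looks like this (table and column names are illustrative):

-- the captured changes land in MLOG$_DEMO_TABLE and must be purged once consumed
create materialized view log on demo_table
  with rowid, sequence (id, amount)
  including new values;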

Triggers

With triggers, we can log the changes as with materialized view logs. It gives more possibilities, like sending the change rather than storing it in the database (but then we have to manage the transaction visibility). An optimized example to store the audited change has been published by Connor McDonald:

A Fresh Look at Auditing Row Changes

But this is still a lot of overhead, and it needs to be adapted when columns are added or removed.

ORA_ROWSCN

When we enable row dependencies, the ORA_ROWSCN pseudo-column can help to filter the rows that may have been updated recently. However, there are two problems with this approach.

Full read: if we want near real-time replication, we will probably poll for changes frequently. ORA_ROWSCN would be nice if it were indexed, but that's not the case. It just reads the information stored in the table block. That means that to find the changes done in the last 5 minutes, we need to full scan the table, and ORA_ROWSCN will then help to identify the rows that were changed. It is a transparent alternative to a "last update" timestamp column, but it does not provide quick access to those rows.

Commit time: there's a general problem with anything that reads a "change" timestamp. Let's say that I poll for changes every 5 minutes. I have a long transaction that updates a row at 12:39 and commits at 12:42. The poll that runs at 12:40, looking for changes since 12:35, does not see the change as it is not committed yet. The poll that runs at 12:45 can see it, but not when it filters on the changes that occurred since the last run, at 12:40. This means that each run must look at a larger window, going back to the start of the longest transaction, and must then deal with duplicates, as some of the changes have already been captured by a previous run. This is a general problem when there's no "commit SCN" available.
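Putting those two remarks together, the polling query is a sketch like this (the table and bind names are illustrative, and the table must have been created with ROWDEPENDENCIES):

select ora_rowscn, t.*
from demo_table t
where ora_rowscn > :last_polled_scn;  -- still a full table scan: ORA_ROWSCN is not indexed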

Userenv(‘commitscn’)

While talking about the visibility time (commit SCN) vs. the change time, there is an undocumented way to get it: insert or update with userenv('commitscn'), and this will magically go back to the table row at the end of the transaction to set the commit SCN. It is not supported and, anyway, it can be invoked only once per transaction, so it cannot be added automatically in a trigger.

Oracle Flashback Query

If we don't want to add additional auditing on DML, the redo log is not the only internal logging. Oracle also logs the undo information for consistent reads (MVCC) and this, without any additional overhead on the modification, can show all changes that occurred in a table. Basically, we can SELECT … FROM … VERSIONS BETWEEN SCN … AND … and all changes will be visible, with the new and old values and additional information about the operation and the transaction.

However, this is not indexed either. As with ORA_ROWSCN, we need to full scan the table, and the consistent read will build the previous versions of the blocks thanks to the undo.
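A sketch of such a flashback version query (table name and binds are illustrative):

select versions_startscn, versions_operation, versions_xid, t.*
from demo_table versions between scn :last_scn and :current_scn t
where versions_operation is not null;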

Flashback Data Archive

Flashback Query can reconstruct the recent changes, limited by the undo retention, and by the last DDL that occurred. Flashback Data Archive (which was called Total Recall) can go further. This feature is available in Enterprise Edition without the need for an additional option. It can go beyond the undo retention and allows some DDL. But, again, it is not optimized to get all changes since a specific point in time. The idea is that when you know the rows you want to read, then it can get to the previous version.

Minimal trigger + flashback query

Some of those solutions can be combined. For example, a trigger can log only the ROWID of the changed rows, and the replication process will get more information for these rows through a flashback query. This lowers the overhead on the changes, while still avoiding a full scan for the replication. Or you may get those ROWIDs directly by custom-mining the redo logs, which is much simpler than trying to extract all the information from them.

DDL triggers

SQL is agile and allows the structure to change. If adding a column breaks the whole replication, then we have a problem. All the solutions above need to handle those changes. The redo log contains the changes to the dictionary, but it can be complex to decode. All the other solutions must adapt to those changes, and that means having a DDL trigger and handling the different kinds of changes.

Not easy…

The summary is that there are no easy solutions, and the easiest ones have been removed by Oracle to push the sales of GoldenGate. My first recommendation when someone wants to replicate the changes in order to query them from another place is: don't do that. Relational databases are made to ingest new data and modifications, and to be queried for different purposes. We have views to show data in different formats. We have indexes to get fast access for different use cases. Oracle is not like many of its competitors: it has been optimized for mixed workloads from its first versions. You can query the same database where the changes occur because a SELECT does not lock anything. You have a resource manager to be sure that runaway queries cannot slow down the transactional activity. And the CPU usage for those queries, when correctly tuned, will rarely be higher than that of the replication activity you would need to implement to stream the changes to another database.

Then, which technology should event sourcing be built upon? LogMiner looks good for small databases with basic usage. And the project will have to adapt as more features are removed by Oracle in the future.

Hybrid trigger / flashback query

When only a few tables are concerned, generating DML triggers is probably the simplest, especially if they log only the minimum, like the ROWID. The ROWID will be visible only when the transaction is committed. Then the replication process must use flashback query, reading only those blocks from the ROWID. The nice thing here is that flashback query shows when the change was visible (the commit time) rather than the change time, which makes it easier to filter out the changes already processed by the last run.

Here is the idea, when a trigger has logged the changed ROWIDs into a DEMO_CDC table and we query:
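The query itself is not reproduced here; something along these lines, assuming the trigger filled DEMO_CDC with a single ROWID column (the column name is an assumption), is what is meant:

select versions_operation, versions_startscn, d.*
from DEMO versions between scn :last_processed_scn and :current_scn d
where d.rowid in (select chg_rowid from DEMO_CDC);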

The execution plan for this shows optimal access with the ROWID:

Explain Plan
------------------------------------------------
PLAN_TABLE_OUTPUT
Plan hash value: 3039832324
------------------------------------------------
| Id | Operation | Name |
------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | NESTED LOOPS | |
| 2 | SORT UNIQUE | |
| 3 | TABLE ACCESS FULL | DEMO_CDC |
| 4 | TABLE ACCESS BY USER ROWID| DEMO |
------------------------------------------------

The important thing here is that the query cost is proportional to the number of changes and not to the full size of the table. And the trigger overhead is limited to the ROWID only. There's no need to store in another table the values that are already stored in the base table and the undo segments. If this is read frequently, there is a good chance that all the blocks involved (the ROWID list, the undo records and the table blocks) are still in the buffer cache.

This trigger + flashback approach is just an idea that I've never used, so feedback is welcome on https://twitter.com/franckpachot

Franck Pachot's picture

Oracle Heterogeneous Services

An example configuring Oracle ODBC gateway to PostgreSQL

That sounds futuristic, like a 2025 post-microservices trend. Monolithic databases have definitely been dismantled; we developed microservices and spread the data across many databases. Because we had agile development teams, each chose the database technology that fits their needs: cheap, easy, free from any infrastructure constraints, and of course cool enough for their CV. The enterprise information system is now a zoo with every species represented.

https://web.archive.org/web/19961219141518/http://www.oracle.com/products/gateways/html/transparent.html

Then, the business needs to query and operate on data that has been produced in many different sources, heterogeneous systems, and polyglot data structures. Of course, all was sourced, streamed and replicated, with an approximate consistency and latency.

But after all that fun, the data analysts need a consistent and integrated view of this data to meet their business objectives. They need to process it in one system only. They need to map it and query transparently, without having to learn different languages and structures. They may even want to enrich it, updating the multiple systems, in a transaction that stays consistent even in case of failure.

Actually, this is not new. The screenshot above is from 1995, when Oracle introduced their Oracle Gateway to connect to other databases as if they were Oracle ones. We can run DML, procedures, and transactions over database links. It has evolved with gateways to specific database systems, but the gateway to ODBC databases is still there and free. And Heterogeneous Services are available even in Standard Edition.

Today, whether you want to offload some tables to a different DB, or you already have multiple DBMS that you want to query from a single service, you will have to run queries and transactions across different systems. Here is an example where I’ll connect Oracle DB Links to a PostgreSQL database.

In this example, I prefix each name with its type, like usrdemo for the user, dsndemo for the data source name, lsndemo for the listener… That's not something I usually do, but it helps to understand clearly how the components link together. There is a double network communication for each call here: the database Heterogeneous Services call the Gateway listener, and the listener calls the remote database through ODBC.


1. Install a PostgreSQL database

For this demo, I have set up an Oracle DBaaS in Oracle Cloud Infrastructure, which is a PaaS (the database is installed automatically and ready to use) but with IaaS access (I can connect as root and install whatever I need).

I chose a 19c Standard Edition (19c has just been made available this week on this service) running on Oracle Linux 7.5 which is the same as RHEL7.

There, I have an Oracle Database 19c already running on Oracle Enterprise Linux 7 and I install PostgreSQL 11 to demo the gateway.

sudo rpm -Uvh https://yum.postgresql.org/11/redhat/rhel-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo yum install -y postgresql11-server

I create a database and initialize it with pgbench.

sudo mkdir -p /u01/pgdata && sudo chown -R postgres /u01/pgdata
sudo su postgres
export PGDATA=/u01/pgdata
# create the database
/usr/pgsql-11/bin/initdb -D $PGDATA
# I prefer password authentication:
sed -i '/^host *all/s/trust/md5/' /u01/pgdata/pg_hba.conf
# start the server
/usr/pgsql-11/bin/pg_ctl -D $PGDATA -l /u01/pgdata/logfile start
# create user and database
/usr/pgsql-11/bin/psql
create user usrdemo password 'pwddemo';
create database demo;
\q
whoami | grep postgres && exit
sudo su - oracle
PGPASSWORD=pwddemo /usr/pgsql-11/bin/pgbench -i -h localhost -p 5432 -U usrdemo demo

Now I have a ‘demo’ database with pgbench tables, accessible from my oracle user through TCP/IP port 5432 with user ‘usrdemo’ password ‘pwddemo’


2. Install Linux ODBC and Postgres ODBC

The free Oracle Gateway connects to any database through ODBC. I install the ODBC driver for PostgreSQL

sudo yum install -y postgresql11-odbc
ls /usr/pgsql-11/lib/psqlodbc.so

I define my PostgreSQL database as the ODBC Data Source:

cat > /home/oracle/odbcDG4PGDEMO.ini <<END
[DSNPGDEMO]
Database = demo
Description = Connection to PostgreSQL
Driver = PostgreSQL
Servername = localhost
Port = 5432
Username = usrdemo
END

You can use the default location like /etc/odbc.ini or $HOME/.odbc.ini but here I show a specific location just to mention it.

3. Install Oracle Gateway

I download the Heterogeneous Services Database Gateways to non-Oracle Databases. On the Oracle Database Software Downloads, choose the “See All” link:

Oracle Database 19c Download for Linux x86-64

From there I download the LINUX.X64_193000_gateways.zip (that must be done manually, clicking the “I accept the license terms” checkbox).

I install it in my Oracle Home (ORACLE_HOME must be set):

unzip LINUX.X64_193000_gateways.zip
cd gateways
./runInstaller -silent
sudo $ORACLE_HOME/root.sh

I have now some additional binaries and an HS (Heterogeneous Services) product directory:

tree $ORACLE_HOME/hs

4. Setup Oracle Gateway to ODBC

In this directory I define the Gateway configuration which references the ODBC driver and DSN:

cat > $ORACLE_HOME/hs/admin/initDG4PGDEMO.ora <<END
set ODBCINI=/home/oracle/odbcDG4PGDEMO.ini
HS_FDS_CONNECT_INFO = DSNPGDEMO
HS_FDS_SHAREABLE_NAME=/usr/pgsql-11/lib/psqlodbc.so
HS_LANGUAGE=AMERICAN_AMERICA.WE8ISO8859P1
#HS_FDS_TRACE_LEVEL=ON
END

The "set ODBCINI" must be used if we don't use the default /etc/odbc.ini or $HOME/.odbc.ini

The trace can dump more information to the log subdirectory in case an error is encountered.

This Database Gateway has a System IDentifier (here DG4PGDEMO) that will be referenced by the listener. The exposed database name is HO.WORLD by default, which is very ugly, but you can change that with HS_DB_NAME and HS_DB_DOMAIN.
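For example, adding something like this to initDG4PGDEMO.ora exposes a nicer global name (the values are illustrative):

HS_DB_NAME=PGDEMO
HS_DB_DOMAIN=EXAMPLE.COM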

5. Setup Oracle Gateway listener

The name is referenced with SID_NAME from a listener:

cat >> $ORACLE_HOME/network/admin/listener.ora <<END
LSNPGDEMO=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=0.0.0.0)(PORT=1520))))
SID_LIST_LSNPGDEMO=(SID_LIST=(SID_DESC=(SID_NAME=DG4PGDEMO)(ORACLE_HOME=$ORACLE_HOME)(PROGRAM=dg4odbc)))
END

I start this listener:

lsnrctl <<END
set current_listener LSNPGDEMO
start
set displaymode raw
services
END

The listener is the Gateway that will be contacted by the Heterogeneous Services.


6. Setup Heterogeneous Services

I define a tnsnames.ora entry to connect to this listener:

cat >> $ORACLE_HOME/network/admin/tnsnames.ora <<END
TNSPGDEMO=(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1520))(CONNECT_DATA=(SID=DG4PGDEMO))(HS=OK))
END

I can tnsping it, but not connect, because this is a Heterogeneous Services agent:

ORA-28547: connection to server failed, probable Oracle Net admin error

7. Create Heterogeneous Services database link

All components are there and ready for heterogeneous queries and transactions. I have a pluggable database PDB1 and create a demo user there.

echo -e 'alter session set container=PDB1;\ngrant dba to demo identified by "Ora4c!eC!0ud" container=current;' | sqlplus / as sysdba

Note that I’ll set global_names to false here because I’ve not defined a name and domain for my gateway. Then it is called HO.WORLD and I don’t want to have my database link names starting with “HO.WORLD”. And if I don’t, with the default global_names, I’ll get something like: “ORA-02085: database link DBLPGDEMO.SUB11131414120.ACEDVCN.ORACLEVCN.COM”. To avoid this, either I set HS_DB_NAME and HS_DB_DOMAIN or I forget about global name enforcement.

7a. dictionary translation

sqlplus demo/'Ora4c!eC!0ud'@//localhost:1521/PDB1
alter system set global_names=false;
drop database link DBLPGDEMO;
create database link DBLPGDEMO connect to "usrdemo" identified by "pwddemo" using 'TNSPGDEMO';
desc "pgbench_accounts"@DBLPGDEMO

That’s the first gem from Oracle transparent gateway: you see the metadata as if it were an Oracle database. The PostgreSQL tables, columns and datatypes were converted to Oracle ones.

7b. distributed queries

The second gem is about the optimizer. You can run joins between two remote queries and they will be processed on the right site:

explain plan for
select count(*) from "pgbench_branches"@DBLPGDEMO
join "pgbench_accounts"@DBLPGDEMO using ("bid")
where "abalance"=0;
set pagesize 5000 linesize 150
select * from dbms_xplan.display();

Cardinality estimation is not as accurate as for local tables, but the query that is pushed will be optimized by the remote optimizer.

7c. native passthrough

The third gem is the possibility to run any native query with the DBMS_HS_PASSTHROUGH which is very similar to DBMS_SQL:

set serveroutput on
declare
val number;
c number;
n number;
begin
c:=dbms_hs_passthrough.open_cursor@DBLPGDEMO;
dbms_hs_passthrough.parse@DBLPGDEMO(c,q'[
select count(*) from "pgbench_branches"
join "pgbench_accounts" using ("bid")
where "abalance"=0;
]');
loop
n := dbms_hs_passthrough.fetch_row@DBLPGDEMO(c);
exit when n = 0;
dbms_hs_passthrough.get_value@DBLPGDEMO(c,1,val);
dbms_output.put_line(val);
end loop;
dbms_hs_passthrough.close_cursor@DBLPGDEMO(c);
end;
/

You may wonder how this DBMS_HS_PASSTHROUGH can be called over a database link as if it were defined on the remote database, when you know that it is not defined there. It is actually interpreted by the Gateway; more info about this magic:

dbms_hs_passthrough-the magic package

8. Troubleshoot

There are limitations and some cryptic error messages, but enabling the trace with HS_FDS_TRACE_LEVEL=ON creates a .trc file in ?/hs/log/:

Oracle Corporation --- 2019-07-09 21:39:02.167558000
Heterogeneous Agent Release
19.0.0.0.0
Oracle Corporation --- 2019-07-09 21:39:02.167304000
Version 19.0.0.0.0
HOSGIP for "HS_FDS_TRACE_LEVEL" returned "ON"
HOCXU_VC2_MAX=4000
HOCXU_RAW_MAX=2000
HOSGIP for "HS_FDS_SHAREABLE_NAME" returned "/usr/pgsql-11/lib/psqlodbc.so"
HOSGIP for "HS_OPEN_CURSORS" returned "50"
HOSGIP for "HS_FDS_FETCH_ROWS" returned "100"
HOSGIP for "HS_LONG_PIECE_TRANSFER_SIZE" returned "65536"
HOSGIP for "HS_NLS_NUMERIC_CHARACTERS" returned ".,"
HOSGIP for "HS_KEEP_REMOTE_COLUMN_SIZE" returned "OFF"
HOSGIP for "HS_FDS_DELAYED_OPEN" returned "TRUE"
HOSGIP for "HS_FDS_WORKAROUNDS" returned "0"
HOSGIP for "HS_FDS_MBCS_TO_GRAPHIC" returned "FALSE"
HOSGIP for "HS_FDS_GRAPHIC_TO_MBCS" returned "FALSE"
HOSGIP for "HS_FDS_RECOVERY_ACCOUNT" returned "RECOVER"
HOSGIP for "HS_FDS_TRANSACTION_LOG" returned "HS_TRANSACTION_LOG"
HOSGIP for "HS_FDS_TIMESTAMP_MAPPING" returned "DATE"
HOSGIP for "HS_FDS_DATE_MAPPING" returned "DATE"
HOSGIP for "HS_FDS_TRUNC_ANSI_DATE" returned "OFF"
HOSGIP for "HS_FDS_MAP_NCHAR" returned "TRUE"
HOSGIP for "HS_FDS_RESULTSET_SUPPORT" returned "FALSE"
HOSGIP for "HS_FDS_RSET_RETURN_ROWCOUNT" returned "FALSE"
HOSGIP for "HS_FDS_PROC_IS_FUNC" returned "FALSE"
HOSGIP for "HS_FDS_REPORT_REAL_AS_DOUBLE" returned "FALSE"
using postgres as default schema
HOSGIP for "HS_SQL_HANDLE_STMT_REUSE" returned "FALSE"
HOSGIP returned value of "TRUE" for HS_FDS_QUOTE_IDENTIFIER
SQL text from hgodtab, id=0, len=44 ...
00: 73656C65 6374202A 2066726F 6D202270 [select * from "p]
10: 6762656E 63685F61 63636F75 6E747322 [gbench_accounts"]
20: 20776865 72652031 203D2030 [ where 1 = 0]
SQL text from hgopars, id=1, len=42 ...
00: 53454C45 43542043 4F554E54 282A2920 [SELECT COUNT(*) ]
10: 46524F4D 20227067 62656E63 685F6163 [FROM "pgbench_ac]
20: 636F756E 74732220 4131 [counts" A1]
Deferred open until first fetch.
Performing delayed open.
HOA 07/09 21:39:02.235762000: (horcrces_CleanupExtprocSession) Entered!
HOA 07/09 21:39:02.235786000: (horcrpooe_PopOciEnv) Entered!
HOA 07/09 21:39:02.235797000: (horcrfoe_FreeOciEnv) Entered!
HOA 07/09 21:39:02.235807000: (horcrfoe_FreeOciEnv) Exiting...
HOA 07/09 21:39:02.235816000: (horcrfse_FreeStackElt) Entered!
HOA 07/09 21:39:02.235825000: (horcrfse_FreeStackElt) Exiting...
HOA 07/09 21:39:02.235835000: (horcrpooe_PopOciEnv) Exiting...
HOA 07/09 21:39:02.235844000: (horcrces_CleanupExtprocSession) Exiting...

At least we have the default values for the non-specified parameters.

Finally, a little note about the architecture: the Gateway, with the ODBC driver and the Heterogeneous Services listener, can be installed on the Oracle side, on the remote database side, or in the middle (why not a Docker container if it makes it more appealing?), as it communicates over TCP/IP and is not subject to licensing. Like a streaming replication server, but with redundancy, no latency, and transparent real-time queries.

mwidlake's picture

First Lessons, Frustrations, & Funny Stuff – Introducing the iPad To My Mum

<< Introducing I.T. to an Elderly Relative
<<<< Preparing the device

[Photo: the N key on a keyboard]

So, you are helping an elderly relative, or someone else who knows nothing at all about keyboards, icons, the internet, or web browsing, to get going with I.T. You have set up the device for them; now you need to introduce them to it. This is where it gets… interesting.

As I describe in earlier posts, I bought an iPad for my mum and set it up in a nice, simple way for her. I knew there was nothing she could do to actually break the iPad, it would just be a little confusing and possibly quite a frustrating process showing her how to use it. I was wrong. On all counts.

To do this I drove up to see my mum for the day, taking along the current Mrs Widlake for emotional support. Having arrived and set up the new router we had got from British Telecom (that’s a whole other story of woe) I sat Mum down and showed her the iPad, the on/off button, the volume buttons and the Home button. I got her to turn the device off and on, I pulled up some things on the internet to show her videos & music and got her to turn the volume up and down, and showed her how you touch the screen to do things. I told Mum about plugging it in and the icon that shows how much charge it has. All was going OK. I showed her the keyboard…

“Ohh, that’s complex!” No it’s not, there is one key per letter and some special keys. “Why can’t it have 9 numbered buttons and you just press 3 twice for H?” Because it is not 1995 anymore and this is much easier. I open Messenger for her, start a conversation to me and get her to type, yes you guessed it, ‘Hello World’. “I can’t find the ‘L'”. That’s OK, just take your time…

Mum is punching her finger on the screen as if she is killing cockroaches. You just need to tap it mother “I am!”. More softly (bash bash bash). No, gently (bash bash). If I poked your cat that hard she’d bite me, imagine you are touching the cat (bash bash bash). Mum, the screen is glass – if you hit it like that it will break and cut your finger off! That worked.. sort of (thud thud thud). 2 minutes and liberal use of the delete key later (her aim is poor) we get ‘Hello World’. Great! Well, you are sending the message to me, look that’s my name and a little picture of me! Say ‘Hello Martin’ – “Hello Martin” says Mum. Nooo, type it. “Where’s the L key?” Same place as before, just take your time…

When Mum is looking for a key she hovers her finger over the keyboard, waves it over one area, goes to another area and waves it over that – and then back to the first area… and repeats. Half of the keyboard has some sort of exclusion field around it. Mum, just look along each row until you find the letter you want. “I was!” No, you looked there and then there, 3 times. Trust me, just work along each row. She does.. “There it is! I knew it was there!”. Hmmm

After about 10 minutes of this random messaging (it felt like an hour but my wife, sniggering on the sofa, said it was 10 minutes) I get Mum to practice logging into the device. This, after all, is a vital step.

I tell her the password is my name. I decided on my name as she (probably) won’t forget it and it is more secure than a number that she will remember. “With a Y or an I?” Huh? “Martin with a Y or an I?” What did you name me? “Martin”. With a Y or an I? “Well, an I of course.” Well it’s with a bloody I then! “Some people spell it different…”. Why would I set your password to my name but spelt the wrong way? It’s an I you silly old Moo. (yes, it’s getting to me).

She types Marti.. “There is no N key”. It’s there. “Oh yes”. I tell her to press DONE. She does, the home screen comes on. I get her to turn it off and put in her password again. “What is my password?” Martin. “I just typed that”. Yes, we are practising. “OK – (thud thud thud… thud….)”. The N key is there, Mum (thud). And DONE… (thud) “I’m in!”. Excellent. Now do it again so you have done it without any help.

(thud thud thud….thud…..) “The N key has gone!” – It’s…  {breathes a little…} there! “Oh yes! I knew that!” But she does press DONE on her own.

Now do it again. “Why?” Because I need to know you can do it easily. (thud thud…thud thud…….) “Where…” It’s there! There! THERE!!! You’ve pressed it 4 times in the last 2 minutes, it’s ALWAYS there, it does not bloody move!!! IT’S THERE!!!! I can feel veins throbbing at my temples…

Sue pipes up “Shall I make us all a cup of tea and we can go look at the fish in the pond?” She’s a saint.

After a break and some calming down, we go through it all again (with fewer issues and less swearing) and I show Mum ‘Messenger’ again and how she can communicate with me. I show her how to type a message and send it and how to call me and we do a few trials and she seems OK with that. She keeps forgetting to press the plane icon “why is it a blue arrow?” It’s like a plane, you send the message. “It looks like an arrow”. OK, it’s an arrow, you are firing the message to me wherever I am. “How does it know where you are?” Magic Pixies.

By now we are both getting really annoyed with each other but she can turn the device on, log in, use the keyboard (well, sort of) and she can message me. That is enough for day one – and I need alcohol in a way that is slightly worrying.

We drive home and later that evening we get a message off my mum. It’s almost indecipherable as she has forgotten where the delete key is, and she does not seem to understand that she can check what she has typed, but it’s sort-of a success. I started to reply about where the delete key is, but something in my head steps in and quietly suggests to me that remote support for my confused mother after all the wine I consumed is probably a poor idea. I send a brief “we got home” message – and a picture of a cat.

Next day she calls me on Messenger. Hi Mum, how are you? “{small scream} – is that you, Martin?” Yes, you called me. “No I didn’t!” Err, yes you did. “I didn’t, I sent you a message”. Did you press the blue arrow. “Yes!”. The one next to the text you typed “No, the one at the top of the screen”…. At the top of the screen?… Does it look a bit like a telephone? “Yes!” That would be the telephone then. “Oh! How do I send this message?” After I end the call mother, press the blue arrow. 30 seconds later my phone rings. Hi Mum… “(smaller scream) – it did it again!” So, why do you think it did it again? “I pressed the wrong key?” Yes.

Over the next few hours I get a few messages (no more calls) and slowly the random strings slowly become things I can understand. We are getting there.

Next day she calls me on Messenger… Hi Mum? “{small scream…}”  We repeat the previous day. Typing is better.

Next day, no call, no messages.

Next day, no call, no messages.

Next day, the phone (real phone) goes “I’ve broken it, it won’t work!” Hello Mum. OK, what is broken. “It’s broken, it won’t let me in! It won’t accept my password”. OK pick up the device tell me what you are doing… We work through it, she is entering the password (with an I not a Y, I checked) and “it’s not working” is actually she is getting a message saying the device is disabled. I ask Mum if maybe, perhaps, she got the password wrong a few times and it asked her to wait 5 minutes before trying again? “No, I got my password right – but it would not let me in and after a few times it said that!”. OK… So, leave it alone for an hour and try again. “I did that yesterday!” I’m getting a bad feeling about this… ” And after I tried it, it told me to wait again… and it still could not remember my password and then I left it all day and now it says it’s disabled and needs to be plugged in. I plugged it in!”

I explain that she has actually done the one thing that can brick(*) an iPad. She has repeatedly got the password wrong enough times and persistently enough to cause it to believe it is stolen. It is useless to her. It needs to be plugged into a computer and reset. *sigh*. I asked her why she did not call me when “it forgot her password”. She did not want to bother me…

So now I had to organise a day to drive over there, factory reset the damned thing, and set it up again. And I was going to change her password to a simple number.

It had not been a little confusing, it had been utterly baffling.  I had not found it quite frustrating, I had been turned into a swearing lunatic. And she had indeed broken the iPad.

I rang my Brother. I told him the inheritance is all his – I am adopted.

(*) Brick – turn an electronic device into something as responsive and useful as a house brick.

Jonathan Lewis's picture

opt_estimate 5

If you've been wondering why I resurrected my drafts on the opt_estimate() hint: a few weeks ago I received an email containing an example of a query where a couple of opt_estimate() hints were simply not working. The critical feature of the example was that the basic structure of the query was of a type that I had not previously examined. That's actually a common type of problem when trying to investigate any Oracle feature from cold – you can spend days thinking about all the possible scenarios you should model, then the first time you need to apply your knowledge to a production system the requirement falls outside every model you've examined.

Before you go any further reading this note, though, I should warn you that it ends in frustration because I didn’t find a solution to the problem I wanted to fix – possibly because there just isn’t a solution, possibly because I didn’t look hard enough.

So here’s a simplified version of the problem – it involves pushing a predicate into a union all view. First some data and a baseline query:

rem
rem     Script:         opt_estimate_3a.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2019
rem

create table t1
as
select
        rownum                          id,
        100 * trunc(rownum/100)-1       id2,
        mod(rownum,1e3)                 n1,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        dual
connect by
        rownum <= 1e4   -- > comment to avoid WordPress format issue
;

create table t2a pctfree 75 as select * from t1;
create table t2b pctfree 75 as select * from t1;

create index t2ai on t2a(id);
create index t2bi on t2b(id);

explain plan for
select
        t1.v1,
        t2.flag,
        t2.v1
from
        t1,
        (select 'a' flag, t2a.* from t2a
         union all
         select 'b', t2b.* from t2b
        )       t2
where
        t2.id = t1.n1
and     t1.id = 99
/

select * from table(dbms_xplan.display(null,null,'outline alias'))
/


There is one row with t1.id = 99, and I would like the optimizer to use an indexed access path to select the one matching row from each of the two tables in the union all view. The smart execution plan would be a nested loop using a “pushed join predicate” – and that’s exactly what we get by default with this data set:


-----------------------------------------------------------------------------------------------
| Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |      |     2 |    96 |    30   (4)| 00:00:01 |
|   1 |  NESTED LOOPS                          |      |     2 |    96 |    30   (4)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL                    | T1   |     1 |    19 |    26   (4)| 00:00:01 |
|   3 |   VIEW                                 |      |     1 |    29 |     4   (0)| 00:00:01 |
|   4 |    UNION ALL PUSHED PREDICATE          |      |       |       |            |          |
|   5 |     TABLE ACCESS BY INDEX ROWID BATCHED| T2A  |     1 |    15 |     2   (0)| 00:00:01 |
|*  6 |      INDEX RANGE SCAN                  | T2AI |     1 |       |     1   (0)| 00:00:01 |
|   7 |     TABLE ACCESS BY INDEX ROWID BATCHED| T2B  |     1 |    15 |     2   (0)| 00:00:01 |
|*  8 |      INDEX RANGE SCAN                  | T2BI |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   2 - SEL$1        / T1@SEL$1
   3 - SET$5715CE2E / T2@SEL$1
   4 - SET$5715CE2E
   5 - SEL$639F1A6F / T2A@SEL$2
   6 - SEL$639F1A6F / T2A@SEL$2
   7 - SEL$B01C6807 / T2B@SEL$3
   8 - SEL$B01C6807 / T2B@SEL$3

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$639F1A6F" "T2A"@"SEL$2")
      INDEX_RS_ASC(@"SEL$639F1A6F" "T2A"@"SEL$2" ("T2A"."ID"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$B01C6807" "T2B"@"SEL$3")
      INDEX_RS_ASC(@"SEL$B01C6807" "T2B"@"SEL$3" ("T2B"."ID"))
      USE_NL(@"SEL$1" "T2"@"SEL$1")
      LEADING(@"SEL$1" "T1"@"SEL$1" "T2"@"SEL$1")
      NO_ACCESS(@"SEL$1" "T2"@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SET$1")
      OUTLINE(@"SEL$3")
      OUTLINE(@"SEL$2")
      OUTLINE_LEAF(@"SEL$1")
      PUSH_PRED(@"SEL$1" "T2"@"SEL$1" 1)
      OUTLINE_LEAF(@"SET$5715CE2E")
      OUTLINE_LEAF(@"SEL$B01C6807")
      OUTLINE_LEAF(@"SEL$639F1A6F")
      ALL_ROWS
      OPT_PARAM('_nlj_batching_enabled' 0)
      DB_VERSION('12.2.0.1')
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T1"."ID"=99)
   6 - access("T2A"."ID"="T1"."N1")
   8 - access("T2B"."ID"="T1"."N1")

So that worked well – operation 2 predicts one row for the tablescan of t1, with a nested loop join and union all pushed predicate where an index range scan of t2ai and t2bi gives us one row from each table. The “Predicate Information” tells us that the t1.n1 join predicate has been pushed inside the view to both subqueries, so we see “t2a.id = t1.n1” and “t2b.id = t1.n1”.
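
To make “pushed predicate” a little more concrete, here’s a rough sketch (my paraphrase, not Oracle’s literal internal rewrite) of how the view is effectively probed for each driving row from t1, with the value of t1.n1 represented by a bind variable:

select  /* sketch only: the join predicate has been copied into each
           branch of the union all, which is what allows each branch
           to use its own index on id */
        'a' flag, t2a.* from t2a where t2a.id = :n1
union all
select  'b', t2b.* from t2b where t2b.id = :n1
/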

So what if I want to tell Oracle that it will actually find 5 rows in the t2a range scan and table access, and 7 rows in the t2b range scan and table access (perhaps in a more complex view that would persuade Oracle to use two different indexes to get into the view, and change the join order and access method for the next few tables it accessed)? Since I’ve recently written about the nlj_index_scan option for opt_estimate() you might think that this is the one we need to use – perhaps something like:


opt_estimate(@sel$639f1a6f nlj_index_scan, t2a@sel$2 (t1), t2ai, scale_rows=5)
opt_estimate(@sel$b01c6807 nlj_index_scan, t2b@sel$3 (t1), t2bi, scale_rows=7)

You’ll notice I’ve been very careful to find the fully qualified aliases for t2a and t2b by looking at the “Query Block Name / Object Alias” section of the plan (if the view had appeared as a result of Oracle using Concatenation or OR-Expansion you would find two query block names that looked similar but had suffixes of “_1” and “_2”). But it wasn’t worth the effort: it didn’t work. Fiddling around with all the possible variations I could think of didn’t help (maybe I should have used set$5715ce2e as the query block target for both the hints – no; what if I …)
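
For the record, the set$5715ce2e variation mentioned in that last parenthesis would have looked something like the following – this is my reconstruction only, using the index names from this model, and (as I’ve said) it didn’t help either:

opt_estimate(@set$5715ce2e nlj_index_scan, t2a@sel$2 (t1), t2ai, scale_rows=5)
opt_estimate(@set$5715ce2e nlj_index_scan, t2b@sel$3 (t1), t2bi, scale_rows=7)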

Of course if we look at the “Outline Data” we’d notice that the use_nl() hint in the outline says: “USE_NL(@SEL$1 T2@SEL$1)”, so we don’t have a nested loop into t2a and t2b, we have a nested loop into the view t2. So I decided to forget the nested loop idea and just go for the indexes (and the tables, when I got to them) with the following hints (you’ll notice that during the course of my experiments I added my own query block names to the initial query blocks – so the generated query block names have changed):



explain plan for
select
        /*+
                qb_name(main)
                opt_estimate(@sel$f2bf1101, index_scan, t2a@subq_a, t2ai, scale_rows=5)
                opt_estimate(@sel$f2bf1101, table,      t2a@subq_a,       scale_rows=5)
                opt_estimate(@sel$f4e7a233, index_scan, t2b@subq_b, t2bi, scale_rows=7)
                opt_estimate(@sel$f4e7a233, table,      t2b@subq_b,       scale_rows=7)
        */
        t1.v1,
        t2.flag,
        t2.v1
from
        t1,
        (select /*+ qb_name(subq_a) */ 'a' flag, t2a.* from t2a
         union all
         select /*+ qb_name(subq_b) */ 'b', t2b.* from t2b
        )       t2
where
        t2.id = t1.n1
and     t1.id = 99
;

select * from table(dbms_xplan.display(null,null,'outline alias'));


-----------------------------------------------------------------------------------------------
| Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |      |     2 |    96 |    30   (4)| 00:00:01 |
|   1 |  NESTED LOOPS                          |      |     2 |    96 |    30   (4)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL                    | T1   |     1 |    19 |    26   (4)| 00:00:01 |
|   3 |   VIEW                                 |      |     1 |    29 |     4   (0)| 00:00:01 |
|   4 |    UNION ALL PUSHED PREDICATE          |      |       |       |            |          |
|   5 |     TABLE ACCESS BY INDEX ROWID BATCHED| T2A  |     5 |    75 |     2   (0)| 00:00:01 |
|*  6 |      INDEX RANGE SCAN                  | T2AI |     5 |       |     1   (0)| 00:00:01 |
|   7 |     TABLE ACCESS BY INDEX ROWID BATCHED| T2B  |     7 |   105 |     2   (0)| 00:00:01 |
|*  8 |      INDEX RANGE SCAN                  | T2BI |     7 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------


Excellent – we get the cardinalities we want to see for the tables – except the view operator doesn’t hold the sum of the table cardinalities, and the join doesn’t multiply up the estimates either. I couldn’t find a way of getting the view to show 12 rows (not even with a guessed – but presumably unimplemented – opt_estimate(view …) hint!). However, during the course of my experiments I tried the hint: “opt_estimate(@main, table, t2@main, scale_rows=15)”. This didn’t have any visible effect in the plan, but while searching through the 10053 trace file I found the following lines:

Table Stats::
  Table: from$_subquery$_002  Alias: T2  (NOT ANALYZED)
  #Rows: 20000  SSZ: 0  LGR: 0  #Blks:  37  AvgRowLen:  15.00  NEB: 0  ChainCnt:  0.00  ScanRate:  0.00  SPC: 0  RFL: 0  RNF: 0  CBK: 0  CHR: 0  KQDFLG: 9

Access path analysis for from$_subquery$_002
    >> Single Tab Card adjusted from 20000.000000 to 300000.000000 due to opt_estimate hint

Access path analysis for from$_subquery$_002
    >> Single Tab Card adjusted from 12.000000 to 180.000000 due to opt_estimate hint

So at some point in the code path the optimizer is aware that 5 + 7 = 12, and that 12 * 15 = 180. But this doesn’t show up in the final execution plan. You might notice, by the way, that the scale_rows=15 has been applied NOT ONLY to the place where I was aiming – it’s also been applied to scale up the 20,000 rows (i.e. the 10,000 rows from each of t2a and t2b) that are estimated to be in the union all to 300,000 as the estimate for a tablescan of the two tables.
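
For reference, here’s the sort of hint block that test would have used – a sketch combining the hints from the query above with the extra hint; as described, it changed the arithmetic reported in the trace file but not the final plan:

        /*+
                qb_name(main)
                opt_estimate(@sel$f2bf1101, index_scan, t2a@subq_a, t2ai, scale_rows=5)
                opt_estimate(@sel$f2bf1101, table,      t2a@subq_a,       scale_rows=5)
                opt_estimate(@sel$f4e7a233, index_scan, t2b@subq_b, t2bi, scale_rows=7)
                opt_estimate(@sel$f4e7a233, table,      t2b@subq_b,       scale_rows=7)
                opt_estimate(@main, table, t2@main, scale_rows=15)
        */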

Possibly if I spent more time working through the 10053 trace file (which, as I’ve said before, I try to avoid doing) I might have found exactly which code path Oracle followed to get to the plan it produced, and managed to tweak some hints to get the numbers I wanted to see. Possibly the optimizer was already following the code path that actually produced the numbers I wanted, then “forgot” to use them. One day, perhaps, I’ll take another look at the problem – but since I wasn’t trying to solve a problem for a client (and given that there was an alternative workaround) I closed the 10053 trace file and put the model aside for a rainy day.
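
For anyone who wants to reproduce the trace file, the 10053 event can be enabled around the explain plan call in the usual way – for example:

alter session set events '10053 trace name context forever, level 1';

explain plan for
select ...              -- the hinted test query shown above
;

alter session set events '10053 trace name context off';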

Footnote

One thought did cross my mind as a way of finding out if there was a real solution – and I offer this for anyone who wants to play: create a second data set that genuinely produces the 5 and 7 I want to see (and check that the view reports the sum of the two components); then run the original query against the original data so that you’ve got the execution plan in memory, and overwrite the original data with the new data set (without changing the statistics on the original). Then use the SQL Tuning Advisor to see if it produces a SQL profile for the captured SQL_ID that reproduces the correct plan for the second data set, and check what opt_estimate() hints it uses. (Warning – this may turn into a frustrating waste of time.)
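
If anyone does want to play, a minimal sketch of the last step – driving the SQL Tuning Advisor against the captured statement – might look like this (the task name is invented and &m_sql_id is just a substitution variable standing in for the SQL_ID you captured):

declare
        m_task  varchar2(128);
begin
--      create and run a tuning task for the SQL_ID captured from the
--      first execution (the cursor needs to be in the library cache,
--      or available in AWR, for this form of the call)
        m_task := dbms_sqltune.create_tuning_task(
                sql_id     => '&m_sql_id',
                time_limit => 300,
                task_name  => 'opt_est_union_all'
        );
        dbms_sqltune.execute_tuning_task(task_name => m_task);
end;
/

set long 100000
select dbms_sqltune.report_tuning_task('opt_est_union_all') from dual;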

 
