Oakies Blog Aggregator

fritshoogland

Repairing a Linux host with disk/filesystem issues

This is a writeup on some scenarios for disk issues that you could encounter when running linux systems. The linux systems I am talking about are centos/redhat/oracle (EL) version 7 systems. This is mostly a writeup for myself on how to deal with the different scenarios; hopefully other people find it interesting too. I don’t believe there is any difference for the below scenarios and resolutions between running physical/bare metal, virtualised or in the cloud, provided you can get the access you need (like the BIOS POST console). The test configuration uses (legacy) MBR (meaning non UEFI), grub2 as boot loader, LVM for meta devices (you don’t want to run without LVM!!) and XFS for the filesystems.

I included a way to introduce each corruption yourself, so you can actually rehearse these kinds of corruption. These self-introduced corruptions are simply parts that are overwritten, and they can be recovered from. In reality this might be different: if a disk is truly physically broken, there is no way you can make it come to life again with software, and the data on it is simply gone. Also, if the physical disk is not harmed, but the filesystem is damaged, I think most of you understand that if the amount of damage is big enough, at a certain point it is beyond repair.

REALLY IMPORTANT NOTICE

Modern filesystems like XFS, EXT3/4, NTFS are ‘logging filesystems’. This means changes to metadata are written into a log on disk called the ‘intent log’, after which the true change is written and then the corresponding log in the intent log is flagged as done. This means that unclean mounts or crashes can be recovered from by replaying the intent log to make sure all transactions are fully written.

However…by default these filesystems do not log actual data changes. The protected writes are filesystem structure changes. This is especially important to realise if writes are not done via O_DIRECT, and thus end up in the page cache.

This means that data corruption can still occur, despite having a modern, logging filesystem. Things like databases therefore keep consistency mechanisms in play to validate data consistency.

I do not know enough about BTRFS and ZFS to know if these do things differently from the logging perspective. Not logging all IO for “normal” filesystems is a performance choice.

Let’s start: know your system!

It really helps if you understand what your system looks like. This is what my system looks like:

# parted /dev/sda print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sda: 68.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1075MB  1074MB  primary  xfs          boot
 2      1075MB  68.7GB  67.6GB  primary               lvm

The first partition of /dev/sda is set to be the boot partition (flags: boot), and contains an XFS filesystem. The second partition is an LVM physical volume. ‘lsblk’ gives a bit more insight into the function of the partitions, and reveals any other block devices.

# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0   64G  0 disk
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0   63G  0 part
  ├─ol-root 249:0    0   41G  0 lvm  /
  ├─ol-swap 249:1    0    2G  0 lvm  [SWAP]
  └─ol-home 249:2    0   20G  0 lvm  /home
sr0          11:0    1  540M  0 rom

This shows that /dev/sda1 is mounted as /boot on the linux installation, and the sda2 LVM volume contains three logical volumes named root, swap and home, which are mounted on /, as swap and on /home. There is also a ‘rom’ block device, which is the (oracle) linux boot image. This image can be downloaded from Oracle’s software delivery cloud, is 566MB in size, and contains the rescue boot environment that is used in this blogpost. Any serious sysadmin should have such an image on hand to perform emergency recovery if needed.
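Because being able to recover depends on knowing this layout, it can help to capture it in a file while the system is still healthy, and to keep a copy of that file somewhere off the machine. A minimal sketch using standard EL7 tools; the output file location is just an example:

# lsblk -f > /root/disk-layout.txt
# parted -l >> /root/disk-layout.txt
# blkid >> /root/disk-layout.txt
# pvs >> /root/disk-layout.txt; vgs >> /root/disk-layout.txt; lvs >> /root/disk-layout.txt
# cat /etc/fstab >> /root/disk-layout.txt

A layout file that only exists on a disk you can no longer mount is of little use, hence the copy elsewhere.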

Normally, the BIOS boot order places the local disks before other bootable devices such as a CD/DVD image. This means that I can leave the linux boot image connected: the system will normally boot from its hard disks and essentially not touch the DVD image. If for some reason the BIOS can’t find an MBR on the local disks, it will scan for MBR records on the other disks in the order set in the BIOS, find the linux boot image and boot from that. On a real production system you don’t leave such an image connected; normally it’s attached when there is a need.

If you want or must use the linux boot image instead of the local installation, you have to intervene during the BIOS POST and change the boot order. I am using virtualbox for most of my tests; the way to do that there is: press F12 (Mac: fn-F12), select CD-ROM as boot device and press enter.

To use the rescue mode on the linux boot image, boot it, then select “troubleshooting” (down arrow, enter), and “rescue a Oracle linux system” (down arrow, enter). This will boot linux from the boot image, and run anaconda. Anaconda gives you the following options:

1. Continue

This will scan your available disks, and mount the filesystems it could find under /mnt/sysimage in read-write mode. Not all filesystem issues will allow mounting read-write.

2. Read-only mount

This will scan your available disks, and mount the filesystems it could find under /mnt/sysimage in read-only mode. Obviously, not all issues allow read-only mounting either.

3. Skip to shell

This will do no scanning of disks, so if you want to prevent any access to the local disks, this is the option to choose.

4. Quit (reboot)

This is a safe reboot (stopping all the started units). This is the same as what happens when you exit the shell after using option 1, 2 or 3.

Let’s look at a couple of different scenarios:

Corrupted MBR

On an MBR (non UEFI) system, the BIOS scans the disks it can see in the order that is set in the BIOS, and boots from the first MBR found. This will normally be your linux installation on the local disks. If you have a corrupted or otherwise unbootable MBR on your local disks, and the linux boot image attached, it will boot from the linux boot image, unless that device is explicitly excluded from booting.

If no usable MBR is found, it will tell you on the console. On virtualbox, it shows:

FATAL: Could not read from the boot medium! System halted.

If just the MBR is damaged/corrupted, it can be recovered. If you want to rehearse recovering from a corrupted MBR, overwrite the MBR in the following way. Please mind this is machine dependent; refer to the overview above (parted and lsblk) for the specifics of the system:

# dd if=/dev/zero of=/dev/sda bs=446 count=1

If you detach any other bootable devices and reboot your system after writing over the MBR, it should not find anything to boot from, and your system will give an error like the one seen above, indicating it’s unable to boot.
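If you are rehearsing this, it is a good idea to take a backup of the first sector before overwriting it, so you have a second way out besides grub2-install. A minimal sketch; the device and backup file names are examples:

(save the full first sector: 446 bytes boot code, 64 bytes partition table, 2 bytes signature)
# dd if=/dev/sda of=/root/sda-sector0.bak bs=512 count=1

(restore only the boot code, leaving the current partition table untouched)
# dd if=/root/sda-sector0.bak of=/dev/sda bs=446 count=1

Restoring all 512 bytes would also put back the old partition table, which is only what you want if the partition table itself was damaged.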

In order to restore the MBR for grub2, perform the following tasks:
– Boot from the linux boot image.
– Start rescue mode.
– Choose ‘1’/continue, and enter to go to the shell.
– Change the root to your local disks: ‘chroot /mnt/sysimage’.
– Next thing is to understand your system. Based on the parted output above, we know /dev/sda is the disk the system boots from (its first partition carries the boot flag), so that is where grub needs to be installed. This means we have to enter the following in the console:

grub2-install /dev/sda

If your disk is healthy enough to hold the information, and not really broken, this will report:

Installing for i386-pc platform.
Installation finished. No error reported.

– Now the MBR is written, and you can reboot your system by exiting the chroot jail and shell: ‘exit’ and ‘exit’ to boot your system from the local disks again.
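If you want a sanity check before rebooting, you can inspect the first sector from inside the chroot. This is optional and not part of the recovery itself:

(should identify the device as a DOS/MBR boot sector again)
# file -s /dev/sda

(the last two bytes of sector 0 should be the 55 aa boot signature)
# dd if=/dev/sda bs=512 count=1 2>/dev/null | od -A d -t x1 | tail -2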

Corrupted /boot filesystem

If the /boot filesystem, which holds the linux kernel and the grub settings, is corrupted, the system will start booting, because that is handled by the MBR, but then very quickly tell you it doesn’t recognise the filesystem:

.
error: unknown filesystem.
Entering rescue mode...
grub rescue>

A corrupted /boot filesystem can be recovered from in most cases, obviously depending on what and how much is corrupted in the filesystem, and the underlying device must be able to function after the corruption. This can be rehearsed by overwriting part of the filesystem. In order to be absolutely sure the overwritten data on the disk is not changed by anything else, boot the linux boot image in rescue mode, choose option ‘3’, and then overwrite part of the filesystem in /dev/sda1:

# dd if=/dev/zero of=/dev/sda1 bs=1k skip=2 count=1

In order to recover /boot, perform the following tasks:
– Boot from the linux boot image.
– Start rescue mode.
– Choose ‘1’/continue, and enter to go to the shell. If you are rehearsing: the dd command overwrote part of the filesystem header, so the filesystem will not be recognised by lsblk, and if you take a look at the system’s fstab (/mnt/sysimage/etc/fstab), you will see /boot is mounted by UUID, but the UUID can’t be read from the corrupted filesystem. The root and home filesystems are found in LVM, and mounted. This means you have to know that /dev/sda1 is /boot.
– Repair the filesystem:

# xfs_repair /dev/sda1

Depending on the corruption, this might or might not succeed. If you are rehearsing this using the above dd command, you will see a lot of dots being printed while xfs_repair searches for a secondary superblock, which it will find and then use to recover.
– Exit the shell to reboot the system: ‘exit’.
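Before exiting, one extra check can save you a surprise at the next boot: /boot is mounted by UUID in fstab, so it is worth confirming that the repaired filesystem still carries the UUID fstab expects. A minimal sketch; the UUID shown is a placeholder:

(compare the UUID of the repaired filesystem with the one in fstab)
# blkid /dev/sda1
# grep boot /mnt/sysimage/etc/fstab

(only if they differ: set the expected UUID on the unmounted filesystem)
# xfs_admin -U 11111111-2222-3333-4444-555555555555 /dev/sda1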

Corrupted root filesystem in LVM

When the root filesystem is corrupted, the system will boot as usual, show the different kernels in the grub dialogue, and then, when the choice times out, boot the pre-selected kernel and try to boot linux. After the kernel is loaded, the root filesystem should be mounted, and that will fail. This generates the following message:

Generating "/run/initramfs/rdsosreport.txt"

Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot after mounting them and attach it to a bug report.

In order to rehearse recovering a corrupted root filesystem, boot the linux boot image in rescue mode, and choose option ‘3’. You will find that the volume group is not active; it needs to be activated using:

# vgchange -ay
 3 logical volume(s) in volume group "ol" now active

Then overwrite the filesystem header for the logical volume ‘root’ in the volume group ‘ol’:

# dd if=/dev/zero of=/dev/ol/root bs=1k skip=2 count=1

A corrupted root filesystem can be recovered from in most cases, obviously depending on what and how much is corrupted in the filesystem, and the underlying device must be able to function after the corruption. In order to recover, perform the following tasks:
– Boot from the linux boot image.
– Start rescue mode.
– Choose ‘3’.
– Activate LVM:

# vgchange -ay

– Repair the filesystem in the logical volume:

# xfs_repair /dev/ol/root

– Exit the shell to reboot the system: ‘exit’.
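If you are unsure about the state of a filesystem, xfs_repair can first be run in no-modify mode to see what it would do. A short sketch; the -L option is a last resort for a dirty log that cannot be replayed, and is not needed for the rehearsal above:

(no-modify mode: only report what would be repaired)
# xfs_repair -n /dev/ol/root

(if xfs_repair refuses because of a dirty log, mounting and unmounting the filesystem replays the log; only if that is impossible, -L zeroes the log and may lose the most recent metadata changes)
# xfs_repair -L /dev/ol/root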

Corrupted non root filesystem in LVM

On my system, I only have /home as a non root filesystem in LVM. When I corrupted this filesystem, the linux system came up without any complaints on the console. To be honest, this was kind of surprising to me.

I needed to dig around in the /var/log/messages file to find an error message:

Oct  2 10:57:11 localhost lvm: 3 logical volume(s) in volume group "ol" now active
Oct  2 10:57:11 localhost systemd: Started LVM2 PV scan on device 8:2.
Oct  2 10:57:11 localhost mount: mount: wrong fs type, bad option, bad superblock on /dev/mapper/ol-home,
Oct  2 10:57:11 localhost mount: missing codepage or helper program, or other error
Oct  2 10:57:11 localhost mount: In some cases useful info is found in syslog - try
Oct  2 10:57:11 localhost mount: dmesg | tail or so.
Oct  2 10:57:11 localhost systemd: Found device /dev/mapper/ol-home.
Oct  2 10:57:11 localhost systemd: Mounting /home...
Oct  2 10:57:11 localhost systemd: Starting LVM2 vgchange on device 8:2...

Based on what I see, I interpret this as systemd activating LVM, and then mounting the filesystem, which gives an error. What I don’t understand is what systemd is doing when it then reports that it found the logical volume device and is mounting it.

This can be rehearsed by booting into rescue mode, choose option ‘3’, activate LVM ‘vgchange -ay’, and then overwrite the filesystem header in the logical volume: ‘dd if=/dev/zero of=/dev/ol/home bs=1k skip=2 count=1’.

A corrupted non root filesystem in LVM can be recovered from in most cases. There is no need to use the linux boot image for rescue mode: a linux system will come up as long as the root filesystem is valid and it can load enough from it to boot linux. The following needs to be done:
– Start a shell as root.
– Repair the filesystem in the logical volume: ‘xfs_repair /dev/ol/home’.
– Mount all filesystems: ‘mount -a’. This should mount the just repaired filesystem.
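To confirm that the repaired filesystem is really back in service, and that systemd agrees with that, a couple of quick checks can be done. A minimal sketch (home.mount is the systemd unit name that follows from the /home mount point):

(the mount unit should show as active after 'mount -a')
# systemctl status home.mount

(double check from the kernel's point of view)
# mount | grep /home
# df -h /home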

Corrupted swap in LVM

Just like the non root filesystem corruption mentioned above, the inability to activate a swap device is logged in the /var/log/messages file, but there is no explicit message on the console. Apart from logging the inability to activate the swap device, the system will come up happily, without swap if the only swap device is corrupted. These are the messages I could find in the /var/log/messages file:

Oct  2 11:32:39 localhost systemd: Found device /dev/mapper/ol-swap.
Oct  2 11:32:39 localhost systemd: Activating swap /dev/mapper/ol-swap...
Oct  2 11:32:39 localhost kernel: alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
Oct  2 11:32:39 localhost kernel: XFS (sda1): Mounting V4 Filesystem
Oct  2 11:32:39 localhost swapon: swapon: /dev/mapper/ol-swap: swapon failed: Invalid argument
Oct  2 11:32:39 localhost systemd: dev-mapper-ol\x2dswap.swap swap process exited, code=exited status=255
Oct  2 11:32:39 localhost systemd: Failed to activate swap /dev/mapper/ol-swap.
Oct  2 11:32:39 localhost systemd: Dependency failed for Swap.
Oct  2 11:32:39 localhost systemd: Job swap.target/start failed with result 'dependency'.
Oct  2 11:32:39 localhost systemd: Unit dev-mapper-ol\x2dswap.swap entered failed state.
Oct  2 11:32:39 localhost kernel: Unable to handle swap header version 0

It seems systemd understands that the swap device is not usable, while previously we saw that systemd didn’t detect the filesystem issue on an LVM device.

This can be rehearsed by booting into rescue mode, choose option ‘3’, activate LVM ‘vgchange -ay’, and then overwrite the swap header: ‘dd if=/dev/zero of=/dev/ol/swap bs=1k count=2’.

A corrupted swap device in LVM can be recovered from if the underlying device is able to function.
– Start a shell as root.
– Format the swap device again, in my case the swap device is /dev/ol/swap: ‘mkswap /dev/ol/swap’.
– Activate swap: ‘swapon -a’.
– Validate swap has been activated: ‘swapon -s’.
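One caveat: mkswap generates a new UUID. On this system swap is referenced by its /dev/mapper path, so that doesn’t matter, but if /etc/fstab or the kernel command line (for example a resume= parameter) references the swap area by UUID, you either have to re-use the old UUID or update those references. A minimal sketch; the UUID is a placeholder that would be taken from fstab:

(check how swap is referenced)
# grep swap /etc/fstab
# cat /proc/cmdline

(recreate the swap area with the UUID the references expect)
# mkswap -U 11111111-2222-3333-4444-555555555555 /dev/ol/swap
# swapon -a
# swapon -s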

Franck Pachot

Unindexed Foreign Keys in Oracle and PostgreSQL

A new blog post on the Databases at CERN blog: verifying (with pgSentinel) that PostgreSQL does not lock full tables like Oracle does when the foreign key is not indexed.

https://db-blog.web.cern.ch/blog/franck-pachot/2018-09-unindexed-foreign-keys-oracle-and-postgresql


kevinclosson

SLOB Chewed Up All My File System Space and Spit It Out. But, Why?

This is a quick blog post in response to a recent interaction with a SLOB user. The user reached out to me to lament that all her file system space was consumed as the result of a SLOB execution (runit.sh). I reminded her that runit.sh will alert to possible derelict mpstat/iostat/vmstat processes from an aborted SLOB test. If these processes exist they will be spooling their output to unlinked files.

The following screen shot shows what to expect if a SLOB test detects potential “deadwood” processes. If you see this sort of output from runit.sh, it’s best to investigate whether in fact they remain from an aborted test or whether there are other users on the system that left these processes behind.

 

#000000;" src="https://kevinclosson.files.wordpress.com/2018/10/blog-pic1-derilect-iostat.png?w=500&h=277" alt="" width="500" height="277" srcset="https://kevinclosson.files.wordpress.com/2018/10/blog-pic1-derilect-iostat.png?w=500&h=277 500w, https://kevinclosson.files.wordpress.com/2018/10/blog-pic1-derilect-iost... 1000w, https://kevinclosson.files.wordpress.com/2018/10/blog-pic1-derilect-iost... 150w, https://kevinclosson.files.wordpress.com/2018/10/blog-pic1-derilect-iost... 300w, https://kevinclosson.files.wordpress.com/2018/10/blog-pic1-derilect-iost... 768w" sizes="(max-width: 500px) 100vw, 500px" />

The next screenshot shows how to take action on the runit.sh output:

#000000;" src="https://kevinclosson.files.wordpress.com/2018/10/blog-pic2-derilect-iostat.png?w=500&h=290" alt="" width="500" height="290" srcset="https://kevinclosson.files.wordpress.com/2018/10/blog-pic2-derilect-iostat.png?w=500&h=290 500w, https://kevinclosson.files.wordpress.com/2018/10/blog-pic2-derilect-iost... 150w, https://kevinclosson.files.wordpress.com/2018/10/blog-pic2-derilect-iost... 300w, https://kevinclosson.files.wordpress.com/2018/10/blog-pic2-derilect-iost... 768w, https://kevinclosson.files.wordpress.com/2018/10/blog-pic2-derilect-iost... 917w" sizes="(max-width: 500px) 100vw, 500px" />

Summary

If SLOB is consuming all your file system space, it’s probably already telling you why–all you need to do is take action. I hope this is helpful for some wayward Googler someday.

martin.bach

Ansible tips’n’tricks: a new series

In the past few months I have spent considerable amounts of time working on new technology (new as in “new to me”) and one of the things I have found a great interest in, and more importantly use cases for, is ansible. I don’t know how it is with you, but if I don’t have a practical use case for a technology, I find it hard to get familiar with it.

Ansible is a fantastic piece of technology with decent documentation and great community support. I didn’t find writing ansible code too hard. Most of the problems I have encountered while learning how to write ansible playbooks have, in some way or another, already been encountered and solved by other users. It does require a bit of Internet research, and you will need to check whether the solution offered somewhere on the Internet is still valid with the current version of the tool. As with a lot of other software, ansible evolves at a rather quick pace and keeping up can be a bit of a challenge.

I promise not to get into details about what ansible is; others have done so and I’m not a great fan of repeating what more clever people have said. Head over to the official documentation and see for yourself what ansible is. For the most part I consider it pretty straightforward and elegant. It sure beats my attempts at writing bash code ;)

Having said that, it might require a minute or so to get used to the markup language: most so-called playbooks are written in YAML, and you learn to appreciate the importance of white space in your scripts. Another key concept I personally found quite important to understand is idempotency. A script that is idempotent can safely be re-run multiple times. There are other important concepts to be aware of, and you should consult the documentation to learn more about these if you aren’t yet familiar with them.

My main reason for using ansible is to automate routine tasks in my personal lab environment, and this little series I’m planning to write is primarily a “note to self” so I don’t forget how to do things. As with the Oracle posts I plan on writing, I’ll add a “tested/written with version x” note, so that later on, when someone hits this article series via a search engine, they know what it requires to do the same.

The things I’m planning to write about are those that cost me a bit more time, and I’m hoping these little snippets can help you with your automation project.

Uwe Hesse

Using the Query Cache for good performance in #Exasol

The result of a query can be cached in Exasol to the effect that repeated identical queries complete in no time. This feature has been introduced in version 5 and is enabled by default.

SQL_EXA> select session_value,system_value  from exa_parameters where parameter_name='QUERY_CACHE';
EXA: select session_value,system_value  from exa_parameters where parameter...

SESSION_VALUE        SYSTEM_VALUE
-------------------- --------------------
ON                   ON

1 row in resultset.

The Query Cache can be (de-)activated on the session level as well as on the system level.

SQL_EXA> alter session set query_cache='off';
EXA: alter session set query_cache='off';

Rows affected: 0
SQL_EXA> select object_name,mem_object_size/1024/1024 as mb from exa_user_object_sizes where object_name='T';
EXA: select object_name,mem_object_size/1024/1024 as mb from exa_user_objec...

OBJECT_NAME          MB
-------------------- ----------------------------------
T                             1537.49641990661621093750

1 row in resultset.

SQL_EXA> select count(*) from t;
EXA: select count(*) from t;

COUNT(*)
---------------------
            320000000

1 row in resultset.
SQL_EXA> set timing on;
SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 4
Elapsed: 00:00:03.022

SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 5
Elapsed: 00:00:02.620

SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 6
Elapsed: 00:00:02.724

Without using the Query Cache the repeated query takes roughly 3 seconds.

SQL_EXA> alter session set query_cache='on';
EXA: alter session set query_cache='on';

Rows affected: 0


Timing element: 7
Elapsed: 00:00:00.008

SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 8
Elapsed: 00:00:00.009

Lightning fast! If statement profiling is enabled, QUERY CACHE RESULT shows up as PART_NAME in tables like EXA_USER_PROFILE_LAST_DAY. Also, EXECUTION_MODE in EXA_SQL_LAST_DAY shows the usage of the Query Cache:

SQL_EXA> set timing off;
SQL_EXA> select current_statement;
EXA: select current_statement;

CURRENT_STATEMENT
---------------------
                   35

1 row in resultset.

SQL_EXA> show autocommit;
AUTOCOMMIT = "ON"
SQL_EXA> select command_name,duration,row_count,execution_mode from exa_sql_last_day where stmt_id=33 and session_id=current_session;
EXA: select command_name,duration,row_count,execution_mode from exa_sql_las...

COMMAND_NAME                             DURATION    ROW_COUNT             EXECUTION_MODE
---------------------------------------- ----------- --------------------- --------------------
SELECT                                         0.005                     1 CACHED

1 row in resultset.

If DML changes the table, the result in the Query Cache is invalidated automatically:

SQL_EXA> update t set numcol2=1 where rowid in (select rowid from t limit 1);
EXA: update t set numcol2=1 where rowid in (select rowid from t limit 1);
Rows affected: 1

SQL_EXA> set timing on;
SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
888896

1 row in resultset.

Timing element: 10
Elapsed: 00:00:02.870

SQL_EXA> set timing off;
SQL_EXA> select current_statement;
EXA: select current_statement;

CURRENT_STATEMENT
---------------------
51

1 row in resultset.

SQL_EXA> select command_name,duration,row_count,execution_mode from exa_sql_last_day where stmt_id=49 and session_id=current_session;
EXA: select command_name,duration,row_count,execution_mode from exa_sql_las...

COMMAND_NAME                             DURATION    ROW_COUNT             EXECUTION_MODE
---------------------------------------- ----------- --------------------- --------------------

0 rows in resultset.

There’s a 1-minute interval for syncing the statistics tables. It can be triggered manually, though:

SQL_EXA> flush statistics;
EXA: flush statistics;

Rows affected: 0

SQL_EXA> commit;
EXA: commit;

Rows affected: 0

SQL_EXA> select command_name,duration,row_count,execution_mode from exa_sql_last_day where stmt_id=49 and session_id=current_session;
EXA: select command_name,duration,row_count,execution_mode from exa_sql_las...

COMMAND_NAME                             DURATION    ROW_COUNT             EXECUTION_MODE
---------------------------------------- ----------- --------------------- --------------------
SELECT                                         2.862                     1 EXECUTE

1 row in resultset.

The runtime and EXECUTION_MODE EXECUTE confirm that the Query Cache entry was invalidated by the UPDATE above. Now it’s automatically refreshed:

SQL_EXA> set timing on;
SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 11
Elapsed: 00:00:00.010
Jonathan Lewis

Case Study

A question about reading execution plans and optimising queries arrived on the ODC database forum a little while ago; the owner says the following statement is taking 14 minutes to return 30,000 rows and wants some help understanding why.

If you look at the original posting you’ll see that we’ve been given the text of the query and the execution plan including rowsource execution stats. There’s an inconsistency between the supplied information and the question asked, and I’ll get back to that shortly, but to keep this note fairly short I’ve excluded the 2nd half of the query (which is a UNION ALL) because the plan says the first part of the query took 13 minutes and 20 seconds and the user is worried about a total of 14 minutes.

SELECT /*+ gather_plan_statistics*/ DISTINCT
                rct.org_id,
                hzp.party_name,
                hca.account_number,
                rct.interface_header_attribute1 order_number,
                rct.customer_trx_id,
                rct.trx_number,
                rct.trx_date,
                rctd.gl_date,
                rct.creation_date,
                rctl.line_number,
                rct.invoice_currency_code inv_currency,
                (
                       SELECT SUM (rct_1.extended_amount)
                       FROM   apps.ra_customer_trx_lines_all rct_1
                       WHERE  rct_1.customer_trx_id = rct.customer_trx_id
                       AND    rct_1.line_type = 'LINE') inv_net_amount,
                (
                       SELECT SUM (rct_2.extended_amount)
                       FROM   apps.ra_customer_trx_lines_all rct_2
                       WHERE  rct_2.customer_trx_id = rct.customer_trx_id
                       AND    rct_2.line_type = 'TAX') inv_tax_amount,
                (
                       SELECT SUM (rct_3.extended_amount)
                       FROM   apps.ra_customer_trx_lines_all rct_3
                       WHERE  rct_3.customer_trx_id = rct.customer_trx_id) inv_gross_amount,
                gll.currency_code                                    func_currency,
                Round((
                        (
                        SELECT SUM (rct_4.extended_amount)
                        FROM   apps.ra_customer_trx_lines_all rct_4
                        WHERE  rct_4.customer_trx_id = rct.customer_trx_id
                        AND    rct_4.line_type = 'LINE')*gdr.conversion_rate),2) func_net_amount,
                Round((
                        (
                        SELECT SUM (rct_5.extended_amount)
                        FROM   apps.ra_customer_trx_lines_all rct_5
                        WHERE  rct_5.customer_trx_id = rct.customer_trx_id
                        AND    rct_5.line_type = 'TAX')*gdr.conversion_rate),2) func_tax_amount,
                Round((
                        (
                        SELECT SUM (rct_6.extended_amount)
                        FROM   apps.ra_customer_trx_lines_all rct_6
                        WHERE  rct_6.customer_trx_id = rct.customer_trx_id)*gdr.conversion_rate),2) func_gross_amount,
                glcc.segment1                                                                 company,
                glcc.segment2                                                                 account,
                hg.geography_name                                                             billing_country,
                gdr.conversion_rate
FROM            apps.hz_parties hzp,
                apps.hz_cust_accounts hca,
                apps.ra_customer_trx_all rct,
                apps.ra_customer_trx_lines_all rctl,
                apps.ra_cust_trx_line_gl_dist_all rctd,
                apps.gl_code_combinations_kfv glcc,
                apps.hz_cust_site_uses_all hcsua,
                apps.hz_cust_acct_sites_all hcasa,
                apps.hz_party_sites hps,
                apps.hz_locations hl,
                apps.hz_geographies hg,
                apps.gl_ledgers gll,
                apps.gl_daily_rates gdr
WHERE           hzp.party_id = hca.party_id
AND             hca.cust_account_id = rct.bill_to_customer_id
AND             hca.cust_account_id = hcasa.cust_account_id
AND             rct.customer_trx_id = rctl.customer_trx_id
AND             rctl.customer_trx_line_id = rctd.customer_trx_line_id
AND             glcc.code_combination_id = rctd.code_combination_id
AND             rct.bill_to_site_use_id = hcsua.site_use_id
AND             hcsua.cust_acct_site_id = hcasa.cust_acct_site_id
AND             hcasa.party_site_id = hps.party_site_id
AND             hps.location_id = hl.location_id
AND             hl.country = hg.country_code
AND             hg.geography_type = 'COUNTRY'
AND             rctl.line_type = 'TAX'
AND             gll.ledger_id = rct.set_of_books_id
AND             gdr.from_currency = rct.invoice_currency_code
AND             gdr.to_currency = gll.currency_code
AND             to_date(gdr.conversion_date) = to_date(rctd.gl_date)
AND             gdr.conversion_type = 'Corporate'
AND             rctd.gl_date BETWEEN To_date ('01-JAN-2018', 'DD-MON-YYYY') AND  To_date ('31-JAN-2018', 'DD-MON-YYYY')
AND             glcc.segment1 = '2600'
AND             glcc.segment2 = '206911'
GROUP BY        hzp.party_name,
                hca.account_number,
                rct.interface_header_attribute1,
                rct.trx_number,
                rct.trx_date,
                rct.creation_date,
                rctl.line_number,
                rctl.unit_selling_price,
                rct.org_id,
                rctd.gl_date,
                rct.customer_trx_id,
                glcc.segment1,
                glcc.segment2,
                hg.geography_name,
                rct.invoice_currency_code,
                gll.currency_code,
                gdr.conversion_rate 

We note that there are six scalar subqueries in the text I’ve reported – and they form two groups of three, and the difference between the two groups is that one group is multiplied by a conversion rate while the other isn’t; moreover in each group the three subqueries are simply querying subsets of the same correlated data set. So it looks as if all 6 scalar subqueries could be eliminated and replaced by the inclusion of an aggregate view in the from clause and the projection of 6 columns from that view.

However, before pursuing that option, take a look at the plan with the rowsource execution stats – where is the time going ?


-----------------------------------------------------------------------------------------------------------------------------------------------------  
| Id  | Operation                                                  | Name                         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT                                           |                              |      1 |        |    501 |00:13:20.17 |    3579K|  
|   1 |  UNION-ALL                                                 |                              |      1 |        |    501 |00:13:20.17 |    3579K|  
|   2 |   HASH UNIQUE                                              |                              |      1 |      1 |    501 |00:13:20.17 |    3579K|  
|   3 |    HASH GROUP BY                                           |                              |      1 |      1 |  19827 |00:13:20.15 |    3579K|  
|   4 |     NESTED LOOPS                                           |                              |      1 |        |  21808 |00:13:10.26 |    3579K|  
|   5 |      NESTED LOOPS                                          |                              |      1 |      1 |  21808 |00:13:10.11 |    3578K|  
|   6 |       NESTED LOOPS OUTER                                   |                              |      1 |      1 |  21808 |00:13:09.90 |    3576K|  
|   7 |        NESTED LOOPS OUTER                                  |                              |      1 |      1 |  21808 |00:13:09.25 |    3501K|  
|   8 |         NESTED LOOPS OUTER                                 |                              |      1 |      1 |  21808 |00:13:08.48 |    3426K|  
|   9 |          NESTED LOOPS OUTER                                |                              |      1 |      1 |  21808 |00:13:07.66 |    3333K|  
|  10 |           NESTED LOOPS OUTER                               |                              |      1 |      1 |  21808 |00:13:06.92 |    3258K|  
|  11 |            NESTED LOOPS OUTER                              |                              |      1 |      1 |  21808 |00:13:06.08 |    3183K|  
|  12 |             NESTED LOOPS                                   |                              |      1 |      1 |  21808 |00:13:04.69 |    3090K|  
|  13 |              NESTED LOOPS                                  |                              |      1 |      1 |  21808 |00:13:05.75 |    3026K|  
|  14 |               NESTED LOOPS                                 |                              |      1 |      1 |  21808 |00:13:03.30 |    2961K|  
|  15 |                NESTED LOOPS                                |                              |      1 |      1 |  33459 |00:00:04.33 |    1123K|  
|  16 |                 NESTED LOOPS                               |                              |      1 |    351 |  33459 |00:00:03.67 |    1025K|  
|  17 |                  NESTED LOOPS                              |                              |      1 |    351 |  33459 |00:00:03.06 |     926K|  
|  18 |                   NESTED LOOPS                             |                              |      1 |    351 |  33459 |00:00:02.47 |     827K|  
|* 19 |                    HASH JOIN                               |                              |      1 |    351 |  33459 |00:00:01.90 |     730K|  
|  20 |                     TABLE ACCESS FULL                      | GL_LEDGERS                   |      1 |     38 |     39 |00:00:00.01 |      15 |  
|  21 |                     NESTED LOOPS                           |                              |      1 |        |  33459 |00:00:01.75 |     730K|  
|  22 |                      NESTED LOOPS                          |                              |      1 |    351 |  33459 |00:00:01.44 |     696K|  
|  23 |                       NESTED LOOPS                         |                              |      1 |    351 |  33459 |00:00:01.11 |     646K|  
|* 24 |                        HASH JOIN                           |                              |      1 |    385 |  33459 |00:00:00.40 |     526K|  
|* 25 |                         TABLE ACCESS BY INDEX ROWID BATCHED| GL_CODE_COMBINATIONS         |      1 |     35 |      1 |00:00:00.01 |     108 |  
|* 26 |                          INDEX RANGE SCAN                  | GL_CODE_COMBINATIONS_N2      |      1 |    499 |     77 |00:00:00.01 |       3 |  
|* 27 |                         TABLE ACCESS BY INDEX ROWID BATCHED| RA_CUST_TRX_LINE_GL_DIST_ALL |      1 |    651K|   1458K|00:00:02.22 |     526K|  
|* 28 |                          INDEX RANGE SCAN                  | RA_CUST_TRX_LINE_GL_DIST_N2  |      1 |    728K|   1820K|00:00:01.60 |   11147 |  
|* 29 |                        TABLE ACCESS BY INDEX ROWID         | RA_CUSTOMER_TRX_LINES_ALL    |  33459 |      1 |  33459 |00:00:00.53 |     119K|  
|* 30 |                         INDEX UNIQUE SCAN                  | RA_CUSTOMER_TRX_LINES_U1     |  33459 |      1 |  33459 |00:00:00.31 |   86364 |  
|* 31 |                       INDEX UNIQUE SCAN                    | RA_CUSTOMER_TRX_U1           |  33459 |      1 |  33459 |00:00:00.21 |   49850 |  
|  32 |                      TABLE ACCESS BY INDEX ROWID           | RA_CUSTOMER_TRX_ALL          |  33459 |      1 |  33459 |00:00:00.20 |   33459 |  
|  33 |                    TABLE ACCESS BY INDEX ROWID             | HZ_CUST_ACCOUNTS             |  33459 |      1 |  33459 |00:00:00.42 |   97887 |  
|* 34 |                     INDEX UNIQUE SCAN                      | HZ_CUST_ACCOUNTS_U1          |  33459 |      1 |  33459 |00:00:00.24 |   64428 |  
|  35 |                   TABLE ACCESS BY INDEX ROWID              | HZ_PARTIES                   |  33459 |      1 |  33459 |00:00:00.44 |   98783 |  
|* 36 |                    INDEX UNIQUE SCAN                       | HZ_PARTIES_U1                |  33459 |      1 |  33459 |00:00:00.26 |   65175 |  
|  37 |                  TABLE ACCESS BY INDEX ROWID               | HZ_CUST_SITE_USES_ALL        |  33459 |      1 |  33459 |00:00:00.46 |   98374 |  
|* 38 |                   INDEX UNIQUE SCAN                        | HZ_CUST_SITE_USES_U1         |  33459 |      1 |  33459 |00:00:00.28 |   64915 |  
|* 39 |                 TABLE ACCESS BY INDEX ROWID                | HZ_CUST_ACCT_SITES_ALL       |  33459 |      1 |  33459 |00:00:00.45 |   98195 |  
|* 40 |                  INDEX UNIQUE SCAN                         | HZ_CUST_ACCT_SITES_U1        |  33459 |      1 |  33459 |00:00:00.26 |   64736 |  
|  41 |                TABLE ACCESS BY INDEX ROWID BATCHED         | GL_DAILY_RATES               |  33459 |      1 |  21808 |00:12:44.59 |    1838K|  
|* 42 |                 INDEX RANGE SCAN                           | GL_DAILY_RATES_U1            |  33459 |      1 |  21808 |00:13:08.16 |    1837K|  
|  43 |               TABLE ACCESS BY INDEX ROWID                  | HZ_PARTY_SITES               |  21808 |      1 |  21808 |00:00:00.35 |   64339 |  
|* 44 |                INDEX UNIQUE SCAN                           | HZ_PARTY_SITES_U1            |  21808 |      1 |  21808 |00:00:00.23 |   42531 |  
|  45 |              TABLE ACCESS BY INDEX ROWID                   | HZ_LOCATIONS                 |  21808 |      1 |  21808 |00:00:00.33 |   64353 |  
|* 46 |               INDEX UNIQUE SCAN                            | HZ_LOCATIONS_U1              |  21808 |      1 |  21808 |00:00:00.18 |   42545 |  
|  47 |             VIEW PUSHED PREDICATE                          | VW_SSQ_1                     |  21808 |      1 |  21808 |00:00:01.17 |   93476 |  
|  48 |              SORT GROUP BY                                 |                              |  21808 |      1 |  21808 |00:00:01.06 |   93476 |  
|  49 |               TABLE ACCESS BY INDEX ROWID BATCHED          | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |     16 |    145K|00:00:00.84 |   93476 |  
|* 50 |                INDEX RANGE SCAN                            | XXC_CUSTOMER_GETPAID         |  21808 |     16 |    145K|00:00:00.36 |   59938 |  
|  51 |            VIEW PUSHED PREDICATE                           | VW_SSQ_2                     |  21808 |      1 |  21808 |00:00:00.69 |   74433 |  
|  52 |             SORT GROUP BY                                  |                              |  21808 |      1 |  21808 |00:00:00.59 |   74433 |  
|  53 |              TABLE ACCESS BY INDEX ROWID BATCHED           | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  92201 |00:00:00.49 |   74433 |  
|* 54 |               INDEX RANGE SCAN                             | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  92201 |00:00:00.24 |   59903 |  
|  55 |           VIEW PUSHED PREDICATE                            | VW_SSQ_3                     |  21808 |      1 |  21808 |00:00:00.61 |   74852 |  
|  56 |            SORT GROUP BY                                   |                              |  21808 |      1 |  21808 |00:00:00.51 |   74852 |  
|  57 |             TABLE ACCESS BY INDEX ROWID BATCHED            | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  53060 |00:00:00.38 |   74852 |  
|* 58 |              INDEX RANGE SCAN                              | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  53060 |00:00:00.19 |   59148 |  
|  59 |          VIEW PUSHED PREDICATE                             | VW_SSQ_4                     |  21808 |      1 |  21808 |00:00:00.70 |   93490 |  
|  60 |           SORT GROUP BY                                    |                              |  21808 |      1 |  21808 |00:00:00.61 |   93490 |  
|  61 |            TABLE ACCESS BY INDEX ROWID BATCHED             | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |     16 |    145K|00:00:00.63 |   93490 |  
|* 62 |             INDEX RANGE SCAN                               | XXC_CUSTOMER_GETPAID         |  21808 |     16 |    145K|00:00:00.25 |   59950 |  
|  63 |         VIEW PUSHED PREDICATE                              | VW_SSQ_5                     |  21808 |      1 |  21808 |00:00:00.63 |   74427 |  
|  64 |          SORT GROUP BY                                     |                              |  21808 |      1 |  21808 |00:00:00.54 |   74427 |  
|  65 |           TABLE ACCESS BY INDEX ROWID BATCHED              | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  92201 |00:00:00.44 |   74427 |  
|* 66 |            INDEX RANGE SCAN                                | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  92201 |00:00:00.21 |   59900 |  
|  67 |        VIEW PUSHED PREDICATE                               | VW_SSQ_6                     |  21808 |      1 |  21808 |00:00:00.59 |   74846 |  
|  68 |         SORT GROUP BY                                      |                              |  21808 |      1 |  21808 |00:00:00.50 |   74846 |  
|  69 |          TABLE ACCESS BY INDEX ROWID BATCHED               | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  53060 |00:00:00.35 |   74846 |  
|* 70 |           INDEX RANGE SCAN                                 | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  53060 |00:00:00.17 |   59144 |  
|* 71 |       INDEX RANGE SCAN                                     | HZ_GEOGRAPHIES_N11           |  21808 |   5812 |  21808 |00:00:00.13 |    2684 |  
|  72 |      TABLE ACCESS BY INDEX ROWID                           | HZ_GEOGRAPHIES               |  21808 |    168 |  21808 |00:00:00.07 |     620 |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  

Let’s start by raising some concerns about the quality of information available.

First, the OP says it takes 14 minutes to return 30,000 rows: but the top line of the plan says it has taken 13 minutes and 20 seconds to return the first 501 rows, and if we look a little further down the plan operation 3 (Hash Group By) reports 00:13:20.15 to aggregate down to 19,827 rows. So this half of the plan cannot return more than 19,827 rows, and the half I have discarded (for the moment) must be returning the other 10,000+ rows. The information we have is incomplete.

Of course you may think that whatever the rest of the plan does is fairly irrelevant – it’s only going to be responsible for at most another 40 seconds of processing – except my previous experience of rowsource execution statistics tells me that when you do a large number of small operations the times reported can be subject to fairly large rounding errors and that enabling the measurement can increase the execution time by a factor of three or four. It’s perfectly feasible that this half of the query is actually the faster half under normal run-time circumstances but runs much more slowly (with a much higher level of CPU utilisation) when rowsource execution stats is enabled. So let’s not get too confident.

With that warning in mind, what can we see in this half of the plan?

Big picture: the inline scalar subqueries have disappeared. In 12c the optimizer can unnest scalar subqueries in the select list and turn them into outer joins, and we can see that there are 6 “Nested Loop Outer” operations, corresponding to 6 “View Pushed Predicate” operations against views labelled VW_SSQ_1 through to VW_SSQ_6 (SSQ = Scalar Sub Query ?). This goes back to my earlier comment – a person could probably rewrite the 6 scalar subqueries as a single aggregate view in the from clause: the optimizer isn’t quite clever enough to manage that in this case, but in simpler cases it might be able to do exactly that.

Big picture 2: most of the 13 minutes 20 seconds appears at operation 14 as it processes the 33,459 rows supplied to it from the 4.33 seconds of work done by operation 15 and its descendants. Reducing this part of the execution plan to the smallest relevant section we get the following:

-----------------------------------------------------------------------------------------------------------------------------------------------------  
| Id  | Operation                                                  | Name                         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  
|  14 |               NESTED LOOPS                                 |                              |      1 |      1 |  21808 |00:13:03.30 |    2961K|  
|  15 |                NESTED LOOPS                                |                              |      1 |      1 |  33459 |00:00:04.33 |    1123K|  
|  41 |                TABLE ACCESS BY INDEX ROWID BATCHED         | GL_DAILY_RATES               |  33459 |      1 |  21808 |00:12:44.59 |    1838K|  
|* 42 |                 INDEX RANGE SCAN                           | GL_DAILY_RATES_U1            |  33459 |      1 |  21808 |00:13:08.16 |    1837K|  
-----------------------------------------------------------------------------------------------------------------------------------------------------  

For each row supplied by operation 15 Oracle calls operation 41, which calls operation 42 to do an index range scan to supply a set of rowids so that operation 41 can access a table and return rows. Apparently the total time spent by operation 41 waiting for operation 42 to return rowids and then doing its own work is 12 minutes 44 seconds, while the range scans alone (all 33,459 of them) take 13 minutes and 8 seconds. Remember, though, that “lots of small operations = scope of rounding errors” when you look at these timings. Despite the inconsistency between the timings for operations 41 and 42 it’s reasonable to conclude that between them that’s where most of the execution time went.

Two questions – (a) can we refine our analysis of how the time is split between the two operations and (b) why do these lines take so much time.

Check the Starts and the A-rows: (reminder: for comparison, we expect A-rows to be approximately E-rows * Starts) for both operations we see 33,459 starts and 21,808 rows. The index range scans return (on average) a single rowid about two-thirds of the time, and every time a range scan returns a rowid the corresponding row is returned from the table (If you check the Id column there’s no asterisk on operation 41 so no extra predicate is applied as Oracle accesses the table row – but even if there were an extra predicate we’d still be happy to infer that if 21,808 rowids returned from operation 42 turned into 21,808 rows returned from the table then there are no wasted accesses to the table).

Now look at the Buffers for the index range scan – 1.837M: that’s roughly 56 buffers per range scan – that’s a lot of index to range through to find one rowid, which is a good clue that perhaps we do a lot of work with each Start and really do use up a lot of CPU on this operation. Let’s see what the Predicate Section of the plan tells us about this range scan:


Predicate Information (identified by operation id):  
---------------------------------------------------  
  42 - access("GDR"."FROM_CURRENCY"="RCT"."INVOICE_CURRENCY_CODE" AND "GDR"."TO_CURRENCY"="GLL"."CURRENCY_CODE" AND   
              "GDR"."CONVERSION_TYPE"='Corporate')  
       filter(("GDR"."CONVERSION_TYPE"='Corporate' AND TO_DATE(INTERNAL_FUNCTION("GDR"."CONVERSION_DATE"))=TO_DATE(INTERNAL_FUNCTION("RCTD"."  
              GL_DATE"))))  

We have access predicates (things which narrow down the number of leaf blocks that we walk through) and filter predicates (things we do to test every key entry we access). Notably the gdr.conversion_type is a filter predicate as well as an access predicate – and that suggests that our set of predicates has “skipped over” a column in the index: from_currency and to_currency might be the first two columns in the index, but conversion_type is then NOT the third.

More significantly, though, there’s a column called conversion_date in the index (maybe that’s column 3 in the index – it feels like it ought to be); but for every index entry we’ve selected from the 56 blocks we walk through we do some sort of internal conversion (or un-translated transformation) to the column then convert the result to a date to compare it with another date (similarly processed from an earlier operation). What is that “internal function” – let’s check the query:


AND             gdr.from_currency = rct.invoice_currency_code
AND             gdr.to_currency = gll.currency_code
AND             gdr.conversion_type = 'Corporate'
AND             to_date(gdr.conversion_date) = to_date(rctd.gl_date)
AND             rctd.gl_date BETWEEN To_date ('01-JAN-2018', 'DD-MON-YYYY') AND  To_date ('31-JAN-2018', 'DD-MON-YYYY')

(I’ve swapped the order of a couple of lines to highlight a detail).

The filter predicate is comparing gdr.conversion_date with rctd.gl_date – and we can probably assume that both columns really are dates because (a) the word “date” is in their names and (b) the rctd.gl_date is being compared with genuine date values in the next predicate down (and – though I haven’t shown it – the way the plan reports the next predicate proves that the column really is a date datatype).

So the predicate in the SQL applies the to_date() function to two columns that are dates – which means the optimizer has to convert the date columns to some default character format and then convert them back to dates. The “internal function” is a to_char() call. Conversions between date and character formats are CPU-intensive, and we’re doing a double conversion (to_date(to_char(column_value)) to every data value in roughly 56 blocks of an index each time we call that line of the plan. It’s not surprising we spend a lot of time in that line.

Initial strategy:

Check the column types for those two columns; if they are both date types, decide whether or not the predicate could be modified to a simple gdr.conversion_date = rctd.gl_date (though it’s possible that something slightly more sophisticated should be used), but whatever you do, avoid the redundant conversion through character format.

Ideally, of course, if we can avoid this conversion we may find that Oracle can be more precise in its range scan through the index, but we may still find that we do a large range scan even if we do manage to do it a little more efficiently, in which case we may want to see if there is an alternative index which will allow us to pick the one rowid we need from the index without visiting so many leaf blocks in the index.

Warning

Simply eliminating the to_date() calls may change the results. Here’s a demonstration of how nasty things happen when you apply to_date() to a date:


SQL> desc t1
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 D1                                     DATE
 D2                                     DATE

SQL> insert into t1 values(sysdate, sysdate + 10/86400);

1 row created.

SQL> select * from t1 where d1 = d2;

no rows selected

SQL> select * from t1 where to_date(d1) = to_date(d2);

D1        D2
--------- ---------
30-SEP-18 30-SEP-18

1 row selected.

SQL> alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';

Session altered.

SQL> select * from t1 where to_date(d1) = to_date(d2);

no rows selected

Different users could get different results because they have different settings for their nls_date_format.

Reminder

I started my analysis with two comments about the quality of information – first, we don’t really know whether or not this half of the union all would be responsible for most of the time if rowsource execution statistics were not enabled; secondly, a large number of small operations can lead to a lot of rounding errors in timing. There are six occurrences of unnested scalar subqueries which are all called 21,808 times – and the recorded time for all 6 of them is remarkably small given the number of executions, even when allowing for the precision with which they operate; it’s possible that these subqueries take a larger fraction of the total time than the plan indicates, in which case it might become necessary (rather than just nice) to do a manual unnesting and reduce the number of inline views to 3 (one for each line_type), 2 (one with, one without, conversion_rate) or just one.

Footnote

Once again I’ve spent a couple of hours writing notes to explain the thoughts that went through my mind in roughly 10 minutes of reading the original posting. It’s a measure really of how many bits of information you can pull together, and possibly discard, very quickly once you understand how many things the optimizer is capable of doing and how the execution plan tries to show you how a statement was (or will be) handled.

Update (5th Oct 2018)

Another way of looking for the best strategy for tuning this statement, given the available information, is this:

Where, in the sequence of events, does the data volume we’re processing drop to the right scale for the output? If we don’t drop to the right scale very early in the plan execution then we may need to re-arrange the order in which we visit tables; if we are operating at the right volume almost immediately then there’s a good chance that we’ve started the right way. Take a look at the first few lines of this plan (remembering that the query was interrupted before returning the whole result set):

-----------------------------------------------------------------------------------------------------------------------------------------------------  
| Id  | Operation                                                  | Name                         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT                                           |                              |      1 |        |    501 |00:13:20.17 |    3579K|  
|   1 |  UNION-ALL                                                 |                              |      1 |        |    501 |00:13:20.17 |    3579K|  
|   2 |   HASH UNIQUE                                              |                              |      1 |      1 |    501 |00:13:20.17 |    3579K|  
|   3 |    HASH GROUP BY                                           |                              |      1 |      1 |  19827 |00:13:20.15 |    3579K|  
|   4 |     NESTED LOOPS                                           |                              |      1 |        |  21808 |00:13:10.26 |    3579K|
-----------------------------------------------------------------------------------------------------------------------------------------------------    

At line 4 we generate 21,808 rows which we aggregate down to 19,827, which we then hash down to distinct values – the original user told us that the query returns 30,000 rows so we shouldn’t assume that the uniqueness requirement has reduced 19,827 rows to the 501 reported so far; there may be more to come. What we can say about these numbers, particularly lines 3 and 4, is that prior to the aggregation we need to find about 22,000 rows and carry them through the rest of the plan.

Now look at lines 24 – 28 where the heavy duty action starts (the first physical operation is actually at lines 19/20 where (thanks to swapping join inputs) Oracle scans the gl_ledger table and hashes it into memory in anticipation of incoming probe data – but that’s a tiny blip on the way to the big join):

-----------------------------------------------------------------------------------------------------------------------------------------------------  
| Id  | Operation                                                  | Name                         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  
|* 24 |                        HASH JOIN                           |                              |      1 |    385 |  33459 |00:00:00.40 |     526K|  
|* 25 |                         TABLE ACCESS BY INDEX ROWID BATCHED| GL_CODE_COMBINATIONS         |      1 |     35 |      1 |00:00:00.01 |     108 |  
|* 26 |                          INDEX RANGE SCAN                  | GL_CODE_COMBINATIONS_N2      |      1 |    499 |     77 |00:00:00.01 |       3 |  
|* 27 |                         TABLE ACCESS BY INDEX ROWID BATCHED| RA_CUST_TRX_LINE_GL_DIST_ALL |      1 |    651K|   1458K|00:00:02.22 |     526K|  
|* 28 |                          INDEX RANGE SCAN                  | RA_CUST_TRX_LINE_GL_DIST_N2  |      1 |    728K|   1820K|00:00:01.60 |   11147 | 
-----------------------------------------------------------------------------------------------------------------------------------------------------    

The important thing we see here is that the very first hash join identifies 33,459 rows: we’re immediately into the right ball-park for the final output. The timings are a bit suspect – I really don’t like seeing the time for the hash join (0.4 seconds) reported as smaller than the time for one of its direct child operations (the 2.22 seconds at operation 27) – but this bit of the work seems to get to the right scale very quickly: this looks as if it’s likely to be a good way to start the final join order.

We might question whether the optimizer has been wise to use an index range scan to identify 1.45 million rows in a table (probing it 1.82 million times). Maybe that was quick because all the data had previously been buffered and perhaps this range scan will be extremely slow on a busy production system; maybe a tablescan would be better; maybe there’s a way of getting to this big table through a different join order that means we only visit it roughly 33,459 times through an index that identifies exactly the rows we really need. Without good knowledge of what the data looks like (and without understanding what the query is supposed to achieve and how often it runs) we can only look at the supplied execution plan and work out where the time went and whether that suggests the plan is doing roughly the right thing or doing something that is clearly silly. This plan looks like a reasonable starting point with one minor (we hope) glitch around line 42 that we identified earlier on.

 

connor_mc_d's picture

My APEX 18.2 upgrade in a nutshell

As always, you should read the Installation/Upgrade manual from top to bottom before upgrading any piece of software, and be aware of all of the pre-requisites and processes.  But for me, my Application Express 18.2 upgrade was as simple as:

  • Download Application Express
  • Unzip to my target location
  • sqlplus / as sysdba
    • SQL> @apexins.sql SYSAUX SYSAUX TEMP /i/
  • Go to my ORDS installation
    • java -jar ords.war validate
  • sqlplus / as sysdba

And voila! A freshly upgraded Application Express in a matter of minutes…
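If you want a quick confirmation that the upgrade really did land (this check isn’t one of the steps above, just a convenient sanity query), the data dictionary will tell you which version is now installed:

select version_no
from   apex_release;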


Sweet!

Jonathan Lewis's picture

Hacking for Skew

In my presentation to the UKOUG SIG yesterday “Struggling with Statistics – part 2” I described a problem that I wrote about a few months ago: when you join a fact table with a massively skewed distribution on one of the surrogate key columns to a dimension holding the unique list of keys and descriptions, a query against a description “loses” the skew. Here’s a demo of the problem that’s a little simpler than the one in the previous article.


rem
rem     Script:         bitmap_join_histogram.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2016
rem     Updated:        Sep 2018
rem 

execute dbms_random.seed(0)

create table facts
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual 
        connect by 
                level <= 1e4 --> comment to avoid wordpress format issue
)
select
        rownum                                  id,
        trunc(3 * abs(dbms_random.normal))      id_status,
        lpad(rownum,10,'0')                     v1,
        lpad('x',100,'x')                       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 --> comment to avoid wordpress format issue
;

alter table facts add constraint fct_pk primary key(id);
alter table facts modify id_status not null;

create table statuses
as
select
        id,
        chr(65 + id)            status_code,
        rpad('x',100,'x')       description
from    (
        select
                distinct(id_status)             id
        from
                facts
        )
;

alter table statuses modify status_code not null;

alter table statuses add constraint sta_pk primary key (id);
alter table facts add constraint fct_fk_sta foreign key (id_status) references statuses(id);

create bitmap index fct_b1 on facts(id_status);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'facts',
                method_opt       => 'for all columns size skewonly'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'statuses',
                method_opt       => 'for all columns size 254'
        );
end;
/

The definition of the facts.id_status column means I get a nice skewing effect on the data and this is what my data looks like:


select id_status, count(*) from facts group by id_status order by id_status;

 ID_STATUS   COUNT(*)
---------- ----------
         0      26050
         1      23595
         2      18995
         3      13415
         4       8382
         5       4960
         6       2643
         7       1202
         8        490
         9        194
        10         55
        11         17
        12          2

13 rows selected.

The statuses table translates the numbers 0 – 12 into the letters ‘A’ – ‘M’.
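If you want to see that mapping for yourself, a one-line query against the statuses table will do it (this isn’t part of the original script):

select id, status_code
from   statuses
order by
        id
;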

A quick check will show you that there are 55 rows for id_status = 10, which means 55 rows for status_code = ‘K’. So what happens when we write the two queries that should show us these results? I don’t really care what the execution plans are at this point; I’m interested only in the optimizer’s estimate of cardinality – so here are two queries, each followed by its execution plan:


select
        sum(fct.id)
from
        facts   fct
where
        fct.id_status = 10
;


-----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |        |     1 |     8 |    12   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE                      |        |     1 |     8 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| FACTS  |    55 |   440 |    12   (0)| 00:00:01 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |        |       |       |            |          |
|*  4 |     BITMAP INDEX SINGLE VALUE        | FCT_B1 |       |       |            |          |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("FCT"."ID_STATUS"=10)


select
        sum(fct.id)
from
        facts           fct,
        statuses        sta
where
        fct.id_status = sta.id
and     sta.status_code = 'K'
;

--------------------------------------------------------------------------------
| Id  | Operation           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |          |     1 |    13 |   233   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE     |          |     1 |    13 |            |          |
|*  2 |   HASH JOIN         |          |  7692 | 99996 |   233   (4)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| STATUSES |     1 |     5 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| FACTS    |   100K|   781K|   229   (3)| 00:00:01 |
--------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("FCT"."ID_STATUS"="STA"."ID")
   3 - filter("STA"."STATUS_CODE"='K')

The estimated cardinality for the query against the base column reflects the value 55 from the histogram, but the estimated cardinality of the join is 7,692 – which is num_rows(facts) / num_distinct(id_status). Oracle has lost information about the skew. There is a way to get Oracle to produce a correct estimate (shown in the previous article) by rewriting the join as an IN subquery with the (undocumented) “precompute_subquery” hint, but there is an alternative which David Kurtz hypothesized in a conversation after the presentation was over (in fact someone else had described their use of exactly his suggested approach in a comment on a much older blog note about this problem): take the histogram from the id_status column on the facts table and “apply it” to the status_code column on the statuses table. In discussion with David I expressed the opinion that this probably shouldn’t work, and it wasn’t really a bit of fakery I’d want to apply to a production system – but we both tried it when we got home … with differing degrees of success.

Here’s a piece of code that I inserted into my script immediately after gathering stats on the statuses table. I’ll explain the details below as it makes a couple of assumptions that need to be pointed out:


declare

        srec                    dbms_stats.statrec;

        m_distcnt               number;
        m_density               number;
        m_nullcnt               number;
        m_avgclen               number;

        c_array                 dbms_stats.chararray;

begin

        dbms_stats.get_column_stats(
                ownname         => 'test_user',
                tabname         => 'facts',
                colname         => 'id_status',
                distcnt         => m_distcnt,
                density         => m_density,
                nullcnt         => m_nullcnt,
                srec            => srec,
                avgclen         => m_avgclen
        ); 

        srec.bkvals := dbms_stats.numarray();
        c_array     := dbms_stats.chararray();

        for r in (
                select  stt.status_code, count(*) ct
                from    facts fct, statuses stt
                where   stt.id = fct.id_status
                group by
                        stt.status_code
                order by
                        stt.status_code
        ) loop

                c_array.extend;
                c_array(c_array.count) := r.status_code;
                srec.bkvals.extend;
                srec.bkvals(srec.bkvals.count) := r.ct;

        end loop;

        dbms_stats.prepare_column_values(srec, c_array);

        dbms_stats.set_column_stats(
                ownname         => 'test_user',
                tabname         => 'statuses',
                colname         => 'status_code',
                distcnt         => m_distcnt,
                density         => m_density,
                nullcnt         => m_nullcnt,
                srec            => srec,
                avgclen         => m_avgclen
        ); 

end;
/

alter system flush shared_pool;

The code isn’t intended to be efficient, and I’ve been a bit lazy in setting up the content.

The first step gets the column stats from facts.id_status – and I know that I’ve got a frequency histogram that covers exactly the right number of distinct values on that column, so almost everything is set up correctly to copy the stats across to statuses.status_code. The exceptions are that one column is numeric and the other is character, and (although I know it’s true because of the way I defined the status_code values) I need to ensure that the bucket values I write for status_code are arranged in alphabetic order of status_code.

So my second step is to run a query against the facts table to get the counts of status_code in alphabetical order and copy the results in order into a pair of arrays – one being a standalone array of the type defined in the dbms_stats package as an array of character types, the other being the array of bucket values that already exists in the stats record for the facts.id_status column that I’ve pulled into memory. (The bucket values array is stored as cumulative frequency values, so I do have to overwrite it with the simple frequency values at this point).

Finally I “prepare column values” and “set column stats” into the correct column, and the job is done. The flush of the shared pool is there to avoid any accidents of cursors surviving previous tests and causing confusion.
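If you want to check that the faked histogram really is in place before running any queries, the standard dictionary view will show you the buckets (again, this isn’t part of the original script):

select
        endpoint_actual_value,
        endpoint_number
from
        user_tab_histograms
where
        table_name  = 'STATUSES'
and     column_name = 'STATUS_CODE'
order by
        endpoint_number
;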

So what happens when I run a couple of queries with these faked stats in place ?

set autotrace traceonly explain

select  
        sum(fct.id)
from
        facts           fct,
        statuses        sta
where
        fct.id_status = sta.id
and     sta.status_code = 'K'
;


--------------------------------------------------------------------------------
| Id  | Operation           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |          |     1 |    14 |   233   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE     |          |     1 |    14 |            |          |
|*  2 |   HASH JOIN         |          |    55 |   770 |   233   (4)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| STATUSES |     1 |     6 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| FACTS    |   100K|   781K|   229   (3)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("FCT"."ID_STATUS"="STA"."ID")
   3 - filter("STA"."STATUS_CODE"='K')



select
        sum(fct.id)
from
        facts           fct,
        statuses        sta
where
        fct.id_status = sta.id
and     sta.status_code = 'D'
;


--------------------------------------------------------------------------------
| Id  | Operation           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |          |     1 |    14 |   233   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE     |          |     1 |    14 |            |          |
|*  2 |   HASH JOIN         |          | 13415 |   183K|   233   (4)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| STATUSES |     2 |    12 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| FACTS    |   100K|   781K|   229   (3)| 00:00:01 |
--------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("FCT"."ID_STATUS"="STA"."ID")
   3 - filter("STA"."STATUS_CODE"='D')

Querying for ‘K’ the prediction is 55 rows; querying for ‘D’ the prediction is 13,415 rows – both estimates are exactly right. Wow !!!

Problem – that’s not what David Kurtz saw. In an email to me he said: “To my surprise, if I fake a histogram on the dimension table using the skew on the join column from the fact table I do get the correct number of rows calculated in the execution plan (provided it is less than the value if the histogram was not present)”. To make that concrete – when he queried for ‘K’ he got the correct prediction; when he queried for ‘D’ he was back to a prediction of 7,692. Looking at the report of the actual data, he’d get the right prediction for codes ‘F’ to ‘M’ and the wrong prediction for codes ‘A’ to ‘E’.

So what went wrong (and with whom) ?

When I run up new tests I tend to test Oracle versions in the order 12.1.0.2, then 11.2.0.4, then 12.2.0.1, then 18.3.0.0 – it’s the order of popularity that I currently see. So I was running my test on 12.1.0.2; David was running his test on 18.3.0.0. So I jumped a step and ran my test on 12.2.0.1: here are my results when querying for status_code = ‘D’:


--------------------------------------------------------------------------------
| Id  | Operation           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |          |     1 |    14 |   233   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE     |          |     1 |    14 |            |          |
|*  2 |   HASH JOIN         |          |  7692 |   105K|   233   (4)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| STATUSES |     1 |     6 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| FACTS    |   100K|   781K|   229   (3)| 00:00:01 |
--------------------------------------------------------------------------------



Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("FCT"."ID_STATUS"="STA"."ID")
   3 - filter("STA"."STATUS_CODE"='D')

As David has seen with 18.3, Oracle used the num_distinct to estimate the cardinality for ‘D’. (It still used the value indicated by the histogram for ‘K’.) When I set the optimizer_features_enable parameter back to 12.1.0.2 the cardinality estimate for ‘D’ went back to 13,415 – so it looks as if this is a deliberate piece of coding. Between the two settings 172 fix controls and 31 optimizer state parameters change, but none of the more likely looking candidates had any effect when I tried testing them separately; possibly there’s a new sanity check when the number of rows recorded for the table is a long way off the total histogram bucket count.
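If you want to repeat the comparison, this is roughly the shape of the test – a sketch rather than the exact script I ran:

set autotrace traceonly explain

alter session set optimizer_features_enable = '12.1.0.2';

select
        sum(fct.id)
from
        facts           fct,
        statuses        sta
where
        fct.id_status = sta.id
and     sta.status_code = 'D'
;

alter session set optimizer_features_enable = '12.2.0.1';

set autotrace off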

I took a quick look at the 10053 trace in 12.2, with and without the change to optimizer_features_enable. The key difference was in the single table access path analysis – which didn’t give me any further clues.

With optimizer_features_enable = 12.1.0.2
=========================================
Access path analysis for STATUSES
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for STATUSES[STA]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE

 kkecdn: Single Table Predicate:"STA"."STATUS_CODE"='K'
  Estimated selectivity: 5.5000e-04 , endpoint value predicate, col: #2

Access path analysis for STATUSES
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for STATUSES[STA]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE

 kkecdn: Single Table Predicate:"STA"."STATUS_CODE"='D'
  Estimated selectivity: 0.134150 , endpoint value predicate, col: #2


With optimizer_features_enable defaulting to 12.2.0.1
=====================================================
Access path analysis for STATUSES
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for STATUSES[STA]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE

 kkecdn: Single Table Predicate:"STA"."STATUS_CODE"='K'
  Estimated selectivity: 5.5000e-04 , endpoint value predicate, col: #2


Access path analysis for STATUSES
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for STATUSES[STA]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE

 kkecdn: Single Table Predicate:"STA"."STATUS_CODE"='D'
  Estimated selectivity: 0.076923 , endpoint value predicate, col: #2


Bottom line on this – there’s at least one person who already uses this method to work around the optimizer limitation; they need to be careful when they upgrade to 12.2 (or above) as the method no longer works in all cases.

connor_mc_d's picture

OpenWorld Monday…I’m so screwed

As I mentioned in a previous blog post, I whipped up a small Application Express application to let me plan out my OpenWorld activities. But there’s a small problem with being able to quickly and easily find things based on the schedule… There is too much to see!!!

I had a quick glance through the schedule just for Monday, and already I can see myself being very frustrated with all of the content that I am going to have to miss out on, simply because it clashes with other content I’d like to see.

And just to compound things, with any luck I’ll be able to score some time in the theatre in the demo grounds to give some lightning talks in both the morning and afternoon. More details on that soon!

But here is my wish list below – just smashed full of conflicts… sigh.

Monday morning

The New Auto-Upgrade for Oracle Databases [TRN4031]
Daniel Overby Hansen, Database Administrator, SimCorp
Mike Dietrich, Master Product Manager – Database Upgrades and Migrations, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 3004

which overlaps with

High Availability and Sharding Deep Dive with Next-Generation Oracle Database [TRN4032]
Wei Hu, Vice President of Product Development, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 3007

which overlaps with

Oracle Linux and Oracle VM VirtualBox: The Enterprise Development Platform [PRO4724]
Simon Coter, Director of Product Management – Oracle VM & VirtualBox, Oracle
Sergio Leunissen, Vice President of Product Management, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone South – Room 152

which overlaps with

A Day in the Life of a DBA in an Autonomous World [PRO4376]
Akshay Sangaonkar, Senior Principal Product Manager, Oracle
Karl Dias, Vice President, Database Manageability, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Marriott Marquis (Golden Gate Level) – Golden Gate C3

which overlaps with

Make AppDev Fun Again, with the Oracle RAD Stack [PRO4087]
Kris Rice, Senior Director, Oracle
Michael Hichwa, Software Development VP, Oracle
Joel Kallman, Senior Director, Software Development, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 3009

which overlaps with

Using Deep Learning and Neural Networks in Oracle Database 18c [BUS1891]
Brendan Tierney, Oralytics
Neil Chandler, Database Administrator, Chandler Systems Ltd
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 3001

which overlaps with

Analytic Functions: A Developer’s Best Friend [DEV6244]
Timothy Hall, DBA, Developer, Author, Trainer, oracle-base.com
Code One Tracks: Database, Big Data, and Data Science
Session Type: Developer Session
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 2003

 

Monday mid-morning

Oracle Linux and Oracle VM: Get Trained for Cloud, Hybrid, and On-Premises [TRN5828]
Avi Miller, Product Management Director, Oracle
Antoinette O’SULLIVAN, Director of Training and Documentation for Linux and Virtualization, Oracle
Monday, Oct 22, 10:30 a.m. – 11:15 a.m. | Moscone South – Room 154

which overlaps with

Why Citizen Developers Should Be Your New Best Friend [TRN4091]
David Peake, Senior Principal Product Manager, Oracle
Monday, Oct 22, 10:30 a.m. – 11:15 a.m. | Moscone West – Room 3009

which overlaps with

Oracle Autonomous Database Cloud [PKN3947]
Andrew Mendelsohn, Executive Vice President Database Server Technologies, Oracle
Monday, Oct 22, 11:00 a.m. – 12:15 p.m. | The Exchange @ Moscone South – The Arena

which overlaps with

3D: Docker for Database Developers [TIP1247]
Roel Hartman, Director, APEX Consulting
Monday, Oct 22, 11:30 a.m. – 12:15 p.m. | Moscone West – Room 3020

which overlaps with

Oracle Linux: State of the Penguin [PRO4720]
Wim Coekaerts, Senior Vice President, Operating Systems and Virtualization Engineering, Oracle
Monday, Oct 22, 11:30 a.m. – 12:15 p.m. | Moscone West – Room 2000

which overlaps with

50 Shades of Data: How, When, Why—Big, Relational, NoSQL, Elastic, Graph, Event [DEV4976]
Lucas Jellema, CTO, AMIS Services BV
Code One Tracks: Database, Big Data, and Data Science, Java Server-Side Development and Microservices
Session Type: Developer Session
Monday, Oct 22, 10:30 a.m. – 11:15 a.m. | Moscone West – Room 2007

which overlaps with

Microservices: Get Rid of Your DBA and Send the DB into Burnout [DEV5504]
Franck Pachot, Computing Engineer, CERN
Code One Tracks: Database, Big Data, and Data Science
Session Type: Developer Session
Monday, Oct 22, 10:30 a.m. – 11:15 a.m. | Moscone West – Room 2003

which overlaps with

REST-Enabled Neural Networks in Oracle Database 18c [DEV5026]
Brendan Tierney, Oralytics
Neil Chandler, Database Architect, Chandler Systems
Code One Tracks: Database, Big Data, and Data Science
Session Type: Developer Session
Monday, Oct 22, 11:30 a.m. – 12:15 p.m. | Moscone West – Room 2001

which overlaps with

Real-Life SQL Tuning: From Four Minutes to Eight Seconds in an Hour [DEV5668]
Liron Amitzi, Senior Database Consultant, Brillix LTD
Code One Tracks: Database, Big Data, and Data Science
Session Type: Developer Session
Monday, Oct 22, 11:30 a.m. – 12:15 p.m. | Moscone West – Room 2003

 

Monday afternoon

Oracle Database in an Asynchronous World [TRN5515]
Dominic Giles, Master Product Manager, Oracle
Monday, Oct 22, 3:45 p.m. – 4:30 p.m. | Moscone West – Room 3003

which overlaps with

A Day in the Life of a Real-World Performance Engineer [TRN4026]
Graham Wood, Software Architect, Oracle
Robert Carlin, Software Development Manager, Real World Performance, Oracle Database Development, Oracle
Mihajlo Tekic, Oracle
Monday, Oct 22, 3:45 p.m. – 4:30 p.m. | Moscone West – Room 3004

which overlaps with

Build a Web App with Oracle REST Data Services and Oracle JavaScript Extension Toolkit [HOL6325]
Jeff Smith, Senior Principal Product Manager, Oracle
Ashley Chen, Senior Product Manager, Oracle
Colm Divilly, Consulting Member of Technical Staff, Oracle
Elizabeth Saunders, Principal Technical Staff, Oracle
Monday, Oct 22, 3:45 p.m. – 4:45 p.m. | Marriott Marquis (Yerba Buena Level) – Salon 3/4

 

Monday late afternoon

Oracle Optimizer and the Road to the Latest Generation of Oracle Database [PRO4009]
Nigel Bayliss, Senior Principal Product Manager, Oracle
Monday, Oct 22, 4:45 p.m. – 5:30 p.m. | Moscone West – Room 3005

which overlaps with

Oracle Autonomous OLTP Database Cloud Overview and Roadmap [PRO3978]
Maria Colgan, Master Product Manager, Oracle
Juan Loaiza, Senior Vice President, Oracle
Monday, Oct 22, 4:45 p.m. – 5:30 p.m. | Moscone West – Room 3003

which overlaps with

Data Warehouse Like a Tech Startup with Oracle Autonomous Data Warehouse Cloud [BUS3194]
Mark Rittman, Managing Director, MJR Analytics ltd
Monday, Oct 22, 4:45 p.m. – 5:30 p.m. | Moscone West – Room 3006

 

Monday evening

Data Management in a Microservices World [TIP4175]
Gerald Venzl, Senior Principal Product Manager, Oracle
Monday, Oct 22, 5:45 p.m. – 6:30 p.m. | Moscone West – Room 3003

which overlaps with

18(ish) Things Developers Will Love About Oracle Database 18c [PRO4093]
Chris Saxon, Developer Advocate for SQL, Oracle
Monday, Oct 22, 5:45 p.m. – 6:30 p.m. | Moscone West – Room 3009

which overlaps with

What’s New for Oracle SQL Developer [PRO4058]
Jeff Smith, Senior Principal Product Manager, Oracle
Monday, Oct 22, 5:45 p.m. – 6:30 p.m. | Moscone West – Room 3006

which overlaps with

What Everyone Should Know About Oracle Partitioning [PRO4046]
Hermann Baer, Senior Director Product Management, Oracle
Monday, Oct 22, 5:45 p.m. – 6:30 p.m. | Moscone West – Room 3005

 
