Oracle

UUID (aka GUID) vs. Oracle sequence number

When you want to generate a unique number for a primary key, the most common idea (and the right one) is to get a number from an always-increasing generator. Oracle provides SEQUENCE for this purpose: it is optimized, easy to use, and scalable.

But some application developers have the idea of using a UUID (Universally Unique Identifier), also known as a GUID (Globally Unique Identifier). The main advantage is that you do not need a shared object like a sequence, because there is no need for a central authority: a UUID is generated from local identifiers (host, process, thread) and the value is large enough to make the probability of collisions negligible without any shared central coordination.
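
For illustration, here is a minimal sketch of both approaches in Oracle (table and sequence names are hypothetical); SYS_GUID() returns a 16-byte RAW that can serve as a GUID-style key:

    -- Sequence-based key: a shared, always-increasing generator
    CREATE SEQUENCE demo_seq CACHE 1000;
    CREATE TABLE demo_seq_tab (
      id      NUMBER DEFAULT demo_seq.NEXTVAL PRIMARY KEY,
      payload VARCHAR2(100)
    );

    -- GUID-based key: no shared object, generated locally for each row
    CREATE TABLE demo_guid_tab (
      id      RAW(16) DEFAULT SYS_GUID() PRIMARY KEY,
      payload VARCHAR2(100)
    );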

However, in the context of Oracle Database:

We still need indexes, and I will continue to commute by train even when self-driving cars become reality…

When the Autonomous Data Warehouse Cloud Service was announced, Oracle came up with the surprising idea that we do not need to create indexes, partitions, and materialized views for our analytic queries. Creating them was even blocked in ADW; this has recently been allowed again, but it is still not recommended. Automatic indexing is for ATP, and the message for ADW is: you don’t need indexes for your analytic queries.

In my opinion, even with the best performance for non-index access, we will still need index range scans, and that holds even when an index is not the fastest access path, because the fastest response time is not the first performance criterion for the end-user.

Full Table Scan vs. Index access

Here is a diagram I used to explain the cost of Full Table Scan vs. Index Access, depending on the number of rows to return.
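
To reproduce that comparison yourself, a hedged sketch (table, column, and index names are hypothetical) is to force each access path with a hint and compare the optimizer's costs for the same predicate:

    -- Force a full table scan and show its cost
    EXPLAIN PLAN FOR
      SELECT /*+ FULL(t) */ * FROM demo_tab t WHERE n BETWEEN 1 AND 100;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    -- Force the index range scan on the same predicate and compare
    EXPLAIN PLAN FOR
      SELECT /*+ INDEX(t demo_idx) */ * FROM demo_tab t WHERE n BETWEEN 1 AND 100;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);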

QC vs. PX

One last post before closing down for the Christmas break.

Here’s a little puzzle with a remarkably easy and obvious solution that Ivica Arsov presented at the UKOUG Tech2018 conference. It’s a brilliant little puzzle that makes a very important point, because it reminded me that most problems are easy and obvious only after you’ve seen them at least once. If you’ve done a load of testing and investigation into something it’s easy to forget that there may be many scenarios you haven’t even thought of testing – so when you see the next puzzle your mind follows all the things you’ve done previously and doesn’t think that you might be looking at something new.

Transitive Closure

This is a follow-up to a note I wrote nearly 12 years ago, looking at the problems of transitive closure (or absence thereof) from the opposite direction. Transitive closure gives the optimizer one way of generating new predicates from the predicates you supply in your where clause (or, in some cases, your constraints); but it’s a mechanism with some limitations. Consider the following pairs of predicates:
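
For example (hypothetical tables t1 and t2): given a join predicate and a constant predicate on the same column, the optimizer can close the triangle and generate a new predicate on the other table:

    -- Supplied predicates: t1.id = t2.id AND t1.id = 10
    SELECT *
    FROM   t1, t2
    WHERE  t1.id = t2.id
    AND    t1.id = 10;
    -- By transitive closure the optimizer can also apply t2.id = 10,
    -- which shows up in the predicate section of the execution plan.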

NULL predicate

People ask me from time to time if I’m going to write another book on the Cost Based Optimizer, and I think the answer has to be no: the product keeps growing so fast that it’s not possible to keep up, and there are always more and more little details that might have been around for years and finally show up when someone asks me a question about some little oddity I’ve never noticed before.

The difficulty with the “little oddities” is the amount of time you could spend trying to work out whether or not they matter, and whether it’s worth writing about them. Here’s a little example to show what I mean; first the data set:
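
As a hypothetical stand-in for the kind of data set involved, here is a table where a column is null in most rows; the optimizer's selectivity arithmetic excludes the nulls (it works from num_rows minus num_nulls), which is where this sort of oddity tends to hide:

    -- Hypothetical: a column that is null in 99% of the rows
    CREATE TABLE t1 AS
    SELECT CASE WHEN ROWNUM <= 100 THEN ROWNUM END AS sparse_col,
           RPAD('x', 100, 'x')                      AS padding
    FROM   dual
    CONNECT BY LEVEL <= 10000;

    EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'T1')

    -- The cardinality estimate here is based only on the 100 non-null values
    SELECT COUNT(*) FROM t1 WHERE sparse_col > 50;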

Oracle VPD as a safeguard for DML

A new blog post on the Databases at CERN blog about using VPD Row-Level Security (DBMS_RLS) as a safeguard for the privileged users who need to bypass the application and run SQL directly: https://db-blog.web.cern.ch/blog/franck-pachot/2018-12-oracle-vpd-safeguard-dml
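
The idea, in a hedged sketch (the APP schema, ORDERS table, and MYAPP module name are hypothetical): attach a DBMS_RLS policy to UPDATE and DELETE so that ad-hoc sessions see no rows unless they identify themselves the way the application does:

    -- Policy function: application sessions get no restriction (NULL predicate),
    -- everything else gets '1=0' and so cannot update or delete any row
    CREATE OR REPLACE FUNCTION app.dml_guard(p_schema VARCHAR2, p_object VARCHAR2)
      RETURN VARCHAR2 AS
    BEGIN
      RETURN CASE WHEN SYS_CONTEXT('USERENV', 'MODULE') = 'MYAPP'
                  THEN NULL
                  ELSE '1=0'
             END;
    END;
    /
    BEGIN
      DBMS_RLS.ADD_POLICY(
        object_schema   => 'APP',
        object_name     => 'ORDERS',
        policy_name     => 'DML_GUARD',
        function_schema => 'APP',
        policy_function => 'DML_GUARD',
        statement_types => 'UPDATE,DELETE');
    END;
    /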

CPU Capacity planning from OEM metrics

The CPU used by your Oracle Database is expensive, because it is the metric used for licensing. The better you can control and measure what you actually need, the more freedom you have to optimize costs. With instance caging, available in all editions, you can put a soft limit on CPU usage. This means that:

  • you run on a limited number of CPU threads, which after a while gives a good idea of what you really need, so you can forecast the capacity for a future consolidation.
  • you monitor ‘resmgr: cpu quantum’ waits, and if activity is high you can decide to scale up immediately, throttle some services, or do some query/design tuning (a minimal sketch of both steps follows this list).
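
A minimal sketch of both steps (the CPU_COUNT value of 8 is just an example): instance caging is active once a resource manager plan is set and CPU_COUNT is lowered, and the throttling then shows up as ‘resmgr: cpu quantum’ waits:

    -- Enable instance caging: any resource manager plan plus a CPU_COUNT limit
    ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN';
    ALTER SYSTEM SET cpu_count = 8;

    -- Check how many sessions are currently throttled by the cage
    SELECT COUNT(*)
    FROM   v$session
    WHERE  event = 'resmgr: cpu quantum' AND state = 'WAITING';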

In order to set instance caging, you need to define a value for CPU_COUNT according to the past activity. This post is the detail behind the following tweet:

Extreme Nulls

This note is a variant of a note that I wrote a few months ago about the impact of nulls on column groups. The effect showed up recently on a client site with a little camouflage that confused the issue for a while, so I thought it would be worth a repeat. We’ll start with a script to generate some test data:
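
For reference, a column group is an extended statistic on a combination of columns, so the optimizer knows the combined number of distinct values rather than multiplying individual selectivities; a minimal sketch of creating one (table and column names are hypothetical):

    -- Create a column group on (col1, col2) and gather stats for it
    SELECT DBMS_STATS.CREATE_EXTENDED_STATS(user, 'T1', '(COL1,COL2)') FROM dual;
    EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'T1')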

Minimal Oracle installation (and Docker image)

A new blog post on the Databases at CERN blog about some research on the minimal Oracle Database installation: https://db-blog.web.cern.ch/blog/franck-pachot/2018-12-minimal-oracle-1

And a bonus here: the Dockerfile which builds this minimal image. You need to build the Oracle XE image (oracle/database:18.4.0-xe) with the build files provided by Oracle (https://github.com/oracle/docker-images/tree/master/OracleDatabase/SingleInstance/dockerfiles/18.4.0), and this Dockerfile will copy only the necessary files:

Case Study

A recent thread on the ODC database forum highlighted a case where the optimizer was estimating 83,000 for a particular index full scan when the SQL Monitor output for the operation showed that it was returning 11,000,000 rows.

Apart from the minor detail that the OP didn’t specifically ask a question, the information supplied was pretty good. The OP had given us a list of bind variables, with values, and the SQL statement, followed by the text output of the Monitor’ed SQL and, to get the predicate section of the plan, the output from a call to dbms_xplan. This was followed by the DDL for the critical index and a list of the stats for all the columns in the index.
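
For anyone wanting to reproduce that kind of comparison, one common (hedged) approach is to run the statement with rowsource statistics enabled and then report estimated versus actual rows per plan line (the table name here is hypothetical):

    -- Run with rowsource statistics, then compare E-Rows to A-Rows
    SELECT /*+ gather_plan_statistics */ COUNT(*) FROM some_table;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));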
