
Chris Antognini

Purging Unused Triples Is a Never-ending Operation

This is a short post to point out a problem I recently discovered and for which I couldn’t find any information online. Hence, if you ever hit it, I hope this short post will save you some time.

The documentation describes a procedure you can use to get rid of dangling triples stored in a semantic network. Simply put, you must execute a PL/SQL block like the following:
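The excerpt ends before the block itself; purely as a sketch, and assuming the post refers to the documented SEM_APIS.PURGE_UNUSED_VALUES procedure, it would look something like this:

BEGIN
  -- Remove values that are no longer referenced by any triple.
  SEM_APIS.PURGE_UNUSED_VALUES;
END;
/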

Chris Antognini

SQL_ADAPTIVE_PLAN_RESOLVED Is Broken

You can use the V$SQL.IS_RESOLVED_ADAPTIVE_PLAN column to know whether the execution plan associated with a child cursor is adaptive. Specifically, it tells you whether the query optimizer selected either an adaptive join method or an adaptive star transformation (note that it is not set when the hybrid hash distribution is involved).
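For reference, a quick way to inspect the column for a given cursor (a sketch; the SQL_ID is a placeholder to fill in):

SELECT sql_id, child_number, is_resolved_adaptive_plan
FROM   v$sql
WHERE  sql_id = '&sql_id';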

The column takes one of the following values:

Chris Antognini

SPD State Does Not Change If Adaptive Statistics Are Disabled

The aim of this post is to point out an issue (bug?) that I recently discovered. But before talking about it, I need to go through a rather long introduction about the state of SQL plan directives (SPD).

As of version 12.1.0.2, an SPD has two pieces of state information. You can see both of them through the CDB/DBA_SQL_PLAN_DIRECTIVES data dictionary views:
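A query along these lines shows both of them (a sketch; it assumes the internal state is exposed in the XML NOTES column, as it is in 12.1.0.2):

SELECT directive_id,
       state,
       extractvalue(notes, '/spd_note/internal_state') AS internal_state
FROM   dba_sql_plan_directives
WHERE  type = 'DYNAMIC_SAMPLING';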

randolf.geist

12c Adaptive Joins Plus Statistics Feedback For Joins Cardinality Estimate Bug

I've encountered a bug at several clients that upgraded to Oracle 12c (12.1.0.2) that requires the combination of several new adaptive features introduced with Oracle 12c. It needs an execution plan that makes use of adaptive join methods, plus, at runtime, the activation of the new "statistics feedback for joins" feature that was also introduced with Oracle 12c. Note that 11.2 already had the "cardinality feedback" feature, but it only applies to single table cardinality misestimates, not to join cardinality misestimates.

In case the join method used at runtime is a Nested Loop join - not necessarily the join method preferred initially, so a runtime switch from Hash to Nested Loop join also reproduces the problem - the "statistics feedback for joins" feature generates a bad OPT_ESTIMATE hint for the join cardinality that always seems to be one, like the following: OPT_ESTIMATE(...
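As a side note, cursors that were re-optimized because of statistics feedback can be spotted with a query like this (a sketch, not taken from the original post):

SELECT sql_id, child_number, use_feedback_stats
FROM   v$sql_shared_cursor
WHERE  use_feedback_stats = 'Y';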

Chris Antognini

Initialization Parameters Set at the PDB Level Are Not Always Honored

Before describing the issue that led to this post, let’s briefly review how the handling of initialization parameters works in a multitenant environment.
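As a quick reminder (a sketch; the PDB name and the chosen parameter are placeholders), a parameter can only be set at the PDB level if it is flagged as PDB-modifiable:

SELECT name, ispdb_modifiable
FROM   v$parameter
WHERE  name = 'open_cursors';

ALTER SESSION SET CONTAINER = pdb1;
ALTER SYSTEM SET open_cursors = 500 SCOPE = BOTH;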

Chris Antognini

Wrong Results Involving INDEX FULL SCAN (MIN/MAX) in 12.1.0.2

One of my customers that recently upgraded to 12c hit a bug (22913528) that I think is worth being aware of. Note that, as the title of this post states, the problem only occurs in 12.1.0.2. At least, I wasn’t able to reproduce it in any other version.

To reproduce it you simply need a composite-partitioned table with a non-partitioned or globally partitioned index. In other words, if all your indexes are local, you shouldn’t be impacted by the bug.

The SQL statements I use to prepare the schema to reproduce it are the following:
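The excerpt cuts off before the scripts; purely to illustrate the kind of setup involved (table, column, and index names are my own, not the author's original test case), it would look something like this:

CREATE TABLE t (
  id  NUMBER        NOT NULL,
  d   DATE          NOT NULL,
  pad VARCHAR2(100)
)
PARTITION BY RANGE (d)
SUBPARTITION BY HASH (id) SUBPARTITIONS 4 (
  PARTITION p2015 VALUES LESS THAN (DATE '2016-01-01'),
  PARTITION p2016 VALUES LESS THAN (DATE '2017-01-01')
);

-- a non-partitioned (global) index, the kind that exposes the bug
CREATE INDEX t_id_i ON t (id);

-- a MIN/MAX query of this form is where an INDEX FULL SCAN (MIN/MAX)
-- could show up in the execution plan
SELECT MIN(id) FROM t;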

Chris Antognini

Also Extensions Created Because of SQL Plan Directives Can Invalidate Packages


UPDATE 2015-12-18: As mentioned by Bryn Llewellyn in this tweet, the invalidation I describe below is caused by bug 19450314. A patch for it is available here for several releases (e.g. it is available for 12.1.0.1.0, 12.1.0.2.1 or 12.1.0.2.13, but not for 12.1.0.2.0). I successfully tested it on 12.1.0.1.0.

The fact that an extension explicitly created by a user through DBMS_STATS can invalidate objects like packages is not new in 12c. It has been like that since the introduction of extensions in 11g. In my opinion, such an invalidation is not a major problem, since it takes place only when a developer or DBA explicitly triggers it.
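For context, this is the kind of user-created extension meant here (a sketch; the table and column names are placeholders):

SELECT dbms_stats.create_extended_stats(
         ownname   => user,
         tabname   => 'T',
         extension => '(COL1, COL2)'
       ) AS extension_name
FROM   dual;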

Chris Antognini

Auto SGA Management Impacts the Default Value of DB_FILE_MULTIBLOCK_READ_COUNT

The database engine determines the maximum disk I/O size used during multiblock reads (for example, full table scans or index fast full scans) by multiplying the values of the db_block_size and db_file_multiblock_read_count initialization parameters. The db_file_multiblock_read_count initialization parameter can be set explicitly, or, as of version 10.2, it’s also possible to instruct the database engine to automatically configure it. For the latter, simply don’t set it.
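For example, with an 8 KB block size and a multiblock read count of 128, the maximum I/O size is 8192 * 128 = 1 MB. A quick way to check the values currently in effect (a sketch, not from the original post):

SELECT name, value
FROM   v$parameter
WHERE  name IN ('db_block_size', 'db_file_multiblock_read_count');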

Regarding the value that is automatically determined by the database engine, the Oracle Database 12c Reference Guide gives us the following information:

mwidlake

PL/SQL bug with DBMS_RANDOM?

I think I’ve found an (admittedly obscure) bug with DBMS_RANDOM, group functions, PL/SQL and/or SQL.

Have a look and see if you also think this is odd – or have I missed the totally obvious?

(This is all on 12.1.0.2)

{Update – my conclusion, thanks to Joel and Sayan for their comments, is that this is not a “bug”. Oracle does not promise us how PL/SQL functions are executed, due to the way SQL can be re-written by the parser. I think most of us stumbling over something like this would treat it as a bug though. You have to look at the column projection (again, see the comments) to see how Oracle decides to get the columns derived by a naked call to DBMS_RANDOM.VALUE (by naked I mean no parameters passed in and, significantly, no reference to columns). It’s just the way it is}
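For anyone who wants to check the column projection themselves, something along these lines works right after running the query in the same session (a sketch):

SELECT *
FROM   TABLE(dbms_xplan.display_cursor(format => 'TYPICAL +PROJECTION'));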

oraclebase

PDB Logging Clause… Again…

About 14 months ago I spotted a problem with the PDB Logging Clause. I opened an SR and several months later I got a patch, which unfortunately didn’t fix the issue, just altered the symptom somewhat. I wrote about that patch here.

Yesterday I got a new patch, which actually does fix the problem, so now the PDB Logging Clause works as documented!

I’ve updated the PDB Logging Clause article to reflect the change.
