12c Release 1


Tales from the field: potential reasons for PDB plug-in violations part 1

Container Databases are an area I have researched intensively over the past few years. With this post (and hopefully some others to follow) I would like to demonstrate some of the new situations a DBA might be confronted with. Please don’t use this post to give the new 12c architecture a hard time: standardised deployments (which I love) help you a lot. Not only do your DBA scripts work reliably everywhere, the error condition I am showing in this post also becomes a lot less likely.

At the end of the post I’ll show an alternative approach using a standardised way of creating PDBs.
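To give you an idea of where such an investigation usually starts: a minimal example, run from the root container, of checking for outstanding plug-in violations (the filter on RESOLVED entries is just a suggestion):

  select name, cause, type, status, message
    from pdb_plug_in_violations
   where status <> 'RESOLVED'
   order by time;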

Environment

Setting the scene, my environment is as follows:


Automatic Deployment of CDBs with non-default database options in 12.1.0.2

I have written about the importance of automation in previous posts, and this one follows the same line. This time I wanted to test and document my experience with “golden database templates”. Why? Because most DBAs will appreciate that there are more interesting things to do than to run “create database” scripts all day, so why not automate the process? For quite some time DBCA, the Database Configuration Assistant, has offered the option to create templates. One approach you could use would be to

  • Create a “golden master” database
  • Use DBCA to create a template based on this
  • And enjoy the fact that every database is exactly the same

All of this is done in the hope of making life a lot easier for administrators.
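A minimal sketch of the two DBCA calls involved follows; the database name, template name and passwords are made up for the example:

  # create a template (structure plus data files) from the golden master
  dbca -silent -createCloneTemplate \
    -sourceDB goldmaster \
    -templateName golden_master.dbc \
    -sysDBAUserName sys -sysDBAPassword secret

  # stamp out an identical database from that template
  dbca -silent -createDatabase \
    -templateName golden_master.dbc \
    -gdbName PROD01 -sid PROD01 \
    -sysPassword secret -systemPassword secret

Every database created from the template inherits the same options and settings, which is the whole point of the exercise.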

Motivation


Little things worth knowing: auto-DOP or hint, which one takes precedence?

This is just another short post about one of the little things worth knowing. Assume you are on 12.1.0.2, for example, and you want to know whether parallel_degree_limit reliably keeps the degree of parallelism at bay, for all sessions and queries across the board. After all, every machine has finite CPU (and other) resources that shouldn’t be exhausted. Setting parallel_degree_limit of course only has an effect if you set parallel_degree_policy to limited, auto or adaptive. parallel_degree_policy is the Big Knob that enables Automatic Degree of Parallelism (Auto DOP), and this post assumes Auto DOP is enabled for the session. In this configuration, what happens to hinted statements?
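As a minimal sketch of such a test, assuming a demo table T1 and arbitrary values:

  -- enable Auto DOP for the session and cap the DOP at 4
  alter session set parallel_degree_policy = auto;
  alter session set parallel_degree_limit = 4;

  -- a hinted statement requesting more than the limit allows
  select /*+ parallel(8) */ count(*) from t1;

  -- the note section of the plan reveals the DOP actually used
  select * from table(dbms_xplan.display_cursor);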

Documentation

The documentation (12c Reference Guide, section on parallel_degree_limit) states that you can provide an integer in addition to “CPU” and “IO”:


Trouble with multiple SCAN listeners in 12c Release 1

UPDATE 28-JUL-2016: the problem, as stated by Anil in the comments section, seems to be fixed. I applied Proactive Bundle Patch 23273686 and OJVM patch 23177536 to the Grid and RDBMS homes. After a successful execution of datapatch the errors in the CRSD agent log I reported in this post went away, and I can also see the settings in the database alert.log.

Scroll to the bottom of the post for more detail.
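For reference, a rough sketch of the verification steps; output omitted, and paths vary by system:

  # list the patches registered in the home
  $ORACLE_HOME/OPatch/opatch lspatches

  # apply the SQL portion of the patches to the database
  $ORACLE_HOME/OPatch/datapatch -verbose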

For those not yet on that patch level, here is the original post.


TABLE ACCESS INMEMORY FULL – but there may be more

While preparing to teach a class this week I had the time to look into the In-Memory (cost) option a bit more closely. I noticed a few interesting things along the way and would like to share one of them here.

Background

One of the questions I was asking myself was:

“What happens if I scan segments that are within the IM area, and some are not?”

I was primarily thinking of joins in a DWH environment, but to keep the test case simple and reproducible I decided to go with a partitioned table where the current partition is assigned to the IMCS and the rest is not. For this test I’m relying on the Swingbench SH schema. All of this runs on my laptop in a VM, so I had to be selective when it comes to indexes. I also chose NOT to partition the tables at this stage: I wanted to choose my own partitioning scheme. For reference, here is the command that created the SH schema:
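As for the In-Memory side of the setup, a minimal sketch with made-up table and partition names looks like this:

  -- assign only the current partition to the In-Memory column store
  alter table sales modify partition sales_q4 inmemory priority high;

  -- verify which segments were populated, and how completely
  select segment_name, partition_name, populate_status, bytes_not_populated
    from v$im_segments;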


Little things worth knowing: automatic generation of extended statistics in 12c Part II

In my last post I demonstrated an example of how to use a PL/SQL package to monitor a workload in order to identify correlated columns. Such columns are good candidates for the creation of extended statistics, since the optimiser does not assume that information stored in different columns may be related. Before starting my tests I invoked DBMS_STATS.SEED_COL_USAGE, which in turn increased Oracle’s attention level when trying to find correlated columns. Eventually these were identified (as shown in the output of DBMS_STATS.REPORT_COL_USAGE) and a subsequent call to DBMS_STATS.GATHER_TABLE_STATS caused extended statistics to be created, including histograms. This is one way to get extended statistics automatically, but it requires you to enable monitoring of a workload by invoking a PL/SQL API.
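Condensed into a minimal sketch, with a hypothetical ORDERS table in the current schema, the sequence looks roughly like this:

  -- watch the workload for the next 300 seconds
  exec dbms_stats.seed_col_usage(null, null, 300)

  -- ... run the representative workload ...

  -- report the correlated columns Oracle identified
  select dbms_stats.report_col_usage(user, 'ORDERS') from dual;

  -- this gather creates the extended statistics, including histograms
  exec dbms_stats.gather_table_stats(user, 'ORDERS')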


Little things worth knowing: automatic generation of extended statistics in 12c

When you are migrating to Oracle 12c I hope you might find this post useful. I came across this feature when researching what’s new with Oracle 12c (and yes, I still find lots of features I hadn’t noticed before). This one is a bit hidden away in section 2.2.4.3, Automatic Column Group Detection, of the 12c New Features Guide. And it’s a lot more complex than I first thought! In this first post I’ll try to show the generation of extended statistics in 12c. I am planning another post to explain how the rest of the adaptive optimisations that are new with 12c fit into the picture.
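If you want to check whether column groups already exist on one of your tables, the *_STAT_EXTENSIONS views have the answer; a minimal example against a hypothetical ORDERS table:

  select extension_name, extension
    from user_stat_extensions
   where table_name = 'ORDERS';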

What is the motivation?


Example of Full Transportable Export to create a 12c PDB

The Cool Stuff

Oracle has introduced a new way of transporting data from one platform to another in 12c. The new feature is called “Full Transportable Export”. This enhancement works with source databases on 11.2.0.3 and later and is a great way to move data, as it allows for an easy-to-use combination of Export Data Pump and Transportable Tablespaces. The documentation specifically praises it as a means to move from an 11g Release 2 database into a 12c Pluggable Database. And Pluggable Databases I like :)

Unfortunately the Full Transportable Export/Import process does not perform endianness conversion for you. In fact, if you have encrypted data to move, you cannot do so at all unless both platforms are of the same endianness.
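To give you an idea, the export side might look roughly like this on the source database; directory, file names and credentials are made up for the example:

  expdp system/secret directory=dpdir dumpfile=fullexp.dmp logfile=fullexp.log \
    full=y transportable=always version=12

The version=12 parameter is what allows an 11.2.0.3 or 11.2.0.4 source to produce a dump file suitable for import into a 12c PDB.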

Limitations

Before we begin, here is a list of restrictions pertaining to the new Full Transportable Export, taken from the 12c Utilities Guide:


CloneDB in Oracle 12.1.0.2

I personally really like CloneDB, a way to thin-clone an Oracle database over NFS. This can be quite interesting, and I wanted to update my blog for 12.1.0.2.3 (April PSU). Tim Hall has a good example for 11.2.0.2 and later with further references.
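In a nutshell, the clone instance starts with clonedb=true set in its parameter file and has its data files pointed at the NFS backup via DBMS_DNFS; a rough sketch with made-up paths:

  -- for each data file of the clone: reads fall through to the backup copy,
  -- changed blocks land in the sparse file
  begin
    dbms_dnfs.clonedb_renamefile('/nfs/backup/users01.dbf',
                                 '/nfs/clone/users01.dbf');
  end;
  /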

My setup is as follows:


Fixing a problem with the ASM spfile preventing RAC 12c from starting

This is a little note to myself on how to fix a corrupt spfile in clustered ASM. I hope you find it useful, too.

Let’s assume you made a change to the ASM (server) parameter file that causes an issue. You are most likely to notice this once CRS is restarted but parts of the stack fail to come up. If “crsctl check crs” mentions any component not started you can try to find out where in the bootstrap process you are stuck. Here is the output from my system.
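A rough sketch of the kind of fix the post works towards; the disk group name and pfile path are examples only:

  # find out where in the bootstrap the stack is stuck
  crsctl stat res -t -init

  -- then, connected to the local ASM instance as SYSASM:
  startup pfile='/tmp/asm_pfile.ora'
  create spfile='+DATA' from pfile='/tmp/asm_pfile.ora';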
