Oakies Blog Aggregator

rshamsud

inmemory area is another sub-heap of the top-level SGA heap

I blogged earlier about shared pool heap durations and heap dumps, and I was curious to see how the in-memory area – a new 12.1.0.2 feature – is implemented. This is a short blog entry to discuss the in-memory area heap.

Parameters

I have set the initialization parameters sga_target=32G and inmemory_size=16G, meaning that out of the 32GB SGA, 16GB will be allocated to the in-memory area and the remaining 16GB to the traditional areas such as buffer_cache, shared_pool, etc. I was expecting the v$sgastat view to show the memory allocated for the in-memory area; unfortunately, there are no rows marked for it (the command “show sga” does show the in-memory area, though). However, dumping the heap at level 2 shows that the in-memory area is defined as a sub-heap of the top-level SGA heap. The following commands take a heap dump:

oradebug setmypid
oradebug heapdump 2 -- this command creates a heap dump trace file.
oradebug tracefile_name
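
For reference, this is the kind of query I ran against v$sgastat while looking for the in-memory area (a minimal sketch; only the usual pools such as "shared pool" and "large pool" show up):

select pool, round(sum(bytes)/1024/1024) mb
from   v$sgastat
group  by pool
order  by 2 desc;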

Reviewing the trace file

The trace file shows that the in-memory area is implemented as two sub-heaps, namely IMCA_RO and IMCA_RW. The split between the two is not equal, and I am not exactly sure about the algorithm behind it: about 12.75GB is allocated to IMCA_RO and the remaining 3.25GB to the IMCA_RW area [that's roughly an 80-20 split :)].

$ grep "heap name" *ora_56235*.trc
HEAP DUMP heap name="sga heap"  desc=0x600013d0
HEAP DUMP heap name="sga heap(1,0)"  desc=0x60063740
HEAP DUMP heap name="sga heap(1,3)"  desc=0x60068048
HEAP DUMP heap name="sga heap(2,0)"  desc=0x6006d490
HEAP DUMP heap name="sga heap(2,3)"  desc=0x60071d98
HEAP DUMP heap name="sga heap(3,0)"  desc=0x600771e0
...
HEAP DUMP heap name="sga heap(7,0)"  desc=0x6009e720
HEAP DUMP heap name="sga heap(7,3)"  desc=0x600a3028
HEAP DUMP heap name="IMCA_RO"  desc=0x60001130 <--- In memory Read only area?
HEAP DUMP heap name="IMCA_RW"  desc=0x60001278 <--- In memory Read write area?

You can learn all about SGA heap durations here; only the last two lines are relevant to this blog entry, and they show that two sub-heaps were allocated for the in-memory area.

The in-memory sub-heaps are split into memory extents, similar to traditional SGA heap allocations. Each extent has numerous 64MB chunks allocated to it, and these chunks are tagged as “cimadrv”. The total heap size is about 12.75GB.

HEAP DUMP heap name="IMCA_RO"  desc=0x60001130
 extent sz=0x1040 alt=288 het=32767 rec=0 flg=2 opc=2
 parent=(nil) owner=(nil) nex=(nil) xsz=0x30600000 heap=(nil)
 fl2=0x20, nex=(nil), dsxvers=1, dsxflg=0x0
 dsx first ext=0x64000000
 dsx empty ext bytes=0  subheap rc link=0x64000070,0x64000070
 pdb id=0
EXTENT 0 addr=0x363a00000
  Chunk        363a00010 sz=  8388304    free      "               "
  Chunk        3641ffee0 sz= 65011736    freeable  "cimadrv        "
  Chunk        367fffef8 sz= 67108888    freeable  "cimadrv        " <-- 64MB chunks
  Chunk        36bffff10 sz= 67108888    freeable  "cimadrv        "
  Chunk        36fffff28 sz= 67108888    freeable  "cimadrv        "
  Chunk        373ffff40 sz= 67108888    freeable  "cimadrv        "
...
EXTENT 1 addr=0x2e3b00000
  Chunk        2e3b00010 sz= 66059528    freeable  "cimadrv        "
  Chunk        2e79ffd18 sz= 67108888    freeable  "cimadrv        "
  Chunk        2eb9ffd30 sz= 67108888    freeable  "cimadrv        "
…
Total heap size    =13690208144 <-- Total heap size.

The next heap, IMCA_RW, is more interesting. This sub-heap also has extents with 64MB chunks allocated to them; however, I see that there are also smaller chunks in the heap. (I am still researching the meaning of these chunks and trying to avoid guesswork at this time.)

HEAP DUMP heap name="IMCA_RW"  desc=0x60001278
 extent sz=0x1040 alt=304 het=32767 rec=0 flg=2 opc=2
 parent=(nil) owner=(nil) nex=(nil) xsz=0x50100000 heap=(nil)
 fl2=0x20, nex=(nil), dsxvers=1, dsxflg=0x0
 dsx first ext=0x790000030
 dsx empty ext bytes=0  subheap rc link=0x7900000a0,0x7900000a0
 pdb id=0
EXTENT 0 addr=0x80ff00000
  Chunk        80ff00010 sz= 17825296    free      "               "
  Chunk        810fffe20 sz= 50331672    freeable  "cimadrv        "
  Chunk        813fffe38 sz= 67108888    freeable  "cimadrv        "
  Chunk        817fffe50 sz= 67108888    freeable  "cimadrv        "
…
  Chunk        80f8d5ef8 sz=     8296    freeable  "cimcadrv-sb    " <-- smaller chunks. Most are about 8k or 16k.
  Chunk        80f8d7f60 sz=       48    freeable  "cimcadrv-sbrcv "
  Chunk        80f8d7f90 sz=      184    freeable  "cimcadrv-sblatc"
  Chunk        80f8d8048 sz=     8296    freeable  "cimcadrv-sb    "
  Chunk        80f8da0b0 sz=       48    freeable  "cimcadrv-sbrcv "
  Chunk        80f8da0e0 sz=      184    freeable  "cimcadrv-sblatc"
…
Total heap size    =3489660848

So, if this is similar to the shared pool heap, is it possible to get an out-of-space error, analogous to ORA-4031 for the shared pool heap? There is indeed such an error associated with the in-memory option :).

$ oerr ora 64356
64356, 00000, "in-memory area out of space"
// *Document: NO
// *Cause:    The in-memory area had no free space.
// *Action:   Drop the in-memory segments to make space.

In summary, I was expecting the in-memory area to be allocated as an integral part of the buffer cache; however, that is not the case. The in-memory area is allocated as sub-heaps, very similar to the shared pool sub-heaps (but NOT part of the shared pool heaps). As the software was released only recently, I need to research further to understand the intricate details.

oraclebase

EM Cloud Control 12c : The 24 Hour DBA

I think I’ve lived through all the ages of Enterprise Manager. I used the Java console version back in the days when admitting you used it got you excommunicated from the church of DBA. I lived through the difficult birth of the web-based Grid Control. I’ve been there since the start of Cloud Control. I’ll no doubt be there when it is renamed to Big Data Cloud Pixie Dust Manager (As A Service).

I was walking from the pool to work this morning, checking my emails on my phone and it struck me (not for the first time) that I’m pretty much a 24 hour DBA these days. I’m not paid to be on call, I’m just a 9-5 guy, but all my Cloud Control notifications come through to my phone and tablet. I know when backups have completed (or failed). I know when a Tnsping takes too long. I know when we have storage issues. I know all this because Cloud Control tells me.

Now you might look on this as a bad thing, but being the control freak I am, I prefer to get a message on a Sunday telling me something is broken, hop on the computer and fix it there and then, rather than coming in on Monday to a complete sh*t-storm of users complaining. I’m not paid to do it, but that’s the way I roll.

While walking down memory lane I was thinking about all the scripting I used to do to check all this stuff. Endless amounts of shell scripts to check services and backups etc. I hardly do any of that these days. Cloud Control handles it all.

We are a pretty small Oracle shop, but I think life would be a whole lot more difficult without Cloud Control. I’ve mentioned this a number of times, but it’s worth saying again… If you have more than a handful of Oracle databases, you really should be using Cloud Control these days. It’s as simple as that.

Just in case you are wondering, this is how our infrastructure looks this morning… :)

em-all-green

Cheers

Tim…


doug

Enkitec Extreme Exadata Expo 2014

(Otherwise known as #E42014 to the Twitterati. Note to the casual reader ... like a lot of conference posts, this is more personal diary entry than having any tech content whatsoever. You have been warned.)

Yes, so this post is hopelessly late, but I really have been busy this time!

Although I feel like I've done quite a few presentations over the past year, a lot of them have been at client sites and the conferences have been a little more spaced out, so it felt good to be back in the wild a couple of months ago, particularly as it was going to be my last conference before moving to Singapore for work (more on that later). It was supposed to be a packed week with E4 covering the first few days and OUGF the last few days. Happily for me, @HeliFromFinland was good enough to understand that one conference would steal a lot less Singapore preparation time, and the refundable travel made it attractive too. Thanks, Heli! (Although when reading the #OUGF14 tweets in between packing, I wasn't so sure!)

I was lucky enough to be able to use some frequent flyer miles to upgrade both my flights to DFW, which was nice. But with no slides to work on and with an unusually low desire to watch movies, I mainly ate, read, slept and drank and the flight seemed to zip by. I believe I might have had one cocktail too many so that, by the time Jason Osborne picked me up in his extremely nice sports car, I could barely form a sentence. (I may be exaggerating slightly) I was nursing a Mojito slowly by the time everyone else was ready to meet at the hotel bar. It turned out to be the first of quite a few (no idea how that happened!) and although I was in bed relatively early, I felt like death when I woke up. I think that's the first time that's ever happened.

Sunday was all about registering and getting a nice Enkitec speaker goodie bag before settling in for an afternoon of Tanel Poder presenting on Exadata Internals and Advanced Performance Metrics - two 90 minute sessions - and although he was splendid as usual, I ended up only managing the first couple of hours before I really had to *get some food* and *sleep*. I got the first half done, but the second consisted largely of me lying in a hotel room in a zombified state, trying to rouse myself for the speakers' dinner that Enkitec had laid on.


By the time they were done, though, I was able to catch up with them as they returned for a far more sedate last few beers and an early night in bed.

One of the aspects I like most about E4 is that it's available as a webinar for virtual attendees at a reasonable cost and, because it's recorded, I can watch presentations that I might have missed the first time later, when I have some down-time. It means it's much easier to attend for those who can't get travel permission and also that, if you are particularly interested in presentations I just touch on here, anyone can register and see them all for themselves! (and no, I'm not on commission, I just think some of this stuff should have a wider audience than it already has ...)

One of the presentations that you're probably not likely to see at too many other conferences (I'm certainly not familiar with it) was the initial keynote - Exadata: The Untold Story of a Startup within Oracle with Kodi Umamageswaran who is VP Exadata Product Development at Oracle. I know that lining up this particular speaker took a lot of work from the organisers and it was well worth it for some classic keynote stuff. An entertaining and wide-ranging look at the pre-history of Exadata (SAGE) development, some of the key stages since and a look at the most recent developments. I thought it hit just the right level of being technical enough to be entertaining for a tech crowd, but without getting too bogged down in any one area.

Next up was Tom Kyte's keynote, titled What Needs to Change, during which he talked about some of the many changes in performance expectations and system capabilities in the time he's been working with Oracle and a good sprinkling of the content from the Real World Performance Days he does with Andrew Holdsworth and Graham Wood. My favourite bit of this is always when he talks about how 100% CPU usage is a bad idea for an OLTP-type system (essentially a very bad idea for response times) because I've had so many people at customer sites quote me some Tom Kyte thread or another suggesting that 100% CPU usage is a great thing, because you've paid for that capacity after all. Erm, yes, sure it is for some workloads but possibly a more important Tom Kyte quote is ... 'it depends'!

I wanted to attend Exadata Resource Manager Deep Dive by Akshay Shah of Oracle, but felt I needed to skip at least one session to eat something (Fajitas!) and prepare for my own. Having lunch with Cary Millsap, I was surprised to hear that he would be an Enkitec employee from that day (with a nice sideline in books and tools and training too, of course). It strikes me that this is a good move for all concerned. Nobody sane wouldn't want Cary on board and those Accenture people can help sell the Method-R skills into as many customers as possible. Good move by Kerry Osborne, if you ask me.

I always look forward to watching Tyler Muth present and this time it was a continuation of his central areas of interest these days - High Throughput Computing on Exadata - which was a collection of tips and a sense of the approach he takes when working with large ETL processes and the like. Encouragingly for me, they were very similar to some work I did at a client a year or two ago and presented at last year's OUG Finland conference, and it's always good to know you're heading in the right direction. I remember saying I would write some blog posts about that too! Maybe some day ... I could watch Tyler present all day, whether he sticks to his planned ideas or goes a little off-piste, because he always has something interesting to say in an entertaining way and feels like a kindred spirit. (With less swearing perhaps! ;-)) One take-away I noticed was that he recommended the ODA sizing document as a nice guide to stop people over-consolidating their workloads. e.g. Do you really want 42 databases on an 8 core server? I think it was the tables in this section that he was talking about.

Next up was my presentation which I think went well from a presentation perspective and contained some hopefully useful real world DBaaS experiences but I think the problem with that particular presentation is that it's not really technical enough on the one hand and on the other I *want* to keep it light. The truth is that I could probably talk sensibly about the subject for three hours so I've walked away thinking about what I didn't say! Still, it wasn't too bad and Martin Bach seemed to like it, which was comforting :-) 

Not too long later, I had somehow been roped into taking part in a Hadoop vs. Oracle Database Panel with my old mate Alex Gorbachev as the moderator and Tanel Poder, Eric Sammer and Kerry Osborne debating the strengths and weaknesses of the two approaches and whether the RDBMS is on its way out as a useful technology for most data analysis tasks. I must confess I'm not the greatest fan of panel sessions because you can never get into enough detail and argue the case properly, but at least not everyone agreed, although that might have been the beer and jetlag combining in my case :-) Eventually we split into two groups to try to illustrate the different design approaches to a problem from an Oracle or a Hadoop perspective, so I roped in an impossibly young and smart backup team of Martin Bach and Karl Arao. I probably still managed to talk over them though ;-) I suppose it was all a good bit of knockabout fun and came with a free beer attached, so I mustn't grumble! I guess I walked away still thinking it's horses for courses ...

That finished off day one nicely for me as I felt I'd both learned a few things and had fun at the same time. The fun is always guaranteed but I rarely learn as much as I do at E4.

The keynote the following morning was probably worth the price of admission alone. The Exadata SmartScan Deep Dive delivered by Roger MacNicol (who is a Consulting Engineer working on Smart Scan at Oracle) was precisely the type of presentation techies are looking for but with the added value that his presentation style and slides are as excellent as his content. It's so long ago now that I'm a bit short on details but plan to watch it again and, if you get a chance to hear Roger speak at a future conference (are you listening UKOUG?) then you should grab it with both hands, feet and whatever else you have at your disposal!

Maria Colgan, on the other hand, was far too busy working on the upcoming launch of Oracle In-Memory or something to bother about actually attending conferences to deliver her keynotes in person! ;-) Which wasn't a big deal for me personally as I've already heard a *lot* about IMO but it was a good opportunity for me to take the p*** out of her on Twitter!

My last full presentation of the day before I had time for a few beers and to head to the airport was Think Exa! with Martin Bach & Frits Hoogland, which was a collection of a handful of subject areas they highlighted as being worth thinking carefully about during initial implementations on Exadata servers. At first I thought Mr. Bach was going to do all the talking, but they did take it in lengthy alternating sections, which worked really well and set me up nicely for a last few beers with Martin, Frits and soon-to-be-Oak-Table-Network-member Alex Fatkulin, who I've known electronically for a while but only get to see at E4.

It's the second time I've been able to attend E4 in person (the other I attended remotely) and it's quickly become one of my favourite conferences. Sure, it's organised by good people doing a great job and some are friends, but I think as well as taking good care of people, Kerry Osborne's contacts within Oracle on top of Enkitec's consultants and customers merge together to help create a truly special agenda. I'd thoroughly recommend attending if you have the opportunity, even virtually.

Thanks again to everyone at Enkitec for another top job organising this thing and for giving me the opportunity for another trip to Dallas!

However, the complete highlight of the conference for me was getting to spend some decent time with a happy and healthy-looking Peter Bach, which amazed me after the health problems he's faced this year. Good to have you back, Peter, and I bet Oracle are glad to have you back working for them too! LOL

martin.bach

Upgrading clustered Grid Infrastructure to 12.1.0.2 from 12.1.0.1.3

Oracle 12.1.0.2 is out: after lots of announcements, the product has finally been released. I had just extended my 12.1.0.1.3 cluster to 3 nodes and was about to apply the July PSU when I saw the news. So why not try and upgrade to the brand new thing?

What struck me at first was the list of new features … Oracle's patching strategy has really changed over time. I remember the days when Oracle didn't usually add features in point releases. Have a look at the new 12.1.0.2 features and you could argue it would qualify as 12c Release 2…

In summary the upgrade process is actually remarkably simple, and hasn’t changed much since earlier versions of the software. Here are the steps in chronological order.

./runInstaller

I don't know how often I have typed ./ruinInstaller instead of ./runInstaller, but here you go. This is the first wizard screen after the splash screen has disappeared.

GI 12.1.0.2-001

Naturally I went for the upgrade of my cluster. Before launching the installer, though, I made sure that everything was in working order by means of cluvfy.
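
For reference, the pre-upgrade check I ran was along these lines (a sketch only – the home paths are from my lab and need adjusting for other environments):

$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
    -src_crshome /u01/app/12.1.0.1/grid \
    -dest_crshome /u01/app/12.1.0.2/grid \
    -dest_version 12.1.0.2.0 -fixup -verbose

On to the next screen: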

GI 002

I always install English only. Troubleshooting Oracle in a different language (especially one I don't speak or understand) is really hard, so I avoid it in the first place.

Over to the screen that follows and oops – my SYSDG disk group (containing OCR and voting files) is too small. Bugger. In the end I added 3 new 10GB LUNs and dropped the old ones, but it took me a couple of hours to do so. Worse: it wasn't even needed, though it proved to be a good learning exercise. The requirement for that much free space is most likely caused by the management repository and related infrastructure.

GI 003 error

Back on this screen everything is in order; the screenshot was taken just prior to moving on to the next one. Note the button to skip the updates on unreachable nodes. I'm not sure I'd want to do that though.

GI 003

I haven’t got OEM agents on the servers (yet) so I’m skipping the registration for now. You can always do that later.

GI 004

This screen is familiar; I am keeping my choices from the initial installation. Grid Infrastructure is owned by the oracle account despite the ASMDBA and ASMADMIN groups, by the way.

GI 005

On the screen below you define where on the file system you want to install Grid Infrastructure. Remember that for clustered deployments the ORACLE_HOME cannot be in the path of the ORACLE_BASE. For this to work you have to jump to the command line and create the directory on all servers, granting ownership to the GI owner account (oracle in this case, but it could be grid as well).
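
In my case that meant something like this on each node (a sketch; the path matches the root script output further down, while the oinstall group is an assumption based on a typical setup):

# as root, on every cluster node
mkdir -p /u01/app/12.1.0.2/grid
chown oracle:oinstall /u01/app/12.1.0.2/grid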

GI 006

Since I like to be in control I don’t allow Oracle to run the root scripts. I didn’t in 12.1.0.1 either:

GI 007

In that screen you notice the familiar checking of requirements.

GI 008

In my case there were only a few new ones, shown here. This is a lab server so I don't plan on using swap, but the kernel parameter “panic_on_oops” is new. I also hadn't set the reverse path filtering parameter, which I corrected before continuing. Interestingly, the installer points out that there is a change to the asm_diskstring, with its implications.

One thing I haven't recorded here (because I am using Oracle Linux 6.5 with UEK3) is the requirement for a 2.6.39 kernel – that sounds like UEK2 to me. I wonder what that means for Red Hat customers. I needed the RHEL-compatible kernel because Oracle didn't provide virtio SCSI in UEK2 (but does in UEK3).
Another interesting case was that the kernel.core_pattern setting wasn't equal on all nodes. It turned out that 2 nodes had the abrt package installed and the others didn't. Once the package was installed on all nodes, the warning went away.

GI 009

Unfortunately I didn't take a screenshot of the summary screen, in case you wonder where that is. I went straight into the installation phase:

GI 010

At the end of which you are prompted to run the upgrade scripts. Remember to run them in screen and pay attention to the order you run them in.
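
If you are not in the habit already, something along these lines does the trick (a sketch; the screen session name is arbitrary):

# on each node in turn, in the order prescribed by OUI
[root@rac12node3 ~]# screen -S gi_upgrade
# ... then, inside the screen session:
[root@rac12node3 ~]# /u01/app/12.1.0.2/grid/rootupgrade.sh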

GI 011

The output from the last node is shown here:

[root@rac12node3 ~]# /u01/app/12.1.0.2/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2014/07/26 16:15:51 CLSRSC-4015: Performing install or upgrade action for Oracle
Trace File Analyzer (TFA) Collector.

2014/07/26 16:19:58 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2014/07/26 16:20:02 CLSRSC-464: Starting retrieval of the cluster configuration data

2014/07/26 16:20:51 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2014/07/26 16:20:51 CLSRSC-363: User ignored prerequisites during installation

ASM configuration upgraded in local node successfully.

2014/07/26 16:21:16 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2014/07/26 16:22:51 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully
completed.

OLR initialization - successful
2014/07/26 16:26:53 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/07/26 16:34:34 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/07/26 16:35:55 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2014/07/26 16:35:55 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl set crs activeversion'

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2014/07/26 16:38:51 CLSRSC-479: Successfully set Oracle Clusterware active version

2014/07/26 16:39:13 CLSRSC-476: Finishing upgrade of resource types

2014/07/26 16:39:26 CLSRSC-482: Running command: 'upgrade model  -s 12.1.0.1.0 -d 12.1.0.2.0 -p last'

2014/07/26 16:39:26 CLSRSC-477: Successfully completed upgrade of resource types

2014/07/26 16:40:17 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Did you notice that TFA has been added? Trace File Analyzer is another of these cool things to play with; it was available with 11.2.0.4 and as an add-on to 12.1.0.1.

Result!

Back to OUI to complete the upgrade, after which cluvfy performs a final check and I'm done. Proof that it worked:

[oracle@rac12node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jul 26 17:13:02 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE	12.1.0.2.0	Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SQL>
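
The clusterware side can be verified as well (a standard crsctl command; the output below is what I would expect rather than a verbatim capture from my cluster):

[oracle@rac12node1 ~]$ /u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]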

In another post I'll detail the upgrade of my databases. I am particularly interested in the unplug/plug way of migrating…

khailey

ASH visualized in R, ggplot2, Gephi, Jit, HighCharts, Excel, SVG

There is more and more happening in the world of visualization, and of visualizing Oracle performance with v$active_session_history in particular.

Of these visualizations, the one pushing the envelope the most is Marcin Przepiorowski. Marcin is responsible for writing S-ASH (Simulated ASH) versions 2.1, 2.2 and 2.3.

Here are some examples of what I have seen happening out there on the web with these visualizations, grouped by visualization tool.

Gephi

The first example is using Gephi. The coolest example of Gephi I've seen is Greg Rahn's analysis of the Oracle Mix session voting for OpenWorld.

Here is Marcin’s example using Gephi with ASH data:

JIT

Here are two more examples from Marcin Przepiorowski using JIT, the JavaScript InfoVis Toolkit.

Click on the examples to go to the actual HTML rather than just the image. On the actual page from Marcin you can double-click on nodes to make them the center of the network.

TOP 5 SQL_IDs, SESSIONS and PROGRAMS joined together, with additional joins to points not included in the top 5: e.g. the TOP 5 SQL_IDs with their lists of sessions and programs, the TOP 5 SESSIONS with their lists of sql_ids and programs, and the TOP 5 PROGRAMS with their lists of sessions and sql_ids.


R

Frits Hoogland has a great blog entry on getting started with R and then using R to analyze 10046 trace files (a sort of ASH on steroids).

Here, again from Greg Rahn, is one of the cooler examples of R. In this case it's the ASH data of a parallel query execution, showing the activity of the different processes:

HighCharts

Here is an example basically reproducing the Top Activity screen with HighCharts, a SQL script on v$session and a CGI script to feed the data to the web page:
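
The embedded chart aside, a minimal sketch of the kind of query that feeds such a page might look like this (the one-minute bucketing and column choices are my assumptions, not the original script's):

-- active session samples per minute, split by wait event ('ON CPU' when not waiting)
select to_char(sample_time, 'HH24:MI') as minute,
       nvl(event, 'ON CPU')            as activity,
       count(*)                        as samples
from   v$active_session_history
where  sample_time > sysdate - 1/24
group  by to_char(sample_time, 'HH24:MI'), nvl(event, 'ON CPU')
order  by 1, 2;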

SVG

OEM 12c shows the new face of Enterprise Manager with the load maps

PL/SQL and SVG : EMlite

(quick apex example: http://ba6.us/node/132)

Excel

Karl Arao has been doing a good bit of ASH visualization using his own Excel work as well as Tanel's Excel Perf Sheet (see a video here).

http://karlarao.wordpress.com/2009/06/07/diagnosing-and-resolving-gc-block-lost/

oraclebase

Oracle 11gR2 and 12cR1 on Oracle Linux 7

I did a quick update of my Oracle installation articles for Oracle Linux 7. The last time I ran through them was with the beta version of OL7, before the release of 12.1.0.2.

The installation process of 11.2.0.4 on the production release of Oracle Linux 7 hasn’t changed since the beta. The installation of 12.1.0.2 on Oracle Linux 7 is a lot neater than the 12.1.0.1 installation. It’s totally problem free for a basic installation.

You can see the articles here.

There is a bold warning at the top of both articles reminding you that the database is not yet supported on Oracle Linux 7! Please don't do anything “real” with it until the support is official.

Note. I left the fix-it notes for the 12.1.0.1 installation at the bottom of the 12c article, but now that 12.1.0.2 is available from OTN there is really no need for anyone to be installing 12.1.0.1, other than for reference I guess.

Cheers

Tim…


marco

Oracle Database 12.1.0.2.0 – Turning OFF the In-Memory Database option

So how do you turn the option off / disable it… As a privileged database user: just don't set the INMEMORY_SIZE parameter to a non-zero value… (the default...
Read More

kevinclosson

Impugn My Character Over Technical Points–But You Should Probably Be Correct When You Do So. Oracle 12c In-Memory Feature Snare? You Be The Judge ‘Cause Here’s The Proof.

Executive Summary

This blog post offers proof that you can trigger In-Memory Column Store feature usage with the default INMEMORY_* parameter settings. These parameters are documented as the approach to ensure In-Memory functionality is not used inadvertently–or at least they are documented as the “enabling” parameters.

During the development of this study, Oracle's Product Manager in charge of the In-Memory feature cited Bug #19308780 as it relates to my findings.

Index of Related Posts

This is part 4 in a series: Part I, Part II, Part III, Part IV.

BLOG UPDATE 2014.07.29: Oracle’s Maria Colgan issued a tweet stating “What u found in you 3rd blog is a bug [...] Bug 19308780.”  Click here for a screenshot of the tweet. Also, click here for a Wayback Machine (web.archive.org) copy of the tweet.

What Really Matters?

This is a post about enabling versus using the Oracle Database 12c Release 12.1.0.2 In-Memory Column Store feature, which is part of the separately licensed Database In-Memory Option of 12c. While reading this, please be mindful that in this situation all that really matters is which of your actions affect the internal tables that track feature usage.

Make Software, Not Enemies–And Certainly Not War

There is a huge kerfuffle regarding the separately licensed In-Memory Column Store feature in Oracle Database 12c Release 12.1.0.2–specifically how the feature is enabled and what triggers usage of the feature.

I pointed out a) the fact that the feature is enabled by default and b) that the feature is easily used accidentally. I did that in Part I and Part II of my series on the matter. In Part III I shared how the issue has led to industry journalists quoting–and then removing–said quotes. I've endured an ungodly amount of shameful backlash, even from some friends on the Oaktable Network list, as they asserted I was making a mountain out of a molehill over something that was a total lark (that was a euphemistic way of saying they all but accused me of misleading my readers). I even had friends suggesting this is a friendship-ending issue. Emotion and high technology mix like oil and water.

About the only thing that hasn’t happened is for anyone to apologize for being totally wrong in their blind-faith rooted feelings about this issue. What did he say? Please read on.

From the start I pointed out that the INMEMORY_QUERY feature is enabled by default–and that it is conceivable that someone could use it accidentally. The backlash from that was along the lines of how many parameters and what user actions are needed for that to be a reality. Maria Colgan–who is Oracle's PM for the In-Memory Column Store feature–tweeted that I'm confusing people when announcing her blog post on the claim that In-Memory Column Store usage is controlled not by INMEMORY_QUERY but by INMEMORY_SIZE. Allow me to add special emphasis to this point: in a blog post on oracle.com, Oracle's PM for this Oracle database feature explicitly states that INMEMORY_SIZE must be changed from the default to use the feature.

If I were to show you everyone else was wrong and I was right, would you think less of me? Please, don’t let it make you feel less of them. We’re just people trying to wade through the confusion.

The Truth On The Matter

Here is the truth, and I'll prove it in a screenshot to follow:

  1. INMEMORY_QUERY is enabled by default. If it is set you can trigger feature usage–full stop.
  2. INMEMORY_SIZE is zero by default. Remember, this is the supposedly ueber-powerful setting that precludes usage of the feature–not, in fact, the top-level-sounding INMEMORY_QUERY parameter. As such, this should be the parameter that prevents you from paying for usage of the feature.

In the following screenshot I'll show that INMEMORY_QUERY is at the default setting of ENABLE and INMEMORY_SIZE is at the default setting of zero. I prove first that there is no prior feature usage. I then issue a CREATE TABLE statement specifying INMEMORY. Remember, the feature-blocking INMEMORY_SIZE parameter is zero. If “they” are right, I shouldn't be able to trigger In-Memory Column Store feature usage, right? Observe–or better yet, try this in your own lab:

proof-mu
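
In case the screenshot is hard to read in a feed reader, the gist of the test is this (my reconstruction of the steps, not a verbatim copy of the session; the table name is arbitrary):

SQL> show parameter inmemory_query   -- ENABLE (the default)
SQL> show parameter inmemory_size    -- 0 (the default)

SQL> select name, detected_usages from dba_feature_usage_statistics
  2  where name like 'In-Memory%';   -- no usage recorded yet

SQL> create table t (x number) inmemory;

-- once feature usage is next sampled, the In-Memory Column Store feature
-- shows up as used, despite inmemory_size still being zero
SQL> select name, detected_usages from dba_feature_usage_statistics
  2  where name like 'In-Memory%';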

So ENABLED Means ENABLED? Really? Imagine That.

So I proved my point, which is that any instance with the default initialization parameters can trigger feature usage. I also proved that the words in the following three screenshots are factually incorrect:

MariaCallingMeOut

Screenshot of blog post on Oracle.com:

maria-on-inmemory_size_assertion

Screenshot of email to Oracle-L Email list:

kerry

Summary

I didn't want to make a mountain out of this one. It's just a bug. I don't expect apologies. That would be too human–almost as human as being completely wrong while wrongly clinging to one's wrongness because others are equally, well, wrong on the matter.

Sundry References

 Print out of Maria’s post on Oracle.com and link to same: Getting started with Oracle Database In-Memory Part I

Franck Pachot's findings of 2014.07.23 are reported here: tweet, screenshot of tweet.

Filed under: oracle

kevinclosson

Oracle Database 12c In-Memory Feature. Enabled, Used or Confused? Don’t Be.

Enabled By Default. Not Usable By Default.

Series Links: Part I, Part II.

It was my intention to write only 2 installments in my short series about Oracle Database 12c In-Memory Column Store feature usage. My hopes were quickly dashed when the following developments occurred:

1. A quote from an Oracle spokesman cited on informationweek.com was pulled because (I assume) it corroborated my assertion that the feature is enabled by default. It is, in fact, enabled by default.

Citations: Tweet about the quote, link to the July 26, 2014 version of the Informationweek.com article showing the Oracle spokesman quote: Informationweek.com 26 July 2014.

The July 26, 2014 version of the Informationweek.com article quoted an Oracle spokesman as having said the following:

Yes, Oracle Database In-Memory is an option and it is on by default, as have been all new options since Oracle Database 11g

2. An email from an Oracle Product Manager appeared on the oracle-l email list and stated the following:

So, it is explicitly NOT TRUE that Database In-Memory is enabled by default – and it’s (IMHO) irresponsible (at best) to suggest otherwise

Citation: link to the oracle-l list server copy of the email, screenshot of the email.

Features or Options, Enabled or Used

I stated in Part I that I think the In-Memory Column Store feature is a part of a hugely-important release.  But, since the topic is so utterly confusing I have to make some points.

It turns out that neither of the Oracle folks I mention above is correct. Please allow me to explain. Yes, the Oracle spokesman originally spoke the truth to Informationweek.com as reported by Doug Henschen. The truth that was spoken is that, yes indeed, the In-Memory Column Store feature/option is enabled by default. Now don't be confused: there is a difference between enabled, usable and in-use.

In Part II of the series I showed an example of the things that need to be done to put the feature into use–and remember, you're not charged for it until it is used. I believe that post made it quite clear that there is a difference between enabled and in-use. What does the Oracle documentation say about the In-Memory Column Store feature/option default settings? It says it is enabled by default. Full stop. Citation: top-level initialization parameter enabled by default. I've put a screenshot of that documentation here for education's sake:

enable-by-default

This citation of the documentation means the Oracle spokesman was correct.  The feature is enabled by default.

The problem is with the mixing of the words enabled and “use” in the documentation.

Please consider the following screenshot of a session where the top-level INMEMORY_QUERY parameter is left at the default (ENABLE) and the INMEMORY_SIZE parameter is set to grant some RAM to the In-Memory Column Store feature. In the screenshot you'll see that I didn't trigger usage of the feature just by enabling it. I did, however, show that you don't have to knowingly “use” the feature to trigger “usage”, so please visit Part II on that matter.

img1-mu
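
For readers of the feed, the gist of that session is the following (a reconstruction under the same settings, not a verbatim capture; the pool size is my choice):

SQL> alter system set inmemory_size = 512M scope=spfile;
SQL> -- restart the instance for inmemory_size to take effect

SQL> show parameter inmemory_query   -- still ENABLE (the default)
SQL> show parameter inmemory_size    -- now 512M

SQL> select name, detected_usages from dba_feature_usage_statistics
  2  where name like 'In-Memory%';   -- still no usage: enabled and usable, but not in use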

So here we sit with wars over words.

Oracle's Maria Colgan just posted a helpful blog (practically a documentation addendum) going over the initialization parameters needed to fully, really-truly enable the feature–or more correctly, how to go beyond enabled to usable. I've shown that Oracle's spokesman was correct in stating the feature is enabled by default (INMEMORY_QUERY enabled by default). Maria and others showed that you have to set 2 parameters to really, truly, gosh-darnit use the feature that is clearly ENABLE(d) by default. I showed you that enabling the feature doesn't mean you use the feature (as per the DBA_FEATURE_USAGE_STATISTICS view). I also showed you in Part II how one can easily, accidentally use the feature. And using the feature is chargeable, which is why I assert INMEMORY_QUERY should ship with the default value of DISABLE. That would be less confusing, and it would map well to prior art such as the handling of Real Application Clusters.

Trying To Get To A Summary

So how does one summarize all of this? I see it as quite simple. Oracle could have shipped Oracle Database 12c Release 12.1.0.2 with the sole top-level enabling parameter disabled (e.g., INMEMORY_QUERY=DISABLE). Doing so would be crystal clear because it nearly maps to a trite sentence–the feature is DISABLE(d). Instead we have other involved parameters that are not top-level, adding to the confusion. And confusion indeed, since the Oracle documentation insinuates INMEMORY_SIZE is treated differently when Automatic Memory Management is in play:

amm-issue

Prior Art

And what is that prior art on the matter? Well, consider Oracle’s (presumably) most profitable separately-licensed feature of all time–Real Application Clusters. How does Oracle treat this desirable feature? It treats it with a crystal-clear top-level, nuance-free disabled state:

cluster_database_default

So, in summary, the In-Memory feature is not disabled by default. It happens to be that the capacity-sizing parameter INMEMORY_SIZE is set to zero so the feature is unusable. However, setting both INMEMORY_QUERY and INMEMORY_SIZE does not constitute usage of the feature.

Confused? I'm not.

Filed under: oracle

khailey

To subset or not to subset

There was a problem at a customer in application development: using full copies of production for developers and QA was causing excessive storage usage, and they wanted to reduce costs, so they decided to use subsets of production data for development and QA:
  • Data growing, storage costs too high, decided to roll out subsetting
  • App teams and IT Ops teams had to coordinate and manage the complexity of the shift to subsets in dev/test
  • Scripts had to be written to extract correct and coherent data, such as the right date ranges, while respecting referential integrity
  • It's difficult to get 50% of the data with 100% of the skew; you tend to end up with 50% of the data and only 50% of the skew
  • Scripts were constantly breaking as production data evolved, requiring more work on the subsetting scripts
  • QA teams had to rewrite automated test scripts to run correctly on subsets
  • Time lost in the ADLC/SDLC to enable subsets to work (converting CapEx into higher OpEx) put pressure on release schedules
  • Errors were caught late in UAT, performance, and integration testing, creating “integration or testing hell” at the end of development cycles
  • Major incidents occurred post-deployment, forcing more detailed tracking of root cause analysis (RCA)
  • 20-40% of production bugs causing downtime were due to non-representative data sets and volumes
Moral of the story: if you roll out subsetting, it's worth holding the teams accountable and tracking the total cost and impact across teams and release cycles. What is the real cost impact of going to subsetting? How much extra time goes into building and maintaining the subsets and, more importantly, what is the cost impact of letting bugs slip into production because of the subsets?
A robust, efficient and cost-saving alternative is database virtualization. With database virtualization, database copies take up almost no space, can be made in minutes, and all the overhead and complexity listed above goes away. In addition, database virtualization will reduce CapEx/OpEx in many other areas, such as
  • Provisioning operational reporting environments
  • Providing controlled backup/restore for DBAs
  • Full-scale test environments.
And subsets do not provide the data control features that database virtualization provides to accelerate application projects (including analytics, MDM, ADLC/SDLC, etc.). Our customers repeatedly see 50% acceleration of project timelines and costs, which generally dwarfs the CapEx/OpEx storage expense lines, thanks to the features we make available in our virtual environments:
  • Fast data refresh
  • Integration
  • Branching (split a copy of a dev database off for use in QA in minutes)
  • Automated secure branches (masked data for dev)
  • Bookmarks for version control or compliance preservation
  • Share (pass errors + data environment from test to dev; if QA finds a bug, they can pass a copy of the db back to dev for investigation)
  • Reset/rollback (recover to pre-test state or pre-error state)
  • Parallelize all steps: have multiple QA databases to run QA suites in parallel. Give all developers their own copy of the database so they can develop without impacting other developers.
VDBs