Oracle


The full power of Oracle’s diagnostic events, part 2: ORADEBUG DOC and 11g improvements

I haven’t written any blog entries for a while, so here’s a very sweet treat for low-level Oracle troubleshooters and internals geeks out there :)

Over a year ago I wrote that Oracle 11g has a completely new low-level kernel diagnostics & tracing infrastructure built into it. I wanted to write a longer article about it with comprehensive examples and use cases, but by now I realize I won’t ever have time for that, so I’ll just point you in the right direction :)

Basically, since 11g, you can enable SQL_Trace, undocumented kernel traces, various dumps and other actions at much finer granularity than before.

For example, you can enable SQL_Trace for a specific SQL_ID only:

SQL> alter session set events 'sql_trace[SQL: 32cqz71gd8wy3] {pgadep: exactdepth 0} {callstack: fname opiexe}
plan_stat=all_executions,wait=true,bind=true';


Session altered.

Actually I have done more in the above example: I have also said to trace only when the PGA depth (the dep= in the tracefile) is zero. This means tracing only top-level calls, issued directly by the client application and not recursively by some PL/SQL or by the dictionary cache layer. Additionally, I have added a check on whether we are currently servicing the opiexe function (whether the current call stack contains opiexe as a (grand)parent function) – this allows you to trace & dump only in specific cases of interest!
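
To see what else this infrastructure offers, you can browse its built-in documentation with ORADEBUG DOC. A minimal sketch (run as SYSDBA on an 11g instance; the listings are long, so only the commands are shown here):

SQL> oradebug doc
SQL> oradebug doc event
SQL> oradebug doc event name
SQL> oradebug doc event action
SQL> oradebug doc event filter
SQL> oradebug doc component

And when you are done, appending "off" to the same event specification (alter session set events 'sql_trace[SQL: 32cqz71gd8wy3] off') should switch the trace back off.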


Little Things Doth Crabby Make – Part XIII. When Startup Means Shutdown And Stay That Way.

This seemed worthy of Little Things Doth Crabby Make status mostly because I was surprised to see that the Oracle Database 11g STARTUP command worked this way… Consider the following text box. I purposefully tried to do a STARTUP FORCE specifying a PFILE that doesn’t exist. Well, it did exactly what I told it to do, [...]
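
The gist is that STARTUP FORCE first shuts the running instance down and only then tries to read the parameter file, so when that file is missing the startup half fails and the instance stays down. A minimal reproduction sketch (the PFILE path is hypothetical and the exact error text may vary by version):

SQL> startup force pfile='/tmp/does_not_exist.ora'
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/tmp/does_not_exist.ora'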


Running Oracle Database On A System With 40% Kernel Mode Overhead? Are You “Normal?”

Fellow Oak Table Network member Charles Hooper has undertaken a critical reading of a recently published book on the topic of Oracle performance. Some folks have misconstrued his coverage as just being hyper-critical, but as Charles points out his motive is just to bring the content alive. It has been an interesting series of blog [...]

How to kill performance with 16MB table

If you think 16MB tables are so small that they can’t be the root cause of significant performance problems with SELECT queries, then read about today’s headache – a story of a 16MB table worth something like 270 seconds of response time. It began mostly as usual: an application web page didn’t show up within 5 minutes. [...]


Little Things Doth Crabby Make – Part XII.1 Please, DD, Lose My Data! I Didn’t Need That Other 4K Anyway.

I’ve never done a “dot release” for any post in my Little Things Doth Crabby Make series, but there is a first time for everything. In yesterday’s post (Part XII) I blogged about how dd(1) on Linux will happily let you specify arbitrarily huge values for the bs (block size) option yet doing so will [...]


Little Things Doth Crabby Make – Part XII. Please, DD, Lose My Data! I Didn’t Need That Other 4K Anyway.

It’s been a while since the last installment in my Little Things Doth Crabby Make series. The full series can be found here. So, what’s made me crabby this time? Well, when a Linux utility returns a success code to me I expect that to mean it did what I told it to do. Well… What’s [...]


Do It Yourself Exadata Performance! Really? Part III.

I just noticed that the vote count for my Oracle Mix Suggest-A-Session is up to 92! I’m flattered and thanks for the votes, folks. I promise this is the last post on this thread! The session I aim to present has some content that I delivered to our EMEA Sales Consultants during an event we [...]


Exadata Offload - The Secret Sauce

The “Secret Sauce” for Exadata is its ability to offload processing to the storage tier. Offloading means that the storage servers can apply predicate filters at the storage layer, instead of shipping every possible block back to the database server(s). Another thing that happens with offloading is that the volume of data returned can be further reduced by column projection (i.e. if you only select 1 column from a 100-column table, there is no need to return the other 99 columns). Offloading is geared towards long-running queries that access a large amount of data. Offloading only works if Oracle decides to use its direct path read mechanism. Direct path reads have traditionally been done by parallel query slaves but can also be done by serial queries. In fact, as of 11g, Oracle has changed the decision-making process, resulting in more aggressive use of serial direct path reads. I’ve seen this feature described as “serial direct path reads” and “adaptive direct path reads”.
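
A quick way to check whether a particular statement actually got offloaded is to look at the offload-related columns in V$SQL. A minimal sketch (assumes an 11.2 database on Exadata storage; &sql_id is just a placeholder):

SQL> select sql_id,
            io_cell_offload_eligible_bytes eligible_bytes,
            io_cell_offload_returned_bytes returned_bytes
       from v$sql
      where sql_id = '&sql_id';

If eligible_bytes is zero, no smart scan took place for that cursor; when offloading does kick in, returned_bytes is typically only a fraction of eligible_bytes.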


Do It Yourself Exadata Performance! Really? Part II.

Yes, shamelessly, I’m still groveling for more votes. My Suggest-A-Session vote count is at a modest 45! Come on folks, I know we can do better than that. If you don’t get there and vote I fear there is no way for me to get a podium for this topic unless you all collectively petition [...]


Open Storage S7000 with Exadata… a good fit for ETL/ELT operations.

I have worked on Exadata V2 performance projects with Kevin Closson for nearly a year now and have had the opportunity to evaluate several methods of loading data into a data warehouse. The most common, and by far the fastest, method involves the use of “External Tables”. External tables allow the user to define a table object over text files that live on a file system. Using external tables allows standard SQL parallel query operations to be used to load data into permanent database tables.
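
As a rough illustration of that approach, here is a minimal sketch (the SALES_EXT and SALES table names, the LOAD_DIR directory object and the file names are all hypothetical; the column list and access parameters would match your real data):

SQL> create table sales_ext (
       sale_id    number,
       sale_date  date,
       amount     number
     )
     organization external (
       type oracle_loader
       default directory load_dir
       access parameters (
         records delimited by newline
         fields terminated by ','
       )
       location ('sales_1.csv', 'sales_2.csv')
     )
     reject limit unlimited
     parallel;

SQL> alter session enable parallel dml;

SQL> insert /*+ append */ into sales
     select * from sales_ext;

The external table itself holds no data; the flat files are read (in parallel, if requested) at query time, so the direct-path insert above is what actually loads the rows into the permanent SALES table.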
