Cool Stuff


Is this valid SQL syntax? :-)

I’m talking about this:

select-1from from dual;

Looks invalid, right? Well, let’s run it:

SQL> select-1from from dual;

       ROM
----------
 -1.0E+000

This is because:

  1. Oracle doesn’t need whitespace for tokenizing a SQL statement (differences in character classes will do – as I’ve explained here)
  2. The first from “keyword” in the above statement is broken down into two tokens, as an “f” right after a digit means that the preceding number is a FLOAT (and a “d” means DOUBLE), so the tokenizer stops right there, knowing that whatever comes after this character (“ROM”) is the next token – which, according to Oracle SQL syntax rules, is assigned as the output column alias
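The suffix rules are easy to sanity-check in isolation – a minimal example (the column headings are just the default aliases SQL*Plus generates, and the exact display widths may vary by version):

SQL> select 1f, 1d from dual;

        1F         1D
---------- ----------
 1.0E+000   1.0E+000

The “1f” parses as a BINARY_FLOAT literal and “1d” as a BINARY_DOUBLE literal, thanks to the trailing type-suffix characters.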

The following funky-looking SQL statements are also valid:

SQL> select.1e2ffrom dual;

     .1E2F
----------
  1.0E+001

SQL> select.1e2fas"."from dual;

         .
----------
  1.0E+001

In the first example, “.1e2f” means the number .1 * 10^2 (scientific notation) represented internally as a FLOAT, and in the second one I’ve just added a column alias with the “AS” keyword, to make the SQL look a bit crazier.

:-)


Another (secret) hacking session with me – using Oracle Session Snapper for flexible troubleshooting (and fun)

And this time we have audio!!! (Wow!)

Following the huge success of my last hacking session – planned while drinking beer at Graham Wood’s OOW pre-party and delivered from Miracle’s massive Oracle Closed World event in the Thirsty Bear (between beers) – I’m announcing another hacking session:

What: Using Session Snapper for flexible Oracle Performance Troubleshooting

When: Wednesday 27th Oct 9:00-10:00 AM PDT (US West coast / California time). Check what this is in your time zone here

Where: Internet! -> Sign up here: http://tech.e2sn.com/secret

You’ll need to register fast and be “there” on time, as my current GotoWebinar account only allows 100 attendees to log on… last time over 100 people signed up, but “luckily” fewer actually showed up, so nobody got left outside!

BTW, I have figured out what went wrong with the audio last time, causing my voice to disappear at the end of the presentation. A program, which I accidentally launched via a keyboard shortcut, grabbed my mic input, so GotoWebinar’s app couldn’t access it anymore.

See you soon!


COUNT STOPKEY operation (the WHERE ROWNUM <= N predicate)

I was running tests on some very large tables on an Exadata cluster and found an interesting bug.

Instead of having to query 4-billion-row tables, I have reproduced this case with a cartesian join…

Check this: I’m generating 8 billion rows using a 3-way cartesian join of a set of 2000 rows. This results in 2000 * 2000 * 2000 rows, which is 8 billion rows.

SQL>  with sq as (select null from dual connect by level <= 2000)
      select count(*)
      from sq a, sq b, sq c;

COUNT(*)
----------
8000000000

Everything worked well, as you can see – all 8 billion rows were nicely counted. Let’s modify this query a bit by adding a WHERE rownum <= 8 000 000 000 predicate, which shouldn’t change the outcome of my query, as 8 billion rows is exactly what I have:

SQL> with sq as (select null from dual connect by level <= 2000)
     select count(*)
     from sq a, sq b, sq c
     where rownum <= 8000000000;

COUNT(*)
----------
4294967295

Ding! (as I’m unable to come up with a better word to present this problem :)

Seems like we have a problem! The COUNT operation thinks that we have only a bit over 4 billion rows returned from the query plan branches below it.

We clearly have a(n easily reproducible) bug here!

What happens here is that whenever you use a ROWNUM < N condition in a query predicate like I have done, you introduce an additional step into the query plan (COUNT STOPKEY).

Check this simple query:

SELECT * FROM t WHERE ROWNUM <= 1000

Plan hash value: 508354683

--------------------------------------------
| Id  | Operation          | Name | E-Rows |
--------------------------------------------
|   0 | SELECT STATEMENT   |      |        |
|*  1 |  COUNT STOPKEY     |      |        |
|   2 |   TABLE ACCESS FULL| T    |     41 |
--------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=1000)

As the above plan explains, the “WHERE rownum <” predicate is applied in the execution plan line (row source) 1 – COUNT STOPKEY. You won’t see this line when you are just counting rows without any “ROWNUM <” predicate.

How this works is that COUNT STOPKEY knows how many rows you want and just keeps calling the child function under it in the execution plan tree to get more and more rows, until the required number of rows has been returned. Then the COUNT STOPKEY function simply stops calling its child row sources and declares the end-of-data condition.
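You can see this early-stop behavior with the same cartesian join – asking for just 10 rows returns almost instantly, because COUNT STOPKEY stops driving its child row sources after the 10th row, instead of producing all 8 billion:

SQL> with sq as (select null from dual connect by level <= 2000)
     select count(*)
     from sq a, sq b, sq c
     where rownum <= 10;

  COUNT(*)
----------
        10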

And here’s the catch – due to a bug, the variable used to keep track of the number of rows processed by COUNT STOPKEY is only 4 bytes wide, even on 64-bit platforms. So it can hold values only up to 4294967295 (the count returned above), which is 2 ^ 32 – 1. That wouldn’t be much of a problem in practical real-world applications, but what I don’t like is that the operation silently returns wrong data: it just stops fetching more rows, even though we haven’t reached the ROWNUM <= 8 000 000 000 limit yet and there are still rows to fetch from the child row source. COUNT STOPKEY declares the end-of-data condition anyway and returns ~4B as the result.
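A quick arithmetic sanity check confirms that the bogus count is exactly the largest value an unsigned 4-byte counter can hold:

SQL> select power(2,32)-1 as ub4_max from dual;

   UB4_MAX
----------
4294967295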

This is obviously a bug, and now there’s a bug number for it as well (thanks to Greg Rahn for letting me know) – Bug# 10214991 (unfortunately it’s not public in MOS).

Now, there’s a reason why this bug has gone unnoticed for so long, even though 4+ billion-row tables have existed for a long time (I first worked with such an Oracle database in the year 2000 – on Windows NT :).

  1. A real-life business query with a WHERE ROWNUM <= X construct makes sense only when the data is ordered by some meaningful business attribute (a column). Otherwise you’ll get the query results in fairly random order. Note that I’m talking about real business queries here.
  2. The only right way to order data in SQL is via an ORDER BY clause. There is no other right way to do it, even though some hacks sometimes work (and will stop working after the next upgrade).
  3. Nobody (or not many people) has written queries like: give me the sum of the 5 billion biggest purchases ordered by selling price, or give me the sum of the last 5 billion purchases up to this date. If you replace the billion with just ten, a hundred, or a thousand, then hell yeah, such queries are executed every day (or should I say every minute, every second).
    Whether using ROWNUM in your business queries (instead of an actual date range or purchase price in $$$) is good or bad design is a completely different question – the fact is that ROWNUM is used in such SQLs.

Now we’ll just need to wait until the data volumes (and reports) get so big that someone actually hits this problem in real life!

So, better patch your database before the end of this decade! :-)


Announcing the first E2SN Virtual Conference with Jonathan Lewis, Cary Millsap, Kerry Osborne and me – 18-19 Nov!

Yes, it’s official. I’m organizing a virtual conference with some of THE top speakers in the world. The topic is Systematic Oracle SQL Optimization (in the real world).

The dates are 18-19 November; the conference lasts for 4 hours on each day, so you’ll still be able to get some work done as well (and immediately apply the knowledge acquired!).

Well, none of the speakers need an introduction, but just in case you’ve lived in space for the last 20 years, here are the links to their blogs :)

I can tell you, the people in the above list (at least the first 3) ROCK!!!

And all of them are OakTable members too :)

This conference will have 4 x 1.5 hour sessions, each delivered by a separate speaker. We aim to systematically cover the path of:

  1. Finding out where the performance problem is (and which SQLs cause it)
  2. Finding out what the problem SQL execution plan is doing and which part of it is slow
  3. How to write and fix your code so that the optimizer won’t hate your SQL
  4. How to fix a SQL execution plan’s performance problem when you can’t touch the application code!

And as this is the first (pilot) virtual conference, the price is awesome, especially if you get the early bird rate by signing up before 1 November!

So, check out the abstracts, details, agenda and sign up here!

http://tech.e2sn.com/virtual-conferences

P.S. I expect this event to be awesome!


A: The most fundamental difference between hash and nested loop joins

Ok guys, thanks for waiting!

I ended up expanding the article quite a lot compared to what I had originally planned. In fact, I only wrote 50% of what I plan to write; I’ll add the rest… um… later… Instead of just stating the difference between the joins, I took a step back and elaborated on something I often see people doing (and talking about in newsgroups and lists, too).

Basically, the most fundamental (or biggest, or most important) difference between nested loop and hash joins is this:

  • Hash joins cannot look up rows from the inner (probed) row source based on values retrieved from the outer (driving) row source; nested loops can.

In other words, when joining tables A and B (A is the driving table, B is the probed table), a nested loop join can take the 1st row from A and perform a lookup into B using that value (of the column(s) you join by). Then the nested loop takes the next row from A and performs another lookup into table B using the new value. And so on and so on.

This opens up additional access paths to table B. For example, when joining ORDERS and ORDER_ITEMS by ORDER_ID (and ORDER_ID is the leading column of the PK in the ORDER_ITEMS table), then for whatever orders are taken from the ORDERS table, we can perform a focused, narrow index range scan on ORDER_ITEMS for every ORDER_ID retrieved from the driving ORDERS table. A hash join can’t do that. Here’s a sketch of what I mean:
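(The ORDERS/ORDER_ITEMS tables, the PRODUCT_ID column and the ORDER_DATE filter below are hypothetical, and the hints are there only to force the join method for illustration.)

SQL> select /*+ leading(o) use_nl(oi) */ o.order_id, oi.product_id
     from orders o, order_items oi
     where o.order_id = oi.order_id            -- NL: looked up via ORDER_ITEMS' PK index,
     and o.order_date >= date '2010-10-01';    -- once per row fetched from ORDERS

With nested loops, the plan can drive a narrow INDEX RANGE SCAN on the ORDER_ITEMS PK index for each ORDER_ID fetched from ORDERS at runtime.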

Of course this doesn’t mean that hash joins can’t use any indexes on the tables they read – index range scans and unique lookups can still be used under a hash join, but only if there are constant values in the query text (in the form of literals or bind variables). If there are no such constant (filter) conditions under a hash join, then the only other options for using that index would be an INDEX FULL SCAN (which is a range scan from one end of the index to the other) or an INDEX FAST FULL SCAN (which is like a full table scan through the entire index segment). However, none of these opportunities gives the same benefit as nested loops looking up rows from row source B dynamically, based on what was retrieved from A at runtime.
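For contrast, a sketch against the same hypothetical tables as above: a hash join can still range-scan the ORDER_ITEMS index here, but only thanks to the constant filter condition in the query text, not the join values coming from ORDERS:

SQL> select /*+ leading(o) use_hash(oi) */ count(*)
     from orders o, order_items oi
     where o.order_id = oi.order_id
     and oi.order_id < 1000;    -- constant predicate: index range scan possible even under a hash join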

Note that this nested loops benefit isn’t limited to indexes on table B; the difference is more fundamental than just a specific access path. For example, if table B happens to be a single-table hash cluster or an indexed X$ table, then the nested loop is also able to do “optimized” lookups from these row sources, based on the values retrieved from table A.

So, my article with a lot of (loosely) related details is here:

In the comments section of my question, Tom, Bernard Polarski, Christian Antognini and Marc Musette got the closest to what I had in mind when I asked the question. However, of course, your mileage may vary somewhat depending on what kind of problems you have experienced the most over the years. Also, Jonathan Lewis had a valid comment that the answer depends on what exactly one means by “fundamental” – and yeah, this was open to interpretation.

Nevertheless, I wanted to emphasize that there’s a more important difference between NL and hash joins than the usual stuff you see in training materials, which talk about implementation details like hash tables and memory allocation…

Some day I will complete that article; I plan to add some design advice in there, like denormalization opportunities for getting the best of both worlds, etc. But now I’m gonna get a beer instead.

Thanks for reading and answering my blog post – I was quite impressed by the volume of comments & answers to my question. I must do this more often!


A million kilometers in two years…

I’ve been tracking my business travel with Tripit.com’s awesome service for about 2 years now.

After getting back from my Tallinn->Helsinki->New York->Detroit->New York->San Francisco->New York->Helsinki->Tallinn trip yesterday, Tripit reported that I have flown 1 007 509 km during my business trips (American readers, that’s about 42 miles ;)

Check yourself below :)

Tripit says I’ve visited 71 different cities in 27 countries within the last two years.

Here’s the map of places where I’ve visited my clients, done training or spoken at conferences:

Actually, there are probably a couple more cities I’ve been to in the last two years; for some reason Tripit doesn’t recognize my trip to Melbourne (but it does show the visit to Sydney from the same trip).

Anyway, the conclusion here is that I think I’ve done enough flying for now. Now I plan to stay at home for a loooong time (I mean at least 3-4, maybe even 5 weeks in a row!!! ;)

But seriously, what I’ve decided is that:

  1. I won’t do any more public on-site seminars (with only a few exceptions).
  2. I will move all my public seminar offering to web-based online seminars (using Citrix gotomeeting.com’s service), which I’ll deliver in person.
  3. I will still occasionally do private corporate training on-site, which offers the flexibility of customizing the content and focus areas of the seminar to match the customer’s needs.
  4. I will also offer private corporate online training, which gives much greater flexibility for choosing the seminar duration, times, etc. (it’s possible to split a seminar into 1-day sections, each day delivered in a separate week, to reduce the impact of people being away from their work).
  5. I will still do consulting & advanced troubleshooting, where I usually solve even the most complex problems in a matter of a couple of days (as explained here, for example).

Ok, enough of self-promotion and advertising, back to work ;-)

P.S. I will publish my online seminar schedule “very soon now”!!!

P.P.S. I’m not affiliated with Tripit.com business-wise in any way, but if you travel frequently, then I recommend checking out their awesome service (and iPhone app). The basic version is free, but I just decided to upgrade to Pro after a couple of years of using it!


Oracle Closed World presentation links

Thanks to everybody who attended my OCW hacking session!

Sorry to the guys who attended via the webinar – I’ll do the session again in a few weeks, hopefully with audio from end to end! And I will get someone to assist me with monitoring the transmission quality, attendee questions, etc.

Note that this stuff is mostly for hacking and fun – don’t use the undocumented stuff in production!

The links are below:

Download scripts & Tools:

Rlwrap links:

Diagnostic events:

X$TRACE
I haven’t written any articles on X$TRACE yet, but you can find some stuff in one of my very old presentations:

Or just type:

ALTER TRACING ENABLE "10704:4:ALL"

Where 10704 is the KST event number, 4 is the level, and ALL means all Oracle PIDs (Oracle PIDs, not OS PIDs).
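A couple of related commands, as a sketch: the DISABLE form mirrors the ENABLE syntax, and the trace buffer itself is exposed via the X$TRACE fixed table. Its column layout varies by version, so describe it first before relying on any column names (the EVENT column below is an assumption – verify it in your version):

SQL> ALTER TRACING DISABLE "10704:4:ALL"

SQL> desc x$trace
SQL> select * from x$trace where event = 10704;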



Hot stuff! Oracle Closed World Secret presentation webcast!

After hours of careful planning (as you’ll see from the images), I decided to webcast tomorrow’s Oracle Closed World hacking session…

This session isn’t necessarily going to be useful, but it definitely should be fun!!!

You can register here:

http://tech.e2sn.com/secret


Public appearances 2010

Here’s the list of events where I’ll speak this year:

Michigan OakTable Symposium 2010
Ann Arbor, MI
16-17 September 2010

Considering the concentration of OakTable members there, this will be an awesome event!

I will be delivering my “Back to Basics: Choosing The Entry Point to Performance Troubleshooting Wisely” and “Understanding LGWR, log file sync waits and commit performance” sessions there.

Promo video:
http://www.oaktable.net/media/michigan-oaktable-symposium-2010-promo

Agenda & Registration:
http://michigan.oaktable.net/

Oracle Closed World
San Francisco, CA
19-22 September 2010

Note that I won’t be speaking at the official Oracle Open World conference, but I will be speaking at a secret underground event there, about some really fun stuff like deep internals, hacking, kernel tracing and, of course, advanced troubleshooting ;-) And the rest of the time I’ll be in some bar.

NYOUG Fall 2010 Training Session
Manhattan, NYC, NY
16 November 2010

This is a full day seminar organized by NYOUG. I will be delivering my “Scripts and Tools for Oracle Troubleshooting and Advanced Performance Analysis” session there. It’s an updated version of the material I delivered at the Hotsos Symposium Training Day this year.

Agenda & Registration:
http://www.nyoug.org/upcoming_events.htm#NYOUG_Training_Days

UKOUG Tech & EBS Conference (to be confirmed)
Birmingham, UK
29 November – 1 December 2010

I submitted four papers to the UKOUG Tech & EBS conference, so if all goes well, I’ll be there at the end of November / beginning of December too.

http://techandebs.ukoug.org/



The full power of Oracle’s diagnostic events, part 2: ORADEBUG DOC and 11g improvements

I haven’t written any blog entries for a while, so here’s a very sweet treat for low-level Oracle troubleshooters and internals geeks out there :)

Over a year ago I wrote that Oracle 11g has a completely new low-level kernel diagnostics & tracing infrastructure built into it. I wanted to write a longer article about it with comprehensive examples and use cases, but by now I realize I won’t ever have time for that, so I’ll just point you in the right direction :)

Basically, since 11g, you can apply SQL_Trace, undocumented kernel traces, various dumps and other actions at much better granularity than before.

For example, you can enable SQL_Trace for a specific SQL_ID only:

SQL> alter session set events 'sql_trace[SQL: 32cqz71gd8wy3] {pgadep: exactdepth 0} {callstack: fname opiexe}
plan_stat=all_executions,wait=true,bind=true';


Session altered.

Actually I have done more in the above example: I have also said to trace only when the PGA depth (the dep= in the tracefile) is zero. This means tracing only top-level calls, issued directly by the client application and not recursively by some PL/SQL or by the dictionary cache layer. Additionally, I have added a check for whether we are currently servicing the opiexe function (whether the current call stack contains opiexe as a (grand)parent function) – this allows tracing & dumping only in specific cases of interest!
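If you want to explore the available events and their syntax yourself, the ORADEBUG DOC command mentioned in the title is the place to start (this is just a sketch – ORADEBUG requires SYSDBA privileges), and the same trace can be switched off again with the off keyword:

SQL> oradebug doc event name sql_trace

SQL> alter session set events 'sql_trace off';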
