Oracle

Franck Pachot

MVCC in Oracle vs. PostgreSQL, and a little no-bloat beauty

Databases that are ACID compliant must provide consistency, even when there are concurrent updates.

Let’s take an example:

  • at 12:00 I have $1,200 in my account
  • at 12:00 my banker runs a long report to display account balances. This report will scan the ACCOUNT table for the next 2 minutes
  • at 12:01 an amount of $500 is transferred to my account
  • at 12:02 the banker’s report has fetched all rows

What balance is displayed in my banker’s report?
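
Both databases answer this with multi-version read consistency: a single query sees the data as of the time it started, whatever gets committed while it is running. A minimal two-session sketch (the column and owner values are invented; only the ACCOUNT table comes from the story above):

-- session 1 (the banker), 12:00: the long report starts; its result
-- is consistent as of the moment the query began
select sum(amount) from account where owner = 'FRANCK';

-- session 2, 12:01: the transfer arrives and commits while the report is still running
update account set amount = amount + 500 where owner = 'FRANCK';
commit;

-- session 1, 12:02: the report finishes fetching all rows.
-- with statement-level read consistency (Oracle always for queries, PostgreSQL
-- under the default read committed), the report still shows 1,200: the 500
-- committed after the query started is not visible to it.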

Jonathan Lewis

Troubleshooting

Here’s a question to provoke a little thought if you’ve got nothing more entertaining to do on a Sunday evening. What threats do you think of when you see a statement like the following in (say) an AWR report, or in a query against v$sql?

update tableX set
        col001 = :1, col002 = :2, col003 = :3, ...
        -- etc. 
        -- the names are supposed to indicate that the statement updates 302 columns
        -- etc.
        col301 = :301, col302 = :302
where
        pk_col = :303
;

I’ll be writing up some suggestions tomorrow (Monday, UK BST), possibly linking to a few other articles for background reading.

Update

The first three comments have already hit the high points, but I’m going to jot down a few notes anyway.

The first two things that really (should) make an impact are:

Jonathan Lewis

Split Partition

This is a little case study on “pre-emptive trouble-shooting”, based on a recent question on the ODC database forum asking about splitting the MAXVALUE partition of a range-partitioned table into two at a value above the current highest value recorded in it.
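
The statement in question would look something like the following sketch (the table name, partition names, and split value are all invented for illustration):

-- hypothetical names: pmax is the current maxvalue partition, and no row in it
-- has a partition key value at or after 1st Jan 2020, so the split point is
-- above the current high value of the data
alter table orders
        split partition pmax at (date '2020-01-01')
        into (partition p2019, partition pmax)
;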

The general principle for splitting (range-based) partitions is that if the split point is above the current high value Oracle will recognise that it can simply rename the existing partition and create a new, empty partition, leaving all the indexes (including the global and globally partitioned indexes) in a valid state. There are, however, three little wrinkles to this particular request:

Jonathan Lewis

Free Space

Several years ago I wrote a note about reporting dba_free_space and dba_extents to produce a map of the space usage in a tablespace, in anticipation of messing about with moving or rebuilding objects to try to reduce the size of the files in the tablespace. In the related page where I published the script I pointed out that a query against dba_extents would be expensive because it makes use of the structure x$ktfbue, which generates the information dynamically by reading segment header blocks.
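
A sketch of that kind of space map (not the original script; the tablespace name is just a placeholder) is a simple union of the two views in file and block order, and the dba_extents branch is the part that gets expensive:

select file_id, block_id, blocks, owner || '.' || segment_name as what
from   dba_extents
where  tablespace_name = 'USERS'
union all
select file_id, block_id, blocks, 'FREE SPACE'
from   dba_free_space
where  tablespace_name = 'USERS'
order by
        file_id, block_id
;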

Jonathan Lewis

Parse Solution

In the “Parse Puzzle” I posted a couple of days ago I showed a couple of extracts from an AWR report that showed contradictory results about the time the instance spent in parsing and hard parsing, and also showed an amazing factor of 4 difference between the DB Time and the “SQL ordered by Elapsed Time”. My example was modelling a real world anomaly I had come across, but was engineered to exaggerate the effect to make it easy to see what was going on.

Jonathan Lewis

Parse Puzzle

Here are some details from an AWR report covering a few minutes in the lifetime of an instance of 18.3. It’s a carefully constructed demonstration and all I’ve done is take an AWR snapshot, execute a single SQL statement, then take another snapshot, so the only thing captured by the report is the work done in that brief time interval. The purpose of the exercise is to demonstrate how some Oracle features can make a complete nonsense of the AWR. (I have, as I often do, produced a model that reproduces an effect that can appear in production but exaggerates the effect to make it more clearly visible.)

First the Time Model statistics:

Franck Pachot

Transaction management in PostgreSQL and what is different from Oracle

TL;DR: Everything is different about transaction behavior. This may also change your ideas about “database independent” applications.

I like to explain some PostgreSQL concepts from an Oracle DBA point of view. There are many things that are different between the two RDBMS, and it is important to understand them.

Auto commit

Here is a short example where I create a table, insert one row, and roll back:
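
I won’t reproduce the original demo here, but in psql with the default autocommit behaviour the outcome is roughly the following sketch (the table name is invented): each statement is committed as soon as it completes, so the rollback has nothing to undo and the row survives, which is not what an Oracle DBA would expect for the insert.

create table demo ( n integer );
insert into demo values ( 1 );
rollback;               -- psql warns: there is no transaction in progress
select * from demo;     -- the inserted row is still there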

Franck Pachot

Is a datafile OMF or not?

TL;DR: there’s no flag (only the file name tells you), but there is a function.

You want to know which of your files are Oracle Managed Files and which are not? Maybe because you like OMF (as I do, since I have no interest in file names when I have tablespaces) and you have non-OMF files that you want to move to OMF.

There’s no flag in V$DATABASE or DBA_DATA_FILES. Only the name of the file (starting with ‘o1_mf’, as in Oracle Managed Files, and ending with ‘.dbf’, as in Database File) makes it OMF or not. But I don’t like to rely on naming conventions, so I looked at how Oracle does it. There’s an isFileNameOMF procedure in the RMAN packages.

dbms_Backup_Restore.isFileNameOmf

Here is a procedure that displays which files are Oracle-managed (OMF) and which ones are regular user-managed ones:
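
I haven’t copied the original code, but a minimal sketch of the idea looks like this, assuming that isFileNameOMF takes the file name as input and returns its verdict through a BOOLEAN OUT parameter (that signature is my assumption, so check sys.dbms_backup_restore in your version):

set serveroutput on
declare
  is_omf boolean;
begin
  for f in ( select file_name from dba_data_files ) loop
    -- assumed signature: isfilenameomf( fname in varchar2, isomf out boolean )
    sys.dbms_backup_restore.isfilenameomf( f.file_name, is_omf );
    dbms_output.put_line( case when is_omf then 'OMF:     ' else 'non-OMF: ' end || f.file_name );
  end loop;
end;
/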

Jonathan Lewis

LOB reads

This is a blog note that started life in September 2001 (which was before I started blogging, I think), got drafted as a blog in January 2014 because of a question on the Oracle-L list server, and has finally got to publication following a question on the ODC database forum. (But the comments in the blog don’t look as if they are relevant to the question.)

The question on Oracle-l was:  Why is the same CLOB datablock read multiple times?

The answer is basically this: client programs tend to set a “sensible” fetch buffer size for LOBs, and if the buffer size is less than the size of the LOB then the block(s) the LOB is in have to be visited multiple times; for nocache LOBs that means the block(s) will be read multiple times.

This can be seen quite easily in SQL*Plus which has two helpful parameters (one dating back to at least v5):
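
The two parameters are presumably LONG and LONGCHUNKSIZE, the latter being the fetch buffer size in question; a quick way to see the effect is something like the following (the table and column names are invented):

-- LONG limits how much of the value SQL*Plus will display;
-- LONGCHUNKSIZE sets how many bytes/characters are fetched per call.
set long 1000000
set longchunksize 4096

-- With a small chunk size and a large nocache CLOB the same blocks are
-- read again and again as the client fetches the value piece by piece.
select clob_col from t1 where id = 1;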

Franck Pachot

Oracle RMAN Backup Optimization ON and the backup archivelog command

TL;DR: BACKUP OPTIMIZATION ON acts as a NOT BACKED UP 1 TIMES for the BACKUP ARCHIVELOG ALL command.

I always mention “not backed up n times” in my “backup archivelog” commands because I think it makes sense to specify exactly how many copies of each archived log we want to have available to restore from each backup device.

I often configure “backup optimization on” when thinking about the tablespaces I put read-only, so that they are not backed up each time (provided that no old backups have been removed without RMAN knowing about it).

But I had never realized that BACKUP OPTIMIZATION and NOT BACKED UP actually both work on datafiles, archived logs, and backup sets. Which means the following is redundant:
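
For example, something along these lines:

RMAN> configure backup optimization on;

RMAN> backup archivelog all not backed up 1 times;

With backup optimization on, archived logs that already have one backup on the same device type are skipped anyway, so the “not backed up 1 times” clause adds nothing here.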
