Technical

Readable Code for Modify_Snapshot_Settings

It annoyed me slightly that when I googled modify_snapshot_settings just now, all of the examples used huge numbers for the retention with (at best) a brief comment saying what the number meant. Here is a better example with slightly more readable code. I hope a few people down the road cut-and-paste from this article instead, and the world gets a few more lines of readable code as a result. :)

On a side note, let me reiterate the importance of increasing the AWR retention defaults. There are a few opinions about the perfect settings, but everyone agrees that the defaults are a “lowest common denominator”: suitable for demos on laptops but never for production servers. The values below are what I’m currently using.
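
As a minimal sketch of what such a call looks like (the interval and retention numbers here are only placeholders, not necessarily the values referred to above), the arithmetic spells out what each argument means:

  -- DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS takes both retention
  -- and interval in minutes; writing the arithmetic out keeps the intent
  -- readable (the specific values below are examples only)
  BEGIN
    DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
      interval  => 15,            -- snapshot every 15 minutes
      retention => 35 * 24 * 60   -- keep 35 days (days * hours * minutes)
    );
  END;
  /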

Largest Tables Including Indexes and LOBs

Just a quick code snippet. I do a lot of data pumps to move schemas between different databases; for example, taking a copy of a schema to an internal database to try to reproduce a problem. Some of these schemas have some very large tables, and those large tables aren’t always needed to research a particular problem.

Here’s a quick bit of SQL to list the 20 largest tables by total size – including space used by indexes and LOBs. A quick search on Google didn’t reveal anything similar, so I just wrote something up myself. I’m pretty sure this is reasonably efficient; if there’s a better way to do it, let me know! I’m posting it here so I can reference it in the future. :)
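
A sketch of this kind of query (not necessarily the exact SQL from the post) sums table, index, and LOB segment sizes from the DBA views; partitioned LOB segments may be undercounted, so treat it as an approximation:

  -- top 20 tables by total size: the table's own segments plus its
  -- index segments, LOB segments, and LOB index segments
  select * from (
    select owner, table_name, round(sum(bytes)/1024/1024/1024, 2) total_gb
      from (
            select s.owner, s.segment_name table_name, s.bytes
              from dba_segments s
             where s.segment_type like 'TABLE%'
            union all
            select i.table_owner, i.table_name, s.bytes
              from dba_indexes i
              join dba_segments s
                on s.owner = i.owner and s.segment_name = i.index_name
            union all
            select l.owner, l.table_name, s.bytes
              from dba_lobs l
              join dba_segments s
                on s.owner = l.owner and s.segment_name = l.segment_name
            union all
            select l.owner, l.table_name, s.bytes
              from dba_lobs l
              join dba_segments s
                on s.owner = l.owner and s.segment_name = l.index_name
           )
     group by owner, table_name
     order by sum(bytes) desc
  )
  where rownum <= 20;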

OSP #2c: Build a Standard Platform from the Bottom-Up

This is the fourth of twelve articles in a series called Operationally Scalable Practices. The first article gives an introduction and the second article contains a general overview. In short, this series suggests a comprehensive and cogent blueprint to best position organizations and DBAs for growth.

Replicating Tanel’s Script Library

Tanel does offer a zip file with all of his scripts, and the zip seems up-to-date now; I started using this alternative technique a while ago, when the zip file didn’t seem to get updated as quickly as the raw scripts directory.

  # mirror Tanel's scripts directory into a local "tpt" directory
  mkdir tpt
  cd tpt
  wget -r -nH --cut-dirs=2 --no-parent --reject="index.html*" http://blog.tanelpoder.com/files/scripts/
  cd ..

  # then add the new directory to version control (svn or git, whichever you use)
  [svn/git] add tpt
  [svn/git] commit tpt -m "added Tanel Poder's script library to our script repository"

Please remember that as Tanel says on his own website, “always proofread the scripts and test their effect out in a test environment before running in production.”

Convert Raw/Hex to Timestamp

Just filing this away on my own blog because it seems that I always have a hard time finding it via Google. Here’s some code to convert raw/hex values into timestamps. (I’ve come across this need in two situations: [1] bind variables recorded in trace files or SQL Monitor, and [2] high/low values in column statistics.)

As a query, without creating any objects in the database:
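
A minimal sketch (assuming the value is the 7-byte internal DATE encoding; TIMESTAMP values carry additional fractional-second bytes that this ignores, and the hex string below is just an example):

  -- decode a hex-dumped DATE: byte 1 = century+100, byte 2 = year+100,
  -- bytes 3-4 = month/day, bytes 5-7 = hour/minute/second each stored +1
  with t as (select '787106150D2339' raw_value from dual)  -- example value
  select to_date(
           to_char(to_number(substr(raw_value,  1, 2), 'xx') - 100, 'fm00') ||
           to_char(to_number(substr(raw_value,  3, 2), 'xx') - 100, 'fm00') ||
           to_char(to_number(substr(raw_value,  5, 2), 'xx'),       'fm00') ||
           to_char(to_number(substr(raw_value,  7, 2), 'xx'),       'fm00') ||
           to_char(to_number(substr(raw_value,  9, 2), 'xx') - 1,   'fm00') ||
           to_char(to_number(substr(raw_value, 11, 2), 'xx') - 1,   'fm00') ||
           to_char(to_number(substr(raw_value, 13, 2), 'xx') - 1,   'fm00'),
           'yyyymmddhh24miss') decoded_date
    from t;
  -- the example value decodes to 2013-06-21 12:34:56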
