

LOBs vs Data – when compression occurs

Just a quick tip for SECUREFILE LOBs.

You may be familiar with basic compression on data in tables. It is a dictionary attribute of the table, which is then taken into account when you perform:

  • a SQL*Loader direct path load, or
  • an INSERT with the APPEND hint, or
  • a bulk-bind INSERT in PL/SQL using the APPEND_VALUES hint.

Whichever of the above you perform does not really matter. The key thing is that when you set the COMPRESS attribute on a table, it applies only to operations performed after the moment you altered the table; it does not compress the existing data. That is why adding the attribute is instantaneous.
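
By way of illustration, here is a minimal sketch (the table name T is hypothetical):

  create table t ( x varchar2(100) );

  -- direct path load before the attribute is set: these rows are NOT compressed
  insert /*+ APPEND */ into t
  select rpad('x',100) from dual connect by level <= 100000;
  commit;

  -- instantaneous: a dictionary change only, existing rows are left untouched
  alter table t compress;

  -- direct path loads from this point onwards ARE compressed
  insert /*+ APPEND */ into t
  select rpad('x',100) from dual connect by level <= 100000;
  commit;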


ODC Appreciation Day–LOB compression

LOBs tend to be large. Well duh…it’s right there in the name! “Large Object”. So one of the cool things I like with the SECUREFILE option in recent releases of Oracle Database is the ability to compress LOBs. Here’s a quick demo of that in action:
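
Something along these lines (table and column names are hypothetical, and note that SECUREFILE LOB compression requires the Advanced Compression option):

  -- create the LOB compressed from the outset...
  create table t (
    id  number,
    doc clob
  ) lob ( doc ) store as securefile ( compress high );

  -- ...or add compression to an existing SECUREFILE LOB
  alter table t modify lob ( doc ) ( compress high );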


OpenWorld Tuesday … yep, still screwed

Well… I’m only planning day 2 of OpenWorld and already I’ve pretty much given up on being able to see all the talks that I want to see. The challenge for me will be trying to coax the people inside Oracle into giving me a synopsis of their talk after the event. That is probably my best hope here.

In any event, if you’re a database person like me, here are my tips for what I’ll be trying to sneak into on Tuesday.

Tuesday morning

An Insider’s Guide to Oracle Autonomous OLTP Database Cloud [TRN3979]
Maria Colgan, Master Product Manager, Oracle
Robert Greene, Senior Director, Product Management, Oracle
Tuesday, Oct 23, 11:15 a.m. – 12:00 p.m. | Moscone West – Room 3003

which overlaps with


Partial Indexes–Take Care With Truncate

Partial indexes are a very cool feature that came along with Oracle 12c. The ability to control, at the partition level, whether a partition is indexed allows for a couple of obvious use cases:

1) You index only the data in recent partitions, because that data is aggressively searched by applications and/or users, whereas queries against the older data are either less frequent or more analytical in nature.
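
A minimal sketch of that first use case, with hypothetical names:

  create table sales (
    sale_date date,
    amount    number
  )
  partition by range ( sale_date ) (
    partition p2017 values less than ( date '2018-01-01' ) indexing off,
    partition p2018 values less than ( date '2019-01-01' ) indexing on
  );

  -- the local index only gets segments for partitions marked INDEXING ON
  create index sales_ix on sales ( sale_date ) local indexing partial;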


Easy as pi… hole.

A slight digression from my normal database-focussed content today.

In bandwidth- and latency-strapped Perth (Western Australia), every last drop of internet counts. Recently I stumbled upon this blog post by Troy Hunt about using a Raspberry Pi as a local DNS server to filter out unnecessary content. Personally, I don’t care about the content as such (I don’t like ads, but I do acknowledge that they are generally a necessary “evil”); for me it is about getting the most performance out of my lowly internet connection until the technology evolves in Australia.


My APEX 18.2 upgrade in a nutshell

As always, you should read the installation/upgrade manual from top to bottom before upgrading any piece of software, and be aware of all of the prerequisites and processes. But for me, my Application Express 18.2 upgrade was as simple as:

  • Download Application Express
  • Unzip to my target location
  • sqlplus / as sysdba
    • SQL> @apexins.sql SYSAUX SYSAUX TEMP /i/
  • Go to my ORDS installation
    • java -jar ords.war validate
  • sqlplus / as sysdba

And voila! A freshly upgraded Application Express in a matter of minutes…
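
As a quick sanity check afterwards, you can query the APEX_RELEASE dictionary view to confirm the new version:

  SQL> select version_no from apex_release;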


OpenWorld Monday…I’m so screwed

As I mentioned in a previous blog post, I whipped up a small Application Express application to let me plan out my OpenWorld activities. But there’s a small problem with being able to quickly and easily find things based on the schedule… There is too much to see!!!

I had a quick glance through the schedule just for Monday, and already I can see myself being very frustrated with all of the content that I am going to have to miss out on, simply because it clashes with other content I’d like to see.


Correcting datatypes with minimal downtime

Just a quick post here by request of an attendee of the September Office Hours. I had a demo of converting the data type of a primary key, without losing data and with minimal impact on the availability of the database, using DBMS_REDEFINITION. You can see that video here

but here is the script used to run that demo. 
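
What follows is a minimal sketch along the same lines rather than the original script; the table names T and T_INTERIM, and the NUMBER-to-VARCHAR2 conversion, are purely illustrative:

  -- the existing table, with the "wrong" data type for the key
  create table t ( pk number primary key, payload varchar2(100) );

  -- an interim table with the corrected data type
  create table t_interim ( pk varchar2(20), payload varchar2(100) );

  declare
    l_errors pls_integer;
  begin
    -- start copying data, mapping the old column to the new data type
    dbms_redefinition.start_redef_table(
      uname       => user,
      orig_table  => 'T',
      int_table   => 'T_INTERIM',
      col_mapping => 'to_char(pk) pk, payload payload' );

    -- clone indexes, constraints, triggers and grants onto the interim table
    dbms_redefinition.copy_table_dependents(
      uname      => user,
      orig_table => 'T',
      int_table  => 'T_INTERIM',
      num_errors => l_errors );

    -- the switch itself needs only a very brief lock
    dbms_redefinition.finish_redef_table( user, 'T', 'T_INTERIM' );
  end;
  /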


Datatype conversion laziness … yet another reason

I’ve got two words to describe my thoughts on being lazy with data types… JUST DON’T!

There have been many blog posts, videos, and AskTOM questions over the years with numerous examples of how the optimizer gets confused, how the SQL engine gets confused… and even how developers will ultimately get confused if you do not spend that tiny bit of extra effort to ensure data type consistency.
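
The canonical example, with a hypothetical table, is worth repeating:

  create table t ( pk varchar2(10) primary key );

  -- the numeric literal forces an implicit TO_NUMBER(pk) on the column, so the
  -- primary key index cannot be used for a simple lookup... and the query can
  -- even fail with ORA-01722 if any row holds non-numeric data
  select * from t where pk = 123;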


Another little 12c improvement

You’ve got a huge table, right? Massive! Immense! And then something bad happens: you get asked to remove one of the columns from that table.

“No problem” you think. “I won’t run the ‘drop column’ command because that will visit every block and take forever!”

So you settle on the perfect tool for such a scenario: simply mark the column as unused, so that it is no longer available to application code and the developers who write that code.
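
A minimal sketch of that approach, with hypothetical names:

  -- a dictionary-only change: returns immediately, no matter how big the table
  alter table bigtab set unused column notes;

  -- the physical cleanup can then be deferred to a quiet maintenance window
  alter table bigtab drop unused columns;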
