Oakies Blog Aggregator

mwidlake's picture

Friday Philosophy – Why I Volunteer for User Groups

I’ve just noticed a new page about me popping up on the UKOUG web site – it’s in the section about volunteer case studies, alongside people like Joel Goodman, Simon Haslam, Carl Dudley, Jason Arneil, Brendan Tierney and others who have been good enough to give time and effort to the UKOUG.
{You can get to the page by going to the UKOUG home page (www.ukoug.org), clicking the Membership or Member Activities tab, then Case Studies & Testimonials under that, and finally Volunteer Case Studies. Phew. Or follow the link I gave at the start and click on the other names.}

I’m not sure how long I’ve been up there, but I think only a couple of days.

Anyway, why DO I volunteer for user groups?

The little bio covers most of it but I thought I would put some words here on my blog too. I volunteer because, fundamentally, I am a socialist (with a small ‘s’) – I feel that we are all better off if we all help each other. I’ve been helped by people throughout my career (presenting stuff I don’t know, giving advice), so I guess I feel that I should return the favor. Many of the people who have helped (and continue to help) me have nothing to gain personally by doing so. In fact, one or two have helped me when, strictly speaking, they were helping create a rival for work opportunities. I try to do the same for those around me. I know, it sounds a bit “Disney film teaching the kids to do right” goody-two-shoes, but that is the core of it. And there are some other aspects to it too…

Why do I volunteer for the UKOUG specifically? Because they are THE main user group in my geographic area and provide the most support to the Oracle user community here in the UK. Most of the people involved in the UKOUG are just nice people too. But I also support and volunteer for smaller user groups, mostly by promoting their meetings, going to them or presenting. I started presenting at the main UKOUG conference back when Dido, Eminem and Christina Aguilera were in their heyday. I also went to the RDBMS and similar SIGs, and before long I was presenting at them and then got sucked into chairing one of them – the Management and Infrastructure SIG. I’ve been slowly sucked in more & more as the years roll by.

That has led to me presenting at other user groups in different countries. Actually, I used to do quite a bit of presenting abroad (mostly in the US) around 10 years ago, but that was part of the role I had at the time and my employer paid the bills. No employer pays the bills now, but as it is my own time I try to make presenting abroad a chance for a short holiday too, taking a day or two either side of the event to look around. And actually, it is nice spending time with other people who present at or attend user group meetings.

Another part of it is I just like presenting. This is not quite so Disney Nice Guy; there is an aspect that is more selfish, in that standing up, being listened to and telling people stuff that maybe they don’t know makes me feel better about myself. Better about myself? OK, I’ll let that stand for now, but it is more that it makes me feel I am achieving something and having an impact. That I am useful. Fundamentally it is still a desire to help, and presenting does not scare me (I know it is scary for a lot of people, but then a lot of people are not scared of heights and I am – it all balances out). But with a slice of “look at me!!!” thrown in.

There are also rewards for the effort. I’ve got to know a lot more people as a result of presenting, blogging (and now tweeting) than I would have had I stayed just one of the audience. It has helped me make more friends. As I said above, part of what is now nice about user group meetings for me is meeting friends I’ve made who are also on the speaker circuit, and there are inevitably a few drinks in the evening whenever there is a user group meeting. It also gives you more exposure in the community and helps lead to job opportunities – or at least that is the theory. No one has yet offered me a job because they liked my blog post or presentation!

That leads me to the last aspect of volunteering. Some people volunteer primarily for selfish reasons: to get bragging rights, to get it on their CVs, to help them get sales contacts or better jobs. The odd thing is, people who do it for those reasons tend not to last – volunteering for user groups is a lot of hard work for those rewards. You can usually spot them, as they are the ones who don’t actually do a lot or complain all the time about the coffee being bad (actually, usually the coffee IS bloody terrible) and other things. Don’t get me wrong, some of those rewards do come with the volunteering, but if someone is volunteering primarily to get them, it does not seem to work out for them. Or maybe that is my socialism coming out again :-). Fundamentally, I think volunteering only works if, at the core of it, you want to help other people. Maybe that is why other volunteers are such nice people to hang around with.

Why do you do it? (or not).

Jonathan Lewis's picture

Understanding SQL

From time to time someone publishes a query on the OTN database forum and asks how to make it go faster, and you look at it and think it’s a nice example to explain a couple of principles because it’s short, easy to understand, obvious what sort of things might be wrong, and easy to fix. Then, after you’ve made a couple of suggestions and explained a couple of ideas the provider simply fades into the distance and doesn’t tell you any more about the query, or whether they’ve taken advantage of your advice, or found some other way to address the problem.

Such a query, with its execution plan, appeared a couple of weeks ago:

UPDATE FACETS_CUSTOM.MMR_DTL
SET
	CAPITN_PRCS_IND = 2,
	FIL_RUN_DT = Current_fil_run_dt,
	ROW_UPDT_DT = dta_cltn_end_dttm
WHERE
	CAPITN_PRCS_IND = 5
AND	HSPC_IND ='Y'
AND	EXISTS (
		SELECT	1
		FROM	FACETS_STAGE.CRME_FUND_DTL_STG STG_CRME
		WHERE	STG_CRME.MBR_CK = MMR_DTL.MBRSHP_CK
		AND	MMR_DTL.PMT_MSA_STRT_DT BETWEEN STG_CRME.ERN_FROM_DT AND STG_CRME.ERN_THRU_DT
		AND	STG_CRME.FUND_ID IN ('AAB1', '1AA2', '1BA2', 'AAB2', '1AA3', '1BA3', '1B80', '1A80')
	)
AND	EXISTS (
		SELECT	1
		FROM	FACETS_CUSTOM.FCTS_TMS_MBRID_XWLK XWLK
		WHERE	XWLK.MBR_CK = MMR_DTL.MBRSHP_CK
		AND	MMR_DTL.PMT_MSA_STRT_DT BETWEEN XWLK.HSPC_EVNT_EFF_DT AND XWLK.HSPC_EVNT_TERM_DT
	)
;

-------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT              |                       |     1 |   148 | 12431   (2)| 00:02:30 |
|   1 |  UPDATE                       | MMR_DTL               |       |       |            |          |
|   2 |   NESTED LOOPS SEMI           |                       |     1 |   148 | 12431   (2)| 00:02:30 |
|*  3 |    HASH JOIN RIGHT SEMI       |                       |    49 |  5488 | 12375   (2)| 00:02:29 |
|   4 |     TABLE ACCESS FULL         | FCTS_TMS_MBRID_XWLK   |  6494 | 64940 |    24   (0)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL         | MMR_DTL               |   304K|    29M| 12347   (2)| 00:02:29 |
|*  6 |    TABLE ACCESS BY INDEX ROWID| CRME_FUND_DTL_STG     |     1 |    36 |     5   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN          | IE1_CRME_FUND_DTL_STG |     8 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("XWLK"."MBR_CK"="MMR_DTL"."MBRSHP_CK")
       filter("XWLK"."HSPC_EVNT_EFF_DT"<=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT") AND
              "XWLK"."HSPC_EVNT_TERM_DT">=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT"))
   5 - filter("CAPITN_PRCS_IND"=5 AND "HSPC_IND"='Y')
   6 - filter(("STG_CRME"."FUND_ID"='1A80' OR "STG_CRME"."FUND_ID"='1AA2' OR
              "STG_CRME"."FUND_ID"='1AA3' OR "STG_CRME"."FUND_ID"='1B80' OR "STG_CRME"."FUND_ID"='1BA2' OR
              "STG_CRME"."FUND_ID"='1BA3' OR "STG_CRME"."FUND_ID"='AAB1' OR "STG_CRME"."FUND_ID"='AAB2') AND
              "STG_CRME"."ERN_FROM_DT"<=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT") AND
              "STG_CRME"."ERN_THRU_DT">=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT"))
   7 - access("STG_CRME"."MBR_CK"="MMR_DTL"."MBRSHP_CK")

The most informative bit of narrative that went with this query said:

“The table MMR_DTL doesnt have index on these columns CAPITN_PRCS_IND , HSPC_IND .. Since this is an update stmt which will update 85k records, In a dilema whether to index these columns or not .. And this table MMR_DTL is an ever growing table. Worried about the update performance. “

This was in response to an observation that there was a full tablescan on MMR_DTL at operation 5 despite the predicate “CAPITN_PRCS_IND”=5 AND “HSPC_IND”=’Y’. You’ll note that the predicted cardinality for that scan is 304K and the update statement is going to change CAPITN_PRCS_IND from the value 5 to the value 2 – so it’s not entirely unreasonable to be worried about the impact of creating an index that included the column capitn_prcs_ind.
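
If you want something concrete to chew on while you wait: the indexing option the OP is agonising over would look something like the hypothetical sketch below (index name invented). The cost to weigh is that the update changes capitn_prcs_ind on every row it touches, so all 85K updated rows would also incur index maintenance.

-- Hypothetical sketch only, not a recommendation from this post:
CREATE INDEX mmr_dtl_capitn_hspc_ix
    ON facets_custom.mmr_dtl (capitn_prcs_ind, hspc_ind);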

What more can we say about this query, given the limited information? Lots – unfortunately the owner of the query isn’t giving anything else away.

I’m going to leave this note unfinished to give people a little chance to think about the clues in the request, the questions they might ask, reasons why there might be a performance problem, and strategies they might investigate, then I’ll update the posting with a few ideas some time in the next 24 hours.


mwidlake's picture

Guaranteed Method of Boosting your Oracle Skills

I can tell you how to be a better Oracle DBA, Developer, Designer, Architect – whatever your flavour of role or aspect of profession. If Oracle tech is part of your working world, I can hand you the key to improvement. And it is easy.

I am totally assured(*) I can do this. It will work for every single one of you reading this post (except for you two, you know who you are). And you must send me $100 before I tell you how…

Hell, no you don’t! This is not some bull-droppings selling piece, it is just advice. And some advice aimed directly at myself too.

When did you last read the Oracle Server/Database Concepts manual? You know, that fairly short book (in fact, from 11G it is really short, with links through to other chapters in other books) that explains how Oracle TM (Copyright) actually does stuff? What it can do? It does not go into too many details but rather gives you a brief introduction to each concept and the fundamental “how it works” information.

Thanks to Kevin Fries who pointed out it would be nice if I linked to the online concepts manuals (as he did in his comment):
Here is the online 12C Database Concepts manual.
Here is the online 11GR2 Database Concepts manual for those still with 11GR2.
Here is the online 10GR2 Database Concepts manual for those trapped in the past with 10GR2.

Read it. Read it this week. I am confident that if you read the latest Oracle Database Concepts manual you will be ahead of the game by a massive step.

Oracle 7 instance diagram

Why am I so sure? Because we forget what Oracle has been able to do for years and we miss new abilities if our day-job has not touched on them since they came in. I think there is a growing move to learning only what you really need to know to get the job done (as we are all so busy), as we know we can search the web for the rest. My friend Neil Chandler came up with a name for it: JIT (“Just In Time”) learning. Only, you can’t easily search for what you don’t know (or have forgotten) is possible with the tech you use. If you go through the concepts manual you will be reminded of stuff you forgot, things you missed or {and this is key to newbies} gain an outline understanding of what Oracle can do.

I became fascinated with how many people read the concepts manual about a decade ago and started asking a question whenever I presented on any Oracle topic: “Who has read the concepts manual for the version of Oracle you mostly work with?”. Over the last 10 or 12 years the number of hands has decreased from well over 50%. In 2012, at a UK meeting, it hit the bottom of the barrel: no hands whatsoever. Oddly enough, a few weeks later I was in Slovenia (for non-European people, a lovely country bordering Italy and Austria – google it if you need more help) and the same question resulted in 40% of the audience raising a hand. When I was in the US 6 months later, no hands at all again. In the UK and Europe since, no hands or occasionally one hand – and a few questions usually nailed down that it was a prior version of the manual they had read.

I took some time to ask this question again at a UK user group meeting about 4 months ago (no hands came up, of course) and asked “why?”. The consensus was “we know most of what is in the concepts manual, we just need to know what has changed” – with an undercurrent of not having time to read the now-huge set of Oracle manuals. A few people suggested just reading the New Features guide. This was a crowd who did not know what a table cluster was (“Ha, look at ME! I know what a table cluster is! Hahahahaaaa – OK, no one uses them.”). (British ironic self-deprecation there.)
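
For anyone who was in that crowd, here is a minimal sketch of the feature in question (all object names invented) – a table cluster stores rows from related tables that share a cluster key in the same blocks, effectively pre-joining them on disk:

-- Minimal indexed table cluster sketch; names are made up.
CREATE CLUSTER emp_dept_cluster (deptno NUMBER(2)) SIZE 512;
CREATE INDEX emp_dept_cluster_ix ON CLUSTER emp_dept_cluster;

CREATE TABLE dept (
    deptno NUMBER(2) PRIMARY KEY,
    dname  VARCHAR2(30)
) CLUSTER emp_dept_cluster (deptno);

CREATE TABLE emp (
    empno  NUMBER(4) PRIMARY KEY,
    ename  VARCHAR2(30),
    deptno NUMBER(2) REFERENCES dept
) CLUSTER emp_dept_cluster (deptno);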

Reading “New Features” is certainly better than nothing but I feel it is not enough as it does not remind us of the established stuff we have forgotten. I am on a bit of a personal Jihad to explain the basics of Oracle to any Newbie who cares to listen and I have to keep checking my facts with the concepts manual and some chosen expert friends (thank you VERY MUCH expert friends) and I keep stumbling over stuff I don’t know, misunderstood or forgot. And I have been an “expert” in Oracle since… Well, before the millennium bug or Android phones or iTunes had been invented. THAT LONG! I know my shit – and some of it is, well…. wrong.

Actually, I have a confession. I have not read the 11g or 12C concepts manual. I told you this advice was aimed at me too.

So, Go Read The Oracle 12C Concepts Manual. Go ON! Go and read it!!!! Oh. Still here? Well, I AM going to read it – the last one I read properly was the 10G concepts manual. And as part of my current push to present, talk and blog about the basics of Oracle, I will blog what jumps out at me. I know one thing – if I follow my own advice I will not be quiet any time between now and August; I will be posting blogs left, right and center about what I learn.

I’ll use the tag 12Cbasics.

Let the learning of basics begin.

Oracle 7 or Oracle 8, 1.5" of pure info

Thanks to Joel Garry for digging his old manuals out of the basement and doing this shot for me :-)

(*) it won’t work if you already read the latest concepts manual. But most people have not! Heck, I did not charge for the advice, so sue me if you read it already.

khailey's picture

Delphix announces masking acquisition

I’m excited to announce the first acquisition by Delphix (portending many to come). Delphix has just acquired the data masking company Axis Technology Software and is integrating the Axis masking technology into the core Delphix product. Masking is a feature that 9 out of 10 customers have asked for after buying Delphix. Delphix eliminates the data cloning bottleneck for application development environments, and naturally the next concern that arises is how to mask the production data in those environments. The answer has been to use Axis masking, which Delphix has been selling prepackaged with Delphix in partnership with Axis. Axis was so impressed with Delphix that they wanted to become part of the Delphix team, and Delphix has been equally impressed with Axis and was more than pleased to bring the two companies together. Our companies have offices located just across the street from each other in Boston, making the transition and integration of Axis into Delphix easy.

Axis masking is an awesome product and, despite being a small team, they have succeeded in penetrating the Challengers quadrant of the Gartner Magic Quadrant for masking solutions. Now, with the Axis code integrated into the core of Delphix, we look forward to seeing the combined Delphix solution, Data as a Service (DaaS), in the Leaders quadrant.

Masking is crucial in the industry now as security is a top concern. There were 783 industry data breaches in 2014, up 20% over 2013, raising the urgency of data security. Delphix has a two-pronged approach to data security:

  1. reduce surface area of exposure
  2. mask sensitive data outside of production

Surface Area of Risk

Production data is typically copied into many other environments, such as backup, reporting, development, QA, UAT and sandbox environments. Thus there are often up to a dozen copies of sensitive data to secure, creating more work and more exposure to risk. Delphix takes production data and compresses it into one shared footprint across all non-production copies, reducing the surface area of risk and providing a clear view into who has access to what data and when. With Delphix, companies can control, monitor and audit who has access to what data and when they had access.


Masking

The second part of the equation is masking sensitive data outside of production, so that even if people have access to the data, it poses little to no danger. When it comes to security, the problem is the people who have access to the data. When data is sensitive, access has to be limited, yet more than 80% of sensitive data is found in insecure environments such as development and QA. Data from production systems is copied off to backup, reporting, development, QA, UAT, sandbox and other environments. Most of those environments don’t require real sensitive data such as real social security numbers, credit card numbers, patient names and diagnoses. Data masking replaces identifying data such as social security numbers, birth dates and addresses with scrambled data that can’t be matched to actual customers. As a result, data that is stolen or breached can’t be used maliciously.

Masking data can be a tricky, complex operation. It’s not a simple matter of blanking out a field of data; instead one has to replace data with compatible but similar data. People’s names should be replaced with reasonable people’s names and not just a list of random characters. Social security numbers should look like social security numbers and not just random numbers. Data that is masked should not be able to be unmasked. Data that is masked should consistently mask to the same value across different databases in a contained development environment. Referential integrity should be maintained. There are many algorithms one can use for masking, depending on the different types of data and concerns. The Axis masking technology provides these different algorithms and will even go into the data and analyze it to help determine what could potentially be sensitive data.
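
To make those requirements concrete, here is a toy sketch – emphatically not the Axis algorithm, and with invented table and column names – of deterministic, format-preserving masking. The same input always yields the same nine-digit output, so masked values stay consistent across database copies, while the original value cannot simply be read back:

-- Toy sketch only: a real product would use stronger, salted algorithms
-- and preserve referential integrity across tables.
-- ORA_HASH is deterministic (the same SSN always masks to the same value)
-- and LPAD keeps the nine-digit format.
UPDATE customers
SET    ssn = LPAD(TO_CHAR(ORA_HASH(ssn, 999999998)), 9, '0');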


With Delphix, data can be synchronized with a Delphix appliance in the production zone, masked there, and only the masked data propagated to a Delphix appliance in a non-production zone, thus guaranteeing data security outside of the production zone.

Summary

Delphix accelerates cloning production data into development and QA environments, eliminating the need for data subsets and synthetic data, thus reducing bugs and speeding up development times. Axis masking, now being integrated into core Delphix, easily, efficiently and robustly masks sensitive data in development and QA environments. The combination of the two technologies brings a new approach not yet seen elsewhere in the industry and eliminates what some say are the two biggest bottlenecks in supplying production parity environments to application development – cloning the data and masking the data.


Richard Foote's picture

Indexing and Transparent Data Encryption Part II (Hide Away)

In Part I, I quickly ran through how to set up an encrypted tablespace using Transparent Data Encryption and the need to take care when creating indexes outside of these tablespaces. Another method of encrypting data in the Oracle database is to just encrypt selected columns. Although the advantage here is that we can just encrypt sensitive columns of interest (and that the […]

khailey's picture

Delphix Data as a Service (DaaS)

#222222;">The capabilities of Delphix can be differentiated from snapshot technologies through the following hierarchy:

#222222;">
Screen Shot 2015-05-19 at 4.39.57 PM
  1. Data as a Service (DaaS) (the Delphix approach to data management)
  2. Virtual Data (end-to-end collection and provisioning of thin clones)
  3. Thin Cloning
  4. Storage Snapshots
At the top we have the most powerful and advanced data management features, which enable fast, easy, secure, auditable data flow through organizations.
#2970a6;" href="http://www.oraclerealworld.com/wp-content/uploads/2014/05/Screen-Shot-2014-05-21-at-8.08.47-AM.png" target="_blank">Screen Shot 2014-05-21 at 8.08.47 AM
DaaS is built on top of other technologies. At the bottom we have the minimal building blocks, starting with storage snapshots. Storage snapshots can be used to make “thin clone” databases. Storage snapshots have been around for nearly two decades but have seen minimal usage for database thin cloning due to the technical and managerial hurdles. Part of the difficulty is that creating thin clones requires work by multiple people and/or teams – DBAs, system admins, storage admins and so on.

#555555;">Why does it take so long to clone databases with file system snapshots? There are two reasons

    #555555;">
  • bureaucracy
  • technical challenges

#555555;">Bureaucracy

#555555;">Depending on your company the more or less bureaucratic steps you will have (one customer reported 300 steps to thin cloning)  to get a thin clone database allocated. If you are the DBA, Storage, Systems guru all rolled into one at a small company, and if so bravo, you can probably do it pretty  quick. On the other hand if you wear all those hats, you are probably the crucial person in IT and most critical IT processes grind to a halt because they depend on you and you are super busy.

#555555;">#2970a6;" href="http://www.oraclerealworld.com/wp-content/uploads/2014/05/Screen-Shot-2014-05-23-at-4.01.39-PM.png" target="_blank">Screen Shot 2014-05-23 at 4.01.39 PM

#555555;">Why does it take so long to pass tasks between people and  groups? Because a task that might take an hour when someone is completely free and idle will take multiple days as that person starts to be 95% busy or more. See the following chart from the book The Phoenix Project:

#555555;">#2970a6;" href="http://www.oraclerealworld.com/wp-content/uploads/2014/05/Screen-Shot-2014-05-23-at-4.02.15-PM.png" target="_blank">Screen Shot 2014-05-23 at 4.02.15 PM

#555555;">Technical Challenges

#555555;">#2970a6;" href="http://www.oracle-base.com/articles/11g/clonedb-11gr2.php" target="_blank">Screen Shot 2013-11-11 at 8.51.06 PM

#555555;">The easiest way to create a clone is to snapshot the production storage. To snapshot the production storage, either shutdown the source database, take a snapshot or more  likely put all the table spaces in hot backup mode, take a snapshot, and then take all of the table space out of hot backup mode. If the database spans more than one LUN it may take special storage array options to snapshot all the LUNs at the same point in time. Once the all the database LUNs are snapshot, then you can use the snapshots to create a “thin clone” of the production database on the same storage as production.

#555555;">Problem with this scenario no matter what storage you use is that the clone is doing I/O on the same LUNs as production.  The whole point of cloning production is to protect production but in this case the clone’s I/O will be hurting production. Ooops

#555555;">#2970a6;" tabindex="0" src="https://ci6.googleusercontent.com/proxy/fcE42SEokWkiuZtZtVW52Y_dRCzL1qzsiLy2ZfObx5ZC7y-ulC7_gVYPwFYCJs4CcZUigSCfxM6oO6guatMgyVslgFCz-YJfxvFDDvu8RZSmGUHan_5QlmXUzoxX_vmciqQC02fMyxGBchBOhZ9_BLRRzPcT__gKdJbdmQ=s0-d-e1-ft#http://www.kylehailey.com/wp-content/uploads/2013/11/Screen-Shot-2013-11-11-at-8.51.31-PM-1024x593.png" alt="Screen Shot 2013-11-11 at 8.51.31 PM" width="385" height="222" />

 

#2970a6;" href="http://www.oraclerealworld.com/wp-content/uploads/2014/05/Screen-Shot-2014-05-21-at-8.08.47-AM.png" target="_blank">Screen Shot 2014-05-21 at 8.08.47 AM

#555555;">What we want to do is somehow get a copy of production onto some non-production storage where we can snapshot it. This means making a full physical copy of production onto a “development filer.” Once a copy has been made we can make clones by snapshoting the copy. These snapshots then require configuration to make them available to target machines either  over fiber channel or mounting them over NFS and then recovering the database on the target machines.

#555555;">Problem with this scenario is that what if tomorrow we want a clone of production as it is that day? Currently we only have the copy from yesterday, thus we have to copy across the whole copy of production onto the “development filer.” Continually copying the source each time we need a clone at a different point in time defeats the purpose of creating thin clones in the first place.

#555555;"> 

Delphix is the solution

#555555;">

In order to overcome the obstacles to creating thin clones, all the steps can be optimized and automated with a technology called “virtual data” (like virtual machines).
#2970a6;" href="http://www.oraclerealworld.com/wp-content/uploads/2014/05/Screen-Shot-2014-05-21-at-8.08.47-AM.png" target="_blank">Screen Shot 2014-05-21 at 8.08.47 AM
Virtual data is just the first step in automation. The next step is adding all the processes, functionality and control needed to manage the virtual data – which is DaaS.
#2970a6;" href="http://www.oraclerealworld.com/wp-content/uploads/2014/05/Screen-Shot-2014-05-21-at-8.08.47-AM.png" target="_blank">Screen Shot 2014-05-21 at 8.08.47 AM
File system snapshots address the very bottom of the hierarchy; that is, they only manage storage snapshots. They have no automated thin cloning of databases. Without automated thin cloning of databases there is no end-to-end processing of data from source to thin-cloned target, i.e. virtual data. Without virtual data there is no DaaS.
#2970a6;" href="http://www.oraclerealworld.com/wp-content/uploads/2014/05/Screen-Shot-2014-05-21-at-8.08.47-AM.png" target="_blank">Screen Shot 2014-05-21 at 8.08.47 AM
DaaS features, all of which are encompassed by Delphix, include:
#2970a6;" href="http://www.oraclerealworld.com/wp-content/uploads/2014/05/Screen-Shot-2014-05-21-at-8.08.47-AM.png" target="_blank">Screen Shot 2014-05-21 at 8.08.47 AM
  • Security
    • Masking
    • Chain of custody
  • Self Service
    • Login and Roles
    • Restrictions
  • Developer
    • Data Versioning and Branching
    • Refresh, Rollback
  • Audit
    • Live Archive
  • Modernization
    • Unix to Linux conversion
    • Data Center migration
    • Federated data cloning
    • Consolidation

DaaS re-invents data management and provisioning by virtualizing, governing, and delivering data on demand.

Most businesses manage data delivery with manual, ad hoc processes: users file change requests, then wait for DBAs, systems administrators, and storage administrators to push data from system to system, bogging down production applications, networks, and target systems with long load times. Data delays cost businesses billions a year in lost productivity and low utilization of systems and software resources.

As a result, there is an enormous opportunity to optimize data management with DaaS, yielding significant business impact:

  • Drive revenue, competitive differentiation with faster application time to market
  • Enable faster growth via better release management of enterprise applications
  • Improve customer intimacy, upsell, cross-sell with faster, more flexible analytics
  • Free budget for innovation by reducing IT maintenance costs
  • Reduce compliance risk through better governance, data security.

Businesses need to manage data as a strategic asset across their operations, applying the same rigor as supply chain optimization for manufacturing companies.

DaaS Transformation Process with Delphix

Delphix applies a three-step process to transform the data supply chain:

  • Analyze: survey systems, processes, teams across data supply chains
  • Transform: virtualize, automate data delivery with centralized governance
  • Leverage: drive business value via new data products, process optimization

Businesses typically manage multiple data supply chains simultaneously, all of which are targets for data chain optimization:

  • Compliance retention, reporting
  • Modernization, migration projects
  • Application projects and development
  • BI, analytics
  • Data protection.

Delphix re-invents the data supply chain with its DaaS:

  • Install data engines in hours across all repositories, locations (including cloud)
  • Connect: non-disruptively sync data across sites, systems, architectures
  • Control: secure data, track release versions, preserve and prove data history
  • Deploy: automatically launch virtual data environments in 10x less space, time
  • Leverage data with self service refresh, reset, branching, bookmarks, integration.

pete.sharman's picture

Installing Enterprise Manager 12c For Testing

A question I get asked fairly often when I’m at conferences, either during the Q&A for my sessions or in general chit chat (a.k.a. networking) afterwards, is “I want to play around with the features in Enterprise Manager 12c but don’t want to do that in our Production environment. How do I go about installing a copy of Enterprise Manager 12c somewhere else in the easiest way as a test environment?”

The answer to that is pretty straightforward: download the relevant VM template from Oracle’s Software Delivery Cloud.

Note: The screenshots shown in this post are from the new and enhanced Oracle Software Delivery Cloud, rather than the classic Software Delivery Cloud, so if you use the classic form your screens will be different.

On the first screen, make sure you understand the export restrictions and click “Accept”:

Installing from SDC1

On the next screen, select the “Linux/OVM/VMs” checkbox, then type “Enterprise Manager” into the Product field. At the time of writing this blog post, you will see there are three templates you can download:

  1. Oracle VM Template for Deploying Oracle Enterprise Manager Cloud Control on ODA
  2. Oracle VM Template for Oracle Enterprise Manager Cloud Control
  3. Oracle VM VirtualBox for Deploying Oracle Enterprise Manager Cloud Control

Installing from SDC2

In this example, I’ll select the ODA template and select the x86 64 bit platform:

Installing from SDC3

Note you can choose more products to download here if you want, but for now just click the “Continue” button:

Installing from SDC4

On the next screen, you will be provided a list of the available releases (there may be more than one). The most current release will be chosen for you by default. Again, just click the “Continue” button on this screen, as only one release is currently available:

Installing from SDC5

You will then be asked to read and accept the license agreement. Once you have read that, tick the checkbox to say you’ve reviewed it then click the “Continue” button:

Installing from SDC6

The next screen has a variety of options that you can choose. The VM template comprises a number of zip files, so you can download each one individually, or you can click the “Download All” button:

Installing from SDC7

The other option available to you from the previous screen is to click on “WGET Options”. WGET is a popular package for retrieving files via the command line. Clicking on the “Download .sh” button will allow you to download a text file that you can run on the command line to retrieve your files. You will need to edit the script to include your oracle.com credentials before you run it:

Installing from SDC8

For now, I’m going to use the “Download All” button to continue the process. When you do that, you may (depending on your browser settings) get a message that a pop-up blocker has been detected. There will be an option to allow you to disable the pop-up blocker for this site (in my case using Firefox, I need to click the “Options” button on the top right to address the issue). Once that’s done, you can click the “Download All” button again to proceed:

Installing from SDC9

On the next screen, you can install the download manager to manage the download process for you by clicking “Download the installer”:

Installing from SDC10

You need to save the installer executable by clicking “Save File”:

Installing from SDC11

Select the directory you want the installer saved to:

Installing from SDC12

The installer will automatically execute when the file is saved. You need to accept the Akamai NetSession Interface EULA and click “Next”:

Installing from SDC13

Depending on your configuration, you may get a message that Windows Firewall has blocked some features of the app. Click on “Allow Access”:

Installing from SDC14

On the next screen you need to select a download folder where the installer will download the files to:

Installing from SDC15

You will then see a message that the download manager has been successfully installed. Click “Close”:

Installing from SDC16

The files will then start downloading to the folder you specified:

Installing from SDC17

For this walkthrough I cancelled the download because of the time it would take to download all the files, so you may see more screens after this. The only one I saw was a survey screen:

Installing from SDC18

Once you have all the files downloaded, you just need to import the template into either Oracle VM or VirtualBox (depending on which template you downloaded) and start the template. Voila! One working EM 12c test environment, with little effort involved!

martin.bach's picture

If you use swingbench take note of sbutil

This is going to be a very short post for a change. I have used Swingbench extensively and really love the tool. Many thanks to Dominic Giles!

Recently he announced a new tool on his blog that you can use to inflate your data volume. So instead of using the “-scale” argument when executing oewizard you can just keep the defaults and later on create as much data as you like. Here is an example, the reason for this post.

Setting the Scene

If you have not yet downloaded the latest Swingbench version and JDK 8, do this first. You can get Swingbench from the tool’s landing page and the Java software from java.sun.com. I used Swingbench 2.5.791 (stable). With both deployed, I created the Order Entry schema in lights-out mode:

[oracle@lab bin]$ ./oewizard -dbap secret -u soe -p soe -cl -cs //lab/martin -ts soe_tbs -create -dba system
SwingBench Wizard
Author  :        Dominic Giles
Version :        2.5.0.971

Running in Lights Out Mode using config file : oewizard.xml
The following statement failed : GRANT EXECUTE ON dbms_lock TO soe : Due to : ORA-01031: insufficient privileges

============================================
|           Datagenerator Run Stats        |
============================================
Connection Time                        0:00:00.006
Data Generation Time                   0:01:06.110
DDL Creation Time                      0:00:30.346
Total Run Time                         0:01:36.465
Rows Inserted per sec                      183,121
Data Generated (MB) per sec                   14.9
Actual Rows Generated                   13,002,138

Post Creation Validation Report
===============================
The schema appears to have been created successfully.

Valid Objects
=============
Valid Tables : 'ORDERS','ORDER_ITEMS','CUSTOMERS','WAREHOUSES','ORDERENTRY_METADATA','INVENTORIES',
'PRODUCT_INFORMATION','PRODUCT_DESCRIPTIONS','ADDRESSES','CARD_DETAILS'
Valid Indexes : 'PRD_DESC_PK','PROD_NAME_IX','PRODUCT_INFORMATION_PK','PROD_SUPPLIER_IX','PROD_CATEGORY_IX',
'INVENTORY_PK','INV_PRODUCT_IX','INV_WAREHOUSE_IX','ORDER_PK','ORD_SALES_REP_IX',
'ORD_CUSTOMER_IX','ORD_ORDER_DATE_IX','ORD_WAREHOUSE_IX','ORDER_ITEMS_PK','ITEM_ORDER_IX',
'ITEM_PRODUCT_IX','WAREHOUSES_PK','WHS_LOCATION_IX','CUSTOMERS_PK','CUST_EMAIL_IX',
'CUST_ACCOUNT_MANAGER_IX','CUST_FUNC_LOWER_NAME_IX','ADDRESS_PK','ADDRESS_CUST_IX',
'CARD_DETAILS_PK','CARDDETAILS_CUST_IX'
Valid Views : 'PRODUCTS','PRODUCT_PRICES'
Valid Sequences : 'CUSTOMER_SEQ','ORDERS_SEQ','ADDRESS_SEQ','LOGON_SEQ','CARD_DETAILS_SEQ'
Valid Code : 'ORDERENTRY'
Schema Created
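
Note the ORA-01031 against the dbms_lock grant in the output above – presumably the DBA account I supplied cannot grant execute on a SYS-owned package. If you want the grant in place, a quick fix is to issue it yourself from a suitably privileged session:

-- run as SYSDBA (or any user allowed to grant on SYS objects)
GRANT EXECUTE ON dbms_lock TO soe;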

More Data!

The next step is to invoke sbutil:

[oracle@lab bin]$ ./sbutil -u soe -p soe  -cs //lab/martin -soe parallel 12 -dup 4
Getting table Info
Got table information. Completed in : 0:00:01.558
Dropping Indexes
Dropped Indexes. Completed in : 0:00:03.252
Creating copies of tables
Created copies of tables. Completed in : 0:00:00.042
Begining data duplication
Completed Iteration 4. Completed in : 0:00:07.942
Creating  Constraints
Created  Constraints. Completed in : 0:00:28.737
Creating  Indexes
Created  Indexes. Completed in : 0:00:54.541
Updating Metadata and Recompiling Code
Updated Metadata. Completed in : 0:00:00.754
Updating Sequences
Updated Sequences. Completed in : 0:00:00.792
Determining New Row Counts
Got New Row Counts. Completed in : 0:00:03.132
Completed Data Duplication in 0 hour(s) 2 minute(s) 8 second(s) 454 millisecond(s)
----------------------------------------------------------------------------------------------------------
|Table Name          |  Original Row Count|       Original Size|       New Row Count|            New Size|
----------------------------------------------------------------------------------------------------------
|ORDER_ITEMS         |           4,288,582|              280 MB|          17,154,328|              1.1 GB|
|CUSTOMERS           |           1,000,000|              136 MB|           4,000,000|            552.4 MB|
|CARD_DETAILS        |           1,500,000|               80 MB|           6,000,000|            337.8 MB|
|ORDERS              |           1,429,790|              160 MB|           5,719,160|              664 MB|
|ADDRESSES           |           1,500,000|              184 MB|           6,000,000|            571.8 MB|
----------------------------------------------------------------------------------------------------------
|Total               |                    |              840 MB|                    |              3.2 GB|
----------------------------------------------------------------------------------------------------------

How Cool is That?

Really, it is. And look at the timings. This didn’t take too long, even on my poor old AMD 6200 series lab server.

Richard Foote's picture

Indexing and Transparent Data Encryption Part I (The Secret Life of Arabia)

Database security has been a really hot topic recently so I thought I might write a few posts in relation to indexing and Transparent Data Encryption (TDE) which is available as part of the Oracle Advanced Security option. To protect the database from unauthorized “backed-door” accesses, the data within the actual database files can be encrypted. […]

dbakevlar's picture

SQL ID Specific Performance Information

When you need information about one SQL_ID and don’t need everything and the kitchen sink, there are a few different ways to collect this via Oracle. I’m surprised how rarely this is covered in performance tuning/optimization (whatever the current “acceptable” term is for fixing a database when performance issues arise… :) ) classes, manuals and documentation. I’d like to cover two of my favorites here, and note there are other options that I won’t be digging into (SQLd360, SQLTXPLAIN, tracing, etc…).

The Workload Repository SQL Report

I also refer to this as the “SQL ID Specific AWR Report”. Many of you may have come across this report via Cloud Control: when you go into a unique SQL_ID in the console and look at the details, you notice there are two options below the wait event activity, one to run an AWR SQL Report and one for an ASH report. What many don’t realize is that this report is also available from the command line on the database host.

awrsq1

Report Location:  $ORACLE_HOME/rdbms/admin/awrsqrpt.sql

Report Format:  HTML from Cloud Control and the command line; TEXT is also an option from SQL*Plus.

Requirements:  Execute privileges on the workload repository package (DBMS_WORKLOAD_REPOSITORY) and privileges to run reports and store them on the host; if you run it via Cloud Control, you can just save off the HTML report to your desktop.

Licensing:  Yes, you do need the Diagnostics Pack for this feature.
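
From SQL*Plus on the database host, running the report is just a script call; a quick sketch (the script prompts you through the choices):

SQL> @?/rdbms/admin/awrsqrpt.sql
-- You will be prompted for the report type (html or text), the number of
-- days of snapshots to list, the begin and end snapshot IDs, the SQL_ID
-- of interest, and a report name.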

What’s so great about this report?

This report, like “Search SQL” (which we’ll go over soon enough), allows you to go through all of the AWR history you possess and shows the minimum and maximum runs, the execution plan for each plan hash value that exists, along with elapsed times and other pertinent statistical information about the SQL_ID in question.

Notice that the database version, the fact that it’s a RAC instance, and which node in the RAC environment the statement was run on are all stated at the top of the report, so if anyone questions this you can quickly locate the information.

awrsq2

The report will display the SQL_ID in question and identify whether there was more than one plan hash value (PHV); the report numbers each PHV and shows its ID, which makes it easy to know which execution plan and events belong to which if there is more than one. For our example, we only have one here.

awrsq3

Remember that the elapsed time is the total for the snapshot timeline involved; you need to look at the per-execution figures to see how long each run actually required to complete. These statistics are very helpful when identifying the wait event(s) involved in the performance challenge.

awrsq4

The plan is nicely formatted and you can step into the time used for each step in the process. I wish it defaulted to showing temp usage, but alas, that is not to be in the HTML report. This is the only area where I prefer the HTML report over the text version – there is a considerable size difference when storing the reports, and I hate giving up any data…. :)

If you’ve had an outage during your snapshot timeline, then this report, just like AWR, is dependent upon uptime and will notify you of the issue, requiring you to pick snapshots that have no downtime involved.

Search SQL

Search SQL has been around since EM10g, but it’s one of the least known features. I like it as it gives a clean, high-level view of performance over a long period of time, by snapshot, and then you can dig down to do research from there.
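
If you prefer the command line, roughly the same historical view can be pulled straight from the AWR views with a sketch like the one below (diagnostics pack licensing applies; the per-execution figure is the usual delta approximation):

SELECT s.snap_id,
       q.plan_hash_value,
       q.executions_delta,
       ROUND(q.elapsed_time_delta / NULLIF(q.executions_delta, 0) / 1e6, 3)
           AS elapsed_secs_per_exec
FROM   dba_hist_sqlstat q
       JOIN dba_hist_snapshot s
         ON  s.snap_id         = q.snap_id
         AND s.dbid            = q.dbid
         AND s.instance_number = q.instance_number
WHERE  q.sql_id = '&sql_id'
ORDER  BY s.snap_id;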

Report:  None really from the command line – it’s a compilation of report features.

Report Format:  None – this is via the console, but some of the reports can be saved off (such as ADDM and other reports…); you can also copy and paste or screenshot pages.

Licensing:  Due to the way we are using it here, both the diagnostics and the tuning pack will be required.

Using Search SQL

Using the same run as above, we can identify a SQL_ID that is causing us concern and that we want to investigate further:

awrss1

Copy the SQL_ID so you can paste it into the proper field in the Search SQL console. You’re now going to leave Top Activity (or ASH Analytics, whichever you are using – both work great for this type of initial identification…).

awrss2

Each Search SQL console is database specific, so click on the Performance drop-down, then SQL, then Search SQL for EM12c (in EM10g and EM11g this is directly under the Performance menu…).

awrss3

Update the following sections:

  • Add AWR Snapshots and Baselines.
  • Choose All snapshots to verify that you are seeing all the performance data contained in your AWR. If your database uses the default AWR retention, this is 8 days for 11.2 and above.
  • Enter in or paste the SQL ID into the SQL ID value section.
  • Click on Search (don’t click on Show SQL – I know this is a bit misleading, but it will just show you the SQL statement to be run by Cloud Control…).

awrss4

Four tabs will be shown; don’t be surprised if you don’t notice the completion, as the cursor cache often returns only one row:

  • Cursor Cache, (in the cache currently)
  • AWR Snapshot data, (where we’ll spend most of our time…)
  • AWR Baselines, (good data to have to back up changes in plan hash values if you have your database set to update automatically or if someone’s put in a plan from a baseline run.)
  • SQL Tuning Set if you’ve chosen to examine those, too.

You CAN examine the existing run, and looking at it this way really shows you how it’s currently running vs. how it has run, but if you’re just looking for when it really ran well, then skip the cursor section and jump to the AWR snapshots to see this SQL ID for ALL snapshots in the AWR:

awrss5

You’re seeing all of this sorted by elapsed time, descending from the largest to the least, so although we’re able to view patterns in how the SQL may be degrading over time, note whether there are plan hash value changes (in our example there are, and we didn’t see that for this statement in our AWR SQL Report) and click on the “Next 25” link at the bottom to get to the runs with smaller elapsed times.

awrss6

There are some runs with 0.00 run times. You can investigate these, but we’d find that they hadn’t completed or that no rows were returned. More important are the runs where the overall elapsed times are efficient and rows were returned. I’ve marked them in our report.

awrss7

Upon closer inspection, you’ll also notice that they all belong to the same plan hash value except for one run. This is a good indicator that this plan hash value should be identified as the best overall for this query.

Now as we all know, it’s not just about the one query.  The database could impact the run as well.

  • What resource limitations were in play?
  • Were there any concurrency issues?
  • Were there any other factors that affect overall performance?

On the right, you’ll notice that the snapshots are links. You can use these to inspect the overall performance of the database during the snapshot interval in which the execution occurred.

awrss9

Here’s all that high-level statistics info at the database level. You also have the opportunity to run an ADDM report from here and see that “all that’s important” view of your database.

awrss8

The second tab offers you the workload repository report for this database, including the summary telling you whether there was anything up in this database at that time that you should be aware of.

You can go back to the Search SQL report and then look at the individual executions listed, too.

awrss12

This will take you to the SQL Details page for whichever one you click on:

awrss10

Just as with any SQL Details, you can see the activity, the plan, plan control, etc.  The example above is for an execution from the day before, so it’s kind of an easy way to get to it vs. going and searching historically through Top Activity.

Baselines

Now why did we choose the Baselines as part of our search? Baselines, especially when they are automatically implemented, can impact performance at times, and knowing this information can quickly tell you whether a baseline – rather than some other culprit – was involved.

awrss11

We can see which plan hash value is attributed to which baseline and the overall elapsed time, and you can click on the Baseline Name link to see when and if it was implemented, to diagnose whether it’s part of your problem.

Hopefully this gives you some more tools for your arsenal when that boss or peer is standing at your desk demanding an answer to a challenge – what more can we ask for but answers! :)


