Oakies Blog Aggregator

dbakevlar's picture

RMOUG’s First WIT Scholarship Winner

I’ve been running the Women in Technology, (WIT) program at RMOUG since I first started planning it out in 2011, and it has grown to include other user groups and even other countries.  For the last couple of years, I’ve worked to add a WIT scholarship at RMOUG, but it isn’t easy to convince others to provide one when you are, in effect, asking them to choose one group over another.  There’s only so much money to go around and so many initiatives that we need to address each year for the Oracle community.

For 2015, we had a unique candidate submitted for our continuing education scholarship, referred to as the Stan Yellott scholarship after the wonderful RMOUG board member and much-loved mentor who passed away in 2006.  The candidate, Natalie Kalin, is from Pine Creek High School in Colorado Springs, whose students attend our yearly conference, Training Days, each February.  She stood out: not only had she chosen to major in robotics in college, she was already taking AP Robotics in high school and was using that education to bring STEM initiatives to middle school students in her district.

[Photo: Natalie Kalin]

Due to her technical skills, initiative, stellar transcripts and scholarship submission, it was easy for the RMOUG board of directors to recognize her and award a second scholarship for WIT.  I want to share the announcement from her local school district, recognize her accomplishment and thank her for making it so easy for me to create the WIT scholarship this year!

Thank you and congratulations, Natalie!

 



Copyright © DBA Kevlar [RMOUG's First WIT Scholarship Winner], All Rights Reserved. 2015.

cary.millsap's picture

What happened to “when the application is fast enough to meet users’ requirements?”

On January 5, I received an email called “Video” from my friend and former employee Guðmundur Jósepsson from Iceland. His friends call him Gummi (rhymes with “who-me”). Gummi is the guy whose name is set in the ridiculous monospace font on page xxiv of Optimizing Oracle Performance, apparently because O’Reilly’s Linotype Birka font didn’t have the letter eth (ð) in it. Gummi once modestly teased me that this is what he is best known for. But I digress...

His email looked like this:

It’s a screen shot of frame 3:12 from my November 2014 video called “Why you need a profiler for Oracle.” At frame 3:12, I am answering the question of how you can know when you’re finished optimizing a given application function. Gummi’s question is, «Oi! What happened to “when the application is fast enough to meet users’ requirements?”»

Gummi noticed (the good ones will do that) that the video says something different than the thing he had heard me say for years. It’s a fair question. Why, in the video, have I said this new thing? It was not an accident.

When are you finished optimizing?

The question in focus is, “When are you finished optimizing?” Since 2003, I have actually used three different answers:

When are you finished optimizing?
  A. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
    Source: Optimizing Oracle Performance (2003) pages 302–304.
  B. When the application is fast enough to meet your users’ requirements.
    Source: I have taught this in various courses, conferences, and consulting calls since 1999 or so.
  C. When there are no unnecessary calls, and the calls that remain run at hardware speed.
    Source: “Why you need a profiler for Oracle” (2014) frames 2:51–3:20.

My motive behind answers A and B was the idea that optimizing beyond what your business needs can be wasteful. I created these answers to deter people from misdirecting time and money toward perfecting something when those resources might be better invested improving something else. This idea was important, and it still is.

So, then, where did C come from? I’ll begin with a picture. The following figure allows you to plot the response time for a single application function, whatever “given function” you’re looking at. You could draw a similar figure for every application function on your system (although I wouldn’t suggest it).

Somewhere on this response time axis for your given function is the function’s actual response time. I haven’t marked that response time’s location specifically, but I know it’s in the blue zone, because at the bottom of the blue zone is the special response time RT. This value RT is the function’s top speed on the hardware you own today. Your function can’t go faster than this without upgrading something.

It so happens that this top speed is the speed at which your function will run if and only if (i) it contains no unnecessary calls and (ii) the calls that remain run at hardware speed. ...Which, of course, is the idea behind this new answer C.

Where, exactly, is your “requirement”?

Answer B (“When the application is fast enough to meet your users’ requirements”) requires that you know the users’ response time requirement for your function, so, next, let’s locate that value on our response time axis.

This is where the trouble begins. Most DBAs don’t know what their users’ response time requirements really are. Don’t despair, though; most users don’t either.

At banks, airlines, hospitals, telcos, and nuclear plants, you need strict service level agreements, so those businesses invest in quantifying them. But realize: quantifying all your functions’ response time requirements isn’t about a bunch of users sitting in a room arguing over which subjective speed limits sound the best. It’s about knowing your technological speed limits and understanding how close to those values your business needs to pay to be. It’s an expensive process. At some companies, it’s worth the effort; at most companies, it’s just not.

How about using, “well, nobody complains about it,” as all the evidence you need that a given function is meeting your users’ requirement? It’s how a lot of people do it. You might get away with doing it this way if your systems weren’t growing. But systems do grow. More data, more users, more application functions: these are all forms of growth, and you can probably measure every one of them happening where you’re sitting right now. All these forms of growth put you on a collision course with failing to meet your users’ response time requirements, whether you and your users know exactly what they are, or not.

In any event, if you don’t know exactly what your users’ response time requirements are, then you won’t be able to use “meets your users’ requirement” as your finish line that tells you when to stop optimizing. This very practical problem is the demise of answer B for most people.

Knowing your top speed

Even if you do know exactly what your users’ requirements are, it’s not enough. You need to know something more.

Imagine for a minute that you do know your users’ response time requirement for a given function, and let’s say that it’s this: “95% of executions of this function must complete within 5 seconds.” Now imagine that this morning when you started looking at the function, it would typically run for 10 seconds in your Oracle SQL Developer worksheet, but now after spending an hour or so with it, you have it down to where it runs pretty much every time in just 4 seconds. So, you’ve eliminated 60% of the function’s response time. That’s a pretty good day’s work, right? The question is, are you done? Or do you keep going?

Here is the reason that answer C is so important. You cannot responsibly answer whether you’re done without knowing that function’s top speed. Even if you know how fast people want it to run, you can’t know whether you’re finished without knowing how fast it can run.

Why? Imagine that 85% of those 4 seconds are consumed by Oracle enqueue, or latch, or log file sync calls, or by hundreds of parse calls, or 3,214 network round-trips to return 3,214 rows. If any of these things is the case, then no, you’re absolutely not done yet. If you were to allow some ridiculous code path like that to survive on a production system, you’d be diminishing the whole system’s effectiveness for everybody (even people who are running functions other than the one you’re fixing).
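To make the round-trip example concrete: fetching 3,214 rows in 3,214 round trips means the client is fetching one row per call. Here is a minimal sketch of the fix in SQL*Plus (my illustration, not something from the original post, and the table name is hypothetical):

set arraysize 500          -- SQL*Plus default is 15; a value of 1 fetches row by row
select * from orders;      -- hypothetical query returning 3,214 rows:
                           -- now ceil(3214/500) = 7 fetch calls instead of 3,214

Most client APIs have an equivalent fetch-size setting. Eliminating those unnecessary round trips is precisely the kind of “no unnecessary calls” work that answer C demands.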

Now, sure, if there’s something else on the system that has a higher priority than finishing the fix on this function, then you should jump to it. But you should at least leave this function on your to-do list. Your analysis of the higher priority function might even reveal that this function’s inefficiencies are causing the higher-priority functions problems. Such can be the nature of inefficient code under conditions of high load.

On the other hand, if your function is running in 4 seconds and (i) its profile shows no unnecessary calls, and (ii) the calls that remain are running at hardware speeds, then you’ve reached a milestone:

  1. if your code meets your users’ requirement, then you’re done;
  2. otherwise, either you’ll have to reimagine how to implement the function, or you’ll have to upgrade your hardware (or both).

There’s that “users’ requirement” thing again. You see why it has to be there, right?

Well, here’s what most people do. They get their functions’ response times reasonably close to their top speeds (which, with good people, isn’t usually as expensive as it sounds), and then they worry about requirements only if those requirements are so important that it’s worth a project to quantify them. A requirement usually earns that status only when it’s close to your top speed or when violating it is really expensive.

This strategy works reasonably well.

It is interesting to note here that knowing a function’s top speed is actually more important than knowing your users’ requirements for that function. A lot of companies can work just fine not knowing their users’ requirements, but without knowing your top speeds, you really are in the dark. A second observation that I find particularly amusing is this: not only is your top speed more important to know, your top speed is actually easier to compute than your users’ requirement (…if you have a profiler, which was my point in the video).

Better and easier is a good combination.

Tomorrow is important, too

When are you finished optimizing?

  A. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
  B. When the application is fast enough to meet your users’ requirements.
  C. When there are no unnecessary calls, and the calls that remain run at hardware speed.

Answer A is still a pretty strong answer. Notice that it actually maps closely to answer C. Answer C’s prescription for “no unnecessary calls” yields answer A’s goal of call reduction, and answer C’s prescription for “calls that remain run at hardware speed” yields answer A’s goal of latency reduction. So, in a way, C is a more action-oriented version of A, but A goes further to combat the perfectionism trap with its emphasis on the cost of action versus the cost of inaction.

One thing I’ve grown to dislike about answer A, though, is its emphasis on today in “…exceeds the cost of the performance you’re getting today.” After years of experience with the question of when optimization is complete, I think that answer A under-emphasizes the importance of tomorrow. Unplanned tomorrows can quickly become ugly todays, and as important as tomorrow is to businesses and the people who run them, it’s even more important to another community: database application developers.

Subjective goals are treacherous for developers

Many developers have no way to test, today, the true production response time behavior of their code; they won’t learn it until tomorrow. ...And perhaps not until some remote, distant tomorrow.

Imagine you’re a developer using 100-row tables on your desktop to test code that will access 100,000,000,000-row tables on your production server. Or maybe you’re testing your code’s performance only in isolation from other workload. Both of these are problems; they’re procedural mistakes, but they are everyday real-life for many developers. When this is how you develop, telling you that “your users’ response time requirement is n seconds” accidentally implies that you are finished optimizing when your query finishes in less than n seconds on your no-load system of 100-row test tables.

If you are a developer writing high-risk code—and any code that will touch huge database segments in production is high-risk code—then of course you must aim for the “no unnecessary calls” part of the top speed target. And you must aim for the “and the calls that remain run at hardware speed” part, too, but you won’t be able to measure your progress against that goal until you have access to full data volumes and full user workloads.

Notice that to do both of these things, you must have access to full data volumes and full user workloads in your development environment. To build high-performance applications, you must do full data volume testing and full user workload testing in each of your functional development iterations.
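One practical way to approach full data volume testing when you can’t simply copy production is to multiply a representative sample with a row generator. A minimal sketch, with hypothetical table names throughout:

-- Inflate a representative sample 1,000x to approximate production volume.
create table orders_fullvol as
select o.*
  from orders_sample o,
       (select level n from dual connect by level <= 1000);  -- row multiplier

Synthetic volume like this won’t reproduce production’s data distributions exactly, but it at least exposes your code to realistic row counts before it ever reaches production.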

This is where agile development methods yield a huge advantage: agile methods provide a project structure that encourages full performance testing for each new product function as it is developed. Contrast this with the terrible project planning approach of putting all your performance testing at the end of your project, when it’s too late to actually fix anything (if there’s even enough budget left over by then to do any testing at all). If you want a high-performance application with great performance diagnostics, then performance instrumentation should be an important part of your feedback for each development iteration of each new function you create.

My answer

So, when are you finished optimizing?

  A. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
  B. When the application is fast enough to meet your users’ requirements.
  C. When there are no unnecessary calls and the calls that remain run at hardware speed.

There is some merit in all three answers, but as Dave Ensor taught me inside Oracle many years ago, the correct answer is C. Answer A specifically restricts your scope of concern to today, which is especially dangerous for developers. Answer B permits you to promote horrifically bad code, unhindered, into production, where it can hurt the performance of every function on the system. Answers A and B both presume that you know information that you probably don’t know and that you may not need to know. Answer C is my favorite answer because it tells you exactly when you’re done, using units you can measure and that you should be measuring.

Answer C is usually a tougher standard than answer A or B, and when it’s not, it is the best possible standard you can meet without upgrading or redesigning something. In light of this “tougher standard” kind of talk, it is still important to understand that what is optimal from a software engineering perspective is not always optimal from a business perspective. The term optimized must ultimately be judged within the constraints of what the business chooses to pay for. In the spirit of answer A, you can still make the decision not to optimize all your code to the last picosecond of its potential. How perfect you make your code should be a business decision. That decision should be informed by facts, and these facts should include knowledge of your code’s top speed.

Thank you, Guðmundur Jósepsson, of Iceland, for your question. Thank you for waiting patiently for several weeks while I struggled to put these thoughts into words.

mwidlake's picture

Friday Philosophy – The Problem of Positive Discrimination?

Have you ever worked (or are you currently working) in an organisation with any Positive Discrimination policies? Where, for example, there is a stated aim to have 25% of the board be female or 30% of the workforce be from ethnic groups that are not the majority ethnic group in your geographic location? How do you feel about that? Is positive discrimination a good thing or a bad thing? I can’t decide.

{Big Caveat! Before anyone wants to give me the same sort of hassle as a tiny few did recently over a related post, note that I am just wondering aloud and whilst I encourage comments and feedback, I reserve the right to block or delete any comments that I feel are abusive or discriminatory or simply from the unhinged. Just saying. Also I am mostly going to reference women as the aim for positive discrimination, as the blog got really untidy when I swapped between different types of discrimination. I apologise if anyone is offended by that – it is not intended.}

I don’t think I’ve ever been comfortable with the concept of positive discrimination and if I wind back the clock to my early 20’s, back then I was quite angrily dead set against it – on the grounds that it is still discrimination. It seemed to me then that it was a simple yin/yang concept. If discrimination is wrong, it’s wrong and “positive” discrimination is in fact just discrimination against the majority. Wrong is wrong. Stealing is wrong, be it from the poor or the rich or from organisations. All those post-it notes I’ve stolen over the years? Bad Martin.

So what has changed about my opinion? Well, I think that as we all get older we tend to be better able to consider the wider picture and be less black/white about most of our philosophies {my personal opinion is that those who don’t modify their opinions in light of more experience and greater thought are, well, not maturing}. I can’t but accept that the business/IT workplace as a whole is male-dominated and is riddled with sexism. This does not mean *at all* that all or even most men in business/IT are sexist, but the statistics, studies and countless personal experiences make it clear that the pay, success and respect of women are impacted.
A way to counteract that is to encourage more women to work in IT (or science or whichever area they are under-represented in) and show that they are just as effective in senior positions by tipping the balance in their favour. Positive discrimination is one way of doing that. Is the small evil of this type of discrimination acceptable if it first counteracts and then helps overturn and melt the large evil of the massive inequalities we currently have? Once equality is there (or you are at least approaching it) you drop the little evil of positive discrimination? But how else do you balance the books until the issue has been addressed? My own perception is that sexism and racism at least are reduced from what they were when I first started working; maybe positive discrimination is a significant factor in that, or maybe it is more that society has shifted?

Part of me likes the Women In Technology {try searching on hashtag #WIT, but you get loads of things that are labelled as “witty” as well} events and discussions, such as those supported in the Oracle sphere by Kellyn PotVin-Gorman and Debra Lilley amongst others. I much prefer to have a balanced workforce. But when I’ve been to a talk about it or seen online discussions, there often seems to be an element of “we hate men” or “all men are out to put us down” that, frankly, insults me. In fairness I’ve also seen that element questioned or stopped by the female moderators, so I know they are aware of the problem of Men Bashing. After all, for reasons I have gone into in a prior post, as a small man I empathise with some of their issues – so to be told all men are the problem is both personally an affront and also… yes, it’s discrimination. I should not have to feel I need to justify my own non-sexism but I do – my work, hiring and promoting history demonstrates I treat both sexes as equal. If I think you are rubbish at your job, it has nothing to do with how many X chromosomes you have.

I mentioned above “the little evil of positive discrimination” and that is certainly how I see it. I think of it as wrong not just because of the simplistic yin/yang take on right and wrong, but because positive discrimination can have negative effects. Forcing a percentage of the workforce or management to be from a specified group means you are potentially not hiring the best candidates or putting the less capable into those positions. If your workforce is 10% female, not at all unusual in IT, then it is unlikely the best candidates for management are 25% female. They might be; it might even be that 40% of them are female, as they have managed to demonstrate their capabilities and stick with the industry despite any extra challenges faced. But to mandate a false percentage strikes me as problematic. Another issue is that of perceived unfair advantage or protection. How would any of us feel if we did not get a job or position because someone else got it on the basis of their sex, colour or disability, to fulfil a quota? People are often bad-tempered enough when they fail to get what they want. Overall, I think positive discrimination leads to a level of unease or resentment in the larger group not being aided. NOTE – I mean on average. I do not mean that everyone (or even most) feels resentment. And those who do vary in how much each individual feels upset by it.

I know a few people, including myself, who have hit big problems when disciplining or even sacking someone who is not a white male. I’ve had HR say to me “we are going to have to be very careful with this as they are {not-white-male}”. I asked the direct question: would this be easier if the person was a white male? And they said, frankly, yes. It’s hard not to let that get your back up. I’ve seen this make someone I felt was pretty liberal and balanced become quite bigoted. That is positive discrimination being a little evil and having exactly the opposite effect to the one intended. That HR department was, in my opinion, getting it wrong – but I’ve heard so many similar stories that I feel it is the same in most HR departments across the UK, US and maybe Europe too. I can’t speak about other places.

I know a few women who are also very uncomfortable with positive discrimination as it makes them feel that either they got something not on the basis of their own abilities or others see it that way from looking in.

I’ve occasionally seen the disparity in numbers seen as a positive – I knew a lady at college who loved the fact that she was one of only 3 women out of just over a hundred people in her year doing a degree in Computer Science. I was chatting to her {at a Sci-fi society evening, where she was also markedly out-numbered by the opposite sex} about how it must be daunting. She laughed at me in scorn – it was great! She said she stuck out and so got better responses when she asked questions in lectures, she had no trouble getting help from the over-worked tutors as they were keen to be seen to not be discriminatory and, as you mostly “met people” via your course or your societies, she pretty much had her pick of a hundred+ men. That told me.

So all in all, I still do not know if I am for or against positive discrimination. I guess I just wish it was not necessary. If there really was no discrimination, we would not question how many female, black, asian, disabled, short, fat, ginger, protestant people there were doing whatever we do.

{sorry for the lack of humour this week, I just struggled to squeeze it into such a delicate topic}

dannorris's picture

Exadata Documentation Available

Please join me in welcoming the Exadata product documentation to the internet. It’s been a long time coming, but I’m glad it’s finally made an appearance!

dbakevlar's picture

Everything I Needed to Know About Enterprise Manager I Learned at Collaborate 2015

Collaborate 2015 at the Mandalay in Las Vegas is just around the corner and the sheer amount of Enterprise Manager-focused content is phenomenal!  Oracle partners and power users from around the world come together each year to provide the lucky attendees the best in use cases, tips and technical know-how regarding the best infrastructure management tool in the industry.

If you are one of the lucky masses who attend this incredible conference from IOUG, Quest and OAUG, you’ll be searching the massive schedule of sessions for the presenters that will get you to the level of expertise you desire in Enterprise Manager features and products.  So let’s save you a lot of time and just tell you about all the incredible sessions and offer you some links!

Sunday, April 12th

Our adventure starts out on Sunday with the IOUG Pre-Conference Workshops.  Join Werner De Gruyter, Courtney Llamas and me for a great hands-on lab titled “Everything I Ever Needed to Know About OEM, I Learned at Collaborate!” from 9am to noon.   After you’ve refreshed with some lunch and networked with some of the other speakers and attendees, you can head over to Brent Sloterbeek, from Gentex Corporation, and sit in on his user experience session, where he speaks on his great success expanding the use of EM12c from just the DBA staff to the entire IT department.

Monday, April 13th

Monday brings the impressive list of presenters on Enterprise Manager topics, beginning with my peer, the ever impressive Courtney Llamas, explaining how you can take EM12c from “Zero to Manageability”.  The schedule has Werner and me as her co-presenters, but she knows this topic like the back of her hand, so if she does have us speak, it will mostly be both of us saying, “What she said!” :)

The first session-choosing challenge of the conference starts right here, folks!  The wonderful Rene Antunez, from Pythian, is also presenting at the same time, on Database as a Service in the Cloud. I pity those who have to choose between these two sessions, and please don’t ask me to choose between coworkers and Rene, whom I mentored.  It’s just too difficult a choice for me to make!

After Courtney and Rene, you can head over and see Rich Niemiec speak on the Best in Database 12c Tuning Features, followed by GoldenGate expert Bobby Curtis, another person I’ve mentored for years and am proud to be the mentor of, who’ll take some time off from GG to tell you about how to best implement Exachk for Exadata with EM12c.

At 3:15pm, we have another competition of EM12c sessions that you need to choose between, and I’m in the middle of it, too!  Ravi Madabhushanam, from App Associates, will be discussing how easy E-Business Suite patching can be with Enterprise Manager, while I discuss one of the newest symbiotic product launches from the EM12c performance team, the Power of the AWR Warehouse.  I’m not finished yet -  3:15pm is a popular session time!  Erik Benner, Steve Lemme, Seth Miller, Rene Antunez, Charles Kim and Michael Timpanaro-Perrotta, (OK, so someone has a longer name than me! :) ) will be on a panel discussing how IT can benefit from and should consider Database as a Service! But wait -  that’s still not all at 3:15pm, (yeah, it’s going to be a tough decision…)  Michael Nelson from Northrop Grumman will be talking about the importance of the Automatic Diagnostic Repository and how to implement it.

Monday finishes up with three great sessions at 4:15, (sorry folks, you have to make another difficult decision again…:) )  One of my fantastic co-authors from the Expert Enterprise Manager 12c book is up to bat: Leighton Nelson, presenting Introducing Enterprise Manager.  It’s an excellent complement to Courtney’s session earlier in the day, so if you are new to the product, please consider this session.  For those of you looking for advanced topics, we then have Kant Mrityunjay from AST Corporation discussing Top Automation of Weblogic Tasks, and for the database performance folks, we have Alfredo Krieg Villa from Sherwin Williams, discussing how to Stabilize Performance with SQL Plan Management in DB12c.

Monday evening offers you the chance to relax and network with fellow Enterprise Manager experts  at 5:30pm in the Exhibit Hall at the Oracle Demogrounds where you can view demos on Total Cloud Control and Applications Management, both powered by Oracle Enterprise Manager! It’s a great chance to speak with product managers, some of the Strategic Customer Program team members and Oracle community power users.

Tuesday, April 14th

Tuesday kicks off the EM12c sessions at 9:45am with Ken Ramey from Centroid Systems focusing on security with Protect Your Identities; he’ll offer some bonus tuning tips, too!  At this same time, if you are looking for the best answers on how to upgrade to EM12c High Availability, head on over and see Bill Petro from American Express, who will go over a real user experience on how it’s done right!

Bobby Curtis is back up to bat, (he’ll appreciate the baseball reference… :) )  speaking on his topic of specialty, GoldenGate monitoring with EM12c.   Erick Mader and Jon Gilmore from Zirous, Inc. will be up at 11am to tell you all about Oracle WebLogic Performance Tuning.   You also have the chance to learn about the power of Oracle Real User Experience, (aka RUEI) from Frank Jordan of ERP Suites during this time on Tuesday.  Rich Niemiec will be speaking at 11am, too, mostly on Database 12c features, but he does promise, since it is a session on the BEST Database 12c features, he will be covering a few Enterprise Manager features, too!

Alfredo Krieg Villa is back on Tuesday at 2pm, to discuss how to save your day with Enterprise Manager 12c Administration.  You lucky devils get to learn more about the glories of Automated Patching with EM12c from Fernando de Souza from General Dynamics IT at 4:30 to finish out your Tuesday OEM sessions.

Tuesday concludes with the Exhibitor Showcase Happy Hour in the Exhibit Hall at the Oracle Demogrounds where you get another chance at Cloud Control and Applications Management demos, along with great Las Vegas Mandalay venue drinks and food!

Wednesday, April 15th

We start out Wednesday at 8am with Gleb Otochin from Pythian, (ask me about my first interaction with Gleb when we worked across from each other when I trained at the Ottawa office -  great story! :) ) He’s a brilliant guy and he’s going to tell you all about how to build Your Own Private Cloud.  Also at 8am, Collaborate has Shawn Ruff from Mythics discussing how to Manage and Monitor Fusion Middleware, while one of my co-authors on the Enterprise Manager Command Line Interface book, Ray Smith, is going to show you how to Flaunt it if You’ve Got it with EM12c Extensibility.  You also have Frank Pound from the Bank of Canada, who’s going to show you how easy it is to Comply with Audits using EM12c.

You get a small break from EM12c content, so take a breath, network or if you want, you can attend other sessions not on our EM12c list.  I’ll forgive you, really… :)  At 10:45, Krishna Kapa from UBS is going to explain Oracle Database 12c Multitenant with Enterprise Manager 12c.

At 12:30, we then have the great IOUG Oracle Enterprise Manager 12c SIG.  If you are passionate about EM12c and want to get involved in the community, this is the place to be!  We are planning out this event, even as I write this and I can tell you, it’s going to be great!

At 2:45pm, the always wonderful, Kai Yu, from Dell, will show you how to Design and Implement Your Own Private Cloud.  During this same time, Claudia Naciff and Vladimir Lugo, of Loyola Marymount University, will be making my day by promoting  Migrating Your Cron Jobs to Oracle EM12c.  I love seeing folks simplifying management of their environments with Enterprise Manager and it’s great to see Claudia and Vladimir presenting on this topic.  Last, but not least in this time slot is  Raj Garrepally from Emory University, presenting how to use Oracle Enterprise Manager, along with other tools to ease the management of Peoplesoft Environments.

Erik Benner is up at 4pm to talk about Servers and Systems and Storage, Oh My!  Don’t miss out on learning about infrastructure management with EM12c.  It’s going to be another tough decision on sessions at this 4pm time slot -  Leighton Nelson is also speaking, with Sean Brown from World Wide Technology, on Database as a Service with RAC using DB12c, along with a panel from Keith Baxter on Oracle Best Practices for Managing Oracle Applications.

Thursday, April 16th

The last day of Collaborate 2015 is made sunnier by starting the morning with Krishna Kapa from UBS as he gives another great session, this time on how to successfully combine Database 12c Multitenant Architecture with EM12c.   If Las Vegas isn’t sunny enough with the addition of that first session on the list, you also have Angelo Rosado from Oracle giving another session of great content on Managing Oracle E-Business Suite 12.2 with EM12c.

At 9:45, James Lui and Erik Benner are going to host the OAUG Oracle Enterprise Manager Applications SIG.  This is a great way to get involved with Fusion Middleware passionate folks in our community, provide feedback and be part of the best in the middleware world, (not to be confused with Tolkien’s middle earth, but we’ll know who’s who if anyone shows up dressed as an elf or hobbit, right? :) )  Another great session on DBaaS is up during this time for all you EM12c fans from someone I’ve enjoyed presenting with at Oracle, GP Gongloor.  He’ll show you how the future is now with Advanced Database Management with Database as a Service and EM12c.

Mark Saltsman and Kumar Anthireyan from Bias are presenting at 11am, explaining the 12 Things to Consider for Migrating EBS onto Exadata, which will include some great tips with EM12c.  Mark Scardina from Oracle is going to present on Database Clouds with Oracle RAC 12c and as we know, cloud is the almighty world we live in now!  My previous boss and always mentor, Alex Gorbachev is on at this time, too, presenting on Anomaly Detection for Database  Monitoring.  His summary description describes it as a novel approach, but with Alex, should we expect anything less? :) All of these great presenters are up against one of my favorite peers at Oracle, Pete Sharman, who is the master of masters when it comes to Database as a Service.  He will discuss Data Cloning and Refreshes Made Easy with EM12c Snap Clone.  If you want to learn from the master on DBaaS, this is the guy!  To make things even more difficult, my team members from SCP, Courtney Llamas and Werner De Gruyter are presenting in this same time slot to take you Under the Hood of the Enterprise Manager.  This session covers a lot of what we do as part of the SCP team and I know how much I learned from both of these wonderful teammates when I joined Oracle.  If you are considering implementing EM12c, this is an invaluable session.

To close out the conference’s EM12c sessions, Courtney Llamas will easily convince everyone of the power of Smarter Monitoring with Adaptive Thresholds and Time Based Metrics.  For anyone who is looking to “silence the white noise” in their environment, this is a not-to-be-missed session.  Anthony Noriega will also be presenting during this time slot on a great case study revolving around Moving ASM Files that includes EM12c, so let’s give these great presenters on the last day our support and not miss out on all the great content that is going on till the very end!

AND…

For those of you who are interested in Social Media, I’ll also be on the Social Media panel on Monday at 4pm, (as soon as I finish my AWR Warehouse session, so if you see me running down the hall, you know where I’m headed!) and if you are a WIT, (Women in Technology), you’ll soon see the announcement that I’m your speaker for the Monday luncheon.  I won’t ruin all the surprise and I’ll leave it to IOUG to let you in on the special topic I’ll be presenting on!

If you want a “just the facts” list of all the EM12c sessions for the conference, go to the following link, which will provide everything you need to make it easy to add your choices to My Show Planner.  See you at IOUG Collaborate 2015 in April at the Mandalay!







Copyright © DBA Kevlar [Everything I Needed to Know About Enterprise Manager I Learned at Collaborate 2015], All Rights Reserved. 2015.

marco's picture

DBMS_INMEMORY_ADVISOR

If you follow the Oracle in-memory / optimizer team, then you have probably seen this…

martin.bach's picture

Understanding enhancements to block cleanouts in Exadata part 2

In part 1 of the series I tried to explain (probably a bit too verbosely when it came to session statistics) what the effect of delayed block cleanout is with buffered I/O. In the final example the “dirty” blocks on disk have been cleaned out in the buffer cache, greatly reducing the amount of work to be done when reading them.

Now it’s time to catch up with the present, and with direct path reads. You probably noticed that the migration to 11.2 caused your I/O patterns to change. Suitably large segments are now read using direct path reads, not only during parallel query but also potentially during the serial execution of a query. Since the blocks read during a direct path read do not end up in the buffer cache, there is an interesting side effect when it comes to block cleanouts. The scenario is the same unrealistic yet reproducible one as presented in part 1 of this article.
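If you want to check whether your own session is using serial direct path reads, the session counters used throughout this post can be queried directly. A minimal sketch, assuming access to v$mystat and v$statname:

select sn.name, ms.value
  from v$mystat ms
  join v$statname sn on (sn.statistic# = ms.statistic#)
 where sn.name in ('table scans (direct read)', 'physical reads direct');

A non-zero table scans (direct read) after running your query tells you the segment was scanned with direct path reads, bypassing the buffer cache.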

Enter Direct Path Reads – non Exadata

To be absolutely sure I am getting results without any optimisation offered by Exadata I am running the example on different hardware. I am also using 11.2.0.4 because that’s the version I had on my lab server. The principles here apply between versions unless I am very mistaken.

I am repeating my test under realistic conditions, leaving _serial_direct_read at its default, “auto”. This means that I am almost certainly going to see direct path reads when scanning my table. As Christian Antognini has pointed out, direct path reads have an impact on the amount of work that has to be done. The test commences with an update in session 1 updating the whole table and flushing the blocks in the buffer cache to disk, ensuring each block has an active transaction in the ITL part of its header. The selects in session 2 show the following behaviour: the first execution takes longer, as expected; the second one is faster.
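For reference, the setup probably looked roughly like this; the update statement and the column name are my assumptions, only the table name and the overall flow come from the article:

-- session 1: touch every row and leave the transaction open
update t1_100k set id = id;        -- hypothetical column; any update will do

-- force the dirty buffers, open ITL entries and all, out to disk
alter system checkpoint;
alter system flush buffer_cache;

Session 2 then runs the counts shown below, bracketed by calls to Adrian Billington’s mystats scripts.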

SQL> select /* not_exadata */ count(*) from t1_100k;

  COUNT(*)
----------
    500000

Elapsed: 00:00:20.62

SQL> @scripts/mystats stop r=physical|cleanout|consistent|cell|table
...
------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
STAT    active txn count during cleanout                                            83,334
STAT    cell physical IO interconnect bytes                                    747,798,528
STAT    cleanout - number of ktugct calls                                           83,334
STAT    cleanouts and rollbacks - consistent read gets                              83,334
STAT    consistent changes                                                           1,019
STAT    consistent gets                                                            751,365
STAT    consistent gets - examination                                              667,178
STAT    consistent gets direct                                                      83,334
STAT    consistent gets from cache                                                 668,031
STAT    consistent gets from cache (fastpath)                                          850
STAT    data blocks consistent reads - undo records applied                        583,334
STAT    immediate (CR) block cleanout applications                                  83,334
STAT    physical read IO requests                                                    8,631
STAT    physical read bytes                                                    747,798,528
STAT    physical read total IO requests                                              8,631
STAT    physical read total bytes                                              747,798,528
STAT    physical read total multi block requests                                       673
STAT    physical reads                                                              91,284
STAT    physical reads cache                                                         7,950
STAT    physical reads direct                                                       83,334
STAT    table fetch by rowid                                                           170
STAT    table scan blocks gotten                                                    83,334
STAT    table scan rows gotten                                                     500,000
STAT    table scans (direct read)                                                        1
STAT    table scans (short tables)                                                       1


------------------------------------------------------------------------------------------
3. About
------------------------------------------------------------------------------------------
- MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
- Based on the SNAP_MY_STATS utility by Jonathan Lewis

==========================================================================================
End of report
==========================================================================================

The second execution shows slightly better performance as seen before in part 1. The relevant statistics are shown here:


SQL> select /* not_exadata */ count(*) from t1_100k;

  COUNT(*)
----------
    500000

Elapsed: 00:00:04.60

SQL> @scripts/mystats stop r=physical|cleanout|consistent|cell|table
...
------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
STAT    active txn count during cleanout                                            83,334
STAT    cell physical IO interconnect bytes                                    682,672,128
STAT    cleanout - number of ktugct calls                                           83,334
STAT    cleanouts and rollbacks - consistent read gets                              83,334
STAT    consistent changes                                                             472
STAT    consistent gets                                                            750,250
STAT    consistent gets - examination                                              666,671
STAT    consistent gets direct                                                      83,334
STAT    consistent gets from cache                                                 666,916
STAT    consistent gets from cache (fastpath)                                          245
STAT    data blocks consistent reads - undo records applied                        583,334
STAT    immediate (CR) block cleanout applications                                  83,334
STAT    physical read IO requests                                                      681
STAT    physical read bytes                                                    682,672,128
STAT    physical read total IO requests                                                681
STAT    physical read total bytes                                              682,672,128
STAT    physical read total multi block requests                                       673
STAT    physical reads                                                              83,334
STAT    physical reads direct                                                       83,334
STAT    table fetch by rowid                                                             1
STAT    table scan blocks gotten                                                    83,334
STAT    table scan rows gotten                                                     500,000
STAT    table scans (direct read)                                                        1


------------------------------------------------------------------------------------------
3. About
------------------------------------------------------------------------------------------
- MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
- Based on the SNAP_MY_STATS utility by Jonathan Lewis

==========================================================================================
End of report
==========================================================================================

The amount of I/O is slightly lower. Still, all the CR processing is done for every block in the table - each one has an active transaction (active txn count during cleanout). As you would expect, the buffer cache has no blocks from the segment stored:

SQL> select count(*), inst_id, status from gv$bh 
  2  where objd = (select data_object_id from dba_objects where object_name = 'T1_100K') 
  3  group by inst_id, status;

  COUNT(*)    INST_ID STATUS
---------- ---------- ----------
     86284          2 free
    584983          1 free
         1          1 scur

Subsequent executions all take approximately the same amount of time. Every execution has to perform cleanouts and rollbacks - no difference from before, really, except that direct path reads are used to read the table. There is no difference here from the buffered reads, with the exception that there aren’t any blocks from T1_100K in the buffer cache.

Commit

Committing in session 1 does not have much of an effect - the blocks read by the direct path read go to the PGA instead of the buffer cache. A quick demonstration: the same select has been executed after the transaction in session 1 committed.

SQL> select /* not_exadata */ count(*) from t1_100k;

  COUNT(*)
----------
    500000

Elapsed: 00:00:04.87

SQL> @scripts/mystats stop r=physical|cleanout|consistent|cell|table
...
------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
STAT    cell physical IO interconnect bytes                                    682,672,128
STAT    cleanout - number of ktugct calls                                           83,334
STAT    cleanouts only - consistent read gets                                       83,334
STAT    commit txn count during cleanout                                            83,334
STAT    consistent changes                                                             481
STAT    consistent gets                                                            166,916
STAT    consistent gets - examination                                               83,337
STAT    consistent gets direct                                                      83,334
STAT    consistent gets from cache                                                  83,582
STAT    consistent gets from cache (fastpath)                                          245
STAT    immediate (CR) block cleanout applications                                  83,334
STAT    physical read IO requests                                                      681
STAT    physical read bytes                                                    682,672,128
STAT    physical read total IO requests                                                681
STAT    physical read total bytes                                              682,672,128
STAT    physical read total multi block requests                                       673
STAT    physical reads                                                              83,334
STAT    physical reads direct                                                       83,334
STAT    table fetch by rowid                                                             1
STAT    table scan blocks gotten                                                    83,334
STAT    table scan rows gotten                                                     500,000
STAT    table scans (direct read)                                                        1


------------------------------------------------------------------------------------------
3. About
------------------------------------------------------------------------------------------
- MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
- Based on the SNAP_MY_STATS utility by Jonathan Lewis

==========================================================================================
End of report
==========================================================================================

As you can see, the work done for the cleanout is repeated every time, even though active txn count during cleanout no longer appears in the output (it has been replaced by commit txn count during cleanout). And still, there aren’t really any blocks pertaining to T1_100K in the buffer cache.

SQL> select count(*), inst_id, status from gv$bh where
  2   objd = (select data_object_id from dba_objects where object_name = 'T1_100K')
  3  group by inst_id, status;

  COUNT(*)    INST_ID STATUS
---------- ---------- ----------
     86284          2 free
    584983          1 free
         1          1 scur

OK, that should be enough for now - over to the Exadata.

Direct Path Reads – Exadata

The same test again, but this time on an X2-2 quarter rack running database 12.1.0.2 and cellsrv 12.1.2.1.0. After the update of the entire table and the flushing of blocks to disk, here are the statistics for the first and second execution.

SQL> select count(*) from t1_100k;

  COUNT(*)
----------
    500000

Elapsed: 00:00:17.41

SQL> @scripts/mystats stop r=physical|cleanout|consistent|cell|table


------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
STAT    active txn count during cleanout                                            83,334
STAT    cell IO uncompressed bytes                                             682,672,128
STAT    cell blocks processed by cache layer                                        83,933
STAT    cell commit cache queries                                                   83,933
STAT    cell flash cache read hits                                                  14,011
STAT    cell num smartio automem buffer allocation attempts                              1
STAT    cell physical IO bytes eligible for predicate offload                  682,672,128
STAT    cell physical IO interconnect bytes                                    804,842,256
STAT    cell physical IO interconnect bytes returned by smart scan             682,904,336
STAT    cell scans                                                                       1
STAT    cleanout - number of ktugct calls                                           83,334
STAT    cleanouts and rollbacks - consistent read gets                              83,334
STAT    consistent changes                                                             796
STAT    consistent gets                                                            917,051
STAT    consistent gets direct                                                      83,334
STAT    consistent gets examination                                                833,339
STAT    consistent gets examination (fastpath)                                      83,336
STAT    consistent gets from cache                                                 833,717
STAT    consistent gets pin                                                            378
STAT    consistent gets pin (fastpath)                                                 377
STAT    data blocks consistent reads - undo records applied                        750,002
STAT    immediate (CR) block cleanout applications                                  83,334
STAT    physical read IO requests                                                   16,147
STAT    physical read bytes                                                    804,610,048
STAT    physical read requests optimized                                            14,011
STAT    physical read total IO requests                                             16,147
STAT    physical read total bytes                                              804,610,048
STAT    physical read total bytes optimized                                    114,778,112
STAT    physical read total multi block requests                                       851
STAT    physical reads                                                              98,219
STAT    physical reads cache                                                        14,885
STAT    physical reads direct                                                       83,334
STAT    table fetch by rowid                                                             1
STAT    table scan blocks gotten                                                    83,334
STAT    table scan disk non-IMC rows gotten                                        500,000
STAT    table scan rows gotten                                                     500,000
STAT    table scans (direct read)                                                        1


------------------------------------------------------------------------------------------
3. About
------------------------------------------------------------------------------------------
- MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
- Based on the SNAP_MY_STATS utility by Jonathan Lewis

==========================================================================================
End of report
==========================================================================================


SQL> -- second execution, still no commit in session 1
SQL> select count(*) from t1_100k;

  COUNT(*)
----------
    500000

Elapsed: 00:00:01.31

SQL> @scripts/mystats stop r=physical|cleanout|consistent|cell|table
...
------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
STAT    active txn count during cleanout                                            83,334
STAT    cell IO uncompressed bytes                                             682,672,128
STAT    cell blocks processed by cache layer                                        83,949
STAT    cell commit cache queries                                                   83,949
STAT    cell flash cache read hits                                                   1,269
STAT    cell num smartio automem buffer allocation attempts                              1
STAT    cell physical IO bytes eligible for predicate offload                  682,672,128
STAT    cell physical IO interconnect bytes                                    682,907,280
STAT    cell physical IO interconnect bytes returned by smart scan             682,907,280
STAT    cell scans                                                                       1
STAT    cleanout - number of ktugct calls                                           83,334
STAT    cleanouts and rollbacks - consistent read gets                              83,334
STAT    consistent changes                                                             791
STAT    consistent gets                                                            917,053
STAT    consistent gets direct                                                      83,334
STAT    consistent gets examination                                                833,339
STAT    consistent gets examination (fastpath)                                      83,337
STAT    consistent gets from cache                                                 833,719
STAT    consistent gets pin                                                            380
STAT    consistent gets pin (fastpath)                                                 380
STAT    data blocks consistent reads - undo records applied                        750,002
STAT    immediate (CR) block cleanout applications                                  83,334
STAT    physical read IO requests                                                    1,278
STAT    physical read bytes                                                    682,672,128
STAT    physical read requests optimized                                             1,269
STAT    physical read total IO requests                                              1,278
STAT    physical read total bytes                                              682,672,128
STAT    physical read total bytes optimized                                    678,494,208
STAT    physical read total multi block requests                                       885
STAT    physical reads                                                              83,334
STAT    physical reads direct                                                       83,334
STAT    table fetch by rowid                                                             1
STAT    table scan blocks gotten                                                    83,334
STAT    table scan disk non-IMC rows gotten                                        500,000
STAT    table scan rows gotten                                                     500,000
STAT    table scans (direct read)                                                        1


------------------------------------------------------------------------------------------
3. About
------------------------------------------------------------------------------------------
- MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
- Based on the SNAP_MY_STATS utility by Jonathan Lewis

==========================================================================================
End of report
==========================================================================================

In the example, table scans (direct read) indicates that the segment was read using a direct path read. Since this example was executed on Exadata, the direct path read was performed as a Smart Scan (cell scans = 1). However, the Smart Scan was not very successful: although all 83,334 table blocks were opened by the cache layer (the first layer to touch a block on the cell during a Smart Scan), none of them passed the examination of the transaction layer; they were all discarded. You can see that there was no saving from the Smart Scan here: cell physical IO bytes eligible for predicate offload is equal to cell physical IO interconnect bytes returned by smart scan.

83,334 blocks were read during the table scan (table scan blocks gotten), all of which were read directly (physical reads direct). All of the blocks read had an active transaction (active txn count during cleanout) and all of these had to be rolled back (cleanouts and rollbacks - consistent read gets).

The same amount of work has to be done for every execution: because the blocks are read via direct path and never cached, the undo is re-applied on each read for as long as the transaction in session 1 remains open.
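If you want a quick check of how effective a Smart Scan was without running the full MyStats report, you can compare the offload-eligible bytes with the bytes actually returned over the interconnect for your own session. A minimal sketch using the standard v$mystat and v$statname views; if the two values are equal, as in the run above, the Smart Scan saved nothing:

SELECT sn.name, ms.value
FROM   v$mystat ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name IN ('cell physical IO bytes eligible for predicate offload',
                   'cell physical IO interconnect bytes returned by smart scan');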

Commit in session 1

After committing the transaction in session 1 the statistics for the select statement in session 2 are as follows (also note the execution time):

SQL> select count(*) from t1_100k;

  COUNT(*)
----------
    500000

Elapsed: 00:00:00.16

SQL> @scripts/mystats stop r=physical|cleanout|consistent|cell|table

------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------

Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------
STAT    cell IO uncompressed bytes                                             682,672,128
STAT    cell blocks helped by commit cache                                          83,334
STAT    cell blocks processed by cache layer                                        83,334
STAT    cell blocks processed by data layer                                         83,334
STAT    cell blocks processed by txn layer                                          83,334
STAT    cell commit cache queries                                                   83,334
STAT    cell flash cache read hits                                                     663
STAT    cell num smartio automem buffer allocation attempts                              1
STAT    cell physical IO bytes eligible for predicate offload                  682,672,128
STAT    cell physical IO interconnect bytes                                     13,455,400
STAT    cell physical IO interconnect bytes returned by smart scan              13,455,400
STAT    cell scans                                                                       1
STAT    cell transactions found in commit cache                                     83,334
STAT    consistent changes                                                             791
STAT    consistent gets                                                             83,717
STAT    consistent gets direct                                                      83,334
STAT    consistent gets examination                                                      3
STAT    consistent gets examination (fastpath)                                           3
STAT    consistent gets from cache                                                     383
STAT    consistent gets pin                                                            380
STAT    consistent gets pin (fastpath)                                                 380
STAT    physical read IO requests                                                      663
STAT    physical read bytes                                                    682,672,128
STAT    physical read requests optimized                                               663
STAT    physical read total IO requests                                                663
STAT    physical read total bytes                                              682,672,128
STAT    physical read total bytes optimized                                    682,672,128
STAT    physical read total multi block requests                                       654
STAT    physical reads                                                              83,334
STAT    physical reads direct                                                       83,334
STAT    table fetch by rowid                                                             1
STAT    table scan blocks gotten                                                    83,334
STAT    table scan disk non-IMC rows gotten                                        500,000
STAT    table scan rows gotten                                                     500,000
STAT    table scans (direct read)                                                        1
STAT    table scans (short tables)                                                       1


------------------------------------------------------------------------------------------
3. About
------------------------------------------------------------------------------------------
- MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
- Based on the SNAP_MY_STATS utility by Jonathan Lewis

==========================================================================================
End of report
==========================================================================================

There are no more active transactions found during the cleanout. The blocks, however, still have to be cleaned out every time the direct path read is performed. As a matter of design principle, Oracle cannot reuse blocks it has read with a direct path read (or Smart Scan, for that matter): they simply are not cached.

But have a look at the execution time: that is quite a nice performance improvement. In fact there are a few things worth mentioning:

  • The segment is successfully scanned via a Smart Scan. How can you tell?
    • cell scans = 1
    • cell physical IO interconnect bytes returned by smart scan is a lot lower than cell physical IO bytes eligible for predicate offload
    • You see cell blocks processed by (cache|txn|data) layer matching the number of blocks the table is made up of. Previously you only saw blocks processed by the cache layer
  • The commit cache made this happen
    • cell commit cache queries
    • cell blocks helped by commit cache
    • cell transactions found in commit cache

The commit cache “lives” in the cells and probably caches recently committed transactions. If a block is read by the cell and an active transaction is found, the cell has a couple of options to avoid a round-trip of the block to the RDBMS layer for consistent read processing:

  1. If possible, it uses the minscn optimisation. Oracle tracks the minimum SCN of all active transactions. The cell software can compare the SCN from the ITL in the block with that lowest active SCN: if the lowest active SCN is greater than the SCN found in the block, the transaction must already have completed and it is safe to read the block. This optimisation is not shown in this post; had it kicked in, you would see cell blocks helped by minscn optimization increase.
  2. If the first optimisation does not help, the cells make use of the commit cache, visible as cell commit cache queries and, if a cache hit has been scored, cell blocks helped by commit cache. While the transaction in session 1 had not yet committed, you could only see the commit cache queries; after the commit, the queries were successful.

If both of these fail, the block must be sent to the RDBMS layer for consistent read processing. In that case there is no difference from the treatment of direct path read blocks outside Exadata.
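To see which of the two optimisations helped a given scan, you can check the corresponding session statistics before and after the query; both counters appear in the standard statistics views on Exadata. A minimal sketch:

SELECT sn.name, ms.value
FROM   v$mystat ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name IN ('cell blocks helped by minscn optimization',
                   'cell blocks helped by commit cache');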

Summary

The commit cache is a great enhancement Exadata offers to databases using the platform. While non-Exadata deployments have to clean out blocks that haven’t been subject to a cleanout every time the “dirty” block is read via direct path read, this can be avoided on Exadata once the transaction has committed.

davidkurtz's picture

PeopleTools 8.54 for the Oracle DBA

The UKOUG PeopleSoft Roadshow 2015 comes to London on 31st March 2015.  In a moment of enthusiasm, I offered to talk about new and interesting features of PeopleTools 8.54 from the perspective of an Oracle DBA.

I have been doing some research, and have even read the release notes!  As a result, I have picked out some topics that I want to talk about.  For each feature, I will discuss how it has been implemented, and what I think are its benefits and drawbacks:

Links have been added to the above list as I have also blogged about each topic.  I hope it might produce some feedback and discussion.  After the Roadshow I will also add a link to the presentation.

PeopleTools 8.54 is still quite new, and we are all still learning.  So please leave comments, disagree with what I have written, correct things that I have got wrong, and ask questions.

davidkurtz's picture

PeopleTools 8.54: Oracle Resource Manager

This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

Oracle Resource Manager is about prioritising one database session over another, or about restricting the overhead of one session for the good of the other database users.  A resource plan is a set of rules that are applied to some or all database sessions for some or all of the time.  Those rules may be simple or complex, but they need to reflect the business's view of what is most important.  Either way, Oracle Resource Manager requires careful design.
I am not going to attempt to explain further how the Oracle feature works here; I want to concentrate on how PeopleSoft interfaces with it.

PeopleTools Feature

This feature effectively maps Oracle resource plans to PeopleSoft executables.  The resource plan will then manage the database resource consumption of that PeopleSoft process.  There is a new component that maps PeopleSoft resource names to Oracle consumer groups.  For this example I have chosen some of the consumer groups in the MIXED_WORKLOAD_PLAN that is delivered with Oracle 11g.

  • The Oracle Consumer Group field is validated against the names of the Oracle consumer groups defined in the database, using a view based on the following query:

SELECT DISTINCT group_or_subplan, type
FROM   dba_rsrc_plan_directives
WHERE  plan = 'MIXED_WORKLOAD_PLAN'
ORDER  BY 2 DESC, 1
/

GROUP_OR_SUBPLAN               TYPE
------------------------------ --------------
ORA$AUTOTASK_SUB_PLAN          PLAN
BATCH_GROUP                    CONSUMER_GROUP
INTERACTIVE_GROUP              CONSUMER_GROUP
ORA$DIAGNOSTICS                CONSUMER_GROUP
OTHER_GROUPS                   CONSUMER_GROUP
SYS_GROUP                      CONSUMER_GROUP

If you use Oracle SQL Trace on a PeopleSoft process (in this case PSAPPSRV) you find the following query, which returns the name of the Oracle consumer group that the session should use.  The entries in the component shown above are stored in PS_PT_ORA_RESOURCE.

  • PS_PTEXEC2RESOURCE is another new table that maps PeopleSoft executable names to resource names.

SELECT PT_ORA_CONSUMR_GRP
FROM   PS_PT_ORA_RESOURCE
,      PS_PTEXEC2RESOURCE
WHERE  PT_EXECUTABLE_NAME = 'PSAPPSRV'
AND    PT_ORA_CONSUMR_GRP <> ' '
AND    PS_PT_ORA_RESOURCE.PT_RESOURCE_NAME = PS_PTEXEC2RESOURCE.PT_RESOURCE_NAME

PT_ORA_CONSUMR_GRP
------------------------
INTERACTIVE_GROUP

And then the PeopleSoft process explicitly switches its group, thus:

DECLARE
  old_group varchar2(30);
BEGIN
  DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
END;
/

Unfortunately, the consequence of this explicit switch is that it overrides any consumer group mapping rules, as I demonstrate below.

Setup

The PeopleSoft owner ID needs some additional privileges if it is to be able to switch to the consumer groups.

BEGIN
  DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SYSTEM_PRIVILEGE
    ('SYSADM', 'ADMINISTER_RESOURCE_MANAGER', FALSE);
END;
/

BEGIN
  FOR i IN (
    SELECT DISTINCT r.pt_ora_consumr_grp
    FROM   sysadm.ps_pt_ora_resource r
    WHERE  r.pt_ora_consumr_grp != ' '
    AND    r.pt_ora_consumr_grp != 'OTHER_GROUPS'
  ) LOOP
    dbms_output.put_line('Grant '||i.pt_ora_consumr_grp);
    DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP
      (GRANTEE_NAME   => 'SYSADM'
      ,CONSUMER_GROUP => i.pt_ora_consumr_grp
      ,GRANT_OPTION   => FALSE);
  END LOOP;
END;
/

The RESOURCE_MANAGER_PLAN initialisation parameter should be set to the name of the plan that contains the directives.

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------
resource_manager_plan                string      MIXED_WORKLOAD_PLAN
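
For example, the plan can be activated dynamically with a simple ALTER SYSTEM (a sketch; SCOPE=BOTH assumes you are using an spfile):

ALTER SYSTEM SET resource_manager_plan = 'MIXED_WORKLOAD_PLAN' SCOPE=BOTH;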

I question one or two of the mappings on PS_PTEXEC2RESOURCE.

SELECT * FROM PS_PTEXEC2RESOURCE …

PT_EXECUTABLE_NAME               PT_RESOURCE_NAME
-------------------------------- -----------------
PSAPPSRV                         APPLICATION SERVE
PSQED                            MISCELLANEOUS
PSQRYSRV                         QUERY SERVER

  • PSNVS is the nVision Windows executable.  It is in PeopleTools resource MISCELLANEOUS.  This is nVision running in 2-tier mode.  I think I would put nVision into the same consumer group as query.  I can't see why it wouldn't be possible to create new PeopleSoft consumer groups and map them to certain executables (see the sketch after this list).  nVision would be a candidate for a separate group.
    • For example, one might want to take a different approach to parallelism in GL reporting, having partitioned the LEDGER tables by FISCAL_YEAR and ACCOUNTING_PERIOD.
  • PSQED is also in MISCELLANEOUS.  Some customers use it to run PS/Query in 2-tier mode, and allow some developers to use it to run queries.  Perhaps it should also be in the QUERY SERVER group.
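
A minimal sketch of what creating a dedicated consumer group for nVision might look like.  The group name NVISION_GROUP and the CPU share are my own illustrative choices, not delivered values, and you would still need to map a PeopleSoft resource name to the new group in the component:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  -- hypothetical consumer group for nVision executables
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP
    (CONSUMER_GROUP => 'NVISION_GROUP'
    ,COMMENT        => 'nVision reporting');
  -- add it to the delivered plan with an illustrative level-2 CPU share
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE
    (PLAN             => 'MIXED_WORKLOAD_PLAN'
    ,GROUP_OR_SUBPLAN => 'NVISION_GROUP'
    ,COMMENT          => 'nVision directive'
    ,MGMT_P2          => 20);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/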

Cannot Mix PeopleSoft Consumer Groups Settings with Oracle Consumer Group Mappings

I would like to be able to blend the PeopleSoft configuration with the ability to automatically associate Oracle consumer groups with specific values of MODULE and ACTION.  Purely as an example, I am trying to move the Process Monitor component into the SYS_GROUP consumer group.

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
    (attribute      => 'MODULE_NAME'
    ,value          => 'PROCESSMONITOR'
    ,consumer_group => 'SYS_GROUP');

  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

However, it doesn't work, because the explicit setting overrides any rules, and you cannot prioritise other rules above explicit settings.

exec dbms_application_info.set_module('PROCESSMONITOR','PMN_PRCSLIST');

SELECT REGEXP_SUBSTR(program,'[^.@]+',1,1) program
,      module, action, resource_consumer_group
FROM   v$session
WHERE  module IN('PROCESSMONITOR','WIBBLE')
ORDER  BY program, module, action
/

So I have created a new SQL*Plus session and set the module/action, and the session has automatically moved into the SYS_GROUP.  Meanwhile, I have been into the Process Monitor in the PIA: the module and action of the PSAPPSRV sessions have been set, but they remain in the INTERACTIVE_GROUP.

PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
---------------- ---------------- ---------------- ------------------------
PSAPPSRV         PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
PSAPPSRV         PROCESSMONITOR   PMN_SRVRLIST     INTERACTIVE_GROUP
sqlplus          PROCESSMONITOR   PMN_PRCSLIST     SYS_GROUP

If I set the module to something that doesn't match a rule, the consumer group goes back to OTHER_GROUPS, which is the default.

exec dbms_application_info.set_module('WIBBLE','PMN_PRCSLIST');

PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
---------------- ---------------- ---------------- ------------------------
PSAPPSRV         PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
PSAPPSRV         PROCESSMONITOR   PMN_SRVRLIST     INTERACTIVE_GROUP
sqlplus          WIBBLE           PMN_PRCSLIST     OTHER_GROUPS

Now, if I explicitly set the consumer group exactly as PeopleSoft does, my session automatically moves into the INTERACTIVE_GROUP.

DECLARE
  old_group varchar2(30);
BEGIN
  DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
END;
/

PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
---------------- ---------------- ---------------- ------------------------
PSAPPSRV         PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
PSAPPSRV         PROCESSMONITOR   PMN_SRVRLIST     INTERACTIVE_GROUP
sqlplus          WIBBLE           PMN_PRCSLIST     INTERACTIVE_GROUP

Next, I set the module back to match the rule, but the consumer group doesn't change, because the explicit setting takes priority over the rules.

PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
---------------- ---------------- ---------------- ------------------------
PSAPPSRV         PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
PSAPPSRV         PROCESSMONITOR   PMN_SRVRLIST     INTERACTIVE_GROUP
sqlplus          PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP

You can rearrange the priority of the other rule settings, but explicit must have the highest priority (if you try to change it you will get ORA-56704).  So, continuing with this example, I cannot assign a specific component to a different consumer group unless I don't use the PeopleSoft configuration for PSAPPSRV.
Instead, I could create a rule to assign a consumer group to PSAPPSRV via the program name, and have a higher priority rule to override that when the module and/or action is set to a specific value.  However, first I have to disengage the explicit consumer group change for PSAPPSRV by updating its row in PS_PTEXEC2RESOURCE.

UPDATE ps_ptexec2resource
SET    pt_resource_name = 'DO_NOT_USE'
WHERE  pt_executable_name = 'PSAPPSRV'
AND    pt_resource_name = 'APPLICATION SERVER'
/
COMMIT
/
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
END;
/
BEGIN
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
    (attribute      => 'CLIENT_PROGRAM'
    ,value          => 'PSAPPSRV'
    ,consumer_group => 'INTERACTIVE_GROUP');

  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
    (attribute      => 'MODULE_NAME'
    ,value          => 'PROCESSMONITOR'
    ,consumer_group => 'SYS_GROUP');

  DBMS_RESOURCE_MANAGER.set_consumer_group_mapping_pri(
    explicit              => 1,
    oracle_user           => 2,
    service_name          => 3,
    module_name_action    => 4,  --note higher than just module
    module_name           => 5,  --note higher than program
    service_module        => 6,
    service_module_action => 7,
    client_os_user        => 8,
    client_program        => 9,  --note lower than module
    client_machine        => 10
  );
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

So, you would have to choose between using either the PeopleSoft configuration or the Oracle Resource Manager configuration.  It depends on your requirements.  This is going to be a decision you will have to take when you design your resource management.  Of course, you can always use just the mapping approach in versions of PeopleTools prior to 8.54.

Conclusion

I have never seen Oracle Resource Manager used with PeopleSoft, probably because setting it up is not trivial and it is then difficult to test the resource plan.  I think this enhancement is a great start that makes it very much easier to implement Oracle Resource Manager on PeopleSoft.  However, I think we need more granularity.

  • I would like to be able to put specific processes run on the process scheduler, by name, into specific consumer groups.  For now, you could do this with a trigger on PSPRCSRQST that fires on process start-up and makes an explicit consumer group change (and puts it back again for Application Engine on completion); see the sketch after this list.
  • I would like the ability to set different resource groups for the same process name in different application server domains.  For example,
    • I might want to distinguish PSQRYSRV processes used for ad-hoc PS/Queries on certain domains from PSQRYSRVs used to support nVision running in 3-tier mode on other domains.
    • I might have different PIAs for back-office and self-service users going to different application servers.  I might want to prioritise back-office users over self-service users.
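
A minimal sketch of such a trigger, under explicit assumptions: that RUNSTATUS = '7' indicates a process beginning to run, and that NVS_REPORT is a hypothetical process name you want to demote to BATCH_GROUP.  Check the status values and column names against your PeopleTools version before using anything like this:

CREATE OR REPLACE TRIGGER sysadm.set_prcs_consumer_group
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus = '7' AND new.prcsname = 'NVS_REPORT')  -- hypothetical process name
DECLARE
  l_old_group VARCHAR2(30);
BEGIN
  -- the process updates its own request row, so this switches the process's own session
  DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('BATCH_GROUP', l_old_group, FALSE);
EXCEPTION
  WHEN OTHERS THEN NULL;  -- never let resource-group housekeeping break the process
END;
/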

Nonetheless, I warmly welcome the new support for Oracle Resource Manager in PeopleTools.  It is going to be very useful for RAC implementations, and I think it will be essential for multi-tenant implementations where different PeopleSoft product databases are plugged into the same container database.

hamcdc's picture

RETURNING BULK COLLECT and database links

Looks like the nice PL/SQL facility for returning a set of updated rows is restricted when it comes to database links.

(This was tested on 12.1.0.1.)

SQL> declare
  2    type int_list is table of number(12) index by pls_integer;
  3    l_results int_list;
  4
  5  begin
  6    update MY_TABLE b
  7    set b.my_col = ( select max(last_ddl_time) from user_objects@dblink where object_id = b.key_col)
  8    where b.my_col is null
  9    returning b.other_col bulk collect into l_results;
 10  end;
 11  /
declare
*
ERROR at line 1:
ORA-22816: unsupported feature with RETURNING clause
ORA-06512: at line 6

When we remove the database link, things revert to what we would expect:

SQL> declare
  2    type int_list is table of number(12) index by pls_integer;
  3    l_results int_list;
  4
  5  begin
  6    update MY_TABLE b
  7    set b.my_col = ( select max(last_ddl_time) from user_objects where object_id = b.key_col)
  8    where b.my_col is null
  9    returning b.other_col bulk collect into l_results;
 10  end;
 11  /

PL/SQL procedure successfully completed.
 

A workaround is to perform an equivalent SELECT to fetch the required data from the remote source (for example, into a temporary table), and then update locally, as sketched below.
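
A minimal sketch of that workaround, assuming a global temporary table MY_TABLE_STAGE created for the purpose (the staging table and its columns are illustrative, not part of the original test case):

-- one-off setup (hypothetical staging table):
-- CREATE GLOBAL TEMPORARY TABLE my_table_stage
--   (key_col NUMBER, remote_val DATE) ON COMMIT DELETE ROWS;

declare
  type int_list is table of number(12) index by pls_integer;
  l_results int_list;
begin
  -- fetch the remote data into the local staging table first
  insert into my_table_stage (key_col, remote_val)
  select object_id, max(last_ddl_time)
  from   user_objects@dblink
  group  by object_id;

  -- the purely local update can now use RETURNING BULK COLLECT
  update MY_TABLE b
  set    b.my_col = (select s.remote_val from my_table_stage s
                     where  s.key_col = b.key_col)
  where  b.my_col is null
  returning b.other_col bulk collect into l_results;
end;
/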