Oakies Blog Aggregator

Franck Pachot's picture

Download your SR content from MOS

You may want to keep track of your Oracle Service Requests offline, or simply be able to read them as plain text. Here is an easy way to download all of them into text files.

First, it is easy to get a list of the Service Requests as an Excel file. Just list them on the GUI. You may choose:

  • The Support Identifiers (CSI) on the right
  • Only SR where you are primary contact or all of them, with the ‘person’ icon
  • Include closed SRs with the red check icon
  • The columns: View -> Columns -> Show all

And then View -> Export to XLS

The service_requests.xls file is actually in XML format, which is easy to parse, but you can also convert it to CSV. Here I have saved it as service_requests.csv
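If you'd rather script the conversion, here is a rough awk sketch. It assumes the export is SpreadsheetML with simple `<Data>` cells and no markup embedded in the values; empty cells would shift columns, so treat it as a starting point rather than a robust parser (any spreadsheet tool's "Save as CSV" works too):

```shell
# Strip SpreadsheetML markup: treat each </Row> as the record separator and
# the <Data ...> / </Data> tags as field separators, so the even-numbered
# fields are the cell values; join them with ';' to match the awk -F";" below.
awk 'BEGIN { RS = "</Row>"; FS = "</?Data[^>]*>" }
/<Row/ {
    line = ""
    for (i = 2; i <= NF; i += 2)
        line = line (line == "" ? "" : ";") $i
    if (line != "") print line
}' service_requests.xls > service_requests.csv
```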

Then with AWK and LYNX installed here is how to get each SR into text:

awk -F";" 'NR>1{gsub("[()/?%*:|. \\" q qq "]","_");print "lynx -dump -accept-all-cookies -auth=ID:PASSWD " q "https://support.oracle.com/epmos/faces/SrDetailPrint?srNumber=" $2 "&print=true&sysmsg=true&sortBy=Newest%20on%20Top" q "> " q $9 "_" $2 "_" $1".txt" q}' q="'" qq='"' service_requests.csv | sh -x

The idea is to build the file name from the contact ($9), the SR number ($2), and the subject ($1, after translating some special characters to ‘_’), and then download each SR using the URL behind the ‘print’ view. Authentication is easy with the -auth=ID:PASSWD argument, where you pass your Oracle SSO login username and password. Just replace ID and PASSWD with yours.
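As a standalone illustration (the sample subject is invented for the demo, and the quote characters handled via q/qq above are dropped for readability), here is the same gsub() character translation applied to a subject line:

```shell
# Map filename-hostile characters in the subject to underscores,
# exactly as the character class in the script above does.
echo 'ORA-600 (kcbgtcr/3) ?' | awk '{gsub("[()/?%*:|. ]","_"); print}'
# -> ORA-600__kcbgtcr_3___
```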

Note that you should not try to run that in parallel or you will get ‘The user has already reached the maximum allowed number of sessions’.

dbakevlar's picture

How to Linux for the SQL DBA Articles on Simple Talk

I’ve started to write a series of articles on Simple Talk from Redgate about Linux for the SQL Server DBA.  Thanks to Kathi Kellenberger, who came to my pre-con at SQL Saturday Indianapolis and then asked if it was something that I’d be up to doing.

The biggest challenge when doing one-hour sessions on Linux at SQL Saturdays is that you find out one hour is just nowhere near enough.  In the eight-hour pre-con we get to dig in deeper and actually work through a number of labs, and I feel like students leave with a better grasp of how to work with this new operating system in preparation for managing a database on it.

If you’re curious to learn or just need a refresher, here’s the first article in the series, with more to come!







Copyright © DBAKevlar [How to Linux for the SQL DBA Articles on Simple Talk], All Rights Reserved. 2018.

Jonathan Lewis's picture


My entrance at the Polish Oracle User Group conference 2017 has just resurfaced on Twitter. There is a back-story to this which contains an allegorical lesson in using Oracle. As I said in the opening remarks in the subsequent presentation: “The lesson began before I got to the stage”.

Like all the other speakers at POUG2017 I had received an email asking me to name a tune for a playlist. Having decided that Beethoven, Brahms and Tchaikovsky were not the sort of composers the organisers had in mind I nominated (with a touch of irony) “Who wants to live forever?” by Queen (despite Richard Foote’s strenuous efforts to turn the Oracle world to David Bowie).

When the conference started I noticed two things: first, that the tunes for the “playlist” were actually being used to accompany speakers to the stage; second, that the admin staff were all wearing monk-like gowns and hoods. So I asked for my tune to be changed to “The Imperial March” (Darth Vader’s theme) and borrowed the gown from the tallest admin person, with the results you see in the video clip.

So what’s the Oracle allegory?

First you read the manuals, then you observe how it really works before you go live.


Franck Pachot's picture


By Franck Pachot

COMMIT is the SQL statement that ends a transaction, with two goals: persistence (changes are durable) and sharing (changes are visible to others). That’s a weird title and introduction for the 499th blog post I write on the dbi-services blog. 499 posts in nearly 5 years, roughly two blog posts per week. This activity was mainly motivated by the will to persist and share what I learn every day.

Persistence is primarily for myself: writing a test case with a little explanation is a good way to remember an issue encountered, and Google helps to get back to it when the problem is encountered again later. Sharing is partly for others: I learn a lot from what others are sharing (blogs, forums, articles, mailing lists,…) and it makes sense to also share what I learn. But in addition to that, publishing an idea is also a good way to validate it. If something is partially wrong or badly explained, or just benefits from exchanging ideas, then I’ll get feedback, via comments, tweets, e-mails.

This high throughput of things I learn every day gets its source from multiple events. In a consulting company, going from one customer to another means different platforms, versions, editions, different requirements, different approaches. Our added value is our experience. From all the problems seen in all those environments, we have built knowledge, best practices and tools (this is the idea of DMK) to bring a reliable and efficient solution to customers’ projects. But dbi services also invests a lot in research and training, in order to build this knowledge pro-actively, before encountering the problems at customers. A lot of blog posts were motivated by lab problems only (beta testing, learning new features, setting up a proof of concept before proposing it to a customer), and then encountered later at customers, with faster solutions because this had been investigated before. dbi services also provides workshops for all technologies, and preparing the training exercises, as well as giving the workshops, was also a great source of blog posts.

I must say that dbi services is an amazing company in this area. Five years ago, I blogged in French on developpez.com and answered forums such as dba-village.com, and wrote a few articles for SOUG. But as soon as I started at dbi services, I passed the OCM, I presented for the first time in public, at DOAG, and then at many local and international conferences. I attended my first Oracle Open World. I became ACE and later ACE Director. The blogging activity is one aspect only. What the dbi services Technology Organization produces is amazing, for the benefit of the customers and the consultants.

You may have heard that I’m going to work in the database team at CERN, which means quiescing my consulting and blogging activity here. For sure I’ll continue to share, but probably differently: maybe on the Databases at CERN blog, and probably posting on Medium. Posts will also be replicated to http://www.oaktable.net/ of course. Anyway, it is easy to find me on LinkedIn or Twitter. For sure I’ll be at conferences, and probably not only Oracle ones.

I encourage you to continue to follow the dbi services blog, as I’ll do. Many colleagues are already sharing on all technologies, and new ones are coming. Even if my goal was the opposite, I’m aware that publishing so often may have throttled other authors, so I’m now releasing some bandwidth to them. The dbi services blog is in 9th position in the Top-100 Oracle blogs and 27th position in the Top-60 Database blogs, with 6 blog posts a week on average. And a lot of non-database topics are covered as well. So stay tuned on https://blog.dbi-services.com/.


The article COMMIT appeared first on Blog dbi services.

connor_mc_d's picture

18c database creation on Windows

Hopefully you’ve followed my very simple and easy guide to downloading the 18c database software for Windows. But of course, software on its own is not much use – we need a database! So let’s get cracking and create one. Using the Start menu like I’ve done below, or using the Windows panels, locate the Database Configuration assistant and start it.



After a few seconds the initial screen will ask what you want to do.  Choose “Create Database”.



If you like you could just go with “Typical Configuration” and you’ll be done in just a couple of clicks, but I always prefer to opt for the “Advanced Configuration” for two reasons. Firstly, even if you accept all of the defaults, it gives you a better idea of what options are going to be installed, where the files will be stored, etc.  And secondly…well…we all like to think of ourselves as advanced, don’t we? :-)



For just research and exploration on your own Windows machine, you’ll probably want to opt for just a single instance database.  RAC takes a bit more setup and complexity. For a faster install, choose one of the options that has the datafiles included.  This way, the installer will just copy some existing files and seed them as your database, rather than building the entire instance from scratch.



Now choose a unique and memorable name for your database. I’ve reached into the depths of my imagination and come up with “db18” for my version 18 database. Go figure :-). I’d also recommend you go with a container database configuration, because that is the strategic direction for Oracle going forward, so if you are going to have a database to skill up on, it makes sense for that database to be a container database.



To keep things simple, I’m just nominating a single location for all of my database files. It can be anywhere, but a common convention is that wherever you house them, you’ll have a folder called “oradata” and then a folder for each database you create under that.



I’m skipping the fast recovery area and archiving at this stage. If I start to get serious with testing things like backup and recovery, then I would revisit this after database creation to enable at least archiving so that I can explore all the goodies that RMAN has to offer.



On a brand new installation, it is likely you will not have a pre-existing listener to choose from. (If you had a previous installation, or had run the Network Configuration Assistant already, then you would see a listener to use).

I’m creating one called LISTENER18. The default port is normally 1521, but I’ve opted for 1518 just to align it with the version I’m using.



I’m skipping Data Vault and Label Security, but hopefully you can now see why it’s cool to go with the “Advanced Configuration” – you get to see all the potential functionality areas of the database that you might want to explore.



Now you choose how much of your server/desktop/laptop you’re going to allow this database to grab. My machine has plenty of RAM, but it also has a stack of other database versions running on it to handle my AskTOM daily tasks. So I’ll keep this dude at around 6G.



Now we’ll flick across the other tabs on this screen to see if there’s anything of note. The default for processes seems to have gone up in this version (I think it used to be around 300 in 12c), but in any event, that’s more than enough for me on this machine.



I have a simple rule for character sets: UTF all the way. Single-byte character sets are soooo last century. We live in a global village, so you should be able to handle characters from all over the world!



And for the last tab, I’m going to opt for the sample schemas, so that when my database is created I’m not just left with an empty database shell. I want some sample data there so I can jump straight in and start experimenting with this release.



If I’m a DBA, I might go for some EM management to see how that all works, but for now, I’m skipping that.



Being a sandbox for experimenting, I’m setting all the passwords to a common value. Naturally this will not be what you’ll be doing for your production databases!



So now I’m ready to go. I always tick the “Generate Scripts” option because it lets me see what is actually occurring when the creation assistant is doing its job. Even so, the scripts are probably not what I would use to automate a database install, since the ‘dbca’ command has nice command-line options nowadays, along with the ‘-silent’ option, so you can create an entire database with just a single command.
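For reference, a silent creation would look something like the sketch below. This is a hedged illustration only: flag names and defaults vary by dbca version, and every value here (template name, database name, memory size) is a placeholder rather than something taken from this post.

```shell
# Placeholder sketch of a one-command database creation with dbca in silent mode.
dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname db18 -sid db18 -createAsContainerDatabase true -totalMemory 6144
```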



I finally get a summary of what is about to occur, and we’re off! Database creation is go for launch!



The total time to create your database will depend on your hardware, in particular how fast your storage is. I’ve done a few creations now using both flash storage and conventional hard drives, and unsurprisingly the flash storage is faster. You’re probably looking at around 10 minutes to complete.



When your database creation is complete, you’ll get the standard summary screen and you are ready to go.



And here it is – the finished product! My 18c database on Windows is ready to go. By default, the underlying Windows service will have a Start status of “Automatic”, which means your database will start every time Windows starts. If you are short on memory, or do not want the database started unless you explicitly want it to, you can set this to Manual via “services.msc”.



You can watch the video version of this installation here

Enjoy your 18c Windows database !

dbakevlar's picture

Power BI- Loading PBI Log Files

There’s a reason that log analytics programs like Splunk, Datadog and Sumo Logic are so popular.  Even Microsoft has a Log Analytics product, but the important message here is that log data is massive, and parsing through it to find important information can be a bit of a pain.  The second word in Log Analytics IS “analytics”.  Due to this, the first thought when faced with the number of logs from the many complex Power BI environments that people are building (multiple data sources, multiple data centers, SSRS, Power BI, etc.) was to load the logs into Power BI.

I’ve been working with trace files, but the log files should have been the first files I discussed (my bad!).  Let’s correct that oversight right now.

1st Example- Power BI Desktop Logs

First we’ll start with the standard Power BI Desktop log, which can be found in C:\Users\\appdata\local\Microsoft\Power BI Desktop\Traces\Performance.  This file keeps track of the main processing performed by Power BI.  This log rotates out on a regular basis, so you’ll see more than one, and the logs will have the naming convention of:


We will load this log file into Power BI Desktop by clicking on Get Data –> Text/CSV and then choose to view all files and navigate to the folder that contains the log files.  Choose the desktop log from the list of files available and click OK.

You’ll need to format the data to produce a working table, which the M query below will demonstrate:

let
    Source = Csv.Document(File.Contents("C:\Users\kegorman.NORTHAMERICA\Documents\Traces\PBIDesktop.2020.2018-08-23T18-20-38-862814.log"),5,"",null,1252),
        #"Changed Type" = Table.TransformColumnTypes(Source,{{"Column1", type text}, {"Column2", type text}, {"Column3", Int64.Type}, {"Column4", type text}, {"Column5", type text}}),
        #"Replaced Value" = Table.ReplaceValue(#"Changed Type","{Start:","",Replacer.ReplaceText,{"Column5"}),
        #"Split Column by Delimiter" = Table.SplitColumn(#"Replaced Value", "Column5", Splitter.SplitTextByDelimiter(",Action:", QuoteStyle.Csv), {"Column5.1", "Column5.2"}),
        #"Changed Type1" = Table.TransformColumnTypes(#"Split Column by Delimiter",{{"Column5.1", type datetime}, {"Column5.2", type text}}),
        #"Replaced Value1" = Table.ReplaceValue(#"Changed Type1",",ProductVersion:2.56.5023.1043 (PBIDesktop)","",Replacer.ReplaceText,{"Column5.2"}),
        #"Replaced Value2" = Table.ReplaceValue(#"Replaced Value1","}","",Replacer.ReplaceText,{"Column5.2"}),
        #"Split Column by Delimiter1" = Table.SplitColumn(#"Replaced Value2", "Column5.2", Splitter.SplitTextByEachDelimiter({"Duration:"}, QuoteStyle.Csv, true), {"Column5.2.1", "Column5.2.2"}),
        #"Changed Type2" = Table.TransformColumnTypes(#"Split Column by Delimiter1",{{"Column5.2.1", type text}, {"Column5.2.2", type duration}}),
        #"Renamed Columns" = Table.RenameColumns(#"Changed Type2",{{"Column5.2.2", "Duration"}, {"Column5.2.1", "Action"}, {"Column5.1", "Start Time"}}),
        #"Removed Columns" = Table.RemoveColumns(#"Renamed Columns",{"Column4"}),
        #"Renamed Columns1" = Table.RenameColumns(#"Removed Columns",{{"Column3", "PID"}, {"Column2", "Type"}}),
        #"Replaced Value3" = Table.ReplaceValue(#"Renamed Columns1",":","",Replacer.ReplaceText,{"Type"}),
        #"Renamed Columns2" = Table.RenameColumns(#"Replaced Value3",{{"Column1", "Main Action"}})
in
    #"Renamed Columns2"

The table then results in a very workable data set that appears similar to this:


This offers you a dataset that you can load, reload and/or append to with log data that is available to report on.  As we’ve seen previously, you can create reports on resource usage, but this one would be used to search for “where type=’Error'” or look at the steps performed by the child TID #57, etc.

There weren’t any errors in my log, so it’s kind of difficult to demonstrate, as you’d only want it to display if there was a problem, but you could set up a report that only shows the Actions that take over 5 seconds. I decided instead to just simulate the data, displaying the Action and the TID for those that fell into a certain number range… :)
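If you just want a quick look before building the report, a hedged command-line equivalent is sketched below. It assumes the Duration field looks like Duration:00:00:07.1234567, as in the M query above; treat the field layout as an assumption, not a documented format:

```shell
# Print log entries whose Action duration exceeds 5 seconds.
awk -F'Duration:' 'NF > 1 {
    split($2, t, /[:.}]/)              # t[1]=HH, t[2]=MM, t[3]=SS
    if (t[1]*3600 + t[2]*60 + t[3] > 5) print
}' PBIDesktop*.log
```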


2nd Example-  Power BI Report Server

The second example inspects the Report Server Portal log (RSPortal*.log), which resides in C:\Program Files\Microsoft Power BI Report Server\PBIRS\LogFiles

We again load this log file via Get Data –> Text/CSV and then choose to view all files, as it won’t see the .log extension otherwise.  Choose the file and click on Edit.

The M query below displays the changes I performed to format the data into something that can easily be worked with. Because of the staggered output of the data lines, this will fully format the error and warning messages; the rest of the rows will have only the Information Message populated, with the remaining columns null:

let
    Source = Table.FromColumns({Lines.FromBinary(File.Contents("C:\Program Files\Microsoft Power BI Report Server\PBIRS\LogFiles\RSPortal.log"), null, null, 1252)}),
        #"Split Column by Delimiter" = Table.SplitColumn(Source, "Column1", Splitter.SplitTextByDelimiter("|", QuoteStyle.Csv), {"Column1.1", "Column1.2", "Column1.3", "Column1.4"}),
        #"Changed Type" = Table.TransformColumnTypes(#"Split Column by Delimiter",{{"Column1.1", type text}, {"Column1.2", type text}, {"Column1.3", Int64.Type}, {"Column1.4", type text}}),
        #"Renamed Columns" = Table.RenameColumns(#"Changed Type",{{"Column1.1", "Information Message"}, {"Column1.2", "Status"}, {"Column1.3", "Status Code"}, {"Column1.4", "Status Message"}})
in
    #"Renamed Columns"

We can then create a simple table in Power BI:

Values:  Status Message

Filters:  status = Error

You then will receive the following output:


Of everything included in the log file, this quickly isolates the error message and displays it in the report.
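The same “Status = Error” filter can be approximated at the command line, since the file is pipe-delimited per the M query above. A quick sanity check, not a replacement for the report:

```shell
# Column 2 is Status and column 4 is Status Message, per the renames above.
awk -F'|' '$2 ~ /Error/ { print $4 }' RSPortal.log
```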

The deal is, this uses Power BI a bit differently than we originally considered it: it’s not just for the business user, it can be for the technical professional as well. I’ve worked with this kind of data my entire career, and if there’s a way I can display it the way I need to answer the questions the business needs answered from me, what better way than to use the tool they’re already using to answer their questions about the business every day?

Jonathan Lewis's picture

Error Logging

Error logging is a topic that I’ve mentioned a couple of times in the past, most recently as a follow-up in a discussion of the choices for copying a large volume of data from one table to another, but originally in an addendum about a little surprise you may get when you use extended strings (max_string_size = EXTENDED).

If you use the default call to dbms_errlog.create_error_log() to create an error logging table then Oracle will create a table with a few columns of its own plus every column (name) that you have in your original table – but it will create your columns as varchar2(4000), or nvarchar2(2000), or raw(2000) – unless you’ve set max_string_size to extended.  Here’s a simple demo script with results from two different systems, one with the default setting, the other with the extended setting (note, there’s a little inconsistency in handling raw() columns).

rem     Script:         log_errors_min.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jun 2018
rem     Purpose:

create table t1 (
        v1      varchar2(10),
        n1      number(2,0),
        d1      date,
        nv1     nvarchar2(10),
        r1      raw(10)
);

execute dbms_errlog.create_error_log('t1')

desc err$_t1

max_string_size = STANDARD
 Name			       Null?	Type
 ----------------------------- -------- --------------------
 V1					VARCHAR2(4000)
 N1					VARCHAR2(4000)
 D1					VARCHAR2(4000)
 NV1					NVARCHAR2(2000)
 R1					RAW(2000)

max_string_size = EXTENDED
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ORA_ERR_NUMBER$                        NUMBER
 ORA_ERR_MESG$                          VARCHAR2(2000)
 ORA_ERR_ROWID$                         ROWID
 ORA_ERR_OPTYP$                         VARCHAR2(2)
 ORA_ERR_TAG$                           VARCHAR2(2000)
 V1                                     VARCHAR2(32767)
 N1                                     VARCHAR2(32767)
 D1                                     VARCHAR2(32767)
 NV1                                    NVARCHAR2(16383)
 R1                                     RAW(32767)

Every single “original” column that appears in this table will be a LOB, with an inline LOB locator of 30 or more bytes. (At least, that’s the implementation; I haven’t checked for 12.2 or 18.3.)

If this is going to be a problem (e.g. you have a table defined with 500 columns but only use 120 of them) you can create a minimalist error logging table. Provided you create it with the ora_err% columns suitably defined you can add only those columns you’re really interested in (or feel threatened by), and you don’t have to declare them at extreme lengths. e.g.

create table err$_special (
        ora_err_number$         number,
        ora_err_mesg$           varchar2(2000),
        ora_err_rowid$          rowid,
        ora_err_optyp$          varchar2(2),
        ora_err_tag$            varchar2(2000),
        n1                      varchar2(128)
);

insert into t1 values(1,'abc','02-jan-1984',sys_op_c2c('abc'),hextoraw('0xFF')) 
log errors into err$_special
reject limit unlimited;

execute print_table('select * from err$_special')

ORA_ERR_NUMBER$               : 1722
ORA_ERR_MESG$                 : ORA-01722: invalid number

ORA_ERR_ROWID$                :
ORA_ERR_OPTYP$                : I
ORA_ERR_TAG$                  :
N1                            : abc

If you try to create an error logging table that doesn’t include the 5 critical columns you’ll see Oracle error ORA-38900: missing mandatory column “ORA_ERR_{something}” of error log table “{your logging table name}” when you try to log errors into it; and the 5 critical columns have to be the first 5 columns (in any order) in the table, or you’ll see Oracle error ORA-38901 complaining about column “ORA_ERR_{something}$” of table “{your logging table name}” when you try to log errors into it.

connor_mc_d's picture

18c Database installation on Windows

If you’re a Windows enterprise, or you want to run your 18c database on your Windows laptop/desktop for research and education, then there has been some good news this week.  The software is now available to you on the OTN network page.  Here’s a walk-through of the software installation process.

Head to the standard database downloads page


Accept the license agreement and choose the Windows version to download


Note – if you want to see all of the various Windows 18c components (grid, client, examples, etc), you can get that here

Once you have downloaded the software, note that when you unzip it, you are unzipping it directly into place, not into a staging area.  So unzip to a folder that you intend to be your ORACLE_HOME location.


Once the unzip has been completed, in the base directory where you unzipped the files, there will be a setup.exe file.  Double click on that to launch the installer


The familiar java based software installer will appear. For this blog post, I opted to solely go with software configuration – I’ll cover database creation in a separate post.


For my use, I’m just using a home laptop, so single instance for me. If you do want RAC, you’ll be needing to download more components anyway (eg Grid)


You’ll then get the standard pre-installation checks on your machine.  For reference, the machine I’m installing the software on is a 32G RAM machine on Windows 10, and I did not get any warnings.


I nominated my existing ORACLE_BASE location as the target for this ORACLE_HOME as well


and went with the default option of using a virtual Windows account to own the software:


Note: The first time I did this install, I went with the Windows Built-In account because I had an old 11g database installation under the same ORACLE_BASE, and I thought at least it would be consistent.  Whilst the installation and subsequent use of 18c worked fine, and my 11g instance was fine, it totally hosed my 12c installation which was also under the same ORACLE_BASE.  The 12c installation had been done with the default virtual account, and it appears the subsequent addition of 18c using the built-in Windows SYSTEM account reset the permissions on the critical diagnostic directory paths.  From that point on, my 12c installation could not start, and got “permission denied” errors when trying to access various destinations.

So my advice would be – adopt a consistent approach for any software under a common ORACLE_BASE.

Next you choose the edition you want to install.  Enterprise for me, naturally :-)


And then you are ready to install. Just hit the Install button, sit back and relax.


You will see the familiar progress dialog box, and like all progress boxes from any vendor, the percentage complete will typically have no true bearing on how long the process will take :-) For the record, my installation took around 8-10 mins at this stage, most of it configuring the central inventory.


All things going well, you finally get the confirmation screen, and voila! Your 18c database software is installed!


Here’s an (accelerated) video showing the above steps as they were performed on my machine.

Enjoy 18c on Windows!

Richard Foote's picture

New Zealand: “Oracle Indexing Internals and Best Practices” Seminars November 2018.

Good news for those of you in beautiful New Zealand. Due to popular demand, I’ll be returning to run some of my acclaimed “Oracle Indexing Internals and Best Practices” seminars in November 2018. The dates and events are: Wellington: 19-20 November 2018: Registration Here or Buy Directly Here Auckland: 21-22 November 2018: Registration Here or […]

dbakevlar's picture

RV Life and Working Remote

I get a lot of questions about what it’s like to work remote while living in our 5th wheel.  I’ll link this post to danceswithwinnebagos.com, too, so for those asking the same question from that site, it’s a two-for-one…
