Oakies Blog Aggregator

Franck Pachot's picture

/proc/meminfo formatted for humans

Here is a small awk script I use to format memory information on Linux:

awk '/Hugepagesize:/{p=$2} / 0 /{next} / kB$/{v[sprintf("%9d GB %-s",int($2/1024/1024),$0)]=$2;next} {h[$0]=$2} /HugePages_Total/{hpt=$2} /HugePages_Free/{hpf=$2} {h["HugePages Used (Total-Free)"]=hpt-hpf} END{for(k in v) print sprintf("%-60s %10d",k,v[k]/p); for (k in h) print sprintf("%9d GB %-s",p*h[k]/1024/1024,k)}' /proc/meminfo|sort -nr|grep --color=auto -iE "^|( HugePage)[^:]*" #awk #meminfo

This reads /proc/meminfo and formats it to display the size in GB in the first column. Most of the statistics are in kB (stored in the ‘v’ array in the awk script), but the Huge Pages counters are in numbers of pages, so we must read the Hugepagesize to convert them (they are stored in the ‘h’ array). Then I sort the output by size and color the ‘HugePage’ pattern with grep.

Here is an example of the output on a ‘VM.DenseIO2.24’ compute shape in the Oracle Cloud (320 GB — showing 314 GB MemTotal here).

I have allocated 102400 Huge Pages (200GB) with the following line in /etc/sysctl.conf:

vm.nr_hugepages=102400

Remember that this can be allocated dynamically (sysctl -p), but be careful to leave enough small pages (here is an example where a system cannot boot because of invalid settings: https://blog.dbi-services.com/kernel-panic-not-syncing-out-of-memory-and-no-killable-processes/).

In this example, 64 GB of those Huge Pages are used (136 GB free within the 200 GB total). They were allocated by two Oracle Database instances, each having a 32 GB System Global Area. This is visible from the alert.log. When the first instance started, all 102400 pages were free and 16385 were allocated:

Supported system pagesize(s):
PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES
      4K       Configured              12               12
   2048K           102400           16385            16385

When the second instance started, only 102400–16385=86015 were free and another 16385 were allocated:

Supported system pagesize(s):
PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES
      4K       Configured              12               12
   2048K            86015           16385            16385

So, this leaves 120 GB of free small pages, approximately counted by MemAvailable, but I recommend Frits Hoogland’s post for a better calculation:

Linux memory usage

This post follows the awk snippet I posted in a tweet — follow me on Twitter if you like this kind of post…

Added 3-MAR-2019

I’ve added the calculation of used Huge Pages as the difference between Total and Free (added in italics in the code above):

    32767 GB VmallocTotal:   34359738367 kB
    32510 GB VmallocChunk:   34089686416 kB
      503 GB MemTotal:       528074912 kB
      483 GB DirectMap1G:    506462208 kB
      457 GB HugePages_Total:   234000
      436 GB HugePages_Free:    223470

       39 GB CommitLimit:    41181260 kB
       33 GB MemAvailable:   35024844 kB
       31 GB Cached:         33027992 kB
       30 GB DirectMap2M:    31846400 kB
       20 GB HugePages Used (Total-Free)
       18 GB Inactive:       19130736 kB
       17 GB Inactive(file): 18231100 kB
       17 GB Active:         18713072 kB
       15 GB SwapTotal:      16759804 kB

mwidlake's picture

Introducing I.T. to an Elderly Relative

Introducing an older person to the connected world can be a challenge. So I thought I would describe my recent experiences of introducing my elderly mother to I.T. and the internet. Each such situation will be different of course, depending on the prior experience of the person and the skills you believe they have. I’m going to run through what I think are the main initial considerations. I knew from the start it was going to be a particular challenge with my mother, so I think she is a good example. Hopefully, for many the task will be a little easier…

Firstly, why are we doing this?

Not everyone has to be on the internet and I knew it was going to be stressful for everyone involved, so the first question to ask is “Is it in the best interest of Uncle Bob to go through this?”

For years my mother has shown very little interest in computers or the internet, and at times she has been quite “Oh, those damn things!” about it all. But over the last 2 or 3 years Mum’s started showing an interest. This has nothing to do with the fact that her youngest son’s whole working life has been in I.T.; I think she’s simply started to feel she is missing out, as there are so many references on TV programs and in the newspaper to things on the internet. “Just go to blingy bong for more information!”. And to her, it really is “blingy bong”.

I think it is vital that the person wants to get online – and this is not a one-week wonder.

Before now my mum had mentioned getting online but then lost interest when the one thing she was interested in disappeared, such as checking the state of play in the Vuelta cycling race when it was not on her TV. Setting someone up on the internet is not cheap, and I knew she would insist on paying. You have to organise broadband to the property, buy a device and then spend time training them. If Mum lost interest after a couple of days of trying, it would all be a waste of effort. But she had been constant in mentioning this for a couple of months.

Another reason to get Mum online is so she can stay in touch more easily {am I really sure I want this?!?}. Her hearing is not as good as it was, and phone calls are a ‘dedicated, binary activity’. What do I mean by that? Well, when you are on the phone you have to keep the conversation going and you are doing nothing else; this is your only chance to communicate – dedicated. And when you are not on the phone you are not in contact – binary (all or nothing).

I think those of us in the technology industry, or who grew up in the last 4 decades maybe, take this for granted, but with email, texts, Messenger, WhatsApp etc. you can throw a message or two at people when the need occurs to you, and leave them for the person to pick up. It is a more relaxed way of communicating and, in many ways, more reliable. At present, if Mum needs me to come over and change light bulbs she needs to call me in the evening. She won’t call me during the day; she is convinced nothing short of death is important enough to call during the day! So she also needs to remember to call, and Mum is getting worse at that. If she is online she can send me a message when she notices the bulb in the hall has blown.

The next step is to assess the capabilities of the person you are helping.

[Image: an old typewriter]

I’ve introduced a few other people (mother-in-law, brother to some degree, relatives of friends) to computers and the internet over the years and the size of the challenge is very much dictated by their skills. I think you need to be honest about how much and how soon people can learn, especially if they are older or have learning needs. It’s great to be surprised by them doing better than you expected, but if they do worse then it can be demoralising for both parties.

My mother-in-law was a retired science teacher, interested in a dozen things, confident, and self-motivated. When she asked me to help her get on the internet I knew it was not going to be too hard. But something I did not consider is that she had never typed at all (which surprised me, but there you go), so the keyboard was an initial, surprise challenge. Just think about it: you have to explain the “enter” key, the “delete” key, the “shift” key, special symbols… But my mother-in-law was used to using equipment and took to it well. It did mean that the first session was almost totally about introducing her to the keyboard, plus a few basics on turning the machine on and off and using email. In later sessions I went on to show her the basics of Windows, email and web browsing, and she was soon teaching herself. She got a couple of “computers for dummies” books and went through them.

Learning skills deteriorate as you age – but each individual is different. Be realistic.

My mother had also never used a typewriter – and she is also not good with technology. Getting her to understand how to use a video player was a task, way back when. It is not that she is no good with mechanical things or controlling them – she was a sewing machinist all her career – but she never moved from a simple sewing machine with just a dozen manually selected stitch patterns to the ones you can program or that have a lot of controls. This might be mean to say, but she struggled with an electronic cat-flap when we installed one for her! {Well, we installed it for the cats, to be honest; we do not make Mum enter and exit the house on her hands and knees through a small hole in the door}. My mum has also never had (or wanted) a mobile phone, let alone a smartphone. Apps, widgets, icons and touch screens are all things she has never used. We were going to have to keep it very, very simple. Mum also lacks focus and retention of details. Lots of repetition would be needed to learn, and only a few things at a time.

Third Question – What hardware?

This is a major consideration. A few years ago if you wanted internet access and email the choice was simply “Mac or PC” and probably came down to what you personally preferred and felt most comfortable supporting.

I realised from the very start that my mum would never cope with a Windows PC or a Mac. I know some people are so Mac-fanboy that they will insist they are “so easy anyone could use them”, but no, Macs can have issues and there is a lot of stuff to learn initially to get going. And, like PCs, they DO go wrong and have issues.

[Image: an iPad]

Choice made – will it be the correct one?

I did initially investigate whether I could make a Windows PC work for my mum. I can sort out most issues on a PC, so it would be easier for me to support her. You can set Windows up to be simpler for an older person. I was more than happy setting up other older people with a PC in the past, as I’ve mentioned. Another big advantage with a PC would be that I could set it up for remote access and help that way. I live 2.5 hours from Mum, so remote access would be a major boon. In another situation I think I would go down that route: set up a Windows laptop, reduce what was available on it, put on the things I felt they would want initially and ensure I had full access to the machine. I could then do interactive “show and tell” sessions. Of course, you have to consider privacy if you have full access to someone’s machine. But I felt I was trying to come up with a solution that was easier for me rather than easier for the person I was helping.

My final factor in deciding what to go for was “the internet”. There is bad stuff on the internet (I don’t mean content so much; what my Mum looks at is up to her, and I am under no illusion that someone getting old becomes a child to protect. I don’t understand why some people seem to think old people are sweet and innocent! Old people used to be young, wild, risk-taking and randy. They’ve lived a life, learnt about the world, and they know what they do and do not like). What bothers me about the internet is viruses, spyware and downloads that screw your system over. No matter how much I explained to my mum, there was a good chance she would end up clicking on something and downloading some crap that messed up the system or stole her details. Machines that are not Windows PCs suffer from this a lot less.

For a while my mum said she wanted an Alexa or something similar – something she could ask about Lonnie Donegan’s greatest hits (this is a totally true example). But talking to her, she also wanted email and BBC news and sport. Also, I’ve seen people using an Alexa, and getting it to understand & do what you want is pretty hit & miss; I could see that really frustrating my Mum. And I don’t like the damned, nasty, spying, uncontrolled bloody things – they listen all the time and I don’t think it is at all clear what gets sent back to the manufacturer, how it is processed, and how they use it for sales & marketing.

So, for my mum a tablet was the way to go. It is simpler, much more like using a phone (you know, the mobile phone she has never had!) and has no complication of separate components. Plus it is smaller. I decided on an iPad because:

    • The three people she is most likely to be in contact with already have an iPad mini or iPhone,
    • They are simple. Simple-ish. Well, not too complicated.
    • I felt it was big enough for her to see things on it but not so big as to be in the way.
    • The interface is pretty well designed and swish.
    • They are relatively unaffected by viruses and malware (not impervious though)
    • It will survive being dropped on the carpeted floor of her house many, many, many times.
    • You can’t harm them by just typing things and running apps. {Hmm, I’ll come back to that in a later post…}
    • If she really hated it, I could make use of a new iPad
Franck Pachot's picture

19c Auto Index: the dictionary views

The abbreviation AI may be misleading: it has nothing to do with Artificial Intelligence. And you may have been surprised that the ‘A’ means ‘Automatic’ rather than ‘Autonomous’, as the latter is constantly used to tag any new feature in the database since 18c. But this difference is really important: ‘Autonomous’ supposes that you don’t have anything to do and don’t even need to be notified about what happened. By contrast, ‘Automatic’ means that some things are done without your intervention, in order to help you, but you are still in charge of managing them. And you need to look at the dictionary views to be aware of the findings, recommendations, and implementations. Automatic Indexing is an evolution of the Advisors that were introduced in 10g and, in the same way, it provides many dictionary views to understand its activity.

This post presents those views, mainly defined in the catproc script $ORACLE_HOME/rdbms/admin/cataivw.sql, grouped into the following areas:

  • Configuration
  • Activity log
  • Indexes created
  • SQL statements

Automatic Indexing — Configuration Parameters

DBA_AUTO_INDEX_CONFIG

The configuration parameters are displayed with this view.

Here I’ve set AUTO_INDEX_MODE to run and automatically implement its findings:

exec dbms_auto_index.configure('auto_index_mode','implement');

The other parameters are at their defaults:

select * from dba_auto_index_config order by 1;
PARAMETER_NAME                    PARAMETER_VALUE   LAST_MODIFIED
AUTO_INDEX_DEFAULT_TABLESPACE
AUTO_INDEX_MODE                   IMPLEMENT         14:20:12
AUTO_INDEX_REPORT_RETENTION       31
AUTO_INDEX_RETENTION_FOR_AUTO     373
AUTO_INDEX_RETENTION_FOR_MANUAL
AUTO_INDEX_SCHEMA
AUTO_INDEX_SPACE_BUDGET           50

Actually, the table behind this view is SMB$CONFIG, which belongs to the SQL Management Base. The SMB stores what the optimizer needs to persist about SQL statements: SQL Profiles, SQL Plan Baselines, SQL Patches and, in 19c, SQL Quarantine. Automatic Indexing is heavily linked with them to control the scope of its implementations.

SMB$CONFIG

SMB$CONFIG shows additional underscore parameters.

select * from sys.smb$config where parameter_name like '%AUTO_INDEX%' order by 1;

Here I’ve set _AUTO_INDEX_TRACE to the value 2 to get more tracing in the job trace. I’ve set it with:

exec sys.dbms_auto_index_internal.configure( '_AUTO_INDEX_TRACE', 2, allow_internal=>true);


DBA_ADVISOR_TASKS

Automatic Indexing is based on the Optimizer Advisor Framework. Here are the new Advisor tasks:

SQL> select * from dba_advisor_tasks where owner='SYS' order by task_id;
TASK_ID TASK_NAME                      ADVISOR_NAME
      2 SYS_AUTO_SPM_EVOLVE_TASK       SPM Evolve Advisor
      3 SYS_AI_SPM_EVOLVE_TASK         SPM Evolve Advisor
      4 SYS_AI_VERIFY_TASK             SQL Performance Analyzer
      5 SYS_AUTO_INDEX_TASK            SQL Access Advisor
      6 AUTO_STATS_ADVISOR_TASK        Statistics Advisor
      7 INDIVIDUAL_STATS_ADVISOR_TASK  Statistics Advisor


Automatic Indexing — Activity Log

DBA_ADVISOR_EXECUTIONS

The standard Advisor views can give information about the Auto Indexing activity, which runs with TASK_ID=5 and TASK_NAME='SYS_AUTO_INDEX_TASK':

select * from dba_advisor_executions where task_name='SYS_AUTO_INDEX_TASK' order by execution_id;


DBA_AUTO_INDEX_EXECUTIONS

You don’t need to go to those views, because Auto Indexing provides specific ones, based on the same WRI$_ADV_TASKS and WRI$_ADV_EXECUTIONS tables:

select * from dba_auto_index_executions order by execution_start;


DBA_ADVISOR_OBJECTS

The Advisor Framework stores additional information as objects in WRI$_ADV_OBJECTS. The Automatic Indexing ones are the log of its activity, with TYPE_ID=31 and TYPE=’AUTO INDEX INFORMATION’ (these types are defined in X$KEAOBJT):

select * from dba_advisor_objects where task_name='SYS_AUTO_INDEX_TASK' and type='AUTO INDEX INFORMATION' order by object_id;


dbms_auto_index_internal.finding_name(attr7)

Those Advisor views show general attributes that have a different meaning for each object type, and for AUTO INDEX INFORMATION the ATTR7 identifies the finding. The finding names, as far as I know, are not visible in a table but hardcoded in the DBMS_AUTO_INDEX_INTERNAL function. Here are all the possible values:

SQL> select attr7,sys.dbms_auto_index_internal.finding_name(attr7) from (select rownum attr7 from xmltable('1 to 51')) order by 1;
  ATTR7 SYS.DBMS_AUTO_INDEX_INTERNAL.FINDING_NAME(ATTR7)
1 Compiled statements
2 Statements using auto index in compilation verification(final)
3 Statements discarded (misestimate or high selectivity of indexes)
4 New index candidate
5 Candidate indexes
6 Index for rebuild
7 Rebuilt indexes
8 Redundant index
9 Redundant indexes
10 Misestimate in sql_id
11 Pruned indexes
12 SPM begin
13 Statements considered by SPM
14 Indexes in first verification
15 Indexes in second verification
16 No significant improvement with index
17 Ineffective indexes
18 Significant improvement with index
19 Effective indexes
20 Error for statement
21 Timeout for statement
22 No buffer gets for statement
23 Statement regressed or no significant improvement
24 Regressed statements
25 Statement produced same plan
26 Statement has same performance
27 Unchanged statements
28 Statement improved
29 Improved statements
30 Index created
31 Index dropped
32 Index rebuilt
33 Index marked unusable
34 Index marked visible
35 Index marked invisible
36
37 Auto index clean up work done
38 Execution validation for mis-estimated statements done
39 Auto index action based on performance validation done
40 Auto index compilation verification done
41 Statements using auto index in compilation verification
42 SPM end
43 Max space budget reached
44 Report mode, performance validated but index stays invisible
45 Out-of-space during rebuild
46 Statements in STS
47 Auto index execution start
48 Resuming auto index execution
49 Skipping table from auto index creation
50 Auto index execution end
51 Implement validated auto index

“_auto_index_log”

Oracle has an internal view to display the advisor objects as a log of its activity:

select * from sys."_auto_index_log" order by log_id;

Unfortunately, there’s no public dictionary view on it, except an aggregated one that sums the statistics.

DBA_AUTO_INDEX_STATISTICS

Based on “_auto_index_log”, we have some summary counters:

select * from dba_auto_index_statistics where value>0 order by 1;


Automatic Indexing — New Indexes

DBA_ADVISOR_OBJECTS

The goal of the Auto Indexing activity is to create (and drop) indexes, and this index information is stored as objects with TYPE_ID=2 and TYPE=’INDEX’ in the Advisor Framework objects:

select * from dba_advisor_objects where task_name='SYS_AUTO_INDEX_TASK' and type='INDEX' order by object_id;


“_auto_index_ind_objects”

This internal view decodes the attributes in the context of Automatic Indexes to display more information about the created indexes

select * from sys."_auto_index_ind_objects" order by object_id;


DBA_INDEXES

The full metadata is available from the dictionary views about indexes. The names of those indexes start with ‘SYS_AI_’ and they are flagged with the new AUTO column.
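Here is a quick query to list them (a minimal example; AUTO='YES' flags the automatic indexes):

select owner, index_name, auto, visibility, status, tablespace_name from dba_indexes where auto='YES' order by owner, index_name;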

The DDL generated by DBMS_METADATA has this AUTO attribute, but we cannot use it ourselves. The tablespace here is SYSTEM because I’ve not set the AUTO_INDEX_DEFAULT_TABLESPACE configuration parameter. It seems that 12cR2 ADVANCED LOW compression is used. I’ll update this post when I have more information about the licensing consequences. [Update: I raised the point and a new parameter should come in 19.3]
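To look at that DDL, a standard DBMS_METADATA call works (a simple sketch):

select dbms_metadata.get_ddl('INDEX',index_name,owner) from dba_indexes where auto='YES';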

DBA_AUTO_INDEX_IND_ACTIONS

A public view is available to see the commands that were executed to create this index, which shows the different steps:

  • created as unusable to analyze the execution plans
  • rebuilt (online) but invisible to control which statements will verify it
  • made visible (which does not lock the table since 12c) when accepted
select * from dba_auto_index_ind_actions order by action_id;

Those actions, with start/end timestamps, give more detail about the operations that we have seen in “_auto_index_log” (New index candidate, Index rebuilt, Index marked visible).

Automatic Indexing — SQL Statements

Automatic Indexing basically works by capturing a SQL Tuning Set, on which it runs the SQL Access Advisor. The tuning set is visible as SYS_AUTO_STS (here created at the same time as my first ‘Statements in STS’ entry in “_auto_index_log”).
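It can be queried like any other tuning set (a minimal sketch using the standard DBA_SQLSET view):

select name, owner, created, statement_count from dba_sqlset where name='SYS_AUTO_STS';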


DBA_ADVISOR_OBJECTS

Automatic Indexing goes further than the SQL Access Advisor: the statements are continuously verified to detect improvements and regressions. They are stored, with some flags, as TYPE_ID=7 and TYPE=’SQL’:

select * from dba_advisor_objects where task_name='SYS_AUTO_INDEX_TASK' and type='SQL' order by object_id;


“_auto_index_sql_objects”

The internal view decodes attr2 as the Plan Hash Value and attr7 as some flags about the verification (my guess from a quick test is that the flag is set to 1 when the SQL was improved, and to 2 when a regression has been seen):

select * from sys."_auto_index_sql_objects" order by object_id;


DBA_AUTO_INDEX_SQL_ACTIONS

The goal of those verifications is to prevent regressions by blacklisting the new index usage for some queries. We can see that in the SQL actions:

select * from dba_auto_index_sql_actions;

This is an example where Automatic Indexing has loaded the previous plan as accepted into a SQL Plan Baseline, by calling DBMS_SPM_INTERNAL.LOAD_PLANS_FROM_SQLSET.

DBA_SQL_PLAN_BASELINES

This SQL Plan Baseline is identified with the ‘EVOLVE-AUTO-INDEX-LOAD’ origin.
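A quick way to list them (ORIGIN is a standard column of DBA_SQL_PLAN_BASELINES, and the value is the one mentioned above):

select sql_handle, plan_name, origin, enabled, accepted from dba_sql_plan_baselines where origin='EVOLVE-AUTO-INDEX-LOAD';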

This is quite surprising, because fixing the previous plan not only prevents the usage of the new AUTO index, but also of any new one that I create manually (at least until it is automatically evolved).

DBA_SQL_PATCHES

I expected to see the regressions locked down by SQL Patches rather than SQL Plan Baselines. In this first test I see no SQL Patch created, but this is another dictionary view to look at when trying to understand Automatic Indexing.

Franck Pachot's picture

19c Easy Connect

When TCP/IP became the protocol used almost everywhere, Oracle introduced the EZCONNECT naming method to avoid long connection strings with parentheses everywhere. They said it was a way to avoid tnsnames.ora, but that’s not completely true:

  • you can use full connection strings without a tnsnames.ora
  • you still need a tnsnames.ora for more complex connection strings

But the idea was to replace:

(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=APP.service.net))(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=2483)))

by:

localhost:2483/APP.service.net

and even reduce it to a simple hostname when using the default port and default service name.

I use it a lot when connecting manually, but as soon as there’s a need to add multiple hosts for transparent failover or load balancing, or to add some additional parameters, the full connection string is required.

In 19c, Oracle has extended the Easy Connect syntax to allow all this in EZCONNECT naming, again telling us that it is a way to avoid a tnsnames.ora:

https://docs.oracle.com/en/database/oracle/oracle-database/19/netag/configuring-naming-methods.html#GUID-B0437826-43C1-49EC-A94D-B650B6A4A6EE

I still don’t agree. You probably still prefer to deploy a common tnsnames.ora to be sure that all applications connect with the correct string, or to use an LDAP directory. However, they are not wrong... I am. I use EZCONNECT as syntactic sugar, but for Oracle it is a naming method, as defined in NAMES.DIRECTORY_PATH.

However, if you have a centralized way to store the JDBC URL, then EZCONNECT will avoid any local configuration in 19c as mentioned in the documentation. Let’s see what is possible.

I’m using tnsping to resolve the EZCONNECT string to a full description, which I indent for better readability.

This is the simplest, and not new, using the default port 1521:

tnsping //geneva/PDB1
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
)

And the most complex form before 19c mentions the server type (dedicated, shared or DRCP) and the instance name:

tnsping //geneva:1521/PDB1:pooled/CDB1
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
(INSTANCE_NAME=CDB1)
(SERVER=pooled)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
)

Now in 19c we can mention multiple hosts, which are converted to a load-balancing address list:

tnsping //geneva,lausanne/PDB1
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(LOAD_BALANCE=on)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=lausanne)
(PORT=1521)
)
)

With the same port for all hosts:

tnsping //geneva,lausanne:1522/PDB1
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(LOAD_BALANCE=on)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1522)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=lausanne)
(PORT=1522)
)
)

or with different ports:

tnsping //geneva:1521,lausanne:1522/PDB1
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(LOAD_BALANCE=on)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=lausanne)
(PORT=1522)
)
)

and even add parameters in URL style (starting with ? and separated by &):

tnsping //geneva:1521,lausanne:1522/PDB1?transport_connect_timeout=5&retry_count=2
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(transport_connect_timeout=5)
(retry_count=2)
(LOAD_BALANCE=on)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=lausanne)
(PORT=1522)
)
)

By the way, lowering the default TCP timeout is a good idea most of the time, because your network probably answers in less than 60 seconds when it is available, and you don’t want to wait one minute before trying another address. More about those settings:

TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT - Blog dbi services

Note that there is no verification of the EZCONNECT string. You can mention non-existing parameters:

tnsping //geneva:1521,lausanne:1522/PDB1?make_application_fast=on
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(make_application_fast=on)
(LOAD_BALANCE=on)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=lausanne)
(PORT=1522)
)
)

and even try to disable the automatically added load balancing, which probably results in something you don’t want (both on and off end up in the description):

tnsping //geneva:1521,lausanne:1522/PDB1?LOAD_BALANCE=off
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=PDB1)
)
(LOAD_BALANCE=off)
(LOAD_BALANCE=on)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=lausanne)
(PORT=1522)
)
)

(I opened an SR for this one and will update this post about it.)

There’s something I like to do when quickly connecting to an unmounted instance (undocumented, and even less recommended in this way):

tnsping //geneva/CDB1)(UR=A
(DESCRIPTION=
(CONNECT_DATA=
(SERVICE_NAME=CDB1)
(UR=A)
)
(ADDRESS=
(PROTOCOL=tcp)
(HOST=geneva)
(PORT=1521)
)
)

So it is still possible here. Do not put the UR=A as a parameter (after the ?) because those parameters go with the address list rather than with CONNECT_DATA.

Note that SQLcl, the SQL Developer command line, accepts all of that (tested in 18.4), except when mentioning an instance name in addition to a parameter:

But this combination is probably not very useful in most cases.

In summary, if you have a centralized way to store the connection strings, then EZCONNECT is a good idea. Developers do not like the SQL*Net descriptions with all those parentheses (they were born with XML or JSON, or even YAML for the younger ones), and this URL-like syntax will be better accepted by everyone.

dbakevlar's picture

Pricks Upon the Soul of Women’s Ambition

There were two trends in topics that repeatedly came up at RMOUG around women in attendance that I felt deserved a deeper investment of time. As I spend a significant amount of time in a more progressive atmosphere, I find I’m even more aware of them than previously. I hope that my words of support and any advice I offered was helpful to those who sought me out this week.

The “Be Happy With What You Have” or “You’re Never Satisfied” Trend

Numerous conversations touched on a common thread: discomfort with women’s ambition. I honestly believe that most individuals aren’t even aware of the discomfort, but depending on its level, their responses range from ones they believe are supportive, even though they squash ambition, to openly hostile. It’s astounding how often we push the idea that women should be happy with what we’ve achieved, not with what others have achieved or with what our ambition hopes we’ll achieve.

This comment, in all its myriad forms, strikes me as odd:

  1. It’s often offered to us with no ill intent, but rather with sympathy, in the hope of cheering us up, resulting in more damage in the long run.
  2. I’ve never once heard this type of response used with my male peers.

One woman talked about how she was told that she should be happy with how much she’d accomplished and not waste time on the ‘what ifs’. That caused me to stop and ask for details about the situation, and we quickly recognized that the ‘what ifs’ were actually goals she wanted to achieve, with timelines and milestones already set. As the discussion progressed, this incredibly skilled woman realized that most of the goals had been set in her yearly review, but as she took them on, her manager undermined her objectives, second-guessed her every move and often micro-managed her.

I asked her to replay the experience, but to replace herself in the situation with a male peer. She was incredibly surprised at how different she felt about the interaction, simply by removing herself from the scenario and seeing the situation happen to someone who wouldn’t be subject to the subtle layers of bias.

A second woman was telling me how frustrated she was after being turned down for a management position. Her manager told her that, with the personal demands on her, she should be relieved that they had awarded the promotion to her male peer, but she was anything but satisfied. I asked her some questions about the situation and we quickly surmised that she was more senior, had more experience and had been paramount to a larger share of high-profile projects.

My friend quickly realized why she was frustrated. She knew in her gut that she was the better-qualified candidate and had been passed over for an earned promotion. Her manager, hoping to soothe her, bypassed this obvious lack of consideration and instead told her she must be relieved, which was simply salt in the wound.

The one thing that comes out of these examples is that bias is very inconspicuous. It’s rarely, if ever, direct, and it’s so ingrained in society that we may miss it until we have an honest discussion about it.

Women are going to have ambitions in their careers, same as their male peers, and I only see this desire to succeed increasing with upcoming generations. Old biases need to be pointed out, with a continued focus on education over persecution, to change what happens around us in everyday life and impacts the rise of our female peers in the workplace.

Telling Women to “Think Globally and Do More”

This was another conversation where the person suggesting this had no ill intentions. He simply mentioned the challenges for women’s rights around the world and said that we women should be stepping up to help change the world. The problem is, most women are already overwhelmed by demands and challenges that impact our ability to empower ourselves and those around us every day. To ask women to do more at a global level, in all honesty, isn’t helpful. I remember trying to make a global difference in just my technical arena, and it often left me frustrated and deflated. I discovered that if I focused on making a difference in just one person’s life at a time, then asked them to pay it forward, the ripples this made on a larger scale created more success.

It left me less responsible for the larger scope, only needing to worry about my responsibilities to the individuals I was mentoring and sponsoring, while the reach became more sustainable by leveraging a larger group of individuals rather than just me as an army of one.

Stop asking women to change the world. Let women (and men) lift as we rise and, in turn, make sustainable, global change through smaller initiatives. For those who simply mention the challenges that other women face in order to dismiss a woman’s challenges in the here and now: just STOP. It’s a version of gaslighting that is a personal pet peeve of mine. One woman’s challenges and setbacks are not made lesser by the crimes inflicted on another.

Peace out.



Copyright © DBAKevlar [Pricks Upon the Soul of Women’s Ambition], All Rights Reserved. 2019.

connor_mc_d's picture

Worth the wait

Yes, I know it’s been a while Smile

Yes, I know people have been angry at the delay Smile

But can we put that behind us, and rejoice in the fact… that YES

It’s here!

Yes, 18c XE for Windows is now available.

https://www.oracle.com/technetwork/database/database-technologies/express-edition/downloads/index.html

Most probably, the majority of developers in your organization are running a Windows PC. Now every single one of them can have a fully functioning Oracle database for developing, prototyping, testing, deploying, learning, exploring, educating… the list goes on… on their own PC, for absolutely zero cost.

Franck Pachot's picture

19c DG Broker export/import configuration

This is something I have wanted for a long time: being able to save a broker configuration so that I can re-create it if needed. What I usually do is maintain a script with all the commands. What I dreamed of was being able to export the configuration as a script. What we have now, in 19c, is the ability to export/import the configuration as an XML file.

Actually, the configuration is already stored as XML in the broker configuration files (the .dat ones):

SQLcl: Release 18.4 Production on Tue Feb 19 13:40:27 2019
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.2.0.0.0
SQL> show parameter broker
NAME                   TYPE    VALUE
---------------------- ------- ------------------
dg_broker_config_file1 string  ?/dbs/dr1CDB1A.dat
dg_broker_config_file2 string  ?/dbs/dr2CDB1A.dat
dg_broker_start        boolean TRUE

Those are binary files, but we can look at the content with ‘strings’. Since 12cR2, the content shows some XML. In all versions, we can see a message from the authors, the ‘fine folks at NEDC’. Here is where this was developed:

NEDC Facility Information

So here are the strings in my broker file:

[oracle@db192 ~]$ strings /u01/app/oracle/product/DB192/dbs/dr1CDB1A.dat
}|{z
cdb1a
cdb1a
Brought to you by the fine folks at NEDC.
<?xml version="1.0" encoding="UTF-8"?>

ONLINE
1934436017
ONLINE
1673904225
1673904226
1673904227
cdb1a

PRIMARY
PRIMARY
...
cdb1b
cdb1a
0
1000647508


0
513

If you want to know more about those, my ex-colleague William Sescu at dbi-services explained everything:

Oracle 12c - How to Recover lost DataGuard Broker Configuration Files - Blog dbi services

Now in 19c there is no need to parse that, as we can export this metadata as well-formatted XML:

[oracle@db192 ~]$ dgmgrl /
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Tue Feb 19 13:46:05 2019
Version 19.2.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected to "CDB1B"
Connected as SYSDG.
DGMGRL> export configuration to MYCONFIG;
Succeeded.
DGMGRL>

You cannot mention a full path, as the file goes to the diag trace directory (the one called ‘udump’ by seasoned DBAs). If you are not sure where it is, the broker trace shows it:

The Data Guard broker metadata is exported to /u01/app/oracle/diag/rdbms/cdb1b/CDB1B/trace/myconfig

Here you see that the name was transformed to lower case, as usual in DGMGRL, but you can also quote it to make it case-sensitive. What this doesn’t show is that when you do not provide an extension, ‘.log’ is added (which is a funny default for an XML file…):

[oracle@db192 trace]$ ls -alrt $ORACLE_BASE/diag/rdbms/*/$ORACLE_SID/trace/*myconfig*
-rw-r--r--. 1 oracle oinstall 4992 Feb 19 13:46 /u01/app/oracle/diag/rdbms/cdb1b/CDB1B/trace/myconfig.log
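If you don’t want to grep the broker trace for it, the directory is also visible from the database (the standard ‘Diag Trace’ entry of V$DIAG_INFO):

select value from v$diag_info where name='Diag Trace';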

If you compare it with the .dat file, you will see that the XML content is exactly the same:

[oracle@db192 trace]$ cat /u01/app/oracle/diag/rdbms/cdb1b/CDB1B/trace/myconfig.log
<?xml version="1.0" encoding="UTF-8"?>

ONLINE
1934436017
ONLINE
1673904225
1673904226
1673904227
cdb1a

PRIMARY
PRIMARY
...
cdb1b
cdb1a
0
1000647508


0
513

Once I have the export, I test it by removing the configuration and importing it:

DGMGRL> import configuration from myconfig;
Succeeded. Run ENABLE CONFIGURATION to enable the imported configuration.
DGMGRL> enable configuration
Enabled.

This import must be done from the primary or you get an error.

Actually, the XML file is cool because it has more information than just the configuration commands. Here is the XML formatted with https://countwordsfree.com/xmlviewer:

Here I have the history of the last 10 switchovers, which can be very useful. But then, what is this timestamp unit?

In the XML file you will find many timestamps.

Most of them are EPOCH, the number of seconds since 01-JAN-1970:


Success
0
1550611393

This is 19-FEB-2019 21:23:13, which is exactly when I exported this file.
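You can verify it with simple date arithmetic (the display format depends on your NLS settings, hence the TO_CHAR):

select to_char(date '1970-01-01' + 1550611393/86400,'DD-MON-YYYY HH24:MI:SS') from dual;
19-FEB-2019 21:23:13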

But look at the switchover history:


PhysicalSwitchover
cdb1a
cdb1b
0
1000674973


PhysicalSwitchover
cdb1b
cdb1a
0
1000675048


PhysicalSwitchover
cdb1a
cdb1b
0
1000675267

Those are the number of seconds since 06-JUN-1987, and they are actually the times when the switchover commands started: 19.02.19 at 21:16:13, 21:17:28, and 21:21:07.
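The same arithmetic, with the other offset, confirms it:

select to_char(date '1987-06-06' + 1000674973/86400,'DD-MON-YYYY HH24:MI:SS') from dual;
19-FEB-2019 21:16:13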

Richard Foote's picture

Unique Indexes Force Hints To Be “Ignored” Part II (One Of The Few)

In Part I, I showed a demo of how the introduction of a Unique Index appears to force a hint to be “ignored”. This is a classic case of what difference a Unique Index can make in the CBO deliberations. So what’s going on here? When I run the first, un-hinted query: we notice something a […]

connor_mc_d's picture

Statistics on Object tables

Way back in Oracle 8.0 we introduced the “Object-Relational” database, which was “the next big thing” in the database community back then. Every vendor was scrambling to show just how cool their database technology was with the object-oriented programming paradigm.

Don’t get me wrong – using the Oracle database object types and the features associated with them has made my programming life a lot easier over the years. But for me, it’s always been pretty much limited to that, ie, programming, not actually using the object types in a database design as such. Nevertheless, using objects as columns, or even creating tables of objects, is supported by the database. For example, I can create an object type MY_OBJECT (which could itself be made up of objects) and then have a table, not with that object as a column, but actually a table of that object.


SQL> create type myobj as object  ( x int, y int );
  2  /

Type created.

SQL> create table t of myobj;

Table created.

To populate that table, I need to construct objects before they can be inserted, because we are inserting objects, not rows.


SQL> insert into t
  2  select myobj(rownum,rownum)
  3  from dual connect by level <= 10;

10 rows created.

Obviously, for good performance we always need to gather optimizer statistics on database tables, so that the optimizer can derive the best execution plans it can for any SQL statement that accesses this object table. Which leads to the question: where can we see the optimizer statistics for an object table? Because once we get into object table territory, the first thing that normally takes DBAs and developers by surprise is that the standard dictionary view for tables looks … sparse. Smile


SQL> exec dbms_stats.gather_table_stats('','T');

PL/SQL procedure successfully completed.

SQL>
SQL> select *
  2  from  user_tables
  3  where  table_name = 'T';

no rows selected

A quick perusal of the documentation reveals where we can get information for object tables – the aptly named USER_OBJECT_TABLES view:


SQL> select *
  2  from   user_object_tables
  3  where  table_name = 'T'
  4  @pr
==============================
TABLE_NAME                    : T
TABLESPACE_NAME               : USERS
CLUSTER_NAME                  :
IOT_NAME                      :
STATUS                        : VALID
PCT_FREE                      : 10
PCT_USED                      :
INI_TRANS                     : 1
MAX_TRANS                     : 255
INITIAL_EXTENT                : 65536
NEXT_EXTENT                   : 1048576
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 2147483645
PCT_INCREASE                  :
FREELISTS                     :
FREELIST_GROUPS               :
LOGGING                       : YES
BACKED_UP                     : N
NUM_ROWS                      : 10
BLOCKS                        : 5
EMPTY_BLOCKS                  : 0
AVG_SPACE                     : 0
CHAIN_CNT                     : 0
AVG_ROW_LEN                   : 23
AVG_SPACE_FREELIST_BLOCKS     : 0
NUM_FREELIST_BLOCKS           : 0
DEGREE                        :          1
INSTANCES                     :          1
CACHE                         :     N
TABLE_LOCK                    : ENABLED
SAMPLE_SIZE                   : 10
LAST_ANALYZED                 : 18-FEB-19
PARTITIONED                   : NO
IOT_TYPE                      :
OBJECT_ID_TYPE                : SYSTEM GENERATED
TABLE_TYPE_OWNER              : MCDONAC
TABLE_TYPE                    : MYOBJ
TEMPORARY                     : N
SECONDARY                     : N
NESTED                        : NO
BUFFER_POOL                   : DEFAULT
FLASH_CACHE                   : DEFAULT
CELL_FLASH_CACHE              : DEFAULT
ROW_MOVEMENT                  : DISABLED
GLOBAL_STATS                  : YES
USER_STATS                    : NO
DURATION                      :
SKIP_CORRUPT                  : DISABLED
MONITORING                    : YES
CLUSTER_OWNER                 :
DEPENDENCIES                  : DISABLED
COMPRESSION                   : DISABLED
COMPRESS_FOR                  :
DROPPED                       : NO
SEGMENT_CREATED               : YES
INMEMORY                      : DISABLED
INMEMORY_PRIORITY             :
INMEMORY_DISTRIBUTE           :
INMEMORY_COMPRESSION          :
INMEMORY_DUPLICATE            :
EXTERNAL                      : NO
CELLMEMORY                    :
INMEMORY_SERVICE              :
INMEMORY_SERVICE_NAME         :
MEMOPTIMIZE_READ              : DISABLED
MEMOPTIMIZE_WRITE             : DISABLED
HAS_SENSITIVE_COLUMN          : NO

PL/SQL procedure successfully completed.

So far so good. But if I want to really dig down into the optimizer statistics for a table, I need to look further afield than just USER_TABLES, at the dictionary views dedicated purely to optimizer statistics information. And that is when things get a little interesting for object tables.


SQL> select *
  2  from  user_tab_statistics
  3  where  table_name = 'T';

no rows selected

SQL> select *
  2  from   user_tab_col_statistics
  3  where  table_name = 'T'
  4  @pr
==============================
TABLE_NAME                    : T
COLUMN_NAME                   : SYS_NC_OID$
NUM_DISTINCT                  : 10
LOW_VALUE                     : 265DF9DB62004A2B90B6FB3D550CB9FD
HIGH_VALUE                    : E868127216F045C89F54B55A48EAD6CF
DENSITY                       : .1
NUM_NULLS                     : 0
NUM_BUCKETS                   : 1
LAST_ANALYZED                 : 18-FEB-19
SAMPLE_SIZE                   : 10
GLOBAL_STATS                  : YES
USER_STATS                    : NO
NOTES                         :
AVG_COL_LEN                   : 17
HISTOGRAM                     : NONE
SCOPE                         : SHARED
==============================
TABLE_NAME                    : T
COLUMN_NAME                   : X
NUM_DISTINCT                  : 10
LOW_VALUE                     : C102
HIGH_VALUE                    : C10B
DENSITY                       : .1
NUM_NULLS                     : 0
NUM_BUCKETS                   : 1
LAST_ANALYZED                 : 18-FEB-19
SAMPLE_SIZE                   : 10
GLOBAL_STATS                  : YES
USER_STATS                    : NO
NOTES                         :
AVG_COL_LEN                   : 3
HISTOGRAM                     : NONE
SCOPE                         : SHARED
==============================
TABLE_NAME                    : T
COLUMN_NAME                   : Y
NUM_DISTINCT                  : 10
LOW_VALUE                     : C102
HIGH_VALUE                    : C10B
DENSITY                       : .1
NUM_NULLS                     : 0
NUM_BUCKETS                   : 1
LAST_ANALYZED                 : 18-FEB-19
SAMPLE_SIZE                   : 10
GLOBAL_STATS                  : YES
USER_STATS                    : NO
NOTES                         :
AVG_COL_LEN                   : 3
HISTOGRAM                     : NONE
SCOPE                         : SHARED

PL/SQL procedure successfully completed.

I can get column-level statistics, but there is no entry in USER_TAB_STATISTICS for my object table. Checking the documentation indicates that, unlike the USER_TABLES/USER_OBJECT_TABLES pairing, there is no partnering object-based view to match USER_TAB_STATISTICS. Digging into the definition of the USER_TAB_STATISTICS view shows that object tables are explicitly excluded:


and bitand(t.property, 1) = 0 /* not a typed table */

That might not seem such a big deal, but what happens if you lock the statistics for a table?


SQL> exec dbms_stats.lock_table_stats('','T');

PL/SQL procedure successfully completed.

The STATTYPE_LOCKED column is not in USER_TABLES, and since we do not have an entry for the table in USER_TAB_STATISTICS, there is no direct means of seeing whether an object table has locked statistics. If you are faced with this problem, you have a couple of options at your disposal:

1) It would appear that the same flag on the internal dictionary table that indicates locked statistics is set for object tables just as it is for standard relational tables. Hence you could create a clone of the USER_TAB_STATISTICS view and remove the BITAND condition on the PROPERTY column (a sketch follows after the next example). That is of course a very unsupported thing to do, and just a rod for your back every time you patch or upgrade the database.

2) The other option is to assume that no one is going to “mix and match” locking table statistics with index statistics. Every object table has an underlying index that is automatically created, so you can look at the locked status of this underlying index as a representative indicator of the table’s statistics locked state.


SQL> select *
  2  from  user_ind_statistics
  3  where  table_name = 'T'
  4  @pr
==============================
INDEX_NAME                    : SYS_C0028743
TABLE_OWNER                   : MCDONAC
TABLE_NAME                    : T
PARTITION_NAME                :
PARTITION_POSITION            :
SUBPARTITION_NAME             :
SUBPARTITION_POSITION         :
OBJECT_TYPE                   : INDEX
BLEVEL                        : 0
LEAF_BLOCKS                   : 1
DISTINCT_KEYS                 : 10
AVG_LEAF_BLOCKS_PER_KEY       : 1
AVG_DATA_BLOCKS_PER_KEY       : 1
CLUSTERING_FACTOR             : 1
NUM_ROWS                      : 10
AVG_CACHED_BLOCKS             :
AVG_CACHE_HIT_RATIO           :
SAMPLE_SIZE                   : 10
LAST_ANALYZED                 : 18-FEB-19
GLOBAL_STATS                  : YES
USER_STATS                    : NO
STATTYPE_LOCKED               : ALL
STALE_STATS                   : NO
SCOPE                         : SHARED

PL/SQL procedure successfully completed.
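For option 1 mentioned above, here is a minimal sketch (unsupported, as noted, and the cloned view name is made up): pull the view source with DBMS_METADATA and re-create it under your own name without the excluding predicate.

-- get the SYS view source (requires the privileges to read SYS DDL)
select dbms_metadata.get_ddl('VIEW','USER_TAB_STATISTICS','SYS') from dual;
-- then create e.g. MY_TAB_STATISTICS from that text, removing the line:
--   and bitand(t.property, 1) = 0 /* not a typed table */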

TL;DR: Querying optimizer statistics for object tables takes a little more care than with normal tables.

Franck Pachot's picture

Oracle 19c Hint Usage reporting

One reason why we try to avoid hints in our queries is that they are very difficult to use correctly. No error is raised when there’s an incorrect syntax or when the hint cannot be used semantically. 19c DBMS_XPLAN has an important enhancement: it can report hint usage, at least for optimizer hints.

By default, DBMS_XPLAN in the TYPICAL format (the default) will report only invalid hints:

SQL> select /*+ FULL(DUAL) INDEX(BLABLABLA) BLABLABLA(DUAL) */ * from dual;
DUMMY
-----
X
SQL> select * from dbms_xplan.display_cursor(format=>'-cost');
PLAN_TABLE_OUTPUT
SQL_ID 3ps01tc9mxuhd, child number 0
-------------------------------------
select /*+ FULL(DUAL) INDEX(BLABLABLA) BLABLABLA(DUAL) */ * from dual
Plan hash value: 272002086
-------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Time |
-------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 00:00:01 |
-------------------------------------------------------------
Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2 (N - Unresolved (1), E - Syntax error (1))
--------------------------------------------------------------------
1 -  SEL$1
N - INDEX(BLABLABLA)
E - BLABLABLA

This tells me that for line Id=1 of the plan, the query block SEL$1 has two unused hints: one with a syntax error (E), because BLABLABLA is not a hint, and the other, INDEX(), with a valid syntax but mentioning an alias that is not in the query, so the error is unresolved (N).

We can choose to show also the hints that were correctly used:

SQL> select * from dbms_xplan.display_cursor('3ps01tc9mxuhd',format=>'+HINT_REPORT');
PLAN_TABLE_OUTPUT
SQL_ID 3ps01tc9mxuhd, child number 0
-------------------------------------
select /*+ FULL(DUAL) INDEX(BLABLABLA) BLABLABLA(DUAL) */ * from dual
Plan hash value: 272002086
--------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)|
| 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)|
--------------------------------------------------------------------
Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 3 (N - Unresolved (1), E - Syntax error (1))
---------------------------------------------------------------------
1 -  SEL$1
N - INDEX(BLABLABLA)
E - BLABLABLA
1 -  SEL$1 / DUAL@SEL$1
- FULL(DUAL)

The FULL(DUAL) hint is correct and was used. Of course, it was not useful because there’s no other access path to DUAL, but that cannot be reported because basically the FULL() hint only tells the optimizer to ignore other access paths. So it was used even if it was not useful.

The DBMS_XPLAN formats are:

  • +HINT_REPORT_USED to show used hints
  • +HINT_REPORT_UNUSED to show unresolved and syntax errors
    this format flag is included in TYPICAL, the default format
  • +HINT_REPORT combines both of them and is the default with ALL

As an example, the following formats are equivalent:

format=>'ALL -HINT_REPORT_UNUSED'
format=>'BASIC +HINT_REPORT_USED'

OTHER_XML

This report, displayed by all DBMS_XPLAN display functions, is available in the OTHER_XML column of PLAN_TABLE, V$SQL_PLAN, AWR, STS, SPM,…

SQL> select extract(xmltype(other_xml),'//hint_usage') from v$sql_plan where other_xml like '%hint_usage%' and sql_id='3ps01tc9mxuhd';
EXTRACT(XMLTYPE(OTHER_XML),'//HINT_USAGE')
------------------------------------------

DBMS_XPLAN is the best way to format it, as this XML format is not documented. It seems that:

  • one element gives the query block name (the hint scope can be the statement, a query block, or an alias)
  • the ‘st’ attribute is PE for a parsing syntax error (‘E’ in the dbms_xplan note)
  • the ‘st’ attribute is UR for unresolved (‘N’ in the dbms_xplan note)
  • the ‘st’ attribute is ‘NU’ or ‘EU’ for unused (‘U’ in the dbms_xplan note)
  • another element holds the hint text
  • we might get a reason for the unused ones in a dedicated element
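That said, you can pretty-print the fragment to eyeball it, reusing the V$SQL_PLAN query above (a sketch; XMLSERIALIZE with INDENT is standard SQL/XML):

select xmlserialize(content extract(xmltype(other_xml),'//hint_usage') as clob indent)
from v$sql_plan where other_xml like '%hint_usage%' and sql_id='3ps01tc9mxuhd';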

Pre-19c

Before this feature, there was some information in the CBO trace, but very limited. Here is what I get for my statement:

Dumping Hints
=============
atom_hint=(@=0x7fcd2d8aa460 err=0 resol=1 used=1 token=83 org=1 lvl=3 txt=INDEX ("BLABLABLA") )
atom_hint=(@=0x7fcd2d8ac008 err=0 resol=1 used=1 token=448 org=1 lvl=3 txt=FULL ("DUAL") )

Examples with reason

Here is an example with statement-level hints FIRST_ROWS/ALL_ROWS:

SQL> explain plan for select /*+ first_rows(1) all_rows */ * from SCOTT.DEPT;
Explained.
SQL> select * from dbms_xplan.display(format=>'basic +hint_report');
PLAN_TABLE_OUTPUT
Plan hash value: 3383998547
----------------------------------
| Id | Operation | Name |
----------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS FULL| DEPT |
----------------------------------
Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2 (U - Unused (2))
--------------------------------------------------------------------
0 -  STATEMENT
U - all_rows / hint conflicts with another in sibling query block
U - first_rows(1) / hint conflicts with another in sibling query block

The message is clear: conflicting hints are all ignored.

SQL> explain plan for select /*+ first_rows(1) */ * from (
2 select /*+ all_rows */ * from SCOTT.DEPT
3 ) ;
Explained.
SQL> select * from dbms_xplan.display(format=>'basic +alias +hint_report');
[...]
Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2 (U - Unused (1))
--------------------------------------------------------------------
0 -  STATEMENT
U - all_rows / hint overridden by another in parent query block
- first_rows(1)

Here the conflicting statement-level hints are in different query blocks, and the report tells me that only the parent one was used.

It is recommended to name the query blocks in complex queries. If a name conflicts (in the query or in underlying views), this will be reported:

SQL> explain plan for select /*+ qb_name(one) */ * from (
2 select /*+ qb_name(one) */ * from SCOTT.DEPT
3 ) ;
[...]
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
1 - SEL$F5BB74E1 / DEPT@SEL$2
Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2 (U - Unused (2))
--------------------------------------------------------------------
0 -  SEL$2
U - qb_name(one) / same QB_NAME hints for different query blocks
1 -  SEL$F5BB74E1
U - qb_name(one) / same QB_NAME hints for different query blocks

The consequence is that both were ignored.

You also get an error when the name is too long (more than 20 characters):

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
--------------------------------------------------------------------
1 -  SEL$1
U - qb_name(X12345678901234567890) / hinted query block name is too long

With duplicate hints, one is ignored:

SQL> explain plan for select /*+ full(DEPT) full(DEPT) */ * from SCOTT.DEPT;
[...]
1 - SEL$1 / DEPT@SEL$1
U - full(DEPT) / duplicate hint
- full(DEPT)

When I exclude all possible join methods, all my join hints are ignored:

SQL> explain plan for select /*+ leading(DEPT) no_use_nl(EMP) no_use_hash(EMP) no_use_merge(EMP) */ * from SCOTT.DEPT join SCOTT.EMP using(DEPTno);
[...]
1 - SEL$58A6D7F6
- leading(DEPT)
3 -  SEL$58A6D7F6 / EMP@SEL$1
U - no_use_hash(EMP) / all join methods are excluded by hints
U - no_use_merge(EMP) / all join methods are excluded by hints
U - no_use_nl(EMP) / all join methods are excluded by hints

If I mention an index name that does not exist, I see the reason:

SQL> explain plan for select /*+ index(DEPT SYS_AI_M4J1C) */ * from SCOTT.DEPT;
[...]

1 - SEL$1 / DEPT@SEL$1
U - index(DEPT SYS_AI_M4J1C) / index specified in the hint doesn't exist

This means that the table/alias DEPT exists but not the index mentioned.

Here is one where I use FULL() on an IOT:

SQL> explain plan for select /*+ full(IOT) */ count(*) from IOT where id=42;
Explained.
SQL> select * from dbms_xplan.display(format=>'basic +hint_report');
PLAN_TABLE_OUTPUT
Plan hash value: 3425135035
---------------------------------------------------
| Id | Operation | Name |
---------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | SORT AGGREGATE | |
| 2 | INDEX FAST FULL SCAN| SYS_IOT_TOP_73014 |
---------------------------------------------------
Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
--------------------------------------------------------------------
2 -  SEL$1 / IOT@SEL$1
U - full(IOT) / FULL hint is same as INDEX_FFS for IOT

This is not really unused, because without the hint the optimizer would have chosen an INDEX UNIQUE SCAN. But the reason explains clearly that INDEX_FFS was substituted, as it is the IOT equivalent of FULL.

When the IGNORE_OPTIM_EMBEDDED_HINTS hint is present, the other hints are ignored:

SQL> explain plan for select /*+ index(DEPT) ignore_optim_embedded_hints */ * from SCOTT.DEPT;
[...]
Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2 (U - Unused (1))
--------------------------------------------------------------------
0 -  STATEMENT
- ignore_optim_embedded_hints
1 -  SEL$1 / DEPT@SEL$1
U - index(DEPT) / rejected by IGNORE_OPTIM_EMBEDDED_HINTS

Exactly the same reason is displayed when it is set at the session level:

SQL> alter session set optimizer_ignore_hints=true;
Session altered.
SQL>
SQL> explain plan for select /*+ index(DEPT) */ * from SCOTT.DEPT;
[...]
Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
--------------------------------------------------------------------
1 -  SEL$1 / DEPT@SEL$1
U - index(DEPT) / rejected by IGNORE_OPTIM_EMBEDDED_HINTS

There is a similar parameter for parallel hints:

SQL> alter session set optimizer_ignore_parallel_hints=true;
Session altered.
SQL>
SQL> explain plan for select /*+ parallel(DEPT) */ * from SCOTT.DEPT;
[...]
Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
--------------------------------------------------------------------
1 -  SEL$1 / DEPT@SEL$1
U - parallel(DEPT) / because of _optimizer_ignore_parallel_hints

Looking at the binaries (I don’t think this is exposed in a V$ view), here are some of the possible reasons:

The documentation has many other examples explained:

SQL Tuning Guide
