ACE

marco

Hotsos Revisited 2013 – Presentation Material

Many thanks to everyone who attended the once again well-filled, informative and enjoyable evening at “Hotsos Revisited 2013”. We presenters enjoyed the ambiance. And for those who would like to read through it once more, here is the presentation material from Toon, Jacco, Gerwin, Frits and me… Presentation material in alphabetical order: …

Continue reading »

oraclebase

Should you aim to become an Oracle ACE?

I tweeted the following yesterday,

“It’s 7 years ago today that I was made an Oracle ACE. Seriously. It was April Fools’ Day 2006… :)”

The follow-up from that tweet included a number of questions about what you get out of becoming an Oracle ACE and what the quickest way to become one is. In my mind, these types of questions highlight a misunderstanding of what the Oracle ACE program is. You can hear Vikki, Debra, Alex and myself talking about the Oracle ACE program here, but I feel like I want to clarify a few things. This is just my opinion. Others may say different. :)

Should you aim to become an Oracle ACE?

oraclebase

An inside look at the Oracle ACE program

I was recently part of a 3-part podcast about the ACE program. You can listen to all 3 parts here:

Thanks to Bob Rhubart for the invite and to the other members of the panel (Vikki Lira, Alex Gorbachev and Debra Lilley).

Cheers

Tim…

 

marco

The Oracle XMLDB “anonymous” user account

Trying here to be as correct as possible, as far as I understand it currently.

ANONYMOUS is an Oracle user account specifically designed for HTTP access. It has only one system privilege, “create session”, and the account is locked by default. If it is unlocked, it is used only for HTTP access via the XDB Protocol Server, aka the PL/SQL Gateway, and it can access only those objects in the XDB Repository that are protected by an ACL (Access Control List) mentioning this “principal”.

By default there is no ACL file that grants any privilege to this “user” ANONYMOUS. When APEX is installed, a /sys/acls/ro_anonymous_acl.xml file is created that grants read access to the /images/ or /i/ directory (depending on the APEX version). If you lock ANONYMOUS or remove the privileges defined in that ACL, APEX can no longer show/access the files in those XDB Repository folders (/images, /i), should you need them. Note that when using the APEX Listener setup, the application images and help doc images are stored locally on the server and not in the database, so in principle there is no need to access those image directories in the database.
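For example, you can check whether the ANONYMOUS account is locked before going any further. A minimal sketch using the standard data dictionary (unlock only if you really serve these files over HTTP):

-- Check the current status of the ANONYMOUS account
SELECT username, account_status FROM dba_users WHERE username = 'ANONYMOUS';

-- Unlock it only if HTTP access via the XDB Protocol Server is really needed
ALTER USER anonymous ACCOUNT UNLOCK;

-- ...and lock it again when it is no longer needed
ALTER USER anonymous ACCOUNT LOCK;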

Here is an example of an ACL that can be used by XDB; it grants the read-properties, read-contents and resolve privileges on all objects protected by this ACL:

<acl description="File /sys/acl/my_acl.xml"
     xmlns="http://xmlns.oracle.com/xdb/acl.xsd"
     xmlns:dav="DAV:"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
                         http://xmlns.oracle.com/xdb/acl.xsd">
  <ace>
    <principal>ANONYMOUS</principal>
    <grant>true</grant>
    <privilege>
      <read-properties/>
      <read-contents/>
      <resolve/>
    </privilege>
  </ace>
</acl>

By default, when a resource (a file or folder) is created by a process, it gets the privileges defined in the bootstrap ACL (which also protects itself). So no privileges are granted to this ANONYMOUS account by default. And even when unlocked, this user by default only opens up access to hierarchy-enabled, XDB Repository related objects. Mind the phrase “by default”: it is possible to open up and overrule the default security in place by altering the content of the ACL defaults (which is, or could be considered, a security breach). For example, someone with malicious intentions from within the database could alter the contents of the bootstrap_acl.xml file to that effect, but they would need a very powerful database account to start with anyway to make this happen.

Example of the default content of the bootstrap_acl.xml file:

SQL> SELECT xdburitype('/sys/acls/bootstrap_acl.xml').getCLOB() FROM dual;

<acl description="Protected:Readable by PUBLIC and all privileges to OWNER"
     xmlns="http://xmlns.oracle.com/xdb/acl.xsd"
     xmlns:dav="DAV:"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
          http://xmlns.oracle.com/xdb/acl.xsd">
  <ace>
    <principal>dav:owner</principal>
    <grant>true</grant>
    <privilege>
      <all/>
    </privilege>
  </ace>
  <ace>
    <principal>XDBADMIN</principal>
    <grant>true</grant>
    <privilege>
      <all/>
    </privilege>
  </ace>
  <ace>
    <principal>PUBLIC</principal>
    <grant>true</grant>
    <privilege>
      <read-properties/>
      <read-contents/>
      <read-acl/>
      <resolve/>
    </privilege>
  </ace>
</acl>

Be aware that, although the PUBLIC ACE (Access Control Entry) sounds dangerous, it only means that DIRECT access to the objects, via database accounts from within the database, is possible. This is not possible via HTTP (by default). An example to this effect: the APEX /images directory is protected by an ACL that grants read-only access solely to the principal ANONYMOUS, which means that PL/SQL packages (owned/executed by users from WITHIN the database) will not have access to these image files.
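If you want to verify which ACL actually protects a given repository resource, the DBMS_XDB package provides a getAclDocument function (moved to DBMS_XDB_REPOS in later releases). A minimal sketch; the /images path is just an example:

-- Show the ACL document protecting a repository resource
SELECT DBMS_XDB.getAclDocument('/images').getClobVal() FROM dual;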

The “services” provided via the XDB Protocol Server, and their access rules, are defined in the xdbconfig.xml configuration file. Each service defined there (for example APEX’s entries via PL/SQL, that is, via the PL/SQL Gateway) links up to the “principal” to be used for that specific service (ANONYMOUS in the case of APEX): a security access owner, role, trusted user or LDAP definition.
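The configuration document itself can be inspected the same way as the ACL files shown above, for example:

SQL> SELECT xdburitype('/xdbconfig.xml').getCLOB() FROM dual;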

Normally, an anonymous user is a user whose credentials have not been validated (hence unauthenticated) and who is permitted access only to unprotected resources. By default, however, all objects created in the XDB Repository are protected by the default bootstrap ACL, and in normal cases an ACL with a defined ANONYMOUS principal is not created and does not exist in the database. Even if it did, you would still need entries in the xdbconfig.xml file that link the (unlocked) ANONYMOUS account with a defined service before it grants you access or an entry point to the database.

The underlying security mechanism implemented by Oracle is the same as for the database, and it also uses the advanced security feature VPD. Because Oracle itself makes use of it here, an extra license is not needed for this advanced security feature, as long as you don’t use it yourself. Oracle XMLDB in itself is a “no cost option” that comes along with the licenses you buy for your database software.

This is a backup copy of an XMLDB OTN Forum thread.

alexgorbachev

Congratulations to Yury Velikanov, now an Oracle ACE!

I’m very excited to announce this somewhat old news — we have a brand new Oracle ACE at Pythian — Yury Velikanov. He is a fantastic addition to Pythian’s team of ACEs and ACE Directors — Fahd Mirza, Gwen Shapira, Jared Still, Christo Kutrovsky and myself. If you want to know more about the Oracle ACE Program, the latest issue of Oracle Magazine has an article written by fellow Oracle ACE Mike Riley — Everything Is Coming Up ACEs!

I’ve known Yury for a while, as I met him online on Oracle-L many years ago. The world is small: after living on different continents (North America and Europe), providence put us together on a third continent — Australia. As I was building our Australian business in Sydney, it took me a year to convince Yury to join Pythian. But I tell you, it was worth the effort!

Yury is one of our top Senior Oracle Apps DBAs, and he is also extremely capable when it comes to Oracle core database technology and middleware, which is what you would expect from a very good Apps DBA. A decade ago, Yury was already fusing Oracle E-Business Suite with Oracle RAC when most people couldn’t even afford to think about it. In 2003, Yury became one of the first five Oracle Certified Masters in Europe. In other words, Yury is a very talented individual with top-notch Oracle EBS and Oracle database skills.

As you probably know, excellent technical skills are not the only quality required of an Oracle ACE. Oracle ACEs are distinguished for their contributions to the community, and Yury has always been an enthusiastic Oracle expert willing both to share his knowledge and to learn from others. As a result, he is an active contributor to the Oracle community, regionally and globally.

Yury founded an Oracle E-Business Suite related mailing list several years ago to help Oracle Apps DBAs around the world exchange their experiences and help others. In Australia, Yury has become an organizer of the Sydney Oracle Meetup – an informal group of Oracle professionals meeting regularly face to face.

Yury also has a long list of conferences he has presented at, including the Hotsos Symposium, UKOUG, AUSOUG and others. Recently, Yury has also started blogging on the Pythian Blog and is planning to do more and more.

I would like to note that Yury is a person who never refuses to help — he will lose sleep and work 42 hours per day if needed, but he won’t step away if asked for help. I’ve been there myself, and I know that he is a very approachable individual who will do whatever it takes.

After all this, how could I not nominate him as an Oracle ACE? Right. There was no way! Now, a few months later, Yury has happily joined our Oracle ACE team at Pythian. It actually happened a few weeks ago; it just took me so long to write about it!

alexgorbachev

Congrats to Fahd Mirza on becoming an Oracle ACE

Last week brought great news to Pythian — one of our DBAs in Pakistan, Fahd Mirza, has become an Oracle ACE. Fahd joined Pythian in September 2010 as the very first Pythian employee in Pakistan and, thanks to his skills and ambition, ended up on the team supporting Exadata environments. Fahd is a long-standing, active community member, frequent blogger and passionate Oracle technologist, evangelizing Oracle technology in Pakistan. No wonder he was nominated as an Oracle ACE and accepted.

I should also mention that another Oracle ACE DBA joined us recently: Jared Still. Jared is a well-respected member of the Oracle community, a member of the OakTable Network and a veteran of the Oracle-L mailing list. Jared is a top-notch Oracle DBA, a huge fan of the most popular programming language at Pythian — Perl — and even wrote the book “Perl for Oracle DBAs“.

With all this, we now have 5 Oracle ACEs & ACE Directors at Pythian, including Gwen Shapira, Christo Kutrovsky and myself. But that’s not all: Pythian is known as an incubator for Oracle ACEs (I think we were called the Oracle ACE Factory in one of the Oracle ACE newsletters), and it’s been a pleasure to have worked side by side with other Oracle ACEs and ACE Directors — Riyaj Shamsudeen, Doug Burns, Sheeri Cabral and Dmitri Volkov. Some of them became Oracle ACEs at Pythian, some before or after, and even though they are not working at Pythian now, they are still our good friends and help us out on many occasions with training or by collaborating on exciting projects.

It’s a great initiative by Oracle through the Oracle ACE program to recognize active community contributors and passionate Oracle professionals around the globe! Well done Oracle!

arupnanda

OOW 2010 Session Stats with Confidence

Thank you very much to all those who attended my session “Stats with Confidence”. Unfortunately, I was delayed by the keynote running late. With the big party coming up, I appreciate the spirit of those brave souls who stayed back. The late start didn’t allow me to show the demo completely, but here are the scripts; I hope you will be able to follow along and run them on your own infrastructure.

It contains the presentation as well. Thanks for attending and hope you will find it useful.

arupnanda

A Tool to Enable Stats Collection for Future Sessions for Application Profiling

The other day I was putting together my presentation for Oracle Open World on Application Profiling in RAC. I was going to describe a methodology for putting a face to an app by measuring how it behaves in a database – a sort of signature of that application. I was going to use the now-ubiquitous 10046 trace for wait events and other activities inside the database. For resource consumption such as redo generated, logical I/Os, etc., I used v$sesstat; but then I was stuck. How would I collect the stats of a session when the session has not even started and I don’t know the SID? That problem led to the development of this tool, where the stats of a future session can be recorded based on identifying factors such as username, module, etc. I hope this helps in your performance management efforts.

The Problem

Suppose you want to find out the resources consumed by a session. The resources could be redo generation, CPU used, logical I/O, undo records generated – the list is endless. This is required for a lot of things. Consider a case where you want to find out which apps are generating the most redo; you would issue a query like this:

select sid, value
from v$sesstat s, v$statname n
where n.statistic# = s.statistic#
and n.name = 'redo size'
/


The value column will show the redo generated. From the SID you can identify the session. Your next stop is v$session to get the other relevant information such as username, module, authentication scheme, etc. Problem solved, right?

Not so fast. Look at the above query; it selects from v$sesstat. When a session disconnects, its stats disappear; the entries for that session are gone from v$sesstat. If you run the query afterwards, you will not find that session. You would have to select from v$sesstat constantly, hoping to capture the stats of each session before it disconnects, but even that is not guaranteed: some short sessions will be missed between collection samples. And even if you are lucky enough to capture some stats of a short session, the other relevant information from v$session will be gone.
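To see why polling is fragile, consider what each sampling pass would have to do: join v$sesstat to v$session while the session is still connected. A sketch, with 'redo size' as the running example:

select s.sid, e.username, e.module, s.value
from v$sesstat s, v$statname n, v$session e
where n.statistic# = s.statistic#
and s.sid = e.sid
and n.name = 'redo size'
/

Any session that logs off between two passes of this query simply vanishes from the result.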

Oracle provides a package, dbms_monitor, in which a procedure named client_id_stat_enable allows you to enable stats collection on future sessions whose client_id matches a specific value, e.g. CLIENT1. Here is an example:

execute dbms_monitor.client_id_stat_enable('CLIENT1');
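For this to match anything, the application must first tag its sessions with that client identifier, typically like this:

begin
   -- tag this session so dbms_monitor can recognize it
   dbms_session.set_identifier('CLIENT1');
end;
/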

However, there are three issues:

(1) It collects only about 27 stats, out of 400+.

(2) It offers only three choices for selecting sessions – client_id, module_name and service_name.

(3) It aggregates the stats, summing them up for a specific client_id. That is pretty much useless without session-level detail, as the sketch below illustrates.
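For completeness, this is where those aggregated stats end up. A quick look at V$CLIENT_STATS shows both the aggregation and the limited stat list (assuming stats were enabled for CLIENT1 as above):

select stat_name, value
from v$client_stats
where client_identifier = 'CLIENT1'
order by stat_name
/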

So, in short, I didn’t have a readily available solution.

Solution

Well, necessity is the mother of invention. When you can’t find a decent tool, you build it; and so did I. I built this tool to capture the stats. This is version 1 of the tool. It has some limitations, as shown at the end. These limitations do not apply to all situations, so the tool should be useful in the majority of cases. Later I will expand the tool to overcome these limitations.

Concept

The fundamental problem, as you recall, is not a dearth of data (v$sesstat has plenty); it’s the sessions in the future. To capture those sessions, the tool relies on a post-logon database trigger to capture the values.

The second problem is persistence. V$SESSTAT is a dynamic performance view, which means the records of a session are gone when the session disappears. So the tool relies on a table to store the data.

The third problem is getting the values at the very end of the session. The difference between the values captured at the end and at the beginning of the session are the stats. To capture the values at the very end, and not any time before, the tool relies on a pre-logoff database trigger.

The fourth challenge is identification of sessions. The SID of a session is not unique; it can be reused for a new session, and it will definitely be reused when the database is recycled. So the tool uses a column named CAPTURE_ID, a sequentially incremented number for each capture. Since we capture once at the beginning and once at the end, I must use the same capture_id both times. I use a package variable to store that capture_id.

Finally, the tool allows you to enable stats collection based on session attributes such as username, client_id, module, service_name, etc. For instance, you may want to enable stats for any session where username = ‘SCOTT’ or where os_user is ‘ananda’. These preferences are stored in a table reserved for that purpose.

Construction

Now that you understand how the tool is structured, let me show the actual code and scripts to create the tool.

(1) First, we create the table that holds the preferences. Let’s call this table RECSTATS_ENABLED. It holds all the different session attributes (IP address, username, module, etc.) that can enable stats collection in a session.

CREATE TABLE SYS.RECSTATS_ENABLED
(
   SESSION_ATTRIBUTE VARCHAR2(200 BYTE),
   ATTRIBUTE_VALUE   VARCHAR2(2000 BYTE)
)
/

If you want to enable stats collection for sessions based on a session attribute, insert a record into this table with the attribute name and value. Here are some examples. Say I want to collect stats on all sessions where client_info matches ‘MY_CLIENT_INFO1’; you would insert a record like this:

insert into recstats_enabled values ('CLIENT_INFO','MY_CLIENT_INFO1');

Here are some more examples. All sessions where ACTION is ‘MY_ACTION1’:

insert into recstats_enabled values ('ACTION','MY_ACTION1');

Those of user SCOTT:

insert into recstats_enabled values ('SESSION_USER','SCOTT');

Those with service name APP:

insert into recstats_enabled values ('SERVICE_NAME','APP');

You can insert as many preferences as you want. You can even insert multiple values of a specific attribute. For instance, to enable stats on sessions with service names APP1 and APP2, insert two records.

Important: the session attribute names follow the naming convention of the USERENV context used in the SYS_CONTEXT function.
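A quick way to check an attribute name, and the value your current session would present for it, is to query the context directly:

-- verify the attribute name and the value your session carries
select sys_context('USERENV','SERVICE_NAME') from dual;
select sys_context('USERENV','SESSION_USER') from dual;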

(2) Next, we create a table to hold the statistics:

CREATE TABLE SYS.RECSTATS
(
   CAPTURE_ID           NUMBER,
   CAPTURE_POINT        VARCHAR2(10 BYTE),
   SID                  NUMBER,
   SERIAL#              NUMBER,
   ACTION               VARCHAR2(2000 BYTE),
   CLIENT_IDENTIFIER    VARCHAR2(2000 BYTE),
   CLIENT_INFO          VARCHAR2(2000 BYTE),
   CURRENT_EDITION_NAME VARCHAR2(2000 BYTE),
   CURRENT_SCHEMA       VARCHAR2(2000 BYTE),
   CURRENT_USER         VARCHAR2(2000 BYTE),
   DATABASE_ROLE        VARCHAR2(2000 BYTE),
   HOST                 VARCHAR2(2000 BYTE),
   IDENTIFICATION_TYPE  VARCHAR2(2000 BYTE),
   IP_ADDRESS           VARCHAR2(2000 BYTE),
   ISDBA                VARCHAR2(2000 BYTE),
   MODULE               VARCHAR2(2000 BYTE),
   OS_USER              VARCHAR2(2000 BYTE),
   SERVICE_NAME         VARCHAR2(2000 BYTE),
   SESSION_USER         VARCHAR2(2000 BYTE),
   TERMINAL             VARCHAR2(2000 BYTE),
   STATISTIC_NAME       VARCHAR2(2000 BYTE),
   STATISTIC_VALUE      NUMBER
)
TABLESPACE USERS
/


Note that I used the tablespace USERS, because I don’t want this table, which can potentially grow to a huge size, to overwhelm the SYSTEM tablespace. The STATISTIC_NAME and STATISTIC_VALUE columns record the stats collected; the other columns record the other relevant data from the sessions. All the attributes here are shown as VARCHAR2(2000) for simplicity; of course they don’t need that much space. In a future version I will put more meaningful limits on them, but 2000 does not hurt, as it is VARCHAR2.

The CAPTURE_POINT column shows when the values were captured – at the START or the END of the session.

(3) We also need a sequence to identify the sessions. Each session will have 400+ stats, captured once at the beginning and once at the end. We could choose the SID as an identifier, but SIDs can be reused. So we need something truly unique – a sequence number. This is recorded in the CAPTURE_ID column of the stats table.

SQL> create sequence seq_recstats;

(4) To store the capture ID when the post-logon trigger fires, so that it can be used inside the pre-logoff trigger, we must use a variable that is visible to the entire session. A package variable is best for that.

create or replace package pkg_recstats
is
   g_recstats_id number;
end;
/

(5) Finally, we come to the meat of the tool – the triggers. First, the post-logon trigger to capture the stats at the beginning of the session:

CREATE OR REPLACE TRIGGER SYS.tr_post_logon_recstats
after logon on database
declare
   l_stmt          varchar2(32000);
   l_attr_val      recstats_enabled.attribute_value%TYPE;
   l_capture_point recstats.capture_point%type := 'START';
   l_matched       boolean := FALSE;
begin
   pkg_recstats.g_recstats_id := null;
   for r in (
      select session_attribute, attribute_value
      from recstats_enabled
      order by session_attribute
   )
   loop
      exit when l_matched;
      -- we select the userenv; but the null values may cause
      -- problems in matching; so let's use a value for NVL
      -- that will never be used - !_!_!_!
      l_stmt := 'select nvl(sys_context(''USERENV'','''||
         r.session_attribute||'''),''!_!_!_!'') from dual';
      execute immediate l_stmt into l_attr_val;
      if l_attr_val = r.attribute_value then
         -- match; we should record the stats
         -- and exit the loop, since stats should
         -- be recorded only for one match.
         l_matched := TRUE;
         select seq_recstats.nextval
         into pkg_recstats.g_recstats_id
         from dual;
         insert into recstats
         select
            pkg_recstats.g_recstats_id,
            l_capture_point,
            sys_context('USERENV','SID'),
            null,
            sys_context('USERENV','ACTION'),
            sys_context('USERENV','CLIENT_IDENTIFIER'),
            sys_context('USERENV','CLIENT_INFO'),
            sys_context('USERENV','CURRENT_EDITION_NAME'),
            sys_context('USERENV','CURRENT_SCHEMA'),
            sys_context('USERENV','CURRENT_USER'),
            sys_context('USERENV','DATABASE_ROLE'),
            sys_context('USERENV','HOST'),
            sys_context('USERENV','IDENTIFICATION_TYPE'),
            sys_context('USERENV','IP_ADDRESS'),
            sys_context('USERENV','ISDBA'),
            sys_context('USERENV','MODULE'),
            sys_context('USERENV','OS_USER'),
            sys_context('USERENV','SERVICE_NAME'),
            sys_context('USERENV','SESSION_USER'),
            sys_context('USERENV','TERMINAL'),
            n.name,
            s.value
         from v$mystat s, v$statname n
         where s.statistic# = n.statistic#;
      end if;
   end loop;
end;
/

The code is self-explanatory. I have provided more explanation as comments where needed.

(6) Next, the pre-logoff trigger to capture the stats at the end of the session:

CREATE OR REPLACE TRIGGER SYS.tr_pre_logoff_recstats
before logoff on database
declare
   l_capture_point recstats.capture_point%type := 'END';
begin
   if (pkg_recstats.g_recstats_id is not null) then
      insert into recstats
      select
         pkg_recstats.g_recstats_id,
         l_capture_point,
         sys_context('USERENV','SID'),
         null,
         sys_context('USERENV','ACTION'),
         sys_context('USERENV','CLIENT_IDENTIFIER'),
         sys_context('USERENV','CLIENT_INFO'),
         sys_context('USERENV','CURRENT_EDITION_NAME'),
         sys_context('USERENV','CURRENT_SCHEMA'),
         sys_context('USERENV','CURRENT_USER'),
         sys_context('USERENV','DATABASE_ROLE'),
         sys_context('USERENV','HOST'),
         sys_context('USERENV','IDENTIFICATION_TYPE'),
         sys_context('USERENV','IP_ADDRESS'),
         sys_context('USERENV','ISDBA'),
         sys_context('USERENV','MODULE'),
         sys_context('USERENV','OS_USER'),
         sys_context('USERENV','SERVICE_NAME'),
         sys_context('USERENV','SESSION_USER'),
         sys_context('USERENV','TERMINAL'),
         n.name,
         s.value
      from v$mystat s, v$statname n
      where s.statistic# = n.statistic#;
      commit;
   end if;
end;
/

Again, the code is self-explanatory. We capture the stats only if the global capture ID has been set by the post-logon trigger. If we didn’t check that, every session would record stats at its completion, even those we never meant to track.

Execution

Now that the setup is complete, let’s perform a test by connecting as a user with the service name APP:

SQL> connect arup/arup@app

In this session, perform some actions that will generate a lot of activity. The following SQL will do nicely:

SQL> create table t as select * from all_objects;


SQL> exit

Now check the RECSTATS table to see the stats for this capture_id, which happens to be 1330.

col name format a60
col value format 999,999,999
select a.statistic_name name, b.statistic_value - a.statistic_value value
from recstats a, recstats b
where a.capture_id = 1330
and a.capture_id = b.capture_id
and a.statistic_name = b.statistic_name
and a.capture_point = 'START'
and b.capture_point = 'END'
and (b.statistic_value - a.statistic_value) != 0
order by 2
/

Here is the output:

NAME                                                                VALUE
------------------------------------------------------------ ------------
workarea memory allocated -2
change write time 1
parse time cpu 1
table scans (long tables) 1
cursor authentications 1
sorts (memory) 1
user commits 2
opened cursors current 2
IMU Flushes 2
index scans kdiixs1 2
parse count (hard) 2
workarea executions - optimal 2
redo synch writes 2
redo synch time 3
rows fetched via callback 5
table fetch by rowid 5
parse time elapsed 5
recursive cpu usage 8
switch current to new buffer 10
cluster key scan block gets 10
cluster key scans 10
deferred (CURRENT) block cleanout applications 10
Heap Segment Array Updates 10
table scans (short tables) 12
messages sent 13
index fetch by key 15
physical read total multi block requests 15
SQL*Net roundtrips to/from client 18
session cursor cache hits 19
session cursor cache count 19
user calls 25
CPU used by this session 28
CPU used when call started 29
buffer is not pinned count 33
execute count 34
parse count (total) 35
opened cursors cumulative 36
physical read total IO requests 39
physical read IO requests 39
calls to get snapshot scn: kcmgss 45
non-idle wait count 67
user I/O wait time 116
non-idle wait time 120
redo ordering marks 120
calls to kcmgas 143
enqueue releases 144
enqueue requests 144
DB time 149
hot buffers moved to head of LRU 270
recursive calls 349
active txn count during cleanout 842
cleanout - number of ktugct calls 842
consistent gets - examination 879
IMU undo allocation size 968
physical reads cache prefetch 997
physical reads 1,036
physical reads cache 1,036
table scan blocks gotten 1,048
commit cleanouts 1,048
commit cleanouts successfully completed 1,048
no work - consistent read gets 1,060
redo subscn max counts 1,124
Heap Segment Array Inserts 1,905
calls to kcmgcs 2,149
consistent gets from cache (fastpath) 2,153
free buffer requested 2,182
free buffer inspected 2,244
HSC Heap Segment Block Changes 2,519
db block gets from cache (fastpath) 2,522
consistent gets 3,067
consistent gets from cache 3,067
bytes received via SQL*Net from client 3,284
bytes sent via SQL*Net to client 5,589
redo entries 6,448
db block changes 9,150
db block gets 10,194
db block gets from cache 10,194
session logical reads 13,261
IMU Redo allocation size 16,076
table scan rows gotten 72,291
session uga memory 88,264
session pga memory 131,072
session uga memory max 168,956
undo change vector size 318,640
session pga memory max 589,824
physical read total bytes 8,486,912
cell physical IO interconnect bytes 8,486,912
physical read bytes 8,486,912
redo size 8,677,104


This clearly shows you all the stats of that session. Of course, the table recorded all the other details of the session as well – such as username, client_id, etc. – which are useful later for more detailed analysis. You can now perform aggregations as well. Here is an example of the stats collected for redo size:

select session_user, sum(STATISTIC_VALUE) STVAL
from recstats
where STATISTIC_NAME = 'redo size'
group by session_user
/

Output:

SESSION_USER     STVAL
------------ ---------
ARUP            278616
APEX           4589343
… and so on …

You can break the aggregates down by several attributes as well. Here is an example where you want to find out the redo generated by different users coming from different client machines:

select session_user, host, sum(STATISTIC_VALUE) stval
from recstats
where STATISTIC_NAME = 'redo size'
group by session_user, host
/

Output:

SESSION_USER HOST          STVAL
------------ ----------- -------
ARUP         oradba2       12356
ARUP         oradba1      264567
APEX         oradba2       34567
… and so on …


Granularity like this shows you the load generated by the application from different client servers, not just by username.



Limitations

As I mentioned, there are some limitations you should be aware of. I will address them in the next iterations of the tool. They are not serious and apply only in certain cases; as long as you don’t encounter those cases, you should be fine.

(1) The logoff trigger does not fire when the user exits the session ungracefully, such as by closing the SQL*Plus window or killing the program before exiting. In such cases the stats at the end of the session will not be recorded. In most application infrastructures this does not happen, but it could happen for ad hoc user sessions, such as people connecting through TOAD.

(2) Session attributes such as module, client_id and action can be altered within the session. If that happens, this tool does not record the change, since there is no triggering event; the logoff trigger records the module, action and client_id set at that time. These attributes are not usually changed in application code, so this may not apply to your case (see the sketch after this list for how such changes are typically made).

(3) Parallel query sessions pose a special issue, since no logoff trigger fires for them. So in the case of parallel queries, you will not see any differential stats. If you don’t use PQ, as most OLTP applications don’t, you will not be affected.

(4) If a session just sits there without disconnecting, the logoff trigger will not fire and the end-of-session stats will not be captured until the session eventually exits.
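Regarding limitation (2): such attributes are typically changed mid-session with calls like the following, which is exactly the kind of event the tool cannot see (the module, action and client_id values here are hypothetical):

begin
   -- change the module/action mid-session; no database trigger fires for this
   dbms_application_info.set_module(module_name => 'BILLING', action_name => 'INVOICE');
   -- ditto for the client identifier
   dbms_session.set_identifier('NEW_CLIENT1');
end;
/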

Once again, these limitations apply only in certain situations. As long as you are aware of these caveats, you will be able to use this tool to profile many of your applications.

Happy Profiling!
arupnanda

Build a Simple Firewall for Databases Using SQL Net

This article was initially published in 2003 in DBAZine.com, which has since been folded.

So, you want to set up a secured database infrastructure?

You are not alone. With the proliferation of threats from all sources — identity thefts to corporate espionage cases — and with increased legislative pressures designed to protect and serve consumer privacy, security has taken on a new meaning and purpose. Part of the security infrastructure of an organization falls right into your lap as a DBA, since it’s your responsibility to secure the database servers from malicious entities and curious insiders.

What are your options? Firewalls are the first to come to mind. Using a firewall to protect a server, and not just a database server, is not a new concept and has been around for a while. However, a firewall may be overkill in some cases. Even where a firewall is desirable, it still has to be configured and deployed properly. The complexity of administering a firewall, not to mention the cost of acquiring one, may be prohibitive. If the threat level can be reduced by proper positioning of existing firewalls, the functionality of additional ones can be provided by a tool available free with Oracle Net: node validation. In this article, you will learn how to build a rudimentary, but effective, firewall-like setup with just Oracle Net, and nothing else.

Background

Let’s look at a typical setup. Acme, Inc. has several departments, two of which are Payroll and Benefits. Each department’s database resides on a separate server. Similarly, each department’s applications run on separate servers. There are several application servers and database servers for each department. To protect the servers from unauthorized access, each database server is placed behind a firewall with ports open for SQL*Net traffic only. This is depicted in Figure 1:


Figure 1: Protecting departmental database and application servers using multiple firewalls.

This scheme works. But notice how many firewalls are required, and the complexity that this number adds to the administration process. What can we do to simplify the setup? How about removing all the firewalls and having one master firewall around the servers, as in Figure 2?

Figure 2: One master firewall around all the servers.

This method protects the servers from outside attacks; however, it still leaves the inside doors open. For instance, the application server PAYROLL1 can easily connect to the database server of the Benefits department, BENEFITDB1, which is certainly inappropriate. In some organizations, there could be legal requirements to prevent this type of access.

Rather than creating a maze of firewalls as in the case we noted previously, we can take advantage of SQL*Net node validation to create our own firewall. We will do this using only Oracle Net, which is already installed as part of the database installation. The rest of this article explains in detail how to accomplish this.

Objective

Our objective is to design the setup shown in Figure 3. In this network, the application servers benefits1 and benefits2 access the database on server benefitsdb1. Similarly, application servers payroll1 and payroll2 access the database on server payrolldb1. Clients should be given free access to their intended machines, but client machines shouldn’t be allowed to access the databases of other departments (e.g., benefits1 and benefits2 shouldn’t be able to access the database on payrolldb1). Likewise, application servers payroll1 and payroll2 should not be allowed to access benefitsdb1.


Figure 3: One master firewall and restricting access from non-departmental clients.

Note the key difference in requirements here — we are not interested in disallowing every type of access from client machines to the servers of another department. Rather, it’s enough to disable access at the Oracle level only. This type of restriction is enforced by the listener: a listener can check the IP address of the client machine and, based on certain rules, decide to allow or deny the request. This is enabled by a facility called valid node checking, available as part of the Oracle Net installation. Let’s see how this can be done.

To set up valid node checking, simply place a set of lines in a specific file on the server. In our example, the following lines are placed in the parameter file on the server payrolldb1, allowing access from the servers payroll1 and payroll2.

tcp.validnode_checking = yes

tcp.invited_nodes = (payroll1, payroll2)

Where this parameter file is located depends on the Oracle version. In Oracle 8i, it’s a file named protocol.ora; in Oracle 9i, it’s called sqlnet.ora. Both files are located in the directory specified by the environment variable TNS_ADMIN, which defaults to $ORACLE_HOME/network/admin on UNIX or %ORACLE_HOME%\network\admin on Windows.

These parameters are self-explanatory. The first line, tcp.validnode_checking = yes, specifies that the nodes are to be validated before accepting the connection.

The second line specifies that only the clients payroll1 and payroll2 are allowed to connect to the listener. The clients are indicated either by IP address (e.g., 192.168.1.1) or by node name, as shown above. The list of node names is specified on a single line, separated by commas. It is important to have only one line — you shouldn’t break it up.

The values take effect only when the listener starts. After making the change in protocol.ora (in Oracle 8i) or sqlnet.ora (in Oracle 9i and later), stop and restart the listener. After you’ve done so, if a user, regardless of authentication or authority level in the database, attempts to connect to the database on payrolldb1 from the node benefits1, he receives the error shown below.

$ sqlplus scott/tiger@payrolldb1

SQL*Plus: Release 9.2.0.4.0 - Production on Tue Jan 20 9:03:33 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

ERROR: ORA-12537: TNS:connection closed

Enter user-name:

The error message is not very informative; it does not explicitly state the nature of the error. The error occurred, however, because the connection request came from a client that is not listed as accepted. In this case, the listener simply rejected the connection originating from the node benefits1, regardless of the user. Yet the same user trying to connect from the node payroll1 would succeed.

Excluded Nodes

In the previous example, we saw how to allow only a certain set of clients and disallow all others. Similarly, you can specify the opposite type of rule — exclude some clients and allow all others. Say the lines in the parameter file are as follows:

tcp.validnode_checking = yes

tcp.excluded_nodes = (payroll3)

All clients except those connecting from payroll3 would be able to connect to all nodes. So, in this case, clients benefits1 and benefits2 would be able to connect to payrolldb1, in addition to clients payroll1 and payroll2. Isn’t that counter to what we wanted to achieve? Where, then, can this exclusion be used?

In real-life cases, networks are subdivided into subnetworks, and these offer adequate protection. In a particular subnet, there may be far more clients needing access than clients to be restricted. In such a case, it might be easier to specifically refuse access to a set of named clients, conveniently listed in the tcp.excluded_nodes parameter. You can also use this parameter to refuse access from certain machines that have been used to launch attacks in the past.

You can also mix excluded and invited nodes, in which case the invited nodes are given precedence over the excluded ones. But there are three very big drawbacks to this approach.
1. There is no way to specify a wild card character in the node list. You must specify a node explicitly by its name or its IP address.
2. All excluded or invited nodes are to be specified in only one line, severely limiting your ability to specify a large number of nodes.
3. Since the validation is based on IP address or client names only and it’s relatively easy to spoof these two key pieces of identification, the system is not inherently secure.

For these reasons, mixing excluded and included nodes is not quite suitable for excluding a large list of servers from a network or subnetwork. This method can be used when the list of machines accessing the network is relatively small and the machines are in a subnetwork, behind a firewall. In such a configuration, the risk of external attacks is very slight, and the risk of unauthorized access by spoofing key identification is negligible.

Oracle Net also provides another means of building a rudimentary firewall, using a lesser-known and even less used tool called Connection Manager. This tool is far more flexible in its setup; you can specify wildcards in node names, without restrictions such as the need to name all the nodes on a single line. A detailed discussion of Connection Manager with real-life examples can be found in the book Oracle Privacy Security Auditing.
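To give a flavor of it, a rule list in cman.ora might look like the following sketch, built around the hypothetical hosts of our example (here x acts as a wildcard for "any"; verify the exact syntax against the documentation for your Oracle version):

cman_rules =
  (rule_list =
    # accept connections to payrolldb1 from the payroll application servers
    (rule = (src = payroll1) (dst = payrolldb1) (srv = x) (act = accept))
    (rule = (src = payroll2) (dst = payrolldb1) (srv = x) (act = accept))
    # wildcards are allowed in the source, e.g. a whole subnet
    (rule = (src = 192.168.1.*) (dst = benefitsdb1) (srv = x) (act = accept))
  )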

Troubleshooting

Of course, things may not always proceed as smoothly as in the examples we’ve cited so far. One common problem you may encounter is that the exclusion does not work even though the files are present and the parameters seem to be defined properly.

To diagnose a node checking issue, you need to turn on tracing of the connection process. Tracing can be done at several levels of detail; in this case, you should enable it at the level called support, or "16". Place the following line in the file sqlnet.ora:

trace_level_server = support

Doing this causes the connection process to write detailed information to a trace file under the directory $ORACLE_HOME/network/trace. The directory can be changed to a different value with a parameter in the file sqlnet.ora, as
trace_directory_server = /tmp

This causes the trace information to be written to the directory /tmp instead of the default. After setting the parameters as shown above, attempt the connection again. There is no need to bounce the listener. The connection attempt will create trace files, named something like svr_0.trc, in the proper directory. Open this file in an editor (parts of the file are shown below).

[20-JAN-2004 12:00:01:234] Attempted load of system pfile
source /u02/oracle/product/9.2/network/admin/sqlnet.ora

[20-JAN-2004 12:00:01:234] Parameter source loaded successfully

[20-JAN-2004 12:00:01:234]

[20-JAN-2004 12:00:01:234] -> PARAMETER TABLE LOAD RESULTS FOLLOW <-

[20-JAN-2004 12:00:01:234] Successful parameter table load

[20-JAN-2004 12:00:01:234] -> PARAMETER TABLE HAS THE FOLLOWING CONTENTS <-

[20-JAN-2004 12:00:01:234] tcp.validnode_checking = yes

[20-JAN-2004 12:00:01:234] trace_level_server = support

[20-JAN-2004 12:00:01:234] tcp.invited_nodes = (192.168.1.1, 192.168.1.2)

[20-JAN-2004 18:27:04:484] NAMES.DIRECTORY_PATH = (TNSNAMES)

[20-JAN-2004 18:27:04:484] tcp.excluded_nodes = (192.168.1.3)

[20-JAN-2004 18:27:04:484] --- PARAMETER SOURCE INFORMATION ENDS ---

These lines indicate that

1. The parameter file /u02/oracle/product/9.2/network/admin/sqlnet.ora was read by the listener.

2. The parameters were loaded successfully.

3. The contents of the parameter file were read as written.

4. The names of the excluded and invited nodes are displayed.

If the information is not as shown here, the problem may be caused by the way the parameter file is written; most likely a typographical error, such as a missing parenthesis. This type of error should be fixed before proceeding further with the trace file.

If the parameters are indeed loaded properly, you should next check the section of the file in which the node validity checking is done. That section looks like this:

[20-JAN-2004 12:30:45:321] ntvllt: Found tcp.invited_nodes. Now loading...

[20-JAN-2004 12:30:45:321] ntvllhs: entry

[20-JAN-2004 12:30:45:321] ntvllhs: Adding Node 192.168.1.1

[20-JAN-2004 12:30:45:321] ntvllhs: Adding Node 192.168.1.2

[20-JAN-2004 12:30:45:321] ntvllhs: exit

[20-JAN-2004 12:30:45:321] ntvllt: exit

[20-JAN-2004 12:30:45:321] ntvlin: exit

[20-JAN-2004 12:30:45:321] nttcnp: Validnode Table IN use; err 0x0

The first line indicates that the parameter tcp.invited_nodes was found. Next, the entries in that list are read and displayed one after the other. This is the most important clue: if the addresses had been written incorrectly, or the syntax were wrong, the trace file would have indicated it by not listing the node names checked. The last line in this section shows that the ValidNode table was read and used with an error code of 0x0 (hexadecimal zero) — the table has no errors. If there were a problem in the way the valid node parameters were written in the parameter file, the trace file would show something different. For instance, say the parameters were written as

tcp.excluded_nodes = (192.168.1.3

Note that a closing parenthesis is missing, indicating a syntax problem. However, this does not affect the connection; the listener simply ignores the error and allows the connection without doing any valid node checking. Upon investigation, we would find the root of the problem in the trace file, which shows the following information:

--- PARAMETER SOURCE INFORMATION FOLLOWS ---

[20-JAN-2004 12:45:03:214] Attempted load of system pfile

source /u201/oracle/product/9.2/network/admin/sqlnet.ora

[20-JAN-2004 12:45:03:214] Load contained errors
[20-JAN-2004 12:45:03:214] Error stack follows:
NL-00422: premature end of file
NL-00427: bad list

[20-JAN-2004 12:45:03:214]

[20-JAN-2004 12:45:03:214] -> PARAMETER TABLE LOAD RESULTS FOLLOW <-

[20-JAN-2004 12:45:03:214] Some parameters may not have been loaded

[20-JAN-2004 12:45:03:214]

See dump for parameters which loaded OK

This clearly shows that the parameter file had errors that prevented the parameters from loading. Because of this, valid node checking is turned on and in use, but there is nothing in the list of excluded nodes, as shown in the following line from the trace file:

[20-JAN-2004 12:45:03:214] nttcnp: Validnode Table IN use; err 0x0

Since the error is 0x0, no error is reported by the validity checking routine. The subsequent lines of the trace file show other valuable information. For instance, this line

[20-JAN-2004 12:45:13:211] nttbnd2addr: using host IP address: 192.168.1.1

shows that the IP address of the server to which the listener was supposed to route the connection was 192.168.1.1. If all goes well, the listener allows the client to open a connection. This is confirmed by the following line:

[20-JAN-2004 12:45:14:320] nttcon: NT layer TCP/IP connection has been established.

As the line says, the TCP/IP connection has been established. If any other problems exist, the trace file will show enough helpful information for a diagnosis.

Summary

To summarize:

  1. Node Validation can be used to instruct listeners to accept or reject a connection from a specific client.
  2. The parameter file is sqlnet.ora in Oracle 9i and protocol.ora in Oracle 8i.
  3. The nodes must be explicitly specified by name or by IP Address; no wildcards are supported.
dannorris

Congratulations New Oracle ACE, Jeremy Schneider!

I’ll be the first to offer large congratulations to Jeremy Schneider on being the most recent appointee to the Oracle ACE program. He certainly deserves it (I nominated him, so I suppose I would think so), and I continue to look for great things to come.
