Internals

Implicit datatype conversion in the parsing phase – something new I learned today!

Wow, I wasn’t aware that Oracle can also do an implicit datatype conversion for literal strings during the parsing phase!

SQL> @desc t
           Name                            Null?    Type
           ------------------------------- -------- ----------------------------
    1      A                                        NUMBER(38)

SQL> select * from t where a = '1' || RPAD('0',5,'0');

no rows selected

SQL> @x
Display execution plan for last statement for this session from library cache...

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------
SQL_ID  d7r6md8wfu74d, child number 0
-------------------------------------
select * from t where a = '1' || RPAD('0',5,'0')

Plan hash value: 1601196873

--------------------------------------------------------
| Id  | Operation         | Name | E-Rows | Cost (%CPU)|
--------------------------------------------------------
|   0 | SELECT STATEMENT  |      |        |     2 (100)|
|*  1 |  TABLE ACCESS FULL| T    |      1 |     2   (0)|
--------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("A"=100000)

You see what happened? The expression '1' || RPAD('0',5,'0') has been evaluated, which returns a string. And this string '100000' has been converted to a NUMBER 100000 during the parse phase... otherwise you would see quotes around the number above, with a TO_NUMBER() function around it (so that Oracle could compare the NUMBER column "A" to the same datatype)...

I'll add a TO_CHAR() around the column A just to demonstrate that a VARCHAR2 value (which is what the original literal in my query is) will be shown with quotes, like every normal string:

SQL> select * from t where to_char(a) = '1'||rpad('0',5,'0');

no rows selected

SQL> @x
Display execution plan for last statement for this session from library cache...

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------
SQL_ID  7yf6j8fdyrvk7, child number 0
-------------------------------------
select * from t where to_char(a) = '1'||rpad('0',5,'0')

Plan hash value: 1601196873

--------------------------------------------------------
| Id  | Operation         | Name | E-Rows | Cost (%CPU)|
--------------------------------------------------------
|   0 | SELECT STATEMENT  |      |        |     2 (100)|
|*  1 |  TABLE ACCESS FULL| T    |      1 |     2   (0)|
--------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_CHAR("A")='100000')

Let’s see whether this trick is somehow done also for bind variables:

SQL> var x varchar2(10)
SQL> exec :x:= '1' || RPAD('0',5,'0');

PL/SQL procedure successfully completed.

SQL> print x

X
--------------------------------
100000

SQL> select * from t where a = :x;

no rows selected

SQL> @x
Display execution plan for last statement for this session from library cache...

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------
SQL_ID  45f39y7580bdp, child number 2
-------------------------------------
select * from t where a = :x

Plan hash value: 1601196873

--------------------------------------------------------
| Id  | Operation         | Name | E-Rows | Cost (%CPU)|
--------------------------------------------------------
|   0 | SELECT STATEMENT  |      |        |     2 (100)|
|*  1 |  TABLE ACCESS FULL| T    |      1 |     2   (0)|
--------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

   1 - (VARCHAR2(30), CSID=873): '100000'

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("A"=TO_NUMBER(:X))

Apparently not! And this kind of makes sense – as this string-to-number conversion is done during the parse phase, Oracle doesn't yet know what the actual value (in the bind variable memory) will be, so it can't convert it to a number in advance either :-)

This is an interesting little detail... I didn't know that in addition to the implicit datatype conversion during query execution (using TO_CHAR, TO_NUMBER functions etc.), Oracle can sometimes convert a string literal to a NUMBER datatype under the hood at parse time!
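As a footnote – a generic sketch with made-up object names, not related to the demo table above – the direction of the implicit conversion matters for index use too. When a VARCHAR2 column is compared to a NUMBER, the TO_NUMBER() gets wrapped around the column, not the literal, which disables a regular index on that column:

create table t2 (v varchar2(10));
create index t2_i on t2 (v);

select * from t2 where v = 100000;
-- predicate becomes TO_NUMBER("V")=100000, so T2_I can't be range scanned

select * from t2 where v = to_char(100000);
-- converting the literal side instead keeps the index usable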

P.S. I tested this on Oracle 11.2.0.2 with optimizer_features_enable set from 11.2.0.2 to all the way back to 8.0.0 and the behavior was the same. I didn’t find any mention of this conversion in the CBO tracefile although after a filter pushdown transformation (FPD) the string literal was already shown as a number datatype. If anyone still has access to ancient Oracle database versions (like 9.2 and 10.1 ;-) then let me know whether you see the same results!

ORA-4031 errors, contention, cursor management issues and shared pool fragmentation – free secret seminar!

Free stuff! Free stuff! Free stuff! :-)

The awesome dudes at E2SN have done it again! (and yes, Tom, this time the “we at E2SN Ltd” doesn’t mean only me alone ;-)

On Tuesday 22nd March I'll hold two (yes, two) Secret Oracle Hacking Sessions – about ORA-04031: unable to allocate x bytes of shared memory errors, cursor management issues and other shared pool related problems (like fragmentation). This event is free for all! You'll just need to be fast enough to register – both events have a 100-attendee limit (due to my GotoWebinar account limitations).

I am going to run this online event twice, so a total of 200 people can attend (don't register for both events, please). One event is in the morning (my time) to cater for the APAC/EMEA region and the other session is for the EMEA/US/Americas audience.

The content will be the same in both sessions. There will be no slides (you can't fix your shared pool problems with slides!) but there will be demos, scripts, live examples and fun (for the geeks among us anyway – others, go and read some slides instead ;-)

Expert Oracle Exadata book – Alpha chapters available for purchase!

Hi,

Apress has made the draft versions of our Expert Oracle Exadata book available for purchase.

How this works is:

  1. You purchase the “alpha” version of the Expert Oracle Exadata book
  2. You get access to draft/alpha PDF versions of some chapters now!
  3. As more chapters are added and existing ones updated, you’ll receive an email and you can download these too
  4. You will get a PDF copy of the final book once it’s out!

This is an awesome deal if you can’t wait until the final launch and want to get ahead of the curve with your Exadata skills ;-)

Buy the alpha version of our Expert Oracle Exadata book from Apress here!

If you haven’t heard about this book before – I’m one of the 3 authors, writing it together with Kerry Osborne and Randy Johnson from Enkitec; our official tech reviewer is none other than THE Kevin Closson, and we are also getting some (unofficial) feedback from Oracle database junkie Arup Nanda.

So this book will absolutely rock and if you want a piece of it now, order the alpha book above!

P.S. This hopefully also explains why I’ve been so quiet with my blogging lately – can’t write a book and do many other things at the same time… (at least if you want to do it well…)

Advanced Oracle Troubleshooting v2.0 Online Deep Dives in April and May 2011

Due to a lot of interest I’m going to do another run of my Advanced Oracle Troubleshooting v2.0 Online Deep Dive seminars in April and May (initially I had planned to do it no earlier than Sep/Oct…)

Check the dates & additional info out here:

P.S. People who already attended the AOT2 seminars last year – I will schedule the follow-up Q&A sessions in mid-March!

Is this valid SQL syntax? :-)

I’m talking about this:

select-1from from dual;

Looks invalid, right? Well, let’s run it:

SQL> select-1from from dual;

       ROM
----------
 -1.0E+000

This is because:

  1. Oracle doesn’t need whitespace for tokenizing the SQL statement (differences in character classes will do – as I’ve explained here)
  2. The first from “keyword” in the statement above is broken into two tokens, because an “F” right after a digit means that the preceding number is a FLOAT (and “D” means DOUBLE). The tokenizer stops right there, knowing that whatever comes after this character (“ROM”) is the next token – which, according to Oracle SQL syntax rules, gets assigned as the output column alias

The following funky-looking SQL statements are also valid:

SQL> select.1e2ffrom dual;

     .1E2F
----------
  1.0E+001

SQL> select.1e2fas"."from dual;

         .
----------
  1.0E+001

In the upper example, “.1e2f” means the number .1 * 10^2 (scientific notation), represented as a FLOAT internally, and in the lower one I’ve just added a column alias with the “AS” keyword, to make the SQL look a bit crazier.
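Presumably the “D” (DOUBLE) suffix works the same way in the tokenizer – a sketch I’ll leave for you to run (output not shown here), which should return the same value as a BINARY_DOUBLE:

SQL> select.1e2dfrom dual;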

:-)

You can explain an invalid SQL statement

I’m in “nightmare weekend before presenting” mode. I’m up to my eyes at work (and have been for ages, thus the quiet blog) and my recent weekends have been full of normal {and abnormal} life.

As is the way, when up against it and putting together my proofs for wild claims, everything breaks subtly and makes my wild claims look a little, well, wild – even though they are real issues I’ve seen, worked through and fixed in the day job. *sigh*. It does not help when you come across little oddities you have never seen before and end up spending valuable time looking into them.

So here is one. I’m just putting together a very, very simple demo of how the number of rows the CBO expects to see drops off as you move outside the known range. In the below you can see the statement I am using (I keep passing in different days of the month and watching the expected number of rows drop until I hit 1 expected row), but look at how it progresses to the last entry…

mdw11> select count(*) from date_test_flat where date_1=to_date('&day-02-2011','DD-MM-YYYY')
  2  /
Enter value for day: 01

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    16 |   128 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> /
Enter value for day: 15

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     2 |    16 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-15 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> /
Enter value for day: 21

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> /
Enter value for day: 30

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE('30-02-2011','DD-MM-YYYY'))

mdw11>

The expected number of rows drops from 16 to 2 to 1 – and then shoots up to 99 again (which is the expected number within the known range, as I have 10,000 rows spread over 100 days). My immediate thought is “Wow! Maybe Oracle has put in some odd fix where, when you go well out of range, it reverts to expecting an average number of rows”. Nope. It is because I asked for the data for the 30th of February. And I did not get an error.
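(If you want to play along: a test table like this can be knocked up with something like the sketch below – a reconstruction matching the 10,000 rows over 100 days, not necessarily the exact script used:)

create table date_test_flat as
select to_date('01-11-2010','DD-MM-YYYY') + mod(rownum,100) date_1
      ,lpad('x',50,'x') padding
from   dual connect by level <= 10000;

exec dbms_stats.gather_table_stats(user,'DATE_TEST_FLAT')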

I think it is because I have set autotrace traceonly explain. This causes the SQL statement not to be executed {if it is just a select, not an insert, update or delete}. It seems the costing section of the CBO is not so good at spotting duff dates – and it then gets the costing wrong.

I’ve spotted that the format of the filter also changes when the date is invalid – I really want to check that out, but I’d better continue failing to write the presentation!

I know, pretty pointless knowing this, but it just amused me. Below is just a quick continuation to show that if the statement is to be executed you get an error and no plan, and that utterly duff dates can be passed in.

mdw11> /
Enter value for day: 28

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> SET AUTOTRACE ON
mdw11> /
Enter value for day: 20
any key>

  COUNT(*)
----------
         0

1 row selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-20 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        821  consistent gets
          0  physical reads
          0  redo size
        421  bytes sent via SQL*Net to client
        415  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

mdw11> /
Enter value for day: 30
select count(*) from date_test_flat where date_1=to_date('30-02-2011','DD-MM-YYYY')
                                                         *
ERROR at line 1:
ORA-01839: date not valid for month specified

mdw11> set autotrace traceonly explain
mdw11> /
Enter value for day: 30

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE('30-02-2011','DD-MM-YYYY'))

mdw11> /
Enter value for day: 45

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE('45-02-2011','DD-MM-YYYY'))

Asynch descriptor resize wait event in Oracle

A lot of people have started seeing the “asynch descriptor resize” wait event in Oracle 11gR2. Here’s my understanding of what it is. Note that I didn’t spend too much time researching it, so some details may not be completely accurate, but my explanation should at least give you an idea of why the heck you suddenly see this event in your database.

FYI, there’s a short, but incomplete explanation of this wait event also documented in MOS Note 1081977.1



The “direct path loader” (KCBL) module is used for performing direct path IO in Oracle, such as direct path segment scans and reading/writing spilled over workareas in temporary tablespace. Direct path IO is used whenever you see “direct path read/write*” wait events reported in your session. This means that IOs aren’t done from/to buffer cache, but from/to PGA directly, bypassing the buffer cache.
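(As a side note – a quick sketch, not from the original post – you can see which sessions have been waiting on direct path IO with something like this against v$session_event:)

select sid, event, total_waits, time_waited
from   v$session_event
where  event like 'direct path%'
order  by time_waited desc;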



This KCBL module tries to dynamically scale up the number of asynch IO descriptors (AIO descriptors are the OS kernel structures, which keep track of asynch IO requests) to match the number of direct path IO slots a process uses. In other words, if the PGA workarea and/or spilled-over hash area in temp tablespace gets larger, Oracle also scales up the number of direct IO slots. Direct IO slots are PGA memory structures helping to do direct IO between files and PGA.



In order to be able to perform this direct IO asynchronously, Oracle also dynamically scales up the number of OS asynch IO descriptors, one for each slot (up to 4096 descriptors per process). When Oracle doesn’t need the direct IO slots anymore (when the direct path table scan has ended or a workarea/tempseg gets cancelled) then it scales down the number of direct IO slots and asynch IO descriptors. Scaling asynch IO descriptors up/down requires issuing syscalls to OS (as the AIO descriptors are OS kernel structures).



I guess this is supposed to be an optimization to avoid running out of OS AIO descriptors, by releasing them when they’re not needed, but as that Metalink note mentioned, the resize apparently sucks on Linux. Perhaps that’s why other ports also suffer and see the same wait event.



The “asynch descriptor resize” event itself is really an IO wait event (recorded in the wait class Other, though), waiting to reap outstanding IOs. Once this wait is over, the OS call to change the number of asynch IO descriptors (allocated to that process) is made. There’s no wait event recorded for the actual “resize” OS call itself, as it shouldn’t block.



So, the more direct IO you do, especially when sorting/hashing to temp with frequent workarea closing/opening, the more of this event you’ll see (and it’s probably the same for regular tablespace direct path IO too).



This problem wouldn’t be noticeable if Oracle kept the async IO descriptors cached and didn’t constantly allocate/free them. Of course, then you might end up running out of AIO descriptors in the whole server more easily. Also, I don’t know whether there would be some OS issues with reusing cached AIO descriptors – perhaps there is a good reason why such caching isn’t done.



Nevertheless, what’s causing this wait event is too-frequent AIO descriptor resizing due to changes in the direct IO slot count (caused by changes in PGA workareas/temp segments, and perhaps also by frequent direct path scans through lots of tables/partitions).



So, the obvious question here is: what to do about this wait event? Well, first you should check how big a part of your total response time this event takes at all (a quick query sketch for that follows the list below):



  1. If it’s something like 1% of your response time, then this is not your problem anyway and troubleshooting it further would not be practical – it’s just how Oracle works :)
  2. If it’s something like 20% or more of your response time, then it’s clearly a problem and you’d need to talk to Oracle Support to sort out the bug
  3. If it’s anything in between, make sure you don’t have an IO problem first, before declaring this a bug. In one recent example I saw direct path reads take over a second on average when this problem popped up. The asynch descriptor resize wait event may well disappear from the radar once you fix the root cause – slow IO (or SQL doing too much IO). Remember: the asynch descriptor resize wait event, at least on Linux, is actually an IO wait event – the process is waiting for outstanding IO completion before the descriptor count increase/decrease can take place.
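For reference, here’s the kind of quick check I mean – a sketch against v$system_event for the instance-wide picture (use v$session_event for a per-session view):

select event, total_waits,
       round(time_waited_micro / 1e6, 1) time_waited_s
from   v$system_event
where  event = 'asynch descriptor resize';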

A little new feature for shared pool geeks :-)

If you’ve taken any shared pool dumps from 11g+ databases lately, you might have wondered about this:

http://tech.e2sn.com/oracle/troubleshooting/shared-pool

Another (secret) hacking session with me – using Oracle Session Snapper for flexible troubleshooting (and fun)

And this time we have audio!!! (Wow!)

Following the huge success of my last hacking session – planned while drinking beer at Graham Wood’s OOW pre-party and delivered from Miracle’s massive Oracle Closed World event in the Thirsty Bear (between drinking beers) – I’m announcing another hacking session:

What: Using Session Snapper for flexible Oracle Performance Troubleshooting

When: Wednesday 27th Oct 9:00-10:00 AM PDT (US West coast / California time). Check what this is in your time zone here

Where: Internet! -> Sign up here: http://tech.e2sn.com/secret

You’ll need to register fast and be “there” on time, as my current GotoWebinar account only allows 100 attendees to log on… Last time over 100 people signed up, but “luckily” fewer actually showed up, so nobody got left outside!

BTW, I have figured out what went wrong with the audio last time (which caused my voice to disappear at the end of the presentation). A program, which I accidentally launched via a keyboard shortcut, grabbed my mic input for itself, so GotoWebinar’s app couldn’t access it anymore.

See you soon!

A: The most fundamental difference between hash and nested loop joins

Ok guys, thanks for waiting!

I ended up expanding the article quite a lot compared to what I had originally planned. In fact I only wrote 50% of what I plan to write – I’ll add the rest… um… later… Instead of just stating the difference between the joins, I took a step back and elaborated on something I often see people doing (and talking about in newsgroups and lists too).

Basically the most fundamental (or biggest or most important) difference between nested loop and hash joins is that:

  • Hash joins cannot look up rows from the inner (probed) row source based on values retrieved from the outer (driving) row source; nested loops can.

In other words, when joining tables A and B (A is the driving table, B is the probed table), a nested loop join can take the 1st row from A and perform a lookup into B using that value (of the column(s) you join by). Then the nested loop takes the next row from A and performs another lookup into table B using the new value. And so on and so on and so on.

This opens up additional access paths to table B. For example, when joining ORDERS and ORDER_ITEMS by ORDER_ID (and ORDER_ID is the leading column of the PK in the ORDER_ITEMS table), then for whatever orders are taken from the ORDERS table, we can perform a focused, narrow index range scan on ORDER_ITEMS for every ORDER_ID retrieved from the driving ORDERS table. A hash join can’t do that.
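To put that in SQL terms, here’s a sketch with hypothetical ORDERS/ORDER_ITEMS tables, hinted just to force the join method:

select /*+ leading(o) use_nl(oi) */ oi.*
from   orders o, order_items oi
where  oi.order_id = o.order_id
and    o.order_date >= trunc(sysdate);
-- for every ORDER_ID fetched from ORDERS, nested loops can do a narrow
-- INDEX RANGE SCAN on the ORDER_ITEMS PK index using that runtime value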

Of course this doesn’t mean that hash joins can’t use any indexes for the tables they read – index range scans and unique lookups can still be used under a hash join, but only if there are constant values in the query text (in the form of literals or bind variables). If there are no such constant (filter) conditions under a hash join, then the other options for using that index would be an INDEX FULL SCAN (which is a range scan from one end of the index to the other) or an INDEX FAST FULL SCAN (which is like a full table scan through the entire index segment). However, none of these options gives the same benefit as nested loops looking up rows from row source B dynamically, based on what was retrieved from A at runtime.
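And the flip side – a sketch of an index still being usable under a hash join, thanks to a constant filter predicate on the probed table (hypothetical names again):

select /*+ use_hash(oi) */ count(*)
from   orders o, order_items oi
where  oi.order_id = o.order_id
and    oi.item_status = 'BACKORDER';
-- an index on ORDER_ITEMS(ITEM_STATUS) could be range scanned here, since the
-- filter value is known before the join starts; the ORDER_ID values coming
-- from ORDERS, however, can't be used for per-row lookups by a hash join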

Note that this nested loops benefit isn’t limited to indexes on table B only – the difference is more fundamental than just a specific access path. For example, if table B happens to be a single-table hash cluster or an indexed X$ table, then the nested loop is also able to do “optimized” lookups from these row sources, based on the values retrieved from table A.

So, my article with a lot of (loosely) related details is here:

In the comments section of my question, Tom, Bernard Polarski, Christian Antognini and Marc Musette got the closest to what I had in mind when I asked the question. However, of course, your mileage may vary somewhat, depending on what kind of problems you have experienced the most over the years. Also, Jonathan Lewis had a valid comment that the answer depends on what exactly one means by “fundamental” – and yeah, this was open to interpretation.

Nevertheless, I wanted to emphasize that there’s a more important difference between NL and hash joins than the usual stuff you see in training material, which talks about implementation details like hash tables and memory allocation…

Some day I will complete that article – I plan to add some design advice in there, like denormalization opportunities for getting the best of both worlds etc. But now I’m gonna go get a beer instead.

Thanks for reading and answering my blog – I was quite impressed by the volume of comments & answers to my question. I must do this more often!
