Oakies Blog Aggregator

pete.sharman

VMware Hands On Labs

Introduction

In a previous post, I covered the VMware Experts Program. When I was attending the Program, one of the things Don Sullivan mentioned was the VMware Hands On Labs (HOL). I hadn’t had a chance to explore those before, so I was interested to see what they were like.

The Good

Quite honestly, these are among the best hands on lab environments I have played around with. If I wanted to test a feature of Oracle software in my previous life, most of the time I built myself a virtual machine environment using Oracle’s VirtualBox, installed all the relevant software and started from there. Of course, much of that could be snapshotted to save repeating the exercise, but it was largely restricted to a single VM. If I wanted to test something like Oracle’s Real Application Clusters technology, I built myself a small two node cluster out of two separate VMs and went from there. It really didn’t give me the feel of a real-world environment.

Enter the VMware HOL. Obviously, to get a real-world environment to test things like vMotion migration of VMs takes much more in the way of resources than my poor old laptop could handle, even if it does have 32 GB of RAM. The VMware HOL environment solves that issue for you. It takes only a small amount of time to actually crank up an environment for each lab you want to do, and each lab comes complete with online instructions integrated with the lab, as well as a separate PDF file and HTML version of the lab if you want to use that.

The labs are largely standalone, but do reference other labs where more details can be found. For things that may take some time to execute, you generally find they have included an interactive simulation where you click your way around and type in a few things, but the actual steps are simply simulated to save time.

Overall, as I mentioned above, the VMware HOLs are pretty darn good!

The Bad

Frankly, there’s not much to say here. If I were to nitpick a few minor points, they’d be these:

  • It would be nice to have a recommended path through the labs documented outside the labs themselves. There are quite a few labs there, and it’s a bit hard to determine which ones should be done in which order. In the labs themselves, there are sometimes suggestions to do other labs for more information, but that’s about it.
  • Sometimes the interactive simulation doesn’t take input correctly (you occasionally need to hit a key twice – which can be more than a tad confusing when you’re entering passwords and don’t realize one got missed!). However, after a while you realize you can press any active key (i.e. not Shift) and the simulation will assume you’re typing what it expects, so you can just keep typing “jjjjjjjjjjjjj…” until the field populates with the right number of characters.
dbakevlar

The Journey Begins- Power BI and AI

I’m back!!  I know you missed my posts…be honest….

mwidlake

OUG Scotland – Why to Come & Survival Guide

The UKOUG’s Scottish conference is on the 21st June at the Sheraton Grand Hotel in the centre of Edinburgh, not far from Edinburgh Castle.

 

The Event


There is a six-stream agenda covering Database, Apex & Development, Platform & Services, Cloud Apps, EBS Apps Tech, and Business Analytics/Systems & EPM, so pretty much the whole breadth of Oracle Tech, Apps and BI. We have a keynote by Oracle’s Caroline Apsey on the Bloodhound Project, the UK-based group trying to smash the world land-speed record with a 1,000mph rocket car – solving lots of engineering challenges on the way, and using the Oracle Cloud. I’ll be sure to see that one.

With 6 all-day streams there are a lot of presentations to choose from, but as a taste of what is on offer I’ll mention Jonathan Lewis talking about stats, Heli Helskyaho explaining the basics of machine learning, and from Oracle we have Grant Ronald on AI-driven chatbots, Hilary Farrell on the new features of APEX 18.1, and Keith Laker on JSON & SQL. The talks are a nice mixture of end-user experiences, recognised experts and Oracle themselves. UKOUG is independent of Oracle, so although we are very happy to have Oracle support us, we have talks that are not just what Oracle are currently pushing. This is what I love about user group meetings: you get the whole story.

As a member of the UKOUG this event is free, counting as one of your SIG places. If you have run out of SIG places, you can buy an extra one at £85 – or upgrade your membership, of course.

dbakevlar

First Day at Microsoft, the Satya Way

So I’ve finally crawled my way back out of the hole I dug myself into this last month. The house is empty, the 5th wheel is ready for us to move into, and I’m now in Redmond doing my NEO (New Employee Orientation).

I swear this is all I see in my head every time I hear “NEO”.

mauro.pagano

Introducing SQLdb360: merging eDB360 and SQLd360, while raising the bar for community engagement

Today we are very happy to release SQLdb360, a new tool that merges eDB360 and SQLd360 into a single package.

Tools eDB360 and SQLd360 can still be used independently, but now there is only one package to download and keep updated. All the new features and updates to both tools are now in that one package.

The biggest change that comes with SQLdb360 is the kind invitation to everyone interested to contribute to its development – hence the new blended name and its release format.

We do encourage your help and ideas to continue building a free, open-source, and (YMMV) hopefully great tool!

Over the years, a few community members requested new features, but they were ultimately slowed down by our speed of reaction to their requests. Well, no more!

A few consumers of these tools implemented cool changes they needed, sometimes only sending us the changes (or pull requests) at a later time, which meant good ideas became available to others after a delay. Not anymore!

If there is something you’d like to have as part of SQLdb360 (aka SQLd360 and eDB360), just write and test the additional code, then send us the pull request! We will then review, validate, and merge your code changes into the main tool.
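
For anyone new to that workflow, here is a minimal sketch of a typical GitHub pull-request flow – the fork URL and branch name below are placeholders, not the project’s actual details:

# fork the repository in the GitHub UI first, then clone your fork
git clone https://github.com/<your-github-user>/sqldb360.git
cd sqldb360
git checkout -b my-new-check      # a feature branch for your change
# ... edit and test the scripts ...
git add .
git commit -m "Add my new check"
git push origin my-new-check
# finally, open a pull request from your fork against the main repository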

There are several advantages to this new approach:

  1. Carlos and Mauro won’t dictate the direction of the tool anymore: we will continue helping and contributing, but we won’t “own” it anymore (the community will!)
  2. Carlos and Mauro won’t slow down the development anymore: nobody is the bottleneck now!
  3. Carlos and Mauro won’t run out of ideas anymore!!! The community has great ideas to share!!!

Due to the nature of this new collaborative effort, the way we now publish SQLdb360 is this:

  • Instead of linking to the current master repository, the tool now uses “releases”, in order to snapshot stable versions that bundle several changes together (better than creating a separate version for every merge into master).
  • Links in our blogs are being updated to reference the latest (and current) stable release of SQLdb360 (starting with v18.1).

Note: Version names sound awfully familiar to Oracle nomenclature, right? Well, we started using this numbering back in 2014!!!

Carlos & Mauro

Franck Pachot

Change Data Capture from Oracle with StreamSets Data Collector

With this trend of CQRS architectures, where transactions are streamed to a bunch of heterogeneous, eventually consistent, polyglot-persistence microservices, logical replication and Change Data Capture become important components already at the architecture design phase. This is good for existing products such as Oracle GoldenGate (which must be licensed even to use only the CDC part in the Oracle Database, as Streams is going to be desupported) or Dbvisit replicate to Kafka. On the open-source side there are also some projects and ideas in this area (Debezium, VOODOO), not all of them released yet.

Today I tested the Oracle CDC Data Collector for StreamSets. StreamSets Data Collector is an open-source project started by former people from Cloudera and Informatica to define pipelines streaming data from data collectors. It is easy, simple, and has a bunch of possible destinations. The Oracle CDC origin is based on LogMiner, which means that it is easy to set up but may have some limitations (mainly datatypes, DDL replication and performance).

Install

The installation guide is at streamsets.com. I chose the easiest way for testing, as they provide a Docker container (https://github.com/streamsets):

# docker run --restart on-failure -p 18630:18630 -d --name streamsets-dc streamsets/datacollector
Unable to find image 'streamsets/datacollector:latest' locally
latest: Pulling from streamsets/datacollector
605ce1bd3f31: Pull complete
529a36eb4b88: Pull complete
09efac34ac22: Pull complete
4d037ef9b54a: Pull complete
c166580a58b2: Pull complete
1c9f78fe3d6c: Pull complete
f5e0c86a8697: Pull complete
a336aef44a65: Pull complete
e8d1e07d3eed: Pull complete
Digest: sha256:0428704019a97f6197dfb492af3c955a0441d9b3eb34dcc72bda6bbcfc7ad932
Status: Downloaded newer image for streamsets/datacollector:latest
ef707344c8bd393f8e9d838dfdb475ec9d5f6f587a2a253ebaaa43845b1b516d
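
Before opening the GUI, a quick sanity check (plain Docker commands, nothing StreamSets-specific) confirms the container is up and the Data Collector has finished starting:

# the container should show as 'Up' with port 18630 published
docker ps --filter name=streamsets-dc
# follow the startup log until the Data Collector reports it is running
docker logs -f streamsets-dc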

CaptureStreamSets001
And that’s all. I am ready to connect with http on port 18630.

The default user/password is admin/admin

The GUI looks simple and efficient. There’s a home page where you define the ‘pipelines’ and monitor them running. In the pipelines, we define sources and destinations. Some connectors are already installed, others can be installed automatically. For Oracle, as usual, you need to download the JDBC driver yourself because Oracle doesn’t allow it to be embedded, for legal reasons. I’ll do something simple here, just to check the mining from Oracle.

CaptureStreamSets002CaptureStreamSets003
In ‘Package Manager’ (the little gift icon at the top) go to JDBC and check ‘install’ for the streamsets-datacollector-jdbc-lib library.
Then in ‘External Libraries’, install (with the ‘upload’ icon at the top) the Oracle JDBC driver (ojdbc8.jar).
I’ve also installed the MySQL one for future tests:

File Name                        Library ID
ojdbc8.jar                       streamsets-datacollector-jdbc-lib
mysql-connector-java-8.0.11.jar  streamsets-datacollector-jdbc-lib

Oracle CDC pipeline

I’ll use the Oracle Change Data Capture here, based on Oracle LogMiner. The GUI is very easy: just select ‘Oracle CDC’ as source in a new pipeline. Click on it and configure it. I’ve set the minimum here.
In the JDBC tab I’ve set only the JDBC Connection String, to jdbc:oracle:thin:scott/tiger@//192.168.56.188:1521/pdb1, which is my PDB (I’m on Oracle 18c here and multitenant is fully supported by StreamSets). In the Credentials tab I’ve set ‘sys as sysdba’ as the username, and its password. The configuration can also be displayed as JSON, and here is the corresponding entry:

"configuration": [
{
"name": "hikariConf.connectionString",
"value": "jdbc:oracle:thin:scott/tiger@//192.168.56.188:1521/pdb1"
},
{
"name": "hikariConf.useCredentials",
"value": true
},
{
"name": "hikariConf.username",
"value": "sys as sysdba"
},
{
"name": "hikariConf.password",
"value": "oracle"
},
...

I’ve provided SYSDBA credentials and only the PDB service, but it seems that StreamSets figured out automatically how to connect to the CDB (as LogMiner can be started only from CDB$ROOT). The advantage of using LogMiner here is that you need only a JDBC connection to the source – but of course, it will use CPU and memory resources on the source database host in this case.

Then I’ve defined the replication in the Oracle CDC tab: Schema to ‘SCOTT’ and Table Name Pattern to ‘%’. Initial Change is ‘From Latest Change’, as I just want to see the changes and not actually replicate for this first test. But of course, we can define an SCN here, which is what must be used to ensure consistency between the initial load and the replication. Dictionary Source is ‘Online Catalog’ – this is what LogMiner will use to map the object and column IDs to table names and column names. But be careful, as table structure changes may not be handled correctly with this option.

{
"name": "oracleCDCConfigBean.baseConfigBean.schemaTableConfigs",
"value": [
{
"schema": "SCOTT",
"table": "%"
}
]
},
{
"name": "oracleCDCConfigBean.baseConfigBean.changeTypes",
"value": [
"INSERT",
"UPDATE",
"DELETE",
"SELECT_FOR_UPDATE"
]
},
{
"name": "oracleCDCConfigBean.dictionary",
"value": "DICT_FROM_ONLINE_CATALOG"
},

I’ve left the defaults. I can’t yet think of a reason for capturing ‘select for update’, but it is there.

Named Pipe destination

I know that the destination part is the easy one here. I just want to see the captured changes, so I took the easiest destination: Named Pipe, where I configured only the Named Pipe (/tmp/scott) and the Data Format (JSON):

{
"instanceName": "NamedPipe_01",
"library": "streamsets-datacollector-basic-lib",
"stageName": "com_streamsets_pipeline_stage_destination_fifo_FifoDTarget",
"stageVersion": "1",
"configuration": [
{
"name": "namedPipe",
"value": "/tmp/scott"
},
{
"name": "dataFormat",
"value": "JSON"
},
...

Supplemental logging

The Oracle redo log stream is by default focused only on recovery (replay of transactions in the same database) and contains only the minimal physical information required for it. In order to get enough information to replay the transactions in a different database, we need supplemental logging for the database, and for the objects involved:

SQL> alter database add supplemental log data;
Database altered.
SQL> exec for i in (select owner,table_name from dba_tables where owner='SCOTT' and table_name like '%') loop execute immediate 'alter table "'||i.owner||'"."'||i.table_name||'" add supplemental log data (primary key) columns'; end loop;
PL/SQL procedure successfully completed.
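
To double-check what has been enabled, you can query the data dictionary – a quick verification on the source database, not something StreamSets requires:

SQL> select supplemental_log_data_min, supplemental_log_data_pk from v$database;
SQL> select owner, table_name, log_group_type from dba_log_groups where owner='SCOTT';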

Run

And that’s all. Just run the pipeline and look at the logs:

CaptureStreamSets005-log

The StreamSets Oracle CDC origin pulls continuously from LogMiner to get the changes. Here are the queries it uses for that:

BEGIN DBMS_LOGMNR.START_LOGMNR( STARTTIME => :1 , ENDTIME => :2 , OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE + DBMS_LOGMNR.NO_SQL_DELIMITER); END;

This starts mining between two timestamps. I suppose that it reads the SCNs to get a finer grain and avoid overlapping information.

And here is the main one:

SELECT SCN, USERNAME, OPERATION_CODE, TIMESTAMP, SQL_REDO, TABLE_NAME, COMMIT_SCN, SEQUENCE#, CSF, XIDUSN, XIDSLT, XIDSQN, RS_ID, SSN, SEG_OWNER FROM V$LOGMNR_CONTENTS WHERE ((( (SEG_OWNER='SCOTT' AND TABLE_NAME IN ('BONUS','DEPT','EMP','SALGRADE')) ) AND (OPERATION_CODE IN (1,3,2,25))) OR (OPERATION_CODE = 7 OR OPERATION_CODE = 36))

This reads the redo records. The operation codes 7 and 36 are for commits and rollbacks. The operations 1,3,2,25 are those that we want to capture (insert, update, delete, select for update) and were defined in the configuration. Here the pattern ‘%’ for the SCOTT schema has been expanded to the table names. As far as I know, there’s no DDL mining here to automatically capture new tables.

Capture

Then I’ve run this simple insert (I’ve added a primary key on this table as it is not there in utlsampl.sql):

SQL> insert into scott.dept values(50,'IT','Cloud');

And I committed (as it seems that StreamSets buffers the changes until the end of the transaction):

SQL> commit;

and here I got the message from the pipe:

/ $ cat /tmp/scott
 
{"LOC":"Cloud","DEPTNO":50,"DNAME":"IT"}

The graphical interface shows how the pipeline is going:
CaptureStreamSets006

I’ve tested some bulk loads (direct-path inserts) and they seem to be handled correctly. This Oracle CDC origin is based on LogMiner, so it is fully supported (no mining of the proprietary redo stream format) and its limitations are clearly documented.

Monitoring

Remember that the main work is done by LogMiner, so don’t forget to look at the alert.log on the source database. With big transactions, you may need a large PGA (but you can also choose to buffer to disk). If you have the Oracle Tuning Pack, you can also monitor the main query which retrieves the redo information from LogMiner:
CaptureStreamSets007
You will see a different SQL_ID because the filter predicates use literals instead of bind variables (which is not a problem here).
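
Without the Tuning Pack you can still get a rough idea of the cost of that mining query from v$sql – a simple check, and the exact statement text will differ per pipeline:

SQL> select sql_id, executions, round(cpu_time/1e6) cpu_seconds
     from v$sql
     where sql_text like '%V$LOGMNR_CONTENTS%'
     and sql_text not like '%v$sql%';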

Conclusion

This product is very easy to test, so you can do a Proof of Concept within a few hours and test for your context: supported datatypes, operations and performance. By easy to test, I mean: very good documentation, very friendly and responsive graphical interface, very clear error messages,…

 

The article Change Data Capture from Oracle with StreamSets Data Collector appeared first on Blog dbi services.

Jonathan Lewis

dbms_random

In a recent ODC thread someone had a piece of SQL that was calling dbms_random.string(‘U’,20) to generate random values for a table of 100,000,000 rows. The thread was about how to handle the ORA-30009 error (not enough memory for operation) that is almost inevitable when you use the “select from dual connect by level <= n” strategy for generating very large numbers of rows, but this example of calling dbms_random.string() so frequently prompted me to point out an important CPU saving, and then publicise through this blog a little-known fact (or deduction) about the dbms_random.string() function.

If you generate a random string of length 6 using only upper-case letters there are 308,915,776 different combinations (26^6); so if you’re after “nearly unique” values for 100 million rows then a six-character string is probably good enough – it might give you a small percentage of values which appear in a handful of rows, but most of the values are likely to be unique or appear in just two rows. If you want to get closer to uniqueness then 7 characters will do it, and 8 will make it almost certain that you will get a unique value in every row.
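
As a quick sanity check on those counts (308,915,776 / 8,031,810,176 / 208,827,064,576 combinations for 6, 7 and 8 upper-case characters respectively), you can let the database do the arithmetic:

select power(26,6), power(26,7), power(26,8) from dual;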

So if you want “nearly unique” and “random 20 character strings” it’s probably sufficient to generate random strings of 6 to 8 characters and then rpad() them up to 20 characters with spaces – the saving in CPU will be significant; roughly a factor of 3 (which is going to matter when you’re trying to generate 100 million rows). As a little demo I supplied the OP with a script to create a table of just one million random strings – first of 20 random characters, then of 6 random characters with 14 spaces appended. The run time (mostly CPU) dropped from 1 minute 55 seconds to 41 seconds.
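
The original script isn’t reproduced here, but a minimal sketch of that comparison might look something like this (the table names are mine, and I’ve kept the row count at one million):

create table t_full_random as
select dbms_random.string('U',20) v20
from   dual
connect by level <= 1e6
;

create table t_padded_random as
select rpad(dbms_random.string('U',6),20) v20
from   dual
connect by level <= 1e6
;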

Why is there such a difference? Because to generate a random string of 6 characters Oracle generates a random string of one character six times in a row and concatenates them. The difference between 6 calls and 20 calls per row gives you that factor of around 3. For a quick demo, try running the following anonymous PL/SQL block:

rem
rem     Script:         random_speed.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2010
rem

begin
        dbms_random.seed(0);
        dbms_output.put_line(dbms_random.string('U',6));
        dbms_output.new_line;

        dbms_random.seed(0);
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
end;
/

Here are the results I got from instances of 12.1.0.2, 12.2.0.1, and 18.1.0.0 (on LiveSQL):


BVGFJB
B
V
G
F
J
B

I haven’t shown the tests for all the possible dbms_random.string() options but, unsurprisingly, changing the test to use the ‘L’ (lower case alpha) option produces the same effect (and the same 6 letters changed to lower case). The same effect, with different characters, also appeared using the ‘A’ (mixed case alpha), ‘X’ (uppercase alphanumeric) and ‘P’ (all printable characters) options.

I haven’t considered the effect of using a multi-byte character set – maybe Oracle calls its random number generator once per byte rather than once per character. The investigation is left as an exercise to the interested reader.

tl;dr

When generating a very large number of random strings – keep the “operational” part of the string as short as you can and leave the rest to be rpad()‘ed.

pete

Need Help with Oracle Security GDPR Training and Services

I talked here a few days ago about GDPR in general and I also published my slides from my talk GDPR for the Oracle DBA . We have been helping clients secure data in their Oracle databases and training people....[Read More]

Posted by Pete On 09/06/18 At 04:33 PM

morten

Creating complete synthetic test data sets from production data

How easy could it be to generate a complete set of test data from existing tables? To make sure that the data looks like production but in no way is a copy or scramble of production data. Instead it is synthetically created to look like what your production data looks like: same types, same data domain, same coherence between values, same data distribution, and also maintaining the relationships (foreign keys) between all the tables you are simulating.

