The OMS Patcher is a newer patching mechanism specifically for the OMS (I know, the name kind of gave it away…). Although it shares a number of similarities with Oracle’s infamous OPatch, I’ve been spending a lot of time on OTN’s support forums and via email assisting folks as they apply the first system patch to 22.214.171.124.0. Admit it, we know how much you like patching…
The patch we’ll be working with is the following:
./emctl stop oms
/u01/app/oracle/13c/OMSPatcher/omspatcher   <-- location of the OMSPatcher executable

$ORACLE_HOME should be set to the OMS home, and the omspatcher variable should point to the OMSPatcher executable:

export omspatcher=$OMS_HOME/OMSPatcher/omspatcher
export ORACLE_HOME=/u01/app/oracle/13c
$ omspatcher apply -analyze -property_file
I’d recommend running the following instead, a simplified command that will result in success if you’ve set up your environment correctly:
If this returns with a successful test of your patch, simply remove the “-analyze” flag from the command and run it again to apply the patch:
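Pulling those steps together, the end-to-end sequence looks roughly like this. This is a sketch only: the paths match the example environment above, and the patch staging directory is a placeholder you’d replace with wherever you unzipped the patch.

```shell
# Sketch: adjust paths and the patch staging directory for your environment.
export ORACLE_HOME=/u01/app/oracle/13c
export OMS_HOME=$ORACLE_HOME
export omspatcher=$OMS_HOME/OMSPatcher/omspatcher

$OMS_HOME/bin/emctl stop oms          # stop the OMS first

cd /u01/app/oracle/patches/22920724   # placeholder: your unzipped patch directory
$omspatcher apply -analyze            # dry run: verify the environment and patch
$omspatcher apply                     # the real apply, once the analyze succeeds
```

The analyze pass is cheap insurance: it walks the same checks as the real apply without touching the OMS home.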
You’ll be asked a couple of questions, so be ready with the information, including verifying that you can log into your Weblogic console.
Verify that the WebLogic domain URL and username are correct (or type in the correct values), then enter the WebLogic password.
Choose to apply the patch by answering “y”.
The patch should then proceed.
The output of the patch will look like the following:
OMSPatcher log file: /u01/app/oracle/13c/cfgtoollogs/omspatcher/22920724/omspatcher_2016-04-29_15-42-56PM_deploy.log

Please enter OMS weblogic admin server URL(t3s://adc00osp.us.oracle.com:7102):>
Please enter OMS weblogic admin server username(weblogic):>
Please enter OMS weblogic admin server password:>

Do you want to proceed? [y|n]
y
User Responded with: Y

Applying sub-patch "22589347" to component "oracle.sysman.si.oms.plugin" and version "126.96.36.199.0"...
Applying sub-patch "22823175" to component "oracle.sysman.emas.oms.plugin" and version "188.8.131.52.0"...
Applying sub-patch "22823156" to component "oracle.sysman.db.oms.plugin" and version "184.108.40.206.0"...

Log file location: /u01/app/oracle/13c/cfgtoollogs/omspatcher/22920724/omspatcher_2016-04-29_15-42-56PM_deploy.log

OMSPatcher succeeded.
Note the sub-patch information. It’s important to know that this is contained in the log, for if you needed to roll back a system patch, it must be done sub-patch by sub-patch, using the identifiers listed here.
If you attempted to rollback the system patch, using the system patch identifier, you’d receive an error:
$ /u01/app/oracle/13c/OMSPatcher/omspatcher rollback -id 22920724 -analyze

OMSPatcher Automation Tool
Copyright (c) 2015, Oracle Corporation. All rights reserved.
......
"22920724" is a system patch ID. OMSPatcher does not support roll back with system patch ID.
OMSRollbackSession failed: "22920724" is a system patch ID. OMSPatcher does not support roll back with system patch ID.
Once the system patch has completed successfully, you’ll need to apply the agent patch. Best practice is to use a patch plan to apply it to one agent, make that agent the current gold agent image, and then apply the image to all the agents subscribed to it. If you need more information on how to use gold agent images, just read up on it in this post.
Copyright © DBA Kevlar [EM13c- Applying System Patches with the OMS Patcher], All Rights Reserved. 2016.
The second edition of Cary Millsap’s MASTERING ORACLE TRACE DATA (MOTD) is finally available. You can order it through amazon.com. I had the pleasure not only of reviewing MOTD while Cary was working on it, but also of writing a foreword that summarizes what I think about the book. So, if you are asking yourself whether you should buy MOTD, here is my opinion/foreword…
In late 1999, through an Oracle Support note, I had my first encounter with extended SQL trace. From the beginning, it was clear that the information it provided was essential for understanding what an application is doing when it is connected to Oracle Database. At the same time, I was very disappointed that no tool was available for analyzing extended SQL trace files for the purpose of leveraging their content. But even worse, at that moment the widespread tuning methods were more similar to black art than computer science. As Cary would say… it was an age of darkness.
In 2003, when O’Reilly published Optimizing Oracle Performance (by Cary Millsap with Jeff Holt), the story changed. In fact, that book introduced a simple (but not simplistic) method, based on extended SQL trace, for optimizing the performance of Oracle Database applications: Method R. The four-step method was so concise that it could be summarized in eleven lines of text (page 20 of Optimizing Oracle Performance). Despite that, Method R was (and is) extremely powerful. It has been a game changer.
The remarkable achievement of the book you are reading is not only to give you a distilled and enhanced version of all the key concepts introduced by Optimizing Oracle Performance, but also to cover additional material on fundamental topics like instrumentation. One part that has been enormously enhanced is the coverage of case studies; in fact, about half of the book is devoted to them. Case studies are very important because they make it much easier for you to grasp, through real examples, how to apply what you learned in the theoretical part of the book.
In addition, for users of the Method R Profiler and Method R Tools, the second part is very useful for understanding how to take full advantage of those powerful utilities.

Without any doubt, this book is the definitive guide to mastering Oracle trace data.
Monitoring templates are an essential feature of a basic Enterprise Manager environment, ensuring consistent monitoring across groups and target types. There’s an incredibly vast group of experts in the EM community, and to demonstrate this, Brian Williams from Blue Medora, a valuable partner of Oracle’s in the Enterprise Manager space, is going to provide a guest blog post on how simply and efficiently you can monitor even PostgreSQL databases with EM13c using custom monitoring templates!
Creating and Applying Custom Monitoring Templates in Oracle Enterprise Manager 13c
Guest Blogger: Brian Williams
Oracle Enterprise Manager 13c is a premier database monitoring platform for your enterprise. With Enterprise Manager 13c, users have access to many database-level metric alerting capabilities, but how do we standardize these threshold values across your database environment? The answer is simple: by creating and deploying Oracle Enterprise Manager’s monitoring templates.
Monitoring templates allow you to standardize monitoring across your enterprise: you specify the monitoring settings and metric thresholds once, then apply them to your monitored targets. You can save, edit, and apply these templates across multiple targets or groups. A monitoring template is created for a particular target type and can only be applied to targets of that type. A monitoring template holds configurable values for metrics, thresholds, metric collection schedules, and corrective actions.

Today we are going to walk through the basic steps of creating a custom monitoring template and applying that template to selected database targets. In this example, I will be creating templates for my newly added PostgreSQL database targets monitored with the Oracle Enterprise Manager Plugin for PostgreSQL from Blue Medora.
To get started, log in to your Oracle Enterprise Manager 13c Cloud Control Console. Navigate to the Enterprise menu, select Monitoring, and then Monitoring Templates. From this view, we can see a list of all monitoring templates on the system. To begin creating a new monitoring template, select Create from this view. If you are not logged in as a super admin account, you may need to be granted the resource privilege Create Monitoring Template.
Figure 1 – Monitoring Templates Management Page
From the Create Monitoring Template page, select the Target Type radio button. In the Target Type Category drop-down, select Databases. In the Target Type drop-down, select PostgreSQL Database, or the target type of your choice. Click Continue.
The next screen presented will be the Create Monitoring Template page. Name your new template, give a description, and then click the Metric Thresholds tab. From the Metric Thresholds tab, we can begin defining our metric thresholds for our template.
You will be presented with many configurable metric thresholds. Find your desired metrics and from the far right column named Edit, click the Pencil Icon to edit the collection details and set threshold values. After setting the threshold values, click Continue to return to the Metric Thresholds view and continue to configure additional metric thresholds as needed. After all metrics have been configured, click OK to finish the creation of the monitoring template.
The final step to make full use of your newly created template is to apply it to your selected target databases. From the Monitoring Templates screen, highlight your template, select Actions, and then Apply. Choose the apply option that completely replaces all metric settings on the target, so that only the metrics configured in your template are used. Click the Add button and select all of the database targets you want the template applied to. After the targets are added to the list, click Select All to mark them, then click OK to apply the template. The deployment can be tracked through the Pending, Passed, and Failed numbers in the Apply Status box on the Monitoring Templates page.
Figure 2 – Apply Monitoring Template to Destination Targets
Now that I have the newly created template applied, I can navigate back to my database target home page and view top-level critical alerts based on my configurations.
Figure 3 – Target Home Page and PostgreSQL Overview
Although your database targets will eventually alert with issues, there is a solution available to give you at-a-glance visibility into PostgreSQL high availability via replication monitoring; check out the Oracle Enterprise Manager Plug-in for PostgreSQL by visiting Blue Medora’s website for product and risk-free trial information. For more walkthroughs on creating and applying monitoring templates, refer to the Enterprise Manager Cloud Control Administrator’s Guide, Chapter 7 Using Monitoring Templates.
Brian Williams is a Solutions Architect at Blue Medora specializing in Oracle Enterprise Manager and VMware vRealize Operations Manager. He has been with Blue Medora for over three years, also holding positions in software QA and IT support. Blue Medora creates platform extensions designed to provide further visibility into cloud system management and application performance management solutions.
Copyright © DBA Kevlar [Blue Medora's Brian Williams Blogs About Custom Monitoring Templates], All Rights Reserved. 2016.
The OTN database forum supplied a little puzzle a few days ago – starting with the old, old, question: “Why is the plan with the higher cost taking less time to run?”
The standard (usually correct) answer to this question is that the optimizer doesn’t know all it needs to know to predict what’s going to happen, and even if it had perfect information about your data the model used isn’t perfect anyway. This was the correct answer in this case, but with a little twist in the tail that made it a little more entertaining. Here’s the query, with the two execution plans and the execution statistics from autotrace:
SELECT /*+ INDEX(D XPKCLIENT_ACCOUNT) */
       E.ECID, A.acct_nb
FROM   client_account d, client e, account a
WHERE  A.acct_nb = '00000000000000722616216'
AND    D.CLNT_ID = E.CLNT_ID
AND    D.ACCT_ID = A.ACCT_ID;
Plan (A) with a full tablescan of client_account – cost 808, runtime 1.38 seconds, buffer gets 17,955
-------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                |     1 |    59 |   808 (14)| 00:00:10 |
|   1 |  NESTED LOOPS                   |                |     1 |    59 |   808 (14)| 00:00:10 |
|   2 |   NESTED LOOPS                  |                |     1 |    59 |   808 (14)| 00:00:10 |
|*  3 |    HASH JOIN                    |                |     1 |    42 |   806 (14)| 00:00:10 |
|   4 |     TABLE ACCESS BY INDEX ROWID | ACCOUNT        |     1 |    30 |     5  (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN           | XAK1ACCOUNT    |     1 |       |     4  (0)| 00:00:01 |
|   6 |     TABLE ACCESS FULL           | CLIENT_ACCOUNT |  9479K|   108M|   763 (10)| 00:00:09 |
|*  7 |    INDEX UNIQUE SCAN            | XPKCLIENT      |     1 |       |     1  (0)| 00:00:01 |
|   8 |   TABLE ACCESS BY INDEX ROWID   | CLIENT         |     1 |    17 |     2  (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      17955  consistent gets
          0  physical reads
          0  redo size
        623  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
Plan (B) with an index fast full scan on a client_account index – cost 1,190, runtime 0.86 seconds, buffer gets 28696
----------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                   |     1 |    59 |  1190  (8)| 00:00:14 |
|   1 |  NESTED LOOPS                   |                   |     1 |    59 |  1190  (8)| 00:00:14 |
|   2 |   NESTED LOOPS                  |                   |     1 |    59 |  1190  (8)| 00:00:14 |
|*  3 |    HASH JOIN                    |                   |     1 |    42 |  1188  (8)| 00:00:14 |
|   4 |     TABLE ACCESS BY INDEX ROWID | ACCOUNT           |     1 |    30 |     5  (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN           | XAK1ACCOUNT       |     1 |       |     4  (0)| 00:00:01 |
|   6 |     INDEX FAST FULL SCAN        | XPKCLIENT_ACCOUNT |  9479K|   108M|  1145  (5)| 00:00:13 |
|*  7 |    INDEX UNIQUE SCAN            | XPKCLIENT         |     1 |       |     1  (0)| 00:00:01 |
|   8 |   TABLE ACCESS BY INDEX ROWID   | CLIENT            |     1 |    17 |     2  (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      28696  consistent gets
          0  physical reads
          0  redo size
        623  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
Note, particularly, that the two plans are the same apart from operation 6 where a full tablescan changes to an index fast full scan, predicting the same number of rows but with an increase of 50% in the cost; the increase in cost is matched by an increase in the reported workload – a 60% increase in the number of consistent reads and no disk reads or recursive SQL in either case. Yet the execution time (on multiple repeated executions) dropped by nearly 40%.
So what’s interesting and informative about the plan?
The cost of a tablescan or an index fast full scan is easy to calculate; broadly speaking it’s “size of object” / “multiblock read count” * k, where k is some constant relating to the hardware capability. The costs in these plans and the autotrace statistics seem to be telling us that the index is bigger than the table, while the actual run times seem to be telling us that the index has to be smaller than the table.
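As a quick sanity check on that formula, we can compare the ratio of the two reported scan costs with the ratio of the two object sizes taken from the 10053 trace extract further down (18,110 table blocks, 28,543 index leaf blocks). When comparing two scans on the same system, the multiblock read count and the constant k cancel out, so the cost ratio should track the size ratio:

```python
# Figures copied from the execution plans above and the 10053 trace below.
table_blocks, table_scan_cost = 18_110, 763     # TABLE ACCESS FULL
index_blocks, index_scan_cost = 28_543, 1_145   # INDEX FAST FULL SCAN

# cost ~ blocks / mbrc * k, so for two scans the mbrc and k cancel:
size_ratio = index_blocks / table_blocks
cost_ratio = index_scan_cost / table_scan_cost

print(f"size ratio: {size_ratio:.2f}, cost ratio: {cost_ratio:.2f}")
```

Both ratios come out at roughly 1.5, which confirms the costing is driven by object size and makes the faster runtime of the bigger scan all the more puzzling.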
It’s easy for an index to be bigger than its underlying table, of course; for example, if this table consisted of nothing but two short columns the index could easily be bigger (even after a rebuild) because it would be two short columns plus a rowid. If that were the case here, though, we would expect the time to fast full scan the index to be higher than the time to scan the table.
So two thoughts crossed my mind as I looked at operation 6:
- Mixing block sizes in a database really messes up the optimizer costing, particularly for tablescans and index fast full scans. Maybe the table had been built in a tablespace using 32KB blocks while the index had been built in a tablespace using the more common 8KB blocksize – I didn’t want to start working out the arithmetic but that might be just enough to produce the contradiction.
- Maybe the table was both bigger AND smaller than the index – bigger because it held more data, smaller because it had been compressed. If so then the difference in run-time would be the overhead of decompressing the rows before projecting and comparing the data.
Conveniently the OP has included an extract from the 10053 trace:
Table Stats::
  Table: CLIENT_ACCOUNT  Alias: D
    #Rows: 9479811  #Blks: 18110  AvgRowLen: 71.00  ChainCnt: 0.00

  Column (#1): CLNT_ID(
    AvgLen: 6 NDV: 1261035 Nulls: 0 Density: 0.000001 Min: 0 Max: 4244786
    Histogram: HtBal  #Bkts: 254  UncompBkts: 254  EndPtVals: 239

  Column (#2): ACCT_ID(
    AvgLen: 6 NDV: 9479811 Nulls: 0 Density: 0.000000 Min: 1 Max: 22028568
    Histogram: HtBal  #Bkts: 254  UncompBkts: 254  EndPtVals: 255

Index Stats::
  Index: XPKCLIENT_ACCOUNT  Col#: 1 2
    LVLS: 2  #LB: 28543  #DK: 9479811  LB/K: 1.00  DB/K: 1.00  CLUF: 1809449.00
Note that the index is called xpkclient_account – which suggests “primary key” – and the number of distinct keys in the index (#DK) matches the number of rows in the table (#Rows). The index and table stats seem to be consistent, so we’re not looking at a problem of bad statistics.
Now to do some simple (ballpark) arithmetic. For the table, we can check whether “rows * average row length / 8K = blocks”. We can read the numbers directly from the trace file: 9,500,000 * 71 / 8,000 = 84,000. That’s wrong by a factor of about 4 (so maybe it’s a 32K block size, and maybe I could rule out that possibility by including more detail in the arithmetic – like allowing properly for the block header, row overheads, pctfree etc.).
For the index – we believe it’s the primary key, so we know the number of rows in the index – it’s the same as the number of distinct keys. As for the length of an index entry, we have the index definition (col#: 1 2) and we happen to have the column stats for those columns, so we know their average length. Allowing for the rowid and length bytes, we can say that the average index entry is (6 + 1) + (6 + 1) + 6 = 20 bytes. So the number of leaf blocks should be roughly 9,500,000 * 20 / 8,000 = 23,750. That’s close enough given the reported 28,543 and the fact that I haven’t bothered to worry about row overheads, block overheads and pctfree.
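The two ballpark calculations can be checked in a few lines (8,000 bytes is used as a round-number block size, and block/row overheads and pctfree are ignored, as above):

```python
rows = 9_500_000  # rounded from the trace's 9,479,811

# Table: rows * average row length / block size -> expected block count
table_blocks_est = rows * 71 / 8_000
print(int(table_blocks_est))   # ~84,000 versus the reported 18,110: a factor of ~4 out

# Index entry: two columns of (AvgLen 6 + 1 length byte) each, plus a 6-byte rowid
entry_bytes = (6 + 1) + (6 + 1) + 6
leaf_blocks_est = rows * entry_bytes / 8_000
print(int(leaf_blocks_est))    # 23,750 versus the reported 28,543: close enough
```

The table estimate being wildly high while the index estimate lands close is exactly the asymmetry the next paragraph explains.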
The arithmetic provides an obvious guess – which turned out to be correct: the table is compressed, the index isn’t. The optimizer hasn’t allowed for the CPU cost of decompressing the compressed rows, so the time required to decompress 9.5M rows doesn’t appear in the execution plan.
Looking at the column stats, it looks like there are roughly 8 acct_ids for each clnt_id, so it would probably be sensible to compress the primary key index (clnt_id, acct_id) on the first column as this would probably reduce the size of the index by about 20%.
Better still – the client_account table has very short rows – it looks like a typical intersection table with a little extra data carried. Perhaps this is a table that should be an index-organized table with no overflow. It looks like there should also be an index (acct_id, clnt_id) on this table to optimise the path from account to client, and this would become a secondary index – interestingly being one of those rare cases where the secondary index on an IOT might actually be a tiny bit smaller than the equivalent index on a heap table because (in recent versions of Oracle) primary key columns that are included in the secondary key are not repeated in the index structure. (It’s a little strange that this index doesn’t seem to exist already – you might have expected it to be there given the OP’s query, and given that it’s an “obvious” requirement as an index to protect the foreign key.)
The only argument against the IOT strategy is that the table clearly compresses very well as a heap table, so a compressed heap table plus two B-tree indexes might be more cost-effective than an IOT with a single secondary index.
So Uwe Hesse caught my interest when he blogged about how to add your Twitter handle to the time on your taskbar. This is really cool for those of us who present: while we demo, you’ll see our Twitter handle displayed at all times.
I did notice that his instructions, like most instructions I find for things on the Oracle side, are for older versions of Windows or for Mac. I have a Windows 10 machine, and yes, it’s possible, just a few steps different.
There are a number of ways to do this… most of them are long, so I’m going to try to take you through the fewest steps in Windows 10, which for some reason has buried the time settings…
SQL supports three types of join operation. Most developers learn the inner join first, but there are two other join operations you should know about: the outer join and the full outer join. These additional join types let you write what could, in essence, be termed optional joins.
The so-called inner-join is the default. It's the happy path from a theory perspective, and it's the join type most SQL developers learn first. Use it to combine related rows from two or more tables.
For example, perhaps you want to report on all the customers in the AdventureWorks database. You might begin working that business problem by writing the following query:
SELECT c.CustomerID, p.FirstName, p.LastName
FROM Sales.Customer AS c
JOIN Person.Person AS p
  ON c.PersonID = p.BusinessEntityID

CustomerID  FirstName  LastName
----------- ---------- -----------
29485       Catherine  Abel
29486       Kim        Abercrombie
29487       Humberto   Acevedo
...
Implicit in this query is the presumption that all customers are persons. The inner join operation will not return customers who lack a corresponding row in the Person table. Yet there are customers in AdventureWorks without a corresponding row in Person. The implicit presumption that all customers are persons is wrong, and we need some other join operation capable of correctly addressing the business requirement by listing all the customers.
Our solution lies in the outer join, which perhaps ought to have been termed the optional join. Designate a driving table using the keywords LEFT and RIGHT. Then the join to the other table becomes optional. For example, we can report on all customers – whether persons or not – by writing the following left outer join:
SELECT c.CustomerID, p.FirstName, p.LastName
FROM Sales.Customer AS c
LEFT JOIN Person.Person AS p
  ON c.PersonID = p.BusinessEntityID

CustomerID  FirstName  LastName
----------- ---------- -----------
29485       Catherine  Abel
29486       Kim        Abercrombie
29487       Humberto   Acevedo
...
1           NULL       NULL
2           NULL       NULL
3           NULL       NULL
The keywords LEFT JOIN specify the table written to the left as the driving table. That's the Customer table, so we'll get all rows from that table in our result. Whenever possible, those rows are joined to the Person table. Nulls are returned when there are no corresponding Person rows.
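The dropped-versus-kept behaviour is easy to reproduce outside SQL Server. Here’s a minimal, self-contained sketch using Python’s sqlite3 module, with two tiny invented tables standing in for Customer and Person (the table and column names mirror the example; the data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Customer (CustomerID INTEGER, PersonID INTEGER);
    CREATE TABLE Person   (BusinessEntityID INTEGER, FirstName TEXT, LastName TEXT);
    INSERT INTO Customer VALUES (29485, 10), (1, NULL), (2, NULL);
    INSERT INTO Person   VALUES (10, 'Catherine', 'Abel');
""")

# Inner join: customers with no matching person simply disappear.
inner = con.execute("""
    SELECT c.CustomerID, p.FirstName
    FROM Customer AS c
    JOIN Person AS p ON c.PersonID = p.BusinessEntityID
""").fetchall()

# Left join: every customer survives; missing persons come back as NULL.
left = con.execute("""
    SELECT c.CustomerID, p.FirstName
    FROM Customer AS c
    LEFT JOIN Person AS p ON c.PersonID = p.BusinessEntityID
""").fetchall()

print(inner)  # only the matched row (29485, 'Catherine')
print(left)   # the match plus (1, None) and (2, None)
```

Three customers in, one row out of the inner join but three rows out of the left join: the driving table always contributes all of its rows.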
Left and right joins are the same operation. The only difference lies in the order in which you mention the tables in your query. The preceding left join can be rewritten as a right join by mentioning the tables in the opposite order:
SELECT c.CustomerID, p.FirstName, p.LastName
FROM Person.Person AS p
RIGHT JOIN Sales.Customer AS c
  ON c.PersonID = p.BusinessEntityID
It's common to spell out LEFT OUTER JOIN and RIGHT OUTER JOIN. The word OUTER is optional in the syntax, but using it helps remind you and other developers maintaining the code that an optional join is in play.
Both queries in this section give the same result. However, it's customary in practice to write all such joins as left joins. I've yet to encounter a right join in production code. I'm sure they're out there, but in my opinion it's better to stay with left joins and avoid confusing future developers.
Full Outer Joins
It's rare, but it might happen that you want both tables in a join to be optional. Perhaps you want to list all customers, and all persons, joining them where possible, and you want the entire listing in a single report. Enter the full outer join:
SELECT c.CustomerID, p.FirstName, p.LastName
FROM Sales.Customer AS c
FULL JOIN Person.Person AS p
  ON c.PersonID = p.BusinessEntityID

CustomerID  FirstName  LastName
----------- ---------- -----------
29485       Catherine  Abel
29486       Kim        Abercrombie
29487       Humberto   Acevedo
...
1           NULL       NULL
2           NULL       NULL
3           NULL       NULL
...
NULL        Syed       Abbas
NULL        Kim        Abercrombie
NULL        Hazem      Abolrous
...
You won't get the nice groupings that I show if you execute the full outer join in AdventureWorks. I took some license and grouped the rows for the sake of clarity. You will get back the same rows, but you may see them all mixed up rather than neatly grouped.
Results from the preceding full outer join include the following rows:
- All rows from an inner join. These are the first three in the example output, and each row contains column values from both Customer and Person.
- The additional rows that would be returned by a left outer join having Customer as the driving table. These are the second group in the output, and the Person columns are null.
- The additional rows that would be returned by a right outer join having Person as the driving table. These are the final group, and this time the Customer columns are null.
Putting the query into business terms, I would say that it returns all customers and all persons, correlating customer to person whenever possible.
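If your database engine lacks FULL JOIN (older SQLite releases and MySQL, for instance), the same three groups of rows can be produced by a left join unioned with the anti-joined right-hand side. A minimal sketch in Python's sqlite3, again with invented toy data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Customer (CustomerID INTEGER, PersonID INTEGER);
    CREATE TABLE Person   (BusinessEntityID INTEGER, FirstName TEXT, LastName TEXT);
    INSERT INTO Customer VALUES (29485, 10), (1, NULL);
    INSERT INTO Person   VALUES (10, 'Catherine', 'Abel'), (99, 'Syed', 'Abbas');
""")

# FULL OUTER JOIN = LEFT JOIN  UNION ALL  right-hand rows with no match
rows = con.execute("""
    SELECT c.CustomerID, p.FirstName, p.LastName
    FROM Customer AS c
    LEFT JOIN Person AS p ON c.PersonID = p.BusinessEntityID
    UNION ALL
    SELECT NULL, p.FirstName, p.LastName
    FROM Person AS p
    WHERE NOT EXISTS (SELECT 1 FROM Customer AS c
                      WHERE c.PersonID = p.BusinessEntityID)
""").fetchall()

for r in rows:
    print(r)
# One matched row, one customer-only row, one person-only row.
```

The UNION ALL branch contributes exactly the rows a right outer join would add, which is why the combined result matches a full outer join.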
Earlier in the section on inner joins, I pointed out how the inner join implicitly excluded customers who were not also listed as persons. In that particular case, the business requirement was to include those customers, which led to my introducing outer joins.
But it's not always wrong to implicitly exclude results via an inner join. Think about the business requirement to list products having reviews along with the names of reviewers and their ratings. Here's a query to generate that list:
SELECT p.ProductID, pr.ReviewerName, pr.Rating
FROM Production.Product AS p
INNER JOIN Production.ProductReview AS pr
  ON p.ProductID = pr.ProductID

ProductID   ReviewerName  Rating
----------- ------------- ------
709         John Smith    5
937         David         4
937         Jill          2
798         Laura Norman  5
The inner join is a perfectly proper choice. The inner join's implicit exclusion of non-reviewed products aligns with the business requirement to list only those products having reviews.
Inner joins are also reasonable when foreign key constraints are in place to ensure that rows in one table have corresponding rows in another. Such is the case when reporting on the names of sales persons. There is a chain of foreign key constraints that require all sales persons to be employees, and all employees to be persons.
Joins Are Fundamental
Joins are fundamental in SQL, and you'll be hard pressed to find many production queries that are written without at least one join operation. Choose inner joins when the business question you are answering requires corresponding rows in both tables, or when database constraints ensure those same corresponding rows. Write left outer joins when you need to make joins optional in one direction. Question any alleged need for a full outer join, but you will sometimes need them.
If you speak often at conferences, sharing your screen to demo things, this could be helpful:
Throughout your presentation, the audience will be able to see your Twitter handle, reminding them to include it when they tweet about the event. I used to include it in my slides, but this is better because it also works during live demonstrations when no slides are being shown, which is incidentally my favorite way to present.
Now how can you do it? It’s quite easy: open the Windows Control Panel and click on Region and Language, then click on Additional settings:

Then insert your Twitter handle (or any other text you’d like to see on the taskbar) as the AM and PM symbol. Make sure to select a time format with a trailing tt:
That’s it. If you want the font size as large as on the first picture above, that can be done here:
I did that with Windows 7 Professional 64 bit. Hope you find it useful
The important thing to understand is that the data does not change just because it is managed differently. If the data does not change, then the entities and the relationships contained in the data cannot change either. The entities and the relationships between them have not changed since the dawn of time. They were the same in the days of network database management systems, which came before relational database management systems; they stayed the same when object-oriented database management systems came along; and they are the same now that we have NoSQL databases.