Oakies Blog Aggregator

JonathanGennick's picture

Centering Squarespace Buttons

Centering just one call to action button is easily done in Squarespace. But what if you have multiple buttons that you want in a row, centered as a group? That's a harder task, and in this post I present a fun hack to solve it.

The Problem

Imagine writing a post about wheel sizes for mountain bikes. You aim to capture readers' interest and motivate them to dig beyond the surface of a contentious and fad-driven topic. You issue a call to action with a button like this one:

Dig Deeper!

Centering just one button is easy. I've created this button by dropping a Button block onto the page. Centering is as easy as choosing Center from the drop-down list of alignment choices. Job done!

Centering is a more interesting problem when two or more buttons are involved. Change your goal to that of getting readers to make a choice between two up-and-coming sizes that are pushing the traditional 26-inch wheel off the market. Now your call to action involves two buttons:

Go 29er

Choose 650b

The look here might not be what you're after. Both buttons are centered, but they are each centered in their own half of the display width. You prefer the buttons closer together and centered as a group.

A Failed Hack

One hack that almost works is to align the left-side button to the right, and the right-side button to the left. This has the effect of pushing the two buttons together in the center of the content. If you carefully write the button text to the same number of characters for each button, you can fake the effect of centering the buttons as a group. For example:

Go 29er

Go 650b

This approach falls apart fast. Just make your browser window narrower. Try it now. Narrow your window to mimic the size of your mobile phone display. You'll see the buttons stack on top of each other and fly to opposite sides of the content area. The approach is not extensible beyond two buttons, and it's not easy to write button text to the same number of characters and width. A different solution is needed.

A Successful Solution

A solution that works is to take Squarespace's HTML for a Button block, and to use a modified form of that HTML in a Code block to generate as many buttons as you need in a group. Here's how that approach plays out:

  1. Place one button on your page.
  2. Copy the HTML generated for that button.
  3. Paste the HTML into a Code block.
  4. Modify the HTML slightly, and replicate part of it for each button that you need.
  5. Delete the Button block placed in Step 1.

This approach is extensible beyond two buttons, and works well in desktop and mobile views. The buttons generated are indistinguishable from those created using a Button block, because the very same formatting is applied. 

Following is the HTML behind the Go 29er button shown just above. You can find the HTML by pressing CTRL-U to view this page's HTML source. Then search on the text "Go 29er". Find the second occurrence, and back up just a bit until you find the first of the enclosing div elements for the Button block.

 class="sqs-block button-block sqs-block-button" data-block-type="53" id="block-yui_3_10_1_1_1397998943922_173603">
class="sqs-block-content">
class="sqs-block-button-container--right">
href="http://twentynineinches.com/" class="sqs-block-button-element--medium sqs-block-button-element" >Go 29er>
>
>
>

Pasting this HTML into a Code block is the trick. Drop a Code block onto your page. Paste in the above HTML. Then make the following modifications:

  1. Remove sqs-block from the list of classes in the outermost div element. Your Code block will itself be an sqs-block. You don't want to nest one sqs-block inside another.
  2. Remove any id attributes that are present. Alternatively, you can replace any Squarespace-generated id values with ones that you make up yourself. 
  3. Remove the class attribute from the innermost div element. You'll be centering the buttons yourself, so it's best to avoid referencing the Squarespace-generated alignment class. 
  4. Write a style attribute into the a tag and specify whatever margin you want for your buttons. I've specified margins of 1em in the example to follow.
  5. Write a style attribute into the innermost div element to center the text within that element. The effect will be to center the enclosed buttons as a group. I've specified text-align: center in the following example.
  6. Copy and paste the a tag to create as many buttons as you need.
  7. Modify your copied a tags to point to the correct URLs.

Following is my result. There are two a tags, one for 29er and one for 650b. Each tag points to a URL that is appropriate for the choice being made. The margins of 1em provide visual separation between the buttons when they sit side by side, and also when they stack up in the mobile view. The innermost div centers the enclosed buttons as a group.

 class="button-block sqs-block-button" data-block-type="53">
class="sqs-block-content">
style="text-align: center;">
style="margin: 1em;"
href="http://twentynineinches.com/" class="sqs-block-button-element--medium sqs-block-button-element" >Go 29er
>
style="margin: 1em;"
href="http://650bpalace.blogspot.com/" class="sqs-block-button-element--medium sqs-block-button-element" >Go 650b
>
>
>
>

You can just begin with the HTML shown here. Steps 1 through 5 are done for you, and you can focus on Steps 6 and 7. Figure 1 shows the above code as it should be pasted into a Code block. 

Figure 1. The solution code as it appears when pasted into a Squarespace Code block

Copy and paste the HTML into a Code block. Modify the button text and link targets. Copy and paste the a tags to make as many buttons as you need.

The solution works because it places all buttons into the same div element where they can be centered as a group. Here's the final result:

Go 29er
Go 650b

You can adjust the margin settings as needed to get the precise spacing that you like. The buttons are identical to those generated by the Button block. You can narrow your browser window, and these will eventually stack up vertically while remaining centered. 

Some Formatting

The buttons in this article are medium-size. You'll see the following class assigned to the a elements:

sqs-block-button-element--medium

You can also create small and large buttons. Simply specify small or large in the class name. For example:

sqs-block-button-element--small
sqs-block-button-element--large

Other aspects of button formatting are controllable from the Style Editor. Click the paintbrush icon to get there, and scroll down to the Button Block settings. You'll be able to specify rounded corners, pill-shaped buttons, and probably a few other options depending upon your chosen template.

And Sew On

The fundamental problem at the beginning of this article is that each button is being centered within its own, independent space. That's the behavior you get when you drop Button blocks onto a page, and it's perfectly reasonable behavior too. Sometimes though, you do want to center a block of buttons as a single unit. That's also reasonable, and the hack in this article provides an approach that works toward that end.

Note: You can choose to write your buttons entirely from scratch. Kris Black shows how to do so in his article Create a Simple Call to Action Button. An advantage to the approach I describe is that you can still use the Style Editor to style the resulting buttons, and they will be consistent in appearance with any that you create through the Squarespace Button block. An advantage to Kris's approach is that you can style your buttons without being limited to the built-in choices.


Learn CSS for Squarespace

Learn CSS for Squarespace is a short, five-chapter introduction to working with CSS on the Squarespace platform. The book is delivered as a mobi file that is readable from the Kindle Fire HDX or similar devices. A complimentary sample chapter is available. Try before you buy.

oraclebase's picture

The Amazing Spider-Man 2

I’ve just got back from watching The Amazing Spider-Man 2.

Wow, that is one seriously long film! At 2 hours and 22 minutes, it’s a good 1 hour and 22 minutes too long…

I guess there are two sides to this film:

  1. Action Scenes: During the action scenes this film is brilliant. Really over-the-top stuff. Bright, flashy and really cool.
  2. Everything Else: I don’t give a crap about character development in an action film. This re-imagining of the franchise is turning out to be even more whiny than the Tobey Maguire version.

The film really could have been edited down massively and I would have come out agreeing with the film title. As it was, it’s “The Amazingly Long and Whiny Spider-Man Too!”

Cheers

Tim…


oraclebase's picture

Running scripts in CDBs and PDBs in Oracle Database 12c

You’ve been sold on the whole concept of the multitenant option in Oracle 12c and you are launching full steam ahead. Your first database gets upgraded and converted to a PDB, so you start testing your shell scripts and bang! Broken! Your company uses CRON and shell scripting all over the place and the multitenant architecture has just gone and broken the lot in one fell swoop! I think this will end up being a big shock to many people.

I’ve been talking about this issue with a number of people since the release of Oracle 12c. Bryn Llewellyn did a session on “Self-Provisioning Pluggable Databases Using PL/SQL” at last year’s UKOUG, which covered some of these issues. More recently, I spent some time speaking to Hans Forbrich about this when we were on the OTN Yathra 2014 Tour.

Today, I put down some of my thoughts on the matter in this article.
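
To make the breakage concrete, here’s a minimal sketch (the PDB name pdb1 is hypothetical; the article covers the real details). A legacy script that connects with “/ as sysdba” now lands in the root container, so it has to switch to the PDB explicitly or connect through the PDB’s service:

-- Legacy assumption: this lands in the database the script manages.
-- Under multitenant it lands in the root container instead.
CONNECT / AS SYSDBA
SHOW CON_NAME                         -- CDB$ROOT, not the application PDB

-- Fix 1: switch containers within the session (pdb1 is a hypothetical name):
ALTER SESSION SET CONTAINER = pdb1;
SHOW CON_NAME                         -- now PDB1

-- Fix 2: connect via the PDB's service name from the start:
-- sqlplus system@//dbhost:1521/pdb1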

Like most things to do with Oracle 12c, I’m sure my thoughts on the subject will evolve as I keep using it. As my thoughts evolve, so will the article. :)

Cheers

Tim…


Jonathan Lewis's picture

Coincidences

I had another of those odd timing events today that make me think that Larry Ellison has access to a time machine. I found (yet another example of a) bug that had been reported on MoS just a few days before it appeared on an instance I was running. How is it possible that someone keeps doing things that I’m doing, but just a few days before I do them!

For no good reason I happened to browse through a load of background trace files on an 11.2.0.4 instance and found the following in an “m000” file:

*** SERVICE NAME:(SYS$BACKGROUND) 2014-04-19 08:55:20.617
*** MODULE NAME:(MMON_SLAVE) 2014-04-19 08:55:20.617
*** ACTION NAME:(Auto-Purge Slave Action) 2014-04-19 08:55:20.617

*** KEWROCISTMTEXEC - encountered error: (ORA-06525: Length Mismatch for CHAR or RAW data
ORA-06512: at "SYS.DBMS_STATS", line 29022
ORA-06512: at line 1
)
  *** SQLSTR: total-len=93, dump-len=93,
      STR={begin dbms_stats.copy_table_stats('SYS', :bind1, :bind2, :bind3, flags=>1, force=>TRUE); end;}

Before trying to track down what had gone wrong I did a quick check on MoS, searching for “copy_table_stats” and “29022”, and found bug 17079301 – fixed in 12.2 and 12.1.0.2, with a patch for 12.1.0.1 (and some back-ports for 11.2.0.4). The description of the bug in the note was basically: “it happens”.

I may get around to looking more closely at what’s gone wrong, but as an initial thought I’m guessing that, even though the action name is “auto-purge slave action”, this may be something to do with adding a partition to some of the AWR history tables and rolling stats forward – so at some point I’ll probably start by checking for partitions with missing stats in the SYS schema.
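
As a starting point for that check, a query along these lines would list SYS-owned partitions that have never been analysed (just a sketch of the idea):

-- SYS-owned partitions with no statistics recorded
SELECT table_name, partition_name
FROM   dba_tab_partitions
WHERE  table_owner = 'SYS'
AND    last_analyzed IS NULL
ORDER  BY table_name, partition_name;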

The bug note, by the way, was published (last updated, on second thoughts) on 14th April 2014 – just 5 days before I first happened to spot the occurrence of the bug.

dbakevlar's picture

REPVFY Diagnostics, Part I of…Well, A Zillion

There are a number of Verification Utilities for Enterprise Manager 12c (EM12c), and I’ve written about them before, but today I’m going to start on the Repository Verification Utility (REPVFY).  This will be an ongoing series, as there are so many valuable features rolled into the utility, with new ones added as new patches and releases happen.

For this post, I’m going to be reviewing a level 2 diagnostics report from a new repository database.  This is going to offer us all kinds of valuable data and should be considered by any EM12c administrator when doing a health check of their EM environment.

Inspecting the files from the output, you’ll note that they don’t require a lot of space, even when unzipped. This allows the admin to retain historical information about the EM12c environment.

Note that the files are broken up by type of review, and many of them are marked with a timestamp.

For this first blog post, I’m going to jump right into the first log alphabetically, the advisors log file, which, as you might guess, advises on performance.  The interesting thing is that, as small as this file is, it is actually made up of a number of reports that would take a surprising amount of work to run manually (see the sketch after the list below).

The Advisors Log is comprised of the following reports:

  • ADDM Report
  • Space Advisor Report
  • ASH Report
  • AWR Report
  • SGA Resizing Report
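
For a sense of that manual work, three of these have standard SQL*Plus scripts shipped under $ORACLE_HOME/rdbms/admin (a sketch; the Space Advisor and SGA resizing sections don’t have single-script equivalents):

-- Run from SQL*Plus as a DBA; each script prompts for a snapshot range.
@?/rdbms/admin/addmrpt.sql    -- ADDM report
@?/rdbms/admin/ashrpt.sql     -- ASH report
@?/rdbms/admin/awrrpt.sql     -- AWR report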

The report then ends with the CBO Repository information before closing out this one of many log files generated by a single, simple execution from the command line.  I would definitely read through to this final section to understand the stats collection in the EM repository database a bit more clearly; I touch on it later in this post.

Advisor Log Header

When viewing the log file, the first thing it tells you is where the source SQL for the report comes from.  This is helpful if you want to inspect the SQL used.

Next is information about the release versions involved.  This is standard for most of the log files in the repvfy output:

 

COMPONENT          INFO
------------------ ----------------------------------------
EMDIAG Version     2014.0318
Repository Version 12.1.0.3.0
Database Version   11.2.0.3.0
Test Version       2014.0402
Repository Type    CENTRAL

The next sections of the report go over the repository database, reporting on the type of database (single instance or RAC, protected, version, etc.) and other pertinent information.  This is output from multiple queries that I would like to see compiled into one result set, but it’s still good to have this data, no matter what the format.

We then jump into SGA information and init/sp parameters.  What surprised me is that the next section was dedicated to system statistics and the OS stats data.  This data is often only updated when a change to hardware has been made or when parallelism is implemented, and its placement in an upper level of the report indicates that the author found this information valuable to the performance of the repository database.  (Not agreeing or disagreeing here; I found it more curious than anything, as system stats can be important in large databases, which the EM repository rarely is.)

--- Database OS statistics
SNAME                          PNAME                               PVAL1 PVAL2
------------------------------ ------------------------------ ---------- ----------------------------------------
SYSSTATS_INFO                  STATUS                                    COMPLETED
SYSSTATS_INFO                  DSTART                                    03-29-2014 02:00
SYSSTATS_INFO                  DSTOP                                     03-29-2014 02:00
SYSSTATS_INFO                  FLAGS                                   1
SYSSTATS_MAIN                  CPUSPEEDNW                           1405
SYSSTATS_MAIN                  IOSEEKTIM                              18
SYSSTATS_MAIN                  IOTFRSPEED                           4096
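
For reference, this system statistics data can be pulled directly with a simple query (a sketch; REPVFY’s own SQL, referenced in the log header, may differ):

-- System statistics as stored in the data dictionary
SELECT sname, pname, pval1, pval2
FROM   sys.aux_stats$
ORDER  BY sname, pname;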

 

ADDM Report

Next the utility runs an ADDM report for the previous hour’s snapshot.  Now, ADDM reports rarely receive a lot of love from DBAs.  It’s not the top info they include or the findings; it’s the recommendations that commonly leave DBAs less than thrilled.  How often do we read one that doesn’t recommend some type of hardware solution to a problem that we know, even with hardware applied, will only rear its ugly head again a few months to a year later?  If there was one recommendation I could make for this valuable report, it would be to take those hardware recommendations out; I think you’d see a lot more DBAs embracing ADDM reports.

The report for the repository database is true to form, only this time it’s Oracle’s objects being offered hardware solutions!

ECMAM_PK                                  0          0          0          0
Move object SYSMAN.ECMAM_PK to flash storage, estimated I/O wait reduction is 10 millisec
EM_CS_SCORE_HIST                          0          0          0          0
Move object SYSMAN.EM_CS_SCORE_HIST partition 2014-03-25 00:00 to flash storage, estimated I/O wait reduction is 125 millisec.
EM_EVENT_MSGS_PK                          0          0          0          0
Move object SYSMAN.EM_EVENT_MSGS_PK partition OPEN to flash storage, estimated I/O wait reduction is 5 millisec.
EM_METRIC_KEY_GUID_IDX                    0          0          0          0
Move object SYSMAN.EM_METRIC_KEY_GUID_IDX to flash storage, estimated I/O wait reduction is 336 millisec

Now who is to say that a savings of 10-336ms is worth the cost of flash storage?  I’m not, but this is just more proof to me that the recommendations section is still not where most DBAs would like to see the ADDM report.

Further sections of the report do live up to the hype and offer similar value to what we see in AWR for the EM repository database:

Top Background Events            DB/Inst: OEM/OEM  (Apr 02 14:00 to 15:00)
Avg Active
Event                               Event Class     % Activity   Sessions
----------------------------------- --------------- ---------- ----------
os thread startup                   Concurrency          47.91       0.03
CPU + Wait for CPU                  CPU                   9.77       0.01

 

Clearly showing that it is Concurrency causing the “os thread startup” percentage of activity is important.  This helps keep the administrator on the right track when investigating the wait event.

The following section goes through the code and actions involved in the last hour’s ADDM activity.  You can see if there is anything causing an issue, but as we can see in our repository, there isn’t much of anything; it’s nice and quiet. :)

Top Service/Module               DB/Inst: OEM/OEM  (Apr 02 14:00 to 15:00)
Service        Module                   % Activity Action               % Action
-------------- ------------------------ ---------- ------------------ ----------
SYS$USERS      DBMS_SCHEDULER                 8.84 EM_REPOS_SEV_EVAL        2.79
EM_SYSTEM_MEMBER_S       1.40
EM_TASK_WORKER_3         1.40
MGMT_COLLECTION.COLLECTI       8.37 Task:881 Repositor       4.19
Task:37 Performanc       1.40
OEM.PbsSystemPool              6.51 PingHeartBeatWork:       2.33
LoaderWorker             1.40
GATHER_SYSMAN_STATS            2.33 UNNAMED                  0.93

 

Next it reports on sessions, and then on the percentage of activity allocated to different SQL types and to different phases of SQL processing (what percentage of activity was consumed by hard parsing vs. SQL or PL/SQL execution, and so on).

Just like an ADDM report, the top SQL statements, including SQL_ID and percentage of activity, are reported:

Sampled #
SQL ID             Planhash        of Executions     % Activity
----------------------- -------------------- -------------------- --------------
Event                          % Event Top Row Source                    % RwSrc
------------------------------ ------- --------------------------------- -------
7mqykaw68fkhb           2691428406                    6           4.19
CPU + Wait for CPU                4.19 TABLE ACCESS - FULL                  2.79
SELECT COUNT(*) FROM DBA_SCHEDULER_JOB_RUN_DETAILS WHERE OWNER = MGMT_USER.GET_R
EPOSITORY_OWNER AND JOB_NAME = UPPER(:B3 ) AND STATUS = 'SUCCEEDED' AND ACTUAL_S
TART_DATE >= :B2 AND ACTUAL_START_DATE < :B1

 

The report will go through this for top row source, literals, actions, etc.  It will then jump into the top procedures that were called:

PLSQL Entry Subprogram                                            % Activity
----------------------------------------------------------------- ----------
PLSQL Current Subprogram                                           % Current
----------------------------------------------------------------- ----------
SYSMAN.EM_TASK.WORKER                                                   8.84
SQL                                                                  6.05
SYS.DBMS_ASSERT.SIMPLE_SQL_NAME                                      1.40
SYSMAN.EM_PING.RECORD_BATCH_HEARTBEAT                                   3.26
SQL                                                                  1.86

 

This can help you quickly find which procedures contain the SQL shown in the preceding sections.

Blocking sessions, top sessions and top latches are just some of the areas covered, just as in a standard AWR report, but this is all for the EM repository database!

This section of the report finishes up with information about AWR retention, so if you need to go back and investigate further, you know up front what you have to work with in the AWR (Automatic Workload Repository).

--- AWR retention
INTERVAL  RETENTION
---------- ----------
60      11520
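
Both values are in minutes, so this repository takes hourly snapshots and keeps eight days of history.  If you want a longer window to work with, the settings live in DBA_HIST_WR_CONTROL and can be changed through DBMS_WORKLOAD_REPOSITORY; a sketch:

-- Current settings (values above are reported in minutes):
SELECT snap_interval, retention FROM dba_hist_wr_control;

-- Example: keep 14 days of hourly snapshots instead of 8.
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 60,
    retention => 20160);
END;
/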

 

No, the report is not even close to being finished for the Advisor log at this point.  It jumps promptly into a workload report to give you a good view of what kind of processing this repository database is handling daily.

FULL AWR Report

Just as with a standard AWR report, the basic information about the repository for the previous hour is included with the Advisor log.  Keep this in mind as you review this report.  If this is not a heavy use time for your EM repository database, the data may not be as valid as if we were looking at a larger timeframe.

If you are more comfortable with statspack or AWR reports, then you will want to spend more time here than in the ADDM section, but each provides its own view of the data.

The IO Statistics by Function Summary

Reads:   Reqs   Data    Writes:  Reqs   Data    Waits:    Avg
Function Name   Data    per sec per sec Data    per sec per sec Count    Tm(ms)
--------------- ------- ------- ------- ------- ------- ------- ------- -------
Buffer Cache Re    818M     2.1 .227007      0M     0.0      0M    7628     0.1
Others              62M     1.1 .017205     44M     0.8 .012210    6643     0.8
DBWR                 0M     0.0      0M     94M     1.7 .026086    5938     0.5
LGWR                 1M     0.0 .000277     40M     2.2 .011100   15.3K     0.5
Direct Reads         2M     0.0 .000555      0M     0.0      0M      17     0.0
Direct Writes        0M     0.0      0M      1M     0.0 .000277      11     0.7
TOTAL:             883M     3.2 .245045    179M     4.6 .049675   35.5K     0.5
------------------------------------------------------------
Direct Reads
2M     0.0 .000555      0M     0.0      0M       0     N/A
Direct Reads (Data File)
2M     0.0 .000555      0M     0.0      0M       0     N/A
Direct Writes
0M     0.0      0M      1M     0.0 .000277       0     N/A
Direct Writes (Data File)
0M     0.0      0M      1M     0.0 .000277       0     N/A
TOTAL:
883M     3.2 .245045    178M     4.6 .049397   11.6K     0.1
------------------------------------------------------------

Enqueue Activity

Enqueue Type (Request Reason)
------------------------------------------------------------------------------
Requests    Succ Gets Failed Gets       Waits  Wt Time (s) Av Wt Time(ms)
------------ ------------ ----------- ----------- ------------ --------------
JS-Job Scheduler (queue lock)
25,724       25,724           0          59            0           1.19
KO-Multiple Object Checkpoint (fast object checkpoint)
30           30           0           3            0           6.67
SQ-Sequence Cache
75           75           0           3            0           3.33
------------------------------------------------------------

Also spend some time inspecting the Latch and Mutex sections.  These memory structures give many Oracle databases trouble from time to time, and they are good to understand; even the EM repository database experiences issues with both, commonly when concurrency is involved.

I’ve been a long-time proponent of the segment reporting section in the AWR.  This report can often seal up the questions that remain after DBAs have reviewed the top SQL by elapsed time and their execution plans.
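
The data behind this section comes from the AWR history views.  A rough sketch of the kind of query involved, binding a snapshot range (the report’s own SQL is more elaborate):

-- Segments by physical reads across a snapshot range
SELECT o.owner, o.object_name, o.subobject_name,
       SUM(s.physical_reads_delta) AS physical_reads
FROM   dba_hist_seg_stat s
       JOIN dba_hist_seg_stat_obj o
         ON  o.dbid = s.dbid AND o.ts# = s.ts#
         AND o.obj# = s.obj# AND o.dataobj# = s.dataobj#
WHERE  s.snap_id BETWEEN :begin_snap AND :end_snap
GROUP  BY o.owner, o.object_name, o.subobject_name
ORDER  BY physical_reads DESC;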

Segments by Physical Reads
-> Total Physical Reads:         105,015
-> Captured Segments account for   98.1% of Total
Tablespace                      Subobject  Obj.      Physical
Owner         Name    Object Name            Name     Type         Reads  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS        SYSAUX     SCHEDULER$_JOB_RUN_D            TABLE       59,463   56.62
SYS        SYSAUX     SCHEDULER$_EVENT_LOG            TABLE       43,048   40.99
SYS        SYSTEM     OBJ$                            TABLE          266     .25
SYS        SYSAUX     I_WRI$_OPTSTAT_H_OBJ            INDEX          107     .10
SYSMAN     MGMT_TABLE EM_EV_METRIC_ALERT   P201404    TABLE           47     .04

 

There will be sections for logical, physical and unoptimized reads, along with row locks, table scans, writes and even ITL waits.  The report continues with library cache and all variations of memory information.  This is an area where I feel more than an hour of data could be beneficial, and a future enhancement to the REPVFY utility might be an argument specifying the timeframe of snapshots to be used for the report.

SGA Resizing Report

This report provides not just SGA information, but also PGA and caches.  It starts out by giving the reader a view of any specially sized buffer caches and each of the pools (large, java, ASM, etc.).

-- SGA Dynamic Components
--  Cache: D:DEFAULT, K:KEEP, R:RECYCLE
--   Mode: DEF:DEFerred mode, IMM:IMMediate mode
Last Oper  Last
Component              Size (M) Type/Mode  Operation Time
---------------------- -------- ---------- --------------------
D: 2K buffer cache            0 STATIC
D: 4K buffer cache            0 STATIC
D: 8K buffer cache            0 STATIC
D:16K buffer cache            0 STATIC
D:32K buffer cache            0 STATIC
D:buffer cache              448 INITIALIZI
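
The output above corresponds to the V$SGA_DYNAMIC_COMPONENTS view, so the same picture is available interactively; a sketch:

-- Dynamic SGA components, current size and last resize operation
SELECT component,
       current_size / 1024 / 1024 AS size_mb,
       last_oper_type,
       last_oper_time
FROM   v$sga_dynamic_components
ORDER  BY component;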

From there, it steps into resize operations for SGA and then PGA.

-- PGA Resize Operations overview
I# SNAP_TIME            PGA_TARGET PGA_ALLOC    PROCS        AUTO      MANUAL
---- -------------------- ---------- --------- -------- ----------- -----------
1 02-APR-2014 15:00:00        493       281       60        0.00        0.00
1 02-APR-2014 14:00:00        493       290       66        0.00        0.00
1 02-APR-2014 13:00:00        493       311       70        0.00        0.00
1 02-APR-2014 12:00:00        493       286       66        0.00        0.00

 

This part of the report is not held to the one-hour snapshot we saw in the ADDM and AWR sections of the Advisor log.  The data for these sections went back a full week and provided a clear picture of resizing activity for the SGA and PGA.

Repository CBO Report

This section is built out of SQL added to advisor.sql in the REPVFY utility.  It’s only found in the Advisor report and is essential to understanding cost-based optimizer (CBO) information specific to the repository database.  Thinking of it as schema-specific is more in line with the value the information provides.

Remember back to when you installed EM12c on the repository database and it had you disable the automated dbms_stats job?  This section now provides you with the internal stats job information that supplies the correct statistical information to the CBO:

TABLE_NAME                       INCREMENTAL GRANULARITY     STALE_PCT
-------------------------------- ----------- --------------- ---------
ADP_METRIC0                      TRUE
ADP_METRIC1                      TRUE
ADP_METRIC2                      TRUE
EMCT_CBA_CHARGE_1DAY             TRUE
EMCT_CBA_CHARGE_1HOUR            TRUE
EM_EVENTS_INCIDENT               TRUE                        10
EM_EVENT_ANNOTATIONS             TRUE                        10
EM_EVENT_CONTEXT                 TRUE
EM_EVENT_MSGS                    TRUE                        10
EM_EVENT_PROCESSING_STATE        TRUE
EM_EVENT_RAW                     TRUE                        10
EM_EVENT_SEQUENCES               TRUE                        10
EM_EVENT_SEQ_STATE                                           10
EM_METRIC_VALUES                             AUTO            10
EM_METRIC_VALUES_DAILY                       AUTO            10
EM_METRIC_VALUES_HOURLY                      AUTO            10
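
The preferences shown above correspond to standard DBMS_STATS table preferences, so a rough sketch of how such a configuration could be put in place looks like this (the exact calls EM’s installation makes are its own; these are the documented equivalents):

-- Disable the database-wide automated stats task, as the EM12c
-- repository install instructs:
BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
END;
/

-- Per-table preferences like those reported above:
EXEC DBMS_STATS.SET_TABLE_PREFS('SYSMAN', 'EM_EVENT_MSGS',    'STALE_PERCENT', '10')
EXEC DBMS_STATS.SET_TABLE_PREFS('SYSMAN', 'EM_METRIC_VALUES', 'GRANULARITY',   'AUTO')
EXEC DBMS_STATS.SET_TABLE_PREFS('SYSMAN', 'ADP_METRIC0',      'INCREMENTAL',   'TRUE')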

 

As the repository database scales, the EM stats feature can be scaled to support more extensive statistics collection as well.  I say this cautiously and with a grain of salt, but it backs up my experience that 95% of databases are served successfully by the standard automated stats job, while some require a DBA with knowledge of the schema, data and code to ensure the CBO has what it needs to perform efficiently.  The biggest challenge is that there aren’t many DBAs with that knowledge or the time to invest in the task (I know; I did this at two companies, and it must be a true passion).  EM12c has removed this challenge from the DBA’s plate by setting up the one-off stats collection requirement as part of the repository installation.

They’ve enhanced this in REPVFY with a utility step that checks what is currently in place, which you can then review as part of this output.

TABLE_NAME                       RUN_TYPE                         INCREMENTAL GRANULARITY     STALE_PCT
-------------------------------- -------------------------------- ----------- --------------- ---------
EM_EVENTS_INCIDENT               DAILY                                                        20
EM_EVENT_ANNOTATIONS             DAILY                                                        20
EM_EVENT_MSGS                    DAILY                                                        20
EM_EVENT_RAW                     DAILY                                                        20

 

Curious about when the jobs that do this work run, and for how long?  This is included as well:

[----- CBO statistics gathering log (last 3 days) ---------------------------]
LOG_DATE             RUN_TYPE                           DURATION OBJECTS_ANALYZED OBJECTS_STALE OBJECTS_FILTER VALUE
-------------------- -------------------------------- ---------- ---------------- ------------- -------------- --------------------------------------------------------------------------------
02-APR-2014 14:24:16 PROACTIVE                                 5                3            69             24 P:1.86:EM_EVENT_ANNOTATIONS;1.13:MGMT_JOB_EXECUTION;.71:EM_EV_METRIC_ALERT;
02-APR-2014 12:24:16 PROACTIVE                                 9                5            71             24 P:2.47:EM_EVENT_RAW;1.97:EM_EVENT_SEQUENCES;1.34:MGMT_JOB_EXECUTION;.87:EM_EVENT
02-APR-2014 10:24:16 DAILY                                   123              272           457                D:6.91:EM_JOB_METRICS;5.66:MGMT_SYSTEM_PERFORMANCE_LOG;4.81:MGMT_ARU_PATCH_FIXES
02-APR-2014 08:24:16 PROACTIVE                                13                7            74             24

 

Table stats in the repository?  Yep, that’s included, too!

[----- Table statistics -----------------------------------------------------]
TABLE_NAME                       PARTITION_NAME                 LAST_ANALYZED          NUM_ROWS     BLOCKS AVG_ROW_LEN GLO USE STATT STA
-------------------------------- ------------------------------ -------------------- ---------- ---------- ----------- --- --- ----- ---
EM_METRIC_VALUES_DAILY                                          29-MAR-2014 10:25:19      95257       2040          16 YES NO  ALL   YES
EM_METRIC_VALUES_HOURLY                                         01-APR-2014 10:26:49     681018      15820          16 YES NO  ALL   YES
MGMT_DB_FEATUREUSAGE                                            02-APR-2014 10:25:13       5713        244         149 YES NO  ALL   YES

 

And with all of that (pages and pages…), we’re done with the first of many reports.  I’ll keep to the most essential data in each, but hopefully you’ve learned something you hadn’t known before about the REPVFY utility.  I’ll update with the next log report soon!




Jonathan Lewis's picture

Bitmap loading

Everyone “knows” that bitmap indexes are a disaster (compared to B-tree indexes) when it comes to DML. But at an event I spoke at recently someone made the point that they had observed that their data loading operations were faster when the table being loaded had bitmap indexes on it than when it had the equivalent B-tree indexes in place.

There’s a good reason why this can be the case.  No prizes for working out what it is – and I’ll supply an answer in a couple of days’ time.  (Hint – it may also be the reason why Oracle doesn’t use bitmap indexes to avoid the “foreign key locking” problem).

 

pete's picture

Automatically Add License Protection and Obfuscation to PL/SQL

Yesterday we released the new version 2.0 of our product PFCLObfuscate. This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation and now in version 2.0 we....[Read More]

Posted by Pete On 17/04/14 At 03:56 PM

jeremy.schneider's picture

OSP #3a: Build a Standard Cluster Platform

This is the fifth article in a series called Operationally Scalable Practices. The first article gives an introduction and the second article contains a general overview. In short, this series suggests a comprehensive and cogent blueprint to best position organizations and DBAs for growth.

We’ve looked in some depth at the process of defining a standard platform with an eye toward Oracle database use cases. Before moving on, it would be worthwhile to briefly touch on clustering.

Most organizations should hold off as long as possible before bringing clusters into their infrastructure. Clusters introduce a very significant new level of complexity. They will immediately drive some very expensive training and/or hiring demands – in addition to the already-expensive software licenses and maintenance fees. There will also be new development and engineering needed – perhaps even within application code itself – to support running your apps on clusters. In some industries, clusters have been very well marketed and many small-to-medium companies have made premature deployments. (Admittedly, my advice to hold off is partly a reaction to this.)

When Clustering is Right

Nonetheless there definitely comes a point where clustering is the right move. There are four basic goals that drive cluster adoption:

  1. Parallel or distributed processing
  2. Fault tolerance
  3. Incremental growth
  4. Pooled resources for better utilization

I want to point out immediately that RAC is just one of many ways to do clustering. Clustering can be done at many tiers (platform, database, application), and if you define it loosely then even an Oracle database can be clustered in a number of ways.

Distributed Processing

Stop for a moment and re-read the list of goals above. If you wanted to design a system to meet these goals, what technology would you use? I already suggested clusters – but that might not have been what came to your mind first. How about grid computing? I once worked with some researchers in Illinois who wrote programs to simulate protein folding and DNA sequencing. They used the Illinois BioGrid – composed of servers and clusters managed independently by three different universities across the state. How about cloud computing? The Obama Campaign in 2008 used EC2 to build their volunteer logistics and coordination platforms to dramatically scale up and down very rapidly on demand. According to the book In Search of Clusters by Gregory Pfister, these four reasons are the main drivers for clustering – but if they also apply to grids and clouds, then what’s the difference? Doesn’t it all accomplish the same thing?

In fact the exact definition of “clustering” can be a little vague and there is a lot of overlap between clouds, grids, clusters – and simple groups of servers with strong & mature standards. In some cases these terms might be more interchangeable than you would expect. Nonetheless there are some general conventions. Here is what I have observed:

CLUSTER Old term, most strongly implies shared hardware resources of some kind, tight coupling and physical proximity of servers, and treatment of the group as a single unit for execution of tasks. While some level of single system image is presented to clients, each server may be individually administered and strong standards are desirable but not always implied.
GRID Medium-aged term, implies looser coupling of servers, geographic dispersion, and perhaps cross-organizational ownership and administration. There will not be grid-wide standards for node configuration; individual nodes may be independently administered. The grid may be composed of multiple clusters. Strong standards do exist at a high level for management of jobs and inter-node communication.

Or, alternatively, the term “grid” may more loosely imply a group of servers where nodes/resources and jobs/services can easily be relocated as workload varies.

CLOUD New term, implies service-based abstraction, virtualization and automation. It is extremely standardized, with a bias toward enforcement through automation rather than policy. Servers are generally single-organization; however, service consumers are often external. Related to the term “utility computing” or the “as a service” terms (Software/SaaS, Platform/PaaS, Database/DaaS, Infrastructure/IaaS).

Or, alternatively, may (like “grid”) more loosely imply a group of servers where nodes/resources and jobs/services can easily be relocated as workload varies.

Google Trends for Computers and Electronics Category

These days, the distributed processing field is a very exciting place because the technology is advancing rapidly on all fronts. Traditional relational databases are dealing with increasingly massive data volumes, and big data technology combined with pay-as-you-go cloud platforms and mature automation toolkits have given bootstrapped startups unforeseen access to extremely large-scale data processing.

Building for Distributed Processing

Your business probably does not have big data. But the business case for some level of distributed processing will probably find you eventually. As I pointed out before, the standards and driving principles at very large organizations can benefit your commodity servers right now and eliminate many growing pains down the road.

In the second half of this article I will take a look at how this specifically applies to clustered Oracle databases. But I’m curious, are your server build standards ready for distributed processing? Could they accommodate clustering, grids or clouds? What kinds of standards do you think are most important to be ready for distributed processing?


Richard Foote's picture

Indexing Foreign Key Constraints With Bitmap Indexes (Locked Out)

Franck Pachot made a very valid comment in my previous entry on Indexing Foreign Keys (FK) that the use of a Bitmap Index on the FK columns does not avoid the table locks associated with deleting rows from the parent table. Thought I might discuss why this is the case and why only a B-Tree index does […]