Oakies Blog Aggregator

harald's picture

Expert Advice: How to Start Selling on Your Website

Are you just taking your first steps selling a product or service online and don’t know where to begin? Be sure to register for our next 60-minute webinar, where our expert Happiness Engineers will walk you through the basics of eCommerce and show you how to set up your online store.

Date: Thursday, April 16, 2020
Time: 5 p.m. UTC | 1 p.m. EDT | 12 p.m. CDT | 10 a.m. PDT
Cost: Free
Who’s invited: business owners, entrepreneurs, freelancers, service providers, store owners, and anyone else who wants to sell a product or service online.
Registration link

Hosts Steve Dixon and Maddie Flynn are both veteran Happiness Engineers with years of experience helping business owners and hobbyists build and launch their eCommerce stores. They will provide step-by-step instructions on setting up:

  • Simple Payments — perfect for selling a single product or service.
  • Recurring Payments — great for subscriptions and donations.
  • WooCommerce — ideal for entrepreneurs who want to build an online store and automate sales.

No previous eCommerce experience is necessary, but we recommend a basic familiarity with WordPress.com to ensure you get the most out of the webinar. The presentation will conclude with a 15-20 minute Q&A session, so feel free to note down any questions in advance and bring them with you.

Seats are limited, so register now to reserve your spot. See you then!

mwidlake's picture

COVID-19: The Coming Peak in the UK & Beyond.

Previously:

  • Why we had to go into lock-down
  • What we could do to help ease social distancing

The UK government is now talking more in its daily briefings about what will come “next”, that is, after we have seen the number of diagnosed cases & deaths continue to grow, plateau, and then fall. It will plateau & fall, so long as we all keep staying at home and limiting our social interactions. If we do not, we risk the virus spreading out of control again.

When Will the Peak Be?

First of all, there will be two peaks. First the number of new cases a day will peak and then, about 8 days later, the number of deaths per day will peak. This is because of the average gap between being diagnosed in hospital and succumbing, for those unfortunate enough to do so.

The number of deaths a day looks to me like it will peak around April 20th, at somewhere between 1,200 and 1,500 a day (see below why I think tracking deaths is more reliable than case numbers and why case numbers are a poor metric). We will know that peak is coming because, if the lock-down measures have worked as intended, their effect will show as a plateauing and then a drop in new cases during next week (April 12th-18th). We might be seeing that plateauing already. Deaths will plateau (stay steady) for maybe a longer period than cases, because the gap between diagnosis and recovery or death is variable. That period will be something like April 20th-27th.

If we follow the same “curve” as Italy and Spain, the number of new cases will slowly start dropping, but not as sharply as early models indicated. Deaths will also drop, about 8-10 days later. What happens then I really have no idea; it depends on how well the current social distancing measures work and whether people continue to stick to them as spring progresses and they want to escape confinement.

A disproportionate number of deaths in this peak will be from our health services and critical workers – people working in shops, bus drivers, refuse collectors, GPs, teachers – because they are the most exposed. Care industry workers and the lower paid in our society will be hardest hit, which seems monumentally unfair.

The plan of pretty much all national governments so far is the same:

  • Isolation of all people who are non-key workers
  • Slow the spread
  • Expand the respiratory Intensive Care capabilities of the health services as much as possible
  • Look after as many of the wave of people already infected & becoming ill as possible

As I’ve covered in prior blogs, if the government’s measures work we are then left “sleeping with the tiger”. The virus is in our population, it will still be slowly spreading, and when social isolation measures relax there is a real risk of the illness and deaths exploding again because most of us are not immune. This is known by all epidemiologists studying this; it is a situation that China, Italy, Spain, and most other countries will face.

The big question is – what comes after the peak?

I’m going to cover four things:

  1. Why we cannot rely on “Cases”, the number most often graphed and discussed. We have to go on deaths, and even then there are some confounding factors.
  2. Why the Infection Fatality Rate is key – and we do not know it yet.
  3. A “test” or a “vaccine” is not a black-and-white thing, it’s grey – and a vaccine, especially, is not coming soon.
  4. How we might manage the period until we have either a reliable vaccine or herd immunity. Both currently look to be at least 18-24 months away.

Why Case Numbers Cannot Be Relied On.

Case numbers (the number of people who have been confirmed as having Covid-19) are the most commonly reported figures, and many of us use them to track whether things are getting better or worse. But they are really a very poor indicator, and they certainly cannot be used to compare between countries.

First of all, how are the diagnoses being made? Most countries are using the WHO-approved test or a very similar one, called a PCR test. I won’t go into the details here, I’ll put them in the section of the post on testing, but the test is accurate if done in a laboratory. Why in a lab? Because any cross-contamination can give a false positive, and if the sample or test chemicals are not kept/handled correctly, you can get a false negative.

Not all countries are using just PCR tests. China made some diagnoses based only on symptoms. I’m not sure if other countries are making diagnoses from symptoms only and including them in official figures.

More significant is who is being tested. In the UK the test was originally only being done on seriously ill patients in hospital. It is now being done on a few NHS staff and certain key people (like Boris Johnson!). In South Korea and Germany, many, many more people were tested, so more cases will have been identified. Add on to that the number of tests a country can do.

In the UK our testing rates have been very poor

In the UK we were limited to a pitifully small number of tests per day, less than 6,000 until March 17th, and we only reached 10,000 tests a day at the start of April. You cannot detect cases in people you have not tested.

Case numbers will also vary from country to country based on the country’s population! The UK is going to have a lot more cases than Denmark as we have over 10 times as many people.

The final confusion is that even within a single country, what counts as a day for reporting can vary, and it can take time for information to be recorded. The UK sees a drop in cases against the prevailing trend on Sunday and Monday, as the cases reported are for the prior day and it seems the data is not processed as thoroughly at the weekend.

Estimations of how many people really have Covid-19 at any time, as opposed to validated Case numbers, vary wildly. In the UK I doubt we are detecting even 1/3 of cases.

So, all in all, Case Rates are pretty poor as an indicator of how many people are really ill.

Infection Fatality Rate and Tracking Deaths Not Cases.

As I mentioned in my previous post last week, what we really need to know is the Infection Fatality Rate (IFR). This is the percentage of infected people who die. It is not the same as the Case Fatality Rate (CFR), which is the percentage of known cases that die. As the number of known cases is such an unreliable number (see above!) then of course the CFR is going to be rubbish. This is a large part of why the CFR varies so wildly from country to country. France has a CFR of 8.7%, almost as bad as the UK at 10.4%. The US has a CFR of 2.9% (but they will catch up).

As I also covered last week, we cannot calculate the IFR until we know the number of people who have been infected. For that we need a reliable antibody test and one does not exist yet. Yes, they are being sold, but the reliability is poor. Last I knew the UK NHS had reviewed several candidates and none were reliable enough to use.

Scientists have suggested many Infection Fatality Rates. I feel 0.5% is a fair estimate. It is vital we know this number with some accuracy, because if we have an Infection Fatality Rate we can flip the coin and calculate the number of people who have been infected from the number of people who have died.

You can go from a graph like the example one I show (either from a model or, after the peak, from real figures) and as you have the number who died (say 20,000 to keep it simple) and the IFR of 0.5% you know that 4 million people (minus the 20,000 who died) had the disease and are now immune.
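As a worked version of that “flip”, using the example figures above (20,000 deaths and a 0.5% IFR – illustrative numbers, not measured values):

\[
\text{infected} \approx \frac{\text{deaths}}{\text{IFR}} = \frac{20{,}000}{0.005} = 4{,}000{,}000,
\qquad
\text{now immune} \approx 4{,}000{,}000 - 20{,}000 = 3{,}980{,}000
\]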

Of course, once we have a reliable antibody test we can verify the exact value for Infection Fatality Rate and the percentage of the population now immune.  But we only need that information from one country and it can be used, with minor modifications for population age and capacity of the health services, to estimate how many people are immune and thus how many are still at risk from Covid-19. In my example, about 62.5 million people in the UK would still be susceptible to Covid-19. Which is why this will be far from over after this initial peak.

There is one huge caveat in respect of the IFR. If in the UK the NHS is over-run, we will have extra deaths: people who would have survived with treatment die because too many people needed treatment at the same time. This is the whole “flattening the curve” argument – we have to protect the NHS from being over-run to limit these extra, avoidable deaths. In effect the IFR is elevated due to the limitations of the health system.

Countries which have a poor health service, or other aspects of their society that block people from the health service (cultural bias, fear of crippling debt), are more likely to have an elevated IFR, as are countries that allow Covid-19 to run unchecked through their population.

There is another aspect to the IFR and measuring progress of Covid-19 via the death rate. The number of deaths is a more reliable measure. I know that sounds callous, but as we have seen, the Case Number is totally reliant on how you do your testing and there needs to be a huge testing capacity to keep up. Deaths are simpler:

  • There are fewer deaths so fewer tests are needed (to confirm SARS-CoV-2 was present in the deceased, if not already tested).
  • Deaths have to be recorded in a timely manner.
  • Deaths are noticed. There are going to be people who are seriously ill and would be tested if they went to hospital but don’t, they get better and it is not recorded. They are “invisible”. Dead people invariably get noticed.
  • A country that wants to hide the active level of Covid-19 can do so by not testing, under-testing, or not reporting honestly on the tests. It’s not impossible, but it’s hard to cover up a significant increase in the number of deaths.

I stress that this is not a perfect indicator though. There is no clear distinction made as to whether the patient died of some other illness but SARS-CoV-2 was present, or whether the patient was likely to die “soon” anyway – again due to other illnesses – and patients who die outside hospitals are not counted in the UK daily figures yet. (If you follow me on Twitter you will have possibly seen me querying the figures last Monday – and people pointing out the reason!)

Reported deaths will also suffer from spikes and dips due to how the reporting is done. The UK and some other countries I checked (France, Italy, Spain) show a dip in all figures, against trend, on Sunday or Monday (or both).

There is a really nice article on all of this by New Scientist, which is itself partly based on this paper in The Lancet that gives an IFR of 0.66%.

There is also a whole plethora of graphs and information on ourworldindata.org/coronavirus , as well as text explaining in more detail what I have said here. It is well worth a look and you can change which countries appear on the graphs.

 

Tests Are Not Black And White

There has been a lot of talk in the UK and elsewhere (including the USA) about not doing enough testing. On the other hand, there is a constant stream of media reports about quick home tests, both for whether you have Covid-19 and for whether you have antibodies to SARS-CoV-2 and so are immune. So what is the reality?

A test is only any good if it is reliable as used. For something like a deadly pandemic, it needs to be really reliable. Let me explain why.

Let’s say a company is selling an antibody test and someone uses it, it says they are immune, and they stop self-isolating. But the test is 75% accurate. 75% sounds good, yes? No. It means 1 in 4 people who take that test and are told they are immune are not – and they have now gone out, spread the disease to their aunt Mary and she dies. Plus infecting a large number of people and keeping the whole sorry mess going.

{Update – as a friend reminded me, when you are testing for an “unlikely” event, which being immune to C-19 is right now, even a 95% accurate test will give far more false positives than real positives across the whole population – I’ll try and do another blog to explain why}.
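To sketch why (with numbers assumed purely for illustration): suppose 2% of people have actually had Covid-19 and the test is 95% accurate in both directions. Then, per 1,000 people tested:

\[
\text{true positives} = 0.95 \times 20 = 19,
\qquad
\text{false positives} = 0.05 \times 980 = 49
\]

so roughly 49 of the 68 positive results – over two thirds – would be wrong, even though the test is “95% accurate”.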

And that is if they take the test properly – companies are most likely to quote the best, under-ideal-conditions accuracy rate, as they want to sell more kits than Sproggins Pharma, who are selling a similar kit which they claim is 73% accurate.

If you are reading this, you are probably the sort of person who will read the instructions, follow them carefully, not put the swab down on a table,  not let the dog chew it.  And you note the bit on reliability. Most won’t. They will do the test quickly, it says they are immune and they will believe it, especially if the quoted reliability rate is high.

Any home test that can be used by the public has to be both very reliable (less than 5% false positives) and utterly idiot-proof. I’m really concerned that countries that put money first will allow companies to sell tests that do not meet these criteria and it will make the situation a lot, lot worse. It might even result in the pandemic running out of control.

Test For Being Infected – PCR test

PCR stands for Polymerase Chain Reaction. The WHO-approved test for Covid-19 is a PCR test and has been fully described since the end of January. You can even download the details of the test and methods from the WHO page I link to.

A PCR test is a genetic test. A primer is added to the sample to be tested, and that primer latches on to a very specific DNA or RNA sequence. A biochemical reaction, the Polymerase Chain Reaction, is then used to make copies of that DNA/RNA, doubling the number in the sample. These steps are repeated 30 to 40 times to make millions of copies of the DNA/RNA. With an old-style PCR test you would then need to run the processed sample through a second process to detect it, like a Southern Blot – you get a square of gel with black lines on it. The PCR test for COVID-19 should be a real-time PCR test. With this, the new copies made are attached to a fluorescent dye so that they can be easily detected as soon as there are enough copies in the sample, say after 30 iterations rather than the full 40, saving time.

If the original sample contains even just a few pieces of the DNA/RNA you are testing for, you will detect it. The process takes a few hours.

The RNA of the SARS-CoV-2 virus was sequenced (read) back in January and the WHO identified sequences that are unique to the virus; these are used to make the primers. As I understand it, most countries use the WHO-identified primers, but in the USA there were some “discussions” between commercial companies over which primers they thought should be used. I won’t suggest there was an element of these commercial companies looking to make a fortune from this; I’m sure it was all about identifying an even more unique RNA sequence to target.

The test has to be done in laboratory conditions. Because the test is so sensitive, any cross-contamination can give a false positive, e.g. the sample taken from a patient was handled by someone with COVID-19 themselves, or there was SARS-CoV-2 virus in the air from another nearby patient. If a swab is used to get a sample from the back of the throat, it has to be put into a sealed tube as soon as it is used.

If the sample to be tested has not been looked after properly (kept cool, not kept for too long etc) or the chemicals for running the test are similarly not kept in a laboratory environment, you may fail to detect the RNA – a false negative.

Finally, the virus RNA has to be there to be detected. A patient early in their illness may not be shedding virus at a high enough level for the swab to pick up some of it. Once a patient’s own immune system has wiped out the virus (or almost wiped it out), again the swab may not have enough virus in it to be detected.

Done right a PCR test is a powerful, incredibly reliable (over 99%) diagnostic tool and is used for detecting many viral diseases, including HIV, Influenza, and MERS.

You can probably now understand why creating a PCR test for Covid-19 that can be used at home or in the ward and gives a result in minutes is a bit of a challenge.

Some companies are trying to create a different sort of test. These depend on creating a chemical that will bind to the virus itself, probably one of the viral surface proteins. That chemical or part of it will then react with something else, a marker chemical, to give a visible change, much like a pregnancy test. You put the sample in a well or spot where the detecting chemical is. Fluid is then dragged along the strip carrying the thing to be detected (the virus in this case) and the detecting chemical. Any detecting chemical that did not bind will be left behind. When the fluid goes past the marker chemical, if there is enough detecting chemical, it will change colour. Neat!

Best I know at the time of writing, no one has come up with such a test that is reliable. I’m pretty sure someone will, in a few weeks or months. It should be accurate, but nowhere near as sensitive as a PCR test. I must stress, to actually be of use in handling Covid-19 as a nation, the rate of false positives would need to be very low. A false negative, though not good for the individual, is nothing like as big a problem in containing the pandemic.

Antibody Test

An antibody test will show if you have had Covid-19. It will not show if you currently have it, or at least not until the very late stages. This is because it is testing for the natural ability of your immune system, via antibodies, to recognise and attack the SARS-CoV-2 virus.

We desperately need an antibody test as it will allow us to identify people who have had the disease and are now immune. This is vital for 2 reasons:

  1. Someone who is immune does not need to be restricted by social distancing. See my prior post on why this is vital and how we might identify such people.
  2. We can find out how many people have had the disease and compare it to the number of people who have died of the disease and get that very useful Infection Fatality Rate.

Unfortunately, making an antibody test is not easy. Some are in trials and I think the UK government have tried some –  and none have proven trustworthy.

An antibody test is simply not simple. What you need to do is design something that an antibody reacts against, so let me just describe something about antibodies. Before I go any further, I must make it very, very, very clear that of all the biological things I have touched on so far, antibody technology is something my academic background hardly touched on and most of what I know comes from popular science magazines and a few discussions with real experts last year when my work life touched that area.

Your body creates antibodies when it detects something to fight, an invader in our tissues. This is usually a viral or bacterial infection. It also includes cells that “are not our own”, which is why we reject organ transplants unless they are both “matched” to us and we take drugs to dial down our immune response. Our antibodies recognise bits of the invader, in the case of viruses that is (usually) proteins that are in the coat, the outer layer, of viruses. Usually it’s the key proteins, the ones that give them access to our cells. Our immune cells learn to recognise these proteins and attack anything with them on it.

Anyone infected with SARS-CoV-2 who survives (which is, thankfully, most of us) now has antibodies that recognise the virus. There is no guarantee that what Dave’s immune system recognises SARS-CoV-2 by is what Shanti’s immune system does. It will be a bit of the virus, but not necessarily the same bit!

So an antibody test has to include proteins, or fragments of proteins, that most human immune systems that recognise SARS-CoV-2 will recognise. And as that will potentially vary from person to person… oh dear. Thus a good antibody test probably needs to have several proteins or protein fragments in it to work. This is why it is complex.

Again, the tests will come but the first ones will almost certainly not be specific/reliable enough to really trust.

 

Vaccine

The bad news? Despite all the media hype and suggestions in government announcements of creating a vaccine in 18 months (maybe sooner), it is very unlikely. Sorry. It is very, very unlikely. Don’t get me wrong, I would love us to have one right now, or in a month, or even in 6 months. But unless there is a medical miracle, we won’t, and by suggesting to everyone that we might, I think the powers that be are storing up a lot of anger, frustration and other issues.

A vaccine needs to do something similar to the Antibody test. It needs to contain something that either is part of the virus or looks like part of it. This is usually:

  • An inactivated version of the virus
  • a fragment of the virus
  • One of the key proteins on the virus
  • Rarely, a related virus that is much less harmful (for example cowpox for smallpox vaccine).

The vaccine is administered and the person creates antibodies to it. Now, when the person is exposed to the real virus, the immune system is ready to attack it. Neat!


Influenza Vaccine is often less than 50% effective

Creating vaccines is a long process. You need to come up with something that is safe to administer, prompts our immune systems to create the antibodies, and whose antibodies reliably attack the virus the vaccine is for – and nothing else! (Occasionally a new vaccine is found to prompt some people’s immune systems to attack other things – like the healthy, useful protein the virus actually attacks.) And you have to produce a LOT of that thing if you are going to administer it to a large number of people, such as most of the UK population.

The vaccine has to work on most people, as you need 60-70% of people to be immune to SARS-CoV-2 to get herd immunity from Covid-19 – the higher the better. The influenza vaccine is often less than 50% effective, especially in older people.

You are giving the vaccine to healthy people, and to lots and lots of them. It has to be really, really, really safe. If it seriously harms 1 in a thousand people (which might sound reasonable at first glance, for treating something as bad as Covid-19) – well, that is almost as bad as Covid-19 itself. You would be harming tens of thousands of people.

With a drug you use to treat the ill, you can afford for it to be less safe – as you are only giving it to people who are ill (so a smaller number) and they have more to lose. The risk/reward balance is more likely to be positive for a drug. Even if a drug for a life-threatening illness harms 5% of people but cures 50%, it is worth (with informed consent) using it.

We have never, ever created a vaccine in 18 months before. I’m struggling to find a scientific reference as searches are swamped with opinion pieces (like this one!) on why it will take a long time. However, this video by an American doctor, Zubin Damania, who does social media about medical matters, explains it better than I can, and this history of vaccines makes it clear at the top that it often takes 10 years.

The bottom line is, much though I want to be wrong, the often stated aim of having a suitable vaccine in 18 months or less will need a medical miracle and a huge amount of work.

After The Peak And With No Vaccine – How Do We Cope?

After the peak, most people are still at risk from Covid-19. As I said earlier, if the Infection Fatality Rate is 0.5% then for each person who died there will be about 200 people who are now immune, so if there are 20,000 deaths that is 4 million people immune.

If there is no vaccine then we have, I think, four options:

  1. Continue social isolation measures as they are to keep the virus from spreading.
  2. Relax isolation a little and let cases creep up, but held at a steady rate within the capacity of the NHS.
  3. Relax isolation quite a bit, monitor number of admissions to ICU (or something similar) and re-impose strict social isolation at  the current level if things start getting worse.
  4. Relax isolation a lot and massively increase testing and case tracking – copying the South Korea/Singapore approach.

Option 1 to hold us all in isolation is, I think, untenable. People will stop doing it and the impact on our economy must be massive. The impact on our society will also be massive, especially if this continues into the next academic year.

I don’t think we can manage option 4 in the UK yet.

So I think we will see an attempt at option 2, relaxing some social isolation rules (such as allowing restaurants to open and small gatherings) but then option 3, tightening social isolation if numbers of new cases start to build.

Option 4 could become a reality in a few months, especially if we can get people to use mobile phone apps to track movements and aid identifying the contacts of people who become ill,  but not everyone has a mobile phone and I think a good percentage of people will not agree to be tracked.

At present, without a vaccine, we will be living with some sort of social distancing until we reach herd immunity, with at the very least 60% of us immune. How long will that take? 60% of the UK population is 40.5 million people. That equates to 202,500 deaths from Covid-19 to get there (remember, see the bit on IFR above).

This current peak of Covid-19 will last about 3 months, from the start of March to the end of May. It remains to be seen if we exceed the NHS expanded capacity. If we allow 20,000 deaths a peak with 4 million people becoming immune each peak, that’s 10 peaks, so 2.5 years.

A better option could well be to aim for a steady rate of new cases and deaths from Covid-19, say 1,000 deaths a week. At that rate herd immunity will take just over 200 weeks – about 4 years. If we allow 4,000 deaths a week then we could be there in a year, but our NHS would have to handle the many, many thousands of ill patients that would entail.

Of course, in reality, our treatment of Covid-19 patients will get better over time, so fewer people will die from it, but it will still be a horrible thing to go through. And, if we DO get a vaccine sooner rather than later, many of those people will have died needlessly.

So, as you can see, we are in this for a long while.

The expanded health services, better knowledge of which social movement restrictions work, improved testing (including home testing), even my idea of cards for those who are immune, would all make life easier – it is not all doom and gloom. But I just wish all of what I have put here was being discussed and shared with people (preferably in a shorter form than this blog!) in a clear and constant message. I think if more people understood where we are and what is likely to happen (or not), we would save ourselves a lot of issues weeks, months, even years down the line.

I honestly don’t know what the answer is – I don’t think anyone does. Which is why all of this talk about an “exit strategy” results in lots of hand waving and no clear plan.

As ever, if you think I’ve got something wrong, you know of a good academic source covering this, or you simply have a comment – let me know.

tanelpoder's picture

Virtual Conference: Systematic Oracle SQL Optimization Interview Videos

I have posted another interview to YouTube. This one is with Cary Millsap and he describes what he’ll be talking about in his virtual conference talk:
 I will add a few more videos in the coming days. The conference interview playlist URL is here.
You can sign up for the conference here. See you soon!
 


harald's picture

Import Your WordPress Site to WordPress.com — Including Themes and Plugins

Since the early days of WordPress, it’s been possible to export your posts, images, and other content to an export file, and then transfer this content into another WordPress site.

Select WordPress from the list of options to import your site.

This basic WordPress import moved content, but didn’t include other important stuff like themes, plugins, users, or settings. Your imported site would have the same pages, posts, and images (great!) but look and work very differently from the way you or your users expect (less great).

There’s a reason that was written in the past tense: WordPress.com customers can now copy over everything from a self-hosted WordPress site — including themes and plugins — and create a carbon copy on WordPress.com. You’ll be able to enjoy all the features of your existing site, plus the benefits of our fast, secure hosting with tons of features, and our world-class customer service.

Select “Everything” to import your entire WordPress site to WordPress.com.

To prep for your import, sign up for a WordPress.com account — if you’d like to import themes and plugins, be sure to select the Business or eCommerce plan — and install Jetpack (for free) on your self-hosted site to link it to WordPress.com. To start the actual import, head to Tools → Import in your WordPress.com dashboard.

Then, sit back and relax while we take care of moving your old site to a new sunny spot at WordPress.com. We’ll let you know when it’s ready to roll!

connor_mc_d's picture

Will a BLOB eat all my memory?

Probably the most common usage for large objects (CLOBs and BLOBs) is to store them in a database table. In this circumstance, it feels intuitive that you won’t have a lot of concerns about memory, because the database will simply store those objects in datafiles like it would any other kind of data.

But BLOBs and CLOBs can also be declared as local variables in a PL/SQL block. We typically expect local variables to be housed within the memory for that session (the PGA). There are explicit calls in the DBMS_LOB package to create a temporary large object, but what if we do not use that API? What if we just start bludgeoning a local variable with more and more data? Is this a threat to the session memory, and potentially the database server?
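(For reference, the explicit route looks roughly like the minimal sketch below, using DBMS_LOB.CREATETEMPORARY and DBMS_LOB.FREETEMPORARY with a purely illustrative payload; the test that follows deliberately does not use it.)

declare
  c blob;
begin
  -- explicitly create a session-duration temporary LOB
  dbms_lob.createtemporary(c, cache => true, dur => dbms_lob.session);
  -- append 4 bytes of sample data to it
  dbms_lob.writeappend(c, 4, hextoraw('DEADBEEF'));
  -- release the temporary segment when we are done with it
  dbms_lob.freetemporary(c);
end;
/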

Time for a test!

To see what happens when we keep growing a BLOB, I’m going to monitor two attributes for a session:

  • How much PGA is it consuming, and
  • Whether it is using any temporary tablespace allocation.

To keep my code nice and compact, I’ll determine in advance the statistic number for the PGA memory statistic I want to monitor.


SQL> select name
  2  from   v$statname
  3  where  statistic# = 40;

NAME
--------------------------------
session pga memory max
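(As a side note, statistic numbers can differ between Oracle versions, so a sketch that joins on the statistic name rather than hard-coding statistic# 40 might be safer:)

select 'pga '||s.value val
from   v$mystat s
       join v$statname n on n.statistic# = s.statistic#
where  n.name = 'session pga memory max';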

Before doing anything, I’ll check the amount of PGA my session has used in its lifetime.


SQL> select 'pga '||value val
  2  from   v$mystat
  3  where  statistic# = 40;

VAL
--------------------------------
pga 2585904

Now I’ll start with a simple routine that adds 10 kilobytes to an existing 1 kilobyte BLOB. We wouldn’t expect such a variable to cause any problems, because it is less than what would easily fit in a VARCHAR2 anyway.


SQL> set serverout on
SQL> declare
  2    l_raw blob := utl_raw.cast_to_raw(lpad('X',1000,'X'));
  3    c     blob;
  4  begin
  5    c := l_raw;
  6    for i in 1 .. 10
  7    loop
  8      dbms_lob.writeappend (c, utl_raw.length(l_raw), l_raw);
  9    end loop;
 10
 11    for i in (
 12    select 'pga '||value val
 13    from   v$mystat
 14    where  statistic# = 40
 15    union all
 16    select 'blk '||blocks from v$sort_usage
 17    where username = 'MCDONAC'
 18    union all
 19    select 'len '||dbms_lob.getlength(c) from dual
 20    )
 21    loop
 22      dbms_output.put_line(i.val);
 23    end loop;
 24  end;
 25  /
pga 2863896
len 11000

PL/SQL procedure successfully completed.

Notice also that the query portion on line 16 to report any temporary space has not returned any results. So everything here is being done in PGA. Let’s ramp things up a little. Now I’ll push to 100kB, which exceeds what could be stored in a standard local variable.


SQL> set serverout on
SQL> declare
  2    l_raw blob := utl_raw.cast_to_raw(lpad('X',1000,'X'));
  3    c     blob;
  4  begin
  5    c := l_raw;
  6    for i in 1 .. 100
  7    loop
  8      dbms_lob.writeappend (c, utl_raw.length(l_raw), l_raw);
  9    end loop;
 10
 11    for i in (
 12    select 'pga '||value val
 13    from   v$mystat
 14    where  statistic# = 40
 15    union all
 16    select 'blk '||blocks from v$sort_usage
 17    where username = 'MCDONAC'
 18    union all
 19    select 'len '||dbms_lob.getlength(c) from dual
 20    )
 21    loop
 22      dbms_output.put_line(i.val);
 23    end loop;
 24  end;
 25  /
pga 3453720
len 101000

PL/SQL procedure successfully completed.

Our PGA has jumped a little, and there is still no temporary space usage. So let’s push onwards and see if I can blow up my session PGA. Now I’ll push 10 megabytes into the BLOB.


SQL> set serverout on
SQL> declare
  2    l_raw blob := utl_raw.cast_to_raw(lpad('X',1000,'X'));
  3    c     blob;
  4  begin
  5    c := l_raw;
  6    for i in 1 .. 10000
  7    loop
  8      dbms_lob.writeappend (c, utl_raw.length(l_raw), l_raw);
  9    end loop;
 10
 11    for i in (
 12    select 'pga '||value val
 13    from   v$mystat
 14    where  statistic# = 40
 15    union all
 16    select 'blk '||blocks from v$sort_usage
 17    where username = 'MCDONAC'
 18    union all
 19    select 'len '||dbms_lob.getlength(c) from dual
 20    )
 21    loop
 22      dbms_output.put_line(i.val);
 23    end loop;
 24  end;
 25  /
pga 2994968
blk 1280
blk 128
len 10001000

PL/SQL procedure successfully completed.

You can see what the database is doing here. Even though the BLOB is declared as a local variable, it is being handled like a true temporary BLOB. We store it in PGA if it fits within the database’s overall management of the PGA, but once it gets large, we’ll utilise temporary storage to ensure our session does not chew up excessive memory. This allows me to write as much data as I like into the variable and not harm the PGA. Here’s the BLOB pushed out to 200 megabytes, but I’m still only consuming just over 3 megabytes of PGA.


SQL> set serverout on
SQL> declare
  2    l_raw blob := utl_raw.cast_to_raw(lpad('X',1000,'X'));
  3    c     blob;
  4  begin
  5    c := l_raw;
  6    for i in 1 .. 200000
  7    loop
  8      dbms_lob.writeappend (c, utl_raw.length(l_raw), l_raw);
  9    end loop;
 10
 11    for i in (
 12    select 'pga '||value val
 13    from   v$mystat
 14    where  statistic# = 40
 15    union all
 16    select 'blk '||blocks from v$sort_usage
 17    where username = 'MCDONAC'
 18    union all
 19    select 'len '||dbms_lob.getlength(c) from dual
 20    )
 21    loop
 22      dbms_output.put_line(i.val);
 23    end loop;
 24  end;
 25  /
pga 3584792
blk 24576
blk 128
len 200001000

PL/SQL procedure successfully completed.
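(As another side note, you can view the temporary LOB allocation from a different angle via v$temporary_lobs, which reports temporary LOB counts per session. A rough sketch, assuming the MCDONAC session is still connected:)

select s.sid, s.username, t.cache_lobs, t.nocache_lobs, t.abstract_lobs
from   v$temporary_lobs t
       join v$session s on s.sid = t.sid
where  s.username = 'MCDONAC';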


(picture credit: @jorgezapatag unsplash)

Richard Foote's picture

Oracle 19c Automatic Indexing: Adding Columns To Existing Automatic Indexes (2+2=5)

  In my previous post, I discussed how when the following query is run: select * from major_tom3 where code3=4 and code2=42; the Automatic Indexing process will create an index on (CODE2, CODE3) but ultimately not use the index as the CBO considers the corresponding index based execution plan too expensive. I’m going to expand […]

pete's picture

PL/SQL, AST, DIANA, Attributes and IDL

I have been wanting to write a detailed post about this subject for a very long time and indeed I have had some notes and screen dumps for some of this for more than 15 years for some parts of....[Read More]

Posted by Pete On 06/04/20 At 08:57 PM

oraclebase's picture

Video : Online Table Move Operations in Oracle 12.2 Onward

In today’s video we demonstrate how to move, or rebuild, a table as an online operation.

This video was done as a response to some questions about the previous video on shrink operations. As usual, the video is based on some stuff I’ve written previously.

The star of today’s video is John Kelly.

Cheers

Tim…


fritshoogland's picture

Oracle library cache cursor child generation

This post is about library cache SQL cursors, and how these are managed by the database instance.

Whenever an Oracle database parses a SQL statement, it goes through the following sequence of events (the events that matter for the topic of cursor children):
1. The SQL text is hashed to generate a hash value.
2. The hash value is used to inspect the session cursor cache of the session to find a cached child.
If a cached entry is found, the child is inspected for compatibility, and then executed.
3. The hash value is used to calculate the library cache hash bucket.
4. The hash bucket is inspected, and the pointer is followed to find the parent handle.
5. In the parent handle a pointer is followed to the parent’s heap 0.
6. In the parent heap 0 a pointer to the hash table with the cursor children is followed.
7. The hash table is read from the most recent child down to the oldest child.
If a compatible child is found, that child is executed.
8. If no compatible child is found, a new child is created.

I guess this is reasonably well known.
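(If you want to see the resulting parent/child structure from the SQL side, a rough sketch is to list the children of a statement and the reasons they could not be shared – 'abcd1234efgh5' is a placeholder sql_id, not one taken from this post:)

select sql_id, child_number, plan_hash_value, executions
from   v$sql
where  sql_id = 'abcd1234efgh5'   -- placeholder sql_id
order  by child_number;

select *
from   v$sql_shared_cursor
where  sql_id = 'abcd1234efgh5';  -- each mismatch-reason column shows Y for the children it affected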

However, there are some nuances that are interesting to understand.

The first one is that with pluggable databases, a child has the container-id in which it was generated as a property. During the inspection of the child list, the container-id of the child must match the container-id of the executing session, otherwise the child is skipped. This prevents sessions from getting blocked by sessions in other containers. Please mind the library cache hash table and the parent handle are shared between containers, although waiting for these should be negligible. See Oracle multi-tenant and library cache isolation.

Children are inspected from most recently created to the oldest. This is logical, more recent children have the highest probability to have been optimised with the most current state of the data.

Whenever there is no compatible child found, a new child must be created.

Incomplete list insertion.
The first thing that is done to create a new child is that an entry for it is allocated in the child list. This is done using the function kkshinins(), and a cursor trace will show this as:

kkshinins insert child into incomplete list bi=0x7ea05290 cld=21 flg=25

This is noteworthy, because to do so, the hash table (child list) mutex must be taken in exclusive mode. If a session wants to inspect the child list, it will briefly take the hash table mutex in shared mode. If the session creating the new child finds such a session, it will wait for ‘cursor: mutex X’. The other way around: if a session wants to inspect the child list but finds it taken by a session in exclusive mode, it will wait for ‘cursor: mutex S’.

I am not sure how to read ‘incomplete list’, and whether this means that the current child list contains a child that is not yet complete, or whether it hints at a separate list of incomplete cursors. What I do know is that the function kkshinins() inserts the new, at this point not yet built (“incomplete”), child as the very last entry of the child list.

This means that at this point, if another session needs to find a compatible child, it will scan the child list and, if no children are found compatible, it will find the incomplete child as the last one. Once the session finds the incomplete child, it has to wait for it, and will wait for ‘kksfbc child completion’.

Child load.
The session that is creating a new child will continue to create the new child, for which the next step is to allocate the child’s heap 0. This is done in the function kksLoadChild(). After the child’s heap 0 is allocated, it will pin the child in exclusive mode. Sessions that were waiting for ‘kksfbc child completion’ now will wait for ‘cursor: pin S wait on X’. Any other sessions that scanned the child list and did not find any compatible child, will run into this (last) child and wait for ‘cursor: pin S wait on X’ too.

There is a good reason the sessions wait for the child creation. If a session did not wait for the child creation, it would not have found any suitable child, because the incomplete child is the last one on the list, and therefore it too would start creating a child. The reason this is not allowed to happen is that there is a fair chance that both sessions would create identical children.

At this point, after the child cursor is pinned in exclusive mode, the session will continue to create the child and perform all the steps it needs to do during parsing. In fact, child generation is the actual thing that is considered ‘hard parse’. This includes syntax checking, semantic checking, execution path selection, optimization, creation of the executable version of the cursor (in heap 6) etc.

Incomplete list removal.
When the child creation is finished, the just-created child is taken off the incomplete list (function kkshindel()) and inserted into the child list as the first entry (function kkshhcins()). This requires exclusive access to the child list once again, so when there is high concurrency, this might yield ‘cursor: mutex X’ for the session creating the child, and/or ‘cursor: mutex S’ for the sessions wanting to scan the child list. A cursor trace shows:

kkshindel remove child from incomplete list bi=0x7ea05290 cld=21 flg=30
kkshhcins insert child into hash table bi=0x7ea05290 cld=21 flg=38

I am not sure if being taken off the incomplete list is reflected as a change in the child’s status (perhaps the ‘flg’ field shown in the trace?), but what does happen is that, after being taken off the incomplete list, the child is inserted as the first entry in the list of children. The next step is that the exclusive pin of this child is changed to shared. Any sessions that were waiting for child creation in ‘cursor: pin S wait on X’ will now continue and can inspect the child to check compatibility and, if it is found compatible, execute the newly created child. If the child a session was waiting for is not compatible, it has run out of children to check and it will start creating a new child.
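(A rough sketch for checking whether these waits are significant on an instance – simply summing the instance-wide wait statistics for the events mentioned above:)

select event, total_waits, round(time_waited_micro / 1e6, 1) seconds_waited
from   v$system_event
where  event in ('cursor: mutex X', 'cursor: mutex S',
                 'cursor: pin S wait on X', 'kksfbc child completion')
order  by time_waited_micro desc;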
