Depending on where you are in the world, reported infection rates range from over 20% (New York) to 6% (Geneva).
What is going on?
Are researchers simply in too much of a hurry (and under too much pressure) to double-check their findings? Or is there something inherently different about this coronavirus that makes it harder to detect reliably than other viruses?
The pressure to develop a fast and effective test for Covid-19 antibodies is immense and the number of tests available has soared to over 200 either available now or close to launch.
Inaccurate, unauthorised tests are flooding onto the market due to emergency deregulation of the normal registration process – especially in the US. Increasing reliance has been placed on manufacturer claims and anecdotal evidence – with sadly predictable results.
There are two main types of tests – lab-based ones (which usually give a quantitative measure of antibodies) and rapid tests (which, like pregnancy tests, give a binary yes/no answer). The main problem seems to lie with the rapid tests.
The UK government recently had to admit to spending $20 million upfront on 2 million Chinese coronavirus testing kits that were too inaccurate to use. Spain had to destroy 640,000 testing kits when it discovered they could only detect around 30% of infections.
In another high-profile case, inaccurate testing in the Californian county of Santa Clara was blamed for a massive overestimate of the number of infections (especially given the relatively low number of deaths).
Some tests give false positives whereas others fail to detect Covid-19 when it is present. Other tests can accurately detect cases but may be open to interpretation, or are only effective once an infection has reached a particular stage.
The two important measures for a test are “sensitivity” and “specificity.” A test with low sensitivity fails to detect the virus when it is present, producing false negatives, whereas one with low specificity may react to other viruses, producing false positives. To be really useful, a test should have more than 98% specificity and over 90% sensitivity. Some tests do meet this yardstick – a lab-based test developed by Roche is claimed to combine 99.8% specificity with 100% sensitivity.
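To see why those thresholds matter, it helps to work through the arithmetic. The sketch below (with an illustrative 2% prevalence, not a figure from any particular study) applies Bayes’ rule to show what fraction of positive results are genuine – the effect behind overestimates like Santa Clara’s:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV): the probability a positive result is a true
    positive, and a negative result a true negative, at a given prevalence."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# A test just meeting the yardstick above (90% sensitivity, 98% specificity)
# in a population where only 2% have actually been infected:
ppv, npv = predictive_values(0.90, 0.98, 0.02)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")
```

Even with a test this good, fewer than half of the positive results are real infections at 2% prevalence – the false positives from the uninfected 98% of the population swamp the true positives from the infected 2%.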
But having an effective test is only part of the story – how they are used is extremely important too.
Consider drive-through testing sites. They sound like a great way to test large numbers of people in a relatively safe way. Unfortunately, they only work for the share of the population with cars.
Randomly calling people by phone and asking them to come to hospital similarly over-represents those with access to transport, and under-represents those who are less likely to attend hospital. And those less likely to attend include people who are fearful because of underlying health conditions, and people in BAME groups, in whom Covid-19 seems to be more serious.
As with any research, it will only give useful results if the methodology is carefully thought through.
This seems to have been forgotten in the frantic rush to understand the dynamics of Covid-19 in real populations.
The public (and Government) all want a simple answer, but there seems to be a widespread lack of the will, or the ability, to put in the planning needed to get one.
Don’t get me wrong, I’m sure there are some highly effective, well-funded, well-staffed and well-planned testing programmes in existence. I just worry that they risk getting drowned out by the noise.
Until everyone developing testing programmes can take a step back and have a cold, hard look at the science, we run the risk of continuing to grab at quick-fixes that just don’t work in the long run.