*Megan Bell*

So, you’ve gone to the doctor with some very strange symptoms. Your GP is completely stumped, and they decide to give you a few tests for very rare diseases that fit your symptoms. A week later you get your results back and you’ve tested positive for one of them. Perhaps something like cystic fibrosis, which affects approximately 1 in every 10,000 people in the UK. How worried should you really be?

The answer lies once again with Bayes’ Theorem.

There are four possible outcomes of a test, summarised in this table:

|                   | Has the disease | Doesn't have the disease |
| ----------------- | --------------- | ------------------------ |
| **Test positive** | True positive   | False positive           |
| **Test negative** | False negative  | True negative            |

In medical papers, tests are often quoted with values for ‘specificity’ and ‘sensitivity’. The *sensitivity* is how likely you are to test positive if you have the disease – so a sensitivity of 99% means that out of 100 people who have the disease, 99 will test positive (true positive) and 1 will test negative (false negative). The *specificity* is how likely you are to test negative if you don’t have the disease, so a specificity of 99% means that out of 100 people without the disease, 99 will test negative (true negative), and 1 will test positive (false positive). These values are shown in the table below.

|                   | Has the disease  | Doesn't have the disease |
| ----------------- | ---------------- | ------------------------ |
| **Test positive** | Sensitivity      | 1 − Specificity          |
| **Test negative** | 1 − Sensitivity  | Specificity              |

Now, back to our original example. The disease you have tested positive for is very rare, so let’s say that only 1 in every 100 people with your symptoms actually have the disease. The test you took is 99% sensitive, and 94% specific.

According to Bayes’ Theorem, these numbers mean that if you test positive, there is in fact only a 14% chance you actually have the disease… but how is that possible? Before we jump straight to using the mathematical formula, let’s see if we can work this out intuitively.

If there are 100 people with your symptoms, 1 will actually have the disease. Since the test is 99% sensitive, that person will very likely test positive. Of the 99 remaining people who don’t have the disease, 93 will test negative and 6 will test positive, because the test is 94% specific.

Therefore if you test positive, you are one of 7 people who have tested positive, so there is a 1 in 7 chance of you actually having the disease.
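This head-count can be written as a short Python sketch (my own illustration, using the article’s numbers):

```python
# Counting outcomes for 100 people with the symptoms.
# Assumptions from the article: 1 in 100 has the disease,
# the test is 99% sensitive and 94% specific.
people = 100
with_disease = 1
without_disease = people - with_disease  # 99

true_positives = round(with_disease * 0.99)       # 1 person, almost certainly positive
false_positives = round(without_disease * 0.06)   # about 6 of the 99 healthy people
total_positives = true_positives + false_positives

chance = true_positives / total_positives
print(chance)  # 1/7, about 0.143
```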

Let’s check this with Bayes’ Theorem. From the first article in this series, we know that the general formula is:

P(A|B) = P(B|A) × P(A) / P(B)

where here the event A is having the disease, and the event B is testing positive.

The probability that you test positive given you have the disease is the sensitivity of the test, 0.99 in this case. The probability you have the disease is 0.01, as 1 in every 100 people with your symptoms have the disease.

The probability of a positive test result is a bit more complicated: it is the sum of the probability of a true positive and the probability of a false positive. The probability of a true positive is 0.01 × 0.99, as 1 person in every 100 has the disease and tests positive 99 times out of 100. The probability of a false positive is 0.99 × 0.06, as 99 people in every 100 don’t have the disease and will test positive 6 times out of 100. Therefore, overall we have:

P(B) = (0.01 × 0.99) + (0.99 × 0.06) = 0.0099 + 0.0594 = 0.0693

and so

P(A|B) = (0.99 × 0.01) / 0.0693 ≈ 0.143

which gives a probability of you *actually* having the disease of 14% or 1/7, exactly as calculated above.
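The whole calculation fits in a few lines of Python; the function name `posterior` is just my own label for P(A|B):

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' Theorem."""
    true_pos = prior * sensitivity                # P(positive AND disease)
    false_pos = (1 - prior) * (1 - specificity)   # P(positive AND no disease)
    return true_pos / (true_pos + false_pos)

# Article's example: 1 in 100 prior, 99% sensitive, 94% specific.
print(posterior(0.01, 0.99, 0.94))  # 1/7, about 0.143
```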

As you can imagine, if everyone in the country were checked for diseases like breast cancer or prostate cancer, there would be a lot of false positives. Mammograms for screening breast cancer can range from 70–95% sensitive, and can be up to 91.5% specific. Both figures are lower than the accuracies we used in our example, so if there is still only a 1 in 100 chance you have breast cancer before screening, a large proportion of positive results will be false positives.

This is why the NHS only screens people who are already at risk because they have a family history of cancer, or are over a certain age. This reduces the number of false positives, and means the test results are much more likely to be correct.

So far I’ve purposefully avoided any mention of the big elephant in the room – COVID-19 – but I’d say we now have enough of an understanding of disease testing to give it a go…

Bayes’ Theorem is currently being used to shape some of the scientific advice about coronavirus. Many countries and venues are introducing temperature checks or swab tests at borders and entrances, so that people who have a fever (and therefore could be infected) are barred entry. But are these actually effective?

At the height of the pandemic, it was estimated that as many as 1 in every 400 people in the UK could have COVID-19 at any one time. This means that if you picked any person at random, there is a 0.0025 probability that they are infected. Some studies have suggested up to 80% of people with COVID-19 develop a fever – a very generous estimate, but the one we’ll use as a best-case scenario. We’ll also assume that the thermometer will always detect if someone has a fever, so the sensitivity of this test is 80%. Some people with a fever won’t have coronavirus, leading to a false positive. Let’s assume that there is 1 person in every 400 (0.25%) that has a fever but not coronavirus, so the specificity of the test is 99.75% – which is VERY high (and much better than the 94% we had with our first example above).

If there is a 0.0025 chance of being infected, there is a 0.9975 chance of not being infected. If you are infected, then the chance of you testing positive is 0.0025 x 0.8 (the chance of being infected multiplied by the sensitivity of the test at 80%). If you are not infected, the chance of also testing positive is 0.9975 x 0.0025 (the chance of not being infected multiplied by one minus the specificity of the test, as the test will be positive in 0.25% of people who are not infected).

Therefore, from Bayes’ Theorem:

P(infected | fever) = (0.0025 × 0.8) / ((0.0025 × 0.8) + (0.9975 × 0.0025)) ≈ 0.445

This would suggest that if you have a fever as detected by the thermometer test, there is only a 45% chance you actually have COVID-19.
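Here is the same fever-check calculation written out in Python (an illustration using the estimates above):

```python
# Fever-check estimates from the article.
prior = 1 / 400          # chance a random person is infected: 0.0025
sensitivity = 0.80       # 80% of infected people develop a fever
specificity = 0.9975     # 0.25% of healthy people have a fever anyway

p_true_pos = prior * sensitivity                    # infected AND feverish
p_false_pos = (1 - prior) * (1 - specificity)       # healthy AND feverish
p_infected_given_fever = p_true_pos / (p_true_pos + p_false_pos)
print(round(p_infected_given_fever, 3))  # about 0.445
```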

Don’t get too worried, though: this doesn’t mean that these checks are pointless! Most people who are infected are picked up by the temperature check, because the test will catch 80% of those who are infected. The problem is that a lot of healthy people will also test positive, and may have to be quarantined when it isn’t necessary.

Luckily there’s a very simple solution. If a second test that’s not a temperature check (also with 80% sensitivity and 99.75% specificity) is taken, the chances of you actually being infected if you get a positive result rise to … (drum roll please)…. 99.6%!

Because you’ve already had one positive test, the new probability that you have the disease has risen from 0.0025 to 0.445, and the chance of not having the disease is therefore (1 − 0.445) = 0.555. Plugging the new numbers into Bayes’ Theorem:

P(infected | second positive) = (0.445 × 0.8) / ((0.445 × 0.8) + (0.555 × 0.0025)) ≈ 0.996
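The two-test calculation is just Bayes’ Theorem applied twice, feeding the first result back in as the new prior. A quick Python sketch (the `update` helper is my own naming):

```python
def update(prior, sensitivity, specificity):
    """One Bayesian update: new P(infected) after a positive test."""
    tp = prior * sensitivity
    fp = (1 - prior) * (1 - specificity)
    return tp / (tp + fp)

p = 1 / 400                      # starting chance of infection
p = update(p, 0.80, 0.9975)      # after first positive: about 0.445
p = update(p, 0.80, 0.9975)      # after second positive: about 0.996
print(round(p, 3))
```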

So, if you get a second positive test, it’s *very* likely that you actually are infected. This is why many countries that are using a track and trace system require you to receive a negative test result at least two times before you can be released from isolation.

Finally, to really get a feel for what’s happening with these tests, let’s play around with the numbers a little. Suppose we keep the sensitivity of the second test at 80% and decrease the specificity to 95%, then the chance you have the disease if you test positive decreases to 93% (from 99.6% above). So, as we can see, even very small changes in the specificity or sensitivity of the test can mean the results are much less certain.
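You can experiment with the specificity yourself. This sketch reuses the article’s numbers, plus one extra hypothetical value (90%) to push the trend further:

```python
def posterior(prior, sensitivity, specificity):
    """P(infected | positive test) via Bayes' Theorem."""
    tp = prior * sensitivity
    fp = (1 - prior) * (1 - specificity)
    return tp / (tp + fp)

prior = 0.445  # chance of infection after one positive temperature check
for spec in (0.9975, 0.95, 0.90):  # 0.90 is a hypothetical extra value
    print(spec, round(posterior(prior, 0.80, spec), 3))
# specificity 0.9975 -> about 0.996
# specificity 0.95   -> about 0.928
# specificity 0.90   -> about 0.865
```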

In fact, this is a major problem for many governments when they are trying to develop and buy tests for a disease such as COVID-19. Some PCR tests (the ‘swab’ tests) are 97% sensitive and 97% specific, meaning they test negative for 3 in every 100 people who are infected, and test positive for 3 in every 100 people who are not infected. The number of missed cases can increase even more due to human error. If the swabs aren’t taken properly, or the chemicals used are out of date and poor quality, the testing sensitivity can drop to 66%, which dramatically reduces the accuracy.

Antibody tests are much quicker, and only involve taking a finger prick of blood. These can be up to 99% sensitive and 98% specific, meaning only 1 in every 100 infected people tests negative, and 2 in every 100 healthy people test positive. However, as COVID-19 infection rates drop, the chance that any given person has the disease falls, so a much larger share of the positive results will be false positives. As you can imagine, needing to test everyone at least twice would be very expensive and time-consuming, so governments want tests to be as accurate as possible before they start using them on the whole population.
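To see the effect of falling infection rates on the antibody test, here is a small sketch; the lower prevalence values (1 in 2,000 and 1 in 10,000) are hypothetical illustrations, not figures from the article:

```python
def posterior(prior, sensitivity, specificity):
    """P(infected | positive test) via Bayes' Theorem."""
    tp = prior * sensitivity
    fp = (1 - prior) * (1 - specificity)
    return tp / (tp + fp)

# Antibody test from the article: 99% sensitive, 98% specific.
# As prevalence falls, the chance a positive result is genuine plummets.
for prior in (1/400, 1/2000, 1/10000):
    print(prior, round(posterior(prior, 0.99, 0.98), 3))
```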

Again, this doesn’t mean that the testing being done is completely useless! If you already have symptoms of COVID-19, your chances of having the disease are much higher than 1 in 400, because that value was for the entire population, whether or not they had symptoms.

Coronavirus symptoms could generally apply to many other illnesses, such as a common cold or flu. But, because it’s currently not flu season, if you have a fever and a cough, it’s very likely to be due to coronavirus. Often these tests are just used to confirm what doctors already know, to rule out other conditions and make sure you get the right treatment.

Ultimately, no test is ever going to be perfect, and there will always be some uncertainty in any result; we just have to do the best we can. A test that is 80% sensitive isn’t as good as it seems at first, but as long as you test again you can be pretty much certain! Just don’t go rushing to your nearest test centre at the first sign of a tickle in your throat…

So far in this series we’ve been introduced to Bayes’ Theorem and seen how it can help governments to stop outbreaks of viruses like COVID-19, and screen for deadly diseases like cancer; but it also has a vital role to play in our courtrooms. When judges and lawyers don’t apply Bayes’ Theorem properly, there can be life-changing consequences, which we will discuss in the final article of this series.

Article 1: Ghosts, Spam Emails and Bayes’ Theorem

Article 3: The Prosecutor’s Fallacy
