Results from individual studies can be very precise, but precision alone does not always make them the best grounds for policy decisions.
First published in January 2021.
Germany has recently announced that it will not offer the AstraZeneca COVID-19 jab to over-65s due to insufficient data about its efficacy in that age group.
Meanwhile, initial data from Israel seemed to suggest that 14 days after the Pfizer/BioNTech first vaccine dose, patients only had a 33% reduced chance of infection – disappointingly low according to some reports. But more encouraging data has since emerged showing that after the second dose the vaccine is 92% “effective”.
These stories make headline news and stimulate heated debate as to whether authorities are making the right decisions about which vaccines to use. At a time of great uncertainty and contradictory viewpoints, many are quick to jump on any new data that appears to support their views. But these headline figures can be extremely misleading.
Germany’s AstraZeneca decision
The decision by the German Standing Committee on Vaccination (STIKO) not to recommend the Oxford-AstraZeneca vaccination to over-65s is perhaps a case in point.
The original data that regulators in all countries have been looking at, which was published in the Lancet, does indeed include relatively few trial participants in the over-65s category.
So on one hand the German regulator is correct to say there is not enough data from over-65s. But for others to extrapolate this observation to a conclusion that the vaccine is either ineffective or dangerous (and thus shouldn’t be given) in this older age group is not appropriate – absence of evidence is not evidence of absence. The decision is more likely to reflect a quirk of the way the German regulator works than a major medical or scientific issue that should worry other countries.
Asking the right question
Research is a complex process, and contrary to the popular saying, interpreting medical research is far trickier than even rocket science. One of the main problems is the difficulty in asking the right question, or working out whether the data being reported actually relates to the question that you (or the politicians) are interested in. Medical research is extremely specific, and it is dangerous to generalise conclusions from studies that are by necessity very precise.
Take, for instance, the difference between “efficacy” and “effectiveness”. Novavax has recently announced an extremely precise 89.3% efficacy for its new COVID-19 vaccine. So what should we make of this – another triumph of medical research or the start of a marketing campaign by the pharmaceutical company?
Here it is important to understand that efficacy relates to the performance of a vaccine under carefully controlled trial conditions, while effectiveness is the performance under real world conditions.
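To see where a headline efficacy figure like Novavax’s 89.3% comes from, it helps to know that trial efficacy is usually reported as a relative risk reduction: one minus the ratio of the attack rate among vaccinated participants to the attack rate among those given a placebo. Here is a minimal sketch, using made-up trial counts chosen purely for illustration (not the actual figures from any trial):

```python
def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Efficacy as relative risk reduction:
    1 - (attack rate in vaccinated group / attack rate in placebo group)."""
    attack_vax = cases_vax / n_vax
    attack_placebo = cases_placebo / n_placebo
    return 1 - attack_vax / attack_placebo

# Hypothetical example: 11 cases among 10,000 vaccinated participants
# versus 100 cases among 10,000 on placebo.
print(f"{vaccine_efficacy(11, 10_000, 100, 10_000):.1%}")  # prints "89.0%"
```

Note that this figure describes performance under the controlled conditions of the trial; as the article explains, the number observed once a vaccine is rolled out in the real world may well differ.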
So although efficacy may be a predictor of effectiveness, we shouldn’t be disappointed if vaccines perform differently in the real world compared to their clinical trial efficacy figures.
Expecting the expected
So why do pharmaceutical companies report efficacy figures when the rest of us are more interested in effectiveness?
The reason is that it is not always easy to define what we mean by vaccine effectiveness. We all want science to stop the disease and allow us to get back to normal, so this is probably what most people mean when they talk about an effective vaccine. But this apparently simple aspiration is not as straightforward as it seems.
Take the phrase “stopping the disease”. If we are hoping that vaccines will do this for us we may be disappointed. Vaccines can generally be useful in two different ways. They can either reduce the severity of infection, or they can stop the virus spreading between people. This latter function – known as sterilising immunity – is the holy grail of vaccine development, but in practice very difficult to achieve.
Most vaccines reduce the severity of disease and, if the vaccine designers are lucky, also reduce infectiousness at least a bit. The current coronavirus vaccines have been licensed mostly on the basis of reducing the severity of the disease, simply because data on transmission is much harder to get and often requires longer studies. This is why preliminary data, like that received from Israel, is not necessarily too concerning.
Also consider the phrase “back to normal”. What society is really interested in is reducing the number of people admitted into hospitals, and perhaps more specifically into intensive care. Without spare capacity in hospitals, all of our lives become significantly more dangerous.
Taking this as the main consideration, whether vaccines prevent infectiousness by providing sterilising immunity is perhaps not what we mean by effective for getting us back to normal. Just stopping people going to hospital should be enough for the vaccine campaign to be successful.
Taking the time to think
All this shows that data relating to vaccine efficacy, and apparently conflicting data from real-world situations, does not represent the whole picture, especially when trying to determine national vaccination strategies.
Realistically, any licensed vaccine is going to be safe and have a sufficient biological effect to contribute meaningfully to getting us back to normal. On an individual level, we should take any licensed vaccine we are offered.
Judging which vaccine works best in which situation is a problem for professional regulators and scientists because the parameters involved are so complex that the headline figures will never reveal the true story. And this is before we even start to consider the complications caused by new variants of the virus.
We must take care when determining where new data is coming from and whether it is reliable or complete. Medical research takes a very long time because it can be quite difficult to work out what data really means. This is the reason why the scientific community has a drawn-out publication process involving peer review.
This can be frustrating in a rapidly moving pandemic situation, but history (and even our experience over the last year or so) shows that we should be very wary about making far-reaching decisions based upon quick and dirty interpretations of new and exciting or contentious data.
Unfortunately the best way to catch mistakes is to spend time thinking about the research, and where possible collecting additional data to confirm or refute conclusions. This is essentially the scientific method, and operates in a very different time frame to the news or political cycle.
Dr Simon Kolstoe, Senior Lecturer in Evidence-Based Healthcare and University Ethics Advisor, University of Portsmouth. Chair of research ethics committees for the NHS, the Ministry of Defence & Public Health England.