Before any non-specialist attempts to write something on COVID-19, we should ask: are actual experts already writing on this, and if not, why not? Enough badly informed material has already been published by amateurs that before posting we should be certain we are actually being helpful.
Recently, several interesting preprints have come out on why the practical “herd immunity” level may be lower, perhaps far lower, than the oft-quoted 60-70%. This is mostly due to statistical variation in susceptibility to SARS-CoV-2, as well as differences in the propensity to spread the disease.
- Individual variation in susceptibility or exposure to SARS-CoV-2 lowers the herd immunity threshold, M. Gabriela M. Gomes, Rodrigo M. Corder, Jessica G. King, Kate E. Langwig, Caetano Souto-Maior, Jorge Carneiro, Guilherme Goncalves, Carlos Penha-Goncalves, Marcelo U. Ferreira, Ricardo Aguas
- The disease-induced herd immunity level for Covid-19 is substantially lower than the classical herd immunity level, Tom Britton, Frank Ball, Pieter Trapman
These recent preprints appear to be good, but they are particularly toxic to discuss, even though they are part of mainstream epidemiology. Large parts of the internet have lost their minds on the particular topic of herd immunity. Simply mentioning it makes the trolls and conspiracy theorists come out in droves. Serious scientists have therefore learned to be very hesitant before discussing herd immunity in public.
So, this is my excuse why I as a non-specialist feel compelled to do an explainer of these new papers: I’m not a serious scientist with a scientific reputation to worry about.
NOTE: This article in no way argues for rapid lifting of control measures or lock-downs. If you read it that way, that is all up to you. I personally am still going nowhere.
R, the reproduction number
Remember R, the famous ‘reproduction number’ of COVID-19. Professional epidemiologists and virologists must have been kicking themselves when everyone started obsessing over this number, “The nuclear!! R0 of COVID-19” as one other non-specialist called it.
R is the average number of new infections one infected patient causes. Anything above 1 means the outbreak grows, and the higher it is the faster the epidemic spreads. R0 is the R for when no one has yet built up any immunity.
The “R of COVID-19” is oft quoted as 2 or 3 or even 4. But the thing is, R is not a property of the virus. We do know that a SARS-CoV-2 virion is around 100 nm in diameter, and we know its RNA genome has around 30,000 nucleotides. We also know, down to the atom, the shape of many of its components. These are all properties of SARS-CoV-2. But R itself is not a viral property.
R is a property of at least:
- The SARS-CoV-2 virus
- the nature of the population (demographics, health, ethnicity)
- how people interact (public transport, festivals, working conditions, etc)
- the weather
- building ventilation standards
- countermeasures (masks etc) / restrictions on meetings
So in short, there is no “R of the virus”, but we can say that in a given location and time, a certain reproduction number is observed (and even that is not easy to do).
Through sometimes draconian measures, many countries have now been successful in driving R down to below 1, which means the epidemic is shrinking in those places.
Herd immunity
Or with a more friendly name, “community immunity”. If a typical patient would normally infect 3 more people, the virus would stop growing only once two thirds (67%) of the population has achieved immunity - because that drives the effective R down to 1.
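This classical threshold follows from a one-line calculation: with a fraction s of the population still susceptible, the effective reproduction number is R0 times s, so growth stops once 1 - 1/R0 of the population is immune. A minimal sketch:

```python
# Classical homogeneous herd immunity threshold: growth stops when the
# effective R (= R0 times the susceptible fraction) drops to 1.
def herd_immunity_threshold(R0: float) -> float:
    return 1 - 1 / R0

for R0 in (2, 3, 4):
    print(f"R0={R0}: {herd_immunity_threshold(R0):.0%}")
# R0=3 gives the 67% (two thirds) quoted above
```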
Sadly, it turns out epidemics have a momentum of their own, and while the growth of new infections will stop at the point of community immunity, there can be a substantial overshoot, since people were still incubating at that point (‘in the process of getting ill’).
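The size of this overshoot can be computed for the simplest homogeneous SIR model from the standard textbook final-size relation z = 1 - exp(-R0 * z), where z is the fraction of the population ever infected (this is a generic result, not specific to these preprints):

```python
import math

# Final-size relation for a homogeneous SIR epidemic left to run unchecked:
# z = 1 - exp(-R0 * z), with z the fraction of the population ever infected.
def final_size(R0: float) -> float:
    z = 0.5
    for _ in range(200):    # simple fixed-point iteration, converges for R0 > 1
        z = 1 - math.exp(-R0 * z)
    return z

R0 = 3.0
print(f"growth stops at {1 - 1/R0:.0%} immune, but {final_size(R0):.0%} get infected")
```

With R0 = 3, growth stops once 67% are immune, but roughly 94% of the population ends up infected: the overshoot is substantial.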
Yet the concept fascinates us, because if herd immunity could be achieved, life could more or less go back to normal.
Of vital importance: even the most heavily impacted regions appear to see only around 20% of people carrying antibodies against SARS-CoV-2, and most places report far lower numbers (less than 4%).
This would appear to be bad news, as it foreshadows that lifting restrictions would fan the flames of the epidemic again, until a 60-70% infection rate is reached.
But, luckily for us perhaps, things may be more complicated than this.
Variability of susceptibility
Interestingly, even if we know the R of a given population/weather/behaviour constellation, that number is a population average. It does not capture, for example, that children appear to be much less infectious than adults. The “R” for children is likely far lower, which means the R for adults must be higher.
Does this matter? Much like 99% of people have more legs than average (yes), averages can mask a more complex reality.
Our immune system consists of many layers, each of which attempts to stave off disease. Our innate immune system is generic, and can prevent infection by even unknown viruses and bacteria. In addition, our adaptive immune system has a long memory and could have antibodies “ready” that can rapidly be adapted to a new disease.
The performance of our innate immune system depends on a lot of factors, and it therefore stands to reason not everyone will be equally susceptible to COVID-19.
As a toy example, let’s assume that (as with influenza) only 25% of people are infected if you expose them to a moderate aerosol dose of COVID-19.
Let’s also assume we’ve measured an R of 3, so a single “index” patient will infect 3 more people. Because we assume only 25% of exposed people get infected, we can reason that in total 12 people were exposed, of whom only 3 got infected.
Interestingly, out of these 13 (1 + 12) people, 4 (1 + 3) got ill in total. But within this group, no further infections are likely. Even though only around 30% got infected, in our toy model, this group has achieved “community immunity”.
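The arithmetic of this toy example, spelled out (all numbers are the assumptions above, not measurements):

```python
# Toy example from the text: R = 3 is observed, but only 25% of exposed
# people actually get infected (an assumed, influenza-like number).
R = 3
p_infect = 0.25
exposed = R / p_infect          # 12 people had to be exposed to produce 3 cases
group = 1 + exposed             # 13 people in total, including the index patient
ill = 1 + R                     # 4 of them actually fell ill
print(f"infected: {ill / group:.0%} of the group")   # about 31%
```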
Now, how would this scale up to the real world? Intuitively, we may see that the highly susceptible people are rapidly infected, and they initially continue to infect each other at a high rate... until the highly susceptible people run out. At that point the virus can only spread through far less susceptible candidates. This goes some way towards explaining why community immunity might be reached a lot earlier with variable susceptibility.
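This intuition can be sketched numerically. Below is a toy SIR simulation of my own construction (not from either preprint), with two equally sized groups where one is four times more susceptible than the other; all numbers are illustrative assumptions:

```python
# Toy two-group SIR model (my own sketch, all numbers assumed): half the
# population is 4x more susceptible than the other half, R0 = 3 overall.
R0, gamma = 3.0, 0.2            # assumed: R0 of 3, 5-day infectious period
b = [1.6, 0.4]                  # relative susceptibility per group (mean 1)
beta = R0 * gamma
S = [0.5 - 1e-5, 0.5 - 1e-5]    # susceptible fractions, minus tiny seeds
I = [1e-5, 1e-5]                # infective fractions
dt, hit = 0.01, None
for _ in range(200000):
    lam = beta * (I[0] + I[1])  # force of infection, uniform mixing
    for i in range(2):
        new = b[i] * lam * S[i] * dt
        S[i] -= new
        I[i] += new - gamma * I[i] * dt
    # effective R is R0 times the susceptibility-weighted susceptible fraction
    if R0 * (b[0] * S[0] + b[1] * S[1]) < 1:
        hit = 1 - (S[0] + S[1]) # fraction no longer susceptible
        break
print(f"herd immunity reached at {hit:.0%} (homogeneous: {1 - 1/R0:.0%})")
```

In this toy the threshold comes out around 53%, versus 67% for a homogeneous population with the same R0; the exact number depends entirely on the assumed spread of susceptibility.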
With actual non-toy models, it is possible to calculate the effect of variable susceptibility, as explored in “Individual variation in susceptibility or exposure to SARS-CoV-2 lowers the herd immunity threshold” by M. Gabriela M. Gomes from the Liverpool School of Tropical Medicine (and Ricardo Aguas, Oxford, and many others).
In these two graphs, the epidemic in Italy is modeled using low variability in susceptibility (left), or moderate variability (right). As input to this model, they assumed the lock-down measures successfully drove R below one, but that this lock-down is gradually relaxed starting from May 20th. This relaxation can be seen in the middle graph labeled R0. They assume that everything is “back to normal” in May 2021.
The top graph shows the percentage of the population infected at any point in time, and it shows the initial peak of around 0.01% of the population reported as falling ill every day.
The bottom graph shows the “effective R”, which takes the variability of susceptibility into account, and the relaxation of the control measures.
The low variability graph on the left predicts a huge new outbreak if containment measures are relaxed. The moderate variability graph on the right shows a much smaller second wave.
Clearly the variability in susceptibility matters a lot. Just how much is made clear by this graph:
There’s a lot going on here. First focus on the solid lines - these represent the calculated herd immunity level, on the y-axis. On the x-axis is plotted the variability of both susceptibility and exposure (more about which later).
We see that if the ‘coefficient of variation’ (CV) of susceptibility is 4, community immunity is already achieved at a paltry 10% (the solid black line).
But, what can we expect for this CV value? The vertical lines may help. Green lines represent malaria, blue lines tuberculosis. The red line stands for the original SARS (SARS-CoV-1). And finally, the dotted black line is derived from a single Wellcome preprint analysing various COVID-19 outbreaks.
Although the research is still very preliminary, a CV of 2-3 seems eminently reasonable to expect, and our best current guess is in fact 3.2, seemingly hinting at a sub-20% community immunity threshold.
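Per my reading of the Gomes et al. preprint, when individual susceptibility follows a gamma distribution the threshold has a closed form: 1 - (1/R0)^(1/(1 + CV^2)), with the exponent becoming 1/(1 + 2 CV^2) when the variation is in exposure/connectivity instead. A small sketch, assuming R0 = 3:

```python
# Herd immunity threshold under gamma-distributed individual susceptibility,
# per my reading of the Gomes et al. preprint: 1 - (1/R0)^(1/(1 + CV^2)).
def hit_gamma_susceptibility(R0: float, cv: float) -> float:
    return 1 - (1 / R0) ** (1 / (1 + cv * cv))

R0 = 3.0                        # assumed basic reproduction number
for cv in (0, 1, 2, 3):
    print(f"CV={cv}: {hit_gamma_susceptibility(R0, cv):.0%}")
# CV=0 recovers the classical 67%; higher variability pushes it far lower
```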
Variability of exposure
Some people go to festivals and travel by crowded public transport, while others barely go out and do not mingle a lot. It stands to reason that such variability of exposure can similarly reduce the community immunity level - highly mobile people get infected first, reducing effective transmission of the virus.
Tom Britton, Frank Ball and Pieter Trapman argue in “The disease-induced herd immunity level for Covid-19 is substantially lower than the classical herd immunity level” that a model that takes such interaction levels into account leads to a lower community immunity threshold, even without assuming variable susceptibility.
In their model, the population consists of six age groups, with coefficients describing the mixing between these groups. In addition, each age group is divided into low, medium and high “activity” groups, where activity means the number of contacts someone has. The proportions of these activity levels differ per age group.
Using conservative numbers, Britton et al derive that community immunity is sensitive to better modeling of exposure. Coming from a mathematical background, and likely well aware of the controversy around (Swedish) herd immunity studies, the authors are very careful not to state any definitive conclusions.
When I read their paper, I see that with very conservative inputs their model shows a reduction of the herd immunity threshold from 60% to 43%, purely based on variable mixing patterns in the population.
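The activity dimension of their model can be sketched on its own. Assuming (per my reading of the paper) activity levels of 0.5/1/2 for 25%/50%/25% of the population and R0 = 2.5, susceptibles in more active groups deplete faster during the epidemic, and the disease-induced threshold is the point where the effective R over the remaining susceptibles first drops to 1:

```python
import math

# Sketch of Britton et al.'s "disease-induced" threshold, activity dimension
# only (age structure omitted). Assumed inputs, per my reading of the paper:
# activity levels 0.5/1/2 for 25%/50%/25% of the population, R0 = 2.5.
p = [0.25, 0.50, 0.25]          # population fractions per activity level
a = [0.5, 1.0, 2.0]             # relative contact rates
R0 = 2.5

Ea2 = sum(pi * ai * ai for pi, ai in zip(p, a))

def R_eff(theta):
    # With proportionate mixing, group i's susceptibles deplete as
    # exp(-a_i * theta), where theta is the cumulative force of infection.
    return R0 * sum(pi * ai * ai * math.exp(-ai * theta)
                    for pi, ai in zip(p, a)) / Ea2

lo, hi = 0.0, 10.0              # bisect for the theta where R_eff crosses 1
for _ in range(60):
    mid = (lo + hi) / 2
    if R_eff(mid) > 1:
        lo = mid
    else:
        hi = mid

dihit = 1 - sum(pi * math.exp(-ai * lo) for pi, ai in zip(p, a))
print(f"classical threshold: {1 - 1/R0:.0%}, disease-induced: {dihit:.0%}")
```

With these inputs the disease-induced threshold comes out around 46%; the paper's full model, which adds the age structure, lands at 43%.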
In this graph, various levels of ‘lock-down’ are implemented until day 135, at which point all restrictions are abruptly lifted; an alpha of 1 corresponds to no restrictions.
On the y-axis is the number of concurrently infective people, which more or less corresponds to the number of patients ill at that time. In their model, with no control measures the epidemic peaks at nearly 10% of the population being ill at the same time.
Various levels of restrictions are plotted. The net effect is more easily seen on this plot of cumulative infections:
Without any measures, in total somewhat more than 70% of the population gets infected - likely due to the ‘overshoot’ phenomenon mentioned earlier. With increasingly stringent measures, the epidemic indeed stops earlier than at the classical ‘60%’ herd immunity assumed in this paper.
Tantalizingly, at least in this model, a brief but very heavy lock-down followed by a complete relaxation of restrictions actually leads to a larger total amount of infections. The supplemental information of the article has additional graphs for more gradual lifting of lock-down.
Discussion & Conclusion
In the Gomes paper we see the large impact of variability of susceptibility (and connectivity), while the Britton paper theorizes variable mixing and activity can also have a significant effect.
From what I understand, the combined effect of these two has not been modeled. There might be significant interaction between susceptibility and exposure - for example, people in ill health could be way more susceptible, and also not go out much.
Much more research is clearly needed to see how large the effect is in real life. At least one prominent researcher opines it might only move the needle a bit from “very very bad”. Marc Lipsitch also has written a careful thread on the practical implications and sees a somewhat bigger impact (with caveats).
But for me, the inescapable conclusion is that any level of variability in susceptibility and exposure will lower the effective ‘herd immunity’ level, and this can only be good news. Time will tell how large this effect is.