Time to accept that we do not know how to prepare for pandemics
GFO Issue 424


Authors:

Quentin Batréau* & Daniel Townsend**

Article Type:
Analysis

Article Number: 4

ABSTRACT With renewed attention on pandemic prevention, preparedness and response from the global health community, this article explores the inadequacies of the current indices used to measure country readiness. Drawing on lessons learned throughout the HIV epidemic, we propose a way to improve these indices by making them more community-focused.

The COVID-19 pandemic exposed the global lack of pandemic preparedness. Despite drastic measures, containment failed, and millions died. Our best measure of COVID-19 deaths comes from the World Health Organization (WHO) and its excess mortality estimates. The measure is imperfect because it covers only 2020 and 2021, while the pandemic was still ongoing at the end of 2022, but it at least gives a number: 14.9 million deaths caused by COVID-19, directly or indirectly.

Keep this number in mind throughout the discussion that follows on the dry topics of indicators and measurements. It is what is at stake. But not all that is at stake: it captures only a fraction of the disease's impact, leaving aside the short- and long-term harm of isolation, poverty and lost opportunities caused by the pandemic, children falling behind in their education, and the heightening of gender imbalances worldwide. These harms are no less important, merely harder to count and quantify. 14.9 million is the number of people who paid with their lives for our collective failure to stop COVID-19 in its tracks. When talking about the pandemic, we cannot forget that we are talking from atop a pile of graves.

Long before the pandemic, many had been warning that the world was vulnerable to the emergence of new pathogens. In the past twenty years, the 'close calls' of SARS, H1N1, MERS and Ebola focused attention on preparing for pandemics, at least within the scientific community and some policy circles. The consensus as of early 2020, later expressed in the main report of the Independent Panel for Pandemic Preparedness and Response (IPPPR), was that the world was by and large not ready, that not enough had been learned from our brushes with pandemics, and that we were leaving ourselves wide open to a "big one" that could be just around the corner.

Evaluating vulnerability to health threats

One is then tempted to think that the emergence of COVID-19, the failure of containment and the inadequate response have validated those early watchdogs. But, as the IPPPR report shows, the reality is more complicated: if claims of vulnerability were correct in their generalities, they were wrong in their details. While the world as a whole was described as vulnerable, some countries were seen as more susceptible than others. Two essential tools have been developed to assess a country's readiness to face health threats: the Joint External Evaluation (JEE) and the Global Health Security (GHS) Index.

Two global tools to assess national resilience to health threats

Both tools have been developed by experts in the field and are grounded in realistic assumptions.

The JEE is a voluntary collaborative assessment of 19 technical areas. It is used primarily to assess and validate a country's capacity to prevent and respond to public health threats and emergencies. It was developed by WHO following the ten-year assessment of its International Health Regulations (IHR) in 2015, which recommended that some form of independent evaluation of national capacity would help strengthen countries' ability to comply with the IHR. It is an administrative monitoring tool whose primary purpose is to give governments a framework for identifying gaps in their capacity. Because it is rooted in the IHR, WHO's international legal framework, it is available to many countries.

The GHS Index is a tool developed collaboratively by the Nuclear Threat Initiative, the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health, and the Economist Intelligence Unit. Its first edition was launched in 2019, and it explicitly aims to fill some of the gaps left by the JEE in order to provide a more comprehensive and reliable assessment of a country's readiness. It uses publicly available data and was made available for most countries in its two editions (2019 and 2021).

JEE scores and the GHS Index appear to have been well regarded as pandemic preparedness measures. One crucial difference is that the GHS Index is available for nearly all countries, whereas the JEE is only available for about ninety, with unequal regional coverage: countries of Latin and Central America and of Europe are notably underrepresented. This difference aside, both tools agree on where investment is most badly needed. They were both used to argue that considerably more financing of pandemic preparedness was necessary because the world was, in general, inadequately prepared. COVID-19, the first 'proper' pandemic since HIV, was a chance to test both measures.

Neither tool did an adequate job

The IPPPR found in its first report that neither JEE scores nor the GHS Index were predictive of COVID-19 death rates, an assessment shared by WHO teams and researchers, and reflected in the GHS Index 2021 report. Some of these early results were imperfect because they used data potentially biased against the JEE and the GHS Index; in particular, better-equipped countries might report more COVID-19 cases and deaths because of superior testing capacity, not because of a genuinely higher caseload or death toll. Excess mortality data circumvent some of these problems and provide a more robust test of JEE scores and the GHS Index. The authors' calculations used WHO excess mortality data, GHS Index scores for 184 countries, and JEE scores for 84 countries. Neither index was predictive of COVID-19 outcomes, despite their comprehensiveness. It is helpful to distinguish between the two indices to discuss the importance of these results.
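To make this kind of test concrete, the sketch below shows one simple way to check whether an index predicts outcomes: a rank correlation between index scores and excess mortality. It is a minimal illustration, not the authors' actual analysis; the file and column names are hypothetical.

```python
# Minimal sketch of a predictiveness check, NOT the authors' actual analysis.
# Assumes two hypothetical CSV files:
#   ghs_index.csv            -> columns: country, ghs_score
#   who_excess_mortality.csv -> columns: country, excess_deaths_per_100k
import pandas as pd
from scipy import stats

ghs = pd.read_csv("ghs_index.csv")
mortality = pd.read_csv("who_excess_mortality.csv")

# Pair each country's index score with its excess mortality outcome.
df = ghs.merge(mortality, on="country").dropna()

# Spearman rank correlation: if the index were predictive, higher scores
# should go with lower excess mortality (a clearly negative rho).
rho, p_value = stats.spearmanr(df["ghs_score"], df["excess_deaths_per_100k"])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f}) across {len(df)} countries")
```

A rho near zero, as the analyses cited above report, means that knowing a country's score tells you essentially nothing about how it fared.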

First, the JEE. It was not designed as a comprehensive indicator of pandemic preparedness but as a monitoring tool for implementing the IHR. It can be argued that the IHR provide some level of protection against emerging pathogens, and therefore the weak relationship between JEE scores and COVID-19 outcomes is worrying. The consensus is that the IHR need revising and our global regulatory toolkit needs expanding, and that process is ongoing: hence WHO's work on a Pandemic Treaty. The fact is that the JEE is an imperfect tool, one that was used as an assessment of pandemic preparedness mostly because, at the time, there were few viable alternatives of comparable scope.

The GHS Index, on the other hand, was created as an evaluation instrument and should have been predictive. The authors, conscious of this issue, addressed it in their 2021 report: "The GHS Index, as with other models, should be viewed not as a predictive measure, but as an assessment for understanding the existing capacities of countries to prevent, detect, and respond to outbreaks, whether deliberate, accidental or naturally occurring." In other words, according to the authors, their method is sound, but bad political leadership contributed to low performance in top-ranked countries, particularly the United States.

This assessment is misleading for two reasons. First, while the United States is an outlier, it does not account for the overall underperformance of the index. The data show that the issue is not a few isolated cases where countries' preparedness was buried by unsound leadership. Instead, there is an almost complete disconnect between GHS Index scores and COVID-19 excess mortality: some high-ranking countries did very well (Israel, Japan, New Zealand), others terribly (Armenia, Bulgaria), some were adequate (Australia, Canada), and every shade in between. And the issue is not limited to high-scoring countries; among those with middling GHS Index scores, Indonesia and Turkey underperformed, while Bhutan and Vietnam beat the odds.

Attention should not be focused solely on a few examples of botched responses, such as Brazil or the United States; others failed just as badly but less bombastically (see India), and small miracles went unremarked (Laos, Sri Lanka). Second, the resilience of bureaucracies to political interference is no harder to account for than many of the other dimensions of pandemic preparedness included in the indices. If history proves that a country's fate in the face of a new infection is mainly controlled by the behavior of its leadership (and, to be clear, this remains to be shown), any measure of pandemic readiness should aim to capture that rather than throw in the towel and leave future health threats unaccounted for.

So where do we go from here?

We believe that, in their current form, the JEE and the GHS Index must be retired as indicators of countries' readiness to fight global health threats. At the very least, they should not be used by global funders as guides for investment in pandemic preparedness, since they so clearly do not point in the right direction. This is not an indictment of the authors of these tools. The JEE and the GHS Index are both grounded in expertise and reasonable assumptions. And yet sometimes, knowledge and reason are trumped by reality.

Where to go from here? We still need to invest in pandemic preparedness. COVID-19 has not disappeared, and other worrying pathogens are spreading. Time is scarce, so we need tools to tell us how and where to invest. The challenge is that, as far as we know, there is no obvious, proven replacement for the JEE and the GHS Index, which is largely why many fall back on these indices. And why not, you might say? Aren't imperfect tools better than none? The answer, which should be obvious, is that the tool analogy is entirely inappropriate. These are not imperfect tools; they are tools proven to be useless. We should leave them behind for the same reason we should not treat COVID-19 with ivermectin, however widely it was touted: it, and they, do not work.

So we need to find an alternative. It may be that global indices cannot be made predictive, that there is something fundamentally local about pandemic preparedness. Or it might be that the indices were just off-centre, over-emphasizing this and under-emphasizing that, on a scale wide enough to make them useless. Whatever we find has the potential to be helpful and to improve our chances against the next pathogen, but we have to look for it and pay attention to the results.

Now, this is all well and good, but what if you found yourself with $1.4 billion to spend on pandemic preparedness and response, and a tight deadline? First, some fraction of that funding should go to research; this kind of research is not expensive in the grand scheme of things. Second, again, do not use the JEE as the basis for building your theory of change and key performance indicators. Third, if the timeline cannot be helped, prioritize dual-use investments. If we cannot be sure which investments will improve pandemic preparedness, we can at least make sure money is not wasted by putting it where it makes a difference against current threats: health system strengthening, fostering local manufacturing, or supporting community health workers.

We want to close with our thoughts on where to find better indicators of a country's pandemic preparedness. Drawing on the lessons of the HIV epidemic, we know that much expertise about the state of the health system, the population's trust in its government and its willingness to follow health directives does not filter upward into national statistics. We will miss crucial information on stopping pandemics if we do not work with community groups to co-create knowledge. Our further work in this area, and subsequent articles, will try to determine how this approach can be scaled up and fed into better measurements of pandemic preparedness.

*Quentin Batréau, PhD, is GFAN's Communications and Advocacy Officer, based in Fiji. Quentin can be reached at Quentin@globalfundadvocatesnetwork.org

**Daniel Townsend, PhD (c), is the Constituency Focal Point for the Developed Country NGO Delegation to the Board of the Global Fund. He is based in Berlin, Germany.
