From a ‘deranged’ provocateur to IBM’s failed AI superproject: the controversial story of how data has transformed healthcare
To understand the potential for machine learning to transform medicine, we must go back to the controversial origins of data use in healthcare

Just over a decade ago, artificial intelligence (AI) made one of its showier forays into the public’s consciousness when IBM’s Watson computer appeared on the American quiz show Jeopardy! The studio audience was made up of IBM employees, and Watson’s exhibition performance against two of the show’s most successful contestants was televised to a national viewership across three evenings. In the end, the machine triumphed comfortably.
One of Watson’s opponents, Ken Jennings, who went on to make a career on the back of his gameshow prowess, showed grace – or was it deference? – in defeat, jotting down this commentary to accompany his final answer: “I, for one, welcome our new computer overlords.”
In fact, his phrase had been poached from another American television mainstay, The Simpsons. Jennings’ wry pop culture reference signalled Watson’s reception less as computer overlord and more as technological curio. But that was not how IBM saw it. On the back of this very public success, in 2011 IBM turned Watson toward one of the most lucrative but untapped industries for AI: healthcare.
What followed over the next decade was a series of ups and downs – but mostly downs – that exemplified the promise, but also the numerous shortcomings, of applying AI to healthcare. The Watson health odyssey finally ended in 2022 when it was sold off “for parts”.
There is much to learn from this story about why AI and healthcare seemed so well-suited, and why that potential has proved so difficult to realise. But first we need to revisit the controversial origins of data use in this field, long before electronic computers were invented, and meet one of its American pioneers, Ernest Amory Codman – an elite by birth, a surgeon by training, and a provocateur by nature.
Data’s role in the birth of modern medicine
While the general utility of data had been clear for several centuries, its collection and use on a massive scale was a feature of the 19th century. By the 1850s, collecting census data had become commonplace. Its use was not merely descriptive; it informed decisions about how to govern.
The 19th century marked the first time that, as US systems expert Shawn Martin explains, “managers felt the need to tie the information that society collected to things like performance [and] productivity”. This applied to public health as well, where “big data” played a critical role in establishing relationships between populations, their habits and environment (both at home and work), and disease.

A well-known example is John Snow’s discovery of the source of a cholera outbreak in London’s Soho neighbourhood in 1854. Now considered one of epidemiology’s founding fathers, Snow canvassed door to door, asking whether the families within had had cholera. His analysis lay chiefly in reorganising the data he collected – plotting it on a map – so that a pattern might emerge. This ultimately established not just the extent of the outbreak but also its source, the Broad Street water pump.
For Boston-born Codman, an outspoken medical reformer working at the beginning of the 20th century, such use of data to understand disease was up there as “one of the greatest moments in medicine”.

Though Codman was involved in many data-driven reforms during his controversial career, one of the most successful was the Registry of Bone Sarcoma, which he established in 1920. His goal was to collect and analyse all of the cases of bone cancer (or suspected bone cancer) from across the US, and to use these to establish diagnostic criteria, therapeutic effectiveness and a standardised nomenclature.
There were a few rules for this registry. Individual doctors who contributed had to send x-rays, case reports and, if possible, tissue samples for examination by the registry’s consulting pathologists and Codman himself. This would ensure both the accuracy and uniformity of pathological analysis. The effort was a success which grew over time: by 1954, when the American College of Surgeons sought a new home for the registry, it contained an impressive 2,400 complete, cross-referenced cases.

On the face of it, Codman’s decision to focus on bone cancer was baffling. It was neither a pressing nor a common concern for doctors across the US. But the disease’s relative rarity was one reason he chose it. Codman felt the amount of data received from his nationwide request would not be overwhelming for his small team of researchers to analyse.
Perhaps more importantly, he knew that studying bone cancer would raise the ire of far fewer of his colleagues than a more common disease might. In a clinical atmosphere in which expertise was understood as a combination of long experience with a dash of intuition – the physician’s “art” – Codman’s touting of data as a better way to obtain knowledge about a disease and its treatment was already being met with vociferous opposition.
It didn’t help that he tended to be inflammatory and provocative in the pursuit of his data-driven goals. At a medical meeting in Boston in 1915, he launched a surprise attack on his fellow practitioners. In the middle of this staid affair, Codman unveiled an 8ft cartoon lampooning his colleagues for their apathy toward healthcare reform and, as he saw it, their wilful ignorance of the limitations of the profession. As one (former) friend put it in the event’s aftermath, Codman’s only hope was that people would take the “charitable” view and consider him not an enemy of the profession but merely “mentally deranged”.

Undeterred, Codman continued this pugnacious approach to his pioneering work. In a 1922 letter to the prestigious Boston Medical and Surgical Journal, he complained that the surgeons of Massachusetts had been particularly unhelpful to his registry. He explained that he had – politely – asked the 5,494 physicians in the state to “drop him a postal stating whether or not he knew of a case” so that Codman could acquire “the best statistics ever obtained on the frequency of the disease”. To his chagrin, he had received only 19 responses in nearly two years. Needling the journal’s editors and readers simultaneously, he asked:
Is this because your Journal is not read? … [Or] because of the indifference of the medical profession as to whether the frequency of bone sarcoma is known or not?
Codman proposed a questionnaire that would allow the journal to see whether the problem was its lack of readership, or his colleagues’ “inertia, procrastination, disapproval, opposition or disinterest”. A subsequent editorial in response to Codman’s proposal was surprisingly magnanimous:
Whether we will it or not, we are obliged to be irritated, amused or instructed, according to our temperaments, by Dr Codman. Our advice is to be instructed.
An end to elitism?
Despite the establishment’s resistance, submissions to Codman’s registry began to grow such that by 1924, he had enough material to make preliminary comments about bone cancer. For one thing, he had succeeded in standardising the much-contested matter of the proper nomenclature for the disease. This, he exulted, was so significant that it should be likened to the “rising of the sun”.

The registry also offered up many pieces of “impersonal proof”, as Codman called his data-driven findings, of the rightness of certain theories that individual physicians had promoted. Claims, for example, that combined treatments of “surgery, mixed toxins and radium” were more effective than treatments that relied on any of these alone were borne out by the data.
The registry, as Codman’s colleague Joseph Colt Bloodgood put it, “excited great interest” among practitioners, and not just because it had “influenced the entire medical world to pay more attention to bone tumours”. More importantly, it provided a new model for how to do medical work. Another admiring colleague responded to Bloodgood:
The work of the registry [is] one of the outstanding American contributions to surgical pathology. As a method of study, it shows the necessity of very wide experience before a surgeon is capable of handling intelligently cases of this disease … [It] is impossible for any single individual to claim finality of this sort.
This emphasis on “very wide experience” over the experience of “any single individual” points to another critical reason to prefer data, according to Codman. His goal in changing the method by which medical knowledge was made was not just to get better results. By seeking to undo the image of medicine as an “art” that depended on the wisdom of a select group of preternaturally talented individuals, Codman also threatened to undo the class-ridden reality that underlay this public veneer.
As the efficiency engineer Frank Gilbreth implied in a 1913 article in the American Magazine, if it was true that medicine required no specific intrinsic gifts (monetary or otherwise), then absolutely anybody – whatever their class, race or background – could do it, including “bricklayers, shovellers and dock-wallopers” who were currently shut out of such “high-brow” occupations.
Codman was even more pointed. If data was used to evaluate the outcomes of his physician colleagues, he insisted, it would show that the quality of doctors and hospitals was generally poor. He sniped that they excelled chiefly in “making dying men think they are getting better, concealing the gravity of serious diseases, and exaggerating the importance of minor illnesses to suit the occasion”.

“Nepotism, pull and politics” were the order of the day in medicine, Codman wrote in one of his most scathing takedowns of his colleagues at the Massachusetts General Hospital. Yet he made himself the centrepiece of this critique, conceding that his entrance to Harvard Medical School had come on the back of “friends and relatives among the well-to-do”. The only difference, he suggested, was that he was willing to own up to it, and to subject himself and his work to the scrutiny of data.
Data’s unflattering view of medicine
Codman was not the only person having a come-to-Jesus moment with data over this period. In the 1920s, the American social science researchers Robert and Helen Lynd collected data in the small US town of Muncie, Indiana, as a way of creating a picture of the “averaged American”.
By the 1930s, the similarly minded Mass Observation project had taken off in Britain, intending to collect data about everyday life so as to create an “anthropology of ourselves”. Crucially, both reflected the thinking that also drove Codman: that the right way to know something – a people, a disease – was to produce what seemed a suitably representative average. And this meant amalgamating often quite diverse and wide-ranging characteristics and compressing them into a single, standard, efficient unit.
The turn from describing representative averages to learning from these averages is probably best articulated in the work of pollsters, whose door-to-door interrogations were aimed at helping a nation to know itself through statistics. In 1948, prompted by their failure to correctly predict the outcome of that year’s US presidential election – one of the most famous psephological errors in the nation’s history – pollsters such as George Gallup and Elmo Roper began to rethink their analytic methods, turning away from quota sampling and towards random sampling.
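Why that shift mattered can be seen in a toy simulation (a hypothetical sketch, not the pollsters’ actual procedure: the population, the “easy to reach” trait and every number below are invented). Under quota sampling, interviewers filled demographic quotas but chose whom to approach, so anything that made some people easier to interview skewed the result; random sampling removes that discretion.

```python
import random

random.seed(1948)

# Invented population of 100,000 voters. Preference correlates with how
# easy a person is to reach (say, having a listed telephone number).
population = []
for _ in range(100_000):
    easy_to_reach = random.random() < 0.5
    # Easy-to-reach voters favour candidate A slightly more often.
    favours_a = random.random() < (0.55 if easy_to_reach else 0.45)
    population.append((easy_to_reach, favours_a))

true_share = sum(f for _, f in population) / len(population)

# Quota-style selection: interviewer discretion makes easy-to-reach
# people twice as likely to be picked, even if quota totals look fine.
quota_sample = random.choices(
    population,
    weights=[2 if easy else 1 for easy, _ in population],
    k=2_000,
)
quota_share = sum(f for _, f in quota_sample) / len(quota_sample)

# Random sampling: every voter has the same chance of selection.
random_sample = random.sample(population, k=2_000)
random_share = sum(f for _, f in random_sample) / len(random_sample)

print(f"true support for A:   {true_share:.3f}")
print(f"quota-style estimate: {quota_share:.3f}")  # systematically high
print(f"random estimate:      {random_share:.3f}")  # unbiased on average
```

In this toy setup the quota estimate is pulled towards the preferences of whoever is easiest to interview, however large the sample; the random estimate is wrong only by chance, and by a calculable amount.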

At the same time, thanks primarily to its military applications, the science of computing began to gather pace. And the growing fascination with knowing the world via data combined with the unparalleled ability of computers to crunch it appeared a match made in heaven.
In a late-in-life preface to his 1934 data-driven magnum opus on the anatomy of the shoulder, Codman had comforted himself with the thought that he was a man ahead of his time. And indeed, just a few years after his death in 1940, statistical analysis began to pick up steam in medicine.
Over the next two decades, figures such as Sir Ronald Fisher, the geneticist and statistician remembered for suggesting randomisation as an antidote to bias, and his English compatriot Sir Austin Bradford Hill, who demonstrated the connection between smoking and lung cancer, also pushed forward the integration of statistical analysis into medicine.

However, it would take many more years for word to leak out that, by data’s measure, both the methodologies of medical research and much of medicine itself were ineffective. In a movement led in part by outspoken Scottish epidemiologist Archie Cochrane, this unflattering statistical view of medicine finally saw the light of day in the 1960s and 70s.
Cochrane went so far as to say that medicine was based on “a level of guesswork” so great that any return to health after a medical intervention was more a “tribute to the sheer survival power of the minds and bodies” of patients than anything else. Aghast at the revelations embedded in Cochrane’s 1972 book, Random Reflections on Health Services, the Guardian journalist Ann Shearer wrote:
Isn’t it … more than fair to ask what on Earth we – and more particularly, the medical They – have been doing all these years to let the health machine develop with such a lack of quality control?
The answer dates back to Codman’s bone cancer registry half a century earlier. The medical establishment on both sides of the Atlantic had been avoiding with all their might the scrutiny that data would bring.
Computers finally acquire medical currency
Despite their increasing ubiquity in the 1970s and 80s, computers had still only haltingly joined the medical mainstream. Though a smattering of AI applications began to appear in healthcare in the 1970s, it was only in the 1990s that computers really started to acquire some medical currency.
In a page borrowed straight from Codman’s time, the pioneering American biomedical informatician Edward Shortliffe noted in 1993 that the future of AI in medicine depended on the realisation that “the practice of medicine is inherently an information-management task”.
In the US, the Institute of Medicine and the President’s Information Technology Advisory Council released reports highlighting the failures of medicine to fully embrace information technology. By 2004, a newly appointed national coordinator for health information technology was charged with the herculean task of establishing an electronic medical record for all Americans by 2014.

This explosion of interest in bringing computers into healthcare made it an enticing and potentially lucrative area for investment. So it is no surprise that IBM celebrated Watson’s winning turn on Jeopardy! in 2011 by putting it to work on an oncology-focused programme with multiple US-based clinical partners selected on the basis of their access to medical data.
The idea was laudable. Watson would do what machine learning algorithms do best: mining the massive amounts of data these institutions had at their disposal, searching for patterns that would help to improve treatment. But the complexity of cancer and the frustratingly unique responses of patients to it, combined with data systems that were sometimes incomplete and sometimes incompatible – with each other, or with machine learning’s methods more generally – limited Watson’s ability to be useful.
One sorry example was Watson’s Oncology Expert Advisor, a collaboration with the MD Anderson Cancer Center in Houston, Texas. This had begun its life as a “bedside diagnostic tool” that pored through patient records, scientific literature and doctors’ notes in order to make real-time treatment recommendations. Unfortunately, Watson couldn’t “read” the doctors’ notes. While good at mining the scientific literature, it couldn’t apply these large-scale discussions to the specifics of the individuals in front of it. By 2017, the project had been shelved.
Elsewhere, at New York City’s famed Memorial Sloan Kettering Cancer Center, clinicians found a more elaborate – and infinitely more problematic – way forward. Rather than relying on the retrospective data that is machine learning’s usual fodder, clinicians invented new “synthetic” cases that were, by virtue of having been invented, infinitely less messy and more complete than any real data could be.
The project re-litigated the “data v expertise” debate of Codman’s time – once more in Codman’s favour – since this invented data had built into it the specifics of cancer treatment as understood by a small group of clinicians at a single hospital. Bias, in other words, was programmed directly in, and those engaged in training the system knew it.
Viewing historical patient data as too narrow, they rationalised that replacing this with data that reflected their own collective experience, intuition and judgment could build into Watson For Oncology the latest and greatest treatments. Of course, this didn’t work any better in the early 21st century than it had in the early 20th.

Furthermore, while these clinicians sidestepped the problem of real data’s impenetrable messiness, treatment options available at a wealthy hospital in Manhattan were far removed from those available in the other localities that Watson was meant to serve. The contrast was perhaps starkest when Watson was introduced to other parts of the world, only to find the treatment regimens it recommended either didn’t exist or were not in keeping with the local and national infrastructures governing how healthcare was done there.
Even in the US, the consensus, as one unnamed physician in Florida reported back to IBM, was that Watson was a “piece of shit”. Most of the time, it either told clinicians what they already knew or offered advice that was incompatible with local conditions or the specifics of a patient’s illness. At best, it offered a snapshot of the views of a select few clinicians at one moment in time, reified as “facts” expected to apply uniformly wherever Watson went.
Many of the elegies written to mark Watson’s sell-off in 2022, after it had failed to make good on its promise in healthcare, attributed its downfall to the same kind of overpromise and under-delivery that has spelled the end for many health technology start-ups.
Some maintained that the scaling-up of Watson from gameshow savant to oncological wunderkind might have been successful with more time. Perhaps. But in 2011, time was of the essence. To capitalise on the goodwill toward Watson and IBM that Jeopardy! had created, to be the trailblazer into the lucrative but technologically backward world of healthcare, had meant striking first and fast.
Watson’s high-profile failure highlights an overlooked barrier to modern, data-driven healthcare. In its encounters with real, human patients, Watson stirred up the same anxieties that Codman had encountered – difficult questions about what it is exactly that medicine produces: care, and the human touch that comes with it; or cure, and the information management tasks that play a critical role here?
Read more: AI can excel at medical diagnosis, but the harder task is to win hearts and minds first
A 2019 study of US patient perspectives of AI’s role in healthcare gave these concerns some statistical shape. Though some felt optimistic about AI’s potential to improve healthcare, a vast majority gave voice to fundamental misgivings about relinquishing medicine to machine learning algorithms that could not explain the logic they employed to reach their diagnosis. Surely the absence of a physician’s judgment would increase the risk of misdiagnosis?
The persistence of this worry has quite often resulted in caveating the work of machine learning with reassurances that humans are still in charge. In a 2020 report on the InnerEye project, for example, which used retrospective data to identify tumours on patient scans, Yvonne Rimmer, a clinical oncologist at Addenbrooke’s Hospital in Cambridge, addressed this concern:
It’s important for patients to know that the AI is helping me in my professional role. It’s not replacing me in the process. I double-check everything the AI does, and can change it if I need to.
Data’s uncertain role in the future of healthcare
Today, whether a doctor gives you your diagnosis or you get it from a computer, that diagnosis is not primarily based on the intuition, judgment or experience of either doctor or patient. It’s driven by data that has made our cultures of mainstream care relatively more uniform and of a higher standard. Just as Codman foresaw, the introduction of data in medicine has also forced a greater degree of transparency, both in terms of methodologies and effectiveness.
However, the more important – and potentially intractable – problem with this modern approach to health is its lack of representation. As the Sloan Kettering dalliance with Watson began to show, datasets are not the “impersonal proofs” that Codman took them to be.
Even under less egregiously subjective conditions, data undeniably replicates and concretises the biases of society itself. As MIT computer scientist Marzyeh Ghassemi explains, data offers the “sheen of objectivity” while replicating the ethnic, racial, gender and age biases of institutionalised medicine. Thus the tools, tests and techniques that are based on this data are also not impartial.
Ghassemi highlights the inaccuracy of pulse oximeters, often calibrated on light-skinned individuals, for those with darker skin. Others might note the outcry over the gender bias in cardiology, spelled out especially in higher mortality rates for women who have heart attacks.
The list goes on and on. Remember the Human Genome Project, that big data triumph which has, according to the US National Institutes of Health website, “accelerated the study of human biology and improved the practice of medicine”? It almost exclusively drew upon genetic studies of white Europeans. According to Esteban Burchard at the University of California, San Francisco:
96% of genetic studies have been done on people with European origin, even though Europeans make up less than 12% of the world’s population … The human genome project should have been called the European genome project.
A lack of representative data has implications for big data projects across the board – not least for precision medicine, which is widely touted as the antidote to the problems of impersonal, algorithm-driven healthcare.
Precision or “personalised” medicine seeks to address one of the essential perceived drawbacks of data-based medicine by locating finer-grained commonalities between smaller and smaller subsets of the population. By focusing on data at a genetic and cellular level, it may yet counter the criticism that the data-driven approach of recent decades is too blunt and insensitive a tool, such that “even the most frequently prescribed drugs for the most common conditions have very limited efficacy”, according to computational biologist Chloe-Agathe Azencott.
But personalised medicine still feeds on the same depersonalised data as medicine more generally, so it too is handicapped by data’s biases. And even if it could step beyond the problems of biased data – and, indeed, institutions – the question of its role in the future of our everyday healthcare does not end there.
Even taking the utopian view that personalised medicine might make possible treatments as individual as we are, pharmaceutical companies won’t develop these treatments unless they are profitable. And that requires either prices so high that only the wealthiest of us could afford them, or a market so big that these companies can “achieve the requisite return on investment”. Truly individualised care is not really on the table.
Read more: In defence of ‘imprecise’ medicine: the benefits of routine treatments for common diseases
If our goal in healthcare is to help more people by being more representative, more inclusive and more attentive to individual difference in the medical everyday of diagnosis and treatment, big data isn’t going to help us out. At least not as things currently stand.
For the story of healthcare data to date has pointed us squarely in the other direction, towards homogenisation and standardisation as medical goals. Laudable as the rationales for such a focus have been at different moments in our history, the expectation that machine learning will enable all of us to live longer, healthier lives remains something of a pipe dream. Right now it is still us humans, not our computer overlords, who hold most sway over our individual health outcomes.
Dr Caitjan Gainty is a winner of The Conversation’s Sir Paul Curran award for academic communication

Caitjan Gainty does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Candida auris: what you need to know about the deadly fungus spreading through US hospitals
A drug-resistant fungus is a threat to human health.

A fungal superbug called Candida auris is spreading rapidly through hospitals and nursing homes in the US. The first case was identified in 2016. Since then, it has spread to half the country’s 50 states. And, according to a new report, infections tripled between 2019 and 2021. This is hugely concerning because Candida auris is resistant to many drugs, making this fungal infection one of the hardest to treat.
Candida auris is a yeast-type fungus that is the first to have multiple international health alerts associated with it. It has been found in over 30 countries, including the UK, since it was first identified in Japan in 2009.
It is related to other types of yeast that can cause infections, like Candida albicans, which causes thrush. However, Candida auris is very different to these other fungi and, in some ways, highly unusual.
First, it can grow, or “colonise”, human skin. Unlike many other Candida species that like to grow in our guts as part of the microbiome, Candida auris does not grow in this environment and seems to prefer the skin. This means that people who are colonised with Candida auris can shed lots of yeast from their skin, and this contaminates bed clothes and surfaces with the fungus. This can lead to outbreaks.
It is unusual for a fungal infection to spread from person to person, but that seems to be how Candida auris infections spread. Outbreaks can happen with this fungus, especially in intensive care units (ICUs) and nursing homes, where people are generally at higher risk of fungal infections.
The fungus can live on surfaces for several weeks, and getting rid of it can be difficult. Enhanced cleaning and hand washing are needed to try to limit the spread of the fungus and the exposure of patients who could get ill from it.
Most people who are colonised with Candida auris will not get ill from it, or even know it is there. It causes infections when it gets into surgical wounds or the blood from an intravenous line. Once it gets into the body, it can infect organs and the blood causing a very serious and potentially fatal disease.
The mortality rate for people infected (as opposed to colonised) with the fungus is between 30 and 60%. But a precise mortality rate can be hard to pin down as people who are infected are often critically ill with other conditions.
Diagnosing an infection can be difficult as there can be a wide range of symptoms including fever, chills, headaches and nausea. It is for this reason that we need to keep a close eye on Candida auris as it can easily be confused with other conditions.
In the last few years, new tests to help identify this fungus accurately have been developed.
The first Candida auris infection was reported in the UK in 2013. However, there may have been other cases before this – there is evidence that some early cases were misidentified as unrelated yeasts.
The UK has so far managed to stop any major outbreaks, and most cases have been limited in their spread.
Most patients who have become ill from Candida auris in the UK had recently travelled to parts of the world where the fungus is more common or has been circulating for longer.
Spurred by COVID
Rising numbers of Candida auris infections are thought to be partially linked to the COVID pandemic. People who become very ill from COVID may need mechanical ventilation and long stays in the ICU, which are both risk factors for Candida auris colonisation and infection.
It will take some time to figure out exactly how the pandemic has affected rates and numbers of fungal infections around the world, but these are important questions to answer to help predict how Candida auris cases might fluctuate in the future.
As with most life-threatening fungal infections, treatment is difficult and limited. We have only a handful of antifungal drugs to fight these infections, so when a species is resistant to one or more of these drugs, the options for treatment are extremely limited. Some Candida auris infections are resistant to all three types of antifungal drug.
Healthcare professionals must remain vigilant to this drug-resistant fungus. Without close monitoring and enhanced awareness of this infection, we could see more outbreaks and serious disease associated with Candida auris in the future.
Rebecca A. Drummond receives funding from the Medical Research Council.
The ONS has published its final COVID infection survey – here’s why it’s been such a valuable resource
The ONS’ Coronavirus Infection Survey has ceased after three years. Two experts explain why it was a uniquely useful source of data.

March 24 marked the publication of the final bulletin of the Office for National Statistics’ (ONS) Coronavirus Infection Survey after nearly three years of tracking COVID infections in the UK. The first bulletin was published on May 14 2020 and we’ve seen new releases almost every week since.
The survey was based primarily on data from many thousands of people in randomly selected households across the UK who agreed to take regular COVID tests. The ONS used the results to estimate how many people were infected with the virus in any given week.
In the survey’s first six months, we had results from 1.2 million samples taken from 280,000 people. Although the number of people participating each month declined over time, the survey has continued to be a highly valuable tool as we navigate the pandemic.
In particular, because the ONS survey was based on a large, random sample of UK residents, it offered the least biased surveillance system of COVID infections in the UK. We are not aware of any similar study anywhere else in the world. And, while estimating the prevalence of infections was the survey’s main output, it gave us a lot of other useful information about the virus too.
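The core idea of estimating prevalence from a random sample is simple, even though the ONS’s published figures relied on more sophisticated statistical modelling to adjust for factors such as age and region. A minimal sketch, with invented numbers rather than actual survey data, might look like this:

```python
import math

def prevalence_estimate(positives: int, sample_size: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% confidence interval
    for population prevalence, from a simple random sample."""
    p_hat = positives / sample_size
    se = math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat, max(0.0, p_hat - z * se), min(1.0, p_hat + z * se)

# Invented example: 740 positive swabs among 60,000 sampled participants.
p, low, high = prevalence_estimate(740, 60_000)
print(f"estimated prevalence: {p:.2%} (95% CI {low:.2%} to {high:.2%})")
```

Because the sample is random, an interval like this covers the true population rate with known probability – precisely the property that symptom-triggered testing and app-based surveys lack.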
Unbiased surveillance
An important advantage of the ONS survey was its ability to detect COVID infections among many people who had no symptoms, or were not yet displaying symptoms.
Certainly other data sets existed (and some continue to exist) to give a sense of how many people were testing positive. For example, earlier in the pandemic, case numbers were reported at daily national press conferences. Figures continue to be published on the Department of Health and Social Care website.
But these totals have usually only encompassed people who tested because they had reason to suspect they may have been infected (for example because of symptoms or their work). We know many people had such minor symptoms that they had no reason to suspect they had COVID. Further, people who took a home test may or may not have reported the result.
Similarly, case counts from hospital admissions or emergency room attendances only captured a very small percentage of positive cases, even if many of these same people had severe healthcare needs.
Symptom-tracking applications such as the ZOE app or online surveys have been useful but tend to over-represent people who are most technologically competent, engaged and symptom-aware.
Testing wastewater samples to track COVID’s spread in a community has also been tried, but it has proved difficult to link wastewater results reliably to infection numbers.
Read more: The tide of the COVID pandemic is going out – but that doesn't mean big waves still can't catch us
What else the survey told us
Aside from swab samples to test for COVID infections, the ONS survey collected blood samples from some participants to measure antibodies. This was a very useful aspect of the infection survey, providing insights into immunity against the virus in the population and individuals.
Beginning in June 2021, the ONS survey also published reports on the “characteristics of people testing positive”. Arguably these analyses were even more valuable than the simple infection rate estimates.
For example, the ONS data gave practical insights into changing risk factors from November 21 2021 to May 7 2022. In November 2021, living in a house with someone under 16 was a risk factor for testing positive but by the end of that period it seemed to be protective. Travel abroad was not an important risk factor in December 2021 but by April 2022 it was a major risk. Wearing a mask in December 2021 was protective against testing positive but by April 2022 there was no significant association.
We shouldn’t find this changing picture of risk factors particularly surprising: over the same period, different variants were emerging (most notably omicron) and population resistance was evolving with vaccination programmes and waves of natural infection.
Also, in any pandemic, the value of non-pharmaceutical interventions such as wearing masks and social distancing declines as the infection becomes endemic. At that point, the infection rate is driven more by the rate at which immunity is lost.
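A simple way to see why is with a textbook SIRS model, in which recovered people lose their immunity at some rate and become susceptible again. In the sketch below (the parameter values are illustrative, not fitted to COVID or any real pathogen), once the epidemic settles into an endemic steady state, the flow of new infections is balanced by the flow of people losing immunity:

```python
# Minimal SIRS model: susceptible -> infected -> recovered -> susceptible.
# Parameter values are illustrative only, not fitted to any real disease.
beta, gamma, omega = 0.3, 0.1, 0.01   # transmission, recovery, waning rates
s, i, r = 0.99, 0.01, 0.0             # fractions of the population
dt = 0.1                              # time step in days

for _ in range(int(3000 / dt)):       # run long enough to reach equilibrium
    new_infections = beta * s * i * dt
    recoveries = gamma * i * dt
    waning = omega * r * dt
    s += waning - new_infections
    i += new_infections - recoveries
    r += recoveries - waning

# At the endemic steady state the two flows balance: new infections per
# day equal the rate at which immunity is lost (beta*S*I == omega*R).
print(f"daily new infections (beta*S*I): {beta * s * i:.5f}")
print(f"daily immunity loss  (omega*R):  {omega * r:.5f}")
```

In this toy model, when immunity wanes much more slowly than people recover, the endemic infection rate scales roughly with the waning rate – which is the sense in which the long-run infection rate is driven by how quickly immunity is lost.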

The ONS characteristics analyses also offered evidence about the protective effects of vaccination and prior infection. The bulletin from May 25 2022 showed that vaccination provided protection against infection but probably for not much more than 90 days, whereas a prior infection generally conferred protection for longer.
After May 2022, the focus shifted to reinfections. The analyses confirmed that even in people who had already been infected, vaccination protects against reinfection, but again probably only for about 90 days.
It’s important to note the ONS survey only measured infections and not severe disease. We know from other work that vaccination is much better at protecting against severe disease and death than against infection.
Read more: How will the COVID pandemic end?
A hugely valuable resource
The main shortcoming of the ONS survey was that its reports were always published one to three weeks later than other data sets due to the time needed to collect and test the samples and then model the results.
That said, the value of this infection survey has been enormous. The ONS survey improved understanding and management of the epidemic in the UK on multiple levels. But it’s probably appropriate now to bring it to an end in the fourth year of the pandemic, especially as participation rates have been falling over the past year.
Our one disappointment is that so few of the important findings from the ONS survey have been published in peer-reviewed literature, and so the survey has had less of an impact internationally than it deserves.
Paul Hunter consults for the World Health Organization. He receives funding from National Institute for Health Research, the World Health Organization and the European Regional Development Fund.
Julii Brainard receives funding from the NIHR Health Protection and Research Unit in Emergency Preparedness.
Four global problems that will be aggravated by the UK’s recent cuts to international aid
The UK is among countries cutting international aid payments, which could affect the world in four key areas: poverty, extremism, democracy and refugees.

UK economic forecasts have improved markedly since the September 2022 mini-budget. The economic recession may now be more shallow and public borrowing lower than previously expected.
However, faced with persistently high inflation and continued uncertainty caused by Russia’s war in Ukraine, financial cuts remained the order of the day in the UK government’s spring 2023 budget announcement.
While Chancellor Jeremy Hunt introduced a £5 billion increase to military spending over the next two years, the international aid budget was cut for the third time in three years. This is part of an increasingly concerning international trend.
UK aid has been decreasing since 2019. And the country is not alone in cutting its aid commitments. Sweden – one of the world’s leading donors in this area – is also set to abolish its target of spending 1% of GDP on aid. Across several European countries, recent cuts have largely been driven by the Ukraine war, as well as national pressures caused by the COVID pandemic.
And yet aid is sorely needed if the world is to meet the 2030 Agenda for Sustainable Development, a plan to end world poverty agreed by UN members in 2015. The “great finance divide” – which sees some countries struggle to access resources and affordable finance for economic investment – continues to grow, according to the UN, leaving developing countries in Asia, Africa and Latin America more susceptible to shocks.
The UK and Europe’s support for Ukraine is admirable and much-needed. But when countries are faced with important domestic political and financial challenges, governments tend to look inwards – often in an attempt to rally their electorate.
Cuts to aid budgets are one example of this. For the UK in particular, neglecting multilateral solutions to important global challenges could actually exacerbate what are thought of as “domestic issues”. Our research highlights four such issues that could be affected by the UK’s budget cuts.
1. Increasing poverty could affect global stability
While the exact direction of the relationship remains up for debate, poverty is an important cause and effect of war. We know that up to two-thirds of the world’s extreme poor (defined as people earning less than $1.90 a day) will be concentrated in fragile and conflict-affected countries by 2030.
Research shows that aid promotes economic growth. So, reducing international aid will only exacerbate these recent negative trends. According to the chief executive of Oxfam GB, aid is an investment in a more stable world – something that is in all of our interests.
2. Extremism could spread as western influence falls
Violent extremism is on the rise in Africa. It reduces international investment and undermines the rights of minority groups, women and girls. This goes against important UN sustainable development goals aimed at building peace and prosperity for the planet and its people.
Reducing international aid will create opportunities for new political actors to emerge and influence the direction of countries with weak government institutions. Cutting back western influence in the international aid architecture (especially while western countries support a conflict on their own continent) may also be resented by countries in other parts of the world that would like more support.
3. Democracy could be threatened in some countries
When aid is provided in the right way, it can give a boost to democratic outcomes. Again, if western, democratic and liberal states don’t support countries struggling to tackle poverty and extremism, other actors could step in.
Russia’s increasing involvement in the Central African Republic and Burkina Faso are recent examples. Equally, China’s Belt and Road Initiative (through which it lends money to other countries to build infrastructure) has significantly broadened its economic and political influence in many parts of the world. But some experts fear that China is laying a debt trap for borrowing governments, whereby the contracts agreed allow it to seize strategic assets when debtor countries run into financial problems.
The growing influence of both states may explain global trends towards democratic backsliding, because research shows democratic stability is often undermined in waves. In recent UN votes, Russia and China’s growing influence via such aid has been seen to bear fruit. For example, in October 2022 Uzbekistan and Kazakhstan – both temporary members of the UN Human Rights Council – voted against a decision to discuss human rights concerns in China’s Muslim-majority Xinjiang region.
4. More countries could struggle to welcome refugees
People flee their homes for many reasons but mostly due to conflict, violent extremism and poverty. Most refugees do not travel to western countries such as the UK, although the number of people arriving in small boats across the English Channel has risen substantially recently.
But there are more “internally displaced people” than refugees. That is, most people fleeing war remain within their own country, while refugees tend to remain in neighbouring states.
Turkey receives the highest numbers of refugees due to its proximity to the ongoing war in Syria, and Poland welcomed the highest number of refugees fleeing the war in Ukraine.
This, combined with the fact that countries most likely to experience conflict are geographically distant from the UK, indicates that numbers seeking asylum in the UK will remain relatively low. But reducing aid will impose further pressures on poor countries that are already struggling to accommodate refugee flows, as well as increasing push factors for migration from fragile regions.
International aid should be one of many solutions
Failure to tackle global problems like poverty, extremism and democratic backsliding could further destabilise fragile regions. This will have human costs, including increased numbers of desperate people attempting to cross the Channel.
Aid is an investment in a more stable world. Deals with France or the risk of deportation to Rwanda will have limited impact on reducing the number of people arriving on small boats if the root causes of their migration are not tackled.
In our globalised world, looking inwards can only exacerbate these problems. It is crucial that states adopt multilateral solutions – including funding international aid programmes – to tackle global problems.
Patricia Justino receives funding from the UK Economic and Social Research Council.
Kit Rickard does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.