Can Computer Models Select the Best Public Health Interventions for COVID-19?

A new XPrize challenges simulators to go from forecasting case numbers to recommending policy

Illustration: iStockphoto

Many associate XPrize with a $10-million award offered in 1996 to motivate a breakthrough in private space flight. But the organization has since held other competitions related to exploration, ecology, and education. And in November, it launched the Pandemic Response Challenge, which will culminate in a $500,000 award to be split between two teams that not only best predict the continuing global spread of COVID-19, but also prescribe policies to curtail it.

“The whole point was to create a platform to create pandemic mitigation strategies based on evidence and science,” says Amir Banifatemi, XPrize’s chief innovation and growth officer. “But also to make the resulting insights available freely to everyone, in an open-source manner—especially for all those communities that may not have access to data and epidemiology divisions, statisticians, or data scientists.”

Pandemic predictions are hard enough, as we’ve seen with forecasting’s spotty track record over the past year. Prescriptions are harder still. Any non-pharmaceutical intervention (NPI), like closing schools and businesses, limiting travel, or establishing contact tracing, will be implemented differently in different areas; these interventions can also interact in surprising ways.

The XPrize Pandemic Response Challenge emerged from a paper posted to the preprint server arXiv in May 2020 by a team led by Risto Miikkulainen, a computer scientist at the University of Texas at Austin and associate vice president for evolutionary intelligence at Cognizant Technology Solutions, an IT and consulting company.

The paper, by Miikkulainen and colleagues at UT and Cognizant, lays out a way to go from prediction to prescription for COVID-19. As a first step, the team trained a neural network to predict new infections, using past data on infections and NPIs implemented. Then they created another neural net to serve as the prescriptor, taking in past infections and NPIs and outputting a new set of NPIs. To optimize the prescriptor, they created a whole population of prescriptors and used artificial evolution. They evaluated the prescriptors using the predictor as a surrogate for reality; in other words, based on the interventions prescribed, what would be the predicted effect on case numbers? The best performing prescriptors were kept, copied, and mutated.
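The loop described above can be sketched in a few lines. This is a toy illustration only: the paper’s predictor and prescriptor are neural networks, while here both are stand-in functions, and every name (`predict_cases`, `fitness`, `mutate`) is hypothetical rather than drawn from the team’s code.

```python
import math
import random

random.seed(0)  # reproducible toy run

NUM_NPIS = 8       # e.g. school closures, travel limits, tracing, ...
POP_SIZE = 20
GENERATIONS = 30

def predict_cases(npis):
    """Toy surrogate for the trained predictor: stricter NPIs
    (values in [0, 1]) mean fewer predicted cases, with diminishing returns."""
    return 100.0 * math.exp(-sum(npis))

def fitness(npis, alpha=0.5):
    """Score a prescription on predicted cases AND intervention cost,
    so evolution doesn't simply converge on a total lockdown."""
    return -(alpha * predict_cases(npis) + (1 - alpha) * sum(npis))

def mutate(npis, rate=0.2):
    """Randomly perturb some NPI settings, clamped to [0, 1]."""
    return [min(1.0, max(0.0, x + random.uniform(-0.3, 0.3)))
            if random.random() < rate else x
            for x in npis]

# Evaluate prescriptions using the predictor as a surrogate for reality,
# keep the best half, and refill the population with mutated copies.
population = [[random.random() for _ in range(NUM_NPIS)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(p) for p in survivors]

best = max(population, key=fitness)
```

Because the fitness function penalizes both predicted cases and total stringency, the evolved prescription settles somewhere between "do nothing" and "lock down everything," which is the tradeoff the next paragraph turns into a menu of options.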

Notably, evolution produced not a single good prescriptor but a set of them, each good in its own way. They were selected for their ability to minimize not just infections, but also interventions themselves—otherwise, they’d just prescribe total lockdowns, which have serious impacts on the economy and quality of life. Policymakers could theoretically look at the set of prescriptors and pick one, depending on how much they wanted to emphasize physical health or social and economic health.
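One simple way to extract such a menu of tradeoffs, sketched below with invented names and numbers, is to keep only the Pareto-optimal candidates: prescriptions that no rival beats on both predicted cases and intervention cost at once.

```python
def pareto_front(candidates):
    """Return candidates for which no other candidate is at least as good
    on both objectives and strictly better on at least one."""
    front = []
    for a in candidates:
        dominated = any(
            b["cases"] <= a["cases"] and b["cost"] <= a["cost"]
            and (b["cases"] < a["cases"] or b["cost"] < a["cost"])
            for b in candidates
        )
        if not dominated:
            front.append(a)
    return front

# Hypothetical prescriptions scored on predicted cases vs. intervention cost.
candidates = [
    {"name": "full lockdown",     "cases": 5,  "cost": 10},
    {"name": "schools only",      "cases": 40, "cost": 3},
    {"name": "masks + tracing",   "cases": 25, "cost": 4},
    {"name": "do nothing",        "cases": 90, "cost": 0},
    {"name": "everything, badly", "cases": 60, "cost": 8},  # dominated
]

tradeoffs = pareto_front(candidates)
# "everything, badly" drops out: "masks + tracing" beats it on both counts.
```

The surviving set is exactly what a policymaker would browse: each remaining option is the cheapest way to reach its level of infection control.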

Miikkulainen’s team placed an interactive demo online. “Amir [Banifatemi] saw that and figured that this would make a great XPrize,” Miikkulainen says. Suddenly, artificial intelligence and big data seemed capable of authoring useful policy recommendations. Cognizant is partnering with XPrize to run the challenge, and their code is offered to contestants as an optional starting point.

Some XPrizes span years. This one has a compressed schedule, for obvious reasons. There are two phases. For Phase 1, teams had to submit prediction models by 22 December. They were given data on infections and NPIs around the world (the NPI data came from the comprehensive Oxford COVID-19 Government Response Tracker), and the models are now being judged over a three-week period on how closely their predictions of new cases each day match reality across more than 200 regions (countries, U.S. states, and provinces of Canada and Brazil). Teams will also be judged qualitatively on factors such as innovation, model speed, prediction consistency, explanation, and collaboration with other teams.
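The article doesn’t give the contest’s exact scoring formula, but one plausible sketch of this kind of judging is to average each team’s mean absolute error over regions and rank teams by it. All team names and figures below are hypothetical.

```python
def mean_absolute_error(predicted, actual):
    """Average absolute gap between daily predicted and actual new cases."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def rank_teams(forecasts, actual_by_region):
    """forecasts: {team: {region: [daily new cases]}}. Lower average MAE wins."""
    scores = {}
    for team, by_region in forecasts.items():
        errors = [mean_absolute_error(pred, actual_by_region[region])
                  for region, pred in by_region.items()]
        scores[team] = sum(errors) / len(errors)
    return sorted(scores, key=scores.get)

forecasts = {
    "team_a": {"US": [100, 110, 120], "FR": [50, 55, 60]},
    "team_b": {"US": [90, 100, 150], "FR": [70, 40, 65]},
}
actual = {"US": [105, 112, 118], "FR": [52, 56, 61]}

ranking = rank_teams(forecasts, actual)  # best (lowest error) first
```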

Up to 50 teams will make it to Phase 2, where they must submit a prescription model. The best predictors from Phase 1 will be combined to evaluate the prescriptions in Phase 2. Prescriptors can offer up to 10 prescriptions per region per day, covering different infection-intervention tradeoffs. (The economic cost of each intervention will be given to the models. Of course, figuring out the real costs is a problem in itself.) Again, these will be evaluated both quantitatively and qualitatively. The top two teams will split half a million dollars.

The competition may not end there. XPrize’s Banifatemi says a third phase might test models on vaccine deployment prescriptions. And beyond the contest, some cities or countries might put some of the Phase 2 or 3 models into practice, if Banifatemi can find adventurous takers.

The organizers expect a wide variety of solutions. Banifatemi says the field includes teams from AI strongholds such as Stanford, Microsoft, MIT, Oxford, and Quebec’s Mila, but one team consists of three women in Tunisia. In all, 104 teams from 28 countries have registered.

“We’re hoping that this competition can be a springboard for developing solutions for other really big problems as well,” Miikkulainen says. Those problems include pandemics, global warming, and challenges in business, education, and healthcare. In this scenario, “humans are still in charge,” he emphasizes. “They still decide what they want, and AI gives them the best alternatives from which the decision-makers choose.”

But Miikkulainen hopes that data science can help humanity find its way. “Maybe in the future, it’s considered irresponsible not to use AI for making these policies,” he says.


Will AI Steal Submarines’ Stealth?

Better detection will make the oceans transparent—and perhaps doom mutually assured destruction

A photo of a submarine in the water under a partly cloudy sky.

The Virginia-class fast attack submarine USS Virginia cruises through the Mediterranean in 2010. Back then, it could effectively disappear just by diving.

U.S. Navy

Submarines are valued primarily for their ability to hide. The assurance that submarines would likely survive the first missile strike in a nuclear war and thus be able to respond by launching missiles in a second strike is key to the strategy of deterrence known as mutually assured destruction. Any new technology that might render the oceans effectively transparent, making it trivial to spot lurking submarines, could thus undermine the peace of the world. For nearly a century, naval engineers have striven to develop ever-faster, ever-quieter submarines. But they have worked just as hard at advancing a wide array of radar, sonar, and other technologies designed to detect, target, and eliminate enemy submarines.

The balance seemed to turn with the emergence of nuclear-powered submarines in the early 1960s. In a 2015 study for the Center for Strategic and Budgetary Assessments, Bryan Clark, a naval specialist now at the Hudson Institute, noted that the ability of these boats to remain submerged for long periods of time made them “nearly impossible to find with radar and active sonar.” But even these stealthy submarines produce subtle, very-low-frequency noises that can be picked up from far away by networks of hydrophone arrays mounted to the seafloor.
