World Builders Put Happy Face On Superintelligent AI

The Future of Life Institute’s contest counters today’s dystopian doomscapes

A cityscape ensconced in an iridescent dome of light and shapes.
Hiroshi Watanabe/Getty Images

One of the biggest challenges in a world-building competition that asked teams to imagine a positive future with superintelligent AI: Make it plausible.

The Future of Life Institute, a nonprofit that focuses on existential threats to humanity, organized the contest and is offering a hefty prize purse of up to US $140,000, to be divided among multiple winners. Last week FLI announced the 20 finalists from 144 entries, and the group will declare the winners on 15 June.

“We’re not trying to push utopia. We’re just trying to show futures that are not dystopian, so people have something to work toward.”
—Anna Yelizarova, Future of Life Institute

The contest aims to counter the common dystopian narrative of artificial intelligence that becomes smarter than humans, escapes our control, and makes the world go to hell in one way or another. The philosopher Nick Bostrom famously imagined a factory AI turning all the world’s matter into paper clips to fulfill its objective, and many respected voices in the field, such as computer scientist Stuart Russell, have argued that it’s essential to begin work on AI safety now, before superintelligence is achieved. Add in the sci-fi novels, TV shows, and movies that tell dark tales of AI taking over—the Blade Runners, the Westworlds, the Terminators, the Matrices (both original recipe and Resurrections)—and it’s no wonder the public feels wary of the technology.

Anna Yelizarova, who’s managing the contest and other projects at FLI, says she feels bombarded by images of dystopia in the media, and says it makes her wonder “what kind of effect that has on our worldview as a society.” She sees the contest partly as a way to provide hopeful visions of the future. “We’re not trying to push utopia,” she says, noting that the worlds built for the contest are not perfect places with zero conflicts or struggles. “We’re just trying to show futures that are not dystopian, so people have something to work toward,” she says.

The contest asked a lot from the teams who entered: They had to provide a timeline of events from now until 2045 that includes the invention of artificial general intelligence (AGI), two “day in the life” short stories, answers to a list of questions, and a media piece reflecting their imagined world.

Yelizarova says that another motivation for the contest was to see what sorts of ideas people would come up with. Imagining a hopeful future with AGI is inherently more difficult than imagining a dystopian one, she notes, because it requires coming up with solutions to some of the biggest challenges facing humanity. For example, how to ensure that world governments work together to deploy AGI responsibly and don’t treat its development as an arms race? And how to create AGI agents whose goals are aligned with those of humans? “If people are suggesting new institutions or new ways of tackling problems,” Yelizarova says, “those can become actual policy efforts we can pursue in the real world.”

“For a truly positive transformative relationship with AI, it needs to help us—to help humanity—become better.... And the idea that such a world might be possible is a future that I want to fight for.”
—Rebecca Rapple, finalist in the Future of Life Institute’s world-building contest

It’s worth diving into the worlds created by the 20 finalists and browsing through the positive possible futures. IEEE Spectrum corresponded with two finalists who have very different visions.

The first, a solo effort by Rebecca Rapple of Portland, Ore., imagines a world in which an AGI agent named TAI has a direct connection with nearly every human on earth via brain-computer interfaces. The world’s main currency is one of TAI’s devising, called Contribucks, which are earned via positive social contributions and which lose value the longer they’re stored. People routinely plug into a virtual experience called Communitas, which Rapple’s entry describes as “a TAI-facilitated ecstatic group experience where sentience communes, sharing in each other’s experiences directly through TAI.” While TAI is not directly under humans’ control, she has stated that “she loves every soul” and people both trust her and think she’s helping them to live better lives.
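The demurrage-like property of Contribucks (a balance that loses value the longer it sits idle, encouraging circulation) can be made concrete with a small sketch. Rapple's entry does not specify a decay mechanism; exponential decay at an invented 2-percent-per-month rate is assumed here purely for illustration.

```python
# Hypothetical sketch of the "lose value the longer it's stored" property
# of Contribucks. The decay law and rate are assumptions, not from the
# contest entry.
import math

MONTHLY_DECAY = 0.02  # assumed demurrage rate per month

def contribucks_value(initial: float, months_held: float) -> float:
    """Value of a Contribucks balance left idle for `months_held` months."""
    return initial * math.exp(-MONTHLY_DECAY * months_held)
```

Under these assumed numbers, a balance of 100 Contribucks held untouched for a year would shrink to roughly 79 — a gentle pressure to spend or contribute rather than hoard.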

Rapple, who describes herself as a pragmatic optimist, says that crafting her world was an uplifting process. “The assumption at the core of my world is that for a truly positive transformative relationship with AI, it needs to help us—to help humanity—become better,” she tells Spectrum. “Better to ourselves, our neighbors, our planet. And the idea that such a world might be possible is a future that I want to fight for.”

The second team Spectrum corresponded with is a trio from Nairobi, Kenya: Conrad Whitaker, Dexter Findley, and Tracey Kamande. In the world imagined by this team, AGI emerged from a “new non–von Neumann computing paradigm” in which memory is fully integrated into processing. As an AGI agent describes it in one of the team's short stories, AGI has resulted “from the digital replication of human brain structure, with all its separate biological components, neural networks and self-referential loops. Nurtured in a naturalistic setting with constant positive human interaction, just like a biological human infant.”

In this world there are over 1,000 AGIs, or digital humans, by the year 2045; the machine learning and neural networks that we know as AI today are widely used for optimization problems, but aren’t considered true, general-purpose intelligence. Many people in this imagined 2045 live in AGI-organized “digital nations” that they can join regardless of their physical location, and which bring many health and social benefits.

In an email, the Kenyan team says they aimed to paint a picture of a future that is “strong on freedoms and rights for both humans and AGIs—going so far as imagining that a caring and respectful environment that encouraged unbridled creativity and discourse (conjecture and criticism) was critical to bringing an ‘artificial person’ to maturity in the first place.” They imagine that such AGI agents wouldn’t see themselves as separate from humans as they would be “humanlike” in both their experience of knowledge and their sense of self, and that the AGI agents would therefore have a humanlike capacity for moral knowledge.

Meaning that these AGI agents would see the problem with turning all humans on earth into paper clips.


Will AI Steal Submarines’ Stealth?

Better detection will make the oceans transparent—and perhaps doom mutually assured destruction

A photo of a submarine in the water under a partly cloudy sky.

The Virginia-class fast attack submarine USS Virginia cruises through the Mediterranean in 2010. Back then, it could effectively disappear just by diving.

U.S. Navy

Submarines are valued primarily for their ability to hide. The assurance that submarines would likely survive the first missile strike in a nuclear war and thus be able to respond by launching missiles in a second strike is key to the strategy of deterrence known as mutually assured destruction. Any new technology that might render the oceans effectively transparent, making it trivial to spot lurking submarines, could thus undermine the peace of the world. For nearly a century, naval engineers have striven to develop ever-faster, ever-quieter submarines. But they have worked just as hard at advancing a wide array of radar, sonar, and other technologies designed to detect, target, and eliminate enemy submarines.

The balance seemed to turn with the emergence of nuclear-powered submarines in the early 1960s. In a 2015 study for the Center for Strategic and Budgetary Assessments, Bryan Clark, a naval specialist now at the Hudson Institute, noted that the ability of these boats to remain submerged for long periods of time made them “nearly impossible to find with radar and active sonar.” But even these stealthy submarines produce subtle, very-low-frequency noises that can be picked up from far away by networks of acoustic hydrophone arrays mounted on the seafloor.
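The idea behind pulling those faint tonals out of ocean noise is narrowband spectral analysis: a long FFT concentrates a steady machinery tone into a single frequency bin while broadband noise spreads across all bins. The sketch below illustrates the principle on synthetic data — the sample rate, tone frequency, and signal-to-noise ratio are all invented for the example, and real sonar processing is far more elaborate.

```python
# Illustrative only: detecting a weak low-frequency tonal buried in
# broadband noise, the basic idea behind passive narrowband sonar.
# All parameters are invented for this synthetic example.
import numpy as np

fs = 1000.0             # sample rate, Hz (assumed)
duration = 60.0         # integrate over one minute of "recording"
t = np.arange(0, duration, 1 / fs)

tone_hz = 60.0          # hypothetical machinery tonal
rng = np.random.default_rng(0)
tonal = 0.05 * np.sin(2 * np.pi * tone_hz * t)   # weak, steady tone
noise = rng.normal(scale=1.0, size=t.size)       # broadband ocean noise
recording = tonal + noise                        # tone is inaudible by eye

# A long FFT piles the tone's energy into one narrow bin, while the
# noise energy is spread over tens of thousands of bins.
spectrum = np.abs(np.fft.rfft(recording))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_hz = freqs[np.argmax(spectrum)]
print(f"strongest narrowband component: {peak_hz:.1f} Hz")
```

Even though the tone's amplitude is one-twentieth of the noise level sample by sample, a minute of integration makes it stand out clearly at 60 Hz — which is why quieting machinery, not just diving deep, became the submariner's obsession.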
