Anders Sandberg and Stuart Armstrong of the University of Oxford’s Future of Humanity Institute, working with Milan Ćirković of the University of Novi Sad in Serbia, recently offered a new way to link the Fermi Paradox—the odd fact that we’ve never heard from an alien civilization—with matters of computation. While I’m in no position to judge the technical merits of their new proposal, it’s good fun to think about, so I’d like to summarize it here. Before I do that, though, let me first explain a bit more about what the Fermi Paradox is and what computing has to do with it.
The Fermi Paradox is the contradiction between the very reasonable surmise that there must be other forms of technologically advanced life out there somewhere in our great galaxy and the complete lack of any evidence for it. Space is awfully big, so it might seem just too hard to detect some distant alien civilization. But if the evolution of life and intelligence isn’t immensely improbable, lots of technologically advanced civilizations should have developed elsewhere in our Milky Way galaxy long ago. And although the galaxy is vast, the time that’s elapsed since its creation is even vaster, so those alien civilizations should have had plenty of time to reach our corner of space already. In the words supposedly uttered by Enrico Fermi when he came to this realization one day in 1950 while lunching with colleagues at Los Alamos National Laboratory: Where is everybody?
The Fermi Paradox is misnamed, because Fermi was not the first person to consider this issue; nor did he formally analyze the question and explore answers. Others have done that since, but the paradox is far from resolved. There are plenty of intriguing theories, though. And some of them hinge on computation.
Perhaps the most outlandish one in this family is that the universe that we think we know is really just some sort of computer simulation. If the cosmic tweens who are running this simulation wanted us to encounter intelligent life elsewhere, they’d arrange that. If they wanted human beings to be the only highly intelligent species there is, they’d leave us to strut and fret all alone in their simulated universe. Or maybe they had to buy the paid version of the game to get civilizations evolving on multiple planets and they are running the free, one-intelligent-race version.
You might be surprised to learn that anyone takes this “simulation hypothesis” seriously, but some do. University of Oxford philosopher Nick Bostrom published a consideration of this hypothesis in 2003. Scientists debating the possibility have even proposed ways to test it.
In 1999, Sandberg described another possible solution to the Fermi Paradox involving computers. He noted that technologically advanced civilizations may not go gallivanting around the galaxy and instead might prefer to spend their time building giant computing machines of planetary scale, something others had earlier dubbed “Jupiter brains.” The late Robert Bradbury took this idea even further, proposing that advanced civilizations would construct what he called “Matrioshka brains”—essentially giant supercomputers that surround stars in a series of shells and use the energy of the central star to perform calculations, perhaps for the purpose of keeping the digitally preserved souls of their builders up and running in some post-singularity paradise. Or maybe they’re doing something else with all those gigaflops. In any event, distant Matrioshka brains would be hard to detect.
In 2006, Bradbury and Ćirković proposed yet another explanation that assumes advanced civilizations just want to compute. Like datacenters today, those of advanced spacefaring civilizations would need to dump the waste heat of their computations. The late AI pioneer Marvin Minsky suggested in 1971 that advanced civilizations would likely use space itself as a heat sink for their computing. Bradbury and Ćirković refined Minsky’s idea, pointing out that the best place to do this practically would be at the edge of the galaxy. So maybe that’s where the little green men are hiding out.
This brings us to the latest theory, that of Sandberg, Armstrong, and Ćirković, which is described in a paper titled “That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox.” That title is more than a little cryptic, so let me try to unpack things here for those who don’t want to read their paper.
First, those of you who are rusty at Latin need to know that “aestivation” is kind of like hibernation, but it’s something done by certain animals (crabs, for example) to survive the heat of summer, not the cold of winter. You also need to understand that the amount of energy required for computation can be reduced only so far. When you erase a bit, the loss of information translates into an increase in entropy, which typically manifests as heat. Yes, you can make your computing apparatus more efficient, but only so much—at some point you run up against a fundamental barrier called the Landauer limit. The key point in this context is that the Landauer limit is proportional to the temperature at which the bit erasure happens: erasing one bit must dissipate at least kT ln 2 of energy, where k is Boltzmann’s constant and T is the temperature. The colder the computer, the cheaper each erasure.
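To make that concrete, here is a quick back-of-the-envelope sketch (mine, not from the paper) of the Landauer floor, E = kT ln 2, evaluated at room temperature and at the temperature of deep space today:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, in joules per kelvin

def landauer_limit_joules(temperature_kelvin: float) -> float:
    """Minimum energy dissipated to erase one bit: k_B * T * ln(2)."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K), the floor is a few zeptojoules per bit.
room = landauer_limit_joules(300.0)

# In deep space today (~2.7 K, set by the cosmic microwave background),
# the floor is proportionally lower.
deep_space = landauer_limit_joules(2.7)

print(f"room temperature: {room:.2e} J/bit")
print(f"deep space now:   {deep_space:.2e} J/bit")
print(f"improvement:      {room / deep_space:.0f}x")
```

Since the limit is linear in temperature, moving a computer from a 300 K room to 2.7 K space lowers the floor by a factor of 300/2.7, a bit over a hundredfold.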
Advocates of quantum computing and reversible computing will point out that it’s possible to sidestep that barrier, because a computation that never erases information need not dissipate heat. But Sandberg argues that it’s unlikely that calculations done on reversible quantum computers will entirely eliminate heat-generating operations, if only because of things like error correction. Thus even if the practical limit turns out to be smaller than what Landauer calculated, there will always be some limit.
So if you are the master of some technologically advanced civilization keen to do huge amounts of computation with your alien technology, you’ll surely conclude that it would be best to carry out those calculations at low temperatures. And if you tried to create those low temperatures with some futuristic refrigerator, you’d just be using energy to run it. What you really want to do instead is to take advantage of the coldness of space.
In that sense, the new proposal is a little like the one that Bradbury and Ćirković offered in 2006 when they suggested that the cold edge of the galaxy would be the place to go. The new explanation takes this concept to a whole new dimension: time. You see, space is only so cold—limited to 2.7 kelvins by the cosmic background radiation. Or rather, it can get only that cold now. Wait several billion years, though, and the universe will have expanded enough to reduce the temperature of the cosmic background radiation considerably. So if you’re seeking serious cold for doing calculations, best to wait—to “aestivate”—until conditions are better.
The notion of going to sleep for many billions of years certainly seems odd, but that may be because our puny biological selves have a hard time thinking about the passage of time being so malleable. Perhaps we’ll one day figure out how to freeze-dry ourselves or upload our noggins into those computers that we’ll be so keen to keep running. If that’s possible, then maybe an advanced civilization can choose what clock speed it wants to run at. Or maybe the strategy will be to turn off everybody’s clock and just run a low-power wake-up timer set for the year 1,000,000,000,000.
You might think that if waiting 1,000 billion years is good, waiting 2,000 billion years would be better. Sandberg and his colleagues say no: The universe doesn’t just get colder and colder forever. Eventually some exotic physics comes into play that limits the ultimate cold to 2.6 × 10⁻³⁰ kelvins. And that should happen in 1,400 billion years, give or take. So any advanced civilization pursuing this aestivation strategy would know when it’s time to end the big sleep.
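Taking the paper’s figure at face value, a rough calculation (my sketch, not the authors’) shows what such patience buys: because the Landauer floor scales linearly with temperature, dropping from today’s 2.7 K to 2.6 × 10⁻³⁰ K cuts the minimum energy cost per bit erasure by the ratio of the two temperatures, a factor of roughly 10³⁰:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

T_NOW = 2.7         # cosmic microwave background today, kelvins
T_FUTURE = 2.6e-30  # ultimate cold cited by Sandberg et al., kelvins

# Landauer floor per bit erasure: E = k_B * T * ln(2)
cost_now = K_B * T_NOW * math.log(2)
cost_future = K_B * T_FUTURE * math.log(2)

# A patient civilization gets this many more bit erasures per joule:
advantage = cost_now / cost_future
print(f"{advantage:.1e}")  # about 1.0e30
```

In other words, each joule of hoarded energy buys on the order of a thousand billion billion billion times more computation after the wait, which is the whole economic logic of aestivating.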
Could this be the answer to “Where is everybody”? Could they all just be snoozing? It seems preposterous, but Sandberg and his co-workers take pains to explore the possibility in great detail. They consider, for example, competition from other civilizations. They write: “If a civilization merely aestivates it may find its grip on its domain supplanted by latecomers. Leaving autonomous systems to monitor the domain and [prevent] activities that decrease its value would be the rational choice.” Let me translate: An advanced civilization that put itself to sleep would leave some sort of automated mechanism in place to detect and squash any young upstart civilization that threatened to elbow into its territory. So maybe that’s something to think about before we start launching interstellar probes.
Personally, I prefer what seems a simpler explanation to all this: that the evolution of technologically advanced life is just so improbable that humans represent the sole occurrence in our galaxy. Maybe one day, we’ll spread ourselves through its spiral arms testing that surmise. Or perhaps we’ll just put our energies into figuring out how to build one of those grand cosmic computers to surround the sun and then command it to hibernate—or rather, aestivate—maybe by closing its grand cosmic lid.
David Schneider is a senior editor at IEEE Spectrum. His beat focuses on computing, and he contributes frequently to Spectrum's Hands On column. He holds a bachelor's degree in geology from Yale, a master's in engineering from UC Berkeley, and a doctorate in geology from Columbia.