The Forklift Ballet: How DARPA Trucked Its Massive Radio-Frequency Testbed Across the United States

Colosseum, the centerpiece of DARPA's Spectrum Collaboration Challenge, just took its first road trip for the AI challenge's live finale at MWC Los Angeles

Image of the Colosseum
Photo: DARPA

When it comes to relocating a data center, Joel Gabel is an expert. But when the U.S. Defense Advanced Research Projects Agency (DARPA) selected his employer, Pivot Technology Services, to help with a project, he says the job more or less went against all the best practices.

That’s because the project was relocating Colosseum. At first glance, Colosseum may look like a data center, but in reality, it’s a massive radio-frequency emulation testbed that DARPA built for its Spectrum Collaboration Challenge (SC2). SC2 is a three-year competition to demonstrate that AI-managed radios can collaborate to use wireless spectrum more efficiently than radios confined to pre-allocated bands.

Colosseum was originally built and housed at the Johns Hopkins University Applied Physics Laboratory. That changed at the beginning of October, when the testbed was dismantled and trucked to Los Angeles for the competition’s finale, scheduled to begin at 3:30pm PDT today at MWC Los Angeles.

Over the past three years, two rounds of preliminary events have winnowed the competing teams down to 10 finalists. Today, they’ll be competing for a US $2 million first-place prize, as well as $1 million and $750,000 prizes for second and third place, respectively. The teams should be in for a smooth finale, but as anyone involved in the move can attest, it took a lot of work, and plenty of thinking on the fly, to relocate the emulator and get it running again.

“Friday night we actually had it booted up, and we’re starting to go through the initial checks,” says Paul Tilghman, the DARPA program manager leading the competition, “and it was at that point [the operations lead] comes up and goes, ‘We’re chasing down a couple of little issues. Not going to tell you what they are, because they’re little issues. If I tell you, you’ll turn the molehill into a mountain and there’s no reason to do that.’”

There may have been some molehills during the checks, but moving Colosseum definitely qualifies as a mountain. The testbed harnesses 3 peta-operations per second of computing power and shuttles 52 terabytes of data per second to emulate some 65,000 channel interactions among 256 wireless devices. It can draw up to 92 kilowatts of power and requires 200 gallons of water per minute cycling through its cooling system to keep it from overheating.
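Where does the 65,000 figure come from? A plausible reading (my assumption; the article doesn’t spell it out) is that emulating RF propagation means modeling a channel between every ordered pair of the 256 radios, which works out like this:

```python
# Back-of-the-envelope arithmetic behind Colosseum's emulation scale.
# Assumption (not stated outright in the article): "channel interactions"
# means one emulated channel per ordered pair of radios.

num_radios = 256
channel_interactions = num_radios ** 2  # every ordered pair of devices

print(channel_interactions)  # 65536, i.e. the "65,000" figure, rounded down
```

Squaring the device count like this is why even modest-sounding radio counts demand enormous compute: doubling the radios quadruples the channels to emulate.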

Technicians assemble Colosseum, the world’s largest RF emulator, in the Los Angeles Convention Center ahead of DARPA’s Spectrum Collaboration Challenge. Photo: DARPA

Colosseum is housed within a space twice the size of a cargo container. In fact, its housing is literally built from two converted cargo containers set side by side. The halves arrived at the Los Angeles Convention Center during setup for MWC Los Angeles and were hauled into the building and onto the convention floor by two 18-wheelers.

We’re going to move right past the crazy fact that DARPA and its hired logistics companies drove two semi-trucks into the Los Angeles Convention Center, because it gets better. To actually lower Colosseum’s halves onto the ground, the next step involved something that both Tilghman and Gabel referred to as a “forklift ballet.”

As it turned out, the convention center didn’t have a forklift strong enough to lift either half, so everyone improvised, carefully arranging four smaller forklifts around each half of Colosseum to lift it in concert. It worked, but Gabel, in showing me a video of the forklift ballet, pointed out a moment where one forklift’s rear wheels lifted off the ground as the machine and its operator grappled with Colosseum’s weight.

Time for a second mountain: the cooling system. Remember the 200 gallons of water per minute? That’s a lot of water, and it’s water you emphatically don’t want going anywhere other than exactly where you need it. The goal, in short, is not to flood the convention center while MWC Los Angeles is under way (or, to be fair, during teardown after the convention ends).

The original idea was to run the water through large hoses under the show floor, but there wasn’t enough space and there were too many power systems in the vicinity. So instead, the water runs through hoses from a pool behind the convention center, up the center of a 47-foot-high stairwell, across the catwalks overlooking the convention center’s floor, and down through trusses into Colosseum. Then it makes the entire trip back. According to Gabel, making the cooling system work required accounting for variables like the pressure at junctions where hose diameters change, and the condensation that builds up over time on the outside of the smaller-diameter hoses. All told, Gabel said, there were 142 variables to keep track of. (I never could figure out whether 142 was an exact count or a joke about how complicated the system was to design, though I suspect it was the former.)

Image of the final Colosseum. Photo: Michael Koziol

Yet, perhaps surprisingly, Colosseum was ready to go ahead of schedule. Tilghman says the competing teams got their first chance to see Colosseum in person only the day before the finale, when MWC Los Angeles began. In fact, even though these teams have been pitting their AI-managed radio systems against and alongside one another for years using Colosseum’s computing power, they did it all over the Internet until today’s live finale. Tilghman says he scheduled a meeting with the teams early on the first day of MWC Los Angeles to show them Colosseum, but that a few teams had already snuck over to see it.

Closeup of Colosseum. Photo: Michael Koziol

Tilghman says it would have been perfectly possible to leave Colosseum in Maryland and still conduct the finale in Los Angeles. However, he decided the challenge of moving the testbed across the country was worth it for two reasons. First, with Colosseum on site, there’s no risk of losing the connection between Johns Hopkins and the Los Angeles Convention Center mid-finale. More important, he believes, is for people to actually see Colosseum, and to realize that SC2 is more than data points on a screen representing how efficiently spectrum is being shared. Tilghman thinks the physicality of Colosseum during the finale will lend weight to a topic that could otherwise feel very abstract.

Tilghman seems as prepared as he possibly can be to present the conclusion of a DARPA grand challenge that he’s overseen. But given that the last match-ups of the finale will be done live, with Colosseum chugging out the results in real-time on the convention center floor, things will probably get interesting. “It’s completely live, it’s completely unscripted,” he says, “and we’re all going to find out together who has the best collaborative AI out there.”

