
IEEE Spectrum editor Steven Cherry pointed me to a blog post over at Gizmodo discussing a Wired magazine story by Nicholas Thompson on Russia's supposedly still-active doomsday system, called "Perimeter" or the Dead Hand.

According to the Wired story (which is based on an interview with Valery Yarynich, a 72-year-old former colonel who served in the Soviet Strategic Rocket Forces and on the Soviet General Staff), Perimeter, an open secret since 1993, came online in 1985, during a peak crisis period in the US-USSR Cold War. The USSR government reportedly believed that then-President Ronald Reagan wasn't afraid to start a nuclear war, and it wanted a means to strike back even if a US nuclear missile attack decapitated the government.

(The Able Archer 83 exercise, which nearly caused World War III, no doubt played a role in the USSR government's thinking at the time.)

Perimeter, the Wired story says,

 "...was designed to lie semi-dormant until switched on by a high official in a crisis. Then it would begin monitoring a network of seismic, radiation, and air pressure sensors for signs of nuclear explosions. Before launching any retaliatory strike, the system had to check off four if/then propositions: If it was turned on, then it would try to determine that a nuclear weapon had hit Soviet soil. If it seemed that one had, the system would check to see if any communication links to the war room of the Soviet General Staff remained. If they did, and if some amount of time - likely ranging from 15 minutes to an hour - passed without further indications of attack, the machine would assume officials were still living who could order the counterattack and shut down. But if the line to the General Staff went dead, then Perimeter would infer that apocalypse had arrived. It would immediately transfer launch authority to whoever was manning the system at that moment deep inside a protected bunker - bypassing layers and layers of normal command authority. At that point, the ability to destroy the world would fall to whoever was on duty: maybe a high minister sent in during the crisis, maybe a 25-year-old junior officer fresh out of military academy. And if that person decided to press the button ... If/then. If/then. If/then. If/then."
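The four if/then propositions in the passage above amount to a small decision procedure. Here is a minimal sketch of that logic as the Wired story describes it; all names, the `wait_minutes` threshold default, and the return values are hypothetical, for illustration only, and obviously bear no relation to the actual system's implementation:

```python
from dataclasses import dataclass

@dataclass
class SensorReport:
    """Hypothetical snapshot of the sensor network described in the story."""
    detonation_detected: bool          # seismic/radiation/air-pressure sensors
    general_staff_link_alive: bool     # communication link to the war room
    minutes_since_detonation: float    # quiet time since last sign of attack

def perimeter_decision(activated: bool, report: SensorReport,
                       wait_minutes: float = 15.0) -> str:
    """Walk the four if/then checks from the Wired description."""
    # 1. If it was turned on by a high official...
    if not activated:
        return "dormant"
    # 2. ...then determine whether a nuclear weapon hit Soviet soil.
    if not report.detonation_detected:
        return "monitor"
    # 3. If one did, check the links to the General Staff war room.
    if report.general_staff_link_alive:
        # 4. Links up and the waiting period passed quietly:
        #    officials presumed alive; the system stands down.
        if report.minutes_since_detonation >= wait_minutes:
            return "shut down"
        return "monitor"
    # Line to the General Staff dead: hand launch authority
    # to whoever is on duty in the bunker.
    return "transfer launch authority"
```

Note that even in this toy version, the final branch ends at a human decision, not a launch; that human-in-the-loop step is the safeguard the Russian side points to.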

The story says that the Russian government continues to improve Perimeter, believing that having (and publicizing) it makes nuclear war less likely.

The Wired story asks the obvious question about the fail-safe methods built into the system to keep it from going rogue:

"What if they fail? What if something goes wrong? What if a computer virus, earthquake, reactor meltdown, and power outage conspire to convince the system that war has begun?"

Apparently, there is little concern about such events on the part of the Russian government, which believes it has designed in enough safeguards to prevent accidental triggering of the system, especially since Perimeter is neither automatically activated nor fully autonomous.

I'm not sure I'm all that convinced. Having people in the loop doesn't mean something can't go stupidly wrong. The US Air Force, through a series of what should have been highly improbable missteps, lost track of six nuclear-armed cruise missiles in August 2007, for example.

The Wired story is worth a quick read; the sometimes wild reactions to the story posted there and at Gizmodo are probably as interesting, if not more so.

BTW, IEEE Spectrum ran a story on the aging US nuclear weapons stockpile, and some thoughts about what to do about it, in the March 2009 issue.
