On Friday, Amazon released a 5,700+ word post-mortem summary of why a portion of its web cloud services embarrassingly went down two weeks ago.

The long and the short of it was that a network change meant to upgrade network capacity was executed incorrectly, which set off a cascade of further failures.

Amazon's summary explanation states:

"At 12:47 AM PDT on April 21st, a network change was performed as part of our normal AWS [Amazon Web Services] scaling activities in a single Availability Zone in the US East Region. The configuration change was to upgrade the capacity of the primary network. During the change, one of the standard steps is to shift traffic off of one of the redundant routers in the primary EBS [Elastic Block Storage] network to allow the upgrade to happen. The traffic shift was executed incorrectly and rather than routing the traffic to the other router on the primary network, the traffic was routed onto the lower capacity redundant EBS network. For a portion of the EBS cluster in the affected Availability Zone, this meant that they did not have a functioning primary or secondary network because traffic was purposely shifted away from the primary network and the secondary network couldn't handle the traffic level it was receiving. As a result, many EBS nodes in the affected Availability Zone were completely isolated from other EBS nodes in its cluster. Unlike a normal network interruption, this change disconnected both the primary and secondary network simultaneously, leaving the affected nodes completely isolated from one another."

You can read the rest of the summary to see what the node isolation caused, including a "re-mirroring storm," the task Amazon faced in fixing the cascading problems, and what the company has done and intends to do to keep this from happening again.
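The "re-mirroring storm" deserves a word of explanation. As the post-mortem describes it, an EBS node that loses contact with the node holding its replica searches the cluster for free space and creates a new mirror; when a large share of the cluster does this at once, the spare capacity is quickly exhausted and volumes get "stuck." The toy Python sketch below (all numbers hypothetical) shows the basic dynamic.

```python
# A toy sketch (all numbers hypothetical) of a "re-mirroring storm": every node
# that loses contact with its replica partner immediately looks for spare space
# to create a new mirror, and the simultaneous demand exhausts the cluster's
# free capacity, leaving most volumes stuck.

FREE_REPLICA_SLOTS = 500    # hypothetical spare replica slots in the cluster
ISOLATED_NODES = 10_000     # hypothetical nodes that lost their replica partner

free_slots = FREE_REPLICA_SLOTS
re_mirrored = 0
stuck = 0

for _ in range(ISOLATED_NODES):
    if free_slots > 0:
        free_slots -= 1     # claim a spare slot and start a new mirror
        re_mirrored += 1
    else:
        stuck += 1          # nowhere to re-mirror; the volume stays "stuck"

print(f"re-mirrored: {re_mirrored}, stuck waiting for space: {stuck}")
# Prints: re-mirrored: 500, stuck waiting for space: 9500
```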

Amazon may also want to reiterate to its customers that they should plan for future outages and configure their websites accordingly.
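One straightforward way to "configure accordingly" is not to depend on a single Availability Zone or region. The sketch below (plain Python with hypothetical endpoints, not an AWS API) shows the basic failover idea: serve from the preferred region's endpoint and fall back to a replica elsewhere when it stops answering.

```python
# A minimal failover sketch with hypothetical endpoints (plain Python, not an
# AWS API): check each region's health endpoint in order of preference and
# serve from the first one that responds.

import urllib.error
import urllib.request
from typing import Optional

# Hypothetical copies of the same application deployed in different regions.
ENDPOINTS = [
    "https://app.us-east-1.example.com/health",
    "https://app.us-west-2.example.com/health",
    "https://app.eu-west-1.example.com/health",
]


def first_healthy_endpoint(endpoints, timeout_s: float = 2.0) -> Optional[str]:
    """Return the first endpoint whose health check answers, or None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                if resp.status == 200:
                    return url
        except (urllib.error.URLError, OSError):
            continue  # this endpoint (or its whole region) is down; try the next
    return None


if __name__ == "__main__":
    target = first_healthy_endpoint(ENDPOINTS)
    print("serving from:", target if target else "no region available")
```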

Amazon says that it will be providing an automatic 10-day service credit as compensation for the outage, although some of its customers are grousing that it is not enough, according to an article in the Wall Street Journal. The company hasn't said how it plans to compensate those whose data it could not recover, or how much the compensation (and the effort to resolve the outage) is costing. I guess we will have to wait for Amazon's quarterly report to find out.
