Amazon Says Cloud Problems Caused by Configuration Change

Amazon on Friday released a 5,700-plus-word post-mortem summarizing why a portion of its cloud services embarrassingly went down two weeks ago.

The long and the short of it: a configuration change intended to upgrade network capacity was executed incorrectly, which set off a cascade of further failures.

Amazon's summary explanation states:

"At 12:47 AM PDT on April 21st, a network change was performed as part of our normal AWS [Amazon Web Services] scaling activities in a single Availability Zone in the US East Region. The configuration change was to upgrade the capacity of the primary network. During the change, one of the standard steps is to shift traffic off of one of the redundant routers in the primary EBS [Elastic Block Store] network to allow the upgrade to happen. The traffic shift was executed incorrectly and rather than routing the traffic to the other router on the primary network, the traffic was routed onto the lower capacity redundant EBS network. For a portion of the EBS cluster in the affected Availability Zone, this meant that they did not have a functioning primary or secondary network because traffic was purposely shifted away from the primary network and the secondary network couldn't handle the traffic level it was receiving. As a result, many EBS nodes in the affected Availability Zone were completely isolated from other EBS nodes in its cluster. Unlike a normal network interruption, this change disconnected both the primary and secondary network simultaneously, leaving the affected nodes completely isolated from one another."
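The core of the failure mode Amazon describes can be sketched in a few lines. This is a purely illustrative model (the capacity figures and function names are my own, not Amazon's): the fatal step was shifting the primary network's full traffic load onto a redundant network provisioned for far less, so most traffic had nowhere to go and the affected nodes were cut off on both networks at once.

```python
# Illustrative model of the misrouted traffic shift; capacities are
# arbitrary units, not real AWS figures.
PRIMARY_CAPACITY = 100   # what the primary EBS network is built to carry
SECONDARY_CAPACITY = 20  # the lower-capacity redundant EBS network

def shift_traffic(load, target_capacity):
    """Return (carried, dropped) when `load` is shifted onto a network
    that can carry at most `target_capacity`."""
    carried = min(load, target_capacity)
    dropped = load - carried
    return carried, dropped

# Intended step: traffic moves to the other router on the primary
# network, which has the same capacity, so nothing is dropped.
carried, dropped = shift_traffic(PRIMARY_CAPACITY, PRIMARY_CAPACITY)
print(f"correct shift: carried={carried}, dropped={dropped}")

# Faulty step: traffic lands on the secondary network instead. With the
# primary deliberately drained and the secondary overwhelmed, the
# affected nodes effectively lose both networks simultaneously.
carried, dropped = shift_traffic(PRIMARY_CAPACITY, SECONDARY_CAPACITY)
print(f"faulty shift: carried={carried}, dropped={dropped}")
```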

You can read the rest of the summary for details on what the node isolation caused, including a "re-mirroring storm," the task Amazon faced in untangling the cascading problems, and the steps it has taken and plans to take to keep this from happening again.

Amazon may also want to remind its customers to plan for future outages and configure their websites accordingly.
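"Plan for outages" in practice often means not depending on a single Availability Zone. Here is a minimal, hypothetical sketch of that idea: a client tries endpoints in more than one zone and falls back when one is unreachable. The zone names, URLs, and the injected fetch function are all invented for illustration; this is not an AWS API.

```python
# Hypothetical multi-zone failover sketch; endpoints are made up.
ENDPOINTS = [
    "http://app.us-east-1a.example.com",  # primary zone
    "http://app.us-east-1b.example.com",  # fallback zone
]

def fetch_with_failover(endpoints, fetch):
    """Try each endpoint in order; return the first successful
    response, or re-raise the last error if every zone is down."""
    last_error = None
    for url in endpoints:
        try:
            return fetch(url)
        except ConnectionError as exc:
            last_error = exc  # this zone is down; try the next one
    raise last_error

# Example: simulate the primary zone being down while the fallback
# zone still answers.
def fake_fetch(url):
    if "us-east-1a" in url:
        raise ConnectionError("zone outage")
    return "ok"

print(fetch_with_failover(ENDPOINTS, fake_fetch))  # prints "ok"
```

The design point is simply that zone failure is handled in the application rather than assumed away; real deployments would do this with load balancers or DNS failover rather than a hand-rolled loop.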

Amazon says it will provide an automatic 10-day service credit as compensation for the outage, although some of its customers are grousing that it is not enough, according to an article in the Wall Street Journal. The company hasn't said how it plans to compensate those whose data it could not recover, or how much the compensation (and the effort to resolve the outage) is costing. I guess we will have to wait for Amazon's quarterly report to find out.

Risk Factor

IEEE Spectrum's risk analysis blog, featuring daily news, updates and analysis on computing and IT projects, software and systems failures, successes and innovations, security threats, and more.

Contributor: Willie D. Jones

