Amazon released on Friday a 5,700+ word summary post-mortem explaining why a portion of its cloud web services embarrassingly went down two weeks ago.
The long and the short of it was that a network change meant to upgrade the network's capacity was executed incorrectly, which triggered a cascade of further failures.
Amazon's summary explanation states:
"At 12:47 AM PDT on April 21st, a network change was performed as part of our normal AWS [Amazon Web Services] scaling activities in a single Availability Zone in the US East Region. The configuration change was to upgrade the capacity of the primary network. During the change, one of the standard steps is to shift traffic off of one of the redundant routers in the primary EBS [Elastic Block Storage] network to allow the upgrade to happen. The traffic shift was executed incorrectly and rather than routing the traffic to the other router on the primary network, the traffic was routed onto the lower capacity redundant EBS network. For a portion of the EBS cluster in the affected Availability Zone, this meant that they did not have a functioning primary or secondary network because traffic was purposely shifted away from the primary network and the secondary network couldn't handle the traffic level it was receiving. As a result, many EBS nodes in the affected Availability Zone were completely isolated from other EBS nodes in its cluster. Unlike a normal network interruption, this change disconnected both the primary and secondary network simultaneously, leaving the affected nodes completely isolated from one another."
You can read the rest of the summary to see what the node isolation caused, including a "re-mirroring storm," the task Amazon faced in trying to fix the cascading problems, and what it has done and intends to do to try to keep this from happening again.
Amazon may also want to remind its customers to plan for future outages and to configure their websites accordingly.
Amazon says that it will be providing an automatic 10-day service credit as compensation for the outage, although according to an article in the Wall Street Journal, some of its customers are grousing that this is not enough. The company hasn't said how it plans to compensate customers whose data it could not recover, or how much the compensation (and the effort to resolve the outage) is costing. I guess we will have to wait for Amazon's quarterly report to find out.
Robert N. Charette is a Contributing Editor to IEEE Spectrum and an acknowledged international authority on information technology and systems risk management. A self-described “risk ecologist,” he is interested in the intersections of business, political, technological, and societal risks. Charette is an award-winning author of multiple books and numerous articles on the subjects of risk management, project and program management, innovation, and entrepreneurship. A Life Senior Member of the IEEE, Charette was a recipient of the IEEE Computer Society’s Golden Core Award in 2008.