Rule No. 1 for Hurricanes: Don’t Place a Backup System Where It Will Flood

Once again, poor IT continuity management does in hospital electronic health record systems


In the wake of 9/11, Hurricane Katrina, and last year’s Fukushima nuclear disaster in Japan, numerous stories were written about how important it is for organizations to verify that their computer backup systems can keep operating through a major disaster. Then came Hurricane Irene, whose storm surge nearly flooded Lower Manhattan late last August, along with warnings that it wasn’t a question of whether Lower Manhattan would ever flood, but when. The same stories dutifully appeared yet again.

It took only a year for that prediction to come true. Hurricane Sandy’s storm surge did in fact flood Lower Manhattan. And, as endlessly forewarned, the New York region is already awash in stories of organizations whose IT contingency management plans proved inadequate.

For instance, several major websites, including Gizmodo and the Huffington Post, went offline Monday night. According to the New York Times and the Wall Street Journal, those two sites, which are associated with Gawker Media, shared a common ISP in Lower Manhattan named Datagram. When the local utility’s transformers blew up (check out the spectacular video of it, beginning at 0:20) and knocked out power to Lower Manhattan Monday evening, Datagram’s backup generators were supposed to switch on and keep its servers running. However, as the NY Times noted, “Although Datagram uses backup electricity generators in the event of a storm, its offices were flooded, knocking those machines out as well.” The WSJ also pointed out that Datagram’s basement flooded, damaging the building’s electrical system.

After Hurricane Katrina, the potential flooding of hospital basements, where paper medical records are typically stored, was touted as a good reason for moving to electronic health records. However, using EHR systems creates a different risk than flood-ruined paper records: What happens when the power goes out? Losing electricity has been a hot EHR-related topic recently, especially in the aftermath of the Cerner EHR remote-hosting service outage in July.

As noted in an LA Times story on the Cerner outage, “Federal law requires medical providers and their vendors to have contingency plans for when systems go down.” Apparently, however, federal law doesn’t require those contingency plans to be thoroughly vetted.

There's perhaps no clearer example of this than the loss of power at the 705-bed New York University Langone Medical Center, also in Lower Manhattan, which took out its Epic electronic health record system and, in conjunction with rising water, forced the hospital to evacuate its patients at the height of the hurricane. There are conflicting reports as to what happened, but according to news accounts, after the hospital lost power in the aforementioned transformer failure, the hospital’s primary backup electrical generator failed, as did the backup to the backup generator. A CBS News story says that flooding in the hospital's basement overwhelmed the primary backup generator. The other generator, on the roof, then failed as well, CBS News said, because “the pump that supplies fuel to that generator is on a lower floor and was [also] flooded.”
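The failure chain reported above is a textbook single point of failure: the roof generator looked independent of the basement, but its fuel pump was not. A minimal sketch of how such hidden shared dependencies can be checked (component names here are illustrative, loosely based on the reported NYU Langone setup, not the hospital's actual system):

```python
# Hypothetical dependency map: each power source and what it relies on.
# A "redundant" source is only redundant if it shares no failed dependency.
DEPENDS_ON = {
    "utility_power":     ["substation"],   # ConEd feed via local transformers
    "primary_generator": ["basement"],     # sited in the flood-prone basement
    "roof_generator":    ["fuel_pump"],    # generator itself is high up...
    "fuel_pump":         ["basement"],     # ...but its fuel pump is not
}

def fails(component, failed_sites):
    """A component fails if any transitive dependency sits at a failed site."""
    for dep in DEPENDS_ON.get(component, []):
        if dep in failed_sites or fails(dep, failed_sites):
            return True
    return False

flooded = {"basement", "substation"}
sources = ["utility_power", "primary_generator", "roof_generator"]
surviving = [s for s in sources if not fails(s, flooded)]
print(surviving)  # -> []: every "redundant" source shares a flooded dependency
```

Walking the dependency graph this way makes the point of the story concrete: a basement flood alone takes out not just the primary generator but, through the fuel pump, the roof generator too.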

A story at the Huffington Post, which did eventually come back online, reports that a NYU hospital spokesperson vigorously defended the adequacy (and architecture) of the hospital’s backup power system, stating that, “Our generators are fully compliant with all state and federal regulations and, using good prudence, we test them all the time as we have to do anyway.”

What is a bit puzzling is that the hospital, which sits in the Lower Manhattan hurricane evacuation zone, was closed on Mayor Bloomberg’s orders before Hurricane Irene last year, but was kept open as Sandy approached, even though a higher storm surge was predicted to come ashore this time around. The hospital claimed, according to the Guardian, that it did not anticipate Hurricane Sandy causing heavy flooding. Mayor Bloomberg, when questioned about why he didn’t order the hospital closed as before, said the hospital had told him it could handle the risks posed by Sandy.

Ensuring the controversy will live on a while longer, the mayor's own Bloomberg News reported that the hospital knew full well that the generators were vulnerable. It quoted a hospital trustee as saying that the hospital’s board “knew the facilities’ generators were outdated and at risk” and that the backup generators “are not state of the art and not in the most state-of-the-art location.”

Other reliable sources indicate that the NYU hospital basement is prone to flooding during heavy rains, and that it wasn't unusual for the backup generator to go offline as a result.

Another Bloomberg story notes that New York’s Staten Island University Hospital “started flooding Oct 29, shutting down the computers and electronic medical records and forcing workers to use paper records.” Patients were evacuated from the hospital before the hurricane arrived, however.

Here's one storm prediction you can take to the bank: more stories of poor IT contingency management will soon appear. The lessons to be learned from them will be written up and then soon forgotten, at least until the next disaster strikes.


Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises


You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize that they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed. Time-to-market pressures almost guaranteed that their software would contain more bugs than it otherwise would.
