“An extraordinary failure in leadership,” a “masterclass in sloppy project management,” and a “test case in maladministration” were a few of the more colorful descriptions of UK government IT failures offered by Edward Leigh, MP for Gainsborough, England, when he was Chairman of the Public Accounts Committee.
Leigh repeatedly pointed out that government departments not only made consistently and wholly unrealistic assumptions about IT project costs, schedules, and technical feasibility, but also took no responsibility for the consequences of those assumptions.
This same theme appeared frequently during our review of the past decade of IT project development and operational failures. The “over-optimism” disease, aka “Hubble Psychology,” is frequently cited in audit reports as a primary root cause of IT failures. Hubble Psychology is the term used by NASA Inspector General Paul Martin a few years ago in his report into the space agency’s project troubles (PDF) to describe the:
“[E]xpectation among NASA personnel that projects that fail to meet cost and schedule goals will receive additional funding and that subsequent scientific and technological success will overshadow any budgetary and schedule problems. They pointed out that although Hubble greatly exceeded its original budget, launched years after promised, and suffered a significant technological problem that required costly repair missions, the telescope is now generally viewed as a national treasure and its initial cost and performance issues have largely been forgotten.”
In other words, as long as you can keep your program alive, you have a very good chance of continuing to receive sufficient money (and time) to make it work sooner or later. While the expectation that “all will be forgiven” doesn’t always come true, as even the government will eventually run out of money and patience, it works often enough, especially in defense programs, to make it a belief worth acting on. If you have the time to dig into the six government IT projects we highlighted in our “Life Cycle of Failed Projects,” you’ll soon discover that each suffered from a version of NASA’s Hubble Psychology.
A skewed bias toward extreme optimism doesn’t just affect program development plans; it also infects the decisions made about when to take an IT system live. Thumbing through the myriad Risk Factor blog posts will quickly turn up a plethora of IT projects deployed long before they were ready, on the strength of unfounded, if not delusional, optimism concerning their operational state.
Take, for instance, the Los Angeles Unified School District’s (LAUSD) disastrous decision last year to roll out its new integrated $10 million student information system, called MISIS. Dozens of operational snags with MISIS immediately cropped up: Thousands of students did not receive class schedules for weeks; an untold number of teachers were assigned 70 or more students per class; students were placed in classes they had already completed; middle school students were placed in high school classes; high school seniors were unable to send transcripts to the colleges they were applying to; and so forth. It has taken over a year of hard effort and more than $100 million in additional spending to make MISIS operationally stable, although it is still far from delivering the functionality originally promised.
What makes the MISIS debacle so mind-boggling and yet so unsurprising is that the original project schedule, which was already aggressive (two years for $29 million), was compressed by half while the project’s budget was cut by two-thirds. Predictably, severe operational problems began appearing in the weeks before system rollout, caused by an acknowledged lack of system testing. LAUSD teachers, school administrators, and others warned LAUSD senior administration that MISIS was nowhere near ready to deploy, not only because of the technical hitches, but because only a small minority of LAUSD teachers had been fully trained on how to use the system. Even the LAUSD chief information officer acknowledged a few days before the rollout that it might be “bumpy,” but that did not matter. The LAUSD Superintendent, who admitted IT was not his strong suit, was confident that MISIS was ready to be deployed, so it was deployed.
What makes this situation even more incredible is that the MISIS disaster almost exactly mirrored another massive LAUSD IT project failure: the botched rollout of a new payroll system back in 2007, which caused a year of pain. For whatever reason, the lessons from that event were completely ignored.
Not learning from failure seems to go hand in hand with being confident about your IT project’s status. This seemed especially true over the past ten years in the airline industry. For instance, US Airways was brimming with confidence in March 2007 when it switched over to a new reservation system following its 2005 merger with America West Airlines. Such was its confidence that just before the cut-over to the new system, a US Airways senior vice president of customer service boasted, “We get to demonstrate that these transitions aren't as big and as difficult as historically has been proclaimed.” Well, the new reservation system melted down on the day it went live; it took nearly six months to get everything back to normal.
Then there was the case of British Airways’ new Terminal 5 baggage system at London’s Heathrow International Airport. Avoiding a repeat of the ignominy of Denver International Airport’s failed IT baggage system was uppermost in system integrator BAA’s mind. While more successful than DIA’s baggage system, Terminal 5’s baggage system still face-planted spectacularly on its opening day of 27 March 2008 and for days afterwards, with some 430 flights cancelled and more than 20,000 bags mishandled in the first eight days of operation.
It later came out in a UK Parliamentary inquiry into the baggage system mayhem that BA senior management went ahead with the opening even though it knew that the baggage system wasn’t fully ready and would likely need another six months as its test program and staff training were both “compromised.” But waiting would cost BA money, so a “calculated risk” was taken to open Terminal 5 as planned and hope for the best. Of course, BA didn’t bother to tell its thousands of passengers using Terminal 5 that tidbit of information, instead proclaiming to one and all everything was “tried, tested and ready to go.”
United Airlines was similarly self-assured when it moved to a single passenger service system and website in March 2012 to complete its 2010 merger with Continental Airlines. Then-CEO Jeff Smisek said he was confident the transition would go smoothly, proclaiming that the airline was "exceedingly well prepared for it." Again, things didn’t go as calmly as expected, to say the least, and United Airlines is still suffering the financial and reputational after-effects to this day.
We should note, in fairness, that the recent cut-over of the merged American Airlines and US Airways reservation system did go well, so perhaps, finally, a measure of humility has been gained to offset the overweening hubris that usually accompanies the implementation of these types of IT systems.
Healthcare IT systems also seem to be prone to the optimism bug. The UK’s £12 billion and Australia’s A$566 million electronic health record system fiascos are prime examples of the belief that a righteous idea alone will create a successful IT system. And of course, the various botched attempts in the US at creating federal and state health exchanges to support the Affordable Care Act (ACA) are case studies in hope over experience and good sense. A fitting description of the entire situation in late 2013 and early 2014 was unwittingly given by then HHS Secretary Kathleen Sebelius when she told a Congressional committee that the federal exchange “works unless you try to use it.”
While the botched rollout of the ACA health exchanges might arguably be the greatest example of IT hubris coupled with self-denial over the past decade, I personally think the development of New York City’s personnel management system, called CityTime, is an even better one. Originally slated in 1998 to cost $63 million and be completed within five years, the project ballooned to over $722 million by March 2010, with a completion date set for June 2011. A government investigation in December 2010 uncovered what at the time looked like $80 million in fraudulent billing, but that figure soon exploded into more than $500 million.
CityTime’s prime contractor, SAIC, agreed to forfeit $500 million of the $690 million it was paid to avoid prosecution for defrauding New York City. It admitted that it failed to investigate internal warnings that things were amiss, a well-practiced lack of curiosity no doubt helped along by the firehose of money it was showered with.
What intrigues me more is how long CityTime stayed alive and unexamined by New York City officials as the project costs rapidly climbed to $224 million in 2006, $348 million in 2007, and $628 million in 2009 before breaching the $700 million mark a year later. Even though irregularities in billing were raised back in 2003 and many times over the years thereafter, these warnings were either studiously ignored or played down. Not until the very end did New York City’s comptroller audit or question the program, which is all the more astonishing given that the project’s overall benefits were originally estimated in 1998 at saving the city only $60 million in timesheet fraud!
In fact, there seemed to be a collective “shrug of the shoulders” acceptance by the Bloomberg Administration that big government IT projects always overrun their estimates, so why be overly concerned about CityTime’s increasing cost? Even as the fraud was being exposed, Mayor Michael Bloomberg cavalierly dismissed the project’s problems and exploding cost as just one of those things that fell through the oversight cracks. Some crack!
If future government IT project failures are ever to be minimized, the Hubble Psychology is going to need to be directly addressed. Making individuals in both government and contractor organizations accountable in a meaningful way is a good start. However, even this will not be easy. As exemplified by US Air Force leadership in the aftermath of the $1 billion spent on the Expeditionary Combat Support System (ECSS) without anything to show for it, government doesn’t seem to believe in personal accountability when it comes to IT project failure.
If we can’t hold people accountable, maybe the next-best thing is to shed more light on these failures. That will be the topic of the final blog post in our special report on the Lessons of a Decade of IT Failures.
Robert N. Charette is a Contributing Editor to IEEE Spectrum and an acknowledged international authority on information technology and systems risk management. A self-described “risk ecologist,” he is interested in the intersections of business, political, technological, and societal risks. Charette is an award-winning author of multiple books and numerous articles on the subjects of risk management, project and program management, innovation, and entrepreneurship. A Life Senior Member of the IEEE, Charette was a recipient of the IEEE Computer Society’s Golden Core Award in 2008.