We Need Better IT Project Failure Post-Mortems

It's hard to find trustworthy data about IT debacles

In pulling together this special interactive report on a decade’s worth of IT development projects and operational failures, the most vexing aspect of our efforts was finding trustworthy data on the failures themselves.

We initially started with a much larger set than the 200 or so projects depicted in this report, but the project failure pool quickly shrank as we tried to get reliably documented, quantifiable information explaining what had occurred, when and why, who was affected, and most importantly, what the various economic and social impacts were.

This was true not only for commercial IT project failures—which one would expect, given that corporations are extremely reluctant to publicize their misfortunes in any detail, if at all—but also for government IT project failures. Numerous times, we reviewed government audit reports and found that a single agency had inexplicably used different figures for a project’s initial and subsequent costs, as well as for its schedule and functional objectives. Such volatility in the underlying data made it difficult, to say the least, to assemble an accurate, complete, and consistent picture of what truly happened on a project.

Our favorite poster child for a lack of transparency regarding a project’s failure is the ill-fated $1 billion U.S. Air Force Expeditionary Combat Support System (ECSS) program (although the botched rollout of HealthCare.gov is a strong second). Even after multiple government audits, including a six-month, bipartisan Senate Armed Services Committee investigation into the high-profile fiasco, the full extent of what this seven-year misadventure in project management was trying to accomplish could not be uncovered, nor could its final cost to taxpayers be ascertained.

With that in mind, we make our plea to project assessors and auditors, asking that they apply a couple of lessons learned the hard way over the past decade of IT development project and operational failures:

In future assessments or audit reports of IT development projects, would you please publish with each one a very simple chart or timeline? It should show, at a glance: the project’s start date (i.e., the date money is first spent on the project); the top three to five functional objectives the project is trying to accomplish; and the predicted versus actual cost, completion date, and delivered functionality at each critical milestone where the project was reviewed, delivered, or canceled.

Further, if the project has been extended, re-scoped, or reset, please make the details of such a change absolutely clear, and indicate how the deviation affects the figures above. Finally, if the project has been canceled, include the opportunity costs in the final accounting. For example, the failure of ECSS is currently costing the Air Force billions of dollars annually because of the continuing need to maintain legacy systems that should have been retired by now. You’d think this type of project status information would be routinely available. Unfortunately, it is rarely published in its totality, and when it is, it’s even less likely to be found all in one place.
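To make the request concrete, here is one way such a summary might be captured as structured data. This is only a rough sketch in Python; the field names are our own invention for illustration and do not reflect any official audit format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Milestone:
    """A point at which the project was reviewed, delivered, or canceled."""
    label: str                           # e.g., "initial review", "go-live", "cancellation"
    when: date
    predicted_cost: float                # in the report's currency
    actual_cost: float
    predicted_completion: date
    actual_completion: Optional[date]    # None if not (or never) completed
    delivered_functionality: List[str]   # which objectives were actually delivered

@dataclass
class ProjectSummary:
    """One-glance summary of an IT project's history, per the plea above."""
    name: str
    start_date: date                     # the date money was first spent
    top_objectives: List[str]            # the top three to five functional objectives
    milestones: List[Milestone] = field(default_factory=list)
    rescope_notes: List[str] = field(default_factory=list)   # extensions, re-scopes, resets
    canceled: bool = False
    opportunity_cost: Optional[float] = None   # e.g., ongoing legacy-system maintenance
```

Even published as a plain table with these columns, this would let readers see at a glance how a project’s cost, schedule, and scope drifted from the original plan.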

Similarly, for records related to IT system operational failures, would you please include all of the consequences being felt—not only the financial ones but also the impact on the system’s users, both internal and external? Too often an operational failure is dismissed as just a “teething problem,” when it feels more like a “root canal” to the people who depend on the system working properly.

A good illustration is Ontario’s C$242 million Social Assistance Management System (SAMS), which was released more than a year ago and is still not working properly. The provincial government remains upbeat about the system’s operation while callously downplaying the impact of the malfunctioning system on the province’s poor.

More than 100 years ago, U.S. Supreme Court Justice Louis Brandeis argued that “Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman.” Hopefully, the little bit of publicity we have tried to bring to this past decade of IT project failures will help to reduce their number in the future.

Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize they were cutting any corners. Only when their code was deployed and exercised by many users did its hidden flaws come to light. And maybe the developers were rushed: time-to-market pressures almost guarantee that their software will contain more bugs than it otherwise would.
