Risk Factor

Detroit's IT Systems “Beyond Fundamentally Broken”

IT Hiccups of the Week

Last week’s IT Hiccups parade was a bit slower than normal, but a couple of IT snafus caught my eye. For instance, there was the embarrassed admission by Los Angeles Unified School District (LAUSD) chief strategic officer Matt Hill that the new-but-still-problem-plagued MiSiS student tracking system I wrote about a few weeks ago should have had “a lot more testing” before it was ever rolled out. There was also the poorly thought-out pasta promotion by Olive Garden that ended up crashing the restaurant chain’s website. What sparked my curiosity most, however, was the disclosure by Beth Niblock, Detroit’s Chief Information Officer, that the city’s IT systems were broken.

How broken are they? According to Niblock:

“Fundamentally broken, or beyond fundamentally broken. In some cases, fundamentally broken would be good.”

Niblock’s comment was part of her testimony during Detroit’s bankruptcy hearings. Last July, Detroit filed for bankruptcy, and since then it has been in bankruptcy court trying to work out debt settlements with its creditors, some of whom are unhappy with the terms the city has offered. Niblock was a witness at a court hearing looking into whether the city’s bankruptcy plan was feasible and fair to its many creditors, and whether the plan would put the city on more sound financial and operational footing.

Critical to Detroit’s return to financial and operational soundness is the state of the city’s IT systems. However, since the 1990s, those systems have generally been a shambles, and that is putting it charitably. Currently, according to Niblock (who took on the CIO job in February after turning it down twice, and who may now wish she had turned it down a third time), the city’s IT systems are “atrocious,” “unreliable,” and “deficient,” Reuters reported.

Reuters went on to report Niblock's testimony that the city’s Unisys mainframe systems are “so old that they are no longer updated by their developers and have security vulnerabilities.” She added that the desktop computers, which mostly use Windows XP or something older, “take 10 minutes” to boot. It probably doesn’t matter anyway, since the computers run so many different versions of software that city workers can’t share documents or communicate, Niblock says. That also may not be so bad, given that city computers have apparently been infected several times by malware.

Detroit’s financial IT systems are so bad that the city really hasn’t known what it is owed or, in turn, what it owes, for years. A Bloomberg News story last year, for example, told of a $1 million check from a local school district that wasn’t deposited by Detroit for over a month; during that time, the check sat in a city hall desk drawer. That isn’t surprising, the Bloomberg story noted, as the city has a hard time keeping track of even the funds electronically wired to it. The financial systems are so poor that city income-tax receipts have to be processed by hand; in fact, some 70 percent of all of the city’s financial accounting entries are still done manually. The cost of doing things manually is staggering: it costs Detroit $62 to process each city paycheck, as opposed to the $18 or so it should cost. Bloomberg stated that a 2012 Internal Revenue Service audit of the city’s tax collection system termed it “catastrophic.”

While the financial IT system woes are severe, the fire and police departments’ IT systems may be in even worse shape. According to the Detroit Free Press, there is no citywide computer-aided dispatch system to communicate emergency alerts to fire stations. Instead, fire stations receive the alerts by fax machine. To make sure an alarm is actually heard, firefighters have rigged Radio Shack buzzers and doorbells, among other homemade Rube Goldberg devices, to be triggered by the paper coming out of the fax machine. Detroit’s Deputy Fire Commissioner told the Free Press that, “It sounds unbelievable, but it’s truly what the guys have been doing and dealing with for a long, long time.”

You really need to check out the video accompanying the Free Press story, which shows firefighters using a soda can filled with coins and screws perched on the edge of the fax machine so that it will be knocked off by the paper coming out of the machine when an emergency alert is received at the fire station. It makes one wonder what happens if the fax runs out of paper.

The Detroit police department’s IT infrastructure, what there is of it, isn’t in much better shape. Roughly 300 of its 1,150 computers are less than three years old. Apparently even those “modern” computers have not received software updates, and in many cases the software the police department relies on is no longer supported by its vendors. The police lack an automated case management system, which means officers spend untold hours manually filling out, filing, and later trying to find paperwork. Many Detroit police cars also lack basic Mobile Data Computers (MDCs), which means officers have to rely on dispatchers to perform even basic functions they should be able to do themselves. An internal review (pdf) of the state of Detroit’s police department, published in January, makes for very sad, if not scary, reading.

If you are interested in how Detroit’s IT systems became “beyond fundamentally broken,” there is a great case study that appeared in a 2002 issue of Baseline magazine. It details Detroit’s failed attempt, beginning in 1997, to upgrade and integrate its various payroll, human resources, and financial IT systems into a single be-all Detroit Resource Management System (DRMS) that went by the name “Dreams.” The tale told is a familiar one to Risk Factor readers: attempting to replace 22 computer systems used across 43 city departments with one city-wide system resulted in a massive cost overrun and little to show for it five years on. Crain’s Detroit Business also took a look back at the DRMS implementation nightmare in a July article.

Detroit hopes, the Detroit News reports, that the bankruptcy judge will approve its proposed $101 million IT “get well” plan, which includes $84.8 million for IT upgrades and $16.3 million for additional IT staff. (In February, according to a story in the Detroit Free Press, the city wanted to invest $150 million, but that amount apparently had to be scaled back because of budgetary constraints.) Spending $101 million, Niblock admitted, will not buy world-class IT systems, but ones that are, “on the grading scale… a ‘B’ or a B-minus” at best. And Niblock concedes that getting even to a “B” grade will require a lot of things going perfectly right, which is not likely to happen.

On one final note, I’d be remiss not to mention that last week also marked the 25th anniversary of the infamous Parisian IT Hiccup. For those who don’t remember, in September 1989 some 41,000 Parisians who were guilty of simple traffic offenses were mailed legal notices accusing them of everything from manslaughter to hiring prostitutes, or in some cases both. As a story in the Deseret News from the time noted:

“A man who had made an illegal U-turn on the Champs-Élysées was ordered to pay a $230 fine for using family ties to procure prostitutes and ‘manslaughter by a ship captain and leaving the scene of a crime.’”

Local French officials blamed the problem on “human error by computer operators.”

Plus ça change, plus c'est la même chose. (The more things change, the more they stay the same.)

In Other News ….

Coding Error Exposes Minnesota Students' Personal Information

Computer Glitch Sounds Air Raid Sirens in Polish Town

Computer Problems Change Florida County Vote Totals

Billing Error Affects Patients at Tennessee Regional Hospital

Dallas Police Department Computer Problems Causing Public Safety Concerns

New York Thruway Near Albany Overbills 35,000 E-ZPass Customers

Olive Garden Shoots Self in Foot With Website Promotion

Apple Store Crashes Under iPhone 6 Demand

Scandinavian Airlines says Website Now Fixed After Two Days of Trouble

Housing New Zealand Tenants Shocked by $10,000 a Week Rent Increases

GM's China JV Recalling 38,328 Cadillacs to Fix Brake Software

LAUSD MiSiS System Still Full of Glitches

FCC Fines Verizon $7.4 Million Over Six-Year Privacy Rights “IT Glitch”

IT Hiccups of the Week

The number of IT snafus, problems and burps moved back to a more normal rate last week. There was a surprising cluster of coincidental outages that hit Apple, eBay, Tumblr, and Facebook, but other than those, the most interesting IT Hiccup of the Week was the news that the U.S. Federal Communications Commission (FCC) had fined Verizon Communications a record $7.4 million for failing to notify two million customers of their right to opt out of having their personal information used in certain company marketing campaigns.

According to the Washington Post, Verizon is supposed to inform new customers, via a notice in their first bill, that they can opt out of having their personal information used by the company to craft targeted marketing campaigns for its products and services. Beginning in 2006, however, Verizon failed to include the opt-out notices.

A Verizon spokesperson said the oversight was “largely due to an inadvertent IT glitch,” the Post reported. The spokesperson, however, didn’t make clear why the company failed to notice the problem until September 2012, nor why it didn’t inform the FCC of the problem until 18 January 2013, some 121 days later than the agency requires. (Companies are required to inform the FCC of issues like this within five business days of their discovery.)

The FCC’s press release announcing the fine showed that the agency was clearly irritated by Verizon’s tardiness. Travis LeBlanc, the acting chief of the FCC Enforcement Bureau, said, “In today’s increasingly connected world, it is critical that every phone company honor its duty to inform customers of their privacy choices and then to respect those choices. It is plainly unacceptable for any phone company to use its customers’ personal information for thousands of marketing campaigns without even giving them the choice to opt out.”

Of course, a better solution would be for the FCC to require companies to obtain customers’ opt-in consent before using their personal information at all, but that is a discussion for another day.

On top of the $7.4 million fine, which the FCC took pains to point out is the “largest such payment in FCC history for settling an investigation related solely to the privacy of telephone customers’ personal information,” Verizon will have to include opt-out notices in every bill, as well as put a system in place to monitor and test its billing system to ensure that the notices actually go out.
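Verizon hasn’t said what that monitoring will look like, but conceptually it boils down to an automated check over each billing run along the lines of the sketch below. The Bill type, field names, and notice identifier are invented for illustration and are not drawn from Verizon’s systems.

from dataclasses import dataclass

@dataclass
class Bill:
    account_id: str
    is_first_bill: bool
    inserts: list              # marketing and legal notices bundled with the bill

# Hypothetical identifier for the required privacy notice.
OPT_OUT_NOTICE = "PRIVACY_OPT_OUT_NOTICE"

def missing_opt_out_notices(bills):
    """Return the accounts whose first bill is about to go out without the required notice."""
    return [bill.account_id for bill in bills
            if bill.is_first_bill and OPT_OUT_NOTICE not in bill.inserts]

batch = [
    Bill("A-1001", True, [OPT_OUT_NOTICE]),
    Bill("A-1002", True, []),             # would be flagged before the bills are mailed
    Bill("A-1003", False, []),
]
print(missing_opt_out_notices(batch))     # ['A-1002']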

Verizon tried to downplay the privacy rights violation, of course, even implying that its customers benefited from the glitch by being able to receive “marketing materials from Verizon for other Verizon services that might be of interest to them.”

Readers of the Risk Factor may remember another “inadvertent IT glitch” at Verizon, disclosed in 2010, in which the company admitted that it had over-billed customers by $52.8 million in “mystery fees” over three years. During that time, Verizon customers who called the company to complain about the fees were basically told to shut up and pay them. The FCC smacked Verizon with a then-record $25 million fine for that little episode of customer non-service and IT ineptitude.

Last year, Verizon agreed to pay New York City $50 million for botching its involvement in the development of a new 911 emergency system. Alas, that wasn’t a record-setting settlement; SAIC owns that honor after paying the city $466 million to settle fraud charges related to its CityTime system development.

In Other News…

eBay Access Blocked by IT Problems

Facebook Experiences Third Outage in a Month

Tumblr Disrupted by Outage

Apple iTunes Outage Lasts 5 Hours

Twitter Sets Up Software Bug Bounty Program

Children's Weight Entry Error Placed Australian Jet at Risk

Spanish ATC Computer Problem Scrambles Flights

Yorkshire Bank IT Problems Affect Payments

Computer Problem Hits Boston MBTA Corporate Pass Tickets

Unreliable Washington, DC Health Exchange Still Frustrates Users

South African Standard Bank Systems Go Offline

New Zealand Hospital Suffers Major Computer Crash

Computer Crash Forces Irish Hospital to Re-Check Hundreds of Blood Tests

Fiji Airways Says No to $0 Tickets Caused by Computer Glitch

Portugal’s New Court System Still Buggy

Hurricane Projected Landfall Only 2,500 Miles Off

Vulnerable "Smart" Devices Make an Internet of Insecure Things

According to recent research [PDF], 70 percent of Americans plan to own, within the next five years, at least one smart appliance such as an internet-connected refrigerator or thermostat. That would be a skyrocketing adoption rate, considering that just four percent of Americans own a smart appliance today.

Read More

310,000 Healthcare.gov Enrollees Must Provide Proof Now or Lose Insurance

IT Hiccups of the Week

Last week, there were so many reported IT snags, snarls and snafus that I felt like the couple who finally won the 20-year jackpot on the Lion’s Share slot machine at the Las Vegas MGM Grand casino. Among the IT Hiccups of note were the routine-maintenance oofta at Time Warner Cable on Wednesday morning that knocked out Internet and on-demand service across the US for over 11 million of its customers and continued to cause other service issues for several days afterward; the “coding error” missed for six years by Germany’s Deutsche Bank that caused the misreporting to the UK government of 29.4 million equity swaps, with buys being reported as sales and vice versa; and the rather hilarious software bugs in the new Madden NFL 15 American football game, which have players flying around the field in interesting ways.

However, for this week, we just can’t ignore yet another Healthcare.gov snafu of major proportions. Last week, USA Today reported that the Centers for Medicare and Medicaid Services had sent letters to 310,000 people who enrolled for health insurance through the federal website, asking them to provide proof of citizenship or immigration status by 5 September or lose their health insurance at the end of the month.

Read More

LA School District Continues Suffering MiSiS Misery

IT Hiccups of the Week

With schools starting to open for the 2014-2015 academic year across the United States, one can confidently predict that there will be several news stories of snarls, snafus, and hitches with new academic IT support systems as they go live for the first time. (You may recall that happening in Maryland, New York, and Illinois a few years ago.)

While most of these “teething problems” are resolved during the first week or so of school, the significant IT issues affecting the performance of the new integrated student educational tracking system recently rolled out in the Los Angeles Unified School District—the second largest in the country, with 650,000 students—have already stretched beyond the first few weeks of the school term with no definitive end in sight. Furthermore, LAUSD administrators knew about many of the software bugs being encountered, but decided to roll out the system anyway.

Read More

The Routing Wall of Shame

IT Hiccups of the Week

While I have been en vacances the past few weeks, there have been several potential IT Hiccups of the Week stories of interest, including the 200-to-500-year-old Indian women getting free sewing machines and Philippine fast food giant Jollibee Foods having to temporarily close 72 of its restaurants in the Manila region because of problems the company experienced migrating to a new IT system—much to the disappointment of its Chickenjoy fans. However, the one hiccup that stands above the rest was the Internet routing difficulties reportedly experienced last week by the likes of eBay, Amazon, and LinkedIn, among many others.

Read More

Black Hat 2014: How to Hack the Cloud to Mine Crypto Currency

Using a combination of faked e-mail addresses and free introductory trial offers for cloud computing, a pair of security researchers have devised a shady crypto currency mining scheme that they say could theoretically net hundreds of dollars a day in free money using only guile and some clever scripting.

The duo, who are presenting their findings at this week’s Black Hat 2014 cyber security conference in Las Vegas, shut down their proof-of-concept scheme before it could yield any more than a token amount of Litecoins (an alternative to Bitcoin). The monetary value of both virtual currencies is based on enforced scarcity that comes from the difficulty of running processor-intensive algorithms.

Rob Ragan, senior security associate at the consulting firm Bishop Fox in Phoenix, Ariz., says the idea for the hack came to him and his collaborator Oscar Salazar when they were hired to test the security around an online sweepstakes.

“We figured if we could get 100,000 e-mails entered into the sweepstakes, we could have a really good chance of winning,” he says. “So we generated a script that would allow us to generate unique e-mail addresses and then automatically click the confirmation link.”

Once Ragan and Salazar had finished securing the sweepstakes against automated attacks, they were still left with all those e-mail addresses.

“We realized that … for about two-thirds of cloud service providers, their free trials only required a user to confirm an e-mail address,” he says. So the duo discovered they effectively had the keys to many thousands of separate free trial offers of cloud service providers’ networked storage and computing.

In other words, they had access to many introductory accounts at sites like Google’s Cloud Platform, Joyent, CloudBees, iKnode, CloudFoundry, CloudControl, ElasticBox and Microsoft Windows Azure.

Some of these sites, each offering its own enticement of free storage and free computing as a limited introductory offer, could be spoofed, the researchers discovered. Troves of unique e-mail addresses could be readily made on the fly, using a hard-to-detect automated process the pair developed, and then used to claim the free storage and processor time.

A spoof e-mail address of course has two components, Ragan says, the local part (the stuff to the left of the “@“ sign) and the domain (to the right). To appear like a random stream of e-mail addresses signing up for any given service, Ragan says they scraped real local addresses from legit e-mail address dumps on sites like Pirate Bay. The domain side they set up using “FreeDNS” servers that attach e-mail addresses to existing domains, a service that can be exploited for domains that have poor security measures in place.

So, say there’s an address dump file on the Internet containing the legit e-mail addresses “CatLover290 at gmail” and “CarGuy909 at Yahoo.” Ragan and Salazar’s algorithm would attach “CatLover290” and “CarGuy909” to one of thousands of spoof URLs they’d set up through the FreeDNS sites. The original e-mail accounts would then be unaffected. But the resulting portmanteau e-mail addresses would appear to be coming from a random stream of humans on the Internet.
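To make the mechanics concrete, here is a minimal Python sketch of that recombination step, assuming a handful of scraped local parts and attacker-controlled FreeDNS-style domains. All of the names below are invented, and none of Ragan and Salazar’s actual tooling is reproduced.

import itertools
import random

# Hypothetical inputs: local parts scraped from a public address dump, and
# subdomains an attacker has attached to lax domains via a FreeDNS-style service.
scraped_local_parts = ["CatLover290", "CarGuy909", "GardenFan41"]
spoof_domains = ["mail.example-hobby.net", "news.example-forum.org"]

def generate_addresses(count):
    """Yield up to `count` unique addresses that look like a random stream of real users."""
    combos = list(itertools.product(scraped_local_parts, spoof_domains))
    random.shuffle(combos)
    for local_part, domain in combos[:count]:
        yield f"{local_part}@{domain}"

for address in generate_addresses(4):
    print(address)    # e.g., CarGuy909@news.example-forum.org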

Thus, Ragan says, not even a human observer watching the e-mails registering for free cloud computing accounts—none appearing to be produced by a simple algorithm or automated process—would detect anything overtly suspicious. And to further throw off the scent of suspicious activity, they used Internet anonymizing software like TOR and virtual private networks to spoof where the trial account requests were coming from. (Ragan says that generating real-seeming names using name-randomizing algorithms would probably be good enough.)

“A lot of the e-mail confirmation and authentication features rely on the old concept that one person has one e-mail address—and that is simply not the case anymore,” Ragan says. “We’ve developed a platform that would allow anyone to have 30,000 e-mail addresses.”

So they signed up for hundreds of free cloud service trial accounts and, in the process, strung together a free, ersatz virtual supercomputer.

“We demonstrated that we could generate a high amount of crypto hashes for a high return on Litecoin mining, using these servers that didn’t belong to us,” Ragan says. “We didn’t have an electricity bill, and we were basically able to generate money for free out of thin air.”

Ragan says at their scheme’s peak, they had 1000 accounts that were each generating 25 cents per day: $250 of free Litecoin. He says they shut the system down before it generated any real monetary value or made any noticeable performance dent in the cloud service systems.
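The arithmetic behind that figure is simple enough to spell out; the account count and per-account rate are the numbers Ragan cites above, while the 30-day projection is just an extrapolation.

accounts = 1000                     # trial accounts at the scheme's peak
usd_per_account_per_day = 0.25      # Litecoin value mined per account per day
daily_take = accounts * usd_per_account_per_day
print(f"${daily_take:,.2f} per day")            # $250.00, with no electricity bill
print(f"${daily_take * 30:,.2f} per 30 days")   # $7,500.00 if left running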

And Ragan stressed that the devious schemes he and Salazar developed are being disclosed in order to raise awareness of problems in security measures that real criminal elements around the world can exploit, and probably already are exploiting.

“Not planning for and anticipating automated attacks is one of the biggest downfalls a lot of online services are currently experiencing,” Ragan says.

One measure Ragan says he and Salazar would like to see cloud service providers adopt to combat schemes like theirs is the introduction of randomly placed anti-automation controls. Captchas, credit card verification, and phone verification can all be spoofed, he says, if they sit at predictable places in the cloud service signup and setup process.

“Some services don’t want to add a Captcha, because it annoys users,” Ragan says. “But…there are compromises that can be [employed], like once an abnormal behavior is detected from a user account, they then prompt for a Captcha. Rather than prompting every user for a Captcha every time, they can find that balance. There’s always a balance to be made between security and usability.”
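A minimal sketch of the kind of behavior-triggered challenge Ragan describes might look like the following; the signals and thresholds are invented for illustration and are not taken from the talk.

from dataclasses import dataclass

@dataclass
class SignupAttempt:
    signups_from_ip_last_hour: int
    email_domain_age_days: int
    seconds_to_complete_form: float

def needs_captcha(attempt):
    """Challenge only when a signup looks automated, rather than challenging every user."""
    if attempt.signups_from_ip_last_hour > 3:     # bursty signups from one address
        return True
    if attempt.email_domain_age_days < 7:         # brand-new, possibly throwaway domain
        return True
    if attempt.seconds_to_complete_form < 2.0:    # form filled faster than a human could
        return True
    return False

print(needs_captcha(SignupAttempt(1, 3650, 45.0)))   # ordinary user -> False
print(needs_captcha(SignupAttempt(20, 2, 0.4)))      # scripted signup -> True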

Ragan says that’s what he and Salazar want the takeaway from their talk to be: that a lot more consideration be given to how to better implement anti-automation controls and features.

Black Hat 2014: A New Smartcard Hack

According to new research, chip-based “Smartcard” credit and debit cards—the next-generation replacement for magnetic stripe cards—are vulnerable to unanticipated hacks and financial fraud. Stricter security measures are needed, the researchers say, as well as increased awareness of changing terms-of-service that could make consumers bear more of the financial brunt for their hacked cards. 

The work is being presented at this week’s Black Hat 2014 digital security conference in Las Vegas. Ross Anderson, professor of security engineering at Cambridge University, and co-authors have been studying the so-called Europay-Mastercard-Visa (EMV) security protocols behind emerging Smartcard systems.

Though the chip-based EMV technology is only now being rolled out in North America, India, and elsewhere, it has been in use since 2003 in the UK and in more recent years across continental Europe as well. The history of EMV hacks and financial fraud in Europe, Anderson says, paints not nearly as rosy a picture of the technology as its promoters may claim.

“The idea behind EMV is simple enough: The card is authenticated by a chip that is much more difficult to forge than the magnetic strip,” Anderson and co-author Steven Murdoch wrote in June in the Communications of the ACM [PDF]. “The card-holder may be identified by a signature as before, or by a PIN… The U.S. scheme is a mixture, with some banks issuing chip-and-PIN cards and others going down the signature route. We may therefore be about to see a large natural experiment as to whether it is better to authenticate transactions with a signature or a PIN. The key question will be, ‘Better for whom?’”

Neither is ideal, Anderson says. But signature-based authentication does put a shared burden of security on both bank and consumer and thus may be a fairer standard for consumers to urge their banks to adopt.

“Any forged signature will likely be shown to be a forgery by later expert examination,” Anderson wrote in his ACM article. “In contrast, if the correct PIN was entered the fraud victim is left in the impossible position of having to prove that he did not negligently disclose it.”

And PIN authentication schemes, Anderson says, have a number of already discovered vulnerabilities, a few of which can be scaled up by professional crooks into substantial digital heists.

In May, Anderson and four colleagues presented a paper at the IEEE Symposium on Security and Privacy on what they called a “chip and skim” (PIN-based) attack. The attack takes advantage of ATMs and in-store credit card payment terminals that unfortunately take shortcuts in customer security: The EMV protocol requires ATMs and point-of-sale terminals to send a random number to the card as an ID for the coming transaction. The problem is that many terminals and ATMs in countries where Smartcards are already used issue lazy “random” numbers generated by things like counters, timestamps, and simple homespun algorithms, and those numbers are easily predicted.
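To see why a counter- or timestamp-style “unpredictable number” defeats the purpose, compare it with a properly random one in the sketch below; the starting counter value is made up, and no real terminal firmware is reproduced here.

import secrets

counter = 0x0005A21F    # made-up terminal transaction counter

def weak_unpredictable_number():
    """Counter-style value of the kind the researchers found in the field:
    anyone who observes one value can predict the next."""
    global counter
    counter += 1
    return counter & 0xFFFFFFFF

def strong_unpredictable_number():
    """A 32-bit value from a cryptographically secure generator, so a recorded
    card response cannot simply be replayed later."""
    return secrets.randbits(32)

print([hex(weak_unpredictable_number()) for _ in range(3)])   # consecutive, guessable values
print(hex(strong_unpredictable_number()))                     # not guessable from past outputs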

As a result, a customer can—just by buying something at one of these less-than-diligent stores or using one of these corner-cutting ATMs—fall prey to an attack that nearby criminals could set up. The attack would allow them to “clone” the customer’s Smartcard and then buy things on the sly with the compromised card. Worse still, some banks’ terms and conditions treat card cloning—which EMV theoretically has eliminated—as the customer’s own fault. So this sort of theft might leave an innocent victim with no recourse and no way of recouping the loss.

“At present, if you dispute a charge, the bank reverses it back to the merchant,” Anderson says. “Merchants are too dispersed to go after customers much. But EMV shifts the liability to the bank, and the banks in anticipation are rewriting their terms and conditions so they can blame the customer if they feel you might have been negligent. I suggest you check out your own bank's terms and conditions.”

U.S. State Department Global Passport, Visa Issuing Operations Disrupted

IT Hiccups of the Week

Last week saw an overflowing cornucopia of IT problems, challenges and failures being reported. From these rich pickings, we decided to focus this week’s edition of IT Hiccups first on a multi-day computer problem affecting the US Department of State’s passport and visa operations, followed by a quick rundown of the numerous US and UK government IT project failures that were also disclosed last week.

According to the Associated Press, beginning on Saturday, 21 July, the U.S. Department of State has been experiencing unspecified computer problems, including “significant performance issues, including outages,” with its Consular Consolidated Database [pdf], which have interfered with the “processing of passports, visas, and reports of Americans born abroad.” A story at ComputerWorld indicates that the problems began after maintenance was performed on the database. State Department spokeswoman Marie Harf told the AP that the effects of the computer problem were being felt across the globe.

The AP story says that a huge passport and visa application backlog is already forming, with one unidentified country reporting that its backlog of applications had reached 50,000 as of Wednesday. The growing backlog has also “hampered efforts to get the system fully back on line,” Harf told the AP.

The rapidly expanding backlog is easy to understand, as the Oracle-based database, which was completed in 2010, “is the backbone of all consular applications and services and supports domestic and overseas passport and visa activities,” according to a State Department document [pdf]. In 2013, for example, the database was used in the issuing of some 13 million passports and 9 million visitor visas.

Department spokeswoman Harf was quoted by the AP as saying, “We apologize to applicants and recognize this may cause hardship to applicants waiting on visas and passports. We are working to correct the issue as quickly as possible.” However, she did not give any indication of when the problems would be fixed or the backlog erased. Stories of families stuck overseas, unable to return to the US, are rapidly multiplying.

Earlier this summer, the UK saw a similar passport backlog develop, thanks to the mismanaged closure of passport offices at British embassies over the past year. That backlog, which blossomed into a political embarrassment for Prime Minister Cameron’s government, is still not fully under control. It remains to be seen whether the U.S. passport and visa problems will do the same to the Obama Administration—if they last for a couple of weeks, they very well could.

More likely to cause embarrassment to the Obama and Cameron administrations are the numerous government IT failures reported last week. For example, the AP reported that the U.S. Army had to withdraw its controversial Distributed Common Ground System (DCGS-A) from an important testing exercise later this year because of “software glitches.” DCGS-A, the Army website says, “is the Army’s primary system for posting of data, processing of information, and disseminating Intelligence, Surveillance and Reconnaissance information about the threat, weather, and terrain to all components and echelons.”

The nearly $5 billion spent on DCGS-A so far has not impressed many of its Army operational users in Afghanistan, who have complained that, among other things, the system is complex to use and unreliable. They also point out that there is a less costly and more effective system available called Palantir, but the Army leadership is not interested in using it after spending so much money and effort on DCGS-A.

The AP also reported last week that the six-year, $288 million U.S. Social Security Administration Disability Case Processing System (DCPS) project had virtually collapsed, and that the SSA was trying to figure out how to salvage it. DCPS, which was supposed to replace 54 legacy computer systems, was intended to allow SSA workers across the country “to process claims and track them as benefits are awarded or denied and claims are appealed,” the AP said.

The AP story says that the SSA may have tried to keep quiet a June report [pdf] by McKinsey and Co. into the program’s problems so as not to embarrass Acting Social Security Commissioner Carolyn Colvin, whom President Obama recently nominated to head the SSA. The McKinsey report indicates that one reason for the mess is that no one appears to have been in charge of the project. The report also states that “for past 5 years, Release 1.0 [has been] consistently projected to be 24-32 months away.” Colvin was deputy commissioner for 3½ years before becoming acting commissioner in February 2013, the AP says, so the DCPS debacle is squarely on her watch.

Then there was a story in the Fiscal Times concerning a Department of Homeland Security (DHS) Inspector General report [pdf] indicating that the Electronic Immigration System (ELIS), which was intended to “provide a more efficient and higher quality adjudication [immigration] process,” was doing the opposite. The IG wrote that, “instead of improved efficiency, time studies conducted by service centers show that adjudicating on paper is at least two times faster than adjudicating in ELIS.”

Why, you may ask? The IG states that, “Immigration services officers take longer to adjudicate in ELIS in part because of the estimated 100 to 150 clicks required to move among sublevels and open documents to complete the process. Staff also reported that ELIS does not provide system features such as tabs and highlighting, and that the search function is restricted and does not produce usable results.”

Hey, what did those immigration service officers expect for the $1.7 billion spent so far on ELIS, something that actually worked?  DHS is now supposed to deploy an upgraded version of ELIS later this year, the IG says, but he is also warning that major improvements in efficiency should not be expected.

As I mentioned, reports of project failure were the story of the week in the UK as well. Computing published an article on the UK National Audit Office’s report into the 10-year-and-counting Aspire outsourcing contract for the ongoing modernization and operation of some 650 HM Revenue & Customs tax systems. While the NAO said that the work performed by the Capgemini-led consortium has resulted in a “high level of satisfactory implementations,” the cost of doing so has been staggering.

HMRC let the Aspire contract in 2004, after ending a ten-year outsourcing contract with EDS (now HP) when that relationship soured. HMRC said at the time that the ten-year cost of the Aspire contract would be between £3.6 billion and £4.9 billion; however, the NAO says the cost had topped £7.9 billion by the end of March this year, and may reach £10.4 billion by June 2017, when the contract, which was extended in 2007, expires. Public Accounts Committee (PAC) chair Margaret Hodge MP says the cost overrun is an example of HMRC’s “unacceptably poor” management of the Aspire contract.

On top of being unhappy about the doubling of the contract’s cost, and the high level of profit the suppliers made on it, the NAO also warned HMRC that it needs to get serious about a replacement for when the Aspire contract ends. Hodge says that while HMRC has started planning Aspire’s replacement, “its new project is still half-baked, with no business case and no idea of the skills or resources needed to make it work.”

Apparently the NAO found another half-baked UK government IT project as well. According to the London Telegraph, the NAO published a report [pdf] describing how the UK Home Office has managed to waste nearly £347 million since 2010 on its “flagship IT programme,” the Immigration Case Work system, which was intended to deal “with immigration and asylum applications.” The NAO says that the Home Office has now abandoned the effort, thereby “forcing staff to revert to using an old system that regularly freezes.”

In addition, the NAO says that the Home Office is planning to spend at least another £209 million by 2017 on what it hopes will be a working immigration casework system. Until that new system comes online, however, the Home Office will need to spend an undetermined amount of money trying to keep the increasingly unreliable legacy immigration system from completely falling over dead. The legacy system’s support contract ends in 2016, the NAO states, so the Home Office doesn’t have a lot of wiggle room to get its replacement system operational.

Finally, the London Telegraph reported that the UK National Health Service may have reached a deal to pay Fujitsu £700 million as compensation for the NHS unilaterally changing the terms of its National Programme for IT (NPfIT) electronic health record contract with the Japanese company. The changes sought by the NHS led Fujitsu to walk off the program in 2008 (as Accenture had done earlier). The NPfIT project, a brainchild of then Prime Minister Tony Blair launched in 2002, was cancelled in 2011 after burning through some £7.5 billion.

In Other News…

Vancouver’s SkyTrain Suffers Failures over Multiple Days

North Carolina’s Fayetteville Public Works Commission Experiences New System Billing Problems

UK Nationwide Bank Customers Locked Out of Accounts

Nebraska Throws Out Writing Test Scores in Wake of Computer Testing Problems

GAO Finds It Easy to Fraudulently Sign up for Obamacare

Washington State Obamacare Exchange Glitches Hit 6,000 Applicants

Pennsylvania State Payroll Computer Glitch Fixed

UK Couple Receives £500 Million Electricity Bill

Senate Condemns US Air Force ECSS Program Management’s Incompetence

IT Hiccups of the Week

With no compelling IT system snafus, snags, or snarls to report on from last week, we thought we’d return to an oldie but goodie project failure of the first order: the disastrous U.S. Air Force Expeditionary Combat Support System (ECSS) program.

The reason for our revisit is the recent public release of the U.S. Senate staff report [pdf] into the fiasco. Last December, Senators Carl Levin and John McCain, respectively the chairman and ranking member of the Senate Armed Services Committee, requested the report in the wake of the Air Force’s publication of the executive summary [pdf] of its own investigative report, with which the Senators apparently were not altogether happy. You may recall that Levin and McCain christened the billion-dollar program failure—which the Air Force admitted produced no significant military capability after almost eight years in development—“one of the most egregious examples of mismanagement in recent memory.” Given the number of massive DoD IT failures to choose from, that is saying something.

Not surprisingly, the Senate staff report identified basically the same contributing factors for the debacle as the internal Air Force report, albeit with different emphasis. Whereas the Air Force report listed four contributing factors for the ECSS program’s demise (poor program governance; inappropriate program management tactics, techniques, and procedures; difficulties in creating organizational change; and excessive personnel and organizational churn), the Senate staff report condensed them into three contributing factors:

  • Cultural resistance to change within the Air Force
  • Lack of leadership to implement needed changes; and
  • Inadequate mitigation of identified risks at the outset of the procurement.

The Senate report focused much of its attention on the last bullet concerning ECSS program risk mismanagement. In large part, the report blamed the calamity on the Air Force’s failure to adhere to business process reengineering guidelines “mandated by several legislative and internal DOD directives and [that] are designed to ensure a successful and seamless transition from old methods to new, more efficient ways of doing business.” From reading the report, one gets the image of an exasperated parent scolding a recalcitrant child: Congress seemed as miffed at the Air Force for ignoring its many IT-related best practices directives as for the failure itself.

Clearly adding to the sense of frustration is that the Air Force “identified cultural resistance to change and lack of leadership as potential [ECSS] problems in 2004” when the service carried out a mandated risk assessment as the program was being initially planned. Nevertheless, the risk mitigation approaches the service ended up developing were “woefully inadequate.” In fact, the report said that the Air Force identified cultural resistance as an ongoing risk issue throughout the program. However, the lack of action to address it permitted the “potential problem” to become an acute problem.

To its credit, the ECSS program did set out an approach in 2006 to try to contain the technical risks involved in developing an integrated logistics system to replace the hundreds of legacy systems then in use across the Air Force. Two key risk-reduction aspects of the plan were to “forego any modifications” to the Oracle software selected for ECSS and to “conduct significant testing and evaluation” of the system. However, by the time the ECSS project was canceled in 2012, the report notes, Oracle’s software was not only being heavily customized, it also wasn’t being properly tested.

Several things contributed to this 180-degree turn in project risk reduction, according to the report. One, at least in part, was what can only be called a bait-and-switch procurement on the Air Force’s part. As the report states:

"In its March 2005 solicitation, the Air Force requested an “integrated product solution.” The Air Force solicitation stated that it wanted to obtain “COTS [commercial off-the-shelf] software [that is] truly ‘off-the-shelf’: unmodified and available to anyone.” Oracle was awarded the software contract in October 2005, and provided the Air Force with three stand-alone integratable COTS software components that were “truly off the shelf.” Oracle also provided the Air Force with tools to put the three components together into a single software “suite,” which would “[require] a Systems Integrator (SI) to integrate the functions of the three [components].” Essentially, this meant the various new software pieces did not initially work together as a finished product and required additional integration to work as intended.

Furthermore,

"In December 2005, the Air Force issued its solicitation for a systems integrator (SI) … portrayed the three separate Oracle COTS software components, as a single, already-integrated COTS product which was to be provided to the winning bidder as government funded equipment (GFE). Confusion about the software suite plagued ECSS, contributing significantly to program delays. Not only was time and effort dedicated to integrating the three separate software components into a single integrated solution, but there were disagreements about who was responsible for that integration. While CSC [the system integrator] claimed in its bid to have expertise with Oracle products, the company has said that it assumed, that the products it would receive from the Air Force would already be integrated. Among the root causes of the integration-related delay was the Air Force’s failure to clearly understand and communicate program requirements.

Adding to the general confusion was the small issue of exactly how many legacy systems were going to be replaced. The report states:

"When the Air Force began planning for ECSS, it did not even know how many legacy systems the new system would replace. The Air Force has, on different occasions, used wildly different estimates on the number of existing legacy programs, ranging from “175 legacy systems” to “hundreds of legacy systems” to “over 900 legacy systems.”

Curiously, the Senate report doesn’t note that even if the Air Force had been trying to get rid of “only” 175 legacy systems, that would still have been some 20 times the number involved in the Air Force’s last failed ERP attempt a few years earlier. The staff report seems to assume that such a business process reengineering undertaking was feasible from the start (and during a period of conflict, no less), which is a highly dubious assumption.

Probably the most damning sentence in the whole report is the following:

"To date, the Air Force still cannot provide the exact number of legacy systems ECSS would have replaced."

Two years after ECSS was terminated, after two major investigations into why it failed, and with the Air Force actively planning another try, that fact is still rather amazing.

I’ll let you read the report to dig through the other gory details of the risk-related issues involving cultural resistance and lack of leadership, but suffice it to say, you have to wonder where top Air Force and Department of Defense leadership was during the eight years this project blunder unfolded. As I have noted elsewhere, the DoD CIO at the time claimed to be “closely” monitoring the program, and up to the day ECSS was terminated, the CIO viewed it as only a moderately risky program.

Congress showed the same lack of curiosity, however. DoD ERP system developments have been well documented by the US Government Accountability Office [pdf] for over two decades as being prone to self-immolation, but Congress has kept the money flowing to them anyway without bothering to perform much in the way of oversight. Predictably, the Senate report avoids looking into Congress’s own role in permitting the ECSS failure to occur.

The Senate report goes on to list several other DoD ERP programs that are trying their best to imitate ECSS. In this time of tight government budgets, that list might actually move Congress to quit acting as a disinterested party to their outcomes. In fact, Federal Computer Week ran an article last week indicating that the Senate Appropriations Defense Subcommittee was slicing $500 million off of DoD’s IT budget, which is clearly a warning shot across DoD’s bow.

Another warning shot of note: Senators Levin and McCain have both observed that, “No one within the Air Force and the Department of Defense has been held accountable for ECSS’s appalling mismanagement. No one has been fired. And, not a single government employee has been held responsible for wasting over $1 billion dollars in taxpayer funds.” The Senators say they plan to introduce legislation to hold program managers more accountable in the future.

I suspect—and dearly hope—that if another ECSS happens in defense (or in other governmental agencies or departments, for that matter), more than a few civil and military careers will be, like ECSS, terminated.

In Other News …

Birmingham England Traffic Wardens Unable to Issue Tickets

Chicago Car Sticker Enforcement Delayed After Computer Glitch

Ohio’s Lorain City Municipal Court Records are Computer "Nightmare"

Immigration System Crash Leads to Chaos at Santo Domingo’s Las Americas Airport

Texas TxTag Toll System Upgrade Causes Problems

Melbourne-based Members Equity Bank System Upgrade Issues Vex Customers

Reservation System Issue Hits Las Vegas-based Allegiant Air Flights

Vancouver’s Skytrain Shutdown Angers Commuters

Computer Assigns Univ of Central Florida Freshmen to Live in Bathrooms and Closets

Australia’s Woolworths Stores Suffer Store-wide Checkout Glitch


Risk Factor

IEEE Spectrum's risk analysis blog, featuring daily news, updates and analysis on computing and IT projects, software and systems failures, successes and innovations, security threats, and more.

Contributor
Willie D. Jones
 