Risk Factor

310,000 Healthcare.gov Enrollees Must Provide Proof Now or Lose Insurance

IT Hiccups of the Week

Last week, there were so many reported IT snags, snarls, and snafus that I felt like the couple who finally won the 20-year jackpot on the Lion’s Share slot machine at the Las Vegas MGM Grand casino. Among the IT Hiccups of note were the routine maintenance mishap at Time Warner Cable Wednesday morning that knocked out Internet and on-demand service across the US for over 11 million of its customers and continued to cause other service issues for several days afterward; the “coding error” missed for six years by Deutsche Bank that caused the misreporting to the UK government of 29.4 million equity swaps, with buys being reported as sales and vice versa; and the rather hilarious software bugs in the new Madden NFL 15 American football game, which have players flying around the field in interesting ways.

However, for this week, we just can’t ignore yet another Healthcare.gov snafu of major proportions. Last week, USA Today reported that the Centers for Medicare and Medicaid Services sent letters to 310,000 people who enrolled for health insurance through the federal website, asking them to provide proof of citizenship or immigration status by 5 September or lose their health insurance at the end of September.


LA School District Continues Suffering MiSiS Misery

IT Hiccups of the Week

With schools starting to open for the 2014–2015 academic year across the United States, one can confidently predict that there will be several news stories of snarls, snafus, and hitches as new academic IT support systems go live for the first time. (You may recall that happening in Maryland, New York, and Illinois a few years ago.)

While most of these “teething problems” are resolved during the first week or so of school, significant IT issues affecting the performance of the new integrated student educational tracking system recently rolled out in the Los Angeles Unified School District—the second largest in the country, with 650,000 students—have already stretched beyond the first few weeks of the school term with no definitive end in sight. Furthermore, many of the software bugs being encountered were known to LAUSD administrators, who decided to roll out the system anyway.


The Routing Wall of Shame

IT Hiccups of the Week

While I have been en vacances the past few weeks, there have been several potential IT Hiccups of the Week stories of interest, including the 200-to-500-year-old Indian women getting free sewing machines, and Philippine fast-food giant Jollibee Foods having to temporarily close 72 of its restaurants in the Manila region because of problems the company experienced migrating to a new IT system—much to the disappointment of its Chickenjoy fans. However, the one hiccup that stands above the rest was the Internet difficulties reportedly experienced last week by the likes of eBay, Amazon, and LinkedIn, among many others.


Black Hat 2014: How to Hack the Cloud to Mine Crypto Currency

Using a combination of faked e-mail addresses and free introductory trial offers for cloud computing, a pair of security researchers have devised a shady crypto currency mining scheme that they say could theoretically net hundreds of dollars a day in free money using only guile and some clever scripting.

The duo, who are presenting their findings at this week’s Black Hat 2014 cyber security conference in Las Vegas, shut down their proof-of-concept scheme before it could yield any more than a token amount of Litecoins (an alternative to Bitcoin). The monetary value of both virtual currencies is based on enforced scarcity that comes from the difficulty of running processor-intensive algorithms.

Rob Ragan, senior security associate at the consulting firm Bishop Fox in Phoenix, Ariz., says the idea for the hack came to him and his collaborator Oscar Salazar when they were hired to test the security around an online sweepstakes.

“We figured if we could get 100,000 e-mails entered into the sweepstakes, we could have a really good chance of winning,” he says. “So we generated a script that would allow us to generate unique e-mail addresses and then automatically click the confirmation link.”
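The researchers didn’t publish their script, but the idea they describe can be sketched in a few lines of Python. All names and the confirmation-link format below are hypothetical, not their actual tooling: a generator for unique throwaway addresses, plus a helper that pulls the confirmation URL out of a signup e-mail body so it can be “clicked” automatically.

```python
import re
import uuid

def make_unique_address(domain):
    """Generate a unique local part so each entry looks like a new entrant."""
    return f"user-{uuid.uuid4().hex[:12]}@{domain}"

def extract_confirmation_link(html):
    """Pull the confirmation URL out of a signup e-mail body."""
    match = re.search(r'href="(https?://[^"]*confirm[^"]*)"', html)
    return match.group(1) if match else None

# Each generated address is unique, so 100,000 entries require no human effort.
addresses = {make_unique_address("example.com") for _ in range(1000)}
assert len(addresses) == 1000

body = '<a href="https://sweepstakes.example.com/confirm?token=abc123">Confirm</a>'
print(extract_confirmation_link(body))
# → https://sweepstakes.example.com/confirm?token=abc123
```

The only human step the real services required was following that link, which is exactly what the extracted URL automates away.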

Once Ragan and Salazar had finished securing the sweepstakes against automated attacks, they were still left with all those e-mail addresses.

“We realized that … for about two-thirds of cloud service providers, their free trials only required a user to confirm an e-mail address,” he says. So the duo discovered they effectively had the keys to many thousands of separate free trial offers of cloud service providers’ networked storage and computing.

In other words, they had access to many introductory accounts at sites like Google’s Cloud Platform, Joyent, CloudBees, iKnode, CloudFoundry, CloudControl, ElasticBox and Microsoft Windows Azure.

Some of these sites, each offering its own enticement of free storage and computing as a limited introductory offer, could be spoofed, the researchers discovered. Using a hard-to-detect automated process they developed, troves of unique e-mail addresses could be created on the fly and then used to claim free storage and processor time.

A spoof e-mail address, of course, has two components, Ragan says: the local part (the stuff to the left of the “@” sign) and the domain (to the right). To make the addresses signing up for any given service appear to be a random stream, Ragan says they scraped real local parts from legit e-mail address dumps on sites like Pirate Bay. The domain side they set up using “FreeDNS” servers that attach e-mail addresses to existing domains, a service that can be exploited for domains that have poor security measures in place.

So, say there’s an address dump file on the Internet containing the legit e-mail addresses “CatLover290 at gmail” and “CarGuy909 at Yahoo.” Ragan and Salazar’s algorithm would attach “CatLover290” and “CarGuy909” to one of thousands of spoof URLs they’d set up through the FreeDNS sites. The original e-mail accounts would then be unaffected. But the resulting portmanteau e-mail addresses would appear to be coming from a random stream of humans on the Internet.
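A minimal sketch of that portmanteau step, with purely illustrative local parts and made-up spoof domains (the example.net/example.org names are placeholders, not the domains the researchers used):

```python
import itertools

# Local parts scraped from a public address dump (illustrative values).
local_parts = ["CatLover290", "CarGuy909", "SkiFan77"]

# Spoof domains registered through free-DNS services (hypothetical names).
spoof_domains = ["mail-relay.example.net", "inbox.example.org"]

def portmanteau_addresses(locals_, domains):
    """Cross real-looking local parts with attacker-controlled domains.
    The original accounts are untouched; only the local part is reused."""
    return [f"{lp}@{d}" for lp, d in itertools.product(locals_, domains)]

addrs = portmanteau_addresses(local_parts, spoof_domains)
print(len(addrs))  # 3 locals x 2 domains = 6 addresses
```

Scale the two input lists up and the same cross product yields the tens of thousands of plausible-looking addresses the researchers describe.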

Thus, Ragan says, not even a human observer watching the e-mails registering for free cloud computing accounts—none of which appeared to be produced by a simple algorithm or automated process—would detect anything overtly suspicious. And to further throw off the scent, they used Internet anonymizing software like Tor and virtual private networks to disguise where the trial account requests were coming from. (Ragan says that generating real-seeming names using name-randomizing algorithms would probably be good enough.)

“A lot of the e-mail confirmation and authentication features rely on the old concept that one person has one e-mail address—and that is simply not the case anymore,” Ragan says. “We’ve developed a platform that would allow anyone to have 30,000 e-mail addresses.”

So they signed up for hundreds of free cloud service trial accounts and, in the process, strung together a free, ersatz virtual supercomputer.

“We demonstrated that we could generate a high amount of crypto hashes for a high return on Litecoin mining, using these servers that didn’t belong to us,” Ragan says. “We didn’t have an electricity bill, and we were basically able to generate money for free out of thin air.”

Ragan says that at their scheme’s peak, they had 1,000 accounts that were each generating 25 cents per day: $250 a day of free Litecoin. He says they shut the system down before it generated any real monetary value or made any noticeable performance dent in the cloud service systems.

And Ragan stressed that the devious schemes he and Salazar developed are being disclosed in order to raise awareness of weaknesses in security measures that real criminal elements around the world can take advantage of—and probably already are.

“Not planning for and anticipating automated attacks is one of the biggest downfalls a lot of online services are currently experiencing,” Ragan says.

One measure Ragan says he and Salazar want to see cloud service providers adopt to combat their scheme is the introduction of randomized anti-automation controls. Captchas, credit card verification, and phone verification can all be spoofed, he says, if they appear at predictable places in the signup and setup process.

“Some services don’t want to add a Captcha, because it annoys users,” Ragan says. “But…there are compromises that can be [employed], like once an abnormal behavior is detected from a user account, they then prompt for a Captcha. Rather than prompting every user for a Captcha every time, they can find that balance. There’s always a balance to be made between security and usability.”
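The detect-then-challenge compromise Ragan describes can be sketched as a simple sliding-window rate check. The class name, thresholds, and API below are all assumptions for illustration, not any provider’s real control:

```python
from collections import defaultdict, deque
import time

class AdaptiveCaptcha:
    """Prompt for a Captcha only when an account's request rate looks automated."""

    def __init__(self, max_requests=5, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)

    def needs_captcha(self, account, now=None):
        """Record a request and report whether this account should be challenged."""
        now = now if now is not None else time.time()
        events = self.history[account]
        events.append(now)
        # Drop events that have fallen out of the sliding window.
        while events and now - events[0] > self.window:
            events.popleft()
        return len(events) > self.max_requests

guard = AdaptiveCaptcha(max_requests=3, window_seconds=60)
# A human-paced user is never challenged...
print(any(guard.needs_captcha("alice", now=t) for t in (0, 20, 40)))  # False
# ...but a burst of signups from one account trips the check.
print(any(guard.needs_captcha("bot", now=t) for t in (0, 1, 2, 3)))   # True
```

Ordinary users sail through unchallenged, while burst traffic gets a Captcha, which is the security/usability balance Ragan is pointing at.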

Ragan says that’s what he and Salazar want the takeaway from their talk to be: that a lot more consideration should be given to how to better implement anti-automation controls and features.

Black Hat 2014: A New Smartcard Hack

According to new research, chip-based “Smartcard” credit and debit cards—the next-generation replacement for magnetic stripe cards—are vulnerable to unanticipated hacks and financial fraud. Stricter security measures are needed, the researchers say, as well as increased awareness of changing terms of service that could make consumers bear more of the financial consequences of their hacked cards.

The work is being presented at this week’s Black Hat 2014 digital security conference in Las Vegas. Ross Anderson, professor of security engineering at Cambridge University, and co-authors have been studying the so-called Europay-Mastercard-Visa (EMV) security protocols behind emerging Smartcard systems.

Though the chip-based EMV technology is only now being rolled out in North America, India, and elsewhere, it has been in use since 2003 in the UK and in more recent years across continental Europe as well. The history of EMV hacks and financial fraud in Europe, Anderson says, paints not nearly as rosy a picture of the technology as its promoters may claim.

“The idea behind EMV is simple enough: The card is authenticated by a chip that is much more difficult to forge than the magnetic strip,” Anderson and co-author Steven Murdoch wrote in June in the Communications of the ACM [PDF]. “The card-holder may be identified by a signature as before, or by a PIN… The U.S. scheme is a mixture, with some banks issuing chip-and-PIN cards and others going down the signature route. We may therefore be about to see a large natural experiment as to whether it is better to authenticate transactions with a signature or a PIN. The key question will be, ‘Better for whom?’”

Neither is ideal, Anderson says. But signature-based authentication does put a shared burden of security on both bank and consumer and thus may be a fairer standard for consumers to urge their banks to adopt.

“Any forged signature will likely be shown to be a forgery by later expert examination,” Anderson wrote in his ACM article. “In contrast, if the correct PIN was entered the fraud victim is left in the impossible position of having to prove that he did not negligently disclose it.”

And PIN authentication schemes, Anderson says, have a number of already discovered vulnerabilities, a few of which can be scaled up by professional crooks into substantial digital heists.

In May, Anderson and four colleagues presented a paper at the IEEE Symposium on Security and Privacy on what they called a “chip and skim” (PIN-based) attack. The attack takes advantage of ATMs and credit card payment stations at stores that unfortunately take shortcuts in customer security: The EMV protocol requires ATMs and point-of-sale terminals to send a random number to the card as an ID for the coming transaction. The problem is that many terminals and ATMs in countries where Smartcards are already in use issue lazy “random” numbers generated by things like counters, timestamps, and simple homespun algorithms, which makes those numbers easy to predict.
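To see why a counter-based “unpredictable number” defeats the purpose, consider this toy sketch (the formula is invented for illustration; the weak generators in real terminals varied): anyone who observes a couple of consecutive values can recover the pattern and predict the terminal’s next challenge.

```python
# A terminal that derives its "unpredictable number" from a simple counter
# produces a sequence an observer can extrapolate exactly.

def lazy_unpredictable_number(counter):
    """A counter dressed up as a nonce; each value fully determines the next."""
    return (counter * 0x1234 + 0x5678) & 0xFFFFFFFF

observed = [lazy_unpredictable_number(c) for c in range(100, 103)]
# From two consecutive observations the attacker recovers the stride...
stride = observed[1] - observed[0]
predicted = observed[-1] + stride
# ...and predicts the terminal's next "random" challenge exactly.
print(predicted == lazy_unpredictable_number(103))  # True
```

Since the EMV challenge is the only thing binding a cryptogram to one specific transaction, a predictable challenge lets a criminal pre-compute responses, which is the heart of the “chip and skim” cloning attack.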

As a result, a customer can—just in buying something at one of these less-than-diligent stores or using one of these corner-cutting ATMs—fall prey to an attack that nearby criminals could set up. The attack would allow them to “clone” a customer’s Smartcard and then buy things on the sly with the compromised card. Worse still, some banks’ terms and conditions rate card cloning—which EMV theoretically has eliminated—as the customer’s own fault. So this sort of theft might leave an innocent victim with no recourse and no way of refunding their loss.

“At present, if you dispute a charge, the bank reverses it back to the merchant,” Anderson says. “Merchants are too dispersed to go after customers much. But EMV shifts the liability to the bank, and the banks in anticipation are rewriting their terms and conditions so they can blame the customer if they feel you might have been negligent. I suggest you check out your own bank's terms and conditions.”

U.S. State Department Global Passport, Visa Issuing Operations Disrupted

IT Hiccups of the Week

Last week saw an overflowing cornucopia of IT problems, challenges, and failures being reported. From these rich pickings, we decided to focus this week’s edition of IT Hiccups first on a multi-day computer problem affecting the U.S. Department of State’s passport and visa operations, followed by a quick rundown of the numerous US and UK government IT project failures that were also disclosed last week.

According to the Associated Press, beginning on Saturday, 21 July, the U.S. Department of State has been experiencing unspecified computer problems, including “significant performance issues, including outages,” with its Consular Consolidated Database [pdf], which have interfered with the “processing of passports, visas, and reports of Americans born abroad.” A story at ComputerWorld indicates that the problems began after maintenance was performed on the database. State Department spokeswoman Marie Harf told the AP that the effects of the computer problem were being felt across the globe.

The AP story says that a huge passport and visa application backlog is already forming, with one unidentified country reporting that its backlog of applications had reached 50,000 as of Wednesday. The growing backlog has also “hampered efforts to get the system fully back on line,” Harf told the AP.

The rapidly expanding backlog is easy to understand, as the Oracle-based database, which was completed in 2010, “is the backbone of all consular applications and services and supports domestic and overseas passport and visa activities,” according to a State Department document [pdf]. In 2013, for example, the database was used in the issuing of some 13 million passports and 9 million visitor visas.

Department spokeswoman Harf was quoted by the AP as saying, “We apologize to applicants and recognize this may cause hardship to applicants waiting on visas and passports. We are working to correct the issue as quickly as possible.” However, she did not give any indication of when the problems would be fixed or the backlog erased. Stories of families stuck overseas, unable to return to the US, are rapidly growing.

Earlier this summer, the UK saw a similar passport backlog develop over the mismanagement of the closures of passport offices at British Embassies during the past year. The backlog, which blossomed into a political embarrassment to Prime Minister Cameron’s Government, is still not fully under control. It remains to be seen whether the U.S. passport and visa problems will do the same for the Obama Administration—if it lasts for a couple of weeks, it very well could.

More likely to cause embarrassment to the Obama and Cameron administrations are the numerous government IT failures reported last week. For example, the AP reported that the U.S. Army had to withdraw its controversial Distributed Common Ground System (DCGS-A) from an important testing exercise later this year because of “software glitches.” DCGS-A, the Army website says, “is the Army’s primary system for posting of data, processing of information, and disseminating Intelligence, Surveillance and Reconnaissance information about the threat, weather, and terrain to all components and echelons.”

The nearly $5 billion spent on DCGS-A so far has not impressed many of its Army operational users in Afghanistan, who have complained that the system is complex to use and unreliable, among other things. They also point out that there is a less costly and more effective system available, called Palantir, but the Army leadership is not interested in using it after spending so much money and effort on DCGS-A.

The AP also reported last week that a six-year, $288 million U.S. Social Security Administration Disability Case Processing System (DCPS) project had virtually collapsed, and that the SSA was trying to figure out how to salvage it. DCPS, which was supposed to replace 54 legacy computer systems, was intended to allow SSA workers across the country “to process claims and track them as benefits are awarded or denied and claims are appealed,” the AP said.

The AP story says that the SSA may have tried to keep quiet a June report [pdf] by McKinsey and Co. into the program’s problems so as not to embarrass Acting Social Security Commissioner Carolyn Colvin, whom President Obama recently nominated to head the SSA. The McKinsey report indicates that one reason for the mess is that no one could be found to be in charge of the project. The report also states that “for past 5 years, Release 1.0 [has been] consistently projected to be 24-32 months away.” Colvin was deputy commissioner for 3½ years before becoming acting commissioner in February 2013, the AP says, so the DCPS debacle is squarely on her watch.

Then there was a story in the Fiscal Times concerning a Department of Homeland Security (DHS) Inspector General report [pdf] indicating that the Electronic Immigration System (ELIS), which was intended to “provide a more efficient and higher quality adjudication [immigration] process,” was doing the opposite. The IG wrote that, “instead of improved efficiency, time studies conducted by service centers show that adjudicating on paper is at least two times faster than adjudicating in ELIS.”

Why, you may ask? The IG states that, “Immigration services officers take longer to adjudicate in ELIS in part because of the estimated 100 to 150 clicks required to move among sublevels and open documents to complete the process. Staff also reported that ELIS does not provide system features such as tabs and highlighting, and that the search function is restricted and does not produce usable results.”

Hey, what did those immigration service officers expect for the $1.7 billion spent so far on ELIS, something that actually worked? DHS is now supposed to deploy an upgraded version of ELIS later this year, the IG says, but he is also warning that major improvements in efficiency should not be expected.

As I mentioned, reports of project failure were the story of the week in the UK as well. Computing published an article concerning the UK National Audit Office’s report into the 10-year-and-counting Aspire outsourcing contract for the ongoing modernization and operation of some 650 HM Revenue & Customs tax systems. While the NAO has said that the work performed by the consortium led by Capgemini has resulted in a “high level of satisfactory implementations,” the cost of doing so has been staggering.

HMRC let the Aspire contract in 2004, after ending a ten-year outsourcing contract with EDS (now HP) when the relationship soured. HMRC said at the time that the ten-year cost of the Aspire contract would be between £3.6bn and £4.9bn; however, the NAO says the cost has topped £7.9 billion through the end of March this year, and may reach £10.4 billion by June 2017 when the contract, which was extended in 2007, expires. Public Accounts Committee (PAC) chair Margaret Hodge MP says the cost overrun is an example of HMRC’s management of the Aspire contract being “unacceptably poor.”

On top of being unhappy about the doubling in contract costs, and the high level of profits the suppliers made on it, the NAO also warned HMRC that it needs to get serious about a replacement contract when the Aspire contract ends. Hodge says that while HMRC has started planning Aspire’s replacement, “its new project is still half-baked, with no business case and no idea of the skills or resources needed to make it work.”

Apparently the NAO found another half-baked UK government IT project as well. According to the London Telegraph, the NAO published a report [pdf] describing how the UK Home Office has managed to waste nearly £347 million since 2010 on its “flagship IT programme,” the Immigration Case Work system, which is intended to deal “with immigration and asylum applications.” The NAO says that the Home Office has now abandoned the effort, thereby “forcing staff to revert to using an old system that regularly freezes.”

In addition, the NAO says that the Home Office is planning to spend at least another £209 million by 2017 on what it hopes will be a working immigration case work system. Until that new system comes on line, however, the Home Office will need to spend an undetermined amount of money trying to keep the increasingly unreliable legacy immigration system from completely falling over dead. The legacy system support contract ends in 2016, the NAO states, so the Home Office doesn’t have a lot of wiggle room to get its replacement immigration system operational.

Finally, the London Telegraph reported that the UK National Health Service may have reached a deal to pay Fujitsu £700 million as compensation for the NHS unilaterally changing the terms of its National Programme for IT (NPfIT) electronic health records contract with the Japanese company. The changes sought by the NHS led Fujitsu to walk off the program (as did Accenture) in 2008. The NPfIT project, a brainchild of then-Prime Minister Blair in 2002, was cancelled in 2011 after burning through some £7.5 billion.

In Other News…

Vancouver’s SkyTrain Suffers Failures over Multiple Days

North Carolina’s Fayetteville Public Works Commission Experiences New System Billing Problems

UK Nationwide Bank Customers Locked Out of Accounts

Nebraska Throws Out Writing Test Scores in Wake of Computer Testing Problems

GAO Finds It Easy to Fraudulently Sign up for Obamacare

Washington State Obamacare Exchange Glitches Hit 6,000 Applicants

Pennsylvania State Payroll Computer Glitch Fixed

UK Couple Receives £500 Million Electricity Bill

Senate Condemns US Air Force ECSS Program Management’s Incompetence

IT Hiccups of the Week

With no compelling IT system snafus, snags, or snarls last week to report on, we thought we’d return to an oldie-but-goodie project failure of the first order: the disastrous U.S. Air Force Expeditionary Combat Support System (ECSS) program.

The reason for our revisit is the public release a short time ago of the U.S. Senate staff report [pdf] into the fiasco. Last December, Senators Carl Levin and John McCain, respectively the chairman and ranking member of the Senate Armed Services Committee, requested the report. The request was made in the wake of the Air Force’s publication of the executive summary [pdf] of its own investigative report, with which the Senators were apparently not altogether happy. You may recall that Levin and McCain christened the billion-dollar program failure—which the Air Force admitted produced no significant military capability after almost eight years in development—“one of the most egregious examples of mismanagement in recent memory.” Given the number of massive DoD IT failures to choose from, that is saying something.

Not surprisingly, the Senate staff report identified basically the same contributing factors for the debacle as the internal Air Force report, albeit with different emphasis. Whereas the Air Force report listed four contributing factors for the ECSS program’s demise (poor program governance; inappropriate program management tactics, techniques, and procedures; difficulties in creating organizational change; and excessive personnel and organizational churn), the Senate staff report condensed them into three contributing factors:

  • Cultural resistance to change within the Air Force;
  • Lack of leadership to implement needed changes; and
  • Inadequate mitigation of identified risks at the outset of the procurement.

The Senate report focused much of its attention on the last bullet concerning ECSS program risk mismanagement. In large part, the report blamed the calamity on the Air Force’s failure to adhere to business process reengineering guidelines “mandated by several legislative and internal DOD directives and [that] are designed to ensure a successful and seamless transition from old methods to new, more efficient ways of doing business.” From reading the report, one gets the image of an exasperated parent scolding a recalcitrant child: Congress seemed as miffed at the Air Force for ignoring its many IT-related best practices directives as for the failure itself.

Clearly adding to the sense of frustration is that the Air Force “identified cultural resistance to change and lack of leadership as potential [ECSS] problems in 2004” when the service carried out a mandated risk assessment as the program was being initially planned. Nevertheless, the risk mitigation approaches the service ended up developing were “woefully inadequate.” In fact, the report said that the Air Force identified cultural resistance as an ongoing risk issue throughout the program. However, the lack of action to address it permitted the “potential problem” to become an acute problem.

To its credit, the ECSS program did try to set out an approach in 2006 to contain the technical risks involved in developing an integrated logistics system to replace hundreds of legacy systems then in use across the Air Force. Two key risk-reduction aspects of the plan were to “forego any modifications” to the Oracle software selected for ECSS and to “conduct significant testing and evaluation” of the system. However, by the time the ECSS project was canceled in 2012, the report notes, Oracle’s software was not only being heavily customized, but it also wasn’t being properly tested.

Several things contributed to this 180 degree turn in project risk reduction, according to the report. One was partially a problem of the Air Force conducting what can only be called bait-and-switch procurement. As the report states:

"In its March 2005 solicitation, the Air Force requested an “integrated product solution.” The Air Force solicitation stated that it wanted to obtain “COTS [commercial off-the-shelf] software [that is] truly ‘off-the-shelf’: unmodified and available to anyone.” Oracle was awarded the software contract in October 2005, and provided the Air Force with three stand-alone integratable COTS software components that were “truly off the shelf.” Oracle also provided the Air Force with tools to put the three components together into a single software “suite,” which would “[require] a Systems Integrator (SI) to integrate the functions of the three [components].” Essentially, this meant the various new software pieces did not initially work together as a finished product and required additional integration to work as intended."

Furthermore,

"In December 2005, the Air Force issued its solicitation for a systems integrator (SI) … portrayed the three separate Oracle COTS software components as a single, already-integrated COTS product which was to be provided to the winning bidder as government funded equipment (GFE). Confusion about the software suite plagued ECSS, contributing significantly to program delays. Not only was time and effort dedicated to integrating the three separate software components into a single integrated solution, but there were disagreements about who was responsible for that integration. While CSC [the system integrator] claimed in its bid to have expertise with Oracle products, the company has said that it assumed that the products it would receive from the Air Force would already be integrated. Among the root causes of the integration-related delay was the Air Force’s failure to clearly understand and communicate program requirements."

Adding to the general confusion was the small issue of exactly how many legacy systems were going to be replaced. The report states:

"When the Air Force began planning for ECSS, it did not even know how many legacy systems the new system would replace. The Air Force has, on different occasions, used wildly different estimates on the number of existing legacy programs, ranging from “175 legacy systems” to “hundreds of legacy systems” to “over 900 legacy systems.”"

Curiously, the Senate report doesn’t note that even if the Air Force was trying to get rid of “only” 175 legacy systems, that was still some 20 times more than in the Air Force’s last failed ERP attempt a few years earlier. The staff report seems to assume that such a business process reengineering undertaking was feasible from the start (and during a period of conflict as well), which is a highly dubious assumption to make.

Probably the most damning sentence in the whole report is the following:

"To date, the Air Force still cannot provide the exact number of legacy systems ECSS would have replaced."

Two years after ECSS was terminated, after two major investigations into why ECSS failed, and while the Air Force is actively engaged in planning for another try, this fact is still rather amazing.

I’ll let you read the report to dig through the other gory details involving the risk-related issues involving cultural resistance and lack of leadership, but suffice to say you have to wonder where top Air Force and Department of Defense leadership was during the eight years this project blunder unfolded. As I have noted elsewhere, the DoD CIO at the time claimed to be “closely” monitoring the program, and up to the day ECSS was terminated, the CIO viewed it as being only a moderately risky program.

Congress showed the same lack of curiosity, however. DoD ERP system developments have been well documented by the US Government Accountability Office [pdf] for over two decades as being prone to self-immolation. But Congress has kept the money flowing to them anyway without bothering to perform much in the way of oversight. Predictably, the Senate report avoids looking into Congress’s own role in permitting the ECSS failure to occur.

The Senate report goes on to list several other DoD ERP programs that are trying their best to imitate ECSS. In this time of tight government budgets, that list might actually move Congress to quit acting as a disinterested party to their future outcomes. In fact, Federal Computer Week ran an article last week indicating that the Senate Appropriations Defense Subcommittee was slicing $500 million off of DoD’s IT budget, which is clearly a warning shot across DoD’s bow.

Another warning shot of note is that both Senators Levin and McCain have noted that: “No one within the Air Force and the Department of Defense has been held accountable for ECSS’s appalling mismanagement. No one has been fired. And, not a single government employee has been held responsible for wasting over $1 billion dollars in taxpayer funds.” The Senators have stated they plan to introduce legislation to hold program managers more accountable in the future.

I suspect—and dearly hope—that if another ECSS happens in defense (or in other governmental agencies or departments, for that matter), more than a few civil and military careers will be, like ECSS, terminated.

In Other News …

Birmingham England Traffic Wardens Unable to Issue Tickets

Chicago Car Sticker Enforcement Delayed After Computer Glitch

Ohio’s Lorain City Municipal Court Records are Computer "Nightmare"

Immigration System Crash Leads to Chaos at Santo Domingo’s Las Americas Airport

Texas TxTag Toll System Upgrade Causes Problems

Melbourne Members Equity Bank System Upgrade Issues Vex Customers

Reservation System Issue Hits Las Vegas-based Allegiant Air Flights

Vancouver’s Skytrain Shutdown Angers Commuters

Computer Assigns Univ of Central Florida Freshmen to Live in Bathrooms and Closets

Australia’s Woolworths Stores Suffer Store-wide Checkout Glitch

UK Retailer Marks & Spencer’s Revenue Results Smacked by Website Woes

IT Hiccups of the Week

We concentrate this week’s edition of IT snarls, snags, and snafus on the lessons being learned the hard way by Marks & Spencer—the U.K.'s largest clothing retailer and one of the top five retailers in the country—on what happens when your online strategy goes awry. What makes this more than a run-of-the-mill website-gone-bad story, at least in the U.K., is that, as London's Daily Mail put it late last year, “Marks & Spencer, to coin a phrase, is not just any shop. It is the British shop, as much a part of our cultural heritage as the Women’s Institute, the BBC and the Queen.”

M&S launched with great fanfare a new £150 million website in February as a primary means to stem declining sales and profitability, as well as to accelerate the 128-year-old company’s objective of becoming an international multichannel retailer. However, last week, CEO Marc Bolland announced shortly before the company’s annual meeting that ongoing “settling in” problems with its website contributed to an 8.1 percent drop in online sales over the previous quarter. The decline in online sales, which was greater than expected, helped M&S chalk up its 12th quarter in a row of declining sales in its housewares and clothing division.

Read More

Thousands of Bags Miss Flights at Heathrow Terminal 5 Again

IT Hiccups of the Week

Here's some glitch déjà vu from 2008, namely another baggage system miscue involving British Airways (BA) at Heathrow International Airport in London. As you may remember, in March 2008, BA and Heathrow operator British Airports Authority (now known as Heathrow Airport Holdings) opened the long-awaited BA Terminal 5 with great fanfare, with BAA loudly proclaiming that the “world-class” baggage system was “tried, tested and ready to go.” No Denver International Airport-style baggage system problems for them! And BA's deservedly poor reputation as the top airline for losing luggage would finally be put to rest.

Of course, such publicly stated optimism over the reliability of automation rarely goes unpunished. Almost immediately, a massive meltdown of the baggage system on the first day of T5’s operation led to more than 28,000 passenger bags piled high across the terminal, hundreds more being lost, and some 15 percent of BA flights being cancelled over the course of nearly a week. It took three weeks before the majority of bags were reunited with passengers. The embarrassment for both BA and Heathrow management was acute, as was BA passenger rage, to say the least.

The nightmares of that week had slowly receded from BA passengers' memories. That is, until Friday, 27 June, when London papers like the Daily Mail reported that T5’s automated baggage system had suffered another major IT failure, with bags having to be handled manually again. As a result, thousands of BA passengers were sent (unknowingly) on their way without their luggage, including those passengers transiting through London via T5. The Mail quoted a BA spokesperson as saying, “On Thursday morning, the baggage system in Terminal 5 suffered an IT problem which affected how many bags could be accepted for each flight… We are very sorry for the difficulties this has caused and we have been working hard with the airport to make sure we reunite all of our customers with their luggage as quickly as possible.”

The BA spokesperson failed to point out that the phrase “how many bags could be accepted for each flight” actually meant no bags were accompanying their owners on an untold number of BA flights. BA also insisted to the press that they stop saying that passenger bags were lost; the bags merely “missed” their flights, BA pouted.  

A short two-paragraph Heathrow Airport Holdings press release did BA one better at trying to downplay the baggage system problem, stating that it affected only “some bags,” and that flights were in fact operating “normally.” You have to love press statements that are totally true but also totally disingenuous.

BA passengers on Thursday were naturally displeased at traveling without their bags, but at least they got to their destinations, unlike those flying out of T5 last September, when another, much shorter-lived IT problem with the baggage system prevented hundreds of passengers from boarding their flights at all; they had to be rebooked onto new ones, many the next day.

While BA passengers from June 27 were naturally miffed, what BA and Heathrow’s operator failed to make clear until early this week was that the “intermittent” IT problems with T5’s baggage system had actually begun on Thursday, 26 June, and continued well into Sunday, 29 June. I am sure that many BA passengers flying out of T5 on June 28 and 29 would have changed airlines had they known the full extent of the baggage problems. Conveniently, neither BA nor the airport operator came forward with information about the multi-day operational problem until Tuesday, 1 July. Nor have they disclosed the total number of bags or passengers inconvenienced.

Both BA and Heathrow Airport Holdings are in damage control mode as BA passengers, many of them famous, have taken to social media to lambast them both. Many passengers, for example, have complained that when they finally did receive their bags, they had been ransacked with items stolen from them. Others complained that their journeys were over by the time their bags finally reached them.

BA put out another press release blaming international airline security rules for bags being opened as well as delayed, and further promised to look into the ransacking claims. A BA spokesperson went on to apologize, stating, “We are very sorry that this process is taking longer than anticipated, and we fully understand the frustration that this is causing.” Heathrow Airport Holdings' new CEO John Holland-Kaye also apologized, saying the IT problem had taken too long to resolve and that the airport needs “to do better.” Disclosing IT problems while they are occurring would be a good start.

The BA spokesperson went on to warn that it would still take “several days” before all the bags that “missed” their flights are reunited with their owners. BA also indicated that because of the number of bags involved, its bag tracking system was not working as it should, which could further add to the delays.

BA is reminding its customers flying out of T5 that, “You may wish to carry essential items in hand baggage where possible.” That is probably good advice. ComputerWorldUK reports that Heathrow Airport Holdings is remaining very tight-lipped over what caused the baggage system fault and why it took four days to fix, which is rarely a sign that everything is under control.

In Other News…

Florida’s DMV Computer System Back Online

Bombay Stock Exchange Recovers from Outage

New Zealand Exchange Suffers IT Glitch

DNS Error Hits British Telecom

Irish Drivers Avoid Parking Fines in County Clare Due to Computer Error

PayPal Error Blocks CERN and MIT anti-Spying ProtonMail Fundraising Efforts

Microsoft Anti-crime Operation Disrupts Legitimate Servers

UK Adult Content Filters Hit 20 Percent of Legal Popular Sites

Goldman Sachs Gets Court to Order Google to Block Misdirected Email

HHS IG Reports Say Federal and State Health Insurance Exchange Controls Very Weak


Outages Galore: Microsoft, Facebook, Oz Telecom Users are Unhappy Lot

IT Hiccups of the Week

We go on an IT Hiccups hiatus for a week and, wouldn’t you know it, Facebook does a worldwide IT face-plant for thirty minutes, while mobile phone users of two of the three largest telecom providers in Australia, Optus and Vodafone, coincidentally suffer concurrent nationwide network outages lasting hours on the same day. Microsoft follows with back-to-back Office 365-related outages, each lasting more than six hours. In addition, there were system operational troubles in Finland, India, and New York, to name but a few. So we decided to focus this week’s edition of IT problems, snafus, and snarls on the recent outbreak of service disruptions reported around the world, as well as the sincere-sounding but ultimately vacuous apologies that now invariably accompany them.

Our operational oofta review begins last Tuesday, when Microsoft’s Exchange Online was disrupted for some users from around 0630 until almost 1630 East Coast time, leaving those affected without email, calendar, and contact capabilities. The disruption was somewhat embarrassing for Microsoft, which likes to tout that the cloud version of Office 365 is effectively always available (or at least 99.9 percent of the time).
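It is worth remembering just how small a downtime budget a 99.9 percent availability target implies. As a back-of-the-envelope illustration (my own arithmetic sketch, not Microsoft's SLA accounting), the function below converts an availability percentage into the maximum outage time it permits:

```python
def allowed_downtime_hours(availability_pct: float, period_hours: float) -> float:
    """Maximum downtime (in hours) permitted by an availability
    target over a given period of service."""
    return period_hours * (1 - availability_pct / 100)

# A 30-day month has 720 hours; a non-leap year has 8,760.
monthly = allowed_downtime_hours(99.9, 30 * 24)
yearly = allowed_downtime_hours(99.9, 365 * 24)

print(f"99.9% monthly downtime budget: {monthly:.2f} hours")
print(f"99.9% yearly downtime budget:  {yearly:.2f} hours")
```

By this arithmetic, "three nines" allows roughly 43 minutes of downtime per month, so a single ten-hour Exchange Online outage blows through more than a year's worth of budget in one day.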

Read More

Risk Factor

IEEE Spectrum's risk analysis blog, featuring daily news, updates and analysis on computing and IT projects, software and systems failures, successes and innovations, security threats, and more.

Contributor
Willie D. Jones