Risk Factor

New Jersey Finally Cancels $118 Million Social Welfare Computer System

IT Hiccups of the Week

We end this year’s IT Hiccups of the Week series much as we began it: with yet another expensive, incompetently managed, and ultimately out-of-control U.S. state government IT project spiraling into abject failure. This one involves the New Jersey Department of Human Services’ six-year, $118.3 million Consolidated Assistance Support System (CASS). It was supposed to modernize the management of the state’s social welfare programs, but it was CASS itself that was in dire need of assistance.

The Department of Human Services decided to announce that it had pulled the project’s plug over the Thanksgiving holiday—no doubt to try to reduce the bad publicity involved while people were enjoying their much-easier-to-swallow, non-IT turkey. A DHS spokesperson would not explain why the CASS contract was terminated; her only related comment made to a NJ.com reporter was that “an analysis is in progress to determine next steps.”

Hewlett-Packard, which was the CASS project prime contractor (the contract was originally awarded to EDS in 2007; HP acquired the firm in 2008), was equally mum on the subject. However, an HP spokesperson did seem to hint strongly that any and all project problems were the fault of New Jersey’s DHS, when he stated that, “Out of respect, HP does not comment on customer relationships.”

Last week, an audit report (pdf) by Stephen Eells, New Jersey’s state auditor, showed why both DHS and HP did not want to discuss why a system touted as “New Jersey's comprehensive, cutting-edge social service information system” had turned into a debacle. According to the report, both DHS and HP botched the project nearly from its outset in August 2009. The audit report, for example, found HP’s overall technical performance “poor,” due in part to the company’s “absentee management.” HP changed project managers on the eight-phase CASS effort three times since 2010; one of those managers, Eells stated, was rejected by the state for lacking the qualifications “to manage such a large project.”

The audit report also notes that while the CASS contract cost was $118 million (it was originally $83 million), the state’s own project-related costs added up to an additional $109 million. According to a NJSpotlight.com article, Eells, in testimony last week before New Jersey’s Human Services Committee, made it clear that the state botched its CASS oversight role as well. DHS senior management, he indicated, consistently ignored red flags that the project was in deep trouble, and apparently failed to bring “concerns over the contract to the Department of Treasury, which is responsible for ensuring that problems with contracts are resolved.”

Eells also ruefully noted that the state’s contract with HP didn’t “allow the state to recoup damages from the failure to complete the contracted work.” A minor oversight, one might say.

The Human Services Committee wasn’t able to find out why DHS ignored the warnings that the CASS project was in trouble, or why it failed to report the contract troubles to the state department that really needed to know about them. This void in the record is because DHS Commissioner Jennifer Velez “declined to speak at the hearing, citing the ongoing talks with Hewlett-Packard,” NJSpotlight.com reported.

I tend to doubt that the Commissioner will ever explain why her department’s IT managers chose to ignore the facts screaming out to them that the CASS project was on the fast track to failure, or why her department’s contract managers failed to protect state taxpayers from the cost of failure, as is routinely done. It’s not like the Commissioner is personally accountable for what happens on her watch or anything.

In Other News…

Ontario and IBM Locked in Court Battle Over Bungled Transportation System Project

Fixing Ontario’s Social Services’ Buggy Computer System Will Be Costly

Profits for UK’s Brewin Dolphin Drop on IT Debacle

LA DWP Says Billing Mess Over After Inflicting Customers With Year of Pain

Hertz Car Rental Blames Computer Issues for Failing to Pay $435,777 in Taxes

LAUSD Gets $12 Million More to Fix Wayward School Information Management System

6,000 Health Exchange Insurance Plans in Washington State Canceled by Mistake

Robotic Cameras Go Rogue, Irritate BBC News Presenters

Software Bungles in Oregon Child Welfare Data System Cost State $23 Million

Amazon UK Erroneously Selling Hundreds of Products for a Penny

Second Major Air Traffic Computer Problem in Year Cancels, Delays Scores of UK Flights

MPs Demand Investigation into UK Air Traffic System Meltdown

UK Air Traffic Chief Blames Unprecedented Software Issue for Shutdown

How the Internet-Addicted World Can Survive on Poisoned Fruit

There is no “magic bullet” for cybersecurity to ensure that hackers never steal millions of credit card numbers or cripple part of a country’s power grid. The conveniences of living in an interconnected world come with inherent risks. But cybersecurity experts do have ideas for how the world can “survive on a diet of poisoned fruit” and live with its dependence upon computer systems.

Cybersecurity risks have grown with both stunning scale and speed as the global economy has become increasingly dependent upon the Internet and computer networks, according to Richard Danzig, vice chair of The RAND Corporation and former U.S. Secretary of the Navy. He proposed that the United States must prepare to make hard choices and tradeoffs—perhaps giving up some conveniences—in order to tackle such risks. Such ideas became the focus of a cybersecurity talk and panel discussion hosted by New York University’s Polytechnic School of Engineering on Dec. 10.

“You are trading off the virtue in order to buy security,” Danzig said. “To the degree that you indulge in virtue, you breed insecurity. The fruit is poisonous, but also nutritious.”


How Not to Be Sony Pictures

The scope of the recent hack of Sony Pictures — in which unidentified infiltrators breached the Hollywood studio’s firewall, absconded with many terabytes of sensitive information and now regularly leak batches of damaging documents to the media — is only beginning to be grasped. It will take years and perhaps some expensive lawsuits too before anyone knows for certain how vast a problem Sony’s digital Valdez may be. 

But the take-away for the rest of the world beyond Sony and Hollywood is plain: Being cavalier about cybersecurity, as Sony’s attitude in recent years has been characterized, is like playing a game of corporate Russian roulette.

According to a new study of the Sony hack, one lesson learned for the rest of the world is as big as the breach itself. Namely, threat-detection is just the first step.


Amazon Plays Santa after IT Glitch, Singapore Airlines Plays Scrooge

IT Hiccups of the Week

This week’s edition of IT Hiccups focuses on two different customer service reactions to IT errors: a nice one by Amazon UK and a not-so-nice one on the part of Singapore Airlines.

According to the Daily Mail, a student at the University of Liverpool by the name of Robert Quinn started to receive, at his family’s home in Bromley, South London, a plethora of Amazon packages that he hadn’t ordered. The 51 packages included a baby buggy, a Galaxy Pro tablet, a 55-inch 3-D Samsung television set, a Sony PSP console, an electric wine cooler, a leaf blower, a bed, a bookcase and a chest of drawers, among other things. In total, the 51 items were worth some £3,600 (US $5,650).

The Daily Mail reported that Quinn called up Amazon and asked what was going on. According to Quinn, Amazon told him that people must be “gifting” the items to him. That surprised Quinn, since he didn’t know the people who were supposedly gifting him the items. Quinn told the Mail that he speculated that there was some sort of computer glitch affecting Amazon’s purchase return address labels, since the items all looked as though they were meant to be sent back to Amazon by their original purchasers.

Quinn told the Mail:

I was worried that people were losing out on their stuff so I phoned Amazon again and said I’m happy to accept these gifts if they are footing the cost, but I’m not happy if these people are going to lose out. But Amazon said, ‘It’s on us.’

The Mail checked with Amazon, which confirmed Quinn’s story. While not confirming that a computer problem affecting its return labels was the cause of the errant packages, Amazon didn’t go out of its way to deny it.

Quinn, who is an engineering student, later told the Mail that packages were still arriving. Quinn indicated that he was going to give some of the items he has received to charity, and then sell the rest to fund “an ‘innovative’ new [electric] cannabis grinder” he was designing.

Whereas Amazon played Santa, Singapore Airlines decided instead to take on the role of Scrooge last week. According to the Sydney Morning Herald, when Singapore Airlines uploaded its business class fares for trips from Australia to Europe into a global ticket distribution system, it instead mistakenly uploaded its economy fare prices. As a result, instead of paying US $5,000 for a business class ticket, travel agents sold over 900 tickets for $2,900 before Singapore Airlines fixed the problem.

Singapore Airlines decided that its mispricing mistake wasn’t, in fact, its problem, but the travel agents’.  The Herald reported that the airline, “told travel agents who sold the cheap tickets that they will have to seek the difference between the actual price and what they should have sold for from their customers, or foot the bill themselves,” if their customers want to fly in business class.

Singapore Airlines admitted, according to a Fox News story, that while it had “recently reassigned a booking subclass originally designated to economy class bookings to be used for business class bookings from December 8, 2014,” which could cause confusion, “the airfare conditions for the fare clearly stated that it was only valid for economy class travel.” In other words, we may have screwed up, but the travel agents should have caught our error anyway.

Scrooge would indeed be proud.

Last year, both Delta and United Airlines decided to honor online fare errors, in the latter case even when fares were priced at $0.

Update: The Daily Mail is now reporting that Singapore Airlines has decided to honor the mispriced tickets after all. Tiny Tim must be rejoicing.

In Other News…

Coding Issue Forces 10,000 New York Rail Commuters to Buy New Fare Cards

Microsoft Experiences Déjà vu Update cum Human Azure Error

New $240 Million Ontario Welfare System Pays Out Too Much and Too Little

New Jersey Social Services Glitch Causes Wrong Cash Payments

Singapore Stock Exchange Suffers Third Outage of Year

Air India Suffers Check-in Glitch

Best Buy Website Crashes Twice on Black Friday

Mazda Issues Recall to Fix Tire Pressure Monitoring Software

Washington Health Insurance  Exchange Glitches Continue

Blob Front-End Bug Bursts Microsoft Azure Cloud

IT Hiccups of the Week

It being the Thanksgiving holiday week in the United States, I was tempted to write once more about the LA Unified School District’s MiSiS turkey of a project, which the LAUSD Inspector General fully addressed in a report [pdf] released last week. If you like your IT turkey burnt to a crisp, over-stuffed with project management arrogance, served with heapings of senior management incompetence, and topped off with a ladleful of lumpy gravy of technical ineptitude, you’ll feast mightily on the IG report. However, if you are a parent of one of the over 1,000 LAUSD school district students who still have not received a class schedule nearly 40 percent of the way into the academic year—or a Los Angeles taxpayer for that matter—you may get extreme indigestion from reading it.

However, the winner of the latest IT Hiccup of the Week award goes to Microsoft for the intermittent outages that hit its Azure cloud platform last Wednesday, disrupting an untold number of customer websites along with Microsoft Office 365, Xbox Live, and other services across the United States, Europe, Japan, and Asia. The outages occurred over an 11-hour (and in some cases longer) period.

According to a detailed post by Microsoft Azure corporate vice president Jason Zander, the outage was caused by “a bug that got triggered when a configuration change in the Azure Storage Front End component was made, resulting in the inability of the Blob [Binary Large Object] Front-Ends to take traffic.”

The configuration change was made as part of a “performance update” to Azure Storage that, when made, exposed the bug and “resulted in reduced capacity across services utilizing Azure Storage, including Virtual Machines, Visual Studio Online, Websites, Search and other Microsoft services.” The bug, which had escaped detection during “several weeks of testing,” caused the storage Blob Front-Ends to go into an infinite loop, Zander stated. “The net result,” he wrote, “was an inability for the front ends to take on further traffic, which in turn caused other services built on top to experience issues.”

Once the error was detected, the configuration change was rolled back immediately. However, the Blob Front-Ends needed a restart to halt their infinite looping, which slowed the recovery time, Zander wrote.

The effects of the bug could have been contained, except that, as Zander indicated, someone apparently didn’t follow standard procedure in rolling out the performance update.

“Unfortunately the issue was wide spread, since the update was made across most regions in a short period of time due to operational error, instead of following the standard protocol of applying production changes in incremental batches.”
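For readers unfamiliar with the “incremental batches” protocol Zander refers to, here is a minimal sketch of the idea in Python. The region names, health check, and rollback helper are invented for illustration; this is not Microsoft’s deployment tooling, just the general pattern of widening a change’s blast radius only after each earlier batch proves healthy.

```python
import time

# Hypothetical deployment targets; a real system would enumerate actual regions or clusters.
REGIONS = ["test-cluster", "us-east", "us-west", "europe-north", "japan-east"]

def apply_config(region: str) -> None:
    """Stand-in for pushing the new storage configuration to one region."""
    print(f"applying configuration change to {region}")

def rollback_config(region: str) -> None:
    """Stand-in for reverting the configuration change in one region."""
    print(f"rolling back configuration change in {region}")

def region_healthy(region: str) -> bool:
    """Stand-in for a post-change health check (error rates, latency, alarms)."""
    return True  # a real check would query monitoring before returning

def staged_rollout(regions: list[str], soak_seconds: float = 0.0) -> bool:
    """Apply the change one batch at a time, halting and reverting on the first failure."""
    completed: list[str] = []
    for region in regions:
        apply_config(region)
        time.sleep(soak_seconds)  # let the change "soak" before touching the next region
        if not region_healthy(region):
            # Undo everything touched so far, most recent first.
            for done in reversed(completed + [region]):
                rollback_config(done)
            return False
        completed.append(region)
    return True

if __name__ == "__main__":
    staged_rollout(REGIONS)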
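```

Pushing the change to most regions at once, as Zander’s post describes, removes the chance for a health check like the one sketched above to catch the bug before it reaches every region.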

Zander apologized for the “inconvenience” and said that Microsoft is going to “closely examine what went wrong and ensure it never happens again.”

In Other News…

Polish President Says Voting Glitch Doesn’t Warrant Vote Rerun

RBS Hit With £56 Million Fine for “Unacceptable” 2012 IT Meltdown

Wal-Mart Ad Match Scammed for $90 PS4s

Computer Problems Close South Australian Government Customer Service Centers

British Columbia Slot Machines’ Software Fixed After Mistaken $100K Payout

Washington State Temporarily Closes Health Exchange Due to Computer Issues

Software Bug in Washington State Department of Licensing Fails to Alert Drivers to Renew Licenses

RBS Group Facing Huge Fine over Massive 2012 IT System Meltdown

IT Hiccups of the Week

We turn our attention in this week’s IT Hiccups to one of the truly major IT ooftas of the past decade—one that was back in the news this week: the meltdown of the IT systems supporting the RBS banking group. (That group includes NatWest, Northern Ireland’s Ulster Bank, and the Royal Bank of Scotland.) The meltdown began in June 2012 but wasn’t fully resolved until nearly two months later. The collapse kept 17 million of the Group’s customers from accessing their accounts for a week, while thousands of customers at Ulster Bank reported access issues for more than six weeks.

Last week, Sky News reported that the UK’s Financial Conduct Authority (FCA) informed RBS that it was facing record-breaking fines in the “tens of millions of pounds” for the malfunction, which was blamed on a faulty software upgrade. In addition, the Sky News story states that the Central Bank of Ireland is looking at imposing fines on Ulster Bank over the same issue. The meltdown has already cost RBS some £175 million in compensation and other corrective costs.


FCC Chairman Calls April's Seven State Sunny Day 911 Outage "Terrifying"

IT Hiccups of the Week

This edition of IT Hiccups of the Week revisits the 911 emergency call system outages that affected all of Washington State and parts of Oregon just before midnight, 9 April 2014. As I wrote at the time, CenturyLink—a telecom provider from Louisiana that is contracted by Washington State and the three affected counties in Oregon to provide 911 communication services—blamed the outages, which lasted several hours each, on a “technical error by a third party vendor.”

CenturyLink gave few details in the aftermath of the outages other than to say that the Washington State and Oregon outages were merely an “uncanny” coincidence, and to send out the standard “sorry for the inconvenience” press release apology. The company estimated that approximately 4,500 emergency calls to 911 call centers went unanswered during the course of the Washington State outage. No details were available regarding the number of failed 911 calls there were during the two-hour Oregon outage, which affected some 16,000 phone customers.

Well, 10 days ago, the U.S. Federal Communications Commission released its investigative report into the emergency system outages. It cast a much different light on the Washington State “sunny day” outage (i.e., not caused by bad weather or a natural disaster) that CenturyLink initially tried to play down. FCC Chairman Tom Wheeler even went so far as to call the report’s findings “terrifying.”

As it turns out, while the 911 system outages that hit Oregon and Washington State were indeed coincidental, they were also connected in a strange sort of way that caused a lot of confusion at the time, as we will shortly see. More importantly, the 911 outage that affected Washington State on that April night didn’t just affect that state, but also emergency calls being made in California, Florida, Minnesota, North Carolina, Pennsylvania and South Carolina. In total, some 6,600 emergency calls made over a course of six hours across the seven states went unanswered.

As the FCC report notes, because of the multi-state emergency system outage, “Over 11 million Americans … or about three and a half percent of the population of the United States, were at risk of not being able to reach emergency help through 911.” Since the outage happened very late at night into the early morning and there was no severe weather in the affected regions, the emergency call volume was very low; luckily, no one died because of their inability to reach 911.

The cause of the outage, the FCC says, was a preventable “software coding error” in a 911 Emergency Call Management Center (ECMC) automated system in Englewood, Colorado, operated by Intrado, a subsidiary of West Corporation. Intrado, the FCC report states, “is a provider of 911 and emergency communications infrastructure, systems, and services to communications service providers and to state and local public safety agencies throughout the United States… Intrado provides some level of 911 function for over 3,000 of the nation’s approximately 6,000 PSAPs.”

As succinctly explained in an article in the Washington Post, “Intrado owns and operates a routing service, taking in 911 calls and directing them to the most appropriate public safety answering point, or PSAP, in industry parlance. Ordinarily, Intrado's automated system assigns a unique identifying code to each incoming call before passing it on—a method of keeping track of phone calls as they move through the system.”

“But on April 9, the software responsible for assigning the codes maxed out at a pre-set limit [at 11:54 p.m. PDT]; the counter literally stopped counting at 40 million calls. As a result, the routing system stopped accepting new calls, leading to a bottleneck and a series of cascading failures elsewhere in the 911 infrastructure,” the Post article went on to state.

All told, 81 PSAPs across the seven states were unable to receive calls; callers dialing 911 heard only “fast busy” signals.
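To make the failure mode concrete, here is a minimal sketch of a call router that hands out identifiers from a counter with a hard ceiling. The class, constant, and method names are invented for illustration; this is not Intrado’s code, only the general shape of the bug the FCC describes, in which exhausting the counter silently stops call routing rather than raising a high-severity alarm.

```python
class CallRouter:
    """Toy model of a routing front end that tags each incoming 911 call with a unique ID."""

    MAX_CALL_IDS = 40_000_000  # the pre-set limit cited in the FCC report

    def __init__(self) -> None:
        self.next_id = 0

    def route_call(self, call: dict) -> int | None:
        """Assign an ID and forward the call, or silently refuse once the pool is exhausted."""
        if self.next_id >= self.MAX_CALL_IDS:
            # The counter has "stopped counting": with no ID available the call is dropped,
            # and in the real incident only a delayed, low-severity alarm was raised.
            return None
        call_id = self.next_id
        self.next_id += 1
        # ...forward `call` to the appropriate PSAP, keyed by call_id...
        return call_id
```

Treating identifier exhaustion as a critical alarm, or recycling identifiers once calls complete, would avoid the silent stall illustrated above.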

When the software hit its 40 million call limit, the FCC report says, the emergency call-routing system did not send out an operator alarm for over an hour. When it finally did, the system monitoring software classified the problem as “low level”; surprisingly, it did not immediately alert anyone that emergency calls were no longer being processed.

As a result, Intrado’s emergency call management center personnel did not realize the severity of the outage, nor did they get any insight into its cause, the FCC report goes on to state. In addition, the ECMC personnel were already distracted by alarms they were receiving related to the Oregon outage, which also involved CenturyLink.

Worse still, says the FCC, the low-level alarm designation not only failed to get ECMC personnel’s attention, but it also prevented an automatic rerouting of 911 emergency calls to Intrado’s ECMC facility in Miami.

It wasn’t until 2:00 a.m. PDT on 10 April that ECMC personnel became aware of the outage. That, it seems, happened only because CenturyLink called to alert them that its PSAPs in Washington State were complaining of an outage. After the emergency call management center personnel received the CenturyLink call, both they and CenturyLink thought the Washington State and Oregon outages were somehow closely interconnected. It took several hours for them to realize that they were entirely separate and unrelated events, the FCC report states. Apparently, it wasn’t until several other states’ PSAPs and 911 emergency system call providers started complaining of outages that call management center personnel and CenturyLink realized the true scope of the 911 call outage, and were finally able to zero in on the cause.

Once the root cause was discovered, the Colorado-based ECMC personnel initiated a manual failover of 911 call traffic to Intrado’s ECMC Miami site at 6:00 a.m. PDT. When problems plaguing the Colorado site were fixed later that morning, traffic was rerouted back.

The FCC report states that, “What is most troubling is that this is not an isolated incident or an act of nature. So-called ‘sunny day’ outages are on the rise. That’s because, as 911 has evolved into a system that is more technologically advanced, the interaction of new [Next Generation 911 (NG911)] and old [traditional circuit-switched time division multiplexing (TDM)] systems is introducing fragility into the communications system that is most important in times of dire need.”

IEEE Spectrum published an article in March of this year that explains the evolution of 911 in the U.S. (and Europe) and provides good insights into some of the difficulties of transitioning to NG911. The FCC’s report also goes into some detail on how the transition from traditional 911 service to NG911 can create subtle problems that are difficult to unravel when a problem does occur.

According to a story at Telecompetitor.com, Rear Admiral David Simpson, chief of the FCC’s Public Safety and Homeland Security Bureau, told the FCC during a hearing into the outage that there were three additional major “sunny day” outages in 2014, though none were ever reported before this year. All three—which I believe involved outages in Hawaii, Vermont and Indiana—involved NG911 implementations or time division multiplexing–to-IP transitions, Simpson said.

The FCC report indicates that Intrado has made changes to its call routing software and monitoring systems to prevent this situation from happening again, but it also said that 911 emergency service providers need to examine their system architecture designs. The hope is that they’ll better understand how and why their systems may fail, and what can be done to keep the agencies operating when they do. In addition, the communication of outages among all the emergency service providers and PSAPs needs to be improved; the April incident highlighted how miscommunications hampered finding the extent and cause of the outage.

Finally, the five FCC Commissioners unanimously agreed that such an outage was “simply unacceptable” and that future “lapses cannot be permitted.” While no one died this time, they note that next time everyone may not be so lucky.

In Other News…

Sarasota Florida Schools Plagued by Computer Problems

Weather Forecasts Affected as National Weather Satellite Goes Dark?

Bad Software Update Hits Aspen Colorado Area Buses

Bank of England Suffers Embarrassing Payments Crash

Google Drive for Work Goes Down

Google Gmail Experiences Global Outage

Cut Fiber Optic Cables Knock-out Air Surveillance in East India for 13 Hours

Bank of America Customers Using Apple Pay Double Charged

iPhone Owners Complain of Troubles with iOS 8.1

UK Bank Nationwide Apologizes Once More for Mobile and Online Outages

Vehicle Owners Seeking Info on Takata Airbag Recall Crash NHTSA Website

West Virginia Delays Next Phase of WVOASIS Due to Testing Issues

UK’s Universal Credit Program Slips At Least Four Years

Heathrow Airport Suffers Yet Another Baggage System Meltdown

LA School District Superintendent Resigns in Wake of Continuing MiSiS Woes

We turn our IT Hiccups of the Week attention once again to the Los Angeles Unified School District’s shambolic rollout of its integrated student educational tracking system, called My Integrated Student Information System (MiSiS). I first wrote about MiSiS a few months ago, and it has proved nothing but trouble, to the point that it became a major contributing factor in “encouraging” John Deasy to resign his position last week as superintendent of the second largest school system in the United States. He’d been on the job three and a half years.

Deasy claimed in interviews after his resignation that the MiSiS debacle “played no role” in his resignation, and instead blamed it on district teachers and their unions opposing his crusading efforts to modernize the LAUSD school system. That is putting a positive spin on the situation, to put it mildly.

Why? You may recall from my previous post that LAUSD has been under a 2003 federal district court approved consent decree to implement an automated student tracking system so that disabled and special needs students’ educational progress can be assessed and tracked from kindergarten to the end of high school. Headway toward complying with the obligations agreed to under the consent decree is assessed by a court-appointed independent monitor who publishes periodic progress reports. Deasy repeatedly failed to deliver on the school district’s promises made to the independent monitor over the course of his tenure.

What really helped seal Deasy’s fate was the latest progress report [pdf] from the independent monitor released last week. The report essentially said that despite numerous “trust me” promises by LAUSD officials (including Deasy), MiSiS was still out of compliance. The officials had promised that MiSiS would be completely operationally tested and ready at the beginning of this school year. But, said the report, the system’s incomplete functionality, the ongoing poor reliability due to inadequate testing, and the misunderstood and pernicious data integrity issues were causing unacceptable educational hardships to way too many LAUSD students—especially to those with special educational needs.

An LA Times story, for one, stated that the monitor found that MiSiS, instead of helping special needs students, made it difficult to place them in their required programs. A survey conducted by the independent monitor of 201 LAUSD schools trying to use MiSiS found that “more than 80% had trouble identifying students with special needs and more than two-thirds had difficulty placing students in the right programs,” the Times article stated.

Deasy’s fate had been hanging by a thread for a while. For instance, at several LAUSD schools—especially at Thomas Jefferson High School in south Los Angeles—hundreds of students were still without correct class schedules nearly two months after the school year had started. 

Another story in the LA Times reported that continuing operational issues with MiSiS meant that some Jefferson students were being “sent to overbooked classrooms or were given the same course multiple times a day. Others were assigned to ‘service’ periods where they did nothing at all. Still others were sent home.”

The problems at Jefferson made Deasy’s insistence that issues with MiSiS were merely a matter of “fine tuning” look disingenuous at best.

The MiSiS-fueled difficulties at Jefferson, which extended to several other LAUSD schools, prompted a California Superior Court judge about two weeks ago to intervene and order the state education department to work with LAUSD officials to rectify the situation immediately. In issuing the order, the judge damningly wrote that “there is no evidence of any organized effort to help those students” at Jefferson by LAUSD senior officials.

As a result of the judge’s order, the LAUSD school board last week quickly approved a $1.1 million plan to try to eliminate the disarray at Jefferson High. Additionally, the school board is now undertaking an audit of other district high schools to see how many other students are being impacted by the MiSiS mess and what additional financial resources may be needed to eliminate it.

Fraying Deasy’s already thin thread further was his admission that MiSiS needs some 600 enhancements and bug fixes (up from a reported 150 or so when the system was rolled out in August), which would likely cost millions of dollars on top of the $130 million already spent. Further, he also acknowledged that one of the core functions solemnly promised to the independent monitor to be available this school year—the proper recording of student grades—could take yet another year to fully debug, the LA Times reported.

According to the LA Daily News, LAUSD teachers complain that they not only have a hard time accessing the grade book function, but when they finally do, they find that student grades or even entire courses have disappeared from MiSiS. Hundreds if not thousands of student transcripts could be in complete shambles, which is causing major concern for seniors applying to colleges. Their parents are also unamused, to say the least.

Probably the last fiber of Deasy’s thread was pulled away last week when it turned out that even if MiSiS had been working properly, a majority of LAUSD schools likely wouldn’t have been able to access all of its functionality anyway. According to a story in the Contra Costa Times, LAUSD technology director Ron Chandler informed the district’s school board last week that most of the LAUSD schools’ administrative desktop computers were incapable of completely accessing MiSiS because of known compatibility problems.

A clearly frustrated school board wanted to know why this situation was only being disclosed now; Chandler told the board that the initial plan was for the schools to use the Apple iPads previously purchased by the school board to access MiSiS. But questions over Deasy's role in that $1 billion contract put a hold on that approach. The school board was more than a bit incredulous about that explanation, since it had not approved the purchase of iPads with the intent that they be used by teachers and school administrators as the primary means of accessing MiSiS.

Reluctantly, the school board approved $3.6 million in additional funding to purchase 3,340 new desktop computers for 784 LAUSD schools to allow them unfettered access to MiSiS.

While Deasy’s resignation will alleviate some of the immediate political pressure on LAUSD officials caused by the MiSiS fiasco, the technical issues will undoubtedly last throughout this academic year and possibly well into the next. However, for many unlucky LAUSD students, the impacts may last for many years beyond that.

In Other News…

Baltimore County Maryland Teachers Tackling Student Tracking System Glitches

Tallahassee’s New Emergency Dispatch System Offline Again

Washington State’s Computer Network Suffers Major Outage

Software Glitch Hits Telecommunications Services of Trinidad and Tobago

New Mexico Utility Company Incorrectly Bills Customers

Software Issue Means Oklahoma Utility Company Overbills Customers

Computer Error Allows Pink Panther Gang Member Early Out of Austrian Jail

Dropbox Bug Wipes Out Some Users’ Files

Generic Medicines Might Have Been Approved on Software Error

Australia’s iiNet Apologizes to Hundreds of Thousands of Customers for Three-day Email Outage

Spreadsheet Error Costs Tibco Investors $100 Million

Duke Energy Falsely Reports 500,000 Customers as Delinquent Bill Payers Since 2010

IT Hiccups of the Week

There were several IT Hiccups to choose from last week. Among them were: problems with the Los Angeles Unified School District’s fouled-up new student information and management system that are so egregious that a judge ordered the district to address them immediately; and the UK Revenue and Customs department’s embarrassing admission that its trouble-plagued modernized tax system has again made multiple errors in computing thousands of tax bills. However, the winner of this week’s title as the worst of the worst was an oofta by Duke Energy, the largest electric power company in the U.S. Duke officials apologized in a press release to over 500,000 of the utility’s 800,000-plus current and former customers (including 5,000 non-residential customers) across Indiana, Kentucky, and Ohio for erroneously reporting them as being delinquent in paying their utility bills since 2010.

Duke Energy admitted that the root cause of the problem was a coding error that occurred when customers opted to pay their monthly utility bills via the utility’s Budget Billing or Percentage of Income Payment Plan Plus (in Ohio only).  A company spokesperson told Bloomberg BusinessWeek that while customers were sent the correct invoices and their on-time payments were properly credited, the billing system indicated that the customers’ bills were paid late.
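Duke Energy has not published the offending code, so the Python fragment below is purely a hypothetical illustration of one way this class of bug can arise: a delinquency check that compares a payment against the standard invoice’s due date instead of the due date under the customer’s payment plan, so that an on-time payment is still recorded as late. Every name, date, and piece of logic here is an assumption made for the example, not Duke Energy’s actual billing system.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Bill:
    standard_due: date   # due date on the ordinary monthly invoice
    plan_due: date       # due date under Budget Billing / PIPP Plus (hypothetical)
    on_payment_plan: bool

def paid_late_buggy(bill: Bill, paid_on: date) -> bool:
    # BUG (hypothetical): always compares against the standard due date,
    # so a plan customer who paid by the plan's later due date is flagged as late.
    return paid_on > bill.standard_due

def paid_late_fixed(bill: Bill, paid_on: date) -> bool:
    due = bill.plan_due if bill.on_payment_plan else bill.standard_due
    return paid_on > due

bill = Bill(standard_due=date(2014, 10, 15), plan_due=date(2014, 10, 28), on_payment_plan=True)
print(paid_late_buggy(bill, date(2014, 10, 20)))  # True  -- wrongly reported to the credit exchange
print(paid_late_fixed(bill, date(2014, 10, 20)))  # False -- on time under the customer's plan
```

However the actual defect worked, the reported symptom matches this pattern: the payment itself was credited correctly, while a separate status flag, derived from the wrong comparison, told the credit exchanges the bill was paid late.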

 As a result, that late payment information for residential customers was sent by formal agreement to the National Consumer Telecom & Utilities Exchange (NCTUE). The NCTUE is a consortium of over 70 member companies from the telecommunications, utilities and pay TV industries that serves as a credit data exchange service for its members. Holding over 325 million consumer records, NCTUE provides information to its members regarding the credit risk of their current and potential customers. For non-residential customers, the “late payment” snafu had worse consequences: the delinquency reports were sent to the business credit rating agencies Dun & Bradstreet and Equifax Commercial Services.

Duke Energy’s press release said that the company “deeply regretted” the error that has effectively trashed the credit scores of hundreds of thousands of its residential and business customers for years. The utility says the erroneous information has now been “blocked” for use by the NCTUE, Dun & Bradstreet and Equifax, and it has dropped its membership in all three.

The press release mentioned that the company is still investigating whether additional customers who had “unique” billing circumstances were affected by the coding error.

But what the written statement failed to mention is that the utility found the error only after a former customer discovered that she was having trouble setting up service at another NCTUE utility member because of a supposedly poor payment history at Duke Energy. After contacting Duke Energy and asking why she was being shown as a delinquent bill payer when she was not, the utility realized that the woman’s erroneous credit information was only the tip of a very large IT oofta iceberg.

While Duke Energy claims that “we take responsibility” for the error, it is being rather quiet about explaining what exactly “taking responsibility” means for the hundreds of thousands of customers who may have been unjustly financially affected by the erroneous information sent to the three credit agencies over the past four years. It wouldn’t surprise me to see a class action lawsuit filed against Duke Energy in the near future to help the company gain greater clarity on what its responsibility is.

In Other News…

Judge Orders California to Help LAUSD Fix School Computer Fiasco

UK’s Tax Agency Admits it Can’t Compute Taxes Properly

Tahoe Ski Resort Withdraws Erroneous $1 Season Pass

UK NHS Hospital Patients Offered Harry Potter Names

Florida Utility Insists New Billing System is Right: Empty House Used 614,000 Gallons of Water in 18 Days

Audit Explains How Kansas Botched Its $40 Million DMV Modernization Effort

Indiana BMV Finally Sending Out Overbilling Refund Checks

Nielsen Says Software Error Skews Television Viewer Stats for Months

Japan Trader's $617 Billion “Fat Finger” Near-Miss Rattles Tokyo Market

IT Hiccups of the Week

This week’s IT Hiccup of the Week concerns yet another so-called “fat finger” trade embroiling the Tokyo Stock Exchange (TSE). This time it involved an unidentified trader who last week mistakenly placed orders for shares in 42 major Japanese corporations.

According to a story at Bloomberg News, the trader placed over-the-counter (OTC) orders adding up to a total value of 67.78 trillion yen ($617 billion) in companies such as Canon, Honda, Toyota and Sony, among others. The share order for Toyota alone was for 1.96 billion shares—or 57 percent of the car company—amounting to about $116 billion.

Bloomberg reported that its analysis “shows that someone traded 306,700 Toyota shares at 6,399 yen apiece at 9.25 a.m. ... The total value of the transaction was 1.96 billion yen. The false report was for an order of 1.96 billion shares. [The Japan Securities Dealers Association] said the broker accidentally put the value of the transaction in the field intended for the number of shares.”

The $617 billion order, which Bloomberg said was “greater than the size of Sweden’s economy and 16 times the Japanese over-the-counter market’s traded value for the entire month of August,” was quickly canceled before the trades could be completed. Given the out-sized orders and the fact that OTC orders can be canceled anytime during market hours, it is unlikely that the blunder would have gone unfixed for very long, but the fact that it happened resurrected bad memories for the Tokyo Stock Exchange.

Back in 2005, Mizuho Financial Group made a fat finger trade on the TSE that could not be canceled out. A Financial Times of London story states that, “Mizuho Securities mistakenly tried to sell 610,000 shares in recruitment company J-Com at ¥1 apiece instead of one share at ¥610,000. The brokerage house said it had tried, but failed, to cancel the J-Com order four times.” The mistaken $345 million trade cost the president of the TSE along with two other exchange directors their jobs.

Then in 2009, a Japanese trader for UBS ordered $31 billion worth of bonds instead of buying the $310,000 he had intended, the London Telegraph reported.  Luckily, the order was sent after hours, so it was quickly discovered and corrected.

A little disconcerting, however, was a related Bloomberg News story from last week that quoted Larry Tabb, founder of research firm Tabb Group LLC. According to Tabb, despite all the recent efforts by US regulators and the exchanges themselves to keep rogue trades from occurring (e.g., the Knight Capital implosion), fat finger trades still “could absolutely happen here.”

“While we do have circuit breakers and pre-trade checks for items executed on exchange,” Tabb told Bloomberg, “I do not believe that there are any such checks on block trades negotiated bi-laterally and are just displayed to the market.”
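Tabb’s comment about pre-trade checks is easier to appreciate with a concrete sketch. The function below is a hypothetical example of the simplest kind of sanity check that could have flagged the Toyota order described above, in which a yen value was entered in the shares field: it warns when an order’s share count is an implausibly large fraction of the company’s shares outstanding, or when its notional value exceeds a limit. The thresholds, field names, and the approximate Toyota share count are assumptions for illustration, not an actual exchange rule.

```python
def check_order(symbol: str, shares: int, price: float, shares_outstanding: int,
                max_ownership_fraction: float = 0.05,
                max_notional: float = 1_000_000_000_000) -> list[str]:
    """Return human-readable warnings for an implausible over-the-counter order."""
    warnings = []
    if shares > shares_outstanding * max_ownership_fraction:
        warnings.append(
            f"{symbol}: {shares:,} shares is {shares / shares_outstanding:.0%} "
            f"of shares outstanding (limit {max_ownership_fraction:.0%})"
        )
    notional = shares * price
    if notional > max_notional:
        warnings.append(f"{symbol}: notional value of {notional:,.0f} yen exceeds the block-trade limit")
    return warnings

# The erroneous Toyota order: 1.96 billion "shares" (actually a yen amount) at 6,399 yen each.
# The shares-outstanding figure is approximate, back-calculated from the 57 percent figure above.
print(check_order("Toyota", shares=1_960_000_000, price=6399.0, shares_outstanding=3_440_000_000))
```

Real pre-trade risk systems are far more elaborate than this, but Tabb’s point stands: even crude checks like these are applied to on-exchange orders, while bilaterally negotiated block trades may bypass them entirely.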

Don’t insights like that from a Wall Street insider just give you a warm and fuzzy feeling about the reliability of financial markets?

In Other News…

Computer Glitch Affects 60,000 Would-be Organ Donors in Canada

Korean Air New Reservations System Irritates Customers

Ford Recalls 850,000 Vehicles to Fix Electronics

Mitsubishi i-MiEV Recalled to Fix Software Brake Issue

Doctors’ “Open Payments” Website Still Needs Many More Government Fixes

Apple iOS 8 Hit by Bluetooth Problems

Electronic Health Record System Blamed for Missing Ebola at Dallas Hospital
