Risk Factor

IT Hiccups of the Week: Computer Issues Create Misleading U.S. Jobless Numbers

Last week provided a nice variety of IT-related miscalculations, ooftas and other surprises. We start off this week with a follow-on to a story from last week that has created some unexpected consequences.

Computer Issues Create US Jobless Claim Number “Anomaly”

A little bit of background first. You may recall from last week's edition that at the end of August, Nevada’s Department of Employment, Training and Rehabilitation (DETR) took down its 30-year-old unemployment insurance system and began the roll-out of its new US $45 million UInv system, which has had a less than auspicious start, including being offline longer than predicted.

Nevada wasn’t the only state busily upgrading its unemployment system or having problems as a result. California’s Employment Development Department (EDD) has spent US $157.8 million upgrading the state’s 30-year-old unemployment payment processing system. California has the largest unemployment system in the U.S., disbursing some $33 million a day in unemployment checks.

According to an EDD statement over the weekend, there have been “some processing delays in [the] transition” to the new system, which began, like Nevada’s, at the end of August. The statement says that EDD has been “working around the clock to catch up on unemployment claims.” A news story quoted an EDD spokesperson reporting that about 5 percent of the claims—or roughly 20 000—were affected by the upgrade issues. She added that, “We apologize for the inconvenience to those affected.”

Now, for the punch line: Last week, the U.S. Department of Labor released its report on the number of Americans receiving unemployment benefits. The agency reported a drop of 31 000 from the previous week, for a seasonally adjusted total of 292 000—the lowest number since April 2006. According to the Wall Street Journal, drops like that are rare.

Fantastic news, eh, since it could indicate the end of the Great Recession! When the report first came out, economists were overjoyed, until they found out the numbers were “flawed”; then they were a bit upset, to put it mildly.

What the Labor Department's jobless report didn’t disclose was that the drop was related to the computer problems in Nevada and California. Only after the report was released did the Department admit the discrepancy to reporters. On top of that, it refused to identify the two states involved, even though it was no mystery to reporters which two states were having trouble upgrading their unemployment insurance systems and reporting timely claims numbers. In addition, the total claims number was also likely skewed downward by the shortened work week following the U.S. Labor Day holiday. Finally, Massachusetts’ new unemployment system is said to be “riddled with problems” as well; it is unclear how those problems may have affected the total claims number.

A Labor Department spokesperson defended the faulty report to the New York Times, saying, “When we get data, we have an obligation to put it out there.” He emphasized that “the department did not recommend reading too much into any one week’s figure, at any rate.”

Of course, when the unemployment number is a positive one (and also accurate), the Labor Department doesn’t seem constrained from touting it from the rooftops, or from trying to bury it when it is negative. Watch what happens when Nevada and California send in revised unemployment claims numbers later this month.

Stephen Stanley, chief economist at Pierpont Securities, said that the episode was a classic example of “bureaucratic ineptitude.” That is being way too nice.

Later this month, after spending US $69 million, Michigan is going to upgrade its 30-year-old unemployment system. It may be déjà vu all over again in October.

United Airlines and Jet Blue Have System Problems

Last Friday, both United Airlines and Jet Blue experienced problems with their reservation systems. According to a Fox News story, Jet Blue reported that morning that an “IT system outage” caused by “connectivity issues” (supposedly at Verizon) had caused delays for about 60 flights. However, the Fox story also reported that “flight tracker FlightAware said it recorded 70 flights that were delayed for more than an hour. Two hundred twenty two flights were delayed for more than 15 minutes.”

A horror more appropriate to the Friday the 13th date on the calendar was the disturbance that hit United Airlines. A computer error on its website slashed United ticket prices to zero (but with the US $2.50 security charge per leg intact). It took two hours before United realized what was going on and shut down its website. But by then, word of the pricing error had spread across social media, with lots of people announcing their good luck.

United, after thinking about it, decided it would, like Italian airline Alitalia did last year, honor the tickets. Apparently, the good publicity was thought to outweigh the costs, especially in light of the bad feelings United’s other recent computer problems have left with many of its customers.

Other carriers, like British Airways, have not been so generous.

UnitedHealth Recalls EHR Software

Last week, UnitedHealth Group issued a recall of its software that manages electronic health records used in 35 hospital emergency facilities in 22 states. According to Reuters, UnitedHealth—the largest U.S. health insurer—found that an error in its Picis ED PulseCheck software caused some doctors’ notes on patient prescriptions to vanish.

A story at Bloomberg News says that UnitedHealth acquired the maker of the software, Picis, Inc. of Wakefield, Massachusetts, in 2010. Bloomberg also noted that this is the sixth recall of Picis electronic health record-related software since 2009.

In related EHR safety news, the Pennsylvania Patient Safety Authority issued an advisory last week to hospitals and other health care providers telling them to check the default settings in their EHR and computerized physician order entry (CPOE) systems. The Authority found 324 adverse patient events traceable to supplier-set system default settings not being reset to more appropriate ones matching the operating context of the healthcare provider.

I don’t doubt the suppliers have warnings about checking the defaults in their manuals, but hey, who reads a manual anymore?

Of Other Interest…

UK TSB Bank Website Crashes on Relaunch

University of Minnesota Students Find Extra US $1000 Billed

Amazon Cloud Disrupts Some Websites

Third Outage of New Jersey State Computer Systems Since July

“Technical Glitch” Affects Access to Thousands of Xbox Games

New York City Penalizes Two Testing Companies Over Exam Errors

Illustration: iStockPhoto

This Week in Cybercrime: Companies to FTC: Your Data Security Reach Exceeds Your Grasp

The U.S. Federal Trade Commission is wrong to claim broad authority to seek sanctions against companies for data breaches when it has no clearly defined data security standards, said panelists at a forum sponsored by Tech Freedom, a Washington, D.C., think tank that regularly rails against government regulation.

The event, held on Thursday, coalesced around the fact that in the last decade, the FTC has settled nearly four dozen cases after filing complaints based on its reasoning that a failure to have sufficient data security constitutes an unfair or deceptive trade practice. Two pending court cases, says a Tech Freedom statement, "may finally allow the courts to rule on the legal validity of what the FTC calls its 'common law of settlements.'"

Read More

IT Hiccups of the Week: A Bad Week for U.S. State Government IT

It’s been another relatively normal week in the land of IT inconveniences, except perhaps for government computing systems, which is where we will focus this week. We start off with the problems occurring in several U.S. states that recently introduced new IT infrastructure that is proving balkier than hoped.

Nevada, Massachusetts, and North Carolina Each Have Buggy New IT Systems

On 26 August, Nevada’s Department of Employment, Training and Rehabilitation (DETR) took down its 30-year-old unemployment insurance system and began the rollout of its new $45 million UInv system. In a press release issued that day, DETR announced (pdf) that the new system would be operational on 1 September; 51 000 or so Nevadans looking to file unemployment claims would have to wait until then.

However, the DETR enthused that after four years of development the new system would be worth the wait since it would “allow claimants to view up-to-date information related to their individual claims. It will also give claimants access to their payment history and allow them instant, real time feedback on their unemployment claim.”

Unfortunately, the UInv system wasn’t ready for prime time until 4 September. The DETR "explained" (pdf) the delay in a subsequent press release, citing a number of undisclosed “minimal issues.” The DETR release went on to say that the department was “being very conservative” with the launch of the new system, and it asked for “patience as we gradually ramp up the new system to full deployment over the coming days.”

The Nevadans affected by the delay were clearly not amused. The DETR had promised that when the new system was up and running, “claims and benefit services will continue as normal.” But by the 4th, it was backpedaling on that promise. The ballyhooed new online system wasn’t working and wouldn’t be for another three days. And according to the Las Vegas Sun, Nevadans couldn’t reach anyone at the DETR to get help with their claims. The reason was a study in irony: because online access wasn’t yet operational, the DETR encouraged claimants to call in, and the phone lines were promptly overwhelmed.

A DETR spokesperson offered the unhelpful advice to “keep calling, relax and we will get to you.” After hours upon hours of waiting on hold, many Nevadans gave up. Governor Brian Sandoval is said to be aware of the problem, but in reality there is little he can do.

DETR says no one will miss out on their unemployment checks since claims will be backdated, but it also admitted that the payments may still take a while to be made. Hope none of those unemployed Nevadans have any pressing bills to pay. Further upgrades to the UInv system are scheduled for later this year and early next, which everyone no doubt hopes will go more smoothly.

Massachusetts residents who receive unemployment insurance are also unhappy with that state’s new unemployment benefits computer system that was launched in July. The $46 million system ($6 million over budget) has been plagued by problems and is “unable to make proper payments to hundreds of financially strapped workers hunting for jobs,” according to a Boston Globe story.

The system's contractor, Deloitte Consulting, has until the end of the month to fix the system without penalty, the Globe reports, but the newspaper also states that, “It's unclear what remedies are available to the state if the system is still not working properly after that.”

Deloitte says it has hired extra workers to help with the backlog of unemployment claims, as well as with the notices the new system erroneously sent to unemployed workers demanding repayment of money they never received.

Former secretary of Labor and Workforce Development Suzanne Bump, who is now the state auditor, listed the upgrading of the new system as one of her accomplishments during her tenure at the labor department, but the Globe states that she is now trying to disassociate herself from the project as quickly as possible. A big surprise, eh?

Michelle Amante, the state official now in charge of the project, used to work for Deloitte on the project. She claims that despite all of the problems, “we fundamentally believe that the system is working.” Another big surprise.

Finally, joining (or remaining in) the ranks of unhappy constituents this week are North Carolina residents and businesses. The state recently rolled out two new systems, one called NCFast and the other NCTracks. NCFast (North Carolina Families Accessing Services through Technology), which was “soft-launched” in mid-summer (the system will not be finished until 2017), is the N.C. Department of Health and Human Services’ new computer system. It is supposed to streamline the work activities and business processes of the department and county social services agencies so that more time can be spent helping those requiring public assistance and less on bureaucratic tasks.

However, there have been ongoing issues with the $48 million system that have caused many families on food assistance to go without their benefits. The state is blaming the counties for the problems, while the counties are blaming the state.

The same department has another headache in the form of its new NCTracks system. (I have no idea what that is an acronym for, if in fact it is one.) On 1 July, the department launched its controversial $484 million system in the wake of a state audit (pdf) released in May that cast doubt on whether the system—which was $200 million over budget and two years late—was ready to go live. The audit cited, among other things, the lack of testing of key system elements.  

The Department of Health and Human Services insisted on 1 July that there was nothing major to worry about, regardless of what the audit reported. It conceded that there might be an “initial rough patch of 30 to 90 days as providers get used to using the new system,” but that there should be smooth sailing after that. Well, it has been a very rough patch indeed for many providers, who are, after 70 days and counting, still very unhappy with the system. The department has even had to mail emergency paper checks to over a thousand providers who couldn’t get their claims accepted by the new system and were facing financial hardship. A Triangle Business Journal story from last week reported that NCTracks “has missed its own targets nearly across the board, some by significant amounts.”

With the ongoing problems at both NCFast and NCTracks, North Carolina lawmakers are now going to get involved. Exactly how they intend to improve the situation is a bit of a mystery.

Australian Very Happy over Credit Card Glitch

According to a story from Australia’s Nine News, Carlo Spina wanted to buy a magazine at a BP service station in Sydney. However, Spina discovered that he did not have enough cash on him, which meant he had to pay by credit card. When he tried to do so, he had trouble getting the card to work. The few seconds spent fussing with the card reader probably saved Spina’s life, as an out-of-control SUV crashed into the service station along the path he would have taken to leave.

You can watch the very close call in a YouTube video here. Spina wasn’t hurt, but he was shaken up. I don't know if he ever did read his magazine.

Volvo Recalls 2014 Models

Finally, Volvo announced that it was recalling 8000 of its 2014 model year cars in the United States and Canada because of “a software glitch that could drain the battery and cause headlights, windshield wipers and turn signals to malfunction.” The models affected include S60 and S80 sedans and XC60 and XC70 crossovers.

Of Other Interest…

Tesco Pricing Error Sells White Chocolate Oranges for 1p

DC Metro Computer Problems Resolved

New Zealand Visa Bungle Blamed on Overloaded Computers

New School Computer Software Causing Confusion in Seattle

London Black Cab Production Restarts

NASA’s LADEE Glitch Fixed

Photo: iStockphoto

This Week in Cybercrime: Middle East’s Upheaval Breeds Hacktivists

Unrest in the streets of Egypt and Syria has led to thousands of deaths and a lack of personal security in many public spaces. But the civil wars raging there are turning out to be the backdrop for diminished security across online networks. McAfee, the cybersecurity firm best known for its online antivirus solution, told Reuters this week that more than half of the cybercrime activity now occurring in the Middle East can be characterized as “hacktivism” by politically motivated programmers looking to sabotage opposition institutions or groups.

“It’s difficult for people to protest in the street in the Middle East and so defacing websites and [carrying out] denial of service (DOS) attacks are a way to protest instead,” Christiaan Beek, director of incident response forensics for McAfee in the Europe, Middle East and Africa (EMEA) region, told Reuters.

The targets have overwhelmingly been entities linked to the region’s economic underpinnings, which in most cases means crude oil. Cyber attacks in the region are reportedly focused on Saudi Arabia, the world’s leading oil exporter; Qatar, the top supplier of liquefied natural gas; and Dubai, the region’s aviation, commercial and financial hub.

Gert-Jan Schenk, McAfee president for the EMEA region, told Reuters, “Ten years ago, it was all about trying to infect as many people as possible. Today we see more and more attacks being focused on very small groups of people. Sometimes malware is developed for a specific department in a specific company.”

Read More

Are STEM Workers Overpaid?

One of the strongest arguments offered by those trying to entice more students into the STEM education pipeline is the “earnings premium” STEM workers enjoy in comparison to non-STEM workers. Typical is the statement in a 2011 U.S. Department of Commerce press release that “STEM workers command higher wages, earning 26 percent more than their non-STEM counterparts.”

Further, the Commerce Department press release quotes U.S. Secretary of Education Arne Duncan’s plea to prospective STEM students that, “A STEM education is a pathway to prosperity – not just for you as an individual but for America as a whole. We need you in our classrooms, labs and key government agencies to help solve our biggest challenges.”

However, not everyone is as happy as Duncan with STEM workers earning a premium for solving those big challenges; some instead believe that the U.S. would be even more competitive and have a more equitable society if the earnings premium disappeared. For instance, former Federal Reserve Chairman Alan Greenspan, speaking at a U.S. Treasury conference on U.S. Capital Markets Competitiveness, put it bluntly: “Our skilled wages are higher than anywhere in the world. If we open up a significant window for skilled [guest] workers, that would suppress the skilled-wage level and end the concentration of income.”

Greenspan, to ensure everyone got the point, added, “Significantly opening up immigration to skilled workers solves two problems. The companies could hire the educated workers they need. And those workers would compete with high-income people, driving more income equality.”

[For a detailed examination of how effective this policy could be, I strongly suggest you read Eric Weinstein’s National Bureau of Economic Research draft working paper titled “How and Why Government, Universities, and Industry Create Domestic Labor Shortages of Scientists and High-Tech Workers,” on the active suppression of STEM Ph.D. salaries by way of false National Science Foundation claims of a STEM shortage, coupled with aggressive lobbying efforts to change STEM guestworker policies in the late 1980s to early 1990s. While the NSF eventually apologized for its misrepresentations to Congress in 1992 and admitted that there was in fact a surplus of STEM workers, the damage was already done, with the fallout continuing to this day.]

Greenspan is not alone in thinking that STEM worker salaries should look a lot more like non-STEM worker salaries. In March, over “100 executives from the technology sector and leading innovation advocacy organizations” sent an open letter to President Obama urging him and Congress to expand the H-1B visa program beyond today’s 85 000-visa limit (including 20 000 visas reserved for foreign graduates with advanced degrees from U.S. universities) to a minimum of 115 000 per year, and possibly as high as 300 000 within a decade (not to mention granting permanent legal status to an unlimited number of foreign students who earn graduate degrees from U.S. universities in STEM subjects).

The executives, whose companies have spent millions of dollars lobbying on the issue, collectively wrote in their letter that “One of the biggest economic challenges facing our nation is the need for more qualified, highly‐skilled professionals, domestic and foreign, who can create jobs and immediately contribute to and improve our economy.”

Four company executives from IBM, Intel, Microsoft and Oracle claimed in the letter that they had 10 000 openings that apparently could only be filled by guestworkers. That was, of course, before IBM announced its layoffs, and before Intel acknowledged that it was going to slow its hiring as well as cut production at some of its U.S. plants, which might lead to workers being let go. It will be interesting to see whether Microsoft CEO Steve Ballmer’s successor decides that the company payroll needs a bit of trimming, as Lou Gerstner did at IBM. One other company on the letter is Cisco, which has also announced layoffs since March. Maybe the companies can just trade workers.

While the tech company executives insist that their motive is not to cut payroll costs by increasing the supply of guestworker labor, few STEM workers believe their claims that there exists a “technology skills gap” that only guestworkers can fill, any more than they believe there is a U.S. manufacturing skills gap. Various surveys, for instance, have claimed a skilled manufacturing workforce shortage of between 300 000 and 600 000 workers.

However, others who have dug into the veracity of those claims, like the Boston Consulting Group, point out that the manufacturing skills gap is in reality closer to between 80 000 and 100 000 workers, and would be even smaller if employers were to increase their pay or hire less-skilled workers and train them, both of which employers seem highly reluctant to do. As the BCG study states, “Trying to hire high-skilled workers at rock-bottom rates is not a skills gap.”

Likewise, tech companies are definitely not interested in paying higher wages, something they were unhappily forced to do during the dot-com boom (and which some tried to avoid afterwards by tacitly agreeing not to poach employees from one another, until they got caught). The fact that thousands of these jobs stay unfilled for long periods of time also suggests that they are not nearly as critical or valuable as the companies make them out to be. Netflix, for example, pays its 700 software engineers 10 to 20 percent above its competitors in Silicon Valley, allowing it to “hire just about any engineer it wants.” As Netflix clearly demonstrates, an expanding U.S. company in a competitive marketplace that needs technology workers can get the U.S. technology workers it wants, if it really wants them.

Anuj Srivas, a technology and business reporter for the Indian paper The Hindu, pointed out earlier this year that despite all the rhetoric, the “H-1B visa farce,” as he calls it, is indeed “about the profit margins of Indian and American IT companies,” something that Vivek Wadhwa, an academic and entrepreneur who advocates for more H-1B visas, has acknowledged. Wadhwa candidly wrote in 2008 that, “I know from my experience as a tech CEO that H-1Bs are cheaper than domestic hires. Technically, these workers are supposed to be paid a ‘prevailing wage,’ but this mechanism is riddled with loopholes. In the tech world, salaries vary widely based on skill and competence. Yet the prevailing wage concept works on average salaries, so you can hire a superstar for the cost of an average worker. Add to this the inability of an H-1B employee to jump ship and you have a strong incentive to hire workers on these visas.”

Exactly how many “superstar” H-1B guestworkers are working in the U.S. is in some doubt. For example, the majority of H-1B guestworkers are currently Indian (pdf). Yet Indian companies themselves complain about the cost of making the vast majority of their engineering graduates employable (pdf). Back in 2006, when India graduated some 350 000 engineers, employers there estimated that only about 10 percent to at best 25 percent were employable by multinational companies. In 2012, however, there were over 850 000 graduates from Indian engineering colleges. It is doubtful that the quality of engineering education in India has been maintained even at 2006 levels, given the huge increase in graduates and engineering schools.

Given the long-standing complaints by tech company executives of a skills shortage and the need for more guestworker labor (e.g., in 1983, John Calhoun, the director of business development at Intel, testified to the U.S. Congress in regard to the need for more technology worker immigration, “The problem is absolutely one of a shortage and not one of lower-cost labor. We in the industry have been forced to hire guestworkers in order to grow.”), it is interesting to see how engineering and computer professional salaries have risen. I say “risen” because rising wages are usually considered the best indication of a shortage, according to the RAND Corporation.

Using data from a 1995 Northwestern University study (pdf) that provided starting salary information for engineering graduates from 1950 to 1994, and normalizing it to 2011 U.S. dollars, you see that average starting salaries peaked around 1970 at $61 200 and then dropped slowly but surely to $52 470 in 1995. The engineering class of 2011’s average starting salary (pdf) was $59 590, thanks in part to the dot-com demand that pushed starting salaries up about a decade ago.

The same pattern shows up in salary survey data published by IEEE-USA. In 1972, the median salary was $101 300 in 2011 dollars; by 1975, it had dropped to $98 460. By 1992, the median salary had fallen to $91 823. It recovered during the mid to late 1990s and hit a peak of $122 315 in 2002. The 2011 IEEE survey data show the median salary to be $115 790.

For computer professionals, the last ten years have likewise shown little change in salary. Compiling a decade’s worth of published DICE salary data, the average salary in constant 2011 dollars was $86 823 in 2001 and only $83 858 last year. According to a CNN story from the time, a software developer’s average salary in 1990 ranged from $84 750 to $101 600 in 2011 dollars.
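The constant-dollar comparisons above come down to a simple price-index ratio. Below is a minimal sketch of that adjustment in Python; the index values and the sample salary are illustrative placeholders of my own, not figures from the studies cited, which presumably used official CPI-U (or similar) deflators.

```python
# Minimal sketch of a constant-dollar (2011 dollars) salary adjustment.
# The index values below are illustrative placeholders; substitute official
# annual CPI-U averages from the U.S. Bureau of Labor Statistics to
# reproduce the comparisons in the studies cited above.

PRICE_INDEX = {
    1990: 130.7,   # placeholder index value for 1990
    2001: 177.1,   # placeholder index value for 2001
    2011: 224.9,   # placeholder index value for the 2011 base year
}

def to_2011_dollars(nominal_salary, year):
    """Scale a nominal salary earned in `year` into 2011 dollars."""
    return nominal_salary * PRICE_INDEX[2011] / PRICE_INDEX[year]

if __name__ == "__main__":
    # Example: a hypothetical $50,000 salary paid in 2001, restated in 2011 dollars.
    print(round(to_2011_dollars(50_000, 2001)))  # about 63,500 with these placeholder indexes
```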

It is hard to see from the salary data that there has been a huge spike in engineering salaries over the last thirty-plus years—the same period during which tech executives have been claiming to face a tech skills shortage, and the same period during which numerous tech companies apparently succeeded and made pretty decent profits. They may have had somewhat of a case to whine about engineering jobs in the mid to late 1990s, but the RAND study found little evidence of a shortage even then. The salary data definitely do not indicate a STEM shortage is occurring now.

As an EPI analysis published in April, which found no STEM worker shortage in the U.S., noted, “policies that expand the supply of guestworkers will discourage U.S. students from going into STEM, and into IT in particular.”

If that happens, and foreign STEM workers decide to stay home because the wages they would earn in the U.S. approach those of non-STEM workers, U.S. high-tech executives and the government will have no one to blame but themselves.

Photo: Muharrem Oner/Getty Images

What Ever Happened to STEM Job Security?

Figuring out how to draw more students into the “STEM education pipeline” has been a major concern of those arguing that there exists an acute shortage of STEM workers, be it in the U.S., the U.K., Brazil, Australia, or almost any country you choose. Typically, the arguments made to encourage students to enter the STEM pipeline center on how interesting STEM careers are and, especially, how much more money you can earn than by pursuing a non-STEM career.

However, others point out that many students aren’t interested in STEM careers because they see that the academic work needed at both the high school and university level to pursue a STEM degree is just too hard in comparison to non-STEM degrees. Until this changes (for example, by increasing the readiness of prospective STEM students by “redshirting” them), the argument goes, don’t expect a full STEM pipeline anytime soon.

Another little-talked-about factor that I personally witnessed has been the changing social compact between STEM workers and employers over the past several decades, and the impact it has had on convincing students today to pursue a STEM career. When my father, an electro-optical engineer, was laid off from his company late in the recession of 1957-1958, he assumed the company would rehire him a few months later when the economy got better. His wasn’t an unreasonable assumption, since that was the general practice in the 1950s. When he wasn’t soon rehired, and with a new house mortgage to pay and three children under age 5 to feed, my father left his temporary job selling Electrolux vacuums door-to-door and found another electro-optical engineering job. He stayed with that company for another 25 years, until he retired with the usual gold desk pen set, which now sits on my desk.

Read More

An Engineering Career: Only a Young Person’s Game?

If you are an engineer (or a computer professional, for that matter), the danger of becoming technologically obsolete is an ever-growing risk. To be an engineer is to accept the fact that at some future time—always sooner than one expects—most of the technical knowledge you once worked hard to master will be obsolete.

An engineer’s “half-life of knowledge,” an expression coined in 1962 by economist Fritz Machlup to describe the time it takes for half the knowledge in a particular domain to be superseded, has, everyone seems to agree, been steadily dropping. For instance, a 1966 story in IEEE Spectrum titled “Technical Obsolescence” postulated that the half-life of an engineering degree in the late 1920s was about 35 years; for a degree from 1960, it was thought to be about a decade.
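To spell out the arithmetic behind that framing (my gloss, not something from the Spectrum story or from Machlup), a half-life $T$ implies that the fraction of a degree's content still current after $t$ years decays as

$$ f(t) = 2^{-t/T}, $$

so with the 10-year estimate for a 1960 degree, roughly half of what a graduate learned would still be current after a decade, and only about a quarter after two.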

Thomas Jones, then an IEEE Fellow and president of the University of South Carolina, wrote a paper in 1966 for the IEEE Transactions on Aerospace and Electronic Systems titled “The Dollars and Cents of Continuing Education,” in which he agreed with the 10-year half-life estimate. Jones went on to roughly calculate what effort it would take for a working engineer to remain current in his or her field.

Read More

IT Hiccups of the Week: Sutter Health’s $1 Billion EHR System Crashes

After a torrid couple of months, last week saw a slowdown in the number of reported IT errors, miscalculations, and problems. We start off this week’s edition of IT Hiccups with the crash of a healthcare provider’s electronic health record system.

Sutter Health’s Billion Dollar EHR System Goes Dark

Last Monday, at about 0800 PDT, the nearly US $1 billion Epic electronic health record (EHR) system used by Sutter Health of Northern California crashed. As a result, the Sacramento Business Journal reported, healthcare providers at seven major medical facilities (including Alta Bates Summit Medical Center facilities in Berkeley and Oakland, Eden Medical Center in Castro Valley, Mills Peninsula Health Services in Burlingame and San Mateo, Sutter Delta in Antioch, Sutter Tracy, and Sutter Modesto), as well as at affiliated doctors’ offices and clinics, were unable to access patient medications or histories.

A software patch was applied Monday night, and EHR access was restored. Doctors and nurses no doubt spent most of Tuesday entering all the handwritten patient notes they had scribbled on Monday.

It still is unclear whether the crash was related to a planned system upgrade that was done the Friday evening before the crash, but if I were betting, I would lay some coin on that likelihood.

Nurses working at Sutter Alta Bates Summit Hospital have been complaining for months about problems with the EHR system, which was rolled out at the facility in April. Nurses at Sutter Delta Medical Center, where the system went live at about the same time but only for billing of chargeable items, have also complained that hospital management there has threatened to discipline nurses for not using the EHR system. Sutter management said that it was unaware of any of the issues the nurses were complaining about, and that any complaints they might have lodged were the result of an ongoing management-labor dispute.

Sutter is now about midway through its EHR roll-out, an effort it first started in 2004 with a planned cost of $1.2 billion and a completion date of 2013. It later backed off that aggressive schedule, then “jump started” its EHR efforts once more in 2007. Sutter now plans to complete the roll-out across all 15 of its hospitals by 2015, at a cost approaching $1.5 billion.

Hospital management said in the aftermath of the incident, “We regret any inconvenience this may have caused patients.” It did not express regret to its nurses, however.

Computer Issue Scraps Japanese Rocket Launch

Last Tuesday, the launch of Japan’s new Epsilon rocket was scrubbed with 19 seconds to go because a computer aboard the rocket “detected a faulty sensor reading.” The Japan Aerospace Exploration Agency (JAXA) had spent US $200 million developing the rocket, which is supposed to be controllable from conventional desktop computers instead of massive control centers. That added convenience comes from the rocket’s extensive use of artificial intelligence to perform its own status checks.

The Japan Times reported on Thursday that the problem was traced to a “computer glitch at the ground control center in which an error was mistakenly identified in the rocket’s positioning.”

The Times stated that, “According to JAXA project manager Yasuhiro Morita, the fourth-stage engine in the upper part of the Epsilon that is used to put a satellite in orbit, is equipped with a sensor that detects positioning errors. The rocket’s computer system starts calculating the rocket’s position based on data collected by the sensor 20 seconds before a launch. The results are then sent to a computer system at the ground control center, which judges whether the rocket is positioned correctly. On Tuesday, the calculation started 20 seconds before the launch, as scheduled, but the ground control computer determined the rocket was incorrectly positioned one second later based on data sent from the rocket’s computer.”

The root cause(s) of the problem are still unknown, although it is speculated that it was a transmission issue. JAXA says that it will be examining “the relevant computer hardware and software in detail.” The Times reported on Wednesday that speculation centered on a “computer programming error and lax preliminary checks.”

JAXA President Naoki Okumura apologized for the launch failure, which he said brought “disappointment to the nation and organizations involved.” A new launch date has yet to be announced.

Nasdaq Blames Software Bug For Outage

Two weeks ago, Nasdaq suffered what it called at the time a “mysterious trading glitch.” The problem shut down trading for three hours. After pointing fingers at rival exchange NYSE Arca, it admitted last week that perhaps it wasn’t all Arca’s fault after all.

A Reuters News story quoted Bob Greifeld, Nasdaq's chief executive, as saying Nasdaq’s backup system didn’t work because, “There was a bug in the system, it didn't fail over properly, and we need to work hard to make sure it doesn't happen again.”

However, Greifeld didn’t fully let Arca off the hook. A story at the Financial Times said that in testing, Nasdaq’s Securities Information Processor (SIP), the system that receives all traffic on quotes and orders for stocks on the exchange, “was capable of handling around 500,000 messages per second containing trades and quotes. However, in practice, Nasdaq said repeated attempts to connect to the SIP by NYSE Arca, a rival electronic trading platform, and streams of erroneous quotes from its rival eroded the system’s capacity in a manner similar to a distributed denial of service attack. Whereas the SIP had a capacity of 10,000 messages per data port, per second, it was overwhelmed by up to more than 26,000 messages per port, per second.”
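To make the capacity arithmetic concrete, here is a generic per-port message-rate budget written as a small Python sketch. It is not Nasdaq's SIP design, whose internals have not been published in that detail; it simply illustrates why a peer sending roughly 26,000 messages per port per second would overwhelm a budget of 10,000.

```python
import time
from collections import defaultdict

# Generic illustration of a per-port message-rate budget, loosely based on
# the figures quoted above (10,000 messages per data port, per second).
# This is NOT Nasdaq's actual SIP implementation; it is only a sketch of
# why ~26,000 messages per port per second would exceed such a budget.

CAPACITY_PER_PORT = 10_000  # messages allowed per port in each one-second window

class PortRateTracker:
    def __init__(self, capacity=CAPACITY_PER_PORT):
        self.capacity = capacity
        self.window_start = defaultdict(float)  # start time of the current window, per port
        self.count = defaultdict(int)           # messages seen in the current window, per port

    def accept(self, port, now=None):
        """Return True if this message still fits in the port's one-second budget."""
        now = time.monotonic() if now is None else now
        if now - self.window_start[port] >= 1.0:  # roll over to a new one-second window
            self.window_start[port] = now
            self.count[port] = 0
        self.count[port] += 1
        return self.count[port] <= self.capacity

if __name__ == "__main__":
    tracker = PortRateTracker()
    # Simulate a burst of 26,000 messages arriving on one port within a single second.
    accepted = sum(tracker.accept(port=1, now=0.5) for _ in range(26_000))
    print(accepted)  # 10000 -- everything past the budget would have to be dropped or queued
```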

Nasdaq said that it was now looking at design changes to make the SIP more resilient.

A detailed report looking into the cause of the failure is expected to be released in about two weeks.

Of Other Interest…

Computer Error Causes False Weather Alert and Cancelled Classes at Slippery Rock University

UK’s HSBC Bank Suffers IT Glitch

NC Fast Computer System Can’t Shake Processing Problems

North Carolina DMV Computer System Now Back to Normal

Australia’s Telstra Faces Large Compensation Bill for Internet Problems

Data Glitch Hits CBOE Futures Exchange

China Fines Everbright Securities US $85 million Over Trading Error

Photo: iStockphoto

Is There a U.S. IT Worker Shortage?

Someone who is a data scientist today is said by Harvard Business Review to have the sexiest job of the 21st century. And if sexy isn’t enough, how about being a savior of the economy? According to a 2011 report by consulting company McKinsey & Company, “Big Data” is “the next frontier for innovation, competition and productivity.” That is, of course, if enough of those sexy data scientists can be found.

According to the same McKinsey report, by 2018 “the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.”

However, Peter Sondergaard, senior vice president at Gartner and global head of research, asserts that the shortage situation is even more frightening than what McKinsey implies. Sondergaard stated in October 2012 that, “By 2015, 4.4 million IT jobs globally will be created to support Big Data, generating 1.9 million IT jobs in the United States. In addition, every big data‐related role in the U.S. will create employment for three people outside of IT, so over the next four years a total of 6 million jobs in the U.S. will be generated by the information economy.”

Wow. Not only will Big Data make a significant dent in the U.S. unemployment rate, but the U.S. IT technical workforce of 3.9 million or so needs to increase by almost 50 percent within the next two years.
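That “almost 50 percent” is just Sondergaard’s 1.9 million new U.S. IT jobs divided by the existing U.S. IT workforce of roughly 3.9 million:

$$ \frac{1.9\ \text{million}}{3.9\ \text{million}} \approx 0.49. $$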

But wait, there’s more.

Read More

Chinese Internet Rocked by Cyberattack

China’s Internet infrastructure was temporarily rocked by a distributed denial of service attack that began at about 2 a.m. local time on Sunday and lasted for roughly four hours. The incident, which was initially reported by the China Internet Network Information Center (CNNIC), a government-linked agency, is being called the “largest ever” cyberattack targeting websites using the country’s .cn URL extension. Though details about the number of affected users have been hard to come by, CNNIC apologized to users for the outage, saying that “the resolution of some websites was affected, leading visits to become slow or interrupted.” The best explanation offered so far is that the attacks crippled a database that converts a website’s URL into the series of numbers (its IP address) that servers and other computers read. The entire .cn network wasn’t felled because some Internet service providers store their own copies of these databases.
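What the attack disrupted is ordinary DNS resolution: turning a hostname into the IP address that computers actually connect to, with ISPs' caching resolvers able to keep serving stored answers even when the authoritative .cn servers are struggling. Here is a minimal lookup sketch using only Python's standard library; the hostname is an illustrative example of mine, not one from the article.

```python
import socket

def resolve_ipv4(hostname):
    """Ask the local resolver for the IPv4 addresses currently mapped to `hostname`."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})  # info[4] is the (address, port) pair

if __name__ == "__main__":
    # If a zone's authoritative servers are unreachable, this call succeeds only when an
    # intermediate resolver still has a cached answer -- which is why parts of .cn
    # stayed reachable during the attack.
    print(resolve_ipv4("example.cn"))  # illustrative hostname, not from the article
```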

A Wall Street Journal report notes that the attack made a serious dent in Chinese Web traffic. Matthew Prince, CEO of Internet security firm CloudFlare, told the WSJ that his company observed a 32 percent drop in traffic on Chinese domains. But Prince was quick to note that although the attack affected a large swath of the country, the entity behind it was probably not another country. “I don’t know how big the ‘pipes’ of .cn are,” Prince told the Wall Street Journal, “but it is not necessarily correct to infer that the attacker in this case had a significant amount of technical sophistication or resources. It may well have been a single individual.”

That reasoning stands in stark contrast to the standard China-blaming reaction to attacks on U.S. and Western European Internet resources or the theft of information stored on computers in those regions. In the immediate aftermath of the incident, there was an air of schadenfreude among some observers. Bill Brenner of cloud-services provider Akamai told the Wall Street Journal that “the event was particularly ironic considering that China is responsible for the majority of the world’s online ‘attack traffic.’” Brenner pointed to Akamai’s 2013 “State of the Internet” report, which noted that 34 percent of global attacks originated from China, with the U.S. coming in third with 8.3 percent.

For its part, the CNNIC, rather than pointing fingers, said it will be working with the Chinese Ministry of Industry and Information Technology to shore up the nation’s Internet “service capabilities.”

Photo: Ng Han Guan/AP Photo


Risk Factor

IEEE Spectrum's risk analysis blog, featuring daily news, updates and analysis on computing and IT projects, software and systems failures, successes and innovations, security threats, and more.

Contributor
Willie D. Jones
 