Risk Factor

Are STEM Workers Overpaid?

One of the strongest reasons given by those trying to entice more students to enter the STEM education pipeline is the “earnings premium” STEM workers make in comparison to non-STEM workers. Typical is the statement in a 2011 U.S. Department of Commerce press release that, “STEM workers command higher wages, earning 26 percent more than their non-STEM counterparts.”

Further, the Commerce Department press release quotes U.S. Secretary of Education Arne Duncan’s plea to prospective STEM students that, “A STEM education is a pathway to prosperity – not just for you as an individual but for America as a whole. We need you in our classrooms, labs and key government agencies to help solve our biggest challenges.”

However, not everyone is as happy as Duncan with STEM workers earning a premium for solving those big challenges; some believe instead that the U.S. would be even more competitive and have a more equitable society if the earnings premium disappeared. For instance, former Federal Reserve Bank Chairman Alan Greenspan, speaking at a U.S. Treasury conference on U.S. Capital Markets Competitiveness, put it bluntly: “Our skilled wages are higher than anywhere in the world. If we open up a significant window for skilled [guest] workers, that would suppress the skilled-wage level and end the concentration of income.”

Greenspan, to ensure everyone got the point, added, “Significantly opening up immigration to skilled workers solves two problems. The companies could hire the educated workers they need. And those workers would compete with high-income people, driving more income equality.”

[For a detailed examination of how effective this policy could be, I strongly suggest you read Eric Weinstein’s National Bureau of Economic Research draft working paper titled, “How and Why Government, Universities, and Industry Create Domestic Labor Shortages of Scientists and High-Tech Workers,” on the active suppression of STEM Ph.D. salaries by way of false National Science Foundation claims of a STEM shortage, coupled with aggressive lobbying efforts to change STEM guestworker policies in the late 1980s to early 1990s. While the NSF eventually apologized for its misrepresentations to Congress in 1992 and admitted that there was in fact a surplus of STEM workers, the damage was already done, with the fallout continuing today.]

Greenspan is not alone in his thinking that STEM worker salaries should look a lot more like non-STEM worker salaries. In March, over “100 executives from the technology sector and leading innovation advocacy organizations” sent an open letter to President Obama urging him and Congress to approve the expansion of the H-1B visa program beyond today’s 85 000-visa limit (including 20 000 reserved for foreign graduates with advanced degrees from U.S. universities) to a minimum of 115 000 per year, and possibly as high as 300 000 within a decade (not to mention the granting of permanent legal status to an unlimited number of foreign students who earn graduate degrees from U.S. universities in STEM subjects).

The executives, whose companies have spent millions of dollars lobbying on the issue, collectively wrote in their letter that, “One of the biggest economic challenges facing our nation is the need for more qualified, highly‐skilled professionals, domestic and foreign, who can create jobs and immediately contribute to and improve our economy.”

Four company executives from IBM, Intel, Microsoft and Oracle claimed in the letter that they had 10 000 openings they apparently could fill only with guestworkers. That was, of course, before IBM announced its layoffs, and before Intel acknowledged that it was going to slow down its hiring as well as cut production at some of its U.S. plants, which might lead to workers being let go. It will be interesting to see whether Microsoft CEO Steve Ballmer’s successor decides that the company payroll needs a bit of trimming, as Lou Gerstner did at IBM. And one other company on the letter list is Cisco, which has also announced layoffs since the March letter. Maybe the companies can just trade workers.

While the tech company executives insist that their motive is not to cut payroll costs by increasing the supply of guestworker labor, few STEM workers believe their claims that there exists a “technology skills gap” that only guestworkers can fill, any more than there is a U.S. manufacturing skills gap. Various surveys have claimed, for example, a skilled-manufacturing workforce shortage of between 300 000 and 600 000 workers.

However, others examining the veracity of those claims, like the Boston Consulting Group, point out that the manufacturing skills gap is in reality closer to between 80 000 and 100 000, and would be even smaller if employers were to increase their pay or hire less-skilled workers and train them, both of which employers seem highly reluctant to do. As the BCG study states, “Trying to hire high-skilled workers at rock-bottom rates is not a skills gap.”

Likewise, tech companies are definitely not interested in paying higher wages, something they were unhappily forced to do during the dot-com boom period (and which some tried to avoid afterwards as well by tacitly agreeing not to poach employees from one another until they got caught). The fact that thousands of these unfilled jobs stay unfilled for long periods of time also seems an indication that they are not nearly as critical or valuable as the companies make them out to be. Netflix, for example, pays its 700 software engineers 10 to 20 percent above its competitors in Silicon Valley, allowing it to “hire just about any engineer it wants.”  As Netflix clearly demonstrates, an expanding U.S. company in a competitive marketplace that needs technology workers can get the U.S. technology workers it wants, if it really wants them.

Anuj Srivas, the technology and business reporter of the Indian paper The Hindu, pointed out earlier this year that despite all the rhetoric, the “H-1B visa farce” as he calls it is indeed “about the profit margins of Indian and American IT companies,” something that Vivek Wadhwa, an academic and entrepreneur who advocates more H-1B visas, has acknowledged. Wadhwa candidly wrote in 2008 that, “I know from my experience as a tech CEO that H-1Bs are cheaper than domestic hires. Technically, these workers are supposed to be paid a ‘prevailing wage,’ but this mechanism is riddled with loopholes. In the tech world, salaries vary widely based on skill and competence. Yet the prevailing wage concept works on average salaries, so you can hire a superstar for the cost of an average worker. Add to this the inability of an H-1B employee to jump ship and you have a strong incentive to hire workers on these visas.”

Exactly how many “superstar” H-1B guestworkers are working in the U.S. is in some doubt. For example, the majority of H-1B guestworkers are currently Indian (pdf). Yet Indian companies themselves complain about the cost of making the vast majority of engineering graduates employable (pdf). Back in 2006, when India graduated some 350 000 engineers, employers there estimated that about 10 percent to at best 25 percent were employable by multinational companies. In 2012, however, there were over 850 000 graduates from Indian engineering colleges. It is doubtful that the quality of engineering education in India has been maintained at even 2006 levels, given the huge increase in graduates and engineering schools.

Given the long-standing complaints by tech company executives of a skill shortage and the need for more guestworker labor (e.g., in 1983, John Calhoun, the director of business development at Intel, testified to the U.S. Congress in regard to the need for more technology worker immigration that, “The problem is absolutely one of a shortage and not one of lower-cost labor. We in the industry have been forced to hire guestworkers in order to grow.”), it is interesting to see how engineering and computer professional salaries have risen. I say “risen” because rising wages are usually considered the best indication of a shortage, according to the RAND Corporation.

Using data from a Northwestern University 1995 study (pdf) that provided starting engineering graduate salary information from 1950 to 1994, and normalizing it to 2011 U.S. dollars, you see that average starting salaries peaked around 1970 at $61 200 and then dropped slowly but surely to $52 470 in 1995. The engineering class of 2011 average starting salary (pdf) was $59 590, thanks in part to the dot-com demand, which pushed starting salaries up about a decade ago.
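The constant-dollar normalization behind these comparisons is a simple CPI ratio. A minimal sketch, using approximate annual-average CPI-U values (illustrative assumptions here, not necessarily the exact price series the cited studies used):

```python
# Approximate annual-average CPI-U values (assumed for illustration).
CPI = {1970: 38.8, 1995: 152.4, 2011: 224.9}

def to_2011_dollars(nominal: float, year: int) -> float:
    """Convert a nominal dollar amount from `year` into 2011 dollars."""
    return nominal * CPI[2011] / CPI[year]

# A nominal starting salary of about $10,560 in 1970 works out to
# roughly $61,200 in 2011 dollars, matching the figure quoted above.
print(round(to_2011_dollars(10560, 1970), -2))
```

The same one-line conversion applies to each year’s figure, which is what makes the 1970-versus-1995 comparison apples-to-apples.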

This is shown in the IEEE-USA published salary survey data. In 1972, the median salary was $101 300 in 2011 dollars, whereas in 1975, the median salary had dropped to $98 460.  By 1992, median salaries had fallen to $91 823. Median salary recovered during the mid to late 1990s, and hit a peak of $122 315 in 2002. The 2011 IEEE survey data shows the median salary to be $115 790.

For computer professionals, the last ten years have likewise shown little change in salary. Compiling a decade’s worth of published DICE salary data, the average salary in 2011 constant dollars was $86 823 in 2001, and was only $83 858 last year. According to a story from CNN at the time, a software developer’s average salary in 1990 ranged from $84 750 to $101 600 in 2011 dollars.

It is hard to see from the salary data that there has been a huge spike in engineering salaries over the last thirty-plus years, the same period that tech executives have been claiming they face a tech skill shortage, and the same period during which numerous tech companies apparently succeeded and made pretty decent profits. They may have had somewhat of a case to whine about in the mid-to-late 1990s in regard to engineering jobs, but the study by RAND found little evidence of one even then. The salary data definitely doesn’t indicate a STEM shortage is occurring now.

As an EPI analysis in April, which found no STEM worker shortage in the U.S., noted, “policies that expand the supply of guestworkers will discourage U.S. students from going into STEM, and into IT in particular.”

If that happens, and foreign STEM workers then decide to stay home because the wages they would earn in the U.S. approach those of non-STEM workers, U.S. high-tech executives and the government will have no one to blame but themselves.

Photo: Muharrem Oner/Getty Images

What Ever Happened to STEM Job Security?

Figuring out how to get more students drawn into the “STEM education pipeline” has been a major concern of those arguing that there exists an acute shortage of STEM workers, be it in the U.S., the U.K., Brazil, Australia, or almost any country you choose. Typically, the arguments made to encourage students to enter the STEM pipeline center around how interesting STEM careers are and especially how much money you can earn over pursuing non-STEM careers.

However, others point out that many students aren’t interested in STEM careers because they see that the academic work needed at both the high school and university-level to pursue a STEM degree is just too hard in comparison to non-STEM degrees. Until this changes (for example, by increasing the readiness of a prospective STEM student by “redshirting” them), the argument goes, don’t expect a full STEM pipeline anytime soon.

Another factor little talked about, and one that I personally witnessed, has been the changing social compact between STEM workers and employers over the past several decades, and the impact it has had on convincing students today to pursue a STEM career. When my father, an electro-optical engineer, was laid off from his company late in the recession of 1957–1958, he assumed the company would rehire him a few months later when the economy got better. His wasn’t an unreasonable assumption, since that was the general practice in the 1950s. When he wasn’t soon rehired, and with a new house mortgage to pay and three children under age 5 to feed, my father left his temporary job of selling Electrolux vacuums door-to-door and found another electro-optical engineering job. He stayed with that company for another 25 years, until he retired with the usual gold desk pen-set, which now sits on my desk.


An Engineering Career: Only a Young Person’s Game?

If you are an engineer (or a computer professional, for that matter), the danger of becoming technologically obsolete is an ever-growing risk. To be an engineer is to accept the fact that at some future time—always sooner than one expects—most of the technical knowledge you once worked hard to master will be obsolete.

Everyone seems to agree that an engineer’s “half-life of knowledge,” an expression coined in 1962 by economist Fritz Machlup to describe the time it takes for half the knowledge in a particular domain to be superseded, has been steadily dropping. For instance, a 1966 story in IEEE Spectrum titled “Technical Obsolescence” postulated that the half-life of an engineering degree in the late 1920s was about 35 years; for a degree from 1960, it was thought to be about a decade.

Thomas Jones, then an IEEE Fellow and President of the University of South Carolina, wrote a paper in 1966 for the IEEE Transactions on Aerospace and Electronic Systems titled, “The Dollars and Cents of Continuing Education,” in which he agreed with the 10-year half-life estimate. Jones went on to roughly calculate what effort it would take for a working engineer to remain current in his or her field.
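Read as simple exponential decay, the half-life idea gives a quick sense of the numbers involved. A minimal sketch, assuming the 10-year half-life figure discussed above:

```python
# Exponential-decay reading of the "half-life of knowledge" idea:
# with a half-life H, the fraction of a degree's technical content
# still current after t years is 0.5 ** (t / H).
def fraction_current(t_years: float, half_life: float = 10.0) -> float:
    return 0.5 ** (t_years / half_life)

print(round(fraction_current(10), 2))  # half the knowledge remains at the half-life
print(round(fraction_current(20), 2))  # only a quarter remains after two half-lives
print(round(fraction_current(35, half_life=35.0), 2))  # a late-1920s degree at 35 years
```

Under a 10-year half-life, an engineer 20 years out of school is working from roughly a quarter of the technical content of his or her degree, which is the arithmetic behind Jones’s case for continuing education.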


IT Hiccups of the Week: Sutter Health’s $1 Billion EHR System Crashes

After a torrid couple of months, last week saw a slowdown in the number of reported IT errors, miscalculations, and problems. We start off this week’s edition of IT Hiccups with the crash of a healthcare provider’s electronic health record system.

Sutter Health’s Billion Dollar EHR System Goes Dark

Last Monday, at about 0800 PDT, the nearly US $1 billion Epic electronic health record (EHR) system used by Sutter Health of Northern California crashed. As a result, the Sacramento Business Journal reported, healthcare providers at seven major medical facilities, including Alta Bates Summit Medical Center facilities in Berkeley and Oakland, Eden Medical Center in Castro Valley, Mills Peninsula Health Services in Burlingame and San Mateo, Sutter Delta in Antioch, Sutter Tracy, Sutter Modesto and affiliated doctors’ offices and clinics, were unable to access patient medications or histories.

A software patch was applied Monday night, and EHR access was restored. Doctors and nurses no doubt spent most of the day Tuesday entering all the handwritten patient notes they scribbled on Monday.

It still is unclear whether the crash was related to a planned system upgrade that was done the Friday evening before the crash, but if I were betting, I would lay some coin on that likelihood.

Nurses working at Sutter Alta Bates Summit Hospital have been complaining for months about problems with the EHR system, which was rolled out at the facility in April. Nurses at Sutter Delta Medical Center have also complained that hospital management there has threatened to discipline nurses for not using the EHR system; the system there went live at about the same time as Alta Bates Summit’s, though only for billing of chargeable items. Sutter management said that it was unaware of any of the issues the nurses were complaining about, and that any complaints they might have lodged were the result of an ongoing management-labor dispute.

Sutter is now about midway through its EHR system roll-out, an effort it first started in 2004 at a planned cost of $1.2 billion and completion date of 2013. It later backed off that aggressive schedule, and then “jump started” its EHR efforts once more in 2007. Sutter plans to complete the roll-out across all 15 of its hospitals by 2015 at a cost now approaching $1.5 billion.

Hospital management said in the aftermath of the incident, “We regret any inconvenience this may have caused patients.” It did not express regret to its nurses, however.

Computer Issue Scraps Japanese Rocket Launch

Last Tuesday, the launch of Japan’s new Epsilon rocket was scrubbed with 19 seconds to go because a computer aboard the rocket “detected a faulty sensor reading.” The Japan Aerospace Exploration Agency (JAXA) had spent US $200 million developing the rocket, which is supposed to be controllable from conventional desktop computers instead of massive control centers. That convenience comes from the rocket’s extensive use of artificial intelligence to perform its own status checks.

The Japan Times reported on Thursday that the problem was traced to a “computer glitch at the ground control center in which an error was mistakenly identified in the rocket’s positioning.”

The Times stated that, “According to JAXA project manager Yasuhiro Morita, the fourth-stage engine in the upper part of the Epsilon that is used to put a satellite in orbit, is equipped with a sensor that detects positioning errors. The rocket’s computer system starts calculating the rocket’s position based on data collected by the sensor 20 seconds before a launch. The results are then sent to a computer system at the ground control center, which judges whether the rocket is positioned correctly. On Tuesday, the calculation started 20 seconds before the launch, as scheduled, but the ground control computer determined the rocket was incorrectly positioned one second later based on data sent from the rocket’s computer.”

The root cause(s) of the problem are still unknown, although it is speculated that it was a transmission issue. JAXA says that it will be examining “the relevant computer hardware and software in detail.” The Times reported on Wednesday that speculation centered on a “computer programming error and lax preliminary checks.”

JAXA President Naoki Okumura apologized for the launch failure, which he said brought “disappointment to the nation and organizations involved.” A new launch date has yet to be announced.

Nasdaq Blames Software Bug For Outage

Two weeks ago, Nasdaq suffered what it called at the time a “mysterious trading glitch.” The problem shut down trading for three hours. After pointing fingers at rival exchange NYSE Arca, it admitted last week that perhaps it wasn’t all Arca’s fault after all.

A Reuters News story quoted Bob Greifeld, Nasdaq's chief executive, as saying Nasdaq’s backup system didn’t work because, “There was a bug in the system, it didn't fail over properly, and we need to work hard to make sure it doesn't happen again.”

However, Greifeld didn’t fully let Arca off the hook. A story at the Financial Times said that in testing, Nasdaq’s Securities Information Processor (SIP), the system that receives all traffic on quotes and orders for stocks on the exchange, “was capable of handling around 500,000 messages per second containing trades and quotes. However, in practice, Nasdaq said repeated attempts to connect to the SIP by NYSE Arca, a rival electronic trading platform, and streams of erroneous quotes from its rival eroded the system’s capacity in a manner similar to a distributed denial of service attack. Whereas the SIP had a capacity of 10,000 messages per data port, per second, it was overwhelmed by up to more than 26,000 messages per port, per second.”

Nasdaq said that it was now looking at design changes to make the SIP more resilient.

A detailed report looking into the cause of the failure will be released in about two weeks.

Of Other Interest…

Computer Error Causes False Weather Alert and Cancelled Classes at Slippery Rock University

UK’s HSBC Bank Suffers IT Glitch

NC Fast Computer System Can’t Shake Processing Problems

North Carolina DMV Computer System Now Back to Normal

Australia’s Telstra Faces Large Compensation Bill for Internet Problems

Data Glitch Hits CBOE Futures Exchange

China Fines Everbright Security US $85 million Over Trading Error

Photo: iStockphoto

Is There a U.S. IT Worker Shortage?

Someone who is a data scientist today is said by Harvard Business Review to have the sexiest job of the 21st century. And if sexy isn’t enough, how about being a savior of the economy? According to a 2011 report by consulting company McKinsey & Company, “Big Data” is “the next frontier for innovation, competition and productivity.” That is, of course, if enough of those sexy data scientists can be found.

That’s because, also according to McKinsey’s report, “the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions,” by 2018.

However, Peter Sondergaard, senior vice president at Gartner and global head of research, asserts that the shortage situation is even more frightening than what McKinsey implies. Sondergaard stated in October 2012 that, “By 2015, 4.4 million IT jobs globally will be created to support Big Data, generating 1.9 million IT jobs in the United States. In addition, every big data‐related role in the U.S. will create employment for three people outside of IT, so over the next four years a total of 6 million jobs in the U.S. will be generated by the information economy.”

Wow. Not only will Big Data make a significant dent in the U.S. unemployment rate, but the U.S. IT technical workforce of 3.9 million or so needs to increase by almost 50 percent within the next two years.

But wait, there’s more.


Chinese Internet Rocked by Cyberattack

China’s Internet infrastructure was temporarily rocked by a distributed denial of service attack that began at about 2 a.m. local time on Sunday and lasted for roughly four hours. The incident, which was initially reported by the China Internet Network Information Center (CNNIC), a government-linked agency, is being called the “largest ever” cyberattack targeting websites using the country’s .cn URL extension. Though details about the number of affected users have been hard to come by, CNNIC apologized to users for the outage, saying that “the resolution of some websites was affected, leading visits to become slow or interrupted.” The best explanation offered so far is that the attacks crippled a database that converts a website’s URL into the series of numbers (its IP address) that servers and other computers read. The entire .cn network wasn’t felled because some Internet service providers store their own copies of these databases.
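The "database" in question is the Domain Name System, and the resolution step it performs is easy to see from any machine. A minimal sketch (using the local resolver, which is exactly the kind of cached copy that kept parts of the .cn network reachable during the attack):

```python
import socket

# Name resolution: turn a hostname into the numeric IP address that
# servers and other computers actually use. When the authoritative
# servers for a domain are knocked out, resolvers that have cached
# the answer can keep resolving names until the cache entry expires.
def resolve(hostname: str) -> str:
    return socket.gethostbyname(hostname)

print(resolve("localhost"))  # the loopback address, answered locally
```

A real `.cn` lookup during the attack would have failed or stalled at this step for any resolver without a cached answer, which is why users saw sites become “slow or interrupted” rather than uniformly dead.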

A Wall Street Journal report notes that the attack made a serious dent in Chinese Web traffic. Matthew Prince, CEO of Internet security firm CloudFlare told the WSJ that his company observed a 32 percent drop in traffic on Chinese domains. But Prince was quick to note that although the attack affected a large swath of the country, the entity behind it was probably not another country. “I don’t know how big the ‘pipes’ of .cn are,” Prince told the Wall Street Journal, “but it is not necessarily correct to infer that the attacker in this case had a significant amount of technical sophistication or resources. It may have well have been a single individual.”

That reasoning stands in stark contrast to the standard China-blaming reaction to attacks on U.S. and Western European Internet resources or the theft of information stored on computers in those regions. In the immediate aftermath of the incident, there was an air of schadenfreude from some observers. Bill Brenner of cloud-service provider Akamai told the Wall Street Journal that “the event was particularly ironic considering that China is responsible for the majority of the world’s online ‘attack traffic.’” Brenner pointed to Akamai’s 2013 ‘State of the Internet’ report, which noted that 34 percent of global attacks originated from China, with the U.S. coming third with 8.3 percent.

For its part, the CNNIC, rather than pointing fingers, said it will be working with the Chinese Ministry of Industry and Information Technology to shore up the nation’s Internet “service capabilities.”

Photo: Ng Han Guan/AP Photo

IT Hiccups of the Week: Stock Exchange “Gremlins” Attack

We were blessed with another impressive week of IT-related burps, belches and eructs. This time, stock market officials are reaching for the antacid.

Nasdaq Suffers Three-Hour Trading “Glitch”

Well, opinions vary about whether it was or wasn’t a big deal. Last Thursday, the Nasdaq suffered what the AP called a “mysterious trading glitch” that suspended trading on the exchange from 12:15 p.m. to 3:25 p.m. EDT. After trading resumed, the market closed up 39 points.

The trading suspension was the longest in Nasdaq history and was a major embarrassment for the exchange, which is still trying to recover from its Facebook IPO screw-up. The exchange blamed the problem on a “connectivity issue” involving its Securities Information Processor, which Reuters describes as being “the system that receives all traffic on quotes and orders for stocks on the exchange.” When the SIP doesn’t work, stock quotations cannot be disseminated.

Nasdaq Chief Executive Robert Greifeld has refused to discuss the cause of the problem in public. Greifeld did darkly hint, however, that the problems were someone else’s fault. The Guardian quoted him as saying, “I think where we have to get better is what I call defensive driving. Defensive driving means what do you do when another part of the ecosystem, another player, has some bad event that triggers something in your system?”

He then went on to say, “We spend a lot of time and effort where other things happen outside our control and how we respond to it.”

Greifeld’s statement immediately triggered further speculation that the other “player” was rival exchange NYSE Arca, which was known to have had connectivity issues with Nasdaq. Greifeld refused, however, to elaborate further on his statements.

Today’s Wall Street Journal published a lengthy story that has shed more light on what happened—although why it happened is still being investigated. According to the WSJ, between 10:53 a.m. and 10:55 a.m. EDT Thursday, “NYSE Arca officials tried and failed to establish a connection with Nasdaq about 30 times, according to people familiar with the events of that day. Nasdaq, for its part, was having its own problems regarding its connectivity to Arca, the people said.”

The WSJ goes on to say that: “What remained unclear Sunday was how that connectivity problem—which Nasdaq officials privately have called ‘unprecedented’—could have had so catastrophic an effect on Nasdaq's systems that the exchange decided to stop trading in all Nasdaq-listed shares, causing ripples of shutdowns across the market and spreading confusion.”

The Journal said NYSE Arca went to a backup system, and after several failed attempts, finally re-established connection with Nasdaq at 11:17 a.m. However, once the two exchanges were reconnected, “Nasdaq's computers began to suffer from a capacity overload created by the multiple efforts to connect the two exchanges.”

As a result, other markets also started to report problems in receiving and sending quotes from Nasdaq. Officials at Nasdaq decided they had better pull the plug in order to figure out how to get back to a normal operating state, which they did at 12:15 p.m.

Many traders viewed the episode as a non-event, while other interested observers, like U.S. Securities and Exchange Commission Chairman Mary Jo White, were more concerned. Given the complexity of the systems involved, no one should be surprised to see more hiccups in the future. In Charles Perrow’s terminology, they are now just “normal accidents.”

The folks at Goldman Sachs and Everbright were probably very happy about the distraction created by Nasdaq’s difficulties. Last Tuesday, Goldman “accidentally sent thousands of orders for options contracts to exchanges operated by NYSE Euronext, Nasdaq OMX and the CBOE, after a systems upgrade that went awry. The faulty orders roiled options markets in the opening 17 minutes of the day’s trading and sparked reviews of the transactions,” the Financial Times reported.

Bloomberg News reported that Goldman had placed four senior technology specialists on administrative leave because of the programming error, but Goldman declined to discuss why. Probably a smarter move than when Knight Capital Group CEO Thomas Joyce blamed “knuckleheads” in IT when a similar problem a year ago this month resulted in the loss of US $440 million in about 45 minutes. Goldman’s losses were expected to be less than US $100 million.

Knight Capital was sold last December to Getco Holdings Co.

Everbright Securities, the state-controlled Chinese brokerage, is also likely happy at the timing of Nasdaq’s and Goldman’s problems. On 16 August, a trading error traced to the brokerage significantly disrupted the Shanghai market. The error, which cost the brokerage US $31.7 million and the brokerage’s president his job, was blamed originally on a “fat finger” trade. However, a “computer system malfunction” was the real cause, the Financial Times reported. Needless to say, the China Securities Regulatory Commission is investigating Everbright and says “severe punishments” might be in order.

Finally, a real fat-finger incident hit the Tel Aviv Stock Exchange (TASE) yesterday. The Jerusalem Post reported that a trader from a TASE member, intending to carry out a major transaction for a different company’s stock, accidentally typed in Israel Corporation, the third-largest company traded on the exchange. The disparity in prices caused the company’s stock value to nose-dive, from an opening value of NIS 1690 down to NIS 2.10, or a 99.9 percent loss. The trader quickly realized his typo and requested the transaction be canceled. However, by then, the error had already triggered a halt in the exchange’s trading.

Helsinki’s Automated Metro Trains Have a Rough First Half Day

Last week, Helsinki’s Metro tried out its three driverless Siemens-built trains for the first time. However, in a bit of irony, after a few hours, problems developed with the ventilation system in the trains’ drivers’ cabins, and the trains had to be taken out of service. Drivers were aboard the automated trains for safety reasons. The Metro didn’t indicate whether the trains would have been pulled out of service (or the problem even detected) if they had been running in full automatic mode without drivers.

Indian Overseas Bank Back to Normal

The Hindu reported on Friday that the Indian Overseas Bank announced that the problem with the bank’s central server had finally been fixed. For three days, hundreds of thousands of bank customers were unable to deposit checks or use the bank’s ATM network. A story at Business Standard said that the problem was related to annual maintenance of the core banking system, which instead ended up creating what the bank said was a “complex technological malfunction.”

Of Other Interest….

Chrysler Delaying Launch of 2014 Cherokee Jeep Due to Transmission Software Issues

Network Issues Stop Marines from Using Unclassified Network

Tesco Pricing Glitch Lowers Price of Ice Cream by 88 Percent

Xerox Releases Scanner “Error” Software Patch

Photo: Seth Wenig/AP Photo

This Week in Cybercrime: Facebook Feels Backlash After Balking on Bug Bounty

This hasn’t been a week for major headline-making hacks, but a few interesting stories bubbled to the surface.

Last week, we reported that a Palestinian security researcher notified Facebook of a security vulnerability on its site by posting a message on the page of Facebook founder Mark Zuckerberg. The struggling researcher, Khalil Shreateh, was looking forward to receiving a reward under the social media site’s bug bounty program for reporting the problem, which would have allowed anyone to post messages to another user’s page, regardless of whether he or she is on the user’s Friends list.

But Facebook denied him, giving itself a PR black eye in the process. It seems Facebook fixed the bug but wouldn’t shell out any money to Shreateh. Why? The site’s security team reasoned that his method of notifying the company—first posting an Enrique Iglesias video to the page belonging to one of Zuckerberg’s college friends, then posting to Zuckerberg's page itself after the security team still insisted that the issue wasn't a bug—violated its terms of service.

Only after Marc Maiffret, CTO of network security firm BeyondTrust, heard about the snub and launched a page whose aim was to raise $10,000 for Shreateh did Facebook try to explain itself.

According to a Wired article, “Matt Jones, a member of Facebook’s security team, posted a note on the Hacker News web site saying a language barrier with Shreateh had been part of the problem for the company’s initial rejection of his submission…He also said that Shreateh had failed to provide any details about the bug that would help Facebook reproduce the problem and fix it.” But the bottom line, despite Jones’ attempts to rationalize a response that came off as miserly, is: 1) Facebook fixed the problem. 2) It was Shreateh who alerted them to it.

“Mistakes were made on both sides,” Jesse Kornblum, a network security engineer for Facebook later told Wired. “We should have asked for more details rather than saying, ‘this is not a bug.’ But Khalil should have demonstrated the vulnerability on a test account, not a real person. We’ve made an interface for [researchers] to create multiple test accounts [for that purpose].”

But Maiffret, who has met his goal (with $3000 coming from his own pocket), says nixing the bounty for the Palestinian researcher sent the wrong message. “It was a good thing that he did,” Maiffret, who got his start as a teenage hacker, told Wired. “He might have done it slightly wrong, but ultimately it was a bug he got killed off before anyone did a bad thing [with it].” Maiffret pointed to his own beginnings, noting that he went from being a rudderless high school dropout to having a successful career after someone agreed to take a chance on him. “Ultimately, [Shreateh] was well-intentioned and hopefully he stays on the same track of doing research,” Maiffret says.

Google App Engine an Unwitting Conduit for Adware

The adware that floods a computer user's browser with come-ons is nothing new. But purveyors of this pestilence have come up with a new way to spread it. Jason Ding, a research scientist at Barracuda Labs, posted a note on the company's research blog this week alerting the world that two sites hosted on Google's App Engine are lacing users' machines with adware posing as legitimate application software.

According to Ding, the sites, which appeared about a week ago, prey on the inexperienced or inattentive user. The first one (java-update[dot]appspot[dot]com), which passes itself off as a free Java download site, looks a lot like Oracle's official Java site. But clicking links on this sinister page causes the download of "setup.exe," which in turn tries to install the Solimba adware program. The endgame for the other site (updateplayer[dot]appspot[dot]com) also involves plaguing the user with Solimba. But instead of baiting users with a Java imposter, it tells them that their media player is outmoded and needs an update. And guess who's generous enough to offer just the right fix? Clicking on any of the site's links pulls down the same executable file that installs Solimba.

The people who set up these sites are using Google’s App Engine as an intermediary because it gives their pitches the air of credibility and hides URLs that would instantly put users on alert that something is fishy.

More People Affected by Outages from Cyberattacks than from Hardware Failures

A report released on Tuesday by the European Union Agency for Network and Information Security (ENISA) reveals some startling information about the reach and effectiveness of cybercrime. Last year, hardware failures accounted for about 38 percent of incidents that resulted in "severe outages of both mobile and fixed telephony and Internet services" in the E.U. Each of those incidents affected an average of 1.4 million people. Though cyberattacks made up only 6 percent of European outages last year, each incident affected an average of 1.8 million users.

“League of Legends” Maker Hacked

Marc Merrill and Brandon Beck, founders of video game maker Riot Games, said in a blog post this week that cybercrooks had hacked into its network and gained access to usernames, email addresses, salted password hashes, some first and last names, and encrypted credit card numbers. The company, developer of the online multiplayer game “League of Legends,” says it is looking into just what details were gleaned in the unauthorized access of 120 000 transaction records dating back to 2011.

In Other Cybercrime News…

Reuters: Ex-Soviet hackers play outsized role in cyber crime world

ZDNet Reviews the New Book “Cyber Crime & Warfare”

IT Hiccups of the Week: You May Want That Burger Well-Cooked

The summertime streak of interesting IT snafus, tangles and general "oops" incidents continues unabated. We start off with a story that appeared in the New York Times over the weekend that may make you reconsider your meat-cooking preference for your next outdoor barbecue.

U.S. Agriculture Meat Inspection Computer Outage Means Meat and Poultry Left Uninspected

The U.S. Department of Agriculture (USDA) considers it to be a non-issue, since there have yet to be any documented instances of people having gotten sick. However, one wonders how long it will be before the continuing problems with a new $20 million computer system, upon which some 3000 meat and poultry inspectors working at 6300 packing and processing plants across the U.S. depend, contribute to a major food-borne illness outbreak.

According to a Saturday New York Times story, the computer system the USDA Food Safety and Inspection Service (FSIS) meat and poultry inspectors use has experienced several recent breakdowns. Earlier this month, for instance, it shut down for two days, putting "at risk [consumers of] millions of pounds of beef, poultry, pork and lamb that had left the plants before workers could collect samples to check for E. coli bacteria and other contaminants."

What's the risk? Well, a USDA report (pdf) states that, “the Centers for Disease Control and Prevention estimate that E. coli O157:H7 causes about 73,000 cases of illness and 61 deaths annually in the U.S. The USDA's Economic Research Service estimates that the total costs associated with consuming E. coli-contaminated meat are about $488 million annually.”

The new computer system was installed in 2011 as a way to help hasten the meat and poultry inspection process. Previously, it could take days before inspected food flagged as being contaminated could be traced to the offending plant. The new system speeds up completion of the paperwork used to trace inspected meat and poultry, dramatically reducing the time needed to identify the source of any compromised food. That's under normal circumstances. The downside is that when the computer system isn’t working, which inspectors tell the Times happens frequently, meat and poultry sometimes go without being inspected at all.

Last year, computer system issues led to problems at 18 meat processing and packing plants. The Times stated that, “At one of the plants, auditors found that inspectors had not properly sampled some 50 million pounds of ground beef for E. coli over a period of five months. At another plant, which the report identified as among the 10 largest slaughterhouses in the United States, auditors found that computer failures had caused inspectors to miss sampling another 50 million pounds of beef products.”

But not to worry, the USDA says. Many of the highlighted problems have been corrected. Additionally, USDA officials claim, the problem wasn’t really with the computer system itself, but with balky wireless networks the computer system has to connect to in the rural areas where many meat processing and packing plants operate.

Does that mean that the USDA doesn't include the communication system as part of an overall system test before fielding such a system? Regardless of the answer, questions abound, mainly because USDA field inspectors told the Times that even where wireless connections are first-rate, the computer system still keeps crashing.

Until the reliability of the USDA’s computer system improves, you may want to make sure your meat and poultry are thoroughly cooked.

U.K. Post Computer System Leads to False Theft Accusations

The Daily Mail published a story last week about the four-year fight between Tom Brown, of South Stanley, County Durham, and the U.K. Post Office over charges that Brown, a sub-postmaster, fiddled £85,426 from its accounts. Last week, the Mail reported, the Post Office decided to drop its two civil court charges of false accounting against Brown (the police decided two years ago not to pursue the case), and a judge has recorded not-guilty verdicts.

Brown is one of more than 100 people across the U.K. whom the Post Office has accused of theft since the introduction, over a decade ago, of its £1 billion Horizon computer system, used to record transactions across its 14 000-branch network. However, Brown and the others claimed that the apparent disappearance of the money they were accused of stealing was caused by computer problems with the Horizon system, which created false shortfalls in the sub-postmasters' financial accounts.

The U.K. paper Computer Weekly has been diligently following this story for years, noting that sub-postmasters were complaining about computer problems as far back as 2003. However, the U.K. Post Office steadfastly refused to believe that there was anything wrong with the Horizon system. It was convinced that those whom it had accused of stealing were merely using “computer glitches” as an excuse to hide their theft.

In fact, the Post Office was even able to convince some U.K. judges that the Horizon system was extremely reliable and didn’t make mistakes. As a result, some sub-postmasters were sent to jail and many lost their homes or went bankrupt in order to pay back the alleged shortfalls in their accounts.

However, as more and more sub-postmasters were accused, the Post Office finally succumbed to pressure this past year to conduct an investigation into the Horizon system. In July, as word filtered out that the system's reliability wasn't as phenomenal as claimed, the Post Office admitted that the investigation had found defects in the system that caused accounting shortfalls at 76 branches. In light of those shortfalls, the Post Office stated that more investigation into the system's operations would be required, BBC News reported.

The Post Office also stated it would be looking into how to "take better account" of sub-postmaster complaints "going forward," but did not directly address those lodged by the dozens who say they were falsely accused. Perhaps that is because those sub-postmasters are looking to bring legal action against the Post Office.

Like all good bureaucracies, the Post Office also proposed to set up a working group to investigate further the problems so far uncovered.

Wall Street Journal Doesn’t Let Rival’s Crisis Go to Waste

Finally, last Wednesday morning, the New York Times website and mobile app suffered a two-hour outage, although new articles didn’t appear until about four hours after the site and the app first became unavailable. Speculation ran rampant that the Times was the victim of a cyberattack, but the Times said the incident was likely the result of a “scheduled maintenance update being pushed out.”  

However, soon after the outage began at 11:10 a.m., the start of the peak traffic time for the paper, the Wall Street Journal decided to try to capitalize on its rival's misfortune by lowering its own pay wall for two hours. The Journal later said it lowered its pay wall not because of the New York Times outage, but because of the violent protests then happening in Egypt.


Google also suffered an outage last week, although only for a few minutes. Even so, reports were that global Internet traffic dropped by some 40 percent while it lasted.

Microsoft, which has been taking very public potshots at Google, remained mum on the Google outage. Was it, perhaps, because Microsoft seems to be having significant problems of its own with Outlook.com over the past week?

Of Other Interest…

Moose Detector System Out For Extended Periods

Software Issue Delays Work for Months on Guernsey Airport’s New £3.5 million Radar

Everbright Securities Fat Finger Trading Error Roils Shanghai Stock Market

Computer Problem Forces Arizona Lottery to Issue New Pick 3 Tickets

BT Sport Issue Angers Premier League Fans


Photo: Remy Gabalda/AFP/Getty Images

This Week in Cybercrime: Computer Glitch Opens Prison Doors?

Florida prison officials are trying to figure out whether a computer glitch may be behind two recent, as yet unexplained incidents where all of the doors at a facility’s maximum-security wing opened simultaneously. In the latest occurrence, on 13 June, guards at the Turner Guilford Knight Correctional Center in Miami, Florida, had to rush to corral prisoners back into their cells after a “group release” button in the computerized system was triggered. The entire facility, including locks on cell doors, surveillance cameras, water and electricity, and other systems, has been automated. Anyone who gains full access to the network—whether from a touch-screen monitor in the guard tower, or from outside, via a security hole—can control any of these functions. Guards say they don’t know how it happened, and recently released surveillance footage does not pinpoint the source of the errant command.


Risk Factor

IEEE Spectrum's risk analysis blog, featuring daily news, updates and analysis on computing and IT projects, software and systems failures, successes and innovations, security threats, and more.

Robert Charette
Spotsylvania, Va.
Willie D. Jones
New York City