Risk Factor

This Week in Cybercrime: Companies to FTC: Your Data Security Reach Exceeds Your Grasp

The U.S. Federal Trade Commission is wrong to claim broad authority to seek sanctions against companies for data breaches when it has no clearly defined data security standards, said panelists at a forum sponsored by Tech Freedom, a Washington, D.C., think tank that regularly rails against government regulation.

The event, held on Thursday, centered on the fact that in the last decade the FTC has settled nearly four dozen cases after filing complaints based on its reasoning that a failure to maintain sufficient data security constitutes an unfair or deceptive trade practice. Two pending court cases, says a Tech Freedom statement, "may finally allow the courts to rule on the legal validity of what the FTC calls its 'common law of settlements.'"

IT Hiccups of the Week: A Bad Week for U.S. State Government IT

It’s been another relatively normal week in the land of IT inconveniences, except perhaps for government computing systems, which is where we will focus this week. We start off with problems in several U.S. states whose recently introduced IT systems are proving balkier than hoped.

Nevada, Massachusetts, and North Carolina Each Have Buggy New IT Systems

On 26 August, Nevada’s Department of Employment, Training and Rehabilitation (DETR) took down its 30-year-old unemployment insurance system and began the rollout of its new $45 million UInv system. In a press release issued that day, DETR announced (pdf) that the new system would be operational on 1 September; 51 000 or so Nevadans looking to file unemployment claims would have to wait until then.

However, the DETR enthused that after four years of development the new system would be worth the wait since it would, “allow claimants to view up-to-date information related to their individual claims. It will also give claimants access to their payment history and allow them instant, real time feedback on their unemployment claim.”

Unfortunately, the UInv system wasn’t ready for prime time until 4 September. The DETR "explained" (pdf) the delay in a subsequent press release, citing a number of undisclosed “minimal issues.” The DETR release went on to say that the department was “being very conservative” with the launch of the new system, and it asked for “patience as we gradually ramp up the new system to full deployment over the coming days.”

The Nevadans affected by the delay were clearly not amused. The DETR had promised that when the new system was up and running, “claims and benefit services will continue as normal.” But by the 4th, it was backpedaling on that promise. The ballyhooed new online system wasn’t working and wouldn’t be for another three days. And according to the Las Vegas Sun, Nevadans couldn’t reach anyone at the DETR to get help with their claims. The reason was a study in irony: because online access was not yet operational, the DETR encouraged claimants to call in, and the phone lines were promptly overwhelmed.

A DETR spokesperson unhelpfully advised claimants to “keep calling, relax and we will get to you.” After hours upon hours of waiting on hold, many Nevadans gave up. Governor Brian Sandoval is said to be aware of the problem, but in reality there is little he can do.

DETR says no one will miss out on unemployment checks, since claims will be backdated, but it also admitted that it may still take a while for the payments to be made. Hope none of those unemployed Nevadans have any pressing bills to pay. Future upgrades to the UInv system are scheduled for later this year and early next; everyone no doubt hopes they will go more smoothly.

Massachusetts residents who receive unemployment insurance are also unhappy with that state’s new unemployment benefits computer system that was launched in July. The $46 million system ($6 million over budget) has been plagued by problems and is “unable to make proper payments to hundreds of financially strapped workers hunting for jobs,” according to a Boston Globe story.

The system's contractor, Deloitte Consulting, has until the end of the month to fix the system without penalty, the Globe reports, but the newspaper also states that, “It's unclear what remedies are available to the state if the system is still not working properly after that.”

Deloitte says it has hired extra workers to help clear the backlog of unemployment claims, as well as to deal with the notices the new system sent to unemployed workers demanding repayment of money they never received.

Former secretary of Labor and Workforce Development Suzanne Bump, who is now the state auditor, listed the upgrading of the new system as one of her accomplishments during her tenure at the labor department, but the Globe states that she is now trying to disassociate herself from the project as quickly as possible. A big surprise, eh?

Michelle Amante, the state official now in charge of the project, used to work for Deloitte on the project. She claims that despite all of the problems, “we fundamentally believe that the system is working.” Another big surprise.

Finally, joining (or remaining in) the ranks of unhappy constituents this week are North Carolina residents and businesses. The state recently rolled out two new systems, NCFast and NCTracks. NCFast (North Carolina Families Accessing Services through Technology), which was “soft-launched” in mid-summer (the system will not be finished until 2017), is the new N.C. Department of Health and Human Services computer system that is supposed to streamline the work activities and business processes of the department and county social services agencies, so that more time can be spent helping those requiring public assistance and less on bureaucratic tasks.

However, there have been ongoing issues with the $48 million system that have caused many families on food-assistance to go without their benefits. The state is blaming the counties for the problems, while the counties are blaming the state.

The same department has another headache in the form of its new NCTracks system. (I have no idea what that is an acronym for, if in fact it is one.) On 1 July, the department launched its controversial $484 million system in the wake of a state audit (pdf) released in May that cast doubt on whether the system—which was $200 million over budget and two years late—was ready to go live. The audit cited, among other things, the lack of testing of key system elements.  

The Department of Health and Human Services insisted on 1 July that there was nothing major to worry about, regardless of what the audit reported. It conceded that there might be an “initial rough patch of 30 to 90 days as providers get used to using the new system,” but promised smooth sailing after that. Well, it has been a very rough patch indeed for many providers, who are, after 70 days and counting, still very unhappy with the system. The department has even had to mail emergency paper checks to over a thousand providers who couldn’t get their claims accepted by the new system and were facing financial hardship. A Triangle Business Journal story from last week reported that NCTracks “has missed its own targets nearly across the board, some by significant amounts.”

With the ongoing problems at both NCFast and NCTracks, North Carolina lawmakers are now going to get involved. Exactly how they intend to improve the situation is a bit of a mystery.

Australian Very Happy over Credit Card Glitch

According to a story from Australia’s Nine News, Carlo Spina wanted to buy a magazine at a BP service station in Sydney. However, Spina discovered that he did not have enough cash on him, which meant he had to pay by credit card. When he tried to do so, Spina had trouble getting the card to work. The few seconds spent fussing with the card reader probably saved Spina’s life: an out-of-control SUV crashed into the service station, right through the path he would have taken to leave.

You can watch the very close call via a YouTube video here. Spina wasn’t hurt, but he was shaken up. I don’t know if he ever did read his magazine.

Volvo Recalls 2014 Models

Finally, Volvo announced that it was recalling 8000 of its 2014 model year cars in the United States and Canada because of “a software glitch that could drain the battery and cause headlights, windshield wipers and turn signals to malfunction.” The models affected include S60 and S80 sedans and XC60 and XC70 crossovers.

Of Other Interest…

Tesco Pricing Error Sells White Chocolate Oranges for 1p

DC Metro Computer Problems Resolved

New Zealand Visa Bungle Blamed on Overloaded Computers

New School Computer Software Causing Confusion in Seattle

London Black Cab Production Restarts

NASA’s LADEE Glitch Fixed

This Week in Cybercrime: Middle East’s Upheaval Breeds Hacktivists

Unrest in the streets of Egypt and Syria has led to thousands of deaths and a lack of personal security in many public spaces. But the civil wars raging there are turning out to be the backdrop for diminished security across online networks. McAfee, the cybersecurity firm best known for its online antivirus solution, told Reuters this week that more than half of the cybercrime activity now occurring in the Middle East can be characterized as “hacktivism” by politically motivated programmers looking to sabotage opposition institutions or groups.

“It’s difficult for people to protest in the street in the Middle East and so defacing websites and [carrying out] denial of service (DOS) attacks are a way to protest instead,” Christiaan Beek, director of incident response forensics for McAfee in the Europe, Middle East and Africa (EMEA) region, told Reuters.

The targets have overwhelmingly been entities linked to the region’s economic underpinnings, which in most cases means crude oil. Cyber attacks in the region are reportedly focused on Saudi Arabia, the world’s leading oil exporter; Qatar, the top supplier of liquefied natural gas; and Dubai, the region’s aviation, commercial and financial hub.

Gert-Jan Schenk, McAfee president for the EMEA region, told Reuters, “Ten years ago, it was all about trying to infect as many people as possible. Today we see more and more attacks being focused on very small groups of people. Sometimes malware is developed for a specific department in a specific company.”

Are STEM Workers Overpaid?

One of the strongest reasons given by those trying to entice more students into the STEM education pipeline is the “earnings premium” STEM workers enjoy over non-STEM workers. Typical is this statement from a 2011 U.S. Department of Commerce press release: “STEM workers command higher wages, earning 26 percent more than their non-STEM counterparts.”

Further, the Commerce Department press release quotes U.S. Secretary of Education Arne Duncan’s plea to prospective STEM students that, “A STEM education is a pathway to prosperity – not just for you as an individual but for America as a whole. We need you in our classrooms, labs and key government agencies to help solve our biggest challenges.”

However, not everyone is as happy as Duncan about STEM workers earning a premium for solving those big challenges; some believe the U.S. would be even more competitive, and a more equitable society, if the earnings premium disappeared. For instance, former Federal Reserve Chairman Alan Greenspan, speaking at a U.S. Treasury conference on U.S. Capital Markets Competitiveness, put it bluntly: “Our skilled wages are higher than anywhere in the world. If we open up a significant window for skilled [guest] workers, that would suppress the skilled-wage level and end the concentration of income.”

Greenspan, to ensure everyone got the point, added, “Significantly opening up immigration to skilled workers solves two problems. The companies could hire the educated workers they need. And those workers would compete with high-income people, driving more income equality.”

[For a detailed examination of how effective this policy could be, I strongly suggest you read Eric Weinstein’s National Bureau of Economic Research draft working paper, “How and Why Government, Universities, and Industry Create Domestic Labor Shortages of Scientists and High-Tech Workers,” on the active suppression of STEM Ph.D. salaries by way of false National Science Foundation claims of a STEM shortage, coupled with aggressive lobbying efforts to change STEM guestworker policies in the late 1980s and early 1990s. While the NSF eventually apologized for its misrepresentations to Congress in 1992 and admitted that there was in fact a surplus of STEM workers, the damage was already done, with the fallout continuing to this day.]

Greenspan is not alone in thinking that STEM worker salaries should look a lot more like non-STEM worker salaries. In March, over “100 executives from the technology sector and leading innovation advocacy organizations” sent an open letter to President Obama urging him and Congress to approve expanding the H-1B visa program beyond today’s 85 000-visa limit (including 20 000 reserved for foreign graduates with advanced degrees from U.S. universities) to a minimum of 115 000 per year, and possibly as high as 300 000 within a decade (not to mention granting permanent legal status to an unlimited number of foreign students who earn graduate degrees from U.S. universities in STEM subjects).

The executives, whose companies have spent millions of dollars lobbying on the issue, collectively wrote in their letter that, “One of the biggest economic challenges facing our nation is the need for more qualified, highly‐skilled professionals, domestic and foreign, who can create jobs and immediately contribute to and improve our economy.”

Four company executives, from IBM, Intel, Microsoft, and Oracle, claimed in the letter that they had 10 000 openings that apparently could be filled only by guestworkers. That was, of course, before IBM announced its layoffs and before Intel acknowledged that it would slow its hiring and cut production at some of its U.S. plants, which might lead to workers being let go. It will be interesting to see whether Microsoft CEO Steve Ballmer’s successor decides that the company payroll needs a bit of trimming, as Lou Gerstner did at IBM. One other company on the letter’s list is Cisco, which has also announced layoffs since the March letter. Maybe the companies can just trade workers.

While the tech company executives insist that their motive is not to cut payroll costs by increasing the supply of guestworker labor, few STEM workers believe their claims that there exists a “technology skills gap” that only guestworkers can fill, any more than they believe there is a U.S. manufacturing skills gap. Various surveys, for example, have claimed a skilled-manufacturing workforce shortage of between 300 000 and 600 000 workers.

However, others who have dug into the veracity of those claims, like the Boston Consulting Group, point out that the manufacturing skills gap is in reality closer to between 80 000 and 100 000 workers, and that it would be even smaller if employers were to raise pay or hire less-skilled workers and train them, both of which employers seem highly reluctant to do. As the BCG study states, “Trying to hire high-skilled workers at rock-bottom rates is not a skills gap.”

Likewise, tech companies are definitely not interested in paying higher wages, something they were unhappily forced to do during the dot-com boom period (and which some tried to avoid afterwards as well by tacitly agreeing not to poach employees from one another until they got caught). The fact that thousands of these jobs stay unfilled for long periods also seems an indication that they are not nearly as critical or valuable as the companies make them out to be. Netflix, for example, pays its 700 software engineers 10 to 20 percent above its competitors in Silicon Valley, allowing it to “hire just about any engineer it wants.” As Netflix clearly demonstrates, an expanding U.S. company in a competitive marketplace that needs technology workers can get the U.S. technology workers it wants, if it really wants them.

Anuj Srivas, the technology and business reporter of the Indian paper The Hindu, pointed out earlier this year that despite all the rhetoric, the “H-1B visa farce” as he calls it is indeed “about the profit margins of Indian and American IT companies,” something that Vivek Wadhwa, an academic and entrepreneur who advocates more H-1B visas, has acknowledged. Wadhwa candidly wrote in 2008 that, “I know from my experience as a tech CEO that H-1Bs are cheaper than domestic hires. Technically, these workers are supposed to be paid a ‘prevailing wage,’ but this mechanism is riddled with loopholes. In the tech world, salaries vary widely based on skill and competence. Yet the prevailing wage concept works on average salaries, so you can hire a superstar for the cost of an average worker. Add to this the inability of an H-1B employee to jump ship and you have a strong incentive to hire workers on these visas.”

Exactly how many “superstar” H-1B guestworkers are working in the U.S. is in some doubt. Currently, for example, the majority of H-1B guestworkers are Indian (pdf). Yet Indian companies themselves complain about the cost of making the vast majority of engineering graduates employable (pdf). Back in 2006, when India graduated some 350 000 engineers, employers there estimated that only about 10 percent, or at best 25 percent, were employable by multinational companies. In 2012, however, there were over 850 000 graduates from Indian engineering colleges. It is doubtful that the quality of engineering education in India has been maintained even to 2006 levels, given the huge increase in graduates and engineering schools.

Given tech company executives’ long-standing complaints of a skill shortage and the need for more guestworker labor (in 1983, for example, John Calhoun, the director of business development at Intel, testified to the U.S. Congress on the need for more technology-worker immigration: “The problem is absolutely one of a shortage and not one of lower-cost labor. We in the industry have been forced to hire guestworkers in order to grow.”), it is interesting to see how engineering and computer professional salaries have behaved. I say “behaved” rather than “risen” because rising wages, according to the RAND Corporation, are usually considered the best indication of a shortage.

Using data from a 1995 Northwestern University study (pdf) that provided starting salary information for engineering graduates from 1950 to 1994, normalized to 2011 U.S. dollars, you see that average starting salaries peaked around 1970 at $61 200 and then dropped slowly but surely to $52 470 in 1995. The average starting salary for the engineering class of 2011 (pdf) was $59 590, thanks in part to the dot-com demand that pushed starting salaries up about a decade ago.

A similar pattern shows up in IEEE-USA’s published salary survey data. In 1972, the median salary was $101 300 in 2011 dollars; by 1975, it had dropped to $98 460, and by 1992, to $91 823. Median salary recovered during the mid-to-late 1990s and hit a peak of $122 315 in 2002. The 2011 IEEE survey data show a median salary of $115 790.

For computer professionals, the last ten years have likewise shown little change in salary. Compiling a decade’s worth of published DICE salary data, the average salary in constant 2011 dollars was $86 823 in 2001 and only $83 858 last year. According to a CNN story from the time, a software developer’s average salary in 1990 ranged from $84 750 to $101 600 in 2011 dollars.
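
For readers who want to reproduce these comparisons, the constant-dollar arithmetic is simple: scale each nominal salary by the ratio of the target year’s price level to the source year’s. Below is a minimal Python sketch; the CPI figures are approximate CPI-U annual averages, included only for illustration, and the sample salary is hypothetical.

# Restate a nominal salary in constant (target-year) dollars.
# CPI values are approximate CPI-U annual averages, for illustration only.
CPI = {2001: 177.1, 2011: 224.9}

def to_constant_dollars(nominal, source_year, target_year, cpi=CPI):
    # Scale by the ratio of price levels between the two years.
    return nominal * cpi[target_year] / cpi[source_year]

# Example: a hypothetical $68 000 salary earned in 2001, in 2011 dollars.
print(round(to_constant_dollars(68000, 2001, 2011)))  # about 86 350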

It is hard to see in the salary data any huge spike in engineering salaries over the last thirty-plus years, the same period during which tech executives have claimed to face a tech skill shortage, and during which numerous tech companies apparently succeeded and made pretty decent profits. They may have had somewhat of a case to whine about in the mid-to-late 1990s in regard to engineering jobs, but the RAND study found little evidence of a shortage even then. The salary data definitely don’t indicate that a STEM shortage is occurring now.

As an EPI analysis in April, which found no STEM worker shortage in the U.S., noted, “policies that expand the supply of guestworkers will discourage U.S. students from going into STEM, and into IT in particular.”

If that happens, and if foreign STEM workers then decide to stay home because the wages they can earn in the U.S. approach those of non-STEM workers, U.S. high-tech executives and the government will have no one to blame but themselves.

What Ever Happened to STEM Job Security?

Figuring out how to draw more students into the “STEM education pipeline” has been a major concern of those arguing that there exists an acute shortage of STEM workers, be it in the U.S., the U.K., Brazil, Australia, or almost any country you choose. Typically, the arguments made to encourage students to enter the STEM pipeline center on how interesting STEM careers are and, especially, how much more money they pay than non-STEM careers.

However, others point out that many students aren’t interested in STEM careers because they see that the academic work needed at both the high school and university levels to pursue a STEM degree is just too hard compared with non-STEM degrees. Until this changes (for example, by increasing the readiness of prospective STEM students by “redshirting” them), the argument goes, don’t expect a full STEM pipeline anytime soon.

Another little-discussed factor, one I personally witnessed, is the changing social compact between STEM workers and employers over the past several decades and its impact on convincing today’s students to pursue a STEM career. When my father, an electro-optical engineer, was laid off from his company late in the recession of 1957-1958, he assumed the company would rehire him a few months later when the economy got better. His wasn’t an unreasonable assumption, since that was the general practice in the 1950s. When he wasn’t soon rehired, and with a new house mortgage to pay and three children under age 5 to feed, my father left his temporary job of selling Electrolux vacuums door-to-door and found another electro-optical engineering job. He stayed with that company for another 25 years, retiring with the usual gold desk pen set, which now sits on my desk.

An Engineering Career: Only a Young Person’s Game?

If you are an engineer (or a computer professional, for that matter), becoming technologically obsolete is an ever-growing risk. To be an engineer is to accept that at some future time, always sooner than one expects, most of the technical knowledge you once worked hard to master will be obsolete.

Everyone seems to agree that an engineer’s “half-life of knowledge,” an expression coined in 1962 by economist Fritz Machlup to describe the time it takes for half the knowledge in a particular domain to be superseded, has been steadily dropping. For instance, a 1966 story in IEEE Spectrum titled “Technical Obsolescence” postulated that the half-life of an engineering degree in the late 1920s was about 35 years; for a degree from 1960, it was thought to be about a decade.
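
Machlup’s metaphor borrows from radioactive decay: if T is the half-life, the fraction of one’s technical knowledge still current after t years is

R(t) = (1/2)^{t/T}

So with the 10-year half-life attributed to a 1960 degree, only (1/2)^{20/10} = 25 percent of that knowledge would still have been current by 1980, whereas an engineer of the late 1920s, with T of roughly 35 years, could work most of a career before losing half.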

Thomas Jones, then an IEEE Fellow and president of the University of South Carolina, wrote a 1966 paper for the IEEE Transactions on Aerospace and Electronic Systems titled “The Dollars and Cents of Continuing Education,” in which he agreed with the 10-year half-life estimate. Jones went on to roughly calculate the effort it would take for a working engineer to remain current in his or her field.

IT Hiccups of the Week: Sutter Health’s $1 Billion EHR System Crashes

After a torrid couple of months, last week saw a slowdown in the number of reported IT errors, miscalculations, and problems. We start off this week’s edition of IT Hiccups with the crash of a healthcare provider’s electronic health record system.

Sutter Health’s Billion Dollar EHR System Goes Dark

Last Monday, at about 0800 PDT, the nearly US $1 billion Epic electronic health record (EHR) system used by Sutter Health of Northern California crashed. As a result, the Sacramento Business Journal reported, healthcare providers at seven major medical facilities, including Alta Bates Summit Medical Center facilities in Berkeley and Oakland, Eden Medical Center in Castro Valley, Mills-Peninsula Health Services in Burlingame and San Mateo, Sutter Delta in Antioch, Sutter Tracy, and Sutter Modesto, as well as affiliated doctors’ offices and clinics, were unable to access patient medications or histories.

A software patch was applied Monday night, and EHR access was restored. Doctors and nurses no doubt spent most of Tuesday entering all the handwritten patient notes they had scribbled on Monday.

It is still unclear whether the crash was related to a planned system upgrade performed the previous Friday evening, but if I were betting, I would lay some coin on that likelihood.

Nurses working at Sutter Alta Bates Summit Hospital have been complaining for months about problems with the EHR system, which was rolled out at the facility in April. Nurses at Sutter Delta Medical Center have also complained that hospital management there has threatened to discipline nurses for not using the EHR system; that facility’s system went live at about the same time as Alta Bates Summit’s, but only for billing of chargeable items. Sutter management said that it was unaware of any of the issues the nurses were complaining about, and that any complaints they might have lodged were the result of an ongoing management-labor dispute.

Sutter is now about midway through its EHR roll-out, an effort it started in 2004 with a planned cost of $1.2 billion and a completion date of 2013. It later backed off that aggressive schedule, then “jump started” its EHR efforts once more in 2007. Sutter now plans to complete the roll-out across all 15 of its hospitals by 2015, at a cost approaching $1.5 billion.

Hospital management said in the aftermath of the incident, “We regret any inconvenience this may have caused patients.” It did not express regret to its nurses, however.

Computer Issue Scraps Japanese Rocket Launch

Last Tuesday, the launch of Japan’s new Epsilon rocket was scrubbed with 19 seconds to go because a computer aboard the rocket “detected a faulty sensor reading.” The Japan Aerospace Exploration Agency (JAXA) had spent US $200 million developing the rocket, which is designed to be controllable from conventional desktop computers instead of massive control centers, a convenience made possible by the rocket’s extensive use of artificial intelligence to perform its own status checks.

The Japan Times reported on Thursday that the problem was traced to a “computer glitch at the ground control center in which an error was mistakenly identified in the rocket’s positioning.”

The Times stated that, “According to JAXA project manager Yasuhiro Morita, the fourth-stage engine in the upper part of the Epsilon that is used to put a satellite in orbit, is equipped with a sensor that detects positioning errors. The rocket’s computer system starts calculating the rocket’s position based on data collected by the sensor 20 seconds before a launch. The results are then sent to a computer system at the ground control center, which judges whether the rocket is positioned correctly. On Tuesday, the calculation started 20 seconds before the launch, as scheduled, but the ground control computer determined the rocket was incorrectly positioned one second later based on data sent from the rocket’s computer.”
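
To make that sequence concrete, here is a minimal sketch of the abort logic as the Times describes it. Every name, value, and tolerance below is hypothetical; the actual flight software is of course far more elaborate.

# Hypothetical sketch of the pre-launch position check described above:
# starting at T-20 s, the onboard computer estimates the rocket's position
# from sensor data, and ground control judges each result.
POSITION_TOLERANCE = 0.05  # hypothetical allowable error

def onboard_position_estimate(sensor_reading):
    # Stand-in for the rocket's onboard position calculation.
    return sensor_reading

def ground_judges_ok(estimate, expected):
    # Stand-in for the ground control computer's judgment.
    return abs(estimate - expected) <= POSITION_TOLERANCE

def countdown_check(telemetry, expected=0.0):
    """telemetry: (seconds-to-launch, sensor reading) pairs from T-20 s on."""
    for seconds_to_launch, reading in telemetry:
        estimate = onboard_position_estimate(reading)
        if not ground_judges_ok(estimate, expected):
            return f"ABORT at T-{seconds_to_launch} s"
    return "GO"

# A spurious reading one second into the checks halts the count,
# as happened to the Epsilon at T-19 s.
print(countdown_check([(20, 0.01), (19, 0.30), (18, 0.02)]))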

The root cause of the problem is still unknown, although a transmission issue is suspected. JAXA says that it will be examining “the relevant computer hardware and software in detail.” The Times reported on Wednesday that speculation centered on a “computer programming error and lax preliminary checks.”

JAXA President Naoki Okumura apologized for the launch failure, which he said brought “disappointment to the nation and organizations involved.” A new launch date has yet to be announced.

Nasdaq Blames Software Bug For Outage

Two weeks ago, Nasdaq suffered what it called at the time a “mysterious trading glitch” that shut down trading for three hours. After pointing fingers at rival exchange NYSE Arca, Nasdaq admitted last week that perhaps it wasn’t all Arca’s fault after all.

A Reuters News story quoted Bob Greifeld, Nasdaq's chief executive, as saying Nasdaq’s backup system didn’t work because, “There was a bug in the system, it didn't fail over properly, and we need to work hard to make sure it doesn't happen again.”

However, Greifeld didn’t fully let Arca off the hook. A story at the Financial Times said that in testing, Nasdaq’s Securities Information Processor (SIP), the system that receives all traffic on quotes and orders for stocks on the exchange, “was capable of handling around 500,000 messages per second containing trades and quotes. However, in practice, Nasdaq said repeated attempts to connect to the SIP by NYSE Arca, a rival electronic trading platform, and streams of erroneous quotes from its rival eroded the system’s capacity in a manner similar to a distributed denial of service attack. Whereas the SIP had a capacity of 10,000 messages per data port, per second, it was overwhelmed by up to more than 26,000 messages per port, per second.”
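
A toy monitor built on the FT’s figures makes the failure mode concrete. This is purely an illustrative sketch, not Nasdaq’s actual SIP logic, and the port names are invented.

# Toy per-port load check using the capacity figure quoted by the FT.
PORT_CAPACITY = 10000  # messages per port, per second

def check_port_load(observed_rates):
    # Flag any port whose message rate exceeds rated capacity.
    for port, rate in observed_rates.items():
        if rate > PORT_CAPACITY:
            print(f"{port}: {rate} msg/s is {rate / PORT_CAPACITY:.1f}x capacity")

# Reconnection attempts plus streams of erroneous quotes reportedly pushed
# some ports to more than 26 000 messages per second.
check_port_load({"arca-port-1": 26000, "arca-port-2": 9500})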

Nasdaq said that it was now looking at design changes to make the SIP more resilient.

A detailed report on the cause of the failure is expected in about two weeks.

Of Other Interest…

Computer Error Causes False Weather Alert and Cancelled Classes at Slippery Rock University

UK’s HSBC Bank Suffers IT Glitch

NC Fast Computer System Can’t Shake Processing Problems

North Carolina DMV Computer System Now Back to Normal

Australia’s Telstra Faces Large Compensation Bill for Internet Problems

Data Glitch Hits CBOE Futures Exchange

China Fines Everbright Securities US $85 million Over Trading Error

Is There a U.S. IT Worker Shortage?

A data scientist today is said by the Harvard Business Review to have the sexiest job of the 21st century. And if sexy isn’t enough, how about being a savior of the economy? According to a 2011 report by consulting company McKinsey & Company, “Big Data” is “the next frontier for innovation, competition and productivity.” That is, of course, if enough of those sexy data scientists can be found.

That’s because, also according to McKinsey’s report, by 2018 “the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.”

However, Peter Sondergaard, senior vice president at Gartner and global head of research, asserts that the shortage is even more frightening than McKinsey implies. Sondergaard stated in October 2012 that, “By 2015, 4.4 million IT jobs globally will be created to support Big Data, generating 1.9 million IT jobs in the United States. In addition, every big data‐related role in the U.S. will create employment for three people outside of IT, so over the next four years a total of 6 million jobs in the U.S. will be generated by the information economy.”

Wow. Not only will Big Data make a significant dent in the U.S. unemployment rate, but the U.S. IT technical workforce of 3.9 million or so needs to increase by almost 50 percent within the next two years.
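
That “almost 50 percent” follows directly from Sondergaard’s own numbers:

1.9 million new IT jobs / 3.9 million current IT workers ≈ 0.49, or about 49 percent.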

But wait, there’s more.

Chinese Internet Rocked by Cyberattack

China’s Internet infrastructure was temporarily rocked by a distributed denial of service attack that began at about 2 a.m. local time on Sunday and lasted roughly four hours. The incident, first reported by the China Internet Network Information Center (CNNIC), a government-linked agency, is being called the “largest ever” cyberattack targeting websites under the country’s .cn domain. Though details about the number of affected users have been hard to come by, CNNIC apologized to users for the outage, saying that “the resolution of some websites was affected, leading visits to become slow or interrupted.” The best explanation offered so far is that the attacks crippled the database that converts a website’s domain name into the numeric IP address that servers and other computers use to reach it. The entire .cn network wasn’t felled because some Internet service providers keep their own cached copies of those records.
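
The “database” in question is the Domain Name System. Under normal conditions a lookup is nearly a one-liner, which is what makes knocking out the resolution infrastructure so disruptive. The sketch below uses Python’s standard library, with a placeholder hostname rather than any of the affected sites.

import socket

# Resolve a hostname to its IP addresses via the local resolver.
# "example.cn" is a placeholder, not one of the affected sites.
def resolve(hostname):
    try:
        infos = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror as err:
        # This is roughly what users saw during the attack: resolution fails
        # unless an intermediate resolver still holds a cached answer.
        return f"resolution failed: {err}"

print(resolve("example.cn"))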

A Wall Street Journal report notes that the attack made a serious dent in Chinese Web traffic. Matthew Prince, CEO of Internet security firm CloudFlare, told the WSJ that his company observed a 32 percent drop in traffic on Chinese domains. But Prince was quick to note that although the attack affected a large swath of the country, the entity behind it was probably not another country. “I don’t know how big the ‘pipes’ of .cn are,” Prince told the Wall Street Journal, “but it is not necessarily correct to infer that the attacker in this case had a significant amount of technical sophistication or resources. It may well have been a single individual.”

That reasoning stands in stark contrast to the standard China-blaming reaction to attacks on U.S. and Western European Internet resources or the theft of information stored on computers in those regions. In the immediate aftermath of the incident, there was an air of schadenfreude among some observers. Bill Brenner of cloud-service provider Akamai told the Wall Street Journal that “the event was particularly ironic considering that China is responsible for the majority of the world’s online ‘attack traffic.’” Brenner pointed to Akamai’s 2013 “State of the Internet” report, which noted that 34 percent of global attacks originated from China, with the U.S. coming in third with 8.3 percent.

For its part, the CNNIC, rather than pointing fingers, said it will be working with the Chinese Ministry of Industry and Information Technology to shore up the nation’s Internet “service capabilities.”

IT Hiccups of the Week: Stock Exchange “Gremlins” Attack

We were blessed with another impressive week of IT-related burps, belches, and eructations. This time, it was stock market officials reaching for the antacid.

Nasdaq Suffers Three-Hour Trading “Glitch”

Well, opinions vary about whether it was or wasn’t a big deal. Last Thursday, the Nasdaq suffered what the AP called a “mysterious trading glitch” that suspended trading on the exchange from 12:15 p.m. to 3:25 p.m. EDT. After trading resumed, the market closed up 39 points.

The trading suspension was the longest in Nasdaq history and was a major embarrassment for the exchange, which is still trying to recover from its Facebook IPO screw-up. The exchange blamed the problem on a “connectivity issue” involving its Securities Information Processor, which Reuters describes as being “the system that receives all traffic on quotes and orders for stocks on the exchange.” When the SIP doesn’t work, stock quotations cannot be disseminated.

Nasdaq Chief Executive Robert Greifeld has refused to discuss the cause of the problem in public. Greifeld did darkly hint, however, that the problems were someone else’s fault. The Guardian quoted him as saying, “I think where we have to get better is what I call defensive driving. Defensive driving means what do you do when another part of the ecosystem, another player, has some bad event that triggers something in your system?”

He then went on to say, “We spend a lot of time and effort where other things happen outside our control and how we respond to it.”

Greifeld’s statement immediately triggered further speculation that the other “player” was rival exchange NYSE Arca, which was known to have had connectivity issues with Nasdaq. Greifeld refused, however, to elaborate on his statements.

Today’s Wall Street Journal published a lengthy story that has shed more light on what happened—although why it happened is still being investigated. According to the WSJ, between 10:53 a.m. and 10:55 a.m. EDT Thursday, “NYSE Arca officials tried and failed to establish a connection with Nasdaq about 30 times, according to people familiar with the events of that day. Nasdaq, for its part, was having its own problems regarding its connectivity to Arca, the people said.”

The WSJ goes on to say that: “What remained unclear Sunday was how that connectivity problem—which Nasdaq officials privately have called ‘unprecedented’—could have had so catastrophic an effect on Nasdaq's systems that the exchange decided to stop trading in all Nasdaq-listed shares, causing ripples of shutdowns across the market and spreading confusion.”

The Journal said NYSE Arca went to a backup system, and after several failed attempts, finally re-established connection with Nasdaq at 11:17 a.m. However, once the two exchanges were reconnected, “Nasdaq's computers began to suffer from a capacity overload created by the multiple efforts to connect the two exchanges.”

As a result, other markets also started to report problems in receiving and sending quotes from Nasdaq. Officials at Nasdaq decided they had better pull the plug in order to figure out how to get back to a normal operating state, which they did at 12:15 p.m.

Many traders viewed the episode as a non-event, while other interested observers, like U.S. Securities and Exchange Commission Chairman Mary Jo White, were more concerned. Given the complexity of the systems involved, no one should be surprised to see more hiccups in the future. In Charles Perrow’s terminology, they are now just “normal accidents.”

The folks at Goldman Sachs and Everbright were probably very happy about the distraction created by Nasdaq’s difficulties. Last Tuesday, Goldman “accidentally sent thousands of orders for options contracts to exchanges operated by NYSE Euronext, Nasdaq OMX and the CBOE, after a systems upgrade that went awry. The faulty orders roiled options markets in the opening 17 minutes of the day’s trading and sparked reviews of the transactions,” the Financial Times reported.

Bloomberg News reported that Goldman had placed four senior technology specialists on administrative leave because of the programming error, though Goldman declined to discuss why. That was probably a smarter move than when Knight Capital Group CEO Thomas Joyce blamed “knuckleheads” in IT after a similar problem a year ago this month resulted in a loss of US $440 million in about 45 minutes. Goldman’s losses were expected to be less than US $100 million.

Knight Capital was sold last December to Getco Holdings Co.

Everbright Securities, the state-controlled Chinese brokerage, is also likely happy at the timing of Nasdaq’s and Goldman’s problems. On 16 August, a trading error traced to the brokerage significantly disrupted the Shanghai market. The error, which cost the brokerage US $31.7 million and the brokerage’s president his job, was blamed originally on a “fat finger” trade. However, a “computer system malfunction” was the real cause, the Financial Times reported. Needless to say, the China Securities Regulatory Commission is investigating Everbright and says “severe punishments” might be in order.

Finally, a real fat-finger incident hit the Tel Aviv Stock Exchange (TASE) yesterday. The Jerusalem Post reported that a trader at a TASE member firm, intending to carry out a major transaction in a different company’s stock, accidentally typed in Israel Corporation, the third-largest company traded on the exchange. The resulting price disparity sent the company’s stock value into a nose-dive, from an opening value of NIS 1690 down to NIS 2.10, a 99.9 percent loss. The trader quickly realized his typo and requested that the transaction be canceled, but by then the error had already triggered a halt in the exchange’s trading.
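
Exchanges typically guard against exactly this kind of error with a pre-trade price collar that rejects orders straying too far from a reference price. Here is a minimal sketch of such a check; the 20 percent band is a hypothetical threshold, not TASE’s actual rule.

# Hypothetical pre-trade price-collar check; the threshold is illustrative.
MAX_DEVIATION = 0.20  # reject orders more than 20% from the reference price

def order_within_collar(order_price, reference_price):
    return abs(order_price - reference_price) / reference_price <= MAX_DEVIATION

# The erroneous order: NIS 2.10 against an opening price of NIS 1690,
# a 99.9 percent deviation that a collar like this would have stopped.
print(order_within_collar(2.10, 1690.0))  # False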

Rough First Half Day for Helsinki’s Automated Metro Trains

Last week, Helsinki’s Metro tried out its three driverless Siemens-built trains for the first time. However, in a bit of irony, after a few hours problems developed with the ventilation system in the trains’ driver cabins, and the trains had to be taken out of service. (Drivers were aboard the automated trains for safety reasons.) The Metro didn’t indicate whether the trains would have been pulled from service, or the problem even detected, had they been running in full automatic mode without drivers.

Indian Overseas Bank Back to Normal

The Hindu Times reported on Friday that the Indian Overseas Bank announced that the problem with the bank’s central server had finally been fixed. For three days, hundreds of thousands of bank customers were unable to deposit checks or use the bank’s ATM network. A story at Business Standard said that the problem was related to annual maintenance of the core banking system, which instead ended up creating what the bank called a “complex technological malfunction.”

Of Other Interest…

Chrysler Delaying Launch of 2014 Cherokee Jeep Due to Transmission Software Issues

Network Issues Stop Marines from Using Unclassified Network

Tesco Pricing Glitch Lowers Price of Ice Cream by 88 Percent

Xerox Releases Scanner “Error” Software Patch

Risk Factor

IEEE Spectrum's risk analysis blog, featuring daily news, updates and analysis on computing and IT projects, software and systems failures, successes and innovations, security threats, and more.

Contributor
Willie D. Jones
 