Risk Factor

Chinese Internet Rocked by Cyberattack

China’s Internet infrastructure was temporarily rocked by a distributed denial of service attack that began at about 2 a.m. local time on Sunday and lasted for roughly four hours. The incident, which was initially reported by the China Internet Network Information Center (CNNIC), a government-linked agency, is being called the “largest ever” cyberattack targeting websites using the country’s .cn URL extension. Though details about the number of affected users have been hard to come by, CNNIC apologized to users for the outage, saying that “the resolution of some websites was affected, leading visits to become slow or interrupted.” The best explanation offered so far is that the attack crippled the domain name system (DNS) servers that convert a website’s address into the numeric IP address that other computers use to reach it. The entire .cn network wasn’t felled because some Internet service providers keep cached copies of those DNS records.
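The mechanics are easy to picture from any desktop: a DNS lookup asks a chain of name servers to translate a domain into the numeric address computers actually use, and if nothing in that chain, including an ISP's cached copy, can answer, the site appears dead even though its own servers are fine. A minimal sketch in Python, with a placeholder domain:

import socket

domain = "example.cn"  # placeholder, not a real affected site
try:
    ip_address = socket.gethostbyname(domain)  # walks the resolver chain
    print(domain, "resolves to", ip_address)
except socket.gaierror:
    # If the authoritative .cn name servers are unreachable and no resolver
    # holds a cached answer, the lookup fails and the site appears to be down.
    print("could not resolve", domain)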

A Wall Street Journal report notes that the attack made a serious dent in Chinese Web traffic. Matthew Prince, CEO of Internet security firm CloudFlare, told the WSJ that his company observed a 32 percent drop in traffic on Chinese domains. But Prince was quick to note that although the attack affected a large swath of the country, the entity behind it was probably not another country. “I don’t know how big the ‘pipes’ of .cn are,” Prince told the Wall Street Journal, “but it is not necessarily correct to infer that the attacker in this case had a significant amount of technical sophistication or resources. It may well have been a single individual.”

That reasoning stands in stark contrast to the standard China-blaming reaction to attacks on U.S. and Western European Internet resources or the theft of information stored on computers in those regions. In the immediate aftermath of the incident, there was an air of schadenfreude from some observers. Bill Brenner of cloud-service provider Akamai told the Wall Street Journal that “the event was particularly ironic considering that China is responsible for the majority of the world’s online ‘attack traffic.’” Brenner pointed to Akamai’s 2013 ‘State of the Internet’ report, which noted that 34 percent of global attacks originated from China, with the U.S. coming in third with 8.3 percent.

For its part, the CNNIC, rather than pointing fingers, said it will be working with the Chinese Ministry of Industry and Information Technology to shore up the nation’s Internet “service capabilities.”

Photo: Ng Han Guan/AP Photo

IT Hiccups of the Week: Stock Exchange “Gremlins” Attack

We were blessed with another impressive week of IT-related burps, belches, and eructations. This time, stock market officials are reaching for the antacid.

Nasdaq Suffers Three-Hour Trading “Glitch”

Well, opinions vary about whether it was or wasn’t a big deal. Last Thursday, the Nasdaq suffered what the AP called a “mysterious trading glitch” that suspended trading on the exchange from 12:15 p.m. to 3:25 p.m. EDT. After trading resumed, the market closed up 39 points.

The trading suspension was the longest in Nasdaq history and was a major embarrassment for the exchange, which is still trying to recover from its Facebook IPO screw-up. The exchange blamed the problem on a “connectivity issue” involving its Securities Information Processor, which Reuters describes as being “the system that receives all traffic on quotes and orders for stocks on the exchange.” When the SIP doesn’t work, stock quotations cannot be disseminated.

Nasdaq Chief Executive Robert Greifeld has refused to discuss the cause of the problem in public. Greifeld did darkly hint, however, that the problems were someone else’s fault. The Guardian quoted him as saying, “I think where we have to get better is what I call defensive driving. Defensive driving means what do you do when another part of the ecosystem, another player, has some bad event that triggers something in your system?”

He then went on to say, “We spend a lot of time and effort where other things happen outside our control and how we respond to it.”

Greifeld’s statement immediately triggered further speculation that the other “player” was rival exchange NYSE Arca, which was known to have had connectivity issues with Nasdaq. Greifeld refused, however, to elaborate further on his statements.

Today’s Wall Street Journal published a lengthy story that has shed more light on what happened—although why it happened is still being investigated. According to the WSJ, between 10:53 a.m. and 10:55 a.m. EDT Thursday, “NYSE Arca officials tried and failed to establish a connection with Nasdaq about 30 times, according to people familiar with the events of that day. Nasdaq, for its part, was having its own problems regarding its connectivity to Arca, the people said.”

The WSJ goes on to say that: “What remained unclear Sunday was how that connectivity problem—which Nasdaq officials privately have called ‘unprecedented’—could have had so catastrophic an effect on Nasdaq's systems that the exchange decided to stop trading in all Nasdaq-listed shares, causing ripples of shutdowns across the market and spreading confusion.”

The Journal said NYSE Arca went to a backup system, and after several failed attempts, finally re-established connection with Nasdaq at 11:17 a.m. However, once the two exchanges were reconnected, “Nasdaq's computers began to suffer from a capacity overload created by the multiple efforts to connect the two exchanges.”
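The WSJ account, roughly 30 back-to-back connection attempts followed by a capacity overload once the link came back, is a textbook retry storm, and the standard defense is the kind of "defensive driving" Greifeld alluded to: retry with exponential backoff and jitter rather than hammering a struggling peer. A sketch of that idea only; none of this reflects Nasdaq's or Arca's actual code:

import random, time

def connect_with_backoff(connect, max_attempts=8, base_delay=0.5, max_delay=30.0):
    # 'connect' is whatever callable opens the session; assume it raises ConnectionError on failure.
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay))  # jitter spreads out the retries
    raise ConnectionError("gave up after %d attempts" % max_attempts)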

As a result, other markets also started to report problems in receiving and sending quotes from Nasdaq. Officials at Nasdaq decided they had better pull the plug in order to figure out how to get back to a normal operating state, which they did at 12:15 p.m.

Many traders viewed the episode as a non-event, while other interested observers, like U.S. Securities and Exchange Commission Chairman Mary Jo White, were more concerned. Given the complexity of the systems involved, no one should be surprised to see more hiccups in the future. In Charles Perrow’s terminology, they are now just “normal accidents.”

The folks at Goldman Sachs and Everbright were probably very happy about the distraction created by Nasdaq’s difficulties. Last Tuesday, Goldman “accidentally sent thousands of orders for options contracts to exchanges operated by NYSE Euronext, Nasdaq OMX and the CBOE, after a systems upgrade that went awry. The faulty orders roiled options markets in the opening 17 minutes of the day’s trading and sparked reviews of the transactions,” the Financial Times reported.

Bloomberg News reported that Goldman had placed four senior technology specialists on administrative leave because of the programming error, but Goldman declined to discuss why. Probably a smarter move than when Knight Capital Group CEO Thomas Joyce blamed “knuckleheads” in IT when a similar problem a year ago this month resulted in a loss of US $440 million in about 45 minutes. Goldman's losses were expected to be less than US $100 million.

Knight Capital was sold last December to Getco Holdings Co.

Everbright Securities, the state-controlled Chinese brokerage, is also likely happy at the timing of Nasdaq’s and Goldman’s problems. On 16 August, a trading error traced to the brokerage significantly disrupted the Shanghai market. The error, which cost the brokerage US $31.7 million and the brokerage’s president his job, was blamed originally on a “fat finger” trade. However, a “computer system malfunction” was the real cause, the Financial Times reported. Needless to say, the China Securities Regulatory Commission is investigating Everbright and says “severe punishments” might be in order.

Finally, a real fat finger incident hit the Tel Aviv Stock Exchange (TASE) yesterday. The Jerusalem Post reported that “a trader from a TASE member intending to carry out a major transaction for a different company's stock accidentally typed in Israel Corporation, the third-largest company traded on the exchange.” The resulting disparity in prices caused the company's stock value to nose-dive from an opening value of NIS 1690 down to NIS 2.10, a 99.9 percent loss. The trader quickly realized his typo and requested that the transaction be canceled. However, by then, the error had already triggered a halt in the exchange’s trading.
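Exchanges typically guard against exactly this kind of typo with price bands that reject, or at least pause, orders priced far from a reference value. A simple illustrative check; the 10 percent band and the prices below are made up, not TASE's actual rules:

def order_within_band(order_price, reference_price, band_pct=0.10):
    # Accept the order only if it sits within +/- band_pct of the reference price.
    lower = reference_price * (1 - band_pct)
    upper = reference_price * (1 + band_pct)
    return lower <= order_price <= upper

print(order_within_band(2.10, 1690))   # False: flag or reject the order
print(order_within_band(1650, 1690))   # True: a normal trade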

Helsinki’s Automated Metro Trains Have a Rough First Half Day

Last week, Helsinki's Metro tried out its three driverless Siemens-built trains for the first time. However, in a bit of irony, after a few hours, problems developed with the ventilation system in the trains' driver cabins, and the trains had to be taken out of service. Drivers were aboard the automated trains for safety reasons. The Metro didn’t indicate whether the trains would have been pulled out of service (or the problem even detected) if they had been running in full automatic mode without drivers.

Indian Overseas Bank Back to Normal

The Hindu Times reported on Friday that the Indian Overseas Bank announced that the problem with the bank’s central server had finally been fixed. For three days, hundreds of thousands of bank customers were unable to deposit checks or use the bank’s ATM network. A story at Business Standard said that the problem was related to annual system maintenance of the core banking system, which instead ended up creating what the bank said was a “complex technological malfunction.”

Of Other Interest….

Chrysler Delaying Launch of 2014 Jeep Cherokee Due to Transmission Software Issues

Network Issues Stop Marines from Using Unclassified Network

Tesco Pricing Glitch Lowers Price of Ice Cream by 88 Percent

Xerox Releases Scanner “Error” Software Patch

Photo: Seth Wenig/AP Photo

This Week in Cybercrime: Facebook Feels Backlash After Balking on Bug Bounty

This hasn’t been a week for major headline-making hacks, but a few interesting stories bubbled to the surface.

A Palestinian security researcher recently notified Facebook of a security vulnerability on its site by posting a message on the page of Facebook founder Mark Zuckerberg. The struggling researcher, Khalil Shreateh, was looking forward to receiving a reward under the social media site’s bug bounty program for reporting the problem, which would have allowed anyone to post messages to another user’s page, regardless of whether he or she is on the user’s Friends list.

But Facebook denied him, giving itself a PR black eye in the process. It seems Facebook fixed the bug but wouldn’t shell out any money to Shreateh. Why? The site’s security team reasoned that his method of notifying the company—first posting an Enrique Iglesias video to the page belonging to one of Zuckerberg’s college friends, then posting to Zuckerberg's page itself after the security team still insisted that the issue wasn't a bug—violated its terms of service.

Only after Marc Maiffret, CTO of network security firm BeyondTrust, heard about the snub and launched a page whose aim is to raise $10,000 for Shreateh did Facebook try to explain itself.

According to a Wired article, “Matt Jones, a member of Facebook’s security team, posted a note on the Hacker News web site saying a language barrier with Shreateh had been part of the problem for the company’s initial rejection of his submission…He also said that Shreateh had failed to provide any details about the bug that would help Facebook reproduce the problem and fix it.” But the bottom line, despite Jones’ attempts to rationalize a response that came off as miserly, is: 1) Facebook fixed the problem. 2) It was Shreateh who alerted them to it.

“Mistakes were made on both sides,” Jesse Kornblum, a network security engineer for Facebook later told Wired. “We should have asked for more details rather than saying, ‘this is not a bug.’ But Khalil should have demonstrated the vulnerability on a test account, not a real person. We’ve made an interface for [researchers] to create multiple test accounts [for that purpose].”

But Maiffret, who has met his goal (with $3000 coming from his own pocket), says nixing the bounty for the Palestinian researcher sent the wrong message. “It was a good thing that he did,” Maiffret, who got his start as a teenage hacker, told Wired. “He might have done it slightly wrong, but ultimately it was a bug he got killed off before anyone did a bad thing [with it].” Maiffret pointed to his own beginnings, noting that he went from being a rudderless high school dropout to having a successful career after someone agreed to take a chance on him. “Ultimately, [Shreateh] was well-intentioned and hopefully he stays on the same track of doing research,” Maiffret says.

Google App Engine an Unwitting Conduit for Adware

The adware that floods a computer user’s browser with come-ons is nothing new. But purveyors of this pestilence have come up with a new way to spread it. Jason Ding, a research scientist at Barracuda Labs, posted a note on the company’s research blog this week alerting the world that two sites are lacing users’ machines with malware posing as legitimate application software on Google’s App Engine.

According to Ding, the sites, which appeared about a week ago, prey on the inexperienced or inattentive user. The first one (java-update[dot]appspot[dot]com), which passes itself off as a free Java download site, looks a lot like Oracle’s official Java site. But clicking links on this sinister page causes the download of “setup.exe,” which in turn tries to install the Solimba adware program. The endgame for the other site (updateplayer[dot]appspot[dot]com) also involves plaguing the user with Solimba. But instead of baiting users with a Java imposter, it tells them that their media player is outmoded and needs an update. And guess who’s generous enough to offer just the right fix? Clicking on any of the site’s links pulls down the same executable file that installs Solimba.

The people who set up these sites are using Google’s App Engine as an intermediary because it gives their pitches the air of credibility and hides URLs that would instantly put users on alert that something is fishy.
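One practical takeaway for users: before running any "Java update" or "media player update," look at where the download link actually points. A hedged sketch of that check in Python; the trusted-host list is an assumption chosen for illustration, not an official list:

from urllib.parse import urlparse

TRUSTED_JAVA_HOSTS = {"www.oracle.com", "www.java.com"}  # illustrative allowlist

def looks_legitimate(download_url):
    host = urlparse(download_url).hostname or ""
    return host in TRUSTED_JAVA_HOSTS

print(looks_legitimate("https://www.java.com/download/"))           # True
print(looks_legitimate("http://java-update.appspot.com/setup.exe")) # False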

More People Affected by Outages from Cyberattacks than from Hardware Failures

A report released on Tuesday by the European Union Agency for Network and Information Security (ENISA) reveals some startling information about the reach and effectiveness of cybercrime. Last year, hardware failures accounted for about 38 percent of incidents that resulted in “severe outages of both mobile and fixed telephony and Internet services” in the E.U., and each of those incidents affected 1.4 million users on average. Cyberattacks, by contrast, made up just 6 percent of European outages last year, yet each such incident affected an average of 1.8 million users.

“League of Legends” Maker Hacked

Marc Merrill and Brandon Beck, founders of video game maker Riot Games, said in a blog post this week that cybercrooks had hacked into its network and gained access to usernames, email addresses, salted password hashes, some first and last names, and encrypted credit card numbers. The company, developer of the online multiplayer game “League of Legends,” says it is looking into just what details were gleaned in the unauthorized access of 120 000 transaction records dating back to 2011.
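For readers wondering what "salted password hashes" means for Riot's users: the stored value is a one-way digest of the password combined with a random per-user salt, so identical passwords don't yield identical records and precomputed lookup tables are useless. A minimal sketch using only Python's standard library; a real deployment would use a dedicated scheme such as bcrypt, and the iteration count here is just illustrative:

import hashlib, hmac, os

def hash_password(password):
    salt = os.urandom(16)  # random per-user salt, stored alongside the digest
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison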

In Other Cybercrime News…

Reuters: Ex-Soviet hackers play outsized role in cyber crime world

ZDNet Reviews the New Book “Cyber Crime & Warfare”

IT Hiccups of the Week: You May Want That Burger Well-Cooked

The summertime streak of interesting IT snafus, tangles and general “oops” incidents continues unabated. We start off with a story that appeared in the New York Times over the weekend that may make you reconsider your meat-cooking preference for your next outdoor barbeque.

U.S. Agriculture Meat Inspection Computer Outage Means Meat and Poultry Left Uninspected

The U.S. Department of Agriculture (USDA) considers it to be a non-issue, since there have yet to be any documented instances of people having gotten sick. However, one wonders how long it will be before the continuing problems with a new $20 million computer system, upon which some 3000 meat and poultry inspectors working at 6300 packing and processing plants across the U.S. depend, contribute to a major food-borne illness outbreak.

According to a Saturday New York Times story, the computer system the USDA Food Safety and Inspection Service (FSIS) meat and poultry inspectors use has experienced several recent breakdowns. Earlier this month, for instance, it shut down for two days, putting “at risk [consumers of] millions of pounds of beef, poultry, pork and lamb that had left the plants before workers could collect samples to check for E. coli bacteria and other contaminants.”

What's the risk? Well, a USDA report (pdf) states that, “the Centers for Disease Control and Prevention estimate that E. coli O157:H7 causes about 73,000 cases of illness and 61 deaths annually in the U.S. The USDA's Economic Research Service estimates that the total costs associated with consuming E. coli-contaminated meat are about $488 million annually.”

The new computer system was installed in 2011 as a way to help hasten the meat and poultry inspection process. Previously, it could take days before inspected food flagged as being contaminated could be traced to the offending plant. The new system speeds up completion of the paperwork used to trace inspected meat and poultry, dramatically reducing the time needed to identify the source of any compromised food. That's under normal circumstances. The downside is that when the computer system isn’t working, which inspectors tell the Times happens frequently, meat and poultry sometimes go without being inspected at all.

Last year, computer system issues led to problems at 18 meat processing and packing plants. The Times stated that, “At one of the plants, auditors found that inspectors had not properly sampled some 50 million pounds of ground beef for E. coli over a period of five months. At another plant, which the report identified as among the 10 largest slaughterhouses in the United States, auditors found that computer failures had caused inspectors to miss sampling another 50 million pounds of beef products.”

But not to worry, the USDA says. Many of the highlighted problems have been corrected. Additionally, USDA officials claim, the problem wasn’t really with the computer system itself, but with balky wireless networks the computer system has to connect to in the rural areas where many meat processing and packing plants operate.

Does that mean that the USDA doesn’t include the communication system as part of an overall system test before it fields such a system? Regardless of the answer, questions abound, mainly because USDA field inspectors told the Times that even where wireless connections are first-rate, the computer system still keeps crashing.

Until the reliability of the USDA’s computer system improves, you may want to make sure your meat and poultry are thoroughly cooked.

U.K. Post Office Computer System Leads to False Theft Accusations

The Daily Mail published a story last week about the four-year fight between Tom Brown, of South Stanley, County Durham and the U.K. Post Office over charges that Brown, a sub-postmaster, fiddled £85,426 from its accounts. Last week, the Mail reported, the Post Office decided to drop its two civil court charges of false accounting against Brown (the police decided two years ago not to pursue the case), and a judge has recorded not-guilty verdicts.

Brown is one of more than 100 people across the U.K. that the Post Office has accused of theft since the introduction of its £1 billion Horizon computer system, used to record transactions across its 14 000-branch network, over a decade ago.  However, Brown and the others claimed that the disappearance of the money they were accused of stealing was caused by computer problems with the Horizon system that created false shortfalls in the sub-postmasters’ financial accounts.

The U.K. paper Computer Weekly has been diligently following this story for years, noting that sub-postmasters were complaining about computer problems as far back as 2003. However, the U.K. Post Office steadfastly refused to believe that there was anything wrong with the Horizon system. It was convinced that those whom it had accused of stealing were merely using “computer glitches” as an excuse to hide their theft.

In fact, the Post Office was even able to convince some U.K. judges that the Horizon system was extremely reliable and didn’t make mistakes. As a result, some sub-postmasters were sent to jail and many lost their homes or went bankrupt in order to pay back the alleged shortfalls in their accounts.

However, as more and more sub-postmasters were accused, the Post Office finally succumbed to pressure to conduct an investigation into the Horizon system just this past year. In July of this year, as word filtered out that the system’s reliability wasn't as phenomenal as claimed, the Post Office finally admitted that the investigation did find defects in the system that caused accounting shortfalls at 76 branches. In light of those shortfalls, the Post Office stated that more investigation would be required into the system's operations, BBC News reported.

The Post Office also stated it would be looking into how to “take better account” of sub-postmaster complaints “going forward,” but did not directly address those lodged by the dozens who say they were falsely accused. Maybe that's because those sub-postmasters are looking to bring legal action against the Post Office.

Like all good bureaucracies, the Post Office also proposed to set up a working group to investigate further the problems so far uncovered.

Wall Street Journal Doesn’t Let Rival’s Crisis Go to Waste

Finally, last Wednesday morning, the New York Times website and mobile app suffered a two-hour outage, although new articles didn’t appear until about four hours after the site and the app first became unavailable. Speculation ran rampant that the Times was the victim of a cyberattack, but the Times said the incident was likely the result of a “scheduled maintenance update being pushed out.”  

However, soon after the outage began at 11:10 a.m., which is the start of the peak traffic time for the paper, the Wall Street Journal decided to try to capitalize on its rival's misfortune by lowering its own pay wall for two hours. The Journal later said it lowered its pay wall not because of the New York Times outage, but because of the violent protests then happening in Egypt.

A-huh.

Google also suffered an outage last week, but only for a few minutes.  Reports were that Internet traffic dropped by some 40 percent because of it.

Microsoft, which has been taking very public potshots at Google, remained mum on the Google outage. Was it, perhaps, because Microsoft seems to be having significant problems of its own with Outlook.com over the past week?

Of Other Interest…

Moose Detector System Out For Extended Periods

Software Issue Delays Work for Months on Guernsey Airport’s New £3.5 million Radar

Everbright Securities Fat Finger Trading Error Roils Shanghai Stock Market

Computer Problem Forces Arizona Lottery to Issue New Pick 3 Tickets

BT Sport Issue Angers Premier League Fans

 

Photo: Remy Gabalda/AFP/Getty Images

This Week in Cybercrime: Computer Glitch Opens Prison Doors?

Florida prison officials are trying to figure out whether a computer glitch may be behind two recent, as yet unexplained incidents where all of the doors at a facility’s maximum-security wing opened simultaneously. In the latest occurrence, on 13 June, guards at the Turner Guilford Knight Correctional Center in Miami, Florida, had to rush to corral prisoners back into their cells after a “group release” button in the computerized system was triggered. The entire facility, including locks on cell doors, surveillance cameras, water and electricity, and other systems, has been automated. Anyone who gains full access to the network—whether from a touch-screen monitor in the guard tower, or from outside, via a security hole—can control any of these functions. Guards say they don’t know how it happened, and recently released surveillance footage does not pinpoint the source of the errant command.


IT Hiccups of the Week: Sabre Outage Hit Flights Worldwide

It’s been another eventful week in the land of IT snags, snarls and complications. We start with another major airline reservation system hiccup, but this time one with world-wide effects.

Sabre Reservation System Outage Affects 400 Airlines

The Sabre airline reservation system experienced a still unexplained two-hour-plus “system issue” beginning at 8:40 p.m. PDT on Monday evening. The outage affected airlines around the world. (Sabre’s website states that about 380 airlines use its reservation system.) As a result, ticket agents had to hand-write boarding passes, and the affected airlines’ websites couldn’t book or change reservations. Where in the world you happened to be at the time determined how severely you felt the outage.

In the U.S., a small number of late-night West Coast domestic and international flights experienced delays. In Europe and the Middle East, the effects were felt a bit more, as it was early to mid morning. The greatest problems were felt in Australia, where it was early afternoon Sydney time. Virgin Australia got the worst of it.

Virgin Australia reportedly had to cancel 35 domestic and international flights and delay many others. For whatever reason, Virgin seems snake-bit when it comes to reservation system IT issues. This complication came on the heels of a router outage that occurred just a few weeks ago. You may remember its nearly two-week reservation meltdown back in 2010, as well. Virgin moved from Navitaire’s New Skies platform, which was at the heart of the 2010 meltdown, to the Sabre system earlier this year. The costs incurred in the switch helped send the company into the red.

Although the exact number of passengers affected is unknown, it was undoubtedly thousands worldwide given that Sabre says some 300 million passenger reservations are processed by its system every year. On Tuesday, in the wake of the outage, Sabre sent out the standard, “We apologize and regret the inconvenience caused.”

Denver International Airport officials were also apologizing on Tuesday to passengers as all 1200 airport flight boards were out for most of the day. Maintenance had been performed on the system that runs the flight boards on Monday night. The boards were not restored to operation until late Tuesday evening.

False Emergency Warnings Sent in Japan, Virginia, California

The Japan Meteorological Agency sent an emergency message last Thursday warning most of Japan to expect “violent shaking” after detecting a magnitude-2.3 earthquake in Wakayama prefecture in western Japan at 4:56 pm local time, Bloomberg News reported. According to the story, the Wakayama earthquake prompted JMA’s warning system to predict that a magnitude 7.8 earthquake was possible.

As a result of the warning, Central Japan Railway suspended some bullet train operations, and a number of mobile phone networks became jammed as a multitude of people called friends and family.

However, it soon became clear that the prediction was in error. The JMA blamed the false warning on “electrical noise” on the ocean floor, and offered a televised apology. The JMA admitted that Thursday's incorrect warning was the “biggest misreading” since the early warning system was launched in 2007.

On Wednesday morning, human error was blamed for a tornado alert mistakenly being sent to 500 people in the Charlottesville-Albemarle County area of Virginia, the AP reported. Apparently the notification was sent during a training session on how to use the local emergency alert system.

Also on Wednesday, a real emergency alert of a reported gas leak was sent to more people than intended. The automated wireless emergency alert message, which urged residents and businesses to evacuate immediately and take only essential belongings with them, was sent out across all of Contra Costa County, California, instead of just to the homes and businesses within a 1000-foot radius of a damaged gas pipe. County officials said that they would be working with the vendor the county uses to send alerts to ensure that the messages are better targeted, the San Jose Mercury News reported.
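What the county wants from its vendor amounts to a radius filter: compute each recipient's distance from the incident and alert only those within roughly 1000 feet (about 305 meters). A rough sketch; the coordinates and recipient list are invented for illustration:

from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters (haversine formula).
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

incident = (37.93, -122.35)   # hypothetical leak location
radius_m = 305                # roughly 1000 feet

recipients = [("A", 37.931, -122.351), ("B", 38.10, -121.90)]
to_alert = [name for name, lat, lon in recipients if distance_m(lat, lon, *incident) <= radius_m]
print(to_alert)               # only "A" falls inside the radius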

Xerox to Patch Scanner Feature

I doubt many people would think to check a document they scanned to see whether what was scanned actually matches the original. It might be a good idea to do so in the future, however.

Last Tuesday, a story at BBC News reported that German computer scientist David Kriesel “discovered” that the compression software used by several Xerox scanner models had the nasty habit of changing characters in the scanned document from those on the original document. The Daily Mail published an article showing some of the changes that could result. The legal implications, a London lawyer told the BBC, were “Interesting.”

Xerox played down the error, however, saying that the vast majority of scanner users would never experience the problem, since it only happened when the scanner's default resolution setting was changed to a lower resolution in order to produce smaller files. The character substitution issue was long known to be a possibility, and a warning about it was in all its user manuals, Xerox said. However, in light of the uproar the BBC News story generated, which was also fueled by Xerox’s nonchalant response to the issue, Xerox said it would be sending out a patch in the next few weeks to disable the highest compression mode, which it claimed would eliminate the problem.
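Compression schemes of the sort reportedly involved here keep a small dictionary of glyph patterns and reuse an existing pattern for any later symbol that looks similar enough; at low resolution, two different characters can fall inside the similarity threshold and be silently swapped. A toy illustration in Python, emphatically not Xerox's actual algorithm:

def compress(glyphs, similarity, threshold=0.9):
    dictionary, output = [], []
    for g in glyphs:
        match = next((d for d in dictionary if similarity(g, d) >= threshold), None)
        if match is None:
            dictionary.append(g)
            match = g
        output.append(match)  # the stored pattern is emitted, not the original glyph
    return output

# Pretend a low-resolution '6' and '8' look 92 percent alike; the '6' then comes out as '8'.
sim = lambda a, b: 0.92 if {a, b} == {"6", "8"} else (1.0 if a == b else 0.0)
print(compress(list("868"), sim))  # ['8', '8', '8']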

Even so, it might be a good idea to routinely check over your scanned documents just in case, and also maybe read your scanner’s user manual. There may actually be something useful in it.

Also of Interest…

Navy Explains Why the USS Guardian Got Stuck on “Misplaced” Philippine Reef

BATS Exchange Experiences Another Outage

New Zealand Vodafone's Data Network Problems Affect Mobile EFTPOS

X-Ray Computer System Problems Continue Unabated in Kent, England

Wisconsin State Government Recovering From Computer Crash

Los Angeles Fire Department 911 System Repeatedly Breaks Down

Japanese Luxury Toilet Has Computer Hardware Flaw

Photo: iStockphoto

This Week in Cybercrime: Is There Anonymity on Anonymity Networks?

Security researchers studying malware that exploits a hole in the Firefox browser’s security to unmask users of the privacy-protecting Tor anonymity network suspect that the author of the malicious code is…wait for it…the U.S. government. Journalists and human rights activists depend on Tor and services like it to evade surveillance or protect users’ privacy. But the hidden services have found themselves in U.S. law enforcement’s crosshairs because, according to the agencies, the services cloak the activities of criminals. The FBI says that people such as Eric Eoin Marques, who was recently described by an FBI special agent as “the largest facilitator of child porn on the planet,” use Tor to hide in plain sight.

A Sunday attack on several websites hosted by Freedom Hosting originated at “some IP in Reston, Virginia,” security engineer Vlad Tsyrklevich told Wired. “It’s pretty clear that it’s FBI or it’s some other law enforcement agency that’s U.S.-based.” So much for China being the nexus of cyber espionage.

Tsyrklevich and other researchers think the malicious code is an example of the FBI’s decade-old “computer and internet protocol address verifier,” or CIPAV, the tool it has used to track down hackers, sexual predators, and other cybercriminals who use proxy servers or anonymity services like Tor to hide their identities. Wired reported on the spyware way back in 2007.

“Court documents and FBI files released under the FOIA have described the CIPAV as software the FBI can deliver through a browser exploit to gather information from the target’s machine and send it to an FBI server,” says a Wired article. Where is the FBI server in question? In Virginia.

The first clue that law enforcement is behind the hack is that the malware doesn’t steal anything nor does it lay any groundwork for future access to the systems. All it does is “look up the victim’s MAC address—a unique hardware identifier for the computer’s network or wireless card—and the victim’s Windows hostname. Then it sends it to the Virginia server, outside of Tor, to expose the user’s real IP address…”
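Neither of those identifiers is secret once code is already running on the machine; both can be read with a couple of standard-library calls. For illustration only, this shows the data being described, not the browser exploit used to deliver the code:

import socket, uuid

hostname = socket.gethostname()
mac = uuid.getnode()                 # the machine's 48-bit MAC address as an integer
print(hostname, format(mac, "012x"))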

DIY Femtocell Hack Sniffs Out Malware on Mobile Phones

In last week’s edition, we highlighted a presentation at Black Hat Las Vegas by researchers who figured out how to hack a femtocell portable cellular base station in order to intercept all data transmitted by nearby mobile handsets. They informed device makers such as Verizon about the exploit so it could be remedied. This week, Wired reported that the good guys have devised a method for using a femtocell to detect malware on mobile phones. In a presentation at the Def Con hacker conference in Las Vegas, researchers from LMG Security demonstrated a system they built for less than $300 that can view data transmitted from smartphones, through a femtocell, to a cellular carrier’s network. This allows a phone’s user to monitor his or her own data traffic for malicious activity.

“If your phone is infected … it can send audio recordings, copies of your text messages, and even intercept copies of your text messages so you never receive them,” LMG’s Sherri Davidoff told Wired. “Our goal is to give people the ability to see the network traffic” to determine whether this is occurring. The LMG jury rig not only allows traffic monitoring, says Wired, it also gives the user the ability “to stop the data from being passed to attackers from infected phones, alter it to feed the attackers false data, or pass commands back to the smart phone to remotely disable the malware.”

The researchers went a step further, releasing a paper that describes their method in enough detail that consumers can build the system as a DIY project.

Cybersecurity Expert Advocates Fighting Hackers With Hackers

“Large organizations are shooting themselves in the foot if they're not willing to hire a reformed computer hacker to aid with cyber security.” That’s the bottom line, at least according to Robert Hansen, director of product management for security firm WhiteHat Security. In an interview with Computing Magazine, Hansen goes on to say not only is shunning so-called black hat hackers a bad idea, but that many large businesses unknowingly employ them anyway.

"One guy I know who does training for military contractors, he lives in a state where they're not allowed to do background checks on people for whatever reason. But he's been to jail before, for hacking," Hansen told Computing.

"He's gone to jail for something and now he's teaching the best of the best how to defend against hackers and they're not allowed to ask the question if he's gone to jail or not. " Hansen, who regularly talks with black hats, reasons that, if a company is going to have people on the payroll who at one time or another went to prison for hacking or committed cybercrimes but weren’t caught, it’s better to do so know knowingly. "If you intentionally do it then at least it's on the table and they can do the things they need to do to help you [avoid becoming the victim of cybercrime]," he said.

In Other Cybercrime News…

A Chinese hacker gang infiltrated more than 100 companies, sat in on private teleconferences

Two providers of secure e-mail shut down rather than comply with secret U.S. government court orders for access to their customers' data

The Cybercrime of Things: Adding Internet connectivity to everything in your home means convenience. It also means greater vulnerability.

Photo: Getty Images

Queensland Government Bans IBM from IT Contracts

That didn’t take long.

A day after the Queensland Health Payroll System Commission of Inquiry delivered its 264-page blistering report (pdf) on the Queensland Health payroll system project cock-up, which I have been following for the past few years, Australia’s Queensland Premier Campbell Newman released a statement today in which he banned IBM from entering into “any new contracts with the State Government until it improves its governance and contracting practices.” This comes right on the heels of last week's news that Pennsylvania had opted out of renewing its contract with IBM to modernize the state's unemployment compensation computer system, and yesterday's news that a Credit Suisse analyst cut IBM's stock rating, saying that, “Organically, we believe IBM is effectively in decline.” It's starting to seem like a very bad month for IBM indeed.

Newman cited as a reason for the ban the commission report’s finding that the fiasco—which saw an effort to replace Queensland Health’s legacy payroll system at an expected cost of A$6.19 million (fixed price) explode into one that will cost around A$1.2 billion to develop and operate properly when all is said and done—“must take place in the front rank of failures in public administration in this country. It may be the worst.”

Maybe the scariest thing for Queensland taxpayers reading the report’s conclusion was that the commission wasn’t sure that this farce was the worst in Australian government’s history. Maybe it was unsure because the commission has so many worthy candidates to choose from.

Newman went on to say, “It appears that IBM took the state of Queensland for a ride.”

Newman’s statement warned that before IBM will be allowed to bid again it “must prove it has dealt with past misconduct and will prevent future misconduct.”

Naturally, IBM took exception to the report’s conclusion, saying it was at best only minimally responsible for the project blunder—a major admission for the company which has long insisted it “successfully delivered” what it was contracted for. An IBM spokesperson is quoted in an article today at the Delimiter as saying, “IBM cooperated fully with the Commission of Inquiry into Queensland Health Payroll, and while we will not discuss specifics of the report we do not accept many of these findings as they are contrary to the weight of evidence presented.”

Additionally, the IBM spokesperson stated, “As the prime contractor on a complex project IBM must accept some responsibility for the issues experienced when the system went live in 2010. However, as acknowledged by the Commission’s report, the successful delivery of the project was rendered near impossible by the State failing to properly articulate its requirements or commit to a fixed scope. IBM operated in a complex governance structure to deliver a technically sound system. When the system went live it was hindered primarily through business process and data migration issues outside of IBM’s contractual, and practical, control.”

“Reports that suggest that IBM is accountable for the $1.2 billion costs to remedy the Queensland Health payroll system are completely incorrect. IBM’s fees of $25.7 million accounted for less than 2 percent of the total amount. The balance of costs is made up of work streams which were never part of IBM’s scope.”

In other words, all that money that Queensland has had to spend on those "work streams" to get the payroll system to work correctly since we were kicked out ain’t on us.

IBM’s statement, however, studiously avoided tackling head-on one conclusion in the commission’s report: “The only finding possible is that IBM should not have been appointed” to the contract in the first place, in part because of “ethical transgressions” on the part of some of its employees, including breaching “the obligation not to use the State’s confidential [bid and proposal] information.” That information came into IBM's possession, in a way the company couldn't explain, from a restricted government database, along with apparently privileged insider information from a government consultant to the project who happened to be a former IBM employee.

The report damningly states that IBM’s “conduct shows such disregard for the responsibilities of a tenderer, and a readiness to take advantage of the State’s lapse in security, as to make it untrustworthy.”

This is not to say that the government was blameless in the shamble, either. The commission report states that:

“a. the scoping of the system (ie its definition) was seriously deficient and remained highly unstable for the duration of the Project. That being so, and although the problem was firmly known to each party, no effective measures were taken to rectify the problem or to reset the Project;

b. the State, who would ultimately bear the risk of a dysfunctional payroll system, gave up several important opportunities to restore the Project to a stable footing and to ensure that the system of which it would ultimately take delivery was functional. [An expert] characterised the approach of both parties as being ‘Plan A or die’;

c. the decision to Go Live miscarried, both because it ought to have been obvious to those with responsibility for making that decision that the system would not be functional and because the decision to Go Live involved no proper and measured assessment of the true risks involved in doing so;

d. the system, when it went live, failed to function in a way in which any payroll system, even one which was interim and to have minimal functionality only, ought to have done.”

The report did “clear” former Premier Anna Bligh, former Health Minister Paul Lucas and former Public Works Minister Robert Schwarten of any wrongdoing in the decisions they made during and after the screw-up because the three politicos were merely following the advice given to them by their senior public servants. However, the report did not endorse their decisions as necessarily being correct: it notes that, “Those who read the Report, and have an interest in good government, can judge for themselves” whether the decisions made by the former senior Queensland politicians were “improvident” or not.

The commission report makes for a jolly good read for students of major government IT project catastrophes. I found nothing in it particularly surprising, nor anything I haven't been writing about for years here at the Risk Factor, and the numerous recommendations for the government for avoiding misadventures in the future can be summed up as, “Don’t take on IT projects that you don’t have the professional competence or capability to ensure you aren’t being taken for a ride.”

Good advice that undoubtedly everyone will agree with but will also be studiously ignored in practice.

Photo: IBM logo/Wikimedia Commons, Denied stamp/iStockphoto

IT Hiccups of the Week: Port of New York and New Jersey’s Computer Problems are Virginia’s Gain

There was a clustering of IT-related problems, outages, and apologies last week in the government and banking spheres. However, we'll start off this week’s IT Hiccups review with a story of how a problem with a new terminal operating system at the Port of New York and New Jersey has become a boon to the port's competitors at the Port of Virginia and elsewhere.

Port Authority of New York and New Jersey Tells Shippers to Come Back

It is hard to overestimate the importance (pdf) of an effective, fully-functioning terminal operating system (TOS) to a port’s attractiveness to shippers, and therefore, to the port’s profitability. And when a TOS doesn’t work well, it doesn’t take long for shippers to decide to go somewhere else, as the Port Authority of New York and New Jersey—the busiest on the East Coast despite still being in recovery from the ravages of Hurricane Sandy—is finding out to its dismay. The Port of Virginia, among others, is delighting in the shift.

As part of a US $3 billion investment by the Port Authority and New Jersey, Maher Terminals, one of the largest multi-user container terminal operators in the world, began on 20 April to upgrade its Navis TOS in several phases at Maher’s Elizabeth, N.J., facility. The project's planners originally expected that the upgrade would be completed by 8 June. The early phases of the upgrade seemed to go well, but the final phase has proved troublesome.

According to a joint Maher-Navis statement issued on 20 June, the system’s “operations has encountered some unexpected issues” which “have led to delays.” The companies stated that they were committing “all available resources to identify and resolve” the technical issues involved, and expected the issues to be resolved shortly.

The statement also said, “With the implementation of new systems, there is always a risk of initial declines in productivity as new operating procedures and processes are streamlined into the operation.” This was a hint that the companies saw that some of the operational problems were caused by the shippers themselves, and to ensure the hint wasn’t missed, the statement added, “Noticeable improvements are already being realized as users adjust to new systems and processes.”

Apparently the “blame the customer” excuse for the port's problems didn’t go over very well, especially as the TOS-related issues continued not for days but for several weeks. As a result, the Port of Virginia as well as other East Coast ports have seen an increase in their container traffic, a story in the Hampton Roads, Virginia, paper Daily Press reported last week. The paper said that Hapag-Lloyd, a German shipping line that operates 150 ships, even went so far as to urge “its rail and local cargo customers on July 26 to seek out alternative ports” and also told its customers that the issues in NY/NJ were being made worse by a labor and trucking shortage.

The Port of Virginia successfully migrated to the Navis SPARCS N4, the same TOS that the Port Authority of New York and New Jersey is having trouble with, last year.

A story in today’s Wall Street Journal adds insights into the problems being encountered at the Port of New York and New Jersey. It says that “some trucks endured waits of an extra four to five hours for routine jobs that require one hour on a good day” and that “ships have been diverted to nearby terminals in New Jersey and Staten Island, causing more delays.” Thousands of containers worth millions of dollars are still stuck at the port, many of which contain holiday goods that stores are anxious to receive.

One trucking company executive was quoted in the Journal as saying, “There's no word for it other than ‘hell’… I've been in business thirtysome-odd years, and this is the most stressful time I've had.”

A joint Maher-Navis statement last week said the companies had “determined that the real-time interactions between the various [TOS] system components deployed in the container yard were not operating as designed” and that they created a temporary fix: certain automated features were turned off and would be phased back in “on a controlled basis.” When they would be phased in, the companies didn't say. The companies also claimed that service had returned to “acceptable levels during the past several weeks, albeit at reduced volume.”

In the June joint statement, the companies said they expected the Port of New York and New Jersey to be returning to its previous “exceptional performance” soon. Somehow, I don’t think shippers will settle for the currently touted “acceptable” service; other port authorities, such as those in Virginia, Baltimore, and Boston, will likely keep reminding shippers that their ports aren’t having any problems.

There is a neat real-time animation of a TOS in action at the Bremerhaven port for those interested.

Banking IT Systems Have a Bad Week

Gross understatement alert: Banking customers took it on the chin last week.

We start off with reports today that Westpac Bank customers in Australia and New Zealand are having trouble with their online and mobile banking services for a second straight week. Last Thursday and Friday, the bank apologized for intermittent connectivity problems, while today it apologized for intermittent slowdowns. The bank said last week that it didn’t know what the cause of the problems was: it hadn’t performed any updates and it wasn’t experiencing a cyber-attack, two of the usual culprits. Westpac said it would ensure that no one was out any bank fees because of the problems.

Also in Australia, the National Australia Bank apologized today to customers who were experiencing connectivity issues with its online and mobile banking services. It too denied experiencing a cyberattack, but didn’t give any further explanation for its problems. NAB, which is in the midst of a 10-year IT system modernization effort, suffered another major system issue a few weeks ago.

NAB suffered a separate black eye, but in the U.K. A week ago, customers of Clydesdale and Yorkshire banks, which are owned by NAB, discovered that they were unable to access the banks’ online services. The banks originally said the problem had nothing to do with their IT systems, and urged customers to check with their local ISPs. However, it soon turned out that NAB had failed to renew its domain names on time, a story at Computerworld UK reported. Interestingly, the banks haven’t admitted that this oversight was the cause of their customers’ access problems, instead reiterating the previous excuse, Computerworld stated.
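The NAB lapse is the sort of thing a trivial renewal watchdog catches: record each domain's expiry date from the registrar, however you obtain it, and warn well before the date passes. A minimal sketch with made-up domains and dates:

from datetime import date

domain_expiries = {
    "example-bank.co.uk": date(2013, 9, 15),   # hypothetical entries
    "example-bank.com":   date(2014, 2, 1),
}

def expiring_soon(expiries, warn_days=30, today=None):
    today = today or date.today()
    return [d for d, exp in expiries.items() if (exp - today).days <= warn_days]

print(expiring_soon(domain_expiries, today=date(2013, 9, 1)))  # ['example-bank.co.uk']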

Also in the UK, the Telegraph reported that payment processor Streamline, which the Telegraph stated “handles half of all face-to-face card transactions in Britain,” had trouble processing payments made between 26 and 28 July. As a result, an unknown number of customers may “have not received settlement for transactions processed” with their banks over those days.  “We apologise for any inconvenience caused,” the processor said, although it didn’t promise to ensure any bank fees caused by the settlement issues would be covered.

Finally, in Vermont, the State Treasurer’s Office said that a processing problem at TD Bank, which is the state’s bank of record, would delay by a day the electronic payment of retirement checks to 7059 retirees of the Vermont State Teachers’ Retirement System. TD Bank said that it would cover any late charges, overdrafts, or other financial penalties caused by the delay. TD Bank said it “apologizes for the inconvenience.”

U.S. Government IT Systems Have Another Bad Week

A few weeks ago, I wrote about a number of U.S. state governments that suffered serious IT meltdowns. Last week, there was more of the same.

We start off with a mainframe system upgrade a week ago Sunday at New York State’s Department of Motor Vehicles that didn’t go as planned. A hiccup ended up taking the department’s computer system offline for several hours on Monday morning. A story at the Times Union quoted a DMV official as saying, “Although the system was fully tested after the upgrade was completed, an issue presented itself under this morning's high transaction volumes that was not detected during testing.”

Even after the DMV’s system came back up, it was said to be operating very slowly. It wasn't until Tuesday that the system resumed normal operation.

Also on Monday, a network connectivity problem prevented the Texas Department of Motor Vehicles from registering or titling vehicles, a Star-Telegram story reported. That problem was also fixed on Tuesday.

On Tuesday, a problem with a mainframe computer in Georgia “caused the computer systems at [the Department of Driver Services], the Department of Revenue, the Department of Human Services and the Office of the Secretary of State to crash,” WGCL, CBS television's Atlanta affiliate, reported. The effects from the crash were still being felt on Wednesday. The cause of the problem was not disclosed.

Then on Thursday, the city of Alamogordo, New Mexico, said that all of the city’s e-mail and scheduled appointments between 19 and 31 July were apparently lost “while technicians were transitioning from one email system to another,” the Alamogordo Daily News reported. In addition, city employees can’t currently access their e-mail or their online calendars because of the “glitch.” The city said it has a backup system, but it also failed. No date has been given for when the situation will be rectified.

Also on Thursday, the computer system used by Nevada's Welfare System was reportedly out for the entire day. No reason was provided for the outage, other than that it was an “internal” issue, whatever that means.

Finally, on Friday, the computer system supporting all Oklahoma state agencies, including telephone services, went down because of a power failure. The system was successfully rebooted Friday night, but child support payments made by the Oklahoma Department of Human Services are going to be delayed by about a week because of the outage.

And the reason for the power outage? Apparently, “someone inadvertently hit the emergency shut off system.” Who that “someone” is has not yet been revealed.

Also of Interest…

Victoria Australia 000 Emergency System Goes Down for the Fifth Time in Six Weeks

Fat Finger Error Adds US$18 Billion to Kemper Corporation Capitalization

Glitch at M&G Accounting, UK's Biggest Bond Fund Manager, Hits 32 000 Investors

Indiana ISTEP Testing Glitches Didn’t Hurt Much (We Think)
 

Photo: Captain Albert E. Theberge/NOAA

This Week in Cybercrime: Black Hat USA 2013 Uncovers a Bevy of Exploits

Spy Chief Addresses Hacker Nation

The highlight of this week in cybercrime was the Black Hat USA 2013 conference that took place in Las Vegas. Though dozens of cybersecurity researchers showed up to alert the world to the wide-ranging vulnerabilities that could be exploited by cybercriminals, the top story was the appearance of Gen. Keith Alexander, director of the National Security Agency and chief of U.S. Cyber Command. Alexander was booked to deliver the gathering’s opening keynote address well before Edward Snowden’s revelations about the NSA’s surveillance programs, including PRISM and the bulk collection of phone call metadata. So there was much speculation about whether Alexander would show up, whether he should, and what type of reception he would receive. In video of the talk, recorded by Kaspersky Lab’s Threatpost, the audience,

“was initially cordial and attentive, but soon turned somewhat restive and hostile. While Alexander defended the NSA’s intelligence-gathering efforts and provided examples of how they had led to the disruption of terror attacks in recent years, some people in the audience were uninterested and shouted criticisms and accusations at him.”

What a nice way to get the party started.


Risk Factor

IEEE Spectrum's risk analysis blog, featuring daily news, updates and analysis on computing and IT projects, software and systems failures, successes and innovations, security threats, and more.

Contributor
Willie D. Jones
 