Risk Factor

This Week in Cybercrime: Judge Upholds LinkedIn's "If You Put It on Our Site, Don't Blame Us If It Gets Out"

LinkedIn Not Liable

Earlier this week, a U.S. District Court in Northern California dismissed a class action lawsuit accusing LinkedIn of failing to deliver the level of security the plaintiffs say the social networking site’s privacy policy promised. A June 2012 data breach resulted in more than 6 million LinkedIn passwords being posted online. A few weeks later, a woman from Illinois and a woman from Virginia filed the suit—after learning that LinkedIn had encrypted the passwords with an outdated algorithm. Judge Edward Davila ruled that the suit should not proceed to trial for several reasons. The plaintiffs, he said, wrongly assumed that by paying for the site’s premium upgrade, they were entitled to a higher level of encryption for their data than users of the free version. Davila pointed out that, although the plaintiffs admitted they had never read the site’s privacy policy, it stated:

“…we cannot ensure or warrant the security of any information you transmit to LinkedIn. There is no guarantee that information may not be accessed, disclosed, altered, or destroyed by breach of any of our physical, technical, or managerial safeguards. It is your responsibility to protect the security of your login information.”

The judge also failed to see how the posting of the passwords had, as the plaintiffs claimed, caused any economic harm or put them at future risk of identity theft.

Google’s Ups and Downs

It seems that the one-year anniversary of Google Play is not turning out to be the auspicious occasion Google had likely imagined. On Wednesday, the KrebsOnSecurity.com blog reported that a new botkit is being used to trick Android users into downloading fraudulent banking apps capable of intercepting multifactor authentication messages from banks. The apps then forward text messages containing the purloined login credentials to the phony apps’ creators. That news appeared in the context of data that Google itself released on the Android developer blog showing why Android users remain plagued by malware. Google admitted that, based on data gleaned from mobile devices that accessed its app store during the two-week period that ended on Monday, only 16 percent of Android users have bothered to update their operating systems to the newest, safest versions. More than 40 percent of people with Android mobile devices still run a two-year-old version known as Gingerbread. Kaspersky Lab, which keeps track of attempted malware installations on Android, reported that as of the end of 2012, Gingerbread was the most commonly targeted version of Google’s OS. (A SecurityLedger.com article notes that Apple, by contrast, has no such migration problems with its gadgets; 98 percent of all iPhone and iPad users run one of the latest two iterations of iOS.)

The news isn't all bad about Google, though. The search-and-now-just-about-everything-else company did something this week for which it should be lauded. It struck a blow against the U.S. government surveillance program that has expanded rapidly since the passage of special laws that allow agencies such as the FBI to much more easily demand information from Internet service providers, credit bureaus, banks, and businesses like Google—all without a warrant. The demands for information, called National Security Letters (NSLs), come with a built-in gag order barring the companies receiving them from even mentioning that they’ve received them. But on Tuesday, Google became the first company to give a hint of the extent to which the FBI uses this authority. It published a document giving ballpark figures for the number of accounts for which it turned over information in a given year. For instance, it reported that in 2010 it divulged information on “2000–2999” customers; in 2009, 2011, and 2012, the range was “1000–1999.”

Although the U.S. Congress requires the FBI to disclose the number of times it issues NSLs (it sent out more than 16 000 in 2011), Google didn’t report exact numbers. “This is to address concerns raised by the FBI, Justice Department and other agencies that releasing exact numbers might reveal information about investigations,” Richard Salgado, a Google legal director, wrote in a blog post. But at least the existence of the NSLs and the potential for abuse is out in the open. The FBI continues to have this power to say information about you is “relevant” to an investigation and get unquestioned access to records—even after a 2007 Justice Department inquiry revealed that after the September 2001 terrorist attacks, the FBI regularly ran afoul of the relaxed rules regarding the acquisition of evidence.

U.S. Electronic Health Record Initiative: A Backlash Growing?

There seems to be a slow but steady backlash growing among healthcare providers against the U.S. government’s $30 billion initiative to get all its citizens an electronic health record, initially set to happen by 2014 but now looking at 2020 or beyond. The backlash isn’t so much about the need for, or eventual benefits of, electronic health records but more about the perceived (and real) difficulties caused by the government's incentive program and a growing realization of the actual financial and operational costs involved in rolling out, using, and paying for EHR systems.

The backlash began to publicly surface last September when the U.S. government accused healthcare providers of “upcoding,” i.e., using a single click on a field in an electronic health record to claim a medical service or procedure that was never actually performed. Kathleen Sebelius, the current HHS Secretary, and Eric Holder, the Attorney General, sent a letter to five major hospital trade associations (pdf) warning them that electronic health records were not to be used to “game the system” and “possibly” obtain “illegal payments” from Medicare. The letter said that Medicare billing is being scrutinized for fraud, and implied that those using EHRs to bill Medicare will be scrutinized even more carefully.

Healthcare providers were outraged by the accusations in the letter, and said that the reason for the increased billing was that EHRs made it easier to bill for services they had previously provided to the government without charging for them.

About the same time, professors Stephen Soumerai from Harvard Medical School and Ross Koppel from the University of Pennsylvania wrote an article for the Wall Street Journal contending that EHRs don’t save money as claimed. They wrote: “…the most rigorous studies to date contradict the widely broadcast claims that the national investment in health IT—some $1 trillion will be spent, by our estimate—will pay off in reducing medical costs. Those studies that do claim savings rarely include the full cost of installation, training and maintenance—a large chunk of that trillion dollars—for the nation's nearly 6000 hospitals and more than 600 000 physicians. But by the time these health-care providers find out that the promised cost savings are an illusion, it will be too late. Having spent hundreds of millions on the technology, they won't be able to afford to throw it out like a defective toaster.”

The professors went on to say that, “We fully share the hope that health IT will achieve the promised cost and quality benefits. As applied researchers and evaluators, we actively work to realize both goals. But this will require an accurate appraisal of the technology's successes and failures, not a mixture of cheerleading and financial pressure by government agencies based on unsubstantiated promises.”

IT Hiccups of the Week: NASA Rover Curiosity Placed Into Safe Mode

It’s been a fairly quiet week in regard to IT glitches of any major significance. That said, there were still a sufficient number of snarls, snafus and errors to interfere with work as well as generally upset, annoy and outrage a lot of people. We start off this week's review with an issue affecting NASA’s $2.5 billion Mars rover mission.

NASA Curiosity Goes into Safe Mode Due to Memory Issue

Responding to a problem it detected Wednesday morning with the data coming from the Mars rover Curiosity, NASA announced on Thursday that it had “switched the rover to a redundant onboard computer in response to a memory issue on the computer that had been active.”

NASA said that it will shift the rover from its current “safe mode” operation to full operational status over the next few days as well as troubleshoot what is causing the “glitch in flash memory linked to the other, now-inactive, computer.”

The NASA press release stated that on Wednesday the rover communicated “at all scheduled communication windows…but it did not send recorded data, only current status information. The status information revealed that the computer had not switched to the usual daily ‘sleep’ mode when planned. Diagnostic work in a testing simulation at JPL indicates the situation involved corrupted memory at an A-side memory location used for addressing memory files.”

A detailed story at CNET quoted Curiosity Project Manager Richard Cook as telling CBS News that, “We were in a state where the software was partially working and partially not, and we wanted to switch from that state to a pristine version of the software running on a pristine set of hardware.”

The project team thinks that space radiation, normally considered only a remote possibility, may in fact be to blame, CNET said. Again quoting Cook:

“In general, there are lots of layers of protection, the memory is self correcting and the software is supposed to be tolerant to it…But what we are theorizing happened is that we got what's called a double bit error, where you get an uncorrectable memory error in a particularly sensitive place, which is where the directory for the whole memory was sitting…So you essentially lost knowledge of where everything was. Again, software is supposed to be tolerant of that...But it looks like there was potentially a problem where software kind of got into a confused state where parts of the software were working fine but other parts of software were kind of waiting on the memory to do something...and the hardware was confused as to where things were.”
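Cook’s “double bit error” refers to a standard property of ECC memory: single-error-correct, double-error-detect (SECDED). A parity syndrome pinpoints and repairs one flipped bit, but two flips produce a syndrome that can be detected yet not trusted for correction. A toy sketch of the idea, using a Hamming(7,4) code plus an overall parity bit over 4 data bits (illustrative only; not the rover’s actual ECC scheme):

```python
# Toy SECDED code: Hamming(7,4) plus an overall parity bit.
# One flipped bit is located and corrected; two flips are detected
# but cannot be corrected, matching Cook's "double bit error" scenario.

def encode(d):
    # d: list of 4 data bits -> 8-bit codeword [p1, p2, d1, p3, d2, d3, d4, overall]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4              # parity over codeword positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4              # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4              # parity over positions 5, 6, 7
    word = [p1, p2, d1, p3, d2, d3, d4]
    overall = 0
    for b in word:
        overall ^= b               # extra parity bit over the whole word
    return word + [overall]

def decode(w):
    # Returns (status, data); status is "ok", "corrected", or "double-error".
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # 1-based position of a single flip
    overall = 0
    for b in w:
        overall ^= b
    if syndrome and overall:             # exactly one flip: repair it
        w = w[:]
        w[syndrome - 1] ^= 1
        return "corrected", [w[2], w[4], w[5], w[6]]
    if syndrome and not overall:         # two flips: detectable, not repairable
        return "double-error", None
    if overall:                          # only the parity bit itself flipped
        return "corrected", [w[2], w[4], w[5], w[6]]
    return "ok", [w[2], w[4], w[5], w[6]]

word = encode([1, 0, 1, 1])
word[4] ^= 1                             # single bit flip
print(decode(word))                      # ('corrected', [1, 0, 1, 1])
```

When the double-bit error lands in a critical structure, such as the memory directory Cook describes, the ECC layer can only flag the corruption, which is why the software fell back to the redundant computer.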

Cook indicated that, in essence, a reboot of the inactive computer should clear things up, but that the team will do a lot of analysis before that happens to make sure that there isn’t anything more troublesome lurking about.

This Week in Cybercrime: Stuxnet Two Years Older Than Previously Believed

Stuxnet’s Development Program Was a Long Thought-Out Process

On Tuesday, researchers from Symantec’s Security Response team released a report offering proof that the Stuxnet worm that targeted industrial facilities in Iran—most especially the Natanz uranium enrichment facility suspected to be part of an Iranian effort to produce nuclear weapons—is two years older than previously thought. The 18-page report reveals that development of the malware dates back to 2005, although it first appeared in the wild in 2007. It wasn’t identified until July 2010. What explains the two-year lead time? An extended refinement process was probably part of what made Stuxnet and its precursor, Flame, so sophisticated. The exploits these bits of malware pulled off without attracting attention were "nothing short of amazing," Mikko H. Hypponen, chief research officer for F-Secure, a security firm in Helsinki, Finland, told IEEE Spectrum. Furthermore, says Hypponen, "You need a supercomputer and loads of scientists to do this." Symantec acknowledges that Stuxnet, which was designed to “take snapshots of the normal running state of the system, replay normal operating values during an attack so that the operators are unaware that the system is not operating normally... [and] prevent modification to the [compromised system] in case the operator tries to change any settings during the course of an attack cycle” is among the most complicated coding ever seen.

For more on how Stuxnet really worked and on the efforts to track it down, see "The Real Story of Stuxnet" in this month's issue of IEEE Spectrum.

Advanced Malware Escapes Sandbox with Help from Twitter

New malware designed to steal sensitive information exploits a patched sandbox-bypass vulnerability in Adobe Reader. The malicious code, dubbed MiniDuke by the researchers at Kaspersky Lab and CrySyS Lab, who discovered it and released a report about it this week, has attacked the systems of government agencies in 23 countries, mostly in Europe. Among its novel features are the use of steganography to hide the code it uses to create, then slip in and out of, backdoors in the compromised systems; the ability to assess whether a computer is in use; and the ability to determine what detection capability the machine has. MiniDuke can also reach out to Twitter accounts created by the attackers to access tweets seeded with information pointing to command and control servers offering continually updated commands and encrypted backdoors. MiniDuke successfully bypassed the sandbox protection in Adobe Reader despite a patch, issued on 20 February, that was meant to close the vulnerability.

West Virginia Taken to the Cleaners by Cisco

There was a great story over at Ars Technica this week regarding a recently published special audit report (pdf) by West Virginia’s Legislative Auditor concerning the state’s purchase three years ago of 1164 Cisco model 3945 routers at a price of US $24 million using federal stimulus funds (a tip of the hat to a Risk Factor reader for bringing this to our attention in a comment to a recent post). The auditor concluded that not only did the purchase bypass the state’s competitive purchasing rules for IT equipment, but the state also bought far more capability than it will ever need, now or in the foreseeable future, and at non-competitive prices to boot.

The audit report cites, for example, the “city of Clay in Clay County [which] received 7 total routers to serve a population of 491. Five of these routers are located within .44 miles of each other.” The cost of those seven routers—each of which can support 200 simultaneous users—was around $20 000 apiece.

The auditor noted that over $6.6 million was spent on Cisco model 3945 router features that weren’t necessary to begin with. Furthermore, if the state had actually purchased the correctly sized routers, it could have saved at least another $8 million or so. I say at least, because that number is based on router prices quoted in a non-competitive bidding environment—holding a competition that included other router manufacturers (Alcatel-Lucent, Brocade, HP, Juniper, et al.) would have likely saved even more money. For each $5 million saved on routers, the state could have purchased 104 additional miles of needed broadband fiber, the auditor noted.
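The auditor’s router-versus-fiber tradeoff is straightforward arithmetic, and it can be sanity-checked. Note that the per-mile fiber cost below is derived from the auditor’s $5 million-per-104-miles figure, not quoted directly from the report:

```python
# Back-of-the-envelope check of the audit's figures as summarized above.
router_total = 24_000_000      # total spent on 1164 Cisco 3945 routers
routers = 1164
per_router = router_total / routers
print(round(per_router))       # 20619 -- matches the ~$20 000-apiece figure

# Implied cost per mile of broadband fiber (derived, not stated in the report):
fiber_miles_per_5m = 104
per_mile = 5_000_000 / fiber_miles_per_5m
print(round(per_mile))         # 48077 -- roughly $48 000 per mile
```

At that derived rate, the $8 million or so lost to oversized routers alone works out to well over 160 miles of forgone fiber.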

I name those manufacturers specifically because the West Virginia audit report points to “California State University, the largest four-year university in America, [which] used a competitive bidding purchase to purchase an eight-year refreshing of its 23-campus 10G network. The Director of Cyber Infrastructure of California State University provided documentation showing that Alcatel-Lucent won the project with a bid of $22 million. Cisco’s bid was $122.8 million. The other bids were Brocade at $24 million, Juniper at $31.6 million, and HP at $41 million. Furthermore in May of 2011, Purdue University bid out replacement components for its Hansen Computer Cluster. Cisco won the Purdue University competitive bid process by offering a 76 percent discount off the cost of its products.”

Why did this wasteful fiasco happen? The audit report basically says no one really knows for certain—or at least is willing to 'fess up to being the party who screwed up: stuff just sort of happened. The best that can be determined is that those receiving the federal stimulus funds wanted to spend as much of them as fast as possible, need be damned. Or in the auditor’s words, “Those making the decisions on how to spend the money did not consult individuals with technical knowledge on the best methods to utilize the funds.”

IT Hiccups of the Week: At least 17.4 Million U.S. Medication Errors Avoided by Hospital Computerized Provider Order Entry Systems

This past week has seen a hodgepodge of IT-related uff das, glitches and snarls. However, we are going to start this week off with millions of human errors avoided by IT.

Computerized Provider Order Entry Systems Avoid an Estimated 17.4 Million Medication Errors Per Year

Last week, the Journal of the American Medical Informatics Association (JAMIA) published a study that estimated the reduction in medication errors in U.S. hospitals that could reasonably be attributed to their computerized provider order entry (CPOE) systems.  The study’s authors said that they “conducted a systematic literature review and applied random-effects meta-analytic techniques” to develop a “pooled estimate” of the effects of CPOEs on medication errors.

They then took this estimate and combined it “with data from the 2006 American Society of Health-System Pharmacists Annual Survey, the 2007 American Hospital Association Annual Survey, and the latter's 2008 Electronic Health Record Adoption Database supplement to estimate the percentage and absolute reduction in medication errors attributable to CPOE.”

Working through the data, the authors concluded that a CPOE system decreases the likelihood of error by about 48 percent. “Given this effect size,” say the authors, “and the degree of CPOE adoption and use in hospitals in 2008, we estimate a 12.5% reduction in medication errors, or ∼17.4 million medication errors averted in the USA in 1 year.”
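The study’s headline numbers chain together with simple arithmetic, which can be back-solved from the published figures. The intermediate quantities below (the share of orders going through CPOE, and the implied total annual error count) are derived here for illustration, not quoted from the paper:

```python
# Back-solving the JAMIA estimate's arithmetic from its published figures.
per_order_reduction = 0.48   # pooled estimate: CPOE cuts error likelihood ~48%
overall_reduction = 0.125    # published: 12.5% of all medication errors averted
errors_averted = 17.4e6      # published: ~17.4 million errors averted per year

# Implied fraction of all medication orders processed through a CPOE system:
cpoe_order_share = overall_reduction / per_order_reduction
print(round(cpoe_order_share, 2))    # 0.26 -- about a quarter of orders

# Implied baseline: total annual medication errors the 12.5% is a slice of:
total_errors = errors_averted / overall_reduction
print(f"{total_errors:.3e}")         # 1.392e+08 -- ~139 million errors per year
```

The derived ~26 percent order share also squares with the article’s note that CPOE adoption was still far from universal at the time.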

The study authors are careful to note that it is unclear whether this reduction in medication error actually “translates into reduced harm for patients,” although the research tends to lead one towards that conclusion.

The number of medication errors avoided because of CPOEs is expected to rise as more hospitals install them. Only about 20 percent of U.S. hospitals had deployed CPOE systems as of the middle of 2012.

Déjà Vu All Over Again: California’s DMV IT Project Cancelled

The Golden State's Department of Motor Vehicles (DMV) must think it has checked into an IT version of Hotel California, where once a DMV modernization project is started, it can never be finished.

Last week, on behalf of DMV's management, California’s CIO informed state legislators that it had decided to cancel at the end of January the remainder of its US $208 million, 6-year IT modernization project with Hewlett-Packard, which was supposed to be completed in May of this year. As reported in the LA Times, after spending some $134 million ($50 million of it on HP) and having “significant concerns with the lack of progress,” the DMV decided to call it quits and do a rethink of the program’s direction. HP had apparently seen the handwriting on the wall. Its contract ended last November, and HP refused to hire key staff until the contract was renegotiated.

The DMV IT modernization program was started in 2006 in the wake of a previous DMV project failure (called Info/California) that blew through $44 million between its start in 1987 and cancellation in 1994. That “hopeless failure,” as it was then described, was supposed to be a 5-year, $28 million effort; when it was terminated seven years in, the project’s cost to complete had skyrocketed to an estimated $201 million with an uncertain finish date. A 1994 LA Times story reported that an assessment found the DMV had limited experience in computer technology, grossly underestimated the project’s scope and size, and lacked consistent and sustained management. The project's failure also sparked a full legislative probe.

The current DMV debacle, along with this month’s termination of the MyCalPAYS project, has spurred calls for yet another probe. Legislators could save a lot of time and money by just cutting and pasting from the earlier project's investigation. I'm sure they'll find a lot of the same inexperience, underestimating, and inconsistent management.

Not all was lost in the current effort: at least a new system for issuing California drivers’ licenses was rolled out. However, the critical vehicle registration portion of the DMV system, with its decades-old “dangerously antiquated technology” (pdf), will have to stay in use while a new go-forward plan is developed.

IT Hiccups of the Week: U.K. O2 Mobile Customers Told To Be Careful What They Say

This week’s IT snafus and snarls have a definite international flavor to them. The first story takes us to the U.K., and a story of some “crossed lines.”

O2 Customers Complain About Eavesdropping on Calls

Last Tuesday, the Register ran a story about some Birmingham, England-area customers of U.K. mobile provider O2 being able to listen in on calls apparently originating in Scotland. According to the Register, customers started to complain about the “crossed lines” the previous week, but the weekend was nearly over before O2 was even able to confirm that this eavesdropping was indeed happening. Still, said O2 to the Register on Monday, it was “unable to replicate the problem despite having received ‘a handful’ of complaints.”

Then a story in the London Telegraph said that the problem had spread beyond Birmingham to Scotland, Wales, and Liverpool, and potentially involved anyone using the O2 network in the affected areas.

On Thursday, a Daily Mail story reported that O2 had traced the problem to a network cable and card. The Mail quoted an O2 spokesperson as saying that, “We had a problem with a network card responsible for transferring call traffic in the Birmingham area which resulted in a handful of customers experiencing crossed lines during phone conversations...Our engineers identified that a cable linked to the card was not working correctly and fixed the problem at 6.15pm on Tuesday. We have been monitoring the situation closely with no further reported issues. We apologise for any inconvenience caused to our customers.”

During the eavesdropping interlude, U.K. financial expert Martin Lewis warned O2 and other wireless customers to be careful what they said, especially concerning their financial and personal affairs. But according to the Register, this same problem has been intermittently reported by O2 customers since 2010, and Lewis’s warning is probably good advice given that the U.K. security services want to snoop on all phone calls being made.

U.S. Agency Issues Call for National Cybersecurity Standards

In the post-Stuxnet world, the prospect of undeclared cyberwar has been dragged out of the shadows to the front pages. With that in mind, yesterday the U.S. National Institute of Standards and Technology (NIST) kicked off an effort to establish a set of best practices for protecting the networks and computers that run the country’s critical infrastructure. The Cybersecurity Framework was initiated at the behest of President Barack Obama, who issued an executive order calling for a common core of standards and procedures aimed at keeping power plants and financial, transportation, and communication systems from falling prey to any of a wide range of cybersecurity threats.

The first step, says NIST, will be a formal Request for Information from infrastructure owners and operators, plus federal agencies, local government authorities, and other standards-setting organizations. NIST says it wants to know what has been effective in terms of keeping the wolves at bay. To that end, it will hold a series of workshops over the next few months where it will gather more input. The agency says that when the framework is completed in about a year, it should give organizations “a menu of management, operational, and technical security controls, including policies and processes” that will make them reasonably sure that their efforts represent an effective use of their time and resources. 

Oddly, though, the press release announcing the development of the Cybersecurity Framework makes no mention that the final public version of a report titled “Security and Privacy Controls for Federal Information Systems and Organizations” was released on 5 February, or that its public comment period continues through 1 March.

California’s Payroll Project Debacle: Another $50 Million Up in Smoke

Ah, I love the smell of napalmed IT projects in the morning!

Not, though, when they are government IT projects and the wafting odor is from taxpayer monies going up in smoke. And unfortunately, for the past few weeks, the stench of burning government IT projects has been especially pungent.

We start off in California, where after burning through some $50 million, California State Controller John Chiang announced last Friday he had decided to terminate the state’s US $89.7 million contract “with SAP as the system integrator for the MyCalPAYS system, the largest payroll modernization effort in the nation.” The planned 5-phase effort mercifully never made it past the first pilot phase.

Furthermore, Chiang said that the Secretary of the California Technology Agency (CTA)  has “suspended further work until the CTA and SCO [State Controller’s Office] together conduct an independent assessment of SAP’s system to determine whether any of SAP’s work can be used in the SCO’s go-forward plan to address the State’s business needs.”

You may remember that Chiang sent SAP a letter last October warning that the project was “foundering and is in danger of collapsing,” and gave SAP one last chance in the form of a demand for urgent get-well efforts from the company. Chiang claimed that there were errors in one out of every three tasks performed by SAP's system, and that there hadn’t been a single pay cycle without material payroll errors occurring.

In Friday’s announcement, Chiang threw in the towel. He said that while he had hoped “for a successful cure to SAP’s failure to deliver an accurate, stable, reliable payroll system, SAP has not demonstrated an ability to do so.” This was especially disheartening, Chiang implied, given that the SAP effort covered only 1300 SCO employees who had “fairly simple payroll requirements.”  There was no way the SAP system could be trusted to support the payroll requirements of the state's "240 000 employees, operating out of 160 different departments, under 21 different bargaining units."

SAP said in response to the news of its contract termination that it was “extremely disappointed in the actions. SAP stands behind our software and actions.... SAP also believes we have satisfied all contractual obligations in this project.”

All of this, of course, suggests that when the napalm smoke clears, a date in court will be in the offing. Chiang as much as said so in the announcement: “The SCO will pursue every contractual and legal option available to hold SAP accountable for its failed performance and to protect the interests of the State and its taxpayers. This includes contractually required mediation and, if necessary, litigation.”

An SCO spokesperson called the project’s performance “frightening,” but what must be really frightening to California taxpayers is the state's continued inability to manage the acquisition of its IT projects. Nearly $254 million has been spent so far in two unsuccessful attempts to get a state government payroll system in place, the LA Times reports. If SAP fights instead of settles, it would at least be a public service, exposing the depth of California’s IT project risk mismanagement.

The upshot is that California will continue to use its decades-old Cobol-based payroll system until it figures out what to do next. And to help it figure that out, the SCO has—in the best tradition of government—set up an IT Procurement Task Force. Whenever in doubt, form a committee.

I hope the Task Force members have strong stomachs; the stench of IT project failure coming out of California is of the mephitis variety.

Risk Factor

IEEE Spectrum's risk analysis blog, featuring daily news, updates and analysis on computing and IT projects, software and systems failures, successes and innovations, security threats, and more.

Willie D. Jones