FTC Puts Uber on a Short Leash for Security Breaches

For the next 20 years, the agency will review reports on Uber’s privacy and security practices

A tablet sits on display at an Uber Technologies Inc. office
Photo: Huiying Ore/Bloomberg via Getty Images

It’s not nice, or smart, to deceive the U.S. Federal Trade Commission, especially while you’re in negotiations with the agency over the penalties it’s going to impose for previously being dishonest.

Last August, the ride-hailing company Uber entered into a consent agreement with the FTC regarding its supposedly “securely stored” and “closely monitored” (pdf) customer and driver information. Uber bragged that it was using “the most up-to-date technology and services to ensure that none of these are compromised,” and promised that information was “encrypted to the highest security standards available.”

Alas, the FTC found these claims were more chimera than reality. As a consequence of its lackadaisical security practices, Uber experienced a data breach in May 2014 that allowed attackers to access the names and driver’s license numbers of some 100,000 Uber drivers, along with many of those drivers’ bank account and Social Security numbers.

In that consent agreement, Uber agreed to stop misrepresenting the quality of its security and privacy practices; put a comprehensive privacy program into place; and obtain independent third-party risk assessments of its privacy program every two years for the next 20 years. The first assessment report would be sent to the FTC, while the rest would be retained by Uber, which promised to act on any recommendations made in the reports.

Then, in November 2017, Uber admitted there had been another data breach about a year earlier. This time, hackers accessed some 25.6 million names and email addresses, 22.1 million names and mobile phone numbers, and 607,000 names and driver’s license numbers of U.S. Uber drivers and customers. Furthermore, Uber confessed that it had paid the intruders $100,000 in ransom, disguised as a “bug bounty,” to delete the data and keep the breach out of the public eye.

Naturally, the FTC was miffed that Uber failed to disclose this second breach even as the company was finalizing a consent agreement involving the 2014 breach. The FTC’s irritation increased when it found that the intruders used virtually the same attack method in both breaches.

Last week, Uber and the FTC finally settled on a revised consent agreement that now covers both the 2014 and 2016 breaches. The new agreement calls for even more comprehensive security and privacy risk assessments, covering the security of Uber’s software development environment and its use of its bug bounty program.

The FTC also wants all the risk assessments, not just the first one, sent to it for the next 20 years. This will give the FTC insight into whether Uber actually follows through on any recommendations to improve its privacy and security practices.

In addition, the FTC is requiring Uber to inform the agency if the company discovers a breach involving unauthorized access or acquisition of consumer information that Uber is required to report to any local, state, or federal governments. The FTC also warned Uber that non-compliance with the consent agreement could result in financial penalties.

In a blog post, the FTC emphasized that all companies, not just Uber, need to secure their software development environments. Cybercriminals have increasingly targeted software development and pre-production environments since security there is often less robust than in deployed IT systems. Uber’s two breaches provide good examples of what not to do.

For instance, in the 2014 data breach, the FTC reports (pdf) that, “An intruder was able to access consumers’ personal information in plain text in [Uber’s] Amazon S3 Datastore using an access key that one of [Uber’s] engineers had publicly posted to GitHub, a code-sharing website used by software developers. The publicly posted key granted full administrative privileges to all data and documents stored within [Uber’s] Amazon S3 Datastore.” Ouch.
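
To make the anti-pattern concrete (the snippet below is an illustration, not Uber’s actual code), here is a minimal Python sketch using the boto3 library: the commented-out version hard-codes an AWS access key directly in source that would end up on GitHub, while the safer version relies on boto3’s default credential chain, which resolves credentials at runtime from environment variables, a credentials profile, or an IAM role, so nothing secret is ever committed. The bucket name and key values are placeholders.

    # Hypothetical illustration -- not Uber's code. Placeholder names throughout.
    import boto3

    # Anti-pattern: credentials hard-coded in source. Anyone who can read the
    # repository (or a public copy of it) gets the same access the key grants.
    # s3 = boto3.client(
    #     "s3",
    #     aws_access_key_id="AKIAXXXXXXXXXXXXXXXX",      # never commit real keys
    #     aws_secret_access_key="wJalr...EXAMPLEKEY",
    # )

    # Safer: let boto3's default credential chain resolve credentials at runtime,
    # e.g. from environment variables, an AWS credentials profile, or an IAM role
    # attached to the instance or container. The source code contains no secrets.
    s3 = boto3.client("s3")

    # Example call; "example-datastore" is a placeholder bucket name.
    response = s3.list_objects_v2(Bucket="example-datastore", MaxKeys=10)
    for obj in response.get("Contents", []):
        print(obj["Key"])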

Then in the 2016 data breach, the FTC states (pdf) that, “Once again, intruders gained access to the Amazon S3 Datastore using an access key that an Uber engineer had posted to GitHub. This time, the key was in plain text in code that was posted to a private GitHub repository. However, Uber granted its engineers access to Uber’s GitHub repositories through engineers’ individual GitHub accounts, which engineers generally accessed through personal email addresses.”
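
A common safeguard against this class of mistake is to scan code for credential-shaped strings before it ever reaches a repository, public or private. The sketch below is a hypothetical pre-commit check in Python: it searches the staged diff for the well-known AWS access key ID format (the prefix “AKIA” followed by 16 uppercase letters or digits) and exits non-zero so the commit is blocked. In practice, teams would more likely use a dedicated secret-scanning tool such as git-secrets or truffleHog; this is only meant to illustrate the idea.

    #!/usr/bin/env python3
    # Hypothetical pre-commit check: block commits containing AWS-style access key IDs.
    # Illustration only; real teams typically rely on dedicated secret-scanning tools.
    import re
    import subprocess
    import sys

    # AWS access key IDs start with "AKIA" followed by 16 uppercase letters/digits.
    AWS_KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

    def staged_text() -> str:
        """Return the diff of changes staged for commit."""
        return subprocess.run(
            ["git", "diff", "--cached", "--unified=0"],
            capture_output=True, text=True, check=True,
        ).stdout

    def main() -> int:
        matches = AWS_KEY_PATTERN.findall(staged_text())
        if matches:
            print(f"Refusing to commit: {len(matches)} string(s) look like AWS access keys.")
            return 1  # non-zero exit makes the pre-commit hook fail
        return 0

    if __name__ == "__main__":
        sys.exit(main())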

Uber is not unique in having development environment security practices that are less than stellar. In a survey last year of U.S. developers, 52 percent admitted to running vulnerable or undeveloped web applications on their servers, and 55 percent acknowledged that their servers were directly connected to the Internet.

The revised FTC-Uber consent agreement comes on the heels of another cybersecurity-related news event. Last month, Yahoo unexpectedly settled a class action suit filed by shareholders over the financial impacts of Yahoo’s data breaches of the past five years, including a 2013 breach that involved all 3 billion of its accounts. The lawsuit contended that these breaches, some of the largest to date, caused investors financial harm because disclosures were delayed or woefully incomplete.

The lawsuit also claimed that Yahoo was, like Uber, misrepresenting the capabilities and extent of its cybersecurity practices. While Yahoo claimed it was using “best practices,” the reality was the opposite.

In the past, cybersecurity-related securities class action lawsuits routinely failed because a data breach is not by itself proof that a company’s security practices were deficient. The Yahoo settlement, however, may set a precedent and open new legal (and monetary) avenues against public companies that claim to have acceptable cybersecurity practices in place, but which do not, and then suffer a breach.

Equifax’s breach immediately comes to mind, given that its former CEO Richard F. Smith proclaimed, two weeks after the breach had been discovered but before it was disclosed, that data security and the protection of customer data were a “huge priority.” However, Equifax’s cybersecurity practices were far from robust, undermining Smith’s claim.

Add into the mix the U.S. Securities and Exchange Commission’s recently released guidance concerning how public companies should disclose their material cybersecurity risks, and the fact that enforcement of the new European General Data Protection Regulation (GDPR) begins next month, and the direct cost of data breaches may soon reach the point that corporations will actually take cybersecurity seriously.
