That’s my impression after spending a day at the IEEE Symposium on Security and Privacy, held this week in Berkeley, Calif. I already knew that privacy protection is not something I can take for granted when using Facebook or Google (more on that coming up in Spectrum’s June special issue on social networks and the Web).
But it’s not just Facebook and Google. According to researchers, systems that seem to protect privacy in other venues aren’t doing a very good job either.
Take recommendation engines. These systems compile recommendations by comparing a user’s selections against selections made by other users in order to come up with a list of suggested materials—think Amazon, Netflix, and the like. Because personal data is tossed in with data from thousands to millions of other users, you’d think it would be lost. But according to researchers from Princeton University, the University of Texas, and Stanford University, it’s not lost; in fact, it’s often possible to figure out exactly what someone purchased by watching how the public recommendations change over time. This, said presenter Joseph Calandrino, could be worrisome to someone who has purchased books about a health problem they’d like to keep private, or books that might lead an employer to conclude that the employee is getting ready to switch jobs.
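To see why changing recommendations leak information, consider a toy sketch (my own illustration, not the researchers’ actual method): if an observer snapshots the public “customers also bought” lists for items a target is known to own, an item that newly appears across several of those lists after an update is a plausible guess at the target’s latest purchase.

```python
from collections import Counter

def infer_new_purchases(before, after, min_lists=2):
    """Flag items that newly appear in several watched related-item lists.

    before/after: dict mapping a known item -> its public recommendation list,
    captured at two points in time. Returns the set of items that showed up
    in at least `min_lists` of those lists between the two snapshots.
    """
    counts = Counter()
    for item, old_recs in before.items():
        for rec in after.get(item, []):
            if rec not in old_recs:
                counts[rec] += 1
    return {item for item, n in counts.items() if n >= min_lists}

# Hypothetical snapshots: book_z starts co-occurring with two books the
# target already owns, hinting that the target just bought it.
before = {"book_a": ["book_x"], "book_b": ["book_y"]}
after  = {"book_a": ["book_x", "book_z"], "book_b": ["book_y", "book_z"]}
print(infer_new_purchases(before, after))  # {'book_z'}
```

A real attack would of course contend with noise from millions of other users, but the core signal—correlated shifts across lists tied to one person’s known items—is the same.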
And then there’s location data. We all became aware that our mobile devices are collecting location data when news about the iPhone’s location-tracking file came out last month. But, again, it turns out that the tools available to protect this data don’t work very well, according to researchers from the École Polytechnique Fédérale de Lausanne. They looked at techniques like removing identifying information to make the data anonymous, obscuring the data at various intervals, and reducing its precision, and found that it’s not that hard to tease out the location information from behind these smoke screens. They designed a tool that they hope will be used to screen protection software for efficacy, but the message I got from their research is that it’s quite hard to hide location data, particularly if you know something about the person—for example, where they live or work.
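A toy sketch (my own, not the EPFL tool) shows why reduced precision alone is a weak smoke screen: even after coordinates are coarsened to roughly kilometer-sized cells, the cell a device occupies most often overnight is a strong guess at “home,” and matching that against a known address can re-identify the trace.

```python
from collections import Counter

def coarsen(lat, lon, digits=2):
    """Reduce coordinate precision to ~1 km grid cells."""
    return (round(lat, digits), round(lon, digits))

def likely_home(points):
    """points: list of (hour, lat, lon) samples from an 'anonymized' trace.

    Returns the coarsened cell seen most often at night (10 p.m.-6 a.m.),
    a crude but effective proxy for a home location.
    """
    night = [coarsen(lat, lon) for hour, lat, lon in points
             if hour >= 22 or hour < 6]
    return Counter(night).most_common(1)[0][0] if night else None

# Hypothetical trace: three overnight samples cluster in one cell.
trace = [(23, 37.8710, -122.2720), (2, 37.8720, -122.2730),
         (14, 37.8020, -122.4050), (3, 37.8704, -122.2721)]
print(likely_home(trace))  # (37.87, -122.27)
```

The coarsening throws away street-level detail, yet the nighttime cluster survives it—which is essentially the gap the EPFL researchers set out to measure.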
So in terms of protecting privacy, computer security researchers didn’t have much in the way of good news.
But computer security researchers had great news on one front: the war on spam, it turns out, can be won, or close to it. Researchers from the University of California at San Diego and Berkeley, the International Computer Science Institute in Berkeley, and the Budapest University of Technology and Economics worked together to analyze a host of individual spam campaigns. They were looking at how information flowed between websites and servers. And they found a choke point—the vast array of payments for goods or services advertised by spam goes through a handful of banks. And, they discovered, this banking cluster, while it changes, cannot change quickly. Therefore, countermeasures could cut off the flow of money to spammers and make becoming a spammer a less viable career choice. At the time of the study, the researchers determined that herbal and replica purchases were being cleared by a bank in St. Kitts, pharmaceutical purchases by a bank in Azerbaijan and one in Latvia, and software purchases by another bank in Latvia and one in Russia. The researchers suggested that credit card providers simply blacklist these banks, refusing to settle certain transactions with them, and update the blacklist as spammers try to shift to other banks. The blacklist, they indicated, could likely do a good job of keeping up with shifts from bank to bank, given that the group is so small.
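The proposed countermeasure is mechanically simple, as a minimal sketch makes clear (the bank identifiers here are hypothetical placeholders, not the actual institutions): a card network keeps a small, updatable blacklist of acquiring banks known to settle spam-advertised purchases and refuses to settle through them.

```python
# Hypothetical acquiring-bank identifiers; the real list would be maintained
# and updated by the card networks as spammers shift banks.
blacklisted_acquirers = {"ACQ_ST_KITTS_01", "ACQ_AZ_02", "ACQ_LV_03"}

def settle(transaction):
    """Refuse settlement when the acquiring bank is on the blacklist."""
    if transaction["acquirer"] in blacklisted_acquirers:
        return "declined: blacklisted acquiring bank"
    return "settled"

print(settle({"acquirer": "ACQ_AZ_02"}))  # declined: blacklisted acquiring bank
print(settle({"acquirer": "ACQ_DE_99"}))  # settled
```

Because the set of complicit banks is small and slow to change, keeping such a list current is far cheaper for the defenders than switching banks is for the spammers.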
Tekla S. Perry is a senior editor at IEEE Spectrum. Based in Palo Alto, Calif., she's been covering the people, companies, and technology that make Silicon Valley a special place for more than 40 years. An IEEE member, she holds a bachelor's degree in journalism from Michigan State University.