Last time I wrote about the social panopticon, I was being tongue-in-cheek. But today I don my tinfoil hat for real to bring you the Danger Room story of Recorded Future, a company funded by a CIA research branch and Google to mine publicly available data (including social networking data) for event prediction. If you haven't read it, go do that now. I'll be here when you get back.
The idea is to figure out for each incident who was involved, where it happened and when it might go down. Recorded Future then plots that chatter, showing online “momentum” for any given event. “The cool thing is, you can actually predict the curve, in many cases,” says company CEO Christopher Ahlberg.
Oh, Philip K. Dick, which dystopian future development can’t you predict? Minority Report gave us the department of pre-crime, and we seem to be pretty cool with galloping toward that reality. What else do you do with the "gusher of money" U.S. Defense Secretary Robert Gates describes in the Washington Post’s fantastic exposé of the homeland security boondoggle?
That money gusher, according to the Post, enabled nine years of boundless privatization of national security work. This amazing graphic lays out just how many organizations are newly in existence thanks to 9/11 and taxpayer money. And they all basically exist to do one thing: predict the future.
For Recorded Future, the tool for predicting the future is pattern recognition through social networking.
[Recorded Future] scours tens of thousands of websites, blogs and Twitter accounts to find the relationships between people, organizations, actions and incidents — both present and still-to-come. In a white paper, the company says its temporal analytics engine “goes beyond search” by “looking at the ‘invisible links’ between documents that talk about the same, or related, entities and events.”
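To make the "invisible links" idea concrete: at its simplest, you can connect documents that never cite each other just by noticing they mention the same entities. Here's a minimal toy sketch of that co-mention linking — the document names and entities are made up for illustration, and this is nothing like Recorded Future's actual engine, which adds temporal analysis on top:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical documents mapped to the entities mentioned in each.
# In a real system, entity extraction itself would be a hard NLP step.
docs = {
    "post1": {"Acme Corp", "J. Smith", "protest"},
    "post2": {"J. Smith", "city hall"},
    "post3": {"Acme Corp", "protest", "city hall"},
}

def invisible_links(docs):
    """Return pairs of documents linked by shared entities."""
    # Invert the index: entity -> set of documents mentioning it.
    by_entity = defaultdict(set)
    for doc_id, entities in docs.items():
        for entity in entities:
            by_entity[entity].add(doc_id)
    # Any two documents sharing an entity get an "invisible link".
    links = defaultdict(set)
    for entity, doc_ids in by_entity.items():
        for a, b in combinations(sorted(doc_ids), 2):
            links[(a, b)].add(entity)
    return dict(links)

print(invisible_links(docs))
# post1 and post3 are linked via "Acme Corp" and "protest",
# even though neither references the other directly.
```

The interesting (and creepy) part is that the more public chatter there is, the denser this graph gets, which is exactly why a firehose of social data is valuable to anyone doing this at scale.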
Unfortunately, Danger Room's rare sunlit example is only a hint of what’s to come. In a New Scientist article out this week (read it while it’s hot; I think it will soon disappear behind a subscriber wall) Mark Buchanan points out that researchers are using all this publicly available information to harden what has until now been the squishiest branch of human knowledge: social science.
Our online footprints (those thousands of baby pictures, all your "likes" and interminable retweets) are helping the social sciences become more like the "hard" sciences by stepping toward quantifying once-elusive qualities like the effects of public opinion. And why worry about the "stolen" Facebook data torrent when you're already voluntarily giving up the details of your offline existence with location-tracking smartphones, Foursquare check-ins, and contact lists?
The researchers Buchanan profiles "ultimately hope to discover mathematical laws that describe human behavior, and which could be used to predict what people will do."
I have two predictions.
1) After prediction comes machinery.
What kind of machinery? Consider the once soft and squishy field of "deception detection," which now has actual hardware in use by the good people at the Transportation Security Administration. Back in May, Sharon Weinberger profiled the Future Attribute Screening Technology (FAST), hardware under development that is being financed at "around $10 million a year," no doubt courtesy of the gusher of money.
FAST (a latter-day polygraph) measures nonverbal cues: Weinberger says "the idea is to have passengers walk through a portal as sensors remotely monitor their vital signs for 'malintent': a neologism meaning the intent or desire to cause harm." Inside the portal, thermal cameras and something called BioLIDAR purportedly measure tiny physiological signals including flickers in eye movement, pupil dilation, heart rate and respiration.
How long before “GoogleMe” means something completely different?
2) Recorded Future CEO Christopher Ahlberg is about to lose at least one of his funding sources.
After his proud chat with Danger Room about his cool new project, it’s hard to say which funding source will be the first to disassociate itself from this story—will it be Google, which wants to protect its “don’t be evil” reputation but already has two sizeable black eyes from street-view-data-gate and the Google Buzz privacy debacle? Or In-Q-Tel, which is likely roasting under the gaze of some new scrutiny in the wake of the Washington Post report?