IARPA’s New Director Wants You to Surprise Him

Jason Matheny, former leader of the Office for Anticipating Surprise, hopes to cast a wide net to help solve spy-agency problems


Got a concept for cutting-edge spy tech? Jason Matheny, who was named director of the Intelligence Advanced Research Projects Activity (IARPA) in August, wants your great ideas. The agency, established in 2006, invests in high-risk, high-payoff research to solve problems faced by the U.S. intelligence community. Partly due to Matheny’s work, the agency is tapping resources outside of government, including crowdsourcing ideas from the general public.

Matheny joined IARPA in 2009 after a career spanning academia—Oxford University, Princeton University, and the Johns Hopkins Applied Physics Laboratory—and the startup world. He previously headed IARPA's comically named Office for Anticipating Surprise, which develops new forecasting capabilities, and served as program manager of Aggregative Contingent Estimation (ACE), a program that crowdsourced forecasts on geopolitical questions from more than 20,000 people. He also worked in the agency's Office of Incisive Analysis, which funds research on making sense of large, complex data sets.

IEEE Spectrum contributing editor Tam Harbert talked with Matheny about the agency’s recent work and his goals for the future.

Jason Matheny on…

  1. Predicting the Future
  2. Supercomputers for Spy Work
  3. Winning Fabulous Prizes

IEEE Spectrum: Because IARPA is associated with the intelligence community, people assume its research is secret. Yet you seem pretty open in collaborating with industry and academia, and even crowdsourcing.

Jason Matheny: One of the features that distinguishes IARPA is its degree of openness and external engagement. We know that a lot of the problems cannot be solved internally. Unlike the secretive Q in the James Bond movies, who is always developing gadgets for 007 deep inside a lab, IARPA identifies Qs who are working outside—in academia and industry—on important technologies. We find these Qs by putting out calls for research proposals and by staging research tournaments.

Most of IARPA’s work is unclassified, and we make sure that work gets published. We also make the data sets produced by our research publicly available. These data sets may be annotated speech files for speech recognition, for example, or event data sets of political instability. They can be quite useful to other researchers who are working on related problems.

IEEE Spectrum: You’ve been at IARPA since its founding. What have been IARPA’s biggest accomplishments so far?

Jason Matheny: First is getting an organization into place that funds high-risk, high-reward research. It’s a nontrivial challenge in government to create such an organization and to sustain it.

Specific successes include our research on human judgment, including the Aggregative Contingent Estimation program, which ran from June 2010 until June 2015. [According to IARPA's website, the goal of ACE was to develop advanced techniques that combine the judgments of many analysts in ways that would enhance the accuracy, precision, and timeliness of intelligence forecasts.] This was the world's largest forecasting experiment: it involved more than 20,000 people and collected over 2 million crowdsourced judgments on hundreds of geopolitical questions. It asked participants to forecast, for example, who would win a political election or which countries would go to war. We kept score of whose forecasts were right and whose were wrong, studied what distinguished the good forecasters from the not-so-good ones, and discovered ways of combining individual judgments to create forecasts better than those of any single individual.

ACE provided a template for how to do a range of research at IARPA. Specifically, it inspired several other IARPA forecasting tournaments, including a program to forecast cyberattacks, a program to forecast disease outbreaks and political instability, a program to forecast military mobilization and terrorism, and a program to forecast insider threats.

There is also a program called Forecasting Science & Technology (ForeST). As far as we know, ForeST was the world's largest tournament in science and technology forecasting. Over a two-year period, about 15,000 people forecast roughly 1,000 science and technology milestones. What would be the most advanced photovoltaic cell by the end of 2014? Which machine would top the TOP500 list of supercomputers by the end of 2013? IEEE was a partner in some of that work.

That program, which ended in June 2015, found that by crowdsourcing these science and technology forecasts, we could significantly outperform traditional statistical forecasts of technology trends. We could also outperform small groups of experts. And that’s a common theme in some of the research that we funded. If you want a good forecast, you probably don’t depend on a single model, or a single expert, or a small group of deliberating experts. It makes sense to look at a range of models, a range of individuals, and take some sort of average of those forecasts.
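To make the idea concrete, here is a minimal, hypothetical sketch of the kind of aggregation Matheny describes: pooling many individual probability forecasts (here with a simple trimmed mean) and scoring the result against the actual outcome with a Brier score. The numbers, function names, and trimming choice are illustrative assumptions, not IARPA's actual data or methods.

```python
import numpy as np

def aggregate_forecasts(probs, trim=0.1):
    """Combine many forecasters' probabilities for one yes/no event.

    probs: probabilities between 0 and 1, one per forecaster.
    trim:  fraction of extreme forecasts dropped from each tail, a
           simple way to blunt the effect of outliers.
    Returns a single pooled probability.
    """
    p = np.sort(np.asarray(probs, dtype=float))
    k = int(len(p) * trim)
    return p[k:len(p) - k].mean() if len(p) > 2 * k else p.mean()

def brier_score(forecast, outcome):
    """Squared error between a probability forecast and the 0/1 outcome;
    lower is better (0 is perfect, 0.25 is roughly chance-level)."""
    return (forecast - outcome) ** 2

# Illustrative example: seven forecasters on one geopolitical question.
crowd = [0.9, 0.7, 0.65, 0.8, 0.2, 0.75, 0.85]
pooled = aggregate_forecasts(crowd)
outcome = 1  # the event occurred
print(f"pooled forecast: {pooled:.2f}, Brier score: {brier_score(pooled, outcome):.3f}")
print(f"mean individual Brier score: "
      f"{np.mean([brier_score(p, outcome) for p in crowd]):.3f}")
```

In practice a median, a weighted average that favors forecasters with good track records, or a recalibrated mean could replace the trimmed mean; the point of the sketch is simply that pooling many judgments and keeping score is straightforward to mechanize.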

IEEE Spectrum: So what are you doing with these forecasts? How do you apply them? Have you been able to successfully predict certain events, such as what ISIS will do next or the Ebola outbreak?

Jason Matheny: There have been a number of successes. A team in our Open Source Indicators (OSI) program was the first to notify U.S. public health officials about the Ebola outbreak in West Africa. It identified the outbreak from automated detection of news reports of an undiagnosed hemorrhagic fever in West Africa. That program also accurately forecast the Brazilian Spring, a series of nationwide protests in Brazil in 2013.


IEEE Spectrum: What other IARPA research may specifically interest IEEE’s readership?

Jason Matheny: In electrical engineering, we have a program focused on developing the next generation of high-performance computers based on cryogenic computing. The goal of the program, called Cryogenic Computing Complexity (C3), is to create a computer that requires only 20 percent or less of the energy used by a traditional supercomputer. Rather than using semiconductors to move information between logic and memory, it uses superconductors.

Today’s supercomputers can’t scale up to exascale without requiring a football field of machines and a power plant large enough to supply a midsize city. With cryogenic computing, you reduce to essentially zero the energy cost of moving bits around, because you’re using superconductors rather than semiconductors. However, this requires a whole new kind of chip. Rather than CMOS, you’re using niobium, a superconductor. And you’re using a different kind of logic and memory, so it requires a different way of programming the computers.

The C3 program is among the foundational research and development efforts for the new National Strategic Computing Initiative, created by an executive order President Obama issued in July.

The other project that might be of interest is quantum computing, which is something we’ve invested in for several years. In fact, one of the researchers we’ve funded [physicist David Wineland] received the 2012 Nobel Prize in Physics for his work on quantum computing.

IEEE Spectrum: A lot of the work you’re doing is with algorithms and forecasting. The logical partner in the commercial world is Google. Have you teamed up with it or other companies that are experts in search and algorithmic development?

Jason Matheny: Our goal is not to replicate what’s being done in industry. We try to find the problems that aren’t going to be solved by industry or academia without our investment. We stay updated on what industry is already doing so we can identify where we can make advances that industry isn’t pursuing.

For example, Google searches video for tags but doesn’t actually look at the content of the video itself. For most of the videos that Google indexes, that’s enough. The problem for us is that the videos we need to find might be martyrdom videos of people planning a suicide bombing, or IED-placement videos. Those videos are not tagged because the people posting them want them to be found only after the fact, or only by those who were sent a link. They don’t want them to be searchable in the same way.

So we need to be able to index the video content as opposed to just the tags that are associated with the video. That’s meant a completely different approach to video search. We’ve led the way through a program called Aladdin Video, which indexes based on what’s happening inside the video. For example, it can characterize that a video is of a baseball game, or of a traffic jam, without any tags associated with the videos.
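As a rough illustration of indexing a clip by its content rather than its tags, here is a minimal, hypothetical sketch (not Aladdin Video itself): sample frames from a video, classify each frame with an off-the-shelf image model, and label the clip with the most common prediction. The libraries, model choice, and file path are assumptions made for illustration only.

```python
import cv2                      # frame extraction from video files
import torch
from torchvision import models, transforms

def label_video(path, frame_stride=30):
    """Sample every Nth frame, classify each with a pretrained image model,
    and return the most common predicted label for the whole clip."""
    weights = models.ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()          # resize, crop, normalize

    votes = {}
    cap = cv2.VideoCapture(path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % frame_stride == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            tensor = preprocess(transforms.ToPILImage()(rgb)).unsqueeze(0)
            with torch.no_grad():
                pred = model(tensor).argmax(dim=1).item()
            votes[pred] = votes.get(pred, 0) + 1
        idx += 1
    cap.release()
    top = max(votes, key=votes.get)
    return weights.meta["categories"][top]     # human-readable label

# Hypothetical usage:
# print(label_video("clip.mp4"))   # e.g. "baseball player" for a ballgame clip
```

A production system would go much further, modeling motion, audio, and events over time rather than voting over single-frame labels, but the sketch shows why no user-supplied tags are needed to decide what a video depicts.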

Another example is a program called Finder, which geolocates images. If you don’t know where a picture was taken, can you figure that out by analyzing the features in the picture, like the arrangement of trees or a certain mountain skyline? There aren’t a whole lot of commercial applications for that, because people usually want you to know where their photos were taken and have tagged them themselves. But the classic problem for us would be the images of Osama bin Laden. Can we tell which cave this is, which part of Pakistan this is?


IEEE Spectrum: What are your priorities and goals for the agency moving forward?

Jason Matheny: One is to increase the number of prize-driven challenges. We’ve run a couple of challenges in which we don’t limit the research to those to whom we’ve awarded a research contract but instead open the competition to anybody who wants to participate. These two challenges were Automatic SPeech recognition In Reverberant Environments (ASpIRE) and Investigating Novel Statistical Techniques to Identify Neurophysiological Correlates of Trustworthiness (INSTINCT).

A lot of people who have great ideas are not part of the federal contracting system. They wouldn’t even know how to team up with others to assemble a proposal. But they might be working in their basement on a data-science project and have an idea for how to solve an important problem. Prize challenges are a very cost-effective way to identify that talent around the world.

Those challenges were successful in that they exceeded our goals and were incredibly cost effective. The value of the prizes was relatively low. [ASpIRE, for example, awarded US $110,000 to the winning researchers.] Most people compete because they’re tinkerers. They want bragging rights for having succeeded, and they’re genuinely interested in working on a hard problem; being handed data for that problem is pretty enticing.

Two is to increase the number of programs, and program managers, that IARPA has. We’re always hiring. We’re always looking for new program managers who are passionate about a research problem and would like to spend a few years in government trying to solve that problem by getting the best and brightest to work on it. Scientists and engineers who want to come to IARPA should think of pitching us a program.

