Tech Talk


Three Software Tricks for Sorting Through the Barrage of UAV Footage

PHOTO CREDIT: Lt. Col. Leslie Pratt/USAF. An MQ-9 Reaper drone flies a combat mission over southern Afghanistan.

In 2009 alone, the U.S. Air Force shot 24 years’ worth of video over Iraq and Afghanistan using spy drones (UAVs). With so many planes in the air, and more and more cameras being attached to each plane, the Air Force is generating more footage than analysts can sift through, the New York Times reported back in January. “We’re going to find ourselves in the not too distant future swimming in sensors and drowning in data,” Lt. Gen. David A. Deptula, the U.S. Air Force’s top intelligence official, told National Defense Magazine the same month.

Of course, the best way to reach the surface of such a large data pool is to get a computer to show you which way to swim. As one might guess, programming a system to automatically search video and pick out noteworthy information is not an easy problem. And so far, no one has developed software that can keep up with the Air Force’s high-tech hardware. But Mubarak Shah, who founded and now directs the Computer Vision Lab at the University of Central Florida, recently presented a few ideas about how to make improvements.

Shah, who spoke Monday at the 7th IEEE International Conference on Advanced Video and Signal-Based Surveillance in Boston, Massachusetts, is a graying, deep-eyed fellow. According to the computer engineer perched next to me, Shah is “the guy you’ve heard about if you know anything about anyone in video surveillance.”

Shah focused on three problems for surveillance software that are notoriously difficult, particularly when you’re trying to analyze video shot thousands of meters above the ground from a drone flying hundreds of kilometers per hour.

Follow the Dots

The first problem he addressed was how to track big swarms of objects, such as cars, traveling over a wide area, such as an expressway. The difficulty of this task lies in the fact that, when shot from above, cars traveling on an expressway are exceedingly small (no more than 30 pixels), and there are thousands of them. Plus, the plane that’s shooting footage is moving faster than the cars it’s capturing, so you’ve only got a few frames to work with for each car.

Shah’s solution to this problem depends on keeping track of all the possible paths a vehicle may have taken, then weeding out the poor choices based on common sense (Two vehicles probably didn’t cross paths at the same time…unless, of course, they crashed) and a bit of modern transportation theory (If one car is behind another car, it’s probably accelerating at a similar rate). It looked like he got pretty good results: about 80-90 percent accuracy.
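To give a flavor of that, here's a minimal sketch of what hypothesis pruning can look like in code. It's my own illustration under simplifying assumptions (a handful of tracks, brute-force enumeration), not Shah's actual tracker:

    # A minimal sketch of hypothesis pruning for wide-area vehicle tracking.
    # Illustrative only: detections are (x, y) positions, and each track keeps
    # a rough velocity estimate from its previous frames.
    from itertools import permutations

    def motion_cost(track, detection, dt=1.0):
        """Distance between where the track should be and where the detection is."""
        px, py = track["pos"]
        vx, vy = track["vel"]
        dx = detection[0] - (px + vx * dt)
        dy = detection[1] - (py + vy * dt)
        return (dx * dx + dy * dy) ** 0.5

    def best_assignment(tracks, detections, max_cost=15.0):
        """Brute-force over track-to-detection assignments, discarding any
        hypothesis in which a vehicle jumps implausibly far off its predicted
        path, then keeping the cheapest survivor. Fine for a handful of tracks;
        real wide-area trackers need smarter combinatorial optimization."""
        best, best_total = None, float("inf")
        for perm in permutations(range(len(detections)), len(tracks)):
            costs = [motion_cost(t, detections[j]) for t, j in zip(tracks, perm)]
            if any(c > max_cost for c in costs):   # prune the implausible
                continue
            if sum(costs) < best_total:
                best, best_total = perm, sum(costs)
        return best

    tracks = [{"pos": (0, 0), "vel": (10, 0)}, {"pos": (0, 5), "vel": (10, 0)}]
    detections = [(10.5, 0.2), (9.8, 5.1)]
    print(best_assignment(tracks, detections))   # -> (0, 1): each car keeps its lane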

Wait... Is That a Pedestrian or a Palm Tree?

If you think antlike cars are hard for a computer to follow, you can imagine the difficulty in trying to program it to find a person. Some of the best people-detection systems rely on histograms—statistical distributions that determine whether an object is a person based on probabilities. But because people are so very tiny in aerial images, the histogram method mistakes quite a lot of things (trees, mailboxes, stoplights) for people. “There can be thousands of those that are completely wrong,” Shah says. He proposes using some basic 8th-grade geometry tricks to find a person based on the relationship between the height of an object and the length of its shadow. He admits that this strategy wouldn’t work so well on video shot on cloudy days or using infrared light.
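The geometry really is middle-school stuff: an upright object's height is its shadow length times the tangent of the sun's elevation angle, which you can work out from the image's time stamp and location. Here's a rough sketch of the resulting sanity check (my illustration, not Shah's code):

    import math

    def estimated_height(shadow_length_m, sun_elevation_deg):
        """Height of an upright object from its shadow, assuming flat ground
        and a sun elevation known from the image's time stamp and location."""
        return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

    def plausibly_human(shadow_length_m, sun_elevation_deg,
                        min_height=1.2, max_height=2.1):
        """Reject candidate detections whose implied height is not person-sized."""
        return min_height <= estimated_height(shadow_length_m, sun_elevation_deg) <= max_height

    # A 3-meter shadow under a sun 30 degrees above the horizon implies a ~1.7 m object.
    print(round(estimated_height(3.0, 30), 2))   # 1.73
    print(plausibly_human(3.0, 30))              # True  -- could be a pedestrian
    print(plausibly_human(12.0, 30))             # False -- more likely a palm tree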

Mapping by Motion

The third—and in my opinion, coolest—tool Shah presented was a method for determining movement patterns. Say, for example, you have some aerial footage of an Afghan city and you want to automatically know how it’s laid out—where the roads are, the bridges, the intersections, where people regularly travel, the areas they avoid, where they gather. What a computer sees, however, is “very noisy optical flow,” Shah says—lots of motion but not much order. Using a mathematical noise-reducing tool known as a "mixture of Gaussians," Shah can find order in the static-like mess of optical data and get his software to draw a picture of the city in motion. “Using this, we can basically discover the road networks,” he says.
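To get a feel for the idea, here's a toy sketch that fits a Gaussian mixture to simulated flow vectors so that coherent "road" motion separates from background jitter. It uses scikit-learn and invented data, and it is only a gesture at Shah's pipeline, not a reconstruction of it:

    # Toy sketch: fit a Gaussian mixture to simulated optical-flow samples so
    # that coherent "road" motion separates from background jitter.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Fake flow samples of the form (x, y, dx, dy): an east-west road, a
    # north-south road, and scattered noise.
    road_ew = np.column_stack([rng.uniform(0, 100, 300), rng.normal(40, 2, 300),
                               rng.normal(5, 0.5, 300), rng.normal(0, 0.3, 300)])
    road_ns = np.column_stack([rng.normal(60, 2, 300), rng.uniform(0, 100, 300),
                               rng.normal(0, 0.3, 300), rng.normal(-5, 0.5, 300)])
    noise = np.column_stack([rng.uniform(0, 100, 200), rng.uniform(0, 100, 200),
                             rng.normal(0, 1, 200), rng.normal(0, 1, 200)])
    flow = np.vstack([road_ew, road_ns, noise])

    gmm = GaussianMixture(n_components=3, random_state=0).fit(flow)

    # Each component's mean is a typical location-plus-direction; the ones with
    # a strong, coherent direction sketch out the road network.
    for k, (x, y, dx, dy) in enumerate(gmm.means_):
        print(f"component {k}: near ({x:.0f}, {y:.0f}), typical flow ({dx:.1f}, {dy:.1f})")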

Automating video search seems an ambitious project, but one that needs to be done. The newer Reaper drones now shoot video in 10 directions at once, and the Air Force plans to eventually upgrade that number to 65. That’s 65 video streams coming from one spy plane. The deluge of data isn’t stopping, and there just aren’t enough eyeballs to sort through it all.

Google: "Goggles Does NOT Do Face Recognition"

The big news out of Hot Chips on Monday was Google's promise to have its Goggles visual search app ready for the iPhone by the end of 2010. Google Goggles project lead David Petrou also provided the inside scoop on how Goggles deciphers your images in the cloud. But the most interesting takeaway from Petrou's talk was his repeated insistence that Google Goggles does not do facial recognition—interspersed with a long tutorial on how well it would work if it did.

Augmenting your reality
Augmented reality is a step toward intuitive search, like having an insightful personal assistant following your every move, answering not just "what am I looking at?" but intuiting exactly what you want to know about it and why. For a machine, contextualizing and anticipating what you actually want is pretty difficult. Heck, it’s no picnic for a human. With that in mind, pointing your Android phone at the Eiffel Tower is pretty straightforward because there are only so many actions associated with that. 1) Here’s what you’re looking at. 2) Here’s some historical and technical information about the Eiffel Tower. 3) Here are directions to there from where you are standing.

It gets harder when you're pointing at something ambiguous. Petrou demonstrated this point by capturing a Goggles image of a random old book called "Basic Machines and How They Work."

6.5 seconds later, three results came back. The first was the book result. The second was some more information about the book. The third was the interesting part: from the picture on the cover of the book, the Goggles infrastructure had figured out that it should include a link to “manual transmission linkage.” The whole audience swooned and clapped.

"A picture is worth a thousand words. How do we pick the best three?"
Here’s how it works. You take the picture. You stare in wonder as a laser beam scans the image, distracting you while you wait the 6.5 seconds for the Google cloud to chew on your image.

During those 6.5 seconds, the image is sent to a Google front door, which passes it off to the Goggles root, which in turn sprays the image in parallel to many different, discretely housed "recognition disciplines." These are visual search engines that specialize in narrow fields such as barcodes, landmarks, DVDs, wine labels, text, logos, and so on. Petrou's slide showed about 20 of these, but it’s not clear whether the diagram was representative or for illustration purposes only.

All these discrete entities then vote on what they think the image is, and the Goggles root, electoral-college style, tallies the votes in some esoteric fashion and returns the results to the user 6.5 seconds later.
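Going only on Petrou's description, a toy version of that fan-out-and-vote design might look like the following. The discipline functions and the weighting scheme are invented for illustration; Google's real tallying is, as he put it, esoteric:

    # Toy version of the fan-out-and-vote design. The discipline functions and
    # the vote weights below are invented; this is not Google's implementation.
    from concurrent.futures import ThreadPoolExecutor

    def barcode_engine(image):   return []                           # nothing recognized
    def landmark_engine(image):  return []
    def logo_engine(image):      return [("Coca-Cola logo", 0.7)]
    def text_engine(image):      return [("text: 'Coca-Cola'", 0.9)]

    DISCIPLINES = [barcode_engine, landmark_engine, logo_engine, text_engine]

    def goggles_root(image, top_n=3):
        """Spray the image to every recognition discipline in parallel, then
        tally the weighted votes and return the best few guesses."""
        with ThreadPoolExecutor(max_workers=len(DISCIPLINES)) as pool:
            all_hits = list(pool.map(lambda engine: engine(image), DISCIPLINES))
        tally = {}
        for hits in all_hits:
            for label, confidence in hits:
                tally[label] = tally.get(label, 0.0) + confidence
        return sorted(tally.items(), key=lambda kv: -kv[1])[:top_n]

    print(goggles_root("photo_of_a_soda_can.jpg"))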

So what's it good and bad at? "Given a new photo, we can recognize the image 57 percent of the time," Petrou said. Google has bagged and tagged a database of 1 billion recognizable images at this point. It nails most corporate logos, notably Coca-Cola. It does less well with minimalist icons like the Nike swoosh. Where does it do really badly? Black cats. No kidding. In fact, it is easier for Google Goggles to recognize a specific face than to identify a black cat.

Not That Google Goggles Does Face Recognition

Google Goggles does not do face recognition. Have I mentioned that? Petrou mentioned it no fewer than four times (specifically name-checking any journalists in the audience). But he also made sure to mix his message by mentioning that Google can do face recognition. And pretty well, too!

"The more labeled samples you have—say pictures on social networks—the better we can do," Petrous said. For all his protestations that Goggles wouldn’t use facial recognition, he sure could not help himself from bragging about how awesome Goggles could hypothetically do at picking your face out of a crowd. "There’s a sweet spot, around 17 images, when this technology, given a new picture of you, will rank you in the top ten results 50 percent of the time.

When you feed it 50 pictures (not difficult given the horrifying new Facebook suggestion to tag random images of people you recognize), you will appear in the top 5 results half the time.

"We do it well but it’s not deployed." Is that a threat or a promise?
Ominously, Petrou blew right past a slide titled "Must Be Deployed Responsibly." I guess he thought Hot Chips wasn’t the audience for that kind of soft-focus Lifetime Television for Women hand-wringing.

Not true!

I heard a lot of muttering at lunch after the session from engineers referring to Google as Big Brother. Several people independently brought up the Wi-Fi sniffing fiasco.

Implications? That’s not an MP (my problem), that’s a YP (your problem)
Opening up the talk, Petrou said "society may be ready for this technology, or it might not."

In his book Halting State, British sci-fi writer Charles Stross laid out what will likely be the first implementation of Augmented Reality.

In the book, law enforcement officials are issued standard AR glasses, which can be tweaked to provide a transparent overlay the way you can turn on and off layers in Google maps. Except, what they see is not just maps and landmarks, but the dossier and criminal history of every person who crosses their path.

What would you need to make this sci-fi a reality? 1) A Google Goggles-type back-end that incorporates face recognition; 2) some jaunty AR specs; and 3) access to the databases of public records and personal information that shady aggregator websites now offer up for $49.95.

Now consider the plain (unaugmented) reality:

1) Petrou tells us that already exists.
2) Augmented reality glasses have just gotten much better.
3) Right now, the query latency is determined in part by network delays (the 6.5 seconds comes from 3G; Wi-Fi gets it down to 1.2 seconds). The coming 4G networks that MindSpeed described at Hot Chips will make the data stream much faster.

"I don't care, I’m not doing anything wrong," a commenter posted on my recent rant about social networking and the surveillance state. "No one wants to find me." Sure they do, Gerry! If someone can break into a database, they will be well-served by a centralized repository of all your pertinent information. 

What law enforcement (or Google) aggregateth, the hacker taketh away.

GreenDroid Application Processor Will Battle Dark Silicon

Dark silicon is when you have three billion transistors on your chip but you can only use 1.8 percent of them at a time, so as to squeak under the threshold of your chip's draconian energy budget. So the lights are out on vast swaths of a chip's area; hence "dark silicon." This is becoming the rule rather than the exception as process technologies shrink, and the problem is getting worse. To combat it, researchers from the University of California at San Diego are building a mobile application processor with specialized cores that flip dark silicon from a liability to a benefit. Their results, showcased at this week's Hot Chips symposium in Palo Alto, have shown up to 13-fold power efficiency gains. More intriguingly, there’s a chance this could signal competition for ARM.

The problem: Dark Silicon

We'll start with the problem, which, as UCSD grad student Nathan Goulding explained, can be called the utilization wall: With each successive process generation, power constraints cause the percentage of a chip that can actively switch to drop exponentially.

The utilization wall is changing the way people build chips. To make his point, Goulding pointed to Intel’s Nehalem architecture, whose "turbo mode" makes some cores run faster while the remaining cores are switched off.

In fact, Goulding's group tested their theory by partially replicating TSMC’s 90-nm process technology and comparing it to the 65- and 45-nm versions of the same. At 65 nm, they found that only 5 percent of the chip could remain active within a 3-watt power budget. At 45 nm, that number had dropped to 1.8 percent.
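The arithmetic behind the wall is simple enough to sketch: transistor counts grow faster than per-transistor switching power falls, so a fixed power budget buys a shrinking active fraction. The scaling factors below are generic post-Dennard assumptions of mine, not the UCSD measurements:

    # Back-of-envelope illustration of the utilization wall. The scaling factors
    # are rough assumptions, not the UCSD data: each node doubles the transistor
    # count but cuts per-transistor switching power by only ~30 percent, so a
    # fixed budget powers a shrinking slice of the chip.
    def active_fraction(power_budget_w, n_transistors, watts_per_transistor):
        return min(1.0, power_budget_w / (n_transistors * watts_per_transistor))

    budget = 3.0            # watts, the budget used in the UCSD example
    transistors = 1e8       # arbitrary starting point
    w_each = 3e-7           # arbitrary starting point

    for node in ["90 nm", "65 nm", "45 nm", "32 nm"]:
        frac = active_fraction(budget, transistors, w_each)
        print(f"{node}: {frac:.1%} of the chip can switch at once")
        transistors *= 2.0  # more transistors every generation...
        w_each *= 0.7       # ...but per-transistor power falls more slowly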

The solution: C-cores
The solution the UCSD group came up with depends on two insights:

1) Power is now more expensive than area, so trading area for reduced power constitutes a win.

2) Specialized cores can increase energy efficiency by orders of magnitude.

So, the UCSD team is developing a prototype chip that puts specialized cores in the silicon. These are called conservation cores, or C-cores. They sit alongside the general-purpose processor and share the same data cache and memory hierarchy.

The genius thing the UCSD team did is divide up code into two types: Cold code, which is code that’s infrequently used, runs on the general purpose processor as before. Hot code, by contrast, which is accessed frequently, is diverted to the C-cores to reduce energy use.
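As a toy illustration of the hot/cold split, consider the sketch below. GreenDroid does this partitioning ahead of time in its toolchain and executes hot regions on dedicated silicon; here the "C-core" is just a software stub, and the threshold and function names are invented:

    # Toy illustration of the hot/cold split. The "C-core" is a software stub
    # standing in for dedicated silicon, and the threshold is arbitrary.
    import collections
    import functools

    call_counts = collections.Counter()
    HOT_THRESHOLD = 1000

    def run_on_c_core(fn, *args, **kwargs):
        """Stand-in for handing a hot code region to a conservation core."""
        return fn(*args, **kwargs)

    def profile_and_dispatch(fn):
        """Cold code stays on the general-purpose path; once a function proves
        hot, divert it to the (stubbed) C-core path."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            call_counts[fn.__name__] += 1
            if call_counts[fn.__name__] > HOT_THRESHOLD:
                return run_on_c_core(fn, *args, **kwargs)
            return fn(*args, **kwargs)
        return wrapper

    @profile_and_dispatch
    def decode_audio_frame(frame):
        return sum(frame) % 256      # stand-in for a hot inner loop

    for i in range(2000):
        decode_audio_frame([i, i + 1, i + 2])
    print(call_counts)               # the function has long since gone "hot"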

The result: GreenDroid

GreenDroid will be a mobile application processor (targeted at Android or iPhone) whose energy efficiency has been improved by a factor of 11 by utilizing this special architecture.

UCSD's prototype chip is targeted for Android phones, which have extreme power constraints. “We think Android is a great fit for C cores,” Goulding said. That’s because when they profiled some common Android apps (including RoboDefense, Google Maps, and Pandora), it turned out that for the hottest 40,000 static instructions, 72 percent of that code was shared by multiple applications. That’s an ideal environment for C-cores.

They compared their processor to a baseline MIPS processor running at 1.5 GHz. That MIPS processor used 91 picojoules per instruction. C-cores by themselves were about 18 times more energy efficient. Combining the C-cores (running the hot code) with the general-purpose processor running the cold code yielded energy usage of 12 pJ per instruction, about a seven-fold reduction.
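Using the figures quoted in the talk, the arithmetic is easy to check:

    baseline_pj = 91    # MIPS baseline, picojoules per instruction
    combined_pj = 12    # hot code on C-cores plus cold code on the host CPU
    print(f"{baseline_pj / combined_pj:.1f}x")   # ~7.6x -- the "about seven-fold" reduction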

Caveats--or are they?

Sure, special-purpose hardware can achieve great energy efficiency. But what if the software changes? ASICs are troublesome for this very reason: if the protocol changes, the hardware becomes useless.

But the UCSD team wasn’t caught sleeping: their C-cores use patching to support changes. Better yet, cold regions aren’t even affected by changes, because they already run on the general purpose processor. Hot code can be patched by way of a reconfigurability mechanism, preventing the hardware from becoming immediately obsolete.

To be sure, the warm and fuzzy reception wasn’t homogeneous; some engineers were heard grumbling that this was just custom cores with a catchy name. But Christos Kozyrakis, the Stanford EE/CS associate professor who was session chair for the GreenDroid presentation, said that while he saw where the comments were coming from, he did not think any previous solution had been as automated. He was sufficiently impressed with the UCSD work that he chose it for his session: It's the only academic presentation at Hot Chips amid a sea of Intels and AMDs and multiple IBMs.

Three’s company?
If you believe Intel, their Atom processor will ARM-wrestle (sorry) the smartphone market into their grasp any day now.

But while Goulding insisted that the C-core technique would go with ARM like peanut butter with jelly, natural curiosity arose about why UCSD had chosen MIPS as their baseline processor instead of ARM. "We have access to some of the MIPS core," Goulding explained with -1 evasion skills. "Does that mean you’re going to partner with MIPS?" the inquisitor pressed. Goulding looked like the cat who ate the canary: "We have not discussed that yet," he said.

So pardon my ignorance, but does this mean MIPS is going to bust out a chip that will compete with ARM?


The Future of Music Technology

Ten years ago, it was nuts to think you’d have your entire music collection in your pocket. Today, no biggie. So what’s coming in the next decade? Plenty. The future of music technology is all about accessibility – easier access to both consuming and creating the music you want to hear. It starts with a massive ramp-up in mobile networks, letting people suck down music (and video) at lightning-fast speeds. Verizon Wireless and AT&T have just started deploying Long Term Evolution, or LTE, networks, capable of delivering as much as 50 megabits per second. Sprint, meanwhile, is backing WiMAX networks, which run at around 10 megabits per second.

Wireless speeds are accelerating just as more and more electronics – from televisions to car stereos – are connecting online. Technology research firm Strategy Analytics predicts that Internet-connected TVs in the US will boom from fewer than 5 million today to a whopping 107 million by 2014. Internet service providers and cable companies are already busy preparing for the great uploading. Comcast has an internal project called Project Infinity, aimed at putting massive catalogs of music and videos on demand. Cable and wireless providers are expected to bundle music subscription costs into bills, making subscribing to music services more like how we now think of cable TV. Yes, this means your music player is becoming extinct, especially when you’re toting your terabyte phone.

Streaming music subscription services like Spotify will continue to grow, letting you listen to millions of songs from an online cloud. But for those times when you’re not able to get online, you’re still going to want to store songs on your phone. And, by the end of the decade, you’ll be able to hold 20 million of them in the palm of your hand. Mobile devices are projected to hit 50 gigabytes of storage by 2020, with top-shelf phones supporting as much as one terabyte. The bigger challenge will be engineering a new generation of batteries to keep you listening.

Today, listening to your iPod requires your eyes and hands. But over the next decade, portable music devices will evolve to their more sensible home: your ears. Controlled by voice commands, all the necessary components could be housed on a chip the size of a pin – or an earring. Say the song you want to hear, and the music plays, discreetly resonating through the bone near your inner ear.

It’s also getting easier to figure out a song title you don’t know. You can hold up your phone and use Shazam, a (killer) app that identifies songs by their audio fingerprints, or, if you know a few words, punch the lyrics into a search. Now the ultimate dream of searching by humming is coming true. Computer scientists at Northwestern University are building TuneBot, an online service that matches a riff you sing against a database of others. Tone-deaf fans don’t need to sweat, because the TuneBot software analyzes the peaks and valleys of your pitch intervals rather than their exactness. Bryan Pardo, the project’s engineering lead, expects the TuneBot database to grow from the current few thousand songs to millions by 2015 – as long as music labels license their songs.
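The interval trick is easy to picture in code: compare the shape of the hummed melody (its sequence of pitch changes) against each song's contour, so a transposed or slightly sharp rendition still matches. Here's a toy sketch of the idea, not Pardo's actual matcher:

    # Toy query-by-humming matcher: compare pitch-interval contours rather than
    # absolute pitches, so a transposed (or slightly off-key) rendition still
    # matches. An illustration of the idea, not TuneBot's algorithm.
    def intervals(midi_notes):
        """Turn a note sequence into its sequence of pitch changes, in semitones."""
        return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

    def contour_distance(query, reference):
        """Average absolute difference between interval sequences,
        truncated to the shorter of the two."""
        q, r = intervals(query), intervals(reference)
        n = min(len(q), len(r))
        return sum(abs(a - b) for a, b in zip(q[:n], r[:n])) / max(n, 1)

    database = {
        "Ode to Joy":      [64, 64, 65, 67, 67, 65, 64, 62],
        "Twinkle Twinkle": [60, 60, 67, 67, 69, 69, 67],
    }

    # The user hums Ode to Joy a whole step low, with one note slightly off:
    hummed = [62, 62, 63, 65, 65, 63, 62, 61]
    ranked = sorted(database, key=lambda t: contour_distance(hummed, database[t]))
    print(ranked)   # 'Ode to Joy' comes back first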

How Do You Teach the Memristor?

Two years ago, HP Labs’ Stan Williams and Greg Snider shocked the EE community by announcing a fourth fundamental circuit design element to join the nuclear family of resistor, capacitor and inductor.

A number of news stories, including mine, cited Leon Chua--the originator of the theory of the memristor--who told IEEE Spectrum’s intrepid reporter, "Now all the textbooks must be changed.” (Nature News also carried this assessment, along with Physics World and EE Times, in which he elaborated: “All electronic textbooks have been teaching using the wrong variables--voltage and charge--explaining away inaccuracies as anomalies. What they should have been teaching is the relationship between changes in voltage, or flux, and charge.”)

So, two years on, what happened with all those EE textbooks?

Most appear untouched by the memristor's actualization. If MIT's OpenCourseWare site is any guide, Prof. Anant Agarwal’s Circuits and Electronics class makes no mention of the memristor. Nor should it—the class appears on the site as it was taught in 2007, when the memristor was still an irrelevant mathematical dalliance. At the University of California at Berkeley, instructor Josh Hug teaches an intro EE class that acknowledges the memristor in the slide that includes R, L, and C. I’m going to call that a thumbs up. At Columbia University, Prof. David Vallancourt* does not mention the memristor in his intro EE class. We’ll count that as a thumbs down.

But the 2-1 ratio revealed in my breathtakingly ad hoc survey came as no surprise to Leon Chua.

"I don't think the memristor will be taught in undergraduate courses until it is widely adopted in industry for the simple reason that any circuit containing even only one memristor must be analyzed by nonlinear techniques," he told me in an email. "Most professors are educated in linear theories, and are illiterate on nonlinear circuits." As a result, he says memristor circuits will likely be taught initially by computer simulations, like SPICE, where you don't need to know much about nonlinear circuits.  

And Chua’s prediction of the changing of the textbooks still stands, albeit on a slightly longer time scale: “It will take another generation for memristors to be taught in undergraduate courses. I have no doubt however that memristor will be included as standard topics in future textbooks because memristor is not an invention--which becomes obsolete, sooner or later. Rather, memristor is a discovery, and memristive phenomena will become ubiquitous in nano-electronic circuits.”

“Columbus did not invent America,” Chua concludes. “He discovered America.”

Tinkering with memristors. Memtinkering?
"Do you think we'll ever be able to get our hands on a memristor (in a convenient 2-lead surface mount package), or are these destined to stay in the realm of integrated devices and specialty circuits?” asked a particularly incisive poster in this ars technica forum thread the day after the memristor news broke.

"Given that the effect described here requires a nanostructure, probably not, unless someone puts a single one in a macroscale package," replied an equally trenchant observer.

So if you can't play with a physical memristor, you're left with math. "If you understand the math behind memristors, you can create superior device models, such as for SPICE, which means you can design better or more realistic circuits,” elaborates Stan Williams. (Computational neuroscience blog Neurdon has a tutorial on modeling the HP memristor with SPICE.)
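If you want to do some memtinkering of your own, the linear ion-drift model HP published is simple enough to integrate in a few lines. The parameter values below are ballpark figures for illustration, not HP's exact numbers:

    # Rough numerical sketch of HP's linear ion-drift memristor model:
    #   M(x) = R_on * x + R_off * (1 - x),   dx/dt = mu_v * R_on / D**2 * i(t)
    # Parameter values are ballpark figures for illustration, not HP's data.
    import math

    R_on, R_off = 100.0, 16e3    # ohms
    D = 10e-9                    # device thickness, meters
    mu_v = 1e-14                 # dopant mobility, m^2 / (V * s)
    x = 0.5                      # normalized width of the doped region

    dt, freq, amp = 1e-3, 1.0, 1.0   # time step (s), drive frequency (Hz), volts
    trace = []
    for step in range(2000):         # two cycles of a 1 Hz sine drive
        t = step * dt
        v = amp * math.sin(2 * math.pi * freq * t)
        m = R_on * x + R_off * (1.0 - x)      # present memristance
        i = v / m
        x += mu_v * R_on / D**2 * i * dt      # ion drift moves the doped boundary
        x = min(max(x, 0.0), 1.0)             # boundary can't leave the device
        trace.append((v, i))

    # Plotting i against v from `trace` draws the pinched hysteresis loop
    # that is the memristor's signature.
    print(trace[::400])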

This kind of modeling, then, is the only access any young engineer has to "tinkering" with a memristor. But before you can adequately tinker, don't you need to understand the math behind the memristor?

Any EE profs out there who can weigh in? Do you teach the memristor? Do you mention it as a fundamental circuit design element and then save the elaboration until finals week?

Also, given all the advances recently made with memristor logic, spintronic memristors, crossbar memory, and even their use in strong AI projects, should they be taught early on to get students amped about engineering?

* Vallancourt, the poor soul I asked for independent comment when the story first broke, tells me, "I don’t teach the memristor at all." However, he is haunted forever by his name in my article in an Internet that never forgets. “Ironically, students sometimes mention it to me because they have Googled me and found my brief remarks from a few years back,” Vallancourt told me. "So thanks a lot, dingbat," he did not add.


Is Thorium the Nuclear Fuel of the Future?

PHOTO CREDIT: Nuclear Power Corporation of India
The Kalpakkam fast breeder reactor, in Tamil Nadu, India, is well on its way to completion by 2012. Once complete, it will usher in the second phase of India’s three-stage plan to achieve thorium-based energy independence by 2025.

Many of the reactor choices in this month’s Nuclear Redux are sure to be controversial, both in terms of what we included and what we left out. Our seven designs run the gamut, from incremental advances on existing designs to designs so new they’re barely on the drawing board.

But there’s one design that we’d surely include in a possible follow-up article (look for it sometime in 2015): By that point, it’s likely that someone will have submitted a credible design to the US Nuclear Regulatory Commission for a thorium reactor. That’s because the United States, India, Japan, and Russia (PDF) are among the countries now working on thorium reactors.

The thorium reactor has a sizeable fan base. Proponents argue that thorium provides a nuclear energy generation magic bullet: It’s clean, abundant, cheap, and safe. Now let’s quickly review each of these points.

Clean. Like all nuclear power, a thorium reactor would produce few emissions (which is not to say that nuclear power carries no caveats).

Abundant. Thorium is three to four times more abundant on Earth than uranium. “Any cubic meter of Earth, Moon or Mars has enough Th-232 to run a profligate American's energy life for several years,” says Alexander Cannara, an electrical engineer and green activist who is also an IEEE Life Member.
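That claim survives a back-of-envelope check. The crustal abundance and per-capita figures below are my own round estimates, not Cannara's:

    # Back-of-envelope check of Cannara's claim, with round numbers of my own.
    rock_density = 2700.0     # kg of crustal rock per cubic meter
    th_fraction = 8e-6        # ~8 grams of thorium per tonne of rock (rough)
    energy_per_kg = 8.0e13    # joules from fully fissioning 1 kg (~200 MeV per atom)
    per_capita = 3.0e11       # joules of U.S. primary energy per person per year (rough)

    th_kg = rock_density * th_fraction
    years = th_kg * energy_per_kg / per_capita
    print(f"{th_kg * 1000:.0f} g of thorium per cubic meter -> ~{years:.0f} years of energy")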

Cheap. Not only is thorium itself cheaper than uranium, it’s also indirectly cheaper. 1) A fully functioning thorium reactor would be smaller and produce less waste. 2) In countries like China and India, where thorium is naturally more abundant than uranium, the bill for imported fuel would obviously be lower.

Safe. Cannara tells us that "there are millennia of thorium atoms within easy reach, requiring no energy-intensive, proliferation-endangering 'enrichment', and no wasteful removal of delicate fuel pellets and rods before even 10 percent of their fuel is consumed."

But perhaps the most promising advantage is that a thorium reactor cleans up after itself. It eats its own waste. Proponents say the thorium reactor could function as a kind of waste-disposal mechanism for plutonium and other weapons-grade material, as part of its regular energy generation process. This is the miracle they point to. “A Thorium-Fluoride Molten-Salt Reactor is a neutron machine that will fission down any fissile element,” says Cannara.

You’d have to be Ebenezer Scrooge himself to argue with something so amazing.

And in fact, several countries are investigating the possibility of thorium-based energy generation: India is working on its Advanced Heavy Water Reactor, Japan has the miniFuji, Russia is working on the VVER-1000, and even the United States has long-term plans to experiment with commercial energy generation from thorium. Most of these plans are nebulous, but for a few countries it’s a serious option. The country with the most specific plan is India, which has drawn up a three-stage process to rely almost entirely on thorium by 2030.

When IEEE Spectrum interviewed thorium reactor designer Ratan Kumar Sinha (who was recently promoted to director of Bhabha Atomic Research Centre, or BARC) two years ago, he explained India’s plan:

In the first stage, pressurized heavy water reactors (PHWRs)--similar to those used in advanced industrial countries--burn natural uranium. In the second stage, fast-breeder reactors, which other countries have tried to commercialize without success, will burn plutonium derived from standard power reactors to stretch fuel efficiency. In the key third stage, on which India's long-term nuclear energy supply depends, power reactors will run on thorium and uranium-233 (an isotope that does not occur naturally).

A year earlier, Sudhinder Thakur, an executive director at the Nuclear Power Corporation of India (NPCIL), told Spectrum that construction on the 500-MW fast breeder reactor was expected to be complete in 2011. And apart from a couple of minor hiccups, that schedule is on target.

In an email update, Thakur tells me that the Kalpakkam fast breeder reactor is progressing well, with operation slated for 2012. That will take India into stage 2 of their plan. By 2020, four more such reactors will be operational.

The fast breeder reactor is only the second stage of a long-term project. “There are no defined time lines as lot of technology development, research and demonstration activities need to be completed before commercial deployment of thorium reactors for power,” Thakur told me in an email. “I think it is decades away.” First, he explains, “we need to have a significant capacity of the fast breeder reactors where thorium could be used as a blanket.” (For a good overview on what this means, read this article on thorium reactor physics at the World Nuclear Association.)

BARC's 300-MW Advanced Heavy Water Reactor will test thorium as a fuel. That project is under IAEA design review, and once it obtains regulatory approval, it will take an estimated seven years to build.

Thorium has always looked attractive theoretically, but it just has not taken off in countries that have adequate supplies of regular uranium. Despite the many features that recommend it, it's only really attractive for nations, like India and China, that have too little uranium and a big surplus of thorium.

Finally, there are a lot of objections to characterizing thorium as a promising nuclear fuel. I won’t get into the endless back and forth, but the gist of the arguments according to the Institute for Energy and Environmental Research (PDF) is that because Th-232 is not fissile, you need some kind of weapons-grade material to kick-start the chain reaction.

In addition, the IEER challenges the claim that the fuel for these reactors is proliferation-resistant. That’s because thorium is converted into (what IEER calls) fissile uranium-233 in the course of the reaction. “U-233 is as effective as plutonium-239 for making nuclear bombs,” according to the report.
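For context, that conversion happens through a short breeding chain: a Th-232 nucleus captures a neutron and then beta-decays twice into fissile U-233:

    $^{232}\mathrm{Th} + n \;\rightarrow\; {}^{233}\mathrm{Th} \;\xrightarrow{\beta^-}\; {}^{233}\mathrm{Pa} \;\xrightarrow{\beta^-}\; {}^{233}\mathrm{U}$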

I must note here that there are counter-arguments to these arguments and counter-counter-arguments to boot. If I listed them all it would just be turtles all the way down. Ultimately, we can argue all we want, but the proof will come in the most basic possible form—someone submitting a credible design to the US Nuclear Regulatory Commission or some analogous body. So far, that hasn’t happened. NRC spokesperson Scott Burnell told Spectrum that there “isn't anything on our radar for a thorium-based reactor at this point.”

Required Reading:

The history of the thorium reactor

The thorium reactor and other proliferation-resistant reactor designs

Why thorium is so green

Thorium power in China

Google Tech Talks remix: the Liquid Fluoride Thorium reactor

fMRI in Film: Take it With a Grain of "Salt"

Angelina Jolie in "Salt"

I was already politely suspending my disbelief to watch Angelina Jolie's new Cold War-turned-modern thriller, "Salt." I was prepared to play along with the premise that the thin-boned CIA operative she plays could kick muscle-bound male butt for the hour and a half I was paying my $12.50 for. What got me giggling was the film's portrayal of fMRI, the technology featured on the cover of this month's IEEE Spectrum.

The movie's big question hinges on a scene in which Jolie's Evelyn Salt interrogates a "walk-in," Orlov, who claims to have information about Russian spies well placed in the United States government. When Orlov states that Salt herself is one of them, the audience has to determine the truth of that statement. The CIA operatives, however, claim to be relying on fMRI. Sitting in the next room, apparently looking at brain scans, they claim that the fMRI says that he's telling the truth. So it must be true!

Having just worked on our article, I started laughing. It's always baffling when screenwriters use real technology (Image enhancement! Tritium!) but use it in a way that's futuristically incorrect. Not only was there no evidence of an actual MRI machine--and the enormous magnet that goes with one--but I could spot no wires, and nothing to show how they were pinpointing the brain functions of Orlov himself. Maybe all that acoustic tiling is supposed to be helping. Though maybe they were doing iPhone 4 antenna research there as well.

Maybe one day we will be able to do MRIs without an MRI machine or wires. If you want to learn more about what fMRI is capable of in real life, read our article.

What's the most absurd movie technology that drives you nuts?

