A new camera system can locate objects hidden around a corner and track their movements with centimeter precision. The camera, which captures images outside its own field of view in real time, could be used to look for survivors in search-and-rescue operations and in vehicle collision-avoidance systems.
Cameras that can see stationary objects through walls mostly rely on radar-based imaging technology. But radar systems are typically large, expensive, and low-resolution, and they work only over short distances.
So researchers have switched to other parts of the spectrum. Earlier this year, an MIT team reported a low-cost, high-resolution camera that uses low-power microwaves to create 3-D images of objects hidden behind walls. Optical techniques based on lidar (light detection and ranging) have also been used to image hidden objects. But both the microwave and lidar techniques take an hour or more. “That’s not going to work if you want to know whether a car is coming around the corner,” says Genevieve Gariepy, a physics doctoral student at Heriot-Watt University in Edinburgh, UK.
Gariepy, Daniele Faccio, and their colleagues designed a new system that is similar to lidar, but much faster and more sensitive. It can precisely detect a hidden object’s position in just a few seconds, “so we can track the object’s motion in real time,” she says.
To detect an object around a blind corner, the system fires millions of very short laser pulses at the floor just past the corner. The light hits the floor and scatters in every direction as a spherical light wave. When the photons bounce off the hidden object, they scatter again, and some of them reach the field of view of the camera, which is also pointed at the spot where the laser was aimed. The system then calculates the object’s position with centimeter precision from two measurements: how long the light takes to travel from the laser to the object and back, and the shape and direction of the returning spherical wave.
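The position calculation can be sketched as a simple time-of-flight fit: each observed point on the floor constrains the hidden object to an ellipse whose foci are the laser spot and that floor point, and intersecting several such constraints pins down the position. Below is a minimal 2-D grid-search sketch of that idea, not the team’s actual reconstruction code; the geometry, names, and values are all illustrative.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def arrival_time(laser_spot, floor_pt, obj):
    """Time for light to travel laser spot -> hidden object -> observed floor point."""
    path = np.linalg.norm(obj - laser_spot) + np.linalg.norm(obj - floor_pt)
    return path / C

def locate(laser_spot, floor_pts, measured_times, grid):
    """Pick the candidate position whose predicted arrival times best match the measurements."""
    best, best_err = None, np.inf
    for p in grid:
        err = sum((arrival_time(laser_spot, f, p) - t) ** 2
                  for f, t in zip(floor_pts, measured_times))
        if err < best_err:
            best, best_err = p, err
    return best

# Illustrative setup: laser spot at the origin, three observed floor points,
# and a hidden object 0.8 m above the floor line.
laser = np.array([0.0, 0.0])
floor_pts = [np.array([0.1, 0.0]), np.array([0.6, 0.0]), np.array([0.9, 0.0])]
true_obj = np.array([0.5, 0.8])
times = [arrival_time(laser, f, true_obj) for f in floor_pts]

grid = [np.array([x, y]) for x in np.linspace(0, 1, 101)
                         for y in np.linspace(0, 1, 101)]
print(locate(laser, floor_pts, times, grid))  # recovers roughly (0.5, 0.8)
```

A real system fits against noisy photon-arrival histograms rather than exact times, but the principle is the same: more observed floor points mean more intersecting ellipses and a tighter position estimate.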
Key to the new system is a super-sensitive, ultrafast single-photon avalanche diode (SPAD) camera. Each pixel in the camera’s 32 × 32 pixel grid contains an avalanche diode that can sense a single photon, along with integrated electronics that let the camera snap images of the scene at a rate of 20 billion frames per second, giving it a temporal resolution of 50 picoseconds. This combination of sensitivity and speed lets it pick up the tiny amount of light bouncing back from the object that a regular high-speed camera would miss.
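That 50-picosecond figure maps directly onto the system’s centimeter-scale precision: in one time bin, light travels only about a centimeter and a half. The back-of-the-envelope arithmetic:

```python
C = 299_792_458           # speed of light, m/s
dt = 50e-12               # camera's temporal resolution, 50 picoseconds
per_bin = C * dt          # distance light covers in one time bin
print(f"{per_bin * 100:.1f} cm")  # -> 1.5 cm
```

So each time bin of the SPAD camera corresponds to roughly 1.5 centimeters of light travel, which is why the arrival times can pin down path lengths, and hence positions, at the centimeter scale.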
In a paper published in the journal Nature Photonics, the team demonstrates the system using a 30-centimeter-tall human form made of foam. They put the figure on a moving track and were able to detect its position as it moved at 2.8 centimeters per second. “We used that in this proof-of-concept demonstration just because it’s easier to do things on a small scale,” Gariepy says. “But of course we want to go to a bigger scale and detect real people or cars. So we’ve started to use me as an object to track.”
What about the light scattering off walls and ceilings? The camera records those signals too, but the system isolates the signals that change over time, which come from the moving target, and discards the steady signals produced by static objects.
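That filtering step can be sketched as a per-pixel background subtraction over the recorded frames. The snippet below is an illustrative simplification, not the published method: it removes the temporal mean at each pixel, so anything static cancels out and only the time-varying signal from the mover remains.

```python
import numpy as np

def isolate_moving_signal(frames):
    """frames: array of shape (T, H, W), photon counts per pixel over time.

    Static scatter off walls and ceilings contributes the same counts in
    every frame, so subtracting each pixel's temporal mean leaves only
    the part of the signal that changes, i.e. the moving target.
    """
    return frames - frames.mean(axis=0, keepdims=True)

# Illustrative data: a constant background plus one bright spot that
# moves down one row per frame.
frames = np.full((4, 4, 4), 5.0)
for t in range(4):
    frames[t, t, 0] += 10.0

residual = isolate_moving_signal(frames)
print(residual[:, 3, 3])  # pixel with only static light -> all zeros
```

Pixels that only ever see wall and ceiling scatter come out flat, while pixels crossed by the moving target retain a strong time-varying residual that the position fit can use.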
The team also wants to make the technology faster and effective at longer distances. Right now it takes 3 seconds for the SPAD camera to acquire the raw data and another 2 seconds to display the results of the position calculations; the researchers hope to bring the total down to 1 second. The system currently tracks objects up to a meter away, and they hope to extend that to 10 meters.
Next will come more complicated tasks like 3-D reconstruction of the target and distinguishing multiple moving objects.
But even if they achieve those goals, the system would still face a problem that nags present-day radar and laser-based detection technology: it might not work when there’s fog, dust, or smoke in the air.