It used to be that a bad guy besieged by police could just shoot out the lights and hide in the dark. As if it weren’t enough that today’s cornered malefactors have to worry about night vision goggles, tomorrow’s thugs may also have to worry about the soft radio glow of wireless routers and mobile communications towers.
Researchers at University College London (UCL) have devised a system that detects the Doppler shifts of ubiquitous Wi-Fi and mobile telephone signals to “see” people moving, even behind masonry walls 25 centimeters thick. The method, which could be useful in situations ranging from hostage-takings to traffic control, won the Engineering Impact Award in the RF and Communications category at National Instruments’ NI Week 2015 meeting (which convened in Austin, Tex., 3-9 August).
Other researchers—notably Dina Katabi and Fadel Adib of MIT—have built through-wall radars in the household communication bands, but theirs are active radars that transmit as well as receive. The UCL technique uses only passive radiation—from Wi-Fi routers (emitting under any of the IEEE 802.11b, g, n, or ac standards), ambient GSM and LTE mobile signals, and other sources—so there is nothing to betray the surveillance. The system calculates the position of a hidden target by comparing two signals: a reference channel, which receives the baseline signal from the Wi-Fi access point or other RF source, and a surveillance channel, which picks up Doppler-shifted waves reflecting from the moving subject.
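The physics behind the surveillance channel is the ordinary Doppler effect: a reflector moving toward or away from the receiver shifts the carrier frequency by an amount proportional to its speed. A back-of-envelope sketch (using the round-trip monostatic approximation; the UCL system's bistatic transmitter-receiver geometry changes the numbers somewhat) shows how small these shifts are at Wi-Fi frequencies:

```python
C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(speed_mps: float, carrier_hz: float) -> float:
    """Round-trip Doppler shift for a reflector moving at speed_mps,
    monostatic approximation: f_d = 2 * v * f_c / c."""
    return 2.0 * speed_mps * carrier_hz / C

# A person walking at 1 m/s, illuminated by a 2.4 GHz Wi-Fi router:
print(doppler_shift_hz(1.0, 2.4e9))  # 16.0 Hz
```

A shift of a few tens of hertz against a multi-gigahertz carrier is why the system needs the clean reference channel: only by comparing the two signals can such a tiny offset be measured.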
Tan and company built their “high Doppler resolution passive Wi-Fi radar” on two multi-frequency, software-defined, FPGA-based transceivers (National Instruments’ USRP, or Universal Software Radio Peripheral). The system compares the reference and surveillance signals, interprets the very small frequency shifts, and reveals the hidden subject’s location and motion.
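A standard way to make that comparison in passive radar—and presumably the kind of processing running on the UCL system's FPGAs, though the article does not spell it out—is a cross-ambiguity calculation: correlate the surveillance channel against Doppler-shifted copies of the reference channel and look for the peak. A minimal, simulated sketch (synthetic noise signal standing in for the Wi-Fi waveform, zero time delay assumed):

```python
import numpy as np

FS = 1000.0              # sample rate, Hz (illustrative)
N = int(FS * 1.0)        # one second of integration
t = np.arange(N) / FS

rng = np.random.default_rng(0)
ref = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # reference channel
true_doppler = 16.0      # Hz, echo from a walking person
surv = ref * np.exp(2j * np.pi * true_doppler * t)           # surveillance channel

# Cross-ambiguity at zero delay: correlate the surveillance signal
# against Doppler-shifted copies of the reference; the peak marks
# the target's Doppler shift.
dopplers = np.arange(-50.0, 50.0, 1.0)
caf = [np.abs(np.vdot(ref * np.exp(2j * np.pi * fd * t), surv))
       for fd in dopplers]
print(dopplers[int(np.argmax(caf))])  # 16.0
```

A real implementation would also sweep over time delay (giving range as well as velocity) and suppress the direct-path signal leaking into the surveillance channel, which dwarfs the reflected echo.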
By tweaking the processing parameters—increasing signal-integration time and lowering sensitivity thresholds—the engineers could coax the passive radar into “seeing” quite subtle movements, even hand gestures. At this stage, the device doesn’t produce anything resembling a photograph of the subject. It delivers a radar-style scatter plot, a flare of color that says, “Here he is!” along with a variety of signal data. The system is described in more detail in a paper that Tan and UCL colleagues Qingchao Chen, Karl Woodbridge, and Kevin Chetty presented at the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), held 19-24 April in South Brisbane, Australia.
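Why does longer integration expose subtler movements? Doppler resolution improves roughly as the reciprocal of the integration time, so a longer observation window separates slower motions from stationary clutter. A rough illustration of that trade-off (using the 1/T rule of thumb and the same monostatic approximation as above, both simplifications of the real system):

```python
C = 3.0e8  # speed of light, m/s

def min_resolvable_speed_mps(integration_time_s: float,
                             carrier_hz: float) -> float:
    """Slowest motion separable from stationary clutter, assuming
    Doppler resolution ~ 1/T and round-trip (factor-of-2) geometry."""
    doppler_resolution_hz = 1.0 / integration_time_s
    return C * doppler_resolution_hz / (2.0 * carrier_hz)

# Doubling the integration time halves the slowest detectable speed
# at a 2.4 GHz Wi-Fi carrier:
print(min_resolvable_speed_mps(0.5, 2.4e9))  # 0.125 m/s
print(min_resolvable_speed_mps(1.0, 2.4e9))  # 0.0625 m/s
```

At one second of integration, centimeter-per-second motions become resolvable, which is the regime of hand gestures; the cost is a slower update rate.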