How to build a synthetic-aperture imaging system with tin cans and AA batteries
As I stand on the side of the road, a couple of joggers stop to ask about the two coffee cans sprouting coaxial cables. “I’m testing a home-brew radar,” I explain, adding as they jog off, “you’re showing up just fine.” It reminds me of an old “Star Trek” episode in which Mr. Spock is transported back in time and must construct a futuristic electronic gadget using only 1930s-era vacuum-tube technology. “What on earth is that?” asks his landlady. “I am endeavoring, ma’am, to construct a mnemonic memory circuit using stone knives and bearskins,” he answers.
Leonard Nimoy’s wry delivery reflects how I felt trying to turn two coffee cans, eight AA batteries, and a few hundred U.S. dollars’ worth of mail-order parts into a synthetic-aperture radar (SAR). Such sophisticated radar systems provide information about the shape of objects they scan, and high-resolution SAR can produce images with photographic levels of detail.
I succeeded, but credit goes to Greg Charvat, who designed this startlingly simple hardware while working at MIT’s Lincoln Laboratory in order to give students some hands-on experience during a three-week radar course. A detailed description of the radar and how to build it is available at MIT’s open courseware website.
Having little experience with radio frequency circuitry, I worried that this project might be too challenging. Ironically, the RF section was the easiest part to construct. It merely required screwing together a few microwave components. And as long as you follow the prescription in the lecture notes exactly, you won’t need a network analyzer to match the antennas to the radar circuitry.
Most people’s mental picture of how radar operates is that the apparatus gives off a radio pulse and then waits to receive an echo, timing how long it takes to return. Multiplying that time by the speed of light gives the round-trip distance to a target. Some radar sets do just that, but this one uses a different strategy: One antenna emits a continuous stream of waves while the other receives a continuous stream of echoes. The circuitry for this isn’t complicated, but interpreting the received signals requires some computational horsepower.
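To put numbers on that pulse-timing picture, here is a minimal sketch in Python. The 1-microsecond echo delay is a made-up example; the speed of light is the only real input:

```python
C = 3.0e8  # speed of light, in meters per second

def pulse_range(echo_delay_s):
    """One-way range to a target from a pulse echo's round-trip delay."""
    round_trip = C * echo_delay_s  # out-and-back distance
    return round_trip / 2.0

# A hypothetical echo arriving 1 microsecond after the pulse
# puts the target about 150 meters away.
print(pulse_range(1e-6))
```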
The key to this design is that the frequency of the outgoing radio waves increases linearly over time (for a short period, after which the cycle repeats), so the frequency of the reflected waves also increases linearly. But the reflected waves return to the receiving antenna after a short delay, by which time the waves being emitted are at a slightly higher frequency. The farther away the target, the greater the difference between these two frequencies.
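The mapping from that frequency difference back to range falls out of the chirp’s slope. A hedged sketch, in which the 80-MHz sweep and 20-ms ramp time are illustrative stand-ins rather than the MIT design’s exact values:

```python
C = 3.0e8  # speed of light, in meters per second

def fmcw_range(f_beat_hz, bandwidth_hz, ramp_time_s):
    """Range from the beat frequency of a linear chirp.

    The transmitter sweeps bandwidth_hz in ramp_time_s, so an echo
    delayed by tau seconds beats against the outgoing wave at a
    frequency of (bandwidth / ramp_time) * tau.
    """
    slope = bandwidth_hz / ramp_time_s  # Hz of sweep per second
    tau = f_beat_hz / slope             # round-trip delay, seconds
    return C * tau / 2.0

# Illustrative numbers: with an 80 MHz sweep every 20 ms, a beat tone
# near 1.3 kHz corresponds to a target roughly 50 meters away.
print(fmcw_range(1333.0, 80e6, 20e-3))
```

Note the design payoff: doubling the target range doubles the beat frequency, so measuring range reduces to measuring an audio-frequency tone.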
To measure this difference, you use what radio engineers call a mixer, which here generates an output signal containing two new frequencies that are the sum and difference of the transmitted and received frequencies. Only the difference matters for this application, so the radar circuitry filters out the high frequencies, including the sum, and amplifies what’s left. This final signal is in the audio frequency range and can easily be recorded using a computer’s sound card—much more practical than trying to build a system that works directly with microwave frequency signals throughout.
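You can see the mixer’s sum and difference products in a quick simulation. An ideal mixer is just a multiplier, and the product-to-sum trigonometric identity does the rest; the two tone frequencies below are invented for illustration:

```python
import numpy as np

fs = 48000                    # samples per second, like a sound card
t = np.arange(fs) / fs        # one second of time stamps
f_tx, f_rx = 10000.0, 9200.0  # invented transmit and echo tones, in Hz

# An ideal mixer multiplies its inputs: the product of two cosines
# contains tones at the sum (19.2 kHz) and difference (800 Hz).
mixed = np.cos(2 * np.pi * f_tx * t) * np.cos(2 * np.pi * f_rx * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)

# The two strongest spectral lines sit at the difference and the sum;
# the radar filters away the sum and keeps the audio-band difference.
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
print(peaks)  # approximately [800.0, 19200.0]
```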
I first set up the radar next to my garage and recorded about a half minute of data as I ran up and down the driveway. I captured that data with Audacity, a free audio editor, running on an old desktop PC that had a sound card with a line-in port. I analyzed the recording using a Matlab script provided by the instructors at MIT. Running the script proved a challenge, though, because Matlab was too pricey for my shoestring budget. But I found a free open-source alternative that served as a reasonable stand-in: Octave.
It took about 4 minutes to process the data, but it was worth the wait: The script transformed subtle changes in the audio signal into a zigzag plot that matched my back-and-forth movements. Wow!
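The real Matlab/Octave script does considerably more than this (it uses the sync-pulse channel to align each ramp, for one), but the core idea of turning the recording into a range-versus-time track can be sketched as follows. The sweep parameters are illustrative, and fixed-interval chopping stands in for proper sync alignment:

```python
import numpy as np

C = 3.0e8             # speed of light, in meters per second
FS = 48000            # sound-card sample rate
RAMP_S = 0.020        # assumed chirp ramp time (illustrative)
BANDWIDTH_HZ = 80e6   # assumed sweep width (illustrative)

def range_profile(chunk):
    """Beat-frequency spectrum of one up-ramp, mapped to range bins."""
    windowed = chunk * np.hanning(len(chunk))  # taper to reduce leakage
    spec = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(chunk), 1.0 / FS)
    # beat frequency -> range, using the chirp slope BANDWIDTH_HZ / RAMP_S
    ranges = C * freqs * RAMP_S / (2.0 * BANDWIDTH_HZ)
    return ranges, spec

def track(audio):
    """Strongest range in each ramp-length slice of the recording."""
    n = int(FS * RAMP_S)
    out = []
    for i in range(0, len(audio) - n, n):
        ranges, spec = range_profile(audio[i:i + n])
        out.append(ranges[np.argmax(spec)])
    return out
```

Plotting the list that `track` returns against time is, in essence, the zigzag plot: a runner moving back and forth shows up as a range that rises and falls.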
I was eager to put the radar to more of a test. But a desktop computer is awkwardly immobile. And, like many laptops, none of mine are capable of recording in stereo (two channels are needed to capture both the radar signal and the sync pulses). Happily, I discovered that my digital sound recorder, a Zoom H4n, could operate as a USB audio interface, allowing any laptop to record in stereo.
I took the radar out to a nearby ball field, where I discovered that it was surprisingly sensitive: Without difficulty, it could track me running out to at least 50 meters away, and it could follow vehicles out to more than 100 meters. It can even measure ranges in real time, if you use Python code written by Gustavo Goretkin, an MIT student who took the build-your-own radar course when it was first offered in 2011.
Having established that my coffee-can radar could measure the range to various targets, I set about creating a SAR image, which requires moving the radar laterally 5 centimeters at a time and recording multiple “snapshots.” This took some doing. It was a challenge just to reproduce the example image provided with the courseware, because the Matlab script for processing SAR data triggered a pesky out-of-memory error in Octave. The problematic operation turned out to be a matrix rotation, so I had to do some hacking to get around it.
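For intuition about why a row of snapshots adds up to an image at all, here is a toy backprojection sketch. This is not the algorithm the courseware script implements, every parameter below is illustrative, and each snapshot is assumed to have already been reduced to a complex range profile:

```python
import numpy as np

WAVELEN_M = 0.125  # assumed wavelength for a ~2.4 GHz carrier
STEP_M = 0.05      # radar moved 5 cm between snapshots

def backproject(profiles, range_res_m, x_grid, y_grid):
    """Coherently sum every snapshot's contribution at each pixel.

    profiles: one complex range profile per antenna position, with
    bins spaced range_res_m meters apart.
    """
    image = np.zeros((len(y_grid), len(x_grid)), dtype=complex)
    for k, profile in enumerate(profiles):
        ant_x = k * STEP_M  # antenna slides along the x axis
        for iy, y in enumerate(y_grid):
            for ix, x in enumerate(x_grid):
                r = np.hypot(x - ant_x, y)  # antenna-to-pixel distance
                b = int(round(r / range_res_m))
                if b < len(profile):
                    # undo the round-trip phase so echoes from a real
                    # target add in phase at its true location
                    image[iy, ix] += profile[b] * np.exp(4j * np.pi * r / WAVELEN_M)
    return np.abs(image)
```

Echoes from a genuine target arrive with phases that this sum lines up, so they reinforce at one pixel and cancel elsewhere; that is the “synthetic aperture” at work. The per-pixel loop is slow, which is one reason production SAR processors use cleverer frequency-domain methods.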
My initial attempts to create a SAR image produced underwhelming results. But when I looked harder at the images the folks at MIT produced, I realized that I would have to find a really big target for this to work. So I selected a building-size water tank that I could scan from a balcony located about 40 meters away. That exercise produced a plot with a clear radar hot spot located at the right distance and location. Calling that an “image” might be a little generous, but, well, what do you want from not much more than stone knives and bearskins?