Time was when most radio sets had no software at all, and those that had any didn’t do much with it. But Joseph Mitola III, an engineer working for a company called E-Systems (now part of Raytheon), envisioned something very different—a mostly digital radio that could be reconfigured in fundamental ways just by changing the code running on it. In a remarkably prescient article he wrote in 1992 for the IEEE National Telesystems Conference, he dubbed it software-defined radio (SDR).
A few short years later, Mitola’s vision became reality. The mid-1990s saw the advent of military radio systems in which software controlled most of the signal processing digitally, enabling one set of electronics to work on many different frequencies and communications protocols. The first example was the U.S. military’s Speakeasy radio, which allowed units from different branches of the armed forces to communicate effectively for the first time. But the technology was costly and rather unwieldy—the first design took up racks that only a large vehicle could carry around.
In the new millennium, SDR has spread from the battlefield to the commercial arena. Wireless service providers, in particular, have begun using it in the transceivers in cellphone base stations, allowing the same hardware to handle different cellular protocols. Next, SDR will spread to sets that fit in the palm of your hand.
That will come none too soon. Today’s wireless mix is an often-turbulent alphabet soup of communication schemes: BGAN (Broadband Global Area Network), BT (Bluetooth), DECT (Digital Enhanced Cordless Telecommunication), EDGE (Enhanced Data Rates for GSM Evolution), GPRS (General Packet Radio Service), GSM (Global System for Mobile communication), IMT-A (International Mobile Telecommunications–Advanced), UMTS (Universal Mobile Telecommunications System), WiBro (Wireless Broadband), Wi-Fi, WiMax (Worldwide Interoperability for Microwave Access), and more. A mobile software radio that could communicate in all of these ways would, of course, be invaluable.
Up until now, SDR technology worked only in applications that didn’t need to be small or frugal in power consumption. New technology should, however, overcome these constraints. Indeed, within the next year or so, you can expect to see people mothballing their old cellphones in favor of new software-defined handsets. By 2015, the transition should be nearly complete.
The first software-enabled sets to crawl out of the primordial ooze of traditional analog radio were modest affairs [see “Evolutionary Developments” (PDF)]. They used embedded computers only to change the output level of the RF amplifier or to shift between individual RF front ends so that one unit could cover multiple bands.
In some of today’s radios, software—often with the aid of digital hardware accelerators—does far more: It determines everything that happens to the signal after it’s converted from RF to lower frequencies and before it’s put in a form that’s suitable for your ears. In these radios, only the RF front end and the amplifier that powers the speaker still use analog components.
The next era in SDR evolution will see what some call the ideal software radio or true software radio, in which the filtering and conversion from RF to lower frequencies that’s normally accomplished in the radio’s front end will be done digitally using the appropriate software—a strategy that requires moving the analog-to-digital converter (ADC) much closer to the antenna. These radios will still require a low-noise RF amplifier, though, because it’s hard to imagine any ADC being able to pick up the micro- or even nanovolt signals generated in the antenna.
In the progression to that ideal radio, you’ll probably notice that cellphones, mobile TV sets, GPS equipment, satellite phones, PDAs, digital music players, game consoles, and their kin will begin to look less and less distinct. As with the beaks of the duck and the platypus, the evolution of such gadgetry will converge toward the most functional form—in this case a small wireless unit that allows its user always to stay connected, from anywhere and for any type of content or use.
Designing such universal gizmos will be tough, of course. Perhaps the highest hurdle will be engineering the antenna, the size of which normally depends on the frequency of operation. Indeed, it’s very difficult to make a radio with an antenna that is not a significant fraction of a wavelength in size. This dictate of physics introduces a fundamental problem, because you’d ideally like a single compact antenna to cover everything from FM reception, at roughly 100 megahertz, to satellite- and personal-network communications, which operate in the few-gigahertz range.
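The scale of that problem is easy to quantify. As a rough sketch, here are the lengths of a quarter-wavelength monopole (one common compact antenna form) for a few of the bands mentioned; the band list is illustrative:

```python
# Rough antenna-size check: a quarter-wavelength monopole for each band.
# The band choices are illustrative; 100 MHz and the few-GHz range come
# from the discussion above.
C = 299_792_458  # speed of light, m/s

def quarter_wave_m(freq_hz: float) -> float:
    """Length in meters of a quarter-wavelength monopole at freq_hz."""
    return C / freq_hz / 4

for name, freq in [("FM broadcast, 100 MHz", 100e6),
                   ("Cellular, 900 MHz", 900e6),
                   ("Wi-Fi, 2.4 GHz", 2.4e9)]:
    print(f"{name}: {100 * quarter_wave_m(freq):.1f} cm")
```

An efficient FM element comes out at roughly 75 centimeters, while a 2.4-gigahertz element is about 3 centimeters—which is why no single compact antenna covers the whole span.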
To cover such a large chunk of spectrum, you’d probably need a combination of something quite short, likely built into the unit’s printed-circuit board, and something relatively long, such as the wire that connects with the user’s earphones. But even if the frequency span isn’t so great, designers probably won’t be satisfied with just one antenna: RF engineers are quickly moving toward using multiple antennas, even for single-frequency operation. This strategy—known as multiple-input, multiple-output, or MIMO—allows for more reliable links and higher data rates. For example, IEEE 802.11n networking gear uses multiple antennas to communicate at about five times the speed of previous versions of Wi-Fi.
You can understand how MIMO works, at least in broad terms, with a simple thought experiment. Suppose you set up a transmitter with a single antenna and then move a receiver, also with a single antenna, far enough away for the reception to fade in and out once in a while. Such problems arise because the transmitted signal takes multiple routes to the receiver—some of it perhaps bouncing off a passing car, other parts reflecting off the steel beams of the building where the receiver is located. When the difference in length between two paths is half a wavelength (or three halves, or five halves, and so forth), the two waves will interfere destructively, clobbering the signal.
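A few lines of code make the half-wavelength fade concrete. This sketch (the 1-gigahertz carrier is an arbitrary choice for illustration) sums a direct ray and a reflected ray and compares the received amplitude for two path differences:

```python
import numpy as np

# Two-ray multipath sketch: add a direct and a reflected copy of a carrier
# and see what an extra half wavelength of path length does to the sum.
f = 1e9                       # carrier frequency, Hz (illustrative)
c = 3e8                       # propagation speed, m/s
lam = c / f                   # wavelength
t = np.linspace(0, 10 / f, 1000)  # ten carrier cycles

def received_amplitude(path_diff_m: float) -> float:
    """Peak amplitude at the receiver for a given extra path length."""
    delay = path_diff_m / c
    direct = np.cos(2 * np.pi * f * t)
    reflected = np.cos(2 * np.pi * f * (t - delay))
    return float(np.max(np.abs(direct + reflected)))

print(received_amplitude(0.0))      # equal paths: the two rays reinforce
print(received_amplitude(lam / 2))  # half-wave difference: near-total fade
```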
MIMO sidesteps that pitfall by multiplying the number of possible paths between transmitter and receiver. If the signal passed from one transmitting antenna to one receiving antenna fades, the signal from a different pair should still come in loud and clear, taking advantage of a phenomenon known to radio designers as transmit diversity.
Throw in some serious number crunching to process the digitized signals and you can achieve extraordinarily high data rates. Researchers at NTT DoCoMo, in Japan, which is developing such systems for 4G mobile communications, have managed 5 gigabits per second. And this wasn’t just in a controlled laboratory setting; they achieved this rate outdoors, albeit with the receiver moving no faster than a swift walking pace (doing the same while traveling down the highway would be much more difficult). Impressive results with MIMO and other advanced antenna systems are also coming out of Stanford’s Information Systems Laboratory, MIT’s Lincoln Laboratory, and the Center for TeleInfrastructure at Aalborg University, in Denmark.
Another tricky issue for the makers of SDR handsets is designing the transmitter’s power amplifier so that it can operate over a broad range of frequencies without mangling the signal. The challenge is not so great for FM transmission, but for communication schemes that require the amplitude of the wave to be manipulated, things can rapidly go awry.
Avoiding problems in such cases typically requires some kind of feedback mechanism. You can, for example, sample the output of the power amplifier and convert this RF signal to lower frequencies, which you can then compare with the signals used to modulate the amplifier. You can then compensate for any error you find by digitally adding the reverse distortion to the input signal. Among the leading manufacturers of such designs are RF Micro Devices, in Greensboro, N.C.; Acco Semiconductor, in Saint-Germain-en-Laye, France; and Axiom MicroDevices, in Irvine, Calif.
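The feedback idea can be sketched in a few lines. Here the amplifier is modeled with a simple cubic compression term—the model and its coefficient are illustrative assumptions, not any vendor’s design—and a first-order reverse distortion is added to the input:

```python
import numpy as np

# Digital predistortion sketch: characterize the amplifier's distortion and
# add the reverse distortion to the input signal. The memoryless cubic PA
# model and the coefficient K3 are hypothetical.
K3 = 0.1  # assumed compression coefficient

def pa(x):
    """Power-amplifier model with mild cubic compression."""
    return x - K3 * x**3

x = np.linspace(-0.5, 0.5, 101)            # modulating-signal samples
raw_error = np.max(np.abs(pa(x) - x))      # distortion with no correction
predistorted = x + K3 * x**3               # first-order inverse distortion
corrected_error = np.max(np.abs(pa(predistorted) - x))

print(raw_error, corrected_error)  # the corrected error is far smaller
```

In a real handset the inverse would be estimated adaptively from the sampled, downconverted amplifier output rather than assumed in advance.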
Another challenge for SDR designers is making much faster ADCs. To avoid “aliasing”—the effect that makes rapidly spinning wagon wheels in old Westerns look as though they’re turning slowly, or even backward—the ADC must sample the signal at a rate at least twice that of the highest-frequency component, which may be quite high. The upcoming 4G technologies, for example, are expected to operate in the vicinity of 3.5 gigahertz, which means you’d need to take 7 billion samples per second—more than 10 times as fast as today’s best ADCs of sufficient resolution can manage.
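The arithmetic behind that figure is just the Nyquist criterion:

```python
# Nyquist-rate check for directly sampling a 3.5 GHz signal, per the text.
f_max = 3.5e9                  # highest-frequency component, Hz
nyquist_rate = 2 * f_max       # minimum alias-free sampling rate
print(nyquist_rate / 1e9, "gigasamples per second")  # 7.0
```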
Many SDR researchers consider this to be among the toughest obstacles ahead—not only because they must up the sampling rate so much but also because they’ll simultaneously need to make significant improvements in the signal-to-noise ratio, power consumption, and physical size of this circuitry. Typically, you can better one of these parameters only by making trade-offs with the others. So achieving gains on all fronts at once is going to be extremely difficult.
There is, however, a strategy that might allow direct conversion of RF in the not-so-distant future: purposeful subsampling. The trick here is to arrange the sampling frequency of the ADC so that the inevitable aliasing that occurs works to your advantage. In one step, the operation both digitizes the RF signal and converts it to a lower frequency. This may seem a bit magical, but it’s not so hard to understand. Just imagine the RF signal as one of those rapidly spinning wagon wheels. Adjust the frame rate of the motion-picture camera appropriately and your captured version of this wheel will turn at whatever lower frequency you want [see “Aliasing Harnessed”].
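A short numerical experiment shows the trick at work. Here a 100-megahertz tone is deliberately sampled at only 30 megahertz (both frequencies are illustrative), and its alias lands at about 10 megahertz, ready for further digital processing:

```python
import numpy as np

# Purposeful subsampling sketch: digitize an RF tone well below its Nyquist
# rate so that its alias lands at a convenient lower frequency.
f_rf = 100e6    # RF carrier, Hz (illustrative)
f_s = 30e6      # deliberately sub-Nyquist ADC clock, Hz
n = np.arange(4096)
samples = np.cos(2 * np.pi * f_rf * n / f_s)

# Locate the strongest spectral line among the sampled points.
spectrum = np.abs(np.fft.rfft(samples))
f_alias = np.argmax(spectrum) * f_s / len(n)
print(f_alias / 1e6)  # close to 10: the 100 MHz tone reappears near 10 MHz
```

In a real receiver, a band-pass filter ahead of the ADC must first confine the input to a single Nyquist zone; otherwise every other band would alias onto the same spot.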
Astute readers might notice that the difficulties we’ve outlined so far all involve hardware. Software-defined handsets will have some challenging software, too. It’ll manage the modulation, demodulation, encoding, decoding, encryption, and decryption, as well as the packing and unpacking of the data needed for the communications protocol employed—all computationally intensive tasks. What’s the best kind of microprocessor for such heavy lifting?
Most SDR designers struggling with that question instinctively fixate on the MIPS rate—how many million instructions per second the processor can execute. That’s because it must carry out a huge number of arithmetic operations—largely multiplications and additions—to massage the digitized signal. Specialized digital signal processors (DSPs) are usually the best chips for such things, but they may not be the only solution for SDR handsets. The reason is that these radios must do other kinds of signal processing, too.
In particular, SDR handsets need to detect and correct errors in the received digital bit stream, and the algorithms for that consist less of multiplications and additions than of “if-then-else” statements. Those branching operations are better done by a general-purpose processor, which would normally also be assigned the tasks of running the unit’s real-time operating system, keyboard, and display. So a software-defined handset will need to have such a chip around anyway.
A general-purpose processor will also be required to host the software interfaces that connect different applications with the underlying hardware. In the near term at least, that “middleware” is likely to conform to the rules laid out in the U.S. military’s Software Communications Architecture, an object-oriented computing framework that has become the de facto standard for software radios intended for combat use.
Designers of future SDR handsets will revel in the flexibility afforded by having software control so much of the signal processing. And designers will no doubt want as much of the set’s hardware as possible to be reconfigurable—that is, they’ll want software not only to do signal processing but also to switch between different antennas and RF front ends. Such capability would allow you to turn a cellphone into a satellite-radio receiver, say, at the touch of a button.
While the technology for accomplishing this has been around for years, until now it’s been too bulky and power hungry to be used in handsets. But consumers are now on the verge of enjoying the fruits of this approach, implemented with modest amounts of power and in very small packages. In February 2008, BitWave Semiconductor, of Lowell, Mass., announced its BW 1102 Softransceiver RFIC, a chip intended to bring SDR to both cellphones and femtocells (small wireless base stations that can be set up in a home or business to improve cellular coverage indoors). The BW 1102 is a single complementary metal-oxide-semiconductor integrated circuit containing a transceiver that supports a variety of wireless protocols and can operate anywhere on the spectrum from 700 MHz to 3.8 GHz.
Suppose, however, that you are a radio designer and want more than BitWave’s chip can handle, such as the ability to receive FM broadcasts—and maybe even transmit on FM, too, so that you can play your favorite MP3 files on your car radio. How hard would it be to create the perfect IC for that? Hard indeed, it turns out, and that’s why BitWave still has essentially no competitors.
But let’s say you’re keen to try. You might start by estimating the allowable execution time for each of the radio’s intended functions and its power consumption, physical size, and other properties, including the frequency bands to be covered. Based on that assessment, you’d decide how to divide the overall system into hardware and software. Although this exercise isn’t trivial, tools for hardware-software codesign are available.
Now comes the more difficult job: You’ve got to come up with detailed designs for each piece. Fortunately, you won’t have to do that from scratch. Suitable designs for at least some of the larger building blocks—a DSP here, a general-purpose processor there—should be possible to find and license. After the hardware has been pinned down, you’ll need to pull together the software to run it, which itself should keep you and your team busy for a large number of programmer-years.
The next challenge is to verify that your radio works correctly. Unfortunately, even state-of-the-art simulation tools aren’t guaranteed to show system performance properly—and subtle errors here might be lethal for your product. Worse, many of the expected mobile services may be safety critical, so a tiny slip-up could be a literal killer, too.
One way to address this uncertainty is to go a step further than simulation: You can prototype the digital portion of your newly designed SDR system using one or more field-programmable gate arrays (FPGAs), integrated circuits that contain a vast number of logic blocks and potential interconnections. These devices can be configured after their manufacture to serve almost any purpose, constituting entire systems on a chip.
The problem with FPGAs for production is that they are the energy hogs of the semiconductor world, lacking the power-management features of their hardwired counterparts. Moreover, FPGAs suffer from the integrated-circuit equivalent of suburban sprawl, taking up a relatively large area on a silicon wafer. They are also expensive, which helps to explain why we haven’t seen FPGAs being used to manufacture SDR handsets—at least not yet. A few researchers are exploring low-power FPGA technologies, so it’s not out of the question that they could one day serve for high-volume production of handsets.
In the meantime, FPGAs remain a convenient way to build and test SDR prototypes. Among the most interesting examples of this is the Berkeley Emulation Engine 2 (BEE2) project at the University of California, Berkeley. This test-bed setup consists of five high-performance FPGAs, which with proper programming can be turned into various next-generation SDR systems. Another example of this approach is the SDR-based design effort at San Diego State University, which became widely known through a 2007 article in DSP Magazine titled “How to Pack a Room of Analog FM Modulators Into a Xilinx FPGA.”
No doubt, many people are waiting for the day when they’ll carry just one handheld gadget they can instantly switch from cellphone mode to that of a satellite radio receiver, or from a wireless Web browser to a mobile TV set; indeed, their handset might carry out all of these functions at once. Others, including the world’s many technophobes, might be less enthusiastic about such a prospect. But SDR technology offers something for them, too—the possibility that their wireless equipment will eventually become smart enough to adapt to its communications environment all by itself.
A radio intelligent enough to reconfigure itself—perhaps by detecting free spectrum and switching its frequency of operation to claim it—would make wireless services cheaper and more reliable for their users, most of whom will not even be aware that such marvelous things are going on under the hood. As with SDR, this is a concept that Mitola promoted early on, in a 1999 article he wrote with Gerald Maguire Jr., of the Royal Institute of Technology, in Stockholm. They called it cognitive radio.
Ah, to have a radio that not only switches function on demand but also configures itself into the most effective form possible without its user even knowing it. Now that will be a truly universal handset.
About the Authors
PETER KOCH and RAMJEE PRASAD, who explain how software-defined radio will soon transform cellphones in “The Universal Handset,” are professors at Aalborg University, in Denmark. Koch works at the university’s Center for Software Defined Radio and also operates an amateur radio station for fun. He’s shooting to reach other hams in all parts of the world. “I’m not there yet,” he says. Prasad, an IEEE Fellow, heads the university’s Center for TeleInfrastructure. He, too, enjoys making international contacts, but rather than doing so wirelessly, he regularly travels to the far corners of the world.