Cognitive Radio and Game Theory

As our radios get smarter, they’ll be competing for overcrowded airwaves. Game theory can make them cooperate

Illustration: James Provost

Steve Jobs was unveiling the iPhone 4 at Apple’s Worldwide Developers Conference in San Francisco last June when disaster seemed to strike. Jobs found he couldn’t connect to the conference center’s Wi-Fi network. Fortunately, his technical team rapidly pinpointed the problem. “We figured out why my demo crashed,” Jobs announced to the audience. “Because there are 570 Wi-Fi base stations operating in this room. We can’t deal with that.”

Jobs then exhorted the crowd to free up the airwaves so that he could continue. “All you bloggers need to turn off your base stations. Turn off your Wi-Fi. Every notebook, I’d like them down on the floor.” Most complied, but some refused to sever their wireless links, and the resulting sluggish connectivity continued to dog Jobs throughout his presentation.

The incident was just a minor blip in the life of Apple. But suppose you had been a tech reporter whose livelihood depended on being able to send out real-time descriptions of events like this. Suppose further that nobody else would have known whether you were using a wireless base station. Would you have done as directed and turned off your gear? Why not stay connected? After all, one blogger, no matter how fast his fingers, wouldn’t have slowed network traffic perceptibly.

The problem, of course, is that the same logic applied to everyone else in that room. And if all those hundreds of Wi-Fi base stations had remained on, Jobs wouldn’t have been able to present the demonstration everyone was so eager to see.

The basic conundrum can be distilled into a simple—albeit not very realistic—game. Suppose you are one of only two reporters present. Imagine that if one of you stays connected, Jobs can still carry out his demo, but if you both ignore his urgings, everyone’s Wi-Fi will completely cut out. You and the other reporter cannot communicate with each other. You just have to pick a course of action and stick to it. What’s the smart thing to do?

Let’s say your counterpart complies with Jobs’s edict and powers down. In this case, you are clearly better off staying connected. Jobs will demo the new iPhone, and you can instantly report about it to the world, earning you fame and fortune.

Now let’s say the other reporter ignores Jobs’s directions while you comply. He will scoop you, making you the laughingstock of the newsroom when you get back to the office. So here again you’re better off ignoring the instructions.

The logic is inescapable: No matter what the other person does, it’s better for you to act selfishly. But the other reporter will surely come to the same conclusion, so you will both miss the demo. Why didn’t the two of you cooperate?
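If you prefer to see that logic spelled out, here is a minimal sketch of the two-reporter game in Python. The payoff numbers are purely illustrative assumptions of mine (higher is better for you); all they need to capture is the ranking of outcomes just described.

```python
# A toy payoff table for the two-reporter game. The numbers are assumed,
# not measured; they only encode the ranking of outcomes described above.
# Strategies: "comply" (power down) or "stay" (keep the Wi-Fi link up).

payoffs = {
    # (your move, other's move): (your payoff, other's payoff)
    ("comply", "comply"): (2, 2),  # demo happens; you both file later
    ("comply", "stay"):   (0, 3),  # demo happens, but you get scooped
    ("stay",   "comply"): (3, 0),  # demo happens, and you scoop the story
    ("stay",   "stay"):   (1, 1),  # Wi-Fi cuts out; nobody sees the demo
}

def best_response(others_move):
    """Return the move that maximizes your own payoff, given the other's move."""
    return max(("comply", "stay"),
               key=lambda mine: payoffs[(mine, others_move)][0])

for others_move in ("comply", "stay"):
    print(f"If the other reporter chooses {others_move!r}, "
          f"your best response is {best_response(others_move)!r}")
```

Whichever move the other reporter makes, staying connected pays more, so two rational reporters both stay and land in the worst collective outcome: exactly the trap described below.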

What I’ve concocted here may be thought of as a wireless-networking version of what long ago came to be called the prisoner’s dilemma, which depicts this same basic conflict in terms of two suspects in a crime, each of whom may (or may not) try to pin guilt on the other party. The prisoner’s dilemma is commonly used to illustrate the rudiments of game theory, a branch of mathematics that analyzes not games like chess or checkers but the possible actions of intelligent—and potentially deceitful—adversaries.

Game theory was almost unknown before 1944, when mathematician and computer pioneer John von Neumann, then at the Institute for Advanced Study, in Princeton, N.J., and Oskar Morgenstern, an economist at nearby Princeton University, published Theory of Games and Economic Behavior. Their work was soon applied well beyond the boundaries of economics—in particular, to nuclear weapons policy during the Cold War.

Although the world’s superpowers are no longer flirting so closely with mutually assured nuclear destruction, game theory remains as relevant as ever to various conflicts of modern life—including conflicting uses of the electromagnetic spectrum. Game theory is especially timely in that regard, because soon the parties squabbling over use of the airwaves won’t just be individuals or companies. In the very near future, our radios themselves will contain so much intelligence that they will be battling one another for access to chunks of the wireless spectrum.

The kind of communication I’m talking about here falls under the umbrella of cognitive radio. That term refers to intelligent systems of wireless communication in which the radios people carry around try to achieve the best performance possible by sensing and adapting to changes in their electromagnetic environments, including changes in the way other radios are operating.

Regulatory bodies such as the U.S. Federal Communications Commission (FCC) are just beginning to embrace the idea of cognitive radio, recognizing that the traditional way they have assigned fixed portions of the spectrum is hopelessly inefficient. It’s inefficient because whoever holds the license to broadcast at some assigned frequency doesn’t actually use that privilege everywhere or at all times. Yet nobody else is free to use that frequency even when it’s wide open. The upshot is that radio spectrum, one of the world’s most valuable resources, is mostly wasted.

Policymakers and engineers both want to develop systems that would let people take advantage of otherwise unused parts of the radio spectrum without creating chaos for license holders or themselves. One scheme under consideration, for example, seeks to exploit fallow TV spectrum—known in the trade as white space—for what some have dubbed “Wi-Fi on steroids.” That certainly sounds attractive, but unless it’s done with care, the people trying to use such networking equipment may end up with the same kind of frustrations Jobs experienced last June, only on a catastrophic scale.

The question of how best to share limited radio spectrum has mostly been studied from the perspective of whoever is calling the shots—say, a wireless carrier like Verizon or AT&T. How, for example, should each cellular base station divide up the airwaves it controls among the different customers it’s communicating with? Such questions are comparatively easy to answer, because the base-station equipment can divvy up the available spectrum in whatever way the system designer specifies.

The up-and-coming world of cognitive radio will be a lot more complicated, because usually there will be no central authority. And you can’t take for granted that different radios will harmoniously manage their operations so that they don’t step on one another’s transmissions. Indeed, a better bet is that they will act as selfishly as they possibly can.

Regulatory authorities like the FCC may demand that these radios be programmed to act cooperatively. But in the end, it would be nearly impossible to police exactly how each person uses his or her cognitive-radio handset or wireless network adapter. Game theorists are not put off by such realities. Rather, they assume that rational participants in any conflict for resources will always act selfishly and will cooperate only when it improves their own chances of getting whatever it is they want.

Using game theory, my students and I recently devised some clever ways to foster cooperation among cognitive-radio users. There are many variations on this general approach. Which one you use depends on the details of the wireless network you are considering, how you measure success, and the means you have at your disposal to get everyone to play nice. Here I describe one example that’s easy to understand without so much as a single equation.

Let’s say three pairs of radio users are vying for the same slice of the electromagnetic pie. In this little game, each “player” is really a pair: two entities that are exchanging digital data wirelessly. The details of how they do that don’t really matter.

Imagine that each of these three players is able to transmit data at a different rate because of differences in the condition of the radio channels used. Maybe the two radios in each pair are separated by different distances, or perhaps the three players experience different amounts of radio noise. To keep things concrete, let’s say that player A can (in the absence of other players) send data at 10 million bytes per second (B/s), player B at 5 million B/s, and player C at 1 million B/s. Assume also that each player has an insatiable hunger for wireless bandwidth—which isn’t all that different from real life.

The simplest cooperative strategy you can imagine is for these three players to graciously take turns: Player A uses the airwaves for some period of time (to keep things simple, assume it’s 1 second); then B communicates for 1 second; then C transmits for 1 second; and then the turn passes back to A. Although the three players get equal airtime, networking specialists call this allocation proportionally fair, because the number of bytes they get to transmit in each second—a direct reflection of the bandwidth they are allotted—is not equal. Rather, the amount of data transmitted is proportional to the quality of their communication channels.

Arranging for all three to get equal bandwidth would be another way to share the available radio spectrum. But equal bandwidth wouldn’t be such a great goal to strive for, because to achieve it, you’d have to give most of the airtime to the player with the slowest connection, which squanders the radio resource available to the group. Better to divide wireless privileges into brief time slots of equal duration and allot those one after another in turn to each of the three players.
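To put numbers on that, run the arithmetic for the three players above. Under equal airtime, each transmits at its own rate for a third of the time; under equal bandwidth, the common throughput x has to satisfy x/10 + x/5 + x/1 = 1, so the slowest player ends up occupying most of the channel. The short Python sketch below (illustrative only, using the rates already given) does the computation.

```python
# Compare two ways of sharing the channel among players whose peak rates
# differ, using the rates from the example: A = 10, B = 5, C = 1 million B/s.

rates = {"A": 10.0, "B": 5.0, "C": 1.0}  # million bytes per second

# 1) Equal airtime (proportionally fair round-robin): each player gets one
#    third of the time and sends at its own rate during its turn.
equal_airtime = {p: r / 3 for p, r in rates.items()}

# 2) Equal bandwidth: pick time shares so every player moves the same number
#    of bytes per second. The common throughput x satisfies sum(x / r) = 1.
x = 1 / sum(1 / r for r in rates.values())
time_share_C = x / rates["C"]

print("Equal airtime   :", {p: round(v, 2) for p, v in equal_airtime.items()},
      "total =", round(sum(equal_airtime.values()), 2), "million B/s")
print("Equal bandwidth :", round(x, 2), "million B/s each, total =",
      round(3 * x, 2), "million B/s")
print(f"Airtime player C needs under equal bandwidth: {time_share_C:.0%}")
```

Equal airtime moves about 5.3 million B/s in total; equal bandwidth moves only about 2.3 million B/s, and player C soaks up roughly 77 percent of the airtime to do it.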

The problem with using such a simple round-robin is that the conditions of radio channels are constantly in flux—a phenomenon that anybody who has ever used a cellphone in a building can attest to. Player A may be able to transmit 10 million B/s on average, but during certain seconds it will be able to transmit a few million bytes more or a few million less. Channel conditions for player C could sometimes become so poor that the pair would not be able to exchange any bytes at all.

It would, of course, be silly to allocate a second of precious airtime to player C during such moments. Better to give that particular 1-second slot to another player that can use it. You can allocate an extra time slot to player C when its channel conditions improve.

That makes for only a minor variation on a simple round-robin. You still split up the periods of exclusive spectrum use into three equal parts and divide them equally among the three players. But you arrange the three-way division of time slots more intelligently, giving each player the slots that are most beneficial to it.

Many cellular base stations do exactly this. It’s easy to accomplish, because the base station, being a participant in each exchange, knows what the channel conditions are for everyone. To do something like that when there is no central controller, you’d have to set aside a sliver of the spectrum at hand so that the different players could exchange such information. Each player would send the others second-by-second reports of its channel conditions and would follow a rule saying that the one who can best take advantage of a given time slot could use it. “Best advantage” would be measured by comparing how quickly each player could transfer information with how quickly that player communicated on average in the recent past.

Let me try to make this strategy more concrete. If, say, player C could communicate at 2 million B/s during a particular time slot—twice C’s average rate—C would receive a rating of 2. If B could send only 2.5 million bytes in that same time slot instead of its usual 5 million, B would receive a rating of 0.5. If A could send 15 million bytes instead of its normal 10 million, A would get a rating of 1.5. Because C earned the highest rating, C would get to use that time slot.

One second later, the players again would gauge what the ensuing 1-second time slot promised to provide them and then compare those values with their average throughputs. And once again, the player with the highest rating would enjoy exclusive use of the airwaves for the following second.
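For readers who think in code, here is one way such a decentralized scheduler could be sketched in Python. The fading model, the smoothing constant, and the way I track each player’s recent average are assumptions made for illustration; this is not the precise algorithm from our paper, just the slot-by-slot rating rule described above.

```python
import random

# Sketch of the rating rule: each second, every player estimates what it
# could transmit, divides that by its own recent average throughput, and the
# highest rating wins the slot. Channel model and constants are assumed.

AVG_WINDOW = 100.0                            # smoothing for the running average
mean_rates = {"A": 10.0, "B": 5.0, "C": 1.0}  # long-run rates, million B/s
avg = dict(mean_rates)                        # recent average throughput
slots_won = {p: 0 for p in mean_rates}

for slot in range(3000):
    # What each pair could send this second (toy random fading).
    achievable = {p: max(0.0, random.gauss(mu, 0.4 * mu))
                  for p, mu in mean_rates.items()}

    # Rating = achievable rate relative to that player's own recent average.
    ratings = {p: achievable[p] / avg[p] for p in mean_rates}
    winner = max(ratings, key=ratings.get)
    slots_won[winner] += 1

    # Update running averages: only the winner actually transmitted.
    for p in mean_rates:
        served = achievable[p] if p == winner else 0.0
        avg[p] += (served - avg[p]) / AVG_WINDOW

print(slots_won)  # each player wins roughly one-third of the slots
```

Because every player is rated against its own recent average, fast and slow players alike end up with about a third of the slots, but each tends to win the slots in which its channel is relatively best, which is what makes the allocation proportionally fair.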

You’re now primed to ask the obvious question: Why would these players share their private information about how good their radio channels are? Wouldn’t players be tempted to exaggerate the quality of their channels so that their ratings would be high enough to win the use of the upcoming time slot? It would seem that such a system is just another example of the classic prisoner’s dilemma, dooming this scheme to failure.

Maybe not. With a simple modification to this arrangement, the players could, in fact, be coaxed to cooperate, resulting in a proportionally fair allotment of the available radio spectrum.

One way to do that for these three players goes like this: To start, the players have to keep scorecards. If one of them is cheating, that player will end up with more than a fair one-third share of the time slots. In short order, the other players will recognize this and realize that the cheating player is exaggerating its channel quality. The other players can then punish the offending player by refusing to play nice for some set period of time.

During the subsequent punishment phase, a Hobbesian electromagnetic war rages. The airwaves rapidly become so cluttered with competing transmissions that nobody is able to get through. Perhaps it’s not as nasty as all that, but things are still bad enough that everybody’s transmission rate plummets.

We have worked out the mathematical details of this scheme—how you detect cheating and how long the system must continue to remain in the punishment phase to wipe out any ill-gotten gain a cheating player may garner in the short term. If the players’ cognitive radios follow our prescriptions, nobody will ever have an incentive to cheat.
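The scorecard idea can be sketched, too. The window length, the cheating threshold, and the length of the punishment phase below are placeholders I picked for illustration; the real values have to be derived, as noted above, so that the punishment wipes out any short-term gain from lying.

```python
from collections import deque

# Illustrative scorekeeping-and-punishment logic. Thresholds and durations
# are assumed placeholders, not the values derived in the paper.

FAIR_SHARE = 1 / 3
TOLERANCE = 0.10      # how far above one-third counts as cheating (assumed)
WINDOW = 300          # recent slots kept on the scorecard (assumed)
PUNISH_SLOTS = 200    # length of the punishment phase (assumed)

class Scorecard:
    """Each player runs one of these to spot cheaters and trigger punishment."""

    def __init__(self, players):
        self.players = list(players)
        self.history = deque(maxlen=WINDOW)   # winners of the recent slots
        self.punish_left = 0

    def record(self, winner):
        self.history.append(winner)
        if len(self.history) == WINDOW and self.punish_left == 0:
            for p in self.players:
                if self.history.count(p) / WINDOW > FAIR_SHARE + TOLERANCE:
                    # Someone is claiming far more than a fair share: punish.
                    self.punish_left = PUNISH_SLOTS
                    break

    def punishing(self):
        """True while everyone should transmit at once, so nobody gets through."""
        if self.punish_left > 0:
            self.punish_left -= 1
            return True
        return False
```

During the punishment phase all players transmit simultaneously and throughput collapses for everyone, including the cheater; make the phase long enough and exaggerating channel quality never pays.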

The basic concept we used to arrive at that happy conclusion is easy enough to understand. We just had to model a repeated game, where players make decisions about how to act based on the other players’ past moves. Cooperation then emerges from the threat of punishment, as I described, or by developing mechanisms to track and reward individual reputation, by fostering mutual trust, and so on.

I’ve been studying various ways to do that using something called mechanism design theory, a component of modern game theory forged by Leonid Hurwicz, Eric S. Maskin, and Roger B. Myerson, who together won the Nobel Prize in Economics for this work in 2007. They showed that by carefully setting up the game and its rules, the system’s designer can give even the most self-interested players incentives to behave.

The cognitive-radio game I’ve sketched here is admittedly an idealization. Things rapidly become more complicated when you consider more realistic spectrum-sharing situations. For some frequency bands, for example, a primary licensee will be involved, in which case you need to figure in the value of the payments secondary users must make to the “owner” of the spectrum (or even to one another) in determining their motivations. Also, real cognitive radios are able to adjust in more ways than just by switching their transmissions on or off. They can be required to limit the amount of power they use to transmit or to shift to a protocol that sends less data but produces less interference for others. The variations are nearly endless.

Yet another complication I’ve glossed over is how to ensure that the system doesn’t fall prey to what’s known as a Sybil attack, where one player pretends to be multiple players, harvesting the privileges allotted to each. Worse, it may have to deal with participants whose only aim is to disrupt communications.

Although other engineers and I have been fruitfully applying classical game theory to wireless communications, this branch of mathematics hasn’t by any means provided universal solutions to all possible problems. But it does offer a framework for analyzing challenges and constructing defenses.

Game theory’s first blossoming took place more than a half-century ago, under the dark military and economic clouds of that era. The world today is a very different place. But there is no shortage of competitive, cooperative, and conflicting interactions of selfish or malicious players to contend with—ones that nowadays are often governed by algorithms rather than by psychology. For making those parts of life work more smoothly, nothing is better suited than game theory.

This article originally appeared in print as “Cognitive-Radio Games.”

About the Author

K.J. Ray Liu, a professor in the department of electrical and computer engineering at the University of Maryland, began applying game theory about a decade ago to wireless communication—a subject he discusses in “Cognitive-Radio Games.” Although game theory is less than a century old, people have relied on these kinds of ideas since the Stone Age. “It’s in our blood,” he says. It’s only fitting, then, that soon it’ll also be in our phones.

To Probe Further

See K.J.R. Liu and B. Wang, Cognitive Radio Networking and Security: A Game-Theoretic View, Cambridge University Press, 2010.
