In August 2004, Todd Proebsting, a researcher in Microsoft’s platform and services division, was approached by a manager in the company’s testing organization who had spent months helping to create a piece of software to be used by other Microsoft programmers. Although it was an internal product, the software still had a rigid development schedule and an official launch date: November 2004, just a few months away.
The manager had heard a talk by Proebsting about something called a prediction market, a sort of stock market for ideas, in which Microsoft employees would in effect place bets on predictions, instead of on racehorses or football teams. A lot was riding on the timely completion of the testing software. “You said that a market could be used to predict schedules,” the manager said. “I want to know when my team will finish writing the software.”
Proebsting created a market with six possible bets: that the product would ship before November, in November, in December, in January, in February, or later than February. His pool of bettors included members of the development team itself, other developers, and program managers from related teams, as well as internal “customers”—the programmers within Microsoft who would use the software. He showed them all how to use the market, gave them each US $50 with which to wager, and then sat back and watched prices fluctuate.
“All six months were started equally at 16 2/3 cents on the dollar,” Proebsting says, meaning that you only had to bet that amount to win $1 if you were right. “Within seconds, the pre-November market went to $0.00 and never moved from there.” So much for beating the deadline. “The November date went down to 1.2 cents in about 3 minutes.” So much for meeting the deadline.
“The director of the group came to see me. He asked, ‘What have you done?’ ”
“No one believes your product will ship on time,” Proebsting told him. The director replied, “No one on the team is telling me this.”
After discussing things with his development team, the director came to accept what the market was “saying.” He decided to cut some of the software features that were holding things up. “And the price of the markets started to reflect that—the November price rose,” Proebsting says. “Then the internal customers got wind of the fact that some of their favorite features were being cut and demanded their features back. So the market then reflected that!” In other words, the markets that predicted the software would be very late went back up. “In the end,” Proebsting says, “the product shipped in February, which is what the market predicted.”
ArcelorMittal, Best Buy, General Electric, Hewlett-Packard, Nokia, and Samsung have all begun tapping into the “wisdom of crowds” to help them predict public reaction to new products, the future price of a commodity, or sales revenue in the next quarter. In the past few years, the technique has really taken off, with at least a dozen start-ups competing for business in the field. Some offer software and services to help companies tap the wisdom of their workers or the outside world. Others create markets that allow anyone to go to a Web site to bet or even to pose a question that can be bet on.
Chris F. Masse, a financial consultant in Sophia Antipolis, France, who specializes in prediction markets, says that by 2010, “10 percent of Fortune 500 companies will have gone public about their use of internal prediction markets, and probably another 10 percent will be testing some projects.”
Among the leaders in the emerging field are Consensus Point, in Nashville, which counts GE and Best Buy among its clients, and Inkling, a Chicago start-up that designs internal markets. Computer-game manufacturer Electronic Arts, in Redwood City, Calif., uses Inkling to predict industry assessments of its products. There is, inevitably, open-source software for prediction markets: the Zocalo project, which is run by software engineer Chris Hibbert and affiliated with North Carolina State University.
Meanwhile, the number of public markets is growing at an astonishing rate. You can already predict the popularity of Web sites, new movies, computer game hardware, financial instruments, and the eventual success of a book proposal or a musical artist’s first CD. You can bet on the success of sports stars or entire teams in an absurdly varied number of ways—including how many goals a team will score in a season and the number of fans who will attend its games. You can guess how many inches of snow will fall in New York City’s Central Park in December, when Osama bin Laden will be captured, and the outcome of a 2008 Senate race. At Smarkets, based in Austin, Texas, you can even buy shares representing relative sales of Amazon products, guessing if the retailer will sell more books, iPods, or 500-thread-count sheets next month.
Prediction markets have caught on so well in the United States that they’ve even attracted the attention of the state and federal regulators who oversee lotteries, casino gambling, and racetrack wagering. So in May of this year, a group of distinguished economists including Nobel laureate Kenneth Arrow, of Stanford, issued a statement asking that prediction markets be exempt from gambling regulations. In the statement, the group declared that “using these markets as forecasting tools could substantially improve decision making in the private and public sectors.”
Users bet on one outcome (the month of a product launch, a political candidate, or a sports team) more than another, which establishes a favorite and a long shot, just as in a horse race. As explained by financial journalist James Surowiecki, who wrote the 2004 book The Wisdom of Crowds, “under the right circumstances, groups are remarkably intelligent, and are often smarter than the smartest people in them.”
Prediction markets aren’t perfect. They failed spectacularly to predict Howard Dean’s startling 2004 defeat in the Iowa caucuses and Michael Jackson’s high-profile acquittal in 2005. In the 2007 National Collegiate Athletic Association men’s basketball tournament, they trailed 30 different expert sports analysts. But all in all, they consistently do better than other methods of predicting events. In May, Intel published the results of a comprehensive 18-month study of prediction markets. It found that they were at least as accurate as official forecasts by Intel management, and often better by as much as 20 percent.
Prediction markets got their start almost 150 years ago in the form of wagering on presidential elections. According to researchers Paul W. Rhode and Koleman S. Strumpf of the University of North Carolina, Chapel Hill, there were “large and often well-organized markets” in the period between 1868 and 1940. Rhode and Strumpf’s study found that the market did “an admirable job in forecasting elections in a period before scientific polling.”
In 1988, the University of Iowa College of Business revived the tradition of wagering on presidential elections with the world’s first electronic prediction market. Since then, researchers there have set up increasingly complicated markets to study their behavior and accuracy. In its first three elections, the Iowa Electronic Markets were off by an average of only 1.37 percent.
By 2006, electronic markets were predicting some elections in the United States with stunning success. A commercial marketplace, Tradesports, let the public wager on all 33 U.S. Senate contests held that year. Not a single public opinion poll predicted all 33 correctly, but the bettors at Tradesports collectively did. The site even forecast the Virginia and Montana contests, which were decided by mere tenths of a percent—a few thousand votes out of hundreds of thousands.
Formal research in prediction markets goes back at least to the storied economist Friedrich Hayek. As early as 1948, Hayek wrote about the ways that free markets emit information. By the late 1980s, Robin Hanson at George Mason University, in Fairfax, Va., and other researchers elsewhere had begun to study market behavior under controlled laboratory conditions.
So how do prediction markets typically work? First of all, they use real money. That’s important for keeping bettors honest. The price you pay is set by the market’s opinion on the odds of that outcome. If, for example, you have to pay 33 cents for a bet that former U.S. Senator Fred Thompson of Tennessee will be the Republican candidate for president of the United States next year (the price in mid-July), and 40 cents to bet on Rudolph Giuliani, former mayor of New York, then the market says there is a 1 in 3 chance of Thompson getting the nomination, while Giuliani’s chances are 2 in 5.
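The price-to-probability conversion described above is simple arithmetic. A minimal sketch (the function name is ours, for illustration):

```python
def implied_probability(price_cents):
    """On a contract that pays $1 if the outcome occurs, a market
    price of p cents implies the crowd assigns probability p/100."""
    return price_cents / 100.0

# The mid-July prices quoted in the text:
print(implied_probability(33))  # 0.33, a 1-in-3 chance for Thompson
print(implied_probability(40))  # 0.4, a 2-in-5 chance for Giuliani
```

The logic runs both ways: if you believe the true probability is higher than the price, the bet has positive expected value, which is what draws informed bettors into the market.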
Corporate prediction markets work the same way. Real money, or some other token stake, is still necessary, because it reduces the chance that participants will lie, out of boredom or to advance their agendas in some way. Using real money is a double-edged sword, however—it can also motivate people to manipulate the market, by virtue of being able to influence the outcome of events in the real world. For example, in the Microsoft market, if enough money were on the line, a programmer could deliberately introduce bugs into the code that would affect its release date, just as a college basketball star can throw a key tournament game. For that reason, and because companies don’t want to run afoul of insider trading laws, some markets limit the amount of money involved. Others use fake money, issuing modest prizes or honoring the winner in some other way.
In corporate prediction markets, the company involved usually subsidizes the wagers by giving participants initial stakes, real or otherwise. But even though they’re not risking their own money, the bettors generally don’t lie or misrepresent their beliefs with their bets. It’s human nature to want to win more money. And in addition to the financial reward for success, prediction markets are public forums, and winners can take pride in their success.
The first experimental corporate markets were at Hewlett-Packard Co., in Palo Alto, Calif. From 1997 to 1999 a researcher there, Kay-Yut Chen, with the help of Charles R. Plott, an economics professor at Caltech, let selected individuals bet on future sales of some of the company’s printer products. They found prediction markets to be “a considerable improvement over the HP official forecast.”
HP is no longer merely experimenting with prediction markets. Today, a market is used to predict the future cost of dynamic random access memory chips. “HP is the largest DRAM buyer in the world,” says Leslie Fine, a game theorist who works in the Information Dynamics Lab at the company’s HP Labs. “DRAM accounts for between 7 and 10 percent of the price of a new computer. Our profit margins are often less than that, so we’re intensely interested in its price.” Prior to using a betting system, Fine says, about 25 managers used to attend “endless meetings” each month to forge the next corporate prediction, which was then used by those who purchase the chips.
Last year, Fine and her colleagues assembled a group of 14 executives, “none of whom should have had the big picture.” After 3 hours of training in prediction markets, she set them loose on an internal company Web site, where they spent about an hour a month making their bets. The result? Besides spending far less time, the executives were more accurate. The endless meetings had produced predictions that were, on average, 4 percent off from the actual future prices of DRAM, while the prediction markets missed by 2.5 percent.
Prediction markets work so well because they ferret out those confident enough to back up their beliefs with cash. Suppose a marketer wants to predict what his company’s sales figures will be in the next quarter. He can set up a prediction market that puts the question directly to his salespeople, marketers, accountants, and others. The market gives those people an incentive to express their knowledge. The hope of winning money smokes out people who think they know the right answer, so the group of bettors is self-selecting.
Of course, some people don’t know as much as they think they do, and some will make lousy bets. But as it turns out, that’s a feature, not a problem. As long as there are people with both money and expertise, they will trump the bad betting of the ill-informed and overconfident with additional wagers of their own.
In the Microsoft case, for example, if the manager who believed, incorrectly, in the original release date of November bet accordingly, the payoff for November would go down, and the payoff for all the other months would go up. That higher payoff would raise the stakes for the people who were sure the product would ship later than November, and they would bet more money on one or more of those other months. Eventually, an equilibrium would be reached—which might not be different from the state of the betting before the bad November bet was made.
A market that consisted only of experts who were always right wouldn’t see much action, because bettors couldn’t win much. Imagine a poker game where almost all the cards are face up and everyone is a good player. As soon as the player with the best hand makes a raise, the other players drop out.
One way to inspire more betting is for the house to throw some initial money into the pot or to give the players some chips. That’s why most corporate markets give employees an account with which to start betting. That stake is a subsidy of sorts, as is “sucker” money—the betting of the ignorant. As Hanson notes, “the winners are attracted by losers, just as wolves are attracted by sheep.”
The importance of a diverse user pool for the success of a prediction market can hardly be overstated. If everyone has a similar mind-set or is using the exact same information, each person will predict events uniformly, and like the poker game with all the cards face up, the betting will be minimal at best.
Consider a market set up at the University of Iowa in Iowa City to predict outbreaks of influenza. The market was established because while organizations such as the U.S. Centers for Disease Control and Prevention track actual outbreaks of influenza, there was no good way of predicting them.
Betting in the Iowa influenza market is by invitation. If only epidemiologists participated, the market would suffer from similar mind-sets and information. So the researchers also invited doctors, nurses, and pharmacists, giving the market fresh information and a different set of perspectives.
Each week the CDC ranks influenza activity on a scale of 1 to 5. Bettors wager on which of the CDC’s levels will be reported in a given week, up to five weeks in advance. The market runs throughout the flu season, from September to April. According to a study published this year in the journal Clinical Infectious Diseases, during the market’s first full year, 2004–2005, it correctly predicted the exact level 71 percent of the time one week in advance and 50 percent of the time two weeks ahead.
The use of the CDC’s five levels satisfies a key requirement for a successful prediction market: specificity. Markets have to be about measurable outcomes. Bets on a product release need specific time frames, such as the month-by-month market Microsoft set up. Sales figures can be divided into ranges, as the HP printer product markets were.
Last year, a poorly worded contract by Tradesports caused a ruckus when some bettors lost money. Tradesports had set up a market to predict whether North Korea would test a long-range missile—which was defined as sending a missile beyond the country’s airspace. Those who bought that contract, which expired on 31 July 2006, thought they had won when a North Korean missile flew 442 kilometers (275 miles) into the Sea of Japan (East Sea). However, the contract also specified that the source of the confirming information had to be the U.S. Department of Defense, which declined to release any specifics about the test. Despite a White House statement that confirmed the missile “went out about 275 miles,” Tradesports awarded the contract to those who bet against a successful test.
Vague or ambiguous outcomes are an ongoing problem at Inkling, the Chicago start-up, which also runs a Web site that lets anyone create a market. The company has more than 1200 active or completed markets, according to cofounder Adam Siegel. But about 450, or three out of eight, are “collecting dust”—no one is making wagers. Siegel says the most common problem is a bad question. Some are problematic because they ask for opinions instead of predictions. “We get a lot of ‘Will my wife get pregnant?’ How would anyone know enough to say?”
Many prediction markets use a format known as a double auction, in which buyers and sellers submit bids. Such a market resembles a newspaper’s stock listings, with “ask” and “bid” prices.
For example, on a particular night in mid-July, the Atlanta Braves were favored to win a baseball game the following night against the San Diego Padres. To have the chance to win $1 betting on them, you had to wager 57.5 cents (the ask price). If you had already made a wager on the Braves and wanted to sell it, though, you would get only 57 cents (the bid price). At any given time, the price—in this case, 57.5 cents—tells us the market’s prediction: the Braves will win, and the likelihood is 23 out of 40 (575/1000).
As in the stock market, the bid price is what someone will give you when you want to sell, and the ask price is the one at which someone will sell you the item you want to buy. A large gap between them represents a lack of what is called liquidity, and it’s usually a result of there being too few buyers and sellers overall or too many of one or the other.
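The bookkeeping for such a two-sided quote can be sketched in a few lines (a helper of our own devising, not any exchange's API); the spread doubles as a rough gauge of liquidity:

```python
def quote_summary(bid, ask):
    """Summarize a two-sided quote on a $1 contract.
    bid: what a buyer will pay you to take over your position
    ask: what you must pay to open a new position"""
    spread = ask - bid            # a wide spread signals a thin market
    midpoint = (bid + ask) / 2.0  # one common estimate of the market's probability
    return spread, midpoint

# The Braves quote from the text: 57-cent bid, 57.5-cent ask
spread, midpoint = quote_summary(0.570, 0.575)
print(spread)  # half a cent: a tight, liquid market
```

The article quotes the ask price (57.5 cents) as the market's prediction; the midpoint is another common convention, and the two converge as the spread narrows.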
In 2002 and 2003 George Mason’s Hanson wrote a pair of papers that suggested a new way of running a prediction market that avoids the gap between bid and ask prices; it lets users simply bet on what they think the future will hold. Hanson named it the Logarithmic Market Scoring Rule Market Maker, but other researchers have whittled that down to Hanson’s Market Maker.
When users come to a prediction market that uses the Market Maker, they can just buy or sell whatever the system is offering. Even if there’s only one bettor, he or she can still make a trade at any time. The market uses an algorithm to make up a price of its own and takes the trade. The algorithm contains variables that include the amount of money that bettors have collectively spent so far, the number of shares outstanding, and the maximum amount of money the Market Maker is willing to lose. The more money it is willing to lose, the more market liquidity there is, meaning that bettors can buy more shares at or near the current price without causing massive price swings.
Suppose the wager in question is whether a Democrat will win next year’s U.S. presidential election, and you want to wager in the affirmative. You’d win $1 if a Democrat wins, and nothing if the Democratic candidate loses. When the first trader comes along and wants to buy, say, 10 shares, the algorithm kicks in to determine how much the trader has to pay for them. According to David Pennock, who is in charge of prediction markets at Yahoo Research, plugging in a value of $100 as the maximum that the Market Maker is willing to lose, the algorithm will generate a price, in this case, $5.12 (that is, $0.512 per share).
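The algorithm behind Hanson's Market Maker can be sketched directly from its defining cost function, C(q) = b · ln(Σᵢ e^(qᵢ/b)), where b is the liquidity parameter; a trader pays the change in C caused by the shares he buys. As an assumption on our part, we treat the $100 figure in the text as b itself (the maker's true worst-case loss in a two-outcome market is then b · ln 2, about $69; accounting conventions vary by implementation), which reproduces the quoted $5.12:

```python
import math

def lmsr_cost(b, quantities):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def price_of_trade(b, quantities, purchase):
    """A trader pays the change in the cost function caused by the trade."""
    after = [q + d for q, d in zip(quantities, purchase)]
    return lmsr_cost(b, after) - lmsr_cost(b, quantities)

b = 100.0                  # liquidity parameter (assumption: the $100 figure)
opening_book = [0.0, 0.0]  # no "Democrat wins"/"Democrat loses" shares sold yet
cost = price_of_trade(b, opening_book, [10.0, 0.0])  # first trader buys 10 shares
print(round(cost, 2))  # 5.12 -- about 51.2 cents per share
```

Note that the price per share rises as more shares are bought, so large trades move the market; a bigger b flattens that response, which is exactly the liquidity effect the paragraph describes.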
Hanson’s Market Maker is in effect another subsidy used to create an effective prediction market. As such, it’s a reasonable thing for a corporation to do. After all, anything a company does to suss out hidden pockets of expertise involves some explicit or tacit expense, whether it’s taking a poll or scheduling monthly meetings. A commercial exchange such as Tradesports, however, is in the business of making money; the fact that it generates useful information is only an incidental benefit. Hence, it continues to use the more cumbersome—but frugal—method of separate bid and ask prices. In contrast to a Market Maker, Tradesports doesn’t take any stake in the bets.
HedgeStreet, a San Mateo, Calif., exchange devoted to foreign currencies and real estate, as well as gold, oil, and other commodities, also uses bid and ask. It claims to be the first retail market for certain types of trades, such as derivatives, which are in effect wagers on the future price of something, such as gold, without trading the commodity itself. Its markets, which operate under the oversight of U.S. regulatory agencies, blur the distinction between a prediction market and more traditional financial exchanges.
One market with an interesting structure is run by Media Predict, of New York City. Media Predict users can bet on events such as first-day movie grosses. For example, on 9 July, two days before the release of the film Harry Potter and the Order of the Phoenix, trading was at $56.80 (each dollar represents $1 million in ticket revenue). If you bought at that price and the movie grossed $70 million on its opening day, you would win $13.20. If ticket sales were only $50 million, you would lose $6.80. In addition, the site’s users can review book proposals or music from unsigned bands and make wagers on the likelihood that they will be popular and successful. Such markets are springing up all over. At the simExchange, for example, gamers predict the sales of console hardware and upcoming video games [see table, “So Many Markets”].
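The payout arithmetic for such a linear contract is straightforward; a sketch, assuming the dollar-per-million scaling the text describes (the function name is ours):

```python
def box_office_payout(purchase_price, actual_gross_millions):
    """Profit or loss on a linear contract where each $1 of contract
    price represents $1 million of opening-day box-office gross."""
    return actual_gross_millions - purchase_price

print(box_office_payout(56.80, 70.0))  # a $13.20 profit
print(box_office_payout(56.80, 50.0))  # a $6.80 loss (negative return)
```

Unlike the $1 yes/no contracts discussed earlier, this payout is unbounded above, so the trading price reads directly as the market's expected value of the gross rather than as a probability.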
Companies can set up their own markets by hiring a company such as Inkling, or by using increasingly sophisticated software designed for the purpose. Or you can roll your own, using the Zocalo project’s open-source code.
Despite all the theoretical and practical evidence of their usefulness, many companies are still reluctant to use prediction markets. Hanson notes that corporate managers would often rather retain control over decisions, even if a market would be more accurate. And an accurate prediction of, say, a product release date can undermine a manager’s efforts. “If you act as if you can still make a deadline, even if you know you can’t, you can get people to be more productive,” he says. “Moreover, some corporate decisions have less to do with data than coalition building.” Still, prediction markets are gaining acceptance. For example, HP researcher Fine says her company’s consulting arm has begun a trial with Pfizer, the giant pharmaceutical manufacturer, to predict the success of new drugs while they are in the earliest stages of development.
Henry Berg, who runs the Information Markets group within Microsoft, notes that in many cases a company has no formal prediction methods in place. “An organization adopting prediction markets needs to make two major adjustments: deciding to start making formal predictions about the future and choosing to use prediction markets as the mechanism,” Berg says. “In my experience, the first adjustment is greater than the second.”
Author Surowiecki says prediction markets “threaten the established order.” For example, it would be interesting, though unlikely, for a company to ask its workers to bet on the consequences of the chief executive being forced to step down. If the workers bet the firm’s stock price would rise, that would suggest they thought it would be a good idea.
Prediction markets are not only subversive, they’re controversial in other ways. There’s no better example than FutureMAP, a 2003 attempt to create some markets at the U.S. Defense Advanced Research Projects Agency. FutureMAP’s markets were devoted to political questions—the political stability of Jordan; North Korean nuclear missile attacks; whether Yasser Arafat, then the head of the Palestinian Authority, would be assassinated. It was to have two components: a private market, like the ones created by HP and Microsoft, for analysts at the CIA and intelligence agencies; and one for the general public. FutureMAP quickly ran afoul of public opinion. Members of Congress found it offensive and ghoulish, and they quickly terminated the program.
Surowiecki also says prediction markets are counterintuitive. Sometimes, even he finds crowd thinking hard to believe. When The Wisdom of Crowds came out, he would tell the audiences at his book signings an anecdote in which a crowd at a 1906 English county fair collectively guesses the weight of a slaughtered ox. Then he would do a comparable experiment by having his audience guess the number of jelly beans in a jar. “I always had a thrill of fear,” he says. “Maybe it won’t work this time; maybe the crowd will be stupid. It always does work. But I spent two and a half years researching this, and I still have those thoughts.” Nonetheless, he says that if managers can get past their doubts, they can create markets that foretell problems while there’s still time to do something about them.
If prediction markets seem awfully close to out-and-out gambling, it’s worth noting that stock markets were originally thought of the same way, and many financial instruments since then have been similarly condemned, including short selling, whereby you make money when a stock goes down instead of up; options trading; commodity futures; derivatives; and, most recently, hedge funds. Hence the concern expressed by economists such as Nobel laureate Arrow that we not lose the social utility that prediction markets offer.
Futures markets help farmers decide how much wheat and orange juice to produce. So, too, prediction markets might soon help media executives decide which books and movies to green-light. After all, it was ordinary consumers who made the Harry Potter books and movies successful.
It’s not surprising that Internet-based mechanisms can be found to tap into the wisdom of the masses. What’s surprising is that there’s a way to draw it out so quickly and efficiently. And that it works so well.