The Inside Story of Texas Instruments’ Biggest Blunder: The TMS9900 Microprocessor

The TMS9900 could have powered the PC revolution. Here’s why it didn’t


If you use a laptop or desktop computer, chances are it has a microprocessor from the Intel 808x line, regardless of whether it’s a Windows machine or a Mac. The utter dominance of these Intel microprocessors goes back to 1978, when IBM chose the 8088 for its first personal computer. Yet that choice was far from obvious. Indeed, some who know the history assert that the Intel 8088 was the worst among several possible 16-bit microprocessors of the day.

It was not. There was a serious alternative that was worse. I know because I was in charge of the organization within Texas Instruments that developed it: the TMS9900. Although this dog of a chip went on to be used in the world’s first 16-bit home computer, you’ve probably never heard of it. As they say, history is written by the winners.

This particular chapter of history is interesting not just for TI’s chip but also for another also-ran, the Motorola 68000, which was technologically superior to both the Intel 8088 and the TMS9900. And yet the 68000 did not end up in the IBM PC. Here is the inside story of how IBM came to pick an inferior chip, TI birthed a loser, and Motorola’s seeming winner lost, too.

I joined Texas Instruments in 1972, fresh out of graduate school, and about two years later I found myself giving a presentation to Jack Carsten, the manager of TI’s MOS Division in Houston, where the company’s work on metal-oxide-semiconductor chips was based. As a young engineer, I was somewhat intimidated by Jack, who sat with his feet on the conference room table throughout my presentation, smoking a cigar and muttering “bull****” when he disagreed with something that was said.

At the time, the “Big 3” semiconductor companies—Fairchild, Motorola, and TI—were struggling to make the transition from bipolar integrated circuits to metal-oxide-semiconductor circuits. MOS chips required a substantially different design and process technology from those of bipolar chips, and chip startups like Intel moved much faster than the established companies. Of the Big 3, TI had done the best job of making the transition, thanks in large part to people like L.J. Sevin [PDF], who left TI in 1969 to form Mostek and later became a venture capitalist. Carsten, who had previously served as general manager of TI’s profitable transistor-transistor logic (TTL) family of products, was also a key player in making the move to MOS.

TI’s MOS Division had achieved its most notable success with logic chips for the emerging handheld calculator market. Although the company had competed with, and eventually beaten, Intel to develop the first general-purpose microprocessor, TI engineers didn’t really pay much attention to either the Intel 4-bit 4004 or 8-bit 8008 microprocessors. TI did take note of Intel’s 8080 and subsequent 8080A 8-bit microprocessors, which showed much more promise than the 4004. The MOS division was given the job of catching up to Intel in both microprocessors and DRAM (or dynamic random-access memory, which can cram in more memory cells per chip than can static RAM but has to be constantly refreshed to prevent data loss).

And thus, a strategy for general-purpose microprocessor development emerged at TI. The key assumptions behind the strategy were that application software would drive the evolution of these chips and that with a successful line of MOS ICs, TI would be in a position to develop an industry standard for minicomputers, defense systems, and consumer products, all of which were rapidly growing businesses for the company. But to do that, TI would have to leapfrog the current 8-bit state of the art, represented by Intel’s 8080, and be first to market with a 16-bit architecture. From this strategy emerged the plan for the TMS9900.

TI had already demonstrated its computer prowess in the supercomputer race of the late 1960s. Driving that race were oil companies seeking a competitive advantage in 3D seismic analysis for oil exploration, which was TI’s founding business. IBM, Control Data Corporation, and others competed in this race, but TI was the first to market with its Advanced Scientific Computer [PDF].

So for TI, selecting a chip architecture for the 16-bit microprocessor was straightforward. TI had a strategy of “one company, one computer architecture,” which aimed to exploit any synergies among the company’s disparate divisions. TI’s Data Systems Division had already launched a family of TTL-based minicomputers for use in Ramada Inns across the United States. So the TMS9900 would use a chip architecture very similar to that of the TI minicomputer.

Carsten’s team knew that development of the TMS9900—as well as a bipolar version for the military market called the SBP9900—would require time and that the chips probably wouldn’t be ready until 1975 or ’76. In the meantime, the MOS Division needed to act. The plan was to start by copying the Intel 8080A to get something into the market, then develop a TI-original 8-bit microprocessor architecture (which would be called the TMS5500), and finally move to the 16-bit TMS9900. (National Semiconductor had already released a 16-bit general-purpose logic chip set, called the IMP-16, but because of its multiple chips, it never achieved much popularity.)

The TMS9900 had its fair share of development challenges and delays, but it was finally ready in 1976. Even then it faced several big problems. First, there were no compatible 16-bit peripheral chips. Without peripheral chips to handle communications and storage, the microprocessor would be worthless for system designs. The second problem was that the 9900 architecture, being the same as that used in TI’s minicomputers, had only 16 bits of logical address space, which was the same as the 8-bit microprocessors of the day. This problem couldn’t be solved without developing a whole new architecture. The final problem was that while TI could use a single microprocessor technology for its minicomputer, defense, and semiconductor businesses, competitors in those businesses would be at a disadvantage if they adopted the TI microprocessor architecture in their products.

To attack the lack of 16-bit peripherals for the TMS9900, TI engineers landed on an innovation. Why not put an 8-bit port on the TMS9900, so that the large number of existing peripheral chips designed for 8-bit microprocessors would work with it? I’m sure the idea sounded reasonable at the time. The result was the TMS9980, which emerged in 1977. The attachment of an 8-bit peripheral to a 16-bit microprocessor negated the only real advantage of the 16-bit architecture: its performance. The 9980 took two instruction cycles to execute an instruction for an 8-bit peripheral, thus cutting the effective performance in half and making it no better than existing 8-bit microprocessors. Before TI’s grand plan was realized, Carsten left to become VP of sales and marketing at Intel, no doubt sensing that Intel was going to be a difficult competitor to beat in the microprocessor market.

Intel was, of course, developing its own 16-bit microprocessor, the 8086, which was eventually introduced in April 1978. The company addressed the lack of compatible 16-bit peripheral chips in exactly the same way TI had, by adding an 8-bit port to its microprocessor, which yielded the Intel 8088. Like the TI 9980, the Intel 8088 was also a dog, showing reduced performance compared to the 8086 in any real system design. The Intel chip did have one fundamental advantage over the TI chip: It had 20 bits of logical address space instead of 16. That translates into the ability to address one megabyte of memory, as compared to 64K bytes for TI’s 9900. In addition, the off-chip registers for the TMS9900 and 9980 slowed down their performance even more.
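The difference those four extra address bits make is easy to quantify. Here is the arithmetic as a short Python sketch (Python is used purely for illustration; it postdates these chips by decades):

```python
# Each additional address bit doubles the addressable memory.
tms9900_address_bits = 16    # TMS9900/9980: 16-bit logical address space
intel8088_address_bits = 20  # Intel 8086/8088: 20-bit logical address space

tms9900_bytes = 2 ** tms9900_address_bits      # 65,536 bytes, i.e. 64 KB
intel8088_bytes = 2 ** intel8088_address_bits  # 1,048,576 bytes, i.e. 1 MB

# Four extra bits give the 8088 sixteen times the addressable memory.
ratio = intel8088_bytes // tms9900_bytes
```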

And while Intel had successfully developed alternate production sources for the 8086, TI struggled to close similar deals. At the time, most customers wanted at least two competing suppliers for any new family of semiconductor components, to ensure product availability and keep down prices.

Meanwhile, a few competitors had announced plans for their own 16-bit general-purpose microprocessors. Motorola’s 68000 was the most ambitious. Although it had a 16-bit external data bus, it actually had a 32-bit architecture internally, with the ability to address 24 bits of logical address space externally. A follow-on product would probably be able to address 32 bits. Zilog, creator of the popular 8-bit Z80 microprocessor, announced it would introduce the 16-bit Z8000, which had a segmented memory, in late 1978 or early 1979. Unlike the 68000, though, the Z8000 had a straightforward 16-bit architecture.

In October 1978, six months after the announcement of the Intel 8086, I moved to TI’s MOS Division and became the manager for microprocessors. By this time, everyone at the company, and many people outside the company, knew that TI’s 16-bit microprocessor strategy wasn’t working. Compounding that problem was the division’s largely unsuccessful attempt to develop a compatible 16-bit microcontroller, called the TMS9940, which was in its fifth or sixth re-spin by the time I arrived. I knew I was inheriting a difficult situation. So why would I give up a good job as engineering department manager of the consumer products group? The answer is location, location, location. The microprocessor business was based in Houston, whereas TI had moved the consumer products group to Lubbock, Texas. Lubbock is a city where the correct answer to the question, “How do you like living here?” is “The people are wonderful.” The country music singer Mac Davis, who grew up there, once wrote a song whose refrain went “I thought happiness was Lubbock, Texas, in my rearview mirror.”

Shortly after I arrived in Houston, I was told that I would need to give a presentation on the TMS9900 to a group from IBM that was working on a very secret project that required a 16-bit microprocessor. The group came from a rather unusual location for IBM: Boca Raton, Florida. I spent a lot of time preparing, gave what I thought was a well-polished presentation, and diligently followed up. But the IBM team displayed limited enthusiasm. We wouldn’t know until 1981 just what we had lost.

John Opel, president and then CEO of IBM, had done something rather revolutionary when he formed the Boca Raton group, which later became known as the Entry Systems Division. He realized that personal computers from Apple, Commodore, Radio Shack, TI, and others might eventually pose a threat to IBM’s dominance of the computer business. So he gave the Boca Raton group, which reported to Philip (Don) Estridge, carte blanche on the product it was developing—which was the IBM personal computer, of course. They could use third parties for anything they chose, including the operating system and application software. This latitude made the system quite “open” by IBM’s standards, and it would presumably accelerate the time to market. Opel imposed one restriction, however. The product would carry the IBM name, so it couldn’t damage the corporate reputation for quality and reliability. To that end, IBM’s massive quality-assurance organization had to sign off on the product before it could be released.

Selection of a 16-bit microprocessor by the IBM team couldn’t have been much of a debate. The Motorola 68K, as it was later known, was undoubtedly the hands-down winner. It had the largest logical address space, which was even more important than the minimum 16-bit internal architecture. It was also easily expandable to a full-fledged 32-bit architecture. And, most important, the 68K was a “Big Endian,” unlike the other contenders. The terms “Big Endian” and “Little Endian” refer to the order in which a computer stores bytes in memory. As 16-bit architectures evolved from 8-bit architectures, engineers had to decide which 8-bit byte came first in a 16-bit word. Digital Equipment Corp. chose the Little Endian approach for its Programmed Data Processor (PDP) and VAX architectures. Intel also opted for Little Endian. But IBM’s computers were all Big Endians. For a Big Endian to talk to a Little Endian, the byte order had to be reversed in real time. This conversion of data was nontrivial at the time. Motorola’s 68K required no such conversion for use with the IBM PC. So why aren’t we all using 68K-based computers today?
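Byte order is easier to see with a concrete value. The sketch below (in modern Python, again just for illustration) shows how the same 16-bit word is laid out in memory under each convention, and why data exchanged between a Big Endian and a Little Endian machine must be byte-swapped in transit:

```python
import struct

value = 0x1234  # a 16-bit word: high byte 0x12, low byte 0x34

big = struct.pack(">H", value)     # Big Endian: high byte stored first
little = struct.pack("<H", value)  # Little Endian: low byte stored first

assert big == b"\x12\x34"
assert little == b"\x34\x12"

# Reading Little Endian data as if it were Big Endian scrambles the value,
# so every multi-byte word must be byte-swapped when crossing conventions:
misread = struct.unpack(">H", little)[0]
assert misread == 0x3412
```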

The answer comes back to being first to market. Intel’s 8088 may have been imperfect, but at least it was ready, whereas the Motorola 68K was not. And IBM’s thorough component qualification process required that a manufacturer offer up thousands of “production released” samples of any new part so that IBM could perform life tests and other characterizations. IBM had hundreds of engineers doing quality assurance, but component qualifications take time. By 1980, Intel already had production-released samples of the 8088, while Motorola’s 68K was still not quite ready for production release.

And unfortunately for Motorola, the Boca Raton group wanted to bring its new IBM PC to market as quickly as possible. So they had only two fully qualified 16-bit microprocessors to choose from. In a competition between two imperfect chips, Intel’s chip was less imperfect than TI’s.

TI’s TMS9900 didn’t just quietly die after missing the brass ring of the IBM PC. Senior managers still held out hope of leveraging corporate synergy. Surely TI’s yet-to-be-announced home computer could use the TMS9900?

The computer’s development team reluctantly agreed to give it a shot. The group was the result of an unhappy merger of two departments, one that had been developing a video game console and the other a personal computer. The hybrid product they came up with was suitable for neither application. But TI doggedly pursued it anyway. The TI-99/4, as it was called, came to market in 1979, followed by the TI-99/4A in 1981. The company eventually sold 2.8 million units, most of them at a significant loss, before pulling out of the home computer market in 1984. [For more on the TI-99/4, see “The Texas Instruments 99/4: World’s First 16-Bit Home Computer.”]

Meanwhile, the Intel 8086 architecture evolved and overcame its shortcomings. (It is still a Little Endian, but that makes little if any difference today.) And Motorola, with its superior technology, lost the single most important design contest of the last 50 years.

While I’m on the topic of also-rans, let me say a few words about the IBM PC’s operating system. The logical choice for a 16-bit operating system was an extension of the popular CP/M operating system, developed by Gary Kildall at Digital Research for the Intel 8080 and widely run on Zilog’s Z80. IBM’s Boca Raton group understood the momentum behind CP/M as an open standard, and so they commissioned Digital Research to develop a 16-bit version, called CP/M-86. Later in the process, however, Microsoft came through with the MS-DOS operating system, about which much has already been written. And so the world of PCs evolved in a different direction for both the operating system and the microprocessor.

So what are the lessons to be learned from this history? One is that for anyone developing a product based on rapidly changing high technology, being first to market is paramount, no matter how extensive the limitations of your initial product may be; today, this concept is known to Silicon Valley types as creating a “minimum viable product.” Provided your product has distinctive new capabilities, your customers will explore innovative ways to use it.

The second lesson is that, if you’re running a large corporation that wants to create a skunkworks project free of the baggage of tradition, think hard about any restrictions you place upon it. Retaining control of the operating system for the IBM PC would likely have provided IBM with far better long-term value than imposing onerous qualification procedures did. No one could have anticipated the magnitude of the impact of personal computers, but the real value lay in operating system compatibility rather than in the hardware. If IBM, and not Microsoft, had controlled MS-DOS, Windows, and so on, the computing world would now be a different environment.

Finally, for people who are mainly bystanders watching a high-tech parade of events, keep your eyes open for opportunity. In TI’s case, we concluded in 1979 that the TMS9900 had lost the general-purpose microprocessor race, and so we looked ahead to what would be important after general-purpose microprocessors. Our strategy focused on special-purpose microprocessors and led to the development of the TMS320 series of digital signal processors [PDF]. Announced at the International Solid-State Circuits Conference in February 1982 and introduced the following year, the 320 DSP family and its derivatives became nearly half of TI’s revenue, groomed the company’s current management, and put TI into a competitive position for the race for the embedded processor system on a chip. In the 1990s, this strategy reversed TI’s decline in ranking among the top semiconductor companies and generated billions of dollars in chip sales for baseband modems, disk drive controllers, and a wide variety of other products.

A correction to this article was made on 26 June 2017.

About the Author

Walden C. Rhines is chairman and CEO of Mentor Graphics, in Wilsonville, Ore.