Java’s Forgotten Forebear

Java’s ability to run on many different kinds of computers grew out of much older software

Photo: Robert Glasheen Photograph Collection/Mandeville Special Collections Library/UCSD
PERSONAL VISION: To help bring interactive computing to education, Professor Ken Bowles of the University of California, San Diego, applied the virtual-machine concept to mini- and microcomputers.

The enduring appeal of Java isn’t hard to understand: With Java, you write code once and it can run on almost any modern computer or operating system—PC or Mac, Windows, Linux, OS X, whatever. It works that way because the Java compiler turns the source code into a kind of ersatz machine code that each of these different systems can execute when equipped with the proper run-time software. So different computers running different operating systems can all become, in programmers’ parlance, Java virtual machines.

That wonderful write-once, run-anywhere capability has made Java the most popular programming language now in use and, by some measures, the most successful computer language of all time. But what you may not know, particularly if you don’t have graying hair, is that this kind of software portability didn’t start with Java (which Sun Microsystems released in 1995). Indeed, the roots of this approach date all the way back to the late 1970s and early 1980s, during the heyday of the Apple II, the first IBM PC, and many other personal microcomputers built by companies that are long gone.

Much of the relevant work took place at the University of California, San Diego (UCSD), and it influenced academic computer science, the design of the Pascal programming language, object-oriented programming, and graphical user interfaces. Although that work did not produce a commercial success, the story of these visionary programmers and their audacious plans offers some unique insights into how the computer industry evolved—for example, why the Apple Macintosh is what it is. It also explains how an accident of fate would later bring these ideas to the world again in the form of Java.

Sex, drugs, and rock and roll were ascendant on U.S. college campuses in the 1960s. But something else was, too: computer programming (although it was far less popular than the other three). At most universities, computers took the form of centralized batch systems—a Control Data Corporation 3600 at UCSD, for example. To use it, you had to translate your program into a stack of punched cards, which you would submit to the technical wizards who managed the computer. Then, after perhaps a day or two, you could return to the campus computing center to pick up reams of green-bar printouts, which you would pore over late into the night to try to figure out why your code hadn’t executed properly. And after you found the bug in your code, you could go back to the card-punch machine to repair your logic or syntax.

Programming that way appealed only to the most dedicated. Starting in the late 1960s, however, a much better approach—interactive computing—beckoned from just over the technological horizon. And at UCSD, a young professor of applied engineering physics named Ken Bowles was determined to make it happen.

Before coming to UCSD, Bowles had witnessed the power of interactive computing at the Jicamarca Radio Observatory in Peru, where he had used a small Packard-Bell PB250 computer to control a massive antenna array and to collect data. So he knew firsthand how much small computers could aid research. He was also convinced that interactive computing could advance education in general.

Because of his deep interest in computers, Bowles was named to the committee formed to find a new director for UCSD’s computing center. That search failed, though, and in 1968 UCSD’s chancellor convinced Bowles to take the job himself. UCSD’s aging CDC 3600 mainframe needed to be replaced, giving Bowles the perfect opportunity to move his institution in the direction of interactive computing.

With that goal in mind, Bowles selected a Burroughs computer, in part because its designers were making strides toward interactive systems. Bowles and his students then began modifying the Burroughs operating system, making it more user friendly. All went well for a while, but during a 1974 lecturing trip to the University of Oxford, Bowles got word that UCSD had decided to replace the Burroughs machine, not with many smaller units but with big iron: an IBM mainframe. The university had also relieved Bowles of much of his authority as director.

The university took these actions to keep the computing center focused on administrative support, not better experiences for students. Rather than fight what would surely be a losing battle, Bowles stepped down as director, and his research into user-friendly operating systems was loaded onto a truck and carted away. One of his graduate students, Mark Overgaard, who had intended to work further on the Burroughs system, was left without a research topic.

Making the best of a bad situation, Bowles and Overgaard set out into uncharted research territory: They obtained a grant to use the LSI-11 minicomputer for education. The LSI-11 was a version of Digital Equipment Corp.’s then-popular PDP-11 with a central processing unit consisting of four large-scale-integration chips. Smaller than other PDP-11 models, it showed promise at the time as a platform for truly personal computing. Bowles and Overgaard aimed to apply the LSI-11 to a system for self-paced learning that Columbia University psychologist Fred S. Keller and his colleagues had popularized within educational circles in the 1960s—the “Personalized System of Instruction.”

Bowles also worked with the late Alfred Bork, a computer scientist and educational technologist at the University of California, Irvine, to help create ways for students to use small computers to advance through topics as their mastery of the material allowed. Although he was a physicist and engineer, Bowles was working to forge tools for instruction in any academic discipline. He anticipated that students and teachers would write and freely exchange courseware and software, as is done in today’s open-source community.

It was clear to Bowles that such software would have to include powerful commands for manipulating text, and it would also have to support complicated graphics—features few computer scientists of the era gave much thought to. Bowles also wanted the software to be portable. If you wrote a program for, say, an LSI-11, Bowles believed, you should be able to run it on a completely different kind of computer without having to modify the source code.

Bowles intended to create a computing environment that wouldn’t require developers to relearn skills when they moved from one machine to another. Programmers wouldn’t have to master a new set of monitor commands, a new debugger, or a new text editor—in the pre-mouse era, these used special key commands to manipulate the cursor. This environment, Bowles envisioned, should be able to run on a campus mainframe computer, on the minicomputers found in many research labs, or on personal computers, which he believed would increasingly appear in classrooms.

Bowles’s new system was to be based on Pascal, a programming language designed by the Swiss computer scientist Niklaus Wirth in the late 1960s. Bowles recognized Pascal’s superiority to the widely available BASIC, which was interactive and simple to learn but lacked adequate support for modularity and for programming using complex data structures.

But how could he ensure that this new system would be as portable as possible? Urs Ammann, a student of Wirth’s at the Swiss Federal Institute of Technology Zurich, had recently devised the perfect solution: Create a “p-machine” (the p stands for pseudo), which is to say a virtual machine, in software.

To appreciate what such a virtual machine is, you need to understand something about how computer programming is done. The central processing unit of a computer carries out very simple instructions—add one number to another, that sort of thing. These instructions are encoded and passed to the processor in the form of binary numbers, a sequence of 1s and 0s.

Writing a program directly in machine code is tedious in the extreme. People generally program in higher-level languages, whose instructions are considerably easier to understand: “If A equals B, execute command C,” for instance. Special software translates such high-level instructions into the machine code that the processor actually runs. That special software is typically a compiler that is specific to the language you’re using to write the program and the machine you want it to run on. A compiler that works on a Windows computer, for example, will not compile your program to run on an iPad.

The UCSD p-System included a compiler that translated high-level instructions into a machine code of sorts—but not one that would run on any computer of the day. The code was designed for a machine that didn’t exist. A processor could in principle be built to run this flavor of machine code, but that wasn’t the idea. Rather, the strategy was to write another piece of software that could translate this code on the fly into the current machine’s native operations. With such software, your computer—no matter what kind of processor it had at its heart—could become, at least virtually, a p-machine.
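The concept can be sketched in a few lines of present-day Python. To be clear, the opcodes and program below are invented for illustration—they are not the actual UCSD p-code instruction set—but they capture the essential trick: the “machine code” is just data, and any computer that can run the interpreter becomes, virtually, that machine.

```python
# A toy stack-based virtual machine, in the spirit of a p-machine.
# The opcodes here are hypothetical, not real UCSD p-code.

def run(program):
    """Interpret a list of (opcode, operand) pairs on a simple stack machine."""
    stack = []
    pc = 0  # program counter
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == "PUSH":        # push a constant onto the stack
            stack.append(arg)
        elif op == "ADD":       # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":       # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "HALT":      # stop and return the top of the stack
            return stack.pop()
    return None

# Compile once, run anywhere: this "p-code" computes (2 + 3) * 4 and runs
# unchanged on any machine that hosts the interpreter above.
code = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
        ("PUSH", 4), ("MUL", None), ("HALT", None)]
print(run(code))  # 20
```

A real p-machine adds procedure calls, memory access, and input/output, but the portability argument is the same: only the interpreter needs to be rewritten for each new processor, while the compiled p-code stays identical everywhere.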

Ammann wrote a Pascal compiler that generated what he called “p-code” for a p-machine, which was used to bring Pascal to several different mainframes. Bowles recognized the beauty of this approach, and his UCSD team adopted this strategy for use on micro- and minicomputers. UCSD’s p-machine was slimmed down in some ways compared with Ammann’s to meet the limitations of the many small computers of the time. By early 1976, Bowles’s students had programmed the first UCSD p-machine in assembly language on a PDP-11.

To Wirth’s relatively bare-bones definition of Pascal, the UCSD group added variables that could represent character strings (up to that point, the available kinds of variables could hold only numbers or at most one character each). The researchers included more powerful input and output operations, allowing you, for example, to access data in the middle of a file without having to read everything that came before it first. The group also added something called units, which were modules of code, each with a well-defined interface to other system components. Units were a step toward modern object-oriented design, in which the building blocks of a program are cohesive, reusable software modules that package data and code together.

In addition, the UCSD p-System included a file handler, a debugger, and a text editor for writing programs—all developed in Pascal and compiled into p-code. Graduate student Richard Kaufmann wrote the editor, one of the earliest to offer a full screen of text and a cursor that could be moved anywhere on the page using key commands. These software tools included drop-down menus, which appeared again much later in Apple’s Lisa and Macintosh computers.

Although the initial UCSD p-System was developed for a PDP-11 minicomputer, some of Bowles’s students also implemented a p-machine for microcomputers that used Zilog’s Z80 microprocessor. The first test of the system’s portability, carried out soon after that chip was introduced in July 1976, was an ambitious one: running the p-code for Kaufmann’s text editor on a Z80. It operated flawlessly on the first attempt. The UCSD p-System was later ported to work with other chips, including the MOS 6502 (of Apple II fame), the Motorola 68000 (the original Macintosh chip), and the Intel 8086 family of microprocessors (used in the first IBM PCs).

Bowles and his students hoped the p-System would revolutionize how people got an education. But for that to happen, they needed to get their software out to the world, which was no small trick in the pre-Web era.

Their first plan was to distribute the software themselves. Starting in 1978, UCSD students copied the p-System onto floppy disks and sent them to anyone who paid the licensing fee. Bowles soon realized, though, that it would take more to bring the p-System to a truly wide audience. So he went searching for a commercial partner.

The earliest successful connection was with a company called Terak Corp., in Scottsdale, Ariz. The UCSD p-System was a natural fit for the computer Terak was marketing, which was based on the LSI-11 and had bit-mapped graphics, pretty jazzy for the times. Bowles worked closely with Terak’s engineers, and Terak’s desktop computers equipped with UCSD Pascal soon started appearing at universities around the United States. But bigger things were to come.

Bowles achieved his greatest commercial coup when he licensed the UCSD p-System to Apple for distribution on Apple II personal computers. He had begun nurturing a connection with Apple in 1977, when he met Steve Jobs at the introduction of the Apple II at the West Coast Computer Faire. Jobs wanted to foster relations with a university, and Bill Atkinson, an early and influential employee of Apple, showed interest in the p-System. The result was Apple Pascal, a packaging of the UCSD p-System for the Apple II. It ultimately sold tens of thousands of copies.

Bowles also tried to come to an agreement with Bill Gates. At the time, Microsoft Corp. had just begun positioning itself to enter the operating-system market. But Gates wasn’t interested in paying royalties, and he ultimately purchased another operating system outright. It proved a wise decision. As we all know, IBM chose Microsoft’s operating system, MS-DOS, for its line of personal computers, which began selling in 1981. But the University of California struck a deal with IBM, and buyers of early IBM PCs could obtain units equipped with the UCSD p-System if they chose.

For the university, the commercial success of the UCSD p-System, limited as it was, created legal difficulties under the federal tax code of the day. As a result, Bowles had to spin off the profit-making parts of the work on the UCSD p-System, which was done by licensing it to a Massachusetts company called SofTech Microsystems.

SofTech benefited from a very favorable licensing deal, but sales of the UCSD p-System nevertheless flagged as MS-DOS took off—even though Microsoft’s product was technically inferior to the UCSD p-System in many respects. MS-DOS itself didn’t include a full-screen editor for writing code (or for composing any sort of text document), nor did it come with a Pascal compiler or any other high-level language. More troubling, MS-DOS lacked easy-to-use menus. Its users had to struggle to remember the required syntax and then type the commands they needed, say, to delete a file or to create a new subdirectory.

Apple achieved something much closer to Bowles’s vision of a friendly, easy-to-use computing environment when it introduced the Macintosh in 1984. No doubt the company’s experience with Apple Pascal and the influence of the UCSD graduates that Apple employed helped make that possible. Nevertheless, MS-DOS dominated the personal-computer landscape for the rest of the decade, while Apple’s operating system maintained a comparatively small foothold.

Luck and timing, of course, determined how much of this played out. But there was a technical reason, too, that the p-System failed to catch on during the early 1980s: Virtual-machine software is slower than native machine code. And in that era, before microprocessor clock speeds and transistor counts took off into the stratosphere, speed mattered a lot.

When Bowles realized that the UCSD p-System wouldn’t make significant inroads into the operating-system market, he tried to convince SofTech to make the p-System available to run on top of other microcomputer operating systems, just as Java does today. But the firm’s leadership didn’t agree, a decision they must have ended up regretting.

So from the commercial demise of the UCSD p-System in 1985 until Java was released in 1995, there was essentially no way for a program compiled only once to run on more than one kind of computer. You could take, say, something written in C for one platform and compile it on a different one, but you’d have to make lots of fussy changes to the source code for the program to run properly. The task became even more vexing as graphical user interfaces became widespread, because such interfaces rely on routines built into the operating system. The user interface often constitutes most of the code in any given program, so amending it to run on a different kind of computer can be a major headache.

In the early 1980s, while the UCSD p-System was still struggling to find a place in the hypersonically expanding microcomputer industry, James Gosling, a graduate student at Carnegie Mellon University, in Pittsburgh, was presented with a problem. His research group had acquired some rather unusual workstations, designed to run UCSD Pascal’s p-code directly in hardware. Gosling’s advisor figured that those machines didn’t have much of a future, yet his group had a lot of software that ran on them. So he assigned Gosling the job of porting that code to a more established brand of minicomputers, Digital’s VAX line.

Gosling could have done this by making the requisite fussy changes in every application program and recompiling. But he found an easier way: He wrote code that turned the VAX into a p-machine. Mission accomplished, he went on to obtain a Ph.D. based on completely unrelated work and soon took a job at Sun Microsystems. “Then fast-forward a bunch of years, when I was trying to do the project that Java came out of,” said Gosling in an interview for ACM Queue. “I had to do this architecture-neutral distribution format, and then I just went ka-ching! You know, this p-code translator thing would actually just drop in there.”

Java, the architecture-neutral programming language that Gosling and his colleagues at Sun developed in the early 1990s, achieved long-term success precisely because it focused on portability, which is what Bowles had tried to convince SofTech to do in the waning days of the UCSD p-System. Java virtual-machine software doesn’t take over your computer; it doesn’t include an editor or debugger; it simply runs Java programs—and it does it for almost any operating system. SofTech was in a position to create such a product a decade before Sun did, but it fumbled the opportunity. Java has since become the dominant software language in the world.

And what became of Ken Bowles? He took early retirement from UCSD in 1984 to work on the standardization and promotion of the Ada programming language, which is largely derived from Pascal. Since 1995, he has immersed himself in the world of wildflowers, photographing them and working with botanists on their classification. He built a website that displays his work and wrote code to support it. His language of choice is Delphi Pascal, which owes a great deal to the ideas first embodied more than three decades ago in UCSD Pascal.

It just goes to show that old programming languages that are any good never actually die. They just fade into ones that go by other names.

This article originally appeared in print as “The Soul of the Virtual Machine.”

About the Author

William W. McMillan, who heads the computer science department at Eastern Michigan University, recalls being impressed by the UCSD p-System when he first came across the operating system in the late 1970s. “In many ways we went backward when we went to MS-DOS,” he says. He recently decided to explore the history of the p-System, in part because it is conceptually similar to Java. To his surprise, he found evidence linking the two.
