Alan Turing: How His Universal Machine Became a Musical Instrument

The computing pioneer gave his computer the ability to play notes

Pictured here aged 35, Alan Turing was one of the most important figures in early computing. Photo: Archivio GBB/Contrasto/Redux

Alan Turing is one of the great pioneers of the digital age, establishing the mathematical foundations of computing and using electromechanical digital machines to break German ciphers at Bletchley Park, in England, during World War II. But one of his contributions that has been largely overlooked is his pioneering work on transforming the computer into a musical instrument.

It’s often said that computer-generated musical notes were first heard in 1957, at Bell Labs, in the United States. In fact, the computer in Turing’s Computing Machine Laboratory at the University of Manchester, in England, was playing musical notes many years before that.

Tom Kilburn and Freddie Williams created “Baby,” the first general-purpose electronic computer designed to run programs stored in its internal memory. Photo: University of Manchester/Press Association/AP

It was in the Manchester lab, in June 1948, that the first electronic all-purpose, stored-program computer ran its first program. Nicknamed “Baby,” this prototype was rough and ready. Programs were entered into memory, bit by bit, via a panel of hand-operated switches. Bright dots and dashes on a tiny glass screen formed the output. Baby was created by two brilliant engineers—Freddie Williams and Tom Kilburn—as a test bed for their new groundbreaking, high-speed electronic memory, the Williams-Kilburn tube (a type of cathode-ray tube). Although Baby ran its first program a few weeks before Turing arrived at the Manchester lab, Turing’s ideas had heavily influenced Kilburn as he designed the computer. (Kilburn didn’t like giving Turing credit, but the historical evidence on this point is clear.)

Baby’s memory was built out of cathode-ray tubes. Bombarding the tube’s screen with electrons at a given point had the effect of storing a small area of charge there; this represented one bit, which could be read out by a metal pickup plate. Photo: SSPL/Getty Images

After his arrival, Turing improved on the bare-bones nature of Baby, designing an input-output system that was based on the wartime technology used at Bletchley Park. Williams and Kilburn themselves knew nothing of Bletchley Park and its nine gigantic Colossus computers. These secret machines were the world’s first large-scale electronic computers, although they were not all-purpose and did not incorporate the concept of the stored program. Instead, each Colossus was controlled by switches and a patch panel. The war ended before a plan to use a program punched into a teleprinter tape to control the computer could be tested.

Turing used the same punched tape as the basis of his input-output punch and reader. As with Colossus, a row of light-sensitive cells converted the tape’s patterns of holes into electrical pulses and fed these pulses to the computer. What made Baby unique was that rather than running the program directly from a tape, it stored the program in memory for execution. (Once programs are stored in internal memory, a computer can edit them before—or even while—they run.)

Soon, a larger computer took shape in the laboratory. Turing called it the Mark I. Kilburn and Williams worked primarily on the hardware and Turing on the software. Williams developed a new form of supplementary memory—a rotating magnetic drum—while Kilburn took the lead in developing the guts of the computer, such as the central processor. Turing designed the Mark I’s programming system and went on to write the world’s first computer programming manual. The Mark I was operational in April 1949, with refinements continuing as the year progressed. Ferranti, a Manchester engineering firm, was contracted to build a marketable version of the computer, and the basic designs for the new machine were handed over to Ferranti in July 1949. The first Ferranti computer was installed in Turing’s Manchester lab in February 1951 (a few weeks before the earliest American-built commercial computer, the UNIVAC I, became available).

Turing referred to the new machine as the Manchester Electronic Computer Mark II, while others called it the Ferranti Mark I. (Turing’s nomenclature will be followed here.) He wrote the programming manual in anticipation of the Mark II’s arrival and titled it Programmers’ Handbook for Manchester Electronic Computer Mark II, but it was based on his programming design work for the Mark I.

The Manchester computer was commercialized in the form of the Ferranti Mark I, the world’s first electronic general-purpose computer to go on the market, released in early 1951. Photo: SSPL/Getty Images

How did Turing turn the Manchester computer into a musical instrument? His Programmers’ Handbook spelled it out. As far as we know, the Handbook contains the earliest written tutorial on how to program an electronic computer to play musical notes.

The Manchester computer had a loudspeaker—called a “hooter”—which served as an alarm when the machine needed attention. With some additional programming, the hooter could be made to emit a range of musical notes.

This was quite a trick. To produce a tone from a loudspeaker, you have to send it an oscillating electrical signal. The frequency of the oscillations gives the frequency of the tone. Modern digital sound equipment can generate all sorts of complex oscillating waveforms, but all the builders of the Mark II could manage was to send sequences of on/off digital pulses to the loudspeaker. And this is exactly what the computer’s hoot instruction did; executing the instruction once sent one pulse to the hooter. But one pulse on its own would just make a sound that Turing described as “something between a tap, a click, and a thump.” One could produce a recognizable tone by using a program loop that repeatedly executed the hoot command, sending a train of pulses to the hooter. The frequency of the tone was fixed by the length of time between pulses.
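
To get a feel for why a rapid train of clicks is heard as a steady tone, here is a minimal Python sketch; it is an illustration, not Turing’s code, and it models the Mark II only loosely. The 44.1-kilohertz sample rate, the 2-second duration, the pulse width, the output file name “hoot.wav,” and the 521-hertz pulse rate (a figure explained in the next paragraph) are all assumptions made for the example.

```python
# Illustrative sketch only: synthesize a train of short on/off pulses and
# write it to a WAV file, so the click train can be heard as a pitched tone.
import struct
import wave

SAMPLE_RATE = 44100   # output sample rate (an arbitrary modern choice)
PULSE_HZ = 521        # pulse repetition rate; roughly the C5 discussed below
DURATION_S = 2.0      # seconds of audio to generate
PULSE_WIDTH = 5       # samples per "on" pulse: a brief click, not a sine wave

period = SAMPLE_RATE // PULSE_HZ                     # samples between pulse onsets
frames = bytearray()
for n in range(int(SAMPLE_RATE * DURATION_S)):
    on = (n % period) < PULSE_WIDTH                  # pulse at the start of each period
    frames += struct.pack("<h", 20000 if on else 0)  # one signed 16-bit sample

with wave.open("hoot.wav", "wb") as wav:
    wav.setnchannels(1)            # mono
    wav.setsampwidth(2)            # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```

The perceived pitch is set entirely by the spacing between pulses; changing that spacing is precisely the lever Turing’s hoot loops pulled.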

Now here’s the clever part. The length of time a computer uses to execute an instruction is measured by programmers in clock cycles: To keep all its circuits working in sync, a computer has a master clock, and only at each tick of this clock are the results from one set of circuits accepted by the next. (Modern computers have clock speeds measured in gigahertz—that is, billions of cycles per second. The Mark II chugged along at a little over 4 kilohertz, that is, four thousand cycles per second.) The hoot instruction took four cycles to complete, sending a one-cycle-long pulse to the loudspeaker on the fourth cycle. The instruction to loop also took four cycles, so when looping, the pulse would be sent every eight cycles, or at a frequency of 521 hertz, which is very close to the note C5. (The subscript indicates the octave; middle C is C4.) Turing realized that by using multiple hoot instructions inside the loop and/or “dummy” instructions that the computer would “waste” cycles in executing, he could vary the time between pulses, creating notes with different frequencies. For example, two hoots in a row followed by the loop instruction would produce F4.
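
That arithmetic can be put in a few lines of Python. This is a sketch under stated assumptions, not Mark II code: the clock figure of 4,166 cycles per second is chosen so that one pulse every eight cycles comes out near the 521 hertz mentioned above, and the helper simply names the nearest equal-tempered note.

```python
# Sketch of the cycle-counting arithmetic: the pitch heard is the repetition
# rate of the loop's pulse pattern, i.e. clock rate divided by cycles per pass.
import math

CLOCK_HZ = 4166      # assumed clock rate ("a little over 4 kilohertz")
CYCLES_PER_HOOT = 4  # a hoot instruction takes four cycles
CYCLES_PER_LOOP = 4  # the jump back to the top of the loop takes four more

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def loop_pitch(num_hoots: int, dummy_cycles: int = 0) -> float:
    """Fundamental frequency of the pulse pattern made by a loop containing
    `num_hoots` hoot instructions plus any deliberately wasted cycles."""
    cycles_per_pass = num_hoots * CYCLES_PER_HOOT + CYCLES_PER_LOOP + dummy_cycles
    return CLOCK_HZ / cycles_per_pass

def nearest_note(freq: float) -> str:
    """Name the equal-tempered note closest to freq (taking A4 = 440 Hz)."""
    semitones_from_a4 = round(12 * math.log2(freq / 440.0))
    return f"{NOTE_NAMES[(semitones_from_a4 + 9) % 12]}{4 + (semitones_from_a4 + 9) // 12}"

print(loop_pitch(1), nearest_note(loop_pitch(1)))  # one hoot per loop: ~521 Hz, C5
print(loop_pitch(2), nearest_note(loop_pitch(2)))  # two hoots per loop: ~347 Hz, F4
```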

Turing himself seems not to have been particularly interested in programming the machine to play conventional pieces of music. Instead he conceived of the different musical notes as aural indicators of the computer’s internal states—one note might play for “job finished,” another for “error when transferring data from the magnetic drum” or “digits overflowing in memory,” and so on. Running one of Turing’s programs must have been a noisy business, with different musical notes and rhythms of clicks enabling the user to “listen in” (as Turing put it) to what the program was doing.

It is not known precisely when the Manchester computer played its first programmed note. Geoff Tootill was one of the electrical engineers charged with building the hardware, and his laboratory notebook is one of the few surviving documents relating to the transition from Baby to the Mark I. In the notebook, the 5-digit code 11110 (which would later become the Mark I’s instruction code for “hoot”) is listed in an October 1948 entry but is not yet matched up with any instruction (even today, computer processor designers will set aside codes that don’t actually do anything, so as to easily add new instructions later). But at the end of November an entry in the notebook showed that by then, 11110 had been matched up with an instruction.

The notebook labeled this new instruction simply as “stop,” but the computer already had an instruction for stopping (00010), so it seems likely the new instruction was the special kind of stop that was later called “hoot-stop.” The programmer would use this for debugging purposes: He or she would insert a hoot-stop instruction into a program to cause the computer to pause at that point in the execution. The hoot—a steady, continuous note of middle C sharp—alerted the Mark I’s operators that the computer had paused as instructed. Of course, we can’t now conclude for certain that the hoot-stop idea was actually tried out on the computer in November 1948, but if it wasn’t put into practice then, it doubtless wasn’t long before Turing and Tootill had the computer piping out its first note.

It took several more years before someone strung together these groundbreaking notes to create a complete piece of digital music on the Mark II.

Christopher Strachey realized the potential of the Manchester computer to play melodies, not just individual tones for software diagnostics. Photo: Barbara Halpern

That someone was Christopher Strachey, who turned up at the Computing Machine Laboratory one summer’s day in 1951. Strachey soon emerged as one of Britain’s most talented programmers, and he would eventually direct Oxford University’s Programming Research Group. But when he first walked into the Manchester lab, he was a mathematics and physics teacher, albeit at Harrow School, one of Britain’s foremost bastions of upper-class education.

Strachey felt drawn to digital computers as soon as he heard about them, in January 1951, or thereabouts. Before the war, he had known Turing at King’s College, Cambridge, and in April 1951 he wrote to Turing about the Manchester computer. Turing sent him a copy of his Handbook, and Strachey studied it assiduously. The Handbook was “famed in those days for its incomprehensibility,” Strachey said.

This incomprehensibility was in no small part due to the way Turing had incorporated the conventions of the tape reader into the system software. Turing used a variant of the international teleprinter code to abbreviate the computer’s machine code instructions.

Teleprinter code, in use for decades by that time, associates letters, numbers, and other characters with strings of five bits; for example, A is 11000 and B is 10011 (it’s the ancestor of the ASCII and UTF-8 codes, which are used today to store text digitally). Teleprinter code was well known to engineers in that era; Turing was very familiar with it from his wartime work at Bletchley Park breaking the “Tunny” teleprinter cipher used by Hitler and his generals.

At the most basic level, computer instructions are, of course, simply strings of bits. The Mark II machine code instruction that commanded the hooter to click was 0000001111. Strings of bits are hard to remember and easy to mistype, so Turing chopped up his machine code instructions into blocks of five digits, and he wrote out his programs using the corresponding teleprinter characters as abbreviations for the machine code instructions. In teleprinter code, 00000 is “/” and 01111 is “V”; thus, “/V” is the teleprinter code abbreviation of the Mark II’s 10-digit hoot instruction 0000001111. (The teleprinter abbreviation of the earlier Mark I’s 5-digit hoot instruction, 11110, was simply K.)

In this scheme, unfortunately, the abbreviation gave no hint as to the nature of the instruction it represented. Even worse, numbers—such as memory addresses—had to be encoded the same way. For example, “B@” would correspond to 1001101000 in binary. (The use of “@” to abbreviate 01000 is in fact a Turing idiosyncrasy: His Handbook noted that in “true” teleprinter code, 01000 corresponds to a line-feed character.) Thus, where today a programmer might write “JMP 0616” as a way to tell a computer to “jump” to memory address 616, a Mark II programmer would write “X /P,” where “/P” is the Mark II’s version of a jump instruction and “X” is a memory address expressed in teleprinter code. The cheerful embrace of such nightmarish notation seems to have been a Turing hallmark—his pioneering 1936 paper establishing the foundations of computing relied heavily on a hard-to-read German gothic typeface.
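
As an illustration, here is a toy Python rendering of that abbreviation scheme. It uses only the five-bit pairings quoted above; the Mark II’s full character set is not reproduced here.

```python
# Toy illustration of Turing's abbreviation scheme: split a machine-code word
# into 5-bit groups and name each group with its teleprinter character.
TELEPRINTER = {
    "00000": "/",
    "01000": "@",   # Turing's idiosyncratic choice; "true" teleprinter line feed
    "01111": "V",
    "10011": "B",
    "11000": "A",
    "11110": "K",
}

def abbreviate(bits: str) -> str:
    """Abbreviate a machine-code word whose length is a multiple of five."""
    assert len(bits) % 5 == 0
    return "".join(TELEPRINTER[bits[i:i + 5]] for i in range(0, len(bits), 5))

print(abbreviate("0000001111"))  # the Mark II hoot instruction -> "/V"
print(abbreviate("1001101000"))  # the number example from the text -> "B@"
```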

Strachey was motivated to persist, however. He had made his first visit to the Manchester lab in July 1951, during which Turing suggested to him that he write a program to make the computer check itself. That Turing viewed this suggestion as the computer programming equivalent of a hazing can be seen by his remark to his friend Robin Gandy after Strachey had left: “That will keep him busy!”

As Strachey dived into learning what he needed for his task, his background as a pianist primed him to pick up on the page in the manual devoted to the hooter and how it could be made to produce different notes. When Strachey returned to Manchester a few months later, he brought with him the main program he’d been working on—called “Checksheet”—and a little something extra. Checksheet ran to 20 pages or so of code, which made it the longest program yet attempted. “Turing came in and gave me a typical high-speed, high-pitched description of how to use the machine,” recalled Strachey, and then Turing left him to his own devices at the console for the night. It was an intimidating moment: “I sat in front of this enormous machine,” said Strachey, “with four or five rows of 20 switches and things, in a room that felt like the control room of a battleship.”

Soon, Strachey had Checksheet up and running to show Turing—as well as his little something extra: The computer hooted out a performance of the British national anthem, “God Save the King.”

A budding programmer could hardly have thought of a better way to get attention. A few weeks later, Max Newman, founder of the Computing Machine Laboratory and professor of mathematics at Manchester—and in effect Turing’s boss—heard the computer grinding through the anthem. Newman quickly wrote to Strachey suggesting he might like a programming job in the lab. But Strachey had other irons in the fire and took up a position with a government department.

A patriotic musical computer did not go unnoticed by the press; headlines such as “Electronic Brain Can Sing Now” started appearing. The BBC got in on the action early by sending a recording team from the popular radio program “Children’s Hour.” The Mark II’s repertoire had expanded by this point, and in addition to the anthem it was recorded performing “Baa Baa Black Sheep” and Glenn Miller’s “In the Mood,” although it crashed halfway through the latter. “The machine’s obviously not in the mood,” exclaimed the BBC’s presenter. It seems that the BBC may have made a return visit to the Manchester computer lab later that year, to record some Christmas music. Ferranti’s marketing supremo Vivian Bowden reported that a BBC broadcast in December 1951 included the computer’s renditions of “Jingle Bells” and “Good King Wenceslas.”

The Manchester Mark II was not the only zeroth-generation electronic stored-program computer to play music. Trevor Pearcey’s Sydney-built CSIRAC (pronounced “sigh-rack”) first ran a test program in about November 1949. The computer seems to have been partially operational from late 1950 onward (a few months after the note-playing Manchester Mark I was switched off for the last time) and was in regular operation starting in about mid-1951. CSIRAC belted out tunes at the first Australian Conference on Automatic Computing Machines, held at the University of Sydney in August 1951.

The Australian CSIRAC (Council for Scientific and Industrial Research Automatic Computer) was also programmed to play music. Photo: William West/AFP/Getty Images

Exactly when CSIRAC first played musical notes is unrecorded; presumably it was in late 1950 or in 1951. A 2008 BBC News article, based on Australian sources, stated that CSIRAC was the first computer to play music, saying CSIRAC’s performance at the Sydney conference preceded the date of the BBC recording of the Manchester computer. Unfortunately, The Oxford Handbook of Computer Music also states that CSIRAC was “the first computer to play music.” However, this most certainly was not the case.

Before CSIRAC ran so much as a test program, the American BINAC, built by the Eckert-Mauchly Computer Corporation, was making music. BINAC was completed in August 1949. The team celebrated with a party that included a musical contribution from BINAC itself. One of the Eckert-Mauchly engineers present at the celebration, Herman Lukoff, explained: “Someone had discovered that, by programming the right number of cycles, a predictable tone could be produced. So BINAC was outfitted with a loudspeaker … and tunes were played for the first time by program control.” Although Lukoff does not mention the name of the person who created this pioneering music-playing program—the first in the world, so far as we know—she was in fact the veteran ENIAC programmer Betty Snyder, later Betty Holberton, and in 1997 the winner of the IEEE Computer Pioneer Award. Like Strachey, Holberton wrote her music-playing program during the night hours. Reminiscing about her time programming BINAC, she said, “I was on the machine 16 hours [with] 8 hours off and I slept in the ladies room.” Holberton recollected that, at the celebration, her program played the congratulatory “For He’s a Jolly Good Fellow.”

So the timeline looks like this: At Manchester, an experimental computer-generated musical note (middle C sharp) was produced in the Computing Machine Laboratory, possibly before the end of 1948, and at Eckert-Mauchly in Philadelphia, during the summer of 1949, BINAC played different notes strung together into actual melodies. (For more information about the early ENIAC programmers, including Betty Snyder, the ENIAC Programmers Project website is an excellent resource.)

The American BINAC, programmed by Betty Snyder, jumped ahead of the Manchester computer to play the first computer-generated music in 1949. Photo: PhotoQuest/Getty Images

Listening to the surviving unedited BBC recording of the Mark II playing music in 1951, one has a sense of people interacting with something entirely new. “The machine resented that,” Muriel Levy, the presenter, observed at one point. The idea of a thinking machine, an electronic brain, was in the air at Manchester. Turing merrily fanned the flames. He provocatively told a reporter from the British Times newspaper that he saw no reason why the computer should not “enter any one of the fields normally covered by the human intellect, and eventually compete on equal terms.” Computers taking on the artistic field of musical performance powerfully illustrated that these machines were more than just number crunchers. (Turing poked fun at fear of out-of-control AI, though he expected machines to outsmart us in the end.)


Both the promise—and the limitations—of the new technology were captured by Max Newman in a 1952 lecture. Speaking to the annual conference of the Incorporated Society of Musicians, he discussed the advent of computer music. He detailed how the Manchester computer reproduced melodies stored in its memory, and then went on to explain that they had figured out how to make the computer compose new tunes. Just how this was done isn’t recorded, but it’s likely the compositions relied on a random-number generator of Turing’s creation that was built into the Mark II.

As to their musical quality, Newman noted they were “very bad tunes.”

Some portions of this article are based on material previously published in Turing, Pioneer of the Information Age (Jack Copeland, Oxford University Press, 2012).

About the Authors

Jack Copeland is Distinguished Professor of Philosophy at the University of Canterbury in Christchurch, New Zealand. He is also the director of the online Turing Archive for the History of Computing. Jason Long is a composer and sound artist currently pursuing a doctorate at the Victoria University of Wellington, also in New Zealand.
