# The Future of Cybersecurity Is the Quantum Random Number Generator

## Truly random numbers will provide an unbreakable tool set for cryptography

Illustration: Greg Mably

In 1882, a banker in Sacramento, Calif., named Frank Miller developed an absolutely unbreakable encryption method. Nearly 140 years later, cryptographers have yet to come up with something better.

Miller had learned about cryptography while serving as a military investigator during the U.S. Civil War. Sometime later, he grew interested in telegraphy and especially the challenge of preventing fraud by wire—a problem that was frustrating many bankers at the time. As a contemporary, Robert Slater, the secretary of the French Atlantic Telegraph Co., wrote in his 1870 book Telegraphic Code, to Ensure Secresy [sic] in the Transmission of Telegrams, “Nothing then is easier for a dishonest cable operator than the commission of a fraud of gigantic extent.”

In his own book on telegraphic code, published in 1882, Miller proposed encrypting messages by shifting each letter in the message by a random number of places, resulting in a string of gibberish. For example, to encode the word HELP, you might shift the H by 5 so that it became an M, the E by 3 so that it became an H, the L by 2 so that it became an N, and the P by 4 so that it became a T. Even a meddlesome cable operator wouldn’t know what to make of MHNT unless he also had the list of random numbers, 5-3-2-4. For truly unbreakable encryption, each string of random numbers would encode only one message before being discarded.
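Miller’s scheme amounts to modular addition over the alphabet. The sketch below (a modern illustration, not anything from Miller’s book) reproduces the HELP → MHNT example, wrapping around the alphabet whenever a shift runs past Z:

```python
# Minimal sketch of Miller's letter-shifting one-time pad.
# Shifting HELP by the key 5-3-2-4 yields MHNT; subtracting the
# same shifts recovers the original message.

def shift(message, key, direction=1):
    """Shift each letter by its paired key value, modulo 26."""
    return "".join(
        chr((ord(c) - ord("A") + direction * k) % 26 + ord("A"))
        for c, k in zip(message, key)
    )

key = [5, 3, 2, 4]
ciphertext = shift("HELP", key)         # -> "MHNT"
plaintext = shift(ciphertext, key, -1)  # -> "HELP"
```

As long as the key is truly random, used once, and kept secret, the ciphertext reveals nothing: every four-letter plaintext is equally consistent with MHNT.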

About 35 years after Miller’s book, Bell Labs engineer Gilbert S. Vernam and U.S. Army Capt. Joseph Mauborgne came out with essentially the same idea, which they called the one-time pad. And ever since, cryptographers have tried to devise a way to generate and distribute the unique and truly random numbers that the technique requires. That, it turns out, is incredibly hard to do.

So instead, we’ve relied on less secure encryption methods, with the consequence that attackers who are sufficiently patient and knowledgeable can now crack into any encrypted data they want. And compared with Miller’s day, today we have more ways of connecting than the telegraph—through Internet of Things devices, wearable tech, and blockchain-dependent services, to name just a few—and they all need strong encryption. According to the 2017 “Cyber Incident & Breach Trends Report” [PDF] by the Online Trust Alliance, more than 150,000 businesses and government institutions were the victims of cybercrime last year. In just one of those attacks, on the consumer credit reporting company Equifax, hackers culled the personal information of nearly 148 million customers. “Surprising no one, 2017 marked another ‘worst year ever’ in personal data breaches and cyber incidents around the world,” the report concluded.

Fortunately, researchers have made good progress in recent years in developing technologies that can generate and distribute truly random numbers. By measuring the unpredictable attributes of subatomic particles, these devices can use the rules of quantum mechanics to encrypt messages. And that means we’re finally getting close to solving one of cryptography’s biggest puzzles and realizing the unbreakable encryption envisioned by Miller so many years ago.

You can’t beat one-time pads for security, if you use truly random numbers to shift the letters. Unfortunately, most one-time pads today use algorithms to generate pseudorandom numbers, like this example, which used numbers generated by Google. Illustration: Erik Vrielink

As any cryptographer knows, you need three ingredients to make a hackproof encryption method. First, you need an algorithm that converts your message into a string of meaningless characters. Second, you need a way to produce random numbers. And finally, you need the means to deliver the first two ingredients to the intended recipient without anyone else gaining access.

You cannot protect a message with the first ingredient alone, no matter how good the algorithm is. An encrypted message will be completely exposed to anyone who knows the algorithm used to secure it. That’s why we combine the algorithm with random numbers. Despite its relatively simple algorithm, the one-time pad becomes unbreakable with the addition of random numbers. To recover the original message, you need to know the specific sequence of random numbers the algorithm used to encrypt the message. Those random numbers are a cryptographic key, which unlocks the content of the encrypted message, but it’s useless for deciphering other messages, just as your house key opens your front door but not your neighbor’s. Your encryption system is thus only as strong as your cryptographic key is unpredictable.

Unfortunately, most sources of random numbers aren’t truly random. Instead, they are pseudorandom-number generators, which use algorithms to produce sequences of numbers that merely look random. If you know the underlying algorithm and its starting state, the entire output becomes completely predictable.
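To see why, consider a linear congruential generator, one of the simplest pseudorandom algorithms (the constants below are a common textbook choice, used here purely for illustration):

```python
# A linear congruential generator: each output is computed
# deterministically from the previous one.

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen = lcg(seed=42)
outputs = [next(gen) for _ in range(5)]

# An attacker who learns the algorithm and any single output can
# regenerate the entire future sequence by seeding with that output:
attacker = lcg(seed=outputs[0])
predicted = [next(attacker) for _ in range(4)]
assert predicted == outputs[1:]
```

Cryptographic generators are far harder to reverse than this toy, but the underlying weakness is the same: an algorithm plus a known state yields a fully determined sequence.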

We can also generate random numbers by measuring physical processes, like flipping a coin or the interference of radio communications on an electric current. One problem with this approach is that if the process is bound by the laws of classical physics, the measurements can be predicted. To be sure, it may take some doing to reverse engineer what’s being measured, but a cryptographer has to assume that somebody will eventually find a way to do so.

Many physical random number sources are also slow. One common method is to record the coordinates of mouse clicks or movements on a computer screen. KeePass, an open-source password manager, collects mouse movements as a source of randomness when generating keys. Think how much random clicking or jiggling it would entail just to encrypt every email you wanted to send.

What’s needed, then, is a source of true randomness that is fast enough and that any device can use. That’s where quantum mechanics comes in.

By their nature, subatomic particles like electrons and photons behave in ways that can’t be predicted. If you take two photons emitted by the same atom at different times but under the same conditions, they may exhibit different behaviors, and there’s no way to predict those behaviors ahead of time. That’s not to say any behavior is possible, but of the outcomes that are possible, we can’t predict which one we’ll get. That unpredictability is crucial for developing a random number generator.

### One-Way Functions

The most common example of a one-way function is the multiplication of two large prime numbers (typically hundreds of digits long). Any computer can multiply two large primes in the blink of an eye, but even for the fastest, it’s very slow going to reverse the process, taking the answer and checking all the possible options until it finds the two initial numbers. Illustration: Erik Vrielink

In the 1990s, a team at the U.K. Ministry of Defence became the first to propose a way to use quantum mechanics for random number generation. Today, you can buy commercial quantum random number generators from companies like QuintessenceLabs and ID Quantique. QuintessenceLabs’ generators are based on quantum tunneling, which occurs when subatomic particles spontaneously pass through a barrier that according to classical physics they shouldn’t be able to cross. The ID Quantique generator tracks the distribution of individual photons as they hit a detector.

All of the available commercial generators are limited to specialized applications, such as encrypting classified military data or financial transactions. They’re much too large, or too slow, or too expensive for mass market use. Imagine instead having a tiny quantum random number generator installed in your phone, your laptop, or anything else that needs to communicate securely. Creating such cheap, compact, and quick quantum systems has been the focus of our group’s research at the Institute of Photonic Sciences, or ICFO, in Barcelona, for the past eight years.

One of the most promising approaches is based on a type of semiconductor laser called a distributed feedback laser diode. We start by oscillating the laser diode above and below its threshold level—that is, the current at which the stimulated emission of photons starts. For our laser diodes, the threshold is about 10 milliamperes. Each time the laser crosses back above its threshold, it emits light with a random phase, meaning the emitted light wave starts at an unpredictable point in its cycle. Those random phases become the basis for the random numbers we use to generate a cryptographic key.

We’ve already built several devices that have helped confirm the “spooky action at a distance” principle in quantum mechanics—the idea that entangled particles can interact with one another instantaneously regardless of distance. Specifically, our devices provided an observer-independent way of verifying that the spooky action occurs. We built those devices using fiber optic cable, and each was about the size of a shoebox. Now, using standard chip-fabrication techniques, we’ve integrated the components for our quantum random number source onto an indium phosphide chip measuring less than 2 by 5 millimeters. This chip can be installed directly into a phone or an IoT sensor.


Quside Technologies, a company spun off from our institute last year, is commercializing components using our technology. (One of us, Abellán, is now Quside’s CEO.) Quside’s latest generation of quantum sources can produce several gigabits of random numbers per second, which means one source should be enough for any current or emerging encryption need. And because they can be made using standard chip-fabrication techniques, it should be easy to manufacture them in large volumes.

What’s more, our chips are immune to nearby electronic interference. Generally speaking, any electronic device may be susceptible to thermal or electronic interference. White noise, for example, can interfere with the reception of radio signals. Quantum sources, being so tiny, are especially susceptible, so in most cases, their designers need to pay close attention to eliminate any effects that might corrupt the pure, inherent randomness from the quantum process. Our solution neatly avoids this problem simply because a photon’s phase is largely unaffected by electrical currents in the vicinity.

Another good quantum source for random numbers is the light-emitting diode. In 2015, researchers at the Vienna University of Technology demonstrated the first such compact random number generator. It consists of a silicon-based LED that emits photons in the near infrared and a single-photon detector. Random numbers were generated from the arrival times of the photons at the detector. The lab prototype produced random numbers at a rate of a few megabits per second.

Illustration: Greg Mably

A year later, our group in Barcelona demonstrated the chip-based quantum source mentioned earlier, which can produce gigabits of random numbers per second using distributed feedback lasers. As a bonus, our sources are built from off-the-shelf components and rely on standard optical communication and manufacturing techniques.

Meanwhile, researchers at SK Telecom [PDF], one of the largest telecom providers in South Korea, have demonstrated a random number generator chip that uses a smartphone camera to detect the fluctuations in an LED’s light intensity. The design was based on a patent from ID Quantique. The prototype, unveiled in 2016, measured 5 by 5 mm; since then SK Telecom has announced plans for a commercial version that’s about the same size—that is, small enough to fit inside your smartphone.

Other researchers are investigating quantum random number generators based on single-photon detection arrays. The arrays can detect the small variations as a light source fluctuates and should provide even better detection of quantum fluctuations than a traditional camera can.

Having an encryption algorithm paired with truly random numbers isn’t enough. You still need a secure way to send your message along with the cryptographic key to the recipient.

For encrypting and decrypting keys, the standard protocol for many years has been the RSA algorithm. Developed in 1977 by cryptographers Ron Rivest and Adi Shamir and computer scientist Leonard Adleman, it hinges on a mathematical trick known as a one-way function—that’s any calculation that is very easy to solve in one direction but extremely hard to solve in reverse. A classic example—and the one that Rivest, Shamir, and Adleman used—is to multiply two large prime numbers, typically 1,024 or even 2,048 bits in length. It’s of course very easy to multiply the numbers together, but it’s very hard to factor the result back to the original prime numbers.
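The asymmetry is easy to demonstrate, even at toy scale (the primes below are tiny by cryptographic standards; real RSA primes are hundreds of digits long, where this brute-force search is utterly hopeless):

```python
# Multiplying two primes is a single instant operation; recovering
# them by trial division takes time that grows steeply with the size
# of the factors.

def factor(n):
    """Recover the prime factors of n by brute-force trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

p, q = 104723, 104729        # two (small) primes
n = p * q                    # easy direction: one multiplication
assert factor(n) == (p, q)   # hard direction: ~100,000 trial divisions
```

Even here the gap is visible: one multiplication versus roughly a hundred thousand divisions. Doubling the number of digits in the primes roughly squares the work for the naive search.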

RSA and similar algorithms give every network user two keys: a public key (known to everyone) and a private key (known only to the user). To send information, you encrypt it using the recipient’s public key. The recipient then decrypts the information using her private key. The algorithms have worked remarkably well for more than four decades because it’s extremely hard to crack the private key, even knowing the public key.
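The whole scheme can be sketched with the classic textbook primes (far too small for real security; production systems use 2,048-bit moduli, random padding, and vetted libraries, never hand-rolled code):

```python
# Toy RSA key pair built from the standard textbook primes p=61, q=53.

p, q = 61, 53
n = p * q                    # modulus, part of both keys
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi)
                             # (3-argument pow inverse needs Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message
```

Cracking the private key amounts to factoring n back into p and q, which is exactly the hard direction of the one-way function described above.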

Flipping a Coin: Quside’s random number generator is fully integrated into a chip that’s a fraction of a coin’s size. It’s faster than flipping a coin, too, and it can generate gigabits of random numbers every second. Photo: Optica

The algorithms aren’t perfect, however. One of the main problems is that they take a long time to encrypt and decrypt a relatively small amount of data. For that reason, we use these algorithms to encrypt keys but not messages. The other big problem is that the algorithms are crackable, at least in theory. Barring a mathematical breakthrough that makes RSA and similar algorithms easy to solve, all known methods of cracking the code simply take too long—not even today’s supercomputers are up to the task of a practical attack.

A quantum computer, however, could easily calculate prime number factors using Shor’s algorithm, a clever method devised more than 20 years ago that exploits the quantum property of superposition to drastically decrease the computation time needed to find the correct factors. Today’s quantum computers aren’t powerful enough to handle an RSA-level hack. But it’s only a matter of time, and when that day comes, our current cybersecurity infrastructure will become obsolete.

Ideally, we should be able to exchange cryptographic keys that cannot be cracked before quantum computers or mathematical breakthroughs catch us by surprise. One possibility is to use a technology called quantum key distribution. Much like generating truly random numbers, quantum key distribution relies on the unpredictable nature of quantum mechanics, in this case to distribute unique keys between two users without any third party being able to listen in. One of the most common methods is to encode the cryptographic key into the polarization of a photon and send that photon to the other person. To achieve full security, we need to combine quantum key distribution with one-time pads to encrypt our messages, which will still require extremely fast random number generators.
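The key-sifting step of this kind of protocol (the best-known variant is BB84) can be simulated classically to show how two people end up with a shared secret key. This is a sketch of the bookkeeping only, not of the quantum hardware:

```python
# Classical simulation of the sifting step in BB84-style quantum key
# distribution. Alice encodes random bits in randomly chosen
# polarization bases; Bob measures in his own random bases; only
# positions where the bases happen to match contribute to the key.
import secrets

n = 32
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]  # 0: +, 1: x
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

# When bases match, Bob reads Alice's bit exactly; when they differ,
# quantum mechanics gives him a coin flip, which both sides discard.
bob_bits = [
    bit if ab == bb else secrets.randbelow(2)
    for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
]

# Sift: publicly compare bases (never bits), keep matching positions.
alice_key = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
bob_key   = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]
assert alice_key == bob_key  # the sifted keys agree
```

In the real protocol, an eavesdropper who measures the photons in transit unavoidably disturbs them, which shows up as errors when Alice and Bob compare a sample of their sifted bits.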

We believe these quantum random number generators will be able to provide all the random numbers we’ll ever need. We’ll also have to continually check that our quantum sources are free from defects and interference and are producing numbers that are truly random. At our lab, we’ve developed a method for determining how confident we can be in a source’s true randomness. Our “randomness metrology” begins by establishing both the physical process that the source uses and the precision of the source’s measurements. We can use that information to set a bound on how much of the randomness arises purely from the quantum process.

Now that we’ve taken the first steps in developing quantum random number generators that are small enough, cheap enough, and fast enough for widespread, everyday use, the next step will be to install and test them in computers, smartphones, and IoT devices. With true random number generators, we can produce unpredictable cryptographic keys, and if we combine those keys with a secure method to distribute them, no longer will we have to worry about the computational or mathematical skills of an enemy—even the most capable attacker is powerless against true unpredictability. Nearly a century and a half after Frank Miller proposed his one-time pad, unbreakable security could finally be within our grasp.

This article appears in the July 2018 print issue as “The Future of Cybersecurity Is Quantum.”

Carlos Abellán is CEO of the quantum cryptography startup Quside, in Barcelona. Valerio Pruneri is a cofounder of Quside and the Corning Inc. chair and leader of the optoelectronics group at the Institute of Photonic Sciences, also in Barcelona.


## Video Friday: Robot Training

Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

##### CoRL 2022: 14 December–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today's videos!

Finally, after the first Rocky movie in 1976, the Robotic Systems Lab presents a continuation of the iconic series. Our transformer robot visited Philly in 2022 as part of the International Conference on Robotics and Automation.

Human cells grown in the lab could one day be used for a variety of tissue grafts, but these cells need the right kind of environment and stimulation. New research suggests that robot bodies could provide tendon cells with the same kind of stretching and twisting as they would experience in a real human body. It remains to be seen whether using robots to exercise human cells results in a better tissue for transplantation into patients.

[ Nature ]

Researchers from Carnegie Mellon University took an all-terrain vehicle on wild rides through tall grass, loose gravel and mud to gather data about how the ATV interacted with a challenging, off-road environment.

The resulting dataset, called TartanDrive, includes about 200,000 of these real-world interactions. The researchers believe the data is the largest real-world, multimodal, off-road driving dataset, both in terms of the number of interactions and types of sensors. The five hours of data could be useful for training a self-driving vehicle to navigate off road.

[ CMU ]

Chengxu Zhou from the University of Leeds writes, "we have recently done a demo with one operator teleoperating two legged manipulator for a bottle opening task."

Thanks, Chengxu!

We recently hosted a Youth Fly Day, bringing together 75 freshman students from ICA Cristo Rey All Girls Academy of San Francisco for a day of hands-on exposure to and education about drones. It was an exciting opportunity for the Skydio team to help inspire the next generation of women pilots and engineers.

[ Skydio ]

Legged robotic systems leverage ground contact and the reaction forces they provide to achieve agile locomotion. However, uncertainty coupled with the discontinuous nature of contact can lead to failure in real-world environments with unexpected height variations, such as rocky hills or curbs. To enable dynamic traversal of extreme terrain, this work introduces the utilization of proprioception to estimate and react to unknown hybrid events and elevation changes and a two-degree-of-freedom tail to improve control independent of contact.

If you like this and are in the market for a new open source quadruped controller, CMU's got that going on, too.

A bolt-on 360 camera kit for your drone that costs \$430.

[ Insta360 ]

I think I may be too old to have any idea what's going on here.

[ Neato ]

I'm not the biggest fan of the way the Stop Killer Robots folks go about trying to make their point, but they have a new documentary out, so here you go.

This symposium hosted by the U.S. Department of Commerce and National Institute of Standards and Technology, Stanford Institute for Human-Centered Artificial Intelligence (HAI), and the FinRegLab, brought together leaders from government, industry, civil society, and academia to explore potential opportunities and challenges posed by artificial intelligence and machine learning deployment across different economic sectors, with a particular focus on financial services and healthcare.

## Charles Babbage’s Difference Engine Turns 200

### Error-riddled astronomical tables inspired the first computer—and the first vaporware

During Charles Babbage’s lifetime, this 2,000-part clockwork was as near to completion as his Difference Engine ever got.

Science Museum Group

It was an idea born of frustration, or at least that’s how Charles Babbage would later recall the events of the summer of 1821. That fateful summer, Babbage and his friend and fellow mathematician John Herschel—both founding members of the Royal Astronomical Society—were in England editing astronomical tables, a tedious task, and they were frustrated by all of the errors they found. Exasperated, Babbage exclaimed, “I wish to God these calculations had been executed by steam.” To which Herschel replied, “It is quite possible.”

Babbage and Herschel were living in the midst of what we now call the Industrial Revolution, and steam-powered machinery was already upending all types of business. Why not astronomy too?

Babbage set to work on the concept for a Difference Engine, a machine that would use a clockwork mechanism to solve polynomial equations. He soon had a small working model (now known as Difference Engine 0), and on 14 June 1822, he presented a one-page “Note respecting the Application of Machinery to the Calculation of Astronomical Tables” to the Royal Astronomical Society. His note doesn’t go into much detail—it’s only one page, after all—but Babbage claimed to have “repeatedly constructed tables of squares and triangles of numbers” as well as of the very specific formula x² + x + 41. He ends his note with much optimism: “From the experiments I have already made, I feel great confidence in the complete success of the plans I have proposed.” That is, he wanted to build a full-scale Difference Engine.

Perhaps Babbage should have tempered his enthusiasm. His magnificent Difference Engine proved far more difficult to build than his note suggested.

It wasn’t for lack of trying, or lack of funds. For Babbage managed to do something else that was almost as unimaginable: He convinced the British government to fund his plan. The government saw the value in a machine that could calculate the many numerical tables used for navigation, construction, finance, and engineering, thereby reducing human labor (and error). With an initial investment of £1,700 in 1823 (about US \$230,000 today), Babbage got to work.

## The Difference Engine was a calculator with 25,000 parts

The 19th-century mathematician Charles Babbage’s visionary contributions to computing were rediscovered in the 20th century. The Picture Art Collection/Alamy

Babbage based his machine on the mathematical method of finite differences, which allows you to solve polynomial equations in a series of iterative steps that compare the differences in the resulting values. This method had the advantage of requiring simple addition only, which was easier to implement using gear wheels than one based on multiplication and division would have been. (The Computer History Museum has an excellent description of how the Difference Engine works.) Although Babbage had once dreamed of a machine powered by steam, his actual design called for a human to turn a crank to advance each iteration of calculations.
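The iteration is easy to sketch in code (a modern illustration of the arithmetic, not of the engine’s mechanism), using Babbage’s own test polynomial x² + x + 41. After priming the table with the first value and the first difference, each new entry needs only two additions, which is exactly what the engine’s gear wheels performed:

```python
# Tabulating f(x) = x^2 + x + 41 by the method of finite differences:
# for a quadratic, the second difference is a constant, so every new
# table entry requires only addition, never multiplication.

def f(x):
    return x * x + x + 41

value = f(0)           # 41, the first table entry
diff1 = f(1) - f(0)    # first difference: 2
diff2 = 2              # constant second difference of any quadratic

table = [value]
for _ in range(9):
    value += diff1     # one addition per entry, like the engine's wheels
    diff1 += diff2     # and one addition to update the difference
    table.append(value)

assert table == [f(x) for x in range(10)]
```

Higher-degree polynomials just need more difference columns (a cubic needs three, and so on), which is why the engine's capacity was described in "orders of difference."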

Difference Engine No. 1 was divided into two main parts: the calculator and the printing mechanism. Although Babbage considered using different numbering systems (binary, hexadecimal, and so on), he decided to stick with the familiarity of the base-10 decimal system. His design in 1830 had a capacity of 16 digits and six orders of difference. Each number value was represented by its own wheel/cam combination. The wheels represented only whole numbers; the machine was designed to jam if a result came out between whole numbers.

As the calculator cranked out the results, the printing mechanism did two things: It printed a table while simultaneously making a stereotype mold (imprinting the results in a soft material such as wax or plaster of paris). The mold could be used to make printing plates, and because it was made at the same time as the calculations, there would be no errors introduced by humans copying the results.

Difference Engine No. 1 contained more than 25,000 distinct parts, split roughly equally between the calculator and the printer. The concepts of interchangeable parts and standardization were still in their infancy. Babbage thus needed a skilled craftsman to manufacture the many pieces. Marc Isambard Brunel, part of the father-and-son team of engineers who had constructed the first tunnel under the Thames, recommended Joseph Clement. Clement was an award-winning machinist and draftsman whose work was valued for its precision.

Babbage and Clement were both brilliant at their respective professions, but they often locked horns. Clement knew his worth and demanded to be paid accordingly. Babbage grew concerned about costs and started checking on Clement’s work, which eroded trust. The two did produce a portion of the machine [shown at top] that was approximately one-seventh of the complete engine and featured about 2,000 moving parts. Babbage demonstrated the working model in the weekly soirees he held at his home in London.

The machine impressed many of the intellectual society set, including a teenage Ada Byron, who understood the mathematical implications of the machine. Byron was not allowed to attend university due to her sex, but her mother supported her academic interests. Babbage suggested several tutors in mathematics, and the two remained correspondents over their lifetimes. In 1835, Ada married William King. Three years later, when he became the first Earl of Lovelace, Ada became Countess of Lovelace. (More about Ada Lovelace shortly.)

Despite the successful chatter in society circles about Babbage’s Difference Engine, trouble was brewing—cost overruns, political opposition to the project, and Babbage and Clement’s personality differences, which were causing extreme delays. Eventually, the relationship between Babbage and Clement reached a breaking point. After yet another fight over finances, Clement abruptly quit in 1832.

## The Analytical Engine was a general-purpose computer

Ada Lovelace championed Charles Babbage’s work by, among other things, writing the first computer algorithm for his unbuilt Analytical Engine. Interim Archives/Getty Images

Despite these setbacks, Babbage had already started developing a more ambitious machine: the Analytical Engine. Whereas the Difference Engine was designed to solve polynomials, this new machine was intended to be a general-purpose computer. It was composed of several smaller devices: one to list the instruction set (on punch cards popularized by the Jacquard loom); one (called the mill) to process the instructions; one (which Babbage called the store but we would consider the memory) to store the intermediary results; and one to print out the results.

In 1840 Babbage gave a series of lectures in Turin on his Analytical Engine, to much acclaim. Italian mathematician Luigi Federico Menabrea published a description of the engine in French in 1842, “Notions sur la machine analytique.” This is where Lady Lovelace returns to the story.

Lovelace translated Menabrea’s description into English, discreetly making a few corrections. The English scientist Charles Wheatstone, a friend of both Lovelace and Babbage, suggested that Lovelace augment the translation with explanations of the Analytical Engine to help advance Babbage’s cause. The resulting “Notes,” published in 1843 in Richard Taylor’s Scientific Memoirs, was three times the length of Menabrea’s original essay and contained what many historians consider the first algorithm or computer program. It is quite an accomplishment to write a program for an unbuilt computer whose design was still in flux. Filmmakers John Fuegi and Jo Francis captured Ada Lovelace’s contributions to computing in their 2003 documentary Ada Byron Lovelace: To Dream Tomorrow. They also wrote a companion article published in the IEEE Annals of the History of Computing, entitled “Lovelace & Babbage and the Creation of the 1843 ‘Notes’.”

Although Lovelace’s translation and “Notes” were hailed by leading scientists of the day, they did not win Babbage any additional funding. Prime Minister Robert Peel had never been a fan of Babbage’s; as a member of Parliament back in 1823, he had been a skeptic of Babbage’s early design. Now that Peel was in a position of power, he secretly solicited condemnations of the Difference Engine. In a stormy meeting on 11 November 1842, the two men argued past each other. In January 1843, Babbage was informed that Parliament was sending the finished portion of Difference Engine No. 1 to the King’s College Museum. Two months later, Parliament voted to withdraw support for the project. By then, the government had spent £17,500 (about US \$3 million today) over the course of 20 years and still didn’t have a working machine. One can see why Peel thought it was a waste.

But Babbage, perhaps reinvigorated by his work on the Analytical Engine, decided to return to the Difference Engine in 1846. Difference Engine No. 2 required only 8,000 parts and had a much more elegant and efficient design. He estimated it would weigh 5 tons and measure 11 feet long and 7 feet high. He worked for another two years on the machine and left 20 detailed drawings, which were donated to the Science Museum after he died in 1871.

## A modern team finally builds Babbage’s Difference Engine

In 1985, a team at the Science Museum in London set out to build the streamlined Difference Engine No. 2 based on Babbage’s drawings. The 8,000-part machine was finally completed in 2002. Science Museum Group

Although Difference Engine No. 2, like all the other engines, was never completed during Babbage’s lifetime, a team at the Science Museum in London set out to build one. Beginning in 1985, under the leadership of Curator of Computing Doron Swade, the team created new drawings adapted to modern manufacturing techniques. In the process, they sought to answer a lingering question: Was 19th-century precision a limiting factor in Babbage’s design? The answer is no. The team concluded that if Babbage had been able to secure enough funding and if he had had a better relationship with his machinist, the Difference Engine would have been a success.

That said, some of the same headaches that plagued Babbage also affected the modern team. Despite leaving behind fairly detailed designs, Babbage left no introductory notes or explanations of how the pieces worked together. Much of the groundbreaking work interpreting the designs was done by Australian computer scientist and historian Allan G. Bromley, beginning in 1979. Even so, the plans had dimension inconsistencies, errors, and entire parts omitted (such as the driving mechanism for the inking), as described by Swade in a 2005 article for the IEEE Annals of the History of Computing.

The team had wanted to complete the Difference Engine by 1991, in time for the bicentenary of Babbage’s birth. They did finish the calculating section by then. But the printing and stereotyping section—the part that would have alleviated all of Babbage’s frustrations in editing those astronomical tables—took another nine years. The finished product is on display at the Science Museum.

A duplicate engine was built with funding from former Microsoft chief technology officer Nathan Myhrvold. The Computer History Museum displayed that machine from 2008 to 2016, and it now resides in the lobby of Myhrvold’s Intellectual Ventures in Bellevue, Wash.

The title of the textbook for the very first computer science class I ever took was The Analytical Engine. It opened with a historical introduction about Babbage, his machines, and his legacy. Babbage never saw his machines built, and after his death, the ideas passed into obscurity for a time. Over the course of the 20th century, though, his genius became more clear. His work foreshadowed many features of modern computing, including programming, iteration, looping, and conditional branching. These days, the Analytical Engine is often considered an invention 100 years ahead of its time. It would be anachronistic and ahistorical to apply today’s computer terminology to Babbage’s machines, but he was clearly one of the founding visionaries of modern computing.

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the June 2022 print issue as “The Clockwork Computer.”
