Mum No More: 3D-Printed Vocal Tract Lets Mummy Speak

Researchers reproduce the sound of a 3,000-year-old mummified vocal tract using CT scanning, 3D printing and an electronic larynx

The mummified body of Nesyamun about to be CT scanned at Leeds General Infirmary
Photo: Leeds Teaching Hospitals/Leeds Museums and Galleries

The coffin that holds the mummified body of the ancient Egyptian Nesyamun, who lived around 1100 B.C., expresses the man’s desire for his voice to live on. Now, 3,000 years after his death, that wish has come true.

Using a 3D-printed replica of Nesyamun’s vocal tract and an electronic larynx, researchers in the UK have synthesized the dead man’s voice. The researchers described the feat today in the journal Scientific Reports.

“We’ve created sound for Nesyamun’s vocal tract exactly as it is positioned in his coffin,” says David Howard, head of the department of electrical engineering at Royal Holloway University of London, who coauthored the report. “We didn’t choose the sound. It is the sound the tract makes.”

So what does a 3,000-year-old dead guy sound like? In his mummified position, Nesyamun’s vocal tract makes a sustained vowel sound somewhere between the vowels in ‘bed’ and ‘bad.’ Yes, it’s weird. Take a listen:

The Synthesized Sound of an Egyptian Mummy's Voice

Researchers reconstructed the vocal tract of a mummy using CT scans and 3D printing

The sound is not likely one Nesyamun (pronounced “NEZ-ee-uh-moon”) would have made while living. “It is speech-like, but of course he’s not in the middle of an utterance” in his coffin, says Howard. “His neck is tilted backward, his tongue is on top of his lower teeth and is not as bulky as it would be, and no air is coming out. So this is not a speech-sound position.”

In replicating the mummy’s vocal tract, the researchers first took state-of-the-art CT scans of Nesyamun’s body. From the scans, they created a digital model of the vocal tract using medical imaging software, and then produced a physical model of it by 3D printing. To hear a sound from the 3D-printed tract, an input signal similar to that produced by a human larynx is needed; this was synthesized on a computer, based on techniques used in modern speech synthesis.
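The paper itself does not include code, but the approach described, a larynx-like source signal driven through a filter shaped by the vocal tract, follows the classic source-filter model of speech. Below is a minimal sketch of that idea in Python; the fundamental frequency, formant frequencies and bandwidths are purely illustrative placeholders, not measurements from Nesyamun’s scan.

```python
# A minimal source-filter sketch (not the authors' code): a synthetic glottal
# pulse train stands in for the electronic larynx, and a cascade of resonant
# "formant" filters stands in for the 3D-printed tract. All numeric values
# here are illustrative assumptions, not data from Nesyamun's vocal tract.
import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

fs = 16000        # sample rate, Hz
f0 = 110          # fundamental frequency of the larynx-like source, Hz
dur = 1.0         # length of the sustained vowel, seconds

# Larynx-like source: an impulse train at f0 with a crude spectral tilt
n = int(fs * dur)
source = np.zeros(n)
source[::int(fs / f0)] = 1.0
source = lfilter([1.0], [1.0, -0.95], source)       # one-pole low-pass shaping

# Vocal-tract filter: one two-pole resonator per formant (placeholder values)
formants = [(600, 80), (1700, 100), (2500, 120)]    # (center Hz, bandwidth Hz)
signal = source
for fc, bw in formants:
    r = np.exp(-np.pi * bw / fs)
    a = [1.0, -2.0 * r * np.cos(2.0 * np.pi * fc / fs), r * r]
    signal = lfilter([1.0 - r], a, signal)

signal /= np.max(np.abs(signal))                    # normalize to +/- 1
wavfile.write("sustained_vowel_sketch.wav", fs, (signal * 32767).astype(np.int16))
```

In the study itself, the “filter” is not a digital resonator but the printed plastic tract, excited by the electronic larynx; the sketch only illustrates why a larynx-like source signal is needed at all.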

3D-printed vocal tract of the mummified body of Nesyamun.
Photos: David Howard

The use of plastic for the 3D-printed vocal tract, along with the electronic larynx, gives a buzzy quality to Nesyamun’s voice from the grave. But Howard says he is confident that the sound is true to the mummy.

As proof, Howard notes that he and his colleagues have made 3D-printed replicas of living men’s vocal tracts, and compared the sound to the men’s real voices. “We’ve done extensive work on 3D vocal tracts,” Howard says. “I can recreate my vocal tract and then you can hear it next to me and tell me if it’s similar or not, and the answer is: It is. We are using that fact to transpose this back 3,000 years and say we have something like Nesyamun would have sounded.”

Key to their success was the fact that Nesyamun’s soft tissue, aside from his tongue, is so well preserved by the mummification process. The vocal tract is dried up, of course, but that didn’t make much of a difference for the purposes of this project, Howard says. 

Historians believe Nesyamun was an Egyptian priest who worked at the temple of Karnak in the ancient city of Thebes (modern-day Luxor). Nesyamun’s service dates to the reign of the pharaoh Ramses XI, around 1100 B.C.

Inscriptions on Nesyamun’s coffin describe him as ‘maat kheru’ or ‘true of voice’ and ask that his soul be able to speak to his gods in his afterlife. “He had a desire that his voice would be everlasting,” says Howard. “In a sense you could argue we’ve heeded that call, which is a slightly strange thing, but there we are.” 

Nesyamun’s daily duties as a priest likely involved chanting or singing. Phonetic transcriptions for exactly how this chanting would have sounded exist, Howard says, and as a next step, he would like to try to replicate the chanting by computationally manipulating the shape of Nesyamun’s vocal tract to make the different sounds. “My hope is to produce a few syllables and then build that up to a short phrase,” he says. 
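That step is still hypothetical, but the underlying idea, that different speech sounds correspond to different tract shapes and therefore different resonances, can be sketched by letting a filter’s formant frequencies glide over time. The /a/ and /i/ targets in the sketch below are generic textbook-style values, not reconstructions of Nesyamun’s speech.

```python
# Hypothetical sketch of gliding one vowel into another by interpolating
# formant targets frame by frame; all values are generic placeholders.
import numpy as np
from scipy.signal import lfilter

fs, f0, dur = 16000, 110, 0.8
frame = int(0.01 * fs)                    # 10 ms frames
n_frames = int(dur * fs) // frame

start, end = (700, 1200), (300, 2300)     # (F1, F2) targets: /a/ -> /i/, Hz

# Continuous impulse-train source for the whole utterance
src = np.zeros(n_frames * frame)
src[::int(fs / f0)] = 1.0

out = []
for i in range(n_frames):
    t = i / (n_frames - 1)
    seg = src[i * frame:(i + 1) * frame]
    for fc0, fc1 in zip(start, end):
        fc = (1.0 - t) * fc0 + t * fc1    # linearly interpolated formant
        r = np.exp(-np.pi * 90.0 / fs)    # ~90 Hz bandwidth resonator
        a = [1.0, -2.0 * r * np.cos(2.0 * np.pi * fc / fs), r * r]
        seg = lfilter([1.0 - r], a, seg)
    out.append(seg)

# Note: filter state is not carried across frames; a real implementation
# would pass lfilter's zi/zf state to avoid clicks at frame boundaries.
glide = np.concatenate(out)
glide /= np.max(np.abs(glide))
```

Turning such glides into the chanted syllables Howard describes would additionally require the surviving phonetic transcriptions and acoustic data derived from the manipulated tract shapes.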

That would be a great feat, considering that no one alive today has heard human speech from before 1860, when the earliest audio recordings were made.
