16 July 2008— Robot researcher Atsuo Takanishi, a professor of engineering at Waseda University, in Tokyo, is driven by a vision that would probably appall many musicians. Takanishi wants to create a humanoid robot orchestra. So far he and his group of researchers have developed a pretty good flute-playing robot and have begun work on a saxophone player. Though he has spent many years perfecting the flutist, Takanishi expects things will move much faster now, because he tackled one of the hardest instruments to play first.
“Anyone can play some notes the first time he tries a reed instrument like the saxophone. But getting even a sound out of the flute is very difficult,” says Takanishi.
The seated robot is essentially made up of two acrylic cylinders and bellows for the lungs, a vibrato mechanism to imitate human vocal cords, an artificial tongue and lips made of a thermoplastic rubber called Septon, two CCD cameras for the eyes, and flexible arms and fingers that can open and close. Together, these “organs” have 41 degrees of freedom and are driven by complex mechanical systems of motorized levers and pulleys under the control of actuators and a computer.
Getting the robot to produce a melody turned out to be a monumental task. First, the researchers worked with professional players to create a performance index of what constitutes the best flute sounds. They translated these sounds into mathematical formulations, to which the robot refers. The researchers then programmed the robot’s organs to create a sound. Once the robot produced a sound, they treated the parameter settings that produced it as a baseline and adjusted them repeatedly until the output improved and eventually approximated a target sound in the performance index.
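In outline, the tuning procedure described above is an accept-if-better search: start from settings that produce any sound, then keep only the adjustments that bring the output closer to a target from the performance index. The sketch below is purely illustrative; the parameter names, the stand-in `play` function, and the scoring rule are invented for this example and are not the Waseda group's software.

```python
import random

# Target features taken from a (hypothetical) performance-index entry.
TARGET = {"pitch": 440.0, "loudness": 0.8}

def play(params):
    """Stand-in for the robot producing a sound; returns measured features."""
    return {"pitch": 400.0 + 80.0 * params["lip_opening"],
            "loudness": params["air_pressure"]}

def error(features):
    # Distance between the produced sound and the target sound.
    return (abs(features["pitch"] - TARGET["pitch"]) / TARGET["pitch"]
            + abs(features["loudness"] - TARGET["loudness"]))

params = {"lip_opening": 0.3, "air_pressure": 0.5}  # baseline settings
best = error(play(params))
rng = random.Random(0)
for _ in range(2000):
    # Nudge every parameter slightly, clamped to its physical range.
    trial = {k: min(1.0, max(0.0, v + rng.uniform(-0.05, 0.05)))
             for k, v in params.items()}
    e = error(play(trial))
    if e < best:  # keep only adjustments that improve the sound
        params, best = trial, e
```

Each accepted trial becomes the new baseline, which mirrors the "adjust repeatedly until the sound improves" loop the researchers describe, only with a machine doing the listening.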
“We had to teach it everything,” says Takanishi. “The different positions of the lips and fingers, the strength of the air pressure, everything. There are any number of parameters [making it] almost impossible to engineer…. It was a very slow process.”
To make the procedure less laborious and more autonomous, audio feedback control has been added to help the robot make its own adjustments. More computer intelligence has also been incorporated, so the robot can now “read” Musical Instrument Digital Interface (MIDI) data and translate it into the parameter controls needed to play the piece on the flute. “We can now download virtually any MIDI file into the robot’s computer and it can reproduce the music unaided,” says Takanishi. “It may not play perfectly yet, but it plays well.”
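Translating MIDI into playing amounts to mapping each note event onto a fingering plus breath settings. The toy sketch below shows the idea; the fingering table, the pressure rule of thumb, and all names are assumptions made for illustration, not details of the robot's actual control software.

```python
# Hypothetical fingering table: MIDI note number -> flute keys pressed
# ("1" = key closed, "0" = key open); values here are made up.
FINGERINGS = {72: "110110", 74: "110100", 76: "110000", 77: "100000"}

def midi_to_controls(events):
    """Turn (midi_note, duration_seconds) events into a parameter schedule."""
    schedule = []
    for note, dur in events:
        schedule.append({
            "fingering": FINGERINGS[note],
            # Toy rule of thumb: higher notes get a little more air.
            "air_pressure": 0.4 + 0.01 * (note - 60),
            "duration": dur,
        })
    return schedule

# Three notes of an imaginary melody: C5, E5, F5.
controls = midi_to_controls([(72, 0.5), (76, 0.5), (77, 1.0)])
```

A real system would also have to schedule transitions between fingerings and shape the breath within each note, which is where the audio feedback loop mentioned above earns its keep.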
Other robot researchers have been chasing similar orchestral dreams. A Honda robot conducted the Detroit Symphony Orchestra in May, and a Toyota humanoid played the trumpet for crowds at the SAE World Congress in April.
Takanishi is motivated by more than having robots produce sweet sounds. He lists three main goals for his research: to get a better understanding of human motor control, to develop robots that can mimic and respond to human emotions in order to improve human-machine interaction, and to produce humanoid robots that can perform new tasks such as caring for the elderly and the infirm.
In expanding on the first goal, Takanishi explains that having the robot mimic the actions of a human flutist will help researchers produce a mathematical model of the human oral structure. Paralleling this work, he is also developing a bipedal robot that mimics how humans walk, with the idea that this will lead to the production of a mathematical model of walking.
“So maybe 50 years from now, future engineers will be able to integrate these different models—the hands, legs, throat, mouth, etc.—and produce a really good mathematical model of the human being,” says Takanishi. “Then we can use it for producing optimum designs of everything we interact with.” Today, by contrast, design engineers have to start with assumptions about how humans move when they go about creating new designs. “So when Toyota engineers design a new car’s interior, they do it by trial and error because there is no mathematical model of humans available,” says Takanishi.
But Takanishi is not content with mathematical models of the body. He also wants to produce a mathematical model of human psychology and has begun collaborating with a psychologist at Waseda University. The aim is to move beyond passive MIDI playback and produce a more autonomous robot that can interact with the human members of a jazz band.
The first step along this path has been to incorporate a vision-processing algorithm into the sax-playing robot’s system that lets it track the hand of a partner musician and respond to certain hand gestures by changing the parameters controlling its own lips, fingers, lungs, and other organs. The next step for the researchers is to develop an acoustic system that allows the robot to respond to sound cues as well.
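Conceptually, the gesture response is a small mapping from recognized hand motions to parameter changes. The sketch below assumes a classifier that reduces tracked hand positions to named gestures; the gesture names, thresholds, and parameter deltas are all invented for illustration and do not come from the Waseda system.

```python
# Hypothetical gesture -> parameter-change table.
GESTURE_ACTIONS = {
    "raise": {"air_pressure": +0.1},  # play louder
    "lower": {"air_pressure": -0.1},  # play softer
}

def classify(hand_y, prev_y):
    """Crude gesture classifier from two tracked hand heights (0 = top of frame)."""
    if hand_y < prev_y - 0.2:
        return "raise"  # hand moved up in the frame
    if hand_y > prev_y + 0.2:
        return "lower"
    return None

def respond(params, hand_y, prev_y):
    """Apply the parameter change for whatever gesture was seen, clamped to [0, 1]."""
    gesture = classify(hand_y, prev_y)
    if gesture:
        for key, delta in GESTURE_ACTIONS[gesture].items():
            params[key] = max(0.0, min(1.0, params[key] + delta))
    return params

# Partner's hand rises from y=0.6 to y=0.2, so the robot plays louder.
state = respond({"air_pressure": 0.5}, hand_y=0.2, prev_y=0.6)
```

The planned acoustic system would slot in the same way: another classifier feeding the same parameter-update stage, with sound cues instead of hand positions.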
Clearly there is a long way to go before anything like an orchestra of robots will be able to perform in the pit. But, Takanishi says, “I’m 52 now and I must retire at 70. I hope we can accomplish my dream by then.”