Researchers are developing a jazz-playing robot that listens to a human jazz musician and responds in real time with improvisation of its own.
“Humans tend to think that art is something that only humans can make,” says Benjamin Grosser, a professor of new media in the School of Art and Design at the University of Illinois at Urbana-Champaign. “I’m interested in what the boundaries of that are.”
In other words, is creativity only achievable by humans? Or, can computers also be artists?
“Computers don’t have a lifetime of cultural learning like we do,” he says. “So, they have a different vantage point of the world than we do.” Offering a different vantage point, in essence, is what an artist is trying to accomplish.
Because jazz improvisation is “cognitive but not linguistic,” says Grosser, the research could yield insight into human-computer interaction. The concept might seem far-fetched, but robots have already spray-painted walls and even drawn on beaches (one of Grosser’s previous projects was a robotic painting machine).
But if you’re looking for a new jazz partner, you’ll have to wait a bit. Researchers have just begun the five-year project, which is part of the new Communicating with Computers program sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA).
According to Grosser, the front end of the computer system listens to the human player and analyzes the elements of what’s being played, including pitch, beat, and rhythm. This gets fed to the back end, an artificial intelligence system that’s tapped into a “knowledge base” of pre-analyzed jazz solos.
The system “analyzes incoming music the same way that it analyzes the jazz solos, using image schema as the mechanism,” he says. The real-time performance system then plays back jazz music that it “thinks” is within the boundary.
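Grosser’s description suggests a listen-analyze-respond loop: extract features from the incoming phrase, match them against pre-analyzed solos, and play back something within the same boundary. The sketch below is a toy illustration of that idea only, not the project’s actual code; the `Phrase` structure, the feature set, and the nearest-match selection are all assumptions made for the example.

```python
# Toy sketch of a listen-analyze-respond loop, as described in the article.
# All names and features here are illustrative assumptions, not the
# researchers' actual system, which uses image schemas and an AI back end.

from dataclasses import dataclass

@dataclass
class Phrase:
    pitches: list    # MIDI note numbers
    durations: list  # length of each note, in beats

def analyze(phrase):
    """Front end: reduce a phrase to simple features (pitch span, note density)."""
    pitch_span = max(phrase.pitches) - min(phrase.pitches)
    note_density = len(phrase.pitches) / sum(phrase.durations)
    return (pitch_span, note_density)

def respond(heard, knowledge_base):
    """Back end: return the stored solo whose features best match the input."""
    heard_feat = analyze(heard)
    def distance(solo):
        feat = analyze(solo)
        return abs(feat[0] - heard_feat[0]) + abs(feat[1] - heard_feat[1])
    return min(knowledge_base, key=distance)

# A tiny "knowledge base" of two pre-analyzed solos.
kb = [
    Phrase([60, 62, 64, 65], [1, 1, 1, 1]),          # stepwise, relaxed
    Phrase([60, 67, 72, 79], [0.5, 0.5, 0.5, 0.5]),  # wide leaps, busy
]

# A phrase "heard" from the human player: stepwise and relaxed,
# so the system replies with the stored solo in the same style.
human = Phrase([61, 63, 64, 66], [1, 1, 1, 1])
reply = respond(human, kb)
```

A real system would analyze audio continuously and generate new material rather than replay a stored solo, but the shape of the loop (analyze, compare against a knowledge base, respond in kind) is the same.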
“By creating computational systems that act as artists, we not only start to reconsider what art is, who makes it, and what its boundaries are, but we also get a better look at how we make art and what we think as artists,” Grosser says.
Theresa Chong is a video host and multimedia technology journalist based in Palo Alto, Calif. As on-camera talent, she has performed science experiments for “Discovery News,” explained how virtual reality works for USA Today, and interviewed Adam Savage for IEEE Spectrum. She has written about wearables for Scientific American and travel tech for Architectural Digest. With a DSLR, GoPro, and green screen by her side, she has produced digital videos of robots, driverless cars, and 3D printing. She earned a master’s degree from Northwestern University’s Medill School of Journalism, and in a prior life she worked as a civil engineer.