Building Human-Robot Relationships Through Music and Dance

A performance called FOREST is exploring trust through creative collaboration

Ten colorfully lit robot arms on metal pedestals spread out across a dance stage.

There’s no reliably good way of getting a human to trust a robot. Part of the problem is that robots generally just do whatever they’ve been programmed to do, and for a human, there’s typically no feeling that the robot is in the slightest bit interested in making any sort of non-functional connection. From a robot’s perspective, humans are fragile ambulatory meatsacks that are not supposed to be touched and that help with tasks when necessary, nothing more.

Humans come to trust other humans by forming an emotional connection with them, something that robots are notoriously bad at. An emotional connection obviously doesn’t have to mean love, or even like, but it does mean some level of mutual understanding, communication, and predictability, a sense that the other doesn’t just see you as an object (and vice versa). For robots, which are objects, this is a real challenge. With funding from the National Science Foundation, roboticists from the Georgia Tech Center for Music Technology have partnered with the Kennesaw State University dance department on FOREST, a “forest” of improvising robot musicians and dancers that interact with humans to explore creative collaboration and the establishment of human-robot trust.


According to the researchers, the FOREST robots and their accompanying robot musicians are not rigid mimics of human melody and movement. Rather, they exhibit a remarkable degree of emotional expression and humanlike gestural fluency, using what the researchers call “emotional prosody and gesture” to project emotions and build trust.

Looking up what “prosody” means will absolutely take you down a Wikipedia black hole, but the term broadly refers to parts of speech that aren’t defined by the actual words being spoken. For example, you could say “robots are smart” and impart a variety of meanings to it depending on whether you say it ironically or sarcastically or questioningly or while sobbing, as I often do. That’s prosody. You can imagine how this concept can extend to movements and gestures as well, and effective robot-to-human interaction will need to account for this.
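To make the idea concrete, here’s a minimal sketch of how a single gesture could be given different prosodic “deliveries.” The emotion labels, parameters, and numbers are invented for illustration and are not the FOREST team’s actual models; the point is only that the same content, a fixed path of joint angles, can be rendered calm, joyful, or anxious by varying tempo, range of motion, and tremor:

```python
import numpy as np

# Hypothetical prosodic parameters: the same gesture "content" (a waypoint
# path for a robot-arm joint) delivered with different emotional prosody.
# Emotions and values are illustrative, not from the FOREST project.
PROSODY = {
    "calm":    {"speed": 0.5, "amplitude": 0.6, "jitter": 0.00},
    "joyful":  {"speed": 1.5, "amplitude": 1.2, "jitter": 0.02},
    "anxious": {"speed": 1.2, "amplitude": 0.8, "jitter": 0.10},
}

def render_gesture(waypoints, emotion, n_samples=200):
    """Resample a joint-angle path, scaling tempo and range of motion
    and adding tremor according to the chosen emotional prosody."""
    p = PROSODY[emotion]
    # Faster delivery traverses the same path in fewer samples.
    n = max(2, int(n_samples / p["speed"]))
    t = np.linspace(0, 1, n)
    base = np.interp(t, np.linspace(0, 1, len(waypoints)), waypoints)
    # Amplitude scales the excursion around the mean; jitter adds tremor.
    mean = base.mean()
    trajectory = mean + p["amplitude"] * (base - mean)
    return trajectory + np.random.normal(0, p["jitter"], size=n)

path = [0.0, 0.8, -0.4, 0.6, 0.0]  # joint angles in radians: the "words"
for emotion in PROSODY:            # three deliveries of the same "sentence"
    traj = render_gesture(path, emotion)
    print(f"{emotion}: {len(traj)} samples, range {float(np.ptp(traj)):.2f} rad")
```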

Many of the robots in this performance are already well known, including Shimon, one of Gil Weinberg’s most creative performers.

What I find personally a little strange about all this is the idea of trust, because in some ways it seems as though robots should be totally trustworthy: they can (in an ideal world) be totally predictable, right? Like, if a robot is programmed to do things X, Y, and Z in that sequence, you don’t have to trust that it will do Y after X the way you’d have to trust a human to do so, because strictly speaking the robot has no choice. As robots get more complicated, though, and there’s more expectation that they’ll interact with humans socially, the gap between what is technically predictable (or maybe predictable after the fact) and what is predictable by the end user can get very, very wide. That’s why a more abstract kind of trust becomes increasingly important. Music and dance may not be the way to build that trust for every robot out there, but they’re certainly a useful place to start.
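A toy way to see that gap: compare a scripted robot with one whose next move comes from a stochastic, learned-looking policy. The action names and probabilities below are made up for illustration; the scripted robot needs no trust at all, while the second is perfectly well defined mathematically yet unpredictable to anyone watching a single run:

```python
import random

# A scripted robot: Y always follows X, no trust required.
def scripted_robot():
    for action in ["X", "Y", "Z"]:
        yield action

# A "social" robot drawing its next action from a stochastic policy
# (probabilities invented for illustration). It is still predictable
# in distribution, but an onlooker can't say what follows X this time.
POLICY = {
    "X": [("Y", 0.6), ("Z", 0.3), ("X", 0.1)],
    "Y": [("Z", 0.7), ("X", 0.3)],
    "Z": [("X", 0.5), ("Y", 0.5)],
}

def social_robot(start="X", steps=5):
    state = start
    for _ in range(steps):
        yield state
        actions, weights = zip(*POLICY[state])
        state = random.choices(actions, weights=weights)[0]

print(list(scripted_robot()))  # always ['X', 'Y', 'Z']
print(list(social_robot()))    # different on nearly every run
```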


Letting Robocars See Around Corners

Using several bands of radar at once can give cars a kind of second sight

Illustration of the modeling of an autonomous vehicle at an urban intersection.

Seeing around the corner is simulated by modeling an autonomous vehicle approaching an urban intersection with four high-rise concrete buildings at its corners. A second vehicle approaches the intersection on a crossing road, out of the AV’s line of sight, but it can be detected nonetheless by processing signals that return either by reflecting along multiple paths or by passing directly through the buildings.

Chris Philpot

An autonomous car needs to do many things to make the grade, but without a doubt, sensing and understanding its environment are the most critical. A self-driving vehicle must track and identify many objects and targets, whether they’re in clear view or hidden, whether the weather is fair or foul.

Today’s radar alone is nowhere near good enough to handle the entire job—cameras and lidars are also needed. But if we could make the most of radar’s particular strengths, we might dispense with at least some of those supplementary sensors.
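One way to get intuition for the around-the-corner detection described in the caption above is the classic single-bounce “mirror image” construction: a signal that reflects off a flat building facade behaves, geometrically, as if it traveled in a straight line to the target’s reflection across that facade. Here’s a minimal sketch with invented geometry; a real system fuses many paths across several radar bands:

```python
import numpy as np

# Single-bounce "mirror image" geometry for seeing around a corner.
# Radar at the origin; the corner building's facade is the line x = 10.
# All positions are invented for illustration.

def mirror_across_vertical_wall(point, wall_x):
    """Reflect a 2D point across the vertical line x = wall_x."""
    x, y = point
    return np.array([2 * wall_x - x, y])

radar = np.array([0.0, 0.0])
wall_x = 10.0
hidden_target = np.array([14.0, 25.0])  # occluded vehicle (ground truth)

# Along the bounced path, the radar effectively measures the range to
# the target's mirror image, not to the target itself.
image = mirror_across_vertical_wall(hidden_target, wall_x)
measured_range = np.linalg.norm(image - radar)

# Knowing where the facade is, the processor un-mirrors the detection
# to recover the true position of the out-of-sight vehicle.
recovered = mirror_across_vertical_wall(image, wall_x)
print(f"range along the bounced path: {measured_range:.1f} m")
print(f"recovered target position: {recovered}")  # matches hidden_target
```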
