My favorite approach to human-robot interaction (HRI) is minimalism. I’ve met a lot of robots, and some of the ones that have most effectively captured my heart are those that express themselves through their fundamental simplicity and purity of purpose. What’s great about simple, purpose-driven robots is that they encourage humans to project needs and wants and personality onto them, letting us do a lot of the HRI heavy lifting.
In terms of simple, purpose-driven robots, you can’t do much better than a robotic trash barrel (or bin or can or what have you). And in a paper presented at HRI 2023 this week, researchers from Cornell explored what happened when random strangers interacted with a pair of autonomous trash barrels in NYC, with intermittently delightful results.
What’s especially cool about this is how much HRI takes place around these robots despite their having essentially no explicit HRI features, since they’re literally just trash barrels on wheels. They don’t even have googly eyes! However, as the video notes, they’re controlled remotely by humans, so a lot of the movement-based expression they demonstrate likely comes from a human source, whether or not that’s intentional. These remote-controlled robots move very differently than an autonomous robot would. Folks who know how autonomous mobile robots work expect such machines to perform slow, deliberate motions along smooth trajectories. But as an earlier paper on trash barrel robots describes, most people expect the opposite:
One peculiarity we discovered is that individuals appear to have a low confidence in autonomy, associating poor navigation and social mistakes with autonomy. In other words, people were more likely to think that the robot was computer controlled if they observed it getting stuck, bumping into obstacles, or ignoring people’s attempts to draw its attention.
We initially stumbled upon this perception when a less experienced robot driver was experimenting with the controls, actively moving the robot in strange patterns. An observer nearby asserted that the robot “has to be autonomous. It’s too erratic to be controlled by a person!”
A lot of inferred personality can come from robots that make mistakes or need help. In many contexts this is a bug, but for simple social robots whose purpose can be easily understood, it can turn into an endearing feature:
Due to the non-uniform pavement surface, the robots occasionally got stuck. People were keen to help the robots when they were in trouble. Some observers would proactively move chairs and obstacles to clear a path for the robots. Furthermore, people interpreted the back-and-forth wobbling motion as if the robots were nodding and agreeing with them, even when such motion was caused merely by uneven surfaces.
Another interesting thing going on here is how people expect that the robots want to be “fed” trash and recycling:
Occasionally, people thought the robots expected trash from them and felt obligated to give the robots something. As the robot passed and stopped by the same person for the second time, she said: “I guess it knows I’ve been sitting here long enough, I should give it something.” Some people would even find an excuse to generate trash to “satisfy” and dismiss the trash barrel by searching through a bag or picking rubbish up off the floor.
The earlier paper goes into a bit more detail on what this leads to:
It appears that people naturally attribute intrinsic motivation (or desire to fulfill some need) to the robot’s behavior and that mental model encourages them to interact with the robot in a social way by “feeding” the robot or expecting a social reciprocation of a thank you. Interestingly, the role cast upon the robot by the bystanders is reminiscent of a beggar where it prompts for collections and is expected to be thankful for donations. This contrasts sharply with human analogs such as waitstaff or cleanup janitors where they offer assistance and the receiving bystander is expected to express gratitude.
I wonder how much of this social interaction depends on the novelty of meeting the trash barrel robots for the first time, and whether (if these robots were to become full-time staff) humans would start treating them more like janitors. I’m also not sure how well these robots would do if they were autonomous. If part of the magic comes from having a human in the loop to manage what seem like (but probably aren’t) relatively simple human-robot interactions, turning that into effective autonomy could be a real challenge.