Georgia Tech Robots Learn Deceptive Behaviors from Squirrels

Squirrels use deception to protect their caches of food, and robots are learning to do the same thing



We know, we know, robots being deceptive sounds like a bad thing. Potentially a very bad thing. But the fact is, deception is everywhere, and for good reason: being deceptive is often the most efficient and effective way to protect yourself from harm. Deception is by no means unique to humans, either: animals are masters at deceiving other animals (and us), and Ron Arkin’s group at Georgia Tech has been teaching robots to learn deception from a pro: the squirrel.

Deception can be a difficult thing for robots, because it’s dependent on understanding how someone else (or something else) is thinking. In other words, if you’re going to fool someone, you have to be able to imagine how they’re going to interpret and react to your actions. Robots, of course, are well known for not having much in the way of imaginations, but the immediate tangible benefits of successful deception mean that robots have evolved the capacity to deceive each other in specific circumstances.

Meanwhile, Ron Arkin has been working at Georgia Tech to teach robots more abstract methods of deception. A few years ago, he co-authored a study in which robots used algorithms to determine whether deception is a good idea in a given situation, and then executed a deceptive act by providing a false communication to another robot:

This is the sort of thing that humans do all the time, at least in the movies. The bad guy is chasing the good guy; the good guy runs down a hall, opens a door, and then doubles back and hides in a closet. The bad guy enters the hall, sees the open door, and does the logical thing: he interprets it as a marker accidentally left by the good guy and follows it, while the good guy slips away. The good guy was essentially running an algorithm that weighed the risk of opening the door (it takes additional time and effort) against the reward of fooling the bad guy, and the robot in the above video did exactly the same thing. Read more here.
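To make that risk-versus-reward calculation concrete, here is a minimal sketch in Python. It's a hypothetical illustration with invented function names and numbers, not the decision algorithm from Arkin's study: the idea is simply that a deceptive act is worth performing when its expected benefit, discounted by the chance that the other party isn't actually fooled, exceeds its cost.

```python
# Hypothetical illustration only -- not the algorithm from the Georgia Tech study.
# A deceptive act is worthwhile when its expected payoff (the value of fooling
# the pursuer, weighted by the chance the pursuer is actually fooled) exceeds
# the cost of performing the act.

def should_deceive(cost_of_act: float,
                   prob_target_fooled: float,
                   value_if_fooled: float) -> bool:
    """Return True if the deceptive act (e.g., opening the decoy door) is worth it."""
    expected_benefit = prob_target_fooled * value_if_fooled
    return expected_benefit > cost_of_act

# Example: opening the decoy door costs ~2 seconds of lead time, but there is an
# estimated 80 percent chance the pursuer follows the false trail, which would
# buy roughly 30 seconds.
print(should_deceive(cost_of_act=2.0, prob_target_fooled=0.8, value_if_fooled=30.0))  # True
```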

Humans aren’t the only ones to benefit from such strategies, of course. Animals are spectacular deceivers, and examples abound of everything from insects to primates demonstrating deception against predators, prey, and members of their own species. To continue their research into robot deception, Arkin and his team were inspired by the food caching behaviors of squirrels. Squirrels, you see, store nuts in super secret caches for later scarfing. This takes a lot of work, so after the caches are full, the squirrels will patrol their locations to protect their nuts from being stolen by pilferers. Here’s the squirrels’ method:

Deceptive behavior in the tree squirrel has been observed with respect to food protection. While patrolling, tree squirrels visit the cache locations and check on their food. However, if potential competitors are present nearby, tree squirrels visit several empty cache locations. This deceptive behavior attempts to confuse competitors about the food’s location, so that they can protect against the loss of their hoarded food. After the potential competitors leave the territory, the tree squirrels move the location of their stored food items, if pilfering has occurred.

Arkin and student Jaeeun Shim were able to model these squirrel-type deceptive cache protection behaviors in a small mobile robot, teaching it to forage for and cache virtual food. When a pilfering robot appears, the squirrel robot starts visiting empty caches instead. Since all the pilferer knows is that squirrel robots like to visit caches, it gets fooled by the deception. Below are some results from the simulations and experiments they performed:

The average time to successful pilferage when the squirrel robot uses deceptive behavior is 10.4 minutes, compared with an average of 7.69 minutes without deception. In other words, the deceptive behavior makes a significant difference: the squirrel robot protects its resources for longer, and performs significantly better, than it does without deception.
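As a rough sketch of what this deceptive patrolling behavior looks like in code, here is a minimal Python version. The class and method names are invented for illustration and this is not Shim and Arkin's actual controller: when a potential pilferer is nearby the robot patrols empty decoy caches, and once the pilferer leaves it relocates any caches that appear to have been disturbed.

```python
# Hypothetical sketch of squirrel-inspired cache protection -- not the authors' code.
import random

class CacheProtector:
    def __init__(self, true_caches, decoy_caches):
        self.true_caches = list(true_caches)    # locations holding (virtual) food
        self.decoy_caches = list(decoy_caches)  # empty locations used for deception

    def next_patrol_target(self, pilferer_nearby: bool):
        """Pick the next cache location to visit on patrol."""
        if pilferer_nearby:
            # Deceptive behavior: visit empty caches, so an observer that only
            # tracks which locations get visited is drawn to the wrong spots.
            return random.choice(self.decoy_caches)
        return random.choice(self.true_caches)

    def after_pilferer_leaves(self, disturbed_caches, free_locations):
        """Relocate any cache that looks pilfered to a fresh, unused location."""
        for cache in disturbed_caches:
            if cache in self.true_caches and free_locations:
                self.true_caches.remove(cache)
                self.true_caches.append(free_locations.pop())
```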

It’s tempting to focus on the military applications of robotic deception, and there certainly are many, especially where resource protection has a direct analogue. As the researchers speculate:

In the military domain, robots might face this situation, where it is important to discourage an adversary from discovering a protected site, so the application of these bio-inspired animal food protection behaviors can be particularly beneficial.

“This application could be used by robots guarding ammunition or supplies on the battlefield,” says Arkin. “If an enemy were present, the robot could change its patrolling strategies to deceive humans or another intelligent machine, buying time until reinforcements are able to arrive. In military operations, a robot that is threatened might feign the ability to combat adversaries without actually being able to effectively protect itself,” Arkin suggests. “Being honest about the robot’s abilities risks capture or destruction. Deception, if used at the right time in the right way, could possibly eliminate or minimize the threat.”

However, we shouldn’t underestimate the importance of deception in our daily lives. People deceive other people all the time, often for beneficial purposes. The researchers suggest:

In other areas, such as search and rescue or health care, deceptive robots might also add value, for example by calming victims or patients when it is required for their own protection. Conceivably, even in the field of educational robots, the deceptive behavior of a robot teacher may play a role in improving human learning efficiency.

In any case, it’s true that teaching robots to deceive anything up to and including humans has significant ethical implications, which is why the researchers are encouraging “considerable discussion in a broader community.” We’ll leave it up to you to speculate on what might happen with deceptive military robots, but we’d also like to point out that Arkin’s group in particular has devoted a lot of thought to this sort of thing, and while they don’t address it in this paper, you can read about it here.

[ Georgia Tech ]

[ Mobile Robot Lab ]
