
Canny Robot Rocks Out to Audio Programming

Being programmable through headphones could keep Canny relevant for decades

Un...canny!
Image: MIT/Adam Kumpf

When we think about programming a robot, we tend to focus on writing the code; we don't pay much attention to how that code gets from our computers to the robot. For that, we rely on things like Wi-Fi or Bluetooth, or maybe USB or Ethernet cables, along with their specific software interfaces. And that's fine, for now, but what about five years from now? Or 10? Or 50? What are the odds that any of the things we use to talk to our robots will still exist? To put it another way: What are the odds of being able to interact with a piece of 50-year-old technology (or even 10-year-old technology) as sophisticated as a robot?

Adam Kumpf, who did robotics at MIT a while ago and now does other cool stuff, is worried about this kind of obsolescence, so he took a stab at solving the problem with Canny. Canny is a very simple proof-of-concept robot that doesn't depend on any communication interface that could become obsolete, because you can transmit instructions to it using nothing more than an audio player and a pair of headphones.

“Canny has red, green, and blue LEDs in each eye that combine to allow a wide range of colors, useful for expressing a mood or indicating status while programming. Above each eye is a servo that can change the angle of the eyebrow to further augment Canny’s expression. A piezo speaker stands in for a mouth, letting the robot play a range of notes. When a user presses Canny’s button nose, the robot performs a combination of color, motion, and sound as specified by the current program.”
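Kumpf's write-up doesn't spell out the program format, but the description above suggests how little data a Canny program actually needs. Here's a purely hypothetical sketch, in Python, of how such a program might be represented; every field name and value range below is an illustration, not part of Kumpf's design:

```python
# Hypothetical representation of a Canny "program": a short sequence of
# steps, each setting eye colors, eyebrow angles, and a note to play.
# None of these field names or ranges come from Kumpf's design.
from dataclasses import dataclass

@dataclass
class Step:
    left_eye: tuple[int, int, int]   # RGB, 0-255 per channel
    right_eye: tuple[int, int, int]
    left_brow_deg: int               # eyebrow servo angle
    right_brow_deg: int
    note_hz: int                     # piezo tone; 0 = silent
    duration_ms: int

# A three-step routine performed when the button nose is pressed.
greeting = [
    Step((0, 255, 0), (0, 255, 0), 30, 30, 523, 200),  # green eyes, C5
    Step((0, 255, 0), (0, 255, 0), 45, 45, 659, 200),  # brows raised, E5
    Step((0, 0, 255), (0, 0, 255), 30, 30, 784, 400),  # blue eyes, G5
]

def serialize(program: list[Step]) -> bytes:
    """Pack each step into 12 bytes for transmission."""
    out = bytearray()
    for s in program:
        out += bytes(s.left_eye) + bytes(s.right_eye)
        out += bytes([s.left_brow_deg, s.right_brow_deg])
        out += s.note_hz.to_bytes(2, "big")
        out += s.duration_ms.to_bytes(2, "big")
    return bytes(out)
```

Packed at 12 bytes per step, this three-step routine comes to 288 bits, roughly one second of audio at the transfer rates described below.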

To program Canny, all you have to do is put headphones on it. A light sensor in Canny's right ear detects when the headphones are in place, and a microphone in its left ear picks up a series of high-frequency tones that shift between 12,345 Hz and 9,876 Hz to encode data. A simple hardware circuit decodes the sound, allowing data to be transmitted at between 300 and 600 bits per second. Basically, it's like a super slow, super old dial-up modem. But the point is that at any time in the future, as long as humans are still using our ears, it'll be possible to send data to Canny.
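Shifting between two tones to signal bits is classic frequency-shift keying (FSK). Below is a minimal sketch of what the encoding side might look like, assuming one tone marks a 1 bit and the other a 0, with no start/stop framing or error correction; Kumpf's actual modulation details aren't documented here, so treat this as an illustration rather than his implementation:

```python
# Minimal FSK encoder sketch: render a byte string (for example, the
# serialized program above) into a mono 16-bit WAV file using the two
# tones cited in the article. Bit order and the 1/0 tone assignment are
# assumptions, not Kumpf's documented protocol.
import math
import struct
import wave

RATE = 44100                 # sample rate; any audio player can handle it
BIT_RATE = 300               # article cites 300 to 600 bps
F_ONE, F_ZERO = 12345, 9876  # the two tones Canny listens for (assumed mapping)

def bits(data: bytes):
    """Yield the bits of each byte, most significant first."""
    for byte in data:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def fsk_samples(data: bytes):
    samples_per_bit = RATE // BIT_RATE
    phase = 0.0
    for bit in bits(data):
        step = 2 * math.pi * (F_ONE if bit else F_ZERO) / RATE
        for _ in range(samples_per_bit):
            yield int(32000 * math.sin(phase))
            phase += step  # continuous phase, so bit boundaries don't click

def write_wav(path: str, data: bytes):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit signed PCM
        w.setframerate(RATE)
        w.writeframes(b"".join(struct.pack("<h", s) for s in fsk_samples(data)))

write_wav("canny_program.wav", bytes([0x01, 0x42, 0x07]))  # toy 3-byte payload
```

Playing the resulting file through headphones resting on Canny's ears is the entire "upload" process, which is exactly why the scheme is so hard to obsolete: any future device that can play audio can produce these tones.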

Canny isn't necessarily the robot that we'll need 50 years from now, but that's fine. It's a proof of concept, designed to encourage us to think about keeping our current technology accessible (and relevant) into the future. We'll all keep building newer robots that take advantage of the latest and greatest interfaces, and nobody's suggesting that we stop. But a few decades from now, when the robot that you're working on today ends up in the Smithsonian (which will have relocated to Mars due to the alien invasion of 2026), wouldn't it be cool if you could still talk to it and get it to do stuff?

Kumpf has made everything about Canny open source, and you can get it all at the link below.

[ Canny the Robot ]
