Imagine being trapped under a collapsed building following an earthquake. Probably one of the very last things you want to happen is for a swarm of cockroaches to find you in your little pocket of air. But in a few years, you might be very happy indeed to see these hardy six-legged explorers scurrying about the rubble.
In the last few years, researchers have started creating “biobots”—tiny cyborgs that consist of an insect, such as a beetle or cockroach, with a circuit board attached to its back. These radio-equipped boards can host simple sensors and even provide limited control over the insect’s motions.
My guest today, Edgar Lobaton, is a professor in the department of electrical and computer engineering at North Carolina State University. He and his colleague Alireza Dirafzoon have been developing software that could use such biobots to map disaster sites in the absence of traditional location technologies, such as inertial navigation or GPS. With maps of the debris field in hand, first responders can better direct their search-and-rescue operations.
Edgar joins us by phone from his office in North Carolina.
Stephen Cass: Edgar, welcome to the podcast.
Edgar Lobaton: Thank you very much.
Stephen Cass: So, as I mentioned in the introduction, people aren’t generally thrilled when a cockroach sidles up to them at the best of times. Why use them as biobots for a mapping mission, such as searching a disaster zone? Why not just build little robots?
Edgar Lobaton: That’s a very good question, actually. And before I answer that, let me just clarify one thing too, that this is also an effort with other faculty here in the department at North Carolina State University, in particular, Alper Bozkurt. He’s been the one building some of the hardware, so he’s the one actually building the cyborgs. And my lab has been the one looking into mainly, How do we program these devices? What kind of behavior do we look to program for each one of these robots, these cyborg insects?
So there’s a couple of reasons why we want to use cockroaches or any kind of biobotic insect instead of robotic systems. One of them is the design. Nature has already done a great job of refining the design of these agents to be very robust and to handle all sorts of extreme natural situations and scenarios. So why not take advantage of that? That’s one of the things we’re trying to do.
So, already built in that framework, there’s an actual hardware platform, and also it comes preprogrammed with some kind of software, some sort of behavior that tells it on its own how to survive and how to explore a particular environment. So those are some of the things that we’re trying to take advantage of in terms of trying to use them as some sort of platform for exploration and mapping of environments. So those are some key things.
Another aspect of it is that they also have their own power supplies, so they’re able to do their own motions, so we don’t have to be controlling them all the time. What that means is that if you have some sort of robotic system, you may be very interested in having very precise kind of control and telling them how to move, where to go from one location to another location. In our research, what we’re trying to do is we’re trying to exploit the fact that they have their own natural behavior, their own random motion, random walks, and then exploit that—tell them to move randomly most of the time, and then once in a while we’ll tell them, okay, now it’s time to switch your behavior so you can do more-efficient exploration. Let’s say, switch from one random walking behavior to another type of behavior that is very common in cockroaches, which is wall following, whenever they’re trying to avoid some sort of predators, for instance. So, switch to that particular behavior, and that way maybe do some more-efficient exploration or mapping of the scenes.
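The mode switching Lobaton describes—natural random walking most of the time, wall following on command—can be sketched in a toy simulation. This is illustrative Python, not the team’s actual control software; the grid arena, the step rules, and the mode names are all assumptions made for the sake of the example:

```python
import random

# Toy sketch of the two behaviors described above, on a W x H grid arena.
# "random" mode: unbiased steps, standing in for the insect's natural
# random walk. "wall_follow" mode: head for a wall, then trace the arena
# perimeter, standing in for the predator-avoidance behavior.
W, H = 10, 10  # hypothetical arena size

def step(pos, mode, rng):
    x, y = pos
    if mode == "random":
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        # clamp so the agent stays inside the arena
        return (min(max(x + dx, 0), W - 1), min(max(y + dy, 0), H - 1))
    # "wall_follow" mode
    if 0 < x < W - 1 and 0 < y < H - 1:
        return (x - 1, y)      # interior: move toward the left wall
    if x == 0 and y > 0:
        return (x, y - 1)      # left edge: follow it downward
    if y == 0 and x < W - 1:
        return (x + 1, y)      # bottom edge: follow it rightward
    if x == W - 1 and y < H - 1:
        return (x, y + 1)      # right edge: follow it upward
    return (x - 1, y)          # top edge: follow it leftward
```

Starting an interior agent in `wall_follow` mode drives it to the nearest boundary and keeps it circling the perimeter; switching the mode argument back to `random` resumes diffusive exploration.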
Stephen Cass: So on the hardware side, how do you switch them from wall-following mode to random-walk mode and back again?
Edgar Lobaton: Yes. To answer that, let me talk a little about the actual hardware interface between these small circuits that are attached to their backs and the insect itself. Things can be attached in different ways, but the approach we’re considering right now uses the antennae. These cockroaches have antennae which they use for a lot of their sensing—things like wind, or contact with other objects or other insects—which lets them know where they are or whether there’s some sort of predator nearby.
So their behavior is fairly responsive to this kind of stimulus coming from the antennae. What my collaborator is doing is clipping the antennae and attaching electrodes directly to them. Based on that, we can bias the kind of sensing that they have and tell them to behave in a different way, right? We can tell them there is nothing in the environment, nothing to worry about, so they can do the random motion that they usually do. Or, by attaching some other electrodes on the abdomen of the insect, we can tell them there is something coming from behind, and at that point it would try to go to some sort of wall to look for shelter.
Stephen Cass: In terms of turning a cockroach into a cyborg—and this may seem an odd question given how hard we try to kill them all the time—but are there any ethical issues involved?
Edgar Lobaton: My collaborator is actually the expert on this subject, but my understanding is that this discussion has come up a couple of times, and the consensus is that, based on the biology of these insects, it is believed that they don’t actually feel pain, so there isn’t much ethical concern in that respect.
Stephen Cass: Moving on to your mapping approach, what happens when you release a swarm of these biobots?
Edgar Lobaton: Sure. The idea is that you would start with all of these agents in a single location, let’s say some sort of container, and then you release them. They are going to do their own kind of random-walk exploration, maybe looking for food or shelter, and they’re going to start exploring all over the area. Now, once in a while, we’re going to capture some of that information. One of the challenges we have, because of the kind of terrain and environments these agents will be dispersed in, is that localization of the agents is going to be very, very challenging. There’s not going to be a GPS signal to tell us where they are.
Also, you’re not going to have very accurate odometry information, because they’re going to be going through very rough terrain. So it’s going to be hard to track exactly where they’re going to be. So because of that, we’re not relying on that information. We can certainly incorporate it, but we’re not relying on that, and we’re using the simplest type of information that you can imagine, which is basically keeping track of these agents meeting each other.
Now, how do we know that they actually meet each other at any given time? Each one of them has a backpack, and each backpack has a short-range radio transmitter. So when they’re within a couple of yards of each other, they can actually say, “I see you. My ID is this.” So they can record encounters between each of the different agents, and that’s the information that we’re using. We take all that information and convert the encounter events into some sort of map of the environment, and that’s where the topological aspect of it comes into play. So once they’re doing this sort of exploration, we can actually switch their behavior to tell them, “Okay, it seems like you have already mapped wherever you are, but we do want you to go on and search for new areas.” And at that point, we either tell them to switch their behaviors, or we try to guide them a little more deterministically, telling them to go toward a particular area.
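The encounter-based mapping Lobaton describes—recording only which agents come within radio range of each other, with no positions at all—can be sketched as follows. This is illustrative Python, not the actual system; the radio range, the agent IDs, and the graph representation are assumptions for the sake of the example:

```python
from collections import defaultdict
from itertools import combinations

RANGE = 2.0  # hypothetical radio range of a backpack, in arena units

def detect_encounters(positions):
    """Given {agent_id: (x, y)} at one time step, return the pairs of
    agents within radio range of each other -- the 'I see you, my ID
    is this' events described above. (Positions appear only in this
    simulation; the real system never observes them.)"""
    pairs = []
    for (a, pa), (b, pb) in combinations(sorted(positions.items()), 2):
        if (pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2 <= RANGE ** 2:
            pairs.append((a, b))
    return pairs

def encounter_graph(encounter_log):
    """Aggregate encounter pairs from many time steps into an adjacency
    map. Agents that met were near each other at some moment, and the
    connectivity of this graph is the raw material from which a rough
    topological map of the environment can be recovered."""
    graph = defaultdict(set)
    for a, b in encounter_log:
        graph[a].add(b)
        graph[b].add(a)
    return graph
```

For example, with agents 1 and 2 one diagonal step apart and agent 3 far away, `detect_encounters({1: (0, 0), 2: (1, 1), 3: (8, 8)})` reports only the `(1, 2)` pair.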
Stephen Cass: You mentioned topological information there, because the maps you get out of these are not quite what people would think of as a sort of a traditional map. Can you explain a little bit about how the software builds the map and how it’s different from the kind of thing that people think of as a map?
Edgar Lobaton: The difference is going to be the following: The kind of maps that we’re used to are very accurate maps. You can think of a Google Maps kind of version, or a very clear layout of the floor plan of a building, for instance. Those are very accurate; they could even be 3-D maps of an environment. In our case, we’re not going to be able to recover that, because there’s a lot of uncertainty, right? There’s a lot of uncertainty due to the fact that these agents have no localization and that we cannot track them using odometry information or inertial measurements.
So in our case, what we’re doing is we’re coming up with a sketch, a rough sketch, of what the environment looks like, but it still gives you some information—useful information to be able to pinpoint where somebody may be trapped, if this is a search-and-rescue operation, and how do we get to that location, right? And a rough idea of how far away it is too.
So this is where the topological map comes into play. Imagine a rough map that could tell you things such as whether this region is right next to some other region, or next to the entrance of some covered or open area that you have already explored, and that could also give you instructions on how to get from location A to location B by following particular landmarks. The landmarks you may be interested in might be geometric landmarks that you discover, such as narrow passages.
Stephen Cass: So have you any plans to go forward with some field testing?
Edgar Lobaton: Yes, yes, definitely. What we have right now is we are testing a lot of these algorithms for exploration or mapping using a simulation environment, so we have a simulation environment based on biological models of their motion. So that’s what we’re doing at this point. But the next phase is going to be to test it on some robotic agents, so this is going to be a little bit more of a controlled experiment, where we’re actually using robots preprogrammed to behave more or less like insects, doing this kind of random-motion and wall-following kind of behavior.
So we’re going to use that as our next step, and we’re actually working on that right now. The phase after that, which is going to be more of an actual field experiment, is going to use some of these actual biobots. For that, we’re not thinking of large scale, like 100 or 1,000 yet; we’re considering something on the order of 20 of them. So hopefully, within a year and a half, we’ll have some results showing how feasible it is to port all these algorithms to real physical platforms.
Stephen Cass: Well, Edgar, thank you very much for talking with us today.
Edgar Lobaton: Thank you for having me.
Stephen Cass: We’ve been speaking today with Edgar Lobaton, professor at North Carolina State University, about using cyborg cockroaches to map disaster sites. For IEEE Spectrum’s “Techwise Conversations,” I’m Stephen Cass.
This interview was recorded Tuesday, 29 October 2013.
Audio engineer: Francesco Ferorelli
Segment producer: Barbara Finkelstein
NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.