When bats leave their caves at night to go eat bugs, they can swarm in the millions while somehow managing not to crash into each other, which is a pretty clever trick. Kenn Sebesta, a researcher at Boston University, is wondering just how exactly they pull this off, and there's nothing better than good old-fashioned experimentin' with robots to see how the bats do what they do.
This is Batcopter 2.0 (aka "Quady"), a home-built quadrotor made from carbon-fiber arrow shafts, twine, glue, zip ties, bamboo, foam, and netting to make sure that any bats not doing their jobs wouldn't get decapitated by a stray prop. A GoPro camera was stuck on the front and the whole thing was piloted from the ground with an array of three high-speed infrared cameras watching the glowing hot robot-on-bat nighttime aerial action:
To control the Batcopter, Sebesta says he and his colleagues used OpenPilot, an open source autopilot platform for small UAVs, which "allowed us to get so far so fast and was the real hero."
The UAV did end up having an unfortunate accident shortly thereafter, but not before collecting terabytes of high-quality video of the bats reacting to the movements of the UAV. The Batcopter team is planning to analyze this footage to see if there are any fundamental laws of flying that the bats follow to keep from colliding with other bats and wayward robots. If there are, it could lead to better autonomous flight controllers for UAVs, as well as ultrasonic squeaks of relief from bats everywhere as scientists find something else to do with their time.
UPDATE: No animals were harmed in the making of this robot! Professor John Baillieul, who directs Boston University's Laboratory for Intelligent Mechatronic Systems, writes us to say the researchers involved in the project, which includes several biologists, are very careful to design and use technology that is animal-friendly and meets all of the acceptable standards of animal care and use in the laboratory and field. "We do hope to use robotic air vehicles to observe bats and other flying animals in ways that have not been done up to now," Baillieul says, "but I can't emphasize too strongly that we have not harmed and are not seeking to harm or harass animals in any way, including making them fearful."
This article is the first of a series that will explore recent advances in surgical and medical robotics and their potential impact on society. More articles, videos, and slideshows will appear throughout the year.
Da Vinci surgical system. Photo: Kelleher Guerin
How can the skill of a surgeon be measured? A patient's body has no buzzer that alerts the surgeon when mistakes occur during an operation. There is no Yelp-like website that ranks a surgeon based on user reviews. It is surprising that people can spend less time selecting a surgeon for an operation than they might selecting a restaurant for dinner or a mechanic to fix their car.
According to a study from the U.S. Agency for Healthcare Research and Quality, surgical complications, including postoperative infections, foreign objects left in wounds, surgical wounds reopening, and postoperative bleeding, resulted in a total of 2.4 million extra days of hospitalization, $9.3 billion in excess charges, and 32,000 mostly surgery-related deaths in the United States in 2000.
To what extent training is responsible for those errors is unknown. Some argue that most surgeons never achieve true expertise. One thing is certain, though: Residents need better, more effective training. It isn’t sufficient to have residents merely go through the motions; they must be able to practice deliberately. The problem is that residents already work inhumanely long hours (recent regulations limit their training to 80-hour work weeks, but they typically work more than that) and they must learn a growing number of surgical techniques and technologies, which means new generations of surgeons have less and less time for hands-on practice.
In the past few years, several research groups, including our team at Johns Hopkins University, have been working to analyze and automate the training process using modern robotic surgical tools. Our goals are to create an objective, standardized method of surgical training as well as to reduce the time and cost of having an experienced surgeon in the training loop.
Surgical skill can be broken down into theoretical skill (consisting of factual and decision-making knowledge) and practical skill (the ability to carry out manual tasks such as dissection and suturing). Theoretical skill is often taught in a classroom and is thought to be accurately tested with written examinations like the Medical College Admission Test (MCAT) and the United States Medical Licensing Examination (USMLE). Practical skill, on the other hand, is much more difficult to judge.
Practical skills, such as driving a car, swinging a golf club, or throwing a football, are most effectively taught "in the field" through demonstration. In 1889, Sir William Halsted at Johns Hopkins University revolutionized surgical training by developing an apprentice-style technique still used in most surgical residency programs today. According to this method, a resident would “see one, do one, teach one”: after minimal exposure and a single completion of a procedure, a resident is assumed to have mastered the skill and to be capable of teaching the next novice. (Residents practice certain procedures more than once, but the principle is still that one time is really all the exposure they'd need before going out in the field and performing on their own.) Although many talented surgeons are trained this way, the method is time consuming, and evaluating a student's performance is a subjective task that varies depending on the student/teacher pair. The method also involves a lot of yelling.
With the advent of technologies such as robotic surgical systems and medical simulators, researchers now have the tools to analyze surgical motion and evaluate surgical skill. Our group is studying human-machine interaction for surgical training and assistance in multiple contexts with increasing levels of complexity. The first level involves a system that understands what the human and environment are doing. The next level of interaction is for machines to provide assistance to a human operator through augmentation. The last level is to have a robot perform a task autonomously. We'll describe the state of research in each of these areas.
Understanding the surgical environment
Language of surgery. Photo: Carol Reiley
There is an active effort to develop new approaches to surgical training and evaluation. Using techniques from speech recognition, our group is developing mathematical models for motion recognition and skill assessment. These models may be the key to standardizing surgical training by decomposing complex surgical tasks like suturing, blunt dissection, and cutting into elementary “chunks” of motion -- and thus decoding the "language of surgery."
These motions can be compared to phonemes, the elementary units of speech. Sequences of subtasks can be constructed like words to form sentences (analogous to various surgical tasks), which can then be used to form paragraphs (analogous to surgical operations). And, just as in speech, a recognition program might call attention to poor "pronunciation" or improper "syntax" in surgical execution, and can try to understand the intent of the surgeon from recorded motion and video data. (This research typically focuses on telepresence surgery as performed using the da Vinci system from Intuitive Surgical.) Using our skill evaluation system, trainees can have their trials evaluated offline or see their trial synchronized with a prerecorded expert trial to shorten the learning curve.
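To make the analogy concrete, here is a minimal sketch of one way such a recognizer can be built, borrowing the isolated-word trick from speech recognition: train one hidden Markov model per gesture on recorded tool-motion data, then label a new segment with whichever model explains it best. The gesture names, feature choices, and the hmmlearn library are illustrative stand-ins, not the exact tools behind our systems.

```python
# Sketch: recognize surgical "gestures" (elementary motion chunks) the
# way isolated words are recognized in speech -- one hidden Markov model
# per gesture, trained on fixed-rate kinematic samples (e.g., tool-tip
# position and velocity) from a telesurgery system.
import numpy as np
from hmmlearn import hmm

GESTURES = ["reach", "grasp", "insert_needle", "pull_suture"]  # illustrative

def train_models(training_data, n_states=5):
    """training_data maps gesture name -> list of (T_i, D) sample arrays."""
    models = {}
    for name in GESTURES:
        seqs = training_data[name]
        X = np.vstack(seqs)               # stack all demonstrations
        lengths = [len(s) for s in seqs]  # so hmmlearn can split them back
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[name] = m
    return models

def classify(models, segment):
    """Label a segment with the gesture whose HMM gives the best score."""
    return max(models, key=lambda name: models[name].score(segment))
```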
Augmenting the surgical environment
Kidney stone image overlay. Photo: Balazs Vagvolgyi
Super-surgeon performance could be achieved by combining human intelligence with robot accuracy and precision. Computer-integrated surgery, using equipment such as a robotic system with a video display, can enhance human senses by providing additional information. For example, the visualization can overlay a reconstructed CT scan of a tumor on the operating site, or the robot can use force feedback to prevent a surgeon’s hand from puncturing a beating heart.
Studies have shown that superimposing graphics, sounds, and forces over the real-world environment in real-time can assist with training.
Robots with intelligent sensors can address humans’ physiological limitations such as poor vision or hand tremor. Even the best surgeons can use intelligent assistance to improve performance. Force sensing “smart” surgical instruments will allow for safer and more effective surgeries. For example, they can be used to measure the local tissue oxygen saturation on the working surfaces of surgical retractors and graspers so that tissue doesn’t become permanently damaged.
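To make the force-feedback idea concrete, here is a minimal sketch of a "forbidden region" virtual fixture: a virtual spring that pushes the tool back whenever it strays inside a protected zone around delicate tissue. The geometry, gain, and function names are illustrative assumptions, not parameters from any deployed system.

```python
# Minimal sketch of a "forbidden region" virtual fixture. If the tool
# tip penetrates a safety sphere around delicate tissue (say, a beating
# heart), the controller renders a spring force that pushes it back out.
# Stiffness and geometry are illustrative, not from a real system.
import numpy as np

K_WALL = 500.0  # virtual spring stiffness, N/m (illustrative)

def fixture_force(tool_pos, center, safe_radius):
    """Force (N) to render at the master handle; zero outside the zone."""
    offset = tool_pos - center
    dist = np.linalg.norm(offset)
    penetration = safe_radius - dist
    if penetration <= 0.0:
        return np.zeros(3)  # tool is outside the forbidden region
    direction = offset / max(dist, 1e-9)  # outward, away from the tissue
    return K_WALL * penetration * direction
```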
JHU Steady-Hand Eye Robot. Photo: Marcin Balicki
The JHU Steady-Hand Eye Robot is designed for retinal microsurgery: the surgeon and the robotic manipulator share control of the instrument, which damps hand tremor and allows for precise, steady motion. Shaky-handed surgeons, there’s hope for you yet!
The robot surgeons of the future
Researchers are now moving towards understanding how humans and machines can work together as a team to collaboratively finish a surgical task. Training models can be used to automate portions of a tedious task or to predict surgeons’ intent to automate an instrument change. Automation might also allow a surgeon to utilize more than two arms of the system at the same time: although the da Vinci surgical system has four arms (three to hold tools and one for the camera), the third arm generally sits idle, since humans can only control two arms at any given moment.
University of Washington Raven. Photo: BioRobotics Lab
The University of Washington’s Raven System is an impressive mobile surgical robot used for telesurgery. In the next few months, seven schools are receiving this system as part of a multi-institutional grant: Johns Hopkins University, UC Santa Cruz, University of Washington, UC Berkeley, Harvard, University of Nebraska, and UCLA. A few orders are already in for the next iteration, from schools in Florida, Toronto, and Minnesota. This standardized research surgical platform should lead to exciting new research in telesurgery and surgical training over the next few years.
Raven is a mobile laparoscopic surgical system. Because Raven is modular, it is more portable than the massive surgical robots used in hospitals and can be disassembled and reassembled by a small team. And while most commercial surgical robots weigh nearly half a ton, Raven weighs only 23 kilograms (about 50 lbs). This makes it ideally suited for hazardous environments.
Telesurgery experiments with the Raven generally involve a surgeon at a safe location operating a robot in the field; for example, underwater in a submarine pod or in the desert under scorching temperatures and gusting winds. Control commands and sensor feedback are transferred over a wireless connection. Research questions include how time delays affect performance, how multiple surgeons can operate robots together to complete a surgery, and how surgeons can train on the platform most effectively.
The surgical environment in the operating room is unlike any other because of the constantly moving objects, because no two procedures are identical, and because of the sterilization/FDA approval issues. The state of surgical robotics is still a long way from one-button autonomous surgery, but the future of surgical training might be undergoing a major “facelift.”
About the authors:
Carol E. Reiley is currently finishing her doctoral research in surgical robotics at Johns Hopkins University and running TinkerBelle Labs, focused on creating low-cost, do-it-yourself projects. Reiley, who was the student chair of the IEEE Robotics and Automation Society for 2008-2010, earned her bachelor's degree in computer engineering at Santa Clara University and her master's in computer science at Johns Hopkins.
Gregory D. Hager, an IEEE Fellow, is a professor in the computer science department at Johns Hopkins University, where his research interests include computer vision, robotics, medical devices, and human-machine systems. He directs the Computational Interaction and Robotics Lab and is the deputy director of the NSF Engineering Research Center for Computer-Integrated Surgical Systems and Technology (CISST).
Once upon a time, a charming American robot called James met a striking German bot by the name of Rosie. They liked each other, so they moved in together. Now they spend their days taking long walks in the lab and doing other things that robots do.
James is a PR2 robot, built by U.S. robotics firm Willow Garage, and it traveled to Germany as part of the PR2 Beta Program, an effort to popularize personal robots. At the Technical University Munich (TUM), James was introduced to Rosie, a dual-arm robot with a curvy figure and four eyes [photo above].
Their courtship was at first a bit mechanical, but they soon found many things in common: Both run ROS (Robot Operating System), use Hokuyo laser scanners and Kinect 3D sensors, and have omnidirectional mobile bases.
On a recent spring morning, James and Rosie were seen together cooking the traditional Weisswurst Frühstück, a Bavarian sausage breakfast.
It was a demonstration prepared by researchers at CoTeSys (Cognition for Technical Systems), a Munich-based high-tech cluster. This is how the researchers summarize the experiment:
TUM-Rosie is collecting the sausages, putting them into the pot with boiling water, waiting for them to be cooked and, finally, finding and getting them out of the pot into the serving bowl. [The PR2 robot] TUM-James is meanwhile slicing the French baguette using a regular electric bread slicer and in the end serving the sausages and the bread to the class of highly regarded roboticists. [...]
TUM-James makes use of recent advances in the field of real-time RGB-D sensing using a Kinect sensor for the detection of the bread slicer and the baguette. In the serving task it uses PR2's haptic capabilities in order to grasp and manipulate the plate.
TUM-Rosie also uses the Kinect and perception algorithms from the COP [cognitive perception] module in order to calibrate the skimmer and use it as a new tool center point of the arm. Furthermore, it learns 3D models of the pot and the bowls in order to be able to localize them at any arbitrary pose on the table. Lastly, it uses the torque sensors to resolve depth-measurement inaccuracies through contact detection with the objects, and blob segmentation in order to localize sausages inside the pot.
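The "blob segmentation" step they mention is a classic perception recipe. As a rough illustration (the color range below is a made-up "sausage pink," and the real CoTeSys pipeline fuses this with depth and contact sensing), it might look something like this in OpenCV:

```python
# Rough sketch of color-blob segmentation for localizing sausages in an
# overhead camera image. The HSV range is invented for illustration.
import cv2
import numpy as np

def find_sausage_blobs(bgr_image, min_area=500):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lo, hi = np.array((0, 40, 80)), np.array((20, 180, 255))
    mask = cv2.inRange(hsv, lo, hi)                        # threshold on color
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # drop speckle
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; keep blobs large enough to be a sausage.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```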
The couple has a promising life ahead of them, and we look forward to hearing about their future adventures and, hopefully, seeing some baby robots too.
PS: This is not the first romantic meal the robots have shared. Last year, the pair prepared a somewhat more mainstream breakfast: pancakes. Guten Appetit!
This is PR2. PR2 plays pool. PR2 brings you beer. And now, or very soon anyway, PR2 will bake you cookies. Warm, gooey, chocolate chip cookies. Seriously, is this not the greatest robot in the world or what?
This video comes from graduate student Mario Bollini, who's a member of Daniela Rus' Distributed Robotics Lab at MIT CSAIL. It's not in the video, but as you can see from the picture, PR2 (or "bakebot" for the purposes of this demo) is also able to cream butter and sugar, and we already know that it can break (or not break) eggs. It does make a bit of a mess, which is the reason for the surgical smock, but a separate group is programming the robot to wipe down the table afterwards. Incidentally, I love how when PR2 finishes adding an ingredient to its mixing bowl, it just drops the container on the floor. Now that's my kind of clean-up.
Bollini hopes to have PR2 making cookies from start to finish within the shockingly short time of a month. Or actually, it'll be just making one single giant cookie at a time, but you know what, I'm totally okay with that.
One day, the Japanese Ministry of Defense decided to wander into Akihabara, a major electronics shopping center in Tokyo. In what I'm told is a relatively typical Akihabara experience, a year and a half and about a thousand dollars later they came out with this crazy spherical flying robot about the size and shape of a soccer ball.
According to the video, this is the world's first truly spherical flying robot (this may or may not be true). It can buzz around at up to 60 kilometers per hour [about 37 mph] or hover stably in narrow spaces like hallways. But its neatest trick is landing: it just smacks into the ground and rolls to a stop to absorb the impact. It's also ideal for operating indoors, since keeping all of the flying and steering components inside the robot lets it happily bounce off walls, doors, windows, light fixtures, and startled people.
The robot relies on one propeller for thrust and eight separate wings for control, and while it doesn't currently carry a payload, it's designed to mount a camera or other sensors. Next up is to give this thing some autonomy, and at only $1000 a pop, they're cheap enough that someone who's not with the Japanese Ministry of Defense should venture into Akihabara and bring us all back a sweet little robot soccer ball kit.
Don't tell anyone, but this looks to be a full-length copy of Killer Robots that's made an appearance on YouTube. We were off giving a talk (and watching other events) and weren't able to brave the mobs of delirious robot fans around the RoboGames heavyweight combat arena, but the Science Channel brought in a squad of cameramen led by Grant Imahara (of MythBusters) to tape the whole thing.
If you're in too much of a hurry to watch it all, you should probably see a doctor and/or get your priorities straight, but the last two matches (starting at about 36:00) are some of the best that I've seen in the last three years of RoboGames and ComBots. Now hurry up and watch it already, 'cause there's no telling how long it's going to last online and who knows when it'll be on TV again.
I'm not entirely sure what shuffleboard is. So really, I'm not at all qualified to compare this robotic version of the sport to the real thing. But it's nifty that a bunch of students at Oregon State University got a chance to build these robots as part of their coursework, proving that robots can be for learning and fun and evil, all at the same time! Not that I'm insinuating anything about shuffleboard, but I digress. Here's video of a match:
Not bad for eight weeks and 200 hours of work, right? Now someone just needs to invent robotic curling. There's an action-packed sport that's somehow different from and significantly better than shuffleboard. Oh wait, apparently someone did:
I know nothing about this, besides that I found it on YouTube after searching for "robotic curling," but it does sort of look like it might possibly be autonomous, which would be pretty cool. There's video of another match here. If you know anything about it (it's something to do with an "SMU championship"), speak up in the comments!
You may not realize it, but you've got a lot of springiness going on in your legs. You may also not realize that you change that springiness depending on whether you're running or walking, what surface you're on, and whether or not you're carrying stuff. Our bodies (like those of most animals) are able to dynamically adapt our legs and gaits to make us more efficient under changing conditions. Dynamic adaptation is something that robots are notoriously bad at, but EduBot, a son or cousin or something of the venerable RHex, has been experimenting with six new "tunable" legs that allow it to adjust its gait on the fly.
EduBot's legs are made out of carbon fiber, and by changing the location of a slider along each leg, the overall stiffness of each leg can be adjusted independently. Of course, once the stiffness of the legs changes, EduBot has to adapt its gait to match, which it does all by itself by analyzing its own speed, efficiency, and stability. A bunch of different experiments were performed to help the robot learn what leg stiffnesses and gaits produced the most desirable movement on different surfaces and while carrying different loads, and generally the robot was able to figure out what worked best within about 70 tries' worth of experimentally fiddling with its own programming. I say "generally," because sometimes it took longer, and because watching the robot fail to use the correct gait is pretty funny:
Overall, these experiments have shown that EduBot runs fastest and most efficiently with stiffer legs, but that things can change on softer surfaces (say, grass or a shaggy carpet) or with payloads, indicating that adaptive and dynamic leg compliance really would be a useful thing to have on a robot, despite the added complexity. Next up will be teaching the robot to adjust its legs on the fly, and it'll be interesting to see how this technology might benefit other robots (or even humans) with similar limbs.
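For a sense of what that kind of self-tuning looks like in code, here is a minimal sketch: treat leg stiffness and gait frequency as parameters, run one short trial per setting, and let a derivative-free optimizer converge within a budget of roughly 70 trials. The Nelder-Mead optimizer and the toy speed model below are stand-ins for the robot's actual hardware loop and learning procedure, which the paper describes in detail.

```python
# Sketch of a gait self-tuning loop: maximize measured forward speed
# over (leg stiffness, gait frequency) with a derivative-free search,
# under a trial budget like EduBot's ~70 experiments. The surrogate
# speed model stands in for running the real robot.
from scipy.optimize import minimize

def run_trial(params):
    """Return forward speed (m/s) for one trial. On hardware this would
    set the leg sliders, run the gait briefly, and measure speed; here
    it's a toy model with a single optimum at (0.7, 2.5)."""
    stiffness, freq = params
    return 1.0 - (stiffness - 0.7) ** 2 - 0.1 * (freq - 2.5) ** 2

def tune_gait(x0=(0.5, 2.0), max_trials=70):
    # Minimizing negative speed == maximizing speed, within the budget.
    result = minimize(lambda p: -run_trial(p), x0,
                      method="Nelder-Mead", options={"maxfev": max_trials})
    return result.x  # best (stiffness, frequency) found

print(tune_gait())
```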
EduBot's new legs were presented in an ICRA paper entitled "Experimental Investigations into the Role of Passive Variable Compliant Legs for Dynamic Robotic Locomotion," by Kevin C. Galloway, Jonathan E. Clark, Mark Yim, and Daniel E. Koditschek, of Harvard University, Florida A&M, and the University of Pennsylvania.
Jumping offers a way for very small robots to get over very large obstacles using a minimal amount of energy. It's tricky, though, because while the first jump might be pretty easy, subsequent jumps depend on the ability of the robot to right itself, aim, and go again. That's essentially three separate subsystems, but since you're only ever using one at a time, the risk is that your robot ends up being three times as bulky as is strictly necessary. And in small robots, efficiency is everything.
EPFL's locust-inspired jumping robot solves one of these problems with a weighted roll cage that helps the bot passively return to an upright position whenever it lands. A second motor then allows the robot to rotate within the cage to change its jumping direction. This works quite well, but it adds bulk plus another motor to the whole system.
Jianguo Zhao and a team from Michigan State University have created a jumping robot that somehow manages to do everything that it needs to do with just one single motor. It can change its orientation, right itself, and then jump (really freakin' high) with one motor and some clever mechanical engineering. Check it out:
The actual jumping mechanism was directly inspired by the legs of a frog, but it's really the rest of the robot that's so cool. Everything is driven by one tiny pager motor, and here's how it works:
To jump, the pager motor engages a gear which pulls the robot's body down towards its legs, slowly charging four torsional springs. The gearing and springs help keep the power requirements low without sacrificing jumping energy. When the springs are fully charged up, the gear trips a little lever, and the legs are released. Boing!
After re-entry, the robot inevitably finds itself lying prone. By driving the pager motor backwards, the same gear that charges the springs instead spins against the ground without engaging anything, allowing the body of the robot to rotate to a new position.
To get up, as the robot's body is pulled down towards its legs, little arms deploy outwards, driven by that same downward motion. These arms push the robot up into a standing position, and keep it there until liftoff.
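Put together, that's one motor, two directions, three behaviors. Here's our reading of the control logic as a sketch (the Motor stub and timings are made up, not code from the MSU team):

```python
# Our reading of the single-motor trick: the same pager motor, driven
# in different directions, covers steering, standing up, and jumping.
import time

class Motor:
    def drive(self, direction):
        # Stand-in for hardware: +1 forward, -1 reverse, 0 stop.
        print(f"motor direction = {direction}")

def steer(motor, seconds):
    # Reverse: the charging gear spins freely against the ground,
    # rotating the body to aim the next jump.
    motor.drive(-1); time.sleep(seconds); motor.drive(0)

def stand_charge_and_jump(motor, charge_seconds):
    # Forward: the body is pulled down toward the legs, the stand-up
    # arms deploy, the torsional springs charge, and at full
    # compression the release lever trips. Boing!
    motor.drive(+1); time.sleep(charge_seconds); motor.drive(0)

if __name__ == "__main__":
    m = Motor()
    steer(m, 0.5)                  # aim
    stand_charge_and_jump(m, 3.0)  # stand up, wind springs, release
```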
I really love how simple and clever this all is. It's efficient, too: the robot is 8 centimeters tall and only weighs 20 grams, including the motor and a 50 mAh battery, but it can make approximately 285 jumps without needing to be recharged.
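Those numbers invite a quick back-of-the-envelope check. Assuming the 50 mAh battery is a single lithium-polymer cell at a nominal 3.7 volts (the voltage is our assumption; only the capacity is given), the arithmetic works out like this:

```python
# Back-of-envelope energetics for the 20 g jumper, assuming a single
# 3.7 V LiPo cell; the 50 mAh and 285-jump figures are the robot's.
battery_j = 0.050 * 3.7 * 3600          # 50 mAh * 3.7 V ~= 666 J stored
per_jump_j = battery_j / 285            # ~2.3 J drained per jump cycle
mass_kg, g = 0.020, 9.81
ceiling_m = per_jump_j / (mass_kg * g)  # ~12 m if every joule became height
print(per_jump_j, ceiling_m)            # motor and gear losses eat most of it
```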
The designers think that it should be possible to make the robot jump even higher and farther, and of course at some point they're going to want to stick some sensors on there or something to move it from just being awesome to being awesome and useful at the same time.
This robot was presented at ICRA in a paper entitled "Development of a Controllable and Continuous Jumping Robot" by Jianguo Zhao, Ning Xi, Bingtuan Gao, Matt W. Mutka, and Li Xiao, all from Michigan State University.
A teleoperated robotic construction machine accidentally hit an oxygen cylinder at the Fukushima Dai-ichi nuclear plant this Tuesday, causing the cylinder to explode. The unmanned machine, a grapple-equipped excavator fitted with cameras to guide a remote operator, was clearing radioactive debris from the south side of the No. 4 reactor building when a loud explosion was heard around 2:30 p.m.
Despite the loudness of the blast, a Tokyo Electric Power Co. (TEPCO) official told IEEE Spectrum that, “It turned out to be nothing. There was no damage and there was nothing to repair. And the machine is being used again.” He added that the cylinder contained "compressed oxygen, so the noise was loud."
The machine, which the TEPCO official insists is "not a robot," was removing debris flung from the No. 3 reactor building after a hydrogen explosion occurred there on March 14, following the meltdown of the reactor’s fuel rods. Workers are trying to clear the plant of radioactive debris from at least two hydrogen explosions in order to facilitate the set-up of reactor cooling systems and also the transfer of pooled radioactive water from the reactor and turbine buildings to a central radioactive waste disposal facility and other temporary storage.
Because much of the rubble is highly radioactive, TEPCO is employing machines like the remote-controlled excavator to remove the contaminated debris. But as this explosion shows, that doesn't mean there are no risks. In fact, an operator controlling a teleoperated machine through cameras rigged to the vehicle has only a limited view of the scene, which can make it difficult to spot dangerous objects like oxygen cylinders amid the piles of rubble.
Dr. Robin Murphy, director of the Center for Robot-Assisted Search and Rescue (CRASAR) at Texas A&M University, in College Station, and a world expert on rescue robotics, says that she sees "these kinds of accidents or operator errors all the time." The problem, she explains, is that roboticists are still trying to improve remote presence technologies to allow operators to effectively see and act remotely through a device such as a robot or sensor.
"Many manufacturers think that a certain camera position or multiple cameras will solve the problem of what is sometimes called situation awareness or sensemaking, but this neglects the whole host of subtle, but real, cognitive barriers that arise from working remotely and having perception mediated," she says. Remote operating a robotic system in a constrained environment -- say, an office or in space or underwater -- might actually be easier compared to a disaster-stricken area, which is not well understood and not engineered to make it easy for the robot. "Disasters continue to offer surprises and difficult to model situations."