Automaton

Robot Videos: Festo's SmartBird, Social Robots, and Autonomous Cars

There's no better way to start off the week than with a trio of fascinating robot videos, each of which is easily educational enough that you should be able to convince yourself (and anyone else) that watching them is definitely not procrastinating. 

This first video is a follow-up to Festo's SmartBird robotic seagull that we posted about last month. Creating a heavier-than-air, fully functional robotic bird is no small feat, and this 17-minute video takes you through the development process, including lots of juicy details and behind-the-scenes test footage:


Cynthia Breazeal gave a seminar at CMU's Robotics Institute on "The Social Side of Personal Robotics." As you may have noticed, robots tend to be pretty lousy at interacting socially with humans, largely because robots have a hard time understanding what's going on inside our heads. I can totally relate to this because I have a hard time understanding what's going on inside other people's heads too, and if it's difficult for me, it's practically impossible for a robot.

Dr. Breazeal talks about new capabilities that her lab is developing to allow robots to employ a higher degree of insight (if you want to call it that) into how humans think, to enable robots to interact with us more naturally and more successfully. For example, the seminar includes video of experiments with Leonardo, where the robot demonstrates how it can understand not just what a human wants, but also what a human believes, which allows the robot to be much more... Well, I'm not sure what else to say but "insightful." Other experiments show how Leonardo can successfully pick up on unknown rules based on behavioral feedback, which is a skill that could hypothetically be extended to abstract social situations.

This talk is just over an hour long, but it's definitely worth watching in its entirety:


Lastly, we've got a (rather brief) TED Talk from Sebastian Thrun, who's been developing autonomous cars at Stanford and, more recently, Google. I never get tired of hearing his vision for the future where we all ride around in safe and efficient robotic vehicles, but it's somewhat ironic that no matter how much safer autonomous cars are over human drivers, it's the risk of accidents that's keeping them out of the hands of consumers. We have a ways to go both socially and legally before sharing the road with robots is going to be acceptable, but there are ways to ease us into it that may help to make the transition both smoother and quicker.

Why We Should Build Humanlike Robots

Robokind Zeno, a small walking humanoid with an expressive face created by Hanson Robotics.

People often ask me why I build humanlike robots. Why make robots that look and act like people? Why can't robots be more like ... appliances?

In fact, some people argue that it's pointless for robotics researchers to build robots in our image; existing humanoids fall short of what science-fiction authors have dreamed up, so we should just give up. Others even say we'll never have androids around us, because when you try to make robots look more human, you end up making them look grotesque.

I disagree. I believe robotics researchers should aspire as grandly and broadly as possible. Robots can be useful in many shapes and forms, and the field is young, with so much room left for innovation and diversification in design. Let a thousand robot flowers bloom.

On the tree of robotic life, humanlike robots play a particularly valuable role. It makes sense. Humans are brilliant, beautiful, compassionate, loveable, and capable of love, so why shouldn’t we aspire to make robots humanlike in these ways? Don’t we want robots to have such marvelous capabilities as love, compassion, and genius?

Certainly robots don’t have these capacities yet, but only by striving towards such goals do we stand a chance of achieving them. In designing human-inspired robotics, we hold our machines to the highest standards we know—humanlike robots being the apex of bio-inspired engineering.

In the process, building humanoid robots results in good science. They push the boundaries of biology, cognitive science, and engineering, generating a mountain of scientific publications in many related fields: computational neuroscience, A.I., speech recognition, compliant grasping and manipulation, cognitive robotics, navigation, perception, and the integration of these technologies within complete humanoids. This integrative approach mirrors recent progress in systems biology, and in this way humanoid robotics can be considered a kind of meta-biology. Humanoid robots cross-pollinate among the sciences, and represent a subject of scientific inquiry in themselves.

Some of Hanson Robotics' creations [from left]: Alice, Zeno, and Albert Hubo.

In addition, humanlike robots prove genuinely useful in real applications. Numerous studies, including those with the humanoids Nao, Bandit, Kaspar, and RoboKind Zeno, show that autistic children respond favorably to such robots, promising uses in treatment and social training. Or consider a humanoid robot like NASA's Robonaut (just to name one): its capabilities in space and in factory automation promise safer, more efficient work environments for people. And then there is the simple wonder and psychological power of humanoid robots. Throughout history, human-inspired depictions have brought joy and insight, from the sculptures of Michelangelo to great works of literature to the animated films of Disney, Miyazaki, and others; there is no reason robots can’t inspire us similarly. Humanlike robots already bring us wonder and joy. Why can’t they communicate just as much wisdom, knowledge, and ardor as the other figurative arts? And beyond the known uses for humanlike robots, new ones will certainly emerge, expand, and surprise us as the capabilities of robots evolve.

It is true that humanlike robots are nowhere near human-level in their abilities today. Yes, humanlike robots fail. They fall, they lose the thread of a conversation, they misunderstand us, and they disappoint as much as they exhilarate us. At times these failures frustrate the public and robotics researchers alike. But we can’t give up. Humanoid robots are still in their infancy. Though they falter, their abilities continue to grow and improve. Babies can’t walk, talk, or do much of anything as well as adults do, but that doesn’t mean babies deserve our contempt. Let’s not give up on our robotic children. They need nurturing. And as a researcher in humanoid robotics, I can attest that it’s a pleasure to raise these robots. They are a lot of fun to develop.

Looking forward, we can find an additional moral imperative in building robots in our image. Simply put: if we do not humanize our intelligent machines, they may eventually be dangerous. To be safe when they “awaken” (by which I mean gain creative, free, adaptive general intelligence), machines must attain deep understanding of and compassion toward people. They must appreciate our values, be our friends, and express their feelings in ways that we can understand. Only if they have humanlike character can there be cooperation and peace with such machines. It is not too early to prepare for this eventuality. By the time machines become truly smart, it will be too late to ask them to suddenly adopt our values. Now is the time to start raising robots to be kind, loving, and giving members of our human family.

So I can see no legitimate reason not to make humanlike robots, and many good reasons to make them. Humanlike robots result in good science and practical applications; they push robots to a higher standard, and may eventually prevent a war with our intelligent machines. What’s not to love about all of that?

David Hanson, Ph.D. [photo right] is the founder and CTO of Hanson Robotics, in Richardson, Texas, a maker of humanlike robots and AI software. His most recent creation is Robokind, a small walking humanoid with an expressive face designed for research.

Da Vinci Surgical Bot Folds, Throws Tiny Paper Airplane


Everybody already thinks that robot surgery is way cool, but I suppose there's no harm in taking a few minutes to show off the precision that tiny little robot grippers are capable of. On the other end of these steely claws is an even steelier-eyed surgeon with a questionable amount of aeronautical experience, and in between the two is a da Vinci surgical system. This particular robot hails from Swedish Hospital in, you guessed it, Seattle.

The da Vinci system, if you recall, provides surgeons with an interface that allows them to control little robotic hands with their own (much larger) hands, enabling much finer control in a much tighter space. For patients, this means smaller incisions that heal faster, and for surgeons, it means no more going elbow deep into someone else's guts.
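The key trick in that interface is motion scaling: increments of the surgeon's hand motion get shrunk by a constant factor before they're applied to the instrument tips. Here's a minimal sketch of the idea; the 5:1 scale factor and the function name are illustrative assumptions, not Intuitive Surgical's actual parameters.

```python
# Minimal sketch of master/slave motion scaling, the idea behind
# teleoperated surgical interfaces like the da Vinci. The scale factor
# here is an illustrative assumption, not the real system's value.

SCALE = 0.2  # hypothetical: 5 cm of hand motion -> 1 cm of tool motion

def slave_delta(master_prev, master_now, scale=SCALE):
    """Map an increment of the surgeon's hand motion to tool-tip motion."""
    return tuple(scale * (now - prev)
                 for prev, now in zip(master_prev, master_now))

# A 5 cm hand movement along x becomes a 1 cm tool movement
print(slave_delta((0.0, 0.0, 0.0), (5.0, 0.0, 0.0)))  # (1.0, 0.0, 0.0)
```

Real systems typically layer tremor filtering and a "clutch" (to reposition your hands without moving the tools) on top of this mapping, but the scaling step is what turns large, comfortable hand motions into millimeter-scale tool motions.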

I do feel obligated to point out that depending on your definition of robot, the da Vinci system may not qualify as one, in that it doesn't have much of an autonomous component: all of those motions are controlled directly by the surgeon using a master/slave system. However, robots with actual autonomous surgical capabilities aren't that far off, and now that we've seen demos of robots autonomously sucking your blood out and autonomously taking biopsies on simulated turkey prostates, it's just a matter of time before you start having to choose your surgeon based on whether it's running Windows or Linux.

[ Intuitive Surgical ] via [ Nerdist ]

TED Talk: Berkeley Bionics

We were on hand when Berkeley Bionics introduced their eLEGS exoskeleton last October, and there's no doubt that it's a pretty amazing piece of hardware. The same company is also responsible for the HULC exoskeleton, which they've licensed to Lockheed Martin. If you're already familiar with Berkeley Bionics' stuff, there isn't too much new in the presentation, but it's always great to see these incredible exoskeletons in action:

Incidentally, media coverage of the eLEGS launch focused extensively on how the exoskeleton had the potential to "free" people with disabilities from what they seemed to assume is some kind of lousy and pitiable quality of life, which is certainly not the case. I'd encourage you to read this wonderful article by Gary Karp on the subject, and also consider how sometimes, people with "disabilities" can actually be super human in some ways.

[ Berkeley Bionics ] via [ TED ]

Weird French Robot Reeti Wants To Be Your Home Theater

This curious, really kinda weird-looking robot is Reeti, apparently what you get when a robot and a media center PC have offspring. Reeti is designed to provide an interface between your TV and your computer, offering a variety of additional capabilities, or something... I'm honestly not quite sure what it, um, does.

Setting practical uses aside, Reeti is very emotionally expressive, considering its relative simplicity. It has cheeks that glow to communicate mood, and there are touch sensors in its face to enable it to react when you prod it. Each of Reeti's eyes has its own HD camera, and its 3D perceptual view lets it recognize people and objects and track motion. Reeti can understand (and localize) spoken commands, and its speech synthesis allows it to read emails and RSS feeds to you. Oh hey, something it can do!

If you're still wondering what other things Reeti can do for you besides reading aloud, maybe this will answer your question:

Or, uh, maybe not.

I guess what I still don't really understand is why I'd want a Reeti in my house. I mean, I want one, because it's a robot, and it's expressive and funny looking, but at this point I'm not quite sure what Reeti plans to do for me that I couldn't do more efficiently with a mouse and keyboard, you know? It looks like Reeti is designed to be more of an open platform where people can write their own apps to extend the capabilities of the robot, which is fine, but the successful app stores are mostly attached to devices with enough inherent capability to establish a large, happy consumer base before any apps exist at all, creating their own market. So that goes back to my original question: what can Reeti do for me?

Setting practical uses aside (for the second time), I do appreciate Reeti's overall aesthetic, if you can call it that. Reeti is likely as strange looking as it is in order to distance itself from any sort of anthropomorphic impressions. It's got eyes and a mouth to help it communicate, but it's so far from looking human that we don't get caught up in how it doesn't look human, if that makes sense.

Reeti is made by the French company Robopec, and apparently there will be some way of pre-ordering one at some point for about $7,000 (!). Until we get a little more information on all of the spectacular and amazing things that Reeti may or may not be able to do, though, I'd hold off adopting one of these little guys, unless you're so smitten that it's already too late.

[ Reeti ] via [ Robots Dreams ] and [ CNET ]

The Global Robotics Brain Project

global robotics brain

Why is this man smiling?

Because in his brain resides a database with more than 36,000 robotics companies, robotics labs, robotics projects, robotics researchers, and robotics publications, all categorized, tagged, and linked.

No, not in the brain inside his head. We're talking about the Global Robotics Brain, a project that the man, Wolfgang Heller, started to keep track of the robotsphere.

Inspired by Google's PageRank, Heller, a business intelligence consultant from Sweden, asked himself: Could he use a similar approach to draw a map of interactions between the different robotics players and identify who is doing the most relevant work? What trends are emerging?

In 2005, after a visit to the World Robotics Exhibition in Aichi, Japan, he started to systematically feed his database with anything related to robotics he came across. He then created tools to automate the process. Six years later, the result is a "gigantic mindmap of a broad range of robotics resources," he tells me.

Heller isn't building this brain for fun. His hope is that companies and labs will pay him to access it. A free version is available for students and researchers for personal use; an expanded version with more detailed information is available for organizations on an annual subscription basis.

In the expanded version, you'll find insights on robotics trends that Heller generates periodically (using, we should note, both his brain and the database brain). Here's his latest list of robotics trends:

1. Industrial robotics renaissance. Soft mobile robots start working alongside human workers. Examples: Toyota's safe human-robot factory assembly, Festo Bionic Handling Assistant, pi4 Workerbot, Robonaut 2.

2. Urban service robotics renaissance. Smart mobile robots enter public space for safe and green city living. Examples: Dustbot, Google autonomous car, ubiquitous robotics, Cyber-Physical-Systems.

3. Civil robotics renaissance. Transfer of military robotics into civilian robot applications. Examples: telepresence robots, civil UAVs and UGVs, telesurgery, rescue, Ambient Assisted Living.

4. Robotics toy-to-tool renaissance. New generation reinvents and remixes robotics technology, artificial intelligence, information and communication technologies, nano and biotechnology into new toy-to-tool robot platforms. Examples: Nao, PR2, Kinect, ROS.

5. Robotics promotion renaissance. Governments have recognized robotics as strategic technology that requires R&D investments and public awareness. Examples: National robotics roadmaps, flagship research programs, establishment of centers of excellence, robotics science and amusements parks, national robotics weeks, robotics challenges.

Check out the Global Robotics Brain to see if you envisage other trends. Look at where the investment is coming from, where the research is taking place, where technology gets commercialized, and so forth. Soon you’ll start feeling like you also have a robotics brain.

Samuel Bouchard is a co-founder of Robotiq.

Quadrotors Demonstrate Mad Cooperative Juggling Skills

Back in December, I posted a little teaser preview of a talented quadrotor juggling a ball at ETH Zurich's entirely awesome Flying Machine Arena. That quadrotor has been practicing, and has even enlisted a friend. Hey look, now robots can amuse themselves!

Besides the quadrotors, what makes this all possible is an extremely sophisticated motion capture system, so it's unlikely that you'll see these skills (or these skills, for that matter) outside of a tightly controlled environment.
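To juggle, the controller has to turn those motion-capture measurements into a predicted interception point: given the ball's position and velocity, simple ballistics says where and when it will come back down. A hedged sketch of that prediction step (drag ignored; the function and numbers are illustrative, not ETH Zurich's actual controller):

```python
# Illustrative sketch: predict where a juggled ball will cross a given
# height, from a single position/velocity estimate (e.g. from mocap).
# Constant-gravity ballistics only; real controllers do much more.

G = 9.81  # gravitational acceleration, m/s^2

def predict_intercept(pos, vel, hit_height):
    """Return (time, (x, y)) where the ball descends through hit_height."""
    x, y, z = pos
    vx, vy, vz = vel
    # Solve z + vz*t - 0.5*G*t^2 = hit_height for the later (descending) root
    a, b, c = -0.5 * G, vz, z - hit_height
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ball never reaches that height
    t = (-b - disc ** 0.5) / (2 * a)  # later root, since a < 0
    return t, (x + vx * t, y + vy * t)

# Ball leaving the paddle at 1 m height, 5 m/s up, drifting 1 m/s in x:
t, (ix, iy) = predict_intercept((0.0, 0.0, 1.0), (1.0, 0.0, 5.0), 1.0)
print(t, ix, iy)  # ~1.02 s later, ~1.02 m away in x
```

The quadrotor's job is then "be at (ix, iy) at time t with the right paddle velocity," which is exactly the kind of aggressive trajectory tracking the Flying Machine Arena's mocap system makes possible.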

For the record, these are easily the most impressive juggling robots in recent memory, which includes one or two or three or four other bots. Now seriously, put a net up there and let's have ourselves some robot volleyball already.

[ ETH - IDSC ]

Robots Are the Next Revolution, So Why Isn't Anyone Acting Like It?

This robot can fetch you a beer. But it will cost you $400,000.

Back in 2006, when Bill Gates was making his tear-filled transition from the PC industry into a tear-filled career as a philanthropist, he penned an editorial on robotics that became a rallying cry for… no one. In the piece, titled "A Robot in Every Home," Gates highlighted the obvious parallels between the pre-Microsoft PC industry and the pre-anybody personal robotics industry: industrial use, research work, and a fringe garage hobby. That was the state of the computer industry before Bill Gates and Steve Jobs, and that’s more or less the state of the robotics industry now, five years after Bill’s editorial.

Of course, Bill hasn’t been around to make the dream come true; he’s been busy saving Africa and our public school system and the souls of fellow billionaires. He did leave behind a multi-billion-dollar software company, however, that is perfectly poised to make "A Robot in Every Home" fact instead of fiction. Since then, Microsoft’s one major (intentional) contribution to the industry has been the sporadically updated Microsoft Robotics Developer Studio. It’s a good tool for prototyping and simulating simple robotics, but it isn’t moving anything forward. In fact, it treats the robotics industry exactly like computer industry stalwarts treated the burgeoning PC industry: as a hobby. What Microsoft hasn’t been doing over these past years is building a robot operating system, or making an even greater gamble on actual robots themselves.

Oddly enough, Microsoft’s largest contribution to robotics to date was largely inadvertent. The Kinect sensor for the Xbox 360 launched in November of 2010 and was a surprising success with consumers. While normal people snapped up the mysterious sensor by the millions, brought it into their living rooms, and realized how very out of shape they were, pale hobbyists ("hackers," as they’re known these days) quickly sequestered themselves in their garages (circa 2010/2011: poorly heated loft apartments) and taught the Kinect sensor new tricks. The piece of hardware that was originally intended to be a locked-down add-on for the 360 became a multipurpose 3D sensor extraordinaire. Microsoft actually issued a mild, out-of-touch (and never-repeated) threat to the hackers, but the "damage" was done: hundreds of burgeoning roboticists had a supremely powerful tool in their hands -- and incidentally generated millions of dollars worth of free PR for Kinect with YouTube videos of their exploits.
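What made the Kinect so prized is that its depth image back-projects into a metric 3D point cloud with nothing more than the pinhole camera model. Here's a sketch of that conversion; the intrinsics below are commonly cited approximations for the Kinect's depth camera, used illustratively:

```python
# Sketch: back-projecting a depth-camera pixel into 3D camera-frame
# coordinates via the pinhole model. The intrinsics are approximate
# values often quoted for the Kinect depth camera, not calibrated ones.

FX = FY = 585.0        # focal lengths, in pixels (approximate)
CX, CY = 320.0, 240.0  # principal point of the 640x480 depth image

def depth_to_point(u, v, depth_m, fx=FX, fy=FY, cx=CX, cy=CY):
    """Back-project pixel (u, v) with depth in meters to (X, Y, Z)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A pixel at the image center maps straight down the optical axis
print(depth_to_point(320, 240, 2.0))  # (0.0, 0.0, 2.0)
```

Run over every pixel, 30 times a second, and you have live 3D geometry of a room for $150 -- which is why the hackers pounced.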

In 1974, when Intel released the 8080 microprocessor, it wasn’t trying to invent the PC; it was just trying to improve upon its existing, limited 8-bit 8008 chip. It was up to the likes of MITS (the Altair 8800) and Microsoft (Altair BASIC) to make good use of it. Clones and successors quickly followed, and Intel has obviously kept up over the years. Perhaps Microsoft would be happy to accidentally spark a robotics revolution with the Kinect sensor, but wouldn’t it prefer to be at the center of it? Besides, Microsoft doesn’t actually build the 3D sensor at the heart of the Kinect; those honors go to a company called PrimeSense, which is offering the same tech to anyone for a similarly low price.

Someone is going to figure this out. Willow Garage, fueled by some mysterious and apparently inexhaustible venture capital, is taking the open source angle with its ROS (Robot Operating System). The project already has a good amount of traction among bearded hackers and ambitious university robotics programs, since it allows altruistic types to build upon the innovation of others instead of continually "reinventing the wheel" (as Willow Garage puts it) and building their own robot operating system and hardware support from the ground up. Still, while ROS has made great strides and is home to some very exciting innovation -- along with its fair share of Kinect hacks, of course -- it’s nothing a consumer would find useful or even approachable. What the personal computing revolution did was take tools that were already commonplace in the enterprise and hand them to regular cro-mags who wanted to "balance a checkbook" with a spreadsheet application or "word process" without a typewriter ribbon. Microsoft put those tools in the hands of hobbyists, then Apple put them in the hands of regular people, and then Microsoft put them in the hands of everybody.

What we need a Microsoft or a Google or an Apple to do -- or, if they won’t do it, Enterprising Upstart X -- is build an operating system that runs on standardized or commodity hardware, with built-in capabilities for doing things that are actually useful for a home user. Buzzing you in when you get locked out, signing for a package, taking that frozen chicken out of the freezer while you’re at work, feeding your pet, and of course the veritable classic of robo-problems: getting you a beer. As simple as these things sound, they’re actually incredibly complex given where general robotics innovation currently stands. That’s why EUX is a longshot, but there’s still room for some barefaced ingenuity. The dawn of the PC was marked by incredible efficiency of code and hardware, techniques that made Bill Gates and Steve Wozniak famous. Currently, the retail robot closest to being able to manage all these tasks is Willow Garage’s PR2, which costs $400,000, harbors two dual-processor Xeon servers (16 cores total), and is still slow as molasses.

Imagine a robot that you could buy at Best Buy for somewhere between $2k and $4k, unbox and configure in half an hour, and then just take for granted as an extremely reliable, whine-free household member for the next few years (or, if you bought it from Apple, exactly 12 months before the upgrade lust sets in). It would change everything. Of course, it sounds preposterous given the current state of this barely-there industry, but it’s going to be a reality within the next decade. Who will get us there first?

Image: Willow Garage

Paul J. Miller, a New York-based technology writer, is a former editor at Engadget. This post appeared originally at pauljmiller.com.

iMobot Brings Robot Modules to Modular Robots

We love the concept behind modular robots: they're simple, cheap, easy to use, and capable of doing anything you want them to do, as long as you're willing to let them reconfigure. They're also easy to fix, and in many cases, capable of fixing themselves. So for example, if you've got a modular humanoid that you decide to kick in the face, it can put itself back together, as long as it's got enough modules attached to each other to enable movement. But single modules, left on their own, are more or less helpless.

iMobot is a project from UC Davis that takes all those cool possibilities embodied in modular robotics and adds a couple extra degrees of freedom that give each individual module significant capabilities as well. The basic central-hinge design is familiar from projects like CKBot, but iMobot adds rotating plates at the end of each module, which can turn the module into a single axle of sorts, capable of driving itself around. The modules can also crawl, roll, and "undulate" to get from place to place:

Besides movement, these additional degrees of freedom allow the modules themselves to perform tasks, like operating as little individual camera turrets. And of course, by sticking a bunch of the modules together, you can create much more sophisticated robots with enhanced capabilities: 

The creators of these modules, Graham Ryland and Professor Harry Cheng, have taken the promising step of starting up their own company, Barobo, to produce these little guys in bulk and make them available to research institutions and anybody else who wants to mess around with modular robots. Barobo already has a sizable NSF grant to kick things off, and they hope to have a product ready to go by the end of the year.

[ iMobot ] via [ Physorg ]

Japanese Robot Surveys Damaged Gymnasium Too Dangerous for Rescue Workers

Editor's Note: This is part of our ongoing news coverage of Japan's earthquake and nuclear emergency.

japan earthquake tsunami search and rescue robot

Japanese researchers have sent a robot into a damaged gymnasium where a partially collapsed ceiling makes it dangerous for rescue workers.

The team used a remote-controlled ground robot to enter the building in Hachinohe, Aomori Prefecture, in the northeastern portion of Japan's Honshu island, and assess the damage.

The roboticists, led by Fumitoshi Matsuno, a professor at Kyoto University and vice president of the International Rescue System Institute, used their KOHGA3 robot, a tank-like machine equipped with cameras and sensors, to carry out the mission.

"Part of the ceiling fell down," Prof. Matsuno told me. "That's why we used the robot." Emergency workers feared that aftershocks could send the rest of the ceiling crashing down.

Several robotics teams have been on standby throughout Japan, ready to assist in rescue and recovery operations after the earthquake and tsunami that struck the country early this month. Robots could also help at the troubled Fukushima Dai-1 nuclear power plant.

At the Hachinohe gymnasium, Prof. Matsuno's group set up the operator station -- a laptop computer with a video game-style controller attached -- at a safe location near the entrance. From there, they deployed their robot. Watch:

The KOHGA3 has powerful motors and four sets of tracks that allow it to traverse rubble, climb steps, and go over inclines up to 45 degrees. The robot is 86 centimeters long, 53 cm tall, and weighs in at 40 kilograms. Its maximum speed is 1.8 meters per second.

The robot carries three CCD cameras, a thermal camera, a laser scanner, an LED light, an attitude sensor, and a gas sensor. Its 4-degree-of-freedom robotic arm is nearly 1 meter long and carries its own CCD camera, carbon-dioxide sensor, thermal sensor, and LED light.

Upon reaching the area where part of the ceiling had fallen, the robot directed one of its CCD cameras upward, using its zoom capabilities to get a good look at the damage. The robot also pointed its camera at the debris on the ground, so workers could determine whether structural parts of the roof had collapsed.

japan earthquake tsunami search and rescue robot


Then it was time to explore other parts of the gymnasium. The roboticists drove the robot up to a room whose door was half open. Before entering, they used the robotic arm to peek inside. "Using a camera that is mounted at the tip of the arm, we obtained information on what's inside the room," Prof. Matsuno said.

The Kyoto University team included Dr. Noritaka Sato, Dr. Kazuyuki Kon, and Hiroki Igarashi. Prof. Masatshi Daikoku and Dr. Ryusuke Fujisawa from the Hachinohe Institute of Technology collaborated on the mission. The researchers are members of the IEEE Robotics and Automation Society.

The researchers also inspected the stage of the gymnasium, again using the robot's CCD cameras and the one mounted on the robotic arm. With all the inspection tasks completed, they drove the robot back to the entrance.

The group departed Hachinohe and headed out to Kuji, Iwate Prefecture, hoping to perform more inspections there. Their first stop was the National Kuji Storage Base, one of Japan's main oil stockpiles, with three storage tanks and a total capacity of 10.5 million barrels. The facility, located on the seashore, was completely destroyed [photos below], and there wasn't much the robots could do to help.

japan earthquake tsunami search and rescue robots


Next they moved on to a nearby shipyard. There were still buildings standing that rescue workers needed to inspect. The roboticists offered their assistance, but the officials in charge told them that a private company owned the buildings and they'd have to get permission to use the robots.

japan earthquake tsunami search and rescue robots

In another attempt to deploy their robot, the roboticists drove to Noda village, located about 15 kilometers south of Kuji. The earthquake and tsunami wiped out the coastal strip of Noda, leaving almost every building completely destroyed [photos below].

japan earthquake tsunami search and rescue robots


On a rooftop overlooking the devastated landscape, the roboticists discussed potential targets for their robot with the rescue workers in charge. But the same impediment came up: The buildings were private property, and the roboticists would need permission from the owners to get in, a process that could take a long time.

japan earthquake tsunami search and rescue robots

After several days on the road looking for opportunities to assist with their robot, the Kyoto University team began to make its way home. The researchers were happy to have helped, but also overwhelmed by the extent of the destruction they saw. Their contribution, Prof. Matsuno said, is only a "very small result."

Images: Fumitoshi Matsuno/Kyoto University

READ ALSO:

Can Robots Fix Troubled Reactors?
Tue, March 22, 2011

Blog Post: It's too dangerous for humans to enter the Fukushima Dai-1 nuclear plant. Why not send in robots?

iRobot Sends Packbots to Fukushima
Fri, March 18, 2011

Blog Post: A group of iRobot employees is on their way to Japan along with specially equipped Packbots and Warriors

More Robots to the Rescue
Fri, March 18, 2011

Blog Post: An underwater vehicle and another ground robot join the rescue and recovery operations

Robots Help Search For Survivors
Sun, March 13, 2011

Blog Post: Japanese engineers are deploying wheeled and snake-like robots to assist emergency responders


Automaton

IEEE Spectrum's award-winning robotics blog, featuring news, articles, and videos on robots, humanoids, automation, artificial intelligence, and more.
Contact us:  e.guizzo@ieee.org

Editor
Erico Guizzo
New York, N.Y.
Senior Writer
Evan Ackerman
Berkeley, Calif.
 
Contributor
Jason Falconer
Canada
Contributor
Angelica Lim
Tokyo, Japan
 
