This is a robotic dragonfly. If I told you that some company had just invented it and it was flying around today, you’d probably be impressed. Instead, I’m going to tell you that it was developed by the CIA and was flying in the 1970s. And not just flying like proof-of-concept-it-gets-off-the-ground flying, but reportedly, the flight tests were "impressive," whatever that means. It was powered by an ultraminiaturized gasoline engine (!) that vented its exhaust backwards to increase the bot’s thrust, and the only reason they seem to have scrapped it was its poor performance in a crosswind:
In the 1970s the CIA had developed a miniature listening device that needed a delivery system, so the agency’s scientists looked at building a bumblebee to carry it. They found, however, that the bumblebee was erratic in flight, so the idea was scrapped. An amateur entomologist on the project then suggested a dragonfly, and a prototype was built that became the first flight of an insect-sized machine.
A laser beam steered the dragonfly, a watchmaker on the project crafted a miniature oscillating engine to beat the wings, and a fuel bladder carried the liquid propellant.
Despite such ingenuity, the project team lost control over the dragonfly in even a gentle wind. “You watch them in nature, they’ll catch a breeze and ride with it. We, of course, needed it to fly to a target. So they were never deployed operationally, but this is a one-of-a-kind piece.”
In and of itself, this dragonfly is not particularly crazy. It’s also not particularly crazy that it was done 30 or 40 years ago, I guess. What IS crazy is when you start thinking about the state of technology 40 years ago versus the state of technology today, and what might be possible now (but currently top secret) if they had an operational insect robot way back then. It blows my mind.
The CIA also came up with a robot squid (its mission is STILL classified) and a robot research fish named Charlie. Pics and video of that, after the jump.
CIA’s Office of Advanced Technologies and Programs developed the Unmanned Underwater Vehicle (UUV) fish to study aquatic robot technology. Some of the specifications used to develop “Charlie” were: speed, endurance, maneuverability, depth control, navigational accuracy, autonomy, and communications status.
The UUV fish contains a pressure hull, ballast system, and communications system in the body and a propulsion system in the tail. It is controlled by a wireless line-of-sight radio handset.
Cute! And once again, seriously not bad for such a long time ago.
Wondering what a $15k telepresence robot can do for you? WONDER NO LONGER. With the help of a 4G wireless hotspot, this QB wandered out of the Anybots office into downtown Mountain View, Calif., looking for a snack. A mile later, it found a Red Rock Coffee and ordered a berry scone, tipped something like 125% (!) and then rolled out. Classy.
While it’s a little hard to tell from the vid, I’m assuming that Anybots sent a chaperone of some sort along to make sure that nobody just grabbed QB by the neck and made off with it. And if they didn’t, well, let me know next time you send a robot out for coffee, because I totally want one and I think grand theft robot is the only way it’s gonna happen.
Hisashi Ishihara, Yuichiro Yoshikawa, and Prof. Minoru Asada of Osaka University in Japan have developed a new child robot platform called Affetto. Affetto can make realistic facial expressions so that humans can interact with it in a more natural way.
Prof. Asada is the leader of the JST ERATO Asada Project and his team has been working on "cognitive developmental robotics," which aims to understand the development of human intelligence through the use of robots. (Learn more about the research that led to Affetto in this interview with Prof. Asada.)
Affetto is modeled after a one- to two-year-old child and will be used to study the early stages of human social development. There have been earlier attempts to study the interaction between child robots and people and how that relates to social development, but the lack of realistic child appearance and facial expressions has hindered human-robot interaction, with caregivers not attending to the robot in a natural way.
Here are some of the expressions that Affetto can make to share its emotions with the caregiver.
Norri Kageki is a journalist who writes about robots. She is originally from Tokyo and currently lives in the San Francisco Bay Area. She is the publisher of GetRobo and also writes for various publications in the U.S. and Japan.
Robotics is off to a good start this year. In January, there was CES, with lots of cool new robot products and demos, and we've also seen plenty of robot hacks using Microsoft's Kinect 3D sensor, which is creating quite a stir. But there was much more, of course, so it's time to review the most striking, stunning, and strange robot videos of January.
No. 10 This mind-bending action sequence from the Indian robot movie Enthiran is a must-watch. Insane, awesome, ridiculous? You be the judge.
Northrop Grumman’s sexily badass X-47B unmanned combat air system made its first flight ever on Friday, circling a desert runway a couple times all by itself before successfully not crashing. Northrop seemed pretty happy about the way things went:
“The flight provided test data to verify and validate system software for guidance and navigation, and the aerodynamic control of the tailless design. The X-47B aircraft will remain at Edwards AFB for flight envelope expansion before transitioning to Naval Air Station Patuxent River, Md. later this year. There, the system will undergo additional tests to validate its readiness to begin testing in the maritime and carrier environment.”
"Flight envelope expansion" means that they’re going to see how crazy the X-47B can get in the air. After that, they’re going to get it ready for its intended purpose, which is carrier operations. We know that drones are already pretty good at precision maneuvers, but I hear carrier landings are especially tricky. I’m optimistic (I always am about robots), but seeing this thing manage an autonomous carrier touchdown is going to go a long way towards convincing skeptics that drones really can function on a level similar to even the most skilled humans in many aspects of combat aircraft control.
As part of the European project RoboEarth, I am currently one of about 30 people working towards building an Internet for robots: a worldwide, open-source platform that allows any robot with a network connection to generate, share, and reuse data. The project is set up to deliver a proof of concept to show two things:
RoboEarth greatly speeds up robot learning and adaptation in complex tasks.
Robots using RoboEarth can execute tasks that were not explicitly planned for at design time.
The vision behind RoboEarth is much larger: Allow robots to encode, exchange, and reuse knowledge to help each other accomplish complex tasks. This goes beyond merely allowing robots to communicate via the Internet, outsourcing computation to the cloud, or linked data.
But before you yell "Skynet!," think again. While the closest analogies science fiction writers have imagined may well be the artificial intelligences in Terminator, the Space Odyssey series, or the Ender saga, I think those analogies are flawed. RoboEarth is about building a knowledge base, and while it may include intelligent web services or a robot app store, it will probably be about as self-aware as Wikipedia.
That said, my colleagues and I believe that if robots are to move out of the factories and work alongside humans, they will need to systematically share data and build on each other’s experience.
Imagine the following scenario: A service robot like the one in the hospital room [photo, top] is pre-programmed to serve a drink to a patient. A simple program might include: Locate the drink, navigate to its position, grasp it, pick it up, locate the patient in the bed, navigate to the patient, and finally hand over the drink.
Now imagine that during task execution this robot monitors and logs its progress and continuously updates and extends its rudimentary, pre-programmed world model with additional information. It updates and adds the position of detected objects, it evaluates the correspondence of its map with its actual perception, and it logs successful and unsuccessful attempts during its task performance. If the robot is not able to fulfill a task, it asks a person for help and stores any newly learned knowledge. At the end of its task performance, the robot shares its acquired knowledge by uploading it to a Web-style database.
Some time later, the same task is to be performed by a second robot that has no prior knowledge on how to execute the task. This second robot queries the database for relevant information and downloads the knowledge previously collected by other robots. Although differences between the two robots (e.g., due to wear and tear or different robot hardware) and their environments (e.g., due to changed object locations or a different hospital room) mean that the downloaded information may not be sufficient to allow this robot to re-perform a previously successful task, this information can nevertheless provide a useful starting point.
Recognized objects, such as the bed, can now provide occupancy information even for areas not directly observed. Detailed object models (e.g., of a cup) can increase the speed and reliability of the robot's interactions. Task descriptions of previously successful actions (e.g., driving around the bed) can provide guidance on how the robot may be able to successfully perform its task.
This and other prior information (e.g., the previous location of the cup, the likely place to find the patient) can guide this second robot’s search and execution strategy. In addition, as the two robots continue to perform their tasks and pool their data, the quality of prior information will improve and begin to reveal underlying patterns and correlations about the robots and their environment.
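The share-and-reuse loop described above can be sketched in a few lines of Python. This is an illustrative stand-in only: RoboEarth's actual interfaces differ, and the database, keys, and task record format here are invented for the sake of the example.

```python
# Hypothetical sketch of the RoboEarth idea: a shared, web-style knowledge
# base that robots query and extend. A plain dict stands in for the online
# database; the record fields are invented for illustration.

knowledge_base = {}


def upload(task, record):
    """A robot shares what it learned while performing a task."""
    knowledge_base.setdefault(task, []).append(record)


def download(task):
    """Another robot retrieves prior experience as a starting point."""
    return knowledge_base.get(task, [])


# Robot A performs the task, logging object positions and the outcome.
upload("serve_drink", {
    "cup_location": (2.1, 0.4),          # where the cup was last seen
    "route": ["locate_drink", "navigate", "grasp", "hand_over"],
    "succeeded": True,
})

# Robot B, with no prior knowledge of the task, bootstraps from A's record.
prior = download("serve_drink")
first_step = prior[0]["route"][0]        # a useful starting point, not a
print(first_step)                        # guaranteed-correct plan
```

The point of the sketch is the asymmetry: Robot B starts from Robot A's logged experience rather than from nothing, even though differences in hardware or environment mean the record is only a starting point.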
As you can see in the video above, RoboEarth has a way to go. One year into the project, we can download task descriptions from RoboEarth and execute a simple task. We can also upload simple things, like an improved map of the environment. But for now we are far from using or creating the rich amount of prior information described in the scenario above, or addressing potential safety or legal challenges.
I think that the availability of such prior information is a necessary condition for robots to operate in more complex, unstructured environments. The people working on RoboEarth -- me included -- believe that, ultimately, the nuanced and complicated nature of human spaces can't be summarized within a limited set of specifications. A World Wide Web for robots will allow them to achieve successful performance in increasingly complex tasks and environments.
This is maybe only peripherally (ha!) related to robotics, but it’s cool enough that I thought it was worth sharing… Besides, it’s Friday, and you deserve some nifty videos to watch. Anyway, we’ve posted before on all the cool things that roboticists have been able to do with Microsoft’s stupidly cheap and effective 3D camera system, and Willow Garage took some initiative and sponsored a contest to try and kick start even more open source Kinect innovation.
First place (and $3k) went to Garratt Gallagher’s "Customizable Buttons." Using a piece of paper and a pen, you can just draw your own touch-sensitive controls:
Taking home no awards, but one of my personal favorite demos, was Kinemmings, a game of Lemmings played using your body and the Kinect sensor. Yes, it may not be advancing the field of robots or whatever, but it sure looks like fun:
Microsoft should absolutely pay those guys a bajillion dollars and hire them as game designers or something. Seriously, Kinect has way more potential than one company can possibly harness. And as for robots, great strides are obviously being made, and the future is (hopefully) limitless. If any of these projects are of use to you personally, remember that since they’re on ROS, you can just download them and put them to work yourself.
Inspection of high-voltage power lines is costly, difficult, and a dangerous job even for skilled workers. Which means it's the perfect job for a robot.
We first wrote about Expliner, an incredible inspection robot that balances on power lines like an acrobat, more than a year ago. Since then, HiBot, the Japanese company that developed Expliner, has gone on several inspection jobs, remote operating the robot as it crawls on 500-kilovolt live lines.
The company is now gearing up to deliver the robot to customers, first in Japan, and later abroad as well.
Expliner is like a wheeled cable car that rolls along the upper pair of bundled cables. In addition to its manipulator arm, it carries laser sensors, to spot corrosion or scratches, and a high-definition camera, which records details of bolts and spacers far more effectively than even a human worker.
HiBot says that Expliner is a semi-autonomous robot.
"There is always a human in the control loop, but the basic repetitive tasks are automated," says Michele Guarnieri, a HiBot co-founder. "Tasks that require a high degree of precision, like maintaining balance or moving parts to a certain angle, are also automated."
He explains that the robot can inspect up to four cables simultaneously, and that software automatically checks all recorded videos and alerts users about potential damage or problems on the lines.
HiBot has recently released a new video that shows off the robot's capabilities, including being able to go over cable suspension clamps through a series of acrobatic maneuvers using a dangling counterweight to shift the robot's center of gravity. Watch:
HiBot, which spun off from the laboratory of Tokyo Tech roboticist Shigeo Hirose (known for his incredible snakebots), has recently won an award for the Expliner robot from Japan's Ministry of Economy, Trade, and Industry.
And in case you're wondering: "Expliner doesn't fall," Guarnieri says. "It's equipped with safety devices that prevent the robot from falling, even in case of strong winds."
Last year, we reported that British researchers were using a Charles Babbage robot head to develop emotional machines. We wondered whether the Charles head was a Hanson Robotics creation. We now have the answer.
"Yes, Charles is a Hanson Robotics creation," David Hanson, founder and CTO of the company, tells us.
Hanson says they built the robot more than a year ago and he was pleased to see that the Cambridge researchers have put it to work. "I think they’re up to some good stuff," he says.
Above is an image of Charles at the Hanson robot factory.
Hanson also updated us on his company's latest developments -- they've been busy working on some new robots and updating old ones. These creations are incredible, and I can't decide where I'd put them in the uncanny valley chart.
First, there's Zeno. No, not the little Zeno. This is a big Zeno, modeled after Zeno of Elea, the mathematical philosopher who, as Hanson puts it, "introduced riddles of recursion that vexed the Greeks so terribly, and inspired [Douglas] Hofstadter so much that he included Zeno as a character in 'Gödel, Escher, Bach.' "
Here's a video, and there's a photo of it below as well:
Hanson has been putting a lot of effort into software, and the latest version has "features enabling common sense reasoning and learning," he says. "This is a collaboration of numerous groups through the Apollo Mind Initiative"—a nonprofit he helped found—"dedicated to helping institutions collaborate on realizing greater-than-human genius in machine intelligence."
The company has also just rebuilt their famed Philip K. Dick robot. The upgrade, commissioned by a Dutch public TV station working on a documentary about the author, is "more expressive and intelligent," Hanson says:
And I know you're wondering: What about the little Zeno? Hanson pointed us to this video from last year, and shared this bit of news: He expects the robot to be ready for a release to researchers in 2011, and consumers in 2012.
The pi4 Workerbot is a new industrial robot capable of using its two arms to perform a variety of handling, assembly, and inspection tasks. It's designed to work alongside human workers -- and the robot's LCD face even displays a broad smile when things are running smoothly.
One of the innovative things about the robot is its control system. The Workerbot, which made its debut at the Automatica show last June, relies on a method known as impedance control, which allows the robot's arms to cooperate as they handle objects, keeping forces at desired levels and adjusting to disturbances -- a crucial capability when it comes to bimanual manipulations.
With its human-inspired size and looks [see images above], the Workerbot is a far cry from traditional factory bots, especially those used by the auto industry.
That's not to say that the automotive industry hasn't been good to robotics. Quite the opposite. Thanks to car manufacturers, industrial robots evolved into fast, reliable, powerful, and precise machines. But there's a flip side to the story.
Traditional industrial robots are rather complex to integrate into existing manufacturing processes; deploying them at a factory is an arduous, costly, and time-consuming task. The robots are also difficult to reprogram when changes become necessary, and they can't safely share spaces with human workers.
This barrier to entry has kept small and medium companies in industrialized countries "robot-less" -- at a time when robots, more than ever, could boost productivity and ameliorate labor shortages. To automate their production lines, which often include many different items manufactured in low volumes, these companies need robots that are inexpensive and intuitive, but still reliable and precise.
This is a promising, and potentially hugely lucrative, market that pi4_robotics and other companies -- including, it appears, Rodney Brooks' secretive start-up, Heartland Robotics -- want to explore.
The Workerbot's arms have seven degrees of freedom each (like human arms), with grippers equipped with force sensors that can adjust the pressure that they apply. The head has two inspection cameras on the sides, a 3-D camera on the forehead, and a display screen that provides feedback to operators (a smile means all is okay; a frown indicates that something is wrong, or that the robot could work faster). The Workerbot is not a mobile robot, though human workers can use its wheeled base to move it manually.
Watch the robot in action:
According to Fraunhofer engineer Dragoljub Surdilovic, their approach to compliant control is what makes the Workerbot different from similar two-armed bots, like the Motoman SDA10D and the DLR/KUKA Justin humanoid.
"We created a new dual-arm programming language and environment that incorporate impedance control and make it easier to plan, program, and realize bimanual contact tasks," Surdilovic says.
Most industrial robots don't use impedance control, but rather they implement position control. In this approach, the robot tries to make its arms follow as closely as possible a series of positions in space. If the arms go off their trajectory, the motors try to bring them back on track.
The problem is, if you have two robots, or one robot with two arms, that need to collaborate and they are position controlled, coordinating their movements can be difficult. Imagine that the two arms are manipulating the same object. If at any point one of the arms becomes off-trajectory and starts pushing to get back on track, the other arm might go off-trajectory and start exerting forces as well.
Using impedance control, bimanual manipulation becomes much easier. The way this scheme works is the robot simulates a dynamic behavior for its arms that is different from the arms' intrinsic mechanical dynamics (which depend on their linkages, motors, and joints). The idea is to actuate the motors by simulating a mass-damper system. Imagine moving an object through a viscous liquid. The control system can adjust its parameters so you feel that you are moving a greater mass, for example, or add more damping so you don't overshoot when trying to bring the arm to a given position.
The upshot is that impedance control makes the arms capable of adjusting to errors and disturbances while at the same time keeping applied forces within desired limits.
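To make the mass-damper idea concrete, here is a minimal one-degree-of-freedom simulation of an impedance-controlled joint. The function name and parameter values are illustrative, not taken from the Workerbot's actual controller: the joint is made to behave like a virtual mass-spring-damper around a target position, so a steady external push produces a compliant deflection of f/k instead of a rigid fight to hold position.

```python
# Minimal 1-DOF impedance-control sketch (illustrative parameters, not the
# Workerbot's). The controller commands accelerations as if the joint were
# a virtual mass m_virtual with stiffness k and damping b around x_target.

def simulate_impedance(x_target, f_ext, m_virtual=2.0, b=8.0, k=50.0,
                       dt=0.001, steps=5000):
    """Integrate x_ddot = (f_ext + k*(x_target - x) - b*x_dot) / m_virtual
    with semi-implicit Euler and return the settled position."""
    x, x_dot = 0.0, 0.0
    for _ in range(steps):
        x_ddot = (f_ext + k * (x_target - x) - b * x_dot) / m_virtual
        x_dot += x_ddot * dt
        x += x_dot * dt
    return x


# With no external force, the joint settles at the target position.
print(simulate_impedance(x_target=0.1, f_ext=0.0))   # ~0.1

# A steady 5 N push deflects it by f/k = 5/50 = 0.1 m beyond the target,
# rather than fighting the disturbance with ever-larger motor torques.
print(simulate_impedance(x_target=0.1, f_ext=5.0))   # ~0.2
```

Tuning k and b trades off how stiffly the arm tracks its target against how gently it yields to contact, which is exactly the knob that makes bimanual manipulation (and safe contact with people) workable.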
This approach is also key to improving safety, because the robot won't push back if a person accidentally comes into contact with it. Indeed, the Workerbot meets the ISO 10218 standard for inherently safe design of industrial robots. Another important benefit is that human operators can manually guide the robot's arms to teach it an assortment of tasks, simplifying the programming process.
It will be interesting to compare the Workerbot to the Heartland Robotics system. Both companies seem to target assembly, handling, and inspection tasks. Whereas pi4_robotics claims that its bot will "help keep European production competitive," Heartland wants to "reinvigorate American manufacturing." The German firm plans to lease its robot for about 4,800 euros per month, and recent reports indicate that Heartland might sell its robot for US $5,000, although details are still murky.
One thing is certain: This is going to be an exciting chapter in robotics, and I'm looking forward to seeing how things will unfold -- and most important, whether these robots will help the many manufacturers that have long waited for them.