Video Friday: Everything That Happened For the Last Two Weeks, and More

We get you all caught up after our IROS trip with this massive Video Friday


We had a fantastic time at IROS; hopefully you kept up with all of the latest research news, and if not, here's a link to everything we've written so far, and there's still more to come. And as for IREX (the International Robot Exhibition, a trade show taking place at the same venue), we've got a video compilation and a gallery of pics coming to you next week.


Being in Japan for two weeks and focusing on just the conference and trade show, it was a teeny tiny bit inevitable that we'd fall just slightly behind on our typical robot video coverage. Juuust slightly. And so, we're going to make up for that today, with this much more massive than normal Video Friday. So hold on to your browsers: this thing has 23 videos.

Let's start with something very cool and very expensive, which would of course be Atlas. This first vid comes from Lockheed Martin, which is working with UPenn and Rensselaer Polytechnic Institute for the DRC:

[ Lockheed Martin ]

This much less expensively produced but arguably way cooler video shows Team IHMC's Atlas walking over random terrain. It's possible that this is the first truly impressive video that we've seen of Atlas doing anything practical:

[ IHMC Robotics ]



Since we were just talking about Lockheed Martin, here's a much more general video from them about the future of robotics and AI, which reveals that they may or may not be working on some sort of space-jellyfish-robot-thing:

[ Lockheed Martin ]



It's been a while since we've had a good micromouse video, so here's a pair of 'em, one at normal speed, and one slowed way down with a camera operating at 240 frames per second:


[ Robolab ]



Team BlackSheep has been up to more of their usual breathtaking shenanigans, including some first-person-view formation flying, and a trip to the Grand Canyon:



[ Team BlackSheep ]



Astrobotic has been testing their planetary rover on a gravity offload rig. Warning: many, many bananas were probably harmed in the making of this video:

[ Astrobotic ]



To help get a little rover to hard-to-reach areas when exploring volcanoes, Tohoku University in Japan is employing a UAV to lower the rover to the ground from VERY VERY HIGH UP:

Observation of an active volcano is very important for estimating its eruptive activity and providing residents with evacuation warnings. In this research, we propose robotic observation of a volcanic area after an eruption using a multirotor UAV (unmanned aerial vehicle) and a small ground robot. This video clip shows our cooperative demonstration between the UAV and a small UGV in September 2013 at Mt. Asama. The UAV used a tethered landing to improve safety and completed automatic delivery of the small ground robot, which was then teleoperated from 3 kilometers away.

[ Tohoku University ]



The Beta Dash prototype is starting to look nice and dashy:

[ Dash Robotics ]



The NSF is sponsoring research at Vanderbilt University, using Aldebaran's NAO to help teach children with autism interaction skills:

[ NSF ]



iCub is also working on interaction, but in this case, it's playing (and, spoiler alert, winning) a game of gestural Simon Says against a human:

"All gestures you can" is a real-time memory game: the goal is to perform the longest sequence of hand gestures that you or your opponent can remember. The game is based on a gesture recognition system that exploits 3D features based on motion and appearance; these features are then enriched with a sparse coding stage and classified by linear SVMs. In the previous version, we used a Kinect to obtain 3D information; here, instead, we rely directly on the iCub's stereo vision. Furthermore, we are now able to train new gestures in real time, via a single demonstration.
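The pipeline the researchers describe (3D motion/appearance features, a sparse coding stage, then linear SVM classification) can be sketched with off-the-shelf tools. The snippet below is a toy stand-in, not IIT's actual code: the random feature vectors, dictionary size, and regularization values are all placeholder assumptions, chosen only to show how the stages connect.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy stand-ins for per-frame 3D motion/appearance descriptors:
# two gesture classes, 64-dimensional features, 100 samples each.
X = np.vstack([rng.normal(0.0, 1.0, (100, 64)),
               rng.normal(0.5, 1.0, (100, 64))])
y = np.array([0] * 100 + [1] * 100)

# Stage 1: learn a sparse dictionary and re-encode each feature
# vector as a sparse combination of dictionary atoms.
dico = MiniBatchDictionaryLearning(n_components=32, alpha=0.5,
                                   random_state=0)
codes = dico.fit_transform(X)

# Stage 2: classify the sparse codes with a linear SVM.
clf = LinearSVC().fit(codes, y)
print("training accuracy:", clf.score(codes, y))
```

The appeal of this split is that the dictionary is learned without labels, so new gestures (like the single-demonstration training the quote mentions) only require refitting the cheap linear classifier, not the feature encoding.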


iCub is also getting this sort of weird head-sock thing, presumably concealing some mechanical bits that allow it to make facial expressions:

[ iCub ]



Sandia National Labs has this concept called Volant, a multi-modal vehicle capable of travelling in the air, in the water, and on land:

[ Sandia ]

So that's a cool concept, but UPenn's SEAL Pack is an assemblage of ckBot modules that can do this for real:

[ ckBot ]



Asimo helped introduce Honda at the 2013 Tokyo Motor Show:




The idea behind this Sociable Trashbox Robot is that you don't have to design a complex robot that can detect and pick up trash, if instead you can just design a robot that will somehow motivate people to put trash into it:

[ ICD Lab ]



He may not look like it, but Roboy is a bit of a wine snob:

Okay, but like, isn't Roboy way underage...?

[ Roboy ]



We know that Curiosity is headed to the top of Mt. Sharp on Mars, but how's she going to get there? John Grotzinger, Curiosity's lead project scientist, takes us through the route:

[ Curiosity ]



Okay, let's wrap this thing up with a trio of TED Talks.

You should be familiar with this first one: it's Henry Evans and Chad Jenkins talking about Robots for Humanity, with Henry telling his story on stage remotely through a Beam and an AR Drone, which is an amazing thing to watch:

[ Robots for Humanity ] via [ TED ]

I'm personally not a big believer in urban drone delivery, but I think that Matternet, which is all about point-to-point transport of things like medicine in Africa, might actually have a chance of being successful and doing some good:

[ TED ]

Drones may also be able to play an important role in animal conservation, by providing low cost, persistent surveillance:

[ TED ]


How the U.S. Army Is Turning Robots Into Team Players

Engineers battle the limits of deep learning for battlefield bots

11 min read

RoMan, the Army Research Laboratory's robotic manipulator, considers the best way to grasp and move a tree branch at the Adelphi Laboratory Center, in Maryland.

Evan Ackerman

“I should probably not be standing this close," I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me. It's not the size of the branch that makes me nervous—it's that the robot is operating autonomously, and that while I know what it's supposed to do, I'm not entirely sure what it will do. If everything works the way the roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Md., expect, the robot will identify the branch, grasp it, and drag it out of the way. These folks know what they're doing, but I've spent enough time around robots that I take a small step backwards anyway.

This article is part of our special report on AI, “The Great AI Reckoning.”

The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that were harvested from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for a DARPA robotics competition. RoMan's job today is roadway clearing, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operators tell RoMan to "go clear a path." It's then up to the robot to make all the decisions necessary to achieve that objective.
