Video Friday: Robots at Sea, Humanoids at RoboCup, and D-RHex on Sand
“Hey, don't push me, bro.”
Image: University of Bonn/Autonomous Intelligent Systems

Next week is Thanksgiving—a U.S. holiday—which means that Video Friday may take a little bit of a break. And taking a break means that the next Video Friday will inevitably be twice as large, so it’s not like you’ll be missing anything. But in the meantime, let’s give thanks for (among many other things) the fact that robots exist, and that they're awesome, and that nearly another year has gone by without them somehow managing to destroy us all.

Yet.

Royal Caribbean's newest cruise ship is called Quantum of the Seas, and you know it’s got to be all kinds of cool because they blew a ludicrous amount of money adding a sextet of screen-wielding ABB robot arms to the available entertainment:

I guess the addition of some robots would make being stranded on an oversized boat with a horde of people who also have nothing better to do slightly more bearable? Maybe?

[ Quantum of the Seas ]

About 0:50 into this video, iCub gets jiggy with something. I don’t know what, but something.

This video shows some of the latest results achieved in the whole-body control of iCub, the humanoid robot of the Italian Institute of Technology. In particular, it shows the improvements of the balancing controller, which now optimizes the internal torques subject to bounds on the external wrenches (i.e., foot forces and torques). These bounds ensure that, for instance, the robot's feet do not slip even when performing highly dynamic tasks.
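To give a feel for what "bounds on the external wrenches" means in practice, here's a minimal sketch (not the CoDyCo implementation; all function names, thresholds, and foot dimensions are illustrative assumptions) of the two classic contact conditions: the tangential ground-reaction force must stay inside the Coulomb friction cone, and the center of pressure must stay inside the foot's support rectangle.

```python
import math

def wrench_within_bounds(fx, fy, fz, tau_x, tau_y,
                         mu=0.5, foot_half_len=0.1, foot_half_wid=0.05):
    """Check one foot's contact wrench against friction-cone and CoP bounds.

    Illustrative only: mu and the foot dimensions are assumed values,
    not iCub's actual parameters.
    """
    if fz <= 0.0:
        return False  # the foot must be pressing down on the ground
    # Coulomb friction cone: |f_tangential| <= mu * f_normal, or the foot slips
    if math.hypot(fx, fy) > mu * fz:
        return False
    # Center of pressure from the tangential torques; it must stay inside
    # the support rectangle, or the foot starts to tip
    cop_x = -tau_y / fz
    cop_y = tau_x / fz
    return abs(cop_x) <= foot_half_len and abs(cop_y) <= foot_half_wid

# A quiet stance load is fine; a strong sideways shove violates the cone.
print(wrench_within_bounds(5.0, 0.0, 300.0, 0.0, 10.0))   # True
print(wrench_within_bounds(200.0, 0.0, 300.0, 0.0, 0.0))  # False
```

A whole-body controller like the one in the video treats inequalities of this shape as constraints while optimizing the joint torques, so the solution it picks can never command a wrench that would make a foot slip or tip.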

[ CoDyCo ]

We saw a video of this crazy transforming humanoid car robot thing last week, but this video has more detail. The thing to pay special attention to is at about 0:30, since we apparently have a 5-meter-tall version of this robot to look forward to in 2020, and the CAD on the website makes it look sorta like you'll be able to sit in it and drive.

[ J-deite ]

Here’s a montage of everything useful that Baxter has been up to lately:

[ Rethink Robotics ]

In August, UCSF’s Center for Systems & Synthetic Biology, along with the Harvard Self-Organizing Systems Research group, held a workshop to see how swarms of Kilobots could be used to model complex biological systems:

[ UCSF ]

If you know a kid (or a bunch of kids) interested in robotics but with no idea where to start, LocoRobo is a project on Indiegogo that wants to make robotics hardware and software easy, seamless, and fun:

A LocoBasiX kit will cost you US $250, and if you want to upgrade to a robot with motor encoders, an accelerometer, and a gyro, you're looking at $300. Each robot comes with a one-year subscription to the cloud service, but we're not sure how much that'll cost in year two.

LocoRobo looks like a reasonably solid platform, and we wish them luck hitting their funding goal and breaking into the (increasingly crowded) educational robotics market.

[ LocoRobo ]

Twenty-three incarnations later, Hinamitetu's robot is now a gymnastics master:

[ YouTube ]

From Vijay Kumar's YouTube channel:

Work by Jon Fink, Nathan Michael and Ani Hsieh showing how ant-like simple strategies can enable cooperative manipulation and transport.

[ Vijay Kumar ]

Humanoids 2014 just ended in Madrid, and videos are starting to pop up, mostly about (you guessed it) humanoids. This first one features one of the most good-naturedly resilient robots I've seen:

Paper is here.

Unlike the regular RoboCup competition, RoboCup@Home doesn’t involve soccer. As the name suggests, it involves robots doing tasks in a simulated home, and it’s great to see robots like the Cosero humanoid getting better year after year.

This video contains footage of demonstrations of tool-use approaches with an anthropomorphic service robot. It shows tool-use skill transfer through deformable shape matching, tool-tip perception for opening a bottle, and the perception of objects for whole-body and tool alignment in a barbecue scenario. The video material has been recorded during public demonstrations at RoboCup@Home competitions.

Paper is here.

Nothing too long and involved to close with this week, so let’s go out with something particularly cool instead. UPenn took a pair of their D-RHex robots out to the desert, gave them beefy sensor payloads, and watched them try to deal with sandy slopes:

Not flawless, but still much more versatile than wheels. Eventually, UPenn wants to send the robots to China to study desertification.

[ Kod*lab ]

How the U.S. Army Is Turning Robots Into Team Players

Engineers battle the limits of deep learning for battlefield bots


RoMan, the Army Research Laboratory's robotic manipulator, considers the best way to grasp and move a tree branch at the Adelphi Laboratory Center, in Maryland.

Evan Ackerman

“I should probably not be standing this close," I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me. It's not the size of the branch that makes me nervous—it's that the robot is operating autonomously, and that while I know what it's supposed to do, I'm not entirely sure what it will do. If everything works the way the roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Md., expect, the robot will identify the branch, grasp it, and drag it out of the way. These folks know what they're doing, but I've spent enough time around robots that I take a small step backwards anyway.

This article is part of our special report on AI, “The Great AI Reckoning.”

The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that were harvested from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for a DARPA robotics competition. RoMan's job today is roadway clearing, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operators tell RoMan to "go clear a path." It's then up to the robot to make all the decisions necessary to achieve that objective.
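The jump from "grasp this object in this way" to "go clear a path" is the interesting part: the robot has to sequence its own subtasks and recover when one fails. Here's a purely illustrative toy sketch of that kind of decomposition; the function names, the retry policy, and the structure are my assumptions, not ARL's actual software.

```python
def clear_path(detect_obstacle, grasp, drag_aside, max_attempts=3):
    """Sequence perceive -> grasp -> drag until no obstacle remains.

    Toy decomposition of a high-level objective into subtasks, with a
    simple retry loop when a grasp fails. Illustrative only.
    """
    log = []
    while True:
        obstacle = detect_obstacle()
        if obstacle is None:
            log.append("path clear")
            return log
        for attempt in range(1, max_attempts + 1):
            log.append(f"grasp {obstacle} (attempt {attempt})")
            if grasp(obstacle):
                drag_aside(obstacle)
                log.append(f"dragged {obstacle} aside")
                break
        else:
            log.append(f"giving up on {obstacle}")
            return log

# Stub world: one branch that takes two grasp attempts to secure.
obstacles = ["branch"]
attempts = {"branch": 0}

def detect():
    return obstacles[0] if obstacles else None

def grasp(obstacle):
    attempts[obstacle] += 1
    return attempts[obstacle] >= 2  # first grasp slips, second holds

def drag(obstacle):
    obstacles.remove(obstacle)

print(clear_path(detect, grasp, drag))
```

The point of the sketch is only the shape of the problem: once the operator's command is an objective rather than a motion, every branch in this loop is a decision the robot must make for itself.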
