Video Friday: Robot Phone, Cardboard Karts, and The Long Road to Everywhere

This week's best robot videos are here!

Image: Sharp

We’ve made it back from IROS 2015 in Germany mostly in one piece, and we still have all kinds of other stuff to bring you from the conference as soon as we recover from a potentially lethal combination of jet lag, curry sausages, weird cheese, and pretzels that are full of butter for no reason. So now that IROS is over, you know what that means: time to get ready for ICRA 2016 in Stockholm!

But we’re not there yet, and we have some catching up to do on robot videos, so here’s a couple dozen from the last two weeks to help you make it to the end of your Friday.

I was totally ready to make an enormous amount of fun of this cell phone robot from Sharp, but after watching the video, I’m suddenly not so sure that it’s the worst idea ever:

What I am sure of is that the robot in reality is not going to live up to the fantasy of the video, which is a problem that nearly all socially interactive robots have. So I guess it’s more like, I believe in the value of the fantasy, but I’m skeptical of the reality of the product.

[ RoBoHoN ] via [ Engadget ]

Flying and walking robots can use their complementary features in terms of viewpoint and payload capability to best advantage in a heterogeneous team. To this end, we present our online collaborative navigation framework for unknown and challenging terrain. The method leverages the flying robot’s onboard monocular camera to create both a map of visual features for simultaneous localization and mapping and a dense representation of the environment as an elevation map. This prior knowledge from the initial exploration enables the walking robot to localize itself against the global map, and plan a global path to the goal by interpreting the elevation map in terms of traversability. While following the planned path, the absolute pose corrections are fused with the legged state estimation and the elevation map is continuously updated with distance measurements from an onboard laser range sensor. This allows the legged robot to safely navigate towards the goal while taking into account any changes in the environment.
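To get a rough feel for the "traversability" step in that abstract, here's a minimal, hypothetical Python sketch (not the ETH Zurich code, and all thresholds are made-up assumptions): it rates each cell of a small elevation map as walkable or not based on local slope and step height, which is the kind of per-cell check a legged robot's planner might run on the map the drone built.

```python
# Hypothetical sketch (not the ETH Zurich implementation): rate each cell of a
# small elevation map as traversable or not from local slope and step height.
import numpy as np

def traversability(elevation, cell_size=0.05, max_slope_deg=20.0, max_step=0.10):
    """Return a boolean grid: True where a legged robot could plausibly step.

    elevation: 2D array of heights in meters, one value per grid cell.
    cell_size: grid resolution in meters (assumed value).
    """
    # Local slope from finite differences of the height field.
    dz_dy, dz_dx = np.gradient(elevation, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    # Local step height: height spread inside a 3x3 neighborhood of each cell.
    padded = np.pad(elevation, 1, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    step = windows.max(axis=(-1, -2)) - windows.min(axis=(-1, -2))

    return (slope_deg < max_slope_deg) & (step < max_step)

# Toy example: a flat plane with a single tall block in the middle.
grid = np.zeros((20, 20))
grid[8:12, 8:12] = 0.5          # 50 cm obstacle
ok = traversability(grid)
print(ok.sum(), "of", ok.size, "cells rated traversable")
```

A real system would also fuse uncertainty from the laser range sensor into each cell, but the thresholding idea is the same.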

This research will be presented at ICRA 2016.

[ ETH Zurich ]

Japan now has a laundry machine that folds clothes, although nobody seems to know exactly how it works:

The entire (translated) description of the technology consists of: “What we in resulted in a breakthrough, the two elemental technologies that we have cultivated in the automation of the plant. Flexible object and recognizes whether or not the (clothing) is what ‘image analysis technology’, was the collapse was recognized object mechanism ‘robotics technology’.”

Right then.

[ Laundroid ] via [ Panasonic ]

The video shows the first results of the project BesMan using the DFKI’s robot AILA. The robot autonomously executes a given high-level action plan in order to perform some operations on a mockup of the International Space Station (ISS). On the one hand, a software framework and an accompanying embedded domain specific language (eDSL) have been developed to describe and control robot manipulation behaviors and keep their descriptions (and the descriptions of the tasks) independent of a particular robot. Thus, the same robot high-level behaviour can be re-used on robots of different morphology and/or hardware. On the other hand, a whole-body reactive control approach is used in order to automatically find an optimal usage of all the available degrees of freedom at runtime.
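The robot-independent behavior idea is, roughly, that the task is written in abstract terms and only bound to a specific robot's hardware at run time. Here's a minimal, hypothetical Python sketch of that separation (not DFKI's actual eDSL; the plan steps, frame names, and adapter class are all invented for illustration):

```python
# Hypothetical sketch of the "robot-independent behavior" idea (not DFKI's eDSL):
# the task plan names abstract actions and frames, and a per-robot adapter
# resolves them into robot-specific commands at run time.

# Abstract plan: what to do, with no reference to a particular arm or joint.
PLAN = [
    ("move_to", "iss_panel_handle"),
    ("grasp",   "iss_panel_handle"),
    ("move_to", "stowage_rack"),
    ("release", None),
]

class RobotAdapter:
    """Binds abstract frame names and actions to one robot's hardware."""

    def __init__(self, name, frames):
        self.name = name
        self.frames = frames  # abstract frame name -> pose in this robot's base frame

    def execute(self, action, target):
        pose = self.frames.get(target)
        if action == "move_to":
            print(f"[{self.name}] whole-body motion to {target} at {pose}")
        elif action == "grasp":
            print(f"[{self.name}] closing gripper on {target}")
        elif action == "release":
            print(f"[{self.name}] opening gripper")

# The same abstract plan can run on two differently shaped or calibrated robots.
robot_a = RobotAdapter("robot_A", {"iss_panel_handle": (0.6, 0.2, 1.1),
                                   "stowage_rack": (0.3, -0.4, 0.9)})
for action, target in PLAN:
    robot_a.execute(action, target)
```

Swapping in a second adapter with different frame poses (or a different kinematic backend behind `execute`) would reuse the plan unchanged, which is the portability the abstract is describing.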

[ DFKI ]

While we’re generally skeptical about drone delivery, taking mail out to an island is one of the uses that actually seems promising.

I still think the “dense urban landscapes” use case is mostly bunk, though.

[ Channel News Asia ] via [ Engadget ]

The “Rocky” Rough Terrain Mobile Platform is being funded by NASA JSC for extremely important research like this:

[ UT Austin ]

The 27th ROBO-ONE competition happened last week; here are some highlights:

4.5-meter footrace, 1st place:

Under-3-kg class, 1st place fight:

Boxing exhibition:

[ ROBO-ONE ] via [ Biped Robot News ]

Best Buy has replaced at least a few of its employees with this robot:

The idea of a robotic system like this being used to pick CDs of all things seems utterly bizarre. But hey, Best Buy.

[ Fox News ] via [ Consumerist ]

PAL Robotics had some fun at IROS:

We had even MORE fun, and video proof is coming, I promise.

[ PAL Robotics ]

Murata’s little cheerleader robots were on the stage at CEATEC:

If you made it through all 7 minutes of that, you like robots more than I do.

[ Murata ] via [ YouTube ]

I’m not exactly sure what “First Ever ‘Classroom-Ready’ Robotics Training Module” means, but FANUC has it:

[ FANUC ]

How do you make compliant actuators for cheap? Try rubber bands:

[ DeCoRo Project ] via [ Trossen Robotics ]

Planting one billion trees per year with a drone? Sure, why not:

As much as I like the idea, as with most drone delivery schemes, I’m questioning whether the amount of money it’s going to take to get this to work might be better spent on having people go out and plant two billion trees the old-fashioned way.

[ BioCarbon Engineering ] via [ Gizmodo ]

Here’s a very brief look at how NASA is thinking about its Mars 2020 rover:

[ NASA ]

This is what KUKA was showing at IROS last week:

[ KUKA ]

Gushing waterfalls, a burning fire pit, a gale-force storm, and a deadly swinging pendulum. With just cardboard, a motor, and a battery, 14 teams were challenged to create remote-control karts that needed to be strong enough to tackle the treacherous race course.

Love the idea, hate the fact that the video is only a minute long.

[ Dyson ]

This is how many really expensive robots it takes to not make you a steak:

No idea what happens next.

[ RoboHow ]

In addition to being a contributor to this blog, Dr. Angelica Lim also works at Aldebaran Robotics, teaching Pepper to understand humans. This is a talk that she gave at Aldebaran about emotional machine learning, which probably isn’t about how machines learn when they get emotional.

[ Aldebaran ]

Robots in Depth interviews Melonee Wise, CEO of some company that I can’t remember the name of:

[ Robots In Depth ]

The latest CMU RI Seminar comes from Matei Ciocarlie, on “Versatility in Robotic Manipulation: the Long Road to Everywhere.” Here’s the abstract to help you decide whether you want to commit to it.

In this talk, I will present a number of methods for increasing the versatility of mobile manipulators, from novel hand designs and grasp planning algorithms to Human-in-the-Loop manipulation and its applications in assistive robotics. I will introduce the concept of eigengrasps as the bases of a linear hand posture subspace, and use it to show that, from a grasp planning perspective, a hand does not have to be complex in order to be versatile. I will then present the Velo Gripper, a novel underactuated design that uses passive adaptation to the object and the environment to complement the traditional sense-plan-and-act loop. Beyond the end-effector, I will present sensing and planning methods for manipulation under uncertainty and through clutter. Finally, I will present our results in Human-in-the-Loop manipulation, enabling non-experts to operate robots performing complex tasks in difficult environments. Among other applications, this approach can enable mobile robots to assist people with motor impairments. I will present results where our robot, operated by a motor impaired person through a head-tracker single-button mouse, performed varied and unscripted manipulation tasks in a real home, and acted as a medium for social interaction.
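The eigengrasp idea in that abstract is, roughly, that most useful hand postures live in a low-dimensional linear subspace of the hand's full joint space. Here's a hedged Python illustration of that reduction using PCA on synthetic joint-angle data (not Ciocarlie's code; the joint counts and data are invented):

```python
# Hypothetical illustration of the eigengrasp idea (not the speaker's code):
# find a low-dimensional linear basis for hand postures and reconstruct
# full joint configurations from just a few coefficients.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "grasp" data: 200 postures of a 20-joint hand that actually vary
# along only 2 hidden directions plus noise (a stand-in for recorded grasps).
n_joints, n_samples, n_latent = 20, 200, 2
basis_true = rng.normal(size=(n_latent, n_joints))
coeffs = rng.normal(size=(n_samples, n_latent))
postures = coeffs @ basis_true + 0.01 * rng.normal(size=(n_samples, n_joints))

# PCA by SVD: the top right-singular vectors play the role of "eigengrasps".
mean = postures.mean(axis=0)
_, s, vt = np.linalg.svd(postures - mean, full_matrices=False)
eigengrasps = vt[:n_latent]                      # (2, 20) basis of the subspace

# Any posture can now be described by 2 numbers instead of 20 joint angles.
amplitudes = (postures - mean) @ eigengrasps.T   # project into the subspace
reconstructed = amplitudes @ eigengrasps + mean
print("mean reconstruction error (rad):",
      np.abs(reconstructed - postures).mean())
```

The grasp-planning payoff is that a planner can search over two amplitude values rather than twenty joint angles, which is why a simple hand can still be versatile.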

[ CMU RI Seminar ]
