Video Friday: Aibo Reborn, Robot Plus HoloLens, and NREC's Formula

Your weekly selection of awesome robot videos


Erico Guizzo is IEEE Spectrum's Digital Innovation Director.

NREC's Chimp Robot
Image: NREC via YouTube

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next two months; here’s what we have so far (send us your events!):

Robótica 2017 – November 7-11, 2017 – Curitiba, Brazil
Humanoids 2017 – November 15-17, 2017 – Birmingham, U.K.
iREX 2017 – November 29-December 2, 2017 – Tokyo, Japan

Let us know if you have suggestions for next week, and enjoy today’s videos.

We already posted about the unveiling of Sony’s new Aibo, but here’s a bit of extra video from the event showing the little robotic dog in live action:

[ Aibo ] via [ Impress ]

In this video we show a compilation of our research from the last four years on autonomous navigation of bipedal robots. It is part of the DFG-funded project "Versatile and Robust Walking in Uneven Terrain" (German Research Foundation) and includes developments in environment perception and modeling, motion planning, and stability control. Additionally, we give a sneak peek of our HoloLens mixed reality app, "HoLola," which will be published soon!

[ TUM ]

Thanks Daniel!

This demonstration video from Near Earth Autonomy shows the kind of capabilities that drones of all kinds are going to need if we expect them to perform unsupervised interaction with the ground.

If you see demos of delivery drones autonomously landing to drop off packages without this kind of tech, be skeptical.

[ Near Earth Autonomy ]

The National Robotics Engineering Center (NREC) develops and matures robotics technologies and solutions from concept to commercialization. Our unique expertise places us at the forefront of unmanned ground vehicle design, autonomy, sensing and perception, machine learning, machine vision, operator assistance, 3D mapping and position estimation.

[ NREC ]

University of Michigan’s Cassie goes for a snarky stroll in the rain:

“Oh yeah, let’s go walk in the water” is something roboticists usually don’t say.

This second video is even better, because it’s two and a half minutes long, of which the last minute and a half is bloopers:

Note how Cassie Blue cleverly avoids falling onto the grass and instead aims for the concrete. That’s a robot with confidence!

[ University of Michigan ]

Thriller. By Cozmos.

There are lots of other Cozmo videos on this YouTube channel, including this one that is just bloopers.

[ Life With Cozmo ]

This video shows ongoing research towards using the feet of a legged robot for simple manipulation tasks. In this example, ANYmal makes use of its large range of motion to reach up to press an elevator button. The button is localized with help of a QR tag.

Note that the video may have been sped up slightly, although it would be pretty sweet if ANYmal made that noise anyway.
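
If you’re curious how a fiducial like that can become a pressing target, here’s a minimal, hypothetical sketch using OpenCV’s QR detector. It isn’t ANYmal’s actual pipeline; it just illustrates the “find the tag, aim for its center” idea from the description above.

```python
import cv2

# Hypothetical sketch: locate a QR tag in a camera frame and treat its
# center as the button target. The robot's real perception stack and
# frame transforms are not public; this is only the image-space step.
detector = cv2.QRCodeDetector()

def find_button_center(frame):
    """Return the pixel coordinates of the QR tag center, or None."""
    found, corners = detector.detect(frame)
    if not found or corners is None:
        return None
    pts = corners.reshape(-1, 2)   # the four tag corners as (x, y)
    return pts.mean(axis=0)        # tag center ~ button location

cap = cv2.VideoCapture(0)          # any RGB camera
ok, frame = cap.read()
if ok:
    center = find_button_center(frame)
    if center is not None:
        print("Press target (pixels):", center)
cap.release()
```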

[ ANYmal ]

František Mach, who leads an emerging group of early-stage researchers at the University of West Bohemia in the Czech Republic, wrote in to share some of their recent work on magnetically guided actuation of tiny robots under uncertainty:

Almost all systems for contactless actuation of miniature robots are subject to operational uncertainty caused by internal and external effects. In the case of magnetically guided actuation, an external magnetic field or ambient ferromagnetic bodies around the system can prevent actuation despite a closed control loop. The major goal of this research is to develop a technique for eliminating this kind of uncertainty. The presented technique is based on the superposition of two magnetic fields: an actuation field and a lock-up field. The video shows high-speed camera footage recorded during the experiments and compares positioning accuracy for cyclic operation of the miniature robot with and without elimination of the uncertainty.
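
The superposition idea is easier to picture with a toy example. Here’s a rough, illustrative sketch (the field values and the motion criterion are made up, not taken from the group’s setup) showing the field at the robot as the vector sum of a lock-up field and an actuation field:

```python
import numpy as np

# Illustrative only: the actual coil geometry and control law are not given
# here. The point is just that the robot feels the superposition of a
# holding (lock-up) field and a stepping (actuation) field.
def total_field(b_actuation, b_lockup):
    """Superposed flux density at the robot's position (tesla)."""
    return np.asarray(b_actuation) + np.asarray(b_lockup)

b_lockup = np.array([0.0, 0.0, 0.05])   # holding field (assumed values)
b_act_off = np.array([0.0, 0.0, 0.0])
b_act_on = np.array([0.08, 0.0, 0.0])   # stepping field (assumed values)

for name, b_act in [("hold", b_act_off), ("step", b_act_on)]:
    b = total_field(b_act, b_lockup)
    # Crude, hypothetical criterion: the robot steps only when the lateral
    # component dominates the holding component.
    moves = np.linalg.norm(b[:2]) > abs(b[2])
    print(f"{name}: |B| = {np.linalg.norm(b):.3f} T, robot moves: {moves}")
```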

[ University of West Bohemia ]

Thanks František!

Last Halloween, Sonia Chernova hardly left her doorstep in Decatur, Georgia, handing out candy to a constant stream of kids. This year, she put an autonomous robot on the porch to do it for her. It gave out 1,000 pieces in three hours to hundreds of kids.

“Best Halloween ever,” said Chernova, an assistant professor in the School of Interactive Computing. “Both kids and parents loved getting candy from a real robot, and many people stayed around to watch it work. Now all of my neighbors want one for their house next year!”

[ Georgia Tech ]

I hope they let Sawyer out to play sometimes, because we wouldn’t want it to get... dull...

[ Rethink Robotics ]

Thanks Jeff!

Ford Europe durability engineer Svenja Fröhlich: “From the first moment we get into a car, the seat creates an impression of comfort and quality. Previously, we used pneumatic cylinders that simply moved up and down. With the ‘Robutt’, we are now able to replicate very accurately how people really behave.”

Based on an average-sized large man, the “Robutt” was used to simulate ten years of driving in just three weeks as part of testing for the new Fiesta – sitting down and getting up 25,000 times. The new test is now being rolled out for all our vehicles in Europe.
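
For a sense of scale, here’s the quick arithmetic behind those figures (assuming “three weeks” means 21 days of testing):

```python
# Sanity check of the numbers above: 25,000 sit cycles in roughly three
# weeks of testing, representing ten years of use.
cycles, test_days, years = 25_000, 21, 10
print(f"~{cycles / test_days:.0f} sit cycles per test day")     # ~1190
print(f"~{cycles / (years * 365):.1f} sits per simulated day")  # ~6.8
```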

[ Ford Europe ]

Scientists at the IBM Research – Australia lab have combined the TrueNorth chip with deep-learning algorithms and the NAO robot to classify live images into 10 categories, e.g. horse or car. The TrueNorth chip significantly accelerates the picture processing, after which NAO can communicate the classification result through speech.
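
Here’s a rough, hypothetical sketch of that classify-then-speak pipeline. A tiny untrained CNN stands in for the spiking network running on TrueNorth, and a print statement stands in for NAO’s text-to-speech; the class names are assumed, not from IBM.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed 10-way label set; the real system's categories are not listed here.
CLASSES = ["airplane", "car", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

class TinyClassifier(nn.Module):
    """Placeholder classifier; not the TrueNorth spiking network."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.fc = nn.Linear(16 * 32 * 32, n_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = TinyClassifier().eval()
frame = torch.rand(1, 3, 32, 32)   # stand-in for a live camera image
with torch.no_grad():
    label = CLASSES[model(frame).argmax(1).item()]
print(f"NAO says: I think this is a {label}.")   # speech placeholder
```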

[ IBM TrueNorth ]

It’s been two weeks since the best weekend of the year: SparkFun Autonomous Vehicle Competition 2017. We had a record-breaking 192 bots and vehicles entered to compete. We had three fires in the combat bots arenas (and a couple cranky fire marshals), so overall the event was a huge success.

Now that is a metric for success that I can appreciate.

[ Sparkfun AVC ]

Now that some people are starting to get Jibos, I’m disappointed that we haven’t seen much in the way of stress testing. In the meantime, here’s an ad.

[ Jibo ]

Piaggio Fast Forward has had Gita out in the wild a bit; looks like they may have ditched that SLAM belt thing:

[ PFF ]

Shadow’s Smart Grasper is one of the beefiest grippers I’ve ever seen:

[ Shadow ]

Tesla is using a lot of robots to make its Model 3:

And the plan is to add even more, though production is going slower than expected and pre-order deliveries have been delayed.

[ Tesla ]

This week’s CMU RI Seminar comes from C. Karen Liu, associate professor at the School of Interactive Computing at Georgia Tech, on Modeling Human Movements for Robotics.

Creating realistic virtual humans has traditionally been considered a research problem in Computer Animation primarily for entertainment applications. With the recent breakthrough in collaborative robots and deep reinforcement learning, accurately modeling human movements and behaviors has become a common challenge faced by researchers in robotics, artificial intelligence, as well as Computer Animation. In this talk, I will focus on two different yet highly relevant problems: how to teach robots to move like humans and how to teach robots to interact with humans.

While Computer Animation research has shown that it is possible to teach a virtual human to mimic human athletes’ movements, transferring such complex controllers to robot hardware in the real world is perhaps even more challenging than learning the controllers themselves. In this talk, I will focus on two strategies to transfer highly dynamic skills from character animation to robots: teaching robots basic self-preservation motor skills and developing data-driven algorithms on transfer learning between simulation and the real world.

The second part of the talk will focus on robotic assistance with dressing, one of the activities of daily living (ADLs) most commonly requested by older adults. To safely train a robot to physically interact with humans, one can design a generative model of human motion based on prior knowledge or recorded motion data. Although this approach has been successful in Computer Animation, such as generating locomotion, designing procedures for a loosely defined task such as “being dressed” is likely to be biased toward the specific data or assumptions. I will describe a new approach to modeling human motion without being biased toward specific situations presented in the dataset.

[ CMU RI ]
