Video Friday: TALOS Humanoid Robot, and More

Your weekly selection of awesome robot videos

6 min read

Erico Guizzo is IEEE Spectrum's Digital Innovation Director.

PAL Robotics' TALOS humanoid
Image: PAL Robotics

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

The Promise and the Peril of Artificial Intelligence and Robotics – October 23, 2018 – Corvallis, Oregon, USA
Collaborative Robots, Advanced Vision & AI Conference – October 24-25, 2018 – Santa Clara, Calif., USA
ICSR 2018 – November 28-30, 2018 – Qingdao, China
RoboDEX – January 16-18, 2019 – Tokyo, Japan

Let us know if you have suggestions for next week, and enjoy today’s videos.

PAL Robotics had a busy IROS:

And here’s a better look at their torque-controlled humanoid TALOS:

[ PAL Robotics ]

Thanks Judith!

With all the hype about SpotMini recently, it’s a good time to take a look back at another quadruped that Boston Dynamics helped develop. No, not BigDog; we’re talking about its baby brother, LittleDog:

[ Paper ] via [ Pieter Abbeel ]

Preferred Networks, a Japanese company specializing in AI, taught some Toyota HSRs to autonomously tidy up a room and put together this demo for CEATEC:

This system is the first of its kind that can automatically keep a cluttered room neat and tidy at a practical level, something that has been difficult to achieve with conventional robot systems. Thanks to the rapid advancement of deep learning in recent years, PFN has applied cutting-edge deep learning techniques to object recognition, spoken-language understanding, and robot control. As a result, the robot can quickly and accurately grasp and place objects, plan its movements, and follow human instructions, all of which are essential for a robot working in human living spaces.

It’s very satisfying to watch at 20x speed:

[ Preferred Networks ]

Thanks Eric!

Here’s some sample data from a Quanergy M8 lidar (followed by data from a pair of them), recorded while driving around the San Francisco Bay Area. Particularly impressive, I think, is its ability to detect lane markings.

Quanergy provided us with a few specs, too:

  • The M8 sensors have a range accuracy of 3 cm at 100 m (and 3 mm at 10 m), resulting in accurate, high-fidelity mapping where a flat surface looks flat and small features are a correct representation of reality (common lidars have a range accuracy of 50 cm at 100 m).
  • The M8 has an angular resolution of 0.03 degrees (whereas common lidars typically have an angular resolution of 0.2 degrees), resulting in more complete coverage of sensed objects and the ability to resolve small features (e.g., the hand of a police officer directing traffic). A quick back-of-the-envelope check of what that resolution buys you is sketched below.
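To put those angular-resolution figures in perspective, here’s a minimal back-of-the-envelope sketch (ours, not Quanergy’s) that converts angular resolution into the spacing between neighboring returns on a distant target:

```python
import math

def point_spacing(range_m: float, angular_res_deg: float) -> float:
    """Lateral gap between adjacent lidar returns at a given range
    (small-angle approximation: arc length = range * angle)."""
    return range_m * math.radians(angular_res_deg)

# Gap between neighboring points on a target 100 m away:
for name, res_deg in [("M8, 0.03 deg", 0.03), ("typical lidar, 0.2 deg", 0.2)]:
    gap_cm = point_spacing(100.0, res_deg) * 100
    print(f"{name}: {gap_cm:.1f} cm between points at 100 m")

# M8: ~5.2 cm vs. typical: ~34.9 cm, which is why small features
# (like that police officer's hand) survive in the M8 point cloud.
```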

[ Quanergy ]

Thanks Emily!

Cubelets, the absolute easiest robot-building kit ever, has released a new "Discovery Set":

Five blocks that you can put together in all kinds of different ways, plus a Bluetooth "hat," for $140.

[ Modular Robotics ]

This video is titled "The Joy of Living." But I’m honestly not sure if Ibuki qualifies...

[ JST ]

University of Michigan Engineering knows how to sell itself: with Cassie.

Pretty sure this means they’ve committed to getting Cassie to play DDR.

[ University of Michigan ]

An ice rink aboard Royal Caribbean’s Symphony of the Seas hosts 1977, an ice-skating spectacular. Verity Studios’ Lucie® micro drones open the show with an aerial ballet. This is the world’s first autonomous indoor drone-swarm performance on an ice rink.

Er, that’s a very specific record, but congrats?

[ Verity ]

We present the design and development of the fluid-driven, wearable Soft Poly-Limb (SPL), from the Greek word polys, meaning many. The SPL uses the numerous traits of soft robotics to enable a novel approach to providing safe and compliant mobile manipulation assistance to healthy and impaired users. This wearable system equips the user with a controllable additional limb capable of complex three-dimensional motion in space. Similar to an elephant trunk, the SPL can manipulate objects using a variety of end-effectors, such as suction adhesion or a soft grasper, as well as its entire soft body, which can conform around an object and lift 2.35 times the limb’s own weight.

Of course, what you really want is for this thing to have a mind of its own. Or do you really NOT want that...?

[ Paper ] via [ ASU ]

Thanks Pham!

The 3rd F1/10 International Autonomous Racing Competition was held on October 1st, 2018, in Torino, Italy. Nine teams competed to set the fastest time around a portable track set up at the Torino Incontra.

[ F1/10 ]

Thanks Madhur!

Impressive high-precision assembly from Robotic Materials using a UR arm:

[ Robotic Materials ]

From its worldwide debut at CES 2018 until today, Walker, our bipedal humanoid robot, has been helping UBTECH Robotics realize the dream of bringing a robot into every home and integrating intelligent robots into daily human life.

[ UBTECH ]

Autonomous micro aerial vehicles still struggle with fast and agile maneuvers, dynamic environments, imperfect sensing, and state estimation drift. Autonomous drone racing brings these challenges to the fore. Human pilots can fly a previously unseen track after a handful of practice runs. In contrast, state-of-the-art autonomous navigation algorithms require either a precise metric map of the environment or a large amount of training data collected in the track of interest. To bridge this gap, we propose an approach that can fly a new track in a previously unseen environment without a precise map or expensive data collection.

Our approach represents the global track layout with coarse gate locations, which can be easily estimated from a single demonstration flight. At test time, a convolutional network predicts the poses of the closest gates along with their uncertainty. These predictions are incorporated by an extended Kalman filter to maintain optimal maximum-a-posteriori estimates of gate locations. This allows the framework to cope with misleading high-variance estimates that could stem from poor observability or a lack of visible gates. Given the estimated gate poses, we use model predictive control to quickly and accurately navigate through the track. We conduct extensive experiments in the physical world, demonstrating agile and robust flight through complex and diverse previously unseen race tracks. The presented approach was used to win the IROS 2018 Autonomous Drone Race Competition, outracing the second-place team by a factor of two.
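The fusion step at the heart of this is easy to sketch: treat each gate as a static landmark and fold in every network prediction weighted by its reported uncertainty, so confident predictions pull the estimate strongly while high-variance ones are nearly ignored. Here’s a minimal Kalman-style illustration (a sketch with hypothetical 2D positions and numbers, not the authors’ implementation):

```python
import numpy as np

def fuse_gate_estimate(mean, cov, z, R):
    """One Kalman measurement update for a static gate position.

    mean, cov: current estimate of the gate position (2-vector, 2x2 covariance)
    z, R:      network-predicted position and its predicted covariance
    """
    S = cov + R                   # innovation covariance (direct position measurement)
    K = cov @ np.linalg.inv(S)    # Kalman gain: large when the prediction is trusted
    mean = mean + K @ (z - mean)  # corrected gate position
    cov = (np.eye(2) - K) @ cov   # reduced uncertainty
    return mean, cov

# Prior from the coarse map built during the single demonstration flight
# (hypothetical numbers):
mean, cov = np.array([10.0, 2.0]), np.eye(2)

# A confident prediction moves the estimate; a high-variance outlier barely does.
mean, cov = fuse_gate_estimate(mean, cov, np.array([10.4, 2.2]), np.eye(2) * 0.05)
mean, cov = fuse_gate_estimate(mean, cov, np.array([14.0, 5.0]), np.eye(2) * 50.0)
print(mean)  # stays near the confident measurement, ~[10.4, 2.2]
```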

[ Paper ] via [ UZH ]

When NASA’s InSight lands on Mars, it will be the first mission on another planet to use a robotic arm to grasp instruments and place them on the surface. While it may look like an arcade machine, this space claw is designed to come away with a prize every time.

[ JPL ]

This video shows a bunch of Japanese disaster robots that are part of the ImPACT Tough Robotics Challenge. It’s in Japanese, but the YouTube auto-generated subtitles are pretty good.

Japan is one of the most disaster-prone countries in the world. A large-scale earthquake is predicted to occur directly beneath the Tokyo metropolitan area in the near future, and there is an urgent need for measures to reduce the risk. Although the usefulness of robots in a disaster was recognized during the Great East Japan Earthquake, robots that can actually be used in unknown, ever-changing extreme environments are still a work in progress. The goal of this program is to develop essential technologies for remote autonomous robots that are tough and can function without faltering even in extreme disaster conditions. At the same time, this research provides key fundamental technologies for outdoor service robots, establishing a foundation for future advanced outdoor robot services.

[ ImPACT TRC ]

We’re far from developing robots that feel emotions, but we already have feelings towards them, says robot ethicist Kate Darling, and an instinct like that can have consequences. Learn more about how we’re biologically hardwired to project intent and life onto machines, and how it might help us better understand ourselves.

[ TED ]

This week’s CMU RI seminar comes from Bertram F. Malle at Brown University on “What People See in a Robot: A New Look at Human-Like Appearance.”

A long-standing question in HRI is what effects a robot’s human-like appearance has on various psychological responses. A substantial literature has demonstrated such effects on liking, trust, ascribed intelligence, and so on. Much of this work has relied on a unidimensional construct of low-to-high human-likeness. I introduce evidence for an alternative view according to which robot appearance must be described in a three-dimensional space, encompassing Body/Manipulators (e.g., torso, arms, legs), Facial Features (e.g., head, eyes), and Surface Look (e.g., eyelashes, skin, genderedness). The broad human-likeness concept can thus be decomposed into more concrete appearance dimensions, and robots’ degrees of human-likeness are constituted by different combinations of these dimensions. In a study using 24 robots selected from this three-dimensional appearance space, I then show that the different dimensions separately predict the inferences people make about a robot’s affective, social-moral, and physical capacities.

[ CMU RI ]
