Video Friday: NOVA's Rise of the Robots, Gecko-Toe Grippers, and Why They Automate

Your weekly selection of awesome robot videos

6 min read

Erico Guizzo is IEEE Spectrum's Digital Innovation Director.

Image: NOVA/WGBH via YouTube

Video Friday is your weekly selection of awesome robotics videos, collected by your highly automated* Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

The Future of Rescue Simulation Workshop – February 29-March 4, 2016 – Leiden, Netherlands
ROS-Industrial Consortium Americas – March 3-4, 2016 – San Antonio, Texas
HRI 2016 – March 7-10, 2016 – Christchurch, New Zealand
RobArch 2016 – March 14-19, 2016 – Sydney, Australia
WeRobot 2016 – April 1-2, 2016 – Miami, Fla., USA
National Robotics Week – April 2-10, 2016 – United States
AISB HRI Symposium – April 5-6, 2016 – Sheffield, United Kingdom
Robotics in Education 2016 – April 14-15, 2016 – Vienna, Austria
International Collaborative Robots Workshop – May 3-4, 2016 – Boston, Mass., USA
Robotica 2016 – May 4-8, 2016 – Bragança, Portugal
Advanced Robotics Systems and Competitions – May 6, 2016 – Bragança, Portugal
ARMS 2016 – May 9-13, 2016 – Singapore


Let us know if you have suggestions for next week, and enjoy today’s videos.

* This post is generated by a spreadsheet. Seriously.

Mark your calendars: the premiere of NOVA’s “Rise of the Robots” is in less than two weeks! Loyal readers of this blog will probably recognize all of the robots and most of the people in the trailer, but it looks like NOVA—which bills itself as “the most-watched primetime science series on television”—scored some great expert commentary along with footage of DRC robots that we’ve never seen before.

Watch it Wednesday, February 24 at 9 pm ET on PBS, and then stick around for another PBS premiere: a documentary about Big Data.

[ “Rise of the Robots” ]

Fun fact from a press release: every two days, the human race is generating as much data as was generated from the dawn of humanity through the year 2003. Whoa. When data gets big enough to become Big Data (this is A Thing That Is Happening), then what? You can find out, thanks to PBS:

“The Human Face of Big Data” airs on PBS on February 24 at 10 pm ET (check local listings). The film will also be streaming as of February 25 on CuriosityStream.

[ The Human Face of Big Data ]

The most amazing thing about this video is the carrying case at the end, which somehow fits 24 drones inside it all at once. Forget the robots, I just want that case!

[ magicLab ]

Baxter and Sawyer wish you the happiest of Chinese New Years!

[ Rethink Robotics ]

I’m very glad that CCTV had 540 robot dancers to celebrate Chinese New Year, because 530 robot dancers wouldn’t have impressed anyone.

[ Shanghaiist ]

This robot is only slightly better at lipstick application than I am, so I’m feeling pretty good about that.

[ Simone Giertz ]

The robot part of this color-changing robot is incidental at best, but it’s still pretty cool to watch:

All that’s going on here is a bunch of light sensors behind the robot are telling screens on the front of the robot what color they should be to help it blend in. The screens themselves are plasmonic, which is the focus of the research.
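The camouflage loop is simple enough to sketch. This is a toy illustration of the sensor-to-screen mapping described above, not the researchers' actual code, and all names here are hypothetical:

```python
# Minimal sketch (hypothetical names) of the blend-in loop: each rear-facing
# light sensor reports the background color behind the robot, and the
# matching front panel is set to that same color.

def blend_in(sensor_readings):
    """Map each rear sensor's (r, g, b) reading straight to the
    corresponding front panel. Returns the list of panel colors."""
    panels = []
    for rgb in sensor_readings:
        # The plasmonic screen simply reproduces whatever the sensor saw.
        panels.append(rgb)
    return panels

# Against a red-and-white background, the panels mirror it:
background = [(255, 0, 0), (255, 255, 255)]
print(blend_in(background))  # [(255, 0, 0), (255, 255, 255)]
```

The interesting part, per the paper, isn't this control logic; it's the plasmonic screens themselves.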

[ ACS Nano ] via [ New Scientist ]

You may not have known this, but Pepper is a famous and magical musician:

[ Pepper ]

Chris Synan from Dataworlds has managed to get an Amazon Echo to control an Iris drone from 3D Robotics, executing verbal instructions:
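Dataworlds hasn't published the pipeline, but the core idea, mapping phrases the Echo recognizes to drone commands, can be sketched in a few lines. Everything below (the phrase table, the `dispatch` function) is a hypothetical illustration, not their implementation:

```python
# Toy sketch (all names hypothetical) of voice-to-drone dispatch: a voice
# assistant hands us a recognized phrase, and we look up the corresponding
# flight command to send to the drone.

COMMANDS = {
    "take off": "TAKEOFF",
    "land": "LAND",
    "go higher": "CLIMB",
}

def dispatch(phrase):
    """Return the flight command for a recognized phrase, or None if the
    phrase isn't one we handle."""
    return COMMANDS.get(phrase.strip().lower())

print(dispatch("Take off"))  # TAKEOFF
```

In a real system the right-hand side would be actual autopilot calls rather than strings, but the lookup-and-dispatch shape is the same.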

[ Dataworlds ] via [ DIY Drones ]

“Following on from the successes of StarlETH, ETH Zurich’s very own bioinspired walking robot, here Marco Hutter (RSL, ETH Zurich and NCCR Robotics) explains how ANYmal has been upgraded and why it’s important.”

[ ANYmal ]

Those gecko-toe grippers that work so well sticking robots to walls also work great grabbing stuff in zero gravity:

[ Stanford BDML ]

Thom Gibson is a middle school teacher at The Khabele School in Austin, Texas. His class is starting to learn to code with Lego Mindstorms, and he’s doing a short video series on the experience.

[ The Khabele School ]

For $60 on Indiegogo, you can pledge for this underactuated two-finger gripper that can underactuatedly grip things that need to be gripped underactuatedly:

[ Indiegogo ]

Is a robot getting a hole in one really that remarkable?

Do it twice in a row, then I’ll be impressed.

[ Golf Digest ]

Want ROS on a quadrotor? Erle Robotics has you covered:

[ Erle-Copter ]

The weird thing about Northrop Grumman’s Super Bowl commercial is that their next generation fighters appear to have humans in them. Ugh.

Also, why would they show us the unmanned carrier landing all done in vague CG when it’s way more impressive to see the actual video? 

[ Northrop Grumman ]

TORC Robotics’ remote controlled heavy machinery: where was this stuff when I was little and at the beach?

[ TORC Robotics ]

I would have thought that driving through a barricaded parade course would have been a cinch for IHMC’s ATLAS, but for some reason they had a human driving in the Pensacola Mardi Gras parade instead:

[ IHMC ]

“When designing exoskeletons for rehabilitation of patients after spinal cord injury or stroke, how to create them to be as natural as possible to allow them to train the user to rebuild both muscles and neural networks is a constant issue. Here Stefan Schrade from RELab at ETH Zurich and NCCR Robotics, Switzerland, explains how the ETH knee perturbator is working to train patients to recover from everyday bumps, knocks and stumbles.”

[ Robohub ]

I don’t know what this game is, but it doesn’t stand a chance against a robot:

And this probably keeps a whole bunch of humans entertained for hours at a time. We’re so silly.

[ YouTube ] via [ Kotaku ]

Woohoo drone racing!

Sooner or later, one of these videos is going to end in a giant explosion from an AIM-9 Sidewinder missile launched from one of the F-22 Raptors that the FAA might have now.

[ Propsman ] via [ Gizmodo ]

From University of Toronto’s Autonomous Space Robotics Lab:

“This video presents a case study for local point cloud registration. A Velodyne HDL-64e was mounted on a small rolling table to move in the hallways and on a Clearpath Grizzly RUV for all outdoor data acquisitions.”

[ UTIAS ASRL ]

If you’re going to simulate an environment for a robot, and you can make that environment look like absolutely anything, why would you choose such a horrible-looking floor?

[ PAL Robotics ]

These two videos from DFKI show how robot control can be improved by direct interfaces with our brains:

[ DFKI ]

“The mobile power-grid project envisions a swarm of autonomous robots being inserted into a natural disaster zone. This concept will create a new capability for power-grid deployment in applications such as reestablishing power after a disaster to accelerate search, rescue, and recovery efforts. The aim is to develop a system which will be able to robustly reconfigure itself physically and electrically to adapt in any change in environment situation, power demand, and power generation. The ultimate goal of the proposed work will be intelligent power electronics and a distributed autonomous mobile microgrid capable of regulating power flow at a desired voltage and frequency level, meeting load demands.”

[ MTU ]

This NASA show on robots (their robots) came out a few months ago, but we never heard about it because we’re terrible. It’s a little slow, but there’s lots of awesome info and some great footage, including some of . . . hey, our video of Valkyrie!

[ NASA Edge ]

From CMU’s Robotics Institute Seminar Series:

Matthew Johnson-Roberson: “Underwater mapping: new robotic approaches to an old problem”

“Robust, scalable Simultaneous Localization and Mapping (SLAM) algorithms support the successful deployment of robots in many real-world applications. Robotic platforms now deliver vast amounts of sensor data from large unstructured environments. In attempting to process and interpret this data, the underwater domain poses many unique challenges. This talk will present recent advances in perception techniques for the generation of 3D reconstructions over tens of thousands of square meters. Topics will include the generation of self-consistent poses, long-term localization, consistency of lighting through an attenuating medium, the visualization of gigapixel textured models using a novel state-of-the-art rendering system, and a crowd sourcing platform using smartphones and tablets to aid in the processing of vast volumes of visual data. The outputs of the proposed techniques will be displayed in real world applications including underwater archeology and marine science. This talk will show results from the monitoring of Australia’s Coral Reefs and the archeological mapping of a 5,000 year-old submerged city. Finally, the talk will look forward towards new work in underwater grasping and manipulation and a proposal for drastic cost reduction in deep water monitoring approaches.”

[ CMU RI Seminar ]

And finally: why do they automate? The Association for Advancing Automation (A3) has an ongoing video series featuring a variety of companies that will tell you exactly why they decided to use robots to automate their businesses. Below is the latest video, and you can watch the other ones here.

[ A3 ]
