Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
ISR 2018 – August 24-27, 2018 – Shenyang, China
BioRob 2018 – August 26-29, 2018 – University of Twente, Netherlands
RO-MAN 2018 – August 27-30, 2018 – Nanjing, China
ELROB 2018 – September 24-28, 2018 – Mons, Belgium
ARSO 2018 – September 27-29, 2018 – Genoa, Italy
ROSCon 2018 – September 29-30, 2018 – Madrid, Spain
IROS 2018 – October 1-5, 2018 – Madrid, Spain
Let us know if you have suggestions for next week, and enjoy today’s videos.
What has the ability to move and show its colors, is made only of silicone rubber and manufactured at the millimeter scale? A soft robotic peacock spider. Researchers have combined three different manufacturing techniques to create a novel origami-inspired soft material microfabrication process that goes beyond what existing approaches can achieve at this small scale.
Yeah, I mean, that soft robotic spider is cool and all, but peacock spiders are THE CUTEST.
[ Wyss Institute ]
Even serious robots built to do serious things need to have some fun sometimes.
Verity Studios has established itself as a reliable, world-leading provider of indoor drone shows. Its show drones have been featured in a variety of live events, including PARAMOUR, the Broadway show by Cirque du Soleil, where Verity Studios’ Stage Flyer drones performed 398 live shows with more than 7,000 autonomous take-offs, flights, and landings. Verity Studios just unveiled the Synthetic Swarm, a drone show system that combines the same reliable technology successfully pioneered on Broadway with its new Lucie micro drones. These novel micro drones feature powerful lights, yet weigh a mere 49 grams (1.7 ounces), and are both quiet and ultra-safe.
[ Verity Studios ]
CRASAR has been providing Hawaiian authorities with small drones to help them track the Kilauea eruption, and they’ve been taking some spectacular footage:
Needless to say, these are professionals, and you should definitely not be doing this.
[ CRASAR ]
Basketball players need lots of practice before they master the dribble, and it turns out that’s true for computer-animated players as well. By using deep reinforcement learning, players in basketball video games can glean insights from motion-capture data to sharpen their dribbling skills.
Researchers at Carnegie Mellon University and DeepMotion, a California company that develops smart avatars, have for the first time developed a physics-based, real-time method for controlling animated characters that can learn dribbling skills from experience. In this case, the system learns from motion capture of the movements performed by people dribbling basketballs.
[ CMU ]
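Physics-based characters like these are typically trained with a reward that mixes imitation of the motion-capture reference with a task objective. The sketch below is entirely illustrative — the weights, the exponential tracking kernel, and the function name are assumptions, not the actual CMU/DeepMotion formulation:

```python
import math

def dribble_reward(pose, ref_pose, task_reward, w_imitate=0.7, w_task=0.3):
    """Toy reward: stay close to the mocap reference pose while also
    scoring on the task (e.g., keeping the dribble alive).

    `pose` and `ref_pose` are flat lists of joint angles; the weights
    and kernel scale are illustrative placeholders.
    """
    # Squared tracking error against the motion-capture reference
    err = sum((p - r) ** 2 for p, r in zip(pose, ref_pose))
    # Exponential kernel maps error into a [0, 1] similarity score
    similarity = math.exp(-2.0 * err)
    # Blend imitation and task terms into a single scalar reward
    return w_imitate * similarity + w_task * task_reward
```

The imitation term keeps the learned controller looking human, while the task term lets it deviate from the reference when the ball demands it.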
KAIST Urban Robotics Lab has turned its wall-perching drone into a wall-driving drone. I’d mute this one, because this has to be the worst-sounding drone I’ve ever heard.
[ KAIST ]
Seems like having a conversation with Erica would be ideal for me, because presumably she comes with an off switch that I can use to finish the conversation immediately when I’m fed up with talking. Doesn’t work so well with humans, as much as I’ve tried.
Walmart is collaborating with a startup, Alert Innovation, to introduce automated technology to Online Grocery Pickup at the supercenter in Salem, New Hampshire. By teaming up with the new Alphabot system in this small pilot, associates can spend less time on repeatable tasks and more time serving customers.
That system looks efficient, but it also looks like an absolutely massive amount of dedicated robot infrastructure.
Hinamitetu’s gymnast robots have moved beyond batteries, to supercapacitors!
[ Hinamitetu ]
There’s Waldo is a robot built to find Waldo and point at him. The robot arm is controlled by a Raspberry Pi using the PYARM Python library for the UARM Metal. Once initialized, the arm extends and takes a photo of the canvas below. It then uses OpenCV to find and extract faces from the photo. The faces are sent to the Google AutoML Vision service, which compares each one against the trained Waldo model. If a match with 95 percent (0.95) confidence or higher is found, the robot arm extends to the coordinates of the matching face and points at it. If there are multiple Waldos in a photo, it points to each one.
While There’s Waldo is only a prototype, its fastest time to point out a match is 4.45 seconds, which is better than most 5-year-olds.
Okay, but how about Wenda, the Wizard, Odlaw, and Woof?
[ RedPepper ]
And now, this:
Only $1,200, plus $27 a month. WORTH IT
The 2018 DJI RoboMasters competition looks like fun, in a robots-shooting-other-robots kind of way.
[ DJI RoboMaster ]
Some engineering students at UC Davis took $600 and made a robot that can tie a shoe:
Hey, did you know that TurtleBot3 is expandable with all kinds of sensors? Because it is!
[ TurtleBot ]
A heterogeneous-vehicle mission demo with multiple QDrone quadrotors and QBot ground robots from the Autonomous Vehicles Research Studio, illustrating a software architecture designed to accommodate any approach and control strategy.
[ Quanser ]
I think I’m too old to really understand what a Dota is, but OpenAI’s bots are pretty good at it, I guess?
[ OpenAI ]