Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next two months; here’s what we have so far (send us your events!):
iREX 2017 – November 29-December 2, 2017 – Tokyo, Japan
IEEE IRC 2018 – January 31-February 2, 2018 – Laguna Hills, Calif.
Let us know if you have suggestions for next week, and enjoy today’s videos.
With a title like “What’s new, Atlas?” for a video like this, you know that Boston Dynamics is just messing with us now:
And just to be extra infuriating, BD also released an updated video of SpotMini with zero explanation:
[ Boston Dynamics ]
Cozmo got a little bit lost in Reddit as part of an amazing publicity stunt earlier this week:
The game played out on a real set wherein Cozmo would roll through a series of trials, exploring rooms and solving puzzles that tested his ability to move; to place, stack, and turn blocks; and to recognize faces and pets, all while testing Reddit’s collective will to help him. Cozmo’s quest: to gather three golden key cubes to escape to Reddit’s front page. We built a live stream module with a heads-up display that allowed Redditors to vote on Cozmo’s actions and determine his path through the platform.
Over the course of six hours, Redditors would help Cozmo escape to the front page, or not. As with anything Reddit, there was no telling people what to do. Cozmo could “win,” he could lose, he could be trapped for six hours feeding puppers, or being launched repeatedly toward a rustic rope hammock via a fully functional 1/24 scale trebuchet. It could be over in an hour, or never. We weren’t trusting that Redditors would deliver Cozmo to a promised destination, because that wasn’t the point. We were trusting Redditors to be themselves. Our goal was to earn interaction by providing an experience that was authentic and relevant to the platform, and in the process to show the many facets of Cozmo.
Every technical and creative element of Cozmo: Lost In Reddit was grounded in the absolute necessity of making the experience true to Reddit and the cultures and mores of its communities. This goal wasn’t just rooted in a healthy respect for the, let’s call it skepticism, of the community when it came to Brand Content, but the earnest desire to make something that would add value to a Redditor’s day.
Producing a live, six-hour game turned out to be more akin to a weird, experimental theatrical production than anything else, requiring elaborate sets, open-ended scripts, and an emotive actor (or rather one actor and nine stunt doubles; there were 10 Cozmos in action throughout the game). The drama behind the scenes started five months before the game’s debut and only increased as the experience unfolded. A team that spanned writing, art direction, design, media, gaming, and engineering conceived and executed what amounted to 28 mini-productions revolving around everything from machine learning and robotics to the clay modeling of Charging Doge.
Researchers from the National University of Singapore (NUS) have created MantaDroid, an aquatic robot that emulates the swimming locomotion of manta rays. The robotic manta ray, which swims at twice its body length per second and can operate for up to 10 hours, could potentially be employed for underwater surveillance in the future.
It’s adorable and I want one.
[ NUS ]
ANYbotics took ANYmal to the zoo in Zurich, and the results were pretty cute:
What we really want to know, though, is what all the animals thought.
[ ANYmal ]
The IEEE RAS Italian Chapter organized a video contest for students “to show the Italian style in robotics.” If you’re wondering what that means, check out all of this tasty food made by robots:
Authors: Jonathan Cacace, Mario Selvaggio, Università di Napoli "Federico II"
Authors: Nicola Battilani, Giuseppe Riggio, Chiara Talignani Landi, Università di Modena e Reggio Emilia
We’ll have a couple more good ones for you next week.
[ I-RAS ]
Nice to know that some robots are on our side, as UT Austin’s Dreamer tries to prevent accidents from happening to the humans around it. In the first part of this video, I think there’s a coatrack with a fiducial on it standing in for a human, which (as far as robots are concerned) is basically all that we are anyway:
This study, led by PhD student Kwan Suk Kim of the Human Centered Robotics Lab at UT Austin, explores the ramifications of the title question through empirical investigations between people and robots. Without entering into the deep ethical and moral questions, Kwan Suk devises probabilistic methods to inform computational resources whether collisions between objects or other robots and people are likely to happen. Above a threshold, the robot analyzing those estimations takes action to stop the object or the other robot agent if it’s advancing on a collision track. Rather than using the end-effector to stop collisions, a logic and motion planner explores the parts of the robot’s body closest to the collision that could effectively stop it. The body parts considered for intervention include the forearm, shoulder, elbow, body, or end-effector. Moreover, if the robot is already performing an action with its end-effector, such as grasping something in the environment, the planners will first explore trajectories that do not violate the current end-effector task. Only if no constrained body part is found to perform the action will the planners decide to violate the end-effector task at hand. In these video segments, the Dreamer humanoid robot uses various body parts to stop a ball approaching a person and, subsequently, a mobile base on track to hit a human in its path. What is your take: should robots be allowed to stop objects or stop other robots?
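The decision logic described in the blurb can be sketched in a few lines: estimate the collision probability, intervene only above a threshold, prefer body parts that keep the current end-effector task intact, and among those pick the part closest to the predicted impact point. All names, fields, and thresholds below are illustrative assumptions, not the actual HCRL implementation.

```python
# Hypothetical sketch of threshold-based collision intervention.
# Data structures and field names are assumptions for illustration only.

def plan_intervention(p_collision, threshold, candidates, preserves_task):
    """Decide whether and how to block a predicted collision.

    p_collision:    estimated probability that the object/robot hits a person
    candidates:     body parts, each with a distance to the predicted impact
    preserves_task: predicate -- True if using this part keeps the current
                    end-effector task (e.g. an ongoing grasp) intact
    """
    if p_collision < threshold:
        return None  # collision unlikely: no intervention needed

    # Prefer parts that do not violate the end-effector's current task;
    # fall back to all candidates only if no constrained option exists.
    safe = [c for c in candidates if preserves_task(c)]
    pool = safe if safe else candidates

    # Among the allowed parts, use the one closest to the impact point.
    return min(pool, key=lambda c: c["distance"])


parts = [
    {"name": "forearm", "distance": 0.4},
    {"name": "elbow", "distance": 0.6},
    {"name": "end_effector", "distance": 0.2},
]
# While grasping, any part except the end-effector preserves the task.
busy_grasping = lambda c: c["name"] != "end_effector"
print(plan_intervention(0.9, 0.5, parts, busy_grasping)["name"])  # forearm
```

The end-effector is nearest to the impact here, but because it is busy grasping, the planner falls back to the forearm, mirroring the "violate the task only as a last resort" behavior described above.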
[ HCRL ]
Sandia is developing energy-efficient actuation and drivetrain technologies to dramatically improve the charge life of legged robots. This video, the third in a series, describes the continued development, integration, and testing of the Walking Anthropomorphic Novelly Driven Efficient Robot for Emergency Response (WANDERER).
[ Sandia ]
Jibo is just as helpful as Amazon Echo or Google Home:
Problem is, the human is likely giving the location of a nearby coffee shop that he also knows, from experience, to be good. Jibo, I would guess, is not yet that clever.
[ Jibo ]
A new miniature robot developed by EPFL researchers can swim with fish, learn how they communicate with each other and make them change direction or come together. These capabilities have been proven on schools of zebrafish.
[ EPFL ]
Cassie Blue isn’t afraid of the dark, according to this video:
I mean, that makes sense, because Cassie Blue doesn’t have any kind of vision system yet, does it...?
Trashbots was a workshop developed by Sonia Roberts and Diedra Krieger for the event "Be a Pennovator" as part of the 2017 Philadelphia Science Festival. Middle school students created locomoting robots using motors, two rechargeable AAA batteries, post-consumer materials, wire, laser-cut acrylic, and basic art supplies. The concepts of energy and physical programming were demonstrated using working Trashbots and practiced during an introductory exercise making a vibrating motor from a spinning one. Participants then created a legged robot of their own design using iterative experimentation and took their final creations home with them.
[ Kodlab ]
During Pepper World Paris 2017, some of our Certified Partners introduced a broad range of B2B applications and solutions developed for Pepper. These applications answer several business needs in Retail, Banking, Hospitality, and Healthcare.
OMG WHERE DO I GET A POCKET PEPPER!
Chill for a few minutes while watching Team Blacksheep’s latest soothing drone video:
[ Team Blacksheep ]
I call this robot Stabby McStabaneedleintoyourarm:
[ Kuka ]
This is a 360° video of the ARM Lab at the University of Michigan, meaning that you can click and drag with your mouse to see what’s going on all around the camera:
Who hasn’t rifled through a basket of laundry to get some garment or other, or a box of tools to find the right one for the job? We do these things without thinking much about them, but for Dmitry Berenson, an assistant professor of electrical engineering and computer science, they represent a trifecta of challenging tasks related to AI and autonomy: perception, planning, and manipulation.
Beanbags are squishy, but they can’t deform as wildly as a shirt or a hose can. This variability in how a material can appear is a huge challenge for robots – they need to recognize the object before they can even begin to manipulate it. So the beanbags represent a step toward a robot that has human-like ease with all types of materials. Whether they work in hospital laundry rooms or homes, help rescue people from disasters or set up a greenhouse on Mars, robots will need the skills that Berenson and his group are developing in the Autonomous Robotic Manipulation (ARM) lab.
[ ARM Lab ]
Deepfield Robotics’ monster farm robot may have a drone sidekick now, but it hasn’t lost its beastly weed punchin’ attitude:
[ Flourish Project ]