Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
HRI 2019 – March 11-14, 2019 – Daegu, Korea
Nîmes Robotics Festival – May 17-19, 2019 – Nîmes, France
ICRA 2019 – May 20-24, 2019 – Montreal, Canada
Let us know if you have suggestions for next week, and enjoy today’s videos.
Using machine learning and sensory hardware, Alberto Rodriguez, assistant professor of mechanical engineering, and members of MIT’s MCube lab have developed a robot that is learning how to play the game Jenga. The technology could be used in robots for manufacturing assembly lines.
[ MIT ]
As of now, the winner of the Most Potentially Exciting ICRA 2019 Workshop prize that I just made up goes to the one with all the legged robots doing cool stuff!
We wanted to see how long Cassie could operate during the "polar vortex" that has shut down the campus! Well, she did one hour and two minutes of continuous operation at -22 C (-8 F). The intrepid investigators stay, for the most part, in the laboratory and watch Cassie through the windows. From time to time, we check on battery levels or record stuff with our phones.... When she fell at the end, the battery showed 60% charge. So what ended the experiment? You tell us!
As a special treat, we’re also posting the unedited hour-and-twenty-minute-long video here, for those of you who like watching Cassie Slightly-Less-Blue suffer out in the cold. There are two experiments, so if you just want to see where she bites it super hard to end the first one, skip to 19:00.
Columbia Engineering researchers have made a major advance in robotics by creating a robot that learns what it is, from scratch, with zero prior knowledge of physics, geometry, or motor dynamics. Once their robot creates a self-simulation, it can then use that self-model to adapt to different situations, to handle new tasks as well as detect and repair damage in its own body.
[ Science Robotics ]
The yearly Consumer Electronics Show (CES) in Las Vegas, Nev., is the world’s biggest consumer technology trade show. At CES 2019, the German automotive manufacturer Continental demonstrated its vision for seamless mobility in goods delivery. Joining forces with ANYbotics, the exhibit showed cascaded robot delivery, combining a self-driving shuttle with the autonomous legged robot ANYmal.
Due to an increase in global e-commerce and urbanization, the need for automated delivery is growing. Continental envisions a holistic delivery concept that increases availability, efficiency, and safety in the parcel delivery value chain. Solving the last-meter/last-yard delivery, a driverless vehicle, such as Continental’s CUbE shuttle, carries multiple mobile robots that deploy packages all the way to the consumer. Making use of ANYmal’s high mobility and autonomy, the system can carry packages of up to 10 kg over steps and stairs up to the front door.
[ ANYbotics ]
From Qingkai Lu and Tucker Hermans at the University of Utah:
Different manipulation tasks require different types of grasps. For example, holding a heavy tool like a hammer requires a multi-fingered power grasp offering stability, while holding a pen to write requires a multi-fingered precision grasp to impart dexterity on the object. In this paper, we propose a probabilistic grasp planner that explicitly models grasp type for planning high-quality precision and power grasps in real-time.
We take a learning approach in order to plan grasps of different types for previously unseen objects when only partial visual information is available. Our work demonstrates the first supervised learning approach to grasp planning that can explicitly plan both power and precision grasps for a given object. Additionally, we compare our learned grasp model with a model that does not encode type and show that modeling grasp type improves the success rate of generated grasps. Furthermore, we show the benefit of learning a prior over grasp configurations to improve grasp inference with a learned classifier.
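To get a feel for the idea of type-aware grasp planning, here is a minimal, purely illustrative Python sketch: it samples grasp candidates under separate priors for power and precision grasps, scores them with a stand-in for a learned, type-conditioned success classifier, and keeps the best. All names, features, and the hand-coded logistic scorer are hypothetical, not the authors’ implementation.

```python
import math
import random

random.seed(0)

def sample_grasp(grasp_type):
    """Sample a candidate grasp configuration (finger_spread, wrist_offset),
    using a wider finger-spread prior for power grasps (illustrative only)."""
    if grasp_type == "power":
        spread = random.gauss(0.08, 0.02)   # meters; wide wrap around object
    else:
        spread = random.gauss(0.03, 0.01)   # meters; fingertip pinch
    offset = random.gauss(0.0, 0.01)        # wrist offset from object center
    return (spread, offset)

def success_probability(grasp, grasp_type):
    """Stand-in for a learned, type-conditioned classifier: a logistic score
    that prefers type-appropriate finger spreads and small wrist offsets."""
    spread, offset = grasp
    target = 0.08 if grasp_type == "power" else 0.03
    score = -abs(spread - target) * 50 - abs(offset) * 20
    return 1.0 / (1.0 + math.exp(-score))

def plan_grasp(n_samples=200):
    """Search over both grasp types and return the highest-scoring candidate
    as (grasp_type, grasp, probability)."""
    best = None
    for grasp_type in ("power", "precision"):
        for _ in range(n_samples):
            g = sample_grasp(grasp_type)
            p = success_probability(g, grasp_type)
            if best is None or p > best[2]:
                best = (grasp_type, g, p)
    return best

if __name__ == "__main__":
    grasp_type, grasp, prob = plan_grasp()
    print(grasp_type, grasp, round(prob, 3))
```

The key point the sketch captures is that grasp type is an explicit variable in the search, so the planner can choose a power or precision grasp depending on which scores higher, rather than optimizing over one undifferentiated grasp space.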
From Yuchen Xiao at Northeastern:
In this video, a mobile robot is tasked to search for a small target object in a cluttered environment on a tabletop. From the robot’s initial view, the target object could be either partially or fully occluded by other objects. The robot is able to reason about the potential locations of the target object, and then either move around or remove other surrounding objects in an efficient way and fetch the target one as soon as possible. Sometimes, even though the target object is fully exposed, the robot’s gripper still cannot reach it directly because it is too close to other objects. In this case, the robot is able to figure out how to manipulate the surrounding objects in a certain order and reach the target one in the end.
[ LLPR ]
Can I be a kid again so that a cute fuzzy robot can tell me stories?
[ MIT PRG ]
A prototype robot that uses intense suction to climb around the outside of aircraft and inspect them for damage has successfully been trialled at Cranfield University. The four-wheeled Vortex Robot, developed as part of the CompInnova project, was tested in field trials using Cranfield’s Boeing 737 and was able to stick to and move around the sides and underbelly of the plane. Structural inspection of aircraft components is currently done manually, making it time-consuming, subject to human error, and costly, with aircraft maintenance expenditure usually representing 20% of overall operating costs.
[ Cranfield ]
We address the problem of pregrasp sliding manipulation, which is an essential skill when a thin object cannot be directly grasped from a flat surface. Leveraging the passive reconfigurability of soft, compliant, or underactuated robotic hands, we formulate this problem as an integrated motion and grasp planning problem, and plan the manipulation directly in the robot configuration space. The proposed planner is implemented and evaluated both in simulation and on a real robot.
[ Yale Grab Lab ]
Implicit communication is communication that requires an understanding of context to interpret properly. This video shows Baxter using natural language commands and gestures to resolve the meaning of ordinary remarks like "It’s hot" or "It’s heavy". In a team setting such as the one shown here, such ambiguous words should be understood as implicit, actionable information affecting Baxter’s behavior. Since Baxter is attempting to grasp a soldering iron and a toolbox, the remarks inform a correct grasp in each case, ruling out other possible interpretations.
Baxter knows what to do when things are hot and heavy, apparently.
[ Paper ]
Panasonic is establishing “Robotics Hubs” in Tokyo and Osaka. They have some exoskeletons and mobile robots and stuff, but the most eye-catching thing is this little self-balancing jumping robot:
We perform an outdoor autonomous flight experiment of f-LASDRA (flying Large-size Aerial Skeleton with Distributed Rotor Actuation), which is constructed from multiple ODAR-8 links (https://youtu.be/S3i9NspWtr0) connected to each other via flexible cables. Each ODAR-8 link can generate omnidirectional force/torque and also compensate for its own weight, thereby rendering f-LASDRA scalable w.r.t. the number of links.
[ INRoL ]
In this work, we present a path planning method for exploiting contact by aerial robots to enable the traversal of highly anomalous surfaces. Apart from sliding in contact, the proposed strategy introduces a new locomotion modality of azimuth rotations perpendicular to the surface, dubbed the flying cartwheel mode.
Flying cartwheel mode wheeeee!
[ Kostas Alexis ]
How fast can a robot pick up and move an object using a suction gripper before it goes all wonky? Let’s find out!
[ Paper ]
The Co4Robots MS2 scenario consists of collaborative grasping and manipulation of an object by two agents: the TIAGo mobile manipulator and a stationary manipulator, which cooperate to facilitate loading and unloading tasks onto the mobile platform.
On November 10, 2003, Joseph Engelberger, considered by many to be the father of robotics, spoke at Carnegie Mellon University’s Robotics Institute before his induction into the Robot Hall of Fame. Presented here, for the historical record, is his talk and conversation from that day; his central theme was how robots might one day help older people remain living in their homes and do real work for them.
[ CMU ]
Cynthia Breazeal gave an invited talk this week at AAAI 2019 on "Living and Flourishing with AI." This is a recorded livestream, and her talk starts at about 41 minutes in.
Beyond automating productivity and supporting decision making, Artificial Intelligence can also be designed to help people to flourish and grow. As people across ages and stages begin to live with AI in the home, we have the opportunity to develop autonomous machines that can support a diversity of people as emotionally intelligent and capable allies. This is particularly poignant as societies struggle to meet the growing demands of aging, chronic disease management, mood disorders, and lifelong learning. This gives rise to important challenges and opportunities for how to design, deploy and evaluate the long-term impact of socially and emotionally intelligent machines in the real world to help people learn, thrive and emotionally connect. This requires a tightly interwoven approach at the intersection of design, human social psychology, ethics, and AI. We have the opportunity to develop AI that places human growth and wellbeing at the center to benefit everyone in personally meaningful and uplifting ways.
[ AAAI ]
This week’s CMU RI Seminar comes from Shaojie Shen, Director, HKUST-DJI Joint Innovation Lab, on "Minimalist Visual Perception and Navigation for Consumer Drones."
Consumer drone developers often face the challenge of achieving safe autonomous navigation under very tight size, weight, power, and cost constraints. In this talk, I will present our recent results towards a minimalist, but complete perception and navigation solution utilizing only a low-cost monocular visual-inertial sensor suite. I will start with an introduction of VINS-Mono, a robust state estimation solution packed with multiple features for easy deployment, such as online spatial and temporal inter-sensor calibration, loop closure, and map reuse. I will then describe efficient monocular dense mapping solutions utilizing efficient map representation, parallel computing, and deep learning techniques for real-time reconstruction of the environment. The perception system is completed by a geometric-based method for estimating full 6-DoF poses of arbitrary rigid dynamic objects using only one camera. With this real-time perception capability, trajectory planning and replanning methods with optimal time allocation are proposed to close the perception-action loop. The performance of the overall system is demonstrated via autonomous navigation in unknown complex environments, as well as aggressive drone racing in a teach-and-repeat setting.
[ CMU RI Seminar ]
Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.
Erico Guizzo is the digital product manager at IEEE Spectrum. An IEEE Member, he is an electrical engineer by training and has a master’s degree in science writing from MIT.