Video Friday: This Robot Is Learning to Slice Onions

The best robot videos from the world's biggest robotics conference

This Robot Is Learning to Slice Onions and Potatoes
Researchers from Iowa State University are teaching a robot how to use a knife to slice vegetables just like a chef.
Image: Iowa State University

We hope you’ve been enjoying our ICRA 2019 coverage, which has featured an upgraded Salto robot jumping outdoors, a new version of the HyQ quadruped pulling a plane, and a promising new approach to robot simulation and learning. We’ll have lots more from the conference over the next few weeks, but for you impatient types, we’re cramming Video Friday this week with a selection of ICRA videos.

As usual with our Video Friday ICRA editions, we tried to cover many different areas of robotics: control, humanoids, manipulators, aerial vehicles, AI, etc. We’re posting the abstracts along with the videos, but if you have any questions about these projects, let us know and we’ll get more details from the authors. And of course, this is just a small sample of an enormous amount of amazing research presented at ICRA—if you don’t see your favorite project here, please email us and we’ll try to write about it some time soon.

We’ll return to normal Video Friday next week. Have a great weekend everyone!

Dynamic Walking 2019 – June 3-6, 2019 – Canmore, Alberta, Canada
2nd Annual Robotics Summit & Expo – June 4-6, 2019 – Boston, Mass., USA
ICUAS 2019 – June 11-14, 2019 – Atlanta, Ga., USA
Energy Drone Coalition Summit – June 12-13, 2019 – Woodlands, Texas, USA
RSS 2019 – June 22-26, 2019 – Freiburg, Germany
Hamlyn Symposium on Medical Robotics – June 23-26, 2019 – London, U.K.
ETH Robotics Summer School – June 27-July 1, 2019 – Zurich, Switzerland
MARSS 2019 – July 1-5, 2019 – Helsinki, Finland

"A Practical Approach to Insertion with Variable Socket Position Using Deep Reinforcement Learning," by Mel Vecerik, Oleg Sushkov, David Barker, Thomas Rothorl, Todd Hester, and Jon Scholz from DeepMind, London, U.K.

Insertion is a challenging haptic and visual control problem with significant practical value for manufacturing. Existing approaches in the model-based robotics community can be highly effective when task geometry is known, but are complex and cumbersome to implement, and must be tailored to each individual problem by a qualified engineer. Within the learning community there is a long history of insertion research, but existing approaches are either too sample-inefficient to run on real robots, or assume access to high-level object features, e.g. socket pose. In this paper we show that relatively minor modifications to an off-the-shelf Deep-RL algorithm (DDPG), combined with a small number of human demonstrations, allows the robot to quickly learn to solve these tasks efficiently and robustly. Our approach requires no modeling or simulation, no parameterized search or alignment behaviors, no vision system aside from raw images, and no reward shaping. We evaluate our approach on a narrow-clearance peg-insertion task and a deformable clip-insertion task, both of which include variability in the socket position. Our results show that these tasks can be solved reliably on the real robot in less than 10 minutes of interaction time, and that the resulting policies are robust to variance in the socket position and orientation.
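For readers curious about the mechanics, combining an off-policy algorithm like DDPG with a small number of demonstrations is commonly implemented by seeding the replay buffer with human transitions and mixing them into every training batch. Below is a minimal, hypothetical Python sketch of that idea; it is not DeepMind's code, and the class and parameter names are our own.

```python
import random
from collections import deque

class DemoSeededReplayBuffer:
    """Replay buffer whose demonstration transitions are never evicted."""

    def __init__(self, demos, capacity=100_000):
        self.demos = list(demos)             # human (s, a, r, s', done) tuples
        self.agent = deque(maxlen=capacity)  # self-generated experience, FIFO

    def add(self, transition):
        self.agent.append(transition)

    def sample(self, batch_size, demo_fraction=0.25):
        """Mix demonstration and agent experience in every training batch."""
        n_demo = min(int(batch_size * demo_fraction), len(self.demos))
        batch = random.sample(self.demos, n_demo)
        batch += random.sample(list(self.agent), batch_size - n_demo)
        return batch

# Usage: seed with a handful of demonstrations, then train DDPG as usual.
demos = [((0.0,), (0.1,), 0.0, (0.1,), False)] * 50   # placeholder transitions
buffer = DemoSeededReplayBuffer(demos)
for _ in range(1000):
    buffer.add(((0.1,), (0.0,), 1.0, (0.2,), True))   # collected by the policy
batch = buffer.sample(64)
```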

"Sharing the Load: Human-Robot Team Lifting Using Muscle Activity," by Joseph DelPreto and Daniela Rus from MIT, Cambridge, Mass., USA.

Seamless communication of desired motions and goals is essential for enabling effective physical human-robot collaboration. In such cases, muscle activity measured via surface electromyography (EMG) can provide insight into a person’s intentions while minimally distracting from the task. The presented system uses two muscle signals to create a control framework for team lifting tasks in which a human and robot lift an object together. A continuous setpoint algorithm uses biceps activity to estimate changes in the user’s hand height, and also allows the user to explicitly adjust the robot by stiffening or relaxing their arm. In addition to this pipeline, a neural network trained only on previous users classifies biceps and triceps activity to detect up or down gestures on a rolling basis; this enables finer control over the robot and expands the feasible workspace. The resulting system is evaluated by 10 untrained subjects performing a variety of team lifting and assembly tasks with rigid and flexible objects.
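As a rough illustration of the continuous-setpoint idea, the sketch below rectifies and smooths a biceps EMG channel into an activity envelope, then nudges the robot's hand-height setpoint up when the user stiffens their arm and down when they relax. The thresholds, gains, and synthetic signal are illustrative assumptions, not the paper's calibrated pipeline.

```python
import numpy as np

def emg_envelope(raw, window=50):
    """Rectify and smooth a raw EMG trace with a moving average."""
    rectified = np.abs(raw - np.mean(raw))   # remove DC offset, rectify
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def update_setpoint(setpoint, activity, lo=0.2, hi=0.6, step=0.002):
    """Raise the setpoint when activity is high, lower it when low."""
    if activity > hi:
        return setpoint + step   # user stiffens arm -> lift
    if activity < lo:
        return setpoint - step   # user relaxes -> lower
    return setpoint              # dead band: hold position

raw = np.random.randn(1000) * 0.5     # stand-in for a biceps channel
z = 0.3                               # initial hand height (m)
for a in emg_envelope(raw):
    z = update_setpoint(z, a)
```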

"Classification of Household Materials via Spectroscopy," by Zackory Erickson, Nathan Luskey, Sonia Chernova, and Charles C. Kemp from Georgia Institute of Technology, Atlanta, Ga., USA.

Recognizing an object’s material can inform a robot about the object’s fragility or appropriate use. To estimate an object’s material during manipulation, many prior works have explored the use of haptic sensing. In this paper, we explore a technique for robots to estimate the materials of objects using spectroscopy. We demonstrate that spectrometers provide several benefits for material recognition, including fast response times and accurate measurements with low noise. Furthermore, spectrometers do not require direct contact with an object. To explore this, we collected a dataset of spectral measurements from two commercially available spectrometers during which a robotic platform interacted with 50 flat material objects, and we show that a neural network model can accurately analyze these measurements. Due to the similarity between consecutive spectral measurements, our model achieved a material classification accuracy of 94.6% when given only one spectral sample per object. Similar to prior works with haptic sensors, we found that generalizing material recognition to new objects posed a greater challenge, for which we achieved an accuracy of 79.1% via leave-one-object-out cross-validation. Finally, we demonstrate how a PR2 robot can leverage spectrometers to estimate the materials of everyday objects found in the home. From this work, we find that spectroscopy is a promising approach for material classification during robotic manipulation.
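The leave-one-object-out protocol is straightforward to reproduce with standard tools: treat each physical object as a cross-validation group, so the classifier is always tested on an object it never saw during training. The sketch below uses synthetic spectra and scikit-learn's LeaveOneGroupOut; it mirrors the evaluation setup rather than the authors' exact model or data.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_objects, per_object, n_wavelengths = 10, 20, 100
X = rng.normal(size=(n_objects * per_object, n_wavelengths))  # fake spectra
y = np.repeat(np.arange(5), 2 * per_object)     # 5 materials, 2 objects each
groups = np.repeat(np.arange(n_objects), per_object)          # object IDs

# Each fold holds out every sample from one object, so accuracy reflects
# generalization to unseen objects, not just unseen measurements.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"leave-one-object-out accuracy: {scores.mean():.3f}")
```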

"Dentronics: Review, First Concepts and Pilot Study of a New Application Domain for Collaborative Robots in Dental Assistance," by Jasmin Grischke, Lars Johannsmeier, Lukas Eich, and Sami Haddadin from Robotics for Mankind Foundation and Technical University Munich, Germany.

In this paper we introduce dentronics as a new emerging application domain for collaborative lightweight robots in the dental context, backed by a user survey supporting the clear need for it. Specifically, we developed a multimodal interaction framework, applied this framework to a specific dental use case, and conducted a preliminary user study for evaluation. Our results demonstrate usability and feasibility beyond a controlled experimental setup. We conclude that dentronics is indeed within reach given today’s technology and deserves further investigation toward clinical use.

"Torque and Velocity Controllers to Perform Jumps With a Humanoid Robot: Theory and Implementation on the iCub Robot," by Fabio Bergonti, Luca Fiorio, and Daniele Pucci from Istituto Italiano di Tecnologia, Genova, Italy.

Jumping can be an effective way of locomotion to overcome small terrain gaps or obstacles. In this paper we propose two different approaches to perform jumps with a humanoid robot. Specifically, starting from a predefined CoM trajectory, we develop the theory for a velocity controller and for a torque controller, based on an optimization technique for computing the joint inputs. The controllers have been tested both in simulation and on the humanoid robot iCub. In simulation the robot was able to jump using both controllers, while the real system jumped with the velocity controller only. The results highlight the importance of controlling the centroidal angular momentum, and they suggest that the performance of the leg and torso joints (namely their maximum power) and the low-level control performance are fundamental to achieving acceptable results.
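The velocity-controller half of the approach boils down to mapping a desired CoM motion to joint velocities. A generic sketch of that mapping, via the CoM Jacobian pseudoinverse with a proportional correction, appears below; the Jacobian and gains are toy values, and this is not iCub's actual controller.

```python
import numpy as np

def com_velocity_controller(J_com, com_err, com_ref_vel, k_p=5.0):
    """Joint velocities that track a reference CoM trajectory."""
    v_des = com_ref_vel + k_p * com_err        # feedforward + P feedback
    return np.linalg.pinv(J_com) @ v_des       # least-squares joint rates

J_com = np.array([[0.3, 0.2, 0.1],             # toy 2x3 CoM Jacobian
                  [0.0, 0.4, 0.2]])
com_err = np.array([0.01, -0.02])              # desired minus actual CoM (m)
com_ref_vel = np.array([0.0, 0.5])             # upward CoM velocity for a jump
qdot = com_velocity_controller(J_com, com_err, com_ref_vel)
```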

"End-User Robot Programming Using Mixed Reality," by Samir Yitzhak Gadre, Eric Rosen, Gary Chien, Elizabeth Phillips, Stefanie Tellex, and George Konidaris from Brown University and United States Air Force Academy.

Mixed Reality (MR) is a promising interface for robot programming because it can project an immersive 3D visualization of a robot’s intended movement onto the real world. MR can also support hand gestures, which provide an intuitive way for users to construct and modify robot motions. We present a Mixed Reality Head-Mounted Display (MR-HMD) interface that enables end-users to easily create and edit robot motions using waypoints. We describe a user study where 20 participants were asked to program a robot arm using 2D and MR interfaces to perform two pick-and-place tasks. In the primitive task, participants created typical pick-and-place programs. In the adapted task, participants adapted their primitive programs to address a more complex pick-and-place scenario, which included obstacles and conditional reasoning. Compared to the 2D interface, a higher number of users were able to complete both tasks in significantly less time, and reported experiencing lower cognitive workload, higher usability, and higher naturalness with the MR-HMD interface.
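Under the hood, waypoint-based interfaces like this reduce a robot program to an editable sequence of poses and gripper actions. The sketch below shows a hypothetical minimal data structure for such a program, with the kind of insert-a-waypoint edit the MR interface performs via hand gesture; all names and fields are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    position: tuple           # (x, y, z) in the robot base frame, meters
    gripper_open: bool = True

@dataclass
class PickPlaceProgram:
    waypoints: list = field(default_factory=list)

    def insert(self, index, wp):
        self.waypoints.insert(index, wp)   # edit: add a waypoint mid-path

# Primitive task: approach, grasp, carry, release.
prog = PickPlaceProgram([
    Waypoint((0.5, 0.0, 0.3)),
    Waypoint((0.5, 0.0, 0.1), gripper_open=False),  # close gripper at object
    Waypoint((0.5, 0.3, 0.3), gripper_open=False),
    Waypoint((0.5, 0.3, 0.1)),                      # release at goal
])
# Adapted task: route over an obstacle by editing the program.
prog.insert(2, Waypoint((0.5, 0.15, 0.4), gripper_open=False))
```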

"Design and Characterization of a Novel Robotic Surface for Application to Compressed Physical Environments," by Yixiao Wang, Chase Frazelle, Richa Sirohi, Liheng Li, Ian D. Walker, and Keith E. Green from Cornell University, Ithaca, N.Y., USA, and Clemson University, Clemson, S.C., USA.

Robot arms have been developed in countless forms, but there has been little focus on robot surfaces for the reshaping of a habitable space—especially compliant surfaces. In this paper we introduce a novel, tendon-driven robot surface composed of aggregated, overlapping panels organized in a herringbone pattern. The individual 3D-printed panels and their behavior as an aggregation are inspired by the form and behavior of a pinecone. This paper presents our concept, design, and realization of this robot, and compares our prototype to simulations of four physical configurations that are formally distinct and suggestive of how the surface might be applied to habitable, physical space in response to human needs and wants. For the four configurations studied, we found a validating match between prototype and simulations. The paper concludes with a consideration of potential applications for robot surfaces like this one.

"Robotic Cutting: Mechanics and Control of Knife Motion," by Xiaoqian Mu, Yuechuan Xue, and Yan-Bin Jia from Iowa State University, Ames, Iowa, USA.

Effectiveness of cutting is measured by the ability to achieve material fracture with smooth knife movements. The work performed by a knife overcomes the material toughness, acts against the blade-material friction, and generates shape deformation. This paper studies how to control a 2-DOF robotic arm equipped with a force/torque sensor to cut through an object in a sequence of three moves: press, push, and slice. For each move, a separate control strategy in the Cartesian space is designed to incorporate contact and/or force constraints while following some prescribed trajectory. Experiments conducted over several types of natural foods have demonstrated smooth motions like those that would be commanded by a human hand.
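One way to picture the press-push-slice strategy is as a phased Cartesian controller that tracks the prescribed knife trajectory while respecting a force limit from the wrist sensor. The sketch below is a generic illustration under those assumptions, not the authors' controller; the gains, thresholds, and phase logic are invented for clarity.

```python
import numpy as np

def knife_command(phase, pos_err, force, f_max=15.0, k_p=2.0, k_f=0.01):
    """Cartesian velocity command for one control cycle of a cutting move."""
    v = k_p * pos_err                        # track the prescribed trajectory
    if force > f_max:                        # material resists: ease off depth
        v += k_f * (force - f_max) * np.array([0.0, 0.0, 1.0])
    if phase == "slice":                     # superimpose tangential sawing
        v += np.array([0.05, 0.0, 0.0])
    return v

for phase in ("press", "push", "slice"):
    pos_err = np.array([0.0, 0.0, -0.01])    # knife 1 cm above target depth
    v_cmd = knife_command(phase, pos_err, force=18.0)
```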

"ModQuad-Vi: A Vision-Based Self-Assembling Modular Quadrotor," by Guanrui Li, Bruno Gabrich, David Saldana, Jnaneshwar Das, Vijay Kumar, and Mark Yim from University of Pennsylvania, Philadelphia, Pa., USA and Arizona State University, Tempe, Ariz., USA.

Flying modular robots have the potential to rapidly form temporary structures. In the literature, docking actions rely on external systems and indoor infrastructures for relative pose estimation. In contrast to related work, we provide local estimation during the self-assembly process to avoid dependency on external systems. In this paper, we introduce ModQuad-Vi, a flying modular robot designed to operate in outdoor environments. We propose a new robot design and vision-based docking method. Our design is based on a quadrotor platform with onboard computation and visual perception. Our control method is able to accurately align modules for docking actions. Additionally, we present the dynamics and a geometric controller for the aerial modular system. Experiments validate the vision-based docking method with successful results.
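The essence of vision-based docking is a servo loop on the visually estimated relative pose: align laterally first, then close the gap. The hypothetical sketch below assumes some perception pipeline (e.g., fiducial tag detection) has already produced the partner's relative position; the gains and tolerance are illustrative.

```python
import numpy as np

def docking_velocity(rel_pos, k_lat=1.0, v_approach=0.2, align_tol=0.03):
    """Align laterally first; approach only once aligned."""
    lateral = rel_pos[:2]                    # x, y error in body frame (m)
    cmd = np.zeros(3)
    cmd[:2] = -k_lat * lateral               # P control on lateral error
    if np.linalg.norm(lateral) < align_tol:  # aligned: move in to dock
        cmd[2] = -v_approach
    return cmd

rel_pos = np.array([0.05, -0.02, 0.40])      # partner module 40 cm ahead
cmd = docking_velocity(rel_pos)              # still aligning at this offset
```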

"The Phoenix Drone: An Open-Source Dual-Rotor Tail-Sitter Platform for Research and Education," by Yilun Wu, Xintong Du, Rikky Duivenvoorden, and Jonathan Kelly from University of Toronto Institute for Aerospace Studies, Canada.

In this paper, we introduce the Phoenix drone: the first completely open-source tail-sitter micro aerial vehicle (MAV) platform. The vehicle has a highly versatile, dual-rotor design and is engineered to be low-cost and easily extensible/modifiable. Our open-source release includes all of the design documents, software resources, and simulation tools needed to build and fly a high-performance tail-sitter for research and educational purposes. The drone has been developed for precision flight with a high degree of control authority. Our design methodology included extensive testing and characterization of the aerodynamic properties of the vehicle. The platform incorporates many off-the-shelf components and 3D-printed parts, in order to keep the cost down. Nonetheless, the paper includes results from flight trials which demonstrate that the vehicle is capable of very stable hovering and accurate trajectory tracking. Our hope is that the open-source Phoenix reference design will be useful to both researchers and educators. In particular, the details in this paper and the available open-source materials should enable learners to gain an understanding of aerodynamics, flight control, state estimation, software design, and simulation, while experimenting with a unique aerial robot.

"Locomotion Dynamics of a Miniature Wave-Like Robot, Modeling and Experiments," by Lee-Hee Drory and David Zarrouk from Ben Gurion University of the Negev, Israel.

In a recent study, we developed a minimally actuated wave-like robot and analyzed its kinematics. In this paper, we present the dynamic locomotion analysis of a miniature version of this wave robot. We examine different crawling environments, determine under which conditions it can advance, and evaluate its propulsion force. We first developed two locomotion models to characterize the cases where the robot is crawling between two straight surfaces or over a single flat surface. We specified the conditions in which the robot will advance and the advance time ratio as a function of the friction forces and weight of the robot. Next, we developed highly flexible tube-like shapes that we molded from silicone rubber to experimentally test the forces acting on the robot inside these tubes. Finally, we designed a miniature model of the robot and experimentally validated its crawling conditions (see video).

"Soft Hands with Embodied Constraints: The Soft ScoopGripper," by G. Salvietti, Z. Iqbal, M. Malvezzi, T. Eslami, and D. Prattichizzo from Universita degli Studi di Siena, Italy, and Istituto Italiano di Tecnologia, Genoa, Italy.

The design of robotic grippers must satisfy several contrasting requirements. Research in underactuated soft hands is a lively topic, with several potentialities and challenges. Soft hands are simple, robust, and able to adapt to uncertain environments and operating conditions; however, their intrinsic compliance and underactuation reduce control capabilities and precision. Recent studies attempted to compensate for this limitation by wisely exploiting environmental constraints, considering them as supports for accomplishing the task rather than obstacles to avoid. The development of grasp primitives that take environmental features into account has led to interesting and encouraging results. In this paper, we propose to embed the positive aspects of environmental-constraint exploitation in the hand itself. We present a modular underactuated soft hand to which we added a scoop as a feature of the palm, which simplifies object grasping. The scoop allows the hand to grasp objects in narrow spaces, augments the possible contact areas, and enables more robust grasps with lower forces. The paper illustrates the main design principles, a prototype, and experimental results.

"Dynamic Walking on Slippery Surfaces: Demonstrating Stable Bipedal Gaits with Planned Ground Slippage," by Wen-Loong Ma, Yizhar Or, and Aaron D. Ames from California Institute of Technology, Pasadena, Calif., USA, and Israel Institute of Technology, Haifa, Israel.

Dynamic bipedal robot locomotion has achieved remarkable success due in part to recent advances in trajectory generation and nonlinear control for stabilization. A key assumption utilized in both theory and experiments is that the robot’s stance foot always makes no-slip contact with the ground, including at impacts. This assumption breaks down on slippery low-friction surfaces, as commonly encountered in outdoor terrains, leading to failure and loss of stability. In this work, we extend the theoretical analysis and trajectory optimization to account for stick-slip transitions at point foot contact using Coulomb’s friction law. Using the AMBER-3M planar biped robot as an experimental platform, we demonstrate for the first time a slippery walking gait that can be stabilized successfully both on a lubricated surface and on a rough no-slip surface. We also study the influence of foot slippage on reducing the mechanical cost of transport, and compare energy efficiency in both numerical simulation and experimental measurement.
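The contact model at the heart of this work is the standard Coulomb friction law: the foot sticks while the required tangential force stays inside the friction cone, and slips otherwise, with friction opposing the sliding. A minimal sketch of that stick-slip check, with made-up numbers:

```python
import numpy as np

def contact_mode(f_tan_required, f_normal, v_slip, mu=0.1):
    """Return the friction force and whether the contact sticks or slips."""
    if abs(v_slip) < 1e-6 and abs(f_tan_required) <= mu * f_normal:
        return f_tan_required, "stick"       # inside the friction cone
    direction = v_slip if v_slip else f_tan_required
    return -mu * f_normal * np.sign(direction), "slip"

f, mode = contact_mode(f_tan_required=8.0, f_normal=50.0, v_slip=0.0)
print(mode)   # "slip": on a lubricated surface with mu = 0.1, 8.0 > 0.1 * 50
```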

"WheeLeR: Wheel-Leg Reconfigurable Mechanism with Passive Gears for Mobile Robot Applications," by Chuanqi Zheng and Kiju Lee from Case Western Reserve University, Cleveland, Ohio, USA.

This paper presents a new passive wheel-leg transformation mechanism and its embodiment in a small mobile robot. The mechanism is based on a unique geared structure, allowing the wheel to transform between two modes, i.e., wheel or leg, potentially adapting to varying ground conditions. It consists of a central gear and legs with partial gears that rotate around the central gear to open or close the legs. When fully closed, the mechanism forms a seamless circular wheel; when opened, it operates in the leg mode. The central gear actuated by the driving motor generates opening and closing motions of the legs without using an additional actuator. The number of legs, their physical size, and the gear ratio between the central gear and the partial gears on the legs are adjustable. This design is mechanically simple, customizable, and easy to fabricate. For physical demonstration and experiments, a mobile robotic platform was built and its terrainability was tested using five different sets of the transformable wheels with varying sizes and gear ratios. For each design, the performance with successful wheel-leg transformation, obstacle climbing, and locomotion capabilities was tested in different ground conditions.
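The transformation kinematics are pleasingly simple: because the partial gears on the legs mesh with the central gear, the leg opening angle is the central gear's rotation scaled by the tooth-count ratio. The toy sketch below illustrates the relation with made-up tooth counts and a saturation at fully open; it is not the paper's actual geometry.

```python
def leg_opening_angle(central_angle_deg, n_central=40, n_leg=20,
                      max_open_deg=90.0):
    """Leg angle for a given central-gear rotation, saturated when open."""
    ratio = n_central / n_leg                # 2:1 -> legs open twice as fast
    return min(central_angle_deg * ratio, max_open_deg)

# Driving the central gear 45 degrees fully opens the legs at a 2:1 ratio.
print(leg_opening_angle(45.0))   # 90.0
```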

"UAV/UGV Autonomous Cooperation: UAV Assists UGV to Climb a Cliff by Attaching a Tether," by Takahiro Miki, Petr Khrapchenkov, and Koichi Hori from University of Tokyo, Japan.

This paper proposes a novel cooperative system for an Unmanned Aerial Vehicle (UAV) and an Unmanned Ground Vehicle (UGV) which utilizes the UAV not only as a flying sensor but also as a tether attachment device. The two robots are connected with a tether, allowing the UAV to anchor the tether to a structure located at the top of steep terrain that is impossible for the UGV to reach. This enhances the poor traversability of the UGV by not only providing a wider range of scanning and mapping from the air, but also by allowing the UGV to climb steep terrain by winding the tether. In addition, we present an autonomous framework for the collaborative navigation and tether attachment in an unknown environment. The UAV employs visual inertial navigation with 3D voxel mapping and obstacle avoidance planning. The UGV makes use of the voxel map and generates an elevation map to execute path planning based on a traversability analysis. Furthermore, we compared the pros and cons of possible methods for the tether anchoring from multiple points of view. To increase the probability of successful anchoring, we evaluated the anchoring strategy with an experiment. Finally, the feasibility and capability of our proposed system were demonstrated by an autonomous mission experiment in the field with an obstacle and a cliff.
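The UGV's traversability analysis can be pictured as thresholding local slope on the elevation map: cells steeper than the vehicle can climb unaided are reachable only with tether assist. The sketch below is a generic slope-threshold version under that assumption, not the authors' implementation; the 25-degree limit is invented.

```python
import numpy as np

def traversability(elevation, cell_size=0.1, max_slope_deg=25.0):
    """Boolean map: True where the UGV can drive without tether assist."""
    gy, gx = np.gradient(elevation, cell_size)        # slope per axis
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))   # local incline (deg)
    return slope <= max_slope_deg

elevation = np.outer(np.linspace(0.0, 2.0, 20), np.ones(20))  # a steep ramp
mask = traversability(elevation)
print(mask.any())   # False: the whole ramp exceeds the 25-degree limit
```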

"High-speed, Small-deformation Catching of Soft Objects based on Active Vision and Proximity Sensing," by Keisuke Koyama, Kenichi Murakami, Taku Senoo, Makoto Shimojo, and Masatoshi Ishikawa from University of Tokyo, Japan.

In this paper, we propose a combination of sensing and control modules for catching soft objects (i.e., a marshmallow and paper balloon) at a high speed with small deformation. A paper balloon and marshmallow are deformed by a small force and they have individual differences in terms of softness. Therefore, model-based, small-deformation catching is difficult. To realize small-deformation catching, we used high-speed sensor-based (vision and proximity) control modules without a deformation model of the soft objects. A high-speed vision-based controller adjusts the positions of the fingertips of a robot to some extent. As the distance to the object decreased, the fingertip positions were accurately controlled by the high-speed, high-precision proximity-based control. Furthermore, the fingertips were stopped by a proximity-based contact detection before the object surface was deformed. Virtual damping control was effective in catching an object whose surface can be easily deformed by a small impact force or vibration of the fingertips, such as a paper balloon.
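A stripped-down version of the proximity-driven approach: scale the approach speed with the measured gap, damp fingertip vibration with a virtual damper, and stop at the first contact detection, before the surface deforms. The gains and structure below are illustrative assumptions, not the authors' controller.

```python
def fingertip_velocity(distance_mm, fingertip_vel, contact, k=0.05, d=0.3):
    """Velocity command from proximity distance with virtual damping."""
    if contact:
        return 0.0                      # stop immediately at first contact
    v_approach = k * distance_mm        # slow down as the gap closes
    return v_approach - d * fingertip_vel   # damp out fingertip vibration

v = fingertip_velocity(distance_mm=10.0, fingertip_vel=0.1, contact=False)
```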

"Keep Rollin' – Whole-Body Motion Control and Planning for Wheeled Quadrupedal Robots," by Marko Bjelonic, C. Dario Bellicoso, Yvain de Viragh, Dhionis Sako, F. Dante Tresoldi, Fabian Jenelten and Marco Hutter from ETH Zurich, Switzerland.

We show dynamic locomotion strategies for wheeled quadrupedal robots, which combine the advantages of both walking and driving. The developed optimization framework tightly integrates the additional degrees of freedom introduced by the wheels. Our approach relies on a zero-moment-point-based motion optimization which continuously updates reference trajectories. The reference motions are tracked by a hierarchical whole-body controller which computes optimal generalized accelerations and contact forces by solving a sequence of prioritized tasks, including the nonholonomic rolling constraints. Our approach has been tested on ANYmal, a quadrupedal robot that is fully torque-controlled, including the non-steerable wheels attached to its legs. We conducted experiments on flat and inclined terrain as well as over steps, and we show that integrating the wheels into the motion control and planning framework results in intuitive motion trajectories that enable more robust and dynamic locomotion compared to other wheeled-legged robots. Moreover, with a speed of 4 m/s and an 83% reduction in the cost of transport, we demonstrate the advantages of wheeled-legged robots compared to their legged counterparts.
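The zero-moment point that such motion optimizers constrain has a compact closed form in the usual constant-height (cart-table) model: the ZMP is the CoM's ground projection shifted opposite to its acceleration. A minimal sketch of that standard relation:

```python
def zmp(x_com, x_com_ddot, z_com, g=9.81):
    """x-coordinate of the ZMP for a constant-height CoM model."""
    return x_com - (z_com / g) * x_com_ddot

# Accelerating the CoM forward shifts the ZMP backward, so the optimizer
# must keep this point inside the support region spanned by the wheels.
print(zmp(x_com=0.0, x_com_ddot=2.0, z_com=0.5))   # about -0.102 m
```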
