Video Friday: Weightlifting Robot, Surprised Android, and More From IROS 2017

Your weekly selection of awesome robot videos

14 min read

Erico Guizzo is IEEE Spectrum's Digital Innovation Director.

Japanese researchers are developing a disaster-response humanoid robot that can climb ladders, crawl through rubble, and lift 120 kilograms.
Image: Waseda University and Mitsubishi Heavy Industries

Another year, another massive amount of awesome robot research presented at IROS! If you’ve missed our coverage this week, check out our stories on transforming dodecapods, skydiving robotic cameras, and humanoids that don’t mind falling down.

And we’re not done. We’ll be bringing you many more IROS posts over the next week or two, and today we’re stuffing Video Friday with 20 IROS videos along with their titles, authors, and abstracts. Note that this is an arbitrary selection: If your video is not here, it doesn’t mean we didn’t like your robot; in fact, we like all robots. But we had to pick these 20 because we didn’t want to crash your browser by embedding all 494 videos, right? 😉

Next week we’ll return to normal Video Friday, so if you have video suggestions, keep them coming as usual. 

“A Four-Limbed Disaster-Response Robot Having High Mobility Capabilities in Extreme Environments,” by Kenji Hashimoto, Takashi Matsuzawa, Tomotaka Teramachi, Kazuhito Uryu, Xiao Sun, Shinya Hamamoto, Ayanori Koizumi, and Atsuo Takanishi from Waseda University and Mitsubishi Heavy Industries.

This paper describes a novel four-limbed robot with high mobility in extreme environments. At disaster sites, a robot must move through various types of environments, such as rough terrain that may collapse, narrow spaces, stairs, and vertical ladders. In this paper, we first categorize extreme environments using three indices: unevenness, narrowness, and inclination. To move in such extreme environments, we propose a four-limbed robot with multiple locomotion styles, including bipedal and quadrupedal walking, crawling, and ladder climbing. The main contribution of this paper is the concept and hardware design of the four-limbed robot. We developed a prototype with commonly structured limbs. The whole body has 29 degrees of freedom (DoF): 7 in each limb and 1 in the trunk. The robot weighs 110 kg and stands 1,290 mm tall on two legs. Each end-effector has a hook-like shape. We verify the prototype through simulations and experiments.

“Downwash-Aware Trajectory Planning for Large Quadrotor Teams,” by James A. Preiss, Wolfgang Hoenig, Nora Ayanian, and Gaurav S. Sukhatme from University of Southern California.

We describe a method for formation-change trajectory planning for large quadrotor teams in obstacle-rich environments. Our method decomposes the planning problem into two stages: a discrete planner operating on a graph representation of the workspace, and a continuous refinement that converts the non-smooth graph plan into a set of C^k-continuous trajectories, locally optimizing an integral-squared-derivative cost. We account for the downwash effect, allowing safe flight in dense formations. We demonstrate the computational efficiency in simulation with up to 200 robots and the physical plausibility in an experiment with 32 nano-quadrotors. Our approach can compute safe and smooth trajectories for hundreds of quadrotors in dense environments with obstacles in a few minutes.
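
The downwash constraint is the interesting bit here: a quadrotor flying beneath another gets hit by its rotor wash, so vertical clearance has to be much larger than horizontal clearance. A common way to encode that is to wrap each robot in a tall ellipsoid and check scaled distances. Here's a minimal Python sketch of that generic model (not the authors' code, and the clearance radii are made-up values):

```python
import numpy as np

# Assumed clearance radii (illustrative, not from the paper).
HORIZ_RADIUS = 0.15   # m, horizontal clearance per robot
VERT_RADIUS = 0.60    # m, vertical clearance covering the downwash column

def downwash_safe(p_a: np.ndarray, p_b: np.ndarray) -> bool:
    """Return True if robots at positions p_a, p_b (x, y, z) are safely
    separated under the ellipsoidal downwash model."""
    d = p_a - p_b
    # Scale each axis by its clearance radius; the pair is safe when the
    # scaled distance exceeds 2 (each robot carries one unit ellipsoid).
    scaled = np.array([d[0] / HORIZ_RADIUS, d[1] / HORIZ_RADIUS, d[2] / VERT_RADIUS])
    return bool(np.linalg.norm(scaled) >= 2.0)

print(downwash_safe(np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.0, 1.0])))  # side by side: True
print(downwash_safe(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 0.7])))  # stacked: False
```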

“Motion Evaluation of a Modified Multi-link Robotic Rat,” by Chang Li, Qing Shi, Kang Li, Mingjie Zou, Hiroyuki Ishii, Atsuo Takanishi, Qiang Huang, and Toshio Fukuda from Beijing Institute of Technology and Waseda University.

Interaction tests between a robotic rat and a living rat are considered a possible way to quantitatively characterize rat sociality. In such robot-rat interactions, the robot should fully replicate a real rat in terms of morphological and behavioral characteristics. To address this problem, we modified a multi-jointed robot prototype based on our previous work. We updated the robot's forelimbs and redesigned the control board to make it more dexterous and increase its behavioral capability. We then systematically and kinematically analyzed the rotational range of the joint variables and the workspace of the robot using a traversal method. To evaluate the motion capability of the modified robot, we propose two quantitative parameters: maximum reachable height (MRH) and minimum bendable distance (MBD). Additionally, we quantitatively evaluate the behavioral similarity between the robot and a rat via the accumulative distance (AD) computed using dynamic time warping (DTW). These evaluation methods show strong promise for making robot-rat interaction more similar to rat-rat interaction.
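
If you're curious how that accumulative distance works, here's a minimal, generic dynamic time warping sketch in Python. It uses Euclidean point costs and an unconstrained warping window, which may differ from the paper's exact settings:

```python
import numpy as np

def dtw_accumulative_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Accumulative distance between two trajectories of (possibly
    different) lengths, allowing local time stretching/compression."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            # Each cell extends the cheapest of the three allowed warps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Toy example: similar motions sampled at different rates.
robot_traj = np.array([[0.0], [0.5], [1.0], [0.5], [0.0]])
rat_traj = np.array([[0.0], [0.4], [0.9], [1.0], [0.4], [0.1]])
print(dtw_accumulative_distance(robot_traj, rat_traj))
```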

“Toward Low-Flying Autonomous MAV Trail Navigation using Deep Neural Networks for Environmental Awareness,” by Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey Smith, and Stan Birchfield from Nvidia.

We present a micro aerial vehicle (MAV) system, built with inexpensive off-the-shelf hardware, for autonomously following trails in unstructured, outdoor environments such as forests. The system introduces a deep neural network (DNN) called TrailNet for estimating the view orientation and lateral offset of the MAV with respect to the trail center. The DNN-based controller achieves stable flight without oscillations by avoiding overconfident behavior through a loss function that includes both label smoothing and an entropy reward. In addition to the TrailNet DNN, the system utilizes vision modules for environmental awareness, including another DNN for object detection and a visual odometry component that estimates depth for low-level obstacle detection. All vision systems run in real time on board the MAV via a Jetson TX1. We describe the hardware and software used, along with implementation details. We present experiments showing the ability of our system to navigate forest trails more robustly than previous techniques, including autonomous flights of 1 km.
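
That anti-overconfidence loss is worth a closer look. Below is a hedged PyTorch sketch of a loss combining label smoothing with an entropy reward; the weights and the three-way left/center/right class layout are our assumptions, not necessarily TrailNet's exact configuration:

```python
import torch
import torch.nn.functional as F

def trailnet_style_loss(logits, targets, smoothing=0.1, entropy_weight=0.1):
    """Cross-entropy with label smoothing, minus an entropy reward.
    Assumed hyperparameters; not the paper's exact values."""
    n_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # Label smoothing: soften one-hot targets toward a uniform distribution.
    smooth_targets = torch.full_like(log_probs, smoothing / (n_classes - 1))
    smooth_targets.scatter_(-1, targets.unsqueeze(-1), 1.0 - smoothing)
    ce = -(smooth_targets * log_probs).sum(dim=-1).mean()
    # Entropy reward: subtracting entropy discourages near-deterministic
    # outputs, which in turn avoids jerky, overconfident steering.
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    return ce - entropy_weight * entropy

logits = torch.randn(4, 3)            # batch of 4; classes: left/center/right
targets = torch.tensor([0, 1, 1, 2])
print(trailnet_style_loss(logits, targets))
```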

“Bipedal Oriented Whole Body Master-Slave System for Dynamic Secured Locomotion with LIP Safety Constraints,” by Yasuhiro Ishiguro, Kunio Kojima, Fumihito Sugai, Shunichi Nozawa, Yohei Kakiuchi, Kei Okada, and Masayuki Inaba from the University of Tokyo.

In this study, we propose a novel method to operate the whole body of a humanoid robot, including both feet, dynamically and safely with a master-slave approach. Conventional whole-body master-slave approaches require a static balancing assumption or a certain amount of planning time after the operator's input. We instead introduce a set of constraints that allows the robot to execute everyday dynamic bipedal locomotion while forbidding dangerous motions, such as the center of mass (COM) leaving the support region. These constraints regulate COM velocity based on the position of the Divergent Component of Motion (Capture Point) relative to both feet, and automatically modify the swing-foot contact timing by judging whether the ZMP is inside or outside the single-foot support region. Finally, we conducted real-time master-slave locomotion experiments using two life-sized humanoid robots and confirmed the effectiveness of our constraint methods.
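
The Capture Point check at the heart of this is simple to state: under Linear Inverted Pendulum (LIP) dynamics, the Divergent Component of Motion is xi = x + x_dot / omega with omega = sqrt(g / z_com), and things stay safe as long as xi remains inside the support region. Here's a minimal Python sketch (a generic formulation with a box-shaped support region, not the authors' exact constraint set):

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def capture_point(com: np.ndarray, com_vel: np.ndarray, z_com: float) -> np.ndarray:
    """DCM / Capture Point in the horizontal plane under LIP dynamics."""
    omega = np.sqrt(G / z_com)
    return com + com_vel / omega

def inside_support(point: np.ndarray, support_min: np.ndarray,
                   support_max: np.ndarray) -> bool:
    # Approximate the support region as an axis-aligned box around the
    # feet (a simplification of a true support polygon).
    return bool(np.all(point >= support_min) and np.all(point <= support_max))

com = np.array([0.02, 0.0])        # horizontal COM position, m
com_vel = np.array([0.25, 0.0])    # horizontal COM velocity, m/s
xi = capture_point(com, com_vel, z_com=0.85)
print(xi, inside_support(xi, np.array([-0.10, -0.15]), np.array([0.15, 0.15])))
```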

“Development of a Human Size and Strength Compliant Bi-Manual Platform for Realistic Heavy Manipulation Tasks,” by Lorenzo Baccelliere, Navvab Kashiri, Luca Muratore, Arturo Laurenzi, Małgorzata Kamedula, Alessio Margan, Stefano Cordasco, Jörn Malzahn, and Nikos G. Tsagarakis from the Istituto Italiano di Tecnologia.

Developing a robotic manipulation platform with high physical performance, considerable power density, strength, and resilience is not a trivial task; it frequently leads to heavy, bulky systems that cannot meet application requirements such as human body-size compatibility, which is needed to work in infrastructure designed for humans. In this work we present a new high-performance bi-manual manipulation platform, compatible with human size and weight, that demonstrates notable physical strength and power. To attain this performance, the design incorporates custom high-performance elastic drives and robust, lightweight structural principles, resulting in a payload-to-mass ratio greater than 1.5 for short-duration heavy payloads. We present the design principles and mechanics of the upper-body bi-manual robot, with details on the solutions adopted for the various mechatronic components. The performance of the actuation system and the strength of the overall platform are verified through heavy-payload motion and impact experiments.

“A Small Hybrid Ground-Air Vehicle Concept,” by Scott Morton and Nikolaos Papanikolopoulos from the University of Minnesota.

Small robots benefit from the ability to go places where humans cannot, and they are attractive for practical reasons such as portability and manufacturing simplicity. However, with smaller scale comes more difficulty traversing rough terrain, especially for robots that use wheel-based locomotion. Previous approaches to overcoming this drawback have included auxiliary mechanisms such as jumping, transformations of the robot or its appendages, and alternative forms of locomotion such as aerial flight. This paper presents a small-scale robot capable of both ground travel and aerial flight. In combination, these modes of locomotion allow for efficient ground-based movement as well as the ability to overcome obstacles and explore otherwise unreachable locations through air travel. The novel aspect of the robot design is a transformation between ground and air configurations. This feature offers advantages over previous approaches, such as a highly compact ground configuration and protection of delicate flight hardware when not in use. In this paper, the robot concept is compared to other approaches for addressing ground-robot mobility drawbacks. This is followed by a detailed design description, with a focus on the transformation between ground and air modes. Lastly, a fully functional prototype is presented that is capable of ground and air locomotion and of transforming between these configurations.

“Motion Analysis in Vocalized Surprise Expressions and Motion Generation in Android Robots,” by Carlos T. Ishi, Takashi Minato, and Hiroshi Ishiguro from ATR Hiroshi Ishiguro Labs.

Surprise expressions often occur in dialogue interactions, and they are often accompanied by verbal interjections. We are tackling the challenge of generating natural, human-like motion during speech in android robots that have a highly human-like appearance. In this study, we focus on the analysis and generation of motion for vocalized surprise expressions. We first analyze facial, head, and body motions during vocalized surprise in human-human dialogue interactions. The results indicate that motion types differ across different types and degrees of surprise expression. We then propose motion-generation methods based on these results and evaluate the different modalities (eyebrows/eyelids, head, and torso) and different motion-control levels through subjective experiments. Evaluation results indicate the importance of each modality in the perception of surprise degree, naturalness, and the spontaneous versus intentional expression of surprise.

“Snapbot: a Reconfigurable Legged Robot,” by Joohyung Kim, Alexander Alspach, and Katsu Yamane from Disney Research.

We develop a reconfigurable legged robot, named Snapbot, to emulate configuration changes and various styles of legged locomotion. The body of Snapbot houses a microcontroller and a battery for untethered operation. The body also contains connections for communication and power to the modular legs. The legs can be attached to and detached from the body using magnetic mechanical couplings. In the center of this coupling, there is a multi-pin spring-loaded electrical connector that distributes power and transmits data between the controller and leg actuators. The locomotion algorithm is implemented on the microcontroller. The algorithm enables Snapbot to locomote in various configurations with one to six legs by recognizing configuration changes and selecting the locomotion method according to the current configuration. Snapbot will be utilized for further research on legged locomotion.
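
The configuration-recognition idea is easy to picture in code. Here's a purely hypothetical Python sketch of the dispatch logic; the socket-probing function, gait names, and the mapping from leg count to gait are illustrative assumptions, not Disney's actual firmware:

```python
# Six leg sockets around the body, as on Snapbot.
LEG_SOCKETS = range(6)

def detect_configuration(probe_socket) -> frozenset:
    """probe_socket(i) -> True if a leg module answers on socket i
    (hypothetical probe over the spring-loaded electrical connector)."""
    return frozenset(i for i in LEG_SOCKETS if probe_socket(i))

# Illustrative gait table keyed by number of attached legs.
GAITS = {
    1: "inchworm-like dragging",
    2: "alternating push",
    3: "tripod-like crawl",
    4: "trot",
    5: "wave gait",
    6: "alternating tripod",
}

def select_gait(config: frozenset) -> str:
    """Pick a locomotion method for the current configuration."""
    return GAITS.get(len(config), "idle")

# Example: legs attached on sockets 0, 2, and 4.
config = detect_configuration(lambda i: i in {0, 2, 4})
print(select_gait(config))  # -> "tripod-like crawl"
```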

“Direct Visual SLAM Fusing Proprioception for a Humanoid Robot,” by Raluca Scona, Simona Nobili, Yvan R. Petillot, and Maurice Fallon from University of Edinburgh, Heriot-Watt University, and University of Oxford.

In this paper we investigate the application of semi-dense visual Simultaneous Localisation and Mapping (SLAM) to the humanoid robotics domain. Challenges of visual SLAM applied to humanoids include the type of dynamic motion executed by the robot, a lack of features in man-made environments, and the presence of dynamics in the scene. Previous research on humanoid SLAM focused mostly on feature-based methods, which result in sparse environment reconstructions. Instead, we investigate the application of a modern direct method to obtain a semi-dense, visually interpretable map which can be used for collision-free motion planning. We tackle the challenge of using direct visual SLAM on a humanoid by proposing a more robust pose tracking method. This is formulated as an optimisation problem over a cost function which combines information from the stereo camera and a low-drift kinematic-inertial motion prior. Extensive experimental demonstrations characterise the performance of our method using the NASA Valkyrie humanoid robot in a laboratory environment equipped with a Vicon motion capture system. Our experiments demonstrate pose tracking robustness to challenges such as sudden view change, motion blur in the image, change in illumination, and tracking through sequences of featureless areas in the environment. Finally, we provide a qualitative evaluation of our stereo reconstruction against a LIDAR map.
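
The core trick, fusing the camera with proprioception, boils down to one cost function. Below is a toy Python sketch of that idea: a photometric term plus a weighted penalty for straying from the kinematic-inertial prior. The residual forms and weight are generic assumptions, not the authors' exact formulation:

```python
import numpy as np

def tracking_cost(pose: np.ndarray, photometric_error, prior_pose: np.ndarray,
                  prior_weight: float = 0.5) -> float:
    """Fused pose-tracking objective: photometric error from the camera
    plus deviation from the kinematic-inertial motion prior.
    pose and prior_pose are 6-vectors (twist coordinates, assumed)."""
    r_photo = photometric_error(pose)     # stacked per-pixel intensity residuals
    r_prior = pose - prior_pose           # disagreement with proprioception
    return float(r_photo @ r_photo + prior_weight * (r_prior @ r_prior))

# Toy example: the photometric term is flat and uninformative (think
# motion blur or a featureless wall), so the prior anchors the estimate.
photometric_error = lambda pose: 0.01 * np.ones(4)
prior_pose = np.zeros(6)
for candidate in (np.zeros(6), 0.2 * np.ones(6)):
    print(tracking_cost(candidate, photometric_error, prior_pose))
```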

“Robotic Ironing with 3D Perception and Force/Torque Feedback in Household Environments,” by David Estevez, Juan G. Victores, Raul Fernandez-Fernandez, and Carlos Balaguer from Universidad Carlos III de Madrid.

As robotic systems become more popular in household environments, the complexity of the tasks required of them also increases. In this work we focus on a domestic chore deemed dull by a majority of the population: ironing. The presented algorithm improves on the limited number of previous works by joining 3D perception with force/torque sensing, with emphasis on finding a practical solution that is feasible to implement in a domestic setting. Our algorithm obtains a point cloud representation of the working environment. From this point cloud, the garment is segmented and a custom Wrinkleness Local Descriptor (WiLD) is computed to locate wrinkles. Using this descriptor, the most suitable ironing path is computed, and the manipulation algorithm then performs the force-controlled ironing operation along it. Experiments performed with a humanoid robot platform show that our algorithm successfully detects wrinkles in garments and iteratively reduces wrinkleness using an unmodified iron.
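
WiLD itself is custom to the paper, but the general idea of a wrinkleness descriptor is easy to illustrate. In this hypothetical Python sketch, a point's score is simply the height variance of its local neighborhood on the segmented garment point cloud, so flat (ironed) patches score near zero and wrinkled patches score higher:

```python
import numpy as np

def wrinkleness_scores(points: np.ndarray, radius: float = 0.02) -> np.ndarray:
    """Per-point wrinkleness: variance of z within an xy-neighborhood.
    A toy stand-in for a descriptor like WiLD, not the paper's method."""
    scores = np.zeros(len(points))
    for i, p in enumerate(points):
        mask = np.linalg.norm(points[:, :2] - p[:2], axis=1) < radius
        scores[i] = points[mask][:, 2].var()  # height variance in the patch
    return scores

rng = np.random.default_rng(0)
flat = np.column_stack([rng.uniform(0, 0.1, 200),
                        rng.uniform(0, 0.1, 200),
                        np.zeros(200)])
wrinkled = flat.copy()
wrinkled[:, 2] = 0.005 * np.sin(200 * wrinkled[:, 0])  # add ripples
print(wrinkleness_scores(flat).mean(), wrinkleness_scores(wrinkled).mean())
```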

“1D Printing of Recyclable Robots,” by Daniel Cellucci, Robert MacCurdy, Hod Lipson, and Sebastian Risi from Cornell University, MIT, Columbia University, and IT University of Copenhagen.

Recent advances in 3D printing are revolutionizing manufacturing, enabling the fabrication of structures with unprecedented complexity and functionality. Yet biological systems are able to fabricate systems with far greater complexity using a process that involves assembling and folding a linear string. Here, we demonstrate a 1D printing system that uses an approach inspired by the ribosome to fabricate a variety of specialized robotic automata from a single string of source material. This proof-of-concept system involves both a novel manufacturing platform that configures the source material using folding and a computational optimization tool that allows designs to be produced from the specification of high-level goals. We show that our 1D printing system is able to produce three distinct robots from the same source material, each of which is capable of accomplishing a specialized locomotion task. Moreover, we demonstrate the ability of the printer to use recycled material to produce new designs, enabling an autonomous manufacturing ecosystem capable of repurposing previous iterations to accomplish new tasks.

“On-Line Simultaneous Learning and Recognition of Everyday Activities from Virtual Reality Performances,” by Tamas Bates, Karinne Ramirez-Amaro, Tetsunari Inamura, and Gordon Cheng from Technical University of Munich and Japan’s National Institute of Informatics.

Capturing realistic human behaviors is essential for learning human models that can later be transferred to robots. Recent improvements in virtual reality (VR) head-mounted displays provide a viable way to collect natural examples of human behavior without the difficulties often associated with capturing performances in a physical environment. We present a realistic, cluttered VR environment for experimentation with household tasks, paired with a semantic extraction and reasoning system that utilizes data collected in real time and applies ontology-based reasoning to learn and classify activities performed in VR. The system performs continuous segmentation of the motions of users’ hands and simultaneously classifies known actions while learning new ones on demand. The system then constructs a graph of all related activities in the environment through its observations, extracting the task space utilized by observed users during their performance. The action recognition and learning system maintained an accuracy of around 92% while dealing with a more complex and realistic environment than earlier work in both physical and virtual spaces.

“A Multimodal Execution Monitor with Anomaly Classification for Robot-Assisted Feeding,” by Daehyung Park, Hokeun Kim, Yuuna Hoshi, Zackory Erickson, Ariel Kapusta, and Charles C. Kemp from Georgia Tech.

Activities of daily living (ADLs) are important for quality of life. Robotic assistance offers the opportunity for people with disabilities to perform ADLs on their own. However, when a complex semi-autonomous system provides real-world assistance, occasional anomalies are likely to occur. Robots that can detect, classify and respond appropriately to common anomalies have the potential to provide more effective and safer assistance. We introduce a multimodal execution monitor to detect and classify anomalous executions when robots operate near humans. Our system builds on our past work on multimodal anomaly detection. Our new monitor classifies the type and cause of common anomalies using an artificial neural network. We implemented and evaluated our execution monitor in the context of robot-assisted feeding with a general-purpose mobile manipulator. In our evaluations, our monitor outperformed baseline methods from the literature. It succeeded in detecting 12 common anomalies from 8 able-bodied participants with 83% accuracy and classifying the types and causes of the detected anomalies with 90% and 81% accuracies, respectively. We then performed an in-home evaluation with Henry Evans, a person with severe quadriplegia. With our system, Henry successfully fed himself while the monitor detected, classified the types, and classified the causes of anomalies with 86%, 90%, and 54% accuracy, respectively.

“Hybrid Control and Learning with Coresets for Autonomous Vehicles,” by Guy Rosman, Liam Paull, and Daniela Rus from MIT.

Modern autonomous systems such as driverless vehicles need to safely operate in a wide range of conditions. A potential solution is to employ a hybrid systems approach, where safety is guaranteed in each individual mode within the system. This shifts complexity and responsibility from the individual controllers onto the problem of determining discrete mode transitions. In this work we propose an efficient framework, based on recursive neural networks and coreset data summarization, to learn the transitions between an arbitrary number of controller modes of arbitrary complexity. Our approach allows us to efficiently gather annotation data from the large-scale datasets required to train such hybrid nonlinear systems to be safe under all operating conditions, favoring underexplored parts of the data.
We demonstrate the construction of the embedding and the efficient detection of switching points for autonomous and non-autonomous car data. We further show how our approach enables efficient sampling of training data to further improve either our embedding or the controllers.
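
Coreset summarization is the part doing the "favoring underexplored parts of the data" work. Here's a minimal Python sketch of one standard construction, greedy k-center selection, which repeatedly picks the least-covered sample; the paper's exact construction may differ:

```python
import numpy as np

def greedy_k_center(data: np.ndarray, k: int) -> np.ndarray:
    """Return indices of k points chosen so every point in `data` is close
    to some chosen point (greedy farthest-point coverage)."""
    chosen = [0]
    dists = np.linalg.norm(data - data[0], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dists))          # farthest (least covered) point
        chosen.append(idx)
        # Coverage distance shrinks wherever the new center is closer.
        dists = np.minimum(dists, np.linalg.norm(data - data[idx], axis=1))
    return np.array(chosen)

rng = np.random.default_rng(1)
features = rng.normal(size=(1000, 8))   # e.g., embeddings of driving segments
summary = greedy_k_center(features, k=20)
print(summary)                          # indices worth sending for annotation
```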

“An Electrostatic Gripper for Flexible Objects,” by Ethan W. Schaler, Donald Ruffatto III, Paul Glick, Victor White, and Aaron Parness from University of California, Berkeley, NASA Jet Propulsion Laboratory, and University of California San Diego.

We demonstrate a flexible, electrostatic adhesive gripper designed to controllably grasp and manipulate soft goods in space. The 8-fingered gripper has 50 cm2 of active electrodes operating at 3 kV. It generates electrostatic adhesion forces up to 3.5 N (0.70 kPa) on Ge-coated polyimide film and 1.2 N on MLI blanket, a film composite used for satellite thermal insulation. Extremely low-force gripper engagement (0.08 N) and release (0.04 N) of films is ideal for micro-gravity. Individual fingers generate shear adhesion forces up to 4.76 N (5.04 kPa) using electrostatic adhesive and 45.0 N (47.6 kPa) with a hybrid electrostatic / gecko adhesive. To simulate a satellite servicing task, the gripper was mounted on a 7-DoF robot arm and performed a supervised grasp, manipulate, and release sequence on a hanging, Al-coated PET film.

“A Novel Bioinspired Hexapod Robot Developed By Soft Dielectric Elastomer Actuators,” by Canh Toan Nguyen, Hoa Phung, Phi Tien Hoang, Tien Dat Nguyen, Hosang Jung, Hyungpil Moon, Ja Choon Koo, and Hyouk Ryeol Choi from Sungkyunkwan University.

This paper presents a hexapod crawling robot with a bioinspired design and insect-like locomotion posture. The robot, called S-Hex II, is an upgraded version of our printable monolithic hexapod robot, S-Hex I. S-Hex II was developed as part of a project on mesoscale walking robots actuated by soft dielectric elastomer actuators (DEAs). Compared with S-Hex I, S-Hex II is smaller, lighter, and significantly faster. We increase the total number of degrees of freedom (DOF) provided by the DEAs from three in S-Hex I to five in S-Hex II, giving the robot more versatile locomotion. The robot has successfully demonstrated back-and-forth ambulation on flat surfaces using an alternating tripod gait, walking at 52 mm/s (approximately 0.35 body lengths per second) at an actuation frequency of 7 Hz.

“Flippy: A Soft, Autonomous Climber with Simple Sensing and Control,” by Melinda Malley, Michael Rubenstein, and Radhika Nagpal from Harvard University and Northwestern University.

Climbing robots have many potential applications, including maintenance, monitoring, search and rescue, and self-assembly. While numerous climbing designs have been investigated, most are limited to stiff components. Flippy is a small, flipping biped robot with a soft, flexible body and on-board power and control. Due to its built-in compliance, flipping gait, and corkscrew gripper, it can autonomously climb up and down surfaces held at any angle relative to gravity and transition from one surface to another, without complex sensing or control. In this paper, we demonstrate the robot’s ability to flip consistently over a flat Velcro surface and a 2D Velcro track, where it reliably climbs vertically, upside down, and back to a flat surface, completing all the interior transitions in between.

“Bridge Risk Investigation Diagnostic Grouped Exploratory (BRIDGE) Bot,” by Aaron Sirken, Gedaliah Knizhnik, Jessica McWilliams, and Sarah Bergbreiter from University of Maryland. 

BRIDGE bot is a 158 g, 10.7 × 8.9 × 6.5 cm, magnetic-wheeled robot designed to traverse and inspect steel bridges. Utilizing custom magnetic wheels, the robot can securely adhere to the bridge in any orientation. The body platform features flexible, multi-material legs that enable a variety of plane transitions as well as robot shape manipulation. The robot is equipped with a Cortex-M0 processor, inertial sensors, and a modular wireless radio. A camera is included to provide images for detecting and evaluating identified problems. The robot has been demonstrated moving through plane transitions from 45° to 340° as well as over obstacles up to 9.5 mm in height. Preliminary use of sensor feedback to improve plane transitions has also been demonstrated.

“Baxter’s Homunculus: Virtual Reality Spaces for Teleoperation in Manufacturing,” by Jeffrey I Lipton, Aidan J Fay, and Daniela Rus from MIT.

We demonstrate a low-cost telerobotic system that leverages commercial virtual reality (VR) technology and integrates it with existing robotics control infrastructure. The system runs on a commercial gaming engine using off-the-shelf VR hardware and can be deployed on multiple network architectures. The system is based on the homunculus model of mind, wherein we embed the user in a virtual reality control room. The control room allows for multiple sensor displays and dynamic mapping between the user and robot. This dynamic mapping allows for selective engagement between the user and the robot. We compared our system with state-of-the-art automation algorithms and standard VR-based telepresence systems in a user study. The study showed that new users were faster and more accurate than the automation or a direct telepresence system. We also demonstrate that our system can be used for pick-and-place, assembly, and manufacturing tasks.
