IROS has just ended in Spain but our coverage continues, and we’ll be bringing you more stories over the next week or two. Today we have a special edition of Video Friday, featuring some of the best videos from the conference.
Next week, Video Friday returns to its normal format, so if you have video suggestions, keep them coming as usual. Enjoy today’s videos!
International Robot Safety Conference – October 9-11, 2018 – Detroit, Mich., USA
Japan Robot Week – October 17-19, 2018 – Tokyo, Japan
Collaborative Robots, Advanced Vision & AI Conference – October 24-25, 2018 – Santa Clara, Calif., USA
ICSR 2018 – November 28-30, 2018 – Qingdao, China
"Extended 3D Walking and Skating Motion Generation for Multiple Non-Coplanar Contacts with Anisotropic Friction: Application to Walk and Skateboard and Roller Skate" by Noriaki Takasugi, Kunio Kojima, Shunichi Nozawa, Fumihito Sugai, Kakiuchi Yohei, Kei Okada, and Masayuki Inaba from the University of Tokyo, Japan.
In this paper, we propose a 3D walking and skating motion generation method to achieve sequential walking and skating motions with a skateboard and roller skates. To generate sequential, stable skating motion using passive wheels, we must deal with non-coplanar contacts with anisotropic friction. We therefore use the Contact Wrench Cone (CWC) to handle complex contact states and introduce novel terminal constraints, based on the CWC and the Divergent Component of Motion (DCM), which guarantee the stability of both walking and skating motion in the 3D Center of Mass (COM) trajectory generation. Applying the proposed method, the life-sized humanoid JAXON could successfully walk at 0.6 m/s and skate on a skateboard and roller skates at 1.0 m/s.
"PaintCopter: An Autonomous UAV for Spray Painting on 3D Surfaces" by Anurag Sai Vempati, Mina Kamel, Nikola Stilinovic, Qixuan Zhang, Dorothea Reusser, Inkyu Sa, Juan Nieto, Roland Siegwart, and Paul Beardsley from Autonomous Systems Lab at ETH Zurich and Disney Research Zurich, Switzerland.
This paper describes a system for autonomous spray painting using a UAV, suitable for industrial applications. The work is motivated by the potential for such a system to achieve accurate and fast painting results. The PaintCopter is a quadrotor that has been custom fitted with an arm plus a spray gun on a pan-tilt mechanism. To enable long deployment times for industrial painting tasks, power and paint are delivered by lines from an external unit. The ability to paint planar surfaces such as walls in a single color is a basic requirement for a spray painting system. But this work addresses more sophisticated operation that subsumes the basic task, including painting on 3D structures and painting of a desired texture appearance. System operation consists of (a) an offline component to capture a 3D model of the target surface, (b) an offline component to design the painted surface appearance and generate the associated robotic painting commands, and (c) a live system that carries out the spray painting. Experimental results demonstrate autonomous spray painting by the UAV, doing area fill and versatile line painting on a 3D surface.
"A Speech-Driven Hand Gesture Generation Method And Evaluation In Android Robots" by Carlos T. Ishi, Daichi Machiyashiki, Ryusuke Mikata, and Hiroshi Ishiguro from ATR Hiroshi Ishiguro Labs, Japan.
Hand gestures commonly occur in daily dialogue interactions and have important functions in communication. We first analyzed multimodal human-human dialogue data and found relations between the occurrence of hand gestures and dialogue act categories. We also conducted a clustering analysis on gesture motion data and associated text information with the gesture motion clusters through gesture function categories. Using the analysis results, we propose a speech-driven gesture generation method that takes text, prosody, and dialogue act information into account. We then implemented hand motion control on an android robot and evaluated the effectiveness of the proposed gesture generation method through subjective experiments. The gesture motions generated by the proposed method were judged to be relatively natural even under the robot's hardware constraints.
"Development of Stone Throwing Robot and High Precision Driving Control for Curling" by Jung Hyun Choi, Changyong Song, Kyunghwan Kim, and Sehoon Oh from Daegu Gyeongbuk Institute of Science and Technology and NT Robot, South Korea.
In this paper, a novel mobile robot developed to play the sport of curling is introduced. The developed robot is a Stone Throwing Robot (STR) that can travel on the ice on wheels, throw a stone, and curl the stone. The STR is developed as the robot component of an Artificial Intelligence (AI) system that can autonomously play curling. The proposed STR can throw a stone at any desired speed and in any desired direction, both determined by the AI system. To achieve this precise driving of the STR and throwing of the stone, a two-dimensional drive control is developed for the STR, which consists of 1) anti-slip control for high traction, 2) precise velocity control, and 3) high-accuracy heading angle control. In addition to a conventional PID controller, model-based feedforward control, Model Following Control (MFC) for anti-slip control of the wheels on the ice, and a Yaw Moment Observer (YMO) for robust heading angle control are applied as key technologies for driving the STR. The design configurations of the STR for detecting its own location and throwing/curling the stone are proposed in this paper, along with the details of the precise driving control.
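To give a feel for the feedforward-plus-feedback structure the abstract describes, here is a minimal Python sketch of a velocity controller combining a model-based feedforward term with PID feedback, tracking a stone-release speed on a toy drivetrain model. The gains, the first-order plant, and the class name are invented for illustration; the paper's actual MFC and YMO designs are considerably more involved.

```python
# Hypothetical sketch: feedforward + PID velocity control, as in the
# drive-control structure described above. All numbers are illustrative.

class FFPIDVelocityController:
    def __init__(self, kp=2.0, ki=0.5, kd=0.05, k_ff=1.0, dt=0.01):
        self.kp, self.ki, self.kd, self.k_ff, self.dt = kp, ki, kd, k_ff, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, v_ref, v_meas):
        """Return a drive command combining feedforward and PID feedback."""
        error = v_ref - v_meas
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        ff = self.k_ff * v_ref  # model-based feedforward term
        fb = self.kp * error + self.ki * self.integral + self.kd * derivative
        return ff + fb

# Simulate tracking a 1.0 m/s release speed on a toy first-order
# drivetrain (time constant 0.5 s), integrated with Euler steps.
dt = 0.01
ctrl = FFPIDVelocityController(dt=dt)
v = 0.0
for _ in range(1000):
    u = ctrl.update(v_ref=1.0, v_meas=v)
    v += (u - v) * dt / 0.5
```

The feedforward term does the bulk of the work when the plant model is accurate, leaving the PID feedback to correct for disturbances such as ice friction variation.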
"Excuse me, May I Say Something?" by Oskar Palinko, Jiro Shimaya, Kristian Hoeck, Kohei Ogawa, Nobuhiro Jinnai, Yuichiro Yoshikawa, and Hiroshi Ishiguro from the Intelligent Robotics Laboratory, Osaka University, Japan, and Department of Social Anthropology, University of Manchester, U.K.
Hiroshi Ishiguro gave a lecture to a group of young students. We employed CommU, a desktop social robot, to manage the questions and answers for the talk. We encouraged the students to ask questions at any time. Half of the classroom was told to ask questions by raising their hands, while the other half was shown an online messaging system developed for CommU, which allows the audience to post questions that the robot then says aloud. We had a gatekeeper monitor for invalid sentences. In the middle of the presentation we asked the students to switch roles. The robot used a neural-network-based estimator of interruptibility to find the best time to speak. We did not expect many questions, but the audience really embraced using the robot: they posted 44 questions to the presenter through CommU. By contrast, they asked only 8 direct questions by raising their hands and standing up. Students felt that they gained more information from the lecturer using the robot than using the conventional method. In this instance we didn't stop the students from asking too many questions, but in a real-world application the gatekeeper will have to play an important role.
"Design and Development of Biaxial Active Nozzle with Flexible Flow Channel for Air Floating Active Scope Camera" by Akihiro Ishii, Yuichi Ambe, Yu Yamauchi, Hisato Ando, Masashi Konyo,Kenjiro Tadakuma, and Satoshi Tadokoro from Tohoku University, Japan.
Long flexible continuum robots have high potential for search and rescue operations that explore deep layered debris. A general problem with these robots is controlling the head motion, because their thin bodies limit the space available to mount multiple actuators. This paper develops a biaxial active nozzle that can rotate the air jet direction about the roll and pitch axes in order to control the direction of the reaction force and the head motion of a long flexible robot. A major challenge is how to change the air jet direction without a large resistance to the flow, which would reduce the reaction force induced by the air jet. We propose a nozzle whose outlet is connected to a flexible air tube. The direction of the air jet is controlled by smooth shape deformation of the tube. The nozzle should be compact enough to be installed on a thin robot, although the shape deformation of the tube may cause buckling. The flexible tube is modeled and simulated with a multiple-link model used to derive the geometric parameters of the nozzle so that the nozzle is compact and the tube does not buckle. Based on the derived parameters, the biaxial active nozzle was developed. A basic performance experiment shows that the nozzle can change the reaction force direction by deforming the tube shape, while the magnitude of the reaction force stays almost constant. We integrated the proposed nozzle with a conventional Active Scope Camera (ASC). The range over which the robot can look around in a vertical exploration was significantly improved, becoming three times larger than in the previous ASC, whose head was controlled by pneumatic actuators. A rubble field test demonstrates that the integrated ASC could move over rubble (maximum height of 200 mm) and steer its course.
"Walking on a Steep Slope Using a Rope by a Life-Size Humanoid Robot" by Masahiro Bando, Masaki Murooka, Shunichi Nozawa, Kei Okada, and Masayuki Inaba from the University of Tokyo, Japan.
In this paper, we propose methods for a humanoid robot to walk on a steep slope using a rope. There are two difficulties in walking on a steep slope without a rope. First, the range of motion of the ankle joints becomes limited. Second, the robot's feet slip on the slope. For these problems, using a rope is an effective solution: the robot can receive enough friction force from the slope and walk on a steep slope by pulling the rope with the proper tension. In addition, a robot pulling a rope on a slope can relax the limitations on its ankle joints. We therefore propose methods to determine the tension of a grasped rope by solving a linear least-squares problem that considers the deformability of the rope. With these methods, the life-size humanoid robot HRP-2 could walk on a steep slope with a 40-degree incline.
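As a rough illustration of the kind of linear least-squares problem involved, here is a toy Python/NumPy sketch that solves a two-unknown static force balance (rope tension and normal force) on a 40-degree slope. The matrices, the robot mass, and the problem size are invented placeholders; the paper's actual formulation includes rope deformability and friction constraints.

```python
import numpy as np

# Hypothetical sketch: recover rope tension and normal contact force on a
# slope as a linear least-squares problem. A and b are illustrative, not
# the paper's actual constraint matrices.

def solve_forces(A, b):
    """Least-squares solution x minimizing ||Ax - b||."""
    x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    return x

theta = np.deg2rad(40.0)   # slope angle
m, g = 60.0, 9.81          # illustrative robot mass [kg], gravity [m/s^2]

# Two equations: force balance along the slope (rope tension vs. gravity)
# and normal to the slope (contact force vs. gravity).
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.array([m * g * np.sin(theta),
              m * g * np.cos(theta)])

tension, normal = solve_forces(A, b)
```

In the real problem the system is over- or under-determined (multiple contacts, rope stretch), which is exactly why a least-squares formulation rather than direct inversion is attractive.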
"Real-Time Dance Generation to Music for a Legged Robot" by Thomas Bi, Peter Fankhauser, Dario Bellicoso, and Marco Hutter from Robotics Systems Lab, ETH Zurich, Switzerland.
The development of robots that can dance has received considerable attention. However, they are often either limited to a pre-defined set of movements and music or demonstrate little variance when reacting to external stimuli, such as microphone or camera input. In this paper, we contribute a novel approach that allows a legged robot to listen to live music while dancing in synchronization with it in a diverse fashion. This is achieved by extracting the beat from an onboard microphone in real time and creating a dance choreography by picking from a user-generated dance motion library at every new beat. Dance motions include various stepping and base motions. The process of picking from the library is governed by a probabilistic model, namely a Markov chain, that depends on the previously picked dance motion and the current music tempo. Finally, delays are determined online by time-shifting a measured signal against a reference signal and minimizing the least-squares error with the time-shift as the parameter. Delays are then compensated for by a combined feedforward and feedback delay controller that shifts the robot's whole-body controller reference input in time. Results from experiments on a quadrupedal robot demonstrate fast convergence and synchrony with the perceived music.
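The beat-synchronized Markov-chain selection can be sketched in a few lines of Python. The motion names, transition weights, and tempo rule below are invented for illustration; the paper's actual library and probabilities come from user-generated motions.

```python
import random

# Hypothetical sketch: pick the next dance motion from a library with a
# Markov chain conditioned on the previous motion and the current tempo.

TRANSITIONS = {
    # current motion -> {next motion: weight}  (invented weights)
    "step_left":  {"step_right": 0.5, "base_sway": 0.3, "step_left": 0.2},
    "step_right": {"step_left": 0.5, "base_sway": 0.3, "step_right": 0.2},
    "base_sway":  {"step_left": 0.4, "step_right": 0.4, "base_sway": 0.2},
}

def next_motion(current, tempo_bpm, rng=random):
    """Sample the next motion; at fast tempos, discourage repetition."""
    weights = dict(TRANSITIONS[current])
    if tempo_bpm > 140:
        weights[current] *= 0.5
    motions = list(weights)
    return rng.choices(motions, weights=[weights[m] for m in motions])[0]

# Build a short choreography: one motion per detected beat.
motion = "base_sway"
choreography = []
for beat in range(8):
    motion = next_motion(motion, tempo_bpm=128)
    choreography.append(motion)
```

Conditioning each pick only on the previous motion keeps the model cheap enough to run at every beat while still avoiding jarring motion-to-motion transitions.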
"Toward the Next Generation of Robotic Waiters" by Lorenzo Moriello, Davide Chiaravalli, Luigi Biagiotti, and Claudio Melchiorri from University of Bologna and University of Modena and Reggio Emilia, Italy.
The gap between human waiters and state-of-the-art robot systems that try to serve something to drink is often embarrassing: the former can manipulate glasses and trays, or glasses on trays, with incredible dexterity, while the latter move incredibly slowly. In this video, we want to show that robots can do better, by moving a bottle or a tankard full of beer that is simply placed on a flat steel plate connected to the flange of a robot manipulator. The robot tracks a trajectory defined by a human operator moving a hand in 3D space, with a motion capture system acquiring the position in real time. A feed-forward controller, placed between the user and the robot and based on the combination of a smoother and proper orientation compensation, counteracts lateral accelerations and suppresses sloshing of the liquids. Finally, a camera mounted on the robot arm provides visual feedback to the operator for monitoring purposes. The challenge for the operator was to make the robot drop the carried object. Will the feed-forward control be robust enough to avoid this, even at high speed? Watch the video and find out!
"SwarmTouch: Tactile Interaction of Human with Impedance Controlled Swarm of Nano-Quadrotors" by E. Tsykunov, L. Labazanova, A. Tleugazy, and D. Tsetserukou from Intelligent Space Robotics Laboratory, Skolkovo Institute of Science and Technology, Moscow, Russia.
We propose a novel interaction strategy for human-swarm communication in which a human operator guides a formation of quadrotors with impedance control and receives vibrotactile feedback. The presented approach takes the human hand velocity into account and changes the formation's shape and dynamics accordingly, using impedance interlinks simulated between quadrotors, which helps achieve life-like swarm behavior. Experimental results with the Crazyflie 2.0 quadrotor platform validate the proposed control algorithm. Tactile patterns representing the dynamics of the swarm (extension or contraction) are proposed. The user feels the state of the swarm at their fingertips and receives valuable information to improve the controllability of the complex, life-like formation. The user study revealed patterns with high recognition rates. Subjects stated that tactile sensation improves their ability to guide the drone formation and makes human-swarm communication much more interactive. The proposed technology can potentially have a strong impact on human-swarm interaction, providing a new level of intuitiveness and immersion into swarm navigation.
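An impedance interlink of the kind mentioned above behaves like a virtual mass-spring-damper: each drone is pulled toward its formation slot with spring and damping forces rather than rigidly pinned to it. The following one-dimensional Python sketch shows that idea with invented gains; the actual multi-drone, 3D formulation in the paper is richer.

```python
# Hypothetical sketch: a virtual mass-spring-damper "impedance link"
# pulling a drone's position toward its formation reference. Gains,
# time step, and the 1D simplification are illustrative only.

def impedance_step(x, v, x_ref, k=4.0, d=2.0, m=1.0, dt=0.01):
    """One Euler step of the virtual dynamics m*a = k*(x_ref - x) - d*v."""
    a = (k * (x_ref - x) - d * v) / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# A drone assigned to a slot 1 m away converges smoothly toward it,
# deforming the formation instead of snapping to the new shape.
x, v = 0.0, 0.0
for _ in range(2000):
    x, v = impedance_step(x, v, x_ref=1.0)
```

Because the link is compliant, a fast hand motion stretches the formation briefly and then lets it settle, which is what produces the life-like behavior and gives meaningful extension/contraction signals to render as tactile patterns.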
"An Omnidirectional Jumper with Expanded Movability via Steering, Self-Righting, and Take-off Angle Adjustment" by Sojung Yim, Sang-Min Baek, Gwang-Pil Jung, and Kyu-Jin Cho from Seoul National University, South Korea.
In this paper, we propose an omnidirectional jumper with expanded locomotion capabilities. The mechanisms for four functions—jumping, steering, self-righting, and take-off angle adjustment—are designed using only two motors to maximize jumping performance. Jumping uses a modified active triggering mechanism with one motor. Steering shares this motor and uses a wheel touching the ground. The take-off angle is adjusted by changing the angle between the body and the foot using the other motor. Self-righting is achieved by combining the movements that occur in the energy-storing and angle-adjustment processes. With these four functions, the robot can jump in all directions and can jump anywhere between its maximum height and maximum distance. It can also jump multiple times by self-righting. The robot, with a mass of 64.4 g, jumps up to 113 cm in vertical height and 170 cm in horizontal distance. This robot can be deployed to explore various environments. Moreover, the design method of implementing more functions than the number of motors can be applied to the design of other small-scale robots.
"Ladder Climbing with a Snake Robot" by Tatsuya Takemori, Motoyasu Tanaka, and Fumitoshi Matsuno from Kyoto University and University of Electro-Communications, Japan.
This paper presents a method that allows a snake robot to climb a ladder. We propose a ladder climbing method for a snake robot with a smooth surface shape. We design a novel gait for the snake using a gait design method that configures the target form of the snake robot by connecting simple shapes. The climbing motion is executed via shift control and the corresponding motion required to catch the next rung of the ladder. In addition, we developed a snake robot with a smooth exterior body surface through the construction of pectinate-shaped parts of the links. We demonstrated the effectiveness of both the proposed gait and the design of the snake robot experimentally.
"Hear the Egg — Demonstrating Robotic Interactive Auditory Perception" by Erik Strahl, Matthias Kerzel, Manfred Eppe, Sascha Griffiths, and Stefan Wermter from University of Hamburg, Germany.
We present an illustrative example of an interactive auditory perception approach performed by a humanoid robot called NICO, the Neuro Inspired COmpanion. The video demonstrates a material classification task in the style of a classic TV game show. NICO and another contestant are asked to determine the contents of small plastic capsules that are visually indistinguishable. Shaking the capsules produces audio signals that range from rattling stones and tinkling coins to swooshing sand. NICO can perceive and analyze these sounds to determine the material of the capsule's contents.
"Teaching a Robot to Grasp Real Fish by Imitation Learning from a Human Supervisor in Virtual Reality" by Jonatan S. Dyrstad, Elling Ruud Øye, Annette Stahl, and John Reidar Mathiassen from SINTEF Ocean AS and NTNU, Department of Engineering Cybernetics, Trondheim, Norway.
We teach a real robot to grasp real fish by training a virtual robot exclusively in virtual reality. Our approach implements robot imitation learning from a human supervisor in virtual reality. A deep 3D convolutional neural network computes grasps from a 3D occupancy grid obtained from depth imaging at multiple viewpoints. In virtual reality, a human supervisor can easily and intuitively demonstrate examples of how to grasp an object, such as a fish. From a few dozen of these demonstrations, we use domain randomization to generate a large synthetic training data set of 100,000 example grasps of fish. Using this data set for training, the network is able to guide a real robot and gripper to grasp real fish with good success rates. The proposed domain randomization approach constitutes a first step toward efficiently performing robot imitation learning from a human supervisor in virtual reality in a way that transfers well to the real world.
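The core of the data-expansion step can be sketched simply: a few demonstrated grasps are perturbed at random to produce a much larger synthetic set. The field names and perturbation ranges below are invented for illustration; the paper randomizes richer scene and object properties.

```python
import random

# Hypothetical sketch of domain randomization: expand a handful of human
# grasp demonstrations into a large synthetic training set by randomly
# perturbing pose and size. All fields and ranges are illustrative.

def randomize(demo, rng):
    """Create one synthetic variant of a demonstrated grasp."""
    return {
        "x": demo["x"] + rng.uniform(-0.05, 0.05),       # jitter position [m]
        "y": demo["y"] + rng.uniform(-0.05, 0.05),
        "yaw": demo["yaw"] + rng.uniform(-0.3, 0.3),     # jitter orientation [rad]
        "scale": demo["scale"] * rng.uniform(0.9, 1.1),  # vary fish size
    }

# One (invented) demonstrated grasp, expanded to 100,000 training examples.
demos = [{"x": 0.4, "y": 0.1, "yaw": 1.2, "scale": 1.0}]
rng = random.Random(0)
dataset = [randomize(rng.choice(demos), rng) for _ in range(100_000)]
```

The point of the randomization is that a network trained on this broadened distribution no longer overfits to the exact simulated conditions, which is what lets the learned grasping policy transfer to real fish.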
"Design, Modeling, and Control of a Soft Robotic Arm" by Matthias Hofer and Raffaello D’Andrea from Institute for Dynamic Systems and Control, ETH Zurich, Switzerland.
In this paper we present the design of a hybrid robotic arm using soft, inflatable bladders for actuation. Low-cost switching valves are used for pressure control, and the valve model is identified experimentally. A model of the robotic arm is derived through system identification and used to design a linear quadratic Gaussian controller. A method to overcome limitations of the employed switching valves is proposed and experimentally shown to improve tracking performance. The closed-loop control performance of the robotic arm is demonstrated by stabilizing a rotational inverted pendulum known as the Furuta pendulum.
"Design of SUPERball v2, a Compliant Tensegrity Robot for Absorbing Large Impacts" by Massimo Vespignani, Jeffrey M. Friesen, Vytas SunSpiral, and Jonathan Bruce from NASA Ames Research Center and UC San Diego Coordinated Robotics Lab, USA.
In this paper, we present the system design and initial testing of SUPERball v2, a completely redesigned 2-meter spherical six-bar tensegrity robot designed to survive high-speed landings as well as locomote to desired locations. SUPERball v2 was designed to enable a host of new actuation and experimentation. The prototype features a fully actuated six-bar design (24 actuators), compliant nylon cables (up to 15% stretch), torque-control enabled motors, and a robust mechanical structure capable of surviving impact velocities upwards of 8 m/s.
Erico Guizzo is the digital product manager at IEEE Spectrum. An IEEE Member, he is an electrical engineer by training and has a master's degree in science writing from MIT.