

Video Friday: Robot Patrol, Tickling Machine, and More From IROS 2016

Your weekly selection of awesome robot videos


Erico Guizzo is IEEE Spectrum's Digital Innovation Director.

Robot patrol
Image: EPFL & Universidade de Lisboa

Goodbye, South Korea! IROS ended today in Daejeon, and we hope you’ve enjoyed our coverage so far (TurtleBot 3! Microspine gripper! Robots that sweat!). We wish we could write about every single amazing project we’ve seen there, but then we would die of exhaustion and you’d have 0 (zero) articles next year. So here’s another idea: We’re going to stuff Video Friday, both this week and next, with a massive dose of IROS videos along with their accompanying abstracts. We hope you like that better than us dying.

For you impatient types, we’ll return to normal Video Friday in two weeks, so if you have video suggestions, keep them coming as usual. Enjoy today’s IROS 2016 Special Edition Part 1.

"Graph-Based Distributed Control for Adaptive Multi-Robot Patrolling using Local Formation Reconfiguration," by Alicja Wasik, Jose N. Pereira, Rodrigo Ventura, Pedro U. Lima, and Alcherio Martinoli from École Polytechnique Fédérale de Lausanne, Switzerland, and Instituto Superior Tecnico, Universidade de Lisboa, Portugal.

Multi-robot cooperative navigation in real-world environments is essential in many applications, including surveillance and search-and-rescue missions. State-of-the-art methods for cooperative navigation are often tested in ideal laboratory conditions and not ready to be deployed in real-world environments, which are often cluttered with static and dynamic obstacles. In this work, we explore a graph-based framework to control real robot formations moving in a world cluttered with a variety of obstacle densities, both static and dynamic, by introducing a new algorithm for dynamic distributed change of the formation shape. We show successful results of controlling formations of three real robots in scenarios of increasing complexity.

"Blade-Type Crawler Vehicle with Wings in Ground Effect for Traversing Uneven Terrain at High Speed," by Yasuyuki Yamada, Gen Endo, and Taro Nakamura from Chuo University and Tokyo Institute of Technology.

Unmanned rescue, observation and/or research vehicles with high terrain adaptability, high speed, and high reliability are needed to reach difficult locations. However, most vehicles achieve improved performance over rough terrain at the expense of low speed and/or complex mechanisms. We developed a blade-type crawler robot with a very simple and reliable mechanism, capable of traversing uneven terrain at high speed, using aerodynamic devices. As these small devices operate in the low Reynolds number region, we tested a wing that made use of the ground effect. We experimentally confirmed the success of this approach in improving the traveling speed and ability to traverse uneven terrain. With aerodynamic lift, the robot climbed obstacles 1.5 times higher than it could without wings.

"A Reactive Stepping Algorithm Based on Preview Controller with Observer for Biped Robots," by Oliver Urbann and Matthias Hofmann from Technical University of Dortmund.

Reactive stepping is an important utility to regain balance when bipedal walking motions are disturbed. This paper sheds light on the reasons for humanoid robots to fall down. It presents a method to calculate modifications of predefined foot placements with the objective to minimize deviations of the Zero Moment Point from a reference without interrupting the walk. The calculation is in closed form, and is embedded into a well-evaluated preview controller with observer based on the 3D Linear Inverted Pendulum Mode (3D-LIPM). Experiments in simulation and on a physical robot prove the benefit of the proposed system.
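For readers unfamiliar with the 3D-LIPM mentioned in the abstract, the core idea is a simple relation between the center of mass (CoM) and the Zero Moment Point (ZMP). Here is a minimal sketch of that standard relation, not the authors' controller; the CoM height and values are illustrative:

```python
# Standard LIPM relation between CoM motion and the ZMP (per axis):
#   p = x - (z_c / g) * x_ddot
# where z_c is the (assumed constant) CoM height and g is gravity.

G = 9.81    # gravity, m/s^2
Z_C = 0.8   # assumed CoM height, m

def zmp_from_com(x, x_ddot, z_c=Z_C, g=G):
    """ZMP position for a given CoM position and acceleration (one axis)."""
    return x - (z_c / g) * x_ddot

# A CoM accelerating forward shifts the ZMP backward, which is what a
# preview controller exploits to plan CoM motion from a ZMP reference:
print(zmp_from_com(0.0, 1.0))  # about -0.0815
```

A preview controller inverts this relation over a future window of ZMP references to produce a CoM trajectory; the paper's contribution is modifying footsteps so this reference remains trackable after a disturbance.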

"Eccentric Crank Rover: A Novel Crank Wheel Mechanism with Eccentric Wheels," by Hirotaka Komura, Gen Endo, and Koichi Suzumori from Tokyo Institute of Technology.

The crank wheel mechanism combines wheels and legs in a simple structure, giving it high mobility and efficiency. However, prior crank wheel mechanisms each have shortcomings, such as small clearance under the body. In this paper, we propose a novel crank wheel mechanism, the "Eccentric Crank Rover" (ECR), which enhances the crank wheel mechanism with eccentric wheels. The eccentric wheels enlarge the clearance under the body and change the body trajectory from a straight line to a trochoid curve, which has the same shape as the crank legs' trajectory but opposite phase. Thus the body also acts as a "second" crank leg. In experiments, we confirmed higher step climbability, larger clearance, and lower cost of transport (COT) than other models, such as a normal wheel model, an eccentric wheel model, and a crank legged model without eccentric wheels.

"Motion generation in android robots during laughing speech," by Carlos T. Ishi, Tomo Funayama, Takashi Minato, and Hiroshi Ishiguro from ATR Hiroshi Ishiguro Labs, Japan. 

We are dealing with the problem of generating natural human-like motions during speech in android robots, which have human-like appearances. So far, automatic generation methods have been proposed for lip and head motions of tele-presence robots, based on the speech signal of the tele-operator. In the present study, we aim to extend the speech-driven motion generation methods to laughing speech, since laughter often occurs in natural dialogue interactions and may cause miscommunication if there is a mismatch between audio and visual modalities. Based on analysis results of human behaviors during laughing speech, we propose a motion generation method given the speech signal and the laughing speech intervals. Subjective experiments were conducted using our android robot by generating five different motion types, considering several modalities. Evaluation results show the effectiveness of controlling different parts of the face, head and upper body (eyelid narrowing, lip corner/cheek raising, eye blinking, head motion and upper body motion control).

"Robust Dynamic Walking Using Online Foot Step Optimization," by Siyuan Feng, X. Xinjilefu, Christopher G. Atkeson, and Joohyung Kim from Carnegie Mellon University, Uber Advanced Technologies Center, and Disney Research.

To enable robust dynamic walking on the Atlas robot, we extend our previous work [1] by adding a receding-horizon component. The new controller consists of three hierarchies: a center of mass (CoM) trajectory planner that follows a sequence of desired foot steps, a receding-horizon controller that optimizes the next foot placement to minimize future CoM tracking errors, and an inverse dynamics based full body controller that generates instantaneous joint commands to track these motions while obeying physical constraints. The proposed controller is implemented and tested on the Atlas robot. It is capable of walking with strong external perturbations such as recovering from large pushes and traversing unstructured terrain.
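The receding-horizon idea in this abstract can be sketched in a few lines: forward-simulate a simple model for each candidate foot placement and keep the one with the lowest predicted CoM tracking error. This is a hypothetical toy version with a 1D LIPM and a made-up cost, not the paper's formulation:

```python
# Toy receding-horizon footstep selection: simulate 1D LIPM dynamics
# x_ddot = omega^2 * (x - p) about each candidate foot position p, and
# pick the candidate minimizing a simple CoM tracking cost.
import math

G, Z_C = 9.81, 0.8
OMEGA = math.sqrt(G / Z_C)  # LIPM natural frequency

def simulate_com(x, xd, p, dt=0.01, steps=50):
    """Integrate 1D LIPM CoM dynamics about foot position p for 0.5 s."""
    for _ in range(steps):
        xdd = OMEGA ** 2 * (x - p)
        xd += xdd * dt
        x += xd * dt
    return x, xd

def best_footstep(x, xd, x_ref, candidates):
    """Candidate placement with smallest predicted CoM error (toy cost)."""
    def cost(p):
        xf, xdf = simulate_com(x, xd, p)
        return (xf - x_ref) ** 2 + 0.1 * xdf ** 2
    return min(candidates, key=cost)

candidates = [i * 0.05 for i in range(-4, 9)]   # -0.20 m .. 0.40 m
step = best_footstep(x=0.0, xd=0.4, x_ref=0.2, candidates=candidates)
```

The actual controller optimizes over a horizon with an inverse-dynamics full-body layer underneath; the sketch only conveys the "simulate candidates, pick the cheapest" structure.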

"Towards Multi-Modal Intention Interfaces for Human-Robot Co-Manipulation," by Luka Peternel, Nikos Tsagarakis, and Arash Ajoudani from Istituto Italiano di Tecnologia, Genoa, Italy.

This paper presents a novel approach for human-robot cooperation in tasks with dynamic uncertainties. The essential element of the proposed method is a multi-modal interface that provides the robot with the feedback about the human motor behaviour in real-time. The human muscle activity measurements and the arm force manipulability properties encode the information about the motion and impedance, and the intended configuration of the task frame, respectively. Through this human-in-the-loop framework, the developed hybrid controller of the robot can adapt its actions to provide the desired motion and impedance regulation in different phases of the cooperative task. We experimentally evaluate the proposed approach in a two-person sawing task that requires an appropriate complementary behaviour from the two agents.

"Tricycle Manipulation Strategy for Humanoid Robot Based on Active and Passive Manipulators Control," by Kohei Kimura, Shunichi Nozawa, Yohei Kakiuchi, Kei Okada, and Masayuki Inaba from the University of Tokyo.

Humanoid robots have the potential to manipulate a wide range of tools in daily life, an ability their arms and legs make possible. In particular, manipulation tasks for vehicles the same size as a life-sized humanoid or larger require operational motion by both arms and legs. In addition to arm-leg cooperative motion control, it is also important for a humanoid robot to stabilize its own posture while driving a vehicle. In this research, we focus on the arms-legs-integrated manipulation task of a humanoid robot controlling a tricycle. We propose a dual-manipulator control law, defined in terms of an active manipulator, which works movable objects such as the handle and crank, and a passive manipulator, which follows the movement of these objects. We discuss not only this manipulation control strategy but also a self-stabilizing strategy for the humanoid robot using both actively manipulating legs. Furthermore, this paper contributes a strategy for recognizing and planning around outside obstacles, and integrates the tricycle manipulation system based on it. We show experimental results of tricycle manipulation in an obstacle-strewn environment by a humanoid robot applying the proposed system.

"Steering Control of a Water-Running Robot Using an Active Tail," by HyunGyu Kim, Kyungmin Jeong, Metin Sitti, and TaeWon Seo from Yeungnam University, Korea Atomic Energy Research Institute, Max Planck Institute for Intelligent Systems, Stuttgart, Germany, and Carnegie Mellon University.  

Many kinds of mobile robots have been developed through biomimetic research. In this research, we mimicked a basilisk lizard’s ability to run on water for the maneuverability of a hexapedal robot, especially steering locomotion on the water. The robot has a circular plate as a tail, which the robot rotates to steer on water. We dynamically modeled the platform and conducted simulations and experiments on steering locomotion with a bang-bang controller. The robot can steer on water by rotating the tail, and the controlled steering locomotion is stable. The dynamic modelling approximates the robot’s steering locomotion and the trends of the simulation and experiment are similar, although there are errors between the desired and actual angles. The robot’s maneuverability on water can be improved through further research. 

"Event-Driven Ball Detection and Gaze Fixation in Clutter," by Arren Glover and Chiara Bartolozzi from Istituto Italiano di Tecnologia, Italy.

The fast temporal-dynamics and intrinsic motion segmentation of event-based cameras are beneficial for robotic tasks that require low-latency visual tracking and control, for example a robot catching a ball. When the event-driven iCub humanoid robot moves to grasp an object its head and torso undergo motion, inducing camera motion, and tracked objects become no longer trivially segmented amongst the mass of background clutter. Current event-based tracking algorithms have mostly considered stationary cameras that have clean event-streams with minimal clutter. This paper introduces novel methods to extend the Hough-based circle detection algorithm using optical flow information that is readily extracted from the spatio-temporal event space. Results indicate the proposed directed-Hough algorithm is more robust to other moving objects and the background event-clutter. Finally, we demonstrate successful on-line robot control and gaze following on the iCub robot.
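The "directed Hough" idea is that each event votes only along its optical-flow direction rather than over a full circle of candidate centers, which suppresses votes from inconsistent clutter. A minimal sketch of that voting scheme, with synthetic events and a made-up vote spread (not the paper's implementation):

```python
# Flow-directed Hough voting for a circle center: each event (x, y) with
# flow direction theta votes at distance r along +/- that direction only,
# with a small angular spread, instead of over the full circle.
import math
from collections import Counter

def directed_hough(events, r, spread=0.3):
    """events: iterable of (x, y, flow_angle). Returns the winning center cell."""
    votes = Counter()
    for x, y, theta in events:
        for sign in (+1, -1):           # flow direction is ambiguous in sign
            for d in (-spread, 0.0, spread):
                a = theta + d
                cx = round(x + sign * r * math.cos(a))
                cy = round(y + sign * r * math.sin(a))
                votes[(cx, cy)] += 1
    return votes.most_common(1)[0][0]

# Synthetic events on a circle of radius 10 centered at (50, 50), with
# flow pointing toward the center (angle + pi):
events = [(50 + 10 * math.cos(a), 50 + 10 * math.sin(a), a + math.pi)
          for a in [k * math.pi / 8 for k in range(16)]]
center = directed_hough(events, r=10)   # votes accumulate at (50, 50)
```

Because every event's directed vote passes through the true center, that accumulator cell dominates even though each event casts only six votes instead of a full circle's worth.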

"Walking Control in Water Considering Reaction Forces from Water for Humanoid Robots with a Waterproof Suit," by Yuta Kojio, Tatsushi Karasawa, Kunio Kojima, Ryo Koyama, Fumihito Sugai, Shunichi Nozawa, Yohei Kakiuchi, Kei Okada, and Masayuki Inaba from the University of Tokyo.

In this paper, we develop a waterproof suit for humanoid robots and propose an underwater walking control method. Although very few life-sized humanoid robots are completely waterproof, we can easily make these humanoid robots watertight by putting a waterproof suit on them. In water, humanoid robots are influenced by the two forces due to the water: buoyancy and drag force. We take buoyancy into account when generating a walking pattern because the force is large and easy to estimate before walking. However, drag force is small and difficult to precisely predict and therefore, we treat the force as an unknown disturbance. In our method, we modify footsteps based on the Capture Point in order to deal with large disturbances. We verify the effectiveness of the proposed methods through an experiment in which a life-sized humanoid robot walks on a floor, stairs and debris in water.
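The Capture Point used for footstep modification has a standard closed-form expression under the LIPM. As a rough illustration of the idea (the standard dry-land formula, not the paper's underwater-specific controller; values are made up):

```python
# Capture Point (CP) under the LIPM: xi = x + xd / omega, the point where
# placing the foot brings the pendulum to rest. A disturbance that adds
# CoM velocity moves the CP, so the planned footstep is shifted toward it.
import math

G, Z_C = 9.81, 0.8
OMEGA = math.sqrt(G / Z_C)   # LIPM natural frequency

def capture_point(x, xd, omega=OMEGA):
    """Instantaneous Capture Point for CoM position x and velocity xd."""
    return x + xd / omega

# A forward push adds CoM velocity; step at least as far as the CP:
nominal_step = 0.25
xi = capture_point(x=0.0, xd=0.5)        # about 0.143 m
modified_step = max(nominal_step, xi)    # nominal step already suffices here
```

In the paper's setting the unmodeled drag force plays the role of the disturbance: rather than predicting it, the controller simply re-plans footsteps from the measured Capture Point.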

"One DoF Robotic Hand That Makes Human Laugh by Tickling Through Rubbing Underarm," by T. Kishi, T. Nozawa, A. Nibori, H. Futaki, Y. Miura, M. Shina, K. Matsuki, H. Yanagino, S. Cosentino, K. Hashimoto, A. Takanishi, from Waseda University and Mejiro University.

This paper describes the development of a one-DoF robotic hand that makes humans laugh by tickling, through rubbing the underarm. Laughter is attracting research attention because it enhances health by treating or preventing mental illness. However, laughter has not been used effectively in healthcare because the mechanism of laughter is complicated and not yet fully understood. Developing a robot capable of making humans laugh is useful for clarifying the mechanism of laughter because the robot's stimuli are quantitative and reproducible. Tickling is especially suited to this purpose because the relationship between stimulus and reaction is simpler than with other techniques. Therefore, this research aimed to develop a robotic hand that can output quantitative and reproducible tickling stimuli for clarifying the mechanism of laughter. Rubbing the underarm was selected as the robot's target motion because previous research suggests this is the best way to make humans feel ticklish. To achieve human-like tickling motion with the robot, the required specifications were determined experimentally. To achieve the required fingertip trajectory with as simple a mechanism as possible, a crank-and-link mechanism driven by a single motor was developed. Experimental evaluation shows that the developed robot could make humans laugh with its rubbing motion. In addition, the quantitative tickling motion of the developed robotic hand appears to be effective for clarifying the mechanism of laughter.

"Real-Time Skating Motion Control of Humanoid for Acceleration and Balancing," by Noriaki Takasugi, Kunio Kojima, Shunichi Nozawa, Yohei Kakiuchi, Kei Okada, and Masayuki Inaba from the University of Tokyo.

In this paper, we propose a real-time control method for the skating motion of humanoid robots. There are three problems for skating motion: (1) keeping dynamic balance, (2) adequately controlling foot force to suppress slipping at the foot, and (3) controlling full-body motion in real time. To solve these problems, we propose a Skating Motion Generator and a Skating Motion Stabilizer. In the Skating Motion Generator, we separate slip suppression from motion generation to address (3); the separation enables us to generate skating motions in real time. In the Skating Motion Stabilizer, we adjust the sole pressure distribution of each foot to resolve the contradiction between (1) and (2). We show the effectiveness of the proposed controller through experiments in which the life-sized humanoid HRP-2 pushes the ground and skates on a skateboard. Applying the proposed controller, HRP-2 could successfully accelerate and skate on the skateboard at 0.5 m/s.

"Proprioceptive Control of an Over-Actuated Hexapod Robot in Unstructured Terrain," by Marko Bjelonic, Navinda Kottege, and Philipp Beckerle from Technische Universitat Darmstadt and CSIRO, Brisbane, Australia.

Legged robots such as hexapods have the potential to traverse unstructured terrain. This paper introduces a novel hexapod robot (Weaver) using a hierarchical controller, with the ability to efficiently traverse uneven and inclined terrain. The robot has five joints per leg and 30 degrees of freedom overall. The two redundant joints improve the locomotion of the robot by controlling the body pose and the leg orientation with respect to the ground. The indirect force controller reacts to unstructured terrain and thus achieves self-stabilizing behavior without prior profiling of the terrain through exteroceptive sensing. Instead of adding force sensors, the force is calculated by processing the torque output of the actuators. We experimentally evaluate Weaver with the proposed controller and demonstrate that it can effectively traverse challenging terrains and high gradient slopes, reduce angular movements of the body by more than 55% and reduce the cost of transport (up to 50% on uneven terrain and by 85% on a 20° slope). The controller also enables Weaver to walk up inclines of up to 30°, and remain statically stable on inclines up to 50°. Furthermore, we present a new metric for legged robot stability performance along with a method for proprioceptive terrain characterization.
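Computing foot force from actuator torques, as the abstract describes, rests on the relation tau = Jᵀ F between joint torques and the external force at the foot. A toy sketch with a planar 2-link leg (a stand-in for Weaver's 5-joint legs, not the paper's implementation):

```python
# Proprioceptive foot-force estimation from joint torques via the leg
# Jacobian: tau = J^T F, so for a square 2x2 case F = (J^T)^{-1} tau.
import math

def jacobian_2link(q1, q2, l1=0.1, l2=0.1):
    """Planar 2-link Jacobian mapping joint rates to foot velocity."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def foot_force(q1, q2, tau):
    """Solve J^T F = tau by hand for the 2x2 case (no numpy needed)."""
    J = jacobian_2link(q1, q2)
    a, b = J[0][0], J[1][0]   # first row of J^T
    c, d = J[0][1], J[1][1]   # second row of J^T
    det = a * d - b * c
    fx = (d * tau[0] - b * tau[1]) / det
    fy = (-c * tau[0] + a * tau[1]) / det
    return fx, fy

# Measured joint torques at a given leg configuration -> estimated force:
fx, fy = foot_force(q1=-math.pi / 4, q2=math.pi / 2, tau=[0.5, 0.2])
```

On a real robot, friction and inertial terms corrupt this estimate, which is part of why the paper filters and thresholds the result rather than using raw torque readings directly.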

"Preparatory Object Reorientation for Task-Oriented Grasping," by Anh Nguyen, Dimitrios Kanoulas, Darwin G. Caldwell, and Nikos G. Tsagarakis from Istituto Italiano di Tecnologia, Italy.

This paper describes a new task-oriented grasping method to reorient a rigid object to its nominal pose, which is defined as the configuration that it needs to be grasped from, in order to successfully execute a particular manipulation task. Our method combines two key insights: (1) a visual 6 Degree-of-Freedom (DoF) pose estimation technique, which is based on 2D-3D point correspondences, is used to estimate the object pose in real-time and (2) the rigid transformation from the current to the nominal pose is computed online and the object is reoriented over a sequence of steps. The outcome of this work is a novel method that can be effectively used in the preparatory phase of a manipulation task, to permit a robot to start from arbitrary object placements and configure the manipulated objects to the nominal pose, as required for the execution of a subsequent task. We experimentally demonstrate the effectiveness of our approach on a full-size humanoid robot (WALK-MAN) using different objects with various pose settings under real-time constraints.
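The "reoriented over a sequence of steps" idea can be illustrated with a reduced planar version: compute the pose difference and break it into waypoints. This is a hypothetical sketch with (x, y, theta) poses and plain interpolation; the paper works with full 6-DoF poses:

```python
# Toy planar pose reorientation: interpolate from the current pose to the
# nominal pose in n_steps increments, wrapping the angle difference so the
# rotation takes the shortest direction.
import math

def angle_diff(a, b):
    """Shortest signed angular difference b - a, wrapped to (-pi, pi]."""
    return (b - a + math.pi) % (2 * math.pi) - math.pi

def reorientation_steps(current, nominal, n_steps=4):
    """Waypoint poses from current toward nominal in n_steps increments."""
    x0, y0, t0 = current
    x1, y1, t1 = nominal
    dt = angle_diff(t0, t1)
    return [(x0 + (x1 - x0) * k / n_steps,
             y0 + (y1 - y0) * k / n_steps,
             t0 + dt * k / n_steps) for k in range(1, n_steps + 1)]

steps = reorientation_steps((0.2, 0.0, math.pi * 0.9),
                            (0.0, 0.0, -math.pi * 0.9))
# Final waypoint reaches the nominal pose (angle modulo 2*pi).
```

The wrap matters: naive interpolation from 0.9π to -0.9π would rotate the long way (1.8π) instead of the short way through π.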

“Autonomous Navigation in Dynamic Social Environments using Multi-Policy Decision Making," by Dhanvin Mehta, Gonzalo Ferrer, and Edwin Olson from University of Michigan, Ann Arbor.

In dynamic environments crowded with people, robot motion planning becomes difficult due to the complex and tightly-coupled interactions between agents. Trajectory optimization methods, often supported by models of typical human behavior and personal space, can often produce reasonable behavior. However, they do not account for the future closed-loop interactions of other agents with the trajectory being constructed. In this paper, we propose a new method for navigation amongst pedestrians in which the trajectory of the robot is not planned, but a planning process selects one of a set of closed-loop behaviors whose utility can be predicted through forward simulation. In particular, we extend the Multi-Policy Decision Making (MPDM) framework [1] to this domain using the closed-loop behaviors Go-Solo, Follow-other, and Stop. By dynamically switching between these policies, we show that we can improve the performance of the robot as measured by utility functions that reward task completion and penalize inconvenience to other agents. Our evaluation includes extensive results in simulation and real-world experiments.
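The MPDM selection loop described above is structurally simple: forward-simulate each closed-loop policy, score the outcome, pick the best. A toy sketch with made-up utility numbers standing in for the simulated outcomes of the paper's three behaviors:

```python
# Toy MPDM-style policy selection: each candidate policy is forward-
# simulated to predict (task progress, inconvenience to others), then the
# policy maximizing a utility over those predictions is chosen.

def utility(progress, inconvenience):
    """Reward task progress, penalize inconvenience to other agents."""
    return progress - 2.0 * inconvenience

# Hypothetical predictions from forward simulation of each policy:
predictions = {
    "go_solo":      (1.0, 0.4),   # fast but pushes through the crowd
    "follow_other": (0.6, 0.1),   # slower, rides in a pedestrian's wake
    "stop":         (0.0, 0.0),   # safe but makes no progress
}

def select_policy(predictions):
    return max(predictions, key=lambda p: utility(*predictions[p]))

chosen = select_policy(predictions)   # here "follow_other": 0.4 > 0.2 > 0.0
```

The robot re-runs this selection at a fixed rate, which is what lets it switch policies as the crowd around it evolves.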
