
Video Friday: RoboCup Finals, Crowdsourced Robotics, and Growing Drones in Vats

Your weekly selection of awesome robot videos


Erico Guizzo is IEEE Spectrum's Digital Innovation Director.

TeenSize humanoid robot kicks ball at RoboCup
Image: NimbRo via YouTube

Video Friday is your weekly selection of awesome robotics videos, collected by your Chemputer™-savvy Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next two months; here’s what we have so far (send us your events!):

IEEE AIM 2016 – July 12-15, 2016 – Banff, Canada
DLMC 2016 – July 13-15, 2016 – Zurich, Switzerland
ROS Industrial Workshop – July 14-15, 2016 – Singapore
MARSS 2016 – July 18-22, 2016 – Paris, France
IEEE WCCI 2016 – July 25-29, 2016 – Vancouver, Canada
RO-MAN 2016 – August 26-31, 2016 – New York, N.Y., USA
ECAI 2016 – August 29-September 2, 2016 – The Hague, Netherlands
NASA SRRC Level 2 – September 2-5, 2016 – Worcester, Mass., USA


Let us know if you have suggestions for next week, and enjoy today’s videos.

Here are highlights from the RoboCup 2016 finals, with Tech United up against Water (in pink). Don’t worry about keeping score, because it goes to penalty kicks at the end.

Congrats to Tech United! And check out the mad skills on their goaliebot from an earlier match against Cambada:

[ Tech United ]

NimbRo was at RoboCup busily scoring goals with its TeenSize robots:

[ NimbRo ]

Could a tail have allowed ancient vertebrates to make the transition from water to land? Reporting in Science today, researchers from the Georgia Institute of Technology, Carnegie Mellon University, Clemson University, and the National Institute for Mathematical and Biological Synthesis described the results of a study that tackles this question using amphibious fish, a custom-built robot, and mathematical models of movement.

MUDDYBOT ATTACK!!!

The researchers developed a simplified, mudskipper-like robot, which they call “MuddyBot,” on which they could systematically vary the angle and movement of the robot’s flipper-like limbs and tail. They used MuddyBot to find out which coordinated motions of limb and tail were most effective on granular surfaces of different inclines. They call this approach “robophysics,” a novel way of understanding the behavior of long-lost species.
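
For a loose sense of what “systematically varying” those parameters looks like, here’s a sketch of a robophysics-style trial matrix. The parameter names and values below are our own illustrative assumptions, not the actual conditions tested in the MuddyBot study:

```python
# Illustrative sketch of a "robophysics"-style parameter sweep.
# The parameter names and values are assumptions for illustration,
# not the conditions from the actual MuddyBot experiments.
import itertools

limb_angles_deg = [0, 15, 30]  # placement angle of the flipper-like limbs
tail_use = [False, True]       # whether the tail is used propulsively
inclines_deg = [0, 10, 20]     # slope of the granular test bed

# Enumerate every combination; on the real robot, each combination
# would be a repeated physical trial measuring forward progress per
# gait cycle, isolating the tail's contribution on each incline.
for limb_angle, use_tail, incline in itertools.product(
        limb_angles_deg, tail_use, inclines_deg):
    print(f"trial: limb={limb_angle} deg, tail={use_tail}, incline={incline} deg")
```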

“Insight from these experiments led us to hypothesize that propulsive use of the tail, an appendage that has received relatively little attention in previous studies of the invasion of land, may have been the critical adaptation that allowed these early walkers to gain ground on challenging substrates,” says Benjamin McInroe, a co-author on the paper and then a Georgia Tech undergraduate (now a Ph.D. student at the University of California, Berkeley). The team is now making robots to test some alternative possibilities, including robots with bodies like salamanders that move with a diagonal gait.

[ NSF ]

Your word of the day is “stipple”:

We describe a method for creating stippled prints using a quadrotor flying robot. At a low level, we use motion capture to measure the position of the robot and the canvas, and a robust control algorithm to command the robot to fly to different stipple positions to make contact with the canvas using an ink-soaked sponge. We describe a collection of important details and challenges that must be addressed for successful control in our implementation, including robot model estimation, Kalman* filtering for state estimation, latency between motion capture and control, radio communication interference, and control parameter tuning.

We use a centroidal Voronoi diagram to generate stipple drawings, and compute a greedy approximation of the traveling salesman problem to draw as many stipples per flight as possible, while accounting for desired stipple size and dynamically adjusting future stipples based on past errors. An exponential function models the natural decay of stipple sizes as ink is used in a flight. We evaluate our dynamic adjustment of stipple locations with synthetic experiments. Stipples per second and variance of stipple placement are presented to evaluate our physical prints and robot control performance.
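
If you’re curious how two of those ideas fit together, here’s a minimal sketch (ours, not the authors’ code) of a greedy nearest-neighbor approximation of the traveling salesman problem for ordering stipple targets, plus an assumed exponential decay of stipple size as the sponge’s ink runs out; the constants s0 and tau are made up for illustration:

```python
# Greedy nearest-neighbor stipple ordering plus an assumed exponential
# ink-decay model. A sketch of the paper's two ideas, not its code.
import math
import random

random.seed(0)
stipples = [(random.random(), random.random()) for _ in range(25)]

def greedy_tour(points):
    """Order points by repeatedly flying to the nearest unvisited one,
    a cheap approximation of the traveling salesman problem."""
    unvisited = list(points)
    tour = [unvisited.pop(0)]
    while unvisited:
        nxt = min(unvisited, key=lambda p: math.dist(tour[-1], p))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

s0, tau = 5.0, 30.0  # illustrative initial radius (mm) and decay constant
for k, (x, y) in enumerate(greedy_tour(stipples)):
    radius = s0 * math.exp(-k / tau)  # later stipples come out smaller
    print(f"stipple {k:2d} at ({x:.2f}, {y:.2f}), expected radius {radius:.2f} mm")
```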

* Editor’s Note: Hungarian-American electrical engineer and IEEE Life Fellow Rudolf E. Kalman, inventor of the Kalman filter, died last Saturday at his home in Florida. He received the IEEE Medal of Honor in 1974, among many other awards and honors.

[ Paper ]

We didn’t bother posting about this because it’s one of those “within this century” concepts that has a very limited basis in near-term reality. But sure, why not, BAE Systems wants to grow drones in vats:

I like how BAE seems to think that by the time we’ll be growing drones in vats, humans will still be running around on the battlefield with guns. How quaint.

[ BAE ]

I am very very sad that Harvard’s amorphous construction project seems to be over, but this video that was just published for some reason shows “the final autonomous robot design” doing its foamy thing:

[ Harvard SSR ]

I don’t have to explain this next video because a talking skeleton will do it for me.

[ Suzumori Endo Robotics Laboratory ]

What kind of sadistic university would deliberately landscape a field into a walking robot nightmare? I’m looking at you, University of Michigan:

I think a couple of those guys need to do more pushups.

[ UMich ]

Jeff wrote in to share this video of his team’s robot competing in NASA’s Sample Return Robot Challenge. Spoiler alert: they do pretty well, and are one of the five teams (out of 18) moving on to round two:

Round two, which takes place in September, will be more difficult, with 10 samples to locate and collect instead of just two.

[ NASA ]

Thanks Jeff!

Thomas writes:

This is a video that I made about a robot I designed myself at the University of Victoria, BC, Canada. The robot’s movement is controlled by an Arduino MCU connected via USB serial to my laptop, making it part of the ROS environment. In this video I demonstrate the use of an Xbox Kinect as a line-following computer vision sensor. The robot makes use of the ROS platform and features two unique versions of line following. Version 1 shows computer vision algorithms being used on a small region of interest in order to centre the robot on the line. In Version 2, the camera looks ahead at the line in front, determines the angle of the line with respect to the robot’s centre, and then moves faster if the line looks straight and slows down if it turns.
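
For flavor, here’s a rough sketch of the Version 1 idea: threshold a small region of interest, find the line’s centroid, and compute a steering error. We’re using OpenCV with a generic webcam as a stand-in for the Kinect-plus-ROS pipeline, and the ROI placement and threshold value are guesses, not Thomas’s settings:

```python
# ROI-based line centering: threshold a strip of the image, find the
# line's centroid, and report the offset from image center.
import cv2

cap = cv2.VideoCapture(0)  # stand-in for the Kinect image stream

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    roi = frame[int(0.8 * h):, :]  # narrow strip just ahead of the robot
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Assume a dark line on a light floor.
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx = m["m10"] / m["m00"]  # x coordinate of the line centroid
        error = cx - w / 2        # signed offset from image center
        # A proportional controller would turn the robot to drive this
        # toward zero (e.g., published as a Twist on /cmd_vel in ROS).
        print(f"steering error: {error:+.1f} px")
    cv2.imshow("line mask", mask)
    if cv2.waitKey(1) == 27:      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```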

[ University of Victoria ]

Thanks Thomas!

My guess is that it takes a bit longer than 9 minutes to build a KUKA robot, but here is at least part of the process:

[ KUKA ]

Witness the birth of a whole bunch of Ivy League Turtlebots:

[ Harvard CS189 ]

This is cool, but what I really want are ball-fetching robots, not ball-dropping robots.

[ Virgin Active ]

Charles River Analytics and their teammate, 5D Robotics, are developing an intuitive soldier-machine interface for controlling robotic leader-follower systems in small team operations. The Multi-modal Interface for Natural Operator Teaming with Autonomous Robots, or MINOTAUR, fuses multiple proven leader-tracking and robot control technologies to provide a reliable, hands-free interface for Warfighters operating in challenging environments.

Sidenote: MINOTAUR only functions properly if you send it the occasional tasty youth to devour.

[ CRA ]

The Dyson 360 Eye is launching in the U.K. So, if you live in the U.K., now is the time to send us one so that we can review the darn thing already.

£800.

[ Dyson ]

From UTIAS ASRL:

This video shows some highlights from a field test we conducted in June 2016 at an old sand and gravel pit in Sudbury, Ontario, Canada. We taught our robot a 5 km network of interconnected paths and then carried out 120 km of autonomous repeats on these paths using only stereo vision for feedback. The video shows some sections being repeated. Our technique is called Visual Teach and Repeat 2, and it is a significant advance over our earlier work in that (i) it uses a Multi-Experience Localization (MEL) technique to match live images to several previous experiences of a path, and (ii) it is able to do place-dependent terrain assessment to safeguard the robot and people around it, even in rough terrain with vegetation.
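
To give a feel for the multi-experience idea (a toy sketch of ours, not ASRL’s pipeline), imagine matching the live view against several stored passes of the same path section and localizing against whichever matches best; the labeled strings below are stand-ins for real visual features:

```python
# Toy sketch of Multi-Experience Localization (MEL): keep several
# stored "experiences" of a path section and match the live image
# against all of them, so at least one pass is likely to resemble
# current lighting and appearance conditions.
from dataclasses import dataclass

@dataclass
class Experience:
    name: str
    features: frozenset  # features observed on that pass of the path

experiences = [
    Experience("teach pass, sunny", frozenset({"rock1", "bush2", "rut3", "berm4"})),
    Experience("repeat 12, overcast", frozenset({"rock1", "rut3", "puddle5"})),
    Experience("repeat 37, after rain", frozenset({"puddle5", "rut3", "berm4"})),
]

def localize(live_features, experiences):
    """Return the stored experience sharing the most features with the
    live view; the robot then localizes against that pass."""
    return max(experiences, key=lambda e: len(e.features & live_features))

live = {"rock1", "puddle5", "rut3"}
best = localize(live, experiences)
print(f"localizing against: {best.name} "
      f"({len(best.features & live)} shared features)")
```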

They’ll be presenting a paper on this at IROS this October.

[ ASRL ]

Here’s some fascinating research about how a crowd of people on the Internet (which is usually good for nothing but a disaster) can teach robots something useful:

For more details, here’s a talk from one of the authors of the paper, Josh Bongard:

[ MECLab ]
