Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next two months; here’s what we have so far (send us your events!):
Humanoids 2016 – November 15-17, 2016 – Cancun, Mexico
AI-HRI – November 17-19, 2016 – Arlington, Va., USA
Humans, Machines, and the Future of Work – December 5, 2016 – Houston, Texas, USA
RiTA 2016 – December 11-14, 2016 – Beijing, China
WAFR 2016 – December 18-20, 2016 – San Francisco, Calif., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
I needed something to laugh about today, and this helped:
This RHex dance contest was filmed in 2013, as part of the Philadelphia Science Festival.
[ Kod*lab ]
More isn’t always better, but with glowy flying drones, it definitely is:
Last year: 100 drones. This year: 500 drones. If you do the math (and the math is simple and exponential), by 2020 Intel will be flying 312,500 drones, which is roughly one drone per pixel of a VGA display.
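If you'd rather let a computer do the (admittedly tongue-in-cheek) extrapolation, here's the arithmetic: a 5x jump every year, starting from this year's 500 drones.

```python
# Extrapolate Intel's drone count at 5x growth per year (100 in 2015, 500 in 2016).
count = 500
for year in range(2017, 2021):
    count *= 5
print(count)      # 312,500 drones by 2020
print(640 * 480)  # 307,200 pixels in a VGA display -- close enough
```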
[ Intel ]
Some very cool research here from ASU, optimizing machine learning to get a robot to teach itself to play basketball in just a couple of hours:
Ben Amor’s algorithm, called “sparse latent space policy search,” enables a robot to first understand the coordination between its different joints, parts and movements. Through this, the robot gradually eliminates unsuccessful solutions to arrive at a successful one. The algorithm is technically a form of reinforcement learning, albeit at a much faster pace. “In a sense, this algorithm is linked to how humans learn — this project is not making a biological statement, it simply mirrors how we approach a problem,” says Ben Amor. “We innately understand the relationship between our different joints and synergistic movements, but this is something robots need to learn.”
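The paper's actual formulation isn't reproduced here, but the general flavor of policy search in a low-dimensional latent space can be sketched with a toy example. Everything below is illustrative, not Ben Amor's implementation: a hypothetical linear "synergy" map couples a few latent dimensions to many joints, and a simple cross-entropy-style search keeps good candidates and discards unsuccessful ones, just as the quote describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "synergy" map: 2 latent dimensions drive 6 joints,
# capturing the idea that joint movements are coordinated, not independent.
synergies = rng.standard_normal((6, 2))

# Toy task: reward peaks at a target joint configuration that is
# reachable through the synergies (purely illustrative, not a real robot).
target = synergies @ np.array([0.7, -0.4])

def reward(joints):
    return -float(np.sum((joints - target) ** 2))

# Cross-entropy-style policy search in the latent space.
mean, std = np.zeros(2), np.ones(2)
for _ in range(100):
    latents = mean + std * rng.standard_normal((50, 2))
    scores = np.array([reward(synergies @ z) for z in latents])
    elites = latents[np.argsort(scores)[-10:]]  # keep the best, drop the rest
    mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3

print(reward(synergies @ mean))  # approaches 0 as the search converges
```

Because the search happens in two dimensions rather than six, far fewer samples are needed — which is the intuition behind getting a robot to a working basketball shot in hours rather than days.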
Remember how in Star Trek: First Contact, Commander Data referred to 0.68 seconds as being “an eternity” for an android? Of course you do. And Ol’ Yellow Eyes is right: Even if they seem busy, robots have lots of intermittent down time that could be put to use:
At Automatica 2016, KUKA Corporate Research presented the initial results of the RobDREAM project, which is part of the larger SPARC European robotics partnership. The RobDREAM research seeks to optimize robotic operation by analyzing data and improving processes during “inactive” periods (such as an off-shift in the evening), much like animals do during sleep. One of the main goals was to speed up the programming and optimization of robots so that small and medium-sized enterprises don’t have to invest so much time and resources into that portion of robotic integration.
Being able to drop off a ground robot with a drone is all well and good, but you’re not going to leave the poor little guy there forever, are you?
In 2013, we succeeded in an initial experiment of robotic observation in volcanic areas on Mt. Asama, using a UGV deployed by a UAV. However, the system could not bring the UGV back to the base, because it did not include a function to capture the UGV. Therefore, in this research, we developed a capturing net for retrieving a small-sized UGV, suspended from a UAV. In this video clip, we introduce a basic experiment for deploying and retrieving the UGV with a capturing net hung from a UAV.
This mostly looks like a wicked fun immersive remote controlled ATV, but the tethered backpack drone is cool:
This "high performance accident monitoring system" was developed by the Korea Atomic Energy Research Institute (KAERI). Looks like a great idea, since if you’re monitoring a nuclear accident, you probably want to be as far away from it as possible.
[ KAERI ]
If you’re not fascinated by multi-contact balancing for torque-controlled humanoid robots, skip to 1:50 to see DLR’s TORO robot doing a weird dance (and not falling over) on top of a stack of exercise mats:
If you are fascinated by multi-contact balancing for torque-controlled humanoid robots, there’s a paper on all of this at the link below.
IBM has this new little open source robot called TJ Bot that’s essentially designed to be a tiny cardboard body for Watson. You can build a TJ Bot in about 15 minutes, and this video gives you a sense of what it has going on inside:
This project looks pretty new, so hopefully we’ll be seeing more about what TJ Bot actually does in the near future. Meantime, the link below has lots more info from IBM.
[ TJ Bot ]
Here’s a series of videos from DFKI’s Robotics Innovation Center, where they are doing some impressive work involving humanoid robots, ground robots, and underwater robots:
[ DFKI ]
Starship’s delivery robots are bringing fresh-baked cookies to your door in Redwood City, Calif.:
It’s not totally clear from the video how much of this is autonomous (supervised or otherwise) and how much is teleoperation, but this is a lot more progress than we’ve seen from delivery drones lately.
[ Starship ]
The near term future for military robots is likely going to be trying to take care of the "long tail" of logistics and support, like resupplying soldiers in the field. Aurora Flight Sciences has been developing a retrofit kit that can give autonomy (complete with sense and avoid) to conventional helicopters to enable them to run resupply missions by themselves:
Adapting existing manned platforms this way seems like it would be cheaper and more versatile than developing dedicated unmanned systems to do the same thing.
All of the presentation videos from ROSCon 2016 in Seoul have been up for a few weeks now, but if you haven’t checked them out, here are three that might be worth your attention, even if you’re not a ROS fanatic (yet):
Introducing the Turtlebot3
Introducing Intel RealSense Robotics All-in-one Perception Device
Introducing H-ROS, the Hardware Robot Operating System