Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next two months; here’s what we have so far (send us your events!):
IEEE SSRR – March 10-13, 2017 – Shanghai, China
NYC Drone Film Festival – March 17-19, 2017 – New York, N.Y., USA
European Robotics Forum – March 22-24, 2017 – Edinburgh, Scotland
NDIA Ground Robotics Conference – March 22-23, 2017 – Springfield, Va., USA
Automate – April 3-6, 2017 – Chicago, Ill., USA
ITU Robot Olympics – April 7-9, 2017 – Istanbul, Turkey
ROS Industrial Consortium – April 7, 2017 – Chicago, Ill., USA
U.S. National Robotics Week – April 8-16, 2017 – USA
NASA Swarmathon – April 18-20, 2017 – NASA KSC, Florida, USA
RoboBusiness Europe – April 20-21, 2017 – Delft, Netherlands
RoboGames 2017 – April 21-23, 2017 – Pleasanton, Calif., USA
ICARSC – April 26-30, 2017 – Coimbra, Portugal
AUVSI Xponential – May 8-11, 2017 – Dallas, Texas, USA
AAMAS 2017 – May 8-12, 2017 – Sao Paulo, Brazil
Let us know if you have suggestions for next week, and enjoy today’s videos.
From MIT CSAIL:
Communication with a robot using brain activity from a human collaborator could provide a direct and fast feedback loop that is easy and natural for the human, thereby enabling a wide variety of intuitive interaction tasks. This paper explores the application of EEG-measured error-related potentials (ErrPs) to closed-loop robotic control. ErrP signals are particularly useful for robotics tasks because they are naturally occurring within the brain in response to an unexpected error. We decode ErrP signals from a human operator in real time to control a Rethink Robotics Baxter robot during a binary object selection task. This work thereby demonstrates the potential for EEG-based feedback methods to facilitate seamless robotic control, and moves closer towards the goal of real-time intuitive interaction.
In other words, Baxter will notice you thinking that it’s making an error, and then correct itself.
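The control loop the paper describes is conceptually simple: decode an error-related potential from the observer's EEG in real time, and flip the robot's binary choice when one is detected. Here's a minimal sketch of that loop — the function names, the stand-in classifier, and the 0.5 threshold are all hypothetical; the actual CSAIL system uses a trained EEG classifier over ErrP features, not a fixed threshold.

```python
def classify_errp_probability(eeg_window):
    """Stand-in for a real EEG classifier; returns P(error) in [0, 1].
    Here we just average the (hypothetical) per-sample scores."""
    return sum(eeg_window) / len(eeg_window)

def detect_errp(eeg_window, threshold=0.5):
    """Decide whether this EEG window contains an error-related potential."""
    return classify_errp_probability(eeg_window) > threshold

def binary_selection_task(initial_choice, eeg_window):
    """The robot commits to one of two objects; if an ErrP is decoded
    from the human observer, it switches to the other option."""
    if detect_errp(eeg_window):
        return 1 - initial_choice  # observer flagged an error: correct it
    return initial_choice

# Robot reaches for object 0; the observer's brain says "wrong one,"
# so the robot switches to object 1.
print(binary_selection_task(0, [0.8, 0.9, 0.7]))  # -> 1
print(binary_selection_task(0, [0.1, 0.2, 0.1]))  # -> 0
```

The appeal of ErrPs, as the abstract notes, is that no training of the human is needed: the signal occurs naturally whenever you watch something go wrong.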
“Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word,” says CSAIL Director Daniela Rus. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven’t even invented yet.”
The researchers will be presenting this at ICRA in Singapore this May, and we’re hoping they’ll let us try it out for ourselves.
[ MIT CSAIL ]
I have never wondered how an elevator feels, except now that someone has endowed elevators and buildings with AI and feelings, I’m going to be constantly wondering about it.
Artificial intelligence has been lifted to new heights in a campaign that gives elevators a voice so humans can hear them in conversation. hasan & partners worked with KONE, the global leader in the elevator and escalator industry, to connect selected elevators in different countries around the world. The elevators have their operational information translated into English so they can converse continuously with KONE’s cloud network using IBM’s Watson IoT platform. Machine Conversations is a way to show people exactly what Internet-connected elevators would say, if they could be heard. It has been produced to support KONE’s 24/7 Connected Services, which uses the IBM Watson IoT platform and other advanced technologies to bring intelligent services to elevators and escalators.
I desperately want to participate in this now: "Hey elevator, what’s goin’ down? Oh, you are? Great job!" Or, "hey elevator, you just opened your doors? That’s a-door-able!" And I’m already out of elevator puns. You can see these conversations taking place in real time around the world at the link below.
[ KONE ]
Miso Robotics has combined a relatively basic vision and manipulation system to make this mobile robot that will flip burgers for you:
As much as I like this idea, I have to wonder whether it’s actually cost-effective, considering what a (much more capable) human burger flipper gets paid.
ShyBot has no function in the traditional sense. It does not serve us; it probably will not enslave us (perhaps only our imagination). Instead, this robot is programmed to run. To run immediately, with nervous electric heartbeat, in the opposite direction as soon as it senses the presence of a human being.
ShyBot is how I feel most days.
This is, like, Inception levels of meta going on right here from Hinamitetu:
[ Hinamitetu ]
Om nom nom green algae nom.
[ KAIST ]
As much as I hate the idea of urban delivery drones, I’m the first to admit that from time to time, drones can do things that are useful and good.
[ senseFly ]
Which TurtleBot 3 configuration is tastiest: burger, or waffle?
My guess is they both taste like plastic shards, but I’m hoping to be pleasantly surprised.
[ Robotis ]
The autonomous underwater vehicle (AUV) Leng was designed as a long-distance exploration vehicle. Its shape was specifically designed to meet the requirements of the Europa-Explorer project: a very small diameter (in order to fit into the ice drill) as well as a hydrodynamically optimized outer hull (in order to reduce energy consumption and enable long-range missions). The vehicle is equipped with a large number of different navigation sensors, since localization quality and availability are of key importance – in the Europa-Explorer scenario, the vehicle has to return to its starting position (the ice drill) even after having conducted long-distance missions.
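Why so many navigation sensors? Because underwater there's no GPS, and pure dead reckoning drifts: a tiny accumulating heading error turns into a large position error over a long mission, which is fatal when your only way home is a hole in the ice. A toy illustration (the drift model here is a made-up constant gyro drift per leg, not anything from the DFKI system):

```python
import math

def dead_reckon(start, legs, gyro_drift_deg_per_leg=0.0):
    """Integrate (heading_deg, distance) legs into a position estimate.
    An accumulating heading error shows how dead-reckoning drift grows
    with distance traveled."""
    x, y = start
    heading_error = 0.0
    for heading_deg, dist in legs:
        heading_error += gyro_drift_deg_per_leg  # gyro drift accumulates
        h = math.radians(heading_deg + heading_error)
        x += dist * math.cos(h)
        y += dist * math.sin(h)
    return x, y

# A 1 km out-and-back run: with perfect sensors, you return to the start...
legs = [(0, 1000), (180, 1000)]
print(dead_reckon((0, 0), legs))  # returns essentially (0, 0)
# ...but 2 degrees of accumulating heading drift per leg leaves the
# vehicle tens of meters from the ice drill.
print(dead_reckon((0, 0), legs, gyro_drift_deg_per_leg=2))
```

Fusing that dead reckoning with independent sensors (DVL, sonar, landmarks) is what keeps the error bounded — hence Leng's sensor-heavy design.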
[ DFKI ]
These are DSTR robots, pronounced "disaster." It’s a project from Texas A&M that’s a few years old, but these little dudes have been augmented with some FPV gear for SXSW:
[ Texas A&M ]
Next time someone asks you to handle something radioactive, run away screaming, and then come back, apologize, and tell them to use this robot instead.
[ UT Austin ]
Jibo’s voice sounds a bit different; hopefully that’s a good sign?
This is the first we’ve heard from Jibo in a while, though: beyond a brief check-in at CES, they’ve been pretty quiet over there. Hmm.
[ Jibo ]
This video presents the robotic system designed by Team NimbRo Picking for the Amazon Picking Challenge 2016. The video covers mechanical design, perception components including object detection and semantic segmentation, heuristic grasp planning, item pose registration, and parametrized motion primitives. Footage from the competition runs is included.
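"Heuristic grasp planning" usually means scoring candidate grasp poses with hand-tuned rules rather than a learned model. A minimal sketch of the idea for a suction gripper — the features, weights, and data layout below are purely illustrative, not NimbRo's actual heuristics:

```python
import math

def score_grasp(candidate, bin_center):
    """Illustrative heuristic: prefer grasps whose surface normal points
    up (easy top-down suction), on flat patches, near the bin center."""
    nx, ny, nz = candidate["normal"]      # unit surface normal
    upness = max(0.0, nz)                 # 1.0 = surface faces straight up
    flatness = candidate["flatness"]      # 0..1 local planarity score
    px, py, _ = candidate["position"]
    cx, cy = bin_center
    centering = 1.0 / (1.0 + math.hypot(px - cx, py - cy))
    return 0.5 * upness + 0.3 * flatness + 0.2 * centering  # made-up weights

def best_grasp(candidates, bin_center):
    return max(candidates, key=lambda c: score_grasp(c, bin_center))

candidates = [
    {"normal": (0, 0, 1), "flatness": 0.9, "position": (0.0, 0.0, 0.1)},
    {"normal": (1, 0, 0), "flatness": 0.9, "position": (0.0, 0.0, 0.1)},
]
# The top-facing patch wins over the side-facing one.
print(best_grasp(candidates, (0.0, 0.0))["normal"])  # -> (0, 0, 1)
```

In a full pipeline like NimbRo's, candidates would come from the segmented object's point cloud, and the chosen grasp would then be executed via their parametrized motion primitives.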
[ Team NimbRo ]
Toru is one of the very few warehouse robots that can pick items off of shelves, which is impressive, even if it mostly just works on shoeboxes and shoebox-like things:
[ Magazino ]
ETH Zurich’s Robotic Systems Lab is taking their ANYmal quadruped to France for the Argos Challenge:
That emergency shutdown is super cute.
The Discovery Channel did a little feature on Relay, and it’s got enough adorable little beeps and boops in it to be totally worth watching:
[ Savioke ]
This week’s CMU RI Seminar: Shuo Yang, Director of Intelligent Navigation Technologies, DJI
Over the past decade, DJI has developed several world-leading drone products, turning cutting-edge technologies such as high resolution image transmission, visual odometry, and learning-based object tracking into affordable commercial products. Along with all these technological successes, DJI is exploring innovative ways to make them more accessible. In this talk, Shuo will review some key technologies DJI has developed, then talk about RoboMasters, a robotics competition that uses these technologies to nurture next generation engineers.
[ CMU RI ]
Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.
Erico Guizzo is the digital product manager at IEEE Spectrum. An IEEE Member, he is an electrical engineer by training and has a master’s degree in science writing from MIT.