Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next two months; here’s what we have so far (send us your events!):
IEEE IRC 2018 – January 31-February 2, 2018 – Laguna Hills, Calif.
HRI 2018 – March 5-8, 2018 – Chicago, Ill.
Let us know if you have suggestions for next week, and enjoy today’s videos.
In case you weren’t keeping track, Monday, December 4, was National Cookie Day in the United States, so here’s a throwback to MIT’s PR2 baking a cookie (they say it’s called a “Chocolate Afghan,” whatever that is).
[ MIT ]
We’ve been waiting TEN YEARS for this: It’s the official ROS 10-year montage!
[ ROS.org ]
UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. In the future, this technology could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes, but the initial prototype focuses on learning simple manual skills entirely from autonomous play.
Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now – predictions made only several seconds into the future – but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles. Crucially, the robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment or what the objects are. That’s because the visual imagination is learned entirely from scratch from unattended and unsupervised exploration, where the robot plays with objects on a table. After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.
“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”
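The planning loop described above — sample candidate action sequences, roll each through the learned predictive model, and pick the sequence whose predicted outcome best matches the goal — can be sketched in a few lines. This is a hedged illustration, not Berkeley’s actual code: the real system uses a deep video-prediction network operating on camera frames, while here a toy linear model stands in for it, and all function names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_frames(state, actions):
    # Stand-in for a learned video-prediction model. Here a toy linear
    # dynamics (the object position shifts by each action); in visual
    # foresight this would be a deep network predicting future pixels.
    frames = []
    for a in actions:
        state = state + a
        frames.append(state.copy())
    return frames

def plan(start, goal, horizon=5, n_samples=256):
    # Random-shooting model-predictive control: sample action sequences,
    # predict each one's outcome, keep the sequence whose final predicted
    # "frame" lands closest to the goal.
    best_cost, best_seq = np.inf, None
    for _ in range(n_samples):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, 2))
        final = predict_frames(start, seq)[-1]
        cost = np.linalg.norm(final - goal)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq, best_cost

seq, cost = plan(np.zeros(2), np.array([2.0, -1.0]))
print(round(cost, 3))
```

Because the model is learned from the robot’s own play data rather than hand-coded physics, the same loop works for objects the robot has never seen — the only task-specific input is the goal.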
[ UC Berkeley ]
This small bot is capable of doing useful tasks in a fully autonomous fashion. Features include on-board computing, communications, sensing, power management, a solar power system, and high-torque motors. There is also space for additional payloads. Small robots have a number of potential applications, including materials inspection (e.g., pipes), navigating through collapsed buildings, intelligent transportation/delivery, micro surgery, surveillance, and more.
Honda will unveil its new 3E (Empower, Experience, Empathy) Robotics Concept at CES 2018, demonstrating a range of experimental technologies engineered to advance mobility and make people’s lives better. Expressing a variety of functions and designs, the advanced robotic concepts demonstrate Honda’s vision of a society where robotics and AI can assist people in many situations, such as disaster recovery, recreation and learning from human interaction to become more helpful and empathetic.
-3E-A18, a companion robotics concept that shows compassion to humans with a variety of facial expressions
-3E-B18, a chair-type mobility concept designed for casual use in indoor or outdoor spaces
-3E-C18, a small-sized electric mobility concept with multi-functional cargo space
-3E-D18, an autonomous off-road vehicle concept with AI designed to support people in a broad range of work activities
[ Honda ]
I am not entirely sure what LOVOT is, except that LOVE × ROBOT = LOVOT, and it’s apparently being developed by the lead developer on Pepper.
Here is how LOVOT is different from ROBOT, according to the website:
Robots make life convenient.
LOVOT makes life better.
Robots take orders from everyone.
LOVOT shies away from others and turns only to you.
Robots only do what is necessary.
LOVOT gazes into your eyes and does even more.
Robots can’t listen to your troubles.
LOVOT stands by you when you’re in tears.
I’ll take it. Launching in 2019.
[ GROOVE X ]
Robots, drones and AI. The Danish Technological Institute (DTI) in Odense, Denmark is working with a wide variety of robotics. What many people do not know, however, is that there is also a small cross-disciplinary special task force dedicated to saving Christmas.
If I woke up on Christmas morning and found a box with a UR3 in it, I’d be just as happy as that kid.
[ DTI ]
Stanford is testing some gecko-inspired grippers on our favorite cubical space robot, Astrobee:
[ NASA ]
Kuka Robotics, in its infinite wisdom, has decided to lend Simone Giertz an LBR iiwa arm, which I bet they had to contractually require her not to “accidentally” destroy.
[ Simone Giertz ]
Mechanical Engineering’s Aaron Johnson talks about his research on creating robots that help humans perform desired behaviors in any type of terrain.
Fun fact: robots are clinically proven to perform better when they have googly eyes on them. And we’re reeeaaally looking forward to seeing what a Minitaur with an actuated back-tail can do.
Robust and accurate visual-inertial estimation is crucial to many of today’s challenges in robotics. Being able to localize against a prior map and obtain accurate and drift-free pose estimates can push the applicability of such systems even further. Most of the currently available solutions, however, either focus on a single session use-case, lack localization capabilities or an end-to-end pipeline. We believe that only a complete system, combining state-of-the-art algorithms, scalable multi-session mapping tools, and a flexible user interface, can become an efficient research platform. We therefore present maplab, an open, research-oriented visual-inertial mapping framework for processing and manipulating multi-session maps, written in C++.
On the one hand, maplab can be seen as a ready-to-use visual-inertial mapping and localization system. On the other hand, maplab provides the research community with a collection of multi-session mapping tools that include map merging, visual-inertial batch optimization, and loop closure. Furthermore, it includes an online frontend that can create visual-inertial maps and also track a global drift-free pose within a localization map. In this paper, we present the system architecture, five use-cases, and evaluations of the system on public datasets. The source code of maplab is freely available for the benefit of the robotics research community.
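To make concrete what a loop-closure edge contributes to the batch optimization mentioned above, here is a deliberately tiny, hypothetical example — plain 1-D least squares, not maplab’s actual C++ API. Odometry edges accumulate drift (each step is measured as 1.1 m when the truth is 1.0 m), and a single strongly weighted loop-closure edge pulls the whole trajectory back into shape.

```python
import numpy as np

# Five poses along a line. Odometry claims each step is 1.1 m (drift);
# a loop closure says the distance from pose 0 to pose 4 is really 4.0 m.
n = 5
A, b = [], []
for i in range(n - 1):
    row = np.zeros(n); row[i + 1], row[i] = 1, -1
    A.append(row); b.append(1.1)          # odometry edges (drifted)
row = np.zeros(n); row[n - 1], row[0] = 1, -1
A.append(10 * row); b.append(10 * 4.0)    # loop closure, weighted 10x
row = np.zeros(n); row[0] = 1
A.append(100 * row); b.append(100 * 0.0)  # anchor first pose at 0
x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(np.round(x, 2))
```

The optimized poses come out very close to [0, 1, 2, 3, 4]: the residual from the loop closure is spread evenly across the odometry edges, which is exactly the effect batch optimization over a multi-session map achieves at scale.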
Jamie Paik’s Reconfigurable Robotics Lab (RRL) at EPFL has been getting up to some modular, squishable, deformable stuff this year:
What’s that little jumpy dude at about a minute in? Haven’t we seen that before...?
[ RRL ]
I can totally do this:
[ YouTube ]
This video by George Joseph shows the view of a Cozmo robot as it navigates through doorways marked by ArUco markers. The software is part of the cozmo-tools package available on GitHub. Work done at Carnegie Mellon University, November 2017.
Not bad for a little toy robot, right?
[ GitHub ]
Drone Adventures sent a team to São Tomé and Príncipe in March 2017 to map several shores around the islands where erosion and flooding threaten local communities.
[ Drone Adventures ]
Rusty Squid Robotic Design Principles: The evolution of robotics is too important to leave in the hands of the engineers alone.
Design practice and thinking are simply not done in the robotics labs around the world. Large sums of cash are spent on very expensive robotics and the first time the engineers see how the public react is when the robots are complete... and they wonder why the robots aren’t being welcomed with open arms.
Our robotic art and design laboratory has gone back to first principles and created a robust design process that draws on the traditions of puppetry, animatronics and experience design to build rich and meaningful robots.
[ Rusty Squid ]
The nice thing about robots is that you can program them to laugh, even if your jokes are really really bad.
Is Nao capable of giving high fives? Wouldn’t it be a high three?
[ RobotsLAB ]
YouTube’s auto translate can’t make sense of this, but I think Sota is suggesting that you buy a robot for Christmas?
[ Vstone ]
Per Sjöborg interviews Mel Torrie from Autonomous Solutions in the latest episode of Robots in Depth:
Mel Torrie is the founder and CEO of ASI, Autonomous Solutions Inc. He talks about how ASI develops a diversified portfolio of vehicle automation systems across multiple industries.
[ Robots in Depth ]
This week’s CMU RI Seminar is from Alan Wagner, on “Exploring Human-Robot Trust during Emergencies”:
This talk presents our experimental results related to human-robot trust involving more than 2000 paid subjects, exploring topics such as how and why people trust a robot too much and how broken trust in a robot might be repaired. From our perspective, a person trusts a robot when they rely on and accept the risks associated with a robot’s actions or data. Our research has focused on the development of a formal conceptualization of human-robot trust that is not tied to a particular problem or situation. This has allowed us to create algorithms for recognizing which situations demand trust, provided insight into how to repair broken trust, and afforded a means for bootstrapping one’s evaluation of trust in a new person or new robot. This talk presents our results using these techniques as well as our larger computational framework for representing and reasoning about trust. Our framework draws heavily from game theory and social exchange theories. We present results from this work and an ongoing related project examining social norms in terms of social and moral norm learning.
[ CMU RI Seminar ]