Video Friday: MIT Origami Robots, Sphero Mini, and Headless Robotic Cat

Your weekly selection of awesome robot videos


Qoobo therapeutic robotic pet
Photo: Qoobo

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next two months; here’s what we have so far (send us your events!):

Drone World Expo – October 2-4, 2017 – San Jose, Calif., USA
Latin American Robotics Week – October 12-14, 2017 – Valparaiso, Chile
IEEE TechEthics Conference – October 13, 2017 – Washington, D.C.
HAI 2017 – October 17-20, 2017 – Bielefeld, Germany
CBS 2017 – October 17-19, 2017 – Beijing, China
ICUAS 2017 – October 22-29, 2017 – Miami, Fla., USA
Robótica 2017 – November 7-11, 2017 – Curitiba, Brazil
Humanoids 2017 – November 15-17, 2017 – Birmingham, U.K.

Let us know if you have suggestions for next week, and enjoy today’s videos.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new shape-shifting robot that’s something of a superhero: It can transform itself with different “outfits” that allow it to perform different tasks. Dubbed “Primer,” the cube-shaped robot can be controlled via magnets to make it walk, roll, sail, and glide. It carries out these actions by wearing different exoskeletons, which start out as sheets of plastic that fold into specific shapes when heated. After Primer finishes its task, it can shed its “skin” by immersing itself in water, which dissolves the exoskeleton.

Primer’s various forms have a range of advantages. For example, “Wheel-bot” has wheels that allow it to move twice as fast as “Walk-bot.” “Boat-bot” can float on water and carry nearly twice its weight. “Glider-bot” can soar across longer distances, which could be useful for deploying robots or switching environments. Primer can even wear multiple outfits at once, like a Russian nesting doll. It can add one exoskeleton to become “Walk-bot,” and then interface with another, larger exoskeleton that allows it to carry objects and move two body lengths per second. To deploy the second exoskeleton, “Walk-bot” steps onto the sheet, which then blankets the bot with its four self-folding arms.

[ MIT News ]

The latest version of Sphero is mini-ized and alarmingly hip and colorful:

[ Sphero Mini ]

Thomas from ETH Zurich’s Autonomous Systems Lab wrote in to share a recent Greenland glacier-monitoring expedition with AtlantikSolar, a solar-powered drone capable of multiday flights:

Incredible, right? The Arctic is a perfect place for a solar-powered drone, for at least half the year; for the other half, you can always just head to the Antarctic instead.

[ Sun2ice ] via [ AtlantikSolar ]

Thanks Thomas!

I mean, this is basically what a cat is, right?

$100-ish, launching in June of next year.

[ Qoobo ] via [ BBG ]

This is what robots do in the lab when no one is around at night. They misplace our stuff on purpose!

This video shows autonomous robot-robot collaboration in HySociaTea using the TECS framework. The Compi robot picks up the item to be delivered and reports that a delivery is required. AILA recognizes that she can handle the task and takes over. The handover is realized using a communication protocol.

The HySociaTea project is a cooperation between several research departments at DFKI GmbH, combining competencies from across the complete spectrum of artificial intelligence. HySociaTea is an R&D project embedded in the Industry 4.0 strategy that deals with the development of flexible production and mission processes. The basic idea behind HySociaTea is that the production environment of the future will consist of a team of humans closely collaborating with robots and virtual agents, a team that can flexibly react to incoming requirements. The project establishes the basis for such a team by addressing its setup, communication, and role allocation.

[ DFKI ]

Cassie Blue visits the University of Michigan Museum of Art:

You already are art, Cassie. You already are.

[ UMich ] via [ Agility Robotics ]

Walking on two legs isn’t as easy as it seems. Especially for robots, where a natural stride is a major challenge. Researchers at EPFL’s Biorobotics Laboratory are testing novel systems to improve humanoids’ ability to walk and interact.

[ EPFL ]

ANYmal went to ERL Emergency Robotics 2017, and there was much frolicking:

[ ANYmal ]

Do not watch this video of an autonomous robot dentist drilling into the mouth of a live human if you are in the least bit squeamish:

[ SCMP ]

Nobody wants to do jobs like this, which is why we have robots:

Carl Fredrik Jönsson from the Swedish waste management company Carl F tells us what it’s like working with a ZRR. The Malmö-based company receives approximately 40,000 tonnes of construction and demolition (C&D) waste annually, of which 40 percent previously ended up as combustible residual waste. Thanks to the Carl robot, that number has dropped to 10 percent. Because of the ZRR’s 24/7 operation and feeding-rate control, there’s no need to overfill the conveyor to maintain the yearly capacity.

[ ZenRobotics ]

And this is why people build robots in the first place:

[ Impress ]

This is “Passenger Drone,” and according to the folks who made it, it’s “the most advanced manned autonomous VTOL in the world!!!” Judge for yourself whether that’s worth three exclamation points.

Look, I hate to keep belaboring this point, but if you staple enough motors to ANYTHING you can make it into an “autonomous drone.” True autonomy, though, involves intelligent and dynamic sense and avoid, and so far nobody seems to have made that work, or even gotten close.

[ PassengerDrone ]

If you want to know what they’ve been doing at Pollen Robotics, this is it.

[ Pollen Robotics ]

CMU has been working on autonomous vehicles for ages. And by ages, I mean decades. Here’s a pair of videos showing what CMU’s autonomous cars program was like 20 years ago (!).

[ CMU RI ]

Lockheed Martin’s concept is called Mars Base Camp and it’s our idea of how to send humanity’s first crewed mission to Mars in about a decade. The Mars Base Camp orbiting outpost could give scientists/astronauts the ability to operate rovers and drones on the surface in real time – helping us better understand the Red Planet and answer fundamental questions: Where did we come from? Where are we going? Are we alone?

What’s notable to me about this video is that there is almost no mention of robots. By the time we’re sending humans to Mars, even if it’s just to orbit, there should be a whole crew of robots down on the surface ready and waiting. From orbit, astronauts could teleop them to do all kinds of exploration and science without taking on any additional risk, and perhaps even set up infrastructure for the first human landing.

[ Lockheed Martin ]

If you missed ROSCon 2017 in Vancouver, you won’t get an extra special ROS coloring book, but you can at least watch all of the talks that OSRF/OSR/OR/whatever has posted online. We’ll feature more of them next week (because there are a bunch of good ones), but for today we have a sandwich of the opening remarks from Brian Gerkey and Tully Foote, an inside look at the Space Robotics Challenge, and closing remarks from Ryan Gariepy.

[ ROSCon 2017 ]

This week’s CMU RI Seminar comes from Henny Admoni, entitled “Toward Natural Interactions With Assistive Robots.”

Robots can help people live better lives by assisting them with the complex tasks involved in everyday activities. This is especially impactful for people with disabilities, who can benefit from robotic assistance to increase their independence. For example, physically assistive robots can collaborate with people in preparing a meal, enabling people with motor impairments to be self-sufficient in cooking and eating. Socially assistive robots can act as tutors, coaches, and partners to help people with social or learning deficits practice the skills they have learned in a non-threatening environment. Developing effective human-robot interactions in these cases requires a multidisciplinary approach that involves fundamental robotics algorithms, insights from human psychology, and techniques from artificial intelligence and machine learning.

In this talk, I will describe my vision for robots that collaborate with and assist humans on complex tasks. I will explain how we can leverage our understanding of natural, intuitive human behaviors to detect when and how people need assistance, and then apply robotics algorithms to produce effective human-robot interactions. I explain how models of human attention, drawn from cognitive science, can help select robot behaviors that improve human performance on a collaborative task. I detail my work on algorithms that predict people’s mental states based on their eye gaze and provide assistance in response to those predictions. And I show how breaking the seamlessness of an interaction can make robots appear smarter. Throughout the talk, I will describe how techniques and knowledge from cognitive science help us develop robot algorithms that lead to more effective interactions between people and their robot partners.

[ CMU RI ]
