Video Friday: Realistic Robot Dog, and More

Your weekly selection of awesome robot videos

Erico Guizzo is IEEE Spectrum's Digital Innovation Director.

Tombot dog robot
Photo: Tombot

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2019 – March 11-14, 2019 – Daegu, Korea
ISMR 2019 – April 1-5, 2019 – Atlanta, GA, USA
U.S. National Robotics Week – April 6-14, 2019 – USA
We Robot 2019 – April 11-13, 2019 – Miami, Florida, USA
RoboSoft 2019 – April 14-18, 2019 – Seoul, Korea
NIST ARIAC Challenge – April 15-19, 2019 – Online
Nîmes Robotics Festival – May 17-19, 2019 – Nîmes, France
ICRA 2019 – May 20-24, 2019 – Montreal, Canada

Let us know if you have suggestions for next week, and enjoy today’s videos.

Humans are very good at object generalization—even when we’re very young, it takes just a few samples from a class of objects for us to be able to identify other objects that fit into the same class. The amount of training data that it takes for a human to be able to identify (say) a previously unseen coffee mug based on their previous coffee mug experience is tiny. Part of the reason we’re able to do this is because we can generalize a class of coffee mugs based on what we know all coffee mugs share: They’re cups with handles. As long as it’s a cup with a handle, it doesn’t matter how big or small it is, or what color it is, we can still identify it.

MIT researchers are teaching robots to generalize about objects in a similar way, and extending that to manipulation tasks as well. By identifying and training on just a few semantic “key points” in 3D data (like, coffee mugs have tops and bottoms and handles, those are three key points right there), a robot can both identify and manipulate objects that share those key points. More complicated objects (like shoes) need more key points for training, but the system is very reliable, and it’s able to do common-sense manipulation of objects that it’s never seen before.

[ Paper ] via [ MIT ]
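To make the idea concrete, here’s a minimal Python sketch of how named keypoints can drive a grasp independent of an object’s size or color. This is not the MIT system: the detect_keypoints() stub, the keypoint names, and the handle-grasp heuristic are all hypothetical stand-ins. The point is that the grasp computation only touches the shared keypoint structure, which is why it transfers to mugs the robot has never seen.

import numpy as np

def detect_keypoints(point_cloud):
    """Stand-in for a learned detector that maps a 3D point cloud to named
    semantic keypoints; here it just returns fixed coordinates (meters)."""
    return {
        "top":    np.array([0.00, 0.00, 0.10]),
        "bottom": np.array([0.00, 0.00, 0.00]),
        "handle": np.array([0.05, 0.00, 0.05]),
    }

def grasp_from_keypoints(kp):
    """Compute a grasp point and approach direction purely from keypoints,
    so any mug-like object with the same keypoint structure works."""
    up = kp["top"] - kp["bottom"]
    up /= np.linalg.norm(up)                   # the object's vertical axis
    side = kp["handle"] - (kp["top"] + kp["bottom"]) / 2
    side -= side.dot(up) * up                  # horizontal offset toward the handle
    side /= np.linalg.norm(side)
    return kp["handle"], -side                 # grasp the handle, approach inward

keypoints = detect_keypoints(point_cloud=None)  # cloud omitted in this sketch
point, approach = grasp_from_keypoints(keypoints)
print("grasp at", point, "approach along", approach)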

Tombot is a US $450 robotic dog that is just lifelike enough to perhaps provide some comfort to folks with dementia:

It’s a very Paro-like idea, although it’s worth pointing out that as a dog, Tombot is going to be more of a challenge, since people have very specific ideas about how dogs are supposed to act. The reason why Paro is a seal (and also why Pleo is a dinosaur) is that it’s easier to suspend disbelief in form factors we have no experience with.

[ Kickstarter ] via [ Gizmodo ]

This video shows how the humanoid robot LOLA walks over uneven terrain without using any vision-based information. This reactive robustness to unknown terrain is achieved by a ground-force control scheme that handles ground-height variations and unplanned partial footholds.

[ TUM ]
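For intuition, here’s a toy Python sketch of the blind touchdown idea: lower the swing foot past its planned height until the measured ground-reaction force says contact has happened. This is not TUM’s controller; the force threshold, step size, and callback interface are all assumed for illustration.

CONTACT_FORCE_N = 50.0   # force treated as touchdown (assumed value)
STEP_M = 0.005           # how far the target drops per control tick (assumed)

def touchdown_height(planned_height, read_vertical_force, set_foot_target):
    """Descend from the planned foot height until the ground-reaction force
    indicates contact, absorbing unseen terrain changes without vision."""
    height = planned_height
    while read_vertical_force() < CONTACT_FORCE_N:
        height -= STEP_M             # ground is lower than planned; keep going
        set_foot_target(height)
    return height                    # measured terrain height, fed back to the gait

# toy usage: the ground turns out to be 3 cm lower than planned
true_ground = -0.03
foot = {"z": 0.0}
force = lambda: 100.0 if foot["z"] <= true_ground else 0.0
print("contact at %.3f m" % touchdown_height(0.0, force, lambda z: foot.update(z=z)))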

The RightPick2 (RP2) piece-picking solution combines hardware upgrades with new software skills, including RightHand’s fifth-generation intelligent gripper, the latest collaborative robot arms, and more powerful RightPick.AI vision processing and machine learning software. The result is an even more dynamic system that lets supply chain managers increase productivity in distribution centers around the world.

[ RightHand Robotics ]

Thanks Jeff!

Pepper is learning new ways of aggressively trying to sell you stuff. Thanks, Pepper!

[ Paper ]

I have enough trouble flying a drone from a zero HP chair.

[ Team BlackSheep ]

It’s a robot video with a puppy in it. WHAT MORE DO YOU WANT?

[ Misty Robotics ]

Human-robot collaboration, as well as safe coexistence, is an important research topic in the Factory of the Future at the Institute of Robotics and Mechatronics. Since robot intelligence will not be sufficient in the near future to execute all possible applications autonomously, the flexibility and versatility of humans still has a place in the factories of the near future. An important component of human-robot collaboration (HRC) is the development of intelligent assistance by cobots.

This video shows the assembly of an automotive component with the support of a collaborative robot. One advantage of the cooperation between human and robot is quality assurance: the human gets the right component at the right time, according to the assembly plan. In addition, the human no longer has to search for components among a large number of shelves and storage bins, and can concentrate on the essential task, the assembly.

[ DLR ]

Low back pain is a leading cause of disability. It particularly affects the elderly, whose proportion in European societies keeps rising, raising growing concern about healthcare. Assistive technology in general, and assistive robotics in particular, may help address the increasing need for care. It can help people with musculoskeletal conditions who need to maintain joint mobility and increase muscle strength and coordination. In this context, the KERAAL consortium proposed to develop a robot coach for rehabilitation exercises.

[ KERAAL ]

Festo (the people behind those awesome bioinspired robots) presents a discussion on whether humans will be useless in the future.

Well, that’s disappointing. I, for one, would be perfectly happy to be useless in the future.

[ Festo ]

Toyota AI Ventures talks about ElliQ.

[ ElliQ ]

Sewer inspections require many humans to work in risky and unhealthy conditions. Introducing a robotic solution into this process aims to reduce labor risks, improve the precision of sewer inspections, and optimize the city’s sewer-cleaning resources, not only in terms of economic expense but also in terms of the water and machinery required for the cleaning process.

I have to say, as far as sewers go, that one doesn’t seem so bad.

[ SIAR ]

This video shows an autonomous navigation platform at Carnegie Mellon University’s Robotics Institute. The vehicle is equipped with a 3D lidar, a camera, an IMU, and wheel encoders, and is capable of mapping, terrain analysis, and obstacle avoidance.

The vehicle operates in two modes. First, an operator commands the vehicle with a remote controller (smart joystick mode). The vehicle is aware of its surroundings and makes sure its path is collision-free. After a map is built, the vehicle can then follow waypoints based on the map (waypoint-following mode). State estimation and mapping use our latest technology leveraging ranging, vision, and inertial sensing. Collision avoidance uses our fast local planner, where decisions are made within 1 ms. The system is able to handle tight maneuvers in cluttered environments.

[ Ji Zhang ]
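As a rough illustration of the smart-joystick mode, here’s a short Python sketch that rolls an operator’s velocity command forward and vetoes it if the predicted path gets too close to a lidar point. The constants and the unicycle rollout model are assumptions for illustration, not CMU’s planner (which makes these decisions within 1 ms).

import numpy as np

SAFETY_RADIUS_M = 0.5    # assumed clearance kept around the vehicle
DT_S, STEPS = 0.1, 10    # 1-second rollout horizon

def command_is_safe(v, omega, obstacles_xy):
    """Roll out a constant (v, omega) command with a unicycle model and
    reject it if any predicted pose comes within SAFETY_RADIUS_M of an
    obstacle point (points given in the vehicle frame)."""
    x = y = theta = 0.0
    for _ in range(STEPS):
        x += v * np.cos(theta) * DT_S
        y += v * np.sin(theta) * DT_S
        theta += omega * DT_S
        dists = np.hypot(obstacles_xy[:, 0] - x, obstacles_xy[:, 1] - y)
        if dists.min() < SAFETY_RADIUS_M:
            return False
    return True

# toy usage: a wall of lidar points 1 m ahead
wall = np.array([[1.0, dy] for dy in np.linspace(-1.0, 1.0, 21)])
print(command_is_safe(2.0, 0.0, wall))   # False: driving straight hits the wall
print(command_is_safe(0.3, 0.0, wall))   # True: creeping forward stays clear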

The GRAPE project (Ground Robot for vineyArd monitoring and ProtEction) aimed at creating the enabling technologies to allow agricultural service companies and equipment providers to develop vineyard robots that can increase the cost-effectiveness of their products with respect to traditional practices.

In particular, the project addressed the market for biological control instruments by developing the tools required to execute (semi-)autonomous vineyard monitoring and farming tasks with Unmanned Ground Vehicles (UGVs), thereby reducing the environmental impact compared to traditional chemical control.

[ GRAPE ]
