Video Friday: Startup Unveils Agile Robot Dog That Costs Less Than $10k

Your weekly selection of awesome robot videos

5 min read

Erico Guizzo is IEEE Spectrum's Digital Innovation Director.

Unitree Robotics A1 quadruped robot dog
This robot dog wants to be your best friend.
Photo: Unitree

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRA 2020 – June 01, 2020 – [Online]
RSS 2020 – July 12-16, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia
ICUAS 2020 – September 1-4, 2020 – Athens, Greece

Let us know if you have suggestions for next week, and enjoy today’s videos.

After unveiling Laikago three years ago and, more recently, Aliengo, Chinese robotics startup Unitree is now showing off its latest robot dog model, called A1. The specs are quite impressive, and it costs less than $10k!

I wonder how that compares to the lifetime cost of an actual dog.

[ Unitree A1 ]

Thanks Xing!

For social robots to be deployed in an interaction-centred environment such as a hospital, entities need to understand how to design human-robot interactions so that the technology is adopted successfully. For instance, if the robot were placed at a hospital concierge desk, it would need to engage visitors much as a human concierge does, through verbal or non-verbal gestures (social cues). In this study, we investigate two hypotheses. First, we hypothesized that various attention-drawing social cues significantly correlate with the receptivity of the robot. Second, we hypothesized that humans' preferred medium of information transfer is verbal interaction. We set up a humanoid concierge robot, Cruzr, in a hospital for five days as a trial. Our findings indicate an increase in receptivity when Cruzr performed a social cue compared with a neutral mode, and that the preferred mode of communication was touch over voice.

[ Paper ]

It’s nice that Spot is helping out, but its mobility system is vastly overengineered for this sort of thing—a regular telepresence robot would do just fine, and likely be much easier for folks to manage on the ground. But sure, we’d love to see a COVID-related application that Spot is uniquely qualified for.

[ Boston Dynamics ]

The video shows a self-righting behavior of the four-legged walking robot Charlie, developed by the Robotics Innovation Center (RIC) of the DFKI. The behavior consists of two parts: first, a transition from a supine (back-down) lying pose to a prone (belly-down) lying pose, and second, the transition to a standing posture, where Charlie's 6-DoF artificial spine was key to reaching the stable four-legged pose. The first transition was optimized using state-of-the-art machine learning algorithms.

[ DFKI ]

What if we could generate complex movements for arbitrary robots with arms and legs interacting in a dynamic environment in real time? Such a technology would certainly revolutionize the motion capabilities of robots and unlock a wide range of very concrete industrial and service applications: robots would be able to react in real time to any change in the environment or unexpected disturbance during locomotion or manipulation tasks. However, computing complex movements for robots with arms and legs in multi-contact scenarios in unstructured environments is not realistically achievable in real time with current computational capabilities and numerical algorithms.

The MEMMO project aims to solve this problem by 1) relying on massive offline caching of precomputed optimal motions that are 2) recovered and adapted online to new situations with real-time-tractable model predictive control, where 3) all available sensor modalities are exploited for feedback control, going beyond the mere state of the robot for more robust behavior. MEMMO will develop a unified yet tractable approach to motion generation for complex robots with arms and legs.
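Just to make that "offline cache plus online adaptation" idea concrete, here is a minimal, purely illustrative Python sketch (not MEMMO's actual code; a toy straight-line "planner" stands in for a real trajectory optimizer): motions are precomputed offline for a grid of goals, and at runtime the nearest cached motion is retrieved and cheaply refined toward the new goal.

```python
import numpy as np

# --- Offline phase: cache "optimal" motions for a coarse grid of task goals ---
def plan_offline(goal, horizon=50):
    """Stand-in for an expensive trajectory optimizer: straight-line motion to the goal."""
    start = np.zeros_like(goal)
    return np.linspace(start, goal, horizon)             # (horizon, dim) trajectory

task_grid = [np.array([x, y]) for x in (-1.0, 0.0, 1.0) for y in (-1.0, 0.0, 1.0)]
motion_library = {tuple(g): plan_offline(g) for g in task_grid}

# --- Online phase: retrieve the nearest cached motion and adapt it to the new goal ---
def plan_online(new_goal, steps=20, step_size=0.2):
    key = min(motion_library, key=lambda g: np.linalg.norm(np.array(g) - new_goal))
    traj = motion_library[key].copy()                     # warm start from the cache
    weights = np.linspace(0.0, 1.0, len(traj))[:, None]   # deform the tail more than the start
    for _ in range(steps):                                # cheap local refinement (MPC-like)
        traj += step_size * weights * (new_goal - traj[-1])
    return traj

trajectory = plan_online(np.array([0.7, -0.4]))
print("adapted final configuration:", trajectory[-1])     # ends close to [0.7, -0.4]
```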

[ MEMMO ]

Thanks Carlos!

Introducing Mavic Air 2, the next big thing in aerial content creation. With intelligent features like FocusTrack, SmartPhoto, and QuickShots, this hyper-intuitive drone puts cinematic masterpieces right at your fingertips. From stunning 8K Hyperlapse videos to OcuSync 2.0 transmission and a 34-minute max flight time, Mavic Air 2 is smart, safe, and simply amazing.

[ DJI ]

Something only a little robot-y, but more than a little weird, from the MIT Media Lab.

[ Ganimals ]

Some rather dramatic new animations of JPL’s Mars helicopter.

[ JPL ]

With concerns over Covid-19, many researchers have had to keep a safe distance to avoid spreading the virus. But at least one project continues in a way that lets the work go on without worry. The Robotic Assay for Drought uses a robot to water and analyze plants to study the effects of drought. That leaves researchers free to study the data, from a safe distance.

[ Iowa State ]

From spaceships to Roombas, robots have the potential to be valuable assistants and to extend our capabilities. But it can still be hard to tell them what to do - we’d like to interact with a robot as if we were interacting with another person, but it’s often clumsy to use pre-specified voice/touchscreen commands or to set up elaborate sensors. Allowing robots to understand our nonverbal cues such as gestures with minimal setup or calibration can be an important step towards more pervasive human-robot collaboration.

This system, dubbed Conduct-a-Bot, aims to take a step towards these goals by detecting gestures from wearable muscle and motion sensors. A user can make gestures to remotely control a robot by wearing small sensors on their biceps, triceps, and forearm. The current system detects 8 predefined navigational gestures without requiring offline calibration or training data - a new user can simply put on the sensors and start gesturing to remotely pilot a drone.
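For a rough sense of what such a pipeline does, here's a toy Python sketch with made-up thresholds (not MIT's actual classifier, which notably works without per-user calibration): windowed muscle and motion signals get mapped to a handful of drone commands.

```python
import numpy as np

# Made-up thresholds for illustration only; the real system recognizes gestures
# for new users without offline calibration or training data.
EMG_ACTIVE = 0.5     # normalized muscle activation above which the arm counts as "tensed"
GYRO_ROTATE = 1.0    # rad/s of forearm rotation treated as a deliberate "rotate" gesture

def classify_window(emg_biceps, emg_forearm, gyro_z):
    """Map one short window of wearable-sensor samples to a coarse drone command."""
    if np.mean(emg_biceps) > EMG_ACTIVE and np.mean(emg_forearm) > EMG_ACTIVE:
        return "stop"                        # whole-arm tensing -> halt the drone
    if np.mean(gyro_z) > GYRO_ROTATE:
        return "turn_left"
    if np.mean(gyro_z) < -GYRO_ROTATE:
        return "turn_right"
    if np.mean(emg_forearm) > EMG_ACTIVE:
        return "move_forward"                # fist clench / strong forearm activation
    return "hover"                           # no confident gesture detected

# Example: a window dominated by a strong clockwise forearm rotation
window = dict(emg_biceps=np.random.rand(200) * 0.2,
              emg_forearm=np.random.rand(200) * 0.2,
              gyro_z=np.full(200, -1.4))
print(classify_window(**window))             # -> "turn_right"
```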

[ MIT ]

When wheels aren’t quite enough, and neither are rotors.

[ Huaping Liu ]

Thanks Huaping!

Seriously, what’s the point of having feet when you can have wheels instead?

[ RSL ]

Thanks Marko!

Self-assembly of modular robotic systems enables the construction of complex robotic configurations to adapt to different tasks. This is inspired by collective intelligence in nature, where groups of individuals can form a variety of structures in order to overcome the limited capability of each individual. We present a framework for SMORES-type modular robots to efficiently self-reconfigure into tree-structured kinematic topologies.

[ UPenn ]

Clearpath is BRINGING CLIPPY BACK

[ Clearpath ]

A Reflectance Transformation Imaging (RTI) technique realized by multi-rotor Unmanned Aerial Vehicles (UAVs), with a focus on deployment in difficult-to-access buildings, is presented in this letter. RTI is a computational photographic method that captures the surface shape and color of a subject and enables its interactive re-lighting from any direction in a software viewer, revealing details that are not visible to the naked eye. The input to RTI is a set of images captured by a static camera, each one under illumination from a different known direction. We present an innovative approach applying two multi-rotor UAVs to perform this scanning procedure in locations that are barely accessible or even inaccessible to people. The proposed system is designed for safe deployment in real-world scenarios in historical buildings of priceless value.
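If you're curious what happens to those images afterwards, here is a minimal Python sketch of the classic polynomial-texture-map fit that RTI viewers commonly rely on (synthetic data, a single pixel, not the authors' code): intensities observed under known light directions are fit with a biquadratic model that can then be evaluated for any new light direction.

```python
import numpy as np

def ptm_basis(lu, lv):
    """Biquadratic basis in the (lu, lv) components of the light direction."""
    return np.stack([lu**2, lv**2, lu*lv, lu, lv, np.ones_like(lu)], axis=-1)

# Known light directions for each captured image (projected unit-sphere components).
angles = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
lu, lv = 0.6 * np.cos(angles), 0.6 * np.sin(angles)

# Synthetic observed intensities for a single pixel across the 12 images
# (stand-in for intensities captured under the different light placements).
observed = 0.3 * lu + 0.5 * lv + 0.4

# Least-squares fit of the six per-pixel PTM coefficients.
coeffs, *_ = np.linalg.lstsq(ptm_basis(lu, lv), observed, rcond=None)

# "Re-light" the pixel from a direction that was never photographed.
new_lu, new_lv = 0.2, -0.5
print("relit intensity:", ptm_basis(new_lu, new_lv) @ coeffs)
```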

[ Paper ]

Pratap Tokekar, an assistant professor of computer science at the University of Maryland, is working on robotics research involving “3D” problems—robots performing tasks that can be dull, dirty or dangerous for humans. See how Tokekar is using teams of aerial and ground-based robots for environmental monitoring, precision agriculture, infrastructure inspection, and more.

[ UMIACS ]

Clean Sky 2’s ACCLAIM project explores robotics and simulation technologies to improve the assembly of cabin & cargo lining and hatrack elements in aircraft, to support the human workforce and reach greater efficiency! The project’s final demonstration meeting was held at the start of 2020, where the partners involved - Fraunhofer Institute for Manufacturing Technology and Advanced Materials (IFAM), CEIT, SFS intec GmbH, and National Research Council of Italy (CNR) - successfully demonstrated the full potential of automatic installation of cabin & cargo components.

[ Cleansky ]

Lex Fridman discusses the 1950 paper by Alan Turing that proposed what is now called the Turing Test, and whether or not machines are capable of thought.

[ AI Podcast ]
