Video Friday: NASA’s Curiosity Mars Rover Captures 1.8 Billion-Pixel Panorama

Your weekly selection of awesome robot videos


Erico Guizzo is IEEE Spectrum's Digital Innovation Director.

Image: NASA/JPL-Caltech/MSSS

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

NASA Curiosity Project Scientist Ashwin Vasavada guides this tour of the rover’s view of the Martian surface. Composed of more than 1,000 images and carefully assembled over the ensuing months, the larger version of this composite contains nearly 1.8 billion pixels of Martian landscape.

This panorama showcases "Glen Torridon," a region on the side of Mount Sharp that Curiosity is exploring. The panorama was taken between Nov. 24 and Dec. 1, 2019, when the Curiosity team was out for the Thanksgiving holiday. Since the rover would be sitting still with few other tasks to do while it waited for the team to return and provide its next commands, the rover had a rare chance to image its surroundings several days in a row without moving.

[ MSL ]

Sarcos has been making progress with its Guardian XO powered exoskeleton, which we got to see late last year in prototype stage:

The Sarcos Guardian XO full-body, powered exoskeleton is a first-of-its-kind wearable robot that enhances human productivity while keeping workers safe from strain or injury. Set to transform the way work gets done, the Guardian XO exoskeleton augments operator strength without restricting freedom of movement to boost productivity while dramatically reducing injuries.

[ Sarcos ]

Professor Hooman Samani, director of the Artificial Intelligence and Robotics Technology Laboratory (AIART Lab) at National Taipei University, Taiwan, writes in to share some ideas on how robots could be used to fight the coronavirus outbreak. 

Time is a critical issue when dealing with people affected by the coronavirus. Due to the current emergency, doctors may also be far away from patients, and avoiding direct contact with infected people is a medical priority. Immediate monitoring and treatment using specific kits must be administered to the victim. We have designed and developed the Ambulance Robot (AmbuBot) as a solution to address those issues. AmbuBot could be placed in various locations, especially in busy, remote, or quarantined areas, to assist in the above-mentioned scenarios. The AmbuBot also brings along an AED for use in a sudden cardiac arrest, and supports various modes of operation, from manual to semi-autonomous to autonomous functioning.

[ AIART Lab ]

Digit is launching later this month alongside a brand new sim that’s a 1:1 match to both the API and physics of the actual robot. Here, we show off the ability to train a learned policy against the validated physics of the robot. We have a LOT more to say about RL with real hardware... stay tuned.

Staying tuned!

[ Agility Robotics ]

This video presents simulations and experiments highlighting the functioning of the proposed Trapezium Line Theta* planner, as well as its improvements over our previous work, namely the Obstacle Negotiating A* planner. First, we briefly present a comparison of our previous and new planners. We then show two simulations. The first shows the robot traversing an inclined corridor to reach a goal near a low-lying obstacle. This demonstrates the omnidirectional and any-angle motion planning improvement achieved by the new planner, as well as the independent planning for the front and back wheel pairs. The second simulation further demonstrates the key improvements mentioned above by having the robot traverse tight right-angled corridors. Finally, we present two real experiments on the CENTAURO robot. In the first experiment, the robot has to traverse into a narrow passage and then expand over a low-lying obstacle. The second experiment has the robot first expand over a wide obstacle and then move into a narrow passage.

To be presented at ICRA 2020.
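For readers unfamiliar with any-angle planners like Theta*: their paths can cut across grid edges because, when relaxing a neighbor, the planner first tries to connect it directly to the parent of the current node if there is line of sight. Here is a minimal Python sketch of that core relaxation step (the grid encoding, costs, and function names are illustrative, not the authors' code):

```python
import math

def line_of_sight(grid, a, b):
    """Sampled check that the straight segment a->b crosses no blocked cell."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0))
    for i in range(steps + 1):
        t = i / steps if steps else 0.0
        x = round(x0 + t * (x1 - x0))
        y = round(y0 + t * (y1 - y0))
        if grid[y][x]:  # nonzero cell = obstacle
            return False
    return True

def relax(grid, parent, g, s, neighbor):
    """Theta*-style relaxation: try shortcutting via the parent of s."""
    p = parent[s]
    if p is not None and line_of_sight(grid, p, neighbor):
        cand, src = g[p] + math.dist(p, neighbor), p   # any-angle edge
    else:
        cand, src = g[s] + math.dist(s, neighbor), s   # ordinary grid edge
    if cand < g.get(neighbor, math.inf):
        g[neighbor] = cand
        parent[neighbor] = src
        return True
    return False
```

On an empty grid, the shortcut through the parent yields the straight-line cost, which is why Theta* paths look taut rather than staircase-shaped.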

[ Dimitrios Kanoulas ]

We’re contractually obligated to post any video with “adverse events” in the title.

[ JHU ]

Waymo advertises their self-driving system in this animated video that features a robot car making a right turn without indicating. Also pretty sure that it ends up in the wrong lane for a little bit after a super wide turn and blocks a crosswalk to pick up a passenger. Oops!

I’d still ride in one, though.

[ Waymo ]

Exyn is building the world’s most advanced autonomous aerial robots. Today, we launched our latest capability, Scoutonomy. Our pilotless robot can now ‘scout’ freely within a desired volume, such as a tunnel, or this parking garage. The robot sees the white boxes as ‘unknown’ space, and flies to explore them. The orange boxes are mapped obstacles. It also intelligently avoids obstacles in its path and identifies objects, such as people or cars. Scoutonomy can be used to safely and quickly find survivors in natural or man-made disasters.

[ Exyn ]

I don’t know what soma blocks are, but this robot is better with them than I am.

This work presents a planner that can automatically find an optimal assembly sequence for a dual-arm robot to assemble the soma blocks. The planner uses the mesh model of the objects and the final state of the assembly to generate all possible assembly sequences, and evaluates the optimal assembly sequence by considering stability, graspability, and assemblability, as well as the need for a second arm. In particular, the need for a second arm is considered when support from worktables and other workpieces is not enough to produce a stable assembly.
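The evaluation step described above can be imagined as scoring every candidate sequence and penalizing steps that would require the second arm. Here is a toy sketch of that idea, assuming made-up per-block scores and a placeholder support rule (the real planner derives these from mesh models and contact analysis):

```python
from itertools import permutations

# Hypothetical per-block scores for a 3-block assembly.
STABILITY    = {"A": 0.9, "B": 0.6, "C": 0.8}
GRASPABILITY = {"A": 0.7, "B": 0.9, "C": 0.5}

def needs_second_arm(placed, block):
    # Toy rule: block C is unstable until its support, A, has been placed.
    return block == "C" and "A" not in placed

def score(sequence, second_arm_penalty=0.3):
    """Sum stability + graspability over the sequence, penalizing two-arm steps."""
    total, placed = 0.0, set()
    for block in sequence:
        total += STABILITY[block] + GRASPABILITY[block]
        if needs_second_arm(placed, block):
            total -= second_arm_penalty
        placed.add(block)
    return total

# Brute-force search over all orderings; real planners prune this space.
best = max(permutations("ABC"), key=score)
```

With these numbers, sequences that place A before C avoid the two-arm penalty and score highest.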

[ Harada Lab ]

Semantic grasping is the problem of selecting stable grasps that are functionally suitable for specific object manipulation tasks. In order for robots to effectively perform object manipulation, a broad sense of contexts, including object and task constraints, needs to be accounted for. We introduce the Context-Aware Grasping Engine, which combines a novel semantic representation of grasp contexts with a neural network structure based on the Wide & Deep model, capable of capturing complex reasoning patterns. We quantitatively validate our approach against three prior methods on a novel dataset consisting of 14,000 semantic grasps for 44 objects, 7 tasks, and 6 different object states. Our approach outperformed all baselines by statistically significant margins, producing new insights into the importance of balancing memorization and generalization of contexts for semantic grasping. We further demonstrate the effectiveness of our approach on robot experiments in which the presented model successfully achieved 31 of 32 suitable grasps.
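The Wide & Deep structure mentioned above combines a linear "wide" component over sparse cross features (memorization of specific object-task pairs) with a "deep" embedding MLP (generalization to unseen combinations). A minimal NumPy sketch of such a forward pass, with invented dimensions and random weights standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoding: 44 objects and 7 tasks, as in the paper's dataset.
N_OBJ, N_TASK, DIM = 44, 7, 8

# Wide part: one weight per object-task cross feature (memorization).
w_wide = rng.normal(0, 0.1, size=(N_OBJ * N_TASK,))

# Deep part: embeddings plus a one-hidden-layer MLP (generalization).
emb_obj  = rng.normal(0, 0.1, size=(N_OBJ, DIM))
emb_task = rng.normal(0, 0.1, size=(N_TASK, DIM))
W1 = rng.normal(0, 0.1, size=(2 * DIM, 16))
w2 = rng.normal(0, 0.1, size=(16,))

def grasp_score(obj_id, task_id):
    """Wide & Deep forward pass: sigmoid(wide cross term + deep MLP term)."""
    wide = w_wide[obj_id * N_TASK + task_id]
    h = np.concatenate([emb_obj[obj_id], emb_task[task_id]])
    deep = np.maximum(h @ W1, 0.0) @ w2  # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(wide + deep)))
```

The "balancing memorization and generalization" insight the authors mention maps directly onto how much each of the two terms contributes to the final score.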

[ RAIL Lab ]

I’m not totally convinced that bathroom cleaning is an ideal job for autonomous robots at this point, just because of the unstructured nature of a messy bathroom (if not of the bathroom itself). But this startup is giving it a shot anyway.

The cost target is $1,000 per month.

[ Somatic ] via [ TechCrunch ]

IHMC is designing, building, and testing a mobility assistance research device named Quix. The main function of Quix is to restore mobility to those stricken with lower limb paralysis. In order to achieve this the device has motors at the pelvis, hips, knees, and ankles and an onboard computer controlling the motors and various sensors incorporated into the system.

[ IHMC ]

In this major advance for mind-controlled prosthetics, U-M research led by Paul Cederna and Cindy Chestek demonstrates an ultra-precise prosthetic interface technology that taps faint latent signals from nerves in the arm and amplifies them to enable real-time, intuitive, finger-level control of a robotic hand.

[ University of Michigan ]

Coral reefs represent only 1% of the seafloor, but are home to more than 25% of all marine life. Reefs are declining worldwide. Yet, critical information remains unknown about basic biological, ecological, and chemical processes that sustain coral reefs because of the challenges to access their narrow crevices and passageways. A robot that grows through its environment would be well suited to this challenge as there is no relative motion between the exterior of the robot and its surroundings. We design and develop a soft growing robot that operates underwater and take a step towards navigating the complex terrain of a coral reef.

[ UCSD ]

What goes on inside those package lockers, apparently.

[ Dorabot ]

In the future robots could track the progress of construction projects. As part of the MEMMO H2020 project, we recently carried out an autonomous inspection of the Costain High Speed Rail site in London with our ANYmal robot, in collaboration with Edinburgh Robotics.

[ ORI ]

Soft Robotics technology enables seafood handling at high speed even with amorphous products like mussels, crab legs, and lobster tails.

[ Soft Robotics ]

Pepper and Nao had a busy 2019:

[ SoftBank Robotics ]

Chris Atkeson, a professor at the Robotics Institute at Carnegie Mellon University, watches a variety of scenes featuring robots from movies and television and breaks down how accurate their depictions really are. Would the Terminator actually have dialogue options? Are the "three laws" from I, Robot a real thing? Is it actually hard to erase a robot’s memory (a la Westworld)?

[ Chris Atkeson ] via [ Wired ]

This week’s CMU RI Seminar comes from Anca Dragan at UC Berkeley, on “Optimizing for Coordination With People.”

From autonomous cars to quadrotors to mobile manipulators, robots need to co-exist and even collaborate with humans. In this talk, we will explore how our formalism for decision making needs to change to account for this interaction, and dig our heels into the subtleties of modeling human behavior — sometimes strategic, often irrational, and nearly always influenceable. Towards the end, I’ll try to convince you that every robotics task is actually a human-robot interaction task (its specification lies with a human!) and how this view has shaped our more recent work.

[ CMU RI ]
