Ford Self-Driving Vans Will Use Legged Robots to Make Deliveries

Agility Robotics’ Digit will bring packages from a delivery vehicle to your front door

Next time you shop online, this skinny robot may show up with your package. Photo: Ford and Agility Robotics

Ford is adding legs to its robocars—sort of.

The automaker is announcing today that its fleet of autonomous delivery vans will carry more than just packages: Riding along with the boxes in the back will be a two-legged robot.

Digit, Agility Robotics’ humanoid unveiled earlier this year on the cover of IEEE Spectrum, is designed to move in a more dynamic fashion than regular robots do, and it’s able to walk over uneven terrain, climb stairs, and carry 20-kilogram packages.

Ford says in a post on Medium that Digit will bring boxes from the curb all the way to your doorstep, covering those last few meters that self-driving cars are unable to reach. The company plans to launch a self-driving vehicle service in 2021.

Digit performs flawlessly in the video, although it wasn’t operating fully autonomously. It was being teleoperated at a high level via commands like “walk to this location,” “climb the stairs,” and “put down the box.” We’re told that Digit didn’t fall over even once during filming, but certainly a bigger challenge for the robot will be to perform this well across the wide variety of homes that it may eventually have to handle, with obstacles like inclined surfaces, different types of stairs, overgrown yards, gates, and wayward pets and/or children.

Having a vehicle serve as a base station provides a variety of advantages for Digit. For example, Digit can get away with a much smaller battery than most large humanoids, because it only really needs to operate for a few minutes at a time before returning to the vehicle to recharge as it drives to the next delivery stop. And while Digit carries several stereo cameras and a lidar, it will have help from its companion robovan to do much of the mapping and path planning required to carry out a delivery. That’s an advantage, Ford says, because its autonomous vehicles are equipped with much more powerful sensors and computers than Digit could carry alone.

From the Medium post:

Digit itself will have just enough sensory power to travel through basic situations. If it comes across an unexpected obstacle, it can send an image back to the vehicle and have the car figure out a solution. The car could even send that information into the cloud and ask for other systems to help Digit navigate its environment, providing multiple levels of added assistance while keeping the robot light and nimble.
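
To make that hand-off concrete, here is a minimal sketch of the escalation chain in Python. The names and interfaces (Obstacle, Vehicle, CloudService, handle_unexpected_obstacle, and a plan-as-list-of-waypoints return type) are illustrative assumptions, not Ford's or Agility's actual software.

```python
# A minimal sketch of the escalation chain described above, assuming
# hypothetical interfaces: Obstacle, Vehicle, and CloudService are
# illustrative names, not Ford's or Agility's actual software.

from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Obstacle:
    image: bytes                    # snapshot from one of Digit's cameras
    location: Tuple[float, float]   # rough position in the robot's local frame


class Vehicle:
    def plan_around(self, obstacle: Obstacle) -> Optional[List[Tuple[float, float]]]:
        """Use the van's heavier sensors and compute to find a detour."""
        ...  # placeholder: real perception and planning would run here


class CloudService:
    def plan_around(self, obstacle: Obstacle) -> Optional[List[Tuple[float, float]]]:
        """Last resort: ask remote systems for help."""
        ...


def handle_unexpected_obstacle(obstacle: Obstacle,
                               vehicle: Vehicle,
                               cloud: CloudService) -> List[Tuple[float, float]]:
    # The robot only detects the obstacle and packages the evidence;
    # the heavier reasoning happens off-board, keeping Digit light and nimble.
    path = vehicle.plan_around(obstacle)   # 1. ask the vehicle first
    if path:
        return path
    path = cloud.plan_around(obstacle)     # 2. escalate to the cloud
    if path:
        return path
    return []                              # 3. no plan found: hold position and wait
```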

Digit will ride in the back of Ford’s self-driving delivery vans, unfolding itself at every stop to make a delivery. The robot will rely on the vehicle to recharge its battery as well as for sensing, computing, and connectivity resources. Image: Ford

This is a very interesting concept, and to learn more about it (and about how Digit will handle all the rest of this operation), we spoke with Agility Robotics CEO Damion Shelton.

IEEE Spectrum: Offloading the sensing and computing required for autonomous navigation is a very interesting idea—can you break down what will be done on the robot and what will be done on the vehicle? 

Damion Shelton: The exact split is still to be determined, but the basic idea is to run things that require real-time (or close to it) processing on the robot, and push other tasks off-board. Examples of the former are things like footstep placement, low-level postural control, execution of previously trained RL behaviors, and path-planning out to 3 to 5 steps. Tasks that could be pushed to the vehicle include storage and retrieval of maps, training of RL behaviors, and initialization of the robot’s global pose during deployment. The initialization of global pose is actually one of the most important things the vehicle can be used for, in our view. Absent that, Digit would need to build a local world model from ground zero every time it gets out of the vehicle.
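
As a rough illustration of that split, the partitioning Shelton describes could be written down as follows. The task names come from his answer; the data structure, the default rule, and the assign helper are assumptions made purely for illustration, not Agility's actual architecture.

```python
# A rough illustration of the on-robot vs. off-board split described above.
# Task names are taken from Shelton's answer; the data structure and the
# default rule are illustrative assumptions.

ON_ROBOT = {            # tight, real-time (or near-real-time) feedback loops
    "footstep_placement",
    "low_level_postural_control",
    "trained_rl_behavior_execution",
    "short_horizon_path_planning",   # roughly 3 to 5 steps ahead
}

ON_VEHICLE = {          # latency-tolerant, benefits from bigger compute and storage
    "map_storage_and_retrieval",
    "rl_behavior_training",
    "global_pose_initialization",    # so Digit need not rebuild a world model from scratch
}


def assign(task: str) -> str:
    """Return where a given task would run under this assumed partitioning."""
    if task in ON_ROBOT:
        return "robot"
    if task in ON_VEHICLE:
        return "vehicle"
    # Anything that is not time-critical defaults to the vehicle, which carries
    # more powerful sensors and computers than the robot can.
    return "vehicle"


print(assign("footstep_placement"))          # -> robot
print(assign("global_pose_initialization"))  # -> vehicle
```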

Having bipedal robots that are mechanically capable of traversing semi-structured terrain is often very far from having bipedal robots that are actually able to reliably operate in semi-structured terrain without human supervision. How will you develop the confidence to deploy Digit in real-world use, and what are the biggest challenges you’ll need to solve?

We don’t anticipate operating without human supervision for quite a while. The form that this takes will relax over time; initially, we would expect a human to be present in the immediate vicinity of the robot during operation. After we’re confident that the performance in a particular geofenced area is reliable, direct monitoring could be replaced with “call center” style central monitoring, but that’s a minimum of several years out. From the perspective of data gathering and continued refinement of both hardware and software, the fact that monitoring is required in the immediate future isn’t really a detriment. Particularly in collaborative applications—say, where the robot is a labor assistant to a delivery driver—the additional cost to have a human partially in the loop is close to zero (since the driver is already doing the work now).

Digit has cameras, lidar, and a computer, but it will get help from the autonomous vehicle nearby, which has more processing power, to do mapping and path planning so the robot can go up and down steps and avoid obstacles. Photo: Ford and Agility Robotics

Digit will likely have to interact with a variety of non-deterministic, dynamic obstacles, like other people or pets. How much of a concern is having reliable autonomy when there’s potential for all kinds of unpredictable edge cases?

From a test deployment standpoint (tens to hundreds of robots in scale), our plan is to avoid edge cases that we’re not able to handle and allow just enough uncertainty into the mix to keep our R&D moving forward. For the first 12 to 18 months of testing—starting in early 2020—we anticipate pre-mapping and qualifying all of the environments we operate in. This is what the majority of autonomous vehicle companies have done: Geofence an area you understand, and get comfortable there before expanding. It’s certainly true that we won’t be able to deal with a majority of the “hard problems” in the world early on, but we don’t see that as a barrier to deployment. We don’t need to address the most difficult cases, since even the easiest 1/10th of a percent of the market is enormous relative to any plausible sustained growth rate.

But that’s not to minimize the difficulty of the edge cases. You’re exactly correct that reliability in the real world is challenging—we hope that by getting Digits out in the world as soon as possible, we start to collect data on the hard problems even if we don’t (yet) have a deployable solution.

Will Digit be able to interact with humans directly? What would those interactions look like?

We’re not super focused on human-robot interaction problems, other than as they relate to mobility. In a perfect world, Digit blends into the background and interactions are primarily non-verbal. You know that other pedestrians aren't going to run into you on a sidewalk by having a mental model of posture, gait dynamics, and so on. We think a lot about those kinds of dynamic cues, but don’t have plans to turn Digit into a witty conversationalist. That being said, the production version of Digit is going to have a speaker on it, and a light display, both of which can be used to provide minimalist feedback to the outside world.

Is this the application you had in mind when you designed Digit? What other kinds of things would you like to see Digit doing?

Yes, at least in the sense that we believed from the beginning that the best early market for Digit would be in logistics. It’s a market that requires the mobility of legs (at least in the areas we’re focusing on) while not requiring super advanced AI (in “easy” environments), FDA certification (e.g. in-home assistive robotics for the elderly), or harsh environment operations (e.g. firefighting). Basically, if you can move through the world and carry a box, you’ve addressed the absolute minimalist use case for logistics.

Delivery services are a large and rapidly growing industry, which also gives us the ability to focus on a profitable use-case from day one. Many of the “dull, dirty, dangerous” jobs that robots are usually targeted at are both quite challenging and relatively low volume. Legs have been talked about for years as a tool for disaster recovery, search and rescue, and so on, but these are enormously challenging environments to move through and the business case is hard to rationalize out of the gate. Conversely, if we have a fleet of Digits that learn to move through the world with the large training set of last-mile environments, and then simultaneously have the cost pressure and economy of scale of a commercial deployment, the odds of us then being able to offer a competitive product in more specialized markets go up dramatically.

For Agility, which has raised over US $8 million from investors like Playground Global, the partnership with Ford is a major milestone. As Agility CTO Jonathan Hurst described in his Spectrum article, making deliveries is one of the applications the company has envisioned as it seeks to commercialize its legged machines.

To showcase the idea of robotic delivery, two Digits will be at the City of Tomorrow event that Ford is hosting in Los Angeles on Thursday.
