Blue Is a New Low-Cost Force-Controlled Robot Arm from UC Berkeley

Designed to safely perform human-scale tasks, Blue will cost $5k and help accelerate research towards useful home robots

10 min read
UC Berkeley Blue robot arm
Blue is a new robot arm designed to be useful and accessible to researchers working on AI and applied manipulation for human environments.
Image: UC Berkeley

Robots are well known for being very good at some very specific things. They’re often defined by words like “precision,” “repeatability,” and “speed,” because if you want a robot to be uniquely useful, it usually has to leverage one or more of those characteristics in a way that makes it better at some specific task than humans are. Robots have been doing this for decades, typically in industrial settings, but things are starting to change, and roboticists are beginning to look toward applications in less constrained, more dynamic environments.

Such environments (our homes, for example) are the kinds of places where we really, really want robots to be useful. We want them doing our chores so that we don’t have to, ideally without causing catastrophic damage or injury in the process. Tasks like these call for a much different set of capabilities: to do the things that humans do in the places where humans are, “speed” and “repeatability” and all that matter far less than compliance and the ability to make the most of clever, adaptable software. The upshot is that while artificial intelligence has advanced rapidly over the past few years, researchers have been developing that software for (and on) robots that are overengineered for many of the tasks we want them to do, more expensive than they need to be, and probably not as safe as we’d want.

A team of roboticists at UC Berkeley led by Pieter Abbeel identified this problem three years ago, and began development of a new robot arm designed specifically to be useful (and accessible) to folks researching applied manipulation in human environments. Blue is a human-scale 7-degrees-of-freedom arm that will cost less than US $5,000 (when manufactured in volume), and the hope is that it’s capable enough, and affordable enough, to significantly accelerate research towards useful home robots.

This video gives a brief overview of the kinds of things that Blue is designed to do; note that the video is sped up, and that the robot is being teleoperated by a human using an HTC Vive VR headset and handheld controllers.

Some specs on Blue:

Degrees of freedom: 7
Mass: 8.7 kg
Reach (full extension): 0.7 m
Continuous payload: 2 kg
Peak velocity: 2.1 m/s
Repeatability: 3.7 mm
Position bandwidth: 7.5 Hz
Power consumption: 100 watts
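To get a feel for what these numbers imply together, here is a back-of-envelope check (our arithmetic, not a figure from the spec sheet): holding the rated continuous payload at full reach sets a lower bound on the static torque the shoulder must sustain, ignoring the mass of the arm's own links.

```python
# Rough static shoulder-torque estimate from the published specs.
# This ignores link masses, so the real requirement is higher.
G = 9.81           # gravitational acceleration, m/s^2
PAYLOAD_KG = 2.0   # continuous payload from the spec sheet
REACH_M = 0.7      # full extension from the spec sheet

shoulder_torque_nm = PAYLOAD_KG * G * REACH_M
print(f"{shoulder_torque_nm:.1f} N·m")  # ≈ 13.7 N·m
```

That is roughly the continuous torque a human shoulder produces holding a couple of textbooks at arm's length, which fits the "human-scale" design goal.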

Each link of the robot has a 2-DoF geared differential module combining two planar QDD timing-belt transmissions into output pitch and roll motions. Image: UC Berkeley

More generally, Blue (referring to a single arm) is built around quasi-direct-drive (QDD) actuators, which offer robust force control and selectable impedance and are highly backdrivable. Brushless motors designed for drones drive a 7.125:1 single-stage reduction (a timing belt), coupled to a differential (gears with large plastic teeth under preload). Each custom motor driver board is equipped with a 14-bit absolute magnetic encoder for motor commutation and robot position sensing; 12-bit current sensing for closed-loop current (and thus torque) control of each servomotor; a 3-axis accelerometer for state estimation, collision detection, and calibration; and temperature sensors for thermal monitoring. The QDD actuators make Blue both generally safe for humans to be around and resilient to damage. The layout of the arm is similar to a human arm’s, with three shoulder joints, one elbow joint, and three wrist joints. A simple parallel-jaw gripper is included.
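The "closed-loop current (and thus torque) control" mentioned above works because motor torque is proportional to winding current. Here is a minimal sketch of that mapping for a single QDD joint; the torque constant and efficiency values are illustrative assumptions, not Blue's published parameters (only the 7.125:1 reduction comes from the article).

```python
# Hypothetical current-to-torque mapping for one quasi-direct-drive joint.
KT = 0.08           # motor torque constant, N·m/A (assumed, not Blue's)
GEAR_RATIO = 7.125  # single-stage timing-belt reduction (from the article)
EFFICIENCY = 0.9    # transmission efficiency (assumed)

def current_for_torque(desired_joint_torque_nm: float) -> float:
    """Current setpoint the motor driver would regulate to reach a
    desired torque at the joint output, after the belt reduction."""
    motor_torque = desired_joint_torque_nm / (GEAR_RATIO * EFFICIENCY)
    return motor_torque / KT

print(f"{current_for_torque(5.0):.2f} A")
```

The low gear ratio is the key design choice: with only a 7.125:1 reduction, reflected inertia and friction stay small, so external forces back-drive the motor and show up directly in the measured current, giving force sensing without a dedicated force/torque sensor.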

From the beginning, Blue was designed to be a “useful robotic manipulator,” which the UC Berkeley researchers helpfully define in a paper that will be presented at ICRA next month:

We define a design paradigm that enables useful, low-cost, robotic arms capable of manipulation tasks in unconstrained environments.

a) We define useful in metrics similar to humans: human-size, 7 degrees of freedom, 2-kg payload, safe, compliant, and with a repeatability under 10 mm.

b) We define low-cost as: pricing below $5000 to an end-user for a manufacturing run of more than 1500 arms.

c) A partial set of tasks to consider includes: unloading a dishwasher, stocking a refrigerator, floor decluttering, opening doors, opening microwave ovens, sorting packages, physical stroke rehabilitation, folding laundry, cleaning windows, bed making, and bathroom cleaning. We demonstrate the robot in kitchen cleaning, table decluttering, telepresence, and machine tending.

Making an arm that’s useful in the context of human tasks is a key differentiator for Blue, because it’s what allows it to be capable and affordable at the same time. Doing the sorts of tasks described above doesn’t require sub-millimeter precision or high speed or high strength or anything like that. Blue just needs to be able to lift things like clothes and dishes, and put them down again in generally the right place using visual, tactile, or other sensors, in the same way that humans do and at about the same pace.

“AI has been moving very fast, and existing robots are getting smarter in some ways on the software side, but the hardware’s not changing. Everybody’s using the same hardware that they’ve been using for many years . . . We figured there must be an opportunity to come up with a new design that is better for the AI era.”

This kind of thinking is what led to Blue’s 2-kg continuous payload, for example. Blue can actually lift twice as much for brief periods, but its motors will overheat, and it’ll need a break, which is basically how humans work. I’m not sure it’s even worth calling all this a compromise. Rather, Blue has been designed to fulfill a very specific set of criteria that, the UC Berkeley researchers hope, will make it ideal for many people working on solving real-world robotics problems.

Of course, that $5,000 price tag is going to be a major draw, and even researchers who might like a little more precision or payload are likely to give Blue a serious look simply because of its affordability. Indeed, stacked up against other force-controlled manipulators that can lift human-useful payloads, Blue compares very favorably. Baxter costs five times as much, albeit with two arms. The Franka Emika is $30k. Kuka’s LBR is nearly $70k. The Barrett WAM arm is something like $135k, and we can’t forget the now decade-old PR2, which (while still pretty great) you can’t buy or service anymore and cost $400k when you could. Obviously, all of these more expensive arms can do things that Blue can’t, but the question is this: For robotics research right now, are those capabilities worth the price of (say) one Franka Emika arm when you could instead buy six Blue arms? The UC Berkeley roboticists are betting that for many researchers, six Blue arms will be the obvious choice.

David Gealy in the background using a VR headset and handheld controllers to teleoperate Blue and use a coffee maker. Photo: Phillip Downey

Having more arms is appealing for all kinds of reasons. One robot per student in a class or one robot per researcher in a lab is going to be much more effective (and fun!) than trying to schedule your “robot time.” As we’ve seen from folks like Google, using a whole bunch of arms is one way of speeding up robotics research, especially when it comes to unsupervised learning. It’s also useful for scaling learning-by-demonstration, since Blue’s kinematic design is anthropomorphic, meaning that its arms move in the same way that yours do. The idea is that it’ll be easy(ish) for people to teleoperate Blue to remotely perform household tasks, while your AI algorithm gradually figures out how to do similar tasks by itself. Longer term, it’s even possible that this could lead to commercial applications.

Several versions of Blue prototype arms are already in use at UC Berkeley, and at least seven alpha units have been sold, which will help bootstrap the 50 or so beta units that will be produced next. Once those have been sold, the final generation should be able to hit that $5k price point. We’re told that the beta units should be done in May, with the final (“consumer”) version coming in late 2019 or early 2020.

Blue’s creators: Pieter Abbeel, David Gealy, and Stephen McKinley. Photo: Phillip Downey

For more details, we spoke with the UC Berkeley roboticists behind Blue’s development: graduate student David Gealy, postdoctoral research fellow Stephen McKinley, and Professor Pieter Abbeel, who is the director of the Berkeley Robot Learning Lab and co-founder and chief scientist of AI startup Covariant.

IEEE Spectrum: What made you realize three years ago that you should start developing a robot like Blue?

Pieter Abbeel: Three years ago, the observation for us was that AI has been moving very fast, and existing robots are getting smarter in some ways on the software side, but the hardware’s not changing. Everybody’s using the same hardware that they’ve been using for many years, and it wasn’t designed with AI in mind— it was designed with repeated motion in mind. And so, we figured there must be an opportunity to come up with a new design that is better for the AI era.

The price point of the robots that we were using in the lab was also a big part of it. PR2 is a beautiful robot. It’s ten years old, and it still works. But it’s very expensive, and if you break it, you can’t get another one. You’re on your own. And that’s not the way to get every Ph.D. student their own robot, and it’s not the way to get consumers to buy robots. You need to find a way to drive the cost down without foregoing essential things.

The hope was, and three years later it seems like the hope was correct, that we could design a robot that’s much lower cost and also safer around humans than the previous types of robots. And the hope from here is to get these robots into people’s hands. People will be able to build things that we can’t even think of right now, and that’s part of the excitement.

How did the PR2 influence your design approach with Blue?

David Gealy: When we started this project, we were able to take a step back and look at all of the advances that have happened around the robotics space to figure out where we could go within the design to allow for the same functionality as the PR2, but from a completely new perspective. The human scale is really important for operating in human environments. The natural compliance of the robot, which enables it to work in unstructured environments. And the human-like configuration of the arm joints which allows it to behave the way a human arm would. When you’re trying to teach the robot through demonstration, it’s easier if you can assume that the robot will behave the way your own body would.

Stephen McKinley: Adding on top of that, PR2 is incredibly durable. We’ve been whacking it against things for 10 years, and it’s still going strong. That’s a really important feature to have, because if it’s compliant and durable, researchers can mess up while using it time and time again, whereas if you do that with an industrial arm, you broke something for sure.

Your demo videos show Blue doing the kinds of household tasks that are very difficult for robots to do right now—why choose those tasks to introduce Blue?

David Gealy: I think we’re really trying to outline the highest level capability of the robot, which is going back to our thesis that anything that is teleoperable, a future AI will be able to do. And so that’s why we picked out these really hard problems—problems that other robots couldn’t necessarily do so well and so fluidly. So that was definitely on purpose, to showcase what’s possible with the hardware.

Pieter Abbeel: We want to speak to people’s imagination. Home robots have been a dream for a long time, so that’s also why we take a lot of example tasks out of that space. The robot is designed to be around people; the inherent compliance makes it ideal. So we want to bring that concept in.

Do you feel comfortable calling this robot inherently safe?

Stephen McKinley: The robot is able to render a range of stiffnesses. What we’ve created is a robot that’s inherently compliant, but can also render itself to be more stiff. That’s similar to what a human being can do—your arm can be very flexible and compliant, but you can also tense up your muscles and become very rigid. So, if the robot were to have a lapse of control, it goes back to being soft and compliant, as opposed to a lot of other cobots, which are taking an inherently rigid piece of machinery and putting more sensors on it so that it can moderate its interactions with the environment.
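McKinley's "render a range of stiffnesses" idea can be sketched with a simple virtual spring-damper law on a torque-controlled joint. This is a generic impedance-control illustration under assumed gains, not Blue's actual controller: turning the gains down makes the joint soft, turning them up makes it rigid, and on a control lapse the safe default is simply low gains.

```python
# Minimal single-joint impedance sketch: command a torque that emulates
# a virtual spring (stiffness k) and damper (d) about a setpoint q_des.
def impedance_torque(q: float, qd: float, q_des: float,
                     k: float, d: float) -> float:
    """Torque for a virtual spring-damper; q in rad, qd in rad/s."""
    return k * (q_des - q) - d * qd

# Same 0.1 rad displacement, two "muscle tension" settings:
stiff = impedance_torque(q=0.1, qd=0.0, q_des=0.0, k=100.0, d=5.0)
soft = impedance_torque(q=0.1, qd=0.0, q_des=0.0, k=10.0, d=1.0)
print(stiff, soft)  # -10.0 -1.0
```

The contrast with conventional cobots is in what happens when the controller stops commanding torque: a backdrivable QDD arm with zero gains just goes limp, whereas a stiff, highly geared arm stays rigid regardless of what its software does.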

Under teleoperation control, a two-armed version of Blue folds a red towel. Photo: Phillip Downey

It seems like more and more research is being done in simulation, especially with strategies for converting skills learned in simulation to real world skills. How does Blue fit into that?

Stephen McKinley: We definitely considered that when we were designing the robot. Specifically it shows up in the end-effector, which is a fairly simple parallel jaw gripper. And we chose to make that simple so that it could be modeled and simulated more easily. But anytime you’re trying to learn something in simulation and apply it to the real world, there are always going to be differences, so using a physical robot still has advantages.

Pieter Abbeel: For everyone who does Sim2Real research, the proof is still in running it on the real robot, and showing that it transfers. And there’s a lot of iteration there. It’s not like you train in sim, you test on the real robot, and you’re done. It’s more like, you train in sim, test on the real robot, realize it’s not generalizable, rethink your approach, and train in a new sim, and hope that now it’ll generalize on the real robot. And this process can go on for a long time before you actually get that generalization behavior that you hope for. And in that process you’re constantly testing on a real robot to see if your generalization works, or doesn’t work.

Is Blue a research project, or a product?

David Gealy: The mission is just to get arms out to the research community. [Making money] is less important to us than trying to accelerate research. We’re lucky that we’re attached to a university so that this doesn’t need to be a massive corporation for us to consider it a success.

Pieter Abbeel: We don’t want to lose any money over it, but if we can get a thousand researchers very happy with this arm, that would be a success.

We’re glad that the focus here is on making physical robots. Originally, the idea was just to develop a design and make it available for anyone to build by buying parts directly and doing some simple machining and 3D printing. While that’s a good idea in theory, in practice most people don’t have the skill, equipment, time, or (let’s be honest) the inclination to build a robot from scratch to save money. We’ve seen this with TurtleBots: It’s totally possible to hack together a Roomba and a netbook with 3D-printed and laser-cut parts, but in most robotics research contexts, it’s much more efficient to just buy a TurtleBot instead. Same with Blue.

At $5,000 each (assuming everything works out and the arm gets to that price), Blue is likely achievable for most robotics labs, courses in robotics, and potentially even for high schools. It’s going to take UC Berkeley a little bit to get to that production volume, but we’re excited to see it happen—almost as much as we’re excited to see the research that comes after.

If you’re interested in purchasing a Blue arm, you can find more information and a form here.

“Quasi-Direct Drive for Low-Cost Compliant Robotic Manipulation,” by David V. Gealy, Stephen McKinley, Brent Yi, Philipp Wu, Phillip R. Downey, Greg Balke, Allan Zhao, Menglong Guo, Rachel Thomasson, Anthony Sinclair, Peter Cuellar, Zoe McCarthy, and Pieter Abbeel, from University of California, Berkeley, will be presented at ICRA 2019 in Montreal, Canada.

[ Project Blue ]
