The idea of robots that are “collaborative” is usually about robots that are safe for humans to work next to. Sometimes, a collaborative robot might assist a human by performing one step of a task while a human performs another step of the same task. What’s a bit more unusual are robots that are collaborative in that they work directly with a human, augmenting the abilities of that human with intelligence, not just strength.
The key here is really the “task knowledge”: the robot knows what you’re supposed to do, and it augments your effective skill or intelligence to help you do it. In this implementation, the end effector of the robot can point towards what you’re supposed to do, avoid pointing towards stuff you’re not supposed to do, and even point-blank refuse to let you do something when it knows that there’s a conflict with the task you’re supposed to be working on.
Potentially, a tool like this would allow you to perform pretty well on tasks that you’re not highly skilled at: you just have to have some mediocre, basic level of skill, and the robot can help you close the gap. You can imagine something basic like a smart hammer that can sense nails and will hit them if you swing the hammer anywhere close (while avoiding your thumb), or something far more complicated like an entire system that enables you to repair your own car. The researchers suggest other, perhaps slightly more realistic examples, like an intelligent weed sprayer that you can wave around randomly but only sprays plants that it knows are weeds.
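The weed-sprayer example boils down to a simple gate between where the user happens to be aiming and what the tool’s task knowledge says is a valid target. Here’s a minimal sketch of that gating logic; the names and categories are our own illustration, not the researchers’ implementation:

```python
# Hypothetical sketch of "task knowledge" gating: the tool only acts when
# the user's aim lines up with a target the task model approves of, and it
# actively refuses when acting would conflict with the task.
WEEDS = {"dandelion", "thistle"}   # targets the task model says to spray
CROPS = {"tomato", "lettuce"}      # targets the task model must protect

def decide_action(plant_in_view):
    """Decide what the sprayer should do for the plant under the nozzle."""
    if plant_in_view in WEEDS:
        return "spray"    # user's aim is close enough; tool completes the step
    if plant_in_view in CROPS:
        return "refuse"   # conflict with the task; tool blocks the action
    return "idle"         # unrecognized target; do nothing

# The user can wave the sprayer around more or less at random:
for plant in ["tomato", "dandelion", "fern", "thistle"]:
    print(plant, "->", decide_action(plant))
```

The point of the sketch is that the user supplies only rough aim, while the accept/refuse decision comes entirely from the tool’s model of the task, which is what lets a low-skill user get passable results.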
We should make sure to point out that neither we (nor the researchers) are suggesting that robotic tools could turn a novice into a professional: rather, they could turn a novice into someone with a rough competency who is less likely to make basic mistakes. The robot can make you better at something that you have very little experience with, but it’s not going to put any professionals out of work (although it might make their jobs a bit easier as well).
A user study showed that people tended to prefer “smarter” tools with more autonomy, and that these tools helped people to both be more efficient and feel more efficient. As you might expect, there was also some frustration when the tool had a different idea from the user about the most efficient way to complete a task:
During the experiment, users expressed frustration that, “The tool won’t go where I want it to” (paraphrase) and said that the tool often made mistakes when it chose a different pixel to paint from the one the user wanted. These effects were most apparent in semi-auto mode where the most cooperation was required. This could be improved by making the tool’s behaviour more predictable and by better modelling of the user’s intention.
User opinions of semi-autonomous and autonomous mode were more split in tiling than in painting where everyone preferred automatic. In tiling, some users reported that they liked the extra control they had over tile placement while others preferred the ease of use. One user reported that when in autonomous mode they felt less responsible for the outcome of the task because it was “the robot’s fault” if it made a mistake. This statement is not entirely true as the user does have a large impact on the performance of the tool.