"Real world" is a dangerous phrase to talk about when it comes to robots, because robots very seldom find themselves operating alone out there in wild and forlorn places like your living room or office. Autonomy in unstructured environments is an exceptionally difficult problem to tackle, and it gets even harder when you're dealing with multiple robots trying to collaborate on tasks in situations where they might not even be able to talk to each other reliably. MIT has been developing a control program that's able to coordinate multiple robots while dealing with significant uncertainty, and it's quite creative in how it goes about doing it.
While the video provides a good overview of how MIT's algorithm manages to coordinate multiple robots, what it doesn't show is just how clever it can be when presented with a variety of situations and obstacles. For example, in a realistic environment, you're likely to have to deal with things like bad maps, intermittent communications, low bandwidth, and hardware that flakes out on you from time to time. The software that MIT has developed considers variables like these, and works them into an action plan that maximizes efficiency.
Furthermore, the software actually makes its own decisions about how to use the resources that it has available. If there's no reliable wireless, but there are some signal lights, the algorithm can decide whether it's better to risk an intermittent high-bandwidth connection, or to go with the lights instead. And if it decides on the lights, it even decides on how exactly to use them, potentially coming up with a more efficient solution than a human would. In fact, the program can actually guarantee that it'll do the best that it possibly can, taking all of the uncertainties into account.
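To get a feel for the kind of trade-off being described, here's a minimal sketch (not MIT's actual algorithm) of choosing between a flaky high-bandwidth link and reliable signal lights by comparing expected payoffs. The channel names, probabilities, and payoff numbers are all made up for illustration:

```python
# Hypothetical sketch of the kind of decision the planner faces: pick the
# communication channel with the best expected payoff, given uncertainty
# about whether each one will actually work. All values are illustrative.

from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    success_prob: float  # estimated probability the link works at all
    payoff: float        # value of the message getting through (e.g., bandwidth)

def expected_value(ch: Channel) -> float:
    """Expected payoff of attempting to communicate over this channel."""
    return ch.success_prob * ch.payoff

def best_channel(channels: list[Channel]) -> Channel:
    """Choose the channel that maximizes expected payoff under uncertainty."""
    return max(channels, key=expected_value)

wifi = Channel("wifi", success_prob=0.3, payoff=10.0)             # fast but flaky
lights = Channel("signal lights", success_prob=0.95, payoff=2.0)  # slow but reliable

choice = best_channel([wifi, lights])
print(choice.name)  # wifi: its expected value (3.0) edges out the lights (1.9)
```

With different numbers (say, an even less reliable wireless link), the lights would win instead, which is the point: the decision falls out of the statistics rather than a fixed rule.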
The decisions that MIT's program makes are based on data and statistics that it collects as the system it manages runs, meaning that (hypothetically) you could just set it up somewhere and let it figure out on its own how to accomplish what it needs to accomplish while compensating for everything that (inevitably) goes wrong. Because if there's one thing that we can absolutely rely on when it comes to robotics, it's that things will never, ever work the way that they're supposed to. So if we ever hope to get robots to reliably do anything, algorithms like this are going to be critical.
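Learning from collected statistics can be as simple as keeping smoothed running estimates of how often each resource actually works. Here's an illustrative sketch (again, not MIT's implementation) where channel reliability estimates improve as outcomes are observed; the channel name and outcomes are hypothetical:

```python
# Illustrative sketch of learning reliability from experience: track
# successes and attempts per channel, with one pseudo-success and one
# pseudo-failure as a prior so estimates are sane before any data arrives.

class ReliabilityEstimator:
    def __init__(self):
        self.successes: dict[str, int] = {}
        self.attempts: dict[str, int] = {}

    def record(self, channel: str, worked: bool) -> None:
        """Log one attempt on a channel and whether it got through."""
        self.successes[channel] = self.successes.get(channel, 1) + (1 if worked else 0)
        self.attempts[channel] = self.attempts.get(channel, 2) + 1

    def estimate(self, channel: str) -> float:
        """Smoothed estimate of the probability this channel works."""
        return self.successes.get(channel, 1) / self.attempts.get(channel, 2)

est = ReliabilityEstimator()
for worked in [True, False, True, True]:  # hypothetical outcomes on a radio link
    est.record("radio", worked)
print(round(est.estimate("radio"), 3))  # 4/6 ≈ 0.667 after 3 successes, 1 failure
```

Feed estimates like these back into the expected-payoff comparison and you get a system that adapts its choices as conditions change, without anyone retuning it by hand.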
[ MIT ]
Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.