When designing a brand new robot, it’s usually a good idea to design and test it in simulation first to get a sense of how well the design will work. But even a successful simulated robot provides only limited insight into how it’s going to do when you actually build it: as we’ve seen, even sophisticated simulations don’t necessarily reveal how robots will perform in the real world.
This fundamental disconnect between simulation and reality becomes especially problematic in areas of robotics where it’s impractical to build physical versions of everything. Evolutionary robotics is a prime example: robot designs are tested and iterated over hundreds (or thousands) of generations, which works great in simulation (if you have a fast computer) but is much harder to do in practice. And that brings us back to the original issue: a robot that has evolved to work well in simulation may not work well at all outside of it, which calls into question the value of iterating on a robot’s fitness through simulation in the first place.
In a paper published last month in PLOS ONE, Luzius Brodbeck, Simon Hauser, and Fumiya Iida from the Institute of Robotics and Intelligent Systems at ETH Zurich took things one step further by teaching a “mother robot” to autonomously build children robots out of component parts to see how well they move, doing all of the hard work of robot evolution without any simulation compromises at all.
The basic idea behind evolutionary robotics is to build a whole bunch of simple robots, test them in some way, and then take a few of the most promising robots and use them to inform the design of the following generation. This is generally how biological evolution works (survival of the fittest and whatnot), and the fact that you’re sitting there reading this is a testament to how successful it can be. For those of us who don’t have eons to wait, robots can be forcibly evolved much, much faster, as long as you’re willing to focus on just one trait and keep things extremely basic.
This is a UR5 arm “mother robot” (that’s what the paper calls it) constructing a locomotion agent (what I’ve been calling a “child robot”) out of a few standardized parts, including active cubes with one rotating face and smaller passive cubes made out of wood. The mother robot hot glues active and passive cubes together and then transports them to a testing area, where they’re wirelessly activated and an overhead camera watches them wiggle around:
Once the evaluation is complete, the child robots are disassembled (manually, for now) by removing the hot glue, and the components are returned to the queue to make a new robot. Meanwhile, in software, the successful “elite” designs (the ones that were able to move the farthest in the least amount of time) are carried on to the next generation unchanged. The system also mutates or crossbreeds the elites to create the rest of the next generation.
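The selection scheme just described, where elites survive unchanged and mutated or crossbred elites fill out the rest of the generation, can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors’ code: the function names, the elite count, the population size, and the mutation-vs-crossover probability are all assumptions, and the mutation and crossover operators are passed in as arguments.

```python
import random

def next_generation(population, fitness, mutate, crossover,
                    elite_count=2, pop_size=10, p_mutation=0.7):
    """One round of the selection scheme described above: rank the tested
    robots by fitness, carry the elites into the next generation unchanged,
    and fill the remaining slots by mutating or crossbreeding elites
    (with mutation favored over crossover, as in the paper).
    All numeric defaults here are illustrative assumptions."""
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:elite_count]
    children = list(elites)  # elites survive unchanged
    while len(children) < pop_size:
        if len(elites) < 2 or random.random() < p_mutation:
            children.append(mutate(random.choice(elites)))
        else:
            children.append(crossover(*random.sample(elites, 2)))
    return children
```

In the real system, `fitness` is not a cheap function call: it’s the mother robot physically building the design, releasing it in the test area, and measuring how far it travels under the overhead camera.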
To understand how breeding and mutation of the robots works, you can think about it in terms of genes. The design of each robot can be described by a genome consisting of between one and five genes, where each gene describes the characteristics of one of the modules that makes up the child robot, including its “brain” (in the form of motor command parameters). The child being constructed in the first video above has three genes, and each gene includes information about how that module is oriented and where it gets glued to the previous module.
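Concretely, a genome like this can be pictured as a short list of parameter records, one per module. The sketch below is a hypothetical encoding: the field names, value ranges, and motor parameterization are assumptions for illustration, not the actual representation used in the paper; only the one-to-five-gene structure is taken from the description above.

```python
import random

def random_gene():
    """One gene = one module: its type, its orientation, where it glues
    onto the previous module, and its motor command parameters (the
    module's "brain"). Field names and ranges are illustrative."""
    return {
        "module_type": random.choice(["active", "passive"]),
        "orientation": random.choice([0, 90, 180, 270]),  # degrees
        "attach_offset": random.uniform(-1.0, 1.0),       # position on parent face
        "motor_amplitude": random.uniform(0.0, 1.0),      # unused for passive cubes
        "motor_phase": random.uniform(0.0, 360.0),
    }

def random_genome():
    # Between one and five genes, as described in the paper.
    return [random_gene() for _ in range(random.randint(1, 5))]
```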
In the case of a mutation, one of three things happens: the genome of an elite either has an entire gene added at random, an entire gene deleted at random, or a single parameter of an existing gene is randomly changed. So in the context of the robot that’s being constructed in the first video, maybe it gets another module glued onto it somewhere (adding a gene), maybe one of the three modules gets removed (deleting a gene), or maybe that third active module gets placed off-center to the right instead of the left (changing the parameter of an existing gene).
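Those three mutation operators are easy to sketch if we assume a genome is simply a list of parameter dictionaries. The encoding below (including the stand-in `random_gene`) is hypothetical; only the three operators and the one-to-five-gene limit come from the paper’s description.

```python
import copy
import random

def random_gene():
    # Stand-in gene: a few numeric parameters representing orientation,
    # attachment point, and motor commands (hypothetical encoding).
    return {k: random.random() for k in ("orientation", "attach", "motor")}

def mutate(genome):
    """Apply one of the three operators described above: add a random
    gene, delete a random gene, or change one parameter of one existing
    gene. The child stays within the one-to-five-gene limit."""
    child = copy.deepcopy(genome)
    ops = ["add", "delete", "modify"]
    if len(child) >= 5:
        ops.remove("add")     # already at the five-gene cap
    if len(child) <= 1:
        ops.remove("delete")  # a robot needs at least one module
    op = random.choice(ops)
    if op == "add":
        child.insert(random.randrange(len(child) + 1), random_gene())
    elif op == "delete":
        del child[random.randrange(len(child))]
    else:  # modify one parameter of one existing gene
        gene = random.choice(child)
        key = random.choice(list(gene))
        gene[key] = random.random()
    return child
```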
A crossover (breeding) of elite robots is a bit simpler than a mutation: two elite “parent” robots get their genomes chopped roughly in half, and the first few genes from the first parent get attached to the last few genes from the second parent. This can result in some weirdness, so generally, the approach favored mutations over crossovers. Once all the mutations and crossovers are calculated, the mother robot builds the next generation, tests them, and then repeats the evolutionary process all over again.
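That splice can be sketched in a few lines, again treating a genome as a plain list of genes. The halving rule and the five-gene cap follow the description above; how the paper handles cut points exactly is not specified here, so the `max(1, ...)` guard is an assumption to keep every child buildable.

```python
def crossover(parent_a, parent_b):
    """Chop both genomes roughly in half and glue the first part of one
    parent to the last part of the other, keeping the child within the
    one-to-five-gene limit."""
    cut_a = max(1, len(parent_a) // 2)  # take at least one gene from parent A
    cut_b = len(parent_b) // 2
    child = parent_a[:cut_a] + parent_b[cut_b:]
    return child[:5]                    # enforce the five-gene cap
```

For example, crossing a four-gene parent `[A, B, C, D]` with a four-gene parent `[w, x, y, z]` yields the four-gene child `[A, B, y, z]`.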
This rather complicated series of pictures shows one of five total experiments, in which 10 generations of robots were constructed, evolved, and improved. There’s often a substantial amount of variation even among the elite robots from one generation to the next, likely because many of the gaits were unstable, sometimes performing very well and sometimes not well at all [figure below].
As you can see, the elites don’t always do so well when they’re tested again, even though the design is the same. We asked the researchers why they didn’t run multiple trials within each generation to try to account for this, but as they explained to us, this very inconsistency is part of the evolutionary process:
“There are significant variances in behaviors of some of the agents even with the identical genomes, which is the reason why the elites sometimes don’t reproduce similar results in the subsequent generations. On the other hand, it turns out that, even without multiple trials, we found that evolutionary pressure tends to select more consistent ones over generations, and usually repeatable genomes remain over generations. This is why we stayed with our current protocol of experiments. After all, everything should be counted and valued toward the survival, thus the lack of precise repetitive tests would not influence the big picture of what we are trying to understand in physical robot evolution.”
Overall, “a fitness increase of more than 40 percent over 10 generations was observed in all experiments,” which is pretty good, but the impressive part is that it’s all physical: the robots have all been built and tested, so you know that your elite designs really are elite, and will behave well in whatever application you can find for a weird little robot made out of some cubes.
So does this signal the end for evolutionary simulations of robots? Not at all, and the researchers are quick to point out that their methods will likely work even better when combined with simulations:
The demonstrations show the feasibility of the model-free evolution of a physical system. The evaluation of a candidate’s fitness is done with a physical robot, producing real data in a time-intensive process. Simulations on the other hand could test more solutions in shorter time. Therefore, it might be interesting to combine both methods rather than using one extreme with simulation or real-world testing only. Simulation could for example be employed to preselect promising candidates for real-world testing, reducing the amount of time spent on solutions with low or no chance of success.
[ PLOS ONE ]