Cracking the Puzzle of Serpentine Motion

Earthworms were easy—stingrays, skydivers, bacteria, and bots came next



The top set of images shows a real stingray swimming in the ocean, while the bottom series shows a new algorithm's successful reconstruction of its complex range of strokes.

TU Berlin/Caltech/ACM Transactions on Graphics

The counterintuitive and sinewy motions of snakes, stingrays, and skydivers are notoriously hard to simulate, animate, or anticipate. All three types of locomotion—through sand, sea, and air—rely on neither wings nor limbs but rather on subtle and sometimes sudden changes in a body’s geometry.

Now researchers from Caltech and the Technical University of Berlin have created an algorithm that can finally capture such curiously complex motions in computable form. In the short term, the team hopes to help animators bring such strange creatures to virtual life; in the longer term, the work could enable new modes of locomotion for roboticists and other technologists designing new ways to make things move.

Motion From Shape Change (Siggraph 2023)

“We spoke to people from Disney—they told us that animating snakes is pretty nasty and a lot of work for them,” says Oliver Gross, a doctoral student in mathematics at the Technical University of Berlin, and the paper’s lead author. “We hope to simplify this a little bit.”


The algorithm examines each body as a shape fashioned from vertices—the points that plot out a 3D model’s mesh or skeleton rig, for instance. The algorithm’s goal, then, is to determine the most energy-efficient way that set of vertices can rotate or translate.
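To make the idea of fitting a rotation and translation to a set of vertices concrete, here is a minimal sketch using the classic Kabsch algorithm, which finds the rigid motion carrying one vertex cloud onto another with the least total squared displacement. This is an illustrative stand-in for the paper's energy-based optimization, not the authors' actual method:

```python
import numpy as np

def best_rigid_motion(P, Q):
    """Find the rotation R and translation t that carry vertex set P onto
    vertex set Q in the least-squares sense (Kabsch algorithm)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance of centered shapes
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# A 4-vertex body, then the same body turned 90 degrees about z and shifted.
P = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Rz = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])
Q = P @ Rz.T + np.array([2., 0, 0])
R, t = best_rigid_motion(P, Q)  # recovers the turn-and-shift exactly
```

The real algorithm weighs candidate motions by a physical energy rather than raw squared displacement, but the structure of the problem, finding the best rigid motion explaining two vertex configurations, is the same.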

What “energy-efficient” actually means depends on the material through which a body is moving. If a body is pushing through a viscous fluid—such as a bacterium or jellyfish swimming through water—the algorithm finds motions that dissipate the least energy into the fluid via friction, following a fluid-mechanics result known as Helmholtz’s minimum-dissipation theorem.

On the other hand, if a body is moving through a vacuum or a thin medium like air—an astronaut in freefall, for instance, or a falling cat—that body won’t face nearly as much drag, and its movement is at the mercy of its inertia instead. So, the algorithm instead minimizes a body’s kinetic energy, in accordance with Euler’s principle of least action.
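The two regimes can be sketched as two different cost functions over the same set of vertex velocities. The per-vertex masses and the simple linear (Stokes-like) drag below are simplifying assumptions for illustration, not the paper's actual discretization:

```python
import numpy as np

def kinetic_energy(velocities, masses):
    """Inertia-dominated regime (Euler): the cost to minimize is the
    body's total kinetic energy, (1/2) * sum_i m_i * |v_i|^2."""
    return 0.5 * np.sum(masses * np.sum(velocities**2, axis=1))

def dissipated_power(velocities, drag):
    """Drag-dominated regime (Helmholtz): the cost is the power lost to
    friction, here modeled as per-vertex linear drag, sum_i c_i * |v_i|^2."""
    return np.sum(drag * np.sum(velocities**2, axis=1))

# Two candidate motions of a 3-vertex body: a steady glide versus a jerk
# that concentrates all the speed in one vertex.
glide = np.array([[1., 0, 0], [1, 0, 0], [1, 0, 0]])
jerk = np.array([[3., 0, 0], [0, 0, 0], [0, 0, 0]])
m, c = np.ones(3), np.ones(3)
```

Both costs are quadratic in velocity, so both penalize the jerky motion more heavily than the glide; the regimes differ in which physical quantity the optimizer is conserving or spending.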

Regardless of the specific physics involved, a user feeds the algorithm a sequence of images. Imagine a sequence of four squiggly shapes created by an animator, each different from the last. Even if the animator doesn’t know how one shape turns into the next, the algorithm will determine physical movements through space that match the shape change. In the process, the algorithm can also account for gravity’s pull and, in viscous media, the effect the fluid has on the shape.
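One consequence of minimizing kinetic energy in a vacuum is worth making concrete: with no external forces, momentum is conserved, so a body's center of mass cannot drift no matter how its shape changes—a falling cat can reorient but not translate. The sketch below enforces just that one constraint on a sequence of shapes; the per-vertex masses and the local-coordinate input are assumptions for illustration, and the full algorithm handles rotation and external forces as well:

```python
import numpy as np

def pin_center_of_mass(shapes, masses):
    """Given a sequence of shapes in the animator's local coordinates,
    place each frame in world space so the center of mass never moves --
    one constraint a force-free vacuum imposes on the resulting motion."""
    total = masses.sum()
    world = []
    for S in shapes:
        com = (masses[:, None] * S).sum(axis=0) / total
        world.append(S - com)  # center of mass pinned at the origin
    return world

# A 2-vertex "body" whose drawn shapes would naively drift to the right;
# pinning the center of mass removes the spurious drift.
shapes = [np.array([[0., 0, 0], [1, 0, 0]]),
          np.array([[1., 0, 0], [2, 0, 0]])]
frames = pin_center_of_mass(shapes, np.ones(2))
```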

The Berlin-Pasadena group hammered together an early version of the algorithm in around a week, they say, intending to simulate the wriggling of an earthworm. The researchers soon realized, however, they could simulate other life forms too. They implemented their algorithm within a 3D-modeling environment—SideFX’s Houdini—and test-drove it on a menagerie of computerized creatures, ranging in complexity from a 14-vertex piece of pipe to a 160-vertex fish to a 600-vertex underwater robot to a 7,100-vertex eel. When the algorithm examined real-world creatures like a sperm cell, a stingray, a jellyfish, a diver, and a falling cat, its output closely matched real-world imagery.

Gross says his group developed the algorithm without any particular use in mind. However, since much of the group’s research is in aid of computer graphics, they’ve begun thinking of applications in that realm.

In the near future, Gross and his colleagues want to build the algorithm out into a full-fledged animation pipeline. To wit, Gross pictures a machine-learning model that examines a video of a moving animal and extracts a sequence of 3D meshes. An animator could then feed those meshes into the shape-change algorithm and find a movement that makes them happen.

In a different type of virtual world, Gross also imagines that robot builders could use the algorithm to understand the limits and capabilities of their machine in the real world. “You could perform initial tests on a computer, if [the robot] can actually perform these desired motions, without having to build a costly prototype,” he says.

The researchers’ algorithm is currently limited to finding shape changes. What it cannot do, but what Gross says he hopes to enable soon, is to take in a designated point A and point B and find a specific course of motion that will bring a creature from start to finish.

The group’s algorithm was recently published in the journal ACM Transactions on Graphics and made available online as a .zip file.
