Fly like a Fly

The common housefly executes exquisitely precise and complex aerobatics with less computational might than an electric toaster

Image: Eye of Science/Photo Researchers Inc.

This fall, several biologist colleagues of mine plan to build a movie theater for houseflies. In fact, it’s a miniature IMAX theater—complete with a panoramic screen—inside of which they’ll place a tiny rotating cage, a downsized version of the ones that astronauts use to simulate tumbling in space. Some time next year, they’ll strap a fly into the cage and show it a movie.

A leisurely pastime for idle academics? Hardly. The common housefly is an extremely maneuverable flyer, the best of any species, insect or otherwise. What’s more, its flight control commands originate from only a few hundred neurons in its brain, far less computational might than you’d find in your toaster.

My colleagues in England—Holger Krapp and Simon Laughlin at the University of Cambridge and Graham Taylor and Richard Bomphrey at the University of Oxford—and I want to know its secret. The fly-size flight simulator will reproduce the inertial effects of flight, and the movie will supply the panoramic scenes a fly sees as it flies. By inserting electrodes into the fly’s brain, the biologists will be able to observe how its neurons light up in response to these scenes. In a sense, we’ll see what the fly sees.

Our goal is to understand flight control from the insect’s perspective. What we have learned so far, and what we expect the experiment to confirm, is that the fly uses a flight control paradigm that is completely different from that of a fighter jet. Whereas the F-35 Joint Strike Fighter, the most advanced fighter plane in the world, takes a few measurements—airspeed, rate of climb, rotations, and so on—and then plugs them into complex equations, which it must solve in real time, the fly relies on many measurements from a variety of sensors but does relatively little computation.

And yet the fly can outmaneuver any human-built craft at low speeds. Buzzing annoyingly across a room, a housefly accelerates at up to twice the acceleration of gravity, reaching speeds of 10 kilometers per hour. When turning, it is even more impressive: the fly can execute six full turns per second, reaching its top angular speed in just two-hundredths of a second. It can fly straight up, down, or backward, and somersault to land upside down on a ceiling. If it hits a window or a wall sideways, which it often does, the fly will lose lift and begin to fall. But its wings keep beating, and within a few milliseconds, the fly recovers its lift and can move off in the opposite direction.

Discovering the fly’s flight control scheme, I believe, will have important lessons for the design of micro air vehicles (MAVs), which attempt to approximate insect flight, and for high-performance aircraft in general.

Insect flight has been a subject of academic interest for at least half a century, but serious attempts to emulate it are more recent. The field got a big boost in 1996, when the U.S. Defense Advanced Research Projects Agency (DARPA), in Arlington, Va., launched a three-year MAV program with the goal of creating a flyer less than 15 centimeters long for military surveillance and reconnaissance. A few fixed-wing designs were successfully demonstrated, most notably the Black Widow, from AeroVironment Inc., in Monrovia, Calif. The Black Widow had a propeller, GPS navigation, and decent flight control. Several rotary-type MAVs were also put forward. But no one managed to get an insectlike flapping-wing design off the ground.

Inspired by the DARPA program, I started my research on MAVs in 1998 at Cranfield University, at the Royal Military College of Science, in Shrivenham, England. My main goal was to build a reconnaissance robot capable of discreetly penetrating and maneuvering autonomously within confined spaces, including buildings, stairwells, and tunnels.

The military uses of such a vehicle are manifold. A soldier mired in combat could take a few MAVs from his backpack and throw them into the air to scout the interiors of nearby buildings. Equipped with video cameras, the tiny flyers could surreptitiously locate hidden adversaries, downed comrades, or scared civilians. MAVs could find equal application in bomb detection and bomb deployment—the U.S. Air Force, for one, is interested in using MAVs for precisely delivering tiny bombs, to take out, say, a single computer.

In the civilian realm, an MAV could be used to examine so-called dull, dirty, or dangerous environments where a human can’t or shouldn’t go. Who really wants to rush into a collapsed mine searching for survivors? Or creep through crawl spaces or down chimneys doing routine inspections? A robotic MAV could easily and quickly accomplish the initial reconnaissance and then indicate whether any human intervention is needed.

Flying indoors is tricky, though. The MAV must fly with agility at low speeds without smashing into walls, ceilings, and other objects; hover for sustained periods; take off and land vertically; and consume little power. Fixed-wing flyers aren’t up to the job because they can’t hover, and they have to fly relatively fast to generate lift. Rotary-wing MAVs can hover, but they require a lot of power. Nor can they fly close to walls: the air pushed down by the rotor bounces off the wall and interrupts the downward flow of air through the rotor, usually with catastrophic results.

Insects, on the other hand, are the culmination of more than 300 million years of evolutionary flight experience. They can hover, fly slowly, maneuver aerobatically, and do it all in an astoundingly power-efficient way. A 100-milligram fly in motion consumes only a few milliwatts of power. Gram for gram, an airplane consumes more than twice as much power, a helicopter five times as much.

With the fly in mind, I’ve spent the last seven years trying to reverse engineer insect flight, collaborating with biologists and engineers, analyzing insect behavior and flapping-wing aerodynamics, and building electromechanical flapping mechanisms.

Only one other group has made a sustained effort to design and build an insectlike MAV. In 2001, a team at the University of California at Berkeley led by Ronald Fearing, a professor of electrical engineering and computer science, working with the leading U.S. expert on insect flight, Michael Dickinson, produced a 25-millimeter-long proof-of-principle demonstrator based on microelectromechanical systems (MEMS) technology.

But neither the Berkeley group nor we in the United Kingdom have built a flying prototype. A number of factors are holding us back, including the need for lighter components and a suitable power source, but the key issue is flight control. It is not good enough to have “something flying”: if you throw a brick, it will fly, at least briefly, but so what? The great draw of insect flight is its extreme agility, and this amazing capability can be achieved only by appropriate flight control. Unraveling the secret of insect maneuverability requires first understanding the underlying aerodynamics and mechanics of insect flapping.

Flight control dates back to the Wright brothers. These days, it has reached an apotheosis in modern fighter aircraft, such as Lockheed Martin Corp.’s latest F-35, including an impressive version capable of short takeoffs and vertical landings. But the rules governing flight for the F-35 require about 1.1 million lines of code; it uses another 4.5 million lines for tasks like weapons targeting, communications, and mission control. The flight software runs on three shoebox-size computers, each with a pair of PowerPC processors. MAVs obviously don’t have the space, or the cooling fans, to accommodate such onboard computers.

Neither do insects, of course. Studies suggest that the fly’s flight control commands originate from a few hundred neurons in its brain (out of the brain’s total of about 338 000 neurons). A neuron can be thought of as the brain’s smallest computational unit, each one like a switching transistor, with its binary on-off states. Obviously, then, flies are not executing millions of calculations to solve forbidding differential equations in midair. But they still must obey the same laws of physics as the F-35, so whatever they are doing must be functionally equivalent to solving those equations in real time.

For an F-35, we take measurements from a few sensors—a device called a Pitot tube for measuring airspeed, an altimeter for computing rate of climb, a set of gyroscopes for detecting rotations, and vanes for sensing sideslip and angle of attack. The aircraft’s computers use the sensor data, along with inputs from the pilot’s controls, to continually calculate where the plane is and should be and then adjust the plane’s control surfaces—such as the flaps, ailerons, and rudder—accordingly.

Simply put, conventional flight control uses a little measurement and a lot of computation. I believe that the fly does exactly the opposite: a lot of measurement from many sensors and a little computation. I call it the sensor-rich feedback control paradigm.
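
To make the contrast concrete, here is a toy roll-stabilization problem written both ways in Python. It is a sketch of the two paradigms only: the gains, the sensor noise, and the 500 simulated “hairs” are invented for illustration and have nothing to do with the F-35’s actual control laws or the fly’s real circuitry.

```python
# A toy roll-stabilization problem solved in the two styles described
# above. Both controllers are illustrative sketches, not real autopilots.

import random

def model_based_control(roll_angle, roll_rate):
    """Conventional style: a few precise measurements plugged into
    control laws derived offline from the aircraft's equations of
    motion (a simple proportional-derivative law stands in here for
    the heavy real-time computation)."""
    K_angle, K_rate = 4.0, 1.5   # gains assumed purely for illustration
    return -(K_angle * roll_angle + K_rate * roll_rate)

def sensor_rich_control(hair_readings):
    """Sensor-rich style: many cheap, noisy measurements combined with
    almost no arithmetic. Each simulated 'hair' casts a crude vote on
    the roll; averaging hundreds of votes cancels the noise."""
    return -3.0 * sum(hair_readings) / len(hair_readings)

# A roll disturbance of 0.2 radian, seen both ways:
true_roll = 0.2
hairs = [true_roll + random.gauss(0.0, 0.5) for _ in range(500)]

print(model_based_control(true_roll, roll_rate=0.0))  # exactly -0.8
print(sensor_rich_control(hairs))  # about -0.6, despite very noisy sensors
```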

The fly brain receives sensory inputs from about 80 000 sites on its body, and about 98 percent of its neurons are devoted to processing those sensory signals. The remaining 2 percent take care of higher-level functions, such as flight control, recognizing predators, and the like. Of course, the fly has many tasks other than flying, so quite a few of its sensors aren’t related to flight, such as those for taste, smell, sound, temperature, and humidity.

A human being has more than 600 muscles; between your elbow and your fingertips alone, you have more than 20 degrees of freedom. A fly, by contrast, is not actuator-rich: it uses only 12 or so muscles for flying, so it can produce only a relatively small number of motions.

With each wing beat, the leading edge of its wings traces a sideways figure eight in the air. First the wings sweep forward, generating lift. Then, at the end of the stroke, they rotate about 90 degrees and sweep backward, also generating lift. At the end of the back stroke, they rotate again and sweep forward, starting the cycle again. Despite their small complement of muscles, flies execute these intricate beats 120 to 250 times per second.
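
A convenient way to picture this stroke is as a pair of oscillations: the fore-aft sweep at the wing-beat frequency and the up-and-down deviation at twice that frequency, which together trace the figure eight, with the wing’s pitch flipping near each stroke reversal. The sketch below uses that idealization; the frequency, amplitudes, and smooth pitch flip are representative values I have assumed, not measurements of a real fly.

```python
import math

# Idealized wing-tip kinematics: the stroke angle oscillates at the
# wing-beat frequency while the out-of-plane deviation oscillates at
# twice that frequency, tracing a figure eight. Values are assumed,
# representative numbers only.

f = 200.0                        # wing-beat frequency, Hz (houseflies: ~120-250)
stroke_amp = math.radians(70)    # half-amplitude of the fore-aft sweep
dev_amp = math.radians(15)       # half-amplitude of the up-down deviation

def wing_tip(t):
    """Stroke and deviation angles (radians) at time t (seconds)."""
    phase = 2 * math.pi * f * t
    stroke = stroke_amp * math.cos(phase)       # fore-aft sweep
    deviation = dev_amp * math.sin(2 * phase)   # double frequency -> figure eight
    return stroke, deviation

def wing_pitch(t):
    """Wing pitch flips roughly 90 degrees around each stroke reversal
    (a smooth flip between about +45 and -45 degrees)."""
    phase = 2 * math.pi * f * t
    return math.radians(45) * math.tanh(3 * math.sin(phase))

# Sample one 5-millisecond wing beat
for i in range(5):
    t = i / (5 * f)
    s, d = wing_tip(t)
    print(f"t={t*1000:4.1f} ms  stroke={math.degrees(s):6.1f}  "
          f"deviation={math.degrees(d):5.1f}  pitch={math.degrees(wing_pitch(t)):5.1f}")
```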

For flight, the sensors of critical importance are the compound eyes and various mechanical sensors, such as the antennae and numerous wind-sensitive hairs, which allow detailed measurements of the airflow. Unique among insects, flies also have special organs for sensing their own rotation, called halteres.

These drumstick-shaped protrusions on the fly’s thorax are the remnants of a second pair of wings. The halteres beat just like wings, but they don’t generate any lift. Instead, sensors in the sockets of the halteres detect their position, which in turn helps stabilize the insect. Without them, the fly can’t fly.

Most of the fly’s neural processing is devoted to vision, and its compound eyes are the key to flight control [see illustration, "The Eyes Have It"]. They enable the fly to see not only static, pixelated patterns but also optic flow—that is, the fly’s motion relative to its surroundings. The eyes allow panoramic vision; the fly can see nearly all of the surrounding space at once, as if its worldview were projected onto a sphere. Also notable are three light-sensitive sensors arranged in a triangle on the top of the head, called ocelli. Their main role is to detect which direction is up, so that the fly can rapidly orient itself.

Each of the fly’s compound eyes is composed of up to 6000 miniature hexagonal eyes, or ommatidia. Each ommatidium measures light intensity within a narrow cone of view, 1 to 2 degrees across. This spatial resolution is much lower than that of the human eye, but the fly eye’s temporal resolution—its ability to detect motion—is higher by an order of magnitude. That’s why it’s so hard to sneak up on a fly.

Each ommatidium operates in conjunction with its closest neighbors, in bunches of six wired together into elementary motion detectors, or EMDs. Even though each ommatidium sees only a little bit of the surroundings, its view is compared with its neighbors’, and if what the neighbors see is different, the fly senses movement. In that way, the EMD estimates the local velocity vector of the optic flow.
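
The classic model of such a comparison is the Hassenstein-Reichardt correlator: each receptor’s signal is delayed and multiplied by a neighbor’s undelayed signal, and the two mirror-image products are subtracted, giving a signed output whose sign indicates the direction of motion. The sketch below is a minimal version of that textbook model, assuming a first-order low-pass filter as the delay; it is not a claim about the fly’s exact neural circuit.

```python
# A minimal Hassenstein-Reichardt elementary motion detector for two
# photoreceptors a small visual angle apart. Each receptor feeds a
# delayed (low-pass filtered) copy of its signal to be multiplied with
# its neighbor's undelayed signal; subtracting the mirror-image
# products yields a direction-signed output.

import math

def emd_response(left, right, dt=1e-3, tau=20e-3):
    """Opponent EMD output for two sampled photoreceptor signals."""
    a = dt / tau                      # first-order low-pass coefficient
    left_d = right_d = 0.0            # delayed (filtered) signals
    total = 0.0
    for l, r in zip(left, right):
        left_d += a * (l - left_d)    # delay arm of the left receptor
        right_d += a * (r - right_d)  # delay arm of the right receptor
        total += left_d * r - right_d * l   # opponent correlation
    return total

# A sinusoidal pattern drifting past the pair: the right receptor sees
# the same signal as the left one, just shifted later in time.
dt, lag = 1e-3, 5e-3
t = [i * dt for i in range(1000)]
left = [math.sin(2 * math.pi * 10 * x) for x in t]
right = [math.sin(2 * math.pi * 10 * (x - lag)) for x in t]

print(emd_response(left, right))   # positive: left-to-right motion
print(emd_response(right, left))   # negative: right-to-left motion
```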

These concepts are best understood through an example. Let’s say the fly is moving straight up. The local velocity vector recorded by each ommatidium would point down. It’s like riding in a helicopter that’s taking off vertically: all the buildings, trees, and lights around you will appear to be streaking downward. If you were to map all the local velocity vectors onto a sphere, representing the fly’s panoramic field of vision, the sphere would be covered with downward arrows. And if you were then to take the sphere and flatten it out into a Mercator projection, all of the arrows would be pointing downward.

Different relative motions produce different vector patterns. Suppose the fly is now rolling in the air, around the lengthwise axis of its body. The fly would see objects around it appearing to move in the direction opposite to its roll. When you project the local velocity vectors onto a sphere, they all circulate in one direction around that lengthwise axis. But in the flattened projection of the vector pattern, some of the local vectors point upward, some downward, and some in between.

What’s interesting is that in both the upward flight scenario and the rolling scenario, sections of the global vector pattern are identical, even though the fly is executing completely different moves. It’s not enough to have the local picture; the fly needs the global view, which represents the motion of the insect with respect to its surroundings.
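
The geometry is easy to reproduce numerically. Assuming, for simplicity, that all objects sit at unit distance, translation produces flow equal to minus the component of the velocity perpendicular to the viewing direction, and rotation produces minus the cross product of the angular velocity with the viewing direction. The sketch below picks one viewing direction where climbing and rolling look identical, and one where only the global pattern tells them apart.

```python
# Optic flow on the fly's viewing sphere for the two maneuvers above:
# climbing straight up versus rolling about the body's long axis.
# Assumes all objects at unit distance; purely illustrative geometry.

def flow_translation(d, velocity):
    """Apparent image motion at unit viewing direction d for
    self-translation: minus the component of the velocity
    perpendicular to d (objects assumed at unit distance)."""
    dot = sum(di * vi for di, vi in zip(d, velocity))
    return tuple(-(vi - dot * di) for di, vi in zip(d, velocity))

def flow_rotation(d, omega):
    """Apparent image motion for self-rotation omega: -omega x d."""
    ox, oy, oz = omega
    dx, dy, dz = d
    return (-(oy * dz - oz * dy), -(oz * dx - ox * dz), -(ox * dy - oy * dx))

# Two viewing directions: straight out to the fly's left, and straight up.
left, up = (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

climb = (0.0, 0.0, 1.0)   # flying straight up
roll = (1.0, 0.0, 0.0)    # rolling about the lengthwise (x) axis

# Looking to the side, the two maneuvers are locally indistinguishable:
print(flow_translation(left, climb))   # (0, 0, -1): scenery streams down
print(flow_rotation(left, roll))       # (0, 0, -1): identical local vector

# Looking up, they differ; only the global pattern tells them apart:
print(flow_translation(up, climb))     # (0, 0, 0): no flow at the pole
print(flow_rotation(up, roll))         # (0, 1, 0): sideways flow
```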

How does the fly use these global patterns to fly? Specialized neurons integrate the signals from all the EMDs to form the global vector pattern. What Cambridge’s Krapp and his colleagues have discovered is that when the fly sees a certain flow pattern, corresponding to a specific direction relative to the axis of the fly’s body, the pattern will strongly trigger a specific neuron in the brain. If the insect is flying between two preferred directions, it doesn’t detect the pattern as strongly. It’s sort of like when you see a loved one, and your face lights up. If you see a stranger instead, your response is more muted.
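
One common way to formalize this finding is as a bank of matched filters: each wide-field neuron stores a template flow pattern for one preferred rotation axis, and its response tracks the inner product of the observed flow field with that template. The sketch below assumes that matched-filter reading, with three invented template axes and a crude sampling of viewing directions; the real neurons’ tunings are richer than this.

```python
# Matched-filter sketch: each 'neuron' stores a template flow pattern
# for a preferred self-rotation axis; its response is the inner product
# of the observed flow field with its template. Axes and sampling are
# illustrative assumptions, not the fly's actual neuron tunings.

import math

def rot_flow(d, omega):
    """Rotational optic flow -omega x d at unit viewing direction d."""
    (ox, oy, oz), (dx, dy, dz) = omega, d
    return (-(oy*dz - oz*dy), -(oz*dx - ox*dz), -(ox*dy - oy*dx))

# Sample viewing directions roughly covering the sphere
directions = [(math.cos(a)*math.cos(b), math.sin(a)*math.cos(b), math.sin(b))
              for a in [k * math.pi / 4 for k in range(8)]
              for b in [-math.pi/4, 0.0, math.pi/4]]

def neuron_response(observed, template_axis):
    """Inner product of the observed flow field with the template."""
    return sum(sum(o*t for o, t in zip(observed[d], rot_flow(d, template_axis)))
               for d in observed)

# Three 'neurons' preferring roll, pitch, and yaw flow templates:
templates = {"roll": (1, 0, 0), "pitch": (0, 1, 0), "yaw": (0, 0, 1)}

# The fly rotates about an axis between its roll and yaw axes:
actual = (0.8, 0.0, 0.6)
observed = {d: rot_flow(d, actual) for d in directions}

for name, axis in templates.items():
    print(name, round(neuron_response(observed, axis), 2))
# Roll responds most strongly, yaw less so, pitch not at all: an
# in-between axis excites neighboring templates only partially.
```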

Is there something special about these preferred directions? One theory put forward by Taylor at Oxford is that the fly can easily stabilize or control its flight in these directions. It’s the same with humans—some ways of walking, running, and turning are much more comfortable and stable than others. If the fly wants to change direction, it simply moves in small, easy-to-control steps, from one preferred direction to the next, until it finally arrives at the desired direction. This is the sensor-rich feedback control paradigm in action: the compound eyes provide the sensor input, while the vector field patterns provide the feedback.

Here’s where the miniature IMAX theater mentioned at the start of this article comes into play. The goal of this experiment is to trick the fly into believing that it’s flying, so that we can develop a model of how the insect perceives its own flight.

The fly is placed inside the rotating cage, which simulates the inertial forces that the insect experiences in flight. Air is blown on it, as further stimulus. Meanwhile, the panoramic screen reproduces the fly’s optical stimulus—what the fly would see if it were actually flying around a room. Electrodes inserted into the fly’s brain will, we hope, record the same activity that the fly has in free flight.

Ultimately, we’d like to match this internal model of insect flight—how the fly itself perceives its own flight—with the external model—how we perceive the insect’s flight from the outside.

F-35s don’t control themselves the way a fly does. But could they? The fly attains remarkable performance, yet is computationally quite simple, relying on extensive, distributed measurement of parameters of interest. From an engineering viewpoint, this opens up new possibilities in control, as well as in sensors and instrumentation.

For one thing, it suggests that there is no need for high-resolution imaging cameras for flight control. Instead, coarse-grained arrays of sensors could give good results, provided they are arranged to offer a global view of the environment, able to detect the direction of motion, and endowed with parallel processing to extract the global vector field of motion.
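
As a toy illustration of how little computation such an array needs, consider a coarse ring of motion sensors around the horizon. For pure rotation, every sensor reports image motion in the same direction around the ring; for sideways translation, the two halves of the ring report opposite directions. A plain average then extracts the rotation, and a cosine-weighted average extracts the translation. The sensor count and geometry below are arbitrary assumptions.

```python
# Sketch: a coarse ring of elementary motion sensors separates
# rotation from sideways translation with two cheap averages.
# Sensor count and scene geometry are illustrative only.

import math

N = 12
angles = [2 * math.pi * k / N for k in range(N)]   # sensor bearings

def ring_readings(rotation, sideways):
    """Tangential image motion at each bearing: a rotation term common
    to all sensors plus a translation term that reverses sign across
    the ring (surroundings assumed at unit distance)."""
    return [rotation + sideways * math.cos(a) for a in angles]

def decompose(readings):
    rot = sum(readings) / N                        # plain average
    trans = 2 * sum(r * math.cos(a)                # cosine-weighted average
                    for r, a in zip(readings, angles)) / N
    return rot, trans

print(decompose(ring_readings(rotation=1.5, sideways=0.0)))  # (1.5, 0.0)
print(decompose(ring_readings(rotation=0.0, sideways=0.8)))  # (0.0, 0.8)
print(decompose(ring_readings(rotation=1.5, sideways=0.8)))  # both recovered
```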

Several groups have succeeded in building electronic sensors that mimic the fly’s vision and other flight control apparatus. Reid Harrison, an assistant professor of electrical and computer engineering at the University of Utah, in Salt Lake City, is considered one of the pioneers in using analog very large-scale integration (VLSI) to produce an insectlike vision chip. Charles Higgins, associate professor of electrical and computer engineering at the University of Arizona, Tucson, has continued that work and designed fly-vision systems that can extract features from the optic-flow field.

In Japan, Kimihiro Nishio of Yonago National College of Technology has designed a fly-vision chip that he uses not for MAVs, but for wheeled robots. The start-up Centeye Inc., based in Washington, D.C., sells an insect-inspired optic-flow sensor called LadyBug, designed for MAVs and other types of robots.

Fearing’s group at Berkeley has also developed an artificial haltere for detecting rotation. The haltere seems to have some advantages over gyroscopes based on MEMS technology. For one, it consumes far less power because it has no actuators. And it can detect angular velocities from as low as tens of degrees per second to as high as thousands of degrees per second, the sort of rate a flying insect reaches in a sharp turn.

But the sensor-rich feedback control architecture doesn’t depend on a specific type of instrument—that is, you don’t have to exactly replicate the fly’s compound eye or haltere to achieve the same results. All you need is something that collects the relevant information with the required speed and accuracy.

We need to do more detailed and multidisciplinary research to completely reverse engineer insect flight control. We don’t yet know how the fly’s vision is coupled with its physiology—how does it translate what it sees into wing beats? And how does sensory input from the halteres, antennae, and so on, get integrated with visual information?

What we know already, though, hints at exciting possibilities for understanding motion control in animals and humans and for establishing a new paradigm in control engineering. The latter will greatly affect not only the vast area of automatic control but also the sensor, instrumentation, and measurement communities. Who would have thought a small, unlovely creature like the fly could teach us so much?

Acknowledgments

I thank Graham Taylor at the University of Oxford and Holger Krapp at the University of Cambridge for their helpful comments on the manuscript. I am grateful to Neal Glassman and Belinda King from the U.S. Air Force Office of Scientific Research and Johnny Evers from the Air Force Research Laboratory for supporting this work. The flight simulator experiments are funded by the Biotechnology and Biological Sciences Research Council.

About the Author

Rafal Zbikowski is a principal research officer in the department of aerospace, power, and sensors, Cranfield University, the Royal Military College of Science, in Shrivenham, England.

To Probe Further

Rafal Zbikowski’s work is described at https://www.rmcs.cranfield.ac.uk/daps/guidance/microairvehicles/view, along with a list of papers.

Links to Ronald Fearing’s work on a Micromechanical Flying Insect can be found at https://robotics.eecs.berkeley.edu/~ronf/MFI/index.html.

Michael Dickinson has a Web site at https://www.dickinson.caltech.edu. Graham Taylor’s Web site is https://users.ox.ac.uk/~zool0261. Holger Krapp’s Web site is https://www.zoo.cam.ac.uk/zoostaff/Krapp.html.

For a description of Charles M. Higgins’s work on an insect-inspired imaging system, see IEEE Sensors Journal, December 2002, Vol. 2, no. 6, pp. 508-528. Reid R. Harrison’s work on an artificial fly-vision system is described in Autonomous Robots, 1999, Vol. 7, no. 3, pp. 211-224.
