Why We Fall Apart

Engineering’s reliability theory explains human aging

Illustration: Victor Koen

First in a series of reports on biomedical engineering innovations


Childhood is a special time indeed. If only we could maintain our body functions as they are at age 10, we could expect to live about 5000 years on average. Unfortunately, from age 11 on, it’s all downhill!

The problem is that our bodies deteriorate with age. For most of our lives, the risk of death is increasing exponentially, doubling every eight years. So, why do we fall apart, and what can we do about it?
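To see how quickly that exponential growth compounds, here is a minimal sketch; the baseline risk and reference age are hypothetical values chosen for illustration, not measured rates:

```python
def mortality_risk(age, baseline=0.001, baseline_age=30.0, doubling_years=8.0):
    """Annual risk of death when risk doubles every doubling_years years.
    The baseline risk of 0.001 at age 30 is hypothetical, for illustration."""
    return baseline * 2.0 ** ((age - baseline_age) / doubling_years)

# Five doublings between ages 30 and 70 mean a 32-fold increase in risk.
ratio = mortality_risk(70) / mortality_risk(30)
print(round(ratio))  # -> 32
```

With an eight-year doubling time, risk at 70 is 32 times the risk at 30, and risk at 94 is 256 times as high.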

Many scientists now believe that, for the first time in human history, we have developed a sophisticated enough understanding of the nature of human aging to begin seriously planning ways to defeat it. These scientists are working from a simple but compelling notion: the body, far from being a perfect creation, is a failure-prone, defect-ridden machine formed through the stochastic process of biological evolution. In this view, we can be further improved through genetic engineering and be better maintained through preventive, regenerative, and antiaging medicine and by repairing and replacing worn-out body parts. In short, the rate at which we fall apart could be decreased, maybe even to a negligible level.

The quest to understand and control aging has led us, two biologists, to draw inspiration from what might seem an unlikely source: reliability engineering. The engineering approach to understanding aging is based on ideas, methods, and models borrowed from reliability theory. Developed in the late 1950s to describe the failure and aging of complex electrical and electronic equipment, reliability theory has been greatly improved over the past several decades. It allows researchers to predict how a system with a specified architecture and level of reliability of its constituent parts will fail over time.

The theory is so general in scope that it can be applied to understanding aging in living organisms as well. In the ways that we age and die, we are not so different from the machines we build. The difference, we have found, is minimized if we think of ourselves in this unflattering way: we are like machines made up of redundant components, many of which are defective right from the start.

The reliability engineering approach to human aging provides a common scientific language and general framework for scientists working in different areas of aging research. It helps them knock down the barriers that specialists have constructed and allows them to understand each other better.

Most important, it helps define more clearly what aging is. In reliability theory, aging is defined through the increased risk of failure [see sidebar, “Terms To Know”]. More precisely, something ages if it is more likely to fall apart tomorrow than today. If the risk of failure does not increase as time passes, then there is no aging in terms of reliability theory.

Stages of Life: The so-called bathtub curve for human mortality, as seen in the U.S. population in 1999, has the same shape as the curve for failure rates of many machines. The curve for people (and machines) has three parts: working-in, or infant mortality (left); normal working (middle); and aging (right). Graph: Bryan Christie; Source: age-specific death rates from the Human Mortality Database (HMD), http://www.mortality.org

By looking closely at human aging data, we can find a striking similarity between how living organisms and technical devices age and fail. In both cases, the failure rate follows a curve shaped roughly like a bathtub [see graph, “Stages of Life”]. The curve consists of three stages, which we call the working-in or infant-mortality, normal-working, and aging periods. Engineers do not often see all three stages in a single product—infant mortality is a somewhat avoidable warranty disaster and most electronics become obsolete well before they would start to age—but the bathtub curve is still illustrative of the way things fail in general.

At the start of a machine’s life, the working-in period, failure rates are high; they then decrease with age. During this period, defective components fail. For example, the risk of a new microprocessor failing is often higher at the very start, because of defects in the silicon or because small variations in the fabrication process lead to circuits that give out under the initial stress of operation. The same working-in period exists early in life for most living organisms, including humans; for humans, it is called the infant-mortality period.

Those computers and people that did not fail initially operate quite well for a time, known as the normal-working period. This stage is distinguished by low and approximately constant failure rates. In humans, this period is all too short, just 10 to 15 years, starting at about age 5.

Then the third epoch, the aging period, starts. It is marked by an inexorable rise in the failure rate over time. In most living organisms, ourselves included, this rise in failure rates follows an explosive exponential trajectory described by the Gompertz law of mortality. In humans, the aging period occurs approximately from the ages of 20 to 100 years.

No End in Sight: Death rates slow at advanced ages. After age 95, the observed risk of death (red line) deviates from the value predicted by an early model, the Gompertz law (black line). Graph: Bryan Christie; Source: data for Swedish women, 1990-2000, from the Kannisto-Thatcher Database on Old Age Mortality (http://www.demogr.mpg.de/databases/ktdb)

But there is a fourth epoch that we and our machines share. This period is known in biology as late-life mortality leveling off. The late-life mortality deceleration law states that death rates stop increasing exponentially at advanced ages and instead begin to plateau. In humans, this happens at ages exceeding 100 years [see graph, “No End in Sight”]. If you live to be 110, your chances of seeing your next birthday are not very good, but, paradoxically, they are not much worse than they were when you were 102. There have been a number of attempts to explain the biology behind this in terms of reproduction and evolution, but since the same phenomenon is found not only in humans but also in such man-made stuff as steel, industrial relays, and the thermal insulation of motors, reliability theory may offer a better answer.

An immediate consequence of the last observation is that there is no fixed upper limit to human longevity—there is no special number that separates possible from impossible values of a life span. This conclusion flies in the face of the common belief that humans have a fixed maximal life span and that there exists a biological limit to longevity.

Geography Is Not Destiny: The compensation law of mortality shows that death rates in different populations converge at older ages. Graph: Bryan Christie; Source: adapted from Gavrilov & Gavrilova, The Biology of Life Span, 1991

Another aging rule becomes apparent in studies of the older end of the population. Called the compensation law of mortality, or mortality convergence in later life, this empirical rule states that the relative differences in death rates between different populations of the same species decrease with age. That is, while middle-aged people in India during World War II might have died at a much higher rate than those in Norway during the 1950s, the death rates for octogenarians from the two populations were rather close to each other [see graph, “Geography Is Not Destiny”].

Any theory of human aging has to explain these last three rules, known collectively as mortality, or failure, laws. And reliability theory, by way of a clutch of equations, covers all of them.

Here’s what the mathematics of reliability theory tells us. First, it predicts that a system may deteriorate with age even if it is built from nonaging elements—that is, elements with a constant failure rate caused by random factors, such as being struck by radiation or infected with a virus. This applies to any system made up of redundant but irreplaceable parts.

A simple example would be a computer with three microprocessors. In this case, the processors themselves do not age, but they do suffer damage by chance at some unpredictable point in time and permanently fail [see illustration, “Damage Tolerance”]. In the three-processor system, it takes a sequence of at least three failures to destroy the computer, as opposed to one unfortunate stroke if it had had only one processor. Even such a simple, three-part redundant system behaves as if it were aging.

Damage Tolerance: Redundancy creates damage tolerance, but it also allows a system to accumulate damage and thereby age. A system with one component fails as soon as that component is damaged (top), while a system with three redundant components survives damage but ages (bottom). Illustration: Bryan Christie
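The three-processor example can be made quantitative. Assuming each component fails at a constant rate (the classic non-aging, exponential model), the failure rate of the parallel system can be written in closed form, and it rises with time even though no component ages:

```python
import math

def system_hazard(t, n=3, lam=1.0):
    """Failure rate of a parallel system of n components, each failing
    at a constant ("non-aging") rate lam.  Time units are arbitrary.
    Survival: S(t) = 1 - (1 - exp(-lam * t)) ** n."""
    down = 1.0 - math.exp(-lam * t)             # one component is dead by time t
    survival = 1.0 - down ** n                  # not all n are dead yet
    density = n * lam * math.exp(-lam * t) * down ** (n - 1)
    return density / survival                   # instantaneous failure rate

# The system's risk of failure grows with age, although no part ages.
early = system_hazard(0.1)   # young system: plenty of redundancy left
late = system_hazard(1.0)    # old system: redundancy largely exhausted
```

Early in life the system almost never fails, because at least one of the three spares is nearly always still working; late in life, with spares used up, the system's failure rate approaches the failure rate of a single component.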

The positive effect of redundancy in systems is tolerance of damage, which decreases the initial risk of failure (death) and increases life span. Tolerance, however, makes it possible for damage to accumulate over time, thus producing the aging phenomenon. There are good reasons to look at humans as redundant systems made up at least in part of nonaging elements. The redundancy in living things is straightforward, as our vital organs and systems are made up of a great many cells. But there is also evidence that many of the parts that make up our vital systems, at the level of the cell, do not age.

We are like machines made up of redundant components, many of which are defective right from the start.

Recent experiments looking for the mechanisms behind age-related neurodegenerative diseases, such as Parkinson’s, found that the rate of brain-cell death stays constant, regardless of age. Many cell functions, too, have been shown to be as good as new even in old age.

By itself this redundancy takes care of two of the three aging rules. First is the compensation rule: older people from different populations die at similar rates even if younger people from those populations have very different death rates. Assuming that there is a steady rate of failure for its individual components, a system with 10 redundant parts might be less likely to fail at first than one with only eight. But at some point each system will be left with only a few working parts and the same risk of failing. It will simply take longer for the 10-part system to get there [see graph, “Redundancy Leads to Aging”].

Redundancy Leads to Aging: Machines with more redundancy begin life with a lower risk of failure than those with less, but as they age, the difference in their risk of failing diminishes; at the end, the risk for all levels off. These characteristics match those found in human death rates. Both failure rate and age are presented in dimensionless units. Graph: Bryan Christie

Redundant systems also mimic the way death rates level off in people over 100. At advanced ages, all systems eventually lose their redundancy and are left with only one critical component. At that stage, their failure rate is high but constant rather than increasing, similar to death rates in very old people.
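Both behaviors fall out of the same simple model. In this sketch (arbitrary time units, hypothetical redundancy levels), a 10-part system starts out far safer than an 8-part one, yet their failure rates converge with age, and both level off near the single-component rate:

```python
import math

def system_hazard(t, n, lam=1.0):
    """Failure rate of a parallel system of n non-aging components,
    each with constant failure rate lam (arbitrary units)."""
    down = 1.0 - math.exp(-lam * t)
    return (n * lam * math.exp(-lam * t) * down ** (n - 1)
            / (1.0 - down ** n))

# Compensation law: more redundancy (n = 10 vs. n = 8) means a much
# lower failure rate early on, but the two rates converge with age.
early_ratio = system_hazard(0.05, n=8) / system_hazard(0.05, n=10)
late_ratio = system_hazard(6.0, n=8) / system_hazard(6.0, n=10)

# Late-life plateau: once redundancy is exhausted, the failure rate
# levels off near the single-component rate lam.
plateau = system_hazard(10.0, n=10)
```

At age 0.05 the 8-part system is worse off by a factor of hundreds; by age 6 the two failure rates differ by well under 1 percent, and both are essentially at the plateau value lam.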

The only remaining problem that simple redundancy does not explain is the law of mortality growth with age. If you plot the logarithm of death rates against age for living things, you get, roughly, a straight line described by the Gompertz curve. But to get that same straight line from the logarithm of failure rates of machines, you need to plot it against the logarithm of age, a relationship called the Weibull power law. In other words, during the aging period, the curve describing death rates for humans bends upward much more steeply than the one describing typical failure rates for machines. This was a puzzle to us for many years.
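The distinction is easy to check numerically. With illustrative (not fitted) parameters, the logarithm of a Gompertz hazard climbs in equal steps for equal steps in age, while the logarithm of a Weibull hazard is a straight line only when plotted against the logarithm of age:

```python
import math

def gompertz(t, a=1e-4, b=0.1):
    """Gompertz hazard: exponential growth with age (illustrative a, b)."""
    return a * math.exp(b * t)

def weibull(t, k=5.0, lam=1e-9):
    """Weibull hazard: a power law in age (illustrative k, lam)."""
    return k * lam * t ** (k - 1)

ages = [20.0, 40.0, 60.0, 80.0]
log_g = [math.log(gompertz(t)) for t in ages]
log_w = [math.log(weibull(t)) for t in ages]

# Gompertz: equal 20-year steps in age give equal steps in log hazard,
# i.e., a straight line on semi-log axes.
g_steps = [y2 - y1 for y1, y2 in zip(log_g, log_g[1:])]

# Weibull: the semi-log steps shrink (the line bends), but plotting
# log hazard against log age recovers a straight line of slope k - 1.
w_steps = [y2 - y1 for y1, y2 in zip(log_w, log_w[1:])]
log_ages = [math.log(t) for t in ages]
w_slopes = [(y2 - y1) / (x2 - x1)
            for x1, x2, y1, y2 in zip(log_ages, log_ages[1:], log_w, log_w[1:])]
```

On semi-log axes the Gompertz curve is straight and the Weibull curve sags; that sag is exactly the mismatch between typical machine failure data and human death rates.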

The “aha!” moment came some years ago, when we had to work with an unpredictable, dilapidated mainframe computer in Russia. We got the impression that the complex behavior of this computer could be described only by resorting to such human concepts as character, personality, and change of mood. That observation led us to the bizarre idea that living organisms, including humans, resemble partially damaged machines more than new ones.

Indeed, in contrast to technical devices, which are constructed out of previously manufactured and tested components, organisms form themselves through a process of self-assembly out of untested elements—cells. This fundamental difference in the manner in which people and machines are made has important consequences for how they age.

While the reliability of technical devices can be ensured by the high quality of their elements, the reliability of living organisms has to be ensured by an exceptionally high degree of system redundancy that overcomes the poor quality of some elements. In other words, machines are built to avoid faults, while living things build themselves to tolerate them.

Musing over the behavior of our old Russian computer, we discovered that standard reliability models usually have a hidden assumption that the system is, at its start, undamaged. It is that assumption that leads to failure rate curves that follow Weibull’s power law. However, explaining the exponential deterioration of living organisms requires the opposite conjecture: organisms start their adult life with a high load of initial damage [see graph, “A Defective Start”].

A Defective Start: People age more like machines built with many faulty parts than like ones built with pristine parts. As the number of bad components, the initial damage load, increases (bottom to top), machine failure rates begin to mimic human death rates. Graph: Bryan Christie
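A sketch of that conjecture, under the simplifying assumption that each of a system's n slots holds a working component only with probability p: lowering p (more initial damage) sharply raises the failure rate early in life, where a pristine system would barely fail at all:

```python
import math

def survival(t, n=10, p=1.0, lam=1.0):
    """Parallel system of n slots.  Each slot holds a working component
    only with probability p (p < 1 models the initial damage load);
    working components then fail at a constant rate lam."""
    down = 1.0 - math.exp(-lam * t)    # a working component is dead by time t
    s = 0.0
    for k in range(1, n + 1):          # k = number of initially working parts
        weight = math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)
        s += weight * (1.0 - down ** k)
    return s

def hazard(t, dt=1e-3, **kw):
    """Numerical failure rate, -d(ln S)/dt, by finite differences."""
    return (math.log(survival(t, **kw)) - math.log(survival(t + dt, **kw))) / dt

# Early in life, the damaged system (p = 0.5) fails at a vastly higher
# rate than the pristine one (p = 1), and its failure rate then climbs
# steeply with age, the signature of Gompertz-like aging.
h_damaged = hazard(0.05, p=0.5)
h_pristine = hazard(0.05, p=1.0)
```

The damaged system behaves like an already old machine from the start: some individuals begin life with only one or two working parts in reserve, so the population's failure rate is substantial at young ages and rises rapidly thereafter.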

Although this idea may seem counterintuitive, it fits well with many observations of massive cell loss in early development. For example, the female human fetus at 4 to 5 months possesses 6 to 7 million eggs. By birth, this number drops to 1 to 2 million and declines even further. At the start of puberty in normal girls, there are only 0.3 to 0.5 million eggs left—just 5 to 7 percent of the initial number. It is now well established that the exhaustion of this egg supply over time is responsible for menopause and the failure of the reproductive system, and that women with more egg cells have a longer reproductive span.

If we accept the idea that we are born with a large amount of damage, it follows that even small improvements to the processes of early human development—ones that increase the numbers of initially functional elements—could result in a remarkable fall in mortality and a significant extension of human life. Indeed, there is mounting evidence now in support of the idea of fetal origins of adult degenerative diseases and early-life programming of aging and longevity.

Interestingly, even such an ephemeral early-life circumstance as the month of birth affects human life span, indicating that early-life seasonal troubles, such as vitamin deficiency from a mother’s more meager winter diet or exposure to diseases such as influenza, may have long-lasting consequences.

With the reliability theory’s view of aging, researchers now have, generally at least, a why and a how of aging. We age because our makeup includes irreplaceable but redundant parts, many of which are defective, and we age as each of those parts inevitably stops working. Having such a theory can help focus biomedical research on interventions that can slow or control aging.

One of the greatest of such interventions would be a way to avoid the developmental damage responsible for the high initial damage load that marks our lives. Even such a simple thing as an adequate supply of vitamins (folic acid, in particular) and other micronutrients for expectant mothers prevents extensive DNA damage and many inborn defects. For example, pregnant mice fed antioxidants, which decrease damage to DNA and other cellular structures, produce longer-lived offspring. This line of research could lead to the prevention of age-related diseases before birth, analogous to improving the manufacturing process of a computer chip.

We could also do better at preventing damage to tissues and organs. The elimination of widespread chronic infections and hidden inflammation helps to delay the onset of arthritis, atherosclerosis, diabetes, Alzheimer’s disease, and some types of cancer. And while we’re at it, we should learn to repair our bodies better when we’re wounded or weakened by disease.

Living organisms already have numerous repair mechanisms. For example, cells killed by everything from scratches to sunburn are continuously replaced by new ones formed from stem cells, cells that can multiply to produce many types of tissue. Scientists have been studying what’s called the hormesis effect: the observation that a little bit of poison activates an organism’s self-repair mechanisms, with the side effect of protecting it against hazards other than the poison itself. If we could learn to control such a protective effect, we might be able to slow or prevent the loss of cells and systems that leads to aging.

Finally, we could learn to replace our damaged organs, substituting the young and healthy for the old and failing. Many researchers now believe that one day the human life span could be greatly extended by replenishing aging organs with stem cells. We are just now starting down this road. Such regenerative medicine and tissue engineering may sound like science fiction, but a growing number of scientists are taking the first steps to grow tissues and organs to replace failed ones. Laboratories around the world are making progress in building replacement lung, kidney, liver, and heart tissue.

Reliability theory suggests that there might be no single underlying aging process. Instead, aging may be largely an emergent property of redundant systems. Such systems can have a network of destruction pathways, each associated with particular manifestations of aging, whether menopause or Alzheimer’s disease. Metaphorically speaking, our life span is a time bomb with many fuses burning at different speeds. Cutting off only one fuse may be inadequate—we need to take care of them all.


To Probe Further

Several hundred scientists recently met for a conference called “Strategies for Engineered Negligible Senescence: Reasons Why Genuine Control of Aging May Be Foreseeable.” A detailed report from this meeting (110 articles, 597 pages) was published in the June 2004 issue of the Annals of the New York Academy of Sciences (http://www.annalsnyas.org/content/vol1019/issue1/).

A new peer-reviewed journal, Rejuvenation Research, has just been established to promote the study of aging and interventions to slow it.

The authors of this article detailed the mathematics behind the reliability theory of aging in The Biology of Life Span: A Quantitative Approach (Taylor and Francis, New York, 1991) and, more recently, in “The Reliability Theory of Aging and Longevity,” Journal of Theoretical Biology, Vol. 213, No. 4, 2001, pp. 527-545.

Additional information related to the topic of this article is at the authors’ Web site, http://longevity-science.org.

For the state of the art in reliability engineering, see IEEE Transactions on Reliability.
