Some AI Systems May Be Impossible to Compute

New research suggests there are limitations to what deep neural networks can do

[Illustration: the shape of a brain and glowing network lines within a box. Getty Images/IEEE Spectrum]

Deep neural networks are increasingly helping to design microchips, predict how proteins fold, and outperform people at complex games. However, researchers have now discovered there are fundamental theoretical limits to how stable and accurate these AI systems can actually get.

These findings might help shed light on what is and is not actually possible with AI, the scientists add.

In artificial neural networks, components dubbed “neurons” are fed data and cooperate to solve a problem, such as recognizing images. The neural net repeatedly adjusts the links between its neurons and sees if the resulting patterns of behavior are better at finding a solution. Over time, the network discovers which patterns are best at computing results. It then adopts these as defaults, mimicking the process of learning in the human brain. A neural network is dubbed “deep” if it possesses multiple layers of neurons.
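The learning loop described above can be sketched in a few lines. This is a minimal toy example, not any system from the study: a two-layer network, trained by hand-written gradient descent on the classic XOR problem, repeatedly adjusts the links (weights) between its neurons and keeps the adjustments that reduce its error.

```python
import numpy as np

# Toy two-layer ("deep") network learning XOR by gradient descent.
# Illustrative only: real frameworks automate the hand-written
# gradient computation below.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden links between neurons
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output links

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1)                      # hidden activations
    out = sigmoid(h @ W2)                    # network's prediction
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation: nudge each weight in the direction
    # that shrinks the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(losses[0], losses[-1])  # error falls as the network "learns"
```

Over the iterations the error on the training examples drops, which is the sense in which the network "discovers which patterns are best at computing results."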

Although deep neural networks are being used for increasingly practical applications such as analyzing medical scans and guiding autonomous vehicles, there is now overwhelming evidence that they can often prove unstable—that is, a slight alteration in the data they receive can lead to a wild change in outcomes. For example, previous research found that changing a single pixel on an image can make an AI think a horse is a frog, and that medical images can be modified in ways imperceptible to the human eye that cause an AI to misdiagnose cancer 100 percent of the time.
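The instability described above can be illustrated with a deliberately simple stand-in: a linear classifier whose decision flips when the input is nudged by a tiny, targeted amount. The weights, inputs, and labels here are invented for illustration; the mechanism—perturbing the input in the direction the model is most sensitive to—is the same idea behind one-pixel and adversarial-image attacks.

```python
import numpy as np

# Toy linear classifier: sign of (input . weights) decides the label.
w = np.array([2.0, -3.0, 1.0])    # hypothetical model weights
x = np.array([0.5, 0.4, 0.4])     # input scored just above the boundary

def classify(v):
    return "horse" if v @ w > 0 else "frog"

print(classify(x))                 # classified as "horse"

# Nudge the input a small step against the weight vector --
# the direction the model is most sensitive to.
eps = 0.1
x_adv = x - eps * w / np.linalg.norm(w)

print(np.linalg.norm(x_adv - x))   # perturbation has size 0.1
print(classify(x_adv))             # now classified as "frog"
```

A perturbation of size 0.1 (against an input of size roughly 0.75) is enough to flip the label, because the original input sat close to the decision boundary—exactly the regime in which deep networks have been shown to be fragile.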

Previous research suggested there is mathematical proof that stable, accurate neural networks exist for a wide variety of problems. However, in a new study, researchers now find that although stable, accurate neural networks may theoretically exist for many problems, there may paradoxically be no algorithm that can actually successfully compute them.

“Theoretically, there are very few limitations on what neural networks can achieve,” says study co-lead author Matthew Colbrook, a mathematician at the University of Cambridge, in England. The problem emerges when trying to compute those neural networks. “A digital computer can only compute certain specific neural networks,” says study co-lead author Vegard Antun, a mathematician at the University of Oslo, in Norway. “Sometimes computing a desirable neural network is impossible.”

These new findings might sound paradoxical, as if one claimed that a certain kind of cake exists but that no recipe can produce it.

“We would say that it is not the recipe that is the problem. Instead, it is the tools you have to make the cake that is the problem,” says study senior author Anders Hansen, a mathematician at the University of Cambridge. “We are saying that there might be a recipe for the cake, but regardless of the mixers you have available, you may not be able to make the desired cake. Moreover, when you try to make the cake with your mixer in the kitchen, you will end up with a completely different cake.”

In addition, to continue the analogy, “it can even be the case that you cannot tell whether the cake is incorrect until you try it, and then it is too late,” Colbrook says. “There are, however, certain cases when your mixer is sufficient to make the cake you want, or at least a good approximation of that cake.”

These new findings on the limitations of neural networks echo previous research on the limitations of mathematics from mathematician Kurt Gödel and on the limitations of computation from computer scientist Alan Turing. Roughly, they revealed “that there are mathematical statements that can never be proven or disproven and that there are basic computational problems that a computer cannot solve,” Antun says.

The new study finds that an algorithm may not be able to compute a stable, accurate neural network for a given problem no matter how much data it can access or the accuracy of that data. This is similar to Turing’s argument that there are problems that a computer may not solve regardless of computing power and run time, Hansen says.

“There are inherent limitations on what computers can achieve, and these limitations will show up in AI as well,” Colbrook says. “This means that theoretical results on the existence of neural networks with great properties may not yield an accurate description of what is possible in reality.”

These new findings do not suggest that all neural networks are completely flawed, but that they may prove stable and accurate only in limited scenarios. “In certain cases, it is possible to compute stable and accurate neural networks,” Antun says. “The key issue is the part ‘in certain cases.’ The big problem is to find these cases. Currently, there is very little understanding of how to do this.”

The researchers found there was often a trade-off between stability and accuracy in neural networks. “The issue is that we want both stability and accuracy,” Hansen says. “In practice, for safety-critical applications, one may have to sacrifice some accuracy to secure stability.”
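The trade-off between stability and accuracy can be seen even in the simplest learned models. The sketch below uses ridge regression on synthetic data as a stand-in for a neural network (it is not the study's method): heavier regularization shrinks the weights, making the output less sensitive to input perturbations, but the fit to the training data gets worse.

```python
import numpy as np

# Stability vs. accuracy with ridge regression on synthetic data.
# Sensitivity here is the weight norm: the worst-case change in
# output per unit change in input for a linear model.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=50)

def ridge(lam):
    # Closed-form ridge solution: (X^T X + lam I)^{-1} X^T y
    w = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)
    train_err = float(np.mean((X @ w - y) ** 2))
    sensitivity = float(np.linalg.norm(w))
    return train_err, sensitivity

for lam in [0.0, 1.0, 100.0]:
    err, sens = ridge(lam)
    print(f"lambda={lam:6.1f}  train error={err:.4f}  sensitivity={sens:.3f}")
```

As the regularization strength grows, the training error can only go up while the sensitivity can only go down—a one-parameter picture of sacrificing "some accuracy to secure stability."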

As part of the new study, the researchers developed what they call fast iterative restarted networks (FIRENETs). These neural networks can offer a blend of both stability and accuracy when it comes to tasks such as analyzing medical images.

These new findings regarding the limitations of neural networks are not aimed at dampening artificial-intelligence research, and may instead spur new work exploring ways to work within these limits.

“Figuring out what can and cannot be done will be healthy for AI in the long run,” Colbrook says. “Note that the negative results of Turing and Gödel sparked an enormous effort in mathematical foundations and computer science. This led to much of modern computer science and modern logic, respectively.”

For example, these new findings imply the existence of a classification theory for describing which stable neural networks with a given accuracy can be computed by an algorithm. Using the previous cake analogy, “it would be a classification theory describing which cakes can be baked with the mixers that are physically possible to design,” Antun says. “And if it is impossible to bake the cake, we want to know how close one can get to the type of cake you desire.”

The scientists detailed their findings online 16 March 2022 in the journal Proceedings of the National Academy of Sciences.
