Biggest Neural Network Ever Pushes AI Deep Learning

Digital Reasoning has trained a record-breaking artificial intelligence neural network that is 14 times larger than Google's previous record

Illustration: Getty Images

Silicon Valley giants such as Google and Facebook have been trying to harness artificial intelligence by training brain-inspired neural networks to better represent the real world. Digital Reasoning, a cognitive computing company based in Franklin, Tenn., recently announced that it has trained a neural network consisting of 160 billion parameters, more than 10 times larger than previous neural networks.

The Digital Reasoning neural network easily surpassed previous records held by Google’s 11.2-billion parameter system and Lawrence Livermore National Laboratory’s 15-billion parameter system. It also showed improved accuracy over previous neural networks in tackling an “industry-standard dataset” consisting of 20,000 word analogies. Digital Reasoning’s model achieved an accuracy of almost 86 percent, significantly higher than Google’s previous record of just over 76 percent and Stanford University’s 75 percent.
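A word-analogy benchmark of this kind is typically scored by vector arithmetic over learned word embeddings: to answer “man is to king as woman is to ?”, the system looks for the word whose vector is closest to king − man + woman. The sketch below illustrates the idea with tiny hand-picked 4-dimensional vectors; the words, values, and helper names are all hypothetical, and real models like those discussed here learn much higher-dimensional vectors from text.

```python
import math

# Toy 4-dimensional word vectors (hypothetical values for illustration;
# real embedding models learn vectors with hundreds of dimensions).
embeddings = {
    "king":  [0.8, 0.7, 0.1, 0.1],
    "queen": [0.8, 0.1, 0.7, 0.1],
    "man":   [0.2, 0.8, 0.1, 0.1],
    "woman": [0.2, 0.1, 0.8, 0.1],
}

def cosine(u, v):
    # Cosine similarity: dot product normalized by vector lengths
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def solve_analogy(a, b, c):
    """Answer 'a is to b as c is to ?': find the vocabulary word whose
    vector is most similar to vector(b) - vector(a) + vector(c)."""
    target = [vb - va + vc for va, vb, vc in
              zip(embeddings[a], embeddings[b], embeddings[c])]
    candidates = (w for w in embeddings if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(target, embeddings[w]))

print(solve_analogy("man", "king", "woman"))  # "queen"
```

A model’s benchmark accuracy is simply the fraction of the 20,000 analogies for which this nearest-neighbor lookup returns the expected word.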

“We are extremely proud of the results we have achieved, and the contribution we are making daily to the field of deep learning,” said Matthew Russell, chief technology officer for Digital Reasoning, in a press release.

“Deep learning” involves building learning machines from five or more layers of artificial neural networks. ("Deep" refers to the depth of the layers, rather than any depth of knowledge.) Yann LeCun, head of the Artificial Intelligence Research Lab at Facebook, has described the idea of deep learning as “machines that learn to represent the world.” (For a more detailed description—complete with knobs and lights—see IEEE Spectrum’s previous interview with LeCun on deep learning.)
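The layered structure described above can be sketched in a few lines: data flows through a stack of fully connected layers, each applying learned weights and a nonlinearity, and the "parameters" quoted when comparing network sizes are just the count of those weights and biases. Everything below is a toy sketch with arbitrary layer sizes and random weights, not any real system’s architecture.

```python
import random

random.seed(0)  # reproducible toy weights

def relu(x):
    # Elementwise nonlinearity applied between layers
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    # One fully connected layer: out[j] = sum_i x[i] * weights[i][j] + bias[j]
    return [sum(xi * row[j] for xi, row in zip(x, weights)) + bias[j]
            for j in range(len(bias))]

def init_layer(n_in, n_out):
    weights = [[random.uniform(-1, 1) for _ in range(n_out)] for _ in range(n_in)]
    bias = [0.0] * n_out
    return weights, bias

# Five weight layers stacked on a 4-value input: the stacking is the
# "deep" in deep learning. (Layer sizes here are arbitrary toy choices.)
sizes = [4, 8, 8, 8, 8, 2]
layers = [init_layer(n_in, n_out) for n_in, n_out in zip(sizes, sizes[1:])]

# "Parameters" are the learnable weights and biases; their total count is
# the figure quoted for network size (160 billion in Digital Reasoning's case).
total_params = sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))

# Forward pass: ReLU between hidden layers, raw values at the output
x = [0.5, -0.3, 0.8, 0.1]
for weights, bias in layers[:-1]:
    x = relu(dense(x, weights, bias))
weights, bias = layers[-1]
output = dense(x, weights, bias)

print(total_params)   # 274 parameters in this toy network
print(len(output))    # 2 output values
```

Scaling the same construction to 160 billion parameters changes nothing conceptually; the engineering challenge is training that many weights efficiently.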

Digital Reasoning’s neural network was trained on three multi-core computers overnight in order to achieve its accuracy in tackling the word analogies dataset. But the company’s researchers plan to test the system on larger datasets and vocabularies in the near future. Their results so far have been detailed in a paper on the preprint server arXiv and in the Journal of Machine Learning.

Deep learning neural networks have received a growing amount of attention lately. For example, Google has been training its deep learning AI to figure out classic arcade games from scratch. The tech giant also recently unveiled its “DeepDream” tool for visualizing neural networks, a tool that also happened to produce beautiful, sometimes surreal images.
