Silicon Valley giants such as Google and Facebook have been trying to harness artificial intelligence by training brain-inspired neural networks to better represent the real world. Digital Reasoning, a cognitive computing company based in Franklin, Tenn., recently announced that it has trained a neural network consisting of 160 billion parameters—more than 10 times larger than previous neural networks.
The Digital Reasoning neural network easily surpassed previous records held by Google’s 11.2-billion-parameter system and Lawrence Livermore National Laboratory’s 15-billion-parameter system. But it also showed improved accuracy over previous neural networks in tackling an “industry-standard dataset” consisting of 20,000 word analogies. Digital Reasoning’s model achieved an accuracy of almost 86 percent, significantly higher than Google’s previous record of just over 76 percent and Stanford University’s 75 percent.
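The article doesn’t describe the scoring method, but word-analogy benchmarks of this kind (“king is to queen as man is to woman”) are typically evaluated by vector arithmetic over learned word embeddings: the model gets credit when the nearest vector to king − man + woman is queen, and accuracy is the fraction of the 20,000 analogies answered correctly. A minimal sketch with toy, hand-made vectors (the vocabulary and numbers here are illustrative, not Digital Reasoning’s actual embeddings):

```python
import numpy as np

# Toy 4-dimensional "embeddings" chosen by hand so that the gender
# and royalty directions line up. Real systems learn these vectors
# from large text corpora.
vocab = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.8, 0.1, 0.0]),
    "woman": np.array([0.1, 0.1, 0.8, 0.0]),
}

def solve_analogy(a, b, c, vocab):
    """Answer 'a is to b as c is to ?' by finding the nearest cosine
    neighbor to the vector b - a + c, excluding the query words."""
    target = vocab[b] - vocab[a] + vocab[c]
    best_word, best_sim = None, -1.0
    for word, vec in vocab.items():
        if word in (a, b, c):
            continue
        sim = np.dot(target, vec) / (np.linalg.norm(target) * np.linalg.norm(vec))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

print(solve_analogy("man", "king", "woman", vocab))  # prints "queen"
```

Benchmark accuracy is then just the share of analogy questions for which this nearest-neighbor answer matches the expected word.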
“We are extremely proud of the results we have achieved, and the contribution we are making daily to the field of deep learning,” said Matthew Russell, chief technology officer for Digital Reasoning, in a press release.
“Deep learning” involves the building of learning machines from five or more layers of artificial neural networks. ("Deep" refers to the depth of the layers, rather than any depth of knowledge.) Yann LeCun, head of the Artificial Intelligence Research Lab at Facebook, has described the idea of deep learning as “machines that learn to represent the world.” (For a more detailed description—complete with knobs and lights—see IEEE Spectrum’s previous interview with LeCun on deep learning.)
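In code terms, “five or more layers” simply means stacking that many learned transformations between input and output. A minimal sketch of a five-layer forward pass (random weights and arbitrary layer sizes, purely illustrative; a trained network would learn its weights from data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Five stacked layers: each is a linear map followed by a ReLU
# nonlinearity. The layer sizes here are arbitrary.
layer_sizes = [10, 8, 8, 8, 8, 4]  # input -> hidden layers -> output
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, weights):
    """Pass input x through each layer in turn."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)  # linear map, then ReLU
    return x

out = forward(rng.standard_normal(10), weights)
print(out.shape)  # prints (4,)
```

The parameter counts quoted above are, loosely, the entries of such weight matrices (plus bias terms), at vastly larger scale than this toy.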
Digital Reasoning’s neural network was trained on three multi-core computers overnight in order to achieve its accuracy in tackling the word analogies dataset. But the company’s researchers plan to test the system on larger datasets and vocabularies in the near future. Their results so far have been detailed in a paper on the preprint server arXiv and in the Journal of Machine Learning.
Deep learning neural networks have received a growing amount of attention lately. For example, Google has been training its deep learning AI to figure out classic arcade games from scratch. The tech giant also recently unveiled its “DeepDream” tool for visualizing neural networks, a tool that also happened to produce beautiful, sometimes surreal images.
Jeremy Hsu has been working as a science and technology journalist in New York City since 2008. He has written on subjects as diverse as supercomputing and wearable electronics for IEEE Spectrum. When he’s not trying to wrap his head around the latest quantum computing news for Spectrum, he also contributes to a variety of publications such as Scientific American, Discover, Popular Science, and others. He is a graduate of New York University’s Science, Health & Environmental Reporting Program.