Artificial intelligence software can beat the world’s most widely used test of a machine’s ability to act human, Google’s reCAPTCHA, by copying how human vision works, a new study finds.
These new findings suggest the need for more robust automated human-checking techniques, and could help improve computer perception for robotics tasks, scientists add.
Alan Turing, a founding figure of modern computing, conceived the Turing test; its most famous version asks whether a machine could mimic a human well enough in a text conversation to be indistinguishable from one. In formulating it, Turing helped give rise to the field of artificial intelligence.
The most commonly used Turing test is the CAPTCHA, an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart.” CAPTCHAs are designed to see whether users are human, often to prevent bots from accessing computing services. They usually challenge website visitors to recognize a string of distorted letters and digits, a problem designed to be difficult for computers and easy for humans.
A CAPTCHA is considered broken if an algorithm can successfully solve it at least 1 percent of the time. Now San Francisco Bay Area startup Vicarious reveals its AI software can solve reCAPTCHA with 66.6 percent accuracy, BotDetect with 64.4 percent, Yahoo CAPTCHAs with 57.4 percent, and PayPal CAPTCHAs with 57.1 percent.
The system that Vicarious developed, known as the Recursive Cortical Network (RCN), is an artificial neural network, a computing design that mimics how the brain works. In such a system, components known as artificial neurons are fed data, and work together to solve a problem such as identifying text or recognizing speech. The neural net can then alter the pattern of connections among those neurons to change the way they interact, and the network tries solving the problem again. Over time, the neural net learns which patterns are best at computing solutions.
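That learning loop — feed data to artificial neurons, measure the error, and adjust the connections among them — can be illustrated with a toy example. The sketch below trains a single artificial neuron to learn the logical OR function by repeatedly nudging its connection weights; it is purely illustrative and is not Vicarious's RCN or any CAPTCHA solver.

```python
import math

# A single artificial "neuron": a weighted sum of inputs passed through
# a sigmoid. Training nudges the weights so the output moves toward the
# label -- the "altering the pattern of connections" described above.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=200):
    w = [0.0, 0.0]   # connection strengths
    b = 0.0          # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = out - y                # how far off was the neuron?
            w[0] -= lr * err * x[0]      # adjust each connection a little
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Learn logical OR from four labeled examples.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
w, b = train(samples, labels)
preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x in samples]
print(preds)  # the neuron has learned OR: [0, 1, 1, 1]
```

Real networks stack millions of such units in layers, but the principle — repeated small adjustments to connection strengths driven by error — is the same.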
Previous neural nets could solve reCAPTCHAs, but required training on millions of labeled CAPTCHA image examples or handcrafted rules on how to crack each kind of image. In contrast, Vicarious' system required far less training data: compared with state-of-the-art deep-learning approaches for reading text, RCN achieved comparable or higher accuracy using roughly 300 times fewer training examples.
“Our system has the ability to learn using relatively few examples, much like the human brain,” says study lead author Dileep George, cofounder of Vicarious.
Vicarious says the key to its success was modeling RCN after the human brain's visual system. The company explains that RCN's artificial neurons are structured to build models of surfaces and contours, letting the network recognize images and objects from just a few examples.
These findings suggest “text-based CAPTCHAs are becoming obsolete,” George says. He notes that Google and others are already moving away from text-based CAPTCHAs toward new verification mechanisms, such as relying on image-based CAPTCHAs.
The researchers note their software could help tackle other challenges linked with computer perception. “We’re applying it toward many robotics tasks,” George says. “You can imagine a robot needing to not just identify an object but also interact with it, and it needs to build a model of how it behaves if it, say, has to push it.”
The scientists detailed their findings online Oct. 26 in the journal Science.
Charles Q. Choi is a science reporter who contributes regularly to IEEE Spectrum. He has written for Scientific American, The New York Times, Wired, and Science, among others.