Facebook, Microsoft, and IBM Leaders on Challenges for AI and Their AI Partnership

AI has come far, but there are many challenges ahead, say expert panelists at the White House Frontiers Conference


Panel on Best Practices in AI
Photo: Prachi Patel

Late last month, Amazon, Facebook, Google, IBM, and Microsoft announced that they will create a non-profit organization called Partnership on Artificial Intelligence. At the White House Frontiers Conference held at Carnegie Mellon University today, thought leaders from these companies explained why AI has finally arrived and what challenges lie ahead. (Also read the White House’s report on the future of AI released yesterday.)

While AI research has been going on for more than 60 years, the technology is now at an inflection point, the panelists agreed. That has happened because of three things: faster, more powerful computers; critical computer science advances, mainly statistical machine learning and deep learning techniques; and the massive amounts of data now available from sensors and the Internet of Things.

The early decades of AI saw “a succession of disappointments and promises not met,” said Yann LeCun, director of AI at Facebook. “Now we have systems that can identify images, that can understand and translate text, for speech recognition,” he said. “Expectations are now even higher but perhaps more realistic. And now an industry exists…in the past AI was mostly academic.”

The Partnership on AI has two goals. “We want it to be a forum to discuss issues on proper deployment, best practices, and ethical questions,” LeCun said. “The other purpose is to explain the state-of-the-art in AI and what it can do in future.”

There has been much media coverage on the potential and dangers of AI. The new organization will dispel myths and act as a reliable source of information on where the technology is going, added Guruduth Banavar, VP of cognitive computing at IBM.

So what are the frontiers of AI? Teaching machines common sense, said LeCun. He gave the problem of text translation as an example. The technology is far from perfect, he said, because machines don’t have a deep understanding of the text they are translating.

“Most of what we [humans] learned, we learned in the first few years of life just by observing the world,” said LeCun. “And we don’t know how to do that with machines, how to teach them to learn by observing the world. This is one mountain we need to climb, but we don’t know how many mountains are behind it.”

According to IBM’s Banavar, a critical challenge is achieving better coordination between people and machines. IBM uses the term “cognitive computing” in lieu of artificial intelligence specifically to highlight that the goal is “augmenting the intelligence of people with what machines can do really well.” Combine computing power with the judgment and values humans have, he said, and we could transform everything from healthcare to education to manufacturing.

Technical challenges aside, another massive mountain to climb, said Banavar, is how to deploy AI in the real world. That will have to involve the points of view not just of tech developers, but also of users and policymakers. “In environments where machines and humans are interacting there’s got to be an element of trust,” he said. “That trust building will take time.”

Deploying AI systems in the real world will also include other humanistic challenges, said Jeannette Wing, corporate vice president at Microsoft Research. Her comments stemmed from the fiasco earlier this year with Microsoft’s AI chatbot Tay, which users quickly taught to spew racist and abusive messages, forcing the company to take it down within 24 hours. That happened in the United States; a similar chatbot in use in China since 2014 has been massively successful.

“We learned a lot from this,” Wing said. “AI systems are not just a tech experiment, but also a social and cultural experiment. We will have to design them with cybersecurity in mind, and keeping in mind the sub-society of the internet.”
