Facebook, Microsoft, and IBM Leaders on Challenges for AI and Their AI Partnership

AI has come far, but there are many challenges ahead, say expert panelists at the White House Frontiers Conference

Panel on Best Practices in AI
Photo: Prachi Patel

Late last month, Amazon, Facebook, Google, IBM, and Microsoft announced that they would create a nonprofit organization called the Partnership on Artificial Intelligence. At the White House Frontiers Conference held at Carnegie Mellon University today, thought leaders from these companies explained why AI has finally arrived and what challenges lie ahead. (Also read the White House’s report on the future of AI released yesterday.)

While AI research has been going on for more than 60 years, the technology is now at an inflection point, the panelists agreed. Three things have brought it there: faster, more powerful computers; critical computer science advances, chiefly statistical machine learning and deep learning techniques; and the massive amounts of information now available from sensors and the Internet of Things.

The early decades of AI saw “a succession of disappointments and promises not met,” said Yann LeCun, director of AI research at Facebook. “Now we have systems that can identify images, that can understand and translate text, for speech recognition,” he said. “Expectations are now even higher but perhaps more realistic. And now an industry exists…in the past AI was mostly academic.”

The Partnership on AI has two goals. “We want it to be a forum to discuss issues on proper deployment, best practices, and ethical questions,” LeCun said. “The other purpose is to explain the state-of-the-art in AI and what it can do in future.”

There has been much media coverage of both the potential and the dangers of AI. The new organization will work to dispel myths and serve as a reliable source of information on where the technology is going, added Guruduth Banavar, VP of cognitive computing at IBM.

So what are the frontiers of AI? Teaching machines common sense, said LeCun. He gave the problem of text translation as an example. The technology is far from perfect, he said, because machines don’t have a deep understanding of the text they are translating.

“Most of what we [humans] learned, we learned in the first few years of life just by observing the world,” said LeCun. “And we don’t know how to do that with machines, how to teach them to learn by observing the world. This is one mountain we need to climb, but we don’t know how many mountains are behind it.”

According to IBM’s Banavar, a critical challenge is achieving closer coordination between people and machines. IBM uses the term “cognitive computing” in lieu of artificial intelligence specifically to highlight that the goal is “augmenting the intelligence of people with what machines can do really well.” Combine computing power with human judgment and values, he said, and we could transform everything from healthcare to education to manufacturing.

Technical challenges aside, another massive mountain to climb, said Banavar, is how to deploy AI in the real world. That will have to involve the points of view not just of tech developers but also of users and policymakers. “In environments where machines and humans are interacting, there’s got to be an element of trust,” he said. “That trust building will take time.”

Deploying AI systems in the real world will also raise humanistic challenges, said Jeannette Wing, corporate vice president at Microsoft Research. Her comments stemmed from the fiasco earlier this year with Microsoft’s AI chatbot Tay, which was manipulated by users into making racist and abusive remarks, forcing the company to take it down within 24 hours. That was in the United States; a similar chatbot, in use in China since 2014, has been massively successful.

“We learned a lot from this,” Wing said. “AI systems are not just a tech experiment, but also a social and cultural experiment. We will have to design them with cybersecurity in mind, and keeping in mind the sub-society of the internet.”

Will AI Steal Submarines’ Stealth?

Better detection will make the oceans transparent—and perhaps doom mutually assured destruction

The Virginia-class fast attack submarine USS Virginia cruises through the Mediterranean in 2010. Back then, it could effectively disappear just by diving.

Photo: U.S. Navy

Submarines are valued primarily for their ability to hide. The assurance that submarines would likely survive the first missile strike in a nuclear war and thus be able to respond by launching missiles in a second strike is key to the strategy of deterrence known as mutually assured destruction. Any new technology that might render the oceans effectively transparent, making it trivial to spot lurking submarines, could thus undermine the peace of the world. For nearly a century, naval engineers have striven to develop ever-faster, ever-quieter submarines. But they have worked just as hard at advancing a wide array of radar, sonar, and other technologies designed to detect, target, and eliminate enemy submarines.

The balance seemed to turn with the emergence of nuclear-powered submarines in the early 1960s. In a 2015 study for the Center for Strategic and Budgetary Assessments, Bryan Clark, a naval specialist now at the Hudson Institute, noted that the ability of these boats to remain submerged for long periods of time made them “nearly impossible to find with radar and active sonar.” But even these stealthy submarines produce subtle, very-low-frequency noises that can be picked up from far away by networks of acoustic hydrophone arrays mounted to the seafloor.
