Can Big AI Make Responsible AI?

Major tech companies grapple with guidelines for artificial intelligence


Perhaps it was inevitable, as the AI world absorbed the news of GPT-4, that some people would think of Frankenstein’s monster. Or HAL 9000. Or the Terminator—any of science fiction’s great stories of technologies that wrought havoc before human beings had thought through their implications.

Even as the latest large language model has taken the tech world by surprise, the industry is scrambling to burnish its ethical AI credentials and to keep its standards for AI ethics ahead of the field's rapid advances. A prime case: Microsoft, which has had a Responsible AI initiative since 2017, has just added new open-source applications to what it calls its Responsible AI Toolbox, code intended, in the company's words, "to make it easier and faster for developers to incorporate responsible AI principles into their solutions." (Not unrelatedly, in a recent round of layoffs, Microsoft shut down an Ethics and Society team that it said had guided early AI efforts. A spokesperson, contacted by Spectrum, says there has been no letup in "the interdisciplinary way in which we work across research, policy, and engineering.")

“[T]here’s clearly a need to develop guidelines and move more swiftly than regulation.”
—Claire Leibowicz, Partnership on AI

“AI may well represent the most consequential technology advance of our lifetime,” wrote Brad Smith, Microsoft’s vice chair and president, in a blog post in February. His words were tempered: “Will all the changes be good? While I wish the answer were yes, of course that’s not the case.”

Separately, the Partnership on AI (PAI), a nonprofit that seeks to promote discussions of AI issues, has just published "Responsible Practices for Synthetic Media," a set of guidelines for how to create and share multimedia content generated by AI. Members of the partnership include such companies as OpenAI, Adobe, TikTok, and the BBC's R&D arm, as well as several AI startups.

But how effective can major tech companies be in policing AI's development, especially given how widely the use of AI tools is spreading beyond the tech giants? If you're concerned about deepfakes, watch the spread of "cheap fakes": images or videos fabricated with AI's help that may often be crude, but that can be made, for free, by anyone who finds an AI app online. The largest social media companies, including Meta, Twitter, and Google (which owns YouTube), have committed to removing misinformation and offensive posts. But the job keeps getting harder as more bad actors use increasingly sophisticated AI technologies.

Last month, for instance, a video turned up on Twitter of President Biden announcing he was going to start drafting American troops to protect Ukraine. It was, of course, fake—the conservative influencer who posted it came on camera after Biden to say so. He claimed it was an AI-powered warning of what the White House might do. As of this week it was still online, viewed more than 4 million times. It didn’t violate Twitter’s rules because it didn’t claim to be real. But a lot of people who reacted on Twitter apparently didn’t watch long enough to see the disclaimer.

How to decide, in such cases, what to do? Can the big tech companies—can anyone—set rules in advance that will work for everything that might be done with AI in the future?

“Everyone, I think, is operating in this Wild West and is eager to have some set of guidelines,” says Claire Leibowicz of the PAI. “I think, for good reasons, people are understandably skeptical of voluntary standards. At the same time, based on the swell of interest, and guidance from people from many different sectors, there’s clearly a need to develop guidelines and move more swiftly than regulation.”

Government, particularly in the United States, has moved slowly to make AI rules. That suits many developers, who argue that regulators would be heavy-handed and behind the curve. For now, that leaves companies in charge, and so far they have tended to set fairly general standards. The PAI's framework, for example, recommends that content creators be transparent when they've altered or faked something, perhaps by using labels or digital watermarks so that users can easily tell. The PAI agrees, at least in public, that it cannot go it alone.

“Microsoft believes that some regulation of AI, particularly for high-risk uses of the technology, is necessary,” says Besmira Nushi, a principal researcher at Microsoft Research, in an email. “As governments worldwide debate approaches to regulating certain uses of AI, Microsoft is committed to doing our part to develop and deploy AI responsibly.”

Leibowicz, at the PAI, says that if companies agree on a list of harmful and responsible uses of AI, it needs to be a living document, adaptable in a fast-changing field. “And it’s our hope that that will catalyze or galvanize the field of people who have a major role to play in this effort. And, to that end, it will be a complement to regulation that’s absolutely necessary.

“But,” she adds, “I think there’s also a degree of maintaining some humility at being unable to predict the future.”
