Standards Matter for Cars, Plugs, Wi-Fi—and AI?

Efforts are now underway to standardize development of responsible AI


Artificial intelligence holds much promise for innovation and progress, but it also has the potential to cause harm. To enable the responsible development and use of AI, the International Organization for Standardization (ISO) recently released ISO/IEC 42001, a new standard for AI management systems. According to ISO, this standard “offers organizations the comprehensive guidance they need to use AI responsibly and effectively, even as the technology is rapidly evolving.”

As AI has rapidly matured and been rolled out broadly across the world, there’s been a tangle of conflicting standards from big AI companies like Meta, Microsoft, and Google. (Although in November, Meta reportedly disbanded its Responsible AI group.) And the Austin, Tex.-based Responsible AI Institute has its own assessments and certification program for ethical uses and applications of AI. Yet maintaining consistent standards and practices is an age-old challenge across the history of technology. And standards-keeping organizations like ISO and the IEEE could be natural places to turn for a widely agreed-upon set of parameters for responsible AI development and use.

“If there is this kind of buy-in from organizations that are promoting the responsible development and use of AI, others will follow.” —Virginia Dignum, Umeå University, Umeå, Sweden

In ISO’s case, the standard concerns AI management systems. These are catalogs or inventories of the different AI systems a company is using, along with information on how, where, and why these systems are being used, says Umang Bhatt, an assistant professor and faculty fellow at New York University and an advisor to the Responsible AI Institute. And as the standard specifies, an AI management system is “intended to establish policies and objectives, as well as processes to achieve those objectives, in relation to the responsible development, provision, or use of AI systems.”
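To make that idea concrete, here is a minimal sketch, in Python, of what one entry in such an inventory might look like. ISO/IEC 42001 does not prescribe any schema; the AISystemRecord class and its fields are hypothetical illustrations of the how-where-why information Bhatt describes.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: ISO/IEC 42001 does not prescribe a schema.
# The fields below illustrate the how/where/why information an
# AI-system inventory might record for each system in use.
@dataclass
class AISystemRecord:
    name: str                 # which AI system is in use
    purpose: str              # why it is being used
    deployment_context: str   # where it runs (product, team, region)
    usage: str                # how it is used and what it informs
    owner: str                # who is accountable for it
    known_risks: list[str] = field(default_factory=list)

# A company-wide "AI management system" is then, in part, a catalog
# of such records, kept alongside policies and review processes.
inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="resume-screening-model",
        purpose="rank incoming job applications",
        deployment_context="HR portal, EU region",
        usage="scores applications; recruiters make the final call",
        owner="hiring-platform team",
        known_risks=["demographic bias", "training-data drift"],
    ),
]

for record in inventory:
    print(f"{record.name}: {record.purpose} ({record.deployment_context})")
```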

So ISO’s new standard provides a set of concrete guidelines—as opposed to just high-level principles—that support responsible AI, says Hoda Heidari, who coleads the Responsible AI Initiative at Carnegie Mellon University. Heidari adds that the standard also gives AI developers confidence that “the appropriate processes were followed in the creation and evaluation of the system before it was released, and there are appropriate processes around to monitor it and address any adverse outcomes.”

IEEE, ISO, and governments weigh in

Meanwhile, IEEE Spectrum’s parent organization, the IEEE, also develops and maintains a wide range of standards across many fields of technology. As of press time, Spectrum has learned of at least one effort underway within IEEE’s global standards-making organizations to develop responsible AI standards, reportedly an outgrowth of the 2020 Recommended Practice standard for AI development and use. In addition, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has published a document promoting ethically aligned development of autonomous systems.

As with many standards in tech, ISO’s is not mandatory. “What would compel companies to adopt this? The standard itself is not enough—you have to have reason and motivation for these developers to adopt it,” says Chirag Shah, a founding codirector of RAISE, a center for responsible AI at the University of Washington. He adds that organizations might also view the standard as overhead, especially small ones without sufficient resources, or even large corporations that already have their own standards.

“It’s just like a tracking record that I hope will become part of the culture in the software development community.” —Umang Bhatt, New York University

Virginia Dignum, a professor in responsible AI and director of the AI Policy Lab at Sweden’s Umeå University, echoes the sentiment, noting that the standard “only really works when there is a sufficient number of organizations taking it up, and by doing that, we also identify what will and will not work in the standard.” To address this issue, Dignum suggests turning to big tech firms and convincing them to adopt the standard, because “if there is this kind of buy-in from organizations that are promoting the responsible development and use of AI, others will follow.” For instance, Amazon’s AWS participated in creating the standard and is now pursuing its adoption.

Another motivation to adopt the standard is to prepare a framework for looming government regulations, which may align with ISO’s new standard. For example, the U.S. government recently released an executive order on AI, while the European Union’s AI Act is expected to take full effect by 2025.

Trust matters, too

An additional incentive for AI companies to take up the standard is to cultivate trust with end users. In the United States, for instance, people express more concern than excitement about AI’s impact on their daily lives, with concerns spanning the data used to train AI, its biases and inaccuracies, and its potential for misuse. “When there are standards and best practices around, and we can assure consumers that those are followed, they will trust the system more, and they are more willing to interact with it,” Heidari says.

Akin to a car’s braking system, which has been built and tested following particular standards and specifications, “even if users don’t understand what the standard is, it will provide them the confidence that things were developed in a certain way, and that there are also some auditing or checks and oversight on what has been developed,” Dignum says.

For AI firms looking to adopt the standard, Bhatt advises viewing it much like you would the practices you’ve established to keep track of any issues with your AI system. “These standards are going to come in place in a way that is quite similar to the continuous monitoring tools you might build and use,” he says. “It’s just like a tracking record that I hope will become part of the culture in the software development community.”
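As a rough sketch of that “tracking record” idea, and assuming nothing about any particular monitoring tool, a team might append each observed issue and its response to a simple timestamped log. The log_event function and its field names here are hypothetical.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of a continuous-monitoring tracking record:
# each observed issue with an AI system is appended as a timestamped
# JSON line, leaving an auditable history of problems and responses.
def log_event(logfile: str, system: str, issue: str, action: str) -> None:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,   # which AI system the issue concerns
        "issue": issue,     # what was observed
        "action": action,   # how the team responded
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(event) + "\n")  # one record per line

log_event(
    "monitoring_log.jsonl",
    system="resume-screening-model",
    issue="accuracy dropped below threshold on a new applicant pool",
    action="rolled back to previous version; retraining scheduled",
)
```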

Beyond implementation, Heidari hopes ISO’s new standard will spur a mindset shift in AI companies and the people who build their systems. She points to design choices when training machine-learning models as an example: each may seem like just another engineering or technical decision, without meaning beyond the machinery at hand, but “all those choices have huge implications when the resulting model will be utilized for decision-making processes or for automating practices on the ground,” she says. “The most important thing for developers of these systems is to keep in mind that whether they know it or not and whether they accept it or not, a lot of the choices they make have real-world consequences.”


UPDATE 9 Feb. 2024: The story was updated to provide a link to the most recent IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems document, from 2019. The original story had linked to the 2017 version.
