International Data Corp. estimated that US $118 billion was spent globally in 2022 to purchase artificial intelligence hardware, software, and data services. IDC has predicted the figure will more than double, to $300 billion, by 2026. But public procurement systems are not ready for the challenges of procuring AI systems, which bring with them new risks to citizens.
To help address this challenge, the IEEE Standards Association has introduced a pioneering standard for AI procurement. The standard, which is in development, can help government agencies be more responsible about how they acquire AI that serves the public interest.
Governments today are using AI and automated decision-making (ADM) systems to aid or replace decisions otherwise made by humans. The ADM systems’ judgments can affect citizens’ access to education, employment, health care, social services, and more.
The multilayered complexity of AI systems, and of the datasets they’re built on, challenges the people responsible for procurement, who rarely understand the systems they’re purchasing and deploying. The vast majority of government procurement models worldwide have yet to adapt their acquisition processes and laws to the systems’ complexity.
To assist government agencies in being better stewards of public-use technology, in 2021 the IEEE Standards Association approved the development of a new type of socio-technical standard, the IEEE P3119 Standard for the Procurement of AI and Automated Decision Systems. The standard was inspired by the findings of the AI and Procurement: A Primer report from the New York University Center for Responsible AI.
The new, voluntary standard is designed to help strengthen AI procurement approaches with due-diligence processes to ensure that agencies are critically evaluating the kinds of AI services and tools they acquire. The standard can provide agencies with a method to require transparency from AI vendors about associated risks.
IEEE P3119 also can help governments use their purchasing power to shape the market—which could increase demand for more responsible AI solutions.
A how-to guide
The standard aims to help government agencies strengthen their requirements for AI procurement. Added to existing regulations, it offers complementary how-to guidance that can be applied to a variety of processes including pre-solicitation and contract monitoring.
Existing AI procurement guidelines, such as those from the U.S. Government Accountability Office, the World Economic Forum, and the Ford Foundation, cover AI literacy, best practices, and red flags for vetting technology vendors. The IEEE P3119 standard goes further by providing guidance, for example, on determining whether a problem requires an AI solution. It also can help identify an agency’s risk tolerance, assess a vendor’s answers to questions about AI, recommend curated AI-specific contract language, and evaluate an AI solution across multiple criteria.
IEEE is currently developing such AI procurement guidance, guidance that moves beyond principles and best practices to detailed process recommendations. IEEE P3119 explicitly addresses the technical complexity of most AI models and the potential risks to society while also considering the systems’ capacity to scale for deployment in much larger populations.
Discussions in the standards working group centered on ways to identify and evaluate AI risks, how to mitigate risks within procurement needs, and how to elicit transparency about AI governance from vendors, with AI-specific best practices for solicitations and contracts.
The IEEE P3119 processes are meant to complement and optimize existing procurement requirements. The primary goal for the standard is to offer government agencies and AI vendors ways to adapt their procurement practices and solicited proposals to maximize the benefits of AI while minimizing the risks.
The standard is meant to become part of the “request for proposals” stage, integrated with solicitations, in order to raise the bar for AI procurement so that the public interest and citizens’ civil rights are proactively protected.
Putting the standard into practice, however, could be challenging for some governments that are dealing with historical regulatory regimes and limited institutional capacity.
A future article will describe the need to test the standard against existing regulations in controlled environments known as regulatory sandboxes.
Gisele Waters is the working group chair of IEEE P3119, cofounder of the AI Procurement Lab, an AI governance standards builder, and a human-centered design researcher. She is focused on addressing risk to vulnerable populations by optimizing human experiences and processes with technology.