
Rule Britannia: Can the UK Set Global AI Standards?

London tries to create an Artificial Intelligence Strategy

A bird's-eye view of London overlaid with an AI-inspired pattern. iStockphoto

"The public have become extremely distrustful of AI systems," says Bill Mitchell. "You need to prove to them that you are competent, ethical and accountable."

Mitchell is director of policy at BCS, the Chartered Institute for IT in the United Kingdom, and the lead author of a new proposal to the UK's government for a National Artificial Intelligence Strategy. The government, which invited input, says it hopes to publish a plan by the end of the year, and Mitchell says heightened standards for AI are badly needed. They may affect you wherever in the world you live.

Data science in the UK had a tough year after COVID-19 struck. A computer model for determining when to impose lockdowns was widely criticized; a contact-tracing app was held up for months by technical glitches. Sixteen thousand COVID test results were lost because of a software error. Most damaging of all, standardized exams were canceled for the nation's secondary-school students—and their scores were estimated by what Prime Minister Boris Johnson called "a mutant algorithm."

Most of these cases did not involve AI, but the pain caused by technology was still felt. A poll commissioned by BCS showed that 53 percent of adults in the UK had "no faith in any organisation to use algorithms when making judgements about them."

Could national standards repair the damage? How, precisely, would they be set and enforced? How would they affect people, business, government, education and other institutions? What effect might UK regulations have on other countries? Complicated questions, but ones that even some tech companies—who could be on the receiving end of regulatory change—support asking.

BCS (originally the British Computer Society) proposes, among other things, new standards for training and ethics so that data scientists are seen as true professionals, like doctors or lawyers, with requirements to be met before one can work in the field, and penalties for breaking the rules. It says government should help make the AI workforce more diverse and inclusive so that everyone feels represented. It says the country needs to provide more equipment, broadband access and education programs for people in poverty to narrow the digital divide. And it says the government should coordinate efforts to develop the AI technologies that will be key in the fight against global climate change.


These are, the report says, overarching priorities, meant, in part, to ensure that the UK sets "the 'gold standard' in AI professionalism." The mechanism to get there, Mitchell says, is "the right regulatory framework that is genuinely pro-innovation, pro-competition, but pro-ethical innovation and fair competition."

The "gold standard" phrase is not there by accident. The United Kingdom could effectively lead the world for AI standards if its rules are strong and well-designed. (By other measures, such as papers published or research and development, it ranks second or third behind the United States and China.) Companies around the world, even if they have little or no physical presence in the UK, know that people there might visit their websites. The major tech companies that lead in AI would not leave the UK if it imposed new regulations; in a borderless digital world, they really couldn't. It is in their interest, rather, to cooperate with the government in London.

"AI is too important not to regulate," said Google in a statement emailed in response to written questions. "Fact-based guidance from governments, academia and civil society is also needed to establish boundaries, including in the form of regulation."

Reid Blackman, a philosophy professor by background who now heads a technology-ethics consulting firm called Virtue, says national standards can be effective—if they get into the specifics of how AI works and affects people. "There are loads of organizations—private-sector, government, nonprofit—that have rolled out various recommendations, frameworks, principles, whatever. And they are way too high-level to be helpful. 'We're for transparency.' 'We're for explainability.' 'We're for fairness,'" he says. "That's not going to help the consumer who just got denied a credit card."

But, Blackman says, AI could follow the example of medicine, which enjoys a higher level of trust than most other major institutions. In the United States, hospitals have review panels, universities have medical ethicists, and patients often develop personal relationships with their doctors, who are regarded as good if they explain a treatment before they try it.

"There's a big culture around ethics within the medical discipline. That doesn't exist in technology. So it's a bigger lift," says Blackman, "but that doesn't mean it's impossible."

It would, however, be difficult. As pervasive as AI has become in modern life, it is also, often, invisible. People may never know whether an AI system helped decide if they would get that auto loan or job interview, so, Mitchell says, they are primed to distrust it.

That, he says, is why it may help if AI has a generally accepted set of guidelines. People may not want more rules and requirements, but at least there will be more certainty about how AI is used.

"We need to have regulators who are proactively reaching out to collaborate with all the different people in this digital world, including the people affected by the digital technology, not just the ones like myself, sitting there writing software," says Mitchell.

"And also," he says, "don't assume they're going to get it right the first time. The regulators themselves have to be very, very innovative about how they're going to do this."

