What If the Biggest AI Fear Is AI Fear Itself?


Artificial intelligence has the potential to create more work than it disrupts


[Illustration: a man and a robot on a computer screen each hold a jigsaw-puzzle piece. Getty Images]

It’s been just about a year since a nonprofit called the Future of Life Institute posted an open letter reflecting people’s darkest fears about artificial intelligence.

“Contemporary AI systems are now becoming human-competitive at general tasks,” it said. It called for a pause in training of the most advanced AI, so that technology companies could develop safety protocols. It expressed worry about disinformation and out-of-control machines. And it struck a nerve with its concerns that some AI could make human work irrelevant.

“AI is a tool.... And tools generally aren’t substitutes for expertise but rather levers for its application.” —David Autor, MIT

“Should we automate away all the jobs, including the fulfilling ones?” the letter asked. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” At last count, the institute said more than 33,000 people involved in computer science (including about 100 IEEE members) had signed it.

But there are many others who say we are far—perhaps very far—from a world in which smart machines make human talent redundant. On the contrary, they can extend human reach. That argument is laid out most recently by David Autor, an economist at the Massachusetts Institute of Technology who has written and spoken extensively on the future of work.

“It’s important to understand that many of our tools are not our competitors,” he says. “They are more like enablers of the use of human expertise.”

AI as Calculator—and Chainsaw

In an essay posted on the website of the National Bureau of Economic Research and published in the magazine Noema, Autor says that if we do it right, AI can create many more opportunities than it disrupts. Certainly, there are jobs that will go away, and many things that once demanded a human touch will be done more cheaply and quickly by machines. But, he argues, many new lines of work will be created, or made more effective, with AI’s help. Autor says they may outnumber the jobs made obsolete, potentially by a substantial degree. He writes that “AI—used well—can assist with restoring the middle-skill, middle-class heart of the U.S. labor market that has been hollowed out by automation and globalization.”

“People are worried about the wrong things. They’re worried primarily about whether we’ll run out of work, when they ought to be worrying about how we will use human expertise, whether we use it well or badly.” —David Autor, MIT

“AI is a tool, like a calculator or a chainsaw,” he says. “And tools generally aren’t substitutes for expertise but rather levers for its application.”

He cites, among other evidence, an experiment led by economist Sida Peng of Microsoft Research, in which software developers were given access to GitHub Copilot, a generative AI programming aid, and asked to implement an HTTP server in JavaScript. Developers who used Copilot finished the job 56 percent faster than a control group that did not.
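For a sense of the scale of that task, here is a minimal HTTP server in Node.js. It is only a sketch of the sort of program the participants were asked to build; the study’s actual specification and code are not reproduced here.

```javascript
// Minimal Node.js HTTP server using only the built-in http module.
// Illustrative of the study's task, not the study's actual code.
const http = require('http');

const server = http.createServer((req, res) => {
  // Answer every request with a small JSON payload echoing the requested path.
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ path: req.url, message: 'hello' }));
});

// Start listening on port 3000 and log when the server is ready.
server.listen(3000, () => {
  console.log('Server listening on http://localhost:3000');
});
```

Even for a small, well-specified exercise like this, the study’s point was not the final code but the time saved getting there.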

In another experiment, published in the journal Science, grant writers, consultants, and managers were invited to use ChatGPT to help them write short documents, such as press releases and analysis plans. The AI didn’t take over the writing—but it sped the writers’ progress by 40 percent, and an outside peer group rated the quality of their work as 18 percent better.

Autor emphasizes that he is not saying there’s nothing to worry about. There’s plenty—much of it still unknowable. But, he says, “People are worried about the wrong things. They’re worried primarily about whether we’ll run out of work, when they ought to be worrying about how we will use human expertise, whether we use it well or badly.”

The “Pause” That Never Happened

Oren Etzioni is now focused on that question. An entrepreneur and computer scientist, he has started a nonprofit called TrueMedia.org, dedicated to fighting the rise of political deepfakes. He says that in many ways he agrees with Autor’s theme: “I think it can help to train people and it can help to level their expertise, and obviously those are positive things, but it’s a nuanced topic.”

People would never have guessed a few decades ago at the rise of programmers, statisticians, or social-media managers. Or cybersecurity analysts. Or AI ethicists.

Etzioni says AI technologies can create work opportunities in countless different fields, but they can also make it easier and cheaper for small groups to do what he calls “disinformation terrorism,” planting falsehoods on social channels to take down opponents.

AI technologies in their current stage have considerable limits, says Autor. What’s not happening, he says, is AI bots going off on their own and doing creative work or exercising judgment the way a person can. An AI system can improve computer code, for instance, but not suggest new applications out of the blue.

Among other things, Autor points out that generative AI is transforming a lot of cognitive work—in computing, medicine, finance, and the like—but not physical work, where too many things are beyond a machine’s control.

“The progress in robotics that deals in an uncertain world, as opposed to robotics on assembly lines where everything is bolted down and under control, that progress has been incredibly slow,” he says. “I mean, how many robots do you encounter in the course of a day? Just about none, right? Maybe your Roomba?”

The “pause” proposed in that open letter didn’t happen. AI in its many forms continues to transform people’s work at dizzying speed, much as software did a generation ago, or, long before it, electricity. Many jobs have disappeared, like those of farriers or typesetters. But, as Autor says, people would never have guessed a few decades ago at the rise of programmers, statisticians, or social-media managers.

Or, on the other hand, cybersecurity analysts. Or AI ethicists.

“A favorite line which I repeat over and over to people,” says Etzioni, “is that AI is the tool, but the choice is ours.”
