AI and the Future of Work: What to Expect

MIT conference convenes AI and automation experts to consider the technology’s effects on jobs and industries

Illustration: Shutterstock

The robots have come for our jobs. This is the fear that artificial intelligence increasingly stokes among both the tech and policy elite and the public at large. But how worried should we really be?

To consider what impact AI will have on employment, a conference at MIT titled The Future of Work is convening this week, bringing together some leading thinkers and analysts. In advance of the conference, IEEE Spectrum talked with R. David Edelman, director of MIT’s Project on Technology, Economy & National Security, about his take on AI’s coming role.

Edelman says he’s lately seen AI-related worry (mixed with economic anxiety) ramp up in much the same way that cybersecurity worries began ratcheting up ten or fifteen years ago.

“Increasingly, issues like the implications of AI on the future of work are table stakes for understanding our economic future and our ability to deliver prosperity for Americans,” he says. “That’s why you’re seeing broad interest, not just from the Department of Labor, not just from the Council of Economic Advisors, but across the government and, in turn, across society.”

Before coming to MIT, Edelman worked in the White House from 2010 to 2017 under a number of titles, including Special Assistant to the President on Economic and Technology Policy. Edelman also organizes a related conference at MIT in the spring, the MIT AI Policy Congress.

At this week’s Future of Work conference, though, Edelman says he’ll be keeping his ears open for a number of issues that he thinks are not quite on everyone’s radar yet, but may be soon.

For starters, Edelman says, mainstream conversations pay too little attention to the boundary between AI-controlled systems and human-controlled ones.

“We need to figure out when people are comfortable handing decisions over to robots, and when they’re not,” he says. “There is a yawning gap between the percentage of Americans who are willing to turn their lives over to autopilot on a plane, and the percentage of Americans willing to turn their lives over to autopilot on a Tesla.”

To be clear, Edelman is not saying this represents any sort of unfounded fear, just that public discussion of self-driving or self-piloting systems tends to be either/or: Either a self-driving system is seen as 100 percent reliable in all situations, or it’s seen as immature and never to be used.

Second, too little attention has been devoted to the metrics we could put in place to determine when an AI system has earned public trust and when it has not.


AI systems are, Edelman points out, only as reliable as the data that created them. So questions about racial and socioeconomic bias in, for instance, AI hiring algorithms are entirely appropriate.

“Claims about AI-driven hiring are careening ever more quickly forward,” Edelman says. “There seems to be a disconnect. I’m eager to know: Are we in a place where we need to pump the brakes on AI-influenced hiring? Or do we have some models in a technical or legal context that can give us the confidence we lack today that these systems won’t create a second source of bias?”

A third area of the conversation that Edelman says deserves more media and policy attention is the question of which industries AI threatens most. While there has been discussion of the jobs in AI’s crosshairs, less discussed, he says, is the bias inherent in the question itself.

A 2017 study by Yale University and the University of Oxford’s Future of Humanity Institute surveyed AI experts for their predictions about, in part, the gravest threats AI poses to jobs and economic prosperity. Edelman points out that the industry professionals surveyed all tipped their hands a bit in the survey: The very last profession AI researchers said would ever be automated was—surprise, surprise—AI researchers.

“Everyone believes that their job will be the last job to be automated, because it’s too complex for machines to possibly master,” Edelman says.

“It’s time we make sure we’re appropriately challenging this consensus that the only sort of preparation we need to do is for the lowest-wage and lowest-skilled jobs,” he says. “Because it may well be that what we think of as good middle-income jobs, maybe even requiring some education, might be displaced or have major skills within them displaced.”

Last is the belief that AI’s effect on industries will be to eliminate jobs, and only to eliminate jobs, when the evidence, Edelman says, suggests any such threats could be more nuanced.

AI may indeed eliminate some categories of jobs, but it may also spawn hybrid jobs that incorporate the new technology into an old format. As was the case with the rollout of electricity at the turn of the 20th century, new fields of study may spring up too. Electrical engineers weren’t really needed before electricity became something more than a parlor curiosity, after all. Could AI engineering one day be a field unto itself? (With, surely, its own categories of jobs and academic fields of study and professional membership organizations?)

“We should be doing the hard and technical and often unsexy and unglamorous work of designing systems to earn our trust,” he says. “Humanity has been spectacularly unsuccessful in placing technological genies back in bottles. … We’re at the vanguard of a revolution in teaching technology how to play nice with humans. But that’s gonna be a lot of work. Because it’s got a lot to learn.”

This post was updated on 26 November 2019. 


Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

Shira Inbar

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize that they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed; time-to-market pressures almost guarantee that software will contain more bugs than it otherwise would.
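As a small illustration of the kind of hidden flaw that surfaces only in maintenance (an example of ours, not from the article), consider Python's mutable default argument: the shared list is a shortcut that works in testing and misbehaves in production, while a functional-style rewrite that avoids mutation has no such surprise.

```python
# A classic hidden flaw: the default list is created once, at function
# definition time, so state silently leaks between calls.
def append_item_buggy(item, items=[]):
    items.append(item)
    return items

first = append_item_buggy("a")
second = append_item_buggy("b")
# Surprising: "a" from the first call is still in the second result.
assert second == ["a", "b"]

# The functional-style fix: never mutate; build and return a new value.
def append_item(item, items=()):
    return (*items, item)

# Each call is independent -- no shared state, no surprise.
assert append_item("a") == ("a",)
assert append_item("b") == ("b",)
```

The buggy version passes any test that calls it once; only repeated use, as with real users, exposes the shared state, which is exactly the maintenance-phase surprise the article describes.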
