AI’s Threats to Jobs and Human Happiness Are Real

But short-term job chaos will give way to long-term prosperity, says AI expert Kai-Fu Lee


Renowned computer scientist and AI expert Kai-Fu Lee foresees likely disruption over the coming 15 to 20 years, as "smart" systems create jobs in fields that AI-displaced workers may not be trained to handle.

Sinovation Ventures

There’s a movement afoot to counter the dystopian and apocalyptic narratives of artificial intelligence. Some people in the field are concerned that the frequent talk of AI as an existential risk to humanity is poisoning the public against the technology and are deliberately setting out more hopeful narratives. One such effort is a book that came out last fall called AI 2041: Ten Visions for Our Future.

The book is cowritten by Kai-Fu Lee, an AI expert who leads the venture capital firm Sinovation Ventures, and Chen Qiufan, a science fiction author known for his novel Waste Tide. It has an interesting format. Each chapter starts with a science fiction story depicting some aspect of AI in society in the year 2041 (such as deepfakes, self-driving cars, and AI-enhanced education), which is followed by an analysis section by Lee that talks about the technology in question and the trends today that may lead to that envisioned future. It’s not a utopian vision, but the stories generally show humanity grappling productively with the issues raised by ever-advancing AI.

IEEE Spectrum spoke to Lee about the book, focusing on the last few chapters, which take on the big issues of job displacement, the need for new economic models, and the search for meaning and happiness in an age of abundance. Lee argues that technologists need to give serious thought to such societal impacts, instead of thinking only about the technology.


The science fiction stories are set in 2041, by which time you expect AI to have already caused a lot of disruption to the job market. What types of jobs do you think will be displaced by then?

Kai-Fu Lee: Contrary to what a lot of people think, AI is actually just a piece of software that does routine work extremely well. So the jobs that will be the most challenged will be those that are routine and repetitive—and that includes both blue-collar and white-collar work. So obviously jobs like assembly line workers and people who operate the same equipment over and over again. And in terms of white-collar work, many entry-level jobs in accounting, paralegal, and other jobs where you’re repetitively moving data from one place to another, and jobs where you’re routinely dealing with people, such as customer-service jobs. Those are going to be the most challenged. If we add these up, it will be a very substantial portion of all jobs, even without major breakthroughs in AI—on the order of 40 to 50 percent.

The jobs that are most secure are those that require imagination, creativity, or empathy. And until AI gets good enough, there will also be craftsman jobs that require dexterity and a high level of hand-eye coordination. Those jobs will be secure for a while, but AI will improve and eventually take those over as well.

How do you imagine this trend is changing the engineering profession?

Lee: I think engineering is largely cerebral and somewhat creative work that requires analytical skills and deep understanding of problems. And those are generally hard for AI.

But if you’re a software engineer and most of your job is looking for pieces of code and copy-pasting them together—those jobs are in danger. And if you’re doing routine testing of software, those jobs are in danger too. If you’re writing a piece of code and it’s original creative work, but you know that this kind of code has been done before and can be done again, those jobs will gradually be challenged as well. For people in the engineering profession, this will push us towards more of an analytical architect role where we deeply understand the problems that are being solved, ideally problems that have complex characteristics and measurements. The ideal combination in most professions will be a human that has unique human capabilities managing a bunch of AI that do the routine parts.

It reminds me of the Ph.D. thesis of Charles Simonyi, the person who created Microsoft Word. He did an experiment to see what would happen if you have a really smart architect who can divvy up the job of writing a piece of code into well-contained modules that are easy to understand and well defined, and then outsource each module to an average engineer. Will the resulting product be good? It was good. We’re talking about the same thing, except we’re not outsourcing to the average engineer, who will have been replaced by AI. That superengineer will be able to delegate the work to a bunch of AIs, resulting in a creative symbiosis. But there won’t be very many of these architect jobs.

In the book, you say that an entirely new social contract is needed. One problem is that there will be fewer entry-level jobs, but there still needs to be a way for people to gain skills. Can you imagine a solution for engineering?

Lee: Let’s say someone is talented and could become an architect, but that person just graduated from college and isn’t there yet. If they apply for an entry-level programming job and are competing with AI, they might lose out to the AI. That would be doubly bad: it would hurt the person’s self-confidence, and society would lose the talent of a future architect, which takes years of experience to build up.

But imagine if the company says, “We’re going to employ you anyway, even though you’re not as good as AI. We’re going to give you tasks and we’ll have AI work alongside you and correct your errors, and you can learn from it and improve.” If a thousand people go through this entry-level practical training, maybe a hundred emerge to be really good and be on their way to becoming architects. Maybe the other 900 will take longer and struggle, or maybe they’ll be content to continue doing the work, passing the time while still having a chance to improve. Maybe some will say, “Hey, this is really not for me, I’m not reaching the architect level. I’m going to go become a photographer or artist or whatever.”


Why do you think that this round of automation is different from those that came before in history, when jobs were both destroyed and created by automation?

Lee: First of all, I do think AI will both destroy and create jobs. I just can’t enumerate which jobs and how many. I tend to be an optimist and believe in the wisdom and the will of the human race. Eventually, we’ll figure out a bunch of new jobs. Maybe those jobs don’t exist today and have to be invented; maybe some of those jobs will be service jobs, human-connection jobs. I would say that every technology so far has ended up making society better, and there has never been a problem of absorbing the job losses. If you look at a 30-year horizon, I’m optimistic that there will not be a net job loss, but possibly a net gain, or possibly break-even. And we can always consider a four-day work week and things like that. So long-term, I’m optimistic.

Now to answer your question directly: short-term, I am worried. And the reason is that none of the previous technology revolutions have tried explicitly to replace people. No matter how people think about it, every AI algorithm is trying to display intelligence and therefore be able to do what people do. Maybe not an entire job, but some task. So naturally there will be a short-term drop when automation and AI start to work well.


Autonomous vehicles are an explicit effort to replace drivers. A lot of people in the industry will say, “Oh no, we need a backup driver in the truck to make it safer, so we won’t displace jobs.” Or they’ll say that when we install robots in the factory, the factory workers are elevated to a higher-level job. But I think they’re just sugarcoating the reality.

Let’s say over a period of 20 years, with the advent of AI, we lose x number of jobs, and we also gain x jobs; let’s say the loss and gain are the same. The outcome is not that the society remains in equilibrium, because the jobs being lost are the most routine and unskilled. And the jobs being created are much more likely to be skilled and complex jobs that require much more training. If you expect an assembly-line worker to become a robot-repair person, it isn’t going to be so easy. That’s why I think the next 15 years or 20 years will be very chaotic. We need a lot of wisdom and long-term vision and decisiveness to overcome these problems.



There are some interesting experiments going on with universal basic income (UBI), like Sam Altman’s ambitious idea for Worldcoin. But from the book, it seems like you don’t think that UBI is the answer. Is that correct?

Lee: UBI may be necessary, but it’s definitely not sufficient. We’re going to be in a world of very serious wealth inequality, and the people losing their jobs won’t have the experience or the education to get the right kinds of training. Unless we subsidize and help these people along, the inequality will be exacerbated. So how do we make them whole? One way is to make sure they don’t have to worry about subsistence. That’s where I think universal basic income comes into play, by making sure nobody goes without food, shelter, or water. I think that level of universal basic income is good.

As I mentioned before, the people who are most devastated, people who don’t have skills, are going to need a lot of help. But that help isn’t just money. If you just give people money, a wonderful apartment, really great food, Internet, games, and even an extra allowance to spend, they are much more likely to say, “Well, I’ll just stay home and play games. I’ll go into the metaverse.” They may even turn to alcohol or substance abuse, because those are the easiest things to do.

So what else do they need?

Lee: Imagine the mind-set of a person whose job was taken away by automation. That person is bound to be thinking, “Wow, everything I know how to do, AI can do. Everything I learn, AI will be able to do. So why should I take the universal basic income and apply that to learning?” And even if that person does decide to get training, how can they know what to get training on? Imagine I’m an assembly-line worker and I lost my job. I might think, truck driver, that’s a highly paid job. I’ll do that. But then in five years those jobs are going to be gone. A robot-repair job would be a much more sustainable choice than truck driving, but the person who just lost a job doesn’t know it.

So the point I make in the book is: To help people stay gainfully employed and have hope for themselves, it’s important that they get guidance on what jobs they can do that will, first of all, give people a sense of contribution, because then at least we eliminate the possibility of social unrest. Second, that job should be interesting, so the person wants to do it. Third, if possible, that job should have economic value.

Why do you put economic value last in that list?

Lee: Most people think jobs need to have economic value. If you’re making cars, the cars are sold. If you’re writing books, the books are sold. If you just volunteer and take care of old people, you’re not creating economic value. If we stay in that mentality, that would be very unfortunate, because we may very well be in a time when what is truly valuable to society is people taking care of each other. That might be the glue that keeps society going.

More thought should go into how to deal with the likely anxiety and depression and the sense of loss that people will have when their jobs are taken and they don’t know what to do. What they need is not just a bunch of money, but a combination of subsistence, training, and help finding a new beginning. Who cares if they create economic value? Because as the last chapter states, I believe we’re going to reach the era of plenitude. We’re not going to be in a situation of incredible scarcity where everyone’s fighting each other in a zero-sum game. So we should not be obsessed with making sure everyone contributes economically, but making sure that people feel good about themselves.


I want to talk about the last chapter. It’s a very optimistic vision of plenitude and abundance. I’ve been thinking of scenarios from climate-change models that predict devastating physical impacts by 2041, with millions of refugees on the move. I have trouble harmonizing these two different ideas of the future. Did you think about climate change when you were working on that chapter?

Lee: Well, there are others who have written about the worst-case scenario. I would say what we wrote is a good-case scenario—I don’t think it’s the best case because there are still challenges and frustrations and things that are imperfect. I tried to target 80 percent good in the book. I think that’s the kind of optimism we need to counterbalance the dystopian narratives that are more prevalent.

The worst case for climate is horrible, but I see a few strong reasons for optimism. One is that green energy is quickly becoming economical. In the past, why didn’t people go for green energy? Because fossil fuels were cheaper and more convenient, so people gained for themselves and hurt the environment. The key thing that will turn it around is that, first, governments need to have catalyst policies, such as subsidizing electric vehicles. That is the important first step. And then green energy needs to become economical on its own. Now we’re at the point where, for example, solar plus lithium batteries, not even the most advanced batteries, are already becoming cheaper than fossil fuels. So there are reasons for optimism.

I liked that the book also got into philosophical questions like: What is happiness in the era of AI? Why did you want to get into that more abstract realm?

Lee: I think we need to slowly move away from obsession with money. Money as a metric of happiness and success is going to become more and more outdated, because we’re entering a world where there’s much greater plenitude. But what is the right metric? What does it really mean for us to be happy? We now know that having more money isn’t the answer, but what is the right answer?

AI has been used so far mainly to help large Internet companies make money. They use AI to show people videos in such a way that the company makes the most money. That’s what has led us to the current social media and streaming video that many people are unhappy about. But is there a way for AI to show people video and content so that they’re happier or more intelligent or more well liked? AI is a great tool, and it’s such a pity that it’s being used by large Internet companies that say, “How do we show people stuff so we make more money?” If we could have some definitions of happiness, well-likedness, intelligence, and knowledgeableness of individuals, then we could turn AI into a tool of education and betterment for each of us individually, in ways that are meaningful to us. This can be delivered using the same technology that is doing mostly monetization for large companies today.


