A Crowd of Computer Scientists Lined Up for Bill Gates—But It Was Gavin Newsom Who Got Them Buzzing

California governor Gavin Newsom wows a crowd of distinguished computer scientists, educators, and other Silicon Valley luminaries at Stanford's Human-Centered AI symposium


Bill Gates answers questions from teens Stephanie Tena-Meza [center] and Amy Jin [right] at the inaugural symposium of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI). Tena-Meza and Jin both attended Stanford's AI4All summer camp.
Photo: Steve Castillo

Stanford University launched its Institute for Human-Centered AI on Monday. Known as Stanford HAI, the institute is chartered to develop new technologies, guide AI’s impact on the world, wrestle with ethical questions, and help craft useful public policies. The institute intends to raise US $1 billion to put toward this effort.

The university kicked off Stanford HAI (pronounced “high”) with an all-day symposium that laid out some of the issues the Institute aims to address while showcasing Stanford’s current crop of AI researchers.

The most anticipated speaker on the agenda was Microsoft cofounder Bill Gates. Lines of AI researchers, Silicon Valley entrepreneurs, investors, and educators formed early to get through the security screening required to watch his talk in person. And indeed, Gates’s remarks, structured as an interview by two teens who participated in Stanford’s AI4All summer camp program, went over well. Gates was generally optimistic, as you would expect from a keynote speaker at this kind of kickoff event, but he warned the audience not to be overoptimistic about the promise of AI or the speed at which that promise can be fulfilled.

“When I started Microsoft,” he said, “I wrote a note to my parents saying that I may miss a bunch of breakthroughs in AI, and that’s what I give up to start this company, but oh, well…”

It turned out, Gates said, that “for 20 years I didn’t miss much.”

Now, he pointed out, the technology is moving quickly—more quickly than the policy side of the world can keep up with—and he cautioned that Stanford might not always want to continue bragging about its pioneering work in AI unless the world does a good job of managing its effects.

Challenged to name some applications of AI that have been beneficial to society, Gates came up with search engines and…search engines. The new medicines and other impacts on health care that he’s really looking forward to, suggested Gates, will be emerging over the next 10 years.


Looking back at his time at Microsoft, and that company’s impact on society, Gates noted, “We didn’t disrupt the way people get news or community.”

Those concerns weren’t even on the radar. “We were worried about digital divide,” he said. And it turned out that, “at the classroom level, the value of computers in classrooms is virtually nil. So that’s good news. We didn’t accentuate the digital divide; the schools with computers are just as bad as the ones without them.”

A question from the audience turned the topic to the concentration of AI talent—something hard not to think about in a room full of Silicon Valley’s movers and shakers.

That kind of talent concentration, for the most part, is not a bad thing, Gates said, as long as it’s not just a private-sector concentration.

“When something is competitive,” Gates said, “it’s not normal [to] have lots of people at an identical position. [But, for example] in designing nuclear weapons, we didn’t have lots of places in the world at the level of Los Alamos.”

But, he said, while the concentration in general isn’t a big problem, it would not be good if the universities fall behind the private sector. “There is already a question about access to cloud computing power [for universities] that matches what private industry has. Unless we are careful, the private sector will run away not just with the smart people but with the ability to do super, super-complex models.”

California governor Gavin Newsom discusses the promise and perils of artificial intelligence at the inaugural symposium of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI).
Photo: Holly Hernandez

And then, after a few more rounds of research presentations, came California governor Gavin Newsom. Newsom took the podium to give a 5 p.m. speech—a last-minute addition to the agenda—after sitting in the audience listening to at least half of the afternoon’s presentations.

I’m not sure the attendees expected anything more from California’s new governor than a repackaged stump speech; they stayed out of curiosity, politeness, and, perhaps, because the reception wouldn’t begin until Newsom finished.

But Newsom, speaking with no notes, had them at hello. Or, if not exactly hello, close: Newsom kicked off his remarks by bluntly stating what most Silicon Valley denizens think about the state government.

“California,” Newsom said, “is at the cutting edge of technology of 1973,” pointing out that the state’s Department of Motor Vehicles can’t even figure out how to accept credit cards.

“I heard a conversation here talking about the technical illiteracy of people in my profession,” he said. “It is true.”

At that point, the crowd went silent. The usual rustles—of people shifting in their chairs, reaching for water bottles, pulling out cellphones, or opening laptops to check email—stopped cold. Newsom had their attention and kept it. He didn’t seem like an alarmist—although he did say the word “anxiety” over and over. He didn’t seem like a tech hater—he came off as someone who personally loves technology but can understand why it is freaking some people out. His talk focused on technology’s dark side, yet came off as hopeful.

“There is a lot of anxiety out there,” Newsom said, “and fear out there, and it’s real.”

Newsom began rattling off statistics, speaking rapidly without notes. “I read [PricewaterhouseCoopers]: 38 percent of jobs will be automated in the next 15 years. Then I relax because Bain comes out and says just 25 percent of the jobs in the next 20 years. Then Oxford says, that’s an old study, that 47 percent is only in 702 job categories, not all job categories. Then I got James [Manyika], which says it’s just 60 percent of current jobs that will be just 30 percent augmented.”

With that kind of conflicting information flying around, Newsom asked rhetorically, “what do you want me to do as a policymaker?”

Turning from statistics to recent anecdotal evidence about the disruption caused by AI, Newsom said, “I remember how excited we all were when we read about a company named Otto that drove those 46,000 cans of Budweiser in Colorado in a driverless truck. Then I met with Teamsters—they didn’t think it was that exciting.”

“I remember reading about Knightscope,” he continued, referring to the company’s robotic units that replace security guards. “One of the folks who works there said about the technology, ‘no pensions, no workers comp, no complaining.’ Whoa.”

And, Newsom said, consider robotic hamburger-maker Momentum Machines (now called Creator). “It’s going after 3.5 or 3.6 million fast-food workers…and the CEO of that company said our job is not to make employees more efficient, it’s to ‘obviate the need for them.’ Oh.”

“I was excited, finally went to Amazon Go. They have one in San Francisco. Fascinating. But there are 3.4 million cashiers who don’t think it’s that fascinating. What the hell do we do with them?”


How to address the workforce disruption AI is bringing? Retraining is part of it, but it will take bigger investments than those proposed so far, and it’s not just about retraining. “It’s not enough to teach you to code at 60 years old,” he said.

The state of California is not yet prepared to deal with the kinds of changes AI will bring, Newsom said, but he indicated that he’s hopeful that some of the people assembled through this newest Stanford organization will be able to help figure it out.

“We can’t play small ball anymore, we can’t play on the margins. [This] requires an order-of-magnitude change. We need your guidance, your counsel, and your empathy as well. We need to be educated on the world we are living in, on how real this change is, what machine learning is, what deep learning is…this language is new to us.”

“The world we invented,” Newsom said, “is competing against us; we have to invest in maintaining our lead.”

All these changes and new technologies, he indicated, are what is producing anxiety, and a mounting techlash.

“The techlash is real, in ways I shudder to even communicate,” Newsom said. “When I have people who embrace technology talking about data frackers, that’s a pejorative; it’s not just about privacy, or about some version of GDPR, it’s about real animus that’s being created in this winner-take-all world.”

Afterwards, the normal postconference chatter converged onto a single path. “What did you think of the event?” I heard person after person say. “Well-organized, the tech talks were pretty good, a bit arm-wavy, but what can you expect at the beginning of something like this,” were the general answers. And then there would be a pause, followed by someone saying, “But what about Gavin?”

“Gavin. Yes. Wow,” was the typical response, or something along those lines.

“He pulled out statistics that he clearly really understood, hadn’t just memorized, faster than I could, and it’s my field!” said one attendee. “And he had no notes!”

“He listened to the panels, and then synthesized it all into a coherent speech on the fly. How does he do that?” said another.

“The guy is just really, really smart,” I heard over and over, often with a note of surprise.

“He was optimistic,” one attendee told me, “but he also warned us. And he was right to do that.”

Gates’s and Newsom’s full addresses to the HAI symposium are available on Facebook Live.

A correction was made 22 March 2019.
