Computer Vision Leader Fei-Fei Li on Why AI Needs Diversity

At the White House Frontiers Conference, Stanford's Li details three crucial reasons to increase diversity in AI

Stanford computer science professor Fei-Fei Li makes the case for an emphasis on ethnic and gender diversity in artificial intelligence
Photo: L.A. Cicero/Stanford University

As Fei-Fei Li sees it, this is a historic moment for civilization, fueled by an artificial intelligence revolution. “I call everything leading up to the second decade of the twenty-first century AI in vitro,” the Stanford computer science professor told the audience at last week’s White House Frontiers Conference. Until now, the technology has been understood, formulated, and tested mainly in labs. “At this point we’re going AI in vivo,” she said. “AI is going to be deployed in society on every aspect of industrial and personal needs.”

It’s already around us in the form of Google searches, voice recognition, and autonomous vehicles. That makes this a critical time to talk about diversity.

The lack of diversity in AI reflects the state of computer science and the tech industry in general. In the United States, for example, women and ethnic minorities such as African-Americans and Latinos are especially underrepresented. Just 18 percent of computer science graduates today are women, down from a peak of 37 percent in 1984, according to the American Association of University Women. The problem is worse in AI. At the Recode conference this summer, Margaret Mitchell, the only female researcher in Microsoft’s cognition group, called it “a sea of dudes.”

But the need for diversity in AI is more than just a moral issue. There are three reasons to think deeply about increasing it, Stanford’s Li says.

The first is simply practical economics. The current technical labor force is not large enough to handle the work that needs to be done in computing and AI. There aren’t many specific numbers on diversity in AI, but anecdotal evidence suggests they would be dismal. Take, for instance, Stanford’s computer science department. AI has the smallest percentage of women undergraduates, at least compared with tracks like graphics or human-computer interaction, Li points out. Worldwide, the contribution of automation and machine learning to GDP is expected to grow. So it’s important that more people study AI, and that they come from diverse backgrounds. “No matter what data we look at today, whether it’s from universities or companies, we lack diversity,” she says.

Another reason diversity should be emphasized is its impact on innovation and creativity. Research repeatedly shows that when people work in diverse groups, they come up with more ingenious solutions. AI will touch many of our most critical problems, from urban sustainability and energy to healthcare and the needs of aging populations. “We need a diverse group of people to think about this,” she says.

Last, but certainly not least, is justice and fairness. To teach computers how to identify images or recognize voices, you need massive data sets. Those data sets are made by computer scientists. And if you only have seas of (mostly white) dudes making those data sets, biases and unfairness inadvertently creep in. “Just type the word grandma in your favorite search engine and you’ll see the bias in pictures returned,” Li says. “You’ll see the race bias. If we’re not aware of the bias of data, we’re going to start creating really problematic issues.”
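Li doesn’t prescribe a technical fix in her talk, but the kind of skew she describes can at least be surfaced by auditing who and what a labeled data set actually contains. The sketch below is purely illustrative and not from Li’s work: it assumes a hypothetical metadata file, labels.csv, with label and demographic_group columns, and simply tallies how each group is represented for each label, which is one rough way to spot the imbalance she warns about before a model is ever trained.

```python
# Illustrative only: tally demographic representation per label in a
# hypothetical annotated data set. The file name and column names
# ("labels.csv", "label", "demographic_group") are assumptions.
import csv
from collections import Counter, defaultdict


def audit_representation(path="labels.csv"):
    # label -> Counter of demographic groups appearing under that label
    counts = defaultdict(Counter)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["label"]][row["demographic_group"]] += 1

    # Print each label with the share contributed by each group,
    # largest first, so heavy skews stand out immediately.
    for label, groups in counts.items():
        total = sum(groups.values())
        shares = ", ".join(
            f"{group}: {n / total:.1%}" for group, n in groups.most_common()
        )
        print(f"{label} (n={total}): {shares}")


if __name__ == "__main__":
    audit_representation()
```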

What can we do about this? Bring a humanistic mission statement to the field of AI, Li says. “AI is fundamentally an applied technology that’s going to serve our society,” she says. “Humanistic AI not only raises the awareness of the importance of the technology, it’s a really important way to attract diverse students, technologists and innovators to participate.”
