Algorithm Groups People More Fairly to Reduce AI Bias

Fair clustering algorithms aim to make artificial intelligence more impartial


Say you work at a big company and you’re hiring for a new position. You receive thousands of resumes. To make a first pass, you may turn to artificial intelligence. A common AI task is called clustering, in which an algorithm sorts through a set of items (or people) and groups them into similar clusters.

In the hiring scenario, you might create clusters based on skills and experience and then hire from the strongest cluster. But algorithms can be unfair. Even if you instruct them to ignore factors like gender and ethnicity, those attributes often correlate with the factors you do consider, leading to clusters that don't reflect the demographics of the larger pool. As a result, you could end up hiring only white men.

In recent years, computer scientists have constructed fair clustering algorithms to counteract such biases, and a new one offers several advantages over those that came before. It could improve fair clustering, whether the clusters contain job candidates, customers, sick patients, or potential criminals.

Most fair clustering algorithms try to balance only one "sensitive" attribute (such as gender, if the goal is gender equality) at a time. Those that can handle more require each attribute to be binary: male or female, say. The new method, described in a paper to be presented at the International Conference on Extending Database Technology (EDBT) in Copenhagen in April, handles multiple factors simultaneously, and those factors can be binary, numeric, or categorical.

This approach builds on a simple and common method called k-means clustering. In k-means clustering, you first convert each attribute to a number, so that a data point representing, say, a person can be placed in a multidimensional space. To form, for example, three clusters, you first pick three data points at random and assign every other point to whichever of the three it is nearest. For each cluster, you then compute a new center point that is the average of all the points in the cluster, and reassign every point to the cluster with the nearest center. Repeat until the clusters stop shifting.
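Here is a minimal sketch of that loop in Python, using NumPy and made-up data. It is generic k-means, not the researchers' code:

```python
import numpy as np

def kmeans(points, k, n_iters=20, seed=0):
    """Plain k-means: pick k random points as starting centers,
    assign every point to its nearest center, recompute each
    center as the mean of its cluster, and repeat."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iters):
        # Distance from every point to every center (Euclidean).
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the average of the points assigned to it.
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Example: 200 hypothetical people, each described by 4 numeric attributes.
people = np.random.default_rng(1).normal(size=(200, 4))
labels, centers = kmeans(people, k=3)
```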

The new system, called FairKM, makes a critical change. When assigning a point to this or that cluster, it considers not only which center point it’s closest to, but also which cluster it will skew the least in terms of sensitive attributes. If the overall dataset is half female and has an average age of 40, and gender and age are sensitive attributes, the algorithm tries to make each cluster half female and give it an average age of 40. This is called demographic parity and is one of many (often mutually incompatible) forms of fairness.
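The article doesn't spell out FairKM's actual cost function, but the core idea, trading off distance against the demographic skew an assignment would introduce, can be sketched as a replacement for the assignment step in the k-means loop above. The `lam` weight and the parity penalty below are assumptions for illustration, not the paper's formula:

```python
import numpy as np

def assign_with_fairness(points, centers, sensitive, target, lam=1.0):
    """Greedy assignment step: each point joins the cluster that
    minimizes distance plus a penalty for pushing that cluster's
    average sensitive attributes away from the dataset-wide target.
    `sensitive` is an (n, s) array of encoded sensitive attributes;
    `target` is its column means over the whole dataset."""
    n, k = len(points), len(centers)
    labels = np.full(n, -1)
    sums = np.zeros((k, sensitive.shape[1]))  # running attribute sums
    counts = np.zeros(k)
    for i in range(n):
        best, best_cost = 0, np.inf
        for j in range(k):
            dist = np.linalg.norm(points[i] - centers[j])
            # Cluster's sensitive-attribute average if point i joined it.
            new_avg = (sums[j] + sensitive[i]) / (counts[j] + 1)
            skew = np.linalg.norm(new_avg - target)  # deviation from parity
            cost = dist + lam * skew  # lam is a hypothetical trade-off weight
            if cost < best_cost:
                best, best_cost = j, cost
        labels[i] = best
        sums[best] += sensitive[i]
        counts[best] += 1
    return labels

# Example: gender (0/1) and scaled age as the sensitive attributes.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 4))
sens = np.column_stack([rng.integers(0, 2, 100), rng.normal(40, 10, 100) / 40])
labels = assign_with_fairness(pts, pts[:3], sens, sens.mean(axis=0))
```

Setting `lam` to zero recovers the plain nearest-center assignment; raising it pushes every cluster's makeup toward the overall population's, at some cost in cluster quality.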

The researchers, from the Indian Institute of Technology Madras and Queen's University Belfast, applied FairKM to two datasets. The first contained census data on about 15,000 individuals; the algorithm clustered people based on nine attributes, including education and occupation, while balancing five sensitive ones, including marital status and native country. The second contained physics word problems, which the system clustered by theme while balancing difficulty level.

"One challenge that we sidestep in this research is the question of fairness in data collection itself."

The researchers compared their system, on both cluster quality and fairness, against two benchmarks: plain k-means clustering that simply ignores the sensitive attributes, and a previously published fair clustering algorithm that balances only one sensitive attribute at a time (the two fair algorithms were scored on one sensitive attribute at a time, even though FairKM was balancing all of them simultaneously).

On the two datasets, FairKM could not match the attribute-blind k-means on cluster quality, but it reduced unfairness by 30 to 90 percent (the exact figure depends on the metric used). Compared with the algorithm that balances a single sensitive attribute, the new approach produced results of at least comparable quality on most metrics, sometimes much higher, while reducing unfairness by 50 to 85 percent.

Savitha Abraham, a computer scientist at the Indian Institute of Technology Madras and the paper’s primary author, says they’re working to improve the algorithm to handle data that’s even more skewed. They also want to speed it up. Deepak Padmanabhan, a computer scientist at Queen’s University Belfast and a co-author of the paper, says, “One challenge that we sidestep in this research is the question of fairness in data collection itself,” but they’re working on technical fixes for that, too.

The method in the paper is “a nice direction to explore,” says Suman Bera, a computer scientist at the University of California, Santa Cruz, who was not involved in the work, “and they show it works well in practice as well.” His own work on fair clustering takes a different approach, balancing clusters after k-means has finished forming them. He says his approach is more amenable to theoretical guarantees of fairness, but the two are hard to compare directly. “This is a very evolving field,” he says. “It’s an exciting time to work in this area.”
