
Algorithms can reinforce human biases

Author: WANG YOURAN    Source: Chinese Social Sciences Today    2020-06-02

Today, public and private sectors are increasingly turning to algorithms to automate decision-making processes. Many experts welcome algorithms as an antidote to long-standing human biases. However, cases where automated algorithms scale up existing human biases are not uncommon.

Each step in the process of solving a problem with data can have a disproportionately adverse impact on particular groups, said Solon Barocas, an assistant professor in the Department of Information Science at Cornell University.

Historical human biases and incomplete or unrepresentative data are two key causes of algorithmic bias, noted Nicol Turner Lee, a fellow in the Governance Program’s Center for Technology Innovation at the Brookings Institution, and Paul Resnick, a professor of information at the University of Michigan.

Historical human biases reflect long-standing prejudices against certain groups, and computer models trained on data shaped by those prejudices can reproduce and amplify them. This is what happened with a recruiting algorithm once used by Amazon. The algorithm was taught to recognize word patterns in resumes, and the data was benchmarked against Amazon’s predominantly male engineering department to determine an applicant’s fit. As a result, resumes that included the word “women’s” were downgraded.
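
The mechanism is easy to demonstrate in miniature. The Python sketch below is a toy illustration, not Amazon’s actual system: the resumes and hiring labels are invented, and the labels mirror a male-dominated department rather than any measure of ability. A simple text classifier trained on them ends up penalizing the token “women.”

    # A toy illustration, not Amazon's actual system: a resume screener trained
    # on historically biased hiring labels learns to penalize a gendered token.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented resumes and past decisions; labels reflect who was hired in a
    # male-dominated department, not applicant ability.
    resumes = [
        "captain of men's chess club, software engineering intern",
        "software engineering intern, hackathon winner",
        "captain of women's chess club, software engineering intern",
        "women's coding society lead, software engineering intern",
    ]
    hired = [1, 1, 0, 0]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    # The token "women" receives a negative weight, so new resumes that
    # contain it are scored lower.
    weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
    print(round(weights["women"], 2), round(weights["men"], 2))

Nothing in the code mentions gender explicitly; the bias arrives entirely through the historical labels.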

When the data used to train an algorithm is less representative of some groups, decisions based on that algorithm can be disadvantageous to those groups. Joy Buolamwini, a computer scientist at the MIT Media Lab, found that some facial recognition software failed to identify darker-skinned women as female. The algorithms powering these software systems picked up on certain facial features as ways to distinguish male and female faces, and the lack of diversity in the training data led them to misidentify darker-skinned women as men. Conversely, over-representation of a group in the training data can also skew results.
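
One common diagnostic is simply to disaggregate error rates by group. The sketch below uses invented evaluation records, not Buolamwini’s benchmark data, to show how a gap appears once results are broken down.

    # Invented evaluation records: (group, true label, predicted label).
    from collections import defaultdict

    results = [
        ("lighter-skinned male", "male", "male"),
        ("lighter-skinned female", "female", "female"),
        ("darker-skinned male", "male", "male"),
        ("darker-skinned female", "female", "male"),
        ("darker-skinned female", "female", "male"),
        ("darker-skinned female", "female", "female"),
    ]

    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, prediction in results:
        totals[group] += 1
        errors[group] += int(truth != prediction)

    # Per-group error rates expose a disparity that overall accuracy would hide.
    for group, total in totals.items():
        print(f"{group}: error rate {errors[group] / total:.0%}")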

Algorithmic bias is more likely to bring about large-scale negative consequences than human bias, warned Kartik Hosanagar, a professor at the University of Pennsylvania. Many people see algorithms as rational and infallible, and thus fail to address and curb their so-called “bad behavior” quickly enough. As such, “elections are swayed, markets are manipulated, individuals are hurt due to our own attitudes and actions towards algorithms.” Moreover, while a bad judge can affect the lives of thousands of people, a bad algorithm can affect the lives of billions.

In psychology and genetics, human behavior is often attributed to the genes we are born with and to environmental influences, that is, nature and nurture, Hosanagar said. Algorithms are no different. The behavior of early computer algorithms was fully programmed by their human creators (the influence of “nature”). Modern algorithms learn much of their logic from real-world data (the influence of “nurture”). Many of the biases and much of the unpredictable behavior of modern algorithms are picked up from their training data.

Scientists are now working on improving the fairness of machine learning algorithms, and there is a large body of work to detect and mitigate algorithmic bias, said Christo Wilson, an assistant professor of computer science at Northeastern University. Last year, some American researchers published a comparative study of fairness-enhancing interventions in machine learning. Another approach is to open the “black box” of machine learning systems to interrogate what they have learned and why they make decisions.
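
Fairness-enhancing interventions come in several families. One simple post-processing variety adjusts decision thresholds per group so that positive-decision rates line up; the sketch below is a minimal illustration with invented scores, not a reproduction of the interventions compared in that study.

    # A minimal post-processing intervention with invented scores: per-group
    # thresholds are chosen so each group's positive-decision rate is the same.
    import numpy as np

    scores = np.array([0.9, 0.7, 0.6, 0.4, 0.8, 0.5, 0.35, 0.2])
    groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
    target_rate = 0.5  # accept half of each group

    # The score cutoff that accepts roughly target_rate of each group.
    thresholds = {
        g: np.quantile(scores[groups == g], 1 - target_rate)
        for g in np.unique(groups)
    }
    decisions = np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

    for g in np.unique(groups):
        print(g, round(float(thresholds[g]), 2), decisions[groups == g].mean())

Whether equal acceptance rates are the right target is itself a value judgment, which echoes Wilson’s point that humans must decide what “fairness” means.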

Recent work has demonstrated that it is possible to achieve fairness with no loss in accuracy. Human judgment is important for ensuring the fairness of algorithms, as humans will need to decide what “fairness” means, Wilson said.

Hosanagar proposed two solutions to enhance algorithmic fairness: greater transparency about the data and logic behind algorithmic decisions, and formal audits or testing of algorithms whose widespread application could have social consequences.
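
An audit of this kind often starts with a basic check such as the disparate impact ratio: each group’s positive-decision rate divided by that of the most favored group. The sketch below uses invented rates; the 0.8 cutoff echoes the “four-fifths rule” from US employment guidelines and is only one possible standard.

    # An illustrative audit check with invented numbers: the disparate impact
    # ratio compares each group's positive-decision rate with that of the most
    # favored group; ratios below 0.8 are flagged for closer review.
    selection_rates = {"group_a": 0.50, "group_b": 0.30}

    favored = max(selection_rates.values())
    for group, rate in selection_rates.items():
        ratio = rate / favored
        flag = "flag for review" if ratio < 0.8 else "ok"
        print(group, round(ratio, 2), flag)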

An “algorithmic bill of rights” could help regulate algorithm-based decisions, Hosanagar said. It should feature four main pillars: a description of the training data and details as to how that data was collected; an explanation regarding the procedures used by the algorithm to make decisions, expressed in terms simple enough for average people to understand; control over the way algorithms work, which includes a feedback loop between users and the algorithm; and the responsibility to be aware of the unanticipated consequences of automated decision-making.

According to Wilson, an automated decision-making algorithm that is carefully designed and thoroughly audited could potentially be less biased than human beings. Algorithms should be monitored over time to ensure that they continue to perform correctly even as social conditions change. Without preventive measures, algorithms may exacerbate existing human biases.
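
In practice, such monitoring can be as simple as recomputing a per-group metric on each new window of decisions and raising an alert when the gap between groups widens. The sketch below uses invented monthly logs and an arbitrary alert threshold.

    # Invented monthly decision logs: (month, group, prediction_was_correct).
    from collections import defaultdict

    log = [
        ("2020-01", "a", True), ("2020-01", "b", True),
        ("2020-02", "a", True), ("2020-02", "b", False),
        ("2020-03", "a", True), ("2020-03", "b", False),
    ]

    by_month = defaultdict(lambda: defaultdict(list))
    for month, group, correct in log:
        by_month[month][group].append(correct)

    # Recompute per-group accuracy each month and alert when the gap widens.
    for month in sorted(by_month):
        rates = {g: sum(v) / len(v) for g, v in by_month[month].items()}
        gap = max(rates.values()) - min(rates.values())
        print(month, rates, "ALERT: investigate" if gap > 0.2 else "ok")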

Editor: Yu Hui
