Machines, like humans, are guided by data and experience. If that data or experience is flawed or skewed, a biased decision can result, whether it is made by a human or a machine.

 

An algorithm is a set of instructions designed to reach a specific objective; it is used to perform calculations, data processing, automated reasoning, and other tasks. With an algorithm, we can search for common patterns in historical data and filter the results according to pre-defined objectives.
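As a rough illustration of that idea, here is a minimal Python sketch, with entirely made-up records and a made-up "objective" threshold, that searches historical records for common patterns and keeps only those meeting a pre-defined objective:

```python
from collections import Counter

# Toy historical records (profile, outcome) -- entirely made up for illustration.
history = [
    ("engineer", "hired"), ("engineer", "hired"), ("teacher", "rejected"),
    ("engineer", "rejected"), ("teacher", "hired"), ("engineer", "hired"),
]

# Step 1: search for common patterns -- how often each profile was hired in the past.
totals = Counter(profile for profile, _ in history)
hires = Counter(profile for profile, outcome in history if outcome == "hired")
hire_rate = {profile: hires[profile] / totals[profile] for profile in totals}

# Step 2: filter the results by a pre-defined objective (here, a made-up 50% threshold).
OBJECTIVE = 0.5
preferred = [profile for profile, rate in hire_rate.items() if rate > OBJECTIVE]

print(hire_rate)  # {'engineer': 0.75, 'teacher': 0.5}
print(preferred)  # ['engineer'] -- the "pattern" extracted from past decisions
```

The pattern is nothing more than a statistic of the past: if the historical decisions were biased, the preference the algorithm extracts will be biased too.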

 

Say an algorithm is optimized for maximum overall accuracy across the entire population in the test dataset, but that population includes a minority group that makes up, for example, less than 1% of it. An algorithm trained to optimize for the general population can effectively ignore that minority subgroup.
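To make the arithmetic concrete, here is a minimal, purely hypothetical sketch in Python: a "model" that always predicts the majority outcome scores 99% overall accuracy while getting every member of a 1% minority subgroup wrong (all numbers are invented for illustration):

```python
# Hypothetical population: 990 majority members (label 0) and 10 minority members (label 1).
labels = [0] * 990 + [1] * 10
groups = ["majority"] * 990 + ["minority"] * 10

# A "model" optimized purely for overall accuracy can simply predict the majority label.
predictions = [0] * 1000

def accuracy(preds, labs):
    return sum(p == l for p, l in zip(preds, labs)) / len(labs)

overall = accuracy(predictions, labels)
minority_only = [(p, l) for p, l, g in zip(predictions, labels, groups) if g == "minority"]
minority = accuracy([p for p, _ in minority_only], [l for _, l in minority_only])

print(f"overall accuracy:  {overall:.0%}")   # 99%
print(f"minority accuracy: {minority:.0%}")  # 0%
```

The headline number looks excellent, yet the model is useless for the subgroup.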

Does that qualify as efficiency or discrimination?

Is a single algorithm optimized for the majority of the population suitable for the entire population, or do we need additional algorithms optimized to meet the objectives of minority groups?

 

These are questions worth asking.

 

Here, however, we will focus on bias in training data sets :)

 

In 2018, MIT computer scientist and researcher Joy Buolamwini found that facial recognition software marketed by tech giants Microsoft, IBM, and Amazon to companies across the world could, among other failings, identify lighter-skinned men but not darker-skinned women. However, darker skin alone may not be fully responsible for the misclassification. Instead, darker skin may be highly correlated with facial geometries or gender display norms that were less represented in the training data of the evaluated classifiers.

 

“Imagine a scenario in which self-driving cars fail to recognize people of color as people and are thus more likely to hit them because the computers were trained on data sets of photos in which such people were absent or underrepresented,” Joy Buolamwini told Fortune in an interview.

 

Joy Buolamwini’s outstanding 15-minute video analyses the progress made by several large tech companies on facial recognition software between 2018 and 2020: https://www.youtube.com/watch?v=rjesnx_Pp5w


In 2018, Reuters reported that Amazon had developed an experimental hiring tool to help rank job candidates. By learning from its past preferences, Amazon hoped that the resume-scanning tool would be able to efficiently identify qualified applicants by comparing their applications to previous hires. The system quickly began to downgrade resumes from candidates who attended all-women’s colleges, along with any resumes that included the word “women’s”. After uncovering this bias, Amazon engineers tried to fix the problem by directing the system to treat these terms in a “neutral” manner. The company eventually abandoned the tool when it was unable to ensure that the algorithm would not be biased against women.

Gender-based discrimination was built too deeply into the system, and into Amazon’s past hiring practices, to be uprooted using a purely technical approach.

 

This raises concerns because the widespread use of AI runs the risk of replicating and even amplifying human biases, particularly those affecting protected groups.  

 

Do not take for granted technologies that make decisions based on algorithms, such as online recruitment tools, online ads, and facial recognition technology.

 

The origins of bias mostly come from: 

  • The data used to train the algorithm, which embed past human biases as well as historical and social inequalities.

  • A lack of diversity among the programmers designing the training sample, which can lead to under- or over-representation of a particular group or of specific physical attributes.

  • Incomplete or unrepresentative training data, or, at the opposite extreme, too much data for certain groups (an over-representation), which can skew the decision toward a particular result; a quick representation check is sketched just after this list.
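
One simple sanity check for that last point is to measure how each group is represented before training anything. Below is a minimal sketch, assuming a tabular dataset loaded with pandas and a hypothetical skin_type column (both the column name and the numbers are assumptions for illustration):

```python
import pandas as pd

# Hypothetical training set; in practice, load your own data instead.
train = pd.DataFrame({
    "skin_type": ["lighter"] * 940 + ["darker"] * 60,
    "label": ["face"] * 1000,
})

# Share of each group in the training data.
representation = train["skin_type"].value_counts(normalize=True)
print(representation)
# lighter    0.94
# darker     0.06

# Flag groups falling below an arbitrary, illustrative representation threshold.
THRESHOLD = 0.20
underrepresented = representation[representation < THRESHOLD].index.tolist()
print("Underrepresented groups:", underrepresented)  # ['darker']
```

Such a check does not remove bias on its own, but it makes under-representation visible before the model is trained rather than after it fails in production.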

 

Who codes matters — Identifying bias & screening for bias

How we code matters — Curating inclusively

Why we code matters — Developing conscientiously


 

FOR YOUR ACTION

Go to The Algorithmic Social League website to:

  • Explore the notion of bias in AI

  • Host workshops and trainings for researcher and developer communities

  • Report AI harms and biases

  • Request an algorithmic audit

  • De-bias data sets and request datasets for research

ABOUT US

We publish regular summaries of studies on unconscious and conscious stereotypes, as well as calls to action. Our objective is to raise awareness and help people reflect on the impact of stereotypes on their decisions. The studies are issued by prestigious universities or renowned experts. Our summaries stick to the facts and are short, fun, and colorful.

CONTACT