AI Bias

In earlier times, hand-drawn sketches were used to represent the human face. It wasn't until the invention of photography that portraits became a widespread means of representation and identification. Even then, clever criminals could slip through by altering their physical appearance.

Alphonse Bertillon (1853-1914), one of the forefathers of forensic science, invented a technique known as bertillonage in 1879, which emerged as a promising, seemingly foolproof standardized biometric identification system.

Bertillonage system

A chart from the Bertillonage system

After bertillonage (an early form of facial identification) flopped, it was completely replaced by fingerprinting at the end of the 19th century. Other identification technologies also drew interest, such as voice, iris, genetic codes and even gait. The 9/11 attacks and the subsequent “War on Terror” vastly expanded and changed mass surveillance tactics, bringing back facial identification as a preferred method of identification.

Authorities expanded public video surveillance and analyzed massive troves of security camera and social media images. Governments also invested heavily in developing new technologies.

Soon enough, companies like Apple and Facebook emerged as the leaders in facial recognition technology.

Facebook DeepFace

In 2014, DeepFace made headlines when its 97 percent accuracy beat the FBI’s Next Generation Identification system, which was only 85 percent accurate.
Facebook DeepFace in action

Real Big Problem – Bias in AI

There are about 150 human biases that affect how we make decisions, and these biases can easily make their way into AI systems. Because such systems are used by businesses and governments alike to make important decisions, a biased system can lead to wrong ones.

Cognitive Bias Codex

AI systems – in particular, machine learning and deep learning systems – take large data sets as input, distill the essential lessons from those data, and deliver conclusions based on them. If the input data are biased – say, consisting mostly of young white males (our ‘garbage in’) – then the AI will recommend mostly young white males (predictably, the ‘garbage out’). This is called “algorithmic bias.”

The GIGO (garbage in, garbage out) principle
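
To make the “garbage in, garbage out” idea concrete, here is a minimal, purely illustrative sketch in Python; the hiring records, group labels and decision rule are all invented for the example. A toy screening model fitted to historically skewed records simply reproduces the skew.

```python
from collections import defaultdict

# Synthetic "historical" hiring records: (group, hired) pairs in which group A
# applicants were hired far more often than group B applicants (garbage in).
history = [("A", 1)] * 450 + [("A", 0)] * 50 + [("B", 1)] * 50 + [("B", 0)] * 450

hire_stats = defaultdict(lambda: [0, 0])  # group -> [number hired, total applicants]
for group, hired in history:
    hire_stats[group][0] += hired
    hire_stats[group][1] += 1

def recommend(group: str) -> bool:
    """A naive 'model' that simply reproduces the historical hire rate per group."""
    hired, total = hire_stats[group]
    return hired / total > 0.5

print(f"Recommend applicant from group A? {recommend('A')}")  # True  (garbage out)
print(f"Recommend applicant from group B? {recommend('B')}")  # False (garbage out)
```

Nothing about group B’s applicants made them less qualified; the model only ever saw a skewed history, so its recommendations inherit that skew.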

MIT Media Lab Project

Joy Buolamwini, who led the study at the MIT Media Lab, found that commercial facial-analysis systems were markedly less accurate on darker-skinned and female faces than on lighter-skinned male faces.

In this way, bias in facial recognition threatens to reinforce the prejudices of society, disproportionately affecting women and minorities, potentially locking them out of the world’s digital infrastructure or inflicting life-changing judgements on them.
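
The kind of audit behind findings like these can be sketched as a disaggregated evaluation: instead of reporting one overall accuracy figure, error rates are broken out per demographic subgroup. The tiny example below uses entirely hypothetical predictions and subgroup labels.

```python
from collections import defaultdict

# (subgroup, true_label, predicted_label) for a hypothetical face-analysis classifier
results = [
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned female",  "female", "male"),    # misclassified
    ("darker-skinned female",  "female", "female"),
    ("darker-skinned female",  "female", "male"),    # misclassified
]

errors = defaultdict(lambda: [0, 0])  # subgroup -> [errors, total]
for subgroup, truth, pred in results:
    errors[subgroup][0] += int(truth != pred)
    errors[subgroup][1] += 1

# Report error rates per subgroup rather than one aggregate number.
for subgroup, (wrong, total) in errors.items():
    print(f"{subgroup}: error rate {wrong / total:.0%} ({wrong}/{total})")
```

An aggregate accuracy of this toy set would look acceptable while completely hiding that the errors are concentrated in one subgroup.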

Amazon – ACLU Test

A test conducted by the American Civil Liberties Union (ACLU) on Amazon’s facial recognition software, Rekognition, found racial bias. Amazon replied that the result was due to a wrong confidence threshold set by the user. Amazon also scrapped its secret AI recruiting tool after it showed bias against women. And Amazon isn’t the only technology giant experiencing pushback from its own employees about how its products are sold to and used by the US government.

Google was criticized after its image recognition algorithm identified African Americans as “gorillas.” Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech.

Other examples include:

  • Image-recognition algorithms, trained on skewed photo sets, that label men standing in a kitchen as women.
  • Job-listing systems that show more high-paying jobs for men than women.
  • Automated criminal-justice systems that assign higher bail or longer jail sentences to black people than white people.

Responsible use of technology

Businesses that rely on AI must act responsibly, or they risk legal consequences and public condemnation. At the same time, the world would be a very different place if we restricted people from buying computers simply because of the possibility of misuse. The same can be said about the everyday technology in our lives.

There are many ways in which technology can help mankind: for example, preventing human trafficking, inhibiting child exploitation, reuniting missing children with their families, building educational apps for children and preventing crime. At the same time it can help businesses by enhancing security and simplifying everyday procedures.

Achilles’ Heel of AI – Bad Data

Once an AI system learns something from a particular dataset, it tries to generalise that understanding to new situations and scenarios. As a result, systems built using data from one region perform less accurately in other regions; for example, an AI system developed using data from Western countries will not perform on par in Asian countries.
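
A hedged sketch of that failure mode, using entirely synthetic numbers: a simple threshold classifier fitted to data from one “region” loses accuracy when the other region’s data follows a shifted distribution.

```python
import random

random.seed(42)

def make_region(mean_pos, mean_neg, n=500):
    """Synthetic 1-D feature with region-specific class means."""
    pos = [(random.gauss(mean_pos, 1.0), 1) for _ in range(n)]
    neg = [(random.gauss(mean_neg, 1.0), 0) for _ in range(n)]
    return pos + neg

train = make_region(mean_pos=2.0, mean_neg=0.0)    # "region A" training data
shifted = make_region(mean_pos=4.0, mean_neg=2.0)  # "region B": distribution has shifted

# Fit the simplest possible model: a threshold halfway between the class means.
pos_mean = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
neg_mean = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
threshold = (pos_mean + neg_mean) / 2

def accuracy(data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

print(f"Accuracy on region A (same distribution):    {accuracy(make_region(2.0, 0.0)):.0%}")
print(f"Accuracy on region B (shifted distribution): {accuracy(shifted):.0%}")
```

The model itself has not changed between the two evaluations; only the data distribution has, and that alone is enough to erase much of its accuracy.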

The AI Hierarchy of Needs

Think of AI as the top of a pyramid of needs. Yes, self-actualization (AI) is great, but you first need food, water and shelter (data literacy, collection and infrastructure).

AI Hierarchy
Data Is The Foundation For Artificial Intelligence And Machine Learning

Let’s fight the Bias together. Join The Good!

We are planning to create a dataset based on Diversity, Depth and Deviations (a “3D” dataset) and then check the fairness of that dataset.

This dataset can then be used to train AI systems, producing trusted, inclusive systems that deliver fairer decisions.
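
As one possible illustration (the attribute names, groups and tolerance below are hypothetical placeholders, not the actual 3D dataset), a fairness check on a dataset could start by comparing how each group is represented against a target share.

```python
from collections import Counter

def representation_report(records, attribute, tolerance=0.10):
    """Flag groups whose share of the dataset deviates from an equal split."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    target = 1.0 / len(counts)  # naive target: equal representation per group
    for group, n in counts.items():
        share = n / total
        flag = "OK" if abs(share - target) <= tolerance else "UNDER/OVER-REPRESENTED"
        print(f"{attribute}={group}: {share:.0%} of records (target {target:.0%}) -> {flag}")

# Hypothetical sample records
records = [
    {"gender": "female", "region": "Asia"},
    {"gender": "male",   "region": "Europe"},
    {"gender": "male",   "region": "Europe"},
    {"gender": "male",   "region": "North America"},
]
representation_report(records, "gender")
representation_report(records, "region")
```

Representation balance is only one dimension of fairness, but it is a cheap first check before deeper audits of labels and outcomes.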

How can you help?

Share this message with your network

Subscribe to our upcoming campaigns where we will be collaborating with socially responsible organizations.

For more information, please reach out to contact@aindralabs.in