Blog 6 (2021 Term Two)

Zhichen Gu
2 min read · Feb 8, 2022

Inherent Bias in Machine Learning

Human biases are well-documented. Over the past few years, society has started to wrestle with just how much these human biases can make their way into artificial intelligence systems — with harmful results. At a time when many high tech companies are looking to deploy AI systems across their operations, being acutely aware of those risks and working to reduce them is an urgent priority.

An MIT thesis, Gender Shades, examined how gender classification systems from leading tech companies performed across a range of skin types and genders. All systems performed better on male faces than female faces overall, and all systems performed better on lighter-skinned faces than darker-skinned faces overall. Error rates were as high as 35% for darker-skinned women, 12% for darker-skinned men, 7% for lighter-skinned women, and no more than 1% for lighter-skinned men.
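The methodological point behind those numbers is disaggregated evaluation: instead of reporting one overall accuracy, you compute the error rate separately for each subgroup. Here is a minimal sketch of that idea in Python; the column names and toy data are purely illustrative and are not the study's actual results.

```python
import pandas as pd

# Hypothetical per-face results: each row records a face's subgroup and
# whether the system's gender prediction was correct. Illustrative only.
results = pd.DataFrame({
    "subgroup": ["darker_female", "darker_male", "lighter_female",
                 "lighter_male", "darker_female", "lighter_male"],
    "correct":  [False, True, True, True, True, True],
})

# Disaggregated evaluation: one error rate per subgroup,
# rather than a single overall number that hides the gaps.
error_rates = 1 - results.groupby("subgroup")["correct"].mean()
print(error_rates.sort_values(ascending=False))
```

A single aggregate accuracy can look excellent while one subgroup's error rate is an order of magnitude worse than another's, which is exactly what the per-group breakdown exposes.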

Machine learning, of course, is a machine making judgments about new situations. The difference is that a machine has no personality differences the way people do. Even when trained on millions of photos, machine learning is fundamentally about classification. So the only thing that varies is what “past experience” it had, i.e. the dataset we provided.

For example, according to Wikipedia, the U.S. Census Bureau's 2019 estimates put the U.S. population at about 13.4% Black or African American and about 60% white. If a facial recognition model sees a white face 60% of the time during training, it is more likely to guess that an ambiguous face is white.
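A toy sketch of that effect, using a one-dimensional stand-in for a face embedding rather than real images; the class sizes, centres, and variable names are all made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy 1-D "embedding": the majority class (60%-style share) is centred at -1.0,
# the minority class at +1.0, with heavy overlap between the two.
n_major, n_minor = 600, 134
x = np.concatenate([
    rng.normal(-1.0, 2.0, n_major),
    rng.normal(+1.0, 2.0, n_minor),
]).reshape(-1, 1)
y = np.concatenate([np.zeros(n_major), np.ones(n_minor)])

clf = LogisticRegression().fit(x, y)

# An "ambiguous" input exactly midway between the two class centres:
# the fitted model leans toward the majority class, because the imbalanced
# training data shifted its decision boundary.
ambiguous = np.array([[0.0]])
print(clf.predict_proba(ambiguous))
```

The point is not the specific classifier; any model fit on skewed data absorbs that skew as a prior, so ambiguous cases default to whatever it saw most often.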

Digital devices tune out small errors while creating opportunities for large errors.

Today, machine bias can be far more damaging than human bias. At a minimum, it starts with the idea that some company owns the algorithm and is therefore legally allowed to refuse to let anyone see what biases have been built into it. We have seen this with algorithms that set bail in criminal cases and that reward or dismiss teachers based on test scores. And the outliers here are so absurd that it is something of a shock anyone trusts a machine to do anything.

In other words, even when we know machine bias exists, we often aren't allowed to find out where it is biased, nor to challenge the decisions the machine makes.
