Soul in the Machine – Combatting Unconscious Bias

[Image: digital heart, by Alexander Sinn]

This is Part 4 in the Soul in the Machine series.

In the Soul in the Machine series, I am delving into our collective responsibility to ensure that computing systems treat users fairly and responsibly. Specifically, I am raising the ethical questions around current trends such as data privacy regulation and machine learning capabilities.

Right now, in Part 4, we are going to look at building a diverse team capable of implementing ethical AI and ML solutions. How do we ensure that the culture of integrity we are working so hard to support leads to fair results in our machine learning? How can we be conscious about battling unconscious bias in the solutions we develop?

Two heads are better than one

Building up a culture of integrity in an organization allows us to start asking questions about what we’re doing and how we’re doing it, but a critical next step is building a team that is capable of seeing WHEN these questions need to be asked.

Having a collection of ideas from different viewpoints greatly helps you to be a more ethical organization. The old saying that “two heads are better than one” hints at this. We all bring our own unconscious biases to the table. Even when we are trying our best to think of others and how they should be treated, nothing replaces the collective voices of different experiences.

“Unfortunately, as we have seen, the technology sector in general, and even more so in the specific area of AI programming and development, is dominated by a predominantly young, white and male workforce who often are unaware of unconscious biases.”

– Chris Baker, AI and gender bias – who watches the watchers?

Gender bias

For example, in many circles gender is considered to be male or female. Many machine learning models and algorithms are built entirely around this assumption: they try to predict behavior based on whether you are a man or a woman, captured as a single binary feature in the data. What happens when a non-binary individual gets scored by that algorithm?

Nobody was trying to be insensitive when they put that model together; it simply never occurred to anyone to include another option. A non-binary individual on the team would have spotted the gap immediately, leading to better data and a better model.
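
To make the problem concrete, here is a minimal, hypothetical Python sketch. The records, field names, and category list are illustrative only (not from any real system); the point is how a hard-coded binary mapping silently erases anyone it wasn’t designed for, while an explicit, extensible category list at least keeps the information intact.

```python
# Hypothetical records; the values here are illustrative, not real data.
RAW_RECORDS = [
    {"name": "A", "gender": "female"},
    {"name": "B", "gender": "male"},
    {"name": "C", "gender": "non-binary"},
]

def encode_binary(record):
    """Common but flawed approach: gender as a single 0/1 feature."""
    mapping = {"male": 0, "female": 1}
    # Anyone not in the mapping falls back to a default value --
    # their identity is silently erased before the model ever sees it.
    return mapping.get(record["gender"], 0)

def encode_inclusive(record, categories=("male", "female", "non-binary", "unspecified")):
    """One-hot encoding over an explicit, extensible category list."""
    gender = record["gender"] if record["gender"] in categories else "unspecified"
    return [1 if gender == c else 0 for c in categories]

for r in RAW_RECORDS:
    print(r["gender"], "->", encode_binary(r), "vs", encode_inclusive(r))
```

A sketch like this is not a fix for bias on its own, but it shows how early in the pipeline the decision gets baked in, and how cheap it is to question it if someone in the room thinks to ask.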

Jesse Moore offers some theories on why these ethical discussions are not happening in data science:

“Perhaps this is because there is no real need for an ethical discussion in many quantitative fields such as Mathematics, and these are the people that are increasingly entering the AI field. In a quest to optimise their algorithms, get more data, and increase accuracy, many practitioners are developing algorithms that influence behaviour and decisions. The Data Science field should look towards the systems that engineers and doctors have installed to ensure their field is working for the good of humanity. Ethical design needs to become a requirement in any curriculum, and self-taught designers such as myself need to familiarise themselves with these ideas.”

– Jesse Moore, Can a Machine be Racist?

Racism in technology

Another example, though not strictly a machine learning one, is the “racist” soap dispenser at a Facebook office in Africa.

https://twitter.com/nke_ise/status/897756900753891328

The sensors could not detect dark skin. Employees had to hold a piece of white tissue paper under the dispenser just to get soap to wash their hands. With even a SINGLE person with dark skin on the test or development team, this issue never ships.

Diversity is a must-have feature

Diversity also helps you build better solutions for customers in different cultures and regions. A team with a diversity of ideas, but also of cultures, family histories, and geographic locations… all of this produces a solution that can work for every individual.

“Most important will be achieving diversity of backgrounds in teams designing and architecting AI systems, across race, gender, culture, and socioeconomic background. Anyone paying attention knows about the diversity challenges of the tech sector. Given the clear bias problem and AI’s trajectory to touch all parts of our lives, there’s no more critical place in tech to attack the diversity problem than in AI.”

– Will Byrne, Now Is The Time To Act To End Bias In AI

So we need diversity on our teams to help find issues in our process, but we also need diversity in our data. I was told a story by somebody who attended a conference in 2017. She used an application that refused to acknowledge she was a developer and classified her instead as a marketer. The model simply did not have the data diversity to recognize a woman as a developer. A misclassification like that hurts your relationship with that individual, and as the story spreads, it hurts your relationship with others.

You might say, “well, we’re going to get it right most of the time”, but nobody hears the stories about the times you got it right; everyone hears the stories about the times you got it wrong. We need to make sure our predictions cover our whole audience and improve everyone’s experience, not just some.
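
One simple way to start catching these gaps is to measure model performance per group rather than relying on a single overall number. Here is a minimal sketch with made-up evaluation records and group names (all hypothetical, chosen only to mirror the story above); a decent overall accuracy can still hide a segment the model consistently gets wrong.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, true_label, predicted_label).
EVALUATION = [
    ("group_a", "developer", "developer"),
    ("group_a", "developer", "developer"),
    ("group_a", "marketer", "marketer"),
    ("group_b", "developer", "marketer"),   # the kind of miss described above
    ("group_b", "developer", "developer"),
    ("group_b", "marketer", "marketer"),
]

def accuracy_by_group(records):
    """Return {group: accuracy} so a weak segment cannot hide behind the average."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        correct[group] += int(truth == prediction)
    return {g: correct[g] / total[g] for g in total}

overall = sum(t == p for _, t, p in EVALUATION) / len(EVALUATION)
print("overall:", round(overall, 2))
print("by group:", {g: round(a, 2) for g, a in accuracy_by_group(EVALUATION).items()})
```

The same idea extends to whatever groups matter for your audience; the hard part is having a team diverse enough to know which groups to check in the first place.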

Your turn

Team diversity helps us support our culture of integrity and deliver fair solutions. By deliberately investing in the diversity of our teams, we can take conscious action to battle unconscious bias in our solutions.

How will you go out of your way to build a more diverse team?
