This is Part 2 in the Soul in the Machine series.
In the Soul in the Machine series, I am delving into our collective responsibility to ensure that computing systems treat users fairly and responsibly. Specifically, I am raising the ethical questions around current trends such as data privacy regulation and machine learning.
Here in Part 2, we will examine the challenge of introducing a culture of integrity in the digital space, and how we can take responsibility for ensuring our teams are ready to meet it, particularly when dealing with machine learning and artificial intelligence.
The TL;DR version
- Just because we CAN does not mean we SHOULD. Question the implications of innovation. This does not mean stopping innovation, but being deliberate in our digital continuous improvement.
- YOU are responsible. “Following orders” is not good enough. It is everyone’s responsibility to stand up and protect others from harm.
- Build a safety net. Individuals need a safe way to raise issues and understand how to deal with “grey area” scenarios. Guidance and processes need to be in place!
The machines are raising the stakes
I am not trying to be alarmist here, simply cautionary. The fact of the matter is that we have always had the ability to be unethical in how we build computer software. Traditionally, it took quite a bit of effort and skill to impact a large number of people in any significant way. With advances in artificial intelligence, machine learning, deep learning, and the like, we can now build algorithms and models that analyze and affect people at a massive scale.
We also have a market in a continuous race to be the first to capture customers’ attention, combined with a massive amount of customer data. Add unproven processes for leveraging machine learning innovations, and you have a scenario ripe for problems.
Market pressure and technology innovation are driving us toward automated systems that can learn on their own, with some guidance. Humans are no longer needed in every scenario to make decisions and constantly adjust the programming. So when a machine learns something on its own and makes a decision that negatively impacts a group of people, who is to blame?
Milli Vanilli blamed the rain
The lyrics of “Blame It on the Rain”, the 1989 super-hit by the notorious lip-syncing duo Milli Vanilli, described someone who was clearly at fault but refused to take responsibility for his actions. It is an unfortunately common scenario to which many of us fall victim: it is much easier to rationalize and get defensive than to accept that something might actually be our own fault.
Let’s enjoy a story…
A large retail brand wants to increase their conversion rates for online purchases. After watching a webinar on the benefits of personalization and individualization, the marketing team decides that they should start analyzing the behaviours of their customers in order to provide more relevant content. Excellent!
By leveraging their current marketing tools, they begin monitoring the impact of a variety of content variations and promotions. Using machine learning algorithms and a trained model, the team can start scoring visitors on the likelihood that they will buy a specific product. Now they can offer a discount to those individuals who are likely to buy, in order to increase conversions! SO AWESOME!
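To make the story concrete, here is a minimal sketch of what such a propensity model might look like, assuming Python with pandas and scikit-learn. The file name and columns (gender, postal_code, pages_viewed, past_purchases, purchased) are hypothetical illustrations, not from any real system:

```python
# A minimal sketch of the propensity scoring described above.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

visits = pd.read_csv("visitor_history.csv")  # hypothetical behaviour log

# Note that "gender" and "postal_code" are fed in as features,
# mirroring the story; this is exactly where the trouble begins.
features = pd.get_dummies(
    visits[["gender", "postal_code", "pages_viewed", "past_purchases"]]
)
target = visits["purchased"]

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=42
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score each visitor on likelihood to buy; anyone above the
# threshold gets the 10% discount offer.
scores = model.predict_proba(X_test)[:, 1]
offer_discount = scores > 0.7
```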
Conversions go up and the marketing team is happy, but some of their customers are not. It seems the 10% discount is never offered to women or persons of colour. Social media is blowing up with claims that the company only wants to cater to white men. Uh oh. That’s not so good.
How did this happen? If we are only looking at real customer data and behaviour, how did a piece of software lead us here? The model’s accuracy was high, so what went wrong?
In this story, the model might have looked at traits such as gender and shipping address when analyzing likelihood to convert. Gender is an obvious trait to disregard if you would like to avoid gender bias, especially since most gender data does not even account for non-binary or other gender identities.
But why would a shipping address be problematic? Geographic locations often carry an embedded racial signal, because neighbourhoods can correlate strongly with race. For this reason, a model using addresses is unconsciously introducing a potential racial bias into the training.
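One way to catch this before launch is to audit who the model actually selects, broken down by group. Here is a minimal sketch of a disparate-impact check, assuming pandas; the column names and toy data are hypothetical, and the 0.8 cutoff is the commonly cited “four-fifths rule” guideline:

```python
# A minimal sketch of a pre-launch disparate-impact audit.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, offer_col: str) -> pd.Series:
    """Fraction of each group that was offered the discount."""
    return df.groupby(group_col)[offer_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by the highest; below ~0.8 is a red flag."""
    return rates.min() / rates.max()

# Hypothetical audit data: who the model chose to offer the discount to.
audit = pd.DataFrame({
    "gender": ["m", "m", "m", "f", "f", "f"],
    "offered_discount": [True, True, True, True, False, False],
})

rates = selection_rates(audit, "gender", "offered_discount")
print(rates)
print(f"Disparate impact ratio: {disparate_impact_ratio(rates):.2f}")  # 0.33: investigate!
```

A check like this will not tell you why the model is biased, but it will flag that something is off before your customers do.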
So whose fault was it? WHO DO WE BLAME FOR THIS?!?!
- Was it the marketing operations team that tracked their users’ gender and address data and came up with the plan and requirements?
- Was it the data scientist who built the model and algorithm which predicted the users’ likelihood to purchase?
- Was it the engineering team who was testing the implementation prior to the production release?
- Was it the content marketer who decided to use the predictions and offer discounts only to those with a high propensity to buy?
Yup, you got it right. All of the above… it was everybody’s responsibility to ensure this was done ethically. Every person involved must be accountable for standing up and saying “I don’t think this is right”. Acknowledging that, however, also requires acknowledging the need for an organizational Culture of Integrity.
Integrity is hard
So what is a Culture of Integrity? It sounds like a bunch of fluffy words you would hear from an organizational process improvement consultant.
You are probably right, but that doesn’t mean it isn’t a huge value to an organization!
At the core of anything we do there needs to be a belief that we are doing the “right thing”.
We need to know our colleagues are doing the “right thing”.
We need to know that management and executives are doing the “right thing”.
I’m “air-quoting” here because one of the most critical problems is: what is the right thing to do in a world of grey-area scenarios?
Start Learning
If you want to be more deliberate about ethics in your organization, the first thing you need to do is research. A lot of it. This is not a new field, and many organizations have gone through this cultural transformation before. You can learn a LOT from the successes and failures of others.
Integrity starts from the top
We look to our leaders to see how we should act. If management is cutting corners, following unethical practices, overworking their teams… employees will be tempted to emulate those practices, since they appear to be rewarded. Similarly, if leadership is seen to be deliberate and transparent, and to encourage ethical and unbiased decision-making and behaviour, employees will understand that those behaviours are valued by the organization and will follow suit.
It takes a village
Adoption across the organization is a huge challenge. This is not unique to ethics and integrity; I have seen the same with adopting agile delivery frameworks, DevOps cultures, and Continuous Improvement practices. Take DevOps culture as an example: there is an inherent belief behind DevOps that we are all working together, as a single team, with a shared responsibility. This spans from idea, to launch, and on through the entire life-cycle of the software. DevOps is a cultural shift that needs to happen throughout an organization for it to succeed.
The same is true for handling ethics in machine learning. Everybody needs to be involved to build a deliberate and shared culture that supports thinking about problems and solutions and whether something SHOULD be done.
Build a support system
We need to build strategies for how everybody in the organization can weigh the outcomes of different scenarios, building ethical considerations into everyday decision-making.
There have to be individuals championing integrity within the organization, and voicing support for colleagues doing the right thing. There have to be examples and training provided for employees so that everybody can learn to see a scenario through a different lens and say “Oh, yeah, I never thought about it that way.”
Organizations should be looking at their Codes of Conduct to ensure that they contain not just regulations, but also reflect the values of the organization. We do not need a billion rules and punishments and a team of Ethics Police to monitor behaviour!
It has to be OK to say things are NOT OK
For this to work, everyone needs to feel comfortable raising an ethical concern, even about something they did themselves. The working environment needs to be a safe space where concerns can be brought forward without fear of reprisal, loss of status, or impact to one’s career. Individuals need to be able to stand up, own a mistake, and not feel like they need to shift the fault away from themselves.
We need to move away from the blame-game!
This sounds like somebody else’s problem
Another big challenge for adoption is that, almost universally, people believe two things about themselves:
- that they are excellent drivers, and
- that they are ethical employees.
Recognizing our own bias is one of the most difficult things to do, precisely because so much of it is what is known as unconscious bias.
(THIS IS A BIG DEAL! So big, I’ll be talking about unconscious bias a little later in the series instead of diving in right now… )
Is ethics really an issue?
You might be saying to yourself: “Self, this just sounds like common sense.” You’d think so, wouldn’t you? A lot of my readers come from an engineering background, and many engineers go through ethics training as part of their profession. When you could potentially be writing the software that flies a plane, the industry kind of wants you thinking about passenger safety. So some of us might have a biased view, ironically, of how professionals go about their work.
Quantitative fields, like mathematics, do not necessarily bring ethical discussions into their formative professional training. When your primary focus is on optimizing algorithms and solving the hard mathematical problems of our time, ethics is not necessarily the first thing that comes to mind, especially when the work is isolated inside higher-education research facilities. Yet with machine learning, deep learning, and artificial intelligence, it is increasingly mathematicians who have the skillset to become data scientists. They are designing the algorithms that directly impact our customers.
There are also plenty of non-engineers, engineers who did not have professional ethics in their background, and people who simply have never had to think about ethics on the job. They may be very ethical individuals, but actively considering ethics in their decisions may be new to them.
Every person in the organization may influence how a particular algorithm, model, or process is defined and built. If we want to do that ethically, we need to ensure that all individuals involved in the process have the support to be able to consider the ethical implications of their decisions.
“Life finds a way”
Ultimately, what I hope you take away from this is that we need to get people to think. To take time out of their busy schedules and top priorities and deadlines and pause…
Just for a moment…
Ask that question Jeff Goldblum wants us to ask: “Should we be doing this?”