This is Part 4 in the Soul in the Machine series.
In the Soul in the Machine series, I am delving into our collective responsibility to ensure that computing systems treat users fairly and responsibly. Specifically, I am raising the ethical questions around current trends such as data privacy regulations and machine learning capabilities.
In Part 4, we are going to look at building a diverse team capable of implementing ethical AI and ML solutions. How do we ensure that the culture of integrity we are working so hard to support leads to fair results in our machine learning? How can we consciously battle unconscious bias in the solutions we develop?
Two heads are better than one
Building a culture of integrity in an organization allows us to start asking questions about what we're doing and how we're doing it. A critical next step, though, is building a team capable of seeing WHEN these questions need to be asked.