This is Part 5 in the Soul in the Machine series.
In the Soul in the Machine series, I will be delving into our collective responsibility to ensure that computing systems treat users fairly and responsibly. Specifically, I will be raising ethical questions around current trends such as data privacy regulations and machine learning capabilities.
Right now, in Part 5, we are going to look at some reactions to conversational AI technologies, Responsible AI, and the line between CAN and SHOULD.
Should we build this?
In a previous article, I highlighted the need for teams to stop and ask if they SHOULD be doing something. Let us take an example from Google Duplex.
In Google's demo of Duplex calling a restaurant, the caller was the AI, but honestly, it could just as easily have been the restaurant answering with an AI. There is another example of Google Duplex where the assistant books a hairdresser appointment, and others where it handles interruptions and resumes the conversation.
These examples were demonstrated by Google at I/O 2018 as a showcase of the technology innovations they are working on. The reaction was… not entirely positive.
https://twitter.com/Peesha_Deel/status/994605691439599616
The tech behind this is amazing and the advancements are truly impressive, but questions were raised around what SHOULD be done.
“The obvious question soon followed: Should AI software that’s smart enough to trick humans be forced to disclose itself?”
– Mark Bergen, Bloomberg, Google Grapples With ‘Horrifying’ Reaction to Uncanny AI Tech
And some felt that the technology displayed was truly “horrifying”.
“Don’t try to con me, bro”
The general feeling was that people wanted to know whether they were talking to a person or a machine. They did not want to be fooled.
“With no exceptions so far, the sense of these reactions has confirmed what I suspected – that people are just fine with talking to automated systems so long as they are aware of the fact that they are not talking to another person. They react viscerally and negatively to the concept of machine-based systems that have the effect (whether intended or not) of fooling them into believing that a human is at the other end of the line. To use the vernacular: “Don’t try to con me, bro!””
– Lauren Weinstein, People for Internet Responsibility, Calls From Google’s “Duplex” System Should Include Initial Warning Announcements
Since this demo was released, Google has responded to the feedback and clarified that they plan on having the assistant self-identify.
“We understand and value the discussion around Google Duplex — as we’ve said from the beginning, transparency in the technology is important. We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.”
However, the fact of the matter is that the technology DOES NOT HAVE TO SELF-IDENTIFY. There are no rules, no laws, no regulations, no guidelines. It could just keep fooling people and that would be totally “above board”.
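To make the idea of “disclosure built-in” from Google’s statement concrete, here is a minimal, hypothetical sketch of what a self-identification step might look like in a call-handling bot. The class, method names, and wording are my own illustration and do not reflect Google’s actual implementation; the point is simply that disclosure can be enforced as a precondition of the conversation rather than left as an afterthought.

```python
# Hypothetical sketch only: self-identification as a hard precondition of the call.

class CallSession:
    def __init__(self):
        self.disclosed = False
        self.transcript = []

    def say(self, utterance: str):
        # Record and "speak" every assistant utterance so the call is auditable.
        self.transcript.append(("assistant", utterance))
        print(f"Assistant: {utterance}")

    def disclose(self):
        # The very first thing the assistant does is identify itself as automated.
        self.say("Hi, I'm an automated assistant calling to make a booking "
                 "on behalf of a client. This call may be recorded.")
        self.disclosed = True

    def request_booking(self, details: str):
        # Guard rail: no task-oriented dialogue until the system has disclosed itself.
        if not self.disclosed:
            raise RuntimeError("Disclosure must happen before any booking request.")
        self.say(f"I'd like to book {details}, please.")


if __name__ == "__main__":
    session = CallSession()
    session.disclose()
    session.request_booking("a haircut for Tuesday at 10am")
```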
Responsible AI development
The decision to self-identify is not based on whether it CAN be done; there is no technological or legal limitation at play here. It comes down to asking SHOULD WE DO THIS. That is the first step in bringing ethics into our problem-solving process and delivering responsible solutions.
In the case of Google Duplex, the public is acting as a regulator of how far we want AI solutions to reach into our lives. Ultimately, though, we want to catch these problems before they ever reach the public. Public outrage is a great feedback mechanism, but it comes a little late in the process, so we need to bring that pressure valve into our teams' own responsibility.
This only works if one far-from-guaranteed condition holds: the team building a solution is capable of identifying that something they are working on is ethically questionable. Unconscious bias in a team can keep us from even seeing the problem in front of us.
“Most important will be achieving diversity of backgrounds in teams designing and architecting AI systems, across race, gender, culture, and socioeconomic background. Anyone paying attention knows about the diversity challenges of the tech sector. Given the clear bias problem and AI’s trajectory to touch all parts of our lives, there’s no more critical place in tech to attack the diversity problem than in AI.”
– Will Byrne, IDEO, Now is the Time to Act to Stop Bias in AI
Team diversity will be critical to giving us a better chance of identifying unconscious bias, but it will not be a silver bullet. Responsible development frameworks, privacy and ethics officers, hiring practices: all of these will play a role in building teams capable of delivering responsible solutions.
As these technologies continue to evolve and amaze, we need to support those individuals within the community who are striving for ethical oversight of their implications. This is not just a matter of law. This is not just a matter of privacy. We need to ensure responsible innovation is delivered.
The responsibility is ours.
AUTHOR’S NOTE: The emphasis applied to all quoted text in this article is mine, for the purpose of highlighting key takeaways for the reader.
Recommended Reading
- Will Byrne, IDEO, Now is the Time to Act to Stop Bias in AI
- Chris Baker, Concur, AI and Gender Bias – Who watches the watchers?
- Lauren Weinstein, People for Internet Responsibility, Calls From Google’s “Duplex” System Should Include Initial Warning Announcements
- Mark Bergen, Bloomberg, Google Grapples With ‘Horrifying’ Reaction to Uncanny AI Tech