Soul in the Machine – Can we avoid the Google Duplex outrage in Conversational AI development?

This is Part 5 in the Soul in the Machine series.

In the Soul in the Machine series, I delve into our collective responsibility to ensure that computing systems treat users fairly and responsibly. Specifically, I raise the ethical questions surrounding current trends such as data privacy regulation and advances in machine learning.

Here in Part 5, we will look at some reactions to conversational AI technologies, at Responsible AI, and at the line between CAN and SHOULD.

Should we build this?

In a previous article, I highlighted the need for teams to stop and ask whether they SHOULD be doing something. Let's take Google Duplex as an example.

In the demo from Google, the caller was the AI, but honestly, it could just as easily have been the restaurant answering with one. In another Google Duplex example, the assistant books a hairdresser appointment, and in others it handles interruptions and resumes the conversation.
