Despite announcing several new features, including Gmail composing emails for you and Google Assistant learning to chat, the biggest talking point of Google I/O 2018 was Duplex. Everyone had an opinion on Google Duplex, and not all of them were positive.
What Is Google Duplex?
Duplex is Google’s new AI, and a massive step up from the likes of Siri and Alexa. Duplex is capable of making calls for you, meaning you’ll never have to book a hair appointment or a table at a restaurant again. The problem is that Duplex is a little too human for its own good.
Google CEO Sundar Pichai demoed Duplex on stage at I/O 2018, showing the next-level AI fooling two people into thinking it was a real-life human. Many people found that aspect troubling, especially as at no point did Duplex announce that it wasn’t human.
Duplex Will Disclose Its Identity
It seems that Google was unaware of the reaction Duplex was going to cause, and the company certainly didn’t foresee questions of morality being raised. Google has now issued a statement regarding Duplex, telling The Verge:
“We understand and value the discussion around Google Duplex — as we’ve said from the beginning, transparency in the technology is important. We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.”
Google has listened to the feedback and reacted accordingly. The question remains: if Duplex is going to announce that it isn’t human, why does it need to sound so human? This is just the first of many moral dilemmas humanity is going to face when dealing with AI.
The Day the Robots Take Over…
Google is due to start testing Duplex within Assistant this summer. Only then will it become clear just how Google is going to have Duplex announce that it isn’t human. Until then, I’ll be having nightmares about the day when robots can do all of our jobs.