Is Google LaMDA Sentient?
I spoke with Hallie Cotnam on CBC Ottawa Morning on 20-Jun-2022 about this issue. Have a listen 👇
Has the “I” in A.I. Finally Come True?
Recently, an AI ethics researcher from Google was placed on administrative leave after publicly claiming that Google’s LaMDA system was sentient, a claim that Google and others in the AI community have denounced.
It’s a bold claim and there simply isn’t enough evidence to support it.
Is Google LaMDA sentient? No.
What is LaMDA?
If it’s not an actual intelligence, what is it? LaMDA stands for Language Model for Dialogue Applications: a system designed to hold a conversation in a natural manner.
Sundar Pichai, CEO of Google and Alphabet, revealed the latest version at Google I/O 2022 and hit on three key aspects of the system. He phrased it as the system being able to:
- “Imagine it”, the ability to synthesize new ideas and topics
- “Talk about it”, extrapolate ideas around a specific topic and keep the conversation on topic
- “List it”, take a complex goal and break it down into lists of tasks to do
These three areas of focus allow the system to present as if it’s having an intelligent conversation. In reality, it’s drawing on its vast inputs—Google Search, YouTube, Google Maps, Google Books, etc.—to find groups of relevant responses and create something that is plausible.
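The mechanics above can be sketched in miniature. This is a toy illustration, not Google’s implementation: real models like LaMDA learn plausibility from billions of parameters, but the basic shape—score candidate responses against the prompt and return the most plausible one—is similar. All names here are hypothetical.

```python
def score(prompt: str, candidate: str) -> float:
    """Toy plausibility score: fraction of prompt words that also
    appear in the candidate response."""
    prompt_words = set(prompt.lower().split())
    candidate_words = set(candidate.lower().split())
    if not prompt_words:
        return 0.0
    return len(prompt_words & candidate_words) / len(prompt_words)

def reply(prompt: str, candidates: list[str]) -> str:
    """Return the candidate that scores as most plausible for the prompt."""
    return max(candidates, key=lambda c: score(prompt, c))

candidates = [
    "Paris is the capital of France.",
    "The weather is nice today.",
    "I enjoy hiking on weekends.",
]
print(reply("What is the capital of France?", candidates))
# → Paris is the capital of France.
```

The sketch uses crude word overlap where a real language model uses learned probabilities, but it shows why such a system can sound convincing without understanding anything: it is ranking plausible text, not reasoning.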
What Impact Will LaMDA Have?
If you’re asking yourself, “Why would Google create such a system?”, the answer is actually very straightforward: efficiency.
Digital systems are often the first interface for many businesses (through online chat or phone calls) and for tools like Google Home. We’ve all had that frustrating interactive voice response (IVR) experience when calling a big company’s customer support…
“Hello and welcome to BigCorp. What can I help you with today?”, 🤖
“Customer service”, 😀
“I heard, ‘Sales.’ Is that correct?”, 🤖
“No, I want customer service”, 😀
“Oh, I’m sorry that I misheard you. Forwarding you to ‘Sales’”, 🤖
👆 That’s the type of interaction—whether voice or chat—that LaMDA aims to get rid of forever. The results so far are promising.
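Why does the IVR above misroute the caller? A rigid menu-based system matches the caller’s words against a fixed list of options and forwards to its closest guess, however poor. Here is a hypothetical sketch (not any real IVR product) of that failure mode:

```python
import difflib

# A fixed menu: "customer service" is not an option, so a rigid matcher
# must shoehorn the request into whatever scores closest.
MENU = ["sales", "billing", "technical support"]

def route(utterance: str) -> str:
    """Route to the closest menu option, no matter how weak the match."""
    matches = difflib.get_close_matches(utterance.lower(), MENU, n=1, cutoff=0.0)
    return matches[0]

# The caller asks for something off-menu and gets forwarded anyway.
print(route("customer service"))
```

A conversational system in the LaMDA mold, by contrast, keeps the dialogue context and can ask a clarifying question instead of forcing the request into a fixed menu.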
There are definitely issues around the ethics of using a system like this. We won’t dive into them here but those discussions need to be had in our communities.
At a minimum, these systems should be required to identify themselves as digital. You should always know when you’re talking to a digital system.
But overall, LaMDA should be a big win for most use cases.
Further Reading

- LaMDA 2 announcement at Google I/O 2022
- ‘Is This AI Sapient?’ Is the Wrong Question to Ask About LaMDA, by Katherine Cross for Wired
- Is Google’s LaMDA conscious? A philosopher’s view
- Do large language models understand us?, by Blaise Aguera y Arcas
- LaMDA and the power of illusion: The aliens haven’t landed … yet, by Louis Rosenberg
- Blake Lemoine Says Google’s LaMDA AI Faces ‘Bigotry’, by Steven Levy for Wired
- AI’s most convincing conversations are not what they seem, by Rupert Goodwins for The Register
- Turing test explanation by Wikipedia
- Dr. Sbaitso from the early days of sound cards
- Voight-Kampff test from Blade Runner
- DALL·E mini from Hugging Face
- GPT-3 by OpenAI
- Google Cloud AI and machine learning products