
Is Google LaMDA Sentient?

Is a Google language model alive? One researcher certainly thinks so.


The CBC Radio segment has been archived and is only available from CBC by request.

I spoke with Hallie Cotnam on CBC Ottawa Morning on 20-Jun-2022 about this issue.

Has the “I” in A.I. Finally Come True?

Recently, an AI ethics researcher from Google was placed on administrative leave after publicly claiming that Google’s LaMDA system was sentient, a claim that has been denounced by Google and others in the AI community.

It’s a bold claim and there simply isn’t enough evidence to support it.

Is Google LaMDA sentient? No.

What is LaMDA?

If it’s not an actual intelligence, what is it? LaMDA stands for Language Model for Dialogue Applications: a system designed to hold a conversation in a natural manner.

Sundar Pichai, CEO of Google and Alphabet, revealed the latest version at Google I/O 2022 and hit on three key aspects of the system. He framed it as the system being able to:

These three areas of focus allow the system to present as if it’s having an intelligent conversation. In reality, it’s using all of its vast inputs—Google Search, YouTube, Google Maps, Google Books, etc.—to find groups of relevant responses and create something that is plausible.
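
As a rough mental model, you can think of a dialogue turn as "generate several candidate replies, then keep the most plausible one." The sketch below is a loose illustration of that idea, not Google’s actual LaMDA code; every function name in it is a made-up stand-in.

```python
# A rough "generate, then rank" sketch of how a dialogue system can produce
# a plausible reply. Function names and the scoring idea are illustrative
# assumptions, not Google's actual LaMDA implementation.

from typing import Callable, List


def respond(history: List[str],
            generate_candidates: Callable[[List[str], int], List[str]],
            score_reply: Callable[[List[str], str], float],
            n_candidates: int = 8) -> str:
    """Sample several candidate replies and return the highest-scoring one."""
    candidates = generate_candidates(history, n_candidates)
    # The scorer is where "plausibility" comes from: it would weigh how
    # sensible, specific, and safe each candidate is given the conversation.
    return max(candidates, key=lambda reply: score_reply(history, reply))


# Dummy stand-ins so the sketch runs end to end.
def fake_generate(history: List[str], n: int) -> List[str]:
    return [f"Canned reply {i} to: {history[-1]}" for i in range(n)]


def fake_score(history: List[str], reply: str) -> float:
    return float(len(reply))  # a real system would use a learned scorer


print(respond(["Tell me about Ottawa."], fake_generate, fake_score))
```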

What Impact Will LaMDA Have?

If you’re asking yourself, “Why would Google create such a system?”, the answer is actually very straightforward: efficiency.

Digital systems are often the first point of contact for many businesses (through online chat or phone calls) and power a lot of tools like Google Home. We’ve all had that frustrating interactive voice response (IVR) experience when calling a big company’s customer support…

“Hello and welcome to BigCorp. What can I help you with today?”, 🤖

“Customer service”, 😀

“I heard, ‘Sales.’ Is that correct?”, 🤖

“No, I want customer service”, 😀

“Oh, I’m sorry that I misheard you. Forwarding you to ‘Sales’”, 🤖

👆 That’s the type of interaction—whether voice or chat—that LaMDA aims to get rid of forever. The results so far are promising.
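
The brittle behaviour above typically comes from rigid keyword or intent matching: whatever the caller says gets forced into a small, fixed menu. Here’s a toy sketch of that, purely illustrative and not any real vendor’s IVR code, showing why “customer service” can end up routed to “Sales.” A LaMDA-style model instead generates a free-form response to what was actually said.

```python
# Toy illustration of rigid IVR-style intent matching -- the kind of
# interaction a free-form dialogue model aims to replace.
# Everything here is hypothetical example code.

INTENTS = {
    "sales": ["sales", "buy", "purchase"],
    "billing": ["billing", "invoice", "payment"],
}


def route(utterance: str) -> str:
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return intent
    # Anything outside the fixed menu falls through to a default,
    # which is how "customer service" can end up routed to "sales".
    return "sales"


print(route("No, I want customer service"))  # -> "sales"
```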

There are definitely issues around the ethics of using a system like this. We won’t dive into them here, but those discussions need to be had in our communities.

At a minimum, these systems should be required to identify themselves as digital. You should always know when you’re talking to a digital system.

But overall, LaMDA should be a big win for most use cases.

