Sentient AI LaMDA from Google? More like ‘pure clickbait’, says AI expert

Back in June of this year, the internet was buzzing with talk of AI becoming sentient and the imminent arrival of SkyNet. The fuss started after Google engineer Blake Lemoine claimed that the company’s Language Model for Dialogue Applications (LaMDA) had become self-aware and achieved sentience. Google, however, would have none of it, and Lemoine was first placed on paid leave and then eventually fired.

For those looking for an outside perspective that involves neither Google nor its former employee, John Etchemendy, co-director of Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI), has chimed in. Like Google, Etchemendy is unimpressed by Lemoine’s claims, and he called the stories about Google’s sentient AI “pure clickbait.”

He said:

Sentience is the ability to sense the world, to have feelings and emotions, and to act in response to those sensations, feelings and emotions.

LaMDA is not sentient for the simple reason that it does not have the physiology to have sensations and feelings. It is a software program designed to generate sentences in response to sentence prompts.

When I saw the article in the Washington Post, I was disappointed in the Post for even publishing it.

They published it because, for the moment, they could write a headline about the “Google engineer” making this absurd claim, and because most of their readers are not sophisticated enough to recognize it for what it is. Pure clickbait.

Richard Fikes, professor emeritus of computer science at Stanford, agrees that LaMDA was simply trying to respond like a human, not actually becoming one. He said:

You can think of LaMDA as an actor; it will take on the persona of whatever you ask it to. [Lemoine] was drawn in by LaMDA playing the role of a sentient being.

Consequently, many, including Fikes, believe that Blake Lemoine fell victim to the ELIZA effect: the tendency to unconsciously attribute human-like understanding to a computer’s behavior.
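The effect takes its name from Joseph Weizenbaum’s 1960s chatbot ELIZA, which convinced some users it understood them using nothing more than keyword matching and canned templates. As a rough illustration (a minimal sketch in Python, not Weizenbaum’s actual script, with all rules invented for this example), a few lines of pattern matching are enough to produce replies that feel oddly attentive:

```python
import re

# Illustrative sketch of ELIZA-style pattern matching (not Weizenbaum's
# actual script): match a template, swap pronouns, echo the words back.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first and second person so the echo reads like a reply."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(text: str) -> str:
    """Return a canned response; no state, no model, no understanding."""
    for pattern, template in RULES:
        match = pattern.match(text.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Tell me more."

print(respond("I feel like my program understands me"))
# -> Why do you feel like your program understands you?
```

There is no comprehension anywhere in that loop, yet the output reads like a reply from someone listening, which is exactly the illusion the ELIZA effect describes.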

Source: Stanford Daily.
