Google fires engineer who called AI model ‘sentient’

Blake Lemoine made his firing public on Friday


Blake Lemoine has made a name for himself, and that’s saying something for a software engineer. His peers would argue, though, that it’s been for the wrong reasons. As a member of Google’s Responsible AI division, he had been pitching and publicizing transcripts generated from his interactions with the company’s Language Model for Dialogue Applications, or LaMDA. Lemoine has described the chatbot generator as a “hivemind” of personas with a self-aware core of intelligence. When Google first learned of his disclosures, it suspended him. Now, Lemoine himself has revealed that he has been dismissed from the company.


Lemoine told the Big Technology Podcast (via Reuters) about his firing during a taping on Friday. Google later confirmed the news in a statement to the press, which reads in part:

[…] despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.

Google and other engineers in the field have presented evidence that LaMDA is not sentient, and have argued that Lemoine was not qualified to make such an assessment in the first place, as he was employed as an engineer, not an ethicist.

LaMDA was first publicized at Google I/O 2021 and is built on Google’s Transformer neural architecture. It’s just one of many language models in the machine learning space, including GPT-3 and the Pathways Language Model, or PaLM, another Google-made product. We have a full breakdown of the history of LaMDA from Android Police’s Ryne Hager.

Lemoine has defended his stance on (and says he even communicates with) LaMDA in posts on his Medium blog. He has also shared aspects of his spirituality as a self-described mystic Christian priest, as well as accounts of his work during his time at Google.
