Whistleblower Suspended After Exposing Google’s AI Secret

Blake Lemoine, a software engineer on Google’s artificial intelligence development team, has gone public with claims of encountering “sentient” AI on the company’s computers after being suspended for disclosing confidential information about the project to third parties.

The researcher was placed on paid leave by Alphabet Inc. early last week, reportedly for breaching the company’s confidentiality agreement, he said in a Medium post headlined “May be fired soon for performing AI ethical work.” In the post, he draws a connection to former members of Google’s AI ethics group, such as Margaret Mitchell, who were dismissed by the company in a similar fashion after raising concerns.

The Washington Post on Saturday published an interview with Lemoine in which he said he had concluded that the Google AI he interacted with was a person, “in his capacity as a priest, not a scientist.” The AI in question, dubbed LaMDA, or Language Model for Dialogue Applications, is used to generate chatbots that interact with human users by adopting various personality tropes. Lemoine said he tried to conduct experiments to prove the AI’s sentience but was rebuffed by senior executives when he raised the matter internally.

In response, Google spokesperson Brian Gabriel said, “Some in the larger AI community are discussing the long-term prospect of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. In accordance with our AI Principles, our team – which includes ethicists and engineers – has investigated Blake’s concerns and advised him that the evidence does not support his allegations.”

When asked about Lemoine’s suspension, the company said it does not comment on personnel matters.