Google fired the engineer who claimed its AI technology was sentient

Blake Lemoine, a Google software engineer, said that a conversational technology called LaMDA had reached a level of consciousness after he exchanged thousands of messages with it.

Google confirmed that it had first placed the engineer on leave in June. The company said it rejected Lemoine’s “absolutely baseless” claims after an extensive review. He had reportedly been at Alphabet for seven years. In a statement, Google said it takes the development of AI “very seriously” and is committed to “responsible innovation.”

Google is one of the leaders in AI innovation, including LaMDA, or “Language Model for Dialogue Applications.” Such technology responds to written prompts by finding patterns and predicting sequences of words from large volumes of text – and the results can be disturbing to humans.

“What are you afraid of?” Lemoine asked LaMDA, according to a document he shared with top Google executives last April, the Washington Post reported.

LaMDA replied: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”

But the wider AI community has concluded that LaMDA is nowhere near a level of consciousness.

“Nobody should think auto-complete, even on steroids, is conscious,” Gary Marcus, founder and CEO of Geometric Intelligence, told CNN Business.

It’s not the first time Google has faced infighting over its foray into AI.

In December 2020, Timnit Gebru, a pioneer in AI ethics, parted ways with Google. As one of the few Black employees at the company, she said she felt “constantly dehumanized.”

Her abrupt departure drew criticism from the tech world, including from within Google’s Ethical AI team. Margaret Mitchell, a leader of that team, was fired in early 2021 after she spoke out about Gebru. Gebru and Mitchell had raised concerns about AI technology, warning that people at Google could come to believe the technology is sentient.

On June 6, Lemoine posted on Medium that Google had placed him on paid administrative leave “in connection with an investigation into AI ethics issues I have been raising at the company” and that he may be fired “soon.”

“Despite our lengthy engagement on this topic, we are deeply disappointed that Blake still chose to violate our clear employment and data security policies, which include the need to protect product data,” Google said in a statement.

CNN has reached out to Lemoine for comment.

CNN’s Rachel Metz contributed to this report.
