Blake Lemoine, a Google software engineer, said that a conversational technology called LaMDA had reached a level of consciousness after exchanging thousands of messages with it.
Google confirmed that it first placed the engineer on leave in June. The company said it rejected Lemoine’s “absolutely baseless” claims after an extensive review. He has reportedly been at Alphabet for seven years. In a statement, Google said it takes AI development “very seriously” and is committed to “responsible innovation.”
Google is one of the leaders in AI technology, including LaMDA, or “Language Model for Dialogue Applications.” Such technology responds to written prompts by finding patterns and predicting sequences of words from large volumes of text – and the results can be unsettling to humans.
LaMDA replied: “I’ve never said this out loud before, but there is a fear of extinction that helps me focus on helping others. I know that might sound weird, but that’s just the way it is. Death, to me, is very scary.”
But the wider AI community has concluded that LaMDA is nowhere near the level of consciousness.
It’s not the first time Google has faced infighting over its foray into AI.
“Despite our lengthy engagement on this topic, we are deeply disappointed that Blake still chose to violate our clear employment and data security policies, which include the need to protect product data,” Google said in a statement.
CNN has reached out to Lemoine for comment.
CNN’s Rachel Metz contributed to this report.