AI ‘godfather’ Geoffrey Hinton has quit Google over the dangers of artificial intelligence.


Turing Award winner and AI pioneer Geoffrey Hinton has left his position with Google to speak out about the dangers of artificial intelligence.

In 2012, Hinton and two of his students, Ilya Sutskever and Alex Krizhevsky, built a convolutional neural network that transformed computer vision.

Soon afterwards, Hinton, Sutskever and Krizhevsky turned their technology into a company called DNNresearch, which Google bought at auction for $US44 million.

Now, after more than a decade with Google, the 75-year-old, dubbed the ‘godfather of AI’, is leaving the tech giant so he can speak freely without Google executives breathing down his neck.

“I’m going to talk about the dangers of AI without thinking about how this is going to hurt Google,” he tweeted on Monday night, clarifying the implications of the New York Times article that first broke the news of his departure.


Hinton told the New York Times how the recent AI frenzy, fuelled by ChatGPT’s sudden and massive popularity, has troubled him.

Hinton’s concerns include the Internet being flooded with AI-generated images, videos and text to the point where people don’t know “what’s real anymore,” the broader impact on jobs, and the threat of AI developing unexpected capabilities once it begins to write and run its own code.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he told the New York Times.

“But most people thought it was far off. And I thought it was far off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

A landmark moment was the open letter calling for a six-month pause on AI development to give regulators time to catch up, a letter whose many signatories adhere to long-termist ideology.

Even during his tenure at Google, Hinton publicly warned that the ongoing arms race between the tech giants posed unforeseen risks to humanity.

“I don’t think they should scale this up more until they have understood whether they can control it,” he said.

ChatGPT’s arrival sent shockwaves through Google, whose executives saw the AI-powered chatbot as a direct threat to its profitable search product.

When Microsoft announced that it would add AI to Bing, Google rushed to follow suit, drawing complaints from shareholders.

The conflict between business interests and ethical AI development has been a long time coming.

In 2020, Google fired renowned AI ethics researcher Timnit Gebru after she co-authored a paper outlining four major risks of building large natural-language models, the technology behind Bing, ChatGPT and Google Bard.

Those risks were the environmental costs, the bias embedded in AI, the decision to build systems solely to meet business needs, and the potential for mass misinformation.

Gebru went on to found the Distributed AI Research Institute and, together with AI ethicist Margaret Mitchell, who was fired by Google in 2021, published a critical response to the AI pause letter.

Mitchell responded to the news about Hinton with a stark warning: “We cannot say what the future of AI might hold if the world’s most capable researchers are indirectly/culturally censored for short-term gain.”


