AI guru Geoffrey Hinton says AI is a new intelligence unlike our own, so have we been thinking about it wrong?


Arguments about AI often describe it as a technology designed to compete with human intelligence. Indeed, one of the most widely expressed fears is that AI could achieve human-like intelligence and render humanity obsolete in the process.

However, one of the world’s top AI scientists sees AI as a new form of thinking — one that poses unique risks and requires unique solutions.

2018 Turing Award winner Geoffrey Hinton recently stepped down from his role at Google to warn the world about the dangers of AI. His move follows an open letter signed by more than 1,000 tech leaders calling for a halt to advanced AI development for at least six months.

Hinton’s position is nuanced. While he thinks AI has the potential to become smarter than humans, he also suggests it should be thought of as an altogether different form of intelligence to our own.

Why Hinton’s ideas are important

Although experts have been raising red flags for months, Hinton’s decision to voice his concerns is significant.

Dubbed the “Godfather of AI,” he helped pioneer many of the mechanisms underlying the modern AI systems we see today. His early work on neural networks made him one of three individuals awarded the 2018 Turing Award. And one of his students, Ilya Sutskever, became a co-founder of OpenAI, the organization behind ChatGPT.

When Hinton speaks, the AI world listens. And if we take seriously his framing of AI as an intelligent non-human entity, one could argue we’ve been thinking about it all wrong.

The trap of false equality

On the one hand, tools based on large language models, such as ChatGPT, produce text that is very similar to what people write. ChatGPT even makes things up, or “hallucinates,” which Hinton notes is something humans do as well. But if we lean on such similarities to compare AI intelligence with human intelligence, we risk misjudging what AI actually is.

Geoffrey Hinton

We find a useful parallel in the invention of human flight. For thousands of years, people tried to fly by imitating birds: flapping their arms, sometimes with contraptions covered in feathers. This didn’t work. Eventually, we realized that fixed wings generate lift using a different principle, and this led to the invention of flight.

Airplanes are no better or worse than birds; they are different. They do different things and face different dangers.

AI (and computing in general) is a similar story. Large language models like GPT-3 are comparable to human intelligence in many ways, but they work differently. ChatGPT crunches vast swathes of text to predict the next word in a sentence. Humans take a different approach to forming sentences. Both are impressive.
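The idea of predicting the next word from statistics over text can be illustrated with a deliberately tiny sketch. This is not how ChatGPT works internally (it uses a large neural network, not word counts), but a toy bigram model captures the same principle: learn which word tends to follow which, then predict.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            counts[current_word][next_word] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent word seen after `word`, or None."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" is the most common follower of "the"
print(predict_next(model, "sat"))  # "on"
```

Real language models replace these raw counts with learned probabilities over whole contexts, but the training objective, predicting what comes next, is the same in spirit.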

How is AI intelligence unique?

AI practitioners and experts alike have long pondered the relationship between AI and human intelligence, not to mention our tendency to anthropomorphise AI. But AI is fundamentally different from us in many ways. As Hinton explains:

If you or I learn something and want to transfer that knowledge to someone else, we can’t just send them a copy. […] But I can have 10,000 neural networks, each with their own experiences, and any of them can share what they learn instantly. That’s a huge difference. It’s as if there were 10,000 of us, and as soon as one person learned something, all of us knew it.
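Hinton’s point, that digital learners can share knowledge by copying parameters directly, can be sketched with a hypothetical toy model. The class name and update scheme below are illustrative only; real systems synchronise millions of weights, often by averaging gradients rather than overwriting them.

```python
import copy

class TinyModel:
    """A stand-in for a neural network: just a dict of parameters."""
    def __init__(self):
        self.weights = {"w": 0.0, "b": 0.0}

    def learn(self, updates):
        """Simulate learning by nudging parameters."""
        for key, delta in updates.items():
            self.weights[key] += delta

    def share_with(self, others):
        """Every other model instantly receives an exact copy of the weights."""
        for other in others:
            other.weights = copy.deepcopy(self.weights)

fleet = [TinyModel() for _ in range(10)]
fleet[0].learn({"w": 0.5, "b": -0.1})  # only one model "experiences" the data
fleet[0].share_with(fleet[1:])          # ...yet all ten now know it
print(all(m.weights == fleet[0].weights for m in fleet))  # True
```

Humans have no equivalent of `share_with`: knowledge transfer between people is slow and lossy, which is exactly the asymmetry Hinton is highlighting.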

AI outperforms humans at many tasks, including any task that relies on matching patterns and extracting information from large data sets. Humans are comparatively slow, and our memory capacity looks less than optimal next to AI’s.

However, humans are superior on some fronts. We compensate for our poor memory and slow processing speed with sound reasoning and logic. We can quickly and easily learn how the world works, and use this knowledge to predict the likelihood of events. AI still struggles with this (although researchers are working on it).

Humans are also remarkably energy efficient, whereas AI requires powerful computers (especially for learning) that use far more energy than we do. As Hinton puts it:

People can imagine the future […] on a cup of coffee and toast.

OK, so what if AI is different from us?

If AI is fundamentally a different kind of intelligence from ours, then it follows that we can’t (or shouldn’t) compare it directly to our own.

New intelligence presents new risks to society and requires a shift in how we talk about and manage AI systems. In particular, we may need to re-evaluate the way we think about protecting against AI threats.

One of the fundamental questions dominating these debates is how to define AI. After all, AI is not binary; intelligence exists on a spectrum, and the spectrum of human intelligence may look very different from that of machine intelligence.

This point was driven home when one of the first attempts to regulate AI, in New York in 2017, failed because auditors could not agree on which systems should be classified as AI. Defining AI when designing regulation is very challenging.

So perhaps we should move away from a binary definition of AI and focus instead on the specific outcomes of AI-driven actions.

What dangers are we facing?

The rapid uptake of AI across industries has caught many by surprise, and some experts are worried about the future of work.

This week, IBM CEO Arvind Krishna announced the company could replace some 7,800 back-office jobs with AI over the next five years. We will need to adjust how we manage AI as it is deployed for tasks previously done by humans.

More worryingly, AI’s ability to generate fake text, images and video is ushering us into a new era of information consumption. Our current methods for dealing with human-generated misinformation are insufficient for this challenge.

Hinton is also worried about the dangers of AI-driven autonomous weapons, and about how bad actors might misuse the technology.

These are just a few examples of how AI, and specifically the characteristics that make it unlike us, can pose a threat to humanity. To manage AI productively and proactively, we need to account for these unique characteristics rather than apply recipes designed for human intelligence.

The good news is that humans have already learned to manage potentially harmful technologies, and AI is no different.

If you want to hear more about the issues covered in this article, check out CSIRO’s Everyday AI podcast.

  • Olivier Salvado, Lead AI for Mission, CSIRO and John Whittle, Director, Data61

This article is republished from The Conversation under a Creative Commons license. Read the original article.
