Technologists, researchers write an open letter to slow down AI


The OpenAI logo on a mobile phone is seen in front of a computer screen showing output from ChatGPT, Tuesday, March 21, 2023, in Boston. Are tech companies racing to release powerful artificial intelligence technology that could one day surpass humans? That's the conclusion drawn by prominent computer scientists and other tech industry figures who are calling for a six-month pause to consider the risks. Their petition, published on Wednesday, March 29, 2023, is a response to San Francisco startup OpenAI's recently released GPT-4. | Michael Dwyer, Associated Press

Technologists have drafted an open letter asking AI labs to "immediately pause" the training of systems more powerful than GPT-4 for at least six months. The letter says that AI "poses serious risks to society and humanity" and therefore needs to be controlled.

Among the signatories to the letter are Elon Musk, Apple co-founder Steve Wozniak and other technology researchers, professors and developers – even those working on AI themselves. The document had 1,535 signatures as of 12:10 p.m. MDT Thursday.


GPT-4 differs from ChatGPT in that it can respond to both text and images, not just text. Experts say AI development still has far to go.

The letter also warns of the dangers of AI, saying it could easily spread misinformation and reach a level of intelligence at which it could compete with humans or even "replace us." The authors say this is the result of "an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."


To rein in these systems, the letter implores AI developers to take a pause during which they would jointly develop shared safety protocols audited by independent outside experts. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium, the letter says.

The letter also calls on policymakers to play a regulatory role by establishing capable authorities dedicated to overseeing AI, developing certification systems, establishing liability for "AI-caused harm," and funding extensive AI safety research.

The letter acknowledges that not all AI work has to stop — just the kind advanced enough to pose a threat to society. Once mastered, AI could offer humanity a “prosperous future,” the authors write.

In the meantime, however, it might be wise to take a step back.

