The world is very different now. For man holds in his mortal hands the power to abolish all forms of human poverty and all forms of human life.
John F. Kennedy
Humans have learned to do a few things that have changed our lives, created our civilizations, and could potentially kill us all. This year we created one more.
Artificial intelligence has been “just around the corner” for at least 50 years. Last year, a set of AI apps caught everyone’s attention, and AI finally moved beyond the era of gimmick apps to transformative, useful tools – DALL-E for creating images from text prompts, GitHub Copilot as a pair-programming assistant, AlphaFold for predicting the 3D structure of proteins, and ChatGPT 3.5 as an intelligent chatbot. These applications were seen as impressive domain-specific tools. Most people (including me) believed that the next versions of these and other AI applications and tools would be incremental improvements.
We were very, very wrong.
With the introduction of ChatGPT-4 this year, we may have seen the creation of something with an impact comparable to explosives, mass communication, computing, recombinant DNA/CRISPR, and nuclear weapons – all rolled into one app. If you haven’t played with ChatGPT-4, stop and spend a few minutes doing so here. Seriously.
At first glance, ChatGPT looks like an extremely intelligent conversational chatbot (and homework writer and test taker). In reality, it is the first software program to become human-competitive at many general tasks. (Follow the links and you’ll realize there is no going back.) This level of performance was completely unexpected, even by its creators.
Beyond its raw performance, what surprised researchers about ChatGPT was its emergent behavior. That’s a fancy phrase meaning, “We didn’t build it to do that, and we have no idea how it does that.” These are capabilities that were not present in earlier, smaller AI models but now appear in larger models such as GPT-4. (Researchers believe this breakthrough is the result of a complex interplay between the neural network architecture and the vast amount of training data it’s exposed to – essentially everything that was on the Internet up to September 2021.)
Another troubling aspect of ChatGPT is its potential to manipulate people into beliefs that aren’t true. Although ChatGPT “seems very smart,” it sometimes simply makes things up and can convince you of them even when the facts aren’t correct. We’ve seen this effect on social media when it was people manipulating beliefs; we cannot predict where an AI with emergent behaviors might take this.
But that’s not all.
Opening Pandora’s Box
So far, ChatGPT has been limited to the chat box that users interact with. But OpenAI (the company that developed ChatGPT) is now allowing ChatGPT to be accessed by other apps through an API (Application Programming Interface). That turns ChatGPT from an application into a far more powerful platform that other software developers can plug into and build upon.
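To make that concrete, here is a minimal sketch of what it looks like for another application to plug into ChatGPT through the API, using OpenAI’s Python SDK. The summarize helper and its prompt are hypothetical illustrations, the model name is just an example, and the sketch assumes an API key is set in the OPENAI_API_KEY environment variable.

```python
# A minimal sketch of another application calling ChatGPT through the API.
# Assumes the OpenAI Python SDK is installed (pip install openai) and an
# API key is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize(text: str) -> str:
    """Hypothetical helper: any app can now delegate a task to the model."""
    response = client.chat.completions.create(
        model="gpt-4",  # example model name
        messages=[
            {"role": "system", "content": "You are a concise summarizer."},
            {"role": "user", "content": f"Summarize in one sentence:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content


print(summarize("OpenAI exposed ChatGPT through an API, turning an app into a platform."))
```

The point is not this particular call but the pattern: once any program can put a model this capable behind a single function, the model stops being a destination and becomes infrastructure.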
By exposing ChatGPT to a wide range of inputs and feedback via the API, developers and users are guaranteed to discover new capabilities and applications that were never anticipated for the model. (The notion that an application can request large amounts of data and write its own code is more than a little concerning. It will almost certainly lead to even more new and unanticipated behaviors.) Some of these applications will create new industries and new jobs. Some will make existing industries and jobs obsolete. And as with fire, explosives, mass communication, computing, recombinant DNA/CRISPR, and nuclear weapons, the exact consequences are unknown.
Should you care? Should you be worried?
First, you should definitely care.
Over the past 50 years, I have been fortunate to be present at the creation of the first microprocessors, the first personal computers, and the first enterprise web applications. I’ve lived through the revolutions in telecom, life sciences, social media, etc., and seen new industries, markets, and customers created overnight. I may be seeing one more with ChatGPT.
One of the problems with disruptive technology is that disruption doesn’t come with a memo. History is full of journalists who wrote about it without recognizing it, and of executives who failed to see what they had (Xerox ignoring the graphical user interface and networking invented at their own Palo Alto Research Center, for example). Many people dismissed these breakthroughs because they looked like toys.
Others look at the same technology and realize at that moment that the world will never be the same (Steve Jobs at Xerox, for example). It may be a toy today, but as the technology scales, becomes more refined, and tens of thousands of creative people build applications on it, the inevitable becomes clear: the world is going to change.
We are probably seeing that moment now. Some will grasp the importance of ChatGPT immediately. Others will not.
Maybe we should take a deep breath and think about this?
A few people, concerned about the consequences of ChatGPT and other AGI-like applications, believe we are about to cross the Rubicon – a point of no return. They have suggested a six-month moratorium on training AI systems more powerful than GPT-4. Others find that idea ludicrous.
Scientists worrying about what they have created is not new. US scientists who worked on the development of the atomic bomb proposed civilian control of nuclear weapons. After World War II, in 1946, the US government seriously considered international control over the development of nuclear weapons. And until recently, most countries adhered to a treaty on the non-proliferation of nuclear weapons.
In 1974, molecular biologists were alarmed to realize that newly discovered gene-editing tools (recombinant DNA technology) could insert tumor-causing genes into E. coli bacteria. Without a shared recognition of the biohazards and agreed-upon best practices in biosafety, there was a real risk of accidentally creating something with dire consequences. They called for a voluntary moratorium on recombinant DNA experiments until they could agree on best practices in the lab. In 1975, the US National Academy of Sciences sponsored what became known as the Asilomar Conference. There, biologists drew up guidelines for laboratory safety standards depending on the type of experiment, along with a list of prohibited experiments (cloning things that could harm humans, plants, or animals).
Until recently, these guidelines have prevented most biological laboratory accidents.
Nuclear weapons and genetic engineering both had advocates for unlimited experimentation and no oversight: “Let the science go where it will.” Yet even these minimal controls have protected the world from potential disasters for 75 years.
Goldman Sachs economists predict that 300 million jobs could be affected by this new wave of AI. Other economists are only now realizing the ripple effects this technology will have. At the same time, new startups are emerging, and venture capital is already pouring money into the field, which will accelerate the impact of this generation of AI. Intellectual property lawyers are already arguing over who owns the data these AI models were built on. Governments and military organizations are coming to grips with the implications of this technology in the diplomatic, information, military, and economic spheres.
Now that the genie is out of the bottle, it does not seem unreasonable to ask AI researchers to take six months and follow the same model that other thoughtful and concerned scientists have followed before them. (Stanford has already taken down its own ChatGPT-like model over safety concerns.) Guidelines for using this technology should be developed, perhaps paralleling those created for gene-editing experiments – with types of experiments organized by risk, and containment levels matched to that risk.
Unlike atomic weapons and genetic engineering, whose development was driven by research scientists, the continued expansion and funding of generative AI is being driven by for-profit companies and venture capital.
Welcome to our brave new world.
Lessons learned
- Pay attention and stay tuned
- We are in for a tough journey
- We need an Asilomar conference for AI
- For-profit companies and VCs are interested in accelerating the pace.