Saturday, May 04, 2024

AI, ChatGPT critics and promoters descend on Washington


After years of inaction on big tech, and in the wake of ChatGPT's explosive success, lawmakers are looking to avoid making similar mistakes with artificial intelligence.

(Maria Alconada Brooks/The Washington Post; iStock)

Senator Chris Murphy saw a familiar face while watching the “AI Dilemma” video.

Technology ethicist Tristan Harris, known among lawmakers for sounding the alarm about the harmful effects of social media, now argues that artificial intelligence represents a far more dangerous development — perhaps even more dangerous to human survival than the advent of nuclear weapons.

The video's message, echoed by technologists including Apple co-founder Steve Wozniak, resonated with Murphy (D-Conn.), who quickly fired off a tweet.

“Something is coming. We are not ready,” the senator warned.

AI hype and fear have arrived in Washington. After spending years playing catch-up to Silicon Valley on social media, policymakers from both parties are turning their attention to artificial intelligence. Lawmakers are warily eyeing the AI arms race set off by the explosion of OpenAI's chatbot ChatGPT. The technology's ability to hold human-like conversations, write essays and even describe images has stunned users, but it has also raised new threats, from risks to children's online safety to misinformation that could distort elections and fuel fraud.

But policymakers arrive at this new debate bruised by battles over how to regulate the tech industry: despite years of congressional hearings, historic investigations and bipartisan proposals, they have yet to enact comprehensive tech rules. This time, some hope to move quickly enough to avoid repeating those mistakes.

“We’ve made the mistake of trusting the technology industry to self-police social media,” Murphy said in an interview. “I can’t believe we’re on the verge of making the same mistake.”

Consumer advocates and tech industry titans are converging on D.C. for what they expect will be a defining technology policy debate in the months and even years to come. Only a handful of Washington lawmakers are deeply versed in AI, leaving room for industry boosters and critics alike to shape the discussion.

“AI is going to reshape society in profound ways, and we’re not ready for it,” said Rep. Ted Lieu (D-Calif.), one of the few members of Congress with a computer science degree.

Silicon Valley Offensive

The companies behind ChatGPT and competing technologies have launched a preemptive charm offensive, highlighting their efforts to build artificial intelligence responsibly and ethically, said several people who spoke on condition of anonymity to discuss private conversations. Since Microsoft's investment in OpenAI, which allows it to integrate ChatGPT into its products, the company's president, Brad Smith, has discussed artificial intelligence during trips to Washington. OpenAI executives, who have been making the rounds in Washington for years, are meeting with lawmakers newly interested in artificial intelligence in the wake of ChatGPT's release.

A bipartisan delegation of 10 lawmakers from a House committee tasked with countering China's ruling Communist Party traveled to Silicon Valley this week to meet with tech executives and venture capitalists. A person familiar with the discussions between the panel and the companies, speaking on condition of anonymity, said they focused on recent developments in artificial intelligence.

At a lunch in a Stanford University auditorium, the lawmakers huddled with Google's president of global affairs, Kent Walker, and Smith, along with executives from Palantir and Scale AI. Many expressed openness to Washington regulating artificial intelligence, but one executive warned that aggressive antitrust enforcement could hamper the country's ability to compete with China, where access to massive troves of data is far less restricted, the people said.

Smith did not argue that AI should change antitrust rules, Microsoft spokeswoman Kate Frishman said.

Smith did call on the federal government, especially the Pentagon, to increase its investment in artificial intelligence, a move that would benefit such companies.

But the companies face a Congress that has grown increasingly skeptical of Silicon Valley's assurances about the risks of AI. During the meetings, lawmakers heard a "vigorous debate" about the potential dangers of artificial intelligence, said Rep. Mike Gallagher (R-Wis.), chairman of the House panel. But he said he left the meetings skeptical that the United States could take the drastic measures some technologists have proposed to pause AI development.

"We need to find a way to put these safeguards in place while at the same time ensuring that our technology sector keeps innovating," he said. "A pause, I fear, would only serve the interests of the CCP, not the interests of America."

The gathering on the Stanford campus was just miles from the 5,000-person meetups and AI house parties fueling San Francisco's tech boom, which has prompted venture capital investors to pour $3.6 billion into 269 AI deals between January and mid-March, according to the investment analytics firm PitchBook.

Across the country, officials in Washington were engaged in a flurry of activity of their own. President Biden held a meeting Tuesday on the dangers and opportunities of artificial intelligence, hearing from a variety of experts on the President's Council of Advisors on Science and Technology, including representatives of Microsoft and Google.

Sitting under a portrait of Abraham Lincoln, Biden told members of the council that the industry has a responsibility to "make sure their products are safe before they put them out there."

When asked whether AI is dangerous, he said it is an unanswered question. “Maybe,” he replied.

Two of the nation's top regulators, the Federal Trade Commission and the Justice Department, have signaled to Silicon Valley that they are monitoring the emerging field. The FTC recently issued a warning that companies could face penalties if they falsely overpromise what their artificial intelligence products can do or fail to assess risks before release.

Jonathan Kanter, the Justice Department's top antitrust enforcer, said at South by Southwest last month that his office has launched an initiative called "Project Gretzky" to stay ahead of the curve on competition issues in artificial intelligence markets. The project's name is a nod to hockey star Wayne Gretzky's famous line about skating to "where the puck is going."

Despite these efforts to avoid repeating the mistakes made in regulating social media, Washington is lagging behind other governments, especially in Europe.

Already, regulators in countries with comprehensive privacy laws are considering how those laws might apply to ChatGPT. This week, Canada's privacy commissioner said it would open an investigation into the chatbot. The announcement comes on the heels of Italy's decision last week to ban the chatbot over concerns that it violates EU laws designed to protect citizens' privacy. Germany is considering a similar move.

OpenAI responded to the new investigations in a blog post this week detailing the steps it takes to address AI safety, including limiting the personal information about individuals in the datasets it uses to train its models.

Meanwhile, Lieu is working on legislation to create a national commission to assess the risks of artificial intelligence and to establish a federal agency to oversee the technology, much as the Food and Drug Administration reviews drugs coming to market.

Getting buy-in for a new federal agency from the Republican-controlled House will be a challenge. But Lieu warned that Congress on its own would not be able to move quickly enough to develop laws governing artificial intelligence. Early struggles to legislate a narrower aspect of AI, facial recognition, showed Lieu that Congress was not the right venue for this work, he said.

Harris, the technology ethicist, has met in recent weeks with members of the Biden administration and powerful lawmakers from both parties on Capitol Hill, including Senate Intelligence Committee Chairman Mark R. Warner (D-Va.) and Sen. Michael F. Bennet (D-Colo.).

Along with Aza Raskin, with whom he co-founded the Center for Humane Technology, a nonprofit focused on the negative effects of social media, Harris gathered a group of D.C. heavyweights last month to discuss the looming crisis over drinks and hors d'oeuvres at the National Press Club. Attendees included Surgeon General Vivek H. Murthy, Republican pollster Frank Luntz, congressional staffers and FTC officials, including Sam Levine, the director of the agency's Bureau of Consumer Protection.

Harris and Raskin compared the current moment to the advent of nuclear weapons in 1944, and Harris urged policymakers to consider drastic measures to slow the rollout of AI, including an executive order.

"By the time lawmakers started trying to rein in social media, it was already intertwined with our economy, our politics, our media and our culture," Harris told The Washington Post on Friday. "AI is becoming entangled even more quickly, and by confronting the issue now, before it's too late, we can harness the power of this technology and modernize our institutions."

The message appears to have resonated with some wary lawmakers, and alarmed some AI experts and ethicists.

Sen. Michael F. Bennet (D-Colo.) cited Harris's tweets in a March letter to executives at OpenAI, Google, Snap, Microsoft and Facebook, asking the companies to outline the safeguards they use to protect children and teenagers from AI-powered chatbots. A Twitter thread had shown Snapchat's AI chatbot telling a supposed 13-year-old girl how to lie to her parents about an upcoming trip with a 31-year-old man and giving advice on how to lose her virginity. (Snap announced Tuesday that it has implemented a new system that takes a user's age into account during conversations.)

Drawing on an example from Harris and Raskin's video, Murphy tweeted that ChatGPT had "taught itself to do advanced chemistry" and developed human-like abilities.

"Please do not spread misinformation," responded Timnit Gebru, a former Google researcher who co-led the company's ethical AI team. "Our job countering the hype is hard enough without politicians jumping on the bandwagon."

"Policymakers and technologists don't always speak the same language," Harris said in an email. The presentation does not claim that ChatGPT taught itself chemistry, he said, but cites research finding that the system has capabilities in chemistry that no human designer or programmer intentionally gave it.

Industry representatives and experts took issue with Murphy's tweet, and his office fielded requests for clarification, he said in an interview. Murphy said he knows AI is not sentient and does not teach itself, but that he was trying to talk about chatbots in an accessible way.

The criticism, he said, “is consistent with a broader shaming campaign used by the industry to intimidate policymakers into silence.”

"The technology sector thinks they're smarter than everybody else, so they want to create the rules for how this technology goes out," he said, "but they want to capture the economic benefits."

Nitasha Tiku contributed to this report.
