AI’s TikTok moment: How Washington is facing two tech problems at once.


For more crisp and insightful business and economic news, subscribe to the Daily Upside newsletter. It’s completely free and we guarantee you’ll learn something new every day.

The artificial intelligence revolution is upon us, ushered in by OpenAI’s powerful ChatGPT, a large language model chatbot, and it presents a two-sided coin of an opportunity. On the one hand, there are astounding technological advances, especially in the field of medicine; on the other, there’s the nagging fear that AI, and not humanity, will write the last chapter.

Technophobes aren’t the only ones worried about the nightmare scenario. Even Sam Altman, OpenAI’s co-founder/CEO/doomsday-prepping entrepreneur, is apprehensive about his own company’s innovations: “We are a little bit scared of this,” Altman told ABC News in a wide-ranging interview last month.

That’s not very reassuring. Then again, at the dawn of Facebook, Mark Zuckerberg never seemed to publicly reckon with the fledgling network’s potential to both enhance human communication and spread misinformation, more or less ruining everyone’s Thanksgiving and Christmas dinners in the process.

And Altman, like Silicon Valley CEOs before him, is practically begging Congress to regulate his company (and unlike most of his Big Tech peers, he seems sincere in asking). But if he gets his wish, that will leave a broken Washington scrambling to figure out the rules and regulations for a complex tech enterprise it doesn’t fully understand.

Sound familiar?

Indeed, lawmakers on Capitol Hill are grappling with not one but two thorny tech-regulation issues at once: Congress is also playing geopolitical and cultural watchdog over TikTok, the wildly popular video app owned by Chinese parent company ByteDance.

While the two questions are not inherently linked, watching how Washington resolves one may provide clues to how it will resolve the other. Above all, the common thread between the two shows where politicians’ concerns really lie: China’s access to TikTok user data, combined with its own rapid advances in AI, has Capitol Hill in a tizzy, and not entirely without reason.

The state of play

The adoption curve of ChatGPT is unprecedented. In January, UBS analysts estimated that the chatbot had reached 100 million monthly active users and 13 million daily active users. That was just two months after its public launch, and before the GPT-4 software update turned the chatbot into a standardized-test-acing wunderkind. It is the fastest-growing consumer app in history; by comparison, TikTok took nine months to cross the 100 million user mark, and Instagram took two and a half years.

“In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,” UBS analysts wrote.

In other words, your coworkers may be getting a little more AI help with their emails, presentations, and memos than you think (The Daily Upside, we assure you, is always written by a real human… but feel free to blame any typos on rogue bots).

Playing catch-up: You might wonder just how far behind our sclerotic lawmakers in Washington are in responding to this earth-shaking technology.

The answer: not quite as far behind as you might think, which you may or may not consider good news. Last October, before ChatGPT rocketed AI into the mainstream, the Biden administration released what it called a “Blueprint for an AI Bill of Rights” from the Office of Science and Technology Policy.

The 73-page document, while not legally binding, provides guidelines for AI developers to follow and lawmakers to investigate. It also contains insights into what, if any, protections against AI may soon find their way into federal law:

  • The manifesto is based on five key principles: AI systems should be subject to pre-deployment risk testing and continuous monitoring; users should not face discrimination from AI and algorithms; AI should not engage in invasive data practices, and users should have agency over how their data is used; users should be informed when they are interacting with AI systems and given an explanation of how certain results are reached; and people should be able to opt out of interacting with AI when appropriate.
  • “More than a set of principles, this is a blueprint to empower the American people to expect better and demand better from their technologies,” White House deputy director of science and technology policy Alondra Nelson said in a press release after the document’s publication.

In a way, the document can be seen as a mulligan. Washington let the last decade’s cycle of social media innovation unfold without much scrutiny, as the companies we once celebrated built an infrastructure of surveillance capitalism that collects and shares as much personal user data as possible. Algorithmic discrimination, after all, is a problem that has long plagued internet platforms like LinkedIn.

Meanwhile, human users have been given little say in exactly how and when their data is collected, or in how it is used to shape what they see in their social media feeds.

Which brings us to TikTok: While ChatGPT’s progress dominated several (presumably human-written) newspaper headlines last month, TikTok CEO Shou Zi Chew completed a rite of passage for American tech CEOs: testifying before Congress for hours, subjecting himself to both savvy and completely nonsensical lines of questioning.

(Photo credit: World Economic Forum/Flickr)

“Does TikTok access the home Wi-Fi network?” asked Congressman Richard Hudson of North Carolina, a question rivaling Senator Orrin Hatch’s 2018 question to Mark Zuckerberg: “How do you sustain a business model in which users don’t pay for your service?” (“Senator, we run ads,” a young Zuck replied.)

Still, national security officials worry that TikTok’s industry-standard collection of user data could hypothetically allow sensitive US information to be shared with Chinese Communist Party officials, who would need only to ask and could not legally be refused.

The response to such fears is a rare bipartisan bill being introduced in the Senate with public support from the White House. But critics say it’s a slapdash solution that could do more harm than good.

Restrictions: Although it’s been dubbed a “TikTok ban” in shorthand, the bipartisan RESTRICT Act introduced last month doesn’t specifically mention TikTok. Instead, the bill gives the executive branch broad authority to assess national security threats and then to act on the “transactions” and “content” of information and communication technology companies tied to six “foreign adversaries”: Cuba, Iran, North Korea, Venezuela, Russia and, of course, China. So if the White House suspects Beijing of misusing US user data, TikTok could be toast.

Critics, chief among them the Electronic Frontier Foundation, a nonprofit that defends digital privacy and free speech, say the bill’s language is vague and would further fragment the global internet without addressing the fundamental issues of surveillance.

  • The law could result in severe criminal penalties, including up to 25 years in prison, for trying to “circumvent” the TikTok ban by using the app via a VPN (to make the user “appear” on the internet to be outside the US) or entering the US from another country with the app already downloaded on a device.
  • “In general, the law allows the executive branch to decide which technologies can enter the United States with very limited oversight by the public or its representatives about the law’s implementation,” the organization said in a review of the law earlier this month.

The EFF has proposed that the government instead enact comprehensive consumer data privacy reforms to reduce the potential for third-party access to personal data. Just don’t count on TikTok’s biggest US competitors (cough, cough, Google and Meta) being all that interested in that happening.

Eyes on Shanghai: The same US-China tech tensions that ensnared TikTok and fueled a battle for semiconductor supremacy may now be fueling a global AI arms race.

Eileen Donahoe, former US ambassador to the UN Human Rights Council and current executive director of Stanford University’s Global Digital Policy Incubator, told NBC News that if democratic countries do not take the lead in developing AI and authoritarian regimes surge ahead, democracy and human rights will be at risk.

The parallels with Cold War-era logic have not been lost on critics. New York Times columnist Ezra Klein recently expressed concern about exactly this kind of thinking: “If one country pauses, the others will push harder.”

Be that as it may, this week China pumped the brakes on its own AI sector, announcing several newly proposed rules on AI development.

  • Under draft rules seen by The Wall Street Journal, Chinese-developed AI will be limited in what content it can generate (China loves its censorship). That’s in addition to rules passed last year requiring permission to feature real people in “deepfake” photos and videos, i.e., hyper-realistic AI recreations of humans.
  • The draft also restricts the datasets Chinese AI can be trained on, in contrast with ChatGPT, for example, which is trained on a virtually boundless trove of public data.

And it’s not just China cracking down. In Italy, data protection regulators have restricted access to ChatGPT until certain safeguards are put in place.

What about here in the US? Things are heating up quickly. This week, the Biden administration began weighing what checks could be placed on AI development, specifically to protect children, create accountability measures, and establish a certification and risk-testing process for introducing new AI. On Thursday, Axios reported that Chuck Schumer’s office was circulating a regulatory framework that would include key safeguards, including:

  • Clear and transparent ethical boundaries.
  • Identifying who trained the AI system and who the intended user base is.
  • AI data source disclosure.
  • An explanation of how AI gets to the answers.

That last point, in particular, is easier said than done: unfathomably powerful algorithmic systems arrive at conclusions through “billions of mathematical operations” that are incomprehensible to anyone, Wired columnist and author Meghan O’Gieblyn wrote in her book God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning.

Still, the push for regulation continues. Also this week, the AI Now Institute, a leading research center studying the social impacts and implications of AI, released a comprehensive guide to regulating the industry.

A handful of private actors have amassed power and wealth rivaling that of nation-states while developing and promoting artificial intelligence as critical social infrastructure, the report notes (its authors, Amba Kak and Sarah Myers West, are both former advisers to, you guessed it, FTC Chair Lina Khan). Among its top proposals: modeling AI risk assessment on the FDA’s approval path for new drugs, putting the onus on tech companies to prove their AI systems are safe before they can be released to the public; and, as in Italy, making AI policy intrinsically linked with data policy.

Still, as the EFF points out, the US lacks comprehensive data privacy protections comparable to the rest of the world — the European Union’s General Data Protection Regulation is the gold standard.

Moreover, as some of the technology’s own proponents argue, if the threat of artificial intelligence is as serious as nuclear proliferation and climate change, solutions must be developed through international cooperation. Unfortunately, judging by the response to TikTok, the world’s biggest power players are moving toward breaking up our digital ecosystems, not bringing them together.
