Europe is trying to take a leading role in regulating the uses of AI


In two years’ time, if all goes well, EU residents will be protected by law from some of the most controversial uses of AI, such as street cameras that identify and track people, or government computers that rate an individual’s behavior.

This week, Brussels set out its plans to become the world’s first bloc with rules on how artificial intelligence can be used, in an attempt to put European values at the heart of the rapidly developing technology.

Over the past decade, AI has become a strategic priority for countries around the world, and the two world leaders, the US and China, have taken very different approaches.

China’s state-led plan has seen it invest heavily in the technology and rapidly deploy applications that have helped the government increase surveillance and control of the population. In the United States, AI development has been left to the private sector, which has focused on commercial applications.

“The United States and China have been the ones innovating and leading investment in AI,” said Anu Bradford, an EU law professor at Columbia University.

“But this regulation wants to put the EU back in the game. It seeks to balance the idea that the EU needs to become a technological superpower and compete with China and the United States, without compromising European values or fundamental rights.”

EU officials expect the rest of the world to follow suit and say Japan and Canada are already looking closely at the proposals.

While the EU wants to curb the way governments can use AI, it also wants to encourage emerging companies to experiment and innovate.

Officials said they hoped the clarity of the new framework would help give confidence to these start-ups. “We will be the first continent to give guidelines. So if you want to use artificial intelligence applications, come to Europe. You will know what to do and how to do it,” said Thierry Breton, the French commissioner in charge of the bloc’s digital policy.

In an attempt to be pro-innovation, the proposals recognize that regulation often falls more heavily on smaller companies and therefore incorporate measures to help them. These include “sandboxes” where start-ups can use data to test new programs to improve the justice system, health care, and the environment without fear of heavy fines if mistakes are made.

Alongside the regulations, the commission issued a detailed roadmap to increase investment in the sector and pool public data across the bloc to help train machine learning algorithms.

The proposals are likely to be fiercely debated by the European Parliament and the member states, both of which will have to approve the bill. The legislation is not expected to take effect until at least 2023, according to people following the process closely.

But critics say that in trying to support commercial AI, the bill does not go far enough in banning discriminatory applications of AI, such as predictive policing, controlling migration at borders, and biometric categorization of race, gender, and sexuality. Currently, these are classed as “high-risk” applications, which means that anyone deploying them will have to notify the people they are used on and provide transparency about how the algorithms reached their decisions, but their widespread use, especially by private companies, will still be allowed.

Other applications classed as high-risk, but not banned, include the use of AI in hiring and managing workers, as currently practised by companies such as HireVue and Uber; AI that evaluates and monitors students; and the use of AI in granting and revoking public benefits and assistance services.

Access Now, a Brussels-based digital rights group, also noted that the direct bans on both live facial recognition and credit scoring are aimed only at public authorities, leaving untouched companies such as the facial recognition firm Clearview AI or AI credit-scoring start-ups such as Lenddo and ZestFinance, whose products are available worldwide.

Others highlighted the notable absence of citizens’ rights in the legislation. “The whole proposal governs the relationship between providers (those developing [AI technologies]) and users (those deploying them). Where do people come in?” Sarah Chander and Ella Jakubowski of European Digital Rights, an advocacy group, wrote on Twitter. “It seems there are very few mechanisms by which those directly affected or harmed by AI systems can seek redress. This is a big miss for civil society, discriminated groups, consumers and workers.”

Meanwhile, lobbyists representing Big Tech’s interests also criticized the proposals, saying they would stifle innovation.

The Center for Data Innovation, part of a think tank whose parent organization receives funding from Apple and Amazon, said the draft legislation dealt a “detrimental blow” to the EU’s plans to be a world leader in AI, and that the new set of standards would hamstring technology companies hoping to innovate.

In particular, it objected to the ban on AI that “manipulates” people’s behavior and to the regulatory burden placed on “high-risk” AI systems, such as mandatory human oversight and testing for safety and effectiveness.

Despite these criticisms, the EU is concerned that if it does not act now to set rules on AI, it will allow technologies contrary to European values to spread globally.

“The Chinese have been very active in applications that concern Europeans. They are being actively exported, especially for policing purposes, and there is a lot of demand among illiberal governments,” Bradford said. “The EU is very concerned about the need to do its part to stop the global adoption of these deployments that compromise fundamental rights, so there is definitely a race for values.”

Petra Molnar, an associate director at York University in Canada, agreed that the draft legislation goes deeper and is more focused on human values than early proposals in the United States and Canada.

“There’s a lot of hand-waving around ethics and AI in the US and Canada, but [the proposals] are more superficial.”

Ultimately, the EU is betting that the development and commercialization of AI will be driven by public trust.

“If we can have better regulated AI that consumers trust, this will also create a market opportunity, because … it will be a source of competitive advantage for European systems [as] they are considered reliable and of high quality,” said Bradford of Columbia University. “You don’t just compete on price.”
