The European Union seeks to regulate artificial intelligence, in the age of ChatGPT

The fate to be reserved for conversational bots is a point of tension in Brussels around the draft "AI Act" regulation.

by Alexandre Piquard

What did the leaders of OpenAI and the French secretary of state for digital affairs, Jean-Noël Barrot, talk about in early January in San Francisco, where the organization is based? Artificial intelligence, of course, since OpenAI is the creator of ChatGPT, the software that has caused a sensation since December with its ability to generate text imitating human prose. But the discussions focused mainly on the regulatory project launched by Europe, known as the "AI Act".

The continent indeed aims to be the first to regulate this vast field, seen as both promising and carrying dangers. The success of ChatGPT has rekindled debate around this regulation, in gestation for more than a year: it is being closely watched by OpenAI and the lobbies of the digital giants, while, after the European Commission and the Council (which brings together the member states), it is now the European Parliament's turn to deliver its version of the text in March.

From the start, with the Commission's proposal in April 2021, the stated aim has been to produce a "balanced" text. "The European Union must be number one in regulation but also in innovation," says the office of the internal market commissioner, Thierry Breton, an advocate of building up a data economy, particularly an industrial one. The tone is less aggressive than on the Digital Markets Act and the Digital Services Act, which targeted "dominant" platforms in social networking, e-commerce or online search, often American ones.

The AI Act was therefore built on a risk-based approach: artificial intelligence is regulated according to its uses, deemed more or less dangerous. Some would be banned outright: Chinese-style "social scoring" systems, "subliminal techniques" aimed at manipulating citizens, software "that exploits vulnerabilities due to age, disability or social situation", as well as video surveillance allowing "the real-time biometric identification of people in public spaces", except for police use in investigations or the fight against terrorism.

"Contradictory wishes"

Other uses are classified as "high-risk": in transport (autonomous driving, etc.), education and human resources (exam grading, CV sorting, etc.), health (robot-assisted surgery), services (credit approval), justice (evaluation of evidence)... The text then imposes obligations: verifying the "quality" of the data used to train the software, "minimizing risks and discriminatory outcomes", ensuring a low error rate... Users must also be informed when they are interacting with a machine.

