
Europe’s A.I. Act—which, if passed, will probably be the world’s first comprehensive regulation of the technology—is moving forward at high speed.
The bill was first introduced by the European Commission just over two years ago, but the rapid rise of generative A.I. recently forced lawmakers to scramble to modernize it. Today, the European Parliament approved its preferred version of the law, which would have major impacts on the likes of ChatGPT. The move opens the way for final “trilogue” negotiations between Parliament, the Commission, and national governments—and, to drive home the sense of urgency, those talks begin tonight.
For the big generative A.I. players, the most important part of Parliament’s preferred version is a new article that would force the providers of foundation models (such as OpenAI’s GPT) to assess their systems for potential impacts on fundamental rights, health and safety, the environment, democracy and more—and to then mitigate any problems—before releasing them onto the market.
Content generated by these foundation models would have to be labeled as such, and A.I. providers would have to publish summaries of the copyrighted data they used to train the models—a potentially tall order if the training material was indiscriminately scraped from the internet.
Social media recommendation systems would be classified as high-risk, like A.I. used in critical infrastructure, recruitment, or robot-assisted surgery. That would mean serious oversight measures and transparency obligations toward users.
Meanwhile, digital rights and consumer advocates are pretty ecstatic with the Parliament’s new bans on any real-time facial-recognition systems in public spaces; most retroactive remote biometric identification systems; the scraping of facial images from social media to create databases for facial recognition; predictive policing; social scoring by companies; automated emotion recognition in law enforcement, the workplace, and schools; and biometric categorization systems using characteristics like race or gender.
The same activists are, however, very unhappy with the Act’s lack of protections for migrants facing A.I.-powered risk assessments at Europe’s borders, and with the leeway A.I. vendors would have in classifying their own systems on the risk scale. Trade unionists are also grumbling that the Act only restricts A.I. in the workplace if it can be shown to pose a “significant risk”—they would prefer to be able to apply the precautionary principle.
“The bans proposed by the Parliament today on the use of facial recognition in publicly accessible spaces, or on social scoring by businesses, are essential to protect fundamental rights,” said Ursula Pachl, deputy director general of the European Consumer Organisation (BEUC). “The creation of rights for consumers, such as a right to be informed that a high-risk A.I. system will take a decision about you, are also very important.”
But Pachl added: “We however regret that the Parliament gives businesses the option to decide if their A.I. system is considered high-risk or not, and to thus escape from the main rules of the law.”
It is extraordinary for the first trilogue to take place right after Parliament’s plenary vote on a proposed law, but here we are. The political pressure to get this over the finish line is immense, and the final version may even be ready this year (with companies probably then given a couple of years to adapt before the law comes into force).
However, that compromise may not look quite like what I’ve just described. The EU’s member states will probably have their own ideas about restricting the use of A.I. in law enforcement, for example—and Big Tech’s lobbyists will be bending the ears of those national governments regarding the impact on all those red-hot large language models. So stay tuned. Given the influence of EU legislation on other countries’ tech laws, the A.I. Act’s final form will have global significance.
More news below.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
Data Sheet’s daily news section was written and curated by Andrea Guzman.