E.U. reaches deal on landmark AI bill, racing ahead of U.S.

European Union officials reached a landmark deal Friday on the world’s most ambitious law to regulate artificial intelligence, paving the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance.

At a time when the sharpest critics of AI are warning of its nearly limitless threat, even as advocates herald its benefits to humanity’s future, Europe’s AI Act seeks to ensure that the technology’s exponential advances are accompanied by monitoring and oversight, and that its highest-risk uses are banned. Tech companies that want to do business in the 27-nation bloc of 450 million consumers - the West’s single largest - would be compelled to disclose data and do rigorous testing, particularly for “high-risk” applications in products like self-driving cars and medical equipment.

Roberta Metsola, the president of the European Parliament, hailed the legislation as “avant-garde” and “comprehensive,” adding that the EU AI Act would set the “global standard” for years to come.

“This is all about Europe taking the lead, and we’ll do it our way responsibly,” she said.

The deal came together after about 37 hours of marathon talks between representatives of the European Commission, which proposes laws, and the European Council and European Parliament, which adopt them. France, Germany and Italy, speaking for the council, had sought late-stage changes aimed at watering down parts of the bill, an effort strongly opposed by representatives of the European Parliament, the bloc’s legislative branch of government.

The result was a compromise on the most controversial aspects of the law - one aimed at regulating the massive foundation language models that capture internet data to underpin consumer products like the popular chatbot ChatGPT and another that sought broad exemptions for European security forces to deploy artificial intelligence.

The latter issue emerged as the most contentious. The final deal banned scraping faces from the internet or security footage to create facial recognition databases, as well as systems that categorize people using sensitive characteristics such as race, according to a news release. But it created some exemptions allowing law enforcement to use “real-time” facial recognition to search for victims of trafficking, prevent terrorist threats and track down suspected criminals in cases of murder, rape and other crimes.

European digital privacy and human rights groups were pressuring representatives of the parliament to hold firm against the push by countries to carve out broad exemptions for their police and intelligence agencies, which have already begun testing AI-fueled technologies.

They warned that AI could be more broadly used to identify political protesters, or to monitor and classify people based on race, gender, sexual orientation or other markers. In an open letter published Friday, critics of the exemptions decried the rise of “dystopian” AI-fueled surveillance in Europe, blaming nations and tech companies for seeking to “legalize dangerous and discriminatory police AI.”

Companies that violate the EU AI Act could face fines up to 7% of global revenue, depending on the violation and the size of the company breaking the rules.

The law furthers Europe’s leadership role on tech regulation. For years, the region has led the world in crafting novel laws to address concerns about digital privacy, the harms of social media and concentration in online markets.

The architects of the AI Act have “carefully considered” the implications for governments around the world since the early stages of drafting the legislation, said Dragoș Tudorache, a Romanian lawmaker co-leading the AI Act negotiation. He said he frequently hears from other legislators who are looking at the E.U.’s approach as they begin drafting their own AI bills.

“This legislation will represent a standard, a model, for many other jurisdictions out there,” he said, “which means that we have to have an extra duty of care when we draft it because it is going to be an influence for many others.”

After years of inaction in the U.S. Congress, E.U. tech laws have had wide-ranging implications for Silicon Valley companies. Europe’s digital privacy law, the General Data Protection Regulation, has prompted some companies, such as Microsoft, to overhaul how they handle users’ data even beyond Europe’s borders. Meta, Google and other companies have faced fines under the law, and Google had to delay the launch of its generative AI chatbot Bard in the region due to a review under the law. However, there are concerns that the law created costly compliance measures that have hampered small businesses, and that lengthy investigations and relatively small fines have blunted its efficacy among the world’s largest companies.

The region’s newer digital laws - the Digital Services Act and Digital Markets Act - have already impacted tech giants’ practices. The European Commission announced in October that it is investigating Elon Musk’s X, formerly known as Twitter, for its handling of posts containing terrorism, violence and hate speech related to the Israel-Gaza war, and European internal market commissioner Thierry Breton has sent letters demanding other companies be vigilant about content related to the war under the Digital Services Act.

In a sign of regulators’ growing concerns about artificial intelligence, Britain’s competition regulator on Friday announced that it is scrutinizing the relationship between Microsoft and OpenAI, following the tech behemoth’s multiyear, multibillion-dollar investment in the company. Microsoft recently gained a non-voting board seat at OpenAI, following a company governance overhaul in the wake of chief executive Sam Altman’s return.

Microsoft president Brad Smith said in a post on X that the companies would work with the regulators, but he sought to distinguish the companies’ ties from other Big Tech AI acquisitions, specifically calling out Google’s 2014 purchase of the London company DeepMind.

Meanwhile, Congress remains in the early stages of crafting bipartisan legislation addressing artificial intelligence, after months of hearings and forums focused on the technology. Senators this week signaled that Washington was taking a far lighter approach focused on incentivizing developers to build AI in the United States, with lawmakers raising concerns that the E.U.’s law could be too heavy-handed.

Concern was even higher in European AI circles, where the new legislation is seen as potentially holding back technological innovation, giving further advantages to the United States and Britain, where AI research and development is already more advanced.

“There will be a couple of innovations that are just not possible or economically feasible anymore,” said Andreas Liebl, managing director of the AppliedAI Initiative, a German center for the promotion of artificial intelligence development. “It just slows you down in terms of global competition.”

The deal on Friday appeared to ensure that the European Parliament could pass the legislation well before the body breaks in May ahead of legislative elections. Once passed, the law would take two years to come fully into effect and would compel E.U. countries to formalize or create national bodies to regulate AI, as well as a pan-regional European regulator.