Marathon EU talks fail to reach deal on Artificial Intelligence Act
The European Parliament and member states failed on Thursday to reach a political deal on the Artificial Intelligence Act following marathon talks in Brussels that stretched over 22 hours. Negotiations will resume on Friday.
The negotiations began on Wednesday afternoon, continued through the night and into the morning, and concluded on Thursday afternoon, with an agenda that reportedly featured over 23 items, reflecting the extreme technicality of the issues at hand.
The Act is considered the world’s first attempt at a comprehensive, ethics-based and environmentally sustainable regulation of artificial intelligence, a technology with an astonishing and often unpredictable capacity to evolve.
The discussions took place against the backdrop of aggressive lobbying from Big Tech and start-ups, stark warnings from civil society and intense media scrutiny as the legislation from Brussels could well influence state-led efforts across the world.
Mindful of the high stakes, the European Parliament and the Council, which represents member states, vowed to give it a second chance on Friday, starting at 9.00 am.
“Lots of progress made over (the) past 22 hours on the AI Act,” said Thierry Breton, the European Commissioner for the internal market.
Lawmakers who took part in the drawn-out discussions also said considerable progress had been achieved, though they withheld specifics, citing the confidentiality of the talks.
The negotiations featured hard-fought bargaining between MEPs and governments over a string of deeply complex questions, most notably the regulation of the foundation models that power chatbots like OpenAI’s revolutionary ChatGPT, and targeted exceptions for the use of real-time biometric identification in public spaces.
Despite their impressive and possibly record-breaking length, Thursday’s marathon talks were not enough to get through the entire list of open questions.
Even if Friday’s second try bridges the gaps and brings forth a provisional agreement at the political level, more consultations will likely be required to fine-tune all the technical details. Spain, the current holder of the Council of the EU’s rotating presidency, is tasked with keeping the 27 member states and their wide range of views on the same page.
Once the draft, which covers hundreds of pages in articles and annexes, is rewritten and a consolidated version emerges, it will be sent to the European Parliament for a new vote in the hemicycle, followed by the Council’s final green light.
The law will then have a grace period before it becomes fully enforceable in 2026.
An ever-evolving technology
First presented in April 2021, the AI Act is a ground-breaking attempt to ensure the most radically transformative technology of the 21st century is developed in a human-centric, ethically responsible manner that prevents and contains its most harmful consequences.
The Act is essentially a product safety regulation that imposes a staggered set of rules that companies need to follow before offering their services to consumers anywhere across the bloc’s single market.
The law proposes a pyramid-like structure that splits AI-powered products into four main categories according to the potential risk they pose to the safety of citizens and their fundamental rights: minimal, limited, high and unacceptable.
Those that fall under the minimal risk category will be freed from additional rules, while those labelled as limited risk will have to follow basic transparency obligations.
The systems considered high risk will be subject to stringent rules that will apply before they enter the EU market and throughout their lifetime, including substantial updates. This group will encompass applications that have a direct and potentially life-changing impact on private citizens, such as CV-sorting software for job interviews, robot-assisted surgery and exam-scoring programmes in universities.
High-risk AI products will have to undergo a conformity assessment, be registered in an EU database, carry a signed declaration of conformity and bear the CE marking – all before they reach consumers. Once they become available, they will be under the oversight of national authorities. Companies that violate the rules will face multi-million-euro fines.
AI systems with an unacceptable risk for society, including social-scoring to control citizens and applications that exploit socio-economic vulnerabilities, will be outright banned across all EU territory.
Although this risk-based approach was well received back in 2021, it came under extraordinary pressure in late 2022, when OpenAI launched ChatGPT and triggered a global furore over chatbots. ChatGPT was soon followed by Google’s Bard, Microsoft’s Bing Chat and, most recently, Amazon’s Q.
Chatbots are powered by foundation models, which are trained with vast troves of data, such as text, images, music, speech and code, to fulfil a wide and fluid set of tasks that can change over time, rather than having a specific, unmodifiable purpose.
The Commission’s original proposal did not introduce any provisions for foundation models, forcing lawmakers to add an entirely new article with an extensive list of obligations to ensure these systems respect fundamental rights, are energy efficient and comply with transparency requirements by disclosing that their content is AI-generated.
This push from Parliament was met with scepticism from member states, who tend to prefer a soft-touch approach to law-making. Germany, France and Italy, the bloc’s three biggest economies, came forward with a counter-proposal that favoured “mandatory self-regulation through codes of conduct” for foundation models. The move sparked an angry reaction from lawmakers and threatened to derail the legislative process.
According to Reuters, Thursday’s talks helped co-legislators agree on provisional terms for foundation models. Details on the agreement were not immediately available.
A contentious issue that still needs to be resolved is the use of real-time remote biometrics, including facial recognition, in public spaces. Biometrics refers to systems that analyse biological features, such as facial traits, eye structures and fingerprints, to determine a person’s identity, usually without the person’s consent.
Lawmakers are defending a blanket ban on real-time biometric identification and categorisation based on sensitive characteristics like gender, race, ethnicity or political affiliation. Member states, on the other hand, argue exceptions are needed to enable law enforcement to track down criminals and thwart threats to national security.