Brussels and Google pitch voluntary AI pact to fill legislative gap

EU legislators are currently negotiating the Artificial Intelligence Act, but the legislation could take up to three years to be fully applicable.

The European Commission and Google have committed to crafting a voluntary pact for artificial intelligence to mitigate the gravest risks associated with this rapidly evolving technology until proper legislation is put in place.

The pledge was announced after Google CEO Sundar Pichai met with several European Commissioners during a visit to Brussels, where the topic of AI featured prominently in the conversations.

“We expect technology in Europe to respect all of our rules, on data protection, online safety, and artificial intelligence. In Europe, it’s not pick-and-choose,” Thierry Breton, European Commissioner for the internal market, said on Wednesday, according to a short read-out.

“Sundar and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and to work together with all AI developers to already develop an AI Pact on a voluntary basis ahead of the legal deadline.”


The voluntary pact, whose specific details are still unclear, will involve “all major” companies working in the AI field, both inside and outside Europe, Breton added.

Google did not immediately reply to a request for comment.

Although AI has long been on the policy radar of Brussels, the market explosion of ChatGPT, the chatbot developed by OpenAI, has jolted the debate and put so-called foundation models under the microscope.

Foundation models are those trained with vast troves of data, such as text, images, music, speech and code, with the goal of fulfilling an ever-expanding set of tasks, rather than having a specific, unmodifiable purpose.

Chatbots like OpenAI’s ChatGPT and Google’s Bard are some of the early examples of this technology, which is expected to further evolve in the coming years.


While investors have gladly jumped on chatbots, critics have decried their unchecked development, raising the alarm about bias, hate speech, fake news, state propaganda, impersonation, IP violations and labour redundancies.

ChatGPT was temporarily banned in Italy after authorities raised data privacy concerns.

Prelude to legislation

In Brussels, a sense of urgency has spread as a result of the chatbot phenomenon.

EU legislators are currently negotiating the Artificial Intelligence Act, a world-first attempt to regulate this technology based on a human-centric approach that splits AI systems into four categories according to the risk they pose to society.

The act was proposed by the European Commission more than two years ago and is being amended to reflect the latest developments, such as the remarkable rise of foundation models.


Negotiations between the European Parliament and member states are scheduled to conclude before the end of the year.

The law, however, includes a grace period to allow tech companies to adapt to the new legal framework, meaning the act could take up to three years to become fully applicable across the bloc.

The newly announced pact is meant to serve as a prelude and fill the legislative void, even if its voluntary nature will inevitably limit its reach and effectiveness.

Speaking to MEPs after his meeting with Pichai, Commissioner Breton defended the need to have an intermediate rulebook comprising the “broad outlines” of the AI Act.

“I already have a common vision of what could be put in place in anticipation and which could allow us to give some elements of protection,” Breton told a parliamentary committee, referring to the possibility of “labelling” AI systems.


“We have to manage the urgency but we must not slow down innovation either, so we have to find the means, the right means, and we also have to be quite firm on certain elements that will have to be supervised, and anticipate to some extent the effects of the AI Act.”

Breton’s plans stood in contrast with the remarks of Sam Altman, the CEO of OpenAI, who on Wednesday told Reuters his company might consider leaving the European market if it could not comply with the AI Act.

“The current draft of the (act) would be over-regulating, but we have heard it’s going to get pulled back,” Altman told Reuters. “They are still talking about it.”
