
Google will work with Europe on a provisional "Artificial Intelligence Pact"

Google's Sundar Pichai has agreed to work with European lawmakers on what is being called an "AI Pact", apparently a provisional set of voluntary rules or standards to bridge the gap while formal regulation of AI applications is still being drawn up.

Pichai met with Thierry Breton, the European Union's Internal Market Commissioner, who issued a statement following today's meeting: "There is no time to waste in the AI race to build a safe online environment."

A briefing note published by his office after the meeting also says that the EU wants to be "proactive" and work on a pact on AI before EU legislation that will apply to AI is approved.

The note added that the bloc wants to launch an AI Pact "involving all major European and non-European AI actors on a voluntary basis", ahead of the legal deadline set by the aforementioned pan-European AI law.

However, at the moment, the only technology giant that has publicly joined the initiative is Google.

We have contacted Google and the European Commission to ask them about the initiative.

In other public statements, Breton stated:

We expect technology in Europe to respect all our rules, on data protection, online security and artificial intelligence. In Europe, it's not about choice.
I am pleased that Sundar Pichai recognizes this and is committed to complying with all EU rules.
The GDPR [General Data Protection Regulation] is in force. The DSA [Digital Services Act] and the DMA [Digital Markets Act] are being implemented. Negotiations on the Artificial Intelligence Law are approaching their final phase and I call on the European Parliament and the Council to adopt the framework before the end of the year.
Sundar and I agree that we cannot afford to wait until AI regulation actually becomes applicable, and that we should work together with all AI developers to develop an AI Pact on a voluntary basis ahead of the legal deadline.
I also welcome Sundar's commitment to step up the fight against disinformation ahead of the elections in Europe.

Although there are no details on what the "AI Pact" might contain, like any self-regulatory agreement it would lack legal force: there would be no way to compel developers to sign up, and no consequences for those who fail to meet their (voluntary) commitments.

Still, it may be a step toward the kind of international cooperation in standards-making that several technologists have called for in recent weeks and months.

The EU has precedent when it comes to getting tech giants to commit to self-regulation: over several years it has established a pair of voluntary agreements (also known as Codes), signed by several tech giants (including Google), committing them to improve their responses to reports of online hate speech and to the spread of harmful misinformation. And while the two Codes have not resolved what remain complex issues of online speech moderation, they have given the EU a yardstick for measuring whether platforms live up to their own claims, and, at times, a means of dishing out a light public beating when they do not.

More generally, the EU is ahead of the rest of the world in developing digital standards and has already drafted regulations on artificial intelligence: two years ago it proposed a risk-based framework for AI applications. However, even the bloc's best efforts lag behind advances in the field, which have accelerated dramatically this year after OpenAI's generative AI chatbot, ChatGPT, became widely available to web users and gained viral attention.

At present, the EU's draft Artificial Intelligence Act, proposed in April 2021, remains live legislation moving between the European Parliament and the Council, with the former recently agreeing on a series of amendments it wants to include, several of them aimed at generative AI.

EU co-legislators will need to reach a compromise on the final text, so it remains to be seen what the final shape of the bloc's AI regulations will be.

Furthermore, even if the law is approved before the end of the year, which is the most optimistic deadline, we will have to wait at least a year for it to be applied to AI developers. Hence EU commissioners are pushing for provisional measures.

Earlier this week, Vice President Margrethe Vestager, who heads the bloc's digital strategy, suggested the EU and the US were willing to cooperate on setting minimum standards before the legislation comes into force (via Reuters).

Speaking after the meeting with Google, Vestager tweeted: "We need the AI Act as soon as possible, but AI technology is evolving at extreme speed. So we need a voluntary agreement on universal standards for AI now."

A Commission spokesperson expanded on Vestager's comment: "At the G7 digital ministerial meeting in Takasaki, Japan, on 29-30 April, Vice-President Vestager proposed internationally agreed safety guardrails for AI that companies could comply with voluntarily until the AI Act comes into force in the EU. This proposal was taken up by the G7 leaders, who last Saturday agreed in their Communiqué to launch the 'Hiroshima Process on AI', with the aim of designing such guardrails, in particular for generative AI."

Despite these sudden expressions of high-level haste, it is worth noting that the EU's current data protection regulation, the GDPR, can be, and already has been, applied against certain AI applications, including ChatGPT, Replika and Clearview AI, to name three. For example, a regulatory intervention against ChatGPT in Italy in late March led to the suspension of the service, after which OpenAI added new disclosures and controls for users in an attempt to comply with privacy rules.

Added to this, as Breton points out, the incoming DSA and DMA may create stringent new requirements that AI app developers will have to meet, depending on the nature of their service, in the coming months and years as these rules begin to apply to digital services, platforms and, in the case of the DMA, the technology giants with the most influence on the market.

However, the EU remains convinced of the need for dedicated risk-based rules for AI. And, apparently, it is willing to double down on the "Brussels effect" its digital legislation can generate by announcing a provisional AI pact now.

In recent weeks and months, US lawmakers have also focused their attention on the delicate question of how to best regulate artificial intelligence. Recently, a Senate committee held a hearing in which Sam Altman, CEO of OpenAI, gave his opinion on how to regulate this technology.

Google may want to play the other side and rush to collaborate with the EU on voluntary standards. Let the AI regulation arms race begin!
