
UK antitrust body to review generative AI

The UK competition watchdog has announced an initial review of “foundation AI models”, such as the large language models (LLMs) that underpin OpenAI's ChatGPT and Microsoft's New Bing. Generative AI models that power AI art platforms, such as OpenAI's DALL-E or Midjourney, are also likely to fall within scope.

The Competition and Markets Authority (CMA) said its review will look at competition and consumer protection considerations in the development and use of foundation AI models, with the aim of understanding “how foundation models are developing and producing an assessment of the conditions and principles that will best guide the development of foundation models and their use in the future”.

It plans to publish the review in “early September”, with a June 2 deadline for stakeholders to submit responses to inform the work.

“Foundation models, which include large language models and generative artificial intelligence (AI), that have emerged over the past five years, have the potential to transform much of what people and businesses do. To ensure that innovation in AI continues in a way that benefits consumers, businesses and the UK economy, the government has asked regulators, including the [CMA], to think about how the innovative development and deployment of AI can be supported against five overarching principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress,” the CMA wrote in a press release.

The Center for Research on Foundation Models at Stanford University's Institute for Human-Centered Artificial Intelligence is credited with coining the term “foundation models”, in 2021, to refer to AI systems that are trained on broad data at scale and can be adapted to a wide range of downstream applications.

“The development of AI touches on a number of important issues, including safety, security, copyright, privacy and human rights, as well as the ways markets work. Many of these issues are being considered by government or other regulators, so this initial review will focus on the questions the CMA is best placed to address: what are the likely implications of the development of foundation AI models for competition and consumer protection?” the CMA added.

In a statement, its CEO, Sarah Cardell, also said:

AI has burst into the public consciousness in recent months, but it has been on our radar for some time. It is a rapidly developing technology that has the potential to transform the way businesses compete, as well as drive substantial economic growth.

It is crucial that the potential benefits of this transformative technology are easily accessible to UK businesses and consumers, while individuals remain protected from issues such as false or misleading information. Our goal is to help this rapidly expanding new technology develop in a way that ensures open and competitive markets and effective consumer protection.

More specifically, the UK competition regulator said its initial review of foundation AI models will:

  • examine how competitive markets for foundation models and their use might evolve
  • explore what opportunities and risks these scenarios could bring for competition and consumer protection
  • produce guiding principles to support competition and protect consumers as foundation AI models develop

While it may seem early for the antitrust regulator to be reviewing such a fast-moving emerging technology, the CMA is acting on instructions from the government.

An AI white paper published in March signalled ministers' preference to avoid setting bespoke rules (or oversight bodies) to govern uses of AI at this stage. However, ministers said existing UK regulators, including the CMA, which was named directly, would be expected to issue guidance to encourage safe, fair and responsible uses of AI.

The CMA says its initial review of foundation AI models is in line with instructions in the white paper, where the government talked about existing regulators carrying out “detailed risk analysis” in order to be in a position to pursue potential enforcement, that is, against dangerous, unfair and irresponsible applications of AI, using their existing powers.

The regulator also points to its core mission of supporting open, competitive markets as another reason for taking a look at generative AI now.

Notably, the competition watchdog is set to gain additional powers to regulate Big Tech in the coming years, under plans Prime Minister Rishi Sunak's government revived last month, when ministers said they would move forward with a long-trailed (but delayed) reform aimed at the market power of digital giants.

The expectation is that the CMA's Digital Markets Unit, which has been operating in shadow form since 2021, will (finally) gain legislative powers in the coming years to apply proactive “pro-competition” rules tailored to platforms deemed to have “strategic market status” (SMS). So we can speculate that, in the future, providers of powerful foundation AI models may be judged to have SMS, meaning they could expect to face bespoke rules on how they must operate towards rivals and consumers in the UK market.

The UK data protection watchdog, the ICO, also has its eye on generative AI. It is another existing oversight body the government has tasked with paying special attention to AI, under its plan for context-specific guidance to steer development of the technology through the application of existing laws.

In a blog post last month, Stephen Almond, the ICO's executive director of regulatory risk, offered some advice and a bit of a warning for developers of generative AI about complying with UK data protection rules. “Organisations developing or using generative AI should be considering their data protection obligations from the outset, taking a data protection by design and by default approach,” he suggested. “This isn't optional: if you're processing personal data, it's the law.”

Meanwhile, across the English Channel in the European Union, lawmakers are in the process of hammering out a set of rules that are likely to apply to generative AI.

Negotiations towards a final text for the EU's incoming AI rulebook are ongoing, but attention is currently focused on how to regulate foundation models via amendments to the risk-based framework for regulating uses of AI that the bloc published in draft form more than two years ago.

It remains to be seen where the EU's co-legislators will land on what is sometimes also referred to as general purpose AI. But MEPs are pushing for a layered approach to tackle safety issues with foundation models; accountability complexities in AI supply chains; and specific content concerns (such as copyright) associated with generative AI.

Add to that, EU data protection law already applies to AI, of course. And privacy-focused investigations of models like ChatGPT are underway across the bloc, including in Italy, where an intervention by the local watchdog last month led OpenAI to rush out a series of privacy disclosures and controls.

The European Data Protection Board also recently set up a task force to support coordination between different data protection authorities on investigations of AI chatbots. Other regulators investigating ChatGPT include Spain's privacy watchdog.
