
The UK's competition watchdog has raised concerns over the rapid expansion of artificial intelligence (AI), citing potential risks such as increased fraudulent activities, fake reviews, dissemination of false information, and the dominance of a few major players. While acknowledging the benefits and productivity boost AI can bring, the Competition and Markets Authority (CMA) calls for vigilance in ensuring consumer trust and preventing power concentration in the AI market.
Concerns over potential pitfalls of AI
The rapid rise of artificial intelligence does not guarantee positive outcomes, warns the Competition and Markets Authority (CMA) in the UK. The regulator highlights the potential pitfalls stemming from this rapid growth, including a surge in misinformation, fraudulent activities, and fake reviews. Moreover, businesses and consumers could face high prices to use the technology. While acknowledging the advancements AI can bring, the CMA underscores the importance of careful monitoring of these potential risks.
Foundation models, the bedrock technology that powers AI tools, have become a hot topic of discussion due to their potential societal implications. Examples of such models include the well-known ChatGPT chatbot and the image generator Stable Diffusion. These tools have triggered debates over their potential to disrupt the job market, especially white-collar jobs in areas like law, IT, and media. Furthermore, their ability to mass-produce disinformation raises concerns about their impact on voting and consumer decision-making.
Cautious optimism towards AI integration
CMA Chief Executive Sarah Cardell emphasized the swift integration of AI into everyday life and its potential to simplify tasks and increase productivity. However, she cautioned against complacency, stressing that the benefits of AI cannot be taken for granted. Cardell underscored the need to ensure that AI's development neither erodes consumer trust nor ends up controlled by a small number of dominant players, either of which could prevent the economy from reaping AI's full benefits.
The CMA defines foundation models as large-scale machine-learning models that are trained on vast amounts of data and designed to be adaptable, capable of handling a wide array of tasks and operations. Such models power various AI tools, including chatbots and image generators, as well as software products like Microsoft's 365 office suite.
Release of foundation models
According to the CMA's estimate, roughly 160 foundation models have been released by a variety of firms, including tech giants like Google, Meta (formerly Facebook), and Microsoft. Notable AI companies such as OpenAI and the UK-based Stability AI have also contributed to this growing number. The watchdog's estimate underscores the widespread adoption and development of such models by companies across different sectors.
CMA proposes principles for AI model development
To ensure a healthy AI market, the CMA proposed a new set of guiding principles for the development and use of AI models. These principles include granting access to data and computing power, ensuring that businesses have diverse options for accessing AI models, and providing consumers and businesses with clear information about the use and limitations of AI models. These measures aim to prevent anti-competitive practices, such as 'bundling' AI models into other services, and to ensure fair competition within the AI landscape.