
The Battle Over Foundation Models in the EU AI Act


The United States’ Big Tech companies, European countries, well-connected European startups, and former French Ministers are all using their considerable influence and political capital to drastically alter the European rules on Artificial Intelligence through extensive lobbying.

Connor Axiotes, Communications at Control AI.

Their goal? To remove ‘foundation models’ – the most powerful, state-of-the-art AI systems, such as the one underpinning OpenAI’s ChatGPT – from the European Union’s landmark AI Act. This well-organised and politically powerful alliance seeks to push regulation solely onto application developers, who have the least control over these AIs.

The EU finds itself at risk not just of falling behind the US and the UK – which have produced an Executive Order on AI and hosted the world’s first AI Safety Summit, respectively – but of running the wrong race altogether.

The EU AI Act was set to be the world’s first comprehensive piece of AI legislation – until the ‘Big Three’ member states, France, Germany, and Italy, decided that the part of the Act which seeks to regulate the most powerful and dangerous AIs – the part dealing with ‘foundation models’ – should be stricken from the Act altogether.

The three countries are now calling on the Spanish Presidency for ‘mandatory self-regulation through codes of conduct’ for foundation models. This would put the onus on the applications of AI rather than on the technology itself, reversing the usual approach of regulating at the source. It would burden those with the least influence in the AI ecosystem, the application developers, while exempting those who create and control these powerful models: US Big Tech and their favoured European startups.

Why the sudden change? France was, at one point, one of the most vociferous advocates of the need to regulate foundation models, such as the large language models that form the basis of OpenAI’s ChatGPT. Now, French President Emmanuel Macron’s former Minister for Digital Economy, Cédric O, a co-founder and shareholder of the influential French startup Mistral, has become vocal against regulations that he says could ‘kill’ his company.

Cédric O also co-founded En Marche alongside Macron. Until 2022, as minister, he argued that “we need more regulation” on tech companies. As he stated at VivaTech 2021, France’s largest tech conference, Cédric wanted to rein in the American big tech “oligopoly” to “protect the public interest”.

In 2022, at a tech conference in New York, O reiterated his position: “we need more regulation. So if the price to pay is to have a different framework in the U.S. and the EU, I would go for that.” His tune suddenly changed by mid-2023, just after he became a shareholder and part of the ‘founding team’ of Mistral, a new AI startup with uncanny political weight. Soon after, he started declaring in interviews that the ‘EU’s AI Act could kill our company’. Around the same time, France began advocating for the exclusion of foundation models from the Act.

Across the Atlantic, US Big Tech has been lobbying the EU to weaken and remove foundation model regulations in the EU AI Act. Major players such as OpenAI, Microsoft, and Google were placed among the ‘Top 5 Lobbyists’ identified by the Corporate Europe Observatory. These tech giants are effectively seeking a legal environment that allows them unbridled freedom to develop and deploy these technologies without stringent oversight.

By exempting foundation models from the EU AI Act, the EU risks creating a regulatory blind spot where the most powerful and transformative AI technologies operate without adequate oversight. This approach is like addressing climate change by regulating vehicles while ignoring the activities of oil companies.

Last week, the European Commission submitted its compromise to the French-led demands. It suggested regulation only for AI models trained with a truly enormous amount of compute – an amount so large that no model trained to date has even reached it.

Compute refers to the computing power needed to train AI systems such as large language models (LLMs). The proposed threshold, 10^26 FLOP (floating-point operations), is well above the rumoured 10^25 FLOP on which current state-of-the-art models such as Llama 2 and GPT-4 were trained.
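To put those orders of magnitude in perspective, here is a minimal back-of-the-envelope sketch in Python. It assumes the widely used rule of thumb that training compute is roughly 6 × parameters × training tokens; the parameter and token counts below are illustrative assumptions, not official figures for any particular model.

# Rough training-compute estimate, assuming the common rule of thumb:
# training FLOP ≈ 6 × parameters × training tokens.
# The figures below are illustrative assumptions, not official numbers.

def training_flop(parameters: float, tokens: float) -> float:
    """Approximate training compute in floating-point operations."""
    return 6 * parameters * tokens

COMMISSION_THRESHOLD = 1e26  # compute threshold in the Commission compromise

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens
flop = training_flop(70e9, 2e12)
print(f"Estimated training compute: {flop:.1e} FLOP")                     # ~8.4e+23
print(f"Share of the 1e26 threshold: {flop / COMMISSION_THRESHOLD:.4%}")  # ~0.84%

Even a model on that illustrative scale would sit roughly two orders of magnitude below the proposed threshold – which is precisely the critics’ objection to the compromise.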

Analysis shows that the compromise in the recent non-paper circulated by France, Germany, and Italy includes the ‘fewest provisions with regards to foundation models or general purpose AI systems, even falling below the minimal standard that was set in a recent U.S. White House Executive Order.’ If the compromise is accepted, Europeans may end up living under the least safe AI policy regime, since foundation models are where the real dangers from AI originate.

The European Union has a unique opportunity to set a global standard for AI regulation. However, this requires a commitment to comprehensive and equitable oversight that encompasses all facets of AI development, especially foundation models. The future of AI in Europe and globally hinges on the ability of policymakers to resist undue influence and prioritise the long-term societal impacts of these technologies.




