The EU AI Act Aims to Create a Level Playing Field for AI Innovation: Here’s What it Is

The European Union’s Artificial Intelligence Act, widely known as the EU AI Act, has been described by the European Commission as “the world’s first comprehensive AI law.” After years in the making, this pivotal regulation is progressively becoming a part of reality for the 450 million people living in the 27 countries that comprise the EU.

Crucially, the EU AI Act is more than a European affair. It applies to companies both local and foreign, impacting both providers and deployers of AI systems. This legislation establishes a comprehensive legal framework setting the stage for the use of artificial intelligence across numerous sectors.

Why Does the EU AI Act Exist?

As is common with EU legislation, the EU AI Act aims to ensure a uniform legal framework applicable to AI across all EU countries. This regulation is intended to “ensure the free movement, cross-border, of AI-based goods and services” without the complication of diverging local restrictions. With timely regulation, the EU seeks to create a level playing field across the region and foster trust, potentially creating new opportunities for emerging companies. However, the framework adopted is not entirely permissive; despite the relatively early stage of widespread AI adoption in most sectors, the EU AI Act sets a high bar for what AI should and should not do for society at large.

What is the Purpose of the EU AI Act?

According to European lawmakers, the framework’s primary goal is to “promote the uptake of human centric and trustworthy AI while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation.” This dual objective highlights the delicate balance the Act strives to maintain between fostering innovation and preventing harm, and between encouraging AI uptake and ensuring environmental protection. As is often the case with EU legislation, the efficacy of the Act will hinge on the specifics of its implementation.

How Does the EU AI Act Balance its Different Goals?

To balance harm prevention with the potential benefits of AI, the EU AI Act adopts a risk-based approach. This involves banning a select number of “unacceptable risk” AI use cases, imposing stringent regulations on a set of “high-risk” applications, and applying lighter obligations to “limited risk” scenarios.

Has the EU AI Act Come into Effect?

The EU AI Act rollout commenced on August 1, 2024, but it is being implemented through a series of staggered compliance deadlines. Generally, it will apply sooner to new entrants than to companies already offering AI products and services within the EU.

The first deadline took effect on February 2, 2025, enforcing bans on a small number of prohibited AI uses, such as the untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases. Many other provisions are set to follow, with most expected to apply by mid-2026, unless the schedule is altered.

What Changed on August 2, 2025?

Since August 2, 2025, the EU AI Act applies to “general-purpose AI models with systemic risk.” General-purpose AI (GPAI) models are those trained on extensive datasets, capable of performing a wide array of tasks. The Act identifies these models as potentially carrying systemic risks, for instance, by lowering barriers for developing chemical or biological weapons or through unintended issues of control over autonomous GPAI models.

In preparation for this deadline, the EU published guidelines for providers of GPAI models, which include both European companies and international players such as Anthropic, Google, Meta, and OpenAI. Companies with existing models on the market will have until August 2, 2027, to comply, distinguishing them from new market entrants.

Does the EU AI Act Have Teeth?

The EU AI Act includes penalties designed to be “effective, proportionate and dissuasive,” applicable even to large global players. While specific details will be determined by EU member states, the regulation establishes overall thresholds. Infringements involving prohibited AI applications face the highest penalty: “up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher).” The European Commission can also impose fines of up to €15 million or 3% of annual turnover on providers of GPAI models.
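Because the ceilings are defined as “whichever is higher” between a fixed euro amount and a share of worldwide annual turnover, the maximum exposure scales with company size. A minimal sketch of that arithmetic, using only the figures quoted above (the function name and example turnovers are illustrative, not from the Act):

```python
def penalty_ceiling(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the maximum possible fine: the higher of a fixed euro cap
    or a percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Prohibited-AI infringement: up to €35 million or 7% of turnover.
# For a hypothetical company with €1 billion turnover, 7% (€70M) exceeds €35M.
print(penalty_ceiling(1_000_000_000, 35_000_000, 0.07))  # 70000000.0

# GPAI-provider infringement: up to €15 million or 3% of turnover.
# For a hypothetical €100 million turnover, the €15M fixed cap is higher.
print(penalty_ceiling(100_000_000, 15_000_000, 0.03))  # 15000000.0
```

The fixed cap thus acts as a floor on the ceiling for smaller firms, while the percentage governs exposure for the largest players; actual fines are set case by case by member-state authorities and the European Commission.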

How Fast Do Existing Players Intend to Comply?

The voluntary GPAI code of practice, which includes commitments such as not training models on pirated content, provides an indication of how companies may engage with the framework law. In July 2025, Meta announced it would not sign this voluntary code. However, Google soon after confirmed its intention to sign, albeit with reservations. Signatories to the AI Pact so far include Aleph Alpha, Amazon, Anthropic, Cohere, Google, IBM, Microsoft, Mistral AI, and OpenAI, among others. Yet, as Google’s example suggests, signing does not always equate to full endorsement.

Why Have (Some) Tech Companies Been Fighting These Rules?

While Google stated it would sign the voluntary GPAI code of practice, its president of global affairs, Kent Walker, expressed reservations, stating, “We remain concerned that the AI Act and Code risk slowing Europe’s development and deployment of AI.” Meta was more direct in its opposition; chief global affairs officer Joel Kaplan described the EU’s implementation of the AI Act as “overreach” that “introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.” European companies have also voiced concerns. Arthur Mensch, CEO of Mistral AI, was part of a group of European CEOs who signed an open letter in July 2025 urging Brussels to “stop the clock” for two years before key obligations came into force.

Will the Schedule Change?

In early July 2025, the European Union responded negatively to lobbying efforts calling for a pause, reaffirming its commitment to its original timeline for implementing the EU AI Act. The August 2, 2025 deadline proceeded as planned, and this article will be updated should any changes occur.
