EU AI Act brings GPAI rules to life

The EU’s new AI rules make GPAI compliance an immediate reality for UK companies developing or deploying these models.
As of August 2025, a pivotal chapter of the EU’s landmark Artificial Intelligence Act (EU AI Act) has become applicable, shifting the regulation of so-called “General-Purpose AI” (GPAI) from legal theory to operational reality. Due to its significant extraterritorial scope, the EU AI Act directly impacts UK-based companies that develop, deploy, or integrate these powerful technologies for the EU market.
The EU AI Act is widely known for its risk-based "pyramid" approach, which categorises AI systems based on their intended purpose and potential for harm. This logic, conceived in the initial 2021 draft, was elegant in its simplicity. However, the legislative process was famously disrupted by the explosive arrival of foundation models like the one powering ChatGPT in late 2022.
These models, capable of a vast range of tasks and not designed for a single purpose, did not fit neatly into the risk pyramid. Lawmakers were forced to address this regulatory gap, introducing a parallel set of obligations that targets the very core of the generative AI revolution: the GPAI models themselves. This article provides an overview of these new rules and their practical implications.
The EU AI Act defines a GPAI model as an AI model that displays "significant generality" and is "capable of competently performing a wide range of distinct tasks". While this definition appears broad, it serves a critical function: it distinguishes the underlying technology from the final product.
This leads to the most important distinction in the Act’s new architecture: the difference between a GPAI model and an AI system.
A GPAI model is the foundational engine, the large, versatile technology trained on vast datasets (e.g., GPT-5). The new rules discussed here apply directly to the providers of these models.
An AI system is the specific application built using that engine (e.g., a medical chatbot for diagnostics built on top of GPT-5). This system is what ultimately gets placed into the traditional risk pyramid and classified, for instance, as "high-risk".
For the purpose of this article, we will focus on the fundamental obligations that apply to all GPAI models. While the EU AI Act does create a special sub-category for GPAI models with "systemic risk", which are subject to more stringent rules, their high classification threshold means they are of limited practical relevance for most companies at present.
Core Obligations for GPAI Models
The foundation of the new regime is Article 53 of the EU AI Act, which establishes four key obligations for providers of GPAI models. These duties are designed to ensure transparency and accountability throughout the AI value chain, focusing on the relationship between the original model provider and the downstream companies that build upon its technology.
First, providers are required to draw up and maintain extensive technical documentation for their model. This is a core accountability obligation, designed to ensure that the model's design, training, and testing processes are transparent and auditable, primarily for competent authorities upon request. According to the Act, this documentation must contain, at a minimum, detailed information about the model's architecture, the data and computational resources used for training, and the results of its evaluation and testing.
Second, providers have a distinct obligation to share specific information with the downstream providers who integrate the GPAI model into their own AI systems. This is a crucial element of the Act's cascading system of responsibility. The information to be provided is essentially a subset of the main technical documentation. Its purpose is to give the downstream provider the necessary understanding of the model's capabilities and limitations, thereby enabling them to comply with their own obligations under the AI Act.
Third, providers must put in place a policy to respect Union copyright law. This obligation directly addresses the contentious issue of training data. It requires providers to demonstrate how they adhere to the EU’s rules on Text and Data Mining (TDM), particularly the requirement to respect legally enforceable "opt-outs" from rightsholders who do not wish for their content to be used for AI training.
Finally, providers must make publicly available a "sufficiently detailed summary" of the content used to train their GPAI model. This is a pure transparency obligation aimed at the wider public. The legal text itself, however, leaves the term "sufficiently detailed" undefined, creating significant legal uncertainty for providers regarding the required level of granularity.
The Compliance Toolkit
The EU AI Act’s high-level principles require more granular detail for practical implementation. To bridge this gap, a framework of supporting documents has been established to provide concrete guidance. Three of these are central to GPAI compliance: the GPAI Guidelines, the GPAI Code of Practice (GPAI CoP), and the official Template for the Training Data Summary.
The GPAI CoP is particularly significant. While adherence is voluntary, formally subscribing to it grants a provider a "presumption of conformity". This means authorities will presume the provider is compliant with the law, offering a vital safe harbour and establishing the GPAI CoP as the de facto industry standard. The GPAI CoP translates the Act's abstract duties into clear, actionable instructions:
For Documentation and Transparency, the GPAI CoP introduces a standardised "Model Documentation Form". This template creates a tiered system, clarifying which information must be shared with downstream providers versus what is reserved for authorities.
For Copyright Policy, the GPAI CoP requires providers to implement concrete measures. These include respecting machine-readable rights reservations (such as the robots.txt protocol), excluding illegal content sources from training data, and establishing a formal complaint mechanism for rightsholders; a minimal technical sketch of the robots.txt check appears after this list.
For the Training Data Summary, the Commission’s official template operationalises the vague "sufficiently detailed summary" requirement. It provides a standardised format for reporting on training data, balancing transparency with trade secret protection by focusing on narrative descriptions rather than granular, work-by-work disclosure.
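To make the first of the copyright measures above more tangible, the following minimal sketch shows how a data-collection pipeline might check a site's robots.txt before gathering content for training. It uses only Python's standard library; the crawler name "ExampleAIBot" is hypothetical, and this is an illustration rather than the CoP's mandated implementation, since a real pipeline would also need to honour other machine-readable reservations (for instance, metadata-based opt-outs) that robots.txt alone does not capture.

```python
# Minimal sketch: checking a robots.txt-based rights reservation before
# collecting a page for AI training. "ExampleAIBot" is a hypothetical
# crawler name; error handling for an unreachable robots.txt is omitted.
from urllib import robotparser
from urllib.parse import urlsplit

def may_collect_for_training(url: str, user_agent: str = "ExampleAIBot") -> bool:
    """Return True only if the site's robots.txt permits this crawler."""
    parts = urlsplit(url)
    rp = robotparser.RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt
    return rp.can_fetch(user_agent, url)

if __name__ == "__main__":
    print(may_collect_for_training("https://example.com/articles/sample"))
```

The point of the sketch is the ordering: the reservation is checked before any content is fetched, reflecting the idea that opt-outs must be respected at the data-collection stage rather than remedied afterwards.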
Finally, the GPAI Guidelines complement this toolkit by providing clarity on scope and classification. They help companies determine if a model even qualifies as a GPAI model in the first place, for example by introducing a rebuttable presumption based on a computational threshold of 10²³ FLOPs. Crucially, they also provide guidance on the complex question of when a modification or fine-tuning, whether by the original provider or a third party, is significant enough to be considered the creation of an entirely new GPAI model.
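To give a sense of scale for that 10²³ FLOPs presumption, the sketch below applies the widely used rule of thumb that training a dense transformer costs roughly six FLOPs per parameter per training token. Both the heuristic and the example figures are illustrative assumptions, not the Guidelines' prescribed method of calculation.

```python
# Back-of-the-envelope check against the 10^23 FLOPs presumption threshold
# from the GPAI Guidelines. The "6 * params * tokens" heuristic is a common
# approximation for dense transformers, not an official methodology.

GPAI_PRESUMPTION_FLOPS = 1e23

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

# Illustrative figures: an 8-billion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(8e9, 15e12)
print(f"~{flops:.1e} FLOPs -> presumed GPAI: {flops > GPAI_PRESUMPTION_FLOPS}")
# ~7.2e+23 FLOPs -> presumed GPAI: True
```

Because the presumption is rebuttable, crossing the threshold does not settle classification; it simply shifts the analytical burden onto the provider.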
Current Practical Challenges
While the new secondary legislation provides a much clearer compliance path, businesses are discovering that the transition from legal text to operational reality presents significant strategic challenges.
Providers of GPAI models face a delicate balancing act when creating the required documentation, particularly the Technical Documentation and the Training Data Summary. On one hand, these documents must possess sufficient detail and transparency to withstand scrutiny from competent authorities. On the other, they must be drafted with extreme care to avoid inadvertently providing "ammunition" in the form of sensitive information that could be used by rightsholders in potential copyright infringement litigation. This turns the drafting process into a high-stakes strategic exercise.
The neat legal distinctions of the EU AI Act can become highly complex when applied to real-world, cross-border AI value chains. Consider a company that takes a model from a non-EU provider, fine-tunes it, and then integrates it into its own AI system, which is then placed on the market in the EU. This single scenario triggers a cascade of difficult questions: Has the fine-tuning created a new GPAI model? Who is now considered the 'provider' of the relevant model(s) under the Act? And how do the rules on placing a model on the market apply? Navigating these multi-layered situations is a major analytical challenge.
Finally, companies are realising that compliance with the EU AI Act is not an isolated task but merely one piece of a much larger puzzle. The ultimate challenge is to establish a holistic AI governance framework. This requires identifying all relevant legal domains – from intellectual property, data protection, and cybersecurity to contract and corporate law – understanding their complex interplay, and then integrating them into a single, coherent governance strategy. For legal advisors, the task is less about isolated EU AI Act compliance and more about integrated, multi-disciplinary risk management.
Conclusion & Outlook
The entry into application of the GPAI model provisions marks a pivotal moment for the AI industry. For providers of these powerful technologies, compliance is no longer a future prospect but an immediate operational reality. As this article has shown, navigating this new landscape requires looking beyond the EU AI Act itself. The true rulebook for day-to-day compliance is found in the secondary legislation, where abstract principles are transformed into actionable processes through instruments like the GPAI CoP and the official Training Data Summary Template.
Looking ahead, these documents are not the final word but the beginning of an ongoing regulatory dialogue. They are intended to be living documents, designed to evolve alongside rapid technological advancements. For companies and their legal advisors, simply adopting the current standards will not be enough. The key to sustainable compliance will be to foster a culture of proactive engagement, continuously monitoring guidance from the AI Office and contributing to the refinement of these standards to help shape a predictable and innovation-friendly regulatory environment for the future.
Oliver Belitz is a Counsel in Bird & Bird's Frankfurt office and a recognised expert in the law of Artificial Intelligence. He advises a wide range of companies on navigating the complex regulatory landscape of emerging technologies, with a particular focus on the EU AI Act. His practice centres on drafting and negotiating complex AI-related agreements and advising on liability issues arising from the enterprise use of generative AI. As an active member of the firm's Legal Tech committees, he is also deeply involved in shaping the future of AI-enabled legal services.