General FAQ
Artificial Intelligence (“AI”) promises huge benefits to our economy and society. General-purpose AI models play an important role in that regard, as they can be used for a variety of tasks and therefore form the basis for a range of downstream AI systems, used in Europe and worldwide.
The AI Act aims to ensure that general-purpose AI models are safe and trustworthy.
To achieve that aim, it is crucial that providers of general-purpose AI models ensure a good understanding of their models along the entire AI value chain, so as to allow downstream providers both to integrate such models into AI systems and to fulfil their own obligations under the AI Act. More specifically, and as explained in more detail below, providers of general-purpose AI models must draw up technical documentation of their models to make available to downstream providers and to provide upon request to the AI Office and national competent authorities, put in place a copyright policy, and publish a training content summary. In addition, providers of general-purpose AI models posing systemic risks, which may be the case either because the models are very capable or because they have a significant impact on the internal market for other reasons, must notify those models to the Commission, assess and mitigate systemic risks stemming from those models (including by performing model evaluations), report serious incidents, and ensure adequate cybersecurity of those models.
In this way, the AI Act contributes to safe and trustworthy innovation in the European Union.
The AI Act defines a general-purpose AI model as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications” (Article 3(63) AI Act).
The recitals to the AI Act further clarify which models should be deemed to display significant generality and to be capable of performing a wide range of distinct tasks.
According to recital 98 AI Act, “whereas the generality of a model could, inter alia, also be determined by a number of parameters, models with at least a billion of parameters and trained with a large amount of data using self-supervision at scale should be considered to display significant generality and to competently perform a wide range of distinctive tasks.”
Recital 99 AI Act adds that “large generative AI models are a typical example for a general-purpose AI model, given that they allow for flexible generation of content, such as in the form of text, audio, images or video, that can readily accommodate a wide range of distinctive tasks.”
Note that significant generality and ability to competently perform a wide range of distinctive tasks may be achieved by models within a single modality, such as text, audio, images, or video, if the modality is flexible enough. This may also be achieved by models that were developed, fine-tuned, or otherwise modified to be particularly good at a specific task or at a number of tasks in a specific domain.
Different stages of the development of a model do not constitute different models.
The AI Office intends to provide further clarifications on what should be considered a general-purpose AI model, drawing on insights from the Commission’s Joint Research Centre, which is currently working on a scientific research project addressing this and other questions.
Systemic risks are risks of large-scale harm from the most advanced (i.e. state-of-the-art) models at any given point in time or from other models that have an equivalent impact (see Article 3(65) AI Act). Such risks can manifest themselves, for example, through the lowering of barriers for chemical or biological weapons development, or unintended issues of control over autonomous general-purpose AI models (recital 110 AI Act). The most advanced models at any given point in time may pose systemic risks, including novel risks, as they are pushing the state of the art. At the same time, some models below the threshold reflecting the state of the art may also pose systemic risks, for example through reach, scalability, or scaffolding.
Accordingly, the AI Act classifies a general-purpose AI model as a general-purpose AI model with systemic risk if it is one of the most advanced models at that point in time or if it has an equivalent impact (Article 51(1) AI Act). Which models are considered general-purpose AI models with systemic risk may change over time, reflecting the evolving state of the art and potential societal adaptation to increasingly advanced models. Currently, general-purpose AI models with systemic risk are developed by a handful of companies, although this may change in the future.
To capture the most advanced models, i.e. models that match or exceed the capabilities recorded in the most advanced models so far, the AI Act lays down a threshold of 10^25 floating-point operations (FLOP) used for training the model (Article 51(1)(a) and (2) AI Act). Training a model that meets this threshold is currently estimated to cost tens of millions of Euros (Epoch AI, 2024). The AI Office is continuously monitoring technological and industrial developments, and the Commission may update the threshold, through the adoption of a delegated act (Article 51(3) AI Act), to ensure that it continues to single out the most advanced models as the state of the art evolves. For example, the value of the threshold itself could be adjusted and/or additional thresholds introduced.
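For intuition only, the following minimal Python sketch illustrates the arithmetic behind the training compute threshold. The AI Act does not prescribe a method for counting FLOP; the 6 × parameters × training tokens rule of thumb used here is a common approximation for dense transformer training drawn from the scaling-law literature, and all model figures in the example are hypothetical.

```python
# Illustrative sketch only: the AI Act does not define how to count training FLOP.
THRESHOLD_FLOP = 1e25  # Article 51(1)(a) AI Act

def estimate_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rough compute estimate for dense transformer training:
    ~6 FLOP per parameter per training token (a common heuristic)."""
    return 6 * n_parameters * n_training_tokens

def meets_compute_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the Article 51(1)(a) threshold."""
    return estimate_training_flop(n_parameters, n_training_tokens) >= THRESHOLD_FLOP

# Hypothetical 400B-parameter model trained on 15T tokens:
# 6 * 4e11 * 1.5e13 = 3.6e25 FLOP, above the 1e25 threshold.
print(meets_compute_threshold(4e11, 1.5e13))  # True
```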
To capture models with an impact equivalent to the most advanced models, the AI Act empowers the Commission to designate additional models as posing systemic risk, based on criteria such as the evaluations of capabilities of the model, number of users, scalability, or access to tools (Article 51(1)(b) and Annex XIII AI Act).
The AI Office intends to provide further clarifications on how general-purpose AI models will be classified as general-purpose AI models with systemic risk, drawing on insights from the Commission’s Joint Research Centre which is currently working on a scientific research project addressing this and other questions.
The AI Act rules on general-purpose AI models apply to providers placing such models on the market in the Union, irrespective of whether those providers are established or located within the Union or in a third country (Article 2(1)(a) AI Act).
A provider of a general-purpose AI model means a natural or legal person, public authority, agency or other body that develops a general-purpose AI model or that has such a model developed and places it on the market, whether for payment or free of charge (Article 3(3) AI Act).
To place a model on the market means the first making available of the model on the Union market (Article 3(9) AI Act), that is, to supply it for distribution or use on the Union market for the first time in the course of a commercial activity, whether in return for payment or free of charge (Article 3(10) AI Act). Note that a general-purpose AI model is also considered to be placed on the market if that model’s provider integrates the model into its own AI system which is made available on the market or put into service, unless (a) the model is used for purely internal processes that are not essential for providing a product or a service to third parties, (b) the rights of natural persons are not affected, and (c) the model is not a general-purpose AI model with systemic risk (recital 97 AI Act).
Providers of general-purpose AI models must document technical information about their models for the purpose of providing that information upon request to the AI Office and national competent authorities (Article 53(1)(a) AI Act) and making it available to downstream providers (Article 53(1)(b) AI Act). They must also put in place a policy to comply with Union law on copyright and related rights (Article 53(1)(c) AI Act) and draw up and make publicly available a sufficiently detailed summary about the content used for training the model (Article 53(1)(d) AI Act).
The General-Purpose AI Code of Practice should provide signatories with further detail on how to ensure compliance with these obligations in the sections dealing with transparency and copyright (led by Working Group 1).
Under the AI Act, providers of general-purpose AI models with systemic risk have additional obligations. They must assess and mitigate systemic risks, in particular by performing model evaluations, keeping track of, documenting, and reporting serious incidents, and ensuring adequate cybersecurity protection for the model and its physical infrastructure (Article 55 AI Act).
The General-Purpose AI Code of Practice should provide signatories with further detail on how to ensure compliance with these obligations in the sections dealing with systemic risk assessment, technical risk mitigation, and governance risk mitigation (led by Working Groups 2, 3, and 4 respectively).
The obligations for providers of general-purpose AI models apply from 2 August 2025 (Article 113(b) AI Act), with special rules for general-purpose AI models placed on the market before that date (Article 111(3) AI Act).
The obligations to draw up and provide documentation to the AI Office, national competent authorities, and downstream providers (Article 53(1)(a) and (b) AI Act) do not apply if the model is released under a free and open-source license and its parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available (Article 53(2) AI Act). Recitals 102 and 103 AI Act further clarify what constitutes a free and open-source license and the AI Office intends to provide further clarifications on questions concerning open-sourcing general-purpose AI models.
The exemption does not apply to general-purpose AI models with systemic risk. After the open-source model release, measures necessary to ensure compliance with the obligations of Article 55 AI Act may be more difficult to implement (recital 112 AI Act). Consequently, providers of general-purpose AI models with systemic risk may need to assess and mitigate systemic risks before releasing their models as open-source.
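As a purely illustrative reading of the exemption’s structure, the following sketch expresses the Article 53(2) conditions as a single boolean conjunction. The function and parameter names are hypothetical, and whether a given licence qualifies as free and open-source remains a legal assessment (recitals 102 and 103 AI Act) that code cannot settle.

```python
# Hypothetical names for illustration; the licence question is legal, not computable.
def documentation_exemption_applies(free_open_source_licence: bool,
                                    weights_public: bool,
                                    architecture_info_public: bool,
                                    usage_info_public: bool,
                                    systemic_risk: bool) -> bool:
    """Article 53(1)(a) and (b) are disapplied only if all conditions hold."""
    return (free_open_source_licence
            and weights_public
            and architecture_info_public
            and usage_info_public
            and not systemic_risk)

# Open weights but no public usage information: the exemption does not apply.
print(documentation_exemption_applies(True, True, True, False, False))  # False
```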
The General-Purpose AI Code of Practice should provide signatories with further detail on how to comply with the obligations in Articles 53 and 55 in relation to the different ways of releasing general-purpose AI models, including open-sourcing.
An important but difficult question underpinning the process of drawing up the Code of Practice is that of finding a balance between pursuing the benefits and mitigating the risks from the open-sourcing of advanced general-purpose AI models: open-sourcing advanced general-purpose AI models may indeed yield significant societal benefits, including through fostering AI safety research; at the same time, when such models are open-sourced, risk mitigations are more easily circumvented or removed.
Article 2(8) AI Act provides that, as a general matter, the AI Act “does not apply to any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service.”
At the same time, certain obligations for providers of general-purpose AI models (with and without systemic risk) explicitly or implicitly pertain to the development phase of models intended for, but prior to, the placing on the market. This is the case, for example, for the obligations for providers to notify the Commission that their general-purpose AI model meets or will meet the training compute threshold (Articles 51 and 52 AI Act), to document information about training and testing (Article 53 AI Act), and to assess and mitigate systemic risk (Article 55 AI Act). In particular, Article 55(1)(b) AI Act explicitly provides that “providers of general-purpose AI models with systemic risk shall assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development (...) of general-purpose AI models with systemic risk.”
The AI Office expects discussions with providers of general-purpose AI models with systemic risk to start early in the development phase. This is consistent with the obligation for providers of general-purpose AI models that meet the training compute threshold laid down in Article 51(2) AI Act to “notify the Commission without delay and in any event within two weeks after that requirement is met or it becomes known that it will be met” (Article 52(1) AI Act). Indeed, training of general-purpose AI models takes considerable planning, which includes the upfront allocation of compute resources, and providers of general-purpose AI models are therefore able to know if their model will meet the training compute threshold before the training is complete (recital 112 AI Act).
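The following hedged sketch shows one way a provider might project training compute from a planned hardware allocation before a run starts, which is why the threshold can be known in advance. The peak throughput and utilisation figures are illustrative assumptions, not official values or a prescribed methodology.

```python
SECONDS_PER_DAY = 24 * 3600

def projected_training_flop(n_accelerators: int,
                            training_days: float,
                            peak_flop_per_second: float,
                            utilisation: float) -> float:
    """Planned compute = devices * wall-clock time * peak throughput * utilisation."""
    return (n_accelerators * training_days * SECONDS_PER_DAY
            * peak_flop_per_second * utilisation)

# Hypothetical plan: 10,000 accelerators at 1e15 FLOP/s peak, 90 days, 40% utilisation.
flop = projected_training_flop(10_000, 90, 1e15, 0.4)
print(f"{flop:.1e}")  # ~3.1e25 -> above 1e25, so the Article 52(1) notification is due
```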
The AI Office intends to provide further clarifications on this question.
General-purpose AI models may be further modified or fine-tuned into new models (recital 97 AI Act). Accordingly, downstream entities that fine-tune or otherwise modify an existing general-purpose AI model may become providers of new models. Determining the specific circumstances in which a downstream entity becomes the provider of a new model is a difficult question with potentially large economic implications, since many organisations and individuals fine-tune or otherwise modify general-purpose AI models developed by another entity.
In the case of a modification or fine-tuning of an existing general-purpose AI model, the obligations for providers of general-purpose AI models in Article 53 AI Act should be limited to the modification or fine-tuning, for example, by complementing the already existing technical documentation with information on the modifications (recital 109 AI Act). The obligations for providers of general-purpose AI models with systemic risk in Article 55 AI Act should only apply in clearly specified cases. The AI Office intends to provide further clarifications on this question.
Regardless of whether a downstream entity that incorporates a general-purpose AI model into an AI system is deemed to be a provider of the general-purpose AI model, that entity must comply with the relevant AI Act requirements and obligations for AI systems.
Based on Article 56 of the AI Act, the General-Purpose AI Code of Practice should detail the manner in which providers of general-purpose AI models and of general-purpose AI models with systemic risk may comply with their obligations under the AI Act. The AI Office is facilitating the drawing-up of this Code of Practice, with four working groups chaired by independent experts and involving nearly 1000 stakeholders, representatives of EU Member States, as well as European and international observers.
More precisely, the Code of Practice should detail at least how providers of general-purpose AI models may comply with the obligations laid down in Articles 53 and 55 AI Act. This means that the Code of Practice can be expected to have two parts: one that applies to providers of all general-purpose AI models (Article 53), and one that applies only to providers of general-purpose AI models with systemic risk (Article 55).
The Code of Practice should not address, inter alia, the following issues: defining key concepts and definitions from the AI Act (such as “general-purpose AI model”), updating the criteria or thresholds for classifying a general-purpose AI model as a general-purpose AI model with systemic risk (Article 51 AI Act), outlining how the AI Office will enforce the obligations for providers of general-purpose AI models (Chapter IX Section 5 AI Act), and questions concerning fines, sanctions, and liability.
These issues may instead be addressed through other means (decisions, delegated acts, implementing acts, further communications from the AI Office, etc.).
Nevertheless, the Code of Practice may include commitments by providers of general-purpose AI models that sign the Code to document and report additional information, as well as to involve the AI Office and third parties throughout the entire model lifecycle, in so far as this is considered necessary for providers to effectively comply with their obligations under the AI Act.
The AI Act distinguishes between AI systems and AI models, imposing requirements for certain AI systems (Chapters II-IV) and obligations for providers of general-purpose AI models (Chapter V). While the provisions of the AI Act concerning AI systems depend on the context in which the system is used, the provisions concerning general-purpose AI models apply to the model itself, regardless of what its ultimate use is or will be. The Code of Practice should only pertain to the obligations in the AI Act for providers of general-purpose AI models.
Nevertheless, there are interactions between the two sets of rules, as general-purpose AI models are typically integrated into and form part of AI systems. If a provider of a general-purpose AI model integrates that model into an AI system, that provider must comply with the obligations for providers of general-purpose AI models and, if the AI system falls within the scope of the AI Act, with the requirements for AI systems. If a downstream provider integrates a general-purpose AI model into an AI system that falls within the scope of the AI Act, the provider of the model must cooperate with the downstream provider, for example by providing certain information, so that the latter can comply with its obligations under the AI Act.
Given these interactions between models and systems, and between the obligations and requirements for each, an important question underlying the Code of Practice concerns which measures are appropriate at the model layer, and which need to be taken at the system layer instead.
The Code of Practice should set out its objectives, commitments, measures and, as appropriate, key performance indicators (KPIs) to measure the achievement of those objectives. Commitments, measures and KPIs related to the obligations applicable to providers of all general-purpose AI models should take due account of the size of the provider and allow simplified ways of compliance for SMEs, including start-ups, which should not represent an excessive cost and should not discourage the use of such models (recital 109 AI Act). Moreover, the reporting commitments related to the obligations applicable to providers of general-purpose AI models with systemic risk should reflect differences in size and capacity between various providers (Article 56(5) AI Act), while ensuring that they are proportionate to the risks (Article 56(2)(d) AI Act).
After the publication of the third draft of the Code of Practice, one more drafting round is expected over the coming months. Thirteen Chairs and Vice-Chairs, drawn from diverse backgrounds in computer science, AI governance, and law, are responsible for synthesizing submissions from a multi-stakeholder consultation and discussions with the Code of Practice Plenary, which consists of around 1000 stakeholders. This iterative process will lead to a final Code of Practice that reflects the various submissions while appropriately setting out how providers can comply with the AI Act.
Based on Article 56(9), “codes of practice shall be ready at the latest by 2 May 2025” in order to give providers sufficient time before the GPAI rules become applicable on 2 August 2025.
Once the Code is finalised “the AI Office and the Board shall assess whether the codes of practice cover the obligations provided for in Articles 53 and 55 (...). They shall publish their assessment of the adequacy of the codes of practice. The Commission may, by way of an implementing act, approve a code of practice and give it a general validity within the Union.” (Article 56(6) AI Act).
If approved via implementing act, the Code of Practice obtains general validity, meaning that adherence to the Code of Practice becomes a means to demonstrate compliance with the AI Act, while not providing a presumption of conformity with the AI Act. Providers may also demonstrate compliance with the AI Act in other ways. In that case, providers would be expected to adopt measures to ensure compliance with their AI Act obligations that are adequate, effective, and proportionate, and which may include reporting to the AI Office.
Based on the AI Act, additional legal effects of the Code of Practice are that the AI Office may take a provider’s adherence to the Code of Practice into account when monitoring its effective implementation and compliance with the AI Act (Article 89(1) AI Act) and may favourably take into account commitments made in the Code of Practice when fixing the amount of fines depending on the specific circumstances (Article 101(1) AI Act).
Finally, “If, by 2 August 2025, a code of practice cannot be finalised, or if the AI Office deems it is not adequate following its assessment under paragraph 6 of this Article, the Commission may provide, by means of implementing acts, common rules for the implementation of the obligations provided for in Articles 53 and 55, including the issues set out in paragraph 2 of this Article.” (Article 56(9) AI Act).
Based on Article 56(8) AI Act, “the AI Office shall, as appropriate, also encourage and facilitate the review and adaptation of the codes of practice, in particular in light of emerging standards”. The third draft of the Code of Practice includes a suggested mechanism for its review and updating.
The AI Office will supervise and enforce the obligations laid down in the AI Act for providers of general-purpose AI models (Article 88 AI Act) and, exceptionally, the obligations for providers of AI systems based on general-purpose AI models where the provider of the model and of the system are the same (Article 75(1) AI Act). The AI Office will also support the relevant market surveillance authorities of the Member States in their enforcement of the requirements for AI systems (Article 75(2) and (3) AI Act), among other tasks. Enforcement by the AI Office is underpinned by the powers the AI Act confers on it, namely the powers to request information (Article 91 AI Act), to conduct evaluations of general-purpose AI models (Article 92 AI Act), to request measures from providers, including the implementation of risk mitigations and the recall of the model from the market (Article 93 AI Act), and to impose fines of up to 3% of global annual turnover or 15 million Euros, whichever is higher (Article 101 AI Act).
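As a simple arithmetic illustration of the Article 101 AI Act fine ceiling (not of how fines are actually set, which depends on the circumstances of each case), the “whichever is higher” rule amounts to taking the maximum of the two amounts:

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Article 101 AI Act ceiling: the higher of 3% of global annual
    turnover and EUR 15 million."""
    return max(0.03 * annual_worldwide_turnover_eur, 15_000_000)

print(max_fine_eur(2_000_000_000))  # 60,000,000 (3% of EUR 2bn exceeds EUR 15m)
print(max_fine_eur(100_000_000))    # 15,000,000 (3% is only EUR 3m, so the floor binds)
```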