Scope and objectives
The AI Act is the world's first comprehensive law for AI. It aims to address risks to health, safety and fundamental rights. The regulation also aims to protect democracy, the rule of law and the environment.
The uptake of AI systems and general-purpose AI models has strong potential to deliver societal benefits and economic growth and to enhance EU innovation and global competitiveness. However, the specific characteristics of certain AI systems and general-purpose AI models may create new risks to user safety, including physical safety, and to fundamental rights. The most advanced general-purpose AI models could even pose so-called ‘systemic risks’.
This creates legal uncertainty and, due to a lack of trust, a potentially slower uptake of AI technologies by public authorities, businesses and citizens. Disparate regulatory responses by national authorities would risk fragmenting the internal market.
To respond to these challenges, legislative action was needed to ensure a well-functioning internal market for AI systems and general-purpose AI models, within which risks are adequately addressed and benefits are taken into account.
The legal framework applies to both public and private actors inside and outside the EU, who place an AI system or general-purpose AI model on the EU market, or put an AI system into service or use it in the EU.
The obligations apply amongst others to providers (e.g. a developer of a CV-screening tool) and deployers of AI systems (e.g. a bank using this screening tool), as well as to providers of general-purpose AI models. There are certain exemptions to the regulation. Research, development and prototyping activities that take place before an AI system or model is released on the market are not subject to this regulation. AI systems that are exclusively designed for military, defence or national security purposes are also exempt, regardless of the type of entity carrying out those activities.
The AI Act introduces a uniform framework across all EU Member States, based on a risk-based approach.
Prohibited AI systems
The Act bans a very limited set of particularly harmful uses of AI systems that contravene EU values because they violate fundamental rights:
- Exploitation of vulnerabilities of persons, manipulation and use of subliminal techniques
- Social scoring for public and private purposes
- Individual predictive policing based solely on profiling people
- Untargeted scraping of internet or CCTV material for facial images to build up or expand databases
- Emotion recognition in the workplace and education institutions, unless for medical or safety reasons (e.g. monitoring the tiredness levels of a pilot)
- Biometric categorisation of natural persons to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs or sexual orientation. Labelling or filtering of datasets and categorising data will still be possible
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions (see below)
The Commission has issued guidelines on the prohibited AI practices and guidelines on the AI system definition of the AI Act. These provide legal explanations and practical examples to help stakeholders comply with these rules.
High-risk AI systems
The AI Act considers a limited number of AI systems as high-risk, because they potentially create an adverse impact on people's safety or their fundamental rights (as protected by the EU Charter of Fundamental Rights). Annexed to the Act are the lists of high-risk AI systems, which can be reviewed to align with the evolution of AI use cases.
High-risk AI systems include, for example, AI systems that assess whether somebody is able to receive a certain medical treatment, get a certain job, or obtain a loan to buy an apartment. Other high-risk AI systems are those used by the police to assess the risk of somebody committing a crime (unless prohibited under Article 5).
High-risk AI systems also include safety components of products covered by sectorial Union legislation or AI systems that constitute such products themselves. They will always be considered high-risk when subject to third-party conformity assessment under that sectorial legislation. This could include, for example, AI systems operating robots, drones, or medical devices.
Transparency requirements for certain AI systems
To foster trust, it is important to ensure transparency around the artificial origin of certain content or interactions of AI. Therefore, the AI Act introduces specific transparency requirements on the providers or deployers of certain interactive or generative AI systems (e.g. chatbots or deep fakes). These transparency requirements aim to address the risks of misinformation and manipulation, fraud, impersonation and consumer deception.
AI systems with minimal risk
The majority of AI systems can be developed and used subject to the existing legislation without additional legal obligations.
General-purpose AI models
The AI Act also regulates general-purpose AI models. These can be used for a variety of tasks and are becoming the basis for many AI systems in the EU. Providers of general-purpose AI models have to document certain information (unless they qualify for the open-source exemption) and take measures that ensure they respect EU law on copyright and related rights.
The most advanced general-purpose AI (GPAI) models could also present systemic risks. For example, powerful models could enable chemical, biological, radiological or nuclear (CBRN) attacks or large-scale sophisticated cyberattacks, escape human control, or enable the strategic distortion of human behaviour or beliefs. Providers of general-purpose AI models with systemic risks additionally have to assess and mitigate those risks.
The AI Act creates a legal framework that is responsive to new developments, easy and quick to adapt, and allows for frequent evaluation. The regulation sets result-oriented requirements and obligations but leaves the concrete technical solutions and operationalisation to industry-driven standards and codes of practice that are flexible enough to be adapted to different use cases and to enable new technological solutions.
The AI Act can be amended by delegated and implementing acts, for example to review the list of high-risk use cases in Annex III. There will be frequent evaluations of certain parts of the AI Act and eventually of the entire regulation, making sure that any need for revision and amendments is identified.
Under the AI Act, high-risk AI systems will be subject to specific requirements. European harmonised standards will provide detailed specifications for implementation and compliance with the high-risk requirements. Providers who develop a high-risk AI system in accordance with the harmonised standards will benefit from a presumption of conformity.
In May 2023, the European Commission mandated the European standardisation organisations CEN and CENELEC to develop standards for these high-risk requirements. This mandate was amended in June 2025 to align with the final text of the AI Act. CEN and CENELEC have not been able to develop the standards within the requested timeline of August 2025, and the standardisation work is still ongoing.
Standards are voluntary but decisive for legal certainty: they help companies understand what they need to do and authorities know what they need to check. The delayed availability of the standards jeopardises the successful entry into application of the high-risk rules on 2 August 2026. The Commission therefore considers it appropriate to allow more time for the effective implementation of the rules for which standards are of significant importance (see Digital Omnibus proposal).
AI opportunities and impact transcend borders. Challenges are global, which is why cooperation is important and necessary. The AI Office is leading the Commission's international engagement in the field of AI, on the basis of the AI Act and the Coordinated Plan on AI. The EU seeks to:
- Lead global efforts on AI by supporting innovation, setting appropriate guardrails on AI and developing global governance of AI
- Promote the responsible stewardship and good governance of AI
- Ensure that international agreements align with our approach
As the global landscape shifts, proactive AI engagement—coordinated closely with our allies—has become an essential and growing priority. The EU is a proactive, cooperative, and reliable partner that leads by example and collaborates internationally while protecting its interests, security and values.
It engages bilaterally and multilaterally to promote trustworthy and human-centric AI. Bilaterally, the EU cooperates with a growing number of countries and regions, such as Canada, the US, India, Japan, the Republic of Korea, Singapore, Australia, the UK, the Latin America and Caribbean region and Africa. Multilaterally, the EU is involved in key forums and initiatives where AI is discussed, notably the G7, the G20, the OECD, the Global Partnership on AI, the Council of Europe and the International Network for Advanced AI Measurement, Evaluation and Science, as well as in the United Nations context.
The EU builds on strategic assets and strengths, such as talent, research, industrial strength and a large single market with uniform rules, and deploys these internationally as part of the EU tech offer to build mutually beneficial partnerships and alliances across the globe.
High-risk AI systems
The AI Act sets out a solid methodology for the classification of AI systems as high-risk. This aims to provide legal certainty for businesses and other operators.
The risk classification is based on the intended purpose of the AI system, in line with existing EU product safety legislation. This means that the classification depends on the function performed by the AI system and on the specific purpose and modalities for which the system is used.
AI systems can be classified as high-risk in two cases (a simplified sketch of this logic follows the list below):
- If the AI system is embedded as a safety component in a product covered by existing product legislation (Annex I), or constitutes such a product itself. This could be, for example, AI-based medical software.
- If the AI system is intended to be used for a high-risk use case, listed in Annex III of the AI Act. The list includes use cases in areas such as education, employment, law enforcement or migration. It is annually reviewed by the Commission and can be updated.
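To make the two-branch classification concrete, here is a minimal sketch in Python. The data structure and flags are hypothetical illustrations, not terms defined in the Act, and the actual legal test involves further conditions (for instance, the derogation for Annex III systems that do not pose a significant risk of harm).

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Hypothetical flags for illustration only; the legal test is more nuanced.
    safety_component_of_annex_i_product: bool  # branch 1: Annex I product law
    third_party_conformity_assessment: bool    # required under that product law
    annex_iii_use_case: str | None             # branch 2: e.g. "employment"

def is_high_risk(system: AISystem) -> bool:
    """Simplified sketch of the two-branch high-risk test."""
    # Branch 1: safety component of (or itself) a product covered by Annex I
    # legislation and subject to third-party conformity assessment.
    if (system.safety_component_of_annex_i_product
            and system.third_party_conformity_assessment):
        return True
    # Branch 2: intended for a use case listed in Annex III.
    return system.annex_iii_use_case is not None

# Example: a CV-screening tool intended for an Annex III employment use case.
print(is_high_risk(AISystem(False, False, "employment")))  # True
```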
The Commission is preparing guidelines for the high-risk classification, which will be published ahead of the application date for these rules.
Annex III of the AI Act comprises 8 areas in which the use of AI can be particularly sensitive and lists concrete use cases for each area. An AI system is classified as high-risk if it is intended to be used for one of these use cases. Some examples:
- AI systems used as safety components in certain critical infrastructures. For instance in the fields of road traffic and the supply of water, gas, heating and electricity.
- AI systems used for certain tasks in education and vocational training. E.g. to evaluate learning outcomes, steer the learning process and monitor cheating.
- AI systems used for certain tasks in employment, workers' management and access to self-employment. E.g. to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates.
- AI systems used in the access to essential private and public services and benefits (e.g. healthcare), creditworthiness evaluation of natural persons, and risk assessment and pricing in relation to life and health insurance.
- AI systems used for certain tasks in the fields of law enforcement, migration and border control, insofar as not already prohibited, as well as in administration of justice and democratic processes.
- AI systems used for biometric identification, biometric categorisation and emotion recognition, insofar as not prohibited.
Before placing a high-risk AI system on the EU market or otherwise putting it into service, providers must subject it to a conformity assessment. This will allow them to demonstrate that their system complies with the mandatory requirements for trustworthy AI (e.g. risk management, data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness). This assessment has to be repeated if the system or its purpose is substantially modified. Depending on the type of high-risk AI system, the conformity assessment may have to be conducted by an independent conformity assessment body. Providers of high-risk AI systems will also have to implement a quality management system to ensure their compliance with the new requirements and to minimise risks for users and affected persons throughout the lifecycle of the AI system.
To enable supervision and ensure public transparency, providers of high-risk AI systems have to register the system in a public EU database.
Providers of high-risk AI systems remain responsible for the safety and compliance of the system throughout its lifecycle. Where necessary, they have to report and respond to serious incidents, take corrective actions in case of identified risks or non-compliance, provide information and cooperate with market surveillance authorities.
As a baseline, deployers must use the high-risk AI system in accordance with its instructions of use and take appropriate technical and organisational measures to that end. Based on the instructions of use, deployers of high-risk AI systems have to monitor the operation of the AI system and act upon identified risks or serious incidents. Deployers need to assign human oversight to one or more persons in their organisation, who must be sufficiently equipped and enabled to exercise that oversight. When a deployer provides input data for a high-risk AI system, the AI Act requires that the input data be relevant and sufficiently representative for the intended purpose of the AI system.
Deployers who are public authorities or who provide public services have to carry out a fundamental rights impact assessment prior to the first use.
In addition, the deployer may have information obligations:
- High-risk AI systems that are deployed by public authorities or entities acting on their behalf will have to be registered in a public EU database, except for high-risk AI systems for critical infrastructure. AI systems used for law enforcement and migration have to be registered in a non-public part of the database that will be only accessible to relevant supervisory authorities.
- If the high-risk AI system is deployed at the workplace, deployers must inform affected employees and workers' representatives beforehand. This comes in addition to any workers' consultation rules that may already apply.
- If the high-risk AI system is designed to make decisions or assist in making decisions about natural persons, the deployer must inform the affected person.
The AI Act also introduces a right to an explanation for natural persons. This applies where the output of a high-risk AI system was used to take a decision about a natural person and this decision produces legal effects. Deployers of high-risk AI systems must provide for this possibility and, upon request, give a clear and meaningful explanation to the affected person.
Real-world testing of high-risk AI systems can be conducted for a maximum of 6 months (this can be prolonged by another 6 months). Prior to testing, a plan needs to be drawn up and submitted to a market surveillance authority, which has to approve the plan and specific testing conditions, with default tacit approval if no answer has been given within 30 days. Testing may be subject to unannounced inspections by the authority.
Real-world testing can only be conducted subject to specific safeguards: e.g. participants in the real-world testing have to provide informed consent, the testing must not have any negative effect on them, outcomes need to be reversible or disregardable, and their data needs to be deleted after conclusion of the testing. Special protection is to be granted to vulnerable groups, e.g. due to their age or physical or mental disability.
AI systems do not inherently create or reproduce bias. Rather, when properly designed and used, AI systems can contribute to reducing bias and existing structural discrimination, and thus lead to more equitable and non-discriminatory decisions (e.g. in recruitment).
The new mandatory requirements for all high-risk AI systems will serve this purpose. AI systems must be technically robust to ensure they are fit for purpose and do not produce biased results, such as false positives or negatives, that disproportionately affect marginalised groups, including those based on racial or ethnic origin, sex, age, and other protected characteristics.
High-risk systems will also need to be trained and tested with sufficiently representative datasets to minimise the risk of unfair biases embedded in the model and to ensure that any such biases can be addressed through appropriate bias detection, correction and other mitigating measures. They must also be traceable and auditable, ensuring that appropriate documentation is kept, including the data used to train the algorithm, which will be key in ex-post investigations.
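As an illustration of what bias detection can involve in practice, the sketch below computes per-group selection rates and their gap (the demographic parity difference) for hypothetical recruitment outcomes. This is one common fairness check among many, not a method prescribed by the AI Act, and all names and figures are illustrative assumptions.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs, with selected in {0, 1}."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, selected in outcomes:
        totals[group] += 1
        hits[group] += selected
    return {group: hits[group] / totals[group] for group in totals}

# Hypothetical screening results: (group label, 1 = invited to interview).
outcomes = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(outcomes)
gap = max(rates.values()) - min(rates.values())
print(rates)  # {'A': 0.67, 'B': 0.33} (approx.)
print(gap)    # ~0.33 -> a large gap that would warrant investigation
```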
Compliance must be monitored regularly, and potential risks promptly addressed, both before and after high-risk systems are placed on the market.
Biometric identification can take different forms. Biometric authentication and verification, e.g. to unlock a smartphone or to check a person's identity against their travel documents at border crossings (one-to-one matching), remain unregulated, because they do not pose a significant risk to fundamental rights.
In contrast, remote biometric identification (e.g. identifying people in a crowd) can significantly impact privacy in the public space. The accuracy of facial recognition systems can be significantly influenced by a wide range of factors, such as camera quality, light, distance, database, algorithm, and the subject's ethnicity, age or gender. The same applies to gait and voice recognition and other biometric systems.
While a 99% accuracy rate may sound good in general, it carries considerable risk when the result can lead to the suspicion of an innocent person. Even a 0.1% error rate can have a significant impact when applied to large populations, for example at train stations.
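A back-of-the-envelope calculation illustrates the point; the traveller figure below is a hypothetical assumption, not from the Act:

```python
# Hypothetical example: facial recognition screening at a busy train station.
daily_travellers = 500_000
false_positive_rate = 0.001  # the seemingly excellent 0.1% error rate

false_alerts_per_day = daily_travellers * false_positive_rate
print(false_alerts_per_day)  # 500.0 -> hundreds of innocent people flagged daily
```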
The use of real-time remote biometric identification in publicly accessible spaces (e.g. facial recognition using CCTV) for law enforcement purposes is prohibited. Member States can introduce exceptions by law that would allow the use of real-time remote biometric identification in the following cases:
- Law enforcement activities related to 16 specified very serious crimes
- Targeted search for specific victims of abduction, trafficking and sexual exploitation of human beings, and for missing persons
- The prevention of a threat to the life or physical safety of persons or of a terrorist attack
Any exceptional use would need to be necessary and proportionate and be subject to prior authorisation by a judicial or independent administrative authority whose decision is binding. In cases of urgency, use may start without prior authorisation, provided authorisation is requested within 24 hours. If the authorisation is rejected, all data and output of the use must be deleted.
The use of real-time remote biometric identification should be notified to the relevant market surveillance authority and the data protection authority.
The use of AI systems for post remote biometric identification (identification of persons in previously collected material) of persons under investigation is not prohibited, but requires prior authorisation from a judicial authority or an administrative authority, as well as notification to the relevant data protection and market surveillance authority.
There is already a strong protection for fundamental rights and for non-discrimination in place at EU and Member State level, but the complexity and opacity of certain AI applications (‘black boxes’) can pose a problem.
A human-centric approach to AI means to ensure AI applications comply with fundamental rights legislation. By integrating accountability and transparency requirements into the development of high-risk AI systems and improving enforcement capabilities, we can ensure that these systems are designed with legal compliance in mind right from the start. Where breaches occur, such requirements will allow national authorities to have access to the information needed to investigate whether the use of AI complied with EU law.
The AI Act requires that certain deployers of high-risk AI systems conduct a fundamental rights impact assessment.
Providers of high-risk AI systems need to carry out a risk assessment and design the system in a way that risks to health, safety and fundamental rights are minimised.
However, certain risks to fundamental rights can only be fully identified knowing the context of use of the high-risk AI system. When high-risk AI systems are used in particularly sensitive areas of possible power asymmetry, additional considerations of such risks are necessary.
Therefore, deployers that are bodies governed by public law or private operators providing public services, as well as deployers of high-risk AI systems that carry out creditworthiness assessments or price and risk assessments in life and health insurance, shall perform an assessment of the impact on fundamental rights and notify the national authority of the results.
In practice, many deployers will also have to carry out a data protection impact assessment. To avoid substantive overlaps in such cases, the fundamental rights impact assessment shall be conducted in conjunction with that data protection impact assessment.
General-Purpose AI (GPAI) Models
General-purpose AI models, including large generative AI models, can be used for a variety of tasks. They may be integrated into a large number of AI systems.
A provider of an AI system that integrates a general-purpose AI model must have access to the information necessary to do so and to ensure that the system complies with the AI Act. The AI Act therefore obliges providers of such models to document technical information about the models and to disclose certain information to downstream system providers. Such transparency enables a better understanding of these models.
Model providers need to have policies in place to ensure compliance with EU law on copyright and related rights, in particular to identify and comply with rights reservations under the text-and-data-mining exception. Providers also need to publish a summary of the content used to train the model, based on a template from the AI Office.
The most advanced of these models could pose systemic risks. Currently, general-purpose AI models trained using a total amount of computational resources that exceeds 10^25 FLOP (floating point operations) are presumed to pose systemic risks. The Commission may update this threshold in light of technological developments. Upon a request from a provider, the Commission may also rebut the presumption that a model above the FLOP threshold poses systemic risk, if the model is not considered to be amongst the most advanced. The Commission may also designate models below the FLOP threshold as general-purpose AI models with systemic risk, based on the criteria specified in Annex XIII of the AI Act.
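To relate the 10^25 FLOP threshold to familiar model quantities, a widely used rule of thumb estimates training compute as roughly 6 × parameters × training tokens. Both the heuristic and the example figures below are illustrative assumptions, not part of the AI Act:

```python
# Rough training-compute estimate using the common "6 * N * D" heuristic,
# where N is the parameter count and D the number of training tokens.
THRESHOLD_FLOP = 1e25  # triggers the presumption of systemic risk

def training_flop(params: float, tokens: float) -> float:
    return 6 * params * tokens

# Hypothetical model: 100B parameters trained on 15T tokens.
estimate = training_flop(100e9, 15e12)
print(f"{estimate:.2e}")          # 9.00e+24
print(estimate > THRESHOLD_FLOP)  # False -> just below the threshold
```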
Providers of models with systemic risk must assess and mitigate these risks, including by conducting state-of-the-art model evaluations, reporting serious incidents, and ensuring adequate cybersecurity of their models and their physical infrastructure.
To support providers in complying with these obligations, the Commission has facilitated the development of the General-Purpose AI (GPAI) Code of Practice. The GPAI Code of Practice is a voluntary tool, prepared by independent experts in a multi-stakeholder process, designed to help industry comply with the AI Act’s obligations for providers of general-purpose AI models. The code is complemented by the Commission guidelines on the scope of obligations for providers of general-purpose AI models under the AI Act.
The first General-Purpose AI (GPAI) Code of Practice details the AI Act rules for providers of general-purpose AI models and general-purpose AI models with systemic risk.
The AI Office facilitated the drawing-up of the code. The process was chaired by independent experts and involved nearly 1000 stakeholders, as well as EU Member State representatives and European and international observers. The Commission and the AI Board have confirmed that the code is an adequate voluntary tool for providers of GPAI models to demonstrate compliance with the AI Act.
Following the endorsement, GPAI model providers who voluntarily sign the code can demonstrate compliance with the AI Act by adhering to it. This reduces their administrative burden and gives them more legal certainty than proving compliance through other methods.
The AI Office will, as appropriate, encourage and facilitate the review and adaptation of the code to reflect advancements in technology and the state of the art. Once a harmonised standard is published and assessed by the AI Office as suitable to cover the relevant obligations, compliance with a European harmonised standard should grant providers the presumption of conformity.
Providers of general-purpose AI models should be able to demonstrate compliance through alternative adequate means if codes of practice or harmonised standards are not available, or if they choose not to rely on them.
The AI Act introduces specific transparency requirements for providers and deployers of certain interactive or generative AI systems (including chatbots or deep fakes). These transparency requirements aim to address the risks of misinformation and manipulation, fraud, impersonation and consumer deception:
- Providers of AI systems that directly interact with natural persons need to design and develop those systems in such a way that the natural persons concerned are informed that they are interacting with an AI system.
- Providers of generative AI systems need to mark AI outputs in a machine-readable format and ensure they are detectable as artificially generated or manipulated (see the illustrative sketch below). The technical solutions must be effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards.
- Deployers of emotion recognition or biometric categorisation systems must ensure that individuals exposed to these systems are informed.
- Deployers of generative AI systems that generate or manipulate image, audio or video content, constituting a deep fake, must visibly disclose that the content has been artificially generated or manipulated. Similarly, deployers of AI systems that generate or manipulate text published with the purpose of informing the public on matters of public interest must also disclose that the text has been artificially generated or manipulated.
Specific exceptions apply to all the transparency obligations mentioned.
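As a toy illustration of the machine-readable marking obligation referenced above, the sketch below embeds a provenance note in a PNG text chunk using Pillow. Production systems rely on robust watermarking or provenance standards such as C2PA; this snippet only shows the general idea of attaching detectable, machine-readable metadata, and the marker key is an arbitrary assumption:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64))  # stand-in for an AI-generated image

# Attach a machine-readable marker as a PNG text chunk.
meta = PngInfo()
meta.add_text("ai-generated", "true")  # hypothetical marker key
img.save("generated.png", pnginfo=meta)

# A downstream tool can detect the marker by reading the chunk back.
print(Image.open("generated.png").text)  # {'ai-generated': 'true'}
```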
The AI Office will issue guidelines to provide further guidance for providers and deployers on the obligations in Article 50 which will become applicable on 2 August 2026.
To assist with compliance with some of these transparency obligations, the AI Office has kick-started the process of drawing up a code of practice on transparency of AI-generated content. The Code will be drafted by independent experts appointed by the AI Office, based on an inclusive process. More than 200 stakeholders from across sectors are contributing to the drafting of the Code. If approved by the Commission, the final Code will serve as a voluntary tool for providers and deployers of generative AI systems to demonstrate compliance with their respective obligations.
Governance, enforcement, and implementation
The AI Act applies from 2 August 2026, two years after its entry into force, except for the following specific provisions:
- The prohibitions, definitions and provisions related to AI literacy apply since 2 February 2025. The Commission has published guidelines on prohibitions and definitions, as well as a living repository of AI literacy practices and a dedicated FAQ.
- The rules on governance and the obligations for general-purpose AI models became applicable on 2 August 2025.
- The obligations for AI systems that classify as high-risk because they are embedded in regulated products listed in Annex I (the list of EU harmonisation legislation) apply from 2 August 2027, 36 months after entry into force.
On 19 November 2025, as part of the Digital Omnibus proposal, the Commission proposed to adjust the timeline for the application of the high-risk rules by linking the application date to the availability of support measures such as harmonised standards, common specifications or Commission guidelines.
This proposal comes in the context of delays in the preparation of the standards that support the application of the high-risk requirements and in the set-up of competent authorities in EU Member States, which put at risk a smooth entry into application on 2 August 2026. The Digital Omnibus is now under consideration by the European Parliament and the Council of the EU.
The AI Act establishes a two-tiered governance system, where national competent authorities are responsible for overseeing and enforcing rules for AI systems, while the AI Office is responsible for governing and enforcing the obligations for providers of general-purpose AI models and for a subset of certain AI systems.
The European Artificial Intelligence Board (AI Board) was established to ensure EU-wide coherence and cooperation. It comprises high-level representatives from Member States and specialised subgroups for national regulators and other competent authorities.
The AI Act establishes two advisory bodies to provide expert input: the Scientific Panel and the Advisory Forum. These bodies will offer valuable insights from stakeholders and interdisciplinary scientific communities, informing decision-making and ensuring a balanced approach to AI development.
The European Artificial Intelligence Board comprises high-level representatives of Member States and the European Data Protection Supervisor (EDPS). As a key advisor, the AI Board provides guidance on all matters related to AI policy, notably AI regulation, innovation and excellence policy and international cooperation on AI.
The AI Board plays a crucial role in ensuring the smooth, effective and harmonised implementation of the AI Act. It has specialised subgroups composed of technical experts, national regulators and other competent authorities. The Board is a forum where the AI regulators can coordinate the consistent application of the AI Act.
Member States have to lay down effective, proportionate and dissuasive penalties for infringements of the rules for AI systems. The Regulation sets out thresholds that need to be taken into account:
- Up to €35m or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements of the prohibited practices or non-compliance related to requirements on data
- Up to €15m or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the other requirements or obligations of the Regulation
- Up to €7.5m or 1.5% of the total worldwide annual turnover of the preceding financial year for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request
- For each category of infringement, the applicable ceiling is the lower of the two amounts for SMEs and the higher for other companies (see the sketch after this list).
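The sketch below shows how the 'whichever is higher' rule plays out for a non-SME; the turnover figure is a hypothetical assumption:

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Maximum fine for non-SMEs: the higher of the fixed amount or a
    percentage of worldwide annual turnover (for SMEs, the lower applies)."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Hypothetical company with EUR 2bn turnover infringing a prohibited practice
# (up to EUR 35m or 7%, whichever is higher).
print(fine_cap(2e9, 35e6, 0.07))  # 140000000.0 -> EUR 140m, since 7% > EUR 35m
```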
The Commission can also enforce the rules on providers of general-purpose AI models by means of fines, taking into account the following threshold:
- Up to €15m or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the obligations or measures requested by the Commission under the Regulation.
EU institutions, agencies or bodies are expected to lead by example, which is why they will also be subject to the rules and to possible penalties. The European Data Protection Supervisor (EDPS) will have the power to impose fines on them in case of non-compliance.
Initiated in May 2023, the AI Pact is fostering engagement between the AI Office and organisations (Pillar I) and encouraging voluntary commitment from the industry to start implementing the AI Act's requirements ahead of the legal deadline (Pillar II).
Under Pillar I, participants contribute to the creation of a collaborative community, sharing their experiences and knowledge. This includes webinars organised by the AI Office which provide participants with a better understanding of the AI Act, their responsibilities and how to prepare for its implementation. In turn, these webinars allow the AI Office to gather insights into best practices and challenges faced by the participants.
Under Pillar II, organisations are encouraged to proactively disclose the processes and practices they are implementing to anticipate compliance, through voluntary pledges (.PDF). Pledges are intended as ‘declarations of engagement’ and will contain actions (planned or underway) to meet some of the AI Act's requirements.
The majority of rules of the AI Act (for example, some requirements on the high-risk AI systems) will apply at the end of a transitional period (i.e. the time between entry into force and date of applicability). In this context and within the framework of the AI Pact, the AI Office has called on all organisations to proactively anticipate and implement some of the key provisions of the AI Act, with the aim of mitigating the risks to health, safety and fundamental rights as soon as possible.
More than 3000 organisations have already expressed their interest in joining the AI Pact initiative, further to a call launched in November 2023. A first information session was held online on 6 May 2024, with 300 participants. The official signing of the voluntary commitments took place in autumn 2024, and to date more than 230 companies have signed the pledges. On 15 December 2025, the AI Office gathered the AI Pact pledgers to take stock of the progress made.
The Commission has set up the AI Act Service Desk and a Single Information Platform (SIP). Both were launched alongside the Apply AI Strategy, having already been announced in the AI Continent Action Plan. The SIP provides all relevant information, compliance tools and tailored guidance on the AI Act. It offers:
- FAQs and resources about the AI Act
- a Compliance Checker, a tool crafted to help stakeholders determine whether they are subject to legal obligations and understand the steps they need to take to comply
- an AI Act Explorer, an online tool designed to help users browse through different chapters, annexes and recitals of the AI Act in an intuitive way
- an online form that allows stakeholders to submit questions to the AI Act Service Desk, a team of expert professionals working in close cooperation with the AI Office
Innovation and Sustainability
The AI Act can enhance AI uptake in 2 ways:
- by increasing users' trust, which will raise the demand for AI used by companies and public authorities
- by harmonising rules across EU Member States, giving AI providers access to bigger markets, with products that users and consumers appreciate and purchase
The AI Act is innovation-friendly. Rules will apply only where strictly needed and in a way that minimises the burden for economic operators, with a light governance structure. For SMEs, the AI Act foresees simplified compliance pathways for certain burdensome obligations, like technical documentation. To leave space for innovators, there are exemptions foreseen for research and development activities.
The AI Act further enables the creation of regulatory sandboxes and real-world testing, which provide a controlled environment to test innovative technologies for a limited time. These measures foster innovation by companies, SMEs and start-ups and will help build the right framework conditions for AI development and deployment.
The objective of the AI Act is to address risks to safety and fundamental rights, including the fundamental right to a high level of environmental protection. The environment is also one of the explicitly mentioned and protected legal interests.
In 2026, the Commission plans to request the European standardisation organisations to produce a standardisation deliverable on reporting and documentation processes to improve AI systems' resource performance. This includes reducing the energy and other resource consumption of high-risk AI systems during their lifecycle, as well as the energy-efficient development of general-purpose AI models.
The Commission is asked to submit a progress report on the development of standardisation deliverables on the energy-efficient development of general-purpose AI models and to assess the need for further measures or actions, including binding ones. This obligation starts 2 years after the date of application of the AI Act and recurs every 4 years thereafter.
Providers of general-purpose AI models, which are trained on large amounts of data and are therefore prone to high energy consumption, are required to disclose their models' energy consumption. For general-purpose AI models with systemic risks, energy efficiency also needs to be assessed.
The Commission is empowered to develop a methodology enabling appropriate and comparable measurements for these disclosure obligations.
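Pending that methodology, one common back-of-the-envelope approach estimates training energy from hardware power draw, run time and datacentre overhead. All figures below are hypothetical assumptions for illustration only:

```python
# Rough training-energy estimate: GPUs x average power x hours x datacentre PUE.
gpus = 1_000
avg_power_kw_per_gpu = 0.5   # assumed average draw per accelerator, in kW
hours = 30 * 24              # a hypothetical 30-day training run
pue = 1.2                    # assumed power usage effectiveness overhead

energy_mwh = gpus * avg_power_kw_per_gpu * hours * pue / 1_000
print(f"{energy_mwh:.0f} MWh")  # 432 MWh
```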