The AI Pact encourages and supports organisations to plan ahead for the implementation of AI Act measures.
What is the AI Pact?
The AI Act entered into force on 1 August 2024. Some provisions of the AI Act are already fully applicable. However, the requirements on high-risk AI systems and certain other provisions will only become applicable at the end of a transitional period (i.e. the time between entry into force and the date of applicability).
In this context, the Commission is promoting the AI Pact, seeking the industry’s voluntary commitment to anticipate the AI Act and to start implementing its requirements ahead of the legal deadline. To gather participants, a first call for interest was launched in November 2023, which drew responses from over 550 organisations of various sizes, sectors, and countries. The AI Office has since initiated the development of the AI Pact, which is structured around two pillars:
- Pillar I acts as a gateway to engage the AI Pact network (those organisations that have expressed an interest in the Pact), encourages the exchange of best practices, and provides practical information on the AI Act implementation process;
- Pillar II encourages AI system providers and deployers to prepare early and take action towards compliance with the requirements and obligations set out in the legislation.
The two-pillar approach
Pillar I: gathering and exchanging with the AI Pact network
Under Pillar I, participants contribute to the creation of a collaborative community, sharing their experiences and knowledge. This includes workshops organised by the AI Office, which give participants a better understanding of the AI Act, their responsibilities and how to prepare for its implementation. In turn, the AI Office can gather insights into best practices and the challenges faced by participants.
In this context, participants can share best practices and internal policies that may be of use to others in their compliance journey. Depending on participants’ preferences, these best practices may also be published on an online platform where the AI Office will share information on the AI Act’s implementation process.
Pillar II: facilitating and communicating company pledges
The purpose of this pillar is to provide a framework to foster the early implementation of some of the AI Act’s measures. The initiative encourages organisations to proactively disclose the processes and practices they are putting in place to anticipate compliance. Specifically, companies providing or deploying AI systems can demonstrate and share their voluntary commitments regarding transparency and the high-risk requirements, and prepare early for their implementation.
The commitments take the form of pledges which are ‘declarations of engagement’. These pledges contain concrete actions (planned or underway) to meet the AI Act’s distinct requirements and include a timeline for their adoption. Such declarations of engagement can also take the form of incremental objectives.
Signature of the pledges
On 25 September 2024, the Commission convened key industry stakeholders in Brussels to celebrate the first signatories of the pledges. To date, over 100 companies have signed the pledges, including multinational corporations and European small and medium-sized enterprises (SMEs) from diverse sectors such as IT, telecoms, healthcare, banking, automotive, and aeronautics. The full list is displayed below.
The EU AI Pact voluntary pledges call on participating companies to commit to at least three core actions:
- Adopting an AI governance strategy to foster the uptake of AI in the organisation and work towards future compliance with the AI Act
- Identifying and mapping AI systems likely to be categorised as high-risk under the AI Act
- Promoting AI awareness and literacy among staff, ensuring ethical and responsible AI development
In addition to these core commitments, companies are encouraged to make further commitments proposed by the AI Pact. These should be tailored to their activities and may include ensuring human oversight, mitigating risks, and transparently labelling certain types of AI-generated content, such as deepfakes.
The text of the pledges (.pdf), initially drafted by the AI Office, was shared with the relevant stakeholders in the AI Pact network to gather feedback and insights. As a result, the final version of the pledges reflects the input received from stakeholders. The pledges are not legally binding and do not impose any legal obligations on participants. Companies will be able to sign them at any time until the AI Act becomes fully applicable.
Participating companies will be invited to publicly report on their progress 12 months after the publication of their commitments. The primary purpose of the report is to offer insight and transparency into the progress made in fulfilling these pledges.
What are the benefits of the Pact for participants?
The Commission is working together with the participants and supporting them in:
- building a common understanding of the objectives of the AI Act
- taking concrete actions to understand, adapt and prepare for the future implementation of the AI Act (e.g. building internal processes, preparing staff and self-assessing AI systems)
- sharing knowledge and increasing the visibility and credibility of the safeguards put in place to demonstrate trustworthy AI
- building additional trust in AI technologies
The Pact also allows front-runners and ambitious participants to test and share their solutions with the wider community.
Tentative Timeline
Signatories of the pledges
Some signatories may not appear immediately, but the list is continuously updated as new pledges are signed.