Marking and labelling of AI-generated content
This Code of Practice aims to support compliance with the AI Act's transparency obligations related to the marking and labelling of AI-generated content.
The obligations under Article 50 of the AI Act (transparency obligations for providers and deployers of generative AI systems) aim to ensure transparency of AI-generated or manipulated content, such as deep fakes. The article addresses risks of deception and manipulation, fostering the integrity of the information ecosystem. These transparency obligations will complement other rules like those for high-risk AI systems or general-purpose AI models.
To assist with compliance with these transparency obligations, the AI Office has kick-started the process of drawing up a code of practice on transparency of AI-generated content. The Code will be drafted by independent experts appointed by the AI Office in an inclusive process, and eligible stakeholders will be invited to contribute to the drafting. If approved by the Commission, the final Code will serve as a voluntary tool for providers and deployers of generative AI systems to demonstrate compliance with their respective obligations under Article 50(2) and (4) AI Act. These obligations pertain to the marking and detection of AI-generated content and the labelling of deep fakes and certain AI-generated publications.
Scope of the working groups
The drafting of the Code is centred around two working groups, following the structure of the transparency obligations for AI-generated content in Article 50.
Working group 1: Providers
Focuses on the obligations requiring providers of generative AI systems to ensure that:
- Outputs of AI systems (audio, image, video, text) are marked in a machine-readable format and detectable as artificially generated or manipulated.
- The technical solutions employed are effective, interoperable, robust, and reliable as far as technically feasible. These must take into account the specificities and limitations of various types of content, the costs of implementation, and the generally acknowledged state of the art, as may be reflected in relevant technical standards (a minimal illustrative sketch follows this list).
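To make the marking obligation more concrete, below is a minimal illustrative sketch of one possible machine-readable marking approach: embedding a provenance record in a PNG file's metadata with Python and Pillow, then reading it back as a simple detection check. The metadata key and record format here are hypothetical, and Article 50 does not prescribe any particular technique; production systems typically build on standards such as C2PA content credentials or watermarking schemes.

```python
# Illustrative only: embed a machine-readable provenance note in a PNG's
# metadata and read it back. The key name and JSON schema are hypothetical,
# not prescribed by Article 50 AI Act or the forthcoming Code of Practice.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

MARKER_KEY = "ai_provenance"  # hypothetical metadata key, not a standard


def mark_png(src_path: str, dst_path: str, generator: str) -> None:
    """Write a JSON provenance record into a PNG text chunk."""
    record = json.dumps({"ai_generated": True, "generator": generator})
    meta = PngInfo()
    meta.add_text(MARKER_KEY, record)
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)


def detect_marker(path: str) -> dict | None:
    """Return the provenance record if present, else None."""
    with Image.open(path) as img:
        raw = img.text.get(MARKER_KEY)  # .text collects PNG text chunks
    return json.loads(raw) if raw else None


if __name__ == "__main__":
    # Placeholder file names for demonstration purposes.
    mark_png("output.png", "output_marked.png", generator="example-model-v1")
    print(detect_marker("output_marked.png"))
```

Note that plain metadata like this is easily stripped when a file is re-encoded or redistributed, which is one reason the obligation stresses robustness and interoperability rather than any single marking mechanism.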
Working group 2: Deployers
Focuses on the obligations requiring deployers of generative AI systems to disclose:
- Content that is artificially generated or manipulated and constitutes a deep fake (image, audio, or video content that appreciably resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful).
- AI-generated or manipulated text published to inform the public on matters of public interest, unless the text has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for its publication.
Both groups will also consider cross-cutting issues, including the horizontal requirements under Article 50(5) for the information to be provided to natural persons. They will also aim to promote cooperation between relevant actors across the value chain to achieve the AI Act's objectives on marking and labelling of AI-generated content.
Each working group will be led by independent chairs and vice-chairs, who are expected to provide strategic leadership and guidance, ensuring that discussions remain focused and productive. You can download the full list of participants of working groups 1 & 2 (PDF).
Drafting process
The drafting of the Code involves eligible stakeholders who replied to a public call launched by the AI Office. This group includes providers of specific generative AI systems, developers of marking and detection techniques, associations of deployers of generative AI systems, civil society organisations, academic experts, specialised organisations with expertise in transparency, and very large online platforms.
In its role as a facilitator, the AI Office also invited international and European observers to join the drawing-up of the Code of Practice. These organisations did not meet the eligibility criteria of the call but can still contribute valuable expertise and submit written input. All participants and observers will be invited to take part in plenary sessions, working group meetings, and thematic workshops dedicated to discussing technical aspects of the Code.
The drafting process will consider:
- Feedback from the multi-stakeholder consultation on transparency requirements for certain AI systems
- Expert studies commissioned by the AI Office and input from eligible stakeholders participating in the drawing-up of the Code of Practice
The full exercise is expected to last for 7 months. This timeline allows sufficient time for providers and deployers to prepare for compliance before the rules take effect in August 2026.
In parallel, the Commission will prepare guidelines to clarify the scope of the legal obligations and to address aspects not covered by the Code.
Timeline
The working group meetings and workshops with participants, chairs and vice-chairs will take place between November 2025 and May 2026.
The dates below are indicative; some are still to be confirmed (TBC).
- September 2025: Call for expression of interest to participate in the Code of Practice
- October 2025: Eligibility checks and selection of applications for chairs and vice-chairs
- 17-18 November 2025
- 17 December 2025
- 12 & 14 January 2026: Working group meetings; start of the second drafting round
- 21-22 January 2026: Workshops for working groups 1 & 2
- March 2026 (TBC): Publication of the second draft; start of the final drafting round
- April 2026 (TBC): Working group meetings
- May-June 2026: Closing plenary; publication of the final Code of Practice