The European approach to artificial intelligence (AI) will help build a resilient Europe for the Digital Decade where people and businesses can enjoy the benefits of AI. It focuses on 2 areas: excellence in AI and trustworthy AI. The European approach to AI will ensure that any AI improvements are based on rules that safeguard the functioning of markets and the public sector, and people’s safety and fundamental rights.
To help further define its vision for AI, the European Commission developed an AI strategy to go hand in hand with the European approach to AI. The AI strategy proposed measures to streamline research, as well as policy options for AI regulation, which fed into work on the AI package.
The Commission published its AI package in April 2021, proposing new rules and actions to turn Europe into the global hub for trustworthy AI. This package consisted of:
- a Communication on Fostering a European Approach to Artificial Intelligence;
- the Coordinated Plan with Member States: 2021 update;
- a proposal for an AI Regulation laying down harmonised rules for the EU (Artificial Intelligence Act).
A European approach to excellence in AI
Fostering excellence in AI will strengthen Europe’s potential to compete globally.
The EU will achieve this by:
- enabling the development and uptake of AI in the EU;
- making the EU the place where AI thrives from the lab to the market;
- ensuring that AI works for people and is a force for good in society;
- building strategic leadership in high-impact sectors.
The Commission and Member States agreed to boost excellence in AI by joining forces on AI policy and investment. The revised Coordinated Plan on AI outlines a vision to accelerate, act, and align priorities with the current European and global AI landscape, and to put the AI strategy into action.
Maximising resources and coordinating investments are critical components of the Commission’s AI strategy. Through the Digital Europe and Horizon Europe programmes, the Commission plans to invest €1 billion per year in AI. It will mobilise additional investments from the private sector and the Member States in order to reach an annual investment volume of €20 billion over the course of the Digital Decade.
The newly adopted Recovery and Resilience Facility makes €134 billion available for digital. This will be a game-changer, allowing Europe to amplify its ambitions and become a global leader in developing cutting-edge, trustworthy AI.
Access to high-quality data is an essential factor in building high-performance, robust AI systems. Initiatives such as the EU Cybersecurity Strategy, the Digital Services Act and the Digital Markets Act, and the Data Governance Act provide the right infrastructure for building such systems.
A European approach to trust in AI
Building trustworthy AI will create a safe and innovation-friendly environment for users, developers and deployers.
The Commission has proposed 3 inter-related legal initiatives that will contribute to building trustworthy AI:
- a European legal framework for AI to address fundamental rights and safety risks specific to AI systems;
- EU rules to address liability issues related to new technologies, including AI systems (last quarter 2021-first quarter 2022);
- a revision of sectoral safety legislation (e.g. Machinery Regulation, General Product Safety Directive, second quarter 2021).
European proposal for a legal framework on AI
The Commission aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules. These rules will also provide Europe with a leading role in setting the global gold standard.
This framework gives AI developers, deployers and users the clarity they need by intervening only in those cases that existing national and EU legislation does not cover. The legal framework for AI proposes a clear, easy-to-understand approach based on four levels of risk: unacceptable risk, high risk, limited risk, and minimal risk.
Timeline
- April 2021 – Fostering a European approach for artificial intelligence; Proposal for an AI Regulation laying down harmonised rules on artificial intelligence; Updated Coordinated Plan on AI
- April 2021 – Impact Assessment of the Regulation on artificial intelligence
- October 2020 – 2nd European AI Alliance Assembly
- July 2020 – Inception Impact Assessment: Artificial intelligence — ethical and legal requirements
- July 2020 – Public Consultation — White Paper on Artificial Intelligence
- July 2020 – Final Assessment List on Trustworthy AI (ALTAI) of the AI HLEG
- July 2020 – Sectoral Recommendations on Trustworthy AI of the AI HLEG
- February 2020 – White Paper on AI: a European approach to excellence and trust
- December 2019 – Piloting of the Assessment List on Trustworthy AI
- June 2019 – 1st European AI Alliance Assembly
- June 2019 – Policy and Investment Recommendations of the AI HLEG
- April 2019 – Communication: Building Trust in Human Centric Artificial Intelligence
- April 2019 – Ethics Guidelines for Trustworthy AI
- December 2018 – Coordinated Plan on AI (Communication on 'AI Made in Europe'; Press Release)
- December 2018 – Stakeholder Consultation on draft Ethics Guidelines for Trustworthy AI
- June 2018 – Launch of the European AI Alliance
- June 2018 – Set-up of the High-Level Expert Group on AI (AI HLEG)
- April 2018 – European AI Strategy (Communication: Artificial Intelligence for Europe; Press Release)
- April 2018 – Staff Working Document: Liability for emerging digital technologies
- April 2018 – Declaration of cooperation on artificial intelligence
- March 2018 – Press Release on the AI Expert Group and European AI Alliance