In this webinar, you will learn:
- The AI Act’s regulatory logic, with insights into the risk- and quality-management system compliance requirements for high-risk AI systems;
- Essential elements for developing risk assessments for general-purpose AI models with systemic risk;
- How to stay ahead of the curve and future-proof your regulatory compliance with the AI Act;
- Whether available international standards, such as the recent ISO/IEC 42001, are sufficient for regulatory compliance with the risk-management approach under the AI Act.
Background
The EU AI Act is expected to enter into force in June/July 2024. It follows a New Legislative Framework product-safety approach. This means that high-level essential requirements, such as the requirement for a risk-management system, are set out in the legal act, while their technical operationalisation can be ensured through harmonised standards. The European Commission has requested the European standardisation organisations to develop European harmonised standards in support of the AI Act. Those standards, if developed in line with the AI Act, will be essential tools for its implementation and for the presumption of conformity.
The risk-management system is one of the key requirements for high-risk AI systems (Article 9) and one of the obligations for general-purpose AI models with systemic risk (Article 55).
The AI Office, one of the key governance bodies under the AI Act, is organising a series of webinars to help organisations prepare for correct and timely implementation of the AI Act.