AI Act regulatory compliance based on ISO/IEC 42001.
AI Act - the Artificial Intelligence Regulation
The European Union's new Artificial Intelligence Regulation (AI Act) introduces significant challenges and obligations for organizations that use artificial intelligence.
Whether a company is developing, distributing, operating, or using AI, ensuring regulatory compliance is essential to mitigate legal, ethical, and business risks.
What is the AI Act? Who may be affected?
The European Union’s new artificial intelligence regulation, the AI Act, is the most comprehensive AI regulation to date and is expected to significantly impact organizations that use AI. As an EU regulation, it is binding in its entirety and directly applicable in all Member States.
The AI Act aims to improve the functioning of the internal market by establishing a uniform legal framework, in particular for the development, placing on the market, putting into service, and use of artificial intelligence (AI) systems in the Union. It seeks to promote the uptake of human-centric and trustworthy AI while ensuring a high level of protection of health, safety, and the fundamental rights enshrined in the Charter of Fundamental Rights of the European Union, to guard against the harmful effects of AI systems, and to support innovation.
The scope of the AI Regulation extends to all organizations that develop, distribute, deploy, or use AI systems within the European Union.
The regulatory framework defines four levels of risk for AI systems: unacceptable, high, limited, and minimal risk.
It prohibits AI systems that clearly pose a threat to people’s safety, livelihoods, and rights, ranging from government-led social scoring to toys using voice assistance that encourage dangerous behaviour.
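To make the tiers tangible, here is a minimal sketch (in Python, with hypothetical system names and a deliberately simplified mapping) of how an internal AI inventory might tag systems by risk tier; an actual classification must follow the Act’s detailed criteria in Article 5 and Annex III:

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The AI Act's four risk tiers (simplified for illustration)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: conformity assessment, logging, human oversight"
    LIMITED = "transparency obligations, e.g. disclosing AI interaction"
    MINIMAL = "no specific obligations beyond existing law"

# Hypothetical internal inventory mapping each system to its assessed tier.
ai_inventory = {
    "cv-screening-tool": AIActRiskTier.HIGH,     # employment use case (Annex III)
    "customer-chatbot": AIActRiskTier.LIMITED,   # must disclose AI interaction
    "spam-filter": AIActRiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.name} - {tier.value}")
```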
ISO 42001 AI Management System
The ISO 42001 standard helps organizations act responsibly regarding AI systems, whether they are using, developing, overseeing, or providing products or services that involve artificial intelligence.
An artificial intelligence management system must be integrated into the organization’s processes and overall governance structure. The use of AI for automated decision-making, sometimes in a non-transparent and non-explainable manner, may require special governance measures beyond those of traditional IT systems.
ISO/IEC 42001, the international management system standard for AI, is closely aligned with the AI Act’s governance requirements and can support organizations in achieving compliance.
RSM AI Compliance Consulting
RSM helps clients prepare for the challenges of the regulatory environment, develop internal governance systems, and ensure business and legal security through transparent, responsible AI applications.
AI Act Compliance - How Can RSM Help?
Our experienced AI consulting professionals at RSM can help you prepare for compliance with the AI Act, ensuring your company meets the new regulatory requirements.
RSM’s AI advisory services include the following activities:
Establishing AI awareness and internal governance:
- Developing internal policies for AI use
- Organizing and delivering employee training and awareness sessions
- Launching AI risk mitigation programs
- Producing monthly newsletters as part of an awareness program
AI Act readiness - based on the ISO 42001 standard:
- Gap analysis
- Establishing an AI governance framework
- AI risk management
- Preparing the necessary policies and procedures
- Developing targeted training and awareness materials
Integrated ISO 42001 and ISO 27001 readiness:
- Gap analysis
- Harmonized AI and IT security policies
- Integrated risk management
- Preparing the required documentation
- Developing targeted training and awareness materials
Why Choose RSM’s Artificial Intelligence (AI) Compliance Advisory Services?
- Comprehensive approach - We offer an integrated perspective that combines technology, legal, data protection, and risk management aspects.
- Experienced expert team - Our support is built on the collaboration of professionals from multiple disciplines.
- ISO and AI Act expertise - We provide guidance based on harmonized international standards and the AI Act.
- Tailored solutions - Our advice is aligned with your organization’s operations and technological maturity.
AI Act - Deadlines and Key Dates
1. Entry into force - August 1, 2024
The AI Regulation has officially entered into force.
2. Unacceptable AI - February 2, 2025
Bans on AI systems posing unacceptable risks (e.g. harmful manipulation, social scoring, biometric categorization based on sensitive characteristics) become applicable. General provisions, such as the AI literacy requirement, also apply.
3. General-purpose AI - May 2, 2025
Codes of practice for general-purpose AI models must be ready by this date.
4. Competent authorities - August 2, 2025
- Obligations for providers of general-purpose AI models take effect.
- Member States must designate competent authorities to oversee AI regulation.
- The European Commission will review the list of prohibited AI practices annually.
5. Post-market monitoring - February 2, 2026
The European Commission must adopt an implementing act laying down a template for post-market monitoring plans.
6. High-risk AI - August 2, 2026
- Member States must set up at least one functioning regulatory sandbox.
- Obligations for high-risk AI systems listed in Annex III come into effect. These include systems used in: biometric identification, critical infrastructure, education, employment, essential public services, law enforcement, migration, and the judiciary.
- The Commission will review and potentially update the list of high-risk systems.
- Penalties: Member States must implement rules for sanctions, including administrative fines.
7. Third-party conformity assessment - August 2, 2027
Obligations apply to high-risk AI systems not listed in Annex III but subject to third-party conformity assessment under EU law (e.g. toys, radio equipment, in vitro diagnostic medical devices, aviation safety, agricultural vehicles).
8. Large-scale IT systems - December 31, 2030
Compliance obligations take effect for AI systems that are components of large-scale EU IT systems in the areas of freedom, security, and justice (e.g. the Schengen Information System) and that were placed on the market before August 2, 2027.
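For planning purposes, here is a minimal sketch (in Python, with the milestone list condensed from the timeline above) of how a compliance team might check which obligations already apply on a given date:

```python
from datetime import date

# Key AI Act milestones, condensed from the timeline above.
milestones = [
    (date(2024, 8, 1), "Entry into force"),
    (date(2025, 2, 2), "Prohibitions on unacceptable-risk AI; AI literacy"),
    (date(2025, 8, 2), "GPAI model obligations; competent authorities designated"),
    (date(2026, 8, 2), "High-risk (Annex III) obligations; regulatory sandboxes"),
    (date(2027, 8, 2), "Third-party conformity assessment obligations"),
    (date(2030, 12, 31), "Large-scale EU IT systems (Annex X) compliance"),
]

today = date.today()
for deadline, obligation in milestones:
    status = "APPLIES" if deadline <= today else f"in {(deadline - today).days} days"
    print(f"{deadline.isoformat()}  {obligation}: {status}")
```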
AI Act - Sanctions
The AI Act introduces strict sanctions to ensure compliance across the European Union.
- Member States must lay down and enforce rules on penalties, including administrative fines, and notify them to the Commission by August 2, 2025.
- Fines may reach up to EUR 35 million or 7% of the company’s total worldwide annual turnover, whichever is higher, for the most serious infringements - a tiered model similar to the GDPR’s.
- Sanctions must be effective, proportionate, and dissuasive, with particular severity for breaches involving prohibited or high-risk AI systems.
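As a worked illustration of the fine ceiling: for the most serious infringements the cap is the higher of the fixed amount and the turnover-based amount, so for an undertaking with a hypothetical EUR 1 billion worldwide annual turnover the ceiling would be EUR 70 million (7%), not EUR 35 million. A minimal Python sketch of this “whichever is higher” rule:

```python
def fine_ceiling_eur(worldwide_annual_turnover_eur: float,
                     fixed_cap_eur: float = 35_000_000,
                     turnover_share: float = 0.07) -> float:
    """Maximum administrative fine for the most serious infringements:
    the higher of the fixed cap or the turnover-based cap (simplified)."""
    return max(fixed_cap_eur, turnover_share * worldwide_annual_turnover_eur)

# Hypothetical undertaking with EUR 1 billion worldwide annual turnover:
print(f"EUR {fine_ceiling_eur(1_000_000_000):,.0f}")  # -> EUR 70,000,000
```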
What is Artificial Intelligence and why does it need regulation?
Artificial Intelligence (AI) refers to a set of technologies that enable machines to mimic human cognitive functions such as learning, reasoning, and decision-making. Today, AI is present in nearly every industry—from self-driving vehicles and intelligent customer service to personalized recommendation systems.
The Risks of AI
However, AI also raises serious ethical, legal, and data protection concerns. Key issues include the transparency of automated decision-making, algorithmic bias, the protection of human rights, and the security of personal data. Improperly regulated AI systems can pose risks to users, employees, and entire business operations.
That’s why it is essential for organizations using AI to prepare for regulatory expectations now and to responsibly build their own AI governance and compliance frameworks.