CMS and Capgemini Invent: Joint consulting on digital regulation and transformation
By combining the expertise of international law firm CMS and leading strategy and technology consultancy Capgemini Invent, we provide comprehensive and seamless advice on all aspects of digital transformation. Together, we present the foundational elements of AI governance, AI governance frameworks and platforms, and the importance of AI regulatory compliance.
We would like to thank the authors Björn Herbers, Philipp Heinzke, David Rappenglück and Sara Kapur (all CMS) and Philipp Wagner, Oliver Stuke, Lars Bennek and Catharina Schröder (all Capgemini Invent).
Legal assessment and implications of the AI Act
The “Regulation on Harmonized Rules for Artificial Intelligence” (AI Act), adopted by the European Parliament and the Council of the European Union, entered into force on 1 August 2024. This concludes a long path of tough negotiations that began in 2021 with the European Commission’s proposal for EU-wide regulation of AI. Due to its direct applicability in all 27 member states, the AI Act will have far-reaching impacts on providers, operators, and users of AI.

- Prohibited practices under the AI Act (Art. 5) are those AI systems deemed incompatible with the fundamental rights of the EU.
- High-risk AI systems (Art. 6) fall into two groups: AI systems that are products, or safety components of certain products, subject to third-party conformity assessment, and AI systems used in specific areas listed in the Act. Providers of such AI systems face high compliance requirements throughout the system’s lifecycle.
- Certain AI systems, such as those interacting with humans (e.g., chatbots), are subject to specific transparency obligations (Art. 50).
- General Purpose AI (GPAI) models (Art. 51 ff.) are versatile AI models that can perform various tasks and be integrated into systems. Compliance obligations vary based on classification as “normal” GPAI models or those with systemic risk.
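The four risk tiers above can be summarized in a simple classification sketch. The enum below and the example use cases are purely illustrative assumptions of ours; classifying a real system requires legal assessment against the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the AI Act's structure."""
    PROHIBITED = "prohibited practice (Art. 5)"
    HIGH_RISK = "high-risk system (Art. 6)"
    TRANSPARENCY = "transparency obligations (Art. 50)"
    GPAI = "general-purpose AI model (Art. 51 ff.)"

# Hypothetical example mappings; actual classification needs legal review.
examples = {
    "social scoring by public authorities": RiskTier.PROHIBITED,
    "CV-screening tool for recruitment": RiskTier.HIGH_RISK,
    "customer-service chatbot": RiskTier.TRANSPARENCY,
    "large language model offered via API": RiskTier.GPAI,
}

for use_case, tier in examples.items():
    print(f"{use_case}: {tier.value}")
```

A structured mapping like this is a useful starting point for the AI inventory discussed below, even though the legal analysis behind each entry cannot be automated.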
The provisions on prohibited practices apply six months after the AI Act entered into force, those on GPAI models after 12 months, and those on high-risk AI systems after 24 to 36 months.
Violations of the provisions can result in fines of up to EUR 35 million or up to 7% of the previous year’s total worldwide turnover, whichever is higher. For supplying incorrect, incomplete, or misleading information to authorities, the fine may amount to EUR 7.5 million or up to 1% of the previous year’s total worldwide turnover.
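The fine ceilings can be sketched as a simple calculation. This assumes the “whichever is higher” rule for undertakings; the figures are the caps, not the fines actually imposed, which are set case by case.

```python
def max_fine_prohibited(worldwide_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited practices:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_turnover_eur)

def max_fine_incorrect_info(worldwide_turnover_eur: float) -> float:
    """Upper bound for supplying incorrect information:
    EUR 7.5 million or 1% of worldwide annual turnover, whichever is higher."""
    return max(7_500_000.0, 0.01 * worldwide_turnover_eur)

# For a company with EUR 1bn turnover, 7% (EUR 70m) exceeds the EUR 35m floor.
print(max_fine_prohibited(1_000_000_000))   # 70000000.0
# For EUR 200m turnover, 1% (EUR 2m) is below the EUR 7.5m floor.
print(max_fine_incorrect_info(200_000_000))  # 7500000.0
```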
Strategic and operational implementation through AI governance
Implementing the requirements of the AI Act requires an overarching approach. With our comprehensive AI Governance Framework, we help organizations use AI responsibly and efficiently while minimizing risks. Processes and responsibilities must be defined and adhered to throughout the AI lifecycle, covering data, models, systems, and use cases, aligning with technical, procedural, and regulatory requirements.

Formulating a long-term vision for AI governance and developing ethical guidelines within the organization lays the foundation for any AI strategy. This strategy must then be effectively conveyed through a comprehensive communication plan. Subsequently, roles and responsibilities related to AI projects can be identified and defined, and processes for the development, implementation, and monitoring of AI projects can be adjusted.
Creating a handbook with security standards, best practices, and guidelines for implementing AI is recommended. Given that the AI Act affects areas such as data protection, copyright, and IT security law, it is advisable to continuously analyze regulatory requirements and translate them into technically measurable KPIs.
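Translating regulatory requirements into measurable KPIs might look like the following sketch. The requirement labels reference real AI Act articles, but the KPI definitions and target thresholds are our own assumptions, not values prescribed by the Act.

```python
# Illustrative catalog mapping regulatory requirements to measurable KPIs.
# Thresholds are assumptions for demonstration, not legal requirements.
kpi_catalog = {
    "data governance (Art. 10)": {
        "kpi": "share of training records with documented provenance",
        "target": 0.99,
    },
    "human oversight (Art. 14)": {
        "kpi": "share of flagged decisions reviewed by a human",
        "target": 1.0,
    },
    "accuracy and robustness (Art. 15)": {
        "kpi": "minimum accuracy on a held-out validation set",
        "target": 0.95,
    },
}

def find_gaps(measured: dict) -> list:
    """Return the requirements whose measured value misses its target."""
    return [req for req, spec in kpi_catalog.items()
            if measured.get(req, 0.0) < spec["target"]]

gaps = find_gaps({
    "data governance (Art. 10)": 0.97,
    "human oversight (Art. 14)": 1.0,
    "accuracy and robustness (Art. 15)": 0.96,
})
print(gaps)  # only the data-governance KPI misses its target
```

The value of such a catalog is less in the code than in forcing each requirement to have an owner, a metric, and a target that can be monitored continuously.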
For providers of high-risk AI, setting up a risk management system is mandatory (specific components to be explored in a subsequent blog post). Here, an AI governance framework is essential. To effectively scale AI deployment and optimize operational processes, it is crucial to take an inventory of all AI systems and subsequently automate processes across development, deployment, monitoring, and documentation stages.
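An AI inventory can start as a minimal registry like the one below. The field names and example entries are illustrative assumptions; a production inventory would add versioning, documentation links, and conformity-assessment status.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """Minimal inventory entry; fields are illustrative, not exhaustive."""
    name: str
    owner: str
    risk_tier: str        # e.g. "high-risk", "transparency", "minimal"
    lifecycle_stage: str  # e.g. "development", "deployment", "monitoring"

@dataclass
class AIInventory:
    systems: list = field(default_factory=list)

    def register(self, system: AISystem) -> None:
        self.systems.append(system)

    def by_tier(self, tier: str) -> list:
        """All systems in a risk tier, e.g. to prioritise the mandatory
        risk-management obligations for high-risk systems."""
        return [s for s in self.systems if s.risk_tier == tier]

inventory = AIInventory()
inventory.register(AISystem("chatbot", "service desk", "transparency", "deployment"))
inventory.register(AISystem("benefit triage", "case handling", "high-risk", "development"))
print([s.name for s in inventory.by_tier("high-risk")])  # ['benefit triage']
```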
Establishing and monitoring metrics for quality, fairness, and robustness is a cornerstone of effective strategy. To foster knowledge among employees and mitigate biases against AI, continuous training and awareness initiatives should be integrated throughout the AI lifecycle in an iterative process, complemented by change management to ensure a seamless transition.
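As one concrete example of a fairness metric, the sketch below computes the demographic parity difference between groups. This is one common metric among many, and the choice of metric and acceptable threshold is a policy decision, not something prescribed by the AI Act.

```python
def demographic_parity_difference(outcomes, groups):
    """Largest gap in positive-outcome rates across groups.

    `outcomes` is a list of 0/1 decisions; `groups` assigns each
    outcome to a protected group. Returns a value in [0, 1], where
    0 means all groups receive positive outcomes at the same rate.
    """
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy data: group A receives positive outcomes 3/4, group B only 1/4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

Tracking such a metric over time, alongside accuracy and robustness indicators, turns the abstract goal of "fairness monitoring" into a dashboard value that governance processes can act on.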
Application example
To illustrate this approach, consider a fictional federal ministry intending to use AI for partial automation of administrative services.
This example illustrates the complexity and multifaceted nature of the topic. Only by first clarifying the legal questions surrounding the AI actors involved and the type of AI system can the resulting obligations be identified and effectively integrated into AI governance.
Strategic approaches to AI governance and risk management
The use of AI offers enormous potential for value creation but also carries functional and legal risks. Ongoing legislation, such as the AI Act, leads to a comprehensive regulatory framework but requires detailed implementation in organizations, considering technical, procedural, and human dimensions. Various projects have shown that AI governance only adds value if it keeps pace with the constant evolution of AI technology. Clear regulatory frameworks can catalyze increased AI applications in boundary-pushing areas following initial uncertainty. This not only fosters technological solutions for known issues but also unveils new use cases made possible by emerging capabilities. To remain competitive, implementation guidelines should strike a balance: offering necessary support while maintaining flexibility from the outset. Keep an eye out for our forthcoming deeper exploration of the risk management requirements outlined in Article 9 of the AI Act.