Trends in 2025 for Public Administration

Capgemini
Ravi Arunachalam, Simone Botticini, Pierre-Adrien Hanania, Sandra Prinsen
Mar 24, 2025

The future of public administration lies in partnerships—not silos—with citizens, businesses, and civil society. In an era of rapid digital transformation, while the guiding principle of providing accessible, inclusive and high-quality public services remains fundamentally unchanged, the way public administrations are creating value for their citizens is undergoing a profound evolution.

As technology evolves and societal challenges grow more complex and interconnected, traditional siloed structures are increasingly being replaced by dynamic ecosystems where value co-creation is critical to the success or failure of public interventions.

In 2021, 85% of public administrations in Europe were already using some form of co-creation to innovate public-service delivery. Today, this approach has become a widespread foundational principle. Key technological enablers are driving this shift, empowering public administrations to move towards a collaborative approach of public service delivery that brings together governments, businesses and citizens to address challenges more effectively. From leveraging interoperability to dissolve boundaries and advance data-sharing ecosystems to the rise of GovTech, proactive service delivery and the transformative potential of government AI, these key trends are laying the groundwork for a smarter, more inclusive and efficient public governance designed to meet the demands of modern, interconnected societies.

In today’s interconnected world, traditional boundaries in government at every level (local, state, national) are increasingly dissolving. This shift is driven by the urgent need for integrated, citizen-centric service delivery and the efficient utilization of resources. Governments are moving from siloed operations to a whole-of-government approach, where entities collaborate across jurisdictions to achieve shared objectives and provide responsive, efficient public services.

At the heart of this transformation is interoperability. Governments are prioritizing interoperability principles to foster collaboration among agencies, sectors, and even across national borders. This requires the seamless exchange of data, systems, and processes, supported by a robust framework that addresses organizational, legal, semantic, and technical challenges.
Around the world, interoperable services are reshaping public administration, showcasing the value of integrated public services:
Denmark: offers coherent public services and consistently ranks first in the UN e-government survey
Australia: delivers life-event-based services through myGov
Singapore: the LifeSG app integrates a wide range of public services into a single, unified experience
Many societal challenges today transcend national or jurisdictional boundaries. Issues like climate change, public health crises, rapid urbanization, cybersecurity threats, and migration and displacement require coordinated, cross-border interoperability efforts. To assist governments in their efforts, several interoperability frameworks are gaining traction:
European Interoperability Framework (EIF): Established in 2017, the EIF provides guidance for EU member states to achieve cross-border public service integration. The Interoperable Europe Act (2024) promises to accelerate these efforts, mandating more rigorous interoperability initiatives (e.g. the Once Only Technical System).
Digital public infrastructures (DPI): Defined as interoperable and shared digital systems open for collaboration across public and private services, DPIs are gaining traction on the promise of enhancing initiatives in fields such as digital identity and digital wallets.
ASEAN Digital Economy Framework Agreement (DEFA): Currently in negotiation phase, DEFA emphasizes cross-border data flows, data protection, and cybersecurity. Once implemented, it is expected to transform digital collaboration within the ASEAN region.
These efforts promise not only more efficient service delivery but also better preparedness for collaboratively tackling global societal challenges. Capgemini is committed to helping our clients address interoperability challenges and transform public service delivery within and across borders.

As European Commission President Ursula von der Leyen aptly stated, “Europe needs a data revolution,” highlighting the urgency for governments to harness data’s untapped potential. Governments worldwide are now reimagining how they share and leverage data, moving away from centralized data hubs toward decentralized, sovereign data-sharing ecosystems.
Historically, centralized data hubs allowed limited collaboration due to agency concerns about losing control over their data. Today, data spaces, enabled by protocols and technologies that ensure sovereignty and security, are fostering new levels of trust and cooperation. These frameworks empower sector and cross-sector data sharing, facilitating innovation and improving public services.
Supportive initiatives like the EU Data Spaces Support Center (DSSC) and open-source projects like SIMPL act as catalysts, standardizing and enabling broader adoption of data spaces from both an implementation and a governance perspective. Stakeholders such as the International Data Spaces Association (IDSA) have been instrumental in formalizing these efforts, promoting the Dataspace Protocol as a potential global standard for interoperability.
The EU leads the way with its Common European Data Spaces initiative, creating sector-specific data ecosystems for health, agriculture, cultural heritage, and climate goals (Green Deal). These initiatives are already yielding results, such as the European Health Data Space, which enhances cross-border healthcare and crisis response.
Globally, interest in data spaces is growing. Australia is piloting data spaces through its leading national data infrastructure research agency, the Australian Research Data Commons (ARDC), inspired by EU efforts. China, through its 2024-2028 National Data Administration Action Plan, aims to establish over 100 data spaces, driving an integrated national data market while securely connecting with international partners.
Data spaces are evolving from niche proofs of concept to broader ecosystems capable of addressing complex societal challenges. Significant developments are also under way in decentralized identity management, privacy-preserving technologies, and robust usage-control mechanisms at the protocol and technology-component level. These developments will further enhance trust and accelerate wider adoption, but the existence of such privacy-enhancing techniques should not obscure the human side of the journey: the organizational change and stakeholder management it requires. The rise of new roles such as the Chief Data Officer, well-scoped pilot phases, and a data-collaboration approach tailored to specific use cases and to the culture of the organizations involved remain key features of a successful journey towards sharing data.
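
To make the usage-control idea concrete, the minimal Python sketch below shows the kind of policy check a data-space connector might perform before releasing a dataset to a consumer. It is an illustration only; the policy fields, participant names, and logic are hypothetical and are not taken from the IDSA Dataspace Protocol.

```python
# Hypothetical sketch of a usage-control check of the kind data-space connectors
# perform before releasing data. Policy fields and names are illustrative only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UsagePolicy:
    allowed_purposes: set[str]      # e.g. {"public-health-research"}
    allowed_consumers: set[str]     # participant IDs trusted by the provider
    expires: datetime               # contract end date

def may_release(policy: UsagePolicy, consumer_id: str, purpose: str, now: datetime) -> bool:
    """Return True only if the consumer, purpose and time window all satisfy the policy."""
    return (
        consumer_id in policy.allowed_consumers
        and purpose in policy.allowed_purposes
        and now < policy.expires
    )

policy = UsagePolicy(
    allowed_purposes={"public-health-research"},
    allowed_consumers={"agency-b"},
    expires=datetime(2026, 1, 1),
)
print(may_release(policy, "agency-b", "public-health-research", datetime(2025, 6, 1)))  # True
```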

GovTech is no longer just a buzzword. It’s a revolution that’s transforming the way public administrations operate and deliver public services. What was once an afterthought relegated to IT departments has now become a strategic priority for administrations worldwide. GovTech, defined as the public sector’s adoption and use of innovative technological solutions to improve public service delivery, is the key to achieving better social outcomes, digital inclusion, and improved public sector services.

With government technology projected to surpass $1 trillion and become the largest software market by 2028, it’s clear that public administrations do not want to be merely passive buyers of innovation—they want to be innovative players themselves. Indeed, GovTech is not just about purchasing technology, it’s about co-creating value through partnerships. While legacy IT systems, siloed governance structures and traditional procurement processes that favor large vendors still pose challenges, public administrations are increasingly trying to overcome them by rethinking their engagement with the private sector, turning to public-private partnerships (PPPs) to tap into the creativity, agility, and expertise of startups and SMEs. These collaborations allow administrations to work with non-traditional players to co-create solutions, share risks, and scale innovations to improve service delivery. In this regard, a pivotal moment in the worldwide GovTech ecosystem came with the official opening of the Global Government Technology Centre in Berlin (GGTC Berlin), a hub for collaboration and digital transformation.
Capgemini is proud to be a co-founder of this first-of-its-kind center, which brings together governments, startups, and private enterprises to accelerate the adoption of GovTech. GGTC promotes a systematic approach to GovTech, encouraging cross-sector collaboration and co-creation among global experts to tackle challenges like interoperability and siloed systems, ensuring that solutions can be shared across borders to benefit countries with fewer resources, helping bridge the digital divide.
Looking ahead, and as exemplified by the GGTC, a strategic, systematic, and sustainable approach to GovTech will mark the new era of innovation for public administrations. As the GovTech ecosystem matures, public administrations will unlock new technological solutions, ensuring digital transformation is inclusive, scalable, and impactful across borders, all while being more agile, innovative, and responsive to digitally native societies.

Digitally sophisticated citizens are demanding faster, seamless, and personalized digital services. Simply digitizing public services is no longer enough; public administrations must step up their game by adopting a human-centered approach, organized around citizens’ life events to proactively meet their needs.

While digital public services have become more efficient and accessible, many remain mere electronic replicas of outdated traditional processes. Challenges such as siloed systems and unequal access to eGov services persist in many public administrations, along with the growing pressure to match the intuitive user experience and responsiveness of private-sector platforms. Moving public services online is insufficient; administrations must ensure that citizens can and will use them. Governments with lower service design maturity levels are only now moving beyond basic digitalization, while more advanced administrations are shifting from fragmented electronic services to proactive, fully integrated service delivery. This transformation requires systemic reforms and interagency collaboration to co-create citizen services that are human-centered by design and informed by real-time user insights rather than outdated government silos. Meeting citizen expectations today means providing multi-service, omnichannel experiences that anticipate their needs, mirroring the seamless interactions they have with private-sector services.
Some countries are already exploring proactive governance approaches, moving towards a truly “invisible bureaucracy”, where services are seamlessly embedded into daily life. By leveraging data-driven insights, governments can determine eligibility and deliver services automatically, without requiring citizens to apply. For example, the UAE Government has been pioneering this transformation, offering bundled, proactive services that range from offering 18 housing services in just one platform to bundled services for hiring employees or saving families time and effort when a baby is born. This new reality extends public services’ reach to underserved populations, with the user-friendliness of private sector platforms. Citizens no longer need to apply or even be aware of service delivery, minimizing bureaucratic burdens while enhancing user satisfaction.
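
As a rough illustration of how proactive, life-event-based delivery can work under the hood, the sketch below evaluates hypothetical eligibility rules against data an administration already holds when a birth is registered, so that entitlements can be granted without an application. All field names, thresholds, and rules are invented for the example.

```python
# Illustrative sketch of proactive, life-event-based service delivery: when a birth
# is registered, eligibility rules are evaluated against data the administration
# already holds, and matching services are initiated without an application.
# All field names and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Household:
    registered_children: int
    annual_income: float
    resident: bool

def services_for_new_birth(h: Household) -> list[str]:
    """Return services that can be granted automatically after a birth registration."""
    services = ["child-benefit"]                      # universal entitlement in this sketch
    if h.resident and h.annual_income < 30_000:
        services.append("childcare-subsidy")          # means-tested entitlement
    if h.registered_children + 1 >= 3:
        services.append("large-family-allowance")
    return services

print(services_for_new_birth(Household(registered_children=2, annual_income=25_000, resident=True)))
# ['child-benefit', 'childcare-subsidy', 'large-family-allowance']
```
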
Ultimately, the future of public service delivery is not just about making public services digital, it is about making them intelligent, integrated, and anticipatory. Achieving this vision requires breaking down silos and fostering strong partnerships across government agencies, private-sector innovators, and civil society to co-create data-driven services that proactively meet citizens’ needs.

As citizen expectations rise, budgets shrink, and workloads increase, AI has emerged as a powerful tool in the hands of public administrations to improve internal operations and deliver better public services. No longer a distant promise, AI is here and is now transitioning from experimentation to large-scale implementation, but challenges remain.
Unlike with previous technological innovations, accessible, “democratic” tools like ChatGPT and GitHub Copilot have empowered civil servants to explore (Generative) AI’s potential from the outset. In countries like Australia and the UK, trials of Microsoft 365 Copilot and RedBox Copilot have demonstrated significant time savings on tasks such as document summarization, information retrieval, and briefing creation. This allows civil servants to focus on strategic high-value work, improving their productivity and job satisfaction. This is in line with recent studies which show how GenAI could increase productivity by up to 45%, automating 84% of routine tasks across over 200 government services, ultimately driving a global productivity boost of $1.75 trillion annually by 2033.
Beyond internal operations, AI is reshaping how administrations interact with citizens. Tools like chatbots and virtual assistants are improving transparency and fairness while creating more personalized, accessible, and inclusive public services. For example, the Generalitat de Catalunya in Spain partnered with Capgemini to implement a GenAI chatbot for handling citizens’ queries in both Catalan and Spanish, reducing employees’ workloads and ensuring equitable access to services for all citizens. By incorporating human oversight to verify chatbot outputs, the AI-powered chatbot is driving efficiency and inclusion in public service delivery without compromising quality and trust.
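
The human-oversight pattern described above can be sketched as a simple review gate: the model drafts an answer, and anything it is not sufficiently confident about is routed to a civil servant before reaching the citizen. The snippet below is a generic illustration, not the Generalitat de Catalunya implementation; generate_answer() stands in for whichever LLM service is used, and the confidence rule is hypothetical.

```python
# Hedged sketch of a human-in-the-loop gate for a citizen-facing GenAI chatbot.
# generate_answer() is a stand-in for any LLM call; the review rule is illustrative.
def generate_answer(question: str, language: str) -> tuple[str, float]:
    """Stand-in for an LLM call returning an answer and a self-reported confidence."""
    return f"[{language}] draft answer to: {question}", 0.62

def queue_for_human_review(question: str, draft: str) -> str:
    # Low-confidence drafts are queued for a civil servant to verify or rewrite
    # before anything reaches the citizen.
    print(f"Routing to human reviewer: {question!r}")
    return draft + " (pending human verification)"

def handle_query(question: str, language: str, confidence_threshold: float = 0.8) -> str:
    answer, confidence = generate_answer(question, language)
    if confidence < confidence_threshold:
        return queue_for_human_review(question, answer)
    return answer

print(handle_query("Com renovo el meu DNI?", language="ca"))
```
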
These early successes are just the tip of the iceberg for (Gen) AI applications in public administrations. Now, the challenge is no longer experimentation but scaling these innovations to embed them into everyday processes. Beyond automation, the true transformative potential of AI lies in applications such as AI-driven decision-support mechanisms and predictive governance, which will redefine how administrations function and serve citizens. This path is not without obstacles: data privacy, security and biases in AI outputs remain top concerns as administrations grapple with protecting citizens’ sensitive information while integrating AI into their systems. The solution lies in developing customized AI tools with built-in trust layers and guardrails that will prevent inaccuracies and biases. Here Catalonia’s approach, balancing automation with accountability, offers a model for building trust in (Gen)AI.

Time for action in an increasingly interconnected world

To fully harness the potential of these digital trends, public administration leaders must adopt an action-oriented approach. A combination of political commitment to digital transformation, inter-agency collaboration and leveraging robust PPPs to bridge resource gaps and accelerate innovation will be key. Together they will help to overcome budget constraints, siloed institutional frameworks, cultural resistance to change and complexities in measuring and reporting progress that still afflict public administrations worldwide. While strategically investing in cutting-edge technologies like AI, leaders must also champion a culture of continuous learning and upskilling among civil servants, ensuring they are equipped to leverage these emerging tools effectively. Ultimately, aligning digital strategies with citizens’ needs through human-centered service delivery will enable administrations to build trust, improve efficiency, and deliver meaningful public value in an increasingly interconnected world.

Authors

Pierre-Adrien Hanania

Global Public Sector Head of Strategic Business Development
“In my role leading the strategic business development of the Public Sector team at Capgemini, I support the digitization of the public services across security and justice, public administration, healthcare, welfare, tax and defense. I previously led the Data & AI in Public Sector offer of the Group, focusing on how to unlock the intelligent use of data to help organizations deliver augmented public services to the citizens along trusted and ethical technology use. Based in Germany, I previously worked for various European think tanks and graduated in European Affairs at Sciences Po Paris.”
Ravi Shankar Arunachalam

Public Administration & Smarter Territories SME – Global Public Sector
“As a Public Sector strategist and technologist at Capgemini, I assist local, state, and federal governments worldwide in harnessing the full potential of a collaborative, Government-as-a-platform model to revolutionize citizen service delivery. With a deep understanding of industry challenges, citizen expectations, and the evolving technology landscape, I develop systemic transformation strategies and solutions that provide lasting value to both people and the planet”
Simone Botticini

Associate Consultant, Capgemini Invent Belgium
“Public administrations worldwide are undergoing a major transformation, driven by digitalization, evolving citizen expectations, and the move toward proactive, data-driven governance. By leveraging digital technologies, they can improve service delivery, streamline bureaucracy, and create more inclusive, citizen-centric administrations. Capgemini is leading this transformation, helping public administrations harness the power of technology to enhance public services while ensuring trust, transparency, and security.”
Sandra Prinsen

Group Client Partner and Global Public Admin Segment Lead
“I work with our public clients to create a more sustainable, diverse, and inclusive society, fueled by technology. The combination of this digital and sustainable transition offers governments the opportunity to navigate towards a society and a data-driven ecosystem that is ready for the future. That is why I look forward to thinking along on suitable solutions, to jointly make a real impact in the lives of citizens.”

    Trends in 2025 for Security and Justice

    Capgemini
    Vanshikha Bhat, Anne Legrand, Conrad Agagan, Nick James, Pierre-Adrien Hanania
    Mar 27, 2025

    A focus on justice reform, restorative practices, and addressing systemic inequalities is reshaping the way societies approach crime and punishment. Coupled with new threats posed by cybersecurity risks, geopolitical instability, and climate change, this will significantly impact both national security priorities and global cooperation in the coming years.

    Data has become a core strategic asset for organizations today and plays a vital role in improving our public safety, law enforcement and judicial systems. By collecting and analyzing available data, law enforcement organizations are making informed decisions, enabling them to detect and prevent crime. Additionally, data has a part to play in improving the efficiency of police and justice as well as enhancing the citizen experience. For example, data is the bedrock of assisted case management for citizen queries and is being used for predictive analytics in courts, whereby legal practitioners can use historic data to predict (and manage) outcomes.

    There is also a matter of how data should be shared. Initiatives such as the EU law enforcement data space promote data sharing, not just within a department but also across borders. As a result, the development of interoperable systems that allow the seamless exchange of data between countries’ border agencies is gaining pace, improving the flow of information about people and cargo across borders.
    At the same time, while data is an undeniable asset, to ensure its value, security organizations must use and protect their own data (and mitigate risks), as well as help other organizations with their data management security requirements. All organizations must balance the need to use data to improve decision making or outcomes while complying with privacy and security standards. The goal is to ensure that data remains both a strategic asset and a protected resource.

    The next generation of forensics is a multifaceted field that blends advanced physical and virtual methods. It utilizes advanced tools, techniques, and methodologies to address the challenges of modern-day investigations. These advancements are crucial as cybercrime becomes one of the fastest-growing criminal activities, with data breaches and digital fraud continuing to rise alongside traditional physical crimes.

    While traditional techniques remain important, the rise of cybercrime, advanced data storage, and complex digital evidence requires continuous adaptation of tools, techniques, and strategies. According to a 2023 report by Cybersecurity Ventures, global cybercrime costs are expected to reach $10.5 trillion annually this year, up from $3 trillion in 2015. The integration of AI, cloud computing, mobile forensics, and data recovery tools is reshaping how law enforcement and investigators approach crime-solving in an increasingly digital world.
    In the age of generative AI (Gen AI), biometrics are becoming more sophisticated, combining traditional biometric modalities with advanced AI techniques to create more secure, accurate, and user-friendly authentication systems. Using biometric and digital identity systems, governments are developing stronger cybersecurity measures to prevent data breaches and unauthorized access.
    Facial recognition technology is another rapidly growing component of physical forensics. The global facial recognition market was valued at $4.9 billion in 2024 and is projected to grow at a 17.8% CAGR from 2023 to 2030. According to Statista, facial recognition systems were used by over 60% of law enforcement agencies worldwide in 2023 for identification and investigation.
    The implementation of AFIS (automated fingerprint identification system) has revolutionized fingerprint analysis. As of 2023, there were more than 500 million fingerprint records in AFIS databases worldwide. The use of AFIS systems is now standard in most countries, and their accuracy has improved significantly with AI and machine learning algorithms, reducing human error and increasing identification speed.

    New technologies continue to shape the future of law enforcement, enhancing crime prevention, improving investigation efficiency, and ensuring better accountability and public safety. These technologies range from AI-driven analytics to advanced crime detection tools and digital forensics.

    Reports also suggest that the market for AI in public security and safety will grow from US$ 12.02 billion in 2023 to US$ 99.01 billion by 2031. This is driven by applications such as predictive policing, facial recognition, crime pattern analysis, risk profiling and AI-assisted investigations.
    Predictive policing using AI: AI is being used to analyze patterns in historical data, social media activity, weather, and other factors to anticipate where and when crimes are likely to occur. For example, US police agencies use AI-driven predictive policing tools like PredPol, HunchLab, and Palantir to forecast crime hotspots and resource allocation. The market for AI in predictive policing is expected to grow at a CAGR of 46.7% over the next 10 years.
    Smart policing: Connected devices (e.g., sensors, smart vehicles, and wearable technology) are enabling real-time data sharing and smarter resource deployment.
    Cybersecurity & fraud detection: AI is increasingly used for detecting financial crimes, such as money laundering and fraud. The global market for AI in cybersecurity, valued at US$ 25.35 billion in 2024, is expected to grow at a CAGR of 24.4% from 2025 to 2030.
    AI-powered decision making: AI can be used to analyze large datasets, such as travel records, biometric data, and other intelligence sources, to identify trends or patterns indicative of illegal immigration, human trafficking, or drug smuggling. This allows for more efficient decision making, improving how border agents allocate resources.
    AI for risk profiling: AI systems can use historical data, behavioral analytics, and patterns from databases like Interpol or FBI records to predict potential risks. For example, AI models can assess whether an individual is likely to engage in illegal activity based on their travel patterns, previous encounters with law enforcement, or other risk indicators.
    While the added value of such technology is clear, ethical standards will be key to ensuring compliance with frameworks such as the EU AI Act. Security and justice organizations have started to look at solutions and organizational set-ups, especially as their use cases can often fall under the scope of high-risk categories. Tools exist, such as Capgemini’s EU AI Act Compliance platform, and a more programmatic approach is being developed, helping police forces and home affairs ministries to monitor where they stand with regard to compliance with explainability, human supervision, or bias-detection standards.

    Countries worldwide are grappling with border security concerns, seeking ways to combat illegal immigration, drug smuggling, and human trafficking. Many are leveraging technology to improve the efficiency, security, and management of border control processes. However, all of this requires a coordinated, multi-faceted approach that combines advanced technologies, effective policies, and law enforcement capabilities. Border agencies must also focus on balancing humanitarian concerns with national security and border management.

    Technologies such as drones, sensors, AI, facial recognition, and advanced detection systems offer border management officers powerful tools to enhance their capabilities, while policy reforms and collaborative strategies help address the systemic challenges of illegal immigration and drug trafficking. The following strategies and technologies are among those playing a vital role in addressing these challenges:

    • Biometric identification and digital identity: Biometric systems are being used at border crossings to identify individuals and verify their identities. These systems can be integrated with international databases to track those attempting illegal border crossings. Some countries are implementing digital identity programs that allow travelers to authenticate themselves securely using biometric data on smartphones or other digital platforms, making it harder for migrants to cross illegally using counterfeit documents.
    • Cybersecurity and data protection: As more data is collected and shared across borders (e.g., biometric data, travel information, migration records), securing this sensitive data becomes crucial. Robust cybersecurity protocols and privacy regulations are necessary to prevent misuse or exploitation of personal data while maintaining effective border security. Advanced fraud detection systems are also needed to identify fake documents, including forged passports, visas, or identity cards.

    The justice sector is increasingly using Gen AI technologies like natural language processing (NLP), predictive analytics, and machine learning. These technologies have the potential to improve efficiency, reduce costs, and even contribute to fairness in legal processes. AI is projected to improve court system efficiency, reducing administrative work by 40-50% and automating case management, with the legal AI market expected to reach $3.8 billion by 2028.
    AI is also being used to improve access to justice by automating basic legal services. According to the American Bar Association (ABA), over 80 per cent of low-income Americans cannot afford legal representation, and AI tools are helping fill this gap. Platforms like DoNotPay have handled millions of cases to date, providing free legal services to underserved populations.

    Law enforcement professionals face unique stresses and risks, such as exposure to trauma, long hours, dangerous situations, and the pressure of public scrutiny. Technology can play a pivotal role in addressing mental health issues and providing better mental health support, early intervention, and improving resilience in high-stress environments.

    According to the United Kingdom Police Federation Survey (2023), 82 per cent of respondents had experienced feelings of stress, low mood, anxiety, or other difficulties with their mental health or wellbeing, the same rate as in 2022, but up from 77 per cent in 2020.
    Another survey highlighted several areas that require proactive measures from law enforcement agencies to support the wellbeing of police officers. Of note, 43 per cent indicated that excessive workload contributed significantly to their poor work-life balance and stress levels, and 35 per cent reported that job-related stress affected their personal relationships and family life.
    Tech tools, such as wearables like smartwatches or biosensors, can monitor physical and physiological indicators of stress. AI tools are able to process data from wearables or mental health screenings to identify patterns that suggest an officer is at risk of mental health issues, such as PTSD, anxiety, or depression. Telehealth and self-help solutions can also play a vital role in managing officers’ stress.
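
    As a purely illustrative example of how such early-warning tools might work, the sketch below flags a sustained deviation of recent heart-rate readings from an officer's personal baseline. The thresholds and data are invented and carry no clinical meaning.

```python
# Illustrative sketch only: flagging sustained deviation from an officer's own
# baseline heart rate, the kind of signal a wearable-based early-warning tool
# might use. Thresholds and data are hypothetical, not clinical guidance.
from statistics import mean, stdev

def stress_flag(baseline_bpm: list, recent_bpm: list, z_threshold: float = 2.0) -> bool:
    """Flag when the recent average heart rate sits well above the personal baseline."""
    mu, sigma = mean(baseline_bpm), stdev(baseline_bpm)
    z = (mean(recent_bpm) - mu) / sigma
    return z > z_threshold

baseline = [62, 65, 63, 66, 64, 61, 67, 63]        # resting readings collected over weeks
recent = [88, 92, 90, 95]                           # readings from the current shift
print(stress_flag(baseline, recent))                # True -> suggest a wellbeing check-in
```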

    Redefining future operations

    The trends shaping security and justice in 2025 reflect a complex interplay of technological innovation, social change, and global challenges. As advancements in AI, cybersecurity, and surveillance technologies redefine how law enforcement operates, the demand for accountability, privacy protection, and fair use of these tools will become more pronounced.

    As new threats emerge, from cyberattacks to the impacts of climate change, global cooperation and adaptive strategies will be essential for maintaining both public safety and human rights. The future of security and justice will be defined by the need to navigate these complex, interconnected issues, while ensuring that technological progress serves the greater good.

    Authors

    Vanshikha Bhat

    Senior Manager, Global Public Sector / Industry Platform
    “We at Capgemini Public Sector help government organizations across the globe drive initiatives that address the diverse needs of vulnerable populations. Our involvement also aids in navigating complex processes, optimizing resources, and fostering innovation. We strive to enhance the impact and sustainability of government programs, positively affecting the lives of those in need.”
    Nick James

    Executive Vice President, Central Government and Public Security
    “To continue to be relevant, public security and safety agencies require better tools, data, and shared intelligence, available when and where they need them. Digitalization, cloud and real time communications are key enablers to achieving this, and are likely to be a key building block for future public security strategies.”
    Pierre-Adrien Hanania

    Global Public Sector Head of Strategic Business Development
    “In my role leading the strategic business development of the Public Sector team at Capgemini, I support the digitization of the public services across security and justice, public administration, healthcare, welfare, tax and defense. I previously led the Data & AI in Public Sector offer of the Group, focusing on how to unlock the intelligent use of data to help organizations deliver augmented public services to the citizens along trusted and ethical technology use. Based in Germany, I previously worked for various European think tanks and graduated in European Affairs at Sciences Po Paris.”
    Conrad Agagan

    CGS Account Executive for US Department of Homeland Security
    “As a retired career law enforcement officer who has dedicated 25 years of my life in helping secure the U.S. homeland, I feel very fortunate to now be in a position at Capgemini that allows me the honor of continuing to work with the dedicated men and women of the Department in support of the mission.”

      Trends in 2025 for Smart Cities

      Capgemini
      Apr 15, 2025

      Technology is redefining urban living. Rapid urbanization this century has transformed cities into bustling centers of growth and innovation. However, this progress comes with challenges, such as resource management, climate resilience, and efficient governance. In 2025, emerging technologies will play a pivotal role in reimagining how cities function at scale.

      With more than half the global population now living in cities, urban areas are under immense pressure to adapt to growing populations and environmental concerns. Smart cities are rising to the challenge, integrating advanced technologies to improve infrastructure, enhance public services, and foster sustainable living. This will also ensure inclusivity, while improving the quality of life for urban dwellers.

      The following insights into the trends shaping the future of our cities reveal that a new chapter in urban living is under way.

      With cities getting smarter, novel digital services—such as smart grids, on-demand mobility, and smart water management—are reinventing public service models and processes. At the same time, they are driving an unprecedented surge in data generation and flows. Urban data platforms serve as the essential infrastructure for effectively utilizing city data to enhance operational efficiency and scale smart city initiatives. They connect, analyze, and visualize data from diverse domain systems across the urban fabric. From here, data can be further shared with city services or third-party private entities, enabling innovative business models to flourish.

      As part of the RUGGEDISED project, Rotterdam, Umeå and Glasgow developed urban data platforms to tackle their respective city-specific challenges. The Digital City Platform in Rotterdam discloses and visualizes actual energy use, as well as use over a period of time (by individual buildings, as well as the whole area). Rotterdam’s 3D model is connected to the platform and, together with real-time data, it forms a 3D digital twin of the city. This 3D digital twin supports Rotterdam in crowd and public space management, smart mobility, electricity and thermal grid planning and operational optimization, as well as energy- and resource-efficient waste collection and processing.
      Cities are also beginning to adopt a federated data spaces model to facilitate sovereign and secure ways of data sharing across city domains, as well as across cities and borders. EU-funded initiatives such as the European Data Space for Sustainable and Smart Cities and Communities (DS4SSCC) have developed a multi-stakeholder data governance blueprint. This initiative creates a cross-sectoral data space for governments and their providers, enabling interoperability to improve service delivery to citizens. Several pilot projects—UrbanMind, Traffic flow data space—are underway in the DS4SSCC program where multiple cities are collaborating to co-create value.

      Digital twins and IoT technologies, feeding off data from urban data platforms, are optimizing city operations. By creating virtual models of cities, planners can simulate and test the impact of new developments, identify potential issues, optimize city services, and proactively create policies to avoid future impact. Through simulation, monitoring, and optimization of various urban elements, digital twins help cities achieve a balance between economic growth, efficient operations, and environmental protection.
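
      A toy example of the what-if analysis a digital twin enables: the sketch below redistributes the traffic of a temporarily closed road across alternative routes and reports how close each one comes to capacity. The network and numbers are invented for illustration.

```python
# Toy what-if simulation in the spirit of a digital twin: redistribute the traffic
# of a closed road across alternative routes and check whether any route exceeds
# capacity. Network data and numbers are invented for illustration.
roads = {
    "A": {"capacity": 1200, "load": 900},
    "B": {"capacity": 800,  "load": 500},
    "C": {"capacity": 1000, "load": 400},
}

def simulate_closure(roads: dict, closed: str) -> dict:
    """Shift the closed road's load evenly onto the remaining roads and report utilization."""
    remaining = {k: dict(v) for k, v in roads.items() if k != closed}
    extra = roads[closed]["load"] / len(remaining)
    for r in remaining.values():
        r["load"] += extra
    return {k: round(v["load"] / v["capacity"], 2) for k, v in remaining.items()}

print(simulate_closure(roads, closed="A"))   # {'B': 1.19, 'C': 0.85} -> B would be over capacity
```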

      Depending on the maturity levels, cities are adopting digital twin solutions that range from descriptive analysis and predictive intelligence to scenario simulations. 
      The Virtual Singapore platform is a digital twin of the city-state, providing a dynamic 3D model that enables users across various sectors to develop advanced tools and applications for testing concepts and services. It also supports planning, decision-making, and research on innovative technologies to address complex and emerging challenges. 
      Shanghai has developed an extensive digital twin to monitor and manage city operations, including traffic flow, energy consumption, and environmental conditions. This digital representation aids in optimizing urban planning and improving public services.
      As the next evolution, digital twin models are overlaid with immersive experience technologies such as augmented reality (AR), virtual reality (VR) and mixed reality (MR) to provide additional context about urban elements. A global initiative on the metaverse for cities (Citiverse) was launched by the International Telecommunication Union (ITU), the United Nations International Computing Centre (UNICC) and Digital Dubai to provide normative guidance and a framework for virtual-world solutions in cities.
      Digital twins and citiverse initiatives are redefining city operations by making urban environments more efficient, resilient, and citizen-friendly.

      With the increasing frequency of extreme weather events, cities need to buckle up and invest in the resilience of their infrastructure. From IoT-enabled flood monitoring systems to predictive analytics for disaster management, urban areas are focusing on safeguarding both people and resources. Smarter water systems address challenges like scarcity through innovative recycling and distribution methods. Physical systems, such as water systems, were not built with the digital age in mind. Yet rebuilding is often not an option given the enormous costs of (temporary) replacements. A mitigation can be found in retrofitting these physical assets with sensors and remote-control digital components, connecting them to digital infrastructures. A great example can be found in France with Voies navigables, the French inland waterway network facilitator.
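
      A minimal sketch of the retrofitting pattern, assuming a hypothetical telemetry endpoint: a legacy asset such as a lock is fitted with a level sensor whose readings are pushed to a remote-monitoring platform. None of the identifiers below are taken from the Voies navigables system.

```python
# Minimal sketch of retrofitting a legacy asset (here, a lock on an inland waterway)
# with a sensor that pushes readings to a remote-monitoring platform. The endpoint,
# payload schema and asset IDs are hypothetical.
import time
import requests

PLATFORM_URL = "https://example.invalid/api/telemetry"   # placeholder endpoint

def read_water_level_cm() -> float:
    """Stand-in for the driver of a retrofitted ultrasonic level sensor."""
    return 342.5

def push_reading() -> None:
    reading = {
        "asset_id": "lock-17",
        "water_level_cm": read_water_level_cm(),
        "timestamp": time.time(),
    }
    resp = requests.post(PLATFORM_URL, json=reading, timeout=5)
    resp.raise_for_status()   # surface transmission failures to the operator

if __name__ == "__main__":
    push_reading()
```
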
      Another compelling example of climate adaptation strategies can be found in the battle against urban heat islands (UHIs). For instance, the city of Paris has undertaken significant measures, such as planting trees, revamping its iconic zinc rooftops, and installing cooling public infrastructure, to reduce heat retention. Similarly, Seville adapted ancient Persian techniques by using qanat water supply systems, enhanced with renewable energy, to cool buildings through water circulation within walls. These initiatives exemplify the proactive steps European cities are taking to mitigate the effects of urban heat islands. Although the outcome is physical, extensive modelling in digital twins forms the basis upon which cities act.

      Governments across the globe are no longer merely setting ambitious climate goals, they are operationalizing these commitments into tangible outcomes. The European Green Deal stands as a hallmark initiative, aiming to make the EU the world’s first climate-neutral continent by 2050. Under this framework, the REPowerEU program, launched in May 2022, has achieved significant milestones: for the first time, electricity generation from wind and solar has surpassed gas, with an 18% reduction in gas consumption in just two years.
      Governments understand that they need to lead by example. The global Net-Zero Government Initiative (NZGI) with its 18 partner nations has set stringent targets for net-zero emissions in government agency operations by 2050. This initiative employs strategic measures like carbon pollution-free electricity, net-zero buildings & operations, zero-emission vehicles, climate resilient infrastructure & operations, and circular economy practices. Progress is evident: Australia achieved a more than 50% reduction in greenhouse gas (GHG) emissions in operations in 2022 compared to the previous year.
      ICLEI – Local Governments for Sustainability is a global network working with more than 2,500 local and regional governments committed to sustainable urban development. Active in 125+ countries, this network is influencing sustainability policy and driving local action for zero-emission, nature-based, equitable, resilient and circular development. City agencies are increasingly leveraging circular economy principles, transforming waste into raw materials and fostering innovative business models. Amsterdam is a pioneer city in sustainable and circular urban development and is focused on three value chains—food and organic waste streams, consumer goods, and the built environment. It is constantly tracking progress through a circular economy monitor.
      Despite notable progress, governments face hurdles, such as budget constraints, siloed institutional frameworks, cultural resistance to change, and complexities in measuring and reporting progress. Overcoming these barriers demands a combination of political commitment, inter-agency collaboration, investment in innovation, and robust public-private partnerships. Sharing global best practices will be critical in refining sustainability strategies and achieving long-term goals.

       Health as a priority for urban planners
      Environmental health technologies will take center stage in urban planning. After all, cities are made for humans to thrive. Sensors will be used to monitor air quality, noise pollution, and other factors that influence well-being. Predictive health tools will guide the development of spaces that support healthier lifestyles. An earlier study showed the potential for a quick return on investment, with EIT Urban Mobility reporting savings of between €485 and €700 per inhabitant. A stark demographic fault line is, however, emerging, splitting urban centers into two distinct camps: old and young.
      Aging cities, primarily in high-income nations and parts of the developing world, face a demographic crunch. Public transit systems, pedestrian infrastructure, even housing—all demand costly retrofits to accommodate aging populations. Economically, these cities struggle with a shrinking workforce shouldering the weight of pension systems and healthcare needs. To address this issue, Japan is exploring the development of AI-driven robots, such as AIREC, designed to assist with tasks like shifting patients, cooking, and folding laundry. Meanwhile, youthful cities are experiencing the inverse. Here, labor markets churn with opportunity, powered by policies prioritizing education, employment, and entrepreneurial ambition. But these cities aren’t without growing pains. Pollution, congestion, and urban stress loom large, as does a rising tide of respiratory disorders and mental health struggles among young, high-strung populations. One creative solution is a low cost and flexible gondola-like ride hailing network being piloted in New Zealand. This cable car transit system will appeal to younger residents seeking efficient and sustainable mobility options.

      The road ahead: Challenges and opportunities

      The future of urban living will be defined by how effectively cities adopt and integrate these technological innovations. While the potential benefits are immense—smarter resource management, reduced environmental impact, and improved citizen experiences—success depends on political commitment, societal acceptance, and the ethical use of technology.

      In 2025, smart cities will not only focus on innovation but also on creating inclusive, resilient, and sustainable communities. By leveraging the technologies shaping today’s urban transformation, we can build cities that thrive in harmony with people and the planet.

      Authors

      Luc Baardman

      Managing Consultant and Lead Enabling Sustainability Capgemini Invent NL
      “Sustainability at its core is the most important transformation question of our time. Left unanswered, it will wreak havoc upon the world and its population, and it is up to all of us to play our part in becoming sustainable in an inclusive manner. Capgemini’s part is to remove the impediments for a better future, to truly enable sustainability.”
      Ravi Shankar Arunachalam

      Public Administration & Smarter Territories SME – Global Public Sector
      “As a Public Sector strategist and technologist at Capgemini, I assist local, state, and federal governments worldwide in harnessing the full potential of a collaborative, Government-as-a-platform model to revolutionize citizen service delivery. With a deep understanding of industry challenges, citizen expectations, and the evolving technology landscape, I develop systemic transformation strategies and solutions that provide lasting value to both people and the planet”
      Ambika Chinnappa

      Knowledge Management Lead, Global Public Sector
      “At Capgemini, I lead Knowledge Management initiatives to ensure that critical expertise, insights, and best practices are effectively captured, curated, and shared across our global teams. By enabling efficient knowledge flow and collaboration, I help our Public Sector colleagues stay informed, aligned, and empowered to drive impactful outcomes. Through structured KM strategies, I aim to enhance organizational learning, support smarter decision-making, and contribute to the delivery of innovative, sustainable solutions for governments and the communities they serve.”

        Confidential AI: How Capgemini and Edgeless Systems allow regulated industries to adopt AI at scale

        Capgemini
        Apr 14, 2025

        By combining confidential computing with Nvidia H100 GPUs, “Privatemode AI” provides cloud-hosted LLMs with end-to-end encryption of user data.

        The AI revolution is transforming our world at unprecedented speed. Just a few years ago, the idea of conversing naturally with a computer seemed more at home in Hollywood or in science fiction than in the workplace. Yet with the rise of generative AI tools like ChatGPT, these technologies have become an everyday reality, embraced by employees, customers and IT users alike.

        However, this rapid adoption brings new challenges, particularly for organizations in regulated industries that must maintain high levels of data protection and privacy. How can those organizations harness the power of GenAI models at scale while also safeguarding sensitive information?

        Confidential AI solves the “cloud versus on-premises dilemma”

        The advent of AI has amplified the importance of choosing between cloud and on-premises infrastructure. Traditionally, organizations preferred to process sensitive data on-premises, within their own data center, as it offered maximum control. But given the significant costs of GPU infrastructure and the energy consumption that AI workloads require, on-premises is usually not economical. What’s more, limited expertise and technical resources for managing AI architectures locally make the cloud – especially “AI-as-a-service” offerings – a more viable option for most organizations.

        Yet, when deploying AI solutions such as large language models (LLMs) via a cloud-based service, many parties – cloud, model and service providers – potentially have access to the data. This creates problems for regulated industries.

        The diagram shows a user sending data to large language models (LLMs) and receiving a response. But because the LLMs run on the public cloud, their use raises privacy and control issues, and the risk of access by unauthorized parties.

        Figure 1: With standard GenAI services, model, infrastructure and service providers can all potentially access the data.

        This is where confidential computing comes into play. While it’s long been standard to encrypt data at rest and in motion, data in use has typically not been protected.

        Confidential computing solves this problem with two main features: runtime memory encryption and remote attestation. With confidential computing-enabled CPUs, data stays encrypted in the main memory, strictly isolated from other infrastructure components. Remote attestation also makes it possible to verify the confidentiality, integrity and authenticity of the so-called Trusted Execution Environment (TEE) and its respective workloads.
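
        Conceptually, remote attestation lets a client refuse to send data unless the workload's measurement matches a known-good value and the attestation report is genuinely signed. The sketch below illustrates only the control flow; real attestation relies on hardware-rooted certificate chains from the CPU or GPU vendor, and the HMAC used here is just a self-contained stand-in.

```python
# Conceptual sketch of the remote-attestation step in confidential computing: the
# client only sends data once the enclave's measurement matches a known-good value
# and the report's signature checks out. Real attestation uses hardware-rooted
# certificate chains; the HMAC here is only a stand-in to keep the example runnable.
import hmac
import hashlib

EXPECTED_MEASUREMENT = "placeholder-hash-of-approved-workload"
STAND_IN_KEY = b"demo-only"          # real reports are signed by the hardware vendor

def verify_attestation(report: dict, signature: str) -> bool:
    payload = f'{report["measurement"]}|{report["nonce"]}'.encode()
    expected_sig = hmac.new(STAND_IN_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected_sig)          # report not tampered with
        and report["measurement"] == EXPECTED_MEASUREMENT     # only the approved workload
    )

def send_if_trusted(report: dict, signature: str, prompt: str) -> None:
    if not verify_attestation(report, signature):
        raise RuntimeError("Attestation failed: refusing to send sensitive data")
    print("Attestation OK, sending encrypted prompt:", prompt[:20], "...")
```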

        The diagram illustrates the two main pillars of confidential computing. Inside the application or landing zone, data is encrypted whether at rest or in transit. When in use, the CPU keeps data encrypted in memory. Outside the application or landing zone, remote attestation takes place. The CPU issues certificates for security as a compliance and validation step.

        Figure 2: Confidential computing provides runtime encryption and remote attestation for verifiable security.

        Confidential computing has been a standard feature of the last few generations of Intel and AMD server CPUs, where the feature is called TDX (Intel) and SEV (AMD) respectively. With Nvidia’s H100, there’s now a GPU that provides confidential computing – allowing organizations to run AI applications that are fully confidential.

        The diagram illustrates how confidential computing protects data end to end. The user sends data to an AI system and receives a result; all data is encrypted in transit. The data is fully protected and cannot be accessed by unauthorized parties.

        Figure 3: Confidential AI allows organizations in regulated industries to use cloud-based AI systems while protecting the data end to end.

        How Capgemini and Edgeless Systems deliver confidential AI together

        Capgemini is a leader in GenAI, managing large-scale projects to drive automation and foster efficiency gains for clients worldwide. The firm has long-standing expertise in delivering AI systems across clouds and on-premises, including critical aspects like user experience, Retrieval Augmented Generation (RAG) and fast inference. (More on these later.)

        Data security and privacy are critical aspects of many Capgemini projects, particularly those in regulated industries. This means clients are often confronted with the aforementioned “cloud versus on-premises dilemma”.

        The good news: deploying GenAI tools through the cloud, with verifiable end-to-end confidentiality and privacy, isn’t a distant future. It’s a reality. And Capgemini is already bringing it to clients in regulated industries like healthcare, defense, the public sector and the financial sector.

        In 2024, Capgemini partnered with Edgeless Systems, a German company that develops leading infrastructure software for confidential computing. (See the blog post, Staying secure and sovereign in the cloud with confidential computing.) Edgeless Systems now provides Privatemode AI, a GenAI service that uses confidential virtual machines and Nvidia’s H100 GPUs to keep data verifiably encrypted end to end. This allows users to deploy LLMs and coding assistants that are hosted in the cloud while making sure no third party can access the prompts.

        Privatemode AI’s key features include:
        • Powerful LLMs, e.g., Llama 3.3 70B and Mistral 7B
        • Coding assistants, e.g., Code Llama and Codestral
        • End-to-end prompt encryption
        • Verifiable security through remote attestation
        • Standard, OpenAI-compatible API

        Together, Capgemini and Edgeless Systems are already bringing exciting confidential AI use cases to life.

        Case 1: Confidential AI for public administration

        In the German public sector, demographic change will soon lead to many unfilled positions and capability gaps. GenAI applications can support the work of civil servants, automate administrative tasks and help to reduce labor shortages. For example, the IT provider of the largest German state (IT.NRW – Landesbetrieb Information und Technik NRW) has contracted Capgemini to develop an “Administrative AI Assistant” to improve productivity for thousands of administrative employees.

        The GenAI application helps in several ways, including by summarizing text or supporting research assistants with RAG (Retrieval Augmented Generation). However, there aren’t enough GPUs available on-premises to support inference (the process whereby an LLM receives and responds to a request) and the public cloud isn’t an option for sensitive data. Here, the client uses Privatemode AI for confidential inference in the cloud, serving a Meta Llama 3.3 70B model via a standard OpenAI-compatible API. So while all the heavy processing is done in the cloud, all the user data is encrypted end to end.
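
        Because the service exposes a standard, OpenAI-compatible API, integrating it can look much like any other chat-completion call. The sketch below uses the openai Python client with placeholder values; the actual endpoint, credentials, model identifier and client-side prompt encryption should be taken from the Privatemode AI documentation.

```python
# Sketch of calling an OpenAI-compatible endpoint such as the one described above.
# The base URL, API key and model identifier are placeholders only.
from openai import OpenAI

client = OpenAI(
    base_url="https://example.invalid/v1",   # placeholder for the confidential AI endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="llama-3.3-70b",                   # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are an assistant for administrative staff."},
        {"role": "user", "content": "Summarize the attached funding guideline in five bullet points."},
    ],
)
print(response.choices[0].message.content)
```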

        The diagram shows a hybrid architecture for LLM-based assistants as deployed in Germany. The user interacts with a front end that connects with other applications and databases inside the on-premises data center, and with the confidential “AI-as-a-service” provided by Edgeless Systems which is located externally.

        Figure 4: Hybrid architecture for LLM-based assistants with Confidential “AI-as-a-service” for inference (blue box).

        Case 2: Confidential coding assistants for sensitive applications

        As Capgemini is one of the largest global custom software developers, it’s also responsible for protecting code and developing sensitive applications, including for security agencies. Software development projects are handled fully on-premises due to regulations, which makes integrating state-of-the-art coding assistants that require scalable GPU infrastructure a challenge.

        Together, Capgemini and Edgeless Systems integrate AI-based confidential coding assistants with end-to-end encryption for developing sensitive, proprietary code. With Privatemode AI, Capgemini can also improve the experience for developers by allowing them to use modern coding assistants in a sensitive environment.

        Confidential AI is the future of AI in regulated industries

        It’s evident that the discussion about digital sovereignty is especially relevant in the context of AI. Critical infrastructures and regulated industries can largely benefit from GenAI applications but also require secure handling of sensitive data to boost innovation and digitalization. The future of AI therefore lies largely in confidential AI. And by enabling use cases with end-to-end data protection at scale, Capgemini and Edgeless Systems are leading the way.

        GET THE FUTURE YOU WANT

        Capgemini and Edgeless Systems have already implemented confidential AI use cases in critical infrastructures, public administration and healthcare. Let our experience inspire you and bring your data together with AI innovation.

        Additional links:

        Edgeless Systems: www.edgeless.systems

        Privatemode AI: www.privatemode.ai

        Nvidia blog post on Privatemode AI (2024): https://developer.nvidia.com/blog/advancing-security-for-large-language-models-with-nvidia-gpus-and-edgeless-systems/

        Edgeless Systems’ Open Confidential Computing Conference OC3 with presentation by Capgemini and IT.NRW on Confidential AI: https://www.oc3.dev/

        OC3 presentation: Confidential AI in the Public Sector by Arne Schömann (IT.NRW) and Maximilian Kälbert (Capgemini): https://www.youtube.com/watch?v=hu04kOtJ660

        Learn more

        Staying secure and sovereign in the cloud with confidential computing

        Thomas Strottner

        Vice President, Business Development, Edgeless Systems

        “With Privatemode AI, we empower organizations in regulated industries – such as healthcare, banking, and the public sector – to scale AI use cases effortlessly in the cloud while ensuring that their data remains verifiably protected against unauthorized access. We are proud to partner with Capgemini and NVIDIA to bring large-scale AI projects to life.”

        Authors

        Stefan Zosel

        Capgemini Government Cloud Transformation Leader
        “Sovereign cloud is a key driver for digitization in the public sector and unlocks new possibilities in data-driven government. It offers a way to combine European values and laws with cloud innovation, enabling governments to provide modern and digital services to citizens. As public agencies gather more and more data, the sovereign cloud is the place to build services on top of that data and integrate with Gaia-X services.”
        Ernesto Marin Grez

        Ernesto Marin Grez

        Vice President – Head of Strategic Initiatives Gen AI and Applied Innovation, Germany
        “At Capgemini, we are focused on advancing artificial intelligence with a strong emphasis on confidential computing. This technology is crucial for industries such as finance, healthcare, and government, where data privacy and security are paramount. By ensuring that sensitive data remains encrypted even during processing, we enable our customers to harness the power of AI without compromising on security. This approach not only protects valuable information but also fosters innovation and trust in AI applications.”

          Small is the new big: The rise of small language models

          Sunita Tiwary
          Jul 22, 2024

          In the dynamic realm of artificial intelligence (AI) and machine learning, a compelling shift is taking center stage: the ascent of small language models (SLMs). The tech world is smitten with the race to build and use large, complex models boasting billions and trillions of parameters, and consumers have become unwitting accomplices in the obsession with “large”. However, recent trends indicate a growing interest in smaller, more efficient models. This article delves into the reasons behind this shift, its implications, and what it means for the future of AI.

Before we dive into SLMs, how did the wave of large language models grow?

          In the not-so-distant past, natural language processing (NLP) was deemed too intricate and nuanced for modern AI. Then, in November 2022, OpenAI introduced ChatGPT, and within a mere week, it garnered more than a million users. Suddenly, AI, once confined to research and academic circles, became accessible to the masses. For example, my nine-year-old daughter effortlessly began using ChatGPT for school research tasks, while my mother-in-law, in her late sixties, whose only tech acquaintance was limited to WhatsApp and Facebook, now enthusiastically shares the latest news about AI, and her budding interest in GenAI during our tea time conversations.

          The launch of ChatGPT marked the onset of the very loud and very public (and costly) GenAI revolution, effectively democratizing AI. This is evident in integrating AI as copilots in various products, the exponential growth of large language models (LLMs), and the rise of numerous startups in this space. The landscape of technology and our world will never be the same.

To comprehend the magnitude of this shift, let’s delve into the parameters of AI models. The number of parameters is a core measure of an AI’s scale and complexity. GPT-2 had 1.5 billion parameters, and then OpenAI released GPT-3, which had a whopping 175 billion parameters. This was the largest neural network ever created, more than a hundred times larger than its predecessor just a year earlier. Now we see trillion-parameter LLMs.

          Deciphering SLMs

          While the definition of an SLM remains contextual, some research identifies them as models encompassing approximately 10 billion parameters or less. SLMs are lightweight neural networks that can process natural language with fewer parameters and computational resources than LLMs. Unlike LLMs (which are generalized models), SLMs are usually purpose-driven and tailored to address specific tasks, applications, or use cases.

          Recent studies demonstrate that SLMs can be fine-tuned to achieve comparable or even superior performance compared to their larger counterparts in specific tasks.

For example, phi-3-mini is a 3.8 billion parameter SLM trained on 3.3 trillion tokens whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69 percent on MMLU and 8.38 on MT-bench). Another example is phi-3-vision, a 4.2 billion parameter model based on phi-3-mini with strong reasoning capabilities for image and text prompts. Similarly, phi-2 matches or outperforms models up to 25 times larger on complex benchmarks. Another such model is Orca 2, which was built for research purposes. Likewise, TinyLlama, launched in late 2023, has just 1B parameters, and was followed in April 2024 by OpenELM, released by Apple for edge devices.
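To give a sense of how accessible these models are to work with, here is a minimal sketch of running an SLM locally via the Hugging Face transformers pipeline API. The phi-3-mini model id is used purely for illustration; any similarly sized open model could be substituted, and the transformers and torch packages are assumed to be installed.

# Minimal sketch: running a small language model locally with Hugging Face transformers.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # illustrative ~3.8B-parameter SLM
    device_map="auto",                         # uses a GPU if one is available
    trust_remote_code=True,                    # may be needed for some model repos
)

prompt = "Explain in two sentences why small language models suit edge devices."
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])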

          Why does it matter?

          SLMs bring many benefits, notably their swift training and faster inference speed. Beyond efficiency, these models contribute to a more sustainable footprint, showcasing reduced carbon and water usage. In addition, SLMs strike a harmonious balance between performance and resource efficiency. Training SLMs is much more cost-effective due to the reduced number of parameters and offloading the processing workload to edge devices further decreases infrastructure and operating costs.


          1. Efficiency and sustainability

          It is crucial to acknowledge that LLMs demand substantial computational resources and energy. Complex architecture and vast parameters necessitate significant processing power that contributes to environmental and sustainability concerns.

          In contrast, SLMs significantly reduce computational and power consumption through several key factors:

          • Reduced computational load: Small models have fewer parameters and require less computation during inference, leading to lower power consumption
          • Shorter processing time: The reduced model size decreases the time required to process inputs thus consuming less energy per task
• Lower memory usage: Smaller models need less memory, which reduces the power spent on memory access and management, a significant factor in energy consumption. Efficient use of memory further minimizes the energy needed to store and retrieve parameters and intermediate calculations
          • Thermal management: Lower computational requirements generate less heat, reducing the need for power-hungry cooling systems. Furthermore, reduced thermal stress increases the longevity of hardware components, indirectly reducing the energy and resources needed to replace and maintain them.

SLMs are increasingly becoming popular due to their efficiency. They require less computational resources and storage than LLMs, making them a more practical solution for many applications requiring real-time processing or deployment on edge devices with limited resources. By reducing model size and complexity, developers can achieve faster inference times, lower latency, and improved performance, making small models preferred for resource-constrained environments such as mobile phones, personal computers, or connected devices. For example, phi-3 is highly capable of running locally on a cell phone: it can be quantized to four bits so that it occupies only ~1.8GB of memory. When tested on an iPhone 14 with the A16 Bionic chip, the quantized phi-3 model ran natively on-device and fully offline, achieving more than 12 tokens per second (the rate at which a model processes tokens, i.e., words, subwords, or characters, during inference).
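As a rough sketch of what 4-bit quantization looks like in practice, the snippet below uses the bitsandbytes integration in transformers; the model id is illustrative, and the back-of-envelope memory estimate simply multiplies the parameter count by half a byte, which is approximately where the ~1.8 GB figure comes from.

# Sketch: loading an SLM in 4-bit precision and estimating its weight footprint.
# Assumes transformers, torch, and bitsandbytes are installed; the model id is illustrative.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit NF4 format
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for accuracy
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# Back-of-envelope check: 3.8B parameters * 0.5 bytes/parameter ≈ 1.9 GB,
# in line with the ~1.8 GB cited for 4-bit phi-3-mini.
params = 3.8e9
print(f"Approx. 4-bit weight size: {params * 0.5 / 1e9:.1f} GB")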

According to the Tirias Research GenAI Forecast and TCO Model, if 20 percent of the GenAI processing workload could be offloaded from data centers by 2028 using on-device and hybrid processing, the data center infrastructure and operating cost of GenAI processing would decline by $15 billion (against data center infrastructure and operating costs otherwise projected to exceed $76 billion by 2028). This would also reduce overall data center power requirements for GenAI applications by 800 megawatts.
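The rough relationship between those projected figures can be reproduced with simple arithmetic, as sketched below; the inputs are the Tirias figures cited above and the calculation is purely illustrative.

# Illustrative arithmetic only, using the Tirias Research figures cited above.
projected_cost_2028_busd = 76.0   # projected GenAI data-center infra + operating cost ($B)
offload_share = 0.20              # share of GenAI workload moved to on-device / hybrid processing

savings_busd = projected_cost_2028_busd * offload_share
print(f"Estimated data-center cost avoided: ~${savings_busd:.0f}B")  # roughly the $15B cited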

          2. Economic viability

          Developing and maintaining LLMs comes with steep costs, demanding significant investments in computational resources, energy usage, and specialized skills. In contrast, SLMs present a more budget-friendly solution. Their streamlined design means they are more efficient at training and require less data and hardware, leading to more economical computing costs. SLMs often employ optimized algorithms and architectures designed for efficiency. Techniques like pruning (removing unnecessary parameters) and quantization (using lower precision arithmetic) make these more economically viable.
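The two techniques named above can be sketched with PyTorch's built-in utilities. The tiny model here is a stand-in for a real SLM, and the pruning ratio is arbitrary; the point is simply to show what pruning and quantization look like in code.

# Sketch: magnitude pruning and dynamic quantization on a toy PyTorch model.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))

# Pruning: zero out the 30% of weights with the smallest magnitude in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Quantization: convert Linear weights to int8 for cheaper CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 128])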

          3. Scalability and accessibility

          Smaller models are inherently more scalable and accessible than their larger counterparts. By reducing model size and complexity, developers can deploy AI applications across various devices and platforms, including smartphones, IoT devices, and embedded systems. This democratizes AI, encourages wider adoption, and accelerates innovation, unlocking new opportunities across many industries and use cases.

          4. Ethical and regulatory dimensions

          Ethical and regulatory considerations also contribute to the shift towards SLMs. As AI technologies become increasingly pervasive, data privacy, security, and bias concerns become more pronounced. Embracing small models allows organizations to reduce data exposure, address privacy challenges, and reinforce transparency and accountability. When trained on specific, high-quality datasets, smaller models significantly reduce the risk of data exposure. They require less training data compared to their larger counterparts, which lowers the risk of memorizing, overfitting, and inadvertently revealing sensitive information within the training set. With fewer parameters, these models have simpler architectures, minimizing potential pathways for data leakage. Furthermore, smaller models are easier to interpret, validate, and regulate, facilitating compliance with emerging regulatory frameworks and ethical guidelines.

          Limitations of SLMs

While SLMs have great benefits, there are challenges and limitations too. Due to their smaller size, these models do not have the capacity to store much “factual knowledge.” This can lead to hallucination, factual inaccuracies, amplification of biases, inappropriate content generation, and safety issues. However, these risks can be mitigated by carefully curated training data, targeted post-training, and improvements driven by red-teaming insights. Models can also be augmented with a search engine to supply factual knowledge, as in the retrieval sketch below.
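Here is a toy sketch of that augmentation idea: retrieve a few relevant snippets first, then ground the model's answer in them. The documents and keyword-overlap scoring are deliberately simplistic stand-ins for a real search engine or vector store.

# Toy sketch: grounding a small model's answer with retrieved facts.
DOCUMENTS = [
    "Phi-3-mini is a 3.8 billion parameter small language model from Microsoft.",
    "TinyLlama is a 1.1 billion parameter open model released in late 2023.",
    "OpenELM is a family of efficient language models from Apple for on-device use.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(DOCUMENTS, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

print(build_prompt("How many parameters does TinyLlama have?"))
# The assembled prompt would then be passed to an SLM for generation.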

          Conclusion

          The transition to SLMs represents a significant trend in the AI field. While LLMs excel due to their vast size, intensive training, and advanced NLP capabilities, SLMs offer targeted efficiency, cost-effectiveness, and scalability. By adopting these models, organizations can unlock new opportunities, speed up innovation, and create value across various sectors.

          The future of generative AI is also moving towards the edge, enabled by small, efficient language models. These models transform everyday technology with natural, generative interfaces, encompassing everything from personal devices and home automation to industrial machinery and intelligent cities.

          SLMs are essential to enable AI at the edge. According to IBM, Huawei, and Grand View Research, the edge AI market is valued at $21 billion and is expected to grow at a CAGR of 21 percent. Companies like Google, Samsung, and Microsoft are advancing generative AI for PCs, mobile, and connected devices. Apple is joining this effort with OpenELM, a group of open-source LLMs and SLMs designed to run entirely on a single device without cloud server connections. This model, optimized for on-device use, can handle AI tasks independently, marking a new era in mobile AI innovation, as noted by Alphasense.

Finally, it’s not a matter of choosing one over the other. LLMs are generalists, trained on massive data with extensive knowledge across subjects. They can handle complex interactions such as chatbots, content summarization, and information retrieval, and they have vast applicability, but they are expensive and carry a high operational cost. SLMs, on the other hand, are specialized, domain-specific, powerful, and less computationally intensive, but they struggle with complex context and can hallucinate when used outside their specific use case and context. The choice between an SLM and an LLM depends on the need and the availability of resources; nevertheless, the SLM is surely a game changer in the AI era.

          Author

          Sunita Tiwary

          Sunita Tiwary

          Senior Director– Global Tech & Digital
          Sunita Tiwary is the GenAI Priority leader at Capgemini for Tech & Digital Industry. A thought leader who comes with a strategic perspective to Gen AI and Industry knowledge. She comes with close to 20 years of diverse experience across strategic partnership, business development, presales, and delivery. In her previous role in Microsoft, she was leading one of the strategic partnerships and co-creating solutions to accelerate market growth in the India SMB segment. She is an engineer with technical certifications across Data & AI, Cloud & CRM. In addition, she has a strong commitment to promoting Diversity and Inclusion and championed key initiatives during her tenure at Microsoft.
          Fabio Fusco​

          Fabio Fusco​

          Data & AI for Connected Products Centre of Excellence Director​, Hybrid Intelligence​, Capgemini Engineering
          Fabio brings over 20 years of extensive experience, blending cutting-edge technologies, data analytics, artificial intelligence, and deep domain expertise to tackle complex challenges in R&D and Engineering for diverse clients and is continuously forward-thinking.

Tech and Digital 2025 – The start of geo and transversal tech

            Vikram Kumaraswamy
            May 6, 2025

The year 2024 saw elections in over 70 countries, a historical high for any single year. Many national agendas cited tech and the need for self-sufficiency and sovereignty as national priorities.

            The Tech and Digital industry is a confluence of a broad and diverse segment of organizations, made up of capitals, semiconductor firms, platforms, software, and the electronic hardware and networking companies that drive the digital transformation of all the other industries. With innovations such as customized chips and AI workflows, rapid advancements in each of the Tech and Digital sectors promise disruption across all the other industry verticals. 2025 holds immense promise across all of these sectors.

            Here are the more secular macro trends by segments in the Tech and Digital industry:

            Software and Digital – platforms, platforms, platforms

            Software and Digital is the largest of the sectors within the Tech and Digital industry. The biggest trend within Software and Digital is platformization. The pivotal role of platforms cannot be overstated. This is the piece of customer-facing software that becomes the foundation to deliver, deploy or manage countless services, applications, software and technologies.

            New trends in platforms include:

            1. AI-Native Platforms
            2. Platforms as a Market Place
            3. Super Platforms and interoperability

            These are the “new & next” of this segment within Tech and Digital.  Cloud platforms are embedding agentic AI services to enable intelligent workflows, developer assistants, and autonomous decision-making. Examples include: Salesforce Einstein Copilot, SAP Joule, Azure AI Studio, AWS Bedrock Agents. AI here isn’t just a feature, but a core interaction layer for users and apps, and hence becomes a horizontal that will feature across all segments.

Cloud platforms are becoming commerce layers that connect ISVs, APIs, and services, facilitating the monetization of developer marketplaces like AWS Marketplace, Azure Marketplace, and Google Cloud’s AlloyDB ecosystem. The main area of growth in this segment will be industry-specific marketplaces such as healthcare APIs, AI agents, and fintech compliance tools. Cloud platforms are morphing into super platforms that integrate IaaS, PaaS, SaaS, ML, edge, and ecosystem orchestration, which would mean easing interoperability between platforms. They are also investing in edge marketplace ecosystems for low-latency services, including Telco APIs, IoT agents, and autonomous systems; examples include AWS Wavelength, Azure Stack Edge, and GCP Anthos.

            Positioning the future of the Tech and Digital industry for platform and software companies lies in the contextually rich intersections of industry verticals. There is a significant opportunity in contextual specialization within this wealth of knowledge. Platform and software players (who boast a CAGR of over 12%) are defining the future for all industries and have the largest addressable market, valued in billions of dollars. They lead the innovation agenda globally and have the highest propensity to outsource.

            Semiconductors – more specialized, more local

            Tech nationalism is emerging as a major theme, driven by the sovereignty and resilient supply chain goals of every industry and country. Semicon talent is currently concentrated in a few countries. This is especially true for manufacturing and testing (FAB & ATS) which are mainly concentrated in Southeast Asia and Taiwan. Thus, to build an in-country semiconductor eco-system, the first requirement is talent. In a segment on track for a $1 trillion turnover by 2030, this is a massive priority.

            Some of the most prominent trends in the semiconductor industry are node size reduction (shrinking of transistors), Gen AI chips, AI/ML Integration into chip design and in-house development of chips. Another very important development in semiconductors is the evolution of RISC V as an open-source, modular architecture. This allows developers to create processors tailored to specific needs by offering a flexible platform for building, porting, and optimizing software, extensions, and hardware. 

            Many of the chips designed for training and using Gen AI cost tens of thousands of dollars and are primarily destined for large cloud data centers. However, by 2025, Gen AI chips or lightweight versions of these chips are expected to be found in various other locations, including:

            • Enterprise Edge: These chips will be integrated into enterprise edge devices, enhancing their capabilities.
            • Computers: Both personal and enterprise computers will start incorporating these advanced chips.
            • Smartphones: Mobile devices will benefit from the power of Gen AI chips, enabling more sophisticated applications.
            • Other Edge Devices: Over time, other edge devices such as IoT applications will also adopt these chips.

            These chips are also being utilized for various purposes, including:

            • Generative AI: For creating new content and applications.
            • Traditional AI (Machine Learning): For tasks such as data analysis and predictive modeling.
            • Combination of both: Increasingly, these chips are being used for a combination of Gen AI and traditional AI tasks, providing versatile and powerful solutions.

            It’s no surprise then, that the demand for semiconductors that can better handle AI is going through the roof. The race is on to develop chips that can handle the workload required to support AI. As NVIDIA CEO Jensen Huang said, “The future of computing is AI. Our goal is to provide the most powerful and efficient AI computing platforms to accelerate innovation across industries.”

            Across industries, companies are working on specialized processors, designed for AI applications. For example:

            • Amazon Web Services (AWS) and Google have begun developing their own chips to reduce reliance on overstretched players like Nvidia. These chips are tailored for specific workloads, ensuring greater control and efficiency.
            • With the rise of electric vehicles and autonomous driving technologies, automotive semiconductors are becoming increasingly critical.

            Finally, for the sake of tech sovereignty and resilience, the semiconductor industry is finding new geographies.

            Across the board, one thing is true for the semiconductor industry: intelligent manufacturing is the order of the day.

            Electronics and Hardware – built for purpose

AI-Centric Hardware Architectures: Purpose-built AI chips (like NVIDIA Grace Hopper, AMD MI300X, Intel Gaudi) are overtaking general-purpose CPUs for AI workloads. Edge AI accelerators are enabling faster inferencing in IoT, autonomous vehicles, and smart factories.

Hardware-Based Cybersecurity, led by zero-trust hardware roots and integrated silicon security in CPUs and GPUs (e.g., AMD SEV, Intel TDX) for secure AI, fintech, and cloud workloads, is in order. Physical-layer security in networking devices is becoming standard in critical infrastructure.

Composable Infrastructure is continuing to gain momentum, with hardware infrastructure becoming software-defined and on-demand, followed by the disaggregation of compute, storage, and networking into composable building blocks via high-speed fabrics (like CXL, NVMe over Fabrics).

The demand for AI infrastructure is rising steeply, driving energy-efficient compute and cooling innovations with a massive focus on power efficiency due to AI compute intensity. This entails the adoption of liquid cooling, chip-level thermal design, and carbon-aware scheduling.

            Trends in the Tech and Digital industry are created by the tech majors. These eventually drive the much broader digital transformation of all the other industries.

Looking to capitalize on these trends? Capgemini is uniquely positioned to become the partner of choice for the tech industry, here to help you build and drive strategic value.

            Author

            Vikram Kumaraswamy

            Vikram Kumaraswamy

            Vice President – Global Hi-tech – IP Lead
Vikram is responsible for the Tech and Digital platform team that helps create thought leadership and offers across the Tech and Digital sectors, forging the value of “one Capgemini”. He brings 34 years of strong experience running very large businesses, formerly at HPE, covering services, software, and the hybrid cloud.

Agentification of AI: Embracing Platformization for Scale

              Sunita Tiwary
              Jun 4, 2025

Agentic AI marks a paradigm shift from reactive AI systems to autonomous, goal-driven digital entities capable of cognitive reasoning, strategic planning, dynamic execution, learning, and continuous adaptation within complex real-world environments. This article presents a technical exploration of Agentic AI, clarifying definitions, dissecting its layered architecture, analyzing emerging design patterns, and outlining security risks and governance challenges. The objective is to strategically equip enterprise leaders to adopt and scale agent-based systems in production environments.

              1. Disambiguating Terminology: AI, GenAI, AI Agents, and Agentic AI

              Capgemini’s and Gartner’s top technology trends for 2025 highlight Agentic AI as a leading trend. So, let’s explore and understand various terms clearly.

              1.1 Artificial Intelligence (AI)

              AI encompasses computational techniques like symbolic logic, supervised and unsupervised learning, and reinforcement learning. These methods excel in defined domains with fixed inputs and goals. While powerful for pattern recognition and decision-making, traditional AI lacks autonomy, memory, and reasoning, limiting its ability to operate adaptively or drive independent action.

              1.2 Generative AI (GenAI)

              Generative AI refers to deep learning models—primarily large language and diffusion models—trained to model input data’s statistical distribution, such as text, images, or code, and generate coherent, human-like outputs. These foundation models (e.g., GPT-4, Claude, Gemini) are pretrained on vast datasets using self-supervised learning and excel at producing syntactically and semantically rich content across domains.

              However, they remain fundamentally reactive—responding only to user prompts without sustained intent—and stateless, with no memory of prior interactions. Crucially, they are goal-agnostic, lacking intrinsic objectives or long-term planning capability. As such, while generative, they are not autonomous and require orchestration to participate in complex workflows or agentic systems.

              1.3 AI Agents

              An agent is an intelligent software system designed to perceive its environment, reason about it, make decisions, and take actions to achieve specific objectives autonomously.

              AI agents combine decision-making logic with the ability to act within an environment. Importantly, AI agents may or may not use LLMs. Many traditional agents operate with symbolic reasoning, optimization logic, or reinforcement learning strategies without natural language understanding. Their intelligence is task-specific and logic-driven, rather than language-native.

              Additionally, LLM-powered assistants (e.g., ChatGPT, Claude, Gemini) fall under the broader category of AI agents when they are deployed in interactive contexts, such as customer support, helpdesk automation, or productivity augmentation, where they receive inputs, reason, and respond. However, in their base form, these systems are reactive, mostly stateless, and lack planning or memory, which makes them AI agents, but not agentic. They become Agentic AI only when orchestrated with memory, tool use, goal decomposition, and autonomy mechanisms.

              1.4 Agentic AI

              Agentic AI is a distinct class where LLMs serve as cognitive engines within multi-modal agents that possess:

              • Autonomy: Operate with minimal human guidance
              • Tool-use: Call APIs, search engines, databases, and run scripts
              • Persistent memory: Learn and refine across interactions
              • Planning and self-reflection: Decompose goals, revise strategies
              • Role fluidity: Operate solo or collaborate in multi-agent systems

              Agentic AI always involves LLMs at its core, because:

              • The agent needs to understand goals expressed in natural language.
              • It must reason across ambiguous, unstructured contexts.
              • Planning, decomposing, and reflecting on tasks requires language-native cognition.

              Let’s understand with a few examples: In customer support, an AI agent routes tickets by intent, while Agentic AI autonomously resolves issues using knowledge, memory, and confidence thresholds. In DevOps, agents raise alerts; agentic AI investigates, remediates, tests, and deploys fixes with minimal human input.

              Agentic AI = AI-First Platform Layer where language models, memory systems, tool integration, and orchestration converge to form the runtime foundation of intelligent, autonomous system behavior.

              AI agents are NOT Agentic AI. An AI agent is task-specific, while Agentic AI is goal-oriented. Think of an AI agent as a fresher—talented and energetic, but waiting for instructions. You give them a ticket or task, and they’ll work within defined parameters. Agentic AI, by contrast, is your top-tier consultant or leader. You describe the business objective, and they’ll map the territory, delegate, iterate, execute, and keep you updated as they navigate toward the goal.

              2. Reference Architecture: Agentic AI Stack

2.1 Cognitive Layer (Planning and Reasoning)
              • Foundation Models (LLMs): Core reasoning engine (OpenAI GPT-4, Anthropic Claude 3, Meta Llama 3).
              • Augmented Planning Modules: Chain-of-Thought (CoT), Tree of Thought (ToT), ReAct, Graph-of-Thought (GoT).
              • Meta-cognition: Self-critique, reflection loops (Reflexion, AutoGPT Self-eval).
              2.2 Memory Layer (Statefulness)

This layer retains and recalls information, either from previous runs or from the previous steps the agent took in the current run (i.e., the reasoning behind its actions, the tools it called, the information it retrieved, etc.). Memory can be either session-based short-term memory or persistent long-term memory.

              • Episodic Memory: Conversation/thread-local memory for context continuation.
              • Semantic Memory: Long-term storage of facts, embeddings, and vector search
              • Procedural Memory: Task-level state transitions, agent logs, failure/success traces.
              2.3 Tool Invocation Layer

Agents can take action to accomplish tasks and invoke tools as part of those actions. These can be built-in tools and functions, such as browsing the web, conducting complex mathematical calculations, and generating or running executable code in response to a user’s query. Agents can access more advanced tools via external API calls and a dedicated tools interface. These are complemented by augmented LLMs, which offer tool invocation from code generated by the model via function calling, a specialized form of tool use.
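A minimal sketch of such a tool-invocation layer is shown below: the model is assumed to emit a structured JSON "tool call", and a registry dispatches it to plain Python functions. The tool names, JSON shape, and stub implementations are illustrative and not tied to any specific framework.

# Sketch: a minimal tool-invocation layer with a registry and dispatcher.
import json

def search_web(query: str) -> str:
    return f"(stub) top results for: {query}"            # placeholder implementation

def run_calculation(expression: str) -> str:
    return str(eval(expression, {"__builtins__": {}}))   # toy calculator, trusted input only

TOOL_REGISTRY = {"search_web": search_web, "run_calculation": run_calculation}

def dispatch(tool_call_json: str) -> str:
    """Execute a tool call of the form {"tool": name, "arguments": {...}}."""
    call = json.loads(tool_call_json)
    tool = TOOL_REGISTRY[call["tool"]]                    # unknown tools raise KeyError
    return tool(**call["arguments"])

# Example: the model emits this structured call instead of free text.
print(dispatch('{"tool": "run_calculation", "arguments": {"expression": "17 * 23"}}'))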

              2.4 Orchestration Layer
              • Agent Frameworks: LangGraph (DAG-based orchestration), Microsoft AutoGen (multi-agent interaction), CrewAI (role-based delegation).
              • Planner/Executor Architecture: Isolates planning logic (goal decomposition) from executor agents (tool binding + result validation).
              • Multi-agent Collaboration: Messaging protocols, turn-taking, role negotiation (based on BDI model variants).
              2.5 Control, Policy & Governance
              • Guardrails: Prompt validators (Guardrails AI), semantic filters, intent firewalls.
              • Human-in-the-Loop (HITL): Review checkpoints, escalation triggers.
              • Observability: Telemetry for prompt drift, tool call frequency, memory divergence.
              • ABOM (Agentic Bill of Materials): Registry of agent goals, dependencies, memory sources, tool access scopes.
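Tying the layers above together, the sketch below shows a stripped-down planner/executor loop with an episodic-memory trace and a simple guardrail that blocks out-of-scope steps. The planner and tools are stubs standing in for LLM and API calls; this is an illustration of the stack's shape, not an implementation of LangGraph, AutoGen, or CrewAI.

# Sketch: planner/executor loop with episodic memory and a basic guardrail (all stubs).
def plan(goal: str) -> list[str]:
    """Stand-in for an LLM planner that decomposes a goal into steps."""
    return [f"research: {goal}", f"summarize: {goal}"]

def call_tool(step: str) -> str:
    return f"(stub) executed '{step}'"

ALLOWED_PREFIXES = ("research:", "summarize:")   # governance: tool access scope
episodic_memory: list[dict] = []                 # memory layer: per-run trace

def run_agent(goal: str) -> list[str]:
    results = []
    for step in plan(goal):
        if not step.startswith(ALLOWED_PREFIXES):            # guardrail check
            episodic_memory.append({"step": step, "status": "blocked"})
            continue
        outcome = call_tool(step)                             # executor + tool layer
        episodic_memory.append({"step": step, "status": "ok", "outcome": outcome})
        results.append(outcome)
    return results

print(run_agent("compare SLM and LLM deployment costs"))
print(episodic_memory)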

              3. Agentic Patterns in Practice

(Source: OWASP)

              As Agentic AI matures, a set of modular, reusable patterns is emerging—serving as architectural primitives that shape scalable system design, foster consistent engineering practices, and provide a shared vocabulary for governance and threat modeling. These patterns embody distinct roles, coordination models, and cognitive strategies within agent-based ecosystems.

• Reflective Agent: Agents that iteratively evaluate and critique their own outputs to enhance performance (a minimal sketch follows this list). Example: AI code generators that review and debug their own outputs, like Codex with self-evaluation.
• Task-Oriented Agent: Agents designed to handle specific tasks with clear objectives. Example: Automated customer service agents for appointment scheduling or returns processing.
• Self-Learning and Adaptive Agent: Agents that adapt through continuous learning from interactions and feedback. Example: Copilots, which adapt to user interactions over time, learning from feedback and adjusting responses to better align with user preferences and evolving needs.
• RAG-Based Agent: This pattern involves the use of Retrieval Augmented Generation (RAG), where AI agents dynamically utilize external knowledge sources to enhance their decision-making and responses. Example: Agents performing real-time web browsing for research assistance.
• Planning Agent: Agents that autonomously devise and execute multi-step plans to achieve complex objectives. Example: Task management systems organizing and prioritizing tasks based on user goals.
• Context-Aware Agent: Agents that dynamically adjust their behavior and decision-making based on the context in which they operate. Example: Smart home systems adjusting settings based on user preferences and environmental conditions.
• Coordinating Agent: Agents that facilitate collaboration, coordination, and tracking, ensuring efficient execution. Example: A coordinating agent assigns subtasks to specialized agents, such as in AI-powered DevOps workflows where one agent plans deployments, another monitors performance, and a third handles rollbacks based on system feedback.
• Hierarchical Agents: Agents organized in a hierarchy, managing multi-step workflows or distributed control systems. Example: AI systems for project management where higher-level agents oversee task delegation.
• Distributed Agent Ecosystem: Agents interact within a decentralized ecosystem, often in applications like IoT or marketplaces. Example: Autonomous IoT agents managing smart home devices, or a marketplace with buyer and seller agents.
• Human-in-the-Loop Collaboration: Agents operate semi-autonomously with human oversight. Example: AI-assisted medical diagnosis tools that provide recommendations but allow doctors to make final decisions.
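The reflective-agent pattern mentioned above can be sketched as a short generate / self-critique / revise loop. Both the generator and the critic here are stubs standing in for LLM calls; the loop structure is the point.

# Sketch of the reflective-agent pattern: generate, self-critique, revise, repeat.
from typing import Optional

def generate(task: str, feedback: Optional[str] = None) -> str:
    return f"draft for '{task}'" + (f" (revised after: {feedback})" if feedback else "")

def critique(draft: str) -> Optional[str]:
    """Return a critique string, or None when the draft is judged acceptable."""
    return "add a concrete example" if "revised" not in draft else None

def reflective_agent(task: str, max_rounds: int = 3) -> str:
    draft = generate(task)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if feedback is None:
            break                      # self-evaluation passed
        draft = generate(task, feedback)
    return draft

print(reflective_agent("explain agentic AI to a CFO"))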

              4. Security and Risk Framework

              Agentic AI introduces new and very real attack vectors like (non-exhaustive):

• Memory poisoning – Agents can be tricked into storing false information that later influences decisions
• Tool misuse – Agents with tool or API access can be manipulated into causing harm (see the access-scoping sketch after this list)
• Privilege confusion – Known as the “confused deputy” problem, agents with broader privileges can be exploited to perform unauthorized actions
• Cascading hallucinations – One incorrect AI output triggers a chain of poor decisions, especially in multi-agent systems
• Over-trusting agents – Particularly in co-pilot setups, users may blindly follow AI suggestions
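One common mitigation for tool misuse and the confused-deputy problem is a per-agent allowlist of tools, so an agent can only ever invoke what its role requires. The sketch below is illustrative; the agent ids, tool names, and stub tools are hypothetical.

# Sketch: per-agent tool scoping to mitigate tool misuse and privilege confusion.
AGENT_TOOL_SCOPES = {
    "support_agent": {"lookup_order", "issue_refund"},
    "reporting_agent": {"lookup_order"},             # read-only scope
}

TOOLS = {
    "lookup_order": lambda order_id: f"(stub) order {order_id}",
    "issue_refund": lambda order_id: f"(stub) refund for {order_id}",
}

class ToolAccessError(Exception):
    pass

def invoke(agent_id: str, tool_name: str, **kwargs):
    allowed = AGENT_TOOL_SCOPES.get(agent_id, set())
    if tool_name not in allowed:
        # Deny and surface the violation instead of silently executing with borrowed privileges.
        raise ToolAccessError(f"{agent_id} is not permitted to call {tool_name}")
    return TOOLS[tool_name](**kwargs)

print(invoke("reporting_agent", "lookup_order", order_id="A-17"))   # allowed
# invoke("reporting_agent", "issue_refund", order_id="A-17")        # raises ToolAccessError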

5. Strategic Considerations for Enterprise Leaders

              5.1 Platformization
              • Treat Agentic AI as a platform capability, not an app feature.
              • Abstract orchestration, memory, and tool interfaces for reusability.

              5.2 Trust Engineering

              • Invest in AI observability pipelines.
              • Maintain lineage of agent decisions, tool calls, and memory changes

              5.3 Capability Scoping

              • Clearly delineate which business functions are:
              • LLM-augmented (copilot)
              • Agent-driven (semi-autonomous)
              • Fully autonomous (hands-off)

5.4 Pre-empting and managing threats

              • Embed threat modelling into your software development lifecycle—from the start, not after deployment
              • Move beyond traditional frameworks—explore AI-specific models like the MAESTRO framework designed for Agentic AI
              • Apply Zero Trust principles to AI agents—never assume safety by default
              • Implement Human-in-the-Loop (HITL) controls—critical decisions should require human validation
              • Restrict and monitor agent access—limit what AI agents can see and do, and audit everything

              5.5 Governance

              • Collaborate with Risk, Legal, and Compliance to define acceptable autonomy boundaries.
              • Track each agent’s capabilities, dependencies, and failure modes like software components.
              • Identify business processes that may benefit from “agentification” and identify the digital personas associated with the business processes.
              • Identify the risks associated with each persona and develop policies to mitigate those. 

              6. Conclusion: Building the Autonomous Enterprise

              Agentic AI is not just another layer of intelligence—it is a new class of digital actor that challenges the very foundations of how software participates in enterprise ecosystems. It redefines software from passive responder to active orchestrator. From copilots to co-creators, from assistants to autonomous strategists, Agentic AI marks the shift from execution to cognition, and from automation to orchestration.

              For enterprise leaders, the takeaway is clear: Agentification is not a feature—it’s a redefinition of enterprise intelligence. Just as cloud-native transformed infrastructure and DevOps reshaped software delivery, Agentic AI will reshape enterprise architecture itself.

              And here’s the architectural truth: Agentic AI cannot scale without platformization.

              To operationalize Agentic AI across business domains, enterprises must build AI-native platforms—modular, composable, and designed for autonomous execution.

              The future won’t be led by those who merely implement AI. It will be defined by those who platformize it—secure it—scale it.

              Author

              Sunita Tiwary

              Sunita Tiwary

              Senior Director– Global Tech & Digital
              Sunita Tiwary is the GenAI Priority leader at Capgemini for Tech & Digital Industry. A thought leader who comes with a strategic perspective to Gen AI and Industry knowledge. She comes with close to 20 years of diverse experience across strategic partnership, business development, presales, and delivery. In her previous role in Microsoft, she was leading one of the strategic partnerships and co-creating solutions to accelerate market growth in the India SMB segment. She is an engineer with technical certifications across Data & AI, Cloud & CRM. In addition, she has a strong commitment to promoting Diversity and Inclusion and championed key initiatives during her tenure at Microsoft.
              Mark Oost - AI, Analytics, Agents Global Leader

              Mark Oost

              AI, Analytics, Agents Global Leader
              Prior to joining Capgemini, Mark was the CTO of AI and Analytics at Sogeti Global, where he developed the AI portfolio and strategy. Before that, he worked as a Practice Lead for Data Science and AI at Sogeti Netherlands, where he started the Data Science team, and as a Lead Data Scientist at Teradata and Experian. Throughout his career, Mark has worked with clients from various markets around the world and has used AI, deep learning, and machine learning technologies to solve complex problems.

                Who leads in the Agentic Era: The Builders or the Adopters?

                Sunita Tiwary
                Jun 18, 2025

                We’ve entered a new phase of AI – one where systems no longer wait for instructions but actively reason, plan, and act. This shift from generative to agentic AI raises a defining question:

                Who will lead the next wave of transformation?

                 Will it be the tech companies building the foundation models and platforms, or the industries embedding AI into real-world business workflows? The answer is clear: neither side can win alone. Agentic AI isn’t a plug-and-play solution—it’s a systemic leap that demands AI-native infrastructure, new talent roles, a culture of experimentation, and trust in autonomous systems. The future belongs to those who can bridge the gap between breakthrough technology and scalable, responsible value creation. In this article, we explore the evolving power dynamic between builders and adopters—and why service providers may be the unlikely accelerators of this new era.

                Agentic AI: Beyond Implementation to Transformation

                Unlike prior tech cycles, Agentic AI isn’t simply implementing a new tool or channel. It demands a complete rethink of how work is done, how decisions are made, and how value is created. To truly harness its power, industries need more than APIs and dashboards.

                They need:

                • Infrastructure readiness: scalable compute, data pipelines, and model orchestration.
                • Talent transformation: from prompt engineers to AI product managers, the skills needed are nascent and niche.
                • Mindset shift: a culture of experimentation, agility, and comfort with co-creating alongside AI.

In this context, the true differentiator isn’t just having access to Agentic AI; it’s being prepared to reimagine how you operate with AI at the core.

                ROI, Talent, and the Black Box Problem

                While tech companies dazzle with breakthrough models and autonomous agents, industries face grounded realities:

                • ROI is uncertain unless use cases are tightly coupled with business outcomes.
                • Niche talent is hard to find, and even harder to retain.
                • The black-box nature of LLMs challenges observability, governance, and trust.
                • Security, privacy, and compliance must be rethought in the age of generative automation.

                This isn’t a plug-and-play revolution. It’s a systemic shift. Industries must invest not only in tools but also in readiness and resilience.

                The Evolving Power Dynamic

                Tech companies lead the way in building foundation models, toolchains, and agentic platforms. They control the tech stack, drive innovation velocity, and shape the ecosystem. Yet, they face challenges around monetization, trust, and the long tail of enterprise needs.

                On the other hand, industries hold the real-world context, proprietary data, and deep knowledge of customer behaviour. They define high-value use cases, drive adoption at scale, and ultimately determine where AI delivers impact. But they must also tackle integration complexity, change management, and readiness gaps.

                The new power players will be those who can navigate both worlds — translating the potential of Agentic AI into practical, governed, and scalable transformation across domains.

                Strategic Implications for Service Providers

                For service companies working with both tech builders and enterprise consumers, this creates a unique strategic opportunity:

                • Act as translation layers between Agentic AI innovation and industry needs.
                • Provide platformization strategies (moving from isolated tools and pilots to creating scalable, reusable AI foundations inside an enterprise) to help industries build internal capability, not just consume tech.
                • Build AI governance frameworks that bridge the black-box risks and enterprise trust requirements.
                • Offer talent incubation and skilling programs tailored to AI-first roles.

                Service companies must evolve from implementation partners to AI transformation enablers.

                The Real Winners: Co-Creators of Value

                Ultimately, the winners in the Agentic AI era will not be defined solely by who builds the most powerful models or the most dazzling demos. They will be the ones who can:

                • Align AI with business strategy.
                • Drive adoption with speed and responsibility.
                • Build ecosystems that are trustworthy, explainable, and human-centric.

                This is not just a race to innovate — it’s a race to transform. And those who can blend technology, context, and trust will define the next era of value creation.

                In this new landscape, co-creation is the new competitive advantage.

                Meet the Authors

                Sunita Tiwary

                Sunita Tiwary

                Senior Director– Global Tech & Digital
                Sunita Tiwary is the GenAI Priority leader at Capgemini for Tech & Digital Industry. A thought leader who comes with a strategic perspective to Gen AI and Industry knowledge. She comes with close to 20 years of diverse experience across strategic partnership, business development, presales, and delivery. In her previous role in Microsoft, she was leading one of the strategic partnerships and co-creating solutions to accelerate market growth in the India SMB segment. She is an engineer with technical certifications across Data & AI, Cloud & CRM. In addition, she has a strong commitment to promoting Diversity and Inclusion and championed key initiatives during her tenure at Microsoft.
                Mark Oost - AI, Analytics, Agents Global Leader

                Mark Oost

                AI, Analytics, Agents Global Leader
                Prior to joining Capgemini, Mark was the CTO of AI and Analytics at Sogeti Global, where he developed the AI portfolio and strategy. Before that, he worked as a Practice Lead for Data Science and AI at Sogeti Netherlands, where he started the Data Science team, and as a Lead Data Scientist at Teradata and Experian. Throughout his career, Mark has worked with clients from various markets around the world and has used AI, deep learning, and machine learning technologies to solve complex problems.

                  Our journey to winning the NTIA 5G challenge

                  Ashish Yadav
                  Oct 16, 2023
                  capgemini-engineering

                  The 5G Challenge aimed to accelerate the adoption of open, interoperable wireless network equipment to support vendor diversity, supply chain resiliency and national security.

                  From Morse Code to the NTIA

                  When Guglielmo Marconi sent a Morse Code signal using radio waves to a distance of 3.3 kilometers in 1895, he may not have fully understood its impact on communication in the years to come. His actions sparked a revolution that continues to transform how people communicate.

                  83 years later, the National Telecommunications and Information Administration (NTIA) was created. It has played a crucial role in shaping the nation’s telecommunications policies and promoting innovation and growth in the technology field.

                  As part of the U.S. Department of Commerce, the NTIA is the Executive Branch agency that advises the President on telecommunications and information policy issues. NTIA’s programs and policymaking focus primarily on expanding broadband Internet access and adoption in America, expanding the use of spectrum by all users, advancing public safety communications, and ensuring that the Internet remains an engine for innovation and economic growth.

                  NTIA office bearers worked tirelessly to fight the digital divide and bring digital equity. In 1994, it sponsored the first virtual government hearing over the Internet. Schools and public libraries nationwide offered public access points for Americans who otherwise would not have access to view the hearing. It was undoubtedly a significant shift in thinking and how people connected.

                  In 1995, the principles of a book co-written by former US Vice President Al Gore, ‘The Global Information Infrastructure: Agenda for Cooperation’ helped transform the Internet into a shared international resource throughout the rest of the 1990s. As part of this, the digitization of information on the website of the NTIA and other U.S. agencies made government more accessible to everyday Americans.

                  The 2022 5G Challenge

                  In 2022, NTIA launched a 2-year ‘5G Challenge’ program to foster a vibrant 5G O-RAN vendor community. The 5G Challenge, hosted by the US Institute for Telecommunication Sciences (ITS), aimed to accelerate the adoption of open, interoperable wireless network equipment to support vendor diversity, supply chain resiliency, and national security.

                  In its first year (2022), NTIA/ITS required the contestants to successfully integrate hardware and/or software solutions for one or more 5G network subsystems: Central Unit (CU), Distributed Unit (DU) and Radio Unit (RU).

The Capgemini team participated in the 2022 5G Challenge, competing in the Central Unit category and winning all three challenge stages.

Capgemini qualified for Stage One by demonstrating compliance with 3GPP and O-RAN Alliance standards, and we were evaluated successfully on end-to-end functionality and standards conformance. After successfully demonstrating wrap-around testing in Stage Two, we proceeded to Stage Three. Capgemini won Stage Three, and the challenge, by successfully integrating subsystems from five different vendors: user equipment (UE), Radio Unit, Distributed Unit, Central Unit, and Core.

                  The 2023 5G Challenge

                  The 2023 5G Challenge was the second of the two 5G Challenges. NTIA selected contestants with high-performing 5G subsystems that showcased multi-vendor interoperability across RUs and combined CUs and DUs (CU+DU). CableLabs hosted the challenge and provided two separate 5G test and emulation systems.

                  Capgemini was assigned one of the two subsystems. Contestant subsystems that passed the emulated testing were integrated with the Capgemini subsystem and the CableLabs baseline system consisting of a 5G standalone core and UE emulator. ‘Plug-and-play’ performance was evaluated using a standard corpus of performance metrics. The testing involved cold pairing with radios selected by NTIA.

                  The level of testing and interoperability justifies the event’s name. It really was a challenge, but one in which the Capgemini team participated with great enthusiasm. It was also a testimonial to the resilience, compliance, and quality of the Capgemini CU and DU framework. We completed all the stages for wrap-around emulation testing, End-to-end (E2E) integration testing to establish and test E2E sessions (CU+DU and RU), and Mobility testing between two E2E sessions.

                  Success!

                  The NTIA awarded Capgemini first place for Multi-Vendor E2E Integration. NTIA specifically applauded Capgemini during the ceremony for achieving a 100% pass rate on all feature and performance tests, which was the icing on the cake.

                  It took hard work, teamwork, personal sacrifices, motivation, hope, and triumph to reach the final hard-won victory. The Capgemini team’s incredible journey with the 5G Challenge 2023 started on March 1st, 2023, with Stage One kick-off, and culminated in September 2023, when NTIA awarded us first place.

                  At the closing ceremony, held in the CableLabs/Krio facilities, Boulder, Colorado, on September 21st, Capgemini was presented with two prizes: Multi-Vendor E2E Integration and Wrap-around testing for the Open RAN CU and DU.

                  This experience of competing in the two NTIA challenges over the past two years reinforced the foundation and importance of O-RAN, and why interoperability is so crucial for the broader ecosystem. 

                  It has been an incredible journey for the Capgemini team to be part of this historic initiative by NTIA. It brought the diverse vendor community together to give a meaningful push to making wireless networks interoperable. It also gave us all a common purpose to serve, by building a resilient supply chain for national security.



                    Raising network subscribers’ awareness of energy consumption
                    A new way for CSPs to tackle Scope 3 emissions

                    Subhankar Pal
                    May 5, 2025
                    capgemini-engineering

                    Connectivity is essential to our modern world, but it incurs a severe environmental cost.

                    Device connectivity accounts for a significant share of mobile communications networks’ total energy consumption. Every video stream, file download, or cloud sync requires energy to manage the electromagnetic waves that carry data to and from your phone.

The energy used for mobile data transfer globally is 16.10 terawatt-hours (TWh) per month[1] – over a year, roughly 60% of the UK’s annual electricity usage (or all of South Africa’s). That energy contributes notably to communication service providers’ (CSPs’) Scope 3 emissions – the indirect emissions occurring across the value chain.

[1] Calculated based on a mobile phone average of 0.13 kWh to transfer 1 GB of data across a mobile network (GSMA) and a global total data transfer of 123.84 exabytes per month (Ericsson)
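The footnote's arithmetic can be reproduced directly, as the short illustrative calculation below shows, using only the two cited figures.

# Reproducing the footnote's arithmetic (illustrative only; figures from GSMA and Ericsson).
kwh_per_gb = 0.13                   # average energy to move 1 GB over a mobile network
exabytes_per_month = 123.84         # global mobile data traffic per month
gb_per_exabyte = 1e9

energy_kwh = kwh_per_gb * exabytes_per_month * gb_per_exabyte
print(f"{energy_kwh / 1e9:.2f} TWh per month")   # ≈ 16.10 TWh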

Scope 3 emissions, which can account for 80% of some telcos’ carbon footprints, pose the biggest challenge in the move to net zero. This is due not only to their magnitude, but also to the fact that they are the hardest to measure and are not fully within telcos’ control. Mobile users contribute a significant portion of these Scope 3 emissions.

As CSPs set ambitious net-zero targets, Scope 3 emissions, which can account for over half of a company’s carbon burden, are now in sharp focus.

Simple changes by mobile users – such as doing data-heavy tasks over energy-efficient Wi-Fi, or avoiding unnecessary activities such as updating apps they don’t use – could significantly cut the energy use of mobile networks.

                    And this presents a challenge: how do you reduce what you don’t directly control? The answer lies in finding ways to influence user behavior without compromising their quality of experience.

                    The solution: Tools to inform, and change behavior

                    On-device applications and tools can inform, nudge, and empower users to reduce their energy and data footprints. These tools can analyze user behaviors, provide information on their CO₂ impact, and offer personalized recommendations to cut their carbon footprint – such as using Wi-Fi instead of mobile data during peak hours or opting for lower-resolution streaming when on the go.

                    Such behavior shifts may seem small individually, but at scale they translate into significant reductions in downstream network load and energy usage, making a measurable dent in Scope 3 emissions.

Apps that incentivize subscribers to make these types of decisions can decrease the energy used on the network. Some subscribers may be happy to do this to reduce their carbon footprint and just need an app to give them the information. Others may respond to incentives such as earning credits against their mobile bill, or to gamification such as competing with friends to get the lowest monthly carbon emissions score.

Such apps also capture lots of granular data on user behavior, which gives mobile networks and equipment providers a comprehensive view of the environmental impact of each subscriber at an individual level, replacing the estimates currently used. Given enough time, data on the activity of millions of devices accessing the network can be coordinated. Each user can be sent subtle signals to nudge their behavior in ways that collectively benefit the whole network – similar to the approach the energy industry is taking with smart meters and lower-cost overnight tariffs.

The good news is that we have already done this. In collaboration with Nokia and Google, we have developed a minimum viable product (MVP) which we call ‘Energy Efficiency for Scope 3 Indirect Emissions’. Put simply, it’s an on-device app designed to raise awareness about energy consumption among network subscribers. The app could soon be helping Nokia’s customers, and the expertise we have developed could help others develop similar solutions.

                    Shaping mobile user behavior for energy efficiency: Lessons learned

                    In developing the MVP, we overcame various challenges. In addition to the many technical and data management challenges of integrating and scaling any new capability into the complicated architectures of mobile networks, three challenges specific to this issue are worth mentioning.

                    The importance of scoring

We quickly identified that a key feature of any behavioral tool was a scoring system. Indeed, scoring is a proven gamification strategy that encourages participants to tap into their natural desires for competition and achievement.

                    Our system assesses subscribers based on their energy demand from the network – and hence carbon footprint – and gives them a ‘score’. This serves as a tangible metric that subscribers can understand, making the abstract concept of energy consumption more concrete, and providing a benchmark to improve upon.

Beyond scoring their impact, we also needed practical ways to get subscribers to act to reduce that score, such as actionable insights and recommendations for reducing energy consumption.
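To illustrate the idea of turning usage into a tangible score, here is a deliberately simplified sketch. It is not the MVP's actual algorithm: the Wi-Fi energy figure, the grid emission factor, the peer-average baseline, and the normalization are all assumptions chosen only to show the shape of such a calculation.

# Illustrative sketch of a subscriber energy score (not the MVP's actual algorithm).
KWH_PER_GB_MOBILE = 0.13     # GSMA average for mobile data
KWH_PER_GB_WIFI = 0.03       # assumed lower figure for Wi-Fi offload (illustrative)
KG_CO2_PER_KWH = 0.20        # assumed grid emission factor (illustrative)

def monthly_score(mobile_gb: float, wifi_gb: float, peer_average_kwh: float = 3.0) -> dict:
    energy_kwh = mobile_gb * KWH_PER_GB_MOBILE + wifi_gb * KWH_PER_GB_WIFI
    co2_kg = energy_kwh * KG_CO2_PER_KWH
    # Lower energy than the peer average earns a higher score, bounded to 0-100.
    score = max(0, min(100, round(100 * peer_average_kwh / (energy_kwh + peer_average_kwh))))
    return {"energy_kwh": round(energy_kwh, 2), "co2_kg": round(co2_kg, 2), "score": score}

print(monthly_score(mobile_gb=20, wifi_gb=60))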

                    User trust

                    Convincing subscribers to trust, install and use this app is also a necessity. Building trust starts with transparency – users must understand what data is collected, how it’s used, and how it benefits them. That meant careful UX design for ease of use (unsurprisingly, poor user experience can significantly damage trust) – along with a rigorous approach to security, end-to-end encryption, compliance with regional data protection laws, and openness about the use of data.

                    Delivering actionable insights

                    Armed with detailed data on user behavior, the network can start to spot subtle patterns and nudge users at scale. This was the trickiest part, requiring us to deploy smart people to develop complex AI and mathematical models – similar to those used by electricity grids to nudge users into more energy efficient behaviors. The result is insights and automated systems that help individual subscribers (as well as CSPs) to take targeted action toward reducing emissions.

                    Not just a capability, but a catalyst for change

                    Few mobile users are aware of the impact of their mobile devices. By making them more aware of their individual impact, and empowering them to take action, we can encourage proactive energy saving behavior. This collective responsibility is essential for CSPs (and, ultimately everyone) to achieve net-zero goals.

                    The good news is that we are well on the way to achieving this. Indeed, Google showcased the tool at their booth at Mobile World Congress 2025. And, in doing so, we have acquired a great deal of expertise that can be deployed for other mobile network scope 3 reduction initiatives. As we move towards a net-zero world, initiatives like this will be essential in shaping our collective efforts to combat climate change.

                    Contact us to learn more about our MVP – and more broadly about how Scope 3 emissions can be reduced in mobile networks.

                    Meet the author

                    Subhankar Pal

                    Subhankar Pal

                    Senior Director and Global Innovation leader for the Intelligent Networks program, Capgemini Engineering 
                    Subhankar has over 24 years of experience in telecommunications, specializing in advanced network automation, optimization, and sustainability using cloud-native principles and machine learning for 5G and beyond. At Capgemini, he leads technology product incubation, product strategy, roadmap development, and consulting for the telecommunications sector and related markets.