Demonstrating 5G AI-RAN for Autonomous Mobility with NVIDIA

Artificial Intelligence for Radio Access Networks (AI-RAN) is rapidly emerging as a critical enabler of the next phase of telecom transformation. As AI adoption increases, demand for inference to power real-time, AI-driven services is rising significantly. Traditional RAN architectures cannot process this new type of AI traffic and remain a mere pass-through to AI data centers. Operators now require networks that are not only faster and more efficient, but also software-defined and AI-native, capable of supporting AI workloads at scale at both the network core and the edge. In addition to the usual voice and data traffic, traffic generated by physical AI – autonomous robots and vehicles, cameras, sensors, mission- and safety-critical systems – needs distributed computing infrastructure to ensure deterministic latency, reliability, and cost-effective scaling. AI-RAN provides the architecture for such a distributed computing infrastructure: one that combines connectivity, AI computing, and sensing, and also hosts an agentic layer for resource optimization, service assurance, and operations.

Capgemini has collaborated extensively with NVIDIA on AI-RAN implementation and the integration of Physical AI applications.

AI-RAN offers a unique opportunity for telcos to become the fabric for Physical AI

  • AI-RAN optimizes spectrum utilization, reduces energy consumption, and improves service experience through real-time decision-making. More importantly, it creates a path for communication service providers (CSPs) to evolve beyond connectivity and position themselves as platforms for intelligent, revenue-generating services at the edge.
  • By leveraging AI-RAN and edge-based AI infrastructure, telcos can support compute-intensive workloads closer to the point of action. Beyond connectivity, these capabilities also enable operators to unlock new B2B revenue streams across industries such as manufacturing, logistics, retail, and healthcare by offering differentiated, AI-enhanced services. Autonomous vehicles, for example, can offload parts of visual processing and contextual reasoning to the network edge, reducing latency and improving safety.

This enables operators to:

  • Build rich, contextual intelligence from vehicle sensor data
  • Dynamically enhance high-definition maps with hyper-local updates
  • Support real-time decision-making for urban mobility services

These capabilities not only improve outcomes for mobility-as-a-service[1] providers and municipalities, but also open new monetization paths for CSPs, transforming AI-RAN investments into scalable, city-level platforms for intelligent mobility.

[1] Mobility-as-a-Service integrates various forms of transport and transport-related services into a single, comprehensive, and on-demand mobility service.

MWC 2026 Demo: AI-RAN in Autonomous Mobility


Capgemini’s work within Project ULTIMO, a Horizon Europe–funded initiative, demonstrates how AI-RAN and the distributed AI infrastructure can support large-scale autonomous mobility services. The project aims to deploy economically viable, on-demand public transport using fully autonomous vehicles across multiple European cities.

As part of the demonstration:

  • Selected camera data streams are transmitted over 5G to agentic AI applications running on NVIDIA GH200 Grace Hopper servers at the edge
  • AI workloads dynamically scale across the Distributed AI infrastructure, supporting multiple shuttles and use cases simultaneously

This architecture enables real-time detection of road incidents, public safety events, and accessibility needs, while ensuring that mission-critical RAN functions always receive priority access to compute resources.
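The priority rule above – RAN functions always get compute first, AI workloads use what remains – can be sketched as a simple admission policy. This is an illustrative toy, not the demo's actual scheduler: the class name, capacities, and the fixed reservation for RAN are all assumptions made for the example.

```python
from dataclasses import dataclass, field

RAN_PRIORITY = 0   # lowest number = highest priority
AI_PRIORITY = 10

@dataclass(order=True)
class Workload:
    priority: int
    name: str = field(compare=False)
    gpu_fraction: float = field(compare=False)  # share of one GPU, 0..1

class EdgeScheduler:
    """Admit workloads so that RAN functions always fit first and a slice
    of GPU capacity stays reserved for them at all times (hypothetical)."""

    def __init__(self, gpu_capacity: float = 1.0, ran_reserve: float = 0.4):
        self.gpu_capacity = gpu_capacity
        self.ran_reserve = ran_reserve  # capacity AI workloads may never touch

    def admit(self, pending: list[Workload]) -> list[Workload]:
        queue = sorted(pending)  # RAN (priority 0) sorts ahead of AI jobs
        used = 0.0
        admitted = []
        for w in queue:
            # RAN may use the full GPU; AI is capped below the RAN reserve
            limit = (self.gpu_capacity if w.priority == RAN_PRIORITY
                     else self.gpu_capacity - self.ran_reserve)
            if used + w.gpu_fraction <= limit:
                used += w.gpu_fraction
                admitted.append(w)
        return admitted
```

With a 40% RAN reserve, a DU Layer 1 job is always admitted, while AI inference jobs are admitted only until the remaining 60% is exhausted.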

The AI-RAN Alliance defines three complementary dimensions of AI integration within wireless networks:

  • AI for RAN, Improving Efficiencies: Applying AI to optimize RAN performance (up to 20% improvement in user throughput), spectral efficiency (up to 2x improvement), and reliability.
  • AI and RAN, Shared Infrastructure: Running AI and RAN workloads on shared, GPU-accelerated infrastructure.
  • AI on RAN, New Revenue Opportunities: Enabling AI applications directly at the network edge.

As operators extend AI infrastructure beyond the core network and into enterprise and industrial environments, AI-RAN becomes a bridge between connectivity and real-world AI applications. Early deployments such as distributed learning for remote wind farms demonstrate how AI-RAN is evolving in the era of Physical AI, where intelligence directly interacts with and learns from the physical environment.


Capgemini designs, builds, and demonstrates AI-RAN solutions using NVIDIA technology. The objective is clear: transform 5G and future 6G infrastructure into a shared, intelligent compute and connectivity platform, one that serves autonomous-network functions as well as B2B applications that require AI aggregation capabilities.
At Mobile World Congress 2025, Capgemini showcased this collaboration through a live wind-farm demonstration. The solution combined:

  • NVIDIA AI libraries, open models and frameworks, and accelerated AI infrastructure
  • Capgemini’s Edge Computing platform (IEAP)
  • Capgemini’s RAN DU/CU software frameworks
  • Capgemini’s Telco AI and data platform, EIRA
  • Digital twin capabilities required by the wind-farm AI applications

Together, they enabled proactive resource management and predictive maintenance, illustrating how AI-RAN can simultaneously improve network efficiency and unlock new enterprise use cases.

Looking Ahead: The distributed AI infrastructure and the Emergence of Physical AI

Physical AI represents a shift from purely digital intelligence to systems that perceive, reason, and act within the physical world using real-time feedback loops. This includes applications spanning autonomous mobility, smart cities, industrial automation, and critical infrastructure.

Supporting Physical AI at scale requires a distributed AI infrastructure: a GPU-accelerated fabric that seamlessly connects edge devices, edge data centers, and cloud resources, creating a unified intelligence layer that enables real-time inference and distributed learning across physical and virtual nodes, while maintaining telecom-grade reliability and latency.

When combined with AI-RAN architectures, the distributed AI infrastructure allows CSPs to deliver both connectivity and AI capacity as an integrated service. Sharing compute resources such as GPUs between network functions and B2B applications lets those applications reap the benefits of AI-enabled network autonomy: for example, closed-loop control and predictive anomaly detection and mitigation deliver the predictable throughput and spectral efficiency needed for SLA guarantees.

The figure below maps Capgemini-developed and integrated assets onto the AI-RAN Alliance’s view of AI-enhanced services.

Figure 1: AI-RAN Alliance Network AI Services, Mapping Capgemini Autonomous Network Assets and Integrations

Autonomous Networks: A Prerequisite for AI-RAN at Scale

Autonomous networks are essential for manifesting AI-native telecom networks. They provide real-time, closed-loop control over both compute and connectivity resources, allowing AI workloads and RAN functions to coexist efficiently on shared infrastructure. As operators expose AI compute services over 5G, autonomous network intelligence becomes the mechanism that ensures performance, resilience, and multi-tenant fairness.

For network equipment providers (NEPs), this represents a clear mandate: enable CSPs with architectures that support self-optimizing, self-healing networks powered by multi-agent intelligence that can predict, diagnose, and resolve issues without human intervention.

Capgemini-NVIDIA AI-RAN: What’s Under the Hood in the MWC 2026 Evolved Architecture

  • Capgemini 5G CU and DU with NVIDIA Aerial CUDA-Accelerated RAN, built on the NVIDIA Grace Hopper platform, pack a powerful combination of the NVIDIA Grace CPU and NVIDIA Hopper GPU connected over the NVIDIA NVLink-C2C interconnect, providing the fast, efficient communication required to power the 5G Layer 1 as well as AI agents running Large Language Models (LLMs) and Large Visual Models (LVMs).
  • The NVIDIA BlueField-3 data processing unit (DPU) offloads networking, storage, and security processing from the server, creating an accelerated data path and freeing the host computing resources for AI workloads. This separation of network and application traffic on the same server keeps workloads isolated and performance predictable.
  • Capgemini’s Event Intelligence and Relationship Analysis (EIRA) and Multi-Agent Anomaly Detection Application (MADA) provide network sensing, anomaly prediction and RCA needed for a closed loop Autonomous Network Operation. This is enabled through Agentic AI applications which consume network metrics and resource information, run predictive agents, diagnose future issues and execute workflows to normalize and mitigate future faults.
  • Capgemini IEAP Edge distributes application workloads based on resource predictions from MADA.
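The last bullet, prediction-driven workload distribution, can be illustrated with a greedy placement sketch. Everything here is a made-up assumption for the example: the node names, the idea that predictions arrive as fractions of free GPU per node, and the greedy big-jobs-first policy stand in for whatever IEAP and MADA actually do.

```python
def place_workloads(predictions: dict[str, float],
                    workloads: list[tuple[str, float]]) -> dict[str, str]:
    """Greedy placement: assign each workload to the edge node with the most
    predicted free GPU headroom. `predictions` maps node -> predicted free
    GPU fraction (e.g. from a forecasting service); `workloads` is a list of
    (name, required GPU fraction) pairs."""
    headroom = dict(predictions)            # don't mutate the caller's dict
    placement: dict[str, str] = {}
    for name, need in sorted(workloads, key=lambda w: -w[1]):  # big jobs first
        node = max(headroom, key=headroom.get)                 # best candidate
        if headroom[node] >= need:
            headroom[node] -= need
            placement[name] = node
        else:
            placement[name] = "deferred"    # no node can host it right now
    return placement
```

Placing the largest workloads first reduces the chance that fragmentation leaves a big job unplaceable, a common heuristic for bin-packing-style problems.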

Building on prior demonstrations of intelligent resource forecasting, Capgemini has evolved its approach with the Multi-Agent Anomaly Detection Application (MADA). This system represents a shift from reactive automation to predictive, self-governing networks.

MADA employs specialized AI agents that:

  • Detect anomalies and forecast resource demand
  • Perform root-cause analysis and recommend mitigation actions
  • Execute corrective measures through orchestration systems
  • Continuously validate outcomes through reflective feedback loops

By ingesting both structured and unstructured network data, this approach enables AI-RAN environments to operate autonomously with minimal human intervention, unlocking the agility and efficiency required for next-generation AI services.
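The four agent roles listed above form a closed loop. As a minimal sketch (the agent internals, metric names, and escalation behavior are hypothetical, not MADA's actual design), the loop can be shaped like this:

```python
def closed_loop(metrics_stream, detect, diagnose, mitigate, validate):
    """Detect -> diagnose -> mitigate -> validate, one cycle per metrics
    sample. Each argument after the stream is a callable standing in for
    one specialized agent."""
    actions = []
    for metrics in metrics_stream:
        anomaly = detect(metrics)           # anomaly-detection agent
        if anomaly is None:
            continue                        # healthy cycle, nothing to do
        cause = diagnose(anomaly, metrics)  # root-cause-analysis agent
        action = mitigate(cause)            # mitigation/orchestration agent
        actions.append(action)
        if not validate(action, metrics):   # reflective feedback loop
            actions.append(("escalate", cause))
    return actions
```

Wiring in stub agents shows the flow: a high-load sample triggers detection, diagnosis, and a scale-out action, while a healthy sample passes through untouched.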

Figure 2: The Distributed AI infrastructure for AI driven MaaS market

AI-RAN and Autonomous Networks enable new B2B use-cases like Mobility as a Service (MaaS)

This convergence of NVIDIA-powered AI-RAN over a distributed edge infrastructure and autonomous networks is essential for emerging use cases such as Mobility-as-a-Service (MaaS), where vehicles, infrastructure, and city systems must operate as a coordinated, intelligent whole.

Autonomous Networks enable B2B use-cases for industry domains like MaaS, Autonomous Vehicles, and Mass Mobility:

AI-RAN provides edge-hosted AI capabilities—including Vision-Language Models (VLMs)—allowing autonomous shuttles and buses to process high-bandwidth sensor streams locally for low-latency perception and decision making.

Aggregating data across multiple autonomous units (shuttles, buses, depots, cameras, roadside sensors) gives operators a consolidated, real-time fleet intelligence layer that individual vehicles cannot achieve on their own.

Cross-vehicle intelligence unlocks high-value operational use cases such as dynamic fleet routing, optimized re-routing, enhanced environmental awareness, and automated incident alerts to operations centers.

Fleet-level coordination enabled by AI-RAN supports advanced behaviors—such as swarm-like movement and collaborative decision making—driving safer, more efficient MaaS and Autonomous Vehicle services.
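One way to picture the fleet intelligence layer described above is as cross-vehicle sensor fusion: an incident is confirmed only when several vehicles independently report it in the same area. The sketch below is purely illustrative (the grid-cell bucketing, the two-report threshold, and the toy coordinates are all assumptions, not the demo's actual pipeline).

```python
from collections import defaultdict

def aggregate_incidents(detections, min_reports=2):
    """Fuse per-vehicle detections into fleet-level incidents.
    `detections` is a list of (vehicle_id, event, (lat, lon)) tuples; an
    incident is confirmed when >= min_reports distinct vehicles report the
    same event in the same coarse grid cell."""
    by_cell: dict[tuple, set] = defaultdict(set)
    for vehicle_id, event, (lat, lon) in detections:
        cell = (round(lat, 1), round(lon, 1))   # coarse spatial bucket
        by_cell[(event, cell)].add(vehicle_id)  # set dedups repeat reports
    return [
        {"event": event, "cell": cell, "reports": len(vehicles)}
        for (event, cell), vehicles in by_cell.items()
        if len(vehicles) >= min_reports
    ]
```

Requiring multiple independent reports filters out single-vehicle false positives, which is exactly the consolidated view that no individual vehicle can achieve on its own.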

Why This Matters for Telcos

CSP expectations indicate that AI workloads will rapidly dominate network demand: 41% of CSPs identify Agentic AI as the key driver of autonomous network operations[1], highlighting the need for self-optimizing architectures to manage future service complexity. This surge in persistent, AI-generated traffic means networks cannot rely on manual operations; instead, they must evolve into highly scalable, predictive, autonomous systems capable of self-healing and dynamic scaling.

AI-RAN creates a strategic inflection point for CSPs. Rather than remaining providers of undifferentiated connectivity, operators can become essential enablers of intelligent ecosystems, particularly in autonomous mobility, but autonomous network capabilities need to be built in.

The global autonomous vehicle market is projected to grow from USD 68.1 billion in 2024 to over USD 214 billion by 2030, driven by advances in AI, sensor fusion, and real-time decision systems. These applications depend on high-throughput, ultra-reliable connectivity and distributed AI capabilities that cannot be delivered through manually managed or static network operations; they require autonomous networks.

Conclusion

AI-RAN is becoming the foundation for intelligent, autonomous networks that power real-world AI applications. Capgemini is helping CSPs and NEPs transform 5G infrastructure into a shared platform for connectivity, computing, and intelligence.

By combining AI-RAN and autonomous network operations, the industry can unlock new business models, accelerate enterprise innovation, and enable safer, smarter mobility at scale. This is not simply an evolution of the network; it is a redefinition of its role in the digital economy.

Join us at Mobile World Congress 2026 in Barcelona to experience these capabilities firsthand. Visit the Capgemini booth at Hall 2K21 and explore how AI-powered networks are shaping the future of connectivity and intelligence.

[1]  Omdia: 41% of CSPs see Agentic AI driving autonomous network operations – https://omdia.tech.informa.com/pr/2025/nov/41percent-of-csps-see-agentic-ai-driving-autonomous-network-operations