
Teaming and Autonomy: From tools to teammates in the age of HMU

Ali Shafti
Oct 2, 2025

Shared mental models and adaptive robots are making machines into genuine teammates. Here’s how your business can take advantage.

Automation has transformed countless tasks, roles, and industries. Across these disparate areas, there are machines that complete their work with minimal human input. However, it’s not all good news. Working with machines still feels too much like a one-sided relationship where robotic systems are more like tools than teammates.

Machines are rarely trusted to work in complete isolation. Technological, infrastructural, ethical, safety, or regulatory limitations require humans to remain an active part of the task, or to at least keep autonomous systems in check. These constraints mean the shift to full automation often remains out of reach, and we’re left working in a state of semi-autonomy.

This halfway stage to full automation brings other challenges. Humans tasked with checking on our robot counterparts end up having to conform to machine-like behaviors: rigidly choreographed actions in which people interact with machines at prescribed moments, in narrowly defined ways.

For the people in these semi-autonomous setups, minimal human input can feel like a maximal task.

Overcoming the challenges

So, how can we turn machines into true teammates? We suggest two key challenges remain:

  • Businesses lack the right AI collaboration skills – Most companies are still in transition, as many individuals view AI primarily through an efficiency lens, seeing it as a tool in the kitbag rather than a collaborative partner to deliver innovation and growth. The shift to an AI-first mindset requires strategy and leadership. 
  • Machines lack human context – AI technologies cannot understand contextual nuances of how humans think, collaborate, and adapt. Humans compensate by providing context, nuances, and knowledge that machines lack.

To overcome these challenges, a fundamental shift is emerging: Human-Machine Understanding (HMU). HMU enables AI to interact intelligently and intuitively with humans, turning machines from tools into teammates. This is particularly timely now as we are moving towards machines that have enough agency to contribute to tasks as humans might — simple remote-controlled systems or background processes that merely correct for errors don’t meet this threshold.

Evidence of a move in this direction through the application of emerging technology can already be seen. Writing tasks can be completed twice as fast using AI. Economists report 10-20% higher productivity when using language models, while call-center operators demonstrate 14% increased efficiency with generative AI, rising to 30% for less experienced workers.

However, these developments focus on incremental productivity gains rather than transformational gains enabled by true human-machine collaboration. Turning machines into true teammates requires people and technology to work together to perceive, decide, and act. While advances in AI and robotics have expanded autonomous capabilities, systems must also interpret and respond to human behavioral indicators of task progress, intentions, preferences, and internal states, such as fatigue or stress.
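To make the idea of responding to human behavioral indicators concrete, the sketch below maps estimated human-state signals to a level of machine initiative. Everything here is an illustrative assumption, not a published HMU design: the indicator names, the thresholds, and the three autonomy levels are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class HumanState:
    """Hypothetical behavioral indicators an HMU system might estimate."""
    fatigue: float  # 0.0 (fresh) .. 1.0 (exhausted)
    stress: float   # 0.0 (calm)  .. 1.0 (overloaded)

def autonomy_level(state: HumanState) -> str:
    """Map the estimated human state to how much work the machine takes on.

    Thresholds are illustrative only; a real system would tune them per
    task and per operator.
    """
    load = max(state.fatigue, state.stress)
    if load > 0.7:
        return "take_over"  # machine handles routine steps itself
    if load > 0.4:
        return "assist"     # machine suggests actions, human confirms
    return "observe"        # human leads; machine monitors quietly

# Example: a tiring operator triggers more machine initiative
print(autonomy_level(HumanState(fatigue=0.8, stress=0.3)))  # -> take_over
```

The design choice to *raise* machine initiative as human load rises is the inverse of traditional automation, where the human steps in when the machine struggles; that role reversal is what distinguishes a teammate from a tool.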

Monitoring and understanding the human context and addressing it effectively will mean that adaptive autonomous systems evolve into dependable co-workers. Let’s see how those more natural partnerships are fostered in one sector.

Learning lessons from the industrial sector

The transition from Industry 4.0’s automation-focused approach to Industry 5.0 turns traditional human-machine interactions into collaborative partnerships. EU data highlights two areas where adaptive automation is taking initial steps along this path:

  • Robotics in cleaning and waste disposal – Businesses are prioritizing hygiene through automation, with adoption rising from 21% to 29% of firms. Organizations are reducing health risks to employees, ensuring consistency in cleanliness, and freeing up human resources for more complex tasks. This cooperation fosters a safer and more efficient work environment.
  • Robotics in surveillance, security, and inspection applications – Implementation increased from 15% to 21%, with robots complementing human capabilities in continuous monitoring while enabling staff to focus on critical incident response and analysis. This cooperation enhances security outcomes and reduces operational vulnerabilities.

These nascent explorations show how machines complement human efforts in health-critical areas. But they are only the first signs of the deeper human-machine collaboration that emerging technology enables.

Moving to the next level of teaming and autonomy

Three key developments are driving this transformation in how humans and machines will work together in the age of HMU:

  • Humanoids and collaborative robots (cobots) – The next wave of human–robot teaming builds on the legacy of cobots, which introduced mechanical versatility and safety features for shared workspaces. Humanoids extend this value by adding a human-like form factor and interaction model, enabling them to operate in environments designed for people without costly redesigns. They can use tools built for human hands and communicate through familiar gestures and language, reducing friction and fostering intuitive collaboration. Early examples like Raymath’s cobot applications, which boosted productivity by 600%, illustrate the foundation on which humanoid systems will build, delivering even greater adaptability and human-centric interaction.
  • AI advances and the rise of physical AI – Beyond digital intelligence, physical AI brings cognition into the robot’s embodied form, combining advanced perception, control, and physical world models to enable human-like adaptability and dexterity. This allows robots to understand and predict interactions in dynamic, unstructured environments. These capabilities complement machine-learning progress and can address challenges such as concept drift, where changes in the working environment require continuous adaptation to maintain effective collaboration. Together, these advances move systems from rigid automation toward context-aware teammates that can interpret human cues and adapt in real time.
  • Industry demands and human-space compatibility – Workforce shortages and operational pressures are accelerating adoption, but businesses also need solutions that integrate into existing human-centric environments without major redevelopment costs. Humanoids and physically intelligent systems meet this need by blending into current workflows, reducing barriers to deployment while enabling deeper human collaboration. Companies like Bison Gear and Engineering demonstrate the value of this approach, transitioning from two full-time operators per shift to a single operator managing multiple systems while maintaining productivity.
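The concept-drift challenge mentioned above can be illustrated with a minimal sketch of one common approach: watching a sliding window of recent prediction errors and flagging a sustained rise over a baseline rate, which signals that the working environment has changed and the system should adapt. The class name, window size, and tolerance are illustrative assumptions.

```python
from collections import deque

class DriftMonitor:
    """Minimal concept-drift check via a sliding window of errors.

    Compares the recent error rate against a baseline; a sustained rise
    suggests the environment has shifted. Window and tolerance values
    here are illustrative, not tuned for any real deployment.
    """
    def __init__(self, baseline_error: float, window: int = 50,
                 tolerance: float = 0.1):
        self.baseline = baseline_error
        self.errors = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, prediction_wrong: bool) -> bool:
        """Record one outcome; return True if drift is suspected."""
        self.errors.append(1.0 if prediction_wrong else 0.0)
        if len(self.errors) < self.errors.maxlen:
            return False  # not enough evidence yet
        recent = sum(self.errors) / len(self.errors)
        return recent > self.baseline + self.tolerance

monitor = DriftMonitor(baseline_error=0.05)
# Feed 50 outcomes with a 20% error rate: drift is flagged
flags = [monitor.observe(i % 5 == 0) for i in range(50)]
print(flags[-1])  # True
```

In a collaborative setting, a drift flag like this would prompt the system to re-learn or to hand more initiative back to the human until it has adapted, rather than silently degrading.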

For pioneering firms, HMU-aligned systems provide a new competitive advantage, fusing human intuition with machine precision. Rather than being merely reactive, the HMU-enabled future is about the emergence of a reliable, proactive technological teammate that augments rather than replaces talented humans. Now is the time to explore HMU.

Meet the authors

Ali Shafti

Head of Human-Machine Understanding, Cambridge Consultants, part of Capgemini Invent
Ali leads a team of specialists in AI, psychology, and cognitive and behavioral sciences to create next-generation technologies that can truly understand and support users in dynamic, strenuous environments. Ali holds a PhD in Robotics with a focus on human-robot interaction and has more than 12 years' experience in research and development for human-machine interaction.
Irene Salazar Medina

Principal Engineer, Cambridge Consultants, part of Capgemini Invent
Irene leads technical work on human-centric AI, focusing on affective and cognitive modelling to enable adaptive autonomy and human–robot collaboration. With more than 10 years’ experience in engineering, she combines expertise in mechatronics and digital signal processing with deep experience in embedded software and computational modelling. This unique blend allows her to translate theoretical models of human behaviour and cognition into practical applications with measurable value across multiple sectors.
Ethan Lubbock

Senior AI Developer, Cambridge Consultants, part of Capgemini Invent
Ethan applies advanced machine learning techniques to tackle challenging and unconventional problems across diverse domains. His work includes developing reinforcement learning systems for autonomous cybersecurity, applying NLP and generative models to real-world client challenges, and pioneering AI-driven approaches for network routing and biomedical sensing. He leads technical projects that deliver high-profile client solutions and regularly presents original research at international conferences. Ethan holds a Master’s degree in Mathematics and Physics from the University of Durham.
Alexandre Embry

CTIO, Head of AI Robotics and Experiences Lab
Alexandre Embry is CTIO and a member of the Capgemini Technology, Innovation and Ventures Council. He leads the Immersive Technologies domain, analyzing trends and developing the deployment strategy at Group level. He specializes in exploring and advising organizations on emerging tech trends and their transformative powers. He is passionate about enhancing the user experience and identifying how Metaverse, Web3, NFT, and blockchain technologies, as well as AR/VR/MR, can advance brands and companies with enhanced customer or employee experiences. He is the founder and head of Capgemini's Metaverse-Lab and of the Capgemini Andy3D immersive remote collaboration solution.