Generative AI is just the start of data-powered collaboration. Human-Machine Understanding will deliver technology-enabled decision-making that truly adapts to human needs.

The rollout of Generative AI technologies – OpenAI’s ChatGPT and the large language models that followed from late 2022 onwards – has given new impetus to the potential of technology-enabled decision-making. The transition of AI from the laboratory to a consumer app marks a shift in human and machine collaboration. Suddenly, we can engage with technology conversationally, explaining our challenges and receiving actionable advice. After decades of technological advances, we finally have a trusted, data-enabled assistant to help us make decisions – or do we?

Towards a new era in decision-making processes

Overshadowed by the hyperbole accompanying the rollout of AI models is an underlying reality: technology-enabled decision-making is still a one-way system that relies heavily on humans. Current AI models offer only limited interaction, such as requesting clarifications or letting users flag unhelpful responses. They lack any understanding of your decision-making process itself – how you weigh options, what stages of reasoning you follow, and how your cognitive approach shapes your conclusions. In short, we’re still on our own at big decision-making moments.

However, the current, visible manifestation of AI services is just the first stage of a move towards deeper human and machine collaboration. The next stage, via human-machine understanding (HMU), will finally deliver technology-enabled decision-making partners.

HMU-equipped systems will provide the data-rich helping hand humans crave. These systems will understand your decision-making challenges and deliver the right information at the right time, tailored to your requirements. Consider how you explain complex analysis to a colleague – if they’re new, you might explain differently than to someone you’ve worked with for years. HMU brings this adaptive capability to AI decision support.

In high-stakes, time-sensitive scenarios, such as healthcare and strategic decision making, HMU-equipped systems could even account for internal human states, such as stress or fatigue, that might affect decision-making processes.

Unlocking HMU’s decision-making value

A key challenge in this evolution is building trust and transparency in AI-driven decisions. To address this, AI decision support systems must be able to explain their analysis and reasoning effectively, going beyond the black-box thinking of many current AI models. Just as human co-workers can explain complex processes to each other, HMU will provide explanations aligned with each user’s unique requirements.

One promising research area lies in understanding human mental models and decision-making processes. Let’s look at healthcare and strategic decision-making use cases to see how progress towards HMU will lead to better outcomes.

Enabling healthcare

Modern healthcare is a data-rich process. Pioneering collaborations between clinicians and machines exploit this data for better healthcare outcomes, with Generative AI already being used to enhance decision-making processes.

Take Color Health’s AI copilot system, which helps clinicians create cancer treatment plans by analyzing patient data and healthcare guidelines to identify missing diagnostics. Early results show clinicians can identify four times more missing labs and tests while reducing analysis time from weeks to minutes and maintaining oversight at every step.

Similarly, Google’s Med-PaLM helps doctors with complex cases by analyzing medical knowledge and patient data to suggest potential diagnoses and treatment options, while Microsoft’s Nuance DAX focuses on ambient clinical intelligence, automatically documenting patient encounters to help physicians focus more on patient interaction.

Confidence is crucial for AI-enabled healthcare decisions. Developments in storytelling-based Explainable AI (XAI) that provide comprehensible explanations to users, from smart home environments to eHealth interfaces, can build trust and address the diverse needs of healthcare professionals and patients.

Sensing and monitoring technologies are another area of data-led progress. AI-powered systems now include transformer models that recognize surgical gestures with 94% accuracy[1]. Digital twin systems, meanwhile, enable real-time monitoring by integrating data from sensors, devices, and systems to optimize clinical and non-clinical operations[2].
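To make the digital-twin idea concrete, here is a minimal sketch of the core pattern: streaming readings from many sensors are fused into a single live state snapshot that monitoring tools can query. The sensor names and class design are invented for illustration and are not taken from the cited framework.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a hospital "digital twin": it merges readings from
# multiple sensors into one live state, keeping only the newest reading per
# sensor. A real deployment would add persistence, alerting, and analytics.

@dataclass
class WardTwin:
    # sensor_id -> (latest value, timestamp of that value)
    state: dict = field(default_factory=dict)

    def ingest(self, sensor_id: str, value: float, timestamp: float) -> None:
        """Accept a reading, keeping only the most recent one per sensor."""
        prev = self.state.get(sensor_id)
        if prev is None or timestamp > prev[1]:
            self.state[sensor_id] = (value, timestamp)

    def snapshot(self) -> dict:
        """Return the current fused view across all sensors."""
        return {sid: value for sid, (value, _) in self.state.items()}

twin = WardTwin()
twin.ingest("bed_3_heart_rate", 72.0, timestamp=100.0)
twin.ingest("bed_3_heart_rate", 75.0, timestamp=101.0)  # newer reading wins
twin.ingest("theatre_occupancy", 0.8, timestamp=100.5)
print(twin.snapshot())
# → {'bed_3_heart_rate': 75.0, 'theatre_occupancy': 0.8}
```

The key design choice is last-write-wins per sensor keyed by timestamp, so out-of-order messages from slow networks never overwrite fresher data.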

Enhancing strategy

Successful decision-making relies on informed choices from disparate data. Decision-makers must often act without a full understanding of the environment, relying on fragmented data to assess risks, predict outcomes, and determine the best course of action. Traditional systems often fail to synthesize fragmented information effectively, limiting their usefulness in complex, high-stakes contexts. Capgemini’s deep tech powerhouse, Cambridge Consultants, conducted a project for the UK Government to explore how HMU can help.

The team used the strategy game StarCraft II as a controlled test environment, replicating scenarios where decision-makers operate with incomplete information. The research developed AI assistants using neural networks and unsupervised and supervised learning to enhance human decision-making processes by tackling two key problems:

  1. Reducing ambiguity: Using advanced neural networks to analyze partial and historical data to predict unseen elements of the environment, providing both predictions and confidence levels to help decision-makers assess risks.
  2. Strategy detection: Classifying and tracking opponent strategic patterns over time to allow users to anticipate and adapt to evolving challenges.
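As a loose illustration of how a system might pair a strategy prediction with a confidence level, the sketch below scores partial observations against candidate strategies and normalizes the scores with a softmax. Everything here – the strategy labels, unit names, and weights – is hypothetical; the actual project used trained neural networks rather than a hand-written scoring rule.

```python
import math
from collections import Counter

# Hypothetical strategy labels and feature weights, for illustration only.
STRATEGIES = ["rush", "economic", "air_heavy"]

def softmax(scores):
    """Convert raw scores into a probability distribution summing to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_strategy(observed_unit_counts):
    """Score partial observations against each candidate strategy.

    In a real system the scores would come from a trained neural network;
    here a toy linear scoring rule stands in for it.
    """
    weights = {
        "rush":      {"marine": 1.0, "barracks": 1.5},
        "economic":  {"worker": 1.2, "expansion": 2.0},
        "air_heavy": {"starport": 2.0, "fighter": 1.0},
    }
    scores = [
        sum(weights[strat].get(unit, 0.0) * n
            for unit, n in observed_unit_counts.items())
        for strat in STRATEGIES
    ]
    probs = softmax(scores)
    best = max(range(len(STRATEGIES)), key=lambda i: probs[i])
    return STRATEGIES[best], probs[best]

# Partial scouting information: heavy on early military production.
obs = Counter({"marine": 8, "barracks": 2})
label, confidence = classify_strategy(obs)
print(label, round(confidence, 2))  # → rush 1.0
```

Reporting the probability alongside the label is what lets a decision-maker judge whether to act on the prediction or gather more information first.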

A user-centric explainability framework was crucial to this effort, combining an understanding of user requirements with XAI delivery techniques and interface designs. Because explanations were tailored to their needs, decision-makers trusted the AI outputs and integrated the insights into their workflows, making decisions with greater clarity and confidence. By bridging the gap between technical outputs and user understanding, these techniques could also be applied beyond strategy games, in settings such as resource management, logistics, or crisis response.

The Future: A vision for adaptive decision support

These early applications show how HMU systems can help to redefine how humans and machines collaborate to solve complex problems. By understanding user needs, interpreting contextual nuances, and providing tailored support, HMU systems transform machines from static tools into adaptive partners in decision-making processes.

From boards to steering committees, humans draw on the wisdom of groups – sometimes even the wisdom of the crowd. Machines have traditionally struggled in large group settings, but recent advances show that they can enable groups to outperform individual decision-makers.

Future developments will focus on improving real-time adaptability, enhancing explainability, and integrating these capabilities into diverse decision-making environments. The result? Effective and trusted human-machine collaboration that creates benefits for everyone.


[1] Chen, Ketai, D. S. V. Bandara, and Jumpei Arata. “A real-time approach for surgical activity recognition and prediction based on transformer models in robot-assisted surgery.” International Journal of Computer Assisted Radiology and Surgery (2025): 1-10.

[2] Han, Yilong, et al. “Digital twinning for smart hospital operations: Framework and proof of concept.” Technology in Society 74 (2023): 102317.

Ali Shafti

Head of Human-Machine Understanding, Cambridge Consultants, part of Capgemini Invent
Ali leads a team of specialists in AI, psychology, and cognitive and behavioral sciences to create next-generation technologies that can truly understand and support users in dynamic, strenuous environments. Ali holds a PhD in Robotics with a focus on human-robot interaction and has more than 12 years’ experience in research and development for human-machine interaction.
Matt Rose

Senior Analyst, Cambridge Consultants, part of Capgemini Invent
Matt is a strategic foresight analyst with over a decade of experience in research, horizon scanning, and data visualization. He has led future-focused projects across AI, robotics, and emerging technologies for high-profile clients. Drawing on a rich background in UX design, including work in the gaming industry, Matt brings a user-centered lens to innovation, helping organizations navigate technological change and design digital products that resonate with real-world needs.
Matthew J Clayton

Principal Algorithm Engineer, Cambridge Consultants, part of Capgemini Invent
Matthew is an experienced AI developer and data scientist who specializes in autonomous systems, numerical modelling, and applied statistics. Matthew has developed AI-enabled systems in many technology areas including robot navigation and mapping, reinforcement learning, computer vision, bio-sensing, cyberdefense, and cognitive radar. Matthew holds a DPhil in Astrophysics from the University of Oxford.
Alexandre Embry

Vice President, Head of the Capgemini AI Robotics and Experiences Lab
Alexandre leads a global team of experts who explore emerging tech trends and devise at-scale solutions across various horizons, sectors and geographies, with a focus on asset creation, IP, patents and go-to-market strategies. Alexandre specializes in exploring and advising C-suite executives and their organizations on the transformative impact of emerging digital tech trends. He is passionate about improving the operational efficiency of organizations across all industries, as well as enhancing the customer and employee digital experience. He focuses on how the most advanced technologies – such as embodied AI, physical AI, AI robotics, polyfunctional robots and humanoids, digital twins, real-time 3D, spatial computing, XR, and IoT – can drive business value, empower people, and contribute to sustainability by increasing autonomy and enhancing human-machine interaction.