Why is agentic AI the focus of our latest GovTech collaboration?

Technology used to be a support function for the state. Now, it’s reshaping how government operates, delivers services and earns the trust of citizens.

This shift could bring huge benefits for society. In a report we produced last year with the World Economic Forum (WEF) and the Global Government Technology Centre (GGTC) Berlin, we estimated that GovTech could unlock $9.8 trillion of public value by 2034.

Agentic AI could play a central role in realizing these gains by helping governments become more resilient, responsive and outcome-focused. But prioritizing speed over strategic clarity risks draining resources and damaging trust. To adopt agentic AI responsibly and successfully, governments need a credible basis for understanding where to start and what to prioritize. And they need to know how to move from experimentation to reality.

In the second report on GovTech that the WEF and GGTC Berlin have published in collaboration with Capgemini, we’ve conducted a study to help governments find that clarity. This blog explains what we found and how governments can apply it to get started on their agentic AI journey. But first, let’s look at what makes this technology potentially such a gamechanger.

What is agentic AI in government and why does it matter now?

Earlier workflow automation in government focused on automating individual tasks within a department or agency. By contrast, agentic AI can coordinate, decide and act across multiple steps of a workflow that spans government organizations.

This step change in capability can help governments do more with less in a world of shrinking budgets and workforces. Agentic AI saves time and reduces errors while giving citizens a better experience. It also frees up public servants to focus on meaningful work requiring human judgment.

As such, agentic AI augments people rather than replaces them – especially as human oversight is integral to deploying it responsibly. It’s already getting results. Our report, Making Agentic AI Work for Government: A Readiness Framework, includes case studies of how public sector organizations are using this advanced technology and the difference it’s making.

However, agentic AI also requires governments to change how they assess opportunity – from a department-based approach to one based on “functions”.

“Agentic AI could fundamentally transform governments by shifting them from automating tasks to delivering outcomes. By acting strategically now, they can lay the foundation for a more efficient, resilient and outcome-focused future.”

Manuel Kilian, Managing Director, Global Government Technology Centre, Berlin

Demand for agentic AI in government is accelerating fast: 90% of the 350 public sector organizations we surveyed plan to explore, pilot or implement it in the next two to three years. This signals that governments recognize they cannot meet growing pressures and expectations with current operating models. So, the question is no longer whether to engage with agentic AI. It’s how to apply it in the right order – starting with the workflows that are most ready for agentic AI now, and can be automated with the least risk.

Which government functions are most suited to agentic AI?

Functions are recurring activities that span organizational boundaries, encompass at least one end-to-end workflow and deliver a clear outcome. (Think eligibility assessment, fraud detection, benefit calculation, and permit issuance.) They’re a good fit for agentic AI because they reflect operational reality, are a suitable unit for pilots and promote cross-domain learning and reuse. But how do you know which are the strongest candidates?

Our report includes the first systematic framework for assessing government readiness for agentic AI. It maps 70 core government functions against two dimensions. The first is the opportunity for agentic AI to add public value (potential). The second is the practical, organizational and ethical considerations for deploying it responsibly (complexity).

The result is a shared reference point for which functions to prioritize (high readiness). It also shows which functions need more preparation (medium readiness) and which currently warrant caution (low readiness).
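The two-dimensional mapping above can be sketched in a few lines of code. This is purely illustrative: the report does not publish its scoring scales or thresholds, so the 0–10 scales, the cut-off values and the example scores below are all hypothetical assumptions, not the framework’s actual figures.

```python
# Illustrative sketch of a potential-vs-complexity readiness mapping.
# All scales, thresholds and scores here are hypothetical, not from the report.
from dataclasses import dataclass

@dataclass
class GovFunction:
    name: str
    potential: float   # opportunity to add public value (hypothetical 0-10 scale)
    complexity: float  # practical/organizational/ethical hurdles (hypothetical 0-10 scale)

def readiness(fn: GovFunction) -> str:
    """Classify a function into one of three readiness bands (thresholds are illustrative)."""
    if fn.potential >= 7 and fn.complexity <= 4:
        return "high"    # prioritize: high value, manageable complexity
    if fn.potential >= 5 and fn.complexity <= 7:
        return "medium"  # needs more preparation
    return "low"         # currently warrants caution

# Example functions named in the text, with invented scores for illustration.
functions = [
    GovFunction("eligibility assessment", potential=8.5, complexity=3.0),
    GovFunction("fraud detection", potential=8.0, complexity=6.0),
    GovFunction("permit issuance", potential=7.5, complexity=4.0),
]

for fn in functions:
    print(f"{fn.name}: {readiness(fn)} readiness")
```

Even a toy version like this makes the key design point visible: readiness is not a single score but the intersection of two dimensions, so a high-potential function can still land outside the “start here” band if its complexity is too high.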

In good news for governments, half of the functions analyzed combine significant agentic AI potential with manageable implementation complexity. This suggests there’s substantial scope to scale agentic AI where the right capacity and safeguards are in place.

With functions in the Public Services category achieving three of the highest scores, there’s a clear opportunity to improve efficiency and the citizen experience. At the other end of the scale, functions in the low-readiness group require nuanced interpretation and strategic judgment. This reinforces the need for what we call “human-AI chemistry”.

“The question isn’t whether governments should deploy agentic AI, but where to begin. Our analysis of 70 core functions reveals something striking: nearly half are both high-impact and achievable in the near term. That represents a substantial opportunity and a clear signal that the time for strategic action is now.”

Kelly Ommundsen, Head of Digital Inclusion, GovTech, & Regulatory Innovation, World Economic Forum

How can you adapt the global framework to local conditions responsibly?

The framework identifies where agentic AI can add public value with manageable risk. But the global assessment is a starting point, not a prescription. We’ve suggested six steps to help governments create their own actionable plan:

  1. Establish your baseline: First, you need to determine what’s feasible in your context. That includes assessing how digitally mature your organization is, how skilled your people are, and how much money you have. It also includes being clear about what legislation like the EU AI Act means for you.
  2. Address risks and challenges: Clear strategies can stop many problems before they start. For example, ethical impact assessments, continuous monitoring and keeping humans in the loop can help protect privacy and keep outcomes fair.
  3. Translate global scores into local priorities: The global assessment is a great starting point. But it’s based on global estimates that may not reflect local realities (for example, sticking points, legal barriers and public attitudes). Assess the global scores using local knowledge and data to discover where to focus.
  4. Be careful about what you automate and when: Governments are limited in how much technological change they can make at once. And as an emerging technology, agentic AI needs more upfront work than most. That’s why it’s important to start with the functions that offer high potential and low risk, and learn as you go. You can then apply the experience and trust you’ve gained to more complex areas.
  5. Test before you scale: Procurement rules, legacy systems, vendor limitations and workforce concerns often only show up when you start implementing agentic AI. Run small pilots before committing to full deployment.
  6. Treat the framework as dynamic: Functions that seem out of reach today may become accessible as technology evolves, infrastructure matures and regulations change. To grab opportunities as they emerge, regularly reassess which functions are most ready for agentic AI.

Underpinning these steps is the principle of “bounded autonomy”. This means defining operational scope, building in human oversight and being transparent about decisions at every stage. This approach results in agentic AI systems that are effective and accountable, and that provide a solid foundation from which to scale.

“The acute pressure on governments to deliver faster results and transform more deeply creates a strong case for getting started with agentic AI. This framework will help them bridge the gap between ambition and implementation by identifying the areas that are most promising and least complex.”

Marc Reinhardt, Global Public Sector Leader, Capgemini

Final word

The analysis in Making Agentic AI Work for Government: A Readiness Framework clearly shows the size of the opportunity for governments. But racing ahead without a strategic plan could lead to fragmented pilots that drain resources. This could damage public trust as well as the case for future AI investments.

By contrast, taking a research-based approach to identifying what to automate and when could convert potential into measurable public value.

The framework in the report supports the second path. It’s designed to give governments the strategic clarity to act with confidence, learn from experience and lead responsibly. We hope it helps you move from ambition to implementation.

Making Agentic AI Work for Government: A Readiness Framework is out now. Read the report on the WEF website and explore the findings on the interactive microsite.