
Machines need zero trust too: Why devices deserve context-aware security

Lee Newcombe
Jun 25, 2025

In the first post in this series, I wrote about the business and security outcomes that can be achieved for users (and the organizations to which they belong!) by adopting approaches labeled as “zero trust.” But why should we limit ourselves to interactions with human users? Don’t machines deserve a little attention too?

The answer, of course, is “yes” – not least because this would otherwise be a remarkably short post. So, I’m going to talk about the application of those high-level characteristics of zero trust mentioned in my last post – dynamic, context-based security – to operational technology (OT).

As every OT professional will quite rightly spell out – at length – OT is not IT. They have grown from separate disciplines, talk different network protocols, have different threat models, and often have different priorities when it comes to the application of the confidentiality, integrity, and availability triad we have used for so long in the security world. When your company faces losses of millions of dollars a day from a production line outage, or your critical national infrastructure (CNI) service can no longer function, availability rapidly becomes the key business issue, particularly where intellectual property may not be a core concern. Before diving into the application of dynamic, context-based security principles to OT, we should probably set a little more context:

  • OT facilities may not be as well-segmented as modern corporate IT networks. They were either isolated or “behind the firewall,” so why do more? (Of course, best practice has long pointed toward segmentation; however, if best practice were always implemented, I’d likely be out of a job.)
  • OT covers a vast range of technologies and different types of devices, from sensors out in the field through to massive manufacturing plants. Threat models differ! Context matters.
  • Devices often have embedded operating systems (typically cut-down versions of standard operating systems); these systems require patching and maintenance if they are not to become susceptible to known vulnerabilities.
  • Equipment requires maintenance. You’ll often find remote access facilities in the OT environment for the vendors to be able to conduct such maintenance remotely. (You might see where this is going from a security perspective.)
  • The move toward intelligent industry is pushing OT toward increasing use of machine learning and artificial intelligence, both of which are heavily reliant upon data – which means you need a way to export that data to the services performing the analysis. Your “air gap” isn’t really an air gap anymore. (And if we’re talking about critical national infrastructure, then there may well also be some sovereignty issues to consider.)
  • Legacy is a real problem. What happens if a business buys a specialist piece of kit and then the vendor goes bust? It could well form a critical part of the manufacturing process, and so stripping it out is not always possible, let alone straightforward.
  • OT doesn’t always talk IP. This is a problem for traditional security tools that only understand IP. We need to use specialized versions of traditional security tooling like monitoring solutions – solutions that can understand the communications protocols in use. Meanwhile, network transceivers/data converters may contain software components that can sometimes get overlooked from a security perspective.
  • Good models for thinking about OT security are out there, e.g., the Purdue model and the IEC 62443 series (which provide structures for the different levels of technology and functionality in OT environments, from the physical switches and actuators up to the enterprise information and management systems; a rough sketch of these levels follows this list). It’s not as much of a wild west out there as my words so far may indicate – but we can do better.
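For orientation, here is a rough, illustrative summary of those Purdue levels as a simple Python lookup table. The example systems listed are indicative rather than exhaustive, and the `describe_level` helper is purely an assumption for the sketch:

```python
# Illustrative summary of the Purdue model levels; the example systems are indicative only.
PURDUE_LEVELS = {
    0: "Physical process - sensors, actuators and other field devices",
    1: "Basic control - PLCs, RTUs and embedded controllers",
    2: "Area supervisory control - HMIs and SCADA servers",
    3: "Site operations - historians and manufacturing execution systems",
    3.5: "Industrial DMZ - brokers and relays between the OT and IT estates",
    4: "Business logistics - ERP, scheduling and other enterprise applications",
    5: "Enterprise network - corporate IT and internet-facing services",
}

def describe_level(level: float) -> str:
    """Return a short description of a Purdue level, or a default if unknown."""
    return PURDUE_LEVELS.get(level, "Unknown level")
```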

For the purposes of this article, the above highlights some interesting requirements from an OT security perspective:

  1. We need to understand the overall OT environment, and be able to secure access into and within it.
  2. We need to make the OT environment more resilient – reduce the blast radius of compromise. We really do not want one compromised machine taking out a whole facility.
  3. We want to be able to control machine-to-machine communications, and communications across the different layers of the Purdue model, e.g., from the shop floor to the management systems, or even across to the enterprise environment for import into the data lake for analysis purposes.

Lots of interesting problems, some of which seem very similar to those discussed in the context of securing human user access to applications and systems.

How do we start the process of finding some solutions? Well, first things first. We need a way to distinguish the devices we are securing, i.e., some form of machine identity. We have a variety of options here, from the installation of trusted digital certificates through to the use of network-based identifiers (including IP addresses and hardware addresses where available). Once we have identities, we can start to think of how to use them to deliver context-based security.
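As a minimal sketch of what that might look like in practice – assuming we can read a certificate subject where a trusted certificate exists, and fall back to network identifiers where it doesn’t – something like the following shape emerges. The class and function names here are illustrative assumptions, not any particular product’s API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class MachineIdentity:
    """Illustrative identity record for an OT device."""
    device_id: str                    # stable identifier, e.g. a certificate subject CN
    cert_fingerprint: Optional[str]   # present when the device holds a trusted certificate
    ip_address: Optional[str]         # weaker, network-based identifier
    mac_address: Optional[str]        # hardware address, where available

def resolve_identity(cert_subject_cn: Optional[str],
                     cert_fingerprint: Optional[str],
                     ip_address: Optional[str],
                     mac_address: Optional[str]) -> MachineIdentity:
    """Prefer a certificate-backed identity; fall back to network identifiers."""
    if cert_subject_cn and cert_fingerprint:
        return MachineIdentity(cert_subject_cn, cert_fingerprint, ip_address, mac_address)
    # No trusted certificate: use the strongest network identifier we have.
    fallback = mac_address or ip_address or "unknown-device"
    return MachineIdentity(fallback, None, ip_address, mac_address)
```

The design point is simply that the stronger the identity we can establish, the more weight we can place on it when making access decisions; network-based identifiers are better than nothing, but they are spoofable.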

Let’s start by establishing some baselines of normal behavior:

  • How do the devices in scope communicate?
  • What other devices do they communicate with, and what protocols do they use?
  • Are there some obvious segmentation approaches that we can take based on those communication patterns? If not, are there some more context-based approaches we can take, e.g., do specific communications tend to take place at specific times of day? (A simple profiling sketch follows this list.)
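A simple sketch of that profiling step, assuming flows between identified devices can already be observed (the `Flow` record, the protocol names, and the helper functions are illustrative assumptions), might record which device pairs, protocols, and times of day are normal, and flag anything outside that baseline:

```python
from collections import defaultdict
from datetime import datetime
from typing import Dict, Iterable, NamedTuple, Set, Tuple

class Flow(NamedTuple):
    src: str            # source device identity
    dst: str            # destination device identity
    protocol: str       # e.g. "modbus-tcp", "opc-ua", "dnp3"
    timestamp: datetime

Baseline = Dict[Tuple[str, str, str], Set[int]]

def learn_baseline(observed: Iterable[Flow]) -> Baseline:
    """Record which (src, dst, protocol) combinations occur, and in which hours of the day."""
    baseline: Baseline = defaultdict(set)
    for flow in observed:
        baseline[(flow.src, flow.dst, flow.protocol)].add(flow.timestamp.hour)
    return baseline

def is_anomalous(flow: Flow, baseline: Baseline) -> bool:
    """Flag flows never seen while learning, or seen only at other times of day."""
    hours = baseline.get((flow.src, flow.dst, flow.protocol))
    return hours is None or flow.timestamp.hour not in hours
```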

Such profiling may need to take place over an extended period of time in order to get a true understanding of the necessary communications. We should certainly be looking at how we control support access from vendors into the OT environment; let’s just start by making sure Vendor A can only access their own technology and not that of Vendor B. Let’s not forget access from internal users either, particularly if they have a habit of using personal or other unapproved devices. Going back to that segmentation point for a second, do we have any legacy equipment that is no longer in active support? If so, are we able to segment such kit away and protect access into and out of that environment to limit the risk associated with such legacy kit?
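To make that concrete, a deliberately simplified sketch of such a vendor-access check might look like the following; the vendor names, asset identifiers, and helper function are assumptions for illustration only:

```python
# Hypothetical mapping of vendor support accounts to the assets they maintain.
VENDOR_ASSETS = {
    "vendor-a": {"press-line-plc-01", "press-line-hmi-01"},
    "vendor-b": {"packaging-robot-03"},
}

# Legacy, out-of-support equipment lives in its own tightly restricted segment.
LEGACY_SEGMENT = {"boiler-controller-legacy"}

def vendor_may_access(vendor: str, asset: str, device_approved: bool) -> bool:
    """Allow remote support access only to the vendor's own kit, only from approved
    devices, and never directly into the legacy segment."""
    if not device_approved:
        return False
    if asset in LEGACY_SEGMENT:
        return False
    return asset in VENDOR_ASSETS.get(vendor, set())
```

A real deployment would drive these decisions from an asset inventory and an identity store rather than hard-coded mappings, but the shape of the check is the same.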

Whether we are trying to apply dynamic, context-based security to machines or users, many of the same considerations apply (a simplified policy sketch follows the list):

  1. Is there a way to uniquely identify and authenticate the entities requesting access?
  2. Where are the signals going to come from to enable us to define the context used to either grant or deny access?
  3. How can we segment the resources to which access is being requested?
  4. Where are we going to apply the enforcement mechanisms that act as the barriers to access? Do these mechanisms have consistent network connectivity or must they operate independently?
  5. How do we balance defense in depth with simplicity and cost of operation?
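Pulling those considerations together, a heavily simplified policy-decision sketch might look like this; the segment names, signal names, and data structures are illustrative assumptions rather than a reference implementation:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class AccessRequest:
    """A machine-to-machine (or user-to-machine) request as seen by an enforcement point."""
    source_identity: str    # consideration 1: authenticated identity (empty if unauthenticated)
    source_segment: str     # consideration 3: where the requester sits
    target_segment: str     # consideration 3: where the resource sits
    risk_signals: Dict[str, bool] = field(default_factory=dict)  # consideration 2, e.g. {"out_of_hours": True}

# Considerations 3 and 4: the cross-segment flows an enforcement point will permit.
ALLOWED_SEGMENT_FLOWS = {
    ("cell-control", "site-operations"),
    ("site-operations", "industrial-dmz"),
}

def decide(request: AccessRequest) -> bool:
    """Grant access only when identity, segmentation policy, and context signals all line up."""
    if not request.source_identity:
        return False    # unauthenticated requests fail closed
    if (request.source_segment, request.target_segment) not in ALLOWED_SEGMENT_FLOWS:
        return False    # flow not permitted by segmentation policy
    if any(request.risk_signals.values()):
        return False    # any raised risk signal denies access
    return True
```

Questions 4 and 5 then become deployment decisions: where such a check runs (and whether it fails open or closed when the policy engine is unreachable), and how many layers of such checks are worth the operational cost.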

If an organization already has some technologies that can help to deliver the required outcomes, e.g., some form of security service edge, there will often be some merit in extending that coverage to the OT environment, particularly with respect to remote access into such environments.

I’ve shown that we can apply the same zero trust principles to machines that we can apply to users. However, knowing the principles and believing they have value is one thing; finding an appropriate strategy to deliver them in an enterprise context is something else entirely. The final post in this series will talk about how we can approach this kind of enterprise security transformation in the real world.

About the author

Lee Newcombe

Expert in Cloud security, Security Architecture, Zero Trust and Secure by Design
Dr. Lee Newcombe has over 25 years of experience in the security industry, spanning roles from penetration testing to security architecture, often leading security transformation across both public and private sector organizations. As the global service owner for Zero Trust at Capgemini, and a member of the Certified Chief Architects community, he leads major transformation programs. Lee is an active member of the security community, a former Chair of the UK Chapter of the Cloud Security Alliance, and a published author. He advises clients on achieving their desired outcomes whilst managing their cyber risk, from project initiation to service retirement.