
A Secure AI: features vital to a robust and long-term deployment

Capgemini
6 Aug 2021

Without a proper, proactive approach to securing AI, the widespread adoption of AI in business, industry and society will simply not be possible.

Actively securing your AI is key to building trust in the deployments of the future

Generally, talking about a ‘secure AI’ can mean several things. In the context of moving from a proof of concept (PoC) to industrialisation, however, its definition is much clearer: it is the process of actively securing AI-based applications and deployments, in which hardware, software, governance and strategy together define the security of AI assets. Why is this process necessary? Simply put, without a proper, proactive approach to securing AI, and the gradual accumulation of trust that comes with such a process, the widespread adoption of AI in business, industry and society will lack inclusiveness, be treated with scepticism by excluded groups, and face endless contention.

Securing Machine Learning models is vital to promoting widespread adoption in industry

The most common form taken by AI today is Machine Learning (ML) – a method of analysis that identifies patterns in data and makes decisions with minimal human intervention. One disadvantage of ML models is that they are highly sensitive to their input data, which means that attempts by malicious actors to manipulate inputs or pollute training data – attacks known collectively as ‘adversarial Machine Learning’ – are made somewhat easier by the nature of these models.
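
To make this sensitivity concrete, the sketch below is a minimal illustration (scikit-learn and NumPy are assumed; the dataset and step size are arbitrary) of how nudging an input along a linear model’s weight vector – a crude version of an evasion attack – can flip the model’s prediction while changing the input only slightly.

    # Minimal sketch (scikit-learn and NumPy assumed, toy data) of how a small,
    # targeted perturbation of an input can change a model's prediction - the
    # core idea behind evasion-style adversarial Machine Learning.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    model = LogisticRegression().fit(X, y)

    x = X[0].copy()
    original = model.predict(x.reshape(1, -1))[0]

    # For a linear model, moving against the sign of the weight vector is the
    # quickest way to push a sample across the decision boundary.
    w = model.coef_[0]
    direction = -np.sign(w) if original == 1 else np.sign(w)

    x_adv = x.copy()
    for _ in range(200):
        if model.predict(x_adv.reshape(1, -1))[0] != original:
            break
        x_adv += 0.05 * direction

    print("original prediction: ", original)
    print("perturbed prediction:", model.predict(x_adv.reshape(1, -1))[0])
    print("size of perturbation (L2):", np.linalg.norm(x_adv - x))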

The first task of any effort to secure industrialised AI deployments is to verify the authenticity of the model in question – that is, the continuity of the model’s design/architecture. This can be accomplished by establishing:

  1. Model detectability: a system of checks to identify any potentially malicious data samples, and
  2. Model verifiability: a mapping of (raw data) inputs to their respective (AI) outputs, to ensure consistency and predictability of results in industrialised data science models (a minimal sketch of both checks follows this list).
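
The sketch below illustrates both checks; the helper names are hypothetical, NumPy is assumed, and in practice the training statistics and reference outputs would be versioned alongside the model artefact.

    # Minimal sketch (hypothetical helpers, NumPy assumed) of the two checks:
    # flagging suspicious input samples, and confirming that a fixed set of
    # reference inputs still maps to the outputs approved during testing.
    import numpy as np

    def detect_suspect_samples(batch, train_mean, train_std, z_threshold=4.0):
        """Model detectability: return indices of samples whose features fall
        far outside the range seen in training - a crude proxy for
        'potentially malicious or polluted' data."""
        z_scores = np.abs((batch - train_mean) / train_std)
        return np.where(z_scores.max(axis=1) > z_threshold)[0]

    def verify_model_outputs(model, reference_inputs, approved_outputs, atol=1e-6):
        """Model verifiability: re-run the reference inputs through the deployed
        model and check the outputs match those signed off in testing."""
        return np.allclose(model.predict(reference_inputs), approved_outputs, atol=atol)

A check like verify_model_outputs would usually run as a gate in the deployment pipeline, so that a model whose input-to-output mapping has drifted is never promoted to production.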

Model security is especially important where AI is to be used in critical and/or real-time systems, such as medical condition auto-classification or autonomous driving. Where such systems have been built using open source ML tools like TensorFlow or PyTorch, real-time model security is vital to protect the human lives that may be partially dependent on them.

Encryption techniques can also be used to hide or anonymise the more sensitive data used in ML models, while authentication techniques can verify that the model being pushed out to production is unchanged from the one approved and finalised in testing.
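
As one concrete form of that authentication step, the sketch below (using Python’s standard hashlib; the file path and recorded digest are hypothetical) compares a SHA-256 digest of the model artefact being deployed against the digest recorded at sign-off.

    # Minimal sketch (hypothetical path and digest) of verifying that the model
    # artefact pushed to production is byte-for-byte identical to the approved one.
    import hashlib

    def sha256_of(path: str, chunk_size: int = 8192) -> str:
        """Compute the SHA-256 digest of a file without loading it all into memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    APPROVED_DIGEST = "..."  # recorded when the model was signed off in testing

    if sha256_of("models/approved_model.pt") != APPROVED_DIGEST:
        raise RuntimeError("Model artefact differs from the approved build - deployment aborted.")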

Protecting AI’s underlying infrastructure is a multi-disciplinary process

The second task of securing a large-scale AI is to protect its overall architecture: the underlying hardware and (non-AI) back-end software. This is far easier than securing the data science models themselves, since it relies on well-established rather than novel methods. Typical techniques include:

  • Asset redundancy: the duplication of underlying technology infrastructure to negate any potential hardware failure, so that critical AI-backed business operations remain uninterrupted. Asset redundancy is almost always the responsibility of the underlying infrastructure provider. In the case of Public Cloud environments, this is determined by the Service Level Agreement that a deployer of AI technology signs with the Cloud provider.
  • Risk mitigation: the identification and evaluation of potential risks to a set of processes, together with the active development of options and actions to reduce the likelihood that those risks materialise. Risk mitigation can be implemented through technology, but is often defined through good governance frameworks and personnel training.
  • Security access controls: a combination of authentication – verifying that users are who they claim to be, through credentials and/or other methods – and authorisation – determining which parts of the AI system each verified user may access. A minimal sketch of this pattern follows this list.
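
The sketch below illustrates that two-step pattern; the roles, tokens and actions are hypothetical, and in a real deployment the credentials would come from an identity provider rather than an in-memory table.

    # Minimal sketch (hypothetical roles, tokens and actions) of the two-step
    # pattern: authenticate the caller first, then authorise the specific action.
    from dataclasses import dataclass

    # Which role may perform which actions against the AI system.
    PERMISSIONS = {
        "data_scientist": {"train_model", "view_predictions"},
        "ml_engineer": {"deploy_model", "view_predictions"},
        "analyst": {"view_predictions"},
    }

    # Stand-in for an identity provider: token -> (user name, role).
    TOKENS = {"token-abc": ("alice", "ml_engineer"), "token-xyz": ("bob", "analyst")}

    @dataclass
    class User:
        name: str
        role: str

    def authenticate(token: str) -> User:
        """Step 1: verify that the caller is who they claim to be."""
        if token not in TOKENS:
            raise PermissionError("Unknown or expired credentials")
        name, role = TOKENS[token]
        return User(name, role)

    def authorise(user: User, action: str) -> None:
        """Step 2: check whether this verified user may perform the action."""
        if action not in PERMISSIONS.get(user.role, set()):
            raise PermissionError(f"{user.name} ({user.role}) may not {action}")

    caller = authenticate("token-abc")
    authorise(caller, "deploy_model")  # permitted: an 'ml_engineer' may deploy models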

In practice, Public Cloud environments today (Amazon Web Services, Google Cloud Platform, Microsoft Azure, and others) allow users to undertake these activities via provider-managed services, making this second task of securing large-scale AI far more accessible to less technical users.

AI ethics and explainability are increasingly coupled to the widescale acceptance of AI in society

The third and final task of securing AI is perhaps the most contentious: the ethics and explainability (the ability to transparently justify AI outputs) of Machine Learning. With industry giants having fallen prey to the same pitfalls of bias as junior AI practitioners, it is clear that AI must be developed in a social context, grounded in the extent of its interactions with humans and human-designed systems, rather than in an isolated, lab-like environment. There is no single ‘right’ way to incorporate ethics into industrial-scale AI, but some methods might include:

  • The use of characteristically-diverse focus groups to review and assess the suitability of input data used to train models for a particular task or set of tasks.
  • The use of inherently explainable data science models (for less complex tasks) or the comparison of expected vs actual outputs from the model (for more complex tasks) – a minimal sketch of the first approach follows this list.
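
As one illustration of an inherently explainable model, the sketch below (scikit-learn and its bundled Iris dataset are assumed) trains a shallow decision tree and prints its full decision logic in plain text, in a form that non-specialist reviewers can inspect alongside the training data.

    # Minimal sketch (scikit-learn and its bundled Iris dataset assumed) of an
    # inherently explainable model: a shallow decision tree whose rules can be
    # printed in plain text and reviewed by non-specialists.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

    # Every decision the model can make, feature by feature, in readable form.
    print(export_text(tree, feature_names=list(data.feature_names)))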

Securing an industrialised AI, then, is an all-encompassing effort, but one that business and technology leaders cannot afford to leave to chance, especially as advanced analytics becomes responsible for an ever-increasing number of tasks – menial or otherwise – in the modern world.