
Can AI save the security operations center?

Geert van der Linden
2021-11-30

Cybersecurity has long been a cat-and-mouse game between security professionals and criminals: progressively more robust defenses from the former push the latter to devise ever more devious tactics, which are then countered in turn, prompting new cybercrime strategies. And around and around it goes.

All in all, it’s a tough gig for the cybersecurity professional – one that’s only getting tougher.

First, the volume and variety of cybercriminal activity have increased exponentially. Spurred on by the disruption caused by the pandemic, 2020 proved to be a record-breaking year for cyberattacks, with ten million DDoS attacks recorded. Double-extortion ransomware families such as Maze and Sodinokibi were arguably some of the highest-profile and most lucrative new malware variants on the block.

Against this backdrop, the ongoing cyber-skills shortage is even more pointed. A recent report suggests that 3.1 million additional cybersecurity professionals are needed to close the skills gap. Cybercrime is unlikely to disappear overnight, and it'll be equally difficult to magic up three million security professionals in the short term.

So what’s a security operations center (SOC) to do?

AI-driven SOCs

Many are already turning to artificial intelligence (AI) and machine learning (ML) to level the playing field.

AI thrives in environments full of mundane, repetitive tasks, which makes the SOC, with its focus on threat identification, tracking, and remediation, an ideal setting. With AI automating much of this costly and time-consuming work, the manual workload shrinks, which is crucial for any SOC facing a growing attack surface and a shrinking pool of skilled cybersecurity professionals.
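
To make that concrete, here is a minimal sketch of ML-assisted alert triage, the kind of repetitive work a SOC might hand to a model. It uses scikit-learn's RandomForestClassifier on tiny synthetic data; the feature names, values, and labels are illustrative assumptions, not a production detection model.

```python
# Minimal sketch: ML-assisted alert triage to cut manual SOC workload.
# Features and training data are synthetic, illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-alert features: [events_per_min, distinct_dest_ports, mb_sent_out]
X_train = np.array([
    [2,   3,  0.5],   # routine traffic
    [5,   4,  1.0],
    [300, 90, 40.0],  # port-scan-like burst
    [250, 70, 55.0],  # possible exfiltration
])
y_train = np.array([0, 0, 1, 1])  # 0 = benign, 1 = suspicious

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Incoming alerts: only the suspicious ones reach a human analyst.
new_alerts = np.array([[4, 2, 0.8], [280, 85, 60.0]])
for features, label in zip(new_alerts, clf.predict(new_alerts)):
    print(features, "-> escalate" if label else "-> auto-close")
```

Applied at scale across a SIEM's alert stream, this same pattern is what frees analysts from the bulk of routine triage.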

In addition to improving the quality and speed of analysis, AI technologies can perform threat modeling and impact analysis, activities that historically relied on experienced cybersecurity professionals. The latest AI tools go a step further, surfacing insights that manual-only analysis could not. For instance, some can identify when a threat is likely to escalate into an attack on the corporate network and automatically shut down the services or subnets involved. Others can scan vast amounts of code to automate the discovery of vulnerabilities.
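
As a rough illustration of the "shut down a subnet on harmful activity" idea, the sketch below pairs unsupervised anomaly detection (scikit-learn's IsolationForest) with an automated response hook. Here block_subnet() is a hypothetical stand-in for a real firewall or SOAR integration, and the traffic features are assumed for the example.

```python
# Sketch: anomaly-driven automated response. block_subnet() is a
# hypothetical placeholder, not a real firewall API.
import numpy as np
from sklearn.ensemble import IsolationForest

def block_subnet(subnet: str) -> None:
    print(f"[action] quarantining {subnet}")  # would call firewall/SOAR here

# Assumed per-subnet baseline features: [flows_per_min, failed_logins, mb_sent_out]
baseline = np.array([[50, 1, 10], [60, 0, 12], [55, 2, 11], [52, 1, 9]])
detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

current = {"10.0.1.0/24": [54, 1, 10], "10.0.9.0/24": [900, 40, 300]}
for subnet, features in current.items():
    if detector.predict([features])[0] == -1:  # -1 means anomalous
        block_subnet(subnet)
```

Whether to let such a model act autonomously, rather than merely recommend, is exactly the judgment call the rest of this piece argues should stay with humans.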

Is AI the ultimate answer for the troubled SOC?

Clearly, AI’s ability to analyze and act on data quickly and at scale is a boon for the short-staffed and over-worked SOC team. However, it is no silver bullet. No matter how advanced AI becomes, it simply cannot replace cybersecurity experts.

Cybersecurity parlance often cites the human user as the weakest link in the chain, but AI is not without its own vulnerabilities. It is yet another system that can be targeted, and one that enlarges the attack surface. Carefully crafted inputs can confuse the underlying ML model and bypass the very protections the system is meant to provide. For example, generative adversarial networks (GANs) can be used to fool facial recognition security systems or subvert voice biometric systems.
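
The evasion principle behind such attacks is easy to demonstrate. The toy sketch below (a gradient-based evasion step rather than a full GAN, but the same underlying idea) shows how a bounded perturbation flips a classifier's verdict; the logistic-regression weights and input are assumed for the example.

```python
# Toy evasion attack: a bounded perturbation flips a detector's verdict.
# Weights of the "trained" logistic classifier are assumed for the sketch.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([3.0, -2.5, 1.5])   # assumed model weights
b = -0.3

x = np.array([0.8, 0.3, 0.6])    # input the model flags as malicious
p = sigmoid(w @ x + b)
print(f"original score:    {p:.2f}")   # ~0.90 -> flagged

# FGSM-style step: move each feature along the sign of the gradient of the
# loss toward target label 0, so the score drops below the 0.5 threshold.
grad_x = p * w                   # d(loss toward label 0)/dx for a logistic model
x_adv = x - 0.4 * np.sign(grad_x)
print(f"adversarial score: {sigmoid(w @ x_adv + b):.2f}")  # ~0.37 -> evades
```

Defenses such as adversarial training exist, but they reinforce the point: the AI itself becomes another asset that must be defended.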

As we give AI more responsibility, particularly in predicting and acting against perceived threats, a number of privacy and ethical considerations come to the fore. These will only become more pressing as AI advances, and they provide another solid argument against AI becoming the sole controller of the SOC.

Building ethical AI

Despite all its advancements, AI is still beholden to the age-old “garbage in, garbage out” IT mantra. To maximize accuracy, AI systems require huge volumes of high-quality training data, but gathering that data raises the potential for ethical lapses and intrusions of privacy. How much would you let an AI know about you? Would you sacrifice privacy for security?

There are also regulatory considerations. Data held by financial services firms or medical sciences organizations is under heavier regulatory scrutiny than data in other industries, which limits what can be fed into AI training. Should these organizations, therefore, accept less effective AI and weaker cybersecurity, when the sensitive data they hold is the most tempting target for attackers?

If SOCs are to gain the trust of the customers they're hired to protect, they need to be 100% transparent about how much and what kinds of data they're feeding to their AI programs, and vigilant in ensuring those lines are not overstepped.

The human element remains

While AI will likely transform the SOC over the next five to ten years, security professionals need not worry about being out of a job. In fact, the future success of AI will, perhaps ironically, rely heavily on the human element.

The old game of cat-and-mouse may come to an end, but security professionals will have a new purpose: ensuring their most powerful weapon is being used judiciously and, most importantly, ethically.

Contact Capgemini today to find out how our network of global Cyber Defense Centers can help your organization embrace this new generation of cybersecurity.