The recent Capgemini Research Institute report, Reinventing cybersecurity with artificial intelligence, found companies are facing ever-increasing threats and attacks. Forty-two percent of executives surveyed reported an increase in incidents through time-sensitive applications and 43% noted an increase in machine-speed attacks.
Nearly one in four executives surveyed believe they will not be able to successfully investigate all identified incidents. That is concerning, because cyber-analysts need to track the anomalies, incidents, and breaches that affect an organization’s brand as well as its business operations.
The promises of AI and machine learning make them easy to see as a quick fix for this escalating threat landscape. In practice, though, companies trying to solve cybersecurity issues on their own tend to be flooded with information and overloaded with alerts, and AI alone will not solve that problem. Companies are spending money on technologies to defend and protect when they should instead focus on foundational capabilities with a ground-up approach. Basic security policies are a start, but they must be accompanied by adherence, operationalization, and verification that those policies are effective.
Before moving too quickly on the AI front, companies need an effective enterprise security framework that includes capabilities, talent, and resources with the appropriate spending levels.
Where AI and machine learning fit
Once a company has built a solid cybersecurity foundation, AI and machine learning can complement those strengths. But, as with any new technology, appropriate planning and preparation are required for success.
A roadmap to bring AI into your cybersecurity implementation includes:
- Creating data platforms to operationalize AI
- Training cyber-analysts to be AI-ready
- Selecting the right set of use cases to provide solid proofs of concept
- Collaborating to increase awareness of the various threats
- Establishing security orchestration, automation, and response (SOAR)
- Instituting governance for AI algorithms
Creating a strong governance model will ensure AI is supporting cybersecurity issues. This includes developing a control process to monitor if an AI algorithm is behaving abnormally and identifying risk tolerance for the output generated by AI algorithms.
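One piece of such a control process can be as simple as comparing a model's recent output against its historical baseline and escalating to a human analyst when it drifts. The sketch below illustrates the idea with a basic z-score check on daily alert volume; all names, numbers, and thresholds are illustrative assumptions, not taken from the Capgemini report.

```python
# Minimal sketch of a governance control: flag a security model whose
# daily alert volume drifts abnormally far from its historical baseline.
# Function names and thresholds are hypothetical, for illustration only.
from statistics import mean, stdev

def is_abnormal(history, todays_alerts, z_threshold=3.0):
    """Return True if today's alert volume deviates more than
    z_threshold standard deviations from the historical mean."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return todays_alerts != mu
    return abs(todays_alerts - mu) / sigma > z_threshold

# Baseline of roughly 100 alerts/day; 400 today is well beyond 3 sigma,
# so the control flags the model's output for human review.
baseline = [95, 102, 98, 110, 99, 105, 101]
print(is_abnormal(baseline, 400))  # True
print(is_abnormal(baseline, 103))  # False
```

In a real deployment this check would sit alongside the risk-tolerance levels the governance model defines, so that a flagged deviation triggers review rather than silent automated action.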
The Capgemini report showed companies are ramping up AI investments to support cybersecurity, but AI is only one tool in a larger cybersecurity toolkit. In most cases, companies need to make changes in their current infrastructure, data systems, and application landscapes to implement AI at scale. New technologies are not going to replace cyber-analysts, a solid IT foundation, and good governance policies.
The enthusiastic move to AI needs to be tempered to ensure the technology is solving the right problems, rather than creating potential new openings for hackers.
Drew Morefield is head of the North American cybersecurity practice at Capgemini; you can reach him at firstname.lastname@example.org.