Click here to take our online assessment. It’s free, anonymous, and allows you to benchmark your maturity against other organisations and industries.
Given the relative maturity of other developer automation tools such as Jenkins, Puppet and Ansible, isn’t it frustrating when your automated code review tool produces an unmanageable number of false-positive alerts? Why can’t it be as useful as the other tools in your arsenal?
False positives are not necessarily a bad thing
Of course, a high volume of false positives slows down product releases, and eventually developers lose faith in their tooling. However, we believe organisations should take a less ‘black and white’ view of the alerts their security tools generate. Instead of triaging the false positives one by one, ask what key security themes they point to.
The outputs of these tools need to be contextualised to the software product’s architecture and build environments. Automated Dynamic and Static Application Security Testing (DAST and SAST) tools do not replace manual code reviews, where human-added context can start to make sense of the alerts.
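To make this concrete, here is a minimal sketch of why automated pattern matching needs human context. The rule and both snippets are hypothetical, not drawn from any particular SAST product: a naive taint rule flags any SQL string built by concatenation or interpolation, so it fires on genuinely vulnerable code and on code where the interpolated value comes from a fixed allowlist and cannot be attacker-controlled.

```python
import re

# Hypothetical SAST-style rule: flag SQL strings built with
# concatenation ("+") or f-string interpolation ("{").
RULE = re.compile(r"(SELECT|INSERT|UPDATE|DELETE).*(\+|\{)", re.IGNORECASE)

# Genuinely vulnerable: user_input flows straight into the query.
vulnerable = "query = \"SELECT * FROM users WHERE name = '\" + user_input + \"'\""

# False positive: the interpolated value is chosen from a fixed allowlist,
# so an attacker cannot influence it -- but the rule cannot know that.
safe = 'query = f"SELECT {column} FROM users"  # column in {"id", "name"}'

# The rule fires on both snippets; only a human reviewer who knows the
# surrounding code can tell which alert matters.
for snippet in (vulnerable, safe):
    if RULE.search(snippet):
        print("ALERT:", snippet)
```

Both snippets trip the same rule, which is exactly the gap that manual review closes: the reviewer dismisses the second alert in seconds once they see the allowlist.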
Embed security themes into your sprints
Regardless of whether your DAST and SAST tools are producing true-positive or false-positive alerts, an element of manual review should be undertaken during every sprint to ensure your code is released with an acceptable level of security confidence.
Our experience has taught us that a common-sense approach, built on continuous engagement between developers and security specialists, produces the best results:
For each sprint, the code review stage has an emphasis on one aspect of security, a ‘security flavour of the sprint’ if you will. For example, the security specialist could pay close attention to preventing injection attacks in one sprint, and would therefore ensure input-validation best practice has been embedded within the code. Making manual code reviews part of your ‘Definition of Done’ helps codify this level of review into your governance process.
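As an illustration of the kind of input-handling practice a reviewer might look for during an injection-themed sprint (a generic sketch using Python’s standard sqlite3 module, not tied to any specific stack), compare a concatenated query with a parameterised one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# A classic injection payload supplied as "user input".
malicious = "' OR '1'='1"

# Vulnerable: string concatenation lets the payload rewrite the query,
# so every row is returned instead of none.
unsafe = conn.execute(
    "SELECT name FROM users WHERE role = '" + malicious + "'"
).fetchall()

# Safe: a parameterised query treats the payload as plain data.
safe = conn.execute(
    "SELECT name FROM users WHERE role = ?", (malicious,)
).fetchall()

print(unsafe)  # both rows leak
print(safe)    # empty: no role literally equals the payload
```

A reviewer focusing on this theme would insist on the parameterised form (and on validating input at the boundary), and a SAST alert on the concatenated form would be a true positive worth fixing immediately.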
This type of continuous engagement throughout the lifecycle of the software product not only provides confidence that every aspect of security has been covered during code review; it also deepens developers’ knowledge of security and, ideally, fosters a good relationship with their security colleagues.
Use Case: A risk-based approach to tackle vulnerabilities
Vulnerability-free software is always the target, but, as we often see, it is rarely attainable. A risk-driven approach instead allows you to focus on areas that are within your control, rather than chasing an impossible goal.
Let’s consider the UK Parliament’s Petition website as an example, which in recent days has suffered frequent outages due to abnormally high volumes of traffic. These outages weren’t caused by cyber-attacks; however, the Government must consider the risk posed by Distributed Denial of Service (DDoS) attacks against this high-availability service as unacceptable, and it is within their control to do something about it. For example, they may decide to deploy enhanced Web Application Firewalls (WAFs) to mitigate DDoS and other web-enabled attacks. An enhanced WAF may be the right solution here, but elsewhere in Government the priority may be preventing the loss of data, shifting the focus toward Data Loss Prevention (DLP) tooling.
Combine manual and automated code review to reduce risk
We suspect organisations are not reaping the benefits of their automated software development tooling because they are missing the human element. The security team has its role to play in ensuring everyone can make sense of the outputs from their automated tools, and that security features and controls are embedded in the areas that carry the most risk.
We are growing our team
Capgemini Invent are seeking like-minded cybersecurity professionals to join our team – follow this link to apply now