
How to optimize DevOps with the help of containers

Capgemini
April 6, 2020

As we all know, in the world of DevOps there are five different topologies. While the topologies overlap slightly, they all promise a similar outcome: neutralize the complexity of operations and externalize the “commodity,” allowing IT to focus on the high-value parts of the business.

  • Dev & Ops collaboration comprises well-defined rules, processes, and toolsets. It requires a change in both mindset and processes, which makes it difficult to implement.
  • DevOps evangelist team is where a team works to spread awareness of DevOps practices across the organization. This requires a lot of time, effort, and commitment from internal stakeholders.
  • DevOps team with expiry date is an approach where a group of people lead the change with a specific deadline, at the end of which the operating model becomes either a Dev & Ops collaboration or a fully shared Ops responsibility.
  • Fully shared Ops responsibility is implemented in digital companies, such as Netflix and Facebook, which have a single web-based offering. It is very similar to “No-Ops”: there is no separate operations function, and organizations adopt a sourcing model in which providers with the right capabilities manage operations for the line of business.
  • Container-driven collaboration is the approach I recommend for us at CIS, where containers allow One-Ops (“one operations”), i.e. a single operating model for different types of infrastructure, be it public cloud (from any provider) or private cloud (co-located, on premises, or outsourced). In this case, Dev and Ops can keep their roles and responsibilities, but with a specific point of interaction in the management of the build lifecycle.

An efficient DevOps operating model should have the same process to build, test, release, deploy, operate, and monitor all the different types of workloads. However, the issue with this kind of integrated process is the friction between release and deployment, where the roles and responsibilities of Dev and Ops are mixed and the time-to-production can increase dramatically if not managed correctly. This is where I believe containers can create a paradigm shift, thanks to the following capabilities:

  • Run any kind of workload: in principle, it is possible to run any x86 workload inside a container, including traditional, cloud-native, cloud-aware, and cloud-hosting-ready apps.
  • Blueprint-as-a-Service: developers can immediately create a new blueprint for their development pipelines and store Docker images to be reused in the future (a minimal build-test-push sketch follows this list).
  • Consistent environment for software development, testing, and deployment: this translates into fewer variables and fewer potential attack vectors, as well as easier communication between security experts and the rest of the team.
  • More control over software distribution: when users install software using containers, it generally comes from container registries, most of which provide access-control and binary-signature features that can mitigate the risk of malicious code being pushed to unsuspecting users.
  • Isolate applications: running an application inside a container doesn’t guarantee that an attack against that application won’t escalate into an attack against other targets on the system, but containers certainly make escalation harder and intrusions easier to detect.
  • Limit attack surfaces: when applications run as containers, it is possible to minimize attack surfaces by disabling unnecessary services like SSH and limiting their exposure to public-facing networks. This makes it easier for developers, admins, security experts, and everyone involved to design and run applications in ways that minimize potential security vulnerabilities.
  • Faster software updates: when an attack occurs, a patch can quickly roll down the delivery chain and go into production.
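
To make this concrete, here is a minimal sketch of the build-test-push flow described above, using the Docker SDK for Python (docker-py). The image name, registry URL, and test command are hypothetical placeholders, and a real pipeline would add error handling, image scanning, and signing on top of it.

```python
import docker

REGISTRY = "registry.example.com/team"   # hypothetical private registry
APP = "myapp"                            # hypothetical application name
VERSION = "1.0.0"

client = docker.from_env()

# Build the image (the "blueprint") from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag=f"{APP}:{VERSION}")

# Run the test suite inside the freshly built image; the container is removed when done.
# The test command is an assumption -- substitute whatever your pipeline uses.
test_output = client.containers.run(f"{APP}:{VERSION}", command="pytest -q", remove=True)
print(test_output.decode())

# Tag and push the very artifact that was just tested, so the image promoted to the
# registry is byte-for-byte identical to the one validated by the pipeline.
image.tag(f"{REGISTRY}/{APP}", tag=VERSION)
for line in client.images.push(f"{REGISTRY}/{APP}", tag=VERSION, stream=True, decode=True):
    print(line)
```

Because the same image moves unchanged from test to release, the friction between Dev’s release step and Ops’ deployment step is reduced to agreeing on an image tag.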

Container-as-a-Service thus helps organizations achieve several benefits such as:

  • Increase the portability of applications across different environments, enabling organizations to move applications in bulk from one cloud provider to another (see the sketch after this list).
  • Migrate applications as is, or with little re-engineering, paving the way for a quick win in the short term while helping evolve the application into a microservices architecture in the mid/long term.
  • Increase the scale-out of applications at the finest possible granularity and at reduced cost.
  • Increase “dynamic” scale-out or resource allocation when Docker is implemented in combination with composable infrastructure technologies that can be programmed directly at the CPU or storage-unit level.
  • Reduce the overall TCO by minimizing the number of virtual machine instances, or rather by increasing the “application intensity” within each VM.
  • Improve security and application version control.
  • Enhance the DevOps model through tight cooperation between the Development and IT Operations teams. This is critical because, even though most organizations already have some collaboration between developers and operations, the way a DevOps model is implemented determines its impact on processes, people, and technology.
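
As a companion sketch for the portability point above, the snippet below pulls the same image onto two different Docker endpoints (say, an on-premises host and a cloud VM) and runs it with an identical, locked-down configuration. The endpoint URLs, image name, and port mapping are assumptions for illustration, and TLS configuration is omitted for brevity.

```python
import docker

# Hypothetical image published to the shared registry by the pipeline sketched earlier.
REPOSITORY = "registry.example.com/team/myapp"
VERSION = "1.0.0"

# Hypothetical endpoints: one on-premises host and one cloud VM, both exposing the Docker API.
ENDPOINTS = [
    "tcp://onprem-docker.internal:2376",
    "tcp://cloud-vm.example.com:2376",
]

for endpoint in ENDPOINTS:
    client = docker.DockerClient(base_url=endpoint)

    # The same image is pulled from the shared registry on every target.
    client.images.pull(REPOSITORY, tag=VERSION)

    # Identical runtime settings on every infrastructure: read-only filesystem,
    # all Linux capabilities dropped, fixed port mapping, automatic restart.
    client.containers.run(
        f"{REPOSITORY}:{VERSION}",
        name="myapp",
        detach=True,
        read_only=True,
        cap_drop=["ALL"],
        ports={"8080/tcp": 8080},
        restart_policy={"Name": "always"},
    )
```

Because the runtime settings live in code rather than in per-environment runbooks, moving the workload from one provider to another becomes a change of endpoint, not a change of process, which is the One-Ops idea described earlier. The locked-down options also illustrate the “limit attack surfaces” point from the previous list.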

If you’d like to know more on the subject, contact Fausto Pasqualetti.

You can read his other thought leadership blogs below: