CTO Blog


TechnoVision 2015 - Build, Release, Run, Repeat

Invisible Infostructure #3 - Build, Release, Run, Repeat

Enterprises are raving about DevOps: the agile, perfect fusion between IT Development and IT Operations folks. Using one unified, highly automated ‘train’ of tools, DevOps teams develop applications, test, integrate and package them for deployment, and make them live in a continuous, uninterrupted flow, if needed many times a day. It requires a thorough understanding of what components constitute a state-of-the-art DevOps platform. It also needs true mastery of the agile approach, starting at the business level. Arguably, it eats away the barriers within the solution life cycle, bringing experts together in high-productivity teams that ‘never sleep’.

The speed of application development, and notably of application change, is increasing, particularly in ‘3rd platform’ areas around cloud, mobile, social and real-time data. The typical Car and Scooter dynamics require going through the entire solution lifecycle in days, hours or even minutes. At the same time, rock-solid solution quality remains paramount: our very business performance depends on Digital, and we cannot afford mistakes simply because we are in a hurry.

So here’s the conundrum: we want it ultra-fast and of the highest quality, whilst staying totally in control.
A new, exciting approach addresses this apparent oxymoron: DevOps. A portmanteau of ‘Development’ and ‘Operations’, it is a concept that connects developers, quality assurance and infrastructure operations in such a way that the entire “build, release, run, repeat” process operates as a continuously producing factory. It features clear roles, responsibilities, inputs and outputs, and as such it requires a mature, established governance to get there.
The main aim of DevOps (here's our definitional white paper) is to revolutionize the change process, de-risk IT deployments, banish the stereotypical "but it worked on my system", and eliminate the silos between developers, testers, release managers and system operators.

The tools and products being developed in this space all focus on maximizing predictability, visibility and flexibility, whilst keeping an eye on stability and integrity. With the advent of open source and Virtual Lego, a DevOps team can simply construct any environment it needs. It’s an area that many tend to focus on first: creating a train of specialized tools that allows for almost completely automatic execution of the solution lifecycle, all the way from change requests via versioning, development, integration, testing, configuration and packaging to deployment on the live run environment. Examples of these 'tool train' components include Docker, Puppet and Chef, as well as newer entrants like VMware’s vRealize Code Stream.
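
To make the ‘tool train’ idea a little more tangible, here is a minimal sketch, in Python and purely illustrative, of one automated stage sequence: package, test and publish the same artefact in a single uninterrupted run. The image name and the pytest test command are assumptions, and in practice a CI server would orchestrate these steps rather than a hand-rolled script.

    # toolchain_sketch.py - illustrative only: build, test and publish one service.
    # Assumes the Docker CLI is installed and the image's test suite runs with pytest.
    import subprocess
    import sys

    IMAGE = "registry.example.com/webshop:1.4.2"   # hypothetical image name

    def step(name, *cmd):
        print(f"--- {name}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"{name} failed - the train stops here")

    step("build",   "docker", "build", "-t", IMAGE, ".")       # package code plus configuration
    step("test",    "docker", "run", "--rm", IMAGE, "pytest")  # test exactly what was built
    step("release", "docker", "push", IMAGE)                   # publish the approved artefact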

Let’s take an example. Imagine you're a developer creating code for SuSE Linux, building a 3rd-platform application in a cloud-based development environment. To test your application, you need to move the code plus configuration information to a separate unit-test environment; once tested, the application needs to be installed in a user acceptance-test environment. Once users have okayed the app, it requires a last, live-like performance and security test. All this before it can be deployed to the live environment, which sits in an on-premise, private cloud.
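
Here is a hedged sketch of that promotion chain, again in Python and purely illustrative: the stage names, artefact name and deploy step are assumptions, and a real pipeline would call the provisioning platform’s own API rather than printing. The point is that the identical, versioned artefact moves through every environment, and the chain halts on the first failure.

    # promotion_sketch.py - illustrative only: move the same build through each stage.
    ARTEFACT = "webshop-1.4.2.tar.gz"   # hypothetical, versioned build output
    STAGES = ["unit-test", "acceptance-test", "performance-and-security-test", "production"]

    def deploy(stage: str, artefact: str) -> bool:
        # Placeholder: a real implementation would create the target environment from
        # a template and install the artefact; here we only simulate a successful run.
        print(f"deploying {artefact} to {stage}")
        return True

    for stage in STAGES:
        if not deploy(stage, ARTEFACT):
            raise SystemExit(f"promotion stopped at {stage}")
    print("the artefact that goes live is the artefact that was tested")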

Before the era of DevOps you would have requested each and every move and construction via an Environment Manager, using an internal (and maybe only PC-supported) proprietary change management system. That alone took days and sometimes weeks, and the subtle differences between the approaches of the various “sysadmins” involved would result in a divergent nightmare of test and target platforms.
Fast forward: in a DevOps team, you use the expressway. You work with the same UI and tools as all your colleagues in a tight, multi-disciplinary team; you can create, deploy and destroy an environment using standard templates and blueprints, making it far easier to narrow fault analysis down to your own code. You can kick off full, pre-defined install sequences, eradicating the need to install anything manually. And, what is even better, it supports any target platform, whether Unix, Linux, Windows, Mac or even mainframe, installed on- or off-premise, virtualized or not.
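
As a minimal sketch of that ‘create, use, destroy’ cycle, assume the blueprint is a hypothetical docker-compose file called blueprint-uat.yml and that the acceptance checks run with pytest; blueprint-driven tools such as vRealize, Puppet or Chef would play the same role for non-containerized or on-premise targets.

    # environment_sketch.py - illustrative only: spin an environment up from a
    # standard blueprint, run checks against it, and tear it down again.
    import subprocess

    BLUEPRINT = "blueprint-uat.yml"   # hypothetical compose file describing the full stack

    def compose(*args):
        subprocess.run(["docker", "compose", "-f", BLUEPRINT, *args], check=True)

    compose("up", "-d")                                         # create the environment
    try:
        subprocess.run(["pytest", "acceptance/"], check=True)   # exercise it
    finally:
        compose("down", "-v")                                   # destroy it, leaving nothing behind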

You’re not cutting corners here; you simply benefit from the highest degree of automation and standardization to repeat the entire solution lifecycle over and over and over again at supersonic speed.

It requires mastering agility at all stages, and it needs perfectly aligned teams with committed specialists from all crucial disciplines: developers, testers and operations. Now would be a good time to get them acquainted. And it probably makes sense to start exploring the new approach in the most suitable areas first: around mobile and cloud-based applications rather than straight into the critical, core application space.
Once in flow, an optimally tuned DevOps team can set a shining example to the rest of the enterprise.

Build, Release, Run, Repeat. All before lunch.

What if the business could do that too?

Your expert: Gunnar Menzel  

Part of Capgemini's TechnoVision 2015 update series. See the overview here.

About the author

Ron Tolido
2 Comments
I would challenge Docker being positioned as a key tool for automation & industrialisation. I believe Docker is a different paradigm for virtualisation (the technology has been around for some time, but the open-source Docker now seems to be getting traction in the industry). Docker will therefore become more of a challenger to hypervisor-based IaaS solutions - but both Docker & IaaS can be automated to support DevOps. Also, don't forget Microsoft Service Management Automation & Desired State Configuration for performing similar capabilities using Microsoft technologies.
I think it's great that the industry is focusing heavily on DevOps practices, but I feel a lot of the focus is going into tooling instead of the practice itself (the collaboration). Yes, Docker is the new kid on the block, but it's naive to think "hey everyone, we all need to start using this new tool called Docker... containerization is the answer!"... ummm, no! There are several IaaS solutions out there, but probably only one or two will be suitable for meeting your organisation's requirements. Things to consider are: 1) what is an environment and what does it need to consist of? If you need to deploy a large test environment at the touch of a button, I hardly think Docker is going to be your first choice. 2) Do I need load-balancing capabilities? A database? Networking? What about physical location? Meaning the use of AWS in the cloud might not be your ideal choice (if you need that environment to talk to an external 3rd party), and so on. As I say, different tools for different technological scenarios and requirements. The main aspect of DevOps (I feel, at least) is team collaboration, focus and dedication to continuously improving the development and deployment process. Something which doesn't happen overnight when you've been working in a typical Waterfall or semi-Agile release process.
