Invisible Infostructure #3 – Build, Release, Run, Repeat
Enterprises are raving about DevOps: the agile fusion of IT Development and IT Operations. Using one unified, highly automated ‘train’ of tools, DevOps teams develop, test, integrate and package applications for deployment, and take them live in a continuous, uninterrupted flow, if needed many times a day. It requires a thorough understanding of the components that constitute a state-of-the-art DevOps platform, and true mastery of the agile approach, starting at the business level. Arguably, it dissolves the barriers within the solution life cycle, bringing experts together in high-productivity teams that ‘never sleep’.
The speed of application development, and notably of application change, is increasing, particularly in ‘3rd platform’ areas around cloud, mobile, social, and real-time data. The typical Car and Scooter dynamics require going through the entire solution life cycle in days, hours or even minutes. At the same time, rock-solid solution quality is paramount: our very business performance depends on Digital, and we cannot afford mistakes because we are in a hurry.
So here’s the conundrum: we want it ultra-fast and with the highest quality whilst being totally in control.
A new, exciting approach addresses this apparent oxymoron: DevOps. A portmanteau of ‘Development’ and ‘Operations’, it connects developers, quality assurance and infrastructure operations in such a way that the entire “build, release, run, repeat” process operates as a continuously producing factory. It features clear roles, responsibilities, inputs and outputs, and as such it requires a mature, established governance to get there.
The main aim of DevOps (here’s our definitional white paper) is to revolutionize the change process, de-risk IT deployments, banish the stereotypical “but it worked on my system”, and eliminate the silos between developers, testers, release managers and system operators.
The tools and products being developed in this space all focus on maximizing predictability, visibility and flexibility, whilst keeping an eye on stability and integrity. With the advent of open source and Virtual Lego, a DevOps team can simply construct any environment it needs. It’s an area that many tend to focus on first: creating a train of specialized tools that allows for an almost completely automatic execution of the solution life cycle, all the way from change requests via versioning, development, integration, testing, configuration and packaging to deployment on the live run environment. Examples of these ‘tool train’ components include Docker, Puppet and Chef, as well as newer entrants like VMware’s vRealize Code Stream.
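To make the idea of such a component concrete, here is a minimal sketch of the declarative-environment pattern that tools like Docker popularized: the environment is written down once, so every team member builds the identical stack. The image name, package list and file names are hypothetical placeholders, not a recommendation.

```shell
#!/bin/sh
# Illustrative sketch: write a declarative environment definition
# (a Dockerfile) so dev, test and ops all build the same environment.
# Image name and package list are hypothetical placeholders.
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
# Pin the toolchain so every environment in the pipeline is identical
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential git && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY . /app
CMD ["./run-tests.sh"]
EOF
# With Docker installed, the identical environment is then one command away:
#   docker build -t myapp-ci . && docker run --rm myapp-ci
echo "Environment definition written"
```

The point is not the specific tool but the shift: the environment becomes versioned text that the whole ‘tool train’ can rebuild on demand, instead of a hand-configured machine.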
Let’s take an example. Imagine you’re a developer building a 3rd-platform application for SuSE Linux in a cloud-based development environment. To test your application, you need to move the code plus configuration information to a separate unit-test environment; once tested, the application needs to be installed in a user acceptance-test environment. Once users have okayed the app, it requires a last, live-like performance and security test. All this before it can be deployed on a live environment, which sits in an on-premise, private cloud.
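The promotion path just described can be sketched as a staged script: each stage must pass before the build moves on, and any failure stops the pipeline. This is a conceptual illustration, not any specific product; the stage bodies are hypothetical stand-ins for real tooling.

```shell
#!/bin/sh
# Conceptual sketch of the staged promotion path from the example above.
# Each function is a hypothetical stand-in for a real test/deploy tool.
set -e   # abort on the first failing stage

unit_test()        { echo "unit tests: code + config in isolated env"; }
acceptance_test()  { echo "user acceptance tests: functional sign-off"; }
perf_sec_test()    { echo "performance and security tests: live-like env"; }
deploy()           { echo "deploy: on-premise private cloud"; }

for stage in unit_test acceptance_test perf_sec_test deploy; do
    echo "--- stage: $stage"
    "$stage"
done
echo "pipeline complete"
```

Because every stage runs the same way every time, a red build points at the code change itself rather than at some environmental drift.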
Before the era of DevOps, you would have requested each and every move and construction via an Environment Manager, using an internal (and maybe only PC-supported) proprietary change management system. That alone would take days and sometimes weeks, let alone the issues arising from subtle differences in approach between the various “sysadmins” involved, resulting in a divergent nightmare of test and target platforms.
Fast forward: in a DevOps team, you use the expressway. You work with the same UI and tools as all your colleagues in a tight, multi-disciplinary team; you have the ability to create, deploy and destroy an environment using standard templates and blueprints, increasing the ability to narrow fault analysis down to your code only. You can kick off full, pre-defined install sequences, eradicating the need to manually install anything. And what is even better: it supports any target platform, whether Unix, Linux, Windows, Mac or even mainframe, installed on or off-premise, virtualized or not.
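The create-deploy-destroy cycle from a standard blueprint can be sketched as follows. All directory and file names here are hypothetical; in practice the blueprint would be a versioned image or infrastructure-as-code definition rather than a plain directory.

```shell
#!/bin/sh
# Sketch of create/deploy/destroy from a standard template (“blueprint”).
# Names are hypothetical; a real blueprint would be a versioned image or
# infrastructure-as-code definition.
set -e

BLUEPRINT=$(mktemp -d)          # stand-in for a shared, versioned blueprint
echo "port=8080" > "$BLUEPRINT/config"

create_env()  { ENV_DIR=$(mktemp -d); cp -r "$BLUEPRINT/." "$ENV_DIR"; }
deploy()      { echo "deploying into $ENV_DIR with $(cat "$ENV_DIR/config")"; }
destroy_env() { rm -rf "$ENV_DIR"; }

create_env     # identical environment every time, from the same template
deploy
destroy_env    # throw it away; nothing to clean up by hand
echo "environment lifecycle complete"
```

Because environments are disposable and always built from the same template, “works on my system” stops being a meaningful category: there is only one system definition.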
You’re not cutting corners here; you simply benefit from the highest degree of automation and standardization to repeat the entire solution life cycle over and over and over again at supersonic speed.
It requires mastering agility at all stages, and it needs perfectly aligned teams of committed specialists from all crucial disciplines: developers, testers and operations. Now would be a good time to get them acquainted. And it probably makes sense to explore the new approach in the most suitable areas first: mobile and cloud-based applications rather than the critical, core applications space.
Once in flow, an optimally tuned DevOps team can set a shining example to the rest of the enterprise.
Build, Release, Run, Repeat. All before lunch.
What if the business could do that too?
Your expert: Gunnar Menzel
Part of Capgemini’s TechnoVision 2015 update series. See the overview here.