With the increasing use of IT comes increased reliance on it, and therefore greater disruption when issues occur.
The ubiquity of end-user applications on phones, watches, or glasses, let alone on traditional devices such as laptops or desktops, means that, when there is an issue, business processes can suddenly stop. Supply chains can grind to a halt, sales funnels can go unmanaged, invoices can go unissued or unpaid, and the impact can be huge.
In addition, increased reliance on millions of devices in IoT networks, or thousands of RPA bots across an enterprise, means that the scale of a simple defect can quickly lead to widespread disruption that can negatively impact a company’s reputation and financial standing.
Stability is a powerful lever for minimizing disruption in IT systems. If we improve the stability of the underlying applications by minimizing defects, then disruption is reduced. Change introduces instability, so traditionally it was minimized or heavily controlled, deliberately slowing things down.
But digital transformation has turned everything on its head. Change must now be enabled at great pace, whether to respond quickly to disruption in markets caused by competitors or to bring to market the very ideas that caused the disruption in the first place. At the same time, with ubiquitous IT the expectation is that “it will just work,” right? Updates or changes on my smart device do not impact my consumer experience, so why should they impact my enterprise experience on the devices I use for work?
As an expectation, this is understandable. But it means that the culture of business IT, and of the providers that support it, must change from reactive with a fast speed of response, to proactive with a zero-defect mentality.
So, what exactly does this mean?
Well, recent analysis by firms including ISG, Gartner, and Forrester recognizes the importance of the next generation of Applications Development and Maintenance (ADM) services, which can vertically integrate traditional applications and infrastructure towers into a business value chain “tower.”
Better alignment with customer expectations, whether business or end-user, drives the right behavior from a number of perspectives:
Monitoring for pre-action, not reaction
Whether it be the performance of the business process or the experience of the end user, as described in my previous blog, the strategy now is to monitor for potential issues and act on them before they become visible to the user or the business.
Software intelligence tools, such as Dynatrace, AppDynamics, or the monitoring capabilities of Capgemini’s Automation Drive Suite, can watch user journeys or business processes from the screen right back through the application stack to the cloud and the server, monitoring each component in minute detail.
With this level of detail, incidents can be addressed before disruption occurs. The insight gained also supports proactive root cause analysis, helping to prevent incidents from recurring and providing a solid platform on which to build self-healing code, applications, and business processes.
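To make the “pre-action, not reaction” idea concrete, here is a minimal, purely illustrative sketch of the underlying pattern: watch a rolling average of user-journey latency and raise a warning while it is still trending toward the service-level threshold, rather than waiting for a breach. The 2.0-second SLA, the 0.8 warning ratio, and the class name are hypothetical values, not taken from any particular monitoring product.

```python
from collections import deque

class LatencyMonitor:
    """Illustrative rolling-window monitor that warns before an SLA breach."""

    def __init__(self, sla_seconds=2.0, warn_ratio=0.8, window=5):
        self.sla = sla_seconds
        self.warn_at = sla_seconds * warn_ratio  # warn before the SLA is hit
        self.samples = deque(maxlen=window)      # rolling window of latencies

    def record(self, latency_seconds):
        """Record one latency sample; return 'ok', 'warn', or 'breach'."""
        self.samples.append(latency_seconds)
        avg = sum(self.samples) / len(self.samples)
        if avg >= self.sla:
            return "breach"  # users are already impacted
        if avg >= self.warn_at:
            return "warn"    # trending toward disruption: act now
        return "ok"

monitor = LatencyMonitor()
print(monitor.record(0.9))  # ok
print(monitor.record(1.7))  # ok (rolling average 1.3s)
print(monitor.record(2.2))  # warn (rolling average 1.6s, SLA not yet breached)
```

Real software intelligence platforms do this across every tier of the stack at once, but the principle is the same: the warning fires while there is still time to intervene.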
The final angle is typified by SAP’s recent acquisition of Qualtrics: experience management. Qualtrics enables the analysis of customer feedback from multiple perspectives, so that once the application landscape is stable, the full experience can be monitored and enhanced on an ongoing basis.
While software intelligence and experience management tools can be used to ensure that disruption is minimized in a static landscape, ongoing change is still required in order to execute on a digital transformation agenda.
New delivery methods need to be adopted to increase rigor without slowing down the software delivery lifecycle. DevOps is the hot topic here, and much has been written on the subject. The key to delivering DevOps successfully is a clear vision of the outcomes; if zero defects is one of them, then integrated tooling has to be a key capability.
The market for tools is large but, to my mind, two areas really support the zero-defect focus. First, “containerization” helps to manage landscape consistency: whether in enterprise applications such as SAP’s Correction & Transport Solution or Docker for open-source platforms, these tools ensure that landscapes stay aligned. Second, as the overall guardian of the landscape, integrated and automated testing tools (many are available) ensure that any defects that do sneak through are stopped before they cause disruption.
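The “guardian of the landscape” role can be sketched in a few lines: a pre-deployment gate that runs every automated check and refuses to promote a release candidate if any check fails. This is an illustrative pattern only; the check functions below are hypothetical stand-ins for real smoke tests, not part of any named product.

```python
def check_login_journey():
    # Hypothetical smoke test: drive the login flow end to end.
    return True

def check_invoice_issued():
    # Hypothetical smoke test: post a test invoice and verify it appears.
    return True

def gate(checks):
    """Run every check; block promotion if any defect sneaks through."""
    failures = [check.__name__ for check in checks if not check()]
    if failures:
        return ("blocked", failures)  # stopped before disruption reaches users
    return ("promoted", [])

status, failed = gate([check_login_journey, check_invoice_issued])
print(status)  # promoted
```

In a real pipeline the same gate would sit inside the CI/CD tooling, running against a containerized environment so that what is tested is exactly what is deployed.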
Don’t stand still
Finally, the trend toward ever more ubiquitous IT will bring more complexity, which means a greater risk of disruption and a greater impact when it happens. Continuous improvement must therefore be part of the IT DNA: not just incremental improvements, but innovative leaps where possible that drive positive disruption themselves.
So, if yin and yang describe seemingly opposite forces, which may actually be complementary or interconnected, then change and disruption clearly are. The challenge though, for the IT market as a whole, is how to increase change, and the positive aspects of disruption, without suffering from the negative aspects. Next-generation ADM capabilities are clearly an enabler for this and Capgemini’s ADMnext, underpinned by our Automation Drive Suite, is already being seen as a leader in the market.
To learn more, please feel free to get in touch with me on social media.