Pressures on IT service management within utilities companies are mounting. Advanced infrastructure is now needed to support the rising levels of agility and bandwidth demanded by new technologies, business models, and working methods. As such, teams are increasingly turning to cloud adoption to keep up. But how do they stay on the front foot with their cloud adoption, and what are the pitfalls to watch out for?
Over the years, IT service management has developed a rich body of books, tools, and models that work across a range of estates and structures – all built on known quantities and versions of equipment, software, and estate assets. In running these services, an understanding of the architecture's strengths and weaknesses develops over time, prompting a series of projects and upgrades to address limitations as and when they arise.
Traditionally, such limitations have centred around configuration, capacity, and change management – the areas that, when they go wrong, cause the major outages and service interruptions that teams dread most. When these incidents do occur, the response involves a sometimes lengthy and often resource-heavy sequence of reviewing known changes, rolling back, adjusting, and tuning capacity, to name just a few of the possible steps.
Now, with the positives of cloud and “as-a-Service” technologies, utilities companies can breathe something of a sigh of relief as the troubleshooting process promises to be reduced dramatically.
Advanced provisioning and scalability
In a cloud environment, centrally managed configuration ensures that the service being provisioned is fully tested ahead of any release and that all related critical services are updated in conjunction so that any impact is immediately clear and understood.
Scalability is the cloud platform's primary strength compared to traditional hosting. Requests for additional capacity no longer require complex processes or full procurement cycles, as capacity boosts can be controlled by the hosted application itself. As adoption of cloud tools and technologies increases, the enterprise can sit comfortably knowing that the performance of workloads running in the cloud will be consistent regardless of what is thrown at them – unless, of course, something truly extreme comes along.
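The elasticity described above is typically implemented as threshold-based autoscaling. As a hedged illustration only – the function, thresholds, and doubling policy below are assumptions for the sketch, not any specific cloud vendor's API – the core decision can be reduced to a few lines:

```python
def desired_instances(current: int, utilisation: float,
                      scale_up_at: float = 0.75, scale_down_at: float = 0.30,
                      min_instances: int = 2, max_instances: int = 20) -> int:
    """Return the instance count a simple threshold-based autoscaler would request.

    Illustrative only: real platforms add cooldown periods, step policies,
    and predictive scaling on top of a rule like this.
    """
    if utilisation > scale_up_at:
        return min(current * 2, max_instances)   # double capacity under load
    if utilisation < scale_down_at:
        return max(current // 2, min_instances)  # halve capacity when idle
    return current                               # steady state

# During a billing-run spike, capacity follows the load:
print(desired_instances(4, 0.90))  # -> 8
print(desired_instances(8, 0.20))  # -> 4
```

The point for service managers is that the scaling decision moves out of the procurement cycle and into a rule the platform evaluates continuously against the hosted application's load.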
For most utilities, the workload is a known quantity and such overwhelming scenarios have been rare. Apart from an annual billing cycle or extreme weather resulting in higher call volumes, the ongoing volatility of the loads on the estate is relatively low.
However, as digital technologies are adopted, IT converges with operational technology (OT), and SaaS applications proliferate – introducing new opportunities for utility companies to engage more with customers, develop more agile network capability, and exploit large operational data sets to drive efficiency into the business – this is all set to change.
Encouragingly, over three quarters (78%) of the UK energy suppliers we spoke to said they feel prepared to handle the approaching influx of data, and 70% have already implemented innovative technologies to manage and use it effectively. When it comes to running a predictable IT service in the cloud, however, all of this confidence rests on little more than a house of cards unless three key areas are addressed:
- Compatibility: Compatibility issues arise when an automatic cloud update introduces a new way of operating that no longer works with the rest of the estate, or drifts out of synchronisation with the wider architecture. The resulting fault can surface several steps away from its root cause, and demands a support model focused on vendor release plans and short-notice testing windows before a change is released.
- Knock-on capacity: Knock-on capacity issues occur when unmonitored devices or elements in the chain become a limiting factor as an unintended consequence. For example, some SaaS applications provide image recognition and matching that also replicates cloud-stored images onto mobile devices. After a few days' usage, phones with low storage capacity reach their limit and can no longer carry out other functions. In such cases, digital transformation for the engineering team could render core capabilities unavailable unless the entire chain of technology and its limitations is thought through carefully.
- Security: Security has been a growing requirement in recent years and, coupled with new data regulations in the UK and beyond governing where and how data is stored, controlled, and accessed, is another key consideration in cloud adoption. One disadvantage of cloud provisioning is that identified vulnerabilities can no longer be patched swiftly by the service team; they instead depend on a general release from the vendor, with unknown timescales and little visibility of any knock-on effects. Service monitoring may also require additional tools and configuration changes to keep new modules and feature releases within sight of those presiding over the estate.
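The knock-on capacity risk above is easy to quantify with a back-of-envelope check of the weakest link in the chain. The figures below are illustrative assumptions, not measurements from any real deployment:

```python
def days_until_full(free_storage_gb: float, images_per_day: int,
                    avg_image_mb: float) -> float:
    """Estimate the days before replicated images exhaust a device's free storage."""
    daily_gb = images_per_day * avg_image_mb / 1024  # MB -> GB per day
    return free_storage_gb / daily_gb

# A field engineer's phone with 8 GB free, receiving 200 replicated
# images a day at ~5 MB each (~1 GB/day):
print(round(days_until_full(8, 200, 5), 1))  # -> 8.2 days of headroom
```

Running this kind of estimate for every element in the chain – not just the cloud tier – is what careful thought about the entire chain of technology amounts to in practice.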
Across all industries, a clear overall IT strategy – supported by individual data, cloud, architecture, and application strategies – is becoming ever more important to the agility and cost-effectiveness of IT estate and service transformation. Such frameworks help focus attention on the critical aspects that support adoption of these “evergreen” technologies. For utilities specifically, realising the potential benefits will then depend on successfully implementing models that support a wider range of scenarios and needs as part of the overall business case.
Click here to read the full findings of our research into the impact of digital transformation on the UK energy market.