I recently read an article on cloud computing in utilities. It provided several use cases for cloud technologies and prompted a thought, so I dug into a few other articles on the subject. As I reflected on the examples in the half dozen sources, it struck me that no one was really talking about the advantages or challenges of the cloud versus traditional architectures.
This led me to think about the true, primary cases where the cloud (private, public, or hybrid) offers a distinct advantage to utility players. I am referring to those cases where the cloud offers capabilities that a traditional architecture would struggle to emulate.
But before we jump into the use cases, let’s talk for a moment about cloud architectures. Cloud has many different flavors. In the public-cloud space, we have traditional vendors providing Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS), and Platform-as-a-Service (PaaS) capabilities. Newly branded services such as Database-as-a-Service (DBaaS), Storage-as-a-Service (STaaS), Disaster-Recovery-as-a-Service (DRaaS), and others provide even finer-grained service models. Regardless of the services provided, public clouds are off premises and are typically offered on a subscription basis.
Private clouds may be on or off premises, but are always dedicated to a single organization. Hybrid clouds are essentially an orchestrated mix of private and public cloud environments leveraged to provide elastic capacity.
Regardless of the cloud model, it is essential that utilities architect for change. This means developing methods to encapsulate cloud functionality and reduce the cost and time of adding or switching cloud services. A common approach is the use of application programming interfaces (APIs). Not only are APIs a lighter-weight method of integration, but a well-constructed API can also open up cloud services to a broader audience inside the organization.
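One way to picture this encapsulation is a thin, provider-agnostic interface that the rest of the organization codes against, with vendor-specific adapters hidden behind it. The sketch below is illustrative only: the `CloudStorage` interface, the in-memory stand-in provider, and the `archive_meter_readings` helper are all hypothetical names, not any vendor's API.

```python
from abc import ABC, abstractmethod


class CloudStorage(ABC):
    """Provider-agnostic storage interface. Business code depends on
    this abstraction, never on a specific vendor SDK."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStorage(CloudStorage):
    """Stand-in provider used here for illustration; a real adapter
    would wrap a public-cloud SDK or a private-cloud storage API."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def archive_meter_readings(storage: CloudStorage, readings: bytes) -> str:
    """Example caller: it only sees the interface, so switching cloud
    services means writing one new adapter, not rewriting every caller."""
    key = "meter-readings/latest"
    storage.put(key, readings)
    return key
```

The design choice is the point: swapping providers, or mixing private and public clouds in a hybrid model, becomes a matter of adding one adapter class rather than touching every application that consumes the service.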
What are the inherent advantages of cloud architectures? Some of the most-often cited include lower upfront costs, more flexible cost models, geographic reach/coverage, robust disaster recovery, and frequent (automatic) software upgrades.
After spending more than 30 years in consulting, I can remember numerous discussions in which a business unit or the finance organization wanted to delay an upgrade to reduce cost in the current fiscal year. Or sometimes the business unit simply didn’t want a disruption during a busy time of the year and asked for upgrades to be delayed.
Often, that’s okay; a single-year delay isn’t really a big deal. But what happens when it is delayed again the next year? And again the year after that? I recall numerous applications falling five-plus years behind their scheduled upgrade cycle. What happens when you upgrade a five-year-old system? It essentially becomes a re-platform operation that is now very costly and disruptive. Not only that, the delays also increase security risks, because the underlying operating system and/or supporting database systems often can’t be upgraded either.
Also, when you consider costs, it is important to consider all of them. Often overlooked in discussions of total cost of ownership are items such as operating system upgrades, hardware refreshes, supporting software upgrades (databases, browsers, etc.), and organization disruption. Organization disruption occurs when we incur downtime to install upgrades or have to conduct extensive business testing for new releases of the application. Cloud architectures and solutions insulate organizations from most of these costs and disruptions. Simply stated, a recurring subscription service is a much more predictable cost model.
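A rough worked example makes the predictability point concrete. The figures below are assumptions chosen purely for illustration, not benchmarks: a traditional deployment carries upfront capital, yearly operations, and a periodic refresh, while a subscription grows linearly.

```python
def on_prem_cost(years: int, upfront: float, annual_ops: float,
                 refresh_every: int, refresh_cost: float) -> float:
    """Cumulative cost of a traditional deployment: upfront capital,
    yearly operations, plus a periodic hardware/platform refresh."""
    total = upfront + years * annual_ops
    total += (years // refresh_every) * refresh_cost  # lumpy refresh spend
    return total


def cloud_cost(years: int, annual_subscription: float) -> float:
    """Cumulative cost of a subscription service: flat and predictable."""
    return years * annual_subscription


# Illustrative figures only (assumptions, not real utility budgets):
for y in (1, 3, 5, 7):
    trad = on_prem_cost(y, upfront=500_000, annual_ops=120_000,
                        refresh_every=5, refresh_cost=300_000)
    sub = cloud_cost(y, annual_subscription=200_000)
    print(f"year {y}: on-prem ${trad:,.0f} vs subscription ${sub:,.0f}")
```

The exact crossover depends entirely on the numbers you plug in; the structural difference is that the traditional curve is lumpy (refreshes, re-platforms, disruption) while the subscription curve is a straight line you can budget against.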
So, the cloud offers significant advantages for utility companies – including service options, flexible cost models, and customizable performance levels – but those wins are often wiped out when companies delay critical cloud implementations or upgrades.
In part two of this series, I examine some other potential downsides and then examine those utility use cases that are most suited to cloud architectures.