
Impact of Planning and Architecture on Cloud Economics

Manas K. Deb / Ruben Olav Larsen
January 18, 2022

Over the past few years, cloud computing has become a big business, and in the coming years it is expected to get even bigger.

Background:

A contemporary IDC report forecasts that global “whole cloud” spend will grow from about $700B (2021 estimate) to more than $1.3T (2025 projection). With investments and expenditures of this size, economic considerations for cloud computing cannot and must not be ignored. But what exactly is cloud economics, and how can we benefit from learning more about it?

Given the already sizable deployment of IT landscapes in the cloud, enterprises are becoming increasingly conscious of what they pay for their cloud services. In the process, they often realize that they are paying more than they expected. Like any other resource, cloud resources can be misused or wasted, and in fact much more quickly and in larger volumes, since these resources are easier and faster to obtain.

Untracked and wasted cloud spend is thus an important topic: wasted spend affects nearly 80% of enterprises, and around a third of total cloud spend is ultimately deemed wasteful. These concerns have given rise to the now increasingly popular discipline of “Cloud Financial Operations (aka FinOps)” which, as practiced today, is mainly focused on runtime cost management and optimization. If at this point you suspect that the scope of cloud economics should be broader than FinOps, you would be correct!

Using one of the most basic definitions of economics and adjusting its generic terms to fit the current context, we can state that “cloud economics relates to the broad financial considerations for how cloud resources are produced, bought/sold, and consumed”. To achieve the best economic outcome from cloud computing, responsibility is therefore shared between the cloud platform vendors (aka hyperscalers, such as Amazon AWS, Microsoft Azure, and Google GCP) and the consuming enterprises at large: the hyperscalers control the production and selling of cloud resources, while the consuming enterprises need to worry about how best to buy and utilize them. This is where, for consuming enterprises, cloud adoption planning and architecture become the two additional critical pillars, besides cloud run-cost management, that help attain the best achievable cloud computing outcome from an economics point of view.

Impact of Cloud Adoption Planning:

An enterprise-level adoption of cloud computing transforms traditional IT operations and software development into activities using cloud resources and embraces the cloud way of working, i.e., iterative project planning, CI/CD, agile/DevSecOps, etc. Once the why of moving to cloud has been reasonably answered, a cloud adoption plan provides answers to the what, how, and when questions and guides the actual execution of the adoption process. Closely tied to the reasons, ambitions, and execution strategies of cloud adoption are the following two activities:

  1. Cloud adoption business case analysis which includes one or more of Total Cost of Ownership aka TCO, Return on Investment aka ROI, Net Present Value/Internal Rate of Return aka NPV/IRR, Total Economic Impact/Total Economic Value aka TEI/TEV analyses, and the like
  2. Cloud adoption roadmap creation which includes design and scheduling of move-to-cloud groups of assets maintaining technical viability and business continuity, cloud operating model design and roll-out, associated change management and governance.
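To make the business-case metrics in point 1 concrete, here is a minimal sketch of an NPV and simple ROI calculation for a hypothetical migration project. Every cash-flow figure and the discount rate are illustrative assumptions, not data from this article:

```python
# Hypothetical cloud-adoption business case: year 0 is the migration
# investment, years 1-4 are net savings vs. the on-prem baseline.

def npv(rate, cashflows):
    """Net Present Value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

cashflows = [-500_000, 150_000, 200_000, 250_000, 250_000]  # assumed, $
rate = 0.08  # assumed discount rate (cost of capital)

project_npv = npv(rate, cashflows)

invest = -cashflows[0]
roi = (sum(cashflows[1:]) - invest) / invest  # simple undiscounted ROI

print(f"NPV at {rate:.0%}: ${project_npv:,.0f}")
print(f"Simple ROI: {roi:.0%}")
```

A positive NPV at the enterprise's cost of capital is the usual go/no-go signal; TEI/TEV-style analyses layer additional benefit categories (agility, risk reduction) on top of the same cash-flow skeleton.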

Since enterprise-level cloud adoption involves large investments and spending adjustments, along with the usual people-process-technology considerations, a financial dimension, a CFO/COO perspective so to say, must be added to the overall adoption framework and duly considered right from the early planning stage. While much traditional IT expenditure was categorized as Capital Expenditure aka CapEx, cloud spend is commonly thought of as Operational Expenditure aka OpEx. Seen from a finance/accounting point of view, the impact of this shift from CapEx to OpEx is not so black-and-white, and accounting practices have been evolving to better classify cloud spend. As an example, per the latest Generally Accepted Accounting Principles/Financial Accounting Standards Board aka GAAP/FASB or International Financial Reporting Standards aka IFRS guidelines, certain cloud projects and reserved/dedicated cloud resources can be treated as CapEx. Careful attention to these details is required, as these two very different types of expenditure impact short-term and longer-term financial health and reporting. For example, CapEx influences the balance sheet positively and OpEx negatively, while the opposite is true for cash flow reporting.
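The different P&L timing of the two expenditure types can be shown with a little arithmetic. The sketch below is illustrative only (the $1.2M figure and the three-year useful life are assumptions); it shows why the same spend looks very different in year-one financial reporting depending on classification:

```python
# Same $1.2M of spend, two accounting treatments. Capitalized spend hits
# the P&L gradually via depreciation; expensed (OpEx) spend hits at once.

def capex_yearly_expense(amount, useful_life_years):
    """Straight-line depreciation charge per year."""
    return amount / useful_life_years

spend = 1_200_000  # assumed total spend, $
life = 3           # assumed useful life for depreciation, years

capex_year1 = capex_yearly_expense(spend, life)  # only 1/3 hits year-1 P&L
opex_year1 = spend                               # all of it hits year-1 P&L

print(f"Year-1 P&L impact as CapEx: ${capex_year1:,.0f}")
print(f"Year-1 P&L impact as OpEx:  ${opex_year1:,.0f}")
```

Note that in the CapEx case the cash still leaves the company up front, which is why cash-flow reporting shows the opposite picture to the P&L, as the paragraph above observes.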

Besides financial reporting, and especially when evaluating the business case for cloud adoption, one needs to go beyond operating expense comparisons and consider all the other necessary IT expenses, for example, the cost of hardware refresh and technical debt elimination. The above graphic, taken from an actual customer situation, shows that when costs are accounted for in a comprehensive manner, the cost relevant to P&L (dotted lines) with cloud adoption (“new”) was never higher than that without cloud (“baseline”) over the time horizon of interest. In other words, the cloud adoption, in this case, was self-funding – a fact that would not have been revealed without a comprehensive financial analysis.

When planning cloud adoption, in some cases, projects can be scheduled in a way that the savings from earlier projects can fund the costs of later projects. In another customer example where the unit cost of application management was found to be lower in the cloud, a cloud rehost, i.e., lift-and-shift migration without any change to the applications, of part of the IT landscape provided savings from the total application management budget which then was used to partially fund the subsequent cloud migration/modernization projects.
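A self-funding schedule like the one described above can be sanity-checked with a simple running-balance model. All the quarterly figures below are made up for illustration; the point is the mechanism of banking early rehost savings and drawing them down to pay for later projects:

```python
# Hypothetical self-funding migration schedule, figures in $K per quarter.
quarterly_savings = [0, 50, 120, 120, 120, 120]   # released by early rehosts
quarterly_costs   = [200, 150, 100, 100, 80, 80]  # later project spend

net_new_funding_needed = 0
banked_savings = 0
for saving, cost in zip(quarterly_savings, quarterly_costs):
    banked_savings += saving
    funded = min(banked_savings, cost)  # pay from banked savings first
    banked_savings -= funded
    net_new_funding_needed += cost - funded  # the rest needs fresh budget

print(f"Fresh budget required: ${net_new_funding_needed}K "
      f"of ${sum(quarterly_costs)}K total project cost")
```

In this toy schedule, only the first quarters need fresh budget; once the rehost savings start flowing, the later modernization projects are largely paid for out of them.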

Technology adoption is essential for modern businesses to grow, be profitable, and stay competitive, and substantial financial investment is often required to acquire and adopt enabling technologies. In traditional IT, much of this investment requires significant capital outlay, and consequently the cost of capital becomes a relevant financial consideration. In the cloud, on the other hand, the upfront cost of technology acquisition is minimal to none. Operating IT via cloud computing also increases business agility, which can have tremendous economic value. During the current COVID-19 pandemic, cloud-enabled enterprises have shown better resilience, adaptability, and growth than those mainly dependent on traditional IT. As an example, a fast-casual bakery-café chain with about 2,000 locations in the US pivoted almost overnight, with major help from cloud-based technologies, to a frictionless diner as well as a grocery delivery business.

ESG (Environment, Social, Governance) topics have been rapidly gaining importance in both public- and private-sector enterprises. Driven by a combination of the fundamental desire to do good and the wish to be more attractive to ESG-minded investors, enterprises are taking on many initiatives to meet their stated ESG goals. The related ESG economics, involving expenditures to drive ESG initiatives and financial benefits from the increase in brand value due to better ESG maturity, should consider the transition to cloud computing, as it helps reduce carbon and carbon-equivalent emissions and thus brings the enterprise closer to a sustainable-IT (a component of ESG) goal.

Impact of Cloud Architecture:

Although financial planning, forecasting, budgeting, cost allocation, and rightsizing are all crucial elements of cloud economics, many organizations fail to realize the impact cloud architecture can have on optimizing the value of cloud adoption for the business.

Architecture in this context is about translating business requirements and constraints into architecture principles and technical requirements in order to design the most optimal solution that meets these requirements. Since cost is an important factor, “most optimal” includes designing the solution that provides “most bang for the buck” for the business.

An important area where this solution optimization comes into play is business continuity. Most enterprises have spent significant time and effort developing a robust Business Continuity Plan. This is especially important in today’s world of climate change, with its increasing frequency of superstorms, and cyber threats such as ransomware attacks that target more and more organizations.

Building resilient solutions on traditional IT infrastructure can be complex and often requires a substantial financial investment. That is why the cloud is so exciting: it brings a new set of capabilities to tackle these challenges in more cost-effective ways, whether architecting for High Availability (HA) and Disaster Recovery (DR) or leveraging advanced AI-driven cybersecurity features that can help protect your data.

Put simply, the total cost of a cloud solution is a combination of three basic types of cost: infrastructure, license, and people:

  • Infrastructure (hardware + software abstraction layer i.e., compute, storage, network etc.)
  • License (OS, middleware, applications, or other types of software)
  • People (operations, security, application development and management, etc.)

Since cloud services themselves are highly automated, the people cost associated with using a specific cloud service is a significant decision criterion. With the shared responsibility model, the amount of work that needs to be performed by the customer varies depending on the service model (IaaS, CaaS, PaaS, SaaS) that a particular cloud service is built on. As an example, a Virtual Machine aka VM requires a lot more maintenance by the consuming enterprise than a serverless function, so this “people cost” needs to be included when calculating the TCO of the cloud solution.
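The three-bucket view above can be written down as a trivial sum, which makes the role of the people bucket easy to see. The buckets come from the text; every dollar figure below is an illustrative assumption for two hypothetical ways of running the same workload:

```python
# Three-bucket monthly TCO: infrastructure + license + people.

def monthly_tco(infrastructure, license, people):
    """Total monthly cost of a cloud solution, in $."""
    return infrastructure + license + people

# Assumed figures: a self-managed VM vs. a managed serverless option.
vm_option = monthly_tco(infrastructure=600, license=250, people=900)
serverless_option = monthly_tco(infrastructure=750, license=0, people=150)

print(f"VM option:         ${vm_option}/month")
print(f"Serverless option: ${serverless_option}/month")
```

With these assumed numbers, the serverless option wins despite its higher infrastructure bucket, because the people bucket shrinks so much, which is exactly why people cost belongs in the TCO comparison.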

Let’s briefly look at a few examples of how cloud architecture can have a significant impact on optimizing business value and the overall economics of the cloud solution.

Greenfield scenario:

Let’s imagine you are designing a brand-new business application. Your organization has adopted a “cloud first” strategy, so naturally you are designing this as a cloud-native application. You have decided to use a hyperscaler as your cloud provider, and you are now looking at literally hundreds of services on offer, often in various versions/SKUs/sizes, spanning different pricing and discount models. Coming to grips with such a vast set of choices and selecting the right service to best meet the technical requirements can be a daunting task in itself. Selecting the right service from an economics perspective, one that best meets the business requirements, is even harder. The solution is not one-size-fits-all, a fancy tool, or the latest buzzword, but rather highly skilled architects who can navigate the feasible options and design the optimal solution.

Some questions we often get asked in this context are:

  • Should I use containers or serverless?
  • How can I ensure my application runs in a reliable and cost-effective way?
  • What is the best storage/database technology to use?
  • How can I best ensure data privacy and compliance?

As an example, let’s take a closer look at the first question, i.e., containers vs. serverless. Neither technology suits every use case, but one overlapping use case is running an API function on a platform with high availability.

From a purely economic perspective, serverless can provide significant benefits if you require highly available computing power that is only infrequently used. Granted, the unit compute cost of a serverless function is typically higher than that of a single VM. But when you factor in the quality of service of the higher SLA and the cost savings of paying per function call, serverless becomes more cost-effective, especially if the workload stays intermittent.

Serverless computing also involves far fewer people-oriented service management and operations activities than using VMs, and this significantly impacts the TCO when comparing the two options.
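The intermittency argument reduces to a simple break-even calculation. The prices below are illustrative assumptions, not quotes from any provider’s price list, and the sketch deliberately ignores the people-cost difference, which would tilt the comparison further toward serverless:

```python
# Back-of-the-envelope break-even between an always-on VM and a
# pay-per-call serverless function (infrastructure cost only).

vm_monthly_cost = 70.0        # assumed small always-on VM, $/month
cost_per_million_calls = 4.0  # assumed serverless compute + request fees, $/1M calls

breakeven_calls = vm_monthly_cost / cost_per_million_calls * 1_000_000

print(f"Below ~{breakeven_calls:,.0f} calls/month, serverless is cheaper")
```

Under these assumptions the workload would need millions of calls per month before the always-on VM becomes cheaper on infrastructure alone, which is why intermittent workloads favor serverless so strongly.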

Containers, however, are great if you need more control over the runtime environment, have dependencies on other hardware or software, run longer-lived processes, or need the flexibility and portability to run across multiple cloud providers or to combine with workloads running on-premises.

Some organizations are concerned about cloud vendor lock-in, and a container-based approach can alleviate that to some extent. This can include running your own serverless platform on top of your container platform using frameworks like OpenFaaS. However, it is important to keep in mind that with increased flexibility and complexity comes increased cost.

High Availability Design Targets:

Just because you can more easily build very highly available solutions in the cloud, it does not mean you always should. Your cloud architecture needs to be fit-for-purpose, basically solving functional and operational business requirements in the most cost-effective way.

Availability Service Level Agreement (SLA) is one of the requirements you often need to meet in your designed solution. In the table below you can see some typical SLA levels that are commonly used with corresponding application categories.

Batch processing, as an example, typically does not need high availability and can often utilize spot instances or other forms of cheap computing, especially, if you can easily re-run a job in case of an interruption.

Source — AWS Availability

Choosing the Best Storage/Database Technology:

If you are creating a cloud native application, you will most likely not limit yourself to just one technology. As with many things in life, it is about selecting the right tool for the job.

Polyglot Persistence is a concept of using different storage technologies by the same application or solution, leveraging the best capabilities of each storage technology. As an example, in a microservice architecture with a simple one-to-one scenario you will select the most efficient and cost-effective storage solution depending on what type of data the microservice is going to store. The result is that the overall solution, consisting of many microservices, will have different storage technologies in play.

Source — Claudio Guidi

All the hyperscalers have a large palette of storage and database solutions catering to different storage needs. Not surprisingly, these services can often differ vastly if you compare their price/performance per megabyte. So, selecting the right tool for the job is essential from a cloud economics perspective. This also goes hand in hand with the ESG and sustainable-IT agenda, since eliminating wasteful consumption of infrastructure is good for business and the environment.

Closing Remarks:

When negligent resource usage is factored out, cloud computing cost is a direct function of adoption planning and architecture. As we have described here, besides the standard people, process, and technology considerations, careful evaluation of the relevant financial aspects is critical to ensuring high economic benefits from cloud computing adoption.

This article has benefited from discussion with Chris Dudgeon from Capgemini UK.

About the authors


Manas K. Deb
PhD, MBA – VP and Business Leader, Cloud Computing, Capgemini, Europe

Ruben Olav Larsen
Director and Cloud CoE Leader, Capgemini, Norway