CTO Blog


Opinions expressed on this blog reflect the writer’s views and not the position of the Capgemini Group

The next generation data centre


My post on January 4th, 'The new data centre is not your current one virtualized', resulted in quite a lot of people asking to know more at subsequent meetings, or even by email (curiously I seem to get more direct email follow-ups now and very few interactive comments; I wonder whether that's because the emails tend to be much more personal questions). So here is a further piece I wrote on the topic for the Capgemini Monthly Technology Brief, which provides a featured article on a topic in the news together with a summary of the major industry announcements that I personally thought were important to note.

Prior to this I had written about the shift away from monolithic application development, where the process is to work from the application to define a suitable operating system that provides the interface to the computational resources. The result is a data centre full of custom applications on different operating systems, and the first round of virtualization has been to use guest operating systems to reduce the sheer number of servers and operate the remaining ones more efficiently. The shift to developing small, simple 'services' that run on cloud-based computational resources then led to this article:

The current data centre could be defined as the physical co-location of a number of individual, discrete systems to enable optimization of operations and efficiency in sharing as many resources as possible. This definition is based on a legacy of application-driven development, which in turn drove the choice of the operating system and, from this, usually the hardware. The individuality of each application, based on a business requirement, led to the acceptance that other factors were secondary; if this produced diversity in the choice of operating systems and hardware, that was acceptable. The shift to small, modular services supported from common resource pools in the cloud model turns this model upside down.

Virtualization, in its first round of adoption, has provided a rationalization of this model by enabling different operating systems to be clustered together more efficiently on a single server. Fewer servers, each better utilized, reduce both direct and indirect (air-conditioning, etc.) power consumption, which is generally reckoned to account for more than half of overall data centre operating costs. Extending this model to create a 'pool' of servers, across which these applications and their operating systems can be optimized still further, or using a mainframe-based super server to achieve a similar effect, is often described as a private cloud, especially if 'virtual machines' can be readily created for new user-driven purposes.
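To make the consolidation arithmetic concrete, here is a rough sketch; the server counts, utilization levels, and wattage figures below are purely illustrative assumptions, not measurements from any real estate.

```python
# Rough worked example of why fewer, better-utilized servers cut power costs.
# All figures are illustrative assumptions.

servers_before = 100          # one application per physical server
utilization_before = 0.10     # typical low utilization of dedicated servers
target_utilization = 0.60     # what virtualization aims for on each remaining host
watts_per_server = 400        # direct draw per server
cooling_overhead = 1.0        # indirect draw (air-conditioning etc.) roughly equal to direct

servers_after = round(servers_before * utilization_before / target_utilization)
power_before = servers_before * watts_per_server * (1 + cooling_overhead)
power_after = servers_after * watts_per_server * (1 + cooling_overhead)

print(f"{servers_before} servers -> {servers_after} servers")
print(f"power incl. cooling: {power_before / 1000:.0f} kW -> {power_after / 1000:.0f} kW")
```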

The key point that this approach may not address, and which is at the heart of the next generation data centre, is that in a genuine Services-on-a-Cloud environment there is little or no need for an operating system at all. In conventional discrete application development on dedicated hardware, the role of the operating system is to provide a set of interfacing capabilities to the hardware. In a genuine cloud environment, 'services' are written to a set of simple, standardized APIs that the cloud layer provides. Microsoft Azure is a good example of this, as are IBM Cloudburst and HP Instant-On for internal private clouds, as well as the external public cloud approaches of Google and Amazon. VMware, together with Cisco and EMC, is introducing a further approach with VBlock.
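As a rough illustration of what writing a 'service' to the cloud layer's APIs, rather than to an operating system, can look like, here is a minimal sketch. The CloudRuntime class is an in-file stand-in invented for the example, not the API of Azure, Cloudburst, or any other real platform; the point is simply that the service touches only the platform's storage, queue, and routing calls, never a server or its OS.

```python
# Minimal sketch of a 'service' written against standardized cloud APIs.
# CloudRuntime is a hypothetical stand-in for a platform's API layer, not a real SDK.

import json
from collections import defaultdict, deque

class CloudRuntime:
    """Stand-in for platform-supplied storage, queue, and routing APIs."""
    def __init__(self):
        self.tables = defaultdict(list)   # durable storage, as the platform exposes it
        self.queues = defaultdict(deque)  # asynchronous hand-off between services
        self.routes = {}                  # where the platform dispatches requests

    def register(self, route, handler):
        self.routes[route] = handler

    def invoke(self, route, body):
        # The platform, not the developer, decides where and on how many machines this runs.
        return self.routes[route]({"body": body})

runtime = CloudRuntime()

def handle_order(request):
    """The 'service': pure business logic, no OS calls, no file system."""
    order = json.loads(request["body"])
    runtime.tables["orders"].append(order)                          # durable state
    runtime.queues["fulfilment"].append({"order_id": order["id"]})  # downstream work
    return {"status": 202, "accepted": order["id"]}

runtime.register("/orders", handle_order)
print(runtime.invoke("/orders", json.dumps({"id": "A-100", "sku": "widget", "qty": 3})))
```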

Correctly done, these simple APIs provide highly optimized use of the computational resources and avoid the performance-tuning challenges that each piece of unique custom development previously posed. In addition there are all the other benefits of 'services', such as reduced development time, reuse, and dynamic orchestration, all delivered in a cloud model without the need for time-consuming server builds, deployment, and operational management. As a result the data centre can now be simplified: instead of the previous co-location of different servers, each virtualized to suit its unique workload, it becomes a single, standardized, uniform resource.

The next generation data centre uses virtualization and cloud technology to resemble a single server, or storage system, or any other computational service, all addressed through a single set of standardized interfaces. Failover and resilience are addressed by the sheer number of CPUs running: the pool can cope with any individual failure by distributing the load across the remaining machines. In existing data centres that have adopted this approach (led by Google and Amazon as first movers), repair and reboot are generally abandoned in favor of 'remove and junk', on the basis that this is both cheaper and safer than risking operational activity in the overall running cloud pool.
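A minimal sketch of that 'remove and junk' pattern might look like the following; the node names, the random failure rate, and the health_check logic are illustrative assumptions rather than how Google, Amazon, or anyone else actually implements it.

```python
# Minimal sketch of resilience through pool size: a node that fails its health
# check is dropped from the pool, and the survivors absorb the load.

import random

class ResourcePool:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def health_check(self, node):
        # Stand-in for a real probe; here a node fails roughly 1% of checks at random.
        return random.random() > 0.01

    def dispatch(self, work_item):
        # Failed nodes are removed ('junked'), never repaired or rebooted.
        self.nodes = [n for n in self.nodes if self.health_check(n)]
        if not self.nodes:
            raise RuntimeError("pool exhausted")
        target = random.choice(self.nodes)
        return f"{work_item} -> {target}"

pool = ResourcePool(f"node-{i}" for i in range(1000))
for job in ("job-a", "job-b", "job-c"):
    print(pool.dispatch(job), f"({len(pool.nodes)} nodes still in the pool)")
```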

The principal operating and administration task moves from being machine-focused towards policy management of what, who, and how: which resources users and services may use, who may use them, and in what way. Simplicity of access and use, coupled with the shift to user-driven provisioning, orchestration, and so on, makes it essential to strengthen policy management capabilities. This is a point that IBM, HP, and Dell are intent on addressing and positioning as the new value that their hardware provides. (Sun sits in a somewhat different model, developed by Oracle to suit its application-centric business model.) The shift to a new generation of software becomes the critical element in the virtualized cloud resource model, and has introduced startups such as Joyent, who provide only the software and reduce the role of the hardware to the cheapest standardized units available.
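As a simple illustration of what policy management, rather than machine management, can mean in practice, here is a minimal sketch; the roles, resource types, and limits are invented for the example and are not drawn from any vendor's product.

```python
# Minimal sketch of policy-driven administration: the operator maintains rules
# about who may provision what, and each self-service request is checked against them.

POLICIES = [
    # role, resource type, maximum units a single request may claim (illustrative values)
    {"role": "developer", "resource": "vm", "max_units": 4},
    {"role": "analyst",   "resource": "storage_gb", "max_units": 500},
]

def is_allowed(request):
    """Return True if a provisioning request satisfies at least one policy."""
    return any(
        p["role"] == request["role"]
        and p["resource"] == request["resource"]
        and request["units"] <= p["max_units"]
        for p in POLICIES
    )

print(is_allowed({"role": "developer", "resource": "vm", "units": 2}))   # True: within limits
print(is_allowed({"role": "developer", "resource": "vm", "units": 16}))  # False: over quota
print(is_allowed({"role": "analyst", "resource": "vm", "units": 1}))     # False: wrong resource
```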

Standardization plays a big part in next generation data centres for two reasons: the obvious one of building blocks and flexibility, and the second, the universal similarity of the resulting environment. All enterprises will be using a mixture of 'services' in the much more externalized activities that new business models are driving. Some will come from their internal private cloud, others will be shared with business partners and must therefore run in easily accessible public clouds, and increasingly there will be vertical-sector clouds providing new specialized business capabilities for these new markets.

The shift to 'X as a Service' is not necessarily about the current generation of enterprise applications managed by IT as part of the internal overheads budget; it is a direct requirement for front-office consumption of 'services' for business technology in such situations. Managing the connectivity between people, services, and resources is a complex new task, and it defines the difference between the 'free' and 'open' world of Web 2.0 and the business world of the cloud. Cisco believes that managing this connectivity, and the network of people, services, and resources, calls for its version of the next generation data centre, one that focuses on these aspects rather than on the physical data centre or its hardware.

In all of these respects, the need for next generation data centres to provide commonly understood interfacing and service levels is paramount to the ongoing development of clouds in the fully developed business model. For this reason some seventy companies have established the Open Data Center Alliance, with five working groups using 19 use-case models to develop a 1.0 roadmap. The five core focuses are: Infrastructure; Management; Security; Services; and Government & Ecosystems. An initial set of drafts, termed Roadmap 0.5, will be delivered in the first half of 2011.

The radical change that all of this brings may not be welcomed by the current operators of data centres, who will rightfully argue the need to continue to support their current enterprise applications, and point to the operating improvements that virtualization is bringing. But the next generation data centre is not about operational improvement, and it is not driven from the resources up as in the past; instead it is being driven by a shift in the business use of technology, creating the need for a wholly different set of resources, supplied, charged for, and managed in a new way. The steering committee of the Open Data Center Alliance makes this very clear, being comprised entirely of non-IT vendors or service providers, yet representing some of the largest business market makers: BMW, China Life, Deutsche Bank, Lockheed Martin, JP Morgan Chase, Marriott International, National Australia Bank, Shell, UBS, and Terremark.

About the author

Andy Mulholland
Capgemini Global Chief Technology Officer until his retirement in 2012, Andy was a member of the Capgemini Group management board and advised on all aspects of technology-driven market changes, as well as being a member of the Policy Board of the British Computer Society. Andy is the author of many white papers and the co-author of three books charting the current changes in technology and its use by business, starting in 2006 with 'Mashup Corporations', which details how enterprises could make use of Web 2.0 to develop new go-to-market propositions. This was followed in May 2008 by 'Mesh Collaboration', focusing on the impact of Web 2.0 on the enterprise front office and its working techniques, and then in 2010 by 'Enterprise Cloud Computing: A Strategy Guide for Business and Technology Leaders', co-authored with the well-known academic Peter Fingar and one of the leading authorities on business process, John Pyke. The book describes the wider business implications of cloud computing, with its promise of on-demand business innovation. It looks at how businesses trade differently on the web using mash-ups, at the challenges of managing more frequent change through social tools, and at what happens when cloud comes into play in fully fledged operations. Andy was voted one of the top 25 most influential CTOs in the world in 2009 by InfoWorld, and is grateful to the readers of Computing Weekly who voted the Capgemini CTO Blog the best blog for business managers and CIOs in each of the last three years.
