Just as Web 2.0 seems to be falling into place, in terms of both the collection of technologies and how to use them, up pops the next ‘big thing’: Cloud Computing, complete with all the hype. Actually there is a certain linkage going on here, and it’s reasonable to say that just as SOA is the enterprise enabler for ‘Services’, Cloud Computing is the enabling environment for widespread support of Web 2.0, as well as for ‘Services’ in general.
I promised in a previous blog to try to make some sense of Cloud Computing, having spent some time discussing the topic with Intel, HP, and several specialist start-ups during the Intel Venture Capital event in San Francisco in the first week of June. So why is it called ‘cloud computing’? That might be a good place to start. In theory at least it’s because those provisioning ‘services’ do not have to concern themselves with the supporting technology layers; instead they can represent them in a schematic drawing as a ‘cloud’, in much the same way as we have been representing the complexities of networks over the last few years. At this point you can see why Cisco is pretty keen on Cloud Computing, or as it terms it, ‘Network based Services’, arguing that it’s not just a computing-resources topic.

As each conversation was invariably focussed on the vendor’s point of view it was difficult to get a useful breakdown of how a Cloud works, but eventually we got there (thanks Russ at HP, and Bill at Google, in particular). So here is my simple breakdown into the three major elements of what we at Capgemini dubbed some time ago ‘the Invisible Infostructure’, based on the original O’Reilly seven principles of Web 2.0, one of which referred to ‘The Web as a Platform’. If you think about it, you realise you assume things work on, or over, the Web without further thought, hence the phrase that it’s ‘invisible’!
Everything sits on the bottom layer, which I think is best described as Computational Tasks: everything from raw compute power to storage capabilities, all tied together and delivered as a single integrated entity under its own sophisticated management. This embraces absolutely everything that could be required to support the upper layers. That broadens the technology quite a bit and sees Cisco coming in alongside IBM, HP, Sun, and others such as EMC.
On top of this sits the Platform, or rather a series of Platforms from different technology vendors, though you could also consider Google, or Facebook, etc., a platform in this definition. The job of the Platform is to add value in a way that can be used to mount the last layer, Services, upon. That means it is open in terms of published APIs and has a generic capability, such as Google Maps or Facebook communities, that you can use to add specific services to. And of course it’s well known that Google deploy their data centres for the computational tasks layer in a very different manner, in order to support the use of the platform by an unknown number of people, or services, in an unknown time frame.
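To make the platform idea concrete, here is a minimal sketch of a ‘platform’ exposing a generic capability through a published API, with a third party adding a specific service on top. All the names here (a toy geocoder standing in for something like Google Maps) are illustrative inventions, not a real vendor API.

```python
# Hypothetical sketch: a 'platform' layer offers a generic capability
# (here, a toy geocoder) via a published API; a third-party 'service'
# adds value on top without touching the infrastructure underneath.

class MapPlatform:
    """A platform exposes generic, reusable capabilities via a stable API."""

    def __init__(self):
        # A tiny in-memory gazetteer standing in for the platform's data.
        self._gazetteer = {
            "10 Downing Street": (51.5034, -0.1276),
        }

    def geocode(self, address: str) -> tuple:
        """Published API call: turn an address into (lat, lon) coordinates."""
        return self._gazetteer[address]


def where_is(platform: MapPlatform, address: str) -> str:
    """A third-party service built purely against the published API."""
    lat, lon = platform.geocode(address)
    return f"{address} is at ({lat}, {lon})"
```

The point of the sketch is the separation: the service author only ever sees `geocode`, never the data centres behind it.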
So the last layer contains the ‘services’, meaning something such as a logistics company building a service to locate your parcel over a map of the neighbourhood coming from Google. I think you see the whole stack best by considering Amazon: their Elastic Compute Cloud, EC2, with their shopping mall platform over it, and the ability for shop owners to place their group of ‘services’ on top of the Mall platform.
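The Amazon analogy above can be rendered as a toy model of the three layers: an elastic compute layer at the bottom, a ‘mall’ platform that hides capacity management, and individual shops as services on top. This is entirely illustrative; the class names and numbers are made up, not anything from EC2 itself.

```python
# Toy model of the three-layer stack: compute cloud -> platform -> services.

class ComputeCloud:
    """Bottom layer: raw capacity delivered as one managed entity."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.allocated = 0

    def allocate(self, units: int) -> bool:
        """Grant capacity if available; the caller never sees the hardware."""
        if self.allocated + units > self.capacity:
            return False
        self.allocated += units
        return True


class MallPlatform:
    """Middle layer: adds value (tenancy, listings) over raw compute."""

    def __init__(self, cloud: ComputeCloud):
        self.cloud = cloud
        self.shops = {}

    def open_shop(self, name: str, units: int = 1) -> bool:
        # The platform handles capacity on the shop owner's behalf.
        if self.cloud.allocate(units):
            self.shops[name] = []
            return True
        return False

    def list_item(self, shop: str, item: str) -> None:
        """Top layer: a shop's own 'service' is just items on the platform."""
        self.shops[shop].append(item)


cloud = ComputeCloud(capacity=10)
mall = MallPlatform(cloud)
mall.open_shop("BookNook")
mall.list_item("BookNook", "A Field Guide to Web 2.0")
```

The design point: the shop owner talks only to the mall, the mall talks only to the cloud, and each layer stays invisible to the one above it.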
So far so good, conceptually at least, but we have a long way to go before our current generation of computers and applications can go near Cloud Computing. So it seems more likely that Cloud Computing belongs to supporting Web-based systems, while the conventional data centre will continue to support our current generation of Enterprise Applications. Though of course the pricing model for provisioning the existing data centre may well start to change towards ‘power by the hour’, something Sun has been doing for universities for some time now under the ‘grid’ tag.
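The ‘power by the hour’ shift is easy to show in miniature: paying for metered usage rather than owned capacity. The rates and figures below are invented purely for illustration.

```python
# Sketch of usage-based versus capacity-based charging. All numbers invented.

def usage_charge(hours_used: float, rate_per_hour: float) -> float:
    """'Power by the hour': pay only for what was actually consumed."""
    return round(hours_used * rate_per_hour, 2)

def capacity_charge(owned_servers: int, monthly_cost: float) -> float:
    """Conventional model: pay for provisioned capacity, used or not."""
    return owned_servers * monthly_cost

# A bursty workload using 40 hours of compute in a month at $0.10/hour,
# versus owning one server at $200/month:
print(usage_charge(40, 0.10))     # 4.0
print(capacity_charge(1, 200.0))  # 200.0
```

For spiky Web 2.0 traffic the gap between the two models is exactly why the provisioning question matters.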
It occurs to me that I have not mentioned IBM in this, sorry guys, you are right up there too, and here is your link!
My summary? As we deploy more and more Web 2.0 active solutions we will need a different provisioning model, and that’s the first target for Cloud Computing. For a while at least it’s not going to do much to change the data centre operational model, but you will increasingly be able to change the charging and provisioning model for the computers and supporting elements, such as storage, towards a ‘usage’-based charging model.