The trigger for this blog was the recent SIENA (Standards and Interoperability for eInfrastructure Implementation Initiative) conference, entitled Cloudscape III, which focuses on the scientific computing community. It was an interesting set of presentations, all available online, showing the same leadership that has in turn produced the Internet, the World Wide Web and grid computing. Taken together, the papers probably represent the clearest view of how clouds and their use will develop, as the big scientific centres plough real expertise into solving the everyday issues that will make this new level of computational services work.

Included in the presentations is one on NASA Nebula, which provides a reference model for interoperable clouds. The OpenNebula project, with an associated open-source community developing the stack, is strongly backed by leading scientific centres, including CERN, the home of the World Wide Web. OpenNebula 2.2 was released in March 2011 as 'stable for daily operations' under the Apache License; the previous release, OpenNebula 2.0.1, can currently be downloaded as part of the Debian, openSUSE and Ubuntu Linux distributions.

An alternative that seems to be more driven by commercial users comes from the CloudStack open-source cloud computing project, which also in March released its latest, now significantly comprehensive software stack, CloudStack 2.2. This release not only provides an 'out of the box' (as they call it) pre-integrated set of everything you might hope for to build a private cloud; it also now includes CloudBridge. As the name suggests, this bridges between a CloudStack cloud and Amazon EC2 / S3, allowing services to be shared and interchanged. This is a big part of the roadmap for the ongoing development of the CloudStack Open API, with the goal of a similar seamless service for other 'cloud' or 'virtual' products. The ambition extends to being able to manage this hybrid environment through common policies and tools enacted at the API connection layer.
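The value of an EC2-compatible bridge is that the same client code can talk to either endpoint. As a minimal sketch (not CloudBridge's actual implementation; the hostnames and credentials below are made up for illustration), here is how an EC2-style query request is signed with AWS Signature Version 2 — only the endpoint host changes between Amazon and a compatible private cloud, not the client logic:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_ec2_query(host, path, params, secret_key):
    """Compute an AWS Signature Version 2 for an EC2-style GET query.

    The same signing routine works against Amazon EC2 or an
    EC2-compatible endpoint such as a CloudBridge gateway -- only
    the `host` differs.
    """
    # Canonicalise: sort parameters and percent-encode keys and values.
    canonical = "&".join(
        f"{quote(k, safe='')}={quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    string_to_sign = "\n".join(["GET", host, path, canonical])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Hypothetical credentials and endpoints, for illustration only.
params = {
    "Action": "DescribeInstances",
    "AWSAccessKeyId": "EXAMPLEKEY",
    "SignatureMethod": "HmacSHA256",
    "SignatureVersion": "2",
    "Version": "2010-11-15",
}
sig_amazon = sign_ec2_query("ec2.amazonaws.com", "/", params, "secret")
sig_bridge = sign_ec2_query("cloudbridge.example.local", "/", params, "secret")
```

Because the host is part of the string to sign, the two signatures differ, but the client code path is identical — which is the essence of the interoperability CloudBridge promises.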

While it’s not open source, the Open Data Centre Alliance has to be brought into this as the third, and very commercial, community active in setting a path for deploying clouds. By the way, expect to see the term ‘open’ used more and more, simply because the focus now is on external interactions between enterprises using ‘services’, and this by definition means everything has to be ‘open’ to be usable. I commented on the Open Data Centre Alliance in a previous blog on next-generation data centres being built to function in a very different way in order to support cloud stacks. In drawing attention to the software stacks, I felt I should draw attention again to the design of the hardware that relates to them.

These three industry moves together create a fairly cohesive approach to the design and functionality of a mature cloud model, supporting more than just the internal deployment of improved capabilities reflected in a private model. On this point alone it’s worth taking the time to understand exactly what each is doing and, most importantly, why; but at the same time it’s also necessary to do whatever makes sense today in updating and adopting virtualisation and clouds, with a clear idea of where you will need to be in a year or so’s time. We all know that worrying feeling of being driven to make a decision on new technology this year and wondering whether it will be a help or a hindrance in the years ahead!


A week or two back I posted on the RSA Security Conference, and around the time this was going on RSA’s confidential systems were attacked in a very skilful manner and certain confidential information was obtained. RSA, a division of EMC, published an online advisory note on the risk this might pose for RSA SecurID users. It seems the attack was aimed at obtaining the root algorithm that generates the random numbers used in two-factor authentication.
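To see why that target matters: in token-based two-factor authentication, the token and the server derive the same one-time code from a shared secret seed, so anyone holding the seeds (or the generating algorithm plus its inputs) can reproduce every code. SecurID's own algorithm is proprietary; the sketch below uses the openly specified TOTP scheme (RFC 6238) purely to illustrate the principle, and the seed value is made up:

```python
import hashlib
import hmac
import struct
import time

def totp(seed, timestep=60, digits=6, now=None):
    """Time-based one-time code in the style of RFC 6238.

    The whole security of the scheme rests on `seed` staying secret:
    anyone who holds it can compute every future code.
    """
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Token and server independently derive the same code from the shared seed.
seed = b"example-shared-seed"                         # illustrative value only
code = totp(seed, now=1_300_000_000)
```

The lesson of the breach is visible in the code: there is no per-user password in the derivation at all, so a compromise of the stored seed records silently removes the second factor for every affected token.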