

Lean and the Art of  
Cloud Computing Management

#### A guide to building Agile IT Supply Chains

by Gregor Petri

Smashwords Edition

##  Table of Contents

Table of Contents

Introduction

The Cloud Academy

Cloud - more a marathon than a sprint

Section 2: Cloud computing defined

Cloud computing: what is it?

Cloud computing: the benefits

Cloud computing: the risks

Cloud computing: the building blocks

Cloud computing: management aspects

Cloud computing: from definition to deployment

Cloud computing: A better way

Section 3: Cloud questions

Is hybrid the new black?

Will audits and certificates erase cloud security concerns?

Can public clouds be assured?

The day the cloud was out

The private cloud debate is building up steam, but is it worth having?

Who leads cloud computing developments?

Will the cloud end micro management?

Will the cloud drive consumerisation beyond devices?

Will the cloud kill outsourcing, the browser and the web?

Will today's data centre follow yesterday's mainframe?

What will be the cloud's killer app?

Can you have cloud computing without vendor lock-in?

Market developments around lock-in

Is there a role for government in stimulating cloud computing?

Vivek Kundra's decision framework for cloud migration

Some pragmatic cloud advice from down under

Section 4: A new role for IT management?

The rumours of the IT manager's death were greatly exaggerated

Why cloud spells c.o.m.p.e.t.i.t.i.o.n. for the average IT department

Why is it so complex to make IT simple?

Reshaping IT management - by cutting it into two halves?

Rogue IT and stealth clouds

The IT-dustrial revolution

Managing an industrialised supply chain of services

Applying manufacturing best practices

How lean is your cloud?

A service portfolio approach

An IT supply chain model; once more, with feeling

Building your first virtual IT factory

On the importance of planning

Are there any shortcuts or even a better way?

The need for a cloud abstraction model

It's all about the fabric

Is your cloud strategy 3D-ready?

Eight simple rules for creating a cloud strategy

Appendix

The NIST definition

About the author

##  0: Introduction

In organisations everywhere, both business and IT are embarking on a cloud computing journey - but from very different starting points. While many IT departments look upon cloud computing as a way to make IT operations more efficient, business departments see it as an opportunity to source solutions directly 'as a service', often bypassing the IT department. This cannot go on. These two groups need to start talking again; otherwise they will remain like strangers passing in the night - or, even worse, a train crash waiting to happen.

This management guide aims to facilitate this discussion by providing a non-technical, structured introduction to cloud computing. It also highlights the profound change that needs to take place in the way large organisations manage their IT. Cloud computing has the potential to further transform IT into a utility: affordable, reliable, always on and ubiquitous. And as Nicholas Carr highlighted in his notorious 2003 Harvard Business Review article, utilities need to be managed differently.

The question is: will this new approach to the management of IT increase or decrease the strategic relevance of IT? That is not easy to answer at a time when some predict cloud computing to be an emerging bubble, while others see it as the beginning of the renaissance of IT.

To answer the question, we not only need to understand what cloud computing is and how it is developing, we also need to realise that the management of IT already began its transformative journey before cloud computing was introduced. Cloud computing is the next station on the route to making organisations more agile, responsive, efficient and thus successful.

Gregor Petri

Advisor Cloud Computing, CA Technologies

## 1.1: The Cloud Academy

About a year ago, we published the Cloud Academy primer "Shedding Light on Cloud Computing". Since then, interest in cloud computing has blossomed and I have had the opportunity to present our Cloud Academy content at cloud computing events around the world.

This new book encapsulates the insights and knowledge gathered from conversations at these events, including the dialogue with cloud practitioners, vendors, customers and the considerable number of cloud computing gurus this industry - despite its young age - already seems to have.

In section 2, 'Cloud computing defined', we include an abbreviated and updated version of the Shedding Light on Cloud Computing primer. This provides a quick recap of the various types of cloud computing, the reasons why organisations would want to implement such a strategy, and the risks associated with cloud computing.

In section 3, we discuss a number of more philosophical questions around this phenomenon that is reshaping today's IT: How big is the cloud? Can cloud computing be assured and secured? Does it mean the end of the data centre as we know it?

Finally, in section 4, we take a look at how cloud computing is creating both an opportunity and a necessity for IT management to transform itself from being a guardian of the IT factory to an orchestrator of a supply chain of internal and external services.

Some of the content in this book was originally published via the Cloud Academy blog, the cloud storm chaser blog, ITSMportal.com and in several printed publications. I hope you will find it a useful guide for your journey to the cloud.

## 1.2: Cloud - more a marathon than a sprint

Cloud computing is not an invention. The components that make up or enable the cloud are not new. We have had broadband networks for 10 years, have used virtualisation for 20 years and were sharing computing capacity (time sharing) even before I embarked on my working career.

Cloud computing is much more a practical innovation. Practical innovations combine existing technology into a compelling new product. The best example of a practical innovation is probably the Apple® iPod, which combined existing and readily available technology - a portable hard disk, a compact headset and MP3 compression - in a new type of Walkman. It represents an innovation that profoundly changed the music industry. Cloud computing has the potential to change the IT industry in a similarly fundamental fashion. The thing with practical innovations is that it is not about having the best idea; it is not even about having the idea first. It is all about planning and flawless execution. In other words, despite the hype and the peer pressure, 'ready-fire-aim' is not a promising strategy for cloud computing. This is why we decided to launch The Cloud Academy and subsequently publish this book with knowledge and insights from the Academy.

The Cloud Academy's goal is to give IT and business technology (BT) professionals an opportunity to exchange ideas, discuss experiences and brainstorm about execution strategies for their complex environments. The content aims to be vendor and technology agnostic and covers all the different incarnations of cloud computing, including infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). The Academy is not a course where a teacher explains how cloud computing should be executed. The goal is to increase knowledge and insight, so participants can set a strategy for their use of cloud computing. This book and the brief primer Shedding Light on Cloud Computing were both created in that spirit. The Academy sessions began in many European countries in co-operation with, or via contributions from, recognised cloud players, such as Cisco, NetApp, Amazon Web Services and Capgemini.

The sessions are now also scheduled elsewhere, including in North America. During these sessions, debates are sometimes quite heated, as chief security officers, VPs of operations and heads of development (not to mention representatives of business departments) sometimes have conflicting objectives. The best way to resolve this is to build a common understanding of each group's challenges and opportunities so they can be addressed in a constructive fashion. If you would like to participate in the debate, please join The Cloud Academy group at LinkedIn , or attend one of the Cloud Academy sessions.

#  Section 2: Cloud computing defined

This section contains a shortened and fully updated version of the "Shedding Light on Cloud Computing" primer that the Cloud Academy made available in early 2010.

## 2.1: Cloud computing: what is it?

As cloud computing is such a broad topic, it makes sense to look first at some definitions. The shortest one - 'the best computer is no computer' - seems to encapsulate much of the frustration that users traditionally had with IT.

A more pragmatic definition is used by consulting firm Accenture: the dynamic provisioning of IT capabilities (hardware, software or services) from third parties over a network. Most definitions, like the one below from Wikipedia, assume that network to be the Internet (or at least some Internet technology).

Wikipedia: Cloud computing refers to the provision of computational resources on demand via a computer network. In the traditional model of computing, both data and software are fully contained on the user's computer; in cloud computing, the user's computer may contain almost no software or data (perhaps a minimal operating system and web browser only), serving as little more than a display terminal for processes occurring on a network of computers far away. A common shorthand for a provider's cloud computing service (or even an aggregation of all existing cloud services) is 'The Cloud' .

Most industry analysts have their own definitions, but the most widely used or even 'official' definition of cloud computing is the one provided by the U.S. National Institute of Standards and Technology (NIST). Following an extensive industry review, this definition was published in January 2011 as NIST Special Publication 800-145 (Draft).

In short, this definition says:

  * Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (for example, networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

  * This cloud model promotes availability and is composed of five essential characteristics (on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service); three service models (SaaS, PaaS, IaaS); and four deployment models (private cloud, community cloud, public cloud and hybrid cloud).
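
For readers who think in code, the definition above can be summarised as a small data structure. This is purely illustrative; the names follow the NIST wording:

```python
# The NIST cloud model (SP 800-145) expressed as a simple Python
# data structure - illustrative only; the terms come from the definition.
NIST_CLOUD_MODEL = {
    "essential_characteristics": [
        "on-demand self-service",
        "broad network access",
        "resource pooling",
        "rapid elasticity",
        "measured service",
    ],
    "service_models": ["SaaS", "PaaS", "IaaS"],
    "deployment_models": ["private", "community", "public", "hybrid"],
}
```

Five characteristics, three service models and four deployment models: the '5-3-4' shape of the definition is easy to remember and to check.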

The visual presentation of the NIST definition (opposite) gives a nice graphical overview of the components of this definition (source: the Australian Government's cloud computing strategic direction paper). In the remainder of this chapter we will discuss the above in more depth. Before doing so, however, let's remind ourselves of how today's organisations typically run IT.

In traditional IT environments, stability is the name of the game. Applications - regardless of whether they are built in house or bought as standard packages - run on permanently available, stable in-house infrastructure. Even if the infrastructure and/or management of these applications have been outsourced, the outsourced processes and infrastructure will be dedicated to the customer and boast similar levels of stability. Applications can of course be moved across the infrastructure, but to do so a 'change request' is needed, which has to be approved in advance by a change committee. In a pre-cloud environment, applications are almost certainly not assigned dynamically to the server that happens to have the most capacity available.

Such stability does not necessarily make this type of environment easy to manage. The inherent complexity of a modern IT environment requires advanced processes, procedures and tools. Often the organisation will have turned to best practice frameworks such as Information Technology Infrastructure Library (ITIL) and Control Objectives for Information and Related Technology (COBIT) to help govern, manage and secure these large and complex environments.

### 1. Cloud computing service models

When discussing cloud computing, the IT industry has broadly divided the ways cloud computing can be used into three scenarios.

#### **Infrastructure as a service (IaaS)**

With IaaS, organisations - typically their IT departments - source infrastructure capacity (servers, storage or other) over the web, as a service. For instance, this may be to cater for unexpectedly large customer demand, internal requests for a temporary test server, or an extra SharePoint server for a departmental intranet. In most organisations, the end users will not be aware that their IT department is using such infrastructure cloud services.

Using virtualisation as an enabler, the requested infrastructure can be derived from a private cloud (a pool of infrastructure exclusive to the organisation, either located in-house or at a service provider), or it can be sourced from an external public cloud infrastructure provider. By sharing the infrastructure, at different moments in time and among multiple users or customers, IaaS allows for increased utilisation, reduced capacity requirements, lower cost and lower energy consumption, and also greater scalability and flexibility.

Deployment is also much faster than having new hardware ordered, supplied and installed in the data centre. Due to its dynamics, the allocation and de-allocation of capacity is optimised when fully automated. Often this is done by means of simple scripts, but larger organisations are rapidly turning to more advanced data centre automation solutions.
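
As a conceptual sketch of what such a simple script does - the `CloudProvider` class below is a hypothetical stub, not a real vendor API:

```python
# Illustrative only: CloudProvider is a hypothetical stand-in for a real
# IaaS API. The point is the pattern - request capacity when needed and
# release it when done, with no manual change request in between.
class CloudProvider:
    """Minimal stub simulating an IaaS provider's provisioning API."""

    def __init__(self):
        self._next_id = 0
        self.active = set()

    def provision(self, cpu, ram_gb):
        # A real provider would start a virtual machine of this size;
        # here we only hand back an identifier.
        self._next_id += 1
        server_id = f"srv-{self._next_id}"
        self.active.add(server_id)
        return server_id

    def deprovision(self, server_id):
        self.active.discard(server_id)


cloud = CloudProvider()

# Allocate a temporary test server, use it, then release it again.
server = cloud.provision(cpu=2, ram_gb=4)
print(f"provisioned {server}")   # capacity in minutes rather than weeks
cloud.deprovision(server)        # ...and stop paying for it immediately
```

Data centre automation tools essentially wrap this request/release cycle in policies, approvals and monitoring.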

Some of the more familiar providers offering IaaS are Amazon Web Services, Rackspace, Savvis, Terremark, GoGrid and Layered Tech.

#### **Platform as a service (PaaS)**

PaaS is a software development and execution environment that allows developers to develop applications and offer these as a service to their customers or users. Besides offering an efficient, high-level development environment, PaaS also significantly reduces the time required for deployment (moving the developed application into production), as the PaaS provider also hosts the created services, typically in return for a fee based on actual usage or users.

While internal IT departments may use PaaS for building custom applications, it is often also used by independent software developers to create specialised applications and make them available in the cloud more quickly. Easily combining and integrating these standard offerings with customer-specific developments is one of the promises of PaaS. One of the most familiar PaaS examples is probably Japanese Post, which developed an application that allowed millions of customers to check the whereabouts of their postal packages every morning.

Some of the more familiar names in PaaS are Force.com by salesforce.com, Google App Engine and Microsoft Azure.

With vendors like Microsoft offering both PaaS and IaaS from the same platform (for example Microsoft Windows Azure), the distinction between PaaS and IaaS is blurring. With IaaS users typically bring and install their own software and are responsible for running and tuning it on the provided infrastructure. With PaaS, users provide the application by defining it on the spot in the PaaS development environment or by loading existing (typically Java) application code. However, unlike with IaaS, the PaaS provider is responsible for running it at the agreed performance levels. The PaaS user does not have to worry about adding CPUs or memory, the PaaS provider takes care of that.

#### **Software as a service (SaaS)**

With SaaS, organisations do not buy software for installation on their own computers. Instead, they simply use their browser to access the software they want over the Internet.

As this form of cloud computing directly involves applications - not the underlying infrastructure or development efforts - and applications are much closer to users and thus to the business, one could argue that the business impact of SaaS will surpass that of IaaS and PaaS. Thanks to the pre-packaged guidance and recommended work procedures typically offered with SaaS, implementation times are also refreshingly short. SaaS has become more attractive as technologies like Adobe Flash, AJAX, Microsoft Silverlight and HTML5 bring the graphical user interface of web applications up to the standards of modern PC applications. Apps - the new phenomenon of lightweight client applications that act as an 'off-line' front end to SaaS offerings - are also boosting interest in SaaS.

SaaS applications are in many cases used directly by end-user departments, often to a degree that surprises the IT department. For example, sales might simply charge the use of a CRM application derived from the cloud to their credit card, only for it to be lost among the myriad of client lunches and entertainment expenses. One of the advantages of SaaS lies in the vast amount of content that is typically included in the service; content such as photos of every street (Google Maps), CVs of potential employees (LinkedIn) or details of all hotels (Expedia). Offering such vast amounts of content as part of the service is often far beyond the possibilities of in-house applications. If the service provider, in addition to providing software and content, also provides certain processes for the customer, we start to talk about business process as a service (BPaaS).

#### **The 'as a service' ecosystem**

Although the various service models can be described individually, they are (or can be) very much related and integrated. A SaaS provider can, for example, decide to build software using the PaaS platform of another vendor, or use the IaaS services of, for example, Amazon to operate on. In fact, most of today's SaaS providers use public IaaS services instead of owning and running their own data centres.

**SaaS experiences  
** Customer relationship management (CRM) was one of the first areas to demonstrate that business-critical applications did not necessarily have to operate in-house. This sprang from the fact that the intended end users (salespeople) are not in the office very often, and that the sales process is less tightly integrated into internal ERP-type administrative processes than, for example, invoicing or purchasing.

In the airline industry we already see systems for reserving seats and selling tickets offered 'as a service' to multiple airlines, often at a cost of just a few cents per ticket. In fact very few of today's low cost carriers run and maintain their own ticketing system because the available SaaS options do it more efficiently and cost effectively.

Another two common SaaS services are conferencing and webcast facilities. Very few companies feel the need, or have the knowledge, to implement these network-sensitive applications in house. Many believe that other collaboration/ communication applications such as email and instant messaging will form the next wave of broadly implemented SaaS applications.

### 2. Cloud deployment models

Cloud computing can be deployed on an infrastructure that is private, public, exclusive to a community, or on a combination of these (hybrid).

In this book we define private clouds as those in which the use of the infrastructure is dedicated to one organisation (regardless of who owns or maintains it), meaning the infrastructure cannot be used by other organisations. Public clouds, on the other hand, do provide their resources on demand to other organisations, typically over the open Internet.

The private versus public discussion currently plays out mainly around IaaS. However, it is easy to imagine large customers such as the U.S. federal government asking a PaaS provider to set up a dedicated PaaS cloud for all its departments (private) or for all federal, state and local government organisations (community). Community clouds are rapidly becoming popular in government, health care and other public service sectors.

Every type of cloud needs to accommodate to some extent the five characteristics of the NIST definition:

  * **On-demand self-service**: users can request (additional) capacity through some (ideally automated) portal.

  * **Broad network access**: the resources are delivered and accessed over a network (not physically placed on or under the desk) and can be accessed from anywhere.

  * **Resource pooling**: the resources are dynamically shared among all users of this cloud - be it all users in an enterprise (private), all members of a community (community) or all customers of a public cloud.

  * **Rapid elasticity**: when needed, additional capacity is easily or automatically allocated.

  * **Measured service**: use of resources is metered (and ideally charged for) on an 'as used' basis.

Sharing is the key concept of any cloud computing deployment. By increasing the sharing of resources, efficiency improvements and economies of scale are realised. This includes:

  * Sharing capacity (e.g. servers) across multiple departments/customers.

  * Sharing a server across multiple applications (e.g. using virtualisation).

  * Sharing content (pictures, maps, résumés) across more consumers.

  * Sharing functionality and the outcome of development activities across more users.

Some of these sharing possibilities are not exclusive to cloud computing. However, the simple fact that cloud computing is accessed over a network - i.e. the Internet - makes sharing a lot easier than it was before.

## 2.2: Cloud computing: the benefits

The principal benefits of cloud computing can be assessed in terms of:

  * Cost savings for cloud service consumers.

  * Efficiency gains for cloud service providers.

  * Increased added value and agility.

### 1. Cost savings for cloud service consumers

#### Better infrastructure utilisation

Owing to rapid networks, self-service facilities and rich browser interfaces, cloud computing removes many of the obstacles to the effective sharing of IT resources and cost.

However, without virtualisation, sharing servers across multiple applications is problematic, as applications can severely impact each other. This led to a proliferation of servers - commonly known as server sprawl - each running only one application. Thanks to virtualisation, running multiple applications on a shared server is no longer a problem: the virtual machine manager, commonly known as the hypervisor, gives each application its own dedicated sandbox, a virtual container in which untrusted programs can run safely.

Effectively this means that cloud computing allows IT to share resources and increase individual server utilisation.

**New York Times textbook example  
** The classic example of cloud computing is the New York Times' online archive TimesMachine, which takes readers back to any issue of the newspaper from 1851 to 1922. Converting the back issues into a useable format required significantly more computing capacity than the publisher had anticipated, or was in a position to make available. The use of Amazon for conversion and for hosting the document store in the cloud led to significant cost savings.

#### Pay-as-you-go flexibility

As cloud providers only charge for actual usage of the consumed services, the total cost of IT can start to vary according to use. Prior to the arrival of cloud computing, IT costs were typically fixed annually (based on a fixed number of computers, a fixed number of licences and a fixed number of operators). If the total IT department costs $60 million per year to operate, then $60 million was typically cross-charged at the end of each year to the user departments regardless of whether they made use of the installed systems or not.

With cloud computing, the cost of IT resources such as servers, storage and software can vary significantly. Cost becomes variable when organisations start to procure only what is needed. For example, when companies buy capacity from telecommunications companies instead of building and maintaining their own wide area networks (WANs).

**Why buy a taxi if you only need a ride?  
** Cloud computing cost models can be compared to the cost of owning and maintaining a car or aircraft, versus the cost of public transport, like a taxi, plane, train and rental car.

When travelling by public transport, the price of the ticket is a contribution towards the total cost of running the service: no trip, no expense. With a car under ownership, a significant investment has to be made first, but once a certain number of miles is reached (high utilisation) then in theory that investment can be recouped by the lower variable cost per mile.
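
The break-even point in this analogy is simple arithmetic: ownership wins once the fixed cost is spread over enough miles. The figures below are illustrative assumptions, not data from the text:

```python
# Break-even between owning (high fixed cost, low cost per mile) and
# 'renting' (no fixed cost, higher cost per mile). All figures assumed.
fixed_cost_per_year = 6000.0   # depreciation, insurance, maintenance
cost_per_mile_owned = 0.10
cost_per_mile_taxi = 0.70

# Owning is cheaper once: fixed + owned * miles < taxi * miles
break_even_miles = fixed_cost_per_year / (cost_per_mile_taxi - cost_per_mile_owned)
print(round(break_even_miles))  # 10000 - below this mileage, pay-per-use wins
```

The same arithmetic applies to servers: below a certain utilisation, pay-per-use cloud capacity is cheaper than owned capacity.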

In a similar vein, as cloud providers become more efficient at offering computing capacity and market competition forces them to pass these efficiency gains on to their customers, it becomes less and less attractive for companies to own their computing infrastructure.

One could compare SaaS to flying on a commercial airline (the route is already determined), PaaS to using a taxi (you tell it where to go) and IaaS to driving a car yourself. To take the analogy further, IaaS in a public cloud would be like driving a rental car (you drive it, but you can give it back the minute you no longer need it). IaaS in a private cloud would be comparable to a company pool of cars (shared among employees, but all responsibilities for purchasing, repair and maintenance remain with the company). The private cloud offered by a managed service provider could be compared to using a leased car (the lease company buys, maintains and repairs, but you are the sole user and are required to use it for the full period).

#### Capex versus Opex

The ability of cloud computing to make the service cost variable with use also enables these IT services to be funded as an operational expense (opex) rather than as a capital expenditure investment (capex). Moreover, the decision process for opex is usually much shorter and less complex, as the risks are deemed lower and more easily identifiable. It is like choosing a house: the potential risk of renting a property is lower than the risk of buying a house outright.

Organisations typically invest in areas they identify as core capabilities, such as manufacturing or research and development (R&D). They typically treat other areas like housing, catering or company cars more as expenses. The question organisations need to ask concerning cloud computing is this: Which parts of IT do they see as core capabilities they want to invest in, and which parts of IT do they want to source in the same way as other non-primary resources?

It is important to realise that the answer to this question can vary by type of industry and type of company. The sports company Nike, for example, believes manufacturing does not necessarily have to be done in house, but R&D does. Moreover, running a network could be crucial for one type of company (a telecommunications provider, for example), while for others it is better sourced as a commodity.

Nicholas Carr asked whether IT deserved to be considered strategic in his now notorious Harvard Business Review article 'IT Doesn't Matter'. Companies will need to decide which aspects of IT might make a strategic difference to their business. Running desktops, a network or servers might not give that competitive edge, but designing friendly applications for end-user customers might just do the trick.

#### Proof before purchase

Most organisations do not know upfront how long and how widely a solution that is yet to be implemented will be used. Therefore, an opex-based pay-per-usage model makes sense. But if the solution turns out to be used widely and for a long time, it may have been more cost effective to buy it outright (capex).

Cloud computing can enable companies to start off in the cloud (as opex) and bring projects back in house (as capex) when it becomes clear it will be used intensively for the next ten years.

If the implementation (for some unforeseen reason) is expected to be discontinued in just a few years, then it would be better to continue to finance it as opex. Not all vendors offer this flexibility yet; many solutions are only available in one model. Over time, however, buyers will need the flexibility to move solutions between models - from opex to capex and back, from in-house to cloud service, or from one cloud provider to another.

### 2. Efficiency gains for cloud service providers

Cloud computing opens up potential cost savings to service providers which, in a competitive open economy, will be passed on to their customers. Provider savings are typically derived from:

#### Volume discounts

The typical cloud provider will buy infrastructure in very large volume, allowing them to negotiate much higher discounts than the average end-user organisation.

#### Operational savings

Most of today's SaaS applications are based on multi-tenancy, meaning that all customers make use of the same configuration, version and implementation. Patches and bug fixes only need to be applied once, and upgrading to a new release immediately moves all customers to the latest version. This eliminates the potentially substantial cost of maintaining previous releases.

Even in non-multi-tenant environments, where each customer has a dedicated, separate environment, providers can boost efficiency by automating upgrade and update processes. A provider upgrading 1,000 customer instances in its data centre, for example, can do this more efficiently (by automating the process) than 1,000 customers each upgrading one unique implementation.
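
The efficiency argument here is about amortising automation effort over many instances. A minimal sketch, where `upgrade` is a hypothetical placeholder for the real upgrade work:

```python
# One automated routine applied to many customer instances.
# 'upgrade' is a hypothetical placeholder for the actual upgrade steps.
def upgrade(instance_id):
    return f"{instance_id}: upgraded to v2"

instances = [f"customer-{i}" for i in range(1, 1001)]
results = [upgrade(inst) for inst in instances]  # one script, 1,000 upgrades
print(len(results))  # 1000
```

The cost of writing and testing the routine is paid once; each additional customer instance adds only machine time.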

#### Development platform savings

Cloud vendors can significantly reduce their costs by deploying their services on one platform, instead of supporting a multitude of platforms, whether it is mainframe, Windows, UNIX or Linux at multiple versions and releases.

**The total cost of cloud  
** At first sight, cloud services can appear to be more expensive than traditional IT. With traditional software the customer typically buys the license (often as little as 10 percent of the total real cost) and pays separately for hardware, network, storage, operating systems, installation and support. With SaaS all these costs are wrapped up in the monthly user fee, making that look high in comparison to the original cost of the software license. It is the same for infrastructure: the cost of hardware is just a fraction of the total cost of ownership (TCO), which includes installation, patching, warranties, backup, and failover. So it does not make sense to compare that hardware cost to the cost of IaaS on a one-to-one basis.
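
The point about one-to-one comparisons can be made numerically. With purely illustrative figures (assumptions, not data from the text):

```python
# Illustrative TCO comparison - all figures are assumed for the example.
license_cost = 100_000
# If the license is roughly 10 percent of the total real cost, the
# traditional total cost of ownership over the period is about:
traditional_tco = license_cost * 10              # 1,000,000

saas_fee_per_user_month = 70
users, months = 200, 60                          # 5-year horizon
saas_total = saas_fee_per_user_month * users * months  # 840,000

# Naive comparison: SaaS looks 8.4x the license price.
# TCO comparison: in this example, SaaS is actually cheaper.
print(saas_total > license_cost, saas_total < traditional_tco)  # True True
```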

### 3. Increased added value and agility

#### Shorter time-to-value

Cloud computing means shorter time-to-value for both applications and infrastructure. SaaS is often implemented in a fraction of the time required for traditional on-site applications, simply because it is ready, available and waiting. It provides an attractive, practical and ready-made alternative to requesting and provisioning in-house resources. Even starting a simple pilot in a traditional organisation can easily take several months.

The same is true for IaaS. Getting additional capacity for a large number-crunching or data-analysis project in the traditional way takes time. Being able simply to rent this capacity in the cloud can be much faster and more cost effective.

Many CIOs struggle to explain to their CEO why implementing ERP took years and cost ten times more than the cloud-based CRM application that went live within six months - with a comparable number of users, and with greater impact on the business.

**Striking the right balance  
** The ideal cloud provider should understand and be experienced in balancing economies of scale with catering for specific customer demands and warranting continuity.

In the event of a breakdown of one of the large public email services, for example, all we can do is read the press release notifying us of the mishap, which of course will be in line with the published terms and conditions. And then wait for service to be resumed, along with a few million other users!

If, however, we use a niche application from a company with only a few other clients, the only option we have if this supplier gets into difficulty, is to take over the whole set-up including its staff. This may sound far-fetched, but has happened several times in the world of traditional software.

This may lead to traditional outsourcers and managed service providers being more likely candidates as cloud providers for the average enterprise than small, innovative SaaS start-ups or large 'mega cloud' infrastructure providers.

#### Elasticity

Being able to scale and deploy additional servers or storage over the web quickly is an important benefit of IaaS. It is commonly referred to as elasticity. In the case of PaaS and SaaS, we also see organisations scale up quickly from just a few users to many thousands.

The peaks and troughs in required capacity can be extreme. While a company can make reasonable capacity estimates by estimating when internal users are likely to log on to their email system, judging the required capacity for applications offered directly to customers or consumers over the Internet is a lot more difficult. The more companies begin to interact directly with the general public over the Internet, the more important elasticity becomes, as this provides the flexible capacity needed to manage the user experience to a satisfactory standard.
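
The capacity question can be sketched as a simple calculation. The function and figures below are illustrative assumptions, not a real capacity model; the point is only how far peak and off-peak needs can diverge for a public-facing application:

```python
import math

def instances_needed(requests_per_sec, capacity_per_instance, headroom=0.2):
    """Number of instances needed to serve a given load with 20% spare
    headroom. capacity_per_instance is the requests/sec one instance
    can comfortably handle (all figures are illustrative)."""
    required = requests_per_sec * (1 + headroom)
    return max(1, math.ceil(required / capacity_per_instance))

# A consumer-facing site: quiet overnight, a sharp promotional peak.
print(instances_needed(50, 100))     # off-peak: 1 instance suffices
print(instances_needed(4000, 100))   # peak: 48 instances
```

With owned hardware, the company would have to buy for the peak and let most of it idle; with elastic capacity, it rents the difference only while the peak lasts.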

#### Higher added value

Cloud applications can offer greater added value than traditional in-house applications in terms of the content they provide.

A typical example is LinkedIn, a constantly updated database that offers profiles of virtually every current, former and prospective employee in the world (provided they have registered on the service). Many HR departments are now using such systems to look up details of their employees because these profiles are often more up to date than those held in house. Another example is Expedia, the travel website. The average in-house travel department cannot include every hotel on the globe in its database but Expedia and several other travel services do.


## 2.3: Cloud computing: the risks

There is one area where the cloud draws the most resistance: risk. For cloud computing to become as ubiquitous as many expect, cloud vendors will need to address the risk and security related concerns that customers have with regard to:

  * Availability

  * Privacy and legislation compliance

  * Fear of data theft and loss

#### 1. Availability

Availability (having access to a working application when needed) is a concern as old as computing itself. Organisations need to decide what level of availability they require per application. Not all applications are critical. For example, hospitals typically have a backup generator in the basement, whereas local primary schools do not. If a bank currently has a 24/7 failover facility for specific applications, it would be strange if it did not demand the same from its cloud infrastructure or cloud application suppliers. For other applications the need may be significantly less.

In an IaaS environment, the virtualisation layer makes movement of workloads across different cloud providers feasible, making it easier to restore availability; although such portability is not as easily available with PaaS and SaaS vendors. Various vendors are working on addressing these concerns. For example, SaaS escrow services provide a backup copy of the executable software and a copy of the data that allows the customer to continue to run the SaaS application elsewhere should there be an extended interruption of service. We will return to this topic later in the book.

The reliability of access to the Internet is another issue. Larger organisations typically have (or should have) a degree of redundancy built into their Internet access; for smaller companies, though, the availability of the cloud is likely to be the greatest concern. For instance, a two-day Internet breakdown in the Netherlands resulted in a large number of cancellations for a local book-keeping-as-a-service supplier, as customers ran back to the local PC store for a desktop and a software package to install in their own offices.

#### 2. Privacy and legislation

Moving data off site through outsourcing is one thing. Not having a clear understanding of where that data is, for example not knowing even which country it is in, is something else.

Some of the aspects that need to be examined in detail for each cloud offering are:

  * The specific terms and conditions offered by the service provider.

  * The flexibility of the arrangements with the service provider.

  * The conditions for exit and termination of the agreement.

  * The legal and practical implications of moving specific types of data off-site.

Cloud providers are beginning to address some of these legal and privacy concerns; offering, for example, to guarantee that the data for a specific customer will stay within a certain geographic region. Google has committed to provide a cloud environment dedicated for U.S. federal government use, where data will be stored inside the U.S. and access to this cloud will be restricted to government employees and certified Google staff only. But unlike other providers, Google has yet to offer a guarantee for data to remain inside the European Community.

Associations like EuroCloud are also looking to prevent privacy and legislation requirements becoming so strict that they undermine the use of the cloud and its potential benefits.

#### 3. Data theft and loss

Given the headline-grabbing breaches caused by the careless loss of memory sticks and laptops, data security remains a concern. It may come as a surprise to some though that many cloud data centres are physically and procedurally ring-fenced to a greater degree than their enterprise or government counterparts.

Customers also need to understand the encryption and backup measures the provider is taking. For example, many cloud email providers store emails in an encrypted form, so their employees cannot read them. Customers should evaluate these measures on a regular basis and decide whether these precautions are adequate for their needs or not.

#### 4. Other mishaps

When discussing risk we often focus on protecting ourselves against bad actors and natural disasters. But what if a provider consciously decides to stop providing a service - as several providers recently did with regard to WikiLeaks? WikiLeaks is a special case, but a similar thing happened to a small company that was sent confidential bank information by mistake. Even though the bank admitted the error was its own, a court ordered the cloud email provider to block the company's mail account immediately. This was no failure of technology; it was bureaucracy that prevented the company from accessing any mail for several weeks. Not having an alternative way to send, receive or access older email proved highly disruptive to the company concerned.

Moreover, consider the case of a project manager who uses a cloud service to plan and monitor the most important project for his company. Due to a credit card mishap the subscription is not renewed, resulting in the supplier, in line with the published terms and conditions, deleting all the details of this project. Who in this case is liable for any delays the company experiences on this project? And how can such mishaps be anticipated, prevented and overcome?

#### 5. Users and identities

Cloud computing also poses new demands on user management. Just as we allow or deny users access to in-house applications based on their roles and responsibilities, we need this ability in a cloud environment as well.

Consider this SaaS example. When a former employee's access to the company intranet and network is removed, access to all internal applications goes with it: the organisation maintains a central record of roles and responsibilities and can notify every internal application that this former employee is now denied access.

But with the cloud, the company email address is often used as the user ID. In theory, the former employee could continue to use his old company email identity to blog, use social networking sites and maybe even business applications such as CRM, posing as a representative of his old company.

The solution? Single sign-on, which permits a user to enter one name and password to access multiple applications. Single sign-on extends the same in-house user management to the cloud, as well as removing the need for authorised users to remember multiple passwords and user IDs.
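
The principle can be illustrated with a deliberately minimal sketch. The classes and names below are hypothetical (this is not a real SSO protocol such as SAML or OAuth); the point is only that when every application defers to one central identity provider, a single deactivation locks the former employee out everywhere:

```python
class IdentityProvider:
    """Toy central identity provider: applications trust it instead of
    each keeping their own user list (illustrative only)."""
    def __init__(self):
        self.active = set()

    def activate(self, user):
        self.active.add(user)

    def deactivate(self, user):
        self.active.discard(user)

    def is_authenticated(self, user):
        return user in self.active

class SaaSApp:
    """Toy cloud application that delegates all login decisions."""
    def __init__(self, name, idp):
        self.name, self.idp = name, idp

    def login(self, user):
        return self.idp.is_authenticated(user)

idp = IdentityProvider()
crm, blog = SaaSApp("crm", idp), SaaSApp("blog", idp)

idp.activate("alice@example.com")
print(crm.login("alice@example.com"), blog.login("alice@example.com"))  # True True

# One central deactivation denies the ex-employee access to every app:
idp.deactivate("alice@example.com")
print(crm.login("alice@example.com"), blog.login("alice@example.com"))  # False False
```

Without the central provider, each SaaS vendor would have to be notified separately, which is exactly the gap described above.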

#### 6. Privileged users and administrator identities

When we turn to IaaS and virtualisation, the requirements for administrator (or root) security change significantly. With single machines it was a fact of life that the administrator had all the access rights. Many organisations installed some form of control to prevent the administrator from accidentally killing all processes, deleting all users or even from viewing all data.

In a virtual environment with one 'machine' running hundreds or even thousands of virtual servers, often for different organisations, this becomes even more important. In fact many firms split responsibilities between administrators, allowing some to create, move or delete virtual machines and others to access/operate specific sets.

#### 7. Securing virtual machines

Traditional security management needs to be reconsidered in a virtualised environment.

Securing virtual machines in the same way as physical servers has a practical drawback: because virtual machines are not active all the time, periodic virus and malware scans, critical updates and patches all run at once, as soon as the machine comes online, rather than being spread across off-peak hours. The outcome is seriously reduced performance at exactly the moment we wake these machines up to perform a specific task.

Having off-line virtual machines may also lead to a false sense of security, as the latest compliance scan or report may show all live systems as fully patched and up to date, while ignoring all non-active virtual machines.

Security also needs to be aware of the significant changes the cloud requires to IP, firewall and port settings. Physical servers typically run in or behind a demilitarised zone (DMZ) with security applied in that context. Virtual servers, on the other hand, can be started and moved just as easily inside or outside an organisation's own firewall or DMZ to anywhere in the cloud. One notorious example would be a developer starting up a virtual copy of a production image on their laptop to try out a change, only to have that virtual copy tell all the production servers in the data centre to route all transactions to the copy now running on this laptop.

The conclusion is that virtual servers require advanced security administration tools - to an even greater extent than physical servers do.

## 2.4: Cloud computing: the building blocks

Virtualisation isolates objects from the underlying hardware and enables objects to be moved simply across different physical infrastructures. This makes it a key enabler of IaaS. In this chapter, we will briefly discuss the various types of virtualisation.

### 1. Types of virtualisation

#### Network virtualisation

Today, the most widespread procurement of infrastructure as a service is found in the field of WANs (wide area networks). About two decades ago, most multinational organisations still owned and managed their own WAN, consisting of a vast and expensive network of fixed, leased and dial-up lines that connected the various national and international branches.

As technology progressed, sharing an existing network infrastructure with other organisations was found to be more efficient than each enterprise connecting all its branches itself. Early telecommunications provider offerings were based on X.25, later on frame relay and now, increasingly, on the standard internet protocol (IP). This 'rented' on-demand network capacity still needed to appear as a separate private network to customers, so a virtualisation layer was used to make it appear as if only the machines in the customer's offices were connected to the network: a virtual private network (VPN). Recently, Amazon has started offering a similar VPN option, which makes the servers it provides part of the customer's infrastructure by logically placing them inside the customer's (virtual) network.

#### Storage virtualisation

The simplest example of storage virtualisation is the apparent availability of a drive D: on a PC, when in reality it is a directory on a larger disk down in the company's data centre. Here a virtualisation layer presents part of a larger whole as a specific dedicated facility to the user.

The Amazon S3 service (simple storage service) is another example of storage virtualisation. Objects (files, images) can be stored and retrieved using a simple web service interface. Sites such as Flickr, SlideShare and Twitter now use S3 storage services, but S3 can also be used as a backup medium or default storage device. Apple's iCloud service and Microsoft's equivalent, Windows Live SkyDrive, offer a virtual disk in the cloud for consumers. With these services, consumers can store their data (emails, pictures, documents, music) in one place with 24/7 access.

Remote virtual storage does require changes in how we manage it. The common way for an application to check whether a file is still available and not corrupted is to open it and read it. With remote storage this means transmitting the whole data collection across the network just to be assured it is still there and correct.

Several storage vendors are working on a smarter storage application programming interface (API) that allows the management application to carry out this verification. The faster the networks become and the more these storage services meet B2B requirements, the greater the advances this type of storage virtualisation is likely to make.
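
The idea behind such an API can be sketched as follows. Instead of downloading the whole object, the client compares a locally recorded checksum with one the storage service computes on its side (Amazon S3's ETag header works along these lines for simple uploads; the interface below is a simplified assumption, not any vendor's actual API):

```python
import hashlib

class RemoteStore:
    """Toy storage service that keeps a checksum alongside each object,
    so integrity can be verified without transferring the data itself."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data: bytes):
        # The service records a hash when the object is stored.
        self._objects[key] = (data, hashlib.md5(data).hexdigest())

    def checksum(self, key):
        # Server-side verification call: only the small hash
        # crosses the network, not the whole data collection.
        return self._objects[key][1]

store = RemoteStore()
payload = b"quarterly-backup-contents"
store.put("backup/2011-q3", payload)

# The client verifies the archive is intact against its own record:
local_hash = hashlib.md5(payload).hexdigest()
print(store.checksum("backup/2011-q3") == local_hash)  # True
```

For a multi-gigabyte backup, exchanging a 32-character hash instead of the full object is the difference the paragraph above describes.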

#### Server virtualisation

Server virtualisation is currently the most important, and certainly the most discussed type of virtualisation.

The principle is once again the same: a section of shared physical infrastructure represents itself as a dedicated resource. Today VMware is the best known vendor in this field, but there are several others such as Xen (Citrix), Hyper-V (Microsoft) and KVM (Red Hat).

The Amazon Elastic Compute Cloud (Amazon EC2) service enables users to rent such virtual servers over the internet. Quick loading of virtual servers is possible through the use of image files (types of backup file). Similar to an operating system loading a spreadsheet or document to make it available to users for editing a moment later, a hypervisor (virtualisation layer) loads an image of a full computer and makes it available for use instantly.

Traditionally, physical servers are very rarely used for something else or even powered down after being configured. With virtual servers, we load and unload images all the time, based on demand. Thanks to the virtualisation layer, we can run the virtual server images easily on different types and brands of servers (for example, Dell, HP, IBM or white label x86). We can even move Linux applications to a mainframe hosting thousands of such Linux images. Originally, hypervisors added significant performance overhead. But today's hardware is optimised for running them and the added flexibility far outweighs this now minor overhead.

**Avoiding VM sprawl and stall**  
VM sprawl occurs when the number of virtual machines running in a virtualised infrastructure increases over time due to the ease with which they can be created rather than their necessity to the business. This leads to management complexity and wasted licence costs for unwanted virtual machines.

VM stall occurs after companies have virtualised the 'low hanging fruit'-typically the test and development servers and some of the less critical production servers. It will be clear that the benefits of sharing infrastructure remain largely elusive if only 30 percent of the production servers have been virtualised.
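
A first step in containing sprawl is simply knowing which machines are still earning their keep. The sketch below is a toy inventory scan under invented data; real tools would draw on hypervisor usage metrics rather than a hand-kept dictionary:

```python
from datetime import datetime, timedelta

def sprawl_candidates(inventory, idle_days=30, now=None):
    """Flag VMs idle for longer than `idle_days` as candidates for
    retirement. `inventory` maps VM name -> last-activity timestamp
    (a toy model of what a real usage-monitoring tool would collect)."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=idle_days)
    return sorted(vm for vm, last_used in inventory.items() if last_used < cutoff)

now = datetime(2011, 9, 1)
inventory = {
    "web-prod-01": datetime(2011, 8, 31),  # busy production server
    "test-old-07": datetime(2011, 3, 15),  # forgotten test machine
    "demo-may":    datetime(2011, 5, 2),   # one-off demo, never deleted
}
print(sprawl_candidates(inventory, now=now))  # ['demo-may', 'test-old-07']
```

Each flagged machine represents management effort and possibly licence cost that could be reclaimed, which is exactly the waste the sidebar warns about.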

#### Application virtualisation

Unlike network, storage and server virtualisation, application virtualisation is about traditional PC applications working within a 'virtual box', which allows them to be used on the spot without having to go through an install procedure. The virtual box, including the application, is simply loaded as an image. Not only does this ensure that the application does not conflict with others (each contained in its own virtual box) but it also does not alter the underlying operating systems by adding settings to the registry, or loading or deleting DLLs.

#### Desktop virtualisation

With desktop virtualisation, the user's desktop no longer resides exclusively on the local PC of the user. Instead, it uses a virtual machine image installed on a server elsewhere in the office or somewhere in the cloud. That image can be run centrally and accessed via a browser or it can be initiated on the machine that is most convenient: a laptop, home game PC, MacBook or machine at a client site.

This has advantages in terms of mobility and resource sharing. But it also means that the desktop can be accessed by lighter, less energy-consuming devices, such as a notebook, tablet computer or smartphone. Organisations can configure and secure virtual desktops to run inside a safe box on any device, which is important as more and more people use less secured personal devices such as phones, tablets and home PCs for work-related activities.

Sun Microsystems, now part of Oracle, has been offering a virtual travelling, thin client-based desktop for several years. However, comparatively few companies have adopted them as they required proprietary hardware. Users, too, have remained wedded to the idea of their own, personal desktop.

The virtual desktop is predicted to become mainstream soon, driven by increased worker mobility, use of multiple end user devices (phones, tablets), concerns about security, and advances in more economic and user-friendly virtual desktop technology.

### 2. Automation

Virtualisation is one side of the coin that makes cloud computing possible; the other side is automation. Automating the creation and configuration of virtual machines is the key to releasing the on-demand, dynamic scaling capacity of the cloud. Why? Because configuring virtual machines manually is too slow, while without virtualisation the environment is too complex to automate. Using virtualisation first, to restructure applications into a set of independent blocks that can easily be added or removed, makes automation feasible.

Automation also enables self-service and elasticity, and helps to get a handle on VM sprawl and overcome VM stall. Apart from industrialising the provisioning process, automation can be used to monitor application traffic response time or to quickly perform root-cause analytics that help isolate and remediate faults in the virtual environment. The upshot of all this automation is that it allows IT to spend more time on the business and with users, and less on the technology and plumbing that makes it work.
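
At its heart, automated provisioning is a reconciliation loop: compare the virtual machines that are running with the number the demand calls for, and create or destroy the difference. The sketch below is a toy version of that logic (names and actions are illustrative; a real system would call a hypervisor or cloud API instead of returning tuples):

```python
def reconcile(running, desired):
    """Return the create/destroy actions needed to move the pool of
    virtual machines from its current size to the desired size."""
    if desired > len(running):
        # Scale out: mint new VM names beyond the current pool.
        return [("create", f"vm-{i}") for i in range(len(running), desired)]
    # Scale in (or no-op): retire the surplus machines.
    return [("destroy", vm) for vm in running[desired:]]

pool = ["vm-0", "vm-1"]
print(reconcile(pool, 4))  # [('create', 'vm-2'), ('create', 'vm-3')]
print(reconcile(pool, 1))  # [('destroy', 'vm-1')]
print(reconcile(pool, 2))  # [] - pool already matches demand
```

Because the loop destroys as readily as it creates, the same mechanism that delivers elasticity also works against sprawl: machines no longer needed are retired rather than forgotten.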

## 2.5: Cloud computing: management aspects

In later sections, we discuss a strategy for how the IT discipline should evolve. First though, we close off our initial description of cloud computing by looking at how cloud computing directly impacts day-to-day IT management.

### Short term - manage one more platform

In most cases the cloud will initially be an additional platform to manage and monitor. Alongside Windows, UNIX/Linux and maybe the mainframe, organisations will now have applications running in yet another set of environments. We deliberately use the plural here because there are many cloud platforms out there (Amazon, Rackspace, Terremark); and in terms of the private cloud there are also many vendors and platforms, including VMware, Xen, Cisco, IBM, HP and Microsoft.

With users playing a more dominant role in selecting cloud solutions, it will be very hard for organisations to maintain standardisation. As a result they should plan for managing diversity. Essentially this will be a hybrid group of external and internal cloud platforms from many vendors, combined with traditional platforms. And it is too soon to bet on which one will turn out to be the 800-pound gorilla.

#### Change management complexity increases

Having good change processes and reliable configuration data in place will be even more essential in a dynamic 'provision to order' cloud environment than it is in today's relatively stable data centres. We all know the stories about IT departments too afraid to switch off a certain server because they have no idea what it does. When this is a virtual server in the cloud, paid for by the minute, it will be even more essential to understand the business processes it is supporting so the correct decision can be made.

#### Manage or predict

The management of cloud platforms is also different in that they cannot actually be managed physically. For example, we cannot configure or tune the servers we source from Amazon EC2, nor can we move the customer relationship management (CRM) application we use as a service to a different server (the service provider takes care of all that). This means some aspects of cloud management become monitoring and predicting availability and planning alternative routes in case there is a problem. It is comparable to a pilot who uses a weather report to determine an optimal route, rather than to decide how he will change the weather conditions at a certain destination.

#### Operations planning

This requires IT operations managers to become IT operations planners. This planning and management process is comparable to the distribution and production planning role in a large industrial organisation. Eventually this leads to a supply chain management approach to IT, in which IT optimises delivery across a variety of in-house and sourced options.

**Planning is essential, whether resources are owned or hired**  
To illustrate the case for planning, consider the example of a manufacturing company that decides to switch from owning and running its own fleet of trucks and directly employed drivers, to using third-party services.

Deciding whether to hire trucks and drivers on a daily or hourly basis, or to use a parcel service to send products to customers, still requires essential planning and management of the distribution process.

### Longer term - towards a new role for IT management

Imagine for a minute that our organisation sources everything as a service, using applications from various SaaS and PaaS providers. In this instance, would there still be a role for IT management? Regardless of whether this fictitious end point will ever be reached, there are a number of things that still need to be managed:

#### Managing support

In such a multi-sourced environment, integrated support becomes critical. Organisations using several different SaaS applications would not want their users to go to each individual vendor for support and file issues in as many different places, only to be told each time that the issue does not lie with that particular vendor.

#### Managing integration

Overlooking integration is also a management task that is likely to remain with IT. If the organisation uses CRM from one SaaS vendor and distribution planning from another, IT will be expected to connect or integrate the two. In this case, having an integration capability that works across different cloud and non-cloud applications becomes essential.

In the ERP area, companies eventually concluded that integration between multiple solutions was just too difficult, and senior management started to enforce a single-vendor policy (resulting in increased vendor lock-in and the associated high annual bills). An equivalent single-vendor policy is not feasible, or even sensible, in the cloud environment: the cloud's functional scope is much wider, decisions are more user led and the market is much younger, with hundreds of vendors still jockeying for leadership positions.

#### Managing the cost

Another role that will become more important is cost management. As choosing between several external and internal options becomes more standard, it becomes more important to have a thorough understanding of the cost of these alternatives and the impact on the overall cost of the service.

#### Managing a catalogue

A good way of making life easier (both for users and for the IT department) is offering a catalogue of available and approved cloud applications. This catalogue can include internally provided and externally sourced solutions; and ideally the consumer would not even be able to tell the difference. An early example of such a catalogue is Apps.gov, a catalogue of pre-approved cloud services for use by any U.S. federal government organisation.

#### Managing SaaS

People often assume that SaaS does not have to be managed by IT, as IT is not writing the code, running the code or even running the machines the code is installed on. Some also question how IT can monitor, manage and secure SaaS applications, given that it may not be aware of which applications users are deploying as a service.

This is a tough one, but no different from how HR, procurement and manufacturing had to adapt when organisations started to subcontract and outsource their operations. Purchasing used to process every single purchase order; now it negotiates master agreements and departments place orders directly with approved vendors. Similarly, IT has to get used to being less in control and no longer pulling the switches itself. Yet it will still be held responsible for the aspects that cut across the various SaaS offerings, such as support, integration and cost.

#### Managing security and risk

When users go out and select solutions that fit their needs, some requirements will be more important to them than others. Functionality, ease of use and possibly even cost will be seen as critical, while other aspects such as ease of integration, vendor viability, business continuity and security may not be top of the list for the average business department. Selecting and auditing vendors and specific offerings against such criteria will still need to be done, regardless of whether it is done by the security, purchasing or IT department. For cloud services it may actually be the IT department that has the best mix of skills for this.

#### Managing the service portfolio

Some may argue that in a pure cloud environment, where all applications have moved into the cloud, governing the portfolio will be the only remaining task for IT management. That includes monitoring which services to add, which services to retire, evaluating the cost of these services and keeping a finger on the pulse of all associated transition projects.

With so many options, it is more complex than ever to make the right choices about vendors and solutions - and to monitor the successful implementation of those decisions. Portfolio management allows IT to balance investments and costs against risks and available resources, and gives the organisation the helicopter view it needs.

We will return to the subject of portfolio management and the cloud later in the book.

## 2.6: Cloud computing: from definition to deployment

So far we have set the scene for cloud computing. To some the whole cloud thing may be a bit overwhelming, providing yet more acronyms and complexity; to others the idea of 'the best computer is no computer' sounds very attractive.

Although cloud computing combines many existing concepts, it constitutes a fundamental change, comparable to the move from custom software to standard packages on the application side, and it is as impactful as outsourcing has been on operations.

In general, customers are excited by the prospect of using the most economical way of performing computing. However, they are not necessarily excited by the idea of moving their data or computing off site into a cloud environment.

The role of IT will certainly change. Technical skills, like programming and configuration, will become less important; maintaining an overview and achieving synergy will grow in importance. In the following chapters, we will examine the changing role of IT management from different perspectives.

With regard to security risk (which we referred to earlier as probably the most common objection to cloud computing), there are indeed some questions and issues that still need to be addressed. But the argument that 'our systems are so business-critical that we would never risk bringing them under a cloud' is off the mark. If this were the case, companies would never have outsourced or off-shored parts of their operations. It also ignores the enormous investments cloud providers are making to address these valid concerns.

On the infrastructure side, virtualisation and cloud computing add a set of platforms that can significantly increase utilisation, scalability and sharing of resources, leading to lower and more variable cost. But in the short term, one should not expect these cloud platforms to replace all current internal platforms, but rather be used in addition. Regarding applications, shorter time-to-value and broader functionality at lower initial cost make SaaS very attractive, but the danger of vendor lock-in is also significant here. The big promise is PaaS, which promises to deliver the low cost and scalability enabled by mass standardisation, while still offering scope for differentiation.

One thing is certain: cloud computing offers too many possibilities and opportunities to ignore - and there's not much chance of that in the current media hype. But at the same time, the rules for cloud computing have not been set in stone. Approaches that seem a no-brainer or a definitive no-no today may be regarded very differently in just a year's time.

In the remainder of this book, we are therefore not going to try and give a 'definitive cookbook' for cloud computing. Instead, we will attempt to share a variety of perspectives and insights exploring how you could apply cloud computing to your organisation's specific situation. It will allow you to reach your own conclusions about the deployment and management of cloud computing. In that spirit we will begin by shedding light on some of the more philosophical questions surrounding cloud computing.

## 2.7: Cloud Computing: Simply a better way

Summarising, one can conclude that cloud computing promises to enable enterprises to select the services they need from several vendors, at competitive cost, without running into lock-in or scalability issues.

Below is an analogy from the consumer IT market to describe the difference between traditional and cloud computing:

**IT, the old way**  
As a consumer you would go to a computer store to pick a software package, for example a cooking application. From the 20 available offers you pick one (probably the one with the nicest picture on the box), only to get home and discover your PC has a release of the operating system/database/browser that is not supported. After fixing this (there goes the weekend), you still cannot get it to run.  
You solicit some consulting from your neighbour/nephew/colleague, while your spouse remarks that at this rate you will be eating takeaway meals for another month (no pressure!). Finally, during week three, you get it to work, although printing still has its quirks. You have learned a lot more about your PC, but little about cooking. One month later you buy a new PC and strangely the whole thing stops working again. Luckily, the vendor sends you an email in which they offer an upgrade that runs on your new PC. Comparing it to the cost of takeaway, you decide to buy the upgrade.

**IT, the new way**  
You feel hungry, so without leaving your seat, you visit the app store on your phone. They offer 60 cooking applications and you pick the one most downloaded (after reading some of the user comments). You prepare your first dish. It is too salty. You blame the application, remove it, and pick another one. That tastes better. You decide whether you use the free version (that includes an automatically printed shopping list for the supermarket chain sponsoring the app) or you pay 20 cents per recipe cooked.

The cloud experience we are aiming for should, of course, feel like the second scenario. Also note how, in the first example, we talked mainly about technology and in the second mainly about cooking. Somehow we, in IT, moved from talking about what our companies do (selling soup, soap or insurance) to mainly discussing technologies (like SOA, SOAP and yes, cloud).

In other words, we need to change from being supply driven, with IT in the role of factory managers running the production of services, to a demand-driven IT organisation. Here, IT adopts the role of a supply chain manager, finding the best way to source the functionality for the business, preferably without locking our company into a dead-end street. The end goal is being able to deliver the 20 percent that really differentiates our company, while at the same time being able to source the 80 percent that is pretty much the same for all companies.

That type of agility is the real promise of cloud computing.


# Section 3:  
Cloud Questions

In a Dilbert cartoon on cloud computing, Scott Adams features Dogbert as a cloud consultant. After saying "blah blah cloud" four times, Dilbert's boss says, "Amazing, it's like you're a technologist and a philosopher, all in one". With this in mind, it is time to follow up our short introduction to the technology with some of the more philosophical cloud questions.

## 3.1: Is Hybrid the new Black?

After fierce and often heated debate, the proponents of private and public clouds now seem to be rapidly reaching consensus on the importance of being hybrid. In fashion terms one could say 'hybrid is the new black'. Here we examine why everyone feels it is so cool to be hybrid.

If cloud computing was the buzz last year, then hybrid cloud is well on its way to rule this year. VMware was one of the first - at its annual VMworld conference - to be a very vocal proponent of hybrid clouds, offering a choice between running workloads internally or at an external provider. But since then, many vendors - HP being one of the latest - have adopted a similar 'your place or mine' strategy.

We can look at hybrid cloud computing in terms of the definition - like this one from NIST:

With hybrid cloud the cloud infrastructure is a composition of two or more clouds (private, community or public) that remain unique entities but are bound together by standardised or proprietary technology that enables data and application portability (for example, cloud bursting for load balancing between clouds).

A more graphical and certainly more entertaining way to discuss hybrid cloud computing is by using an analogy with hybrid cars. The reasons why people get so enthusiastic about hybrid clouds are very similar to why people love hybrid cars, especially the upcoming generation of plug-in hybrid cars. Now we will examine some of these in more detail.

#### Economy: more miles to the gallon, more apps per CPU

By smartly using the energy source that makes most sense, hybrid technology delivers mileage that is far beyond what is possible with conventional technology. Similarly, cloud computing leverages virtualisation technology to dramatically increase the utilisation of available infrastructure. By doing this it delivers more return on on-premise infrastructure investments, but when needed, it can also burst out to external pools of resources. With cloud computing we abandon the idea of a dedicated - full-powered - server for each application, just like hybrid cars challenged the idea that you need eight cylinders and massive cubic inches to propel the typical lonely commuter to work.

#### Grid benefits and... grid independence

A plug-in hybrid car allows you to use lower-cost energy from an electricity grid while the conventional engine - at the same time - extends your range. So you do not have to worry all day about whether you will reach the next charger before your battery goes flat.

With (public) cloud computing, many worry about the dependence on the network and external providers. A hybrid cloud approach makes it possible to utilise cheap external capacity without becoming totally dependent on it.

#### Reach

By combining the gas and battery capacity, hybrid cars can drive all the way to Rome and back (depending of course on where you start). Similarly, hybrid clouds can help organisations extend their reach into new foreign markets. They enable the moving of applications (and if needed data) closer to the location of the users, without requiring the time, capital and knowledge needed to build local datacentres and overcoming the latency issues that may arise when using non-local data centres.

#### Additional power on demand

Few hybrid cars are used for drag racing, but the additional electric motor can give just that little boost when needed. Several luxury brands are now introducing MPV-type cars with small 1.2-litre, 4-cylinder engines that still enable their owners to impress the crowd at any traffic light.

The elasticity brought by cloud bursting (using additional capacity from the public grid when needed) is actually the most mentioned advantage of hybrid clouds. Instead of stacking and racking up for maximum capacity, you can size your infrastructure for average or even for minimal capacity, with additional needed power available on demand.
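The bursting policy described above - size for average load, rent public capacity only for the peaks - can be sketched in a few lines. The capacity numbers and function names here are purely illustrative assumptions; real provisioning logic depends entirely on the provider's API.

```python
# Sketch of a cloud-bursting sizing decision: the on-premise cloud
# absorbs the baseline, the public cloud absorbs only the overflow.
# All figures are illustrative, not from any real provider.

ONPREMISE_CAPACITY = 100  # requests/sec the internal infrastructure handles

def burst_instances_needed(current_load, instance_capacity=25):
    """Return how many public-cloud instances to rent for the overflow."""
    overflow = current_load - ONPREMISE_CAPACITY
    if overflow <= 0:
        return 0
    # Round up: a partially used instance is still a whole billed instance.
    return -(-overflow // instance_capacity)

# A normal day fits on-premise; a seasonal peak rents just four instances.
print(burst_instances_needed(80))   # 0
print(burst_instances_needed(190))  # 4
```

The point of the sketch is the shape of the business case: instead of permanently owning capacity for the 190-requests/sec peak, you own 100 and pay for the difference only while the peak lasts.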

#### Cool/green image

Both hybrid clouds and cars have a carbon-friendly image. Impressing their neighbours with how ecologically aware they are might not be the main reason for companies to go hybrid cloud yet, but it could make their résumé look a lot cooler and greener.

#### Proven concept

After a century of conventional cars, the hybrid car has proven to be a reliable alternative remarkably quickly. In part because it basically includes the concept of a traditional car, but with additional benefits and possibilities. The idea that it could be run as a traditional car is very comforting to many potential buyers.

Hybrid clouds work in a similar way: you could run a hybrid cloud as a traditional in-house facility. This approach would not give you the potential benefits of the open public cloud, but it does address a lot of the worries and perceived risks that are still associated with the use of public clouds.

As described above, it is remarkable how many similarities there are between hybrid cars and hybrid clouds, but there are also some distinct differences. Here are a few examples:

#### Total cost of ownership (TCO) and capex versus opex

Regarding the TCO of hybrid cars, there still are some unknowns; for example, the long-term trade-in value and the cost of replacement battery packs. But the better mileage and lower running costs are expected to compensate for the typically higher initial investment.

With cloud computing, the initial investment is actually lower than for traditional computing. We no longer need to invest for maximum capacity and the public bursting capacity is funded as opex (operational expense) instead of as capex (capital expenditure). With regard to the cost per transaction, reports are still mixed. Some feel public cloud is bound to be more expensive in the long run, while others expect the public cloud providers to become so cost efficient and the market to be so competitive that the reverse will be true. But regardless, the beauty of a hybrid cloud model is that you can change your mix of private versus public based on actual market prices.

#### Who decides what power plant to use?

In case of the hybrid car it is the car that decides automatically what engine to use. Certainly, you can override it (sometimes) but it is designed to be a seamless experience for both passengers and driver.

With hybrid cloud computing the experience may be seamless to the passengers (the users), but the driver (IT) will need to decide what workload to move where, based on criteria like cost, risk and legal constraints. Over time, IT will move from running an internal power plant to orchestrating a dynamic supply chain of both internal and external services. Decision support tooling - to help IT optimise the outcome of these decisions - is becoming available, for example as cloud-connected management suites that enable IT to set, monitor and assure service levels, costs and risks. And we should not forget that with cloud we have a lot more choice than just 'your place or mine'. Locking yourself into one vendor's platform, in either hosted or on-premise mode, is comparable to buying a hybrid car that can only be refuelled at the dealer of that brand. Hybrid cloud management suites can move workloads across different vendors, platforms and hypervisors, allowing the driver to reap maximum flexibility and efficiency.
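The placement decision such a management suite makes can be pictured as a constrained cost optimisation: pick the cheapest venue that still satisfies the workload's risk and legal constraints. The venues, prices and risk levels below are invented for illustration, not taken from any real suite.

```python
# Sketch of hybrid workload placement: cheapest venue that meets
# the workload's risk tolerance and data-residency constraint.
# All venue data is a made-up assumption for the example.

VENUES = [
    {"name": "public-eu",  "cost_per_hour": 0.10, "region": "EU", "max_risk": "low"},
    {"name": "public-us",  "cost_per_hour": 0.08, "region": "US", "max_risk": "low"},
    {"name": "private-dc", "cost_per_hour": 0.30, "region": "EU", "max_risk": "high"},
]

RISK_LEVELS = {"low": 0, "medium": 1, "high": 2}

def place(workload):
    """Return the cheapest venue satisfying the workload's constraints."""
    candidates = [
        v for v in VENUES
        if RISK_LEVELS[v["max_risk"]] >= RISK_LEVELS[workload["risk"]]
        and workload.get("must_stay_in") in (None, v["region"])
    ]
    if not candidates:
        raise ValueError("no venue satisfies the constraints")
    return min(candidates, key=lambda v: v["cost_per_hour"])["name"]

print(place({"risk": "low"}))                          # public-us
print(place({"risk": "high", "must_stay_in": "EU"}))   # private-dc
```

Note how the same workload lands in a different place the moment a legal constraint is added - exactly the 'driver decides' role the car analogy describes.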

#### The 'plug-in' hybrid

As mentioned, it is the new generation of plug-in hybrid cars that really offer the best of both worlds. So what makes a hybrid cloud a plug-in hybrid cloud? My suggestion is that if it takes more than a week to set up, it probably is not a plug-in. Does that seem ambitious? Not really. There are already hybrid clouds that can be taken for a short test drive, just as easily as taking a hybrid car for a spin. But customers must not fall for the old sales trick of "You can have any colour, as long as it's black".

## 3.2: Will audits and certificates erase cloud security concerns?

In every cloud survey, security consistently comes out as an inhibitor to cloud adoption. Even though this has been the case for several years, many feel that it is a temporary barrier that will be resolved once cloud offerings become more secure, mature, certified and thus accepted. But is this indeed the case or do we need another approach to overcome this barrier?

During a recent cloud event, two speakers from a large accounting and EDP auditing firm (a firm auditing electronic data processing systems) took to the stage to discuss the risks associated with cloud computing. While one speaker dissected the risks for both consumers and providers of cloud services, the second speaker discussed the various certifications and audit schemes that are available in each area. They acknowledged that with the currently available certifications, not all risks were covered, but their envisioned remedy was even more comprehensive certifications and audits. This may come as no surprise given the speakers' backgrounds, but more paperwork simply will not address what IT professionals are really worried about.

Let me try and explain my thinking, including how the recent WikiLeaks events influenced this.

Security is often cited as the major concern associated with cloud adoption. My view is that the apprehensions are more about fear of losing control (not being able to restore service when needed) than the fear of losing data. Fear of losing data can be addressed by cloud providers through implementing security solutions, but fear of losing control cannot.

The big difference between traditional IT and cloud computing is that cloud computing is delivered as a service. With traditional IT we bought a computer and some software. If it did not work, we could fix it ourselves (sometimes a firm kick would suffice). Whatever happened (good or bad), we were the master of our own destiny. And even with traditional outsourcing, we often told the outsourcer what to do, and in many cases how to do it. If push came to shove and the outsourcer really screwed up, we could at least in theory still say, "Move over, let me do it myself."

When something is delivered as a service, there is no equipment to kick and we can no longer say, "Move over, I'll do it myself." We probably would not even be allowed to enter the room where the equipment is located or have access to the underlying code and data. If your biggest customer (or your boss, or the boss of your boss) is on the phone screaming at you, that is not a position many people want to find themselves in. And believe me, showing all the certificates and audit reports that your vendor accumulated and shared with you will not quiet them down. And what if the vendor has made a conscious decision to discontinue rendering the service, as seems to be the case with WikiLeaks?

Now you may feel your organisation would never do something that would warrant or even cause such behaviour by your vendor. But what if a judge ordered your vendor to discontinue the service? This can happen and has happened, sometimes because of minor legal technicalities or unintended incidents, like a server sending spam or an employee collecting illegal content on a company server. Google and other mail providers have been ordered to cease mail services to both consumers and business, and have complied. You could potentially go to court and appeal, but will that be quick enough?

For each 'as a service' offering we will need to evaluate what is a reasonable risk and how to remedy the unreasonable ones. What is reasonable will very much depend on the type of industry. In the following examples we look at two scenarios: the service not working (an outage), and the data being stolen. Some businesses may barely notice an outage, others may be severely inconvenienced, while for others - for example, if they cannot invoice or miss a deadline on a project with severe penalty clauses - it could jeopardise the continuity of the entire organisation.

**Impact of an outage**

  * Email. If email is down but phones, instant messaging, text messaging and maybe the occasional fax are still available, then a few days' outage may be reasonable (for some companies), provided all email is restored at the end of the outage - regardless of whether we moved to a new provider, the old one finally got it fixed or they switched us on again. With regard to theft: nobody likes their personal conversations discussed in public (see again the WikiLeaks example), so measures like encryption, digital signing, using SSL and working with reputable (OK, let's call them certified) vendors are in order.

  * CRM. This system tells us what our sales team has been up to. Before we implemented CRM (fairly recently in many cases) we had limited insight into sales activities, so a week of outage may be acceptable (again, it depends on your industry). With regard to theft, these are often records about people, so legal and privacy requirements apply - not to mention that you may not want this data to show up at your direct competitor.

  * Invoicing, order intake, reservation management. The impact of these very much depends on the industry, but in some industries a single hour of outage at the wrong moment can already mean bankruptcy. In this case you probably want a hot-swappable system, preferably at two different 'as a service' vendors.

  * Project management. The impact depends on whether you are a system integrator with penalty clauses or an innovator rushing towards a product launch.

  * Book-keeping. Impacts here may vary: an outage during end-of-month closing will have more impact than an outage early in the month.

I think you get the point. For each service that you would consider moving into the cloud, you have to determine the importance, criticality and impact of disruptions (I am sure you keep an up-to-date list with this information for all your services). This exercise may actually save you a lot of money. Most services are not under-provisioned but over-provisioned. In case of doubt, IT tends to move services to the more secure, more reliable and more failover-equipped platform. A famous example is the company that was running its internal employee entertainment Tour de France betting system on a hot-swappable, 'dual everything', non-stop system.

Next, for each service you must determine what a reasonable recovery period is, and how to implement it. It could be simple source code escrow (with the right to keep using the code) and a failover contract with a nearby infrastructure provider. Or it may require having a fully up-to-date system image ready to provision within an hour. For other scenarios, you may be running two instances of your service or application, in parallel, at two separate service providers on different grids, different networks and in different jurisdictions. And for some you may not bother. It is like insurance: most people insure their house against fire (as they could not overcome the financial impact if it burned down) but many do not insure their phones or cars against theft or damage (as they can afford to buy a new one if needed without going bankrupt, even though it may be severely inconvenient). There is also a case of being too cautious. I remember at my first employer, the book-keeping department of the local plant would travel separately to the annual company outing (two by train and two by car), even though we had 12 factories located within a hundred miles, each with four book-keepers. I am sure we would have closed the books somehow in case of a travel mishap.
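The evaluation above - match each service's tolerable outage to the cheapest precaution that still covers it - can be summarised in a small decision table. The thresholds and measure names are illustrative assumptions, not a prescribed methodology.

```python
# Sketch of per-service recovery planning: the shorter the outage a
# service can tolerate, the more expensive the precaution.
# Thresholds and measures are made up for illustration.

def recovery_measure(max_outage_hours):
    """Pick the cheapest precaution meeting the tolerated outage."""
    if max_outage_hours >= 72:
        return "source code escrow + failover contract"
    if max_outage_hours >= 8:
        return "up-to-date system image, provision on demand"
    return "parallel instance at a second provider"

services = {
    "email": 72,         # phones and chat can bridge a few days
    "CRM": 120,          # a week of limited sales insight is survivable
    "order intake": 1,   # an hour at the wrong moment means real damage
}

for name, tolerance in services.items():
    print(name, "->", recovery_measure(tolerance))
```

The insurance analogy holds: the table makes explicit which services get 'fire insurance' and which you consciously leave uninsured.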

Hopefully, most of the services currently running in the cloud (CRM comes to mind) fall into the 'severely inconvenient' category. If they are business critical, you would hope the companies have a plan B that allows them to move these jobs quickly to another cloud if the need arises (see also the next chapter). To be able to do so easily, we will need two things: standards that enable more portability than we have today, and automation tools that allow us to do this semi-automatically. Our EDP auditing friends may claim you also need certifications on both the primary and the backup vendors, but I am sure these will remain in the desk drawer when push comes to shove.

A final thought on assuring your services in the cloud. On the insurance front we see that many people do not insure their house against natural events such as earthquakes. First because it is often not possible or affordable, but also because as my father used to say, "If heaven drops down, we will all be wearing a blue hat." Imagine a video on demand provider that is the only one still running after an earthquake, how much good would it do them? In other words, it is all about being pragmatic.

## 3.3: Can public clouds be assured?

As cloud services take over more and more business-critical functions, a question we will need to answer is, "Can we ensure the performance and availability of public cloud services?"

I am not sure we can. Public cloud services are a bit like the weather: we are lucky if we can predict what it is going to be like, but cannot manage or change it as we do not control the underlying elements. The same holds true for the management of public cloud services. So what do we do? Give up on public cloud services altogether? That is like throwing the baby out with the bathwater. Instead, we can follow a method we have been using in IT for a long time. If we cannot rely on a certain item to be continually available, then we make sure we have a failover option.

The best example comes from storage. At a certain moment, people realised that even the most expensive disks encountered failures now and then. So they developed a strategy where failure of an individual disk would not have a seismic impact. The result was RAID, a redundant array of inexpensive disks that, transparently to the user, served the requested data from other disks in the array when one of the disks failed.

How do we apply a similar redundant array approach to cloud services? The idea of contracting for two email services or two CRM systems is counter-intuitive for most IT professionals, since for years we strove to standardise on one of each. And the reality is that if half the company uses one email system and the other half another, 50 percent of the people are still down if one fails. So instead of looking at email in isolation, we should look at all the employee communication options. These may include email, instant messaging, voice over Internet protocol (VoIP), even a social media function like Facebook or Twitter. If all of these are based on different technologies and sourced from different vendors, it is extremely unlikely that they would all be down at the same time.
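The 'redundant array of cloud services' idea above can be sketched as a simple failover loop: try each independently sourced channel in order and succeed as soon as one is up. The channel names and availability checks are hypothetical stand-ins for real service probes.

```python
# Sketch of RAID-style redundancy applied to communication services:
# instead of a hot standby for one email system, fail over across
# independent channels from different vendors. Channels are illustrative.

def send_message(text, channels):
    """Try each (name, is_up, deliver) channel in order; use the first one up."""
    for name, is_up, deliver in channels:
        if is_up():
            deliver(text)
            return name
    raise RuntimeError("all communication channels are down")

sent = []
channels = [
    ("email", lambda: False, sent.append),  # primary provider has an outage
    ("chat",  lambda: True,  sent.append),  # different vendor, still up
    ("sms",   lambda: True,  sent.append),  # third, independent fallback
]
print(send_message("invoice approved", channels))  # chat
```

Because the channels share no vendor or technology, the probability of all failing simultaneously is far lower than that of any single one - the same reasoning that makes RAID work.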

Using chat or instant messaging as a backup for email is not how we traditionally think in IT. Challenging traditional thinking like this is exactly the idea of the Cloud Academy, but it aligns with the next generation of IT users. For example: teenagers (like the two living in my home) instantly switch from MSN to Google chat, to Hyves or Facebook, or even to Hotmail or text messaging, if the service they are using is behaving strangely. They are not particularly interested in whether a particular service is down; their only interest is whether they can continue to communicate with their friends. As today's IT departments proactively monitor their internal infrastructure and know the status of systems, they rarely receive a call saying all systems are down. However, that is not the case with external cloud services. We need to find an alternative early warning system, something like a weather report on the status of the external cloud services that our users depend upon. An interesting site in this context is www.unifiedmonitoring.com.

So what conclusions did we reach in our (sometimes heated) Cloud Academy debates on whether public cloud services can be assured?

Using public cloud services is another step in giving up control of the underlying components, from building our own computers, to buying standard off-the-shelf packages, to cloud computing. Today, very few companies feel they need to build their own hard disk to guarantee data integrity, or manage their own WAN to guarantee connectivity. And in the not-too-distant future this may also hold true for running your own data centre.

We do, however, have to make conscious decisions when to cede control and start sourcing. This differs by industry, type of application and possible risk. In many cases, using public cloud services may already make sense today. But while we are using them, we need to make smart trade-offs and take smart precautions for when the services are not available. The first step is to have some way of monitoring the availability and outcome of these public services, so we can act when we need to - instead of when the user or the customer calls.

In the next chapter we look at a recent cloud outage and how different vendors, providers and analysts responded.

## 3.4: What to learn from the day the cloud was out?

On April 21, 2011 a network mishap caused an outage at Amazon Web Services, taking down more than 100 cloud service providers in its aftermath. At the same time a similar mishap affected Sony's PlayStation network and stopped about 70 million registered gamers from connecting. This was the biggest public cloud outage so far. Time for some analysis.

Much has been written about the outage, so I won't repeat that here (if you want to catch up, I suggest you read TechCrunch for the PlayStation story, GigaOM for an overview of the Amazon issue, eWeek for some interesting analysis, and additionally the DotCloud blog for a very readable explanation of day-to-day use of a cloud service like Amazon Web Services). This is not about putting blame on Amazon or any other providers. In fact, many services continued to run during the outage as they had engineered their solutions for resilience in case of such mishaps in any underlying services.

Instead, let's focus on the big picture of the cost of reliable cloud, comparing it - again - with the move from mainframe to distributed computing. History tends to repeat itself, especially in IT where generations of technology tend to be heavily siloed, and staffed with different generations of people who often do not even sit at the same table in the canteen.

Somewhere during the 1980s, IT pros started to realise they could get the same processing power for significantly less money by selecting distributed servers - till that time used mainly for scientific work - instead of traditional mainframes. Soon after, companies began porting existing applications to the new platforms, focusing initially on applications that were more compute than input/output (I/O) or data intensive (sound familiar?). And indeed, initially the new departmental platform - not requiring all the resource-intensive water cooling, air-conditioning or a raised floor - did seem a lot more cost effective. But it did not take long after the initial proofs-of-concept to find that for some applications we needed more processors, clusters, uninterruptable power supplies and other redundant features.

On the storage front, we started to use the same - not so cheap - storage solutions as we were using on the mainframe. Soon, the distributed boxes started to look and cost about the same as the systems they were replacing. Let's not forget that at the same time, smart mainframe developers - pushed by the competition from distributed systems - found ways to abandon water cooling, leverage off-the-shelf standard components like NICs and RISC processors and even re-examined their licensing cost for specific (Linux) workloads.

What does this all have to do with cloud computing? Here too, we see that running a certain 'compute intensive' workload can be done much faster and cheaper in the new environment. But when replacing these one-off batch jobs with services that have higher availability and reliability needs, the picture changes. We need to have redundant copies and failover machines in the data centre, and in many cases a backup data centre in another part of the country, preferably located on an alternate power grid and connected to multiple network backbone providers. All of a sudden it sounds a lot like the typical set-up a bank would have - and likely with a similar cost profile. So far cloud seems to offer cost benefits, but will this cost advantage still exist if we need to replicate our whole cloud setup at a second vendor - in the worst case doubling the cost? Now cloud has many other advantages beyond cost (elasticity, scalability, ubiquitous access, pay per use, etc.), so many specific use cases are still ideally suited for the cloud, but if a certain application has no need for these, then it may be worth (re)considering the business case for a move to the cloud.

Regarding redundancy, the cloud business case actually has two opposing vectors. On one hand, the fact that as a user you have less control (you can't fly there and 'kick' the server) leads to the need for a secondary backup installation. On the other hand, the cloud - with its pay-as-you-go model - offers much more efficient ways to arrange a backup configuration for running your applications than finding your own second location and filling it with shiny new kit.

At the same time you have to take into account the likelihood of the cloud having capacity available at the moment you need it. If your own data centre fails, I am sure you could find another cloud provider with some capacity. But in this case, where one of the largest (cloud) data centres in North America had issues, with all customers looking to move their workloads elsewhere, it is not certain that enough excess capacity would be available at alternate providers.

In this specific case there seemed to have been technical issues inside Amazon and Sony's data centres that caused a series of events impacting the services running within, but what if there had been a major physical problem, such as a large fire or accident? With more mission critical applications moving to the cloud, companies need to make contingency planning a top priority.

As a result of these incidents, enterprises should take stock and assess what services are truly vital to their customers and/or to their own continuity. Organisations (and the world) seem to be more resilient than you might expect. In the Netherlands we saw telcos cease mobile service in a large part of the country for periods of up to half a day, and ISPs unable to offer Internet services for as long as a week in certain regions. And yet, these companies did not go under. Each industry, government and organisation will have to assess its own priorities, success criteria and standards. As I mentioned in a past blog post, aiming to continue your on-demand video services after a major flood, hurricane or nuclear disaster may be overshooting what is necessary under those circumstances - survival will be the only concern in a case like that.

Looking at the reactions in the market so far, the responses of the vendors impacted by the Amazon mishap seem a lot more benign than the reactions of impacted PlayStation gamers. Maybe because these vendors feel they have a good thing going and the last thing they want to do is kill it with too much honesty. What is refreshing is that not too many vendors crawled out of the woodwork saying "private cloud, private cloud, private cloud" - not even my colleagues who recently published a book on the topic. Good! Nobody loves 'I told you so' types and there is no point in kicking your opponent when he is down. But the case for private cloud did get a bit better - or is it just me thinking that?

## 3.5: The private cloud debate is building up steam, but is it worth having?

Slowly but steadily the debate in the blogosphere about private clouds is increasing. While it is always good to see some debate, is this a debate worth having? In the long term, will the cloud debate be about more than simply who owns a machine?

The private cloud debate is building up steam, under provocative titles like 'Private cloud discredited, part 1', 'part 2' and 'Do We Really Need Private Clouds?' - the latter part of a very readable guest series by IT analyst and forward thinker Robin Bloor at Cloud Commons.

It is always good to see debate, and I vividly remember the excitement some years ago about open systems (with 'open' roughly being defined as anything running on UNIX, versus anything that was not running on UNIX, including mainframes, AS/400s and HP3000s). But in reality, that debate was about as productive as a debate about private versus public clouds may turn out to be.

In my view, the most important thing that the cloud can bring is this: cloud (finally) decouples the application from the underlying infrastructure. As a result, it matters a lot less where it runs: whether it is private or public. In a comment on Robin Bloor's Do We Really Need Private Clouds? blogger Jonathan Davis (CTO of DNS Europe) introduces a good example of that principle. Using a cloud platform (CA 3Tera AppLogic in this case) his company enables applications to be deployed transparently and instantly over grids of compute capacity, regardless of whether these clouds are private (hosted or internal), public or a combination (hybrid).

**Where to start**  
The remaining private versus public question then is: where do we start? Do you start with less-sensitive applications on a public cloud and then apply what you learned to core apps, perhaps on a private cloud? Or do you approach it the other way round: do you start with a more sensitive app on a private cloud and expand to public when you feel that it is proven and secure enough for that application?

Some guidance was given in the cloud scenario session at the recent annual symposium of one of the leading industry analyst companies. The basic tenet of the argument is: first explore whether a job can be done with a public cloud, and consider private only if there are valid and serious reasons not to go public.

During the event, it was also suggested that security concerns should be positioned as valid but temporary challenges to be addressed and overcome, rather than as a reason (or excuse) to discard public clouds.

## 3.6: Who leads cloud computing developments?

Traditionally, BIG IT, the IT operations of large banks, governments and Fortune 1000 companies, were the first to implement new technologies - ranging from the first mainframes to powerful UNIX clusters and, later, rack-based systems.

And for many years, to guide their strategy, technology vendors used the 80/20 rule - that the top 20 percent of companies were responsible for 80 percent of overall global IT spend. Today the transaction volume at the average stock exchange still dwarfs what a phenomenon like Twitter handles, but online entertainment is catching up rapidly.

This really hit home while visiting a large hosted European data centre. There were some corners where you could still find enterprise servers zooming away, but the really big server farms and all the reserved open spots were dedicated to consumer related services such as online gaming, mobile internet and messaging, and on-demand television. The rise of these consumer services will cause unprecedented demands for cloud storage, cloud networking and cloud processing in 2011, but (except when sitting at home on his couch enjoying MTV) the average enterprise IT manager will not particularly notice. In fact, many traditional enterprise IT chiefs may still feel they are BIG IT.

You could argue that this trend of data centres becoming more and more consumer centric is the top-down part of IT consumerisation. The bottom-up part is employees bringing their consumer technology (iPhones, iPads, etc.) and expecting to use it while doing their job. The long-term impact of this top-down trend will be that traditional BIG IT technology vendors will start to focus their R&D more on new, fast-growing markets. Vendors with a running start in this new reality will be consumer electronics companies (like Apple) and technology vendors that grew up with - or grew big through - the Internet. As a result, traditional BIG enterprise IT will become a secondary market; a market where data centre inventions and investments, originally made for the consumer and entertainment market, can be redeployed. Something to take into consideration when picking your strategic technology vendors for the next decade.

Consumer IT will not take over enterprise IT completely during 2011, but the days when we made fun of hardware vendors that made more money on consumer printers and ink than on enterprise data centres are definitely behind us.

## 3.7: Will the cloud end micromanagement?

_For a long time IT felt that microcomputers require micromanagement - an idea that may soon be just as dated as the word 'microcomputer' itself. Below are some of the things that are happening already._

Cloud computing promises to move all functionality into the cloud. At the same time the consumerisation trend is driving the use of consumer electronics such as off-the-shelf laptops, iPhones, MacBooks and home entertainment centres as access devices. Typically, these devices will be cool, flat and inexpensive, and probably will not have a physical keyboard. But more interestingly, it will no longer matter how these devices are configured or even whether they run Windows, Chrome, Linux or some kind of mobile derivative.

More and more organisations offer webmail as a way to access their email systems. This enables employees to access mail from their home PC, an Internet café while on holiday or from customer sites where employees cannot plug in their laptops but do have access to browsers. At the same time we see that the API of this webmail is used to set up access from personal phones and personal digital assistants (PDAs).

With the introduction of intranet sites and SharePoint servers, many of the received mails, however, link back to content on the corporate network. So more advanced organisations are already offering instant intranet access, either via the standard VPN protocols supported by modern devices or by offering an on-the-fly VPN, where the VPN client software is installed via the browser during the first connection.

A related trend is the use of multiple devices by one person. Not many corporations hand out multiple laptops, netbooks and desktops to the same person, but many executives have taken to the idea of an ultra-light tablet for short trips and a solid laptop for longer stays. Indeed, many of you may have both a laptop and a desktop and maybe even a netbook, MacBook or tablet on the side. And if it was easier to have the right data on the right machine at the right time, we would probably swap devices much more often.

We are also seeing several related developments on the application side. Traditionally, enterprise applications required specifically configured client devices (think client/server) and access was only offered to devices inside the corporation's network. Most modern applications offer access from a browser. Originally, the browser interface supported a subset of the functionality, but more and more, the full scope of the application functionality is available to browser-based clients. This reduces the need for client-specific configuration but, more importantly, allows organisations to offer non-employees, who use non-company provided devices, access to these applications.

These can be contractors, temporary workers, employees, subcontractors etc. As a result, companies have taken to offering access to these applications over the Internet. Of course governing and enforcing who is allowed to have access and who is not, is still required, but access itself is no longer dependent on physical availability of a specific configuration or specific client device.

The described developments basically impact our traditional micro strategy in three ways. First, we may use multiple devices sometimes on and sometimes off the corporate network. Second, most application logic will execute on servers in the datacentre (not on our desktop); and third, our data and settings will ideally travel with us like a virtual desktop, instead of being confined to one physical device.

## 3.8: Will the cloud drive consumerisation beyond devices?

The cloud essentially consumerises all IT, not just relatively unimportant bits like procuring personal hardware and software. This requires us to rethink the notion of corporate IT, as the idea of any master design becomes unattainable. How can IT as a species survive this trend, as it may render irrelevant the education of a whole generation of IT professionals?

The idea of consumerisation - users being allowed to freely procure their own personal hardware and software - has been around for a while. But few CIOs and even fewer heads of IT operations have embraced it. Other than some token adoption, where users could choose between an iPhone or a BlackBerry or where users had a personal budget to order from the company-supplied catalogue of pre-approved hardware, we have not so far witnessed broad adoption of the concept. The idea is that users can go to any consumer store or web shop and order any gadget they like, be it an iPad, laptop, printer or smart-phone, configure these while still in the store and access their corporate mail, intranet and company applications. The idea originated when people wanted to use their 24-inch HD PC with four processors and mega memory - all essential to enjoy modern home entertainment and video and far superior to company standard issue equipment - to also do some work.

Cloud computing applications now make this consumer approach possible at the departmental level. Departments selecting and using non-corporate approved or endorsed SaaS-based CRM applications are the most commonly cited example. But more interesting are the cases where departments - tired of waiting for their turn in the endless application backlog of corporate IT - turned to a system integrator to build a custom cloud application to meet their immediate needs. Several system integrators (SIs) indicate that they have more and more projects where the business department is their prime customer, not IT. Contracts, service level agreements (SLAs) and even integrations are negotiated directly between the SI and the business department; in some cases IT is not even aware.

This is not a new phenomenon. We saw exactly the same thing when PCs and departmental servers were introduced. Departments went off on their own and bought solutions from vendors popping up like proverbial poppy seeds and often disappearing just as quickly. Remember Datapoint, Wang, Digital? And those were the ones that lasted! Guess who the business expected to clean up (integrate) the mess they left behind? Yes, the same IT departments they bypassed in the first place. Some may even argue that if IT had not been so busy cleaning up this mess over the last 15 years, they would have had a much better chance at building an integrated solution that actually did meet the needs of the business. I am not of that opinion. We had this chance with ERP and still did not manage to keep up with the requirements. Some things are just too vast, too complex or change too quickly to be captured in any master design.

So back to consumerisation. Although the trend has not been wholeheartedly embraced by most corporate IT departments so far, it is continuing. In my direct environment I see several people who, instead of plugging their standard-issue laptop into the corporate network at the office, take a machine of their choice (often a shiny MacBook) and a 3G network stick to work. For around 20 euros a month this gives them all the connectivity they need and better performance when accessing the applications they care about. And it gives them access to applications most corporate IT departments blocked off until recently, such as Facebook and Twitter. The question is, of course, can they do their work like that? Surely they need round-the-clock, full-time access to the aforementioned vertically integrated ERP system that is only accessible when on the corporate network?

The answer is 'no'. First of all, the vertically integrated type of enterprise that ERP was intended for no longer exists. Many corporations are already outsourcing tasks traditionally done internally: distribution to organisations like DHL or TNT; employee travel to the likes of American Express; HR payroll and expenses to XYZ. The list goes on. One could say that the world has moved from vertically integrated manufacturing corporations to supply chain connected extended enterprises.

Most of these external service providers support their services by offering web- based systems: systems that can be accessed from anywhere, inside and outside the company firewall. At the same time, the remaining processes that occur inside the corporate ERP systems are in many cases so integrated that they hardly require any manual intervention from employees. Consequently, employees do not need to spend their time doing data entry or even data updates on that system. Any remaining required interaction is facilitated by directly interfacing with the customer via other, often Web-based systems and in some cases even via social media systems.

On the IT supply side, the result of consumerisation is that IT can no longer assume our users will be on the corporate network using devices we provided to them. As a result, applications are best delivered as easily consumable Internet or cloud services to employees, partners, customers and suppliers alike. To facilitate this, one large European multinational is already delivering all its new applications as Internet instead of intranet applications. This means any application can be accessed from anywhere by simply entering a URL and authenticating properly, after which the authorisation system determines, based on the user's role, what access he or she should be granted. Whether the user is inside the company's premises (inside the firewall) plays an ever smaller role in the process.
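The role-based access decision described above can be sketched in a few lines of Python. This is a hypothetical illustration only; the roles, permissions and function names are my assumptions, not the multinational's actual system:

```python
# Hypothetical sketch of location-independent, role-based authorisation:
# once a user is authenticated, access depends only on their role,
# not on whether the request comes from inside the firewall.
ROLE_PERMISSIONS = {
    "employee":   {"read", "write", "approve"},
    "contractor": {"read", "write"},
    "customer":   {"read"},
}

def authorise(role, action):
    """Return True if the given role may perform the given action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# The same check applies whether the URL is reached from the office LAN
# or from an Internet cafe on the other side of the world.
print(authorise("contractor", "write"))   # True
print(authorise("customer", "approve"))   # False
```

Note that the user's network location appears nowhere in the decision; that is the essence of the Internet-instead-of-intranet approach.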

When discussing consuming services, we need to look beyond just IT services. The head of distribution may be looking for an IT system supporting the tracking of packages, but the CEO or the COO may think of a full distribution service like those DHL or TNT deliver. Services can range from distribution to complaint tracking, return and repair handling, or even accounting, marketing and reselling. It is the idea of everything as a service, but on steroids. Choosing which services to source and which to provide internally is based on whether there are outside parties willing and able to provide these services and whether the company can gain a distinct advantage by providing the service itself.

Consuming an ever-larger part of the supporting services does bring up another issue. How do we warrant continuity, efficiency and compliance in such a consumption oriented IT world? If it is every man (or department) deciding for themselves which services to source, how do we prevent sub-optimisation? In fact, how do we even know what is going on in the first place? How do we know what services are being consumed, what the quality is and how reliable the suppliers will prove to be? This challenge is similar to the one manufacturing companies faced when they decided to no longer manufacture everything themselves. To respond more quickly to market demand, they abandoned full vertical integration and took a supply chain approach. Adopting such an approach in no way reduced their responsibility for the end result. A car company that decides to source its anti-lock braking systems is still responsible if the car does not stop. Cloud computing is in many aspects similar to adopting a supply chain approach. It can make us faster, more responsive and more cost effective, but in the end we are still responsible for the end result.

A supply chain approach starts with thoroughly understanding both demand and supply, matching the two and making sure that the goods (or in this case services) reach the right audience at the right time... on demand. IT has invested a fair amount of time and effort in developing better ways and methodologies to understand demand and to determine the requirements of the users.

On the supply side, IT has, until now, assumed it was the sole vertically integrated supplier. As such, IT professionals may have used industry analysts' reports to classify the components required, for example hardware and software. But in this new world they need to thoroughly understand the full services available on the market, not just the raw materials. An interesting effort worth mentioning here is the Service Measurement Index (SMI), an approach to classifying cloud services co-initiated by my employer, CA Technologies, and led by Carnegie Mellon University.

Having gained an understanding of both demand and supply, the remaining task is to join the dots. This sounds trivial, but this brokering of services is an activity that analysts expect to become a multi-billion dollar industry within just a few years. Of course, all of the above will not happen overnight. Many a reader (and with a little luck the author) will have retired by the time today's vertically integrated systems - many of which are several decades old and based on solid, reliable mainframes - have become services that are brokered in an open cloud market. A couple of high-profile system outages may even prolong this by a generation or two. But long term I see no other way. Other markets such as electricity, electronics, publishing and even healthcare have taken or are taking the same path. It is the era of consumption.

## 3.9: Will the cloud kill outsourcing, the browser and the web?

Predicting the future is a lot more fun than analysing the past, and there have been plenty of predictions recently.

For starters, Wired magazine announced the death of the (browser-based) web, predicting it will be replaced by dedicated, locally installed desktop or mobile applications - those things we now call apps.

As you can imagine, this article prompted a large response from bloggers and emotions were nearing outrage in some cases. Most of the reaction came from people who simply love their browsers, but I suspect that many SaaS vendors also had a rough night. Being able to run multiple SaaS applications next to each other, while still offering a rather consistent, integrated look and feel, courtesy of HTML and the common web experience, is pretty fundamental to the long term success of SaaS.

Just a week before this, BusinessWeek ran an article by AT Kearney titled 'The End of Outsourcing (as we know it)', which predicted that today's outsourcers will be rapidly replaced by cloud outfits in the relentless pursuit of economies of scale. The article even went as far as to pick winners (Amazon and Google), potential winners (Oracle and SAP) and losers (today's outsourcers, especially the mid-sized Indian companies).

AT Kearney sees today's outsourcing champions, such as HP and Accenture, as hesitant to become cloud providers. Surprisingly (or perhaps not?), the article did not mention IBM, by far today's largest player (HP may be a bigger company today, but mainly because it still sells lots of printers and PCs), nor Apple. You may argue that Apple is a consumer company, but as today's innovations are launched into consumer markets first, we could predict that Apple will move its innovations into the enterprise market soon, offering enterprise versions of cloud offerings like MobileMe (maybe then called MobileInc?). That is, if the world indeed changes as fast as AT Kearney suggests in the BusinessWeek article.

But that is exactly the issue. Today's big enterprise IT is just not that agile. Much of what is outsourced today still consists of code that was first written 20 years ago. We saw several companies try to 'right-size' their pre-relational mainframe databases year after year, always concluding that the effort had no ROI, simply was not worth it or carried too much risk. And as an SAP executive recently said, many large ERP requirements are still far away from anything cloudy.

One prediction we can be sure of is this: tomorrow will be vastly different from today. In fact, today is already vastly different from yesterday, as @Phil_Nash pointed out in a recent tongue-in-cheek tweet, "Welcome to the new decade: Java is a restricted platform, Google is evil, Apple is a monopoly and Microsoft are the underdogs." But at the same time, companies with expansive IT operations will move slowly, as Brian Stevens, CTO of Red Hat, seems to agree. In a recent interview on Bloomberg TV he said, "It's going to be several decades before the technology arrives and our [financial services] customers are using the capabilities of cloud more readily."

We may not directly notice this dichotomy, because magazines, articles and the enormous flood of social media focus almost completely on describing shiny new projects (the 20 percent of the average IT budget) and hardly at all on the lights being kept on (the 80 percent of the average IT budget). In fact, the view may be even more distorted because, as Marcel den Hartog recently described, some of these older systems are so efficient that they run the majority of the enterprise's transactions at a fraction of the total IT cost.

All of this talk about predictions reminds me of the paperless office. Remember all the hype and anticipation around that? It never happened. In fact we now print more than ever before (making HP bigger than IBM). Only last year we finally saw a device that may get us to this paperless dream. Yes, I mean the iPad, and it is not by coincidence (it never is at Apple) that the only function missing from the original iOS is... a printing function. The other major change attributed to the iPad (and its smaller sibling, the iPhone) is the earlier mentioned end of the web, to be replaced by apps.

Personally, I believe apps will indeed be the preferred way to consume content, but the average knowledge worker is not paid to merely consume content. Wouldn't it be great if you could spend your days reading articles, blogs and publications like this while being paid to do so? But for most of us knowledge workers that is not the case. We are expected to add value by analysing, combining, mashing up and composing new content, or by putting this content in a new context. Capturing that in a single app sounds a lot like George's job on The Jetsons: just press one button and all the rest is automated.

For now, SaaS vendors can rest assured; it will be a while before they are rendered obsolete. Likewise for outsourcers. Certainly, outsourcers should be thinking about adding cloud services such as IaaS to their portfolios. But at the same time, we see the main pioneer of IaaS, Amazon, taking a distinct step back by starting to offer reserved instances. These are machines dedicated to one customer for anywhere between one and three years (which is longer than most modern outsourcing contracts).

Eventually cloud will also happen for existing legacy applications. Today, cloud will grow in new technology areas (for example, almost all social media sites are in the cloud), or with new things we simply do not yet do (like systems that help users like a George Jetson make smarter decisions through massive data analysis and number crunching). And that is not a bad thing. If we need to choose between deploying cloud to make the systems we already have five percent more efficient or do five new things we do not do today, I think we would all choose the latter. It simply feels like more progress.

## 3.10: Will today's data centre follow yesterday's mainframe?

Many hypes in IT are just the same old idea, launched again, but with better technology and under a new name. To what extent is this also the case with cloud computing?

Who remembers Larry's original network computer? And who is just about to buy one, but now based on Android or iOS? Similarly, we could say for the data centre: "The data centre is dead, long live the virtual data centre".

The danger of this approach is that we treat the virtual data centre just like any new type of infrastructure and simply re-host our existing applications by moving them from physical to virtual machines (P2V). Just as we re-hosted our applications from mainframes to minicomputers in the days of downsizing.

But if we only re-host, we will miss out on the potential benefits: not just cost and energy reductions, but also business and IT agility, management efficiency, market responsiveness and service improvements. And there are several warning signs that this is exactly what is happening. The first warning sign came from David Linthicum, who signalled that "Bowing to IT demands, cloud providers move to reserved instances". In his article he showed how Amazon "users can make a one-time, up-front payment to reserve a database instance in a specific region for either one or three years." Upfront payment? Specific location? Three years? Sounds pretty much like buying a server to me. David saw it as a necessary evil to get "reluctant enterprises over the cloud fence", but to me it was a first signal that traditional behaviour was making it across to a new type of infrastructure.
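A quick back-of-the-envelope calculation shows why a reserved instance resembles buying a server: the upfront fee is sunk cost, so the deal only pays off for steady, always-on workloads. All prices below are illustrative assumptions, not actual Amazon rates:

```python
# Hypothetical sketch: one year of on-demand usage versus a reserved
# instance (upfront fee plus a discounted hourly rate).
HOURS_PER_YEAR = 24 * 365  # 8760

def on_demand_cost(hourly_rate, utilisation):
    """Yearly cost when paying only for the hours actually used."""
    return hourly_rate * HOURS_PER_YEAR * utilisation

def reserved_cost(upfront, discounted_hourly, utilisation):
    """Yearly cost with an upfront reservation fee; the fee is paid
    regardless of how much the instance is actually used."""
    return upfront + discounted_hourly * HOURS_PER_YEAR * utilisation

# Steady 24x7 workload: the reservation pays off.
print(on_demand_cost(0.10, 1.0))      # 876.0
print(reserved_cost(220, 0.05, 1.0))  # 658.0

# Bursty workload used 20% of the time: plain on-demand wins.
print(on_demand_cost(0.10, 0.2))      # about 175
print(reserved_cost(220, 0.05, 0.2))  # about 308
```

The crossover point is exactly the elasticity argument: the less predictable the workload, the less sense a multi-year reservation makes.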

This was confirmed in "Despite the promise of cloud, are we treating virtual servers like physical ones?", a blog by my colleague Jay Fry, VP Marketing, Cloud Computing, CA Technologies. He picked up on the fact that, according to recent market and vendor numbers, virtual servers at leading cloud providers are getting bigger and are used for significantly longer periods. 'Longer' is a relative term. If a workload is loaded on cloud servers but never removed, you get what VMware calls the 'Hotel California Syndrome': "You can check out any time you like, but you can never leave!" As a result, the use of the cloud becomes similar to leasing hardware. You do not own it, but you are still solely responsible for the usage. And as Jay points out in his blog, that was never the point of cloud computing; it was all about sharing and elasticity.

More serious is that this type of re-hosting does not add any value for the users. Users never cared whether something ran in the back on a mainframe or a mini, and likewise they would not care if it ran on a physical box, a virtual box or even a shoebox. What they care about is ease of use, flexibility, connectivity, scalability, functionality and cost (and probably in that order). Traditional downsizing - moving applications from mainframes to distributed, initially UNIX-based boxes - was often done purely for cost reasons, and the savings were quite considerable. So considerable that many started to declare the mainframe dead; a rumour that turned out to be greatly exaggerated. Pretty soon the mainframe reinvented itself and became more efficient, more connectable and more flexible - and as a result greatly reduced its cost per transaction (the only cost that counts).

Funnily enough we already see the same happening with data centres: under the name 'private cloud' they are rapidly becoming more efficient, flexible and scalable. Let's face it, a private cloud is basically a data centre with a fancy name, it is no more elastic or shared than leased servers and you are still limited by your available resources. But that does not mean it cannot be a lot more agile, scalable and cost effective than a traditional data centre.

The big question I have is whether the data centre will follow the mainframe with regard to new applications. The rejuvenation of the mainframe stopped the further re-hosting of applications to minis (also because the remaining applications were the biggest and most complex ones left). But new applications, by default, were developed for and installed on these new UNIX or Windows minis. Minis which, by that time, were as big and as fast (and as expensive) as the modernised mainframes. By that time software vendors like SAP were even bringing out their new applications (SAP R3) exclusively for the new distributed platforms, even though they had been extremely successful on the old platform with R2. I guess you can see where I am going? Several software vendors are now building their new applications exclusively for the cloud.

Will the traditional data centre share the same fate and become more efficient at what it runs today, but not see many new applications enter its doors? Of course, sure, there will be some applications that will not be cloud-suitable - just as some applications still run on mainframes, despite their owners having attempted to re-host them roughly every other year during the past decade. Many finally gave up - it was too hard and too complex - and outsourced them altogether.

Funnily (or sadly) enough we see a similar phenomenon around virtualisation; we call it 'virtual stall'. After virtualising about 35 percent of the servers, many virtualisation initiatives stop. After that, it becomes too hard and too complex. There may be some applications not suitable for virtualisation, but I am sure it is not 65 percent; it might be 10 percent (similar to what we saw with mainframes). The reasons these initiatives stall are varied. An important aspect is complexity: a distributed data centre is light years more complex than any mainframe, and adding virtualisation adds even more complexity. But that does not mean it cannot be done. Today's cars are also light years more complex than a Model T Ford, yet today's mature garages manage to run and maintain them more reliably and more (fuel) efficiently.

Maturity is the keyword here. Using a virtualisation maturity model, IT departments can get the complexity under control and reap the benefits of an almost fully virtualised data centre. And do not underestimate the true benefit of that, even if we add all new applications exclusively to the cloud, it will be decades before the majority of modern organisations' applications will be running there. We call these applications affectionately our legacy or installed base. They were not built overnight and certainly will not disappear overnight either.

So far this chapter has talked a lot about hardware, whilst I normally only talk about software. But whilst on the topic, it is funny to see how two non-traditional platforms are rapidly making inroads into the data centre, potentially replacing the incumbents. These platforms are getting an unusually enthusiastic reception from their users. One is the Cisco UCS platform, a platform designed from the ground up to run virtual workloads (user review). The other one - surprise, surprise - is the next-generation... mainframe. It is designed around the fastest central processing unit (CPU) on the market today and is gunning to become the backbone for many distributed servers, currently for Linux and AIX, but soon also for other platforms. So even if today's data centre may be past its Prime (pun intended), soon it may be a cool place to live and work again.

## 3.11: What will be the cloud's killer app?

Currently, many organisations are porting their existing applications to the cloud, but what are going to be the cloud applications that really change the way computing is used?

New generations bring with them new applications: mainframes introduced online transaction processing (OLTP); minis or distributed systems (both open and proprietary) introduced departmental systems and later packaged applications like materials requirements planning (MRP) and ERP; and Internet web systems introduced the age of e-commerce. That was when we started buying books and gadgets and doing our banking online (remember when everything was called e-something?).

So re-hosting our existing apps to the cloud (either private or public) is only a short chapter in the long-term cloud story. This year is likely to be the year of the IaaS killer app: TestDev clouds. TestDev clouds make test, development and other non-production servers (which comprise a whopping 70 percent of the average IT organisation's machine park) available in a much more flexible, economic and ecological way. Marv Waschke wrote about this recently as an ideal way to gain experience with private and public cloud scenarios.
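The economics behind TestDev clouds can be illustrated with a rough, entirely hypothetical calculation; the server counts, hours and rates below are assumptions chosen only to show the shape of the argument:

```python
# Hypothetical sketch: test/dev servers sit idle most of the month, so
# paying a higher on-demand rate only for busy hours can still be far
# cheaper than keeping dedicated boxes running 24x7.
HOURS_PER_MONTH = 730

servers = 1000
non_production = int(servers * 0.70)  # the "whopping 70 percent"

dedicated_rate = 0.08  # per hour, paid whether the server is used or not
on_demand_rate = 0.12  # higher hourly price, but only for hours used
busy_hours = 160       # roughly one working month of actual test activity

always_on = non_production * dedicated_rate * HOURS_PER_MONTH
elastic = non_production * on_demand_rate * busy_hours

print(f"always-on: {always_on:,.0f}  elastic: {elastic:,.0f}")
```

Even at a 50 percent higher hourly price, the elastic approach wins as soon as the machines are idle most of the time, which is precisely the usage pattern of test and development environments.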

The more long-term story in my view is that the cloud will be ideal for a new generation of applications around 'collaboration'; applications that enable organisations to seamlessly collaborate with other organisations in an extended enterprise setting. I am not talking about in-company Twitter and Facebook clones that invite employees to waste as much time on updating their profiles at the office as they do at home. Through these new collaboration applications, organisations can take business processes that were traditionally done in house and source them 'as a service'. These processes can vary from bill collecting, invoicing, physical distribution, repair handling and HR to full manufacturing or product design. Companies will be able to specialise and offer these services to many organisations, which will enable them to achieve massive economies of scale. Many of these services will be largely or completely information/software based.

Imagine the efficiencies of one company handling repairs for several large mobile phone manufacturers, versus each company having to arrange their repairs themselves. Most phone manufacturers sell through the same resellers, use the same repair centres and source from the same Chinese factories. Hooking these up to a central repair-handling application used by multiple players can have an enormous platform effect. An early European player in the area is eBuilder.com (which, by the way, also runs a cloud academy, but with a specific curriculum covering what it calls "Cloud Processes for the Value Network". The head of product marketing has compiled a handy list of 10 essential elements to create and run such a next-generation cloud business process service). These new-generation cloud applications go beyond just IT, and can render efficiencies far beyond any pure IT savings we are familiar with.

You may say that companies began the move towards specialisation some years ago, focusing on core competencies and outsourcing processes. And you're right. But they often did so despite, rather than thanks to, their IT applications. With this new generation of cloud services, IT can finally become the big promoter, enabler and catalyst of this trend. Not that the idea is new; in fact, I came across the scenario of 'repair handling as a service' over a decade ago, while introducing XML for a previous employer. But at the time, the idea of bringing such crucial functions outside the firewall, outside the realm of IT, was just a bit too revolutionary. The fast-growing acceptance of cloud computing as a model (often even more on the business side than on the IT side) is rapidly changing this.

## 3.12: Can you have cloud computing without vendor lock-in?

IT vendor lock-in is as old as the IT industry itself. Some may even argue that lock-in is unavoidable when using any IT solution, regardless of whether we use it on premise or as a service. To determine whether this is the case, we will examine traditional lock-in and the anticipated impact of cloud computing.

Vendor lock-in is seen as one of the potential drawbacks of cloud computing. One leading industry analyst recently published a scenario whereby lock-in and standards even surpassed security as the major objection to cloud computing. Despite efforts like Open Systems and Java, we have managed to get ourselves locked in with every technology generation so far. Will the cloud be different, or is lock-in just a fact of life we need to live with? Wikipedia defines vendor lock-in as follows:

In economics, vendor lock-in, also known as proprietary lock-in, or customer lock-in, makes a customer dependent on a vendor for products and services, unable to use another vendor without substantial switching costs. Lock-in costs which create barriers to market entry may result in antitrust action against a monopoly.

We will now examine what lock-in means in practical terms when using IT solutions and how cloud computing would make this worse or better. For this we look at four dimensions of lock-in:

#### Horizontal lock-in

This restricts the ability to replace a product with a comparable or competitive product. If I choose solution A (for example, a CRM solution or development platform) and later want to move to solution B, I will need to migrate my data and/or code, retrain my users and rebuild the integrations with my other solutions. It is a bit like cars: once I buy a Toyota Prius, I cannot drive a Chevrolet Volt, but it would be nice to be able to keep using the same garage, charging cable and GPS when I switch.

#### Vertical lock-in

This restricts choice in other levels of the stack and occurs if choosing solution A mandates the use of database X, operating system Y, hardware vendor Z and/or implementation partner S. To prevent this type of lock-in, the industry embraced the idea of open systems, where hardware, middleware and operating systems could be chosen more independently. Before this, hardware vendors often sold specific solutions (like CRM or banking) that ran only on their specific hardware and operating system and could be obtained only in their entirety from them. This is a bit like today's (early-market) SaaS offerings, where everything must be obtained from one vendor.

#### Diagonal (or inclined) lock-in

This is the tendency of companies to buy as many applications as possible from one provider, even if the vendor's solutions in those areas are less desirable. Companies have typically picked a single vendor to make management, training and especially integration easier but also to argue for higher discounts. This trend led to large, powerful vendors, which again caused higher degrees of lock-in. For now we call this voluntary form of lock-in diagonal lock-in (although 'inclined' - a synonym for diagonal - may describe this better).

#### Generational lock-in

This last one is as inescapable as death and taxes. It is an issue even if there is no desire to avoid horizontal, vertical or diagonal lock-in. No technology generation, no IT solution and no IT platform lives forever (well, maybe with the exception of the mainframe). The first three types of lock-in are not too bad if you have a crystal ball to help pick the right platforms (for example, Windows and not OS/2) and the right solution vendors (generally the ones that turned out to become the market leaders). But even market leaders at some point reach end of life. Customers want to be able to replace them with the next generation of technology without it being prohibitively expensive or even impossible because of technical, contractual or practical lock-in.

#### The impact of cloud computing on lock-in

How does cloud computing, with incarnations like SaaS, PaaS and IaaS, impact the above? In the consumer market we see people using a variety of cloud services from different vendors, for example Flickr to share pictures, Gmail to read email, Microsoft to chat, Twitter to tweet and Facebook to... (well, what do they do on Facebook?).

All of these are seemingly used without any lock-in issues. Many of these consumer solutions now even offer integration with each other. Based on this premise, one might expect that using IT solutions 'as a service' in an enterprise context also leads to less lock-in. But is this the case? We will now explore those four dimensions again.

#### Horizontal

The average enterprise moving from one SaaS solution to another is not so different from moving from one traditional software application to another, provided they have agreed upfront that their data can be transferred and how this will be done. An advantage is that SaaS appears easier and faster to implement, and that the company does not need to have two sets of infrastructure available while migrating.
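To make that data-transfer condition concrete, here is a minimal sketch of moving records between two SaaS providers via a vendor-neutral interchange format (CSV). The record fields and the in-memory "providers" are invented for illustration; a real migration would use each vendor's actual export and import facilities:

```python
import csv
import io

# Hypothetical records exported from SaaS provider A (illustrative data only)
provider_a_records = [
    {"customer": "Acme Corp", "contact": "j.doe@acme.example", "status": "active"},
    {"customer": "Globex", "contact": "info@globex.example", "status": "prospect"},
]

def export_to_csv(records):
    """Serialise records to CSV, a vendor-neutral interchange format."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=records[0].keys())
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

def import_from_csv(text):
    """Load the interchange file on the provider B side."""
    return list(csv.DictReader(io.StringIO(text)))

# The round trip should preserve every record exactly
migrated = import_from_csv(export_to_csv(provider_a_records))
assert migrated == provider_a_records
```

The point of the sketch is the agreement, not the code: as long as both vendors commit to a documented interchange format, horizontal SaaS lock-in stays comparable to a traditional application migration.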

For PaaS, it is a very different situation, especially if the development language is proprietary to the PaaS platform. In that case, the lock-in is almost absolute and comparable to the lock-in companies may have experienced with proprietary 4GL platforms, with the added complexity that with PaaS the underlying infrastructure is also locked-in (see under vertical).

Horizontal lock-in for IaaS may actually be less severe than lock-in to traditional hardware vendors. Why? Because virtualisation, an important ingredient of any modern IaaS implementation, isolates workloads from underlying hardware differences. Provided customers do not lock themselves in to a particular hypervisor vendor, they should be able to move their workloads relatively easily between IaaS providers (hosting companies) and/or internal infrastructure. A requirement for this is that the virtual images can be easily converted and carried across; a capability that several independent infrastructure management solutions now offer. Even better would be the ability to move full composite applications (more about this in chapter 4.14).

#### Vertical

For SaaS and PaaS, vertical lock-in is almost by definition part of the package, as the underlying infrastructure comes with the service. The good news is that the customer does not have to worry about these underlying layers. The bad news is that if the customer is worried about the underlying layers, there is nothing he can do. If the provider uses exotic databases, risky hardware or has his data centre in less-desirable countries, all the customer can do is decide not to pick that provider. He could consider contracting upfront for exceptions, but in all likelihood this will increase the cost considerably, as massive scale and standardisation are essential to the SaaS provider's business model.

On the IaaS side we see less vertical lock-in, simply because we are already at a lower level. Ideally, our choice of IaaS server provider should not limit our choice of IaaS network or IaaS storage provider. However, the lesson we learned the hard way during the client/server era - that for enterprise applications, logic and data need to be close together to get any decent performance - still applies. As a result, the storage service almost always needs to be procured from the same IaaS provider that is used for processing, or at least from a vendor in the same location. On the network side, most IaaS providers offer a choice of network providers, as they have their data centres connected to several networks (either at their own location or at one of the large co-locators).

#### Diagonal or inclined

The tendency to buy as much as possible from one vendor may be even stronger in the cloud than in traditional IT. Enterprise customers try to find a single SaaS shop for as many applications as possible. On the one hand this offers out-of-the-box integration; on the other, it enables customers to regularly audit the delivery infrastructure and processes of their current SaaS providers. This auditing process would be prohibitive if the organisation were using hundreds of SaaS vendors.

For similar reasons we see customers wanting to buy PaaS from their selected SaaS or IaaS vendor. As a result, vendors are trying to deliver all flavours, whether they are any good in that area or not. A recent example is the statement from a senior Microsoft official that Azure and Amazon were likely to become more alike, with the former offering IaaS and the latter likely to offer some form of PaaS.

In my view, it is questionable whether such vertical cloud integration should be considered desirable. The beauty of the cloud is that companies can focus on what they are good at and do that very well. For one company this may be CRM, for another it is financial management or creating development environments and for a third it may be selling books... or rather... hosting large infrastructures. Customers should be able to buy from the best, in each area. CFOs do not want to buy general ledgers from CRM specialists, and certainly sales people do not want it the other way round. Similar considerations apply for buying infrastructure services from a software company or software from an infrastructure-hosting company. At the very least this is because developers and operators are different types of people, which no amount of 'DevOps' training will change (at least not for this generation).

#### Generational

As with any new generation of technology, people seem to feel this may be the final one: "Once we have moved everything to the cloud, we will never move again." Empirically, this is very unlikely. There is always a next generation; we just don't know what it is yet. The underlying thought may be: "Let the cloud vendors innovate their underlying layers, without bothering us." But vendor lock-in is exactly what would prevent customers from reaping the benefits of cloud suppliers innovating their underlying layers. Let's face it, not all current cloud providers will be innovative market leaders in the future. If we are unlucky and pick the wrong ones (like OS/2), the last thing we want to be is locked in. In today's market, picking winning stocks or lotto numbers may be easier than picking winning cloud vendors. And even when it comes to picking stocks, many of us are regularly beaten by not very academically adept monkeys.

## 3.13: Market developments around lock-in

Several vendors have made announcements seemingly supporting the decoupling of SaaS and PaaS from their underlying IaaS layers.

For example, Microsoft announced it is making Azure available as a PaaS platform to several large IaaS providers. I am sure this is driven by a desire in Redmond to increase market share for its PaaS platform, which, ironically, if too successful, may even increase lock-in. But the move will offer customers who select Microsoft's PaaS platform a choice of vendors for the underlying infrastructure services (IaaS).

At the same time, NASA and Rackspace are joining forces around an open source platform for private clouds called OpenStack. Rackspace's initiative is no doubt as commercially motivated as Microsoft's. If Rackspace - in my view correctly - expects that many private clouds will in the foreseeable future start to source additional capacity (cloudburst) from public clouds, then basing these private clouds on the same architecture as its public cloud offering will help Rackspace.

NASA's motives stem from the U.S. government's cloud stimulus approach, a goal of which is "to accelerate the creation of cloud standards". If history is to repeat itself, we can expect industry standards to lead to a 'plug-compatible cloud market' before a serious 'open-standards cloud market' takes shape. As NASA is determined to have workable cloud standards in far less time than the decade or so it took to get a man on the moon, it is understandable that it sees the Rackspace route as a viable shortcut. This is also understandable because agreeing on open cloud standards today would be as difficult as agreeing on 3D TV standards in the days of the black-and-white moon landing broadcasts. And, reader beware: if there were ever a time to keep your options open and not lock yourself in to what looks to become an early standard, it would be today.

#### A decoupled cloud example

For some, the recommendation to prevent lock-in by decoupling the application (SaaS/PaaS) from the underlying choices of infrastructure (IaaS) vendors may be controversial. The logic being: if you want control over underlying layers, you should not embark on cloud computing. After all, the whole idea of cloud computing is that someone else is responsible for the underlying layers. But that is like saying: if you don't want to buy clothes that were made by underage children, you should get a sewing machine and make your own clothes. There should be some middle ground, where you get some assurance about how things are implemented.

Others struggle with imagining what such a decoupled cloud would look like in practice. Luckily, a live example of such a decoupled offering already exists. Skygone Inc. is offering a choice of geographical information system (GIS) services by aggregating solutions from several GIS software vendors across a choice of infrastructure platforms and vendors. Companies in need of such geographical information, which is a complex and specialised area beyond the expertise and interest of most internal IT departments, can now simply source this without locking themselves in to a specific vendor or platform. I should disclose at this point that Skygone uses as its underlying platform AppLogic from 3Tera, now owned by my employer CA Technologies.

#### Cloud brokering

Several analyst firms predicted early on that this type of 'brokering of cloud services' would become an important market force. But in recent months the analyst community has become very quiet about the concept - maybe due to the influence of several self-proclaimed 800-pound gorillas entering the cloud market. This is a shame, because brokering also addresses the fact that, in an enterprise context, 'SaaS does not scale'. By that I do not mean that SaaS applications cannot scale to service millions of users; they already do, although some more successfully and reliably than others. I mean that the average enterprise or government organisation, which typically has a portfolio of several hundred or even thousands of applications, cannot afford to source these from a similar number of SaaS providers. The mandatory auditing of the infrastructure and processes of all these providers would simply not be feasible. Moreover, a leading analyst firm recently pointed out that an SAS 70 certificate is no replacement for such mandatory due diligence ('SAS 70 is Not Proof of Security, Privacy, or Continuity Compliance'). They did so at about the same time as they suggested it would make sense for many SaaS vendors to partner with IaaS vendors for the delivery of their services ('Public Cloud Infrastructure Helps SaaS Vendor Economics'). They also suggested that the traditional SaaS market may not grow to be as big as many initially expected ('Organisations Need to Re-Evaluate the Rationale for SaaS'). All of which goes to prove that predicting developments and/or placing customer bets in a brand-new area like cloud computing continues to be a risky business.

#### Lock-in conclusion

My goal for this chapter was to define lock-in, understand it in a cloud context and agree that it should be avoided while we still have a chance (while 99 percent of all business systems are not yet running in the cloud). Large-scale vertical integration is typical of immature markets, be it early-generation cars, computers or even clouds. As markets mature, companies specialise again on their core competencies and find their proper (and profitable) place in a larger supply chain. The lock-in table (Diagram 5), where I use the number of padlocks to indicate the relative locking of traditional IT versus SaaS, PaaS and IaaS, is meant for discussion and improvement. It is not intended as an absolute statement. In fact, our goal should be to reduce lock-in considerably for these new platforms. In a later chapter I will discuss some innovative cross-cloud portability strategies to prevent lock-in when moving large numbers of solutions into the cloud.

Do not let all of this stop you from moving suitable applications into the cloud today. It is a learning experience that we will all need, as cloud computing steadily becomes serious for serious enterprise IT (and I am absolutely sure it will, as the percentage of suitable applications is becoming larger every day). Just make sure you define an exit strategy for each stage first, as all the industry analysts will tell you. In fact, even for traditional IT, it is always a good idea to have an exit strategy first.

## 3.14: Is there a role for government in stimulating cloud computing?

The speed at which governments across the globe are adopting cloud computing differs greatly. While the U.S. federal government aggressively promotes cloud computing, over in Europe - in what many call 'the old countries' - governments are still remarkably conservative or even reluctant to embrace it.

At about the same time that President Obama organised a dinner with the CEOs of 12 high-tech and cloud companies to stimulate job creation in North America, over in Europe the Dutch Minister of the Interior replied to questions in parliament about the use of cloud computing by governments.

The fact that this particular minister had to be invited three times by the Dutch Employers Association to switch from his pre-war model cast-iron bicycle to a more modern one with gears and suspension says something about the tone of this debate. A hilarious misunderstanding was that the official government delegation kept referring to cloud computing as a new invention, while the representatives of the industry (including Google and a large international accounting firm) tried to explain that cloud computing was an established practice with many real-world use cases and success stories, both inside and outside government organisations.

Remarkably, the U.S. and this European government announced almost simultaneous plans to radically reduce the number of government data centres: by about 60 in the Netherlands, and by about 800 in the U.S. The underlying idea in the U.S. is to make greater use of 'data centres as a service', a.k.a. cloud computing. The Dutch plan, on the other hand, so far sounds more like a traditional consolidation approach, with the objective of creating more efficiency by increasing the scale of use. This approach has so far not proven to be very successful. In fact, around the world we see that the larger the scale of such projects, the more spectacular the reports in the public press about the outcomes.

In the meantime, the CIO of the U.S. federal government, Vivek Kundra, published a very readable cloud strategy. At only 43 pages, this is a must-read for anyone involved in setting IT strategy (a shorter summary can be found at sys-con). Kundra presented his strategy not as a way to save on IT costs, but as a way to get more value from existing IT investments. In many places, but certainly in the public sector, 'protection of budgets' has become a primary survival strategy. By positioning cloud computing not as a way to cut costs, but as a way to increase value, he makes IT (and the whole civil apparatus) an ally to his plans, rather than a potential opponent. In all probability, the technology industry was already on his side, because the government's promised spending on cloud services is likely to amount to about $20 billion per year, or 25 percent of the total IT budget. This annual amount is approximately equal to the total government investment required to put a man on the moon. In my view, the U.S. government's cloud programme is also a way to create and safeguard jobs for the coming decade. In a sense, it is an industry stimulus programme.

Due to European free trade rules and regulations, creating stimulus packages for national industries in Europe is at best complicated, and in many cases illegal. Within the European Union, Neelie Kroes - the former free trade commissioner - has taken on the role of Commissioner for the Digital Agenda. In a recent lecture, she indicated that her ambition is to make Europe not only 'cloud-friendly' but 'cloud-active' (a kind of 'all-in' strategy?). The plan is built around three core areas: first, a legal framework; second, technical and commercial fundamentals; and third, the market. There are now more than 100 actions on her European Digital Agenda, of which more than 20 specifically address the European 'Digital Single Market', an online equivalent of the European single market for goods and services.

However, a fundamental problem for cloud computing in Europe is that the European Union was founded on enabling the free movement of persons, goods and services, not the free movement of data. This puts European cloud services providers at an immediate disadvantage. American and Chinese companies have a huge domestic market, which they can serve from one geographic location. Europe has, in theory, a similarly large domestic market for cloud services, but the various European languages, cultures and laws make this market much less uniform than the American market. Some argue that this diversity has made European suppliers (or European divisions of global providers) better at providing a differentiated approach, instead of the more traditional 'one size fits all' solution. In a fast-growing new market like cloud computing, however, this diversity makes it more difficult to achieve the required scale.

Besides issues with European privacy laws, as described in this New York Times article, there are a variety of local and national laws preventing local suppliers from serving the European government market from one location, even if this location lies within the European Union. For example, the German government requires that all data of local government agencies be kept within its borders. From a historical perspective, this may be understandable, but it prevents the European government sector from becoming a launching force for 'one European cloud market'.

Maybe it is time for a European cloud of two speeds? A small leading group of countries could opt for the accelerated introduction of uniform cloud legislation, similar to how smaller groups of countries signed the Schengen Agreement (which sanctioned travel between selected European countries without checkpoints) and introduced the euro single currency.

## 3.15: Vivek Kundra's decision framework for cloud migration

One of the side effects of governments embarking on cloud computing around the world is a stream of useful guidance documents becoming available in the public domain.

One of those is the decision framework for cloud migration developed recently by U.S. Federal CIO Vivek Kundra. It offers advice applicable to all organisations, regardless of type or geography.

In chapter 3.14, I mentioned Kundra's very readable cloud strategy, and the industry-stimulus effect this approach can have on the emerging cloud industry. Section two of the strategy is a pragmatic three-step approach and checklist for migrating services to the cloud, which can also be valuable for organisations outside the government and outside North America.

The full federal cloud computing strategy (43 pages and available for download at www.cio.gov) includes a description of the possible benefits of cloud computing, several case studies, metrics and management recommendations. A short review of the document was given by Roger Strukhoff at sys-con.

The following summarises the strategy's three-step perspective for thinking about and planning cloud migration.

### Decision framework for cloud migration

**Select**
- Identify which IT services to move and when
- Identify sources of value for cloud migrations: efficiency, agility, innovation
- Determine cloud readiness: security, market availability, government readiness, and technology lifecycle

**Provision**
- Aggregate demand where possible
- Ensure interoperability and integration with the IT portfolio
- Contract effectively
- Realise value by repurposing or decommissioning legacy assets

**Manage**
- Shift IT mind-set from assets to services
- Build new skill sets as required
- Actively monitor SLAs to ensure compliance and continuous improvement
- Re-evaluate vendor and service models periodically to maximise benefits and minimise risks

A set of principles and considerations for each of these three major migration steps is presented below.

### 1. Selecting services to move to the cloud

Two dimensions can help plan cloud migrations: _Value_ and _Readiness_.

**The Value dimension** captures cloud benefits in three areas: efficiency, agility, and innovation.

**The Readiness dimension** captures the ability of an IT service to move to the cloud in the near term. Security, service and market characteristics, organisation readiness, and lifecycle stage are key considerations.

Services with relatively high value and readiness are strong candidates to move to the cloud first.
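As a sketch of how the two dimensions could be applied in practice, the snippet below ranks candidate services by combining averaged value and readiness scores. The service names, the 1-5 scale, and the multiplicative combination are illustrative assumptions of mine, not part of Kundra's framework:

```python
# Illustrative sketch: rank candidate services on the framework's two
# dimensions. Services, scores (1-5) and the scoring rule are hypothetical.

def migration_priority(value, readiness):
    """Combine value (efficiency, agility, innovation) and readiness
    (security, market, organisation, lifecycle) into a single score.
    Multiplying means a service must score well on BOTH dimensions."""
    value_score = sum(value.values()) / len(value)
    readiness_score = sum(readiness.values()) / len(readiness)
    return value_score * readiness_score

services = {
    "email": {
        "value": {"efficiency": 5, "agility": 4, "innovation": 3},
        "readiness": {"security": 4, "market": 5, "organisation": 4, "lifecycle": 5},
    },
    "case management": {
        "value": {"efficiency": 3, "agility": 2, "innovation": 2},
        "readiness": {"security": 2, "market": 2, "organisation": 3, "lifecycle": 2},
    },
}

# Highest-scoring services are the strongest candidates to move first
ranked = sorted(services, key=lambda s: migration_priority(**services[s]), reverse=True)
print(ranked)
```

A commodity service like email scores high on both axes and lands at the top of the list, which matches the framework's guidance that high-value, high-readiness services should move first.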

#### Identify sources of value

**Efficiency**: Efficiency gains come in many forms. Services that have relatively high per-user costs, have low utilisation rates, are expensive to maintain and upgrade, or are fragmented should receive a higher priority.

**Agility**: Prioritise existing services with long lead times to upgrade or to increase/decrease capacity, and services urgently needing to compress delivery timelines. De-prioritise services that are not sensitive to demand fluctuations, are easy to upgrade or are unlikely to need upgrades.

**Innovation**: Compare your current services to external offerings and review current customer satisfaction scores, usage trends, and functionality to prioritise innovation targets.

#### Determine cloud readiness

In addition to potential _value,_ decisions need to take into account potential risks by carefully considering the _readiness_ of potential providers against needs such as security requirements, service and marketplace characteristics, application readiness, organisation readiness, and stage in the technology lifecycle.

Both for value and risk, organisations need to weigh these against their individual needs and profiles.

**Security requirements** include: regulatory compliance; data characteristics; privacy and confidentiality; data integrity; data controls and access policies; and governance to ensure providers are sufficiently transparent, have adequate security and management controls, and provide the necessary information.

**Service characteristics** include interoperability, availability, performance, performance measurement approaches, reliability, scalability, portability, vendor reliability, and architectural compatibility. Storing information in the cloud requires technical mechanisms to achieve compliance, and must support relevant safeguards and retrieval functions, also in the event of provider termination. Continuity of operations can be a driving requirement.

**Market characteristics:** What is the cloud market competitive landscape and maturity? Is it not dominated by a small number of players? Is there a demonstrated capability to move services from one provider to another? And are technical standards (which reduce the risk of vendor lock-in) available?

**Network infrastructure, application and data readiness:** Can the network infrastructure support the demand for higher bandwidth, and is there sufficient redundancy for mission-critical applications? Are existing legacy applications and data suitable either to migrate (i.e. re-host) or to be replaced by a cloud service? Prioritise applications with clearly understood and documented interfaces and business rules over less documented legacy applications with a high risk of "breakage".

**Organisation readiness:** Is the area targeted to migrate services to the cloud pragmatically ready? Are capable and reliable managers with the ability to negotiate appropriate SLAs, relevant technical experience, and supportive change management cultures in place?

**Technology lifecycle:** where are the technology services (and the underlying computing assets) in their lifecycle? Prioritise services nearing a refresh.

### 2. Provisioning cloud services effectively

Rethink processes as provisioning services rather than contracting assets. State contracts in terms of quality-of-service fulfilment, not traditional asset measures such as number of servers or network bandwidth. Think through opportunities to:

**Aggregate demand:** Pool purchasing power by aggregating demand before migrating to the cloud.

**Integrate services:** Ensure provided IT services are effectively integrated into the wider application portfolio. Evaluate architectural compatibility and maintain interoperability as services evolve within the portfolio. Adjust business processes, such as support procedures, where needed.

**Contract effectively:** Contract for success by minimising the risk of vendor lock-in, ensuring portability and encouraging competition among providers. Include explicit service level agreements (SLAs) with metrics for security (including third-party assessments), continuity of operations, and service quality for individual needs.

**Realise value:** Take steps during migration to ensure the expected value is realised. Shut down or repurpose legacy applications, servers and data centres. Retrain and redeploy staff to higher-value activities.

### 3. Managing services rather than assets

**Shift mind-set:** Re-orient the focus of all parties involved to think of services rather than assets. Move towards output metrics (e.g. SLAs) rather than input metrics (e.g. number of servers).

**Actively monitor:** Actively track SLAs and hold vendors accountable; stay ahead of emerging security threats and incorporate business user feedback into evaluation processes. Track usage rates to ensure charges do not exceed funded amounts. "Instrument" key points on the network to measure performance of cloud service providers so service managers can better judge where performance bottlenecks arise.
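A minimal sketch of such SLA tracking is shown below, checking measured availability per month against a contracted target. The provider's downtime figures and the 99.9 percent threshold are hypothetical examples of mine, not figures from the strategy document:

```python
# Illustrative sketch: hold a vendor accountable by comparing measured
# availability with the contracted SLA. All figures are hypothetical.

SLA_TARGET = 99.9  # contracted availability, in percent

# Minutes of observed downtime per month (a 30-day month has 43,200 minutes)
downtime_minutes = {"2011-01": 20, "2011-02": 50, "2011-03": 5}

def availability(downtime, total_minutes=43_200):
    """Availability as a percentage of the month's total minutes."""
    return 100 * (total_minutes - downtime) / total_minutes

for month, downtime in downtime_minutes.items():
    actual = availability(downtime)
    status = "OK" if actual >= SLA_TARGET else "SLA BREACH"
    print(f"{month}: {actual:.3f}% ({status})")
```

At a 99.9 percent target, a provider may be down at most about 43 minutes in a 30-day month, so the 50-minute outage in the second month would be flagged as a breach; instrumenting your own measurement points, as the strategy suggests, is what makes such a check independent of the vendor's own reporting.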

**Re-evaluate periodically:** Re-examine the choice of service and vendor. Ensure portability, hold competitive bids and increase scope as markets mature (e.g. from IaaS to PaaS and SaaS). Maintain awareness of changes in the technology landscape, in particular new cloud technologies, commercial innovation, and new cloud vendors.

Disclaimer: This summary of the "Decision framework for cloud migration" section from Vivek Kundra's Federal Cloud Computing Strategy uses abridgements and paraphrasing to summarise a larger and more detailed publication. The reader is urged to consult the original before reaching any conclusions or applying any of the recommendations. All rights remain with the original authors.

## 3.16: Some pragmatic cloud advice from down under

In addition to the US federal government, other countries are starting to publish useful summaries and guidance documents. One of the more pragmatic ones comes from Canberra, Australia.

When publishing its guidance, the Australian federal government took a slightly different approach to that taken by the USA. It wisely decided in its 44-page strategy paper not to create its own definition but to subscribe to the definition created by NIST. Apart from the mandatory 'executive summary' and a repeat of the above definitions, the Australian strategy document includes a number of pragmatic checklists, including:

**A list of Potential Risks and Issues of Cloud Computing** (pages 14-16), grouped into the following categories:  
-Application design  
-Architecture  
-Business continuity  
-Data location and retrieval  
-Funding model  
-Legal & regulatory  
-Performance and conformance  
-Privacy  
-Reputation  
-Skills requirements  
-Security  
-Service provision  
-Standards

**A listing of Potential Business Benefits** (pages 17-18), grouped into:  
-Scalability  
-Efficiency  
-Cost Containment  
-Flexibility  
-Availability  
-Resiliency

It also includes an overview of their key adoption drivers (value for money, flexibility, and operational reliability/robustness); an environmental scan (pages 31-34), containing a short overview of (government) cloud projects and pilots in Australia, the United States, the United Kingdom, the European Union, Canada, Japan and Singapore; and lastly a 5-page glossary of terminology (sourced from several places including Meghan-Kiffer Press, NIST and ZDNet).

# Section 4:  
A new role for IT management?

Salesforce.com markets its solutions under the slogan "NO SOFTWARE" and Amazon's Elastic Compute Cloud basically promises "NO HARDWARE". Will this mean "NO JOB!" for the average IT manager?

## 4.1: The rumours of the IT manager's death were greatly exaggerated

As we noted in the introduction, cloud computing has the potential to transform IT into a utility: affordable, reliable, always-on and ubiquitous. This transformation is bound to have implications for the way IT is managed and the role of its manager, the CIO.

In 2004, in Does IT Matter?, the follow-on book from his Harvard Business Review article, Nicholas Carr forecast a bleak future for CIOs and IT managers. He writes: "Just think of the rapid shift in the way business viewed electricity a hundred years ago. Early in the 20th century, many large companies created the new management post of 'Vice President of Electricity', an acknowledgement of electrification's transformative role in companies and industries. Soon, electricity's strategic importance diminished and 'Vice President of Electricity' quietly disappeared from the corporate hierarchy."

As we now know, the IT manager has not (yet) followed the VP of Electricity. In fact one could argue that in many companies the VP of Electricity is making an unexpected comeback as chief sustainability officer. More recently, however, we did hear people wonder what role will remain for IT. As we noted earlier, SaaS providers like salesforce.com promote their solutions as 'NO SOFTWARE', while Amazon and other IaaS providers promise 'NO HARDWARE'.

Rest assured, a role will remain for IT, albeit a different one. And in this context it is also good to bear in mind that cloud computing is not the first innovation that IT management has encountered. IT has long been talking about 'business and IT alignment' and 'running IT as a business'. Experts have filled libraries (which we won't try to replicate here) with books on IT governance. Standards like ITIL and COBIT were created, discussed, encouraged and sometimes even adopted. But somehow the chasm between IT and the business seems to have become greater instead of smaller. In the meantime, agility (the ability to change the way we work on the spot) seems to have decreased as companies automated more processes.

And just as the need for a common and accepted set of traffic rules increases as communities go from using horses and carts to using cars and motorcycles, the increased possibilities and speed that cloud computing promises will demand a management model that brings greater alignment and agility.

### An industrialised model for managing IT

To accommodate a meaningful discussion between IT and business we propose to use an analogy that both groups can be expected to understand. We will use the transformation that industrial manufacturing went through over the past four decades as an analogy for the path that lies before IT. Industry moved from relatively simple assembly lines and job shops to globally integrated real-time supply chains that seamlessly fulfil consumer demand by optimising design, production and distribution across vast communities of supply chain partners. It is this transformation that enabled the world to produce the amounts and variety of products that we now can buy at very competitive cost in shops all over the world (and increasingly online). One could say it was this industrial transformation that drove our global economic growth, and now it is the turn of IT to do the same with a supply chain approach towards IT management.

However, just like Rome, these modern manufacturing organisations were not built overnight, and neither will tomorrow's IT supply chain management organisations be. Production transitioned from craftsmen via job shops and assembly lines to today's modern supply chains, where multiple organisations (many of them in China) work together on bringing an ever-faster innovating portfolio of products to market. All 'just in time' and with 'total quality'.

Taking the analogy further, the cloud is to IT what China is to industrial organisations: a source of low-cost and increasingly higher-quality components (and products) without which the average manufacturing company could no longer be competitive. And just as manufacturers in China are potential competitors to many of their current customers, so the cloud will in some cases turn out to be a competitor for internal IT.

Note: Given the enormous quantity of publications, books, blogs and articles available on IT governance, IT service management etc., I will not attempt to recap or summarise this body of knowledge here. The 'industrialised approach' to cloud computing is in many ways complementary and builds upon the work done in these best-practice frameworks and standards. The supply chain model should be seen more as a different (higher level) lens to look at reality than as a new reality. Just as inventory management and production planning have not been replaced by supply chain planning, IT supply chain planning sits on top and interacts with these existing disciplines.

## 4.2: Why cloud spells c.o.m.p.e.t.i.t.i.o.n. for the average IT department

For the first time, IT is facing outside competition. Outsourcing was no picnic, but outsourcing was more like subcontracting to a 'friendly' supplier than real competition. With cloud computing, users can simply go outside to procure the services they need.

I am currently following an interesting example nearby. While the internal IT department is scrambling to offer an in-house, social-media-style collaboration environment, one user department has already gone outside. To protect the innocent, I won't disclose whether this was a production, sales, marketing, R&D or other department, but you get the idea.

Starting in Australia - the place furthest away from corporate headquarters, so a good place for some rogue innovation - users set up a Twitter-like internal collaboration environment with an outside cloud provider. Within just a few weeks, every member of this global department was sharing their activities, thoughts and projects, and enjoying the typical communication that people enjoy on social networks.

As this cloud service is low cost, easy to use and offers anywhere, anytime access, including from non-HQ-supported devices such as iPhones and home PCs, the chances of IT winning this department back for their corporate service are slim at best. One good soul tried to help IT by requesting a similar online watering hole from corporate IT. As instructed, he filled out a service request form at the central service desk, but to date he is still awaiting a first response from IT (which is likely to consist of questions about priorities, about which executive will sign this off and which cost centre it needs to be charged to). This may not be a mission-critical enterprise system, but we similarly see user departments contracting directly with system integrators to build new enterprise solutions on a PaaS platform. My point is that many IT departments still seem to be in denial about the realities of this new competitive world called cloud. Time for a wake-up call.

### IT is not the first department under pressure

IT is not the first department in corporate history to face some serious competition. Here is an analogy from the consumer electronics industry (if you are not a big fan of analogies, just substitute 'application' for 'TV' and 'CIO' for 'factory manager').

About two decades ago, a company headquartered near my home town was the global market leader in colour TVs. At that time the average life cycle of a TV, before a new model would arrive, was about three years. The average price was fairly stable and components were custom-designed for each model and produced in house. In my home town, becoming the head of a TV factory was the ultimate career dream (at least it was for the more geeky 12-year-olds; others would have preferred a career in this company's football team). But just a few years later, after Japan and Korea entered the global market, prices had dropped significantly (and have continued to halve every two years since), new models replaced old ones every six to 12 months and market leadership was determined by who was fastest at introducing innovations such as remote controls, stereo, Picture-in-Picture (PiP) and c-text.

Our local multinational nearly did not make it through this transition. To cope, they first flew all their managers to Japan for a factory wake-up tour and then introduced 'just in time' and 'total quality' programmes. Shortly afterwards they started a 'design for manufacturing' approach (a kind of DevOps movement) using standard off-the-shelf components to accommodate the much shorter life cycles. And to top things off, they stopped producing the main component (cathode ray tubes) in house; instead they created a production joint venture (a.k.a. 'a community cloud') with one of their biggest competitors.

Overnight, the head of manufacturing had to change from being 'the king of low cost production' to 'the fastest orchestrator of the supply chain'. Agility became the watchword. But agility did not replace the need for low-cost, high quality and advanced innovation. It was about delivering all of those at the same time and at breakneck speed. In some other industries, management decided this was just too difficult and stopped in-house manufacturing altogether (in fact our local television player also recently announced that it has come to the same conclusion). Others saw it as an opportunity for differentiation.

In my view this analogy is a graphic illustration of the rollercoaster ride IT is about to embark on. Many of the needed skills and tools, such as smarter sourcing, resource pooling, service-oriented architectures and a focus on DevOps integration, have already been trialled in IT for years. Under the banner of agile development we even attempted rapid change management, despite the overwhelming complexity of enterprise IT. In addition, there are many manufacturing best practices, lean being the obvious one, that IT can benefit from (see also chapter 4.9 'How lean is your cloud?').

The question in my view is: is IT ready and willing to give up its manufacturing role as a provider of services and transition into an orchestration / supply chain role?

## 4.3: Why is it so complex to make IT simple?

Whenever I tell someone I work in IT, I see a little spark of fear pop into their eyes while they quickly check their watch. Probably because they know from experience (experience with other IT people, I hasten to add) that there is a strong chance the conversation will become complex, lengthy and probably even incomprehensible.

So lately I just tell them I work in the cloud, which leads to longer and more engaged conversations, mainly because they have no idea what that means. But it did make me wonder how IT got into this position and, more importantly, how we can get out of it.

IT has not always been like this. On my first working day in the IT department of AKZO (now Akzo Nobel), fresh out of university, there was coffee and cake. Not because I had joined (it was the seventies; there were 12 candidates for every vacancy) but because a colleague was leaving for Spain. He was taking with him a small PDP server holding all of Akzo's business applications, and a book called How to Learn Spanish in 30 Days.

Four months later he was back. He had implemented all Akzo's standard processes in the newly acquired Spanish consumer products division and he had lots of stories about the Spanish consumer market, the competition, the customers, the food, the weather and our new colleagues. He had spent most of his time with users, such as sales people, logistics and marketing, and almost no time with other IT people - partly because we did not have many in Spain! As a result, he was consulted regularly by the European management team on matters concerning Spain or other new markets. At that time we did not have enterprise ERP, SOAs or the enterprise service bus; we just had specific applications for services like purchasing, inventory, order entry and invoicing, and a good understanding of how AKZO wanted to manufacture and market consumer products.

Somehow that got lost. Now IT talks mainly about SAP, Oracle or data warehousing and 90 percent of the time we talk to other IT people. Granted, IT is more important and there is a lot more IT around. And because scale is larger and the level of technical integration is much higher, the complexity is often overwhelming. But there must be a way to get back to what really matters: the business.

Luckily, there are recent management approaches that help achieve this. Both approaches borrow from other disciplines. Portfolio management originated in investment management and made its way first to IT in project and portfolio management; it is now gaining popularity for service portfolio management and is a topic we will return to in chapter 4.10. Later in this section we will examine the application of manufacturing best practices such as agile, lean and a supply chain approach to IT management. In the next chapter we analyse the approach one of the leading management consultancies is proposing.

## 4.4: Reshaping IT management - by cutting it into two halves?

McKinsey recently published an interesting and very readable piece on 'reshaping IT for turbulent times'. In the article they analyse what seems to be a dichotomy for today's IT management: how to balance running an efficient IT factory with being a responsive customer-focused provider.

In the article (freely accessible after registering) Roberts, Sarrazin and Sikes describe two models: an efficient factory approach and a more enabling, innovation-oriented approach. However, their suggested approach of applying two models - effectively splitting the organisation into two separate parts, a mainstream factory and a boutique - seems less than optimal. This split resembles the traditional split of IT into development and operations, something that is also turning out to be less than optimal and too slow for today's markets. Hence the emergence of the new IT discipline called DevOps.

It is understandable why they use two models: traditionally efficiency and innovation require different approaches. Think of the organisation as a sponge. If you want more efficiency, you centralise (squeeze the sponge and any excess water pours out). However, if you want innovation and new ideas, you need to let go of the sponge, creating room to suck up water (new ideas). Squeezing and letting go at the same time seems impossible.

Addressing efficient production and customer responsiveness simultaneously seemed impossible in traditional manufacturing as well - until management innovations such as just-in-time supply chain optimisation gave management the tools it needed. The main difference between the new supply chain approach and the traditional manufacturing-oriented approach was that the goal shifted from efficient production to effective end-customer delivery. This leads to vastly different decisions when put into an optimisation model.

Splitting the IT organisation into a back-office grinder shop and front-office boutique will turn out to be a temporary solution at best. Not just because a dual-model approach prevents any optimisation across the two, but also because experience shows that in cases like these, the low-cost grinding part will soon move to a low-cost provider (for example, manufacturing moving to China). Very soon afterwards, the innovation part is likely to tag along (again look at what is starting to happen in China). Traditionally, the best innovation labs are near factories, as the interchange of ideas and knowledge on what works and what does not is essential. An exception might be fundamental research, but that is an area most commercial enterprises have lost interest in, or can no longer afford.

It took the manufacturing industry several decades (and fierce competitive pressure from pioneers such as Japan) to make the transition and become efficient and responsive at the same time. IT can learn from these experiences. The competitive pressure required to make such a transition has already arrived. Cloud computing enables users to bypass IT completely and source solutions directly from outside service providers; a practice sometimes referred to as 'rogue IT'.

IT does need to be closer to the business, but I think this can be achieved without cutting it in two. By taking an integrated approach based on IT supply chain thinking, with a large emphasis on sourcing, organisations will be able to 'have their cake and eat it'.

## 4.5: About rogue IT and stealth clouds

Cloud computing can be seen as an important enabler for more user and business empowerment. Traditionally, we consider any IT outside the IT department's purview as rogue or 'shadow IT'.

Rogue IT (sometimes called consumerisation) occurs when employees outside the IT department deploy IT technology to achieve or automate certain tasks. Rogue IT is not new. However, cloud computing is giving it a new runway and much better camouflage. This is because with cloud computing you do not need to secretly misuse a server that IT made available for another purpose, or explain to your colleagues why you have all these computers under your desk.

The idea of IT outside the IT department is enjoying renewed interest. Industry analyst Ted Schadler wrote an interesting article in the Harvard Business Review called 'IT in the Age of the Empowered Employee'. A recent survey of 4,000 U.S.-based knowledge workers showed that no less than 37 percent of them are using do-it-yourself technologies. In Schadler's new book, Empowered, he calls these covert innovators HEROes - highly empowered and resourceful operatives. CIO magazine picked up on the topic recently, using the term "stealth cloud".

**Personal experience**  
I had my first encounter with rogue IT many years ago, during my first assignment in one of Holland's largest multinationals. The company had a department called 'Information Systems and Automation', ISA for short, that relied on mainframes to run corporate reporting and accounting. But there was also ISA-2, a divisional IT department, which ran operational and planning systems. Its platform of choice was PDPs and Digital VAX. At the production location where I worked (a massive manufacturing plant so far in the south of Holland that it was practically considered a foreign factory) we had ISA-3, a local IT team that supported office automation and printing using the then just-emerging PC platform.

But all these were not considered rogue; this structure was a logical consequence of the fact that bandwidth at the time was so expensive that this was the best way to deliver IT services. How easily we forget in the age of the cloud... and if your organisational chart still looks like this, it might be time to reconsider!

I met the rogue IT function, dubbed 'ISA-4', on my second day working there. The plant I was stationed at manufactured medical equipment, for which it was necessary to calculate the exact trajectory of electrons. For this, the chief engineer had obtained what was at that time a state-of-the-art, luggable, suitcase-sized UNIX system. He took the system home each night so these calculations could be run while he enjoyed a good night's rest. So far, so good - were it not that he had also built a small inventory and quality assurance system that kept track of all the work-in-process in the factory. This data was manually typed into the corporate and divisional systems at the end of each month (the portable system held the 'primary records', as my EDP auditing professor would point out).

Each of these pieces of medical equipment cost more than my car (my current car, not the old banger I had then) and the final QA outcome (approved or scrap) was extremely important for the financial and commercial success of the manufacturing operation. Yet, every evening this crucial data left the premises, to arrive back only if our chief engineer had not run into a tram during his daily commute.

#### Moving on

That was many years ago. Since then, those systems have been replaced by bright and shiny ERP applications. The factory has been consolidated, off-shored, outsourced and then insourced again, while the type of equipment has long been replaced by a whole new generation of, you guessed it, digital technology. I am sure that in this new factory there are still users outside the IT department building rogue solutions and applications, because that is simply what smart employees love to do.

I'm not suggesting I'm akin to one of those smart employees, but with most of the jobs I have held since, I have managed to create my own rogue pet systems. It began with using a Windows Help text editor and a CD writer to save time on faxing product information to 12 European offices every Friday night. Next, I developed a Lotus Notes system for gathering enhancement requests, followed by a rogue intranet site (somebody gave me a fileserver password and putting IIS on there was easy). Next was a kind of precursor to salesforce.com, a CRM intranet site for logging visit reports and forecast data. If only I had known...!

The problem with industrious/innovative end users is that they eventually get bogged down by the same thing that is slowing down IT departments: providing support and maintenance of what they created earlier. This is comparable to that 70 percent of the overall IT budget that is spent on 'keeping the lights on'. As soon as a user has developed something really cool, such as a way to use his iPhone to support his customers, five of his colleagues will want the same. Two of these colleagues may not have an iPhone but instead use a BlackBerry (cutting our innovator's development productivity in half, as he now has to build and support the functionality on two platforms). So he spends less time on his real job and basically becomes a type of IT person.

This is why, after a while, once these industrious types have moved on, IT is called back in to clean up the mess. That results in the IT department having even less time or budget available to provide the types of innovation the business was looking for in the first place.

#### The cloud impact

The beauty (or danger) of cloud computing is that it allows business users to create innovations without 'hobbying around' on their phones or PCs, because it lets them go out and contract with outside vendors to create solutions in a more professional but still rogue manner. Traditionally, business users had to go through the IT department for any such project, as eventually the solution had to run on the corporate network or servers. With cloud computing, new applications no longer have to run on the corporate network or servers, enabling these business departments to move outside.

Neither of the above scenarios is desirable. We do not want creative business people bogged down by maintenance tasks, nor do we want end-user departments to contract with any IT vendor they like, bypassing any attempts at having an enterprise architecture in the process. But at the same time, we do want this type of user-led innovation to continue.

Somebody (some department) will need to guide and orchestrate these innovations. IT would be the logical candidate, but only if it can shift its time and resources away from 'keeping the lights on' towards these more innovative tasks. Cloud computing, with its potential to deliver formerly complex IT tasks 'as a service', may be just the recipe for freeing up IT's time from the mundane for these more innovative and differentiating endeavours.

#### End-user computing

An interesting phenomenon in this context is end-user computing, in particular Microsoft SharePoint. Here the IT department in many organisations seems to hand end users a gun to shoot themselves in the foot. Many users start enthusiastically, only to find out after a year that the maintenance is overwhelming, prompting them to abandon the project or start all over again. The fact that many of these sites are aimed at a specific department makes this worse: organisational structures tend to change yearly or even more frequently, rendering the site's objectives and design no longer valid.

Now IT professionals may rightfully say: "But that means the design was simply wrong. If you mirror the org chart, your system will always become obsolete. You should mirror the process or even better the data model, as those tend to be much more stable." It is a right-brain-left-brain scenario. And that is exactly why it makes sense to involve the structured thinking of IT professionals in helping users come up with solutions that match their needs. This, however, is only likely to happen if the IT-support personnel are working inside these departments (and not somewhere in a central basement).

A well-documented example of this is the reshaping of IT at Procter & Gamble (P&G) by CIO Filippo Passerini. Under his leadership, P&G outsourced a large part of its hard-core infrastructure tasks several years ago. The majority of the retained IT people were reallocated (also physically) to work in business departments like marketing, product development and sales. Together with their business colleagues they started creating new solutions and approaches to both operational and strategic issues, such as the closely monitored social media / advertising campaign recently described in Fortune magazine (hardcopy only). In this way IT's core strengths, for example structured thinking and problem-solving capabilities, start to play a crucial role in the overall success of the enterprise. This, however, only happens after a large part of the repetitive infrastructure-related services that traditionally keep IT busy are supplied as a utility, either through an outsourcing construct or by leveraging the cloud.

#### Time for hybrids

This approach of IT and business working closely together as one team seems logical. In reality, though, IT has actually become more siloed over the last decade in a lot of industries. For example, I was one of the first graduates of a new curriculum called IT & Business, which consisted of, you guessed it, 50 percent IT and 50 percent business and economics subjects. The goal was to develop hybrids: people able to straddle and connect business and IT, either working as an IT person in a business department or as a business person in an IT department.

After graduating I started in IT in the pharmaceutical industry and quickly discovered that there were only two departments recognised by management as strategic to Pharma: sales and research (funnily enough, in that order). IT was listed among a long list of supporting departments that included finance, manufacturing, logistics, HR and catering. Being young and having unmatched faith in the power of technology, I moved over to an industry where IT was pretty core: the IT industry itself.

Having moved over to the vendor side, I did observe a distinct widening of the chasm between IT and business. Early in my career I found the IT manager often to be the best person to talk to for a fast understanding of what a company did (as he worked with many departments, often having himself developed the applications they used in some kind of 4GL during his younger years). But with the proliferation of standard ERP packages, three-tier client/server, Java and service-oriented architectures, IT became more involved with technology and less with what the company did. Some may argue that the worst culprit was Java. Traditional 4GLs and even COBOL aimed to read like plain English, so moving from those languages to speaking to users in plain English about their business was not a big step - something that cannot be said for Java and its typical practitioner.

Another testament to this mind-boggling increase in complexity is the fact that the IT management industry (the solutions needed to manage IT itself) surpassed the application industry (the solutions used to manage business processes) in revenue about a decade ago. Companies are now spending more money on keeping IT running than on doing business things with IT - a weird and worrying statistic. The cloud, however, has the potential to change this, both because of the abstraction from technology that cloud, virtualisation and sibling technologies can deliver, and because enterprises are starting to realise they need to take a distinctly different, more business-outcome-oriented approach to managing cloud IT than the approach that became standard for managing traditional IT. A supply chain approach to managing IT can free up in-company IT talent to truly engage in business matters again.

#### Time to choose sides?

If you are currently in IT, you may want to have a look at The Future of Corporate IT from think-tank consulting firm The Corporate Executive Board. Their five-year outlook contains some astonishing conclusions: "The IT function of 2015 will bear little resemblance to its current state. Many activities will devolve to business units, be consolidated with other central functions such as HR and Finance, or be externally sourced. Fewer than 25 percent of employees currently in IT will remain, while CIOs face the choice of expanding to lead a business shared service group, or seeing their position shrink to managing technology delivery".

If you are currently on the business side you may consider letting more IT people into your ranks. With your business becoming more about shipping bits instead of atoms to customers, now is a good time to start adding more IT skills to your team. If only to keep your rogue innovators productive.

## 4.6: The IT-dustrial revolution

Driven by the endless, pre- and post-cloud, vendor-driven technology push of the last 20 years, many of today's managers seem clueless about what to do with a problem

In industrial manufacturing, year-on-year cost reductions of substantial size are quite normal and expected. The plasma or LCD screen that cost $400 to produce at the time of introduction has a fully-loaded manufacturing cost of $100-150 two years down the line. And rightfully so, because in the third year these TVs are likely to retail at less than $200. We see a similar pattern of continuing reductions in manufacturing cost in food, air travel and thousands of other products and services.

There is only one sector where this kind of productivity increase seems to be largely elusive. An industry segment that costs more money every year, despite enormous progress in technology. We are talking about the steam engine of today's business processes: information technology. Although you get ever more computer power for less money, the cost of an average company desktop is the same as (or higher than) in previous years. Moreover, the total cost of ownership of ERP and other business applications keeps on rising.

#### Division of labour

A set of very clear principles underpins the constant cost reductions found in industrial production. It makes sense to examine the history of these industrial developments and to try and draw some conclusions that can help to better manage IT. In 1911 the American engineer Frederick Winslow Taylor published the ground rules of industrial division of labour in a book called The Principles of Scientific Management. The broad acceptance and implementation of his theories led to significant increases in productivity and launched the world economy into an unprecedented spiral of increasing wealth and prosperity.

Before that, products were made by individual craftsmen. A gunsmith, for example, made one gun per day. To do so, a gunsmith required an education (from apprentice to master) that often took up to five years. Through the division of labour proposed by Taylor's theory, the job was done by multiple people. Someone made the barrel, someone else the trigger and a third person specialised in making powder chambers. Tasks were divided and simplified further and further, ideally until they were so simple they could be automated. This approach also turned out to be very beneficial in leveraging the biggest invention of those days: the steam engine. Instead of the 10 rifles that 10 gunsmiths could make in a day, 10 people now produced a hundred or more rifles per day. The average training time of five years for a gunsmith went down to five weeks for a barrel maker. And as a result, the average pay also decreased significantly, often to the level of so-called unskilled labour. The productivity increase that resulted from Taylor's ideas proved enormous.

Early factories were laid out and managed solely based on optimising the utilisation of the (often expensive) machines. In front of every machine there was a queue of products waiting to be processed. This enabled the machine to carry on processing continuously, resulting in utilisation rates of up to 99 percent. Unfinished products were transported from machine to machine and put in a long queue. For the owner of the machine this was ideal, less so for the customer waiting for the product. Products often had a lead time of up to six weeks, while the actual processing time was only one hour. The customer also had very little choice, as the machines could only plough ahead productively if the number of variations was kept to the absolute minimum and everything was produced in vast quantities. An approach similar to how mainframes were traditionally used in IT.

#### Assembly line

To address some of the shortcomings of mass production, a new way of organising production was soon introduced: the assembly line. The whole layout of the factory was now optimised to get the product through the factory as quickly as possible. The main advantage was speed: a car could be assembled in half a day, a laptop in one hour and a complete phone in 10 minutes. The drawback was that it seemed slightly more expensive, as machines were utilised less. A machine press, for example, that could press four times per minute now only pressed once every five minutes, when a car happened to come by. An approach similar to the one-server-per-application approach seen in IT. It was also relatively inflexible, as it was difficult or even impossible to produce many different products on one line. All of which led to Henry Ford's immortal line: "You can have any colour you like, as long as it's black".

Soon, customers began to demand more choice. Today they even want individual choices for TV models and mobile phones, at lower prices than the going rate for last year's standard black model. Influenced by the ideas and work of W. Edwards Deming, a new way of managing industrial production was pioneered in Japan: just in time (JIT). Today, Toyota manufactures a multitude of different models and variants on a single assembly line. Small trucks and standard family cars can be produced on the same line, one after the other. Something made possible not by new technology, but merely by refining the management of the existing technology. An approach comparable to the ideas of a private cloud.

The most important difference between this JIT or lean management and the traditional approach was that products were no longer PUSHed through the factory, they were PULLed. In other words, production only starts when there is specific demand (PULL), not when the machine happens to be available (PUSH). Producing only when there is demand required a new approach to production management. In the Japanese car factories, a refined combination of kanban cards, standard bins and MRP-type systems was used for this. In addition, factories needed real-time insight into the effect of a certain action or decision on the work floor on the end product, and therefore into the impact on the actual customer (who created the PULL). Modern production environments use supply chain optimisation software to do this. With this software one can see directly the impact of a certain delay, problem or change in planning on the end customer. More importantly, corrective action can be taken.
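The PULL principle described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (all class and variable names are my own, not from any real system): each station holds a small buffer governed by kanban "cards", and new work is only released when downstream consumption frees a card.

```python
from collections import deque

class Station:
    """One production station with a kanban-limited output buffer."""
    def __init__(self, name, kanban_limit):
        self.name = name
        self.kanban_limit = kanban_limit   # maximum work-in-progress allowed
        self.buffer = deque()              # finished items awaiting pull

    def can_produce(self):
        # PULL: produce only while downstream demand has freed a card
        return len(self.buffer) < self.kanban_limit

    def produce(self):
        if self.can_produce():
            self.buffer.append("item")

    def pull(self):
        # Downstream consumption frees a kanban card, triggering replenishment
        return self.buffer.popleft() if self.buffer else None

line = Station("assembly", kanban_limit=2)
line.produce(); line.produce(); line.produce()   # third call is blocked
print(len(line.buffer))   # 2: work-in-progress never exceeds the card limit
line.pull()               # customer demand frees a card...
line.produce()            # ...so production resumes, back to the limit of 2
```

The point of the sketch is that nothing ever tells the station to stop; the card limit makes overproduction structurally impossible, which is exactly what distinguishes PULL from PUSH.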

So what have we learned from our short excursion into industrial history? That the computer and IT industry had, until recently, seemed stuck in a pre-Taylorian era, with IT staff resembling pre-industrial gunsmiths. In IT the subject matter is still too complex and specialist knowledge too important for any sensible division of labour. This is partly due to the persistent use of IT's own language, which is so peppered with acronyms that only members of the guild (IT people) can participate in meaningful discussions about the trade and the profession. As a result of this 'professional conspiracy', division of labour is, unlike in every other industry, still far from standard in IT. This has resulted in the well-known IT silos, where 'gunsmiths' watched over their own individual, well-guarded areas. As a result many of Taylor's productivity benefits have passed the IT industry by. Or have you recently heard someone complain about the now very low salaries in IT or claim that three months' experience is adequate for a senior Java developer?

The degree of standardisation and interoperability in IT is also less than desired. There is a plethora of emerging (XML) standards, but just like the French gunsmiths of the 18th century, the IT industry seems to understand too well that the lucrative implementation and integration services industry does not benefit from widely implemented common standards.

The pre-industrial-revolution job shop can be likened to batch/mainframe computing. Here the user was secondary to the utilisation and cost of the mainframe and, whether he or she liked it or not, they had to wait until the next day for the output, as this was the only way the use of the machine could be optimised. Similarly, the successor of the job shop, the assembly line as first introduced by Henry Ford, has its equivalent in IT. The process orientation of ITIL, reinforced by the speed obsession of the Internet bubble, caused suppliers to push 'one-app-per-server'. As a result, servers were dedicated to a specific 'assembly line' (application), regardless of utilisation. Many IT organisations still have hundreds of servers (a.k.a. space heaters) all of which are used only a fraction of the time, using their own proprietary storage, print and security subsystems. A far from optimal situation.

So how do the ideas of PULL-driven JIT production equate to IT? Some of the same principles apply in what was originally called 'on demand' or 'utility computing'. This approach ensures that only IT services for which true demand exists are being delivered and subsequently paid for. Incidentally, cloud computing builds on many of these utility computing ideas but also allows users to source these services over the Internet. And just as in other industries, the principles of delivering only what is truly needed 'just in time' resonate well with customers.

To implement this type of on-demand computing, IT departments are taking measures similar to those being taken by their manufacturing colleagues to make JIT production possible. In JIT manufacturing, standardising and reducing the number of components is essential. Toyota may make 40 different models of car, for example, but they use only four different gear boxes and maybe three different radios. Service oriented architectures can be seen as the IT equivalent of this approach, where multiple applications can call the same services. Services that, as a result of the larger usage, can be built more economically and operated with a much higher rate of automation. Another example is on-demand provisioning. Where in the past servers would be dedicated to a single task, they can now be flexibly provisioned, similar to the way a team in a Japanese car factory quickly configures a press or a welding robot. This is another concept that has been around for a long time but is now gaining popularity rapidly as part of (private) cloud computing.

#### Management matters

The successful implementation of on-demand computing is as dependent on real-time insight into the consequences of decisions, issues and changes as production in the real physical world is. We need to understand what business processes are impacted if the router on the first floor gives up, or worse if the printer at the shipping dock runs out of ink. And we, of course, need to see the alternatives available to continue critical processes like invoicing or month-end close, despite the fact that a specific router or printer is temporarily down.

Most modern IT management systems allow us to specify which IT components are involved with which business process (like invoicing or month-end closing). But often this has to be manually entered and maintained, which is difficult if it changes frequently. Defining and maintaining this manually will no longer be an option when virtual systems, through automatic provisioning, are dynamically allocated to certain business processes. What is required is some kind of real-time monitoring function that analyses the running processes dynamically (on demand), determines the correlations and interdependencies and presents these to the administrator (in terms of service levels) so he can take action or approve the suggested remedial actions.

When all this is eventually in place, one can begin to manage the IT processes in an industrial manner, cost effectively and on demand. This is not a question of technology or nicer, shinier boxes. In the 1980s, when European managers started visiting the JIT Japanese factories, the first thing they noticed was how old the equipment in many of these factories was. The Japanese had not replaced the machines, but changed the way they managed them. Likewise in IT, today's equipment is often sufficient. What is needed is better management of these machines. If we now truly start to focus on managing and integrating what we have, instead of thinking about replacing everything we have with yet another shinier generation, then maybe the contribution of IT to increased labour productivity does not have to remain elusive or impossible to measure.

## 4.7: Managing an industrialised supply chain of services

Having established that managing IT basically means managing the delivery of services in an increasingly industrialised and robust manner, one question remains. What type of systems are needed to support this more advanced type of IT management?

Just as in manufacturing planning, many of the individual management tools are available, but they do not share a common model or common framework. The outcome is integration headaches and/or suboptimal results. In the case of manufacturing, it was APICS, the American Production and Inventory Control Society, that provided a common framework. Centred around a bill of material, it provided the model that most of today's ERP systems are still based on in some shape or form. ITIL certainly has a role here, but as a best practice framework it is descriptive, not prescriptive.

If we agree the key product of IT is the services delivered, then ideally the heart of an IT management suite would be a model of these services. The service model becomes the main source of information for populating the service catalogue, to describe the available services in detail and explain what constitutes these services from the functional perspective and what infrastructure is needed to deliver them. It can also indicate who can access them and how much they cost.

#### A 'bill of material' for the IT factory

For years, manufacturing has relied upon this central document called a 'bill of material', which describes, in minute detail, each component of the product to be manufactured. It serves as a foundation for design, procurement, manufacturing and distribution; and is therefore the linchpin for company-wide planning, costing and communication.

In IT, the service model is developing into a similar foundation, describing all the items required for each IT service as well as their interrelationships. The service model helps IT organisations develop new services more easily, provide current services more efficiently and promptly correct any errors that may occur.

The service model is not simply a discovered list of items used for a certain service at any moment in time. It is a conscious design, like a master recipe, that describes what types of items would ideally be used for this service. This distinction is especially important in dynamic environments like virtual ones, where the actually used components can vary by the day or even by the minute. In the same way that a bill of material prescribes the use of 12mm bolts of Grade AA+ but does not specify which vendor should deliver these, a service model describes what type of processing or storage is required for a specific service, without specifying the physical machine or location.
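The design-versus-discovery distinction can be made concrete with a small sketch. This is purely illustrative (the service names, resource types and machine identifiers are all hypothetical): the service model prescribes component types and grades, while a separate, volatile binding maps those requirements to whatever physical resources happen to fulfil them right now.

```python
# The service MODEL: a prescription of types, like a bill of material.
# It names the kind of processing and storage needed, never a specific box.
service_model = {
    "invoicing": {
        "compute": {"type": "app-server", "min_cores": 4},
        "storage": {"type": "block", "min_gb": 200, "tier": "redundant"},
        "database": {"type": "relational"},
    }
}

# The DISCOVERED state: which concrete resources currently fulfil the
# prescription. This mapping may be re-made on every provisioning cycle.
current_binding = {
    "invoicing": {
        "compute": "vm-0042 (8 cores)",
        "storage": "san-b/lun-17",
        "database": "db-cluster-eu-1",
    }
}

def rebind(service, component, new_instance):
    """Swap the physical instance without touching the design."""
    current_binding[service][component] = new_instance

# A provisioning system moves the invoicing workload to another machine;
# the model, like the bill of material, is unchanged.
rebind("invoicing", "compute", "vm-0097 (4 cores)")
print(service_model["invoicing"]["compute"])
```

Keeping the two structures separate is the design choice that matters: tooling can validate any binding against the model (does vm-0097 offer at least 4 cores?) without the model ever going stale.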

#### A model as the foundation

Service management is a discipline based on the ITIL philosophy. Using a model-based approach for service management can provide an integrated view of IT services and facilitate the required cost control. This is crucial, because in today's information-based society, IT cost becomes an ever-more significant part of the total cost of organisations. Stringent rationalisation and automation as part of industrialised service creation and delivery can reduce IT costs by at least 10 percent to 15 percent annually, contributing significantly to a company's competitiveness.

Just as the bill of material in manufacturing operations describes each component of the product being manufactured, the model describes all the components, sometimes called configuration items (CIs), comprising an IT service. This includes the servers and databases needed, along with the network components, applications, settings and printers required in order to perform an IT service, as well as the users and their respective roles.

#### Fixing problems faster

A consistent model also makes it easier to find, correct and even prevent errors, because staff can immediately see which IT services will be affected when an IT resource goes down. Moreover, the service model reveals which business processes are impacted by an interruption in a given IT service. This helps staff set the right priorities during trouble-shooting. For example, the breakdown of an entire server can be less important than a small bottleneck in a label printer queue that prevents a critical customer shipment from being sent on time.

The model also makes it easier to search for the cause of errors within the vast matrix of a company's IT services. For example, if a user reports a problem, the administrator can see exactly which assets the affected service normally requires to function, and then evaluate those assets and repair them if necessary.
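The impact analysis described in this section is, at heart, a walk over the service model's dependency links. Here is a minimal sketch under assumed data (the CI names, services and business processes are invented for illustration): given a failed configuration item, walk upward to the services that use it and then to the business processes that depend on those services.

```python
# Hypothetical dependency data, as a service model might record it:
# which configuration items (CIs) each IT service uses...
service_to_cis = {
    "invoicing":       ["router-1f", "erp-server", "label-printer"],
    "month-end-close": ["erp-server", "db-cluster"],
    "intranet":        ["web-server"],
}
# ...and which business processes rely on each service.
process_to_services = {
    "customer-shipping":   ["invoicing"],
    "financial-reporting": ["month-end-close"],
}

def impact_of(failed_ci):
    """Walk the model upward: failed CI -> affected services -> processes."""
    services = [s for s, cis in service_to_cis.items() if failed_ci in cis]
    processes = [p for p, ss in process_to_services.items()
                 if any(s in services for s in ss)]
    return services, processes

print(impact_of("erp-server"))
# (['invoicing', 'month-end-close'], ['customer-shipping', 'financial-reporting'])
```

A lookup like `impact_of("label-printer")` would show that a stalled label printer threatens the customer-shipping process, which is exactly the prioritisation insight the paragraph above argues for: a small CI can outrank a whole server.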

#### Implementing changes safely

Even today, the majority of service outages occur as the undesired and unexpected result of introduced changes. This often results in a common attitude of "if it ain't broke, don't fix it". A service model enables the impact of changes to be evaluated before they are made and as a result enables a proactive continual improvement attitude.

#### Providing new IT services faster

Better IT service availability and performance are not the only economic benefits of a service model. Just as industrial designers, manufacturing engineers and sales personnel can more quickly develop new products based on an existing bill of material, IT teams can customise existing IT services to meet new demands or derive new IT services from existing models.

#### Catering for planning and costing of services

Moreover, in the same way a bicycle manufacturer can calculate, based on the bill of material, how many wheels, frames and luggage racks and how much time is needed to build 2,000 bicycles, the service model can be used as a basis to calculate how much data storage and server performance the introduction of a new IT service will require. It can also be used to measure which application functions and network connections are needed, and how much time and effort this is likely to take.

#### Providing existing IT services more easily

A service model-based approach also simplifies the process of establishing and provisioning a front-end service catalogue. If the service model is the bill of material of the IT factory, then the service catalogue is its online store. Much like a bicycle manufacturer's online store - where customers are not interested in the individual components but in the different types of bikes, sizes and colours they can order - the service catalogue describes the finished IT services along with their offered service levels and costs.

Likewise, IT management suites can use a service model both top down (to do planning, costing, provisioning and impact analysis) and bottom up (to assist with trouble-shooting, root-cause analysis and performance management).

A uniform model of IT services also helps all the parties speak a common language, whether they're talking about demand management, service interruptions or costs, because the service model improves the understanding of services and enables communication among IT staff and users.

#### Towards advanced optimisation

Using a service model as the basis of integrated planning and tooling, IT can start to look at optimisation of processes. It makes IT experts the production planners and financial engineers of the IT factory, enabling them to allocate resources where and when it makes most sense economically. Instead of voodoo magicians who spend their days monitoring blinking lights to ensure equipment is up and running, they become partners with management and other departments in the delivery and continuous optimisation of business services.

## 4.8: Applying manufacturing best practices

When applying the practices of manufacturing organisations to IT management to industrialise IT, it makes sense to look at what is broadly accepted as today's manufacturing best practice: lean manufacturing.

Much has been written about lean IT, and we have no intention of repeating or including that here. However, a discussion about the industrialisation of IT and its next logical step, cloud computing, would not be complete without a short section on lean IT. CIOs find themselves in a tough spot. Budgets seem to decrease year on year, but expectations for service delivery remain high. So CIOs are going lean, applying 'lean' thinking to their IT strategies. Lean IT allows CIOs to focus on what is most important: delivering value to their internal and external customers while eliminating waste (effort spent on non-value-add activities).

Lean principles are the leading principles of modern industrial production. Applying lean principles to IT enables organisations to identify the true added value and eradicate anything that is wasteful within IT management. Thinking lean does not necessarily need to result in reactive cost cutting. Instead, many organisations are exploring proven technology strategies and principles more familiar in manufacturing to maximise value and minimise waste.

To understand why lean IT principles have a place in the management of IT services and the underlying technology infrastructure, it is necessary to look at the role of IT: to develop, support and enhance business services that deliver value to the organisation and its customers. Similar to manufacturing goods, the development of business services involves managing demand, prioritising activities, marshalling finite resources and controlling defects.

In essence, the lean approach centres around maximising value (value in the eyes of the end customer/user) and minimising waste (any steps, delays or non-quality that do not add value).

This pragmatic management discipline was road-tested in the manufacturing sector by visionary pioneers of lean thinking, like Toyota, Motorola and Xerox, who realised a long time ago that any waste in the manufacturing of a product should be quickly identified and eliminated. Why? Because it adds no value to the customer, it dilutes quality and reduces profitability. To make the discovery of such areas of waste easier, a list of seven deadly kinds of waste was created for manufacturing. The list consists of the following: defects, overproduction, waiting, non-value-added processing, transportation, inventory (excess), motion (excess), and employee knowledge (unused). Alternatively, this can easily be remembered by its acronym 'DOWNTIME', and an equivalent list for IT has been described at Wikipedia.

Lean management also makes heavy use of simple visual techniques like kanban cards. Such lean visualisation techniques apply just as well to the management of IT services and the underlying technology infrastructure. In IT, however, business services consist of intangible bits and packets coursing through electronic infrastructure. It is not directly apparent which servers and infrastructure components are supporting which services, so it becomes imperative to have tools that visualise end-to-end transactions and the infrastructure that sits under these transactions.

Another important aspect of lean process management is the concept of value streams. In the previous chapter we spoke about 'services' when describing the product that an IT organisation produces. Applications is another popular term to describe what IT delivers. In reality, neither service nor application is a very well defined concept. Most people agree they are related and often refer to the same thing. Development will call what they build an application, while the operations team refers to the same thing as a service. The first question we face when trying to define these concepts is granularity: is Microsoft Office the application or is it Excel? And is the shared spell-checker part of the application or not?

But more important than the answers to the above is the fact that in reality nobody (outside IT) sees any added value in defining more precisely what IT delivers. This changes when we approach it using the concept of value streams.

Based on the concept of the Value Chain as first described by Michael Porter in his 1985 best-seller, Competitive Advantage: Creating and Sustaining Superior Performance (now we are talking), a value stream is an end-to-end business process that delivers a product or service to a customer or consumer.

Value Stream Mapping is a lean manufacturing technique used to analyse the flow of materials and information currently required to bring a product or service to a consumer. At Toyota, where the technique originated, it is known as 'material and information flow mapping'. Wikipedia details the concept of Value Streams in an IT context further under lean IT.

The advantage of using the value stream concept over service/application is that it allows IT to invite users to re-engineer the value streams that serve their customers. And as part of that process, to discuss with them where IT could add value and where they see waste that can be eliminated by, through or from IT. Regardless of whether we end up calling the result a service model or a value stream, the mapping conversation will likely be 80 percent about what the company does for its customers and only 20 percent about IT specifics (which would be a good thing).

Many IT organisations have explored the idea of using lean techniques. But, its usage has typically been restricted to the application development department, which shares many of the factory-production-style characteristics. Most of today's development takes place using lean (a.k.a. agile) development methodologies like Scrum. However, the rest of the organisation still operates pretty much in a traditional fashion. And if only one link in the IT supply chain is agile, the total is still pretty traditional.

In fact, most IT shops still consist of the same departments that made sense 20 years ago: development, operations, support and, more recently, a PMO (project management office), with the latter trying to somehow connect these together into a predictable and manageable process.

The combination of lean or agile development with lean IT operations enables not only fast development, but also fast deployment of the developed applications. It is bridging the gap often referred to as DevOps, the area (or no-man's land) between development and operations.

Fujitsu Services, for example, has demonstrated that IT operations can also be managed from a lean perspective. The company has invested heavily in creating industrialised IT infrastructures and services; making them more efficient, more reliable, quicker to implement and easier to change, as documented in Masters of Lean IT.

In IT operations there are also multiple areas that add no value to the customer of the final service. These elements of waste include time spent managing defects, over-provisioned capacity and time-intensive manual procedures that could easily be automated. Each element of waste considered independently is costly; but when aggregated, they significantly compromise the ability of IT to support both internal and external customers on a sustainable basis.

Operational and tactical demand management, often referred to as 'keeping the lights on', is an important part of IT as it typically consumes up to 70 percent of the IT budget. And this operational and tactical demand management lends itself particularly well to lean thinking. Using lean process techniques, IT operations can standardise and automate the delivery of a broad range of services. For example, the process to 'onboard' new employees, which consists of requisitioning an appropriate PC, setting up an email account and granting access rights to applications, can be completely automated to eliminate waste in the form of wait times, excess motion and lost productivity.
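The onboarding example lends itself to a short sketch. This is a hypothetical illustration only: every function name below stands in for a call to a real provisioning or directory API, and the employee data is invented. The point is that the steps run back-to-back as one automated flow, eliminating the wait time between hand-offs.

```python
# Each step is a stand-in for a real provisioning or directory API call.

def requisition_pc(employee):
    return f"PC ordered for {employee}"

def create_email_account(employee):
    return f"{employee.lower()}@example.com"

def grant_access(employee, applications):
    return {app: "granted" for app in applications}

def onboard(employee, applications):
    """Run every onboarding step in one flow: no queues between teams,
    no forms waiting in someone's inbox (the waste lean targets)."""
    return {
        "pc": requisition_pc(employee),
        "email": create_email_account(employee),
        "access": grant_access(employee, applications),
    }

result = onboard("Alice", ["erp", "crm"])
print(result["email"])            # alice@example.com
print(result["access"]["crm"])    # granted
```

In a real environment each function would also record its outcome, so the same flow doubles as an audit trail, another manual step removed.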

In the next chapter we will examine lean in a cloud and IT supply chain context.

## 4.9: How lean is your cloud?

As we saw earlier, in many ways delivering IT services can be compared to manufacturing processes. Cloud computing can be seen as the logical next step, converting the traditional IT factory into a modern IT supply chain.

In this chapter, we discuss the lessons that IT can learn from a hundred years of manufacturing best practices and the possible role of cloud computing in further industrialising IT.

#### Mass production

Cloud computing leverages the principle that mass-produced is almost always cheaper than custom-made. The proverbial Ford Model T was all about using standardisation to drive cost down... any colour as long as it is black! Today's cloud computing offers mass-produced standardised services to millions of users, and as a result the monthly cost per user can be relatively low.

#### Mass customisation

It did not take long for consumers to reject the notion of driving only black cars. In Japan, Toyota perfected lean manufacturing to offer choice at a cost comparable to mass production. Instead of an assembly line dedicated to one model, Toyota managed to run all kinds of different models on the same production line. Thanks to virtualisation, IT is likewise abandoning 'one server per app' and is running multiple applications in varying combinations on a flexible cloud infrastructure. In a comparable way, multiple customers are using the same SaaS application in very different ways. In this case, the concept of multi-tenancy makes such premium flexibility possible, while maintaining the low cost of mass production.

#### Mass standardisation in product design

The secret behind mass customisation in manufacturing is massive standardisation of the underlying components and platforms. In the consumer electronics industry, the average manufacturer's portfolio of television sets went from ten different models, with an average life span of three years, to hundreds of different models with major product renewals occurring every six months. Product development lead times needed to be slashed from years to months. Smart manufacturers moved from every TV having its own custom-designed printed circuit boards to using the same board across most of their models. Agile development and component re-use are the IT incarnations of this trend.

#### Assemble to order in product delivery

Prior to industrialisation, most products were 'made to order' by craftsmen. Post World War II, with demand high and supply low, the market went to products 'made to stock'. As the market changed from a seller's to a buyer's market, customers demanded differentiated products at low prices and with short lead times. To meet these higher demands, manufacturers perfected an 'assemble-to-order' supply chain, where final products were rapidly assembled from low cost and often purchased standard components.

IT went through a similar transition, moving from tailor-made software via standard packages to a service oriented architecture, whereby end-user services in theory are assembled to order. In a cloud computing context, this means sourcing low-cost component services and flexibly assembling these either in house or using a PaaS solution.

#### Design for manufacturing

The days when R&D designed products and then threw them over the fence for manufacturing to figure out how to produce them are over; in modern manufacturing, products need to be designed with manufacturing in mind. In most modern manufacturing organisations, R&D and production work very closely together throughout the whole product life cycle. Similarly, product developers at today's cloud providers are much more involved with how their software will run than traditional developers are. With automatic provisioning and scaling up and down of applications becoming standard practice, 'design-for-operations' or DevOps will be a required discipline for all developers.

#### Deliver double the features, at half the cost, every 12 months

An outrageous claim? Quite the reverse. In the consumer electronics market, for example, this may even be understating the market dynamics. Designing and manufacturing products that will sell for half of today's price by the time they reach the market is no picnic. IT will need to prepare for similar market dynamics. To some extent, hardware and basic services like bandwidth and hosting are already keeping up with this rapid cost reduction. But IT needs to prepare for its 'premium services' to meet these requirements too. The impact of the cloud means that, with bandwidth making distance and time zones increasingly irrelevant, competition can come from everywhere.

#### From manufacturing requirements planning (MRP) to supply chain management (SCM)

The main difference between traditional MRP and SCM was that MRP tried to plan the organisation's own manufacturing activities, while supply chain planning took into account the activities of the extended enterprise. SCM also acknowledged that not all parameters were under the control of the planner. If the ship transporting cars from Kobe to Rotterdam left on the 12th of the month, then production needed to be planned around that. It is the same with cloud computing: many of the components required for the final customer experience are no longer under our (direct) control, but we remain responsible for the final result and need to plan around obstacles.

#### Synchro kanban

As shown in the example above, attempts to micromanage the macro-environment are not a good idea. Micromanagement would be as futile as making trained butterflies take off from the coast of Japan to prevent a hurricane in Central America. A better approach is to leverage the internal management capabilities of the individual subsystems. Toyota's synchro kanban approach is a good example: a macro-level plan that works on a factory-to-factory and country-to-country level, while each factory is responsible for meeting its commitments using its own capabilities and flexibility. Toyota has achieved this using simple, low-tech systems such as kanbans or dual bins. Centralisation of cloud computing could invite megalomaniac global planning attempts. In my view, the ideal approach would be to Keep it Stupidly Simple (KISS) - just like synchro kanban.
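To make the dual-bin idea concrete, here is a toy Python sketch (all quantities are invented): each station holds two bins of parts, and an emptied bin is the only replenishment signal. No central planner is involved; the subsystem manages itself.

```python
# A minimal dual-bin kanban sketch. Emptying a bin is the (only)
# signal to order a refill; the second bin covers the lead time.
class DualBin:
    def __init__(self, bin_size):
        self.bin_size = bin_size
        self.bins = [bin_size, bin_size]   # two full bins
        self.replenishment_orders = 0

    def consume(self, qty):
        while qty > 0:
            take = min(qty, self.bins[0])
            self.bins[0] -= take
            qty -= take
            if self.bins[0] == 0:          # empty bin = kanban signal
                self.replenishment_orders += 1
                # swap the full bin forward; the refill arrives behind it
                self.bins = [self.bins[1], self.bin_size]

station = DualBin(bin_size=10)
for demand in [4, 7, 3, 6]:   # fluctuating demand, no forecast needed
    station.consume(demand)
print(station.replenishment_orders)
```

Note how the station copes with uneven demand without any global plan; the macro level only needs to size the bins, which is the KISS spirit of synchro kanban.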

#### Continuous incremental improvement

It is tempting to assume that such demanding requirements also require a wholesale approach to innovation; in other words, building a new factory instead of trying to improve the old one. With the exception of some base components, in most industries the incremental approach of improving a working factory has proven more effective than trying to get a brand-new factory to work. With service oriented architectures, that realisation is reaching IT: we should improve what we have, rather than build yet another new cloud-powered factory next to the mainframe, UNIX and Windows manufacturing plants of the past.

#### Costing

Many people believe that the billions the industry has invested in ERP systems can be justified by the improved planning capabilities that such a global perspective provides. But in reality, the benefits of ERP, if any, come more from improved financial visibility. The ability to compare costs, prices and efficiencies per country enables the overall portfolio to be optimised. And although most ERP systems have added a supply chain planning solution over the past years, their cost analysis and financial functionality is often both more advanced and more widely implemented.

#### Fully loaded, integral product cost

Understanding the cost of a product in the manufacturing process is both an art and a science. Using a variety of tools and methods, direct and indirect cost elements are allocated to cost carriers (products). In fact, most manufacturing innovations are first screened against their impact on product cost before their implementation is even considered. The two main allocation methods are a roll-up based on the bill of materials and an activity-based allocation using intermediate pools of costs. IT costs, and especially infrastructure costs, were traditionally fixed and allocated as general overhead. With configuration management databases (CMDBs) and service models being more widely implemented, organisations can begin to allocate specific technical costs directly to the business services they support. The pay-as-you-go model of cloud computing also makes that easier.
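To illustrate the bill-of-materials roll-up applied to IT services, here is a minimal Python sketch. The service names, component quantities and unit costs are entirely made up; the point is only the mechanism of recursively summing component costs up to the service level.

```python
# Hypothetical bill-of-materials roll-up for an IT service.
# All names and prices are illustrative, not from any real catalogue.
def rolled_up_cost(item, bom, unit_costs):
    """Recursively sum component costs for `item` using the BOM."""
    if item not in bom:                 # leaf: a purchased component
        return unit_costs[item]
    return sum(qty * rolled_up_cost(part, bom, unit_costs)
               for part, qty in bom[item].items())

bom = {
    "web_service": {"app_server": 2, "db_server": 1},
    "app_server":  {"vm": 1, "license": 1},
    "db_server":   {"vm": 2, "storage_tb": 4},
}
unit_costs = {"vm": 50.0, "license": 120.0, "storage_tb": 25.0}

print(rolled_up_cost("web_service", bom, unit_costs))  # monthly cost
```

With a CMDB-style service model in place of this toy dictionary, the same roll-up gives the fully loaded cost of each business service rather than an undifferentiated overhead figure.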

#### Global sourcing and spot markets

It is clear that under the above market conditions, smart sourcing becomes a key competitive differentiator. Static long-term contracts and multi-year commitments are replaced by spot markets, hedging and pricing based on average daily prices. In cloud computing, we are already seeing the first examples of this, again with Amazon leading the pack by introducing spot pricing for its Elastic Compute Cloud (EC2) offering.
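The economics can be sketched in a few lines of Python. This is a deliberately simplified model, not Amazon's actual spot-market mechanics: we assume a fixed on-demand price, a series of hourly spot prices, and a simple rule that falls back to on-demand capacity whenever the market price exceeds our bid.

```python
# Toy spot-vs-on-demand sourcing model; all prices are invented.
ON_DEMAND = 0.10   # $/hour for guaranteed capacity
spot_prices = [0.03, 0.04, 0.12, 0.05, 0.02, 0.11]  # hourly market prices

def hourly_cost(spot_price, bid):
    # If the market price is at or below our bid we run on spot and
    # pay the market price; otherwise we fall back to on-demand.
    return spot_price if spot_price <= bid else ON_DEMAND

bid = 0.06
total = sum(hourly_cost(p, bid) for p in spot_prices)
print(round(total, 2))
```

Even this crude rule beats paying the on-demand rate for every hour, which is exactly why smart sourcing across spot and contracted capacity becomes a differentiator.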

#### Just in time

At first sight, just-in-time manufacturing seems at odds with a focus on product cost. Making large quantities to stock (just in case) seems more efficient than producing each item only when required (just in time). The revolutionary idea of lean manufacturing was that if making a hundred identical items in a row is more efficient, the conclusion should not be to make batches of a hundred, but to change and tweak the system until a batch size of one can be produced as efficiently as a batch of a hundred.

#### Single minute exchange or die

SMED, formally 'single-minute exchange of die' but sometimes jokingly referred to as 'single minute exchange or die', is when the entire factory team works together to change the manufacturing process over from one type of product to another within single-digit minutes. Imagine a Formula 1 racing team getting ready to switch tyres (including a guy with a whistle and endless rehearsals) and you get an idea of how serious a practice this is in lean factories. To most of us, this may appear outrageous and expensive. But this singular focus on achieving together what everyone agrees is important (in this case, making to order and meeting real rather than forecasted customer demand) is what makes SMED relevant in a cloud context.

#### Total quality

Before lean manufacturing introduced the concept of zero defects, the industry consensus was that optimal quality sufficed. The idea was that if one in x thousand products failed, it was cheaper to repair those few than to improve manufacturing quality. But it will be clear that just-in-time and zero inventories have no tolerance for that idea. If even a single component fails, the whole delivery to the end customer will be severely delayed. As a result, manufacturing started measuring defects in parts per million instead of percentages or per mille. The idea was that manufacturing something correctly the first time is ultimately cheaper than having to repair it later on, as all unplanned repair activities are essentially waste that adds no value. The idea of 'first time right' is gaining traction in IT.

Cloud computing, with its large scale, is accelerating this. With a million users, the impact of an error or (security) flaw is much higher and does warrant a larger focus on quality. Interestingly, this focus on zero defects also progresses another lean concept, namely excluding any functions that the user sees no value in, as these increase complexity and the chance of errors, but not the value; this is also referred to as offering 'good enough' functionality.

#### Maximise value, minimise waste

As mentioned earlier, the lean IT mantra of 'maximising value' (only do what adds value to the end customer) and 'minimising waste' (eliminate steps that do not add value) can guide your decisions. Take any idea or proposal and evaluate it against these two simple criteria: does the service in question add significant value in the eyes of your end customer? And does it minimise waste by eliminating steps that add no value relevant to your customers? These two very simple criteria can prove very useful in streamlining your cloud computing efforts.

Now is the time for organisations to begin evaluating how cloud computing can help them transform their traditional IT factories into a modern IT supply chain.

## 4.10: A service portfolio approach

Earlier on we discussed the use of a service model and supply-chain techniques to optimise the delivery and support of IT services. But first of all we need to determine which services need to be delivered. This is where service portfolio management comes in.

The manufacturing equivalent of portfolio management is master production planning (MPP), where the master plan for production is agreed at a high level and for a longer time horizon (quarters, not days). A master plan is typically created at a product family level (leaving the decision about which exact products to produce to a later date) and uses simple resource profiles rather than detailed bills of materials to check feasibility.

The portfolio management technique originated in the investment management profession and adds an important capability to our IT management toolset. While ITIL and lean IT give us many ways to make our existing portfolio more efficient and to increase the value of the services therein, they do not necessarily help in selecting which services should be in the portfolio in the first place. That is where service portfolio management comes in. ITIL Version 3 acknowledges the validity of this approach and includes portfolio management prominently in its book on service strategy, which covers service portfolio management, IT financial management and demand management.

To understand the role and purpose of portfolio management for IT services, it is best to start by looking at investment portfolio management. The manager of the average investment portfolio has so many stock and bond options to choose from that making decisions at such a granular level is impossible. As a result, the manager establishes rules at a higher level that the portfolio will adhere to (this percentage in stocks, this percentage in bonds, this percentage abroad, this percentage domestic, this percentage in food, this percentage in technology, for example). Those rules vary, of course, based on a range of objectives; for example, does the investor want to retire in two years or in 20 years? Is he or she risk averse or more of an entrepreneurial type? Having created the rules, the manager can more easily decide which options to pick, monitor the progress of the portfolio against the goals, and adjust when needed.

For several years, this concept has been applied successfully to project management. The average enterprise has thousands (or even tens of thousands) of individual projects, requests and ideas, all demanding budget and resources. And all are equally important, at least if you ask the departments sponsoring them! Using project portfolio management (PPM), these individual projects are classified against company goals, risks and returns. By introducing a portfolio management system, a balanced selection and prioritisation of projects to be executed can be made. In addition, the progress of these projects can be monitored and, by using the planning capabilities, scarce resources can be allocated. Portfolio management also allows for advanced optimisation techniques, such as selecting parts of projects that deliver higher value in a relatively short time instead of executing whole projects from start to finish.
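The classify-and-prioritise step can be sketched in a few lines of Python. This is a toy model, not any particular PPM product: the projects, the value/risk weights and the greedy value-per-cost rule are all assumptions made for illustration.

```python
# Toy project portfolio selection: score each project against weighted
# goals and fund the best value-per-cost until the budget runs out.
# Project names, weights and figures are invented.
projects = [
    {"name": "CRM upgrade", "value": 80, "risk": 30, "cost": 40},
    {"name": "Mobile app",  "value": 60, "risk": 20, "cost": 20},
    {"name": "DC refresh",  "value": 50, "risk": 50, "cost": 50},
]

def score(p, w_value=1.0, w_risk=0.5):
    # Reward expected business value, penalise risk.
    return w_value * p["value"] - w_risk * p["risk"]

def select(projects, budget):
    funded, spent = [], 0
    ranked = sorted(projects, key=lambda p: score(p) / p["cost"], reverse=True)
    for p in ranked:
        if spent + p["cost"] <= budget:
            funded.append(p["name"])
            spent += p["cost"]
    return funded

print(select(projects, budget=60))
```

Real PPM tools add scenario simulation on top of this, varying the weights, budgets and demand assumptions to compare portfolios, but the core discipline is exactly this explicit, comparable scoring.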

In the same way that an investment portfolio view enables a sensible dialogue with the investor, avoiding personal and subjective preferences for specific stocks, PPM enables the organisation to have an equivalent discussion about which projects to prioritise without becoming embroiled in pet-project or political debate. An important aspect of portfolio management is the ability to do simulations. Calculating through variations in all underlying assumptions, such as cost, volume, demand, price and exchange rates, results in a number of scenarios that can be compared against each other.

Service portfolio management and application portfolio management, its close cousin on the development side, enable this approach for services and applications. As expected, service portfolio management keeps a close eye on the financial aspect of the portfolio. In fact, it enables the calculation of costs and the assignment of budgets at a service level. For people with a production background, this may seem a no-brainer, as in manufacturing almost all decisions are cost-driven. Decisions to switch supplier, replace components, use another machine or another production process are all calculated, simulated and evaluated from a financial point of view.

This is less frequently the case in IT. In fact, many IT organisations will struggle to determine the fully loaded cost of each provided service. Decisions about how to implement or deliver a certain service are often based more on continuing with already selected standards or approved vendors than on an analysis of the cost impacts of that specific decision. Further industrialisation of IT, where IT is basically 'selling services by the pound' and competing with external service providers, will require the IT department to increase its maturity with regard to service costing.

The service portfolio management discipline is gaining popularity rapidly as it enables us to discuss what IT does (deliver services) without getting into the typical technical details that make business users switch off. It does this by translating IT questions into planning and financial terms, a language business people generally understand and like to speak.

## 4.11: An IT supply chain model; once more, with feeling

We have already touched upon the idea of cloud computing making IT management more akin to supply chain management. Now it is time to take a closer look.

First of all, we will look at the supply chain in the simplest form imaginable, something even simpler than the supply chain at a manufacturing company. Think of a transport company, such as Federal Express, DHL or TNT, that transports packages from location A to location B. Processes, people and resources are required within the supply chain to get the package from A to B.

The reality today is that many of these distribution companies do not actually come to your door themselves, at least not in every region or town. They use subcontractors and local partners at various points. It would be prohibitively expensive for the delivery firm to own their own vehicles and employ their own drivers in every remote country, city and village around the world. (Bear with me; we will get to cloud computing in a minute). This way, they can still offer you end-to-end-service and keep you up to date with minute-by-minute parcel movements around the globe. They provide customers with tracking numbers, or meta-information. They know exactly which trucks are where, and with which packages. As a result they can outsource almost every logistical process (the outside arrows in our diagram).

But IT does not transport packages from A to B (at least I hope that is not what you do all day!). IT meets the demands of the business by providing a steady supply of services. IT does not have trucks or warehouses, but departments such as development, operations and support that work within the supply chain. Essentially, an IT supply chain takes IT resources, like applications, infrastructure and people, and uses these to create and deliver services.

Some IT shops have decided not just to react to demand, but to work with the business to actively help users figure out what they should want or need. The arrow marked 'innovation' in diagram 7 (page 129) indicates this. A more recent trend is the introduction of DevOps, a way to closely connect and integrate the demand side with the supply side. This is often done in conjunction with the introduction of agile development processes.

Users typically care about speed, cost and reliability, not about whether IT uses its own trucks or someone else's. Speed, as in many supply chains, is one of the main criteria. Responding faster to customer or user demands reduces cycle time and time-to-market and makes organisations more agile and more competitive. The use of cloud computing in all its incarnations, such as IaaS, PaaS and SaaS, can play an important part in further increasing this speed.

With **IaaS** , the IT department can significantly speed up the procurement, installation and provisioning of required hardware. Because of its operational expenditure (opex) model, no capital expenditure requests need to be raised, no boxes with hardware need to be unpacked and no servers need to be installed. Just as in the above distribution example, the organisation can respond rapidly to heavily fluctuating demand, extreme growth or demand for new services by using external capacity if and when needed.

With **SaaS** , the route from determining demand to getting a service up and running is even shorter, because the whole thing is already a service the minute we start looking at it. There is no buying, installing or configuring of the software. It all runs already at the provider's website. Large SaaS implementations go live much quicker than traditional on-premises implementations, in many cases for psychological or even emotional reasons. As the solution is already running, users are much more willing to start using it on the spot. Many SaaS providers reinforce this further by specifically designing their software to enable simple 'quick starts.'

In those cases where there is no ready-made solution available, **PaaS** can deliver significant time savings. As soon as the developer has defined the solution, it can be used in production. The PaaS provider, through its PaaS platform, takes care of all the next steps, such as provisioning the servers, loading the databases and granting the users access. Comparing PaaS with IaaS, the big difference is that with PaaS, the provider continues to manage the infrastructure, including tuning, scaling, securing and so forth. IT operations does not have to worry about allocating capacity, about moving it from test to production or about all the other things operations normally takes care of. And because the PaaS provider has already done this many, many times, it can be done immediately and automatically.

#### Sounds too good to be true?

Well, actually it might be. Although the above can be faster, it can also mean that IT loses control and can no longer assure the other two aspects that users care about: reliability and cost. So, how can these concerns be addressed? In the same way as in the distribution example: by making sure that at all times IT has all the information about 'where everything is', or rather, 'where everything is running'.

This management system - call it a 'cloud connected management suite' - needs not only to give insight about where things are running and how well they are running, but also allow you to orchestrate the process, move workloads from one provider to another and help you decide whether to take SaaS or PaaS applications back in house (or move them to a more trusted provider). Ideally, it will allow you to optimise your environment dynamically based on the criteria (speed, cost and reliability) and constraints (compliance, capacity, contracts) that are applicable at that moment in time to your specific business.
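As a sketch of the kind of decision such a suite would automate, the hypothetical Python below picks, for each workload, the cheapest provider that still satisfies its reliability and compliance constraints. The providers, prices and uptime figures are invented; a real suite would of course also weigh contracts, capacity and migration cost.

```python
# Toy workload placement: cheapest eligible provider wins.
# Provider names and all figures are hypothetical.
providers = [
    {"name": "public_a", "cost": 1.0, "uptime": 0.995, "eu_data": False},
    {"name": "public_b", "cost": 1.4, "uptime": 0.999, "eu_data": True},
    {"name": "in_house", "cost": 2.5, "uptime": 0.999, "eu_data": True},
]

def place(workload):
    # Filter on hard constraints first (reliability, compliance)...
    eligible = [p for p in providers
                if p["uptime"] >= workload["min_uptime"]
                and (not workload["needs_eu_data"] or p["eu_data"])]
    # ...then optimise on cost among what remains.
    return min(eligible, key=lambda p: p["cost"])["name"]

print(place({"min_uptime": 0.999, "needs_eu_data": True}))
print(place({"min_uptime": 0.99,  "needs_eu_data": False}))
```

Re-running the placement as prices, uptime figures or constraints change is precisely the dynamic optimisation described above: workloads move when the answer changes.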

This dynamic approach is a long way from the more traditional 'if it ain't broke, don't change it', but IT will have to get used to it and embrace this new way of doing things, just as planners at industrial companies did. Today's global manufacturers would not be as efficient, or such a driver of the world's prosperity, if they had not started to optimise their global processes a long time ago.

There are, however, a number of prerequisites for implementing such a supply chain approach in IT. First, we need to achieve fluidity or movability of IT. IT needs to be able to take fairly sizable chunks and move them somewhere else with relative ease. On the infrastructure side, virtualisation is a major enabler of this. Virtualisation containerises workloads and decouples them from the underlying hardware, thus acting as a much-needed 'lubricant'. But to enable true movability, more is needed.

Many of today's applications are as intertwined as the proverbial plate of spaghetti. This makes the average data centre a house of cards, where removing one thing may cause everything else to come crashing down. On the functional side, the use of service oriented architectures can help, but we will also need to apply this thinking on the operational side. A virtual machine model is in many cases too low level for managing the movement of complete services; management needs to take place at a higher level of abstraction, ideally based on a model of the service.

The second hurdle is security. I do not mean that the security at the external providers may be insufficient for the needs of our organisation. In fact, the security measures implemented at external providers are often much more advanced and reliable than those inside most enterprises. Note that fear of a lack of security is consistently listed as a priority concern by organisations before they use cloud computing; but it rapidly moves down the list of concerns once organisations have hands-on experience of the cloud. The real security inhibitor for the dynamic IT supply chain is this: most organisations are not yet able to dynamically grant or block access to a constantly changing set of users, across a fast-moving and changing portfolio of applications, running at a varying array of providers. This requires us to rethink how security is approached. It should be seen more as 'security as a service'; an enabler instead of an inhibitor.

The third consideration is that any optimisation will have to work across the entire supply chain, meaning across all the different departments and silos that comprise the average large IT organisation. For example, it has to look at the total cost of a service, including running, supporting, fixing, upgrading, assuring and securing it. It also has to optimise the speed and the reliability or at least give visibility into these across the entire chain.

To prevent sub-optimisation (the arch enemy of real optimisation) one needs to understand and connect to many of the existing information and systems in these departments - systems in diverse areas such as helpdesk, project management, security, performance, costing, demand management and data management. IT supply-chain optimisation is in its infancy and many start-ups are gearing up to offer some form of cloud management, but it is clear that offering optimisation requires a broad and integrated view of IT.

The end result of adopting a supply chain approach is that IT becomes more an orchestrator of a supply chain, more like a broker of services than a traditional producer of services. Demand and supply are two sides of the same coin that occur (almost recursively) throughout the chain. Once we close the loop, the supply chain becomes a cycle that constantly improves and becomes more efficient and agile in delivering on the promises the organisation makes to its customers - just like an industrial supply chain but also very much in the spirit of Deming (see chapter 4.6) and the original ideas around service management.

## 4.12: Building your first virtual IT factory

With a private cloud strategy and dynamic data centre you can respond quickly to rapid business fluctuations. But how do you get there?

This chapter discusses some approaches for building a dynamic data centre that not only address complexity and reduce cost, but also accelerate business response time, to ensure the organisation realises the true promise of cloud computing: business agility and customer responsiveness.

Cloud computing presents an appealing model for offering and managing IT services through shared and often virtualised infrastructure. It is great for new business start-ups that do not want the risk of a large on-premise technology investment, or organisations that cannot easily predict what the future demand will be for their services. But for most of us with existing infrastructure and resources, the picture is very different. We want to capitalise on the benefits of the cloud - on-demand, low-risk, affordable computing - but we have spent years investing in rooms stacked high with hardware and software to run our daily mission-critical jobs and services.

So how do organisations in this situation make the shift from straightforward server consolidation to a dynamic, self-service virtualised data centre? How do they reach the peak of standardised IT service delivery and agility that is in step with the needs of the business? Many virtualisation deployments stall as organisations stop to deal with challenges like added complexity, staffing requirements, service level management or departmental politics. This 'VM stall' tends to coincide with different stages in the virtualisation maturity life cycle, such as the transition from tier 2/3 server consolidation to mission-critical tier 1 applications, and from basic provisioning automation to a private/hybrid cloud approach.

#### The virtualisation maturity life cycle

The simple answer is to take it step by step, learning as you go and building maturity at every step. This will earn you the skills, knowledge and experience needed to progress from an entry-level virtualisation project to a mature dynamic data centre and private cloud strategy.

This is called the virtualisation maturity life cycle, and it builds maturity in four steps. In the same way that pilots begin their training in small aircraft (going full cycle from take-off to landing) before they move on to large commercial jets, it is advisable for organisations to implement these virtualisation maturity steps iteratively. For example, start a full maturity cycle on test and development servers before moving to mission-critical servers and applications.

Start slowly, by consolidating servers, to increase utilisation and reduce your current carbon footprint. To ensure deep insight and continuity in support of the migration from physical to virtual, you might want to leverage image backup and physical-to-virtual restore tools that allow you to move your physical IBM, Dell and HP images directly to ready-to-run VM images for VMware, Red Hat, Citrix and Microsoft. The next step involves optimising the infrastructure. Apart from maintaining consistency, efficiency and compliance across the virtual resources (which is fast proving to be even more complex in virtual than in physical environments), we analyse, monitor, (re)distribute and tune our applications and services.

While optimising, we also discover and document the rules we will automate in the next phase - rules about which applications fit together best, which areas are suitable for self service and which types of service are most important. As you can imagine, these will be very different for a nuclear plant (safety first) compared to an online video rental service (customers first), which is why this is such an important step. If you skip this stage and go straight into automation, you will probably end up in the same situation that you are in today, just automated.

A successful cloud strategy is all about agility and flexibility, and the next step in the virtualisation maturity life cycle helps take care of automation and the orchestration of your (now) virtual services. You can empower users to help themselves through self-service processes without calling IT for every service request. Automation has many advantages here. It is the catalyst to standardise your virtual infrastructure, integrate and orchestrate processes across IT silos, and accelerate the provisioning of virtual cloud services. Once the industrialised provisioning process is live, automation technologies can then also be used to monitor demand volumes, utilisation levels and application response times and to assist root-cause analytics to help isolate and remediate virtual environment issues.

The final stage is the dynamic data centre. This is the centrepiece of a cloud strategy, and allows you to manage the definition, demand and deployment of IT services. Your now agile infrastructure, delivered from a secure, highly available data centre, enables you to respond quickly to rapid business fluctuations. To reach a dynamic data centre, you need to industrialise the entire process of service delivery from request to fulfilment. This includes centralised service requests, automating the approval process so that department heads can approve or reject requests quickly, a standard and repeatable provisioning process, and standard configurations.

In summary: first, determine where opportunities exist for consolidation and rationalisation across your physical and virtual environments, assessing what you have in your data centre environment, and establish a baseline for making decisions that take you to the next stage. Then, to achieve agility, automate the provisioning and deprovisioning of virtualised resources, including essential elements such as identities and other management policies such as access rights.

The next step in delivering an on-time, risk-free (zero failure) cloud computing strategy is assurance at the service level, moving away from individual infrastructure elements. You need to manage IT service quality and delivery based on business impact and priority - top to bottom and end to end.

All these factors combined ultimately lead to agile IT service delivery. With agility, you can build and optimise scalable, reliable resources and entire applications quickly. By embarking on the virtualisation maturity roadmap, you can move closer to a dynamic data centre and successful cloud strategy.

This goes much further than the traditional dream of a 'lights out' data centre, which basically was a static conveyor-belt-like factory where all labour was automated away. The dynamic data centre is like a modern car factory, where robots perform almost all tasks, but in ever-changing sequences and configurations, guided by supply-chain lead orchestration and capacity planning.

**Read more?**
If you are interested in building private clouds and would like to read more on the subject, my colleague Andi Mann recently co-authored, with the IT Process Institute, a whole book on the topic, entitled Visible Operations Private Cloud: From Virtualization to Private Cloud in 4 Practical Steps. The book is structured into four sections and is available from the IT Process Institute and at Amazon.com. A free summary is also available.

## 4.13: On the importance of planning

Once our IT supply chain becomes more dynamic and we achieve higher utilisation, reducing our traditional capacity buffers, capacity planning becomes a core competency.

You should not get onto a motorway without first checking how much fuel you have in the tank or buy a subcompact car if you have four children to ferry around. So why do so many of us ignore capacity planning when it comes to cloud computing? We appear to be under the illusion that cloud elasticity makes the need for capacity planning obsolete; that cloud computing offers infinite scalability that can be turned on and off, just like a tap.

In fact, although cloud computing enables your organisation to adopt new methods of delivering critical IT services, it remains a shared resource, either shared with other departments (in a private cloud) or shared with other organisations (in a public cloud). And like almost anything that is shared, it is difficult to predict who is using the resource most and when supply will run dry. When the supply runs dry in IT, business stops: customers are not able to engage with the organisation, our sales people do not know who to call and our factories do not know what to produce. Instead, organisations need to have contingency plans for alternative capacity deployment to ensure business continuity.

It is a bit like the airline industry. Airlines are forever trying to predict demand and determine how many aircraft, how many flights and how many seats they should schedule to a destination. Their solution to this resource demand conundrum is either to overprovision the number of flights (which is expensive) or, more likely, to oversubscribe the service - overbooking seats and bumping passengers off flights - which can be the quickest route to losing a loyal customer's business. This airline business model of today may be the cloud computing model of tomorrow. But it is not just airlines that do not plan for theoretical maximum usage; banks would not have enough money if all their customers wanted to withdraw their money on the same day. And it is the same for cloud providers - it is just not efficient, or even sensible, to provide for the theoretical maximum. So smart planning becomes essential.

Virtualisation, the heartbeat of a private or public cloud computing model, adds a layer of complexity that can make it more difficult to monitor capacity use and requirements effectively. If you stand on a car factory floor, it is relatively easy to determine whether there is excess capacity: demand for the paint spray facility may be exceeding capacity, while welding may be freely available, for example. It is the same for a traditional data centre where there are dedicated machines for each application and where it is easier to see which services are reaching their maximum capacity. Capacity constraints in a virtualised environment, however, are notoriously hard to identify as they are hidden in layers of virtualised components. By lacking in-depth visibility into resource utilisation, your organisation may unexpectedly reach the capacity limits of its virtualised environment well before achieving its business goals.

With so many enterprise data centre virtualisation and cloud computing strategies being rapidly rolled out across heterogeneous physical and virtual infrastructures, how can you tame their management? Specifically, how can you effectively determine how hardware, software, storage and network resources are being used and what resources will be needed in the future?

A dynamic, flexible approach to capacity management first has server sprawl in its crosshairs. Capacity planning helps pinpoint server sprawl by identifying machines that are not used significantly or not used at all. These machines - both the physical and the virtual ones - are candidates for consolidation or even elimination. Another category to identify are machines that show large fluctuations in use. Once identified and virtualised, these resources are ideal candidates to reap the capacity elasticity that virtualisation and cloud bursting (when you run out of computing resources in your internal data centre, you 'burst' the additional workload to an external cloud on an on-demand basis) can bring. In essence, capacity planning identifies the loads that stand to gain the most by being moved to a cloud environment.
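
To make this concrete, here is a minimal sketch (in Python, with made-up utilisation figures and hypothetical machine names) of how such a capacity analysis might classify machines: near-idle machines surface as sprawl candidates for consolidation, while machines with strongly fluctuating load surface as cloud-bursting candidates.

```python
from statistics import mean, pstdev

# Hypothetical hourly CPU-utilisation samples (%) per machine.
samples = {
    "web-01":   [5, 4, 6, 5, 4, 5],       # barely used
    "batch-01": [10, 95, 12, 90, 8, 88],  # large fluctuations
    "db-01":    [70, 72, 68, 71, 69, 70], # steady workhorse
}

for host, usage in samples.items():
    avg, spread = mean(usage), pstdev(usage)
    if avg < 10:
        verdict = "consolidation/elimination candidate (sprawl)"
    elif spread > 25:
        verdict = "cloud-bursting candidate (fluctuating demand)"
    else:
        verdict = "steady load - keep as-is"
    print(f"{host}: avg={avg:.0f}% stdev={spread:.0f}% -> {verdict}")
```

The thresholds here are arbitrary; in practice they would come from your own service-level targets and a much longer measurement window.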

This planning can also help identify opportunities for consolidation and predict business service levels from the proposed configuration. Heat maps, for example, can show all capacity bottlenecks existing in an environment for CPU, memory, storage, throughput and latency. With the cloud, more organisations than ever are turning to on-going capacity planning, and for good reason. In summary, capacity planning addresses three needs of modern IT organisations:

#### Optimise data centres for performance

Maximise the performance of your existing physical and virtual infrastructure to reduce the cost and need for additional IT resources while preventing over-provisioning.

#### Mitigate risk of business change (risk of fluctuating demand)

Ensure delivery at the agreed and expected service levels by calculating the IT resource requirements to support forecasted business volumes, preferably including the results of mergers and acquisitions and the expected growth in business demand.

#### Mitigate risk of infrastructure change (risk of fluctuating supply)

Keep IT aligned with business needs by predicting cost versus performance scenarios for consolidation, virtualisation and/or hardware refresh initiatives, taking into account planned application rollouts and service changes.

In a modern IT supply chain, capacity planning and its next step - capacity optimisation - are crucial. It is the difference between reacting after the fact and proactively assuring your environment will deliver on business expectations. In evolving from an entry-level virtualisation project to a mature dynamic data centre and private cloud operation, the planning discipline has transitioned from project based, periodic analysis of physical systems, to the continuous management of dynamic virtual resource capacity.

Not doing capacity management is like depending on your local supermarket to be open at dinner time - leaving the table to run to the shop for some more bread or potatoes will probably impair your guests' dining experience. The old adage 'failing to plan is planning to fail' still holds in the cloud too!

## 4.14: Are there any shortcuts or even a better way?

The evolutionary approach we described based on the customer virtualisation maturity life cycle may be a bit too long (and safe) for some tastes. Here we explore a possible alternative route to the cloud.

Reading about all the steps and phases in the life cycle you may have wondered: Is this the only way? What if I need it now? Is there no revolutionary approach to help me get straight to a private cloud much more quickly? - as in developing countries that have skipped the wired 'plain old telephone service' (POTS) phone system altogether and moved directly to a fully wireless infrastructure. Such a revolutionary approach does exist. The secret lies in the fact that, in addition to the application logic itself, all the supporting infrastructure components required for that application can be virtualised as part of the overall service. This includes data centre components like load balancers, firewalls, NAS gateways and monitoring tools. This entire entity - the application and the infrastructure it needs to be successfully deployed - can then be managed as a single object. Want to deploy a copy of the application? Simply load the object, and all the associated virtual appliances are automatically loaded, networked, secured and made ready. This is called an application-centric cloud.

With traditional virtualisation, the servers are the parts that are virtualised, but afterwards these virtual servers, networks, routers, load balancers and more still need to be managed and configured to work with the other parts of the data centre, a task as complex and daunting as it was before. This is an infrastructure-centric cloud. With full application-centric clouds, the whole business service (with all its involved components) is virtualised into a virtual service (instead of a bunch of virtual servers), which reduces the complexity of managing these services significantly.
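
As a purely illustrative sketch (not any vendor's actual API - all names below are hypothetical), an application-centric model might represent the whole service, application plus supporting appliances, as one deployable object:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str    # e.g. "firewall", "load balancer", "app server"
    image: str   # virtual-appliance template to instantiate

@dataclass
class ApplicationBlueprint:
    """The whole business service - app plus infrastructure - as one object."""
    name: str
    components: list[Component] = field(default_factory=list)

    def deploy(self, target: str) -> list[str]:
        # A real platform would load, network and secure each appliance
        # automatically; here we just record the intent.
        return [f"{target}: started {c.name} from {c.image}"
                for c in self.components]

shop = ApplicationBlueprint("webshop", [
    Component("firewall", "fw-appliance-v2"),
    Component("load balancer", "lb-appliance-v1"),
    Component("app server", "shop-app-v7"),
])

# Deploy the same model to a private cloud or to an MSP with one call.
for line in shop.deploy("private-cloud-eu"):
    print(line)
```

The point of the sketch is the granularity: the unit of deployment is the service model, not the individual virtual machines inside it.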

With application-centric clouds, we can now model, configure, deploy and manage complex, composite applications as if they were a single object. This enables operators to use a visual model of an application and the required infrastructure, and store that model in the integrated repository. Users or customers can then pull that model out of the repository, reuse it and deploy it to any data centre around the world with the click of a button. Interestingly, users can deploy these services to a private cloud, or to an MSP, depending on who happens to offer the best conditions at that moment. Sound too futuristic? Far from it. Several innovative service providers, like DNS Europe, Radix Technologies and ScaleUp are already doing exactly this on a daily basis.

For many enterprises, governments and service provider organisations, the mission for IT today is no longer just about keeping the infrastructure running. It is about the critical need to create new services and revenue streams quickly and improve the competitive position of their organisation.

Some parts of your organisation may not have time to evolve into a private cloud. For them, taking the revolutionary (or green field) approach may be best, while for other existing revenue streams, an evolutionary approach, ensuring investment protection, may be best. In the end, customers will be able to choose the approach that best fits the task at hand, finding the right mix of both evolutionary and revolutionary to meet their individual needs.

## 4.15: The need for a cloud abstraction model

If the cloud is to fulfil its promise, we need to start thinking of it as a cloud, not as an aggregation of its components - such as VMs and other technologies.

This requires the creation of an 'abstraction model' that can be used to think about (and eventually manage) the cloud. Industry analyst Jean-Pierre Garbani introduced this in a recent post at Computerworld UK, when he talked about the need to consider the cloud as a solution, not a problem.

Garbani used the example of the Ford Model T, which was originally designed to use the same axle as Roman horse carriages, until someone came up with the idea of paving the roads. In a similar vein, Garbani argues that customers should not "design cloud use around the current organisation, but redesign the IT organisation around the use of cloud... The fundamental question of the next five years is not the cloud per se but the proliferation of services made possible by the reduced cost of technologies".

I could not agree more. It is about the goal not about the means. But people keep thinking in terms of what they already know. It was Henry Ford who once said, "If I had asked people what they wanted, they would have said faster horses." Likewise, people think of clouds and especially of IaaS in terms of virtual machines. It is time to move beyond that and think of what the machines are used for (applications/services) and start managing them at that level.

Just as we do not manage computers by focusing on the chips or the transistors inside, we should not manage clouds by focusing on the VMs inside. We need a model that abstracts from this. Just as object orientated models abstract programmers from having to know how underlying functions are implemented, we need a cloud model that abstracts IT departments from having to know on which VM specific functions are running and from having to worry about moving them.

In that context, Phil Wainwright also wrote an interesting article, 'This global super computer the cloud', a post that originated 10 years ago. Firstly, it is amazing that the original article is still online after 10 years; imagine what it would take to do that in a pre-cloud era. Secondly, thinking of the cloud as a giant entity makes sense, but I disagree with him when he quotes Paul Buchheit's statement on the cloud OS: "One way of understanding this new architecture is to view the entire Internet as a single computer. This computer is a massively distributed system with billions of processors, billions of displays, exabytes of storage, and it's spread across the entire planet."

That is the equivalent of thinking of your laptop as a massive collection of chips and transistors, or of a program you developed as a massive collection of assembler puts, gets and goto statements. To use a new platform, we need to think of it as just that, as a platform, not what it is made of. If you try to explain how electrons flow through semiconductors to explain how computers work, nobody (well, almost nobody) will understand. That is why we need abstractions.

Abstractions often come in the form of models, like the client/server model or (talking about abstraction) the object-oriented model, or even the SQL model (abstracts from what goes on inside the database). Unfortunately, the current cloud does not have such a model yet, at least not one we all agree on. That is why everyone is trying so hard to slap old models onto it to see whether they stick. For example with IaaS, most are trying to use models of (virtual) machines that are somehow interconnected, which makes everything overly complex and cumbersome.

What we need is a model that describes the new platform, without falling into the trap of describing the underlying components (describing a laptop by listing transistors).

## 4.16: It's all about the fabric

The term 'fabric computing' is gaining rapid popularity, although for now it is mainly relegated to the hardware community. But a fabric approach has much to offer to applications and services also.

According to a recent report, more than 50 percent of attendees at the recent Datacentre Summit had implemented, or are in the process of implementing, fabric computing. So this is an opportune moment to take a look at what fabric computing means for software and for (cloud) computing.

Depending on which dictionary you choose, you can find anywhere between two and seven meanings for 'fabric'. Etymology-wise it comes from the French fabrique and the Latin fabricare. The Dutch version, fabriek, actually means factory. But in an IT context fabric has little to do with the earlier-used manufacturing or supply chain analogies. It actually relates much more closely to fabric in its meaning of cloth, a material produced (fabricated) by weaving fibres.

Wikipedia says: **Fabric computing** or unified computing involves the creation of a computing fabric consisting of interconnected nodes that look like a 'weave' or a 'fabric' when viewed collectively from a distance.

Usually this refers to a consolidated high-performance computing system consisting of loosely coupled storage, networking and parallel processing functions linked by high bandwidth interconnects.

In the context of data centres it means a move from having distinct boxes for handling storage, network and processing towards a fabric where these functions are much more intertwined or even integrated. Most people started to note the move to fabric or unified computing when Cisco began to include servers inside their switches, which it did partly in response to HP including more and more switches in its server deals. Cisco's UCS (unified computing system), but also its bigger sibling, the VCE Vblocks, are the first hardware examples of this trend towards fabric computing (although inside the box you can still distinguish the original components).

One reason to move to such a fabric design is that by moving data, network and compute closer together (integrating them) performance gains can be obtained. Juniper's QFabric announcement can also be seen in that light. But the idea of closer integration of data, processing and communication is actually much older. In some respects, we may even conclude IT is coming full circle. Let me explain.

Many years ago I spoke to Professor Scheer, founder of IDS Scheer and a pioneer in the field of business process management (BPM). (I should disclose that in later years IDS Scheer was acquired by my former employer, Software AG.) He spoke about how, in the old days of IT, data and logic were seen as one. Literally! If you dropped your stack of punch cards while walking to the computer room (which in those days was a computer the size of a room, not a room with a computer in it), both data and logic would be in one pile on the floor and you would spend the rest of your afternoon sorting them again. At that time we had one stack: first the processing/algorithm logic and next the data. His argument was that we had eventually figured out that data did not belong there and so we moved it to its own place, typically a relational database. Likewise, he argued, we should now separate the process flow instructions from the algorithms and move these to a workflow process engine (preferably of course his BPM engine). All valid and true... at that time.

But not long after, Object Oriented programming became the norm, and we started to move data back with the logic that understood how to handle that data, and treat them as objects. This of course created a new issue of having these objects perform in an acceptable way, as we used relational databases to store or persist the data inside these objects. You could compare this to disassembling your car every night into its original pieces in order to put it in your garage. Over the years the industry figured out how to do this better, in part by creating new databases which design-wise looked remarkably similar to the (hierarchical) databases we used back in the day of punch cards.

And now, under the shiny new name of fabric computing, we are moving all these processes back into the same physical box. But this is not the whole story. There is another revolution going on. As an industry we are moving from using dedicated hardware for specialised tasks to generic hardware with specialised software. For example, you might use a software virtualisation layer to simply emulate a certain piece of specific hardware. Or look at a firewall, for example. Traditionally it was a piece of dedicated hardware built to do one thing (keeping non-allowed traffic out). Today, most firewalls are software-based. We use a generic processor to take care of that task. And we're seeing this trend unfold with more equipment in the data centre. Even switches, load balancers and network-attached storage are becoming software-based (virtual appliance seems to be the preferred marketing buzzword for this trend).

Using software is more efficient than having loads of dedicated hardware, and we cannot ignore the fact that software, because of its completely different economic and management characteristics, has numerous inherent advantages over hardware. For example, you can copy, change, delete and distribute software, all remotely, without having to leave your seat, and even do so automatically. You would need some pretty advanced robots to do that with hardware (if it could even be done today).

#### So how do these two trends relate to cloud computing?

By combining the idea of moving stuff that needs to work together closer together (the idea of fabric) and the idea of doing that by using software instead of hardware (which gives us the economics and manageability of software) we can create higher performance, lower cost and easier to manage clouds.

Virtualisation has been on a similar path. First we virtualised servers, and then storage and networking, but all remained in their separate silos. Now we are virtualising all of it in the same "fabric." This means that managing the entire stack gets simpler, with one tool to define it, make it work and monitor it. And that's something that should make any IT pro smile.

#### Epiphany

Nicholas Carr gives a good example of this in his book The Big Switch. In a subsequent interview with the eCommerce Times, he commented:

" _In 3Tera's AppLogic, you can see the broad potential of virtualisation to reshape how corporate IT systems are built and managed..._ "

Disclosure: 3Tera was subsequently acquired by my employer CA Technologies.

When I first saw a demo of this, it reminded me of two earlier demos I had seen of something that later reshaped IT as I knew it. The first one was after I installed Windows 1.0 (all 15 floppies). At that time, everything was still monochrome, there were no applications and the performance was not great, but it did make me think: "Boy, if they ever get this to work, it will really change desktop computing".

The second 'epiphany' was my first experience with x86 virtualisation. After having confiscated the biggest machine in the office, with the most memory, and after some tinkering, I saw an actual x86 machine boot inside a window (of course this was not an actual machine but a virtual one, yet the effect on me was enormous nonetheless). After it booted it would not run much, and running two of them brought the whole machine to a grinding halt. But it did make me think, "Wow, if this ever scales, it can completely change how we handle our machines". And (admittedly somewhat to my surprise), around 2009, a decade later, it did begin to scale and it developed into the billion dollar industry that is changing the way we manage our servers.

In the same way that Windows profoundly changed the way we use desktops, and virtualisation is changing the way we manage servers, I think this new approach has the potential to change the way we think about our data centres. This 'fabric' approach towards creating a cloud application, including its compute, storage and network components, is very different from a traditional 'automation' approach, in which the different resources are added and configured one by one, typically through some kind of scripting.

A good way to understand the difference is to think of a spreadsheet (a fabric) versus an automated calculator. A calculator and a spreadsheet have similar base functions (add, subtract, multiply, square root), but on a calculator you start with a value and perform functions against that. If you perform the same functions every month you could put these in a script, so you can play them back automatically next time. You could even edit that script to do it slightly differently. This may make things easier, but it is not a spreadsheet. Now ask yourself: when did you last see an accountant with a calculator? Indeed, automated calculators never really took off. Spreadsheets (or fabrics) are simply the better way.
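
The analogy can be sketched in a few lines of Python: the 'calculator script' has its steps and values baked in, while the 'spreadsheet' declares relationships between named cells and recomputes whenever an input changes. This is a toy illustration of the declarative-versus-scripted idea, not a real spreadsheet engine.

```python
# Calculator-style: an imperative script with the values baked in.
def monthly_script():
    revenue = 1000
    costs = 600
    return revenue - costs  # you must edit the script to change the inputs

# Spreadsheet-style: cells hold inputs, formulas declare relationships.
cells = {"revenue": 1200, "costs": 700}
formulas = {"profit": lambda c: c["revenue"] - c["costs"]}

def evaluate(name):
    return formulas[name](cells)

print(monthly_script())    # 400
print(evaluate("profit"))  # 500
cells["revenue"] = 1500    # change an input...
print(evaluate("profit"))  # ...and the dependent cell follows: 800
```

The fabric approach to cloud applications works the same way: you change the model, and everything that depends on it follows, instead of replaying and patching scripts.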

For a long time, 3Tera AppLogic was almost an industry insider secret. Several analysts and writers, like Nicholas Carr, were aware of it, discussed it and listed it in their publications. This may be a good time for you to have a closer look. If only as an interesting implementation of the above discussed trends and principles.

## 4.17: Is your cloud strategy 3D-ready?

While the TV industry has been preparing for its next wave of innovation - 3D - the IT industry has been going through a similar three-dimensional transformation. Let's have a closer look at IT's 3D journey and how a good cloud strategy should support all three dimensions. And don't worry; you won't need to wear funny 3D glasses to read this chapter.

Cloud computing is not the first innovation to hit IT. Although the hype and blogs seem to indicate otherwise, from the moment the first computer was carried into the building to the emergence of the latest generation of tablets, the way we use IT, the things we use IT for and IT itself have been changing profoundly. We can classify these changes along three dimensions: extending IT's reach to new users and into new functional areas, abstracting problems so they can be managed at new conceptual levels and sourcing solutions from specialists where it makes sense.

### Dimension 1: Extend your reach

Traditionally, the computers and applications that IT managed were used exclusively by employees. For example, general ledger and inventory systems were accessed by the book-keeping and manufacturing departments. This exclusivity has long gone. Applications have extended their reach and are now directly used by customers, by employees of partners and subcontractors and in some cases our applications reach out directly to suppliers. This extension of reach has made IT a lot more time critical. Any failure can have a direct impact on the customers' experience.

In some cases, the line between what is the business and what is the supporting application is even blurring completely. For many people, banking is their home banking application, the service the travel agent provides is an application to book tickets and hotels, and telcos run software to connect people. More and more, the digital process is becoming the business process itself.

Extending the reach of applications also has a severe impact on who should be given access to our systems and applications. From a 'simple' list of employees with their roles and responsibilities, we are moving to a situation where the list of potential users is endless. Security is becoming less about keeping people out and more about enabling the right people to do the right things, with decisions about who and what are allowed taking place at increasingly granular levels of detail and subtlety.

The inherent network orientation of cloud computing provides a natural fit for enabling 'Extend your reach', but 'Extend your reach' goes beyond having more and different people accessing IT's applications. It is also about extending into completely new application areas. Recent examples are the convergence of traditional data-processing-based IT with voice and video, and ventures into 'big data', where analysis of volumes of information traditionally too large or too diverse to sensibly process leads to new insights and advanced levels of optimisation. These applications go far beyond the traditional business IT applications that essentially were limited to capturing and processing administrative facts about business processes, with processing that seldom became more complex than adding and subtracting and the occasional multiplication. Cloud computing can help IT extend into these new, more complex, areas.

### Dimension 2: Abstraction - IT moving up in the food chain

At the dawn of IT, companies could not buy computers; they had to build their own. Later on computers could be bought but they did not come with any applications or even an operating system. Customers were expected to build these themselves, first in assembler, later in higher-level languages, while nowadays many complete standard software applications are readily available. The point is that IT for years has been moving to higher levels of abstraction to enable usage to move from extremely detailed technical work to higher level tasks.

Abstraction is basically the mechanism that makes modern IT possible. If we were still required to manually manage every transistor on a modern chip, every register in a CPU or every disk in a content management system, IT would never get around to actually helping the business.

Abstraction occurs in programming, hardware and management. In programming we went from assembler via 3GLs and 4GLs to modern object oriented languages, where abstraction basically is the core concept. In storage we went from addressing blocks and spindles to disks to NAS or even content management systems. Similarly, virtualisation allows us to abstract from the underlying (detailed) physical implementation to a more standardised high-level representation. Also, in IT management, we abstracted from managing individual components such as network, storage and processing to managing at higher conceptual levels such as services (ideally using some kind of service model).

#### Automation providing abstraction

Abstractions have been around forever. In fact, any spoken language can be seen as an abstraction describing underlying realities. In IT, however, they are often implemented through automation. We enable users to abstract to the higher level by 'automating' all the tasks they traditionally had to execute at the lower level. Traditional programming was all about memory management. Higher level languages take care of this automatically. Traditional data processing was about running hundreds of sequential jobs across many sets of data in the right sequence. Workload automation suites automated this away. Service oriented architectures (SOAs) offer services that perform complex tasks 'as a service' automatically. These automated services free the developer from having to manage or even understand the internal workings of the services he uses.

Automation is the engine that enables the user to manage processes at a higher, conceptual level. Having the right conceptual model is essential to success. Conceptual models come in many shapes and forms. A file system is such a conceptual model, so is a database. Programs, applications and services are another example of conceptual models covering different levels. A good conceptual model is close to the reality the user wants to manage and allows him to specify in the appropriate level of detail what the solution needs to do. Appropriate is the key word here. Assembler language does not provide a good model to implement general ledger or CRM systems, but could be appropriate to define operating systems or microcode.

#### Appropriate cloud abstraction models

Traditionally, conceptual models for new technologies closely resemble the old reality; remember how the first cars closely resembled carriages, but without the horses. The driver's seat would be really high because he needed to be able to see over the horse. And even though the automobile no longer had a horse, the seat was still high up. Cloud computing is also still in search of the appropriate conceptual models for its management. Traditional data centre management was about provisioning, starting and stopping servers and configuring networks. When using a private cloud to run applications, a conceptual model around servers may be too detailed; a more appropriate model would be based on services rather than underlying machines.

In a similar fashion, the industry will have to find conceptual models to manage the use of SaaS and PaaS cloud offerings. Initially, people will try and manage these in the same way as we managed the previous generation of standard software packages. Just as we managed these standard packages in the same way as the generation of in-house developed applications before that. But over time we may move to higher, more appropriate levels of abstraction. An interesting development here is the Service Measurement Index (created by the SMI consortium in co-operation with Carnegie Mellon University and hosted at cloudcommons.com) that aims to abstract the provided application services into a number of core characteristics that enable management at a higher abstraction level.

### Dimension 3: Source - divide and conquer

The third transformation of IT is the sourcing dimension. As IT organisations moved on, they started to subcontract, outsource, offshore and procure as a service more and more tasks they had traditionally carried out in house.

To some extent, abstraction and sourcing are related; they both result in organisations not having to perform certain tasks themselves. But the two dimensions also tend to reinforce each other. The external providers perform their specialisation at such scale that they are best equipped to automate their services up to the next level of abstraction. Many organisations that outsourced their service desk operations found that the provider rapidly moved from a 'Chinese army' approach (where millions of tickets were processed manually) to offering automated remediation and self-service to make the support process more efficient. In-house teams simply did not have the time, skills or scale to set this up.

Sourcing also means letting go of control, no longer being able to step in and fix things yourself in case things go wrong. As a result any sourcing strategy should include an exit and a failover strategy. One CEO became acutely aware of these sourcing risks when he read about several companies ceasing service to WikiLeaks. He asked his IT department how dependent they were on the IaaS vendor they sourced their capacity from. His IT department, 'always game for a challenge', took up the gauntlet and 48 hours of non-stop programming, gallons of Diet Coke and tens of pizza boxes (containing cheese and salami, not CPUs) later, they had created the ability to automatically move their complete operations to another IaaS provider. Given the crucial nature of today's IT from a business and personal perspective, every organisation should consider such a divide-and-conquer strategy. By dividing the workload across multiple vendors or storing a shadow backup copy of critical data at an alternative vendor, they can arrange instant failover and prevent themselves from being locked in.
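
A minimal sketch of such a divide-and-conquer failover strategy, with hypothetical provider names and health results, might look like this: probe the providers in order of preference and route the workload to the first one that responds.

```python
# Illustrative only: provider names and health results are made up.
providers = ["primary-iaas", "secondary-iaas", "tertiary-iaas"]

def healthy(provider: str, status: dict) -> bool:
    # In practice this would probe the provider's API endpoint;
    # here the probe results are passed in as a dict.
    return status.get(provider, False)

def choose_provider(status: dict) -> str:
    for p in providers:  # ordered by preference
        if healthy(p, status):
            return p
    raise RuntimeError("no provider available - plan B has failed too")

# Normal operation: the primary is up.
print(choose_provider({"primary-iaas": True, "secondary-iaas": True}))
# The primary ceases service: the workload fails over automatically.
print(choose_provider({"primary-iaas": False, "secondary-iaas": True}))
```

The hard part in reality is not this selection logic but keeping data and configuration replicated so the secondary can actually take over, which is exactly what the 48-hour exercise above had to solve.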

Of course, cloud computing has a distinct sourcing angle. So much so in fact that many people see cloud computing as just another form of outsourcing. But the attractiveness of cloud computing is that it can further IT along all three dimensions: extending IT's reach to new users and into new functional areas, abstracting problems so they can be managed at new conceptual levels and sourcing solutions from specialists where it makes sense.

Such a **3D cloud strategy** enables you to **E**xtend, **A**bstract and **S**ource **Y**our **IT**, maybe something we acronym-crazy IT guys should call **EASY IT.**

## 4.18: Eight simple rules for creating a cloud strategy

In this book, we have covered all kinds of analogies, models and ways to look at IT. We have also offered a 3D model (extend, abstract and source) and discussed the recently published federal strategy. Here we offer some final thoughts and guidance.

What follows is not a step-by-step recipe, but more a set of guiding principles for organisations setting a course towards cloud computing. They reflect my personal approach - there are some you may agree with, others you may not. To keep it brief, I am listing them as eight simple rules (or guidelines), several of which have already been covered at length in earlier chapters.

#### 1. Start your cloud strategy (and any cloud project) with an exit strategy

In a publication about cloud computing this may appear to be controversial advice. However, cloud computing, to an even greater extent than traditional computing, has a huge risk of vendor lock-in. From past experience, many of us are aware of the downside of vendor monopolies. These include high cost, low responsiveness and inflexible vendor business practices, which are often the result of vendor lock-in and excessive switching costs.

Nevertheless, there is another even more important reason to avoid vendor lock-in with cloud computing. Unlike in the past, when we bought products and used these to deliver services to our users, we are now directly dependent on the providers to deliver these services. The effect of a breakdown or contract termination is much more immediate. As a result, it is imperative to have a plan B. Always! Plan B can take various forms: it can consist of a standby contract with a secondary vendor, it will include rights to your data, and it may sometimes even include rights to run the software of your SaaS provider at another IaaS provider.

Ideally, you architect your cloud endeavours in a way that lets you move to an alternative (bootstrap yourself back into action) within a reasonable timeframe. What counts as reasonable depends on your type of business and the specific function, and can vary between six months and six seconds. Standards, although currently just emerging, will play a crucial role, and it is recommended to consider as temporary any implementations that are not based on such standards. Meanwhile, automation, provider-independent management tools (allowing you to move services across different platforms) and approaches such as RAIC (deploying clouds as redundant arrays of inexpensive cloud services) can help you enable such exit strategies in a cost-effective manner.
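To make the RAIC idea concrete, the failover logic behind such an exit strategy can be sketched in a few lines. This is a minimal, purely illustrative sketch - the provider names and the `healthy`/`deploy` callables are assumptions standing in for real health checks and provisioning tooling, not any vendor's API:

```python
# Illustrative sketch of RAIC-style failover across IaaS providers.
# Provider names and the healthy/deploy callables are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CloudProvider:
    name: str
    healthy: Callable[[], bool]   # e.g. a probe of the provider's status endpoint
    deploy: Callable[[], str]     # e.g. re-provision the workload from stored images

def failover(providers: List[CloudProvider]) -> str:
    """Run the workload on the first healthy provider, in order of preference."""
    for provider in providers:
        if provider.healthy():
            return provider.deploy()
    raise RuntimeError("No healthy provider available - exit strategy exhausted")

# Example: the primary vendor is down, so the workload moves to the standby.
primary = CloudProvider("primary-iaas", healthy=lambda: False,
                        deploy=lambda: "running on primary-iaas")
standby = CloudProvider("standby-iaas", healthy=lambda: True,
                        deploy=lambda: "running on standby-iaas")
print(failover([primary, standby]))  # -> running on standby-iaas
```

The point is not the code itself but the design choice it embodies: the ordered provider list is the exit strategy, written down and executable, rather than a clause buried in a contract.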

#### 2. Start thinking of IT as a supply chain

Although deceptively simple, this analogy will help change both the way the IT organisation thinks and acts and how other departments perceive IT. Supply chain thinking does not mean that you would rather buy than build; it means foremost that you put all your decisions in context of what your company needs to be able to deliver to its end customers. A good rule of thumb is the lean mantra: only do what adds value to your customers and religiously remove any steps/activities/processes (wastes) that do not add value (or are legally required).

A supply chain approach is also about adopting agility as a way of life, fine tuning your processes to establish an optimal 'flow' through the various parts of the organisation, constantly balancing resources (both internal and external) against rapidly changing goals and constraints towards an optimal end result. This is a very different game from traditional IT, where part of the IT organisation was busy implementing shiny new (and often risky) projects, while the other part tried to 'keep the lights on' as reliably and efficiently as possible.

In a modern electronics or automotive factory, the product mix changes constantly, new products are constantly phased in, while others are phased out. Production processes are highly automated, but at the same time constantly tweaked and improved. Decisions concerning which parts to produce in house and which to source can also change weekly or even daily, based on available skills, resources and capacity. Small teams with direct links to customers and sales have the authority to make changes where it makes sense. One could say that what started in IT with agile development will expand into agile operations.

Setting up an IT supply chain organisation includes mirroring many of the disciplines - like planning and costing - that are common in industrial supply chain organisations. Just as the proverb says, the final chain is also only as strong as its weakest link. But the reverse is also true. Being excellent in certain areas is of no value to the organisation if the rest of the chain cannot keep up.

#### 3. Use a portfolio approach for deciding what to move to the cloud

A good cloud strategy is as much about WHAT to do as about HOW to do it. A portfolio approach helps you decide which services would benefit most from moving to the cloud and which are, at this time, less desirable to move. Of course, you begin your portfolio approach with the strategic goals of your organisation in mind. These goals can be as varied as becoming more customer focused, capturing more growth in emerging markets, or becoming the world cost leader in your industry. Next, you match these with your constraints (legal, resource and financial) and you look at them in the context of your existing services (of which, I am sure, you all have an exact and up-to-date overview).

Out of the portfolio management process you get your master production plan, a roadmap for where you need to go. The portfolio process also prepares you for monitoring your progress (your various projects) against this master plan, and it allows you to allocate and schedule your human, financial and technical resources.

Just like investment portfolio management, IT portfolio planning is not a one-off exercise; it is a constant dialogue. Many argue that the process of communicating and the resulting consensus (or conflict) is as valuable as the end result: the plan. Comprehensive portfolio management is not something you can implement overnight, but at a minimum you can use it to determine your 'low hanging cloud fruit'. In other words, which projects would add considerable value to the users (not just to IT!) and do not have a risk profile that makes them unsuitable candidates. For this you can use the earlier-mentioned criteria for assessing value versus risk/readiness.
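Picking the 'low hanging cloud fruit' can be sketched as a simple filter over a scored portfolio. The services, scores (1-5) and cut-off thresholds below are illustrative assumptions, not a prescribed scoring model:

```python
# Hypothetical value-versus-risk screening of a service portfolio.
# Service names, scores (1-5) and thresholds are illustrative only.
services = [
    {"name": "e-mail",       "user_value": 4, "risk": 1},
    {"name": "CRM",          "user_value": 5, "risk": 2},
    {"name": "core banking", "user_value": 5, "risk": 5},
    {"name": "intranet",     "user_value": 2, "risk": 1},
]

def low_hanging_fruit(portfolio, min_value=4, max_risk=2):
    """Select services that add considerable user value at acceptable risk."""
    return [s["name"] for s in portfolio
            if s["user_value"] >= min_value and s["risk"] <= max_risk]

print(low_hanging_fruit(services))  # -> ['e-mail', 'CRM']
```

In practice the scores come out of the portfolio dialogue itself; the filter merely makes the resulting consensus (high value, low risk first) explicit and repeatable.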

#### 4. Decide what the post-consumerisation value proposition is for IT

A good exercise is to imagine the role of your IT department in a (fairly futuristic or even unrealistic) scenario in which the business consumes all IT services from the cloud. No data centre, no installed software, everything is sourced as a service. Even in that extreme scenario there are a number of tasks that users will expect the IT department to look after. Or to put it differently, there will be a number of necessary and important tasks that no other department will be inclined to take responsibility for. These include screening possible vendors against criteria such as business continuity, legal compliance and support of industry standards; arranging support that goes across multiple cloud services and that understands the context of these services; arranging integration and cross-service reporting; determining and monitoring SLAs; checking the invoices and tracking the cost; and making the use of earlier approved services easier by offering them in a self-service catalogue.

Over the years we have seen that as soon as the novelty wears off, users are more than happy to hand over the responsibility for support and maintenance of their new tools to IT. We saw this with PCs, with user-created intranet and Internet websites, we are already seeing the same with tablets and smartphones and we will no doubt see the same with today's and tomorrow's rogue cloud services.

This may seem at odds with the consumerisation trend of BYO (bring/buy your own) but it really is not. The idea of consumerisation is that there is hardly any support or maintenance required. How many consumers have reserved the weekend to upgrade their TV or to back up their CD collection? Consumer PCs and consumer phones are seldom upgraded; people simply buy a new one when they feel it is worth the extra cost. As more and more traditional maintenance tasks (such as patching and upgrading) are automated away, consumerisation will increase. This will allow IT to allocate the time traditionally spent on these routine tasks to working more closely with users, arranging the mentioned post-consumerisation services and acting more as a broker.

#### 5. Move your people closer to the business users  
(which will mean further away from the technology)

For a number of reasons, including increasing technical complexity and the larger scale of implementations, IT has steadily moved further away from the business. Cloud computing is offering a unique opportunity to reverse this trend. Many organisations are reversing the trend of IT being a completely central function and are moving a considerable percentage of their headcount back into operating divisions and end-user departments. As in the case of Procter & Gamble described earlier, the experience with this is generally good. As business processes become more and more digital and business colleagues become more IT literate, it is no longer a good idea to confine IT to its silo. IT is becoming the business and business is becoming IT, and by moving IT professionals into the business divisions they can act as the hybrids that are needed to broker the two.

As IT assumes more of a broker role - connecting business and users to available services - the remaining central IT functions are increasingly organised as a shared service centre, which can benchmark itself against external providers of similar services. Some organisations retain a small strategy and architecture group that provides guidance and insight for the IT colleagues now in the business divisions and sets standards for vendors.

A question that often comes up is: "Should we outsource everything technical?" Or, putting it another way: "Do we need to keep a core body of technical knowledge in house?" The answer is "It depends!" It is a general trend (across all industries, not just in IT) that activities that traditionally were done in house become more and more commoditised and are eventually sourced from an external party. This applies to sourcing electricity and raw materials but also to supporting services like physical distribution, HR, company cars and catering. The questions that need to be asked include: "Does outsourcing lock me in?" "Does outsourcing hinder me (over the long run) in adding value for my customers?" "Does it prevent me from having a believable/feasible exit strategy?" The most commonly asked question is of course whether a certain activity is to be considered a core competency. Note that we do not ask here whether IT is a core competency. It needs to be more granular: "Is providing desktops to my employees a core competency for us?" "Is optimal (ERP) planning a core competency for us?" "Is running a data centre a core competency for us?" "Is providing really user-friendly home banking a core competency for us (for some banks it is, for others not)?" Some of the famous mistakes in industrial history are about getting the core competency issue wrong, so here it is also vitally important to have some sort of exit strategy, preventing any mistakes from becoming too costly.

#### 6. Make service costing a core competency

We said it before and we will say it again: with the world becoming more digital, the share of IT cost in the overall cost is rapidly increasing. The services that banks, media companies, travel agents, universities and even governments deliver increasingly consist of bits instead of atoms. And as a result, it becomes more important to understand these costs explicitly. Cost engineering has been a core discipline in manufacturing for close to a century and it is rapidly becoming essential for all other industries.

Regardless of whether a service is completely rendered in house, composed from various sourced components or procured completely as a service, it is essential to understand the exact cost characteristics of these components and the impact on our overall cost of doing business. Much has been said about cloud moving IT cost from capex to opex, but the move of IT cost from the overhead bucket into direct cost may be even more important. And this cannot be done without a service model (the IT equivalent of the industrial bill of material) to accommodate this cost allocation process.

IT needs to be able to exactly determine not just the cost of IT but the cost of each service. This includes more traditional IT services such as the cost of processing a payment, issuing a ticket, sending an invoice, creating an order or handling a complaint, and also the cost of watching an on-demand movie, making a (video) conference call, taking an interactive course or the exact cost of a customer using online media through an iPad app provided by the organisation.
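The service-model idea - an IT equivalent of the industrial bill of material - can be illustrated with a small cost roll-up. The component names and unit costs below are invented for the example; a real model would be fed from metered usage and contract rates:

```python
# Illustrative service 'bill of material' cost roll-up.
# Component names and unit costs (in cents) are hypothetical examples.
component_cost = {          # unit cost per use, in cents
    "compute": 2,
    "storage": 1,
    "payment_gateway": 5,
}

service_bom = {             # how many units of each component one service call uses
    "process_payment": {"compute": 2, "payment_gateway": 1},
    "send_invoice":    {"compute": 1, "storage": 3},
}

def cost_per_transaction(service):
    """Roll the bill of material up into one cost figure per service call."""
    return sum(component_cost[c] * qty for c, qty in service_bom[service].items())

print(cost_per_transaction("process_payment"))  # -> 9 (cents)
print(cost_per_transaction("send_invoice"))     # -> 5 (cents)
```

The structure matters more than the numbers: once every service has such a bill of material, swapping an in-house component for a sourced one is a change to one table, and the impact on the cost of doing business falls straight out.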

Many felt the global industry invested hundreds of billions in implementing ERP systems because of the benefits better planning and scheduling would bring. In reality most companies benefited more from their ERP investments by exactly understanding their global cost and being able to optimise their global decisions accordingly. In electronics manufacturing, the heads of product divisions are not interested in how much the company spent in total on plastic versus aluminium or copper. They want to know whether they can offer their shiny new tablet at a competitive price compared to Apple or Samsung. IT needs to prepare itself for such discussions and be able to show the cost of its services per alternative method of delivery.

#### 7. Treat security as a service

Security, or better 'a fear of a lack of security', is cited consistently as the biggest concern regarding public cloud computing. But when asking people whether they would rather keep a million dollars under their bed (a private place) or in the bank (a public place), the answers tend to favour the public infrastructure. Similarly, we see security rapidly move down the list of concerns once organisations have hands-on experience with using public cloud computing. This is because they then see that the measures implemented at external providers are much more advanced than those that most organisations are able to implement internally.

The security issue that IT will need to address in a dynamic, supply-chain type cloud computing setting is this: most organisations are currently not set up to dynamically grant or block access for a varying set of users, across a fast-moving and changing portfolio of applications running at a vast array of providers. This calls for a rethink of how security is approached, more as 'security as a service', as an enabler instead of a prohibitor. The difficulty is not keeping everyone out. Instead it is, in a very granular way, letting only the right people access a very small subset of the data (and not all the documents that happen to be stored there), and actively monitoring usage to signal any out-of-the-ordinary activities.
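The two halves of that rethink - granular grants plus usage monitoring - can be sketched together. Everything here (user names, resource labels, the denied-attempts threshold) is a made-up illustration of the pattern, not a real product's policy engine:

```python
# Hypothetical 'security as a service' sketch: per-user, per-subset grants,
# with usage monitoring to flag out-of-the-ordinary activity. All names
# and the threshold are illustrative assumptions.
grants = {                       # user -> data subsets they may access
    "alice": {"hr/payroll"},
    "bob": {"sales/leads", "sales/orders"},
}
access_log = []

def authorise(user, resource):
    """Allow only explicitly granted (user, resource) pairs, logging every attempt."""
    allowed = resource in grants.get(user, set())
    access_log.append((user, resource, allowed))
    return allowed

def anomalies(log, threshold=3):
    """Signal users with repeated denied attempts - an out-of-the-ordinary pattern."""
    denied = {}
    for user, _, allowed in log:
        if not allowed:
            denied[user] = denied.get(user, 0) + 1
    return [u for u, n in denied.items() if n >= threshold]

authorise("bob", "sales/leads")     # granted: within bob's subset
for _ in range(3):
    authorise("bob", "hr/payroll")  # denied three times: outside his subset
print(anomalies(access_log))        # -> ['bob']
```

Note that the default is deny: access exists only where a grant exists, and the log is itself a service output, feeding the monitoring that turns security from a gate into an early-warning system.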

From that perspective it is interesting to see that more and more companies are turning to external (public) 'security as a service' offerings to secure their internal (private) systems. This is something few realise when they talk about the risks of cloud computing.

#### 8. Use the 3D-ready test as a rule of thumb

In a previous chapter we discussed how cloud computing can help to extend the reach of IT to new users and into new application areas, abstract activities to a higher level by using automation to implement an appropriate management model, and source those components and services where it makes sense to do so. We called this Extend, Abstract & Source Your IT, or EASY IT.

Applying this three-dimensional approach allows IT, including the more technically inclined IT people, to implement cloud computing not just to save on cost but as a way to add more value to users and customers. Increasing value is also the guiding principle that U.S. federal CIO Vivek Kundra uses in his recently published federal cloud strategy. In this strategy he also provides a pragmatic framework and checklist for the next step 'cloud migration'.

Often, cloud computing is approached as a way to do the same things we do today, faster, better and more cheaply. But I think that faster, better, cheaper cannot be the whole strategy. Continuously improving speed, quality and cost is simply a prerequisite to be allowed at the table. Business users consistently improve the processes in their departments and expect IT to do the same in their area of expertise. The potential for cloud computing lies in being able to do things that were not done before, to boldly go where no man has gone before. And, as you would expect, that is an area where IT people love to go.

The appropriate speed for deploying cloud computing depends on the culture and current state of your organisation, and realistically you are not going to be in the cloud tomorrow. The U.S. federal government issued a directive called 'Cloud First', basically a 'comply or explain' policy which states that cloud options should be evaluated first (comply) unless there are significant reasons not to do so (in which case departments should explain). Pushing the accelerator that hard might not be everyone's cup of tea, but whatever you do: do not rush in, but also make sure you do not get left behind!


#  Appendix  
The NIST Definition of Cloud Computing

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.

#### Essential Characteristics:

On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service's provider.

Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or data centre). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

Rapid elasticity. Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured Service. Cloud systems automatically control and optimise resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilised service.

#### Service Models:

Cloud Software as a Service (SaaS). The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g. web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Cloud Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations, typically through a pay-per-use business model.

Cloud Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g. host firewalls).

#### Deployment Models:

Private cloud. The cloud infrastructure is operated solely for an organisation. It may be managed by the organisation or a third party and may exist on premise or off premise.

Community cloud. The cloud infrastructure is shared by several organisations and supports a specific community that has shared concerns (e.g. mission, security requirements, policy, and compliance considerations). It may be managed by the organisations or a third party and may exist on premise or off premise.

Public cloud. The cloud infrastructure is made available to the general public or a large industry group and is owned by an organisation selling cloud services.

Hybrid cloud. The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardised or proprietary technology that enables data and application portability (e.g. cloud bursting for load balancing between clouds).

Source: csrc.nist.gov/publications/drafts/800-145/Draft-SP-800-145_cloud-definition.pdf.

## About the author

Gregor Petri is Advisor Lean IT and Cloud Computing at CA Technologies, responsible for CA Technologies' European go-to-market strategy around cloud computing, virtualisation and service automation. He launched The Cloud Academy, an initiative started in Europe, where the academy ran in several countries in co-operation with partners such as Amazon Web Services, Cap Gemini, Cisco and NetApp, and which is now being rolled out globally.

Gregor is also a regular expert or keynote speaker at industry events throughout Europe and author of Shedding Light on Cloud Computing. He is also a columnist at ITSM Portal, a contributing author to the Dutch Over Cloud Computing book, and a member of the Computable expert panel on SOA, SaaS & Cloud Computing, and his LeanITManager blog is syndicated across many sites worldwide. He was recently named by Cloud Computing Journal as one of the world's **Top 100 Bloggers on Cloud Computing**.

Early in his career, Gregor worked as a management trainee in the office of the CIO at Akzo and helped roll out just-in-time manufacturing at Philips. In the following years, he was instrumental in the introduction of several IT innovations into the European market, like the first object oriented ERP application for Marcam Solutions, where he was Director of Product Marketing for Europe, and the first XML server for Software AG, where he was Director of Sales and Marketing for the Netherlands.

Gregor was a co-founder of the Dutch Web-Services Association and board member of the XML Users Group Holland and of Geel-Zwart field hockey, where he still plays. Gregor studied Business Economics and Information Technology in Rotterdam and Tilburg, when he also wrote and marketed one of the first European shareware applications.

You can follow Gregor on Twitter @GregorPetri and read his blog at blog.gregorpetri.com


Copyright © 2011 CA. All rights reserved. All trademarks, trade names, service marks and logos referenced herein belong to their respective companies. This document is for your informational purposes only. It may not be copied, transferred, reproduced, disclosed, modified or duplicated, in whole or in part, without the prior written consent of CA. This documentation is confidential and proprietary information of CA and protected by the copyright laws of the United States and international treaties.

To the extent permitted by applicable law, CA provides this document 'As Is' without warranty of any kind, including, without limitation, any implied warranties of merchantability or fitness for a particular purpose, or noninfringement. In no event will CA be liable for any loss or damage, direct or indirect, from the use of this document, including, without limitation, lost profits, business interruption, goodwill or lost data, even if CA is expressly advised of such damages. ITIL® is a registered trademark and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.

