

# Deconstructing Cloud

Third Edition | First Edition August 2013

Copyright © A Knoblauch

Smashwords Edition, License Notes

This ebook is licensed for your personal enjoyment only. This ebook may not be re-sold or given away to other people. If you would like to share this book with another person, please purchase an additional copy for each recipient. If you're reading this book and did not purchase it, or it was not purchased for your use only, then please return to Smashwords.com and purchase your own copy. Thank you for respecting the hard work of this author.

This book is dedicated with thanks to all of my friends and family.

# Contents

Foreword

Cloud, Virtualization and the Rest of the Jargon

Cloud, Without the Jargon

Virtualization: A Computer in a Computer

Understanding Cloud Platforms

OpenStack: The Open Source Cloud

Open or Closed Clouds?

Building Cloud Environments

Cloud Storage

The Big Three of Cloud

IaaS: Infrastructure as a Service

PaaS: Platform as a Service

SaaS: Software as a Service

Doing More With Less

Introducing Cloud into your Enterprise

Say Goodbye to Internal Cost Centres

Cloud and the Demise of On-Premise Equipment

Vendor Management in the Age of Cloud

Using Cloud for Standardization

The Side Benefits of Cloud

Cloud as a Tool for Cost Control

Cloud Transformation

Cloud Benefits for the C-Level Crowd

Big Data

DevOps: The New IT Team

Virtual Desktop Infrastructure

How I Learned to Stop Worrying and Love the Cloud

Why CFOs Love Cloud Computing

The New Role of the IT Team

Cloud as a Catalyst for Innovation within IT

Securing the Cloud

Whoever Marketed Cloud Is a Genius

Protecting the Virtual Landscape

Cloud Security Simplified

Paravirtualization

Endpoint Security in Virtual Environments

Perimeter Security in Cloud

Virtualization and Visibility

Access Control and Cloud

User Management

Mobility and BYOD

Security Testing in Virtualized and Cloud Environments

Cloud Security Resources

Big Data and Security

Compliance & Other Things that go Bump in the Night

How Cloud and Virtualization Affect Security

Virtualization and Forensics

Disaster Recovery, Cloud Style

Cloud Replication

Outsourcing Security

Getting Started with Cloud

Application Virtualization

Application Modernization

Application Design

Virtual Desktop Infrastructure

Intelligent Desktop Virtualization

Cloud and Collaboration

Mobile Device Management

Leveraging Big Data for Good

Cloud as a Competitive Advantage

Cloud Service Providers

Cloud and Mid-Market Organizations

Cloud Brokers

Vendor Collaboration

Cloud and the Education Sector

Cloud and the Careers of Tomorrow

About the Author

# Foreword

In the winter of 2012, I created a short-lived daily blog entitled Tinder Stratus. While the blog doesn't exist in its daily format today, in 100 posts I was lucky enough to create some ripples in Canada's "cloud" economy. "Cloud" was a word surrounded by hype; it was on the lips of every senior-level executive of every company around the world. The problem was, no one really knew what it meant.

Explaining cloud has always been tricky. Marketers have attempted to balance the technical aspects of cloud with its business benefits but have effectively accomplished little more than promoting confusion between cloud, virtualization, and business transformation.

When I set out to write this book, and even when I wrote the blog, my goal was to figure out how to explain the relevant information clearly, without adding to the existing marketing hype. It's a challenge because, even with this book, many readers are likely to ask: "Do we really need another cloud book?" The answer is yes, and I sincerely hope this is the one.

This book was written for one purpose. Much like the old Tinder Stratus blog, this book was written to deliver as much information as possible about cloud, so we can minimize the preexisting learning curve. Let's face it: no one has time to read a stack of books and articles on cloud to gather the fundamentals they need. Instead, we need a comprehensive guide that addresses all the key issues, a guide that organizations can use so they can begin adopting these amazing new processes. The only way to do this is by addressing both the positive and negative aspects of cloud in a way that our leaders can understand. That is why this book exists.

I hope that after reading this book, organizations will begin leveraging these next-generation business practices, and we will start seeing higher adoption rates for cloud services as a result. Furthermore, I am optimistic that these ideas will help inspire the creation of more cloud services: services that will not only make organizations more efficient, but will also drive overall social change.

# Cloud, Virtualization and the Rest of the Jargon

Where do we start? Cloud, virtualization: these words are commonplace today. Virtually every business magazine has run some kind of feature on cloud, and technology publications are also jumping on the bandwagon. However, many readers lack the background knowledge to fully understand these terms. Due to the speed at which technology is advancing, the learning curve for cloud and virtualization remains steep, and it's causing headaches for organizations that must now navigate a terminology minefield if they want to begin offering cloud-related services.

Cloud is not technology; it is not a trend. Cloud is the evolution of a group of different technologies and business approaches into a single, new service delivery model. Cloud cannot exist without its technology roots, which stem from IT optimization practices mainly found in virtualization and service delivery.

What is cloud? How will it change your organization? Let's find out.

## Cloud, Without the Jargon

The term "cloud" is said to have originated circa 1994, when we started using the cloud as a metaphor to explain the Internet. As a symbol, the cloud was a great way to represent the resources we located offsite, content floating somewhere in the ether. In a similar fashion, the term "cloud" was used to describe the abstraction of resources from on-premises infrastructure. While the term itself has become somewhat of a buzzword, the origin of what we now refer to as "the cloud" (i.e., cloud computing) offers some perspective on our understanding of this new business model.

According to Wikipedia, the cloud's beginnings go back much further:

" _The underlying concept of cloud computing dates back to the 1950s, when large-scale mainframe became available in academia and corporations, accessible via thin clients / terminal computers, often referred to as "dumb terminals", because they were used for communications but had no internal computational capacities. To make more efficient use of costly mainframes, a practice evolved that allowed multiple users to share both the physical access to the computer from multiple terminals as well as to share the CPU time. This eliminated periods of inactivity on the mainframe and allowed for a greater return on the investment. The practice of sharing CPU time on a mainframe became known in the industry as time-sharing._ 1 _"_

Many argue that Amazon was another key motivating force behind cloud computing. In 2006, Amazon launched Amazon Web Services (AWS) as a means to leverage the extra computing power it had built to drive its website. Because Amazon required an inordinate amount of computing power during peaks such as holiday seasons, the company tried to figure out a way to offer its extra resources as a service to other organizations during off-peak periods. This led to the introduction of AWS, the first form of traditional cloud computing as we know it today.

Cloud computing leverages computing resources (such as hardware and software) delivered as a service over a network (typically the Internet). Generally located offsite, cloud computing can optimize use of low-cost resources (such as processors and storage), new efficient computing platforms, and high-capacity networks in order to deliver business services more efficiently and at a lower cost.

Cloud's flexibility comes from giving end users access to remote resources from a wealth of devices, using a web browser or application as the main point of access. Because the flexibility of the cloud platform is rooted in virtualization, cloud computing enables organizations to apply new hardware and software approaches to business applications, resulting in improved manageability and less maintenance while scaling resources to meet computing requirements and minimizing costs.

Cloud, however, isn't just about how you can build new service delivery models through the application of hardware and software designs; it is about transforming your organization to capitalize on new business processes that previously weren't easily accessible. Cloud is truly about business transformation. It is about doing more with less.

The real benefit of the cloud model comes from new service models that are being offered by service providers. Traditionally, organizations had to build their own IT environments, and the innovation of the organization was tied to the IT department's ability to enable the business to leverage those innovations. If your IT team could provide the latest applications and resources to enable a business transformation project, there was a higher chance for overall business innovation. Sadly, unless you were a multi-million-dollar startup, the skillsets and funding required for these projects were scarce, and the ability to thrive on the innovative edge wasn't entirely realistic.

This is where the traditional cloud model came from. Organizations that had the luxury of building large data centers to manage innovation projects were often hampered by underutilized resources that sat dormant only until periodic demand (such as holiday seasons) required them. These organizations realized that other businesses could benefit from subscribing to their underutilized resources, and this, in turn, created a new potential revenue stream for the larger hosting organization. This is where we started to see models such as Software as a Service (SaaS), whereby users are provided access to application software and databases, and the cloud provider manages the infrastructure and platforms that run these applications. This model allows organizations to reduce IT operating costs by outsourcing hardware and software maintenance, as well as support, to the cloud provider. Outsourcing these responsibilities enables the business to redirect funds previously budgeted for their management, which allows increased spending on more critical projects. As more organizations begin capitalizing on these outsourcing models, they do so knowing it will lead to greater adoption and standardization, while lowering overall costs for the entire subscriber base.

For the sake of this book, I use the term "cloud" as a means of describing the methodology of leveraging cloud-computing technologies. Cloud is a movement.

1 http://en.wikipedia.org/wiki/Cloud_computing

## Virtualization: A Computer in a Computer

I can't talk about cloud without discussing the key component that makes all things cloud possible: virtualization. Virtualization isn't necessarily a new technology, but its pervasiveness today is why cloud is now such a hot issue. Cloud is the use of virtualization to transform the way organizations manage their IT processes, either by enabling on-site virtualization of resources or by subscribing to hosted off-site services found in cloud offerings, from Infrastructure as a Service (IaaS) to Software as a Service (SaaS).

So, what is virtualization?

Virtualization is the ability to create a virtual machine (VM) that acts like a physical computer. Just as you have a computer with an operating system, storage, and processor, virtualization allows you to create the same environment, albeit virtually instead of physically. The benefit of virtualization comes from the ability to put more than one of these VMs on a server. Depending on the size of the server (host), theoretically you could have several VMs sharing that server's resources, and in doing so, reduce the number of independent servers you actually need. You can also mix and match operating systems on the same server, so if one application needs a Linux host, it can run alongside another active VM running Windows on the same physical server.

In hardware virtualization, the host machine is the actual machine on which the virtualization takes place, with a guest machine or VM running on it. The words "host" and "guest" are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor, or Virtual Machine Manager. Depending on the platform, the hypervisor may have a specific name, as in the case of Microsoft's Hyper-V.

To show what virtualization looks like, the diagrams below offer visual representations of several common builds.

Figure A shows a typical server configuration. This is how almost every server is built, with system resources (storage, processors and network functionality), an operating system, and the end applications. Keep in mind: if you create a server for every major application (databases, CRM, email, etc.), you require a veritable legion of these servers, which is why this type of model is growing obsolete. The real limitation of traditional architecture, however, is that these servers are designed to run a single operating system and a single application. This often results in an inefficient 5-20% average capacity usage per server, not to mention the maintenance required to keep the environment up and running. When you consider the expenses associated with building these servers and the costs required to power and cool them (especially if your organization has a data center full of them), you can imagine how much money IT spends just to keep the lights on.

With virtualization, the goal is to take these inefficient servers and share their resources. You are no longer dedicating a server for a single application; rather, you are now running many of these applications on a single server. The beauty of virtualization is that the underlying platform allows for the hosting of multiple types of operating systems on the same host server.

Figure B illustrates how, by leveraging virtualization, you can run several of these VMs, each with its own OS and application, within a single host server. Virtualization software solves the problem of one-server-one-application by enabling several operating systems and applications to run on one physical host. Each self-contained VM is isolated from the others and uses as much of the host's computing resources as it requires. These VMs act as independent entities, containing their own operating systems and applications, and are surrounded by internal logical barriers that give them separation and independence from one another, allowing several VMs to run at the same time on a single host.

The VMs sit on a thin software layer called a hypervisor (the software or firmware that creates the VM), and are assigned individual quotas of system resources depending on their needs, such as RAM, storage and the type of network service required. The only real limitation to how many guest VMs can run on a single host is the amount of resources available to support the functions of the VMs.
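
To make the host/guest relationship concrete, here is a minimal sketch (assuming a Linux host running KVM/QEMU and the libvirt Python bindings) of registering a guest VM with an explicit RAM and CPU quota. The domain name, disk path, and sizes are illustrative placeholders, not a recommended configuration.

```python
import libvirt  # pip install libvirt-python

# Illustrative guest definition: 2 GiB of RAM and 2 vCPUs carved out of the host.
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
domain = conn.defineXML(DOMAIN_XML)    # register the guest with the host
domain.create()                        # power the guest on
print([d.name() for d in conn.listAllDomains()])  # guests sharing this host
conn.close()
```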

There are several key types of virtualization. Full virtualization takes the entire hardware environment and transitions it to a virtual format so it runs the same way it normally would. In other cases, organizations may wish to leave some applications unmodified and instead leverage partial virtualization, which transitions only certain parts of the environment to the new virtualized model.

The other major type of virtualization is hardware-assisted virtualization. Here, the hypervisor leverages specially designed CPUs and hardware components that help improve the efficiency and performance of a guest environment. The benefit of using hardware-assisted virtualization is that it can provide more flexibility and stability in the types of VMs you want to run, and as a side benefit it allows IT departments to allocate hardware costs to specific departments within the organization based on actual resource usage. Goodbye IT cost centre.

So, why is virtualization so important when it comes to building cloud infrastructure? Beyond the technical flexibility it provides, virtualization really shines when it comes to streamlining processes and reducing costs. By layering several operating systems on a single host, you can reduce costs while running multiple applications (regardless of OS requirements) on the same server. This reduction in complexity also helps organizations manage system change requirements more effectively and increases the resources available for projects. No longer do you have to wait for a dedicated server to be built before designing an application; VMs can be created instantly with the full requirements pre-configured. From there, the application can be built and moved from host to host as needed.

Organizations not only benefit from the streamlining of IT that comes with virtualization, but also from several additional functionalities. First, by running many applications on each server, you can reduce the number of physical servers while maximizing server utilization. Instead of having multiple servers running at 15% capacity, you can run fewer at higher utilization rates, with every physical machine being used to its full potential.

Second, virtualization speeds up application and resource provisioning, as it leverages self-contained VM files. VMs by nature can be copied, pasted, and duplicated, making it easy to set up configuration templates that can be quickly deployed without the need for individual customization. You can also easily move VMs across hosts, even while they are running, to support operations such as load balancing and backup/restore.
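
The copy-and-duplicate nature of VM files is easy to illustrate. The sketch below (again assuming libvirt on a KVM host) takes the definition of an existing "golden" template, renames it, and registers it as a new guest; the template name is hypothetical, and a real clone would also need its own copy of the disk image and a fresh MAC address.

```python
import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open("qemu:///system")
template = conn.lookupByName("golden-template")   # hypothetical template VM

root = ET.fromstring(template.XMLDesc(0))         # grab the template's definition
root.find("name").text = "app-server-01"          # give the copy its own name
uuid = root.find("uuid")
if uuid is not None:
    root.remove(uuid)                             # let libvirt assign a new UUID

clone = conn.defineXML(ET.tostring(root, encoding="unicode"))
clone.create()                                    # new VM boots with the template's settings
conn.close()
```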

For advanced virtualization users, you can capitalize on the efficiency and agility of this platform in order to automate every part of your data center with policy-based provisioning, whereby applications can be provisioned on demand with full security controls already embedded. This means IT is spending less time managing the overall infrastructure and more time innovating.

There is no doubt that the shift to widespread adoption of virtualization has provided significant advantages to organizations. From reductions in IT complexity to significant cost savings, leveraging virtualization has paved the road to full-scale cloud methodology for organizations and service providers. Virtualization is really what makes cloud possible, as it enables the key cloud requirement of on-demand elasticity of resources.

## Understanding Cloud Platforms

When it comes to adopting virtualization or leveraging a cloud service, one of the biggest decisions has to do with the platform itself. Overall, virtualization fundamentally works the same way across all platforms, but each comes with its own pros and cons. While by no means the only choices for cloud/virtualization platforms, I want to touch on the three most popular options: Amazon, Microsoft, and VMware.

Let's start with Amazon and their AWS platform. This solution is said to have kicked off the cloud movement, because it was born out of an IT optimization project within the company. In order to cut costs, Amazon decided to rent out extra computing resources that were in demand during the holiday season but underutilized for the rest of the year. From this, Amazon Web Services (AWS) was born, offering several tools including Amazon EC2, its pay-as-you-go computing service, and Amazon S3, its storage solution.

To provide the service, Amazon uses Xen, a hypervisor platform, to deliver online services to organizations looking for website hosting or client-side applications. AWS became a popular choice due to its pay-per-usage billing, which makes it easy for organizations to rent services such as extra computing power or storage, eliminating the need for dedicated cash allocation within the capital budget.
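
As a rough illustration of that pay-per-usage idea, the sketch below uses the boto3 library to launch a single EC2 instance when extra capacity is needed and terminate it once the peak passes. The AMI ID, key pair, and region are placeholders, and the calls assume AWS credentials are already configured.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Rent one small instance only for as long as it is needed...
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",             # placeholder key pair
)
instance_id = resp["Instances"][0]["InstanceId"]
print("launched", instance_id)

# ...then give it back, so the meter stops running.
ec2.terminate_instances(InstanceIds=[instance_id])
```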

Amazon AWS is a great choice for organizations that want a standardized platform with a wealth of different services. Because Amazon is an e-commerce site, it also provides the benefit of industry-compliant data centers (such as those that meet the regulatory requirements of PCI or SOX), security controls, and disaster recovery plans. That being said, Amazon has suffered severe server outages in the past, resulting in a few blemishes on its track record.

Microsoft has also been working on becoming a strong contender through their cloud services. This makes perfect sense, considering the number of organizations that leverage Microsoft products. From Windows to Office to Lync, Microsoft powers most of the business world with its family of products. By transitioning these applications to a cloud service, it then becomes easier for organizations to subscribe to these products without the hassle of licensing, maintenance, updates, etc.

Powering Microsoft's cloud is their Windows Azure platform, which relies on Microsoft-managed data centers to facilitate the construction, deployment and management of applications and services. Azure offers support for many different programming languages and tools from both Microsoft and third-party sources through both its Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).

If your organization currently uses a number of Microsoft solutions and likes the idea of vendor consolidation, this is a great choice for a cloud platform. Users benefit from the standardization it provides, especially if they are familiar with the products already. Here, your only potential downside is that, if you decide to change platforms later, you may have to do some extra work converting your data.

No company is more closely associated with virtualization than VMware. Founded in 1998 in Palo Alto, California, and acquired by EMC Corporation in 2004, VMware was the original designer of the x86 hypervisor model, essentially creating modern virtualization (as opposed to hardware emulation platforms like Citrix). It is considered the market leader for virtualization and cloud, and has focused on working with its partner network to create solutions that support its enterprise virtualization platform, ESX.

VMware has always been a solid choice for a virtualization platform due to its consistent performance record. It has built a significant portfolio of solutions (including its internal cloud platform solution, Cloud Foundry) to make it easier for organizations to start building virtualized environments, while helping them start leveraging cloud.

VMware is a widely adopted, versatile platform that allows users to design their own solutions from a hardware perspective. Because it is a very commonly used platform, many cloud providers offer services built on the VMware platform.

## OpenStack: The Open Source Cloud

There is another platform type that warrants mention. While still a bit of an outsider, open source cloud platforms are starting to attract more attention. Much like Linux, open source platforms aim to promote more open environments that are built through collaboration between large numbers of users. The main powerhouse behind the open source cloud is OpenStack.

From www.openstack.com:

" _OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of_ _interrelated projects_ _delivering various components for a cloud infrastructure solution._ 2 _"_

Rackspace and NASA started the OpenStack movement in 2010, working in concert to build a huge open source cloud environment that was not just standardized, but could scale massively. After great success and substantial critical acclaim, the platform is now used by more than 150 companies and 2,000 developers, and has received support within the market, including major backing from HP and Dell. While not necessarily as popular as some of the other platforms, cloud providers such as Rackspace (one of the key forces behind the model) offer this open source model to customers.

The OpenStack movement has made an impact on the traditional cloud world too, evidenced by the fact we are now seeing open cloud projects coming from large vendors. Amazon Web Services has been working with a company called Eucalyptus (an open source provider of private cloud systems) to make the integration between AWS and open source private clouds easier.

So, does it make sense to use open clouds? The nice thing about this model is that it removes the usual things that make IT projects a pain: patents and licenses. You also have the flexibility of making the environment customized, because there are no key vendor platforms by which you are forced to standardize. There are also many Application Programming Interfaces (APIs) designed by the OpenStack community, providing a variety of unique resources. The downside is that since it is a newer platform, the security controls that exist in more mainstream solutions tend to be more robust than those found in OpenStack environments. This means you might need to leverage your in-house security team or a third party managed service to help secure the environment.
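
For a sense of what those community-built APIs look like in practice, here is a small sketch using the official openstacksdk to boot a server. The cloud name, image, flavor, and network are assumptions that would come from your own clouds.yaml and environment.

```python
import openstack

conn = openstack.connect(cloud="my-openstack")     # credentials from clouds.yaml

image = conn.compute.find_image("ubuntu-22.04")    # assumed image name
flavor = conn.compute.find_flavor("m1.small")      # assumed flavor
network = conn.network.find_network("private")     # assumed tenant network

server = conn.compute.create_server(
    name="openstack-demo",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)      # block until it is ACTIVE
print(server.status)
```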

Another nice thing about the open source model is that more key players are starting to back open source clouds. Because it is still an emerging model, there are fewer providers; however, if you choose this model you benefit from one of its founders also being a provider. There are many arguments for the benefits of each platform, but it really comes down to how your organization plans to leverage the cloud environment. Like any other cloud service, due diligence is recommended when deciding which platforms you wish to use within your environment.

2 www.openstack.com

## Open or Closed Clouds?

We've now covered the benefits that newer cloud platforms like OpenStack and traditional clouds like Amazon or VMware offer. OpenStack has received more attention lately, mostly because more organizations are starting to look at alternatives to the big three cloud platforms (Amazon, Microsoft and VMware). Open source clouds have also received increased visibility due to the perceived flexibility the model is said to provide. But is open source a viable option for organizations, or does it make more sense to go with an established cloud platform?

Open source clouds like OpenStack are a great option for developers who want to be able to draw from a wide development community. Because there is no official fixed framework, ample flexibility remains when it comes to developing custom applications that run in the environment, and all code is available without hefty licensing fees. The drawback (as with any open source platform, because code is developed by different sources) is that the application may require extra testing before official deployment, as there may not be a formal support team you can contact should issues arise. That being said, OpenStack has a wealth of great vendors backing it, suggesting a bright future for this platform. As with any new solution, however, it may take some time to work the kinks out.

On the other hand, if you decide to go with a tried-and-true, mainstream-supported platform such as Amazon, Microsoft or VMware, you generally know what you are getting into. These platforms have plenty of funding and testing behind them, and a diverse customer pool that provides them with a great source of feedback — factors that work to ensure the platform operates as it should. Additionally, because large vendors fund these projects, there is a formal support organization to help troubleshoot any issues within the environment. The downside is that cloud providers are usually tied to one main platform, so you need to decide which platform to use and look at the market to ensure you can later move your infrastructure without the fear of vendor lock-in, even in cases with OpenStack. The other major drawback to traditional platforms is licensing costs, which will hopefully come down as economies of scale kick in.

OpenStack is a great alternative for organizations that want more flexibility in a platform, or have the skillsets to build more custom applications, which may be limited by traditional platforms. If your team is more comfortable with officially backed platforms, there are many benefits to building a VMware environment or leveraging a cloud service from the vendor itself such as Amazon's AWS. The security controls for these platforms have, historically, had more support from leading vendors, and there are more formal support structures for organizations that plan to build their cloud environments internally. These platforms have also been around longer and, as a result, may offer more stability and resources.

No matter which platform you decide on, defining your cloud strategy ahead of time will help provide better insight into which platforms are a better fit for your cloud objectives. It is also recommended that you work with your cloud provider to ensure that, if you outsource your infrastructure, they can fully support your projects. Readjusting the scope of your entire project due to a vendor limitation (such as with respect to compliance or legacy application support) could cause significant problems down the road when it comes to transitioning your cloud environment onto another platform.

## Building Cloud Environments

If you have decided that building an internal cloud is the right move for your organization, there are some hardware nuances that will directly impact how you implement your projects. If you are using existing hardware, this section might not be as useful, but if you are looking to redesign your infrastructure, it is important to understand the difference between the two major approaches to virtualization architecture: Reference Architecture and Converged Infrastructure.

What do these terms mean? Simply put, they refer to the way the actual infrastructure hardware (servers, storage, etc.) is built. You might have seen advertising from EMC and their VSPEX platform, or even the NetApp and Cisco partnership with FlexPod. These solutions are classified as Reference Architectures, as they are usually integrated and validated platforms that include server, network and storage components with a hypervisor pre-installed. These vendors tout the flexibility of these setups as a major benefit, because they enable users to mix and match products as long as they keep the same basic format. The model uses open APIs and management tools, which make these platforms quick and easy to deploy, as well as a relatively risk-free option for private clouds. It's a great way for budget-conscious SMBs (small and medium-sized businesses) to put together a cloud environment. It is crucial, however, to remember that you can theoretically break any type of model if you don't know what you are doing.

One of the key drawbacks of Reference Architectures is that, although they act as pre-configured systems, you have enough flexibility in choosing components that you can end up with what is essentially another piecemeal solution. Until the components are integrated on a customer site and used for that specific customer's purposes, it is impossible to predict how the system will behave once it is up and running. The flexibility, while seemingly attractive, can ultimately undermine the effectiveness of the model. It also puts the user on the hook for troubleshooting, and if the system doesn't perform at optimal levels, it negates the benefits this model purports to provide. At that point, almost all the support calls that result from this model are related to configuration issues, and the finger of blame will be pointed at every component vendor in turn, making it a pain to troubleshoot if more than one vendor is involved.

The other central issue is that these Reference Architecture configurations are sometimes based on static sizing and architecture deployments that leave little room for future requirements. This means the end user will ultimately have to reconfigure the solution and find a way to reintegrate what they've done in the new environment, and may need to revisit the original project plan to figure out how to readjust the environment to meet the new requirements.

Converged or Single Stack Infrastructure solutions are another option for cloud environments. These all-in-one offerings include solutions like Oracle's Exadata and Exalogic, Dell's vStart, HP's CloudSystem Matrix, and IBM PureSystems. These solutions have tightly defined software stacks above the virtualization layer, and might include bundled infrastructure and service offerings. These are a great option, but do come with a downside: vendor lock-in.

For example, if you decide to go with HP's option, you are looking at certain network switches from supported platforms such as TippingPoint or 3Com. This means if you decide to implement hardware upgrades later, you are tied to these vendors, instead of having the choice to use products offered by other vendors, such as a VMware platform.

Speaking of VMware, a true Converged Infrastructure is not only pre-tested and pre-configured, but should also be pre-integrated. A single SKU is what we are looking for. One of the most popular solutions that fits this bill is VCE's Vblock, which includes hardware, storage and software pre-built and shipped as a single item. These units leverage technologies from VMware, Cisco and EMC (VCE) to deliver a pre-configured, single product that is consistent no matter where in the world you order it. The principal benefit here is that, although the environment is fixed, you know what you are going to get. The environment will perform pretty much the same way across the board, making support a lot easier, as there is only one vendor (VCE is an independent company, but provides support for all products included in the Vblock configuration). Due to VCE's large customer base, the bugs in your system have probably shown up at some point with other customers, making it easier to troubleshoot than custom deployments.

The other nice thing is that, because it is pre-configured and assembled, it has the quickest deployment time. A typical deployment can be up to four times faster than deploying a Reference Architecture model. This makes Converged Infrastructure a perfect choice for private clouds, because the speed of deployment will give you an immediate reduction in your TCO.

While either model has significant benefits and can be a good fit in many situations, every business is different. The best model for your project will be the one that is able to scale with your future requirements. If you have a fairly static business from an IT perspective, Converged Infrastructure might be a good fit. If you have a dynamic environment, the flexibility of Reference Architecture could provide a better model for your requirements.

## Cloud Storage

Another key area of cloud infrastructure involves how resources are stored and accessed. The most common form of cloud storage outside of internal resources will most likely be "storage as a service", whereby data stores are subscribed to based on the company's evolving needs. This is a great service in general, because organizations can start with a smaller tier of storage and add more as needed. This means you don't need to predict how your storage requirements will change over time, or stockpile drives in anticipation of future use. Many organizations can also subscribe to services that have the flexibility to accommodate higher usage during peak times, such as during the holiday season, and then return to a smaller service tier for the remainder of the year, sometimes referred to as a pay-as-you-go service.
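
A quick sketch of what pay-as-you-go storage looks like in practice, using Amazon S3 through boto3 as one example (the bucket and file names are placeholders, and credentials are assumed to be configured): there are no drives to buy up front, and capacity simply grows as objects are added.

```python
import boto3

s3 = boto3.client("s3")

s3.create_bucket(Bucket="example-archive-bucket")   # no hardware to provision
s3.upload_file(
    "quarterly-report.pdf",                         # local file (placeholder)
    "example-archive-bucket",
    "reports/quarterly-report.pdf",                 # object key in the bucket
)
# Billing follows what is actually stored and transferred, month to month.
```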

Another conversation on storage pertains to more user-accessible cloud storage, such as the services offered by Box.com, Dropbox and iCloud. These services allow users to connect to a cloud-based folder and access files from anywhere, on just about any device. This is simple, efficient, and can be a lifesaver when you have no physical access to your computer. For salespeople, it means they can store a corporate presentation and access it on a tablet at the customer site instead of lugging around a laptop. Of course, there are just as many reasons companies loathe these services, one in particular being the massive security holes they create.

There are really two central issues when it comes to cloud storage: data loss prevention (DLP) and ownership. From a DLP perspective, unless you properly control your documents from a corporate standpoint (e.g., by putting the right permissions and access controls on said files), it makes leaking corporate data as simple as using a USB stick. It's actually easier, because you can transfer documents instantly to anyone with access to your cloud folder. It's a nightmare situation for organizations that haven't implemented the right access controls.

How widespread is this type of data security problem? Sadly, it is an incredibly common occurrence. There are frequent news reports of employees (especially in government sectors) being terminated after a confidential document is leaked online. While these incidents sometimes involve malice, in many cases they occur when an employee, in good faith, posts a document to his or her personal cloud storage folder, where it is later retrieved and leaked by a third party. This leaves the question of where to place the blame. Is it the employee who thought the document was safe in their password-protected storage who is at fault? Or is the employer responsible, because the internal restrictions around file sharing were so tight that the user found it easier to post the document to cloud storage than to copy it to a USB stick or email it home in order to have access offsite?

From an ownership perspective, a lot of debate surrounds the question of who owns data stored in cloud folders. While users will argue that they own the data and thus have sole control over it, the reality is that the cloud provider owns the data, as the cloud folder uses their technology. If you read the fine print in any of these service agreements, you will note that the providers are quick to separate themselves from any liability associated with unauthorized file access. Additionally, can you really be sure that they aren't accessing your data and making copies without your knowledge?

Cloud storage is a huge problem for many organizations, as it makes it harder to keep track of corporate assets. When you look at the data loss implications, especially if you deal with information that is subject to compliance regulations, there are significantly more endpoints you must keep track of.

But locking down these services isn't necessarily the right solution either. Many employees work more efficiently when they are able to work on files from home, from sales reps who want to work on contracts to marketing professionals on a deadline who need to make last-minute edits to presentation files. Giving employees flexible access provides many benefits, but the risks must be considered. Many organizations are finding success creating internal file sharing services, or opting for paid services that provide more security controls. Employees are happy to use internal services as long as those services don't give them significant headaches when it comes to security controls or access restrictions. The key is ensuring these tools have the right controls in place to balance both security and accessibility.

Cloud involves so many systems, data centers, networks, and security controls that it's almost impossible to create clear segmentations highlighting where cloud environments begin and end. It's a global entity made up of fenced-off clusters of information. But beyond the basic cloud issue, there is also the effect social media and personal devices have on these already dynamic boundaries. These devices connect to the same cloud network and suddenly create new endpoints and access points that weren't part of the original network plan. On top of this, social media adds a completely new dimension to the issue of security. The peer networks created by social media produce a situation where connections are extended to other public groups, meaning there is endless potential for open portals into your network, with an unknown number of additional risks. This is what we call Cloud Sprawl.

Cloud Sprawl is another nightmare that comes with the new architecture of cloud, because it adds elasticity to the overall cloud boundaries of the organization. Previously, fixed network perimeters protected most corporate assets, with the exception of information that was copied to an external location, much as we see with cloud storage. Now, however, anything that can access the environment can potentially alter the physical boundaries of the corporate perimeter.

So, what can IT professionals do to contain these risks? Being aware of the endpoints within the infrastructure and the types of risks associated with each device is paramount. But instead of looking at it as a means of locking down the edges of your environment, which will continue its endless sprawl regardless, it might benefit us to look at it instead from an inside-out perspective.

What if we started with the data, put controls on where that data can be moved, where it can be accessed from, and who can view and edit it, and then used this data-control protocol as the basis for security policies? This way, regardless of sprawl, the original governing policies would protect the data. An example would be to tag individual documents with characteristics that define their usage, such as a document containing financial information that should only be accessed by finance department employees. By tagging these files as financial, the properties move with them no matter where they go.
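
One way to sketch this tag-the-data approach is with object tags in a cloud store such as Amazon S3, where the classification travels with the object and access policies can key on it. The bucket, key, and tag values below are placeholders, and the IAM condition mentioned in the comment is just one possible enforcement point.

```python
import boto3

s3 = boto3.client("s3")

# Mark a stored document as financial data...
s3.put_object_tagging(
    Bucket="example-archive-bucket",
    Key="reports/quarterly-report.pdf",
    Tagging={"TagSet": [{"Key": "classification", "Value": "financial"}]},
)

# ...so an access policy can restrict it to the finance group, for example via
# an IAM condition on the s3:ExistingObjectTag/classification key.
tags = s3.get_object_tagging(
    Bucket="example-archive-bucket",
    Key="reports/quarterly-report.pdf",
)
print(tags["TagSet"])
```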

Cloud sprawl and cloud storage are two of the biggest headaches for organizations, because they create potential security holes and can undermine regulatory compliance. The only way to deal effectively with these two significant factors is to promote education for all users and to ensure that any controls used to manage these risks offer a comparable alternative. If you simply cut users off, there will undoubtedly be increased usage of unauthorized tools and more headaches than if your team had provided a suitable alternative.

# The Big Three of Cloud

In order to understand how moving towards a virtualized (or cloud) environment will impact the way your current infrastructure works, it's important to start with the basics. This especially applies to those who are planning to leverage a cloud service provider. Cloud lingo has been around for a few years, and while it has undoubtedly helped explain and clarify what vendors and customers accept as key service types, these different definitions have also confused many people and oversimplified what these types of environments actually mean.

It's time to explain things in jargon-free language.

Many organizations are planning to start introducing cloud methodology into their IT environment by leveraging an external cloud service — in particular, one of the "as a Service" (aaS) models. There are many different variations of these aaS offerings, but most fall into one of three major forms: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). These services reflect the level of control the customer has over the environment. If you want to rely on a provider simply for the bare hardware and network services, IaaS is where you would start. If you want a pre-configured environment where you can install applications, PaaS is a good choice. If you want a more comprehensive offering where you are only responsible for the configuration of applications, you would be more inclined to adopt a SaaS model.

This section will look at how these services differ, and what the responsibilities are for both the customer and service provider in each case. We'll also cover the pros and cons of different types of services, as well as the key issues that you need to be aware of before you subscribe to these cloud service models.

## IaaS: Infrastructure as a Service

When you look at the cloud service stack, the usual starting point is Infrastructure as a Service, or IaaS. IaaS refers to a cloud service model in which customers leverage the cloud provider's physical infrastructure, usually dedicated or shared servers running VMs in a data center, along with physical resources such as storage and networking. IaaS is a popular service model due to the flexibility and lower cost associated with the service in comparison to some of the other cloud service models available. IaaS services are generally billed based on the amount of computing and system resources used, although each provider may bundle things differently. The usual subscription models are priced based on usage of resources (storage, CPUs), or as a flat rate for a reserved block of infrastructure.

The benefit for a customer utilizing IaaS is that this service can eliminate the need for an on-site data center, and to some extent, reduce the need for sophisticated networking and storage resources. The customer doesn't need to worry about purchasing servers, software, or rack space — they simply outsource the whole thing as a complete solution. The ideal customer for this type of service is generally one who wants to maintain control over the operating environment itself, but does not necessarily have the resources (or the desire) to manage the physical equipment.

The types of companies that offer IaaS services tend to focus on providing an open model for customers, whereby the end customer can customize the environment to their particular needs. This can be a great solution for organizations that have a specific build in mind, and have the internal capabilities to configure and manage the overall environment, including connections to in-house resources.

Cost is a primary reason organizations adopt cloud. By subscribing to a cloud service model, you can virtually eliminate the upfront purchase fees associated with an in-house build, leverage an operating expense (OPEX) model instead of a capital expense (CAPEX) model, and put an end to budgeting for base infrastructure costs (power, cooling, etc.). IaaS is also a popular IT model because organizations can outsource their storage and computing power, which eliminates a lot of in-house hassles. The amount of time and effort it takes to manage an IT environment is significant, and IT folks are better occupied with more important tasks than provisioning, configuring and troubleshooting hardware. IaaS is a great option for achieving these benefits without relinquishing control of the operating environment to the service provider.

But IaaS is not for everyone. In many cases, IaaS is not a good choice for organizations that need more support when it comes to building and managing a virtual or cloud environment. Because IaaS is a low-level service from a management perspective, it leaves the customer responsible for everything but the hardware.

So, why would organizations look at a self-service model such as IaaS? Customers are still responsible for security and maintenance of the OS and programs, which adds complications. Many could argue that, in light of this, customers should instead just upgrade to Platform as a Service (more on this in a bit), have the cloud provider handle all the security and compliance (written into the service-level agreement), and simply worry about the actual programs running on the cloud boxes. Doesn't it make more sense to delegate a larger portion of the responsibility for maintaining the computing environment to the provider, making it easier for your IT team to take care of other tasks that are more business-focused, such as upgrading billing systems or customer-centric services?

One of the arguments for the popularity of IaaS is that IaaS simply provides more flexibility in an offering. Customers can choose individual components, including storage, operating systems, virtualization models and myriad other details. For the seasoned IT professional, IaaS may provide the level of customization they require for successfully transitioning internal resources to a subscription model, or supporting custom builds that might require too much modification to run within a standard PaaS environment. It really comes down to a compromise between what your organization wants to do, and what you can support internally. If your organization is leaning towards an IaaS cloud model, there are some key points you should address before signing up for any services.

First, from a security standpoint, you should be aware that the complexity differs based on whether you want to design a private cloud or a public cloud. With a private cloud, you retain complete control over the solution from end to end. With a public cloud, you may be hindered by the service provider in some areas (generally the SLA will make note of your access from an infrastructure standpoint, including computing-level resources, network, and storage). Regardless of whether you use public or private cloud services, there are a few fundamental security controls you can put in place to minimize the risk of a security breach.

Data Loss Prevention is one of the central security safeguards that should be applied in any environment, and is absolutely critical if using a public cloud. Data Loss Prevention (DLP) simply means your organization knows who is accessing information in your cloud environment and, specifically, what they are doing with the information at all times. DLP is a common topic that comes up when laptops or hard drives are lost or stolen, as it also points to a related security issue of disk encryption. One of the least intrusive ways to implement DLP is to set permissions on all business-critical assets, prescribed settings which dictate how these assets can be used (e.g., do not copy, do not distribute). Some DLP solutions also allow for data tagging, a function that can remotely delete any files that should not be used outside the designated network. In either case, it is important to ensure that a DLP solution is implemented, whether in silent mode (where files are automatically restricted by user profile), or in active mode (where a user is prompted every time they access a restricted file), in order to ensure they are aware of their actions.

There are several key components to securing the infrastructure itself, including encryption, hardening and user authentication. In the case of encryption, it is recommended that you not only encrypt data files, but also ensure all connections to the IaaS service are encrypted with SSL VPN or IPsec (covered in the "Securing the Cloud" section of this book). This includes all communication channels, such as those from users, remote management consoles, and inter-VM traffic (if applicable). It is also recommended that you use a mainstream encryption solution. Don't use proprietary solutions built in-house, as this will make it more difficult to integrate additional services down the road, or complicate your security policies should encryption keys go missing.
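
As a minimal illustration of using a mainstream encryption library rather than a home-grown scheme, the sketch below encrypts a file with the Fernet recipe from Python's cryptography package before it would ever be uploaded to an IaaS provider. Key storage and rotation are deliberately out of scope here and assumed to be handled by your key-management tooling; the file name is a placeholder.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this in a proper key store, never in code
cipher = Fernet(key)

with open("customer-list.csv", "rb") as f:          # placeholder file name
    ciphertext = cipher.encrypt(f.read())

with open("customer-list.csv.enc", "wb") as f:
    f.write(ciphertext)                             # only the encrypted copy leaves your site

plaintext = cipher.decrypt(ciphertext)              # the same key is required to read it back
```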

You also want to make sure that your virtual machine templates and master image files are clean and hardened. Hardening is really just a simplified term for reducing the number of risks that can affect an environment through the implementation of security solutions, including firewalls and intrusion detection and prevention, and by routinely verifying ports and users to look for potential vulnerabilities. Through hardening, you can reduce the risk of corrupting the IT environment should you find security vulnerabilities in an image. If you maintain a secure master VM image, any new machines created from the master should be secure as well.

Another vital topic to consider before signing up for a service is that of user authentication. This crucial aspect of security ties back to DLP, because the only way that DLP can be truly successful is when the proper user authentication controls are in place. Two-factor authentication, which uses both a fixed password and a variable token password, is the minimum that you should be using to allow users to access data in cloud environments. This is especially critical in public cloud environments. The easiest way to introduce user authentication is to assign group profiles to users in order to control their access to data and business-critical assets, and rate applications to decide what levels of authorization to apply to each.
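
To make the "variable token password" half of two-factor authentication concrete, here is a small sketch using the pyotp library's TOTP implementation. The secret would normally be generated once per user and stored server-side; the account name and issuer below are placeholders.

```python
import pyotp

secret = pyotp.random_base32()          # per-user shared secret (store securely)
totp = pyotp.TOTP(secret)

# The user enrolls this URI in an authenticator app (Google Authenticator, etc.).
print(totp.provisioning_uri(name="jane@example.com", issuer_name="ExampleCloud"))

# At login, check the fixed password first, then the one-time code:
code = input("Enter the 6-digit code from your authenticator: ")
print("Second factor accepted" if totp.verify(code) else "Invalid or expired code")
```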

I will get deeper into security later in this book, but if this all seems daunting to you or your team, then perhaps Platform as a Service, or PaaS, is your cup of tea.

## PaaS: Platform as a Service

Platform as a Service, or PaaS, is one of the lesser-known service models. This is likely due to a lack of clear understanding of the differences between PaaS and IaaS. I could delineate the technical differences here, but the key difference is the type of user that typically works on creating the environment. While IaaS is strongly favored by infrastructure teams, PaaS is the preferred model of developers.

Why do developers love PaaS? Quite simply, the flexibility that comes from virtualization and cloud has allowed developers to create applications in ways they previously could not, such as the OS-less application (a topic covered later in the book). PaaS is a great option for organizations that want to develop applications, but would rather do so on infrastructure that they don't have to host or deal with on any level. It sounds like a great concept, so why isn't it more popular?

PaaS is a service in which the cloud provider offers a hosted environment for organizations to create applications that run on pre-configured operating systems. In an in-house environment, this would require the purchase of servers, installation of the OS, and finally, configuration of the development environment to create applications that run on the server. With PaaS, you automatically skip to the application creation stage, as the platform is already pre-configured and ready to be utilized.

The paragraph above hints at one of the key problems with PaaS, and the reason it might not be as popular as it could be: vendor lock-in. When you standardize on an application development platform, you run the risk of being limited to vendors that offer the same platform. For example, if you create an application that runs only in Microsoft Azure environments, your choice will be limited to providers offering Azure environments. This is something to keep in mind when choosing a PaaS service, when weighing the pros and cons of picking a widely adopted platform versus building an application that can be transitioned to another platform should the need arise. The best way to prepare for this possibility is to avoid signing any long-term contracts before you have decided which platform you will standardize on, and to avoid providers who use proprietary services. It's not uncommon for developers to start a project on a particular platform and realize that it's easier to build on another one instead, or to be forced to change platforms due to technical limitations. In the past, these projects were run in-house and required CAPEX, which meant that if developers ran into technical limitations, they had to deal with the issue until the technology was paid off or rendered obsolete. With cloud and virtualization, you now have more flexibility, as the application can be moved to another service or built on a new VM platform. Vendor lock-in is the reason PaaS has been less widely adopted, and the reason PaaS services are being rebranded to position themselves as more flexible solutions.

There are a few vendors making strides in creating an abstraction layer between the application and the platform, in order to facilitate the switch between platform types while reducing the costs and headaches associated with vendor lock-in. Initiatives such as Simple Cloud are working with vendors to create an open, standardized cloud platform based on PHP for use in cloud environments.

From the Simple Cloud website:

" _You can start writing scalable, highly available, and resilient cloud applications that are portable across all major cloud vendors today."_

" _[Zend, the organization behind Simple Cloud] invited the open source community and software vendors of all sizes to participate. IBM, Microsoft, Rackspace, Nirvanix, and GoGrid have already joined the project as contributors._ 3 _"_

Thanks to advancements in standardization and the increased availability of services, we are starting to see a resurgence in the popularity of PaaS.

From a security standpoint, the same controls that apply to an IaaS model exist for PaaS, but because PaaS is concerned heavily with the security of proprietary (owned by the customer) applications, there is an increased focus on protecting those applications. This layer is where you would need to ensure that you use a next-generation firewall and Intrusion Detection/Prevention Systems (IDS/IPS) to ensure that the applications are protected from Internet and network vulnerabilities. In addition, a Web Application Firewall (WAF) is strongly recommended if you are accessing any applications through a web portal, such as in e-commerce applications. These methods of protection will help identify security vulnerabilities that might exist within the environment and applications, as well as provide a virtual patch to buy the organization time to complete remediation. Remember, these recommendations should be used in addition to the security recommendations described earlier under IaaS.

Many cloud providers will offer security bundled in with PaaS, although it is up to the customer to ensure that the provider clearly identifies which security controls they use to protect your environment (this is where the SLA review will come in handy). You can also look to Security as a Service (SecaaS), a service offered by Managed Security Service Providers (MSSPs) and several cloud service providers, whereby they manage the security of your cloud environment.

3 Simplecloud.org

## SaaS: Software as a Service

Software as a Service, or SaaS, refers to an application or software that is hosted in the cloud and accessed through a web portal. The provider takes care of all the work associated with the application, allowing you to simply manage access. As the most popular cloud service model, SaaS is usually licensed on a per-user or per-use basis, rather than an up-front licensing fee. This makes it an attractive option for smaller organizations that do not have large amounts of capital. Larger enterprises are also attracted to SaaS, as it is an easy way to lower the costs of using applications, minimize the resources required to install and maintain software, and to speed up the implementation process in many cases. A famous example of the SaaS model is Salesforce.com, which uses a web-based model for users to access their Customer Relationship Management (CRM) application from anywhere in the world. This makes it easy for employees such as sales reps who may work remotely, or for large dispersed organizations that want to standardize on a single software platform across multiple types of devices like laptops, desktops, tablets, and mobile devices. Overall, SaaS is a great way to provide access to resources without having to manage all the fiddly bits like licensing, patching, updating, and maintaining consistency across multiple networks.

Unfortunately, SaaS has a history of problems, causing organizations to rethink whether SaaS is the right model for them. The key issue lies in data integration. Almost 20% of SaaS implementations are affected by this issue, which ultimately causes the deployment to fail. This isn't a new problem created by the hosted model; application deployments have failed in the past as well. We've all heard the story of the organization that installs some fancy new application or sales portal, only for it to become an expensive repository where documents go to die.

This downside of SaaS doesn't make headlines, as many vendors focus on the number of customer deployments instead of their customer satisfaction levels. After the media spotlight fades, new SaaS customers are often left with shiny new portals for an online service bundled along with plenty of headaches from trying to get their legacy data to work with the new service. Despite the issue surrounding integrations, most organizations are already using some kind of SaaS application. Symantec conducted a report with Forrester back in 2010, and found the following:

_"58% of enterprises use two or more SaaS-based business applications today, and 72% plan to in 12 months. More strikingly, 19% of enterprises report having six or more SaaS-based business applications today, and 30% plan to have that many in 12 months._ 4 _"_

While overall we have seen an improvement in the stability of SaaS applications, integration is still a fundamental concern that keeps organizations from fully embracing the SaaS model. When you think of it from an economics standpoint, for every dollar you perceive to save on capital expenditure using the SaaS model, you will have to spend almost the same amount integrating it into your environment. While this isn't the case for every SaaS application, it is something that needs to be taken into consideration when deciding whether to transition an internal service to the cloud.

Similar to data integration, SaaS also faces complications around data portability. Say you finally get your application up and running, and successfully integrated into your environment. What happens if, suddenly, the SaaS provider goes under? We've seen the risk of this occurrence several times; think about the dot.com bubble. How many fly-by-night shops popped up with the promise to make your business more efficient, only to close shop within five years? When this happens, how do you get your data out? And where can you transfer it?

It's not just the risk of the SaaS provider closing shop that leads us to question the benefits of adopting this model. There is the associated risk that, in trusting the provider with your data, you're then essentially playing by their rules. They subsequently control your assets, can dictate what the services are, how much you will pay for these services, and what options you have should you decide to move to another provider. If you manage to get your data out, there is a good chance that not only will it cost you money — the process will also involve a significant number of headaches. In other words, when the SaaS providers come knocking with promises of great service and flexibility, it's time to put your skeptic hat on. Carefully examine every part of the SLA, research other customers, and generally get as much information about the service as possible, so that you are aware of any potential issues that may arise should you choose this option.

Oddly enough, neither data integration nor portability tops the list of reasons why SaaS adoption is causing headaches for IT managers. Similar to every other cloud service, the key barrier to SaaS is security, as there have been many cases of SaaS offerings being compromised, including large players like Google and Microsoft.

Given that the provider is responsible for many security controls, it might sound like SaaS is a reasonably safe model to adopt. Keep in mind that it's also a streamlined, efficient, and easy way for hackers to get direct access to masses of business-critical and sensitive data. Traditionally, hackers had to get past layers of network security devices to reach the good, sensitive data, and all of that hoop jumping only gained them access to data for a single organization. With SaaS, sensitive data is put in a central location and furnished with a web interface where people can potentially use legitimate accounts to gain unauthorized access. Large SaaS providers have tons of different customers, so from a hacker's perspective, it's a one-stop shop for multiple targets.

While hackers are getting more sophisticated, they are also getting lazier. They want to hit many systems quickly without doing tons of busy work. A web browser is a perfect gateway for them, because they can do things such as forcing spyware or botnet installs through the browser, one of the most common security threats. To this day, the largest share of vulnerabilities targets web applications, with SQL injection and cross-site scripting being among the favorite tools of hackers.

So, how can organizations make SaaS more secure? It really comes down to making sure providers have well-managed and enforced security policies, as well as governance reviews. They should also have a disaster recovery plan in place, one with its own emergency response procedures. The easiest way to anticipate the potential risk is to remember that the more critical the business asset, such as a CRM or ERP system, the more important the security measures required. The minute your application talks through a web browser directly to the end user, traditional firewalls (with the exception of web application firewalls) are going to be useless. As a result, you need solutions that protect the application specifically.

The key takeaway from this is the fact that, the minute you give access to applications through a web browser, you need to ensure your applications are being protected — not only your infrastructure. SaaS is a great way to benefit from all the cost savings and efficiencies that make up the service, but you need to make sure that in part, those cost savings are allocated to ensure your application is secure.

4 Enhancing Authentication to Secure the Open Enterprise. A Forrester Consulting Thought Leadership Paper Commissioned By Verisign (Symantec)  https://scm.symantec.com/.../whitepaper-forrester-open-enterprise.pdf

# Doing More With Less

Now that you are up to speed on the different types of cloud models, it's important to explore how these types of models can be used internally. Not every organization will outsource their virtual environment through the adoption of cloud services. For some organizations, keeping these functions in-house might make more sense, especially if the resources exist to implement and maintain these infrastructures. Unfortunately, keeping these services in-house has some caveats that should be addressed before you start moving your existing services into cloud.

Internal IT departments are expected to build cloud environments that deliver on the same promises as those of public clouds or other hosted models. They need services that can be delivered quickly, have minimal associated cost, and allow for self-management and provisioning. The reality is that enterprises only seem to want to invest in technologies that allow them to differentiate themselves, assist in executing their core mission, or allow them to be more competitive. In general, they seek to acquire non-differentiating IT services as commodities.

## Introducing Cloud into your Enterprise

While services such as payroll, HR, and other standard internal services don't typically differentiate one business from another (unless, of course, you are a CRM company), these are often the first types of services that get put into the cloud. This is why we are seeing the first line of cloud services falling under application-friendly SaaS models, or in some cases IaaS models, for those who want to build their own environment to support these functions. The concept of moving these services first comes from the fact that if you automate these processes so that they take no internal resources to manage, the funds normally attributed to these projects can then be reinvested in endeavors that are more beneficial. I'm not saying you can or should outsource your whole HR or Payroll department (although in some cases it might be handy, especially for SMBs), but there are ways you can make them more efficient.

The first benefits of cloud adoption for IT in enterprises will come from the continued movement towards building and automating services for aspects of the business that don't contribute to the enterprise's core differentiation. Believe it or not, PaaS is really the preferred first step, because it automates the configuration of the environment running the application. Most organizations don't have the risk tolerance to fully convert a service to a cloud model without having an interim "middle ground" where they can test the integration of the application into the new model. The last thing you want is to lose your CRM or HR data during the switchover to a cloud model. PaaS is a good testing ground, as you can keep your current implementation but move it off-site for testing to see how it would fare in a SaaS model, or even more importantly, revert to the old version in case there is a problem with the new service.

PaaS is also a great model for this transition period because developers start partway into the development lifecycle, spending less time and fewer resources on building and maintaining development environments. This means the overall development process takes less time and costs less money, leaving the company free to innovate faster and with less investment. In many situations, PaaS will allow organizations to realize huge savings from having shared services across multiple units within an organization.

At this point, you might find it strange that so much time is spent promoting IaaS and SaaS to enterprises, when the foremost business opportunity is clearly PaaS. PaaS allows enterprises to have their own internal startup to drive business value for internal and external stakeholders.

## Say Goodbye to Internal Cost Centers

So, your IT department has built an internal cloud to help foster flexibility and innovation, but the cost benefits are still not showing. Capital is still being spent on infrastructure, and your IT department still requires resources to maintain and build custom environments. This means IT is still a cost center, much to the dismay of your C-level and other IT leadership. How can you streamline these costs while freeing up OPEX to help continue innovating through cloud infrastructure building? You simply create an internal IT company.

When it comes to budgeting IT infrastructure projects, including the addition of new virtualized servers and the IT personnel requirements, these costs are normally associated with the OPEX and CAPEX budgets of the IT department. By implementing and using a cost-allocation model, each individual resource allocated (including VMs, RAM, storage, and associated IT personnel costs) can be charged to a department within the organization. For example, if the HR department requires a VM to manage the hosting of HR files and databases, a project cost proposal can be sent to HR for funding approval. Once the work is completed, the IT department can then send an internal invoice to HR, ensuring that the costs associated with the project are paid out of the HR budget, as opposed to the IT budget.
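
The mechanics are simple enough to sketch. The rate card and usage figures below are hypothetical; tools like Chargeback Manager automate the metering, but the underlying calculation is just usage multiplied by internal rates.

```python
# Hypothetical internal rate card, per month.
RATE_CARD = {
    "vm": 45.00,         # per virtual machine
    "ram_gb": 2.50,      # per GB of allocated RAM
    "storage_gb": 0.10,  # per GB of allocated storage
}

def monthly_chargeback(usage: dict) -> float:
    """Return the internal invoice amount for one department's resource usage."""
    return sum(RATE_CARD[item] * qty for item, qty in usage.items())

# Example: the HR department's VM project from the paragraph above.
hr_usage = {"vm": 2, "ram_gb": 16, "storage_gb": 500}
print(f"HR chargeback: ${monthly_chargeback(hr_usage):,.2f}")  # $180.00
```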

At the time this book was written, VMware was one of the primary vendors responsible for simplifying the management of IT cost allocation through an application called VMware vCenter Chargeback Manager. This solution is aimed at helping organizations, and particularly IT departments, not only enhance transparency and facilitate virtualized infrastructure visibility, but to also help improve the way IT projects and costs are allocated within the organization. From VMware's website:

" _VMware vCenter Chargeback Manager allows rate cards and prices to be customized to the process and policies of different organizations. Virtual machine resource consumption data is collected from_ _VMware vCenter Server_ _ensuring the most complete and accurate tabulation of resource costs. Integration with_ _VMware vCloud Director_ _enables automated chargeback for private cloud environments, integration with vCOPs enables you to cost your capacity and finally the integration with ITBM enables you to push your true_ _usage for overall IT Planning and Costing._ 5 _"_

Until now, IT has frequently been seen as a huge cost center, when in reality most of its resources are spent delivering services to other departments. By using a tool such as Chargeback Manager, the IT department can essentially operate as a small business within the organization and host its own provisioning and costing processes, ensuring that funding for these projects is provided by the departments involved, instead of being paid solely out of the IT budget.

The real benefit from implementing a cost allocation model like this, whether you manage these costs manually or through a packaged solution, is that it will significantly improve the accuracy of accounting within the organization. From an IT department perspective, the larger the organization, the more significantly these IT costs will be impacted, as project costs are shifted to be properly allocated to the right department. It will also clarify resource allocation from an auditing perspective, and ensure that project costs are clearly outlined and attributed where required. Finally, because virtualization leverages existing infrastructure, costs are therefore fairly predictable, as the main expenses come from resource allocation, which can become progressively easier to estimate and increasingly economical, as more projects are built within this cost allocation model.

5 VMware vCenter Chargeback Manager www.vmware.com/products/vcenter-chargeback/

## Cloud and the Demise of On-Premise Equipment

As someone who comes from the IT security industry, the idea of a managed or hosted business model automatically makes me think about all those IT boxes sitting idle in a corner. Every IT department has outdated servers collecting dust, along with equipment that was never actually implemented due to complexity and time requirements. The capital depreciation from the things sitting idle within the IT department alone is enough to keep the CFO from enabling these same IT departments to build more efficient systems and solutions that help the organization become more productive. After all, just about every organization has used the on-premises model for IT security, so why is cloud a better decision moving forward for other IT services?

To get a better idea of why this issue is so important, we need to first look at how cloud is starting to trend from a cost and deliverables perspective. The key goals of a CIO or CFO are to reduce elevated capital requirements, streamline business processes, and of course, keep costs low. This makes sense, because by lowering overall operations costs, organizations can reinvest in their business and provide more benefit for their customers (both internal and external). This is really the essence of cloud's deciding factor: less CAPEX, more OPEX.

Cloud also brings another significant luxury: the ability to do more with less. With so many cloud providers already hosting large infrastructures, taking advantage of their economies of scale to obtain the same results makes even more sense. Economies of scale are starting to benefit the market significantly. Amazon alone has dropped their prices almost 20 times within the last six years, and Microsoft is following suit. This type of business model is smart: reduce prices to get more customers, and use the economies of scale to keep lowering prices for everyone. It's a win-win situation. Service providers can offer lower costs to their customers, ensuring better retention and increased expansion of the customer base. The resulting steady source of income allows them to invest in new features and services. This is why there are so many new market contenders for cloud services: they see the cost benefits of building (and paying for) the infrastructure once, and re-using that infrastructure to provide services for customers until the revenue replaces the original capital investment. We also see this same trend happening for traditionally capital-rich services, particularly in software solutions and managed services like security.

Sadly, IT departments can't normally allow other organizations to pay for usage of their equipment, so buying vast amounts of on-premises gear is not going to keep driving operating costs down; they are going to essentially stay the same until the equipment is paid off, and unfortunately, by that time they usually need new models. IT departments are never going to be in a position to get more customers to offset the cost of equipment, yet they are always going to spend money as it is required by the needs of the business. With a ledger full of depreciating assets, they simply cannot compete with the attractive price models of cloud-hosted solutions.

So, when outdated equipment comes up for a refresh, many organizations start to look at new delivery models to leverage OPEX models and reduce the capital funding requirements. Naturally, the idea of outsourcing or leveraging a hosted solution in order to achieve this will be one of the most attractive options, because there are no up-front costs associated with purchasing equipment, and the operational costs generally remain fairly uniform and free of surprises. This significantly changes the way IT departments plan for projects, because there is reduced work associated with integration (although there will still be some work even if you leverage a hosted solution) and lower costs for maintenance and resources. No longer do you need two or three people dedicated to managing and maintaining equipment; instead you can assign these resources to other, more business-critical tasks. This doesn't even take into account the free space gained from cleaning out server rooms and IT graveyards.

As cloud continues to penetrate every aspect of the organization, it is anticipated that we will eventually see the demise of on-premises equipment and a movement to adopt fully cloud-delivered solutions for security. Not only will this help organizations keep up with the constant challenges of the security landscape without significant investments in IT resources and equipment, but it will also allow the market to dictate which services will be the key differentiator between standard IT providers and full-service cloud providers.

## Vendor Management in the Age of Cloud

One of the biggest headaches for any organization, no matter which department you reside in, is the complexity that comes from dealing with vendors. Very few people enjoy this process, not to mention the cost and time associated with sending out an RFP or inviting vendors to show how their solution can benefit your organization. Vendors aren't necessarily deceitful, but they are trying to position a solution that aligns with what they're selling. Most of the time, this fundamental flaw in the way organizations operate leads to messy internal integrations, solutions that get ripped out, the high cost of quickly finding and implementing a replacement solution, and resentment towards vendors. Organizations are faced with a decision whether to implement a broad solution from one vendor, or multiple solutions from different vendors. With pros and cons to each, it really comes down to tolerance for complexity.

This complexity has become standard practice for many organizations and is the culprit behind the resistance to innovate. It's hard to get any project off the ground when you have 10 different legacy systems that need to be integrated to work, especially when they do not all support the same types of workflows. For any organization looking to update their infrastructure, outsource, or subscribe to a hosted cloud model, one of the key benefits is the streamlining of your vendor pool. Fewer vendors mean (hopefully) more consistency, fewer throats to choke, and less cost. But it doesn't mean you should remove and consolidate all your solutions. Vendor consolidation is a fantastic benefit of cloud, but like any other change in process, it needs to be vetted against the associated risks. Luckily, there are only a few key things to keep in mind before embarking on a standardization project.

First, less is more. One of the biggest issues I come across is how organizations have a ton of different legacy gear, and still try to add a new vendor into the mix. It's really like having a calm fish tank and then adding an aggressor. It's not necessarily a good idea, considering that if you start buying one-off boxes you'll end up with either too many devices from different vendors (and thus lots of different interfaces), or you'll run the risk of a conflict of communication between devices. Personally, I've always been a fan of the "less is more" school of thought, because the fewer points of complication you introduce, the less chance of conflict. This is why you'll often see large vendors such as Microsoft, Cisco, EMC and the like offering a wide suite of solutions to help replace internal workflows. We can't expect every solution to work natively with another, but thanks to the huge IT vendor mergers we've been seeing, organizations have more options in designing their new infrastructure workflows with reduced vendor complexity.

The second issue that comes with introducing lots of vendors is the pain of managing such a complex infrastructure. If you have different equipment (say Juniper, Check Point, and IBM) all purchased from different vendors, it becomes messy from a support perspective, because vendors will point fingers at each other, and you'll have each vendor calling you to sell you more stuff. When was the last time you called a vendor's support line and they made you prove the error was caused by their solution, rather than by a conflict with another vendor's product, before they would help? Instead of trying to support and fix the issue, it becomes a tangled web of "whodunit," causing more time and frustration on the customer's side, not to mention potential losses of business operations. The same goes for buying a solution in the first place. It makes sense to purchase from one vendor who knows my business well, and who can suggest the right services based on what I have and what they think I should be doing. The more points of contact you have, the more points of potential failure.

Finally, if you are looking for functionality, try to avoid one-off boxes. It's an expensive way to do things compared with buying fewer products that offer more functionality. This rule is hard to enforce: often a particular capability is needed that requires a single-vendor solution which can't be replicated through a current vendor. There will always be exceptions to the rule, but that doesn't take away the benefit of consolidation where possible. As the great vendor consolidation continues, there are some great solutions making it easy (and more affordable) to buy products that perform more than one function through technology such as software blades or modules. The benefit of buying an integrated solution is that these services can often be turned on and off without having to purchase new equipment, they provide additional functionality without additional capital cost (additional features are usually enabled through licensing), and because they are interconnected in a single device, they can often scale better than if you were to buy everything separately.

While many organizations look to vendors for education and advice on new technologies that can help them, that advice often comes with a bias towards selling more advanced solutions you may not necessarily need. This is why it is critical to do independent research to make sure the right solutions are being proposed and that they are industry standard. The last thing you want is to implement something that is overly complicated or has limited scalability. You also can't expect the vendor to know all the details about your environment and interconnected systems, and thus can't assume that they will be able to identify potential conflicts that could arise from implementing new technologies.

So, does it make sense to work with your vendors to create your IT strategy? We can all agree that every vendor has some bias towards their own product. After all, if you look at larger organizations such as EMC or IBM, both have made many acquisitions to help enhance their product offerings. The benefit is that, as new products become integrated, they become part of the supported ecosystem. This means you can add more functionality and, as long as the equipment is supported, you can (hopefully) expect fewer incompatibility issues. Additionally, there is merit in listening to consultants from your key vendor partners, as they may possess insight from other installations they've come across. It's definitely worthwhile to pick their brains about complementary solutions that fill any gaps in their product line.

With all the next generation technologies coming out, it's going to be increasingly important to ensure that whether you add new components to your infrastructure or not, you streamline the number of vendors and solutions to ensure that your environment is manageable. At the end of the day however, no matter whether you decide to consolidate on a single vendor or use best-of-breed across multiple vendors, the decision has to be made internally and in line with the organization's goals. There's a reason your senior technology team exists, and this is where the expertise in defining your next generation technology solutions should originate.

## Using Cloud for Standardization

Just as you wouldn't retool your entire business overnight to sell new products, the notion of expediting a transition to cloud infrastructures is doomed to fail. I don't expect many to jump into this new business model with both feet, as it makes more business sense to consolidate services through staged approaches.

One of the great things about cloud is that it gives you the opportunity to standardize your environment across many geographic locations. More importantly, it allows for phased introductions of this standardization. This is a great option for organizations that have subsidiaries or have purchased new locations or divisions and have legacy systems that need to be incorporated into the main infrastructure. Anyone who has worked for an organization that has gone through a merger or acquisition knows that the back-end systems and processes are the biggest integration hurdle, and usually the strongest (or most expensive) solution survives.

Luckily, organizations that want to adopt a standard method for all branches and locations of their business have the option of several key cloud technologies to help streamline this move. Of course, depending on the type of organization, not every service is applicable, but they might prove useful in segments of the organization if not adopted universally.

From an infrastructure perspective, leveraging SaaS offerings such as centralized CRM or Email services is a great start. These cloud-based services allow new users to be added to existing systems, and are accessible in most geographic environments. Additionally, major SaaS offerings such as Salesforce.com offer localization assistance such as language support. Normally this would be the responsibility (and cost) of internal IT teams, so the cost savings could be quite significant if you are working with multiple language regions. It is also a great option if you are considering an overhaul of your existing systems and want to take advantage of the latest technologies.

The second cloud service that helps streamline business units is authentication. Cloud authentication services allow for the widespread adoption of a single technology platform without the expensive, upfront costs of the two-factor tokens (such as SafeNet or RSA) that would normally be used. Many of these services distribute tokens electronically to mobile devices, desktops, and tablets, which eliminates the headaches and costs of buying and distributing physical tokens. This is a huge advantage, as the tokens are managed through a centralized web portal, allowing for self-service models and accessibility from all locations. The cost associated with managing inventory and help desk staff (dealing with lost tokens and password resets) is reduced significantly and transferred from a CAPEX to an OPEX model. Additionally, these services can often be used across multiple internal systems, reducing password complexity and making it easier to manage multiple identities (more on this later).
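
If you are curious what a software token actually does, the sketch below generates a time-based one-time password (TOTP, RFC 6238), the mechanism behind most soft tokens: the server and the user's device share a secret, and each derives the same short-lived code from the current time. The secret shown is purely illustrative.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // step                  # 30-second time window
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The device and the server compute the same code within the same time window.
print(totp("JBSWY3DPEHPK3PXP"))
```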

Another great area to look at in terms of cloud services is unified communications (UC), such as VoIP services. These can be very beneficial in organizations where the workforce is dispersed, either through telecommuting or simply through geographic diversity. The advantage of a cloud-hosted UC service is that you can route calls to any supported endpoint, and the endpoints act as an internal calling group. This not only lowers the cost of setting up landlines at each site, but also the cost of equipment and long-distance charges. Several solutions also allow you to switch cellular devices over to WiFi, taking advantage of network data rates instead of cellular charges that can add up quickly when long distance is involved.

Enterprises that want to set up large cloud deployments and move everything to these infrastructure models often have the resources to manage such a large project. But not every organization can afford to simply say, "let's change everything over." Personally, I think the biggest gain in cloud is the ability to use it as a consolidator, where the legacy systems and office locations costing the organization money and resources can be fixed. These are the perfect places to start bringing in cloud solutions for testing. From there, you can use cloud as the standard to streamline the rest of the organization. It also helps ensure the right solution is used in the right location, as within a single organization, different locations and divisions may have functions with specialized requirements.

## The Side Benefits of Cloud

The cloud has forced a shift in the way organizations operate. It allows them to take risks and innovate in ways that never would have been possible before. Startups now have the ability to utilize cloud services to build new solutions without searching for venture capital, and can provide investors with a near-finished solution when they are ready to launch. This is why we see tons of investment in providers viewed as key to running next generation business systems, and a generally higher rate of investment for cloud startups. The question is: why haven't service providers and organizations started thinking this way?

Organizations that provide services to customers often have an internal need to develop these solutions. There is often a huge amount of time, resources, and money spent on planning how to develop and host these new services. With cloud, organizations can scale and deploy back-end solutions to meet the growing needs of their customers without saying, "Sorry, we can only provide this level of service." It's a more organic way to develop and grow services without trying to estimate ahead of time the amount of resources required to provide them. It is also much easier to convince the CFO to fund services that are built up slowly, as they are required, versus anticipating the entire capital cost upfront. No one likes going back for more money, especially if they aren't showing any ROI.

For example, let's assume you want to set up a service that requires extra computing power during off-hours, such as overnight data management. Why should you invest in these resources on a permanent costing model (paying for more power all the time) when you can subscribe to them on a pay-per-usage model? By utilizing this type of model, you can take advantage of lower computing costs due to the off-peak usage, and avoid the costs of building this type of service. CAPEX becomes OPEX, and the service can be run in trial mode almost instantaneously, without purchasing and setting up a complex lab environment.
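
A quick back-of-the-envelope comparison shows why this is attractive. All of the figures below are hypothetical; plug in your own hardware quote and provider rates.

```python
# Option 1: buy servers sized for the nightly peak and depreciate them.
hardware_capex = 60_000
hardware_lifetime_months = 36
in_house_monthly = hardware_capex / hardware_lifetime_months  # ~$1,667 per month

# Option 2: rent off-peak capacity only for the hours the batch job runs.
hours_per_night = 4
nights_per_month = 30
off_peak_hourly_rate = 3.50
cloud_monthly = hours_per_night * nights_per_month * off_peak_hourly_rate  # $420 per month

print(f"In-house: ${in_house_monthly:,.0f}/month vs. pay-per-use: ${cloud_monthly:,.0f}/month")
```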

From an end customer perspective, cloud makes startups more efficient because a lot of the standard corporate functions can be outsourced to cloud providers, such as storage and email capabilities. This means that instead of hiring IT staff to run these operations, the headcount can go towards more pressing business-critical functions that drive the organization. For a startup, having just one or two more bodies working on the key corporate focus area can make a substantial difference in the long-term success of the organization. Do you need a full HR and Payroll department for 10 people? Maybe a hosted model to manage some of the basic functions is a better fit for your organization.

The problem that service providers struggle with is getting these types of benefits heard within the organization. The startup market is small and over-saturated with competition. Cloud providers are focused on selling to the big organizations that, ironically, are going to be the slowest in adopting radical business changes such as cloud. They are also inclined to partner with the biggest vendors, as they are the ones who have the ability to push into the market and sell new services based on their track record.

However, cloud business models are also going to force us to re-evaluate the way we do business and how we view the marketplace. Startups that are focused on cloud services are going to be the new key industry for service providers, because they can quickly create new and innovative services that meet the growing needs of the industry. We need to better understand how to work with them so as to ensure their success (which will, in turn, help make service providers more successful). Bigger will not necessarily be better in this new market. Innovation is what will provide the next generation of business transformation, and it will come from these new startups. As a result, organizations and service providers need to build services that help make these startups more successful, but first we need to start listening to them.

## Bring it On!

The last key benefit I want to touch upon has less to do with technology and more with looking at new ways of doing business. The key benefit for organizations when it comes to cloud (and not just large enterprises, but organizations of all sizes) is that it grants you the resources of a large company but keeps your core business lean and efficient. Think of it like hiring extra seasonal employees during peak periods. Cloud allows you to scale your internal IT teams in line with your business, without having to purchase tons of capital assets only to have them lie dormant.

IT departments have been under pressure to deliver innovative ways for organizations to operate. Hiring virtualization specialists to manage new infrastructure that will lower operating costs, as well as enacting new systems and applications to replace obsolete ones are all on the minds of Directors and CIOs, but realistically these projects require large amounts of capital and skillsets that aren't easily found within organizations. Instead of pushing these projects back, organizations should be looking to cloud providers to help them transform their businesses.

Imagine if, suddenly, your IT department started generating income for the organization instead of costing it. This is the premise behind cloud. Cloud enables your organization to create value for your customers, internal and external. Internal processes can be streamlined to reduce costs, new applications and services created with a fraction of the resources, and more flexibility in designing new business platforms. External customers benefit from new services that help them grow their own businesses. Cloud is therefore like a trickle-down effect, moving from the cloud providers to the end customers.

Resources:

VMware vCenter Chargeback Manager

www.vmware.com/products/vcenter-chargeback/

# Cloud Transformation

This is where things start to get good. You've successfully suffered through all kinds of technical babble (or skipped it; that works too), and so we are ready to delve into the real reason you are reading this book. Here's where we begin discussing how cloud can transform the way your entire organization functions. I'm going to explain how to make your organization more competitive and more efficient at the same time.

While I don't expect every organization to implement these ideas, this section is important because it will help you understand how the basic principles of cloud adoption can transition your organization into a leaner, more efficient machine while creating new revenue streams from internal services. It will also give you examples of how leveraging the needs of the rest of the organization can help provide funding for internal IT transformation projects that can, in turn, support future operations of the entire company.

## Cloud Benefits for the C-Level Crowd

I often have conversations centered on whether cloud is a discussion that should begin within the IT department or from the C-level down. It's a great question, because as much as I like to think it's a top-down discussion, cloud will primarily affect the IT department, including a fundamental redesign of the department as a whole. This is why it's in the best interests of the IT team (and hopefully the security team) to drive the transformation to cloud. There are real benefits to having this team lead.

When you talk to C-level executives about what they are trying to accomplish with the company, it really depends which department they represent. CFOs want to reduce operating costs, move away from owning depreciating assets, transition from a CAPEX to an OPEX model, and streamline business processes. CMOs (and other marketing folks) want to yield better market insight from their campaigns, understand how their brand is doing, and figure out how they can increase market share. CEOs want to do more with less and keep up with industry trends so that they can remain competitive in their markets. The problem is that if it takes years to start implementing these new technologies and methodologies, the competitive edge is lost. This is why cloud is so interesting to the C-suite.

When it comes to making all these strategies happen, it's usually the IT team that is called to bat. The IT team usually resides in isolation, busy with the day-to-day operations of the organization, and has no context around the originating idea. By the time the requests get down to them, they're in such a state of disarray (often due to executive excitement) that the team feels burdened with the responsibility of responding quickly with half the information and none of the resources available. This is where cloud suddenly turns into an outsourcing conversation, instead of an enablement conversation.

From a business perspective, the real issue with the cloud model is that it quickly becomes two isolated conversations. At the high level, it's a business transformation conversation. At the IT level, it's all about infrastructure and process. It's rarely a collaboration between the two, and this is why everything tends to be more complicated. IT has always been treated as a box-pushing cost center, so decision makers often don't see the value in inviting these folks to the table, resulting in very little understanding of and involvement in the larger corporate mission. They simply don't understand how everything is meant to fit together because they are never involved in the high-level strategy discussions.

What can we do to fix this? More than anything, it comes down to re-thinking how the IT department is organized. Take a look at everyone in the organization and their skillsets. You need to take an inventory and think about designing a team environment that focuses on building and supporting the cloud infrastructure. Half the team should be focused on building the environment that delivers the services and making sure it's available and able to support the organization's objectives. The other half focuses on creating the services themselves, working very closely with all areas of the organization to ensure their goals are achieved through these services. If there are gaps between what resources are available and what is needed, this is the team that should choose the solutions.

If you have these two teams supporting your organization, it's no longer a conversation about whether cloud is IT or part of the overall business strategy. Cloud is suddenly the way the organization will deliver value for its customers. IT is the backbone required to help the organization meet these commitments.

## Big Data

It's hard to talk about cloud and virtualization without discussing one of the most significant advancements available to organizations today. If you regularly read articles on cloud, you've probably seen the picture of a bright yellow elephant staring you down. This elephant, Apache's Hadoop elephant, is the mascot of what is considered to be one of the most important technologies in the transition to large-scale cloud environments. In fact, Yahoo! has been the largest contributor to Hadoop and uses it across their entire organization, as do Facebook and countless others. This Big Data elephant is going to change everything.

Apache's Hadoop is a framework created by Doug Cutting, who developed it to support a search engine project called Nutch (Hadoop was the name of his son's toy elephant, in case you were wondering). Hadoop allows applications to work with extremely large amounts of data (we're talking petabytes) using a Java-based platform. With contributors around the globe, the open-source Hadoop Common package allows organizations to map their filesystems and gives them the ability to replicate data using the Hadoop Distributed File System (HDFS). This means that if there is a power outage or other hardware failure, there is a greater chance that the data will still be accessible.

So how does Hadoop work? Without going into specifics (check out Apache Hadoop's website for all the details), Hadoop uses HDFS to break data into smaller blocks and spread replicated copies of them across multiple servers, known as "datanodes." You can set the number of copies you want, and the datanodes let you balance the data across different servers. Sadly, HDFS cannot provide high availability due to the way the filesystem works. For technical folks, the issue is that each filesystem instance requires a "namenode," which holds the directory tree of all files in the system and tracks where their blocks are stored. There is only one namenode, and if it goes down, it can take a long time to restore itself. The upside of HDFS's design is the increased throughput of data and the ability to handle very large files. Guess which is the more compelling choice.
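
To give a feel for the programming model, here is the classic word-count example written for Hadoop Streaming, which lets scripts in languages other than Java act as the map and reduce steps (the script would be passed to the streaming jar as both the mapper and the reducer). It is a sketch of the idea rather than a tuned production job: each mapper processes a local block of the input, and the reducers aggregate the sorted map output.

```python
import sys

def mapper():
    """Emit a (word, 1) pair for every word read from stdin."""
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    """Sum the counts per word; Hadoop delivers map output sorted by key."""
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    if sys.argv[1] == "map":
        mapper()
    else:
        reducer()
```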

What does this have to do with business transformation? Well, if you can suddenly manage huge amounts of data, and actually leverage it to gain better insight, think of what your organization can achieve! If you are a customer-service organization, you can suddenly gain more insight into who your customers are, how they are using your products, and how you are interacting with them. You can then use that information to build products better designed to meet their needs, and identify additional revenue streams from complementary services.

There are numerous other advantages of the Apache Hadoop platform, many of which are too technical to get into here. But you can visit the following sites for more information on this project.

http://hadoop.apache.org  
http://wiki.apache.org/hadoop/FrontPage  
http://en.wikipedia.org/wiki/Apache_Hadoop

## DevOps: The New IT Team

When you think about how cloud is affecting the way businesses operate from an IT perspective, the usual teams involved are the IT infrastructure guys and perhaps the security folks. What we tend to forget is that these groups aren't the only ones who are looking at how cloud can make business processes more nimble. There are the developers and operations folks who have their own agenda, and have even created a sub-movement. The DevOps movement is showing organizations how changing the way they run development can lead to some astounding results.

Do you ever wonder how companies like Flickr and Amazon are able to adapt to the cloud so quickly? The secret sauce is that they restructured their IT teams into a group called DevOps, which encompasses both development and operations teams. The term originated in Belgium in 2009 and has gained huge momentum thanks to its impressive results. Think about it: most organizations structure their IT environment into separate entities including development, operations, security, management, and QA. These teams aren't known for working together particularly well, and when they do, the processes are pretty drawn out and costly. Each team is spread as thin as it is; they naturally resist taking on any more projects.

This is where DevOps fixes the problem. Development teams aim to create new systems and methods for streamlining business tasks, but doing so often introduces complexity and ends up causing more work for operations and security teams. The DevOps model recommends combining the teams into one big DevOps team (there is even a term for teams that restructure their security teams to work lock-step with the development team: "Rugged" DevOps). The benefit here is that because the operations team is now involved from the ground up in the development cycle, you gain not only project cohesion, but average deployment time is also cut significantly. Still not convinced? A proponent of this model, Amazon, claims to conduct more than 1,000 deployments a day.

While it seems like a lot of hassle to change the way the organization operates, think about the key goals of the CIO. They want to make sure the IT development process results in tight IT security controls, standardized processes, and the ability to see and react to IT risks. Generally, developers hate getting security teams involved, because the security team makes them lock everything down, which means undoing a lot of work. Making security a part of the DevOps process can reap huge rewards.

The beauty of the Rugged DevOps model is that it helps organizations get rid of old, inefficient legacy IT systems, while simplifying business processes. This results in cost savings from consolidation, but also speeds up the time of development. This means better products and faster time to market (and cost recovery). By streamlining the complexity of the development process, the steps and number of groups involved can be reduced dramatically, making the IT department, if not the entire organization, immensely more efficient.

From a security standpoint, this is a win-win because the complexities associated with legacy systems are eliminated. This reduces the number of inherent risks and makes the entire infrastructure easier to secure. Additionally, if security and QA are involved from the ground up in the development process, the code itself will be more secure because the teams that are now part of the development group have already vetted it. It's no longer a post-development process where code is tested and secured (or rewritten).

DevOps is a huge shift from the way organizations typically design their IT teams. If I haven't said it before, cloud isn't a trend; it's a new way of thinking about how we have done things in the past and figuring out how we can do things more efficiently and securely in the future. As for how you design your teams, think about what you need to achieve and stack your teams to give you the best odds possible.

I recommend taking some time to read up on this fascinating topic, as well as the benefits of creating a Rugged DevOps group, which are discussed in this interview with Tripwire founder Gene Kim:

http://www.csoonline.com/article/701479/how-security-can-add-value-to-devops

## Virtual Desktop Infrastructure

No section on business transformation should avoid one of the most effective baby steps in cloud: Virtual Desktop Infrastructure (VDI). You may have heard the term VDI mentioned as part of the first wave of cloud adoption. VDI allows organizations to run virtual copies of operating systems on any device with an Internet connection while keeping all their resources safe inside corporate firewalls. From a user perspective, it looks and acts like a regular copy of Windows (or whichever OS you use), with all their files and resources still accessible. Why should you care about this type of business model? How about an uplift in security, availability, and employee satisfaction? What if it also made IT management and deployment of user environments more efficient and standardized?

Let's start with the security angle:

Virtualized desktops allow for centralized policies when it comes to configuration, patching, and in some cases DLP (where the data is locked down so it can't be copied onto an external device or emailed). But the real security comes from the fact that data is technically isolated within the VM, and not actually installed on the endpoint device.

That being said, in order to make VDI truly secure, there must be endpoint protection (anti-virus) installed, so that if a computer is infected, the infection won't spread to the host hypervisor and infect other VMs resident on the same host. But this isn't anything new and should be standard policy for any environment, virtualized or not.

What about availability?

This is where I take a deep breath. I'm going to assume the infrastructure guys running the VM shop are rock stars and can keep the environment up 24/7. If this is the case, you can skip this section and I'll commend your team's skill. In truth, if your infrastructure suffers an accident, outage, or instability, your VDI desktops will sadly go down with it. The last thing you need is an executive working remotely between flights, trying to get a proposal or important document finished, who suddenly cannot access it (or worse, loses it). Retail applications, a beautiful example of VDI used effectively to deploy standardized environments across many locations, are also susceptible to this risk. Should the VDI host go down during business hours, it could lead to significant loss of revenue, not to mention brand damage. This is one of the fastest ways to kill any VDI project and cause employees to start dumping important files into cloud storage services.

What's in it for me?

My favorite point about virtual desktops is the flexibility they provide employees. I doubt I am alone in saying there have been times when I wanted to hurl my brick of a laptop out the window because my netbook or tablet can load faster. I also sympathize with the legion of rogue Mac and Linux users who install VMware Fusion or Workstation so they can run Outlook in a Windows VM just to get around corporate policy. Why not allow employees the choice of device? We commuters despise lugging a heavy laptop daily when a netbook would do perfectly fine. In some cases, if I plan to be in meetings all day, it would be great to bring a tablet so that I can quickly check emails to make sure everything is under control without having to boot up a laptop during breaks. Happy employees are productive employees. By leveraging VDI, you could give employees a choice in device (some organizations go so far as to give employees an allowance to purchase hardware) and then install the virtual desktop onto the device. No more worrying if someone's kid gets online and floods the device with malware. The VDI at that point should (assuming the correct security policies are in place) protect the corporate infrastructure. Employees are happy to get the device they want, IT is less stressed about ordering and provisioning dozens of laptops that need quick upgrading (and listening to grumpy employees complain about these devices), and the security folks are less worried about people trying to thwart their good intentions and policies. It's win-win.

It seems like a lot of work for IT folks though.

One of the true benefits of VDI is that the simplicity of the model makes desktop management easier and more standardized, meaning less work for your IT teams. Virtual desktops are rolled out by leveraging a virtual image of the desktop environment, which is then delivered to whatever hardware the user has (laptop, desktop, mobile device). So, if you can create standard images for different user profiles (Active Directory will be the key here), you can easily set up images for users and push them out as needed, as in the sketch below. Even if your users manage to damage their desktop image, a new one can be pushed out and business can continue as usual. This makes it easier for IT teams to manage and troubleshoot large numbers of users, and it ensures security controls are standardized and in place across all assets.
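
If it helps to see the idea in something more concrete than prose, here is a minimal sketch (in Python) of mapping Active Directory groups to golden desktop images and picking the right image for a user. The group names, image identifiers, and helper function are hypothetical, not the API of any particular VDI product.

```python
# Hypothetical mapping of Active Directory groups to golden VDI images.
# Group and image names are purely illustrative.
GOLDEN_IMAGES = {
    "Sales":       "win7-sales-v42",    # office suite plus CRM client
    "Engineering": "win7-dev-v17",      # IDEs and build tools
    "Finance":     "win7-finance-v09",  # ERP client, locked-down USB policy
}
DEFAULT_IMAGE = "win7-standard-v23"

def image_for_user(ad_groups):
    """Return the golden image to push for a user, based on AD group membership."""
    for group in ad_groups:
        if group in GOLDEN_IMAGES:
            return GOLDEN_IMAGES[group]
    return DEFAULT_IMAGE

# A damaged desktop is simply replaced by re-pushing the same image.
print(image_for_user(["Engineering", "VPN-Users"]))  # win7-dev-v17
print(image_for_user(["Contractors"]))               # win7-standard-v23
```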

So, is VDI worth it? In some cases, absolutely. It really comes down to identifying what your user profiles are. If you have a significant ratio of mobile users (sales reps, consultants, road warriors), this could be a great option to provide more security and flexibility in accessing corporate resources, with the understanding that it is heavily reliant on the availability of the underlying infrastructure. It is also a great option for environments with telecommuters, as you can simply have them install a client and use their home computers, significantly reducing IT provisioning costs. If your workforce is mostly in-house and you have a significant stock of hardware already up and running, the costs of switching to a VDI model might outweigh the benefits. The decision to implement VDI really depends on the goals of your cloud project.


# How I Learned to Stop Worrying and Learned to Love the Cloud

At this point you are either really excited about the cloud or want nothing to do with it. It's complicated, and the steep security learning curve leaves it exposed to IT threats. On the other hand, it'll make your organization more efficient and innovative, and possibly save you money once it's up and running. The complexity and hype surrounding this issue often scares off the very people who should be its greatest ambassadors.

Like any project, the success of using cloud to transform your organization depends on your ability to convince key stakeholders that the work is going to be worth it. Unless they have taken the time to learn about cloud, and know what it actually means, it is going to be a tough battle to get these decision makers onboard (especially considering each of them has priorities and a vision to achieve those priorities through this new business methodology).

So, how do you start to work with cross-functional decision makers to explore whether these projects will benefit them? The key is understanding what makes these folks tick, and how you can influence them by proving that supporting your project will help them achieve their own objectives as well.

## Why CFOs Love Cloud Computing

When it comes to key stakeholders in your organization who you would imagine being involved in leveraging cloud, the CFO isn't generally one you would expect to be supportive. CFOs tend to have a love-hate relationship with concepts like cloud. On one hand, they understand the financial benefits that come from adopting this type of business model and may be intrigued by what it can do for their ledgers. At the same time, the way the cloud business model works means costs are tied closely to the cloud market, making it difficult to predict future expenditures. CFOs are tasked with having to make a call on whether cloud will stick around or if it's just a trend.

There is no doubt the role of the CFO has changed with the adoption of cloud in organizations. Because IT costs are generally one of the most visible sources of frustration for the CFO, their role has traditionally been about managing capital costs and business functionality. Now the role is affected by how virtualization and cloud solutions are used to replace these traditional models, changing the way IT cost has typically been allocated. Many of the decisions on cloud will now depend on whether CFOs understand the value of cloud, and whether they view the cloud pricing model as appropriate. If they feel cloud models reduce costs for their organization's IT groups, there will be a higher adoption rate (usually because the CIO reports to the CFO).

But even savvy CFOs aren't necessarily big fans of cloud. As I mentioned before, due to its infancy, the market for cloud services is still a little volatile. It's hard to design a proper cost model when cloud providers are still trying to figure out how to appropriately price these services. Monthly costs can fluctuate significantly based on usage, and this throws off financial forecasting (especially cash flow projections). This is bad for CFOs, who are required to give (or just really enjoy giving) a fairly accurate estimate of costs.

To get around the apprehension about pay-per-use models, one approach is to purchase a fixed-cost model of cloud, where the organization commits to a certain amount of resources from the provider. This model can allow for a lower cost per unit and also allows for bursting, where overage is allocated when one month requires more resources (think the holiday season). For applications that tend to stay moderately flat in terms of usage, this is a viable option because it can reduce overall cost and decrease the risk of cost variability.
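
For the spreadsheet-minded, here is a back-of-the-envelope sketch in Python comparing pure pay-per-use pricing with a committed baseline plus burst overage. Every rate and usage figure here is invented purely for illustration; actual provider pricing varies widely.

```python
# Illustrative monthly usage in "resource units" (e.g., instance-hours); December spikes.
usage = [100, 100, 110, 105, 100, 100, 95, 100, 110, 120, 150, 220]

PAY_PER_USE_RATE = 1.00   # cost per unit, pure on-demand (hypothetical)
COMMITTED_UNITS  = 120    # units pre-purchased each month (hypothetical)
COMMITTED_RATE   = 0.70   # discounted rate for committed units (hypothetical)
OVERAGE_RATE     = 1.10   # burst rate for units above the commitment (hypothetical)

on_demand_total = sum(u * PAY_PER_USE_RATE for u in usage)
committed_total = sum(
    COMMITTED_UNITS * COMMITTED_RATE + max(0, u - COMMITTED_UNITS) * OVERAGE_RATE
    for u in usage
)

print(f"Pure pay-per-use: ${on_demand_total:,.2f}")
print(f"Commit + burst:   ${committed_total:,.2f}")
# For flat workloads the committed model is cheaper and, just as importantly
# for the CFO, the monthly bill barely moves outside the burst months.
```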

For other CFOs, cloud is seen as an opportunity. They love the cloud because of the wonderful things it does to the organization's books. For one, it reduces capital investments, one of the key measures of an organization's business posture. By limiting the amount of depreciating assets, organizations can spend money on more important things that drive core business objectives.

Cloud also moves organizations from a CAPEX-based business to an OPEX-based business. There is a reason this is one of the most important discussions around cloud: the more capital investment that can be removed from the organization's operating plans, the more flexible the organization becomes.

But the key benefit for CFOs really lies in the removal of uncertainty. Cloud helps reduce many of the capital questions associated with running an organization, making the job of the CFO a lot easier.

For example, when it comes to building applications, it's not uncommon for projects to go over in time and expense. It's also almost impossible to project the impact a new application will have on the company and the market. Will it be successful? Will there be quick adoption? Will it deliver on the promises made? If it's very successful, does the infrastructure or application need to be re-tooled or extra capabilities added?

Second, what if the market takes a downturn and suddenly the company needs to scale back? Here is where the pay-as-you-go model shines. The fear of purchasing assets that will be affected by the economic market is virtually eliminated with cloud models.

Finally, what if the company is suddenly bought? What if there is a fundamental change in direction as a result of a market opportunity? Cloud models allow for quick adjustments of business focus due to the flexibility and elasticity of the model, making it a great option for fast moving organizations.

Involving the CFO in the cloud decision is a worthwhile endeavor, and it also helps them understand the benefits of cloud; they can be the largest allies in creating a cloud adoption model for the organization. Cloud is no longer just an IT department discussion; it needs to happen in all areas of the organization.

## The New Role of the IT Team

It seems that when it comes to capital investments in IT we have been in a perpetual recession, with CEOs and CFOs constantly looking for ways to reduce costs. Part of that train of thought tends to land on whether there is value (cost savings) in replacing the entire IT team with a managed solution rather than doing individual layoffs. In many organizations this means the CIO has to spend way too much time justifying the value of IT, and if you have hungry shareholders there is a risk that cloud starts looking a little too enticing. After all, who wouldn't want a full solution that delivers cost savings up-front in exchange for predictability in operating costs? The problem with cloud is that the CIO can be thrown in as part of the outsourcing deal.

Conversely, CIOs who run the risk of being outsourced also have a great opportunity to look like a savior to the organization. By transitioning the IT department into an internal brokerage, they can match the best services and cost structures to implement an in-house solution that operates at the same cost levels as an outsourced solution. Additionally, by transforming their department into one that creates value and can bill in-house departments, they can share the OPEX across the whole organization instead of carrying it under the IT umbrella. This means more innovation and (hopefully) larger budgets to create projects that increase organizational value (and shareholder happiness).

A great place to start transitioning to this model is to look at your resources — those fabulous IT people who keep the organization up and running. If you break down the number of hours allocated to standard IT tasks such as configuration, management of infrastructure, troubleshooting, end-user support, and dealing with obsolete resources, there is probably a significant amount of time that could be better used. This is on top of the frustration these tasks cause IT staff, which is often a key reason for turnover.

Going back to this example, if you are able to outsource all the tasks that are deemed to be a misuse of your team's time (why pay a level-three IT employee to do level-one support?), think of all the potential you have to create more effective services that actually benefit the company! These resources are now freed up to create new applications and processes, and to actually start tackling those projects the organization has been trying to get to for years.

Outsource doesn't have to be a four-letter word; it can be seen as a huge help to the IT team, considering they can farm out tasks they don't want to do and focus on the tasks they prefer. Employee happiness means employee productivity, so if you can save money through outsourcing and increase productivity, you'll have a more valuable department than ever.

## Cloud as a Catalyst for Innovation within IT

Whether accepted or not, the role of senior IT leadership will change with cloud. In fact, many IT roles will change. We can't expect to leverage new business models and practices while keeping everything status quo. In order to survive the transition you need to start thinking about what role you want to play in the process and work towards that role, or risk becoming obsolete in this new cloud world.

Starting at the top, C-level executives are hopefully excited about the business transformation possibilities and cost savings that cloud is slated to bring. Unfortunately, as you go further down the organization, especially into the IT departments, there is a different type of attitude. Many IT professionals (regardless of team) view cloud as hype designed to make their lives more difficult or render them obsolete. This isn't the first time we've seen significant change. Remember the mainframe transition?

The great thing about cloud is that it's not just designed to streamline IT inefficiencies from a role perspective; it's meant to do more with less. IT professionals need to see that cloud gives them the chance to learn new technologies and become experts in a fast-moving market that will rely heavily on those who can adopt these new skills the quickest.  At this point, nothing says IT job security more than having cloud knowledge.

Forrester analyst James Staten, who studies the cloud market, believes the largest hurdle for organizations to overcome in cloud adoption is the dramatic shift in the culture of the IT department. In particular, the fear of being outsourced to cloud providers has created a culture of what Staten calls "server huggers," IT professionals who resist cloud deployments. The problem is that, unless the expertise exists in-house to transition to and maintain all aspects of the cloud environment, maximizing the benefits of cloud adoption will mean an increased reliance on cloud services that let organizations utilize the expertise of the cloud provider. This is reminiscent of when organizations began leveraging managed security service providers (MSSPs) to manage their security postures, and IT security professionals wondered if their organizations might outsource their roles. That was a short-lived fear, as the demand for security professionals is still high with no real decline expected anytime soon.

But it's not just the management level that is putting pressure on organizations to rethink cloud adoption; the C-level is reliant on their in-house experts to help them make the case for the transition to a cloud model, which just happens to be the same group at the heart of the outsourcing debate. Sadly this causes stress, as IT folks now feel the need to justify their jobs. Instead, they need to see this as an opportunity to position themselves as leaders in the transition, and perhaps get a promotion out of the deal if they can show how their skillsets will be critical.

The key to remember when transitioning to an outsourcing model is that there will be a need to manage these new extensions of the internal IT teams. Managers will now be involved in not just managing their own teams, but managing the outsourcing partners. This is where we will see a large amount of emphasis placed on integration between external and internal resources, and smart IT teams outsourcing the responsibilities for resource maintenance, which eats a significant amount of time. This frees up teams to provide more significant contributions to the organization in deployment of next generation services and transitioning the business from a legacy IT framework to a more efficient model involving cloud environments. It's also a way to actually use the skillsets of your teams for what they are good at instead of having them stare at a screen all day.

Do you have programmers on your team? With the increase in mobility, having application designers will be a significant advantage, as they can start transitioning legacy applications to be more nimble and accessible across devices and platforms.

Cloud doesn't mean IT teams are going to start losing their jobs. In fact, the move to cloud is creating a significant number of new roles. This includes roles such as cloud architect, security specialist, developer, and infrastructure manager. But more importantly, we will see an increased requirement for unique skillsets that can support cloud and specialists who can work with large amounts of information spread across multiple sources (even across multiple geographic locations). This is an opportune time for IT and security professionals to take stock and explore new careers in cloud environments while the new global refresh is happening. As the cloud continues to evolve, there is no end to what kind of roles will be required to maintain this new global infrastructure.

# Securing the Cloud

I feel bad for security folks. Maybe it's a result of working on the periphery of the industry for more than five years, but if you ask me, it's virtually a losing battle. First, you have drastic cuts in funding due to the recession, which means obsolete hardware lingers and purchases are often judged on what is "good enough" rather than what is required. It becomes a reactive environment versus a proactive one. Of course, funding often magically appears the minute there is a breach, but by then it is often too late. Security engineers have an uphill battle when it comes to getting many projects funded, usually because when you are trying to explain something of a technical nature, chances are CFOs have no understanding of the topic and aren't willing to make a decision — which often means they simply say no.

Second, look at the news. On any given day, there is an article about some government or organization being hacked (let's not forget about the sizeable number of events that are never reported). IT crime has become one of the most prevalent types of crime. It's cheap, easy, and more criminal organizations excel at it than ever before. Even worse, thanks to cloud, many of these organizations can gain access to tons of cheap computing power to run attacks such as Distributed Denial of Service (DDoS) attacks, the popular choice for hacktivists today. If that wasn't enough, even internal whistleblowers are leveraging gaps in security, which has made them key sources of information for organizations such as WikiLeaks.

If you work in security for a large organization, particularly in finance or government, you are in the trenches of a historic battle. The number of attacks has skyrocketed, but the IT security departments of these organizations, and most other organizations, have not necessarily scaled in response. Staffing alone is tricky, as there is only a small pool of computer engineering talent available, let alone IT security specialists. On top of this, if you are lucky enough to find stellar IT security folks, you will have to keep them happy and well paid in order to retain them, taking away funding that could be used for purchasing solutions or equipment.

The biggest reason I am sympathetic to the security community has to do with cloud. Even if you are not technical, I strongly suggest reading this section; it will be of great benefit. Security is the biggest barrier to cloud adoption, and I intend to tear down parts of that barrier by clearing up some of the misconceptions while providing key information to help you properly prepare for this new evolution of security. Fear not, there are ways to fix all of this.

## Whoever Marketed Cloud Is a Genius

As a marketer, I get the privilege of reading and writing marketing copy that contains just about every buzzword you can imagine. By far, cloud is my absolute favorite. This isn't simply due to the fact that I am well versed in the subject, but because nine times out of 10, the customers we are targeting have a different idea about what the word means than we do. It's the same when I see an ad from another company that talks about cloud. We're all using the same word, yet we all seem to mean very different things.

Each customer has a different idea of cloud, and they are seeing a lot of exciting ads about different cloud solutions, which means as an IT or security engineer, you have to try to figure out what everyone is talking about when they come into your office and ask for the latest cloud solution. For example, your CFO is on his flight back from a business trip and happens to come across an ad from "insert company here" claiming it can save him 75% by using their solution. Great! He goes back to the office, talks to the IT manager and tells him he wants this solution, and then instructs the IT manager to shove it into their environment to save the company money. Unless the CFO is technical, explaining that it's not as easy as he thinks is a pointless endeavor.

Realistically, this type of scenario doesn't occur all that often, but you get the idea. If an IT engineer is tasked with making recommendations for the environment, chances are he will run into a cloud conversation and will have to try to explain what the marketing jargon really means.

The second part of cloud that really makes me feel for the brave folks in IT security is that whoever thought of virtualization didn't have security in mind.

## Protecting the Virtual Landscape

This is where things start to get more complicated. Luckily, coming from a marketing background means I can explain things in proper, non-technical English. This is where you decide whether you want to go down the rabbit hole. I'm referring to the rabbit hole that goes to The Grid, the virtualized world in Walt Disney's TRON. The metaphor might seem an odd way to describe security, but out of all the ways I have explained cloud security, this way seems to be the easiest to visualize for most. (Disclaimer: I'm assuming most of you have seen the Disney movie TRON or TRON:Legacy. If not, take a break from the book, watch them, and come back).

Why is The Grid like cloud security? Think of it this way: cloud, in its simplest form, is virtualization — lots of elastic, on-demand computing power located wherever. Virtualization is really a computer in a computer (again, simplest form), or a world in a world, like The Grid. You can even joke that it can be accessed through Linux. So, if The Grid is our virtual machine, where does security fit in? Well, remember TRON was actually an Anti-Virus program and Recognizers (the big flying ships) are not unlike Intrusion Detection Systems (IDS), scanning programs for threats. Like The Grid, virtualized environments require their own security. We have our police, the Grid has theirs, and the two of them have really nothing to do with each other. The same can be said about how many of the traditional security solutions and virtual environments interact.

No one can deny that the types of IT security threats have continued to evolve over the last decade. No longer are attacks aimed solely at penetrating the IT network through traditional network layers protected by firewalls and intrusion detection/prevention systems. Today's attacks leverage the path of least resistance: the Internet, through Wi-Fi, SQL injection on websites, and mobile devices. Add the number of corporate assets that are accessible through these network connections and it is quickly evident how large the security landscape has gotten. The dynamics of securing corporate assets have changed significantly and, unfortunately, traditional IT solutions cannot address some of these changes.

Traditional IT security solutions have generally focused on securing the perimeter of the IT environment. Typical security approaches would include things like installing endpoint security to protect and lock down desktops and laptops, setting up VPN tunnels with authentication for remote access, and configuring firewalls to protect the network from unauthorized access.

When cyber criminals shifted to web-based attacks, the market responded with solutions including Next Generation Firewalls (NGFW), which can detect malware before it hits the network and control access to applications on the network, and Web Application Firewalls (WAF), which protect against DDoS and SQL injection attacks caused by malicious code entered into web-facing applications. While these solutions provide advanced protection for networks against next generation attacks, they don't do much to protect virtualized environments once the network has been breached.

The main reason traditional security technologies have limited effect on virtualized environments is the way those systems are designed and operate. Traditional security solutions often look for physical endpoint data, not virtualized endpoints. When more virtualized environments are created, security practices need to be adapted for them. Normally this wouldn't be an issue, but because of the separation of duties between the infrastructure teams who design and implement the virtualized environment and the security personnel who are normally tasked with securing it, it is not uncommon for a lack of communication to result in one team scrambling to adapt their processes to the change. It is rare to see organizations utilize both teams during the design and implementation stages, and even when both teams do work together, there is a learning curve associated with understanding how security and virtualization function together. Security departments have been tasked with securing physical environments, and so their policies are based on traditional approaches. The infrastructure teams are under the gun to adopt cloud and virtualization, and essentially do more with a smaller footprint. As infrastructure becomes more virtualized and spreads out across multiple locations (as in cloud implementations), securing these environments becomes increasingly complex. Realistically, however, you can't expect an organization to pass up the advantages of adopting cloud and virtualization. So what do you do?

Often it works out in favor of the infrastructure guys and leaves the security guys holding the bag. That's where the problem lies. Not only are they swamped just keeping up with the increase in threats, but now they have to figure out how to plug all these holes in the new virtualized environment. This is why I feel sorry for IT security folks.

## Cloud Security Simplified

Most modern security solutions leverage Application Programming Interfaces (APIs) to see data as it passes through the network. This is how most solutions are able to track threats, such as someone hacking your network, or users mucking up internal assets. Through APIs, vendors can create rules for their appliances to access the data and interact with the network. You'll often hear the term API thrown around when you see security vendors talk about how their solution integrates into your network. It is through APIs that virtualization platforms can communicate with security solutions either external to the server, or integrated within the server.

Several years ago, VMware released a set of APIs that allowed third-party vendors to create security solutions that would work natively in their environments. Because there are a significant number of customers utilizing VMware as their platform, the security solutions market for VMware environments has become a hot spot for vendors. This has been one of the key reasons for the availability of solutions that focus on securing virtualized and cloud environments across all platforms.

The need for new solutions for virtualized environments stems from the longstanding use of traditional security solutions focused on detecting network traffic at the server, which cannot be expected to provide visibility and security within a virtual environment. These tools have no visibility beyond the physical Network Interface Card (NIC) of the server, and because virtual machines leverage virtual NICs, internal movement and changes go unnoticed by monitoring devices and produce no log events. If something happens, such as a network breach, there will be no record of it from the device or in any logs. This is the key difference that has prompted security manufacturers to start designing solutions that enable security within the new virtualized and cloud infrastructure.

The first waves of security solutions designed for virtual environments contained the usual suspects: endpoint, firewalls, IDS/IPS, security event management, identity management and single sign-on. These solutions aren't new by any means. The problem is that, in order to do all the fantastic things that virtualization does, the environment has a lot of unique features that one could surmise were designed that way just to keep security folks on their toes. Luckily, the security folks fought back with a little trick called paravirtualization.

## Paravirtualization

If you'll remember, I said I was a marketer and not a technical writer. Here is where we put that theory to a test. Paravirtualization (pa-ra-vir-tu-a-li-za-tion) is a fancy way of saying virtualization-optimized. The basic theory of paravirtualization, as best described by Wikipedia, is:

" _The intent of the modified interface is to reduce the portion of the guest's execution time spent performing operations which are substantially more difficult to run in a virtual environment compared to a non-virtualized environment. The paravirtualization provides specially defined 'hooks' to allow the guest(s) and host to request and acknowledge these tasks, which would otherwise be executed in the virtual domain (where execution performance is worse). A successful paravirtualized platform may allow the_ virtual machine monitor _(VMM) to be simpler (by relocating execution of critical tasks from the virtual domain to the host domain), and/or reduce the overall performance degradation of machine-execution inside the virtual-guest."_

In plain English, we could say paravirtualization lets you fool the virtual computer into thinking it's doing all kinds of fancy stuff using local resources rather than virtualized ones. It does this by creating connectors that tap into the environment, instead of having the device itself connect. This means that instead of installing the device on every machine, you can run it somewhere else and tap into multiple hosts, reducing the number of processes running on each server.

A practical (and sadly perfect) example of why paravirtualization is so important can be seen by looking at the biggest system resource leech and number-one source of frustration for IT security folks: endpoint. Because endpoint security is a standard in just about every organization, and is also the most misused service within virtual environments, it is a great place to start.

## Endpoint in Virtual Environments

A quick introduction to endpoint security for non-technical folks, and a refresher for everyone else: endpoint security is a program that sits on every desktop and laptop and provides security such as anti-virus or encryption. The basic principle is that every endpoint has the program (called an agent) loaded, and is managed by a central monitoring program located within IT. If you look at your corporate computer, there is a good chance that sitting in the taskbar is an icon for some type of anti-virus or other endpoint program. The fact that endpoint uses an agent is the reason it is the most resource-abusive security control in a virtual environment. As more and more infrastructure becomes virtualized, using endpoint solutions that are optimized for virtual environments will become increasingly important.

Traditional endpoint solutions were created to work in disparate locations, across multiple physical computers including desktops, laptops, and servers. The trouble with implementing these traditional solutions within a virtualized environment is that the policies written for physical infrastructure must be redesigned for virtual environments. If the corporate policy states endpoint must be installed on every machine, and this is enforced within a virtualized environment, there can be significant decreases in infrastructure efficiency, not to mention the possibility of taking the entire infrastructure down due to over-allocation of resources.

Here's what it means in proper, not-quite-the-Queen's English.

If you copied the standard type of endpoint installation from a physical environment and translated it into a virtual environment, the endpoint agent would be installed into each virtual machine (Figure C). In a simplified example, a server may host four virtual machines (VMs) on a single hypervisor layer. Each virtual machine is allocated system hardware resources such as CPU, RAM, HD and NICs, and runs designated applications within a specified operating system. If endpoint is installed on every single virtual machine, it will require a part of the allocated resources in order to function, and increased resources during peak functionality such as during system scans or booting procedures. Think about when you run a disk defragmenter on your laptop; it takes a performance hit from your system resources as it runs. An endpoint agent works just the same, and because system resources are shared across the entire physical server, the more instances of endpoint that are installed, the more system resources will be required.

Here is where simplified math kicks in. If each instance of endpoint requires 2% system resource utilization to run in idle mode, that equates to 2% per VM. Spread this across a hypervisor with 4 VMs, and the endpoint module will require 8% of total server resources when idle (2% x 4 VMs = 8%). If, during a system scan, the endpoint solution requires 20% of system resources, and you multiply that by the number of VMs resident on the hypervisor, the resource usage increases significantly; in this case to 80% (20% x 4 VMs = 80%). It is not uncommon for a large-scale virtualized environment to crash due to a large number of endpoint agents overcommitting system resources. If it doesn't crash, the performance hit alone negates the whole efficiency point of virtualization. Also keep in mind that infrastructure folks are usually measured by how their environments perform, so getting them to install anything that makes them look less than stellar is going to be an uphill battle. No wonder infrastructure teams are resistant to installing endpoint on critical infrastructure using the traditional model.

So, when the security team insists on installing endpoint security on every machine, one of three things will happen:

1) The infrastructure guys running the VMs say "sure thing boss," and no endpoint will ever touch their servers. You'd be surprised how often this occurs in production environments.

2) The infrastructure guys say, "I agree, endpoint is such a wonderful thing, but we need to utilize endpoint that is actually designed for virtual environments." I wish this happened more often.

3) Security trumps infrastructure and we see degraded performance of the virtual servers as endpoint is installed on every VM, and the risk of resource over-allocation when system scans kick in becomes higher. This is the usual response.

Luckily, thanks to paravirtualization, there is a way to keep both the security and infrastructure folks happy. With the release of APIs for virtualization platforms, a new endpoint model was created and adopted by the leading endpoint security vendors including IBM, McAfee, Symantec, and Trend Micro. This new model of securing endpoints in virtual environments not only provides the same level of security we see in traditional models, but also reduces the significant performance hit by leveraging the APIs (as illustrated in Figure D).

As you can see, there is only one red endpoint box per hypervisor (no matter how big you scale this model). This is possible through an endpoint API that plugs into your management console, and essentially cross-pollinates the server with anti-x goodness. If we go back to our last model and say that in idle mode, it takes 2% of resources, then as in Figure D, that is all it will take across the board. Not (# of VMs) x (2%). And when a scheduled scan takes place, it is only one performance hit across the entire hypervisor. This means even during a scan, it'll remain a safe 20% in our paravirtualized scenario versus crippling death to the infrastructure.
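
To put the arithmetic from the traditional and paravirtualized models side by side, here is a tiny Python sketch using the same illustrative 2% idle and 20% scan figures from above (they are examples, not measurements).

```python
# Illustrative resource cost of endpoint security on one hypervisor host.
VMS_PER_HOST = 4
IDLE_COST = 0.02   # 2% of host resources per agent when idle (illustrative)
SCAN_COST = 0.20   # 20% of host resources per agent during a scan (illustrative)

# Traditional model: one agent inside every VM, so the costs stack up per VM.
traditional_idle = IDLE_COST * VMS_PER_HOST     # 8% of the host
traditional_scan = SCAN_COST * VMS_PER_HOST     # 80% of the host

# Paravirtualized model: one security appliance per hypervisor, cost paid once.
paravirt_idle = IDLE_COST                       # 2% regardless of VM count
paravirt_scan = SCAN_COST                       # 20% regardless of VM count

print(f"Traditional:     idle {traditional_idle:.0%}, scan {traditional_scan:.0%}")
print(f"Paravirtualized: idle {paravirt_idle:.0%}, scan {paravirt_scan:.0%}")
```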

Endpoint can also be used for another nifty purpose: managing the security of individual VMs. If you have a virtual environment that uses snapshots or moves VMs between different servers, tagging the VMs through the endpoint solution just makes sense. Endpoint is clearly important in live environments that are used on a regular basis, but what happens when you forget to apply consistent, automated security policies to all VMs?

Let's take snapshots for example. It is critical for any environment to approach vulnerabilities from a proactive point of view. Patching is a necessary evil, and sadly a very reactive practice. This is why we have things like "Patch Tuesday" to dedicate time to patching the latest security holes. But what if, during the latest patching event, you bring your entire infrastructure up to par and up to speed, but something happens (absolutely unrelated to the patching) that causes a glitch in one of the VMs? The most common practice is to fire up a previous snapshot to bring the system back online. However, what if the snapshot was taken before the patch was applied? If you don't ensure that all VMs, including snapshots, are tagged to be checked for compliance, you've just potentially introduced a threat into your otherwise patched environment. You've created another hole, albeit unintentionally, that conflicts with your otherwise compliant infrastructure.

What if you decide to move that same VM (or another VM) to another location without the proper controls in place? Any vulnerability will potentially carry over to the new environment, introducing risk. While many enterprises have strict patching processes in place for private infrastructure, in a public cloud environment you also need to assume your neighbors might not be as diligent.

The best way to protect your environment against these types of liabilities is to ensure you have an automated endpoint process in place. Many of the latest endpoint solutions can do this, most notably from vendors like Trend Micro, who offer an agentless endpoint security solution. These protections stay attached to the VMs and move with them, so you have consistency no matter where a VM sits. Regardless of which solution you use, it is important to keep these things in mind when updating your security processes to include protection for virtual environments.

## Perimeter Security in Cloud

If you've managed to get to this point, you've survived perhaps the most technical portion of the book. I could go on for hundreds of pages about the ins and outs of security and virtualization, but the point of this book is to give you enough information to make informed decisions while keeping the learning curve akin to a bunny ski hill (if you want more information on security, there are some recommendations at the end of the section). The reality is, your organization is probably avoiding adopting cloud or virtualization because of some kind of security fear. Luckily, the biggest security fears can really be lumped into a few key areas: perimeter security, visibility, and access control.

The first key area of virtual/cloud security management that can help protect your environment is perimeter security devices — firewalls in particular. Traditional firewalls were designed to control the type of data that can flow between network segments and physical hardware. When the physical design of the network is removed, largely due to the collapsing of physical servers into fewer virtualized servers, this main source of security control is removed with it. In virtualized environments, it is the virtual network interfaces that allow data to move between individual VMs. This becomes a significant issue, especially when multi-tenancy is utilized, because the logical barriers protecting virtual machines become the point of network protection, not just the network around the physical server. External firewalls aren't able to control how inter-server traffic interacts, because they are designed to manage external server traffic, leaving a security gap inside the server itself. So, how do you protect inter-VM traffic when a traditional firewall cannot see traffic beyond the physical NIC of the server?

The answer is virtual firewalls. These are a new breed of firewall that uses virtualization APIs to hook into hypervisors and control traffic between virtual machines. Virtual firewalls use a per-host firewall VM for configuration and logging, while taking advantage of the hypervisor kernel to filter the network traffic. The advantage of this operational redesign is the significant reduction in lag and increased visibility into changes happening within the virtual machines and the servers themselves. The newest generations of virtual firewalls have adopted connection tables and rule sets to increase performance even further, making them a great solution to manage VM security, while avoiding some of the typical performance hits that come with other security tools.

Virtual firewalls are currently one of the few methods of ensuring traffic between VMs is controlled from a security and compliance standpoint. This extends to providing security for the movement of virtual machines between physical servers, as firewall rules can be embedded in the individual VMs and are automatically applied upon movement.
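
Conceptually, a virtual firewall is still just a rule table; it is simply evaluated against inter-VM traffic that a physical firewall never sees, and the rules can travel with the VM. The sketch below (Python, with made-up VM names and ports, not any vendor's API) shows the flavor of per-VM rules being described.

```python
# Conceptual inter-VM firewall rules; VM names and ports are illustrative only.
# Each rule: (source VM, destination VM, destination port or None for any, action)
RULES = [
    ("web-vm", "app-vm", 8080, "allow"),
    ("app-vm", "db-vm",  5432, "allow"),
    ("web-vm", "db-vm",  None, "deny"),   # the web tier may never talk to the database
]
DEFAULT_ACTION = "deny"

def evaluate(src, dst, port):
    """Return the action for traffic flowing between two VMs on the same host."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if rule_src == src and rule_dst == dst and rule_port in (None, port):
            return action
    return DEFAULT_ACTION

print(evaluate("web-vm", "app-vm", 8080))  # allow
print(evaluate("web-vm", "db-vm", 5432))   # deny: blocked even though it never left the host
```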

Another approach organizations use when looking at how to leverage firewalls to minimize internal security control requirements is the Web Application Firewall, or WAF. These firewalls sit in the cloud in front of a web application, such as an ecommerce website, and monitor the activity that flows into the application from the Internet. It can best be described as putting a gate in front of your web application: the minute it senses something fishy, it locks out the suspicious traffic. WAFs, like those from vendors such as Imperva, are currently one of the few IT security solutions that can protect environments from DDoS or SQL injection attacks, while also providing advanced Next Generation Firewall (NGFW) capabilities. While they don't provide internal protection for networks, they stop malicious traffic from entering your network, making Web Application Firewalls a key security tool in any environment. After all, it's easier to manage your environment if a big percentage of the threats never make it to the network. This is also why they are able to address specific compliance requirements found in PCI and other industry standards.

## Virtualization and Visibility

The second key area where you can make a significant difference in your virtual or cloud environment's security is visibility. Until recently, traditional security had been focused on physical networks, and solutions were designed to sit on the perimeter or in-line with the network. Intrusion Detection and Prevention Systems (IDS/IPS) were delivered through in-line solutions alongside next generation firewalls that fed Security Information and Event Managers (SIEMs), which log the traffic data and note any discrepancies based on the policies and controls the SIEM device was programmed, or tuned, to watch for. This is standard practice in all IT shops, but is sadly inefficient the minute you incorporate cloud or virtualization.

With cloud we suddenly have a huge field of abstraction, not unlike a gigantic bubble, that covers the virtualized environment. An external network security device is only good at detecting traffic until it hits the physical server. It cannot see the inner workings of virtual environments beyond any changes made to the host that would normally be detected on a traditional security device. But because the whole point of virtualization is to load a server full of VMs to maximize the ROI on infrastructure, what happens to the security policies that were applied in the physical server space when the servers are virtualized?

Intrusion Prevention Systems (IPS) were designed to protect networks from malicious traffic by sitting in-line with network traffic. Based on rules set by the administrator, the IPS looks at all traffic on the network for anything that doesn't fall in line with these rules. Unlike an Intrusion Detection System (IDS), the IPS sits in the network traffic flow (IDS devices are usually deployed off a network tap), where it can thwart attacks by terminating the user connection, blocking access to the target, or blocking access based on the user account, IP address, or other distinguishing characteristics. IPS devices can also be used to modify policies and rule sets on devices such as routers and firewalls, and to apply patches or strip properties like attachments from emails.
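
The practical difference between IDS and IPS comes down to where the device sits and what it is allowed to do with a match. This little Python sketch (the signatures and traffic are invented) runs the same detection logic in "tap" mode, which only logs like an IDS, and "inline" mode, which drops the connection like an IPS.

```python
# Toy signature matching to illustrate IDS (log only) versus IPS (block inline).
SIGNATURES = ["' OR 1=1", "<script>", "../../etc/passwd"]   # illustrative patterns

def matches_signature(payload):
    return any(sig in payload for sig in SIGNATURES)

def handle(payload, mode):
    """mode='tap' behaves like an IDS; mode='inline' behaves like an IPS."""
    if not matches_signature(payload):
        return "forwarded"
    if mode == "tap":
        print(f"ALERT (logged only): {payload!r}")
        return "forwarded"           # IDS sees a copy; the traffic already went through
    print(f"BLOCKED: {payload!r}")
    return "dropped"                 # IPS sits in-line and can terminate the flow

handle("GET /index.html", "inline")
handle("GET /login?user=' OR 1=1", "tap")      # alert, but the request still passes
handle("GET /login?user=' OR 1=1", "inline")   # dropped before it reaches the target
```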

Right now we are at the tail end of a transformation in IDS/IPS to what we call "NGIPS," or "Next Generation IPS." These devices take web application traffic into consideration; such attacks may be ignored by legacy technologies because they can masquerade as other content such as ordinary web traffic, images, and audio/video. This is why you are seeing an influx of solutions from manufacturers such as Check Point, Dell SonicWALL, IBM, Sourcefire, and so on. We are also finally beginning to see these companies leverage hypervisor APIs to develop solutions for virtualized and cloud environments that increase performance and visibility.

So, what can a paravirtualized IPS appliance do that a traditional IPS cannot? Quite simply, these virtualized IPS devices can tap into the hypervisor layer and look for abnormalities affecting not just inter-VM network traffic, but discrepancies in system usage and resource utilization. This means that, should an unauthorized event (such as the creation of a virtual NIC that connects two adjacent VMs) be detected, the event will be prevented and the information will be noted on any connected SIEM device. Suddenly there is visibility into the underlying workings of a virtual environment, something that until recently had not been possible, with the exception of standard management information that fed into the virtual platform reporting system. It is critical for any environment with security requirements to have this type of visibility into any resource (virtualized or not) that contains business critical information. But it extends past this as well. Remember the Recognizers in TRON? That's these guys, scanning the programs for troublemakers.

As cloud and virtualized environments become more distributed and shared, the ability to verify that these VMs are protected through the implementation of an IPS device (among other security controls) is paramount in not just proving to compliance auditors that your resources are protected, but to ensure from an internal visibility perspective that all network traffic and inter-VM behavior can be monitored. This will undoubtedly aid with not just analyzing the current state of your security posture, but also for forensic analysis should any security or network incident occur.

With Security Information and Event Management (SIEM) devices, the same rules apply, because traditional SIEM devices also can't see into virtualized environments. As you can imagine, this constitutes one of the biggest security problems, because any changes made within a VM environment cannot be tracked. SIEM devices have always been one of the key security tools used to manage the overall security posture of the organization. But it's not just the threat of people spinning up virtual NICs or duplicating VMs and then moving them to another server undetected that makes a SIEM critical. It also becomes indispensable when your IT team needs to figure out what happened should something affect the environment. A SIEM can act as a forensic tool for determining what caused a particular failure in the system and for verifying the root cause. In order to have proper logs that can be audited for security purposes, or just to be able to review and understand why something unexpected happened, you should have a device that can see the inner workings of the virtual environment. An example of where this comes in handy is the case where a server crashes. How can you know if it was the consequence of a faulty patch being installed, or the result of a third-party plugin causing a memory leak?
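
As a sketch of the forensic value being described, here is a minimal Python example that correlates a crash event against earlier change events collected from VM and hypervisor logs to suggest where to look first. The event formats, sources, and time window are all invented for illustration.

```python
from datetime import datetime, timedelta

# Illustrative, pre-normalized events as a SIEM might store them.
events = [
    {"time": datetime(2013, 6, 4, 1, 15), "source": "vm-12",      "type": "patch_installed", "detail": "KB-000123"},
    {"time": datetime(2013, 6, 4, 1, 40), "source": "hypervisor", "type": "vnic_created",    "detail": "vm-12 <-> vm-14"},
    {"time": datetime(2013, 6, 4, 2, 5),  "source": "vm-12",      "type": "server_crash",    "detail": "unexpected reboot"},
]

def probable_causes(crash, all_events, window_hours=2):
    """Return change events on the same source shortly before the crash."""
    start = crash["time"] - timedelta(hours=window_hours)
    return [e for e in all_events
            if e["type"] != "server_crash"
            and e["source"] == crash["source"]
            and start <= e["time"] < crash["time"]]

crash = next(e for e in events if e["type"] == "server_crash")
for cause in probable_causes(crash, events):
    print(f"{cause['time']} {cause['type']}: {cause['detail']}")
# The patch installed 50 minutes earlier is the first thing to investigate.
```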

The unique capabilities of virtual environments mean that users can create, clone and move virtual machines without being detected by external security controls. Administrators cannot see who is accessing the virtual environment and if this access poses any threat to the infrastructure. This is why SIEM and IDS/IPS show up in almost every regulatory compliance standard associated with IT security.

Luckily these technologies are starting to hit the marketplace. Some of the largest SIEM manufacturers, such as RSA, now have the capability to tap into virtual environments and report on exactly what is happening on the inside. This offers another layer of visibility beyond virtualization management tools. It also helps ensure that compliance requirements are being met, as they can map to individual standards such as PCI or HIPAA. As virtualization management tools become more sophisticated, the requirement for proper visibility will become more critical to ensure the virtual environment meets compliance standards.

There's a great whitepaper on SIEM and virtualized environments from the folks at the Cloud Security Alliance's Telecom Working Group which you can download from the CSA's website.

## Access Control and Cloud

The last key area of transformation for security as it relates to virtualization and cloud is access control. Access control can be broken down into three main issues: cloud data protection, user management, and mobile device management.

As organizations start outsourcing data to services such as Amazon and service provider environments, there is an increased urgency surrounding the security of the data that resides in these environments. Organizations are moving business-critical and privacy sensitive data off-site to take advantage of reduced infrastructure costs, and in some cases, to leverage the security postures of the cloud providers themselves.

However, because most of these providers leverage multi-tenant environments, there is an innate need to protect the databases that reside in these segregated spaces and prevent an accidental (or malicious) breach caused by a lack of security policies and the increasing sophistication of VM attacks. This brings up the question of whether an encryption or DLP solution will help reduce the potential for attacks, and who should be responsible for making sure these measures are in place to help protect these assets.

The transition of Data Loss Prevention (or DLP) from a "find unwanted data" tool to a key function for classifying data by importance is a critical first step in planning a cloud strategy. Security engineers almost have to become data specialists, in that they need to look at DLP classification as a means of ensuring these resources are only accessible by specific groups of users. It also requires a balance between over-controlling data (making it accessible to very few people) and not putting enough controls in place. If you veer towards over-securing the environment, you end up with a lot of frustrated users who are tired of seeing messages pop up asking if they need access to the data, and administrators spammed with event messages notifying them of unsuccessful attempts.

Off the top, DLP doesn't seem like anything new, but think of it in terms of cloud. Suddenly you're dealing with a huge amount of data that is only going to scale larger. You have to manually tag tons of data. Not only does this take a lot of time, but the data may not be easy to find because of the distributed environment. You also have to ensure that whatever policies you set up to classify the data can be used across the entire environment, local or offsite, to maintain consistency and avoid misalignment of policies. This is on top of dealing with data that's encrypted, compressed, mislabeled, and so on.

Is there a way to manage this without needing an army of data experts? Yes and no. There are automated tools that can perform preliminary classification of data, but these can't be applied blindly. Each organization is different, so there is a lot of fine-tuning that will have to be done on the back end to ensure the right controls are in place. A good place to start is through tools that utilize Active Directory so management structures can be used to define who has access to what.
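
Here is a minimal sketch of the "classify first, then gate by Active Directory group" approach, in Python. The classification labels, group names, and access rule are hypothetical; real DLP tools express this in their own policy languages.

```python
# Hypothetical data classifications mapped to the AD groups allowed to touch them.
ACCESS_POLICY = {
    "public":       {"All-Staff"},
    "internal":     {"All-Staff"},
    "confidential": {"Finance", "Legal", "Executives"},
    "restricted":   {"Executives"},
}

def can_access(user_groups, classification):
    """True if any of the user's AD groups is permitted for this data classification."""
    allowed = ACCESS_POLICY.get(classification, set())
    return bool(set(user_groups) & allowed)

print(can_access({"All-Staff", "Finance"}, "confidential"))  # True
print(can_access({"All-Staff"}, "restricted"))               # False: blocked and logged by the DLP tool
```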

So, while DLP will always be a pain to set up and manage, it's not going away. In fact, it's inherently critical for managing cloud environments. The best way to deal with it is to start the classification process as soon as possible, ideally before moving to distributed environments and scaling as you go. It's much easier to add new data to existing classification methods than to do it all at once. The key is to make sure the right policies are in place and they are designed to scale with cloud environments, which may contain unique characteristics that weren't considered during the first round of classification. Once these controls and policies are laid out, you can move onto the second layer of DLP — encryption. Encryption is a recommended solution to help secure cloud and virtual environments, but there are key considerations to keep in mind.

First, it is always recommended to leverage best practices for encryption key management when using any encryption or decryption product. It is imperative to obtain technology and products from proven vendors such as SafeNet and maintain your own keys, or at least use a trusted cryptographic service through a proven hosted provider.

Second, if you wish to maintain key scoping at the individual or group level, or to supplement group access through an off-the-shelf technology such as DRM (Digital Rights Management) that runs at the endpoint (such as email, hard disk, and folder encryption), it is imperative that the organization, rather than the cloud provider, maintains control over the encryption keys. This way, if something happens, you still have access to your data. It is not recommended to create proprietary encryption algorithms or to rely on outdated standards such as DES, as they can be easily broken. However, layering object security (such as SQL grant and revoke statements) is a great way to help prevent access even to the encrypted resources.
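
To make "keep your own keys" concrete, here is a small Python sketch using the widely available cryptography package: data is encrypted locally with a key the organization holds, and only ciphertext ever reaches the cloud provider. The file contents and the upload step are placeholders, and a real deployment would pair this with proper key management rather than a key sitting in memory or on disk.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The organization generates and retains this key; the cloud provider never sees it.
key = Fernet.generate_key()        # in practice: store it in an HSM or key manager
cipher = Fernet(key)

plaintext = b"Q3 forecast: do not distribute"   # placeholder data
ciphertext = cipher.encrypt(plaintext)

# upload_to_cloud(ciphertext)      # placeholder: only ciphertext leaves the building

# Later, data pulled back from the provider is useless without the local key.
assert cipher.decrypt(ciphertext) == plaintext
```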

## User Management

This brings us to the second key area of access control: user management. As end users become more adamant about using mobile or cloud based services, an opportunity arises for organizations to also leverage cloud technologies for their cost savings and business simplification benefits. Because the nature of cloud requires a migration to distributed systems, often spread across multiple locations, federated identity policies and solutions will be one of the first critical steps an organization has to integrate into its cloud roadmap. This is why we are seeing an increased focus from vendors, including Microsoft and IBM, on simplifying complex user management and authentication systems.

With more organizations starting to move internal services to cloud and web based portals, the complexity of managing employee login credentials (from both the IT administrators and end users point of view) increases. The natural reflex for users is to create simplified passwords for the different systems or save them in easily accessible places. Unfortunately, this ends up causing more work for administrators, as the need increases to manage requests for password resets and to maintain the individual credential systems. This is why if you Google "Single Sign-On," every single security and IT manufacturer suddenly seems to have a solution.

Single sign-on (SSO) is one of those funny technologies that seems to waver in popularity depending on network design trends. Identity management itself has become a significant trend in cloud adoption, as resources become more spread out across multiple servers and locations. Users who require multiple passwords to access cloud or virtualized services can quickly cause security issues by simplifying passwords (making them easy to guess or crack), increasing the workload on support teams required to perform password resets, and creating disparate credential silos that sit apart from other processes. As the transition to cloud services becomes a higher priority for organizations, the consolidation of user identities through the adoption of SSO and cloud based authentication will increase. Each user simply logs in once, and can access tools across multiple systems. If you plan on taking advantage of cloud and virtual environments, SSO is going to be one of the most important tools in your IT arsenal.

Another option to manage user access is through cloud based authentication services. These solutions are meant to streamline the standard two-factor and PKI (Public Key Infrastructure) authentication solutions such as physical tokens, while additionally offering benefits in terms of simplified management and reduced costs. In a cloud authentication solution, the private seed key (the originating module that generates individual authentication codes) resides on a physical appliance in the vendor's cloud. This appliance is then connected to a resident VM (hopefully in the same physical location), which runs the host management software. The administrator then connects to the management server through a web portal, which grants access to the stock of certificate licenses assigned to the account. From the web portal, the administrator can sync up the licenses for deployment through standard methods, including LDAP.

End users receive an email inviting them to sign up for their token, which can be pushed out to any electronic device including desktop, laptop, tablet, and mobile (including using "out of band" services) through SMS and email. This means there are no longer physical tokens requiring inventory management or bulk upfront purchases (these cloud-based models often allow flexible "per use" pricing), and end users no longer have to keep track of a hard token. End users can also handle password resets and token processes through the web portal on a self-serve basis, reducing the workload of IT.

## Mobility and BYOD

The third main area of Access Control that must be addressed with cloud and virtualization is mobile device access. It's almost impossible to avoid the conversation of whether corporations should allow tablets and other mobile devices on the corporate network. The whole trend of Bring Your Own Device (BYOD) is one of the biggest forces behind the shift to design networks that support these technologies. While managers tout the benefits of a mobile workforce and the flexibility of connecting to resources from anywhere, security engineers are worried about security risks and the increased number of unsecured hot spots generated by mobile devices (not to mention the HR implications of bypassing acceptable-use policies that traditional network restrictions put in place).

As we see the increased adoption of tablet and mobile devices in the workplace, the issue of security becomes increasingly important; the prevalence of tablets and mobile phones on corporate networks naturally raises questions such as "how will this affect my security posture?" or "what kind of access should we allow?" and more importantly, "how do we control it?" There is no clear-cut answer to these questions. But there are things you can do to help reduce the risks of mobile device usage on your corporate network.

The mass adoption of smartphones and tablets has not just resulted in an increase in Wi-Fi bandwidth and management requirements, but in a universal expectation of access to corporate assets from multiple locations. This means there will be a strong focus on managing these connections and their effects on Wi-Fi networks. The key issue is that, if these devices are not authorized to connect to the in-house corporate Wi-Fi, users often generate Wi-Fi hotspots using cellular networks and access corporate resources while bypassing traditional network policies. This increases the risk of data loss and leakage, and also creates gaping holes in the security controls put in place to protect corporate assets.

When it comes to Wi-Fi, the more rogue devices and interference from mobile hot spots, the slower and more cumbersome the network will be, considering most organizations' Wi-Fi networks are built to allow laptop connections where the LAN is not accessible, not to support a large number of mobile devices. As a result, several security vendors have designed solutions to mitigate wireless interference, shift traffic onto other frequencies to help alleviate Wi-Fi stress, and even allow for the creation of policies based on MAC address or browser profiles. Most of the leading firewall vendors also have the ability to create secure VPN tunnels over Wi-Fi and manage credentials, including limiting high-volume traffic such as YouTube. It is important to remember that this traffic is subject to the same security threats of malware, DDoS attacks, intrusions and viruses. To help reduce threats, many IT folks leverage a few tricks, such as disabling the SSID broadcast, disabling the DHCP server, or setting the device user limit to one to avoid unauthorized connections. Regardless of which method is used, it is important to ensure that if these devices connect to the corporate network, they are protected by a firewall and anti-x installed on the host device.

In 2012 and 2013, we saw even more sophisticated Mobile Device Access Control (MDAC) solutions available. These solutions now allow IT security to control the type of devices, services and bandwidth used while allowing the enforcement of security policies as they relate to browsing and applications.

Authorized mobile devices will require the application of corporate security policies through the use of VPN applications and Wi-Fi session encryption. Employee adoption of mobile devices and the resulting increase in passwords must also be taken into consideration when transitioning to cloud. The thing to keep in mind is that the larger the disruption an employee faces, the more likely they are to adopt unsecured methods to access corporate resources. This means IT security should ensure the use of mobile and tablet devices is addressed in a way that allows employees to utilize them without creating threats to the corporate network.

## Security Testing in Virtualized and Cloud Environments

When it comes to securing a cloud environment, one of the most difficult challenges for security professionals surrounds the increased complexity of domain ownership, especially as it relates to penetration testing. The reason for this is that, in a private cloud environment, security teams generally maintain control of the infrastructure from a policy standpoint and can perform various security processes without requiring the intervention of outside resources. Once the infrastructure is moved off-site to a hosted model such as IaaS, PaaS or SaaS, suddenly the provider becomes an extension of your IT team and part of the security equation. This is just the beginning of the effect cloud has on vulnerability and penetration testing.

If your organization needs to perform penetration (pen) tests (a process that identifies vulnerabilities in your network security posture by essentially hacking the environment), there are many factors that will influence how complex the process is going to be. First, the SLA between you and the cloud provider will have an undeniable effect. Depending on whether your SLA includes a clause for penetration testing, you will have varying levels of control over the process. You should ask whether the SLA allows for internal resources to provide the pen test (assuming you have the internal resources) or if it must be done by a resource designated by the provider. In addition, the SLA should stipulate what domains you have control over for testing, and what specific vulnerabilities can be tested (some types of tests may affect the overall compliance state of the infrastructure and are not allowed). These factors must be measured against the goals of your security team. If it doesn't meet the requirements, there is no sense performing the test.

The second key issue (assuming you get permission from the cloud provider to perform the pen test) involves the kind of service to which your organization subscribes. This will define exactly what you can and cannot test. In IaaS models, you have significantly more control over the environment and the types of tests you can perform, but in a SaaS environment you are unlikely to test anything outside the actual application. In IaaS environments, it's recommended you review any documents from the provider that relate to security policies and network architecture design. If you are relying on the provider for the pen tests, ask them to perform a vulnerability scan on your portion of the infrastructure. Keep in mind that in an IaaS model, you are responsible for most of the security.

If you are using SaaS, the provider will have most of the control over security policies, but you should ask to have penetration testing performed on the web layer to make sure there are no web application vulnerabilities, and that they check for these regularly or use a Web Application Firewall to protect your resources. It is also important for any type of service model to ask the provider what kind of testing they do (black, grey or white box), what they test for (Top 10 or Top 20 OWASP vulnerabilities), and what kind of reports they provide.

If you plan to perform the tests yourself, you will need to work with the provider to ensure any tools and processes you use do not affect the integrity of other customers who might share the provider's infrastructure (and make sure the same can be said about other tests conducted for other customers). Standard tests that might work for an internal environment can wreak havoc on cloud environments, as they may bypass some of the security controls in place to protect the individual virtual environments. It is both the customer's and provider's responsibility to ensure that if these tests are to be performed, they are agreed upon in advance.

## Cloud Security Resources

As the adoption of cloud and virtualization becomes commonplace, the scope of cloud business transformation will expand from service adoption (IaaS, SaaS and PaaS) to a plan that integrates all aspects of cloud adoption. This means focusing on how operational units of the organization are affected by the transition to a virtualized or cloud environment. It will move from being an infrastructure initiative to a plan that includes, at a minimum, security and operations. From this shift, the importance of education and the adoption of new technologies and processes will become more prominent. In particular, a business review of compliance, privacy, and governance as they relate to cloud (and additionally, how to ensure the increased adoption of mobile devices is managed from an operations and security perspective) will be paramount.

The methods previously used to secure these environments must now be adapted to meet the unique characteristics of cloud and virtualization. Organizations need to adopt technologies that leverage paravirtualization to gain visibility into all layers of the virtual infrastructure, and adjust security and governance policies to take into account the risks associated with these environments. Security teams need to start working more closely with infrastructure teams to ensure technologies meet the needs of the security team but do not negate the benefits of virtualization for the infrastructure teams.

Cloud and virtualization are disruptive for any organization. However, the benefits of adoption far outweigh the risks. The key to a successful transition is to have security and infrastructure teams involved in the infrastructural transformation so the objectives of all parties are met.

There are some great resources and certifications in place to support this new era of IT security. One of the biggest proponents of this movement is the Cloud Security Alliance (CSA), which was the first to standardize a new certification: the Certificate of Cloud Security Knowledge (CCSK). Not unlike other similar certifications, the goal of the CCSK is to give IT specialists the knowledge they need to help secure their infrastructures against these new forms of vulnerabilities, while bringing them up to speed on how virtualization has created gaps in traditional network security. It wasn't long ago that virtualization was mischaracterized as posing no real security threats. Cloud has shown everyone that there is indeed significant risk in adopting these models.

## Big Data and Security

To conclude the topic of security, I want to introduce one of the biggest advancements we will see in the coming years. Security vendors are beginning to leverage Big Data as a means of providing more insight into the threat landscape and powering security intelligence. With the ability to process more information at faster speeds, Big Data is going to be central to developing next-generation security tools in several areas.

First, Big Data is going to be vital in powering security analytics, especially in SIEM and log management. The amount of information currently collected by these devices for analysis, investigation and forensics is already bordering on terabytes and it will only increase. Current tools will not be able to keep up with the increase in workload unless they take advantage of the processing power Big Data provides.
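As a toy illustration of the kind of aggregation work these tools do, the Python sketch below streams an authentication log and flags accounts with large numbers of failed logins. The log format and threshold are invented; a real SIEM does this continuously, across terabytes of events from many sources.

```python
# Toy illustration of the aggregation a SIEM performs at much larger scale.
# The log format ("timestamp user result") and the threshold are invented.
from collections import Counter

FAILED_LOGIN_THRESHOLD = 50

def suspicious_accounts(log_path: str) -> list[str]:
    failures = Counter()
    with open(log_path) as log:
        for line in log:               # stream, don't load terabytes into memory
            _, user, result = line.split()[:3]
            if result == "FAILED":
                failures[user] += 1
    return [u for u, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD]

# print(suspicious_accounts("auth_events.log"))
```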

Security vendors have already started realizing the need for Big Data. RSA with enVision, HP with ArcSight, McAfee with NitroSecurity and IBM with QRadar are examples of how important these vendors anticipate Big Data will be in powering future security intelligence tools. The amount of data collected globally every day regarding threats is staggering. Until now, vendors have focused on providing insight into the more common risks, feeding OWASP Top 10 or Top 20 lists. However, Big Data promises even more insight and flexibility in making connections between events, so we will now see the groundwork that precedes big attacks. The more information we have regarding how these risks develop, the more security controls can be created, and hopefully, the more proactive government organizations will be in stopping these attacks from happening. Let's not kid ourselves: the next wave of terrorism will be linked to cyber warfare, and this is where Big Data will be our greatest asset.

Big Data will also be key in powering the next generation of dashboards required to manage dispersed and abstracted cloud environments. Right now it seems we can't get enough information to manage our virtual and cloud environments effectively; soon we will have too much. If we can't properly manage and analyze all this new data, it will cripple our infrastructure and data stores. We have to balance the ability to use this data to understand our environment against the risk of data overload. Our existing tools don't provide the right analytics, so there will be a great need for vendors who can create analytic tools that give us better insight. Companies like Splunk and LogRhythm are great examples of a next generation of vendors striving to provide the right analytics to power security.

The biggest hurdle in taking advantage of Big Data in the security realm will be the need for analytic and Big Data specialists: people who can properly manage data, tag it, catalogue it, and most importantly, understand it. The new IT departments will be focused on managing information and streamlining infrastructure to make the organization more nimble, while protecting an increasingly complex environment from threats that evolve faster than the tools to protect them. We need a team that can quickly manipulate the data as it comes in so they can make smart decisions, especially when the organization is under attack.

Resources:

Cloud Security Alliance

https://cloudsecurityalliance.org

SANS Institute

http://www.sans.org/course/virtualization-security-fundamentals

The Virtualization Practice

http://www.virtualizationpractice.com/resources/virtualization-security-podcast/

**Virtualization Security: Protecting Virtualized Environments** -Dave Shackleford

**VMware vSphere and Virtual Infrastructure Security: Securing the Virtual Environment** -Edward Haletky

# Compliance & Other Things that go Bump in the Night

If you ask most IT folks, particularly security folks, what keeps them up at night regarding their virtual or cloud environment, the answer is likely to be compliance. This is especially true if your organization deals with data containing sensitive information subject to compliance requirements from sectors such as government or healthcare, or from the Payment Card Industry (PCI). Many organizations are required to prove compliance in order to show they are using industry-standard practices for managing their IT environments (usually as it relates to security), and in particular the data within them. The penalty for not meeting compliance varies, but in the case of PCI, it can affect your organization's ability to collect electronic payments through your physical or online stores.

Because compliance audits are one of the first glimpses many organizations have into the need to secure virtual environments, the learning curve for both auditors and IT teams is staggering. There is a lot of grey space at the moment, where interpretation of requirements can have a significant impact on the audit, and auditors in over their heads walk into an unfamiliar, mixed environment where virtual and physical resources co-exist. Even something that is straightforward in a physical environment can actually be complex to understand and make compliant in a virtualized environment. So what do you do?

Before you bring a Qualified Security Assessor (QSA) into your environment, it's very important to ensure that the QSA is fully versed in how to audit your infrastructure, particularly your virtual environment. Virtualization is a newer technology for a lot of auditors, so if they don't know what to look for, or don't understand what they need to evaluate, the result will probably be lots of money spent, requirements missed, and penalties skyrocketing. Auditors are there to help your organization meet compliance requirements and better your security posture, so if they can't help you due to a lack of knowledge, it defeats the entire purpose.

The problem facing auditors (as it relates to virtual environments) is that they are often walking into a mixed-mode infrastructure where you have both physical servers and traditional security controls, and virtualized servers that may not even be on the same site. On top of this, you may have a variety of different environments running Citrix XenServer, Microsoft Hyper-V, RedHat KVM and maybe even VMware vSphere, not to mention all the networking, resources and storage attached to them. The complexity puts a spin on how to audit these environments properly, based on where each process lies in relation to the compliance standard.

Because data flows within both the VMs and any connected monitoring or management tools, you need to ensure that your auditor understands how these factors apply to compliance. PCI is going to be one of the first key compliance requirements to drive security adoption for virtual environments, so it is imperative that auditors and security teams understand exactly what the goals of these requirements are and how to properly enforce them.

## How Cloud and Virtualization Affect Security

There are a few things you can do from a compliance perspective to help reduce the pain of audits. I'll speak mainly to PCI, but these principles address overall best practices and are a great way to get a jump-start on the path to compliance. If you look at several compliance requirements across different industries, there are a lot of commonalities.

First things first, make sure you know what data you are planning to move to the cloud. If you plan on moving data that has PCI (or any other sensitive information) implications, you're really just ensuring that your entire cloud environment is going to be in scope of the audit. I'm not saying you shouldn't do it, but if you aren't using a private cloud (that is, only your VMs run on a single hypervisor; no other external party shares it) things could get ugly when you start treading into the gray zones of potential security risks. Try to simplify it so all PCI data is in one place and hopefully separated from anything else to prevent scope creep.

Second, separate your systems and networks, and protect them accordingly. Assume other VMs on the same hardware are a threat, and deploy your firewall as well as IPS/IDS to protect each of your VMs separately, especially in a public environment where access is available through the Internet. Endpoint tagging can also be a useful way to embed security profiles within the VM. This will help ensure that if you ever need to move data around (even to another cloud provider) you can take the same security controls with you.
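As a rough sketch of the idea, the snippet below embeds a small security profile in a VM definition as tags and has deployment tooling derive deny-by-default firewall rules from them. The tag names and rule format are hypothetical, not any provider's actual metadata API.

```python
# Hypothetical sketch: security posture travels with the VM as metadata tags,
# and the deployment tooling turns those tags into per-VM firewall rules.
vm_definition = {
    "name": "cardholder-db-01",
    "tags": {
        "data_classification": "pci",
        "allowed_ingress": ["10.10.20.0/24:5432"],  # app tier only, DB port
        "ids_profile": "strict",
    },
}

def firewall_rules_for(vm: dict) -> list[str]:
    """Derive deny-by-default rules from the VM's embedded security tags."""
    rules = ["default deny inbound"]
    for source in vm["tags"].get("allowed_ingress", []):
        cidr, port = source.split(":")
        rules.append(f"allow tcp from {cidr} to {vm['name']} port {port}")
    return rules
```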

Monitoring is the next key goal. Virtual environments are a pain to monitor because they are so dynamic. VMs are created, copied, moved, restarted, shut down and just about everything in between. You need to ensure you have a proper monitoring and logging solution in place. If you outsource your cloud environment, most providers should offer this as a service (if not, include it!). In addition, make sure that the audit trails and logging information have tight user access restrictions in place and that they cannot be altered. It is not uncommon for auditors to ask for proof of logs.
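One simple way to make an audit trail tamper-evident is to chain each entry to the previous one with a hash, so any retroactive edit breaks every later link. The sketch below shows the idea in Python; commercial logging and SIEM products implement this far more robustly.

```python
# Sketch of a hash-chained (tamper-evident) audit trail. Any edit to an earlier
# entry changes its hash and breaks every subsequent link in the chain.
import hashlib, json, time

def append_audit_entry(entries: list[dict], event: str, user: str) -> None:
    prev_hash = entries[-1]["hash"] if entries else "0" * 64
    record = {"ts": time.time(), "user": user, "event": event, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    entries.append(record)

def chain_is_intact(entries: list[dict]) -> bool:
    prev = "0" * 64
    for e in entries:
        body = {k: e[k] for k in ("ts", "user", "event", "prev")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True
```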

Speaking of users, managing users in virtual and cloud environments is critical. It is easy to overlook entitlements and put yourself in a situation where, suddenly, any user has access to your sensitive data. Server sprawl, where VMs are moved to other servers (known or unknown), and public cloud environments just make things worse, so start with a least-privilege approach and work your way up. Also make sure that if you are using multiple locations, these privileges extend consistently across all of them.
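A least-privilege model is easiest to keep honest when entitlements start empty, every grant is explicit, and the same definition is applied to every location so nothing drifts. A minimal sketch, with hypothetical roles, tables and regions:

```python
# Sketch: define entitlements once, starting from least privilege, then apply
# the same definition to every location so nothing drifts. Names are hypothetical.
ENTITLEMENTS = {
    "reporting_role": {"orders": ["SELECT"]},
    "billing_role":   {"orders": ["SELECT"], "invoices": ["SELECT", "INSERT"]},
    # nobody gets blanket access to cardholder data; that must be added deliberately
}

REGIONS = ["on-prem-dc", "cloud-east", "cloud-west"]

def apply_entitlements(apply_grant) -> None:
    """apply_grant(region, role, table, privileges) is whatever mechanism the
    platform provides (SQL grants, IAM policies, a provider API)."""
    for region in REGIONS:
        for role, tables in ENTITLEMENTS.items():
            for table, privileges in tables.items():
                apply_grant(region, role, table, privileges)
```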

The last point I am going to touch upon is that, if you are planning on outsourcing your cloud environment, work with a provider that can offer PCI compliant VM images. If you start with an image that already has the right controls in place, it's going to be a lot easier than auditing every single image every time you create a new VM. If you plan on using this approach, your provider must be able to prove audit compliance for these images.

## Virtualization and Forensics

Forensics, as it relates to data, has always been a tricky area for investigators. As technology becomes more sophisticated, it has become harder to ensure the availability of information in forensic investigations. The introduction of sophisticated web attacks has only made it harder to accurately pinpoint attacks, and cloud has thrown another wrench into the whole thing. In fact, forensics for cloud and virtualized environments has become one of those subjects no one wants to address because, honestly, there are a lot of problems.

A key issue that complicates forensic investigations in virtual or cloud environments is that the data in question can be in any of three main states at any given time: at rest, in motion or in application/use. Data at rest is understandably the easiest of the three to access, as by nature it occupies disk space that can be accessed even if the data is deleted (provided that the space has not been re-written or allocated by some other means).

Data in motion is trickier, especially in virtual and cloud environments, as they bend the basic rules of data in motion; traditionally, when data is transferred from one place to another it leaves a trail on systems and network devices. Cloud makes this trickier because data can move across multiple servers and geographic regions, either sporadically or as a result of regular IT activities such as load balancing. Proving where the resources were at any given time becomes problematic when the variables include the ability to be in two places at once, or nowhere at all if the VM is shut down.

Finally, if the data is being used by an application, or is an application in some form, it's really not at rest or in motion: it's being executed. Taking a snapshot of the system state is the only way to catch the data in use. Snapshots are perhaps the best tools for any forensics investigation because they provide a copy of the state of the machine that was running. This means a snapshot can be taken and used for investigations while the virtual machine keeps running. Unfortunately, taking these snapshots requires access to the infrastructure, which may not be part of the cloud service (such as in SaaS or PaaS models). If you plan to use snapshots, make sure your configuration isn't set to overwrite snapshots but to archive them, so you can access different snapshots depending on your needs. It's the same concept as saving a game: if you screw something up, it's always nice to be able to go back and change the situation.
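Where you do control the hypervisor (a private cloud or IaaS scenario), capturing a point-in-time snapshot of a running VM is usually a single call. The sketch below uses the libvirt Python bindings as one example; the connection URI, VM name and snapshot details are placeholders, and your hypervisor's tooling may differ.

```python
# Sketch: preserve a point-in-time snapshot of a running VM for investigation,
# without stopping it. Assumes the libvirt Python bindings; names are placeholders.
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>forensics-2013-04-02</name>
  <description>Preserved state for incident #1234</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")        # a hypervisor you actually control
domain = conn.lookupByName("suspect-vm-01")  # hypothetical VM name
snapshot = domain.snapshotCreateXML(SNAPSHOT_XML, 0)
print("Preserved:", snapshot.getName())
conn.close()
```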

The real hindrance to forensics with the movement to cloud models is the involvement of the cloud service provider. Involving cloud providers and cloud infrastructure in itself means there will be a loss of control on the part of the investigator. Traditionally the investigators were able to reconstruct scenarios and test hypotheses, but in a dynamic environment like cloud, this is no longer possible. There are simply too many variables involved, and many of them arise from the cloud service provider's SLA.

Cloud and virtualization are fairly new models for business (yes, you can argue that virtualization has been around for quite some time, but I'm talking about it purely in a cloud model). Forensic investigators have to rely on the cloud provider to identify and collect relevant data that either supports or refutes their hypotheses. But very few SLAs outline in detail what security measures are in place in the cloud environment and thus what data is accessible. Are there proper logging controls in place? How long are logs kept on record, and who can vouch for their integrity? There's a good chance that, until security tools for virtualized environments become more sophisticated, there will be gaps in the information available. If a provider says that full visibility and auditing are available, how can a customer verify this beyond accepting a sentence about it in the SLA?

Forensics in cloud and virtualized environments is going to be a challenge until we have global standards. Cloud itself is still in the very early stages of developing standards and controls, and due to the complexity of trying to balance best practices without hindering the benefits of cloud, it will take time.

## Disaster Recovery, Cloud Style

So you've managed to appease the auditors by putting in the right controls. What happens if it all goes wrong? When it comes to dipping your toes in the cloud pool, one of the first services organizations adopt involves Disaster Recovery. Disaster Recovery is one of those services that highlight the reasons cloud isn't going anywhere, but it also puts the complicated learning curve of cloud in the spotlight.

Disaster Recovery (DR) is becoming critical, mostly in the SMB market where IT resources are limited. A full-fledged DR plan that leverages the cloud is a flexible alternative to hosting a full fail-over site, and also comes with an attractive usage-based model (remember, the whole point of DR is that you don't want to have to use it, except in emergencies). This means that, aside from having an OPEX-based DR plan, it reduces the need for datacenter space, infrastructure and IT resources to manage it, all of which help smaller (and larger) organizations save costs and gain access to leading-edge technologies that would normally be within the financial reach of only larger organizations. The conversation suddenly moves from datacenter space planning to cloud capacity planning.

Like any modern service, there are concerns surrounding the adoption of a cloud-based disaster recovery plan. For starters, the security around a cloud service becomes increasingly important as the resources become more business-critical. Is the provider able to prove they meet regulatory requirements? How do they control access to resources and ensure data is being transferred and stored in a secure manner? Do you have the right bandwidth and network resources to enable users to access resources in case of a disaster, while at the same time allowing your internal teams to access data and get internal systems up and running again? Do you have the right in-house expertise to manage the restoration of systems? Most importantly, how long will it take?

When deciding which DR service to subscribe to, it really comes down to a single question: if something happens to take my entire system down, can I trust my provider to help me get up and running before significant damage is done? Choosing a DR partner based on cost alone is probably not going to be the wisest decision.

Reliability, availability and the ability to keep users up and running during a disaster are the most important benchmarks a provider will be measured against. The wrong decision can cripple an organization, cause significant damage to its reputation and put the key stakeholders in hot water.

But it isn't as easy as signing a DR service contract to protect the organization in the case of an emergency. Every organization is different; the types of applications, data and users are always unique. This means each DR plan needs to be written specifically for each individual customer, and it needs to include key processes that address and prioritize applications, data and services. Finally, it should furnish a time window during which things can be brought online before significant organizational impact is felt. No cookie-cutter SLAs here. By prioritizing these resources, the customer and the DR provider can create recovery time objectives for each resource that will be included in the disaster recovery plan. You also don't want to be rushing an SLA through legal when you are in the middle of a DR crisis, so planning in advance is highly suggested.
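Even before the legal language is settled, it helps to capture that prioritization in a form everyone can review. A minimal sketch, with made-up systems and recovery time objectives:

```python
# Minimal sketch of a DR priority register: every key system, its recovery time
# objective, and the DR method assigned to it. Systems and numbers are made up.
DR_PLAN = [
    {"system": "order-processing", "rto_hours": 1,  "method": "continuous replication"},
    {"system": "email",            "rto_hours": 4,  "method": "restore to cloud VMs"},
    {"system": "file archive",     "rto_hours": 48, "method": "restore from cloud backup"},
]

def recovery_order(plan: list[dict]) -> list[str]:
    """Most urgent systems (lowest RTO) are restored first."""
    return [entry["system"] for entry in sorted(plan, key=lambda e: e["rto_hours"])]
```

The point is not the code but the discipline: every key system appears, has an agreed recovery time objective, and maps to a specific DR method.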

As the customer, it is your responsibility to ensure all key applications and resources are covered under this plan, to reduce operational impact in the event of an emergency. It is recommended that you also perform regular testing and reviews of the plan to ensure any changes to the business resources are captured in the document to avoid any oversights as new systems are brought online. Once recovery time objectives are established, you'll have a better idea of what types of services should be leveraged as they relate to DR. Don't assume that only one type of DR method will be used for all resources, as the critical nature of the application or resource will influence how they are grouped and how DR methods are assigned. So, what kinds of DR services exist?

## Cloud Replication

The first DR service, and one of the more common entry-level types, is replication to a VM. This is a cloud-resident service that can be used for cloud-VM-to-cloud-VM or on-site-to-cloud-VM data protection. This type of service is based on continuous data protection solutions such as EMC Atmos, and is best suited for applications that have strict recovery time and recovery point requirements, because it offers application awareness and can be used to protect both on-site and cloud production instances.

The second style of DR offering is back-up and restore to the cloud. This utilizes both cloud storage and cloud computing resources to allow organizations to restore data to virtual machines located in the cloud and run the environment remotely until the on-premises infrastructure is restored. It can also be used for pre-staging, where the restores are continually updated on a scheduled basis, to ensure recovery time objectives are met. In some cases, the enabling of the cloud infrastructure is included as part of the DR plan.

Similar to this offering are backup-and-restore-from-the-cloud services. As you can imagine, this service uses the cloud to hold replicated data that is restored to on-premises infrastructure in the case of a disaster. It is an evolution of the tape backup services that are common IT practice. The key to ensuring this service meets your objectives is to make sure your IT team understands the backup and restore processes, to avoid any potential issue with the restoration of services. You should also look for services that offer compression or data de-duplication and encrypted movement of data, alongside customized retention options, to ensure the service works with your network and security requirements. Bandwidth will be a large concern during the restoration stage, and if you are pushing terabytes of data through an un-optimized network pipe, you run the risk of not meeting your recovery time objectives. This is where many service providers will offer additional services such as Wide Area Network (WAN) Optimization to help transfer your data faster.
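A quick back-of-the-envelope calculation makes the bandwidth point concrete; the figures below are purely illustrative.

```python
# Back-of-the-envelope restore-time check: will the pipe meet the RTO?
# All numbers are illustrative.
data_to_restore_tb = 5
link_mbps = 200                 # effective throughput of the un-optimized pipe
rto_hours = 24

data_bits = data_to_restore_tb * 8 * 10**12
restore_hours = data_bits / (link_mbps * 10**6) / 3600
print(f"Estimated restore time: {restore_hours:.1f} hours (RTO is {rto_hours})")
# Roughly 55 hours at 200 Mbps, well past a 24-hour RTO, which is exactly where
# WAN optimization, de-duplication or shipping seed drives come into play.
```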

The last major DR solution is the fully managed DR model, an increasingly popular option for organizations that don't want to deal with the risks and processes associated with DR. In this model, the customer uses a managed service provider to run both the production and the backup instances. The benefit of this model is that the customer gets the cost savings associated with a cloud infrastructure and the usage-based pricing of the DR solution. The service provider handles everything from beginning to end, including ensuring recovery time objectives are met. The risk in this case is pretty obvious: the service provider is responsible for everything, so it is absolutely critical that the SLA is reviewed very carefully and critical business assets are identified with the appropriate recovery time objectives assigned. If not, should a disaster occur, the customer will be at risk while the provider says "it wasn't part of the SLA." Should an emergency occur, the SLA will ultimately determine whether you have access to your applications and resources.

There are clearly many different options for DR that leverage the cloud. The cost savings and technological benefits can be huge for SMBs that do not have the in-house expertise to provide DR services, or for larger organizations that want to ensure they have a "Plan B," should something happen to their environment. Whatever plan you decide to implement, the key is to test routinely and provide correct training to all users in advance, so as to avoid any complications in the event of a disaster.

## Outsourcing Security

The 2012 Information Security Breaches Survey conducted by PricewaterhouseCoopers revealed that almost 73% of organizations in Europe are using some kind of outsourced service, but only about 38% of large organizations ensure this data is being encrypted. Even more frightening is the fact that 56% of SMBs don't do any verification of security services for their data, relying instead on contracts and contingency plans with their provider.

Many might argue that the type of data being hosted isn't necessarily the most business-critical or sensitive (which sadly, is not always the case). The fact that so few organizations are doing due diligence is a red flag. Although there are tons of great benefits from outsourcing your cloud services to a large provider who has security controls in place (and a wide selection of smaller providers who offer additional security services to guarantee compliance), it still requires that you do your homework.

Think about the commonly used cloud services (website, email, payment services) used by some of the largest organizations in the world. Even if we assume that basic security is in place, it still leaves a huge gap as it relates to contingency. Many SLAs do not specify what happens to data if there is a breach, or when data must be moved in failover or load-balancing situations. Luckily, the new Cloud ISO/IEC 27017 standard will address how these factors affect the compliance of organizations across the world and how to manage them, including the legal nightmares around compliance implications such as the Patriot Act.

The other key thing is to ask your service provider how they deal with such issues, not just high-level network or security issues, but down to personnel and escalation and notification procedures. Is the provider required to tell you there has been an issue?

It really comes down to remembering that, if you trust the provider with your data from a security and contingency standpoint, it's probably a good idea to see how they handle their own internal policies. A provider that has a solid contingency and security plan in place, and can show proof of stress-testing for these scenarios, is a lot easier to trust with your data than one that doesn't.

Compliance, privacy and governance will be the first areas of review for organizations starting to transition to a cloud or virtualized environment. Because virtualization has traditionally been an internal infrastructure issue with few ties to larger organizational governance policies (with the exception of security), organizations must quickly adapt to the increasing regulatory issues that cloud brings. Additionally, with the adoption of cloud environments, where the resources are located in a third-party cloud provider's physical environment, control over policies is now shared with the provider. As pressure from governance agencies on cloud environments increases, most notably with the evolution of PCI DSS to include provisions for virtual environments, organizations will have to ensure that all policies and SLAs address these requirements. This means ensuring that internal security teams implement the right solutions to secure the virtualized or cloud environment, and that all SLAs with service providers clearly outline responsibilities as they relate to security, management, litigation (in the event of a breach) and recovery. It is also critical that, if multi-tenancy is used, privacy and compliance requirements are met to ensure separation of customer data.

For more information on Compliance, some suggested places to start are:

Payment Card Industry Guidelines for Virtual Environments

https://www.pcisecuritystandards.org/documents/Virtualization_InfoSupp_v2.pdf

VMware Compliance Center

http://www.vmware.com/technical-resources/security/compliance/resources.html

ISO/ IEC 27017 Standards

http://www.iso27001security.com/html/27017.html

Virtual Forensics: A Discussion of Virtual Machines Related to Forensics Analysis by Brett Shavers

http://www.forensicfocus.com/downloads/virtual-machines-forensics-analysis.pdf

International Standards Page of the Cloud Security Alliance

https://cloudsecurityalliance.org/isc/

PWC Information Security Breaches Survey 2012

 http://www.pwc.co.uk/audit-assurance/publications/uk-information-security-breaches-survey-results-2012.jhtml

# Getting Started with Cloud

At this point you probably think you know more about cloud than you ever wanted to. Here is where we start putting all that learning to use. My aim with this section is to give you inspiration for introducing cloud into your organization. Cloud is not something you just do; it's a huge challenge, and it takes longer than most organizations initially plan.

While many organizations have virtualized a good chunk of their infrastructure, when it comes to migrating applications to a cloud environment the numbers are pretty slim. This book was written in the early cloud adoption stages; those numbers are expected to increase moderately in the next few years.

It's understandable if the prospect of cloud transforming your organization's IT and overall business strategy leaves you feeling a bit intimidated. As I've said before, no organization in its right mind would suddenly go from pre- to post-cloud adoption overnight. Sneaking projects in as the need arises is the best way to tackle this transformation, and a great way to start to see how cloud ROI is going to benefit your organization.

So, what projects are good starting points for cloud migration? Let's find out.

## Application Virtualization

Application virtualization offers a less intimidating introduction to cloud methodology, and is a great starting point, as it will help alleviate common IT frustrations. One of the biggest headaches in the IT process is the ongoing maintenance of legacy systems. But let's be honest: moving legacy systems to cloud environments involves a lot of legwork, and when you start looking at mission-critical systems, it really gets complicated. Not only do you have to customize and troubleshoot every application, you may have applications that cannot be taken down at any time. What do you do?

The best way to get through this headache is to start with less critical systems, or with SaaS offerings such as SalesForce.com and Microsoft's Office 365. These are great projects because they can show significant ROI and are relatively uncomplicated to undertake. The trick with these types of migrations is to get as many team members together as possible and map the systems that might be co-dependent. This will help ensure the migration takes into account any cross-database functionality that would suffer from an interruption.

I mentioned previously the importance of DevOps teams and projects led by IT. This is where it rings true. IT knows where these systems are and which common issues could be remedied. If you leverage the DevOps methodology, you can build special project teams with members from all related departments. You'll also need to get the marketing and sales departments involved, as they are often the users of these systems and can provide valuable information. After all, if you are going to find a new solution for them to use, why not have them involved in the project itself?

If you keep each application as a separate project, which I anticipate is how most of you would tackle it, it'll be more manageable. These projects are best confined to independent applications used for a single primary purpose. The minute you start looking at applications that perform several functions, such as CRM applications, you're introducing more complexity. The key here is to keep things simple so you can quickly virtualize applications that will provide benefits, without undertaking a huge project that takes a long time to implement. Rip out your internal email system and use a cloud service instead, or use cloud versions of office applications such as Microsoft Office 365. Stop worrying about patching, updating, deploying, upgrading and otherwise doing anything with these applications other than a quick configuration. It's time better spent by your IT team.

The other reason these projects are a great starting point is that they can quickly show other stakeholders how virtualization provides cost savings and streamlines processes, gaining you more support for future projects. After all, if you can show your CFO how well the cloud transition is going, you're likely to meet less resistance when you come back for future project funding.

## Application Modernization

So, you've decided on your application projects and are busy working on creating cloud migration paths for current applications and data stores. It's important to keep in mind not all systems can be easily ported. Most applications weren't created with the cloud in mind, so they might not operate the same way in this new environment. Unless you have a team rewriting these applications to run in virtualized or cloud environments, the movement to a cloud model will need to be done in several steps and on a steep learning curve.  Here's how you do it.

First phase: take inventory of the applications and all dependent systems, including other applications, data stores and third-party tools. These all need to be mapped so you can identify any systems that could affect the productivity or availability of other systems. The first applications and systems to transition are typically legacy systems still used for back-end functions. These are perfect candidates because they require maintenance and dedicated servers, and just aren't cost effective. They are also typically the culprits behind 90% of your business process headaches. Other independent applications such as CRM and email tools are also great candidates, because they can be introduced into the organization with relatively few IT headaches.
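If it helps to make the mapping exercise tangible, the inventory can be captured as a simple dependency graph and ordered so nothing moves before the systems it relies on. The applications below are invented examples.

```python
# Sketch: capture application dependencies from the inventory, then order the
# migration so dependencies move before the systems relying on them.
# Application names are invented examples.
from graphlib import TopologicalSorter

DEPENDS_ON = {
    "crm":            {"customer-db"},
    "email-archiver": {"email"},
    "reporting":      {"customer-db", "orders-db"},
    "customer-db":    set(),
    "orders-db":      set(),
    "email":          set(),
}

migration_order = list(TopologicalSorter(DEPENDS_ON).static_order())
print(migration_order)  # databases first, dependent applications afterwards
```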

The second phase builds on the mapping work from the first. This is where you start automating and repackaging applications that don't need altering in the new environment. An example would be updating applications that run automated tasks, such as performing database analysis to produce automated reports; they simply need to be remapped to the new resource locations. Don't underestimate the benefits of doing the proper legwork to avoid issues down the road. The last thing you need is to miss a single detail and cause a system to fail. You should also think about keeping the old application running in sync with the new application for a few days as a precaution. It's easier to fail over to the old application if it's up to date.

The third phase of application migration brings a lot of complications and headaches. The upside is that, by now, your team is more familiar with how everything works. Here is where you start figuring out what to do with the remaining applications that cause headaches. There is a lot of benefit to reworking them so they will run in a virtual environment, but keep in mind these virtualization solutions are often in their infancy and can't necessarily scale or provide distributed random accessibility. They might also come with high price tags or specialized skill requirements. This is where many organizations will start to feel the steepness of the curve.

It's important to note that the costs associated with application virtualization are often what hinders cloud projects. Because these projects tend to be more expensive and hard to staff, a new service has emerged called application modernization.

## Application Design

For those organizations that are interested in getting the right resources to support long-term application design, the question is often "what skillset do I even look for?" When it comes to APIs, there are several key cloud platforms that are emerging as leaders.

Amazon and their AWS APIs have been around for years and are considered a mature platform that incorporates many API best practices. Unfortunately, because the API code has been evolving for years, there are a lot of legacy conventions and inconsistencies. The other key issue is that, because the platform has so many subscribers, Amazon has to balance sustaining legacy customers with introducing updates that reflect today's products. Many organizations are slow to update to new versions for fear of disrupting a production system, forcing Amazon to support several versions and avoid significant changes to the platform that might disrupt customers' environments.

VMware's approach with APIs is to release them to vendors for the sake of creating next-generation products that can support a wealth of different tools from security and management to development. Their goal is to offer a stable platform that allows for integration with other development systems, while helping organizations avoid the risk of vendor lock-in by making their APIs more flexible to work with other platforms.

With open source APIs, such as those used for developing on the OpenStack platform, you benefit from a large development group made up of individuals from all over the world. With a wide variety of developers and the independence from major vendor restrictions, there is more flexibility in creating custom applications tailored to your environment. The downside is that, with any type of open-source model, there is a risk that the code may be insecure or unstable. This is due to the fact that there is no true regulating agency to verify that the code is recommended for implementation. The other question is: who will support you in times of technical difficulties? More critical to the continued stability of your environment is the revolving door of vendors that support the platform in its infancy stages.

When it comes to cloud adoption, there are myriad details that have a direct effect on the success of implementation and future business operations. The more information we have, the better decisions we can make.

## Virtual Desktop Infrastructure

Virtual Desktop Infrastructure (VDI) is another project that offers a great starting point for transitioning business processes. Desktop services have always been a resource-intensive problem. Enterprise users (contractors, partners, consultants, etc.) constantly bombard the help desk with requests to update their computers (desktops, laptops), while the IT organization is tasked with maintaining the inventory of machines, and ensuring the proper patches and security controls are in place. Technology turnover and providing up-to-date infrastructure carry a tremendous cost, and easing these operational costs is always a frustration. These factors are causing organizations to start looking at a managed desktop service.

We are going through a huge technological revolution. Employees are demanding flexibility in the types of devices they use. They are also spread out geographically and working remotely. The coordination of maintaining the equipment pool has become more complicated than an employee simply approaching the IT personnel and asking for support.

Of course, we can't forget the security implications: making sure all the endpoints are using the correct anti-x, making sure the privacy and data protection controls are in place, avoiding (or reducing) the amount of unauthorized applications installed, and keeping up with employee role changes, which could affect access and application usage. In a 2010 report, IDC estimated that companies spend $3 on management for every $1 of hardware. That's quite a bad ratio if you ask me!

So, what is desktop as a service (DaaS)? There are a few key models, including virtual desktop streaming (VDS), virtual desktop infrastructure (VDI), application or OS streaming and the least cloud-y, terminal services.

Virtual desktop streaming (VDS) is a unique adaptation of virtual desktops in which the local device uses virtualization to host a desktop image. This image is synced with a master image that resides in a data center. The advantage of VDS over other virtual desktop models is that users have access to their data even when they are offline.

In virtual desktop infrastructure (VDI), the desktop itself is a virtual image that is hosted in the cloud or data center. The end-user accesses it with a thin client, usually through a web browser. The nice thing about this is regular backup schedules can be applied to all desktops, and should one desktop be compromised, it can be reset to an earlier backup and business can resume.

Application and OS streaming are some of the more common models, whereby parts of the application are downloaded to a remote device and executed locally. They don't use a hypervisor; rather, the desktop devices connect directly to the network, and then the network server mounts a disk image (either virtual machine or virtual hard disk). The application executable is downloaded each time the application is started. It doesn't save anything remotely.

Last is terminal services. This is one of the more commonly known models (not really cloud so much as remote access) where the desktop is hosted remotely and accessed through a thin client. It can be virtualized or hosted on a dedicated server.

Maybe you're waiting for me to start talking about how terminal services aren't secure. Good news: DaaS is actually not that bad when it comes to security. Remember, with DaaS (in particular VDI), all images and data are in the data center, not on the local machine. This means that as long as you have authentication, encryption, VPN and proper firewall rules, it's really not going to be any less secure than if the local devices were kept onsite in the corporate network. Data loss will also be less of a threat if you lock down remote device usage, which means files must be stored in the virtual desktop, not on USB sticks or sent through web-mail.

## Intelligent Desktop Virtualization

When I started writing this book, I didn't expect to get too technical, and I intend to keep it that way. That being said, I'd like to broach a discussion regarding a technical subject: the difference between VDI and IDV. Here is where the dyslexia kicks in: IDV stands for Intelligent Desktop Virtualization. Although it essentially delivers the same idea as Virtual Desktop Infrastructure (VDI), it's not just more efficient — it's really quite brilliant.

Previously, I wrote about VDI, which is a way to push virtualized desktop environments to end users through a centralized VM structure. Essentially the end user loads a VM of a desktop on their laptop, desktop, or tablet. The image can be either cloud-based or network-resident, and is usually made up of a pre-configured Windows environment that has all the usual applications built in. The end user connects to the virtual desktop, does their thing, and when they shut down, the image resides in a paused state on the host infrastructure. It's a great way to standardize endpoint desktops without worrying about what hardware they run on, and it's even better from a security standpoint because the virtual machine doesn't run the risk of catching nasty viruses or malware residing on the end user's hardware.

IDV works differently, using a more distributed approach to provisioning compute power while centralizing all the back-end management and deployment requirements. There are a few downsides that have always plagued full VDI adoption, such as limitations on the type of peripheral devices that can be used and the upfront and ongoing costs associated with storage and bandwidth. IDV aims to fix this, as well as offer a unique solution that uses a client-side hypervisor.

A company by the name of Virtual Computer came up with a product called the NxTop Engine, which is essentially a bare metal client side hypervisor that can run one or more virtual machines on the PC without a care as to what hardware is resident. Virtual Computer was later acquired by Citrix and is now XenClient, which has helped make it a more widely adopted solution. This is huge, because if you are an IT person who works with virtual machines you know that differences in hardware can pose problems when running a standardized VM. This type of hypervisor allows administrators to create a master image that works on any endpoint, desktop or laptop. It also does some other cool stuff (blatantly stolen from the old Virtual Computer website):

Complete virtual machine isolation. Unlike virtual machines running on top of untrusted operating systems, NxTop virtual machines are completely isolated from one another. Malware in an unmanaged Windows desktop does not compromise a managed NxTop virtual machine, even on the same hardware.

Hardware abstraction. NxTop presents a consistent set of virtual hardware to the end-user operating system, simplifying migration of users to new hardware platforms. Driver management and other hardware-specific compatibility challenges are eliminated.

Full disk encryption. The entire disk, including all virtual machine and system data on NxTop-enabled PCs, is encrypted, providing peace of mind in the event that a PC containing sensitive data is lost or stolen.

Granular policy controls. IT administrators can protect against data leakage and unauthorized use through a robust set of policy controls. Access to hardware such as USB ports and network interfaces can be restricted or filtered based on centrally defined policies at global, group, and individual-user levels. Virtual machines can be governed by time-based expiration policies and on-demand remote disablement.

Remote termination with lost-data destruction. As an added layer of security, IT administrators can flag lost PCs for remote termination. If a lost or stolen PC connects to a network, it is directed to digitally shred all data and encryption keys, then self-destruct.

It also supports USB without the nasty bandwidth issues prevalent in traditional VDI USB redirection, and it's a clever approach to solving the issues associated with complexity in endpoint types, drivers, memory, and security. There are issues, however: the minute you abstract hardware from the OS, you could run into support problems. But with cloud being such an emerging territory, running extensive evaluations will invariably be part of the plan.

For more information on XenClient, check out  www.citrix.com/products/xenclient/overview.html

## Cloud and Collaboration

Cloud is quickly becoming the basis of collaboration, and it's becoming incredibly important to how organizations function internally and externally. Cloud storage solutions like Dropbox are commonplace in every organization, despite the problems they cause for security teams. The reason for their popularity is the ease of collaboration they provide: they enable employees to be more productive by easily sharing files between different teams, departments and mobile devices. If you're not using cloud for your sales and marketing departments at least, you're missing out on one of the most important changes in today's business landscape.

Remember the cloud commercials where they show everything being accessible from any device? Well, this is how collaboration starts: the idea of having a single copy of a document that is accessible to anyone yet can be updated from any source. For example, a marketing department can use a cloud-hosted application to create a presentation. This document will live online, and sales reps all over the world can connect to this document and share it with their customers. In the back end, marketing can keep the slide decks updated and create new ones for different campaigns. Sales reps can connect to these from any web browser, so there's no need to carry around bulky laptops. They can even be accessed through a tablet or smartphone for on-the-fly presentations. Layer in collaboration suites like Tabillo, and the ability to correlate documents from all stages of the customer lifecycle provides even more value by presenting a holistic view of all interactions.

Now, extend this thought to contracts, SOWs, training materials, any document or resource that could possibly be of value to the organization — all accessible from any device anywhere in the world. That is the power of cloud collaboration. You can even virtualize your PBX and have it mimic corporate phone extensions, yet forward calls to any phone, including VoIP lines and smartphones.

Cloud is not just a technology; it's a way to change processes and embrace new ways of thinking and doing business. Collaboration is just a small part of the overall picture, and when you think about it, it's already a big part of how we do things today.

## Mobile Device Management

When it comes to mobile device support in organizations, there are usually two strong views on the topic. From a security perspective, dealing with mobile devices on the network is a pain: they create security holes that are tricky to fix, because locating the source device is not always easy. On the other hand, many organizations realize that mobile devices are part of the workplace culture and, if supported with the right policies in place, can help employees be more productive.

Sadly, a very small percentage of organizations have a formal policy in place when it comes to security and mobile devices. While many organizations have been able to push off the adoption of tablets as a standard business device, we've gotten to the point where we can no longer ignore that they are a huge part of corporate culture. This pushes a problem onto IT departments, because they are now tasked not only with securing these devices but also with supporting a wide variety of them.

From a security perspective, having so many Wi-Fi devices on the network creates a huge risk. Employees want to access the corporate network from their device, but managing individual password requests is incredibly time consuming (not to mention the password reset support required) for the IT team. Throw in contractors, temporary employees, vendors and other guests and you get the idea.

When users are connected, they are also a risk, as they can inadvertently allow access to the corporate network through malware installed on the device, or if the device is lost or stolen. Once the Wi-Fi network has been breached, it is much easier for unauthorized users to gain access to the network from outside the premises.

From the IT infrastructure side, this is where the main reluctance arises. These folks are responsible for troubleshooting all devices that are considered business devices. The certification of business devices is a long and arduous process, and employees are increasingly particular about which devices they wish to use. Realistically, if an executive buys a new tablet and is denied access to the network, the IT team will be forced to allow it anyway.

A second benefit, beyond employee productivity, lies in internal applications such as conferencing, expense management, analytics or social media marketing. It's much easier to push new information and marketing content to employees via applications than it is to rely on your company intranet. Internal applications can increase productivity and collaboration significantly if implemented properly.

Finally, depending on the organization, mobile devices can help extend business functionality. We're seeing this especially in retail, with accepting mobile payments. This type of application can be built to integrate with banking, accounting and other customer-related applications, and if you are already investing in a mobile solution, adding this type of functionality can yield better ROI than doing it in stages.

## Leveraging Big Data for Good

The last area I want to recommend is not necessarily the easiest project to start with, but it has proven to deliver such impressive business benefits that I felt it important to mention. There are two sides to almost every part of cloud, and for me, one of the biggest two-sided coins is data mining. It's impossible to exist in any form online without mass amounts of data being tracked and mined on the back end. It's a scary thing. While there are great free services like Google and Facebook, we all know that in return for these services we pay a price with our privacy. Other companies pay service providers to leverage their user information to create dedicated marketing campaigns, such as the targeted ads you see on search engine pages and vendor websites. With more software and services being placed into the cloud than ever, the desire for companies to leverage these services as a source of data to help drive revenue will become more invasive. But there is an upside (in my opinion) to data mining, an upside that could help mankind advance by leaps and bounds. This upside is science.

There are great examples of how people are working with social networks to mine enough data to help cure diseases — not to sell more gadgets and clothing — but to help make the world a better place. Take Nicholas Christakis and his work using network analysis to identify early signs of flu pandemics, which could help prevent an enormous number of deaths. Larry Brilliant (I love the name), the former Executive Director of Google.org (the philanthropic arm of Google), now uses the world's biggest cache of information to drive social entrepreneurship.

Why not have patients sign up for these types of studies and leverage information that can be analyzed with tools that are only possible with new advances in cloud and Big Data? The data that would emerge would provide so much information regarding how patients respond to different treatments that we could make significant leaps in medicine and science. Information collected with consent can help people who share a disease understand potential treatments and options. There are even scientists backing projects to map DNA through public submissions of information and samples.

If we extend this to social media, how fast could we find information about the spread of disease? It would be a great source of data that could be mined to look for key triggers to explain the source of the outbreak. We never had such access to data before, and it is something that we should look into closely.

In addition to using this type of health information for studies, we can help drive unified health systems, whereby patients, regardless of location, can access health information. This would be a significant move towards a universal healthcare system where, if a patient is injured outside their home area, the treating physician can easily access the patient's information to provide better treatment. Right now, one of our biggest issues with healthcare is the fact that systems are so disparate that an enormous amount of time is wasted finding additional information, instead of helping the patient.

If you take the idea of extending healthcare to a national system that can be accessed from anywhere in the country, we could also start looking at different ways to connect these systems. What if a link to each patient's information could be stored on his or her health card, so that in case of emergency, the attending physician simply scans a barcode or RFID chip to access the patient's information? It would also make it easier in cases where English is not a patient's native language, or if they have been rendered unable to communicate.

These types of examples show how organizations can start looking to Big Data to provide next-generation solutions, either for internal use or to provide new services for customers. Next-generation analytics is going to be one of the largest areas of growth for the foreseeable future, because of the significant advantage that comes from being able to manage and analyze these large stores of information. If we are able to start harnessing this information, we can significantly advance the way organizations interact with internal and external resources.

Resources:

Gartner Forecast: Software as a Service, Worldwide, 2010-2015, 1H11 Update

http://www.gartner.com/resId=1728009

The Christakis Lab / Harvard University

http://christakis.med.harvard.edu/

The Skoll Foundation

http://www.skollfoundation.org/

# Cloud as a Competitive Advantage

I felt it was important to finish this book with a section on the direction many anticipate cloud is heading. Right now we are still in a bit of a cloud bubble, where there are all kinds of great promises of things to come, yet it's unclear exactly how sustainable business models will take shape.

While it's anyone's guess where the big growth from cloud will take us, there are a few trends that are showing signs of promise. From changes in service provider business models to the shift in education and the adoption of Big Data analytics, there is a vast amount of opportunity for organizations to start figuring out how they can play in the new cloud economy.

I hope that as you read this section, you start thinking about how your organization can help contribute to this new market and business transformation opportunity. There are a significant number of gaps in the cloud industry due to its relative infancy, and the ability to innovate in this space, in this moment, will separate the next generation of business leaders from the legacy ones who refuse to make the transition. Like the change from mainframe to client-server, cloud will help organizations realize new ways of doing business and will hopefully usher in a new wave of innovation.

## Cloud Service Providers

When you talk about cloud, a lot of the emphasis on business transformation happens internally and extends outward to customers. The first group of players in the cloud market — the technology and service providers — will be the ones to feel the need to re-evaluate their businesses. The transition to new service models will determine which of these providers will make it through the early adoption of cloud and will continue to be players in the established cloud market. Others will either end up being acquired by another organization, or they'll simply decide to focus on other business lines.

Companies that decide to stay in the cloud game will need to re-evaluate what role they plan to play in helping organizations adopt cloud technologies and services. Personally, I see cloud dividing technology providers into two major groups: enablers and providers. Here's why.

When it comes to cloud, businesses (the customers of service providers) really have two major adoption paths. Larger organizations might look to build their own internal cloud to support their business, whereas smaller organizations might prefer the outsource model. Both market segments will need service providers to help enable them, and there will be overlap between segments in some areas, especially around SaaS. So, what does this mean for providers in either market?

For cloud service providers that enable organizations by providing cloud services such as IaaS, PaaS, and SaaS (along with additional services such as data replication, security and networking), cloud is a great way to evolve legacy services. The dominant players in this area will most likely come from the traditional telecommunication providers who are looking to create more value from their network services, or from managed service providers that normally support organizations with security or data services. These organizations will provide value as service brokers or aggregators for businesses that want a single source for their services and are looking to avoid issues such as incompatible platforms, data integration problems and poor portability.

For technology providers that are focused on helping organizations build their own clouds (which will also be a large market, especially in early cloud adoption by large enterprises), the key benefit these players can provide is through consulting and resource portfolios that support cloud as a market. Traditionally, these providers have sold hardware and software in silos such as storage, networking, security and unified communications. These portfolios will need to evolve to reflect changes in the market: a heavier emphasis on virtualization technologies (including virtualized UC and security, for example), cloud platforms (cloud environment solutions for storage and networking), and implementation and architecture services to help customers transform their current business models and take advantage of the benefits cloud has to offer. These organizations will also play an important part in helping service providers figure out how they will transition to the new cloud economy.

## Cloud and Mid-Market Organizations

For mid-market organizations that want to build internal cloud environments to support their IT objectives, the biggest barrier has always been the capital cost of purchasing infrastructure and security solutions to support their initiatives. Traditionally, the large server manufacturers haven't really targeted the mid-market because they never saw a large revenue stream potential, or they assumed these organizations lacked the in-house expertise to manage the equipment. Now it seems like every vendor is suddenly touting the latest mid-market solution, as if they just realized the market segment exists. Why the shift?

With cloud and virtualization, mid-market organizations are seeing the potential in building their own internal environments and outsourcing the more complicated services to cloud providers. As a result, there has been significant investment from vendors in making smaller versions of their enterprise-class devices to address this market. In the past this hasn't worked so well, as organizations still needed the skillset of highly paid specialists to operate the equipment. What these vendors don't keep in mind is that in countries like Canada, the vast majority of organizations fit into the mid-market demographic; they have the same goals as enterprise organizations when it comes to reducing infrastructure complexity, but they need it to be even simpler.

Cloud solutions are a perfect fit for mid-sized organizations, as they are geared towards providing the same types of services that would exist on enterprise devices, but providers understand these organizations usually have less complex environments. Mid-sized organizations often run a single type of operating system, have a simplified network, and want a single device to do multiple tasks. They also want some of the key features of larger solutions, such as snapshots, de-duplication and advanced security features like next-generation IPS and firewalls, and they want it all delivered via a simplified interface.

One nice thing coming from the vendor movement to support the mid-market is that we are seeing more all-in-one devices that use virtualization technology to add features. This was never much of an option before; each function was sold as a separate device or add-on. Vendors are now providing the market with simplified licensing and pricing models, which has also streamlined channel support for the vendors themselves.

These new business models also help reduce the barrier to adoption. Talking to mid-market about security and infrastructure was always a pain, because they assumed the price point was out of their reach. But with the OPEX pricing model, cloud is making the conversation a lot easier by giving the market more options to leverage these new technologies. It's nice to know you don't need thousands of employees to qualify for modern solutions anymore, or to get the attention of cloud vendors.

Transitioning your business model (assuming you are looking at the service provider business model) to help strengthen and enable mid-market organizations will present several benefits. Particularly in countries where the majority of organizations fall into this mid-market size, there is a huge untapped market that has traditionally been overlooked in favor of enterprises. The problem with targeting enterprises is they make up a relatively small percentage of the overall opportunity, making it a question of "bigger bang for the buck" in terms of sales allocation. Leveraging cloud to offer new services that address the needs of the mid-market space, and to be one of the first to market with these services, will provide a significant competitive advantage, especially if your organization is nimble enough to stay ahead of the game.

## Cloud Brokers

Sorting the various cloud offerings from service providers, technology manufacturers and software developers is not getting easier. It seems as if every day there is a new cloud platform or automation tool available, and it is becoming increasingly difficult for organizations to keep track of all these advancements without losing sight of their organization's own objectives. When it comes to cloud, the learning curve means spending lots of time reading to figure out how it all applies to your organization, and often this is on top of your regular job. Unless you have a dedicated cloud expert resource, you will undoubtedly want to leverage some help. This is where one of the biggest cloud markets — and simplest business models for service providers to implement — comes in.

Enter the new type of service provider: the cloud broker. Cloud brokers are a perfect business model for many service providers who don't want to go all-out in building a cloud infrastructure but want to offer their customers some of these services. Some of the perfect candidates we see starting to adopt this model are telecommunication providers and ISPs. Because they currently offer network solutions, adding third-party services gives them the ability to provide their customers some of the benefits of cloud, including multiple complementary services, with the convenience of a single monthly bill. It also promotes stickiness on the network, driving overall network sales while creating additional revenue streams from subscription models that require very little CAPEX to build.

Let's look at a typical cloud broker model. A telecom company might currently provide services in the way of network connectivity, unified communications and mobile devices. What if it added the ability to subscribe to additional services such as Office 365, SalesForce.com, cloud replication and cloud storage? It's a pretty good way to differentiate the provider from its competition, especially considering it reduces vendor overload on behalf of the client. This is a compelling business proposition: should something happen with any of the services, the customer only needs to call one number, and the solutions have already been tested to work together, saving the customer the effort of verifying compatibility themselves. The customer gets a more comprehensive solution, and the provider can offer third-party solutions (with a healthy margin added) without building the service from scratch. It's no wonder cloud brokers will start to play a larger role in the cloud market, primarily through cloud service intermediation, service aggregation and cloud service arbitrage.
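
To make the "single monthly bill" idea concrete, here is a minimal sketch in Python of how a broker might aggregate resold subscriptions into one invoice. The service names, costs, and margins are placeholders of my own invention, not real pricing or any provider's actual model.

```python
# A minimal sketch of the "single monthly bill" idea behind cloud brokerage.
# Service names, costs, and margins are hypothetical examples only.

from dataclasses import dataclass
from typing import List


@dataclass
class Subscription:
    service: str          # third-party (or in-house) service resold by the broker
    monthly_cost: float   # what the broker pays the upstream provider
    margin: float         # broker's markup, e.g. 0.15 = 15%

    def billed_amount(self) -> float:
        return round(self.monthly_cost * (1 + self.margin), 2)


def monthly_invoice(customer: str, subs: List[Subscription]) -> str:
    """Roll every resold subscription into one invoice for the customer."""
    lines = [f"Invoice for {customer}"]
    for s in subs:
        lines.append(f"  {s.service:<22} ${s.billed_amount():>9.2f}")
    total = sum(s.billed_amount() for s in subs)
    lines.append(f"  {'TOTAL':<22} ${total:>9.2f}")
    return "\n".join(lines)


if __name__ == "__main__":
    bundle = [
        Subscription("Network connectivity", 1200.00, 0.00),  # broker's own service
        Subscription("Office 365", 480.00, 0.15),
        Subscription("SalesForce.com", 900.00, 0.12),
        Subscription("Cloud replication", 650.00, 0.20),
    ]
    print(monthly_invoice("Example Customer Inc.", bundle))
```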

Because of their unique position in the market, it makes sense for cloud service providers to start working with cloud brokers to help ensure their services meet the needs of the market. Cloud brokers can add extra value to the services offered by cloud providers by layering on additional capabilities such as security or advanced management. They can also assist in creating customer solutions spread across multiple providers and manage the integration into a single platform. Finally, cloud brokers can evolve into regulators of the space, supplying flexibility and fostering competition between cloud providers to ensure end users benefit from the best available choices.

Personally, I am most excited about seeing cloud brokers become a single point for customers to purchase multiple services from leading vendors, built on a federated platform that includes identity services, security and infrastructure. It will also mean that, should there be a service interruption, the cloud broker has the ability to move services around to keep organizations up and running until the interruption is resolved.

But where is the immediate opportunity for cloud brokers? Right now there is a lot of talk about cloud brokerage services in the government space. Due to the high regulatory and compliance requirements, it will be beneficial for governments to outsource their cloud strategies to cloud brokers who can work across multiple vendors to build a tailored cloud solution that meets the needs of the government. This saves governmental agencies large amounts of time they would have otherwise spent sourcing and reduces the risk of choosing services that would be incompatible with the goals of the project.

Once the dust has settled, other organizations will begin taking a hard look at cloud brokers as potential service providers. In terms of cloud solutions, most analysts expect the telecommunication providers will make forays into these markets by building their own in-house services, with other key players moving into the cloud brokerage model as well.

## Vendor Collaboration

Just like every other technology, there is something to be said about the benefits of collaboration. Take smartphones, for example. The real benefit for these devices lies in how many applications have been created for the platform. Just like developers in the smartphone market, cloud vendors are building partnerships to develop solutions for attached security, storage, asset management, performance monitoring and other operational technologies within virtual environments.

Overall, vendor collaboration in the cloud space has resulted in a better focus on solutions that address the inherent risks associated with virtual and cloud environments. Platform vendors understand that these environments introduce a new area of risk and, as a result, are working closely with industry leaders to address them at all layers of the cloud stack. As more of these early-adopter issues are addressed, channel vendors and service providers will be better positioned to reduce the obstacles that affect client adoption.

Association with partner vendors has always been a wise move for vendors, as customers will naturally choose solutions that are certified to work together. For organizations, nothing is more frustrating — not to mention expensive — than purchasing a bunch of different technologies and learning that they are incompatible, or that one of the manufacturers is no longer supporting another vendor's component.

The other great benefit of this type of model is that solutions, especially security solutions, can be designed at the platform level through the use of APIs. This means they have access to the code that builds the environment, which in turn means more robust security and visibility in virtual environments. In security this is a key issue, because a lot of traditional security solutions were not designed for virtual environments and cannot control the unique elements that reside within them. Many organizations cite this as the reason they are waiting to make the transition. By integrating market-leading technologies from other vendors into the platforms themselves, these types of arguments are being rendered obsolete.
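
As one concrete, purely illustrative example of platform-level integration, the sketch below uses the open source libvirt Python bindings to pull a VM inventory straight from the hypervisor. libvirt is my choice of example here, not something prescribed by any particular vendor; a real security product would feed this kind of inventory into its own policy engine rather than printing it.

```python
# A minimal sketch: enumerating VMs through the virtualization platform's API
# (libvirt here, purely as an example) instead of scanning the network.
# Assumes the libvirt daemon and the libvirt-python bindings are available;
# the connection URI is an assumption for a local KVM/QEMU host.

import libvirt


def inventory(uri: str = "qemu:///system") -> None:
    conn = libvirt.openReadOnly(uri)  # read-only access is enough for visibility
    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "stopped"
            # A security tool could compare this platform-level inventory against
            # its own list of protected VMs and flag anything unregistered.
            print(f"{dom.name():<30} {state}")
    finally:
        conn.close()


if __name__ == "__main__":
    inventory()
```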

From a cloud solution provider's perspective, the ability to offer multiple solutions through vendor collaboration will ultimately help drive adoption of services by your customers. This doesn't necessarily mean that the provider with the most solutions will win, but rather the one whose services are best integrated. Having multiple services that need to be customized to work with other applications increases the complexity for both the provider and the customer. Customers don't want to try to figure out whether one service will or will not support another. They expect that the provider has performed due diligence around interoperability, and will look to the combination of services in the offering as a recommendation regarding which vendors are more likely to support each other.

This is where the benefits of vendor collaboration really shine. By working with other vendors to certify solutions, vendors can position their offerings as part of a larger, overall solution. It also helps them gain more exposure by partnering with larger organizations and building on those partners' technology, such as their APIs, to create more robust add-ons and additional functionality. Often, larger vendors will leverage their partner network to help provide solutions to the market, and in return offer special designations to help distinguish these partners.

As cloud brokerage becomes more prevalent, the ability to certify solutions from multiple partners on a single platform will be increasingly important to customers. We are already seeing this happen with cloud providers themselves, such as those who standardize on a single platform like VMware, AWS or OpenStack and wish to give their customers peace of mind that the entire solution, regardless of component, will work.

## Cloud and the Education Sector

Earlier I touched on how cloud is a great way for service providers and cloud brokers to target specific markets, particularly government or finance. But there is an equally attractive market that is largely untapped and could be one of the best use cases for cloud technologies, one that will also, hopefully, result in a noticeable change in the market. That market is education.

The education market has always been a unique one for IT vendors. Because they are under constant scrutiny from the public, educational institutions face increased pressure to provide top-notch services with severely slashed funding, all while ensuring that security is a key part of the overall strategy. Security is particularly tricky because they serve a large student population that becomes more technically savvy every year and is persistent in getting around every control in place. The education sector is tasked with ensuring the next generation of workers is enabled with the latest technologies, but it needs to deliver them in a cost-effective way with some form of standardization. The good news is that cloud really shines when it comes to helping education, particularly in infrastructure optimization, content management and collaboration.

Let's start by looking at cloud as a tool for resource optimization, considering it's a great way schools can start upgrading their legacy infrastructure (which normally would require huge costs to overhaul). Virtualization can be used to run legacy applications, while allowing for the integration of new ones, to make internal infrastructure more flexible and optimize storage and memory resources. It also allows for the development of next generation solutions such as mobile access and thin clients for classroom use.

From a content management perspective, by creating centralized repositories for teaching materials, the curriculum of several educational institutions can be standardized and updated to take advantage of changing content. Centralization will allow all schools to take advantage of one main source of tools and content without having to build and host in-house, and best-of-breed solutions from one institution can be replicated across many with few additional costs. Content can also be made accessible through web portals for students to access at home on multiple devices.

In my opinion, one of the most important uses of cloud is how it will transform collaboration. It's no secret that class sizes are getting bigger while budgets are getting smaller. Through collaboration, teachers can receive more detailed input on each student's progress and can create individualized plans based on their unique needs. If analytics is integrated into all content, you could have an automated process whereby, if a test has a certain grading curve and a student falls below that curve, the system is flagged to look at other tests and identify whether the student needs help in one subject or across the board. This would help teachers identify these students and create a learning plan that utilizes content from the centralized repository. The content could allow for online tracking so the teacher would be able to see progress and better assist the student. Parents could also be given access to this information to gain a better understanding of how their child is doing, and the student can do work outside class.
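
A minimal sketch of that flagging logic, assuming a hypothetical layout of test scores by subject; the threshold, subject names and student names are placeholders, not part of any real learning platform.

```python
# A minimal sketch of "flag students who fall below the class curve".
# Data layout, margin, and names are hypothetical examples.

from collections import defaultdict
from statistics import mean

# scores[subject][student] = list of test scores (percent)
scores = {
    "Math":    {"Avery": [62, 58], "Blake": [81, 88], "Casey": [74, 70]},
    "Science": {"Avery": [55, 60], "Blake": [79, 83], "Casey": [68, 72]},
}


def flag_students(scores, margin=10):
    """Flag students whose average sits more than `margin` points below the class average."""
    flags = defaultdict(list)
    for subject, by_student in scores.items():
        class_avg = mean(s for tests in by_student.values() for s in tests)
        for student, tests in by_student.items():
            if mean(tests) < class_avg - margin:
                flags[student].append(subject)
    return dict(flags)


if __name__ == "__main__":
    for student, subjects in flag_students(scores).items():
        breadth = "across the board" if len(subjects) == len(scores) else ", ".join(subjects)
        print(f"{student}: needs support in {breadth}")
```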

It's hard not to be excited about the idea that e-learning can help students take advantage of individualized plans to help them succeed. Of course, it requires that students have access to technology and are familiar with it, but through cloud, applications can be created to work on any operating system, tablet or laptop/desktop with thin clients or standard web portals. This will be increasingly effective in geographic areas where access to high-end computing devices is not available for most families. In either case, education has changed significantly from when we had ICON and Apple II computers, and we need to make sure education keeps up to give us a great generation of future cloud developers.

## Cloud and the Careers of Tomorrow

I wanted to take the last chapter of this book to focus on one of the biggest gaps in cloud: people. Provided that you, and perhaps your team, have sat down and gone through the goals of your organization, you've probably started thinking about how the skillsets required to support cloud will need an overhaul. Whether or not a cloud or virtualization project is the right decision for you, you can't help but notice that the types of careers that will come from cloud are going to be different from the ones we are currently familiar with. New IT specialists will need to understand not just virtualization, but abstract thinking as it relates to planning infrastructures over multiple locations that are not physically visible. Add to this skillsets in security, storage and networking, and you're most likely able to get a cloud environment implemented and managed. But cloud doesn't stop there. Once all these new solutions are in place, there is the question of what we do with the data that flows through them. How do we even deal with the significant increase in data filling up the pipes? In other words, how do we deal with the real opportunity of cloud: Big Data?

If you think of Linux and the grep command, particularly its usefulness in finding a text pattern in a data file, you could almost argue it is the forefather of Big Data. The ability to make meaningful sense of data, particularly unstructured data, is going to be a skillset that puts you at the top of employers' lists. Having the ability to find data is one thing; using the data to drive innovation is another.
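
For readers who haven't used it, a grep-style scan is just a line-by-line pattern match over a text file. The tiny Python equivalent below shows the idea; the pattern and file name are placeholders.

```python
# The idea behind "grep as the forefather of Big Data": scan a pile of
# unstructured text for lines matching a pattern.
# Shell equivalent: grep -i "timeout" server.log

import re
import sys


def grep(pattern: str, path: str) -> None:
    """Print every line in `path` that matches `pattern`, with its line number."""
    regex = re.compile(pattern, re.IGNORECASE)
    with open(path, errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if regex.search(line):
                print(f"{path}:{lineno}: {line.rstrip()}")


if __name__ == "__main__":
    # Example usage: python grep_sketch.py "timeout" server.log
    grep(sys.argv[1], sys.argv[2])
```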

In the US, a McKinsey Global Institute report predicts that by 2018 the demand for Big Data professionals will exceed supply by up to 190,000 positions, and US enterprises will need about 1.5 million (!) more managers and business analysts who can understand the data. The problem, however, is that this skillset barely exists yet. Education is still trying to keep up with basic virtualization fundamentals; it has a long way to go before it can start training the next workforce on Big Data magic. If we don't have Big Data experts, how can we train more?

There are two approaches that can be taken to reduce this critical gap in the workforce. First, we could look to education to start creating content that helps fill the gaps. Unfortunately, this isn't easy to do. The technology is still evolving, and getting the right resources in place to teach such critical skills will take time. Some universities and colleges are starting to add cloud to their curriculums, but until the resources to support these courses are available, they won't amount to much more than Cloud 101.

The other option is to look at service providers. Big Data analytics services are going to be required by all organizations if they want to compete in a post-cloud landscape. By offering the right technology and analytics, service providers could create huge value for customers by helping them make sense of their data as part of a cloud service. We're already seeing big systems integrators such as IBM and Accenture, as well as platform vendors like Teradata, setting up models that support analytics services.

For organizations that wish to develop internal big data teams, the best place to start is looking at your employees and identifying those next-generation workers. These employees are attracted to open source tools and cloud computing; they want to use the latest and greatest tools and are focused on developing their career paths. The downside is that organizations need to nurture these employees to maintain their loyalty, as these skills are highly valuable to other organizations. Your employees need access to the right resources. This may be through partnerships with service providers or vendors, or by giving them access to secondary sources of education.

Big Data is going to be the driving force behind how companies operate in the coming years; putting together a strategy that takes advantage of these new technologies is vital. The first step is to identify internal resources who would be good candidates in transitioning along with these technologies, and ensure they have the resources to start learning about Big Data. There will also be an increased focus on keeping these resources happy, as we can expect to see an extremely competitive market for those with next generation skillsets.

Resources:

McKinsey Global Institute Report on Big Data Expertise

www.mckinsey.com/mgi/publications/big_data/pdfs/MGI_big_data_exec_summary.pdf

# About the Author

Andrea Knoblauch is a Canadian Cloud & Virtualization Security Strategist with a passion for all things tech. With over 18 years of experience in marketing and product management, Andrea has spent the last few years working with leaders in the cloud space to promote best practices in cloud and virtualization. As part of her non-profit activities, Andrea has contributed to the Cloud Security Alliance (CSA)'s research groups on the topic of security, works with Canadian cloud startups and industry professionals, writes for several blogs and regularly meets with Canadian industry groups to help further cloud adoption.

Her blog is located at tinderstratus.com

