I have heard about CLOUD but what the heck is IaaS, PaaS and SaaS?

Anyone who follows technology trends has undoubtedly heard the term “cloud service” thrown around a few gazillion times over the past few months. But if you don’t know the difference between terms such as PaaS, IaaS and SaaS, don’t fret — you’re far from alone.

Let’s start at the beginning. “Cloud” is a metaphor for the Internet, and “cloud computing” is using the Internet to access applications, data or services that are stored or running on remote servers.

When you break it down, any company offering an Internet-based approach to computing, storage and development can technically be called a cloud company. However, not all cloud companies are the same. Typically, these companies focus on offering one of three categories of cloud computing services. These different segments are called the “layers” of the cloud.

Not everyone is a CTO or an IT manager, so sometimes following the lingo behind cloud technology can be tough. With our first annual CloudBeat 2011 conference coming up at the end of this month, we thought this would be a good opportunity to go over the basics of what purpose each layer serves, along with some company examples to help give each term more meaning.

Layers of the cloud

A cloud computing company is any company that provides its services over the Internet. These services fall into three different categories, or layers. The layers of cloud computing, which sit on top of one another, are Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). Infrastructure sits at the bottom, Platform in the middle and Software on top. Other “soft” layers can be added on top of these layers as well, with elements like cost and security extending the size and flexibility of the cloud.

Here is a chart showing simplified explanations for the three main layers of cloud computing:

[Chart: IaaS – PaaS – SaaS, the three layers of cloud computing]

IaaS: Infrastructure-as-a-Service

The first major layer is Infrastructure-as-a-Service, or IaaS. (Sometimes it’s called Hardware-as-a-Service.) Several years back, if you wanted to run business applications in your office and control your company website, you would buy servers and other pricy hardware in order to control local applications and make your business run smoothly.

But now, with IaaS, you can outsource your hardware needs to someone else. IaaS companies provide off-site server, storage, and networking hardware, which you rent and access over the Internet. Freed from maintenance costs and wasted office space, companies can run their applications on this hardware and access it anytime.

Some of the biggest names in IaaS include Amazon, Microsoft, VMware, Rackspace and Red Hat. While these companies have different specialties – some, like Amazon and Microsoft, want to offer you more than just IaaS – they are connected by a desire to sell you raw computing power and to host your website.
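To make that concrete, here is a minimal sketch of what “renting raw computing power” looks like in practice, using Amazon EC2 through the boto3 Python library. The machine image ID, key pair name and region below are placeholders rather than real values.

    # Sketch: renting a virtual server from an IaaS provider (Amazon EC2 via boto3).
    # The AMI ID, key pair and region are placeholders -- substitute your own values.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",     # placeholder machine image
        InstanceType="t2.micro",    # the "size" of the hardware you are renting
        KeyName="my-key-pair",      # placeholder SSH key pair
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", response["Instances"][0]["InstanceId"])

The point is not the specific API but that the hardware lives in someone else’s data center and you pay for it by the hour.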

PaaS: Platform-as-a-Service

The second major layer of the cloud is known as Platform-as-a-Service, or PaaS, which is sometimes called middleware. The underlying idea of this category is that all of your company’s development can happen at this layer, saving you time and resources.

PaaS companies offer up a wide variety of solutions for developing and deploying applications over the Internet, such as virtualized servers and operating systems. This saves you money on hardware and also makes collaboration easier for a scattered workforce. Web application management, application design, app hosting, storage, security, and app development collaboration tools all fall into this category.

Some of the biggest PaaS providers today are Google App Engine, Microsoft Azure, Salesforce’s Force.com, the Salesforce-owned Heroku, and Engine Yard. A few recent PaaS startups we’ve written about that look somewhat intriguing include AppFog, Mendix and Standing Cloud.
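To give a sense of what handing your code to a platform looks like, here is a minimal sketch of a web application that a PaaS such as Heroku or Google App Engine could run; the platform supplies the servers, operating system and runtime, so the developer ships little more than this file. The port-from-environment convention is typical of these platforms, though the details vary by provider.

    # Sketch: a minimal web app handed to a PaaS to run.
    # The platform provides the OS, runtime and scaling; only the app code is yours.
    import os
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "Hello from the platform layer!"

    if __name__ == "__main__":
        # Most PaaS providers inject the listening port through an environment variable.
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 5000)))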

SaaS: Software-as-a-Service

The third and final layer of the cloud is Software-as-a-Service, or SaaS. This layer is the one you’re most likely to interact with in your everyday life, and it is almost always accessible through a web browser. Any application hosted on a remote server that can be accessed over the Internet is considered a SaaS.

Services that you consume entirely over the web, like Netflix, MOG, Google Apps, Box.net, Dropbox and Apple’s new iCloud, fall into this category. Regardless of whether these web services are used for business, pleasure or both, they’re all technically part of the cloud.

Some common SaaS applications used for business include Citrix’s GoToMeeting, Cisco’s WebEx, Salesforce’s CRM, ADP, Workday and SuccessFactors.
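Because SaaS is consumed rather than hosted by you, “using” it from code usually just means calling the vendor’s web API over HTTPS. The endpoint and token below are hypothetical placeholders, included only to illustrate the pattern.

    # Sketch: consuming a SaaS application through its web API.
    # The URL and token are hypothetical placeholders, not a real vendor endpoint.
    import requests

    API_URL = "https://api.example-saas.com/v1/files"  # hypothetical SaaS endpoint
    TOKEN = "YOUR_API_TOKEN"                           # placeholder credential

    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    for item in resp.json():
        print(item)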

We hope you’ll join us at CloudBeat 2011 at the end of the month to explore a number of exciting case studies in cloud services.

Content Reference:

The original article can be found here and the Cloud breakdown slide is taken from “Windows Azure Platform: Cloud Development Jump Start” via Microsoft

The Battle of the Cloud – Microsoft Azure vs VMware vs Amazon AWS

All three of these big players have their own way of connecting an organization’s data center to public clouds; not to mention, each of them comes with its share of benefits and risks that should be considered before making a choice. With the all-new edition of Windows Server, Microsoft looks committed to a patent […]

What’s a Layer 1 or Layer 2 Hypervisor and Where Does Hyper-V Fit In?

In the world of Server Virtualization, there are two types of hypervisors: Layer 2 hypervisors are installed as an application (or service) on an existing operating system (such as Microsoft Windows). Layer 1 hypervisors are in and of themselves operating systems that are installed on the ‘bare metal’ – directly on the hardware.

The hypervisor is the virtualization layer – the platform on which the virtual servers are hosted. Because all operating systems require resources (some more than others), it is axiomatic that Layer 1 Hypervisors – those that are themselves thin operating systems – are going to be more efficient than Layer 2 Hypervisors, which have to first allow the parent operating system to take the resources that it requires, and then meter out the available resources to its applications and services as it sees fit.

It used to be easy enough to know which virtualization platforms were which, based on how you installed them. So when Microsoft released Hyper-V as a role on Windows Server 2008 (and all subsequent versions), it was an easy mistake to assume that it, like its predecessors, was a Layer 2 Hypervisor. However, that assumption is wrong.

As with all other Roles on Windows Server, Hyper-V is installed by first installing the operating system, then adding the role. It requires a total of 10 clicks and two reboots, and it is done.

Two reboots… that is a bit unusual, isn’t it? Usually Roles either do not require a reboot, or occasionally a single reboot. Only when you install multiple roles would you need to reboot multiple times, and even then only occasionally. So why does Hyper-V require two?

The following is going to feel, for a couple of paragraphs, as if I accidentally cut and pasted a completely irrelevant article below. Please read on, I will tie it all together in a few paragraphs!

If you have ever been to downtown Montreal you may have seen Christ Church Cathedral. According to the church’s website the building was completed in 1859, and consecrated in 1867 (not sure why the 8 year lag… but then, I am not entirely sure why a building needs to be consecrated). In other words, it recently celebrated its 150th birthday… and despite the efforts of the best architects (Frank Wills, Thomas S. Scott) and masons, older buildings tend to require a certain level of care to maintain. They may have built them well back then, but ask any Egyptologist to confirm that the pyramids are crumbling… slowly.

Now, the following story is my interpretation of a historical discussion that I have no insight into. The facts are there, but the story behind it is simply pure guesswork. In the mid-1980s the church (which it should be mentioned is also the home of the Anglican Diocese of Montreal) evaluated its resources and holdings and determined that financially they were lacking. Their most prominent holding – the plot of land on which the church was built – was worth millions (at the heart of downtown Montreal, in the booming building economy of the 1980s), and they needed a way to leverage that if they were to remain (or return to) financially healthy.

The board called for ideas of how to leverage the property… remember, this was before Matt Groening gave us the idea to commercialize the church. Some of the ideas were certainly money-makers, but unrealistic.

  1. They could tear down the church and build a commercial property. Unfortunately, this would essentially eliminate the point of the church… couldn’t do that!
  2. They could build OVER the church… however, there were several issues with that, not the least of which was that building over an architectural wonder like the cathedral would mean masking its true beauty. From a more practical standpoint, building onto a building that old would raise all sorts of concerns, some of them involving the scary words ‘building could fall down.’
  3. The strangest idea is what they actually ended up doing… they dug under the church, essentially putting the building on stilts, and built an underground shopping mall, which today is known as Promenades de la Cathedrale. It is a multi-level mall with over fifty stores and a food court, along with underground parking. It is an architectural feat that must have taken a year to design and longer to plan. The steeple of the cathedral, however, is no higher than it was in 1867, and the project was executed successfully with movements never exceeding 3/16 of an inch.

Hyper-V installs in much the same way. It lifts the base operating system up off the bare-metal, injects the thin-layer hypervisor onto the bare-metal hardware, and instead of placing the original back where it was, it condenses it into what I call a para-virtual machine, and creates the Parent Partition, which is a concept unique to Microsoft. The Parent Partition is the ‘first among equals’ which controls the drivers, and allows the administrator to use the console rather than remoting into the system. It does not use a .vhd (virtual hard drive) for storage, but rather writes directly to the hard drive.

There is no way to differentiate it from a non-virtual machine… except that the system boots to Hyper-V and then loads the Parent Partition.

The hypervisor loads in Ring –1… there are no hooks into it for any external code – it is purely written by Microsoft and read-only. However, on top of that, the virtual machines (or Child Partitions) are all created equally… or at least three of the four types have equal access to the distribution of resources, with the fourth type (the Parent Partition) being the only partition that can reserve its own resources off the top – by default 20% of the CPU and 2GB of memory, but those numbers are adjustable.

One primary difference between the Parent Partition and the Child Partitions is seen in the following graphics. In the first graphic (Image1) we see the Device Manager for the Parent Partition. The expanded information is what you would expect – HP LOGICAL VOLUME denotes the HP RAID Array, the Display Adapter is ATI, there are two HP NC371i Multifunction Gigabit NICs, and the iLO Management Controller driver. The second graphic (Image2) is a similar screenshot from an operating system running in a Child Partition on the same physical box. It is the same ACPI x64-based PC… and it even has the same Dual-Core AMD Opteron™ Processor 8220 SE CPUs… it just has fewer of them (while Hyper-V allows us to assign up to four virtual CPUs to a VM, this one only has two). Where the Parent Partition has HP LOGICAL VOLUMES, ATI ES1000 video, and HP NC371i network adapters, the corresponding drivers for the Child Partition are MSFT Virtual Disk Devices, Microsoft Virtual Machine Bus Video Device, and Microsoft Virtual Machine Bus Network Adapters. While they have similar performance to the physical, the virtual partition has virtual hardware, unlike the para-virtual machine, which has physical hardware… sort of.

Image1: Device Manager, Parent Partition
Image2: Device Manager, Child Partition

Because the actual drivers for the physical hardware run in the Parent Partition, it also has a feature called the ‘Virtual Service Provider (VSP).’ The VSP communicates with a feature in the Child Partitions called the ‘Virtual Service Client (VSC).’ This is how the virtual machines can perform as well as their physical counterparts, with the only limitation of their virtual hardware being how many of the resources are allocated to (or shared with) the VM.

Because of how the hypervisors differ, ESX (and ESXi) does not have a Parent Partition – their ‘operating system’ is their hypervisor. With Microsoft Windows the hypervisor kernel is still Windows, so it works differently. However, benchmark performance tests of both show that there is little to no difference in performance between ESX and Hyper-V**, whether testing against the full installation of Windows Server, Server Core, or Hyper-V Server.

Incidentally, I mentioned earlier that there are three types of Child Partitions… while this is true, the only differentiator is the operating system installed in the Child Partition… so the three types are:

1.  Child Partition with Hyper-V supported OS
2.  Child Partition with a non-supported (Legacy) version of Windows (or non-supported x86 OS)
3.  Child Partition with a supported Xen-Enabled Linux Kernel (SLES, RHEL, CentOS)

Where VMware claims to support many more versions of many more operating systems than Hyper-V does, Microsoft is more realistic. For example, Microsoft wrote Windows NT but stopped supporting it years ago. It, like any other x86 operating system, will install in a Hyper-V virtual machine, but it will not have Integration Components. You will not be able to fully leverage the gigabit Ethernet adapter or high-resolution video… but if you are still running NT, chances are you didn’t have that anyway. Microsoft also recognizes that it would be impossible to support the many Linux builds out there, especially the ones that are primarily supported by the community. On the other hand, the three kernels that are supported account for well over 90% of Linux in professional datacenters. Chances are there will be more kernels supported in the future… but the majority are covered currently.

If your operating system of choice is Linux, then vSphere may be your best bet. However, if you run a Windows-centric datacenter, but happen to have a number of Linux machines that you need to run, then Hyper-V with System Center is definitely for you… especially since you now understand why Hyper-V is really a Layer 1 Hypervisor, despite what some may claim!

**Although I have performed these tests, the End User License Agreements of vSphere 4.0, 4.1, and 5.0 all prohibit the publication of such benchmarks, and I would be stripped of my VMware certifications and subject myself to legal action if I published them. Solution… build them for yourself.

Content source:  The above article is written by Mitch Garvis (@MGarvis) and also appears on The World According to Mitch.

vSphere’s Virtual CPUs – Avoiding the vCPU to pCPU ratio trap

2011 was a year where, despite the economic constraints, everything Big was seemingly good: Big Data, Big Clouds, Big VMs etc. Caught in the industry’s lust for this excess, 2011 was also the year I lost count of how many resources I witnessed being overprovisioned to ‘Big’ production VMs. More often than not this was a typical reaction from system admins trying to alleviate their fears of potential performance problems on important VMs. It was the year where I began to hear justifications such as “yes we are overprovisioning our production VMs… but apart from the cost savings, overallocating our available underlying resources to a VM isn’t a bad thing, in fact it allows it to be scalable”. Despite this, 2011 was also the year where I lost count of the number of times I had to point out that sometimes overprovisioning a VM does lead to performance problems – specifically when dealing with Virtual CPUs.

VMware refers to CPU as pCPU and vCPU. pCPU, or ‘physical’ CPU, in its simplest terms refers to a physical CPU core, i.e. a physical hardware execution context (HEC), if hyper-threading is unavailable or disabled. If hyper-threading has been enabled then a pCPU would constitute a logical CPU. This is because hyper-threading enables a single processor core to act like two processors, i.e. logical processors. So for example, if an ESX 8-core server has hyper-threading enabled it would have 16 threads that appear as 16 logical processors, and that would constitute 16 pCPUs.
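As a quick worked example of that arithmetic, the sketch below counts pCPUs for a host, treating each core as one pCPU, or as two logical pCPUs when hyper-threading is enabled.

    # Sketch: counting pCPUs as described above.
    def pcpu_count(sockets, cores_per_socket, hyperthreading=False):
        cores = sockets * cores_per_socket
        return cores * 2 if hyperthreading else cores

    # An 8-core ESX server (2 sockets x 4 cores) with hyper-threading enabled: 16 pCPUs.
    print(pcpu_count(sockets=2, cores_per_socket=4, hyperthreading=True))   # 16
    print(pcpu_count(sockets=2, cores_per_socket=4, hyperthreading=False))  # 8
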
As for a virtual CPU (vCPU), this refers to a virtual machine’s virtual processor and can be thought of in the same vein as the CPU in a traditional physical server. vCPUs run on pCPUs and, by default, virtual machines are allocated one vCPU each. However, VMware have an add-on software module named Virtual SMP (symmetric multi-processing) that allows virtual machines to have access to more than one CPU and hence be allocated more than one vCPU. The great advantage of this is that virtualized multi-threaded applications can now be deployed on multi-vCPU VMs to support their numerous processes. So instead of being constrained to a single vCPU, SMP enables an application to use multiple processors to execute multiple tasks concurrently, consequently increasing throughput. With such a feature and all the excitement of being ‘Big’, it was easily assumed by many that provisioning additional vCPUs could only ever be beneficial – but if only it were that simple.
The typical examples I faced entailed performance problems that were being blamed on the Storage or the SAN rather than on CPU constraints, especially as overall CPU utilization for the ESX server that hosted the VMs would be reported as low. Using Virtual Instruments’ VirtualWisdom I was able to quickly conclude that the problem was not at all related to the SAN or Storage but to the hosts themselves. By being able to historically trend and correlate the vCenter, SAN and Storage metrics of the problematic VMs on a single dashboard, it was apparent that the high number of vCPUs assigned to each VM was the cause. This was indicated by a high reading of what is termed the ‘CPU Ready’ metric.
To elaborate, CPU Ready is a metric that measures the amount of time a VM is ready to run against the pCPU i.e. how long a vCPU has to wait for an available core when it has work to perform. So while it’s possible that CPU utilization may not be reported as high, if the CPU Ready metric is high then your performance problem is most likely related to CPU. In the instances that I saw, this was caused by customers assigning four vCPUs and in some cases eight to each Virtual Machine. So why was this happening?

[Image: VirtualWisdom dashboard indicating high CPU Ready]
Well, firstly, the hardware and its physical CPU resource is still shared. Coupled with this, the ESX Server itself also requires CPU to process storage requests, network traffic etc. Then add the situation that sadly most organizations still suffer from the ‘silo syndrome’, and hence there still isn’t a clear dialogue between the System Admin and the Application owner. The consequence is that while multiple vCPUs are great for workloads that support parallelization, this is not the case for applications that don’t have built-in multi-threaded structures. So while a VM with 4 vCPUs will require the ESX server to wait for 4 pCPUs to become available, on a particularly busy ESX server with other VMs this could take significantly longer than if the VM in question only had a single vCPU.
To explain this further let’s take an example of a four pCPU host that has four VMs, three with 1 vCPU and one with 4 vCPUs. At best only the three single vCPU VMs can be scheduled concurrently. In such an instance the 4 vCPU VM would have to wait for all four pCPUs to be idle. In this example the excess vCPUs actually impose scheduling constraints and consequently degrade the VM’s overall performance, typically indicated by low CPU utilization but a high CPU Ready figure. With the ESX server scheduling and prioritising workloads according to what it deems most efficient to run, the consequence is that smaller VMs will tend to run on the pCPUs more frequently than the larger overprovisioned ones. So in this instance overprovisioning was in fact proving to be detrimental to performance as opposed to beneficial. Now in more recent versions of vSphere the scheduling of different vCPUs and de-scheduling of idle vCPUs is not as contentious as it used to be. Despite this, the VMKernel still has to manage every vCPU, a complete waste if the VM’s application doesn’t use them!
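The effect can be seen even in a toy model. The sketch below is not how the ESX scheduler actually works – as noted, recent vSphere versions use far more relaxed co-scheduling – it simply assumes strict co-scheduling and a scheduler that places smaller, fitting VMs first, to show how a 4-vCPU VM can accumulate CPU Ready time on a lightly used 4-pCPU host.

    # Toy model of strict co-scheduling on a 4-pCPU host. Purely illustrative:
    # the real ESX scheduler is far more sophisticated than this.
    PCPUS = 4
    vms = {"small-1": 1, "small-2": 1, "small-3": 1, "big": 4}   # name -> vCPU count
    ready_ticks = {name: 0 for name in vms}
    busy_pcpu_ticks = 0

    for t in range(1, 1001):
        # Staggered light demand: each small VM wants CPU one tick in three,
        # while the 4-vCPU VM wants CPU on every tick.
        wants = [name for i, name in enumerate(["small-1", "small-2", "small-3"])
                 if t % 3 == i] + ["big"]
        free = PCPUS
        for name in wants:                       # smaller, fitting VMs placed first
            if vms[name] <= free:
                free -= vms[name]                # enough idle pCPUs: the VM runs
            else:
                ready_ticks[name] += 1           # not enough idle pCPUs: CPU Ready
        busy_pcpu_ticks += PCPUS - free

    print("Host CPU utilization: %.0f%%" % (100.0 * busy_pcpu_ticks / (PCPUS * 1000)))
    print("CPU Ready ticks:", ready_ticks)       # the 4-vCPU VM waits on every tick

Host utilization comes out at roughly 25%, yet the overprovisioned VM spends every tick waiting – exactly the ‘low utilization, high CPU Ready’ signature described above.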
To ensure your vCPU to pCPU ratio is at its optimal level and that you reap the benefits of this great feature, there are some straightforward considerations to make. Firstly, there needs to be dialogue between the silos to fully understand the application’s workload prior to VM resource allocation. In the case of applications where the workload may not be known, it’s key not to overprovision virtual CPUs but rather to start with a single vCPU and scale out as and when necessary. Having a monitoring platform that can historically trend the performance and workloads of such VMs is also highly beneficial in determining such factors. As mentioned earlier, CPU Ready is a key metric to consider as well as CPU utilization. Correlating this with Memory and Network statistics, as well as SAN I/O and Disk I/O metrics, enables you to proactively avoid any bottlenecks, correctly size your VMs and hence avoid overprovisioning. This can also be extended to considering how many VMs you allocate to an ESX Server and to ensuring that its physical CPU resources are sufficient to meet the needs of your VMs. As businesses’ key applications become virtualized, it is imperative that the correct vCPU to pCPU ratio is allocated, whether they are old legacy single-threaded workloads or new multi-threaded workloads. In this instance size isn’t always everything; it’s what you do with your CPU that counts.
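For anyone pulling CPU Ready out of vCenter, note that it is reported as a summation in milliseconds per sampling interval; a commonly used conversion to a percentage (assuming the real-time chart’s 20-second interval) is sketched below.

    # Sketch: converting a vCenter CPU Ready summation (milliseconds per sampling
    # interval) into a percentage. Assumes the 20-second real-time chart interval;
    # adjust interval_s for other chart levels.
    def cpu_ready_percent(ready_ms, interval_s=20):
        return (ready_ms / (interval_s * 1000.0)) * 100.0

    # e.g. 2,000 ms of ready time in a 20 s sample = 10% CPU Ready per vCPU,
    # which most sizing guides would treat as a warning sign.
    print(round(cpu_ready_percent(2000), 1))  # 10.0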

 

Content Reference: The original author of this article is Archie Hendryx. The article can be found at: http://www.thesanman.org/2012/01/vspheres-virtual-cpus-what-o.html 

Cloud Computing vs Virtualisation: Is There a Difference?


In the many meetings that we have at customer sites, conferences, seminars, and other industry events, one conversation point that we continually run into is the difference between Cloud Computing and Virtualization. The “cloud” has become a term shrouded in confusion. Many IT organizations position themselves as “doing cloud” to satisfy questions from the business and other executives, because they believe virtualization to be synonymous with cloud. While virtualization is an integral part of cloud computing, they are not the same thing.

Virtualization has been around for many years and is the way for IT to maximize the use of compute, storage, networking and to provide increased flexibility to those resources. Cloud computing brings significant value on top of virtualization platforms by streamlining management processes and increasing efficiencies to reduce the total cost of ownership (TCO). While most people talk about these two terms interchangeably, they are truly very different.

So, what technically makes cloud different than virtualization? 

At its most basic level, Cloud treats computing as a utility rather than a specific product or technology. Cloud computing evolved from the concept of utility computing and can be thought of as many different computers pretending to be one computing environment. Cloud computing is the delivery of compute and storage resources as a service to end-users over a network. Virtualization itself does not provide the customer a self-service layer, and without that layer you cannot deliver compute as a service. Orchestration is the combination of tools, processes and architecture that enables virtualization to be delivered as a service. This architecture allows end-users to self-provision their own servers, applications and other resources. Virtualization itself allows companies to fully maximize the computing resources at their disposal, but it still requires a system administrator to provision the virtual machine for the end-user.
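As an illustration of that self-service layer, the request below is the kind of call an end-user-facing portal or orchestration API might accept; the endpoint, payload fields and token are entirely hypothetical, standing in for whatever orchestration product sits on top of your hypervisors.

    # Sketch: an end-user self-provisioning a server through an orchestration API.
    # The endpoint, payload fields and token are hypothetical placeholders.
    import requests

    ORCHESTRATOR = "https://cloud.example.internal/api/v1/servers"  # hypothetical
    TOKEN = "YOUR_PORTAL_TOKEN"                                     # placeholder

    spec = {"name": "dev-web-01", "vcpus": 2, "memory_gb": 4, "image": "ubuntu-22.04"}
    resp = requests.post(ORCHESTRATOR, json=spec,
                         headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    print("Provisioned:", resp.json())  # no system administrator in the loop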

Orchestration is what allows computing to be consumed as a utility and what separates cloud computing from virtualization. Cloud computing is the belief that computing resources and hardware are a commodity to the point that people and companies will purchase these resources from a central pool and only pay for what they used. In essence, these resources are metered, very similar to how you buy power or water for your home.
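That utility analogy translates directly into metered billing: usage is measured and charged per period, just as a power or water meter would be read. A minimal sketch, with made-up rates:

    # Sketch: metering cloud usage like a utility. The rates below are made up.
    RATES = {"vcpu_hours": 0.04, "gb_storage_month": 0.02, "gb_egress": 0.09}  # USD

    def monthly_bill(usage):
        return sum(usage.get(item, 0) * rate for item, rate in RATES.items())

    usage = {"vcpu_hours": 1440, "gb_storage_month": 500, "gb_egress": 120}
    print(f"This month: ${monthly_bill(usage):.2f}")  # pay only for what you used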

Cloud computing is an approach for the delivery of services to an end-user while virtualization is one possible service that could be delivered.

A self-service model, however, is not an essential component of virtualization, but it is in cloud computing. Some will argue that virtualization solutions may include a self-service component; however, it is not mandatory. In cloud computing, self-service is crucial to delivering on-demand resources to end users, which is what service is all about. Self-service is, then, an effective mechanism to reduce the amount of training and support needed at all levels within an organization. It is a crucial vehicle to accelerate the ROI of a cloud computing solution.

So how do you decide which of these technologies meets your needs?

It is important to remember that you are not choosing between Virtualization and Cloud Computing as a final solution for your IT needs. If you have already invested in virtualization, the cloud can work on top of that to help further maximize your computing efficiency in certain instances and help deliver your existing network as a service. The cloud enables you to use only what you need, and this ability to rapidly, elastically, and in some cases automatically provision computing resources to quickly scale out – and to rapidly release them to quickly scale in – allows you to concentrate on your core business and not have to worry about IT management.
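That elastic scale-out/scale-in behaviour is usually driven by a simple control loop: measure demand, then adjust the number of instances toward a target. A minimal sketch of such a sizing rule (the thresholds and bounds are illustrative, not tied to any particular provider):

    # Sketch: an elastic sizing rule -- scale out when busy, scale in when idle.
    # Target utilization, bounds and inputs are illustrative only.
    def desired_instances(current, avg_cpu_pct, target_pct=60, min_n=1, max_n=20):
        ideal = current * (avg_cpu_pct / target_pct)   # proportional sizing
        return max(min_n, min(max_n, round(ideal)))

    print(desired_instances(current=4, avg_cpu_pct=90))  # 6 -> scale out
    print(desired_instances(current=4, avg_cpu_pct=15))  # 1 -> scale in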

In the end, virtualization and the cloud computing operational model allow you to do more with what you have by maximizing the utilization of your computing and infrastructure resources. Just remember, they are not the same thing. While both have their respective advantages, you’ll want to think about factors like start-up vs. long-term costs in both models and which is the better fit for your organization.

* Source: The original author of this article is Josh Ames.


Cloud computing – What is it? What are the key components?


There seems to be a lot of confusion around what exactly ‘Cloud computing‘ is.

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

According to NIST there are some key components that help define Cloud computing. The cloud model is composed of five essential characteristics, three service models, and four deployment models.

Essential Characteristics:

On-demand self-service : A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human  interaction with each service provider.

Broad network access : Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).

Resource pooling : The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacentre). Examples of resources include storage, processing, memory, and network bandwidth.

Rapid elasticity: Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.

Measured service : Cloud systems automatically control and optimize resource use by leveraging a metering capability [1] at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

Service Models:

Software as a Service (SaaS) : The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure [2]. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Platform as a Service (PaaS) : The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider [3]. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.

Infrastructure as a Service (IaaS) : The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).

Deployment Models:

Private cloud : The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.

Community cloud : The cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.

Public cloud : The cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider.

Hybrid cloud : The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

  ______________________________________________

[1] Typically this is done on a pay-per-use or charge-per-use basis.

[2] A cloud infrastructure is the collection of hardware and software that enables the five essential characteristics of cloud computing. The cloud infrastructure can be viewed as containing both a physical layer and an abstraction layer. The physical layer consists of the hardware resources that are necessary to support the cloud services being provided, and typically includes server, storage and network components. The abstraction layer consists of the software deployed across the physical layer.

[3] This capability does not necessarily preclude the use of compatible programming languages, libraries, services, and tools from other sources.

* Information for this article was taken from NIST.
