2. Cloud Computing Architecture



TOPICS: Cloud Computing Architecture: Cloud computing stack, Comparison with traditional computing architecture (client/server), Services provided at various levels, How Cloud Computing Works, Role of Networks in Cloud computing, protocols used, Role of Web services, Service Models (XaaS), Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), Virtualization Technology: Virtual machine technology, virtualization applications in enterprises, Pitfalls of virtualization, Infrastructure as a Service (IaaS) using OpenStack/OwnCloud.

2.1 CLOUD COMPUTING ARCHITECTURE

Cloud Computing architecture comprises many loosely coupled cloud components. We can broadly divide the cloud architecture into two parts:
  • Front End
  • Back End
Each of the ends is connected through a network, usually the Internet. The following diagram shows the graphical view of cloud computing architecture:





Front End

The front end refers to the client part of the cloud computing system. It consists of the interfaces and applications that are required to access the cloud computing platforms. Example: a web browser.


Back End

The back end refers to the cloud itself. It consists of all the resources required to provide cloud computing services. It comprises huge data storage, virtual machines, security mechanisms, services, deployment models, servers, etc.
Note
  • It is the responsibility of the back end to provide built-in security mechanisms, traffic control and protocols.
  • The server employs software known as middleware, which helps the connected devices communicate with each other.
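As a minimal sketch of the front end/back end split described above, the snippet below plays the front-end role in Python instead of a browser and asks a back-end service for its status over the network. The endpoint URL is hypothetical, not a real service:

    # front_end_client.py -- hedged sketch; the back-end URL below is a placeholder
    import json
    import urllib.request

    BACKEND_URL = "https://api.example-cloud.com/status"   # hypothetical back-end service

    def check_backend():
        """Front-end role: ask the back end for data over the network and present it."""
        with urllib.request.urlopen(BACKEND_URL, timeout=5) as response:
            return json.loads(response.read().decode("utf-8"))

    if __name__ == "__main__":
        print(check_backend())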



2.2  CLOUD COMPUTING STACK

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
The diagram below depicts the Cloud Computing stack – it shows three distinct categories within Cloud Computing: Software as a Service, Platform as a Service and Infrastructure as a Service.
In this section we look at all three categories in detail; however, a very simplified way of differentiating these flavors of Cloud Computing is as follows:


• SaaS applications are designed for end-users, delivered over the web 


• PaaS is the set of tools and services designed to make coding and deploying those applications quick and efficient 



• IaaS is the hardware and software that powers it all – servers, storage, networks, operating systems

Software as a Service

Software as a Service (SaaS) is defined as:

  • software that is deployed over the internet... With SaaS, a provider licenses an application to customers either as a service on demand, through a subscription, in a “pay-as-you-go” model, or (increasingly) at no charge when there is opportunity to generate revenue from streams other than the user, such as from advertisement or user list sales 

  • SaaS is a rapidly growing market as indicated in recent reports that predict ongoing double digit growth. This rapid growth indicates that SaaS will soon become commonplace within every organization and hence it is important that buyers and users of technology understand what SaaS is and where it is suitable.


Characteristics of PaaS

There are a number of different takes on what constitutes PaaS, but some basic characteristics include the following (a hypothetical deployment-descriptor sketch follows the list):

• Services to develop, test, deploy, host and maintain applications in the same integrated development environment. All the varying services needed to fulfil the application development process 

• Web based user interface creation tools help to create, modify, test and deploy different UI scenarios 


• Multi-tenant architecture where multiple concurrent users utilize the same development application.


• Built in scalability of deployed software including load balancing and failover 

• Integration with web services and databases via common standards 


• Support for development team collaboration – some PaaS solutions include project planning and communication tools 


• Tools to handle billing and subscription management
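Several of these characteristics (managed runtime, built-in scaling and load balancing, middleware integration, subscription billing) are usually expressed in a deployment descriptor that the platform consumes. The sketch below is a hypothetical descriptor written as a Python structure; it is not any particular vendor's format:

    # paas_app_descriptor.py -- hypothetical PaaS deployment descriptor, for illustration only
    app_descriptor = {
        "name": "orders-web",
        "runtime": "python3.11",               # platform-managed runtime; customer does not manage the OS
        "instances": {"min": 2, "max": 10},    # built-in scalability: the platform scales between these bounds
        "load_balancer": True,                 # traffic spread across instances automatically
        "services": {                          # integration with vendor-managed middleware
            "database": "managed-postgres",
            "queue": "managed-rabbitmq",
        },
        "billing": {"plan": "pay-as-you-go"},  # subscription/billing handled by the platform
    }

    if __name__ == "__main__":
        for key, value in app_descriptor.items():
            print(f"{key}: {value}")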

Infrastructure as a Service


  • Infrastructure as a Service (IaaS) is a way of delivering Cloud Computing infrastructure – servers, storage, network and operating systems – as an on-demand service. Rather than purchasing servers, software, datacenter space or network equipment, clients instead buy those resources as a fully outsourced service on demand.



  • As detailed in a previous whitepaper, within IaaS there are some sub-categories that are worth noting. Generally IaaS can be obtained as public or private infrastructure or a combination of the two. “Public cloud” is considered infrastructure that consists of shared resources, deployed on a self-service basis over the Internet.



  • By contrast, “private cloud” is infrastructure that emulates some of the features of Cloud Computing, like virtualization, but does so on a private network. Additionally, some hosting providers are beginning to offer a combination of traditional dedicated hosting alongside public and/or private cloud networks. This combination approach is generally called “Hybrid Cloud”.



Characteristics of IaaS
As with the two previous sections, SaaS and PaaS, IaaS is a rapidly developing field. That said, there are some core characteristics which describe what IaaS is. IaaS is generally accepted to comply with the following (a toy metered-pricing calculation follows the list):

• Resources are distributed as a service 

• Allows for dynamic scaling 


• Has a variable cost, utility pricing model 


• Generally includes multiple users on a single piece of hardware
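The "variable cost, utility pricing model" means the consumer pays per unit of resource actually consumed, metered over time. The arithmetic below sketches that idea; the hourly rates are invented for illustration and are not any provider's real price list:

    # metered_cost.py -- illustrative utility-pricing arithmetic; all rates are hypothetical
    HOURLY_RATES = {
        "small_vm": 0.05,      # USD per instance-hour (invented)
        "storage_gb": 0.0001,  # USD per GB-hour (invented)
    }

    def monthly_cost(vm_hours: float, storage_gb: float, hours_in_month: float = 730) -> float:
        """Utility pricing: pay only for what is consumed, metered per hour."""
        compute = vm_hours * HOURLY_RATES["small_vm"]
        storage = storage_gb * hours_in_month * HOURLY_RATES["storage_gb"]
        return round(compute + storage, 2)

    # e.g. two VMs running the whole month plus 500 GB of storage
    print(monthly_cost(vm_hours=2 * 730, storage_gb=500))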





2.3 COMPARISON WITH TRADITIONAL COMPUTING ARCHITECTURE (CLIENT/SERVER)

  • Client/server is a model in which information processing is split between a client and a server. Back in the old days, we had time-share computers (minis, mainframes, etc.) that were accessed by terminals that only manipulated the display of information but didn't do any processing. Much of what we do with web apps today is not really client/server for the same reason: the server is doing the work and the browser is displaying the results. Yes, with JavaScript we can make the display fancy and dynamic and even do some processing, but in most cases the application model is still that the browser displays information and the servers process information. It's a terminal model, just prettier.

  • A better example of client/server is email. Your email client processes incoming email and then presents it to you. The mail server processes email and figures out where it goes next. Both sides are processing information. Other examples of client/server are a web application that uses an RDBMS, or a web service that relies on another web service for processing or data (a minimal socket-based client/server sketch follows this list).

  • Cloud computing is a different animal altogether. Cloud computing embodies the idea that you can abstract the software from the hardware and have applications that scale up and down based on factors such as demand and time. The act of provisioning services in the cloud is automated and requires no user intervention. Clouds are also on-demand and can be metered, meaning that you are only charged for the resources that you use. It's a consumption model.

  • Client server describes how applications are modeled. Cloud computing describes the environment that applications reside in. 

  • You can have client/server apps that live in a cloud environment and ones that don't. If you put a client/server app into a cloud, that doesn't mean it will get all the benefits of cloud computing, like auto-scaling, because auto-scaling requires that the application be designed specifically for independent scaling. Similarly, you can often take cloud apps and run them in a non-cloud environment just fine.
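As referenced above, here is a minimal client/server sketch using only the Python standard library. It illustrates the split of work between the two sides (the server processes, the client presents); it is a toy, not any particular product's protocol:

    # echo_client_server.py -- minimal client/server split, standard library only
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 50007

    def handle_one_client(srv: socket.socket) -> None:
        """Server side: accept one connection and do the 'processing' (upper-casing)."""
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data.upper())   # the server-side share of the work

    if __name__ == "__main__":
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            threading.Thread(target=handle_one_client, args=(srv,), daemon=True).start()

            # Client side: send a request, then present (print) the processed result.
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
                cli.connect((HOST, PORT))
                cli.sendall(b"hello from the client")
                print(cli.recv(1024).decode())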






2.4 SERVICES PROVIDED AT VARIOUS LEVELS

Service models

According to NIST, there are three service models: infrastructure (IaaS), platform (PaaS), and software as a service (SaaS). To get a better understanding of what each of the service models comprises, refer to the following image, which depicts the layers of which a typical IT solution consists.



An infrastructure as a service solution should include vendor-managed network, storage, servers, and virtualization layers for a client to run their application and data on. Next, platform as a service builds on top of infrastructure as a service, adding vendor-managed middleware such as web, application, and database software. Software as a service again builds on top of that, most of the time adding applications that implement specific user functionality such as email, CRM, or HRM.
Refer to Section 2.2 for descriptions of IaaS, PaaS, and SaaS.
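A rough way to see the difference between the three models is to ask who manages each layer. The sketch below encodes that split as a small Python structure; the layer names follow the description above, and the split shown is the commonly cited one rather than any specific vendor's contract:

    # responsibility_split.py -- who manages which layer under each service model (illustrative)
    LAYERS = ["network", "storage", "servers", "virtualization",
              "middleware", "runtime", "application", "data"]

    PROVIDER_MANAGED = {
        "IaaS": {"network", "storage", "servers", "virtualization"},
        "PaaS": {"network", "storage", "servers", "virtualization", "middleware", "runtime"},
        "SaaS": set(LAYERS),   # everything, including the application itself
    }

    if __name__ == "__main__":
        for model, managed in PROVIDER_MANAGED.items():
            customer = [layer for layer in LAYERS if layer not in managed]
            print(f"{model}: customer still manages {customer or 'nothing'}")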


2.5 HOW CLOUD COMPUTING WORKS

  • Unlike shared grids, which are based on open source technologies, clouds are typically a proprietary technology. Only the resource provider knows exactly how their cloud manages data, job queues and security requirements.


  • To understand exactly how cloud computing works, let’s consider that the cloud consists of layers: mainly the back-end layers and the front-end layers. The front layers are the parts you see and interact with. When you access your profile on your Facebook account, for example, you are using software running on the front end of the cloud. The back end consists of the hardware and the software architecture that delivers the data you see on the front end.


  • Clouds use a network layer to connect users’ end point devices, like computers or smart phones, to resources that are centralised in a data centre. Users can access the data centre via a company network or the internet or both. Clouds can also be accessed from any location, allowing mobile workers to access their business systems on demand.


  • Applications running on the cloud take advantage of the flexibility of the computing power available. The computers are set up to work together so that it appears as if the applications were running on one particular machine. This flexibility is a major advantage of cloud computing, allowing the user to use as much or as little of the cloud resources as they want at short notice, without assigning any specific hardware for the job in advance (a toy autoscaling loop illustrating this elasticity follows the list).
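The elasticity described above is usually driven by simple feedback on observed load. The loop below is a toy sketch of that idea, with invented thresholds and a fake load reading; a real cloud would use its monitoring and orchestration services instead:

    # toy_autoscaler.py -- illustrative scaling loop; thresholds and load values are made up
    import random

    MIN_INSTANCES, MAX_INSTANCES = 1, 8
    SCALE_UP_AT, SCALE_DOWN_AT = 0.75, 0.25   # hypothetical utilisation thresholds

    def observed_load() -> float:
        """Stand-in for a monitoring service; returns average utilisation 0..1."""
        return random.random()

    def next_instance_count(current: int, load: float) -> int:
        if load > SCALE_UP_AT and current < MAX_INSTANCES:
            return current + 1       # demand is high: provision another instance
        if load < SCALE_DOWN_AT and current > MIN_INSTANCES:
            return current - 1       # demand is low: release an instance
        return current

    if __name__ == "__main__":
        instances = 2
        for _ in range(5):
            load = observed_load()
            instances = next_instance_count(instances, load)
            print(f"load={load:.2f} -> instances={instances}")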



2.6 ROLE OF NETWORKS IN CLOUD COMPUTING


  • Networks are no longer just traditional packet-switching platforms; they are the heart and soul of intelligence, integrating with other intelligent applications to differentiate the multitude of services that can be enabled over a common medium. As application requirements become increasingly complex, the need for equally smart transport is critical.



  • Virtualization is bringing a whole new perspective to this discussion. It’s true you can account for network, compute and storage virtualization within a given solution (virtual switch, virtual machine, virtual firewall, virtual load balancer, etc.), but how far can we abstract the network? One can absolutely argue that Cloud Computing is server/compute resource centric; however, for most enterprises, when you combine this compute structure with application workload requirements from business, technology and operations perspectives, suddenly the foundation architecture plays a crucial role – i.e. the network and its interconnects.

  • Whether you offer a consumption model that is IaaS, PaaS, SaaS or any such combination, the underlying foundation network capabilities and dependencies must be accounted for from all perspectives. This establishes the need for a solid infrastructure foundation architecture, and hence the value of the network.



What role does the network play in a Data Center?


  • Data Centers are usually where business-critical applications reside and where business-critical logic happens, for both internal and external consumers. There are many levels of communication that need to happen internally and externally to the Data Centers. Ensuring that these communications happen seamlessly, efficiently and in a secure manner is a critical role of the network that ties all these components together.


  • Let’s take a look at the simplified figure below. Each block has dependencies on multiple other blocks, which establishes the workload patterns that the network has to carry. These dependencies across the modules perform specific business functions. These business functions could be carrying different workloads, such as:
1. Running complex business critical applications across multiple tiers and locations
2. Load sharing and clustering of applications across geographies
3. Cloud Computing – Automation & Orchestration workloads
4. Disaster Recovery and Business Continuance (DR/BC) – availability workloads
5. Data replication and backup workloads
6. Security and compliance enforcement
7. Development and testing workloads
8. Day-to-day maintenance
9. Management workloads


One thing in common across all these functions is the network and its ability to bind these components together!





  • It is more critical than ever to have an intelligent, reliable and functional network that provides next-generation innovations for enterprises to evolve from a traditional network to a “Cloud enabled” network. What is a “Cloud enabled” network? A network that is VM-aware, a network that can grow and shrink based on consumption demands, a network that can re-calculate paths dynamically during failures, a network that can guarantee different classes of service based on predefined parameters and postures, a network that ensures no blocked paths, a network that can track shifting workloads and react accordingly (VM mobility); we can go on and on. Bottom line, networks are becoming programmable (APIs) and flexible to accommodate the shifting application paradigms demanded in various Cloud models (a hedged sketch of such a programmable-network call appears at the end of this section).


  • There are many network-based innovations that have been widely discussed in Cisco and other forums, like Virtual Port-Channels (vPC), Overlay Transport Virtualization (OTV), Locator/ID Separation Protocol (LISP), FabricPath, FibreChannel-over-Ethernet (FCoE), Virtual Security Gateway (VSG), etc. These innovations, with next-generation HW/SW combinations like Cisco Nexus series products, help create a path towards a unified fabric, network and compute approach to Cloud Computing. This is further proof that we are trying to address business and technical challenges with smarter networking tools. This level of intelligent networking is not required in every scenario, but based on the business and technology requirements, next-generation Data Center networks are making application decisions that they never had to make before!
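To make "networks are becoming programmable (APIs)" concrete, the sketch below asks a network controller to provision a VLAN over a REST API. The controller address, endpoint and payload shape are hypothetical placeholders, not the API of any specific Cisco (or other) product:

    # provision_vlan.py -- hedged sketch of a programmable-network call; the controller and API are hypothetical
    import json
    import urllib.request

    CONTROLLER = "https://network-controller.example.local/api/v1/vlans"   # placeholder address

    def create_vlan(vlan_id: int, name: str) -> dict:
        """Ask the (hypothetical) controller to provision a VLAN programmatically."""
        body = json.dumps({"id": vlan_id, "name": name}).encode("utf-8")
        request = urllib.request.Request(
            CONTROLLER, data=body,
            headers={"Content-Type": "application/json"}, method="POST")
        with urllib.request.urlopen(request, timeout=5) as response:
            return json.loads(response.read().decode("utf-8"))

    if __name__ == "__main__":
        print(create_vlan(210, "cloud-tenant-a"))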



2.8 ROLE OF WEB SERVICES

  • Some of you who have used Amazon Web Services have wondered about the role of SaaS in cloud computing. The good news is that you can develop Web-aware, cloud-friendly SaaS on top of these Web services. You can sell the SaaS to a large customer base, such as consultants or product engineers, and reduce the up-front expenses of purchasing software by offering less costly, on-demand pricing. Another advantage is that SaaS provides updates at a centralized location, eliminating the need for you to download frequent patches and upgrades.


  • Cloud computing is a style of computing in which virtualised and standard resources, software and data are provided as a service over the Internet.

  • Consumers and businesses can use the cloud to store data and applications, and can interact with the Cloud using mobiles, desktop computers, laptops etc. via the Internet from anywhere and at any time.

  • The technology of Cloud computing entails the convergence of Grid and cluster computing, virtualisation, Web services and Service Oriented Architecture (SOA) - it offers the potential to set IT free from the costs and complexity of its typical physical infrastructure, allowing concepts such as Utility Computing to become meaningful.

Key players include: IBM, HP, Google, Microsoft, Amazon Web Services, Salesforce.com, NetSuite, VMware.
Benefits of Cloud Computing:
  • predictable any time, anywhere access to IT resources
  • flexible scaling of resources (resource optimisation)
  • rapid, request-driven provisioning
  • lower total cost of operations
Risks and Challenges of Cloud computing include:
  • security of data and data protection
  • data privacy
  • legal issues
  • disaster recovery
  • failure management and fault tolerance
  • IT integration management issues
  • business regulatory requirements
  • SLA (service level agreement) management

Web services refers to software that provides a standardized way of integrating Web-based applications using the XML, SOAP, WSDL and UDDI open standards over the Internet.
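As a hedged illustration of those standards in use, the sketch below posts a hand-built SOAP envelope to a web service over HTTP. The endpoint, XML namespace and operation name (GetQuote) are placeholders, not a real service:

    # soap_call.py -- hedged sketch of invoking a SOAP web service; endpoint and operation are hypothetical
    import urllib.request

    ENDPOINT = "https://ws.example.com/quotes"          # placeholder SOAP endpoint
    ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetQuote xmlns="http://ws.example.com/quotes">
          <symbol>ACME</symbol>
        </GetQuote>
      </soap:Body>
    </soap:Envelope>"""

    def call_service() -> str:
        request = urllib.request.Request(
            ENDPOINT,
            data=ENVELOPE.encode("utf-8"),
            headers={"Content-Type": "text/xml; charset=utf-8",
                     "SOAPAction": "http://ws.example.com/quotes/GetQuote"},
            method="POST")
        with urllib.request.urlopen(request, timeout=5) as response:
            return response.read().decode("utf-8")   # raw SOAP response (XML)

    if __name__ == "__main__":
        print(call_service())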

2.9 SERVICE MODELS (XAAS)

XaaS

This is the essence of cloud computing. It refers to an increasing number of services that are delivered over a network. Anything as a service requires an understanding of the service objectives and the accounting of service use and quality. The objectives, use, and quality can be determined from the underlying reference model for SOI (service-oriented infrastructure):
Broad network access (cloud) + resource pooling (cloud) + business-driven infrastructure on-demand (SOI) + service-orientation (SOI) = XaaS

Service Assurance

Cloud computing guarantees certain levels of service to the cloud’s customers. When that service degrades, it is necessary to understand the relationship of infrastructure activity to these services so that the situation can be remediated. SOI facilitates the determination of these relationships:
Operational transparency (SOI) + measured service (cloud) = Service Assurance

PaaS – Platform as a Service

This concept is the provisioning of an application platform, typically for web applications, from a Cloud Service Provider to the customer.  The customer requires an environment and the tools to create a specific application, so the CSP hosts the platform and the application for the customer to leverage.  An example of PaaS is NIRIX’s oneCloud Control Panel (oCCP) tool.

IaaS – Infrastructure as a Service

This is the perfect service for businesses seeking the use of a datacenter or a server, but not wanting to invest in the building and maintenance of facilities or hardware/software costs.  Customers can run applications on the Cloud Service Provider’s infrastructure, usually through a virtual/cloud server(s), or can host servers in the CSP’s datacenter.  IaaS is often a server hosting service. An example of IaaS is NIRIX’s oneServer.

DaaS – Desktop as a Service


This service allows a customer to host their entire desktop computing environment through a Cloud Service Provider. This means access to software, applications, email, data storage/online backup, etc. without IT maintenance and software costs. An example of DaaS is NIRIX’s oneDesktop.  This is an “everything as a service” option.





2.10 INFRASTRUCTURE AS A SERVICE (IAAS)

Infrastructure as a Service (IaaS) abstracts hardware (server, storage, and network infrastructure) into a pool of computing, storage, and connectivity capabilities that are delivered as services for a usage-based (metered) cost. Its goal is to provide a flexible, standard, and virtualized operating environment that can become a foundation for PaaS and SaaS.

IaaS is usually seen to provide a standardized virtual server. The consumer takes responsibility for configuration and operations of the guest Operating System (OS), software, and Database (DB). Compute capabilities (such as performance, bandwidth, and storage access) are also standardized.
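Since the unit topics mention IaaS using OpenStack, here is a hedged sketch that boots such a standardized virtual server with the openstacksdk library. The cloud entry name "mycloud" and the image, flavor and network names are assumptions; they must exist in your clouds.yaml and target cloud:

    # boot_server.py -- hedged sketch using openstacksdk (pip install openstacksdk);
    # "mycloud" and the image/flavor/network names below are assumptions for illustration.
    import openstack

    conn = openstack.connect(cloud="mycloud")            # reads credentials from clouds.yaml

    image = conn.compute.find_image("ubuntu-22.04")      # assumed image name
    flavor = conn.compute.find_flavor("m1.small")        # assumed flavor name
    network = conn.network.find_network("private")       # assumed tenant network

    server = conn.compute.create_server(
        name="demo-iaas-vm",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)        # block until the VM is ACTIVE
    print(server.name, server.status)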




2.11 PLATFORM AS A SERVICE(PAAS)

Platform as a Service (PaaS) delivers application execution services, such as application runtime, storage, and integration, for applications written for a pre-specified development framework. PaaS provides an efficient and agile approach to operate scale-out applications in a predictable and cost-effective manner. Service levels and operational risks are shared because the consumer must take responsibility for the stability, architectural compliance, and overall operations of the application while the provider delivers the platform capability (including the infrastructure and operational functions) at a predictable service level and cost.




2.12 SOFTWARE AS A SERVICE (SAAS)

Software as a Service (SaaS) delivers business processes and applications, such as CRM, collaboration, and e-mail, as standardized capabilities for a usage-based cost at an agreed, business-relevant service level. SaaS provides significant efficiencies in cost and delivery in exchange for minimal customization and represents a shift of operational risks from the consumer to the provider. All infrastructure and IT operational functions are abstracted away from the consumer.




2.13 VIRTUALIZATION TECHNOLOGY: VIRTUAL MACHINE TECHNOLOGY

What is virtualization?

Virtualization is a proven software technology that makes it possible to run multiple operating systems and applications on the same server at the same time. It’s transforming the IT landscape and fundamentally changing the way that people utilize technology.

Benefits of Virtualization
Virtualization can increase IT agility, flexibility, and scalability while creating significant cost savings. Workloads get deployed faster, performance and availability increase, and operations become automated, resulting in IT that's simpler to manage and less costly to own and operate.
  • Reduce capital and operating costs.
  • Deliver high application availability.
  • Minimize or eliminate downtime.
  • Increase IT productivity, efficiency, agility and responsiveness.
  • Speed and simplify application and resource provisioning.
  • Support business continuity and disaster recovery.
  • Enable centralized management.
  • Build a true Software-Defined Data Center.
Virtual Machine Technology


Cloud computing is the delivery of utility computing as a service. Rather than a product installed on one specific device, it provides computation, data access, and storage services without requiring end users to be aware of the configuration or the actual physical location of the system that provides those services. Resources and information are offered as a utility to computers and other digital devices through the Internet (or other networks). Cloud computing is revolutionizing how Information Technology resources and services are used and managed. Because it makes it attractive to migrate software from local personal computers to remote, network-based servers delivered as a vendor-provided service, cloud computing has the potential to shift the geography of computation and to transform a large part of the Information Technology industry, including Internet services, software development, and the way hardware is designed and purchased.

Given that the server system which provides a cloud computing service is totally isolated from the end users, virtualization is an important enabling technology and has already been widely adopted in cloud computing environments. Virtualization provides many benefits, including ease of management, security, performance isolation, flexibility of running in a user-customized environment, and more efficient use of hardware resources: many computers are presented as one computing environment. In practice, the computers making up the cloud system are themselves virtualized in order to maximize the resources of the physical machines.

Central to virtualization, a virtual machine (VM) is a software implementation of a machine (typically a computer) that executes programs like a physical machine. One of the major advantages of cloud computing is the ability to use a variable number of physical machines and VM instances depending on the demand of the problem; it means sharing the resources of a single physical computer among several different virtual computing environments. A cloud is built up of numerous physical machines (the hardware), and each of these machines runs multiple virtual machines, which are what is presented to the end users. This technology is applied as an enabler of cloud computing, providing high-performance, high-availability computing resources with unique benefits.
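As a hedged, concrete glimpse of the "many VMs on one physical machine" idea, the sketch below lists the virtual machines running on a host using the libvirt Python bindings. It assumes libvirt-python is installed and a local libvirt-managed hypervisor (e.g. KVM/QEMU) is reachable at qemu:///system:

    # list_vms.py -- hedged sketch using libvirt-python (pip install libvirt-python);
    # assumes a local hypervisor managed by libvirt at qemu:///system.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    try:
        for dom in conn.listAllDomains():
            state, max_mem_kib, _mem, vcpus, _cpu_time = dom.info()
            print(f"{dom.name():20s} active={bool(dom.isActive())} "
                  f"vcpus={vcpus} max_mem={max_mem_kib // 1024} MiB")
    finally:
        conn.close()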



2.14 VIRTUALIZATION APPLICATIONS IN ENTERPRISES

Virtualization today is mission critical. A strong majority of respondents (64%) are virtualizing mission-critical applications.

The business reason for most virtualization projects: server consolidation. Eighty-two percent of network administrators listed “server consolidation” as the reason for their virtualization initiatives. Almost two-thirds (63%) use virtualization to reduce power and space requirements.

Server virtualization dominates. Virtualization of storage, network, and desktops will grow in 2009, but will still trail server virtualization significantly.

VMware ESX is by far the most widely deployed technology. Three-quarters (75%) of the survey respondents use VMware ESX.

Microsoft and Oracle dominate the list of apps that organizations virtualize. Respondents are virtualizing popular business applications (Exchange, SQL, IIS, SharePoint), but almost equal numbers virtualize custom internal applications and open source apps.

Network administrators worry about the effect of their network on virtualization, and the effect of virtualization on their network. Three-fourths of respondents (75%) feel networking issues are at least somewhat of a barrier to virtualization. A nearly equal share of network administrators (74%) say that application performance is at least somewhat of a barrier to virtualization.

Virtualization still has untapped potential for integration throughout the corporation. For 60% of virtualization projects, the organization’s server team drives implementation. Other aspects of the corporate network which could benefit greatly from virtualization, such as storage, networking, applications, and desktops, added up to a slim minority of implemented projects.
Which applications do you have running in virtual production environments?

Application                                  %
Microsoft SQL Server                         48%
Homegrown applications                       48%
Microsoft IIS                                45%
Open source applications                     34%
Microsoft SharePoint Server                  32%
Microsoft Exchange Server                    29%
Other                                        20%
Oracle 10g                                   15%
Microsoft Office Communications Server       11%
Oracle PeopleSoft                             5%
SAP ERP                                       5%
Oracle E-Business Suite                       4%
Oracle Siebel                                 3%

The number of organizations virtualizing open source applications (34%) seems strong, given that the open source community was not a particular target of the survey. The combined responses for homegrown applications (48%) and open source applications (34%) show that organizations are not afraid to roll up their sleeves and customize if they deem the effort worthwhile.


2.15 PITFALLS OF VIRTUALIZATION


Executive Summary
For all its benefits, server virtualization can be difficult to implement and could actually increase your costs and complicate network management. Here are some common pitfalls to look out for and avoid.
Everybody’s talking about virtualization and server consolidation these days, and many companies are taking some kind of action, with large enterprises in the lead. Server consolidation through virtualization is a proven way to save money in many ways: less hardware, lower power consumption, less floor space and more.  
For all its benefits, however, server virtualization can be difficult to implement and could actually increase your costs and complicate your life. The objective is to avoid potential pitfalls, many of which are described below.


Poor Preparation
Your virtualization project is almost certainly only the first step toward a completely virtualized network that is going to be much different than the hardware-centric system you are used to working with. That’s why even small virtualization projects tend to be much more complex than they first appear.
There are implications far beyond simply adding virtual operating systems to a host server to boost computing power. You will face a wide range of issues, from server compatibility to network limitations, software licensing to storage infrastructure and more.
If you don’t understand all these issues up front, before the virtual machines start multiplying (and they will multiply), you could find yourself in a desperate race to catch up with proliferating problems.


Insufficient Server Capacity
Virtualization does not add computing resources, only usage. Multiple operating systems — even if they are virtual — require substantial processing power, input/output capacity, memory and disk capacity.
To be a good candidate for virtualization, a server should have at least two CPUs, 4GB of RAM or more, and more than 100GB of available disk space. Servers with less capability could prevent you from gaining the substantial benefits of multiple Virtual Machines (VMs), especially if the server already hosts an input/output-intensive application, such as a database.
So, evaluate your hardware to determine whether your CPUs, RAM, and storage are underutilized, and if so, by how much. Your first step toward profitable virtualization might have to be upgrading your servers.
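A minimal sketch of that evaluation, using the thresholds quoted above (2 CPUs, 4 GB RAM, 100 GB free disk) and the psutil library (pip install psutil); the thresholds and the disk path are taken from or assumed for this text, not a vendor sizing rule:

    # capacity_check.py -- hedged capacity check against the thresholds mentioned in the text
    import psutil

    MIN_CPUS, MIN_RAM_GB, MIN_FREE_DISK_GB = 2, 4, 100

    def is_virtualization_candidate(path: str = "/") -> bool:
        cpus = psutil.cpu_count(logical=False) or psutil.cpu_count()
        ram_gb = psutil.virtual_memory().total / 2**30
        free_disk_gb = psutil.disk_usage(path).free / 2**30
        print(f"cpus={cpus} ram={ram_gb:.1f}GB free_disk={free_disk_gb:.1f}GB")
        return cpus >= MIN_CPUS and ram_gb >= MIN_RAM_GB and free_disk_gb >= MIN_FREE_DISK_GB

    if __name__ == "__main__":
        print("Good candidate" if is_virtualization_candidate() else "Consider upgrading first")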


Mismatched Servers
If you are virtualizing multiple servers and they use different chip technology (Intel and AMD), you might encounter difficulties with migrating VMs between them. In some cases, you won’t be able to migrate VMs without rebooting the server, somewhat mitigating the benefits of virtualization and live migration. It’s a good idea to standardize your servers on a single chip manufacturer.


Slow Network Communications
One of the goals of virtualization, and its primary benefit, is a huge leap in computing capacity for a given amount of hardware. But, latency and insufficient network bandwidth can steal much of the intended gain. So be ready to bolster your network’s communications capabilities. That will probably require some combination of new servers, new switches, network interface cards, and cabling: think 10Gbps Ethernet (and imagine the future 100Gbps Ethernet). Be sure to include the cost of faster communications devices in your virtualization budget.


Slow Mechanical Disks
Current mechanical disk technology cannot keep up with the read/write demands of multiple servers in high-use periods, so you will experience some latency. One possible solution is storage caching, whereby frequently accessed data is served from high-speed memory instead of mechanical disks. Another solution, on the near horizon, is solid-state disks, which promise read/write speeds up to 30 times faster than spinning-disk technology.


Too Many VMs per Host
One of the great things about VMs, and one of the most problematic, is the ease with which they can be deployed and migrated to meet changing needs. If you and your staff aren’t careful, however, you could end up with more VMs than you can effectively manage and more than a server can handle without degrading performance. It is a good idea to establish a VM implementation policy that recognizes system and personnel limitations and make sure the entire IT staff adheres to that policy.


Uneven Workload Distribution
To maximize usage of your data center computing power, you need to really fine-tune the distribution of processing requirements across all your physical servers. That means you need to monitor application usage to detect daily, weekly or monthly peak usage and determine response times and so on. This will enable you to allocate applications accordingly.
There are a number of application performance software tools available for this purpose, but the burden will grow rapidly as more and more VMs are added. Managing — let alone optimizing — even a single server with 10 or more VMs running different operating systems and multiple applications can be very challenging. Fortunately, automated data center management tools, now on the horizon, will help you meet that challenge.


Losing Track of Applications
Where you once had a clear idea of which applications were running on which servers, you will now find it much more difficult to track applications running on virtual servers. This can really complicate application patching and updates, as well as software licensing.
This is a network management issue. And the larger the virtualized network, the more critical it becomes.
Enterprise virtualization applications now on the market offer strong virtualization management capabilities. They can help IT managers dynamically tag and map all VMs and software applications, enabling efficient patching and updates.


Software Licensing Restrictions
Historically, software vendors have relied on hardware-based licenses. However, in many cases, that approach will restrict usage in the virtualized environment. Microsoft software licenses, for example, cannot be skirted by installing multiple VMs on a single physical server; each VM will have to be licensed.
Although Microsoft has shifted from installation-based licensing to instance-based licensing to accommodate the needs of virtualization, the restrictions on live migration of VMs to other physical servers can be confusing. Other vendors have simpler licensing terms whereby a single license for a physical server permits installation of unlimited VMs.

Be sure to talk with your vendors about how to address the important licensing issue before you get too far into your virtualization project. If you don’t, you might end up paying a sizable penalty for using key applications in your new infrastructure.
