Cloud Computing Explained: Implementation Handbook for Enterprises (John Rhoton)


This doesn't mean that every cloud attribute is essential to cloud computing, or even that any single attribute qualifies a given approach as fitting the cloud paradigm. On their own, these attributes are neither necessary nor sufficient prerequisites to the notion of cloud computing. But typically, the more of them that apply, the more likely others will accept the offering as a cloud solution. Some key components include:

Off-Premise: The service is hosted and delivered from a location that belongs to a service provider.

This usually has two implications: the service is delivered over the public Internet, and the processing occurs outside the company firewall. In other words, the service must cross both physical and security boundaries.

Elasticity: One main benefit of cloud computing is the inherent scalability of the service provider, which is made available to the end-user.

The model goes much further in providing an elastic provisioning mechanism so that resources can be scaled both up and down very rapidly as they are required. Since utility billing is also common, this elasticity can equate to direct cost savings.

Flexible Billing: Fine-grained metering of resource usage, combined with on-demand service provisioning, facilitates a number of options for charging customers.

Fees can be levied on a subscription basis or can be tied to actual consumption, or reservation, of resources. Monetization can take the form of placed advertising or can rely on simple credit card charges in addition to elaborate contracts and central billing.
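As a rough illustration of how subscription, metered and reservation-based charges might combine, the following sketch computes a monthly invoice from usage records. The record format, rate card and helper names are illustrative assumptions, not the scheme of any particular provider.

    # Minimal utility-billing sketch in Python (hypothetical rates and record format).
    from dataclasses import dataclass

    @dataclass
    class UsageRecord:
        resource: str      # e.g. "compute-hours" or "storage-gb-month"
        quantity: float    # metered consumption for the billing period

    # Illustrative price list; real providers publish their own rate cards.
    RATES = {"compute-hours": 0.12, "storage-gb-month": 0.10, "requests-million": 0.40}

    def monthly_invoice(records, subscription_fee=0.0, reserved_credit=0.0):
        """Combine subscription, metered and reservation elements into one charge."""
        metered = sum(RATES.get(r.resource, 0.0) * r.quantity for r in records)
        return subscription_fee + max(metered - reserved_credit, 0.0)

    usage = [UsageRecord("compute-hours", 730), UsageRecord("storage-gb-month", 250)]
    print(monthly_invoice(usage, subscription_fee=20.0, reserved_credit=50.0))  # 82.6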

Virtualization: Cloud services are usually offered through an abstracted infrastructure. They leverage various virtualization mechanisms and achieve cost optimization through multi-tenancy.

Service Delivery: Cloud functionality is usually available as a service of some form. While there is great variance in the nature of these services, typically the services offer programmatic interfaces in addition to the user interfaces.

Universal access: Resource democratization means that pooled resources are available to anyone authorized to utilize them. At the same time, location independence and high levels of resilience allow for an always-connected user experience.

Simplified management: Administration is simplified through automatic provisioning to meet scalability requirements, user self-service to expedite business processes and programmatically accessible resources that facilitate integration into enterprise management frameworks.

Affordable resources: The cost of these resources is dramatically reduced for two reasons. There is no requirement for fixed, up-front purchases. And the economy of scale of the service providers allows them to optimize their cost structure with commodity hardware and fine-tuned operational procedures that are not easily matched by most companies.

Multi-tenancy: The cloud is used by many organizations (tenants) and includes mechanisms to protect and isolate each tenant from all others.

Pooling resources across customers is an important factor in achieving scalability and cost savings.

Service-Level Management: Cloud services typically offer a Service Level definition that sets the expectation for the customer as to how robust the service will be.

Some services may come with only minimal or non-existent commitments. They can still be considered cloud services but typically will not be trusted for mission-critical applications to the extent that services governed by more precise commitments might be.
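To make a service-level commitment concrete, the short sketch below converts an availability percentage into the downtime it permits per year and per 30-day month; the commitment levels shown are illustrative assumptions rather than any provider's published terms.

    # Translate an availability commitment into an allowed-downtime budget.
    HOURS_PER_YEAR = 24 * 365

    def downtime_allowance(availability_percent):
        """Return (hours per year, minutes per 30-day month) of permitted downtime."""
        unavailable = 1.0 - availability_percent / 100.0
        return unavailable * HOURS_PER_YEAR, unavailable * 30 * 24 * 60

    for sla in (99.0, 99.9, 99.99):   # illustrative commitment levels
        year_h, month_m = downtime_allowance(sla)
        print(f"{sla}% availability -> {year_h:.1f} h/year, {month_m:.0f} min/month")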

All of these attributes will be discussed in more detail in the chapters to come.

Related terms

In addition to the set of characteristics which may be associated with cloud computing, it is worth mentioning some other key technologies that are strongly interrelated with cloud computing.

Service-Oriented Architecture

A service-oriented architecture (SOA) decomposes the information technology landscape of an enterprise into unassociated and loosely coupled functional primitives called services.

In contrast to monolithic applications of the past, these services implement single actions and may be used by many different business applications. The business logic is then tasked with orchestrating the service objects by arranging them sequentially, selectively or iteratively so that they help to fulfill a business objective.
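As a loose illustration of this orchestration idea, the sketch below arranges three invented service primitives sequentially, selectively and iteratively to fulfil a simple order-processing objective; the service names and logic are hypothetical and stand in for what would be remote service calls in a real SOA.

    # Hypothetical service primitives; in a real SOA each would be a remote call.
    def check_inventory(item):
        return {"item": item, "in_stock": item != "gizmo"}

    def reserve_stock(item):
        return {"item": item, "reserved": True}

    def notify_customer(message):
        print("notify:", message)

    def process_order(items):
        """Orchestrate the primitives: iterate over the order lines, call services
        in sequence, and branch selectively on intermediate results."""
        for item in items:                      # iterative composition
            status = check_inventory(item)      # sequential composition
            if status["in_stock"]:              # selective composition
                reserve_stock(item)
                notify_customer(f"{item} reserved")
            else:
                notify_customer(f"{item} is on back order")

    process_order(["widget", "gizmo"])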

One of the greatest advantages of this approach is that it maximizes reusability of functionality and thereby reduces the effort needed to build new applications or modify existing programs. There is a high degree of commonality between cloud computing and SOA. An enterprise that uses a service-oriented architecture is better positioned to leverage cloud computing. Cloud computing may also drive increased attention to SOA.

However, the two are independent notions. The best way to think of the relation between them is that SOA is an architecture which is, by nature, technology independent. Cloud computing may be one means of implementing an SOA design.

Grid Computing

Grid Computing refers to the use of many interconnected computers to solve a problem through highly parallel computation. These grids are often based on loosely coupled and heterogeneous systems which leverage geographically dispersed volunteer resources.

They are usually confined to scientific problems which require a huge number of computer processing cycles or access to large amounts of data.

However, they have also been applied successfully to drug discovery, economic forecasting, seismic analysis and even financial modeling for quantitative trading including risk management and derivative pricing. There may be some conceptual similarity between grid and cloud computing. Both involve large interconnected systems of computers, distribute their workload and blur the line between system usage and system ownership. But it is important to also be aware of their distinctions.

A grid may be transparent to its users and addresses a narrow problem domain. Cloud services are typically opaque and cover almost every class of informational problem using a model where the functionality is decoupled from the user.

Web 2.0

Darcy DiNucci first used the expression in 1999 (DiNucci) to refer to radical changes in web design and aesthetics. Tim O'Reilly popularized a recast notion with his Web 2.0 conference in 2004. The term has evolved to refer to the web as not only a static information source for browser access but a platform for web-based communities which facilitate user participation and collaboration.

There is no intrinsic connection between cloud computing and Web 2.0. Cloud computing is a means of delivering services, and Web 2.0 characterizes a class of web applications. Nonetheless, it is worth observing that typical Web 2.0 applications demand rapid scalability and universal access.

These requirements, and the absence of legacy dependencies, make them optimally suited to cloud platforms.

History

Cloud computing represents an evolution and confluence of several trends. But the first commercially viable offerings actually came from other sectors of the industry.

Amazon was arguably the first company to offer an extensive and thorough set of cloud-based services. This may seem somewhat odd since Amazon was not initially in the business of providing IT services. However, they had several other advantages that they were able to leverage effectively.

As most readers will recall, Amazon started as an on-line bookstore in 1995. Based on its success in the book market, it diversified its product portfolio to include CDs, DVDs, and other forms of digital media, eventually expanding into computer hardware and software, jewelry, grocery, apparel and even automotive parts and accessories. A major change in business model involved the creation of merchant partnerships that leveraged Amazon's portal and large customer base.

Amazon brokered the transactions for a fee, thereby developing a new ecosystem of partners and even competitors. As Amazon grew, it had to find ways to minimize its IT costs. Its business model [3] implied a very large online presence which was crucial to its success.

Without the bricks-and-mortar retail outlets, its data center investments and operations became a significant portion of its cost structure.

Amazon chose to minimize hardware expenditures by purchasing only commodity hardware parts and assembling them into a highly standardized framework that was able to guarantee the resilience they needed through extensive replication. In the course of building their infrastructure, their system designers had scrutinized the security required to ensure that the financial transactions and data of their customers and retail partners could not be compromised. The approach met their needs; however, it was not inherently optimized.

Amazon and its partners shared a common burden (or boon, depending on how you look at it) with other retailers, in that a very high proportion of their sales are processed in the weeks leading up to Christmas. In order to be able to guarantee computing capacity in December, they needed to overprovision for the remainder of the year.

This meant that a major share of their data center was idle eleven out of twelve months. The inefficiency contributed to an unacceptable amount of unnecessary cost. Amazon decided to turn this weakness into an opportunity. When they launched Amazon Web Services, they effectively sold some of their idle capacity to other organizations who had computational requirements from January to November. The proposition was attractive to their customers, who were able to take advantage of a secure and reliable infrastructure at reasonable prices without making any financial or strategic commitment.

Google's story bears some resemblance to Amazon's. They also host a huge worldwide infrastructure with many thousands of servers. In order to satisfy hundreds of millions of search requests every day, they must process about one petabyte of user-generated data every hour (Vogelstein). However, their primary business model is fundamentally different in that they do not have a huge retail business which they can leverage to easily monetize their services.

Instead, Google's source of revenue is advertising, and their core competence is in analytics. Through extensive data mining they are able to identify and classify user interests. And through their portals they can place advertising banners effectively. I will not go into detail on the different implications of these two approaches at this point, but it will be useful to keep them in mind as we discuss the various cloud platforms in the next chapters.

Also keep in mind that there are several other important cloud service providers, such as Salesforce.com. Each has its own history and business model. The two above are merely notable examples which I feel provide some insight into the history of cloud computing. That is not to say that it could not be completely different players who shape the future.

Innovation or Impact?

I hope I have clarified some of the mystery surrounding cloud computing and provided some insight into how the new service delivery model has evolved. However, there is still one very important question that I have not addressed: What is so novel about cloud computing? I've listed a number of attributes that characterize the technology.

But which of those has made significant breakthroughs in the past few years in order to pave the way for a revolutionary new approach to computing? Time-sharing (multi-tenancy) was popular in the 1960s. A utility pricing model is more recent but also preceded the current cloud boom.

The same can be said for Internet-based service delivery, application hosting and outsourcing. Rather than answering the question, I would challenge whether it is essential for there to be a clear technological innovation that triggers a major disruption, even if that disruption is primarily technical in nature. I will make my case by analogy. If you examine some other recent upheavals in the area of technology, it is similarly difficult to identify any particular novelty associated with them at the time of their breakthrough.

Nonetheless, historians tend to agree that their influence on the economy and society was fundamental. More recently, the PC revolution saw a shift of computing from mainframes and minicomputers for large companies to desktops that small businesses and consumers could afford. There were some advances in technology, in particular around miniaturization and environmental resilience, but these were arguably incremental in nature.

The Internet is a particularly clear example of a technological transformation that caught the industry and large segments of business by surprise, but was not primarily a technical breakthrough. Although Tim Berners-Lee's vision of the World Wide Web brought together many components in a creative, and certainly compelling, manner, the parts were invented long before the web made it into the mainstream.

For example, the fourth version of the Internet Protocol (IPv4, RFC 791), which is the most common network protocol today, was specified in 1981. Even the notion of hypertext isn't new. Vannevar Bush wrote an article in the Atlantic Monthly in 1945 about a device, called a Memex, which created trails of linked and branching sets of pages.

Ted Nelson coined the term hypertext in 1965 and went on to develop his Hypertext Editing System at Brown University in 1967. And yet, the impact of these technological shifts is hard to overstate, and all those who contributed to their initial breakthrough deserve credit for their vision of what could be done with all of these parts.

The innovation of the Internet, from a technical perspective, lies in identifying the confluence of several technical trends and visualizing how these can combine with improving cost factors, a changing environment and evolving societal needs to create a virtuous circle that generates ever-increasing economies of scale and benefits from network effects. Cloud computing is similar. It is difficult to isolate a single technological trigger. A number of incremental improvements in various areas such as fine-grained metering, flexible billing, virtualization, broadband, service-oriented architecture and service management have come together recently.

Combined, they enable new business models that can dramatically affect cost and cash-flow patterns and are therefore of great interest to the business, especially in a downturn. This combined effect has also hit a critical threshold by achieving sufficient scale to dramatically reduce prices, thus leading to a virtuous cycle of benefits (cost reduction for customers, profits for providers), exponential growth and ramifications that may ripple across many levels of our lives, including Technology, Business, Economic, Social and Political dimensions.

Technology

The impact of cloud computing is probably most apparent in information technology, where we have seen the enablement of new service delivery models. New platforms have become available to developers, and utility-priced infrastructure facilitates development, testing and deployment. This foundation can enable and accelerate other applications and technologies, and many Web 2.0 services already build on it. There is significant potential to offload batch processing, analytics and compute-intensive desktop applications (Armbrust, et al.).

Mobile technologies may also receive support from the ubiquitous presence of cloud providers, their high uptime and reduced latency through distributed hosting. Furthermore, a public cloud environment can reduce some of the security risks associated with mobile computing.

It is possible to segment the data so that only non-sensitive data is stored in the cloud and accessible to a mobile device. The exposure of reduced endpoint security (for example, with regard to malware infections) is also minimized if a device is only connected to a public infrastructure.

By maximizing service interconnectivity, cloud computing can also increase interoperability between disjoint technologies.


For example, HP's CloudPrint service facilitates an interaction between mobile devices and printers. As cloud computing establishes itself as a primary service delivery channel, it is likely to have a significant impact on the IT industry by stimulating new requirements in the areas that support it.

Business

Cloud computing also has an undeniable impact on business strategy.

It overturns traditional models of financing IT expenditures by replacing capital expenditures with operational expenditures. Since the operational expenditures can be directly tied to production, fixed costs tend to vanish in comparison to variable costs thus greatly facilitating accounting transparency and reducing financial risk.

The reduction in fixed costs also allows the company to become much more agile and aggressive in pursuing new revenue streams. Since resources can be elastically scaled up and down, they can take advantage of unanticipated high demand but are not burdened with excess costs when the market softens.

The outsourcing of IT infrastructure reduces the organization's responsibilities and required focus in the area of IT. This release can be leveraged to realign the internal IT resources with the core competencies of the organization.

Rather than investing energy and managerial commitment in industry-standard technologies, these can be redirected toward potential sources of sustainable competitive differentiation. Another form of business impact may be that the high level of service standardization, which cloud computing entails, may blur traditional market segmentation. For example, the conventional distinction that separates small and medium businesses from enterprises, based on their levels of customization and requirements for sales and services support, may fade in favor of richer sets of options and combinations of service offerings.

As a result of the above, it is very likely that there will be market shifts as some companies leverage the benefits of cloud computing better than others. These may trigger a reshuffling of the competitive landscape, an event that may harbor both risk and opportunity but must certainly not be ignored.

Economic

The business impact may very well spread across the economy. Knowledge workers could find themselves increasingly independent of large corporate infrastructure [4] (Carr). Through social productivity and crowdsourcing, we may encounter increasing amounts of media, from blogs and Wikipedia to collaborative video (Live Music, Yair Landau).

The Internet can be a great leveling force since it essentially removes the inherent advantages of investment capital and location. However, Nicholas Carr takes a more skeptical view. At this stage it is difficult to predict which influences will predominate, but it is likely there will be some effects.

As the cloud unleashes new investment models, it's interesting to consider one of the driving forces of new technologies today: the investment community. Most successful startups have received a great deal of support from venture capitalists. Sand Hill Road in Palo Alto is famous for its startup investments in some of the most successful technology businesses today, ranging from Apple to Google.

On the one hand, small firms may be less reliant on external investors in order to get started. If someone has a PC and an Internet connection, they can conceivably start a billion-dollar business overnight. On the other hand, and more realistically, investors are able to target their financing much more effectively if they can remove an element of fixed costs.

Social

You can expect some demographic effects of cloud computing.

By virtue of its location independence, there may be increases in off-shoring (Friedman). There may also be an impact on employment as workers need to re-skill to focus on new technologies and business models. Culturally, we are seeing an increasing invasion of privacy (Carr).

While the individual impact of privacy intrusions is rarely severe, there are disturbing ramifications to its use on a large scale. Carr alerts us to the potential dangers of a feedback loop which reinforces preferences and thereby threatens to increase societal polarization.

From a cognitive perspective, we can observe the blending of human intelligence with system and network intelligence, a development that Carr also examines. While there are certainly benefits from the derived information, it begs the question of our future in a world where it is easier to issue repeated ad hoc searches than to remember salient facts.

Political

Any force that has significant impact across society and the economy inevitably becomes the focus of politicians. There is an increasing number of regulations and compliance requirements that apply to the Internet and information technology.

Many of these will also impact cloud computing. At this point, cloud computing has triggered very little legislation of its own accord. However, given its far-reaching impact on pressing topics such as privacy and governance, there is no doubt it will become an object of intense legal scrutiny in the years to come.

Chapter 2: Cloud Architecture

Physical clouds come in all shapes and sizes.

They vary in their position, orientation, texture and color. Cirrus clouds form at the highest altitudes. They are often transparent and tend toward shapes of strands and streaks. Stratus clouds are associated with a horizontal orientation and flat shape. Cumulus clouds are noted for their clear boundaries. They can develop into tall cumulonimbus clouds associated with thunderstorms and inclement weather.

The metaphor quite aptly conveys some of the many variations we also find with cloud-like components, services and solutions. In order to paint a complete and fair picture of cloud computing we really need to analyze the structure of the offerings as well as the elements that combine together to create a useful solution.

Stack

One characteristic aspect of cloud computing is a strong focus on service orientation. Rather than offering only packaged solutions that are installed monolithically on desktops and servers, or investing in single-purpose appliances, you need to decompose all the functionality that users require into primitives, which can be assembled as required. In principle, this is a simple task, but it is difficult to aggregate the functionality in an optimal manner unless you can get a clear picture of all the services that are available.

This is a lot easier if you can provide some structure and a model that illustrates the interrelationships between services. Google App Engine, for example, is generally considered to be a Platform as a Service, and Salesforce.com a Software as a Service. There are two primary dimensions which constrain the offerings: the services differ according to their flexibility and degree of optimization (Figure 2-1).

Software services are typically highly standardized and tuned for efficiency. However, they can only accommodate minimal customization and extensions. At the other extreme, infrastructure services can host almost any application but are not able to leverage the benefits of economy of scope as easily. Platform services represent a middle ground. They provide flexible frameworks with only a few constraints and are able to accommodate some degree of optimization.

Figure 2-2: SPI Model

The classification illustrates how very different these services can be and yet, at least conceptually [5], each layer depends on the foundation below it (Figure 2-2).

Platforms are built on infrastructure, and software services usually leverage some platform.

Figure 2-3: SPI Origins

In terms of the functionality provided at each of these layers, it may be revealing to look at some of the recent precursors of each (Figure 2-3).

PaaS is a functional enhancement of the scripting capabilities offered by many web-hosting sites today. IaaS is a powerful evolution of colocation and managed hosting services available from large data centers and outsourcing service providers. The conceptual similarity of pre-cloud offerings often leads to cynical observations that cloud computing is little more than a rebranding exercise.

As we have already seen, there is some truth to the notion that the technical innovation is limited. However, refined metering, billing and provisioning, coupled with attractive benefits of scale, do have a fundamental impact on how services are consumed with a cloud-based delivery model.

Figure 2-4: Extended Model

We will examine each of the layers in more detail in the next three chapters. But to give you an idea of what each represents, it's useful to take a look inside (Figure 2-4).

Software Services represent the actual applications that end users leverage to accomplish their business objectives (or personal objectives in a consumer context). There is a wide range of domains where you can find SaaS offerings. One of the most popular areas is customer relationship management (CRM). Desktop productivity, including electronic mail, is also very common, as well as forms of collaboration such as conferencing or unified communications. But the list is endless, with services for billing, financials, legal, human resources, backup and recovery, and many other domains appearing regularly on the market.

Platforms represent frameworks and common functions that the applications can leverage so that they don't need to re-invent the wheel. The offerings often include programming language interpreters and compilers, development environments, and libraries with interfaces to frequently needed functions. There are also platform services that focus on specific components such as databases, identity management repositories or business intelligence systems and make this functionality available to application developers.

I have divided infrastructure services into three sublevels.

I don't mean to imply that they are any more complex or diverse than platform or software services. In fact, they are probably more homogeneous and potentially even simpler than the higher tiers. However, they lend themselves well to further segmentation. I suggest that most infrastructure services fall into three categories that build on each other.

There are providers of simple co-location facilities. In the basic scenario, the data-center owner rents out floor space and provides power and cooling as well as a network connection. The rack hardware may also be part of the service, but the owner is not involved in filling the space with the computers or appliances that the customers need. The next conceptual level is to add hardware to the empty rack space. There are hosting services that will provide and install blade systems for computation and storage.

The simplest options involve dedicated servers, internal networking and storage equipment that is operated by the customer.

There are also managed hosting providers who will take over the administration, monitoring and support of the systems. Very often this implies that they will install a virtualization layer that facilitates automated provisioning, resource management and orchestration while also enforcing consistency of configuration.

In some cases, they will leverage multi-tenancy in order to maximize resource utilization, but this is not strictly required.

Management Layers

In addition to the software and applications that run in the SPI model and support a cloud application in its core functions, there are also a number of challenges that both the enterprise and service provider need to address in order to successfully keep the solution going (Figure 2-5).

Figure 2-5: Implementation, Operation and Control

Implement

It is necessary to select and integrate all the components into a functioning solution. There is a large and ever-increasing number of cloud-based services and solutions on the market.

It is no simple task to categorize and compare them. And once that is done, it would be naïve to expect them all to work together seamlessly. The integration effort involves a careful selection of interfaces and configuration settings and may require additional connectors or custom software.

Operate

Once the solution has been brought online, it is necessary to keep it running. This means that you need to monitor it, troubleshoot it and support it. Since the service is unlikely to be completely static, you also need to have processes in place to provision new users, decommission old users, plan for capacity changes, track incidents and implement changes in the service.

Control

The operation of a complex set of services can be a difficult challenge. Some of this burden can be passed on to the service providers. However, this doesn't completely obviate the need for overseeing the task. It is still necessary to ensure that service expectations are well defined and that they are validated on a continuous basis.

Standards and Interoperability

There are software offerings that cover all of the domains from the previous sections, ranging from the SPI layers to integration, operation and governance.

One of the biggest challenges to cloud computing is the lack of standards that govern the format and implied functionality of its services. The resultant lock-in creates risks for users related to the portability of solutions and interoperability between their service providers. The industry is well aware of the problem. Even though it may be in the short-term interests of some providers to guard their proprietary mechanisms, it is clear that cloud computing will not reach its full potential until some progress is made to address the lock-in problems.

The problem is quite challenging since it is not yet exactly clear which interfaces and formats need to be standardized and what functionality needs to be captured in the process. There is also some concern that standards will lead to a cost-focused trend to commoditization that can potentially stifle future innovation. Nonetheless, there is substantial activity on the standards front, which is at least an indication that vendors realize the importance of interoperability and portability and are willing to work together to move the technology forward.

The Open Cloud Manifesto, published in 2009, established a set of core principles that several key vendors considered to be of highest priority. Even though the statement did not provide any specific guidance and was not endorsed by the most prominent cloud providers (e.g. Amazon, Microsoft or Google), it demonstrated the importance that the industry attaches to cloud standardization.

Since then, several standards organizations have begun to tackle the problem of cloud computing from their own vantage points: the Object Management Group (OMG) is modeling deployment of applications and services on clouds for portability, interoperability and reuse.

The Open Group Cloud Work Group is collaborating on standard models and frameworks aimed at eliminating vendor lock-in for enterprises. They develop benchmarks and support reference implementations for cloud computing. Evidently, the amount of standardization effort reflects the general level of hype around cloud computing. While this is encouraging, it is also a cause for concern. A world with too many standards is only marginally better than one without any. It is critical that the various organizations coordinate their efforts to eliminate redundancy and ensure a complementary and unified result.

Private, Partner and Public Clouds

In the earliest definitions of cloud computing, the term referred to solutions where resources are dynamically provisioned over the Internet from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis.

This computing model carries many inherent advantages in terms of cost and flexibility, but it also has some drawbacks in the areas of governance and security. Many enterprises have looked at ways to leverage at least some of the benefits of cloud computing while minimizing the drawbacks, by making use of only some aspects of cloud computing.

These efforts have led to a restricted model of cloud computing which is often designated as Private Cloud, in contrast to the fuller model, which by inference becomes a Public Cloud.

Private Cloud

The term Private Cloud is disputed in some circles, as many would argue that anything less than a full cloud model is not cloud computing at all but rather a simple extension of the current enterprise data center.

Nonetheless, the term has become widespread, and it is useful to examine enterprise options that fall into this category. In simple theoretical terms, a private cloud is one that only leverages some of the aspects of cloud computing (Table 2-1). It is typically hosted on-premise, scales only into the hundreds or perhaps thousands of nodes, and is connected to the consuming organization primarily through private network links.

Since all applications and servers are shared within the corporation the notion of multi-tenancy is minimized. From a business perspective you typically also find that the applications primarily support the business but do not directly drive additional revenue.

So the solutions are financial cost centers rather than revenue or profit centers.

Table 2-1: Private and Public Clouds

Common Essence

Given the disparity in descriptions between private and public clouds on topics that seem core to the notion of cloud computing, it is valid to question whether there is any common essence at all. The most obvious area of intersection is around virtualization. Since virtualization enables higher degrees of automation and standardization, it is a pivotal technology for many cloud implementations.

Enterprises can certainly leverage many of its benefits without necessarily outsourcing their entire infrastructure or running it over the Internet. Depending on the size of the organization, as well as its internal structure and financial reporting, there may also be other aspects of cloud computing that become relevant even in a deployment that is confined to a single company.

A central IT department can just as easily provide services on-demand and cross-charge the business on a utility basis as any external provider could. The model would then be very similar to a public cloud, with the business acting as the consumer and IT as the provider. At the same time, protection of sensitive data may be easier to enforce, and the controls would remain internal.

Cloud Continuum

A black-and-white distinction between private and public cloud computing may therefore not be realistic in all cases.

In addition to the ambiguity in sourcing options mentioned above, other criteria are not binary. For example, there can be many different levels of multi-tenancy, covered in more detail in a later chapter. There are also many different options an enterprise can choose for security administration, channel marketing, integration, completion and billing.

Some of these may share more similarity with conventional public cloud models while others may reflect a continuation of historic enterprise architectures. What is important is that enterprises must select a combination which not only meets their current requirements in an optimal way but also offers a flexible path that will give them the ability to tailor the options as their requirements and the underlying technologies change over time.

In the short term, many corporations will want to adopt a course that minimizes their risk and only barely departs from an internal infrastructure. However, as cloud computing matures they will want the ability to leverage increasing benefits without redesigning their solutions.

Partner Clouds

For the sake of completeness, it is also important to mention that there are more hosting options than internal versus public. It is not imperative that a private cloud be operated and hosted by the consuming organization itself. Other possibilities include co-location of servers in an external data center with, or without, managed hosting services.

Outsourcing introduces another dimension: the outsourcers can manage these services in their own facilities or on the customer's premises. In some ways, you can consider these partner clouds as another point on the continuum between private and public clouds. Large outsourcers are able to pass on some of their benefits of economy of scale, standardization, specialization and their position on the experience curve.

And yet they offer a degree of protection and data isolation that is not common in public clouds.

Vertical Clouds

In addition to horizontal applications and platforms which can be used by consumers, professionals and businesses across all industries, there is also increasing talk about vertical solutions that address the needs of companies operating in specific sectors, such as transportation, hospitality or health-care.

The rationale behind these efforts is that a large part of even the most industry-specific IT solutions fails to generate a sustainable competitive advantage. Reservation systems, loyalty programs and logistics software are easily replicated by competitors and therefore represent wasted intellectual and administrative effort that could be channeled much more effectively into core competencies.

A much more productive approach would be to share and aggregate best practices and collectively translate them into an optimized infrastructure which all partners can leverage thereby driving down the costs and increasing the productivity across the industry.

Needless to say, there are many challenges in agreeing on which approaches to use and in financially recompensing those who share their intellectual property. However, if done effectively, it can be an example of a rising tide that lifts all boats. One area where there has been significant progress is the development of a government cloud.

Terremark has opened a cloud-computing facility that caters specifically to US government customers and addresses some of their common requirements around security and reliability (Terremark). It offers extensive physical security, ranging from elaborate surveillance, including bomb-sniffing dogs, to steel mesh under the data center floors.

As long as the governments involved belong to the same political entity there is less need for elaborate financial incentives. And the concerns around multi-tenancy may also be somewhat reduced compared to enterprises sharing infrastructure with their direct competitors.

Multisourcing

The categorization of cloud providers in the previous section into private, partner and public is a great simplification. Not only is there no clear boundary between the three delivery models, but it is very likely that customers will not confine themselves to any given approach. Instead, you can expect to see a wide variety of hybrid constellations (Figure 2-6).

Figure 2-6: Multisourcing options

The final imperative is to determine the business outcomes that must be achieved and then to analyze and compare the various options for accomplishing them.

In some cases, they may be fulfilled with public cloud services securely and cost-effectively.


In others, it will be necessary to create internal services or to partner with outsourcing organizations in order to find the best solution. In some cases, there may be legitimate reasons to work with multiple cloud providers that deliver the same functionality. The term cloudbursting characterizes a popular approach of creating an internal service that can extend into a public cloud if a burst in demand causes it to exceed internal capacity.
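A minimal sketch of the decision logic behind cloudbursting might look like the following; the capacity figure and the provisioning function are hypothetical placeholders rather than any real provider's interface.

    # Hypothetical cloudbursting policy: serve demand internally until capacity is
    # exhausted, then overflow the remainder to a public cloud.
    INTERNAL_CAPACITY = 100   # requests per second the private infrastructure can absorb

    def provision_public_instances(load):
        """Placeholder for a call to a public cloud provisioning interface."""
        print(f"bursting {load} requests/second to the public cloud")

    def route_load(current_demand):
        """Split demand between internal servers and a public cloud overflow."""
        internal_share = min(current_demand, INTERNAL_CAPACITY)
        burst_share = max(current_demand - INTERNAL_CAPACITY, 0)
        if burst_share > 0:
            provision_public_instances(burst_share)
        return internal_share, burst_share

    print(route_load(140))   # -> (100, 40): 40 requests/second overflow to the cloud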

Other reasons might be to improve business continuity through redundancy or to facilitate disaster recovery by replicating data and processes.

Topology

Over the past half century, we've seen the typical computer topology shift from the mainframe in the 1960s to client-server computing in the 1980s.

The 1990s popularized the notion of N-tier architectures, which segregate the client from the business logic, and both from the information and database layer (Figure 2-7).

Figure 2-7: Connectivity Evolution

We are now seeing an increase in mesh connectivity. For example, peer-to-peer networks leverage the fact that every system on the network can communicate with the others. Data processing and storage may be shared between systems in a dynamic manner as required. Cloud computing can facilitate any of these models but is most closely associated with a mesh topology.

In particular, it is very important to consider the client device as part of the complete cloud computing topology. Desktop virtualization can have a fundamental impact on cloud computing and can also leverage cloud services to provide content on the terminal. However, Moore's law continues to apply. We may have reached limits in transistor density, but processing power is still advancing with multi-core processors.

Therefore it is not realistic to think that cloud equates to thin client computing. Some functionality is simply easier to process locally, while other functions, particularly those that are collaborative in nature, may be more suitable for the cloud.

The key challenge ahead will be the effective synchronization and blending of these two operating modes. We may also see more potential for hybrid applications.

Content Delivery Model

One way to look at topology is to trace the content flow. There are many possible options for delivering content on the Internet. These do not necessarily change through cloud computing, but it is important to be aware of all actors and their respective roles since they are all very much a part of cloud offerings too.

Figure 2-8: Content Delivery Model

There are at least three different players in many solutions. The entity that creates the content or provides the ultimate functionality may be hidden from the user.

It is inherent in a service-oriented architecture that the end user not be explicitly cognizant of the individual component services. Instead, the user interacts primarily with a content aggregator, who bundles the services and content into a form that adds value for the user. The third set of players are the content delivery networks. These network providers have extensive global presence and very good local connectivity. They can replicate static content and therefore make it available to end-users more quickly, thereby improving the user experience and off-loading the hosting requirements from the aggregator.

Value Chain

Although there is some correlation, the path of content delivery is quite distinct from the payment and funding model (Figure 2-9).

Figure 2-9: Payment ecosystem

The simple part of the payment model is the flow from the aggregator to the delivery network and content creator. This is intuitive and merely reflects a means of profit sharing toward those who facilitate the end-to-end service. The source of the funding model is the bigger challenge for all investors who would like to capitalize on the excitement around cloud computing. There are at least two ways. In a case where the value of the service is explicitly recognized by the end user, there is the opportunity to charge the user.

Most users have an aversion to entering their credit card details on the Internet unless it is absolutely required. This typically means that the user must be convinced the content has a value that covers both the transaction costs (including risk and effort) and the actual billed costs. Services from Amazon and Salesforce.com fall into this category. For small items, the transaction costs may actually exceed the perceived value.

This makes direct billing virtually impossible. However, Google has popularized another way to monetize this value: advertising. This business model means that an advertiser pays the content provider in exchange for advertising exposure to the end user.


Ecosystem

In reality, the roles of the value chain are more complex and diverse than just described. An ecosystem ties together a fragmented set of cloud computing vendors. There are two key parts of the cloud computing ecosystem that you should keep in mind as you look at different offerings.

It is extremely large. The hype surrounding cloud computing, combined with the lack of entry barriers for many functions, has made the sector extremely attractive in an economic downturn. There are literally hundreds of vendors who consider some of their products and services to relate to cloud computing.

It is very dynamic. This means there are many players entering the market and some are exiting. But it also means that many are dynamically reshaping their offerings on a frequent basis, often extending into other cloud areas.

Even the delivery mechanisms themselves are changing as the technologies evolve and new functionality becomes available. As a result, it is very difficult to paint an accurate picture of the ecosystem which will have any degree of durability or completeness to it.

The market is changing, and I can only provide a glimpse and high-level overview of what it looks like at this point in time.

Total Cloud

There are many parts to cloud computing, and each of these components can be technically delivered in many different ways using a variety of different business models.

A direct outcome of this diversity is that we can expect the effects of the technology to cross many boundaries of influence. A less obvious form of impact is that each of the functions needed to implement cloud computing can, itself, be delivered as a service.

Slogans such as Anything as a Service or Everything as a Service are becoming more popular to indicate that we not only have software, platforms and infrastructure as services, but also components of these such as databases, storage and security, which can be offered on-demand and priced on a utility basis. On top of these, there are services for integrating, managing and governing Internet solutions.

There are also emerging services for printing, information management, business intelligence and a variety of other areas. It is unclear where this path will ultimately lead and whether all computational assets will eventually be owned by a few service providers, leveraged by end users only if and when they need them. But the trend is certainly in the direction of all functionality that is available also being accessible on-demand, over the Internet, and priced to reflect the actual use and value to the customer.

Open Source and Cloud Computing

Richard Stallman, a well-known proponent of open source software, attracted attention with his outspoken criticism of cloud computing. His concerns around loss of control and proprietary lock-in may be legitimate. Nonetheless, it is also interesting to observe that cloud computing leverages open source in many ways. Self-supported Linux is by far the most popular operating system for infrastructure services due to the absence of license costs.

Cloud providers often use Xen and KVM for virtualization to minimize their marginal costs as they scale up. Distributed cloud frameworks, such as Hadoop, are usually open source to maximize interoperability and adoption. Web-based APIs also make the client device less relevant. Even though some synchronization will always be useful, the value proposition of thin clients increases as the processing power and storage shifts to the back end.

Time will tell whether enterprises and consumers take advantage of this shift to reduce their desktop license fees by adopting Linux, Google Android or other open-source clients. Many SaaS solutions leverage open-source software for obvious cost and licensing reasons.

In some ways, SaaS is an ideal monetization model for open source since it facilitates a controlled revenue stream without requiring any proprietary components. In summary, there is the potential that cloud computing may act as a catalyst for open source.

Infrastructure as a Service

In the beginning there was the Data Center, at least as far back in time as cloud computing goes.

Data centers evolved from company computer rooms to house the servers that became necessary as client-server computing became popular. Now they have become a critical part of many businesses and represent the technical core of the IT department. The TIA-942 Data Center Standards Overview lists four tiers of requirements that can be used to categorize data centers, ranging from a simple computer room to fully redundant and compartmentalized infrastructure that hosts mission-critical information systems.

Infrastructure as a Service (IaaS) is the simplest of cloud offerings. It is an evolution of virtual private server offerings and merely provides a mechanism to take advantage of hardware and other physical resources without any capital investment or physical administrative requirements. The benefit of services at this level is that there are very few limitations on the consumer.

There may be challenges in including, or interfacing with, dedicated hardware, but almost any software application can run in an IaaS context. The rest of this chapter looks at Infrastructure as a Service. We will first look at what is involved in providing infrastructure as a service and then explore the types of offerings that are available today.

Figure 3-1: Infrastructure Stack

In order to understand infrastructure services, it is useful to first take a look behind the scenes at how an infrastructure service provider operates and what it requires in order to build its services.

After all, the tasks and the challenges of the provider are directly related to the benefit of the customer, who is able to outsource the responsibilities.

Co-location

This section describes a co-location service. Note that services at this level are available from many data centers. It would be stretching the notion of cloud computing beyond my comfort level to call them cloud services.

However, they are an essential ingredient to the infrastructure services described in this chapter. At the lowest level, it is necessary to have a piece of real estate. Choice locations are often re-purposed warehouses or old factories that already have reliable electrical power, but it is becoming increasingly common to take a barren plot of land and place container-based data center modules on it.

Some of the top cloud service providers scout the globe in search of cheap, large real estate with optimal access to critical infrastructure, such as electricity and network connectivity. Power and cooling are critical to the functional continuity of the data center. Often drawing multiple megawatts, they can represent over a third of the entire costs, so designing them efficiently is indispensable.

More importantly, an outage in either one can disrupt the entire operation of the facility and cause serious damage to the equipment. It is very important for the data center to have access to multiple power sources.

Points of intersection between the electrical grids of regional electricity providers are particularly attractive since they facilitate a degree of redundancy should one utility company suffer a wide-spread power outage.

In any case, it is necessary to have uninterruptible power supplies or backup diesel generators that can keep the vital functions of the data center going over an extended period of time. Another environmental requirement is an efficient cooling system. Over half of the power costs of a data center are often dedicated to cooling. As costs have sky-rocketed, most recent cooling designs leverage outside air during the colder months of the year. Subterranean placement of the data center can lead to better insulation in some parts of the world.

The interior of the data center is often designed to optimize air flow, for example through alternating the orientation of rows of racks, targeted vents driven by sensors and log data, and plenum spaces with air circulation underneath the floor.

One other area of external reliance is a dependency on network connectivity. Ideally, the data center will have links to multiple network providers. These links don't only need to be virtually distinct; they need to be physically distinct.

In other words, it is common for Internet service providers to rent the physical lines from another operator. There may be five DSL providers to your home but only one copper wire. If you were looking for resilience, then having five contracts would not help you if someone cut the cable in front of your house. Whoever owns and operates the data center must also come up with an internal wiring plan that distributes power and routes network access across the entire floor, wherever computer hardware or other electrical infrastructure is likely to be placed.

Other environmental considerations include fire protection systems and procedures to cope with flooding, earthquakes and other natural disasters. Security considerations include physical perimeter protection, ranging from electrical fences to surveillance systems.

Hardware

The next step of an infrastructure provider is to fill the rented or owned data center space with hardware.

These are typically organized in rows of servers mounted in 19-inch rack cabinets. The cabinets are designed according to the Electronic Industries Alliance (EIA-310-D) specification, which designates dimensions, hole spacings, rack openings and other physical requirements.

Each cabinet accommodates modules which are 19 inches (482.6 mm) wide and multiples of 1U (1.75 inches) high. The challenge is to maximize the number of servers, storage units and network appliances that can be accommodated in the cabinet. Most racks are available in 42U form (42 x 1.75 inches, or about 73.5 inches of usable mounting height). But the density can be augmented by increasing the proportion of 1U blades versus 2U and 3U rack-mountable components. These modules then need to be wired for power and connected to the network.
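As a quick illustration of the arithmetic, the snippet below works out how many modules of various heights fit into a single 42U cabinet; the particular mix of module sizes is an arbitrary example, not a recommended configuration.

    # Rack-capacity arithmetic for a standard 42U cabinet (1U = 1.75 inches).
    RACK_UNITS = 42

    def modules_that_fit(module_heights_u):
        """Greedily count how many of the listed modules fit into one rack."""
        used, fitted = 0, []
        for height in module_heights_u:
            if used + height <= RACK_UNITS:
                used += height
                fitted.append(height)
        return fitted, RACK_UNITS - used

    # An arbitrary mix: twelve 1U servers, ten 2U servers and three 3U storage units.
    mix = [1] * 12 + [2] * 10 + [3] * 3
    fitted, spare = modules_that_fit(mix)
    print(f"{len(fitted)} modules fitted, {spare}U left over")   # -> 25 modules, 1U left over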

Again, an advantage of the larger enclosures is the reduction in number of external wires that are necessary since much of the switching fabric is internalized to the system. This aggregation can lead to increased operational efficiency in that less manual labor is involved and the possibility of human error is reduced.

Figure 3-2: Virtualization

One of the biggest advances in data center technology in the last decade has been the advent of virtualization. There are many forms of virtualization. On the network side, virtual local area networks (VLANs) and virtual private networks (VPNs) are both important notions in the data center.

The VLANs help to segment traffic and provide a degree of isolation by compartmentalizing the network. VPNs can create a secure connection between cloud entities and enterprises, end users or even other cloud providers. These allow applications to operate in a trusted mode whereby they can treat the cloud service as an extension of the private network.

There has also been extensive research into providing clusters of functionality, sometimes called cells, that act as their own self-contained infrastructure. These bundles of servers on dedicated networks contain their own management and security components to operate a fully functional, and often complex, system. Storage virtualization - The abstraction of the physical storage infrastructure can facilitate higher utilization through pooling of units and thin provisioning.

It also makes it much easier to migrate data without disrupting a service. The applications can continue to make the same logical requests even if the data is transferred to another device.

Memory virtualization can abstract volatile memory space and map it to a set of pooled memory resources among networked systems.

Again, this offers flexibility, redundancy and better utilization.

Desktop virtualization is a term that embodies yet another set of related technologies and delivery models.

These delivery models vary according to the degree of isolation they provide (applications, containers, operating systems) and the means by which they are delivered. They may be pre-loaded, loaded at boot time, streamed as needed or simply hosted remotely and presented on the desktop. The main advantages of desktop virtualization, or virtual desktop infrastructure (VDI), are standardization of the environment, ease of provisioning and the ability to manage the desktops while they are off-line.

There is a significant opportunity to leverage cloud services to provide all of these functions in any of the delivery models.

Server virtualization: The virtualization techniques described above all offer benefits to cloud-based services.

But certainly the most prominent in the cloud context is server virtualization. Server virtualization abstracts the underlying physical resources and presents these as a set of virtual machines, each of which appears to its applications and users as though it were a physical system.

There are two kinds of management layers, or hypervisors, which facilitate the abstraction: Type 1 (bare-metal) hypervisors run directly on the hardware, while Type 2 (hosted) hypervisors run as an application on top of a conventional operating system. Server virtualization provides a high degree of isolation between guest operating systems.

This doesn't mean that it is immune to attack or vulnerabilities. There have been many kernel and hypervisor bugs and patches. Nonetheless, the hypervisor typically presents a smaller attack surface than a traditional operating system and is therefore usually considered superior to application isolation, which might be compromised through kernel exploits in the host operating system.

There are many reasons why virtualization has become so popular. The virtual machine can provide instruction set architectures that are independent of the physical machine, thereby enabling platforms to run on hardware for which they were not necessarily designed. It improves the level of utilization of the underlying hardware, since guest applications can be deployed with independent and, ideally, complementary resource demands.

Probably the most important driver is the fact that virtual machines can be launched from a virtual disk, independent of the hardware on which they were configured. It is simply a matter of copying the virtual machine to a new host which is running the same hypervisor.
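As a concrete illustration, the following sketch uses the libvirt Python bindings to register and boot a guest whose disk image has simply been copied onto a new KVM host. This is only a minimal sketch: the connection URI, image path and domain definition are illustrative assumptions rather than anything prescribed by a particular provider.

# Minimal sketch: boot a copied virtual machine image on a new KVM host
# using the libvirt Python bindings. Paths and the domain definition are
# hypothetical; error handling is omitted for brevity.
import libvirt

DISK_IMAGE = "/var/lib/libvirt/images/appserver.qcow2"  # copied from the old host

DOMAIN_XML = f"""
<domain type='kvm'>
  <name>appserver</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{DISK_IMAGE}'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the copied guest
dom.create()                            # boot it from the copied disk image
conn.close()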

This encapsulation makes it very easy to load-balance and redeploy applications as usage requires. It also enforces a level of standardization in configuration between similar instances of the same application. And it makes it very easy to provision new instances instantly when they are needed. One final feature of infrastructure services, which I have placed into the virtualization layer, is metering and billing. One of the reasons that cloud computing is becoming popular now is that capabilities have evolved to allow the service provider to recuperate its costs easily and effectively.

Fine-grained instrumentation provides the foundation by accounting for usage and delivering accurate information that can be used for internal cross-charging or fed into a billing and payment system for external collection.

Green Clouds

Environmentally sustainable computing is an important priority that IT managers need to consider as they develop their long-term infrastructure strategy.

Energy efficiency and effective disposal and recycling of equipment are likely to become even more important in the future. Cloud computing primarily shifts the location of processing and storage, which doesn't directly translate to a global ecological benefit.

A study by Greenpeace picked up alarming trends in the growing carbon footprint of cloud data centers and cloud-based services (Greenpeace, 2010). However, the economies of scale of cloud service providers can also help to address environmental objectives more effectively.

It is in the provider's interest to minimize energy expenditures and maximize the reuse of equipment, which it can achieve through higher consolidation of infrastructure. Its scale also increases the attractiveness of investments in sophisticated power and cooling technology, including dynamic smart cooling, elaborate systems of temperature sensors and air-flow ducts, as well as analytics based on temperature and energy logs.

Infrastructure Services

I like to divide IaaS services into three categories: servers, storage and connectivity. Providers may offer virtual server instances on which the customer can install and run a custom image. Persistent storage is a separate service which the customer can provision independently. And finally, there are several offerings for extending connectivity options. The de facto standard for infrastructure services is Amazon Web Services.

While they are not unique in their offerings, virtually all IaaS services are either complements to Amazon Web Services or else considered competitors to them.

I therefore find it useful to structure the analysis of IaaS along the lines of Amazon's offerings. Before diving in, I'd like to point out that there is an open-source equivalent to Amazon Web Services, called Eucalyptus, which is roughly compatible with its interface.

It has also shipped with Ubuntu since version 9.04. Cloud computing can be seen as the evolution of managed hosting providers such as NaviSite, Terremark or Savvis. They offer co-location capabilities as well as dedicated pools of rack-mounted servers.

Their managed hosting capabilities often include virtualized resources on dedicated infrastructure with console-based provisioning.

The server outsourcing model can be divided into three allocation options: physical, dedicated virtual and shared virtual. Physical allocation means that specific hardware is allocated to the customer, as in the examples above. Dedicated virtual servers also provide dedicated hardware, but with a hypervisor on the physical machine so that the customer can run multiple operating systems and maximize server utilization.

Shared virtual servers are exposed to the customer as pools of virtual machines. It is not discernible on which physical equipment a particular instance is running, or what other applications may be co-resident on the same machine. Beyond these distinctions, the key differentiating options are the operating systems that are supported (usually confined to Windows and a set of Linux distributions) and the packages that are available off-the-shelf.
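Before surveying individual providers, it may help to see what provisioning a shared virtual server looks like through a programmatic interface. The sketch below uses Amazon EC2 via the boto3 library purely as an illustration; the image ID and instance type are placeholder assumptions.

# Sketch: launch and later terminate a shared virtual server via the EC2 API (boto3).
# The AMI ID and instance type below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical machine image
    InstanceType="t3.micro",           # the size determines RAM, vCPUs and local disk
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# Release the resources when they are no longer needed (pay-per-use billing).
ec2.terminate_instances(InstanceIds=[instance_id])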

There are a variety of providers that offer multiple combinations of allocation options, operating systems and packages. Some of the best known include the following.

Amazon EC2 falls into the category of shared virtual machines and is perhaps the best known. The customer can use pre-packaged Amazon Machine Images (AMIs) from Amazon and third parties, or they can build their own. Instance types vary in their resources (RAM, compute units, local disk size) and operating system.

AppNexus offers dedicated virtualized servers based on Dell computers with Isilon storage.

Their differentiation lies in the degree of visibility they provide into the location of the server, with transparency of the data center as well as rack location and position.

LayeredTech provides co-location and dedicated as well as virtual server offerings. They employ VMware and 3Tera AppLogic for the virtualized solutions, along with a proprietary technology called Virtuozzo Containers, which differs from conventional hypervisors in that it offers isolation of virtual machines while sharing the underlying operating system.

One challenge in using many virtual servers is that they typically do not maintain any local storage and may even lose their state between invocations. There are advantages to this approach: any important configuration information or other data can be stored externally. But for some purposes it may be better, or easier, to maintain persistent local storage, and some server offerings do exactly that.

The Rackspace Cloud (formerly called Mosso) is another very well known IaaS provider, with many managed hosting options in addition to virtual server offerings covering a number of Linux distributions such as Ubuntu, Fedora, CentOS and Red Hat Enterprise Linux.

It has a large pool of dedicated IP addresses and offers persistent storage on all instances. Joyent uses the term accelerators to refer to its persistent virtual machines. A feature called automatic CPU bursting provides reactive elasticity.

Joyent also offers a private version of their framework called CloudControl for enterprise data centers.

[Figure 3-3]

GoGrid also provides free hardware-based load balancing to optimize the performance of customer instances.

ElasticHosts currently targets the British market with two data centers near London.

Rather than using one of the more common hypervisors, they have selected Linux KVM for their architecture, which may appeal to organizations that have taken the same path. They also offer the ability to individually configure server specifications along a continuous spectrum of values (Figure 3-4).

[Figure 3-4]

Storage

Similar to the first application hosting providers, the initial storage offerings were not very sophisticated.

More recent on-demand offerings have changed the game, however, and made Storage as a Service one of the most promising areas of cloud computing. The offerings are typically characterized by a location-agnostic, virtualized data store that promotes the illusion of infinite capacity in a resilient manner while their high level of automation makes them very easy to use.

One of the most common applications is an online backup using SaaS delivery, such as Mozy or SugarSync. Storage services are also useful for archiving, content delivery, disaster recovery and web application development. They still face some challenges such as outages, vendor lock-in, co-mingling of data and performance constraints.

However, they also offer the benefits of a lower cost of storage infrastructure, maintenance and service, while reducing staffing challenges. At the same time they provide increased agility and give the customer the benefit of the provider's expertise in compliance, security, privacy and advanced information management techniques such as archiving, de-duplication and data classification.

In order to cater to the strict demands of cloud computing, many of the storage vendors, such as EMC with their Atmos product line, have begun to deliver hardware and software that is specifically designed for geographically dispersed content depots with replication, versioning, de-duplication, and compression capabilities. These appeal to both the storage service providers as well as many enterprises that are pursuing similar functionality in a private cloud.

In examining on-demand storage services, some of the factors to consider include persistence and replication options as well as the speed and latency with which it can be accessed.

Note that due to the synchronization requirements of content delivery networks, you may observe very different speeds for reading and writing data. Also of relevance are the access protocols and mechanisms, as well as the data structures allowed. Amazon offers two persistent storage capabilities: the Simple Storage Service (S3) and the Elastic Block Storage (EBS). Note that, as implied above, instances launched from Amazon AMIs do not have any persistent storage of their own, but locally mounted disks can be used for logs, results and interim data while the instance is active.

S3 offers distributed, redundant buckets that can be replicated via Amazon's CloudFront content delivery network across Europe, Asia and the United States. S3 can accommodate objects from a single byte to 5 GB in size and provides permissions for controlling access based on Amazon Web Services authentication. In February 2010, Amazon extended S3 to also support versioning, so that customers can recover accidentally deleted or overwritten objects. The feature also lends itself to data retention and archiving requirements.
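The sketch below shows, using boto3, how a bucket might be created, versioning enabled and an object written. The bucket and key names are invented examples; in practice bucket names must be globally unique.

# Sketch: create an S3 bucket, enable versioning and store an object (boto3).
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.create_bucket(Bucket="example-archive-bucket")

# Versioning lets accidentally deleted or overwritten objects be recovered.
s3.put_bucket_versioning(
    Bucket="example-archive-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_object(
    Bucket="example-archive-bucket",
    Key="reports/2010-q1.csv",
    Body=b"sample,data\n1,2\n",
)

# Every overwrite now creates a new version rather than replacing the object.
versions = s3.list_object_versions(Bucket="example-archive-bucket",
                                    Prefix="reports/2010-q1.csv")
print([v["VersionId"] for v in versions.get("Versions", [])])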

The Elastic Block Storage (EBS) is intended as a high-performance virtual hard disk. It can be formatted as a file system and then mounted on any EC2 instance. The size of a volume can range from 1 GB to 1 TB.
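A hedged boto3 sketch of creating a volume and attaching it to a running instance follows; the availability zone, size and instance ID are placeholder assumptions.

# Sketch: create an EBS volume and attach it to an instance (boto3).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # must match the instance's availability zone
    Size=100,                       # size in GiB
    VolumeType="gp3",
)

# Wait until the volume is ready before attaching it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical running instance
    Device="/dev/sdf",                 # the guest then formats and mounts the disk
)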

Amazon also provides a mechanism to store an EBS snapshot in S3 for long-term durability. Other storage services include the following.

The Rackspace Cloud provides containers for static content, which can be replicated via the Limelight content delivery network to over 50 edge data centers.

The Dynavol service supports data that is mirrored for redundancy and supports many access mechanisms. CloudNAS is a policy-based offering with an enterprise focus; storage can be constrained to particular regions (for example, EU only) for compliance or performance reasons. ParaScale is not a provider, per se, but supplies software with which others can build their own storage clouds.

The most important open-source contribution to cloud file systems comes from Apache Hadoop.

It isn't a service in and of itself, but rather an important component which is modeled on Google MapReduce and the Google File System. Nodes can talk to each other to rebalance data, to move copies around and to keep the replication of data high. By default, the replication value is 3, whereby data is stored on three nodes: two on the same rack and one on a different rack.

There are also cloud database options for more structured data. Note that even though most of the information is tabular, it is typically not SQL-conformant and may not support joins, foreign keys, triggers or stored procedures.

It is common for these services to also be able to accommodate unstructured data blobs. Amazon SimpleDB, for example, does not follow the relational model; instead it defines domains containing items that consist of up to 256 attribute-value pairs. The values can contain anywhere from one byte to one kilobyte.
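Assuming the classic SimpleDB interface (exposed in boto3 as the 'sdb' client), a sketch of storing and retrieving such an item might look like the following. The domain, item and attribute names are invented for illustration.

# Sketch: store and read an item in a SimpleDB-style domain (boto3 'sdb' client).
import boto3

sdb = boto3.client("sdb", region_name="us-east-1")

sdb.create_domain(DomainName="products")

# Each item holds a set of attribute-value pairs rather than fixed columns.
sdb.put_attributes(
    DomainName="products",
    ItemName="sku-1001",
    Attributes=[
        {"Name": "color", "Value": "red", "Replace": True},
        {"Name": "size", "Value": "XL", "Replace": True},
    ],
)

result = sdb.get_attributes(DomainName="products",
                            ItemName="sku-1001",
                            ConsistentRead=True)
print(result.get("Attributes", []))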

SimpleDB also supports simple comparison operators, so most common queries are possible as long as they are confined to a single domain. Some other interesting data services include the following.

Google BigTable is a fast and extremely large-scale DBMS designed to scale into the petabyte range across "hundreds or thousands of machines".

Each table has multiple dimensions, one of which is a field for time, allowing versioning. BigTable is used by a number of Google applications, such as MapReduce.

Hypertable is an open-source database inspired by publications on the design of Google's BigTable and sponsored by Baidu, the leading Chinese-language search engine.

Amazon's Dynamo is a highly available key-value store that distributes and replicates data based on consistent hashing rings.

While Dynamo isn't directly available to consumers, it powers a large part of Amazon Web Services, including S3.

Cassandra is Facebook's distributed storage system.

It was developed as a functional hybrid that combines the Google BigTable data model with Amazon's Dynamo infrastructure.

Network

The notion of cloud computing would be very dull without connectivity. But merely having a network isn't sufficient.

There are many variations in the kinds of capabilities that the connections can have. Typically, instances receive only dynamically assigned addresses; if customers require additional addresses, static addresses or persistent domain names, then they need to request these separately. There are two other network-related functions that cloud providers may offer.

Firstly, there may be provisions for network segmentation and mechanisms to bridge the segments. A second optional feature is performance-related functionality such as load balancing.

Many cloud server providers, such as Amazon EC2, allow the customer to define firewalls which restrict the inbound and outbound traffic to specific IP ranges and port numbers. Additionally, the guest operating systems may apply further personal firewall settings. Some providers also offer private VLANs: not only do these give you the advantage of static IP addresses and reduced exposure to broadcast traffic, but it is also possible to segregate traffic from that of other tenants through the use of Access Control Lists (ACLs).
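To make the firewall capability concrete, here is a hedged boto3 sketch that defines an EC2 security group and opens a single port to one address range; the group name and the CIDR block are invented.

# Sketch: define a cloud firewall (EC2 security group) that allows HTTPS
# from a single address range only (boto3). Names and ranges are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

group = ec2.create_security_group(
    GroupName="web-frontend",
    Description="Allow HTTPS from the corporate network only",
)

ec2.authorize_security_group_ingress(
    GroupId=group["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # hypothetical corporate range
    }],
)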

One approach, based on OpenVPN (an open-source SSL VPN), sets up an encrypted tunnel to the enterprise, and within the cloud if needed, by placing firewall servers that create inbound connections. These servers are able to persist the IP addresses by providing a virtual pool and can also deliver functionality around load balancing, failover and access controls.

In 2009, Amazon launched a service that enhances and secures connectivity between cloud services and the enterprise. The Amazon Virtual Private Cloud facilitates hybrid cloud implementations for enterprises by creating a secure virtual private network between the enterprise and Amazon Web Services, and by extending the infrastructure, including firewalls, intrusion detection and network management, to bridge both networks.
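The sketch below shows only the very first step of such a setup with boto3: carving out a private address space and a subnet within it. The address ranges are assumptions, and the VPN attachment that actually bridges the enterprise network (virtual private gateway, customer gateway) is deliberately omitted.

# Sketch: create a virtual private cloud and one subnet within it (boto3).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc = ec2.create_vpc(CidrBlock="10.20.0.0/16")        # private address space
vpc_id = vpc["Vpc"]["VpcId"]

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.20.1.0/24")
print("Created VPC", vpc_id, "with subnet", subnet["Subnet"]["SubnetId"])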

Once connectivity has been configured, the next task is to ensure that it performs satisfactorily. The network performance of cloud services is primarily defined by two factors: latency and bandwidth. The latency is bounded by the physical distance between the client and the server, and is therefore directly related to the provider's geographical coverage, while the bandwidth depends on the capacity the provider is able to obtain from its network providers.

However, there can also be internal bottlenecks that impact performance. In particular, the servers themselves may become overloaded during periods of high activity and therefore be unable to service requests with sufficient speed. Assuming the application can be scaled horizontally across multiple servers, the solution to this bottleneck is to balance the load of incoming requests.

This balancing can be accomplished at a local level, internal to the data center, or at a global level. Many providers offer local load-balancing capabilities. The focus of global load balancing is less on spreading the load over heavily utilized servers and more on distributing it geographically, so that users connect to the closest available service. The DNS client sends a request for name resolution and accepts the first response, which directs it to the closest server.
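Global load balancing is implemented inside the provider's DNS infrastructure, but the underlying idea of preferring the nearest endpoint can be illustrated from the client side. The sketch below resolves a hostname to all of its addresses and times a TCP connection to each; the hostname and port are arbitrary examples, and this is a teaching illustration rather than how a DNS-based balancer actually works.

# Client-side illustration of the "nearest endpoint" idea: resolve a name,
# time a TCP connection to each address and pick the fastest.
import socket
import time

HOST, PORT = "example.com", 443   # arbitrary example endpoint

addresses = {info[4][0]
             for info in socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP)}

timings = {}
for addr in addresses:
    start = time.perf_counter()
    try:
        with socket.create_connection((addr, PORT), timeout=2):
            timings[addr] = time.perf_counter() - start
    except OSError:
        pass  # unreachable address: skip it

if timings:
    best = min(timings, key=timings.get)
    print(f"Closest endpoint: {best} ({timings[best] * 1000:.1f} ms)")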

Integration

The next step after arranging network connectivity is to configure the applications so that they can exchange data and synchronize their activity.

There are also differences in the level of support that IaaS providers offer. In theory, they do not need to facilitate the integration at all. However, this is also an area where the infrastructure provider can demonstrate added value, and many of them do. Amazon's Simple Queue Service (SQS), for example, provides an unlimited number of queues and messages, with message sizes of up to 8 KB. The customer can create queues and send messages.

Since the messages remain in the system for up to four days, they provide a good mechanism for asynchronous communication between applications.
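A minimal boto3 sketch of that asynchronous pattern follows: one component queues a message and returns immediately, while another picks it up whenever it is ready. The queue name and message body are invented.

# Sketch: asynchronous communication between applications through a message
# queue (Amazon SQS, boto3). Queue name and payload are placeholders.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

queue_url = sqs.create_queue(QueueName="order-events")["QueueUrl"]

# Producer: enqueue a message and return immediately.
sqs.send_message(QueueUrl=queue_url,
                 MessageBody='{"order_id": 42, "status": "paid"}')

# Consumer: poll for work whenever it is ready, possibly much later.
received = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=1,
                               WaitTimeSeconds=10)

for message in received.get("Messages", []):
    print("Processing:", message["Body"])
    # Delete the message once handled so that it is not redelivered.
    sqs.delete_message(QueueUrl=queue_url,
                       ReceiptHandle=message["ReceiptHandle"])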

As mentioned above, synchronous connections can be accomplished without any infrastructural support. Nonetheless, providers such as OpSource offer additional web services that applications can leverage. The OpSource Services Bus facilitates reporting and visualization of key performance indicators (KPIs). The OpSource Connect services extend the Services Bus by providing the infrastructure for two-way web services interactions, allowing customers to consume and publish applications across a common web services infrastructure.

This is of particular interest to infrastructure customers who intend to generate revenue from selling their application as a web service. Apache Hadoop also provides a framework for much more tightly coordinated interaction between applications.


