Cloud deployments have brought about game-changing benefits for both the providers and the consumers but continue to be challenged by certain inhibitors to adoption. Consider the case of an enterprise’s chief information officer (CIO) contemplating a move to the cloud. The cost and agility benefits offered by cloud deployments make it an attractive option for the organization. It allows the IT group to focus its limited resources on the core business of the company, enabling it to fund and undertake new projects with business impact.
This book chapter is excerpted with permission from Designing Networks and Services for the Cloud, written by Huseni Saboowala, Muhammad Abid, and Sudhir Modali for Cisco Press.
The network provides the capabilities and analytics that allow the cloud provider to allay the fears of the CIO. So far, this chapter explored the network’s pivotal role in spurring the adoption of the cloud, enabling organizations to migrate more and more of their core workloads to the cloud today. And as we look ahead, the network is poised to play an even more crucial role in future clouds.
A variety of clouds exist today: public, private, and hybrid clouds, along with community and specialty clouds to address the needs of different business verticals such as healthcare, media, finance, or government. As illustrated in Figure 4-2, we are moving toward a world of many interconnected clouds, serving the needs of users who want to experience cloud services anywhere, at any time, and on any device, and of businesses, which want IT to be delivered as a service.
Figure 4-2. World of Many Clouds (Source: Cisco)
In this multicloud world, the network’s role is significantly expanded because these clouds need to securely connect to each other. In addition, massive amounts of infrastructure resources, along with applications and content, need to be combined and delivered on demand, to provide a secure and consistent user experience regardless of the user location and number of cloud platforms involved. The network fabric enables bringing together these capabilities dynamically, virtualizing connections within the cloud, between clouds, and beyond the clouds to the consumers.
Over the past few years, there has been an explosion in the number and types of consumer and business mobile devices, sensors, and actuators, many of which are now connected to the network. Although we tend to think of clouds as consisting only of the servers in data centers, they are not so limited. In fact, the cloud extends out to all these network-connected electronic devices, smart meters, and other sensors, as illustrated in Figure 4-3. When you put it all together, it is easy to see an even larger cloud on the horizon, with billions of network-connected components.
Figure 4-3. Cloud of Mobile Consumer Devices and Sensor Devices
(Source: J. Rabaey, “A Brand New Wireless Day”)
Consider the dozens of sensor devices running inside modern cars today. With 3G/4G mobile data connectivity enabling machine-to-machine (M2M) communications, sensor devices can monitor and share vehicle performance data with the car manufacturer, who can then use it to suggest appropriate maintenance or repairs. Or consumers might want their car to communicate with other cars around them, over an ad hoc local network, and learn about road and traffic conditions up ahead. Security is obviously critical here. After all, we would not like untrusted parties gaining access to these devices, with perhaps the ability to start interfering with brakes or other vehicle safety features. The possibilities are endless, and as you can see, dynamic, scalable, and secure networks have an increasingly vital role to play in the cloud in the years ahead. These futuristic clouds are further explored in Chapter 13, “Peeking into the Future.”
Consumer and business cloud services, including rich-media services, keep growing in popularity, leading to an explosion in data center traffic. According to Cisco’s Global Cloud Index, cloud IP traffic is expected to grow at 66 percent compound annual growth rate (CAGR) from 2010 to 2015, which is twice the 33 percent CAGR expected for overall data center IP traffic during the same period. As illustrated in Figure 4-4, overall data center traffic volume is expected to reach 4.8 zettabytes in 2015. And cloud traffic is expected to be over a third of that pie (1.6 zettabytes). (A zettabyte is a billion terabytes; the number 1 followed by 21 zeros!)
Figure 4-4. Data Center Traffic Quadruples from 2010 to 2015. Cloud Traffic Is Expected to Be Just over One Third of the Data Center Traffic in 2015.
(Source: Cisco Cloud Index)
NOTE: Cisco’s Global Cloud Index considers all provider and enterprise data centers, and includes the following traffic categories:
Let’s try to put 1.6 zettabytes in perspective. This is the equivalent of 5 trillion hours of business web conferencing or 1.6 trillion hours of HD video streaming. Another interesting comparison is with the overall global Internet traffic, which in 2015 is expected to be just under 1 zettabyte, according to the Cisco Visual Networking Index (VNI).
In addition to the mind-boggling growth in traffic volumes, cloud applications, services, and infrastructure are responsible for transforming the pattern of data center traffic flows. Cloud-ready networks inside data centers, between data centers, and from data center to users will play an increasingly crucial role in terms of scaling efficiently to handle this growth in cloud data traffic and maintain profitability for the cloud providers without compromising the end-user experience.
Earlier in this chapter, we discussed the role of the network in speeding up adoption of cloud services, providing solutions to the fundamental concerns that businesses have about wholeheartedly embracing the cloud. Cloud providers can leverage their network assets to enable their customers to confidently start moving more and more of their critical workloads to the cloud. On top of this, what if cloud providers could also directly monetize their network assets? What if networks and network services could be offered by the provider as a service; that is, network-as-a-service (NaaS)?
Along with compute and storage, networks and network services can be offered as a service, to be consumed, metered, and billed, based on usage. The economics of this model provide network vendors and cloud providers with strong incentives to innovate on compelling network services that add significant value for their customers.
The sections that follow describe methods of offering networks and network services for consumption.
The discussion on cloud service management in Chapter 3, “Cloud Taxonomy and Service Management,” explained how cloud services, defined in the service catalog, are offered to customers through self-service portals or via application programming interface (API) access. In addition to including various predefined cloud services, the service catalog enables the flexibility to add or modify optional features for those services. The same service catalog provides a means to define and offer networking for consumption (ranging from a basic VLAN service to a complex network service that provides security across multiple data centers).
To include network services in the service catalog, they need to be abstracted and presented in a simplified manner to the customer who may not be a networking expert. The intricacies and complex operations involved in enabling the network service must be hidden from the customer. Simplification is key, and ordering NaaS should be as easy as a few clicks on the cloud portal or a small number of intuitive API calls.
Here are a few examples of data center networking services, both basic and premium, that a provider could offer in their service catalog:
The service catalog does not need to be restricted to network services inside the data center. After all, the end user consumes the cloud service from across the WAN (Provider IP NGN) or the Internet. Cases where the cloud provider owns or controls network assets in the IP NGN present an opportunity to abstract the network services available in the IP NGN and bring them up into the service catalog. Examples of such services include the following:
Not only do these NGN services open up additional revenue streams for the cloud provider, they also enable the provider to offer end-to-end security and performance capabilities. Certain network services such as firewall, QoS, and WAN application acceleration could potentially be distributed across the NGN and data center networks.
One option for monetization is to offer network services à la carte. Here network connectivity and services can be individually ordered by the consumer. The exact needs are conveyed as part of the API call or via a portal. For instance, if the developer needs to simply connect the database VM to an isolated virtual network segment that is not routable from the Internet but reachable from the web servers, those network attributes would be specified as part of the API invocation, as shown in the following pseudo API example:
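The pseudo API example referenced above does not appear in this excerpt; a hedged sketch of what such an invocation might look like follows (every function and parameter name here is illustrative, not an actual provider API):

```python
# Hypothetical NaaS client calls -- names are invented for illustration,
# not taken from any real provider's API.

def create_network_segment(name, internet_routable, reachable_from):
    """Build the request payload a NaaS API call might carry."""
    return {
        "segment": {
            "name": name,
            "internet_routable": internet_routable,  # False = isolated from the Internet
            "reachable_from": reachable_from,        # segments allowed to reach this one
        }
    }

def attach_vm(segment_request, vm_name):
    """Record a VM attachment to the requested segment."""
    segment_request["segment"].setdefault("attached_vms", []).append(vm_name)
    return segment_request

# Connect the database VM to an isolated segment that is not routable
# from the Internet but is reachable from the web servers.
request = create_network_segment(
    name="db-segment",
    internet_routable=False,
    reachable_from=["web-segment"],
)
request = attach_vm(request, "db-vm-01")
```

The point of the sketch is that the developer states only the desired network attributes; how the provider realizes them on real devices stays hidden.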
A well-designed API enables the users to easily describe what they want out of the network: for example, a network that supports a certain amount of bandwidth, a network with QoS, or perhaps a network with monitoring services. The APIs represent a contract to provide a certain service. While the underlying networking devices may differ, the functionality delivered by the API call is expected to be the same. In essence, a network hypervisor is needed. Analogous to the compute hypervisor, the network hypervisor would provide the ability to abstract the underlying networking hardware into services that can then be consumed by the user.
Not too long ago, though, developers did not have any visibility or control over the network, with infrastructure-as-a-service (IaaS) offerings focusing primarily on compute and storage, as illustrated in Figure 4-5. The network was there only to provide connectivity. Each VM would have a very flat view of the world, and there would not be any topology at all. Obviously, network services would not be available for consumption in such architectures.
Figure 4-5. IaaS Offerings Lacking API Access to the Network
(Source: Cisco, Lew Tucker)
OpenStack is open source software that enables any organization to build its own public or private cloud stack. It aims to deliver a massively scalable cloud operating system, along the lines of the software that powers colossal clouds such as Amazon EC2 today. OpenStack has been gaining momentum, with contributions from a growing global community of developers, vendors, and service providers helping it grow in functionality and maturity.
Initially, OpenStack started off as a platform underpinned by three major services: the Nova compute service, the Swift storage service, and the Glance virtual disk image service. The OpenStack development community has been actively engaged in developing additional services, some of which are shown in Figure 4-6. One such service, named Quantum, aims to provide network connectivity as a service. Along with requesting VMs and storage, developers can now request network connectivity, as well, using the Quantum API.
Figure 4-6. OpenStack Services
Figure 4-7 shows how Quantum has a pluggable framework with plug-ins offered by multiple networking vendors, including Cisco and Nicira/VMware. This is key to adoption; customers do not have to fear being locked into a particular vendor. The plug-ins map the API abstractions to the actual networking device underneath. In addition to offering basic Layer 2 virtual network segments, the Quantum API has an extensible architecture allowing advanced network services to be offered through the API extensions. And this extensible architecture is important, as the Quantum API is still evolving, and new network features such as firewalls, VPNs, and load balancers can be offered through the extensions first, before they get baked into the core Quantum API over time. Cloud providers have an opportunity to differentiate themselves by offering advanced networking features via the extensions.
Figure 4-7. Quantum API Architecture
Services such as OpenStack Quantum represent a fundamental shift in cloud networking. Networks are no longer hidden beneath the hypervisor, and network services are no longer limited to providing basic connectivity for the VMs. Applications can interact with network services via the API, bypassing the hypervisors.
Network containers provide a representation of the data center network infrastructure that is dedicated to a tenant for the time it is provisioned. Compared to ordering individual network services, containers enable a higher level of abstraction, encompassing the set of network connectivity and network services allocated to a tenant service. Figure 4-8 shows an example of a tenant network container for a three-tier web application. Separate network containers have been created for the Web, App, and DB tiers, nested inside the tenant network container and separated by firewall services. External connectivity is provided for the container to be reachable from the corporate VPN for management purposes, while the Web container is reachable from the Internet through a load balancer.
Figure 4-8. Network Containers with External Connectivity for a Tenant’s Three-Tier App
If the entire topology in Figure 4-8 can be saved as an abstract model, it could be offered through the services catalog for consumption. That would significantly ease the deployment of the tenant’s application, freeing the tenant from the lengthy process of individually ordering these network services and managing the interdependencies. A sophisticated network abstraction system such as the Cisco Network Services Manager (NSM) enables such use of network container models to define the behavior of the network services as a holistic virtual network infrastructure.
Cisco NSM provides model-based policy-driven abstraction and orchestration of the cloud network environment, leading to increased flexibility in terms of what can be done in the network, what services/capabilities can be exposed from the network, and what tenant container environments can be provisioned on the network. A REST-based API allows orchestration and other systems to interact with NSM and access the abstractions.
Comprehensive network container models, such as the three-tier web application in Figure 4-8, can be instantiated on diverse cloud network infrastructures, with NSM abstracting away the platform-specific behaviors of the underlying networks. Figure 4-9 shows an NSM system managing three cloud infrastructure stacks, or pods. One pod could be based on Nexus networking platforms, another may leverage existing Catalyst-based networking, and the third may be based solely on virtual network services. The NSM service controller associated with a pod understands the specific devices and platforms in the pod, and when it receives a directive to instantiate a particular abstract topology model, it interacts with the networking devices in that pod to stitch that topology together.
Figure 4-9. Cisco NSM and Instantiated Network Containers for Multiple Tenants
In addition to the abstraction, this model enables the mobility of network containers. Instantiated network containers, including the application and data residing in them, can be moved from one cloud pod to another, as needed, without any changes.
Various types or tiers of container models can be included in the service catalog, addressing different requirements such as security, performance, or application delivery. The customer can then pick one or more of these containers and select the VMs to be placed inside them. The cloud administrator designs these container models to address the varied network service needs of customers, enabling the provider to offer differentiated pricing based on the density, complexity, and perceived value of the included network services.
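A tiered container catalog of this kind might be modeled as simply as the following sketch; the tier names, included services, and prices are invented for illustration, not drawn from any provider's actual offering:

```python
# Illustrative catalog of network-container tiers. Tier names, contents, and
# prices are invented for this sketch, not taken from any real offering.
CONTAINER_CATALOG = {
    "bronze": {"segments": 1, "firewall": False, "load_balancer": False, "price_per_day": 5},
    "silver": {"segments": 2, "firewall": True,  "load_balancer": False, "price_per_day": 15},
    "gold":   {"segments": 3, "firewall": True,  "load_balancer": True,  "price_per_day": 40},
}

def order_container(tier, vms):
    """Pick a container tier from the catalog, then place the tenant's VMs inside it."""
    model = CONTAINER_CATALOG[tier]
    return {"tier": tier, "services": dict(model), "vms": list(vms)}

# A tenant orders the premium tier for a three-tier application.
order = order_container("gold", ["web-01", "app-01", "db-01"])
```

Pricing differentiation falls out naturally: each tier carries its own service density and price, and the tenant chooses the trade-off at ordering time.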
Even though the service catalog allows the tenant to easily pick and choose from a variety of network services and predesigned topologies, tenants might need to customize and fine-tune their logical network in the cloud to meet their goals. Providers that can offer the tenant admin increased flexibility on day 2 operations, such as runtime configuration and modification of network services, will be able to further differentiate their offerings from the competition.
Through our discussion of the OpenStack Quantum service and the Cisco NSM system, you saw how network services can be offered in a simplified manner to spur consumption, either as individual network connectivity services or as network containers. These offerings give cloud providers access to additional revenue streams, realizing improved returns on their infrastructure investments.
To fulfill their role in the adoption and monetization of cloud services, networks need to adapt to the cloud environment. The rise of cloud models is changing what is happening on the network:
These changes are driving the rapid evolution of networks. But not everything about the network has to change. Its foremost purpose still remains the same. The network still has to provide transport for the movement of data between the various components of an application, its storage, and the end user. It still has to provide security for access to applications and data. And it is still responsible for delivering a certain level of application performance to the end user. What changes is how these jobs are to be performed (with automated provisioning and management, with support for virtualization and multitenancy, and with location independence).
Automation is one of the most important areas of evolution for networks, and APIs are a fundamental means of enabling automation. Two of the biggest impacts of the cloud on networks are the sheer scale and the frequency of change, and APIs allow us to address both. When networks and network services can be provisioned and managed with well-designed APIs, such as those exposed by the network hypervisors discussed earlier in this chapter, the cloud network can scale efficiently from one rack to a whole data center to collections of data centers. At the same time, the frequent changes brought about as tenants allocate and de-allocate cloud services can be handled without any human touch. The economics of the cloud make such zero-touch operations mandatory.
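Zero-touch handling of tenant churn comes down to driving every allocation and de-allocation through the same API with no manual step in the loop. A minimal sketch (the class and its methods are a toy stand-in, not a real provisioning system):

```python
class CloudNetworkAPI:
    """Toy stand-in for a provisioning API; a real system would drive devices."""
    def __init__(self):
        self.networks = {}

    def allocate(self, tenant, spec):
        # Every tenant arrival is an API call, never a manual configuration task.
        self.networks[tenant] = spec
        return tenant

    def deallocate(self, tenant):
        # Departures reclaim resources through the same zero-touch path.
        self.networks.pop(tenant, None)

api = CloudNetworkAPI()
for tenant in ("t1", "t2", "t3"):
    api.allocate(tenant, {"bandwidth_mbps": 100})
api.deallocate("t2")
```

The same loop works unchanged whether it runs for three tenants on one rack or thousands across multiple data centers, which is the scaling property the chapter argues for.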
A couple aspects of virtualization are relevant to the evolution of networks. First is the network’s awareness of server virtualization, which was introduced in Chapter 1, “Virtualization.” Such virtualization-aware networks can identify and treat each VM as a separate networking endpoint. In addition, such networks can attach security and other policy profiles to VMs in a sticky fashion. As VMs migrate from one physical host to another, or one data center to another, these profiles move along with them.
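The sticky-profile behavior can be illustrated with a registry keyed by VM rather than by physical port, so the policy travels with the VM when it migrates. This is a toy model of the concept, not any vendor's implementation:

```python
# Policy is keyed by VM identity, not by the physical host or switch port
# the VM happens to occupy; names below are invented for illustration.
profiles = {"db-vm-01": {"acl": "deny-internet", "qos": "gold"}}
placement = {"db-vm-01": "host-a"}

def migrate(vm, new_host):
    """Moving a VM changes its placement; its policy profile is untouched."""
    placement[vm] = new_host
    return profiles[vm]  # the same policy applies at the new location

policy_after_move = migrate("db-vm-01", "host-b")
```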
The other aspect relates to networks themselves: that is, network virtualization. Also discussed in Chapter 1, virtualization of networks and network services enables the end-to-end isolation required to allow multiple tenants to securely coexist on the same shared underlying infrastructure. Advanced network abstractions such as containers can build on top of this virtualization and provide the flexibility of carving up the infrastructure into network containers. Such containers, described earlier in this chapter, would be completely isolated from the network containers of other tenants, enabling multitenancy.
Networks today support user and device mobility in various ways. With the advent of the cloud, network capabilities around mobility need to evolve further. The virtualization and resource pooling aspects of clouds mean that servers and applications are no longer tied to physical infrastructure either. In fact, applications can be thought of as floating over a pool of infrastructure resources, seamlessly extended within and between clouds.
With the mobility of applications and data in addition to the users themselves, networks can no longer depend solely on their location to make policy decisions. These modern networks, shown in Figure 4-10, gather and rely on context information in this borderless world, ensuring that users can access only those applications and that data to which they are entitled. In addition, these networks strive to achieve a consistent level of user experience, irrespective of the location of the user, application, and data in the cloud.
Figure 4-10. Application/Data Mobility
The network fabric is the glue that securely binds together heterogeneous resources inside clouds and between clouds and delivers them beyond the cloud to the end users. Based on requirements, characteristics, and administrative domains, cloud networks can be divided into three distinct entities:
How are these networks evolving to support cloud models? What is the role played by these networks in enabling business-grade cloud services? And how do we instantiate these concepts in deployment use cases? What end-to-end considerations apply for the secure delivery of cloud services with an SLA? These are some of the questions we explore in the rest of this book.