
SDN: Software Defined Networks, by Thomas D. Nadeau and Ken Gray (O'Reilly). It is worth taking a look at the communication scheme: between an SDN application and the SDN controller, the "communication language" is the API. The concept of network programmability and Software Defined Networking (SDN) enables public and private cloud operators to address mounting challenges.

For example, there may be two server rooms within a three-storey DC, each containing a web server farm; one on the first floor and one on the second floor.

Although both server farms are connected to devices in different physical locations, as long as they are both configured and connected to the same VLANs, and those VLANs are trunked between floors, all connected users will have access to data within their specified VLANs irrespective of which server farm they connect to. Another example is the Marketing department being spread across three floors. As all Marketing staff require access to the same enterprise resources, they all reside within a VLAN pre-allocated to Marketing.

If this VLAN is trunked between the three floors connecting to each Marketing area, all connected staff will have access to Marketing resources. This transit of VLANs between network segments is the purpose of trunking. Quality of Service (QoS) is another function configured at the aggregation layer, allowing bandwidth (BW) capacity to be divided and prioritized for different network traffic types according to company policy or application requirements. A call centre would need voice traffic prioritized over email traffic because telephone calls are its core business.

A TV production company may need video traffic prioritized to ensure enough available BW to support HD streaming between multicast endpoints at peak usage. Access Control Lists (ACLs) also feature at the aggregation layer, specifying network port numbers, protocols and digital resources in the form of IP address ranges. ACLs filter network traffic, defining who has access to what. Consider an ACL permitting a management address range: any usable IP address within this range has access to a server containing backup network configurations, as sketched below.
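The ACL idea can be illustrated with a short sketch. This is not a device configuration, just the matching logic in Python; the subnet, host addresses and the "backup server" rule are hypothetical.

```python
# Hypothetical ACL: permit only the management range to reach the backup server.
import ipaddress

PERMITTED_RANGE = ipaddress.ip_network("10.10.20.0/24")   # made-up management subnet

def acl_permits(source_ip: str) -> bool:
    """Return True if the source address falls inside the permitted range."""
    return ipaddress.ip_address(source_ip) in PERMITTED_RANGE

print(acl_permits("10.10.20.15"))   # True  - management host, access granted
print(acl_permits("10.10.30.7"))    # False - e.g. an Engineering host, denied
```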

If someone with an IP address outside this permitted range (e.g. a host in Engineering) tried to gain access, an ACL would be in place to deny this. ACLs are important for enforcing trust, access and security policies at the aggregation layer. VLANs provide segmentation at L2, defining the logical grouping of network hosts into an associated computing domain. This mapping populates forwarding tables, which are databases of ingress (inbound) and egress (outbound) interfaces on each infrastructure device.

These tables contain data used by the infrastructure devices to decide which interfaces send and receive data. The core is the backbone of the network, facilitating L3 (App. I) routing services to the aggregation layer. High-speed packet switching is required at the core to route IP traffic as quickly as possible throughout the network.

Crucial design principles within the core require speed, simplicity, redundancy and high availability. To remain competitive, enterprise and service provider networks need connections to be available around the clock (Cisco). This north-south traffic (Fig 5) flows from the client request initiated from an EP connected to the access switch.

If the destination EP is on the local network segment, the request will get switched directly to the connected host. This north-south traffic pattern enjoys low latency switching between local hosts on the same network segment or rack cabinet. If the destination EP is in another broadcast domain (VLAN), the request traverses extra network hops through the aggregation layer to the core, introducing latency.

The core router would then consult its routing table to decide which interface to forward the traffic to its destination. Each additional hop in the path introduces latency that can negatively impact business critical applications as links connecting to remote networks become saturated and oversubscribed.

In Fig 6, imagine the access switches SW1 and SW6 each have 48 ports connected to users on the corporate network, with each port having a maximum forwarding capacity of 1Gbps. If the uplinks between the access-aggregation block (2 per switch) also forward at 1Gbps, the uplinks will have an oversubscription ratio of 24:1 (48Gbps of potential access demand over 2Gbps of uplink capacity). Oversubscription is a consideration required in all networks irrespective of size and is the ratio between the potential BW requirements of all connected hosts divided by the maximum capacity of the connecting uplinks.

A general rule of thumb accepted by DC professionals is having an access layer switch with 48 x 10GigE ports connecting to an aggregation layer switch through 4 x 40GigE ports, an oversubscription ratio of 3:1. If the interfaces connecting network tiers are not provisioned with enough BW to support bi-directional data transmission at wire rate, blocking and oversubscription ensue, introducing congestion and packet loss into the network.
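The arithmetic behind these ratios is simple enough to script. A minimal sketch using the two examples above (the port counts and speeds are taken from the text, nothing else is implied):

```python
def oversubscription_ratio(ports: int, port_gbps: float, uplinks: int, uplink_gbps: float) -> float:
    """Potential host demand divided by available uplink capacity."""
    return (ports * port_gbps) / (uplinks * uplink_gbps)

# 48 x 1GigE access ports with 2 x 1GigE uplinks (Fig 6 example)
print(oversubscription_ratio(48, 1, 2, 1))     # -> 24.0, i.e. 24:1

# 48 x 10GigE access ports with 4 x 40GigE uplinks (rule-of-thumb example)
print(oversubscription_ratio(48, 10, 4, 40))   # -> 3.0, i.e. 3:1
```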

This east-west traffic pattern has to travel across the interconnections between source and destination network segments. As none of the connecting links have 60Gbps capacity, congestion, latency and packet loss would choke the network due to the oversaturation of the uplink connections. There will always be a level of oversubscription within any design, but this should be balanced against the application and user requirements; a ratio of 3:1 or better is generally recommended.

QoS can be implemented to ensure that priority traffic types are allocated BW before link saturation occurs. Network operators employ acceptable oversubscription ratios pertaining to the overall network design, striking a balance between performance and expense. Oversubscription is a vital consideration in enterprise DCs where east-west traffic constitutes the routine traffic pattern.

This consideration is even more paramount in large scale DCs where data is forwarded between hundreds or even thousands of physical and virtual EPs (Bloomberg). Large scale DCs have to manage a tremendous amount of complexity across the network infrastructure as well as the applications sitting on top. These are all examples of routine, east-west traffic flows within modern DCs that contribute to this complexity.

Server-to-server communications across the classical hierarchical architecture introduce congestion and network choke points, not only due to oversubscription but also because of the L2 path redundancy introduced to protect against outages should a switch port or circuit fail. This redundancy introduces the possibility of switching loops, where traffic is sent on an infinite circular path within the three-tier architecture.

Switching loops occur in L2 networks with redundant paths between infrastructure devices. Switches that learn paths to destination networks from conflicting interfaces can get stuck in a process where they continually flood VLAN traffic, creating broadcast storms and Media Access Control (MAC) table instability. The Root Bridge (RB) is the root of the inverted spanning tree that provides a single logical forwarding path through the entire switching fabric for each virtual network (VLAN).

To achieve this loop-free, optimal path, the fastest shortest-path connections to the RB on each infrastructure device are allocated as either a Root Port (RP) or a Designated Port (DP); all other ports are blocked, which removes redundant links (Fig 7a and 7b). As you can see in Fig 7a there are 11 links in the three-tier architecture.

Once the Spanning Tree Protocol (STP) calculates the best loop-free path, redundant links are blocked and the resultant spanning tree is formed, ready to forward VLAN traffic. As seen in Fig 7b, the blocked links sit idle. Such wastage is costly in both enterprise and cloud environments because congestion and network bottlenecks are introduced when all but one forwarding path is blocked via STP.
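The effect of pruning redundant links can be sketched with a small graph example. Real STP elects a root bridge through BPDU exchange and compares port costs; the sketch below, assuming the networkx library and a hypothetical three-tier topology, simply builds a loop-free tree from a chosen root to show which links end up blocked.

```python
# Illustrative sketch only: real STP elects a root bridge via BPDUs and uses
# port costs, but a shortest-path tree from the root gives the same intuition.
import networkx as nx

G = nx.Graph()
physical_links = [
    ("core", "agg1"), ("core", "agg2"),
    ("agg1", "agg2"),                      # redundant inter-aggregation link
    ("agg1", "sw1"), ("agg1", "sw2"),
    ("agg2", "sw1"), ("agg2", "sw2"),      # redundant uplinks
]
G.add_edges_from(physical_links)

root_bridge = "core"                       # assume the core switch wins the RB election
tree = nx.bfs_tree(G, root_bridge)         # loop-free forwarding tree

blocked = [e for e in G.edges()
           if not (tree.has_edge(*e) or tree.has_edge(e[1], e[0]))]
print("Forwarding links:", list(tree.edges()))
print("Blocked (redundant) links:", blocked)
```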

In the three-tier topology of Fig 7, redundant fiber links remain unused until a change in the STP topology re-calculates an alternate path. Optical cables and modules are expensive and have to be non-blocking in order to justify the investment. In addition, the number of hops required for server-to-server, east-west traffic flows adds extra latency to transactions between hosts communicating over the network. As big data analytics with tremendous server-to-server communication requirements increases in volume and importance in the data center, this latency becomes even more of a concern.

East-west traffic flows, network convergence, oversubscription and STP are some of the limitations that have prompted the exploration of alternative architectures and technologies in DC environments. With background given on the traditional hierarchical model and its limitations, the following section provides information on the preferred architectural model used within modern DC environments.

This provides an understanding of the physical infrastructure that powers DC operations before looking at the software overlay technologies that sit on top. In the DC, the successor to the three-tier model is spine-leaf. Spine-leaf is the de facto reference architecture used in modern DCs (Wang and Xu), and improves on the limitations presented by the hierarchical model and STP. Spine-leaf is an adaptation of the Clos network developed in the 1950s (Hogg) and comprises a flat, non-blocking switching architecture where the core is divided across a number of modular spine switches.

Each spine switch up-links to every leaf. As each leaf switch is connected to all spine switches, this creates a flat, densely connected topology where all leaf switches are equidistant from anywhere in the architecture (Fig 8b). This gives each device access to the full BW of the fabric, building a simplified foundation of predictable performance (Alizadeh and Edsall). Applications, services and DC EPs can communicate with endpoints on opposite sides of the DC through the directly connected local leaf switch, across a connecting spine switch, to the remote leaf switch connected to the destination host (Fig 8b).

This brings consistent and predictable latency and BW through the DC facilitating horizontal scaling for east-west traffic requirements. Fig 8a depicts an example Spine-Leaf topology with two spines, each connecting to four leaf switches that in turn have 48 connected hosts.

The oversubscription ratio is 3:1, with each group of 48 hosts connecting through 10Gbps ports and 4 x 40Gbps up-links from each leaf to the spines. DC operators need to maximize utilization of invested resources. Although foundational to networking, the hierarchical model provides a forwarding path confined to the cable or EtherChannel permitted by STP.

This introduces choke points between tiers, creating a blocking architecture that cannot scale beyond a certain point, wasting capital investment. In spine-leaf designs, the uniform, densely connected switches place each EP no more than three hops away from the destination (Fig 8b). This equidistance allows simplified and predictable scaling through the DC as low latency traffic is load balanced across multiple paths of equal cost.
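A small sketch makes the equidistance and equal-cost multipath point concrete. It assumes the networkx library and a hypothetical two-spine, four-leaf fabric; every leaf pair turns out to be two switch hops apart, with one equal-cost path per spine for the fabric to load balance across.

```python
# Minimal sketch of a leaf-spine fabric (hypothetical device names).
import networkx as nx
from itertools import combinations

fabric = nx.Graph()
spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2", "leaf3", "leaf4"]
fabric.add_edges_from((s, l) for s in spines for l in leaves)  # every leaf to every spine

for a, b in combinations(leaves, 2):
    paths = list(nx.all_shortest_paths(fabric, a, b))
    print(f"{a} -> {b}: {len(paths)} equal-cost paths, {len(paths[0]) - 1} hops each")
```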

This non-blocking architecture enables infrastructure devices to forward packets bi-directionally at line rate through the switching fabric with the ability to scale to hundreds of thousands of ports. Now that we have introduced basic infrastructure concepts, we can delve into the application landscape and the changes seen in recent years. This provides context on the evolution of applications from the monolithic architectures of the past to the cloud native applications of the future.

Forward thinking enterprises are now considering the challenges around migrating on premise applications to the cloud. The next section will describe traditional, monolithic applications and how they differ from highly available cloud native applications. This section describes the transition from owning and maintaining on-site application architectures, to consuming pay as you use cloud computing services. This is important because the SDN approach is most powerful when it is used to provide cloud native applications with the speed and agility required for highly available, self-healing, autonomic network services.

This serves to convey that the health and resilience of a cloud native architecture is dependent upon the capabilities afforded by an underlying, programmable infrastructure. The traditional waterfall development process has been disrupted by agile development practices.

Each new phase enables application developers to iterate more development processes in parallel, taking advantage of new automated application code management, testing, and deployment tools. This dramatically increases application quality and significantly accelerates the development and deployment of new business capabilities, which has driven their adoption in the IT world.

Traditionally, when service providers build or buy applications to serve new business outcomes, the web, application and database components are provisioned in silos connected by the underlying network infrastructure (Fig 9). This methodology represents a complex workflow where continuous delivery application development is impeded by the compartmentalization of each application tier. For a new application to be brought into production, the requirements of that application are set by the application development, web and database teams depending on its particular features (Richardson). The network team is expected to distil and translate these requirements into configurations the network can understand.

Classically this configuration of infrastructure involves a device level process where each infrastructure device is configured one by one through the CLI. Such configurations include the VLAN, QoS and ACL settings described earlier, among others. This is important whether the traffic remains within the originating administrative domain or has to cross organizational boundaries to an external entity.

These are but a few examples of the complex configuration tasks required on each infrastructure device within a DC potentially housing hundreds of network devices (see figure). For all this work to get approval from the business, scheduled maintenance windows have to be arranged, with advisory boards scrutinizing the intended procedures to minimize the potential for a negative impact on production services.

Depending on the size of the project or the impact of the network change, this whole process can become convoluted and protracted, putting pressure on project timelines and budget justifications. This is why innovative automated approaches to network provisioning are being explored, as manual provisioning is not scalable in large scale cloud environments (ONF).

Once the new application is finally up and running in the DC, it has classically been proprietary, closed and monolithic in nature, which suited IT silos with static delivery models that elicited infrequent change. This methodology represents a pre-DevOps, and now antiquated, approach to delivering the current trend in cloud native micro-services (see figure). The IT industry is a world of constant change, with many companies wishing to take advantage of the operational cost efficiencies cloud computing delivers.

Innovations in hyper-convergence are now commonplace in the DC, where compute, storage and network virtualization are all tightly integrated through software centric architectures (Sverdlik). This highlights the application centric trend, which affords tighter control over the management of physical infrastructure through Graphical User Interfaces (GUIs) and software tools.

Interconnecting DCs must efficiently manage the connectivity requirements of geographically distributed computing services across heterogeneous IT domains. To efficiently manage the scale and complexity of this conglomeration of network requests, services, functions and processes, network programmability and automation is paramount (Mayoral et al.). The proliferation of dynamic east-west traffic patterns between multi-tier applications within and between DCs requires additional layers of sophistication from the infrastructure.

This will enable operators to remain competitive, keeping up with the pace of innovation seen in forward thinking DC environments (Kleyman). Forward thinking cloud operators manage their infrastructure with an orchestration layer, enabling them to accelerate continuous delivery through the automated provisioning and tearing down of computing services (Mayoral et al.).

The programmability afforded through the use of open APIs drives down repetitive configuration tasks from months to hours (Wibowo). The automation of administrative tasks through orchestration and programmability tools delivers new-found operational efficiencies from the enterprise IT investment (sdxcentral). Applications that were once closed, inflexible islands of functionality are now deconstructed into modular, reusable micro-service components that can be manipulated by developers via Application Programming Interfaces (APIs) that run service calls for software functions.

The loosely coupled, platform agnostic capabilities afforded by cloud micro-service components provide reusability, control and runtime analytics, offering administrative flexibility and the generation of new revenue streams (Cisco). This reality is not matched by the static, error prone, device level network implementation methods currently used to provision infrastructure. The capabilities of the network must facilitate the requirements of the services it supports with simplicity, ease, speed and flexibility.

This flexibility is offered through open APIs that support a programmable infrastructure (Howard).

Table 1. Comparison between traditional and cloud native applications (adapted from Baecke):
Traditional: monolithic; relies on infrastructure redundancy; hard to scale and build out.
Cloud native: micro-services; highly available, resilient application architecture; distributed, easily scalable, service reuse.

Enterprises are increasingly contemplating ways to lower operational costs while heightening productivity through the migration of business applications to the cloud. These applications take the form of micro-services defining discrete pieces of functionality that are easily updated, replicated and migrated across or between DCs without any downtime (Andrikopoulos et al.).

Cloud native micro-services leverage multi-tenancy to supply multiple consumers with isolated, logical instances of the applications they connect to. As in any IT environment, hardware failures can occur at any given moment. Due to the number of cloud subscribers, operators have an extra burden of responsibility to their huge consumer base. For this reason, it is paramount that the implemented cloud architecture has embedded resilience and fault tolerance built into its design principles (Balalaie et al.).

Cloud native applications differ from their monolithic counterparts in that they have self-healing, fault-tolerant capabilities derived from the application having a cognizance of, and interaction with, the health of the underlying infrastructure through southbound APIs. For applications to truly be cloud native, the underlying infrastructure and application need to comprise a closed feedback loop communicating real time analytics regarding an application contract between the two, to maintain optimal performance (West). This characteristic enables the application to be resilient to network disruptions, enabling the elastic, adaptive and proactive preservation of its uptime irrespective of any underlying network disruption (Goodwell). Such programmatic methodologies provide powerful options for developers to write scripts, tools and orchestration templates automating routine administration processes.

Not only can these innovations provide analytics on the status of the underlying infrastructure; they can also be manipulated to provide taxonomy and telemetry regarding the application's health and Runtime Environment (RTE). In a cloud native environment, the RTE of an application can be written to access real time analytics on infrastructure resources like compute, storage, BW and latency.

If network resources fail to meet the terms of the contract (i.e. the application SLA), remedial options can be invoked. Such options can include the resizing or provisioning of additional compute instances. Another option could take the form of migrating any negatively affected VMs to a more stable network segment, or even to another DC central to the majority of the application's subscribers.

In any case, through open APIs, the programmability of the infrastructure enables the application to become intelligent enough to reach a new plateau in resilience, high availability and fault tolerance (West). For an example of this mechanism we can look at MS Windows 10, which publishes minimum hardware requirements for CPU, memory and storage. These hardware attributes are required for the OS to function on a bare metal installation.

Similarly, cloud native applications use a run-time contract to inform the infrastructure of an SLA for optimal performance. In a cloud native environment, the application sends periodic keep-alives, probing the infrastructure for status metrics on these contractual requirements (Cane). If the available resources fall below these requirements, the IaaS layer will provision the required resources to mitigate any potential degradation to the user experience. This intelligence occurs at the network infrastructure level, leaving end users completely unaware of any degradation in service (Stine). Popular, vendor specific network appliances are physically constrained by the tight coupling of closed software to proprietary hardware platforms.
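The keep-alive and run-time contract pattern can be sketched in a few lines. This is only an illustration of the idea, not any vendor's implementation; the SLA thresholds, metric names and the scale_out() hook are all hypothetical.

```python
# Minimal sketch of the run-time contract idea.
import time

SLA_CONTRACT = {"max_latency_ms": 20, "min_bandwidth_mbps": 500}

def probe_infrastructure():
    """Stand-in for a southbound API call returning current fabric metrics."""
    return {"latency_ms": 12, "bandwidth_mbps": 450}   # placeholder values; BW below contract

def scale_out():
    """Stand-in for an IaaS call that provisions additional instances."""
    print("SLA breached: requesting additional compute/network resources")

def keepalive_loop(interval_s=30, cycles=3):
    for _ in range(cycles):                            # bounded for the example
        metrics = probe_infrastructure()
        if (metrics["latency_ms"] > SLA_CONTRACT["max_latency_ms"]
                or metrics["bandwidth_mbps"] < SLA_CONTRACT["min_bandwidth_mbps"]):
            scale_out()
        time.sleep(interval_s)

keepalive_loop(interval_s=1)
```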

To overcome these constraints, the industry is witnessing a prevalence of standards based efforts working towards creating an ecosystem of open, interoperable technologies that enable tighter control and flexibility for engineers and cloud providers who embrace innovation. A cloud native architecture gives providers the ability to raise ROI while lowering TCO, as the investment in hardware and DC capacity resourcing is insulated by the programmatic capabilities of the infrastructure.

Cloud operators require the ability to elastically scale and adapt their computing services to support dynamic market, technological and business transitions. Cloud native architectures combined with a programmable infrastructure are steps in the direction towards achieving these business objectives. OpenStack is a popular, open source IaaS solution that uses a set of configurable software tools for building and managing public and private cloud computing platforms.

Managed by the non-profit OpenStack Foundation, OpenStack enables the rapid deployment of on-demand computing instances (i.e. VMs) that developers can interface with to test and run code. This code normally contributes to the functionality of the micro-services users subscribe to. This enables the automation of interrelated projects that manage the processing, storage and networking resources utilised by the upper cloud computing layers.

OpenStack rapidly provisions logical computing instances that can be elastically spun up, scaled down and released on demand in minutes. This gives the cloud operator tight control over elastic computing services while providing flexible and cost effective management of cloud infrastructure (Denis et al.).

OpenStack handles the scaling of reusable computing instances in two ways: vertical and horizontal. Vertical scaling is essentially a hardware upgrade instantiated in software. A cloud provider may have a physical server running one computing instance hosting a new service, and this service may have less than desirable response times due to its resources being overrun with too many subscribers. To mitigate this, the provider may resize the compute instance powering the service to provide more CPU cores, RAM etc.

This may alleviate the issue for a time but is capped by the amount of physical hardware resource available on the server. Horizontal scaling is where the service is run on multiple servers, each running the required service within two or more computing instances. A virtual load balancer can then be used to evenly distribute customer traffic between the computing instances spread across the multiple servers. This allows enormous scaling potential over the vertical method, as more servers can be added behind the load balancer with additional hardware added as needed (Grinberg). OpenStack provides a critical set of tools that give operators the agility, flexibility and scale required to manage complexity in the cloud.
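A minimal sketch of the horizontal-scaling pattern: a round-robin balancer spreading requests across instances hosted on different servers. The instance names are made up, and a real deployment would use a proper virtual load balancer rather than this toy loop.

```python
# Round-robin distribution across hypothetical compute instances.
from itertools import cycle

instances = ["vm-a (server1)", "vm-b (server2)", "vm-c (server3)"]
next_instance = cycle(instances)

def handle_request(request_id: int) -> str:
    target = next(next_instance)             # pick the next instance in rotation
    return f"request {request_id} -> {target}"

for i in range(6):
    print(handle_request(i))

# Scaling out is simply adding more instances behind the balancer:
instances.append("vm-d (server4)")
next_instance = cycle(instances)             # rebuild the rotation to include it
```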

This in turn enhances and accelerates business objectives through the orchestration of value added cloud based services.

Table 2. OpenStack cloud service components (source: Lawrence):
Identity Service (Keystone): provisions users, groups, roles, projects and domains.
Compute (Nova): schedules VMs and manages their lifecycle.
Image Service (Glance): provides data assets (VM images, Heat templates).
Networking Service (Neutron): provides Software Defined Networking.
Object Storage Service (Swift): provides a scalable, persistent object store.
Block Storage Service (Cinder): provides persistent block level storage.
Orchestration Service (Heat): orchestrator with auto-scaling functionality.
Metering Service (Ceilometer): collects metering probes from other services.
Dashboard (Horizon): provides a web based interface for the other services.

The cloud computing industry is witnessing exponential growth, with new customers and businesses subscribing to more services every day (Gartner). This growth is underpinned by the explosion seen in agile development teams adopting DevOps approaches, which accelerate the software feature releases used to maintain and develop cloud consumption models.

Server, and to a lesser extent storage, virtualization has revolutionized the number of application services used throughout enterprise DCs, giving rise to large scale multi-tenant cloud computing architectures that introduce a tremendous amount of complexity into the network (Christy). Networks are a critical component of any IT investment. Cloud computing environments are amongst the most complicated ecosystems of technologies and services seen in the industry.

Current network administration practices are clunky, complicated and time consuming. The same manual, device level, CLI configuration process used 30 years ago is still heavily used by engineers today (Arellano). The CLI alone as an administrative tool is not flexible enough to manage the increasing complexity witnessed in large scale DC environments. In cloud environments, operators require application architectures that provide the rapid provisioning and scaling of computing services on demand, providing business agility and continuity.

This speed and flexibility requires the underlying infrastructure to also be open, flexible and fault tolerant to support high performing cloud computing services. Virtualized endpoints (VMs) can easily be migrated, released and scaled up and down anywhere within or between DC environments. The explosion of virtual endpoints introduces increasing demands and complexity on the DC network (Ziolo). Cloud operators cannot remain competitive within the marketplace by solely engaging in traditional network administration practices, because the rigidity and limitations physical networks introduce work against the fluidity and extensibility of the services they support (ONS). The amount of time required to make a new application secure and accessible through the most optimal paths across the network can take hours at best, compared to the minutes taken to spin up the VMs hosting a new application.

Agile development teams and server virtualization capabilities are driving the demand for networks to become even faster and more sophisticated in their orchestration of cloud computing services. This calls for a fresh approach that requires tighter control and extensibility in how network services are provisioned, maintained and monitored. As manual network configuration is an iterative process performed by humans, there is always an associated risk to the preservation of existing production services should something be misconfigured.

For this reason, network teams associated with the implementation of a new project submit change management request tickets detailing each configuration step required to execute administrative tasks. Although change management ensures governance over the protection of production services, this is provided at the expense of continuous delivery.

The many corporate processes required for change management approval impact service execution timelines. The risks being managed include human error and any negative impact on existing production services, which can amount to loss of revenue. This presents a choke point in service delivery, as agile development teams have to wait for the network before being able to rapidly test and release new service features.

Shadow IT is a descendant of this culture, where frustrated development teams employ public cloud services to circumvent corporate IT. This produces real concerns for the business, as the enterprise is effectively paying for its private, Intellectual Property (IP) protected data to be hosted in the public cloud. The potential security, compliance and regulatory ramifications present noteworthy concerns for the enterprise. This necessitates the need for operators to find secure ways to mobilize and accelerate their infrastructure internally (Petty). Before virtualization, the classical client-server model consisted of a client request made to a physical server hosting web application software that responds accordingly to each incoming request.

The number of simultaneous connections this server can handle is limited by the maximum hardware capacity the server OS is running on. It would be untenable to facilitate such a colossal user base using physical infrastructure alone. Virtualisation was introduced within the DC to provide scale through multi-tenancy (Kajeepeta). Virtualisation software is a specialised OS running on a physical server allowing the co-location of multiple operating system instances inside a hypervisor, which is a VM management system used to deploy and administer VMs.

The physical hardware resources can then be divided in software and allocated amongst multiple VM instances. Virtualisation is very popular in DCs due to multi-tenancy. All virtual machines are logically separated and thus have exclusive resources and network connections that are isolated from each other. When subscribers connect to cloud services, they are effectively connecting to a logical instance within a VM giving them their own slice of the service resources. Each VM is containerized within its own logical environment which can host a guest operating system housing multiple applications for cloud service subscribers to consume as web based services.

Although larger servers with more physical resources are required to support multi-tenancy, virtualization allows service providers to reduce their physical server count while optimizing hardware. Resource isolation and multi-tenancy are advantageous in cloud environments as hardware resources are carved up and distributed between multiple guest operating systems (i.e. Windows, Ubuntu etc.). Securely managing and scaling such a service would be untenable if limited to physical hardware and manual network administration practices. An issue with hosting multiple VMs on physical servers is that if the server goes down, it becomes a single point of failure for all hosted services. However, the flexibility of provisioning, copying, and migrating VMs serves to justify virtualization's position within the DC.

Providers can also mitigate risk by providing redundant servers that mirror VM configuration through clustering. Virtualisation is not only limited to servers but also extends to storage and network. The former is where multiple storage arrays are clustered and managed via one administrative interface. The latter (Fig 12b) is where network components like NIC cards, firewalls and switches are replicated as logical, software components that are decoupled from the physical hardware.

The advantage is seamless integration of network functionality within divergent physical and virtual computing domains. Such a scenario could see a VM with an integrated virtual switch passing data from a virtual network to the physical network. The bottom line is that sophistication in software is necessary for DC operators to scale cloud services for their network, compute and storage assets.

Although network virtualization has extended the capabilities of physical network resources, SDN serves to provide a further layer of abstraction where distributed network control is consolidated and centralized within a network controller serving as the brains of the Network Management System (NMS) (ONF). This relates to section 1 aim xi, to prepare the reader for the network overlay topics. At the physical layer (App. I), electrical signals are encoded with data and sent onto the transmission medium for transport across the network.

Ethernet frames contain source and destination MAC addresses. The MAC address denotes a static identifier signifying the physical address of a particular network host. When a switch does not yet know which port leads to a destination MAC address, it floods the frame out of every port except the one it arrived on, and learns the source MAC-to-port mapping from each frame it receives; this process is termed flood and learn. Fig 14b shows the unicast request (blue arrow) and response (green arrow) between PC1 and PC4 now that the L2 topology has converged. In enterprise networks, there are multiple departments needing exclusive access to company resources related to departmental tasks.
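A few lines of Python capture the flood-and-learn forwarding state described above. The MAC addresses and port numbers are made up; this is the behaviour in miniature, not a switch implementation.

```python
# Minimal sketch of L2 "flood and learn" forwarding state.
mac_table: dict[str, int] = {}          # MAC address -> switch port

def receive_frame(src_mac: str, dst_mac: str, in_port: int, num_ports: int = 4):
    mac_table[src_mac] = in_port        # learn where the sender lives
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]     # known destination: forward out one port
    # unknown destination: flood out every port except the one it arrived on
    return [p for p in range(1, num_ports + 1) if p != in_port]

print(receive_frame("aa:aa", "bb:bb", in_port=1))   # flood: bb:bb not yet learned
print(receive_frame("bb:bb", "aa:aa", in_port=2))   # unicast back: aa:aa is known
```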

These resources include applications, databases, servers and other corporate assets. In technical terms, the isolation required is termed segmentation. VLANs are logical separations assigned to network hosts that can be based on location, department or organizational function. VLANs provide a security boundary to contain network traffic within a broadcast domain (Cisco). Typically, every Ethernet interface on a network switch connected to an end user specifies an access VLAN.

This defines the associated broadcast domain (BD) relating to an associated IP subnetwork. A broadcast domain denotes the extent to which broadcast traffic is propagated to devices within the same administrative domain. For data to cross the broadcast domain (i.e. HR sending an email to Marketing), a Layer 3 (L3) device like a router is required. In production networks, switch access ports connected to end user workstations are assigned to a single VLAN denoting the BD that the connected device is a member of.

When Ethernet frames travel between network switches, they traverse point-to-point connections called trunk links (fig 4), allowing traffic from multiple VLANs to propagate throughout the network. The IEEE 802.1Q standard defines how frames are tagged on these trunks. The important part of the tag is the VLAN Identifier (VID) field: it is 12 bits long, which determines the maximum number of VLANs that can be used at any time across the network. Modern DCs are structured with physical servers stored in multiple rack cabinets that can run into the thousands (see figure). Virtual networks also contain logical constructs that require VLAN assignments to categorize and isolate traffic traversing between physical and virtual network topologies.
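As a quick sketch of why the VLAN ID space is so tight, the 12-bit VID can be pulled out of an 802.1Q Tag Control Information field with a couple of lines of Python; the priority and VLAN values below are arbitrary.

```python
# Extracting the 12-bit VLAN ID from an 802.1Q tag: 2**12 = 4096 possible IDs
# (0 and 4095 are reserved, leaving 4094 usable VLANs).
import struct

def vlan_id_from_tci(tci_bytes: bytes) -> int:
    (tci,) = struct.unpack("!H", tci_bytes)   # 16-bit Tag Control Information
    return tci & 0x0FFF                       # low 12 bits = VLAN Identifier (VID)

tci = struct.pack("!H", (3 << 13) | 150)      # priority 3, VLAN 150 (made-up values)
print(vlan_id_from_tci(tci))                  # -> 150
print(2 ** 12)                                # -> 4096 possible IDs
```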

Network operators have long investigated ways to optimize the management of their DC operations. VLAN segmentation provides network hosts with a secure, logical boundary where only traffic within the same broadcast domain is processed.

Spanning Tree attempts to remove switching loops caused by redundant paths through the STP algorithm. In the DC, the number of physical servers can range into the thousands. Virtualisation multiplies this administrative domain considerably as VMs require communication between physical and virtual hosts.

For each network host to remain reachable across the entire DC fabric, end state information (IP and MAC address mappings) needs to be accurately replicated, propagated and updated in spine-leaf device forwarding tables. Although multiple tenants have their own virtual network domain, they are sharing the underlying physical infrastructure.

This can cause the duplication of end state information, resulting in address conflicts and reachability issues between physical and virtual networks. In large scale, multi-tenant environments, computing workloads are often migrated from the private to the public cloud when a hardware failure occurs or business imperatives change (Cisco). Decoupling computing workloads from their location in the underlying network infrastructure is necessary for operators to provide elastic compute services on demand.

Virtualized environments often require the network to scale between DCs across the layer 2 domain to adequately allocate compute, network and storage resources. In any enterprise network deployment, convergence is critical. Network convergence is where changes to the end state of a host (i.e. its IP and MAC address mappings) are propagated to every forwarding table in the topology. In cloud and enterprise environments, rapid convergence is a huge concern, as maintaining end state information is of a high order of complexity for the reasons described previously.

Managing the complexity involved in maintaining real time reachability of DC EPs, combined with the administration, scaling and distribution of DC workloads, is an issue providers are constantly looking to improve. In an attempt to solve both the VLAN limitation and these scaling challenges, network overlays have come to the forefront (Onisick). The mass adoption of server virtualization seen in DC environments offers greater speed, agility and flexibility in the provision and distribution of computing workloads.

Up until this point, the same level of innovation, speed and efficiency has not been seen in the administration of contemporary networks. Network overlays provide logical tunneling techniques that help to bridge this gap by removing the dependency between the physical location of a device and its logical instantiation. Fig 18 illustrates a simplified example of nine network devices under the blue pane that are connected via a partial mesh topology.

This is the network underlay representing the physical spine-leaf DC architecture. The three network devices above the blue pane connected via a hub and spoke topology represent the network overlay. The overlay is a logical network formed independently of the physical underlay. The loose coupling of physical and logical network topologies provides a layer of abstraction that gives location independence.

As mentioned earlier, Top of Rack (ToR) switches connect directly to server ports, with any given ToR switch having 24 - 48 ports depending on the port density of the implemented device. In virtualized, multi-tenant environments, the demand on these switches' CAM (forwarding) tables becomes intense as they must maintain consistent state information for participating nodes, which can easily include thousands of hosts on physical and virtual networks (IETF). Maintaining this expanded broadcast domain presents a huge administrative challenge that is mitigated through the implementation of network overlays.

This provides a layer of abstraction where workloads can be transported between VMs independent of the underlying physical infrastructure. It spares network administrators from having to navigate complicated layer 2 semantics and configurations by delivering IP reachability to any device across L3 boundaries. There are many ways in which to implement network overlays across DC switching fabrics. Within DC environments, compute workloads for cloud applications can span multiple pods that stretch between DCs.

For example, if the web tier of an application were to communicate with the DB tier hosted in a pod located in another DC, both tiers still need to communicate within the same layer 2 domain (see figure). This logical L2 overlay offers powerful scaling capabilities for cloud providers managing extended L2 environments, providing greater functionality than STP by using Equal Cost Multi-Path (ECMP) forwarding across the underlying spine-leaf architecture.

This affords operators the ability to provide elastic computing services on demand between cloud environments. VXLAN is a stateless tunneling scheme that overlays L2 traffic on top of L3 networks and provides the functionality required for cloud environments to distribute computing workloads across geographically dispersed application tiers independent of the underlying infrastructure (IETF). VXLAN Tunnel Endpoints (VTEPs) are normally implemented on the VM hypervisor but can also be configured on a physical switch or server, providing implementation options in both hardware and software.

For this reason, the VM itself is not cognizant of the encap-decap mechanism required for end to end communications through the VXLAN tunnel, as this occurs transparently. When a VM wants to exchange data with a VM in another pod, it encapsulates the application data within an Ethernet frame before the data is forwarded to the destination VM (Fig 20b).

The encapsulated packet is then tunneled across the network. The source mapping learned from the encapsulated packet is added to the forwarding table to provide a unicast response to the sender. This end to end process is transparent to the VMs, which function as though they are connected to the same network segment irrespective of physical location. Overlays provide a loose coupling between infrastructure and policy, where changes to physical topology are managed separately from changes in policy (Cisco). This allows for optimisation and specialisation in service functions, where network configuration is not constrained to a particular location within the network, adding flexibility while reducing complexity.
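The framing VXLAN uses can be sketched directly from its header layout: an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI) is prepended to the original L2 frame, and the result rides inside UDP/IP across the L3 underlay. The sketch below only builds that header; the VNI and payload are made up, and the outer UDP/IP headers are left to the VTEP's network stack.

```python
# Minimal sketch of VXLAN encapsulation framing (per RFC 7348).
import struct

VXLAN_FLAGS = 0x08                       # "valid VNI" flag bit

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    assert 0 <= vni < 2 ** 24            # 24-bit VNI -> ~16 million segments vs 4094 VLANs
    header = struct.pack("!BBHI", VXLAN_FLAGS, 0, 0, vni << 8)
    return header + inner_frame          # outer UDP/IP headers added by the VTEP's stack

packet = vxlan_encapsulate(b"\xaa" * 64, vni=10010)
print(len(packet))                       # 8-byte VXLAN header + 64-byte inner frame = 72
```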

This facilitates an elastic computing architecture where complexity is handled at the edge by the VTEPs. This relieves spine devices of the burden of mapping the topology of the entire infrastructure, allowing the overlay and the physical infrastructure to be handled independently.

Physical servers, virtual machines and dense spine-leaf infrastructure devices can collectively provide an almost innumerable number of DC EPs that all require network reachability. VXLAN allows these network entities to communicate with one another over layer 3 networks as though they were attached to the same cable. This technology provides the capability for cloud operators to effectively and reliably communicate data within or across large scale DCs.

In the world of distributed computing, SDN provides an open and centralized approach to automating network administration. Configuration changes through the CLI are executed at the device level, where each device is configured manually. A typical DC supporting a cloud environment can easily have a large number of spine and leaf switches (Morgan-Prickett). If they all require the latest image upgrade, for example, the changes would be executed through the CLI on each device before testing.
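Even when scripted, this device-by-device pattern still touches every box individually. A minimal sketch of that pattern, assuming the third-party netmiko library; the hostnames, credentials and the example VLAN change are placeholders.

```python
# Device-level configuration pushed box by box, the repetitive pattern described above.
from netmiko import ConnectHandler

devices = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]        # one entry per switch
change = ["vlan 150", "name Marketing"]

for host in devices:                                   # each device touched individually
    conn = ConnectHandler(device_type="cisco_ios", host=host,
                          username="admin", password="example-password")
    output = conn.send_config_set(change)              # apply the same change again and again
    conn.disconnect()
    print(f"{host}: change applied")
```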

This method is not only time consuming but is also prone to human error and delay, especially if the complexity of the configuration change is of a high order or the number of devices requiring the change is large. This static methodology is proving too slow for both enterprise and cloud operators. Leading providers utilize some flavor of auto-scaling to provide business agility. This shows the necessity for securely automating the routines that allow business logic and compute workloads to be distributed end to end (Orzell and Becker). All networking devices have three main attributes (planes) which govern how they operate (see figure). The management plane is a software environment that deals with how the network devices within the topology are managed.

An example would be the Command Line Interface (CLI) in Cisco networking devices, which provides the interface through which administration changes are applied to an infrastructure device (see figure). Whenever an engineer is configuring an infrastructure device, they are using the management plane to make and apply those changes. The data plane governs how data packets flow through the network from source to destination.

The control plane is a software construct that maps the topology of the network, calculating reachability information to destinations across the network to discover the best paths for traffic to flow. Traditional networks are replete with vendor specific hardware platforms that are proprietary in nature (Crago et al.). SDN moves away from this closed environment through the embracement of open source internetwork operating systems and APIs.

As seen in Fig 21, it is assumed an administrator has logged into the router and is connected to the management plane. This represents the classical way of managing networks, where each device is logged into individually for the application of administrative changes. The control and data planes exist within each infrastructure device to map the network and forward data to its intended destination. In traditional networking, the management, control and data planes all exist on each infrastructure device.

This distribution of functionality explains why configuration changes are made at the device level. In SDN the management and control planes are abstracted from individual devices into a centralized server accessed through a Graphical User Interface.

This provides a virtualized, software orchestration layer (the network controller) where all network administration is viewed, managed and orchestrated. The control plane is the brains of the network; the centralization of this functionality presents the network in a holistic fashion, where a configuration change made on the network controller (control plane) can be applied and pushed simultaneously to any number of devices in the physical infrastructure. This is the type of scaling required by modern DC administrators to remain competitive (see figure). In contrast to closed internetwork operating systems, open source internetwork operating systems can run on commodity hardware termed white boxes.

This creates an open architecture where enterprise development teams can create their own scripts and scripting tools that can be used to automate network operations. This approach is not only more cost effective in both the capital and operational sense, but also fosters customization and interoperability across an ecosystem of platform agnostic network orchestration technologies like OpenStack and OpenDaylight (Baldwin). This integration of services is possible through open APIs.

This provides a closed feedback loop that presents specific application RTE data from the application to the network controller, enabling the controller to be cognizant of the mission critical parameters within the application SLA and application profile. Southbound APIs connect the controller to the infrastructure devices beneath it. This allows any configuration change made on the controller to be pushed across the network to any number of infrastructure devices.
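In practice the northbound side of this loop is usually a REST interface. A minimal sketch of pushing one policy to many devices through a controller; the URL, token and payload schema here are hypothetical and do not correspond to any specific controller's API.

```python
# Pushing one policy change to many devices via a controller's northbound REST API.
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

policy = {
    "name": "marketing-vlan",
    "vlan_id": 150,
    "apply_to": ["leaf1", "leaf2", "leaf3", "leaf4"],   # controller fans the change out
}

resp = requests.post(f"{CONTROLLER}/policies", json=policy, headers=HEADERS, timeout=10)
resp.raise_for_status()
print("Policy accepted:", resp.json())
```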

The SDN approach allows cloud operators to automate business processes across their infrastructure, which has the potential to cut project implementation times from months to minutes (Cisco). As seen in Fig 23, the three network planes are depicted with a closed feedback loop between the application and control layers.

As discussed earlier, this feedback mechanism presents Service Level Agreement (SLA) information from the application profile through the northbound API to the control layer. Similarly, the control plane uses a southbound API to probe and monitor physical infrastructure conditions, giving the system an awareness of the health of the network services powering the application in real time. This closed feedback loop provides real time analytics and health monitoring of the cloud native application.

If network conditions fall below the specifications within the application profile SLA, the concept is that the cloud native application will be able to dynamically spin up virtual instances of itself wherever within the DC fabric adequate network resources are available.

In some cases, this migration can take place across DCs. The real-time analytics afforded by this open architecture give cloud operators new found capabilities to drive agile services that are fault tolerant and self-healing (Pronschinske). One of the main selling points of SDN is its ability to abstract the lower level complexity of the network infrastructure away from the control plane.

This allows enterprise and cloud operators to build a highly productive and scalable infrastructure through programmable, standards based tools that support automation and business agility. Knowledge of how APIs work and how to manipulate and parse network status information in programmatic ways has now taken precedence over specific product knowledge of hardware platforms. The ONF sees SDN as a disruptive and revolutionary approach to networking where network control is decoupled from data forwarding, providing a directly programmable architecture (ONF). The speed, flexibility and configuration options afforded by SDN enable administrators to provision networks through a policy driven approach that directly aligns with business needs.

Configuration templates can be created and inserted into the network controller, which pushes configuration throughout the infrastructure (see figure). This provides the automated provisioning of required network services that mobilize business objectives. This opens up the capabilities of the network, as infrastructure devices are no longer constrained by the vendor specific idiosyncrasies that have hindered interoperability in the past. SDN abstracts away the repetitive tasks engineers must complete when updating services or adding new applications to the network, as the SDN controller is able to provision the routine tasks that engineers typically configure manually.

The rise of these open source technologies introduces a declarative model where the end state of the business requirement is specified through a GUI (fig 30b), as opposed to the step-by-step imperative process employed using the CLI. For engineers this is paramount, as the emphasis on coveted, vendor specific certifications is now eclipsed by the requirement to gain a more holistic view of open source standards, automation, APIs and programmatic methodologies.

The adoption of open source standards brings cloud and enterprise operators the speed, agility, control and visibility required to automate their infrastructure (Sdxcentral). Traditionally, network services like switching, firewalling and load-balancing have each required dedicated, purpose-built appliances; these functions can now be virtualized and delivered in software. This consolidates network components, as network services can be instantiated on a VM purely in software. This represents a new way of designing, deploying and managing network services within a virtual infrastructure (Garg). This illuminates the trend that interoperability and systems integration are only possible through the collaboration of open, standards based technologies.

This shift away from closed, proprietary technologies marks the importance of open, community based efforts, as they are integral to the advancement of future networking technologies (Stallings). SDN has an impact on varying layers within the landscape of the IT industry. Business continuity, operational effectiveness, speed and flexibility in continuous delivery, and customer satisfaction are all important factors, but within the context of this paper ICT infrastructure administration is the focus.

From an IT professional standpoint, SDN encompasses the dynamic configuration and automation of a programmable network infrastructure. To cope with this trend in cloud consumption, operators are looking for ways to deliver high performing services to remain competitive. SDN provides an alternate architectural approach that allows operators to respond quickly in a transient market.

SDN brings the speed, control and visibility required for businesses to remain agile and adaptable while delivering high performing computing services at scale. Marist College is the New York State center for cloud computing and analytics and is the authority on academic cloud computing in New York (DeCusatis). Their research has developed many innovative use-cases applicable to SDN and cloud automation, one of which is explored below, investigating the challenges that cloud exchanges have concerning efficient workload distribution, BW utilization and predictive analytics through SDN and cloud orchestration.

Services like remote desktop, IaaS, and video streaming at 10 Gigabits per optical wavelength were proposed as deployment parameters. Due to the issue of excessive expense on the network, the project was terminated. In fig 25, a graph shows the BW utilization of a large bank in New York that had purchased an Optical Carrier 12 (OC-12) telecommunications circuit with a transmission rate of 622 Mbps. The y axis shows BW utilization, the x axis shows time increments.

Although only half of the available BW was used by most applications, performance still suffered due to the bursty traffic characteristics of enterprise workloads that exceeded the limit of the circuit; this is punctuated by the spikes seen above the green line in fig 25. To mitigate this bottleneck, the client opted to roughly double the BW by upgrading to Gigabit Ethernet, as illustrated by the red line in fig 25. Although the BW was doubled and performance did increase, network latency issues still continued.

As highlighted by the blue arrow in fig 25, workload characteristics still found a way to spike above the Gigabit Ethernet limit. These spikes were experienced several times a day (DeCusatis). The outcome is that no matter how much static BW is provisioned, the nature of the workloads moving across the network means there will always be intervals where they exceed the capacity of the circuit. This equates to the expensive over-provisioning of BW, resulting in wasted capacity that is not utilized until the intervals where network spikes occur.

To find an alternate solution, DeCusatis and his colleagues investigated the concept of predicting when a large network spike would occur, to then dynamically provision BW by providing available optical wavelengths for a finite time period to efficiently absorb the traffic peaks.

These wavelengths would then be de-provisioned once workload traffic returns to average utilization. This pool of optical wavelengths would be advertised across the suite of enterprise applications for use whenever necessary.

This system has the ability to actively monitor BW performance and allocate more or less BW when required, mitigating any associated performance degradation during peak workload activity. As discussed earlier with cloud native applications, once the closed feedback control loop is in place via SDN, predictive analytics can be used to monitor the health of the system. This enables the SDN system to be application centric, so the infrastructure can readily facilitate and honor the SLA requirements of the application profile.
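A toy version of that monitor-and-provision loop can be sketched in Python. The forecast is just a moving average, and the thresholds, capacities and utilization samples are all hypothetical; a real system would feed telemetry into a proper predictive model and drive the optical layer through the SDN controller.

```python
# Forecast near-term utilization and provision extra optical wavelengths around peaks.
from collections import deque

BASE_CAPACITY_MBPS = 622            # the statically purchased circuit
WAVELENGTH_MBPS = 1000              # capacity added per extra wavelength

def forecast(samples: deque) -> float:
    """Naive predictor: average of the most recent utilization samples."""
    return sum(samples) / len(samples)

def wavelengths_needed(predicted_mbps: float) -> int:
    overflow = max(0.0, predicted_mbps - BASE_CAPACITY_MBPS)
    return -(-int(overflow) // WAVELENGTH_MBPS)        # ceiling division

history = deque(maxlen=5)
for sample in [300, 450, 580, 900, 1400, 700, 400]:    # made-up utilization (Mbps)
    history.append(sample)
    demand = forecast(history)
    print(f"predicted {demand:.0f} Mbps -> provision {wavelengths_needed(demand)} extra wavelength(s)")
```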

Predictive analytics also enables cloud operators to proactively monitor BW utilization and allocate additional optical wavelengths to pre-empt periods of maximum BW usage (DeCusatis). Fig 27 plots VM migration times against the page dirty rate, i.e. the rate at which memory pages are modified (dirtied) on the source VM while they are being copied to the destination VM. Different vendors who specialize in virtualization technologies utilize different algorithms to manage the page dirty rate, some of which are proprietary (DeCusatis). For this reason, cloud operators need to make educated, evidence based assumptions on how these algorithms function to determine which changes yield the most desirable results.

The curves depicted in Fig 27 represent three different algorithms used for VM migration. Such trends can provide useful telemetry analytics for cloud operators. This telemetry data can be collated over time to create big data sets to make predictions on what the BW will do within a particular time interval, mitigating the performance challenges illustrated earlier (fig 25). For example, if a cloud provider is migrating VMs from the private to the public cloud, they could proactively model the curve of their given technology and use this data to make appropriate decisions on how to provision their infrastructure (DeCusatis). SDN is what pushes these changes down to the physical and virtual infrastructure elements, making the combination of SDN and predictive analytics very powerful.

To understand how each element of the orchestration process comes together, a closer look at the SDN architecture is required. Fig 28 shows three tiers: the lowest represents all physical and virtual network infrastructure devices, including attached storage and any Virtual Network Functions (VNFs) that have been service-chained together to provide a suite of requisite network services for a particular set of business outcomes.

The top tier holds the web and file-serving services, i.e. ERP applications, mobility and video. Similar to the access tier of the hierarchical enterprise networking model, the application tier represents the interface through which users interact with the network. The middle tier consists of a services platform that handles management and orchestration (DeCusatis). The orchestration engine then collates these options into actionable services using a suite of software packages that can include, but is not limited to, Docker, Tail-f, Ansible, Puppet, Chef, OpenDaylight and OpenShift.

YANG and TOSCA are hierarchical data modelling languages used to construct the service templates responsible for the structure and invocation of management services within an IT infrastructure. The information pulled from the infrastructure layer into the management and orchestration tier sits within the service catalogue, which the cloud administrator correlates with incoming service requests to decide how those services are provisioned.

Fig 30a illustrates the CLI of an enterprise router, which has been the common way for network administrators to interact with infrastructure devices. Rather than using the CLI to imperatively update the network with manual configuration changes line by line, device by device, SDN pushes configurations from the controller, typically through a GUI, as policy-driven data models that carry the configuration within a service template. This project focuses on the transformative technologies that cloud and enterprise operators use to deliver value-added computing services.
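The contrast can be sketched as follows: instead of a per-device CLI session, a single declarative payload is POSTed to the controller's northbound interface, and the controller renders it onto every affected device. The controller URL, the acme-vlan-service YANG module, the credentials, and the payload fields are assumptions made for illustration; a RESTCONF-style path is used only as a familiar shape for such an API.

# Hypothetical example: push a declarative service template to a controller's
# RESTCONF-style northbound API instead of configuring each device by CLI.
# The controller URL, the "acme-vlan-service" YANG module, the credentials,
# and the payload fields are all assumptions made for illustration.
import requests

CONTROLLER = "https://controller.example.net"
HEADERS = {"Content-Type": "application/yang-data+json"}

service_template = {
    "acme-vlan-service:service": {
        "name": "marketing-vlan",
        "vlan-id": 120,
        "devices": ["agg-sw-1", "agg-sw-2", "agg-sw-3"],  # rendered onto all three devices
        "qos-profile": "office-data",
    }
}

resp = requests.post(CONTROLLER + "/restconf/data/acme-vlan-service:services",
                     json=service_template, headers=HEADERS,
                     auth=("admin", "admin"), verify=False)   # lab use only
resp.raise_for_status()
print("service accepted:", resp.status_code)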

Although SDN facilitates the automation and business agility required to accelerate the digital business processes and outcomes of the enterprise, there are also many societal aspects that come into play. In the entertainment industry, global TV broadcasting companies normally subscribe to a telecommunications provider offering a baseline of network services. Similar to the use case explored earlier, such a subscription for a private telecommunications circuit would typically be over-provisioned in an attempt to accommodate times of peak activity.

As peak usage represents only a fraction of the circuit's overall operating time, the result is an over-provisioned, underutilized service that amounts to expensive network capacity going to waste. SDN could make more efficient use of this wasted capacity if the service provider took advantage of a programmable infrastructure.

This would allow the TV broadcasting network to pre-emptively scale up network resources during peak events like the Champions League, Wimbledon or the Olympics, providing dynamic auto-scaling of BW in line with the real-time load on the network.


This vendor-agnostic book also presents several SDN use cases, including bandwidth scheduling and manipulation, input traffic and triggered actions, as well as some interesting use cases around big data, data center overlays, and network-function virtualization. Discover how enterprises and service providers alike are pursuing SDN as it continues to evolve.


The book also covers a number of open source controllers. As we discussed earlier in this book, the original SDN application of data center orchestration spawned SDN controllers as part of an integrated solution. It was this use case that focused on the management of data center resources such as compute, storage, and virtual machine images, as well as network state.

More recently, some SDN controllers began to emerge that specialized in the management of the network abstraction and were coupled with the resource management required in data centers through the support of open source APIs (OpenStack, CloudStack). The driver for this second wave of controllers is the potential expansion of SDN applications out of the data center and into other areas of the network, where the management of virtual resources like processing and storage does not have to be so tightly coupled in a solution.

This includes the network service virtualization explored in a later chapter. Network service virtualization, sometimes referred to as Network Functions Virtualization (NFV), will add even more of these elements to the next-generation network architecture, further emphasizing the need for a controller to operate and manage these things.

We will also discuss the interconnection or chaining of NFV. Virtual switches or routers represent a lowest common denominator in the networking environment and are generally capable of a smaller number of forwarding entries than their dedicated, hardware-focused brethren.

Although they may technically be able to support large tables in a service VM, their real limits are in behaviors without the service VM. In particular, that is the integrated table scale and management capability within the hypervisor that is often implemented in dedicated hardware present only in purpose-built routers or switches. This is the case in the distributed control paradigm, which needs assistance to boil down the distributed network information to these few entries—either from a user-space agent that is constructed as part of the host build process and run as a service VM on the host, or from the SDN controller.

In the latter case, this can be the SDN controller acting as a proxy in a distributed environment or as flow provisioning agent in an administratively dictated, centralized environment. In this way, the controller may front the management layer of a network, traditionally exposed by a network OSS. SDN controllers provide some management services in addition to provisioning and discovery , since they are responsible for associated state for their ephemeral network entities via the agent like analytics and event notification.

VMware provides a data center orchestration solution with a proprietary SDN controller and agent implementation that has become a de facto standard. VMware was one of the genesis companies for cloud computing. See the figure for a rough sketch of VMware product relationships.

It also eliminated a required guest VM. This creates a very scalable VM health-monitoring system that tolerates management communication partition by using heartbeats through shared data stores. This includes managing the hypervisor-integrated vswitch from a networking perspective as well as the other basic IaaS components: compute, storage, images, and services. Primary application for compute, storage, image resource management, and public cloud extension.

Application monitoring, VM host and vSphere configuration and change management, discovery, charging, analytic, and alerting. Application management primarily for multitiered applications, described in the definition of degree of tenancy, and managing the dependencies. The virtual switch in the hypervisor is programmed to create VxLAN tunnel overlays encapsulating layer 2 in layer 3 , creating isolated tenant networks.

VMware interacts with its own vswitch infrastructure through its vSphere API and publishes a vendor-consumable API that allows third-party infrastructure (routers, switches, and appliances) to react to parameterized vCenter event triggers. Spring-based component framework [65]: the Spring development environment allows for the flexible creation and linking of objects (beans), declarative transaction and cache management, and hooks to database services.

When looking over the architecture just described, one of the first things that might be apparent is the focus on integrated data center resource management. This is an important feature, as it can result in a unified and easy-to-operate solution; however, this approach has resulted in integration issues with other solution pieces such as data center switches, routers, and appliances. One of the primary detractions commonly cited with VMware is its cost. Even so-called enterprise versions of open source offerings are often less expensive than the equivalent VMware offering.

Another, perhaps less immediately important, consideration with this solution is its inherent scalability, which, like the price, is something large-scale users often complain about. The mapping and encapsulation data of the VxLAN overlay does not have a standardized control plane for state distribution, resulting in operations that resemble manual or scripted configuration and manipulation.

Finally, the requirement to use multicast in the underlay to support flooding can be a problem, depending on what sort of underlay one deploys. Nicira was founded more recently and as such is considered a later arrival to the SDN marketplace than VMware. NVP now works in conjunction with the other cloud virtualization services for compute, storage, and image management.

This is good news because OVS is supported in just about every hypervisor [ 70 ] and is actually the basis of the switching in some commercial network hardware. As a further advantage, OVS ships as part of the Linux 3.x kernel. This vApp is mated to each ESXi hypervisor instance when the instance is deployed. This is unlike a number of the other original SDN controller offerings. The Nicira NVP controller (see figure) is a cluster of generally three servers that use database synchronization to share state.

Nicira has a service node concept that is used to offload various processes from the hypervisor nodes. Broadcast, multicast, and unknown unicast traffic flows are processed via the service node; IPSec tunnel termination happens here as well. This construct can also be used for inter-hypervisor traffic handling and as a termination point for inter-domain or multidomain interconnect.

See Figure for a sketch of the NVP component relationships. OVS, the gateways, and the service nodes support redundant controller connections for high availability. NVP Manager is the management server with a basic web interface used mainly to troubleshoot and verify connections. Due to the acquisition of Nicira by VMware, [ 72 ] both of their products are now linked in discussion and in the marketplace.

Though developed as separate products, they are merging quickly [ 73 ] into a seamless solution. Nicira supports an OpenStack plug-in to broaden its capabilities in data center orchestration or resource management. Most open source SDN controllers revolve around the OpenFlow protocol due to having roots in the Onix design [ 74 ] (see figure), while only some of the commercial products use the protocol exclusively.

In fact, some use it in conjunction with other protocols, although hybrid operation on some elements in the network will be required to interface OpenFlow and non-OpenFlow networks. This is growing to be a widely desired deployment model.

Unless otherwise stated, the open source OpenFlow controller solutions use memory resident or in-memory databases for state storage. Since most controllers have been based on the Onix code and architecture, they all exhibit similar relationships to the idealized SDN framework.

This is changing slowly as splinter projects evolve but, with the exception of the Floodlight controller that we will discuss later in the chapter, the premise that they all exhibit similar relationships still generally holds true. All of these controllers support some version of the OpenFlow protocol, up to and including the latest 1.x releases. Also note that, while not called out directly, all Onix-based controllers utilize in-memory database concepts for state management.

Before introducing some of the popular Onix-based SDN controllers, we should take some time to describe Mininet, which is a network emulator that simulates a collection of end-hosts, switches, routers, and links on a single Linux kernel. Mininet is important to the open source SDN community as it is commonly used as a simulation, verification, testing tool, and resource. Mininet is an open source project hosted on GitHub.

If you are interested in checking out the freely available source code, scripts, and documentation, refer to GitHub. A Mininet host behaves just like an actual real machine and generally runs the same code—or at least can. In this way, a Mininet host represents a shell of a machine that arbitrary programs can be plugged into and run. Packets are processed by virtual switches, which to the Mininet hosts appear to be a real Ethernet switch or router, depending on how they are configured.

In fact, commercial versions of Mininet switches such as from Cisco and others are available that fairly accurately emulate key switch characteristics of their commercial, purpose-built switches such as queue depth, processing discipline, and policing processing. One very cool side effect of this approach is that the measured performance of a Mininet-hosted network often should approach that of actual non-emulated switches, routers, and hosts.

The figure illustrates a simple Mininet network comprised of three hosts, a virtual OpenFlow switch, and an OpenFlow controller. All components are connected over virtual Ethernet links that are then assigned private net IP addresses for reachability.

As mentioned, Mininet supports very complex topologies of nearly arbitrary size and ordering, so one could, for example, copy and paste the switch and its attached hosts in the configuration, rename them, and attach the new switch to the existing one, and quickly have a network comprised of two switches and six hosts, and so on. One reason Mininet is widely used for experimentation is that it allows you to create custom topologies, many of which have been demonstrated as being quite complex and realistic, such as larger, Internet-like topologies that can be used for BGP research.
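A custom topology of that kind is only a few lines of Python with Mininet's topology API. The sketch below builds two switches with three hosts each and points them at an external OpenFlow controller; the controller address (127.0.0.1:6633) is an assumption and should be replaced with whichever controller is under test.

# Two OpenFlow switches, six hosts, and an external controller.
# Requires Mininet and root privileges, e.g.: sudo python two_switch_topo.py
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.cli import CLI

class TwoSwitchTopo(Topo):
    def build(self):
        s1 = self.addSwitch("s1")
        s2 = self.addSwitch("s2")
        self.addLink(s1, s2)                      # inter-switch link
        for i in range(1, 7):                     # three hosts per switch
            host = self.addHost("h%d" % i)
            self.addLink(host, s1 if i <= 3 else s2)

if __name__ == "__main__":
    net = Mininet(topo=TwoSwitchTopo(),
                  controller=lambda name: RemoteController(name, ip="127.0.0.1",
                                                           port=6633))
    net.start()
    net.pingAll()        # quick reachability check across both switches
    CLI(net)             # drop into the Mininet CLI for interactive testing
    net.stop()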

Another cool feature of Mininet is that it allows for the full customization of packet forwarding. As mentioned, many examples exist of host programs that approximate commercially available switches. In addition to those, some new and innovative experiments have been performed using hosts that are programmable using the OpenFlow protocol. It is these that have been used with the Onix-based controllers we will now discuss.

This move in fact made it one of the first open source OpenFlow controllers. It was subsequently extended and supported via ON. It provides support modules specific to OpenFlow but can and has been extended. NOX is often used in academic network research to develop SDN applications such as network protocol research.

One really cool side effect of its widespread academic use is that example code is available for emulating a learning switch and a network-wide switch, which can be used as starter code for various programming projects and experimentation. SANE is an approach to representing the network as a filesystem. Ethane is a Stanford University research application for centralized, network-wide security at the level of a traditional access control list. Both demonstrated the efficiency of SDN by implementing these functions in significantly fewer lines of code [ 78 ] than previous approaches required.
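In the same spirit, POX (the Python descendant of NOX discussed next) ships an l2_learning component; a stripped-down sketch of that pattern looks roughly like the following. It is a simplified illustration rather than the bundled module itself, and it omits timeouts, LLDP filtering, and other refinements.

# A simplified, illustrative POX component (not the bundled l2_learning module).
# Dropped into POX's ext/ directory it would be launched as: ./pox.py my_learning_switch
from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

class LearningSwitch(object):
    def __init__(self, connection):
        self.connection = connection
        self.mac_to_port = {}                     # learned MAC -> switch port
        connection.addListeners(self)

    def _handle_PacketIn(self, event):
        packet = event.parsed
        self.mac_to_port[packet.src] = event.port     # learn the sender's port
        if packet.dst in self.mac_to_port:
            # Known destination: install a flow and forward the buffered packet.
            msg = of.ofp_flow_mod()
            msg.match = of.ofp_match.from_packet(packet, event.port)
            msg.idle_timeout = 10
            msg.actions.append(of.ofp_action_output(port=self.mac_to_port[packet.dst]))
            msg.data = event.ofp
            self.connection.send(msg)
        else:
            # Unknown destination: flood this one packet.
            msg = of.ofp_packet_out()
            msg.data = event.ofp
            msg.in_port = event.port
            msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
            self.connection.send(msg)

def launch():
    core.openflow.addListenerByName(
        "ConnectionUp", lambda event: LearningSwitch(event.connection))
    log.info("Learning switch component running")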

POX runs anywhere and can be bundled with the install-free PyPy runtime for easy deployment. Trema [ 80 ] is an OpenFlow programming framework for developing OpenFlow controllers; it was originally developed and supported by NEC, with subsequent open source contributions under a GPLv2 scheme.

Unlike the more conventional OpenFlow-centric controllers that preceded it, the Trema model provides basic infrastructure services as part of its core modules, which in turn support the development of user modules (Trema apps) [ 81 ]. Developers can create their user modules in Ruby or C (the latter is recommended when speed of execution becomes a concern).

The main API the Trema core modules provide to an application is a simple, non-abstracted OpenFlow driver (an interface to handle all OpenFlow messages). Trema now supports OpenFlow version 1.X via a repository called TremaEdge. Developers can individualize or enhance the base controller functionality (class object) by defining their own controller subclass object and embellishing it with additional message handlers.

Other core modules include timer and logging libraries, a packet parser library, and hash-table and linked-list structure libraries. The Trema core does not provide any state management or database storage structure; these are contained in the Trema apps and could default to memory-only storage using the data structure libraries.

The infrastructure provides a command-line interface (CLI) and configuration filesystem for configuring and controlling applications (resolving dependencies at load-time), managing messaging and filters, and configuring virtual networks via a Network Domain Specific Language (DSL), a Trema-specific configuration language. The appeal of Trema is that it is an all-in-one, simple, modular, rapid prototyping and development environment that yields results with a smaller codebase. There is also an OpenStack Quantum plug-in available for the sliceable switch abstraction.

The figure illustrates the Trema architecture. The combination of modularity and per-module (or per-application) service APIs makes Trema more than a typical controller with a monolithic API for all its services. Trema literature refers to Trema as a framework. This idea is expanded upon in a later chapter.

Ryu [ 86 ] is a component-based, open source framework (supported by NTT Labs) implemented entirely in Python (see figure). The Ryu messaging service does support components developed in other languages. Components include OpenFlow wire protocol support up through version 1.x. A prototype component has been demonstrated that uses HBase for statistics storage, including visualization and analysis via the stats component tools.
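A minimal Ryu application gives a feel for the component model. The sketch below installs a table-miss flow entry on each connecting OpenFlow 1.3 switch so that unmatched packets are sent to the controller; it follows the pattern of Ryu's bundled simple_switch_13 example but is trimmed for illustration.

# Minimal Ryu app in the style of the bundled simple_switch_13 example.
# Typical invocation (assumption): ryu-manager table_miss.py
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMissApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        # Install a table-miss entry: anything that matches no other rule is
        # punted to the controller, where further components can decide on it.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
        self.logger.info("table-miss entry installed on datapath %s", datapath.id)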

While Ryu supports high availability via a Zookeeper component, it does not yet support a cooperative cluster of controllers. Floodlight [ 87 ] is a very popular SDN controller contribution from Big Switch Networks to the open source community. Floodlight is based on Beacon from Stanford University. The Floodlight core architecture is modular, with components including topology management, device management MAC and IP tracking , path computation, infrastructure for web access management , counter store OpenFlow counters , and a generalized storage abstraction for state storage defaulted to memory at first, but developed into both SQL and NoSQL backend storage abstractions for a third-party open source storage solution.

These components are treated as loadable services with interfaces that export state. The API allows applications to get and set this state of the controller, as well as to subscribe to events emitted from the controller using Java Event Listeners, as shown in the figure. Floodlight incorporates a threading model that allows modules to share threads with other modules.

Synchronized locks protect shared data. Component dependencies are resolved at load-time via configuration. There are also sample applications that include a learning switch (this is the OpenFlow switch abstraction most developers customize or use in its native state), a hub application, and a static flow pusher application.
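The static flow pusher is normally exercised through Floodlight's REST API. The sketch below pushes one forwarding entry and lists the connected switches; the URL path (/wm/staticflowentrypusher/json) and field names match older Floodlight releases and have been renamed in newer ones, so treat them as assumptions to verify against the controller version in use.

# Hedged sketch: the path /wm/staticflowentrypusher/json and the field names
# below match older Floodlight releases; newer releases renamed the module
# (/wm/staticflowpusher/...) and some fields, so verify against your version.
import requests

FLOODLIGHT = "http://127.0.0.1:8080"            # default REST port (assumption)

flow = {
    "switch": "00:00:00:00:00:00:00:01",        # DPID of the target switch
    "name": "h1-to-h2",
    "priority": "32768",
    "ingress-port": "1",
    "active": "true",
    "actions": "output=2",                      # forward traffic from port 1 out port 2
}

resp = requests.post(FLOODLIGHT + "/wm/staticflowentrypusher/json", json=flow)
print("push:", resp.status_code, resp.text)

# The core module lists the switches currently connected to the controller.
print(requests.get(FLOODLIGHT + "/wm/core/controller/switches/json").json())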

The Floodlight OpenFlow controller can interoperate with any element agent that supports OpenFlow (OF version compatibility aside, at the time of writing, support for both of-config and version 1.x). In addition, Big Switch has also provided Loxi, an open source OpenFlow library generator with multiple language support [ 92 ], to address the problems of multiversion support in OpenFlow.

A rich development tool chain of build and debugging tools is available, including a packet streamer and the aforementioned static flow pusher. In addition, Mininet [ 93 ] can be used to do network emulation, as we described earlier. Big Switch has been actively working on a data model compilation tool that converted Yang to REST, as an enhancement to the environment for both API publishing and data sharing. These enhancements can be used for a variety of new functions absent in the current controller, including state and configuration management.

As we mentioned in the previous section, Floodlight is related to the base Onix controller code in many ways and thus possesses many architectural similarities. As mentioned earlier, most Onix-based controllers utilize in-memory database concepts for state management, but Floodlight is the exception: it is the one Onix-based controller today that offers a component called BigDB. The virtualization of the PE function is an SDN application in its own right that provides both service and platform virtualization.

The addition of a controller construct aids in the automation of service provisioning as well as providing centralized label distribution and other benefits that may ease the control protocol burden on the virtualized PE. The idea behind these offerings is that a VRF structure familiar in L3VPN can represent a tenant and that the traditional tooling for L3VPNs with some twists can be used to create overlays that use MPLS labels for the customer separation on the host, service elements, and data center gateways.

This solution has the added advantage of being, at least in theory, easier to stitch into existing customer VPNs at data center gateways, creating a convenient cloud-bursting application. The figure demonstrates a data center orchestration application that can be used to provision virtual routers on hosts to bind together the overlay instances across the network underlay. The controller is a multi-node design comprised of multiple subsystems.

The motivation for this approach is to facilitate scalability, extensibility, and high availability. The system supports potentially separable modules that can operate as individual virtual machines in order to handle scale-out, with server modules for analytics, configuration, and control. As a brief simplification, one module provides the compiler that uses the high-level data model to convert API requests for network actions into a low-level data model for implementation via the control code. This server also collects statistics and other management information from the agents it manages via the XMPP channel.

The Control Node uses BGP to distribute network state, presenting a standardized protocol for horizontal scalability and the potential of multivendor interoperability. The architecture synthesizes experiences from more recent, public architecture projects for handling large and volatile data stores and modular component communication. The Contrail solution internally leverages open source components that are proven. It should be noted that Redis was originally sponsored by VMware.

Zookeeper [ 99 ] is used in the discovery and management of elements via their agents. Like all SDN controllers, the Juniper solution requires a paired agent in the network elements, regardless of whether they are real devices or virtualized versions operating in a VM.

The explicit messaging contained within this container needs to be fully documented to ensure interoperability in the future. Several RFCs have been submitted for this operational paradigm. The implementation uses IP unnumbered interface structures that leverage a loopback to identify the host physical IP address and to conserve IP addresses.

The solution does not require support of MPLS switching in the transit network. In terms of the southbound protocols, we mentioned that XMPP is used as a carrier channel between the controller and the virtual routers, but additional southbound protocols such as BGP are implemented as well.

Otherwise, it would be solely first-come, first-served, making the signaling very nondeterministic. Even with this mechanism in place, the sequence in which different ingress routers signal the LSPs determines the actual selected paths under normal and heavy load conditions. Now imagine that only enough bandwidth exists for one LSP at a particular node. So if A and B are signaled, only one of A or B will be in place, depending on which went first.

Now when C and D are signaled, the first one signaled will preempt A or B (whichever remained), but then the last one signaled will remain.
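The order dependence can be illustrated with a small toy model (not an RSVP-TE implementation): a node with room for one LSP admits a newly signaled LSP only if its setup priority is better, i.e. numerically lower, than the holding priority of the LSP already installed. The priority values below are arbitrary assumptions chosen so that C and D can preempt A and B; running the two orderings shows that different signaling sequences leave different LSPs in place, which is exactly the nondeterminism described above.

# Toy illustration only (not an RSVP-TE implementation): a node with room for
# one LSP admits a newly signaled LSP only if its setup priority is better
# (numerically lower) than the holding priority of the LSP already installed.
def signal(order, setup, holding):
    installed = None
    for lsp in order:
        if installed is None:
            installed = lsp
        elif setup[lsp] < holding[installed]:   # better setup priority wins
            installed = lsp                     # preempt the current holder
    return installed

setup   = {"A": 5, "B": 5, "C": 3, "D": 3}      # arbitrary assumed priorities
holding = {"A": 5, "B": 5, "C": 4, "D": 4}

print(signal(["A", "B", "C", "D"], setup, holding))   # -> D
print(signal(["B", "A", "D", "C"], setup, holding))   # -> C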


In theory, an SDN controller provides services that can realize a distributed control plane, as well as abet the concepts of ephemeral state management and centralization. In reality, any given instance of a controller will provide a slice or subset of this functionality, as well as its own take on these concepts. In this chapter, we will detail the most popular SDN controller offerings from both commercial vendors and the open source community. We have also included text that compares each controller to that ideal vision of a controller.

We would like to note that while it was our intention to be thorough in describing the most popular controllers, we likely missed a few. We have also detailed some commercial controller offerings, but likely missed some here too. Any of these omissions, if they exist, were not intentional, nor intended to indicate any preference for one offering over another. An idealized controller is shown in the figure, which is an illustration replicated from Chapter 9 but repeated here for ease of reference.

We will refer back to this figure throughout the chapter in an effort to compare and contrast the different controller offerings with each other. The general description of an SDN controller is a software system, or collection of systems, that together provides: management of network state, which in some cases may involve a database for the management and distribution of this state. These databases serve as a repository for information derived from the controlled network elements and related software, as well as information controlled by SDN applications, including network state, some ephemeral configuration information, learned topology, and control session information.

In some cases, the controller may have multiple, purpose-driven data management processes. In other cases, other in-memory database strategies can be employed, too. A high-level data model captures the relationships between managed resources, policies, and other services provided by the controller. In many cases, these data models are built using the Yang modeling language. A modern, often RESTful (representational state transfer) application programming interface (API) is provided that exposes the controller services to an application.

This facilitates most of the controller-to-application interaction. This interface is ideally rendered from the data model that describes the services and features of the controller. In some cases, the controller and its API are part of a development environment that generates the API code from the model.

Some systems go further and provide robust development environments that allow expansion of core capabilities and subsequent publishing of APIs for new modules, including those that support dynamic expansion of controller capabilities: a secure TCP control session between the controller and the associated agents in the network elements; a standards-based protocol for the provisioning of application-driven network state on network elements; and a device, topology, and service discovery mechanism, a path computation system, and potentially other network-centric or resource-centric information services.

This chapter presents the reader with a number of use cases that fall under the areas of bandwidth scheduling, manipulation, and bandwidth calendaring.

We demonstrate use cases that we have actually constructed in the lab as proof-of-concept trials, as well as those that others have instrumented in their own lab environments. These proof-of-concept approaches have funneled their way into some production applications, so while they may be toy examples, they do have real-world applicability. This chapter shows some use cases that fall under the area of data centers. Specifically, we show some interesting use cases around data center overlays and network function virtualization.

We also show how big data can play a role in driving some SDN concepts. These use cases concern themselves with the general action of receiving some traffic at the edge of the network and then taking some action. The action might be preprogrammed via a centralized controller, or a device might need to ask a controller what to do once certain traffic is encountered.

Here we present two use cases to demonstrate these concepts. First, we show how we built a proof of concept that effectively replaced the Network Access Control (NAC) protocol and its moving parts with an OpenFlow controller and some real routers. This solved a real problem at a large enterprise that could not have been easily solved otherwise.

We also show a case of how a virtual firewall can be used to detect and trigger certain actions based on controller interaction.

Supplemental material (code examples, exercises, etc.) is available for download; you may download the configurations for use in your own lab. This book is here to help you get your job done. In general, if this book includes code examples, you may use the code in your programs and documentation.

For example, writing a program that uses several chunks of code from this book does not require permission. Answering a question by citing this book and quoting example code does not require permission. We appreciate, but do not require, attribution. Copyright Thomas D. Nadeau and Ken Gray. If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us at permissions oreilly.

Safari Books Online www. Technology professionals, software developers, web designers, and business and creative professionals use Safari Books Online as their primary resource for research, problem solving, learning, and certification training.

Safari Books Online offers a range of product mixes and pricing programs for organizations , government agencies , and individuals. For more information about Safari Books Online, please visit us online. We have a web page for this book, where we list errata, examples, and any additional information. To comment or ask technical questions about this book, send email to bookquestions oreilly.

Life is a journey, and I am glad you guys are walking the road with me. I would also like to thank my parents, Clement and Janina. Thank you to my many colleagues present and past who pushed me to stretch my imagination in the area of SDN.

Also, I will never forget how George Swallow took me on as his young Padawan and gave me the Jedi training that helped me be where I am today. Without that, I would likely not have achieved the accomplishments I have in the networking industry. There are many others from my journey at Cisco, CA, and my current employer, Juniper Networks, who are too numerous to mention. And, of course, Patrick Ames, our editor who held the course when we strayed and helped us express the best, most articulate message we could convey.

Last, but surely not least, I would like to give my heartfelt thanks to Ken Gray, my coauthor on this book. Without you grabbing the other oar of this boat, I am not sure I would have been able to row it myself to the end. Your contributions truly enhanced this book beyond anything I would have imagined myself.

I would like to thank my amazing wife, Leslie. You patiently supported me through this project and all that went with it and provided much needed balance and sanity. For my children, Lilly and Zane, I hope my daring to write this first book may provide inspiration for you to start your own great work whatever it may be. We share a common view on this topic that we developed from two different but complementary perspectives. Putting those two views together, first in our numerous public engagements over the past year and finally in print, has been a great experience for me, has helped me personally refine the way I talk about SDN, and hopefully has resulted in a great book.

OSI model The Open Systems Interconnection OSI model defines seven different layers of technology: physical, data link, network, transport, session, presentation, and application. Switches These devices operate at layer 2 of the OSI model and use logical local addressing to move frames across a network. Ethernet These broadcast domains connect multiple hosts together on a common infrastructure.

IP addressing and subnetting Hosts using IP to communicate with each other use 32-bit addresses. ICMP Network engineers use this protocol to troubleshoot and operate a network, as it is the core protocol used on some platforms by the ping and traceroute programs. Data center A facility used to house computer systems and associated components, such as telecommunications and storage systems.

MPLS Multiprotocol Label Switching (MPLS) is a mechanism in high-performance networks that directs data from one network node to the next based on short path labels rather than long network addresses, avoiding complex lookups in a routing table. Northbound interface An interface that conceptualizes the lower-level details of the network. Southbound interface An interface that conceptualizes the opposite of a northbound interface.

Network topology The arrangement of the various elements (links, nodes, interfaces, hosts, etc.) of a network. Application programming interfaces A specification of how some software components should interact with each other. Chapter 1, Introduction This chapter introduces and frames the conversation this book engages in around the concepts of SDN, where they came from, and why they are important to discuss.

Chapter 6, Data Center Concepts and Constructs This chapter introduces the reader to the notion of the modern data center through an initial exploration of the historical evolution of the desktop-centric world of the late s to the highly distributed world we live in today, in which applications—as well as the actual pieces that make up applications—are distributed across multiple data centers. Chapter 7, Network Function Virtualization In this chapter, we build on some of the SDN concepts that were introduced earlier, such as programmability, controllers, virtualization, and data center concepts.

Chapter 8, Network Topology and Topological Information Abstraction This chapter introduces the reader to the notion of network topology, not only as it exists today but also how it has evolved over time. Chapter 10, Use Cases for Bandwidth Scheduling, Manipulation, and Calendaring This chapter presents the reader with a number of use cases that fall under the areas of bandwidth scheduling, manipulation, and bandwidth calendaring.

Chapter 13, Final Thoughts and Conclusions This chapter brings the book into the present tense—re-emphasizing some of our fundamental opinions on the current state of SDN as of this writing and providing a few final observations on the topic. Conventions Used in This Book. Italic Indicates new terms, URLs, email addresses, filenames, file extensions, pathnames, directories, and Unix utilities.

Constant width Indicates commands, options, switches, variables, attributes, keys, functions, types, classes, namespaces, methods, modules, properties, parameters, values, objects, events, event handlers, XML tags, HTML tags, macros, the contents of files, and the output from commands. Constant width bold Shows commands and other text that should be typed literally by the user, as well as important lines of code.

Constant width italic Shows text that should be replaced with user-supplied values. Note This icon signifies a tip, suggestion, or general note. Warning This icon indicates a warning or caution.

