There is tremendous interest in network virtualization and the benefits it can bring to service provider network deployments, especially for 5G. However, terms like legacy router, virtualized router, disaggregated router, virtual router, and network slicing may mean different things to different people, which sometimes gets in the way of understanding the benefits of virtualization.
The goal of this series of three blog posts is to define these terms and provide a framework around the definitions to show how virtual routers are a key component that fulfills the potential benefits of virtualization.
Most routers used in service provider networks today are based on architectures introduced in the 1990s. All these architectures share two components:
- Control Plane: This component implements routing protocols like BGP, OSPF, or IS-IS. These routing processes exchange control information with other instances of the same processes running in other routers and execute algorithms to select a route to each destination they learn about. The output of these algorithms is stored in the Routing Information Base (RIB), a database holding the next hop (next router) for every known destination. This database is managed by another process known as the RIB manager.
- Data Plane: This component consists of the network interfaces through which packets arrive at and depart from a router, plus the switching fabric and associated logic that interconnect the interfaces and enable the packet-switching functionality. The data plane processes each incoming packet by extracting the destination address from the packet’s header and looking that address up in a database known as the Forwarding Information Base (FIB), in order to determine the outgoing interface to reach the next hop toward the final destination.
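The FIB lookup described above is a longest-prefix match: the most specific matching prefix wins. A minimal, hypothetical Python sketch (real data planes implement this in hardware with tries or TCAMs, and the prefixes and next hops here are invented for illustration):

```python
import ipaddress

# Hypothetical FIB: destination prefix -> (next hop, outgoing interface).
FIB = {
    "10.0.0.0/8":  ("192.0.2.1",   "eth0"),
    "10.1.0.0/16": ("192.0.2.9",   "eth1"),
    "0.0.0.0/0":   ("192.0.2.254", "eth2"),  # default route
}

def fib_lookup(dst: str):
    """Longest-prefix match: pick the most specific prefix containing dst."""
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, entry in FIB.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, entry)
    return best[1] if best else None

print(fib_lookup("10.1.2.3"))  # → ('192.0.2.9', 'eth1'), the /16 beats the /8
print(fib_lookup("10.9.9.9"))  # → ('192.0.2.1', 'eth0')
print(fib_lookup("8.8.8.8"))   # → ('192.0.2.254', 'eth2'), the default route
```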
The network interface capacity, or port capacity, of a router can be increased by adding ports to the data plane. This is usually accomplished by adding Input/Output (I/O) modules to the data plane, each containing a number of ports and supporting a wide range of layer 2 standards such as Ethernet, ATM, T-1, or DS-3.
These different module types can be mixed and matched as needed. Generally, lower-speed ports are used for customer-facing connections, while higher-speed ports are used for backbone connections. This also allows these devices to adapt to growing bandwidth demands.
Each of these I/O modules in the data plane contains a local copy of the FIB, which in turn is computed from the RIB that the control plane maintains. That is, FIB databases in the data plane can be interpreted as cached copies of the RIB database in the control plane (along with some additional layer 2 information necessary for switching and forwarding). Therefore, continuous communication between the control plane module and the different modules of the data plane is essential in order to keep FIB databases up to date with the changes that occur in the network.
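The RIB-to-FIB relationship can be sketched as a simple selection step: the RIB may hold several candidate routes per prefix (one per protocol), the control plane picks a best route for each, and copies of the result are pushed to every data plane module. The route data and the use of administrative distance as the tiebreaker here are illustrative assumptions, not a vendor's actual algorithm:

```python
# Hypothetical RIB: each prefix may be learned from several protocols.
# Lower "distance" wins, modeled on administrative distance.
RIB = {
    "10.0.0.0/8": [
        {"protocol": "ospf", "distance": 110, "next_hop": "192.0.2.1"},
        {"protocol": "bgp",  "distance": 200, "next_hop": "192.0.2.7"},
    ],
    "172.16.0.0/12": [
        {"protocol": "static", "distance": 1, "next_hop": "192.0.2.5"},
    ],
}

def compute_fib(rib):
    """Select the best route per prefix; the result is what would be
    pushed to each data-plane module over the backplane."""
    return {
        prefix: min(routes, key=lambda r: r["distance"])["next_hop"]
        for prefix, routes in rib.items()
    }

fib = compute_fib(RIB)
# Each I/O module keeps its own cached copy of the same FIB:
line_card_fibs = {card: dict(fib) for card in ("slot0", "slot1")}
print(fib)  # → {'10.0.0.0/8': '192.0.2.1', '172.16.0.0/12': '192.0.2.5'}
```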
In legacy routers, the control plane and data plane are bundled into the same device. The control plane software runs on its own module containing a general-purpose CPU and a fixed amount of RAM. Vendors have branded their control plane software, as with Cisco’s IOS and Juniper’s JUNOS. The control plane processing module is known as the Routing Engine (Juniper) or Switch Route Processor (Cisco).
In contrast, the data plane is implemented in hardware, with vendors building custom ASICs for the switching fabric and data pipelines to ensure line-rate performance when forwarding packets.
The control plane module and the data plane modules communicate through a backplane that is also part of the device. For example, the FIB population in data plane modules based on RIB information is achieved via backplane communication. This tightly integrated approach is also a closed one, because the customer can only use the vendor’s hardware and software. Moreover, both the control plane and data plane are managed as a single entity, and Cisco’s Command Line Interface (CLI) became the de facto approach to managing the entire device (rather than individual components, as might be done with a general-purpose server). Other vendors followed Cisco’s model.
Based on this closed and tightly integrated approach, the appliance model has a number of issues related to CapEx and OpEx, including high cost and vendor lock-in. These issues are beyond the scope of this blog post; for more information on them, you can take a look here.
In virtualized routers, both the control plane and the data plane are implemented in software. That is, both are implemented via software processes running in the same server. Servers are usually virtual machines (VMs) running on a hypervisor on an x86 server.
Virtualized routers can run either on cloud infrastructure or on customers’ premises, and they can share the VM fleet with other workloads. In service provider networks, this approach to virtualization is called Network Function Virtualization (NFV), and the network functions being virtualized are known as Virtual Network Functions (VNFs); these include not only virtualized routers but also firewalls, video transcoding, and so on.
Sometimes, the software data plane is implemented using specialized software libraries. One example is DPDK, an open-source set of libraries and drivers for fast packet processing on x86 servers. DPDK enables fast packet processing by allowing network interface cards (NICs) to DMA (Direct Memory Access) packets directly into an application’s address space and by letting the application poll for packets, thereby avoiding the overhead of per-packet interrupts from the NIC. This ensures a reasonable level of router performance for low-throughput applications. Such routers can scale up to as much as 10 Gbps, but beyond that the cost of the server becomes much higher and the feature set more limited.
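The key idea in poll-mode packet processing is that the application drains the receive ring in batches inside a tight loop, rather than taking an interrupt per packet. A toy Python sketch of that pattern (this deliberately does not use the real DPDK API; the ring and burst size are invented stand-ins for what DMA and a poll-mode driver provide):

```python
from collections import deque

# Toy stand-in for a NIC receive ring that DMA would fill in a real system.
rx_ring = deque(f"pkt{i}" for i in range(10))

def rx_burst(ring, max_burst=4):
    """Poll-mode receive: drain up to max_burst packets per call,
    mimicking the batch semantics of poll-mode drivers (no interrupt
    per packet; the caller polls repeatedly)."""
    burst = []
    while ring and len(burst) < max_burst:
        burst.append(ring.popleft())
    return burst

processed = []
while rx_ring:  # the application busy-polls in a tight loop
    processed.extend(rx_burst(rx_ring))

print(len(processed))  # → 10, all packets received in bursts of up to 4
```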
As a result of these performance limitations, virtualized routers tend to be most effective where throughput requirements are lower, such as on customer premises, and thus they usually end up being part of Customer Premises Equipment (CPE), for example for 100 Mbps connections on what is known as uCPE.
The advent of commercially available switching ASICs from suppliers like Broadcom, with enough capability and capacity to meet the performance requirements of service providers, has opened the door for a new type of networking device known as the white-box switch, which leverages these ASICs to deliver hardware data planes at a fraction of the cost of closed hardware platforms.
Broadcom is the market leader in switching ASICs. They have two families of switching ASICs used in white boxes:
- The Strata XGS Family, which includes the Tomahawk, Trident 2, and Trident 2+, used primarily in data center switching products.
- The Strata DNX Family (also known as the Dune family), which includes the Qumran, Jericho, and Jericho 2. This family came from Broadcom’s acquisition of Dune Networks. DNX has a different architecture, with expandable TCAM that gives the Qumran deep table capacity. DNX also supports an expandable packet buffer, and these chips are better suited to carrier routing applications.
White-label switch manufacturers using Broadcom chipsets include Edgecore Networks, Delta, and Alpha, among others. For example, the Edgecore AS7316 and the Delta 208 both use DNX Qumran chipsets and are targeted at cell site gateway scenarios.
White-label switches are usually provided without an operating system (OS), which means customers are free to use the OS they prefer, much like servers in IT. Similarly, control plane software is also not present, so customers are free to choose from a variety of open-source and commercial solutions.
In the next post, we will focus on the different options for software control planes on disaggregated routers running on white-label switches. The combination of a software control plane and a hardware data plane running on a white-label switch opens the door to what can be considered the best of both worlds: line-rate performance at a reasonable cost, and freedom from vendor lock-in thanks to separate, multiple vendors for hardware and software. The software control plane is the remaining piece needed to unlock all this potential value.