Lesson 3: Compartmentalize change

The hypervisor was a technology that could be added to the existing mix and made everything work better. The applications, the OS, and the hardware were all unchanged after the hypervisor was introduced. As a result, organizations could focus on prioritizing which workloads to virtualize, enabling a gradual introduction of the new technology.

There is one big difference between VMs on a server and virtual routers. The CPU on servers was often underutilized, so server virtualization allowed organizations to take advantage of that unused capacity. Legacy routers, by contrast, were not designed to support multiple virtual routers. Node Slicing is one example: it requires an external server to run multiple copies of the OS and is limited to one VR per I/O module. Thus, we need to consider what hardware will drive the data plane.

The choices come down to white box switches and x86 servers. An x86 server running DPDK can run a VM of a legacy vendor's OS (we call this a virtualized router). Given the CPU and memory requirements of the VM, these servers can cost several thousand dollars and still have significant limitations on throughput. Virtualized routers are much more practical in applications like uCPE, where throughput requirements are low enough that the server can be modest in both performance and cost.
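
To make the throughput limitation concrete, here is a back-of-the-envelope sizing sketch in Python. The ~10 Mpps per-core forwarding rate is an illustrative assumption, not a measured figure; real DPDK performance depends on the CPU, the NIC, and the router's feature set. The line-rate math itself follows from standard Ethernet framing overhead.

```python
import math

# Back-of-the-envelope sizing for a DPDK-based virtualized router.
# Each Ethernet frame carries 20 bytes of preamble + inter-frame gap
# on the wire in addition to the frame itself.
FRAME_OVERHEAD_BYTES = 20

def line_rate_mpps(link_gbps: float, frame_bytes: int = 64) -> float:
    """Millions of packets/sec needed to fill a link at a given frame size."""
    bits_per_frame = (frame_bytes + FRAME_OVERHEAD_BYTES) * 8
    return link_gbps * 1e9 / bits_per_frame / 1e6

def cores_needed(link_gbps: float, frame_bytes: int = 64,
                 mpps_per_core: float = 10.0) -> int:
    """Cores to sustain line rate; mpps_per_core is a hypothetical figure."""
    return math.ceil(line_rate_mpps(link_gbps, frame_bytes) / mpps_per_core)

for gbps in (1, 10, 100):
    print(f"{gbps:>3} GbE @ 64B: {line_rate_mpps(gbps):6.2f} Mpps, "
          f"~{cores_needed(gbps)} core(s)")
```

At 1 GbE, a single modest core is plenty, which is why uCPE works well on inexpensive servers; at 100 GbE, the core count, and therefore the server cost, climbs quickly.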

For most service provider applications, white box switches offer far more throughput at a much lower cost per port than servers. This is where Volta has focused, and one of our innovations is support for up to 255 virtual routers on a single white box switch. That is only possible because we run the majority of the control plane in the cloud, where it is much easier to scale up. However, most service providers don't have many white box switches deployed yet. Check out our partnership announcement with Edgecore.
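
As a rough illustration of that scaling model, the sketch below shows one way to represent many lightweight virtual routers sharing a single white box data plane while each VR's control plane runs in the cloud. The VirtualRouter and WhiteBoxSwitch names, the allocation scheme, and the control-plane URL are all hypothetical, not Volta's actual software; only the 255-VR-per-switch limit comes from the text above.

```python
from dataclasses import dataclass, field

MAX_VRS_PER_SWITCH = 255  # the limit described above

@dataclass
class VirtualRouter:
    vr_id: int              # 1..255, unique per switch
    tenant: str
    control_plane_url: str  # cloud-hosted control plane (hypothetical)

@dataclass
class WhiteBoxSwitch:
    name: str
    vrs: dict = field(default_factory=dict)  # vr_id -> VirtualRouter

    def add_vr(self, tenant: str) -> VirtualRouter:
        if len(self.vrs) >= MAX_VRS_PER_SWITCH:
            raise RuntimeError(f"{self.name}: VR limit reached")
        # Take the lowest free ID; the data plane stays on the switch,
        # while the control plane is spun up as a cloud instance.
        vr_id = min(set(range(1, MAX_VRS_PER_SWITCH + 1)) - set(self.vrs))
        vr = VirtualRouter(
            vr_id, tenant,
            control_plane_url=f"https://cp.example.net/{self.name}/{vr_id}",
        )
        self.vrs[vr_id] = vr
        return vr

switch = WhiteBoxSwitch("edge-pop-1")
print(switch.add_vr("tenant-a"))
```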

This has several implications for service providers who are interested in virtual routers:

First, ensure interoperability with legacy protocols. SDN was a great idea: it fostered the concepts of disaggregation and choice of low-cost hardware. However, the introduction of OpenFlow created a barrier to acceptance, and SDN never took off. Thus, we want to stick with tried-and-true protocols to minimize the magnitude of change and cost. Multivendor interoperability is necessary for a smooth deployment. The analogy with server virtualization is that it was easy to have some servers running VMs alongside others that weren't.

Second, pick the applications that will benefit the most. Since new hardware will be required, it makes sense to look at applications that are already driving changes to your network. uCPE, DCSG, Crosshaul, and MEC are all examples of locations that require new hardware, so they are natural candidates for virtual routing.

Third, cost savings were an important driver for server virtualization, but agility was equally important. Spinning up a new copy of a VM to support more users was significantly easier than standing up a new server. The same will be true for carrier networks. Applications like RAN sharing, support for IoT, and network slicing will all require service providers to respond quickly to customers. Thus, virtual routers become essential to enabling this new level of service agility and revenue velocity, as well as keeping costs down as networks change.
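
Continuing the hypothetical WhiteBoxSwitch sketch from earlier, the agility argument looks like this in practice: onboarding a new RAN-sharing tenant, IoT service, or network slice becomes a single provisioning call rather than a hardware install.

```python
# Same hypothetical sketch as above: each new service is an API call,
# not a truck roll.
for tenant in ("mvno-b", "iot-slice-1", "enterprise-c"):
    vr = switch.add_vr(tenant)
    print(f"provisioned VR {vr.vr_id} for {tenant}")
```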