5 Lessons for virtual routing from server virtualization

In tomorrow’s webinar hosted by IHS Markit’s Michael Howard, we will discuss routing virtualization in the context of cloud-native NFV. One of the key points is that network operators want the same benefits from virtual routing that data centers saw from server virtualization. I want to use the next few blog posts to discuss what lessons operators should take.

Virtualization on commodity servers began a revolution in computing that delivered significant cost savings and changed how workloads are managed. Like any technology that drives down marginal cost, it enabled a significant expansion of computing. It also laid the foundation for today’s cloud computing, hybrid data centers, and containerization. As network operators get serious about virtualizing routing, it is instructive to look at what made server virtualization so successful and apply those lessons to networks.

Lesson 1: Separate the workload from the hardware

As mass-produced processors became increasingly powerful, it was clear that most of the processing power on a server was going unused. Hypervisors made it possible to run multiple application workloads on a server at the same time. This resulted in significant cost savings as the ratio of applications went from one per server to many per server.

The parallel in networking is the separation of the data plane and the control plane. This concept has been around for many years and is core to Software-Defined Networking (SDN). The IETF published RFC 3746, “Forwarding and Control Element Separation (ForCES) Framework” in 2004. The OpenFlow protocol has its roots in academic research dating back to 2006. The goal was always to allow customers the freedom to select hardware and software separately which would help keep costs down.
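The separation described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the ForCES/SDN split, not any real controller API: a control element computes routes and programs them into one or more separate forwarding elements, which could run on entirely different hardware. All class and method names here are invented for illustration.

```python
class ForwardingElement:
    """Data plane: holds a forwarding table (FIB) and moves packets."""
    def __init__(self):
        self.fib = {}  # prefix -> next hop

    def install(self, prefix, next_hop):
        self.fib[prefix] = next_hop

    def forward(self, dst_prefix):
        # Forward if a matching entry exists; otherwise drop.
        return self.fib.get(dst_prefix, "drop")


class ControlElement:
    """Control plane: runs the routing logic, possibly on another box."""
    def __init__(self, forwarding_elements):
        self.fes = forwarding_elements

    def compute_and_push(self, routes):
        # One control element can program many forwarding elements,
        # decoupling the routing software from the forwarding hardware.
        for prefix, next_hop in routes.items():
            for fe in self.fes:
                fe.install(prefix, next_hop)


fe1, fe2 = ForwardingElement(), ForwardingElement()
ce = ControlElement([fe1, fe2])
ce.compute_and_push({"10.0.0.0/8": "192.0.2.1"})
print(fe1.forward("10.0.0.0/8"))  # prints "192.0.2.1"
```

The point of the sketch is the interface boundary: because the control element talks to forwarding elements only through `install`, either side can be swapped out independently, which is exactly the mix-and-match freedom the paragraph above describes.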

SDN never lived up to its promise, in no small part because it threatened the appliance model of the legacy routers while requiring their cooperation. Cisco, Juniper, and the other router vendors all deliver products where the software and hardware are tightly integrated. This appliance approach meant that the operator could not mix and match the routing software, OS, and hardware; they were limited to what the vendor would support. OpenFlow would have had to run on the routers an operator already owned in order to avoid a completely impractical forklift upgrade. Legacy vendors have become very good at professing support while protecting their vested interests.

In recent years, we have seen a step in the right direction with disaggregated appliances. Open networking is gaining traction in the data center and attracting interest from service providers. Switches are available from white-box vendors like Edge-core and as “brite boxes” from leading IT vendors such as Dell. They can be paired with a network OS like Open Network Linux (ONL) and a choice of networking stacks. Most commonly, though, they run a Network Operating System (NOS), which combines the OS and the networking software into one integrated piece of software. NOSes come from vendors like Cumulus Networks and from open source projects like DANOS and SONiC.

This disaggregated appliance does allow the customer to pick their own hardware (as long as it is on the NOS vendor’s supported list), so it does represent cost savings in the data plane. However, it maintains the one-to-one relationship between application (routing stack) and hardware. In that sense, it perpetuates the router appliance model, where all the software processes run on the box that does the data plane.

In Lesson 5, we will discuss how scaling out the control plane provides the same level of benefit as server virtualization.
