Author: Hugh Kelly

Disaggregating the Appliance Model

In carrier networks, the appliance model is giving way to a disaggregated model. Traditionally, functions like routing were implemented in a tightly coupled hardware and software bundle. Routing vendors could ensure the proper operation of their system because they controlled the entire network element. They even developed their own switching ASICs for their line cards.

Lesson 5: Cost, Agility and Velocity

Server virtualization gained interest as a way to optimize costs. The processing capacity of servers had grown to the point where very low utilization levels were common. Thus, there was enough processing headroom to add a hypervisor and run multiple workloads on a single server. IT departments could point to better server utilization.
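The arithmetic behind that consolidation argument can be sketched in a few lines of Python. The utilization figures here are purely illustrative assumptions, not measured data:

```python
def consolidation_ratio(avg_util_pct: int, target_util_pct: int) -> int:
    """Rough estimate of how many lightly loaded servers can share one
    virtualized host, assuming workloads are CPU-bound and simply additive."""
    return target_util_pct // avg_util_pct

# Servers idling at 10% average utilization, consolidated toward a 70% target:
print(consolidation_ratio(10, 70))  # prints 7
```

In practice, memory, I/O, and correlated load peaks all cut into this number, but the back-of-the-envelope ratio is why the cost case was so compelling.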

Lesson 4: Multiple Workloads

In server virtualization, the key insight is treating an application as a workload. A server with a hypervisor can run multiple independent workloads, which allows for better utilization of the server and much greater flexibility, agility, and responsiveness in managing those workloads. In routing, we have seen many attempts to run multiple control plane workloads trying

Lesson 3: Compartmentalize Change

The hypervisor was a technology that could be added to the existing mix and made everything work better. The applications, the OS, and the hardware were all unchanged after the hypervisor was introduced. As a result, the focus could be on prioritizing which workloads to virtualize, enabling a gradual introduction of the technology.

Lesson 2: Embrace Commodity Hardware

x86 processors from Intel and AMD were used by a range of server ODMs. Moore's Law held, and these chips grew steadily in raw processing power, which greatly diminished the need for custom silicon. Custom silicon had been common – think of Sun servers running Sun's own SPARC processors. Commodity should not be viewed as

5 Lessons for virtual routing from server virtualization

In tomorrow’s webinar hosted by IHS Markit’s Michael Howard, we will discuss routing virtualization in the context of cloud native NFV. One of the key points is that network operators want the same benefits that data centers saw from server virtualization. I want to take the next few blog posts to discuss what lessons

Thinking about the cost trade-offs of NFV

We are getting our presentation ready for a webinar on September 12 hosted by IHS Markit, “Exposing the Cost Trade-offs of Cloud Native NFV,” featuring Michael Howard, IHS Technology Fellow. The webinar will focus on the impact of virtualization on network operators. Volta’s portion will use two routing use cases to talk about why virtualization

TCO: Service Velocity and Agility

In our recent blog posts, we have looked at different aspects of Total Cost of Ownership (TCO). CapEx spending gets the headlines, and for good reason. Flattening revenues, driven by lower revenue per bit and coupled with explosive growth in bandwidth demand, have made it difficult for carriers to achieve the industry’s standard goal of 15%–20% capital intensity.
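Capital intensity itself is a simple ratio: CapEx divided by revenue. A minimal sketch, with purely illustrative dollar figures rather than actual operator data:

```python
def capital_intensity(capex: float, revenue: float) -> float:
    """CapEx expressed as a percentage of revenue."""
    return 100.0 * capex / revenue

# Illustrative: $17B of annual CapEx against $100B of revenue.
print(round(capital_intensity(17e9, 100e9), 1))  # prints 17.0
```

When revenue is nearly flat, holding this ratio inside the 15%–20% band is what constrains how much operators can spend upgrading their networks.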

TCO: Capital Intensity

In our last post, we discussed how service provider revenues are massive but have settled into a low growth rate of around 1% per year. How does this affect CapEx? Network operators drive new sources of revenue by investing in their networks. Service provider CapEx is a source of intense interest because