Moore’s (Traffic) Law

Pop quiz time – and you get +100 geek points if you get this one right.

What was the main difference between a 386SX and a 386DX?

If you answered (without using Google) the width of the data bus (16-bit on the SX, 32-bit on the DX), then award yourself 100 geek points and a pat on the back.

How, you are asking, is this relevant to Moore’s Law? Well, if you recall, Moore’s Law in layman’s terms says that processing power doubles approximately every two years. A little-known corollary (little known because I just made it up) should be that traffic – data – on the network also increases significantly along with processing power. Call it Moore’s Traffic Law.

A variety of industry analysts and experts have been predicting such growth for years now.

"We expect the Ethernet Switch market to experience two significant years of market growth in 2013 and 2014 from the migration of servers towards 10 Gigabit Ethernet," said Alan Weckel, Senior Director of Dell'Oro Group. "We believe that in 2013, most large enterprises will upgrade to 10 Gigabit Ethernet for server access through a mix of connectivity options ranging from blade servers, SFP+ direct attach and 10G Base-T.

-- Data Center to Drive Ethernet Switch Revenue Growth through 2016, According to Dell'Oro Group Forecast 

Back in 2007, an IEEE presentation also described this growth, attributing it in part to, you guessed it, Moore’s Law:

Global IP traffic has increased eightfold over the past 5 years, and will increase fourfold over the next 5 years. Overall, IP traffic will grow at a compound annual growth rate (CAGR) of 32 percent from 2010 to 2015.

Server I/O bandwidth generators

  • Moore’s Law processing improvements
  • Data center virtualization
  • Networked storage
  • Clustered servers
  • Internet applications (e.g. IPTV, VoIP, Web2.0, Finance)

-- IEEE presentation, April 2007: http://www.ieee802.org/3/hssg/public/apr07/hays_01_0407.pdf
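
As a quick sanity check on those figures (my own back-of-the-envelope arithmetic, not from the presentation), compounding 32 percent annually for five years does land right around fourfold, and an eightfold increase over five years works out to roughly a 52 percent CAGR:

```python
# Back-of-the-envelope check of the quoted traffic-growth figures.

cagr = 0.32
years = 5

# 32% compounded annually for five years -> total growth factor
growth = (1 + cagr) ** years
print(f"32% CAGR over {years} years = {growth:.2f}x")  # ~4.01x, i.e. "fourfold"

# Inverse: what CAGR does "eightfold over the past 5 years" imply?
implied_cagr = 8 ** (1 / years) - 1
print(f"8x over {years} years implies a CAGR of {implied_cagr:.1%}")  # ~51.6%
```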

Interestingly, core networking bandwidth doubles on a shorter cycle than Moore’s Law’s 24-month interval, according to one industry expert:

The presenters at the conference made a compelling case that server IO doubled every 24 months, while core networking doubled every 18 months. Server bus architectures must also mature to take advantage of the high bandwidth interconnect. This led to the idea to create 100Gb for the core (between switches) and 40Gb for the distribution/aggregation (pedestal/rack/blade servers to switches). As for the uses for these speeds, it is the next generation of servers, which are characterized by dense computing and high utilization through virtualization, which will use 40Gb, and 100Gb will enable the success of 10Gb servers.

-- 40Gb and 100Gb Ethernet status and outlook (March 2010), Stuart Miniman 
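
To see why that gap in doubling periods matters, here’s a quick sketch of my own using the 24- and 18-month figures from the quote: over six years, server I/O doubling every 24 months grows 8x, while core networking doubling every 18 months grows 16x – the core pulls away from the servers it aggregates.

```python
# Growth factor after t months for a quantity that doubles every d months:
# factor = 2 ** (t / d)

def growth_factor(months: float, doubling_period: float) -> float:
    """Compound growth for a fixed doubling period."""
    return 2 ** (months / doubling_period)

horizon = 72  # six years, in months
server_io = growth_factor(horizon, 24)  # doubles every 24 months -> 8x
core_net = growth_factor(horizon, 18)   # doubles every 18 months -> 16x
print(f"Server I/O: {server_io:.0f}x, core networking: {core_net:.0f}x")
```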

So how does this all relate to the difference between a 386SX and a 386DX? Well, if you were a geek back when these models were popular, you generally built your own desktop. And when you built that desktop, you had to choose a motherboard. If you could afford it, you got the DX, because the wider data bus made a noticeable difference; it mattered to application performance because one of the primary bottlenecks in a computer is – you guessed it – the data bus. It’s equivalent to having a very fast car, but being on the 101 at rush hour. This is relevant to today’s data center networks because one of the primary data center bottlenecks is the “data bus” speed: the network.
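
To put rough numbers on the analogy (a sketch with illustrative figures of my own – actual 386 bus timings varied with clock speed, chipset, and wait states), halving the bus width at the same clock halves the peak transfer rate:

```python
# Peak bus throughput = (bus clock / clocks per transfer) * bytes per transfer.
# Illustrative assumption: a zero-wait-state 386 bus cycle takes 2 clocks.

def peak_mb_per_s(clock_mhz: float, bus_bits: int, clocks_per_transfer: int = 2) -> float:
    transfers_per_s = clock_mhz * 1e6 / clocks_per_transfer
    return transfers_per_s * (bus_bits // 8) / 1e6

print(f"386DX, 32-bit bus @ 33 MHz: {peak_mb_per_s(33, 32):.0f} MB/s")  # ~66 MB/s
print(f"386SX, 16-bit bus @ 33 MHz: {peak_mb_per_s(33, 16):.0f} MB/s")  # ~33 MB/s
```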

WHAT THIS MEANS for the NETWORK

With the growth of both server and desktop virtualization, demand for high-performance applications, increasing consumption of video, consumerization, and increasing connectivity, we’re seeing bandwidth challenges in the core data center network. Where 1 GbE between services in the data center used to suffice, we’re now seeing a need for 10 GbE. And when servers and desktops start leveraging those fatter pipes, we see demand for greater capacity at aggregation points throughout the network, like the application delivery tier.
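
A simple oversubscription calculation shows why fatter server pipes ripple up to the aggregation tier (the port counts here are hypothetical, not from any particular product):

```python
# Oversubscription at an aggregation switch:
# ratio = total downstream (server-facing) bandwidth / total upstream (uplink) bandwidth.

servers, server_gbps = 48, 10   # 48 servers at 10 GbE each
uplinks, uplink_gbps = 4, 40    # 4 x 40 GbE uplinks

downstream = servers * server_gbps  # 480 Gbps
upstream = uplinks * uplink_gbps    # 160 Gbps
print(f"Oversubscription: {downstream / upstream:.0f}:1")  # 3:1

# The same 48 servers at 1 GbE needed only 48 Gbps upstream to be
# non-blocking - the jump to 10 GbE multiplies the aggregation demand tenfold.
```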

Adding pressure to legitimate traffic growth is the need for higher-capacity protection. With Internet data center firewalls failing to withstand the load of massive and increasingly diverse attacks, scalable, higher-performance security platforms are necessary to provide more comprehensive coverage. Without increased network capacity, data centers must manage the multitude of attacks through several point products, which increases the complexity, latency, and points of failure in the data center architecture.

This is unacceptable at a time when operational efficiency is required to manage constrained budgets and resources.

Switching, routing, application delivery. These critical data center components will need to increase their bandwidth capacity sooner rather than later to keep up with the growth of internet traffic (expected to quadruple by 2015 [1]) and the growing density of server virtualization within the data center (anticipated to grow by a factor of five between 2010 and 2015 [2]). Critical network infrastructure will need to heed Moore’s (Traffic) Law and dramatically increase its capacity to manage larger volumes of traffic – and soon.

The adoption of 40 GbE is ramping up as costs decline, and that means infrastructure must also step up and meet that demand – and comply with Moore’s (Traffic) Law.

 

[1] Gartner, Inc., “Top Ten Trends and How They Will Affect the Data Center,” David Cappuccio, December 2011

[2] Gartner, “From Virtual Machines to Cloud Computing,” Tom Bittman, December 2011



Published Feb 20, 2012