By Yevgeniy Sverdlik
Ethernet is here to stay, according to a panel of analysts speaking at the Ethernet Technology Summit in San Jose earlier this year. And several market characteristics suggest that further development and mass-scale adoption are all but guaranteed.
“From a bandwidth perspective, there is a thirst for consumption,” says Ray Mota, managing partner at ACG Research. Users “don’t care about network complexity”, he adds. “They want it to be like dial tone. They want to pick up the phone and be able to do something.”
Meanwhile, Jag Bolaria, senior analyst at the Linley Group, says that besides user-induced factors (digital video and mobile carrier traffic), the thirst for bandwidth is driven by data centres. And both mobile carriers and data centre operators are looking to Ethernet as the technology that can satisfy this thirst.
“They all kind of intersect through Ethernet,” Bolaria says. “At some point, it all gets switched somewhere, and that’s where you start to need 10Gb ports and line cards. So, we’re seeing 10Gb port count increasing. State-of-the-art stuff is sometimes 48 by 10Gb Ethernet ports.”
The next stage is aggregation of that capacity, creating the need for 40Gb and 100Gb Ethernet.
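The aggregation pressure Bolaria describes can be sketched with some back-of-envelope arithmetic. The uplink counts and speeds below are illustrative assumptions, not figures from the article; only the 48 x 10Gb port configuration comes from the text.

```python
# A 48 x 10GbE top-of-rack switch presents up to 480 Gb/s of downstream
# capacity, so the aggregation uplinks need 40GbE or 100GbE ports to keep
# the oversubscription ratio reasonable. (Uplink choices are assumptions.)

ACCESS_PORTS = 48          # 10GbE ports per switch (from the article)
ACCESS_SPEED_GBPS = 10

downstream_gbps = ACCESS_PORTS * ACCESS_SPEED_GBPS  # 480 Gb/s

def oversubscription(uplink_count, uplink_speed_gbps):
    """Ratio of downstream access capacity to uplink capacity."""
    return downstream_gbps / (uplink_count * uplink_speed_gbps)

print(oversubscription(4, 10))    # 12.0 -- four 10GbE uplinks
print(oversubscription(4, 40))    # 3.0  -- four 40GbE uplinks
print(oversubscription(2, 100))   # 2.4  -- two 100GbE uplinks
```

With only 10GbE uplinks the switch is 12:1 oversubscribed; moving the uplinks to 40GbE or 100GbE brings the ratio back into a workable range, which is the aggregation need described above.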
Key drivers of Ethernet development
There is little doubt that the development of virtualisation and cloud computing – the first being at the core of the data centre and the second having its core in the data centre – is acting as a key driver for Ethernet technology development.
Frank Berry, chief executive of IT Brand Pulse, surveyed more than 100 data centre managers to find out what is driving the adoption of 10Gb Ethernet in data centres. More than half said it was virtualisation.
Virtualisation has replaced hardware as the centre around which the rest of the data centre architecture is built. One survey participant, for example, had just deployed 200 identically configured servers, each with 10 Ethernet ports. “Server virtualisation was top-of-mind for him,” says Berry, who sees this operator as the ideal candidate for 10Gb Ethernet.
Primary Global Research vice-president Rajan Varadarajan has noticed a jump in 10Gb product revenue within the enterprise switch market. Besides virtualisation, he attributes the jump to an upswing in desktop upgrades. “As desktops get upgraded to 1Gb ports, access switches have to be upgraded as well. Because of that, the access-to-aggregation link has to be upgraded to 10Gb. That’s where you see growth for 10Gb.”
New carrier funding
According to Mota, as the economy begins to show signs of improvement, and loans from the US government’s Troubled Asset Relief Programme make their way into the pockets of credit unions, more money is available to carriers for new equipment. “Ethernet forces a lot of the carriers to say: ‘You know what? I really didn’t know if I trusted that technology, but maybe I should consider [it] because it may help some operational and [capital] cost’,” says Mota.
The technology, it seems, is enjoying more acceptance from carriers after a period of hesitation, for several reasons. “They were offering TDM [time-division multiplexing] at an 85% premium,” says Mota. “Now, they’re going to offer something that gives 100 times more bandwidth at a [much] cheaper price.”
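Mota's two figures can be combined into a rough cost-per-bit comparison. The baseline price and the Ethernet discount below are assumed purely for illustration; the article gives only the 85% TDM premium and the "100 times more bandwidth" multiplier.

```python
# Illustrative cost-per-bit sketch (assumed prices, not from the article):
# TDM is sold at an 85% premium over a notional baseline, while Ethernet
# offers 100x the bandwidth at -- assumed here -- half the TDM price.

baseline = 100.0                  # arbitrary monthly price, 1 unit of bandwidth
tdm_price = baseline * 1.85       # 85% premium (from the article)
eth_price = tdm_price * 0.5       # "much cheaper" -- 50% is an assumption
eth_bandwidth = 100               # 100x the bandwidth (from the article)

tdm_cost_per_unit = tdm_price / 1
eth_cost_per_unit = eth_price / eth_bandwidth
print(tdm_cost_per_unit / eth_cost_per_unit)  # 200.0x cheaper per unit of bandwidth
```

Even under these conservative assumptions, the per-bit price gap is two orders of magnitude, which is the disparity driving the carrier shift described below.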
This disparity between service-delivery cost and bandwidth is driving Ethernet into carrier data centres, which are observing demand from their enterprise customers and are realising that many of their problems can be solved if they transition their backhaul networks onto Ethernet. “They had to do it because the enterprises were forcing them to. And if they didn’t offer it, they were going to go somewhere else,” says Mota.
Remaining technological barriers
Despite these favourable market forces, technological barriers to mass-scale adoption of 40Gb and 100Gb Ethernet still exist. “If you’re processing 40Gb and 100Gb of data, you need memory bandwidth to support 40Gb and 100Gb, which is a problem yet to be addressed,” says Bolaria. “Folks are looking at serial memory buses to be able to move data back and forth between the processor and memory fast enough,” he explains. Processors pose another challenge. Currently, a typical set-up will use three or four processors to handle 100Gb. “And then you have to do traffic management,” says Bolaria.
“Sometimes it can be on the same device. Invariably, what we find is that the first designs tend to be using FPGAs [field-programmable gate arrays].” In addition to these challenges, there is also the need for new modelling tools. “Things like SPICE [Simulation Program with Integrated Circuit Emphasis] models and so on start to break down at 40Gb, so you need to actually start developing new tools for simulating and modelling those designs,” says Bolaria.
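The memory-bandwidth problem Bolaria raises follows directly from line-rate arithmetic. The sketch below uses standard Ethernet frame-overhead figures (preamble and inter-frame gap) and the common assumption that each packet crosses the memory bus at least twice; none of these specific numbers appear in the article.

```python
# Worst-case packet rate at 100 Gb/s: minimum-size 64-byte Ethernet frames
# occupy 84 bytes on the wire (8B preamble + 12B inter-frame gap), so a
# line-rate device must handle roughly 148.8 million packets per second.

LINE_RATE_BPS = 100e9
WIRE_BYTES = 64 + 8 + 12   # frame + preamble + inter-frame gap

packets_per_sec = LINE_RATE_BPS / (WIRE_BYTES * 8)
print(f"{packets_per_sec / 1e6:.1f} Mpps")            # 148.8 Mpps

# Memory bandwidth just to write each frame on receive and read it back
# on transmit (ignoring descriptors and table lookups):
mem_bw_gbps = 2 * packets_per_sec * 64 * 8 / 1e9
print(f"{mem_bw_gbps:.0f} Gb/s of memory bandwidth")  # 152 Gb/s
```

Even this simplified model demands more memory bandwidth than the link itself carries, before any buffering, queueing or traffic management is counted, which is why serial memory buses and FPGA-based first designs come up.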
This article first appeared on DatacenterDynamics