“Twin Sons of Different Mothers: Latency and Bandwidth…and How They Impact Your Data Center Strategy” – Guest Post by Jose Ruiz, Compass Datacenters
by Josh Anderson
Chances are your network is not ready to support that new public cloud initiative your company is planning, in terms of both speed and volume of data. Chances are it is also not ready to handle the inrush of packets that will come from accelerating Internet of Things (IoT) deployments. In other words, you likely have a latency problem and a bandwidth problem. And if it isn't apparent already, it soon will be to your users, who are going to tell you about it and ask what you're going to do about it.
Although the two terms are often used interchangeably, it is vital to understand the difference between latency and bandwidth. Bandwidth is the rate at which data can move through a communications path: a one gigabit (1 Gbps) connection, for example, can carry at most one gigabit of data per second. Latency is the delay before data arrives. Typically a function of distance, it is the time it takes data to travel between two points.
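The distinction is easiest to see in a worked example. The sketch below, using hypothetical numbers (a 1 Gbps link with 20 ms of one-way delay), shows that latency dominates small transfers while bandwidth dominates large ones:

```python
# Illustrative sketch, not from the article: total one-way transfer time
# is propagation delay (latency) plus serialization time (size / bandwidth).

def transfer_time_s(payload_bits: float, bandwidth_bps: float, latency_s: float) -> float:
    """One-way transfer time: latency plus time to push the bits onto the wire."""
    return latency_s + payload_bits / bandwidth_bps

ONE_GBPS = 1e9  # hypothetical 1 Gbps connection

# 10 megabit payload: the 20 ms of latency is two-thirds of the total time.
small = transfer_time_s(10e6, ONE_GBPS, 0.020)   # 0.030 s

# 10 gigabit payload: bandwidth dominates; latency is barely noticeable.
large = transfer_time_s(10e9, ONE_GBPS, 0.020)   # 10.020 s
```

A fatter pipe helps the second case but does almost nothing for the first, which is why latency-sensitive applications care more about proximity than raw capacity.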
With the proliferation of public cloud (AWS, Azure, etc.), Software-as-a-Service (such as Office 365), larger, packet-rich applications (video, for example) and the data volumes anticipated from the IoT, network considerations have vaulted to the top of the list when determining where a new data center should be located. This heightened emphasis on location reflects the desire to place information as close to the end user as possible. Closer proximity reduces the negative impact of latency on applications that must process data in real time to be usable, as in the case of IoT, and helps ensure the highest level of customer satisfaction for more consumer-oriented applications.
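Because latency is largely a function of distance, the benefit of proximity can be estimated from first principles. A rough sketch, assuming light in fiber travels at about two-thirds of its vacuum speed (~200,000 km/s) and ignoring switching and queuing delays:

```python
# Hypothetical back-of-envelope estimate of propagation delay vs. distance.
# Real-world latency will be higher due to routing, switching and queuing.

SPEED_IN_FIBER_KM_PER_S = 200_000.0  # assumed ~2/3 of the speed of light

def one_way_delay_ms(distance_km: float) -> float:
    """Minimum one-way propagation delay over fiber, in milliseconds."""
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1000.0

far = one_way_delay_ms(3000)   # data center 3,000 km from users: 15 ms each way
near = one_way_delay_ms(100)   # data center 100 km from users: 0.5 ms each way
```

No amount of added bandwidth can recover those 15 ms; only moving the data closer to the user can.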
IDC estimates that 50% of existing networks will be bandwidth-constrained as a result of the rapid increase in data volume from the IoT. This thirst for bandwidth will come not only from the information generated by IoT devices themselves, but from the capacity-consuming components that support them. The data flowing from private clouds will require large amounts of bandwidth. Further, container technologies such as Docker, which can support four to six times more server applications than a hypervisor running on an equivalent system, will correspondingly require more capacity than most operators currently have available.
At present, many data center operators utilize networks that are “telco-centric.” These structures rely on the public Internet as their conduit to the cloud, reflecting the traditional thought process of “the bigger the pipe, the better.” Before the emergence of the cloud, this mode of operation was, for the most part, sufficient to support the traffic any one data center was sending and receiving via the Internet.
The problem with the historical telco approach is the need to share. Even a 1 or 10 Gbps circuit ultimately terminates at a switch or router with an “oversubscribed” backplane. In the world of the cloud, with the changing nature of data and rising end-user expectations and requirements, even the biggest pipe typically cannot support the sheer number of organizations all using public cloud SaaS at the same time. Organizations relying on this traditional data center network model are therefore paying for connectivity whose real-world performance no longer justifies the cost.
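Oversubscription can be quantified with a simple ratio. The sketch below uses hypothetical figures (48 access ports of 10 Gbps each behind a single 100 Gbps uplink) to show how a fast per-tenant circuit still degrades once the shared backplane saturates:

```python
# Illustrative sketch of backplane oversubscription; port counts and speeds
# are hypothetical, not taken from any specific switch model.

def oversubscription_ratio(ports: int, port_gbps: float, uplink_gbps: float) -> float:
    """Ratio of total access-port capacity to shared uplink capacity."""
    return (ports * port_gbps) / uplink_gbps

def worst_case_share_gbps(ports: int, port_gbps: float, uplink_gbps: float) -> float:
    """Per-port throughput if every port transmits at line rate simultaneously."""
    return min(port_gbps, uplink_gbps / ports)

# 48 x 10 Gbps access ports sharing a 100 Gbps uplink:
ratio = oversubscription_ratio(48, 10, 100)    # 4.8:1 oversubscribed
share = worst_case_share_gbps(48, 10, 100)     # each "10G" port gets ~2.1 Gbps
```

This is why a dedicated, non-contended connection to a cloud provider can outperform a nominally larger shared Internet pipe.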
The rapid rise of public cloud offerings such as Amazon Web Services, Azure and Office 365 has made them, or offers the opportunity to make them, integral parts of the corporate network. In response, many providers now offer direct-connect capability to one or more of the cloud industry’s largest platforms; Equinix’s Cloud Exchange is an example of this type of solution. For many organizations, these products make it possible to bypass the public Internet and gain the throughput necessary to support demanding, bandwidth-intensive applications.
The need for more bandwidth also affects the hardware required to support it, specifically the switches that route traffic throughout the network. In a recent study, Crehan Research projected a 100% annual growth rate in sales of high-speed data center switches, which by 2017 will make 40 Gbps and 100 Gbps the new data center standards. Planning to increase the bandwidth of your data center(s) will also require you to assess the efficacy of your existing hardware and prepare for hardware refreshes more frequent than the traditional three-to-five-year cycle.
The changing nature of data, coupled with intensifying demand for its faster delivery and processing, can be expected to grow dramatically for the foreseeable future. These forces will drive requirements for lower latency and higher bandwidth. The corresponding result will be the expansion of data centers and “data center-like” capabilities into locations that heretofore have not been considered viable data center destinations; a movement away from the public Internet for cloud connectivity in favor of dedicated, non-contended connections; and shorter refresh intervals for network hardware. To address these needs, data center operators will be looking for vendors and partners with the agility to adapt quickly to changing circumstances. In short, the time for sitting on one’s laurels is becoming shorter and shorter.
About the Author: As Compass Datacenters’ VP of Operations, Jose provides interdisciplinary leadership in support of the company’s strategic goals. He began his career at Compass as its Director of Engineering, a role with responsibility for all site and sales engineering activities. Prior to joining Compass, he spent three years in various engineering positions at Digital Realty Trust, one of the largest data center companies in the world, where he was responsible for a global range of projects. In these positions Jose served as the company expert on CFD modeling and attained the LEED AP, Building Design and Construction, designation from the U.S. Green Building Council. For his exemplary efforts, he was named the company’s Sales Engineer of the Year in 2010. Before Digital Realty Trust, Jose was a pilot in the United States Navy, where he was awarded two Navy Achievement Medals for leadership and outstanding performance. He is a graduate of the University of Massachusetts with a degree in Bio-Mechanical Engineering.