What are the Most Common Upgrades for Efficiency in Ashburn?
by Josh Anderson
ASHBURN, VA — There are a lot of greenfield data centers in Northern Virginia, but almost every data center veteran in Loudoun County has worked in a legacy facility. Big firms are bound to operate a mixture of old sites and new sites. So when homing in on how to make a legacy data center both reliable and profitable, we wanted to hear which efficiency and reliability upgrades are most common in the Ashburn area.
We first heard from Mike Coleman, Vice-President for Global Data Center Operations at Yahoo. “We definitely have a diverse group of data centers,” he replied. “We’ve got some that are 20 years old, believe it or not, and we have some as new as a few months old. Over the past 7 years we have gone through a process called re-wire, where we went from over 40 data centers down to 8, focusing on large campuses, so that we could consolidate and focus on energy efficiency.”
“But we do have a few legacy data centers that, from our perspective, are in areas that are latency-sensitive, or require a certain metro or geo,” he continued. “Those are the only facilities where we would be looking at making any capital investment from an energy-efficiency standpoint. When you look at a 15- or 20-year-old data center, which at the time was breaking ground at 1.5 kW per rack, and the way that we operate the enterprise today, at 12-15 kW per rack, the areas that we look at for upgrade are really redistributing all of that redundancy and cooling and power back into a more simplified infrastructure.”
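Coleman’s figures make the scale of the problem concrete: taking the midpoint of today’s range, a modern rack draws roughly nine times what a legacy rack was designed for. A quick back-of-the-envelope check (the numbers are his; the calculation is ours, for illustration only):

```python
# Rough comparison of legacy vs. modern rack power density,
# using the per-rack figures Coleman cites.
legacy_kw_per_rack = 1.5            # typical 15-20 years ago
modern_kw_per_rack = (12 + 15) / 2  # midpoint of today's 12-15 kW range

density_increase = modern_kw_per_rack / legacy_kw_per_rack
print(f"Density increase: {density_increase:.1f}x")  # prints "Density increase: 9.0x"

# In other words, one modern rack draws what roughly nine legacy racks
# once did -- which is why legacy power and cooling distribution has to
# be redesigned rather than simply extended.
```

One modern rack pulling the load of nine legacy racks is exactly why, as Coleman says, the upgrade work is about redistributing redundancy, power, and cooling rather than adding capacity piecemeal.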
So, in other words, according to Coleman, the power and cooling are there. “The redundancy doesn’t need to be in the physical infrastructure,” he offered. “It’s been moved into the application layer. So we do look at things like containment, where we have to monitor for cooling, redundant utility feeds to the site, and A and B systems moving back to a single electrical system, so that we can redistribute power and cooling. Those are the areas we’ve focused on. It’s where we feel we need to maintain a legacy site.”
Next we checked in with David Fiedler, Datacenter Operations Design and Build Manager at Dropbox. “To piggyback on Mike’s point, if you’re going to be doing all of that stuff, at the end of the day what really matters, if you’re going to stay safe during that transition, is telemetry,” he began. “So from our perspective, and I’m sure from Mike’s as well, as we’re moving onto new-generation servers, new power strips, smart strips, new control systems, things like that, what we are moving away from is legacy closed-loop systems.”
And what are they moving toward, exactly? “We’re moving toward things that are transmitting data in fractions of a second as opposed to fractions of hours,” Fiedler explained. “And at the end of the day, those fractions of a second are required when you’re going to try to relax the envelope. As we do that, your run times are seconds. They’re not minutes. They’re not hours. So you’ve got to have the right telemetry to be able to do all of that.”
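The gap Fiedler describes between sub-second telemetry and legacy polling is easy to see in a small sketch. The code below is a hypothetical illustration, not anything from Dropbox’s stack: `read_sensor` is a stand-in for a real smart-strip or building-management query, and the intervals are assumed values chosen to show the contrast.

```python
# Hypothetical sketch of telemetry sampling resolution. A sub-second
# sampler sees a short thermal excursion in detail; a legacy system
# polling every few minutes may miss it entirely.

def read_sensor():
    """Stand-in for a real power/temperature reading (dummy value)."""
    return 21.7  # degrees C

def sample(interval_s, duration_s):
    """Collect one reading every interval_s seconds for duration_s seconds."""
    readings = []
    elapsed = 0.0
    while elapsed < duration_s:
        readings.append(read_sensor())
        elapsed += interval_s
    return readings

# During a 30-second event, 250 ms sampling yields 120 data points;
# a legacy 5-minute poll would, at best, land on it once.
fast = sample(0.25, 30)
print(len(fast))  # prints 120
```

With run times measured in seconds rather than minutes, as Fiedler notes, only the high-resolution trace gives operators enough data to safely relax the environmental envelope.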
Header image: Mike Coleman, Vice-President for Global Data Center Operations at Yahoo.