
New England Insiders Talk Performance Indicators for Legacy Data Centers: One of Many Opportunities to Improve Operational Efficiency

Feb 12, 2018
by Josh Anderson

CAMBRIDGE, MA — Every couple of years, new data center standards are announced. Best practices change. But sometimes a standard has staying power. About a year and a half ago, the Green Grid released performance indicators for making data centers more efficient. At our recent New England Data Center Summit, we asked Daniel Bodenski, Managing Director at DAB Solutions, for his thoughts on whether and how these particular standards have impacted the data center arena.

Dan Bodenski, DAB

To begin, he provided a primer. “Well, we all know about the PUE,” he responded. “And that was developed by the Green Grid. I don’t know if it was… 10 years ago, it may have been longer. Most recently, the Green Grid has come up with another measurement-type standard, or really a tool, called the Performance Indicator score. Essentially, what this is is a way to measure server reliability and temperature, but also capital costs and operations costs, and put it all in a really simple graphical model.”
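For reference, PUE (Power Usage Effectiveness) is total facility energy divided by IT equipment energy, with 1.0 as the ideal. The sketch below shows how a PI-style score might pair PUE with a thermal-conformance measurement. The 27°C threshold reflects the upper end of the ASHRAE-recommended inlet range; the function names and example readings are illustrative assumptions, not the Green Grid’s official methodology.

```python
# Illustrative sketch only -- not the Green Grid's official PI methodology.
# PUE = total facility energy / IT equipment energy (lower is better, 1.0 is ideal).

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness."""
    return total_facility_kwh / it_equipment_kwh

def thermal_conformance(inlet_temps_c: list[float],
                        max_recommended_c: float = 27.0) -> float:
    """Fraction of server inlet readings within the recommended envelope.

    27 C is the upper end of the ASHRAE-recommended inlet range; treat the
    threshold as an assumption for this sketch, not a fixed rule.
    """
    in_range = sum(1 for t in inlet_temps_c if t <= max_recommended_c)
    return in_range / len(inlet_temps_c)

# Example: a facility drawing 1,500 kWh total against 1,000 kWh of IT load,
# with a handful of spot inlet-temperature measurements.
print(f"PUE: {pue(1500, 1000):.2f}")                            # 1.50
print(f"Thermal conformance: "
      f"{thermal_conformance([24.5, 26.0, 28.1, 25.2]):.0%}")   # 75%
```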

According to Bodenski, this indicator allows owners to choose the temperatures at which they want to run their servers, and to balance that against the cost and efficiency of running the data center. “There was recently a survey that went out; it was put together by the Creative Facility Design folks. Paul Bryn put this together, if you’re connected with him. He surveyed all of his LinkedIn contacts. He sorted them into subsets, and interestingly enough, 55% of that whole group was familiar with the concept, even though it’s still fairly new, dating from 2016.”

Next, Bodenski’s co-panelist, Indra Purkayastha, Founder and CEO at Purkay Labs, was asked to speak about the instrumentation behind tools such as the Performance Indicator score, and how it can extend the life of data centers. “I want to recall something I heard in a presentation earlier today about air-cooled data centers. These [facilities] still have a lot of data. And what I’ve noticed, making a portable instrument that measures temperature and humidity in data centers, is that in legacy data centers everything can be great on day one. However, a couple of months later, things change when new servers get added.”

Purkayastha said that the mix of kW per cabinet easily drifts over time. “You have to do the CFD [computational fluid dynamics modeling] every time you change,” he advised. “But people aren’t doing that. And that’s where I saw our types of applications come into play. And there are people [out there] doing the same thing. Go measure and benchmark your data after the cabinet mix has changed. Know what your densities are on the surface. If you don’t, then you might run your data center inefficiently.”
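As a rough illustration of the benchmarking Purkayastha describes, the sketch below compares measured per-cabinet kW against a day-one baseline and flags cabinets that have drifted. The cabinet names, readings, and 20% tolerance are all hypothetical.

```python
# Hypothetical sketch: flag cabinets whose measured kW density has drifted
# from a day-one baseline. All names, readings, and the 20% tolerance are
# illustrative assumptions.

DRIFT_TOLERANCE = 0.20  # flag anything more than 20% off baseline

baseline_kw = {"cab-01": 4.0, "cab-02": 5.5, "cab-03": 6.0}  # day-one survey
measured_kw = {"cab-01": 4.1, "cab-02": 7.2, "cab-03": 3.8}  # after new servers

for cabinet, base in baseline_kw.items():
    now = measured_kw[cabinet]
    drift = (now - base) / base
    if abs(drift) > DRIFT_TOLERANCE:
        print(f"{cabinet}: {base:.1f} kW -> {now:.1f} kW "
              f"({drift:+.0%}) -- re-benchmark airflow and cooling here")
```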

“Once you know that…you can refine your data center again, and at the end of the day the pay-off will be reduced energy costs,” Purkayastha concluded. “And your efficiency will go up. But a whole lot of legacy data centers are running inefficiently today, and that’s just money going to your utility instead of going to your bottom line. So the message that I’d like to give is that there are opportunities still in legacy, air-cooled data centers, and it’s all about measuring your data center and finding out how it’s running. Then you can fine-tune it.”
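To make that pay-off concrete: at a constant IT load, every point of PUE improvement saves IT load × ΔPUE in facility power. The load, electricity rate, and PUE figures below are assumed for illustration only.

```python
# Back-of-the-envelope savings from a PUE improvement at constant IT load.
# All inputs are assumed example values.

it_load_kw = 500                 # average IT load
pue_before, pue_after = 1.9, 1.6
rate_per_kwh = 0.12              # USD
hours_per_year = 8760

saved_kw = it_load_kw * (pue_before - pue_after)
annual_savings = saved_kw * hours_per_year * rate_per_kwh
print(f"Annual savings: ${annual_savings:,.0f}")  # about $157,680
```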

Be sure to check out a previous CapRE Insider Report covering an earlier part of this panel discussion: kW Mission Critical Engineering’s Chris Mahoney Outlines Steps for Addressing Legacy Data Center Challenges in New England
