
Data Centers Are Getting Denser. But How Dense is Dense?

Oct 9, 2018
by Josh Anderson

DALLAS, TX – Data center operations and management is a wide-ranging topic, encompassing many salient issues — budgets, power, equipment, staffing, you name it. But lately, most conversations seem to hinge on one thing – cooling. So CapRE's Sixth Annual Texas Data Center Summit in Dallas featured a rousing panel titled Data Center Management & Operations: Trends in Power & Cooling, which dove deep into the perspectives of a trio of regional insiders. Below is a snippet of that conversation.

Moderator John Paulsen, Partner, Engineered Fluids and Founding Partner, Cool Data Centers: One of the big things that is going on with chips, GPUs and CPUs is that they keep getting hotter and hotter. With cooling generally being the biggest limiting factor in a data center, how do you see things going forward? How do you see the future around the cooling issue?

Steve Coon, Managing Principal, kW Mission Critical Engineering: There are a lot of restrictions out there as things are getting hotter and hotter, and more and more dense. Pushing air through a data center has its limitations. And refrigerant has its limitations. What we are seeing is a lot of movement toward rear-door heat exchangers. We're hearing more people say, hey, let's bring more water to the chip. And we're also hearing a lot more about immersion cooling. Which is a good thing, I think. But we're talking about trends, and the trend right now is definitely the rear-door exchanger. That's what we're seeing deployed most often.
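For a rough sense of what "bringing water to the chip" buys, the sketch below sizes the coolant flow a water loop would need at a few rack powers. It is a back-of-the-envelope illustration only: the 10-kelvin coolant temperature rise and the standard water properties are assumptions, not figures from the panel.

```python
# Back-of-the-envelope coolant sizing for a water-cooled rack
# (rear-door heat exchanger or direct-to-chip loop).
# Assumption: the coolant warms by 10 K as it absorbs the rack's heat.

CP_WATER = 4186.0   # specific heat of water, J/(kg*K)
RHO_WATER = 997.0   # density of water, kg/m^3

def water_flow_lpm(heat_load_w: float, delta_t_k: float = 10.0) -> float:
    """Liters per minute of water needed to absorb heat_load_w watts
    at a coolant temperature rise of delta_t_k kelvin."""
    mass_flow = heat_load_w / (CP_WATER * delta_t_k)  # kg/s
    vol_flow = mass_flow / RHO_WATER                  # m^3/s
    return vol_flow * 1000.0 * 60.0                   # L/min

if __name__ == "__main__":
    for kw in (10, 20, 30):
        print(f"{kw} kW rack -> ~{water_flow_lpm(kw * 1000):.0f} L/min at a 10 K rise")
```

At a 10-kelvin rise, even a 30-kilowatt rack needs only about 43 liters per minute of water, which is part of why rear-door exchangers and direct liquid cooling scale where air struggles.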

Akhil Docca, Director, Future Facilities, Inc.: And I definitely think that the densities are driving towards liquid cooling. Part of our business is working with the actual hardware designers on the electronics side, so we are seeing very similar things.

John Paulsen, Partner, Engineered Fluids and Founding Partner, Cool Data Centers

Paulsen: So when you say that things are getting denser, what kind of numbers are you talking about at the top of the rack?

Docca: Well we are hearing 25, 30, 35 kilowatts per cabinet.

Paulsen: I was just having a conversation with a buddy who works for a chip design company, and he has been asked to design a 68-kilowatt rack. How would you guys propose to cool that rack?

Coon: The question would be how many of them are going to be in the room. So if there's one of them in the room, then that's not a problem. If there's a cluster of them and it's 10% of the data hall, then that's not a problem. If you start taking 50% of the data hall with 68-kilowatt racks, you're going to see something you haven't seen before. How would I propose to cool it? Because it's something we haven't seen before, it'd be a solution that we haven't deployed yet. So that would definitely get some people thinking.
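To see why a 68-kilowatt rack gets people thinking, it helps to run the standard sensible-heat rule of thumb for air (BTU/hr = 1.08 × CFM × ΔT°F). The sketch below is illustrative only; the 20-degree-Fahrenheit supply-to-return temperature difference is an assumed, fairly typical value, not one cited on the panel.

```python
# Rough airflow needed to cool a rack with air alone, using the
# sensible-heat rule of thumb: BTU/hr = 1.08 * CFM * delta_T(F).
# Assumption: a 20 F rise from supply to return air.

W_TO_BTU_HR = 3.412  # watts to BTU per hour

def required_cfm(heat_load_w: float, delta_t_f: float = 20.0) -> float:
    """Cubic feet per minute of air needed to carry away heat_load_w watts."""
    btu_hr = heat_load_w * W_TO_BTU_HR
    return btu_hr / (1.08 * delta_t_f)

if __name__ == "__main__":
    for kw in (10, 30, 68):
        print(f"{kw} kW rack -> ~{required_cfm(kw * 1000):,.0f} CFM")
```

At that delta-T, a single 68-kilowatt rack calls for on the order of 10,000 CFM, several times what a typical rack footprint can realistically move, which is exactly where the conversation turns to liquid.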

Paulsen: So when you say high density, what are you guys looking at?

Coon: We see pockets of high density. So a high-density data center is 250 watts per square foot. Or we will see a high-density cluster of 30-kilowatt racks, as you were indicating earlier. That's the high density that our customers are asking us about.

Docca: I concur. What we are seeing is that people are moving from legacy IT hardware to cloud-based architectures, with heavy storage demand and things like that, especially in healthcare. So they'll get a cluster of five racks or so, at twenty-five kilowatts per cabinet. And the biggest challenge I think they're having is that it breaks their tried-and-true rules: all of the assumptions they've made about cooling have to adapt to the change.
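As a rough illustration of why a cluster like that breaks the tried-and-true rules, the sketch below converts Docca's five-rack example into a per-square-foot figure and compares it with the 250 watts per square foot Coon cited. The 25 square feet of gross floor area per rack (cabinet plus its share of the aisles) is an assumed layout figure, not one from the panel.

```python
# Comparing a small high-density cluster with a 250 W/sq-ft design figure.
# Assumption: ~25 sq ft of gross floor area per rack (cabinet + aisle share).

SQFT_PER_RACK = 25.0  # assumed gross area per rack

def cluster_density_w_per_sqft(racks: int, kw_per_rack: float) -> float:
    """Average watts per square foot across the cluster's own footprint."""
    total_watts = racks * kw_per_rack * 1000.0
    return total_watts / (racks * SQFT_PER_RACK)

if __name__ == "__main__":
    density = cluster_density_w_per_sqft(racks=5, kw_per_rack=25.0)
    print(f"5 racks @ 25 kW -> ~{density:,.0f} W/sq ft "
          f"(vs. ~250 W/sq ft for a high-density hall)")
```

Even averaged over its own footprint, the cluster runs around 1,000 watts per square foot, four times the number Coon called high density for a whole hall.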
