How Soon will 70 KW Solutions be Standard?
by Josh Anderson
CAMBRIDGE, MA — A typical IT workload is about 10 kW per rack. For comparison, the average barbecue at an outdoor goods store puts out 35,000 BTUs per hour – roughly the same heat output as a 10 kW rack. With the emergence (and some would say an inevitable tsunami) of high-performance computing in data center environments, solutions rated at 40 kW or even 70 kW per rack are becoming more common. At our recent New England Data Center Summit, we asked some engineers whether they think these kinds of workloads will be typical of the data center of the future.
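The grill comparison is just a unit conversion: 1 kW of electrical load ends up as roughly 3,412 BTU/hr of heat. A quick sketch of the arithmetic behind the figures above (the comparison is illustrative, not a design calculation):

```python
# Rough conversion between BTU/hr (how grills are rated) and kW
# (how rack power draw is rated). 1 kW of heat ≈ 3,412 BTU/hr.

BTU_PER_HR_PER_KW = 3412

def btu_hr_to_kw(btu_per_hr: float) -> float:
    """Convert a heat output in BTU/hr to kilowatts."""
    return btu_per_hr / BTU_PER_HR_PER_KW

# A 35,000 BTU/hr barbecue dissipates about as much heat as a
# typical 10 kW rack...
print(round(btu_hr_to_kw(35_000), 1))   # ≈ 10.3 kW

# ...so a 70 kW rack is roughly seven grills' worth of heat.
print(70 * BTU_PER_HR_PER_KW)           # 238840 BTU/hr
```

This is why the cooling conversation dominates the high-density debate: every kilowatt drawn by the servers has to be removed as heat.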
First, we heard from Vali Sorell, President of Sorell Engineering, Inc., who said that yes, this is what you’ll see in the data center of the future. “But not in the next generation, or the generation after that, OR the generation after that,” he cautioned. “So I give credit to the end users who see a very strong need for high-performance compute, because that’s the kind of stuff that actually empowers that kind of development to get that many kW per cabinet. But the reason that it’s not ready for the next generation yet is that it is kind of expensive to implement.”
Sorell said that, in this case, the sticking point is the liquid itself. “It’s actually a very simple design. There’s no mechanical cooling required. In concept, it’s extremely easy. But the implementation is not, in terms of cost,” he explained. “It’s very expensive to put it together on day one, and you have to have a valid reason to go to such high density. Because for every high-density application like this, I can make the case for going to conventional air cooling. I think there is enough capacity left in the air cooling infrastructure that it could be mined for more opportunity and potential.”
Sorell predicted that, over the next couple of generations, as these leading-edge facilities bring high-performance computing to a more normal level, air-cooling systems will probably max out their true potential. “Once upon a time people thought 10–20 kW per cabinet was the peak,” he recalled. “Well, it’s not. There’s still a lot left to go. As long as there is still capacity to be utilized, people will be creative and they’ll find a way to do it at the lowest possible cost. Later on, when this has been done for a while and there’s more acceptance of liquid cooling, I think it’ll start percolating its way down to the masses – the normal, everyday end users who’ve been designing from the ground up. There is certainly enough incentive to do it in terms of efficiency.”
Next, we checked in with Bruce Edwards, President of CCG Facilities, who said that the future of high-performance computing is really a question of whether or not the user is driving it. “Are there applications, are there hardware deployments that require that sort of density?” he asked. “When there are, you’ll see them deployed. The other question is, not just can we do it technically, but can we do it cost-effectively? And that really relates to the market.”
Edwards cautioned that, at this point in time, high-performance computing on the order of 70 kW per rack is a very small segment. “It’s a specialized segment of the market, so you’re not going to see it widespread,” he concluded. “It’s going to slowly increase in breadth, but that’s going to be a slow process. And it may be that the IT technology changes before it becomes so widespread that this is the direction of the overall market. In any case, it has got to be cost-effective. So the capital investment and the skill required to operate that kind of facility are going to play into the equation too.”