What Are Some Key Outcomes We Can Expect from AI for Data Centers?

May 2, 2018
by Josh Anderson

NEW YORK, NY — AI has a lot of implications and a lot of connotations, especially when you think about what it can do for the data center industry. At CapRE’s Seventh Annual New York Data Center Summit, we welcomed a trio of insiders and experts to talk about this buzzworthy topic and asked them: what are some key outcomes that we can address with AI in the data center market? Below we showcase each answer.

For more from this panel, check out an earlier CapRE Insider Report: What Does AI Mean to You? NY Data Center Insiders “Bring AI Down to the Human Level”

Rhonda Ascierto, Research Director for Data Centers & Critical Infrastructure Channel, 451 Research: What we are seeing with AI across all industries is that it’s the existing business issues where AI is first going to apply. And that’s also true for data centers. So we’re seeing AI mostly applied to operational challenges. Capacity planning is a really big one – being able to predict and forecast capacity based on historical data. We’re seeing that as one of the biggest applications for AI today.

It’s not “pure AI” though, because oftentimes, as we all know, in data centers there are step changes that have nothing to do with historical data. That’s getting a huge customer to come into your facility and having to increase capacity unexpectedly – and you’ve got to do it within the next two years, for whatever reason. Those kinds of scenarios are not part of the learned behavior that you’re getting from AI. They also require non-machine-learning-based models. So that’s what I’m seeing mostly – applications for capacity forecasting, machine-learning-based behavior applied alongside non-machine-learning-based models, so you can predict and better understand the costs, the performance, the requirements, the design configurations, and the equipment configurations for these big step changes. Do you agree, Scott?
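The capacity forecasting Ascierto describes – projecting future demand from historical data – can be sketched very simply. The example below fits a linear trend to hypothetical monthly IT-load figures and extrapolates it forward; a real deployment would use richer models and far more telemetry, so treat the numbers and the approach as purely illustrative.

```python
# Minimal sketch of capacity forecasting from historical data.
# All figures are hypothetical; production systems would use far
# richer ML models than a simple linear trend.

def fit_trend(series):
    """Ordinary least-squares fit of y = intercept + slope * t, t = 0..n-1."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    return y_mean - slope * t_mean, slope

def forecast(series, periods_ahead):
    """Extrapolate the fitted trend `periods_ahead` steps past the last point."""
    intercept, slope = fit_trend(series)
    return intercept + slope * (len(series) - 1 + periods_ahead)

# Hypothetical monthly IT load (kW) for one data hall.
load_kw = [620, 640, 655, 680, 700, 725]

# Projected load 12 months out -- useful for judging when the hall
# will hit its provisioned capacity.
print(round(forecast(load_kw, 12), 1))  # → 972.4
```

This is exactly the kind of model that breaks down for the "step changes" she mentions next: a surprise anchor tenant is not in the historical series, so no trend extrapolation will see it coming.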

Scott Noteboom, Founder & CEO, Litbit: I do. When you look at the “data center,” most of the folks that I’ve met here are in the investment and development space. Data centers traditionally have been pretty dumb. And what do you do when you build dumb things? You come up with a pricing metric – something like Tier I, II, III, IV – that has a number of nines of uptime against it, and the dumb data center definition tells you: how many boxes do I need to put together to produce this number?

And because these physical boxes are stupid, how many smart construction workers and how many smart operators do I need to bring in to comprehend all of the data that these stupid machines are producing? To produce a refined result on a piece of paper? That’s traditionally how we’ve done things. What AI is going to do is change that at every angle – and let’s look at the Uptime Institute’s definitions of Tier I, II, III, IV. Why are those defined by physical devices that are taped together, versus the software that is smartly used to operate them? So I’m going to propose that, in the difference between a Tier III and a Tier IV data center, AI is going to have a much bigger influence.

This means that software – more than any number of boxes you can deploy from a redundancy perspective – is going to change every metric: the gear we’re buying, the systems we’re integrating together, the expertise that’s required, and the approach we use to operate the sites. So the intention is to disrupt the entire life-cycle, from beginning to retirement.

Shane Painter, Director of IT Infrastructure, Operations, and Security, zColo by Zayo Data Centers: Exactly, I would agree with that. Clearly, the historical piece has been capacity planning and optimization around heating, cooling, and fluid-dynamics-based approaches. What’s exciting to me is what the future holds. We’re on the verge now where AI and machine learning have reached a commoditization inflection point, if you will, where this will begin to invade things like preventive maintenance and predictive analysis around component failures within HVAC units, pumps, or any number of mechanical systems within a data center.

So for me, my advice to the industry is that as we’re building new data centers, we should be constructing them in a way that allows for robust and diverse data points to be collected. Because it’s no longer just branch-circuit monitoring – now you may want straps on individual motors or fans or units. It really is about trying to think forward in terms of where we can best gather these data points, and that’s where these optimizations will come from in the future, I think.
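The predictive-failure analysis Painter points to can be sketched as a simple anomaly detector over per-component sensor data: flag any reading that drifts far from recent behavior. The window size, threshold, and motor-current samples below are all illustrative assumptions, standing in for the far richer models and telemetry a real facility would use.

```python
# Hypothetical sketch of per-component predictive monitoring:
# flag readings more than z_threshold standard deviations away
# from the mean of the preceding `window` readings.
from statistics import mean, stdev

def anomalies(readings, window=10, z_threshold=3.0):
    """Return indices of readings that deviate sharply from recent history."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Hypothetical motor-current samples (amps) from one HVAC fan motor,
# with a single spike of the kind a failing bearing might produce.
samples = [12.1, 12.0, 12.2, 11.9, 12.1, 12.0, 12.2, 12.1, 11.9, 12.0,
           12.1, 12.2, 15.8, 12.0, 12.1]
print(anomalies(samples))  # → [12]
```

This is why the instrumentation Painter calls for matters: without straps on individual motors and fans, there is no per-component series to run this kind of analysis against in the first place.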

Banner Photo (L-R): Rhonda Ascierto, Research Director for Data Centers & Critical Infrastructure Channel, 451 Research; Scott Noteboom, Founder & CEO, Litbit; Shane Painter, Director of IT Infrastructure, Operations, and Security, zColo by Zayo Data Centers
