A Tale Of Three Facilities (And The Technologies That Saved Them)

October 29, 2017

By Dr. Clifford Federspiel, President and CTO of Vigilent

Free cooling is the de facto choice for many data center operators and the people making financial decisions. And with good reason: it’s a cost-competitive use of available and sustainable resources.

But evidence shows that free cooling rarely meets its objective. When free cooling stops working, or isn’t working well, it’s often difficult to tell. In my experience, many of the free cooling systems deployed in data centers I have visited are not working, or are working poorly. In several cases, the facility operators are considering swapping out the free cooling infrastructure entirely.

In most cases, replacement isn’t necessary. Free cooling systems can deliver on their promise. The problem is a lack of visibility into how the systems are performing, and a lack of diagnostic capability for when they are not. In the deployments I observed, the free cooling performance problems were detectable, and fixable with existing infrastructure.

What’s needed is a layer of intelligence. This intelligence is best achieved with a combination of machine learning, Internet of Things (IoT) technology and smart software to make operations more transparent and keep systems fully optimized as changes occur to both IT load and infrastructure.

Specifically, a layer of sensors and monitoring, combined with self-learning, self-adapting software, provides the visibility and diagnostic capability needed to keep free cooling tuned, hold vendors accountable, and identify ways to reduce energy costs.

On a recent trip to visit data centers across Europe, I observed one system that was recirculating inside air even though the outside air was cool. The DX compressors were running continuously.

The facility operators believed that free cooling should be reducing the load on the compressors, but the compressors had been running at 100% since the free cooling system had been deployed, and the facility operators couldn’t determine exactly what was wrong.

At a second site, there was very little cold air blowing despite a large temperature difference between the outside and inside air.

There was also a big variation between the free cooling performance in two different ducted areas. Subsequent inspection revealed that one outside air duct was properly connected to the outside air, but the other was pulling from inside the wall.

At a third large European data center, performance varied widely between units. A quick inspection showed that both the outside and return air dampers were closed on the poorly performing unit, and that the outside air damper was in fact stuck in the closed position.

In all three cases, sensor data revealed what was wrong with the free cooling systems. Relatively simple fixes, pinpointed by monitoring the cooling infrastructure and analyzing performance data, restored all three facilities to full free cooling performance.
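Each of the three faults above can, in principle, be caught by simple rule-based checks on sensor data. The sketch below is purely illustrative, not Vigilent’s software; every field name and threshold is an assumption chosen for the example:

```python
# Illustrative rule-based free-cooling diagnostics.
# All sensor names and thresholds are assumptions for this sketch,
# not taken from any real product or facility.

def check_economizer(outside_temp_c, return_temp_c, compressor_load_pct,
                     margin_c=3.0, load_threshold_pct=90.0):
    """Case 1: outside air is usefully cooler than return air,
    yet the DX compressors are still running near 100%."""
    if (return_temp_c - outside_temp_c >= margin_c
            and compressor_load_pct >= load_threshold_pct):
        return "economizer not offloading compressors"
    return None

def check_airflow(outside_temp_c, inside_temp_c, supply_flow_cfm,
                  delta_t_c=5.0, min_flow_cfm=1000.0):
    """Case 2: a large outside/inside temperature difference,
    but very little cold air actually being delivered."""
    if (inside_temp_c - outside_temp_c >= delta_t_c
            and supply_flow_cfm < min_flow_cfm):
        return "low airflow despite free-cooling opportunity"
    return None

def check_damper(damper_cmd_pct, mixed_air_temp_c, return_temp_c,
                 outside_temp_c, response_margin_c=1.0):
    """Case 3: outside-air damper commanded open while outside air is
    cooler than return air, yet the mixed-air temperature still tracks
    the return air, suggesting the damper is stuck closed."""
    if (damper_cmd_pct > 50.0 and outside_temp_c < return_temp_c
            and abs(mixed_air_temp_c - return_temp_c) < response_margin_c):
        return "outside-air damper may be stuck closed"
    return None
```

For example, the first site’s symptoms (outside air at 10 °C, return air at 24 °C, compressors at 100%) would trip `check_economizer(10.0, 24.0, 100.0)`. Real fault detection would need filtering and persistence logic to avoid false alarms on transient readings, but the point stands: the signatures of all three failures were already present in the sensor data.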

All of these problems resulted from improper deployments, but none of the facilities had been able to discover the cause.

As free cooling technologies mature and integration issues become better understood, deployment issues may decline. But as technology continues to change and vendors come and go, having ongoing visibility and self-adapting systems will help facility operators keep pace with changes, ensuring maximum returns from free cooling investments and faster, more effective response when issues occur.

This article first appeared in Data Economy.