Let’s start with a short review of what stranded power is and what causes it. Stranded power is what you are left with when you run out of room in your data center server hall (also referred to as “white space”) before you run out of installed UPS capacity.
Invariably the culprit is over-optimistic projections for power density (expressed in kW/rack) during a data center’s design phase. A 2 MW data center designed around an average density of 8 kW/rack will house 250 racks. If those racks actually average 5 kW/rack, by the time you fill all available space you will be using only 1.25 MW of power and cooling, stranding the remaining 0.75 MW. At an average construction cost of $12M/MW, that is $9M worth of power and cooling infrastructure sitting idle. Needless to say, that is a poor use of corporate capital.
I have seen a large corporation struggle with this problem. Running out of space in its brand-new flagship data center prevented it from deploying the IT infrastructure required to support its growth plans. The company investigated how to fit more IT racks into the building and seriously considered converting staging and storage areas into additional white space, only to realize those spaces were needed for their original purpose. Its eventual solution was to procure space from colocation service providers.
Integrators and fabricators have long supplied pre-fabricated power distribution centers and modular air handlers, which are routinely used to accelerate the construction of hyperscale data centers. Some of these same fabricators are now turning their attention to server enclosure modules as a solution for recovering stranded power.
Server enclosure modules are designed to house server racks and tie into your data center’s stranded power and cooling capacity. In the example above, the stranded 0.75 MW at 5 kW/rack would support an additional 150 IT racks, so you would need enclosure modules sized to house those 150 racks.
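The arithmetic behind the example is simple enough to sketch in a few lines of Python. This is only an illustration of the article’s figures (2 MW hall, 8 kW/rack design density, 5 kW/rack actual, $12M/MW build cost); the function name and structure are my own.

```python
def stranded_power_kw(design_kw_total, design_kw_per_rack, actual_kw_per_rack):
    """Power left stranded once every rack slot the hall was built for
    is filled at the lower actual density."""
    racks = design_kw_total / design_kw_per_rack      # rack slots the hall provides
    used_kw = racks * actual_kw_per_rack              # power actually drawn
    return design_kw_total - used_kw

# Figures from the example: 2 MW hall, designed at 8 kW/rack, running at 5 kW/rack.
stranded_kw = stranded_power_kw(2000, 8, 5)
stranded_cost = (stranded_kw / 1000) * 12_000_000     # $12M per built MW
extra_racks = stranded_kw / 5                         # racks the stranded power could feed

print(f"Stranded power: {stranded_kw:.0f} kW")        # 750 kW
print(f"Stranded capital: ${stranded_cost / 1e6:.0f}M")  # $9M
print(f"Additional racks supportable: {extra_racks:.0f}")  # 150
```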
A typical enclosure will have these characteristics:
- Metal construction
- ~50’ long by 10’ wide (driven by shipping logistics)
- 20–25 server racks per enclosure
- Air handlers or in-row coolers that connect to the existing chiller plant
- Fire and early smoke detection
- Inert gas-based fire suppression
- Hot and cold aisle containment
- No raised floor; overhead power distribution and telecom cabling
- Security video cameras
- Intended for deployment onto concrete piers or a concrete slab
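Given the 20–25 racks per enclosure noted above, it takes only a couple of lines to size the deployment for the 150 extra racks in the running example. A minimal sketch (the helper function is mine, not a vendor tool):

```python
import math

def enclosures_needed(extra_racks, racks_per_enclosure):
    # Round up: a partially filled enclosure still has to be deployed.
    return math.ceil(extra_racks / racks_per_enclosure)

# 150 extra racks from the example, at the two ends of the 20-25 racks/enclosure range.
print(enclosures_needed(150, 25))  # 6 enclosures at the dense end
print(enclosures_needed(150, 20))  # 8 enclosures at the sparse end
```

So the example deployment lands somewhere between six and eight enclosure modules, depending on how densely each one is populated.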
From a financial standpoint, deploying server enclosure modules is a sound decision. The highest-cost items in a new data center build-out are the MEP (mechanical, electrical, and plumbing) components, and server enclosure modules reuse the stranded portion of the MEP infrastructure you have already paid for.
Because metal server enclosure modules are temporary structures, they can be deployed rapidly without the lengthy permitting process required to add on to a permanent building. Given their temporary, re-deployable nature, they can also be depreciated faster than a permanent structure, potentially yielding tax advantages for the operator.
Server enclosure modules offer a temporary answer to stranded power. If your rack power densities eventually climb to the original design target (8 kW/rack in the example above), you can re-deploy these structures elsewhere. A good second life for them is as a standalone temporary data center, which would require adding a modular power distribution center as well as a modular chiller plant.
If you are about to run out of space in your brand-new data center and you are not using all of your commissioned power infrastructure, you are well advised to consider server enclosure modules as a solution to your capacity problem. Granted, this solution only works if you have space available in close proximity to your existing data center building shell, but it is likely to cost less than moving to a colocation data center.
I’d love to hear from you regarding other potential solutions to data center stranded capacity problems. I can be reached at firstname.lastname@example.org.