Optimising cooling: keeping in control of costs

    Specifying cooling systems without considering their control methods can lead to issues such as demand fighting, human error, unplanned shutdowns and high operating costs. So how can data centres effectively optimise cooling efficiency?

    The choice of cooling architecture, including hot and cold air containment, is of paramount importance for minimising the operating expense of a data centre. However, an effective control system is also essential to achieving maximum energy efficiency and the lowest possible PUE.

    Achieving efficient use of electrical power is a major concern for data centre operators for both cost and environmental reasons. Next to the IT itself, the largest consumer of power in a typical data centre is the cooling system. Assuming a 1MW data centre with a PUE of 1.91 at 50% IT load, for example, the cooling system will consume approximately 36% of the energy used by the entire data centre (including IT equipment) and about 75% of the energy used by the physical infrastructure (without IT equipment) to support the IT applications.
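
    To see how these percentages follow from the stated assumptions, the arithmetic can be worked through explicitly. The short sketch below simply restates the example's figures (1MW design capacity, 50% IT load, PUE of 1.91, 36% cooling share) as a calculation:

    ```python
    # Worked example: energy breakdown implied by the figures quoted above.
    it_load_kw = 0.5 * 1000           # 50% IT load in a 1MW data centre
    pue = 1.91
    total_kw = pue * it_load_kw       # total facility draw: 955 kW
    infra_kw = total_kw - it_load_kw  # physical infrastructure (non-IT): 455 kW
    cooling_kw = 0.36 * total_kw      # cooling at ~36% of total: ~344 kW

    print(f"Cooling is {cooling_kw / infra_kw:.0%} of infrastructure power")  # ~76%
    ```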

    Given its large energy footprint, optimising the cooling system provides a significant opportunity to reduce energy costs. Three steps to achieving this goal are: selecting an appropriate cooling architecture; adopting an effective cooling control system; and managing airflow in the IT space.

    One key approach to reducing cooling plant energy is to operate in economiser mode whenever possible. When the system is in economiser mode, high-energy-consuming mechanical cooling equipment such as compressors and chillers can be turned off and outdoor air is used to cool the data centre. There are two ways to use outdoor air to cool the data centre:

    • Take outdoor air directly into the IT space, often referred to as ‘fresh air’ economisation

    • Use the outdoor air to indirectly cool the IT space

    In certain climates, some cooling systems can save in excess of 70% in annual cooling energy costs by operating in economiser mode, corresponding to more than a 15% reduction in annualised PUE (a worked illustration follows the list below). The latest white paper from Schneider Electric highlights some of the critical issues affecting the cooling process:

    • Cooling system capacity is always oversized, both to meet availability requirements and because data centres operate at less than total IT capacity

    • The IT load, in terms of equipment population and layout, frequently changes over time

    • Cooling system efficiency varies with factors other than IT load, such as outdoor air temperature, cooling settings and control approaches

    • Compatibility issues arise when cooling equipment from different vendors is installed
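
    To make the link between cooling savings and PUE concrete, here is an illustrative calculation reusing the example figures from earlier in the article (not data from the white paper). With these assumptions the reduction actually comes out larger than 15%, which the paper's more conservative baseline will account for:

    ```python
    # Illustrative only: mapping a cooling energy saving to a PUE reduction,
    # reusing the 1MW / PUE 1.91 / 36% cooling-share example from above.
    it_kw, pue = 500.0, 1.91
    total_kw = pue * it_kw                  # 955 kW facility draw
    cooling_kw = 0.36 * total_kw            # ~344 kW of cooling

    saving = 0.70                           # 70% annual cooling energy saving
    new_pue = (total_kw - saving * cooling_kw) / it_kw
    print(f"PUE: {pue} -> {new_pue:.2f} "
          f"({(pue - new_pue) / pue:.0%} lower)")
    ```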

    The paper warns that traditional control approaches, involving manual adjustments to individual pieces of equipment such as chillers and air conditioners, lead to uneven cooling performance, as adjustments made to one unit can create hot spots elsewhere in the data centre. Frequently, too, there is no visibility into the performance of the cooling system as a whole, a flaw often compounded by poor-quality or badly calibrated sensors and meters.

    The paper also recommends approaches for effective control systems, entailing the use of automatic controls for shifting between different operating modes such as mechanical mode, partial economiser mode and full economiser mode. Indoor cooling devices should be coordinated to work together under a centralised control system with the flexibility to change certain settings based on immediate requirements.

    The white paper proposes that, for maximum efficiency, control systems be organised into a hierarchy of four levels: device-level control, group-level control, system-level control and facility-level control.

    Device-level control involves the control of individual units such as chillers. Group-level control refers to the coordination of several units of the same type of device, typically from the same equipment vendor and governed by the same control algorithm. System-level control coordinates the operation of different cooling subsystems within a data centre, for example a pump and a CRAH (computer room air handler). Finally, facility-level control integrates all functions of a building into a common network, controlling everything from heating, ventilation, air conditioning and lighting to security, emergency power and fire-protection systems.
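
    As a rough illustration of how such a hierarchy might be composed in software, the sketch below wires each level to the one beneath it. The class and method names are hypothetical, invented for this example rather than drawn from the white paper:

    ```python
    # Hypothetical sketch of the four control levels described above.

    class Device:                      # device-level: one chiller, CRAH, pump...
        def __init__(self, name):
            self.name = name
        def set_output(self, pct):
            print(f"{self.name}: output set to {pct:.0%}")

    class DeviceGroup:                 # group-level: several units of one type
        def __init__(self, devices):
            self.devices = devices
        def set_output(self, pct):
            for d in self.devices:     # one shared algorithm drives all units
                d.set_output(pct)

    class CoolingSystem:               # system-level: coordinates subsystems
        def __init__(self, groups):
            self.groups = groups
        def respond_to_load(self, load_fraction):
            for g in self.groups:      # e.g. match pump and CRAH output to load
                g.set_output(load_fraction)

    class Facility:                    # facility-level: cooling plus power, fire...
        def __init__(self, cooling):
            self.cooling = cooling

    crahs = DeviceGroup([Device("CRAH-1"), Device("CRAH-2")])
    pumps = DeviceGroup([Device("Pump-1")])
    Facility(CoolingSystem([crahs, pumps])).cooling.respond_to_load(0.5)
    ```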

    Characteristics of effective control systems 

    According to the white paper, an effective control system should look at the cooling system holistically and comprehend the dynamics of the system to achieve the lowest possible energy consumption. The following lists the main characteristics of effective control systems:

    Automatic control: The cooling system should shift automatically between operating modes (mechanical, partial economiser and full economiser) based on outdoor air temperature and IT load to optimise energy savings. It should do so without introducing problems such as fluctuations in IT supply air temperature, component stress or downtime during transitions between modes. Another example of automatic control is dynamically matching cooling output to the cooling requirement, balancing airflow between server fan demand and the cooling devices (ie CRAHs or CRACs) to save fan energy under light IT load without human intervention.
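
    A highly simplified sketch of such mode selection might look like the following. The temperature thresholds are invented for illustration; a real controller would also weigh humidity, IT load and equipment limits:

    ```python
    # Minimal mode-selection sketch; thresholds are assumptions, not vendor values.
    def select_mode(outdoor_temp_c, supply_setpoint_c=20.0):
        if outdoor_temp_c <= supply_setpoint_c - 8:
            return "full economiser"     # outdoor air alone can meet the load
        if outdoor_temp_c <= supply_setpoint_c - 2:
            return "partial economiser"  # outdoor air pre-cools, mechanical trims
        return "mechanical"              # compressors/chillers carry the load

    for t in (8.0, 16.0, 27.0):
        print(f"{t} degC outdoors -> {select_mode(t)}")
    ```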

    Centralised control based on IT inlet: Indoor cooling devices (ie CRAHs or CRACs) should work in coordination with each other to prevent demand fighting. All indoor cooling devices should be controlled based on IT inlet air temperature and humidity to ensure the IT inlet parameters are maintained within targets according to the latest ASHRAE thermal guideline.
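
    The sketch below illustrates the idea in miniature; the worst-case-inlet rule and the simple proportional step are assumptions for illustration, not the white paper's algorithm:

    ```python
    # Assumed logic: all CRAHs respond to the same worst-case IT inlet reading,
    # so units step up or down together instead of fighting one another.
    def coordinate(crah_outputs, inlet_temps_c, target_c=24.0, gain=0.05):
        worst = max(inlet_temps_c)           # hottest IT inlet governs the group
        step = gain * (worst - target_c)     # simple proportional adjustment
        return [min(1.0, max(0.0, o + step)) for o in crah_outputs]

    print(coordinate([0.6, 0.6, 0.6], [23.5, 26.0, 24.2]))  # all rise together
    ```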

    Centralised humidity control with dew point temperature: IT space humidity should be centrally controlled by maintaining dew point temperature at the IT intakes, which is more cost effective than maintaining relative humidity at the return of cooling units.
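
    Dew point itself can be derived from dry-bulb temperature and relative humidity. The sketch below uses the widely published Magnus approximation (the constants shown are one common choice):

    ```python
    import math

    # Dew point via the Magnus approximation.
    def dew_point_c(temp_c, rh_pct, a=17.62, b=243.12):
        gamma = math.log(rh_pct / 100.0) + a * temp_c / (b + temp_c)
        return b * gamma / (a - gamma)

    # The same dew point corresponds to different relative humidities at different
    # dry-bulb temperatures, which is why a dew point target at the IT intake is
    # more stable than a relative-humidity target at the cooling-unit return.
    print(f"{dew_point_c(24.0, 50.0):.1f} degC")  # ~12.9 degC at 24 degC / 50% RH
    ```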

    Flexible controls: A good control system allows flexibility to change certain settings based on customer requirements. For example, a configurable control system allows the number of cooling units in a group to be changed, or evaporative cooling to be turned off at a certain outdoor temperature.

    Simplified maintenance: A cooling control system should make it easy to enter maintenance mode during service intervals. The control system may even alert maintenance personnel to abnormal operation and indicate where the issue lies.

    Next generation cooling technology

    Next generation economiser technology has now been developed by Schneider Electric with the aim of addressing some of the aforementioned issues. Launched at Data Centre World (London ExCeL), the Ecoflair indirect air economiser cooling solution uses a proprietary polymer heat exchanger technology to help optimise operating temperatures while keeping energy consumption to a minimum. According to John Niemann, Schneider Electric’s director of cooling product management, the technology is capable of reducing cooling operating costs by 60% compared with legacy systems based on chilled water or refrigerant technologies.

    Increased efficiency

    The company claims that, even when compared with other indirect air economiser systems, the overall efficiency of Ecoflair is between 15% and 20% better.

    This increased efficiency allows data centre operators to support a larger IT load with the same electrical infrastructure. Studies undertaken by Schneider Electric suggest this could mean 30% more IT capacity compared with typical cooling topologies such as chilled water or DX-based technology.

    By way of comparison, a 1MW data centre based in London using a traditional, efficient chilled-water cooling system would operate at a PUE of 1.14, whereas the same facility using an Ecoflair system would reduce PUE to 1.039. This not only results in annual financial savings of about £75,000 but also delivers greater efficiency and reduced carbon emissions, which are increasingly important in today’s environmentally conscious world.
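
    The quoted saving is consistent with simple arithmetic, assuming the PUE figures apply at a full 1MW IT load and an electricity price of roughly £0.085 per kWh (the tariff is an assumption; it is not stated in the article):

    ```python
    # Rough check of the ~£75,000 annual saving (tariff is an assumption).
    it_kw = 1000.0
    overhead_saved_kw = (1.14 - 1.039) * it_kw  # ~101 kW less facility overhead
    kwh_per_year = overhead_saved_kw * 8760     # ~885,000 kWh per year
    print(f"~£{kwh_per_year * 0.085:,.0f} per year")  # ~£75,000
    ```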

    “We have been targeting the Cloud and colocation customers with this product,” says Niemann. “Following feedback from data centres, Ecoflair was developed to address four key challenges: to improve Capex, free available power for IT, improve availability and increase flexibility.”

    Reducing Capex

    He explains that the reduction in overall Capex stems from the smaller electrical infrastructure required: both electrical distribution and backup power requirements shrink. Niemann suggests that the Capex savings could be as much as 6% when using the Ecoflair product.

    “In terms of helping to ensure availability, it is important that the equipment can be easily maintained while keeping systems up and running. We have introduced features to Ecoflair to facilitate this,” Niemann continues.

    Instead of a large, traditional heat exchanger, the design features small, modular segments which can be easily removed, maintained or replaced, thereby minimising downtime and inconvenience.

    Niemann explains that the tubular design prevents the fouling that commonly occurs with plate-style heat exchangers, minimising maintenance and the impact on performance over the life of the heat exchanger. In addition, the polymer is corrosion-proof, unlike designs that use coated aluminium, which corrodes when wet or exposed to the outdoor elements.

    “Many of the larger Cloud and colocation sites are worried about the life of the data centre, operational simplicity and how to maintain systems – having modular components, such as the heat exchanger, as part of the design helps offer them peace of mind that they can keep the data centre operational… The modular design of the Ecoflair also means you can allow a smaller space for service clearance, which is another benefit.”

    Niemann points out that flexibility is also important: “We are seeing centralised Cloud data centres, with Cloud capability being duplicated at The Edge in colocation environments. Because of the variability of different types of data centre and infrastructure projects, flexibility is required to adapt to these different site locations.”


    Available in 250kW and 500kW modules, Ecoflair is designed to offer this flexibility and enables customisation according to the cooling requirement and local conditions. The scalable approach makes Ecoflair particularly suited for colocation facilities rated between 1 and 5MW (250kW modules) and large hyperscale or cloud data centres rated up to 40MW (500kW modules). The modularity also allows the cooling to grow at the same rate as power upgrades, according to the needs of the data centre as IT loads expand.
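
    As a simple sizing illustration (the N+1 redundancy shown is an assumption; actual module counts depend on the site design):

    ```python
    import math

    # Illustrative module count for a given cooling load (N+1 assumed).
    def modules_needed(load_kw, module_kw, redundancy=1):
        return math.ceil(load_kw / module_kw) + redundancy

    print(modules_needed(5000, 250))   # 5MW colocation: 21 x 250kW modules
    print(modules_needed(40000, 500))  # 40MW hyperscale: 81 x 500kW modules
    ```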

    Indirect air economisation can be deployed in most environmental and climatic conditions, whatever the data centre’s location; the technology is typically suitable for at least 80% of all global locations.

    Schneider Electric points out that such adaptability helps data centre owners standardise the cooling architecture of their facilities around the world, providing repeatable designs that speed deployment and reduce operational and maintenance costs.

     
