Modern organizations rely heavily on computing services, most of which are hosted in data centers. Data center utilization has grown by leaps and bounds with the addition of cloud computing and virtualization technologies. However, this growth has doubled electricity consumption and the resulting heat generation.
Nowadays, power densities in data centers range from 540 W/m2 to 2160 W/m2, with facilities drawing on the order of 10 MW. Moreover, the adoption of cloud computing has increased server utilization from 10 to 70 per cent, which has exceeded the available power density and airflow capacity at the rack level, leading to power-resilience and thermal challenges.
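To put those figures in perspective, a quick sanity check can relate the quoted power densities to the 10 MW total. The arithmetic below is a sketch: the 10 MW figure and the density range come from the text, while the implied floor areas are simply derived from them.

```python
# Implied floor area for a 10 MW load at the quoted power densities.
TOTAL_POWER_W = 10e6  # 10 MW, per the figures in the text

for density_w_m2 in (540, 2160):
    area_m2 = TOTAL_POWER_W / density_w_m2
    print(f"{density_w_m2} W/m^2 -> about {area_m2:,.0f} m^2 of floor area")
```

The same 10 MW load thus fits in roughly a quarter of the floor space at the high end of the density range, which is why heat removal, not floor area, becomes the limiting factor.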
Conventional data centers and their cooling systems are designed around an average power density per square foot, which in practice is not very effective. The heat actually generated varies at the rack or row level, depending on the equipment and its utilization. Efficient cooling therefore requires different airflow rates at different racks, which in turn demands a better understanding of the airflow when developing the cooling system. More often than not, cooling systems in legacy data centers are designed to over-cool, which is an expensive practice.
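The airflow a given rack needs can be estimated from a simple sensible-heat balance: the volumetric flow equals the heat load divided by the product of air density, specific heat, and the allowable temperature rise across the rack. The sketch below illustrates this; the rack powers and the 12 K temperature rise are illustrative assumptions, not figures from this article.

```python
# Estimate required cooling airflow per rack from a sensible-heat balance:
#   V_dot = P / (rho * cp * delta_T)
# Approximate air properties near room temperature:
RHO = 1.2    # air density, kg/m^3
CP = 1005.0  # specific heat of air, J/(kg*K)

def required_airflow_m3s(power_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to remove power_w with a
    delta_t_k rise between rack inlet and exhaust."""
    return power_w / (RHO * CP * delta_t_k)

# Hypothetical rack loads, all with the same 12 K allowable rise:
for power_w in (4000, 10000, 20000):
    flow = required_airflow_m3s(power_w, 12.0)
    cfm = flow * 2118.88  # 1 m^3/s is roughly 2118.88 CFM
    print(f"{power_w / 1000:>5.1f} kW rack -> {flow:.2f} m^3/s ({cfm:.0f} CFM)")
```

Even this back-of-the-envelope balance shows why a uniform, per-square-foot airflow budget breaks down: a 20 kW rack needs five times the airflow of a 4 kW rack at the same temperature rise.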
Additionally, utility costs, regulatory requirements and the need to reduce carbon emissions are forcing data center designers to adopt cost-effective methodologies for efficient thermal management.
Since data centers must operate 24/7 throughout the year, even a brief standstill can lead to huge expenses. The key is to balance the IT infrastructure against the available thermal management system in order to maximize data center capacity and prevent downtime due to thermal issues. The last few decades have therefore seen a great deal of research into efficient operating conditions for uninterrupted operation and cost-effective thermal management. Airflow has been identified as a deciding factor in data center performance, demanding a comprehensive understanding of its characteristics.
CFD for Thermal Management:
The conventional approach to designing thermal management systems for data centers was to overprovision them to ensure uptime. Modern data centers, however, are being developed in line with ASHRAE guidelines, for maximum efficiency and the lowest operational expense.
In today’s energy-conscious climate, most companies turn to thermal simulations using computational fluid dynamics (CFD) to evaluate multiple thermal loading scenarios and optimize the placement of equipment within the available area. Because balancing the infrastructure against the cooling capacity requires complete information on the thermal dynamics, good insight into the airflow is essential, and CFD makes that airflow visible.
Simulation makes it possible to run what-if scenario tests to identify alternative options for sizing the cooling capacity, positioning IT equipment and improving airflow around critical regions. For a meaningful outcome, however, detail is vital. Treating a cabinet as a black box with a single total heat load yields only generalized results; it is essential to enter as much detail as possible for each piece of equipment in order to pinpoint the exact root of a problem.
Implementing CFD during the initial planning phase of the data center lifecycle greatly reduces the chance of problems arising during actual installation and significantly lowers the risk of making the wrong design choices.
Mehul Patel specializes in handling CFD projects for Automobile, Aerospace, Oil and Gas and building HVAC sectors.
(Image Source: data247.ru/2013/09/02/energosberezhenie-v-cod-s-ustarevshim-oborudovaniem-eto-legko/)