HoMEDUCS's Unique Approach to Keeping Modular Data Centers Cool
Joe Milan | Jul 25, 2023
With the advancement of AI and 5G, concerns about privacy, and the continuing expansion of the internet of things, demand for edge computing keeps growing. Modular data centers have surged in popularity to meet that demand, since they can be deployed rapidly to remote areas and can supplement brick-and-mortar data centers.
However, one major issue for all data centers is their energy and water consumption, particularly for cooling, which can account for up to 40% of a facility's energy use. That consumption has made data centers the subject of front-page news.
In February 2023, it was revealed that a Google data center uses a quarter of an Oregon town's water, and according to a 2021 study from Virginia Tech, "One-fifth of data center servers' direct water footprint comes from moderately to highly water-stressed watersheds." Much of this water goes to evaporative cooling, a growing problem amid the historic drought conditions in the U.S. Southwest.
For modular data centers, cooling is a particularly pressing issue: their tight spaces and remote deployments typically require liquid-based immersion cooling with energy-intensive chillers, or evaporative cooling that draws on the local water supply.
In May, the U.S. Department of Energy's Advanced Research Projects Agency-Energy (ARPA-E) announced its Cooling Operations Optimized for Leaps in Energy, Reliability, and Carbon Hyperefficiency for Information Processing Systems (COOLERCHIPS) program, which funds projects to reduce the environmental impact of data centers by developing "highly efficient and reliable cooling technologies."
The fundamental goal of the program, according to Dr. Peter de Bock, the program director for COOLERCHIPS, is to find "a transformational path to more energy-efficient data centers and computing," part of the broader effort to cut carbon emissions and, as U.S. Secretary of Energy Jennifer M. Granholm put it, "beat climate change and reach our clean energy future."
One funded project that shows particular promise for cooling modular data centers is the University of California, Davis' Holistic Modular Energy-efficient Directed Cooling Solutions (HoMEDUCS) project.
Rather than trying to achieve better cooling through a single element or a simple upgrade to prior methods, the HoMEDUCS project applies a series of significant improvements across the entire modular data center cooling system, starting from a basic question: what really needs to be cooled?
HoMEDUCS data center in operation mode (courtesy of the UC Davis HoMEDUCS team)
Unlike offices, which OSHA recommends keeping at a comfortable 68-76 degrees Fahrenheit (20-24.4 degrees Celsius), computer chips tolerate far higher temperatures of 158-176 degrees Fahrenheit (70-80 degrees Celsius), as anyone who has balanced a hot laptop on their knees can attest. Even on the hottest days, the U.S. Southwest never reaches the temperatures a chip can handle.
Building on this idea, HoMEDUCS Project Lead Dr. Vinod Narayanan explains: "If you have a computer chip that is at 80 degrees Celsius, even if you have an outdoor ambient that's 40 degrees Celsius (104 degrees Fahrenheit) ... that [temperature difference] can be used to drive the heat away from the chip." The HoMEDUCS project focuses on extracting the heat from the chip and dissipating it into the ambient air, starting with direct liquid cooling of the chip.
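To see why that temperature difference matters, consider the standard thermal-resistance view of heat transfer. The following is a back-of-the-envelope sketch with illustrative numbers, not HoMEDUCS' actual specifications:

```python
# Back-of-the-envelope sketch: heat flow driven by the chip-to-ambient
# temperature difference, Q = delta_T / R_th (a thermal "Ohm's law").
# The thermal resistance below is an assumed value, not a HoMEDUCS spec.

T_CHIP_C = 80.0      # allowable chip temperature (deg C)
T_AMBIENT_C = 40.0   # hot-day outdoor ambient (deg C)
R_TH_C_PER_W = 0.08  # assumed chip-to-ambient thermal resistance (deg C / W)

delta_t = T_CHIP_C - T_AMBIENT_C      # driving temperature difference
heat_flow_w = delta_t / R_TH_C_PER_W  # heat the loop can reject per chip

print(f"Temperature difference: {delta_t:.0f} C")
print(f"Heat rejected per chip: {heat_flow_w:.0f} W")
# A 40 C difference across 0.08 C/W moves 500 W -- ample headroom for a
# high-power processor, with no compressor or chiller in the loop.
```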
HoMEDUCS' cold plate differs from other designs in its fluid channels, which use smaller scales and a distinct geometry to enhance heat transfer while reducing pressure drop, and thus the pumping power needed to move the cold plate fluid (propylene glycol).
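Why does pressure drop matter? The hydraulic power a pump must supply scales with both flow rate and pressure drop, so halving the drop halves the pumping power. A minimal sketch with assumed numbers (the flow rate, efficiency, and pressure figures are illustrative, not published HoMEDUCS data):

```python
# Minimal sketch: pump power scales linearly with pressure drop,
# P_pump = (volumetric flow * pressure drop) / pump efficiency.

FLOW_LPM = 2.0         # assumed coolant flow per cold plate, liters/minute
PUMP_EFFICIENCY = 0.6  # assumed overall pump efficiency

def pump_power_w(flow_lpm: float, pressure_drop_kpa: float) -> float:
    """Hydraulic pump power in watts for a given flow and pressure drop."""
    flow_m3_s = flow_lpm / 1000.0 / 60.0  # L/min -> m^3/s
    return flow_m3_s * (pressure_drop_kpa * 1e3) / PUMP_EFFICIENCY

# Halving the channel pressure drop halves the pumping power.
for dp_kpa in (60.0, 30.0):
    watts = pump_power_w(FLOW_LPM, dp_kpa)
    print(f"{dp_kpa:.0f} kPa drop -> {watts:.2f} W per plate")
```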
Once the fluid exits the cold plates, it travels to a wall of ultra-efficient heat exchangers, which use the HoMEDUCS team's innovative "pure" counterflow design and are made from polymer to reduce cost. The heat then dissipates to the ambient air with the aid of fans; unlike existing modular data center cooling designs, no compressors or chillers are involved. Relying on simple pumps alone yields drastic energy savings, in much the same way that a ceiling fan draws less electricity than an AC unit. The cooled fluid then returns to the cold plate on the chip, and the cycle repeats.
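Counterflow matters because, among common heat exchanger arrangements, pure counterflow extracts the most heat for a given exchanger size. The textbook effectiveness-NTU relations make that concrete; the inputs below are illustrative, not the HoMEDUCS design's actual figures:

```python
import math

def counterflow_effectiveness(ntu: float, c_r: float) -> float:
    """Effectiveness of a pure counterflow exchanger (textbook e-NTU relation).

    ntu: number of transfer units (UA / C_min), a measure of exchanger size
    c_r: heat capacity rate ratio, C_min / C_max (0 <= c_r <= 1)
    """
    if math.isclose(c_r, 1.0):
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - c_r))
    return (1.0 - e) / (1.0 - c_r * e)

def parallel_flow_effectiveness(ntu: float, c_r: float) -> float:
    """Same-size exchanger plumbed in parallel flow, for comparison."""
    return (1.0 - math.exp(-ntu * (1.0 + c_r))) / (1.0 + c_r)

# For the same size (NTU) and nearly balanced streams, counterflow wins.
for ntu in (1.0, 3.0, 5.0):
    cf = counterflow_effectiveness(ntu, c_r=0.9)
    pf = parallel_flow_effectiveness(ntu, c_r=0.9)
    print(f"NTU={ntu:.0f}: counterflow {cf:.0%} vs parallel flow {pf:.0%}")
```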
If outside temperatures exceed 40 degrees Celsius (104 degrees Fahrenheit), HoMEDUCS' design incorporates SkyCool's radiative cooling panels on the module's roof, which can cool liquid below ambient temperature without electricity, even on a sunny day. The cooled fluid is stored below the module and drawn on during periods of extreme heat.
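The physics of sub-ambient radiative cooling can be sketched with the Stefan-Boltzmann law: a panel that emits strongly through the atmosphere's infrared "window" while reflecting sunlight effectively radiates to a sky temperature well below ambient. The emissivity and sky temperature below are rough assumptions for illustration, not SkyCool's specifications:

```python
# Sketch of net radiative cooling:
# P_net = emissivity * sigma * (T_panel^4 - T_sky^4).
# Real panels also trade against convection and absorbed sunlight.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9   # assumed panel emissivity in the 8-13 um sky window

def net_cooling_w_per_m2(t_panel_c: float, t_sky_c: float) -> float:
    """Net radiated power per square meter of panel."""
    t_panel_k = t_panel_c + 273.15
    t_sky_k = t_sky_c + 273.15
    return EMISSIVITY * SIGMA * (t_panel_k**4 - t_sky_k**4)

# A panel at 35 C radiating to an effective clear sky at 5 C rejects
# heat even while sitting below a 40 C ambient.
print(f"{net_cooling_w_per_m2(35.0, 5.0):.0f} W/m^2 net cooling")
```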
Today, data centers cooled with evaporative or chiller-based systems typically spend around 25% to 40% of their energy on cooling, to say nothing of the enormous amounts of water consumed, which has drawn the ire of drought-burdened communities. HoMEDUCS' three disclosed design elements (cold plates, ultra-efficient heat exchangers, and radiative cooling panels), together with others yet to be revealed, are projected to bring cooling below 5% of a data center's power consumption while using no water at all.
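The gap is easy to quantify. As a rough sketch, assume a hypothetical 1 MW IT load running year-round and an illustrative electricity price (both are assumptions, not figures from the project):

```python
# Rough arithmetic: annual cooling energy for a hypothetical 1 MW IT load
# at today's typical overhead versus HoMEDUCS' projected <5%.

IT_LOAD_KW = 1000.0    # hypothetical 1 MW of IT equipment
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10   # assumed $/kWh, just for a sense of scale

def annual_cooling(overhead: float) -> tuple[float, float]:
    """Annual cooling energy (kWh) and cost ($) at a given overhead fraction."""
    kwh = IT_LOAD_KW * overhead * HOURS_PER_YEAR
    return kwh, kwh * PRICE_PER_KWH

for label, overhead in (("chiller/evaporative (40%)", 0.40),
                        ("HoMEDUCS target (<5%)", 0.05)):
    kwh, cost = annual_cooling(overhead)
    print(f"{label}: {kwh / 1e6:.2f} GWh/yr, ~${cost:,.0f}/yr")
# At these assumptions, cooling drops from ~3.5 GWh to ~0.44 GWh per year.
```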
"What makes us unique," Narayanan says, "is the combination of technologies that we are bringing forward that make it more efficient and compact," the very things needed for easily deployable modular data centers that could operate anywhere in America where power is available.
Besides UC Davis' HoMEDUCS project, 14 other COOLERCHIPS projects target energy efficiency. They range from Nvidia's plan to apply "green refrigerants" directly to the chip cold plate, using rack manifolds with built-in pumps and liquid-vapor separators, to the University of Maryland's "Multi-Objective Software," which aims to give data center designers decision support for developing the next generation of data centers.
Luis Colón, senior technology evangelist at fauna.com, who has a long history of working with private and rented data centers, says the impact of the COOLERCHIPS program will be felt most by "hyper-scale" data centers. They will be able to decrease the load on their local energy grids, especially during the extreme temperatures of winter and summer when those grids are most stressed, allowing them to be "better neighbors."
De Bock believes the COOLERCHIPS program is more than just a path to energy-efficient data centers. The developed technologies will "generally apply to many electronic systems limited by the efficiency of their cooling systems and could also benefit power conversion systems for solar systems [and] wind turbines."
De Bock adds that modular data centers will likely be the biggest beneficiaries of whatever technologies come out of the COOLERCHIPS program. "Modular data centers or edge data centers include their own building structure and can therefore more rapidly adapt and utilize the COOLERCHIPS technologies developed," he says. "They also offer unique use cases where computing can occur close to the customer for low latency, fast communication between the data center and the user site, and the potential of waste heat reuse for a greenhouse, heating, drying, or other application."
Perhaps the greatest potential benefit of projects like HoMEDUCS is a future where data centers can be, as Colón puts it, "better neighbors," using less energy and water while bringing edge computing to users wherever they live.