By Brendon Yoder, DataBank Solutions Architect
With the rapid adoption of High-Performance Computing (HPC) and AI services, discussions about surging power consumption in datacenters and the limits of cooling technology have become more prevalent. These concerns have fueled a growing interest in high-density computing—what it entails and why it matters for enterprises looking to advance their technology strategies.
In the datacenter world, density refers to the amount of power a single cabinet or rack requires to operate its workloads. In the early 2000s, this was around 1-3kW per cabinet. By the 2010s, this increased to 5-6kW, and today, the average has risen to 8-10kW. While this growth has been steady, recent developments in HPC environments have accelerated the shift toward significantly higher densities. The demand for these systems stems from the increasing need to process vast amounts of data—whether for analytics, large language models (LLMs), or data-driven applications.
Before the advent of AI computing, high-density environments typically operated at around 17kW per cabinet, with few environments outside of supercomputers exceeding that threshold. However, the rise of GPU-based workloads for AI and data processing has pushed power requirements dramatically higher. Today, many high-density environments require 35-45kW per cabinet, with some systems expected to reach 70-80kW or more within the next few years.
To maximize resources while keeping up with technological demands, many companies are consolidating workloads into more powerful servers. While these high-performance servers consume more power individually, they offer greater efficiency and improved performance per watt compared to older equipment. Additionally, consolidation reduces networking complexity and maintenance overhead, allowing IT teams to allocate more resources toward business-driven initiatives like R&D and emerging technologies.
What once required multiple cabinets can now run on a single high-power cabinet, freeing up space for cutting-edge innovations like AI. With record-high demand for datacenter space and power, enterprises are prioritizing efficiency to ensure sufficient capacity for future workloads. However, the increasing global demand for power, sustainability requirements, aging power grid infrastructure, and extended lead times for new power delivery signal that power supply constraints will remain a challenge in the foreseeable future.
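The footprint impact of consolidation is easy to quantify. The sketch below uses the density figures cited above (8kW as a typical legacy average, 45kW for a modern high-density cabinet); the 360kW total deployment size is a hypothetical figure chosen for illustration, not taken from the article.

```python
import math

def cabinets_needed(total_kw: float, per_cabinet_kw: float) -> int:
    """Number of cabinets required to host a given total IT load."""
    return math.ceil(total_kw / per_cabinet_kw)

# Illustrative 360 kW deployment (hypothetical figure):
legacy = cabinets_needed(360, per_cabinet_kw=8)   # typical legacy density
dense = cabinets_needed(360, per_cabinet_kw=45)   # modern high-density cabinet

print(legacy, dense)  # 45 cabinets vs. 8
```

The same load drops from 45 cabinets to 8, which is the space-for-innovation trade the paragraph describes.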
The increasing gap between standard and high-density workloads presents significant challenges for datacenters. Traditionally, forced air cooling was sufficient for most environments, while liquid and immersion cooling were reserved for the most power-intensive systems. However, as power density rises, air cooling alone is often no longer viable. Furthermore, high-power servers are emerging with unprecedented energy demands, requiring more robust power infrastructures. Whereas two redundant power systems were once sufficient, some modern servers are being designed to require three, four, or even five diverse power systems to ensure continuous operation during maintenance or power failures.
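A rough probability model shows why additional diverse power feeds matter. This is a simplified sketch that assumes feed failures are independent and uses a hypothetical 1% per-feed unavailability; real systems have correlated failure modes that this ignores.

```python
def lost_power_probability(p_feed_down: float, n_feeds: int) -> float:
    """Probability that every independent feed is down simultaneously.
    Assumes independent failures (a simplifying assumption)."""
    return p_feed_down ** n_feeds

# Hypothetical 1% chance any single feed is unavailable at a given moment:
for n in (2, 3, 4, 5):
    print(f"{n} feeds -> {lost_power_probability(0.01, n):.0e} chance of total loss")
```

Each added feed cuts the modeled total-loss probability by two orders of magnitude, which is the motivation for three-, four-, or five-feed server designs during maintenance windows.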
Many high-power computing components now require liquid cooling, posing challenges for enterprises operating in colocation spaces not equipped for these systems. Businesses investing in high-density hardware may find that their existing infrastructure is incompatible with the cooling demands of modern AI and HPC workloads. Liquid cooling can also introduce significantly higher costs, making strategic planning and preparation essential. Because liquid cooling solutions vary in their requirements, the secondary water or refrigerant loops feeding customer equipment or Cooling Distribution Units (CDUs) often must be custom-built to ensure optimal cooling efficiency. Given this added scope and work, liquid solutions often cost significantly more than an equivalent air-cooled design—an expense that can catch unprepared enterprises off guard.
Datacenters must now accommodate a diverse array of workloads, each with vastly different power and cooling requirements. An environment designed for 10kW per cabinet has drastically different needs than one supporting 40kW or more. Surprisingly, while air cooling can function across varying densities, lower-density servers placed too close to high-density equipment can struggle to intake sufficient cold air, leading to overheating. Liquid cooling mitigates this issue by reducing air cooling dependency in high-density servers, allowing for more effective placement and cooling of surrounding equipment. This rapid evolution in datacenter infrastructure makes it increasingly difficult for unprepared enterprises to keep pace with technological advancements.
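A quick heat-removal calculation illustrates why air cooling breaks down at high densities. The sketch below uses the common sea-level rule of thumb CFM ≈ 3.16 × watts / ΔT(°F); the 20°F temperature rise across the equipment is an assumed, typical value, not a figure from the article.

```python
def required_airflow_cfm(watts: float, delta_t_f: float = 20.0) -> float:
    """Airflow (cubic feet per minute) needed to carry `watts` of heat
    with a `delta_t_f` degF rise across the equipment, via the sea-level
    rule of thumb CFM ~= 3.16 * W / dT(degF)."""
    return 3.16 * watts / delta_t_f

print(round(required_airflow_cfm(10_000)))  # 10 kW cabinet -> 1580 CFM
print(round(required_airflow_cfm(40_000)))  # 40 kW cabinet -> 6320 CFM
```

Moving roughly four times the air through one cabinet position is where raised-floor designs strain, and why adjacent lower-density servers can be starved of cold air.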
While much of the focus in high-density computing is on power and cooling, the transition affects every part of datacenter design and management, from datacenter providers to individual customers and enterprises. Understanding these challenges is critical so that growth plans are not derailed by differences in cost, architecture, and technology.
One of the clearest examples is networking cabling and equipment. While complexity is often reduced, cabling costs can easily reach millions of dollars for even smaller HPC deployments. The quality and length of the cables required to carry data to the servers at the necessary speed and reliability far exceed what traditional environments demand. These networking costs, combined with the high cost of the servers and cooling infrastructure, continue to push initial capital investment higher, reversing the recent trend of lowering capital costs in exchange for higher operational costs over time.
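A back-of-the-envelope model shows how fabric cabling alone reaches seven figures. Every number below is an illustrative assumption (server count, links per server, per-link cost), not vendor pricing or a figure from the article.

```python
def fabric_cable_cost(num_servers: int, links_per_server: int,
                      cost_per_link_usd: float) -> float:
    """Rough optics-plus-cable cost for an HPC fabric.
    All inputs are illustrative assumptions, not vendor pricing."""
    return num_servers * links_per_server * cost_per_link_usd

# e.g. 128 GPU servers, 8 fabric links each, ~$2,000 per high-speed link (assumed):
print(f"${fabric_cable_cost(128, 8, 2000):,.0f}")  # $2,048,000
```

Even this modest hypothetical deployment lands above $2M before switches, servers, or cooling are counted.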
The rise of high-density HPC environments is reshaping enterprise technology strategies and influencing power companies’ planning for future energy demands. As datacenter densities increase, IT decision-makers must evaluate several critical factors, including power availability, cooling architecture, redundancy requirements, and upfront capital costs.
By proactively assessing these factors, enterprises can future-proof their technology infrastructure and remain competitive in an increasingly data-driven world. The transition to high-density computing is not just a technological shift—it’s a strategic requirement for businesses aiming to lead in the AI and HPC era.