
The AI revolution is transforming the way data centres are designed and operated. But while demand for high-density computing continues to grow, many existing facilities are struggling to keep up. Retrofitting traditional data centres for AI workloads is proving far more complex than simply swapping in new equipment.
Challenges With Retrofitting Data Centres for AI
1. Power Limitations
AI servers consume significantly more energy than traditional IT loads, and many legacy facilities simply don’t have the electrical infrastructure to support racks drawing 50-100kW. Upgrades often require major works to incoming supplies, switchgear, and distribution networks, which is both expensive and disruptive. In some cases it is not just a question of cost: the local grid may not be able to handle the additional demand at all, making expansion effectively impossible.
2. Cooling Shortfalls
High-density computing means high-density heat. Traditional perimeter-based cooling designs can’t deliver the airflow required to manage AI workloads: air volumes are too large, air paths too long, and existing cooling systems too inefficient for today’s requirements. Facilities that rely solely on air cooling are also unsuitable for the latest generation of direct liquid-cooled (DLC) equipment, which is becoming essential for 50kW+ racks.
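To see why the air volumes become unmanageable, a rough back-of-envelope calculation helps. The sketch below uses the standard sensible-heat relation Q = ρ · V̇ · cp · ΔT; the air properties and the 10°C supply-to-return temperature rise are illustrative assumptions, not figures from any specific facility.

```python
# Approximate airflow needed to remove a rack's heat load with air cooling.
# Based on Q = rho * V * cp * dT. The air properties and 10 C delta-T
# below are illustrative assumptions, not figures for a specific site.

AIR_DENSITY = 1.2   # kg/m^3, air at roughly room temperature
AIR_CP = 1005.0     # J/(kg*K), specific heat capacity of air
DELTA_T = 10.0      # K, assumed supply-to-return temperature rise

def required_airflow_m3s(heat_load_kw: float) -> float:
    """Volumetric airflow (m^3/s) needed to absorb the given heat load."""
    return heat_load_kw * 1000.0 / (AIR_DENSITY * AIR_CP * DELTA_T)

for kw in (5, 50, 100):  # legacy rack vs. AI-class racks
    print(f"{kw} kW rack -> {required_airflow_m3s(kw):.1f} m^3/s")
```

At that assumed delta-T, a 100kW rack needs roughly twenty times the airflow of a legacy 5kW rack, in the region of 8m³/s per rack, a volume that perimeter cooling units pushing air across long paths were never sized to deliver.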
3. Legacy Infrastructure Doesn’t Fit the Model
Many older data centres were built with raised floors, low ceilings, and layouts optimised for distributed rather than concentrated loads. Ceiling heights that were once sufficient leave no space for additional infrastructure, overhead busbars, or walkway integration. Retrofitting these spaces is often impractical without wholesale reconstruction.
4. No Room for Expansion Outside the Building
Even in regions with cooler climates, such as the Nordics, we’re seeing challenges with available space for external plant. Adding dry coolers, chillers, or additional cooling towers simply isn’t possible on constrained sites. As demand for AI infrastructure grows, grey space outside the building is becoming just as critical as white space inside.
5. Perimeter Cooling is Outdated
Perimeter cooling was never designed for today’s AI racks; the sheer volume of airflow required makes these designs obsolete. High-density AI clusters also rely on ultra-low-latency interconnects, which means servers within a pod typically need to sit close together. Spreading racks further apart for thermal reasons is therefore impractical, reinforcing the need for integrated, row-level cooling solutions.
Where Do Retrofits Work?
Not every retrofit is out of reach. Facilities originally designed for cryptocurrency mining, for example, are better suited to AI upgrades. We have also seen successful retrofits in industrial sites with historically high power demand, such as former paper mills. These types of sites often:
- Already have higher-density power and cooling provisions
- Were designed with airflow and concentrated loads in mind
- Are located in remote areas with land available for external plant
This makes them more adaptable for next-generation AI-ready cooling and power systems.
Data Centre Cooling Solutions from EcoCooling
AI is driving a fundamental shift in how data centres are designed. Many existing facilities, constrained by power, cooling, space, and layout, are reaching the limits of what can realistically be retrofitted. Future-ready facilities will need integrated designs that bring cooling closer to the rack, optimise airflow, and incorporate direct liquid cooling as standard, while making room for the additional infrastructure AI workloads demand.
At EcoCooling, we’re working with clients worldwide to design and deploy cooling solutions that meet the challenges of AI head-on, whether that’s optimising a retrofit or creating a new facility from the ground up. Get in touch to discuss your requirements today.
