“A gale of creative economic destruction is blowing the roof off the conventional data centre economic model, revealing something altogether different!” So says Alan Beresford, MD of data centre cooling experts EcoCooling. We challenged Alan to explain his statement!

There’s a whole raft of new applications taking up a growing share of the data centre sector: AI, IoT (the Internet of Things), cloud-based hosting and blockchain processing for over 1,300 different digital currencies are all increasing the need for High Performance Computing (HPC) equipment.

You will no doubt have heard about tech giants Facebook, Microsoft, Google and Amazon Web Services building hyper-scale data facilities. These are a far cry from conventional data centres – a new breed based around efficient compute technologies, built specifically for the service each operator is providing.

These hyper-scale facilities have smashed conventional metrics. They achieve very high server utilisations, with PUEs (power usage effectiveness) as low as 1.05 to 1.07 – a million miles from the average 1.8-2.0 PUE across conventional centres. To achieve this, refrigeration-based cooling is avoided at every opportunity.
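
To put those numbers in context, here’s a minimal illustrative calculation (the figures below are assumptions chosen for the arithmetic, not measurements from any particular facility): PUE is simply total facility power divided by IT power, so the gap between 1.07 and 1.9 is the gap between 70kW and 900kW of cooling and electrical overhead for the same 1MW of IT load.

```python
# Illustrative PUE arithmetic (hypothetical figures, not from any specific site).
# PUE = total facility power / IT equipment power.

def pue(it_kw: float, overhead_kw: float) -> float:
    """Power Usage Effectiveness for a given IT load and facility overhead."""
    return (it_kw + overhead_kw) / it_kw

it_load_kw = 1000.0                 # 1 MW of IT equipment (assumed)
print(pue(it_load_kw, 70.0))        # 1.07 -> 70 kW of cooling/electrical losses
print(pue(it_load_kw, 900.0))       # 1.90 -> 900 kW of overhead for the same IT load
```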

Built on the back of new HPC applications (Bitcoin mining etc.), smaller entrepreneurial setups have adopted the same high-efficiency, extreme engineering practices. These are no longer the preserve of the hyper-scale giants, and that is turning the economics of data centre construction and operation on its head.

Blockbase cryptocurrency mining facility – Sweden

Intensive computing

CPU-based servers are highly flexible, able to run applications on top of a single operating system such as Unix or Windows. But being a relatively slow ‘jack of all trades’ makes them ‘master of none’ – poorly suited to HPC applications.

The hyper-scale centres use a variety of server technologies:

GPU – Graphics Processing Unit servers are based on the graphics cards originally designed for rendering.

ASIC – Application Specific Integrated Circuits are super-efficient, with hardware optimised to do one specific job, but they cannot normally be reconfigured. The photo (pic 1) shows an AntMiner S9 ASIC, which packs 1.5kW of compute power into a small ‘brick’.

FPGA – Field-Programmable Gate Arrays. Unlike ASICs, they can be configured by the end user after manufacturing.

Extreme Engineering

In the conventional enterprise or co-location data centre, you’ll see racks with 16A or 32A power feeds (roughly 4kW and 8kW capacities respectively), although the typical load is more like 2-3kW.

Conventional data centres are built with lots of resilience: A+B power, A+B comms, and N+1, 2N or even 2N+1 systems for refrigeration-based cooling – Tier 3 and Tier 4 on the Uptime Institute scale.

What we’re seeing with these new hyper-scale centres, however, is that HPC servers at densities of 75kW per rack are regularly deployed – crazy levels, on a massive scale.

And there’s no Tier 3 or Tier 4; in fact, there’s usually no redundancy at all, except perhaps a little on comms. The cooling is just fresh air.

Standard racks are not appropriate for this level of extreme engineering. Instead there are walls of equipment up to 3.5m high – stretching as far as the eye can see.

Blockbase – Sweden

The Economics of Hyper-Scale

Whereas we all have our own set of metrics for conventional data centres, the crypto guys have only one: TCO (total cost of ownership). This single measure encompasses the build cost and depreciation, plus the costs of energy, infrastructure and staff.

They express TCO in ‘cents (€) per kilowatt hour of installed equipment’. In the Nordics, they’re looking at just 6-7 cents per kWh, down to around 5 cents (€) in China.
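
To see how a figure like that is built up, here’s a hypothetical back-of-envelope calculation – every input below is an illustrative assumption, not data from a real site: annualised build cost, energy (scaled by PUE) and staff/other costs, divided by the kWh the installed IT equipment consumes in a year.

```python
# Hypothetical TCO back-of-envelope; every input is an illustrative assumption.
# TCO here = (annualised build cost + energy + staff/other) / annual kWh of installed IT load.

installed_it_kw = 1000.0            # 1 MW of installed equipment (assumed)
hours_per_year = 8760
annual_it_kwh = installed_it_kw * hours_per_year           # 8,760,000 kWh

build_cost_eur = 1_500_000.0        # low-cost, stripped-back build (assumed)
depreciation_years = 10
annual_build_eur = build_cost_eur / depreciation_years     # 150,000 EUR/year

energy_price = 0.03                 # EUR per kWh, cheap Nordic power (assumed)
pue = 1.07                          # fresh-air cooled facility (assumed)
annual_energy_eur = annual_it_kwh * pue * energy_price     # ~281,000 EUR/year

annual_staff_other_eur = 100_000.0  # staff, comms, maintenance (assumed)

tco_cents_per_kwh = 100 * (annual_build_eur + annual_energy_eur
                           + annual_staff_other_eur) / annual_it_kwh
print(f"TCO: {tco_cents_per_kwh:.1f} cents/kWh")           # ~6.1 with these assumptions
```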

However, all is not lost for operators in the UK and Europe. These servers and their data are very valuable – tens or hundreds of millions of pounds worth of equipment in each hyper-scale data centre.

As a result, we are already seeing facilities being built in higher-cost countries where equipment is more secure. But they still need to follow the same extreme engineering and TCO principles.

Keep it simple

You can’t build anything complicated for low cost. In this new hyper-scale data world, it’s all about simplicity.

Brownfield buildings are a great starting point, particularly sites like former paper and textile mills, where there tends to be lots of spare power and space.

Those of you who operate data centres will know that only about half the available power is actually used. But worse still, when we get down to the individual server level, some are down to single digit utilisation percentages.

These guys squeeze all their assets as close to 100 percent as they can.

Almost zero capital cost is spent on any forms of redundancy and direct fresh air is used for cooling.

A New Dawn – prototyping to benefit you

EcoCooling are supplying cooling solutions for one of the most ambitious and potentially significant hyper-scale developments in Europe. The aim of the ‘Boden Type DC One’ project is to build a prototype of the most energy and cost-efficient data centre in the world. This will create achievable standards so that people new to the market can put together a project that is as efficient as, if not more efficient than, those of the aforementioned giants such as Amazon, Facebook and Google.

We are aiming to build data centres at one tenth of the capital cost of a conventional data centre. Yes, one tenth! That will be a massive breakthrough. A true gale of creative economic destruction will hit the sector!

One of the key components is EcoCooling’s modular fresh-air cooling system: our new CloudCooler range. And we’re trying to break some cost and performance records here too! 

For example, our new three-cooler Group racking system supplies huge volumes of attemperated fresh air – 15 cubic metres per second. Each Group is enough to house and protect 144 ASICs dissipating around 250kW.
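
As a quick sanity check on those two figures (using standard air properties as assumptions, not EcoCooling performance data), 15 cubic metres of air per second absorbing 250kW corresponds to roughly a 14°C temperature rise across the equipment – comfortably within reach of fresh-air cooling.

```python
# Back-of-envelope: temperature rise of 15 m^3/s of air absorbing 250 kW.
# Standard air properties are assumed; this is not EcoCooling performance data.

heat_load_w = 250_000.0    # ~250 kW dissipated by 144 ASICs (from the article)
airflow_m3_s = 15.0        # attemperated fresh-air supply (from the article)
air_density = 1.2          # kg/m^3 at around 20 degC (assumed)
air_cp = 1005.0            # J/(kg.K), specific heat of air (assumed)

mass_flow_kg_s = airflow_m3_s * air_density               # ~18 kg/s
delta_t = heat_load_w / (mass_flow_kg_s * air_cp)         # ~13.8 K rise across the racks
print(f"Air temperature rise: {delta_t:.1f} K")
```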

3 cooler group solution – Assembled in under 2 hrs.

In these hyper-scale data centres, things have got to be done fast, so we’ve designed each Group to have an on-site build time of just two hours. Utilising integrated racks and cooling appears to be the way forward. Power distribution, PDUs and comms, plus the racks, containment and cooling, are totally Plug & Play.

It’s early days yet, but we’ve already got clients deploying 3-4 megawatts of our Plug & Play CloudCooler equipment PER WEEK!

ECV CloudCooler production facility – Bury St Edmunds, UK

This way, we might get the cost of ownership for that part of the infrastructure down to less than one cent (€) per kWh.

If we’re hitting those costs on core infrastructure, we’ve got a great chance of meeting a TCO of five, six, seven cents (€) per kWh. You certainly couldn’t do that with conventional infrastructure.

One of the early leaders in Arctic data centres, Hydro66, uses extreme engineering. In their buildings, the air is completely changed every 5-10 seconds. It’s like a -40°C wind tunnel, with the air moving through at 30 miles an hour.

Hydro66 colocation facility – Sweden

Importantly, you’ve got to look after all this highly expensive equipment. An AntMiner ASIC might cost €3,000. With 144 AntMiners in each racking unit, that’s almost half a million Euros of hardware!

So, we need to create ‘compliant’ environmental operating conditions to protect your investment. You’re probably familiar with the ASHRAE standards for temperature and humidity etc. In many instances we have achieved 100% ASHRAE compliance with just two mixing dampers, a low-energy fan and a filter. There is some very clever control software, vast amounts of experience and a few patents behind this too of course.
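
To give a feel for what that control layer has to do – this is a generic, simplified sketch, not EcoCooling’s actual (patented) software – a fresh-air system blends cold outside air with warm return air from the hot aisle so that the supply air sits on an assumed setpoint inside the ASHRAE window:

```python
# Generic fresh-air mixing illustration (NOT EcoCooling's proprietary control software).
# Two dampers blend outside air with warm return air to hold an assumed supply setpoint.

SUPPLY_TARGET_C = 22.0    # illustrative supply-air setpoint within the ASHRAE range

def fresh_air_fraction(outside_c: float, return_c: float) -> float:
    """Fraction of outside air (0..1) whose mix with return air hits the target.

    Mixed temperature = f * outside + (1 - f) * return; solve for f, then clamp.
    """
    if abs(return_c - outside_c) < 0.1:
        return 1.0                                  # temperatures equal: mixing is moot
    f = (return_c - SUPPLY_TARGET_C) / (return_c - outside_c)
    return max(0.0, min(1.0, f))

# Example: -20 C outside, 35 C return air from the hot aisle
f = fresh_air_fraction(-20.0, 35.0)
print(f"Fresh-air damper {f:.0%}, recirculation damper {1 - f:.0%}")
# -> about 24% fresh air and 76% recirculated air with these assumed temperatures
```

In reality there is far more to it – humidity, filtration, fan speed and fault handling – which is where the experience and the patents come in.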

Hang onto your hats

So, to conclude: the winds of change are blowing a gale of creative economic destruction through the conventional approach to data centres. Driven by blockchain and Bitcoin mining, automation, AI and cloud hosting, HPC equipment in the form of GPUs and ASICs will be required to drive the data-led economies of the future.

Hyper-scale compute factories are on the way in. TCOs of 5-7 cents (€) per kWh of equipment are the achievable targets.

This needs a radically different approach: Extreme engineering, absolute simplicity, modularity and low-skill installation/maintenance.

Hang on to your hats, it’s getting very windy in the data centre world!

Alan Beresford is the Managing Director of UK-based cooling manufacturer, EcoCooling. A trained mechanical engineer, he started EcoCooling in 2002 and has since pioneered the introduction of energy efficient direct evaporative and free cooling into IT environments.

Watch the 2-hr ‘3-cooler group solution’ assembly time lapse here: https://www.youtube.com/watch?v=Kr7sKiX8QRs