UK heatwave causes cooling-related outages for centralized cloud providers

Decentralization is cool! :cold_face: :ice_cube: :cloud_with_snow: :snowflake: :smile:


So cool :wink:


lol. No worries for my nodes running in the Midlands of England. It was quite toasty in the garage where my nodes are situated (maybe into 40-degree territory), but no downtime throughout the last few days. A lot cooler today, around 22C.


i managed some downtime yesterday lol
the heatwave overloaded my server PSU, but i rearranged some cables
and cleaned some dust-filled air intakes…

ended at just about 27C in the server room, but the issue started at 24C. also not quite sure if it was temp related, but i would think so… i run my PSU way too close to its limit, like 80-90%+ of its spec.

measured at the wall… another good argument for why i generally don’t recommend running PSUs close to max…

had like 2-3 hours of downtime, so nothing of note…


For an always-on system that is really not a good idea. Though if it’s measured at the wall, then it’s probably closer to 75-80%, since the rating is what a PSU can deliver, not what it pulls from the wall. Is it at least a PSU rated for 24/7 use?

It got up to 38C here, but with AC I managed to keep the temp in my node room to 28C max. Haven’t had any issues. But the thing is, individual issues don’t matter. Even if there are some nodes with issues. There isn’t a heatwave globally and there will always be plenty of nodes available to still serve every segment. Yay for decentralization!


Hopefully not yet.


yeah it’s an old server PSU, which is also why i’m less worried about taking it closer to the limit…
the problem might have been cables in the way of the PSU’s own intake, and the server’s air intake being filled with dust after not being cleaned for like 4 years.

also the 90% draw is at peak workload…
usually the system isn’t stressed like that


Most probably the issue is the dust. Dust forms a film that interferes with the metal-to-air heat exchange, but more importantly it blocks the air inlets, reducing the cross-sectional area and therefore significantly reducing the cooling capacity. On a good server the cooling should be generously oversized, but with enough dust the capacity drops below the actual rated heat output = failure. 80-90% load on a server PSU is fine, and like BrightSilences says, measuring at the input side means the reading is roughly 10% higher than the actual load (the PSU loses about that much as conversion heat).
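The wall-reading vs. rated-output arithmetic above can be sketched in a few lines. The numbers here are purely hypothetical (a 750 W-rated unit drawing 675 W at the wall, assumed ~90% efficient at that load):

```python
# Hypothetical example: how a wall meter overstates PSU load relative to its rating.
rated_watts = 750.0   # what the PSU can deliver on its DC side (the spec rating)
wall_watts = 675.0    # what a wall meter reads (AC input)
efficiency = 0.90     # assumed conversion efficiency at this load point

dc_load = wall_watts * efficiency        # power actually delivered to components
load_fraction = dc_load / rated_watts    # true fraction of the rated output in use

print(f"apparent load at the wall: {wall_watts / rated_watts:.0%} of rating")
print(f"actual DC load:           {load_fraction:.0%} of rating")
```

So a reading that looks like 90% of the rating at the wall corresponds to roughly 81% of the PSU's actual rated output, matching the "about 10% lower" rule of thumb above.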

It is actually good to have a “heatwave” for a day or two, or a temperature wave in general - it reveals unknown/untested issues and gives a chance to solve them without too much downtime. On the other hand, a constantly high temperature is just a nuisance.

Always interesting to read about these failures and solutions.
