Good job!
How many TB is that? I have 160 on 8 HDDs with a bunch of SSD for cache in front.
It’s an old DDR3 system, so capacities are low and I really, really cannot remove any of the 4x 8GB DIMMs
Did you compare the money saved with the money spent buying them?
I did that in the video. I haven't put the RoI section into the write-up yet; I should do that. At local prices, the return on investment for a brand new fanless Seasonic unit is 10 years.
The very first platinum PSU I found on my local used market was 850 DKK, which brings the RoI down to ~6 years, but I see them go for 600 constantly, where the RoI is ~4 years.
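The payback math behind those numbers is simple enough to sketch. A minimal example, assuming roughly 4.5W saved at the wall, 24/7 operation, and a local electricity price of about 3.7 DKK/kWh (the watt savings and the kWh price are my illustrative assumptions, chosen to land near the RoI figures quoted here):

```python
def payback_years(psu_price_dkk, watts_saved, dkk_per_kwh):
    """Years until electricity savings cover the PSU purchase price."""
    kwh_saved_per_year = watts_saved * 24 * 365 / 1000  # runs 24/7
    dkk_saved_per_year = kwh_saved_per_year * dkk_per_kwh
    return psu_price_dkk / dkk_saved_per_year

# Used platinum unit prices from the local market:
print(round(payback_years(600, 4.5, 3.7), 1))  # ~4 years
print(round(payback_years(850, 4.5, 3.7), 1))  # ~6 years
```

Both assumed inputs scale the result linearly, so a cheaper kWh or a smaller wattage delta stretches the payback accordingly.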
Upgrading from the old bronze unit, for me, has much more to it than pure RoI, which, agreed, would not be a great place to put your money if that were the only measurement. A better power supply means cleaner power, which handles brownouts, unstable current, and fluctuations much better than the old unit did. My Synology box that houses all the testing is 10+ years old, and it's not like it wasn't designed with that very bronze unit in mind, but I like the idea of giving it the best circumstances in its last days.
The Gold PSU that was in there was semi-passive, and its fan never spun up during the low load I put it through, so while the fanless Seasonic unit is ridiculously quiet, that's a bit of a moot point against already-fanless operation.
You… with a titanium power supply lying around in the closet
But congrats on the saving of an additional 4.5 watts!
Are these used PSUs safe to use? I imagine a 1200W Titanium unit that cost $200-300 new, maybe more, being sold at $20 must have some history… Don't those capacitors or other parts break down after some time? I'm not very knowledgeable about electronics, so I'm asking out of curiosity.
Most of the really cheap titanium power supplies are proprietary in size and (probably) provide only 12V and ground at very high power. They're also not standard ATX, so you can't just plug them into anything.
If you have a SuperMicro chassis, they're great; if not, they require a lot of custom tooling to get working. Not impossible, but not recommended either. Sure, they do break down over time, just like anything else on this planet, but by the time they can no longer supply power within spec, they'll have far outlived their usefulness.
Here is a lifehack. See a gadget you think would be interesting to tinker with? Buy it. Put it on the shelf. Go to sleep. Wake up the next morning: oh look! A gadget on the shelf I already bought in The Past™! Exactly what I needed!
I said above:
you can always sell it back on eBay. At one point I had a pile of 9 HBAs on my desk. Now I have one, and another in the server.
And as we discussed above, your prices are way off.
Yes. Much safer than any new consumer one.
Scroll a few comments above, e.g. My power saving testings on old equipment - #13 by arrogantrabbit
These end up on the secondary market way, way before they exhaust their useful life.
More like $500-$600
History: a company bought 200 servers. The company used those 200 servers for a few years. Then their needs increased and they needed new servers. Or their needs plummeted to zero, because most startups fail. They offloaded 200 now slow-ish, old-ish servers to hardware recyclers. The hardware recyclers disassembled the servers and sold them by part to hobbyists on eBay. This is why I keep repeating that buying new consumer stuff is insane. Buying new enterprise stuff is goofy, unless you need cutting edge. Recyclers are your friends.
Three years ago I bought this server. It had two Xeon E5 v3 CPUs, 12 HT cores each, 128GB of DDR4 RAM, a dual-socket Supermicro MLB with dual 10G Ethernet, IPMI, a pair of platinum PSUs, a Supermicro HBA, and a Supermicro expander backplane (with NVMe support), 12 drive bays on the front, two bays on the back, with disk caddies. Inside: pristine condition, not a single speck of dust. For how much? About $350. From UnixSurplus, a local recycler. They rent a fricking hangar at the airport to keep all that stuff. There is absolutely nothing wrong with that server; it has 20 years of useful life left. Over the years I replaced the MLB with a single-socket one ($30), upgraded the processor to v4 (bought one for $5, not joking), and the rest you can read in my other thread.
We live in amazing times.
… then go get supermicro chassis!
A properly designed server chassis is a far cry from consumer rolled-steel crap. Everyone, do yourself a favor and buy one at least once. You won't be able to touch consumer PC cases without cringing.
… then go get supermicro chassis!
:) .... >:(
Apartment living is great, but my rack can only be 50cm deep in its current position. Most SuperMicro chassis are not. I saw this post a few years ago and absolutely fell in love with it. How COOL is that?!
Some day I want to buy a three-sided farm (they are all over the place in Denmark) and dedicate one of the wings to things that can make noise and produce heat. My current living situation has the rack in my office, so every dB shaved off is great.
I have dozens of other excuses as to why I don't just have a stack of enterprise gear, so keep the good reasons coming
Maybe you know how to tinker with old stuff and test whether it really works or is a time bomb waiting to set your house on fire, but I don't. That's why I go the costly route of buying new.
A few issues with that: you pay more and get less. Let's assign ratings from 0 (worst) to 10 (best): C = cost (lower is better), R = reliability (higher is better).
In my mind, in terms of diminishing reliability: first comes new enterprise stuff. Right after, used enterprise stuff. Then a vast valley of nothingness, and at the very end, consumer gear, new and used. Ignore it. Don't touch it. Don't pay for it. Don't bring it home. Especially power supplies.
To your point: the vendor is important. Don't buy consumer trash from nobodies on eBay. Buy enterprise gear from reputable recyclers: you get an enterprise-class product at used consumer gear prices. It's a no-brainer. No domain knowledge needed, just logistics.
Another point: one might argue that new gear is untested, while used gear has in fact had time to prove itself under load for a few years.
Did I tell you I only buy used hard drives? Same reason. I got tired of sending back new WD and HGST and Seagate drives. So I switched to buying used. Now I send them back much less frequently, and mostly sell them back on eBay once I grow out of them.
…and so get your power use down from 150W to 25W and we believe
Power consumption of what? I have a Raspberry Pi; it consumes 3W. So… I win?
I'm not sure what you are trying to say.
I am saying that by worshipping this enterprise god, you are burning 100+W.
You don't need the five nines of reliability that you think enterprise gear gets you.
You don't need dual-data-path NVMe/SAS drives; SATA will do fine.
You don't need a 1000W platinum PSU; a 200W Energy Star unit from 2010 will do.
A storagenode uses about 1W above background; 150W to support that is extravagant
This makes even less sense. But OK, I'll try to understand what you are trying to say.
Are you saying enterprise equipment is less efficient than consumer gear? How does that make sense to you?! And this is even before considering reliability, tolerances, and the corner-cutting culture of designing for price, but let's keep that out of scope for now.
Irrelevant to power consumption discussion.
And I don’t use them.
Wrong. A 200W Energy Star unit from 2010 will burn 160W to provide 135W. A titanium supply will burn 150W. 10W savings. The cost of the used PSU is also irrelevant; both are under $20. So why would I go out of my way to buy a worse power supply?
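To make that comparison concrete, here is a minimal sketch of the wall-power arithmetic. The efficiency figures (84.4% for the old unit, 90% for titanium at this load point) are assumptions I picked to match the wattages quoted above, not measured values:

```python
def wall_watts(dc_load_w, efficiency):
    """AC power drawn from the wall for a given DC load."""
    return dc_load_w / efficiency

old_unit = wall_watts(135, 0.844)  # ~160W from the wall
titanium = wall_watts(135, 0.90)   # 150W from the wall
print(round(old_unit - titanium))  # ~10W saved
```

The gap shrinks further at lower loads, where both curves converge, which is why PSU swaps alone rarely move the needle much on a near-idle box.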
Well, then I burn 1W on the storagenode, don't I? That's all it takes to support it. Your words, not mine.
150W existed before storagenode.
So what’s your point?
If you have specific recommendations how to reduce power consumption further, I’m all ears.
Here is the config:
X10 single-socket MLB
E5-1650 processor, SpeedStep supported and enabled (10W at my average load)
128GB RAM (9W)
12 disks, 18-20TB each
2 NVMe SSDs for the special device (P3600 for now; before you ask, it's 4W)
2 9500-8i HBAs (also about 4W).
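The listed figures alone sketch out a rough idle power budget. The per-disk number below is my assumption (~5W idle is typical for large helium 3.5" drives); everything else is quoted in this thread:

```python
# Rough idle budget; per-disk idle (~5W) is an assumption,
# the other figures are the ones quoted in this thread.
components = {
    "E5-1650 at average load": 10,
    "128GB registered DDR4":    9,
    "2x NVMe special device":   4,
    "2x 9500-8i HBA":           4,
    "12x HDD @ ~5W idle":       12 * 5,
}
total = sum(components.values())
print(total)  # 87 -- before PSU losses, fans, and the chipset
```

Even before PSU conversion losses, the twelve spinning disks dominate the budget, which is why a 20-25W target is not reachable for this amount of storage without dropping spindles.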
Go ahead. Ask for any additional information you need and tell me how I can get to your 20W consumption while hosting a comparable amount of storage with comparable responsiveness.
Otherwise it’s all unproductive ranting.
And lastly,
What you call worshipping, I call rational behaviour: maximizing value per dollar spent while maintaining a minimum quality bar. I don't want to fund consumer marketing, corner-cutting "engineering", and gimmicks designed to sell and appeal to bloggers rather than be boring and work well; I'd rather that money go to reliability R&D and testing. So I refuse to buy products that are e-waste right off the factory production line. There is no conspiracy: consumer products are shit for very rational reasons. Those reasons contradict my values. And so I refuse to buy shit.
Hey, I have the same machine. It is 70W at idle.
It also has an RTX 4060, which is overkill.
Err, mine has NO HDDs, so it would be less.
So you are recommending removing all disks from my storage server? Got it, very helpful, thank you.
You are using 10W of a 130W CPU. You don't need it; an i7 will be enough.
128GB of RAM for a storage node is a lot. If you could make do with 32GB, you could use a 3770 or 4770 with cheap DDR3.
Is your RAM registered? Those modules eat loads of power, by the way.
So an i7-3770 with 32GB of PC3: about 20W. Add in your other cards: 32W?
Right. At this point you are arguing for replacing the whole platform, because replacing just the processor won't yield any meaningful improvement anymore. These processors idle very efficiently. Even replacing the 2637 with the 1615 did not make any difference: all those QPI links are also powered off in a single-CPU config, and the leakage current is very small.
I did consider upgrading the whole platform: a Xeon E-2414 (or even an EPYC 4344P) and a modern MLB, X13SCL (or X13SAE, respectively). But then the AI shit happened and I'm not prepared to pay that kind of money for marginal improvements.
The key here is that the server is close to idle most of the time, and processors made in the past 20 years idle very well.
No, 32GB is not enough, and neither is 64. I actually tested that: power consumption increased when I removed 64GB from the system. Less caching means more disk access and more power consumption; the extra disk I/O cost more than the savings from removing memory, at least in my specific workloads (measured over two days).
Yes, it's ECC registered RDIMM: MTA18ASF2G72PDZ-2G6E1. Registered memory does consume more, but my entire 128GB consumes 9W, as reported by pcm, so there is not much room for improvement.
Of course, I would pick UDIMMs today, but that means changing the platform; see above. Overall, a platform change would buy me 5W in RAM savings, maybe 5W in chipset efficiency tops (doubtful; I don't know how much current the chipset actually consumes, but the heatsink is not very warm), and maybe slightly better CPU idle and leakage, also a few watts. But the switching cost is prohibitive.
Please show pcm output from your system; I'm curious. I did not see a meaningful difference between DDR3 and DDR4 in power consumption. Once I get home I can show pcm output from my X10 and X9 based systems (DDR4 vs DDR3 respectively, 128 vs 32GB).
I don’t follow.