SSD vs. HDD electrical cost?

I set up Storj on my computer 4 or 5 years ago but never made anything from it. I’m not sure why, maybe something was set up wrong, but I eventually stopped for that reason. Now I’m thinking about giving it another try.

I was wondering though, do people usually use SSD or HDD for this, or does it really matter much? It’s hard for me to think about how often these drives would be getting accessed and how to convert a certain amount of data read into electrical cost over time, so this question is probably best answered by someone who has actually done this. Any thoughts here? Is the difference in electrical usage even significant enough to worry about over time?

Welcome to the community,

From personal experience, HDDs have a better ROI because the cost per TB is much better.
Yes, HDDs tend to draw more electricity, but it depends on the HDD. In my opinion it comes down to how much power the whole system consumes, or how much power you spend per TB. So if you think about it, a 16TB HDD might be similar to a few-hundred-GB SSD when compared as a function of watts per TB.
I still recommend using HDDs as their upfront cost is much lower. I currently run multiple nodes across 4 different networks. My latest node has a power draw of 3.5W/TB, not the most amazing compared to a Raspberry Pi, but it is an enterprise solution with built-in redundancy. It all comes down to the whole system.

If you want to try it for fun and have some SSDs lying around, sure, give it a try. But there is no way to earn money with anything other than a hard drive. SSDs are an order of magnitude more expensive, and that’s before you even look at the higher-end ones. Electrical usage won’t matter much unless you have a bunch of very small drives. 7200 RPM hard drives will use less than 10 kWh/month in electricity.
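As a rough sanity check of that figure, here is a minimal sketch converting a continuous power draw into monthly energy and cost. The 10W draw and the $0.15/kWh price are illustrative assumptions, not measurements:

```python
# Convert a drive's continuous power draw into monthly energy use and
# cost. All wattage and price figures here are illustrative assumptions.

HOURS_PER_MONTH = 730  # average hours per month (8760 / 12)

def monthly_kwh(watts: float) -> float:
    """Continuous draw in watts -> energy per month in kWh."""
    return watts * HOURS_PER_MONTH / 1000

def monthly_cost(watts: float, price_per_kwh: float) -> float:
    """Monthly electricity cost for a continuous draw."""
    return monthly_kwh(watts) * price_per_kwh

# A 7200 RPM HDD drawing a constant ~10 W stays under 10 kWh/month:
print(monthly_kwh(10))  # 7.3
# At an assumed $0.15/kWh, a ~6 W drive costs roughly 65 cents a month:
print(round(monthly_cost(6, 0.15), 2))
```

At a constant 10W this works out to about 7.3 kWh/month, consistent with the "less than 10 kWh/month" figure above.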

I always look at efficiency when making decisions. My HDDs are data-center-class 14TB UltraStar drives that consume up to 6W per hour (or about 65 cents per month). For performance and reliability’s sake, I turned off power-down settings, so these drives never power down or sleep. The income is expected to be roughly $50 per drive per month when filled.

i think they use even less actually… when i look at my wattage draw in normal operation vs when scrubbing or such, there is a significant change due to the hdds running full tilt.
most of the time i would say they use like 2-4 watts with very light workloads.

an idle ssd is ofc cheaper to run and faster at responding than a hdd… but when actively working i think most or many ssds will pull more power than a hdd does…
ssds do have a lot of chips to power… most of a hdd is pretty passive
ofc some ssds will have a very low power draw, if designed for that…

don’t think it’s worth it for the power savings, if there even are any

Just a small correction; it is 6W all the time, there is no per hour. Yeah, they don’t use much.
Actually, I’m wondering if SSDs even use less when you take into account how many of them you need to match 1 large hard drive. But in any case, it’s not much for either. You would need a lot of them to notice any difference in your electricity bill.
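A back-of-the-envelope sketch of that comparison, matching one large HDD with several small SSDs; all capacities and wattages here are illustrative assumptions, not measurements:

```python
# Matching one large HDD with many small SSDs can erase the SSD's
# per-device power advantage. All figures are illustrative assumptions.

hdd_tb, hdd_watts = 16, 6        # one 16 TB HDD at ~6 W
ssd_tb, ssd_watts = 2, 2         # one 2 TB SSD at ~2 W active

ssds_needed = hdd_tb // ssd_tb   # SSDs required to match the HDD's capacity
total_ssd_watts = ssds_needed * ssd_watts

print(ssds_needed)       # 8 SSDs to reach 16 TB
print(total_ssd_watts)   # 16 W total, vs ~6 W for the single HDD
```

Under these assumptions the SSD array draws more in total, though as noted above, neither amount is enough to show up meaningfully on a bill.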

Power consumption is measured in watt hours, or KW/h or MW/h.

there’s no such thing…
It’s Power*time so W*h, not W/h

Lol! In the article you linked, it says:
“While a watt per hour exists in principle (as a unit of rate of change of power with time), it is not correct to refer to a watt (or watt hour) as a “watt per hour”.”

Energy is measured in watt hours, kWh, MWh, TWh, etc. MW/h is a rate of change.
E.g., a very weird light bulb might be rated 25W, 20W/h; this would mean the bulb draws 25W when first powered, then an hour later it would be using 45W, after 2 hours 65W, and so on. After a few hours this would no longer hold, as the bulb would blow and use 0W. Or, more excitingly and very unlikely, it could arc over, starting a fire and burning the building down.

Power consumption/energy use rate is measured in watts, OR if you really want it to be over time, you can use watt-hours per hour - Wh/h. Naturally the hours cancel each other out, but it is a completely correct and valid unit.
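To make the ramping-bulb thought experiment concrete, here is a tiny sketch; the 25W and 20W/h figures are just the made-up example values from above:

```python
# The hypothetical "25W, 20W/h" bulb: power ramps linearly,
# P(t) = 25 + 20*t watts, so energy over the first t hours is the
# integral E(t) = 25*t + 10*t**2 watt-hours. Purely a toy example.

def power_at(t_hours: float) -> float:
    """Instantaneous draw of the ramping bulb, in watts."""
    return 25 + 20 * t_hours

def energy_used(t_hours: float) -> float:
    """Energy consumed from hour 0 to t_hours, in watt-hours."""
    return 25 * t_hours + 10 * t_hours ** 2

print(power_at(1))     # 45 (W after one hour, as in the example)
print(power_at(2))     # 65 (W after two hours)
print(energy_used(2))  # 90 (Wh consumed over those two hours)
```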

a device uses a certain number of watts, or is specified to. if we want to calculate energy usage we can simply add the time factor, which is usually done in hours.

1 watt of usage for 24 hours gives 24Wh of usage

1kWh = 1000 watt hour “units”… distributed however.
like say 500 watts over 2 hours, or 200 watts over 5 hours, or 4 watts over 250 hours

it’s done this way to make it easy to calculate, you can ofc also use other time scales… but that seems pretty pointless for common usage.
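The arithmetic above can be checked in a couple of lines, using the same watts-times-hours combinations from the post:

```python
# Energy in watt-hours is just watts times hours, so any combination
# multiplying to 1000 is the same 1 kWh. Values are from the examples above.

def energy_wh(watts: float, hours: float) -> float:
    """Energy for a constant draw over a duration, in watt-hours."""
    return watts * hours

print(energy_wh(1, 24))    # 24 Wh: 1 watt for a day
print(energy_wh(500, 2))   # 1000 Wh, i.e. 1 kWh
print(energy_wh(200, 5))   # 1000 Wh again
print(energy_wh(4, 250))   # and again
```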

i don’t even

realistically an SSD offers more performance per watt, but a HDD will store more per watt of draw. comparing Wh/kWh per day seems silly in this situation; just average watts is more than acceptable for comparing rigs and storage. personally i would say that calculating watts per TB of storage is a good way of comparing overall efficiency currently

if and when large nodes become a thing and people start to see higher mbps of down/upload, i think people will start needing more CPU power. as proven by my raid 10 array, the older CPU is the limit and i cant max out the bandwidth on my 1000/1000 connection. that’s fine because it keeps it below 80% network usage and stops it slowing down other stuff, however once i get my 10gbe connection i will have to upgrade the CPU by a fair amount. obviously this is not an issue for storj right now, but eventually people will be running higher power systems and that will affect the watts/TB, but not necessarily as much or in the direction you would think

realistically you can host a raid 1 with 2x 16tb HDD for under 50w including networking, and that is going to be fine for a long while. from what i have seen you wont see over 100mbps, and that needs minimal CPU overhead and the disks can keep up

at 50w and for a 15/30TB node ( depending if you run raid ) you are looking at 1.67w/TB - 3.33w/TB

now for a large node with 16x 16tb drives and a CPU capable of 1gbps+, with redundant PSU and UPS, you’re looking at around 500w

depending on raid you would be looking at 120-240TB and 2.08w/TB - 4.17w/TB

alternatively you could calculate as TB/w

10TB at 100w can be either 10w/TB or 0.1TB/w. either is an acceptable way of measuring, but i personally lean more towards w/TB; TB/w is equally correct
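A small sketch of the two metrics, using the example systems from this post (the 50w and 500w totals are the rough figures above, not measurements):

```python
# Watts per terabyte and its reciprocal for the two example systems.
# Wattages and usable capacities are the rough figures from the post.

def watts_per_tb(watts: float, tb: float) -> float:
    return watts / tb

def tb_per_watt(watts: float, tb: float) -> float:
    return tb / watts

# Small node: ~50 W total, 15-30 TB usable depending on raid:
print(round(watts_per_tb(50, 30), 2))    # 1.67 W/TB without raid
print(round(watts_per_tb(50, 15), 2))    # 3.33 W/TB with raid 1
# Large node: ~500 W total, 120-240 TB usable depending on raid:
print(round(watts_per_tb(500, 240), 2))  # 2.08 W/TB
print(round(watts_per_tb(500, 120), 2))  # 4.17 W/TB
# The reciprocal view: 10 TB at 100 W
print(tb_per_watt(100, 10))              # 0.1 TB/W, i.e. 10 W/TB
```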

Yeah, it scales very nicely with capacity. Perhaps even less than 500W for a 16-drive system. Gets very efficient.

The main point is that it makes no financial sense to go with SSDs at scale, throwing money away.

I run an Unraid server and I use 2TB SSDs for cache drives, but they are mirrored so if one fails no data is lost. For storage in general I use HDDs, as they are cheaper and offer bigger capacity. HDDs are also more durable and can take more reads/writes over their life span. For StorJ, other than the OS drive you do not really need an SSD. If you are inclined to run an SSD cache like me, the Gigabyte SSDs offer the best specs I have seen. These figures are for the 1TB version: mean time between failures is 2 million hours, a 547.9GB-per-day write rating, and a draw of 2.5W to 3.0W. It would cost roughly 25 euros per year in kWh, but it all depends on your energy contract.

i was going with 500w for a full system with backup and all power used in support systems ( 500w would be at full usage on a 1000/1000 connection based on my real world setup; at idle you would see approx. 250w )

i did consider SSD write caching, but for my normal use other than storj i don’t need it so much, due to raid 10 and the files being large sequential reads/writes. now that i have 8tb allocated to storj i may need to add write caching in the future, but i don’t tend to see over 10% disk utilisation at this point. maybe if storj goes big i can then move it to its own box, as i have an unused 16 drive 4u hot swap case, but for now it’s not cost effective

Yeah, that sounds solid. i only have a 35TB box and i use it for backups, some crypto projects etc., so i get the benefit of having the VMs run off the cache in terms of the OS. If StorJ goes big then just having multiple drives behind the same IP should spread the load as well.

i am yet to decide how i would set up a dedicated storj box. as i personally don’t want any lost data, raid is a must. now, i have read that 1 drive per node is the accepted practice, but i feel raid 1 is the safe bet if you’re going to do it for a long time. given that you cant max out a fast connection with a single drive or a pair, it may pay to make it raid 10, as you get up to 8x the speed with redundancy when you have 16 drives, and a dead drive would cause minimal issue compared to other raid setups. the downside is you could have 8 drives’ worth of backup, and that is close to 3k worth of hardware

With Unraid you run a parity drive that covers the whole array, you might want to look into that one, and you can add more parity drives to cover more simultaneous drive failures. The only problem with my current setup: i thought i had normal drives, but it turns out they are that crappy SMR wank, so my parity drive needs a WD Red or something to not go into snail mode while writing the parity on a weekly basis.

As far as the nodes go, i have 2 x 6TB nodes on Windows that will also each host a 6TB INXT node. Then another 2 StorJ nodes on Linux when i am at capacity. And i also make backups with UrBackup of all my Windows OS drives and files, so i can recover from almost anything.

And yes, it is an investment, so i try to make the most of my server by doing INXT/StorJ/MYST and some other projects like a lightning node and Helium to offset some of that investment. Roughly 2.5k worth of hardware for the server when completed.

my issue with anything other than raid 10 and raid 1 is the rebuild time from a lost drive. what i may do is grab the 16 old 240gb drives that came with the older 16 bay server and do some benchmarks at various raid levels and see what i find, as there is very little info around on larger nodes. they must exist, but nobody is documenting it. my main concern is the ability of the drives to actually max out the connection with hundreds if not thousands of concurrent small files. i will probably run multiple nodes, as each one is fully vetted, just to minimise the risk that some other random bug takes down a single node or a database gets corrupted, so i wont be back at nothing. that then goes back to just running raid 1 with 8 pairs of 2 drives, but that may have iops limits compared to a raid 10 setup. i think i will spend some time and calculate the various setups ( i have also considered starting nodes on a raid 10 array, then when each node is at a set data limit, moving it over to a different raid that favours reads and requires fewer drives )

Yes, rebuilding drives takes a long time and i also could find very little data on that, so i opted for the Unraid system. but i have taken into account that i might lose a node, so i decided to bring nodes online 2 by 2 for a total of 4. And if StorJ takes off i can make a 2nd box and bring online some more drives. i also thought the SSD cache was very useful for speeding up operations and running various dockers.

RAID i did not want to use myself since i do not run a rack, so i only have space for 10 x HDD. If i were to run a rack then i could consider RAID 10 etc., as i would have far more space for more HDDs.