Hesitancy about using old hardware to host nodes

Good morning everyone,

Recently the company sold me an EonStor A08/12U-G1410 containing twelve 2 TB disks (model WD20EARX-00PASB0) with more than 100,000 power-on hours each.

Here is the EonStor datasheet:

ES A12U-G1410

Right now I have it connected to my server, and my question is: given the age of the hardware, is it worth setting it up to host nodes? Or should I remove the drives and connect them directly to the motherboard via SATA to get better performance?

I have an NVMe drive in the server with PrimoCache software to speed up performance.

What do you think?

Thank you very much in advance.

It depends on what your expectations are. The 2TB drives are not very great for price/performance anymore. If you plan on setting up for learning and having fun (and if you have cheap power), I think it could be a fun idea. If not, you’re better off selling the hardware and using the funds to buy something a bit more modern.

That’s the point I came back to after reading the power-on hours. Then I thought to myself: someone must have tweaked the drives’ head parking (as they should have).

Greens suffered from excessive head parking (SMART attribute 193, Load Cycle Count). As long as you tweaked that, the drives are good to go.

What I suggest you do is run a full SMART self-test on each of them and check the reallocated/pending sector counts being reported.
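
For example, with smartmontools installed (the /dev/sdX path is a placeholder for each drive):

smartctl -t long /dev/sdX        # start the extended self-test; it runs in the background
smartctl -l selftest /dev/sdX    # review the result once it finishes
smartctl -A /dev/sdX | grep -E 'Reallocated_Sector|Current_Pending_Sector|Load_Cycle'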

If the drives come out fine, re-check that head parking is disabled, and, if you want to use them in an array, that TLER is enabled (that’s what it was called when Greens were a thing).
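
On drives that support it, TLER/ERC can be checked and set through smartctl; the setting usually doesn’t survive a power cycle, so it tends to live in a boot script (a sketch, /dev/sdX is a placeholder):

smartctl -l scterc /dev/sdX          # show the current error-recovery timeouts
smartctl -l scterc,70,70 /dev/sdX    # set read/write recovery to 7 s (values in tenths of a second)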

Of course, as said above, none of this matters if you don’t have access to cheap electricity.

It really didn’t cost me much, about €100 for the EonStor and the hard drives together. As for electricity, with the nodes I currently have plus the normal consumption of the house, I pay about €40-45 per month.

I have tried and checked all the disks and they are OK: there are no bad or pending sectors, and they report no read/write errors either. These disks were actually used for backups, so they haven’t had to endure an excessive workload (I hope). I expect them to hold out for at least two or three more years.

I could try with one 2TB hard drive, or with a RAID 0 of up to 6TB (3 hard drives, I’m not taking any more risks), and compare results. Later, if I see that it performs well, I will migrate the node to a larger-capacity disk. It can be expanded with larger-capacity disks (10-12TB) as long as the electricity cost of running the EonStor does not increase too much. (At the moment the balance between electricity and what I earn with STORJ is positive, so I can risk growing the network and contributing more to the project.)

What do you think, what configuration do you recommend before setting up the node?

I’m going to prepare a new WAN for the node and see how it goes. As they say, you have to reuse hardware, and I am very happy with the STORJ project; my first node has been in operation for 38 months and I hope to be around for a long time.

Don’t use RAID with those drives unless you tweak their timeouts as I suggested above. They are not designed for RAID and will cause problems down the road.

Head loading refers to the power-saving features those drives had. They parked the heads to reduce power consumption (the actuator doesn’t need to be energized to keep the head in place). The problem is that the timeout that triggered the parking was too aggressive, so the heads ended up being parked all the time. There is a small “ramp” that guides the edge of the head into its parked position, and this ramp ended up being hit a lot, which damaged the heads over time. I have seen drives with 250K load cycles, although most problems start at ~200K load cycles. As a reference, one of my drives (a different model) with 82,593 power-on hours has 3,162 load cycles.

What I would suggest is to pull the drives and use them in a PC instead. One node per drive, and move the nodes to bigger drives as they fill up. Keep in mind, though, that those drives will show their age soon, so don’t plan on them running for the next 10 years.


I doubt that’ll happen when you have an active node using the HDDs all the time, though. But I do agree with going with individual HDDs. That also allows you to add new nodes only when others are close to filling up, and save some on running costs.

Although the costs you quoted sound really cheap. So maybe energy cost isn’t much of a concern.


If memory serves right (we are going back, what, 15 years?), the default parking timeout was 8 seconds. That meant that if the drive saw no activity for 8 seconds, the head was sent to park. Depending on how fast those drives get filled (and remembering that they still have to be vetted), that head will be flying in and out of parking all the time.


I’m sure it will work fine. But my rule of thumb is that any node HDD needs to fill at least 2TB of space to pay for itself (power, Internet, enclosure costs, etc.). So to me those 2TB models will only ever barely break even… and until they’re full you’re just paying to run them.

(Off-topic: but I also slice off 1TB for TB/TiB and trash/free-space overhead when I estimate full-disk profits. So say for a 10TB HDD I’d subtract 2TB for ongoing running costs, and 1TB for assorted overhead… and pretend it’s 7TB of possible profitable space. Then estimate it would make 7x$1.50 = $10.50/month if full)


Yeah, not too long ago only 4TB and up would make a meaningful amount of profit for me. So I tend to not bother with smaller drives anymore, and when I buy new, I buy the biggest I can.
To be fair, energy prices have gone down a little since then. But I still highly doubt I could break even on 2TB.


The way I think of it is: if I’m buying a new drive, that drive needs to get sufficiently filled so that it can reach ROI as fast as possible. Having a brand-new 20TB drive sitting at 1TB for a few months isn’t that good of an investment (for me).

Smaller drives can be used for vetting nodes. Vet the node on a small drive that you don’t care about if it blows up; when it’s fully vetted, move it to an intermediate drive. When that is full (let’s say 8TB+), move that node to its final (20TB) drive. That way the new drive can start “pulling its weight” on its own, without you having to worry about it failing out of warranty before earning its cost back.

That’s my logic. It may work for others, it may not.


Thank you very much for your comments. Having read them, I am going to configure the EonStor as follows:

  • Of the twelve 2 TB hard drives, I am going to remove half, because they are not really going to be used (at the moment)
  • Create one node with one 2 TB hard drive and wait for it to fill up
  • If it fills up but the accumulated profits don’t yet allow me to buy a higher-capacity hard drive (for example a WD121KRYZ), I will use another 2 TB hard drive to start another node
  • If in that time I can buy a larger-capacity disk, I will install it in the EonStor and migrate the full disk to the higher-capacity one I end up purchasing
  • And repeat the entire process until the disk bay has the maximum possible capacity

Do you see this configuration as optimal?


I do something quite similar, but I try to stick to one move. Moving 8TB takes a long-ass time. I even have one backup node still warmed up on a 300GB HDD, but I won’t be reusing that one after the move; it’s been running at a loss for quite a while now that the fill rate has dipped a little.
But then I’m also still using my largest HDDs in an array that serves other (low-IO) purposes as well. I wouldn’t run it like that for Storj alone.


LVM is the solution. No downtime when moving.
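
A rough sketch of such a live move (the storj_vg/node_lv names and the device paths are assumptions):

pvcreate /dev/sdY                              # prepare the new, larger disk
vgextend storj_vg /dev/sdY                     # add it to the node's volume group
pvmove /dev/sdX /dev/sdY                       # migrate extents off the old disk while the node keeps running
vgreduce storj_vg /dev/sdX                     # drop the old disk from the volume group
lvextend -r -l +100%FREE storj_vg/node_lv      # optionally grow the LV and its filesystem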


Every little bit helps. But why run the node on LVM when that feature is only going to be used once or twice in its lifetime? I prefer having the node run on pure EXT4 and dealing with the ~1h of downtime it takes to finalize a move (the first two rsync runs are always done while the node is online).
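
Roughly, that workflow looks like this (the paths and the container name storagenode are placeholders):

rsync -a /mnt/old/storagenode/ /mnt/new/storagenode/             # first pass, node online
rsync -a /mnt/old/storagenode/ /mnt/new/storagenode/             # second pass, catches most of the churn
docker stop -t 300 storagenode                                   # then take the node down
rsync -a --delete /mnt/old/storagenode/ /mnt/new/storagenode/    # final pass for the stragglers
# repoint the node at the new path and start it again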

When it’s a failing disk, it’s a different story. Moving an LVM volume would actually cause more damage in that case instead of helping. But again, you’ll only deal with this once or twice in a node’s lifetime.


It would be if I wasn’t running all this on a Synology NAS. :slightly_smiling_face:

I absolutely recommend that for anyone who isn’t on such a restrictive system.

I actually use a more dangerous migration method to speed things up, because rsync is quite slow (see the command sketch after the list):

  1. Lower the assigned space to well below the amount stored, and restart the node.
  2. cp the blobs folder. It will contain files that get deleted during the copy, but GC will take care of those later.
  3. Stop the node.
  4. Copy the rest. Trash is the biggest part of that, but this has never taken more than about half an hour.
  5. Start the node in the new location.
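
A minimal command sketch of those steps (the paths and the container name are assumptions):

# 1. after lowering the allocated space in the config, restart the node
docker restart storagenode
# 2. copy blobs while the node is still online; GC cleans up any strays later
cp -a /mnt/old/storage/blobs /mnt/new/storage/
# 3. stop the node
docker stop -t 300 storagenode
# 4. copy everything else (trash, databases, orders, ...), skipping the blobs already copied
rsync -a --exclude=blobs /mnt/old/storage/ /mnt/new/storage/
# 5. point the node at /mnt/new/storage and start it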

Not sure if I’d recommend this for others, though. But it has worked fine so far.


So even if that copy takes a week… you don’t care… because you know the node isn’t accepting any new data. Clever! :+1:


Tonight I will begin assembling the node. Thank you all very much for your comments on how to migrate to a new, higher-capacity disk in the future without losing data along the way.

I will check the disks a couple of times before proceeding and make sure they are healthy.

I hope the 2 TB drives keep up and don’t explode :crossed_fingers:


It might be a bit late, but if you want to prevent the heads from parking too frequently in the future, you could use ioping for this.

I run this script at server boot. ioping needs to be installed. Adjust “models” accordingly.

#!/bin/bash
# Keep the heads of the matched drives busy with a tiny read every 7 seconds,
# just under the 8-second idle timer, so they never get sent to park.

# space-separated list of drive model substrings to match in /dev/disk/by-id
models="WDC_WD60EZRZ"

# stop any ioping instances left over from a previous run
killall ioping 2>/dev/null

for model in $models; do
  # whole-disk device links for this model, skipping partition entries
  disks=$(ls /dev/disk/by-id/*$model* | grep -v part)
  for disk in $disks; do
    # -i 7s: one request every 7 seconds; -q: quiet; run in the background
    ioping -i 7s -q $disk &
  done
done

Thank you very much @donald.m.motsinger. Once the disks are installed, I will add the script (modified so that it works for me) at system startup.

LVM is sort of a default for me. For Storj use it’s not just easier moving: you can also add an lvmcache without even stopping the node. And I tend to use LVM in all my Linux installations, so I’m more used to it than to regular partition tools. It’s one of those “killer app” tools in Linux.
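
As a sketch of that (the SSD path and the storj_vg/node_lv names are assumptions):

pvcreate /dev/nvme0n1              # add the SSD to LVM
vgextend storj_vg /dev/nvme0n1     # put it in the node's volume group
lvcreate --type cache --cachemode writethrough -L 100G -n node_cache storj_vg/node_lv /dev/nvme0n1   # attach a 100G cache to the live LV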

The idle3-tools package should be easier to use here.
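
For example (assuming idle3-tools is installed; the drive needs a power cycle before the change takes effect):

idle3ctl -g /dev/sdX    # show the current idle3 (head-parking) timer
idle3ctl -d /dev/sdX    # disable it entirely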

And what is your normal price per kWh?