Node migration plan [advice needed]

Right now you come across as someone with a total lack of experience. I am not saying you are a rookie, but you definitely seem to lack proper experience. And I am not trying to give you a lesson here, by any means. :-)

Had another question: how long do I have, once the node is offline, to get it back online? Because once the data is on the arc drive and I get the other drives set up by themselves, the copy back will take some time.

12 days

After 4 hours offline the node will start to lose data, because the network begins repairing its pieces to other nodes.

Still doing the initial copy of the data, but it has dawned on me that for the copy back (arc drive to new single drive) I can just clone, since it will be ext4 to ext4, so it should be much faster.
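
Side note in case it helps: a block-level clone of an ext4 filesystem can skip the empty space. A rough sketch with e2image (device names here are placeholders, both filesystems must be unmounted, and the target must be at least as large as the source):

```sh
# Verify the source filesystem first.
e2fsck -f /dev/sdX1

# Copy only the used ext4 blocks to the target partition (-p shows progress).
e2image -ra -p /dev/sdX1 /dev/sdY1

# If the target partition is bigger, grow the filesystem to fill it.
resize2fs /dev/sdY1
```

A plain dd would also work, but it copies the free space too, so it is usually slower.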

Well, an update: I had to abandon the node. In the same server, on a different pool, I had a hardware failure, so I decided the risk to my personal data was not worth 3 TB of Storj data. I'll just start over from scratch, this time with no raidz1. Thanks for all the help and advice.


Welcome to the forum @Fill :slight_smile:

Don't make the array topology decision for Storj; configure it for your other needs.

If you are not going to be using raidz, are you going with mirrors? That is expensive overkill unless you need mirrors for other things, like ultra-fast sequential transfers.


Just doing a straight 1 node per 1 drive with plain ext4. Just going to run the risk of drive failure.
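
If it is useful to anyone doing the same, a minimal sketch of preparing one such drive (the device name and mount point are placeholders; -m 0 drops the 5% root reservation, which a data-only disk does not need):

```sh
mkfs.ext4 -m 0 -L storagenode1 /dev/sdX1
mkdir -p /mnt/storagenode1
mount /dev/sdX1 /mnt/storagenode1
```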


Please review the performance posts about ext4 and ZFS before you commit. (TL;DR: one or more raidz1 vdevs plus a special device will serve you much better than ext4 ever can, let alone separate single-disk nodes, from performance, cost, and reliability perspectives.)
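
For illustration, a sketch of that layout, assuming four data disks and two SSDs (all device names are placeholders, and the small-block threshold is a tuning choice, not a rule):

```sh
# One raidz1 data vdev plus a mirrored special vdev for metadata and small blocks.
zpool create tank \
  raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
  special mirror /dev/nvme0n1 /dev/nvme1n1

# Send metadata and blocks up to 64K to the special device; keep large records on the raidz1.
zfs set special_small_blocks=64K tank
zfs set recordsize=1M tank
```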

That is what I did, minus the DBs on SSD; they were with the data, and yes, I know that caused a lot of problems. My setup for this and all my other servers: Proxmox holds the boot drives, and TrueNAS serves an NFS share to Proxmox, which Proxmox then uses as a storage pool. When I create a VM, I give it a small boot disk on Proxmox directly and a large QEMU raw disk on the NFS share. When I was running this pool in raidz1, I was hitting what some of the TrueNAS devs call the "IO blender". The DBs took a lot of the IO away, but even when I turned off the node and was just copying the files, the speed was insanely slow, so something else was up. Small-file IO on any raidz-style layout is not fun; mirrors would be better for that.
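
For anyone reading along, a minimal sketch of what "DBs on SSD" looks like for the Docker node (the paths are examples, and the identity, wallet, and port options a real node needs are omitted for brevity):

```sh
docker run -d --name storagenode \
  --mount type=bind,source=/mnt/hdd/storagenode,destination=/app/config \
  --mount type=bind,source=/mnt/ssd/storagenode-dbs,destination=/app/dbs \
  storjlabs/storagenode:latest \
  --storage2.database-dir=/app/dbs
```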

In that case they would be forced to use a virtual disk to store the data. Direct use of NFS to store data is a total failure, according to reports from other SNOs: Topics tagged nfs.
Instead, I would recommend running the node on TrueNAS directly, since that is where the storage is.

It's not about the risk; it's just not supported, meaning it can work until it doesn't.
The storage database is still a database. It needs locks, normal locks, not what NFS offers.
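
The classic symptom of that broken locking is a malformed SQLite database. A quick way to check, with the node stopped (the path is an example):

```sh
# Run integrity_check against each node database; healthy ones print "ok".
for db in /mnt/storagenode/storage/*.db; do
  echo "$db: $(sqlite3 "$db" 'PRAGMA integrity_check;')"
done
```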

Please share your experience when the node grows above 1 TB; that would be very interesting!
SNOs have reported issues past that point.