Node migration plan [advice needed]

Now you come across as someone with a total lack of experience. I am not saying that to call you a rookie, but you definitely seem to be missing the proper experience here. And I am not pretending to give you a lesson here. By any means. :-)

Had another question. How long do I have, once the node is offline, to get it back online? Because once the data is on the arc drive and I have the other drives on their own, the copy back will take some time.

12 days

After 4 hours offline the node will start to lose data, because repair of its pieces will kick in.

Still doing the initial copy of the data, but it has dawned on me that for the copy back (arc to the new single drive) I can just clone the partition, since it will be ext4 to ext4, so it should be much faster than a file-level copy.
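For reference, a minimal sketch of what that block-level clone could look like (device names are hypothetical placeholders; this assumes both partitions are unmounted and the destination is at least as large as the source):

```
# hypothetical device names - verify with lsblk before running anything
umount /dev/sdX1 /dev/sdY1 2>/dev/null

# straight block copy of the ext4 partition
dd if=/dev/sdX1 of=/dev/sdY1 bs=64M status=progress conv=fsync

# verify the clone, then grow the filesystem if the new partition is bigger
e2fsck -f /dev/sdY1
resize2fs /dev/sdY1
```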

Well, update. Had to abandon the node: on the same server, in a different pool, I had a hardware failure, so I decided it was not worth risking my personal data for 3 TB of storj data. So I'll just start over from scratch, this time with no raidz1. Thanks for all the help and advice.


Welcome to the forum @Fill :slight_smile:

Don't make array topology decisions for storj. Configure it for your other needs.

If you are not going to be using raidz, are you going with mirrors? That is expensive overkill, unless you need mirrors for other things, like ultra-fast sequential transfers.


Just doing a straight 1 node per 1 drive setup with plain ext4. Just going to run the risk of drive failure.


Please review the performance posts about ext4 and ZFS before you commit. (TL;DR: you will be much better off with one or more raidz1 vdevs plus a special device than you can ever achieve with ext4, let alone separate single-disk filesystems, from a performance, cost, and reliability perspective.)
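For what it's worth, such a layout is only a couple of commands; a rough sketch, with hypothetical pool and device names and an assumed pair of SSDs for the special vdev:

```
# hypothetical devices: four HDDs in raidz1 plus a mirrored SSD special vdev
zpool create tank \
  raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
  special mirror /dev/nvme0n1 /dev/nvme1n1

# push metadata and small blocks onto the SSD special vdev
zfs set special_small_blocks=16K tank
```

The 16K threshold is just an example; it has to stay below the dataset recordsize, otherwise regular data blocks would also land on the special vdev.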

How to increase inefficiency and complexity by 1000x! /s

But seriously, why not just use a VM for STORJ on Proxmox, store the databases there, and NFS-mount a TrueNAS share for the data? That way you can make great use of the SSD speed for the DB and host the data with the right volume block size on TrueNAS.
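To illustrate the block-size part: if the data ends up as plain files on an NFS-shared dataset, it is the dataset recordsize that matters rather than a zvol's volblocksize; a sketch with a hypothetical dataset name:

```
# large records suit the mostly large storj piece files
zfs set recordsize=1M tank/storj-data
```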

That is what I did, minus the DB on SSD; it lived with the data. And yes, I know that caused a lot of problems. My setup for this and all my other servers: Proxmox holds the boot drives. TrueNAS then serves an NFS share to Proxmox, which Proxmox uses as a storage pool. From there, when I create the VM, I give it a small boot disk on Proxmox directly and a large qemu/raw disk on the NFS share.

When I was running this pool as raidz1, I was hitting what some of the TrueNAS devs call the "IO blender". The DB took a lot of the IO away, but even when I turned the node off and was just copying the files, it was going insanely slowly, so something else was up. Small-file IO on any raidz layout is not fun. Mirrors would be better for that.

I don't think so. You mount the disk as raw on the NFS share, and that NFS share is not even a mirror but RAIDZ1, which comes with a bunch of concerns!

What I do is host the disk and the DB on the local Proxmox raw mirror ZFS. In the fstab of the VM I mount the NFS share from TrueNAS. That is totally different when it comes to complexity, performance, space efficiency, volume block size and so on.
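For illustration, the NFS mount inside the VM could look roughly like this (server IP, export path and mount point are placeholders):

```
# /etc/fstab inside the VM - hypothetical IP and paths
192.168.1.10:/mnt/tank/storj-data  /mnt/storj-data  nfs  rw,hard,noatime,vers=4.1,_netdev  0 0
```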

It is especially not fun for databases. It is also not fun for any kind of block storage, because of the fixed volblocksize and the extra parity cost you can get with that.

In that case they would be forced to use a virtual disk to store the data. Using NFS directly to store the data is a total failure according to reports from other SNOs: Topics tagged nfs.
Instead I would recommend running the node on the TrueNAS directly, if that is where the storage is.

You keep repeating that and pointing to some random forum posts.
I am not going to dig into some random crashes from the forum.
It is like that debate we had about SMR drives: if you can give me a reason why NFS should be any kind of risk for the data storage, I am all ears. Otherwise it does not become true just by repeating it.

And please, yes, I know the DB should not be on SMR or NFS.

It's not about the risk, it's just not supported. Meaning it can work until it doesn't.
The storage database is still a database. It needs locks, normal locks, which is not what NFS offers.

So there is no rational reason why it should not work, it is just unsupported? That is fine by me.
Database is not on NFS.
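For anyone wanting the same split, the databases can be pointed at a separate path; a sketch, assuming the storagenode's storage2.database-dir option and placeholder paths:

```
# config.yaml of the storagenode - paths are placeholders
# pieces stay on the NFS-backed mount, databases go to a local disk
storage2.database-dir: /mnt/local-ssd/storj-dbs
```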

Please share your experience when the node grows above 1 TB, very interesting!
SNOs have reported issues after that point.

Will do in my monthly report post :grinning:
I know of multi-TB Nextcloud setups that have the data on NFS, which is basically the same: just some files. Well, not quite; I use async for STORJ because losing a file won't bother me the way it would with Nextcloud.
