Move storage node to LVM

Hello everyone,
I want to increase my disk space. My node is set up on a Linux VM on a 1 TB virtual HDD; unfortunately I didn't use LVM, so I can't add other HDDs. I want to move my node data to a new LVM setup so I can expand easily in the future. The only problem is that this move would take multiple hours (5-6 at least). Is it OK to turn off my node for that period, or is there a better solution? (I don't want to set up multiple nodes.)

I would strongly recommend against using LVM to "add more drives" to the node. You will create what is essentially RAID0, and if one of the drives fails, your whole node will be lost.

Either create a second node on the second drive or create a RAID so you can recover in the event of a drive failure.

However, you can copy data from one location to another using rsync:
1. While the node is still running on /old, run rsync -avpP /old/ /new/
2. Wait until it completes (this can take many hours), then run rsync -avpP --delete /old/ /new/ — this second pass should finish rather quickly.
3. Stop the node.
4. Run rsync -avpP --delete /old/ /new/ one final time.
5. Start the node on /new.


Thanks! I'm planning to use RAID5 or 6 in the future, but at the moment I don't have enough HDDs.

For now the downtime disqualification is turned off, so you’re lucky and you can play around. Whether it will make sense, that’s another matter. For me, LVM is useful to organize non-Storj data on the same drives (e.g. on one of the nodes I have the root filesystem on a RAID1 volume while having Storj data in single-drive linear volumes), so I’d go for it, just not exactly for the goal you stated.
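For illustration, a single-drive linear volume like the ones described above could be set up roughly like this. This is a sketch only: the device `/dev/sdb`, the names `storj_vg`/`node1`, and the mount point are placeholders, the commands need root, and pvcreate/mkfs will destroy existing data on the disk.

```shell
pvcreate /dev/sdb                        # register the disk as an LVM physical volume
vgcreate storj_vg /dev/sdb               # one volume group per drive keeps failures isolated
lvcreate -n node1 -l 100%FREE storj_vg   # linear volume spanning just this one drive
mkfs.ext4 /dev/storj_vg/node1            # format it
mount /dev/storj_vg/node1 /mnt/node1     # mount for node data
```

Keeping one volume group per drive means a failed disk only takes its own volume (and node) with it, instead of striping across disks.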

Unfortunately, this is also true of a single drive. A single drive is also essentially RAID0.
I have 5 single drive nodes.

A RAID0 made from multiple drives is less reliable than a single drive. A single drive has some chance to fail, but a RAID0 only needs one of the drives to fail, so the chance to fail is much higher.
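To put rough numbers on that reasoning, here is a back-of-the-envelope sketch. The 2% annual failure rate is an assumed illustrative figure, not a measured one; a RAID0 of n drives is lost if any single drive fails.

```shell
# Assumed per-drive annual failure rate p = 2% (illustrative only).
# P(array fails) = 1 - (1 - p)^n for an n-drive RAID0.
awk 'BEGIN { p = 0.02; for (n = 1; n <= 4; n++) printf "n=%d  P(array fails)=%.2f%%\n", n, (1 - (1 - p)^n) * 100 }'
```

So under this assumption a 4-drive stripe is nearly four times as likely to be lost in a year as a single drive.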

Running separate nodes on each drive is better since now one drive failure will only take out that node and not all of them.


Bear in mind that if you have really large drives (4-14 TB), it can take days to rebuild a failed drive, and the rebuild stress itself can break one of the other drives, at which point your entire RAID fails. Not saying it will happen, but I'd do RAID50 or RAID60 if you can.

My plan is to use some old HDDs I have for free (1-3 TB in size) for Storj, to test it and see how much I can make. Later, with LVM, I can simply add a single 8 TB drive (or 3x4 TB with RAID5, at 3x the power consumption) and remove the small ones one by one. I don't think I can add more, since I only have a 25 Mb/s uplink at the moment. So I'm trying to avoid multiple nodes because of that, and because the vetting process takes longer with multiple nodes.
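The grow-then-retire workflow described above is what LVM's pvmove is for. A hedged sketch, assuming the volume group is called `storj_vg` with a logical volume `node1` on ext4, the old small disk is `/dev/sdb` and the new large disk is `/dev/sdc` (all names are placeholders, and the commands need root):

```shell
# Add the new drive to the volume group:
pvcreate /dev/sdc
vgextend storj_vg /dev/sdc
# Migrate all extents off the old drive; this runs while the filesystem
# stays online, so the node can keep working:
pvmove /dev/sdb /dev/sdc
# Remove the emptied drive from the group and clear its LVM label:
vgreduce storj_vg /dev/sdb
pvremove /dev/sdb
# Grow the logical volume and filesystem into the new free space:
lvextend -l +100%FREE /dev/storj_vg/node1
resize2fs /dev/storj_vg/node1
```

Note that while the move is in progress, data briefly lives on both disks, so a failure of either during the pvmove still risks the volume.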

Set them up one after the other. When one gets ~75% full, start the next node on the next HDD. There is no advantage in setting them up all at once; quite the opposite.