What is the best way to upgrade my hard drive to a larger capacity?

Hello, I would like some help with a few questions.
I want to expand from 1 TB to 6 TB.
1. What is the fastest way to move all the data from one hard disk to another?
2. Should I pause the node, or would it be better to turn it off and do it from another PC?
3. What should I do to make Storj recognize that I now have 6 TB instead of 1 TB?
THANKS FOR YOUR HELP :grinning:

https://documentation.storj.io/resources/faq/migrate-my-node

rsync is your friend here. You don’t have to stop the node until the very end.
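For a Linux node, a minimal sketch of that "copy while the node keeps running" approach might look like this; the mount points below are just placeholders for your old and new disks:

```
# run while the node is still online; repeat until a pass finishes quickly
rsync -aP /mnt/storj-old/storagenode/ /mnt/storj-new/storagenode/
```

The trailing slashes matter: with them, rsync copies the contents of the source folder into the destination folder instead of nesting it one level deeper. The linked FAQ has the full sequence, including the final pass after stopping the node.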

3 Likes

From what I have been researching, it is Linux software and quite complex to use. Is there an alternative for Windows 10? Would FreeFileSync work?

You can use robocopy on Windows. Instructions are here.
https://documentation.storj.io/resources/faq/migrate-my-node/how-to-migrate-the-windows-gui-node-from-a-one-physical-location-to-other
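Roughly, the copy itself boils down to one command run repeatedly; the drive letters and folder names here are placeholders, so check the linked instructions for the exact steps:

```
robocopy D:\storagenode E:\storagenode /MIR /R:1 /W:1
```

/MIR mirrors the folder tree (including deletions, similar to rsync --delete), and /R:1 /W:1 keeps robocopy from stalling on files that are briefly locked by the running node. Run it while the node is online until a pass is quick, then once more after stopping the service.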

1 Like

To make the network aware of your increased capacity, you need to change the STORAGE parameter in your config.yaml file. This should be done after the disk migration is complete, and it can be done at the same time that you change the storage path to the new disk. Instructions for increasing the STORAGE parameter can be found here:
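As a rough illustration only (key names can differ between versions, so double-check your own config.yaml), the relevant entries look something like this:

```yaml
# total space the node is allowed to use on the new disk
storage.allocated-disk-space: 5.5 TB
# location of the node's data on the new disk
storage.path: E:\storagenode\
```

Restart the storagenode service afterwards so the new values take effect; many operators also leave some headroom rather than allocating the full 6 TB.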

1 Like

It’s going to take a while to copy it… doubt you will be done in an hour… or even two… you should most likely expect it to take something like 4-6 hours, if not just expect to leave it until the next day… and follow the guides…

Run robocopy a good number of times until a pass doesn’t take too long… or rsync with --delete… that’s what I do for the last pass (sketched below), because it’s the fastest and rarely has much been deleted in the meantime…

And of course don’t delete the old data before you are sure the new copy is up and running…
With luck you might be done in a day… but expect it to take a couple… total downtime shouldn’t be much more than 1 hour… or less.
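If you run the docker version on Linux, that final pass could look roughly like this; the container name and mount points are placeholders for your own setup:

```
# stop the node gracefully, then make the new copy exact;
# --delete removes anything that was deleted on the source since the last pass
docker stop -t 300 storagenode
rsync -aP --delete /mnt/storj-old/storagenode/ /mnt/storj-new/storagenode/
# then start the node again with the storage mount pointed at the new disk
```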

if so then you are doing it wrong :smiley:

It took me about 12hrs to move my 2150GB to the newer 2x12TB ZFS mirror (mostly due to the old server’s IOWait).

It’s now back to taking in uploads and already added 0.1TB in one full day’s worth of time.

@SGC for what it’s worth, I have an unprivileged LXC container with docker inside it and a ZFS dataset for Storj as a mountpoint inside that LXC container, which the docker container points to. I’d be happy to share notes if you’re interested in how I did that in Proxmox.

1 Like

@kalloritis
do both the host and the VMs have direct access to the media…?

My main issue is that I want to run everything / most stuff on one pool… and then if I have data on one VM it’s not accessible on another, just like data on the host might not be accessible from the VMs… of course this can be solved with network access… however Storj doesn’t like network access…

The solution I think I’ll end up with will simply be to put the storagenode on a virtual HDD in a dedicated dataset on the main pool…
I don’t really need to access it from the host anyway…

But if you have a suggestion for how I can access a pool on the host from a VM without using a network protocol, that would be great…

I’ve been thinking of trying to set up something like a SAS layer, or whatever it’s called… and then connecting the VMs over virtual HBAs, thus allowing them direct drive access but without using the network protocols that cause Storj issues…

That way all the storagenode data could sit on the pool and stay easily accessible, even from the host…

But yeah, most definitely open to ideas. I kinda just left it as is, because I don’t want to make any major changes without fixing that issue and I don’t have a good answer for the problem.

And though it might be a few months before I migrate my storagenode for the 4th or 5th time… I expect it to be the last time for a very long while… by the time I get around to migrating it, it will most likely be a little blob of 20 TB, so that’s going to be fun to migrate… :smiley:

Security-wise it might just be better to place it all into the VMs’ virtual HDDs.
I just really like the current setup… let’s say my VM had issues… or the server had issues… then I just need to mount the pool on a new OS install and aim the storagenode dir at it… and it’s running… also it’s kinda nice that I can go in and simply check stuff directly on the drive… also there must be some sort of overhead associated with running it all in VMs…

Of course, before this I was a lot less familiar with how little paravirtualized VMs actually demanded…
I should really start looking at that container stuff too… never actually got that to work lol

But yeah, long story short:
if you can access the data directly from both the VMs and the host, then I’m all ears…

You mean like this?

I borked the permissions recently, but it’s basically as you described: the ZFS pool is accessible both inside and outside the LXC container and docker.
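In case a concrete sketch helps, this is roughly how such a setup can be wired up on Proxmox; the container ID, pool and dataset names are made up, and the docker command is trimmed down to just the mount (the real one also needs the identity, ports and env vars):

```
# on the Proxmox host: create a dataset on the pool and bind-mount it into the container
zfs create tank/storj
pct set 101 -mp0 /tank/storj,mp=/mnt/storj

# inside the LXC container: point the storagenode container at the bind mount
docker run -d --name storagenode -v /mnt/storj:/app/config storj/storagenode:latest
```

For an unprivileged container, the UID/GID mapping between host and container also has to line up with the dataset’s ownership, which is the part that usually breaks permissions. The data stays visible on the host under /tank/storj the whole time.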

Is that because it’s a container…? I’ve kinda managed with paravirtualization thus far;
when it’s a VM it seems that it has to happen over network protocols.

But I don’t really have a good reason for why I’m not using containers, aside from the fact that I didn’t bother learning how to make them work in Proxmox… tried a few times but didn’t have much luck… though I’ve had many smaller issues that I’ve ironed out over time…

so maybe it’s time to get back to making containers work… :smiley:

My personal setup is a Hyper-V host with a Storage Spaces pool, and a VM with the Storj node on it. If I want to replace a hard drive with a larger one, I add the new one first and let the Windows host move all the data; once the old drive is empty I can remove it. This way I can keep the Storj node online while all the data is being moved and only have to shut down the node for a few minutes while I’m removing the old drive (or, if your hardware supports hot-swap, you never have to shut down).
If I ever want to move to a new server, I could even add the new server as an iSCSI target, move the storage location to the iSCSI target while the VM is online, then just shut down the VM and import it on the new server, again keeping the downtime very small.
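For reference, the drive swap on the Storage Spaces side is something along these lines in PowerShell; the pool, disk, and virtual disk names are just placeholders for whatever your setup uses:

```powershell
# add the new, larger disk to the existing pool
Add-PhysicalDisk -StoragePoolFriendlyName "StoragePool" -PhysicalDisks (Get-PhysicalDisk -FriendlyName "NewDisk")

# retire the old disk so Storage Spaces stops allocating to it, then rebuild onto the new disk
Set-PhysicalDisk -FriendlyName "OldDisk" -Usage Retired
Repair-VirtualDisk -FriendlyName "StorjDisk"

# once the repair has finished, the old disk can be removed from the pool and pulled
Remove-PhysicalDisk -StoragePoolFriendlyName "StoragePool" -PhysicalDisks (Get-PhysicalDisk -FriendlyName "OldDisk")
```

The VM and the node keep running through all of this because, as described above, they only ever see the virtual disk.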

Careful! Am I understanding you right that you move the data while the node is still running? Then you will be disqualified during the move as you will surely fail audits on pieces which are not in the old location anymore. Please follow the link in the solution of this thread.

@donald.m.motsinger no, that’s the whole point of the virtual machine: the Storj node doesn’t know it’s being moved. The virtualization host is moving the data; the virtual machine doesn’t know where its data is stored, it’s just writing to the virtual disk that is being managed by Hyper-V. This is why you can "do stuff" to your virtual disk while you stay online.

1 Like

Yeah, Storage Spaces is kinda awesome… it was my 2nd pick after ZFS… it sure would have saved me a lot of work to just go with Storage Spaces… now I’ve got VMs and terminals all over the place… and a storage solution that has more to learn than I would get through in 10 years…

I especially liked the tiered storage features of Storage Spaces… the whole idea that the drives in the "pool" / JBOD are rated and sorted based on their performance, and then the faster ones are used as cache for the slower ones… and sure, it will do a lot of extra work and cause some extra wear… but if you want a file to be safe you mark it critical and it’s spread across all the drives, or however many you set the redundancy to…

I really like the concepts it’s working with… not sure if it’s the future of storage… but it’s certainly very viable current tech.

Ah, ok. I’m not familiar with Hyper-V and its virtual disks, but it looks like you can do similar things with it as you can with LVM2.

1 Like

There are also things like CoW-based layered filesystems, which would basically move the image to the new drive while writing all new changes to the new drive, and all reads "fall through" the current top layer on the new drive to the old drive.

AUFS does similar things inside the images and volumes for docker.