What is the best way to change to a hard drive with more capacity?

Hello, I would like to ask a few questions.
I want to expand from 1TB to 6TB
1. What is the fastest way to move all the data from one hard disk to another?
2. Should I pause the node, or would it be better to turn it off and do the copy from another PC?
3. What should I do so that Storj knows I now have 6TB instead of 1TB?
THANKS FOR YOUR HELP :grinning:

https://documentation.storj.io/resources/faq/migrate-my-node

rsync is your friend here. You don't have to stop the node until the very end.
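Something like this for the first pass, while the node keeps running (the mount points /mnt/storj-old and /mnt/storj-new are just examples, adjust them to your disks):

```
# first pass; -a preserves ownership/timestamps, -P shows progress and keeps partial files
rsync -aP /mnt/storj-old/ /mnt/storj-new/
```

Re-running the same command later only copies what changed in the meantime, so each pass gets shorter.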


From what I have been investigating, it is Linux software and quite complex to use. Is there an alternative for Windows 10? Would FreeFileSync work?

You can use robocopy on Windows. Instructions are here:
https://documentation.storj.io/resources/faq/migrate-my-node/how-to-migrate-the-windows-gui-node-from-a-one-physical-location-to-other
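The repeated-copy approach looks roughly like this with robocopy (the D: and E: paths are just examples):

```
:: run this a few times while the node is still up; /MIR mirrors the tree and removes deleted pieces
robocopy D:\storagenode E:\storagenode /MIR /R:1 /W:1
:: then stop the storagenode service and run the same command once more before changing the path
```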


To make the network aware of your increased capacity you need to change the STORAGE parameter in your config.yaml file. This should be done after your disk migration is complete, and can be done at the same time that you change the storage path to the new disk. Instructions for increasing the STORAGE parameter can be found here:
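For example, the relevant lines in config.yaml end up looking roughly like this (the values here are placeholders; the path is wherever your new disk is mounted):

```
# config.yaml excerpt
storage.allocated-disk-space: 6.00 TB
storage.path: D:\storagenode-data
```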


it's going to take a while to copy it... doubt you will be done in an hour... or even two... you should most likely expect it to take something like 4-6 hours, if not just expect to leave it until the next day... and follow the guides...

run robocopy a good deal of times until a pass doesn't take too long... or rsync with --delete... that's what I do last, because it's the fastest and rarely has much been deleted...

and then ofc don't delete the old data before you are sure the new copy is up and running...
with luck you might be done in a day... but expect it to take a couple... total downtime shouldn't be much more than 1 hour... or less
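roughly this sequence, assuming a docker node called storagenode and the same example mount points as above:

```
# keep repeating until a pass only takes a few minutes
rsync -aP /mnt/storj-old/ /mnt/storj-new/

# then stop the node and do one final pass that also removes pieces deleted in the meantime
docker stop -t 300 storagenode
rsync -aP --delete /mnt/storj-old/ /mnt/storj-new/

# start the node again with its run command pointing at the new disk
```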

if so then you are doing it wrong :smiley:

It took me about 12 hours to move my 2150GB to the newer 2x12TB ZFS mirror (mostly due to the old server's IOWait).

It's now back to taking in uploads and already added 0.1TB in one full day's worth of time.

@SGC for what it's worth, I have an unprivileged LXC container running docker inside it, with a ZFS dataset for Storj passed in as a mountpoint inside that LXC container, which the docker container points to. I'd be happy to share notes on how I did that in Proxmox if you're interested.
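The mountpoint part is basically one line on the Proxmox host; a sketch with made-up names (container 101, dataset mounted at /tank/storj):

```
# expose the host's ZFS dataset inside the LXC container at /mnt/storj
pct set 101 -mp0 /tank/storj,mp=/mnt/storj
```

For an unprivileged container the uid/gid mapping also has to line up so the files are writable from inside.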


@kalloritis
do both the host and the VMs have direct access to the media...?

my main issue is that I want to run everything / most stuff in one pool... and then if I have data on one VM it's not accessible on another, just like data on the host might not be accessible from the VMs... ofc this can be solved with network access... however storj doesn't like network access...

the solution I think I'll end up at will simply be to put the storagenode on a virtual HDD in a dedicated dataset on the main pool...
I don't really need to access it from the host anyway...
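one way to do that would be something roughly like this (pool name and VM id are just placeholders):

```
# sparse 6T zvol in its own spot on the pool
zfs create -s -V 6T tank/storj-vdisk
# hand it to the VM as a raw extra disk
qm set 101 -scsi1 /dev/zvol/tank/storj-vdisk
```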

but if you have a suggestion for how I can access a pool on the host from a VM without using a network protocol, that would be great...

been thinking of trying to set up something like a SAS layer, or whatever it's called... and then connecting the VMs over virtual HBAs, thus allowing them direct drive access without using the network protocols that cause Storj issues...

that way all the storagenode data can sit on the pool and be easily accessible, even from the host...

but yeah, most definitely open to ideas. I kinda just left it as is, because I don't want to do any major changes without fixing that issue and I don't have a good answer for the problem.

and though it might be a few months before I migrate my storagenode for the 4th or 5th time... I expect it to be the last for a very long time... by the time I get around to migrating it, it will most likely be a little blob of 20TB, so that's going to be fun to migrate... :smiley:

security-wise it might just be better to place it all into the VM vHDDs.
I just really like that... let's say my VM had issues... or the server had issues... then I just need to mount the pool on a new OS install and aim the storagenode dir at it... and it's running... it's also kinda nice that I can go in and simply check stuff directly on the drive... and there must be some sort of overhead associated with running it all in VMs...

ofc before, I was a lot less familiar with how little paravirtualized VMs actually demand...
I should really start looking at that container stuff too... never actually got it to work lol

but yeah, long story short:
if you can access the data directly from both the VMs and the host, then I'm all ears...

You mean like this?

I borked the permissions recently, but it's basically as you described: the ZFS pool is accessible both inside and outside the LXC container & docker.
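The docker side then just bind-mounts the dataset path as it appears inside the container; a trimmed sketch (wallet, email, address and the /mnt/storj paths are placeholders):

```
docker run -d --name storagenode \
  -p 28967:28967/tcp -p 28967:28967/udp \
  -e WALLET="0x..." -e EMAIL="you@example.com" -e ADDRESS="your.ddns.example:28967" \
  -e STORAGE="6TB" \
  --mount type=bind,source=/mnt/storj/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/storj/data,destination=/app/config \
  storjlabs/storagenode:latest
```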

is that because it's a container...? I've kinda managed with paravirtualization thus far;
when it's a VM, it seems that it has to happen over network protocols.

but I don't really have a good reason why I'm not using containers, aside from not bothering to learn how to make them work in Proxmox... tried a few times but didn't have much luck... though I've had many smaller issues that I have gotten ironed out over time...

so maybe it's time to get back to making containers work... :smiley:

My personal setup is a Hyper-V host with a Storage Space, and a VM with the Storj node on it. If I want to add or replace a hard drive with a larger one, I add the new one first and let the Windows host move all the data; once the old drive is empty, I can remove it. This way I can keep the Storj node online while all the data is being moved and only have to shut down the node for a few minutes while I'm removing the old drive (or if your hardware supports hot-swap, you never have to shut down).
If I ever want to move to a new server, I could even add the new server as an iSCSI target, move the storage location to the iSCSI target while the VM is online, then just shut down the VM and import it on the new server, again keeping the downtime very small.
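In PowerShell the drive swap on the Storage Spaces side is roughly this (pool and disk friendly names are made up; wait for Get-StorageJob to finish before pulling the old drive):

```
# add the new, larger disk to the pool
Add-PhysicalDisk -StoragePoolFriendlyName "StorjPool" `
    -PhysicalDisks (Get-PhysicalDisk -FriendlyName "NewBigDisk")

# retire the old disk so nothing new lands on it, then let the pool drain it
Set-PhysicalDisk -FriendlyName "OldSmallDisk" -Usage Retired
Get-VirtualDisk | Repair-VirtualDisk -AsJob

# once Get-StorageJob shows the repair finished, remove the old disk from the pool
Remove-PhysicalDisk -StoragePoolFriendlyName "StorjPool" `
    -PhysicalDisks (Get-PhysicalDisk -FriendlyName "OldSmallDisk")
```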

Careful! Am I understanding you right that you move the data while the node is still running? Then you will be disqualified during the move as you will surely fail audits on pieces which are not in the old location anymore. Please follow the link in the solution of this thread.

@donald.m.motsinger no, that's the whole point of the virtual machine: the storagenode doesn't know it's being moved. The virtualization host is moving the data; the virtual machine doesn't know where its data is stored, it's just writing to the virtual disk that is being managed by Hyper-V. This is why you can "do stuff" to your virtual disk while you stay online.


yeah, Storage Spaces is kinda awesome... it was my 2nd pick after ZFS... it sure would have saved me a lot of work to just go with Storage Spaces... now I've got VMs and terminals all over the place... and a storage solution with more to learn than I would get through in 10 years...

I especially liked the tiered storage features of Storage Spaces... the whole idea that the drives in the "pool" / JBOD are rated and sorted by their performance, and then the faster ones are used as cache for the slower ones... and sure, it will do a lot of extra work and cause some extra wear... but if you want a file to be safe you mark it critical and it's spread across all drives, or however many you asked the redundancy to be...

I really like the concepts it's working with... not sure if it's the future of storage... but it's certainly very viable current tech

Ah, ok. I'm not familiar with Hyper-V and its virtual disks, but it looks like you can do similar things with it as you can with LVM2.


There are also things like CoW-based layered filesystems, which would basically move the image to the new drive while writing all new changes to the new drive, with reads "falling through" the current top layer on the new drive to the old drive.

AUFS does similar things inside the images and volumes for docker.