Hello, I have some questions I'd like answered.
I want to expand from 1 TB to 6 TB.
1. What is the fastest way to move all the data from one hard disk to another?
2. Should I pause the node, or would it be better to turn it off and do the copy from another PC?
3. What should I do to make Storj aware that I now have 6 TB instead of 1 TB?
Thanks for your help!
https://documentation.storj.io/resources/faq/migrate-my-node
rsync is your friend here. You don't have to stop the node until the very end.
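Something like this for the first pass, while the node is still running (the mount points here are just examples, adjust to your setup):

```
# first pass with the node still running; re-run it to pick up new pieces
rsync -aP /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/
```

`-a` preserves permissions and timestamps, `-P` shows progress and lets you resume interrupted transfers; the trailing slashes copy the contents of the folder rather than the folder itself.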
From what I have been reading, it is Linux software and quite complex to use. Is there an alternative for Windows 10? Would FreeFileSync work?
You can use robocopy on Windows. Instructions are here:
https://documentation.storj.io/resources/faq/migrate-my-node/how-to-migrate-the-windows-gui-node-from-a-one-physical-location-to-other
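As a rough sketch of what the guide walks you through (the drive letters and folder names here are just examples):

```
:: mirror the node folder to the new disk; safe to repeat while the node runs
robocopy C:\storagenode E:\storagenode /MIR /MT:16 /R:1 /W:1
```

`/MIR` mirrors the whole tree (including deletions on the destination, so only point it at a folder dedicated to the node), `/MT` copies multithreaded, and `/R:1 /W:1` keeps it from stalling for ages on locked files.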
To make the network aware of your increased capacity you need to change the STORAGE parameter in your config.yaml file. This should be done after your disk migration is complete, and can be done at the same time that you change the storage path to the new disk. Instructions for increasing the STORAGE parameter can be found here:
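For the Windows GUI node that ends up looking roughly like this in config.yaml (the values below are illustrative; docker nodes set the same thing through the STORAGE environment variable in the run command):

```
# config.yaml -- update once the data is on the new disk
storage.allocated-disk-space: 6.0 TB
storage.path: E:\storagenode
```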
it's going to take a while to copy… doubt you will be done in an hour… or even two… you should most likely expect it to take something like 4-6 hours, if not just plan to leave it until the next day… and follow the guides…
run robocopy a good number of times until a pass doesn't take too long… or rsync with --delete… that's what i do for the last pass, because it's the fastest and rarely has much been deleted…
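a minimal sketch of that final pass (same example paths as before; stop the node first, since --delete removes files from the destination that no longer exist on the source):

```
# node stopped: one last sync that also prunes deleted pieces
rsync -aP --delete /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/
```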
and then ofc don't delete the old data before you are sure the new one is up and running…
with luck you might be done in a day… but expect it to take a couple… total downtime shouldn't be much more than 1 hour… or less
if so then you are doing it wrong
It took me about 12 hours to move my 2150 GB to the newer 2x12TB ZFS mirror (mostly due to the old server's IOWait).
It's now back to taking in uploads and has already added 0.1 TB in one full day's time.
@SGC for what it's worth, I have an unprivileged LXC container running docker, with a ZFS dataset for storj bind-mounted into that LXC container, which the docker container points to. I'd be happy to share notes on how I did that in Proxmox if you're interested.
@kalloritis
do both the host and the VMs have direct access to the media…?
my main issue is that i want to run everything / most stuff on one pool… and then if i have data on one vm it's not accessible on another, just like data on the host might not be accessible from the vms… ofc this can be solved with network access… however storj doesn't like network access…
the solution i think i'll end up with will simply be to put the storagenode on a virtual hdd in a dedicated dataset on the main pool…
i don't really need to access it from the host anyways…
but if you have a suggestion for how i can access a pool on the host from a vm without using a network protocol, that would be great…
been thinking of trying to set up a SAS layer or whatever it's called… and then connect the VMs over virtual HBAs, thus allowing them direct drive access without using the network protocols that cause storj issues…
thus all the storagenode data would be easily accessible on the pool, even from the host…
but yeah most definitely open to ideas, kinda just left it as is, because i don't want to do any major changes without fixing that issue and i don't have a good answer for the problem.
and tho it might be a few months before i migrate my storagenode for the 4th or 5th time… i expect it to be the last for a very long time… by the time i get around to migrating it, it will most likely be a little BLOB of 20 TB, so that's going to be fun to migrate…
security-wise it might just be better to place it all into the vm vhdds
i just really like that… let's say my vm had issues… or the server had issues… then i just need to mount the pool on a new OS install and point the storagenode dir at it… and it's running… also it's kinda nice that i can go in and simply check stuff directly on the drive… also there must be some sort of overhead associated with running it all in vms…
ofc before, i was a lot less familiar with how little paravirtualized vms actually demand…
i should really start looking at that container stuff too… never actually got it to work lol
but yeah long story short
if you can access the data directly from both the VMs and the host then i'm all ears…
You mean like this?
I borked the permissions recently, but it's basically as you described: the ZFS pool is accessible both inside and outside the LXC container and docker.
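In case it helps, this is roughly the Proxmox side of it (the container ID and dataset paths below are made up):

```
# bind-mount the host's ZFS dataset into LXC container 101
pct set 101 -mp0 /tank/storj,mp=/mnt/storj
```

Because the source starts with a slash, Proxmox treats it as a bind mount of an existing host path rather than allocating new container storage, so the same dataset stays visible on the host.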
is that because it's a container…? i've kinda managed with paravirtualization thus far
when it's a vm it seems it has to happen over network protocols.
but i don't really have a good reason why i'm not using containers, aside from not bothering to learn how to make them work in proxmox… tried a few times but didn't have much luck… though i've had many smaller issues that have been ironed out over time…
so maybe it's time to get back to making containers work…
My personal setup is a Hyper-V host with a storage space, and a VM with the storj node on it. If I want to add a hard drive or replace one with a larger drive, I add the new one first and let the Windows host move all the data; once the old drive is empty I can remove it. This way I can keep the storj node online while all the data is being moved, and only have to shut the node down for a few minutes while I'm removing the old drive. (Or if your hardware supports hot-swap, you never have to shut down.)
If I ever want to move to a new server, I could even add the new server as an iSCSI target, move the storage location to the iSCSI target while the VM is online, then just shut down the VM and import it on the new server, again keeping the downtime very small.
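A sketch of that live move from Hyper-V's PowerShell side (the VM name and destination path here are made up):

```
# storage migration while the VM keeps running
Move-VMStorage -VMName "storjnode" -DestinationStoragePath "E:\VMs\storjnode"
```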
Careful! Am I understanding you right that you move the data while the node is still running? Then you will be disqualified during the move as you will surely fail audits on pieces which are not in the old location anymore. Please follow the link in the solution of this thread.
@donald.m.motsinger no, that's the whole point of the virtual machine: the storj node doesn't know it's being moved. The virtualization host is moving the data; the virtual machine doesn't know where its data is stored, it's just writing to a virtual disk that is managed by Hyper-V. This is why you can "do stuff" to your virtual disk while you stay online.
yeah storage spaces is kinda awesome… was my 2nd pick after zfs… sure would have saved me a lot of work to just go with storage spaces… now i got vms and terminals all over the place… and a storage solution with more to learn than i could get through in 10 years…
i especially liked the tiered storage features of storage spaces… just the whole idea that the drives in the "pool" / JBOD are rated and sorted by performance, and the faster ones are used as cache for the slower ones… and sure it will do a lot of extra work and cause some extra wear… but if you want a file to be safe you mark it critical and it's spread across all the drives, or however many you set the redundancy to…
i really like the concepts it's working with… not sure if it's the future of storage… but it's certainly very viable current tech
Ah, ok. I'm not familiar with Hyper-V and its virtual disks, but it looks like you can do similar things with it as you can with LVM2.
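With LVM2 the equivalent online move looks something like this (the volume group and device names are hypothetical):

```
vgextend storj_vg /dev/sdc1   # add the new disk to the volume group
pvmove /dev/sdb1 /dev/sdc1    # move all extents online, filesystem stays mounted
vgreduce storj_vg /dev/sdb1   # remove the emptied old disk
```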
So there are also things like layered, CoW-based filesystems that basically move the image to the new drive while writing all new changes to the new drive, and all reads "fall through" the current top layer on the new drive to the old drive.
AUFS does similar things inside docker's images and volumes.