Best Practice for Transferring Data To New Drive

What is the best practice for transferring to a new drive and increasing capacity?
I set up my node and honestly did not do enough research first. I have a 1TB SMR drive and, after reading up, realized this is not best practice. I have ordered a 20TB drive that should be here tomorrow, and I would like to move to this drive without causing issues for my node or the Storj network. I currently have about 96GB on the node. Note: I am running the node on a Raspberry Pi 4.

  1. Rsync data from old drive to new drive
  2. Depending on how long the first run took, run it again
  3. Stop the node
  4. Run it again
  5. Change node settings to new drive
  6. Start the node

You could also do the first pass with cp instead of rsync (it should be a little faster), but it is harder to see progress with cp.
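A minimal sketch of those steps for a Docker node, assuming the example mount points /mnt/storj (old) and /mnt/storj2 (new); replace the paths and the container name with your own:

rsync -aP /mnt/storj/storagenode/ /mnt/storj2/storagenode-new/    # first pass while the node keeps running
rsync -aP /mnt/storj/storagenode/ /mnt/storj2/storagenode-new/    # repeat if the first pass took long
docker stop -t 300 storagenode                                    # stop the node
rsync -aP --delete /mnt/storj/storagenode/ /mnt/storj2/storagenode-new/   # final pass, also removes files deleted in the meantime
# point the node's --mount parameters (or config.yaml paths) at /mnt/storj2/storagenode-new, then start the node again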


The official recommendation is discussed above (cp followed by rsync).

Other possibilities:

  1. Clone the whole volume to the new disk and then expand it. You don't want to copy file by file; it is very slow.
  2. Copy everything from the root except the file storage folder, and write a script to symlink the pieces from the old disk to the new one (see the sketch after this list). That way your old data will be served from the old drive and new data from the new one. You can then gradually replace the symlinks with their targets if you want to free up the old drive. This may not be much faster, but it combines the storage.
  3. Use mergerfs, if you don't mind relying on FUSE. You get combined storage, and all new data will go to the empty drive if the old one is full; if not, you can control the placement policy.
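For option 2, a rough sketch of the symlink idea, assuming the example paths used later in this thread (untested, so try it on a copy or a small subset first):

OLD=/mnt/storj/storagenode/storage/blobs
NEW=/mnt/storj2/storagenode-new/storage/blobs
cd "$OLD"
find . -type d -exec mkdir -p "$NEW/{}" \;        # recreate the blobs directory tree on the new drive
find . -type f -exec ln -s "$OLD/{}" "$NEW/{}" \; # link every existing piece back to the old drive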

With only 96GB of data I wouldn’t bother with multiple rsync passes. Just stop your node and copy everything to the new drive.


I would add that you need to run the last rsync with the --delete option, otherwise you may end up with corrupted databases.


Well, I have the new drive, and I set it up and mounted it; however, I cannot copy the data with rsync. I keep getting:
sudo rsync -aP /mnt/storj/identity/storagenode/ /mnt/storj2/storagenode-new/identity/
sending incremental file list
rsync: [Receiver] mkdir "/mnt/storj2/storagenode-new/identity" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(787) [Receiver=3.2.3]

When I try to manually create the folders, I get permission denied:

sudo mkdir -p …


It was my error, I forgot to run sudo chown -R pi:pi /mnt/storj2 after mounting the disk.

I was able to get it migrated and upgraded! Thank you all for the help.
This is the guide I followed.
First, initialize and mount the new disk as follows.

Formatting and mounting your HDD

Please do not reformat your HDD if it already contains the storage node’s data and you want only to mount it after an OS reinstall!

Format your hard drive

If you just reinstalled the system on the SD card, you can skip this step and continue to Mount your hard drive below, otherwise, please proceed with:

sudo apt-get install gdisk -y
sudo gdisk /dev/sda

Then type n, and press Enter until you exit out of the command. Write the changes to the disk with w, and confirm with y.

Now we will format the drive with the ext4 filesystem. (Do not try to use btrfs or zfs on models with less than 4GiB of RAM! exFAT is strictly not recommended in any setup, and NTFS uses a lot of RAM on Linux; you can also lose data if this disk was used on Windows, since modern Windows uses dedup and compression features by default and they are not fully supported under Linux.)

sudo mkfs.ext4 /dev/sda1

Mount your hard drive

sudo mkdir /mnt/storj
lsblk

Find your drive and request its UUID:

sudo blkid /dev/<location (example: sda1)>

Copy UUID and open the /etc/fstab file in a text editor:

sudo nano /etc/fstab

Then add the following line to the end (replace with the copied UUID):

UUID=<copied-UUID> /mnt/storj ext4 defaults 0 2

Save the /etc/fstab (Ctrl-O and confirm saving, then exit with Ctrl-X)

Check your mount:

sudo mount -a

It should not print any errors. Otherwise, please check the UUID and the filesystem type. Do not reboot until you fix the error, otherwise your Pi may get stuck on boot.

To check that everything is OK:

df -HT

You should see your disk and free space on it, mounted to /mnt/storj.

If the mount is OK, you can proceed further.

sudo chown -R pi:pi /mnt/storj

Then rsync the data over using the following guide.
Migrating with rsync
We will assume that your parameters look like this:
the source folder where the existing identity is located is /mnt/storj/identity/storagenode;
the source folder where the existing stored data is located is /mnt/storj/storagenode/storage;
the source folder where the existing orders folder is located is /mnt/storj/storagenode/orders;
the destination folder the existing identity will be copied to is /mnt/storj2/storagenode-new/identity;
the destination folder the existing stored data will be copied to is /mnt/storj2/storagenode-new/storage;
the destination folder the existing orders will be copied to is /mnt/storj2/storagenode-new/orders.

To migrate your identity, orders and data to the new location, you can use the rsync command (please replace the example paths mentioned above with your own!):
1. Open a new terminal.
2. Keep your original storage node running.
3. Copy the identity:
rsync -aP /mnt/storj/identity/storagenode/ /mnt/storj2/storagenode-new/identity/
4. Copy the orders:
rsync -aP /mnt/storj/storagenode/orders/ /mnt/storj2/storagenode-new/orders/
5. Copy the data:
rsync -aP /mnt/storj/storagenode/storage/ /mnt/storj2/storagenode-new/storage/
6. Repeat the orders (step 4) and data (step 5) copying commands a few more times until the difference is negligible, then
7. Stop the storage node
docker stop -t 300 storagenode
8. Remove the old container
docker rm storagenode
9. Run the copying commands with the --delete parameter to remove deleted files from the destination:
rsync -aP --delete /mnt/storj/storagenode/orders/ /mnt/storj2/storagenode-new/orders/
rsync -aP --delete /mnt/storj/storagenode/storage/ /mnt/storj2/storagenode-new/storage/
10. Now you can copy the config.yaml file and revocation.db to the new location:
cp /mnt/storj/storagenode/config.yaml /mnt/storj2/storagenode-new/
cp /mnt/storj/storagenode/revocation.db /mnt/storj2/storagenode-new/revocation.db

11. After you have copied over all the necessary files, update the --mount parameters of your storage node. For our example, it will look like this (we only show a partial example of the new --mount parameter lines, not the entire docker run command!):
--mount type=bind,source=/mnt/storj2/storagenode-new/identity,destination=/app/identity \
--mount type=bind,source=/mnt/storj2/storagenode-new,destination=/app/config \

The network-attached storage location could work, but it is neither supported nor recommended!
Please note: we intentionally specified /mnt/storj2/storagenode-new as the data source in the --mount parameter and not /mnt/storj2/storagenode-new/storage, because the storagenode docker container will add a subfolder called storage to the path automatically. So please make sure that your data folder contains a storage subfolder with all the data inside (blobs folder, database files, etc.), otherwise the node will start from scratch since it cannot find the data in the right subfolder, and it will be disqualified in a few hours.
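For context, a hedged sketch of what the full docker run command could look like with the new paths from this example; the wallet, email, address and storage values are placeholders, replace them with your own:

docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
    -e WALLET="0x..." \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.external.address:28967" \
    -e STORAGE="18TB" \
    --mount type=bind,source=/mnt/storj2/storagenode-new/identity,destination=/app/identity \
    --mount type=bind,source=/mnt/storj2/storagenode-new,destination=/app/config \
    --name storagenode storjlabs/storagenode:latest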


I recommend you use a label instead of the UUID:

mkfs.ext4 -L storj01

Then in your fstab:

LABEL=storj01 /mnt/storj01 ext4 defaults,noatime 0 1

Also, if you already have it formatted with ext4, you can use the command

e2label

To give it a label.
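For example (assuming your partition is /dev/sda1; adjust to your own device):

e2label /dev/sda1 storj01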

Also, you don’t need to create a partition to format it to ext4.
Just do

mkfs.ext4 -L storj01 /dev/sdx


It’s better to have it:

I personally prefer to use LVM; it can help a lot with data migration later, without any downtime.
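As an illustration, a minimal sketch of such an online migration, assuming a volume group named storj with the data currently on /dev/sda1 and the new disk at /dev/sdb1 (both names are examples):

sudo pvcreate /dev/sdb1          # prepare the new disk as a physical volume
sudo vgextend storj /dev/sdb1    # add it to the existing volume group
sudo pvmove /dev/sda1 /dev/sdb1  # move all extents to the new disk while the node keeps running
sudo vgreduce storj /dev/sda1    # remove the old disk from the volume group
sudo pvremove /dev/sda1          # clear the LVM signature from the old disk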


It might help with migration, but what happens if one of the drives fails during the move of extents? OK, you might say that the node would be gone anyway…

It also adds a layer of complexity, which might be too much for the average user.


It depends on which drive. If it is the destination drive, there is no problem: data is actually deleted from the source only when the move process has finished, so you can just abort the move. However, if the destination drive is completely broken, there are some tricks to remove the missing PV: luks - Removing failing drive from LVM volume group ... and recovering partial data from an incomplete LV (with a missing PV) - Server Fault
If the source has died, well, that is the end of the story.
Of course, it would be more robust if you created a mirror or a volume with parity.

I have to agree. But adding an LVM signature to the existing drive later, if you decide to use it, will be tricky: Moving from Windows to Ubuntu and back