Transferring NODE data

No, it's inside the machine. The RAID5 is running on four SATA ports, and for the temporary 8TB drive I've added an LSI controller to the same machine, as there are no more SATA ports available.

4 070 809 to be exact :slight_smile:

Well, there is a lot of stuff to learn with ZFS. If you already have a RAID6 setup ready to go, then I would stick with that… though I would look for a file system that uses checksums and is copy-on-write.

Those features make a world of difference for data integrity.

ZFS is nice, but it does involve a lot of learning, and the gear is also a bit different.

Regular RAID6 is also much easier to manage…

I kinda love and hate my ZFS… if I had been using regular RAID I would have finished my setup a long, long time ago. Of course, I'm new to ZFS, which is why I made some rookie mistakes, and those have cost me a lot of added time and work.

The worst thing about ZFS is the inability to remove or add drives easily… there is no way to turn a raidz1 into a raidz2, or to add an extra drive or remove one; on an LSI RAID controller that is nothing more than a click away and a few hours of the system working.

In basic concept they are sort of similar, RAID5/RAID6 vs raidz1/raidz2,
but ZFS just takes what one can do to the next level with all the caching and such… most proper LSI controllers can also do SSD caching though; you just need a hardware key for it.

I only have a fresh-installed Ubuntu Server on RAID1 SSDs. Currently the data is copying to one single large drive; afterwards I'll take the RAID5 drives, add some on top and create a RAID6. Then I will copy the data back from the single 8TB drive to the RAID6.
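Roughly what I have in mind for that step, assuming Linux software RAID (mdadm); the array name, the /dev/sdX device names and the mount point are only placeholders:

sudo mdadm --stop /dev/md1                         # old RAID5 array, data already copied off
sudo mdadm --zero-superblock /dev/sd[b-e]          # wipe the old member disks
sudo mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[b-g]
sudo mkfs.ext4 /dev/md1
sudo mount /dev/md1 /home/usr/storj_data           # then rsync the data back from the 8TB drive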

I would like to try and learn ZFS, but I'm very short on time as I have a couple of huge projects coming up, so there will be no time for playing around; that's reason no. 1. Reason 2: this node has been running for about 1 year, so it's probably not the best idea to experiment on it.

But if you could share the information that you were using for your build and learning, I would love to study it, and if I have some free time I might build another node with ZFS :slight_smile:

P.S. Otherwise it is so simple and easy to install an RPi and attach a single 8TB drive to it :smiley:

Well, for ZFS you want to use HBAs or such, because it's software RAID; ECC memory, and a good bit of it; and a half-decent CPU, because when you do operations on the datasets it will eat a bit of CPU time.

I'm happy that I made the switch to ZFS… it's by far a superior way of storing data and very mature software; it's nice when stuff just works. The only thing is that, like now, I'm doing 1 pool using 3 sets of 3-drive raidz1s, which locks me into using 3 drives: basically every time I want to add more space I should add 3 drives of equal size to the rest… it's not strictly required, and one can load-balance between them, I believe.
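Roughly what that looks like in zpool terms (the pool name and the /dev names are just placeholders):

zpool create tank raidz1 sda sdb sdc raidz1 sdd sde sdf raidz1 sdg sdh sdi   # one pool made of 3 x 3-drive raidz1 vdevs
zpool add tank raidz1 sdj sdk sdl   # growing the pool later means adding another whole raidz1 vdev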

ZFS tries to tackle some major storage issues in interesting ways, and some of them are great… but like all "new" technology it runs into different problems and paths that are difficult to develop. Not sure it's the future of storage, but it might just be the most secure data storage method for the consumer market at present.

And lots of stuff is very easy… but the inability to change the arrays is just so bad that I'm almost considering moving on to Ceph ASAP.
But I dunno if that is even a good idea… I doubt it will have ZFS's stability, but I expect it will be as adaptable as its namesake, the cephalopod.

https://ceph.io/ceph-storage/

Sadly, I think one needs a minimum of 2 storage servers to make it just barely work and 3 for production… so yeah, maybe I'm sticking with ZFS for a while. I don't really plan on having 3 servers any time soon… but who knows, plans change lol

I'm still a bit of a rookie; this video was what kinda clued me into ZFS

and this is a great guide on how to set up a system, with lots of dos and don'ts


Thank you @SGC, going to watch :slight_smile:

Hi guys! I'm trying to migrate my node to a new computer with better performance and capacity. Both PCs are on the same LAN and both run Windows 10. I'm using robocopy and it starts fine for a few seconds, but when it tries to access $RECYCLE.BIN it stops and shows Access denied, with a message saying something like "error 5 accessing the destination"… I don't know why it stops… a few files have been copied… and then it suddenly stops. Any idea, please? Thanks!

Adding: $RECYCLE.BIN seems to be a directory, but it doesn't exist at the source… I'm missing something… How can I skip this directory if it is just the recycle bin?

ADDING: /XD $RECYCLE.BIN… that's the parameter… I hope it'll work well.
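For anyone hitting the same thing, the full command could look something like this; the source and destination paths are just placeholders, and /XD is what skips the system directories that trigger the access-denied errors:

robocopy D:\ \\NEWPC\storj /MIR /XD "$RECYCLE.BIN" "System Volume Information" /R:1 /W:1 /LOG:robocopy.log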

In the future I advise always putting storj data (and anything else, for that matter) in a subfolder, even if it's the only thing on the disk. That way you can just copy the subfolder and avoid such issues altogether.


Yes, you're right. Another problem I was having was the attributes of that directory… robocopy made the directory hidden and system… I solved it with attrib -s -h on the directory… a lot of time spent figuring that out! Thank you very much.

OK. I've done this already 3 times or so, but today I've got another situation. I'm migrating the node's data to a bigger HDD. Currently I'm syncing it over VPN; once finished, I will bring the new HDD to the same location and run rsync again for the final sync.

I'm using the same command as I've used before, but this time I get plenty of errors. Besides that, I see that the new HDD is filling up:

rsync: send_files failed to open “/home/usr/storj_data/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/4r/gagthclxgxy5vksojqt6y3xseolty4grbbznh3xhwgpzgh32cq.sj1”: Permission denied (13)

rsync: send_files failed to open “/home/usr/storj_data/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/4r/gavoflist3ocqx2ivxllnw3pniuo2mbu7kuusastmleogkxdsq.sj1”: Permission denied (13)

rsync: send_files failed to open “/home/usr/storj_data/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/4r/gb5htyot4bqrlcoimzv6psistc43ct7aulmdrfjz4qjzyhet6a.sj1”: Permission denied (13)

I believe I can leave this sync with these errors for now, as this is only the first sync. But the previous times I did not have this error. Any suggestions as to what's wrong this time?

Are you using sudo/root to rsync on both sides?

I use sudo only on one side:

sudo rsync -aP usr@192.168.1.15:/home/usr/storj_data/ /home/usr/storj/

Edit: oh wait, you use sudo on the receiving side but copy as usr on the sending side. That can cause problems if all your files are owned by root on the sending side.


I was thinking about why some files succeed and some do NOT, and realised that the source (sending side) already went through one migration a few months ago. That's probably why some files are readable by "usr" and some are not.

But the question then is how should I solve this? Maybe I need to log on to the remote (sending) machine and change the permissions of the files.

sudo chown usr:usr /home/usr/storj_data ???

It depends on how your node is started. Is your docker container running as root? Then every new piece it receives will be owned by root again and you wouldn't win much. In that case, syncing from the sending machine to the new machine might be easier.

I run my docker container as my normal user so I did:
chown -R usr:usr /home/usr/storj_data

I've added my user to the docker group, so I run docker commands without sudo.

sudo usermod -aG docker $USER

But maybe you are right, I will try to run rsync from the source side…

That is a completely different topic.
So if you don't know, then you're definitely still running the container as root. Otherwise you would have this in your run command: --user 1000 (or similar).

Then I'd suggest syncing from the sending server instead of the receiving one. That way you send as root and receive as usr.
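For reference, a run command with that flag looks roughly like this; the wallet, e-mail, address and paths are placeholders and the image tag may differ from your setup:

docker run -d --restart unless-stopped --stop-timeout 300 \
  --user $(id -u):$(id -g) \
  -p 28967:28967 \
  -e WALLET="0x0000000000000000000000000000000000000000" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.example:28967" \
  -e STORAGE="8TB" \
  --mount type=bind,source=/home/usr/identity/storagenode,destination=/app/identity \
  --mount type=bind,source=/home/usr/storj_data,destination=/app/config \
  --name storagenode storjlabs/storagenode:latest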

Ok… the problem is, I can't start rsync from the "sending" side, as I have a one-way VPN connection… :slight_smile:

Then again, I think maybe I should use this command on the sending side to change permissions? But the node is online and running. Is it safe to run this command? It will not damage anything? :slight_smile:

sudo chown -R usr:usr /home/usr/storj_data

Well, when the docker container runs as root it can read the files even if they belong to usr, but any new file the container creates will be owned by root again, so you might not be able to read those and will get permission errors again.

A different (arguably less safe) method would be to give the root user on the receiving side a strong password, allow SSH access for root, and then rsync as root@receiver.
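Something like this, run from the sending machine; the receiver's address is a placeholder, and root logins over SSH usually require PermitRootLogin to be enabled in the receiver's sshd_config:

sudo rsync -aP /home/usr/storj_data/ root@192.168.1.20:/home/usr/storj/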

Maybe there are other options I can’t think of right now.

Ok… you're right…
Probably I will try changing the permissions, so I can get as many files as possible over the VPN. Once the drive is at the same location I have to do a last sync anyway…

Thank you.