Transferring NODE data

Hello, I have an interesting situation with an old node that needs to be upgraded.
Currently the node and the Ubuntu Server OS run on a RAID5 array (4x2TB drives). In total it holds around 5TB of data.
I want to reinstall the OS on mirrored SSD’s, change the data array from RAID5 to RAID6, and add extra drives.

I have a spare 8TB drive, so the plan was: stop the node, copy the data to the single drive at about 250MB/s (±5 hours), reinstall the OS, create the new array and put the data back onto the RAID6. I thought the bottleneck would be the 8TB drive’s write speed (±250MB/s), since the RAID5 should give roughly a 3x read speed gain. But… reality is different. I see in Zabbix that the read/write speed is 25MB/s :expressionless: so the copy will take around ±50 hours.

Two questions:

  1. Why is the read/write speed so low? Maybe because it contains a lot of small files?
  2. Any suggestions for this migration/update? My node will probably be disqualified if the data copy takes more than 2 days.

rsync … and see if this helps:

https://documentation.storj.io/resources/faq/migrate-my-node
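
Roughly like this, as a sketch — the paths and the container name here are just placeholders, adjust them to your actual mounts and setup:

    # first pass(es) while the node keeps running; repeat until only little data changes
    rsync -aP /mnt/raid5/storagenode/ /mnt/temp-8tb/storagenode/
    # then stop the node and do one final pass that also removes files deleted in the meantime
    docker stop -t 300 storagenode
    rsync -aP --delete /mnt/raid5/storagenode/ /mnt/temp-8tb/storagenode/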


Use FreeFileSync to transfer the files from disk to disk while the node is online. Repeat the operation after the node is stopped to sync it completely, then restart the node with the new path in config.yaml.

I know, you know that I do not like RAID5 :slight_smile:
However, the slow read speed is expected, because the storagenode data consists of thousands of tiny files. RAID 5/6 does not handle them well.
The only good and fast RAID for this case is RAID10 in my opinion, or no RAID at all.

RAID can reduce your operating costs (you will manage only one node), but at the expense of the cost of HDD for redundancy.
I do not want to start a new RAID vs. no RAID war, it’s just my opinion.
If you want to dig deeper, you can read here:


I also don’t like RAID5; I have no idea how it happened that I set up RAID5 instead of RAID6 on this machine a long time ago. That’s why I need this upgrade.

This week I had a failure of one drive in a RAID6 configuration. It took me ±10 min to replace the drive and everything kept running. And as I’m using old drives for this project, RAID is still the better option for me.

@node1
A RAID5 doesn’t have more IOPS than the slowest drive in the array, and drives that aren’t well matched can push the numbers even lower.
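
If you want to see whether you are IOPS-bound rather than bandwidth-bound, something like iostat (from the sysstat package) will show it:

    # extended per-device stats every 5 seconds: r/s and w/s are the IOPS,
    # rkB/s and wkB/s the throughput, %util shows whether the disks are saturated
    iostat -x 5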

I’m also in the process of copying out my 8.5TB storagenode; so far I’m closing in on 5 million files copied. It does seem more difficult to work with compared to normal data, but that might be down to the data being incompressible, so any bottleneck in the system that is sometimes hidden by compression on the buses or such has to suffer through the full raw data stream…

Though I would suspect your slowdown is mainly due to low IOPS… I’ve also seen low performance; I think it might be down to my one raidz array having 4K drives in it and thus ending up as ashift 12, which is basically 4Kn format… and thus my 512n or 512e drives run with 8x to 16x I/O amplification… or I hope that’s the problem… which is why I’m moving it to another pool…

I might be on day 5 of the migration now… and this is just a temp drive… then I need to scrub it, make a new pool and copy the whole damn thing over there… so that’s going to be fun…

My new pool will be 3x raidz1 with 3 HDDs in each, so a total of 9 drives with 3 drives’ worth of capacity lost to redundancy.
But this will give me the IOPS of 3 HDDs and the read and write speeds of roughly 6 drives.
Then on top I put an MLC SSD as SLOG / OS drive and a dedicated L2ARC on a 750GB QLC SSD,
and all of my drives will be running ashift 9, i.e. 512n sector size, because that’s what most of the hardware is.
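
For reference, a pool like that could be created roughly like this — the pool name and device names below are made up (in practice you’d use /dev/disk/by-id paths):

    # three 3-disk raidz1 vdevs in one pool, ashift 9 for 512-byte sectors
    zpool create -o ashift=9 tank \
        raidz1 sda sdb sdc \
        raidz1 sdd sde sdf \
        raidz1 sdg sdh sdi
    # separate SLOG and L2ARC devices on the SSDs
    zpool add tank log nvme0n1p2
    zpool add tank cache nvme1n1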

Setting up RAID can be really complicated, and if you don’t get it right you can sometimes end up much worse off than you were without it.

No way to fix your IOPS issue though… aside from maybe using some sort of tiered storage like Windows Storage Spaces on top of your array… so you have an SSD cache of sorts to eat the IOPS and then spit out sequential writes of rarely-used data to the slower drives…

And I can warmly recommend ZFS if you are technically minded and don’t mind using Linux :smiley:
Such an awesome file system… though expanding it and …

Wait… if you were on a regular RAID5 then you could just add an extra drive to your array and make it a RAID6… at least if it is a proper LSI RAID controller…

I was wondering the same. It should also work with software RAID under Linux.

https://www.google.com/search?q=mdadm+convert+raid5+to+raid6
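
With mdadm the reshape looks roughly like this — assuming the array is /dev/md0 and the new disk is /dev/sde, both made up here:

    # add the new disk, then grow the array from RAID5 to RAID6 across 5 devices
    mdadm --add /dev/md0 /dev/sde
    mdadm --grow /dev/md0 --level=6 --raid-devices=5 --backup-file=/root/md0-grow.bak
    # the backup file must live on a drive outside the array; the reshape itself runs for many hours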

Correct. There is a way to reshape from RAID5 to RAID6. I’m using mdadm under Ubuntu. But currently the OS is on the same RAID5, and these drives are bootable. I’m also changing all the hardware and already have a fresh Ubuntu installation on the new machine with an LSI controller waiting for the data. Of course there is a way to move the data drives over, ignore the currently installed OS and leave it there. But I have additional partitions on these drives as well. So the wish is to clean up everything unwanted and recreate a fresh, clean RAID with no traces of the old OS and ghost partitions.

ZFS is also under consideration. But with that much varied data, will it make sense? There are probably no files that are commonly used and could be cached. Another thought is that the bottleneck in my case is the 300Mbps internet connection at this place, not the read speed of the RAID (I believe)… Or am I wrong?

Do you transfer the data over the internet?

No, it’s inside the machine. The RAID5 runs on four SATA ports, while for the temporary 8TB drive I’ve added an LSI controller to the same machine, as there are no more SATA ports available.

4 070 809 to be exact :slight_smile:

Well, there is a lot of stuff to learn with ZFS; if you already have a RAID6 setup ready to go, then I would stick with that… though I would look for a file system that uses checksums…
and is copy-on-write…

Those features make a world of difference for data integrity.

zfs is nice, but it does involve a lot of learning and the gear is also a bit different.

Regular RAID6 is also much easier to manage…

I kinda love and hate my ZFS… if I had been using regular RAID I would have finished my setup a long, long time ago… of course I’m new to ZFS, which is why I made some rookie mistakes… and that has cost me a lot of added time and work.

The worst thing about ZFS is the inability to remove or add drives easily… no way to turn a raidz1 into a raidz2, or add an extra drive or remove one… something which on an LSI RAID controller is nothing more than a click away and a few hours of the system working.

In the basic concepts they are sort of similar, RAID5/RAID6 vs raidz1/raidz2,
but ZFS just takes what one can do to the next level with all the caching and such… most proper LSI controllers can also do SSD caching though… you just need a HW key for it.

I only have a freshly installed Ubuntu Server on RAID1 SSD’s. Currently the data is copying to one single large drive; afterwards I’ll take the RAID5 drives, add some on top and create a RAID6. Then I’ll copy the data back from the single 8TB drive to the RAID6.

I would like to try and learn ZFS, but I’m very short on time as I have a couple of huge projects coming up, so there will be no time for playing around; that’s reason no. 1. Reason no. 2: this node has been running for about 1 year, so it’s probably not the best candidate for experiments.

But if you could share the information you used for your build and learning, I would love to study it, and if I get some free time I might build another node with ZFS :slight_smile:

P.S. Otherwise it is so simple and easy to install an RPi and attach a single 8TB drive to it :smiley:

Well, for ZFS you want to use HBAs or such, because it’s software RAID; ECC memory, and a good bit of it; and a half-decent CPU, because when you do operations on the datasets it will eat a bit of CPU time.

I’m happy that I made the switch to ZFS… it’s by far a superior way of storing data and very mature software; it’s nice when stuff just works… The only thing is that, like now, I’m doing one pool using 3 sets of 3-drive raidz1s, which basically locks me into 3 drives at a time: every time I want to add more space I should add 3 drives of equal size to the rest… it’s not strictly required… and one can load-balance between them, I believe…
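
Growing it is then just a matter of adding another vdev of the same shape, roughly like this (made-up pool and device names again):

    # adds a fourth 3-disk raidz1 vdev; existing data is not rebalanced, new writes spread across all vdevs
    zpool add tank raidz1 sdj sdk sdl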

ZFS tries to tackle some major storage issues in interesting ways… and some of them are great… but like all “new” technology it runs into different problems and paths that are difficult to develop… I’m not sure it’s the future of storage… but it might just be the most secure data storage method for the consumer market at present.

And lots of stuff is very easy… but the inability to change the arrays is just so bad… I’m almost considering moving on to Ceph ASAP,
but I dunno if that is even a good idea… I doubt it will have ZFS’s stability… but I expect it will be as adaptable as its name… cephalopod

https://ceph.io/ceph-storage/

Sadly I think one needs a minimum of 2 storage servers to make it just barely work, and 3 for production… so yeah… maybe sticking with ZFS for a while… I don’t really plan on having 3 servers any time soon… but who knows… plans change lol

I’m still a bit of a rookie; this video was what kinda clued me into ZFS

and this is a great guide on how to set up a system, with lots of dos and don’ts


Thank you @SGC, going to watch :slight_smile:

Hi guys! I’m trying to migrate my node to a new computer with better performance and capacity. Both PCs are on the same LAN and both run Windows 10. I’m using robocopy and it starts fine for a few seconds, but when it tries to access $RECYCLE.BIN it stops and shows Access denied, with a message saying something like “error 5 accessing the destination”… I don’t know why it stops… a few files have been copied… and then it suddenly stops. Any idea, please?? Thanks!

Adding: $RECYCLE.BIN seems to be a directory, but it doesn’t exist in the source… I’m missing something… How can I skip this directory if it is just the recycle bin??

ADDING: /XD $RECYCLE.BIN… that’s the parameter… I hope it’ll work well.
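
For anyone else hitting this, the full command looks roughly like the sketch below — the drive letters and destination folder are just examples, use your actual path or a UNC path to the new PC:

    robocopy D:\ E:\storagenode /MIR /XD "$RECYCLE.BIN" "System Volume Information" /R:1 /W:1
    :: /MIR mirrors the source (it deletes extras in the destination, so point it at a dedicated folder),
    :: /XD skips the listed directories, /R:1 /W:1 keep it from retrying locked files for ages
    :: note: copying from the root of a drive can also leave the destination folder marked hidden+system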

In the future I advise always putting storj data (and anything else for that matter) in a subfolder, even if it’s the only thing on the disk. That way you can just copy the subfolder and avoid such issues altogether.


Yes, you’re right. Another problem I had was the attributes of that directory… robocopy made the directory hidden and system… I solved it with attrib -s -h on the directory… a lot of time spent figuring that out! Thank you very much.

OK. I’ve done this 3 times already or something like that, but today I’ve got a new situation. I’m migrating the node’s data to a bigger HDD. Currently I’m syncing it over VPN; once finished I will bring the new HDD to the same location and run rsync again for the final sync.

I’m using the same command I’ve used before, but this time I get plenty of errors; besides that, I see that the new HDD is filling up:

rsync: send_files failed to open “/home/usr/storj_data/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/4r/gagthclxgxy5vksojqt6y3xseolty4grbbznh3xhwgpzgh32cq.sj1”: Permission denied (13)

rsync: send_files failed to open “/home/usr/storj_data/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/4r/gavoflist3ocqx2ivxllnw3pniuo2mbu7kuusastmleogkxdsq.sj1”: Permission denied (13)

rsync: send_files failed to open “/home/usr/storj_data/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/4r/gb5htyot4bqrlcoimzv6psistc43ct7aulmdrfjz4qjzyhet6a.sj1”: Permission denied (13)

I believe I can leave this sync running with these errors for now, as this is only the first pass. But the previous times I did not have this error. Any suggestions on what’s wrong this time?

Are you using sudo/root for rsync on both sides?
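
Something like this usually gets around the permission errors — the paths and hostname here are placeholders, adjust to your setup:

    # read everything as root locally, and make the remote end run rsync as root too
    # (the remote user needs passwordless sudo for --rsync-path="sudo rsync" to work)
    sudo rsync -aP --rsync-path="sudo rsync" \
        /home/usr/storj_data/ user@newhost:/mnt/new-hdd/storj_data/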