Failing HDD and slow rsync

Hi there,
I have a failing disk.
I have already tried to rsync all the files, but it keeps taking forever since it is 3 TB with lots of small files…

My questions are:

  1. Is there any other way I can copy the files faster?
    I tried it with tar until the HDD failed - it was reasonably faster, but it sucks when there are already existing files at the destination.

Running multiple rsync instances on subfolders hits the disk so hard that it drops to read-only and the node restarts.

  2. Is the node offline disqualification currently active?

Thanks for any support

IIRC the node can be offline for up to a month, so hopefully you can copy the data faster than that.

I assume by “failing drive” you mean that it has some bad sectors.

Try dd or, if the drive has bad sectors, ddrescue.
Either will create an image of the drive while reading it sequentially, which should be fast. After you have the image on a working drive, you can mount it and copy the files with rsync or tar to wherever you want.

Assuming the failing drive is sda and the good drive is mounted at /mnt:

  1. Unmount the failing drive

  2. Run dd if=/dev/sda bs=1M of=/mnt/storj.img status=progress oflag=direct
    Older versions of dd do not have the “status=progress” option; if you get an error about it, just omit it - all it does is show the progress.

  3. If dd fails with a read error, try ddrescue: ddrescue -b 4096 /dev/sda /mnt/storj.img /mnt/storj.map
    This skips over the bad sectors and retries reading them later; you can also stop and resume it (if you need to reset the drive or whatever).
    ddrescue can take longer, because it retries the bad sectors multiple times, but you may still end up with “holes” in the data.

  4. After dd or ddrescue completes, mount the image: kpartx -av /mnt/storj.img. This creates /dev/mapper/loop* devices, which you can mount as you would a hard drive partition.

  5. Copy the files to their final place with rsync or tar.
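
Putting it together, a minimal sketch of the whole sequence (assuming, as above, the failing drive is /dev/sda and the good drive is mounted at /mnt; the mount point /mnt/recovered and the final destination path are placeholders you would adjust):

    # 1. Unmount the failing drive (adjust the partition name to yours)
    umount /dev/sda1

    # 2. Image the drive sequentially; drop status=progress on older dd versions
    dd if=/dev/sda bs=1M of=/mnt/storj.img status=progress oflag=direct

    # 3. If dd aborts on read errors, fall back to ddrescue;
    #    the map file lets you stop and resume later
    ddrescue -b 4096 /dev/sda /mnt/storj.img /mnt/storj.map

    # 4. Map the partitions inside the image and mount one
    kpartx -av /mnt/storj.img
    mkdir -p /mnt/recovered
    mount /dev/mapper/loop0p1 /mnt/recovered   # the exact loop name varies

    # 5. Copy the files to their final place
    rsync -a /mnt/recovered/ /path/to/final/destination/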


Thanks, I’ll stick with rsync then, since downtime is not an issue.

But thanks for mentioning ddrescue!

wcp (GitHub: wheybags/wcp) can achieve a read speed of about 25 MB/s when reading many small files from a 7200 RPM HDD. If you would like to try using wcp to copy a Storj node, I recommend editing the wcp source code to increase the default limit of 900 open file descriptors to at least 15000 (which in turn requires a ulimit -n of approximately 16000).
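
For what it’s worth, a rough sketch of how that could look (this assumes a standard CMake build of wcp and that it is invoked like cp with a source and a destination; the paths are placeholders):

    # Raise the per-shell open-file-descriptor limit (~16000, per the note above)
    ulimit -n 16000

    # Build wcp from source (after editing its file-descriptor limit as described)
    git clone https://github.com/wheybags/wcp.git
    cd wcp && mkdir build && cd build
    cmake .. && make

    # Copy the node data
    ./wcp /mnt/source/storage /mnt/dest/storage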