Node operator can be a business?

Greetings @thedaveCA, my name is Dominick and I’m the author of the hot-rodding document above. Your workstation is very capable, so we should be able to get some great speeds. When we do hit a limit, as you noted, it should be the ISP.

Ideally, backups are packaged into relatively large archives via something like restic + rclone. For now, let’s not overcomplicate and just focus on moving data fast.
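If you do go that route later, a minimal sketch could look like the following (the rclone remote name storj and the repo path backups are just placeholders for whatever you configure):

restic -r rclone:storj:backups init
restic -r rclone:storj:backups backup /data

restic handles packing many small files into larger archives before upload, which is the packaging mentioned above.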

Assuming your backup data is already flowing over, we can focus on fast retrieval. Our Uplink CLI is ideal for this.

Lots of files <64M
Moving 48 files in parallel
uplink cp --recursive sj://bucket/path /tmp --transfers=48

A 10G archive
Moving 48 segments in parallel requires a file of at least 64M × 48 = 3,072M. A 1G file has 16 segments (1,024M / 64M), so a parallelism of 16 can be used per 1G.
uplink cp sj://bucket/10G.img /tmp --parallelism 48
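For example, if you were pulling a single 1G object instead, its 16 segments cap the useful parallelism at 16 (the file name here is just a placeholder):

uplink cp sj://bucket/1G.img /tmp --parallelism 16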

A mix of both
Moving 8 files in parallel with up to 8 segments each is, at peak, 64 concurrent segment transfers, roughly the same load as 48 parallel transfers or 48 segments. In practice, this won’t be as fast as separating small and large files, but it’s a good start. Scale --transfers up if you have mostly small files, and conversely scale --transfers down and --parallelism up if you have mostly large files. If you were moving 512MB files, the ideal setting for a target of 48 would be --parallelism 4 --transfers=12 (shown after the command below).
uplink cp --recursive sj://bucket/path /tmp --parallelism 8 --transfers=8
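And for the 512MB case mentioned above (the path is just a placeholder), the same target of 48 concurrent segment transfers would look like:

uplink cp --recursive sj://bucket/path /tmp --parallelism 4 --transfers=12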

For fast retrievals in your environment, the Uplink CLI is the way to go. Happy to work with you to demonstrate the best possible RTO. We commonly see speeds to a single endpoint (with a supporting network) exceed and maintain 300MB/s (2,400Mb/s).

Excited to help!

P.S. I can also help with rclone but figured I’d share the best way first given the strong client compute environment.
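For reference, a comparable rclone pull might look like this (the remote name storj and the path are placeholders for your own configuration):

rclone copy storj:bucket/path /tmp --transfers 48

but given your hardware, the Uplink CLI approach above is where I’d start.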
