Uplink uploading a 369.53 GiB file

so I’m trying to upload a 369.53 GiB file using

uplink cp source destination -p 96

because according to the hot-rodding paper that is the best course of action for files over 100 GB (well, it says to use rclone, but I want uplink). I have ample CPU (an i9, otherwise idling) and RAM (64 GB available and just 22 in use), and I don’t care about the 2.68x upload expansion (using native uplink rather than S3): I want the end-to-end encryption (for my backups). I have 150 Mbps upload and my router says I’m maxing that out, but the PowerShell window says I’m uploading at anywhere from 0 to 16 MiB/s.

It has now crashed (uplink: stream: ecclient: successful puts (75) less than success threshold (80)), printing several Node IDs, IP addresses, and port numbers.

I don’t want to share the screenshot because of the data, but I’m surprised the error message shows me the addresses and ports of nodes. Is this considered public? (I guess Shodan etc.?)

So, any idea about that error message? And should I be seeing node IPs?


The information is available to you anyway. Uplink is open source. You can just modify the source to show all node IDs and IPs you upload to.

Fair enough RE: IP addresses (I suppose)

I’ve tried to rerun the command (now that it’s failed) and I’m getting this error. Is there an article that explains it? I can’t find anything.

uplink: metaclient: metabase: object already exists

When I log in to the website (with the same access) I can’t see the file.

Many thanks to @BrightSilence (who DM’d me) and @Pentium100; all understood now, using

uplink ls sj://bucket --pending
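
In case anyone else hits the metabase: object already exists error: the crashed cp left a pending (uncommitted) upload behind, which is why the file was invisible on the website but blocked a re-upload. Assuming uplink rm supports the same flag (check uplink rm --help to be sure), the stuck object can be cancelled before retrying:

uplink rm sj://bucket/path/to/file --pending

(the path here is a placeholder for the actual object key)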

When you increase parallelism or the number of transfers, your uplink creates 110 * parallelism * transfers upstream connections and starts them all in parallel. If you have maxed out your bandwidth, you can work out the speed of each connection. If that speed looks too slow to a node, the node will drop the connection.
So, in your case I would recommend reducing parallelism (and you may increase --parallelism-chunk-size if you have more RAM) while your bandwidth stays maxed out.
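
To make that concrete (rough numbers; 110 is the per-segment connection count above, and this ignores protocol overhead):

110 connections * 96 parallelism = 10,560 simultaneous connections
150 Mbps / 10,560 connections ≈ 14 kbps per connection

At roughly 14 kbps each, individual piece uploads look stalled to the receiving nodes, so connections get dropped until fewer than the 80 required pieces succeed, which matches your "successful puts (75) less than success threshold (80)" error.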

The second thing is your router. If it’s unable to handle that many parallel connections, it will start to drop some of them.
Since the uplink CLI doesn’t have retry functionality, unlike rclone, the whole transfer may fail if the number of dropped connections is too great. The solution, without replacing the router, is to either reduce the number of connections or use rclone. However, if the router also cannot recover after dropped connections, the number of errors will grow; if you reboot the router, it will work normally for a while until it fails again.
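
If you try the rclone route, a minimal sketch (assuming a remote named storj already configured with rclone’s native Storj backend; the file name, bucket, and flag values are illustrative):

rclone copy ./backup.img storj:my-bucket --transfers 1 --progress --retries 5 --low-level-retries 10

rclone retries failed operations on its own, so a few dropped connections don’t abort the whole 369 GiB transfer.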

About IPs and ports - yes, this is public information for authenticated clients.

I suppose you mean that your upstream bandwidth is 150 Mbps; that’s 18.75 MB/s (about 17.9 MiB/s).
Your OS uses some bandwidth for itself (especially if you have not disabled tracking, telemetry collection, and reporting to MS); you can check bandwidth usage in Task Manager, on the Performance tab, for your network adapter. Also, if you have other devices, they may be using your bandwidth too.
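
That may also explain why the PowerShell window shows less than the router does, if we assume the progress counter measures object bytes while the wire carries the erasure-coded expansion:

150 Mbps / 8 = 18.75 MB/s ≈ 17.9 MiB/s on the wire
17.9 MiB/s / 2.68 expansion ≈ 6.7 MiB/s of object data

so a saturated 150 Mbps line can still show a much lower object-data rate in uplink’s output.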


I used -p 10 and it already maxed out my 500 Mbit connection. If I tried 20, I got errors: there is some timeout, and when there are lots of parallel connections at the same time, each connection has a low speed and can end up terminated. Only after I found the best ratio between parallelism and my connection speed did it work the best way.


Many thanks for the info, all very interesting!

I imagine, as Alexey said, the number of connections was too high and some failed, which brought the whole thing down. What do you recommend the --parallelism-chunk-size value be? I see the default is 64.

I guess the chunks we’re referring to here are the pieces in that doc?

I see Vadim’s point and won’t go above 10. If that maxes out a 500 Mbps connection, my 150 Mbps will be fine, I’m sure, with just the default of 1 (or maybe 2).
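
So my retry will look something like this (paths are placeholders, and I’m assuming the flag accepts a size suffix like 128MiB; check uplink cp --help):

uplink cp source sj://bucket/destination -p 2 --parallelism-chunk-size 128MiB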
