Bandwidth limitation

Could you tell me how I can limit the bandwidth when uploading files to Storj using Uplink?

I think it’s not possible with only uplink. You need to use QoS on your router.
uplink uses --threads 1 and --parallelism 1 by default, so it is already fairly slow out of the box.
You may also try to play with these options:

        --maximum-concurrent-pieces int      Maximum concurrent pieces to upload at once per part (default 300)
        --maximum-concurrent-segments int    Maximum concurrent segments to upload at once per part (default 10)
        --long-tail-margin int               How many extra pieces to upload and cancel per segment (default 50)
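For example, something like this should noticeably throttle an upload (the values are only an illustration, not a tested recommendation; tune them for your connection, and the bucket and file names are placeholders):

uplink cp FILE sj://bucket/FILE \
    --maximum-concurrent-pieces 10 \
    --maximum-concurrent-segments 1 \
    --long-tail-margin 5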

If you are using Windows, then you can limit it with Windows itself or with other software. Here is the first Google result:

It needs to be limited at the application level. Unfortunately, I also couldn’t find suitable options in the documentation. I thought maybe it could be configured in config.ini somehow. It’s a pity that it’s not possible, but thank you for your response.

No, there is nothing like a direct limit, but you may play with the options mentioned above.

Please note, you may convert any command-line option to a config option by removing the -- prefix and replacing = with :, e.g.

maximum-concurrent-pieces: 10

You may also convert them to environment variables: add a STORJ_ prefix, change the case to upper case, and replace all dots and dashes with underscores, e.g.

STORJ_MAXIMUM_CONCURRENT_PIECES=10
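On Linux/macOS shells you can also set the variable just for a single run, e.g. (bucket and file names are placeholders):

STORJ_MAXIMUM_CONCURRENT_PIECES=10 uplink cp FILE sj://bucket/FILE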

Would something like this work for you?
pv --rate-limit 100K FILE | uplink cp - sj://bucket/FILE


Rate-limiting the file going into uplink… is a clever way of thinking about it! But does it only work on complete files (if it needs to erasure-encode them)?

Erasure encoding happens on small portions of a file at a time instead of attempting to encode the whole file at once. A file is divided into segments, which are then further subdivided into stripes, which are erasure-encoded and uploaded individually.
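So the pv pipe should not need the complete file up front; uplink appears to consume its stdin as a stream. You could even throttle data that is produced on the fly, e.g. (an untested sketch; the directory, bucket, and object names are placeholders, and it assumes tar and pv are installed):

tar cf - ./mydir | pv --rate-limit 100K | uplink cp - sj://bucket/mydir.tar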
