I have been experimenting with uploads using Uplink and the S3 Gateway. I noticed that uploads are divided into 64MB segments; the upload speed is great when a segment starts, but towards the end it drops, and uploads to some nodes get canceled once enough pieces have been uploaded to the network.
I was wondering why the uplink can't start the next segment while the first one is finishing. Also, why upload to more nodes than needed? If we need 80 nodes per segment, why can't the uplink start uploading to those 80 nodes and, once some of them finish, continue with the next segment, keeping 80 uploads running in parallel? That way there is no wasted upload (none of the extra canceled uploads that happen now). The uplink could still cut off slow nodes (enforce a minimum acceptable speed per node). My bandwidth would not be wasted, and a consistent upload speed would be maintained.
I suggest reading some of the Storj documents, or using the search and reading through other threads. All of your questions have been answered already.
But maybe someone is kind enough to provide you with links or answers.
I see there has been extensive testing highlighting the same issues I posted about. What about the second part of my post? Is there a technical limitation that prevents starting a new segment before the previous one finishes?
None that I know of. It probably just makes the library a lot more complex, so it is an optional optimization once everything else works reliably.