Problems uploading files with libuplink

Hi,

I’m building a .NET wrapper for libuplink and it basically works, but I have huge problems while uploading. If my file exceeds a certain size, the upload always fails with an error like this:

segment error: ecclient error: successful puts (0) less than or equal to repair threshold (35)
storj.io/storj/uplink/ecclient.(*ecClient).Put:171
storj.io/storj/uplink/storage/segments.(*segmentStore).Put:148
storj.io/storj/uplink/storage/streams.(*streamStore).upload:205
storj.io/storj/uplink/storage/streams.(*streamStore).Put:113
storj.io/storj/uplink/storage/streams.(*shimStore).Put:57
storj.io/storj/uplink/stream.NewUpload.func1:52
golang.org/x/sync/errgroup.(*Group).Go.func1:57

I’m not sure what is happening or how to debug this. It might be a problem with my proxy, with the call to the underlying CGO libuplink, something specific to .NET, or even an issue on the satellite. I’m uploading to “europe-west-1.tardigrade.io” and I’m using the latest GitHub commit (d3ef574) of libuplink to build my wrapper.

What I find strange: if I watch the network utilisation, it gets slower and slower the more data I “upload_write” to the uploader. Basically I do the following:
upload
loop while there are bytes left
    upload_write
endloop
upload_commit

I’m sending bytes in batches of 1024 bytes, so every “upload_write” call gets the next 1024 bytes. The strange thing is: the function returns immediately, even if I push around 100 MB through it. Shouldn’t it first send what I’ve handed it and only then come back to me? If I have a 100 MB file and cut it into 1024-byte pieces, might the uploader struggle with all those little pieces?
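Roughly, in C terms, the loop I wrap looks like the sketch below. This is illustrative only: the header name, the UploaderRef type, the char** error out-parameter and the exact upload_write/upload_commit signatures are my assumptions about the generated C API, not copied from it.

/* Illustrative sketch of my write loop, not my actual wrapper code.
 * Assumptions: the CGO-generated header is called uplink.h, and
 * upload_write() takes a buffer plus length and reports failures through
 * a char** error out-parameter. Check the real header for the signatures. */
#include <stdint.h>
#include <stdio.h>
#include "uplink.h"          /* assumed name of the generated header */

#define CHUNK_SIZE 1024      /* currently 1024 bytes per upload_write call */

int write_file_to_uploader(UploaderRef uploader, const char *path) {
    FILE *f = fopen(path, "rb");
    if (f == NULL) {
        return -1;
    }

    uint8_t buf[CHUNK_SIZE];
    char *err = NULL;
    size_t n;

    /* read the file chunk by chunk and hand each chunk to upload_write */
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0) {
        upload_write(uploader, buf, n, &err);   /* assumed signature */
        if (err != NULL) {
            fprintf(stderr, "upload_write failed: %s\n", err);
            fclose(f);
            return -1;
        }
    }
    fclose(f);

    /* finalize the object; this is where the remote errors seem to surface */
    upload_commit(uploader, &err);              /* assumed signature */
    if (err != NULL) {
        fprintf(stderr, "upload_commit failed: %s\n", err);
        return -1;
    }
    return 0;
}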

I really need help on this one, as it makes my wrapper unreliable, and I need to know whether the problem is on my side and where. I would love to get feedback on:

  1. Am I doing the upload right?
  2. What is the best amount of data to push per call to upload_write?
  3. Is there any way to debug what is really being sent?
  4. Shouldn’t the call to upload_write block until the data is really sent?

Thank you!

Kind regards,

TopperDEL

It seems to work if I send batches of 1/10th of the whole file size. I need to run more tests.

But sending 1024-byte batches leads to problems with files above around 3 MB.

On Android I always get a message like this after 7408 bytes sent:

segment error: open /data/local/tmp/tee031699413: permission denied
storj.io/storj/uplink/storage/segments.(*segmentStore).Put:150
storj.io/storj/uplink/storage/streams.(*streamStore).upload:205
storj.io/storj/uplink/storage/streams.(*streamStore).Put:113
storj.io/storj/uplink/storage/streams.(*shimStore).Put:57
storj.io/storj/uplink/stream.NewUpload.func1:52
golang.org/x/sync/errgroup.(*Group).Go.func1:57

This might be the threshold above which my bytes are no longer stored inline on the satellite. With the same code I can upload data from Windows.

Can you make sure you can replicate this with the latest libuplink, with a freshly built .so/.dll? The first symptom you describe is an error that has unfortunately been fairly generic in the past, but failing uploads from libuplink around that size were an issue at some point and have since been fixed. This may be a different issue, but I would like to rule that out.

@TopperDEL You have to change the location of the temp dir on Android to something like context.getCacheDir().getPath().

You can look at how we do it for the Android bindings here: https://github.com/storj/storj/blob/d65386f69efd2e39fbbed5c00004d2bcd9f71eab/mobile/uplink.go#L45
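On the C API side this amounts to something like the sketch below. It is only a sketch: I am assuming the temp-dir path is the parameter that new_uplink takes, and the uplink.h header name, the UplinkRef/UplinkConfig types and the exact signature may differ from the header you generate.

/* Sketch only: assuming new_uplink() takes the temp-dir path as a parameter
 * and that the generated header is called uplink.h; check the real header
 * for the exact signature. The point: on Android, pass an app-writable
 * directory (e.g. the value of context.getCacheDir().getPath() obtained on
 * the Java side) instead of letting it fall back to /data/local/tmp. */
#include <stdio.h>
#include "uplink.h"

UplinkRef open_uplink_with_cache_dir(const char *cache_dir) {
    char *err = NULL;
    UplinkConfig cfg = {0};

    /* if this directory is not writable, uploads later fail with
       "open /data/local/tmp/tee...: permission denied" */
    UplinkRef uplink = new_uplink(cfg, cache_dir, &err);   /* assumed signature */
    if (err != NULL) {
        fprintf(stderr, "new_uplink failed: %s\n", err);
    }
    return uplink;
}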

@kaloyan That is a good hint! My wrapper had turned the incoming temp-dir parameter into an out-parameter, because I expected an error message there. But I actually have to provide a temp dir through it!

I will check this later, but that might be the reason for the problems on Android. I guess it still does not solve the upload error, though it might be related. I will check and come back here.

@bryanchriswhite I was using the latest commit and built new .so files, but I will double-check this, too. Thanks!

OK, one problem solved. Android now shows the same behaviour as Windows.

@kaloyan: your hint helped a lot. I will do a PR for this, as setting the temp dir is absolutely necessary.

Nevertheless, I still have the problem that the bigger my upload is, the more likely it is to fail. So @bryanchriswhite: if you have another idea, let me know. I just cloned the latest storj commit and definitely built new .so files.

Might my RedundancyScheme settings be the problem?

ShareSize = 256;
RequiredShares = 29;
RepairShares = 35;
OptimalShares = 80;
TotalShares = 130;
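Expressed against the C API, that would be roughly the struct below. This is only a sketch with the same values; the RedundancyScheme struct name, its snake_case field names and the Reed-Solomon constant are assumptions about the generated header, not copied from it.

/* Sketch of my redundancy settings as I would expect them in the C API;
 * struct name, field names and the algorithm constant are assumptions. */
RedundancyScheme scheme = {
    .algorithm       = 1,    /* assumed Reed-Solomon enum value */
    .share_size      = 256,
    .required_shares = 29,
    .repair_shares   = 35,
    .optimal_shares  = 80,
    .total_shares    = 130,
};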

Does a share size of 256 mean that I should provide 256 bytes per call to upload_write?

^^ @bryanchriswhite Can you have a look again? Thank you!

It is working now; I don’t know what changed. I’ll close this issue and see how it goes.

Thanks for your help here!