Uploading files in a loop fails after several files have been uploaded

Hi friends,

I am back with some new questions. You previously helped me brilliantly with my implementation problems, so hopefully we’ll find a solution again :slight_smile:

I created a small Go-based service that cyclically collects data from exchanges and puts it into the STORJ network. The individual operations work flawlessly, but I have run into a new issue when uploading multiple locally stored files in a loop, even though this previously worked like a charm.

The upload and download functionality is implemented the same way as in the GitHub repo example: URL. However, on every API call I fetch all the buckets and the objects assigned to them (a possible source of the problem?) and only then perform the up-/download, depending on the number of objects stored in the buckets.

This is the error message I get from uplink: could not commit uploaded object: uplink: stream: ecclient: successful puts (77) less than success threshold (80). It occurs (with varying numbers) after only 3 or 4 files have been uploaded.

Initially I thought that the upload process wasn’t being closed, but that is handled directly in the UPLINK library (for downloads it is implemented in a different spot). Now I suspect some timing issue, or maybe a problem with updating the client too frequently. But since I was initially able to upload large amounts of files without any problems, the cause seems to lie elsewhere. Please halp :slight_smile:

Hi!

A few things from me worth checking:

  1. Make sure you reuse the opened project in the download loop instead of opening it on every iteration;
  2. What OS are you running your program on?

Some operating systems have tight limits on open file descriptors. It might be worth checking whether this limit needs to be increased.
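For the second point, the file descriptor limit can be inspected directly from Go on Unix-like systems (Linux, macOS) using the standard syscall package. A minimal sketch — the helper name `fdLimit` is mine, not part of any library:

```go
package main

import (
	"fmt"
	"syscall"
)

// fdLimit returns the current soft and hard limits on open file
// descriptors for this process (RLIMIT_NOFILE). Unix-only.
func fdLimit() (soft, hard uint64, err error) {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		return 0, 0, err
	}
	return uint64(rl.Cur), uint64(rl.Max), nil
}

func main() {
	soft, hard, err := fdLimit()
	if err != nil {
		panic(err)
	}
	fmt.Printf("open file descriptors: soft=%d hard=%d\n", soft, hard)
}
```

On macOS the default soft limit is often as low as 256, so a program that opens a fresh project (and thus fresh connections) per file can hit it quickly. The soft limit can be raised up to the hard limit with `ulimit -n` in the shell or `syscall.Setrlimit` from Go.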

  1. Oh, interesting, because that’s exactly what I do :smiley: → on every call to the API I update the project. I’ll try it now without that, but… in that case the question is: how long does the client session last after I have acquired the access rights? It’s surely limited to some hours, if not minutes :slightly_smiling_face:
  2. OSX, Big Sur 11.4

TY for the support!


Woop-Woop, the frequent client updating was exactly the problem.

After removing that line of code everything returned to normal :slight_smile: I marked your idea as the solution.

Does the same logic also apply to object downloads? There I also reopen my project on every call to retrieve the latest data from the buckets.


Access grants are stateless, and encryption keys never leave your machine. They are meant to be long-lived. An open project inherits this property and also has some underlying features like connection pooling (I think this is the primary reason why opening a project per upload/download is a bad idea).

I’m glad everything works for you! You should apply the same logic to downloads. It’s best to have one [0] opened project throughout the whole program for all interactions with the network, even if your application runs for days or years.

[0] Of course, if you must handle multiple access grants concurrently, you will need to open multiple projects. In that case, make sure to configure a shared connection pool via uplink.Config.
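The one-project-for-everything pattern can be sketched as follows. The `Project` type and helper names here are hypothetical stand-ins, not the real storj.io/uplink API (the real `uplink.OpenProject` needs a satellite connection and an access grant), but the structure is the same: dial once, then pass the project into every upload and download loop:

```go
package main

import "fmt"

// Project is a stand-in for uplink.Project; the real one dials the
// satellite and maintains a connection pool, so opening it is expensive.
// (Hypothetical type for illustration, not the real storj.io/uplink API.)
type Project struct{}

var opens int // counts how many times we "dialed", to make the point visible

// openProject stands in for uplink.OpenProject(ctx, access).
func openProject() *Project {
	opens++
	return &Project{}
}

func (p *Project) Upload(key string)   { fmt.Println("uploaded", key) }
func (p *Project) Download(key string) { fmt.Println("downloaded", key) }

func main() {
	// Open the project ONCE, up front, and reuse it everywhere.
	project := openProject()

	files := []string{"a.csv", "b.csv", "c.csv"}
	for _, f := range files {
		project.Upload(f) // no openProject() inside the loop
	}
	for _, f := range files {
		project.Download(f) // same project for downloads too
	}
	fmt.Println("projects opened:", opens) // stays 1 no matter how many files
}
```

Opening the project inside the loop, by contrast, tears down and re-establishes the pooled connections on every iteration, which is what exhausts descriptors or node connections after a few files.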
