Node.js binding not working

Hi,
I’m currently playing around with the Node.js binding for Tardigrade. Whenever I use it to upload and download objects in my buckets, it reports that everything succeeded, but when I download the file from the Storj network and try to open it on my desktop, I get a message saying the file is not supported or damaged. I’m using the HelloStorj.js file as a guide, but it doesn’t seem to work for object upload or download. Does this file no longer work?

Can anyone help me on this issue?

HelloStorj.js file

Thanks


Hi @beez -

I just uploaded a file with the npm published version of the binding and downloaded with filezilla without issue. Can you provide more info or your HelloStorj file?

thanks!

-K


It’s the Node.js implementation that isn’t working.

Whenever I try to upload a file I get { bytes_written: 98734 } back from write, but no data is actually uploaded. The file is created, but it’s empty (0 kB).

The docs also state that I should get an error back from commit, but that isn’t true; I’ve looked at the code and it doesn’t return anything.

No error is thrown.

Ok, I’ve spent a little bit of time looking at this, and while I don’t have a satisfactory solution at the moment, I have some observations to share that may help us in debugging the issue going forward.

Firstly - and I am going to assume that this is not the problem you are dealing with - I needed to make sure to remove the lines in HelloStorj.js related to deleting the bucket and file at the end, so that I can debug more easily.

Secondly, I made sure the encryption passphrase, API key, and satellite URL are exactly the same as the ones I used in my access for the uplink CLI tool (there are recent binary downloads and some documentation for the CLI tool). That way, when I upload a file with HelloStorj.js, I can download it with uplink to determine whether the upload succeeded. I am happy to provide more information about using the uplink CLI if needed.

Anyway, once I did the steps above, I ran the script with a variety of files (against us1.storj.io, if that matters). First, I tried a 158 MB mp4. The JS script created the bucket and uploaded the file successfully, and “successfully” downloaded the file. The downloaded file was the correct size, but it did not match the file I uploaded (and could not be opened with a video player).

Interestingly, when I downloaded the file with the uplink cli, it matched the original and played correctly. So for me, the upload phase of HelloStorj.js worked perfectly - it was the download that messed up.

I experimented with some more files of different sizes. Most relevant were:

  • a 3 kB text file - HelloStorj.js managed to successfully upload and download this file, and the original matched the downloaded file. Success! But this is a tiny file, and is in fact stored inline rather than split up and stored across storage nodes. So not very impressive.
  • a 28 MB text file - this had the same issue as the mp4 I tried uploading earlier (download didn’t match original). The difference is that because it was a text file, I had an easier time determining the issue.
    Check this out:
cmp downloaded.txt ~/original.txt
downloaded.txt /home/moby/original.txt differ: byte 7409, line 110

The file actually matches all the way up to the 110th line. In fact, the vast majority of the first 30k lines are identical between the two files (the original file is more like 300k lines). Anywhosies, this means the issue isn’t decryption. When I opened both files in a text editor, I noticed a lot of ^@ characters (null characters) in the bad downloaded file. In fact, when I did a find/replace to remove the null characters, the file went from 28 MB to 2.7 MB - meaning 90% of the downloaded file was null characters!
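The null-character check above is easy to reproduce with standard tools. Here is an illustrative version - the filenames and the appended padding are made up for the demo; the real files came from the broken download:

```shell
# Fabricate a small "original" and a null-padded "downloaded" copy,
# mimicking the broken download (filenames are illustrative).
printf 'line one\nline two\n' > original.txt
{ cat original.txt; head -c 100 /dev/zero; } > downloaded.txt

# cmp reports where the two files first diverge.
cmp downloaded.txt original.txt || true

# Delete the NUL bytes, as the find/replace in the editor did,
# and compare sizes before and after.
tr -d '\000' < downloaded.txt > stripped.txt
wc -c original.txt downloaded.txt stripped.txt
```

After stripping, the file shrinks back to the original size, which is the same shrinkage (28 MB to 2.7 MB) observed above.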

So tl;dr, it looks like (for me) the JS implementation works reliably for uploads of a variety of sizes (I haven’t tested anything super huge, but at least up to around 150 MB). But for downloads of files larger than a few kB, it does not succeed reliably. I am not sure of the exact reason, but it appears that we are writing lots of null bytes to the file during the download. Since I can download those same files just fine with the uplink CLI, it is definitely an issue specific to the Node.js implementation of download.

I am sorry I wasn’t able to help more, and I’m not even sure if the issue I am experiencing is the same as yours. Hopefully someone else can follow similar steps as me and see if they observe the same problem - then perhaps we will be a little closer to finding the root of this bug.


The default/max length of the data returned by uplink-c’s uplink_download_read() is, as far as I have observed, 7408 bytes. So byte 7409 is the first byte of the second call.


Hmm - perhaps this issue can be replicated with uplink-c. Then we will know if the issue is with NodeJS or the C bindings.

Edit - it doesn’t sound like it is likely that this is caused by the C bindings, according to @Erikvv

@ztamizzen @moby - try setting

var BUFFER_SIZE = 7408;

Does the download behavior change?

I figured it out. Or rather, I found the example in the repository showing how to actually do an upload. It’s an issue with the documentation more than the code: it’s not really obvious that write (and read, for download) needs to be called in a loop to process the entire file.

commit still doesn’t return an error though.

Thanks for the help :slight_smile:


@ztamizzen - we’ll look at modifying the documentation. Are you referring to the docs at NodeJS Bindings Documentation?

The giveaway is that if you check the size of the first bytesRead, you’ll likely get 7408 back. So what was happening for other users in this thread is that the first chunk was being read and then, in that same iteration, padded up to the buffer length.

Glad it’s working for you. Let us know if we can help further!