Unable to upload big files to Storj

I’m using the gateway and the aws command to upload data to my Storj buckets.
Small files upload just fine, but big ones always fail after uploading more than about 1 GB of data.
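
For reference, the upload is essentially a plain aws s3 cp pointed at the local gateway; roughly like this (the endpoint override can also live in the CLI config instead of being passed on the command line):

$ # copy a large file through the self-hosted gateway on its default port
$ aws s3 cp /mnt/usb1/Software/foo.iso s3://test/foo.iso \
      --endpoint-url http://localhost:7777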

On the aws CLI side I see just:

upload failed: ../../mnt/usb1/Software/foo.iso to s3://test/foo.iso Read timeout on endpoint URL: "http://localhost:7777/test/foo.iso?uploadId=31JgNp4tGarjzyKXqus868XNHXpd2k68aidMwokznvDgfLVn4S4BZ1qwDbkXst2spVAd795MnqKtYWVtMiCJPn5PadzFTzVxRFC7kWMYbkDeyXAGptQDed5ozUbwHefTMEccDBdBDz8LxUMDSEqsbKaFZ6XsAsy7T4XRcWTwZ137r4vZEsjjh9TxwXHw2ZqaUUuGfRyyPzwV5rUM4tSs1gAVCFf1UHb11UJcAiFw5&partNumber=4"
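
(Side note: that read timeout is the aws CLI’s own client-side timeout; if anyone wants to reproduce this with more patience, it can be raised per command, for example:)

$ # raise the CLI's client-side read timeout (seconds; 300 is just an example)
$ aws s3 cp /mnt/usb1/Software/foo.iso s3://test/foo.iso \
      --endpoint-url http://localhost:7777 --cli-read-timeout 300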

And in the gateway log I see:

2021-11-11T13:01:44.570+0100 ERROR error: {"error": "uplink: stream: metaclient: rpc: dial tcp: operation was canceled", "errorVerbose": "uplink: stream: metaclient: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.TCPConnector.DialContextUnencrypted:107\n\tstorj.io/common/rpc.TCPConnector.DialContext:71\n\tstorj.io/common/rpc.Dialer.dialEncryptedConn:220\n\tstorj.io/common/rpc.Dialer.DialNodeURL.func1:110\n\tstorj.io/common/rpc/rpcpool.(*Pool).get:105\n\tstorj.io/common/rpc/rpcpool.(*poolConn).Invoke:48\n\tstorj.io/common/rpc/rpctracing.(*TracingWrapper).Invoke:31\n\tstorj.io/common/pb.(*drpcMetainfoClient).Batch:294\n\tstorj.io/uplink/private/metaclient.(*Client).Batch:1560\n\tstorj.io/uplink/private/storage/streams.(*Store).PutPart:558\n\tstorj.io/uplink/private/stream.NewUploadPart.func1:45\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-11-11T13:02:48.791+0100 ERROR error: {"error": "read tcp 127.0.0.1:7777->127.0.0.1:52822: read: connection reset by peer"}

The same problem occurs when I try to upload data with FileZilla, only with different errors. Small files can be uploaded, but big ones cannot. I have a 50/30 Mbit fiber connection with this ping:

$ ping -c 5 eu1.storj.io
PING eu1.storj.io (34.141.33.112) 56(84) bytes of data.
64 bytes from 112.33.141.34.bc.googleusercontent.com (34.141.33.112): icmp_seq=1 ttl=113 time=21.4 ms
64 bytes from 112.33.141.34.bc.googleusercontent.com (34.141.33.112): icmp_seq=2 ttl=113 time=21.6 ms
64 bytes from 112.33.141.34.bc.googleusercontent.com (34.141.33.112): icmp_seq=3 ttl=113 time=23.8 ms
64 bytes from 112.33.141.34.bc.googleusercontent.com (34.141.33.112): icmp_seq=4 ttl=113 time=32.9 ms
64 bytes from 112.33.141.34.bc.googleusercontent.com (34.141.33.112): icmp_seq=5 ttl=113 time=21.2 ms

--- eu1.storj.io ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 21.217/24.174/32.910/4.468 ms

I have already tried uploading from two different computers, one running Windows and one Debian Linux. Still the same problem. The internet connection is stable.

And I have already linked my card to Storj, so I shouldn’t be limited by the trial/free tier.

How big is the file you are trying to upload with FileZilla?

~15 GB. The fact is, there is no fixed size at which the upload stops. It looks to me like Storj has problems with big files. For example, when I tried uploading just now it crashed after the first 2 GB; sometimes I’m able to upload 3 GB, but sometimes less.

Maybe it is related to my ISP, who knows. I will try uploading over a totally different uplink and post my findings.

@kubotor it seems you are using the self-hosted Gateway-ST. Have you tried uploading through the Gateway-MT hosted by us (Storj)? See Quickstart - AWS CLI and Hosted Gateway MT - Storj DCS.
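
If you want to try it, pointing the aws CLI at the hosted gateway is mostly a matter of different credentials and a different endpoint; a rough sketch, assuming a profile named storj-mt and the S3 credentials described in the quickstart linked above:

$ # store the S3 credentials generated for Gateway-MT in a separate profile
$ aws configure --profile storj-mt
$ # upload via the hosted endpoint instead of the local gateway
$ aws s3 cp /mnt/usb1/Software/foo.iso s3://test/foo.iso \
      --profile storj-mt --endpoint-url https://gateway.storjshare.io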

Do you have a hard requirement to use the self-hosted gateway?


I successfully uploaded a 14 GB file to Storj DCS yesterday with FileZilla, so there does not seem to be a general issue with large files.

I suspect that the 30 Mbit upstream could be a reason. With the self-hosted Gateway-ST the erasure-coded pieces (roughly 2.7 times the file size) have to go out over your own uplink, while Gateway-MT does that expansion server-side, so it would work better with low upstream bandwidth.
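
If you want to keep testing through the self-hosted gateway, it may also help to let the aws CLI upload fewer multipart parts in parallel, so each part gets more of your upstream; a rough sketch (2 is just an example value, the default is 10):

$ # upload fewer multipart parts in parallel over the slow upstream
$ aws configure set default.s3.max_concurrent_requests 2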

Yes, maybe. There is a setting in FileZilla to limit the number of concurrent transfers. That could help in this case too.

Yep, the hosted gateway works. So it looks like the problem is the self-hosted gateway. I’ve tried both Linux and Windows. But the question is why FileZilla won’t work.

Try setting it to only 1 upload transfer at a time.
