US2 beta with FileZilla

Yes, that sounds reasonable.

This is where the problem is.

  1. Somebody using FileZilla is not necessarily aware of ‘Uplink’, so they have no idea how to remove such a bucket.
  2. Finishing the upload is not really an option. Users interrupting uploads (certainly more likely with larger ones) are common. Just think of a user who realizes they are uploading into the wrong bucket, that the bucket name is misspelled, or that they are uploading the wrong file.

Even at a later stage the issue persists. I now have 3 buckets that I cannot delete, and no idea which files I had tried to upload into them. So even if I agreed to run the upload again in order to delete a bucket, I wouldn’t know which file to upload.


Thanks to @jtolio for helping with some historical info around “Zombie Segments”. To date, the cleanup process that resolves issues with these segments works with our native integrations but not with Gateway MT and multipart uploads. This will be improved in the future.

Currently the best method for cleanup is to use the uplink CLI.

List pending:
uplink ls --pending

Remove pending:
uplink rm --pending
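Once an uplink release actually supports the `--pending` flag, the per-bucket cleanup could be scripted roughly as below. This is a sketch, not documented behavior: the output parsing (last whitespace-separated field as the object key) and the `sj://BUCKET/KEY` path shape passed to `rm` are assumptions you should verify against your uplink version.

```python
import subprocess

def cleanup_pending(bucket, run=subprocess.run):
    """List pending (uncommitted) uploads in `bucket` and remove each one.

    ASSUMPTION: the last whitespace-separated field of every
    `uplink ls --pending` output line is the object key.
    """
    result = run(["uplink", "ls", "--pending", f"sj://{bucket}"],
                 capture_output=True, text=True, check=True)
    removed = []
    for line in result.stdout.splitlines():
        fields = line.split()
        if not fields:
            continue
        key = fields[-1]
        # Remove the pending upload for this key.
        run(["uplink", "rm", "--pending", f"sj://{bucket}/{key}"], check=True)
        removed.append(key)
    return removed
```

Passing a custom `run` callable makes the script easy to dry-run or test without touching the network.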

Seems this version is not released yet.

./uplink version
Release build
Version: v1.25.2
Build timestamp: 11 Mar 21 23:21 +07
Git commit: 5d4c8ab9ec2b96010d4560466c2cd9087a871a35

./uplink ls --pending
Error: unknown flag: --pending
Usage:
  uplink ls [sj://BUCKET[/PREFIX]] [flags]

Flags:
      --access string                      the serialized access, or name of the access to use
      --encrypted                          if true, show paths as base64-encoded encrypted paths
  -h, --help                               help for ls
      --recursive                          if true, list recursively

Global Flags:
      --advanced                         if used in with -h, print advanced flags help
      --config-dir string                main directory for uplink configuration (default "C:\\Users\\USER\\AppData\\Roaming\\Storj\\Uplink")

Then there is still the big problem of uploads restarting from scratch if something fails. So free users will never see a solution for that? That will make uploading a pain.

I am sitting on an 80 GB file that I would like to upload as a test, but as a free user I have neither the advantages of the hosted gateway nor of multipart uploads. This means I have to reckon with almost 240 GB of uploading, with the threat of it restarting from zero at any time if something happens.
That is not the best set of features, even for a free user.
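For context, the roughly 3x figure comes from Reed-Solomon expansion on native uploads: the client sends more erasure-coded pieces than are strictly needed to reconstruct the file. The `k = 29` / `success = 80` values below are the commonly cited network defaults, not numbers stated in this thread, so treat the result as a ballpark:

```python
def expanded_upload_gb(file_gb, k=29, success=80):
    """Rough bytes-on-the-wire estimate for a native (client-side
    erasure-coded) upload: the client uploads `success` pieces, each
    1/k of the file size. ASSUMPTION: k=29 needed, 80 uploaded."""
    return file_gb * success / k

print(round(expanded_upload_gb(80), 1))  # roughly 220 GB sent for an 80 GB file
```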

It seems multipart upload is working in the free FileZilla version with the native connector, because if you interrupt the upload, you can see multipart uploads with the help of the aws CLI.
But I need to confirm that. Could you please try to interrupt upload and then continue?
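The reason multipart matters for interrupted transfers: each part is committed independently, so after a disconnect only the parts that never landed need to be re-sent. A toy sketch of that idea (not Storj's or FileZilla's actual implementation; the tiny part size and the dict standing in for the server are illustration only):

```python
PART_SIZE = 4  # bytes, tiny for the demo; real multipart parts are megabytes

def split_parts(data, part_size=PART_SIZE):
    """Split a byte string into fixed-size parts."""
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]

def upload(parts, remote, fail_after=None):
    """Upload parts into `remote` (a dict standing in for the server's
    part store), optionally 'dropping the connection' after N sends.
    Returns the number of parts actually sent this attempt."""
    sent = 0
    for idx, part in enumerate(parts):
        if idx in remote:
            continue  # already committed server-side: the point of resume
        if fail_after is not None and sent >= fail_after:
            raise ConnectionError("simulated disconnect")
        remote[idx] = part
        sent += 1
    return sent
```

With a scheme like this, a second `upload()` call after a disconnect skips the committed parts instead of restarting from zero, which is exactly what the non-multipart path in this thread fails to do.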

You mean interrupt it by pressing the triangle button and try to continue by pressing it again?

If I do that the transfer starts from zero.

I mean stop/pause the job and then restart/unpause it. I’m just not near my PC to test it.
I was thinking it would work…
Thank you!

Let me check what I can find.

Ok, looks good. You have to use the disconnect button, not the triangle switch; then it is possible to continue the interrupted upload.
However, reconnecting to the server sometimes takes a while.

I’ll try to check later whether this still works when there is a hard disconnect of the internet connection.


I was able to check it now and sadly it does not work as expected. I started uploading a big file and disconnected at the router to simulate an ISP disconnect, and the upload restarted from zero with this error:

06:20:33 Error: upload failed: stream error: ecclient error: successful puts (14) less than or equal to repair threshold (35)
06:20:33 Error: File transfer failed after transferring 469.762.048 bytes in 611 seconds

This is seriously bad, as it means uploading large files is impossible: my ISP auto-disconnects after 24 hours, and in my case the 84 GB file would exceed a 24-hour upload duration, so it would never ever finish. :face_with_raised_eyebrow:
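The first error line above can be read as a client-side success check: the upload is declared failed because too few pieces landed before the connection dropped. A minimal sketch of that check, with the names taken from the log message (the exact uplink logic is an assumption):

```python
class UploadFailed(Exception):
    pass

def check_upload(successful_puts, repair_threshold=35):
    """Mirror the log message: an upload only counts as successful if
    strictly more pieces landed than the repair threshold."""
    if successful_puts <= repair_threshold:
        raise UploadFailed(
            f"successful puts ({successful_puts}) less than or equal "
            f"to repair threshold ({repair_threshold})")
    return True
```

In the log above only 14 pieces made it before the disconnect, well under the threshold of 35, so the whole transfer was abandoned rather than resumed.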

While it seems that the basic functionality for resuming interrupted uploads/downloads is there, the current implementation does not really solve the problems, at least for free FileZilla users:

  1. Resuming requires disconnecting from the server, but apparently you cannot upload/download anything else in the meantime, so it is not really pausing a file transfer. I don’t even know whether it would be technically possible, but pausing an upload while still being able to upload or download something else, and then resuming the paused upload, would be the much better feature.
  2. Far more serious is the failure to handle a disconnect, as it prevents the upload of large files.

I asked the team, and they reported that although multipart is in the native uplink today, it is currently inaccessible. Multipart with the native uplink is coming in a future release.


Much needed feature: I think Filezilla integration still needs improvement

As a customer I’d go ballistic if a 1 GB file took 1h34m to upload with several restarts.

Today I uploaded my 84 GB file via the gateway (vortex) and it went through very smoothly. This is the kind of experience I would be looking for as a customer, regardless of which tool I choose for upload.


Could you please check with the latest version of FileZilla?

Now it’s working perfectly fine with the latest version 3.55.0
