[Testers Needed] FileZilla Onboarding Page

Well, I’ve tried everything, but the speed limit just doesn’t work.

It makes no difference whether I set it during the transfer, before it, or after.

What seems weird to me:

  1. No error message when an email is rejected, e.g. when entering an incomplete email address.
  2. The form accepts the same email over and over; maybe that’s on purpose?
  3. The form accepts invalid emails. I tested an address with ‘.###’ as the TLD and it was accepted (see the sketch after this list for what stricter validation might look like).
  4. No need to enter a password to secure account access, which could lead users to ask how the account is protected against unauthorized access.
  5. As the page states that the account is only free for 3 months, users could wonder how to convert the free account into a regular one to keep using it. They could also wonder what happens after 3 months, e.g. whether their data gets deleted automatically.
  6. Another thing I noticed is that sign-up works with the encryption password left empty. So again a user would ask what happens if they leave this empty, as it does not seem to be required.
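
For what it’s worth, even a basic check along these lines would catch both the incomplete address and the ‘.###’ TLD. This is just a hypothetical sketch, not the actual form code:

```python
import re

# Hypothetical sketch of stricter validation; not the actual onboarding code.
# Local part and domain must be non-empty, and the TLD must be letters only,
# so '.###' is rejected.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")

def looks_like_email(address: str) -> bool:
    return EMAIL_RE.match(address) is not None

assert looks_like_email("user@example.com")
assert not looks_like_email("user@example")      # incomplete address
assert not looks_like_email("user@example.###")  # numeric TLD
```
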
4 Likes
  1. Good catch. Will add that.
  2. Nope, we will reject duplicates.
  3. We will look into stricter validation.
  4. In the next version we will ask for passwords. Right now all existing accounts will get an email to create/reset their password.
  5. In the next few weeks everyone will get an email on how to convert to a regular account. In the next version this will happen as soon as they sign up.
  6. You should not leave this empty.

We have a huge update for this onboarding scheduled for next week. Thanks for reporting these issues; we will work on getting them fixed in the next few days.

1 Like

Gave this a test as if it were part of a production workflow at work and observed some things.

First off, one knock and one positive:
No email is sent to the entered address with the satellite and API key. Something needs to be sent for teams where one member starts the process and others pick it up.
Once FileZilla is installed, the connection to Tardigrade is seamless; bravo here.

We chose to test two use cases: a folder of multiple sublevels filled with PDF files of varying sizes, and a single larger 7zip of that same folder. For the many individual PDFs, scaling the number of active transfers all the way up to 10 worked best, as it somewhat eases the fact that each transfer hangs at the end for a bit, which otherwise prolongs a 2-second transfer to a 17-second one. Of the 3.4 GiB transferred, representing about 1,400 PDFs, two hung in the transfer queue at 100%; one eventually cleared, while the other is now up to 2 hrs of transfer time. The logging noted 70 of 80 expected pieces being put: “Error: finalizing upload failed: uplink: ecclient error: successful puts (70) less than success threshold (80)
Error: File transfer failed after transferring 8,365,717 bytes in 2577 seconds”. The file appears to have been restarted, though, as it is not in the failed transfers queue and is present in the bucket, so it was properly reattempted. As for the single large 7zip file, it progressed at an average of ~2 MiB/s, with pauses lasting 5-6 seconds at a time and CPU usage sitting fairly steady at around 20% on an i5-8600. The connection was pegged the whole time, though, with small dips and a following spike at each pause/resume.
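
If it helps other testers make sense of that message: my read (an assumption from the error text, not from the uplink source) is that each segment is erasure-coded into many pieces and the upload only counts once enough of them land on nodes. A toy model of that check, with the piece counts as illustrative assumptions:

```python
import random

# Toy model of the success-threshold check implied by the log above.
# The piece counts are assumptions for illustration, not uplink internals.
ATTEMPTED_PIECES = 110   # pieces the client tries to place on nodes
SUCCESS_THRESHOLD = 80   # minimum placed pieces for the segment to count

def upload_segment(per_piece_success=0.75):
    placed = sum(random.random() < per_piece_success
                 for _ in range(ATTEMPTED_PIECES))
    if placed < SUCCESS_THRESHOLD:
        # Same shape as the reported error:
        raise RuntimeError(f"successful puts ({placed}) less than "
                           f"success threshold ({SUCCESS_THRESHOLD})")
    return placed
```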


Overall, it appears the service could be used for backups, but it would require massively parallelized connections to keep even a small number of files from being held up by what is effectively I/O wait and latency. For context, this was done on asymmetrical fiber (1000/60) with a standing latency to Salt Lake City of about 54 ms.

Tried it out. I’ve never used FileZilla before, so the quick start video was super helpful. Uploaded a few 200 MB videos and it worked like a charm.

Was interested in a comparison:

Set multiple uploads to 1 and uploaded a single 1.4 GB file via FileZilla:
Tardigrade time to complete: 21 minutes
SFTP upload to a server, time to complete: 7 minutes

Update: Out of curiosity, I set multiple uploads to 10 to see if it makes any difference for the pieces uploaded:
Duration with Tardigrade this time: 26 minutes

Multiple uploads make sense only when transferring several files. For a single file the setting has no effect.

With a 1.4 GB file, how much data has to be uploaded to Tardigrade? Is it 3 times the size of the original data?

Yes, roughly 2.7 times the size of the original data.

I see. Then I believe the result is OK: it takes about 3 times as long to upload to Tardigrade as to a single server.
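
For anyone checking the math: with the erasure-coding numbers Storj has published for Tardigrade (any 29 of 80 uploaded pieces reconstruct a segment; treat these as approximate), the expansion factor and the observed slowdown line up:

```python
# Expansion from erasure coding: 80 pieces uploaded per segment, any 29
# of which are enough to reconstruct it (published figures, approximate).
expansion = 80 / 29
print(f"expansion factor: {expansion:.2f}x")   # ~2.76x

# If the wire speed were identical in both tests, the time ratio should
# roughly match the expansion factor:
print(f"observed time ratio: {21 / 7:.1f}x")   # 21 min vs 7 min = 3.0x
```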

Yeah, I forgot about that too. So the effective upload rate of 1.5 MB/s I saw is actually quite good, because it means 4.05 MB/s of upload bandwidth was in use out of my 5 MB/s connection. The remainder is probably due to gaps between pieces.
But it could be confusing to people who are just onboarding: you divide the file size by the upload time (FileZilla shows that after each upload) and see a really low upload rate compared to typical FTP. Not sure you can do anything against that, of course…

Couldn’t the UX of the interface be improved to explain that? Something like “6 Mbps for uploading 300 MB of redundant data (effective speed: 2 Mbps for uploading your 100 MB file)”.
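
Something along these lines, say. This is a hypothetical sketch of the suggested status line; the expansion factor and timings would have to come from the client, which FileZilla does not expose today:

```python
# Hypothetical formatter for the suggested status line. The 2.7x expansion
# factor is taken from the discussion above; adjust as needed.
def format_upload_rate(file_mb: float, seconds: float,
                       expansion: float = 2.7) -> str:
    uploaded_mb = file_mb * expansion
    wire_mbps = uploaded_mb * 8 / seconds
    effective_mbps = file_mb * 8 / seconds
    return (f"{wire_mbps:.0f} Mbps for uploading {uploaded_mb:.0f} MB of "
            f"redundant data (effective speed: {effective_mbps:.0f} Mbps "
            f"for uploading your {file_mb:.0f} MB file)")

print(format_upload_rate(100, 400))  # roughly the example from the post
```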

I am wondering if it will just confuse potential users or outright repel them, as surely they will compare it to FTP and therefore only see that it takes three times as long to upload the same data.
However, the situation could be different for users with huge upload bandwidth. When the FTP server’s connection is the bottleneck, a Tardigrade upload could become equally fast or even faster.
I cannot test this, as my upload is slower than my test FTP server’s connection.

Now let’s test how downloads do…

The interface could account for that, but I don’t think FileZilla has an integration-aware interface; it will probably be difficult to change. It will just always display the file size and the elapsed time. But I don’t know the code, so maybe it would be easy to change.

That’s what I’m afraid of too.

Probably, but then the CPU might become the bottleneck, as encrypting and splitting the files takes a lot of computation.

Downloads worked rather well. They don’t have much overhead.

That’s what I have been seeing. Therefore, for me the best setting is not to have simultaneous transfers set to 10:
I tested with ten files of around 130 MB each.
When set to 10, the FTP server beat Tardigrade: it took 6 minutes to download from Tardigrade and only 4 to download from the server.

However, when set to 1 simultaneous download, it took only around 2.5 minutes to download from Tardigrade but 10 from the server. That is certainly very interesting.

So the result is: uploads are slow due to the requirement to upload 3 times the data; downloads are really fast, though not if too many files are downloaded simultaneously, at least in my case.
Maybe the optimal setting is a maximum of 2 or 3 simultaneous files, but as you said, this probably depends on the hardware. It seems that the download of pieces maxes out the connection pretty well regardless of how many files are actually being downloaded.
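
For reference, the throughput implied by those timings; this is just arithmetic on the numbers above (ten files of ~130 MB, so ~1.3 GB total):

```python
# Throughput implied by the timings above: 10 files x ~130 MB = ~1.3 GB.
total_mb = 10 * 130

for label, minutes in [("Tardigrade, 10 simultaneous", 6),
                       ("FTP server, 10 simultaneous", 4),
                       ("Tardigrade, 1 at a time", 2.5),
                       ("FTP server, 1 at a time", 10)]:
    print(f"{label}: {total_mb / (minutes * 60):.1f} MB/s")
```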

Now I am wondering if Storj has any ideas on how to improve uploads.

1 Like

Yeah, a single upload/download already saturates a typical home connection, except for the short breaks between pieces, so parallel uploads/downloads don’t make much sense.
Furthermore, users could have a crappy router, and we’ve already seen routers go down when people use uplink. That got better with some fixes to uplink, but what happens if those people run 10 parallel up-/downloads? The number of open connections could overwhelm those routers; at least that’s my guess (rough numbers below). I can’t test it, because my router is a good device that doesn’t even care about 3 nodes plus Tardigrade FTP up-/downloads.
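
Rough numbers behind that guess. The per-segment piece count is an assumption based on what has been posted about the erasure coding, not something I’ve verified:

```python
# Rough estimate of concurrent connections a consumer router would have to
# track. ~110 piece transfers per upload segment is an assumption, not a
# verified figure for the current uplink.
pieces_per_segment = 110
parallel_transfers = 10

print(pieces_per_segment * parallel_transfers)  # ~1100 open connections
```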

Wow, that took a long time:

1 file, size 1.3 GB; the upload to Tardigrade finished after 2.5 hrs.

I just want to follow up on some things:

The hung upload is still there… we’re now up to 93 hrs at 100% with 1 second to go. After cancelling it and reattempting myself, it was put without issue.


Downloading: the single-file download went swimmingly at about 16-18 MiB/s (128-146 Mbps), and there was only slight stuttering while the chunks were assembled and the next section requested… pretty nice. Loading the folder for downloading (add to queue) was very rapid, and once I triggered the queue to process at 10 simultaneous transfers, it sat around 200-320 Mbps and became CPU-bound on an i5-8600 with all-core turbo at 4.09 GHz. I did test this on one of our Windows Server 2016 VMs that has 16 cores (E5-2680 v2) assigned to it, and it was also plugging along at 200-400 Mbps at around 6-22% CPU usage.

I did start some simultaneous-station testing with 10 transfers each and was happy to find that 2 and 3 stations were able to pull at the same speeds as a single one.

New test today:

Uploaded 1 file to Tardigrade, size 1.34 GB. The upload took 1 h 29 m, several auto-restarts included.

Could you please connect to the internet directly, without your router, for a while and try again (make sure that the local firewall is enabled!)?