[Testers Needed] Filezilla Onboarding Page

A bit off-topic but…

Do that and all nodes with SMR drives will die (DQ) when activity picks up again, I’m afraid :confused:
And SMR drives are, I believe, going to be the most frequently used among home users, as they’re the cheapest…

Would be better to find a way to handle these types of drives (retry requests? Stack requests? …) than to exclude them, I think.

Yes, but I saw a lot of such errors during upload to the network. I asked the engineers to take a look and to stop offering overloaded nodes to the customer.


Just to complete your comparison chart, I have a cable modem (20Mbps up/400Mbps down) and a Linksys EA7500 router.

I’ll try again to upload the files I attempted to upload the first time and will let you know whether I still experience issues.

Thank you, I updated the chart. I just want to be sure not to draw wrong conclusions.
And it seems the bandwidth limit is not the factor:

However, it is somehow related, because in two other tickets the users who have upload problems identical to yours (“the successful puts are less than the success threshold”) used asymmetric connections too. Then again, if they did not have a problem, they would not have filed a ticket in the first place.

So I tried again to upload 100 files 1GB each, and I am experiencing exactly the same issue.

You can see the Filezilla log here: https://nextcloud.environemenz.com/index.php/s/nk8JFqj2aA34Qe8

After 10+ hours, only 4 files have been successfully uploaded.

Maybe this could be related to the fact that I have set the max concurrent transfers to the max (10)?

As stated before, my outbound ports are open and I don’t think that my router is unable to handle the many outbound requests generated. My computer is not particularly old (i5-7600K, 8GB ram).

Try reducing it to 4 or even 2 transfers. 20 Mbps is only 2.5 MiB/s. For each file FileZilla will open 110 connections; with 10 transfers that’s 1,100, plus some connections for checks.
Since there is an expansion factor of 2.7, you upload 270 GiB, not 100 GiB. With your upstream speed it should take at least 30.72 hours.
With such low bandwidth, increasing the number of parallel transfers reduces the speed of each of the 1,100 connections, and storage nodes will cancel slow uploads.
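The 30.72-hour figure can be checked with a couple of lines, assuming the thread’s numbers (100 GiB payload, 2.7× expansion factor, 20 Mbps ≈ 2.5 MiB/s upstream):

```python
# Rough upload-time estimate from the numbers in this thread.
data_gib = 100          # payload to upload
expansion = 2.7         # erasure-coding expansion factor (as stated above)
upstream_mib_s = 2.5    # 20 Mbps upstream ~= 2.5 MiB/s

uploaded_mib = data_gib * 1024 * expansion   # MiB actually sent on the wire
hours = uploaded_mib / upstream_mib_s / 3600
print(f"{uploaded_mib:.0f} MiB at {upstream_mib_s} MiB/s -> {hours:.2f} hours")
# -> 276480 MiB at 2.5 MiB/s -> 30.72 hours
```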

Even with my 100Mbps I see such a message:

Command:	put "\test-100GiB\file004.bin" "/test/file004.bin"
Error:	upload failed: metainfo error: context canceled
Error:	File transfer failed after transferring 201,326,592 bytes in 567 seconds

Yes, concurrent uploads absolutely kill my Mac Pro.
Not sure if it’s a Mac problem but I gave up on having more than one upload at a time.

Perhaps. I used an HP Pro laptop and transferred via 802.11n; the upload speed was 200 Mbps with 10 simultaneous transfers, but I had one error during the transfer of 100 GiB.
The funny thing is that my upstream bandwidth is 100 Mbps according to my contract with my ISP, but they allowed 200 Mbps :thinking:

I checked again: the router shows 98.5 Mbps of upstream in use, but the laptop shows ~154 Mbps.

I was able to get 10 connections going quite well on 1000/50-60 but that’s with a virtual pfsense instance that is backed by a E5-2680v2 (2 or 4 cores I believe). Bandwidth bottlenecks first before anything else (including router state table or CPU %).

The bottleneck here is the upstream limit:
2.5 MiB/s split across 1,100 connections is 2.5 × 1024 / 1100 ≈ 2.33 KiB/s per connection.
And each connection should transfer 1024 MiB × 2.7 / 80 = 34.56 MiB.
34.56 MiB at ~2.33 KiB/s will take 34.56 × 1024 / 2.327 / 3600 ≈ 4.22 hours.
The storage node will likely drop such a slow connection.
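The same per-connection arithmetic as a quick sketch, assuming the figures above (2.5 MiB/s upstream, 1,100 connections, each piece being 1 GiB × 2.7 / 80):

```python
# Per-connection share of upstream and time to push one piece.
total_mib_s = 2.5       # total upstream, 20 Mbps ~= 2.5 MiB/s
connections = 1100      # 10 transfers x 110 connections each

per_conn_kib_s = total_mib_s * 1024 / connections   # ~2.33 KiB/s per connection
piece_mib = 1024 * 2.7 / 80                         # 34.56 MiB per piece

hours = piece_mib * 1024 / per_conn_kib_s / 3600
print(f"{per_conn_kib_s:.2f} KiB/s per connection, {hours:.2f} h per piece")
# -> 2.33 KiB/s per connection, 4.22 h per piece
```

That per-piece time is far longer than a storage node will wait, which is why the slow uploads get cancelled.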

Positive outcome of limiting the max concurrent transfers: after switching from 10 to 2 max concurrent transfers, I was able to upload 4× more 1 GB files in the same timeframe, and no more errors popped up!

Yes, limiting mine to 2 made it work as well.
Perhaps speak with the FileZilla developers and set a hard limit on Tardigrade connections?
(Or at least add a warning when selecting more than 2 concurrent uploads)?

I spent the weekend up- and downloading like crazy (if you saw a spike in traffic on your node, that was probably me :wink:).
After my announced reboot of the router and computer, I also checked the cables and re-plugged them, and now my issue is gone.
Up- and downloading went as expected, with no more issues with the larger files.

Unfortunately I cannot definitely say what caused the problem. Let’s wait and see if it occurs again.

I’m downloading lots now. 10 simultaneous downloads, saturating my 500Mbit connection. Very high CPU usage but everything is going well…

It’s interesting that it seems to be simultaneous uploads specifically that make my system fall over :confused:

I tried to sign up for the test but the request API seems to be down.

Hello @Plemonade,
Welcome to the forum!

It ran out of accounts. It should be fixed now.

@Alexey Somewhere along the way I lost track of what kind of feedback is expected… Would you please let us know what is left to be tested, if anything? Thanks!

@Andisers I tried to help people solve their problems, where any arose.

@super3 Have you got all needed feedback?

I finally had some time to try this and noticed a typo on the main page:

typo

Also once I had signed up, do I just wait for the email? Does it contain instructions? I ended up reading this thread but wouldn’t have worked out how to add a site in FileZilla without help.

As others have stated, the upload kills my machine. The CPU is maxed on 4 cores / 8 threads. I’m on a 500/500 leased line but my upload speed is bouncing all over the place.

cpu

With any low-powered CPU you will see high CPU usage because of the encryption process that runs before the upload.