We should run a test next year on the maximum sustained download bandwidth Tardigrade can handle.

I could spin up some Azure instances if you want to test some bandwidth!

Phew, I would if I could, but right now I don't even have the right router to connect fiber directly to a server that could handle it. (The router from my provider actually has only one 2.5 Gbps port, which is a bit of a bad joke.) They gave me 10 Gbit/s without the proper hardware ^^

But for a test I would use some AWS instances in parallel to run a speed test, or Azure as @peppoonline mentioned.

I think we just need a tool which orchestrates the test.
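A minimal sketch of what such an orchestration tool could look like, in Python: fan out N parallel upload workers and report the aggregate payload throughput. The upload command and test-object size here are placeholders I made up, not a real client invocation; you would swap in your actual upload command (uplink, rclone, etc.).

```python
import concurrent.futures
import subprocess
import time

# Placeholder upload command: replace with your real client invocation
# (e.g. an uplink or rclone upload of a fixed-size test object).
UPLOAD_CMD = ["true"]              # assumption: uploads TEST_OBJECT_BYTES bytes
TEST_OBJECT_BYTES = 200 * 1000**2  # assumed 200 MB test object per worker

def run_worker(_worker_id: int) -> float:
    """Run one upload and return its elapsed wall time in seconds."""
    start = time.monotonic()
    subprocess.run(UPLOAD_CMD, check=True)
    return time.monotonic() - start

def aggregate_mbit(workers: int) -> float:
    """Fan out parallel uploads; return combined payload throughput in Mbit/s."""
    wall_start = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(run_worker, range(workers)))
    wall = time.monotonic() - wall_start
    return workers * TEST_OBJECT_BYTES * 8 / wall / 1e6

if __name__ == "__main__":
    print(f"aggregate: {aggregate_mbit(4):.0f} Mbit/s")
```

Running the same script on several cloud instances at once would cover the scale-out part.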

And we've hopped to another topic ^^ @Alexey, you might want to move this to the right place?

Just use your server as a router :slight_smile:

That is a great idea, I suppose that would solve the hardware and the internet bandwidth problem in one swoop.

I would, but I have no hardware with fiber lying around. I think it would make more sense to spin up some cloud instances, so you can also easily scale up the bandwidth in case it is required.

If my ISP gave me a 10G (or over 1G) uplink, I would get 10G hardware pretty quickly. Anyway, cloud instances are OK as well.

your router has to go… or you should complain and make them send you one that fits the internet bandwidth… what's the point of 2.5 Gbit? … ofc there are most likely four 2.5 Gbit ports, and then they expect you to do a quad connection because they are too cheap to buy regular 10 Gbit routers…
ofc regular 10 Gbit Ethernet is so expensive because of the noise filtering… much cheaper to go fiber…

Yeah, you are right, no clue why they did this (actually I never asked for 10 Gbps; they just changed their subscription plans (1 Gbps → 10 Gbps) and it seems I got more for the same price). I haven't really cared about it because I never needed it :wink:

But maybe I will do something about it in the future; for now I am happy with how it is.

I did test using mine last week, 10TB link, uploading a single 200GB file… takes ages… 400 mins or so on FileZilla.

Do you mean 10Gb link?

most others seem to report that their internet bandwidth is maxed out…
both for up- and downloads, though for uploads there is a 4x expansion factor on the data… so if you upload at 10 Gbit you will at best get 2.5 Gbit of payload, so roughly 300 MB/s
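The arithmetic behind that, as a quick sketch (the 4x factor is the rule of thumb from this thread, not an exact protocol constant):

```python
# Payload rate you can expect on a link, assuming the rule-of-thumb
# 4x upload expansion factor discussed in this thread.
def payload_rate(link_gbit: float, expansion: float = 4.0) -> tuple[float, float]:
    """Return (payload Gbit/s, payload MB/s) for a given uplink speed."""
    gbit = link_gbit / expansion
    return gbit, gbit * 1000 / 8  # decimal units: 1 Gbit/s = 125 MB/s

gbit, mbs = payload_rate(10)
print(f"{gbit:.2f} Gbit/s payload ≈ {mbs:.1f} MB/s")  # 2.50 Gbit/s payload ≈ 312.5 MB/s
```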

On top of that, even on downloads people are talking about running into CPU limitations, so it might not be as easy as just testing, since we will need to be able to exclude CPU limitations from the testing.

400 minutes seems excessive; that's 24,000 seconds, so you are suggesting a speed of around 10 MB/s,

which would suggest something around 100 Mbit, and then ofc the 4x multiplication puts you at roughly 400 Mbit utilized for uploading 200 GB in 400 minutes.
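Running the same back-of-envelope exactly (assuming decimal GB and the 4x rule of thumb), the figures come out a bit lower than the rounded ones above:

```python
# Implied rates for uploading a given payload in a given time, assuming
# decimal units (1 GB = 10^9 bytes) and a 4x upload expansion factor.
def implied_rates(payload_gb: float, minutes: float,
                  expansion: float = 4.0) -> tuple[float, float]:
    """Return (payload Mbit/s, on-the-wire Mbit/s)."""
    seconds = minutes * 60
    payload_mbit = payload_gb * 8000 / seconds  # GB -> Mbit
    return payload_mbit, payload_mbit * expansion

payload, on_wire = implied_rates(200, 400)
print(f"payload ≈ {payload:.0f} Mbit/s, on the wire ≈ {on_wire:.0f} Mbit/s")
# payload ≈ 67 Mbit/s, on the wire ≈ 267 Mbit/s
```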

That's not terrible as internet speeds go for most people… and you can easily run into some sort of global internet restriction which might affect such an experiment…
Come to think of it, that number actually reminds me of about what I get across the Atlantic… even with 500 Mbit or more I will usually only get a couple hundred.

Your exact Mbit usage would be 277, depending on how accurate the numbers were, ofc…
but still, it's not difficult to imagine running into a bandwidth restriction on a 10 Gbit connection when I clearly see it almost every day on 500 Mbit.

And then there is ofc the question whether your ISP even has the bandwidth for you to get a dedicated 10 Gbit… I think the ratio is usually around 10%: they assume people use about 10% of their connection speed at any one time, or something like that.

Always nice to have more data points though.

Those are valid points but Tardigrade isn’t really focused on end consumers. It’s aimed at enterprise setups where bandwidth limitations of the sort you described are much less common.

Yes, it’s impossible to make any inferences from @azrin’s test without knowing his router, ISP, test computer, etc., but that performance was pretty naff.

How many parallel transfers? What is your actual speed on speedtest for your connection?

The expansion factor is 2.7. You are uploading a little more than that (because you start with 110 connections but finish with 80), but it's not more than 30% extra, because you cancel them a lot earlier than the end of the transfers.
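Where the 2.7 comes from, as a sketch: this assumes the commonly cited Storj Reed-Solomon defaults (29 pieces needed to reconstruct, 80 kept as the success threshold, 110 uploads started), which may differ from the satellite's actual settings.

```python
# Assumed Storj Reed-Solomon parameters (commonly cited defaults):
# 29 pieces suffice to reconstruct, 80 are kept, 110 uploads are started.
K, SUCCESS, STARTED = 29, 80, 110

expansion = SUCCESS / K   # data actually stored vs. payload
worst_case = STARTED / K  # upper bound if no upload were cancelled early
print(f"expansion ≈ {expansion:.2f}, worst case ≈ {worst_case:.2f}")
# expansion ≈ 2.76, worst case ≈ 3.79
```

Since the slowest 30 of the 110 uploads are cancelled well before completion, the real overhead lands between those two numbers, closer to 2.76.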

Yeah, basically what the storjling who came up with the number 4 said… except he included the TCP overhead in his calculations, which puts it at basically 4, which is then, like you say, an overestimate…

But it's nice and easy to remember, and nobody complains about getting better speed than they expect.