We should do a test next year on the max sustained download bandwidth Tardigrade can handle.

That would be really interesting. I don’t know how many unique SNOs there are, but imagine a test where all of them download a specific file from one published link at the exact same time to simulate high demand.


That might be very difficult to measure… but I dunno.
I was thinking of finding one or a couple of data centers with, I dunno… 1 Tbit is the max these days, I think… though I’m sure not many places offer that yet, but at least 400 Gbit.

I dunno what the max download speed from one service is… over just one file… but I think if we could saturate a 400 Gbit connection, that would be pretty impressive… not sure if it would break any records…

I mean, anything is really a record if one is the first distributed cloud…
and of course any user of Tardigrade would have access to these speeds, if their internet could handle it…

And 400 Gbit/s would be 50 GByte/s, so 1 TB in 20 seconds or so…
If it works, it should be applicable over vast geographical regions, very difficult to overload, and accessible to regular users…
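
Just to make the arithmetic explicit, a quick Go sketch (the numbers are the ones from above, nothing official):

```go
package main

import "fmt"

func main() {
	// Hypothetical 400 Gbit/s link, as discussed above.
	const linkGbps = 400.0
	gbytesPerSec := linkGbps / 8.0 // 400 Gbit/s = 50 GByte/s

	const payloadGB = 1000.0 // a 1 TB test file
	seconds := payloadGB / gbytesPerSec

	fmt.Printf("%.0f Gbit/s = %.0f GB/s -> 1 TB in %.0f s\n",
		linkGbps, gbytesPerSec, seconds)
}
```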

That would be insane, but that’s how it is with new technology… just like when SSDs were new tech…
HDDs had no chance to keep up, aside from storage capacity and pricing.

Of course, most people might not understand the data ratios, so it would have to be put in some form the general public would understand… like books, music or films… rather than just writing the capacity.

I know it seems straightforward to us… but most people have no idea what a TB is or how much it holds.

Usually in these kinds of things… people like to advertise their maximum bandwidths and such…

but I think that’s sort of fallen by the wayside, because the engineers understand the number would be ridiculously high compared to the technology in use today…

so it has nearly no practical purpose right now… not much aside from special cases.

And it might be really difficult to measure, because Tardigrade might literally saturate any existing internet connection on the planet, for all we know…

So I think we should measure it… it’s hard to find a storage architecture that doesn’t know its own max bandwidth… and of course it’s always changing… but it’s still a cool test… gives cool numbers, which is basically free PR.

Not so sure I’d be interested in storage alone.
I don’t really want my HDDs filled with data that doesn’t go anywhere. I’d much prefer content distribution so I can get some of that sweet sweet egress pay…

Then again, I’m just an SNO. I know what I signed up for. Up to Storj to make their case, I guess.


My 8 TB drives were 150 € each. So if such a drive gets completely filled with stuff that doesn’t generate egress, that’s e.g. 7 TB × $1.5/TB = $10.5 per month.
Subtract some power costs for the drive itself and nothing else: 8 W × 24 h × 30.5 days × 0.22 €/kWh ≈ 1.3 € per month.
So my earnings would be ~$9 per month, so maybe ~8 €.
That means my HDD would pay for itself in 19 months.
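
To make that arithmetic reproducible, here’s a small Go sketch. Every number in it is an assumption from this post (drive price, payout rate, power draw, electricity price, plus a rough USD→EUR rate); none are official figures:

```go
package main

import "fmt"

func main() {
	// All assumptions from the post above, not official numbers.
	const (
		drivePriceEUR  = 150.0     // 8 TB drive
		storedTB       = 7.0       // capacity actually filled
		payoutUSDPerTB = 1.5       // storage-only payout, $/TB/month
		usdToEUR       = 0.88      // rough conversion rate
		drivePowerW    = 8.0       // average HDD power draw
		hoursPerMonth  = 24 * 30.5 // ~732 h
		eurPerKWh      = 0.22
	)

	incomeEUR := storedTB * payoutUSDPerTB * usdToEUR          // ≈ 9.2 €/month
	powerEUR := drivePowerW * hoursPerMonth / 1000 * eurPerKWh // ≈ 1.3 €/month
	netEUR := incomeEUR - powerEUR                             // ≈ 8 €/month

	fmt.Printf("net %.1f €/month -> payback in %.0f months\n",
		netEUR, drivePriceEUR/netEUR)
}
```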

So honestly, I can live with that. My setup would be running anyway, so it’s fine. With an RPi you would be fine too. Some more powerful setups that were bought for Storj and are only running nodes wouldn’t be fine with that, but that’s their mistake :smiley: Nobody said you should buy expensive hardware or run powerful hardware only for Storj.

Of course this was just a hypothetical calculation of a single HDD full of static content. The network will (hopefully) never come close to having only fully static data.


Anyone have a 400 Gbit internet connection??? :smiley:
I suppose 40 Gbit might do, or even 10 Gbit… but the lower we go, the smaller the chance of actually exceeding the Tardigrade network’s output bandwidth.


I think the limiting factor for connections >1 Gbps might be the CPU.


When I was testing the FileZilla integration/onboarding, I got to about a quarter of that on average (~250 Mbps) with an i5-8600, which was boosting to about 4 GHz. I agree CPU speed for crypto operations will probably be the next bottleneck, at about the same point as on a server CPU (e.g., a Xeon Silver 4114). The more cores and simultaneous connections I threw at it, the more I was able to pull down the pipe.

Kinda makes me wonder if we could push it through a GPU…
that might be much faster at it…

GPUs can access neither disk controllers nor the network.

GPU (CUDA/OpenCL/etc.) for crypto? I mean, it sounds like a match made in heaven, for sure, but I’ll admit that at that point you’d probably want to eye a custom version of libuplink (a non-Golang version) to accomplish it. I’d be more interested in whether they’re using anything to leverage the AVX/AVX2 instruction sets.

I think you’re thinking a bit too literally here. All software runs somewhere. Performing cryptographic hashing on a CPU or a GPU is not an either/or answer. You can write a CUDA or OpenCL program that leverages the stream cores of a GPU to perform cryptographic functions for encrypting or decrypting pieces for Storj. Is it a bit more involved than just opening a Python/Go script and bashing the keyboard to get it working? Yes… but that does not mean you cannot use a GPU to do some of the same things a CPU does.

Nvidia has a few words for you:


Yeah, there was something about AVX issues on AMD CPUs, at least around Ryzen 3.
Most likely already solved, as it seemed to be a software thing… basically Intel kicking AMD in the leg, or just basic ineptitude.

It might also matter whether it’s running 64-bit or 32-bit… at least on Windows a couple of years ago I was still finding stuff that would run 32-bit, where I could go online, find a 64-bit version and upgrade the process. I think it was some Java stuff doing transcoding or streaming… but by default Java was running as 32-bit… and it made a world of difference, like 50% less CPU utilization, just by switching out one exe for another… tsk tsk.

So if the CPU is the limit, there will be a lot of stuff to look into for optimizing.

OK, I don’t know the details of the erasure coding scheme, but I seriously doubt hashing or encryption on the CPU would be anywhere near a bottleneck on any hardware fast enough to host a CUDA-capable GPU. The bottleneck will be at the network level.

My “AMD Athlon II Neo N36L Dual-Core”-based NAS, with a Passmark CPU score of ~450, can encrypt at 500 Mbps, and that’s a 10-year-old low-power embedded CPU that doesn’t even have dedicated encryption instructions.
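
If anyone wants to sanity-check their own CPU, here’s a minimal Go sketch that measures AES-256-GCM throughput. To be clear, this is just a ballpark proxy, not Storj’s actual encryption path (Go’s crypto/aes will use AES-NI where the CPU has it):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"time"
)

func main() {
	key := make([]byte, 32) // AES-256
	nonce := make([]byte, 12)
	rand.Read(key)
	rand.Read(nonce)

	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}

	// 64 MiB buffer with spare capacity for the 16-byte GCM tag,
	// so Seal encrypts in place without reallocating.
	buf := make([]byte, 64<<20, 64<<20+16)
	const passes = 16

	start := time.Now()
	for i := 0; i < passes; i++ {
		aead.Seal(buf[:0], nonce, buf, nil)
	}
	secs := time.Since(start).Seconds()

	gbits := float64(len(buf)) * passes * 8 / 1e9
	fmt.Printf("AES-256-GCM: %.2f Gbit/s\n", gbits/secs)
}
```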

The primary answer is: not a GPU. I was talking about GPUs.

I can say for sure that this CPU is less powerful than my 8th-gen i5 laptop CPU, and when I’m uploading, my laptop maxes out its CPU, so maybe we’re talking about different encryption here. When uploading to Tardigrade, my laptop has a hard time with the encryption and uploading.

Logically that would result in a very low max bandwidth, as all the end nodes would be trying to download from a maximum of just 80 SNOs. All downloading nodes would be gated by their attempt to access the first block of the file. In many ways, the test you describe would produce the worst-case bandwidth value for the overall system.

A sustained download test would have to be based on accessing a large number of files distributed across the different satellite regions, or, for performance within a region, on accessing random blocks from within a large single file.
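
Something like this rough Go sketch could drive that kind of test over published links (the URLs are placeholders; a real test would use many distinct objects spread across regions, and ideally random byte ranges):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	// Placeholder links; substitute real published share links.
	urls := []string{
		"https://example.com/testfile-eu",
		"https://example.com/testfile-us",
		"https://example.com/testfile-ap",
	}
	const streamsPerObject = 8

	var total int64
	var wg sync.WaitGroup
	start := time.Now()

	for _, u := range urls {
		for s := 0; s < streamsPerObject; s++ {
			wg.Add(1)
			go func(u string) {
				defer wg.Done()
				resp, err := http.Get(u)
				if err != nil {
					return
				}
				defer resp.Body.Close()
				// Discard the body: we're measuring the network,
				// not the local disk.
				n, _ := io.Copy(io.Discard, resp.Body)
				atomic.AddInt64(&total, n)
			}(u)
		}
	}
	wg.Wait()

	secs := time.Since(start).Seconds()
	fmt.Printf("pulled %.2f GB in %.1f s = %.2f Gbit/s\n",
		float64(total)/1e9, secs, float64(total)*8/secs/1e9)
}
```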

An upper bound is the number of payments made this month: 5,091 (source: storjnet.info). Some SNOs use multiple wallets for their nodes, so the number is likely smaller than that.

CPUs can easily be bottlenecks…

Encryption and compression schemes can vary a lot in how complex they are, so just because a CPU can handle one scheme doesn’t mean it will do nearly as well on another…
It could also be that there’s simply compression in the pipeline and that becomes the bottleneck.

Anyway, let’s test it…

@deathlessdd what kind of bandwidth could you squeeze out of Tardigrade with the Threadripper?

And has anyone got a comparable Intel CPU… oh wait lol :smiley: there isn’t one before 2023.
Anything with 20+ cores from Intel will do fine… we just need to be sure we aren’t looking at some software limitation AMD hasn’t figured out how to work around or fix yet.

And then we need the worst of the worst.

Who can muster the worst CPU in a working computer… :smiley:
I suppose I could just do a VM with 1 core on my 10-year-old Intel server CPU lol.
That would actually make the whole thing a lot easier… testing on the same server, over the same internet connection, changing only the amount of CPU time available.

And it should be easy to replicate in multiple locations and comparable across tests, for a better idea of whether this is really a thing or not…

Yeah, I think that has to be the best approach: do multiple individual tests from different locations, using virtualization to limit the CPU time, and correlate that against Tardigrade bandwidth.
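
If the test client is Go-based, you could even approximate the core-count limit in-process with GOMAXPROCS instead of a VM. A rough sketch of the idea; hashWork here is just a CPU-burning stand-in for the real transfer routine (hashing/encrypting pieces), not actual uplink code:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"runtime"
	"sync"
	"time"
)

// hashWork stands in for the CPU side of a transfer: it hashes
// `chunks` 1 MiB buffers, spread across all allowed cores.
func hashWork(chunks int) {
	buf := make([]byte, 1<<20)
	jobs := make(chan int)
	var wg sync.WaitGroup
	for w := 0; w < runtime.GOMAXPROCS(0); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs {
				sha256.Sum256(buf)
			}
		}()
	}
	for i := 0; i < chunks; i++ {
		jobs <- i
	}
	close(jobs)
	wg.Wait()
}

func main() {
	const chunks = 4096 // ~4 GiB of hashing per run
	for _, cores := range []int{1, 2, 4, 8} {
		runtime.GOMAXPROCS(cores) // cap usable CPU threads, like a small VM
		start := time.Now()
		hashWork(chunks)
		secs := time.Since(start).Seconds()
		gbits := float64(chunks) * (1 << 20) * 8 / 1e9
		fmt.Printf("cores=%d -> %.1f Gbit/s of hashing\n", cores, gbits/secs)
	}
}
```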

I have a Nettop nT-A3500, which has an AMD E-350: 2 cores at 1.6 GHz, 32-bit.
Maybe not the worst CPU anyone has, but it’s pretty crappy.

When I used uplink I was able to get 100 MB/s pretty easily, but I never had any files large enough to really stretch my bandwidth. I’d also be downloading to NVMe drives, so I’d have no disk bottleneck; a lot of people still download to mechanical drives.


You wouldn’t happen to have a ridiculously powerful server/computer to go with that 10 Gbit fiber that we could recruit for a Tardigrade speed test?
