Performance on Tardigrade

When I realized it was more than a map showing the file splitting, but an actual share link to the file, I removed the link because I don't want you to drain my test account (egress) :slight_smile: It won't stream in the browser but downloads the file instead, because it's an old film encoded with an old codec (Xvid).

In addition to my initial speed test, I did 3 uploads of the same file and got the same 89–90 second upload time each time. I did not expect such consistency.


It's a very good result, no doubt. I had feared it was barely working from what other people have said, but maybe it is a bit heavy on bandwidth usage because of the erasure coding multiplier.

@Alexey
I dunno, I think it needs a Tardigrade account, so I'll leave it for somebody else. I'm not really that interested, just happy that Tardigrade seems to reach what may be near-maximum internet bandwidth speeds. It would be interesting to see how it would perform on a 10 Gbit or 40 Gbit connection.

I should be getting 1 Gbit internet in the near future, so maybe I'll come back and do some testing of my own. Since my bandwidth is currently about equal to buchette's, I wouldn't expect a very different result right now.


Assuming Storj pieces on nodes are ~2 MB, and my 700 MB file was divided into 777 pieces, then if my math is close to correct, I uploaded about 1500 MB for a 700 MB file.
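For reference, a quick sanity check of that arithmetic (just a sketch; the 2 MB piece size and 777-piece count are the figures from this post, not official Storj RS parameters):

```go
package main

import "fmt"

func main() {
	const fileMB = 700.0 // original file size from the post
	const pieceMB = 2.0  // assumed size of each piece stored on a node
	const pieces = 777   // number of pieces the upload was divided into

	uploadedMB := pieceMB * pieces
	fmt.Printf("uploaded ~%.0f MB for a %.0f MB file (expansion ~%.2fx)\n",
		uploadedMB, fileMB, uploadedMB/fileMB)
	// prints: uploaded ~1554 MB for a 700 MB file (expansion ~2.22x)
}
```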

My CPU is among the best consumer-grade chips in terms of instructions per second (a 9th-gen i7, 8 cores without hyperthreading), and I guess that if I see 37% usage or more when uploading, on low-end hardware this would translate into 100% usage and a CPU bottleneck.

I will try the same test on my old PC (a 3rd-gen i7-3770K) to compare.

Maybe we should split this topic into a separate benchmark one.


not sure it’s as simple as that… but maybe…
CPU progression isn't linear: some computation isn't faster on modern CPUs compared to older ones, while other stuff is much faster. Of course, having more cores and threads helps today…

Even if there is a CPU bottleneck, CPU development is advancing in leaps and bounds right now, so give it a year or two and there might not be one anymore… it depends on what kind of computation/processing erasure coding requires.

My old Xeon 5630s have some advantages in CPU latencies and such, even compared to modern CPUs… not sure it matters in 99% of all cases though, lol.
But it's interesting; it would be cool to see whether the CPU could be a bottleneck.

I tested on my CPU, a 32-core Threadripper 3970X: it used 6% CPU when uploading a 4 GB file using FileZilla. When I get some time I can test on my i5 laptop and a Ryzen 2400G and see if there's a huge difference when uploading the same exact file.
I tested my laptop, which has an i5-7200U (a pretty weak CPU, I might add): it used between 62% and 100% CPU while uploading the 4 GB file.
I blew the dust off my dual-socket X5675 machine: it used 14% CPU when uploading the 4 GB file.

I'd like to see something an older CPU can do faster than a new CPU. From my experience there is nothing an older CPU does faster compared to a new one; that is why we upgrade from older CPUs in general. The only thing an old CPU does better is use more electricity with less efficiency.


It would be nice to have a Google Sheet documenting all these tests so users can see how it performs on different setups.


I could perform some tests on some pretty weak CPUs, like a Neo N36L, if there were a way to do it without having a Tardigrade account or needing to upload anything. It should be possible to extract the erasure coding/encryption algorithm from libuplink into a separate benchmark app.
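A rough sketch of what such a benchmark app could look like, in Go, using the general-purpose github.com/klauspost/reedsolomon library as a stand-in for uplink's own erasure coding (the 29-of-80 shard counts and 64 MiB segment size are assumptions modeled on Storj-like defaults, not values extracted from libuplink):

```go
// rsbench: rough CPU benchmark of AES-GCM encryption plus Reed-Solomon
// encoding, as a proxy for uplink's client-side upload work. Treat the
// numbers as a relative CPU comparison, not absolute upload speeds.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"time"

	"github.com/klauspost/reedsolomon"
)

func main() {
	const dataShards, parityShards = 29, 51 // hypothetical 29-of-80 RS scheme
	const segmentSize = 64 << 20            // 64 MiB segment, an assumption

	buf := make([]byte, segmentSize)
	rand.Read(buf)

	// Encrypt the segment (AES-256-GCM), roughly like client-side encryption.
	key := make([]byte, 32)
	nonce := make([]byte, 12)
	rand.Read(key)
	rand.Read(nonce)
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}

	start := time.Now()
	ciphertext := gcm.Seal(nil, nonce, buf, nil)
	encTime := time.Since(start)

	// Erasure-code the ciphertext into data + parity shards.
	enc, err := reedsolomon.New(dataShards, parityShards)
	if err != nil {
		panic(err)
	}
	shards, err := enc.Split(ciphertext)
	if err != nil {
		panic(err)
	}
	start = time.Now()
	if err := enc.Encode(shards); err != nil {
		panic(err)
	}
	rsTime := time.Since(start)

	mb := float64(segmentSize) / (1 << 20)
	fmt.Printf("AES-GCM: %v (%.0f MiB/s)\n", encTime, mb/encTime.Seconds())
	fmt.Printf("RS %d/%d: %v (%.0f MiB/s)\n",
		dataShards, dataShards+parityShards, rsTime, mb/rsTime.Seconds())
}
```

Since it never touches the network, something like this would run on a Neo N36L without a Tardigrade account and at least show how the CPU-bound part of an upload scales across machines.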


I'm not saying it outweighs the potential advantages of a newer CPU, but sometimes to gain 50% performance in one place they cut 5% in another… like hyperthreading:
it didn't make the CPUs faster, it just utilized the FPUs and ALUs better, which gives "faster" parallel computing, but the core still has the same number of FPUs, and without hyperthreading a CPU can be faster at serial computation.

Or the fact that the more cores you get, the more you do parallel rather than serial computing, but some computation is very difficult, if not impossible, to parallelize… and more cores or more CPUs add latency…

My server has two CPUs… that doesn't make it twice as fast, at least not when doing interconnected CPU tasks; only for purely parallel computation is it twice as fast…

On top of that, the continual miniaturization and incremental tweaking of CPUs works in favor of new CPUs, but sometimes a change makes parts of the design obsolete, other parts of the CPU might not see the same kind of development or usage, and then, to save die space, a feature might be removed because people don't use it anymore.

Of course, workarounds are often found to do more or less the same thing with different instructions, often taking less time, but since it's rarely 100% identical, old programs can stop working or act weird.
There are dozens of different kinds of computation CPUs can do, if not hundreds… not all of them always improve, or are even kept, when moving to newer architectures.

Yes, newer CPU = better.
But it's not always better at everything… it's quite rare that things actually go backwards, though; most often it's more a matter of lost features and workarounds that cover 95% of all use cases.

Comparing between architectures is a bit like comparing a GPU to a CPU; they're not easy to compare. Just as it's not easy to compare CPUs between AMD and Intel: sure, you can run some basic overall software tests, and sure, the closer they are in generation, the more alike they try to be…

But they just aren't alike; they have very different advantages. Sure, we could compare only one brand, but we would still see features being dropped… I'm sure there were 32-bit FPUs once… of course they aren't there today because a 64-bit FPU is "better", even if bigger instructions can mean more latency, and thus slower individual calculations… of course that is offset by being able to do more complex math per instruction…

But yes… let's just say newer is better; it's just not that simple… though I guess it never is… :smiley: