I’m “only” seeing steady ~600Mbit/s
Guess it’s time to go fetch a new server…
Th3Van.dk
Seeing about 90 Mbit/s on a measly Synology NAS. You disappoint me @Th3Van
The number of nodes was much smaller back then. I think they’re pushing much more data now than Stefan did.
How are you getting so much? How many nodes and disks do you have?
This is the traffic from Location one with a 1 Gbit connection:
Sadly I forgot to turn on traffic monitoring at my second location. I’ll add it here later.
Yeah, maybe 3000-4000 nodes.
You have a lot of shared IP nodes…
This is the aggregate bandwidth of 12 nodes on 5 separate locations in 3 distinct countries.
To say that I’m disappointed is an understatement…
The performance testing seems to have been brief so far. But we’re all hoping the capacity-reservation data will be continuous and overwhelming, and smother us all in ingress and cash!
Let’s say we keep this benchmark test running for a full week. Would that be a problem?
Personally, and from what I could see on my side, not a problem at all. All my nodes handled whatever load they got quite well.
There’s only one way to find out.
That’s one way to weed-out the weak nodes! I vote we call it the “Baked Potato” campaign!
Over the next few days we will try out different settings on our side to see how fast we can get. I wouldn’t expect a full week right now, but maybe next week or so. We will see. Also, we haven’t hit our goal yet. Not sure what we need to tune next to get to our target.
I know you probably can’t give out a lot of details but even though you’re not where you want to be performance-wise, are you reasonably satisfied with the performance?
Are there any numbers you might share?
I have a couple of Pi5s running nodes and they actually didn’t do too badly.
The one with the 18TB of spinning rust struggled a bit with IOWait (unsurprisingly), but the one with 14TB of SSD space did OK.
So don’t discard the potatoes just yet
You think it’s still node performance? I just realized that all through this test my system has also been running extended SMART tests on all HDDs. The combination of all the file walkers with SMART tests and heavy loads used to grind my system to a halt, but now it seems like it could easily handle more than you’re throwing at it.
The HDDs sound downright quiet, which is an interesting, unexpected side effect of writing much more efficiently.
Is your target to upload as much as possible in a short time?
For example 1PB in 12h or similar? Maybe try bigger files?
I noticed my nodes were unable to reach full ingress speed
compared to others on the forum, who maybe won the race because of a faster start response,
but my nodes might win if the download time were longer. I don’t know, just guessing.
(For example, my nodes were capable of 200-300 Mbps, but got a constant 15-30, with spikes to 50-80 Mbps.)
I believe the Saltlake satellite servers are the bottleneck. He said he needs to tune the DB on the satellite.
Why are you running SMART tests? A periodic array scrub yields a similar effect, but unlike a SMART test it is capable of recovering data.
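For anyone weighing the two: a minimal sketch of both approaches, assuming ZFS or Btrfs underneath. Device, pool, and mount names (`/dev/sdX`, `tank`, `/mnt/storagenode`) are placeholders, not anyone’s actual setup.

```shell
# Extended SMART self-test: surface scan only, reports bad sectors
# but cannot repair any data.
smartctl -t long /dev/sdX
smartctl -a /dev/sdX               # check progress/results later

# Filesystem-level scrub: reads and verifies checksums and, given
# redundancy, repairs corrupted data in place.
zpool scrub tank                   # ZFS: start scrub
zpool status tank                  # ZFS: check scrub progress
btrfs scrub start /mnt/storagenode # Btrfs equivalent
```

Note that a scrub only repairs anything if the array actually has redundancy (mirror/RAIDZ/RAID1); on a single-disk node it still detects corruption but cannot fix it.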