Updates on Test Data

Haha.

Have you ever read how the crypto algorithms work?
If a piece is invalid, it will not be accepted even on upload. I'm not even talking about downloads.
If your node serves garbage, it will take an audit reputation hit right away, without any delay.

Checksums. OK. But we verify the hash of the file, not only checksums (what kind of ancient software are you using?)…
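To make that concrete, here is a minimal Go sketch of that kind of check. It is not the actual storagenode code, and `verifyPieceHash` is a hypothetical name: the idea is simply that the node hashes the bytes it actually received and rejects the upload if they don't match the expected hash.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"errors"
	"fmt"
)

// verifyPieceHash is a hypothetical illustration: hash what was actually
// received and compare against the hash the client committed to. On a
// mismatch the upload is rejected instead of being stored.
func verifyPieceHash(received []byte, expectedHash []byte) error {
	actual := sha256.Sum256(received)
	if !bytes.Equal(actual[:], expectedHash) {
		return errors.New("piece hash mismatch: upload rejected")
	}
	return nil
}

func main() {
	data := []byte("piece payload")
	good := sha256.Sum256(data)

	fmt.Println(verifyPieceHash(data, good[:]))                // <nil>
	fmt.Println(verifyPieceHash([]byte("corrupted"), good[:])) // hash mismatch error
}
```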

1 Like

OK. So you need the maths, right?
Have you ever read the whitepaper (asking, hm, since you seem so convinced against the idea…)?

How? Please explain it for dummies like me. You are definitely much smarter than the team, and you know exactly how it should be done…

Is it OK for @littleskunk to still give us test updates in here… or is it for general Node Operator talk now? :wink:

8 Likes

For anyone wondering how big the ingress difference is between the SL test sat and the others, here are 2 nodes on the same machine, same IP, this month:

Node 1 with no SL (gracefully exited last year):

Node 2 with SL:


And…

1 Like


The biggest traffic in and out I got on a machine with 2 new nodes: 46 TB for this month.

1 Like

Unfortunately 1.104 doesn't show correct bandwidth numbers. You have to wait for 1.106 to see real bandwidth usage.

2 Likes

I clearly do not understand why I have this language barrier with you. I know almost every European language, including the variations of German (as far as I can tell), including the Swiss one (btw, I like the Swiss variation of German much more than native German, it sounds much more pleasant to me, even though most of my relatives live in Germany… except my nephew, who lives in Switzerland, well, yes, but how else would I know the difference?!)

I don’t know about you, but (re)enabling sync made my node run faster after upgrading to Debian 12. It’s probably because of my setup, but with sync on I get more traffic and no load spikes.

The way I understand it, if enough good pieces are left on the network, the customer will be able to reconstruct their file. If too many pieces are missing or corrupt, then the file is lost.
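That is the gist of erasure coding. Below is a toy Go sketch of the threshold logic, using illustrative numbers in the spirit of the whitepaper's 29-of-80 scheme; these are not the satellites' live settings.

```go
package main

import "fmt"

// canReconstruct: a segment split into many pieces only needs any k healthy
// pieces to be rebuilt. Everything above k is safety margin.
func canReconstruct(healthyPieces, k int) bool {
	return healthyPieces >= k
}

func main() {
	const k = 29 // pieces needed to rebuild a segment (illustrative, whitepaper-style)

	fmt.Println(canReconstruct(60, k)) // true: plenty of margin left
	fmt.Println(canReconstruct(30, k)) // true: still recoverable, repair would normally kick in long before this
	fmt.Println(canReconstruct(25, k)) // false: too many pieces missing or corrupt, the segment is lost
}
```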

4 Likes

No load spikes on Debian 12.

All else being equal (as in, not enough traffic to make sync too slow, ignoring the weirdness with my node), I prefer to have sync enabled, not because the network would lose data without it, but because in some cases my node could get disqualified for losing too much data (even if it would not really matter to the network and the customers).
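For context on what sync buys you, here is a generic Go sketch of a synchronous piece write. This is not storagenode code; `writePieceSync` and the file name are made up. The point is that the data is flushed to stable storage before the write is acknowledged, so a crash right after the acknowledgement can't silently drop the piece.

```go
package main

import (
	"log"
	"os"
)

// writePieceSync writes the data and forces it to stable storage before
// returning, at the cost of higher latency per write.
func writePieceSync(path string, data []byte) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()

	if _, err := f.Write(data); err != nil {
		return err
	}
	// The fsync is the price of sync mode: once this returns, the piece
	// survives a power loss.
	return f.Sync()
}

func main() {
	if err := writePieceSync("piece.sj1", []byte("example piece data")); err != nil {
		log.Fatal(err)
	}
}
```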

X24, I please you! :nerd_face:

2 Likes

Noooo, do not watch it!!! I can barely look at the molested bits, bytes, and the bad planning, and arrrggghhhnnng. I summon @arrogantrabbit to slaughter this abomination of something homemade. It's pure cringe for hands-on IT people; damn, let the bytes rest in peace.

I failed.

I believe the test data has started to be deleted… I see a lot of trash piling up.

That would mean TTL isn't working. TTL data is supposed to be deleted immediately (i.e., when piece expiry runs) without being moved to trash.
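A hypothetical sketch of how TTL collection is expected to behave (the names and data structures here are made up, not the real piece-expiration code): pieces whose expiry timestamp has passed are deleted outright and never take the detour through trash.

```go
package main

import (
	"fmt"
	"time"
)

// collectExpired walks the known expirations and deletes every piece whose
// TTL has passed, directly, without moving it to a trash folder.
func collectExpired(expirations map[string]time.Time, now time.Time, deletePiece func(id string)) {
	for id, expiresAt := range expirations {
		if now.After(expiresAt) {
			deletePiece(id) // direct delete: no trash involved for TTL data
			delete(expirations, id)
		}
	}
}

func main() {
	now := time.Now()
	expirations := map[string]time.Time{
		"piece-a": now.Add(-time.Hour),     // already expired
		"piece-b": now.Add(24 * time.Hour), // still live
	}

	collectExpired(expirations, now, func(id string) {
		fmt.Println("deleting expired piece:", id)
	})
}
```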

2 Likes

Not me. Trash is going down and there's no new trash that I can see…

It seems like TTL is kicking in. The number of full nodes is decreasing significantly.

Maybe it's from the data sats; I didn't check.
If TTL failed and sent pieces to trash, then it's a catastrophic failure!!!

All I see is trash being deleted. Haven’t noticed any TTLing going on.
Has it been 30 days since The Big Data Push started already?? :scream:

All I see is trash increasing… it's from the data sats. It piles up on nodes without Saltlake too.

It could be possible: if your databases were corrupted or deleted, the piece_expiration.db in particular, then this data would be collected by the garbage collector and moved to the trash (see the sketch below).

So, it seems your databases are OK.
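A hypothetical sketch of the two deletion paths described above (made-up names, not the actual storagenode code): with an intact piece_expiration.db, the TTL collector deletes expired pieces directly, while pieces the node has no record for are only caught later by garbage collection, which moves them to trash first.

```go
package main

import "fmt"

type pieceState struct {
	expiryKnown bool // a record for the piece exists in piece_expiration.db
	inBloom     bool // the satellite's bloom filter still lists the piece as wanted
}

// deletionPath illustrates which cleanup mechanism would handle a piece.
func deletionPath(p pieceState) string {
	switch {
	case p.expiryKnown:
		return "TTL collector: direct delete at expiry"
	case !p.inBloom:
		return "garbage collection: move to trash, purge after the retention period"
	default:
		return "keep the piece"
	}
}

func main() {
	fmt.Println(deletionPath(pieceState{expiryKnown: true}))                  // healthy piece_expiration.db
	fmt.Println(deletionPath(pieceState{expiryKnown: false, inBloom: false})) // lost records: GC + trash
	fmt.Println(deletionPath(pieceState{expiryKnown: false, inBloom: true}))  // still wanted by the satellite
}
```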

Such testing should have been done a long time ago and continuously since then. I am convinced we would be better off today.
Therefore I am not surprised that we see current implementations breaking, but I am also not at all satisfied with this situation. And it doesn't help when everything that is supposed to help is classified as “low priority” and therefore rolled out slowly or not at all.

I see I forgot to add the used-space filewalker stop-and-resume feature to my list of needed fixes. It is not even on the list of upcoming releases. As said, these are the fixes I need to get the numbers straight and make an informed decision about whether or not to add capacity. If they want that sooner, they should make these fixes a higher priority and roll them out faster.

3 Likes