Updates on Test Data

I am wondering if you are aware that there might be other reasons why SNOs are not adding capacity. I tried to make that clear in my post:

From what I see on GitHub, I would have to wait at least until my nodes are on version 1.106, and only then might I get a picture of what is currently really used and what is not, and of whether all the old deleted stuff has finally been cleaned out.

The unreliability of the numbers everywhere, but also of the storagenode software and the satellites, together with the slow rollout of fixes because they are “low priority”, is currently my main blocker to adding anything.

This is also what @BrightSilence has said:

There is a lot on the forum about issues with the databases, space discrepancies, problems with filewalkers, garbage not being deleted, nodes restarting, and I see all of that on my nodes too. How this is all supposed to work with so many issues at the moment is beyond my comprehension.

So even if you made such a forum post today detailing a signed deal, I would not add capacity or go for additional nodes.

I made my suggestion on adding more customers with large space requirements here:

Edit: Just to give you an idea of what I am talking about: if I look at the average used space over all nodes, it comes to only 40% of the space that all my nodes claim they are using. And the trash over all nodes reports a size of about 1/3 of that reported average used space.
This does not add up. Running du gives me completely different numbers than the nodes do. Some nodes must have tons of still uncollected garbage on them, while others do not have correctly updated used-space databases because the filewalkers never finish.
So basically, even if I wanted to, I could not tell how much space is used altogether, how much trash there is, how much garbage has not been collected, and so on.
So it is really impossible to make a decision to add even more terabytes in this obscure situation.
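To show the kind of cross-check I mean, here is a minimal Python sketch that sums the on-disk sizes of the blobs and trash folders (roughly what du reports) and compares them with the used figure taken from the node dashboard. The storage path and the way the dashboard number is passed in are just assumptions for illustration; this is not the node’s own accounting.

```python
import os
import sys

def walk_bytes(path: str) -> int:
    """Sum apparent file sizes under path, roughly like `du -sb`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # pieces can disappear while we walk
    return total

if __name__ == "__main__":
    # Usage (hypothetical paths): python check_space.py /mnt/storagenode/storage 8500000000000
    storage_path = sys.argv[1]
    dashboard_used = int(sys.argv[2])  # "used" figure read off the node dashboard

    blobs = walk_bytes(os.path.join(storage_path, "blobs"))
    trash = walk_bytes(os.path.join(storage_path, "trash"))

    print(f"blobs on disk : {blobs / 1e12:.3f} TB")
    print(f"trash on disk : {trash / 1e12:.3f} TB")
    print(f"dashboard used: {dashboard_used / 1e12:.3f} TB")
    print(f"discrepancy   : {(blobs - dashboard_used) / 1e12:+.3f} TB")
```

On my nodes the discrepancy from a check like this is nowhere near zero, which is exactly the problem.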

Edit2: The resume feature for the used-space filewalker is also one of the features I need to get the numbers correct.
