Saltlake vetting started?

Is this really the only parameter for vetting? No time delay or data threshold before audits start?

Yes, that’s what gets traffic into the unvetted node. After that it’s a matter of how much data it has. More data = more likely to get an audit.
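
To make that concrete, here is a minimal Go sketch of proportional audit selection. It is purely illustrative (the type and function names, and the selection logic, are my assumptions, not the satellite's actual code): if audit targets are picked with probability proportional to stored data, a node holding 100x more pieces gets audited roughly 100x as often.

```go
package main

import (
	"fmt"
	"math/rand"
)

// node is a hypothetical record; the fields are illustrative,
// not Storj's actual schema.
type node struct {
	id       string
	storedGB float64
	vetted   bool
}

// pickAuditTarget selects a node with probability proportional to
// how much data it stores -- the "more data = more audits" effect.
func pickAuditTarget(nodes []node, r *rand.Rand) node {
	var total float64
	for _, n := range nodes {
		total += n.storedGB
	}
	x := r.Float64() * total
	for _, n := range nodes {
		x -= n.storedGB
		if x <= 0 {
			return n
		}
	}
	return nodes[len(nodes)-1] // guard against float rounding
}

func main() {
	r := rand.New(rand.NewSource(1))
	nodes := []node{
		{"new-node", 5, false},  // unvetted, little data
		{"old-node", 500, true}, // vetted, lots of data
	}
	counts := map[string]int{}
	for i := 0; i < 10000; i++ {
		counts[pickAuditTarget(nodes, r).id]++
	}
	fmt.Println(counts) // old-node is audited ~100x more often
}
```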

@ACarneiro no, benchmarks are unpaid.

You might want to read that text again. Especially the last sentence. I don’t understand why people still believe the current test would be unpaid.

So that means that only that one particular test was unpaid?

Yes. And it was just 10 segments of 64 MB each, about 640 MB in total. A tiny amount of unpaid data.

Then I stand corrected.

I read it the same way you did. I expected the slower/continuous SLC data to be related to the TTL/capacity-reservation plans (so paid)… but the recent high-bandwidth testing to be the performance data not registered with the satellite (so unpaid).

Nope, only this one was unpaid:

I actually don’t care that much about the vetting process, since I still don’t see much of a difference between vetted and unvetted nodes. @Vadim 's argument seems reasonable, but isn’t: the situations where nodes choke are precisely stress situations, so a heavy data influx at the beginning might actually prevent problems later on. Disk failure is still notoriously hard to predict.

So I can follow @Roberto 's reasoning, in the sense that those nodes are apparently run by people who know how to set up nodes. But I would check not only the number of nodes behind the same IP, but also the number of failed nodes behind that same IP in the last six months and/or from the same operator. Then it becomes quite complex, though, and the question is whether it’s really worth the hassle.
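
Just to sketch what such a check could look like (a hypothetical data model and policy, not anything Storj actually implements): keep a history of node failures and flag new nodes whose IP range or operator had a failure within the last six months.

```go
package main

import (
	"fmt"
	"time"
)

// nodeRecord is a hypothetical per-node history entry; the fields
// and the whole check are illustrative, not Storj's actual logic.
type nodeRecord struct {
	operator string
	ipNet    string    // e.g. the /24 the node sits behind
	failedAt time.Time // zero value means the node never failed
}

// suspiciousHistory reports whether the same IP range or the same
// operator had a node failure within the last six months.
func suspiciousHistory(history []nodeRecord, ipNet, operator string, now time.Time) bool {
	cutoff := now.AddDate(0, -6, 0)
	for _, rec := range history {
		if rec.failedAt.IsZero() || rec.failedAt.Before(cutoff) {
			continue // never failed, or failed outside the window
		}
		if rec.ipNet == ipNet || rec.operator == operator {
			return true
		}
	}
	return false
}

func main() {
	now := time.Now()
	history := []nodeRecord{
		{operator: "op-a", ipNet: "203.0.113.0/24", failedAt: now.AddDate(0, -2, 0)},
		{operator: "op-b", ipNet: "198.51.100.0/24"}, // still running
	}
	// A new node from op-a's range would be flagged for slower vetting.
	fmt.Println(suspiciousHistory(history, "203.0.113.0/24", "op-a", now))
}
```

Even this toy version shows where the complexity creeps in: you need reliable failure history per IP range and per operator identity, and operators can change both.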

Maybe change the vetting process?
https://forum.storj.io/t/strategy-for-testing-new-nodes/26101