I don’t think this is quite accurate. It isn’t as straightforward as “alpha = number of successes, beta = number of failures”: a success changes both alpha and beta, and so does a failure. You’re right that a higher initial alpha seeds a positive history, but within the model as we’ve been using it, the sum (alpha + beta) converges toward 1/(1 − lambda) as time goes on. The closer alpha is to 1/(1 − lambda), the stronger the reputation history. What I don’t know is whether that convergence still holds when alpha or beta already starts at or above 1/(1 − lambda). I should really just run the numbers and graph it somewhere, but I’m out of time tonight. Maybe I can do that tomorrow.
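Actually, here’s a quick simulation sketch instead of a graph. I’m assuming the update rule is the usual forgetting-factor form (alpha ← λ·alpha + v, beta ← λ·beta + (1 − v), with v = 1 for a success), so take the exact rule with a grain of salt:

```python
# Assumed update rule (forgetting factor lambda):
#   on each audit: alpha <- lam*alpha + v, beta <- lam*beta + (1 - v),
#   where v = 1.0 for a success and 0.0 for a failure.
def update(alpha, beta, success, lam):
    v = 1.0 if success else 0.0
    return lam * alpha + v, lam * beta + (1.0 - v)

lam = 0.99
limit = 1.0 / (1.0 - lam)  # = 100; (alpha + beta) converges toward this

# Case 1: modest seed, all successes -> alpha climbs toward the limit.
a, b = 1.0, 1.0
for _ in range(1000):
    a, b = update(a, b, True, lam)
print(round(a, 3), round(a + b, 3))

# Case 2: alpha seeded ABOVE 1/(1 - lam) -> the sum decays back DOWN
# toward the same limit, just approaching it from the other side.
a, b = 150.0, 0.0
for _ in range(1000):
    a, b = update(a, b, True, lam)
print(round(a, 3))
```

If that rule is right, the answer to my own question would be yes: an over-seeded alpha doesn’t break the convergence, it just approaches 1/(1 − lambda) from above.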
I’m not sure that follows. 19.88 looks close to 20 on a linear scale, but that doesn’t mean the difference is irrelevant. The difference between 19.9 and 19.99 is somewhat like the difference between “2 nines” and “3 nines” of availability, because alpha should keep getting closer to (but never actually reach) 20 as the history grows. (In practice it probably can reach 20, because floating point values only have so much precision.)
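To put a number on the “nines” analogy (again assuming the update alpha ← λ·alpha + 1 on success, so the gap to 20 shrinks by a factor of λ per successful audit):

```python
import math

# Assumed update on success: alpha <- lam*alpha + 1, so the gap
# (limit - alpha) shrinks by a factor of lam per consecutive success.
lam = 0.95
limit = 1.0 / (1.0 - lam)  # = 20

# Successes needed to shrink the gap 10x, i.e. to earn one more "nine":
per_nine = math.log(10) / math.log(1 / lam)
print(round(per_nine))  # -> 45

# Verify by iterating from alpha = 19.9 until the gap drops below 0.01:
alpha, steps = 19.9, 0
while limit - alpha > 0.01:
    alpha = lam * alpha + 1
    steps += 1
print(steps)  # -> 45
```

So going from 19.9 to 19.99 costs about 45 consecutive clean audits, and each further “nine” costs roughly 45 more.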
Good idea! It wouldn’t have to be raised all at once.
The writability check happens every 5 minutes by default, and the readability check (making sure the directory has been correctly initialized as a Storj data directory) happens every 1 minute. The readability check is the one that would help in that situation, so that’s good.
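For reference, those intervals should be tunable in the storagenode config — though I’m going from memory on the exact option names, so double-check them against your node’s config.yaml:

```yaml
# Option names from memory; verify against your node's config.yaml.
# how frequently to verify the storage directory is writable (default 5m0s)
storage2.monitor.verify-dir-writable-interval: 5m0s
# how frequently to verify the storage directory is readable (default 1m0s)
storage2.monitor.verify-dir-readable-interval: 1m0s
```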
A really good point. It’s been too long since I looked at the model’s assumptions about how long bad nodes can remain online. I’ll try to find that. (The person who would normally know all of this, and who wrote the beta reputation model adaptation for Storj, left recently, so we’re trying to keep up without them.)
Certainly true. And 0.999 is fine with me, as long as we make sure bad actors are DQ’d quickly enough (whatever “quickly enough” turns out to mean).