Trust process is too long

I agree with that. Each storage node can be run in a totally different (hardware) environment and has to prove its reliability independently.
However, two things may be worth considering:

  1. Once a node has proven itself reliable to one satellite, other satellites could adopt that trust, especially if those satellites are run by the same entity. Maybe there could be a setting for satellite owners, for example to trust the Storj satellites and take over the trust level their nodes have already earned.
  2. I also had my own thread suggesting that the vetting process should at least not be artificially halted for full nodes. The current vetting process requires the node to hold data from the respective satellite; if a node is full, there is no such data, so no vetting can take place unless the satellite can place data pieces on it (see the sketch below).
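
To make item 2 concrete, here is a minimal Go sketch of why vetting stalls on a full node, assuming vetting requires a fixed number of successful audits per satellite and that an audit can only challenge pieces the node actually holds. The names, threshold, and satellite address are illustrative assumptions, not Storj's actual API:

```go
// Hypothetical model: vetting counts successful audits per satellite,
// and a satellite can only audit pieces the node stores for it.
package main

import "fmt"

const auditsRequiredForVetting = 100 // assumed threshold, not an official value

type node struct {
	free         int64          // remaining allocated space in bytes
	piecesBySat  map[string]int // pieces held, per satellite
	auditsPassed map[string]int // successful audits, per satellite
}

// canBeAudited reports whether a satellite can audit this node at all:
// with zero pieces stored for it, there is nothing to challenge.
func (n *node) canBeAudited(satellite string) bool {
	return n.piecesBySat[satellite] > 0
}

// vetted reports whether the node has passed enough audits for a satellite.
func (n *node) vetted(satellite string) bool {
	return n.auditsPassed[satellite] >= auditsRequiredForVetting
}

func main() {
	n := &node{
		free:         0, // node is full: the satellite cannot place new pieces
		piecesBySat:  map[string]int{"us1.storj.io": 0},
		auditsPassed: map[string]int{"us1.storj.io": 0},
	}
	fmt.Println("auditable:", n.canBeAudited("us1.storj.io")) // false
	fmt.Println("vetted:", n.vetted("us1.storj.io"))          // false
}
```

With free == 0 the satellite can place no new pieces and the node holds none of its data, so no audit can ever run and the vetting counter never advances.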

Can this be profitable???

By the way, I don't think the integrity of the data is the responsibility of any SNO; Storj takes precautions and implements a lot of redundancy which, as a side effect, reduces the frequency of SNOs getting egress.
It's a little confusing to go into ethical judgments, but it is of course a very interesting matter to discuss.

No. But I already have the infrastructure, and it's paid for by other projects running on it - so actually yes. I would also host other projects on it, if they were as cool as STORJ. :slight_smile:

Especially if not all facts are known.

So for me it's clear now: I will start warming up new nodes now, for the new storage I will buy in a few months.

That's also what I do. I have a small, full node with 700GB sitting on my main drives, just in case an HDD dies. Then I have a fully vetted, 10-month-old node to replace it. If that happens, I'll immediately start a new node with 500GB allocated.
I think there’s nothing wrong with that.
However, all those nodes are on the same machine, so they have basically all proven to be reliable. It's different if you start a node in place 1 as a backup and then use it in place 2 where your HDD fails. Then the node's status doesn't represent the reliability of place 2 (or maybe even a place 3 that has not hosted a node yet). And that's the point where the discussion gets tricky and some "ethical" arguments might be made…


I don't think it is much different. A node gets vetted based on its history of reliability; that is not a prediction of its future reliability. So I don't see much of a difference: the node has proven to be reliable in the past, but has yet to prove that it will keep being reliable in the future. If you move it to some other place, no problem. Auditing keeps going, and if the node fails repeatedly, it will become an unreliable node and might get disqualified.
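
For anyone wondering how "fails repeatedly, gets disqualified" can work mechanically, below is a minimal Go sketch in the style of the alpha/beta audit reputation model described in the Storj v3 whitepaper. The forgetting factor, starting values, and disqualification threshold are assumptions for illustration, not production values:

```go
// Alpha/beta reputation: successes add weight to alpha, failures to beta,
// and a forgetting factor decays old history so recent audits matter more.
package main

import "fmt"

const (
	lambda      = 0.95 // forgetting factor: assumed value
	dqThreshold = 0.6  // assumed disqualification threshold
)

type reputation struct{ alpha, beta float64 }

// update applies one audit result: v = +1 for success, -1 for failure.
func (r *reputation) update(success bool) {
	v := -1.0
	if success {
		v = 1.0
	}
	r.alpha = lambda*r.alpha + (1+v)/2
	r.beta = lambda*r.beta + (1-v)/2
}

// score is the node's audit reputation in (0, 1].
func (r *reputation) score() float64 { return r.alpha / (r.alpha + r.beta) }

func main() {
	r := &reputation{alpha: 1, beta: 0} // assumed starting reputation
	for i := 0; i < 200; i++ {          // long history of passing audits
		r.update(true)
	}
	fmt.Printf("healthy node: score=%.3f\n", r.score())
	for i := 1; r.score() >= dqThreshold; i++ { // now fail repeatedly
		r.update(false)
		fmt.Printf("after failure %d: score=%.3f\n", i, r.score())
	}
	fmt.Println("score below threshold: node would be disqualified")
}
```

Because the forgetting factor decays old history, even a long-vetted node that starts failing audits drops below the threshold after a bounded run of failures - which is exactly why a vetted identity moved onto unreliable hardware won't stay trusted for long.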