Trust process is too long

Hello,

I currently have 10 nodes and I’m increasing capacity with “not so expensive” disks instead of letting them reside on expensive RAID-10 SSD storage. Luckily, I started months ago to “warm up” those nodes with a small (virtual) storage disk attached, so that the satellites gained trust in the nodes.

However, now I have bought 8 additional disks, but I only have 6 of those 10 nodes at that location. I first thought I would honor the Storj suggestion to use only one node per disk. However, then I would have to wait months until the newly set-up nodes become “productive”.

So I think, if there is no way of boosting the initial trust level of a node run by an already known SNO, I will more likely build RAID-0 arrays out of every 2 disks and accept that a failure becomes more likely over the years. And of course, I would start warming up new nodes in case some identities get burned due to crashed disks.
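
Roughly, the extra risk is easy to estimate. A back-of-the-envelope sketch, where the per-disk annual failure rate is just an assumed number:

```python
# Back-of-the-envelope: how striping two disks (RAID-0) raises the
# yearly failure risk. The per-disk rate is an assumption, not data.
p_disk = 0.03  # assumed annual failure probability of one disk

p_single = p_disk                # one disk, one point of failure
p_raid0 = 1 - (1 - p_disk) ** 2  # RAID-0 dies if either disk dies

print(f"single disk:   {p_single:.1%} per year")  # 3.0%
print(f"2-disk RAID-0: {p_raid0:.1%} per year")   # ~5.9%
```

So the failure probability roughly doubles, which is the downside I’m willing to trade for the saved months.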

Maybe I should start a black market for warmed-up nodes?

Did I overlook something, or is there a possibility to achieve this better initial trust level in such cases (where the SNO is, or could be, already known to the system)?

I have an additional suggestion which would save me time and meshes with the one above, because I wouldn’t have to build strategies to bypass some not-so-well-engineered rules: what if there were a SNO key, so that every node could be assigned to exactly one SNO? You could then (do this trust thing and) control more precisely which data the nodes of that SNO receive. Because: I’m currently evading the data-delivery rules (which depend on IP subnets) by spreading those nodes across 6 different IP ranges, which in a worst-case scenario puts 6 fragments of one file on my hardware. And this degrades the resilience of the network, or at least of such files.
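
To put a rough number on that degradation, here is a small sketch assuming the often-cited 29-of-80 Reed-Solomon scheme (my assumption; the satellites’ real erasure-coding parameters may differ):

```python
# How much of a file's loss margin one failure domain consumes,
# assuming a 29-of-80 Reed-Solomon scheme (an assumption; the real
# erasure-coding parameters may differ).
k, n = 29, 80          # pieces needed to rebuild / pieces stored
margin = n - k         # pieces the file can lose and still survive

pieces_on_one_sno = 6  # worst case from the 6 IP ranges above
print(f"loss margin: {margin} pieces")
print(f"one outage consumes {pieces_on_one_sno / margin:.0%} of it")  # ~12%
```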

Thanks for reading carefully.

What do you mean by trust level?
I installed new nodes at the end of last month, and after a day (one day, not a month) they got data, a 100% audit level, and a first payment for the April period (very small, obviously).

Hello @endurance,

I mean everything trust-related, which currently is:

  1. How fast you will get the full pay-out (currently after 15 months per satellite; see the sketch below).
  2. How fast your node is trusted (currently after 100 audits per satellite), so that it may receive traffic at “full speed” (currently around ~5 TB/month for all nodes in the same IP range).
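
For point 1, this is the held-back schedule as I understand it from the published docs (treat the exact percentages as my reading, not an authoritative source):

```python
# Payout held-back schedule per satellite, as I understand it from
# the published docs -- the exact percentages are my reading only.
def held_percentage(node_age_months: int) -> int:
    """Share of a month's earnings the satellite holds back."""
    if node_age_months <= 3:
        return 75
    if node_age_months <= 6:
        return 50
    if node_age_months <= 9:
        return 25
    return 0  # paid in full from month 10 on; at month 15
              # half of the accumulated held amount is returned

for month in (1, 4, 7, 10, 15):
    print(f"month {month:>2}: {held_percentage(month)}% held")
```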

regards.

Trust isn’t just about the node operator, it’s also about whether the storage can still be trusted or is corrupting files left and right. So trust can’t simply be transferred from one node to another.

My recommendation would stay the same, but don’t start all nodes at the same time. Wait at least until the previous one is vetted before starting a new one. And after 2 or 3 nodes I would wait until the others are nearing full capacity before starting the next one. This should keep vetting times short, while still making the most of the HDD space you have.

3 Likes

This makes sense to me. The thing is that, for the system, the SNO doesn’t exist at all. And obviously, if a node is trustworthy, it’s not only because of the quality of its hardware, but mainly because of the efforts of its human operator. It makes sense that if one node is trustworthy, any other node under the same operator’s supervision would likely be similarly trustworthy.
So, yes, it would be more efficient that way IMHO.

1 Like

Would you then also want bad sectors on one node to drag down the reputation of all your nodes? I understand the underlying idea. I just have my doubts about its practicality.

It makes no sense to start all nodes at the same time, as they are normally treated as one big node. More available space doesn’t give you more data.

Now you say in your last paragraph that you circumvent this by routing each node through a different subnet. You shouldn’t be trusted at all.

4 Likes

Apparently this comment drew more attention than the OP, so I have removed it.

1 Like

@BrightSilence: I think the main reason for nodes going offline is that people give up on them. And the mechanism I have in mind doesn’t necessarily mean that nodes would no longer be evaluated and possibly punished. Nor do SNOs and nodes need to be treated the same.

@donald.m.motsinger & @Cmdrd: On a scale of 1 to 10, how offended and/or mad are you right now? I really hope you guys don’t cry too much now.

However, I know how the Storj network works, and I can therefore decide for myself how to (or how not to) set up my nodes. So maybe let’s discuss whether an additional layer that brings in the SNO as a component might make sense.

If you are just gonna whine in your next comment, maybe (just maybe) don’t post it. Thank you in advance.

Please let’s keep the conversation civilized. Someone who communicates openly about what they are doing to circumvent the systems in place is not the enemy. If this is behavior that shouldn’t happen, then a solution needs to be built that makes it either impossible or unprofitable. Instead of trying to get rid of someone circumventing the system, I’d much rather focus on changing the system so this kind of abuse can’t happen in the first place.
So please, everyone, try to refrain from antagonizing each other.

5 Likes

I got 2.5 TB on one fresh node this month; it was freshly installed last month, on 20.4. I am running only two nodes, one per ISP connection, to keep enough bandwidth free for my other operational tasks.
Is that more on the good side? I assumed my traffic was about average, since I have nothing that special.

Sounds like my node that also started last month, running on top 1% hardware.

To reiterate @BrightSilence’s point made above, this type of suggestion doesn’t really make sense, as the network isn’t vetting a person; it is vetting a storage node. The vetting period is there to determine the stability of the node relative to the network. A stable node does not imply a stable operator, and vice versa.

In addition to that, how do you propose handling one node that has problems and fails audits to the point of suspension or disqualification? What thresholds do you set for disqualifying the SNO key as a whole, and all associated node IDs as a result? This cuts both ways.

And no, I’m not offended; I just think people flagrantly abusing the network is not a positive thing. After re-reading your initial comment I can see what your intentions are in bypassing the restrictions. I still don’t agree with them and still think this should result in a node ban, but it does support your reasoning for starting multiple nodes and having them vetted faster. I do not think the withholding period should be modified: that would open the network up to a malicious SNO taking one vetted ID, spinning up a bunch of other nodes, circumventing the network restrictions, and not having a proportionate amount of STORJ withheld, leaving them free to harm the network.

1 Like

Also, I might add that there is no benefit in antagonizing a person who openly admits breaking the ToS, or in trying to have his nodes banned.
There might very well be hundreds of other SNOs doing the same thing without telling anyone.
So don’t be mad at the one person who admits it and starts a discussion (even though what he does is still not right). Banning him wouldn’t address why a person does this (and probably lots of others too).

1 Like

Disqualifying the nodes of people who speak up would set a really bad precedent. The result would simply be that people do it quietly instead. In my opinion the nodes shouldn’t be disqualified at all. Rather, a system needs to be built that ensures these nodes, despite having different IPs, still share traffic, so there is no longer an incentive to do this.

Actually… yes. I think the advantages (months of time) are worth this downside. But I can see someone preferring it the other way.

Really? I don’t know… I think they are very related. Can we at least agree there is some influence between them?

I don’t know enough about networks to really appreciate why what GhostW is doing is so wrong, but if you got two separate lines from your ISP, would that be fine? (Though I don’t know whether that is technically or administratively possible either.)

If his house burns down, gets flooded, has a power cut, [insert other reason here], then all his nodes will be affected, and potentially several pieces of the same file.

1 Like

Well @BrightSilence, there will never be an implementation that deals with this abuse effectively, so at this point I think it’s a moot conversation, at least from my end.

This guy and anyone else abusing the network have free rein to do so, but I guess hand-wringing about calling someone out is probably as effective as actually calling them out, so I will take that as my personal lesson learned from this.

Okay guys, I actually didn’t want to go into this much detail, but:

None of my nodes is hosted “at home”. All of them are hosted in data centers where everything is redundant. In any case where I put a node onto a backup system in which at least one component isn’t redundant, you can be sure that this single point of failure doesn’t exist in any of the other node setups at that geographical location.

I could make the argument that almost everybody hosting a node without full redundancy (i.e. mostly at home) is endangering the network, because if that ISP fails, many nodes spanning (very likely) more than one class-C IP range will go down. (In reality this scenario happens more often than my data center going down, or all of the redundant hardware in it failing.)

As we can see, at least in my case the biggest threat to the network is the SNO deciding to leave it. And that’s the reason why I would like the SNO to be taken into account, both positively and negatively.

So, how could Storj combat such “network IP whatever abuse” behaviour? Well, they could also try to limit the number of pieces sent to nodes in bigger network ranges, like class-B or class-A subnets (or any other CIDR). And maybe they are already doing so. However, this won’t really solve the problem, because the IP address is not a very reliable indicator of the exact location a node resides in.
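
As a rough illustration of what such a rule could look like (this is my own sketch with made-up helper names and inputs, not how the satellites actually select nodes):

```python
# Sketch of piece placement limited per network range -- my own
# illustration, not the satellites' actual selection code.
from collections import defaultdict
from ipaddress import ip_network

def pick_one_per_subnet(node_ips: list[str], prefix_len: int = 24) -> list[str]:
    """Keep at most one candidate node per /prefix_len network."""
    buckets: dict = defaultdict(list)
    for ip in node_ips:
        net = ip_network(f"{ip}/{prefix_len}", strict=False)
        buckets[net].append(ip)
    # take the first node of every subnet bucket
    return [ips[0] for ips in buckets.values()]

nodes = ["203.0.113.5", "203.0.113.77", "198.51.100.9"]
print(pick_one_per_subnet(nodes, 24))  # /24: the 203.0.113.x pair collapses
print(pick_one_per_subnet(nodes, 16))  # a wider range, as suggested above
```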

If the SNO were known to the network, and if this gave some incentive, like fewer audits until a node gets full ingress traffic, then many multi-node hosters would (I think) take this step, at least for their new nodes. Storj could then permit a “good” SNO to receive 2 instead of 1 piece of a file across all of his nodes, or whatever. He could even specify how many real sites there are and which physical location a given node is at.

I look at the success of Storj in terms of value to the customers. It makes sense that long-term reliability is far more valuable to the system than someone with a very reliable node who comes and goes. It really doesn’t matter what the reason for that long-term reliability is: quality drives, where something is hosted, the power quality, or simple luck. So as a SNO, I accept this principle, build my system accordingly, and expect it will pay off in the long run. I can see the point in not rewarding short-lived nodes, whether run by different people or the same person. Each node is a storage unit from the system’s perspective, and it must prove itself over time regardless of environment.

2 Likes