Would you also want bad sectors on one node to cause the reputation of all your nodes to fall? I understand the underlying idea; I just have my doubts about its practicality.
It makes no sense to start all nodes at the same time as they normally are treated as one big node. More available space doesn’t give you more data.
Now you say in your last paragraph that you circumvent this by routing each node through a different subnet. You shouldn’t be trusted at all.
Apparently this comment drew more attention than the OP, so I have removed it.
@BrightSilence: I think the main reason for nodes going offline is that people give up on them. And the mechanism I have in mind doesn’t necessarily mean that nodes shouldn’t be evaluated and possibly punished anymore. Nor do SNOs and nodes need to be treated as the same thing.
@donald.m.motsinger & @Cmdrd: On a scale of 1 to 10, how offended and/or mad are you right now? I really hope you guys don’t cry too much now.
However, I know how the STORJ network works, and therefore I can decide for myself how to (or how not to) set up my nodes. So maybe let’s discuss whether an additional layer that brings in the SNO as a component may make sense.
If you are just gonna whine in your next comment, maybe (just maybe) don’t post it. Thank you in advance.
Please let’s keep the conversation civilized. Someone who communicates openly about what they are doing to circumvent the systems in place is not the enemy. If this is behavior that shouldn’t happen, then a solution needs to be built that makes it either impossible or unprofitable. Instead of trying to get rid of someone circumventing the system, I’d much rather focus on changing the system so this kind of abuse can’t happen in the first place.
So please all try to refrain from antagonizing each other.
I got 2.5TB on one fresh node this month (freshly installed last month, 20.4). I am running only two nodes, one per ISP connection, to keep enough bandwidth free for my other operational tasks.
Is that now more on the good side? I assumed my traffic was more average, since I have nothing that specific.
Sounds like my node that also started last month, running on top 1% hardware.
To reiterate @BrightSilence’s point made above, this type of suggestion doesn’t really make sense as the network isn’t vetting a person, it is vetting a storage node. The vetting period is to determine the stability of the node relative to the network. A stable node does not mean a stable network operator and vice versa.
In addition to that, how do you propose handling one node that is having problems and fails audits to the point of suspension or disqualification? What thresholds do you set for disqualifying the SNO key as a whole, and all associated node IDs as a result? This cuts both ways.
And no, I’m not offended; I just think that people flagrantly abusing the network is not a positive thing. After re-reading your initial comment, I can see what your intentions are for bypassing the restrictions, which I still don’t agree with and still think should result in a node ban, but it does support your reasoning for starting multiple nodes and having them vetted faster. I do not think the withholding period should be modified: that would open the network up to a malicious SNO having one vetted ID, spinning up a bunch of other nodes, circumventing network restrictions, and not having a proportionate amount of STORJ withheld, allowing them to cause harm to the network.
Also, I might add that there is no benefit in antagonizing a person who openly admits breaking the ToS by trying to get his nodes banned.
There might very well be hundreds of other SNOs doing the same thing without telling anyone.
So don’t be mad at the one person who admits it and starts a discussion (even though what he does is still not right). Being mad doesn’t solve the problem of why this person does what he does (and probably lots of others do too).
Disqualifying nodes of people who speak up would set a really bad precedent. The result would simply be that people do it quietly instead. In my opinion the nodes shouldn’t be disqualified at all. But rather a system needs to be built that ensures that these nodes, despite having a different IP would still share traffic, so there is no longer an incentive to do this.
Actually… yes. I think the advantages (months of time) are worth this downside. But I can see someone preferring it the other way.
Really? I don’t know… I think they are very related. Can we at least agree there is some influence between them?
I don’t know enough about networks to really appreciate why what GhostW is doing is so wrong, but if you got two separate lines from your ISP, would that be fine? (Though I don’t know if that is technically or administratively possible either.)
If his house burns down, gets flooded, has a power cut, [insert other reason here] then all nodes will be affected and potentially several pieces of the same file.
Well @BrightSilence, there will never be an implementation that deals with this abuse effectively, so at this point I think it’s a moot conversation, at least from my end.
This guy and anyone else abusing the network have free rein to do so, but I guess hand-wringing about calling someone out is probably as effective as actually calling them out, so I will take that as my personal lesson learned from this.
Okay guys, actually I didn’t want to go into this much detail, but:
None of my nodes is hosted “at home”. All of them are hosted in datacenters where everything is redundant. In any case where I put a node onto a backup system where at least one component isn’t redundant, you can be sure that this single point of failure doesn’t exist in any of the other node setups at that geographical location.
I could make the argument that almost everybody hosting a node without full redundancy (i.e. mostly at home) is endangering the network, because if that ISP fails, then many nodes spanning (very likely) more than one class C network of IP ranges will go down. (In reality, this scenario happens more often than my datacenter going down, or all the redundant hardware in it failing.)
So we see that, at least in my case, the biggest threat to the network is an SNO deciding to leave it. And that’s the reason why I would like to take the SNO into account, both positively and negatively.
So, how could STORJ combat such “network IP whatever abuse” behaviour? Well, they could also try to limit the number of pieces sent to nodes in bigger network ranges, like class B or class A subnets (or any other CIDR). And maybe they are already doing so. However, this won’t really solve the problem, because the IP address is not a very reliable indicator of the exact location a node resides in.
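The kind of subnet-based piece limiting described here can be sketched roughly as follows. This is a minimal illustration, not Storj’s actual implementation: the function name, structure, and the idea of widening the prefix beyond /24 are all assumptions.

```python
import ipaddress
import random
from collections import defaultdict

def pick_nodes(node_ips, pieces_needed, prefix=24):
    """Pick at most one node per subnet of the given prefix length.

    A hypothetical sketch of IP-range-based node selection: with
    prefix=24 this mirrors the class C (/24) filtering discussed in
    this thread; a satellite could in principle widen the prefix
    (e.g. to /16) to also limit larger ranges.
    """
    by_subnet = defaultdict(list)
    for ip in node_ips:
        net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        by_subnet[net].append(ip)
    # One candidate per subnet, then sample as many as we can place.
    candidates = [random.choice(ips) for ips in by_subnet.values()]
    return random.sample(candidates, min(pieces_needed, len(candidates)))
```

Under this sketch, nodes that share a /24 never receive more than one piece of the same file between them, and widening the prefix collapses even more nodes into one selection bucket.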
If the SNO were known to the network, and if this came with some incentive like fewer audits until a node gets full ingress traffic, then many multi-node hosters would (I think) take this step, at least for their new nodes. STORJ could then permit a “good” SNO to receive 2 instead of 1 piece of a file across all of his nodes, or something similar. He could even specify how many real sites there are and whether a node is at one physical location or another.
I look at the success of Storj in terms of value to the customers. It makes sense that long-term reliability is far more valuable to the system than someone with a very reliable node who comes and goes. It really doesn’t matter what the reason for that long-term reliability is: quality drives, where something is hosted, the power quality, or simple luck. So as an SNO, I accept this principle, build my system accordingly, and expect it will pay off in the long run. I can see the point in not rewarding short-lived nodes, whether run by different people or the same person. Each node is a storage unit from a system perspective, and it must prove itself over time regardless of environment.
I agree with that. Each storage node can be run in a totally different (hardware) environment and has to prove its reliability independently.
However, two things may be worth considering:
- Once a node has proven to be reliable to one satellite, other satellites could take over that trust, especially if those satellites are run by the same entity. Maybe there could be a setting for the satellite owner, for example to trust the Storj satellites and adopt the trust level of their nodes.
- Also, I had my own thread suggesting that the vetting process at least not be artificially halted for full nodes. The current vetting process requires the node to hold data from the respective satellite. If a node is full, there is no such data, and therefore no vetting can take place unless the satellite can place data pieces on it.
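The stalling effect described in the second point can be illustrated with a minimal sketch. The class, names, and the audit threshold below are all hypothetical; Storj’s actual vetting criteria differ.

```python
import random

VETTING_AUDITS = 100  # illustrative threshold, not Storj's actual number

class Node:
    """Toy model of one node's standing with a single satellite."""
    def __init__(self):
        self.pieces = []            # pieces this satellite has placed here
        self.successful_audits = 0

def try_audit(node):
    """A satellite can only audit pieces it has placed on the node.

    With no pieces from this satellite (e.g. the node was already full
    when it joined), there is nothing to audit, so vetting progress
    stalls -- the situation described in the bullet above.
    """
    if not node.pieces:
        return False                # nothing to audit -> no progress
    random.choice(node.pieces)      # pick a piece to verify
    node.successful_audits += 1     # assume it verifies in this sketch
    return True

def is_vetted(node):
    return node.successful_audits >= VETTING_AUDITS
```

The point of the sketch: `is_vetted` can never become true for a node that holds zero pieces from the satellite, no matter how long it stays online.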
Can this be profitable?
By the way, I don’t think the integrity of the data is the responsibility of any single SNO; Storj takes precautions and implements a lot of redundancy that, as a side effect, reduces how often an SNO gets egress.
It’s a little confusing to go into ethical judgments, but of course it’s a very interesting matter to discuss.
No. But I already have the infrastructure, and it’s paid for by the other projects running on it - so actually yes. I would also host other projects on it, if they were as cool as STORJ.
Especially if not all facts are known.
So for me it’s clear now: I will start warming up new nodes for the new storage I will buy in a few months.
That’s also what I do. I have a small full node with 700GB sitting on my main drives, just in case a HDD dies. Then I have a fully vetted, 10 month old node to replace it. If that happens, I’ll immediately start a new node with 500GB allocated.
I think there’s nothing wrong with that.
However, all those nodes are on the same machine so basically they have all proven to be reliable. This is different if you start a node in place1 as a backup and then use it in place2 where your HDD fails. Then the node’s status doesn’t represent the reliability of place2 (or maybe even place3 that did not host a node yet). And that’s the point where the discussion gets tricky and some “ethical” arguments might be made…
I don’t think it is much different. A node gets vetted for its history of reliability. It is not a prediction of the future reliability. So I don’t see much of a difference: The node has proven to be reliable in the past, but yet to prove if it keeps being reliable in the future. If you move it to some other place no problem. Auditing keeps going and if the node fails repeatedly, it will become an unreliable node and might get disqualified.