Enterprise SNO Members

Forgive me if this has already been answered, but has Storj/Tardigrade considered working with Enterprise SNOs who can offer storage space in a datacenter environment with cooling, redundant power and redundant internet? You could certify the SNO to ensure they meet industry standards, and in return they could receive greater benefits, such as less withholding of Storj coins or a higher payout to cover expenses.

Just food for thought.

I know the idea is to be decentralized, and that can still be achieved by having multiple SNOs in multiple datacenters.

I think they made the /24 IP limit specifically to discourage people running nodes in datacenters.

@Pentium100 I’m new to this /24 limit. Are you saying the IP has to be in a subnet smaller than /24?

If you could share a link, I'd appreciate it; I'm looking for a reference.

If there are multiple nodes in the same /24 subnet, they are treated as one big node for storing data. Basically, if you have 10 nodes in the same subnet, each will get 10% of the traffic that a single node in that subnet would get.
https://storj.io/blog/2019/06/ip-filtering-keeps-data-distributed/
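
Roughly, the selection works per subnet rather than per node. Here is a minimal sketch of that idea (the function names and the two-step random pick are just an illustration, not the satellite's actual selection code):

```python
import ipaddress
import random

def subnet_of(ip: str) -> str:
    """Return the /24 network an IP belongs to."""
    return str(ipaddress.ip_network(f"{ip}/24", strict=False))

def pick_node(nodes: dict[str, str]) -> str:
    """Pick a node for one piece: first pick a /24 subnet, then a node in it.
    `nodes` maps node IDs to public IPs."""
    by_subnet: dict[str, list[str]] = {}
    for node_id, ip in nodes.items():
        by_subnet.setdefault(subnet_of(ip), []).append(node_id)
    subnet = random.choice(list(by_subnet))
    return random.choice(by_subnet[subnet])

# 10 nodes behind one /24, 1 node alone on another /24:
nodes = {f"node{i}": f"203.0.113.{i}" for i in range(10)}
nodes["lone"] = "198.51.100.7"
picks = [pick_node(nodes) for _ in range(100_000)]
print(picks.count("lone") / len(picks))   # ~0.5  (one full subnet's share)
print(picks.count("node0") / len(picks))  # ~0.05 (the other share split 10 ways)
```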

Ok, so by this logic, if I have a /20 public IP subnet, the network will only treat it as if I have 16 hosts (one per /24) instead of 50? I don’t think that’s so bad though.

Right now, yes. Each /24 subnet gets you the traffic of one node, no matter how many nodes are in that subnet.
The difference between one big node and multiple nodes in the same subnet is that multiple nodes get vetted more slowly (though you could start one node, wait until it’s vetted, and then start the second one), and if one node gets disqualified, you only lose the held amount for that node.
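
To put numbers on the /20 example: a /20 contains 2^(24-20) = 16 /24 subnets, so that range gets roughly 16 nodes’ worth of traffic no matter how many hosts you run in it. Quick check with Python’s standard ipaddress module:

```python
import ipaddress

def count_24s(prefix: str) -> int:
    """Number of distinct /24 subnets inside a prefix."""
    return len(list(ipaddress.ip_network(prefix).subnets(new_prefix=24)))

print(count_24s("203.0.113.0/24"))  # 1
print(count_24s("198.51.0.0/20"))   # 16 -> about 16 nodes' worth of traffic
```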

I have many TB of unused space on my SANs and was looking to put it to use; not for profitability, just to get some money back for it. What size LUNs would you recommend per node? I’ve started one with 9TB and it’s taken almost 2 months to fill the first 1TB (it’s only at 827GB).

I run one node, currently at ~16TB of data. I just increase the virtual disk size once it gets near full.
This is my traffic graph:

Most of the incoming data is test data (meaning Storj pays for it out of their own pocket); very little is from actual customers.

My concern would be the stability of the OS. If I have a large node and it crashes, that’s months of work down the drain. But if I have multiple small nodes and one crashes, I don’t lose as much.

On the other hand, having lots of nodes (without more traffic) means more effort monitoring and updating them, so it’s probably more likely that one of them will fail. And if I keep them all on the same hardware, a hardware failure could take them all out just as easily as one big node.

It’s a tradeoff; I do not think there is a single correct answer here. Well, unless somebody else in my /24 has a node, in which case having more nodes means more traffic (as I would get a higher percentage of the subnet’s traffic).

That’s another good point. If you run more nodes and others in your /24 also have a node, you get a larger share of the “pie”.

On the other hand, when you start a node, it has to get vetted, and until then it receives very little data. This usually takes about a month for a single node, but if you started 4 nodes at the same time, it would take about 4 months (because the nodes split the data between them).
So it is recommended to start one node, wait until it gets vetted, and then start a new one. This way 4 nodes still take 4 months in total, but you will have some vetted nodes after the first month.
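
A toy model of that reasoning (this just encodes the claim above that vetting needs a fixed amount of data and that unvetted nodes in a subnet split the incoming stream; the numbers are made up, not Storj’s actual vetting criteria):

```python
# "1.0" of vetting data vets a node; the whole subnet receives 1.0 per month.
VETTING_DATA = 1.0
RATE = 1.0

def months_to_vet_all(start_months: list[float]) -> float:
    """Simulate until every node started at the given month offsets is vetted."""
    received = {i: 0.0 for i in range(len(start_months))}
    t, dt = 0.0, 0.01
    while any(v < VETTING_DATA for v in received.values()):
        active = [i for i in received
                  if start_months[i] <= t and received[i] < VETTING_DATA]
        for i in active:
            received[i] += RATE * dt / len(active)  # unvetted nodes split the data
        t += dt
    return round(t, 1)

print(months_to_vet_all([0, 0, 0, 0]))  # all at once -> ~4.0 months until all are vetted
print(months_to_vet_all([0, 1, 2, 3]))  # staggered   -> still ~4.0 in total,
                                        # but the first node is vetted after month 1
```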

On the …mmm… third hand, this staggered approach means that the held back percentage will be higher for the newer nodes.
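
For reference, the held-back schedule (as I understand it at the time of writing; the exact numbers are in the SNO terms and may change) looks roughly like this, which is why a freshly started node sits in the highest bracket while the older ones have already moved down:

```python
def held_back_percent(node_age_months: int) -> int:
    """Rough held-back percentage by node age.
    Schedule as I remember it; check the current SNO terms."""
    if node_age_months <= 3:
        return 75
    if node_age_months <= 6:
        return 50
    if node_age_months <= 9:
        return 25
    return 0  # nothing held from month 10 on; part of the accumulated
              # held amount is returned to the operator later

for month in (1, 4, 8, 12):
    print(month, held_back_percent(month), "% held")
```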

4 nodes will only take a little longer than 1 node. Even a node that holds only a single piece should get 1 audit every 12 hours. If it holds 2 pieces, it still gets only 1 audit every 12 hours.
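
If that is right, the vetting time is governed by the audit rate rather than by how the data is split between nodes. Back-of-the-envelope (the ~100-successful-audits threshold is from memory, so treat it as an assumption):

```python
AUDITS_NEEDED = 100        # successful audits per satellite to get vetted (assumed)
AUDIT_INTERVAL_HOURS = 12  # roughly one audit per node every 12 hours (as claimed above)

days_to_vet = AUDITS_NEEDED * AUDIT_INTERVAL_HOURS / 24
print(days_to_vet)  # 50.0 days, more or less regardless of how many nodes you start
```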

Really? I remember reading that vetting multiple nodes at once can take longer. I guess that changed after I read about it.

Yes, doesn’t vetting depend on the amount of data stored? More nodes on one /24 subnet means less data stored on each individual node. I remember seeing audit numbers proportional to the amount of data stored on my nodes.

Similar idea from me.