Splitting storage into multiple nodes vs a single large node

Hi, are people who operate the same amount of storage split across multiple nodes at an advantage over those who operate it as a single large node, since they will get assigned data more quickly? Or am I overlooking something here? Basically, if you have 10 TB and split it into 10x 1 TB (disregarding the 10% you should leave free), shouldn't you get data allocated 10x faster at first?

You won't get more data, but you do spread your eggs between different nodes, so if one node gets disqualified because of some error you won't lose everything. I am operating 25 nodes and have lost some of them to mistakes of my own making. If it had all been one big node, I would have lost all the data.


So how is data allocated? Reading through some docs, it sounds like it's randomized, with the chance being influenced by reputation. So if I have one node that gets allocated 10 GB a day and then I create another node, won't I effectively get 20 GB per day? Or is the amount of data allocated per identity rather than per node, so both nodes would get 5 GB per day each?
And if that were the case, what is preventing people from just getting 10 identities and creating a separate node for each?
I'm assuming, of course, that none of the nodes are bottlenecked at all.

No, the data will be split across the nodes you have. If you run 10 nodes, each node will get 1/10th of what a single node would get. Running multiple nodes is only useful if you want to share space on multiple HDDs, which is why @Vadim is doing it too.
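To make that concrete, here is a minimal sketch (not Storj's actual selection code) assuming what the replies describe: pieces are handed out per location, so several nodes behind the same external IP split the share that a single node would get. The subnet strings, node names, and segment count below are made up purely for illustration.

```python
import random
from collections import Counter

# Hypothetical setup: nodes grouped by the subnet of their external IP.
# Operator A runs one big node; operator B runs ten small nodes behind
# a single IP, so they all share one subnet.
nodes_by_subnet = {
    "203.0.113.0/24": ["big-node"],
    "198.51.100.0/24": [f"small-{i}" for i in range(10)],
}

ingress = Counter()
SEGMENTS = 100_000  # number of simulated piece uploads

for _ in range(SEGMENTS):
    # Pick one subnet at random, then one node inside it: at most one
    # node per location receives the piece, as described above.
    subnet = random.choice(list(nodes_by_subnet))
    node = random.choice(nodes_by_subnet[subnet])
    ingress[node] += 1

total_a = ingress["big-node"]
total_b = sum(v for n, v in ingress.items() if n.startswith("small-"))
print("operator A total:", total_a)  # roughly 50,000
print("operator B total:", total_b)  # roughly 50,000, split ten ways
```

Under these assumptions, operator A's single node and operator B's ten nodes end up with about the same total ingress; B's is just spread across ten disks.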


I understand, but I guess there is nothing preventing SNOs from just setting up multiple identities and then getting more data assigned?

Each node must have its own identity; there is no other way. But the data spread is filtered by your external IP.
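For illustration, here is a tiny sketch of what "filtering by external IP" could look like, assuming (my assumption, not stated above) that the /24 subnet is the granularity used: every identity behind the same subnet collapses to the same key, so extra identities on one connection do not earn extra traffic.

```python
import ipaddress

def subnet_key(external_ip: str) -> str:
    """Collapse an external IPv4 address to its /24 network,
    the granularity this sketch assumes the filtering uses."""
    return str(ipaddress.ip_network(f"{external_ip}/24", strict=False))

# Two identities behind the same connection map to the same key,
# so they count as one location for piece placement.
print(subnet_key("203.0.113.7"))    # 203.0.113.0/24
print(subnet_key("203.0.113.200"))  # 203.0.113.0/24  (same subnet)
print(subnet_key("198.51.100.5"))   # 198.51.100.0/24 (different subnet)
```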

Yes there is: they would not get more traffic if they did that, so there is no advantage to it. That's what we've been saying.