Performance: More Than 1 Node On The Same Array Partitioning?

Literally always been that way. But ok.

This advice isn’t even based on having multiple subnets. You can run multiple nodes on multiple HDDs on the same subnet perfectly fine, and each individual HDD would see less IO load. You even get a slight benefit while one node is vetted and the other isn’t, as they receive data from different node selection pools.

IN YOUR FACE!!!
points to an unofficial third party domain
I mean, it even literally has a link that points to the official requirements…

Great job, but in whose face?

Here’s the official one: Step 1. Understand Prerequisites - Storj Docs

Absolutely, just as soon as you stop saying things that aren’t true. This is a multiuser forum and I don’t want people to be misinformed.


You wouldn’t lose 20TB unless all your HDDs failed, in which case no RAID is going to save anything. I’ve given my advice and you’re free to deviate from it. Storj has given their advice and you’re free to deviate from that. Just don’t misrepresent the official advice or what impact it would have. We’ve had lots of reasonable debates about this (this doesn’t seem to be one)… there are plenty of calculations (some from me) showing that even with the risk of losing a node, running one per HDD is more profitable, because those risks are small and you’ll easily earn the cost of that risk back by using all your HDD space.
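To make that concrete, here’s a rough back-of-the-envelope sketch of the kind of calculation I mean. Every number in it (disk count, disk size, failure rate, payout rate) is a made-up assumption for illustration, not an official Storj figure:

```python
# Hypothetical numbers purely for illustration; not Storj's official rates.
hdds = 4
hdd_size_tb = 8.0
usable_raid5_tb = (hdds - 1) * hdd_size_tb   # one disk's worth lost to parity
usable_separate_tb = hdds * hdd_size_tb      # every disk holds node data

annual_fail_rate = 0.02      # assumed per-disk failure probability per year
payout_per_tb_month = 1.5    # assumed $/TB-month, again just an assumption

# With one node per HDD, a failed disk only loses that one node's data,
# so the expected loss scales with the per-disk failure rate.
sep = usable_separate_tb * payout_per_tb_month * 12 * (1 - annual_fail_rate)
raid = usable_raid5_tb * payout_per_tb_month * 12

print(f"one node per HDD: ${sep:.2f}/yr vs RAID5: ${raid:.2f}/yr")
```

Even with the failure risk priced in, the extra usable capacity wins under these assumptions; plug in your own numbers and see where the crossover is for you.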
I’m not telling you what to do. There are plenty of reasons to still go with RAID, especially if you have other purposes for the array.

That’s ok, name calling tends to happen when your “gotcha” argument is utterly and completely destroyed.

Sure. The partitioning has barely any impact. Do whatever is convenient. Sorry for trying to point to an alternative you may not have considered.

Alright, I can expand a little. If your worry is that the head will be moving back and forth, that’s going to happen anyway, as data is pretty much accessed at random from a node’s perspective. So once you start filling up the space, that head is moving all over the place one way or the other. Separate partitions would only help a little bit with writes. Personally, I would find it convenient that if for some reason one node grows faster than the other, it doesn’t matter which one fills up the space first. I generally don’t partition unless I have to. But the impact on performance is minimal. As long as you don’t use SMR disks, you’re fine with either setup.

Most probably won’t partition separately for each node then.

It does not make sense to me to start as many nodes as there are disks on the system. Even if I could get that many subnets, the motherboard would eventually run out of PCI Express slots for network cards. What I am interested in is larger nodes for the long run, so I definitely need redundancy, too. Once vetting is done, they start to fill up pretty fast from what I see on already existing nodes.

Per wallet address, not node.
However, wallet features per node

What? What? :rofl: Not getting it.

It’s not complicated. If you have multiple nodes using the same payout address, the total income of those nodes needs to cross the threshold. So even if each individual node is below the threshold, you’ll still get paid if the total goes over it.

However, if you have some nodes with zksync enabled and some without, they will count separately. So for L1 payouts it looks at all nodes with the same payout address without zksync option. For zksync payouts it’ll look at all nodes with the same payout address with zksync enabled. (threshold is currently not applied for zksync payouts, but likely will be at some point)
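My reading of that logic, sketched out (this is an illustration of the grouping described above, not Storj’s actual payout code; addresses, amounts, and the threshold value are made up):

```python
# Nodes are grouped by (payout_address, zksync_enabled); each group's earnings
# are summed and the sum is compared against the minimum-payout threshold.
from collections import defaultdict

THRESHOLD = 10.0  # assumed minimum L1 payout in $; the real value tracks fees

nodes = [  # (payout_address, zksync_enabled, earned_this_month)
    ("0xABC", False, 4.0),
    ("0xABC", False, 7.0),  # same address, L1: 4 + 7 = 11 -> over threshold
    ("0xABC", True,  2.0),  # same address but zksync: counted separately
]

groups = defaultdict(float)
for address, zksync, earned in nodes:
    groups[(address, zksync)] += earned

for (address, zksync), total in groups.items():
    # Per the thread, the threshold currently applies only to L1 payouts.
    paid = total > 0 if zksync else total >= THRESHOLD
    kind = "zksync" if zksync else "L1"
    print(f"{address} ({kind}): ${total:.2f} -> {'paid' if paid else 'held'}")
```

So the two L1 nodes get paid together even though each is below the threshold on its own, while the zksync node is settled in its own bucket.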


Makes sense. Thanks! Using zksync does not make sense to me for now, anyway.

Really? By what witchery?! :woman_mage:

What figures do you have in mind?
I find my nodes pretty slow to fill up these days! :confounded:


They start getting about 20 times the traffic.

How many GB per month are we talking about?

About 600GB. Will check.

Wow, that’s quite a bit!
It seems weird though because it should mean that there are as many nodes getting vetted as vetted nodes due to the traffic distribution system (if I understand correctly).

Check if you have neighbors here → Neighbors


The last few months have been between 400 and 500GB per /24 subnet. I think 600GB is a bit of an overestimation.

Btw, this doesn’t mean your node will grow by that amount per month, of course, as it is hit with deletes as well.


Yes, I did that right after I started the node, and I do it periodically. Thanks!

Sure. Overestimation… At the start of the 29th day of the month (out of 31), the node has 582.16 GB of traffic.


Overestimation for sure.

There. I checked and proved it. Two more days to reach 600, but it will most probably get there by tomorrow.

Since we were discussing how fast a node filled up, I assumed you were talking about the only traffic that is relevant to that, which is ingress. You’re showing the total of ingress and egress there, which is obviously more, but also not relevant to the question at hand.


You don’t prove that by showing a screenshot of your total bandwidth tho :wink:


I don’t care. Bye, bye…