You could set them up now; you have to wait 10 months to reach 100% payout, so the sooner you start, the better:
Months 1-3: 75% of revenue is withheld, 25% is paid to the Node Operator
Months 4-6: 50% of revenue is withheld, 50% is paid to the Node Operator
Months 7-9: 25% of revenue is withheld, 75% is paid to the Node Operator
Months 10-15: 100% of Storage Node revenue is paid to the Node Operator
After Month 15: 50% of total withholdings are returned, with the remaining 50% held until the Node gracefully exits the network
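To put rough numbers on that schedule, here's a minimal sketch (the $10/month revenue figure is made up purely for illustration) of how much ends up paid versus withheld over the first 15 months:

```python
# Minimal sketch of the held-back schedule above.
# Assumes a flat $10/month of node revenue purely for illustration.

def held_fraction(month):
    """Fraction of revenue withheld in a given month of a node's life."""
    if month <= 3:
        return 0.75
    if month <= 6:
        return 0.50
    if month <= 9:
        return 0.25
    return 0.0  # months 10+ are paid out in full

monthly_revenue = 10.0  # hypothetical, in USD
held_total = 0.0
paid_total = 0.0
for month in range(1, 16):
    held = monthly_revenue * held_fraction(month)
    held_total += held
    paid_total += monthly_revenue - held

print(f"After 15 months: paid ${paid_total:.2f}, held ${held_total:.2f}")
print(f"Returned after month 15: ${held_total * 0.5:.2f} (rest kept until graceful exit)")
```

With those made-up numbers, roughly $105 is paid out and $45 withheld over the 15 months, and half of the $45 comes back after month 15.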
In my opinion you should start with all 4: assign the minimum 500 GB to 3 of them and assign all the memory to the fourth. By doing this you will have the minimum withheld, and all 4 will be ageing in the meantime.
It would make sense if the 4th node were an old node (already at 100% payout) and the 3 new nodes just waited until the 10th month passes (a node incubator). Nice strat, but wrongly used here…
But chance-wise, if no single disk is more likely to fail than the others, you're better off spreading the storage over the disks to balance the risk. And for some reason it's usually not the unused one that fails.
No, I mean that you can start all 4 of them, but give maximum capacity to only one and just 500 GB to the other 3 until the retention period ends. Once the retention period is over, you can maximize the memory on the 3 disks you left at 500 GB. This way you have the minimum deduction on those 3 disks, and in the meantime they will have passed the vetting period.
No GE
That wouldn't work? AFAIK data is split between subnets, not nodes (the /24 rule), meaning that in the first 3 months, if your subnet collects 10 TB, it gets spread across your 4 nodes at roughly 2.5 TB each, no matter their storage configuration.
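A rough sketch of that behaviour (not how the satellite actually selects nodes, just the even-split idea, with made-up numbers): ingress aimed at the /24 is shared across the nodes behind it until a node hits its allocated cap:

```python
# Rough sketch of the /24 behaviour described above: ingress is split per
# subnet, and the nodes behind the same IP share it roughly evenly until
# a node hits its allocated space. Numbers are illustrative only.

def spread_ingress(subnet_ingress_tb, caps_tb):
    """Distribute subnet ingress evenly across nodes, respecting each node's cap."""
    stored = [0.0] * len(caps_tb)
    remaining = subnet_ingress_tb
    while remaining > 1e-9:
        open_nodes = [i for i in range(len(caps_tb)) if caps_tb[i] - stored[i] > 1e-9]
        if not open_nodes:
            break  # every node is full, the rest goes elsewhere on the network
        share = remaining / len(open_nodes)
        for i in open_nodes:
            take = min(share, caps_tb[i] - stored[i])
            stored[i] += take
            remaining -= take
    return [round(s, 3) for s in stored]

print(spread_ingress(10, [16, 16, 16, 16]))      # roughly 2.5 TB per node
print(spread_ingress(10, [0.5, 0.5, 0.5, 16]))   # capped nodes stop at 0.5 TB
```

The first call shows the 10 TB landing at about 2.5 TB per node; the second shows what happens when 3 of the 4 nodes are capped at 500 GB.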
My understanding of your strat was: first you give everything to the 4th node and the first 3 nodes only get 500 GB; then after 10 months you would exit the 4th node, increase the size of the first 3 nodes, and create a new 4th node AND give the new 4th node only 500 GB. After another 10 months you would increase the new 4th node to its original capacity…
Did the strat just form out of thin air because you and I misunderstood each other? That is really funny…
LOL, it does seem like the strategy came out of thin air from a bit of misunderstanding!
I think what @Roberto was suggesting is more about minimizing the total held amount. By keeping 3 nodes at 500 GB for the first 9 months, you're basically allowing them to age without too much being held back. Once the first node is maxed out, you can then start expanding the second one, and by the time those 9 months are up, you can push all nodes to their max capacity without worrying about the held amount anymore. It's all about pacing!
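As a back-of-the-envelope check on that (assuming the held amount is just the withheld fraction of each node's revenue, and using a hypothetical storage price), a node capped at 500 GB accrues far less held amount over months 1-9 than a node that is allowed to fill a whole disk:

```python
# Back-of-the-envelope comparison of held amounts, assuming the held amount
# is simply the withheld fraction of each node's revenue. Storage price and
# stored volumes are made-up illustrative numbers.

STORAGE_PRICE = 1.5  # hypothetical $ per TB-month

def held_over_first_9_months(avg_stored_tb):
    """Total withheld over months 1-9 for a node storing avg_stored_tb on average."""
    total = 0.0
    for month in range(1, 10):
        if month <= 3:
            fraction = 0.75
        elif month <= 6:
            fraction = 0.50
        else:
            fraction = 0.25
        total += avg_stored_tb * STORAGE_PRICE * fraction
    return total

print(f"Node capped at 0.5 TB: ~${held_over_first_9_months(0.5):.2f} held")
print(f"Node averaging 8 TB:   ~${held_over_first_9_months(8.0):.2f} held")
```

With those made-up figures the capped node holds back only a few dollars while the full one holds back an order of magnitude more, which is the pacing effect being described.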
The idea of doing a GE while anticipating growth doesn't seem all that intuitive to me. You'd essentially be shrinking your total capacity when the goal is to grow, right? Plus, the network would have to rebalance with all those repairs; sounds like more hassle than it's worth!
Anyway, that’s just my two cents. Everyone’s got their own way of doing things!
The implication of this is profound: it defeats the entire meaning of the held-back amount. Because I just thought of something else: you can run 2 nodes on 1 HDD, get what I mean?
Yeah, I get what you’re saying! You could run multiple nodes on the same HDD, but when you want to expand to other disks later, you’ll have to move all that data over first—which could get a bit messy. But hey, technically, it’s possible, so I guess it’s an option if you’re willing to deal with that later on.
That said, I’m not sure I fully grasp what you mean by the implication being profound—maybe I’m missing something there? But one thing I do know is if that single disk fails, you’ll lose all your nodes at once. By using multiple disks, you spread out that risk and avoid putting all your eggs in one basket. Personally, I think spreading across different disks gives you more peace of mind and saves some headaches later. But to each their own! It’s all about what works best for your setup.
When you say “memory”, you mean “storage space”, right?
I don't understand how reducing the virtual storage capacity of the first 3 nodes would change anything. All data is spread across the 4 disks anyway (because they share the same IP).
Now I’m thinking about it too.
My 1st node filled up 13.5 TB in 3 years with great difficulty. This year I started the 2nd node with 10 TB allocated, also on a 16 TB disk, and it filled up much faster due to the test data. I bought a 3rd X16 16 TB disk for the 3rd node. However, after the test data was deleted, node 1 is currently 6.5 TB full and node 2 is 4.5 TB full. Should I start the 3rd node with 500 GB to pass the time, or not?
Yes, I mean the allocated space. The ingress is split, but at least it keeps the held amount low on the 3 × 500 GB nodes. Then, when they are vetted, you can assign them the total available space. I'm always talking about single nodes on single HDDs.