True. I would try a system as follows:
For each node, a potential repair cost is calculated individually, and a percentage of its earnings is held back until that amount is reached. This way the repair cost for each node would be fully covered. If a node changes size, its repair cost gets recalculated.
If the node does a graceful exit, the held-back repair cost can be paid back to it. This would be an incentive not to leave the network without a graceful exit.
And I would also try to find a way to increase payouts for nodes as they get older, to incentivize reliability and make them stay on the network. So let's say after 15 months you earn the base rate + 5%, after 24 months the base rate + 10%, or something like this.
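The proposed scheme above can be sketched in a few lines. This is only an illustration of the idea; the holdback rate, repair-cost target, and bonus tiers are assumptions taken from the proposal, not actual Storj parameters, and `monthly_payout` is a hypothetical helper:

```python
# Sketch of the proposal: hold back a share of earnings until the node's
# estimated repair cost is covered, and pay an age bonus on top.
# All numbers here are illustrative assumptions, not Storj's real values.

def monthly_payout(gross, collateral, repair_cost, age_months,
                   holdback_rate=0.25):
    """Return (paid_out, new_collateral) for one month."""
    # Hold back a percentage of earnings until the repair-cost target is met.
    shortfall = max(repair_cost - collateral, 0.0)
    held = min(gross * holdback_rate, shortfall)
    paid = gross - held
    # Age bonus as proposed: +5% after 15 months, +10% after 24 months.
    if age_months >= 24:
        paid *= 1.10
    elif age_months >= 15:
        paid *= 1.05
    return paid, collateral + held

# A 16-month-old node earning $20 gross with nothing collateralized yet:
paid, coll = monthly_payout(gross=20.0, collateral=0.0,
                            repair_cost=24.0, age_months=16)
```

Once the collateral target is reached, `shortfall` is zero and the node keeps its full (bonus-adjusted) earnings.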
This is not precise. You WILL receive half (50%) of the held-back amount after 15 months in the network.
Ok, I agree: you won't receive 50% of the held-back amount if you stay on the network and don't do a graceful exit.
Maybe it's time for Storj to switch and allow new nodes to reach 100% payout after 6 months, if it really takes that long to pay.
Storj wants nodes, and at least it doesn't ban you even if you have 3–4 nodes. I understand very well that users who run nodes want their money month to month, just as Storj wants our nodes kept online; in the end, Storj lives off our nodes.
If I have several small nodes that are 7–8 months old, 25% is retained on each of them. If that 25% weren't retained across 3–4 nodes, payments would be easier.
Actually, the repair cost is huge at the moment, as @Alexey mentioned in another thread a couple of months ago, because the pieces are repaired on rented servers with high egress fees.
I can tell you that we’re way better off than if that repair cost was held back from our earnings.
I looked for Alexey’s post where he breaks down the cost of repair but couldn’t find it.
Looking at the calculation, it does not look so high to me. With $2,595.68 you can repair data from 45 nodes; that is $57.68 of repair per node. Of course it is much more than $16, but it is not astronomically high.
It even looks like, in theory, a node can go down with all its data and not trigger a single repair job at all. I have not really thought this through, but I wonder if it would be safe to assume that a single node's repair cost is only around 1/30th of the total repair cost for the data it is carrying, or, based on space, $12 per TB?
It seems that the satellites are currently on Google Cloud, which means you need to take the other cost number ($31,320.00) that @Alexey calculated. This amounts to roughly $700 per 7.2TB node, or $100 per TB. That's a lot and obviously can't be covered by the held-back amount.
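The per-node and per-TB figures above can be checked with a quick back-of-the-envelope calculation. The two totals and the 45-node / 7.2TB split are the numbers quoted in this thread (attributed to @Alexey's calculation); they are assumptions here, not independently verified pricing:

```python
# Back-of-the-envelope check of the repair-cost figures quoted above.
nodes = 45
tb_per_node = 7.2

low_total = 2595.68     # the lower repair-cost total quoted in the thread
gcloud_total = 31320.00  # the Google Cloud-priced total quoted above

per_node_low = low_total / nodes            # ≈ $57.68 per node
per_node_gcloud = gcloud_total / nodes      # ≈ $696 per node, roughly $700
per_tb_gcloud = per_node_gcloud / tb_per_node  # ≈ $96.67, roughly $100/TB
```

So both rounded figures in the post ($700 per node, $100 per TB) follow from the quoted total.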
Google seems really expensive compared to Hetzner, that's right; probably expensive compared to any other provider. But maybe others would do as well, like OVH? They seem to have a huge network too. Of course nothing really compares to Google…
The repair workers are hosted on Hetzner, IIRC.
“It seems that the satellites are currently on Google Cloud , which means you need to take the other cost number ($31,320.00) that @Alexey has calculated. This amounts to roughly $700 per 7.2TB node or $100 per TB. That’s a lot and obviously can’t be covered by the held back amount.”
So surely there is a risk that the proposed changes would merely shift costs from transfer payments to rebuilding fees, if they result in significantly increased SNO churn? It sure would be interesting to see the cost dynamics going on within Storj at the moment.
Has anyone ever asked for this to be worked out on the SNO side as a feature request, so there would be no need for rented servers? You would just need to install, for instance, a plugin to do the repair jobs for Storj.
Interesting, it would be nice to see this happen. Any ETA?
One thing I want to add: yes, gaming is not the same as handling sensitive data. You can donate CPU cores to help stabilize ARMA 3 servers. For Storj, you could run a repair node off an SSD/M.2 drive with a good CPU, or X amount of cores, and handle some of the computational work; the repair node would only ever see 1s and 0s and never actually hold a decrypted file. Or you could even consider using smart contracts through a blockchain and a checksum, so the satellite only needs to check the file's MD5, etc.
“Anyone ever asked this to be worked out on the SNO side as a feature request so no need for rented servers? You just need to install for instance a plugin to do the repair jobs for StorJ?”
Couldn't they create their own virtual appliance, like an OVA file, that we run for them?
Well, since they mentioned it in the white paper, they must have gone over it. I am just adding my 2 cents, as you are adding yours, but I do not know the finer details or what kind of requirements it all has to comply with. It would be great if we could do repair jobs, though; it would give SNOs a nice extra income. But it has to be secure, so it does not harm Storj's reputation.
Based on that, I am even wondering whether repair costs could be calculated not just per node capacity, but based on the real amount of data each node is holding, because that data is already available.
So basically, if a node has 4TB of capacity but only 2TB is filled in a given month, the repair cost would be calculated based on 2TB. Using the figures above, this would be 2 × $12 = $24.
If the node already has $24 in its collateral account, it would receive a full payout. If not, a percentage would be deducted and put into its collateral account.
Something like this would be a very dynamic solution based on real data and real costs.
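The dynamic idea above can be sketched as a small settlement function. The $12/TB figure and the 25% deduction rate are assumptions taken from earlier in the thread, and `settle` is a hypothetical helper, not anything Storj actually implements:

```python
# Sketch of the dynamic collateral idea: the target is recomputed each month
# from the data actually stored, at an assumed repair cost of $12 per TB.
REPAIR_COST_PER_TB = 12.0  # assumed figure from earlier in the thread

def collateral_target(used_tb):
    return used_tb * REPAIR_COST_PER_TB

def settle(gross, collateral, used_tb, holdback_rate=0.25):
    """Deduct into collateral only while below the current target."""
    shortfall = max(collateral_target(used_tb) - collateral, 0.0)
    held = min(gross * holdback_rate, shortfall)
    return gross - held, collateral + held

# The 4TB node from above with 2TB filled: target is 2 * $12 = $24.
# Already fully collateralized, so the $10 gross is paid out in full.
paid, coll = settle(gross=10.0, collateral=24.0, used_tb=2.0)
```

If the same node later shrinks to 1TB of stored data, the target drops to $12 and no further deductions occur, which is what makes the scheme track real data rather than capacity.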
The decision to stop the payouts for most storage nodes could be made in a few hours, but an official response is still not there after more than 7 days…
That's exactly what I am complaining about…
Most likely an instruction was passed down from senior management due to grave concern over the transfer costs; that would explain the quick implementation.
The held-back percentage would need to be close to 100% while the node is growing. I have a node with 6TB of data and only $42 held back (it has always had free space to grow).
If you assume a node makes $2.50 a month per TB of data stored, it wouldn't earn anything for about 5 months (100% held back) in order to reach $12 of held-back amount per TB. That's quite a bit worse than the way it's actually done.
The calculations aren’t accurate because the node fills up slowly.
Another way of putting it: a node that gets 500GB of combined ingress per month (assuming no deletes) would need $6 held back every month just to compensate for the increase in stored data.
To conclude, I think we are better off leaving things as they are and giving the team time to work on more important stuff (like delegating repairs to the storage nodes), even more so because the held amount is probably way less than $12/TB stored on everyone's nodes.