Announcement: Changes to node payout rates as of December 1st 2023 (Open for comment)

This is a problem I pointed out earlier in the thread. Node performance really won’t matter now. You can just use async writes for everything so that you win ingress races. Even though you don’t get directly paid for ingress, you need data in order to get paid for storage or egress. When it comes time to egress, who cares? You only lose 6% if you lose every single race. The audit timeout is an astounding 5 minutes (assuming the docs are still up to date) - that’s slow enough that you could use a tape drive!

Also, I thought of another way to do this in a fair manner: for nodes that win more egress races, prioritize them for ingress. If performant nodes received more traffic than laggard nodes, it would incentivize performance, and encourage more serious SNOs to stay on the network.

Guys, please stop discussing. There’s no point in wasting your time any further. You accept or you leave. Storj will do whatever they want/need to reach their goal of making this business model profitable by the end of 2023. That’s been their ultimate goal since last year, nothing else. Take a look at the old payout topic: all discussion is meaningless since they will execute their plan anyway. This is an announcement, an already decided thing. Good luck everybody.

2 Likes
  • If running a node adds constraints and changes requirements for your server – your server is not suitable for running a node. Don’t run a node.
  • Storj uptime requirements are quite lax, nothing unusual there.
  • Filewalker has no effect on a properly configured server. If it affects yours – your server is not suitable for running a node. Don’t run a node.
  • It’s very easy to rm -rf one of the nodes when you need space back. Also, plan ahead.
  • You should not be backing up a node. It’s silly. Redundancy is already built into the network; you don’t need to add another redundant layer here. It’s OK to lose some data. Also – why are you messing with your production server?! Do risky stuff in a VM in a test environment.

I agree with you. That was a bafflingly pointless debate. You either already have RAID or you don’t. That discussion was probably driven by people still confused about the project.

2 Likes

The fact you saw fit to commend someone who said they would continue to run their nodes even if they were completely unpaid is a very good indicator of the real situation here.

It was profitable; perhaps today it is not, I haven’t checked in a long time. But my Pi 3 recovered 6x the money spent over the 4 years until it stopped booting (because I procrastinated on the SD card replacement, and now I cannot be there to replace it).

Of course, if you have RAID already you can use it. We just warned people not to build RAID only for Storj: it’s not needed and has additional costs.

1 Like

using BTRFS alone is enough to cause such long response times to write requests, see

No it’s not. If something happens to part of the data, I cannot recover it and have to lose the remaining data, resulting in lots of free space and $0/month. The node may grow back to the same size it was, but it would take multiple years.
Compare this with mining - I only lose the income during the time my miner is offline or is returning invalid data. As long as the hardware works, when I fix whatever problem caused it to misbehave, I resume earning at the same rate as before almost immediately.

Because sometimes I have to do it - say, upgrading the OS. Normally, I suppose I should transfer everything from this server to another, then upgrade this server and transfer everything back. Also, sometimes stuff happens; that’s why people use backups.

1 Like

But that assumes that you’re configuring it as a write-through setup, which is pointless. Using a write-back setup instead means you’d still consistently win ingress races because you don’t have to wait for any underlying storage device, and the worst that would happen is you lose a few pieces if you have an unclean shutdown.

If something happens to part of the data – it’s not a problem at all. The network will recover, and your node will not suffer either. You can lose a few pieces here and there. I don’t see how it would be possible, though, if you have a properly set up array, scrub periodically, and don’t otherwise mess with the data manually.

If you have a catastrophic failure and lose the whole pool – as a result of, I don’t know, a lightning strike – well, multiply the alleged opportunity cost by the probability of this happening and you’ll realize this is not worth worrying about.

This is an entirely different usecase in every respect.

You never have to upgrade the OS, unless you have vetted that OS elsewhere already. Also, data and OS shall not be entangled in any way. So I don’t see how upgrading the OS can mess up your pool either.

People use backups to protect important data whose loss carries significant cost. The cost of losing storagenode data is insignificant, and multiplied by the probability of loss – negligible.

1 Like

That is not the only reason people run backups. I use my backups to shuffle things around as I tweak my systems. I run a cluster, but not all storage is shared, since I tend to upgrade storage in one system at a time. Thus I tend to move things around a bit, and the easiest way to do that is using my backups, as it means I can prove things are working before destroying the original data.

I use that a lot

You keep saying this, and I do not get it. 5% of data loss is enough to disqualify your node instantly. Why do you think it’s not a problem?

2 Likes

The backup is useless: as soon as you restore it and bring your node online, it will be disqualified for data lost since the backup was made.
So you need very frequent backups, or snapshots, or to not bring the node online after restoring from the backup.

1 Like

Forced sync is good, assuming the databases are on another filesystem. Data loss adds up; this can especially hurt relatively new nodes.

2 Likes

Please do elaborate on what exactly it is that you’re trying to say.

3 Likes

I’m talking about a few files here and there, lost due to, say, a power loss, or a couple of files rotted.

Few files.

Not a few percent of data stored! 5% is massive! 5% of 1TB is 50GB!! If someone loses 5GB of data — come on, that storage died a long time ago. 5% is too much; if this is how much a node is allowed to lose and still remain on the network — this needs to be drastically reduced. I could understand 0.05% maybe, but even that is crazy.

Yes, this precisely! (I should have read all replies before typing this comment :))

2 Likes

I guess you were perhaps trying to refer to transcendental idealism by Immanuel Kant and, in general, to the Age of Enlightenment, though I am not sure if successfully, as I have to admit that the nature of things you are outlining here is more like a fatal doctrine of Arthur Koestler. Anyway, I am not here to discuss philosophy.

Just wanted to kindly ask if you may provide me your zksync address as I am operating 12 nodes and would be very happy to send you my upcoming October’s Storj earnings so you could go and buy yourself some … ice creams.

And if you are still feeling your sharpness, please look at this from the point of view that you, as the company, asked for a credit line in the form of CPU, bandwidth and storage space, and that credit line was and still is being provided to you. Of course there is a symmetrical relation, particularly associated with Storj development efforts, so no need to feel offended during your ice cream time.

I really do hope that you got the message, as it is the second decrease in Storj pricing recently, and, what is more important, it is the second time that, IMO, your posts in particular seem to jeopardize the efforts of the Storj team … or … maybe I am wrong … and your opinions reflect the perspective of your whole company.

If you re-read your posts, I am sure you will realize that I tried to be very gentle with this post.

1 Like

I have to admit that I am reluctant to comment on your numbers, as I do not have access to your data. However, in this case, I just started to wonder if your calculations take into account the held amount that is lost by storage node operators; I guess there must be some percentage of node operators who provide you service and later, due to various reasons, just close their nodes.

I understand your hard work; however, what is the outcome of this analysis, if I may ask? Are you sure that the outcome is not a choked network and an incentive for a new open-source cutting-edge filesystem called SPSIMOFS (Storj Pricing Strategy Inspired Metadata Only File System)?

This is absolutely understandable, and I hope you will have a chance to look at the other side of the equation as well.

I hope you are aware of the fact that the amount of equipment battered by projects based on a perfect competition model, such as Chia, is growing rapidly on various auction sites around the world, for various reasons, sometimes very obvious ones.

What is the long term path for storage node operators that you are drawing here with your new strategy if I may ask? What are the key competencies that you are trying to build for your company with this new strategy? How does this new strategy relate to your key success factors and to your competitive advantage long term if I may ask?

But this is not true, the effective expansion rate is lower.

According to Grafana, the median number of pieces stored is usually around 65 or so. It’s a little higher right now, for some reason.

Now, assuming the mean is close to the median, the expansion rate is around 65/29 ≈ 2.2 or so.

With a little tuning it should be possible to get this down to around 2 without risk to customer data, although I have to admit I haven’t done this calculation.
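For what it’s worth, the back-of-the-envelope arithmetic above is easy to sketch. Note that the 29-piece reconstruction threshold and the ~65-piece median are the figures quoted in this thread, not official numbers:

```python
# Back-of-the-envelope expansion-rate estimate, using figures quoted in
# this thread (29 pieces needed to reconstruct a segment, ~65 stored).
pieces_required = 29        # Reed-Solomon reconstruction threshold
pieces_stored_median = 65   # median pieces stored, per the Grafana dashboard

expansion_rate = pieces_stored_median / pieces_required
print(f"effective expansion rate ~ {expansion_rate:.2f}")  # ~ 2.24
```

Getting the rate "down to around 2" would then mean storing roughly 58 pieces per segment on average, under the same assumptions.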

1 Like

Not forced async, no, but the default. As I understand it, the database is written to frequently, but commits/flushes are not as frequent. Remounting the filesystem as forced sync results in a lot of writes, primarily from the db. I have not tried this in a while, though, so maybe things have changed. Maybe the node writes files in a less efficient manner (small buffers etc).

No, 864 shutdowns in total. Data loss adds up. If you lose, say, 1GB, the 1GB loss will stay with the node forever (the satellite does not remove it from the database once the node fails the audit - it can request the same piece again in the future). Of course, 864 unexpected shutdowns is still a lot, but why make the problem worse?
Sure, if my node got so much ingress that it could not keep up, I might be tempted to do some kind of optimization like this (though it would probably be replacing the SATA SLOG with an NVMe one, since that would mean the node was expanding rapidly and it would probably pay for itself). Not with ~4mbps of ingress.

Depends on how it happens. If using a single drive, a bad sector in the wrong place (filesystem metadata) can make multiple files disappear. IIRC, on NTFS with a 64K cluster size, a single bad sector in the MFT makes 64 files disappear.
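For illustration, here is the arithmetic presumably behind that 64-file figure. It assumes the standard 1 KiB NTFS MFT record size, and that an unreadable sector effectively takes out the whole 64 KiB cluster of the MFT containing it:

```python
# Sketch of the MFT arithmetic above. Assumptions: NTFS file records are
# 1 KiB each (the NTFS default), and an I/O error makes the whole
# 64 KiB cluster containing the bad sector unreadable.
MFT_RECORD_SIZE = 1024       # bytes per NTFS file record
CLUSTER_SIZE = 64 * 1024     # bytes, the 64K cluster size mentioned above

records_per_cluster = CLUSTER_SIZE // MFT_RECORD_SIZE
print(records_per_cluster)   # 64 file records gone with one bad cluster
```

With the default 4K cluster size the same arithmetic would give 4 records per cluster, which is why the cluster size matters here.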

Come on, who has a node with that cluster size?