Update on Storage node payouts

This is great news and I did not expect the team to be so thoughtful! Thanks! I think keeping the storage payout steady and reducing egress fees is a great step for anyone who is running a node at home. Traffic is free for all home connections I know of, and it’s the electricity consumption that is a steady cost for us. The $1.50/TB pretty much covers my electricity costs, and everything I get for egress is profit. Also, I am obviously still using my hardware for other purposes (e.g. streaming home entertainment), so Storj is effectively giving me free electricity for my server. This is enough incentive for me to keep it running even without any egress fees. Please don’t overdo it on the synthetic load; rather, save your tokens for a longer runway to attract more paying customers!

7 Likes

Will new node registration/creation be limited as well?

Old nodes (I have one that is about 3 years old) hold a lot of test data. Will the payout from synthetic data be different for them versus a new node, or will the amount be almost the same?

Is it worth starting new nodes in those cases to maximize payouts during the transition?

Smaller nodes will obviously have smaller payments, so less of a margin. Which is specifically why I said
“older nodes on smaller drives” (that are full). There are fewer options there: getting greater efficiency means replacing the drive or adding another, larger one.

At a minimum, maybe Storj could impose the new payment strategy only on newly joining nodes?

I think everybody with full nodes should be happy to get rid of test data; they will benefit a lot from the new data that replaces it: more egress than old data generates, and thus increased income. I don’t understand why there is so much complaining…
Half-full nodes benefit either way: keep the test data until it is deleted and collect the rewards, or free up space for new customer data and delay an HDD upgrade. Right now I’m in the second position; in 2 months I have to buy 2-3 new drives. If the test data is gone, I will delay the purchase; but I like the increased income too.

I will point out that income from egress is a function of TWO inputs.

  1. The amount of egress.
  2. The rate paid.

Since we know the rate is going to be cut, it is entirely possible that even with increased egress the NET payment still ends up lower than current levels. This particularly applies to smaller drives. Do you really think a 1TB drive that is 20 months old and currently full would earn more after these changes?
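To make that concrete, here is a minimal sketch in Python. The rates and volumes are made-up placeholders for illustration, not Storj’s actual numbers; the point is only that a rate cut can outweigh an egress increase:

```python
# Hypothetical illustration: net egress income = egress volume * rate paid.
# All numbers below are placeholders, not actual Storj rates or traffic.

def egress_income(egress_tb: float, rate_per_tb: float) -> float:
    """Monthly egress income in dollars."""
    return egress_tb * rate_per_tb

old = egress_income(egress_tb=0.10, rate_per_tb=20.0)  # old rate, old traffic
new = egress_income(egress_tb=0.15, rate_per_tb=10.0)  # 50% more egress at half the rate

print(f"old: ${old:.2f}/month, new: ${new:.2f}/month")  # old: $2.00, new: $1.50
```

Even with 50% more egress, halving the rate still leaves the net payment lower in this example.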

I get that - I just wish Storj would actually come out and say that up front. Maybe it is an Australian thing, but generally we hate obtuse, avoidant language.

What I expected them to say was language like: “We understand the amount smaller node operators will earn is likely to drop significantly, with no way to increase earnings without adding additional hardware or capacity.” and “We expect the minimum node size that is economic to operate will increase as we make these changes.” To me this would be far more open and upfront about the coming changes.

I am looking at doing exactly the reverse. I plan to replace a current 3TB full node with a 6TB drive, starting from scratch. No, I won’t be migrating or doing GE. Yes, there is logic in this: the drive has buckets of bad sectors, so migration is out of the question, and it would fail GE as well, so there is no point trying. This is in my Microserver and the other slots are full, so to increase available space I need to pull an existing drive. I will try to minimise costs, however, and buy a used drive.

My current pending sector count on this drive:
Current Pending Sector 65495

The only reason I think this node is still alive is precisely because it has mainly test data that is never touched. I have been expecting this node to fail for the last 6 months. lol Somehow it still lives - for now.

In Germany we like the more direct approach too.

2 Likes

This is exactly the kind of obtuse language I am talking about - come out and say it clearly: Smaller nodes will not be able to deal with the new economics.

1 Like

There are two ways to subsidize the nodes with test data - store more data or access it more. Storing more test data won’t help the nodes that are currently full, but increasing the egress may be bad for nodes with slow internet.
I wonder how much more the test data is going to grow before it is replaced by customer data (and the node size levels off). I do not have a lot more free space in the pool (a few TB at most) and will need to buy 6 more drives rather soon. It would suck if the day I add them to the pool is the day Storj starts deleting test data.

1 Like

This is very soft control. If there are 3-8K extra nodes in the system, why let new ones in?
From another perspective: if I GE my node from the test satellites, my data will be transferred to new nodes, which cannot GE from the test satellites for a year. The new nodes will have even lower income, due to the reduced payout plus the withheld %.
Or is there a way to start a new node and exclude any test satellite data from day 1?

The expectation would be that you would only GE the test satellite that is no longer profitable for your node to participate in, and not the remaining satellites, which still pay higher rates for egress. So there should not be a need to start a new node.
However, to answer your question: yes, you can specify which satellites any node, old or new, connects to, in order to exclude the satellites you do not want.

I have no idea what kind of problem you have with the information I have given you. If you want to make a call on behalf of other nodes then go for it, but don’t ask me to join.

I would not buy any new drives at the moment. There is just too much movement in the system right now. I will wait until the final payout rate has been found before buying new drives.

Yes it is possible to untrust a satellite from day 1. What I don’t know is what the next round of synthetic load is going to look like. I think the chances are high that one of the test satellites will be used for that. So keep that in mind before calling graceful exit too early.

3 Likes

There is another aspect to very big HDDs. Since we have a lot, just a LOT, of small files, big HDDs get very slow simply because of seek time. Small HDDs will start to win more races.

Thank you for this info; I did not know about this option.
When you wrote about the next round of synthetic load, I was thinking about the increased load since late February/early March through EU1.
Is that real customer data or synthetic?
Is synthetic data connected only to the test satellites?

Well, I think this is not true. Please compare the WD Red 10 TB vs 3TB for example:
10TB vs 3TB

“Looking at our 4k random performance, the WD Red 10TB hit 571 IOPS write and 314 IOPS read in CIFS and 580 IOPS write and 575 IOPS read in iSCSI.”
vs
“In our 4K random transfer test, the Western Digital Red came in at the bottom of the pack for read speed, measuring 45 IOPS read and 112 IOPS write.”

2 Likes

Seek time does not depend on data density, only on RPM and other mechanical considerations limiting the arm movement speed.
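For a rough sense of the mechanics: average rotational latency is half a revolution, i.e. 60 / (2 × RPM) seconds, so it depends only on RPM, not capacity. A quick sketch (the seek times below are rough illustrative figures, not specs of the drives discussed above):

```python
# Average rotational latency is half a revolution: 60 / (2 * rpm) seconds.
# Seek time is a mechanical property of the arm; the averages below are
# rough, illustrative figures, not measurements of any specific drive.

def avg_access_ms(rpm: int, avg_seek_ms: float) -> float:
    rotational_latency_ms = 60_000 / (2 * rpm)
    return avg_seek_ms + rotational_latency_ms

for rpm, seek in [(5400, 9.0), (7200, 8.5)]:
    print(f"{rpm} RPM: ~{avg_access_ms(rpm, seek):.1f} ms per random access")
```

By this arithmetic a platter of any capacity at the same RPM pays the same rotational penalty per random access; density differences show up elsewhere (sequential throughput, caching, firmware).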

I’m with you on the obtuse language in general, but not here. @littleskunk is pretty straightforward in communication and in this case there is the energy cost variable to consider. Smaller nodes can work just fine if you have low energy costs or use hardware that is always online anyway. It’s not as black and white as you suggest.

Looking at the statements made so far I don’t expect them to ever remove test data faster than new data comes in. So at worst it will stop growing for a while.

Because they also need to know whether the new payouts will stop people from joining. Can’t find that out if you close the doors.

You can either blacklist the test satellites or make your own trust list to use. Yes, this is possible in config.yaml settings.
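For reference, this is roughly what it might look like in config.yaml. I believe the relevant keys are storage2.trust.exclusions and storage2.trust.sources, but double-check the exact key names, syntax, and satellite addresses against the current storagenode docs before relying on this:

```yaml
# A sketch only - verify exact keys and syntax in the storagenode docs.
# Exclude a satellite by its full node URL (id@address:port):
storage2.trust.exclusions:
  - <satellite-id>@<satellite-address>:7777

# Or replace the default trust list entirely with your own:
# storage2.trust.sources:
#   - <url-or-file-listing-the-satellites-you-trust>
```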

My rule is that I keep expanding as long as the cost of the expansion is covered by net income since the last expansion. It’s a little easier with Synology Hybrid RAID, though, since I can buy and expand with disks one by one instead of having to upgrade the entire array. I just bought a new 20TB and at the moment it looks like it will take only 2-3 months of pay before I can buy the next one. We’ll see how much that changes with the new payouts.
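For anyone wanting to apply the same rule, the break-even check is trivial. Both numbers below are placeholders for illustration, not my actual figures:

```python
# Hypothetical payback check: months until a new drive pays for itself.
# Both inputs are placeholders, not real prices or real node income.
drive_cost_usd = 300.0             # what the new drive cost
net_income_per_month_usd = 120.0   # net node income since the last expansion

months_to_payback = drive_cost_usd / net_income_per_month_usd
print(f"payback in ~{months_to_payback:.1f} months")  # ~2.5 months
```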