Announcement: Changes to node payout rates as of December 1st 2023 (Open for comment)

I appreciate you, arrogantrabbit. In my opinion, Storj Labs does not want to strip node operators of the ability to earn a small profit. The idea from the get-go was for someone to use devices they already have and make a few bucks in doing so, not to have those few bucks go entirely toward covering costs. There has to be an incentive to participate. I'm not saying these numbers are the right ones, but for now they are the numbers we are using.

1 Like

To put it simply: unbelievable.

1 Like

Not so unbelievable for those who have been here a while. There is a varying range of opinions in this community, and there has been since the start of it all. Welcome to the debate.

In my opinion, Storj payouts should cover energy costs for running an HDD and the cost of the HDD over its lifetime, plus a modest profit on top. This would allow existing node operators to eventually expand if they’re serious about it.
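
To put rough numbers on that idea (a back-of-the-envelope sketch; the drive price, power draw, lifetime and electricity rate below are my own assumptions, and the 1.5 USD/TB storage rate is the figure discussed later in this thread):

```python
# Break-even sketch for a single HDD: energy + amortized purchase cost
# vs. storage payout. All input numbers are assumptions for illustration.

HDD_PRICE_USD = 180.0            # assumed price of a 12 TB drive
HDD_CAPACITY_TB = 12.0
HDD_LIFETIME_YEARS = 5.0         # assumed useful lifetime
HDD_POWER_W = 7.0                # assumed average draw of a spinning drive
ELECTRICITY_USD_PER_KWH = 0.30   # assumed regional energy price
PAYOUT_USD_PER_TB_MONTH = 1.5    # storage rate mentioned in this thread

hours_per_month = 24 * 30
energy_cost = HDD_POWER_W / 1000 * hours_per_month * ELECTRICITY_USD_PER_KWH
amortized_hdd = HDD_PRICE_USD / (HDD_LIFETIME_YEARS * 12)
monthly_cost = energy_cost + amortized_hdd

# Revenue if the drive were completely full (egress payouts ignored)
monthly_revenue = HDD_CAPACITY_TB * PAYOUT_USD_PER_TB_MONTH

print(f"energy ${energy_cost:.2f} + amortization ${amortized_hdd:.2f} "
      f"= ${monthly_cost:.2f}/month vs ${monthly_revenue:.2f}/month if full")
```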

Maybe it should cover the running costs of ultra-low-power systems like a Raspberry Pi too. But it should absolutely not cover the running costs of conventional servers or even NAS hardware, nor the purchase costs of that hardware.

In the end, Storj can only compete with datacenter storage if it actually runs on underused hardware. It runs on untrusted nodes, which inherently requires more redundancy, and there is an additional middleman looking to eventually make a profit. That cost has to come from somewhere, and it can't come from customer pricing or there would be no customers. So node operators simply can't expect to be able to afford running datacenter-like operations on Storj payouts, nor are they expected to. It's as simple as that.
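
As a side note on the redundancy cost: with erasure coding, every customer byte is expanded into more bytes stored (and paid for) across nodes. The k and n values below are placeholders to illustrate the expansion factor, not necessarily Storj's actual parameters:

```python
# Erasure-coding expansion: a segment is split into k pieces that suffice to
# reconstruct it, but n pieces are stored for safety on untrusted nodes.
# k and n here are illustrative placeholders, not confirmed Storj settings.

k = 29   # pieces needed to reconstruct (assumed)
n = 80   # pieces actually stored (assumed)

expansion = n / k
customer_tb = 10.0
print(f"expansion factor ~{expansion:.2f}x: "
      f"{customer_tb:.0f} TB of customer data -> ~{customer_tb * expansion:.1f} TB stored on nodes")
```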

It’s a tightrope and it’s hard to find the right balance. But I believe it to be possible, and for now it seems somewhat balanced, given the room that’s left to get rid of some inefficiencies. On the other hand, some node operator setups were not built with this more realistic viewpoint in mind; it’s pretty unavoidable that those setups will eventually be squeezed out.

12 Likes

Thank you :slight_smile:

I do agree with you on that. Such a cost should be covered. And I think a certain amount should be covered for the main system as well, up to 20-30 W for example. That would create an interesting concept: an inverted pyramid, where that fixed amount becomes more sustainable as it is diluted across each TB stored.

That would make the 1.5 USD/TB more sustainable, for example, and the fixed cost would dilute further across large groups of nodes.
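
A minimal sketch of that dilution effect, with an assumed 25 W base system and 0.30 USD/kWh (both made-up figures):

```python
# "Inverted pyramid": the fixed base-system energy cost becomes a smaller
# overhead per TB as the amount of stored data grows. Numbers are assumptions.

BASE_SYSTEM_W = 25.0
ELECTRICITY_USD_PER_KWH = 0.30
PAYOUT_USD_PER_TB_MONTH = 1.5

base_cost_month = BASE_SYSTEM_W / 1000 * 24 * 30 * ELECTRICITY_USD_PER_KWH

for stored_tb in (1, 4, 12, 40):
    overhead = base_cost_month / stored_tb
    share = overhead / PAYOUT_USD_PER_TB_MONTH
    print(f"{stored_tb:>3} TB stored -> {overhead:.2f} USD/TB/month overhead "
          f"({share:.0%} of the storage payout)")
```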

Full stop. To the point.

My RPi4 setup with 12 of 20 TB filled does NOT cover my regional energy cost. Space usage has NOT grown in a year (out of ~2.5 years in total). 25 GB in AND out per day is a mess.

I love the project. But there is absolutely no financial room to replace failing hardware in the future. There’s also no market to sell the hardware at an interesting price. So the hardware keeps running until the end, sooner or later.

Full stop.

I sit it out and pray. :v:t2: Best thing I can do.

2 Likes

And for example you can hedge every node leaving the network with a put or strike option in order to secure less dispersed weighted average anti-dilution so every non-participating preferred node operator is entitled to pro-rata share of proceeds of super pro-rata preemptive rights so double-trigger drag-along in case of multiple liquidation events is fully guaranteed by the rights of first refusal above average rate of change between pi and network attached storage hardware power threshold levels likely very similar for example to big blue ocean whale eating open source red strawberries for yesterday's breakfast.

BTRFS was just fine, till I had to remove a disk from a (not recommended) RAID5 array… and I probably did something dumb to cause all the problems.

I wouldn’t try to go much lower than ~500 points on PassMark’s cpubenchmark.net. I’m using an N36L with roughly that score to host most of my nodes, and traffic peaks are barely handled.

Yes, the degraded array is a worst case, because it starts to work like a striped/RAID0 array but with degraded IOPS. So it’s better to recover it or migrate the data as soon as possible.
The recovery could be dangerous too, though, because it would increase the load on the remaining disks and could trigger a cascading failure. Theoretically, COW filesystems should be able to recover even from bitrot if there is redundancy, but your array is likely built with BTRFS on top of mdadm, so I’m not sure that filesystem feature can be used in such a setup.

Maybe something like this?

I also don’t want to go into any discussion, so let’s just look at the graphs and numbers: that comes to 0.29 TB of added storage per month over the last six months.

1 Like

Did you check your external IP for neighbors? Maybe you are sharing traffic with someone.

1 Like

Taking into account the months when the test servers were decommissioned and the loss of that data, I find it realistic.

1 Like

Vadim!
An attentive reader would have seen my axiom above: only the number of /24 subnets matters; the node count is a fake metric - it could easily be a million, since adding nodes is free and legal.
And yes, certainly. The screenshot above shows the maximum that could be grown on Storj over the last 6 months.

PS: Perhaps someone did not understand why I consider the node count a fake metric: because there are many /24 subnets that each contain hundreds of nodes.
That is why the node count is misleading.

PS2: Why do I consider the number of /24 subnets important, and not the number of hosts?
Elementary!
Storj justified its reduction in operator payouts by saying that the network is growing and nothing terrible will happen if payments are reduced: after the cut, the number of nodes will drop slightly, but then it will recover.
But here the fake metric enters the arena: the number of nodes is indeed growing, BUT only because a small number of /24 subnets hold hundreds of nodes each.

The number of /24 subnets, meanwhile, is not growing at all - it is absolutely stable.
So in my humble opinion, or if you want advice: in posts about reducing operator fees, it is correct to use the number of /24 subnets, not the number of nodes. Then there are no omissions or fake figures.
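
To make the distinction concrete, here is a minimal sketch (with made-up node IPs) of counting distinct /24 subnets instead of raw nodes:

```python
# Hypothetical illustration: the node count can grow while the /24 count,
# which is what actually governs ingress distribution, stays flat.
from ipaddress import ip_network

node_ips = [
    "203.0.113.10", "203.0.113.11", "203.0.113.12",  # three nodes, one /24
    "198.51.100.7",                                  # one node, its own /24
    "192.0.2.20", "192.0.2.21",                      # two nodes, one /24
]

subnets = {ip_network(f"{ip}/24", strict=False) for ip in node_ips}

print(f"nodes: {len(node_ips)}")        # 6
print(f"/24 subnets: {len(subnets)}")   # 3 - the number worth tracking
```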

5 Likes

Agreed - if I only have one IP, I could run multiple nodes or one big node, and it doesn’t change anything from my perspective or the network’s.

1 Like

So, here are two nodes for October, same ISP. This is ingress over the month of October. (There are two other nodes on this ISP: one is a full 4 TB drive, the other was suspended and only had a couple hundred gigabytes of ingress.)


So, better than 1 TB of ingress this month. This ISP connection is cable, 800 Mbps down, 50 Mbps up. The top node is a Latte Panda Windows SBC; the bottom one is an RPi4 running Raspbian. Both are vanilla configurations, no other software running on them, one node each. Of the other two nodes, one is an RPi4, the other an RPi3.

Bro!
It doesn’t matter how much incoming traffic you have.
I gave an example of a node, and this is the maximum that can currently be obtained from a node:
~900 GB of ingress resulted in a storage increase of only about 300 GB (see above).
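
One way to read that gap (the deletion figure below is my assumption; the likely cause, purging of old synthetic data, is discussed further down the thread):

```python
# Net growth = gross ingress minus data deleted or expired in the same period.
gross_ingress_gb = 900.0        # from the example above
deleted_or_expired_gb = 600.0   # assumed, to match the observed ~300 GB growth

print(f"net storage growth: {gross_ingress_gb - deleted_or_expired_gb:.0f} GB")
```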

2 Likes

~1 TB is the ingress per single IP in a /24 subnet, mostly (>99%) synthetic load. A shared /24 subnet, downtime, etc. could lower it.

Cool screenshots, very informative! You could also crop them so that only “TB” remains)))
Meanwhile, the reality is different, and it doesn’t even depend much on the age or size of the node (I deliberately took very different examples). On average, the “growth” for October was even slightly negative.
Oh, and I also marked with red arrows an old bug that was sort of “solved”, but the fix didn’t help (more precisely, it helped partially; it used to be much worse). And payment is based on TB-months. In reality, even more disk space is occupied than shown as USED, and we have been fighting this throughout 2023, although before that everything was fine. As far as I know, things are better on Linux, but under Windows the “filewalker” is complete nonsense. The last node, by the way, is an especially clear case: it is only a few months old, it has already gobbled up 2 TB of disk space, while only 500 GB is paid for (and that doesn’t include the held amount :smile:)
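
For anyone unfamiliar with the TB-month part: payout is based on the average amount of data stored over the month, not on the raw disk space the node occupies (which can be larger because of trash and unclaimed garbage). A minimal sketch with assumed hourly samples:

```python
# TB-month: average stored data over a 720-hour month. The sample values are
# assumptions for illustration; the real accounting is done satellite-side.

hourly_stored_tb = [0.5] * 360 + [0.6] * 360   # assumed hourly snapshots
tb_month = sum(hourly_stored_tb) / len(hourly_stored_tb)

STORAGE_RATE_USD_PER_TB_MONTH = 1.5            # rate discussed in this thread
print(f"{tb_month:.2f} TB-month -> ${tb_month * STORAGE_RATE_USD_PER_TB_MONTH:.2f} storage payout")
```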

2 Likes

I guess it is a matter of context. Before, we had a lot of synthetic data that was generated and distributed. Storj Labs has been partially offsetting new customer data by purging that synthetic data, which was previously announced.

Obviously some ingress is added and erased by customers as they run tests or perhaps just use data temporarily, but I believe most customer data is relatively permanent, so over time as this new data is added and the synthetic data is purged, more ingress will stick around.

It is also just part of the process where a new customer tests Storj out for their needs. That may include uploading a large data set, downloading it and benchmarking the results, purging that data, and then testing other configurations to get what they are looking for. Assuming they then commit to the platform, we would see steady increases in ingress that don’t disappear.

In other words, some of this is a side effect of the company being in the early stages of growth and onboarding. The longer things go, the more customers get onboard, and those data streams go up. And eventually the synthetic data will all be purged.

But yes, right now storage growth is slow.

7 Likes