Announcement: Changes to node payout rates as of December 1st 2023 (Open for comment)

mmm maybe my analogy is not correct… not as employees of a firm… but more like the gig economy?.. like how Uber or food delivery services work?.. you aren’t really “hired” by the firm… you are self-employed, with Uber as the platform, earning whatever rates are laid out… then you choose whether to work or not…

The problem with external audits of egress is that the code can be modified to not throttle known auditors.
Even if we distribute audits across storage nodes, it doesn’t help much: the software can be trained on which IPs/NodeIDs are auditors.
The measurement would also be a mostly random value depending on the auditors’ locations, so it looks useless, yet requires additional dev effort for nothing. Economics works better than that.
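To make the evasion concrete, here is a minimal sketch (hypothetical: the allowlist, delay, and NodeIDs are made up, and this is not code from the storagenode repository) of how a modified node could exempt suspected auditors from throttling:

```go
package main

import (
	"fmt"
	"time"
)

// auditorAllowlist is a hypothetical set of NodeIDs that a cheating
// operator has guessed belong to auditors (e.g. by correlating request
// patterns or source IPs).
var auditorAllowlist = map[string]bool{
	"auditor-node-id-1": true,
}

// throttleDelay decides how long to stall a download request:
// full speed for suspected auditors, artificial slowness for everyone
// else, which defeats any external egress audit.
func throttleDelay(peerNodeID string) time.Duration {
	if auditorAllowlist[peerNodeID] {
		return 0
	}
	return 3 * time.Second
}

func main() {
	fmt.Println(throttleDelay("auditor-node-id-1")) // 0s
	fmt.Println(throttleDelay("regular-customer"))  // 3s
}
```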

Also, the ToS say that the minimum egress bandwidth should be 5 Mbps and the minimum ingress 25 Mbps (IIRC), so I could limit my node to, say, 6 Mbps and 30 Mbps and not violate the ToS, whatever my real connection bandwidth is. I would not need to cheat any audits etc., just have the node limited to those speeds.
Cheating would only be needed if someone wanted to limit the bandwidth even more.
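Such a cap doesn’t even need node-side support. A minimal sketch of a self-imposed token-bucket limit (golang.org/x/time/rate is used here purely for illustration; the 6 Mbps figure and the wrapper are my own, not a storagenode option):

```go
package main

import (
	"context"
	"io"
	"strings"

	"golang.org/x/time/rate"
)

// rateLimitedReader caps read throughput with a token bucket,
// e.g. 6 Mbps ≈ 750,000 bytes/second.
type rateLimitedReader struct {
	r   io.Reader
	lim *rate.Limiter
}

func (rl *rateLimitedReader) Read(p []byte) (int, error) {
	n, err := rl.r.Read(p)
	if n > 0 {
		// Block until the token bucket allows n more bytes.
		if werr := rl.lim.WaitN(context.Background(), n); werr != nil {
			return n, werr
		}
	}
	return n, err
}

func main() {
	src := strings.NewReader("some piece data")
	capped := &rateLimitedReader{
		r: src,
		// 6 Mbps egress cap: 750 kB/s sustained, 64 kB burst.
		lim: rate.NewLimiter(rate.Limit(750_000), 64*1024),
	}
	io.Copy(io.Discard, capped)
}
```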

1 Like

:+1:

And yet another problem is latency. There are no latency requirements in the ToS except not failing audits, which have a timeout of 5 minutes. So if we had any tool that simply delays delivery of download requests by a few seconds, most races would be lost “immediately”, with the node not having to do any work. Packet inspection tools could be made to do that. This breaks the ToS on a technicality, but it would be quite undetectable, as it would be indistinguishable from a regular node that was not configured optimally.
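As a toy illustration of how little it would take (the ports and the 3-second delay below are made up; any proxy or packet-inspection layer could do the same without touching the node software):

```go
package main

import (
	"io"
	"log"
	"net"
	"time"
)

// A toy TCP shim in front of a storagenode: stall each incoming
// connection long enough to lose download races, while staying online
// and responsive enough to pass the 5-minute audit timeout.
func main() {
	ln, err := net.Listen("tcp", ":28968") // public-facing port
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go func(c net.Conn) {
			defer c.Close()
			time.Sleep(3 * time.Second) // lose most races up front
			up, err := net.Dial("tcp", "127.0.0.1:28967") // real node
			if err != nil {
				return
			}
			defer up.Close()
			go io.Copy(up, c)
			io.Copy(c, up)
		}(conn)
	}
}
```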

Well thought out; such obvious things don’t need any further discussion taking up forum space.
Sorry, I have limited time as well to debate endlessly.

While I’m NOT in a position to know how it could be designed specifically,
I can say in general that the check seems like it should be done from the customer’s perspective: downloading a file could provide the measurement.
How can the node know which customer is measuring its performance?
So the node wouldn’t know for whom to keep the hypothetical cap and for whom not to.
It might be even as simple as that:
a customer audit, a so-called “Mystery Shopper”.

I’m sure the company’s workers will use any given excuse to avoid proactivity or adding to the to-do list,
even when some of it is just silly obvious, like this here.
But I’m not paid to burn glucose solving problems for Storj Inc. employees instead,
or to endlessly debate courses of action that Storj Inc. doesn’t like.
Still, I find this necessary sooner or later:
you need to enforce your requirements if you set them
(a minimum of resources, like upstream),
and you need incentives for the maximum
(the more you share, the more $$$ you can get).

The only cost for Storj Inc. is that it must play this mystery-customer role.
Or just partner with some customer, a university for example, giving them some free traffic
but wanting measurements in return. I know a download fetches the 29 fastest pieces out of the 80.
Therefore the slowest, capped nodes may never be the source of a download.
That’s a signal too.
Maybe suspected capping nodes can be found by not winning any race at all over some period of time, or by too low a winning ratio; see the sketch below.
But I do know that you can design a solution; it just needs work to figure out.
So please don’t say “you can’t” or “impossible”;
it’s a matter of need and will.
Maybe Storj Inc. thinks it doesn’t need it; that’s another subject.
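A minimal sketch of that win-ratio check, under my own assumptions (the thresholds, field names, and the 29-of-80 baseline are illustrative; nothing here is an existing satellite feature):

```go
package main

import "fmt"

// raceStats tracks download races per node over an observation window.
type raceStats struct {
	NodeID string
	Races  int // times the node held a piece for a requested segment
	Wins   int // times it was among the 29 fastest of the 80
}

// flagSuspects returns nodes whose win ratio is far below the expected
// baseline (29/80 ≈ 0.36 if all nodes were equally fast). A node that
// almost never wins despite many races is a capping suspect.
func flagSuspects(stats []raceStats, minRaces int, minRatio float64) []string {
	var suspects []string
	for _, s := range stats {
		if s.Races < minRaces {
			continue // not enough data yet
		}
		if float64(s.Wins)/float64(s.Races) < minRatio {
			suspects = append(suspects, s.NodeID)
		}
	}
	return suspects
}

func main() {
	stats := []raceStats{
		{"node-a", 5000, 1900}, // ~0.38, healthy
		{"node-b", 5000, 40},   // ~0.008, suspicious
	}
	fmt.Println(flagSuspects(stats, 1000, 0.05)) // [node-b]
}
```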

I really don’t like that the forum is drifting into constant repetition, only so that “mine be the last word.”
I don’t think that’s how the discussion should take place.
If an AI were to read this, it could only conclude that it was written by beings with extremely small cache memories.
A topic gets exhausted, and 4-8 posts later a post appears as if the discussion had started from scratch.
Maybe, indeed, we should all get those Neuralinks from Musk, I don’t know.

Not achievable. They can use S3 integration, and everything is ruined.

Then S3 should go, as it costs Storj too much anyway.

You do not want to have 19 GB/day (on average)?

I want much more, but it won’t come anyway with current prices.
As stated before, I think Storj is a standard by itself and should gain customers who want that standard, not lure people who don’t know what it is and only join because it’s easy for a tech guy to swap from Amazon thanks to S3.

I would say we need both.
In the end - it’s storage and bandwidth used on our nodes (sorry, I’m SNO after all).

1 Like

Or rather “we needed”?
Up to some point.
Now? I think S3 is not sustainable.
If Storj could abandon S3, it could also lower prices to attract more customers.

Without S3 it would be very hard to attract new customers, because there would be no smooth migration from their current S3-compatible provider. So S3 will stay.

1 Like

Obviously, that’s why the price tag can dramatically change the game.

Nope, it would not. Even with easy migration it’s very hard to convince companies to migrate, even from Amazon (though those who migrated are pretty happy, and their bosses too, because now it costs less and is also faster), because it requires changing something, even if that’s only the endpoint and new S3 credentials. With only the native integration it would be nearly impossible.

6 Likes

I’m sorry.
I don’t know for sure, but it seems you do know it would not.
Just a reminder that arrogant and ignorant businesses fail all the time.
I’ve been taught that if a new breakthrough in tech suddenly occurs, meaning it’s as good or better and it’s 2, 3, 4 times cheaper, then it becomes the new standard, and at that point the cost of swapping to the new standard doesn’t matter, since it’s dramatically cheaper.

And you speak with great confidence that “it would not.” It seems like a language barrier to me; for example, by “it’s very hard to convince companies to migrate even from Amazon” I understood that the price tag isn’t right. The question is: wouldn’t they come not only through the doors but also through the windows for the Storj service, even without S3, if for example the price were somehow $1/TB stored and $1/TB egress? That’s all I wanted to point out.

Obviously you can’t offer such low prices at the moment (on the public network; maybe the commercial one is closer to this, closer than one could imagine?).
But just without the costly S3 gateway,
even now it would be possible for Storj to offer around $4/TB storage and $4/TB egress:
Storj Inc. would earn only from egress;
SNOs would earn from storage and from egress.
Would the $1 price be revolutionary for such a service?
It’s a matter of how far a $4/$4 offer would land from it, in customers’ awe.
And after further adjustments of the Reed-Solomon algorithm, maybe the price could be slashed even 20-30% more! Just saying what’s possible!
Storj could slash prices while others raise theirs, and take over the whole storage market!
“Think big” or something!?
You probably did, hence commercial SNOs.

Let’s see if I got this right:
My node gets 1TB of ingress this month. Because of that, $8 is held back from my payout. Next month, 1TB of data gets deleted from my node (and no new data is uploaded). Does the $8 remain held, or do I get it back?

Oh, and it probably would still be easy to get the node disqualified without actually losing data, but just by being away from the server for a couple of days (or even just a few hours) at the wrong time…

How would that be calculated for a non-empty node?
Initial conditions:
10TB stored for a year, held back accumulated $80, earning $20/month.
Now, my node gets 1TB of additional data stored in the first day of the month.

  1. Will I get $20 ($20 for the 10TB previously stored and nothing for the new 1TB, since the held amount is not enough)?
  2. Or will I get $14 ($22 for 11TB, but $8 gets held back)?

If it’s the first option, it is going to be complicated to calculate. If it’s the second option, then it is going to have weird behavior in that my node is storing more data now, but I get paid less for that month.
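For what it’s worth, a tiny sketch of the two readings, assuming $2/TB/month storage payout and $8/TB held back (numbers taken from this thread; the rule itself is my guess at the proposal, nothing confirmed):

```go
package main

import "fmt"

const (
	payPerTB  = 2.0 // $/TB/month storage payout (thread's numbers)
	heldPerTB = 8.0 // $ held back per TB stored
)

// Option 1: pay for the previously covered 10TB; the new 1TB earns
// nothing until its own $8 escrow is covered.
func option1(oldTB, newTB float64) float64 {
	return oldTB * payPerTB
}

// Option 2: pay for all stored TB, then deduct the new escrow.
func option2(oldTB, newTB float64) float64 {
	return (oldTB+newTB)*payPerTB - newTB*heldPerTB
}

func main() {
	fmt.Println(option1(10, 1)) // 20
	fmt.Println(option2(10, 1)) // 14
}
```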

So, what comes out of this:

  1. Let’s assume that in your example, my node never stops getting new data, it always grows 1TB/month. In that case:
    Month 4 - 4TB, earned $8 ($20 total), escrow $32, get zero
    Month 5 - 5TB, earned $10 ($30 total), escrow $40, get zero
    Month 6 - 6TB, earned $12 ($42 total), escrow $48, get zero
    Month 7 - 7TB, earned $14 ($56 total), escrow $56, get zero
    Month 8 - 8TB, earned $16 ($72 total), escrow $64, finally get $8

Eight months to see the first cent of what I earned; that’s maybe a bit too much.

  2. Let’s extend your example (where the node stops getting data in the third month):
    Month 7 - no new data, earned $6 (balance $30, escrow $24) get $6, balance stays at $24
    Month 8 - got 1TB more (4TB total), earned $8 (balance $32, escrow $32), get zero, balance $32
    Month 9 - got 1TB more (5TB total), earned $10 (balance $42, escrow $40) get $2, balance $40

As you can see, there’s this weird effect that when the node gets more data, I immediately get paid less. Imagine if my node filled up and I added a new hard drive after a few months: now my node gets more data, but I get paid less in the immediate months. (A small simulation of this model is below.)
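A minimal simulation of the model as I read it (balance accrues earnings, escrow = $8 per TB currently stored, monthly payout = max(0, balance − escrow); this is my interpretation, not an official spec) reproduces the first table:

```go
package main

import "fmt"

func main() {
	const payPerTB, heldPerTB = 2.0, 8.0
	balance := 0.0
	storedTB := 0.0
	for month := 1; month <= 8; month++ {
		storedTB += 1 // +1TB ingress every month
		balance += storedTB * payPerTB
		escrow := storedTB * heldPerTB
		payout := balance - escrow
		if payout < 0 {
			payout = 0 // everything stays held back
		}
		balance -= payout
		fmt.Printf("month %d: %.0fTB, escrow $%.0f, payout $%.0f\n",
			month, storedTB, escrow, payout)
	}
	// Month 8 is the first month with a non-zero payout ($8),
	// matching the table above.
}
```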

Also, what’s with the

If there is no way to ever get the $12 (graceful exit, force quitting, anything else), then that money was never earned to begin with, and it is misleading to show it. Instead, say that my earnings per TB are lower, or that this is a tax or something, because if I see that I “earned” some money, I will want to get it. If I can’t get it, then don’t say that I “earned” it.

By the way, my node currently has 22TB of data, with about $55 held back. If your rules get implemented, does that mean that my payments will stop for at least 5 months?

1 Like

Props for honesty. :joy: But you have some good points. One of the biggest flaws of the native implementation is that there is no way to integrate it into a website. You need S3 for that, since browsers don’t talk native Storj or DRPC. Plus there are a lot of certificate-related issues as well. This alone means they can’t just drop S3 compatibility.

Though there are definitely interesting use cases, like app back ends, publishing data-science datasets, software distribution, large database and snapshot backups, etc.
All of these have much more potential than the relatively small NAS backup market. But there is no reason they can’t do both. I use Storj for NAS backups, though I’m forced to use S3 as well. Get some native tools going and they can break into that market more as well.

I proposed something similar a while back at $10 per TB, but payouts have dropped since, so this seems reasonable. But why would GE only return half? That makes no sense and feels like they would permanently steal part of your payout. I don’t agree with that part.

That’s way too easily said. If 20% of nodes drop out, it is a near certainty that many segments have more than 17% of their pieces stored on those nodes; all those segments would be lost (see the rough numbers below). Tuning RS is very necessary, but it’s complicated and impacts durability, performance, parallelism, network overhead, and basically every aspect of the network. Lowering the expansion factor would almost certainly require increasing the total number of pieces per segment to compensate for the lost durability. That in turn would lead to many more, smaller pieces, making it likely a good idea to increase the segment size, which in itself only works for larger files and makes Storj less performant for many small files. None of this is simple, and it requires running complex models to determine whether new settings still provide sufficient redundancy. I can tell you right now that an expansion factor of 1.2 is never going to happen. Let’s try getting it to 2 first.
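A back-of-the-envelope sketch of why, under my own simplifying assumption that each piece’s node drops out independently with probability 0.2 (real placement isn’t fully independent): k = 29 pieces needed, comparing today’s n = 80 against n = 35 (≈1.2× expansion):

```go
package main

import "fmt"

// lossProb returns the probability that a segment becomes unrecoverable:
// more than n-k of its n pieces sit on dropped nodes.
func lossProb(n, k int, p float64) float64 {
	loss := 0.0
	for lost := n - k + 1; lost <= n; lost++ {
		loss += binomPMF(n, lost, p)
	}
	return loss
}

// binomPMF computes C(n,x) * p^x * (1-p)^(n-x) iteratively
// to avoid factorial overflow.
func binomPMF(n, x int, p float64) float64 {
	prob := 1.0
	for i := 0; i < x; i++ {
		prob *= float64(n-i) / float64(x-i) * p
	}
	for i := 0; i < n-x; i++ {
		prob *= 1 - p
	}
	return prob
}

func main() {
	fmt.Printf("n=80 (2.76x): %.2e\n", lossProb(80, 29, 0.2)) // vanishingly small
	fmt.Printf("n=35 (1.2x):  %.2f\n", lossProb(35, 29, 0.2)) // roughly half of all segments
}
```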

[Citation needed]
This is severely outdated info. I have seen one single case where this happened, and it was because someone had mounted different network drives to different subfolders in the Storj storage location. This let the files used by the readability and writeability checks pass just fine, while the data disappeared along with the drive that was down. Only those kinds of messy setups would still fail like that, and in that case it’s kind of your own fault.

Escrow is a legal term and requires an independent third party holding the money. “Held back” is the right term here; I guess “collateral” might also work.

As a slight tweak, I would suggest that the amount held back in a month is never more than 50% of that month’s earnings. This way you always get paid something. New node operators also need to see that Storj pays out reliably, and otherwise they wouldn’t receive a payout for many months. (A one-function sketch of this tweak follows.)
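Under my interpretation (the function name and demo numbers are made up):

```go
package main

import "fmt"

// monthlyPayout sketches the 50% tweak: hold back whatever escrow is
// still owed, but never more than half of this month's earnings, so
// some payout always goes through.
func monthlyPayout(earned, escrowOwed float64) float64 {
	held := escrowOwed
	if held > earned/2 {
		held = earned / 2
	}
	if held < 0 {
		held = 0
	}
	return earned - held
}

func main() {
	fmt.Println(monthlyPayout(20, 80)) // 10: cap kicks in, half is still paid
	fmt.Println(monthlyPayout(20, 5))  // 15: remaining escrow is small
}
```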

2 Likes

With the new version of GE, it indeed costs Storj money. But we didn’t have a say in choosing that. And the old version was almost entirely free for them. If they choose to swallow that cost for code simplicity, that’s on them and node operators shouldn’t have to pay for that.

1 Like