Why 25k/month? Where is that number coming from? And what do you want from your ISP?
To clarify: I don’t care if it grows SLOWLY.
I do care if it doesn’t grow AT. ALL.
Well obviously that would be more IP addresses that AREN’T in the same subnet.
But this is breaking ToS…
Show me where in the ToS this is stated.
Funny:
…with the log level set to “warn”:
$ ./successrate.sh
========== AUDIT ==============
Critically failed: 2
Critical Fail Rate: 100.000%
Recoverable failed: 0
Recoverable Fail Rate: 0.000%
Successful: 0
Success Rate: 0.000%
========== DOWNLOAD ===========
Failed: 379
Fail Rate: 100.000%
Canceled: 0
Cancel Rate: 0.000%
Successful: 0
Success Rate: 0.000%
========== UPLOAD =============
Rejected: 0
Acceptance Rate: 100.000%
---------- accepted -----------
Failed: 11163
Fail Rate: 100.000%
Canceled: 0
Cancel Rate: 0.000%
Successful: 0
Success Rate: 0.000%
========== REPAIR DOWNLOAD ====
Failed: 34
Fail Rate: 100.000%
Canceled: 0
Cancel Rate: 0.000%
Successful: 0
Success Rate: 0.000%
========== REPAIR UPLOAD ======
Failed: 682
Fail Rate: 100.000%
Canceled: 0
Cancel Rate: 0.000%
Successful: 0
Success Rate: 0.000%
========== DELETE =============
Failed: 0
Fail Rate: 0.000%
Successful: 0
Success Rate: 0.000%
…I changed that to “info” about half an hour ago and…
$ ./successrate.sh
========== AUDIT ==============
Critically failed: 2
Critical Fail Rate: 0.676%
Recoverable failed: 0
Recoverable Fail Rate: 0.000%
Successful: 294
Success Rate: 99.324%
========== DOWNLOAD ===========
Failed: 414
Fail Rate: 2.923%
Canceled: 18
Cancel Rate: 0.127%
Successful: 13731
Success Rate: 96.950%
========== UPLOAD =============
Rejected: 0
Acceptance Rate: 100.000%
---------- accepted -----------
Failed: 11210
Fail Rate: 40.786%
Canceled: 3
Cancel Rate: 0.011%
Successful: 16272
Success Rate: 59.203%
========== REPAIR DOWNLOAD ====
Failed: 34
Fail Rate: 5.502%
Canceled: 0
Cancel Rate: 0.000%
Successful: 584
Success Rate: 94.498%
========== REPAIR UPLOAD ======
Failed: 688
Fail Rate: 68.731%
Canceled: 2
Cancel Rate: 0.200%
Successful: 311
Success Rate: 31.069%
========== DELETE =============
Failed: 0
Fail Rate: 0.000%
Successful: 0
Success Rate: 0.000%
So other than it looking REALLY BAD for a minute there because it wasn’t logging successes… nothing really sticks out.
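Side note for anyone else who hits this: as far as I understand, scripts like successrate.sh just count matching log lines, so at log level “warn” the successes are never written and everything reads as 100% failed. The level is set with log.level in config.yaml or the --log.level flag. A rough sketch of that kind of counting, with a hypothetical log path and assumed “uploaded” / “upload failed” message texts, so check them against your own log first:

#!/bin/bash
# Rough sketch only: count upload outcomes from a storagenode log file.
# The LOG path is hypothetical; point it at wherever your node writes its log
# (e.g. docker logs storagenode > node.log, or the file set via log.output).
# The "uploaded" / "upload failed" message texts are assumptions; verify them
# against your own log before trusting the numbers.
LOG="${1:-./node.log}"

ok=$(grep -c 'uploaded' "$LOG")
failed=$(grep -c 'upload failed' "$LOG")
total=$((ok + failed))

if [ "$total" -gt 0 ]; then
    rate=$(echo "scale=3; $ok * 100 / $total" | bc)
    echo "Successful: $ok"
    echo "Failed: $failed"
    echo "Success Rate: ${rate}%"
else
    echo "No upload entries found - is the log level at least 'info'?"
fi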
I cannot find the exact spot, but @Alexey can surely point to it.
But he confirmed it too:
This directly contradicts general advice regarding multiple disks and the one-disk-per-node idea.
If they want us to take this seriously, they need to both clarify AND come up with some way for SNOs to actually grow their nodes within said restrictions.
Yes, they provide a way: just let it run and it will grow with customer needs… As I said, this is real data from real customers. Not a get-rich-asap method.
And trust me, bandwidth has nothing to do with it. If it were allowed, I could have done it a long time ago. I have a 20 Gbit symmetrical connection and could easily get access to many, many IPs for cheap/free. But I don’t want to risk it.
I think the test data temporarily overshadowed a different trend: Storj eliminating free accounts and being more aggressive about purging data from closed/unpaid accounts. These forces were removing data from nodes and seemingly continued during the test data period, which made the drop-off even more dramatic when the test data stopped.
edit: but I do understand your feelings bro. I had a couple of disks and the test data maxed them out. By the time I bought a few more and had them installed (and migrated node data, which is so slow), the test period had ended, and now I have many more terabytes of available than used space.
The rate at which your node actually grows (ingress less deletes) depends on the customers; there’s nothing you can do about it. We’ve had waves of strong growth and regressions; that’s just the life of an SNO.
Over the long term, if you believe Storj will grow, then these waves are just noise and you should ignore them. Otherwise you should reconsider being an SNO.
Also, if it helps anyone, this is my oldest node, plotted as on-disk space usage over time. Ignore 2024-01 and around 2024-07; I had some issues with my Grafana/Prometheus stack and node bugs then. This graph also includes all the bugs that came with the node software’s reporting of used disk space. Left axis: Tri = TB.
You can see lots of noise, especially from the SLC tests, but the general trend is upwards.
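If anyone wants a similar history without running a full Grafana/Prometheus stack: the node exposes a local dashboard API (port 14002 by default), so a daily cron job appending one sample to a CSV is enough to plot a trend later. A minimal sketch, assuming the /api/sno/ endpoint and a diskSpace.used field in its JSON response (field names have changed between node versions, so check the actual response first):

#!/bin/bash
# Minimal sketch: append one disk-usage sample per run (e.g. daily from cron).
# Assumes the local dashboard API on 127.0.0.1:14002 and a diskSpace.used
# field in the /api/sno/ response; verify both against your node version.
OUT="$HOME/node-usage.csv"

used=$(curl -s http://127.0.0.1:14002/api/sno/ | jq -r '.diskSpace.used')
echo "$(date -u +%F),$used" >> "$OUT"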
Single IP? Location?
I have 30-month-old nodes still around 10 TB. Filled only during the tests, of course.
Single IP.
I’m in North Carolina.
I’m sorry if this has been answered earlier, but have you checked if there is anyone else running a node on your /24?
Bit of a long shot, but you never know…
It’s not. You may run multiple nodes in the same location/server but they should be behind the same /24 subnet of public IPs.
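For anyone unsure what counts as a neighbor here: the satellites treat all public IPs that share the same first three octets (the same /24) as one location when handing out ingress, so nodes behind them split the traffic a single node would otherwise get. Illustrative addresses only (TEST-NET range, not anyone’s real nodes):

# 203.0.113.10 and 203.0.113.200 -> same /24 (203.0.113.0/24), ingress is shared
# 203.0.113.10 and 203.0.114.10  -> different /24s, selected independently
echo 203.0.113.10 | cut -d. -f1-3    # prints the /24 prefix: 203.0.113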
I can confirm what everyone else has reported: average incoming traffic is around 50-100 GB per day, and the /24 subnet usage for my three nodes was 6.14 TB at the start of the month and 6.56 TB on October 17th, so a net gain of around 24.7 GB per day.
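For what it’s worth, that figure checks out:

# 6.56 TB - 6.14 TB gained between the start of the month and October 17th
echo "scale=1; (6.56 - 6.14) * 1000 / 17" | bc    # ~24.7 GB/day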
My oldest node is 21 months. There are no neighbors in the /24 network.
And this is the picture on all my nodes - a flat line, no growth.
It always depends on the customers. There are no equal nodes.
What’s your success rate?
My nodes are from 2019 FYI.
If you have this much energy to be so extremely active, maybe you could join Storj to help them look for more customers with more data to store?