I’m more concerned about the hidden threat from TTL.
After all, moving away from the recycle bin toward TTL means the concepts of old node, new node, full node… will go away.
At any moment the TTL will end and the data will be instantly deleted.
When it was live people storing data and actually using it, there was clarity and some predictability in the world; with the transition to TTL, there is only "now".
With TTL, even graphs and predictions make no sense.
Please consider letting SNOs specify in the config file which TTL lengths they are willing to store, because a short retention period doesn't suit everyone; some want only long-term data. Or at least tag TTL data separately for plotting and forecasting.
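Something like a minimum-TTL knob in the node's config.yaml could express this. To be clear, neither of these options exists today; the names below are made up purely to illustrate the request:

```yaml
# HYPOTHETICAL settings -- not real storagenode options, just a sketch
# of the feature request above.

# Refuse pieces whose TTL is shorter than this (0 = accept any TTL):
storage2.min-piece-expiration: 720h   # ~30 days

# Report TTL'd and non-TTL'd usage separately in the dashboard graphs,
# so forecasting can ignore data that is guaranteed to disappear:
console.split-ttl-graphs: true
```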
Capacity-reservation with TTL means SLC will be constantly uploading data: because it will at least have to replace data that gets deleted every day. But if Storj wants to have a spare 5PB online… then SNOs will always be paid for those 5PB. The nodes may see more ingress… but they’ll be storing that extra data either way.
Nodes can’t specify that they only want to store data that customers won’t delete today: so I don’t know why you’d expect them to have control over data with a time-to-live.
I’m not sure what you mean: TTL is used very often by clients, especially for things like backups where you only want to keep X months of history. If anything, data with TTLs is more predictable than data that gets deleted whenever clients feel like it.
It’s a great feature for Storj to use for test data: since deletions are way lower-impact for nodes.
I failed out of a PhD program in the hard sciences, but I can confidently state you cannot infer that from these numbers alone.
Yep. If you have the force-sync flag disabled (and this is the new default), this change means temporary upload files are no longer created.
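If I remember the flag name correctly, the relevant setting lives in the node's config.yaml (double-check against your own config before relying on this):

```yaml
# Disable fsync-on-upload; with this off, uploads are written directly
# instead of going through temporary .partial files first. This should
# already be the default on current storagenode versions.
filestore.force-sync: false
```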
Sorry to burst your bubble, but this is just 3.5k drives, even accounting for the erasure-coding expansion factor. That is not an amount a drive manufacturer would even offer a proper discount for as a "large" order, see here.
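The drive count is easy to sanity-check. A minimal sketch, assuming ~30 PB of customer data (the figure mentioned later in the thread), an erasure-coding expansion factor of roughly 2.7, and 24 TB drives (the expansion factor and drive size are my assumptions, not figures from the post):

```python
# Back-of-the-envelope drive count for a 30 PB deal.
# Assumptions (mine, not the post's): ~2.7x erasure-coding expansion,
# 24 TB drives.
customer_pb = 30
expansion = 2.7
drive_tb = 24

raw_pb = customer_pb * expansion    # ~81 PB actually stored on disk
drives = raw_pb * 1000 / drive_tb   # PB -> TB, then divide per drive
print(round(drives))                # 3375, i.e. roughly 3.5k drives
```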
I’m not sure it is worth the I/O of my nodes and the bandwidth of my network connection to store data for less than 2 weeks with the current piece storage backend and at current pay levels. I guess I will have to add some monitoring to see the TTL distribution on my nodes.
Yeah 30PB is one-guy-in-his-garage capacity. And SPD or Horizon Tech would be happy to ship them tomorrow: their best deals have a minimum-order-quantity of 100 HDDs.
I understand for Storj it’s a legit business risk if they had more customers than capacity. But I’m convinced SNOs will bring empty drives online as fast as you can fill them.
Yes, as soon as we see disk usage rise, we will definitely add more.
I’d like to add more myself, but since all my drives are pretty much empty (about 30% usage right now), I won’t.
But I am planning to add 3 more 20 TB drives at both locations (so 120 TB of new space) if they start filling again.
Sometimes I imagine having multiple full drives: and how good it would feel to bring more online. Especially if the full drives had paid for themselves and were now funding the new ones!
Then I realize… by conservative estimates… it will take me 2 years to fill the space I have. So that good feeling is years away. Bah!
So… I cross my fingers that Storj’s maybe-kinda-sorta-potentially-amazing mystery customer becomes real… and they help me get there in 1 year instead!
Yeah, I feel you. I had my drives (about 50 TB running right now) about 60% filled (two of my nodes were full at the time). But in the last two months I lost so much data. It kinda hurt.
Both drives (I have more with the same trend) were full a while ago; now it is more trash than data. I will keep monitoring it; so far it has been profitable.
I run several nodes as it is good for comparing trends or spotting software bugs, since nodes typically do not all update at the same time. The second reason: I use plain HDDs, not RAID or anything, and I hit the max on some nodes, so I added fresh ones. The limiting factor was/is network bandwidth, as I do not want to saturate my internet connection with Storj alone.
Nope, that’s actually not how math works. See, I am that one SNO that runs multiple nodes. Since I don’t have 17500 nodes, that means there is at least one other person running multiple nodes. The biggest I’ve seen is @Th3Van, and that still leaves many nodes unaccounted for. So yeah, argue about semantics all day long.
In that case allow me to help:
We’ve established that there are ~100 big SNOs (undeniable: the numbers are there, as I stated, in a recent payout report).
So let’s do some basic math; don’t worry, you don’t need a PhD for any of this and can follow along.
There are ~22000 active nodes on the network. Subtract the ~800 that @Th3Van has (http://th3van.dk/, scroll down: last node ID, plus ~500 pre-vetted as far as I can tell) and that leaves 21200 active nodes.
~4500 active wallets. OK cool, let’s use the 100 big SNOs: 100 big SNOs running 50 nodes each is 5000 nodes. 21200-5000=16200 nodes left. Hm… that’s still not close enough. How about 100 big SNOs running 100 nodes each? 100x100=10000 … 21200-10000=11200.
For the remaining 4400 wallets, that means each is running ~2.5 nodes (11200/4400). That doesn’t feel right, so our big-SNO numbers have probably overshot.
Let’s go back and try to nail this down. 21200-4400=16800: if every regular wallet ran just one node, that’s the number of nodes left for the big SNOs, which means each of them is running 168 nodes. That still sounds way too high to me personally. How can we sanity-check it? I’m not running anywhere near 168 nodes and still made the payout list, so the actual per-big-SNO count should be much lower.
How about 20 nodes each? 100x20=2000. What was our total left? Ah yes, 21200-2000=19200. What’s that per regular SNO? 19200/4400=4.36. Let’s call it an even 4.
We therefore come to the logical conclusion that, based on the actual numbers, most SNOs are indeed running multiple nodes. The only source of verification for this is Storj themselves. Otherwise, the numbers don’t lie.
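The arithmetic above can be sketched in a few lines, so anyone can rerun it with their own guesses. All inputs are the thread's own estimates, not verified figures:

```python
# Back-of-the-envelope node-count estimate from this thread.
# Inputs are the thread's estimates, not verified figures.
TOTAL_NODES = 22_000
TH3VAN_NODES = 800      # last node ID + ~500 pre-vetted, per th3van.dk
ACTIVE_WALLETS = 4_500
BIG_SNOS = 100

remaining = TOTAL_NODES - TH3VAN_NODES          # 21200 nodes

def nodes_per_small_sno(nodes_per_big_sno: int) -> float:
    """Nodes left per regular wallet after removing the big SNOs' share."""
    left = remaining - BIG_SNOS * nodes_per_big_sno
    small_wallets = ACTIVE_WALLETS - BIG_SNOS   # 4400 regular wallets
    return left / small_wallets

# Trying the scenarios from the post: 50 -> 3.68, 100 -> 2.55, 20 -> 4.36
for per_big in (50, 100, 20):
    print(per_big, round(nodes_per_small_sno(per_big), 2))
```

Plugging in different big-SNO sizes shows how sensitive the "nodes per regular wallet" figure is to that one guess, which is really the whole argument in this post.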