Which satellite is “pmw…”? That’s the one with 2.6TB of trash.
The node database is wrong: according to it, the trash amounts for each satellite are small, but the total is correct.
You should bookmark that:
This is the main number I’m watching:
I want to see if Storj wants to reserve 5PB, or 10PB, or whatever. We’ll eventually level off at something when daily SLC uploads match daily TTL deletes.
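Back-of-the-envelope, that level-off point is just the daily upload rate times the TTL. A tiny sketch with made-up numbers (nothing below is an official figure):

```python
# At steady state, daily uploads equal daily TTL expirations, so the
# reserved amount settles at (daily upload volume) x (TTL in days).
def steady_state_pb(daily_upload_pb: float, ttl_days: float) -> float:
    return daily_upload_pb * ttl_days

# Hypothetical: 0.25 PB/day of SLC uploads, all with a 30-day TTL
print(steady_state_pb(0.25, 30))  # -> 7.5 PB
```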
TTL deletes should not end up in the trash; they are just deleted.
Our internal dashboard is showing 7.6 PB at the moment. I would estimate that there is another 6 PB or so in the trash folder. So in total we should be at 13-14 PB. Our target is 20 PB. Just to give you an idea of what to expect.
For your node you can simply assume that you are holding used space + trash and you will grow by another 50% or so (without us adding surge nodes). In 6 days your trash folder will get cleaned up. That will set us back a bit but we can still make it to the 20 PB just 10 days later or so.
At the moment the node count with enough free space is getting low. Next week we are going to run tests with some surge nodes that we add to the network. Later in the week I might be able to give you a new estimation. For now I would estimate that the surge nodes have to take 25% of the load = 5PB. Your node might still grow by another 50%. But it could also mean your current size including the trash folder will be your final size. There are too many variables to make a prediction. I would guess the fast nodes still get +50%, but the nodes that are still the bottleneck might stay at their current size. And no, I can’t tell you which nodes those would be. There is only one way to find out.
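To put that into numbers for a single node, here is a quick sketch of the projection (the 50% growth figure comes from the post above; the node sizes are made up):

```python
# Per-node projection from the estimate above: a fast node holds
# used + trash today and may grow by another ~50% on top of that.
def projected_tb(used_tb: float, trash_tb: float, growth: float = 0.5) -> float:
    return (used_tb + trash_tb) * (1 + growth)

# Hypothetical fast node: 10 TB used, 2.6 TB trash
print(projected_tb(10.0, 2.6))  # -> 18.9 TB
```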
Thanks, I don’t know what possessed Storj to use three different formats for the satellite ID, but here we are.
So, it’s saltlake that has 2.6TB of trash.
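For anyone else trying to map folder names to satellites: as far as I can tell, the blobs/trash directory name is just a base32 re-encoding of the same ID that is shown in base58 elsewhere. A rough Python sketch (assuming Bitcoin-style base58check with a leading version byte and a 4-byte checksum, and RFC 4648 lowercase base32 without padding for the folder name):

```python
import base64

# Bitcoin-style base58 alphabet (what the node IDs appear to use)
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode(s: str) -> bytes:
    n = 0
    for ch in s:
        n = n * 58 + ALPHABET.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    pad = len(s) - len(s.lstrip("1"))  # leading '1's encode zero bytes
    return b"\x00" * pad + raw

def blob_dir_name(node_id_b58: str) -> str:
    raw = b58decode(node_id_b58)
    node_id = raw[1:-4]  # strip version byte and 4-byte checksum (base58check)
    return base64.b32encode(node_id).decode().lower().rstrip("=")

# Saltlake's satellite ID; per this thread the folder name starts with "pmw"
print(blob_dir_name("1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"))
```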
Oh yes, I would like to know too… Either hex or base58 should be ok for the filesystem. Maybe except FAT?
But who would use FAT these days? With a volume size limit of 2TB (for FAT32) it sounds ridiculous.
Then, perhaps, your node was lucky to receive uploads with duplicated keys…
Does this 20PB include the expansion factor, or is it pure data only?
It’s amazing that you can upload so quickly, but worrying that nodes are staying full. I had hoped full SNOs would be enthusiastically posting in the rig thread with pics of the new space they were bringing online. But is it maybe true that thousands don’t even realize their setups are out of space?
Adding your surge nodes is understandable; it’s just disappointing the community couldn’t shoulder the load itself.
Once the test data materialises into customer data I’m sure more people will expand their setup.
Doing so at the moment is a bit of a gamble.
I am waiting on a new 20TB Exos X24 but they are on back order with my favourite supplier…
If the reserved data is intended to be melted down as real customer data displaces it… and peak reservation should be in about 10 days… then we’re close to our medium-term max capacity now. Like, isn’t littleskunk saying some of the slower nodes may be near their max size today (because the next 10 days of uploads may simply replace their trash)… and the fastest nodes can maybe just expect another 50%?
For SNOs that are full now… add space now. There’s always going to be some natural growth too: but it sounds like we’ve almost accounted for whatever the new-large-customer could bring.
No contracts have been signed as far as we’ve been made aware, so this is all optimism right now.
Sure, we can buy a lot more storage and fill it up quickly if it all goes well, or we could be left with dozens of terabytes of empty disk space which will take years to fill at the rate we’ve seen so far.
So it’s all a bit of a gamble.
I think I am cautiously going to expand some more, but I won’t go nuts just yet.
One day the community will grow enough to take the load. It doesn’t happen in just a few days. It is also very silent outside of this forum. I haven’t asked why. Maybe they haven’t noticed yet.
We are not talking about a few people. If the full data set is uploaded we are talking about more than 10K nodes (current estimation). → Most likely we will need some surge capacity. If I run the math it should take only a few servers to make it work.
I think there will be two peaks: one in 6 days when you take used space + trash, and another one in 16 days when we have managed to replace the space that was in the trash folder, ± a few days. Most likely it will take longer because fewer nodes with free space also means lower throughput.
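The “few servers” math from above presumably works out something like this (the per-server capacity is my assumption, not an official number):

```python
# Surge capacity from the 25%-of-the-load estimate above: 5 PB total.
surge_pb = 0.25 * 20             # 25% of the 20 PB target
per_server_pb = 1.0              # assumption: ~1 PB of raw disk per surge server
print(surge_pb / per_server_pb)  # -> 5.0 servers under that assumption
```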
I know you can obviously not promise anything but… what you’re saying suggests very exciting times ahead!
Does anyone know what the X24 or X20 means? I initially thought X24 could mean 24TB HDDs, but that’s not the case.
We look like a sect waiting for the arrival of the chosen one.
I believe it means that range of HDDs has a platter count and areal density that allows it to go up to 24TB.
The X20 maximum was 20TB, and so on…
There are 20k nodes… and…
https://storjstats.info/d/storj/storj-network-statistics?orgId=1&viewPanel=405
37.6 PB free…
After weeks of testing my overall used space is more or less unchanged.
There is one node I started just for fun in a data center. This one is full and it’s almost 100% SLC data. It costs me only 10x the $1.50 they pay.
The potato nodes I am running at home are storing some SLC data too, but it is only like 10% of the overall data. That’s OK with me; I never wanted that short-TTL data. So no need to change anything now.
So… is it wise to use a datacenter node in that case? (sorry!)