Current situation with garbage collection

At this rate you’re only going to be left storing a single cat video! :wink:

3 Likes

It’s not possible without breaking the rules: a node can store only 1 of the 80 pieces of a segment of that video. If the video has more than one segment, well, there are 80 independent nodes per segment.
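A rough sketch of how that limit falls out of the numbers above. The 80-pieces-per-segment figure is from this post; the 64 MiB segment size is an assumption based on Storj's commonly documented default, not something stated here:

```python
# Toy sketch of Storj-style piece distribution (not the real implementation).
SEGMENT_SIZE = 64 * 1024 * 1024   # bytes per segment (assumed default)
PIECES_PER_SEGMENT = 80           # one piece per node, per the post above

def piece_placements(file_size: int) -> int:
    """Each segment's 80 pieces go to 80 distinct nodes, so a single
    node can never hold more than 1/80 of any one segment."""
    segments = -(-file_size // SEGMENT_SIZE)  # ceiling division
    return segments * PIECES_PER_SEGMENT

# A 1 GiB cat video spans 16 segments -> 1280 piece placements.
print(piece_placements(1024**3))
```

So to "hold a whole cat video" one operator would indeed need many geographically independent nodes, which is exactly the point of the rule.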

2 Likes

@snorkel, the “used this month” metric is at a 1.7TB average today, so I fully expect many more deletions on this node. It’s my oldest node, and it has been sad to see it go from 9TB used to … well, whatever it ends up at :slight_smile:

@Roxor, please upload one to StorJ, and I’ll happily store it for you.

@Alexey, at this pace, it’s tempting to travel all over the world, building additional nodes, just to be sure I at least hold a single cat video :slight_smile:

5 Likes

Yes, I can understand. However, according to this

it will likely be filled again soon, at high speed and with TTL data, so there is no need to wait for GC.

1 Like

My oldest, from 01.2021. At its peak it reached 14.5TB.

No problem, after 7 days you would have 1.76TB more free space

In all my time on StorJ combined, this week has by far been the roughest.

It’s with a small tear in my eye that I report the nodes in one location are together down almost 10TB.

Looking forward to new ingress normal in the future! :slight_smile:

Almost 10TB in trash across all nodes. It’s a lot.
I hope it gets filled rather quickly.

1 Like

I feel your pain. My 6 nodes went from a total of 23TB down to 11TB. Ouch!

2 Likes

this isn’t that reassuring either:

why is it that bumpy in the graph?

Because nodes are deleting GC data that is 7 to 10 days old. For some it takes one or two days to delete a TB of data.
I think this is the reality of the current situation. Most of the data stored was free tier.

1 Like

Pieces that were deleted (as far as the client is concerned) should be caught by bloom filters. The thing is, the satellite should stop tracking those pieces the moment they are deleted, so I don’t think the graphs contain that data anyway. It’s all a matter of a piece being excluded by a bloom filter and moved to trash, then cleaned up by trash-cleanup.
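The trash flow described above can be sketched with a toy bloom filter (illustrative only; the SHA-256-derived bit positions are my assumption, not Storj's actual filter format). The key property is that the filter never rejects a piece the satellite still tracks, so GC can only over-retain garbage, never wrongly delete customer data:

```python
# Toy bloom-filter GC: satellite builds a filter over tracked pieces,
# the node trashes anything the filter definitely does not contain.
import hashlib

class Bloom:
    def __init__(self, bits: int = 1024, hashes: int = 3):
        self.bits, self.hashes = bits, hashes
        self.array = 0  # bit array packed into one int

    def _positions(self, item: str):
        # Derive `hashes` bit positions from SHA-256 (an assumption here).
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.bits

    def add(self, item: str):
        for p in self._positions(item):
            self.array |= 1 << p

    def might_contain(self, item: str) -> bool:
        return all(self.array >> p & 1 for p in self._positions(item))

# Satellite side: filter over the pieces it still tracks.
tracked = {"piece-a", "piece-b"}
bloom = Bloom()
for piece in tracked:
    bloom.add(piece)

# Node side: a rejected piece is guaranteed untracked -> move to trash.
on_disk = {"piece-a", "piece-b", "piece-deleted"}
trash = {p for p in on_disk if not bloom.might_contain(p)}
print(sorted(trash))
```

False positives (untracked pieces the filter happens to accept) just survive until a later filter catches them, which is where the storage overhead in this thread comes from.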

1 Like

That pie looks like my nodes. :sweat_smile: Customers! We are waiting! We have plenty of space!

1 Like

I am still not fully happy with the storage overhead.

Just bumped the max BF size to 17 MB (from 12). We’ll see how it works.
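For context on why the max filter size matters: with the standard approximation for a bloom filter's false-positive rate, a larger filter over the same number of pieces accepts less garbage, so less dead data survives each GC pass. A back-of-the-envelope sketch; the 100M piece count is a made-up assumption for illustration, and only the 12 MB and 17 MB sizes come from the post above:

```python
# Standard bloom filter false-positive approximation:
#   p ~= (1 - e^(-k*n/m))^k,  with near-optimal k = (m/n) * ln 2
import math

def false_positive_rate(size_bytes: int, items: int) -> float:
    m = size_bytes * 8                          # bits in the filter
    k = max(1, round(m / items * math.log(2)))  # near-optimal hash count
    return (1 - math.exp(-k * items / m)) ** k

n = 100_000_000  # hypothetical number of tracked pieces
for mb in (12, 17):
    rate = false_positive_rate(mb * 2**20, n)
    print(f"{mb} MB filter: ~{rate:.1%} of garbage retained per pass")
```

The exact percentages depend entirely on the assumed piece count, but the direction is what matters: bumping the cap from 12 MB to 17 MB strictly lowers the retained-garbage fraction for the same node.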

8 Likes

AAAAaaaaaaaaaaaaaaaaaaaaand we’re back to pre-cleanup levels. Very impressive ingress numbers for these tests. Now we patiently wait for the “all my data is being deleted!” posts when the TTL runs out :slight_smile: I love this product.

What am I looking for?

Ahh yes, the week column is missing. Two weeks ago, I lost 9TB of stored space. The week after had almost 6TB of ingress, and this week (ending Tuesday) has 4.4TB so far, which puts me above my previous high.

I think it’s the current normal?

1 Like

Yes, that is exactly what I said in the first post :slight_smile: All is well, no issues.

1 Like

Is there an estimated date when these tests will end?
It would be nice to see the nodes grow but with data that will not be deleted.

1 Like