2024-05-06T14:54:33Z INFO lazyfilewalker.gc-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-05-06T14:54:33Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 24352, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 1363663, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "13m22.8536556s", "Retain Status": "enabled"}
US1 BF generation finished during the night. The satellite is now just sending out BFs.
I believe EU1/AP1/SLC were sent earlier during the week…
We are bumping the max BF size to 12 MB…
BF generation will also be more frequent (this was planned earlier, but it didn’t work well; now it should).
Just thinking aloud: would it help if more frequent generation of bloom filters were limited to nodes crossing the former maximum of 4 MB? This would reduce the load on smaller nodes (which don’t need more frequent bloom filters), while potentially reducing the amount of compute needed to generate these bloom filters satellite-side (perhaps allowing them to be more frequent for larger nodes).
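To put rough numbers on that, here’s a back-of-the-envelope sketch using the textbook Bloom filter sizing formula; the 10% false-positive target is my assumption, not a confirmed satellite parameter:

```python
import math

def max_pieces(size_mb: float, target_fpr: float = 0.10) -> float:
    """Pieces an optimally tuned Bloom filter of a given size can cover
    while staying at the target false-positive rate.
    Textbook sizing: bits per element = -ln(p) / (ln 2)^2."""
    m_bits = size_mb * 8 * 1024 * 1024
    bits_per_piece = -math.log(target_fpr) / (math.log(2) ** 2)
    return m_bits / bits_per_piece

for size in (4, 12):
    print(f"{size:>2} MB filter -> ~{max_pieces(size) / 1e6:.1f}M pieces at 10% FPR")
# 4 MB  -> ~7.0M pieces
# 12 MB -> ~21.0M pieces
```

So a 4 MB cap only covers about 7M pieces before the false-positive rate starts creeping above the target, which is exactly where bigger nodes would benefit from separate treatment.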
It will stop when it hits zero used space.
At this rate you’re only going to be left storing a single cat video!
It’s not possible without breaking the rules: a node can store only one of the 80 pieces of any given segment of that video. If the video has more than one segment, well, each segment is spread across 80 independent nodes.
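For anyone curious, here’s a toy sketch of that placement rule; the 80 pieces per segment is from this thread, while the node pool and segment count are made up for illustration:

```python
import random

PIECES_PER_SEGMENT = 80  # from the thread; one segment's pieces go to distinct nodes

def place_segments(num_segments, node_pool):
    """Assign each segment's pieces to distinct nodes, one piece per node per segment."""
    holdings = {}
    for seg in range(num_segments):
        # sampling without replacement = no node gets two pieces of the same segment
        for node in random.sample(node_pool, PIECES_PER_SEGMENT):
            holdings.setdefault(node, set()).add(seg)
    return holdings

nodes = [f"node-{i}" for i in range(500)]
holdings = place_segments(num_segments=10, node_pool=nodes)

# No single node can hold more than one piece per segment, so no node alone
# can reconstruct even one segment of the video, let alone all of it.
print(max(len(segs) for segs in holdings.values()), "is the most segments any one node touches")
```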
@snorkel, the “used this month” metric is at 1.7TB average today, so I fully expect many more deletions on this node. It’s my oldest node, and it has been sad to see it go from 9TB used to … well, whatever it ends up at.
@Roxor, please upload one to StorJ, and I’ll happily store it for you.
@Alexey, at this pace, it’s tempting to travel all over the world building additional nodes, just to be sure I at least hold a single cat video.
Yes, I can understand. However, according to this, it will likely be refilled soon, at high speed and with TTL data that expires on its own, without waiting for GC.
No problem; after 7 days you would have 1.76TB more free space.
In all my time on StorJ combined, this week has by far been the roughest.
It’s with a small tear in my eye that I report the nodes in one location are together down almost 10TB.
Looking forward to the new ingress normal!
Almost 10TB in trash across all nodes. It’s a lot.
I hope it gets filled rather quickly.
I feel your pain. My 6 nodes went from a total of 23TB down to 11TB. Ouch!
Because nodes are deleting GC data that is 7-10 days old, and some take one or two days to delete a TB of it.
I think this is the reality of the actual situation: most of the data stored was free tier.
Pieces that were deleted (as far as the client is concerned) should be caught by blooms. The thing is, the satellite should stop tracking those pieces the moment they are deleted, so I don’t think the graphs contain that data anyway. It’s all a matter of a piece being excluded by a bloom filter and moved to trash, then cleaned up by trash-cleanup.
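Roughly, that flow looks like this; a minimal sketch, not the storagenode’s actual code, with a plain set standing in for the Bloom filter and the 7-day trash window mentioned earlier in this thread:

```python
from datetime import datetime, timedelta

TRASH_RETENTION = timedelta(days=7)  # the trash window discussed above

def retain(local_pieces, bloom, trash):
    """Move every piece the Bloom filter doesn't claim into trash.
    Bloom filters have no false negatives, so live pieces are never trashed;
    false positives just keep some garbage around until the next filter."""
    now = datetime.now()
    kept, trashed = [], 0
    for piece_id in local_pieces:
        if piece_id in bloom:   # "maybe live" (or a false positive)
            kept.append(piece_id)
        else:                   # definitely no longer tracked by the satellite
            trash[piece_id] = now
            trashed += 1
    return kept, trashed

def trash_cleanup(trash):
    """Permanently delete anything past the retention window."""
    cutoff = datetime.now() - TRASH_RETENTION
    for piece_id in [p for p, t in trash.items() if t < cutoff]:
        del trash[piece_id]

trash = {}
print(retain(["a", "b", "c"], bloom={"a", "c"}, trash=trash))  # (['a', 'c'], 1)
```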
I am still not fully happy with the storage overhead.
Just bumped the max BF size to 17 MB (from 12). We’ll see how it works.
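Plugging 17 MB into the sizing sketch from earlier in the thread (same assumed 10% false-positive target) gives roughly 29.8M pieces covered, up from ~21M at 12 MB:

```python
print(f"17 MB -> ~{max_pieces(17) / 1e6:.1f}M pieces at 10% FPR")  # ~29.8M
```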