I’m not entirely sure that’s the case. Another possible explanation for the low numbers in the TTL database could be that the nodes are full and therefore not accepting more data. Given that my databases are in the gigabyte range, I would expect them to contain many records.
That said, I’m open to exploring this avenue further to ensure we cover all possibilities. Let’s look into why the TTL entries might not be properly inserted, just to make sure we’re not missing anything.
Okay… So I’ve taken a closer look and counted all the pieces in the ‘piece_expirations’ db by expiration date (see the left side of the screenshot). I also counted the successful uploads from SL in the logs, starting on July 26, since I only have INFO logs from halfway through July 25 (as shown on the right side of the screenshot). These pieces should expire around August 25 (30 days after). I then calculated the percentage difference between them.
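In case anyone wants to reproduce that count on their own node, something along these lines should work (a minimal sketch: it assumes the table is piece_expirations with a piece_expiration timestamp column, so check the schema on your version first, and the DB path is just an example):

```python
# Count TTL rows per expiration day in piece_expiration.db.
import sqlite3

DB_PATH = "/mnt/storj03/storage/piece_expiration.db"  # example path, adjust to your node

con = sqlite3.connect(DB_PATH)
rows = con.execute(
    "SELECT date(piece_expiration) AS day, COUNT(*) "
    "FROM piece_expirations GROUP BY day ORDER BY day"
).fetchall()
for day, count in rows:
    print(day, count)
con.close()
```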
@littleskunk, I’m curious about what these numbers truly indicate. Additionally, I’ve noticed some pieces are set to expire on 9999-12-31. Could someone explain what this means?
Edit: Actually I made a mistake, which I now corrected in the screenshot. I’m usually close to 100%, so it seems there is no issue with inserts into the db, right?
Your TTL DB looks good. We don’t know for sure because there aren’t as many uploads as, for example, on 2024-08-12 with almost a million, but the existence of that data point indicates it can handle it.
In your case the cleanup is also up to date. One of my nodes was falling behind by a week, and that would be visible in the TTL DB.
So far that matches my observations as well. How about your success rate? There was a ticket about canceled uploads still getting stored on disk.
Anyway, these unpaid files should have been deleted by GC, but in reality they were not. This issue only occurred on the Saltlake satellite; there was not much difference between actual payment and disk usage on the other satellites.
The Bloom filter file I received is too large. Based on the used space reported by the satellite, the Bloom filter file should not be that large. I suspect it is due to wrong parameters when generating the Bloom filter, and that expired files were not taken into account.
@littleskunk, Instead of counting successful uploads from the logs, I now checked the number of pieces in the SL blob folders for the days with a lot of TTL data (find /mnt/storj03/storage/blobs/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa -maxdepth 2 -type f -daystart -mtime 23 | wc -l).
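If you want the same per-day counts without rerunning find for every -mtime value, a small sketch like this (pure Python; paths are examples, the SL folder name is the one from the command above) walks the blob folder once and buckets files by modification day:

```python
# Count SL blob files per modification day, for comparison with the
# per-day counts from the piece_expirations table.
import collections
import datetime
import os

BLOBS = "/mnt/storj03/storage/blobs/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa"

per_day = collections.Counter()
for root, _dirs, files in os.walk(BLOBS):
    for name in files:
        mtime = os.path.getmtime(os.path.join(root, name))
        per_day[datetime.date.fromtimestamp(mtime)] += 1

for day in sorted(per_day):
    print(day, per_day[day])
```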
I want to help by looking at my nodes and comparing these upload success logs with the TTL DB. Does the log show something different when an upload has a TTL vs when it doesn’t? Or does the piece_expiration.db store a line for every uploaded piece?
A lot has already been said in this topic about TTL expiration and GC:
@littleskunk, Yes, while it’s true that TTL data should automatically be cleaned up after expiration, I’m starting to wonder if @donald.m.motsinger might be right that TTL data is not being properly deleted. If that’s the case, then GC should be able to handle it as uncollected garbage. Given that the TTL database seems fine, and considering that SL reports indicate I have around 50% less data stored than I actually have on my hard disks (according to payouts), the BFs should be taking care of this, right? Why might this not be working as expected?
Any news from @elek about this would be appreciated.
It looks like my payout from Saltlake was what the node estimated it to be: 18 TBm, even though the used space on disk is much more (probably twice as much). So, looking at the earnings.py script, what it reports as “uncollected garbage” appears to be correct; not all TTL data is deleted for some reason, and it looks like only a small part of it is. The script says that my node currently has 32 TB of uncollected garbage, and I am inclined to believe it.
So, there are two questions:
Why does the TTL data not get deleted when it should?
Why do the bloom filters not catch the TTL data that has expired?
I don’t have an update yet. I was looking into my TTL DB and noticed there are no recent TTL uploads. Sounds like a great trace to follow, but there are no uploads from SLC for the last few hours besides some repair activity. SLC uploads have now been restarted. I will have to check my TTL DB again.
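Once SLC uploads are flowing again, one quick sanity check could be to count the rows that expire roughly 30 days from now, since the SLC test data discussed here uses a 30-day TTL (again a sketch, assuming a piece_expirations table with a piece_expiration timestamp column and an example DB path):

```python
# Count pieces expiring ~30 days from today, i.e. TTL rows from today's uploads.
import datetime
import sqlite3

DB_PATH = "/mnt/storj03/storage/piece_expiration.db"  # example path, adjust to your node
target = datetime.date.today() + datetime.timedelta(days=30)

con = sqlite3.connect(DB_PATH)
(count,) = con.execute(
    "SELECT COUNT(*) FROM piece_expirations WHERE date(piece_expiration) = ?",
    (target.isoformat(),),
).fetchone()
print(f"pieces expiring on {target}: {count}")
con.close()
```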
Interesting. For 2 days there was no new TTL data, so old data should have been deleted and the overall used space should have shrunk over those 2 days, but in my case it grew; not by much, but it grew from uploads from other satellites.
And from SLC I normally get about 2-3 TB per day.
I am on version 1.109.