4 TB node has been full the past few months, still only 2.17 TB of storage paid

I also hit a good GC:

2023-07-06T22:00:42.142Z        INFO    retain  Prepared to run a Retain request.       {"process": "storagenode", "Created Before": "2021-01-30T02:33:55.130Z", "Filter Size": 597951, "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2023-07-06T22:02:24.695Z        INFO    retain  Moved pieces to trash during retain     {"process": "storagenode", "num deleted": 0, "Retain Status": "enabled"}
2023-07-07T10:46:16.236Z        INFO    retain  Prepared to run a Retain request.       {"process": "storagenode", "Created Before": "2023-07-03T11:59:59.912Z", "Filter Size": 229, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2023-07-07T10:47:35.624Z        INFO    retain  Moved pieces to trash during retain     {"process": "storagenode", "num deleted": 0, "Retain Status": "enabled"}
2023-07-07T23:50:59.064Z        INFO    retain  Prepared to run a Retain request.       {"process": "storagenode", "Created Before": "2023-07-04T15:00:09.006Z", "Filter Size": 597951, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2023-07-07T23:56:44.193Z        INFO    retain  Moved pieces to trash during retain     {"process": "storagenode", "num deleted": 368116, "Retain Status": "enabled"}

2nd retain hit:

2023-07-10T15:36:24.994Z        INFO    retain  Prepared to run a Retain request.       {"process": "storagenode", "Created Before": "2023-07-06T05:59:59.833Z", "Filter Size": 597951, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2023-07-10T15:55:14.437Z        INFO    retain  Moved pieces to trash during retain     {"process": "storagenode", "num deleted": 1078438, "Retain Status": "enabled"}
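As a side note, you can tally how many pieces all retain runs moved to trash straight from the node log. This is a minimal sketch assuming the console log format shown above; the `RETAIN_RE` pattern and `total_deleted` helper are my own names, and you would need to adjust the regex if your node logs in a different format (e.g. JSON output).

```python
import re

# Matches the "Moved pieces to trash during retain" log line and captures
# the "num deleted" count. Pattern is based on the log excerpts above.
RETAIN_RE = re.compile(r'Moved pieces to trash during retain.*?"num deleted": (\d+)')

def total_deleted(log_lines):
    """Sum pieces moved to trash across all retain runs found in the log."""
    return sum(
        int(m.group(1))
        for line in log_lines
        if (m := RETAIN_RE.search(line))
    )

# Example using the two non-zero retain runs quoted above:
lines = [
    '2023-07-07T23:56:44.193Z INFO retain Moved pieces to trash during retain {"process": "storagenode", "num deleted": 368116, "Retain Status": "enabled"}',
    '2023-07-10T15:55:14.437Z INFO retain Moved pieces to trash during retain {"process": "storagenode", "num deleted": 1078438, "Retain Status": "enabled"}',
]
print(total_deleted(lines))  # 1446554
```

Running it over the full log (e.g. `docker logs storagenode 2>&1`) gives a rough idea of how much GC has reclaimed since the fix.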

resulting in:

This is going the right way; I hope it helps you, Zeebo, to confirm this was a good fix. I will report back in a month or so with the delta between average and total disk space once enough of those retains have run.

Same for me. Glad we have a fix.

oh no…

This is only partially because of the bug being resolved. Don't forget they are also removing data for the 2 test satellites being shut down, which is now also handled by GC.

Be that as it may, this is definitely an improvement from a node operator's perspective. I lost half my storage to junk that didn't pay anything and that I could not clean out myself.

I see that across all my nodes over 20 TB was deleted, and over the last 2 weeks over 40 TB was deleted from the europe-north satellite.

Hello all. My issue was resolved by the update as well. Ouch :smiley:

None of these screenshots show that the issue has been resolved. In fact, they show the opposite as there is clearly still a big discrepancy between average disk used on the left and used on the right. I think people are confusing the trash due to clean up of the satellites that will be decommissioned with a solution for the original issue posted here.

BrightSilence: as the one who started this thread: this does solve my issue. I had a full node, and only 2.1 TB was paid for; that last 1.9 TB sat on my disks for the past months and was not getting marked as trash. After the fix to the bloom filters, this space is being marked as trash and cleaned up. I am currently at a 200 GB difference between Used and Average. Used disk space doesn't get paid; Average does.

Also, the deletes are from all satellites, not just one.

Now, of course the main question is why I had stale data on my node for months and how that came to be. But to be honest, I am glad I got my 1.8 TB of disk space back that I was not getting paid for anyway.

Feel free to start your own thread on the satellite decommissioning. But my issue was the stale data… and the delta between Used and Average.

Well I am precise in my wording and what I said is that none of the screenshots show that it has been resolved. Your last screenshot still had a 1TB discrepancy as well. I’m glad it’s getting smaller though. I’m just baffled to see people posting screenshots that still show a big issue and then say it’s resolved.

For some of us, the issue is getting worse, not better. This month I saw my average disk space used drop from about 7.5 TB to 6 TB, while reported used space stayed at 11 TB.

As it stands my node is now not generating enough income to cover its electrical running costs. It would be rather helpful if the issue could be resolved sooner rather than later.

Same, the average disk even declines over time. What is happening?

Exit of the 2 test satellites. This is to be expected since a lot of data is being removed.

May I ask if you tried to restart your node? I had the problem too, and after a restart it got quite a bit better. But I had to wait a while until GC was finished (the drive was a bit overloaded).

The reported 7.5 TB of ‘average disk used’ had persisted for a number of months while ‘Used’ stayed at 10.95 TB. So the drop of 1.5 TB in the last few weeks is likely due to the satellite changes, but there remains a very large difference between the 2 reported values.

The node was last restarted 4 weeks ago.

I think this is my last post in this thread:

There is only a 0.06 TB difference left between Used and Average disk space, and the retains still remove small amounts. So for me, the issue of nodes not cleaning themselves out nicely is resolved.

I have exactly the same figure, problem solved.

Well, no luck for me, I think.