4 TB node has been full for the past few months, still only 2.17 TBm storage paid

The container of my storage node indeed holds 4 TB of data, but the payment and average disk space are 1.8 TB lower. Does anybody have any advice?

I hope this screenshot makes the situation a bit clearer:

Hmm… Can you tell me what version your storagenode is running? Also, if you have logs, can you look for entries that contain the string “retain”?


Version is: 1.78.3
But the issue has been there for some time (also in previous versions), just was not sure if my interpretation of average disk space used this month was correct, so waited till the node was full for a month, in any definition the average of a full month full should be the full amount. :-).

docker logs storagenode 2>&1 | grep retain

gives 0 results

How long do those logs go back? I’d expect a retain call around a couple of times per week.


Less than 24 hours, due to an OS reboot.

Dang. Then it’s not unexpected that there are no retain calls. The only thing I can think of is that for some reason the garbage collection process isn’t working for your node. Can you keep watching the logs for lines with “retain” in them? It might take a couple of days, unfortunately.


I will get back to you when there are some log lines with retain in them. Thanks a lot for responding so far.


Hello @Sfynx,
Welcome to the forum!

Please check all your databases:
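A sketch of one way to run that check, assuming the sqlite3 CLI is installed and the databases sit in the node's storage directory (the path below is an example; replace it with your own):

```shell
# Example path; replace with your node's storage directory.
cd /mnt/storj/storagenode/storage

# PRAGMA integrity_check prints "ok" for a healthy database.
for db in ./*.db; do
    printf '%s ' "$db"
    sqlite3 "$db" 'PRAGMA integrity_check;'
done
```

Any database that prints something other than a plain "ok" is damaged and needs repair.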

What’s the filesystem on your disk?

df -T --si

Thanks for the welcome,

FS is btrfs

Results for all db checks return ok:

./info.db ok
./bandwidth.db ok
./orders.db ok
./piece_expiration.db ok
./pieceinfo.db ok
./piece_spaced_used.db ok
./reputation.db ok
./storage_usage.db ok
./used_serial.db ok
./satellites.db ok
./notifications.db ok
./heldamount.db ok
./pricing.db ok
./secret.db ok
./notifications.db.db ok

Also, fsck on the filesystem shows all is correct.

The actual amount in the ‘blobs’ directory is that 4 TB…

Oh, still no “retain” in the logs… so I will report back when that happens.

Thanks for all the help so far.

This happened to me before. You should try to recreate all the databases. And don’t forget to back up before doing so.

Tried that, removed all the DBs and let them rebuild. It took a while for the data to return, but still the same result.

Day 3 still no retain… will keep you all posted.

This could be a reason, unfortunately. btrfs has also proven slow for Storj and has many other issues, see Topics tagged btrfs

If you have already rebuilt the databases, then it seems there is nothing else you can do.

You may also check the actual usage (replace with your own path):

du -s --si --apparent-size /mnt/storj/storagenode/storage

For me it took a month; some things are scheduled to run monthly.

Did you ever get a fix for this problem? I have the same issue on one of my containers :slight_smile:

Do this, and if you see the correct size, the problem is a large sector size.
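A sketch of the comparison, assuming GNU coreutils and an example path: the first command reports allocated (on-disk) size, the second the apparent (logical) size. A large gap between the two points to per-file allocation overhead, such as a large sector size.

```shell
# Allocated size vs apparent size; replace the example path with your own.
du -s --si /mnt/storj/storagenode/storage
du -s --si --apparent-size /mnt/storj/storagenode/storage
```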

Done that: du -s --si --apparent-size, and the result is 4.1 TB too. The problem still remains… it did jump to 2.25 TBm though… no clue why. Still no retain, but I really need to get this Docker image to write its logs outside the container; every restart I lose the logging.
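One way to persist the log outside the container, documented by Storj, is to redirect the node’s own log output to a file inside the mounted config directory. A sketch of the config.yaml entry, assuming the standard docker setup where /app/config is the container-side mount point of the host storage directory:

```yaml
# In config.yaml inside the mounted storage directory.
# /app/config is mounted from the host, so the log survives restarts.
log.output: "/app/config/node.log"
```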

Hoping to get some insights from that retain logging.

Logging is fixed, and I made a small script that alerts me when my logs mention retain… Hope this will catch the suggestion of zeebo. I’ll keep you all posted.
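A minimal sketch of such a watcher script; the log path and the notification step are assumptions, so replace them with your own:

```shell
#!/bin/sh
# Hypothetical log location; point this at wherever your node writes logs.
LOGFILE="${1:-/mnt/storj/storagenode/node.log}"

# Count retain entries; grep exits non-zero on no match, so default stays 0.
count=$(grep -c 'retain' "$LOGFILE" || true)
echo "retain lines seen: $count"

# A real script would send a mail or push notification here when count > 0.
```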


Three of the six satellites have done some retain actions; these were in the logs…

2023-06-07T21:45:01.796Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "Created Before": "2023-04-16T02:09:42.044Z", "Filter Size": 29, "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2023-06-07T21:45:02.031Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "num deleted": 0, "Retain Status": "enabled"}
2023-06-07T23:13:13.245Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "Created Before": "2023-06-04T05:59:59.694Z", "Filter Size": 8625, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2023-06-07T23:13:17.535Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "num deleted": 0, "Retain Status": "enabled"}
2023-06-08T06:41:30.642Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "Created Before": "2023-01-22T20:31:29.101Z", "Filter Size": 1422, "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2023-06-08T06:41:31.906Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "num deleted": 0, "Retain Status": "enabled"}

So it should remove deleted/expired data from your node.
Let’s hope it will match what’s reported.

Small update:

There still is a discrepancy between total used disk space and average disk space used this month.

5/6 satellites have had a retain run so far.

Only us1.storj.io is still missing… hoping that will clean out a lot of files.
