So it has almost finished 14/32 of the xx folders, if it was at folder nz this morning. It may be better to check with lsof.
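A minimal sketch of that lsof check, assuming the node runs as a process literally named `storagenode` (adjust the process name and pattern to your setup):

```shell
# Sketch: list which blobs subfolders the node currently has open,
# to see how far the filewalker has progressed.
# Assumes a process named "storagenode"; adjust for your setup.
pid=$(pidof storagenode 2>/dev/null || true)
if [ -n "$pid" ]; then
  # Extract the two-letter subfolder names from the open file paths.
  lsof -p "$pid" 2>/dev/null | grep -o 'blobs/[^/ ]*/[a-z2-7][a-z2-7]' | sort -u
else
  echo "storagenode process not found"
fi
```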
As far as I understand, a potential reason why this kind of deletion is so terribly slow on ext4 is that for every piece its metadata must additionally be retrieved to get its size, so that the node's records can be updated with how much space has been freed.
My /mnt/x/storagenode/storage/blobs/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa folder is full of stuff. The du command would take forever, and I suspect will come back with about 3TB.
This suggests that your node may be disqualified or suspended on that satellite.
It says it’s not. My Suspension, Audit, and Online scores are all at 100%. There was no reason to suspend it, since it was never offline for any extended period of time.
Please also remove the data of untrusted satellites: How To Forget Untrusted Satellites
Wouldn’t this block me from using saltlake at all?
There is a faster way to get a rough estimate of the disk usage for one satellite: get the size of one xx folder (or the average of a few of these folders) and multiply it by 1024.
@Alexey please ban this scammer
3.0 GB in one of them, so times 1024 that is 3072 GB, or about 3 TB.
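As a sketch, the estimate can be scripted; the blobs path below is the one mentioned earlier in the thread, and the 3 GB fallback value is only there so the example prints something when the path doesn't exist:

```shell
# Rough per-satellite disk usage estimate: size of one two-letter
# subfolder times 1024 (the number of such subfolders).
BLOBS=/mnt/x/storagenode/storage/blobs/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
one_kb=$(du -sk "$BLOBS/aa" 2>/dev/null | cut -f1)  # size of folder "aa" in KB
one_kb=${one_kb:-3145728}                           # demo fallback: 3 GB
total_gb=$(( one_kb * 1024 / 1048576 ))
echo "estimated total: ${total_gb} GB"
```

For a better estimate, average the sizes of a handful of xx folders before scaling up, since individual folders can vary.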
Best is to report him as spam. Then a team member will deal with him
Finally, I have some free space back. I didn’t do anything, just waited. Thank you for the fix.
Interesting. How can it have free space and overused space at the same time?
@Alexey any idea?
See:
Aitor shows his multi node dashboard where one or more nodes are overused, while others have free space available.
Yes, multinode. I still don’t understand the overusage, because I read that after detecting 5 GB of overuse the node stops accepting data.
Easy. Some of the nodes have overused space, some do not. The summary on the multinode dashboard just sums up all the indicators.
Yes. But overused means that the node used more space than allocated. While it had the issue with the incorrectly calculated used space, it was not aware that it had actually used its whole allocation. When the issue was fixed (the used-space filewalker did its job), the node became aware of the overusage.
Please also remove the data of untrusted satellites: How To Forget Untrusted Satellites
This node is too new and was started after those two satellites were shut down. I could use it to forget saltlake, since it’s not doing anything but taking up space, but I have about $4 held back by it. What do you suggest? I don’t know why saltlake is not seeing any of the 3TB of data that it’s storing.
Do you think it is necessary to keep the “storage2.piece-scan-on-startup” option set to “true” all the time? Or is it enough to enable it occasionally to collect information about the files and display correct information on the dashboard, and keep the option disabled (“false”) most of the time?
I don’t know if it makes sense to do a full scan of the files after each node restart, even with the “badger cache” enabled. Although it takes less time, it still takes hours.
If you face frequent restarts and the additional load caused by the scan is problematic, it could be a good idea to disable the piece scan on startup feature.
The drawback of disabling startup piece scans is that the statistics on the dashboard may drift from the real values (I’m not sure whether there are still bugs remaining that cause this drift).
The main issue that can arise from the inaccurate stats is that the node might believe it’s full when it’s not actually full (stopping ingress), or it might use more space than it should because it believes it’s using less space than it actually is.
If you don’t mind that the dashboard shows possibly inaccurate values, and you keep an eye on these statistics, then turning it off should be fine.
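For reference, a sketch of how the option can be set; the key is spelled exactly as quoted above, and the location of config.yaml depends on your setup:

```yaml
# config.yaml (sketch): disable the file scan on node startup
storage2.piece-scan-on-startup: false
```

After changing the option, the node has to be restarted for it to take effect.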
If you do not have data of old satellites, then you do not need to run the forget satellite command. You also do not need to forget the Saltlake satellite. If you want to leave this satellite, you may call a Graceful Exit on it: Graceful Exit Guide (new procedure as of 2023-10-?)
The garbage should be moved by the retain process to the trash and deleted 7 days later.
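A quick sketch for checking what retain has already moved to the trash (the storage path is hypothetical; adjust it to your node):

```shell
# Show per-satellite trash usage; pieces here are deleted about
# 7 days after the retain run that moved them.
TRASH=/mnt/x/storagenode/storage/trash
du -sh "$TRASH"/* 2>/dev/null || echo "no trash directories found"
```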
When did your node last receive a BF (bloom filter) from the Saltlake satellite?
I didn’t find any records of it in your log excerpt.
docker logs storagenode 2>&1 | grep "retain" | grep "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE" | tail
I have a suspicion: the Saltlake satellite is sure that your node doesn’t have any data (it’s literally zero according to your graphs, and other nodes in the network have received at least one correct report from this satellite), and the number of audits hasn’t changed for days according to your logs. So I can guess that the satellite does not issue a BF either, because the BF should contain the pieces your node is supposed to keep; since, according to its databases, there shouldn’t be any pieces, it doesn’t generate a BF…
In that case you need to receive something from that satellite to allow it to generate a BF. But it cannot, because the node is full, even with overusage… You need to have free space to allow the node to receive something from the Saltlake satellite.
I can only offer to remove all files from the temp folder. If there is nothing there, the next suggestion is to perform an extremely dangerous action: try to remove pieces from one of the subfolders in the blobs folder for that satellite (see Satellite info (Address, ID, Blobs folder, Hex)) and restart the node to allow it to have free space. Then maybe something will arrive from the Saltlake satellite, and then wait for it to send the next BF.
I can only offer to remove all files from the temp folder. If there is nothing
There was nothing…
the next suggestion is to perform an extremely dangerous action: try to remove pieces from one of the subfolders in the blobs folder for that satellite (see Satellite info (Address, ID, Blobs folder, Hex)) and restart the node to allow it to have free space.
I was going to, but I’m going to try a slightly different approach. It’s a 4 TB drive (3.6 TB usable), so I increased my allocation to 3.2 TB to give it some space to write and check, as you said it might. I had previously reduced it to 3 TB because, when it was filling up before, every time I restarted it, it kept claiming that it hadn’t used any space and kept filling until the drive was full. This time I restarted and it’s reporting the amount I actually have. Hopefully this fixes it. Thanks.
I submitted a bug for your case:
Saltlake added (ingress) 21 MB to my node (the remaining almost 200 GB were added by other satellites). Let’s see if this will be enough to trigger it to work. Despite adding 21 MB, my node reports that only 42.16 KB is now being used by Saltlake.
Then it shall receive a BF soon. This should trigger a GC and retain.