Bloom filter (strange behavior)

I checked. No, it does not delete any file.
It removes the bloom filter file only once the storagenode has started working with it (INFO retain Prepared to run a Retain request…).

If, while the storagenode is running, you copy the bloom filter file to the /retain folder and then stop the storagenode service, nothing happens - the bloom filter file is not deleted.

On the next start the storagenode service begins working with the bloom filter file found in the /retain folder (INFO retain Prepared to run a Retain request…). And when the storagenode service is stopped in this case, the bloom filter file is deleted.

Ok, so it is clear that this is specific to the bloom filter file only, not some directory purging function.
And it only deletes the bloom filter on the second stop of the node, after it has started to process the file.
To me this sounds like it gets an incorrect signal. Instead of "processing failed or not finished" (and therefore keeping the file), it receives "processing successful", which makes it OK for the code to delete the file.
Or the signal is given at the wrong time: when it starts to process the file instead of at the end.
Edit: Does the bloom filter file get correctly deleted after successful completion of the GC?
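To illustrate the suspicion (a minimal sketch with made-up names, not the actual storagenode code): if the cached filter file is removed whenever the retain run ends, it also gets removed when the run ends only because the service was stopped.

```go
package retainsketch

import (
	"context"
	"os"
)

// walkPiecesToTrash stands in for the real GC filewalker; it returns
// ctx.Err() when the node is shut down mid-run.
func walkPiecesToTrash(ctx context.Context, filterPath string) error {
	// ... walk pieces, move garbage to trash ...
	return ctx.Err()
}

// Suspected pattern: the filter file is removed no matter how the run ended.
func runRetain(ctx context.Context, filterPath string) error {
	defer os.Remove(filterPath) // removed on success AND on interruption
	return walkPiecesToTrash(ctx, filterPath)
}

// What the observed behavior suggests it should be: delete only on success,
// keep the file on interruption so the next start can pick it up again.
func runRetainExpected(ctx context.Context, filterPath string) error {
	if err := walkPiecesToTrash(ctx, filterPath); err != nil {
		return err
	}
	return os.Remove(filterPath)
}
```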

1 Like

I had 23 nodes on Windows 10 and 23 nodes on Windows 11.
The bloom filter file was deleted on all 46 nodes.

All nodes are separate. 1 node - 1 system.

I migrated 23 nodes that were running Windows 10 to freshly installed Windows 11.

But it didn’t change anything.
Now I have 46 nodes on Windows 11, all of which have this problem with the bloom filter file being deleted.

If the node is not restarted while it is processing the bloom filter file, then processing completes successfully, with this entry in the log file:

INFO retain Moved pieces to trash during retain {“cachePath”: “X:\!STORJ901-2\WORKDIR/retain”, “Deleted pieces”: 6252, “Failed to delete”: 0, “Pieces failed to read”: 0, “Pieces count”: 1067886, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Duration”: “2m36.7287917s”, “Retain Status”: “enabled”}

and after that the bloom filter file is deleted from the /retain folder, since its processing is complete.

So basically there is a premature deletion of the file when the node gets stopped during processing, instead of at the end. The developers should be able to nail that down.

1 Like

Yes. It is very important to fix this problem, because things cannot stay like this: nodes contain several terabytes of garbage, which takes up space and needs to be collected and removed. The new feature is excellent and works properly on Linux systems, but on Windows there is a problem.

The owner (SYSTEM) has the ability to remove the file; the storagenode service runs under the SYSTEM user.
You need to not just change the security permissions, which wouldn’t help, but set the read-only attribute instead.
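For example, a minimal Go sketch of one way to set that attribute on every file in the retain folder (the path is the one quoted later in this thread - adjust it for your node; `attrib +R` in a console achieves the same thing):

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Adjust to your node's retain folder.
	retainDir := `X:\!STORJ913\WORKDIR\retain`

	entries, err := os.ReadDir(retainDir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		p := filepath.Join(retainDir, e.Name())
		// On Windows, clearing the write bits sets FILE_ATTRIBUTE_READONLY.
		if err := os.Chmod(p, 0o444); err != nil {
			log.Printf("could not protect %s: %v", p, err)
			continue
		}
		log.Printf("set read-only: %s", p)
	}
}
```

Remember to clear the attribute again afterwards, otherwise the node can never delete the filter even after a successful run.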

Ok.
I tried it with owner SYSTEM and set the read-only attribute on the bloom filter file.

I set permissions:

  • DENIED - SYSTEM - DELETE
  • ALLOW - SYSTEM - READ

After 5 minutes I stopped the storagenode service.
BUT NOW, when the storagenode service is stopped, the bloom filter file is NOT DELETED from the /retain folder. And the following lines appeared in the log:

WARN retain encountered an error while removing request from queue {“cachePath”: “X:\!STORJ913\WORKDIR/retain”, “error”: “remove X:\!STORJ913\WORKDIR\retain\v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa-1714759199976870000: Access is denied.”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”}

Based on the error, we can now say for sure that the request to delete the bloom filter file is made by the storagenode service itself. That should not happen, since processing of the bloom filter file has not completed, and therefore there should be no request to delete it.

@jtolio, @elek - What do you think?

1 Like

Was there anything more around that log line?

Just as a note, since I’m not sure I read you correctly, if you used a bloom filter from another node on a node it wasn’t meant for, you will almost certainly start failing audits soon. Just to be clear, you should absolutely not use any bloom filter not meant for the node you’re using it on. The bloom filter is a probabilistic list of data that should be kept, not of data that should be removed. So if you use it on the wrong node, the other node will clear itself out. Perhaps your other problem is preventing that from happening entirely, but it’s time to retire the node if that’s otherwise what happened.

We’ll keep looking, thank you for this discovery so far.

2 Likes

No. It was the bloom filter from this node, saved a few days ago. There’s nothing to worry about.

Before this log line there was:

INFO Stop/Shutdown request received.

ERROR filewalker failed to reset progress in database {“error”: “gc_filewalker_progress_db: context canceled”, “errorVerbose”: "gc_filewalker_progress_db: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*gcFilewalkerProgressDB).Reset:58\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePiecesToTrash.func1:171\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePiecesToTrash:244\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePiecesToTrash:565\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:369\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:258\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

ERROR retain retain pieces failed {“cachePath”: “X:\!STORJ913\WORKDIR/retain”, “error”: “retain: filewalker: context canceled”, “errorVerbose”: "retain: filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePiecesToTrash:178\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePiecesToTrash:565\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:369\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:258\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

There’s nothing more

How else can I help? I’m interested in a quick solution to this problem, as 46 nodes really need cleaning. Please keep me updated on progress on this issue.

1 Like

You are not alone with this error. I also have problems like this under Windows (all my nodes are on Windows). I did not collect statistics on where and under what circumstances the filters are removed, but after reading this thread I have seen it several times on different nodes. I’m just trying not to restart them.

2 Likes

Well, this is not a solution to the problem. It’s the same as life before - without the new “save the bloom filter to disk” feature. No difference.
Considering that the bloom filter is sent to nodes only once a week (or even once every 2 weeks), that is not workable. It is too easy to lose the bloom filter by accidentally restarting the node. And the nodes really need to be cleaned. I think many people have this problem now; everyone has been waiting for an effective cleaning mechanism.

1 Like

There’s the problem! Now, I have no idea why your node is getting shut down, but for sure, the current code will consider the GC run failed and delete the bloom filter.

We are working on changes that will allow a storage node to resume after a restart from where it left off, but these changes will take some time. Here is one: https://review.dev.storj.io/c/storj/storj/+/12806
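Purely as an illustration of the “resume from where it left off” idea (hypothetical names, not what the linked change actually implements): persist how far the walk got, and on the next start skip the part that was already done instead of treating the interrupted run as failed.

```go
package gcsketch

import (
	"context"
	"os"
)

// Progress records the last piece-key prefix that was fully walked.
type Progress struct {
	LastPrefix string
}

// runGC walks the pieces for one bloom filter, saving progress as it goes,
// so an interrupted run can be resumed instead of being thrown away.
func runGC(ctx context.Context, filterPath string, load func() Progress, save func(Progress) error) error {
	p := load() // empty on a fresh run

	for _, prefix := range prefixesAfter(p.LastPrefix) {
		if err := walkPrefix(ctx, filterPath, prefix); err != nil {
			// Interrupted (e.g. shutdown): keep the saved progress and the
			// bloom filter file so the next start resumes here.
			return err
		}
		p.LastPrefix = prefix
		if err := save(p); err != nil {
			return err
		}
	}
	// The full walk finished: only now is it safe to drop the filter file.
	return os.Remove(filterPath)
}

// Placeholders standing in for the real piece store.
func prefixesAfter(last string) []string                              { return nil }
func walkPrefix(ctx context.Context, filterPath, prefix string) error { return nil }
```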

Anyway, it is certainly not the case that we’re expecting storage nodes to restart regularly. Looking at a random sample of nodes, the average node has had no process restarts for over a week, and many have not been restarted at all in 20 days. Notably, we’re in the middle of a rollout, so node process uptime is lower than it might otherwise be.

Whether or not we get pause/resume completed soon, I think the key takeaway is your node will be happier if you can help it stay running. Do you have any indication of why it restarted?

2 Likes

I don’t understand that. Wasn’t the whole idea of storing the bloomfilter on disk to resume on a node restart? And the feature that introduced it was completed:

It also clearly states:

The bloomfilter is now stored on the disk from this change
https://review.dev.storj.io/c/storj/storj/+/12651, so we don’t lose
the bloomfilter on restart. But that means that when the bloomfilter
on the disk is loaded and added to the retain queue, it will start
scanning the directories all over again. This will continuously be the
behavior till the node is able to complete a full scan which might never
happen on a large node.

And why the assumption that the GC has failed when a node gets restarted? That is not necessarily the case; a node can be restarted for hundreds of reasons. The bloom filter should only be removed when the GC has actually failed, if the alternative would be that it continuously keeps scanning and failing.
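In other words (a hypothetical helper, not storagenode code), a run that was merely interrupted should be treated differently from one that genuinely failed:

```go
package gcsketch

import (
	"context"
	"errors"
	"os"
)

// handleRetainResult decides what to do with the bloom filter file after a
// GC run, distinguishing "interrupted by shutdown" from "actually failed".
func handleRetainResult(runErr error, filterPath string) error {
	switch {
	case runErr == nil:
		// The GC finished: the filter has served its purpose.
		return os.Remove(filterPath)
	case errors.Is(runErr, context.Canceled):
		// The node was stopped or restarted: keep the filter so the walk can
		// be retried (or resumed) on the next start.
		return nil
	default:
		// A genuine failure: a policy decision - keep the filter for another
		// attempt, or remove it to avoid rescanning and failing forever.
		return runErr
	}
}
```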

As you are saying that https://review.dev.storj.io/c/storj/storj/+/12806 would be the resume part, this means it is not just for the used-space filewalker, as the title says

storagenode/pieces: save-state-resume feature for used space filewalker

but also for the GC-filewalker? So at the moment no filewalker really can resume from a node restart, is that correct?

If this is the case, then the other posts about errors with GC filewalkers or trash problems are explained - the feature is simply not finished:

It was my impression that nodes do in fact get restarted regularly because of the updates roughly every 14 days. In general it would be better if you assumed that nodes get restarted often. There are hundreds of reasons for a node operator to restart a node, from a Windows update to simply changing the allocated space. If you assume nodes do not get restarted, then this leads to the same situation that previously made you not save state, like the bloom filter or the state of the filewalkers. It would be better to assume that nodes can get restarted at any time and should then resume whatever they were doing before.

I hope you do.

Because now it has turned out that a restart does not “just” prevent the used-space filewalker from finishing but also the GC filewalker, which makes it harder to get rid of the garbage and to have correct numbers for occupied space and trash.

And please consider adding separate log entries for the start and the resume of a filewalker. That would help verify from the logs that it is in fact resuming and not starting from scratch.
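Something as small as this would already be enough (a sketch only; the “Resuming…” message and its fields are made up, the zap-style logging just mirrors what the node logs already look like):

```go
package logsketch

import "go.uber.org/zap"

// logRetainStart emits a different message depending on whether the walk
// starts fresh or resumes from saved progress, so the two cases can be told
// apart in the logs.
func logRetainStart(log *zap.Logger, cachePath, resumeFromPrefix string) {
	if resumeFromPrefix == "" {
		log.Info("Prepared to run a Retain request",
			zap.String("cachePath", cachePath))
		return
	}
	log.Info("Resuming a Retain request",
		zap.String("cachePath", cachePath),
		zap.String("Resume From Prefix", resumeFromPrefix))
}
```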

10 Likes

Maybe this would be an idea:

You surprise me. This is a standard line in the log that is added whenever the node receives a shutdown command.
That is a reboot/shutdown of the host/computer/system - all of which issue the command to stop the service. This is an action that is INITIATED BY THE NODE OPERATOR; it is NOT the node shutting down on its own. This line is added every time the node operator initiates a reboot/shutdown. There is nothing unusual about it, so I don’t understand why you think this is a problem. How can a developer not know this? If you think the problem is that the node operator restarts/turns off the node (which of course issues a command to stop the storagenode service), then I don’t know what to say.

4 Likes

Perhaps I will answer by quoting my colleague jammerdan:

“In general it would be better if you would assume the nodes getting restarted often. There are hundreds of reasons for a node operator to restart a node. From a Windows update restart to simply changing the allocated space.”

Do you want me to be frank? I live in Ukraine. Our energy infrastructure has suffered greatly from russian missile attacks on generating facilities. This is why power outages happen often.
Sometimes communication channels go down, and the node is switched over to another channel (naturally, by changing the configuration file and restarting the node)… And much more.

In general it would be better if you assumed that nodes get restarted often. This topic would not have arisen if that were not the case.

8 Likes

As well as other issues in the past… :face_with_raised_eyebrow:

4 Likes

I believe there is a bug. It deletes the BF on any shutdown of the runner: on finish, kill, or reboot. I’m sure it’s a bug.

2 Likes

https://review.dev.storj.io/c/storj/storj/+/13214

2 Likes