I tried debugfs to find the info, but it just gets stuck without giving me output. However, when I access the file over SMB from my windows PC, I can see that it’s really old.
What could be modifying these files? I’m in the process of migrating this node and would like to clean up the almost 2TB of trash before moving it unnecessarily. Worst case, I won’t migrate the trash and take the risk, but I would rather not do that.
I’m not sure if atime changes when the file is modified, but maybe it does. In that case it would be updated each time the node goes over the trash.
Can you check if it is enabled? On Synology it looks like it is in Storage Manager > Volume > […] > Settings > Record File Access Time.
With a standard fstab you would add noatime to the mount options so the files’ last access times are not recorded.
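If you have shell access, you can also check the effective mount options directly. A minimal sketch, assuming the volume is mounted at /volume1 (adjust the path to your actual Storj volume):

```shell
# Show the mount entry for the volume; the fourth field lists the options.
# Look for "noatime" (never record atime) or "relatime" (rarely update it).
grep ' /volume1 ' /proc/mounts

# findmnt (util-linux) prints just the options, if it is available:
findmnt -no OPTIONS /volume1
```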
It’s been set to never for a long time already; I didn’t want the constant additional writes. That should also only update the last access time though, not last modified.
Interesting… I’ll take a look at my nodes tomorrow. Can you check what happened on 01.03.2024? A node update? DSM update? Package update? Etc.? Do you have an antivirus on the Syno or on the PC that could be scanning the files, maybe modifying some checksum? Check the general logs on DSM and the update schedule. The SMART scan too. I can’t think of anything else at the moment.
It also could be something in the storagenode code.
I’ll check some of those things tomorrow. I have a daily short SMART test scheduled, so that isn’t it. Though an extended test did start on the first. But I’ve noticed this issue before then as well, when it didn’t coincide with the extended SMART test. No antivirus running on the system and I’ve disabled the search indexing service. There hasn’t been a recent DSM update. I don’t think it coincided with a node update, but I’ll check. As far as I can tell there shouldn’t be anything other than the node accessing these files to begin with, let alone modifying them.
Edit: I did notice this comment which might be part of the issue.
But with the planned changes to how trash works, it might not matter anymore.
I do not have files older than 7 days in my trash, but I run only 2 nodes in Docker Desktop for Windows and one node as a Windows service.
It could be Synology specific…
Because Docker nodes are actually Linux nodes.
However, I saw this one:
And if something is modifying the modification time, then my scripts will not detect this…
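For reference, here is a minimal sketch of what such an mtime-based check might look like (the trash path below is hypothetical; adjust it to your node’s storage directory):

```shell
# List trash files older than 7 days by modification time.
# TRASH is an assumed path -- substitute your node's actual trash directory.
TRASH=/volume1/storj/storage/trash
find "$TRASH" -type f -mtime +7 -print
```

If something refreshes mtime on these files, they will never match `-mtime +7`, and a check like this would report the trash as clean even though the pieces are much older.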
I have those too. I believe it’s from the new big bloom filter that deletes the old pieces. We all have the same date, so it’s the bloom filter that was sent to everyone on the same day.
I emptied the trash fairly recently, about a month ago, so these files weren’t there before 01.03.
I’m assuming that what is shown as “Created Date” is the file’s birth time, as tracked in newer filesystems like ext4. Since these files are moved intact from the blobstore into the trash, the birth time doesn’t change when the file is moved to the trash. So what you see as the created time is probably when the piece was first stored on your node, and not when the piece was put in the trash.
Given that, does it still look like something is updating the mtime on files in the trash?
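You can check both timestamps directly, assuming GNU stat is available on the box (the piece path below is a placeholder; point it at one of the old trash files):

```shell
# %y = last modification time, %w = birth (creation) time.
# The path is a placeholder -- use one of your actual trash files.
stat --printf 'mtime: %y\nbirth: %w\n' /volume1/storj/storage/trash/some-piece.sj1
```

On ext4 with a reasonably recent kernel and coreutils, `%w` shows the real birth time; older combinations print `-` instead. If mtime is newer than the move-to-trash date, something is indeed touching the files.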
Have you run any app against the node’s files, maybe something like rsync? (Or maybe had the rsync command syncing the wrong way once, accidentally?) I don’t know if rsync touches files in a way that would update the modification time: I’m just thinking out loud.
Or could a backup program be doing something like that… as a way to detect changes between runs?
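For what it’s worth, whether rsync carries mtimes over depends on its flags: with `-a` (which implies `-t`) the destination file keeps the source’s mtime, while without `-t` the copy gets the current time. A quick self-contained check in scratch directories (no node files involved):

```shell
# Scratch-directory demonstration of rsync's mtime behavior.
src=$(mktemp -d); d1=$(mktemp -d); d2=$(mktemp -d)
touch -d '2020-01-01 00:00:00' "$src/piece.sj1"
rsync -a "$src/" "$d1/"   # -a implies -t: source mtime is preserved
rsync -r "$src/" "$d2/"   # no -t: the copy gets the current time
stat -c '%y %n' "$d1/piece.sj1" "$d2/piece.sj1"
rm -rf "$src" "$d1" "$d2"
```

Note that merely reading the source side doesn’t change its mtime; the node’s files would only be touched if they were the *destination* of a sync run the wrong way.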
For me, this is the oldest node, moved to this drive 2 years ago. noatime, no rsync, no backup services running. The machine is dedicated to Storj; no other use or programs running.
Hi, my nodes have a combined 300 GB of trash. But I never saw a problem with that, because after a few days it goes back down to the normal amount. It spikes suddenly and then goes back. So maybe it’s just huge deletion waves?