This usually means a timeout, i.e. the disk is unable to respond in time. Since it's NTFS, you need to perform a defragmentation, and re-enable the automatic (scheduled) defragmentation if it was disabled for this disk (it's enabled by default).
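On Windows this can be checked and run from an elevated prompt, roughly like this (just a sketch; I'm assuming the data drive is D:, adjust the letter for your node):
defrag D: /A /U
defrag D: /O /U
The first command only analyzes the volume and shows progress, the second performs the actual optimization; the scheduled automatic optimization can be re-enabled in the Optimize Drives tool (dfrgui).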
If that doesn't help, you may disable the lazy mode.
It will use more IOPS, but it should finish without interruption and should update the databases at the end.
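For reference, lazy mode is controlled by the pieces.enable-lazy-filewalker option (a sketch, please verify against your own setup). In config.yaml:
pieces.enable-lazy-filewalker: false
or, for a docker node, append the equivalent flag to the run command:
--pieces.enable-lazy-filewalker=false
and restart the node afterwards.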
However, you need all 5 filewalkers to have successfully finished their work for every trusted satellite.
All data from untrusted satellites should be deleted, though: How To Forget Untrusted Satellites
My dashboard only shows 4 satellites: AP1, US1, EU1, and Saltlake. Does this mean I am good with regard to untrusted satellites, or is it possible they wouldn't show up in the dashboard but could still be impacting my node?
One node has deleted part of the garbage, and another part of the garbage is marked to be deleted. It now reports a capacity that is close to the real one.
Other nodes have found garbage and their reported capacity is closer to the real one.
One node had garbage and has been updated to version 1.101.3. The garbage has disappeared and the occupied space has increased; it is as if the garbage has become occupied space. Is this some problem with the dashboard?
Hello, I have an old node which had a lot of IOPS issues. It has now been migrated to a new datastore and all the filewalkers have finished successfully. However, the disk discrepancy on this particular node looks like this:
Many pieces have been deleted in the past month. Leave the node online without restarting it. In the coming weeks the trash will fill up and then be deleted little by little. Read the forum.
Hello,
I noticed a discrepancy in my storage usage between my TrueNAS Core dataset and what is shown in the dashboard.
Could it be that later versions of Storj didn't clean up older excessive storage/trash, for example?
I run TrueNAS 13.0-U6.1 (latest). The dataset uses 9.04 TiB with a 1.09 compression ratio, so effectively about 10 TiB of total storage, I guess.
I have given Storj 8.5 TB in the Storj config, and it currently uses 7.5 TB with 330 GB free and 0.67 TB in trash.
Given that 1 TiB is more than 1 TB, I am surprised.
I would assume that 8.5 TB in Storj is about 7.7 TiB on TrueNAS, which with the compression of 1.09 gives roughly 7 TiB of space usage.
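For reference, the rough math (assuming the dashboard counts decimal TB and ZFS reports binary TiB):
8.5 TB = 8.5 × 10^12 bytes ÷ 1024^4 ≈ 7.73 TiB
7.73 TiB ÷ 1.09 (compression ratio) ≈ 7.1 TiB actually written to the dataset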
I don’t use snapshots on this dataset.
Could it be that there is more trash that is not shown as trash, which was not deleted by older versions of Storj and which the latest version doesn't pick up?
If so, how can I purge/clean up the dataset without wiping everything?
Hello @etienneb,
Welcome back!
This is exactly the same issue discussed here.
You may have a discrepancy if your filewalkers are unable to finish their work without errors.
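A quick way to check is to search the node log for filewalker errors, for example (just a sketch; I'm assuming a docker node named storagenode, adjust for a jail or a log file):
docker logs storagenode 2>&1 | grep -i filewalker | grep -iE "error|failed"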
Thanks, I have looked at the logs and I am learning every day.
There are plenty of these entries in my logs from last week (I unpacked a bunch of them):
lazyfilewalker.used-space-filewalker subprocess finished successfully
lazyfilewalker.trash-cleanup-filewalker.subprocess trash-filewalker completed
lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed
I am now removing some satellites, which I learned today didn't happen automatically:
storagenode forget-satellite --all-untrusted --config-dir /mnt/storagenode/config --identity-dir /mnt/storagenode/identity
But the diagnostics only stated there is around 240 GB in it.
The temp folder has some older files (from June/July 2023), but that folder is only 400 MB.
It would be great if you could shut down the node and run a full cleanup command before continuing.
I have the feeling the storagenode disk is really slow in terms of I/O.
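One way to confirm that on TrueNAS Core is to watch the pool while the node is running (a rough sketch; 'tank' is a placeholder for the actual pool name):
zpool iostat -v tank 5
gstat -p
zpool iostat shows per-vdev throughput every 5 seconds, and gstat -p shows the per-disk busy percentage and latency.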
This is my oldest node, around 5 years old, and the average disk space used has been out by 2 TB for months. I had become used to it; perhaps I was resigned to it. Now, however, it's nearly 4 TB out versus the disk space used. No errors, and I cleaned out old satellite data way back.
Before I trash this node (most of the data has gone to trash lately anyway), any ideas about how to get rid of that 4 TB or get it back? The way circumstances for operating these nodes have been going south lately, I've hardly got the patience to sort out these issues any more.