Disk usage discrepancy?

So I gave this a try:

You should add the following line to config.yaml: storage2.piece-scan-on-startup: true, then restart the node, let it run for about 3-4 days, and then check the dashboard again.
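For clarity, a minimal sketch of that entry in config.yaml (everything else in the file stays as it is):

    # config.yaml -- enable the used-space scan on startup (sketch)
    storage2.piece-scan-on-startup: true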


You need to forget these satellites with a --force flag:
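A hedged example of what the forget-satellite call can look like for a docker node; the container name, directories, and the placeholder satellite ID are assumptions, so substitute your own values:

    # sketch only -- replace <satellite-ID> with the ID you want to forget
    docker exec -it storagenode ./storagenode forget-satellite --force \
        <satellite-ID> \
        --config-dir config --identity-dir identity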

But the filewalker failed to finish for almost all client-facing satellites, as far as I can see.
The “context canceled” error means that your disk cannot keep up, so there are two possible workarounds (without changing the hardware):

  1. disable the lazy mode and restart the node. However, the non-lazy filewalker doesn’t print anything into the logs, but you may use either a debug port or this trick:
  2. set the allocation below the used space shown on the dashboard; this will stop any ingress and allow the lazy filewalker to finish its job faster (a config sketch for both workarounds follows below)
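A sketch of both workarounds as config.yaml changes, assuming the option names of a recent storagenode release; the allocation value is only an example, pick one below your currently used space:

    # config.yaml -- sketch only, values are examples
    # workaround 1: run the filewalkers in non-lazy mode
    pieces.enable-lazy-filewalker: false
    # workaround 2: allocate less than is currently used to stop ingress
    storage.allocated-disk-space: 4.00 TB

Restart the node after either change.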

Today I have a lot of nodes like this.
The space on the HDD is actually free, but the node thinks it is not. Did some process forget to write its results to the DB?
Because of this bug I have a lot of TB that I can’t use.
I think it is something with TTL, because there was never so much of it previously.


Most of these nodes show they are full or almost full.
It is a very big waste of space.


Maybe this?

I don’t think this is the case, as you can’t delete “nothing”.

Hello!
I recently found out that some of my SNOs are struggling to free up space. The drive is completely full, but the Storj dashboard is showing a different story.
The setup: I am using small PCs with a 12TB drive attached via USB. The drive is formatted with the NTFS file system. The drive does not contain any other files, just the Storj data.

I am attaching screenshots to show the problem clearly.


We see this problem all over the nodes; it is certainly some node bug, because even the Average Disk Space Used graph shows around 8TB used.

It happened to me too. I discovered that I have almost 1TB of space occupied by the DB files and the Storj folders (trash, retain…), not data.

I checked the trash amount on one node: it shows 900GB of trash, but in reality there is only 150GB of trash. The HDD is 3.9 TB, 3 TB is used, and the dashboard shows 0.9 TB of trash (the real trash folder contains only 0.15 TB), so the disk actually has 0.75 TB free. Usually when a node is full, only 100+ GB is left.

Does someone know where I can find the garbage database, or where this info is stored?

Does the trash folder on this disk match the trash shown on the dashboard?
Because the trash filewalker not updating the databases was fixed only in 1.105.x.
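A quick way to compare is to measure the trash folder on disk yourself; the path is an assumption, so point it at your node’s data location:

    # sketch -- replace the path with your node's storage directory
    du -sh /mnt/storagenode/storage/trash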

I have had the 1.105 version since it was released.
The OS counts 150GB of trash files.
The dashboard shows 0.9 TB of trash.

Does no storjling test these versions on Windows nodes? It always seems to be the Windows nodes that have problems.
It makes me believe that all the development and testing is done on Linux nodes.

It should be updated after the used-space filewalker finishes its work for all trusted satellites. If you disabled it, the situation will not improve just with an upgrade.
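One way to check whether the used-space filewalker has finished for each satellite is to search the node log; the exact log wording differs between versions, so treat this as a sketch (on a Windows GUI node, search storagenode.log instead):

    # sketch for a docker node -- log phrasing varies between releases
    docker logs storagenode 2>&1 | grep -i "used-space-filewalker" | grep -iE "started|finished|completed"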

I’m experiencing a weird issue.

I assigned 8TB for Storj, but it somehow doesn’t show the right amount of storage in use. It does show a higher usage in the Average Disk Space Used This Month graph.

On TrueNAS it says I’m using 8.17TiB (8.9TB).

Any way this can be fixed? It started doing this when I changed my assigned value from 5.5TB to 8TB.

On one of the nodes I found this:
2024-06-25T23:10:53+03:00 ERROR lazyfilewalker.gc-filewalker.subprocess failed to save progress in the database {“satelliteID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Process”: “storagenode”, “error”: “gc_filewalker_progress_db: database is locked”, “errorVerbose”: “gc_filewalker_progress_db: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*gcFilewalkerProgressDB).Store:33\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePiecesToTrash.func2:191\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces.func1:66\n\tstorj.io/storj/storagenode/blobstore/filestore.walkNamespaceWithPrefix:1012\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).walkNamespaceUnderPath:882\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).walkNamespaceInPath:847\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).WalkNamespace:840\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).WalkNamespace:317\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:54\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePiecesToTrash:181\n\tstorj.io/storj/cmd/storagenode/internalcmd.gcCmdRun:104\n\tstorj.io/storj/cmd/storagenode/internalcmd.NewGCFilewalkerCmd.func1:35\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:267”}
2024-06-25T23:43:46+03:00 ERROR pieces lazyfilewalker failed {“error”: “lazyfilewalker: exit status 1”, “errorVerbose”: “lazyfilewalker: exit status 1\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkSatellitePiecesToTrash:160\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePiecesToTrash:561\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:373\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:259\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”}

My DBs are always on an SSD, so I don’t know what happened there.

Perhaps there is no solution so far. If the databases are on an SSD, they should not be locked, unless you put many of them there.
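For reference, the databases are pointed at a separate drive with the storage2.database-dir option; the path below is only an example:

    # config.yaml -- sketch, path is an example
    storage2.database-dir: /mnt/nvme/storagenode-dbs

Stop the node, move the existing *.db files to that directory, then start it again.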

Yes, there are many of them; I have now moved them to a separate NVMe.