Disk usage discrepancy?

Hi, so I’ve been checking a bit. As you know, this node is the oldest one I have, several years old. For about a year I haven’t touched it; it updated automatically etc.
But as per my earlier post, about 6 weeks ago the discrepancy between used space and actual free space started to show.
So last night I removed the Docker image and started it again with the regular docker run command. I realized, though, that the logs were going to a file which is now 300 GB, so I decided to let it run in Docker again.

An hour after the restart I started getting suspended / unsuspended emails, roughly every 20 minutes:

Dashboard also shows this:

The log files are now quite short; yesterday I saw stuff like this after the start:

 33600K .......... .......... .......... .......... .......... 99%  121M 0s
 33650K .......... .......... .......... ....                 100% 88.0M=0.4s

2024-08-25 18:15:53 (81.0 MB/s) - '/tmp/storagenode.zip' saved [34492566/34492566]

2024-08-25 18:15:56,612 INFO RPC interface 'supervisor' initialized
2024-08-25 18:15:56,612 INFO supervisord started with pid 1
2024-08-25 18:15:57,615 INFO spawned: 'processes-exit-eventlistener' with pid 50
2024-08-25 18:15:57,617 INFO spawned: 'storagenode' with pid 51
2024-08-25 18:15:57,620 INFO spawned: 'storagenode-updater' with pid 52
2024-08-25T18:15:57Z	INFO	Configuration loaded	{"Process": "storagenode-updater", "Location": "/app/config/config.yaml"}
2024-08-25T18:15:57Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "operator.email"}
2024-08-25T18:15:57Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "server.debug-log-traffic"}
2024-08-25T18:15:57Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "contact.external-address"}
2024-08-25T18:15:57Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "operator.wallet"}
2024-08-25T18:15:57Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "server.private-address"}
2024-08-25T18:15:57Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "server.address"}
2024-08-25T18:15:57Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "storage.allocated-bandwidth"}
2024-08-25T18:15:57Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "storage.allocated-disk-space"}
2024-08-25T18:15:57Z	INFO	Anonymized tracing enabled	{"Process": "storagenode-updater"}
2024-08-25T18:15:57Z	INFO	Running on version	{"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.110.3"}
2024-08-25T18:15:57Z	INFO	Downloading versions.	{"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}
2024-08-25T18:15:57Z	INFO	Configuration loaded	{"Process": "storagenode", "Location": "/app/config/config.yaml"}
2024-08-25T18:15:57Z	INFO	Anonymized tracing enabled	{"Process": "storagenode"}
2024-08-25T18:15:57Z	INFO	Operator email	{"Process": "storagenode", "Address": "XXX"}
2024-08-25T18:15:57Z	INFO	Operator wallet	{"Process": "storagenode", "Address": "XXX"}
2024-08-25T18:15:58Z	INFO	Current binary version	{"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.110.3"}

And then it looked OK:

2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode", "dateBefore": "2024-08-18T18:16:04Z"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "bytesDeleted": 0, "numKeysDeleted": 0, "Process": "storagenode"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-08-25T18:16:04Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "elapsed": "84.442498ms"}
2024-08-25T18:16:04Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-25T18:16:04Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "KQ6CJ5AQWFRVBS4I634NTCJJ2DHIVKMPEBBB6MWNB2QS2FHPFOUA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 30720, "Remote Address": "172.17.0.1:58086"}
2024-08-25T18:16:04Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "RCLHCZHYLME3K6AY4CP5DY6SVHHWZL2YYDRIWSZOFDCG6ZTG7Y7Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 181504, "Remote Address": "172.17.0.1:58090"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "dateBefore": "2024-08-18T18:16:04Z"}
2024-08-25T18:16:04Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "V4IEY6EXOVAVBI4GPWQ2U3GOJCVDJF3SUW34QDVQX34W7RC7YVWQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 14848, "Remote Address": "172.17.0.1:58098"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.gc-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "bytesDeleted": 0, "numKeysDeleted": 0}
2024-08-25T18:16:04Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "YLYH4IR5MFPQ3YMKCKK2ZOPPW7I7SZXL7PUCQEJ6S2UZZBHLKEJQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 13824, "Remote Address": "172.17.0.1:58118"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-25T18:16:04Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "elapsed": "94.179112ms"}
2024-08-25T18:16:04Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-08-25T18:16:04Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "YXQ7D2ONYIW377YB7GHRCO43WTJJNND42CPTQGH42VEISOBJXFNQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 1792, "Remote Address": "172.17.0.1:58088"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode", "dateBefore": "2024-08-18T18:16:04Z"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "numKeysDeleted": 0, "Process": "storagenode", "bytesDeleted": 0}

Then quite a few filewalkers started:

2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-08-25T18:16:04Z	INFO	pieces	used-space-filewalker started	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode", "dateBefore": "2024-08-18T18:16:04Z"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "dateBefore": "2024-08-18T18:16:04Z"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.gc-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode", "dateBefore": "2024-08-18T18:16:04Z"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode", "dateBefore": "2024-08-18T18:16:04Z"}
2024-08-25T18:16:04Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}

Also this:

2024-08-25T18:16:04Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-08-25T18:16:04Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "elapsed": "84.442498ms"}
2024-08-25T18:16:04Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-25T18:16:04Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "elapsed": "94.179112ms"}
2024-08-25T18:16:04Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-08-25T18:16:04Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "elapsed": "93.753741ms"}
2024-08-25T18:16:04Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}

Now the log files also spit out stuff like this:

2024-08-26T07:08:01Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "RJX77M27TDWQXHFPTWXHUKAEAVBRJ5BSF54GLPKAM3WUINXQO2MQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 12800, "Remote Address": "172.17.0.1:35446"}
2024-08-26T07:08:01Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "RJX77M27TDWQXHFPTWXHUKAEAVBRJ5BSF54GLPKAM3WUINXQO2MQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 12800, "Remote Address": "172.17.0.1:35446", "error": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/rj/x77m27tdwqxhfptwxhukaeavbrj5bsf54glpkam3wuinxqo2mq.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/rj/x77m27tdwqxhfptwxhukaeavbrj5bsf54glpkam3wuinxqo2mq.sj1: permission denied", "errorVerbose": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/rj/x77m27tdwqxhfptwxhukaeavbrj5bsf54glpkam3wuinxqo2mq.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/rj/x77m27tdwqxhfptwxhukaeavbrj5bsf54glpkam3wuinxqo2mq.sj1: permission denied\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).Open:357\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Open:94\n\tstorj.io/storj/storagenode/pieces.(*Store).Reader:305\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:719\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-08-26T07:08:01Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "5LHGB7C2SD5PVRZ4FTCRMBY5CBLPHT2PEZJD6ECMHI7ZWVEB3XPA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 450048, "Remote Address": "172.17.0.1:35452"}
2024-08-26T07:08:01Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "5LHGB7C2SD5PVRZ4FTCRMBY5CBLPHT2PEZJD6ECMHI7ZWVEB3XPA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 450048, "Remote Address": "172.17.0.1:35452", "error": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/5l/hgb7c2sd5pvrz4ftcrmby5cblpht2pezjd6ecmhi7zwveb3xpa.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/5l/hgb7c2sd5pvrz4ftcrmby5cblpht2pezjd6ecmhi7zwveb3xpa.sj1: permission denied", "errorVerbose": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/5l/hgb7c2sd5pvrz4ftcrmby5cblpht2pezjd6ecmhi7zwveb3xpa.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/5l/hgb7c2sd5pvrz4ftcrmby5cblpht2pezjd6ecmhi7zwveb3xpa.sj1: permission denied\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).Open:357\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Open:94\n\tstorj.io/storj/storagenode/pieces.(*Store).Reader:305\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:719\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-08-26T07:08:02Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "7KYD7J74YASGQLZOAOMXYX3VXNQJF7F2BRGBTUEIYFAFIX342UCA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 122368, "Remote Address": "172.17.0.1:35444"}
2024-08-26T07:08:02Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "KXF5YXTAFTXZX7KL57FRWAUJW4LOPNES55ZLWULFYCEBSYIZWUBA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 2319360, "Remote Address": "172.17.0.1:35454"}
2024-08-26T07:08:02Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "KXF5YXTAFTXZX7KL57FRWAUJW4LOPNES55ZLWULFYCEBSYIZWUBA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 2319360, "Remote Address": "172.17.0.1:35454", "error": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/kx/f5yxtaftxzx7kl57frwaujw4lopnes55zlwulfycebsyizwuba.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/kx/f5yxtaftxzx7kl57frwaujw4lopnes55zlwulfycebsyizwuba.sj1: permission denied", "errorVerbose": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/kx/f5yxtaftxzx7kl57frwaujw4lopnes55zlwulfycebsyizwuba.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/kx/f5yxtaftxzx7kl57frwaujw4lopnes55zlwulfycebsyizwuba.sj1: permission denied\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).Open:357\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Open:94\n\tstorj.io/storj/storagenode/pieces.(*Store).Reader:305\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:719\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-08-26T07:08:02Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "UAHW6MULZQRNMN3GAJLPSA7S662J2HMIUZP4CEW2TVXYKPC7RBWQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 685312, "Remote Address": "172.17.0.1:35456"}
2024-08-26T07:08:02Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "S36F5OUYNF3GKU7UCNVBKCBHZI4236ETBNWC3F2SZPXZ54JE3WBA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 36864, "Remote Address": "172.17.0.1:35458"}
2024-08-26T07:08:02Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "UAHW6MULZQRNMN3GAJLPSA7S662J2HMIUZP4CEW2TVXYKPC7RBWQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 685312, "Remote Address": "172.17.0.1:35456", "error": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/ua/hw6mulzqrnmn3gajlpsa7s662j2hmiuzp4cew2tvxykpc7rbwq.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/ua/hw6mulzqrnmn3gajlpsa7s662j2hmiuzp4cew2tvxykpc7rbwq.sj1: permission denied", "errorVerbose": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/ua/hw6mulzqrnmn3gajlpsa7s662j2hmiuzp4cew2tvxykpc7rbwq.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/ua/hw6mulzqrnmn3gajlpsa7s662j2hmiuzp4cew2tvxykpc7rbwq.sj1: permission denied\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).Open:357\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Open:94\n\tstorj.io/storj/storagenode/pieces.(*Store).Reader:305\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:719\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-08-26T07:08:02Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "S36F5OUYNF3GKU7UCNVBKCBHZI4236ETBNWC3F2SZPXZ54JE3WBA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 36864, "Remote Address": "172.17.0.1:35458", "error": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/s3/6f5ouynf3gku7ucnvbkcbhzi4236etbnwc3f2szpxz54je3wba.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/s3/6f5ouynf3gku7ucnvbkcbhzi4236etbnwc3f2szpxz54je3wba.sj1: permission denied", "errorVerbose": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/s3/6f5ouynf3gku7ucnvbkcbhzi4236etbnwc3f2szpxz54je3wba.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/s3/6f5ouynf3gku7ucnvbkcbhzi4236etbnwc3f2szpxz54je3wba.sj1: permission denied\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).Open:357\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Open:94\n\tstorj.io/storj/storagenode/pieces.(*Store).Reader:305\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:719\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-08-26T07:08:03Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "FS6RQQTQVESTK37N7YFNFX3OBZQ62M43BOQZVA5LSHE7EDM353DQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 12032, "Remote Address": "172.17.0.1:35460"}
2024-08-26T07:08:03Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "FS6RQQTQVESTK37N7YFNFX3OBZQ62M43BOQZVA5LSHE7EDM353DQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 12032, "Remote Address": "172.17.0.1:35460", "error": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/fs/6rqqtqvestk37n7yfnfx3obzq62m43boqzva5lshe7edm353dq.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/fs/6rqqtqvestk37n7yfnfx3obzq62m43boqzva5lshe7edm353dq.sj1: permission denied", "errorVerbose": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/fs/6rqqtqvestk37n7yfnfx3obzq62m43boqzva5lshe7edm353dq.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/fs/6rqqtqvestk37n7yfnfx3obzq62m43boqzva5lshe7edm353dq.sj1: permission denied\n\tstorj.io/storj/storagenode/blobstore/filestore.(*Dir).Open:357\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Open:94\n\tstorj.io/storj/storagenode/pieces.(*Store).Reader:305\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:719\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-08-26T07:08:03Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "JGKHQ36QK24SCAEGEXQTOPMNQPULLIKLR722AH5A4BTWJW5EB44Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "Offset": 0, "Size": 10240, "Remote Address": "172.17.0.1:35462"}

Hard to say what to do. This doesn’t look good.

Thanks. I’ve just run:

storagenode forget-satellite --force 12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB  12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo   --config-dir /path/to/config --identity-dir /path/to/identity

I’ll check it after some days.

It seems you use --user $(id -u):$(id -g) in your docker run command, but you didn’t use it before, so now the container cannot access the data because your user doesn’t have permission to open it. That is why the suspension score is falling.
You need to either remove the --user option, or recursively update the owner of the data location to that user and group.
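For example, something along these lines (the paths are placeholders; use your actual storage and identity locations):

# give the storage and identity directories to the user/group passed via --user
sudo chown -R $(id -u):$(id -g) /path/to/storagenode/data /path/to/storagenode/identity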


It will finish when the command completes; it’s synchronous.

Oh shoot, you’re right, I had updated the docker command with that. I just stopped it and removed the --user part.
Now I’ll let it run for a while and let’s see what happens. Then I’ll check the logs as per your suggestion. Thanks for the swift reply. Still feels like I’m a n00b :smiley:

Update: Looking good:

2024-08-26T08:26:34Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "B65QR6IR2VYAWG3KT2D5DQPF6KYZVENB5AWUJOPHHN2MHQF4CAKQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 362752, "Remote Address": "172.17.0.1:60224"}
2024-08-26T08:26:34Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "4JMKWQA2VCWNOCNTQM5D4FMUN3FR32QZ4WPZ6J7UILBKTMK45HSA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 34816, "Remote Address": "172.17.0.1:60228"}
2024-08-26T08:26:34Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "DTJ2FFDLKEF7AKMGKVSQAXKCJPTYAM7UEMIV36DAS5DYQ267ODRA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 15360, "Remote Address": "172.17.0.1:60230"}
2024-08-26T08:26:35Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "4JMKWQA2VCWNOCNTQM5D4FMUN3FR32QZ4WPZ6J7UILBKTMK45HSA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 34816, "Remote Address": "172.17.0.1:60228"}
2024-08-26T08:26:35Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "DTJ2FFDLKEF7AKMGKVSQAXKCJPTYAM7UEMIV36DAS5DYQ267ODRA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 15360, "Remote Address": "172.17.0.1:60230"}
2024-08-26T08:26:35Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "2L42OT2B4RRAYR3HLJD6TUL5AEGI34AB7POY6MFCGAOZHROZU2XA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 5376, "Remote Address": "172.17.0.1:60234"}
2024-08-26T08:26:35Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "X5U73FQUALUURQOWHAO22FQKOM4SY7MTCAL2A72NILIZYERWFY6Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 71424, "Remote Address": "172.17.0.1:60226"}
2024-08-26T08:26:35Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "USEUU4WOSBICQZPGZP2HDI5LN6QIYQEVLUW3VWKPBT636CAB66OQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 768, "Remote Address": "172.17.0.1:60236"}
2024-08-26T08:26:35Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "CN3FIMZEGBIZCPDMFMLDECUGLGQJCRVMO4HO5TWETG5PAPP3J7MA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 726784, "Remote Address": "172.17.0.1:60238"}
2024-08-26T08:26:35Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "2L42OT2B4RRAYR3HLJD6TUL5AEGI34AB7POY6MFCGAOZHROZU2XA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 5376, "Remote Address": "172.17.0.1:60234"}

I’ll report back on the rest of the logs if I see some “ERROR|FATAL” at some point.


Hey @Alexey,
thanks, the node is now back to a 100% suspension score, so thanks for your quick help.
I’ve now checked the logs and I can’t find anything showing errors; see here:

2024-08-26T12:53:20Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-08-26T12:53:21Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-26T12:53:21Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "dateBefore": "2024-08-19T12:53:21Z"}
2024-08-26T12:53:21Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-26T12:53:21Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-26T12:53:21Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-08-26T12:53:21Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode", "dateBefore": "2024-08-19T12:53:21Z"}
2024-08-26T12:53:21Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode"}
2024-08-26T12:53:21Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-08-26T12:53:21Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-08-26T12:53:22Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode", "dateBefore": "2024-08-19T12:53:21Z"}
2024-08-26T12:53:22Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2024-08-26T12:53:22Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}

There was one error, but it doesn’t seem major:

2024-08-26T12:28:23Z	INFO	collector	no expired pieces to collect	{"Process": "storagenode"}
2024-08-26T12:28:23Z	INFO	collector	expired pieces collection completed	{"Process": "storagenode", "count": 1457}
2024-08-26T13:21:30Z	INFO	collector	expired pieces collection started	{"Process": "storagenode"}
2024-08-26T13:22:01Z	WARN	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "ZYZB3SMXR775BGHZAOINGE5MCTUFPC6NKSG7G4BFZDV2GFZQVCMQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-08-26T13:26:25Z	INFO	collector	no expired pieces to collect	{"Process": "storagenode"}
2024-08-26T13:26:25Z	INFO	collector	expired pieces collection completed	{"Process": "storagenode", "count": 2108}
2024-08-26T14:21:30Z	INFO	collector	expired pieces collection started	{"Process": "storagenode"}
2024-08-26T14:29:15Z	INFO	collector	no expired pieces to collect	{"Process": "storagenode"}
2024-08-26T14:29:15Z	INFO	collector	expired pieces collection completed	{"Process": "storagenode", "count": 2758}
2024-08-26T15:21:30Z	INFO	collector	expired pieces collection started	{"Process": "storagenode"}

Trash

2024-08-26T12:53:21Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "elapsed": "4h31m50.841676975s"}
2024-08-26T12:53:21Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-26T12:53:21Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "elapsed": "537.734506ms"}
2024-08-26T12:53:21Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-08-26T12:53:21Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "elapsed": "67.419134ms"}
2024-08-26T12:53:21Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-08-26T12:53:22Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "elapsed": "65.794795ms"}

collector:

2024-08-26T16:21:30Z	INFO	collector	expired pieces collection started	{"Process": "storagenode"}
2024-08-26T16:22:14Z	INFO	collector	no expired pieces to collect	{"Process": "storagenode"}
2024-08-26T16:22:14Z	INFO	collector	expired pieces collection completed	{"Process": "storagenode", "count": 1}
2024-08-26T17:21:30Z	INFO	collector	expired pieces collection started	{"Process": "storagenode"}
2024-08-26T17:22:16Z	INFO	collector	no expired pieces to collect	{"Process": "storagenode"}
2024-08-26T17:22:16Z	INFO	collector	expired pieces collection completed	{"Process": "storagenode", "count": 6}
2024-08-26T18:21:30Z	INFO	collector	expired pieces collection started	{"Process": "storagenode"}
2024-08-26T18:22:27Z	INFO	collector	no expired pieces to collect	{"Process": "storagenode"}
2024-08-26T18:22:27Z	INFO	collector	expired pieces collection completed	{"Process": "storagenode", "count": 10}

Since the start I’ve had this one running: sudo docker logs storagenode2 -f 2>&1 | grep "filewalker" | grep -E "ERROR|FATAL", but nothing has ever shown up.

So what else can I check to find out why the space doesn’t free up for Storj but does on the filesystem?

Filesystem              Size  Used Avail Use% Mounted on
/dev/md0                2.3G  1.4G  821M  63% /
devtmpfs                3.9G     0  3.9G   0% /dev
tmpfs                   3.9G  236K  3.9G   1% /dev/shm
tmpfs                   3.9G   24M  3.8G   1% /run
tmpfs                   3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                   3.9G  1.5M  3.9G   1% /tmp
/dev/mapper/cachedev_0   37T   20T   17T  55% /volume1

Wait until the used-space-filewalker has finished for all trusted satellites and you have removed the data of the untrusted satellites using this guide: How To Forget Untrusted Satellites

You can track the used-space filewalker progress in the logs (this may not be possible if you didn’t redirect logs to a file, because by default Docker keeps 5 log files of 20 MiB each, and 100 MiB is too little to track the used-space filewalker, since it can take a week or more). If you redirected logs to a file, you can track it with:

grep "\sused-space" /mnt/x/storagenode2/storagenode.log | grep -E "started|completed" | tail

The other way is to use the debug port and the /mon/ps method.
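For example (this assumes a debug address has been configured in config.yaml, e.g. debug.addr: 127.0.0.1:5999; the port is arbitrary), then look for the used-space/gc filewalker functions in the output:

curl -s http://127.0.0.1:5999/mon/ps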

I have also had strange calculations for 2 months now. Is this normal?


Ok great thanks.
First I ran this:
docker exec -it storagenode2 ./storagenode forget-satellite --force 12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB 12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo --config-dir config --identity-dir identity
(my node is called storagenode2 just FYI)

It gave back:

12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB  In Progress
12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo   In Progress

So that is looking ok.

I’ve changed config.yaml to redirect the logs to a file, but I didn’t restart the node yet: I thought I should keep it running a bit longer because of the forget-satellite, since it said no restart was needed and I didn’t want to interfere. Not sure if that will do much anyway, as the old satellites no longer show up in the web interface.
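(For reference, the config change was roughly this; the file name is just what I picked:)

# in config.yaml: write node logs to a file inside the mounted config directory
log.output: "/app/config/storagenode.log"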

If I look for forget I also saw this in the logs:

2024-08-27T06:59:42Z	INFO	forgetsatellite:endpoint	initializing forget satellite	{"Process": "storagenode", "satelliteID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "action": "InitForgetSatellite"}
2024-08-27T06:59:45Z	INFO	forgetsatellite:endpoint	initializing forget satellite	{"Process": "storagenode", "satelliteID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "action": "InitForgetSatellite"}
2024-08-27T07:00:30Z	INFO	forgetsatellite:cleaner	removing satellite from trust cache	{"Process": "storagenode", "satelliteID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2024-08-27T07:00:30Z	INFO	forgetsatellite:cleaner	cleaning up satellite data	{"Process": "storagenode", "satelliteID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}

Or do you think restarting, waiting and collecting all the logs is better for now?
Thanks!

Only if the command has finished (please also check the log entries).
But I guess the used-space filewalker is already running; you can check that with the used-space commands above.

Likely you have errors related to filewalkers/databases in your logs; you need to fix or work around them.


So it looks like it finished:

2024-08-27T19:02:28Z	INFO	forgetsatellite:cleaner	removing satellite info from reputation DB	{"Process": "storagenode", "satelliteID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2024-08-27T19:02:38Z	INFO	forgetsatellite:cleaner	removing satellite v0 pieces if any	{"Process": "storagenode", "satelliteID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2024-08-27T19:02:38Z	INFO	forgetsatellite:cleaner	cleanup completed	{"Process": "storagenode", "satelliteID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2024-08-27T19:02:38Z	INFO	forgetsatellite:cleaner	removing satellite from trust cache	{"Process": "storagenode", "satelliteID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2024-08-27T19:02:38Z	INFO	forgetsatellite:cleaner	cleaning up satellite data	{"Process": "storagenode", "satelliteID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2024-08-27T20:11:49Z	INFO	forgetsatellite:cleaner	cleaning up the trash	{"Process": "storagenode", "satelliteID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2024-08-27T20:12:05Z	INFO	forgetsatellite:cleaner	removing satellite info from reputation DB	{"Process": "storagenode", "satelliteID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2024-08-27T20:12:15Z	INFO	forgetsatellite:cleaner	removing satellite v0 pieces if any	{"Process": "storagenode", "satelliteID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2024-08-27T20:12:15Z	INFO	forgetsatellite:cleaner	cleanup completed	{"Process": "storagenode", "satelliteID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}

I haven’t seen any big movement in used space; it still looks the same.

So I would now restart the node, redirect logs to a file and then watch it. Or should I keep it running for longer just in case?

Just check that the used-space-filewalker is still running and has not failed.

and

grep error /mnt/x/storagenode2/storagenode.log | grep "used-space" | tail

Disabled the lazy filewalker and enabled the badger cache. It seems to be in perfect sync now. Thanks!
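(For reference, those two switches live in config.yaml and look roughly like this; double-check the key names against your node version:)

pieces.enable-lazy-filewalker: false
pieces.file-stat-cache: badger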


Thanks. So the gc-filewalker ran through:

2024-08-29T21:09:15Z	INFO	lazyfilewalker.gc-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-29T21:09:16Z	INFO	lazyfilewalker.gc-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-29T21:09:16Z	INFO	lazyfilewalker.gc-filewalker.subprocess	gc-filewalker started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "createdBefore": "2024-08-19T17:23:28Z", "bloomFilterSize": 8770379}
2024-08-29T23:09:20Z	INFO	lazyfilewalker.gc-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-29T23:09:20Z	INFO	lazyfilewalker.gc-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-08-29T23:09:20Z	INFO	lazyfilewalker.gc-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2024-08-29T23:09:20Z	INFO	lazyfilewalker.gc-filewalker.subprocess	gc-filewalker started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode", "createdBefore": "2024-08-22T17:59:59Z", "bloomFilterSize": 2234328}
2024-08-29T23:17:03Z	INFO	lazyfilewalker.gc-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-08-29T23:17:03Z	INFO	lazyfilewalker.gc-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-08-29T23:17:03Z	INFO	lazyfilewalker.gc-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2024-08-29T23:17:04Z	INFO	lazyfilewalker.gc-filewalker.subprocess	gc-filewalker started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode", "createdBefore": "2024-08-22T17:59:59Z", "bloomFilterSize": 18575318}
2024-08-29T23:53:54Z	INFO	lazyfilewalker.gc-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}

So you can see it passed (as it always did before, too). But now it shows this:


So it looks like the space has finally been freed up. I don’t know why / what happened.
a) Maybe it is because of the satellite forget?
b) Maybe it is because it finally runs on v1.110.3?
c) I made a change yesterday on the Synology to my SSD cache, as the hit rate had gone super low over the last 2 months. I removed it and rebuilt it, but this time added “Pin all Btrfs metadata” like I have on my newer node. This reduced the load on the volume.

The cache hit rate is also back up to almost 100%:

I haven’t restarted the node; I’ll keep it running, and the logs will get redirected once it updates to the new version, I guess.

As I said, I have never had any FATAL errors before. Any idea what happened / why it worked now?

Thanks again!


You are again talking about GC. But the main thing is that your used-space-filewalker must finish without any errors.
GC just moves deleted data to the trash; it does update the usage, but if you have a discrepancy, it will remain.
First of all, you need a successfully finished used-space-filewalker for all trusted satellites, and you should not have any data from untrusted satellites.
Only then may you think about GC, which updates the usage with what it moved to the trash. If you want to have a clear picture, of course.

Ah shoot :stuck_out_tongue:
Now I have this one running, on a shell that doesn’t disconnect when I close my laptop:
sudo docker logs storagenode2 -f 2>&1 | grep "used-space-filewalker" | grep -E "started|finished"
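(One way to keep it alive after disconnecting, assuming screen is available, is to run it inside a detachable session:)

screen -S usedspace    # open a named session and run the command above inside it
# detach with Ctrl-A then D; reattach later with: screen -r usedspace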

So I was looking at the wrong logs, my fault. Sorry.
But as you can see, the space / trash etc. is now showing better; I guess it ran and I didn’t see it. I’ll check this now, but for now it looks good, I’d say.

Thanks again for your great support!


Thanks for posting the screenshots of this, btw. Someone else had a Synology setup and I was speculating that the “Pin all Btrfs metadata” option would be useful, and it looks like it really is. Storj tends to do a TON of directory reads and listings, and it’s all very metadata-intensive, so this should help with used-space filewalkers (definitely) and maybe garbage collection as well.


10 TB took about 7 days; my second run has been going for 2 hours. My badger cache runs on my normal HDD (Windows GUI). I’m curious how long it will take this time.

The first run after you enable it will take the same amount of time as before; only the second and subsequent runs will be faster.

See