Used space gets reset after restart

Isn’t the minimum 500GB? I would expect a different error if it’s unhappy about space though.

Thanks for your answer, and yes, 500 GB is the minimum.
I have plenty of space, but this is just to test and get the node working.
I have this issue even with 18 TB.
And I really have no idea why; something is not right for sure.

How much time has passed since the successful scan on startup?
And did it finish for all trusted satellites?
Did the dashboard show the correct numbers 2 hours after the last used-space-filewalker finished its scan?

Also please show the result of the lsblk command.
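
For example, something like this (just a suggestion; any invocation that shows the disks' sizes, filesystems and mount points is fine):

lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT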

I did a new restart; the dashboard was showing 44.44 GB used (/mnt/storj shows 89 GB used on disk).
The filewalker finishes really fast.
Now the dashboard is showing 8.58 GB.


You need to have a used-space-filewalker finished for all trusted satellites, i.e. you need to have 4 finished filewalkers.

docker logs storagenode 2>&1 | grep "\sused-space" | grep -E "started|completed"

After that you need to wait 2 hours before restarting (if a restart is needed).
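
If you stop the container manually for the restart, it may also help to give it a longer stop timeout so the node has time to flush its caches before shutting down (300 seconds here is just an example value):

docker stop -t 300 storagenode
docker start storagenode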

I did tests on new nodes where I let them run for hours before the first restart.
For testing I restarted the current node before the 2 hours were up, to see if there were errors in the logs.
But between our messages this node had been running for a while first.
And still it resets to 8.5 GB.
The filewalker finishes (really fast) for all 4 satellites without issues.

Is there a way to set up a meeting with you or someone else and go through a clean setup?
Because it really can't be this hard to get a node running correctly. (I'm also wondering why a clean Raspberry Pi with DietPi on it, without an external disk, where I just installed Docker, shows this same issue.)

I started a new VM with a fresh node, 600 GB of storage, and no extra drive mounted.
I will let it run for a couple of hours and let you know what the result is after I restart it.

OK, now I really don't know what the issue could be anymore, other than a bug in the system.
Allocated space is 600 GB, permissions are 100% correct.
I let the node run for about 12 hours before I did the first restart, to check.
Used space dropped by around 1 GB after the restart (±9.43 GB).
I let it run a couple of hours more and rebooted the node again; the node started up with the same amount of GB as last time (±9.43 GB).
“du -sh” on the storage folder shows 13 GB.
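
For reference: du reports binary units by default while the dashboard uses decimal ones, so a closer comparison would be something like the following (the path is a placeholder, adjust it to the actual storage folder):

du -s --si /path/to/storage/blobs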

I know rebooting the node this often is unusual, but I have to check and see if it's working as expected. In practice I would never reboot the VM (it's running behind a UPS), but an auto-update will restart the node and that would mess things up.

The filewalker completes in seconds, and the log shows no errors (other than upload/download failed errors).

root@storj1:/var/lib/storj# docker compose logs storagenode 2>&1 | grep "\sused-space" | grep -E "started|completed"
storagenode-1  | 2024-11-11T12:26:47Z   INFO    pieces  used-space-filewalker started   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
storagenode-1  | 2024-11-11T12:26:47Z   INFO    pieces  used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Lazy File Walker": true, "Total Pieces Size": 378112, "Total Pieces Content Size": 376576, "Total Pieces Count": 3, "Duration": "83.86798ms"}
storagenode-1  | 2024-11-11T12:26:47Z   INFO    pieces  used-space-filewalker started   {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
storagenode-1  | 2024-11-11T12:26:47Z   INFO    pieces  used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Lazy File Walker": true, "Total Pieces Size": 818429696, "Total Pieces Content Size": 817452800, "Total Pieces Count": 1908, "Duration": "228.481746ms"}
storagenode-1  | 2024-11-11T12:26:47Z   INFO    pieces  used-space-filewalker started   {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
storagenode-1  | 2024-11-11T12:26:48Z   INFO    pieces  used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Lazy File Walker": true, "Total Pieces Size": 150508032, "Total Pieces Content Size": 150378496, "Total Pieces Count": 253, "Duration": "256.690391ms"}
storagenode-1  | 2024-11-11T12:26:48Z   INFO    pieces  used-space-filewalker started   {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
storagenode-1  | 2024-11-11T12:26:48Z   INFO    pieces  used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": true, "Total Pieces Size": 8452687194, "Total Pieces Content Size": 8423301466, "Total Pieces Count": 57394, "Duration": "124.230582ms"}

Again, the sum of “Total Pieces Content Size” is 9,391,509,338 bytes ≈ 9.3 GB, so I think the filewalker is not doing something right and is not scanning everything.
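
For reference, that sum can be pulled straight from the log with something like this (a rough sketch; it just extracts and adds up the “Total Pieces Content Size” values):

docker compose logs storagenode 2>&1 | grep "used-space-filewalker completed" | grep -o '"Total Pieces Content Size": [0-9]*' | awk '{sum += $NF} END {print sum}'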

Same issue here, also on v1.115.5 and TrueNAS Scale. I can’t see any errors in the entire startup sequence (except for the “invalid config”, but I’m using a container, so everything should be set via env variables, and the dashboard is displaying the right values for all those “missing” entries).

The “used space filewalker” claims to have scanned >100GB of data in less than a second (all 4 complete on second ::48). I find that highly suspicious, especially since total memory usage remains ~150MB, and my netdata monitoring shows effectively zero read activity (and no io-wait) before normal operation starts. I'm running with default settings, so “lazy file walker: true”, but no badger cache or anything like that.
I also tested a complete system reboot to rule out ARC caching somehow accelerating those scans to an unreasonable extent. The results are identical (less than 1 second to scan “everything”) even on a cold start.

The tracking of new data during runtime works with or without the integrity check (which for me also always returns OK), so after leaving the node running for a few hours the “used” section shows the ingress of that session plus the ~8GB it started with. But whatever was actually in use before (the 100GB prior to the restart) never shows up again.

[download updater]

2024-11-11 12:39:43.669187+00:002024-11-11 12:39:43 (13.6 MB/s) - '/tmp/storagenode.zip' saved [35490062/35490062]
2024-11-11 12:39:43.669261+00:002024-11-11T12:39:43.669261052Z
2024-11-11 12:39:44.650903+00:002024-11-11 12:39:44,650 INFO RPC interface 'supervisor' initialized
2024-11-11 12:39:44.651084+00:002024-11-11 12:39:44,650 INFO supervisord started with pid 8
2024-11-11 12:39:45.654758+00:002024-11-11 12:39:45,654 INFO spawned: 'processes-exit-eventlistener' with pid 71
2024-11-11 12:39:45.657069+00:002024-11-11 12:39:45,656 INFO spawned: 'storagenode' with pid 72
2024-11-11 12:39:45.659131+00:002024-11-11 12:39:45,658 INFO spawned: 'storagenode-updater' with pid 73
2024-11-11 12:39:45.713360+00:002024-11-11T12:39:45Z	INFO	Configuration loaded	{"Process": "storagenode-updater", "Location": "/app/config/config.yaml"}
2024-11-11 12:39:45.713442+00:002024-11-11T12:39:45Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "storage.allocated-bandwidth"}
2024-11-11 12:39:45.713484+00:002024-11-11T12:39:45Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "healthcheck.enabled"}
2024-11-11 12:39:45.713524+00:002024-11-11T12:39:45Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "console.address"}
2024-11-11 12:39:45.713562+00:002024-11-11T12:39:45Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "server.private-address"}
2024-11-11 12:39:45.713600+00:002024-11-11T12:39:45Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "operator.wallet"}
2024-11-11 12:39:45.713640+00:002024-11-11T12:39:45Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "contact.external-address"}
2024-11-11 12:39:45.713688+00:002024-11-11T12:39:45Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "operator.email"}
2024-11-11 12:39:45.713742+00:002024-11-11T12:39:45Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "storage.allocated-disk-space"}
2024-11-11 12:39:45.713823+00:002024-11-11T12:39:45Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "operator.wallet-features"}
2024-11-11 12:39:45.713881+00:002024-11-11T12:39:45Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "server.address"}
2024-11-11 12:39:45.713937+00:002024-11-11T12:39:45Z	INFO	Invalid configuration file key	{"Process": "storagenode-updater", "Key": "healthcheck.details"}
2024-11-11 12:39:45.714009+00:002024-11-11T12:39:45Z	INFO	Anonymized tracing enabled	{"Process": "storagenode-updater"}
2024-11-11 12:39:45.714086+00:002024-11-11T12:39:45Z	INFO	Running on version	{"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.115.5"}
2024-11-11 12:39:45.714755+00:002024-11-11T12:39:45Z	INFO	Downloading versions.	{"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}
2024-11-11 12:39:45.768012+00:002024-11-11T12:39:45Z	INFO	Configuration loaded	{"Process": "storagenode", "Location": "/app/config/config.yaml"}
2024-11-11 12:39:45.769420+00:002024-11-11T12:39:45Z	INFO	Anonymized tracing enabled	{"Process": "storagenode"}
2024-11-11 12:39:45.772737+00:002024-11-11T12:39:45Z	INFO	Operator email	{"Process": "storagenode", "Address": "momivolz@gmail.com"}
2024-11-11 12:39:45.772853+00:002024-11-11T12:39:45Z	INFO	Operator wallet	{"Process": "storagenode", "Address": "0xDD8260E78f937f343dFd6E1966230Af60b131198"}
2024-11-11 12:39:45.947935+00:002024-11-11T12:39:45Z	INFO	server	kernel support for server-side tcp fast open remains disabled.	{"Process": "storagenode"}
2024-11-11 12:39:45.948036+00:002024-11-11T12:39:45Z	INFO	server	enable with: sysctl -w net.ipv4.tcp_fastopen=3	{"Process": "storagenode"}
2024-11-11 12:39:46.281901+00:002024-11-11T12:39:46Z	INFO	Current binary version	{"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.115.5"}
2024-11-11 12:39:46.282032+00:002024-11-11T12:39:46Z	INFO	Version is up to date	{"Process": "storagenode-updater", "Service": "storagenode"}
2024-11-11 12:39:46.309519+00:002024-11-11T12:39:46Z	INFO	Current binary version	{"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.115.5"}
2024-11-11 12:39:46.309621+00:002024-11-11T12:39:46Z	INFO	Version is up to date	{"Process": "storagenode-updater", "Service": "storagenode-updater"}
2024-11-11 12:39:46.431104+00:002024-11-11T12:39:46Z	INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "1qK4wVPH8CiHWaPPCeFyaPMwnedGFqsysGF3VoZcbuyLq835uw"}
2024-11-11 12:39:46.431224+00:002024-11-11T12:39:46Z	INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "1qK4wVPH8CiHWaPPCeFyaPMwnedGFqsysGF3VoZcbuyLq835uw"}
2024-11-11 12:39:46.488881+00:002024-11-11T12:39:46Z	INFO	db.migration	Database Version	{"Process": "storagenode", "version": 62}
2024-11-11 12:39:47.078467+00:002024-11-11T12:39:47Z	INFO	preflight:localtime	start checking local system clock with trusted satellites' system clock.	{"Process": "storagenode"}
2024-11-11 12:39:47.078567+00:002024-11-11 12:39:47,078 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-11 12:39:47.078605+00:002024-11-11 12:39:47,078 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-11 12:39:47.078828+00:002024-11-11 12:39:47,078 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-11 12:39:47.970724+00:002024-11-11T12:39:47Z	INFO	preflight:localtime	local system clock is in sync with trusted satellites' system clock.	{"Process": "storagenode"}
2024-11-11 12:39:47.971528+00:002024-11-11T12:39:47Z	INFO	trust	Scheduling next refresh	{"Process": "storagenode", "after": "8h12m46.756313693s"}
2024-11-11 12:39:47.971641+00:002024-11-11T12:39:47Z	INFO	collector	expired pieces collection started	{"Process": "storagenode"}
2024-11-11 12:39:47.971682+00:002024-11-11T12:39:47Z	INFO	bandwidth	Persisting bandwidth usage cache to db	{"Process": "storagenode"}
2024-11-11 12:39:47.971861+00:002024-11-11T12:39:47Z	INFO	Node 1qK4wVPH8CiHWaPPCeFyaPMwnedGFqsysGF3VoZcbuyLq835uw started	{"Process": "storagenode"}
2024-11-11 12:39:47.971953+00:002024-11-11T12:39:47Z	INFO	Public server started on [::]:28967	{"Process": "storagenode"}
2024-11-11 12:39:47.972006+00:002024-11-11T12:39:47Z	INFO	Private server started on 127.0.0.1:7778	{"Process": "storagenode"}
2024-11-11 12:39:47.972512+00:002024-11-11T12:39:47Z	INFO	failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.	{"Process": "storagenode"}
2024-11-11 12:39:47.973478+00:002024-11-11T12:39:47Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-11 12:39:47.973746+00:002024-11-11T12:39:47Z	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-11 12:39:47.975126+00:002024-11-11T12:39:47Z	INFO	pieces	used-space-filewalker started	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-11 12:39:47.975276+00:002024-11-11T12:39:47Z	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-11 12:39:47.975868+00:002024-11-11T12:39:47Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-11 12:39:47.977147+00:002024-11-11T12:39:47Z	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-11 12:39:47.985899+00:002024-11-11T12:39:47Z	INFO	collector	expired pieces collection completed	{"Process": "storagenode", "count": 34}
2024-11-11 12:39:48.010519+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "dateBefore": "2024-11-04T12:39:47Z"}
2024-11-11 12:39:48.027700+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-11-11 12:39:48.027982+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "bytesDeleted": 0, "numKeysDeleted": 0}
2024-11-11 12:39:48.036428+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-11 12:39:48.036696+00:002024-11-11T12:39:48Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "elapsed": "63.904855ms"}
2024-11-11 12:39:48.036787+00:002024-11-11T12:39:48Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-11 12:39:48.036831+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-11 12:39:48.037193+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-11 12:39:48.038315+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-11-11 12:39:48.048791+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-11 12:39:48.049000+00:002024-11-11T12:39:48Z	INFO	pieces	used-space-filewalker completed	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Lazy File Walker": true, "Total Pieces Size": 67289088, "Total Pieces Content Size": 67261440, "Total Pieces Count": 54, "Duration": "74.011982ms"}
2024-11-11 12:39:48.049076+00:002024-11-11T12:39:48Z	INFO	pieces	used-space-filewalker started	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-11 12:39:48.049115+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-11 12:39:48.049959+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-11 12:39:48.070025+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode", "dateBefore": "2024-11-04T12:39:48Z"}
2024-11-11 12:39:48.086007+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode"}
2024-11-11 12:39:48.086104+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode", "bytesDeleted": 0, "numKeysDeleted": 0}
2024-11-11 12:39:48.092267+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-11 12:39:48.092347+00:002024-11-11T12:39:48Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "elapsed": "55.751109ms"}
2024-11-11 12:39:48.092452+00:002024-11-11T12:39:48Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-11 12:39:48.092560+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-11 12:39:48.093008+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-11 12:39:48.121457+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode"}
2024-11-11 12:39:48.124797+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "dateBefore": "2024-11-04T12:39:48Z", "Process": "storagenode"}
2024-11-11 12:39:48.142415+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2024-11-11 12:39:48.142552+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "bytesDeleted": 0, "numKeysDeleted": 0, "Process": "storagenode"}
2024-11-11 12:39:48.149108+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-11 12:39:48.149188+00:002024-11-11T12:39:48Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "elapsed": "56.826404ms"}
2024-11-11 12:39:48.149238+00:002024-11-11T12:39:48Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-11 12:39:48.149282+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-11 12:39:48.150270+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-11 12:39:48.202833+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-11 12:39:48.202947+00:002024-11-11T12:39:48Z	INFO	pieces	used-space-filewalker completed	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Lazy File Walker": true, "Total Pieces Size": 818019072, "Total Pieces Content Size": 817038592, "Total Pieces Count": 1915, "Duration": "153.752457ms"}
2024-11-11 12:39:48.202995+00:002024-11-11T12:39:48Z	INFO	pieces	used-space-filewalker started	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-11 12:39:48.203034+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-11 12:39:48.204029+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-11 12:39:48.205140+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "dateBefore": "2024-11-04T12:39:48Z", "Process": "storagenode"}
2024-11-11 12:39:48.220595+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2024-11-11 12:39:48.220708+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode", "bytesDeleted": 0, "numKeysDeleted": 0}
2024-11-11 12:39:48.228631+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-11 12:39:48.228832+00:002024-11-11T12:39:48Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "elapsed": "79.528916ms"}
2024-11-11 12:39:48.274491+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2024-11-11 12:39:48.352542+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-11 12:39:48.352651+00:002024-11-11T12:39:48Z	INFO	pieces	used-space-filewalker completed	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Lazy File Walker": true, "Total Pieces Size": 1218636032, "Total Pieces Content Size": 1216466176, "Total Pieces Count": 4238, "Duration": "149.653939ms"}
2024-11-11 12:39:48.352701+00:002024-11-11T12:39:48Z	INFO	pieces	used-space-filewalker started	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-11 12:39:48.352741+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-11 12:39:48.353682+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-11 12:39:48.424314+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2024-11-11 12:39:48.507474+00:002024-11-11T12:39:48Z	INFO	lazyfilewalker.used-space-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-11 12:39:48.507580+00:002024-11-11T12:39:48Z	INFO	pieces	used-space-filewalker completed	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": true, "Total Pieces Size": 8743848262, "Total Pieces Content Size": 8711617862, "Total Pieces Count": 62950, "Duration": "154.817362ms"}
2024-11-11 12:39:48.851162+00:002024-11-11T12:39:48Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "GQQ2MVJPDM3E3KYYGWLWLZV3UTVXL3KMCOOTXHJPZ6BBEXYEE75Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "10.11.1.82:41098", "Available Space": 4989154046778}
2024-11-11 12:39:48.855414+00:002024-11-11T12:39:48Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "GQQ2MVJPDM3E3KYYGWLWLZV3UTVXL3KMCOOTXHJPZ6BBEXYEE75Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "10.11.1.82:41098", "Size": 6912}

[Normal operation]

EDIT: As we can see here, the “free space” metric gets updated without any errors; it just has a false starting point because the filewalker does not report the space actually in use.

2024-11-11 13:04:05.171797+00:002024-11-11T13:04:05Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "5NHOWR3O7RHGVKEE37N6RJ5R53WGCYIIOFLPIP6VGM7YHO2FPQIQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:35798", "Available Space": 4988678636080}
2024-11-11 13:04:05.322768+00:002024-11-11T13:04:05Z	ERROR	piecestore	upload failed	{"Process": "storagenode", "Piece ID": "RD2575YBMYNB6VVMLSE2SP45KUJ5QNGM6GCQPD3ATYLBIN5BDFLA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:58000", "Size": 262144, "error": "unexpected EOF", "errorVerbose": "unexpected EOF\n\tstorj.io/common/rpc/rpcstatus.Error:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:584\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-11-11 13:04:05.420743+00:002024-11-11T13:04:05Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "66VXTUDR77EO7O4KPOWP2WNRLGGZADEEAU32HTK62NSB5ZEEFHOQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "10.11.1.82:56386", "Size": 181504}
2024-11-11 13:04:05.773455+00:002024-11-11T13:04:05Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "X7CZZTSZYSHZJVR7SU5MPOT4QXFGTIC5LM3BSHCJCB7QSNFGTFWA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:47924", "Available Space": 4988678454064}
2024-11-11 13:04:05.775111+00:002024-11-11T13:04:05Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "X7CZZTSZYSHZJVR7SU5MPOT4QXFGTIC5LM3BSHCJCB7QSNFGTFWA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:47924", "Size": 3584}
2024-11-11 13:04:05.807949+00:002024-11-11T13:04:05Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "5NHOWR3O7RHGVKEE37N6RJ5R53WGCYIIOFLPIP6VGM7YHO2FPQIQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:35798", "Size": 300032}
2024-11-11 13:04:06.008049+00:002024-11-11T13:04:06Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "GKADLP7QLMWGP4HSJLZ33KNIOX3BNTPBGD2NMSKAIO7AFUZFJNFA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:51816", "Available Space": 4988678149424}
2024-11-11 13:04:06.009879+00:002024-11-11T13:04:06Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "GKADLP7QLMWGP4HSJLZ33KNIOX3BNTPBGD2NMSKAIO7AFUZFJNFA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:51816", "Size": 768}
2024-11-11 13:04:06.143725+00:002024-11-11T13:04:06Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "MRQDZ24O23SJ63VVVIR23W4HI3AZWRL7MJ6SX3IB7J3OA5P2TN2A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:46424", "Available Space": 4988678148144}
2024-11-11 13:04:06.145233+00:002024-11-11T13:04:06Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "MRQDZ24O23SJ63VVVIR23W4HI3AZWRL7MJ6SX3IB7J3OA5P2TN2A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:46424", "Size": 12032}
2024-11-11 13:04:06.227393+00:002024-11-11T13:04:06Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "PKIM3ERNMT5VZA2BVZH43FQQJY2CUEWBU5VCWAMX7ZDMSUHVYOAQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "10.11.1.82:33318", "Size": 290304}
2024-11-11 13:04:06.335652+00:002024-11-11T13:04:06Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "QLW44DVO7MVCR2NIFRWRLJD4JRQLGW3J46FYRLJP6WCO4RBF2CGA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:38262", "Size": 284160}
2024-11-11 13:04:06.616847+00:002024-11-11T13:04:06Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "XDBCRYUWLGUHBTNATXKPO5NMLLZNPFJGKOKJR7NO7I4FZWBNRMOA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:45230", "Available Space": 4988677560112}
2024-11-11 13:04:06.618765+00:002024-11-11T13:04:06Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "XDBCRYUWLGUHBTNATXKPO5NMLLZNPFJGKOKJR7NO7I4FZWBNRMOA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:45230", "Size": 6656}
2024-11-11 13:04:08.507197+00:002024-11-11T13:04:08Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "OHC7EBR3EDYQD7C3UXLFA376JYWXZAH7ZS3ZP2C36BTIUHPF7YUA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:46440", "Available Space": 4988677552944}
2024-11-11 13:04:08.507297+00:002024-11-11T13:04:08Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "4C4FXVEZHOTA4GYUDGGEGPXOTV3QOET5EEP7BMYBRGLA2OFPJJRA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "10.11.1.82:55190", "Available Space": 4988677552944}
2024-11-11 13:04:08.508420+00:002024-11-11T13:04:08Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "OHC7EBR3EDYQD7C3UXLFA376JYWXZAH7ZS3ZP2C36BTIUHPF7YUA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:46440", "Size": 16896}
2024-11-11 13:04:08.673568+00:002024-11-11T13:04:08Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "4C4FXVEZHOTA4GYUDGGEGPXOTV3QOET5EEP7BMYBRGLA2OFPJJRA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "10.11.1.82:55190", "Size": 36608}
2024-11-11 13:04:09.276776+00:002024-11-11T13:04:09Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "PIDCO3DQXPEPFXZW5JHQWRO6EBJWZMRFW6ANCTACB7R2GNSD6QIA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:35518", "Available Space": 4988677498416}
2024-11-11 13:04:09.278494+00:002024-11-11T13:04:09Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "PIDCO3DQXPEPFXZW5JHQWRO6EBJWZMRFW6ANCTACB7R2GNSD6QIA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:35518", "Size": 36864}
2024-11-11 13:04:10.144708+00:002024-11-11T13:04:10Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "55FW6SEE4KXIMMUYCDQT74GCLGRHT7ZI7WQAL5VLC5L62YALEFQA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "10.11.1.82:46456", "Available Space": 4988677461040}

And if you restart the node one more time, it will start from 8GB again, right?

Exactly. Even after waiting several hours (or even over a day). It just never takes the data that’s actually present prior to a node start into account properly.

With multiple people here having issues, I'd think you would want to start treating this more like a bug and get to the root cause of it. I'm happy to provide any detail, but my experience mirrors the other reports exactly.


Is there any chance this is just a visual bug and not impacting the backend storage and the credit we are getting? Is there any way to confirm?

Could someone please try to test that (my nodes don't have this issue, or I cannot detect it easily)?

A bug is reported here:


So I noticed 2 things.
I have 2 test nodes (started at the same time).
storj1:
I didn't restart it at all, and it has around 32GB. I restarted it for the first time now, and it started correctly. (I suspect that if I restart it again in a couple of hours it will start with 32GB again.)

storj2:
I restarted this node for the first time at around 8GB used, and on every restart it starts again with 8GB.
I tried the steps from above, but made one mistake by deleting all the databases (instead of moving them).
After the restart I did see the right “used size” (32GB, which should be about the same as storj1), but because I deleted the other DBs, other weird things happened…

So it could be that there is a bug in the filewalker.
I will restart storj1 in a couple of hours and see if it starts again with 32GB (I think it will).
If yes, I will try the steps from above, but instead of deleting all the DBs I will move them, roughly as sketched below.
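
As a sketch only (paths are placeholders and depend on the setup; the node must be stopped first, and it will recreate missing databases on start):

docker stop -t 300 storagenode
mkdir -p /path/to/db-backup
mv /path/to/databases/*.db /path/to/db-backup/
docker start storagenode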

I think this did work


You need to restart it only 2 hours after the used-space-filewalker has finished its scan for every satellite (you should see all 4 as completed).

Yes, all the current stats would be lost.


Got the same behavior on TrueNAS Scale ElectricEel-24.10.0.2 :frowning:

I don't know for sure whether it's caused by TrueNAS itself or by the Storj container.

What Alexey mentioned above is a temporary fix for now, but I guess a proper fix is coming soon.

Unlikely, unless we identify the issue. For now it sounds like a cache problem (the restart happens before the cache has been propagated to the disk).
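
If it is indeed the cache, one thing that might be worth testing (just an idea, not a confirmed fix) is how often the used-space cache is written to disk; assuming the option is still named storage2.cache-sync-interval and its default is still 1h, it could be lowered in config.yaml:

# how often the space-used cache is persisted to the databases
# storage2.cache-sync-interval: 1h0m0s
storage2.cache-sync-interval: 10m0s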