Used space gets reset after restart

Hello,

Is it normal that the used space drops after a reboot?
My node is a couple of days old, and the dashboard shows 108 GB used space.

The first time I restarted the VM at around 40 GB, the used space dropped to ±7 GB.
Now I restarted again, and the used space dropped from 108 GB to 8.2 GB.
Trash and overused are both 0.

But if I do df -h I see that 167 GB is in use.

There are a lot of log entries, but at first glance I don't see any errors.

The node is running in a dedicated Storj Proxmox VM, with no additional drives mounted inside the VM.

before restart


after restart

Can you see any errors in the logs?

It could be that there are permission issues with the Storj folder, so the databases aren't being saved correctly, and when you restart you lose the used-space data that was held in memory.
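
A quick way to check that (the path below is only an example; use your actual data location) is to look at the ownership and permissions of the database files while the node is stopped:

ls -la /mnt/storj/config/storage/*.db

They should be writable by the user the storagenode process runs as.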

Welcome to the forum @mahir !

Check the databases for errors by following the steps given below.


First, clean your logs and check the databases.

To check the DBs, stop the node and run a PRAGMA integrity check; you can run a VACUUM too.
Download and unzip the SQLite binaries to C:\sqlite:
https://www.sqlite.org/download.html
Here are the commands if you copied the DBs to C:\Downloads:

In PowerShell (user mode), for C:\Downloads\:

cd C:\sqlite

Get-ChildItem C:\Downloads\*.db -File | %{$_.Name + " " + $(C:\sqlite\sqlite3.exe $_.FullName "PRAGMA integrity_check;")}

Get-ChildItem C:\Downloads\*.db -File | %{$_.Name + " " + $(C:\sqlite\sqlite3.exe $_.FullName "VACUUM;")}

To clean the logs, set a custom log level:

https://forum.storj.io/t/log-custom-level/25839/18?u=snorkel

Stop the node, remove the container, and start the node with the new parameters (a sketch follows below).
Let the used-space filewalker finish for all satellites:

docker logs storagenode 2>&1 | grep "used-space-filewalker"

Then check the dashboard, used space, and logs.
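
For the "stop, remove, start with new params" step, a minimal sketch (this assumes a container named storagenode; keep your own docker run options and just append a log-level flag after the image name; the linked post covers finer-grained per-subsystem levels):

docker stop -t 300 storagenode
docker rm storagenode
docker run -d ... storjlabs/storagenode:latest --log.level=error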

I am suffering from the same issue.

Right now I restarted the node and it dropped from 33 GB to 26 GB.

The lazy filewalker is showing “subprocess finished successfully” for all four satellites.

I cannot understand it.

EDIT: I performed the database check and it returns OK for all of them. But after restarting the node the data now appears correctly, so those checks seem to fix something.

Same here. After a restart, the node drops from 100 GB to zero.
It can't be any “database problem” or something else blaming the user; it's the Storj software doing this. Storj should handle reboots, like every other piece of software out there.

$ df -h /dev/sdg1 
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdg1       3.7T  332G  3.4T   9% /mnt/WD-WCC7K3NHF837
$ docker logs 98e788947ffa | grep "used-space-filewalker"
2024-10-29T16:19:47Z    INFO    pieces  used-space-filewalker started   {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-10-29T16:19:47Z    INFO    lazyfilewalker.used-space-filewalker    starting subprocess     {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-10-29T16:19:47Z    INFO    lazyfilewalker.used-space-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-10-29T16:19:47Z    INFO    lazyfilewalker.used-space-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2024-10-29T16:19:48Z    INFO    lazyfilewalker.used-space-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-10-29T16:19:48Z    INFO    pieces  used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": true, "Total Pieces Size": 483529472, "Total Pieces Content Size": 481223936, "Total Pieces Count": 4503, "Duration": "674.434259ms"}
2024-10-29T16:19:48Z    INFO    pieces  used-space-filewalker started   {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-10-29T16:19:48Z    INFO    lazyfilewalker.used-space-filewalker    starting subprocess     {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-10-29T16:19:48Z    INFO    lazyfilewalker.used-space-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-10-29T16:19:48Z    INFO    lazyfilewalker.used-space-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode"}
2024-10-29T16:19:48Z    INFO    lazyfilewalker.used-space-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-10-29T16:19:48Z    INFO    pieces  used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Lazy File Walker": true, "Total Pieces Size": 966152704, "Total Pieces Content Size": 965258752, "Total Pieces Count": 1746, "Duration": "609.845974ms"}
2024-10-29T16:19:48Z    INFO    pieces  used-space-filewalker started   {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-10-29T16:19:48Z    INFO    lazyfilewalker.used-space-filewalker    starting subprocess     {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-10-29T16:19:48Z    INFO    lazyfilewalker.used-space-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-10-29T16:19:48Z    INFO    lazyfilewalker.used-space-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2024-10-29T16:19:49Z    INFO    lazyfilewalker.used-space-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-10-29T16:19:49Z    INFO    pieces  used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Lazy File Walker": true, "Total Pieces Size": 793810432, "Total Pieces Content Size": 792836608, "Total Pieces Count": 1902, "Duration": "319.773467ms"}
2024-10-29T16:19:49Z    INFO    pieces  used-space-filewalker started   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-10-29T16:19:49Z    INFO    lazyfilewalker.used-space-filewalker    starting subprocess     {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-10-29T16:19:49Z    INFO    lazyfilewalker.used-space-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-10-29T16:19:49Z    INFO    lazyfilewalker.used-space-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-10-29T16:19:49Z    INFO    lazyfilewalker.used-space-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-10-29T16:19:49Z    INFO    pieces  used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Lazy File Walker": true, "Total Pieces Size": 37841664, "Total Pieces Content Size": 37824256, "Total Pieces Count": 34, "Duration": "197.666732ms"}

Hello @edortaprz,
Welcome to the forum!

@edortaprz @xgDkAbzkp9yi
It can be, but when the used-space filewalker finishes its scan it should update the databases. Check that there are no errors related to either of them in your logs, and that all used-space filewalkers have finished their scan for each trusted satellite. The databases have a flush interval of 1h by default, so do not restart the node until at least an hour has passed since the filewalkers finished.

In addition, you need to use SI units (base 10) when checking the usage reported by the OS:

df --si -T

This is valid only if you provided the whole disk, not a part of it. Otherwise you need to use du --si instead.
@xgDkAbzkp9yi the Avg used space is unrelated to the local usage; it's reported from the satellites. According to your graph, some satellites didn't send reports about their usage.
It's also the expected behavior at the beginning of each month. This information is updated once per 12h by default on the nodes and once per 24h on the satellites (if their chore can keep up and finishes before the next day; otherwise you could see gaps, see Avg disk space used dropped with 60-70%). This is not a bug on the satellite, but an inconvenient UI issue on the node. The feature to ignore gaps:

I started a fresh node to see if I had messed things up with permissions.

After 1 day I see the same behavior.
I don't see any errors in the logging.

The node had around 40 GB used.
Restarting the Docker container resulted in 1.2 GB used, but there is still 35 GB of storage used on the drive.

I will try the database fix.

2024-11-04T16:58:20Z    INFO    pieces  used-space-filewalker started   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-04T16:58:20Z    INFO    lazyfilewalker.used-space-filewalker    starting subprocess     {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-04T16:58:20Z    INFO    lazyfilewalker.used-space-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-04T16:58:20Z    INFO    lazyfilewalker.used-space-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-11-04T16:58:20Z    INFO    lazyfilewalker.used-space-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-04T16:58:20Z    INFO    pieces  used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Lazy File Walker": true, "Total Pieces Size": 10531840, "Total Pieces Content Size": 10527744, "Total Pieces Count": 8, "Duration": "71.259357ms"}
2024-11-04T16:58:20Z    INFO    pieces  used-space-filewalker started   {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-04T16:58:20Z    INFO    lazyfilewalker.used-space-filewalker    starting subprocess     {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-04T16:58:20Z    INFO    lazyfilewalker.used-space-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-04T16:58:20Z    INFO    lazyfilewalker.used-space-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2024-11-04T16:58:21Z    INFO    lazyfilewalker.used-space-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-04T16:58:21Z    INFO    pieces  used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Lazy File Walker": true, "Total Pieces Size": 1443580416, "Total Pieces Content Size": 1440282624, "Total Pieces Count": 6441, "Duration": "137.127719ms"}
2024-11-04T16:58:21Z    INFO    pieces  used-space-filewalker started   {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-04T16:58:21Z    INFO    lazyfilewalker.used-space-filewalker    starting subprocess     {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-04T16:58:21Z    INFO    lazyfilewalker.used-space-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-04T16:58:21Z    INFO    lazyfilewalker.used-space-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode"}
2024-11-04T16:58:21Z    INFO    lazyfilewalker.used-space-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-04T16:58:21Z    INFO    pieces  used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Lazy File Walker": true, "Total Pieces Size": 260153856, "Total Pieces Content Size": 259963904, "Total Pieces Count": 371, "Duration": "424.033671ms"}
2024-11-04T16:58:21Z    INFO    pieces  used-space-filewalker started   {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-04T16:58:21Z    INFO    lazyfilewalker.used-space-filewalker    starting subprocess     {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-04T16:58:21Z    INFO    lazyfilewalker.used-space-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-04T16:58:21Z    INFO    lazyfilewalker.used-space-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2024-11-04T16:58:21Z    INFO    lazyfilewalker.used-space-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-04T16:58:21Z    INFO    pieces  used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": true, "Total Pieces Size": 1157481472, "Total Pieces Content Size": 1154353152, "Total Pieces Count": 6110, "Duration": "119.645463ms"}

docker run --rm -it --mount type=bind,source=storage/storage/,destination=/data sstc/sqlite3 find . -maxdepth 1 -iname "*.db" -print0 -exec sqlite3 '{}' 'PRAGMA integrity_check;' ';'
./orders.dbok
./heldamount.dbok
./storage_usage.dbok
./garbage_collection_filewalker_progress.dbok
./notifications.dbok
./reputation.dbok
./pricing.dbok
./used_space_per_prefix.dbok
./satellites.dbok
./bandwidth.dbok
./info.dbok
./used_serial.dbok
./pieceinfo.dbok
./piece_expiration.dbok
./piece_spaced_used.dbok
./secret.dbok

This seems alright?

Will the used space catch up after a while?

The data is clearly still there.

Please try this:

  • Stop the node.
  • Check all databases and confirm they return OK.
  • Restart the node.

Is it now showing the real used space? This is the way I repaired my node.
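
As a shell sketch of those steps (reusing the sstc/sqlite3 container from earlier in this thread; the data path is an assumption, adjust it to your setup):

docker stop -t 300 storagenode
docker run --rm -it --mount type=bind,source=/mnt/storj/config/storage,destination=/data sstc/sqlite3 find . -maxdepth 1 -iname "*.db" -print0 -exec sqlite3 '{}' 'PRAGMA integrity_check;' ';'
docker start storagenode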


Please wait one hour after the used-space-filewalker has finished its work for all trusted satellites (the database cache is flushed to disk once an hour by default, but you may change it) before restarting.
Otherwise you will need to wait until the used-space-filewalker rescans all the used space again.
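
If you want to change that flush interval, I believe the relevant option in config.yaml is the cache sync interval; the name below is from memory, so please verify it against the commented defaults in your own config.yaml:

# assumption: the interval at which the used-space cache is flushed to the databases
storage2.cache-sync-interval: 1h0m0s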

So I have been trying and experimenting, and I really can't get a node up and running correctly.
Or there is a bug… or is this expected?

I tried so many things, always with a freshly installed Ubuntu and a freshly generated/authorized identity.
It's like 3 commands, how can it be so hard?

I tried everything: with a normal user, with root, with a docker user, on a VM, on a Raspberry Pi,
with docker run, even with docker compose, even without mounting an external drive (or are you supposed to mount a drive?).

The problem is always that when you restart the server, the disk used on the dashboard is reset.
apt-get update && apt-get install docker.io

docker run --rm -e SETUP="true" --mount type=bind,source="/root/storjdata/ident",destination=/app/identity --mount type=bind,source="/root/storjdata/storage",destination=/app/config --name storagenode storjlabs/storagenode:latest

docker run -d --restart unless-stopped -p 60001:28967 -p 14002:14002 -e WALLET="XXXX" -e EMAIL="xxxxxx" -e ADDRESS="XXXXX:60001" -e STORAGE="600GB" --mount type=bind,source="/root/storjdata/ident",destination=/app/identity --mount type=bind,source="/root/storjdata/storage",destination=/app/config --name storagenode storjlabs/storagenode:latest

It's 3 commands…
So in this case everything was run as root, so there can't be any permission issue, right?
The PRAGMA integrity_check results are all OK.
The logs show no errors at all, and the filewalker completes.

I have no clue anymore what to do to get a node up and running and surviving a restart, and it should actually be so simple.
Or is it normal that the used space is always reset because it's a fresh node?

Same issue on a vanilla TrueNAS 24.10 app using the latest version available in the TrueNAS app catalog. The filewalker completes, but my disk usage resets back down to nearly zero. There are several folders in the blobs dir, but it seems to be walking only the most current one. v1.115.5

DBs are OK:

find . -maxdepth 1 -iname "*.db" -print0 -exec sqlite3 '{}' 'PRAGMA integrity_check;' ';'

./heldamount.dbok
./piece_spaced_used.dbok
./used_serial.dbok
./bandwidth.dbok
./used_space_per_prefix.dbok
./piece_expiration.dbok
./secret.dbok
./satellites.dbok
./garbage_collection_filewalker_progress.dbok
./orders.dbok
./notifications.dbok
./info.dbok
./storage_usage.dbok
./reputation.dbok
./pieceinfo.dbok
./pricing.dbok

Yes, the drive must be partitioned, formatted with the preferred FS (for example ext4) and mounted. Or you may use LVM instead of partitioning.
Please use this instruction to statically mount your drive:
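
For reference, a minimal /etc/fstab entry could look like this (the UUID and mount point are placeholders; get the real UUID with blkid and use your actual mount point):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/storj ext4 defaults 0 2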

Hello @Juberstine,
Welcome to the forum!

@Juberstine @mahir
Please make sure that you do not have errors related to the databases in your logs (search for error and database).
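
For example (assuming the container is named storagenode, as elsewhere in this thread):

docker logs storagenode 2>&1 | grep -i error | grep -i database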

I tried with an HDD mounted via fstab, as explained in the docs; no difference.
Can someone from the team test a fresh node to confirm that it still works as expected (and handles reboots fine)?

Of course I tested before asking any questions :slight_smile:
So, let's start from the beginning:

df --si -T

and

ls -l /mnt/storj/storagenode1

where /mnt/storj/storagenode1 is your data location.

Okay great, let's try to debug.

The dashboard is showing:

After the restart I am checking the logs: no errors, but what I notice is that the sum of the lazy filewalker "Total Pieces Content Size" values is 8,292,246,096 bytes (8.2 GB), and that's also what the dashboard shows after the restart, but the total size should be much more (16 GB).
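
For reference, a rough one-liner to compute that sum straight from the log lines (this assumes a container named storagenode and the log format shown above):

docker logs storagenode 2>&1 | grep "used-space-filewalker completed" | grep -o '"Total Pieces Content Size": [0-9]*' | awk '{sum += $NF} END {printf "%.1f GB\n", sum/1e9}'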

Please show the content of

ls -l /mnt/storj/
ls -l /mnt/storj/config/storage

Could you please check the size for folders inside storage/blobs?

du --si -d 1 /mnt/storj/config/storage/blobs

After a new Docker restart (no errors in the docker logs):



fstab

docker command


su docker
docker stop storagenode
docker remove storagenode
docker run -d --restart unless-stopped --stop-timeout 300     -p 60002:28967/tcp     -p 60002:28967/udp     -p 14002:14002     -e WALLET="xx"     -e EMAIL="xx"     -e ADDRESS="xx"     -e STORAGE="100GB"     --user $(id -u):$(id -g)     --mount type=bind,source="/mnt/storj/ident",destination=/app/identity     --mount type=bind,source="/mnt/storj/config",destination=/app/config     --name storagenode storjlabs/storagenode:latest