Disk usage discrepancy?

Also, as reported by other XFS users, this filesystem can be slower at listing files. See: Topics tagged xfs

Hi @Alexey.
One week has gone by since I added the settings as mentioned, and 1.9 TB is still missing from the Disk Space dashboard.
Is there anything I must do to sync and correct this?

# run the piece scan in the main process instead of the low-priority lazy one
pieces.enable-lazy-filewalker: false
# re-scan used space on every startup
storage2.piece-scan-on-startup: true

Just wait two more weeks; your node should receive a bigger Bloom filter and remove more deleted pieces.

On one of the nodes here the trash was quite insignificant for months, roughly in the tens of GBs. After the last lazy GC run it trashed roughly 800 GB, I believe about 200 GB from US1 and the rest from EU1.
Still ~1 TB to go, the majority apparently from US1.
This node runs ext4, with 4 GB RAM and a 50 GB LVM cache.

2024-02-16T16:07:31Z    INFO    lazyfilewalker.gc-filewalker    starting subprocess     {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-02-16T16:07:31Z    INFO    lazyfilewalker.gc-filewalker    subprocess started      {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-02-16T16:07:31Z    INFO    lazyfilewalker.gc-filewalker.subprocess Database started        {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "process": "storagenode"}
2024-02-16T16:07:31Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "process": "storagenode", "createdBefore": "2024-02-07T17:59:59Z", "bloomFilterSize": 4100003}
2024-02-16T18:58:20Z    INFO    lazyfilewalker.gc-filewalker    starting subprocess     {"process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-02-16T18:58:20Z    INFO    lazyfilewalker.gc-filewalker    subprocess started      {"process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-02-16T18:58:20Z    INFO    lazyfilewalker.gc-filewalker.subprocess Database started        {"process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "process": "storagenode"}
2024-02-16T18:58:20Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "process": "storagenode", "createdBefore": "2024-02-12T17:59:59Z", "bloomFilterSize": 731469}
2024-02-16T19:05:27Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "piecesSkippedCount": 0, "process": "storagenode", "piecesCount": 1247997}
2024-02-16T19:05:27Z    INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-02-16T21:14:13Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "piecesCount": 40757447, "piecesSkippedCount": 0, "process": "storagenode"}
2024-02-16T21:14:22Z    INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-02-17T02:27:42Z    INFO    lazyfilewalker.gc-filewalker    starting subprocess     {"process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-02-17T02:27:42Z    INFO    lazyfilewalker.gc-filewalker    subprocess started      {"process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-02-17T02:27:42Z    INFO    lazyfilewalker.gc-filewalker.subprocess Database started        {"process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode"}
2024-02-17T02:27:42Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode", "createdBefore": "2024-02-13T17:59:42Z", "bloomFilterSize": 334161}
2024-02-17T02:29:22Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "process": "storagenode", "piecesCount": 559530, "piecesSkippedCount": 0}
2024-02-17T02:29:22Z    INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-02-18T02:01:31Z    INFO    lazyfilewalker.gc-filewalker    starting subprocess     {"process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-02-18T02:01:31Z    INFO    lazyfilewalker.gc-filewalker    subprocess started      {"process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-02-18T02:01:31Z    INFO    lazyfilewalker.gc-filewalker.subprocess Database started        {"process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "process": "storagenode"}
2024-02-18T02:01:31Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "process": "storagenode", "createdBefore": "2024-02-13T17:59:59Z", "bloomFilterSize": 3122834}
2024-02-18T02:21:25Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "process": "storagenode", "piecesCount": 5739026, "piecesSkippedCount": 0}
2024-02-18T02:21:26Z    INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}

They still haven’t changed the log tag for important services like the walkers… :unamused:
If we could watch them without keeping the log level at INFO, what a great experience that would be.


If you are adventurous, you can keep the log level at INFO and override the docker container’s custom_entrypoint to filter out the stuff you don’t want.

For example, create a copy of the custom_entrypoint file and use a docker mount (-v) to override the container’s version. The edit below filters out piecestore and collector lines:

#  sed -i \
#  "s#^command=/app/storagenode\$#command=/app/storagenode run ${SNO_RUN_PARAMS} ${*}#" \
#  /etc/supervisor/supervisord.conf
  sed -i \
  "s#^command=/app/storagenode\$#command=/bin/bash -c '/app/storagenode run ${SNO_RUN_PARAMS} ${*} 2>\&1 | grep -v -e piecestore -e collector'#" \
  /etc/supervisor/supervisord.conf
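
A minimal sketch of the override, assuming the edited copy is saved on the host as ./custom_entrypoint; the in-container path below is an assumption, so check where the script actually lives in your image first:

# inspect the image to find the real entrypoint path, e.g.:
#   docker inspect --format '{{.Config.Entrypoint}}' storjlabs/storagenode:latest
docker run -d --name storagenode \
  -v "$(pwd)/custom_entrypoint":/bin/custom_entrypoint \
  ... \
  storjlabs/storagenode:latest
# replace "..." with your usual mounts, ports, and environment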

As always with container mods: do this at your own risk.


The node reports more used and trash space than is actually occupied on the hard drive.

I wrote about this problem a few days ago.

What could be the problem?

The dashboard always reports the same trash size while the node keeps filling up. The trash folder is 39 GB at the moment. These captures are from today, a few minutes to a few hours ago.


You shouldn’t have errors in the logs, and all filewalkers should finish the scan for each trusted satellite:

All untrusted should be deleted: How To Forget Untrusted Satellites
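
For a docker node, the linked guide boils down to one command; a sketch with flags as described in that post (verify against the guide before running):

# remove data of all satellites the node no longer trusts
docker exec -it storagenode /app/storagenode forget-satellite \
  --all-untrusted --config-dir config --identity-dir identity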

Did the gc-filewalker finish the scan for all satellites?
Please also check for errors from the used-space filewalker:

sls "used-space-filewalker" "C:\Program Files\Storj\Storage Node\storagenode.log" | sls "failed|error"

To check the used-space filewalker progress (it should run for all trusted satellites):

sls "used-space-filewalker" "C:\Program Files\Storj\Storage Node\storagenode.log" | sls "started|finished"

If I delete piece_expiration.db-wal and piece_expiration.db-shm from the database folder, would that solve the problem?

Are you referring to the new release 1.98? There should be upcoming updates regarding the used storage space.

Please don’t delete the db files. If the disk can’t keep up performance-wise, move them to an SSD. This should help.
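
For reference, relocating the databases is done with the storage2.database-dir option in config.yaml; a sketch, assuming an SSD mounted at /mnt/ssd:

# config.yaml: keep the node's SQLite databases on fast storage
storage2.database-dir: /mnt/ssd/storagenode-dbs

Stop the node and move the existing *.db files into the new directory before restarting.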


The -wal and -shm files are removed when you stop the node; they are temporary. The .db files are the actual databases. Don’t delete them if they pass a PRAGMA integrity_check.
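
If you want to run that check yourself, a minimal sketch with the sqlite3 CLI (stop the node first, and adjust the path to your storage location):

# a healthy database prints: ok
sqlite3 /path/to/storage/piece_expiration.db "PRAGMA integrity_check;"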


OK, so since the last garbage collection reported “success”, there seems to be another process that is actually moving the bulk of the pieces to trash. Since the “success” exit of the garbage-collector filewalker process, my node has accumulated 700 GB of additional trash, and it’s still going. Not sure which process this is, but it might be retain, since I see in the logs that one was initiated for EU1 but hasn’t exited yet.

I have these files on an SSD.

The dashboard does not display the correct size.


If you restart the node while GC runs, I get that you lose the bloom filter because it’s only kept in RAM. But what about the pieces already scanned? Are they moved to trash during the walk, or when GC finishes?
And if GC is interrupted, do they remain in trash, or are they moved back to blobs?

Can I delete the piece_expiration.db file?
It is 462 MB.

Content indexing should be disabled!

That is, if the drive is Storj-only, or if the search function is not needed for the other data!

@Vicente same for you!

Yes, the drive is Storj-only.
How can I do that? Is it done in Windows?
Or is it set via config.yaml?
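
It’s a Windows drive property rather than a config.yaml setting: in the drive’s Properties dialog, uncheck “Allow files on this drive to have contents indexed”. A hedged PowerShell equivalent (assuming the Storj drive is D:; run in an elevated shell):

# disable content indexing on the volume, same effect as unchecking the
# "Allow files on this drive to have contents indexed" box in Properties
$vol = Get-CimInstance -ClassName Win32_Volume -Filter "DriveLetter = 'D:'"
Set-CimInstance -InputObject $vol -Property @{ IndexingEnabled = $false }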