Resetting node used data amount keeps returning to 0

These errors happen because your databases are not updated with your current usage, as I already explained in your other post.

To fix them, the databases need to be updated with the current usage.
So everything I already suggested several times still applies: until you fix the underlying issues, the errors will remain.

There are several reasons for the disk usage discrepancy:

  1. Windows shows usage in binary units (base 2) but uses the wrong unit labels (they should be TiB, GiB, MiB). Our software uses SI units (base 10). So, for example, 6.28 TB shown by Windows is actually 6.28 TiB, which in SI units is about 6.90 TB. However, please note that the dashboard shows usage relative to the allocated space, not the whole disk.
  2. The cluster size is too large (for example, in exFAT it is 128 KiB, so any file smaller than the cluster size still occupies 128 KiB on the disk). NTFS uses tiered default cluster sizes depending on the volume size (up to 16 TiB: 4 KiB, 16–32 TiB: 8 KiB, etc.). You can check this by measuring a folder with data on the disk: you will see a difference between the data size and the size on disk.
  3. You have data from untrusted satellites; they are not shown in the satellites list on the dashboard, but may still use disk space.
  4. You have database-related errors.
  5. You have used-space-filewalker errors.
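To illustrate item 1, here is a minimal Python sketch of the binary-vs-SI conversion (the 1024⁴/1000⁴ factors are the standard definitions; the 6.28 figure is just the example above):

```python
# Convert a size that Windows labels "TB" (but measures in base 2, i.e. TiB)
# into SI terabytes (base 10), which is what the node software reports.
TIB = 1024 ** 4  # bytes in one tebibyte (what Windows actually measures)
TB = 1000 ** 4   # bytes in one SI terabyte (what the dashboard reports)

def tib_to_tb(tib: float) -> float:
    """Reinterpret a Windows 'TB' value as TiB and convert it to SI TB."""
    return tib * TIB / TB

print(round(tib_to_tb(6.28), 2))  # → 6.9
```

So the "6.28 TB" Windows shows and the roughly 6.90 TB the node reports describe the same number of bytes.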
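And for item 2, a short sketch of how cluster-size rounding inflates on-disk usage (the 128 KiB exFAT cluster is from the example above; the file sizes are made up for illustration):

```python
import math

def size_on_disk(file_size: int, cluster_size: int) -> int:
    """A file always occupies a whole number of clusters on disk."""
    return math.ceil(file_size / cluster_size) * cluster_size

EXFAT_CLUSTER = 128 * 1024  # 128 KiB, the exFAT example above
NTFS_CLUSTER = 4 * 1024     # 4 KiB, the NTFS default for smaller volumes

# A 4 KiB piece still occupies a full 128 KiB cluster on exFAT:
print(size_on_disk(4 * 1024, EXFAT_CLUSTER) // 1024)  # → 128 (KiB)
# The same file on NTFS with 4 KiB clusters occupies exactly 4 KiB:
print(size_on_disk(4 * 1024, NTFS_CLUSTER) // 1024)   # → 4 (KiB)
```

With millions of small pieces, that per-file rounding adds up to a visible gap between the data size and the occupied space.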

You need to forget the untrusted satellites with the --force flag: How To Forget Untrusted Satellites

Please search for errors in your logs (PowerShell):

sls error "$env:ProgramFiles\Storj\Storage Node\storagenode.log" | sls "database|filewalker"

If you have database errors like “malformed” or “not a database”, you need to fix/recreate the affected databases: How to fix a “database disk image is malformed”/How to fix database: file is not a database error.
If you have database errors like “database is locked”, you need to move the databases to a system drive/SSD.
If you have filewalker errors, you need to enable the scan on startup if you disabled it (it’s enabled by default) and disable the lazy mode in your config.yaml file:
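To see which databases are affected before fixing anything, you can run SQLite's integrity check on each of them. A minimal Python sketch; the storage path below is an assumption for a Windows install, so point it at your node's actual data directory, and run it while the node is stopped to avoid "database is locked":

```python
import sqlite3
from pathlib import Path

# Assumed location; adjust STORAGE to your node's storage folder
# (the *.db files live there, next to the blobs folder).
STORAGE = Path(r"C:\Program Files\Storj\Storage Node\storage")

if not STORAGE.is_dir():
    print(f"{STORAGE} not found; point STORAGE at your node's storage folder")
else:
    for db in sorted(STORAGE.glob("*.db")):
        try:
            with sqlite3.connect(db) as conn:
                status = conn.execute("PRAGMA integrity_check;").fetchone()[0]
        except sqlite3.DatabaseError as exc:  # e.g. "file is not a database"
            status = f"ERROR: {exc}"
        print(f"{db.name}: {status}")  # a healthy database prints "ok"
```

Any database that does not report "ok" is a candidate for the fix/recreate procedure linked above.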

# run garbage collection and used-space calculation filewalkers as a separate subprocess with lower IO priority (default true)
pieces.enable-lazy-filewalker: false

Save the config and restart the node. Please note that in the non-lazy mode the filewalkers don’t print messages to the log, so you can either:

  • increase the log level to debug (the log.level parameter in your config, or via the debug port)
  • use the debug port and the /mon/ps method
  • use Resource Monitor to track which folder in the blobs folder the storagenode.exe process is reading (the folders are scanned in alphabetical order)
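For the debug-port options above, note that by default the node picks a random debug port at startup, so it is easier to pin it to a fixed address first. A hedged sketch, assuming the debug.addr option and an example port (any free local port works):

```yaml
# config.yaml: pin the node's debug endpoint to a fixed local address
# (the address below is an example, not a required value)
debug.addr: 127.0.0.1:11778
```

After saving the config and restarting the node, you can query http://127.0.0.1:11778/mon/ps (for example with curl.exe in PowerShell) and look for a filewalker among the currently running operations.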

When the used-space-filewalker successfully finishes the scan for all trusted satellites and updates the databases, the pie chart on the dashboard will show the correct values.
Please note,

  • used-space-filewalkers start only on a node restart, provided you didn’t disable the scan on startup (it’s enabled by default);
  • used-space-filewalkers update the databases only on successful completion; they don’t update the databases during the scan (so the progress is kept in memory and is not shown on the dashboard);
  • the scan may take days. If you restart the node before it finishes, the current progress is lost and the scan starts from scratch;
  • if the used-space-filewalker fails with exit code 1, it will not be restarted automatically; the only solution is to either optimize the filesystem or disable the lazy mode (the pieces.enable-lazy-filewalker option);
  • database errors will prevent the used-space-filewalker from updating the usage, so the progress will be lost in that case; this is why you need to fix any database-related errors first.

So, what filewalker/database errors do you have?