Resetting node used data amount keeps returning to 0

These are the errors on the bottom of the log.

2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesTotal < 0	{"satPiecesTotal": -3328}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesContentSize < 0	{"satPiecesContentSize": -2816}
2024-07-17T13:42:06-04:00	ERROR	blobscache	piecesTotal < 0	{"piecesTotal": -52480}
2024-07-17T13:42:06-04:00	ERROR	blobscache	piecesContentSize < 0	{"piecesContentSize": -51968}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesTotal < 0	{"satPiecesTotal": -52480}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesContentSize < 0	{"satPiecesContentSize": -51968}
2024-07-17T13:42:06-04:00	ERROR	blobscache	piecesTotal < 0	{"piecesTotal": -3072}
2024-07-17T13:42:06-04:00	ERROR	blobscache	piecesContentSize < 0	{"piecesContentSize": -2560}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesTotal < 0	{"satPiecesTotal": -3072}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesContentSize < 0	{"satPiecesContentSize": -2560}
2024-07-17T13:42:06-04:00	INFO	piecestore	uploaded	{"Piece ID": "7AOFDH7GXXZEAXTGEHWJSJPL7Z2K7P4LX3VNOGXM5SPYVBFKDQ7Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.201.212:46834", "Size": 2319360}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesTotal < 0	{"satPiecesTotal": -250368}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesContentSize < 0	{"satPiecesContentSize": -249856}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesTotal < 0	{"satPiecesTotal": -3840}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesContentSize < 0	{"satPiecesContentSize": -3328}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesTotal < 0	{"satPiecesTotal": -3840}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesContentSize < 0	{"satPiecesContentSize": -3328}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesTotal < 0	{"satPiecesTotal": -3840}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesContentSize < 0	{"satPiecesContentSize": -3328}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesTotal < 0	{"satPiecesTotal": -3840}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesContentSize < 0	{"satPiecesContentSize": -3328}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesTotal < 0	{"satPiecesTotal": -2560}
2024-07-17T13:42:06-04:00	ERROR	blobscache	satPiecesContentSize < 0	{"satPiecesContentSize": -2048}

These errors are related to databases that have not been updated with your current usage, as I already explained in your other post.

To fix them you need to have the databases updated with the current usage.
So everything I have already suggested several times still applies. Until you fix the underlying issues, these errors will remain.

There are several reasons for the disk usage discrepancy:

  1. Windows shows usage in binary units (base 2) but labels them with the wrong unit symbols (they should be TiB, GiB, MiB). Our software uses SI units (base 10). So, for example, 6.28 TB shown by Windows is actually 6.28 TiB, which is about 6.9 TB in SI units. However, please note that the dashboard shows usage of the allocated space, not of the whole disk.
  2. The cluster size is too large (for example, in exFAT it can be 128 KiB, so any file smaller than the cluster size still occupies 128 KiB on the disk). NTFS uses cluster sizes ranged by volume size (0-16 TiB: 4 KiB, 16-32 TiB: 8 KiB, etc.). You can check this by measuring a folder with data on the disk: you will see a difference between the data size and the size on disk.
  3. You have data from untrusted satellites; they are not shown in the satellites list on the dashboard but may still use the disk.
  4. You have errors related to the databases.
  5. You have errors related to the used-space filewalker.
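
The arithmetic behind points 1 and 2 can be sketched in Python (a minimal illustration using the 6.28 TiB figure and the 128 KiB exFAT cluster size from the list above; the function name is mine, not from any Storj code):

```python
import math

# Point 1: binary (base-2) vs SI (base-10) units.
# 6.28 "TB" reported by Windows is really 6.28 TiB:
TIB = 1024 ** 4          # bytes in one TiB
TB = 10 ** 12            # bytes in one SI terabyte
si_value = 6.28 * TIB / TB
print(round(si_value, 2))  # → 6.9 (TB in SI units)

# Point 2: cluster-size overhead ("slack"). A file always occupies
# a whole number of clusters, so small files waste disk space.
def size_on_disk(file_size: int, cluster: int) -> int:
    return math.ceil(file_size / cluster) * cluster

# A 2 KiB piece on an exFAT volume with 128 KiB clusters:
print(size_on_disk(2048, 128 * 1024))  # → 131072 (i.e. 128 KiB on disk)
```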

You need to forget the untrusted satellites with the --force flag: How To Forget Untrusted Satellites

Please search for errors in your logs (PowerShell):

sls error "$env:ProgramFiles\Storj\Storage Node\storagenode.log" | sls "database|filewalker"

If you have database errors like "malformed" or "not a database", you need to fix or recreate them: How to fix a “database disk image is malformed”/How to fix database: file is not a database error.
If you have database errors like "database is locked", you need to move the databases to a system drive/SSD.
If you have filewalker errors, enable the scan on startup if you have disabled it (it is enabled by default) and disable the lazy mode in your config.yaml file:

# run garbage collection and used-space calculation filewalkers as a separate subprocess with lower IO priority (default true)
pieces.enable-lazy-filewalker: false

Save the config and restart the node. Please note that in non-lazy mode the filewalkers do not print messages to the log, so you can either:

  • increase the log level to debug (the log.level parameter in your config, or via the debug port)
  • use the debug port and the /mon/ps method
  • use Resource Monitor to track which folder in the blobs directory is being processed by the storagenode.exe process (they are scanned in alphabetical order)

When the used-space filewalker successfully finishes the scan for all trusted satellites and successfully updates the databases, the pie chart on the dashboard will show the correct values.
Please note:

  • used-space filewalkers start only on a node restart, and only if you did not disable the scan on startup (it is enabled by default);
  • used-space filewalkers update the databases only on a successful finish; they do not update the databases during the scan (the progress is kept in memory and is not shown on the dashboard);
  • the scan may take days. If you restart the node before it finishes, the current progress is lost and the scan starts from scratch;
  • if the used-space filewalker fails with exit code 1, it is not restarted automatically; the only solution is to either optimize the filesystem or disable the lazy mode (the pieces.enable-lazy-filewalker option);
  • database errors will prevent the used-space filewalker from updating the usage, so its progress will be lost in that case; this is why you first need to fix any database-related errors.

So, what filewalker/database errors do you have?

How do I fix them?

Is this not where I get support (the troubleshooting forum)? Not everyone is as experienced as everyone else, and you just say things are wrong - that's not very helpful. (Your name says Leader beside it, so I am guessing you are a more senior individual with Storj.)

I have, both on this request and your previous one, to which you did not want the results posted. Did you want this occurrence posted?

What am I meant to do with this?

I do not know.

The reason for those blobscache errors is probably the deleted databases. Those messages are not critical; they are basically more like info and can be ignored.

I guess the used-space filewalkers are still running. Check the logs for that.

Why would they still be there if the db files have been recreated? These are my database files in the dir… (all seem to have been recently recreated after they were deleted, going by the dates)

What do I search for in the logs? filewalkers?

EDIT: found this - so it looks like it completed. Correct? Is that just for that particular satellite?

2024-07-18T12:41:00-04:00	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}

The recreated databases must be filled with correct numbers by the filewalkers. Until then, some calculations may produce numbers below zero, which doesn't make sense. The error message is the result of a failed plausibility check.
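
To illustrate how such a plausibility check trips (a hypothetical sketch; the class and method names are mine, not the actual storagenode code): after the databases are recreated the counters start at zero, so deleting a piece that was counted before the reset drives the running total negative.

```python
# Hypothetical sketch of a blobs cache after a database reset;
# illustrative only, not the real storagenode implementation.
class BlobsCache:
    def __init__(self):
        self.pieces_total = 0  # recreated DB: counter starts at zero

    def add_piece(self, size: int):
        self.pieces_total += size

    def delete_piece(self, size: int):
        self.pieces_total -= size
        if self.pieces_total < 0:  # plausibility check fails
            print(f'ERROR blobscache piecesTotal < 0 {{"piecesTotal": {self.pieces_total}}}')

cache = BlobsCache()       # totals were reset to 0 with the databases
cache.delete_piece(52480)  # a piece stored before the reset gets deleted
# → ERROR blobscache piecesTotal < 0 {"piecesTotal": -52480}
```

Once the filewalker repopulates the real counters, the same deletions no longer push the totals below zero, which is why the messages stop.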


Do I need to do anything?

Just ignore those “< 0” messages. Once all filewalkers have finished successfully, they should stop.


The first thing is to fix the crashes. In your case it seems your disk is failing the writeability check within the default timeout (the disk subsystem is slow).
Optimize it,

or increase the timeout for the check if you know you have already done everything but it's still slow.
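
For example, the check timeouts can be raised in config.yaml (the option names below are from the storagenode configuration; the values are illustrative, so verify them against your own config file before relying on them):

```yaml
# how long to wait for a writability check of the storage directory
# (default is 1m0s; raise only if the disk is known to be slow)
storage2.monitor.verify-dir-writable-timeout: 1m30s

# the corresponding readability-check timeout can be raised the same way
storage2.monitor.verify-dir-readable-timeout: 1m30s
```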

See also:

Put/uncomment it in your config.yaml, save the config, and restart the node. Then wait until the scan finishes for all trusted satellites (it may take several days). Do not restart the node/PC without a strong reason, to let it finish. Otherwise it will start from scratch.

Exactly the reason. Since the databases do not contain the actual info (they are empty), your dashboard will be way off until the used-space filewalker fills them with the collected usage.

Yes. For this exact satellite. Three more to go.