Wrong used & remaining disk space

Hey nyancodex

do you have an update?

It's bad. Lol. Disk stats remain the same. Earnings and bandwidth not updated… (yet?)

Guys, it's fixed. I haven't checked it for a while. Yesterday I checked the dashboard and yeah, it's fixed. So the solution is to recreate all databases. Well, it's not a recommended way to go but it works. Thank you @Alexey Thank you @Paku
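For anyone landing here later, a hedged sketch of what "recreate all databases" means in practice (local stats history is lost; the node rebuilds empty databases on the next start). The `STORAGE_DIR` default and the commented-out docker commands are assumptions about a typical docker setup:

```shell
# Hedged sketch of the "recreate all databases" route.
# STORAGE_DIR is an assumption -- point it at your node's storage directory.
STORAGE_DIR="${STORAGE_DIR:-$(mktemp -d)}"   # mktemp fallback keeps this runnable
touch "$STORAGE_DIR/bandwidth.db"            # stand-in for the real databases
# docker stop -t 300 storagenode             # 1) stop the node gracefully first
mkdir -p "$STORAGE_DIR/db-backup"            # 2) keep the old files, just in case
mv "$STORAGE_DIR"/*.db "$STORAGE_DIR/db-backup/"
# docker start storagenode                   # 3) start; empty databases are recreated
```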


Yeah, great that it works again.

:+1: :grinning:


I've got the same problem on all my nodes. Is there a way to force a recalculation without removing the databases?
I'm afraid the storj node will have no free space left and the databases will get corrupted as a result.

Just set storage2.piece-scan-on-startup to true and restart.
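A minimal sketch of that change (the file location depends on your setup; for docker nodes `config.yaml` usually sits in the mounted storage directory):

```yaml
# config.yaml -- re-enable the used-space scan on startup:
storage2.piece-scan-on-startup: true
```

Then restart the node, e.g. `docker restart -t 300 storagenode` for a docker setup.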


Also of note: a capital H should be used to display units in powers of 1000, as that is what Storj uses.

df -H
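The difference is visible with `numfmt` from GNU coreutils: the same byte count renders differently in SI units (powers of 1000, like `df -H` and the dashboard) and IEC units (powers of 1024, like `df -h`):

```shell
# 18 TB of data shown in both unit systems:
numfmt --to=si  18000000000000   # SI  (powers of 1000), like df -H
numfmt --to=iec 18000000000000   # IEC (powers of 1024), like df -h
```

This is why an 18T allocation can show up as 17T in `df -h` without anything being wrong.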

Isn't it enabled by default? :open_mouth:
Also, is there a way to check whether the config option was read and applied by the storagenode?

Filesystem Type Size Used Avail Use% Mounted on
DATAPOOL zfs 1.7T 132k 1.7T 1% /mnt/DATAPOOL
DATAPOOL/STORJ zfs 18T 17T 1.7T 92% /mnt/DATAPOOL/STORJ


Hello @PocketSam,
Welcome back!

It's enabled by default, but many SNOs have disabled it. So, if you did too, you need to enable it again and restart.
You also need to make sure that you do not have errors related to the databases and filewalkers in your logs, otherwise the databases will not be updated with the actual values.
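A hedged sketch of a log check for exactly those two error classes. It is demonstrated on a sample line (we obviously cannot ship your real log); on a live docker node you would pipe `docker logs storagenode 2>&1` into the same grep instead:

```shell
# Flag log lines that mention an error together with databases or filewalkers:
printf '%s\n' '2024-06-25T07:07:29Z ERROR services unexpected shutdown of a runner {"error": "database is locked"}' |
  grep -E -i 'error.*(database|filewalker)'
```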

The databases should be OK. The issue with the incorrectly reported free space happened while the node was running. The first thing I did was check the databases, and all of them were fine.
I've searched through the log and found a few errors that may be related. What can I do to fix this? I've already recreated the container with no luck.

2024-06-25T07:07:29Z ERROR services unexpected shutdown of a runner {"Process": "storagenode", "name": "forgetsatellite:chore", "error": "database is locked"}
2024-06-25T07:07:29Z INFO lazyfilewalker.trash-cleanup-filewalker subprocess exited with status {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "status": -1, "error": "signal: killed"}
2024-06-25T07:07:29Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: lazyfilewalker: signal: killed", "errorVerbose": "pieces error: lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkCleanupTrash:187\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:419\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:84\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}

The "database is locked" issue can be solved by tuning your disk subsystem, simplifying it, adding more RAM, or even adding an SSD. It all depends on the setup.
The simplest fix is to move the databases to another, less loaded disk/SSD and configure your node to use the new path for the databases.
The more complicated options are:

  • add more RAM if you have it;
  • make your storage tiered, e.g. by adding an SSD cache (a special device if you use ZFS);
  • reconfigure the whole storage subsystem if itā€™s not optimal.
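A sketch of the "move the databases" route, using the documented `storage2.database-dir` option; the path is an example, and for docker nodes the new directory must also be mounted into the container:

```yaml
# config.yaml -- keep the SQLite databases on a faster / less loaded disk:
storage2.database-dir: /mnt/ssd/storj-db
```

Copy the existing *.db files to the new location while the node is stopped, then start it again.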

Usually the "one node - one disk" setup doesn't have issues, unless it is a Windows VM on VMware on a Linux host. However, I believe that's not your case anyway.

Thanks for the advice. But what can I do about the incorrect space reporting? The storage node takes more space than I've assigned to it. :frowning:

Space reporting depends on the databases and filewalkers. So make sure they are working.


I'm not sure I can do anything about it, because everything is in a container. I think my databases work just fine. I have no errors in the current log. And I have no idea how to run the filewalker.
I've got a default container with default parameters in the config; the only edit is explicitly enabling storage2.piece-scan-on-startup: true. All stats get updated, except they are wrong. :slight_smile:
How can I make the filewalkers work? I'll try to find documentation for that, but it sounds like it requires developer skills.

The used space filewalker runs once after start. So it needs a node restart to trigger.

Make sure it is not disabled in the config file and restart the node. Then check the logs to see whether it finished successfully for all satellites. Depending on the hardware it might need days to finish.
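A hedged sketch of that log check. The sample completion line below is illustrative (exact wording may differ between versions); on a live docker node you would pipe `docker logs storagenode 2>&1` into the grep instead:

```shell
# Count per-satellite completion lines from the used-space filewalker:
printf '%s\n' '2024-06-28T10:00:00Z INFO lazyfilewalker.used-space-filewalker subprocess finished successfully' |
  grep -c 'used-space-filewalker.*finished successfully'
```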


The only skill it requires is to find the errors in the logs and post them here.
The popular ones:

It can be fixed by optimizing your disk subsystem or adding a cache (RAM and/or SSD).
The alternative is to run it with the lazy mode disabled:
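The lazy-mode switch referred to here is the `pieces.enable-lazy-filewalker` option; a minimal sketch:

```yaml
# config.yaml -- run the filewalkers in the main process at normal IO priority:
pieces.enable-lazy-filewalker: false
```

This trades more IO pressure during the scan for a much lower chance of the filewalker subprocess being killed.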

The other popular one is:

but not only bandwidth.db; you could find that other databases are locked too. This means that your disk subsystem cannot keep up, and there are several solutions:

  • move databases to another less loaded disk/SSD
  • add cache

I've restarted the node and still see no errors related to anything except the piecestore. :frowning:

I removed an email and wallet address from the log. The log is quite long, so I've posted it here: JustPaste.it - storj node log

Could you please post the whole error? Is it related to a database or to a filewalker?
If so, then this is enough to screw up the stats.

In the provided excerpt I do not see any errors, nor any sign that the used-space filewalkers finished.

I was referring to this error:


2024-06-27T14:33:47Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "NEE52DB6NUQI5OT7AV5J4HNTVH5HLHL4466A54NTL3C4YWTT4MIQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.72:50854", "Size": 196608, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}