Hey nyancodex
do you have an update?
Guys, it's fixed. I hadn't checked it for a while, but yesterday I looked at the dashboard and yeah, it's fixed. So the solution is to recreate all databases. Well, it's not a recommended way to go, but it works. Thank you @Alexey Thank you @Paku
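For anyone wanting to try the same, this is roughly what recreating the databases looks like on a docker node. The paths and container name are just examples from my setup, and note that this drops your local stats history, so check the official guidance before doing it:

# stop the node so nothing is writing to the databases
docker stop -t 300 storagenode
# move (don't delete) the existing databases out of the storage location
mkdir -p /mnt/storj/db-backup
mv /mnt/storj/storage/*.db /mnt/storj/db-backup/
# on the next start the node should recreate empty databases
docker start storagenode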
Yeah, great that it works again.
I've got the same problem on all my nodes. Is there a way to force a recalculation without removing the databases?
I'm afraid the storage node will have no free space left and the databases will get corrupted as a result.
Just set storage2.piece-scan-on-startup to true and restart.
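For reference, that option can go either into config.yaml or onto the docker run command (the image and container names below are the usual defaults, adjust to your setup):

# in config.yaml
storage2.piece-scan-on-startup: true

# or as an extra flag after the image name when (re)creating the container
docker run ... storjlabs/storagenode:latest --storage2.piece-scan-on-startup=true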
Also of note: a capital H should be used so that df displays units in powers of 1000, as that is what Storj uses.
df -H
Isn't it enabled by default?
Also, is there a way to check whether the config option was actually read and applied by the storagenode?
Filesystem Type Size Used Avail Use% Mounted on
DATAPOOL zfs 1.7T 132k 1.7T 1% /mnt/DATAPOOL
DATAPOOL/STORJ zfs 18T 17T 1.7T 92% /mnt/DATAPOOL/STORJ
Hello @PocketSam,
Welcome back!
It's enabled by default, but many SNOs have disabled it. So, if you did too, you need to enable it back and restart.
You also need to make sure that you do not have errors related to databases and filewalkers in your logs, otherwise the databases will not be updated with the actual values.
The databases should be OK. The issue with incorrectly reported free space happened while the node was running. The first thing I did was check the databases, and all of them were fine.
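(For context, by "checked" I mean the usual sqlite3 integrity check, something along these lines; the path is just an example from my layout:)

# run against each database file while the node is stopped or idle
for db in /mnt/storj/storage/*.db; do
  echo "$db: $(sqlite3 "$db" 'PRAGMA integrity_check;')"
done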
I've searched through the log and found a few errors that may be related. What can I do to fix this? I've already recreated the container with no luck.
2024-06-25T07:07:29Z ERROR services unexpected shutdown of a runner {"Process": "storagenode", "name": "forgetsatellite:chore", "error": "database is locked"}
2024-06-25T07:07:29Z INFO lazyfilewalker.trash-cleanup-filewalker subprocess exited with status {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "status": -1, "error": "signal: killed"}
2024-06-25T07:07:29Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: lazyfilewalker: signal: killed", "errorVerbose": "pieces error: lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkCleanupTrash:187\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:419\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:84\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
The "database is locked" issue can be solved by tuning your disk subsystem, simplifying it, or adding more RAM or even an SSD. It all depends on the setup.
The simplest option is to move the databases to another, less loaded disk/SSD and configure your node to use this new path for databases.
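A minimal sketch of how that usually looks, assuming a docker setup where the new location is also mounted into the container (the option is storage2.database-dir; the paths below are just examples):

# in config.yaml: path to the databases as seen from inside the container
storage2.database-dir: /app/dbs

# plus an extra mount on the docker run command, e.g.
# --mount type=bind,source=/mnt/ssd/storj-dbs,destination=/app/dbs

Copy the existing *.db files to the new location while the node is stopped, then start it again.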
The more complicated are:
Usually the "one node - one disk" setup doesn't have issues, unless it is a Windows VM on VMware on a Linux host. However, I believe that's not your case anyway.
Thanks for the advice. But what can I do about the incorrect space reporting? The storage node takes more space than I've assigned to it.
Space reporting depends on the databases and filewalkers, so make sure they are working.
I'm not sure I can do anything about it because everything is in a container. I think my databases work just fine, and I have no errors in the current log. And I have no idea how to run a filewalker.
I've got a default container with default parameters in the config; the only edit is explicitly enabling storage2.piece-scan-on-startup: true. All stats get updated, except they are wrong.
How can I make the filewalkers work? I'll try to find documentation for that, but it sounds like it requires developer skills.
The used-space filewalker runs once after start, so it needs a node restart to trigger.
Make sure it is not disabled in the config file and restart the node. Then check the logs to see whether it finished successfully for all satellites. Depending on the hardware, it might need days to finish.
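If it helps, this is roughly how I check it on a docker node (the exact wording of the log messages can differ between versions, so treat the grep patterns as examples):

docker logs storagenode 2>&1 | grep used-space | grep -iE "start|finish|complete|error"

You want to see a successful finish for each satellite, and no errors in between.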
It only requires the skill to find the errors in the logs and post them here.
The popular ones: the first is the lazy filewalker being killed ("lazyfilewalker: signal: killed", as in your log above). It can be fixed by optimizing your disk subsystem or adding a cache (RAM and/or SSD).
The alternative is to run it with the lazy mode disabled:
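I believe the relevant option is pieces.enable-lazy-filewalker; treat the exact name as an assumption and check the commented defaults in your own config.yaml:

# in config.yaml
pieces.enable-lazy-filewalker: false

# or as a flag after the image name on the docker run command
# --pieces.enable-lazy-filewalker=false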
The other popular one is the "database is locked" error, usually for bandwidth.db but not only; here you could find that other databases are locked as well. This means that your disk subsystem cannot keep up, and there are several solutions:
I've restarted the node and still see no errors related to anything except the piecestore.
I removed the email and wallet address from the log. The log is quite long, so I've posted it here: JustPaste.it - storj node log
Could you please post the whole error? Is it related to a database or to a filewalker?
If so, then that would be enough to screw up the stats.
In the provided excerpt I do not see any errors, nor any indication that any of the used-space-filewalkers have finished.
I was referring to this error:
2024-06-27T14:33:47Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "NEE52DB6NUQI5OT7AV5J4HNTVH5HLHL4466A54NTL3C4YWTT4MIQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.72:50854", "Size": 196608, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}