Disk usage discrepancy?

I already have this activated in the node.

I have lowered the node capacity to 500 GB and restarted the node.

The node has been running for 53 hours and the disk is at 100% activity.

I’m seeing the same thing. The system says 7.1 TB free, the dashboard says 12 TB free.

I did that, and Storj came back and said it was using LESS storage: it went from 13 TB in use to 11 TB in use after the restart, while the system still reported 8 TB free. Seems to be an issue somewhere? At the rate the node has been growing lately, I am taking in about 1 TB a day, so I am going to run out in the next few days if things don’t sync up.

Hello.
The node is running on Windows 10.
Total disk space for the node: 12.5 TB.
On 06/16/2024 the node went offline. The PC was working, the internet was working, but the node was down.
I restarted my PC and the node started normally.
Before the reboot ~8.2 TB were occupied, and after it only 4.7 TB.
3.5 TB disappeared from the node's accounting.
But this data was not erased from the HDD.
So now I have 3.5 TB of data on my disk that the node no longer takes into account.
About 30 hours have passed.
Will the problem be resolved over time, or do I need to run a command?
(Sorry for my English; this is via Google Translate.)

Add the following line to config.yaml:

storage2.piece-scan-on-startup: true
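Then restart the node so the setting takes effect. On a Windows GUI install that is typically done from an elevated PowerShell (assuming the default service name; a docker node would use docker restart instead):

Restart-Service storagenode      # or: docker restart -t 300 storagenode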

After the reboot the used space decreased.
The free space is displayed incorrectly.

It seems that the used-space filewalker cannot finish its job.
Windows node; this is what I can see in the log:

2024-06-16T08:03:30+02:00 INFO lazyfilewalker.used-space-filewalker.subprocess Database started {satelliteID: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, Process: storagenode}
2024-06-16T08:03:30+02:00 INFO lazyfilewalker.used-space-filewalker.subprocess used-space-filewalker started {satelliteID: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, Process: storagenode}

The disk is now close to idle as it is full, and it seems it is not running any filewalkers.

The node was restarted a few minutes before this log line.
Also, the dashboard’s total disk space is not correct; the average disk space used seems OK compared to the actual disk usage in Windows Explorer.
It is a 4 TB drive and shows around 800 GB free space in Explorer.
2.45 TB avg used + 550 GB trash + 800 GB actual free is about right for a 4 TB drive.
But 3.19 TB used + 550 GB trash + 800 GB actual free is a bit too much. :slight_smile:
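Spelling out the arithmetic with those rounded numbers:

2.45 TB avg used + 0.55 TB trash + 0.80 TB free ≈ 3.80 TB → plausible for a 4 TB drive
3.19 TB used + 0.55 TB trash + 0.80 TB free ≈ 4.54 TB → more than the drive can physically hold

so it is the dashboard’s “used” figure, not Explorer, that looks wrong.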

Batman is watching… :man_supervillain:t2:

The same thing has happened to me several times in recent months on various nodes. It always happens when they are receiving a large amount of data; I think the hard drive is not capable of writing that much data and doing the other node operations at the same time. It has happened to me on different 7 TB nodes, but never on a node that had 1 to 2 TB.

Would it be possible for less data to enter the node while it is scanning pieces at startup or moving pieces to the trash?

I think I’m going to disable lazy filewalker on my nodes.
I’m happy to sacrifice ingress for a few hours to make sure I have correct database information.

Is that just a config tweak?

Yes, it’s a setting in config.yaml. The lazy filewalker is enabled by default:

pieces.enable-lazy-filewalker: true
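To actually disable lazy mode, that value needs to be set to false and the node restarted:

pieces.enable-lazy-filewalker: false

With lazy mode off, the used-space scan runs inside the main process at normal I/O priority, so it tends to finish much faster at the cost of competing with uploads and downloads for disk time.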

Hello @netcvet,
Welcome to the forum!

Do you have any FATAL errors in your logs?
If so, see

You need to enable the scan on startup if you have disabled it (it's enabled by default), and restart the node. You also need to make sure that you do not have any errors related to databases, otherwise the data on the dashboard will not be updated.
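A quick way to check that on a Windows node is to search the log for ERROR lines that mention the databases (the path below is the default GUI install location; adjust it if your log lives elsewhere):

Select-String -Path "C:\Program Files\Storj\Storage Node\storagenode.log" -Pattern "ERROR" | Select-String -Pattern "database"

If this returns lines about malformed, locked, or missing databases, those have to be fixed before the dashboard can update.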

If you mean the Avg Used Space, then it is related to the satellites, not your node. See Avg disk space used dropped with 60-70%; it will be fixed eventually. It doesn’t affect billing or payout, only the estimations.

Perhaps you need to wait longer? I don’t see any errors in your logs.

I don't see any fatal errors in the logs.

I did it. Nothing changed.

2024-06-17T19:05:40+03:00 WARN piecestore:monitor Disk space is less than requested. Allocated space is {"bytes": 9030623180888}

There are many such errors:

2024-06-12T23:41:59+03:00	ERROR	piecestore	upload failed	{"Piece ID": "BW2QADQIA7MCBENNFDN3JN6BXGZMWZKXPLSOJZVHPKYW2W6WMSHQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "63.251.87.53:42674", "Size": 917504, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-12T23:42:04+03:00	ERROR	piecestore	upload failed	{"Piece ID": "WAOBRIHQJE6JS53PZXNWAPO5FM753ULWK43RSN2GJRFOW3CHHEAQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.80:50470", "Size": 196608, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-12T23:42:04+03:00	ERROR	piecestore	upload failed	{"Piece ID": "RZKGYMSDJYYPTN3BEZFN56NEWW7WAVRPMZ4Y2UWWVM3HONHAAWNQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.244:46894", "Size": 196608, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-12T23:42:04+03:00	ERROR	piecestore	upload failed	{"Piece ID": "MTI7SEJFZZA3GKVM3CS2QBI337SIO5EBDQPLOYYOB4A334HWVKOQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "63.251.87.53:60538", "Size": 917504, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-12T23:42:09+03:00	ERROR	piecestore	upload failed	{"Piece ID": "FHAPMVF25F3ZRG2LO6U5HWGIGRNU6C23CELMD6LMJ2MART7XQYTA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.225:53606", "Size": 131072, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-12T23:42:10+03:00	ERROR	piecestore	upload failed	{"Piece ID": "QYKJAILDXBIRIWOUNBM5XRTRIGJSNJREZPUVTB5SNDUY227OMHLQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.81:58612", "Size": 196608, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-12T23:42:11+03:00	ERROR	piecestore	upload failed	{"Piece ID": "5KTE6H7WFTH37ZH7WKYDHG5COI3TEV4DXROG5VPYTF3CTO2DGKWA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.239:42140", "Size": 131072, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-12T23:42:12+03:00	ERROR	piecestore	upload failed	{"Piece ID": "PLBOG3OOVS6ZNZC76NSPP5AJ445NTCP5H7MTXCULYVLHKAP3XZOQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.219.36:41520", "Size": 131072, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-12T23:42:12+03:00	ERROR	piecestore	upload failed	{"Piece ID": "HGVRYWRZKPYSASSIY3UUVRVCSJ3G364L5QXJ2DZ3IGMLWR7ZDDGA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.72:41982", "Size": 196608, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-12T23:42:12+03:00	ERROR	piecestore	upload failed	{"Piece ID": "3UAJYDNLM5M4YDHAJJZ6FQ6PTXIP4D4BEV3PPJ3DR35KUXKZWVAA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.219.42:44602", "Size": 196608, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
INFO	lazyfilewalker.used-space-filewalker	subprocess exited with status	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "status": 1, "error": "exit status 1"}
2024-06-17T19:05:37+03:00	ERROR	pieces	failed to lazywalk space used by satellite	{"error": "lazyfilewalker: exit status 1", "errorVerbose": "lazyfilewalker: exit status 1\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:704\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-06-17T19:05:37+03:00	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "context canceled"}
2024-06-17T19:05:37+03:00	ERROR	pieces	failed to lazywalk space used by satellite	{"error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:704\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-06-17T19:05:37+03:00	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "error": "context canceled"}
2024-06-17T19:05:37+03:00	ERROR	pieces	failed to lazywalk space used by satellite	{"error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:704\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-06-17T19:05:37+03:00	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "context canceled"}
2024-06-17T19:05:37+03:00	ERROR	pieces	failed to lazywalk space used by satellite	{"error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:704\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-06-17T19:05:37+03:00	ERROR	piecestore:cache	error getting current used space: 	{"error": "filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled", "errorVerbose": "group:\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:713\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:713\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:713\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:713\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-06-17T19:06:09+03:00	ERROR	piecestore	upload failed	{"Piece ID": "I5Q56QT3YZWCVYYF2HQQA6GMUR6HGKUKMFG4FYF74SAZVZMY5H4Q", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.77:49232", "Size": 131072, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-17T19:11:44+03:00	ERROR	piecestore	upload failed	{"Piece ID": "USKSFTLONR4G2OLWQX7ECDCAWQ2FIAVHSH5DLTHZ2RUGYSTANXIQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.226.97:42118", "Size": 196608, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-17T19:15:10+03:00	ERROR	piecestore	upload failed	{"Piece ID": "VTBNMUMLZKFAF2PIS4LNR763M3KV6QYTYIVBA7MMLOM4JEPJPYYA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.213.33:58968", "Size": 262144, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-17T19:15:33+03:00	INFO	lazyfilewalker.used-space-filewalker	subprocess exited with status	{"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "status": 1, "error": "exit status 1"}
2024-06-17T19:15:33+03:00	ERROR	pieces	failed to lazywalk space used by satellite	{"error": "lazyfilewalker: exit status 1", "errorVerbose": "lazyfilewalker: exit status 1\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:704\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}

Mine was enabled too. I disabled it, restarted, and left it overnight. It doesn't look any different.

Has it finished, though?
Depending on the size of your node it can take a few days…
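One way to check is to search the log for the used-space filewalker and see whether the most recent entries for each satellite report a successful finish rather than an error (log location and exact wording vary by setup and version; these are just example invocations):

Select-String -Path "C:\Program Files\Storj\Storage Node\storagenode.log" -Pattern "used-space-filewalker"
docker logs storagenode 2>&1 | grep used-space-filewalker

If the latest lines still show "failed" or "exit status 1", as in the excerpts above, the scan has not completed yet.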

Hello,

I’m having an issue with my Storj node. On my system (Debian) the drive is filled to the brim (12 TB), but the dashboard shows it only about half full. Today I only received around 10 GB of ingress, which tells me the drive really is full (the dashboard just isn’t showing it). Why does the Storj node dashboard have this abnormal discrepancy? I hope somebody can help me out. Either the other half is unpaid data, or it’s wasted potential space for further data. Thanks in advance.


What is the block size on your HDD? If the block size is big, files will take much more space on disk than their actual size, but Storj only counts the file sizes themselves. A block is the minimum amount of space a file can occupy, and Storj writes a lot of small files.
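If you want to check, on a Debian system it could look like the following (assuming the node data sits on an ext4 filesystem mounted at /mnt/storj and the device is /dev/sdX1; substitute your own mount point and device):

stat -f -c "fundamental block size: %S bytes" /mnt/storj
sudo tune2fs -l /dev/sdX1 | grep "Block size"

With the usual 4 KiB blocks the per-file slack is small, but with a much larger block or cluster size, millions of pieces that are only a few KiB each can waste a lot of raw capacity, which would explain a drive that is full on the OS side while the dashboard reports much less stored.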

Hey all,

I’ve been running my node on TrueNAS scale for nearly a year now with minimal/no issues. Unfortunately, I had an HDD fail and am currently resilvering. This shouldn’t really impact the storage capacity much but may impact some server performance until finished. One thing that I was just warned about is that the amount of physical space that I’ve reserved for Storj is almost at 95% capacity.

I’ve allocated about 30TiB to the Storj node that I am running:

When I look at the Storj Node Dashboard the disk space used is MUCH lower:

I’m not sure why there is such a huge discrepancy, but is there any way to clear the unused on-disk space to make room for more usage? Or is there some way to get Storj to recognize the space it uses on disk?

You need to make sure that the used-space filewalker has successfully finished for all trusted satellites, and you also need to remove the data of untrusted satellites (see How To Forget Untrusted Satellites).
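For the untrusted-satellite part, the storagenode binary has a forget-satellite subcommand that removes the pieces of satellites the node no longer trusts. On a docker-based setup (container name storagenode assumed; adjust the paths to your layout) it is typically run like this; the linked guide covers the equivalent for Windows and other setups:

docker exec -it storagenode ./storagenode forget-satellite --all-untrusted --config-dir config --identity-dir identity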

This means that the filewalker failed to complete the scan because the disk is too slow.
Since this is Windows, you need to stop the node, check and fix errors on the disk, and then perform a defragmentation. You should also re-enable automatic defragmentation if you disabled it (it's enabled by default). You may also disable 8.3 names and atime updates (see NTFS Disable 8dot3name and atime):
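A rough sketch of those steps from an elevated PowerShell, with the node stopped first (drive letter D: is just an example; use your data drive):

Stop-Service storagenode                  # stop the node service before touching the volume
chkdsk D: /f                              # check the filesystem and fix errors
defrag D: /U /V                           # defragment the data volume
fsutil 8dot3name set D: 1                 # stop creating 8.3 short names on this volume
fsutil behavior set disablelastaccess 1   # stop updating last-access (atime) timestamps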

After that, start the node again and make sure that the filewalkers are not failing anymore.
You also need to make sure that you do not have errors related to databases in your logs.

Hello @t0x,
Welcome back!

@t0x @gingerbread233
The suggestions are the same: you need to let all the filewalkers finish their work, remove the data of untrusted satellites, and make sure that you do not have errors related to databases in your logs.