Why such low ingress?

A few days ago there was >1 TB of ingress, but now it's only 60 B.

Logs are active:

FhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 2816, "Remote Address": "199.102.71.54:52658"}
2024-06-13T05:33:20Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "7EJJH2JS3LGXBQOXLQCL327P77COGHRW7MASMVJDK7FCMDDXES4Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 11520, "Remote Address": "5.161.246.105:45892"}
2024-06-13T05:33:20Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "L2B4UZLKPOBQUGGCAZYW6TO4GBKL5V5QNOGKOFHYRK6UEWBA37SQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 2816, "Remote Address": "79.127.213.33:45934"}
2024-06-13T05:33:20Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "6RIH2KIOYJN67M7EQ637MSOIRUCSUSSDJ43FOINKUFXZA4YGTPAQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 9216, "Remote Address": "109.61.92.74:42936"}
2024-06-13T05:33:20Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "7EJJH2JS3LGXBQOXLQCL327P77COGHRW7MASMVJDK7FCMDDXES4Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 11520, "Remote Address": "5.161.246.105:45892"}
2024-06-13T05:33:20Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "BLPVQ6TIPB3Z6KWD65H3PCQIHW7QZMNHUIIC4LMVLOEI3JXQRYGA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 4352, "Remote Address": "199.102.71.58:50876"}

What's the version of the node?
Do you have free space?
Do you have any PUT requests in your logs?
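
For example, in a Docker setup (assuming the container is named storagenode), the most recent PUT entries can be listed with:

    docker logs storagenode 2>&1 | grep '"Action": "PUT"' | tail -n 20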

Please also note this:

@Alexey Yep, there are PUT requests too, and 2.5 TB of free space. v1.104.5 on both.

One more strange behavior on the second node: it is frozen at 3.80–3.90 TB of usage even though it has 7 TB of free space; it somehow balances around ±3.80 TB. The first node, on the other hand, once took off like a rocket and filled all of its existing free space. That was some time before data started moving to trash.

How is the RAM/CPU usage on the physical hardware? How’s the storage?
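
For example, on Linux a quick look can be taken roughly like this (iostat comes from the sysstat package):

    free -h          # memory usage
    uptime           # CPU load averages
    iostat -x 5 3    # extended disk stats, 3 samples at 5-second intervals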

It is not too much, 20–35%. One node uses a 4-core Atom with 8 GB of RAM, the second is based on a RockPro64.

This only means that the amount of deletions is equal to the amount of successful uploads. Equilibrium.

The second node looks like this today:

logs:

2024-06-15T10:12:08Z	ERROR	blobscache	satPiecesContentSize < 0	{"Process": "storagenode", "satPiecesContentSize": -249856}
2024-06-15T10:12:08Z	ERROR	blobscache	satPiecesTotal < 0	{"Process": "storagenode", "satPiecesTotal": -250368}
2024-06-15T10:12:08Z	ERROR	blobscache	satPiecesContentSize < 0	{"Process": "storagenode", "satPiecesContentSize": -249856}
2024-06-15T10:12:08Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "X5OSK5KQEJPCHJ4IDQK4WF7IHHU5S76ML6B6TJWX2LUMJ4OIGN5A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "5.161.214.198:55788", "Size": 181504}
2024-06-15T10:12:08Z	ERROR	blobscache	satPiecesTotal < 0	{"Process": "storagenode", "satPiecesTotal": -3840}
2024-06-15T10:12:08Z	ERROR	blobscache	satPiecesContentSize < 0	{"Process": "storagenode", "satPiecesContentSize": -3328}
2024-06-15T10:12:08Z	ERROR	blobscache	satPiecesTotal < 0	{"Process": "storagenode", "satPiecesTotal": -250368}
2024-06-15T10:12:08Z	ERROR	blobscache	satPiecesContentSize < 0	{"Process": "storagenode", "satPiecesContentSize": -249856}
2024-06-15T10:12:08Z	ERROR	blobscache	satPiecesTotal < 0	{"Process": "storagenode", "satPiecesTotal": -250368}
2024-06-15T10:12:08Z	ERROR	blobscache	satPiecesContentSize < 0	{"Process": "storagenode", "satPiecesContentSize": -249856}
2024-06-15T10:12:08Z	ERROR	blobscache	satPiecesTotal < 0	{"Process": "storagenode", "satPiecesTotal": -250368}
2024-06-15T10:12:08Z	ERROR	blobscache	satPiecesContentSize < 0	{"Process": "storagenode", "satPiecesContentSize": -249856}

Trying to restart…

Seems to be fine…

2024-06-15T10:15:07Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "YVPDYZ7CUIEHW4VU2MBZKJOGLQS4BIYGJGSCWP25EICH667PNG6Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "199.102.71.56:49042", "Size": 51200}
2024-06-15T10:15:07Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "HHS243LKLHMYPUWAB3P6IQS766CM5OHEHYSPCVTXQ77RWMM27GCQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 8448, "Remote Address": "79.127.219.43:47678"}
2024-06-15T10:15:07Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "HHS243LKLHMYPUWAB3P6IQS766CM5OHEHYSPCVTXQ77RWMM27GCQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 8448, "Remote Address": "79.127.219.43:47678"}
2024-06-15T10:15:07Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "ICWGN5IXN6LESBRGJG2MRQIRT4MQCBJ7D5IDI4CRRMCF4E4CNBZQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 2319104, "Remote Address": "107.6.113.184:36508"}
2024-06-15T10:15:07Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "7WQSVF33SSRLSYN7RP3OQXED2XHVLRV27FZZEAWD246WEE3TLLPA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.201.211:43002", "Available Space": 154450661376}
2024-06-15T10:15:08Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "7WQSVF33SSRLSYN7RP3OQXED2XHVLRV27FZZEAWD246WEE3TLLPA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.201.211:43002", "Size": 3584}
2024-06-15T10:15:08Z	INFO	piecestore	upload canceled (race lost or node shutdown)	{"Process": "storagenode", "Piece ID": "JYDWUSV5LMPLVKQSVJVIKTSAD64PNVGRSZ6OAESUXJ6JQ6UBNVWA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "72.83.198.17:47672"}

This only means that the used-space-filewalker didn't finish its job for this particular satellite. Please enable the scan on startup if you have disabled it, restart the node, and then wait until it has finished for all trusted satellites:

If it fails with a "context canceled" error, try disabling the lazy mode, save the config, and restart the node. The used-space-filewalker must finish its job successfully for all trusted satellites in order to update the databases and, as a result, the dashboard.
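
In a Docker setup these two options can be passed as run flags, for example (option names as used by recent storagenode releases; please verify them against your config.yaml):

    --storage2.piece-scan-on-startup=true \
    --pieces.enable-lazy-filewalker=false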

Please do not forget to remove the untrusted satellites’ data:
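
The linked how-to describes the exact procedure; for a Docker node it is usually based on the forget-satellite subcommand, roughly like this (flag names quoted from memory, so double-check them against the how-to):

    docker exec -it storagenode ./storagenode forget-satellite \
        --all-untrusted --force \
        --config-dir config --identity-dir identity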

Thanks @Alexey, I will try to understand what is happening.

I use these run settings, though I don't fully understand what they mean. Of course, I don't have a goal of disabling the filewalker; I'll try to find a way to turn it back on.

    --filestore.force-sync=false \
    --storage2.monitor.minimum-disk-space="1MiB" \
    --filestore.write-buffer-size="2MiB" \
    --storage2.min-upload-speed=16kB \
    --storage2.min-upload-speed-grace-duration=10s

Please try to remove all these additional settings.

Done, thanks @Alexey. The log looks nice for now.


Today's log is just 150 bytes.

Some errors grepped from the log:

2024-06-17T14:03:48Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "H7MT6FLVZJTYRTX4XVDHA6VYUKRDRDZDBFET6VU3HM7C4FQLKCSQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 2319104, "Remote Address": "103.214.68.73:48626", "error": "write tcp 172.17.0.3:28967->103.214.68.73:48626: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:28967->103.214.68.73:48626: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:409\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:470\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:408\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).sendData.func1:863\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2024-06-17T14:07:15Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T14:07:15Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T14:07:16Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T14:07:16Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T14:24:23Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T14:24:23Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T14:24:24Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T14:24:24Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T14:24:51Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "MN3UIHPFMCYW4JCMNQKI7GRPUA5LLKMW2WX7JJAMGG66PWXSLMTQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 20480, "Remote Address": "5.161.61.255:53566", "error": "manager closed: read tcp 172.17.0.3:28967->5.161.61.255:53566: read: connection timed out", "errorVerbose": "manager closed: read tcp 172.17.0.3:28967->5.161.61.255:53566: read: connection timed out\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:234"}
2024-06-17T14:35:50Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "36VKJT5ABTE7KCNBNIEVFH57YADE6RF63NMDCDWCY664GUC3MVOA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 2319104, "Remote Address": "103.214.68.73:37814", "error": "write tcp 172.17.0.3:28967->103.214.68.73:37814: use of closed network connection", "errorVerbose": "write tcp 172.17.0.3:28967->103.214.68.73:37814: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:409\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:470\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:408\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).sendData.func1:863\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2024-06-17T14:40:14Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T14:40:14Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T14:40:15Z	ERROR	collector	error during collecting pieces: 	{"Process": "storagenode", "error": "pieces error: database disk image is malformed", "errorVerbose": "pieces error: database disk image is malformed\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:574\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:83\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:56\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:52\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-06-17T14:40:15Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T14:40:15Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T15:40:14Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T15:40:15Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T15:40:15Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T15:40:15Z	ERROR	collector	error during collecting pieces: 	{"Process": "storagenode", "error": "pieces error: database disk image is malformed", "errorVerbose": "pieces error: database disk image is malformed\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:574\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:83\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:56\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:52\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-06-17T15:40:15Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T15:41:47Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "4BUIXAA2TLOHQCBQP7TGKFTQVI7C24QBWLR7IQX6M4DIBCMCK5UQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 2319104, "Remote Address": "103.214.68.73:53262", "error": "write tcp 172.17.0.3:28967->103.214.68.73:53262: use of closed network connection", "errorVerbose": "write tcp 172.17.0.3:28967->103.214.68.73:53262: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:409\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:470\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:408\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).sendData.func1:863\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2024-06-17T15:53:47Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T15:53:48Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T15:53:48Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T15:53:48Z	WARN	contact:service	Your node is still considered to be online but encountered an error.	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
2024-06-17T15:55:54Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "PZAZKARZLT52VXHCCSY4OO3W3KCZ5YZ5MGOMPO34B2XPPSFI73GA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 0, "Size": 2319104, "Remote Address": "103.214.68.73:36924", "error": "write tcp 172.17.0.3:28967->103.214.68.73:36924: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:28967->103.214.68.73:36924: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:409\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:470\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:408\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).sendData.func1:863\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2024-06-17T16:22:54Z	WARN	console:service	unable to get Satellite URL	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "error": "console: trust: satellite is untrusted", "errorVerbose": "console: trust: satellite is untrusted\n\tstorj.io/storj/storagenode/trust.init:29\n\truntime.doInit1:6740\n\truntime.doInit:6707\n\truntime.main:249"}
2024-06-17T16:22:54Z	WARN	console:service	unable to get Satellite URL	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "error": "console: trust: satellite is untrusted", "errorVerbose": "console: trust: satellite is untrusted\n\tstorj.io/storj/storagenode/trust.init:29\n\truntime.doInit1:6740\n\truntime.doInit:6707\n\truntime.main:249"}
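
One of the lines above also reports "pieces error: database disk image is malformed". Assuming the standard layout where the SQLite databases sit in the storage directory, each database can be checked with the sqlite3 CLI:

    for db in /path/to/storagenode/storage/*.db; do    # adjust to your storage location
        echo "$db: $(sqlite3 "$db" 'PRAGMA integrity_check;')"
    done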

There were also a lot of errors like this yesterday:

2024-06-15T19:37:45Z	ERROR	piecestore	upload failed	{"Process": "storagenode", "Piece ID": "3WH6DLELESFVALUGGYKEZSDEQEGQCAOQPNFB7PIOSP42CTNH327Q", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.242:43910", "Size": 196608, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-15T19:37:46Z	ERROR	piecestore	upload failed	{"Process": "storagenode", "Piece ID": "IRL6JJQTNXRVWMBW7OMJGXRHJHBSMQY2KTWSYGLCX7AU5FYX6FXA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.69:56530", "Size": 196608, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-15T19:37:46Z	ERROR	piecestore	upload failed	{"Process": "storagenode", "Piece ID": "QU2FQPI66CSP7JMMQIE4I23LR7HTAP4I6MVPYTYP4RW3SAI3CXWQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.71:43596", "Size": 196608, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-15T19:37:47Z	ERROR	piecestore	upload failed	{"Process": "storagenode", "Piece ID": "JLEQWQP3XNRWD33RVBRPOMOFXDVYWAO7LCWHHJD2JHVGA6UCA36Q", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.74:41514", "Size": 196608, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-06-15T19:37:47Z	ERROR	piecestore	upload failed	{"Process": "storagenode", "Piece ID": "ANLP3LFV7MZWOUIBXF54I4AXDQPSVWS5CA352A3DW2IHY6XIHO4Q", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.68:40520", "Size": 196608, "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}

After a simple full reboot it looks better:

2024-06-17T16:30:20Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "WJ6H3USRU4LVZR4BA7OJ44VWK3NF2HGCAAZUWUNIW3CKYLNXZT6A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 30208, "Remote Address": "199.102.71.67:33098"}
2024-06-17T16:30:20Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "NRJBQGLOFJRBFU3Q5IKUV2ZHEMTVJDECWNJLRW4FWFPP3GE3WBUA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.225:39436", "Available Space": 6599839744}
2024-06-17T16:30:20Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "233D54V4TSKDGMTS62TCCRT3ILGKQ5SML4NIDDE7MQU2YEEQGDFQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.75:50476", "Available Space": 6599839744}
2024-06-17T16:30:20Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "7V27CYHC57GTFJZCENDREPAS6GCXMCXXCXLK2QWMT7YKTRAWUUVQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.235:44178", "Available Space": 6599839744}
2024-06-17T16:30:20Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "CTNSJMGJNSF3SW6NVCFH2CJU53NN42P3LHZ4PQ5GXEYUYNTSP52Q", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.84:40046", "Available Space": 6599839744}
2024-06-17T16:30:20Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "VIP5JMYXDGQ3OWUWOZQ5YDXDNFYK5RW3I3NXEBX6KXQDG3ON4VWQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.243:36868", "Available Space": 6599839744}
2024-06-17T16:30:20Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "2Y2HEY7HE2YSFFJ35FSFHGQUNI7WDZ3UHSPXUHZP53SPIMQ35CEQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.219.39:52596", "Available Space": 6599839744}
2024-06-17T16:30:21Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "NMNAUJPEVJ677REHTC4T6AXUWAOQP54XD5BG4FDMRVYOQ3RKQCDA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.233:48744", "Available Space": 6599839744}
2024-06-17T16:30:21Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "7XHKJYSXPWQ6V3LLCFE7ZZNYI2TEGXYJDAHHAC456AANSUMJEQUQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.230:36546", "Available Space": 6599839744}

A post was merged into an existing topic: Fatal Error on my Node

The second node has restarted a few times within the last few hours.

The filewalker jobs complete successfully, but the node does not feel healthy because of the regular restarts:

2024-06-17T18:57:17Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-06-17T18:57:17Z	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-06-17T18:57:17Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-06-17T18:57:17Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode", "dateBefore": "2024-06-10T18:57:17Z"}
2024-06-17T18:57:17Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2024-06-17T18:57:17Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "bytesDeleted": 0, "numKeysDeleted": 0, "Process": "storagenode"}
2024-06-17T18:57:17Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-06-17T18:57:17Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "elapsed": "37.629283ms"}
2024-06-17T18:57:17Z	INFO	collector	collect	{"Process": "storagenode", "count": 1}
2024-06-17T18:57:17Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "U7NIIHXIMTG74VV2OVJHLPQITHVFBXBLC75RBXBXDK4NR3D47VUA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.226.101:39240", "Available Space": 5873617797120}
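
To see what actually triggers the restarts, the last fatal entries can be pulled from the container log (the container name and the grep pattern here are assumptions):

    docker logs storagenode 2>&1 | grep -iE 'FATAL|Unrecoverable' | tail -n 20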

This is the usual long-tail cancellation: your node was too slow to provide or accept the piece (it lost the race). The exact error depends on the moment when the client canceled the connection, so the stack trace differs, but the reason is the same.
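
A rough way to see how often the node loses these races is to compare successful and canceled uploads in the log (message strings as they appear in the excerpts above; container name assumed):

    docker logs storagenode 2>&1 | grep -c 'piecestore.*uploaded.*"Action": "PUT"'   # races won
    docker logs storagenode 2>&1 | grep -c 'upload canceled'                         # races lost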

If regular restarts are related to

then the solution is provided there:

A few days have passed.

Strange, but one day it uploaded ±500 GB, and the next day only ±100 bytes. No settings were changed. Hm…

The second node is different… it uploads ±500 GB per day, every day.

But it restarts roughly every hour.

The settings of both nodes are the same.

Not all suggestions have been applied yet. I will need to dig deeper into the previous posts from Alexey.
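
To put numbers on the daily ingress mentioned above, the Size fields of finished uploads can be summed straight from the log (field names as in the excerpts; container name assumed):

    # sums today's ingress; "Action": "PUT matches both PUT and PUT_REPAIR
    docker logs storagenode 2>&1 | grep "$(date -u +%Y-%m-%dT)" \
        | grep 'piecestore.*uploaded' | grep '"Action": "PUT' \
        | grep -o '"Size": [0-9]*' | awk '{sum += $2} END {printf "%.2f GB\n", sum / 1e9}'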

Perhaps the version difference?
However, my nodes (1.104.5 at the moment) have a little ingress, though (they are full again).

So, as soon as my nodes have free space (the trash is emptied), they get ingress (in Russia, if you wonder). So I believe that your nodes are full (you have less than 5 GB of free space, either in the allocation or on the disk).

The versions are the same and free space exists… I will watch what happens. Oh, you're right) they are different.

And how much free space is reported by the OS?
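
For example (assuming the data lives under /mnt/storagenode; adjust the path):

    df -h /mnt/storagenode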