Storagenode.exe taking too long to stop (10h+)

Gooood super morning, God Bless it!

A question:
After a stop command to the storagenode service, storagenode.exe keeps stopping for many hours with a log like this:

(normal node operation, until the internet connection was lost due to an ISP restart:)
2024-02-24T03:30:33+01:00	INFO	piecestore	download started	{"Piece ID": "N3BYBRJZFBVX4XVPAPKPPLXPN2GD2M2GRRNDO33ZNBRW2OANQNDQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 1979136, "Size": 4864, "Remote Address": "79.127.226.98:52652"}
2024-02-24T03:30:33+01:00	INFO	piecestore	uploaded	{"Piece ID": "XGSLMO5Z2GU6A2D3YH43DKAH3TRHYJKM7SYPAUTGBN6RCFG7QDFA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 181504, "Remote Address": "5.161.220.231:50310"}
2024-02-24T03:30:33+01:00	INFO	piecestore	download started	{"Piece ID": "ECJYCAKNW3IDPG7XQXUHIS3JXZMVU7GAYH5V4BYMQMU7MUKCZFTQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 1481728, "Size": 2560, "Remote Address": "79.127.220.99:34854"}
2024-02-24T03:30:33+01:00	INFO	piecestore	download canceled	{"Piece ID": "SOXEQIAVGGSBNMRDWRAVJJ274AWOLPNGNF4MDRE32JO6I72LVN2A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 2029056, "Size": 131072, "Remote Address": "5.161.61.225:47242"}
2024-02-24T03:30:33+01:00	INFO	piecestore	uploaded	{"Piece ID": "RDX257SX72QCRSTHYTCOFH2L6VD5VCRUKNEWPOJ7IDRZAMRH3XQA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 181504, "Remote Address": "79.127.201.210:35110"}
2024-02-24T03:30:33+01:00	INFO	piecestore	download canceled	{"Piece ID": "ECJYCAKNW3IDPG7XQXUHIS3JXZMVU7GAYH5V4BYMQMU7MUKCZFTQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 1481728, "Size": 0, "Remote Address": "79.127.220.99:34854"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	download failed	{"Piece ID": "N3BYBRJZFBVX4XVPAPKPPLXPN2GD2M2GRRNDO33ZNBRW2OANQNDQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 1979136, "Size": 4864, "Remote Address": "79.127.226.98:52652", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.226.98:52652: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.226.98:52652: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "WNTMHOHJHHA33HA4UFVAGFZU3PHWFLO3RY5SWZFRVT5EPUWQ3XUQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.220.99:59700: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.220.99:59700: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 131072, "Remote Address": "79.127.220.99:59700"}
2024-02-24T03:30:33+01:00	INFO	piecestore	upload canceled (race lost or node shutdown)	{"Piece ID": "VUBHZVWMQBMFRICLTKE2WJWQCYDXT4TQOAMH22JISWYWCCSKNA7Q"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "UH74TQH52HH23GDPCQ22CBQR2HXOOHW7ZJ7GBTGNHBWM64SOTJLA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.220.97:46710: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.220.97:46710: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 720896, "Remote Address": "79.127.220.97:46710"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "FUWMFKL2YPMDVLP5724A2ZWPXMVGIKGHTVYWWEDNBCJDH62OKUSQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.219.33:54736: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.219.33:54736: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 1179648, "Remote Address": "79.127.219.33:54736"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "TURBSKME6I2AQMVHAOX3RVFK4WZKMDVE4AMB4GGK6C7TBLBGVIDQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.220.97:46716: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.220.97:46716: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 655360, "Remote Address": "79.127.220.97:46716"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "ZP4Y5LCMSKQRBA5ORNGVZNRLRGS3A4ZSOFYFWRY7K4ANJG7PXTVQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.219.33:33656: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.219.33:33656: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 65536, "Remote Address": "79.127.219.33:33656"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "VQZ2PE2UC5OV6EI33NQR27TZVMNZZLNHHLM5KY7XIB4ZUBUSQIKA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.219.34:39052: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.219.34:39052: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 2031616, "Remote Address": "79.127.219.34:39052"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "VEH6TOQBBZSTQOOZJZQA4KF65FMXI7ARTHOG3X6HT23EHGIPKQ3A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.205.225:43832: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.205.225:43832: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 1310720, "Remote Address": "79.127.205.225:43832"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "YA2TCRA53YEZWLRAURDJQH44TNOBE5CSBUCYNDD6DVOZ7LUW4FAA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.219.34:48536: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.219.34:48536: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 786432, "Remote Address": "79.127.219.34:48536"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "A6BF7I5256KK3IL3SD2BZRM34HIMGENMVJ4H3JWXUCDEIYEKJF6A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.205.226:52382: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.205.226:52382: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 458752, "Remote Address": "79.127.205.226:52382"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "L4SCCNUOZI72XXF6F5JBYO6VGNSASHOP7V2PRBO2HDVPA5AV7H4Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.205.226:45434: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.205.226:45434: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 1114112, "Remote Address": "79.127.205.226:45434"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "GYFVXO2SV2VEK42CRVSBWNNRB5ZJ3ZMRP3EWMND4OPNCS2C3OM6Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.226.97:45566: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.226.97:45566: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 196608, "Remote Address": "79.127.226.97:45566"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "7F3BQGUMIYR5KCNL6AIFJR26RT5Q3E3PL2VCZDMRCR5DLVOV5QYA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.219.34:34336: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.219.34:34336: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 1638400, "Remote Address": "79.127.219.34:34336"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "IZPMGU4RZSLYYAR575TU7N46YB563HJRONNG2JIQ7NTQCJKBSLMQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.219.34:40212: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.219.34:40212: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 1441792, "Remote Address": "79.127.219.34:40212"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "URJFDE64LYYMSUQLIQD5JAQV54V455KUI6UDXSOXLJHIR5EGIHQA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.205.226:42462: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.205.226:42462: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 589824, "Remote Address": "79.127.205.226:42462"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "ULXPURIWL5CK3QWRCCHMKRW5HLFYTZKUG7SZAW4QKL766APMFOJA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.205.225:60872: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.205.225:60872: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 1835008, "Remote Address": "79.127.205.225:60872"}
2024-02-24T03:30:33+01:00	ERROR	piecestore	upload failed	{"Piece ID": "MWHG35DN3FQUGCB6EA43JSR2YOAUEBLBDML2G7NANXJ4VUI7NVEQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: read tcp 10.2.0.2:60598->79.127.205.226:42484: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:60598->79.127.205.226:42484: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 983040, "Remote Address": "79.127.205.226:42484"}
2024-02-24T03:31:32+01:00	WARN	pieces	failed to migrate v0 piece. Piece may not be recoverable
2024-02-24T03:31:32+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SKR6SJRY5O3MPJ5YABZ3UMKUMMRNF4ONLIG43RXPRPED3SBVXGQA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:110\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:245\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Trash:290\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:404\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T03:31:57+01:00	WARN	pieces	failed to migrate v0 piece. Piece may not be recoverable
2024-02-24T03:31:57+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SKTRVUNGDUWPPGZY67QVU7K6V6JYBQFHAJISDZPMFUPUXUILMJOQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:110\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:245\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Trash:290\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:404\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T03:32:13+01:00	WARN	pieces	failed to migrate v0 piece. Piece may not be recoverable
2024-02-24T03:32:13+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SKVB5ZI6Z3JHP5QS5PVQM4EN67PRBO5QXWP2SEYIYPD4NF7AHWLQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:110\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:245\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Trash:290\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:404\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T03:32:17+01:00	WARN	pieces	failed to migrate v0 piece. Piece may not be recoverable
2024-02-24T03:32:17+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SKVNZHYDG4M2VYXTLLS67NUKRLC3K57X3WYB6TWCNIQFIJZ3YBJQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:110\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:245\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Trash:290\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:404\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T03:32:58+01:00	WARN	pieces	failed to migrate v0 piece. Piece may not be recoverable
2024-02-24T03:32:58+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SKZWGUNFTO2G2QIJLBS7V27MMLEJYUY4UGYHNFW4OZDJTWXGGNRA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:110\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:245\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Trash:290\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:404\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T03:33:06+01:00	INFO	Interrogate request received.
2024-02-24T03:33:06+01:00	INFO	Stop/Shutdown request received.
2024-02-24T03:33:06+01:00	ERROR	piecestore:cache	error getting current used space: 	{"error": "filewalker: context canceled; filewalker: context canceled", "errorVerbose": "group:\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T03:33:06+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SL2O2RT6FGFUYSYAA33PZ5EXBOL5F6I2ZZARCEUIHRCBCM6GIRLQ", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T03:33:06+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SL2O37WJIQLYB4MXNYY6CIYVYF74WCDDSP7636LADN7TY63XQC5A", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
(...)
2024-02-24T03:33:21+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SL5HD3D57NAH7ALHOXL46YAWVMHQG6N245RN4I4ULT2ZZTG3XRTA", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T03:33:21+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SL5HIIJPST5XIUU2D76KOHLQ2DCQKNC6FR7CBOCWJVVURZM7TNIA", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T03:33:21+01:00	WARN	services	service takes long to shutdown	{"name": "retain"}
2024-02-24T03:33:21+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SL5HLAM7YJQ73XMPO4RGD7ZKOPGR2PNLCQICUEIWF7RL7WZL5S7A", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T03:33:21+01:00	INFO	services	slow shutdown	{"stack": "goroutine 976 [running]:\nstorj.io/storj/private/lifecycle.(*Group).logStackTrace.func1()\n\t/go/src/storj.io/storj/private/lifecycle/group.go:107 +0x71\nsync.(*Once).doSlow(0xc000000f00?, 0xc0027a6e40?)\n\t/usr/local/go/src/sync/once.go:74 +0xbf\nsync.(*Once).Do(...)\n\t/usr/local/go/src/sync/once.go:65\nstorj.io/storj/private/lifecycle.(*Group).logStackTrace(0xc00023beb8?)\n\t/go/src/storj.io/storj/private/lifecycle/group.go:104 +0x3c\nstorj.io/storj/private/lifecycle.(*Group).Run.func1({0x1f4cf08?, 0xc000747710?})\n\t/go/src/storj.io/storj/private/lifecycle/group.go:77 +0x2a5\nruntime/pprof.Do({0x1f4d100?, 0xc00050dae0?}, {{0xc000728c00?, 0x0?, 0x1f4cf40?}}, 0xc000720e40)\n\t/usr/local/go/src/runtime/pprof/runtime.go:51 +0x9d\ncreated by storj.io/storj/private/lifecycle.(*Group).Run in goroutine 20\n\t/go/src/storj.io/storj/private/lifecycle/group.go:64 +0x509\n\ngoroutine 1 [syscall, locked to thread]:\nsyscall.SyscallN(0x7ffabb925f10?, {0xc0...
(...)
2024-02-24T12:00:16+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "YQANHV4EGB53TE4IHEU7722YUGEGIVWEEPJBYMWGUKE6RTHBSCGA", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T12:00:16+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "YQANLK6MKAE4AGHLYIUY5OSR6RWRWI4KIVNMTHJ6SRKBUCRMEOZQ", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T12:00:16+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "YQANNO6XSKP3VIUQWCXM56PQRREBQTMPXKGWAKSURKX56O7IK3PQ", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T12:00:16+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "YQANQEATRLEK4PKPQGU3JEH7RWPUWQCFFRPPPLZTJG4CLHHMLYXA", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T12:00:16+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "YQANREKR5FWJO55SKLH43X7R5BSHY6TUMKT4YZ3UKIJCI5DEGZVA", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T12:00:17+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "YQANRHXDWYAYBWXKPQPLOF5Y4KE2MZ5SICQK2CMNJZDNFUBTC2QQ", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T12:00:17+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "YQANZ23KM3KOEUX2PD2MYBJICQL6OUU3VOPMUWTYGMHL5NSLRP4A", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

I wonder if I can restart the computer while this is ongoing, or whether it has to finish.
I know it can indeed finish these actions after a day or two.
But I don't know if it's bad to interrupt it while it's ongoing.
This is probably the node saving the files-to-delete to the database, after the storagenode was interrupted by a restart while it was filewalking, because I can see that the disk, which is dedicated to Storj only, is writing constantly at ~1.2 MB/s and reading only ~300 KB/s (average access time is good, ~13 ms; the disk is fine; it runs under a VM though; node version 1.95.1).
My question is: does this "stopping" need to be allowed to finish, or can the machine just be bluntly turned off without consequences?

It's 1,210,906 log lines from the "Stop/Shutdown request" at 3:31 to 12:00, and it is still ongoing.
It was just a connection cut by the ISP during the night, and it was back a few minutes later, but the node is still stopping and I cannot restart it. Must I wait until it finishes that stopping?
(The log has grown by ~400 MB in 10 h from this.)
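
A rough PowerShell one-liner to count those retain lines, if anyone wants to check - the log path is only an example, adjust it to your own setup:

(sls "failed to delete piece" "C:\Program Files\Storj\Storage Node\storagenode.log").Count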

Hello @Ruskiem,
I do not see any FATAL errors that could stop your node…
So…
Everything else does not look so dangerous… The retain process cannot delete what should have been deleted anyway… the uploads failed because your node was slower than the other 80 nodes… not great, but kind of expected…
What else?

During the delete… well… the disk is busy (too slow), but OK. It will be deleted on the next try, and perhaps your disk will not be so busy that time…

I believe that you can restart it at any time; the protocol should take care of the event. The only possible problem is your OS - how will it react to the restart?

  • No doubt. I just wonder how it will affect my node.

I'm on a never-ending quest to let all filewalkers finish and to not interrupt the nodes,
in order to slay any space discrepancy.

I saw this situation once on another node; it was perfectly fine after I let it finish that "stopping".
I just don't know what it is doing, or trying to do, with those "retain failed to delete piece" messages for 8 hours.
Is it trying to delete files from the trash?

The log level is: info

But I remember a similar situation on some other node:
the stopping was long, but there it was clear from the log that the node was trying to delete pieces that are not there anymore.
There was just something about "piece does not exist", and retain was going line after line like here.

I wonder if the same situation is happening here, because I cannot remember which node it was, so I can't go back there and check its logs. It would just cost me time to find which one it was and when, and to compare the logs to see whether the situation was the same or not. But I can do it if needed.

I don't quite understand why the node is considered "slow" now; when it works normally, sending and receiving pieces, it has an over 90% win ratio, so it's not that slow, hmmm.

The disk is rather new, produced in 2022 or so.
It's an Ultrastar 16TB, enterprise grade, but SATA though.

Why don't you admins bet as much on the possibility that the code is acting not as intended as you bet on the node being slow / the SNO having done something wrong?
Something to think about :>

Edit:
Oh sh*t, it just finished (after 10 hours and 17 minutes):

2024-02-24T13:48:40+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "ZZZZ2ICCSRMJZ3MHSJC5LW7B3AXLH7TNWJYJW4RUNGDCCHFKFFGA", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T13:48:40+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "ZZZZATX7TJVYVK3NF2XIOWAQ6J63UA66OWC3IWH2KGYRVCVUASAA", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T13:48:40+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "ZZZZCVGA3LIAJDANTXWNCUT4KBCEFCJ3JVQKRJV2HGU7ZVDZMCKQ", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T13:48:40+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "ZZZZJOU3IJFBX6DRBNANHEOR6AS57YD6GMU6U46FM3ODTPKBYK4A", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T13:48:40+01:00	WARN	retain	failed to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "ZZZZLEQDXTGZSL6D2YUKYTWHBAD4N6A7XA3RYVMZ5XDMRCTID2PQ", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).Trash:112\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:403\n\tstorj.io/storj/storagenode/retain.(*Service).trash:364\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-24T13:48:40+01:00	INFO	retain	Moved pieces to trash during retain	{"num deleted": 4641902, "Retain Status": "enabled"}

"num deleted": 4641902
What's that?
4.6 million deleted pieces?

It seems it's slow only for the lazy filewalkers. You may turn the lazy mode off to speed up the process. I'm not sure this is really needed if you do not have a big discrepancy on your node (my node recently moved 0.5 TB to the trash after the update to 1.96.6).

The lazy filewalker has been turned OFF in config.yaml for months:

storage2.piece-scan-on-startup: true
pieces.enable-lazy-filewalker: false

And how is it doing?
Do you have a discrepancy?

This is a bug I’ve noticed too.

I first noticed it a few months ago, around when a quick fix was implemented for an error condition where an expected piece was missing from disk upon an attempted deletion. The fix, IIRC, was simply to force the process to continue. However, what seems to have manifested in addition to that quick-and-dirty fix is this: if the parent process (same .exe) has exited, the orphaned filewalker child process continues frolicking among the files, or rather just reading its list aloud. Without its parent process active, I'm not sure it accomplishes anything - worst case, I suppose, it may misreport success when it eventually finishes. Meanwhile, your online time goes to hell in a handbasket. For this reason, when I notice this condition, I just end-task that child process. The service-stopping conditions having been met, you can then restart the service. If for nothing else, then to avoid the chance of it overstating its deletion "accomplishments" to the pieces database or whatever (thus widening the discrepancy gap in reported used storage - it thinks it just freed space). Once it's on that path, I've never actually seen it "hit" a file and successfully delete it in the logs - i.e. pointless execution. That's my current resolution - kill it. Unshared global state, expected communication with the parent app (which is non-existent - it's dead) to execute deletes, update DBs, etc. - I don't know what it is and isn't able to do. It's a somewhat frequent bug.
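
For anyone who wants to check for such an orphan before ending the task, a PowerShell sketch (the process-name pattern is an assumption, not something from this post):

# list storagenode processes together with their parent PIDs
Get-CimInstance Win32_Process -Filter "Name LIKE 'storagenode%'" | Select-Object ProcessId, ParentProcessId, Name, CommandLine
# if a child's ParentProcessId no longer exists, it is orphaned and can be stopped
Stop-Process -Id <orphaned PID>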

And yes, there are numerous other bugs quietly plaguing SNOs; however, things are improving.

Just thought you should know, you’re not seeing things.


Hello @Julio,
Welcome to the forum!

I cannot reproduce this issue. Usually a long-stopping or non-stopping service means hardware issues or problems in the underlying OS (for example, a "tweaked" Windows or a kernel bug); I have not seen any exceptions so far.
Regarding a separate lazy used-space-filewalker process - it will finish its work and update the databases; I do not expect any issues there. However, the gc-filewalker process is a different story - it needs orchestration, since it only collects the garbage, while only the retain process actually moves pieces to the trash, so this one is definitely useless if the main process is in the termination state. See GC Filewalker should trash pieces immediately · Issue #6803 · storj/storj · GitHub

However, I may be wrong; we could have introduced a bug in stopping the process under Windows so that it does not kill the child process, and we all have just been lucky that nobody tried to restart the service exactly during a run of the gc-filewalker.
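
A quick way to see how the last gc-filewalker/retain cycle ended (a sketch; the log path is the default Windows one, adjust it to yours):

cat "C:\Program Files\Storj\Storage Node\*.log" | sls "gc-filewalker|Moved pieces to trash during retain" | select -Last 20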

It does not have a disk usage discrepancy.
Only a trash discrepancy.

Continuing:

After those 10 h 17 min I started the node again. The dashboard showed 4.3 GB in trash, as on the screenshot:

  1. So I counted the trash folders with Windows; it counted 1.2 TB in trash (6 million files):

(screenshot: trash_counted_after10hstarted_took57h)

  2. But 57 h had already passed, so I checked the dashboard again, and this time it showed me 106 GB in trash:

  3. So I counted the trash folders again; it counted 186 GB up to that point (1.4 million files), some 14 h after point 2.

(screenshot: trash_after_69h_)

One can say: "but 14 h have already passed." I say: so what - it's still close in time to the last dashboard check, which showed me ~100 GB in trash,
and now, 14 h later, Windows is showing me it's 186 GB and still counting.
So it can't be blamed on "oh, but too much time passed".
The bloom filter is sent once a week,
so the node could not have been gaining any new trash, right?

Got to determine whether this is a bug, or maybe just "broken by design".
Meaning: possibly it will eventually correct itself in 2-3 weeks.
But until that time, the node has more trash than it thinks it has.
All because of an unexpected connection loss in the middle of filewalking.

Although the used space shows it has about 5.5 TB, the disk is dedicated to Storj with 14.5 TB of total space,
of which Windows shows 6.02 TB free, and non-Storj files are only 46.5 GB.
So 14.5 TB - 6.02 TB - 0.05 TB = 8.43 TB of disk space occupied by Storj in total.

Today is 27.02.2024.
From day 1 to day 27 the dashboard shows 1.41 TB of ingress.
If the trash is still 1.2 TB (I don't know, Windows is still counting, at 186 GB at the moment), then yes - it gives roughly 5.8 TB of solid data to pay me for,
and Used Space on the dashboard shows 5.5 TB,
but the parameter on which payment is based is "Average Disk Space Used This Month",
and that is 6.4 TB,
so yeah, I will happily accept ~6.4 TB x $1.5 (≈ $9.60) as a fair payout for the month.
But yeah, that 5.5 TB vs. the real 6.4 TB is some kind of disk usage discrepancy too.
There is more on the HDD than the dashboard thinks (8.43 TB including the trash).
It could be dangerous if the disk got full.

However, from the payment perspective it is NOT a scam at all.
The trash process just got derailed because of a sudden internet connection loss.
And it took this storagenode.exe 10 h to stop, because:

  • the trash was big, 1.2 TB
  • the node is not the fastest (the workstation is overwhelmed:
    despite plenty of CPU power and disk power,
    there's too much going on on the machine,
    and its motherboard/CPU architecture must be a bottleneck,
    especially for lots of small-file operations)

But hey, I've got to do it low-power:
it's ~15 W total consumption per node,
~1.36 W per 1 TB allocated,
with 200 W for 13 nodes.
So it's about 1,752 kWh/year for ~147 TB allocated, and mostly full.
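(For reference, the arithmetic behind those figures: 200 W x 24 h x 365 days = 1,752 kWh per year, and 200 W / 147 TB ≈ 1.36 W per allocated TB.)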

And we in Poland are again being threatened by the government with energy costs rising by about +40%, starting July 2024.
So consolidating all nodes onto 1 machine was necessary to survive
(earlier I had them split across 3 computers).

After 69 h, the dashboard is showing 0 GB of trash,
but the Windows folder count is showing 186 GB and counting.
(The node has been cleared of the 2 recently decommissioned satellites.)

So yes, the trash is there, and the node dashboard says it's 0 B.
So it did not get deleted, and the node wasted 10 h on pointless retain work.
On faster nodes it would be more like 2 h, but still a flood of 1.2 million lines in the log file.

I will update with how many GB are finally in the trash
when Windows finishes counting.
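
In the meantime, a PowerShell sketch that totals a trash folder much faster than Explorer - the path below is only an example, point it at your node's trash folder:

"{0:N2} GB" -f ((Get-ChildItem "D:\storagenode\storage\trash" -Recurse -File | Measure-Object -Property Length -Sum).Sum / 1GB)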

EDIT:

It finished. It deleted half, and 650 GB is still in the trash. But the dashboard says it's 0 in the trash.

And Windows says there is 5.99 TB free of 14.5 TB.
So weird: used space didn't change, only trash, and it's still almost the same total disk space occupied as when it was 6.02 TB free with 1.2 TB in trash, hmmm?
Did the node roll files back from trash to blobs, or whaaat? Naaah?..

All these filewalking processes have turned into a real mess.
Nobody knows which ones are running, when, and for what purpose. The counting is wrong. Some processes never complete. If they get interrupted, they forget what they have been doing. Some start from the beginning, some don't. Bloom filters are too small to hold all the pieces that should be deleted.
None of that sounds like the way it should be. It is a total and complete mess.


It's deleted independently of what is shown on the dashboard. You may easily check that:

But yes, there is a time delay between when the data is accounted for and when the databases are updated (and shown on the dashboard).

Windows/PowerShell

Logs should be at least info level

Used space filewalker

Regulated by the parameter in the config.yaml file or as a command-line argument (add -- before the parameter, remove the :, and place it after the image name in your docker run command):

# if set to true, all pieces disk usage is recalculated on startup (default true)
storage2.piece-scan-on-startup: true
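
For example, in a docker run command it would look roughly like this (the other options are omitted here):

docker run -d ... storjlabs/storagenode:latest --storage2.piece-scan-on-startup=true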

To watch the progress:

cat "C:\Program Files\Storj\Storage Node\*.log" -Wait | sls "used-space-filewalker" | sls "started|finished"

gc-filewalker

cat "C:\Program Files\Storj\Storage Node\*.log" -Wait | sls "gc-filewalker" | sls "started|finished"

retain

cat "C:\Program Files\Storj\Storage Node\*.log" -Wait | sls "retain"

collector

cat "C:\Program Files\Storj\Storage Node\*.log" -Wait | sls "collector"

trash

cat "C:\Program Files\Storj\Storage Node\*.log" -Wait | sls "piece:trash"

Linux/Docker
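
Rough Docker equivalents of the PowerShell commands above (assuming a container named "storagenode"; adjust the name to yours):

docker logs storagenode 2>&1 | grep "used-space-filewalker" | grep -E "started|finished"
docker logs storagenode 2>&1 | grep "gc-filewalker" | grep -E "started|finished"
docker logs storagenode 2>&1 | grep "retain"
docker logs storagenode 2>&1 | grep "collector"
docker logs storagenode 2>&1 | grep "piece:trash"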

Right now the only workaround I can see for almost any problem with the filewalkers, including the child processes not being killed, is to disable the lazy filewalker with the parameter:
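
(i.e., presumably the setting already quoted above, in config.yaml:)

pieces.enable-lazy-filewalker: false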

Thanks Alexey, good to be here.

I will follow your advice and disable lazy walking.
Cheers!


Windows finished counting. The node indeed deleted some, and 650 GB is still in the trash. But the dashboard says it's 0 in the trash.

And Windows says there is 5.99 TB free of 14.5 TB.
So weird: used space didn't change much, only trash; it's still almost the same total disk space occupied as when it was 6.02 TB free with 1.2 TB in trash, hmmm?
Did the node physically move files back from trash to blobs, or whaaat? Naaah?.. :thinking:

No, it's just that the trash is counted as used space by the node, but not by the satellite.