I think this log excerpt should help:
2022-02-01T14:56:56.565Z INFO piecestore upload started {"Piece ID": "CUXOQUI4W27VVUTZQRWRYEPMQZJ5344HUBXT7NSKNVMGBTV6E2CA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 8109283744708}
2022-02-01T14:56:57.058Z INFO piecestore uploaded {"Piece ID": "CUXOQUI4W27VVUTZQRWRYEPMQZJ5344HUBXT7NSKNVMGBTV6E2CA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 181504}
2022-02-01T14:56:57.603Z INFO piecestore download started {"Piece ID": "5KRXHY6R57MV5QFEHSN726WRRAI544VJOPF5DVTSSXPFEMY64VOQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2022-02-01T14:56:57.829Z INFO piecestore downloaded {"Piece ID": "5KRXHY6R57MV5QFEHSN726WRRAI544VJOPF5DVTSSXPFEMY64VOQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2022-02-01T14:57:01.204Z INFO piecestore upload started {"Piece ID": "QQTQ5E3ZDUFFD6LODJ24NVADFGWIUXTK63NY2HSATXEHIP2BO6QQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 8109283562692}
2022-02-01T14:57:02.296Z INFO piecestore uploaded {"Piece ID": "QQTQ5E3ZDUFFD6LODJ24NVADFGWIUXTK63NY2HSATXEHIP2BO6QQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 592128}
2022-02-01T14:57:02.628Z ERROR piecestore:cache error getting current used space: {"error": "readdirent config/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/5k: not a directory"}
2022-02-01T14:57:02.630Z ERROR services unexpected shutdown of a runner {"name": "piecestore:cache", "error": "readdirent config/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/5k: not a directory"}
2022-02-01T14:57:02.634Z INFO piecestore upload canceled {"Piece ID": "YLPLWFA46MOL35Q6R5ZYBXL5OGSOKC3T7ZIAU4YH67OKIXGHADAA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 0}
2022-02-01T14:57:02.676Z INFO piecestore upload canceled {"Piece ID": "MI7IFBKIBMZJEMIXH7T6GM22OBAHIU4S2S5ANPJME5I2RP5OMCMA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "PUT_REPAIR", "Size": 1843200}
2022-02-01T14:57:02.689Z INFO piecestore upload canceled {"Piece ID": "TTGD7AT667HILMIR2SFSM7UR5Y4EMJULTIUQIG2WMV6WBZ3A5EQQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Size": 311296}
2022-02-01T14:57:02.701Z INFO piecestore downloaded {"Piece ID": "NSTGGXNGUALZ56BQQQTDRTX7F7MLDPJX2KNCLSORIQVXUQJJUJNQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET"}
2022-02-01T14:57:02.791Z INFO piecestore upload canceled {"Piece ID": "ISNKTNW4OMTNUVWB4STCKZHMBRIM4G3BIB6LDW4VKSLJBD363IBA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 311296}
2022-02-01T14:57:02.902Z INFO piecestore upload canceled {"Piece ID": "HFGID4UCCV7T5LDW4AWU3GTAJHL4XIL5MI6MIWXSE5B64UNYXRAQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 1056768}
2022-02-01T14:57:03.022Z INFO piecestore upload canceled {"Piece ID": "HH37ZC3HWIHBFWUVLPCEYDOWS4BSJIUQOCTAUEFUK4VZ5DR5T4EA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 1318912}
Error: readdirent config/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/5k: not a directory
2022-02-01T14:57:04.694Z INFO Configuration loaded {"Location": "/app/config/config.yaml"}
2022-02-01T14:57:04.701Z INFO Operator email {"Address": "xxx@gmail.com"}
2022-02-01T14:57:04.701Z INFO Operator wallet {"Address": "xxx"}
2022-02-01T14:57:05.721Z INFO Telemetry enabled {"instance ID": "xxx"}
2022-02-01T14:57:05.979Z INFO db.migration Database Version {"version": 53}
2022-02-01T14:57:06.462Z INFO preflight:localtime start checking local system clock with trusted satellites' system clock.
2022-02-01T14:57:07.284Z INFO preflight:localtime local system clock is in sync with trusted satellites' system clock.
2022-02-01T14:57:07.285Z INFO Node 1veqEG5xuBNkt6cK1NbmL5fbNxXq6E7JzB7CiiaptoapoWnB1i started
2022-02-01T14:57:07.285Z INFO Public server started on [::]:28967
2022-02-01T14:57:07.286Z INFO Private server started on 127.0.0.1:7778
2022-02-01T14:57:07.286Z INFO failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.
2022-02-01T14:57:07.455Z INFO piecestore download started {"Piece ID": "IW5CBB6SIMLOFRFO4PS2AFPUCLJSUA5UPUUQT7SF7WCCWNILBZZA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT"}
2022-02-01T14:57:07.917Z INFO trust Scheduling next refresh {"after": "2h7m28.998479763s"}
2022-02-01T14:57:07.919Z INFO bandwidth Performing bandwidth usage rollups
2022-02-01T14:57:08.161Z ERROR collector unable to delete piece {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "QZ5KTCJ7ZMOKGAMBPAPAF7RFNWPGOANXVZAJKSHT2GL6S2J3Z26Q", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-02-01T14:57:08.286Z ERROR collector unable to delete piece {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "JA4FZOG6QNQV5YWZMFXSJIUOYGHO4IRJW2RMZ3ZPJXMNCETNNTCA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-02-01T14:57:08.293Z INFO piecestore downloaded {"Piece ID": "IW5CBB6SIMLOFRFO4PS2AFPUCLJSUA5UPUUQT7SF7WCCWNILBZZA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT"}
It seems as if the storage directory under config/ could not be read for a second?
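The readdirent error reads as if the two-character prefix folder 5k under that satellite's blob directory exists but is not a directory at all, which would already hint at filesystem damage. A minimal Go sketch to check this, assuming the default blobs layout and using the satellite folder name from the error above:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Satellite blob folder copied from the "not a directory" error above.
	root := "config/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa"

	entries, err := os.ReadDir(root)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read blob folder:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		// Every entry here should be a two-character piece-prefix directory.
		if !e.IsDir() {
			mode := "unknown"
			if info, err := e.Info(); err == nil {
				mode = info.Mode().String()
			}
			fmt.Printf("not a directory: %s (mode %s)\n", filepath.Join(root, e.Name()), mode)
		}
	}
}
```

If it prints anything, that entry is what readdirent is tripping over.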
And on a very regular basis I see collector errors like the two "unable to delete piece" entries in the log extract above.
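If I read the stack trace correctly, the collector is just trying to delete an expired piece whose blob file is already gone, so these errors look harmless on their own. Conceptually, such a delete could treat a missing file as success; a minimal sketch of that idea (my illustration, not the actual storagenode code):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// removeIfExists deletes path but treats an already-missing file as
// success: the piece the collector wants to delete is simply gone.
// Illustrative sketch only, not the actual storagenode code.
func removeIfExists(path string) error {
	err := os.Remove(path)
	if errors.Is(err, fs.ErrNotExist) {
		return nil
	}
	return err
}

func main() {
	// Hypothetical piece file name, for illustration only.
	if err := removeIfExists("expired-piece.sj1"); err != nil {
		fmt.Fprintln(os.Stderr, "delete failed:", err)
	}
}
```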
On my second node (it's another HDD, too) I also see this, though not very often:
2022-02-01T18:05:40.657Z INFO piecestore uploaded {"Piece ID": "HFZJVYGEZQEWSYQOSKEXCLJYYXQQ3BVKA2BI2Q3MZPGW7WZJRCIA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 181504}
2022-02-01T18:05:43.056Z ERROR orders cleaning filestore archive {"error": "order: lstat config/orders/archive/archived-orders-1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE-1643569200000000000-1643576684730631097-ACCEPTED.v1: input/output error", "errorVerbose": "order: lstat config/orders/archive/archived-orders-1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE-1643569200000000000-1643576684730631097-ACCEPTED.v1: input/output error\n\tstorj.io/storj/storagenode/orders.(*FileStore).CleanArchive.func1:366\n\tpath/filepath.walk:438\n\tpath/filepath.Walk:505\n\tstorj.io/storj/storagenode/orders.(*FileStore).CleanArchive:364\n\tstorj.io/storj/storagenode/orders.(*Service).CleanArchive:163\n\tstorj.io/storj/storagenode/orders.(*Service).Run.func2:141\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-02-01T18:05:57.706Z INFO piecestore upload started {"Piece ID": "LMVRGJMRKJWD7PDWUPYZ353DQCNBBAG535YZTQTUGWYD24KUMCUA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 946898538942}
I have a feeling it might have something to do with the file systems of the two HDDs:
node 1: Apple_APFS
node 2: Apple_HFS
Both HDDs are connected through an Icy Box enclosure directly via USB-C / Thunderbolt to a Mac mini M1.
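If the filesystem (or the USB path) really is the culprit, forcing a stat of every entry under the affected trees should reproduce the I/O errors outside of storagenode. A minimal Go sketch of such a scan, assuming the paths from the logs above and that it runs from the node's data directory:

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

func main() {
	// Paths taken from the errors above; adjust to the node's data dir.
	for _, root := range []string{"config/storage/blobs", "config/orders/archive"} {
		filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil {
				// Read errors ("not a directory", input/output) surface here.
				fmt.Fprintf(os.Stderr, "%s: %v\n", path, err)
				return nil // keep walking the rest of the tree
			}
			// Force a stat syscall on every entry so that failures like
			// the lstat input/output error above show up immediately.
			if _, err := os.Lstat(path); err != nil {
				fmt.Fprintf(os.Stderr, "lstat %s: %v\n", path, err)
			}
			return nil
		})
	}
}
```

If the same input/output errors show up here, the HDD or the enclosure would be the next thing I'd check.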