Node Randomly Goes Offline (and/or Reports "QUIC Misconfigured") Until Restart

Hi everyone

My node has been running for 3 months with no major issues. Quite recently (the last 2 weeks), it started to randomly report "QUIC Misconfigured", and sometimes it goes straight offline (last contact a long time ago).

To be clear, "QUIC Misconfigured" and going offline are two completely separate symptoms (I'm not sure whether they share the same cause). Sometimes the node is running fine (online, last contact 0m ago) but still shows "QUIC Misconfigured"; other times I receive emails saying my node went offline, and when I check, the last contact time is more than an hour ago.

Both situations can be resolved by simply restarting the Docker (Podman, if it matters) container. It is just quite annoying to have to restart it from time to time.
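
For reference, the restart itself is trivial; assuming the container is named "storagenode" (adjust to your own container name):

    podman restart storagenode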

Here’s my setup:

  • Host OS: Red Hat Enterprise Linux 9.4
  • Container runtime: Podman 4.9.4
  • SELinux enabled and volume mounted with :Z option
  • Firewall enabled with port 28967 TCP/UDP open
  • Port 28967 forwarded to the internet (and I can confirm from an external network that UDP 28967 is indeed open every single time my node is reporting "QUIC Misconfigured"; a sample check follows this list)
  • Static IP so no DDNS configured. DNS TTL set to 1 hour.
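
For completeness, the external reachability check is roughly the following, run from a machine outside my network (the address is a placeholder, and nc only proves the TCP side; for UDP/QUIC there is no simple connect test, so I go by the dashboard's QUIC status):

    nc -vz my.public.ip.example 28967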

Does anyone have any ideas?

Here is part of the log (truncated due to the forum's character limit):

Node Log (Very long)
2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "E3TX27XEYVPYXHMHEPR4XOMN7KKX66CHZGA5U7ZNX6EGPHVDZXSA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "MQW655BBIKKKLX5RDUHJVOKY74ALKEWOJF7JF4PRUNN5TNJ5IYVA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "CQTBS5JD5OYDLFSHSC32CQYYGNYGOQJZSSS7QQDRVZR2FBHGZDAQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "Y4SFXTKRGMGYZKEQKDKLTE4YURB2YEWMSB6VZOZ3IVLZWJOMETSQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "IG5XCGYGZBZWC5IBISOKWU7B7XGTCFKK22PD5PLURNILRSQIIJYA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "IINKJB7ECK25ZVS24A5X65STXB2KX4GGVNCZQ5YSDEAYGCE4HXAA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "DI3WVV4VFAYGQV4XRFLQLKQNMVRGBSDI6FVAOIE5FKHZEIQJASHA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "ITPZ2ZIYZVE6BG6UPSLL7SAPRK2GJLQNWWTFHSYQFQEQAMEZVFSQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "COUNMEW3P4QW34COWVAINTMWYHVBGKMJRRSG6PYAMLOX5LWEAA3Q", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "T23CHDECPW7FDP56OMJY3BNCBQQPTLQRSLS5WM5ZKRMU2Z4Z4VLQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "YM2VH7MJWXQKNXOHPJZKDRF6NVYEK7A6RTBQTXCSNQTFX7N7YHAA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "PEDKJHTBJUZMODF6MUOCIRQU3ZK3TD5Q5NE4JTKR4NSEKH7QP4PQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "X5PZKMDADONKW2IOYQJMWCULR6WVSDGGJST55X7YNPKF2BUPQUDA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "PW2LC4QCAX6HFACPJW3FJ2A64KU5AFUF7NBL5LGNPAXDW6WOWZGA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "XP3VDMSH62MFRQMAOKNUULELYJZMA32AZ5GJNSMFDAXWJYHG76OQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "6NHOIZTVNHLFKYTBWMXNROBCOJDFND6KRZKRZ2PMLQL476KV3ZRA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    WARN    collector       unable to delete piece  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "2SB2IJ64GINLJZXY6P2K4SAQNJJXMRTHWQ4JCW2WSPUOWJLDHA3Q", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:124\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:340\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:320\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:359\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:112\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:19Z    INFO    collector       no expired pieces to collect    {"Process": "storagenode"}

2024-08-31T18:33:19Z    INFO    collector       expired pieces collection completed     {"Process": "storagenode", "count": 890}

2024-08-31T18:33:21Z    ERROR   Error retrieving version info.  {"Process": "storagenode-updater", "error": "version checker client: Get \"https://version.storj.io\": dial tcp: lookup version.storj.io on 10.89.0.1:53: read udp 10.89.0.3:54253->10.89.0.1:53: read: no route to host", "errorVerbose": "version checker client: Get \"https://version.storj.io\": dial tcp: lookup version.storj.io on 10.89.0.1:53: read udp 10.89.0.3:54253->10.89.0.1:53: read: no route to host\n\tstorj.io/storj/private/version/checker.(*Client).All:68\n\tmain.loopFunc:20\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tmain.cmdRun:138\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:77\n\tmain.main:22\n\truntime.main:271"}

2024-08-31T18:33:44Z    INFO    orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S      sending {"Process": "storagenode", "count": 4479}

2024-08-31T18:33:44Z    INFO    orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      sending {"Process": "storagenode", "count": 463}

2024-08-31T18:33:44Z    INFO    orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6      sending {"Process": "storagenode", "count": 139}

2024-08-31T18:33:44Z    INFO    orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE       sending {"Process": "storagenode", "count": 1}

2024-08-31T18:33:54Z    INFO    orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6      finished        {"Process": "storagenode"}

2024-08-31T18:33:54Z    ERROR   orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6      failed to settle orders for satellite   {"Process": "storagenode", "satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:60714->10.89.0.1:53: i/o timeout", "errorVerbose": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:60714->10.89.0.1:53: i/o timeout\n\tstorj.io/storj/storagenode/orders.(*Service).settleWindow:272\n\tstorj.io/storj/storagenode/orders.(*Service).SendOrders.func2:227\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:54Z    INFO    orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE       finished        {"Process": "storagenode"}

2024-08-31T18:33:54Z    ERROR   orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE       failed to settle orders for satellite   {"Process": "storagenode", "satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "error": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io on 10.89.0.1:53: read udp 10.89.0.3:44399->10.89.0.1:53: i/o timeout", "errorVerbose": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io on 10.89.0.1:53: read udp 10.89.0.3:44399->10.89.0.1:53: i/o timeout\n\tstorj.io/storj/storagenode/orders.(*Service).settleWindow:272\n\tstorj.io/storj/storagenode/orders.(*Service).SendOrders.func2:227\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:54Z    INFO    orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S      finished        {"Process": "storagenode"}

2024-08-31T18:33:54Z    INFO    orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      finished        {"Process": "storagenode"}

2024-08-31T18:33:54Z    ERROR   orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S      failed to settle orders for satellite   {"Process": "storagenode", "satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "error": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:38136->10.89.0.1:53: i/o timeout", "errorVerbose": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:38136->10.89.0.1:53: i/o timeout\n\tstorj.io/storj/storagenode/orders.(*Service).settleWindow:272\n\tstorj.io/storj/storagenode/orders.(*Service).SendOrders.func2:227\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:33:54Z    ERROR   orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      failed to settle orders for satellite   {"Process": "storagenode", "satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:45407->10.89.0.1:53: read: no route to host", "errorVerbose": "order: failed to start settlement: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:45407->10.89.0.1:53: read: no route to host\n\tstorj.io/storj/storagenode/orders.(*Service).settleWindow:272\n\tstorj.io/storj/storagenode/orders.(*Service).SendOrders.func2:227\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

2024-08-31T18:34:14Z    ERROR   nodestats:cache Get stats query failed  {"Process": "storagenode", "error": "nodestats: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io on 10.89.0.1:53: read udp 10.89.0.3:44399->10.89.0.1:53: i/o timeout; nodestats: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:43395->10.89.0.1:53: read: no route to host; nodestats: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:55769->10.89.0.1:53: read: no route to host; nodestats: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:59268->10.89.0.1:53: read: no route to host", "errorVerbose": "group:\n--- nodestats: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io on 10.89.0.1:53: read udp 10.89.0.3:44399->10.89.0.1:53: i/o timeout\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190\n--- nodestats: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:43395->10.89.0.1:53: read: no route to host\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190\n--- nodestats: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:55769->10.89.0.1:53: read: no route to host\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190\n--- nodestats: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:59268->10.89.0.1:53: read: no route to host\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}

2024-08-31T18:37:31Z    ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 12, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:40286->10.89.0.1:53: read: no route to host", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:40286->10.89.0.1:53: read: no route to host\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}

2024-08-31T18:38:05Z    ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 12, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io on 10.89.0.1:53: read udp 10.89.0.3:52884->10.89.0.1:53: read: no route to host", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io on 10.89.0.1:53: read udp 10.89.0.3:52884->10.89.0.1:53: read: no route to host\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}

2024-08-31T18:38:16Z    ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 12, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:59923->10.89.0.1:53: read: no route to host", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:59923->10.89.0.1:53: read: no route to host\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}

2024-08-31T18:38:37Z    ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 12, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:33349->10.89.0.1:53: read: no route to host", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io on 10.89.0.1:53: read udp 10.89.0.3:33349->10.89.0.1:53: read: no route to host\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}

Your setup has issues with the network configuration. If you are using the snap (or another sandboxed package manager) version of Podman, I would recommend uninstalling it and installing a native one, either from your distribution's package manager or by following the official documentation.

Perhaps it's the same firewall problem as on CentOS:

You should forward the port not to the internet but in the reverse direction, from the internet to your node. If you have any outbound rules, please either delete or disable them, or add another rule that allows connections from your host (any source port) to any destination host and port on the internet.
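
You could also check, while the node is in the failed state, whether the container can still resolve DNS through the Podman gateway. A minimal sketch, assuming your container is named storagenode (getent may or may not be present in the image):

    podman exec -it storagenode cat /etc/resolv.conf
    podman exec -it storagenode getent hosts us1.storj.io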

Hi Alexey, thanks for the reply. Here is a more detailed explanation of the setup:

Judging by the error messages, I agree.

I do not use snap/Flatpak, nor are any of these sandboxed package managers available on my server, for stability and simplicity reasons. My Podman package is indeed the native version installed from the official RHEL RPM repository.

I had a look at that issue; it seems the original poster could not run the node at all. In my case, however, it works perfectly for a few days and then fails at random.

Also, port 28967 is indeed allowed as a permanent rule in the public zone for both TCP and UDP, roughly as follows:
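
These are the standard firewall-cmd commands I used (zone name per the default RHEL setup):

    sudo firewall-cmd --permanent --zone=public --add-port=28967/tcp
    sudo firewall-cmd --permanent --zone=public --add-port=28967/udp
    sudo firewall-cmd --reload
    sudo firewall-cmd --zone=public --list-ports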

Apologies for mis-describing the port forwarding. Yes, port 28967 is forwarded from the internet (WAN:28967) to my server (LAN:28967), with the same port on both the WAN and LAN sides.

What I originally meant was "port 28967 opened to the internet" rather than "forwarded to the internet".

I do not have outbound rules, because my router and firewall are configured by default to allow any outbound connection from any source port to any destination port.

I hope this better explains my situation.

Yes, thank you. Then it seems the networking in this setup is unstable for some reason. I would suggest searching for solutions related to sporadic network issues with Podman on Red Hat, because it does not appear to be related to the storagenode itself.
You may try the --network host option, but please be aware that all of the node's local ports will then be exposed on the host. So if you run multiple nodes, you would need to assign unique ports in the address-related parameters of every additional node; see How to add an additional drive? - Storj Docs.
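
As a rough sketch only (please double-check the parameters against the official setup documentation; the wallet, email, address, storage size, and paths below are placeholders, not your actual values):

    podman run -d --name storagenode --restart unless-stopped \
      --network host \
      -e WALLET="0xYourWalletAddress" \
      -e EMAIL="you@example.com" \
      -e ADDRESS="your.public.ip.or.dns:28967" \
      -e STORAGE="10TB" \
      -v /mnt/storj/identity:/app/identity:Z \
      -v /mnt/storj/data:/app/config:Z \
      docker.io/storjlabs/storagenode:latest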

You may also try updating Podman to the latest stable version following their guide, because distribution package managers usually ship outdated versions, or perhaps try using Docker instead.

By the way, do you have any Unrecoverable and/or FATAL errors somewhere earlier in the log, before you started seeing the "no route to host" errors?
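
For example, assuming the container is named storagenode and the node logs to the container's output:

    podman logs storagenode 2>&1 | grep -E "FATAL|Unrecoverable"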

Thanks for the suggestion. I haven't changed anything, but the problem seems to have magically disappeared since I made this post (it had happened multiple times over a couple of weeks beforehand).

Unfortunately, there was no way to know, because the log was flooded (and overwritten once it reached the log limit) with WARN and ERROR entries from Storj (as shown above in the "Very long" node log).
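
To avoid losing the history next time, I may redirect the node log to a file on the mounted volume; as far as I understand this can be done with the log.output option in config.yaml (please correct me if that's wrong):

    # config.yaml in the mounted config directory
    log.output: "/app/config/node.log"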

Anyway, I will continue to monitor, and if a similar error happens again, I will update this thread accordingly.

Thanks for your help!
