I have no idea what's happening

Hello there,
The title says it all: I have no idea what’s wrong.
I restarted my Raspberry Pi a few hours ago, and since then this is what’s happening => Storj going haywire.

For those who can’t see the video:

The dashboard says everything is fine; the next second, QUIC is misconfigured; the next second, the dashboard can’t load; the next second, everything’s fine again.

(Everything worked fine for months before this.)

Do you have any idea what could cause this? I’m not even sure whether it’s really making the storage node go down or whether it’s only the dashboard acting crazy.
My network connection is stable and I didn’t change a thing, so everything should still be correct in the config file.

Could you try another browser?
Also, please provide the last 20 lines of the log: How do I check my logs? | Storj Docs

Thanks for the fast reply.
I tried with another browser, same result.

From what I’m seeing, the node is actually going down and up, and down and up… and so on.
Last 20 lines of the logs:

> 2022-05-10T19:53:35.442Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
> 2022-05-10T19:53:35.442Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
> 2022-05-10T19:53:35.442Z	ERROR	gracefulexit:chore	error retrieving satellites.	{"Process": "storagenode", "error": "satellitesdb: context canceled", "errorVerbose": "satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:149\n\tstorj.io/storj/storagenode/gracefulexit.(*service).ListPendingExits:89\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run.func1:53\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:50\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
> 2022-05-10T19:53:35.444Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
> 2022-05-10T19:53:35.444Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
> 2022-05-10T19:53:35.445Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
> 2022-05-10T19:53:35.442Z	ERROR	bandwidth	Could not rollup bandwidth usage	{"Process": "storagenode", "error": "bandwidthdb: context canceled", "errorVerbose": "bandwidthdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Rollup:301\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Rollup:53\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Run:45\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
> 2022-05-10T19:53:35.446Z	ERROR	nodestats:cache	Get pricing-model/join date failed	{"Process": "storagenode", "error": "context canceled"}
> 2022-05-10T19:53:35.447Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "TUXTM4ZH2LDFA63ZMFCXFTCPOK6XPKZFDDNSLBVPKWDDXI3WPU6Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "error": "untrusted: unable to get signee: trust: rpc: tcp connector failed: rpc: dial tcp: operation was canceled", "errorVerbose": "untrusted: unable to get signee: trust: rpc: tcp connector failed: rpc: dial tcp: operation was canceled\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimitSignature:140\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:62\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:498\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:228\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}
> 2022-05-10T19:53:35.448Z	ERROR	piecestore:cache	error getting current used space: 	{"Process": "storagenode", "error": "context canceled; context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled"}
> Error: piecestore monitor: disk space requirement not met
> 2022-05-10 19:53:36,939 ERRO pool processes event buffer overflowed, discarding event 761
> 2022-05-10 19:53:36,940 INFO exited: storagenode (exit status 1; not expected)
> 2022-05-10 19:53:37,949 INFO spawned: 'storagenode' with pid 8585
> 2022-05-10T19:53:38.134Z	INFO	Configuration loaded	{"Process": "storagenode", "Location": "/app/config/config.yaml"}
> 2022-05-10T19:53:38.136Z	INFO	Operator email	{"Process": "storagenode", "Address": "xxx@protonmail.com"}
> 2022-05-10T19:53:38.136Z	INFO	Operator wallet	{"Process": "storagenode", "Address": "xx"}
> 2022-05-10T19:53:38.857Z	INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "12b9atoX4gQcAYcsEGy99BG4SrtXuxQKmysLrV9XxhoniLaPqnf"}
> 2022-05-10T19:53:38.974Z	INFO	db.migration	Database Version	{"Process": "storagenode", "version": 53}
> 2022-05-10 19:53:38,975 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

Another 20 lines of logs, in case it helps:

2022-05-10T19:58:49.777Z	INFO	trust	Scheduling next refresh	{"Process": "storagenode", "after": "8h13m57.061413797s"}
2022-05-10T19:58:49.778Z	WARN	piecestore:monitor	Disk space is less than requested. Allocated space is	{"Process": "storagenode", "bytes": 459498717952}
2022-05-10T19:58:49.778Z	ERROR	piecestore:monitor	Total disk space is less than required minimum	{"Process": "storagenode", "bytes": 500000000000}
2022-05-10T19:58:49.778Z	ERROR	services	unexpected shutdown of a runner	{"Process": "storagenode", "name": "piecestore:monitor", "error": "piecestore monitor: disk space requirement not met", "errorVerbose": "piecestore monitor: disk space requirement not met\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:125\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-05-10T19:58:49.779Z	INFO	bandwidth	Performing bandwidth usage rollups	{"Process": "storagenode"}
2022-05-10T19:58:49.779Z	ERROR	bandwidth	Could not rollup bandwidth usage	{"Process": "storagenode", "error": "bandwidthdb: context canceled", "errorVerbose": "bandwidthdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Rollup:301\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Rollup:53\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Run:45\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-05-10T19:58:49.779Z	ERROR	nodestats:cache	Get pricing-model/join date failed	{"Process": "storagenode", "error": "context canceled"}
2022-05-10T19:58:49.780Z	ERROR	gracefulexit:blobscleaner	couldn't receive satellite's GE status	{"Process": "storagenode", "error": "context canceled"}
2022-05-10T19:58:49.780Z	ERROR	piecestore:cache	error getting current used space: 	{"Process": "storagenode", "error": "context canceled; context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled"}
2022-05-10T19:58:49.781Z	ERROR	collector	error during collecting pieces: 	{"Process": "storagenode", "error": "pieceexpirationdb: context canceled", "errorVerbose": "pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:39\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:521\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:88\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-05-10T19:58:49.780Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-05-10T19:58:49.782Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-05-10T19:58:49.783Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-05-10T19:58:49.784Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-05-10T19:58:49.785Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-05-10T19:58:49.786Z	ERROR	gracefulexit:chore	error retrieving satellites.	{"Process": "storagenode", "error": "satellitesdb: context canceled", "errorVerbose": "satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits.func1:152\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:164\n\tstorj.io/storj/storagenode/gracefulexit.(*service).ListPendingExits:89\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run.func1:53\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:50\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-05-10T19:58:49.785Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
Error: piecestore monitor: disk space requirement not met
2022-05-10 19:58:51,287 ERRO pool processes event buffer overflowed, discarding event 815
2022-05-10 19:58:51,287 INFO exited: storagenode (exit status 1; not expected)

I tried to rm the storagenode and did a fresh install: same problem.

“Error: piecestore monitor: disk space requirement not met”
I actually keep some torrent data on this HDD as well:
right now there’s 1 TB+ of Storj data AND 6+ TB of torrent stuff.

It was never an issue before, but was there an update that now checks the total available space on the HDD before doing anything?
Because in the config file I put “7TB” of available space, while in reality there’s only ~150 GB actually free. I delete torrent data from time to time, whenever Storj needs more space (so I can enjoy torrenting while Storj fills the HDD).
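
Just to illustrate what I imagined the check to be: a quick Go sketch of a naive “free space vs. a 500 GB minimum” test (only my guess, not the actual storagenode code; the path and the constant are taken from my own logs above):

```go
// My guess at the check, sketched in Go (NOT the real storagenode source).
// It only compares the free space on the storage path to a 500 GB minimum,
// which is roughly what seemed to happen to me with only ~150 GB free.
package main

import (
	"fmt"
	"log"
	"syscall"
)

const requiredMinimum = 500_000_000_000 // 500 GB, from the log message above

func main() {
	var st syscall.Statfs_t
	// "/app/config" is the storage path inside my container (taken from the
	// "Configuration loaded" log line); adjust it for your own setup.
	if err := syscall.Statfs("/app/config", &st); err != nil {
		log.Fatal(err)
	}
	free := st.Bavail * uint64(st.Bsize) // bytes still free on the disk

	if free < requiredMinimum {
		fmt.Println("Error: piecestore monitor: disk space requirement not met")
	} else {
		fmt.Printf("free space looks fine: %d bytes\n", free)
	}
}
```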

Well,
I deleted 2 TB of my (torrent/personal) data to free up some space, restarted the node, and now it’s back to normal.

I guess I went under the “required minimum” space. Was this “safeguard” always there, or is it new? Because, as I was saying, I never had this problem before, and I do the same thing with multiple HDDs/nodes.

The 500 GB minimum has always been there.

That’s weird.
I’m running a node on a 16 TB HDD that’s down to 176 GB of available space (500 GB of Storj data & ~14 TB of personal data), and it’s acting totally normal.
Another one on an 8 TB HDD that’s down to 116 GB, also acting normal.

Storj needs 500 GB minimum, but that of course includes space already in use. It’s only a problem when the space used by Storj + the free space is less than 500 GB.
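
In other words, roughly this condition (a minimal sketch of how I understand it, not the actual storagenode source; the names are made up):

```go
// Minimal sketch of the condition as I understand it
// (made-up names, NOT the actual storagenode source).
package main

import "fmt"

const requiredMinimum = 500_000_000_000 // 500 GB

// meetsMinimum counts the space Storj already occupies plus the space still
// free on the disk, not just the free space alone.
func meetsMinimum(storjUsedBytes, freeBytes uint64) bool {
	return storjUsedBytes+freeBytes >= requiredMinimum
}

func main() {
	// Your 16 TB node: ~500 GB of Storj data + ~176 GB free ≈ 676 GB,
	// comfortably above the minimum, so it keeps running.
	fmt.Println(meetsMinimum(500_000_000_000, 176_000_000_000)) // true
}
```

That’s also why the node with only 176 GB free is fine: the Storj data it already holds counts toward the minimum.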

I’m not sure I understand, but hey, at least I fixed the problem pretty easily, thanks to the log file.