Storj node restarts constantly

Hi, I have a problem with my node for the last 3 days. It restarts constantly.

2023-09-16T21:27:16Z INFO Got a signal from the OS: “terminated” {“Process”: “storagenode-updater”}
2023-09-16 21:27:16,754 INFO stopped: storagenode-updater (exit status 0)
2023-09-16 21:27:19,759 INFO waiting for storagenode, processes-exit-eventlistener to die
2023-09-16 21:27:19,865 INFO stopped: storagenode (exit status 1)
2023-09-16 21:27:19,868 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)
2023-09-16 21:27:21,613 INFO Set uid to user 0 succeeded
2023-09-16 21:27:21,635 INFO RPC interface ‘supervisor’ initialized
2023-09-16 21:27:21,636 INFO supervisord started with pid 1
2023-09-16 21:27:22,640 INFO spawned: ‘processes-exit-eventlistener’ with pid 11
2023-09-16 21:27:22,646 INFO spawned: ‘storagenode’ with pid 12
2023-09-16 21:27:22,652 INFO spawned: ‘storagenode-updater’ with pid 13
2023-09-16T21:27:22Z INFO Configuration loaded {“Process”: “storagenode-updater”, “Location”: “/app/config/config.yaml”}
2023-09-16T21:27:22Z INFO Invalid configuration file key {“Process”: “storagenode-updater”, “Key”: “storage.allocated-bandwidth”}
2023-09-16T21:27:22Z INFO Invalid configuration file key {“Process”: “storagenode-updater”, “Key”: “server.private-address”}
2023-09-16T21:27:22Z INFO Invalid configuration file key {“Process”: “storagenode-updater”, “Key”: “server.address”}
2023-09-16T21:27:22Z INFO Invalid configuration file key {“Process”: “storagenode-updater”, “Key”: “operator.email”}
2023-09-16T21:27:22Z INFO Invalid configuration file key {“Process”: “storagenode-updater”, “Key”: “operator.wallet”}
2023-09-16T21:27:22Z INFO Invalid configuration file key {“Process”: “storagenode-updater”, “Key”: “operator.wallet-features”}
2023-09-16T21:27:22Z INFO Invalid configuration file key {“Process”: “storagenode-updater”, “Key”: “contact.external-address”}
2023-09-16T21:27:22Z INFO Invalid configuration file key {“Process”: “storagenode-updater”, “Key”: “storage.allocated-disk-space”}
2023-09-16T21:27:22Z INFO Invalid configuration file value for key {“Process”: “storagenode-updater”, “Key”: “log.caller”}
2023-09-16T21:27:22Z INFO Invalid configuration file value for key {“Process”: “storagenode-updater”, “Key”: “log.encoding”}
2023-09-16T21:27:22Z INFO Invalid configuration file value for key {“Process”: “storagenode-updater”, “Key”: “log.level”}
2023-09-16T21:27:22Z INFO Anonymized tracing enabled {“Process”: “storagenode-updater”}
2023-09-16T21:27:22Z INFO Running on version {“Process”: “storagenode-updater”, “Service”: “storagenode-updater”, “Version”: “v1.86.1”}
2023-09-16T21:27:22Z INFO Downloading versions. {“Process”: “storagenode-updater”, “Server Address”: “https://version.storj.io”}
2023-09-16T21:27:23Z INFO Current binary version {“Process”: “storagenode-updater”, “Service”: “storagenode”, “Version”: “v1.86.1”}
2023-09-16T21:27:23Z INFO Version is up to date {“Process”: “storagenode-updater”, “Service”: “storagenode”}
2023-09-16T21:27:23Z INFO Current binary version {“Process”: “storagenode-updater”, “Service”: “storagenode-updater”, “Version”: “v1.86.1”}
2023-09-16T21:27:23Z INFO Version is up to date {“Process”: “storagenode-updater”, “Service”: “storagenode-updater”}
2023-09-16 21:27:24,214 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-16 21:27:24,214 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-16 21:27:24,214 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-16 21:27:25,806 INFO exited: storagenode (exit status 1; not expected)
2023-09-16 21:27:26,819 INFO spawned: ‘storagenode’ with pid 45
2023-09-16 21:27:26,821 WARN received SIGQUIT indicating exit request
2023-09-16 21:27:26,824 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2023-09-16T21:27:26Z INFO Got a signal from the OS: “terminated” {“Process”: “storagenode-updater”}
2023-09-16 21:27:26,835 INFO stopped: storagenode-updater (exit status 0)
2023-09-16 21:27:29,841 INFO waiting for storagenode, processes-exit-eventlistener to die

Looks like the storagenode-updater restarts the node even though there is no update.

How are you running the node?

Can you stop the storagenode-updater and see if the behavior continues?
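The log above shows the image runs its processes under supervisord, so one way that may work is supervisorctl inside the container. This is a sketch based on that assumption, not an officially documented step:

```shell
# Assumes the image's supervisord exposes the standard supervisorctl interface;
# the program name "storagenode-updater" is taken from the supervisord log lines above.
docker exec storagenode-1 supervisorctl stop storagenode-updater
```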

How can I stop it? I run the node with Docker on a Raspberry Pi 4.

Here are the last 20 lines:

2023-09-16T22:47:18Z INFO Invalid configuration file key {“Process”: “storagenode-updater”, “Key”: “contact.external-address”}
2023-09-16T22:47:18Z INFO Invalid configuration file key {“Process”: “storagenode-updater”, “Key”: “storage.allocated-disk-space”}
2023-09-16T22:47:18Z INFO Anonymized tracing enabled {“Process”: “storagenode-updater”}
2023-09-16T22:47:18Z INFO Running on version {“Process”: “storagenode-updater”, “Service”: “storagenode-updater”, “Version”: “v1.86.1”}
2023-09-16T22:47:18Z INFO Downloading versions. {“Process”: “storagenode-updater”, “Server Address”: “https://version.storj.io”}
2023-09-16T22:47:18Z INFO Current binary version {“Process”: “storagenode-updater”, “Service”: “storagenode”, “Version”: “v1.86.1”}
2023-09-16T22:47:18Z INFO Version is up to date {“Process”: “storagenode-updater”, “Service”: “storagenode”}
2023-09-16T22:47:18Z INFO Current binary version {“Process”: “storagenode-updater”, “Service”: “storagenode-updater”, “Version”: “v1.86.1”}
2023-09-16T22:47:18Z INFO Version is up to date {“Process”: “storagenode-updater”, “Service”: “storagenode-updater”}
2023-09-16 22:47:19,766 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-16 22:47:19,767 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-16 22:47:19,767 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-16 22:47:20,875 INFO exited: storagenode (exit status 1; not expected)
2023-09-16 22:47:21,887 INFO spawned: ‘storagenode’ with pid 44
2023-09-16 22:47:21,888 WARN received SIGQUIT indicating exit request
2023-09-16 22:47:21,890 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2023-09-16T22:47:21Z INFO Got a signal from the OS: “terminated” {“Process”: “storagenode-updater”}
2023-09-16 22:47:21,897 INFO stopped: storagenode-updater (exit status 0)
2023-09-16 22:47:24,517 INFO stopped: storagenode (exit status 1)
2023-09-16 22:47:24,519 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)

Well, looks like storagenode simply fails to start.

Storagenode-updater tries to launch it, but it fails again with exit code 1.

It's also odd that there are no other logs.

Can you enable more verbose logging to see at which point during startup storagenode fails? (I'm not familiar with Docker, so I can't give you step-by-step instructions, but there is a wiki somewhere on how to configure log levels.)
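The earlier "Invalid configuration file value" lines show a `log.level` key exists, and Docker flags placed after the image name are passed to the storagenode process. So one plausible way to raise verbosity (an assumption, not a verified recipe) is:

```shell
# Hypothetical placement: everything after the image name goes to storagenode.
# Either set "log.level: debug" in config.yaml, or append the flag like this:
#   docker run ... storjlabs/storagenode:latest --log.level=debug
```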

Seems you redirected logs to a file. Please post the last lines from the file (replace /mnt/storj/storagenode/storagenode.log with your path):

tail /mnt/storj/storagenode/storagenode.log
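If the file is long, filtering for FATAL and ERROR lines narrows things down. A sketch run against a tiny sample log, so the pipeline can be shown end to end (against the real node, point LOG at the path above):

```shell
# Build a tiny sample log, then filter it the same way you would the real file.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2023-09-17T05:45:30Z INFO started
2023-09-17T05:45:31Z FATAL Unrecoverable error
2023-09-17T05:45:34Z ERROR piecestore:monitor Total disk space is less than required minimum
EOF
grep -E "FATAL|ERROR" "$LOG" | tail -n 20
rm -f "$LOG"
```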

2023-09-17T05:45:31Z FATAL Unrecoverable error {“process”: “storagenode”, “error”: “piecestore monitor: disk space requirement not met”, “errorVerbose”: “piecestore monitor: disk space requirement not met\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:135\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2023-09-17T05:45:34Z ERROR piecestore:monitor Total disk space is less than required minimum {“process”: “storagenode”, “bytes”: 500000000000}
2023-09-17T05:45:34Z ERROR services unexpected shutdown of a runner {“process”: “storagenode”, “name”: “piecestore:monitor”, “error”: “piecestore monitor: disk space requirement not met”, “errorVerbose”: “piecestore monitor: disk space requirement not met\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:135\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2023-09-17T05:45:34Z ERROR nodestats:cache Get pricing-model/join date failed {“process”: “storagenode”, “error”: “context canceled”}
2023-09-17T05:45:34Z ERROR gracefulexit:blobscleaner couldn’t receive satellite’s GE status {“process”: “storagenode”, “error”: “context canceled”}
2023-09-17T05:45:34Z ERROR gracefulexit:chore error retrieving satellites. {“process”: “storagenode”, “error”: “satellitesdb: context canceled”, “errorVerbose”: “satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:149\n\tstorj.io/storj/storagenode/gracefulexit.(*Service).ListPendingExits:59\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).AddMissing:58\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2023-09-17T05:45:34Z ERROR piecestore:cache error during init space usage db: {“process”: “storagenode”, “error”: “piece space used: context canceled”, “errorVerbose”: “piece space used: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).Init:73\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:81\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2023-09-17T05:45:34Z ERROR collector error during collecting pieces: {“process”: “storagenode”, “error”: “pieceexpirationdb: context canceled”, “errorVerbose”: “pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:39\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:561\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:88\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2023-09-17T05:45:34Z ERROR bandwidth Could not rollup bandwidth usage {“process”: “storagenode”, “error”: “sql: transaction has already been committed or rolled back”}
2023-09-17T05:45:34Z FATAL Unrecoverable error {“process”: “storagenode”, “error”: “piecestore monitor: disk space requirement not met”, “errorVerbose”: “piecestore monitor: disk space requirement not met\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:135\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
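The root cause is spelled out in the first FATAL line: the monitor enforces a minimum (the 500000000000 bytes, i.e. 500 GB, in the ERROR line above), and the allocation in the run command is far below it. A minimal sketch of the comparison the monitor effectively makes:

```shell
# Minimum the node enforces, taken from the ERROR log line above (bytes).
MINIMUM=500000000000        # 500 GB
# What the run command actually allocated via -e STORAGE="4.5GB" (bytes).
ALLOCATED=4500000000        # 4.5 GB
if [ "$ALLOCATED" -lt "$MINIMUM" ]; then
  echo "disk space requirement not met"
fi
```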

Please show the result of the command

df -T --si

and the mount options in your docker run command.

pi@raspberrypiNODE:~ $ df -T --si
Filesystem Type Size Used Avail Use% Mounted on
/dev/root ext4 246G 5,6G 241G 3% /
devtmpfs devtmpfs 3,9G 0 3,9G 0% /dev
tmpfs tmpfs 4,2G 38M 4,1G 1% /dev/shm
tmpfs tmpfs 1,7G 2,9M 1,7G 1% /run
tmpfs tmpfs 5,3M 4,1k 5,3M 1% /run/lock
/dev/sda1 vfat 268M 33M 236M 13% /boot
/dev/sdb1 ext4 5,0T 4,4T 597G 88% /mnt/storagenode-1
/dev/sdc1 ext4 16T 4,6T 12T 29% /mnt/storagenode-2
tmpfs tmpfs 825M 21k 825M 1% /run/user/1001
tmpfs tmpfs 825M 25k 825M 1% /run/user/1000
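For a byte-exact figure on a single mount (the `--si` sizes above are rounded), GNU df can print just the available space. A sketch, assuming GNU coreutils; replace / with /mnt/storagenode-1 for the node disk:

```shell
# Available bytes on a mount; --output is a GNU coreutils option.
df -B1 --output=avail / | tail -n 1
```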

Docker command:

docker run -d --restart unless-stopped --stop-timeout 300 \
-p 28967:28967/tcp \
-p 28967:28967/udp \
-p 14002:14002 \
-e WALLET="xxxxxxxxxxxxxxxx8DA0fC53A530D3" \
-e EMAIL="xxxxxxxxxxxxxxxxxine.de" \
-e ADDRESS="xxxxxxxxxxxxxx:28967" \
-e BANDWIDTH="860TB" \
-e STORAGE="4.5GB" \
--mount type=bind,source="/mnt/storagenode-1/identity",destination=/app/identity \
--mount type=bind,source="/mnt/storagenode-1",destination=/app/config \
--mount type=bind,source="/mnt/storjlog/log1",destination=/app/log \
--name storagenode-1 storjlabs/storagenode:latest \
--storage2.piece-scan-on-startup=false

All my other Storj nodes on other HDDs are fine.

Please check all databases for this node:

If all databases are OK, you may temporarily add the parameter --storage2.monitor.minimum-disk-space 400GB to your docker run command after the image name.
Once the filewalker finishes calculating the used space, you may remove this parameter.

OK, I tested all databases with sqlite3 and they are all fine. The new docker command is:

docker run -d --restart unless-stopped --stop-timeout 300 \
-p 28967:28967/tcp \
-p 28967:28967/udp \
-p 14002:14002 \
-e WALLET="xxxxxxxxxxxxx0fC53A530D3" \
-e EMAIL="xxxxxxxxxxxxx-online.de" \
-e ADDRESS="xxxxxxxxxxxxxxxxxxx:28967" \
-e BANDWIDTH="860TB" \
-e STORAGE="4.5GB" \
--mount type=bind,source="/mnt/storagenode-1/identity",destination=/app/identity \
--mount type=bind,source="/mnt/storagenode-1",destination=/app/config \
--mount type=bind,source="/mnt/storjlog/log1",destination=/app/log \
--name storagenode-1 storjlabs/storagenode:latest \
--storage2.monitor.minimum-disk-space 400GB \
--storage2.piece-scan-on-startup=false

The new docker log is:

pi@raspberrypiNODE:~ $ sudo docker logs -f --tail 20 storagenode-1
2023-09-17T06:45:16Z INFO Invalid configuration file value for key {“Process”: “storagenode-updater”, “Key”: “log.level”}
2023-09-17T06:45:16Z INFO Invalid configuration file value for key {“Process”: “storagenode-updater”, “Key”: “log.output”}

The new log from the node's logfile is:

pi@raspberrypiNODE:~ $ sudo tail /mnt/storjlog/log1/node.log
2023-09-17T06:46:11Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {“process”: “storagenode”, “satelliteID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “error”: “context canceled”}
2023-09-17T06:46:11Z ERROR pieces failed to lazywalk space used by satellite {“process”: “storagenode”, “error”: “lazyfilewalker: context canceled”, “errorVerbose”: “lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:707\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”}
2023-09-17T06:46:11Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {“process”: “storagenode”, “satelliteID”: “12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo”, “error”: “context canceled”}
2023-09-17T06:46:11Z ERROR pieces failed to lazywalk space used by satellite {“process”: “storagenode”, “error”: “lazyfilewalker: context canceled”, “errorVerbose”: “lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:707\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”, “Satellite ID”: “12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo”}
2023-09-17T06:46:11Z ERROR nodestats:cache Get pricing-model/join date failed {“process”: “storagenode”, “error”: “context canceled”}
2023-09-17T06:46:11Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {“process”: “storagenode”, “satelliteID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”, “error”: “context canceled”}
2023-09-17T06:46:11Z ERROR pieces failed to lazywalk space used by satellite {“process”: “storagenode”, “error”: “lazyfilewalker: context canceled”, “errorVerbose”: “lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:707\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”, “Satellite ID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”}
2023-09-17T06:46:11Z ERROR piecestore:cache error getting current used space: {“process”: “storagenode”, “error”: “filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled”, “errorVerbose”: “group:\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n— filewalker: context 
canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2023-09-17T06:46:11Z ERROR bandwidth Could not rollup bandwidth usage {“process”: “storagenode”, “error”: “sql: transaction has already been committed or rolled back”}
2023-09-17T06:46:12Z FATAL Unrecoverable error {“process”: “storagenode”, “error”: “piecestore monitor: disk space requirement not met”, “errorVerbose”: “piecestore monitor: disk space requirement not met\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:135\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2023-09-17T06:45:16Z INFO Anonymized tracing enabled {“Process”: “storagenode-updater”}
2023-09-17T06:45:16Z INFO Running on version {“Process”: “storagenode-updater”, “Service”: “storagenode-updater”, “Version”: “v1.86.1”}
2023-09-17T06:45:16Z INFO Downloading versions. {“Process”: “storagenode-updater”, “Server Address”: “https://version.storj.io”}
2023-09-17T06:45:17Z INFO Current binary version {“Process”: “storagenode-updater”, “Service”: “storagenode”, “Version”: “v1.86.1”}
2023-09-17T06:45:17Z INFO Version is up to date {“Process”: “storagenode-updater”, “Service”: “storagenode”}
2023-09-17T06:45:17Z INFO Current binary version {“Process”: “storagenode-updater”, “Service”: “storagenode-updater”, “Version”: “v1.86.1”}
2023-09-17T06:45:17Z INFO Version is up to date {“Process”: “storagenode-updater”, “Service”: “storagenode-updater”}
2023-09-17 06:45:18,325 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-17 06:45:18,325 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-17 06:45:18,326 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-09-17 06:45:18,589 INFO exited: storagenode (exit status 1; not expected)
2023-09-17 06:45:19,597 INFO spawned: ‘storagenode’ with pid 44
2023-09-17 06:45:19,599 WARN received SIGQUIT indicating exit request
2023-09-17 06:45:19,601 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2023-09-17T06:45:19Z INFO Got a signal from the OS: “terminated” {“Process”: “storagenode-updater”}
2023-09-17 06:45:19,607 INFO stopped: storagenode-updater (exit status 0)
2023-09-17 06:45:21,661 INFO stopped: storagenode (exit status 1)
2023-09-17 06:45:21,662 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)

This looks wrong:

You should not specify the bandwidth limit (it's not used), and STORAGE should be 4.5TB, not 4.5GB.
The parameter --storage2.monitor.minimum-disk-space 400GB can be removed in that case.
Or do you really want to allocate only 4.5GB? In that case you would need to reduce the monitoring parameter to that value as well.

Ah yes, that was the mistake: it should have been 4.5 TB (4500 GB), not 4.5 GB.
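Spelled out in decimal units, as the STORAGE value is parsed, the typo was off by three orders of magnitude:

```shell
# Decimal units: 1 TB = 1000 GB = 10^12 bytes.
TB_BYTES=$((45 * 10**11))   # 4.5 TB
GB_BYTES=$((45 * 10**8))    # 4.5 GB
echo "4.5TB = ${TB_BYTES} bytes"
echo "4.5GB = ${GB_BYTES} bytes"
echo "factor: $((TB_BYTES / GB_BYTES))"   # 1000x difference
```

With 4.5 GB the allocation sat far below the node's 500 GB minimum, which is exactly why the monitor aborted startup.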