Hello Alexey,
It’s me again, and unfortunately I don’t bring good news this time.
I have been checking the status of my node each day by running:
docker ps -a
And today I saw this output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
22e3da993205 storjlabs/watchtower "/watchtower node1 w…" 4 days ago Up 4 days watchtower
12524c955e58 storjlabs/storagenode:latest "/entrypoint" 4 days ago Up 10 hours 127.0.0.1:14002->14002/tcp, 0.0.0.0:28967->28967/tcp, 0.0.0.0:28967->28967/udp node1
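As a side note, when I only want the status line I usually narrow that output down like this (node1 is the container name from my own run command, so this part is specific to my setup):
docker ps -a --filter "name=node1" --format "table {{.Names}}\t{{.Status}}"
That makes the mismatch easier to spot: watchtower has been up for 4 days, while node1 has only been up for 10 hours.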
So I checked the logs like this:
docker logs 12524c955e58
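To narrow things down, I also piped the output through grep to pull out just the errors (the container name node1 works the same as the ID here):
docker logs node1 2>&1 | grep -E "ERROR|FATAL" | tail -n 50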
The entries that grabbed my attention were the following:
2022-04-11T06:37:13.920Z INFO piecestore uploaded {"Piece ID": "Y6LAR6PLVNJNJW6UN6KRRQK5C6BXCBE6F7MTTY55C7CY6YLVAO6A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 181504}
2022-04-11T06:37:21.911Z INFO Downloading versions. {"Server Address": "https://version.storj.io"}
2022-04-11T06:37:22.437Z INFO Current binary version {"Service": "storagenode", "Version": "v1.52.2"}
2022-04-11T06:37:22.437Z INFO Version is up to date {"Service": "storagenode"}
2022-04-11T06:37:22.456Z INFO Current binary version {"Service": "storagenode-updater", "Version": "v1.52.2"}
2022-04-11T06:37:22.456Z INFO Version is up to date {"Service": "storagenode-updater"}
I assume this must be watchtower updating my node, or am I mistaken here?
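I have not verified it yet, but I suppose I could confirm that by looking at the watchtower container’s own logs, e.g.:
docker logs watchtower 2>&1 | tail -n 20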
Nevertheless, the following is what I believe caused the node to change status:
2022-04-11T06:42:00.487Z ERROR services unexpected shutdown of a runner {"name": "trust", "error": "satellitesdb: database is locked", "errorVerbose": "satellitesdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).SetAddress:40\n\tstorj.io/storj/storagenode/trust.(*Pool).Run:124\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-11T06:42:04.111Z ERROR piecestore failed to add bandwidth usage {"error": "bandwidthdb: database is locked", "errorVerbose": "bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:723\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:435\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}
2022-04-11T06:42:04.111Z INFO piecestore uploaded {"Piece ID": "CTJYU4BLHE6CXIY34PZH4UCW2ARDON5ONMGRQOKB3CU6XUQ55FTQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 768}
2022-04-11T06:42:15.488Z WARN services service takes long to shutdown {"name": "gracefulexit:chore"}
2022-04-11T06:42:15.488Z WARN servers service takes long to shutdown {"name": "server"}
2022-04-11T06:42:15.512Z INFO servers slow shutdown
And after a long string of lines, it seems like the node restarted on its own:
Error: satellitesdb: database is locked
2022-04-11 06:44:46,378 INFO exited: storagenode (exit status 1; not expected)
2022-04-11 06:44:47,386 INFO spawned: 'storagenode' with pid 15276
2022-04-11 06:44:47,387 WARN received SIGQUIT indicating exit request
2022-04-11 06:44:47,387 INFO waiting for processes, storagenode, storagenode-updater to die
2022-04-11T06:44:47.388Z INFO Got a signal from the OS: "terminated"
2022-04-11 06:44:47,392 INFO stopped: storagenode-updater (exit status 0)
2022-04-11T06:44:49.002Z INFO Telemetry enabled {"instance ID": "1goJaKCBNoXAcojEDrXqQDm4NJYRGEaPjjDadysXZSeT3syJJH"}
2022-04-11T06:44:49.985Z INFO db.migration Database Version {"version": 53}
2022-04-11T06:44:50.354Z INFO preflight:localtime start checking local system clock with trusted satellites' system clock.
2022-04-11T06:44:50.449Z INFO failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.
2022-04-11 06:44:50,449 INFO waiting for processes, storagenode to die
2022-04-11T06:44:50.963Z INFO preflight:localtime local system clock is in sync with trusted satellites' system clock.
2022-04-11T06:44:50.964Z INFO Node 1goJaKCBNoXAcojEDrXqQDm4NJYRGEaPjjDadysXZSeT3syJJH started
2022-04-11T06:44:50.965Z INFO Public server started on [::]:28967
2022-04-11T06:44:50.965Z INFO Private server started on 127.0.0.1:7778
2022-04-11T06:44:51.272Z INFO trust Scheduling next refresh {"after": "1h32m26.648513425s"}
2022-04-11T06:44:51.272Z INFO bandwidth Performing bandwidth usage rollups
2022-04-11T06:44:52.425Z INFO piecestore upload started {"Piece ID": "TKUFFQPMHBE6HZIFULY2QZCBVO5LXGK6E4T6NAZWTUC64ULANZCA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 837885647104}
2022-04-11 06:44:54,429 INFO waiting for processes, storagenode to die
2022-04-11T06:44:56.204Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 sending {"count": 26}
2022-04-11T06:44:56.204Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S sending {"count": 391}
2022-04-11T06:44:56.203Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs sending {"count": 103}
2022-04-11T06:44:56.204Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE sending {"count": 2}
2022-04-11T06:44:56.204Z INFO orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB sending {"count": 8}
2022-04-11T06:44:56.355Z INFO orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB finished
2022-04-11T06:44:56.544Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs finished
2022-04-11T06:44:56.616Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE finished
2022-04-11T06:44:56.693Z INFO piecestore upload started {"Piece ID": "4SLQZWDPB56OYTPQ3LQBWBI6QZ2KJY7LYB7UICU46YGSHKN2TETQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 837885647104}
2022-04-11T06:44:56.902Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 finished
2022-04-11T06:44:57.030Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S finished
2022-04-11T06:44:57.208Z INFO piecestore uploaded {"Piece ID": "4SLQZWDPB56OYTPQ3LQBWBI6QZ2KJY7LYB7UICU46YGSHKN2TETQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Size": 42752}
2022-04-11 06:44:58,210 WARN killing 'storagenode' (15276) with SIGKILL
2022-04-11 06:44:58,210 INFO waiting for processes, storagenode to die
2022-04-11 06:44:58,435 INFO stopped: storagenode (terminated by SIGKILL)
2022-04-11 06:44:58,436 INFO stopped: processes (terminated by SIGTERM)
2022-04-11 06:44:59,893 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2022-04-11 06:44:59,899 INFO RPC interface 'supervisor' initialized
2022-04-11 06:44:59,900 CRIT Server 'inet_http_server' running without any HTTP authentication checking
2022-04-11 06:44:59,900 INFO supervisord started with pid 1
2022-04-11 06:45:00,903 INFO spawned: 'processes' with pid 9
2022-04-11 06:45:00,906 INFO spawned: 'storagenode' with pid 10
2022-04-11 06:45:00,909 INFO spawned: 'storagenode-updater' with pid 11
And afterwards, after it “died”, the node went back to storing data and accepting connections as usual, with the first one coming in at:
2022-04-11T06:45:07.578Z
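Since the errors point at the satellites and bandwidth databases, my plan (unless you advise otherwise) is to stop the node and run an integrity check on those two files with sqlite3, roughly along these lines; the path is just from my own setup, and I am assuming I have the file names right:
docker stop -t 300 node1
# check the two databases named in the errors (path below is my mount point)
sqlite3 /mnt/storj/storagenode/storage/satellites.db "PRAGMA integrity_check;"
sqlite3 /mnt/storj/storagenode/storage/bandwidth.db "PRAGMA integrity_check;"
docker start node1
Would that be a sensible next step?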
Would you be able to provide me with any insight based on the above? I really don’t understand what the cause may be; the internet connection runs over wired RJ-45 without interruption, and the electricity has been stable as well.
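One smaller thing I noticed in the restart log is the warning about the UDP receive buffer. If I read the linked quic-go wiki correctly, the suggested fix is to raise net.core.rmem_max on the host; I have not applied it yet, but I assume it would look roughly like this:
sudo sysctl -w net.core.rmem_max=2500000
# to keep it across reboots (again, my assumption):
echo "net.core.rmem_max=2500000" | sudo tee -a /etc/sysctl.conf
Is that worth doing, or is it unrelated to the database errors?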
I really just want to get this setup done once and for all. Looking forward to your prompt response, as always.
Kind regards,