Stop/Shutdown request received - Why?

My node shut down all on its own last night…

I was asleep at 9:14 pm, and the log has this:

2022-04-06T09:14:43.682-0400	INFO	Stop/Shutdown request received.
2022-04-06T09:14:43.684-0400	INFO	piecestore	downloaded	{"Piece ID": "BXZCUV7NF3YIQ75J7SOHGI63M57JD6TUAWKXKCZID52K76JC54MA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2022-04-06T09:14:43.685-0400	INFO	piecestore	upload canceled	{"Piece ID": "HB54QLWRU6PYYOJGAASI5OGXKMXQCHR72L7SGY6E6CGKHVMZENJA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT_REPAIR", "Size": 794624}
2022-04-06T09:14:43.685-0400	INFO	piecestore	upload canceled	{"Piece ID": "DN2SJ2QSOSUVUSI2FHOVHX5D4NYH5IETTQKO62K6ALJL4L5DIOSA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Size": 65536}
2022-04-06T09:14:43.685-0400	INFO	piecestore	upload canceled	{"Piece ID": "QIOS3NFA3AYNERDM4N75JBQRPT6UE3YPRMAWGCODAJRKFH6WDDBQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Size": 163840}
2022-04-06T09:14:43.686-0400	INFO	piecestore	upload canceled	{"Piece ID": "B4URVOEFGXXIGSYCFHEAJTQPX3ABDLIW3ZLGAEN5N36KC3H6GR5Q", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "PUT_REPAIR", "Size": 1581056}
2022-04-06T09:14:43.693-0400	ERROR	pieces:trash	emptying trash failed	{"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T09:14:43.693-0400	ERROR	pieces:trash	emptying trash failed	{"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T09:14:43.693-0400	ERROR	pieces:trash	emptying trash failed	{"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T09:14:43.802-0400	ERROR	pieces:trash	emptying trash failed	{"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T09:14:43.939-0400	ERROR	piecestore:cache	error getting current used space: 	{"error": "context canceled; context canceled; context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled"}

Hi @7tigers
What OS are you running? This looks like the standard log entry I see when I reboot the computer or manually stop or restart the node service on my Windows 10 node.

You need to check your logs before the line with “Stop/Shutdown request received”. It could be related to unreadable or unwritable folders/files.
Please search for “ERROR” or even “FATAL”; another approach is to search for “blobs”.
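
For example, a minimal sketch, assuming a Docker node whose container is named storagenode (adjust if your node logs to a file or runs as a Windows service):

docker logs storagenode 2>&1 | grep -B 20 "Stop/Shutdown request received"
docker logs storagenode 2>&1 | grep -E "ERROR|FATAL" | tail -n 50

The -B 20 prints the 20 lines leading up to the shutdown message, which is usually where the real cause shows up.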

Nodes will shut down if they run out of memory, which can happen quite rapidly…
This can be caused by a slow CPU, but it is usually caused by the storage being unable to keep up: ingress fills up the memory until the node shuts down, if nothing worse happens first.
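
On Linux you can check whether the kernel's OOM killer was involved around the time of the shutdown (a sketch; the exact log location varies by distribution):

# look for OOM-killer activity in the kernel log
dmesg -T | grep -iE "out of memory|oom-kill"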

In these cases it will not print this message, because the process is killed by the OS.
There are two reasons for such a message:

  • file/folder corruption, especially of the storage-dir-verification file (a quick check is sketched below);
  • an external shutdown request, such as an OS reboot or a restart of the service.
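
A minimal check for the verification file on a Linux node (the storage path below is a placeholder; substitute your own):

ls -l /mnt/storagenode/storage/storage-dir-verification

If the file is missing or unreadable, the node treats the storage directory as invalid and stops.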

I had this same issue happen on 3 nodes…
No clue why.


I know I didn’t ask them to do that…
Not all nodes seem affected, though… which is weird…

Looks like it might have been the new updater… I included a large log portion to give something to work with…

It seems to update and restart itself, and then shut down again permanently.

One of them was even still running the pre-v1.50.4 beta, but two others were on the latest… though they had all been used for testing the same beta version before the latest v1.50.4.
Maybe there were two different versions of the v1.50.4 storagenode,
and one had a broken update,

because not all my nodes have been affected… or maybe the rest just haven’t gotten the update signal yet…

That would be a nope… just checked, and everything is on v1.52.2.
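
For reference, one way to confirm the running version of a Docker node from its logs (a sketch, assuming the container is named storagenode and still logs to Docker rather than a file):

docker logs storagenode 2>&1 | grep "Current binary version" | tail -n 2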

I don’t really think this is an issue… like I said, it’s limited to the nodes that took part in the beta test,
and so it will most likely have been overwritten by now…

But at least we know why it happens.

2022-04-06T23:27:02.538Z        INFO    piecestore      downloaded      {"Piece ID": "K2XCMSGJQCS7U752EC2UTKIJQLE2HYVVUUZOGKW2453RI52S5NZA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2022-04-06T23:27:02.783Z        INFO    piecestore      uploaded        {"Piece ID": "62UWGYQ2E3VUWCK474FBP7WA5HKAHC4XNSEYCXHTXKKD7XV4YIJQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Size": 2319360}
2022-04-06T23:27:02.799Z        INFO    piecestore      uploaded        {"Piece ID": "ETHOQJU2HVFE4BUFXIKBZUQ4ODVBTAGAZVQ2BZWSORLLGXSEHYXA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Size": 73472}
2022-04-06T23:27:04.136Z        INFO    piecestore      uploaded        {"Piece ID": "MXCM6UOWD2ZSMIJ5EPCK3DW6GMOI7WH4BA6SH7AR4SZH6QO2CVCA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 217856}
2022-04-06T23:27:04.211Z        INFO    piecestore      upload started  {"Piece ID": "EC7Y2LKNMZQL3MNAWGQCFN6BM4CWIIMOIBAMFXVRWENCZ3JQDKEQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 725975081397}
2022-04-06T23:27:04.447Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "NY4IVEVGPUGKL4NBL6DETNMKNU62U7N7D3O7YL7FLZDLZNFT5MIA"}
2022-04-06T23:27:04.698Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "SWGFMHLUNWXH2HLIN4L4WD7LGSVNY25UAGO3PCB2CIRGYF3QTT7A"}
2022-04-06T23:27:04.806Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "EVZFFS2EBOLPQUGGP3BQROLDJTTUUUCNWKAL2AXDNDTOZ7RY3QSA"}
2022-04-06T23:27:05.722Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "NLME5YQSKAJGRCFV36BAQ3Z7VV3WF6HDGK4WWIFG3W5EASFGRICQ"}
2022-04-06T23:27:05.799Z        INFO    Downloading versions.   {"Server Address": "https://version.storj.io"}
2022-04-06T23:27:06.046Z        INFO    piecestore      uploaded        {"Piece ID": "EC7Y2LKNMZQL3MNAWGQCFN6BM4CWIIMOIBAMFXVRWENCZ3JQDKEQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 181504}
2022-04-06T23:27:06.362Z        INFO    Current binary version  {"Service": "storagenode", "Version": "v1.50.4"}
2022-04-06T23:27:06.365Z        INFO    Download started.       {"From": "https://github.com/storj/storj/releases/download/v1.52.2/storagenode_linux_amd64.zip", "To": "/tmp/storagenode_linux_amd64.3825147910.zip"}
2022-04-06T23:27:07.149Z        INFO    piecestore      download started        {"Piece ID": "6JXR7N7HPZL4GK5W5CLWZUEOOTQE4XLWF36MAPKLCEWVCPLFY7XQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2022-04-06T23:27:07.973Z        INFO    piecestore      upload started  {"Piece ID": "A5OVXBLEIBP5KA5RQ5JOS2MLC5BFCFTXR7DHX7UFEYV3FUI3X3ZQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 725974899381}
2022-04-06T23:27:08.078Z        INFO    piecestore      upload started  {"Piece ID": "IAIICQPCS5GFGTMLZGIPXKS5M5ICIQEPAFZKQ2CGN4HJGXMJJBMA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Available Space": 725974879925}
2022-04-06T23:27:08.087Z        INFO    piecestore      uploaded        {"Piece ID": "A5OVXBLEIBP5KA5RQ5JOS2MLC5BFCFTXR7DHX7UFEYV3FUI3X3ZQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 18944}
2022-04-06T23:27:08.089Z        INFO    piecestore      uploaded        {"Piece ID": "IAIICQPCS5GFGTMLZGIPXKS5M5ICIQEPAFZKQ2CGN4HJGXMJJBMA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Size": 8448}
2022-04-06T23:27:08.334Z        INFO    piecestore      downloaded      {"Piece ID": "6JXR7N7HPZL4GK5W5CLWZUEOOTQE4XLWF36MAPKLCEWVCPLFY7XQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2022-04-06T23:27:09.053Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "XYLSWL33JXFOYKLAMFXTKPPANKY53MCB3PPO2KN7C3FWFQAIRRNA"}
2022-04-06T23:27:09.126Z        INFO    piecestore      upload started  {"Piece ID": "ASZOTCBGAXW3RPKN4MXTWBIVQCMUXWPHLPO7PRVDPF5K332TG6FA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 725974870965}
2022-04-06T23:27:09.548Z        INFO    piecestore      download started        {"Piece ID": "XD4EOJGGL3XSY4WIA3EDL3WRRKWKT5FFKCSWNDCEXQHPXWQONIHQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2022-04-06T23:27:09.586Z        INFO    piecestore      upload started  {"Piece ID": "IGJKWD7AXAWUGA32DTM72MGCN24X5I5PHHVZOPQ4ZQY3J4IAVISA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 725974870965}
2022-04-06T23:27:09.612Z        INFO    piecestore      upload started  {"Piece ID": "QJKOLKDBFQFOKVBIEEFGXUYTV4QWJVUNHSOXTNX46AMUQZGPBA5Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 725974870965}
2022-04-06T23:27:09.758Z        INFO    piecestore      downloaded      {"Piece ID": "XD4EOJGGL3XSY4WIA3EDL3WRRKWKT5FFKCSWNDCEXQHPXWQONIHQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2022-04-06T23:27:09.824Z        INFO    piecestore      uploaded        {"Piece ID": "QJKOLKDBFQFOKVBIEEFGXUYTV4QWJVUNHSOXTNX46AMUQZGPBA5Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 8192}
2022-04-06T23:27:09.846Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "SZ2PR5Q7WZAWKBFT6RD4ZYK3HT7FRFVUTV3PBBUYV56F47UCNDOA"}
2022-04-06T23:27:10.237Z        INFO    piecestore      upload started  {"Piece ID": "VCSSQWMRSIQN3ARJCKIC2SK2GMSZSTPMMN4JFC23MT2MCU3VU3IA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 725974862261}
2022-04-06T23:27:10.281Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "XD4EOJGGL3XSY4WIA3EDL3WRRKWKT5FFKCSWNDCEXQHPXWQONIHQ"}
2022-04-06T23:27:10.395Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "BZPDAF3IOCFJXZEFA673OIKZQ6EKCBGNRWXXKNKXIHHQ54LGKLZQ"}
2022-04-06T23:27:10.673Z        INFO    piecestore      uploaded        {"Piece ID": "IGJKWD7AXAWUGA32DTM72MGCN24X5I5PHHVZOPQ4ZQY3J4IAVISA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 145408}
2022-04-06T23:27:10.785Z        INFO    piecestore      upload started  {"Piece ID": "ZVHNC4WJ64VVDH74BQORUTP4JWKHPYRT4MIG26XI3MFRKQX4EEHA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Available Space": 725974716341}
2022-04-06T23:27:10.822Z        INFO    piecestore      download started        {"Piece ID": "QIBUEMDZAXDENVH6DLNW2RPOCTMVAHRBK5Z5LCUCUZQLE2FWK7ZA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}
2022-04-06T23:27:11.159Z        INFO    piecestore      uploaded        {"Piece ID": "ZVHNC4WJ64VVDH74BQORUTP4JWKHPYRT4MIG26XI3MFRKQX4EEHA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Size": 37120}
2022-04-06T23:27:11.305Z        INFO    piecestore      download started        {"Piece ID": "2QEZWMC6MQZ4JZW2IHJ32ODESPCGIADTFK36BJJEYXBGKLXLHCNA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2022-04-06T23:27:11.676Z        INFO    piecestore      uploaded        {"Piece ID": "VCSSQWMRSIQN3ARJCKIC2SK2GMSZSTPMMN4JFC23MT2MCU3VU3IA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 181504}
2022-04-06T23:27:11.822Z        INFO    Download finished.      {"From": "https://github.com/storj/storj/releases/download/v1.52.2/storagenode_linux_amd64.zip", "To": "/tmp/storagenode_linux_amd64.3825147910.zip"}
2022-04-06T23:27:11.876Z        INFO    Restarting service.     {"Service": "storagenode"}
2022-04-06T23:27:11.893Z        INFO    piecestore      downloaded      {"Piece ID": "2QEZWMC6MQZ4JZW2IHJ32ODESPCGIADTFK36BJJEYXBGKLXLHCNA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2022-04-06T23:27:11.914Z        INFO    piecestore      upload started  {"Piece ID": "Q7TQADQVNRNHVR2H4JUPKMRW346XVDWA2V3MBVHLEI6PTTM22BZQ", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "PUT_REPAIR", "Available Space": 725974496693}
2022-04-06T23:27:11.960Z        INFO    piecestore      uploaded        {"Piece ID": "Q7TQADQVNRNHVR2H4JUPKMRW346XVDWA2V3MBVHLEI6PTTM22BZQ", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "PUT_REPAIR", "Size": 768}
2022-04-06T23:27:11.972Z        INFO    piecestore      uploaded        {"Piece ID": "ASZOTCBGAXW3RPKN4MXTWBIVQCMUXWPHLPO7PRVDPF5K332TG6FA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 181504}
2022-04-06T23:27:12.969Z        INFO    piecestore      download started        {"Piece ID": "IARXQAACJB7FPAHBDR4CSEZRMXGUF6TFMJJ2LROCG3QCYOUPPE2Q", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR"}
2022-04-06T23:27:13.052Z        INFO    piecestore      upload started  {"Piece ID": "FUXT4JP24WUVDSV5S3VIN7FSNASLTAKRM6WWV4XRQLNXBDWZYUNQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 725974313397}
2022-04-06T23:27:13.085Z        INFO    piecestore      uploaded        {"Piece ID": "FUXT4JP24WUVDSV5S3VIN7FSNASLTAKRM6WWV4XRQLNXBDWZYUNQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 4608}
2022-04-06T23:27:13.217Z        INFO    piecestore      downloaded      {"Piece ID": "IARXQAACJB7FPAHBDR4CSEZRMXGUF6TFMJJ2LROCG3QCYOUPPE2Q", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR"}
2022-04-06T23:27:13.714Z        INFO    Service restarted successfully. {"Service": "storagenode"}
2022-04-06T23:27:13.724Z        INFO    Got a signal from the OS: "interrupt"
2022-04-06T23:27:13.746Z        INFO    Current binary version  {"Service": "storagenode-updater", "Version": "v1.50.4"}
2022-04-06T23:27:13.747Z        INFO    Version is up to date   {"Service": "storagenode-updater"}
2022-04-06T23:27:13.759Z        INFO    piecestore      downloaded      {"Piece ID": "QIBUEMDZAXDENVH6DLNW2RPOCTMVAHRBK5Z5LCUCUZQLE2FWK7ZA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}
2022-04-06T23:27:13.763Z        INFO    piecestore      downloaded      {"Piece ID": "OZY3VFUJWRVQIAX46ZLZNCVPN4EYQJNGUTSZHENXRQQ6SNPGZZZA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2022-04-06T23:27:13.764Z        INFO    piecestore      upload canceled {"Piece ID": "2NW3REAQJ3HDT7EZUJV5JLGNJR4DKBAFH6QYDWD4LMPCJOEIFODQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Size": 65536}
2022-04-06T23:27:13.772Z        INFO    piecestore      upload canceled {"Piece ID": "SMTBYIGYFAI42WVGRB42WO6DXQQO2NKRIYETR632MLRMBWI3WXAQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 532480}
2022-04-06 23:27:14,038 INFO exited: storagenode (exit status 0; expected)
2022-04-06 23:27:15,087 INFO spawned: 'storagenode' with pid 19014
2022-04-06 23:27:15,088 WARN received SIGQUIT indicating exit request
2022-04-06 23:27:15,096 INFO waiting for processes, storagenode, storagenode-updater to die
2022-04-06T23:27:15.095Z        INFO    Got a signal from the OS: "terminated"
2022-04-06 23:27:15,115 INFO stopped: storagenode-updater (exit status 0)
2022-04-06T23:27:15.252Z        INFO    Configuration loaded    {"Location": "/app/config/config.yaml"}
2022-04-06T23:27:15.316Z        INFO    Operator email  {"Address": "email@me.com"}
2022-04-06T23:27:15.316Z        INFO    Operator wallet {"Address": "0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}
2022-04-06T23:27:16.259Z        INFO    Telemetry enabled       {"instance ID": "1jJuZK7voyKJxCimPUFTkGTZr5iXfRujpwAMF2gVWdZaM42b7W"}
2022-04-06T23:27:16.329Z        INFO    db.migration    Database Version        {"version": 53}
2022-04-06T23:27:16.593Z        INFO    preflight:localtime     start checking local system clock with trusted satellites' system clock.
2022-04-06T23:27:17.202Z        INFO    preflight:localtime     local system clock is in sync with trusted satellites' system clock.
2022-04-06T23:27:17.203Z        INFO    Node 1jJuZK7voyKJxCimPUFTkGTZr5iXfRujpwAMF2gVWdZaM42b7W started
2022-04-06T23:27:17.203Z        INFO    Public server started on [::]:28967
2022-04-06T23:27:17.203Z        INFO    Private server started on 127.0.0.1:7778
2022-04-06T23:27:17.258Z        INFO    piecestore      download started        {"Piece ID": "XN3YMB4TBLXW5CFIHPYAVSUXCOAOT2QONRR5ATQJYHIZD6KFLVVQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR"}
2022-04-06T23:27:17.381Z        INFO    trust   Scheduling next refresh {"after": "6h21m1.033046451s"}
2022-04-06T23:27:17.414Z        INFO    bandwidth       Performing bandwidth usage rollups
2022-04-06T23:27:17.428Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "EC5SVTAZ6UHRSNOKZV3KUVQ3LKXQGSV3FNX3ANABA23BTSZRFACA"}
2022-04-06T23:27:17.455Z        INFO    piecestore      download started        {"Piece ID": "ZCVQ4SPRVQBUNX3FCRHDA5J2U2M3JI5QZGCA67QFA64MPEKIYWSA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT"}
2022-04-06T23:27:17.462Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "XLDUVEKVLAAKDFCK733VOGZNKAEQBLGOOLROXSCMD6KZKAQFZVXQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T23:27:17.471Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "65J4XMZTDCGHT3KPDFXRRMJ2E3TCO5KDQ2DIUGTDKS56HP4L75OA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T23:27:17.473Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "NRRXXFMM4YXM3N2O24GHFPIVJW5K7CDKC5XFVJUVQ6XRACIDBLWA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T23:27:17.474Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "ST5GAIKNYOZNDP7YGWOG56D5KTCI4ZMAKOGYF4K24HZ3BZBGJ3HQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T23:27:17.475Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "63L4C5ELWN452ZPQGEPSLLWGG2YL3LMEMVXBBOUZ6LBSHKCLZG3A", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T23:27:17.477Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "3YT4B4MT7EYV77LNUKVMZI42NDEYBMT6WUOQPZF67NOMSQEJNKSA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T23:27:17.479Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "YKOLTMLLV3WRVKZ2NY5M4EVMYLQ7FRHSSOEI6YOK7PJQCSYAH6PQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T23:27:17.481Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "RDT75ACDUED4C6SU2GHUMQRIMMVQTQXN5YEM4DRXF4RL2PLX5CJQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T23:27:17.483Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "FP2NGQZF7QN4FCCGAEI5TUVAOWLWMJAY7GWLR44FXFDNBAJEK3JA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T23:27:17.485Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "GM6NPGFCO2K6CY4S3EQSAUDUYSHOEBFNYN75Y5KNGMY3EQM3MXQA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-04-06T23:27:17.491Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "PJGKUITBJ6YGN3H4RQ6EKRA77FDVM3XRYGZNMYZP6WRRFBKUHLUA"}
2022-04-06T23:27:17.499Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "XCOJU7FTO7PSW7XZIJRHFKJQETX44ZHGH6BHRZCJOI6Y75IC3GKQ"}
2022-04-06T23:27:17.505Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "XQT4OEVUKJAEXPMEWWZTY4DJXASFGNTVEFVHMFPHPIS4H7FEPDPA"}
2022-04-06T23:27:17.512Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "I7HW6I473SZU5P7UPT5ELX6AV7QSITZFBLZ5K6ADBTAHRHOLNSVQ"}
2022-04-06T23:27:17.517Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "EF52FYDAMQKATELO6LBH2CWWYIF5P5ATDSOMEJN6KDRV6XTZ5FLA"}
2022-04-06T23:27:17.524Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "6JI76VWFH5WZ5S64V2TL77EW4Q7KOQYU5KGE6WFZFMV2E2EED7EQ"}
2022-04-06T23:27:17.533Z        INFO    collector       delete expired  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "YNVRRCLCHWQEKN75PK5HNZBYXLCU6YMROA7E6L2JEAT7PWQP5TQQ"}
2022-04-06T23:27:17.568Z        INFO    collector       collect {"count": 7}
2022-04-06T23:27:17.817Z        INFO    piecestore      upload started  {"Piece ID": "ZNHPEKP3A2JDDY4ROFZ2MDRWQMY3YDTO33NU7P6KBASMKKUJIYIQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 726294004917}
2022-04-06T23:27:17.888Z        INFO    piecestore      downloaded      {"Piece ID": "XN3YMB4TBLXW5CFIHPYAVSUXCOAOT2QONRR5ATQJYHIZD6KFLVVQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR"}
2022-04-06T23:27:17.889Z        INFO    piecestore      downloaded      {"Piece ID": "ZCVQ4SPRVQBUNX3FCRHDA5J2U2M3JI5QZGCA67QFA64MPEKIYWSA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT"}
2022-04-06T23:27:18.004Z        INFO    piecestore      upload started  {"Piece ID": "E67GCXXBQ33ZHBAOUBCSJYOQQKGFFZUB7PFEWBC3VWYSDF7DCAOA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 726294004917}
2022-04-06 23:27:19,006 INFO waiting for processes, storagenode to die
2022-04-06T23:27:19.019Z        INFO    piecestore      download started        {"Piece ID": "QIBUEMDZAXDENVH6DLNW2RPOCTMVAHRBK5Z5LCUCUZQLE2FWK7ZA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}
2022-04-06T23:27:19.171Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "6JL3XHYXU3BCRYRO5DHU2EOJWJTFMVEEL54PKP3C6LC3SNY2ETFA"}
2022-04-06T23:27:20.186Z        INFO    piecestore      uploaded        {"Piece ID": "E67GCXXBQ33ZHBAOUBCSJYOQQKGFFZUB7PFEWBC3VWYSDF7DCAOA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 181504}
2022-04-06T23:27:20.512Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "V2CTY6RVMNBYXAREB3QRST5JQIPN6KERDBG2VL6JPFFZAYNGRGNQ"}
2022-04-06T23:27:20.955Z        INFO    piecestore      download started        {"Piece ID": "6Z36ULSJTJXYJK73J3H43ADBVRRC566LHJNC5OO6T5E33IOF5TRQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2022-04-06T23:27:21.458Z        INFO    piecestore      uploaded        {"Piece ID": "ZNHPEKP3A2JDDY4ROFZ2MDRWQMY3YDTO33NU7P6KBASMKKUJIYIQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 181504}
2022-04-06T23:27:22.060Z        INFO    piecestore      downloaded      {"Piece ID": "6Z36ULSJTJXYJK73J3H43ADBVRRC566LHJNC5OO6T5E33IOF5TRQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2022-04-06 23:27:22,061 INFO waiting for processes, storagenode to die
2022-04-06T23:27:22.060Z        INFO    piecestore      download started        {"Piece ID": "QIBUEMDZAXDENVH6DLNW2RPOCTMVAHRBK5Z5LCUCUZQLE2FWK7ZA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}
2022-04-06T23:27:22.067Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "5PWPAGHH35QJTYAA45YZWFWQDUIHF37Y57PJ7XOVV4FHBJXDNBJQ"}
2022-04-06T23:27:22.505Z        INFO    piecedeleter    delete piece sent to trash      {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "4KW55QWG2PJC267JDIBBLZVJPDFFXXJJD3MHYVVB6WKRV2YC4D2Q"}
2022-04-06T23:27:22.968Z        INFO    piecestore      downloaded      {"Piece ID": "QIBUEMDZAXDENVH6DLNW2RPOCTMVAHRBK5Z5LCUCUZQLE2FWK7ZA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}
2022-04-06T23:27:24.079Z        INFO    piecestore      download started        {"Piece ID": "QIBUEMDZAXDENVH6DLNW2RPOCTMVAHRBK5Z5LCUCUZQLE2FWK7ZA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}
2022-04-06 23:27:25,081 INFO waiting for processes, storagenode to die
2022-04-06T23:27:25.451Z        INFO    piecestore      download started        {"Piece ID": "AN45BLA4DEKTLYHULAPNOWFRCXFUFVZMLQSM3XSWQ3RCDPVXQ56A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2022-04-06 23:27:25,452 WARN killing 'storagenode' (19014) with SIGKILL
2022-04-06 23:27:26,467 INFO stopped: storagenode (terminated by SIGKILL)
2022-04-06 23:27:26,469 INFO stopped: processes (terminated by SIGTERM)

Yes, it looks like the update process can at some point kill both the storagenode and the updater, which in turn stops the container. I just checked, and all 3 of my nodes’ containers now have an uptime of 9 hours, which means the containers all restarted during the update.

--restart unless-stopped is no longer optional, it seems. Though honestly, I think this should be resolved; the container shouldn’t have to stop in order to update.

We could use --restart always instead.
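
For reference, a minimal sketch of a run command with the restart policy and a longer stop timeout applied (the flags are the ones discussed above; the wallet, email, address, storage size, and bind-mount paths are placeholders):

docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp -p 28967:28967/udp \
    -e WALLET="0x0000000000000000000000000000000000000000" \
    -e EMAIL="operator@example.com" \
    -e ADDRESS="node.example.com:28967" \
    -e STORAGE="2TB" \
    --mount type=bind,source=/path/to/identity,destination=/app/identity \
    --mount type=bind,source=/path/to/storage,destination=/app/config \
    --name storagenode storjlabs/storagenode:latest

With unless-stopped the container comes back when the updater kills it but stays down after a deliberate docker stop; --restart always would also bring it back after a manual stop once the Docker daemon restarts.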

I wonder if… yeah, it seems like the new update process didn’t force filewalking…

Not that I can tell, anyway, and I’m pretty sure I’d be able to tell, lol.

Adding some logs. Note: these don’t include the storagenode logs, as those have been redirected to a file.

2022-04-06T23:30:09.129Z	INFO	Downloading versions.	{"Server Address": "https://version.storj.io"}
2022-04-06T23:30:09.511Z	INFO	Current binary version	{"Service": "storagenode", "Version": "v1.50.4"}
2022-04-06T23:30:09.512Z	INFO	Download started.	{"From": "https://github.com/storj/storj/releases/download/v1.52.2/storagenode_linux_amd64.zip", "To": "/tmp/storagenode_linux_amd64.3764899732.zip"}
2022-04-06T23:30:10.257Z	INFO	Download finished.	{"From": "https://github.com/storj/storj/releases/download/v1.52.2/storagenode_linux_amd64.zip", "To": "/tmp/storagenode_linux_amd64.3764899732.zip"}
2022-04-06T23:30:10.274Z	INFO	Restarting service.	{"Service": "storagenode"}
2022-04-06T23:30:10.632Z	INFO	Service restarted successfully.	{"Service": "storagenode"}
2022-04-06T23:30:10.640Z	INFO	Current binary version	{"Service": "storagenode-updater", "Version": "v1.50.4"}
2022-04-06T23:30:10.640Z	INFO	Version is up to date	{"Service": "storagenode-updater"}
2022-04-06 23:30:10,824 INFO exited: storagenode (exit status 0; expected)
2022-04-06 23:30:11,917 INFO spawned: 'storagenode' with pid 22109
2022-04-06 23:30:11,918 WARN received SIGQUIT indicating exit request
2022-04-06 23:30:11,918 INFO waiting for processes, storagenode, storagenode-updater to die
2022-04-06T23:30:11.918Z	INFO	Got a signal from the OS: "terminated"
2022-04-06 23:30:11,920 INFO stopped: storagenode-updater (exit status 0)
2022-04-06 23:30:14,924 INFO waiting for processes, storagenode to die
2022-04-06 23:30:17,926 INFO waiting for processes, storagenode to die
2022-04-06 23:30:20,929 INFO waiting for processes, storagenode to die
2022-04-06 23:30:21,930 WARN killing 'storagenode' (22109) with SIGKILL
2022-04-06 23:30:22,950 INFO stopped: storagenode (terminated by SIGKILL)
2022-04-06 23:30:22,951 INFO stopped: processes (terminated by SIGTERM)
2022-04-06 23:30:24,344 CRIT Supervisor is running as root.  Privileges were not dropped because no user is specified in the config file.  If you intend to run as root, you can set user=root in the config file to avoid this message.
2022-04-06 23:30:24,351 INFO RPC interface 'supervisor' initialized
2022-04-06 23:30:24,351 CRIT Server 'inet_http_server' running without any HTTP authentication checking
2022-04-06 23:30:24,351 INFO supervisord started with pid 1
2022-04-06 23:30:25,353 INFO spawned: 'processes' with pid 9
2022-04-06 23:30:25,356 INFO spawned: 'storagenode' with pid 10
2022-04-06 23:30:25,357 INFO spawned: 'storagenode-updater' with pid 11
2022-04-06T23:30:25.369Z	INFO	Configuration loaded	{"Location": "/app/config/config.yaml"}

It seems to me that by 2022-04-06 23:30:11,917 INFO spawned: 'storagenode' with pid 22109 the update process should be done. So where does the SIGQUIT on the next line come from? Is this the container trying to stop because the storagenode process stopped, maybe?
More importantly, 10 seconds later it gets impatient and sends a SIGKILL, which I believe is standard behavior for Docker, but it somehow seems to ignore the --stop-timeout 300 setting. This is far from ideal, as we’ve seen these interruptions cause database issues in the past.
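
One way to double-check what Docker itself has configured for the container (a sketch, assuming it is named storagenode). Note that the SIGQUIT/SIGKILL lines above appear to come from the supervisor process inside the container (see the “Supervisor is running as root” line), which has its own, much shorter kill timeout, so Docker’s --stop-timeout may not be the setting that matters here:

docker inspect --format '{{.HostConfig.RestartPolicy.Name}} {{.Config.StopTimeout}}' storagenode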

Then at 23:30:24,344 the container has restarted. I don’t think this is intended behavior. It will work most of the time if nodes have the restart policy set to always or unless-stopped, though at some risk to the databases.


The same thing happened again when the updater itself was updated. It seems completely unnecessary to restart the node and trigger the file walker AGAIN just because the updater was updated.
