I would try something like this: restart the RPi, let it boot up completely, then check the free memory using free -h. You could then use the --memory=5G flag in your docker run command, setting the limit according to the available free memory.
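For example, a minimal sketch only: keep your existing mounts, ports and environment variables, adjust the limit to whatever free -h reports, and note the image name assumes the standard storjlabs/storagenode image:
free -h
docker run -d --restart unless-stopped --memory=5G --name storagenode <your existing flags and mounts> storjlabs/storagenode:latest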
Could you please search for OOM in the system logs?
journalctl --grep="OOM"
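If that comes back empty, the kernel messages are worth a look too. Assuming a reasonably recent systemd, --grep accepts a regular expression and -k limits the search to kernel messages; plain dmesg works as a fallback:
journalctl -k --grep="oom|out of memory"
dmesg | grep -i -E "oom|out of memory"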
Hey, so it looks like the Pi did run out of memory a number of times. Here are the logs:
pi@raspberrypi:~ $ journalctl --grep="OOM"
-- Boot 62c47f29026346ed83fd232790484428 --
-- Boot 711b9ab9bdb84635baebd374f6ddfd49 --
-- Boot 0f8dba1cc9294afb8099c93c60c5286e --
-- Boot 33e9808c6651423ebc67b41193783f08 --
-- Boot ad140f62ade643d9b181c98af854fd4c --
-- Boot 5012958c1b7844658b042dd7beab8ab3 --
-- Boot 2b2fc5b6d0fd4a528ff043b7a223beca --
-- Boot 7b29c4c80347415183d8bcce7a43a093 --
-- Boot 880d40f880494225a867b749b154f71f --
-- Boot f9b13e08a3a44ee49cda2a41c2ae2c6c --
-- Boot 880d40f880494225a867b749b154f71f --
-- Boot f9b13e08a3a44ee49cda2a41c2ae2c6c --
-- Boot e8a345022a484d00922853f237da3e12 --
-- Boot 620f36d9a3314727a620d4fbf217756b --
-- Boot c7b17fedffa9451396102726dcad0593 --
-- Boot 8cccd83f42634430b7b1ed5b62ec3a64 --
-- Boot d6203e8c7f54432987507bb916077046 --
-- Boot 32389fe58c9b40e18c3bb76d8120d3dc --
-- Boot 7d77ea002c404c36bee8ebba6c1a098d --
-- Boot 5be3bc12a89d45e19aa0de8bc99239ad --
-- Boot 7d77ea002c404c36bee8ebba6c1a098d --
-- Boot 5be3bc12a89d45e19aa0de8bc99239ad --
-- Boot 63d504ceec9a4978b902508558cbd763 --
-- Boot 1e9b44e533fe4345958c4c4857241344 --
-- Boot 3f789fd2369f41f3b8504048822172b8 --
-- Boot a808b9782d734b1ca259b322c2fc48a0 --
-- Boot 8f3f0494dd7040238d649398c5837751 --
-- Boot 413f9d52d68f42cfaf55f3248c149fce --
-- Boot 8f3f0494dd7040238d649398c5837751 --
However, I am also happy to report that the system has sorted itself out after almost 3 days of instability. I’ve been able to see the dashboard for 4 consecutive hours now.
Actually, nothing was found, so no OOM.
Then it's worth checking the journal around the time when the node was killed, to figure out what caused it.
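For example (hypothetical timestamps and container name, adjust to when your node actually went down):
journalctl --since "2024-07-20 13:30" --until "2024-07-20 14:30"
docker logs --since 1h storagenode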
Hi all,
I have a similar issue. After watchtower installed v1.108.3, the node is stuck (offline) and the dashboard is not reachable any longer. I am using CLI + docker on QNAP. I tried removing and re-adding the storagenode and watchtower containers. Restarting Container Station did not help either. Both containers are running, but nothing seems to happen:
18150K … … … … … 85% 66.5M 0s
[wget download progress continues to 100%]
21250K … … … … … 100% 146M=0.5s
2024-07-23 10:38:02 (43.5 MB/s) - '/tmp/storagenode.zip' saved [21808398/21808398]
2024-07-23 10:38:03,535 INFO Set uid to user 0 succeeded
2024-07-23 10:38:03,552 INFO RPC interface 'supervisor' initialized
2024-07-23 10:38:03,552 INFO supervisord started with pid 1
2024-07-23 10:38:04,555 INFO spawned: 'processes-exit-eventlistener' with pid 61
2024-07-23 10:38:04,557 INFO spawned: 'storagenode' with pid 62
2024-07-23 10:38:04,560 INFO spawned: 'storagenode-updater' with pid 63
2024-07-23T10:38:04Z INFO Configuration loaded {"Process": "storagenode-updater", "Location": "/app/config/config.yaml"}
2024-07-23T10:38:04Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "operator.wallet"}
2024-07-23T10:38:04Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "operator.wallet-features"}
2024-07-23T10:38:04Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "contact.external-address"}
2024-07-23T10:38:04Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "storage.allocated-disk-space"}
2024-07-23T10:38:04Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "server.private-address"}
2024-07-23T10:38:04Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "storage.allocated-bandwidth"}
2024-07-23T10:38:04Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "server.address"}
2024-07-23T10:38:04Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "operator.email"}
2024-07-23T10:38:04Z INFO Anonymized tracing enabled {"Process": "storagenode-updater"}
2024-07-23T10:38:04Z INFO Running on version {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.108.3"}
2024-07-23T10:38:04Z INFO Downloading versions. {"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}
2024-07-23T10:38:04Z INFO Configuration loaded {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2024-07-23T10:38:04Z INFO Anonymized tracing enabled {"Process": "storagenode"}
2024-07-23T10:38:04Z INFO Operator email {"Process": "storagenode", "Address": "myemail"}
2024-07-23T10:38:04Z INFO Operator wallet {"Process": "storagenode", "Address": "mywallet"}
2024-07-23T10:38:05Z INFO Current binary version {"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.108.3"}
2024-07-23T10:38:05Z INFO Version is up to date {"Process": "storagenode-updater", "Service": "storagenode"}
2024-07-23T10:38:05Z INFO Current binary version {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.108.3"}
2024-07-23T10:38:05Z INFO Version is up to date {"Process": "storagenode-updater", "Service": "storagenode-updater"}
2024-07-23 10:38:06,010 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-07-23 10:38:06,037 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-07-23 10:38:06,037 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-07-23T10:38:06Z INFO server kernel support for server-side tcp fast open remains disabled. {"Process": "storagenode"}
2024-07-23T10:38:06Z INFO server enable with: sysctl -w net.ipv4.tcp_fastopen=3 {"Process": "storagenode"}
2024-07-23T10:38:07Z INFO Telemetry enabled {"Process": "storagenode", "instance ID": "12YFV957NNxk78EuzevSMWVvUYBEGtZ5McS3BCKwQjaiYeXcCVo"}
2024-07-23T10:38:07Z INFO Event collection enabled {"Process": "storagenode", "instance ID": "12YFV957NNxk78EuzevSMWVvUYBEGtZ5McS3BCKwQjaiYeXcCVo"}
2024-07-23T10:38:07Z INFO db.migration.60 Overhaul piece_expirations {"Process": "storagenode"}
2024-07-23T10:53:04Z INFO Downloading versions. {"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}
2024-07-23T10:53:05Z INFO Current binary version {"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.108.3"}
2024-07-23T10:53:05Z INFO Version is up to date {"Process": "storagenode-updater", "Service": "storagenode"}
2024-07-23T10:53:05Z INFO Current binary version {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.108.3"}
2024-07-23T10:53:05Z INFO Version is up to date {"Process": "storagenode-updater", "Service": "storagenode-updater"}
I have tried re-adding the storagenode docker config again. It is running the same v1.108.3 install process shown above again. It seems that it cannot complete the process properly.
Just wanted to report back that the node is back online. It had more than 5 hours of downtime, then recovered without any obvious changes. Not sure the update process is supposed to do that; if yes, it is a bit confusing not knowing what is going on.
My guess would be a database conversion to the newer version. This can take hours on slow systems.
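If you want to confirm that this is what it is busy with, you can usually follow the container log and filter for the migration messages (container name storagenode assumed):
docker logs -f storagenode 2>&1 | grep -i migration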
Hmmm. I am running an AMD Ryzen™ Embedded V1500B 4-core/8-thread 2.2 GHz processor with 8 GB DDR4. That shouldn't be too bad for running a small database. For future update processes that may lead to longer downtimes, I propose including more obvious log messages, e.g. "Database migration started. This may take up to a few hours. Please wait …" This would keep node operators like me from interfering, potentially interrupting the migration unintentionally, or, worse, corrupting data while it is being processed.
Yeah, it stopped working again… it keeps going offline after about 1 hour every time; should I make a new thread, or just keep commenting here? In the meantime, here is a big chunk of logs:
The entire Storj dir is chmod -R 777, so it is NOT a permission issue; additionally, it is connected via Ethernet to 500mbps down and 100mb down, and the drives do not show signs of failure (two 8 TB Samsung SSDs; movies saved on them get watched regularly).
(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 205995\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:265\n\ngoroutine 205996\n\tstorj.io/drpc/drpcmanager.(*Manager).manageStreams:319\n\ngoroutine 206072\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206016\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206066\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 
206298\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206077\n\tstorj.io/drpc/drpcmanager.(*Manager).manageStreams:319\n\ngoroutine 206075\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206120\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206142\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 
206271\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func8:798\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:859\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206251\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206286\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206275\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func8:798\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:859\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 
206281\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206341\n\tinternal/poll.runtime_pollWait:345\n\tinternal/poll.(*pollDesc).wait:84\n\tinternal/poll.(*pollDesc).waitRead:89\n\tinternal/poll.(*FD).Read:164\n\tnet.(*netFD).Read:55\n\tnet.(*conn).Read:185\n\tio.ReadAtLeast:335\n\tio.ReadFull:354\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:209\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229\n\ngoroutine 206342\n\tstorj.io/drpc/drpcmanager.(*Manager).manageStreams:319\n\ngoroutine 206358\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206435\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func8:798\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:859\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 
206317\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206459\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206551\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func8:798\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:859\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206495\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206669\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 
206710\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206717\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 206718\n\tinternal/poll.runtime_pollWait:345\n\tinternal/poll.(*pollDesc).wait:84\n\tinternal/poll.(*pollDesc).waitRead:89\n\tinternal/poll.(*FD).Read:164\n\tnet.(*netFD).Read:55\n\tnet.(*conn).Read:185\n\tio.ReadAtLeast:335\n\tio.ReadFull:354\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:209\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229\n\ngoroutine 206719\n\tstorj.io/drpc/drpcmanager.(*Manager).manageStreams:319\n\ngoroutine 206742\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 206743\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 206725\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 206670\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 206672\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 
206726\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 206727\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 206728\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 207077\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 207599\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 207635\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 
207665\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func8:798\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:859\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 207604\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 207472\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 207504\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 207500\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 207516\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 207518\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 
207619\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 207466\n\tsync.runtime_notifyListWait:569\n\tsync.(*Cond).Wait:70\n\tstorj.io/common/sync2.(*Throttle).ConsumeOrWait:50\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func7:780\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\ngoroutine 207666\n\tinternal/poll.runtime_pollWait:345\n\tinternal/poll.(*pollDesc).wait:84\n\tinternal/poll.(*pollDesc).waitRead:89\n\tinternal/poll.(*FD).Read:164\n\tnet.(*netFD).Read:55\n\tnet.(*conn).Read:185\n\tio.ReadAtLeast:335\n\tio.ReadFull:354\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:209\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229\n\ngoroutine 207693\n\tinternal/poll.runtime_pollWait:345\n\tinternal/poll.(*pollDesc).wait:84\n\tinternal/poll.(*pollDesc).waitRead:89\n\tinternal/poll.(*FD).Read:164\n\tnet.(*netFD).Read:55\n\tnet.(*conn).Read:185\n\tio.ReadAtLeast:335\n\tio.ReadFull:354\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:209\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229\n\ngoroutine 207694\n\tstorj.io/drpc/drpcmanager.(*Manager).manageStreams:319\n\ngoroutine 207674\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func8:798\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:859\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 207699\n\tinternal/poll.runtime_pollWait:345\n\tinternal/poll.(*pollDesc).wait:84\n\tinternal/poll.(*pollDesc).waitRead:89\n\tinternal/poll.(*FD).Read:164\n\tnet.(*netFD).Read:55\n\tnet.(*conn).Read:185\n\tio.ReadAtLeast:335\n\tio.ReadFull:354\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:209\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229\n\ngoroutine 
207684\n\tsync.runtime_SemacquireMutex:77\n\tsync.(*Mutex).lockSlow:171\n\tsync.(*Mutex).Lock:90\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:94\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35\n\ngoroutine 207700\n\tstorj.io/drpc/drpcmanager.(*Manager).manageStreams:319\n"}
2024-07-26T01:20:16Z INFO Downloading versions. {"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}
2024-07-26T01:20:16Z INFO Current binary version {"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.108.3"}
2024-07-26T01:20:16Z INFO Version is up to date {"Process": "storagenode-updater", "Service": "storagenode"}
2024-07-26T01:20:16Z INFO Current binary version {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.108.3"}
2024-07-26T01:20:16Z INFO Version is up to date {"Process": "storagenode-updater", "Service": "storagenode-updater"}
You have both listed as "down", which is why it failed on the upload request.
Does it show any more errors before what you have posted? That would help get a better understanding.
You have a corrupted file.
If it restarts at about the same time every time… it's in a loop, where it hits the same corrupted file every time.
fix dat, good luck
.25 cents
lol, nice one; here's a better-formatted version of the JSON log, with a lot of upload failures prior to the "getting latest version" loop.
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "52S4Q26W3AVZVFUVC32RUNJGSXOBJEW2WB262YKNVKZOXFEAEZ6Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.205.242:33618", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "DRCEMR4ZDJ2G5FVKMFYUKHN4A2XGAUZ5E3ZSBLBXIGJRUXL6QUWA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "109.61.92.80:48678", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "TGNLO6Z6KPIFN335O5FNBRKQCI6O7ZN62DPW33V4ZKOTJI7RYHTA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.201.210:45936", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "64QOFL2RZXTSMDZVYFHTGYP54BBQLQ3KRHYYTE5OFR7P6WSAZZBQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "5.161.246.105:52870", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "Y3DGDUY2KLYORO5SETK3ICQ5UHE73ZLUUMZZSO4IRDLP5PLSK6YQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.76:40270", "Size": 2031616}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "NL3EXZ7CQSX7OVT3DHDEUNRMQGQMQPDOHLQ5WLG637BH2XLQBNJA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "5.161.111.89:34536", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "Q2AUWCHZCQX24M2O3XC5QC2HW4WTVY5JO4WFYRSITQAS25G6CDPA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.205.238:46006", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "YCMSFOZNPFKWZEGWX3YPXC7X6QYCYY6M5PHRQRNPFZHMRN7V3XUA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.213.33:48424", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "R7PY44N56KKIWVJZ4NGKHWWB2P3DUG6ZHXH2FP2KECL3VJGD2OYA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "79.127.226.100:51464", "Size": 2162688}
2024-07-26T06:09:16Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "6OCDBTWL6AEWVZPCUWZMVBJHAUBNWUW2KVCFNNQ4EBYMBS2R2SUA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.234:44546", "Size": 2818048}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "V3HXGDH7MASGXJVFNJFCLWUZ76XJ2FIOQVX23TRLQJEH6FDZCGRA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "199.102.71.64:39368", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "TMKONQQ2W3JFA66TOWUAWJCH2G6CJDJBETVJ5GAPEGYRSKUTT32A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.205.230:40470", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "PZBV365RUWDB6VRDO4ZEMHLOTDIPPHD2HSFNDTGD7DKUPXNB36UQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "79.127.226.99:38994", "Size": 1507328}
2024-07-26T06:09:16Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "L4XA2SVH52XSZ2GOG4SJM2EJAYQOMHKZSQAPCXUKNHEZLPJF2SXQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.219.42:44874", "Size": 1769472}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "KI5IU5K6QC6FMYBGBJOXUFLPSKVOMBWT6GNRUV2WG4XR4FLJM4OA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.205.234:59498", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "TME5XTVLCMEMCUNXRWJA2OXIKRT6SBMPAGVRHNQNVF46PGWZIZPQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.205.242:37966", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "TM2KNUKS3BU3JQBWEUHE4OPVSDT4LSMQCGDDVBQJ3J2CDQTAQX7A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "5.161.236.118:54398", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "RD4DAVXN6QEUOOYT62Z42XEHZYSHVWBBYPTPBG22BYSQVC44O3ZA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "199.102.71.54:33644", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "RWDFXD5IYZMOBMLZLNDMVU35KMBU2XWV3F44AF2PEXLRVKX2GFLQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.69:44016", "Size": 327680}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "LYVJY5HRX4BBF6LHEGJDI4RNJBDPPCUWHKOKDVMWLMZWAK66ZSKA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.219.43:41228", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "4Z6O7LKABJHX4AGZMACLCGX3KHCB7KXT3ZAV3APNQ47ZO7CGMI5A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "109.61.92.72:45848", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "FRL7TMRAHGRB37FHQFBF5I3BL4XICE7TOLJY26BXF7E3N57ZYGWA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.219.35:49626", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "VLCL7J4FKTC7SKQK7PAW674GTUT2D6KZT6ZXQSZQTONTGIDKVNCA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.219.44:40624", "Size": 1245184}
2024-07-26T06:09:16Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "CGIETDDIBQATARD2NGHHGQQJPTRPXWVCL5QQPLJBFTVV3WSRNHVQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.201.209:43690", "Size": 1769472}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "5NW3MEOECVQK7IRTZ4QX6D664X5UTRR6AGZ55ZETU2FFN5TBERYQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.205.226:45122", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "PQCZCBBJIXTHAUTULUWYKNJYQXWOOCVDA7AGSOOA757D4PYQZMWA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.227:34214", "Size": 327680}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "2LXY6TXSPGMLT2LQ62RMGMCDAZ436F2HUSEIDTRFCG6YKINDP63A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.219.45:45756", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z INFO piecestore upload canceled (race lost or node shutdown) {"Process": "storagenode", "Piece ID": "MJIIIVR4LBGJFYZLVOYSOHKOZ2MYARKVOVZSOZGK2N5C45Y5C2QA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.219.46:51724"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "IW2VKY2OZHFLO2YVBGUB6O4CQHMW6RP24AZVBOMEQFWEZC6TSOYA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "199.102.71.64:36988", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z INFO piecestore upload canceled (race lost or node shutdown) {"Process": "storagenode", "Piece ID": "YKCED6KAXFPI64BITA3VCRKHQ6WSAHWEDYEVVYRJMSBMRKV7NIKA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "199.102.71.22:48110"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "4DBKMKHTZBK3UD6XOOJD33AIBRMBCWR6TDA7TU3BBAJE23QD7RGA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.205.235:53258", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z ERROR piecestore upload failed {"Process": "storagenode", "Piece ID": "IM6WMD752EZOEV74JVQNCJRBEOPPJ3WF33XMVR3BCYLNV5JTL7XQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.213.34:50262", "Size": 0, "error": "order: grace period passed for order limit", "errorVerbose": "order: grace period passed for order limit\n\tstorj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue:103\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder:905\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:418\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z DEBUG piecestore upload failed {"Process": "storagenode", "Piece ID": "YKCED6KAXFPI64BITA3VCRKHQ6WSAHWEDYEVVYRJMSBMRKV7NIKA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "199.102.71.22:48110", "Size": 12544, "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload.func6:526\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:564\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:16Z DEBUG piecestore upload failed {"Process": "storagenode", "Piece ID": "MJIIIVR4LBGJFYZLVOYSOHKOZ2MYARKVOVZSOZGK2N5C45Y5C2QA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.219.46:51724", "Size": 18176, "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload.func6:526\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:535\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "B23TNOJNLWBTGR4SXATY3J2Z2RCGEELV4LOINSSPKY2F5P2WV3OQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.232:45002", "Size": 589824}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "OZZPBMEE6NSFMFCQTSBRTGQUTFIDMULB2DJMV3Q55GKYLAHRWETA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.83:49920", "Size": 1245184}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "DW6DIED2MEVE7HJUYIBMTD4JGIIVXTFJ5VUL7PBPTTOSDCSSLSWQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.229:53666", "Size": 1638400}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "OA437TWY7EO74ZFWYUIDJNCSYD2GMN4S3GSXXL4YUVZ7H2AIIFTQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.80:45502", "Size": 983040}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "BF7XQKNVUOL7LLOEDA7GZZJHAYRD3IOYIG4UEBPGHNK7UMCI3BKA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.239:41544", "Size": 1376256}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "CCYLT66BYYQRBSLIDKG6OPK4UMA5VO7G2PGBRREF2T2MVJOGNDNA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.219.34:37096", "Size": 1507328}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "255GKW3WACKNQZM4TOPCB3TU6RJKT3GA5YQHHL4MOOIYS7S4YYQA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.242:44374", "Size": 1769472}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "QWKYMRABFONRLZPUZ6IYDOHBWALRCT3S4TJ5URVHCXSDXHYDGZOA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.219.45:44228", "Size": 196608}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "2VQWLXZUHI5V63Z2R3KYQEFCGQAJZHAD74KKAX3U5YCHCTEDZLQQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.66:43788", "Size": 196608}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "YZAUNKAQY72VEDTGHKAKKCLJSUGWIDOQQBVY6ZVF3SLPYO3A7X7Q", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.84:52324", "Size": 196608}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "O6UNCYF5QZBEM5PQ47X3YGWVRCY5W55EMBDAF27LZ5HVNUCBYE5A", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "109.61.92.79:34198", "Size": 2424832}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "GL3BR3OICN7JNZGD75Q66QMRR4DYLAZZ6PIXJ6XL6RV3PY4UJSCA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.201.213:50630", "Size": 1769472}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "DS6XQHWGRWTXLE24C3VAIXL6CDA3U73J3CDJDTFM5EUP2LQNAMAQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.219.46:35850", "Size": 1245184}
2024-07-26T06:09:36Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "K57WHNS6IWJ36D63Y32DEBNBHT77ZQ6FO5S2UJSW5VM7N6L3SBBA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "79.127.205.240:41582", "Size": 1245184}
2024-07-26T06:20:16Z INFO Downloading versions. {"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}
2024-07-26T06:20:16Z INFO Current binary version {"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.108.3"}
2024-07-26T06:20:16Z INFO Version is up to date {"Process": "storagenode-updater", "Servi
Do you have any suggestions as far as confirming/potentially solving this? With my uptime already down the drain, is there any potential “consequence” for deleting Storj files from the storage device?
Just checked via the "date" command on the Pi; it is 3 seconds out of sync with time-time.net. Regardless, I installed ntp and made sure it got a more accurate time, then restarted the docker container. I'll see what happens.
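Roughly what I ran, for reference (a sketch, assuming Raspberry Pi OS with systemd; the container name is whatever your docker run command used):
sudo apt install ntp          # classic NTP daemon; alternatively: sudo timedatectl set-ntp true
timedatectl status            # should show "System clock synchronized: yes" once it settles
docker restart storagenode    # container name assumed; restart so the node sees the corrected clock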
Edit: Node still goes offline after some time. I have gotten all of the Storj logs from the initial docker run command (previously provided in this thread) and uploaded them to MediaFire, since the character limit for replies is relatively low. Storj docker container log text link:
I downloaded your log file.
Searched the log:
counted 407 “upload started”
counted 2404 “upload failed”
counted only 1 “uploaded”
counted 89 “download started”
counted zero “downloaded”
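For reference, those counts are just pattern matches against the log file, something like this (log file name assumed):
grep -c "upload started"   node.log
grep -c "upload failed"    node.log
grep -c "uploaded"         node.log
grep -c "download started" node.log
grep -c "downloaded"       node.log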
So it's failing to complete almost all of the upload and download requests for some reason. Also, there is a period in the logs where it seems to have gone quiet for about 2 hours, and then all of a sudden it produced thousands of log lines about "grace period passed for order limit".
How many TB of data does this node have?
I don't have any idea what the root problem is, but I would start by trying to make things easier for your node and see how it responds. To test this, personally, I would:
disable the startup file walker:
storage2.piece-scan-on-startup: false
and temporarily set the node storage capacity to a low value (500 GB) so that your node will be considered full, preventing any customer upload attempts.
You will probably keep getting those "unable to delete piece" messages until the cleanup task is able to work its way through the entire list; that is supposedly when the process will finally update the expired-pieces database.
You won’t be getting uploads but you should be able to monitor the logs for downloads. See if downloads begin to finish successfully.
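A minimal sketch of both changes, assuming you edit the config.yaml the container uses and then restart the node:
# disable the used-space scan on startup
storage2.piece-scan-on-startup: false
# temporarily advertise a small capacity so the node is treated as full
storage.allocated-disk-space: 500.00 GB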
I took a screenshot of the dashboard last time it was up for ~ an hour before going offline, and these are the stats (screenshot attached for reference):
Used: 8.73TB
Free: 0.71TB
Trash: 556.81GB
Overused: 0B
I'll look into disabling the startup file walker you referenced, but just to confirm, this should be in the config.yaml file? For context, I am pasting the unmodified config.yaml; I've noticed that some of the settings set to 1 hr coincide with the node going offline after ~ an hour of being online (or at least, of being able to see the dashboard).
# how frequently bandwidth usage rollups are calculated
# bandwidth.interval: 1h0m0s
# how frequently expired pieces are collected
# collector.interval: 1h0m0s
# use color in user interface
# color: false
# server address of the api gateway and frontend app
console.address: 0.0.0.0:14002
# path to static resources
# console.static-dir: ""
# the public address of the node, useful for nodes behind NAT
contact.external-address: ""
# how frequently the node contact chore should run
# contact.interval: 1h0m0s
# protobuf serialized signed node tags in hex (base64) format
# contact.tags: ""
# Maximum Database Connection Lifetime, -1ns means the stdlib default
# db.conn_max_lifetime: 30m0s
# Maximum Amount of Idle Database connections, -1 means the stdlib default
# db.max_idle_conns: 1
# Maximum Amount of Open Database connections, -1 means the stdlib default
# db.max_open_conns: 5
# address to listen on for debug endpoints
# debug.addr: 127.0.0.1:0
# If set, a path to write a process trace SVG to
# debug.trace-out: ""
# open config in default editor
# edit-conf: false
# in-memory buffer for uploads
# filestore.write-buffer-size: 128.0 KiB
# how often to run the chore to check for satellites for the node to exit.
# graceful-exit.chore-interval: 1m0s
# the minimum acceptable bytes that an exiting node can transfer per second to the new node
# graceful-exit.min-bytes-per-second: 5.00 KB
# the minimum duration for downloading a piece from storage nodes before timing out
# graceful-exit.min-download-timeout: 2m0s
# number of concurrent transfers per graceful exit worker
# graceful-exit.num-concurrent-transfers: 5
# number of workers to handle satellite exits
# graceful-exit.num-workers: 4
# Enable additional details about the satellite connections via the HTTP healthcheck.
healthcheck.details: false
# Provide health endpoint (including suspension/audit failures) on main public port, but HTTP protocol.
healthcheck.enabled: true
# path to the certificate chain for this identity
identity.cert-path: identity/identity.cert
# path to the private key for this identity
identity.key-path: identity/identity.key
# if true, log function filename and line number
# log.caller: false
# if true, set logging to development mode
# log.development: false
# configures log encoding. can either be 'console', 'json', 'pretty', or 'gcloudlogging'.
# log.encoding: ""
# the minimum log level to log
log.level: debug
# can be stdout, stderr, or a filename
# log.output: stderr
# if true, log stack traces
# log.stack: false
# address(es) to send telemetry to (comma-separated)
# metrics.addr: collectora.storj.io:9000
# application name for telemetry identification. Ignored for certain applications.
# metrics.app: storagenode
# application suffix. Ignored for certain applications.
# metrics.app-suffix: -release
# address(es) to send telemetry to (comma-separated)
# metrics.event-addr: eventkitd.datasci.storj.io:9002
# instance id prefix
# metrics.instance-prefix: ""
# how frequently to send up telemetry. Ignored for certain applications.
# metrics.interval: 1m0s
# maximum duration to wait before requesting data
# nodestats.max-sleep: 5m0s
# how often to sync reputation
# nodestats.reputation-sync: 4h0m0s
# how often to sync storage
# nodestats.storage-sync: 12h0m0s
# operator email address
operator.email: ""
# operator wallet address
operator.wallet: ""
# operator wallet features
operator.wallet-features: ""
# move pieces to trash upon deletion. Warning: if set to false, you risk disqualification for failed audits if a satellite database is restored from backup.
# pieces.delete-to-trash: true
# run garbage collection and used-space calculation filewalkers as a separate subprocess with lower IO priority
# pieces.enable-lazy-filewalker: true
# file preallocated for uploading
# pieces.write-prealloc-size: 4.0 MiB
# whether or not preflight check for database is enabled.
# preflight.database-check: true
# whether or not preflight check for local system clock is enabled on the satellite side. When disabling this feature, your storagenode may not setup correctly.
# preflight.local-time-check: true
# how many concurrent retain requests can be processed at the same time.
# retain.concurrency: 5
# allows for small differences in the satellite and storagenode clocks
# retain.max-time-skew: 72h0m0s
# allows configuration to enable, disable, or test retain requests from the satellite. Options: (disabled/enabled/debug)
# retain.status: enabled
# public address to listen on
server.address: :28967
# whether to debounce incoming messages
# server.debouncing-enabled: true
# if true, client leaves may contain the most recent certificate revocation for the current certificate
# server.extensions.revocation: true
# if true, client leaves must contain a valid "signed certificate extension" (NB: verified against certs in the peer ca whitelist; i.e. if true, a whitelist must be provided)
# server.extensions.whitelist-signed-leaf: false
# path to the CA cert whitelist (peer identities must be signed by one these to be verified). this will override the default peer whitelist
# server.peer-ca-whitelist-path: ""
# identity version(s) the server will be allowed to talk to
# server.peer-id-versions: latest
# private address to listen on
server.private-address: 127.0.0.1:7778
# url for revocation database (e.g. bolt://some.db OR redis://127.0.0.1:6378?db=2&password=abc123)
# server.revocation-dburl: bolt://config/revocations.db
# enable support for tcp fast open
# server.tcp-fast-open: true
# the size of the tcp fast open queue
# server.tcp-fast-open-queue: 256
# if true, uses peer ca whitelist checking
# server.use-peer-ca-whitelist: true
# total allocated bandwidth in bytes (deprecated)
storage.allocated-bandwidth: 0 B
# total allocated disk space in bytes
storage.allocated-disk-space: 2.00 TB
# how frequently Kademlia bucket should be refreshed with node stats
# storage.k-bucket-refresh-interval: 1h0m0s
# path to store data in
# storage.path: config/storage
# a comma-separated list of approved satellite node urls (unused)
# storage.whitelisted-satellites: ""
# how often the space used cache is synced to persistent storage
# storage2.cache-sync-interval: 1h0m0s
# directory to store databases. if empty, uses data path
# storage2.database-dir: ""
# size of the piece delete queue
# storage2.delete-queue-size: 10000
# how many piece delete workers
# storage2.delete-workers: 1
# how many workers to use to check if satellite pieces exists
# storage2.exists-check-workers: 5
# how soon before expiration date should things be considered expired
# storage2.expiration-grace-period: 48h0m0s
# how many concurrent requests are allowed, before uploads are rejected. 0 represents unlimited.
# storage2.max-concurrent-requests: 0
# amount of memory allowed for used serials store - once surpassed, serials will be dropped at random
# storage2.max-used-serials-size: 1.00 MB
# a client upload speed should not be lower than MinUploadSpeed in bytes-per-second (E.g: 1Mb), otherwise, it will be flagged as slow-connection and potentially be closed
# storage2.min-upload-speed: 0 B
# if the portion defined by the total number of alive connection per MaxConcurrentRequest reaches this threshold, a slow upload client will no longer be monitored and flagged
# storage2.min-upload-speed-congestion-threshold: 0.8
# if MinUploadSpeed is configured, after a period of time after the client initiated the upload, the server will flag unusually slow upload client
# storage2.min-upload-speed-grace-duration: 10s
# how frequently Kademlia bucket should be refreshed with node stats
# storage2.monitor.interval: 1h0m0s
# how much bandwidth a node at minimum has to advertise (deprecated)
# storage2.monitor.minimum-bandwidth: 0 B
# how much disk space a node at minimum has to advertise
# storage2.monitor.minimum-disk-space: 500.00 GB
# how frequently to verify the location and readability of the storage directory
# storage2.monitor.verify-dir-readable-interval: 1m0s
# how long to wait for a storage directory readability verification to complete
# storage2.monitor.verify-dir-readable-timeout: 1m0s
# if the storage directory verification check fails, log a warning instead of killing the node
# storage2.monitor.verify-dir-warn-only: false
# how frequently to verify writability of storage directory
# storage2.monitor.verify-dir-writable-interval: 5m0s
# how long to wait for a storage directory writability verification to complete
# storage2.monitor.verify-dir-writable-timeout: 1m0s
# how long after OrderLimit creation date are OrderLimits no longer accepted
# storage2.order-limit-grace-period: 1h0m0s
# length of time to archive orders before deletion
# storage2.orders.archive-ttl: 168h0m0s
# duration between archive cleanups
# storage2.orders.cleanup-interval: 5m0s
# maximum duration to wait before trying to send orders
# storage2.orders.max-sleep: 30s
# path to store order limit files in
# storage2.orders.path: config/orders
# timeout for dialing satellite during sending orders
# storage2.orders.sender-dial-timeout: 1m0s
# duration between sending
# storage2.orders.sender-interval: 1h0m0s
# timeout for sending
# storage2.orders.sender-timeout: 1h0m0s
# if set to true, all pieces disk usage is recalculated on startup
# storage2.piece-scan-on-startup: true
# allows for small differences in the satellite and storagenode clocks
# storage2.retain-time-buffer: 48h0m0s
# how long to spend waiting for a stream operation before canceling
# storage2.stream-operation-timeout: 30m0s
# file path where trust lists should be cached
# storage2.trust.cache-path: config/trust-cache.json
# list of trust exclusions
# storage2.trust.exclusions: ""
# how often the trust pool should be refreshed
# storage2.trust.refresh-interval: 6h0m0s
# list of trust sources
# storage2.trust.sources: https://www.storj.io/dcs-satellites
# address for jaeger agent
# tracing.agent-addr: agent.tracing.datasci.storj.io:5775
# application name for tracing identification
# tracing.app: storagenode
# application suffix
# tracing.app-suffix: -release
# buffer size for collector batch packet size
# tracing.buffer-size: 0
# whether tracing collector is enabled
# tracing.enabled: true
# how frequently to flush traces to tracing agent
# tracing.interval: 0s
# buffer size for collector queue size
# tracing.queue-size: 0
# how frequent to sample traces
# tracing.sample: 0
# Interval to check the version
# version.check-interval: 15m0s
# Request timeout for version checks
# version.request-timeout: 1m0s
# server address to check its version against
version.server-address: https://version.storj.io
Also, it is my understanding that the # symbol before a line in config files like this comments that line out entirely; is this not true?
Edit 1: OK, so I've noticed there is a discrepancy between the 10 TB storage size in my docker start command and the config.yaml (it says 2 TB); I am going to manually change storage.allocated-disk-space in config.yaml to 10.00 TB and see if that is the issue.
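For reference, the corrected line in config.yaml would then read (using the same format the file already uses, and noting the key is spelled with two l's):
storage.allocated-disk-space: 10.00 TB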
Yes, the # comments out the line. The config you posted has a line that says: # storage2.piece-scan-on-startup: true
You can remove the # symbol and the space after it and change true to false, so that the line becomes:
storage2.piece-scan-on-startup: false
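If you prefer to do it from the shell, something like this should work (a sketch, assuming config.yaml is in the current directory and the container is named storagenode):
sed -i 's/^# storage2.piece-scan-on-startup: true/storage2.piece-scan-on-startup: false/' config.yaml
docker restart storagenode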