Repeating download error in log

An example of a repeating error is below. How might I fix a signature verification error?

2021-04-04T06:14:38.194Z INFO piecestore download started {"Piece ID": "YA2DOV36WYG7TZTFQ3XV76TDZ7XNW7RAJT72OP4P7FIV6A4NQ3LQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR"}

2021-04-04T06:14:38.195Z ERROR piecestore download failed {"Piece ID": "YA2DOV36WYG7TZTFQ3XV76TDZ7XNW7RAJT72OP4P7FIV6A4NQ3LQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "error": "untrusted: invalid order limit signature: signature verification error: signature is not valid", "errorVerbose": "untrusted: invalid order limit signature: signature verification error: signature is not valid…

Hello @systems.reply,
Welcome to the forum!

This is the long-tail cut: the downloader finished quickly because it had very fast nodes around, and it canceled the remaining transfers without sending any message to the rejected nodes. Your node caught the cancellation right at the start of the handshake, which is why this misleading error shows up.

In short, you don't need to fix anything; this behavior is normal (though I agree the message is confusing).
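
To gauge how often this happens relative to successful transfers, a rough check against the container's log works (this assumes the container is named storagenode and that logs go to Docker's default log driver):

sudo docker logs storagenode 2>&1 | grep -c "download failed"
sudo docker logs storagenode 2>&1 | grep -c "downloaded"
# show only failures other than the signature message discussed above
sudo docker logs storagenode 2>&1 | grep "download failed" | grep -v "signature is not valid"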

Thank you for your warm welcome. I feel as if we met a long time ago as I’ve been reading your responses for most of the last two years. Thanks for your continued contributions to this community.

Something needs to be fixed on this node, however, because it just got suspended. I just went to the log, and I’m still getting the same message. Here’s some background, and please be kind as some of the decisions involved were made at 1:30AM. :grin:

I have multiple nodes on the same machine. I was cleaning up and backing up the .local/…/identity folder, and I changed the name of the folder containing the credentials for this node, as well as the mount point for the drive containing the node. When I grabbed the docker start command from my nano history, I saw that my "--mount type=bind,source=" option had been pointing to the wrong mount point before I stopped it, which means it had been that way for possibly months.

I then corrected the mount location, but the log showed preflight errors for reputation.db and heldamount.db. There were versions of these two databases at the old mount point, so after the sqlite repair instructions I found in some posts here had no effect, I copied those copies into the node's /storage folder, thinking the databases at the old mount point might have been the ones used in the prior run state. I did back up all the databases before trying any of this, however, so if the solution means going back to that state, I can do it.
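
For what it's worth, which host path the container is actually bind-mounting can be confirmed with docker inspect (assuming the container is named storagenode):

sudo docker inspect -f '{{ json .Mounts }}' storagenode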

Please advise. Thanks.

Sounds like a path to disqualification if you have now changed the mount point.
You need to move all data from the wrong location to the correct one. You will likely lose some statistics, but you should be fine.
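
If a lot of data ended up at the wrong location, something like the following can consolidate it (a rough sketch; /mnt/old/storage and /mnt/storj/storage are placeholder paths for your actual old and new storage directories, and the node should be stopped first):

sudo docker stop -t 300 storagenode
# copy everything from the old location into the correct one, keeping
# permissions and not overwriting files that are newer at the destination
sudo rsync -au --progress /mnt/old/storage/ /mnt/storj/storage/
# dry run afterwards to confirm nothing was left behind
sudo rsync -aun --itemize-changes /mnt/old/storage/ /mnt/storj/storage/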

What is the current error in the logs?

2021-04-04T12:56:39.070Z ERROR piecestore download failed {“Piece ID”: “5FSNV3SLMPJOTB22J3OSX5SGLKWXGM7VSY5QTDL2T766YRAW33QQ”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “GET_AUDIT”, “error”: “untrusted: invalid order limit signature: signature verification error: signature is not valid”, “errorVerbose”: “untrusted: invalid order limit signature: signature verification error: signature is not valid\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimitSignature:145\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:62\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:467\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:1004\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:56\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}

Agreed. I would have expected the disqualification to have happened earlier. I’m surprised that the container ran while pointing to the wrong mountpoint. My problems started when I tried to correct it.

I rolled back heldamount.db and reputation.db to the local versions and got the following messages:

Error: Error during preflight check for storagenode databases: storage node preflight database error: heldamount: expected schema does not match actual: ...

and

Error: Error during preflight check for storagenode databases: storage node preflight database error: reputation: expected schema does not match actual: ...
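
For reference, the schema the preflight check objects to can be inspected with the sqlite3 CLI (assuming it is installed on the host and the node is stopped; the path is an example, point it at your actual storage directory):

# dump the schema of the database the preflight check complains about
sqlite3 /mnt/storj/storage/reputation.db ".schema"
# a quick corruption check on the same file
sqlite3 /mnt/storj/storage/reputation.db "PRAGMA integrity_check;"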

Maybe this is the wrong data for the current identity?
You should not perform the setup step a second time; it would override the existing setup. If the identity were wrong, the node would not start. So, did you perform the setup before running the container?

I didn't perform the setup command. I only did 'docker run … --mount type=bind,source="<storage-dir>" …'


Then your new mount point has a file with the correct NodeID, which is weird, unless you moved all data from the wrong location to the new one.
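
A quick way to sanity-check the identity itself is the certificate count from the Storj docs (the default identity path is assumed; this confirms the identity files are complete and signed, not that they match the data at the mount point):

grep -c BEGIN ~/.local/share/storj/identity/storagenode/ca.cert          # should print 2
grep -c BEGIN ~/.local/share/storj/identity/storagenode/identity.cert    # should print 3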

You can fix them:

  1. Stop storagenode
  2. Delete reputation.db and heldamount.db
  3. Move all remaining databases to a different folder
  4. Start storagenode
    It should create empty databases
  5. Stop storagenode
  6. Move the backed-up databases back, overwriting the empty ones
  7. Start storagenode
  8. Check logs

sudo docker stop -t 300 storagenode
sudo docker rm storagenode

sudo su -
mkdir ~/holding_pen
cd /mnt/storj/storage
mv heldamount.db ~/holding_pen
mv reputation.db ~/holding_pen

mkdir /mnt/storj/db_holding
mv /mnt/storj/storage/*.db /mnt/storj/db_holding
logout

sudo docker run -d --restart unless-stopped …

root@storjbox:/mnt/storj/storage# ls /mnt/storj/storage
bandwidth.db  heldamount.db     orders.db            piece_spaced_used.db  satellites.db     trash
blobs         info.db           piece_expiration.db  pricing.db            storage_usage.db  used_serial.db
garbage       notifications.db  pieceinfo.db         reputation.db         temp

logout

sudo docker stop -t 300 storagenode
sudo docker rm storagenode

sudo su -
mv /mnt/storj/db_holding/* /mnt/storj/storage/
logout

sudo docker run -d --restart unless-stopped …

2021-04-04T14:41:00.383Z INFO Configuration loaded {“Location”: “/app/config/config.yaml”}
2021-04-04T14:41:00.392Z INFO Operator email
2021-04-04T14:41:00.392Z INFO Operator wallet
2021-04-04T14:41:00.700Z INFO Telemetry enabled
2021-04-04T14:41:00.709Z INFO db.migration Database Version {“version”: 42}
2021-04-04T14:41:01.112Z WARN trust Failed to fetch URLs from source; used cache {“source”: “https://tardigrade.io/trusted-satellites”, “error”: “HTTP source: Get https://tardigrade.io/trusted-satellites: x509: certificate signed by unknown authority”, “errorVerbose”: “HTTP source: Get https://tardigrade.io/trusted-satellites: x509: certificate signed by unknown authority\n\tstorj.io/storj/storagenode/trust.(*HTTPSource).FetchEntries:63\n\tstorj.io/storj/storagenode/trust.(*List).fetchEntries:90\n\tstorj.io/storj/storagenode/trust.(*List).FetchURLs:49\n\tstorj.io/storj/storagenode/trust.(*Pool).fetchURLs:240\n\tstorj.io/storj/storagenode/trust.(*Pool).Refresh:177\n\tstorj.io/storj/storagenode.(*Peer).Run:688\n\tmain.cmdRun:200\n\tstorj.io/private/process.cleanup.func1.4:359\n\tstorj.io/private/process.cleanup.func1:377\n\tgithub.com/spf13/cobra.(*Command).execute:840\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:945\n\tgithub.com/spf13/cobra.(*Command).Execute:885\n\tstorj.io/private/process.ExecWithCustomConfig:88\n\tstorj.io/private/process.ExecCustomDebug:70\n\tmain.main:320\n\truntime.main:203”}
2021-04-04T14:41:01.129Z INFO preflight:localtime start checking local system clock with trusted satellites’ system clock.
2021-04-04T14:41:01.724Z INFO preflight:localtime local system clock is in sync with trusted satellites’ system clock.
2021-04-04T14:41:01.724Z INFO bandwidth Performing bandwidth usage rollups
2021-04-04T14:41:01.724Z INFO trust Scheduling next refresh {“after”: “7h1m40.84683929s”}
2021-04-04T14:41:01.725Z INFO Node … started
2021-04-04T14:41:01.725Z INFO Public server started on [::]:28967
2021-04-04T14:41:01.725Z INFO Private server started on 127.0.0.1:7778
2021-04-04T14:41:01.837Z INFO piecestore download started {“Piece ID”: “XYWAMMIXTO7DTYBLA2JW5MOFFSAT2CDO3I53BGM5PUSU45G34UQA”, “Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “Action”: “GET”}
2021-04-04T14:41:01.917Z INFO collector delete expired {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “SM3TGWAVEDLNE2OA4V7NCVVYAUCGBHXVFIV3BLH4JN6RVEYOD6OQ”}
2021-04-04T14:41:02.057Z INFO collector delete expired {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “I7W3XETYJMGKHF3PYN44R4LZRTAX2D2DHDRUY7IWGIQAQTVDV6HA”}
2021-04-04T14:41:02.065Z ERROR piecestore download failed {“Piece ID”: “XYWAMMIXTO7DTYBLA2JW5MOFFSAT2CDO3I53BGM5PUSU45G34UQA”, “Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “Action”: “GET”, “error”: “untrusted: invalid order limit signature: signature verification error: signature is not valid”, “errorVerbose”: “untrusted: invalid order limit signature: signature verification error: signature is not valid\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimitSignature:145\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:62\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:467\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:1004\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:56\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}
2021-04-04T14:41:02.164Z INFO collector delete expired {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “FZ7ATHXMXGHAE7AFDK44RKRN2ZFL7456VXVQRCGIBZYFFF72TOPQ”}
2021-04-04T14:41:02.164Z INFO collector collect {“count”: 3}
2021-04-04T14:41:02.304Z INFO piecestore download started {“Piece ID”: “7GG5ZW4ERZJDK5RW7RVZ6T3X33AC63BDQTKJZK3AQMWHJD4TJKEA”, “Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “Action”: “GET”}
2021-04-04T14:41:02.304Z ERROR piecestore download failed {“Piece ID”: “7GG5ZW4ERZJDK5RW7RVZ6T3X33AC63BDQTKJZK3AQMWHJD4TJKEA”, “Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “Action”: “GET”, “error”: “untrusted: invalid order limit signature: signature verification error: signature is not valid”, “errorVerbose”: “untrusted: invalid order limit signature: signature verification error: signature is not valid\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimitSignature:145\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:62\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:467\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:1004\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:56\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}

docker stop -t 300 storagenode
docker rm storagenode
docker image rm storjlabs/storagenode:latest
docker pull storjlabs/storagenode:latest

Then docker run ...
Make sure that you start with storjlabs/storagenode:latest, not :beta or :alpha
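
To verify which tag an existing container was actually started from (the container name storagenode is assumed here):

docker inspect -f '{{ .Config.Image }}' storagenode
docker images storjlabs/storagenode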

storagenode:beta had accidentally been placed in the run command; I restarted the container with storagenode:latest.

system.reply@storjbox:~$ sudo docker logs --tail 500 storagenode
2021-04-04T14:58:18.940Z INFO Configuration loaded {“Location”: “/app/config/config.yaml”}
2021-04-04T14:58:18.950Z INFO Operator email
2021-04-04T14:58:18.950Z INFO Operator wallet
2021-04-04T14:58:20.154Z INFO Telemetry enabled {“instance ID”: …}
2021-04-04T14:58:20.217Z INFO db.migration.43 Add table payments
2021-04-04T14:58:20.235Z INFO db.migration.44 Add online_score and offline_suspended fields to reputation db, rename disqualified and suspended to disqualified_at and suspended_at
2021-04-04T14:58:20.248Z INFO db.migration.45 Add offline_under_review_at field to reputation db
2021-04-04T14:58:20.249Z INFO db.migration.47 Add audit_history field to reputation db
2021-04-04T14:58:20.250Z INFO db.migration.48 drop uptime columns
2021-04-04T14:58:20.250Z INFO db.migration.49 Add distributed field to paystubs table
2021-04-04T14:58:20.251Z INFO db.migration.50 Make distributed field in paystubs table not null
2021-04-04T14:58:20.252Z INFO db.migration.51 Assume distributed == paid for paystubs before 2020-12.
2021-04-04T14:58:20.252Z INFO db.migration Database Version {“version”: 51}
2021-04-04T14:58:20.837Z INFO preflight:localtime start checking local system clock with trusted satellites’ system clock.
2021-04-04T14:58:21.426Z INFO preflight:localtime local system clock is in sync with trusted satellites’ system clock.
2021-04-04T14:58:21.427Z INFO Node started
2021-04-04T14:58:21.427Z INFO Public server started on [::]:28967
2021-04-04T14:58:21.427Z INFO Private server started on 127.0.0.1:7778
2021-04-04T14:58:21.428Z INFO failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.
2021-04-04T14:58:21.427Z INFO bandwidth Performing bandwidth usage rollups
2021-04-04T14:58:21.427Z INFO trust Scheduling next refresh {“after”: “5h20m15.846691992s”}
2021-04-04T14:58:24.265Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 sending {“count”: 294}
2021-04-04T14:58:24.265Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs sending {“count”: 268}
2021-04-04T14:58:24.265Z INFO orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB sending {“count”: 391}
2021-04-04T14:58:24.265Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE sending {“count”: 607}
2021-04-04T14:58:24.265Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S sending {“count”: 54}
2021-04-04T14:58:24.590Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S finished
2021-04-04T14:58:24.916Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE finished
2021-04-04T14:58:25.064Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs finished
2021-04-04T14:58:25.119Z INFO orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB finished
2021-04-04T14:58:25.598Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 finished
2021-04-04T14:58:25.837Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs sending {“count”: 130}
2021-04-04T14:58:25.837Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S sending {“count”: 53}
2021-04-04T14:58:25.837Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 sending {“count”: 189}
2021-04-04T14:58:25.837Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE sending {“count”: 287}
2021-04-04T14:58:25.837Z INFO orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB sending {“count”: 185}
2021-04-04T14:58:26.333Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE finished
2021-04-04T14:58:26.344Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S finished
2021-04-04T14:58:26.521Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs finished
2021-04-04T14:58:26.543Z INFO orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB finished
2021-04-04T14:58:27.125Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 finished
2021-04-04T14:58:39.603Z INFO piecestore download started {“Piece ID”: “M7Z6VKFEZWYP76XHGRTFT644KAQWCYCBQGLQD7MG6YVVL5ESFPLA”, “Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “Action”: “GET”}
2021-04-04T14:58:44.600Z INFO piecestore downloaded {“Piece ID”: “M7Z6VKFEZWYP76XHGRTFT644KAQWCYCBQGLQD7MG6YVVL5ESFPLA”, “Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “Action”: “GET”}
2021-04-04T14:58:50.939Z INFO piecestore download started {“Piece ID”: “ECOHYTXJE3KQVWTMDZLBICBYXPJFUAA5KKWCOIECUXXN4XSXBX7Q”, “Satellite ID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”, “Action”: “GET”}
2021-04-04T14:58:53.894Z INFO piecestore downloaded {“Piece ID”: “ECOHYTXJE3KQVWTMDZLBICBYXPJFUAA5KKWCOIECUXXN4XSXBX7Q”, “Satellite ID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”, “Action”: “GET”}
2021-04-04T14:58:56.654Z INFO piecestore download started {“Piece ID”: “6DHVYECJGPL4JQJLPY232E3L3YFOTKRRDKTPSG64GI4ALGY3XWPQ”, “Satellite ID”: “12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo”, “Action”: “GET_AUDIT”}
2021-04-04T14:58:56.726Z INFO piecestore downloaded {“Piece ID”: “6DHVYECJGPL4JQJLPY232E3L3YFOTKRRDKTPSG64GI4ALGY3XWPQ”, “Satellite ID”: “12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo”, “Action”: “GET_AUDIT”}

This seems to have worked! Thank you!

Do you know what the future of this node is now that I have multiple suspensions? None hit 0.0%.


While it's suspended, it will not receive ingress, but audits and egress will continue as usual.
As soon as the suspension score rises above 60%, your node will come out of suspension and should behave normally.
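
You can watch the scores recover via the node's local dashboard API (this assumes the dashboard port 14002 is published on the host and that the API path below exists in your storagenode version):

curl -s http://localhost:14002/api/sno/satellites | python3 -m json.tool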

Feeling fortunate, as the drive is nearly full already. Thanks again.
