Errors in log after upgrade to v0.15.2

After upgrading to version 0.15.2,
I see strange errors in the log:

2019-07-16T22:53:09.457616128Z 2019-07-16T22:53:09.457Z INFO running on version v0.15.2
2019-07-16T23:08:09.453841798Z 2019-07-16T23:08:09.453Z INFO running on version v0.15.2
2019-07-16T23:08:48.169797794Z 2019-07-16T23:08:48.169Z INFO piecestore upload started {"Piece ID": "TPRGDV2YGLMGUQBYA7AYSAZU7B4XFTGVO6ELEDH4FP7MXX4ZCVFQ", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-16T23:09:09.777808745Z 2019-07-16T23:09:09.777Z ERROR untrusted: trust:: context canceled
2019-07-16T23:09:09.777814462Z storj.io/storj/storagenode/trust.(*Pool).GetSignee:120
2019-07-16T23:09:09.777814806Z storj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimitSignature:127
2019-07-16T23:09:09.777815107Z storj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimit:59
2019-07-16T23:09:09.777815385Z storj.io/storj/storagenode/piecestore.(*Endpoint).Upload:177
2019-07-16T23:09:09.777815650Z storj.io/storj/pkg/pb._Piecestore_Upload_Handler:701
2019-07-16T23:09:09.777815915Z storj.io/storj/pkg/server.logOnErrorStreamInterceptor:23
2019-07-16T23:09:09.777816169Z google.golang.org/grpc.(*Server).processStreamingRPC:1209
2019-07-16T23:09:09.777816430Z google.golang.org/grpc.(*Server).handleStream:1282
2019-07-16T23:09:09.777816696Z google.golang.org/grpc.(*Server).serveStreams.func1.1:717
2019-07-16T23:14:36.168333289Z 2019-07-16T23:14:36.168Z INFO piecestore upload started {"Piece ID": "TNXPYGFPCFHQG5WKBEMNW2Y7JC2EWIHYWYB37BGG3KXJSH4ZHLIQ", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-16T23:14:56.341248538Z 2019-07-16T23:14:56.341Z ERROR untrusted: trust:: context canceled
2019-07-16T23:14:56.341251283Z storj.io/storj/storagenode/trust.(*Pool).GetSignee:120
2019-07-16T23:14:56.341251627Z storj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimitSignature:127
2019-07-16T23:14:56.341251891Z storj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimit:59
2019-07-16T23:14:56.341252148Z storj.io/storj/storagenode/piecestore.(*Endpoint).Upload:177
2019-07-16T23:14:56.341252388Z storj.io/storj/pkg/pb._Piecestore_Upload_Handler:701
2019-07-16T23:14:56.341252650Z storj.io/storj/pkg/server.logOnErrorStreamInterceptor:23
2019-07-16T23:14:56.341252944Z google.golang.org/grpc.(*Server).processStreamingRPC:1209
2019-07-16T23:14:56.341253182Z google.golang.org/grpc.(*Server).handleStream:1282
2019-07-16T23:14:56.341253435Z google.golang.org/grpc.(*Server).serveStreams.func1.1:717
2019-07-16T23:23:09.455534614Z 2019-07-16T23:23:09.455Z INFO running on version v0.15.2
2019-07-16T23:38:09.467675307Z 2019-07-16T23:38:09.467Z INFO running on version v0.15.2
2019-07-16T23:47:46.873199239Z 2019-07-16T23:47:46.873Z INFO piecestore upload started {"Piece ID": "U5WCX57SVNODYCKB4ZBHIG77YK2XLCUSCRYC6WIAWHEBIVZ6NXOQ", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-16T23:48:07.041967121Z 2019-07-16T23:48:07.041Z ERROR untrusted: trust:: context canceled
2019-07-16T23:48:07.041968524Z storj.io/storj/storagenode/trust.(*Pool).GetSignee:120
2019-07-16T23:48:07.041968844Z storj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimitSignature:127
2019-07-16T23:48:07.041969104Z storj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimit:59
2019-07-16T23:48:07.041969410Z storj.io/storj/storagenode/piecestore.(*Endpoint).Upload:177
2019-07-16T23:48:07.041969684Z storj.io/storj/pkg/pb._Piecestore_Upload_Handler:701
2019-07-16T23:48:07.041969928Z storj.io/storj/pkg/server.logOnErrorStreamInterceptor:23
2019-07-16T23:48:07.041970162Z google.golang.org/grpc.(*Server).processStreamingRPC:1209
2019-07-16T23:48:07.041970402Z google.golang.org/grpc.(*Server).handleStream:1282
2019-07-16T23:48:07.041970653Z google.golang.org/grpc.(*Server).serveStreams.func1.1:717
2019-07-16T23:53:09.443529793Z 2019-07-16T23:53:09.443Z INFO running on version v0.15.2

I get the same. I'm not sure if it means the node is untrusted or the satellite; my gut says it's the satellite, since all the certs were revoked.

All I'm seeing are these errors and the version status messages.

Thanks @KernelPanick
I'm paying attention to it because on another node I don't see errors like this; I just see 3 PUT requests, and that data is stored successfully without errors:

2019-07-16T23:03:39.655174243Z 2019-07-16T23:03:39.655Z INFO running on version v0.15.2
2019-07-16T23:08:48.189886731Z 2019-07-16T23:08:48.189Z INFO piecestore upload started {"Piece ID": "7RMQPHF5SOVO7KL2USCNX6I43RMLBQ3JDH7QZI7LP6VODHVY6HOA", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-16T23:09:03.690952850Z 2019-07-16T23:09:03.690Z INFO piecestore uploaded {"Piece ID": "7RMQPHF5SOVO7KL2USCNX6I43RMLBQ3JDH7QZI7LP6VODHVY6HOA", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-16T23:14:36.199892483Z 2019-07-16T23:14:36.199Z INFO piecestore upload started {"Piece ID": "PEOUXKJSYD5MKC345N2JMOCLPSIPCE5WODVVNCUIERPOBHHGJPYA", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-16T23:14:36.420727928Z 2019-07-16T23:14:36.420Z INFO piecestore uploaded {"Piece ID": "PEOUXKJSYD5MKC345N2JMOCLPSIPCE5WODVVNCUIERPOBHHGJPYA", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
2019-07-16T23:18:39.658293590Z 2019-07-16T23:18:39.658Z INFO running on version v0.15.2
2019-07-16T23:33:39.053003127Z 2019-07-16T23:33:39.051Z INFO piecestore:orderssender.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW sending {"count": 2}
2019-07-16T23:33:39.660607496Z 2019-07-16T23:33:39.660Z INFO running on version v0.15.2
2019-07-16T23:34:02.663538724Z 2019-07-16T23:34:02.663Z INFO piecestore:orderssender.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW finished
2019-07-16T23:48:39.659933066Z 2019-07-16T23:48:39.659Z INFO running on version v0.15.2

So it's very strange to me.

Well, maybe we do have DQ'd nodes then. I did have a few missing pieces (~0.07%) due to a short (~30 min) mishap with the -v mapping before --mount was recommended. A statement from the team says they will reinstate nodes in most cases (besides audit failures…). Because of your successful node, I think I'll be emailing the team.

Hopefully we can get some data again. If I get DQ'd and reset 3 months back, I'll be pretty disappointed, since I used their configurations. If that happened, I'd be losing out on the 5x, and my escrow would reset from 50% back to 75%…

If you aren't getting data, please email us at support@storj.io. We'll look into all issues on a case-by-case basis and hopefully get you back up and running.

If nodes get disqualified during the Alpha releases (other than for audit failures) we will continually reset those disqualifications so that storage nodes can continue to receive data or otherwise work with storage node operators to bring the nodes back online.

I don't think there is a need to panic. It's more likely an uplink trying to use old credentials. There is only one user on the network right now and no tests are going on. Give it a bit; this upgrade was a high-impact one, and it may take a while for traffic to return.
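
For what it's worth, the "context canceled" in that trace just means the upload's request context was canceled, i.e. the client side of the upload went away while the node was still fetching the satellite's identity to verify the order limit; the node then wraps and logs that as an "untrusted: trust::" error. Here is a minimal, illustrative Go sketch of that pattern (hypothetical names, not the actual storj code):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// getSignee simulates the satellite-identity lookup seen in the trace
// (trust.(*Pool).GetSignee). If the upload's context is canceled while the
// lookup is in flight, the lookup aborts with context.Canceled.
func getSignee(ctx context.Context, satelliteID string) (string, error) {
	select {
	case <-time.After(200 * time.Millisecond): // pretend network round-trip
		return satelliteID, nil
	case <-ctx.Done():
		return "", ctx.Err() // context.Canceled
	}
}

// verifyOrderLimit stands in for the node-side signature check
// (VerifyOrderLimit -> VerifyOrderLimitSignature). It wraps the lookup
// failure, which is roughly how "untrusted: trust:: context canceled"
// ends up in the log (the real code uses error classes for the prefixes).
func verifyOrderLimit(ctx context.Context, satelliteID string) error {
	if _, err := getSignee(ctx, satelliteID); err != nil {
		return fmt.Errorf("untrusted: trust: %w", err)
	}
	return nil
}

func main() {
	// Simulate an uplink abandoning the upload shortly after starting it.
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		time.Sleep(50 * time.Millisecond)
		cancel()
	}()

	err := verifyOrderLimit(ctx, "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW")
	fmt.Println(err)                              // untrusted: trust: context canceled
	fmt.Println(errors.Is(err, context.Canceled)) // true
}

In other words, this message appears to be produced locally when an interrupted upload cancels the lookup, rather than being a sign that the satellite has marked the node as untrusted or disqualified.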

Thanks @BrightSilence!
I just reported something unusual that I saw after the upgrade. I understand that this release is a heavy one and needs more time to solve all issues before everything is back to the usual state.

Oh, I know. I was referring to @KernelPanick suggesting nodes were DQ'd. I don't think that is the case.

Confirmed, I see it too. So we must wait…