Does anyone else notice a "write tcp" error while submitting orders?

The log file should contain something like this:

rpc client when sending new orders settlements {"error": "order: sending settlement agreements returned an error: write tcp 172.17.0.3:52332-> _ use of closed network connection"

same here

ERROR piecestore download failed {"Piece ID": "B2FLPQUTLIOKU7EDI5FRNPOGEY7US5ULO2GT6EAT4QARCALNFXPQ", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET", "error": "write tcp 172.17.0.4:28967->95.217.158.23:40888: use of closed network connection", "errorVerbose": "write tcp 172.17.0.4:28967->95.217.158.23:40888: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).pollWrite:221\n\tstorj.io/drpc/drpcwire.SplitN:29\n\tstorj.io/drpc/drpcstream.(*Stream).RawWrite:276\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:318\n\tstorj.io/common/pb.(*drpcPiecestoreDownloadStream).Send:1080\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload.func5.1:640\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}

And another similar one:

ERROR piecestore download failed {"Piece ID": "H3OVX4W2LIAEFKR4RZIPOX3FBLDKRB3M3JNRWGIR43XACI5A7K7Q", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET", "error": "write tcp 172.17.0.4:28967->95.217.187.75:8456: write: broken pipe", "errorVerbose": "write tcp 172.17.0.4:28967->95.217.187.75:8456: write: broken pipe\n\tstorj.io/drpc/drpcstream.(*Stream).pollWrite:221\n\tstorj.io/drpc/drpcwire.SplitN:29\n\tstorj.io/drpc/drpcstream.(*Stream).RawWrite:276\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:318\n\tstorj.io/common/pb.(*drpcPiecestoreDownloadStream).Send:1080\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload.func5.1:640\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}

While similar to the original post, these aren’t the same. This is just another form of the download being cut off because others are beating your node to the finish line. Nothing to worry about.

@nerdatwork: I haven't personally seen this error, at least not this month. Are you seeing it every time orders are sent, or only once in a while? Orders that haven't been sent in previous batches will be sent again later on, so unless it's consistently failing, it probably fixes itself.


I know, but I just found out that 1423 orders expired because of this error. Now I have to figure out how to filter the download orders out of the unsent_order table.
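
As a rough starting point, something like this against orders.db should show how much is sitting in the queue per satellite and how soon it expires. The satellite_id and order_limit_expiration column names are what I see in my own node's unsent_order schema, so treat them as an assumption and double-check against your database.

Query: select hex(satellite_id), count(*), min(order_limit_expiration) from unsent_order group by satellite_id; -- column names assumed from my node's orders.db schema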

What exactly are you trying to do that requires filtering out downloads? There must be some info in that serialized field, but it's probably going to be a bit of a hassle to decipher.

Are there any archived orders with a status <> 1? What is the earliest expiration date in unsent orders?
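
Against orders.db that would be something like the following, assuming the usual order_archive_ and unsent_order tables with status and order_limit_expiration columns (names taken from my own node, so verify them first):

Query: select count(*) from order_archive_ where status <> 1; -- status 1 should be "accepted" on my node, so <> 1 is everything that failed (assumption)
Query: select min(order_limit_expiration) from unsent_order; -- earliest expiration among orders still waiting to be sent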

I'm trying to figure out how many orders are expiring, and out of those, how many are download orders I'm not getting paid for.

We are told they are retried every hour, but has anyone checked whether they actually expire or not? If they do, how many of them are expiring? And is this a bug on the satellite's end or on the storage node's end?
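
One way to check on the node itself, assuming order_limit_expiration is stored as a plain UTC text timestamp like it is in my orders.db, is to count unsent orders that are already past their expiration:

Query: select count(*) from unsent_order where order_limit_expiration < datetime('now'); -- string comparison works only if both sides are UTC 'YYYY-MM-DD HH:MM:SS...' strings (assumption about the stored format)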

Query: select count(*) from order_archive_ where status <> 1;
result: 2741

Earliest expiration in unsent_order: 2020-05-17 12:03:06.180961486+00:00

yeah, that does look like something is off… but I’m kind of stuck on where to go from here.