Context cancelled on all uploads/downloads


For a few months I was running a node that worked perfectly. A few days ago it started throwing massive numbers of “context canceled” errors, for both uploads and downloads. I tried reinstalling and rebuilding the databases, but the problem persists.

Is this a known bug or problem?

I couldn’t find a paste service for the log, so I used Google Drive.

Welcome to the forum @encabn!

That is how the system is designed, and it is perfectly normal. Your node lost the race for the piece, hence “context canceled”.

Don’t jeopardize your node by messing with the databases. Whenever you face an issue, search the forum first; if you search for “context canceled” you will find many posts that explain it in detail.

If your node is not failing audits then you are good.


@encabn If you see log messages like this:

```
2020-02-04T01:30:19.306+0100 INFO piecestore upload failed {"Piece ID": "OQQBKVMCGVFXTHVSRTHQJINZZUXBWM5DPL327IJRP7WPEALDWRUQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n(*Endpoint).doUpload:483\n(*drpcEndpoint).Upload:257\n(*Server).doHandle:175\n(*Server).HandleRPC:153\n(*Server).ServeOne:114\n(*Server).Serve.func2:147\n(*Tracker).track:51"}
```

it does not always mean that your node failed to get the piece.

Some time ago, we changed the uplink to aggressively close the connection after an upload without waiting for the last acknowledgment from the storage node. This significantly improved upload performance.

As a result, the storagenode logs are polluted with “upload failed” messages due to “context canceled”. For this specific piece, OQQBKVMCGVFXTHVSRTHQJINZZUXBWM5DPL327IJRP7WPEALDWRUQ, you can see in the logs that it is deleted later. That could not happen if it had not been uploaded successfully in the first place.

We are now working on cleaning up the logs. There is a change in review:


It’s not entirely clear to me whether this is just about the remaining transfers when the success threshold is reached or actual finished transfers as well. Has this been a recent change?

The change in the code seems to be mostly a change in log terminology for all cancelled transfers, which suggests this would change the message for both incomplete as well as complete uploads like the example you gave with the piece that was later deleted. Could you clarify this?

Thanks for taking time to review.

I’m not running on Docker either (per the linked post).

Isn’t this also incredibly dangerous? Like issuing a COMMIT statement to a database server and then immediately closing the connection, and assuming your data is safe? Or is there some sort of three-way final acknowledgement and the uploader received the first one then doesn’t wait to hear the third?

This change is still a work in progress. It will be improved to clearly distinguish between successful and canceled uploads.

After the piece is completely uploaded, the storage node returns a signed hash of the piece to the uplink to confirm that it got the expected data. The optimization I mentioned only concerns closing the connection afterwards. So no, it is not dangerous at this stage.


It closes the CONNECTION (TCP), not an internal commit; that explains it all.