INFO piecestore upload canceled

I am getting this error after pulling the latest version of the storagenode image (1.1.1):

pi@raspberrypi:~ $ docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                                  NAMES
37ecb6883692        storjlabs/watchtower         "/watchtower storage…"   8 minutes ago       Up 8 minutes                                                               watchtower
9dca91c5bea0        storjlabs/storagenode:beta   "/entrypoint"            12 minutes ago      Up 12 minutes       127.0.0.1:14002->14002/tcp, 0.0.0.0:28967->28967/tcp   storagenode
pi@raspberrypi:~ $ docker restart storagenode 
storagenode
pi@raspberrypi:~ $ docker logs -f --tail 10 storagenode
2020-04-08T19:39:09.134Z	INFO	version	running on version v1.1.1
2020-04-08T19:39:09.236Z	INFO	db.migration	Database Version	{"version": 33}
2020-04-08T19:39:09.965Z	INFO	preflight:localtime	start checking local system clock with trusted satellites' system clock.
2020-04-08T19:39:10.853Z	INFO	preflight:localtime	local system clock is in sync with trusted satellites' system clock.
2020-04-08T19:39:10.855Z	INFO	trust	Scheduling next refresh	{"after": "4h35m42.109416295s"}
2020-04-08T19:39:10.855Z	INFO	bandwidth	Performing bandwidth usage rollups
2020-04-08T19:39:10.856Z	INFO	Node 128roxUQHtWPRZQ5TAmx3x7qGiDAgZPNrvkMpfKUHGcmdLxL4GC started
2020-04-08T19:39:10.856Z	INFO	Public server started on [::]:28967
2020-04-08T19:39:10.857Z	INFO	Private server started on 127.0.0.1:7778
2020-04-08T19:39:11.097Z	INFO	version	running on version v1.1.1
2020-04-08T19:39:25.747Z	INFO	piecestore	upload started	{"Piece ID": "5XLXUKCA7F6DIM5L5WKDRKYCSJOHFMB4FOU3NMBNUIZB2EQM5YJA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 248292972544}
2020-04-08T19:39:25.838Z	INFO	piecestore	upload canceled	{"Piece ID": "5XLXUKCA7F6DIM5L5WKDRKYCSJOHFMB4FOU3NMBNUIZB2EQM5YJA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/pb/pbgrpc.init.0.func3:70\n\tstorj.io/common/rpc/rpcstatus.Wrap:77\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:452\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:215\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:987\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:105\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:56\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:93\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

Could you be more specific about the error? All I see is the context canceled message, which is normal. From the log lines you provided, the node looks healthy, unless I missed something. Check this thread for explanations of common error messages: Error Codes: What they mean and Severity Level [READ FIRST]
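
One quick way to surface genuinely problematic lines is to filter the log by severity so routine INFO noise like "upload canceled" drops out. A minimal sketch (the severity set in the pattern is illustrative, not exhaustive):

```shell
# Keep only WARN/ERROR/FATAL lines from the node log; INFO lines such
# as "upload canceled" are filtered out. "storagenode" is the container
# name from the `docker ps` output above.
docker logs storagenode 2>&1 | grep -E 'WARN|ERROR|FATAL'
```

If that prints nothing, the node has logged no warnings or errors in the retained log window.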

@baker I keep getting this message over and over again. I don’t see anything else, just upload started and upload canceled.

What you are seeing is due to your node losing the race to receive the piece from the uploader. You can use the script in this post to get a more complete picture of how your node is doing with respect to successful uploads. I suggest you try it out, and if your successful upload count is zero (and this is not a brand new node), that may indicate a problem. If this is the case, it would be best for you to start your own thread to begin troubleshooting.

Canceled uploads, downloads and such aren’t errors, even though the log says so…
Like baker says, it is caused by your node losing the race.
This can be caused by a multitude of reasons, such as lack of bandwidth, a high number of actions allocated to your node, or high load on the machine hosting your node causing latency.
The storagenode will also sometimes perform cleanup and various other tasks which can slow it down.
Antivirus scans, your own downloads over the same internet connection… you most likely have a much better idea why this is happening.

Wait it out, see if the problem persists, and/or try to figure out what is causing a bottleneck in your setup.

The successrate.sh script linked by baker can be very useful for seeing whether it’s a persistent problem or just some random stress causing a minor issue once in a while.
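
For a rough idea of what such a script measures, here is a minimal sketch (this is not the actual successrate.sh; it assumes the log wording shown earlier in this thread, with "uploaded" marking a completed transfer):

```shell
#!/bin/sh
# Hypothetical sketch of a success-rate tally (not the real successrate.sh).
# Assumes the log wording from this thread: "upload started",
# "upload canceled", and "uploaded" for a completed transfer.
upload_stats() {
    started=$(grep -c 'upload started' "$1")
    canceled=$(grep -c 'upload canceled' "$1")
    succeeded=$(grep -c 'INFO.*piecestore.*uploaded' "$1")
    echo "started=$started canceled=$canceled succeeded=$succeeded"
}

# Usage: save the log to a file first, then run the tally:
#   docker logs storagenode > node.log 2>&1
#   upload_stats node.log
```

A succeeded count of zero over a long window on an established node is the warning sign baker describes; a mix of canceled and succeeded uploads is normal race behavior.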

I opened an issue here: https://github.com/storj/storj/issues/2802 almost a year ago to reduce the logging level, but so far it has gotten no traction (other than https://github.com/storj/storj/issues/3763 (unrelated) being posted there and then split off). The original request will hopefully be dealt with at some point, since newer users see an “error” that isn’t an error and get confused about what’s broken.

As far as I can see it’s marked as INFO, so it’s not an error.

You are correct @Alexey – they must have adjusted the severity level down to INFO at some point.

I’ve updated the GitHub ticket, as I believe the errorVerbose stacktrace field should be removed or replaced with something more useful to an SNO.

It says errorVerbose in the log line…
A programmer or advanced user will disregard that, but others may be confused by it…
Of course the log is pretty easy to read, in my opinion, but I bet it’s cryptic to some…

A programmer or advanced user won’t disregard this.

Error means error…
