Error piecestore protocol: rpc error: code = canceled desc = context canceled

Makes sense. Meaning the most robust nodes with better bandwidth will perform better then, correct?
Because I have 5 nodes between Portugal (100/100) and the Netherlands (100/40), and until 2 hours ago I had zero successful uploads; they’d all fail!

With my SN full, I haven’t seen new uploads for a long, long, long time …
But I do not complain :slight_smile:

1 Like

No, no nodes would perform better because of this change. Success rates are either the same or lower, since simply fewer transfers succeed in total. Right now, though, we’ve also shifted from a lot of Storj test traffic from one location to only customer traffic from all around. If your node did particularly well on the Storj test uplinks, you may also see degradation from that. In that case, other nodes nearer to these customers would actually see an uptick in successful transfers.
But the bottom line is, your nodes are fine and there is very little use in micromanaging these kinds of patterns. Traffic is going to become less predictable anyway. Don’t worry too much about it.

2 Likes

My node has 4 vCPUs, 4 GB of RAM, a 1 TB SSD, and 1G/200M bandwidth.
My upload success ratio is now one in ten :frowning:

Hello everyone, I have exactly the same problem, except that I haven’t had a single successful upload all day. Now I wonder whether it makes sense for private individuals with low/medium hardware to run a node as long as the “big ones” win the races. Sure, if the big ones were to be “full” at some point, the small ones would also get something, but that will take time.

I only have about 10% success at the moment too, but it might just be a temporary problem.

It’s not a temporary problem.
There are no longer unused test files, so there is more space for real user data.
We just lose the races, and the faster/bigger nodes win. So we have to wait for a lot of new users to bring more traffic. But how long will that take? Weeks? Months? Years? No one knows.

It all depends.

They announced uploads would slow to a stop for the moment after the upgrade. But I watch the uploads and downloads carefully, and I’m dealing with a 10/100 connection. My uploads coming from clients have been much faster than before.

Now, on downloads, I’m limited to 10 Mbps. What I’ve noticed is that small segments move through pretty fast, but large segments get more context cancelled; my link loses the race. I get a lot of download requests and many will context cancel, maybe 50-80%, but at the same time I have many that get through as successful downloads.

It’s not a reason to quit. I think these slow-bandwidth links can do better with smaller files. And as new clients join, you could find one in your backyard that has you winning the file race most of the time. It’s not distance to the satellite but distance to the client sending and receiving the files that matters.
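
To put rough numbers on that (my own back-of-the-envelope figures, not from this thread): a signal in fiber covers roughly 200 km per millisecond, so the minimum round-trip time grows by about 1 ms per 100 km of distance, which quickly dwarfs the few milliseconds a disk or CPU adds:

awk 'BEGIN {
  # assumed ~200 km/ms propagation speed in fiber, ignoring routing overhead
  for (km = 100; km <= 6400; km *= 4)
    printf "distance %4d km  ->  min RTT ~%3.0f ms\n", km, 2 * km / 200
}'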

So, the system is working ok from my SNO position.

Remember, right now the network seems a bit idle compared to a few days ago and the month of January.

2 Likes

2 posts were split to a new topic: Error getting current space used calculation

Hi everyone,

I was looking at the logs and noticed some weird behavior where a piece upload is started but fails, and is then followed by download and delete requests for the same piece. Is anyone else seeing this? Is this some kind of test?

2020-01-31T19:28:20.384Z INFO piecestore upload started {"Piece ID": "RY7VNUZLF3AWAOWUVVE65UGIG6DFPJKVOH4KYV5V6Z2BVUUAJPDQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT"}
2020-01-31T19:28:22.291Z INFO piecestore upload failed {"Piece ID": "RY7VNUZLF3AWAOWUVVE65UGIG6DFPJKVOH4KYV5V6Z2BVUUAJPDQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "context canceled", "errorVerbose": "context canceled\n\tstorj.io/common/rpc/rpcstatus.Wrap:79\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:483\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:257\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:1066\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2020-01-31T19:28:28.255Z INFO piecestore download started {"Piece ID": "RY7VNUZLF3AWAOWUVVE65UGIG6DFPJKVOH4KYV5V6Z2BVUUAJPDQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2020-01-31T19:28:29.786Z INFO piecestore downloaded {"Piece ID": "RY7VNUZLF3AWAOWUVVE65UGIG6DFPJKVOH4KYV5V6Z2BVUUAJPDQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2020-01-31T19:28:30.925Z INFO piecestore deleted {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "RY7VNUZLF3AWAOWUVVE65UGIG6DFPJKVOH4KYV5V6Z2BVUUAJPDQ"}

Welcome to the forum, @Gank!

Yes

Yes

PS: Make sure you are not failing audits and keep your node online.

1 Like

My storage node has been updated to v0.31.12.
Many uploads continue to fail.

========== AUDIT =============
Successful: 1
Recoverable failed: 0
Unrecoverable failed: 0
Success Rate Min: 100.000%
Success Rate Max: 100.000%
========== DOWNLOAD ==========
Successful: 13
Failed: 10
Success Rate: 56.522%
========== UPLOAD ============
Successful: 6
Rejected: 0
Failed: 26
Acceptance Rate: 100.000%
Success Rate: 18.750%
========== REPAIR DOWNLOAD ===
Successful: 0
Failed: 0
Success Rate: 0.000%
========== REPAIR UPLOAD =====
Successful: 3
Failed: 17
Success Rate: 15.000%

Something very odd is definitely going on.
Last week my upload and download success rates were 96%+.
Today, bandwidth has dropped, as people have noted. The low bandwidth isn’t a big concern; it’s the failure rate.

========== AUDIT =============
Successful: 40
Recoverable failed: 0
Unrecoverable failed: 0
Success Rate Min: 100.000%
Success Rate Max: 100.000%
========== DOWNLOAD ==========
Successful: 42
Failed: 5
Success Rate: 89.362%
========== UPLOAD ============
Successful: 118
Rejected: 0
Failed: 2560
Acceptance Rate: 100.000%
Success Rate: 4.406%
========== REPAIR DOWNLOAD ===
Successful: 0
Failed: 0
Success Rate: 0.000%
========== REPAIR UPLOAD =====
Successful: 184
Failed: 11
Success Rate: 94.359%

They’re all context cancelled.
Considering my 1000/500 connection with great peering to Asia and the USA, it’s not a bandwidth/latency issue.

1 Like

I think the issue is that there’s not a lot of data moving, so there are a lot more nodes that aren’t busy, beating you to everything.

How do you know it’s not a latency issue? The stats on my biggest node look like this

========== AUDIT ============= 
Successful:           544 
Recoverable failed:   0 
Unrecoverable failed: 0 
Success Rate Min:     100.000%
Success Rate Max:     100.000%
========== DOWNLOAD ========== 
Successful:           4442 
Failed:               82 
Success Rate:         98.187%
========== UPLOAD ============ 
Successful:           2254 
Rejected:             0 
Failed:               488 
Acceptance Rate:      100.000%
Success Rate:         82.203%
========== REPAIR DOWNLOAD === 
Successful:           3095 
Failed:               1 
Success Rate:         99.968%
========== REPAIR UPLOAD ===== 
Successful:           443 
Failed:               72 
Success Rate:         86.019%

And that is with an 80/20 connection.

Why am I having such bad luck in this area?
"Failed Upload Pieces:\t\t$(cat "tmpf" | grep PUT_REPAIR | grep failed -c)" Failed Upload Pieces: 249 Jesuss-Mac-Pro:~ neo echo -e “Successful Upload Pieces:\t$(cat “$tmpf” | grep PUT_REPAIR | grep downloaded -c)”
Successful Upload Pieces: 0
And that is with a 1000/40 connection.
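
One caveat about the commands above (my own observation based on the log excerpts further down in this thread, not something pointed out here): a successful repair transfer is written to the storagenode log as “piecestore uploaded” with Action PUT_REPAIR, so grepping PUT_REPAIR lines for “downloaded” will always print 0 regardless of how the node is doing. A corrected count would look something like:

# success lines say "uploaded"; failures say "upload failed"
echo -e "Successful Upload Pieces:\t$(grep PUT_REPAIR "$tmpf" | grep -c uploaded)"
echo -e "Failed Upload Pieces:\t\t$(grep PUT_REPAIR "$tmpf" | grep -c failed)"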

I highly doubt it’s about bigger or faster nodes. Two changes have happened recently. First, uploads now start 110 transfers instead of 95, but still only 80 are finished, so in general more connections are cancelled. Second, the tests from Germany have stopped for the moment. I’m guessing from your name that those tests were very close to you, and what remains now is just customers from all around the world using the network. My node is in the Netherlands and I’ve seen a drop from around 95-99% success before to just 50% now. With the previously mentioned changes, this is to be expected.
People don’t tend to complain on the forums when their nodes see an increase in success rate, but I’m pretty sure those closer to the customers who are testing the most right now are seeing higher success rates.
From what I can tell, all that matters is latency. Hardware does introduce some latency, but not nearly as much as distance does.
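
For intuition, here is a toy simulation of that long tail (a sketch under my own simplifying assumptions, not the actual uplink code): an upload is started on 110 nodes, only the 80 fastest pieces are kept, and every slower node gets a “context canceled”. Even in a network of perfectly identical nodes, 30 out of every 110 transfers must fail, so nobody can average better than roughly 73%:

awk 'BEGIN {
  srand(); started = 110; needed = 80; trials = 100000; wins = 0
  for (t = 0; t < trials; t++) {
    mine = rand(); faster = 0               # our random latency vs. 109 rivals
    for (i = 1; i < started; i++) if (rand() < mine) faster++
    if (faster < needed) wins++             # we finished among the first 80
  }
  printf "simulated success rate (identical nodes): %.1f%%\n", wins / trials * 100
  printf "hard ceiling: %d/%d = %.1f%%\n", needed, started, needed / started * 100
}'

Raising the number of started transfers while keeping 80 fixed mechanically lowers everyone’s success rate, which matches the drop people are reporting in this thread.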

Back in the rocket.chat days, I remember uploads went to the fastest 80 nodes out of a selected 135. When was the change to 90 introduced?

It was 95 originally and was raised temporarily to 130. I don’t know the exact timelines, but it has switched back to 95 before now switching to 110.

Hi there!
I just want to check that everything is OK, because the same happened to me too. I’ve been running a node for 2-3 days and this is what I get in the log file:

2020-06-15T11:28:51.723+0200 INFO piecestore upload canceled {"Piece ID": "6OS7EWKJYKX6MHTPRRKDJP3B7PVBXXOJPHB2OQICHG4FYLMJML2A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:28:52.518+0200 INFO piecestore uploaded {"Piece ID": "KXQYLVBXLEEWNZZKOPOVGQ4EHGP2DFFNBBU35HWAVKMKCKYX5NLQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT_REPAIR"}
2020-06-15T11:28:59.136+0200 INFO piecestore upload started {"Piece ID": "FN2PMQHP55N4UV5KXWSYAPDG4JUAMBH2DAGCS4N4DLHNCJHSXGMQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT_REPAIR", "Available Space": 869970121088}
2020-06-15T11:28:59.173+0200 INFO piecestore uploaded {"Piece ID": "FN2PMQHP55N4UV5KXWSYAPDG4JUAMBH2DAGCS4N4DLHNCJHSXGMQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT_REPAIR"}
2020-06-15T11:29:02.062+0200 INFO piecestore upload started {"Piece ID": "6L2SAHDXE5LMAQQGDUDXIZ4RS56YVJM5MGGDYFZKXJAAS5SHPT5A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869970120064}
2020-06-15T11:29:02.099+0200 INFO piecestore uploaded {"Piece ID": "6L2SAHDXE5LMAQQGDUDXIZ4RS56YVJM5MGGDYFZKXJAAS5SHPT5A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:29:07.640+0200 INFO piecestore upload started {"Piece ID": "6ERR5U56ZFENPVCFXW6LAMW5TLKXV3IXWTX44NN4QHH6WKMR5OMA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869970119040}
2020-06-15T11:29:07.674+0200 INFO piecestore uploaded {"Piece ID": "6ERR5U56ZFENPVCFXW6LAMW5TLKXV3IXWTX44NN4QHH6WKMR5OMA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:29:10.102+0200 INFO piecestore upload started {"Piece ID": "KLZX2JLNUYRLRBCBNEUAVFHHGVKUBMULKBOXGLOX4Y3CBTIPOYDA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT_REPAIR", "Available Space": 869970118016}
2020-06-15T11:29:17.742+0200 INFO piecestore uploaded {"Piece ID": "KLZX2JLNUYRLRBCBNEUAVFHHGVKUBMULKBOXGLOX4Y3CBTIPOYDA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT_REPAIR"}
2020-06-15T11:29:20.377+0200 INFO piecestore upload started {"Piece ID": "FVDEU6PH7T7YHHQ45LYJ5G7QER37LYDHDKPPWEVYZG3C7A36JKQQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967798144}
2020-06-15T11:29:20.433+0200 INFO piecestore uploaded {"Piece ID": "FVDEU6PH7T7YHHQ45LYJ5G7QER37LYDHDKPPWEVYZG3C7A36JKQQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:29:25.657+0200 INFO piecestore upload started {"Piece ID": "MKEX3EVCSP6IVGBW2IFKAGF3VMCGNFLHLQO3BULLLLARTCQYAWPQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967797120}
2020-06-15T11:29:25.694+0200 INFO piecestore uploaded {"Piece ID": "MKEX3EVCSP6IVGBW2IFKAGF3VMCGNFLHLQO3BULLLLARTCQYAWPQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:29:34.907+0200 INFO piecestore upload started {"Piece ID": "QWCGF2BLJKVHMLLE2477XO53EJUV3KXPIIUOMXKVJVEIQG3CBR3Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967796096}
2020-06-15T11:29:34.928+0200 INFO piecestore uploaded {"Piece ID": "QWCGF2BLJKVHMLLE2477XO53EJUV3KXPIIUOMXKVJVEIQG3CBR3Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:29:41.467+0200 INFO piecestore upload started {"Piece ID": "7MH5QACX2TUXECZ4LA5LZGIVXZGGITYMOHLG7ZIZNEZOUGXKJ7CA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967795072}
2020-06-15T11:29:41.504+0200 INFO piecestore uploaded {"Piece ID": "7MH5QACX2TUXECZ4LA5LZGIVXZGGITYMOHLG7ZIZNEZOUGXKJ7CA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:29:42.427+0200 INFO piecestore upload started {"Piece ID": "RKHJANDXY3U4B7S534DKIXTOFZO3ZWBKXOUOYARGY657HP2BAZUQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967794048}
2020-06-15T11:29:42.454+0200 INFO piecestore uploaded {"Piece ID": "RKHJANDXY3U4B7S534DKIXTOFZO3ZWBKXOUOYARGY657HP2BAZUQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:29:49.112+0200 INFO piecestore upload started {"Piece ID": "BXH3NPTFXBL42SBEYGBJAR4ACCW2IZEJO6ETJG5BRSKST7A2LMWQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967793024}
2020-06-15T11:29:49.145+0200 INFO piecestore uploaded {"Piece ID": "BXH3NPTFXBL42SBEYGBJAR4ACCW2IZEJO6ETJG5BRSKST7A2LMWQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:29:51.305+0200 INFO piecestore upload started {"Piece ID": "KWDWKLKM46FEI727DAAZKNWUGJ2Z52ZNSJFCUCXLG6DJMDYLWHIA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 869967792000}
2020-06-15T11:29:51.481+0200 INFO piecestore uploaded {"Piece ID": "KWDWKLKM46FEI727DAAZKNWUGJ2Z52ZNSJFCUCXLG6DJMDYLWHIA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT"}
2020-06-15T11:29:58.818+0200 INFO piecestore upload started {"Piece ID": "KZNGBAOMKLI7EPLCYUUBC6I66WSTJZ3A3P6SN35XCZHRIZTLUNZQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967703680}
2020-06-15T11:29:58.838+0200 INFO piecestore uploaded {"Piece ID": "KZNGBAOMKLI7EPLCYUUBC6I66WSTJZ3A3P6SN35XCZHRIZTLUNZQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:30:03.528+0200 INFO piecestore upload started {"Piece ID": "TRCOKJSONBSY7ETFZDJRJ26COTSJPH4M42CQEKVDRDBEHBPND5CQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967702656}
2020-06-15T11:30:03.914+0200 INFO piecestore uploaded {"Piece ID": "TRCOKJSONBSY7ETFZDJRJ26COTSJPH4M42CQEKVDRDBEHBPND5CQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:30:06.068+0200 INFO piecestore upload started {"Piece ID": "EUMY45BYLY7WKPCCXUBZIEWEL24O2ZSWETZKH7ILPJFVAJ4LGWGQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967672448}
2020-06-15T11:30:06.105+0200 INFO piecestore uploaded {"Piece ID": "EUMY45BYLY7WKPCCXUBZIEWEL24O2ZSWETZKH7ILPJFVAJ4LGWGQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:30:14.130+0200 INFO piecestore upload started {"Piece ID": "PWPCQEYVG4VPL5LAB4CJPLR2BMUKA6JRPZCQQ3YNQWEFKAWAFA2A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967671424}
2020-06-15T11:30:14.150+0200 INFO piecestore uploaded {"Piece ID": "PWPCQEYVG4VPL5LAB4CJPLR2BMUKA6JRPZCQQ3YNQWEFKAWAFA2A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:30:18.058+0200 INFO piecestore upload started {"Piece ID": "7WL7SPCABXZUA4TDAVRCQKFR3WJ2IY2O7YCBLA73R25TFW5JRF4Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967670400}
2020-06-15T11:30:18.081+0200 INFO piecestore uploaded {"Piece ID": "7WL7SPCABXZUA4TDAVRCQKFR3WJ2IY2O7YCBLA73R25TFW5JRF4Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:30:21.307+0200 INFO piecestore upload started {"Piece ID": "2MY7Y4S6R7KHPYILQPP6I6YTY3SPAP5THCHSCLBDNVBUTWNSGYJA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967669376}
2020-06-15T11:30:21.332+0200 INFO piecestore uploaded {"Piece ID": "2MY7Y4S6R7KHPYILQPP6I6YTY3SPAP5THCHSCLBDNVBUTWNSGYJA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:30:32.894+0200 INFO piecestore upload started {"Piece ID": "2KS5574EOEYGFGO33TB627HZHE5M6TYYJ5U7LU4MCSJMIK2UCCTA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967668352}
2020-06-15T11:30:32.933+0200 INFO piecestore uploaded {"Piece ID": "2KS5574EOEYGFGO33TB627HZHE5M6TYYJ5U7LU4MCSJMIK2UCCTA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}
2020-06-15T11:30:44.507+0200 INFO piecestore upload started {"Piece ID": "DHNVF44FVWRLPUX76LNXXSL27N6B3DSXENJLYKSFJBNVLYNH7N5Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 869967667328}
2020-06-15T11:30:44.545+0200 INFO piecestore uploaded {"Piece ID": "DHNVF44FVWRLPUX76LNXXSL27N6B3DSXENJLYKSFJBNVLYNH7N5Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT"}

Also, in the updater log file, this is what I get after I restarted the service:

2020-06-15T11:19:52.079+0200 INFO Stop/Shutdown request received.
2020-06-15T11:20:42.246+0200 INFO Configuration loaded {"Location": "C:\Program Files\Storj\Storage Node\config.yaml"}
2020-06-15T11:20:42.417+0200 INFO Invalid configuration file key {"Key": "contact.external-address"}
2020-06-15T11:20:42.417+0200 INFO Invalid configuration file key {"Key": "server.address"}
2020-06-15T11:20:42.417+0200 INFO Invalid configuration file key {"Key": "server.private-address"}
2020-06-15T11:20:42.417+0200 INFO Invalid configuration file key {"Key": "storage.path"}
2020-06-15T11:20:42.417+0200 INFO Invalid configuration file key {"Key": "storage.allocated-disk-space"}
2020-06-15T11:20:42.417+0200 INFO Invalid configuration file key {"Key": "storage.allocated-bandwidth"}
2020-06-15T11:20:42.417+0200 INFO Invalid configuration file key {"Key": "operator.email"}
2020-06-15T11:20:42.417+0200 INFO Invalid configuration file key {"Key": "operator.wallet"}
2020-06-15T11:20:42.417+0200 INFO Invalid configuration file value for key {"Key": "log.development"}
2020-06-15T11:20:42.418+0200 INFO Invalid configuration file value for key {"Key": "log.level"}
2020-06-15T11:20:42.418+0200 INFO Invalid configuration file value for key {"Key": "log.output"}
2020-06-15T11:20:42.418+0200 INFO Invalid configuration file value for key {"Key": "log.caller"}
2020-06-15T11:20:44.549+0200 INFO Downloading versions. {"Server Address": "https://version.storj.io"}
2020-06-15T11:20:46.397+0200 INFO Version is up to date. {"Service": "storagenode"}
2020-06-15T11:20:46.397+0200 INFO Version is up to date. {"Service": "storagenode-updater"}