Many canceled downloads

I have my golden ticket again on my node and a huge amount of downloads there (~120 GB/day on a 1.5 TB node; normal was about 10x smaller). However, I see a cancel rate of ~25-50%. Tons of canceled downloads like this:

2025-09-03T15:26:28Z	INFO	piecestore	download canceled	{"Process": "storagenode", "Piece ID": "R6ZEQVNMGCJVAH7QJEN3PBAMYHS62AOQEEJFUNDEN7BK35OQKWPQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 130304, "Size": 1989888, "Remote Address": "3.255.35.220:46676", "reason": "downloaded size (1218560 bytes) does not match received message size (1989888 bytes)"}
2025-09-03T15:26:28Z	INFO	piecestore	download canceled	{"Process": "storagenode", "Piece ID": "R6ZEQVNMGCJVAH7QJEN3PBAMYHS62AOQEEJFUNDEN7BK35OQKWPQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 133888, "Size": 1986304, "Remote Address": "3.255.35.220:46694", "reason": "downloaded size (655360 bytes) does not match received message size (1986304 bytes)"}
2025-09-03T15:26:28Z	INFO	piecestore	download canceled	{"Process": "storagenode", "Piece ID": "R6ZEQVNMGCJVAH7QJEN3PBAMYHS62AOQEEJFUNDEN7BK35OQKWPQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 1397760, "Size": 722432, "Remote Address": "3.255.35.220:46676", "reason": "downloaded size (655360 bytes) does not match received message size (722432 bytes)"}
2025-09-03T15:26:28Z	INFO	piecestore	download canceled	{"Process": "storagenode", "Piece ID": "R6ZEQVNMGCJVAH7QJEN3PBAMYHS62AOQEEJFUNDEN7BK35OQKWPQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 823552, "Size": 1296640, "Remote Address": "3.255.35.220:50214", "reason": "downloaded size (655360 bytes) does not match received message size (129

Always the same remote address, always the same piece.
However, there are occasionally records like this as well:

2025-09-03T15:32:35Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "R6ZEQVNMGCJVAH7QJEN3PBAMYHS62AOQEEJFUNDEN7BK35OQKWPQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 1352960, "Size": 767232, "Remote Address": "3.255.35.220:48522"}

So the same remote address, the same piece. (And of course a lot of good downloads of random pieces from random addresses; no reason to post those here, I think.)

  1. Is something wrong with my node/configuration? Could I improve something?
  2. I have a huge download volume now, so I don’t complain, but why is this one client so obsessed with this one piece? It looks like it is downloading the same piece again and again. Most of the downloads fail, but some are successful, so it must have the piece downloaded already.

Nothing is wrong. Someone may simply be faster than you. Check your success rate. If it is over 90%, there is nothing to worry about.
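As a rough sketch (assuming the log lines look like the ones above and are available in a file, here hypothetically named node.log), you could count successful vs. canceled GETs per remote address like this:

```python
# Count successful vs. canceled GET downloads per remote IP from a storagenode log.
# Assumes the log file path and the INFO line format shown in the post above.
import json
import re
from collections import Counter

LOG_PATH = "node.log"   # hypothetical path; adjust to your node's log location

ok, canceled = Counter(), Counter()

with open(LOG_PATH, encoding="utf-8", errors="replace") as f:
    for line in f:
        # The JSON payload is the part between the first '{' and the end of the line.
        m = re.search(r'\{.*\}', line)
        if not m:
            continue
        try:
            fields = json.loads(m.group(0))
        except json.JSONDecodeError:
            continue
        if fields.get("Action") != "GET":
            continue
        ip = fields.get("Remote Address", "?").split(":")[0]
        # Check "download canceled" first: canceled lines also contain the word
        # "downloaded" inside the reason field, so order matters here.
        if "download canceled" in line:
            canceled[ip] += 1
        elif "downloaded" in line:
            ok[ip] += 1

for ip in sorted(set(ok) | set(canceled)):
    total = ok[ip] + canceled[ip]
    print(f"{ip}: {ok[ip]}/{total} successful ({100 * ok[ip] / total:.0f}%)")
```

The same filter on "Piece ID" instead of "Remote Address" would show whether the cancellations really concentrate on that one piece.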

If you want to optimize it further, as a matter of sport, you can see if you can reduce latency: offload metadata, enable SQM, depending on your connection, etc.


My success rate was 99%+ before this rich period. Now it has dropped to 50-75% (I don’t complain, as I still have 10x more download, but…).
Metadata is offloaded on this node (OK, only to an SSD on SATA, but I don’t want to go crazy) and I’m not going to spend a huge amount of my time and/or money to get a 2% better rate. And yes, it’s a very weak node (N150 CPU and actually 32 GB of free RAM), but this is simply a case of ‘old 4TB disk on a Proxmox server that is running anyway’.
Anyway, thanks for confirming that ‘nothing is wrong’.

Try tracerouting the IPs this piece is fetched from, preferably at times when it is being downloaded.

You haven’t switched to hashstore yet, have you?

No, I did not. I updated my Proxmox from 8.4 to 9 on Saturday and it was a disaster: not able to run Docker there, not able to start VMs with PCIe passthrough. I fresh-installed v8.4 again and finished setting everything up on Monday, but everything is set up as before: the same disks (one rotating + 2 SSDs mirrored for metadata), the same Storj version (1.135.5) as before. Right from the start there were these big downloads (nice) and a massive drop in success rate (not so nice, but whatever).

ec2-3-255-35-220.eu-west-1.compute.amazonaws.com
No luck here; the route to this address gets blocked early. I have just a few hops within my provider’s network (under 1 ms) and then timeouts.

Eh, maybe hping3 then? It’s just a measure of network topology distance. Any place that is 200 ms round trip or more would be very far; 100 ms is far enough to explain cancellations like that.
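If hping3 isn’t handy, a rough Python sketch that times a TCP handshake attempt can serve the same purpose (the address is the client from the logs above; the port is an arbitrary assumption, since even a refused connection reveals the round-trip time, while a timeout suggests the path is filtered):

```python
# Rough RTT probe via TCP handshake timing, a rough substitute for an hping3 SYN probe.
# A "refused (RST)" result still gives a usable RTT; a timeout means no answer came back.
import socket
import time

HOST = "3.255.35.220"   # remote address seen in the log lines above
PORT = 443              # assumed port; any port works if a RST comes back

for attempt in range(5):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    start = time.monotonic()
    try:
        s.connect((HOST, PORT))
        result = "connected"
    except ConnectionRefusedError:
        result = "refused (RST)"        # handshake was answered, RTT is valid
    except socket.timeout:
        result = "timeout (filtered?)"  # no answer, RTT not measurable
    except OSError as e:
        result = f"error: {e}"
    finally:
        s.close()
    elapsed_ms = (time.monotonic() - start) * 1000
    print(f"attempt {attempt + 1}: {result}, {elapsed_ms:.1f} ms")
```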