From what I can tell, upload success rate depends on a couple of factors: bandwidth and, last but not least, latency…
Bandwidth isn't that important unless you are handling more requests than your connection can keep up with, and even my connection barely exceeds what yours can do… so let's set bandwidth aside; it's a factor, but not the deciding one.
Latency is a much more complex subject, but let's disregard egress latency, as most people have little trouble with that.
What does the ingress latency consist of? First, the internet connection type… gear and whatnot, which most of the time is a fixed number, or fairly static; locally I get like 5-10ms. So let's assume, since we are pretty close in global geographic terms, that we are even in latency to different points in the world.
Your DSL, or whatever broadband it is, runs over copper rather than fiber, so its latency is generally higher… maybe 20-30ms.
So now we get to the local latency: even on a strained local network that is maybe 1-5ms, so basically nothing of note. We hit the storage node host, and then everything speeds up, because we go from semi-long-distance signalling to short or micro-distance, super-high-bandwidth links with far lower latency… and before anything we can explain moves, we're at the storage medium: HDD / SSD / NVMe / DRAM (RAID card). I know NVMe isn't a drive type… but I'm not sure what else to call it.
HDD: a 7200rpm drive has a seek time around 6ms (if idle). I'm grabbing this from memory, so it might be slightly off…
10k rpm drives are down around 2ms… basically, the faster it spins, the lower the access time, so 5400rpm owners be aware… but I'll assume 7200rpm since it's the common affordable version, while 5400rpm drives are more for mobile / external use cases.
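The spin-speed part of that is simple mechanics: on average the head waits half a revolution for the right sector, so the rotational share of the latency falls straight out of the RPM. A quick sketch of that relationship (rotational latency only; seek time comes on top of it):

```python
# Average rotational latency: half a revolution on average,
# so latency_ms = 30000 / rpm.
for rpm in (5400, 7200, 10000, 15000):
    print(f"{rpm:>6} rpm -> ~{30_000 / rpm:.1f} ms rotational latency")
```

So a 5400rpm drive gives up roughly 1.4ms to a 7200rpm one before the head has even moved.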
My 7200rpm drives can, during heavy loads, show me a backlog in netstat (which I'm not sure how accurate it is… but I'm sure it's bad enough to be relevant):
a 1200ms backlog, meaning that if our data stream arrives at that point, it may in theory sit for 1.2 seconds before the acknowledgement is sent all the way back. Add the network trip, let's say from us to the United States, which is maybe 70ms on DSL and maybe 50ms on fiber,
making a total round trip of about 1.3sec or 1300ms. This means over 90% of your latency bottleneck is in the storage array and its ability to deal with the incoming IO.
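A quick sanity check on that split, using the rough figures above (they're estimates from my setup, not measurements):

```python
# Where does the round trip actually go? Rough figures from above.
disk_backlog_ms = 1200   # netstat backlog on a strained 7200rpm array
network_rtt_ms  = 70     # e.g. DSL to the United States
lan_ms          = 5      # local network hop, worst case

total_ms = disk_backlog_ms + network_rtt_ms + lan_ms
print(f"total round trip: ~{total_ms} ms")                 # ~1275 ms
print(f"storage share: {disk_backlog_ms / total_ms:.0%}")  # ~94%
```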
Sure, if the drives are idle, then 6ms or less is a great time… but a 7200rpm drive at moderate use easily hits maybe 100ms of backlog… so even at the best of times, with the node running, your HDD latency is the primary factor…
So what does a paltry 100ms, or 1.2sec, actually do? Well, let's see…
A 40mbit ingress connection can transfer about 4.8 MB/s,
and each piece is 2.2MB, so less than half a second to transfer…
but if the disk can't keep up, you still receive the request, have to start working on it, and then it gets cancelled, wasting bandwidth and IO
and causing even more latency.
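Making that arithmetic explicit (piece size and link speed as assumed above; the 0.96 factor is just there to land on the 4.8 MB/s figure after protocol overhead):

```python
# Transfer time for one piece vs. the time the disk adds on top.
link_mbit      = 40
piece_mb       = 2.2
throughput_mbs = link_mbit / 8 * 0.96    # ~4.8 MB/s after overhead

transfer_s = piece_mb / throughput_mbs   # ~0.46 s on the wire
for backlog_s in (0.006, 0.100, 1.200):  # idle / moderate / strained HDD
    print(f"disk backlog {backlog_s * 1000:>5.0f} ms -> "
          f"~{transfer_s + backlog_s:.2f} s to transfer and commit")
```

With a strained array, the disk alone more than triples the time before the upload is actually done.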
If you instead write directly to a write cache, it fills slowly and gets flushed to disk as a sequential write, which reduces IO load, improves overall throughput, and keeps your latency down… way, way, way down. Even if your HDD raid is at a 1.2sec backlog, a dedicated write cache can take the data (within a few ms, in the case of a modern, semi-idle, low-grade SSD) and send the acknowledgement that the data is on disk…
Essentially, your system may download 100% of the file but fail to commit it to disk fast enough, and thus lose the race because of it… I dunno how often this happens, or whether it happens at all, but in theory it could…
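As a toy model of why the cache wins that race (the cutoff is a made-up number purely for illustration; nobody publishes the real one):

```python
# Time until the node can acknowledge "data is on disk",
# with and without a fast write cache in front of the HDD array.
network_rtt_ms = 70      # one network round trip, as above
hdd_backlog_ms = 1200    # strained HDD array
ssd_commit_ms  = 5       # modern semi-idle SSD write cache
race_cutoff_ms = 1000    # hypothetical deadline to still win the race

for name, commit_ms in (("direct to HDD", hdd_backlog_ms),
                        ("via SSD write cache", ssd_commit_ms)):
    ack_ms = network_rtt_ms + commit_ms
    verdict = "wins" if ack_ms < race_cutoff_ms else "loses"
    print(f"{name:<19} acks in ~{ack_ms:>4} ms -> {verdict} the race")
```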
Make yourself a write cache… you can most likely just install some software and dedicate like 512MB of RAM to it at first… or install a spare little SSD, or just make a partition on an SSD you've already got… you only need like 1GB and some software to turn it into a write cache for your storage (on Linux, something like bcache or lvmcache can do the SSD variant, for example).
Try that; any type of non-HDD write cache would most likely do wonders… because disk latency is the name of the game… at least at first.
I think I'm at 10-20ms peaks with my SSD write cache; most of the time it's less, though…
Anyway, downloading some software for it is free… and you can just use RAM for a test… so the only thing wasted is time, if it doesn't work.
Good luck.
On a side note… maybe the success rate is split between your two nodes… though I dunno… I only plan to run one massive node, so it's not really relevant to me…