You can’t win all races even if you have an M.2 NVMe PCIe 5.0 drive, an EPYC CPU, and 1 TB of RAM. You can’t be the closest node to every client requesting pieces that you store. The success rate will tend toward 100% but never reach it; in fact, it will decrease as the amount of stored data grows, because more stored pieces mean more requests and thus more races to lose. Losing races is normal.
If you mean QUIC is misconfigured, then it’s unrelated to used storage; it’s a network issue, where the satellite cannot reach your node through its UDP port.
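If that’s the case, the usual first check is that the node’s UDP port is actually open on the host. A minimal sketch, assuming a Linux host with ufw and the default port 28967 (adjust if you mapped a different one):

```
# Allow inbound UDP on the storagenode's port (28967 is the default; yours may differ)
sudo ufw allow 28967/udp
# Confirm something is listening on UDP on that port
sudo ss -ulnp | grep 28967
```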
It worked fine for months, then all of a sudden I got a QUIC error daily, and nothing changed. I’ll also add that the logs kept saying upload failed every 2 seconds. I rebooted; it worked fine for hours, then I checked again and it was the same QUIC issue with the log saying the same thing. So I figured my HDD was having issues and couldn’t keep up, since it was at 100% usage and had also started throwing errors telling me to repair the drive. So I just switched to my SSD for now.
I see. It seems your HDD started to struggle with the load, so your node cannot respond to requests in time. Interesting.
What QUIC-related errors are in your logs from that time?
I didn’t look too deep into the log; wish I would have. Maybe I’ll migrate back onto the HDD and see what happens. Not sure yet. Is there a command to be more specific in the log, or something like a debug command?
I did notice there was 32 GB of trash; I thought that was a lot at the time. And my HDD never settled: 100% usage for about six months straight. It seems to be working fine on my SSD, with none of the QUIC errors I had on the HDD, and literally nothing changed on my network. I’m keeping an active log reader running to try to catch anything that spikes.
You can filter the log for the relevant errors; replace /mnt/storj/storagenode/storagenode.log with your actual path if you redirected logs to the file. If you did not, the past logs are gone with the container.
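A hedged sketch (the search patterns and the container name `storagenode` are assumptions; adjust them to your setup):

```
# If you redirected logs to a file:
grep -iE "quic|upload failed" /mnt/storj/storagenode/storagenode.log | tail -n 50

# If logs still go to the container (named "storagenode" here):
docker logs storagenode 2>&1 | grep -iE "quic|upload failed" | tail -n 50
```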
My HDD is SMR; I just looked it up. Would it be possible to put the DB on my NVMe SSD, and would that help the drive avoid errors and stop being a bottleneck? It’s only 1 TB. I’ve noticed over a few threads that SMR drives fail eventually anyway. I’m only asking because I have no real use for storing anything else on the HDD, as it’s too slow for most of my needs.
The only known method to make it work is to run a second node with its own disk and identity in the same /24 subnet of public IPs (the nodes will split the ingress and reduce the load on the SMR node), or to reduce the number of concurrent requests (storage2.max-concurrent-requests:) to a low value (this will significantly slow down usage of this node); see the sketch below.
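A hedged sketch of that second option, in the node’s config.yaml (the value 5 is only an illustration; pick whatever your drive can sustain):

```
# config.yaml — limit simultaneous transfers so the SMR drive can keep up
storage2.max-concurrent-requests: 5
```

Keep in mind that uploads beyond the limit are rejected rather than queued, which is why this slows down usage of the node.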
We want to be as decentralized as possible, so we select only one node from each /24 subnet of public IPs for each piece of each segment, and an unvetted node has a 5% chance to be selected for uploads.
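Just to illustrate the /24 rule (this is not the satellite’s actual code; nodes.txt is a hypothetical file with one node IP per line): shuffling the candidates and keeping only the first node seen per /24 prefix collapses all nodes behind one subnet into a single pick.

```
# Shuffle candidate node IPs, then keep only the first one per /24 prefix
sort -R nodes.txt | awk -F. '!seen[$1"."$2"."$3]++'
```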