Abysmal Upload Success Rate

A post was split to a new topic: Success rate does not match the received data

The corruption issues with SQLite DBs have been fixed in recent versions, just FYI.


This is great news, thanks!

Do you know if they also fixed Docker starting before the users' disks are mounted?

To my understanding, the default mount in unraid's Docker implementation is -v. I have never had any issues, but since reading about it here, I have looked into it. A mitigation would be to store the container's appdata on an SSD cache and cache all DB and storage shares as well. This may well be why I have not had any issues: if the cache doesn't exist, the appdata can't mount, and therefore the service is not available to the network.

Manually replacing the -v with --mount requires manually setting additional values; it's doable, but you would need to know about the issue and where to look under the advanced settings.
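
For anyone who wants to try it, here is a rough sketch of the difference, stripped down to just the mount flag (a real node needs more flags, and the paths and container name are examples; adjust them to your own shares). The practical difference is that --mount type=bind refuses to start the container when the source path doesn't exist, while -v silently creates an empty directory in its place:

# -v creates /mnt/user/storj as an empty dir if the disk isn't mounted yet
docker run -d --name storagenode \
  -v /mnt/user/storj:/app/config \
  storjlabs/storagenode:latest

# --mount type=bind fails fast instead, so the node never runs against an empty path
docker run -d --name storagenode \
  --mount type=bind,source=/mnt/user/storj,destination=/app/config \
  storjlabs/storagenode:latest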

Coming from someone who runs storj on unraid with no issues, my recommendation would be to run everything through a cache, never allow HDDs to spin down, and manually start the container after a reboot to ensure all shares are available.
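
A small guard for that manual start, as a sketch (the share path and container name are examples; adjust to your setup), would be to confirm the storage share is actually populated before starting:

# start the node only once the storage share is really there
# (config.yaml lives in the node's config directory)
if [ -e /mnt/user/storj/config.yaml ]; then
    docker start storagenode
else
    echo "storj share not mounted yet; not starting the node"
fi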

I replaced the -v with --mount via docker run on the command line. The question is, will this persist across the next reboot? I'm unclear on where this config lives on unraid, but I'm investigating when I have time.

My upload success rate has increased to 47.484% after the change.

I don't have issues with unraid either, aside from the SQLite DB corruption, which, like you said, is fixed. I'm very careful with this container, treating it with white gloves.

Replacing it via docker run won't be permanent; unraid's configuration will override it. You should be able to replace it in the advanced configuration of the container in unraid, though. Just delete the existing mount config and manually add it to the additional configuration.

Thanks, I figured as much. I just made that change, adding the --mount entries under Extra Parameters and removing the built-in directory entries.
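
For anyone following along, the Extra Parameters field then ends up holding something like this, all on one line (the source paths are examples; use your own identity and storage shares):

--mount type=bind,source=/mnt/user/appdata/storagenode/identity,destination=/app/identity --mount type=bind,source=/mnt/user/storj,destination=/app/config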

Another variable which in some cases might be very relevant, even if it's fairly self-evident: power management, i.e. spinning rust that isn't spinning when an upload request comes in. That would put a node in a near-impossible position to actually win an upload contest.
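
If you want to rule that out, one common approach on Linux is to disable the drive's power-saving timers so it never spins down (/dev/sdX is a placeholder for the actual device; unraid also exposes a per-disk spin-down delay in its own settings):

# keep the drive spun up at all times
hdparm -B 255 /dev/sdX   # disable Advanced Power Management, if the drive supports it
hdparm -S 0 /dev/sdX     # disable the standby (spindown) timeout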

Of course, another factor that should be taken into account is that only a certain number of pieces exist on the network, and depending on the competition and what it is doing, upload/download rates will fluctuate with network traffic.

More overall Storj network pressure means better success rates further down the "ladder".

Anyway, I read a few threads on upload ratios and didn't see anyone mention the spinning-rust or network factors, and I found them pretty relevant.

The spindle-drive / power-management issue could very well give some nodes abysmal success rates.

========== UPLOAD =============
Rejected: 0
Acceptance Rate: 100.000%
---------- accepted -----------
Failed: 0
Fail Rate: 0.000%
Canceled: 479
Cancel Rate: 1.058%
Successful: 44779
Success Rate: 98.942%

20 hours since last restart.
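
(For reference, the success rate in that output is just successful transfers divided by all accepted attempts: 44779 / (44779 + 479 + 0) ≈ 98.942%, so the canceled long-tail uploads are the only thing keeping it off 100%.)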

Any idea how you manage a success rate of over 98%?

I mean, I'm getting this on 400 Mbit/400 Mbit, though I am running raidz, which might cause some latency.

========== AUDIT ==============
Critically failed: 0
Critical Fail Rate: 0.000%
Recoverable failed: 0
Recoverable Fail Rate: 0.000%
Successful: 568
Success Rate: 100.000%
========== DOWNLOAD ===========
Failed: 46
Fail Rate: 0.132%
Canceled: 18
Cancel Rate: 0.052%
Successful: 34872
Success Rate: 99.817%
========== UPLOAD =============
Rejected: 0
Acceptance Rate: 100.000%
---------- accepted -----------
Failed: 0
Fail Rate: 0.000%
Canceled: 38376
Cancel Rate: 24.548%
Successful: 117951
Success Rate: 75.451%
========== REPAIR DOWNLOAD ====
Failed: 0
Fail Rate: 0.000%
Canceled: 0
Cancel Rate: 0.000%
Successful: 0
Success Rate: 0.000%
========== REPAIR UPLOAD ======
Failed: 0
Fail Rate: 0.000%
Canceled: 131
Cancel Rate: 23.435%
Successful: 428
Success Rate: 76.565%
========== DELETE =============
Failed: 0
Fail Rate: 0.000%
Successful: 17655
Success Rate: 100.000%

Not right now.
I am wondering too.

Where’s your node’s physical location?

Denmark; maybe the 24% is getting lost in the Atlantic…

It’s located in Germany.

That's interesting… I'll have to wait until I'm sure my node is vetted.
Also, I kinda got you two mixed up, so I answered dragonhogan's question thinking it was jam. xD

Maybe after it's vetted it will go up… else I'll have to dig into the reason behind it, though I suspect the extra latency from my raidz could have something to do with it.

You don't have 10k or 15k RPM drives, or run on an SSD, do you?

I'm running a 5-drive raidz of 7200 RPM SATA drives with an SSD L2ARC, on dual Xeon 5640s, I think… so my memory and CPU frequency aren't that great, but I don't suspect that has much of an effect.

It would get really interesting if you are on faster drives… xD

The majority of my node's Tardigrade traffic comes from the European satellite, which is a big reason you're seeing such nice success rates, being so close to it. I'm in the US, and my success rates as a whole are around 20%.

I've only been up for about 11 days, but 95% of my traffic is from the saltlake satellite, and I'm in Europe. But yeah, not much data to go on here yet.

It seems odd that the highest traffic comes from satellites in other regions… I saw that mentioned somewhere else on the forum as well…

It's not that strange. Data is spread out worldwide; it's just a matter of demand. You may not "win" as many races from further-away satellites, but you still get data from them. And if the demand is much higher, you'll get more data from them than from nearer satellites.

Yeah, I assumed that was it, until I read a recent post where somebody said the exact opposite: having a node in the US and seeing traffic mainly from the European satellite.

Kinda made me wonder, but for now I'll just ascribe it to vetting…

On a related note, what script or tool do I use to check success ratios on a per-satellite basis?
I just remembered that I only checked my total success rate, not the per-satellite rates…
I just know my traffic is from saltlake, or that's what the dashboard says…

I tried a few different suggestions with limited luck… I did get the ./audits_satellites.sh one to work…
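
In case it helps, a rough sketch can also pull upload counts for a single satellite straight from the logs. This is only an assumption-laden example: it assumes the default container name storagenode and the log lines recent node versions print, and <satellite-id> is a placeholder for an ID taken from the dashboard:

SAT="<satellite-id>"   # placeholder: paste the satellite ID here
ok=$(docker logs storagenode 2>&1 | grep "$SAT" | grep -c 'uploaded')
canceled=$(docker logs storagenode 2>&1 | grep "$SAT" | grep -c 'upload canceled')
echo "uploads ok: $ok  canceled: $canceled"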

I don't check it per satellite, as it doesn't matter that much.

As for Europe vs. the US: the European satellite has had relatively high traffic throughout the month, but saltlake started ramping up in recent days. Since your node is new, you only saw the last part. The report from that other user is probably older and based on the first part of the month.

It'll change over time. End users will use the satellites when they need them, and there is really no way to predict usage. It's also possible that your node has been vetted on saltlake but not yet on the others.