Abysmal Upload Success Rate

I’ve only been up for about 11 days, but 95% of my traffic is from the Saltlake satellite, and I’m in Europe… not much data to go on here yet, though.

Seems odd that the highest traffic comes from satellites in a different region than the node… I saw that mentioned elsewhere on the forum too…

It’s not that strange. Data is spread out worldwide; it’s just a matter of demand. You may not “win” as many races from satellites that are further away, but you still get data from them, and if the demand there is much higher, you’ll get more data from them than from nearer satellites.

Yeah, I just assumed that was it, until I read a recent post where somebody described the exact opposite: a node in the US seeing traffic mainly from the European satellite.

Kinda made me wonder, but for now I’ll just ascribe it to vetting…

On a related note, what script or tool do I use to check success ratios on a per-satellite basis?
I just remembered that I only checked my total success rate, not the per-satellite numbers…
I just know my traffic is from Saltlake, or that’s what the dashboard says…

Tried a few different suggestions with limited luck… I did get the ./audits_satellites.sh one to work…
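For what it’s worth, a rough per-satellite tally can also be pulled straight out of the node log with standard tools. This is only a hedged sketch, not one of the forum scripts: the log path, the exact line wording (“uploaded” / “upload canceled”), and the `"Satellite ID"` field format are assumptions to adjust to your own setup, and the sample lines and satellite IDs below are made up stand-ins for a real log.

```shell
# Hedged sketch: count upload successes vs. cancellations per satellite.
# Log path and line format are assumptions -- adjust to your own node.
LOG=/tmp/storagenode-sample.log

# sample lines standing in for a real log (satellite IDs are invented)
cat > "$LOG" <<'EOF'
2020-03-14T10:00:01Z INFO piecestore uploaded {"Satellite ID": "SatA", "Action": "PUT"}
2020-03-14T10:00:02Z INFO piecestore upload canceled {"Satellite ID": "SatA", "Action": "PUT"}
2020-03-14T10:00:03Z INFO piecestore uploaded {"Satellite ID": "SatB", "Action": "PUT"}
2020-03-14T10:00:04Z INFO piecestore uploaded {"Satellite ID": "SatA", "Action": "PUT"}
EOF

# extract the satellite ID from each matching line and tally the outcome
summary=$(awk '
  function sat_id() { match($0, /"Satellite ID": "[^"]+"/); return substr($0, RSTART + 17, RLENGTH - 18) }
  /upload canceled/ { cancel[sat_id()]++; next }
  /uploaded/        { ok[sat_id()]++ }
  END { for (id in ok) printf "%s success=%d canceled=%d\n", id, ok[id], cancel[id] + 0 }
' "$LOG" | sort)

echo "$summary"
```

Pointing `LOG` at the real storagenode log (or piping `docker logs storagenode` in) gives the same breakdown per actual satellite ID.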

I don’t check it per satellite as it doesn’t matter that much.

As for Europe vs. the US: the European satellite has had relatively high traffic throughout the month, but Saltlake started ramping up in recent days. Since your node is new, you only saw the last part, and the report from that other user is probably older and based on the first part of the month.

It’ll change over time. End users will use the satellites when they need them and there is really no way to predict usage. It’s also possible that your node has been vetted on saltlake, but not yet on others.

Well, that would certainly explain the Saltlake vs. Europe thing…

And I really should get around to checking my audits… that would make sense to check per satellite, right? Of course, vetting only happens once per satellite… so maybe it’s a waste of time, because dashboard :smiley:

If you don’t mind me plugging my own work, I recommend using the earnings calculator. It will show you vetting progress per satellite for now, but also includes a LOT of information on expected payouts.


Thanks, that’s an awesome suggestion… I had considered getting an earnings calculator but kinda just ended up doing the math in my head… I’ll give it a look and see if I can install it…

My server is a bit of a hack job and I’m trying to limit my downtime until the node is doing well…
I also only switched to Linux about a week before setting up the Storj node…

Not sure if I love or hate my decision to switch yet… still a bit of both worlds…
Gotta love it for ZFS though… but I’m very much a fish out of water, trying to grow legs… lol

Well, that’s how you learn. If you need any help just ask in that topic, then I’ll get notified as well.
I think Linux is an excellent choice for Storj and other things that need to run 24/7 on a stable base.

And what is the egress for this month with your 76% success rate?

They aren’t related; egress is download traffic.


I have seen a significant drop in success rate too. I understand this is how the network works now. Here are my thoughts and experiences:

  • My RPi 4 with a 5400 RPM WD Red drive, running over USB 3.0 on a SATA2 bridge, has a success rate of 4%, but it mainly comes from the pings I have there.
  • Some of my hybrid-cloud and bare-metal servers are doing quite well.

I’m sure the network works better this way. However, this race for data has a cost:

  • The RPi drive’s load is really heavy: ingress is very high, a lot of data is partially stored and then deleted, and you’re paid only for average monthly storage, not for data written to the disk.
  • A single bare-metal server’s success rate is 25%. No complaints. Sadly, I need to have RAID10 there, what a waste… :wink:
  • On some AWS-like cloud solutions I’m able to get above an 80% success rate, but I would need to pay for the remaining 20% too. Even though I’m not paying for network traffic, the math doesn’t really work out at $2.50/TB. I would really like to see some tokens for data that is stored and then deleted.

What are your thoughts?

Well, there are two sides. Whatever you are paid, the customer has to pay for. It’s industry standard to pay for storage and egress; if you made customers pay for ingress as well, you couldn’t be competitive.

That said, there are other things that can be tuned, in either the network or the node. From the network side, the RS settings could be changed to lower the over-provisioning of uploads. That would lead to fewer cancelled uploads but could also slow down the upload for the customer, so it’s a balancing game. It could also increase repair cost in cases where the success threshold isn’t reached but the minimum is, which would lead to segments needing repair more quickly.
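To put a rough number on the over-provisioning: the client starts each piece upload to more nodes than it needs and cancels the stragglers once enough pieces have landed. The figures below are the often-quoted defaults of that era (total 110, success threshold 80) and should be treated as an assumption, not gospel:

```shell
# Back-of-the-envelope over-provisioning math; total/success are assumed
# example values, not confirmed current network settings.
total=110     # nodes the client starts uploading a piece to in parallel
success=80    # uploads kept; the slowest remainder get canceled mid-transfer

canceled=$((total - success))
cancel_pct=$((100 * canceled / total))   # integer percentage of started uploads canceled
echo "canceled per segment: $canceled of $total started uploads (~${cancel_pct}%)"
```

That ~27% lines up reasonably well with the 20–25% cancel rates in the successrate outputs further down this thread: even a healthy node loses roughly that share of races by design. Shrinking `total` toward `success` would cut cancellations, at the price of less slack for slow or failing nodes.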

From the node side, it could be that simultaneous transfers slow each other down. In that case it might be useful to restrict them. Although lately we’ve been told not to use that setting, I can actually imagine it might make sense on nodes with very low success rates. I wouldn’t go that route at this point without some advice from Storj, though.

I’ve been thinking a bit about success rates, which seem a bit bimodal… I mean, from what I read, people are either above 70% or below 20% on upload or download success rates… that led me to wonder whether it’s related to IPv4 vs. IPv6, or having both… I haven’t had time to test it…

But it would fit what I’ve seen people write… people with both would then get near 100%, those with an IPv4-only setup would get 70–80%, and the IPv6-only people would be the last 20–30%.

Total gut feeling though, but definitely something I’m going to test when I get to it.

It does, however, seem that Storj’s load/storage balancing works pretty well, so early on a node with lots of resources to spare (upload/download bandwidth, storage space) might barely feel a bad success rate…

Being paid for data stored and data downloaded makes pretty good sense, and the race for data is only relevant when overall network load is low. But the fact that one can upload and delete data freely does seem like something that could be exploited to put strain on the network at low cost for the attacker…

I’m sure eventually a small cost will be added to avoid such things, but this stuff is still very new…
Besides, one has to assume that people uploading stuff will want to download it again… else there seems to be little point to it… many people might also just be testing these days…

With an abysmal success rate I would try to figure out the root cause of the issue… and yes, I would regard 20% as abysmal.

I run a dual stack…

Here are my stats using the successrate script since March 13th…

========== AUDIT ============== 
Critically failed:     0 
Critical Fail Rate:    0.000%
Recoverable failed:    0 
Recoverable Fail Rate: 0.000%
Successful:            2438 
Success Rate:          100.000%
========== DOWNLOAD =========== 
Failed:                9 
Fail Rate:             0.021%
Canceled:              52 
Cancel Rate:           0.120%
Successful:            43145 
Success Rate:          99.859%
========== UPLOAD ============= 
Rejected:              0 
Acceptance Rate:       100.000%
---------- accepted ----------- 
Failed:                11 
Fail Rate:             0.005%
Canceled:              45895 
Cancel Rate:           20.378%
Successful:            179309 
Success Rate:          79.617%
========== REPAIR DOWNLOAD ==== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            2586 
Success Rate:          100.000%
========== REPAIR UPLOAD ====== 
Failed:                144 
Fail Rate:             3.466%
Canceled:              704 
Cancel Rate:           16.943%
Successful:            3307 
Success Rate:          79.591%
========== DELETE ============= 
Failed:                0 
Fail Rate:             0.000%
Successful:            136423 
Success Rate:          100.000%

I’ll assume my higher download count is due to my node being new; otherwise the stats are very similar.

My failed-download rate is many times higher than yours, and your success rate is slightly better… what kind of bandwidth, geolocation, and system are you on?

I checked the errors on my failures; they seem to be related to satellites being unreachable, so I’m not sure I can do anything to prevent that from my end.
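In case anyone wants to do the same triage, the distinct failure reasons can be summarized with grep/sed/uniq. Hedged sketch only: the log path, the “download failed” wording, and the `"error"` field format are assumptions, and the sample lines and error texts below are invented for illustration.

```shell
LOG=/tmp/storagenode-errors.log   # example path; point this at your real log

# stand-in log lines (error strings are made up)
cat > "$LOG" <<'EOF'
2020-03-14T11:00:00Z ERROR piecestore download failed {"error": "context canceled"}
2020-03-14T11:00:05Z ERROR piecestore download failed {"error": "connection reset by peer"}
2020-03-14T11:00:09Z ERROR piecestore download failed {"error": "context canceled"}
EOF

# pull out each error string and count how often it occurs, most frequent first
errors=$(grep 'download failed' "$LOG" \
  | sed -n 's/.*"error": "\([^"]*\)".*/\1/p' \
  | sort | uniq -c | sort -rn)
echo "$errors"
```

A tally like this makes it quick to see whether failures cluster around one cause (timeouts, resets, unreachable peers) or are spread across many.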

Log from March 11th onward, on an IPv4 stack:
========== AUDIT ==============
Critically failed: 0
Critical Fail Rate: 0.000%
Recoverable failed: 0
Recoverable Fail Rate: 0.000%
Successful: 1002
Success Rate: 100.000%
========== DOWNLOAD ===========
Failed: 98
Fail Rate: 0.143%
Canceled: 32
Cancel Rate: 0.047%
Successful: 68481
Success Rate: 99.810%
========== UPLOAD =============
Rejected: 0
Acceptance Rate: 100.000%
---------- accepted -----------
Failed: 1
Fail Rate: 0.000%
Canceled: 57858
Cancel Rate: 24.133%
Successful: 181890
Success Rate: 75.867%
========== REPAIR DOWNLOAD ====
Failed: 0
Fail Rate: 0.000%
Canceled: 0
Cancel Rate: 0.000%
Successful: 0
Success Rate: 0.000%
========== REPAIR UPLOAD ======
Failed: 0
Fail Rate: 0.000%
Canceled: 182
Cancel Rate: 22.304%
Successful: 634
Success Rate: 77.696%
========== DELETE =============
Failed: 0
Fail Rate: 0.000%
Successful: 24196
Success Rate: 100.000%

It’s not due to IPv4 or IPv6. I ran dual stack for a while until my ISP swapped my modem for one that has no settings to open ports for IPv6 (thanks for nothing, ISP…). Anyway, my success rates have barely moved, so that’s not it. From everything I have seen, it is almost always related to either location or using network storage instead of local storage.


  • 4 x E5-4669 V4
  • 512GB RAM
  • Intel DC P3700 NVMe
  • 10Gbit/s Internet
  • OS : Ubuntu 18.04.3 LTS
  • Satellite: Europe-West-1
  • Location : Denmark

It’s about the same for me in Germany: IPv4 only, Ryzen 2600, 16GB RAM, 1Gbit/s internet.

I am IPv4 only and my upload success is 2%. Download is near 100% successful, probably due to the SSD.

Maybe your node is still in the vetting process. Also, do you use network-attached storage?