Successrate.sh comparison thread

I think it will, actually. Bandwidth contracts are created for an increasingly large part of a piece throughout the transfer, and I'm pretty sure that in this scenario the bandwidth contract for the entire piece was signed by the uplink and will be sent on to the satellite to be settled for payout. Even transfers that only have signed bandwidth contracts for part of a piece are paid, I believe. Though I could be wrong. Anyone feel free to correct me if I am.

@anon27637763: you only get DEBUG lines if you set the log level to debug in the config.yaml. It’s on INFO by default.


You need to switch the log level to debug in config.yaml:
log.level: debug
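For example, on a Docker node (just a sketch, assuming the default container name storagenode; adjust the name and paths to your own setup), edit that line in config.yaml and then restart the container so the new log level takes effect:

# restart after editing config.yaml; -t 300 allows a graceful shutdown
docker restart -t 300 storagenode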


Anyone else noticing bad performance on the upload stats?
Mine have dropped to around 30%; I used to be at 95% or more. Is it my node performing badly on the newer releases, is it the vetting of the new satellite, or is it something else?

The success rate is good for statistics, but we all need to understand that not all pieces are equal in size. A node can have fewer successful pieces that are nonetheless bigger and heavier than a larger number of smaller pieces; for example, winning one 2 MB piece moves more data than winning ten 100 KB pieces, even though it counts as only one success. The logs don't show how big these pieces are, so apparently this can't be measured. Delete operations running at the same time also play a role, since they take up HDD speed as well.

Here’s an update with the latest script as well as the latest node version. Important for understanding my numbers: I run two nodes at once, because I was disqualified on two of the 5 satellites due to a RAM/swap issue, which is why I opened a new node. Some of the traffic is therefore split between the two nodes, though not the traffic from the new satellite or from the satellites that are paused on the other node.
Stats are since the update:

Hardware : Synology DS1019+ (Intel Celeron J3455, 1.5GHz, 8GB RAM) with 20.9 TB total in an SHR RAID
Bandwidth : Home ADSL with 40mbit/s down and 16mbit/s up
Location : Amsterdam
Node Version : v0.34.6
Uptime : 108h 30m
max-concurrent-requests : DEFAULT
successrate.sh :

========== AUDIT ============== 
Critically failed:     0 
Critical Fail Rate:    0.000%
Recoverable failed:    0 
Recoverable Fail Rate: 0.000%
Successful:            1134 
Success Rate:          100.000%
========== DOWNLOAD =========== 
Failed:                3 
Fail Rate:             0.021%
Canceled:              4 
Cancel Rate:           0.028%
Successful:            14174 
Success Rate:          99.951%
========== UPLOAD ============= 
Rejected:              0 
Acceptance Rate:       100.000%
---------- accepted ----------- 
Failed:                26 
Fail Rate:             0.013%
Canceled:              63744 
Cancel Rate:           32.874%
Successful:            130135 
Success Rate:          67.113%
========== REPAIR DOWNLOAD ==== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            54 
Success Rate:          100.000%
========== REPAIR UPLOAD ====== 
Failed:                1 
Fail Rate:             0.037%
Canceled:              716 
Cancel Rate:           26.776%
Successful:            1957 
Success Rate:          73.186%
========== DELETE ============= 
Failed:                0 
Fail Rate:             0.000%
Successful:            19029 
Success Rate:          100.000%

Hardware : Supermicro server, 2x Intel Xeon X5687, 100GB RAM. 6x4TB hard drives in raidz2 with two SSDs for L2ARC and ZIL. The node runs inside a VM with 32GB RAM. The node is not the only VM there.
Bandwidth : Home GPON with 1gbps down and 600mbps up. Backup connection is DOCSIS with 100mbps down and 12mbps up
Location : Lithuania
Version : 0.34.6
Uptime : 120h32m42s

========== AUDIT ============== 
Critically failed:     0 
Critical Fail Rate:    0.000%
Recoverable failed:    0 
Recoverable Fail Rate: 0.000%
Successful:            3813 
Success Rate:          100.000%
========== DOWNLOAD =========== 
Failed:                19 
Fail Rate:             0.033%
Canceled:              16 
Cancel Rate:           0.027%
Successful:            58326 
Success Rate:          99.940%
========== UPLOAD ============= 
Rejected:              0 
Acceptance Rate:       100.000%
---------- accepted ----------- 
Failed:                14 
Fail Rate:             0.005%
Canceled:              88899 
Cancel Rate:           30.090%
Successful:            206528 
Success Rate:          69.905%
========== REPAIR DOWNLOAD ==== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            26143 
Success Rate:          100.000%
========== REPAIR UPLOAD ====== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              1870 
Cancel Rate:           35.906%
Successful:            3338 
Success Rate:          64.094%
========== DELETE ============= 
Failed:                0 
Fail Rate:             0.000%
Successful:            66111 
Success Rate:          100.000%

Is this why I get all zeroes??? The dashboard says I have 0.7 TB stored.
I just switched to log.level: debug

If you use a non-default container name, you need to pass it as a parameter to the script. If you have logs written to a file, you need to pass the path to that file as a parameter instead; this is the default setup on Windows GUI installs.
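A sketch of how that looks (assuming the script takes the container name or a log file path as its first argument; check the script's usage notes, and the log path below is just a placeholder for your real one):

# non-default container name
./successrate.sh storagenode2
# logs redirected to a file (the default on Windows GUI installs)
./successrate.sh /path/to/storagenode.log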

If you are on Ubuntu, you should run the script with root privileges:
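For example (assuming the script is in the current directory and the node uses the default container name):

sudo ./successrate.sh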


Thx, sudo did the trick.

A new update is also out for Docker, v0.35.3… I'll wait 24 hours and post an update here to see if anything changes, and then do a longer-term update after a week.

Hardware : Raspberry Pi 4 (4GB RAM), 1x10TB WD Elements Desktop HDD connected by USB 3.0
Bandwidth : 200mbps down and 20mbps up.
Location : USA
Version : 0.35.3
Uptime : 14h21m
successrate.sh :

========== AUDIT ==============
Critically failed: 0
Critical Fail Rate: 0.000%
Recoverable failed: 0
Recoverable Fail Rate: 0.000%
Successful: 298
Success Rate: 100.000%
========== DOWNLOAD ===========
Failed: 604
Fail Rate: 10.269%
Canceled: 477
Cancel Rate: 8.109%
Successful: 4801
Success Rate: 81.622%
========== UPLOAD =============
Rejected: 0
Acceptance Rate: 100.000%
---------- accepted -----------
Failed: 5
Fail Rate: 0.053%
Canceled: 8206
Cancel Rate: 86.379%
Successful: 1289
Success Rate: 13.568%
========== REPAIR DOWNLOAD ====
Failed: 0
Fail Rate: 0.000%
Canceled: 0
Cancel Rate: 0.000%
Successful: 770
Success Rate: 100.000%
========== REPAIR UPLOAD ======
Failed: 0
Fail Rate: 0.000%
Canceled: 923
Cancel Rate: 83.605%
Successful: 181
Success Rate: 16.395%
========== DELETE =============
Failed: 0
Fail Rate: 0.000%
Successful: 8594
Success Rate: 100.000%


Hi

Just a comment for newbies like me, because I don't see this mentioned in the thread and it's maybe not obvious to everyone at first: UPLOAD means ingress, and DOWNLOAD means egress traffic.
I noticed a significant increase in the success rate for egress after my 1 TB node went full: it was around 35%, and now it is above 75%. It's maybe related to the USB 3.0 connection, as it was probably too much to handle both uploads and downloads at the same time, but it's still good to know. So don't panic if your stats are low at first :slight_smile:

I’ve noticed a nice uptick in upload success rate since the latest version. See below:

Hardware : Raspberry Pi 4 (4GB RAM), 1x10TB WD Elements Desktop HDD connected by USB 3.0
Bandwidth : 200mbps down and 20mbps up.
Location : USA
Version : 1.1.1
Uptime : 9h24m
successrate.sh :

========== AUDIT ==============
Critically failed: 0
Critical Fail Rate: 0.000%
Recoverable failed: 0
Recoverable Fail Rate: 0.000%
Successful: 117
Success Rate: 100.000%
========== DOWNLOAD ===========
Failed: 699
Fail Rate: 11.843%
Canceled: 562
Cancel Rate: 9.522%
Successful: 4641
Success Rate: 78.634%
========== UPLOAD =============
Rejected: 0
Acceptance Rate: 100.000%
---------- accepted -----------
Failed: 37
Fail Rate: 0.033%
Canceled: 52730
Cancel Rate: 47.318%
Successful: 58671
Success Rate: 52.649%
========== REPAIR DOWNLOAD ====
Failed: 0
Fail Rate: 0.000%
Canceled: 0
Cancel Rate: 0.000%
Successful: 491
Success Rate: 100.000%
========== REPAIR UPLOAD ======
Failed: 0
Fail Rate: 0.000%
Canceled: 399
Cancel Rate: 47.163%
Successful: 447
Success Rate: 52.837%
========== DELETE =============
Failed: 0
Fail Rate: 0.000%
Successful: 1290
Success Rate: 100.000%


The network is getting more load at the moment, I think… or at least my ingress has been climbing for the last 24+ hours or so.

Stuff like that will have a huge effect on low success rates; when other nodes are busy, it's easier to win the race to be the first to get the data.

My egress has been abysmal lately, though…

I'm running the latest update, and what I see is that my download success rate has been stable at 99% over the last couple of updates. But my upload success rate keeps going down, now at 53.72%… before the 1.x update it was at 60%+, and even higher before that.
I think this might be due to more SNOs in my area uploading faster now?

Hardware : Synology DS1019+ (Intel Celeron J3455, 1.5GHz, 8GB RAM) with 20.9 TB total in an SHR RAID
Bandwidth : Home ADSL with 40mbit/s down and 16mbit/s up
Location : Amsterdam
Node Version : v1.1.1
Uptime : 73h 52m
max-concurrent-requests : DEFAULT
successrate.sh :
========== AUDIT ==============
Critically failed: 0
Critical Fail Rate: 0.000%
Recoverable failed: 0
Recoverable Fail Rate: 0.000%
Successful: 719
Success Rate: 100.000%
========== DOWNLOAD ===========
Failed: 3
Fail Rate: 0.014%
Canceled: 14
Cancel Rate: 0.065%
Successful: 21466
Success Rate: 99.921%
========== UPLOAD =============
Rejected: 0
Acceptance Rate: 100.000%
---------- accepted -----------
Failed: 67
Fail Rate: 0.024%
Canceled: 128964
Cancel Rate: 46.252%
Successful: 149800
Success Rate: 53.724%
========== REPAIR DOWNLOAD ====
Failed: 0
Fail Rate: 0.000%
Canceled: 0
Cancel Rate: 0.000%
Successful: 186
Success Rate: 100.000%
========== REPAIR UPLOAD ======
Failed: 0
Fail Rate: 0.000%
Canceled: 1247
Cancel Rate: 31.097%
Successful: 2763
Success Rate: 68.903%
========== DELETE =============
Failed: 0
Fail Rate: 0.000%
Successful: 5531
Success Rate: 100.000%

Personally, I find Storj demands a lot from my drives, so I can see my success rates go up and down depending on what kind of other work I'm doing on the drives.

Hardware : Dual Xeon 5630, 48GB RAM, 5-drive raidz1 with L2ARC SSD
Bandwidth : 400mbit full duplex fiber
Location : Denmark
Node Version : v1.1.1
Uptime : 72hr or so
log dates : 08/04 - 11/04

========== AUDIT ==============
Critically failed:     0
Critical Fail Rate:    0.000%
Recoverable failed:    0
Recoverable Fail Rate: 0.000%
Successful:            489
Success Rate:          100.000%
========== DOWNLOAD ===========
Failed:                10
Fail Rate:             0.056%
Canceled:              15
Cancel Rate:           0.084%
Successful:            17826
Success Rate:          99.860%
========== UPLOAD =============
Rejected:              0
Acceptance Rate:       100.000%
---------- accepted -----------
Failed:                1
Fail Rate:             0.000%
Canceled:              96984
Cancel Rate:           22.647%
Successful:            331251
Success Rate:          77.352%
========== REPAIR DOWNLOAD ====
Failed:                0
Fail Rate:             0.000%
Canceled:              0
Cancel Rate:           0.000%
Successful:            11
Success Rate:          100.000%
========== REPAIR UPLOAD ======
Failed:                0
Fail Rate:             0.000%
Canceled:              548
Cancel Rate:           23.951%
Successful:            1740
Success Rate:          76.049%
========== DELETE =============
Failed:                0
Fail Rate:             0.000%
Successful:            8769
Success Rate:          100.000%

@tankmann, most of the data comes from the saltlake satellite. I see a similar drop in success rate for my node, and I am based in the Netherlands as well.

So it could also be that this satellite is further away compared to the stefan-benten satellite.


Gotcha, I didn't look at that at all; very good point, feels like that's the explanation. Thanks for sharing.
That also means: what comes onto the node I download just fine, but the traffic that goes back to the users, which is your assumption I guess, also goes back to the US area, which means my uploads go down because they are slower.