Successrate.sh comparison thread

You are right, I am in the Netherlands, so I will naturally have a latency advantage. I wouldn't say it's a disk issue (unless the disk is nearly dying). My two nodes with ~10-year-old 1TB WD Greens get great percentages. I do use ZFS on them, so that may improve performance.

Hi @BrightSilence. Thanks for responding.

It's interesting that most of the test traffic originates from Germany. Isn't the Storj team in Atlanta, GA?

You are right about the slow disk. I misspoke when I said it was a 1TB drive; it's actually 2TB. Here is the exact disk (Amazon link) I'm using; I bought it specifically for my Storj node. But is the latency/throughput of the disk really a gating factor compared to the internet connection? I assumed I could get away with a slower disk. If I had known it made such a difference, I would have bought an SSD or something.

I would say location is by far the biggest reason for the difference. The HDD is a distant second to that. Buying an SSD is not worth it. It would be really hard to earn that investment back.

Hardware : Custom server (2x Xeon E5-2623 v3, 12TB SATA)
Bandwidth : 1000/1000 Mb/s
Location : Switzerland
Node Version : v0.26.2
Uptime : 142 h
max-concurrent-requests : 20
successrate.sh :

========== AUDIT ============= 
Successful:           2405 
Recoverable failed:   0 
Unrecoverable failed: 0 
Success Rate Min:     100.000%
Success Rate Max:     100.000%
========== DOWNLOAD ========== 
Successful:           110547 
Failed:               92 
Success Rate:         99.917%
========== UPLOAD ============ 
Successful:           157880 
Rejected:             0 
Failed:               499 
Acceptance Rate:      100.000%
Success Rate:         99.685%
========== REPAIR DOWNLOAD === 
Successful:           0 
Failed:               85 
Success Rate:         0.000%
========== REPAIR UPLOAD ===== 
Successful:           338 
Failed:               0 
Success Rate:         100.000%

v.0.26.2

Hardware : Qnap TS-1277 AMD Ryzen 5 1600 6 cores/12 threads 3.2 GHz processor (Turbo Core 3.6 GHz), 64 GB RAM
64 TB HDD, raid 5
1 TB NVMe cache, raid 0
2 TB Qtier SSD
Location : Oslo
Uptime : 241h7m13s
max-concurrent-requests : Default
Bandwidth : Fibre 500/500

> ========== AUDIT =============
> Successful: 2090
> Recoverable failed: 0
> Unrecoverable failed: 0
> Success Rate Min: 100.000%
> Success Rate Max: 100.000%
> ========== DOWNLOAD ==========
> Successful: 159948
> Failed: 12
> Success Rate: 99.993%
> ========== UPLOAD ============
> Successful: 241225
> Rejected: 0
> Failed: 3838
> Acceptance Rate: 100.000%
> Success Rate: 98.434%
> ========== REPAIR DOWNLOAD ===
> Successful: 0
> Failed: 0
> Success Rate: 0.000%
> ========== REPAIR UPLOAD =====
> Successful: 513
> Failed: 0
> Success Rate: 100.000%

New update v.0.27.1

Hardware : Qnap TS-1277 AMD Ryzen 5 1600 6 cores/12 threads 3.2 GHz processor (Turbo Core 3.6 GHz), 64 GB RAM
64 TB HDD, raid 5
1 TB NVMe cache, raid 0
2 TB Qtier SSD
Location : Oslo
Uptime : 35h38m13s
max-concurrent-requests : Default
Bandwidth : Fibre 500/500

========== AUDIT =============
Successful: 222
Recoverable failed: 0
Unrecoverable failed: 0
Success Rate Min: 100.000%
Success Rate Max: 100.000%
========== DOWNLOAD ==========
Successful: 30329
Failed: 2
Success Rate: 99.993%
========== UPLOAD ============
Successful: 49857
Rejected: 1
Failed: 242
Acceptance Rate: 99.998%
Success Rate: 99.517%
========== REPAIR DOWNLOAD ===
Successful: 0
Failed: 0
Success Rate: 0.000%
========== REPAIR UPLOAD =====
Successful: 331
Failed: 0
Success Rate: 100.000%

There are doubts that the logs can be used to correctly measure efficiency.
Look at this:

2019-12-13T14:00:31.500Z INFO piecestore download started {"Piece ID": "GRJP4K4PAZ34ZIRWVQJFBMQDDZHCZH4TS76RQNXIHN4RXF74G73Q", "Satellite ID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET"}
2019-12-13T14:00:34.643Z DEBUG piecestore client canceled connection
2019-12-13T14:00:34.644Z INFO piecestore downloaded {"Piece ID": "GRJP4K4PAZ34ZIRWVQJFBMQDDZHCZH4TS76RQNXIHN4RXF74G73Q", "Satellite ID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET"}

The debug level says the client connection was canceled, but it is counted as a successful download.
I doubt billing will treat this as a successful GET.

I don’t have any of these entries in my logs…

However, when searching for debug messages I did find this somewhat humorous entry:

...piecestore	upload started	{"Piece ID": "CJ3TDEBUGGLU2R ...


I think it will, actually. Bandwidth contracts are created for an increasingly larger part of a piece throughout the transfer, and I'm pretty sure this scenario means the bandwidth contract for the entire piece was signed by the uplink and will be sent on to the satellite to be settled for payout. Even transfers that only have signed bandwidth contracts for part of a piece are paid, I believe. Though I could be wrong; anyone feel free to correct me if I am.
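
For what it's worth, here's a rough conceptual sketch of that mechanism (made-up names and types, not the actual storagenode code): the uplink keeps signing orders for a growing cumulative amount while the piece is transferred, and the node keeps the most recent one, so even a transfer that is canceled at the very end leaves a settle-able order for the bytes sent so far.

package main

import "fmt"

// order is a stand-in for a signed bandwidth allocation; the real protocol
// involves uplink/satellite signatures, this only illustrates how the signed
// amount grows during a transfer.
type order struct {
	pieceID string
	amount  int64 // cumulative bytes the uplink has signed for so far
}

func main() {
	var latest order
	// As chunks go out, the uplink signs for a growing cumulative amount.
	for _, sent := range []int64{64 << 10, 128 << 10, 256 << 10} {
		latest = order{pieceID: "GRJP4K4P...", amount: sent}
	}
	// If the client cancels the connection here, the node still holds the
	// last signed order and can submit it to the satellite for settlement.
	fmt.Printf("settle %d bytes for piece %s\n", latest.amount, latest.pieceID)
}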

@beast: you only get DEBUG lines if you set the log level to debug in the config.yaml. It’s on INFO by default.


You need to switch the log level to debug in config.yaml:
log.level: debug


Anyone else noticing bad performance on the upload stats?
Mine have dropped to around 30%; I used to be at 95% or more. Is my node performing badly on the newer releases, or is it because of the vetting of the new satellite, or something else?

The success rate is good for statistics, but we all need to understand that not all pieces are the same size. There can be fewer successful pieces that are nevertheless bigger and heavier than a larger number of smaller pieces, and the logs don't show how big the pieces are, so apparently that can't be measured. Delete operations running at the same time also play a role, since they eat into HDD speed as well.
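
A tiny worked example of that point (hypothetical numbers, not taken from any real log), showing how a count-based rate and a byte-weighted rate can diverge when the successful transfers happen to be the larger pieces:

package main

import "fmt"

func main() {
	// Hypothetical transfers: one successful 2 MiB piece and four failed 64 KiB pieces.
	type transfer struct {
		size int64
		ok   bool
	}
	transfers := []transfer{
		{2 << 20, true},
		{64 << 10, false},
		{64 << 10, false},
		{64 << 10, false},
		{64 << 10, false},
	}
	var okCount, total, okBytes, totalBytes int64
	for _, t := range transfers {
		total++
		totalBytes += t.size
		if t.ok {
			okCount++
			okBytes += t.size
		}
	}
	fmt.Printf("count-based success rate:   %.1f%%\n", 100*float64(okCount)/float64(total))      // 20.0%
	fmt.Printf("byte-weighted success rate: %.1f%%\n", 100*float64(okBytes)/float64(totalBytes)) // 88.9%
}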

Here's an update with the latest script as well as the latest node version. Important context for my numbers: I run two nodes at once, because a RAM/swap issue got me disqualified on two of the five satellites, which is why I opened a new one. Some of the traffic is therefore split between the two nodes, though not for the new satellite or the satellites that are paused on the other node.
Stats are since the update:

Hardware : Synology DS1019+ (INTEL Celeron J3455, 1.5GHz, 8GB RAM) with 20.9 TB in total SHR Raid
Bandwidth : Home ADSL with 40mbit/s down and 16mbit/s up
Location : Amsterdam
Node Version : v0.34.6
Uptime : 108h 30m
max-concurrent-requests : DEFAULT
successrate.sh :

========== AUDIT ============== 
Critically failed:     0 
Critical Fail Rate:    0.000%
Recoverable failed:    0 
Recoverable Fail Rate: 0.000%
Successful:            1134 
Success Rate:          100.000%
========== DOWNLOAD =========== 
Failed:                3 
Fail Rate:             0.021%
Canceled:              4 
Cancel Rate:           0.028%
Successful:            14174 
Success Rate:          99.951%
========== UPLOAD ============= 
Rejected:              0 
Acceptance Rate:       100.000%
---------- accepted ----------- 
Failed:                26 
Fail Rate:             0.013%
Canceled:              63744 
Cancel Rate:           32.874%
Successful:            130135 
Success Rate:          67.113%
========== REPAIR DOWNLOAD ==== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            54 
Success Rate:          100.000%
========== REPAIR UPLOAD ====== 
Failed:                1 
Fail Rate:             0.037%
Canceled:              716 
Cancel Rate:           26.776%
Successful:            1957 
Success Rate:          73.186%
========== DELETE ============= 
Failed:                0 
Fail Rate:             0.000%
Successful:            19029 
Success Rate:          100.000%

Hardware : Supermicro server, 2x Intel Xeon X5687, 100GB RAM. 6x4TB hard drives in raidz2 with two SSDs for L2ARC and ZIL. The node runs inside a VM with 32GB RAM. The node is not the only VM there.
Bandwidth : Home GPON with 1gbps down and 600mbps up. Backup connection is DOCSIS with 100mbps down and 12mbps up
Location : Lithuania
Version : 0.34.6
Uptime : 120h32m42s

========== AUDIT ============== 
Critically failed:     0 
Critical Fail Rate:    0.000%
Recoverable failed:    0 
Recoverable Fail Rate: 0.000%
Successful:            3813 
Success Rate:          100.000%
========== DOWNLOAD =========== 
Failed:                19 
Fail Rate:             0.033%
Canceled:              16 
Cancel Rate:           0.027%
Successful:            58326 
Success Rate:          99.940%
========== UPLOAD ============= 
Rejected:              0 
Acceptance Rate:       100.000%
---------- accepted ----------- 
Failed:                14 
Fail Rate:             0.005%
Canceled:              88899 
Cancel Rate:           30.090%
Successful:            206528 
Success Rate:          69.905%
========== REPAIR DOWNLOAD ==== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            26143 
Success Rate:          100.000%
========== REPAIR UPLOAD ====== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              1870 
Cancel Rate:           35.906%
Successful:            3338 
Success Rate:          64.094%
========== DELETE ============= 
Failed:                0 
Fail Rate:             0.000%
Successful:            66111 
Success Rate:          100.000%

Is this why I get all zeroes??? The dashboard says I have 0.7 TB stored.
I just switched to log.level: debug

If you use a non-default container name, you need to pass it as a parameter to the script. If you have logs written to a file, you need to pass the path to that file as a parameter instead; logging to a file is the default on Windows GUI installs.
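
For example (hypothetical names and paths, adjust to your setup): run ./successrate.sh my-storagenode-container if your container has a custom name, or ./successrate.sh /path/to/storagenode.log if the node logs to a file.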

If you are on Ubuntu, you should run the script with root privileges.


Thx, sudo did the trick.

A new update is also out for Docker, v0.35.3… I'll wait 24 hours and post an update here to see if anything changes, then do a longer-term update after a week.

Hardware : Raspberry Pi 4 (4GB RAM), 1x10TB WD Elements Desktop HDD connected by USB 3.0
Bandwidth : 200mbps down and 20mbps up.
Location : USA
Version : 0.35.3
Uptime : 14h21m
successrate.sh :

========== AUDIT ==============
Critically failed: 0
Critical Fail Rate: 0.000%
Recoverable failed: 0
Recoverable Fail Rate: 0.000%
Successful: 298
Success Rate: 100.000%
========== DOWNLOAD ===========
Failed: 604
Fail Rate: 10.269%
Canceled: 477
Cancel Rate: 8.109%
Successful: 4801
Success Rate: 81.622%
========== UPLOAD =============
Rejected: 0
Acceptance Rate: 100.000%
---------- accepted -----------
Failed: 5
Fail Rate: 0.053%
Canceled: 8206
Cancel Rate: 86.379%
Successful: 1289
Success Rate: 13.568%
========== REPAIR DOWNLOAD ====
Failed: 0
Fail Rate: 0.000%
Canceled: 0
Cancel Rate: 0.000%
Successful: 770
Success Rate: 100.000%
========== REPAIR UPLOAD ======
Failed: 0
Fail Rate: 0.000%
Canceled: 923
Cancel Rate: 83.605%
Successful: 181
Success Rate: 16.395%
========== DELETE =============
Failed: 0
Fail Rate: 0.000%
Successful: 8594
Success Rate: 100.000%