I know there are lots of threads where successrate.sh results and hardware come up, but usually only when there are issues - I would love to share outputs for comparison, not for hardware geeking…
How do you feel about this format?
Hardware: Synology DS1019+ (Intel Celeron J3455, 1.5 GHz, 8 GB RAM) with 20.9 TB in total SHR RAID
Bandwidth: Home ADSL with 40 Mbit/s down and 16 Mbit/s up
Location: Amsterdam
Node Version: v0.21.3
max-concurrent-requests: DEFAULT, so 7
successrate.sh:
My biggest issue is my bandwidth. The hardware is fine now, but with a home ADSL router I don’t think raising “max-concurrent-requests” etc. will help.
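For anyone wondering which setting I mean: as far as I understand it, it lives in the node’s config.yaml under the key below. The key name and the default of 7 are just my best understanding for my version, so please double-check against your own install before changing anything:

```
# storagenode config.yaml (key name and default are my best understanding, not gospel)
storage2.max-concurrent-requests: 7
```

Raising it only helps if the node itself is the bottleneck; on a slow uplink like mine it would mostly just mean more transfers losing the race.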
Success rates so far are OK for me, but I’m not sure where others are.
I’m fairly new to Storj, so my information may be a little bit incorrect…
However, after paging through the logs and some of the documentation/forum posts, it seems that the “success” rates have the following practical meaning:
Audits:
These are sent to a particular node as a test. Under normal operation this should be 100%, and if the number falls below 60% your node will likely be disqualified from the network.
Download and Upload percentages:
These are at least partially related to:
Your geographical distance from a satellite.
The speed at which your node processes requests.
If your node is too slow in comparison with other nodes in the same geo-IP and/or subnet, then your node will lose the race to get the data in or out first. When your node loses the race, the “success” rate declines.
In my case, I’m not close to a satellite node and I’m tunneling traffic through a secure pipe in order to avoid being blocked by my ISP. So, there’s a little bit of extra overhead in my IP traffic… However, I’m also the only node in my /24 IP block.
So, the success rates parsed from the node log don’t tell the complete story of your node. Judging by my own rates of Upload and Download, your node is doing quite well. It is beating out the competing nodes most of the time.
Also… the log parsing script is not very efficient. It works, but there’s definitely room for improvement; running multiple grep passes over the log creates a significant bottleneck. However, it’s all volunteer work… and my own shell programming is a bit utilitarian as well. I was thinking of taking some time this week to look at some improvements and posting them (a rough sketch is below), but I see there is other activity on making the web interface much more informative.
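If anyone wants to experiment, this is the rough direction I had in mind: a single awk pass over the log instead of repeated greps. The container name and the match phrases (“uploaded”, “upload failed”, and so on) are placeholders from memory, so they would need to be checked against the actual log format before trusting the counts:

```sh
#!/bin/sh
# Rough single-pass counter - NOT a drop-in replacement for successrate.sh.
# "storagenode" is the usual container name from the docs; adjust if yours differs.
# The match strings below are from memory and may not match the real log lines exactly.
docker logs storagenode 2>&1 | awk '
  /upload failed/   { up_fail++;   next }
  /uploaded/        { up_ok++;     next }
  /download failed/ { down_fail++; next }
  /downloaded/      { down_ok++;   next }
  END {
    if (up_ok + up_fail > 0)
      printf "Upload success:   %.2f%%\n", 100 * up_ok / (up_ok + up_fail)
    if (down_ok + down_fail > 0)
      printf "Download success: %.2f%%\n", 100 * down_ok / (down_ok + down_fail)
  }'
```

The point is simply that one pass over the log does all the counting at once, instead of re-reading the whole log for every counter.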
Please correct my errors in reasoning if I’ve got the “success” rate information incorrect.
Thanks for your reply. I’m fairly clear on what it all means, but thanks for the explanation from your side. The main idea of this thread was just to compare output between SNOs, to get a feeling for where others are netting out.
Hardware: Crappy HP laptop (AMD E1-2100, 1.0 GHz, 4 GB RAM), 1 TB HDD on a USB 3.0 dock
Bandwidth: Home optical fiber, 100 Mbit/s up and down
Location: Oulu, Finland
Node Version: v0.21.3
max-concurrent-requests: 10
successrate.sh:
Hardware: Synology DS1019+ (Intel Celeron J3455, 1.5 GHz, 8 GB RAM) with 20.9 TB in total SHR RAID
Bandwidth: Home ADSL with 40 Mbit/s down and 16 Mbit/s up
Location: Amsterdam
Node Version: v0.22.1
max-concurrent-requests: DEFAULT, so 7
successrate.sh:
For the first time in a while I got a REPAIR DOWNLOAD and some REPAIR UPLOADs.
No change vs. before, except my UPLOAD success rate went down a bit, which is IMO normal given my ‘small’ connection. So far only 2 rejected, which means the concurrency setting is fine for now.
Well, let’s go:
Hardware: Supermicro server, 2x Intel Xeon X5687, 100 GB RAM; 6x 4 TB hard drives in raidz2 with two SSDs for L2ARC and ZIL. The node runs inside a VM with 32 GB RAM, and it is not the only VM on the host.
Bandwidth: Home GPON with 1 Gbit/s down and 600 Mbit/s up. Backup connection is DOCSIS with 100 Mbit/s down and 12 Mbit/s up
Location: Lithuania
Node Version: v0.22.1
max-concurrent-requests: 64
Update, 40 hours of v0.23.3:
Hardware: Synology DS1019+ (Intel Celeron J3455, 1.5 GHz, 8 GB RAM) with 20.9 TB in total SHR RAID
Bandwidth: Home ADSL with 40 Mbit/s down and 16 Mbit/s up
Location: Amsterdam
Node Version: v0.23.3
max-concurrent-requests: DEFAULT, so 7
successrate.sh:
Update: the biggest change besides the latest version is that I installed a 1 TB SSD cache on my Synology. This seems to boost UPLOAD performance significantly, even on my slow ADSL connection:
Hardware: Synology DS1019+ (Intel Celeron J3455, 1.5 GHz, 8 GB RAM) with 20.9 TB in total SHR RAID
Bandwidth: Home ADSL with 40 Mbit/s down and 16 Mbit/s up
Location: Amsterdam
Node Version: v0.24.5
Uptime: 135h20m50s
max-concurrent-requests: DEFAULT
successrate.sh:
Hardware: Synology DS1019+ (Intel Celeron J3455, 1.5 GHz, 8 GB RAM) with 20.9 TB in total SHR RAID
Bandwidth: Home ADSL with 40 Mbit/s down and 16 Mbit/s up
Location: Amsterdam
Node Version: v0.25.1
Uptime: 73h33m12s
max-concurrent-requests: DEFAULT
successrate.sh:
Update also from my side: numbers improved even more over the last two versions. Having 99% successful downloads and, in particular, 98% uploads is super high…
Even though yours is still marginally better, @Tulip
Hardware: Synology DS1019+ (Intel Celeron J3455, 1.5 GHz, 8 GB RAM) with 20.9 TB in total SHR RAID
Bandwidth: Home ADSL with 40 Mbit/s down and 16 Mbit/s up
Location: Amsterdam
Node Version: v0.26.2
Uptime: 23h41m30s
max-concurrent-requests: DEFAULT
successrate.sh:
All downloads and uploads are over-provisioned, so some requests are always interrupted. Numbers over 95% are very good and in fact quite a lot better than average.
I’m not sure if these numbers are still accurate.
For uploads, 130 transfers are started, of which at least 80 are finished; the rest are interrupted.
For downloads, 35 are started and at least 29 are finished.
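Just to make that concrete (and assuming those figures are still accurate): only about 80 of the 130 started uploads and 29 of the 35 started downloads can finish before the rest are interrupted, so the average success rate across the whole network sits well below 100%:

```sh
# Back-of-the-envelope using the figures quoted above (if they are still accurate)
awk 'BEGIN {
  printf "network-wide upload success:   ~%.1f%%\n", 100 * 80 / 130
  printf "network-wide download success: ~%.1f%%\n", 100 * 29 / 35
}'
# -> roughly 61.5% for uploads and 82.9% for downloads
```

So a node sitting well above those averages is simply winning more than its share of the races.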
Think of it this way: for every node with above-90% success rates there are other nodes with below-50% success rates.
And it’s important to note that nodes with a below-50% success rate are unlikely to have any way to improve it. I contend that this network property will, eventually, lead to centralization of data.
However, predictions are difficult, especially about the future.
You made a leap there that I’m not following. How would that lead to centralization? Those nodes still get significant amounts of data. There is just a preference for faster nodes, but there are still plenty of fast nodes to be highly distributed.
Presumably, a node with an average 50% success rate at catching data pieces will gather data more slowly than nodes with 90% success rates. The slower accumulation of data on the less successful nodes will ultimately make those nodes less profitable per unit of hardware uptime/usage. A node operator who is gathering data slowly is much less likely to maintain a long-term node, and those who do choose to continue will see more hardware failures per unit of data stored.
Eventually the geographically closest nodes with the largest available instantaneous bandwidth will store most of the network’s data. And those nodes will most likely already be located inside a data center, due to the uptime requirement as well as the point made by other posters in other threads that running a node on dedicated hardware is unlikely to be profitable.
I think there is a lot of speculation in your post. I’m seeing similar success rates to those mentioned here by others, and I see everyone mentioning home connections (including myself). Clearly you don’t need to be in a data center to see good performance. And the IP filtering limits how many nodes could actually be successful in data centers to begin with. Yes, a 50% success rate means you get half the data of a 100% success rate node, but that still leads to a lot of distribution. Add to that that the system by definition distributes every piece across many nodes, and I doubt any ‘centralization’ that might be caused by lower-performing nodes dropping out gets anywhere close to a problematic level.