You made a leap there that I’m not following. How would that lead to centralization? Those nodes still get significant amounts of data. There is just a preference for faster nodes, but there are still plenty of fast nodes to be highly distributed.
Presumably, a node that catches data pieces with an average 50% success rate will gather data more slowly than a node with a 90% success rate. The slower aggregation of data on less successful nodes will ultimately make those nodes less profitable per hour of hardware uptime. A node operator who is gathering data slowly is much less likely to maintain a node long term, and those who do choose to continue will see more hardware failures per TB of data stored.
Eventually, the geographically closest nodes with the largest available instantaneous bandwidth will store most of the network's data. And those nodes will most likely already be located inside a data center, due to the uptime requirement, as well as the reality noted by other posters in other threads that it is unlikely to be profitable to run a node on dedicated hardware.
I think there is a ton of speculation in your post. I'm seeing success rates similar to those mentioned here by others, and I see everyone mentioning home connections (including myself). Clearly you don't need to be in a data center to see good performance. And the IP filtering limits how many nodes could actually be successful in data centers to begin with. Yes, a 50% success rate means you get half the data of a 100% success rate node, but that still leaves a lot of distribution. Add to that that the system by design distributes every piece across many nodes, and I doubt any 'centralization' that might be caused by lower-performing nodes dropping out gets anywhere close to a problematic level.
There are currently at least 2,000 SNs per the last known contacts from the now defunct, unauthorized storjnet.info … SNOs are probably only going to post "positive" success results, especially on threads such as this one. A prior post of mine much further up included my thoughts on why this particular metric is a poor way to judge an individual node's performance. It's a relative measure of a given node's ability to catch data, but the top-level number leaves out most of the differentiating factors, most of which are not under the SNO's control.
My own success rates hover in the low-to-mid 80% range. And I believe --I haven't re-checked the threads-- that I've seen at least two or three posters on this forum list success rates in the mid 60s. And your own comparison above acknowledges the reality that while some nodes sit at 90%, others sit at 50%.
It's impossible for every node to have a 90% success rate, and the slower nodes are likely to be on home networks and geographically far from the satellites.
A home connection is certainly fine. However, a typical home setup carries significant risk of hardware failure and power outages. Over the long view, these risks combine to push out most home-built node operators.
The most profitable scenario is a data center running multiple services on regularly maintained servers. In that case, the data center is recovering a little more profit from the sunk cost of the hardware. It doesn't cost much more to run a Storj node on a server that is already running 50 small web sites simultaneously…
I beg to differ. I've been running my slow node since we announced Alpha. It costs me nothing to keep it going, as electricity around here is dirt cheap. My entire house, including the node and a fan that is constantly blowing on it, doesn't consume more than the minimum charge every month. Internet is on anyway for other uses. So why not keep it running, if it makes even a little extra income? And btw, my success rate had been less than 50% until just this last update. Why should I give up my held amount by quitting now?
My statements above are not meant to apply to any given individual SNO. Certainly, there will be operators who are at or below 50% success rate and still continue to run a node. However, the network is still in Beta stage, and my statements are meant to be applied to a statistical collection of node operators.
An individual node represents 1/2,000th of the SNs and a varying, low percentage of the overall networked storage space. My conjecture is that a statistically significant portion of home-run nodes will be slower and have less storage space than a statistically significant portion of data-center-run nodes.
If you consider Conway's Game of Life and imagine each SN as a cell adjacent to a faster SN cell, you'll get the idea. The faster cell will beat the slower cell to the data and thus will eventually grow to occupy more of the networked storage space. Eventually, nodes that are slower will become even less likely to catch data pieces as faster neighbor cells accumulate.
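The dynamic being debated here can be put into a toy simulation. This is not the actual Storj piece-selection protocol, just an assumption-laden sketch: each round, every node independently "catches" an offered piece with probability equal to its success rate. It shows both sides of the argument: the faster node accumulates proportionally more, but the slower node keeps growing rather than being starved out.

```python
import random

def simulate(success_rates, rounds=10_000, seed=42):
    """Toy model: each round, node i wins a piece with probability
    success_rates[i]. Returns pieces stored per node."""
    rng = random.Random(seed)
    stored = [0] * len(success_rates)
    for _ in range(rounds):
        for i, rate in enumerate(success_rates):
            if rng.random() < rate:
                stored[i] += 1
    return stored

fast, slow = simulate([0.9, 0.5])
print(fast, slow)            # the 90% node ends up with ~1.8x the pieces
print(round(fast / slow, 1))
```

Note that in this model the ratio stays fixed at roughly 0.9/0.5; the "slower cells catch even less over time" feedback loop would need an extra mechanism (e.g. capacity-weighted selection) that the source posts assert but that is not, to my knowledge, how piece placement works.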
I don't want to comment too much on the 'centralization' point, but if pure speed / hardware / connection quality did lead to more data, then the main holders of the data could be considered 'more centralized around those performance points'.
Nevertheless I wanted to make sure. My node is connected to a very slow private ADSL connection here in Holland - there are many faster options available, but I don't see the point of paying for them. Yes, my hardware is a little overpowered, but the Synology would be running every day anyhow for backups and some playing around… just the 2x 1 TB SSDs / SSD cache is really overpowered, but I had the feeling it drove a significant speed increase for Storj.
Will post my specs here as well.
I have a home server which I use for some downloading, Nextcloud and Storj. Since 0.26.2 I have the feeling my stats for UPLOAD and DOWNLOAD have improved, although they were always above 95%. However, I also think my REPAIR DOWNLOAD stats have decreased. But the numbers are still quite low, so it's hard to tell.
Hardware : Ubuntu server, i5-2500K with 4 GB RAM, ~24 TB disk space in total, only 8 TB for Storj atm
Bandwidth : Home fiber, 100 Mbit up and down
Location : Rotterdam
Node Version : v0.26.2
Uptime : 40h27m
max-concurrent-requests : DEFAULT
========== AUDIT =============
Successful: 554
Recoverable failed: 0
Unrecoverable failed: 0
Success Rate Min: 100.000%
Success Rate Max: 100.000%
========== DOWNLOAD ==========
Successful: 34114
Failed: 154
Success Rate: 99.551%
========== UPLOAD ============
Successful: 56369
Rejected: 9
Failed: 1224
Acceptance Rate: 99.984%
Success Rate: 97.875%
========== REPAIR DOWNLOAD ===
Successful: 151
Failed: 71
Success Rate: 68.018%
========== REPAIR UPLOAD =====
Successful: 115
Failed: 0
Success Rate: 100.000%
Maybe I'm wrong but I will try my luck here.
Saw the last post from you @fonzmeister and noticed you’re on Ubuntu Server?
I Upgraded my Node from Synology to Ubuntu Server 18.04.3 LTS.
Now I get weird output from successrate.sh:
Unfortunately I have no idea what is wrong with Line 55/81.
Did you have the same problems?
I run successrate.sh with the same logfile on my RPi and it works fine…
Maybe Ubuntu Server doesn't work with this script?
Thanks for your help, guys
Perhaps this could help: https://github.com/ReneSmeekes/storj_success_rate#locale-error-fix
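For anyone who hits this later: judging by the section title of that link, the culprit is usually the system locale (this is an assumption, not verified against the repo). Scripts like successrate.sh format percentages with awk's printf, and a locale that uses a comma as the decimal separator can break that. Forcing the C locale for the run is the common workaround:

```shell
# Hypothetical invocation; substitute your own log path:
#   LC_ALL=C ./successrate.sh node.log
# The locale-sensitive step can be seen with awk alone -- under
# LC_ALL=C the decimal point always parses and prints as ".":
LC_ALL=C awk 'BEGIN { printf "%.3f%%\n", 554 / 554 * 100 }'
```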
Thanks a lot
On a Raspberry Pi 4 with an 8TB USB drive on a 500/500 connection. My other server (an actually powerful server) on a 250/25 connection has roughly the same rates. I really wonder what kind of crappy hardware the people who get 50% use, if a Raspberry Pi and 25 Mbit upload seem sufficient (even 10 Mbit should be fine according to the posts above!).
========== AUDIT =============
Successful: 2604
Recoverable failed: 0
Unrecoverable failed: 0
Success Rate Min: 100.000%
Success Rate Max: 100.000%
========== DOWNLOAD ==========
Successful: 148376
Failed: 310
Success Rate: 99.791%
========== UPLOAD ============
Successful: 210527
Rejected: 1614
Failed: 510
Acceptance Rate: 99.239%
Success Rate: 99.758%
========== REPAIR DOWNLOAD ===
Successful: 2
Failed: 0
Success Rate: 100.000%
========== REPAIR UPLOAD =====
Successful: 445
Failed: 0
Success Rate: 100.000%
I now have an uptime of over 300 hours since the last update. The numbers haven't really changed. I'm pretty happy about it, but the REPAIR DOWNLOAD is a bit of a concern for me.
========== AUDIT =============
Successful: 3625
Recoverable failed: 0
Unrecoverable failed: 0
Success Rate Min: 100.000%
Success Rate Max: 100.000%
========== DOWNLOAD ==========
Successful: 180413
Failed: 614
Success Rate: 99.661%
========== UPLOAD ============
Successful: 219045
Rejected: 9
Failed: 2015
Acceptance Rate: 99.996%
Success Rate: 99.089%
========== REPAIR DOWNLOAD ===
Successful: 441
Failed: 343
Success Rate: 56.250%
========== REPAIR UPLOAD =====
Successful: 453
Failed: 1
Success Rate: 99.780%
My numbers don’t look as good as the rest of the stats posted in this thread. What am I doing wrong?
Hardware: Raspberry Pi 3B+, Geekworm SATA expansion board, 2TB drive
Bandwidth: home gigabit fiber
Location: San Francisco Bay Area
Node Version: v0.26.2
Uptime: 65 days
max-concurrent-requests : DEFAULT
========== AUDIT =============
Successful: 2134
Recoverable failed: 2
Unrecoverable failed: 0
Success Rate Min: 99.906%
Success Rate Max: 100.000%
========== DOWNLOAD ==========
Successful: 19409
Failed: 5556
Success Rate: 77.745%
========== UPLOAD ============
Successful: 114759
Rejected: 32
Failed: 114085
Acceptance Rate: 99.972%
Success Rate: 50.147%
========== REPAIR DOWNLOAD ===
Successful: 381
Failed: 297
Success Rate: 56.195%
========== REPAIR UPLOAD =====
Successful: 468
Failed: 35
Success Rate: 93.042%
Seems like my upload success is really bad. What can I do to improve?
This may answer both @Derkades's and @zcopley's questions. You're both using similar hardware on fast connections. Two factors could still explain the difference. The most likely one is simply distance to the uplink: right now most traffic is still test traffic, which originates mostly from Germany. @Derkades didn't list a location, but I'm guessing that node is probably somewhere in Europe. The other factor is disk write speed; I'm going to guess that the 1TB disk is both older and slower than the 8TB one @Derkades uses. So you're not necessarily doing anything wrong; this is just how your current setup performs with the current test traffic. No worries though: customers will be spread out globally, and this distance effect will even out once traffic moves from test traffic to actual customer traffic.
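The distance effect described above can be sketched as a race. This is a toy model built on my own assumptions, not the exact Storj transfer logic: an uplink starts uploads to many nodes and keeps only the first k to finish, so a node's success rate depends on its base RTT relative to the competing field, plus random jitter.

```python
import random

def race(base_rtt_ms, competitors_rtt_ms, k, trials=20_000, seed=1):
    """Fraction of races in which 'my' node (base_rtt_ms) finishes
    before the k-th fastest of the competing nodes. Jitter is drawn
    from an exponential with a 100 ms mean (an arbitrary choice)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        mine = base_rtt_ms + rng.expovariate(1 / 100)
        others = sorted(r + rng.expovariate(1 / 100)
                        for r in competitors_rtt_ms)
        if mine < others[k - 1]:   # beat the k-th fastest competitor
            wins += 1
    return wins / trials

field = [30] * 100                 # 100 nearby competitors, ~30 ms base RTT
print(race(30, field, k=80))       # nearby node: wins most races
print(race(150, field, k=80))      # distant node: wins far fewer
```

With these made-up numbers, the identical-hardware node that merely sits 120 ms farther away loses a large share of races, which is qualitatively the same gap as the ~99% vs ~50% upload rates posted in this thread.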
You are right, I am in the Netherlands, so I naturally have a latency advantage. I wouldn't say it's a disk issue (unless the disk is nearly dying): my two nodes with ~10-year-old 1TB WD Greens get great percentages. I do use ZFS on them, so that may improve performance.
Hi @BrightSilence. Thanks for responding.
That’s interesting that most of the test traffic originates mostly from Germany. Isn’t the Storj team in Atlanta, GA?
You are right about the slow disk. I misspoke when I said it was a 1TB drive; it's actually 2TB. This is the exact disk (Amazon link) that I bought specifically for my Storj node. But is the latency/throughput of the disk really a gating factor compared to the internet connection? I assumed I could get away with a slower disk. If I had known it made such a difference, I would have bought an SSD or something.
I would say location is by far the biggest reason for the difference. The HDD is a distant second to that. Buying an SSD is not worth it. It would be really hard to earn that investment back.
Hardware : Custom server (2x Xeon E5-2623 v3, 12TB SATA)
Bandwidth : 1000/1000 Mb/s
Location : Switzerland
Node Version : v0.26.2
Uptime : 142 h
max-concurrent-requests : 20
========== AUDIT =============
Successful: 2405
Recoverable failed: 0
Unrecoverable failed: 0
Success Rate Min: 100.000%
Success Rate Max: 100.000%
========== DOWNLOAD ==========
Successful: 110547
Failed: 92
Success Rate: 99.917%
========== UPLOAD ============
Successful: 157880
Rejected: 0
Failed: 499
Acceptance Rate: 100.000%
Success Rate: 99.685%
========== REPAIR DOWNLOAD ===
Successful: 0
Failed: 85
Success Rate: 0.000%
========== REPAIR UPLOAD =====
Successful: 338
Failed: 0
Success Rate: 100.000%
Hardware : Qnap TS-1277 AMD Ryzen 5 1600 6 cores/12 threads 3.2 GHz processor (Turbo Core 3.6 GHz), 64 GB RAM
64 TB HDD, raid 5
1 TB NVMe cache, raid 0
2 TB Qtier SSD
Location : Oslo
Uptime : 241h7m13s
max-concurrent-requests : Default
Bandwidth : Fibre 500/500
> ========== AUDIT =============
> Successful: 2090
> Recoverable failed: 0
> Unrecoverable failed: 0
> Success Rate Min: 100.000%
> Success Rate Max: 100.000%
> ========== DOWNLOAD ==========
> Successful: 159948
> Failed: 12
> Success Rate: 99.993%
> ========== UPLOAD ============
> Successful: 241225
> Rejected: 0
> Failed: 3838
> Acceptance Rate: 100.000%
> Success Rate: 98.434%
> ========== REPAIR DOWNLOAD ===
> Successful: 0
> Failed: 0
> Success Rate: 0.000%
> ========== REPAIR UPLOAD =====
> Successful: 513
> Failed: 0
> Success Rate: 100.000%