Yes, AFAIK Storj stores each file with an expansion factor of about 2.8 to ensure it’s available, so for each request from a user there are roughly 2.8 nodes holding a piece that could serve it. I assume Storj sends requests to multiple nodes in parallel so customers get a fast response, instead of sending a request to one node, waiting, and only then trying another if the first doesn’t deliver. Based on that, wouldn’t a success rate of about 35% (≈ 1/2.8) simply mean an even distribution of requests!?
If load and latency influenced how requests are distributed, wouldn’t Storj users who have their HDD connected via USB, or who run slower hardware like a Raspberry Pi, be dealt “bad cards”?
And wouldn’t nodes with, for example, 1 GBit/s up- and download have an advantage over users with slower connections?
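The ~35% figure in the post follows directly from the expansion factor; a quick sketch of the arithmetic (the 2.8 figure is taken from the post, not from Storj documentation):

```python
# If each file is stored with an expansion factor of ~2.8, then ~2.8 nodes
# hold redundant pieces that could serve any given request. With requests
# spread evenly, each node "wins" roughly 1 in 2.8 of them.
expansion_factor = 2.8
expected_share = 1 / expansion_factor  # fraction of requests one node serves

print(f"{expected_share * 100:.1f}%")  # ~35.7%
```

So a measured success rate near 35% would be consistent with an even distribution, as the post suspects.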
not sure you can run it just on the node name… but you can point it at a log file instead, if you are exporting your logs.
./successrate.sh /temp/logdir/logfile.log
or such
if you aren’t exporting your logs you can use something like this for docker.
i suggest this command line because you can basically adapt it to do whatever
you can also remove the --until parameter and its date, if you just want the last 24 hours, 48 hours, 1 hour, whatever… pretty self-evident how it works… and there are ofc more options in the docker documentation.
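The docker command the post refers to is presumably something along these lines (the container name `storagenode` and the dates are assumptions, not from the post):

```shell
# Dump a time window of the container's logs to a file, then feed it to the
# script. docker logs writes to both stdout and stderr, hence the 2>&1.
docker logs storagenode --since "2023-06-01T00:00:00" --until "2023-06-02T00:00:00" > node.log 2>&1
./successrate.sh node.log
```

Dropping `--until` makes `--since` mean “from that point until now”, which is how you’d get the last 24 or 48 hours.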
I usually update my node manually after checking the GitHub releases page for a new version.
The updater almost never updates the node correctly, I don’t know why. This is a Windows box.
I ran the successrate script on the old storagenode.log, here are the results: