Successrate.sh comparison thread

the successrate.sh script will by default grab the logs from the docker container and process them…

this will however not work if you have redirected your docker storagenode logs to an external location.

if you have done this, you can pass the location of the log you want successrate.sh to process:

./successrate.sh /your/log/location/storagenodewhatever.log
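for context: with no log path given, the script basically reads the container logs itself, something along the lines of this (paraphrased from memory, not the script verbatim):

docker logs storagenode 2>&1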

Here you can find the script + additional information


and if you cannot make it work, i’m sure @BrightSilence will want to debug it with you.

my money would be on it being a matter of pointing it at the right location…

link to the log redirection guide; if you’ve done this, you will need to specify the log location when running successrate.sh
https://documentation.storj.io/resources/faq/redirect-logs
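for reference, if i recall the linked docs correctly, the redirect boils down to one line in the node’s config.yaml, after which you point successrate.sh at the file… the host path below is just an example, substitute your own:

# in config.yaml (which lives in your storage location):
log.output: "/app/config/node.log"

# then run the script against that file as seen from the host:
./successrate.sh /mnt/storagenode/node.log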


Your link had a newline in it.

@msallak1 you shouldn’t edit the script at all. Copy-pasting the script by hand might not be a good idea either, as sometimes people copy some html mess along with it.

Just do

wget https://raw.githubusercontent.com/ReneSmeekes/storj_success_rate/master/successrate.sh

Then call the script with:

  • Default container name (storagenode)
./successrate.sh 
  • Different container name
./successrate.sh containername
  • Logs redirected to a file
./successrate.sh /path/to/node.log
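If you’re not sure what the container is called, plain docker can list the running names first (nothing specific to the script):

docker ps --format '{{.Names}}'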

Raspi 4 8GB running 64-bit Raspbian and Storj 1.9.5 on an external USB 3.0 disk (WD MyBook):


Thank you both for explaining. I re-downloaded it with your command. I also should have just gone with the default; I thought I had to type the path.

This is the result from a new node, about 4-5 days old


Before you can run the script you need to make it executable:

chmod +x successrate.sh
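Alternatively, if you’d rather not change permissions, you can run it through bash directly:

bash successrate.sh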


Still waiting for my docker to be updated…


then just manually update it…

just make sure you’ve got your full docker run command,

docker stop storagenode
docker rm storagenode
docker pull storjlabs/storagenode:latest
docker run… very long and complicated command, i usually just copy-paste it… even tho i should have put it in a bash script (something like the sketch below)

but yeah really it’s a 2min deal… if that…
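a minimal sketch of such a script, assuming the documented docker run layout… every placeholder in it (wallet, email, address, storage size, paths) is made up and must be swapped for your own values:

#!/bin/bash
set -e                          # stop at the first failing command
docker stop -t 300 storagenode  # give the node time to shut down cleanly
docker rm storagenode
docker pull storjlabs/storagenode:latest
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967 -p 127.0.0.1:14002:14002 \
    -e WALLET="0x..." \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.ddns.example:28967" \
    -e STORAGE="4TB" \
    --mount type=bind,source=/mnt/storagenode/identity,destination=/app/identity \
    --mount type=bind,source=/mnt/storagenode/data,destination=/app/config \
    --name storagenode storjlabs/storagenode:latest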

Yeah sure, I know how that works, but we’ve also had multiple discussions here on the forum about not forcing updates and keeping watchtower so things update ‘how they should’…
I’m just whining here, that’s all :D :D


why does it matter if you force an update… so long as you know it’s good…
ofc i’m not aware whether doing a manual update can confuse watchtower… since i only update manually

well you better strap in… it might be days until it comes around to updating your SN :smiley:

the real test of the network will be the day something like 60% of the States black out… or when the internet breaks or is otherwise impaired across, say, the atlantic…

some major shift that might take 30% of the network offline for hours if not days…
that will be the first true test of whether this really works.

i think an update will be hard pressed to have that kind of effect… but it most certainly could… even if it would be kinda tricky to pull off

Yeah I’ll sit it out :slight_smile: Wasn’t it roughly within a week after the first docker updates start? I remember reading something about it but forgot… Well. #waiting :slight_smile:

the version is launched for windows first, then it takes about a week before it comes to docker, and then maybe a few days to update everything… ofc maybe a bit more… but the further towards the end it gets, the less likely it is that you will be the one left, ofc…

hitting 10% is not all that easy consistently :smiley:

Well this is interesting. I thought 1000 uploads would be a decent sample size, but I ran the script again after 76,000 uploads and my success rate went from 91.2 percent to 98.1 percent. I didn’t change anything on my node. Perhaps it has something to do with time of day or distance to the customers that were active at the time of measurement.
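For what it’s worth, a rough back-of-the-envelope check (my own arithmetic, not from the script) suggests the change is real rather than sampling noise:

SE = sqrt(p * (1 - p) / n)
n = 1,000,  p = 0.912  ->  SE ≈ 0.9%
n = 76,000, p = 0.981  ->  SE ≈ 0.05%

A swing from 91.2% to 98.1% is many standard errors wide, so the conditions really did differ between the two measurements.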


there may be other factors that play a role…
such as server or hdd activity, storage node activity, internet activity…
and then depending on what type of setup you have… you might add user activity and network activity…

then you might also have to take into account storj network conditions… like whether nodes near you are performing up to standard… you might also have some boot time where your system doesn’t run as smoothly, and just after you restart the storagenode it will run a lot of processes that strain the computer and thus affect your successrate % in the short term… ofc long term that sort of drowns out… but it might be why you aren’t seeing higher than 98%, and 98% is basically perfect…

i so want the last bit of download and upload… would be awesome to get both to 99.99%, just for kicks…

yeah, I forgot about the full node file scan that happens on node restart.

I was enjoying the sun the last two days and didn’t check, updated almost 40 hours ago … just moved old logs aside and will collect and share tomorrow :slight_smile:

Finally here’s my update as well, also bringing back the initial format :slight_smile:

Summary: Looks like almost nothing is failing any longer :slight_smile:
The biggest change is the connection: from ADSL (40 down / 16 up) to a new cable connection. Also, I rotated the logfiles 40 hours after the automatic update, and I have two nodes running on the same IP (one of which I’m gracefully exiting, one satellite after the other).

Hardware : Synology DS1019+ (INTEL Celeron J3455, 1.5GHz, 8GB RAM) with 20.9 TB in total SHR Raid
Bandwidth : Home Cable Connection with 300mbit/s down and 40mbit/s up
Location : Amsterdam
Node Version : v1.9.5
Uptime : 88h 19m
max-concurrent-requests : DEFAULT
successrate.sh :

========== AUDIT ============== 
Critically failed:     0 
Critical Fail Rate:    0.000%
Recoverable failed:    0 
Recoverable Fail Rate: 0.000%
Successful:            2000 
Success Rate:          100.000%
========== DOWNLOAD =========== 
Failed:                3 
Fail Rate:             0.006%
Canceled:              8 
Cancel Rate:           0.015%
Successful:            53650 
Success Rate:          99.980%
========== UPLOAD ============= 
Rejected:              0 
Acceptance Rate:       100.000%
---------- accepted ----------- 
Failed:                2 
Fail Rate:             0.002%
Canceled:              39 
Cancel Rate:           0.030%
Successful:            128441 
Success Rate:          99.968%
========== REPAIR DOWNLOAD ==== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            21637 
Success Rate:          100.000%
========== REPAIR UPLOAD ====== 
Failed:                0 
Fail Rate:             0.000%
Canceled:              0 
Cancel Rate:           0.000%
Successful:            23976 
Success Rate:          100.000%
========== DELETE ============= 
Failed:                0 
Fail Rate:             0.000%
Successful:            23507 
Success Rate:          100.000%

Hello, I’m new here :slight_smile:

I have a test node to try out storj.
It works now, and in a few days it will become a RAID 5 system with 4 TB of storage.

Hardware : Intel i7 6700k
Bandwidth : Home Cable Connection with 1059mbit/s down and 54.2mbit/s up
Location : Kaarst
Node Version : v1.9.5
Uptime : 133h 40m
max-concurrent-requests : DEFAULT
successrate.sh

========== AUDIT =============
Critically failed: 0
Critical Fail Rate: 0,00%
Recoverable failed: 0
Recoverable Fail Rate: 0,00%
Successful: 57
Success Rate: 100,00%
========== DOWNLOAD ==========
Failed: 0
Fail Rate: 0,00%
Canceled: 2
Cancel Rate: 0,18%
Successful: 1125
Success Rate: 99,82%
========== UPLOAD ============
Rejected: 0
Acceptance Rate: 100,00%
---------- accepted ----------
Failed: 0
Fail Rate: 0,00%
Canceled: 17
Cancel Rate: 0,03%
Successful: 55227
Success Rate: 99,97%
========== REPAIR DOWNLOAD ===
Failed: 0
Fail Rate: 0,00%
Canceled: 0
Cancel Rate: 0,00%
Successful: 0
Success Rate: 0,00%
========== REPAIR UPLOAD =====
Failed: 0
Fail Rate: 0,00%
Canceled: 0
Cancel Rate: 0,00%
Successful: 35357
Success Rate: 100,00%


@Phillip
raid 5 with 4 tb capacity…???

so 3 x 2tb drives?

raid 5 will work for 4tb, but make sure you can turn it into raid6 later, because raid5 has fundamental flaws that will, long term, kill larger storage arrays… it’s just a matter of time…

raid6 doesn’t have this issue… also you might want to strongly consider whether you would be better off making multiple nodes instead of a raid5

raid 5 disadvantages…
1. all disks in the array work in lockstep… meaning the array has the iops of a single hdd

2. running the disks non-raided gives you 50% more storage than raid 5 (with three 2tb disks: 6tb raw vs 4tb usable)…

3. raid 5 cannot protect against bit rot, so the more drives in a raid5, the higher the odds of bit rot become, and since the array cannot tell which drive the corruption came from, a rebuild may propagate the corrupted data over your good copy and lose it.
something raid 6 has no trouble with…

so if i were in your case… i would save raid for later in my storagenode adventure… for starting up… might as well make multiple nodes… many advantages to that
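a quick capacity/iops comparison, assuming the 3 x 2tb guess above (my own numbers, adjust to your actual drives):

3 x 2tb as separate nodes: 6tb usable, no redundancy, each node gets its own disk’s iops
3 x 2tb in raid5:          4tb usable, survives 1 disk failure, ~1x hdd iops
4 x 2tb in raid6:          4tb usable, survives 2 disk failures, ~1x hdd iops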

9 posts were split to a new topic: And how i can make 4 nodes @ one pc?!

Well, the solution to the high error rate I previously had on the same hardware finally arrived. Some time ago I created a post describing my disastrous cancellation rate, and now it seems the numbers have turned around.

The screenshot shows all the data collected since a log cleanup at the beginning of the month.

2 Likes