Hi All,
Just wondering if anyone has done any performance testing on this scenario - I’ve seen some success rate examples but no direct disk benchmarks…? I’m trying to determine how my drives compare against a good baseline Pi 4 setup, but I haven’t found any decent comparisons on this forum yet…
Mine seems pretty slow, and my success rates are very ordinary, but I’m not sure what the optimal values to test against should be either…
As far as I understand, the success rate depends on many factors like location, internet speed, your setup, etc…
So I think there is no absolute baseline with which you can compare your results.
If you want to compare different setups, then I think you will have to test them locally at your location. Otherwise a comparison of the results won’t be really meaningful.
Mmm, you seem to get much better results than I do - I understand that many, many things influence the results.
I remain curious to compare the direct disk performance as opposed to Storj’s performance…
For example, if you have hdparm installed, compare some read speeds:
sudo hdparm -Tt /dev/sda
/dev/sda: (WD Red - 2TB)
Timing cached reads: 1682 MB in 2.00 seconds = 840.77 MB/sec
Timing buffered disk reads: 446 MB in 3.01 seconds = 148.28 MB/sec
/dev/sda:
Timing cached reads: 1598 MB in 2.00 seconds = 799.19 MB/sec
Timing buffered disk reads: 350 MB in 3.01 seconds = 116.37 MB/sec
/dev/sdb:
Timing cached reads: 1604 MB in 2.00 seconds = 802.27 MB/sec
Timing buffered disk reads: 394 MB in 3.01 seconds = 130.91 MB/sec
For the 1 TB SSD:
/dev/sda:
Timing cached reads: 1704 MB in 2.00 seconds = 852.18 MB/sec
Timing buffered disk reads: 1044 MB in 3.00 seconds = 347.87 MB/sec
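One thing worth noting is that hdparm only exercises reads. If you also want a rough sequential write figure for comparison, dd can give one; this is just a sketch, and the path below is an example - point it at a scratch file on the data drive you’re measuring (on a Pi, /tmp may be tmpfs, i.e. RAM, which would make the number meaningless for the disk).

```shell
# Rough sequential write benchmark: write 256 MB of zeros and force the data
# to disk (conv=fdatasync) so the reported rate includes the actual flush,
# not just the page cache. Replace /tmp with a directory on the drive under
# test before trusting the result.
dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=256 conv=fdatasync
rm /tmp/ddtest.img
```

dd prints the elapsed time and MB/s on completion, so you can line it up directly against the hdparm buffered-read numbers above.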
For me it doesn’t matter if the results are good or not; I won’t change anything. From what I know, what matters most is the distance between your node and the client that is using Tardigrade. So if you have a shitty success rate, it’s because you don’t have any clients nearby. You need to compare with another RPi 4 node in your country before taking any action.
Thanks for that @dann1 !
Those results are quite similar to mine, so that gives me some confidence my hardware combo is performing about the same as others’. As you say, it must be down to the laws of physics, local demand, etc.
Pretty sure they’re not SMR; I didn’t find them on that list. For economical reasons these are autopowered external drives, model Seagate STEL8000200 from 2016 - the cheapest external HDDs I could find, at £17/TB.
I’ve heard that too - no problems here… yet. Everyone talks about possible problems, but I’ve never known anyone who had, or has, an issue with an SMR hard drive. I will just wait and see.
SMR can be a problem during RAID rebuilds… but since you’re using (scary) RAID0, there isn’t an option to rebuild anyway. So… yeah, that’s a very bad idea. Please run a separate node per HDD instead.
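There’s no fully reliable command-line test for drive-managed SMR, but as a sketch you can at least dump what the kernel reports and then check the model string against the manufacturers’ published SMR model lists (device names below are just examples):

```shell
# Print each SATA/SCSI disk with the kernel's rotational and zoned flags.
# Host-aware/host-managed SMR shows up in "zoned", but consumer drive-managed
# SMR disks typically still report "none" -- so also grab the model string
# (e.g. "sudo smartctl -i /dev/sda" from the smartmontools package) and look
# it up in the vendor's SMR list before drawing conclusions.
for d in /sys/block/sd*; do
  [ -e "$d" ] || continue
  echo "$(basename "$d"): rotational=$(cat "$d/queue/rotational") zoned=$(cat "$d/queue/zoned")"
done
```

A drive that reports zoned=none can still be drive-managed SMR, so treat the model-number lookup as the authoritative check.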