Post pictures of your storagenode rig(s)

It’s mostly based on awk.
I’ve made a light version of it, but the main script is custom-made for my Ubuntu setup, so it may or may not work for you due to the timestamp format in the storagenode logfile.

Link → http://www.th3van.dk/scripts/success-rate-script-light.sh
(MD5 : 65fab5460f640b74653ae36b65a4ac3c) :wink:

root@server030:/disk102/storj/logs/storagenode# time ./success-rate-script-light.sh storagenode.log 
-------------------------  ---------------------- PUT --------------------  ----------------- PUT_REPAIR ------------------  --------------------- GET ---------------------  ----------------- GET_REPAIR ------------------  -------------- GET_AUDIT --------------
  Name            Date            Pieces      Success Rate        Bytes            Pieces      Success Rate        Bytes            Pieces      Success Rate        Bytes            Pieces      Success Rate       Bytes           Pieces    Success Rate    Bytes   
Server stats - 2023-03-30 -     126.849/126.961     99.912 %   67.387.105.024      6.259/6.260       99.984 %    4.586.976.256    131.714/172.722     76.258 %   15.445.626.368      6.786/6.786      100.000 %    4.210.595.584    4.892/4.892    100.000 %    1.252.352  

real    0m5,135s
user    0m5,003s
sys     0m0,132s

root@server030:/disk102/storj/logs/storagenode# cat storagenode.log | wc -l
682456
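
The heavy lifting is just awk counting log lines per operation type. A minimal sketch of the idea (the log phrases are assumptions about the default log format, where successful uploads log "uploaded" and failed ones log "upload failed"; the full script also covers GET, repair and audit traffic, plus the byte counts):

#!/bin/sh
# Sketch: PUT success rate from a storagenode log.
# The matched phrases ("piecestore", "uploaded", "upload failed") are
# assumptions about the default log format; adjust to your own logs.
awk '
  /piecestore/ && /uploaded/      { ok++ }
  /piecestore/ && /upload failed/ { fail++ }
  END {
    total = ok + fail
    if (total) printf "PUT: %d/%d pieces, %.3f %% success\n", ok, total, ok * 100 / total
  }
' "$1"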

Th3van.dk

1 Like

ext4 is consistently faster for storage node use than btrfs, even when you strip btrfs of all the features that make it better than ext4. I’ve run tests in which I replayed the exact I/O performed by a node: btrfs took twice as long to run it and was 3-4 times slower at the file walker process. btrfs for storage nodes sounds like a good choice only if you want your drives to die faster.

No idea about zfs though, I haven’t measured it.

2 Likes

If you’re looking for the system specifications, you can find them here → http://www.th3van.dk/SM01-hardware.txt

When there’s a new version of the storage node:

  • Update go to latest version
  • Update the local storj source using git fetch and git checkout
  • Compile the storagenode executables using “./scripts/release.sh”
  • Run a custom-made script that cycles through all 105 nodes, restarting one node every 822 seconds. This way all nodes should be restarted, and running the new storagenode version, within about 24 hours (a minimal sketch follows below).
    The benefit is that every node is down for only a few seconds, and the main server isn’t swamped with file walkers. Each node takes about 1-2 hours for a complete filewalk.
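
A minimal sketch of that rolling restart (the service names are hypothetical; the node count and the 822-second interval come from the steps above):

#!/bin/sh
# Restart 105 nodes one at a time, one every 822 seconds.
# 105 * 822 s = 86,310 s, so a full cycle takes just under 24 hours.
for i in $(seq 1 105); do
    systemctl restart "storagenode$i"
    sleep 822
done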

Th3Van.dk

The time for an upgrade has come for one of my machines. Which would be the best option for Storj, price aside, ranked as a top 1-2-3, among these?

  • WD Gold 20TB - 22TB
  • WD Ultrastar DC 560 20TB
  • WD Ultrastar DC 570 22TB
  • Seagate Exos X20 20TB
  • Toshiba MG10 20TB

I would choose Toshiba; they have a flash buffer, so any data in the buffer at power loss gets written to the platters when power is restored.
I don’t like Seagate; in my video surveillance statistics they fail more often.

4 Likes

Please, do not use BTRFS for your node: Topics tagged btrfs

1 Like

The WD 20TB also has a NAND buffer in case of a power outage.

1 Like

I will start a new topic about top HDD choices, to centralise all the options there, with specs, tests, reviews, etc. for whoever is interested in upgrades these days. First I must gather all the info I want.
Is there a way to insert tables (like xls) in posts?

1 Like

Synology’s implementation of btrfs over md with the lvm extension “as is” — definitely. I would not use it for anything else either, much less a storage node. It has both the drawbacks of conventional raid and the complexity of btrfs:

As you mentioned in one of the threads there (Node randomly restarting. Possibly high memory usage - #21 by Alexey), btrfs configurations other than mirror “are not considered stable”, and have been so for the past decade, whatever that means. Synology, however, did not “fix” anything: they avoided the problem by using conventional raid instead. So you have regular raid, then btrfs laid on top of it, and a volume manager extension for repairing bit rot. That last bit is what they innovated in DSM version 6.2.

I skimmed through those threads now, and had read a few of them before, and I don’t think I saw any attempts to tune the filesystem, such as disabling checksumming, turning off access-time updates, etc. You can get performance comparable to ext4 or better out of btrfs, just not on Synology: they put marketing first by picking the “huge list of features at low cost” approach, and their users pay for that design choice with lost performance.
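
For reference, this is roughly what such tuning looks like at the mount level (a sketch; the device and mount point are placeholders, and note that nodatacow also turns off data checksumming and compression for newly written files):

# noatime      - no access-time updates on reads
# nodatacow    - no copy-on-write for data, which also disables data checksums
# max_inline=0 - never embed small files in the metadata tree
mount -o noatime,nodatacow,max_inline=0 /dev/sdX1 /mnt/storagenode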

And separately, I agree their TiVo-style approach to using open source is a bit disgusting. They do release the changes back, but those changes are no more than hooks into their opaque shim layers, on an old and buggy version of the software. I had to go digging while trying to fix multiple issues they acknowledged but never actually fixed. (Yes, I strongly dislike Synology. I went from “oh, cool NAS OS and hardware” to profanities every time I had to interact with the boxes, in the span of two years.)

So I would rephrase your advice as “avoid non-mirror btrfs configs on Synology for hosting a storage node, especially on low-end hardware and without a lot of tuning”. btrfs itself is a fine FS.

The best hard drive is the one that provides the lowest cost per byte stored. All other metrics are irrelevant to varying degrees and much less impactful than the price.

The settings I’ve used are as follows:

If you can suggest better ones, please do, this would be very interesting.

1 Like

If you range-select in a spreadsheet (tested with Excel) and paste it into the post window, it will be auto-formatted with table markup.
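
For example, a pasted range comes out as ordinary forum table markup like this (the rows are just placeholders):

| Drive   | Capacity | Cache  |
|---------|----------|--------|
| Model A | 20 TB    | 512 MB |
| Model B | 22 TB    | 256 MB |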

Besides that, I’d just lean on publicly available disk reports, like the ones from Backblaze. Anything other than high-volume statistics is just an anecdote, so not useful at all.

I’ll need to review your post carefully to do it justice, but these don’t look unreasonable.

nodatacow effectively disables checksumming, and there is no reason to bloat the metadata tree with embedded file data, so max_inline=0 is fine too.

Which system was it? I.e., was this btrfs on a single disk, or an overlay on top of another storage solution?

I would add the following to your mount options (I don’t use btrfs anymore; this comes from my old notes): nodiratime, commit=300 (the default is 30), data=ordered, nobarrier. The last two may help when the storagenode’s write I/O grows to significant levels.
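
As a sketch, persisted in /etc/fstab that could look like the line below (the UUID and mount point are placeholders; note that option availability depends on the filesystem and kernel version, e.g. nobarrier was removed from btrfs in kernel 5.4, and data= is an ext4-style option):

# Hypothetical fstab entry for a btrfs node disk:
UUID=xxxx-xxxx  /mnt/storagenode  btrfs  noatime,nodiratime,nodatacow,max_inline=0,commit=300  0  0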

You can also go nuclear and set journal_data=writeback (undocumented), but do have a working UPS :slight_smile:

1 Like

Single disk, following official Storj recommendations.

The default on ext4 is commit=5; bumping it to 60 made a few percent of difference, IIRC. No idea how big it would be for btrfs, though I’d expect a similar effect.
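
If anyone wants to try it, the interval can be changed on a live ext4 mount (the mount point is a placeholder):

# Raise the ext4 journal commit interval from the default 5 s to 60 s:
mount -o remount,commit=60 /mnt/storagenode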

For the hardware I was using this makes no difference. nobarrier would probably make more sense on enterprise-grade drives; this was an old consumer laptop drive. Though, again, I’d expect the same effect on ext4.

That’s a new thing for me!

@Vadim
Yep, you’re right. The Toshiba MG10 20TB SATA 512e is the best option right now. Same price as the Exos, and 33% cheaper than the WD. Bigger cache than the Exos, power-loss protection, lower energy consumption. Its performance is the lowest, but for Storj that’s plenty, because all data is accessed over the internet, not locally.
It is also a 10-disk design; the WD and Exos are 9-disk.
I don’t know if this matters much.
There are 2 manufacturing locations for this drive, Japan and the Philippines.

1 Like

What are you doing here? This topic is for posting photos.

1 Like

[Photo: IMG_1611]

I have moved my home toy server to a new case (Fractal Node 804) and added new disks, so it currently has 3x18TB + 3x20TB Toshiba disks. The case has free space for 4 more disks, or 5 with a bit of tweaking of the case (there are currently 5 unused SATA ports).

It is running Debian on a Ryzen 5 4600 CPU with 32 GB of RAM. There are 6 SATA connectors on the mainboard + 6 on a PCIe expansion card.

To my surprise, it draws less than 60W, which is not ideal since it is powered by a Seasonic Focus+ 550W PSU (efficiency suffers at such a small fraction of the rated load)…

10 Likes

3D-printed mATX 6-bay case
6 × 3TB SAS HDDs


7 Likes

That looks really nice

Next time you gotta print that case in all purple :slight_smile:

1 Like