Post pictures of your storagenode rig(s)

Only 12TB used space

Of course… :upside_down_face:
I sometimes forget people have lives outside storj…
Nice rig though… :+1:

I think using different IPs

For anyone who wants to see my new rig:

7 Likes

200+ TB (40x 5TB SMR HDDs) of ZFS-based storage, holding data for 4 Storj nodes plus personal backup space.


4 Likes

Newest node - currently a 2TB disk, but I will be expanding when it gets near full. Most likely migrating to a 10TB drive.

2 Likes

Wow! You really nailed that cable management!


P.S. Why so high up? Or is it a basement?

8 Likes

You mean in 2 months :smile: … or even quicker.

1 Like

How does it behave? Storj is well known for the file walker process that checks all the small files on disk on every restart/update. It can be disabled, but if you haven't done that yet, how does the system perform?

I was thinking about going ZFS all the way, but ended up on different disks, one per node.
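
(Side note for anyone wondering how to turn the file walker off: the startup scan can be disabled in the node's config.yaml. A minimal sketch, assuming a recent storagenode release - check the flag name against your version:)

    # config.yaml - disable the piece scan ("file walker") on startup;
    # flag name as in recent storagenode releases, verify for your version
    storage2.piece-scan-on-startup: false

The node needs a restart for the change to take effect.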

Haha yea - I could have done better!

It's in a garage and needs to be out of the way :slight_smile:

Yea - most likely - already 10% full

Working fine, but the DBs are placed on another ZFS pool used for VMs and backed by SAS SSDs (Zeus IOPS).
I keep the file walker on for all nodes because it doesn't affect pool performance much. But it would be a good idea to turn it off in the future to speed up node startup after an upgrade.
You don't get any benefits from ZFS if you use only one disk (for a Storj workload); try XFS instead.
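
(If you want to do the same, the database location is configurable in config.yaml. A minimal sketch using the storage2.database-dir option; the path is just an example:)

    # config.yaml - keep the node's SQLite databases on a faster, SSD-backed pool
    # (path is an example; copy the existing *.db files there while the node is stopped)
    storage2.database-dir: /mnt/ssd-pool/storagenode-dbs
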

I would not recommend ZFS for large nodes (4TB+).
I only use ZFS for new nodes while they vet, then move them to an 8TB disk once they hit around 4TB. I use 4x 4TB in raidz1 (aka RAID5).
Every write and read has to touch multiple hard drives in the raid, which slows it down.
Just moving my storage node out of ZFS to ext4 takes at least 3-4x longer compared to an ext4->ext4 node migration for the same amount of data.
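
(For context, that kind of cross-filesystem migration is typically done with rsync in multiple passes - a sketch, with example mount points:)

    # first passes can run with the node still online (paths are examples)
    rsync -aH /mnt/zfs-node/storagenode/ /mnt/ext4-node/storagenode/
    # final pass with the node stopped, removing files deleted in the meantime
    rsync -aH --delete /mnt/zfs-node/storagenode/ /mnt/ext4-node/storagenode/
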

I use ext4, since ext4 generates all the inodes when you format the hard drive, which I think makes it slightly faster than XFS.
XFS creates inodes as you add files to it.


Did you know why system architects increased the number of spindles in storage arrays before the SSD era?

P.S. ext4 also has an inode limit, and it's not increasable without a reformat.
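
(To see how close a filesystem is to that limit, and to provision more inodes at format time - a sketch with example values:)

    # check inode usage on an existing ext4 filesystem
    df -i /mnt/storj
    # format with a smaller bytes-per-inode ratio to get more inodes
    # (16384 is an example; pick it based on your expected average file size)
    mkfs.ext4 -i 16384 /dev/sdX1
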

2 Likes

Use zfs send / zfs recv …
As long as you stay within ZFS, moving data to new media is something like 10 to 12x faster than using rsync with Storj data.
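
(A minimal sketch of that workflow, with example pool/dataset names:)

    # snapshot the dataset and replicate it to the new pool
    zfs snapshot tank/storagenode@migrate
    zfs send tank/storagenode@migrate | zfs recv newtank/storagenode
    # stop the node, then send only the changes since the first snapshot
    zfs snapshot tank/storagenode@final
    zfs send -i @migrate tank/storagenode@final | zfs recv -F newtank/storagenode
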

Not when you want to move data zfs->ext4

Well, ext4 doesn't really need much of a manual; with ZFS you can just ask it to go faster by doing zfs set sync=disabled.

There are also a lot of other hardware considerations; if you have enough RAM and other activity on the media being migrated, ZFS can be much faster than ext4.

It's not really a binary answer whether one is faster than the other - it depends on a lot of conditions.

ZFS can be a bit demanding;
ext4 is nice, simple, and lightweight.

1 Like

@Th3Van
Iā€™m following your project. Respect.
Is it possible to share with us the success-rate script that you run every hour? Is it based on ReneSmeekes', or something faster and lighter?

1 Like

Could you share more info about the systemd units you use for running on bare metal, and how you update the nodes?
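
(For anyone looking for a starting point, a minimal sketch of such a unit - the user, paths, and binary location are assumptions, not Th3Van's actual setup:)

    # /etc/systemd/system/storagenode.service (example values throughout)
    [Unit]
    Description=Storj storagenode
    After=network-online.target
    Wants=network-online.target

    [Service]
    User=storj
    ExecStart=/usr/local/bin/storagenode run --config-dir /var/lib/storagenode
    Restart=always
    RestartSec=10

    [Install]
    WantedBy=multi-user.target
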

A few comments.

Before disabling sync, a few less drastic measures can be taken:

  1. Access Time updates can be disabled (almost halves the IO): zfs set atime=off pool/dataset
  2. Cache metadata only: zfs set primarycache=metadata pool/dataset

But for a storagenode - indeed, turning off sync is absolutely fine. Data loss is not a problem, and you won't have any if you have a UPS in the first place.
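
(To apply and then confirm all three settings on a dataset - the dataset name is an example:)

    # apply per-dataset, not pool-wide
    zfs set atime=off tank/storj
    zfs set primarycache=metadata tank/storj
    zfs set sync=disabled tank/storj
    # verify the current values
    zfs get atime,primarycache,sync tank/storj
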

ext4 is ancient history. There are no benefits to running ext4 on a system that can support ZFS, especially in rather demanding scenarios (random access to a massive number of small files).

(I remember reading an interview with one of the ext4 developers years ago where they recommended using BTRFS or ZFS for all new deployments whenever possible. I could not find it now to link here.)

2 Likes