Yea. I have no idea why, but I really have an urge to build a low-power N100 server.
Hah, I thought you’d have found (and bashed) the xpenology project a long time ago.
I ran it briefly a while back, before getting my first Synology boxes.
I started with a whitebox build running xpenology, then upgraded to some thrifted two-bay PoS → RS819 → DS1819+ → RS3617xs, and then lastly a DS3622xs+.
Xpenology is great if you just want the Synology ecosystem.
The SHR implementation, while terrible for performance, is a great way to mix and match drives for maximum capacity; DSM is pretty polished, and most of the community apps are good enough for most people.
Why you’d choose it today over some of the more polished options (I’m only looking at ZFS-based solutions), I don’t really know - but if you just want Btrfs, cache optimization, and a familiar environment, I think it could be an alright contender.
The N95 is better, performance-wise.
SHR and bit-rot recovery were done before Synology went all-in on marketing, and I commended their efforts in DSM 6.1 quite a bit.
Yes, DSM is eye candy, but then why not buy a Synology box? It’s like running a Hackintosh on shoddy hardware: missing the point and half the value.
You can do that with ZFS too: create a bunch of partitions to fill the space and build vdevs out of them. Art of Server even made a video about this, and I even watched it.
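Roughly like this, assuming (hypothetically) two 8 TB and two 4 TB drives; the device names and partition sizes below are made up for the sketch:

```
# Split each 8 TB disk into two ~4 TB partitions so every piece in
# the pool is the same size. (Placeholder devices; use
# /dev/disk/by-id paths for a real pool.)
sgdisk -n1:0:+3725G -n2:0:0 /dev/sda
sgdisk -n1:0:+3725G -n2:0:0 /dev/sdb

# Pair partitions across different disks: one dead disk then degrades
# two mirrors but kills neither.
zpool create tank \
  mirror /dev/sda1 /dev/sdb1 \
  mirror /dev/sda2 /dev/sdb2 \
  mirror /dev/sdc  /dev/sdd
```

The catch compared to SHR/Unraid is that ZFS won’t reshuffle that layout for you afterwards; you commit to it up front.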
With live rebalancing if you change the drive setup? Would be nice to see this in ZFS.
Why… I mean, who needs that?
This feature is useful for giving old, crappy disks a second (albeit very low-quality) life, not for evolving a whole ecosystem around it!
But even then: old disks go to eBay, different old disks of the right sizes come from eBay. That’s the right approach, not Unraid or SHR.
Unraid implemented this feature from scratch, essentially built the whole company around it, and by now they have adopted ZFS too.
Small business, as a cost-saving measure. I’ve seen this in action, and it was a good thing: the place had an old but still decent 12-slot box filled with 4 TB drives, and a friend was helping them out. Thanks to the feature, they could replace drives almost one by one as requirements slowly grew over time, with no downtime and minimal sysadmin involvement from my friend. Delaying the CAPEX was apparently worth it there.
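For what it’s worth, plain ZFS supports the same rolling, disk-at-a-time upgrade, just without the mixed-size flexibility: a vdev only grows once every member has been replaced with a bigger disk. A rough sketch with invented pool/device names:

```
# Let the vdev expand automatically once all of its disks are larger.
zpool set autoexpand=on tank

# Swap one disk for a bigger one and wait for the resilver to finish
# before doing the next; the pool stays online the whole time.
zpool replace tank /dev/sdc /dev/sde
zpool status tank   # watch resilver progress
```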
Here’s mine! Runs 4x StorJ nodes, Chia farming (trying to move all that to StorJ but it’s slow going), a custom-built bridge for home automation to Apple HomeKit (Shelly, EV charger balancing, energy and water logging, etc), Plex media server and Teslamate logger.
Specs:
- Intel Core i7
- 64 GB RAM
- 1x 1TB SSD
- 4x 14TB HDD
- 2x 16TB HDD
OS is Ubuntu Server. All services run in Docker containers. Data disks are on ZFS, mostly because it’s easier to manage that way. Only the Plex data is mirrored. Power consumption is 65 W, which I think is incredible, especially given the number of HDDs! That was one light bulb back in them olden days.
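For a rough idea of the plumbing, the Plex container is wired up something like this (image tag and dataset paths are placeholders, not the exact setup):

```
# Placeholder paths/tags; the media dataset is the mirrored one.
docker run -d --name plex \
  --network host \
  --restart unless-stopped \
  -e TZ=Etc/UTC \
  -v /tank/plex/config:/config \
  -v /tank/media:/data:ro \
  plexinc/pms-docker
```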
Is that a Rosewill 4U case? Clean setup!
Thanks! It’s actually a UNYKAch 4129; got it for $150 on Amazon, which I thought was incredibly cheap compared to other rack-mountable chassis. If I were buying today, I’d get something with hot-swappable drive bays on the front instead - not needed often, but right now it is incredibly cumbersome to get an HDD out…
Ah, there seem to be a few places selling cases from the same factory. I also regret not getting hot-swap on more of my cases: like you said, replacing drives may be rare, BUT it’s really nice to do so without shutting everything down and tearing a system apart!
Hot swap is good, but it needs stronger airflow for cooling. I built the case internals myself and HDD temperatures dropped from 40 °C to 27-30 °C; each case can hold 18-20 HDDs.
They were old GPU mining cases.
Now most of my nodes look like this:
If I get a single comment about fire hazards, I swear I’m building my next enclosure completely from wood, cardboard, and nothing else.
That is a fire hazard.
In the upper left corner I see something about an automatic water dispenser. So, definitely no fire hazard there.
Wait, servers don’t spontaneously burst into flames?
But I really like the cardboard idea; it’s cheap and simple. And it fulfilled its purpose.
I myself have a tower for all my HDDs (a Fractal Define 7 XL, if anyone wants to know), and I printed the HDD brackets with the 3D printer I have at home.
Synology just reversed their “only Syno drives allowed” policy for the newest lineup. What a dumb policy to begin with.
Janky nodes are back on the menu, lads! I love this.
And on Wi-Fi, no less. How did you handle the load test last year?
The node is still young. I don’t know about the load tests.