I would like to replace my little home server with one or two Pi5s to reduce electricity consumption.
I will go with the 8GB version; the extra RAM will be useful for caching metadata. I was also planning on taking the NEO 5 case with the NVMe slot and adding a 16GB Intel Optane module as a secondary metadata cache. 16GB might be on the lower end, but they are cheap and fast. I want to give it a try and see how fast things like garbage collection will be.
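For reference, a minimal sketch of how I could wire that up if I go with ZFS and an L2ARC restricted to metadata (pool and device names are placeholders, not my final setup):

```bash
# Add the 16 GB Optane as an L2ARC cache device (pool "storj" and
# /dev/nvme0n1 are placeholder names, adjust to the real setup):
sudo zpool add storj cache /dev/nvme0n1

# Keep only metadata in the L2ARC so the small 16 GB module is not
# flooded with data blocks:
sudo zfs set secondarycache=metadata storj

# Check cache hit rates later with:
arcstat 5
```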
Now I am missing some kind of HDD case. There are too many options available and I can't decide which one is best. On the lower end I could go with an HDD docking station; there are different sizes available, up to 5 hard drives. Or a full enclosure, also in different sizes up to 5 hard drives. Any recommendations?
This one is the one I have on my Pi5. Single disk.
Not too expensive, not too ugly, UASP compliant and as it has a USB-C hub you could daisy-chain another one to it if you run out of USB3 ports on your Pi5.
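If you want to double-check that the enclosure really negotiates UASP once it's plugged into the Pi, a quick look at the USB tree is enough:

```bash
# "Driver=uas" means UASP is active; "Driver=usb-storage" means the
# slower Bulk-Only Transport fallback is being used instead.
lsusb -t
```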
This.
If you don't need the GPIO, you would be better off with an N100.
I run some nodes on a €70 N100 mini PC as a test.
So far very smooth; power consumption under full load is not even 20 watts.
Not to pile on, but that's how I use my Pis now too. If I don't need GPIO-connected-to-a-network… I use something else. Lots of x64 configs only sip power too.
Temperature is fine. The whole case is metallic so it’s one big heatsink, really.
That's the only case I ever bought from them, so I can't really speak for their multiple-HDD cases.
For my first drive: the ICY BOX IB-377-C31.
It is not the same model you linked. This one is a bit cheaper but doesn't have the built-in USB hub for chaining them together. I don't need the USB hub for the first drive, but I will take the model with the built-in hub when extending the setup later.
I might skip extending the setup later. It depends a bit on how fast I can get the filewalker to run on a Pi5. If it looks like I need more drives, say 4, I will upgrade to the ICY BOX IB-3805-C31.
It comes with a 120mm fan, so it should have good cooling and hopefully still be quiet enough for my living room.
There are also some other 4+ bay enclosures, not only from this manufacturer, but most of them have 60mm, 80mm or 92mm fans. I don't like small fans and will go with something that has at least a 120mm fan.
My long term plan is to eventually set up 2 systems at my friends' homes. It doesn't make sense to set up a 4-drive system on day 1. Instead I will start with this single drive first, let that run for a few weeks and test how stable their internet connection and router are. If all works fine I can replace it with 4 drives and use the 1-drive setup for the next person. Like a vetting system I set up at their location to see if it makes sense to give them a permanent installation with more drives.
I am not happy with either. I ran them on a Pi4 (4GB), a Dell Wyse 5070 with 16GB, and even on a 128GB RAM machine with a Ryzen 5950X and a modern mobo. On all of those servers, the performance for STORJ was really not good. The USB connection limits the possible IOPS severely compared to a SATA connection, so all the drives were always 100% utilized and filewalkers didn't finish before the next update kicked in.
I moved all my nodes to SATA connections now and don't have any issues anymore.
For other use cases where I don’t store millions of very small files, they work fine though.
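If you want to see the bottleneck yourself, watch the disk while a filewalker runs; on my USB setups the drives sat at 100% utilization with very little actual throughput:

```bash
# %util near 100 combined with low rkB/s / wkB/s is the classic sign
# of an IOPS-bound disk (iostat is in the sysstat package):
iostat -x 5
```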
Yes, I guess sharing one USB lane for more than 1 or 2 drives would only result in that sort of problem in a use case like Storj.
I am still hoping for some sort of PCIe-to-SATA bridge for the new Pi5; that would be ideal (but I'm not sure it'll ever happen).
Well, I guess I will soon find out. The hardware is already here, but setting everything up is a challenge for me. I now realize that Pi OS is great for beginners, but the moment you want to install it on an NVMe drive with a GPT partition table it gets complicated.
I did go with the 5-bay enclosure in the end, just to speed up the testing period a bit. My payout is increasing fast these days, so I am willing to risk it.
Oh, that’s interesting.
I thought it would be trivial to install and boot from the NVMe drive; it's just a bootloader setting, isn't it?
At least with RaspberryPiOS, I think it is…
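If I remember correctly, the NVMe part is the BOOT_ORDER setting in the EEPROM (double-check against the official docs for your firmware version):

```bash
# Edit the bootloader config:
sudo rpi-eeprom-config --edit

# ...and set the boot order so NVMe (6) is tried first, then SD (1),
# then USB (4), then restart (f). The nibbles are read right to left:
BOOT_ORDER=0xf416
```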
Yes, that part is easy. But how do you install Pi OS with a GPT partition table? The installer will make it MBR without asking you; there is no such option.
Edit: This limits the NVMe SSD to 2TB, since MBR's 32-bit sector addresses at 512 bytes per sector top out at 2 TiB. I am using a 1TB SSD, but I ran into this problem once with my current AMD system: I installed on MBR first and later wanted to clone the OS onto an 8TB SSD. So taking the easy path backfired once, and this time I don't want to run into that trap again. Also, I need more partitions than MBR provides, which is another problem I don't want to work around. A GPT partition table is the way to go.
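The workaround I am looking at (device names below are examples, and this is a sketch, not tested advice): flash Pi OS as usual, then convert the table from MBR to GPT in place and fix the PARTUUID references, ideally while booted from an SD card with the NVMe attached:

```bash
# Convert the MBR table to GPT in place; partition contents stay untouched:
sudo sgdisk --mbrtogpt /dev/nvme0n1

# The PARTUUIDs change with the table type, so look up the new values...
sudo blkid /dev/nvme0n1p1 /dev/nvme0n1p2

# ...and update the references on the NVMe drive:
#   root=PARTUUID=<new p2 value>   in cmdline.txt
#   PARTUUID=<new values>          in etc/fstab
```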
I will turn this into a blueprint/guide for others later on. The first HDD is connected and running, with a high success rate and at full speed. Now it just needs to keep that performance even with 4 drives. Let's start the migration of the next drive.
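For the migration itself, a sketch of the usual rsync approach (paths and container name are examples, assuming the node runs in Docker):

```bash
# Copy while the node is still running; repeat until a pass finishes quickly:
rsync -aH --info=progress2 /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/

# Then stop the node and do one final pass so both sides match exactly:
docker stop -t 300 storagenode
rsync -aH --delete --info=progress2 /mnt/old-disk/storagenode/ /mnt/new-disk/storagenode/
```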
The first results are looking promising. The node is running at full speed and I still end up with a high success rate. Now I need to run some filewalker benchmarks to find out how fast the Pi5 can run them with the metadata cache. That will also give me some numbers to calculate how big the metadata cache needs to be.
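As a rough proxy for a filewalker run I will time a full metadata walk over the blobs directory (paths are from my setup; note that on ZFS, dropping the page cache does not fully empty the ARC, so export/import the pool for a truly cold run):

```bash
# Drop the Linux page cache for a (mostly) cold-cache run:
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

# Stat every piece file and count them, similar to what the
# used-space filewalker has to do:
time find /mnt/storj/storagenode/storage/blobs -type f -printf '%s\n' | wc -l
```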
Edit: lol, now I did celebrate it twice. So exciting
How much load is on the node right now and how large is the node?
Most of my new nodes are blazingly fast when all caches are empty, all disks are empty, CPU and RAM are completely free and the filewalkers don't really have anything to do.
The node I am currently benchmarking is 5TB in size with at least 20 million pieces. My plan is to move my hard drives one by one over to the Pi5. I have 8 drives available. I would expect that the Pi5 can handle maybe 6 drives or so. We will see. One of the nodes will be 10TB in size.
Edit: So far the performance is excellent. With sync=disabled, the size of the node shouldn't matter. Download performance will suffer for sure, but there is no incentive to optimize for that. Upload performance should stay the same even when the node gets bigger. At some point garbage collection might take forever; my hope is that the SSD cache keeps that in check, but let's wait for the test results first.
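For completeness, the setting in question (the dataset name is a placeholder). With sync=disabled, writes are acknowledged from RAM and only hit the disk with the regular transaction group commit every few seconds, so it trades crash safety for upload latency:

```bash
# Disable synchronous writes on the node's dataset (placeholder name):
sudo zfs set sync=disabled storj/node1

# Verify:
sudo zfs get sync storj/node1
```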