Raspberry Pi 4 with SATA HAT

That’s a great result! That’ll rival dedicated NAS devices. What did it end up costing you? This looks to be a great build for a multi-HDD, multi-node system.

I’d choose a PSU with a sufficient number of SATA power connectors, actually…

I think it works better this way because of the shucked drives.
They have a dodgy low-voltage pin, I think (the 3.3 V pin that newer SATA power specs repurpose as power-disable, which keeps many shucked drives from spinning up).

1 Like

150 EUR (VAT incl.) for these:

  • MB: Asus PRIME A320M-K
  • CPU: AMD Athlon 200GE
  • RAM: 1 x 8 GB DDR4 2666 MHz A-Data
  • SSD: 240 GB Kingston UV500 M.2 (2280)
2 Likes

The peak power consumption is at boot time - when all the drives and fans are spinning up. As long as the system boots, I believe the current PSU will be fine. If it stops booting when I add yet another drive, I will think about this problem then.
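As a rough sanity check of that spin-up peak (typical 3.5" HDD figures, assumed rather than measured on this build):

```
# per 3.5" HDD during spin-up (typical datasheet values, assumed):
#   ~2.0 A @ 12 V + ~0.7 A @ 5 V  ≈  27-28 W
# five drives spinning up together ≈ ~140 W transient,
# on top of whatever the board, CPU and fans draw at boot
```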

I was talking about the PSU not having enough physical connectors, so you have to use adaptors - a common source of error.

This thing is not any sort of high-end PSU, but it will of course provide enough current to power up the drives, even if you add plenty more :slight_smile:

Oh, yes. It’s a 10-15 year old PSU. It has only one SATA power connector. I already used an adapter to split one Molex into 2 SATA connectors. I will use more of these for sure.

Cheaper PSUs of this age tend not to be the most reliable; you should probably consider getting a 30-40 USD one once you have made some tokens with your node :slight_smile:

SeaSonic and Enermax are generally good picks no matter the price point, but depending on where you live, other brands might provide better bang for the buck.

This is a thing I can buy and replace within a day. I am not worried about bringing the whole system down for a couple of days. I am careful about issues that could DQ nodes.

1 Like

Yeah, I know this is the way to go; just keep in mind that a cheap, defective PSU might damage your fancy new hardware.

Also, most of the older power supplies aren’t good at handling the very low idle power draw of newer CPUs (this started with Intel’s 4th-gen Core CPUs and applies to AMD as well), but if you haven’t experienced irregular shutdowns yet, you’ll be good :slight_smile:

Not meaning to be elitist/“teacher-like” (a term we use in Germany, not sure how to translate that properly), just trying to help and explain some things…

1 Like

I am really open to hearing about potential problems in advance. Your advice is great.

1 Like

If you plan to run 4-5 nodes there, you will need 8+ GB of RAM, better 16 GB; Windows starts to slow down for some reason even when it shows that the RAM is not full. I have a lot of experience running 5-8 nodes on one Windows PC.

I am running non-GUI Debian 10 on this system. The current RAM utilization is 461 MB for the OS plus 2 nodes running in Docker (I will launch a 3rd one soon). The remaining RAM is used for file buffering.
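If anyone wants to reproduce that breakdown, standard Docker and procps commands will show it:

```
# per-container memory of the running storagenodes
docker stats --no-stream

# system-wide view: "used" is OS + containers, "buff/cache" is file buffering
free -h
```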

Is Debian automatically caching files in RAM when there is free RAM? Or do you have to configure that somehow?

Because there is no GUI, I reduced the GPU RAM to the minimum of 64 MB in the BIOS. This made 2 GB more RAM available for general use.

1 Like

AFAIK, all Linux distros do it by default (it’s the kernel’s page cache). You can’t really switch it off, but you can flush it with a magical command you can google.
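For reference, the “magical command” is presumably the kernel’s drop_caches knob (run as root); a minimal sketch:

```
# flush the page cache plus dentries/inodes once; it refills immediately
# as files are read again - there is no switch to disable caching outright
sync && echo 3 > /proc/sys/vm/drop_caches
```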

:grinning: ok thanks for the hint

do you use zfs on your nodes?

No, I use ext4. My strategy is one HDD - one storage node.
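For anyone wanting to replicate the one-HDD-one-node layout, a minimal sketch (device name, mount point and UUID are placeholders):

```
# one ext4 filesystem per drive, one mount point per node
mkfs.ext4 /dev/sdb
mkdir -p /mnt/node2

# mount by UUID so device reordering cannot mix up the nodes
blkid /dev/sdb                      # note the UUID it prints
echo 'UUID=<uuid> /mnt/node2 ext4 defaults,noatime 0 2' >> /etc/fstab
mount /mnt/node2
```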

I am not familiar with zfs. Is there any advantage over ext4 for running storage nodes?

I guess it depends on how you define advantages in this case.
ZFS will give you increased data security due to additional checksums: if you run a regular scrub (a file-integrity check), you will notice corrupted files earlier, and some failures can be repaired thanks to those checksums. So you might notice a failing drive earlier (although SMART should do that too), and if you copy all files to a new location, you already know which files are corrupt and whether it’s worth the effort or too much is gone already.
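For example, a scrub is a one-liner (the pool name `tank` is just a placeholder):

```
# walk all data in the pool and verify every block against its checksum
zpool scrub tank

# progress, plus per-device read/write/checksum error counters
zpool status -v tank
```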

Additionally, you could hook up an SSD that acts as a write cache for database writes, decreasing the load on the HDD. You could also use the SSD as a read cache, but that’s only really helpful for the DB (which sits in the RAM cache anyway when using ZFS) and for the test downloads of files that were just uploaded (as they would still reside in the cache for a while).
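In ZFS terms those caches are a SLOG (separate intent log) and an L2ARC; roughly like this, with pool and device names as placeholders:

```
# SSD partition as a separate intent log (SLOG) - accelerates synchronous
# writes such as database fsyncs, not bulk asynchronous writes
zpool add tank log /dev/disk/by-id/<ssd>-part1

# a second partition as a read cache (L2ARC)
zpool add tank cache /dev/disk/by-id/<ssd>-part2
```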

Additionally, you could activate compression (which is computationally cheap), which might give you a little more space (but probably not much, since Storj data is encrypted and therefore hardly compressible).
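Enabling it is a single property per dataset (dataset name is a placeholder; lz4 is the usual low-cost choice):

```
# enable lightweight compression - only affects data written from now on
zfs set compression=lz4 tank

# later, check how much space it actually saved
zfs get compressratio tank
```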