Raspberry Pi 4 with SATA HAT

Ran into a video the other day where someone had released a SATA HAT for the RPi4 that can support 4x 2.5" drives: https://www.youtube.com/watch?v=Eix0PCB0byQ

Wonder if anyone in the community has got one of these and tested it.

That’s a neat little thing…
not really optimal for storage, though it could certainly work.

4 drives minus redundancy, so let’s call it 3, and then 2.5" drives are like 25% more expensive than 3.5" for the same capacity.
That puts you at 1/4 less drive space, and then comes the added latency and low CPU power…

On the upside it doesn’t take much space, nor does it use any power worth talking about…
Would be interesting to actually go through the entire math on that, see how it would hold up theoretically, and then test it afterwards.
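
Here’s a minimal sketch of that math in Python, using made-up numbers (a $100 4 TB 3.5" drive and a ~25% price premium for the 2.5" equivalent are illustrative assumptions, not real prices):

```python
# Rough cost-per-usable-TB comparison for the numbers guessed above.
# Prices are illustrative assumptions, not real quotes.

def cost_per_usable_tb(drive_tb, price_per_drive, n_drives, n_redundant):
    """Price per TB of usable space, with n_redundant drives lost to redundancy."""
    usable_tb = drive_tb * (n_drives - n_redundant)
    return (price_per_drive * n_drives) / usable_tb

# Assumed: 4 TB drives, a 3.5" at $100 and a 2.5" at a ~25% premium.
print(cost_per_usable_tb(4, 100, 4, 1))   # 3.5" setup: ~$33.3 per usable TB
print(cost_per_usable_tb(4, 125, 4, 1))   # 2.5" setup: ~$41.7 per usable TB
```

Under those assumptions the 2.5" build ends up ~25% more expensive per usable TB; the real gap depends entirely on the prices you plug in.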

Depends on how much data you want to store, and what you are doing with it. If you are running a Storj node then you don’t want redundancy.

According to https://diskprices.com/, a good 2.5" drive is only about 8% more expensive, as long as you are storing less than 16 TB total raw. Past that you are right, it gets more expensive.

It seems like the board itself can support 3.5" drives, just not the case.

There is something really attractive about such a small device packing 4 HDDs. It’s a very affordable way to get some redundancy going without it becoming a massive device.

I don’t personally have a use for it, but I definitely see the appeal.


I like redundancy…

And yeah, devices like this make me worried that my old server might be so fucked when it comes to wattage usage per live TB stored.

I mean, that little thing filled with drives most likely uses somewhere between 5% and 10% of what my server does… idle… which is kinda scary for my profits xD
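
For a rough sense of what that gap could mean in money, a back-of-envelope sketch (the idle draws and electricity price below are pure assumptions, not measurements):

```python
# Back-of-envelope yearly power cost; every number here is an assumption.
KWH_PRICE = 0.30                  # assumed electricity price per kWh

def yearly_cost(watts):
    """Cost of running a constant load for a year."""
    return watts * 24 * 365 / 1000 * KWH_PRICE

old_server_idle_w = 150           # assumed idle draw of an old rack server
pi_plus_drives_w = 10             # assumed Pi 4 + four 2.5" drives

print(f"old server: ~{yearly_cost(old_server_idle_w):.0f}/yr")  # ~394
print(f"Pi + HAT:   ~{yearly_cost(pi_plus_drives_w):.0f}/yr")   # ~26
# 10 W vs 150 W is ~7%, right in the 5-10% range guessed above.
```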

2.5" drives will use more power per capacity than 3.5" drives for large drives.

Seems unlikely… at least if one compares equal-sized drives. But yeah, in a big server / datacenter, comparing drives of the same generation and taking transfer speeds and such into account…

I’m all for 3.5", but after I got my 12-bay server setup, the whole redundancy thing works a lot better with 24 bays rather than 12… and then in other servers, with like 4 or 8 x 3.5" bays vs 8 or 16 x 2.5" bays,
I really see why people like them so much in some servers.

And then on top comes the future upgrade option of reusing old 2.5" SSDs.
But I digress… I’m sure you are right from some perspective, but I’m also sure an argument can be made for 2.5"; it’s certainly not a question with an easy answer.

2.5" was from the time of 10k and 15k RPM HDDs, now they are only useful for SSDs with the new U.2 slots. 2.5" drives are too low capacity for the new age. If you have stocks of old 2.5" drives or a way to get them cheaply, that’s great, it just makes no sense if you’re buying new.
Talking about hard drives only here, to clarify.

I would say the upside is mostly the small package. A 4-bay 3.5" HDD NAS can quickly take up quite a bit of space, but something like this you can easily stick in a corner somewhere.


Well, all mechanical drives are pretty much doomed; spinning rust might come back later for capacity, but for now affordability and durability are all they survive on.

And really, SSDs are silicon, so they will beat down the cost with ease.

I don’t think it’s really about storage cost in the server cases… or maybe that’s just how it used to be; not like I can afford the high-end stuff of today, so I try not to look and drool too much. xD

From what I’ve understood, 2.5" HDDs were used because you could fit twice the number of drives in basically the same space as 3.5"… which means that in a 2U or 1U server you would only use 1/3 or 2/3 of the front for drive bays.

That would give you 4 drives, meaning you can run a RAID 10, giving you a 2/3 chance of surviving a second disk failure while trying to recover from the first… it’s all about redundancy for your critical data; cost is far down the line…
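
That 2/3 figure checks out: a 4-drive RAID 10 is two mirrored pairs, and after the first failure the array only dies if the second failure hits the partner of the dead drive, i.e. 1 of the 3 remaining disks. A quick simulation to confirm:

```python
import random

# Sanity check of the 2/3 figure: 4-drive RAID 10 is two mirrored pairs,
# (0,1) and (2,3). A second failure kills the array only if it hits the
# mirror partner of the first failed drive.
def survives_second_failure():
    first = random.randrange(4)
    partner = first ^ 1                    # 0<->1, 2<->3
    second = random.choice([d for d in range(4) if d != first])
    return second != partner

trials = 100_000
rate = sum(survives_second_failure() for _ in range(trials)) / trials
print(f"survival rate: {rate:.3f}")        # ~0.667, i.e. 2/3
```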

I suspect it’s much the same logic behind this HAT setup (what does HAT even mean? apparently “Hardware Attached on Top”): 4 drives are basically the minimum if you want to be fairly certain your data is semi-reliably stored.

Hell, that might even be why a desktop PC’s SATA controller often supports 4 drives, or why SAS lanes come in 4s.

4x 3.5" would also be pretty noisy on a desk, even tho they are so silent today compared to when one spins up a really old one… maybe HAMR will give magnetic spindle drives more room, most likely corona delayed now tho… :confused:

And wow, one can get a 30 TB 2.5" SSD these days… I’m just not going to even attempt to find the price…

But I guess that little thing might be able to hold 120 TB of storage lol keel me nau

I will get this soon, and maybe a Rock Pi with a HAT for it. I will test it for you and send a report here afterwards.
I’m in close contact with a dealer who is running some tests too.


Well, I know it has been stated many times that the network has redundancy incorporated, so from a losing-files-on-the-network point of view you don’t need to add another layer of redundancy. But from the SNO point of view it is a different story. I just entered 100% payouts and am heading closer to receiving the held amount. I would not like to lose files, be disqualified, lose the held amount of STORJ, and start over after such a long period of time. On a different station that serves me as a media hub, I got a bad IronWolf that died after 2 months of light use - a drive that is designed for a much bigger workload and for arrays in particular. So I got another one from Seagate, which died after 2 weeks…

Cool finding though!


SSD pricing scales quite linearly; when I last checked, Samsung 15 TB SSDs were $10K, so 30 TB would probably be around $20K. The highest capacities are always a bit more expensive, as the manufacturers of such devices know that when you are at a capacity limit you will spend even more per GB rather than upgrade a whole machine. (Sometimes “more” means newer technology in the same packaging, but then everything “less” is old, making a baseline price :slight_smile:)
I like to think about it this way - for most of us, being at a capacity limit means a bigger computer case or another shelf in a rack; for hyperscalers it means another building (with extra personnel, cooling, racks, security, equipment etc.) - they will pay extra to squeeze more GBs and compute under the same roof.

What makes Storj stand out is that it doesn’t have these scaling issues. There is always a lot of unused bandwidth at the ISP’s end, and an SNO most of the time will fit a “bigger case” in his or her room. Also, SNOs scale horizontally quite easily :slight_smile:


Yeah, the SSD cost is basically the chip production cost, which will go down… I was reading about that 30 TB one, and apparently the data density is like 3 times what mechanical HDDs get… and since blasting lasers at silicon is most likely much cheaper than making mechanical HDDs, in the near future SSD tech might make HDDs obsolete for value capacity storage.

My server has a meek 45 TB capacity; of course its price point is multitudes below $20K, but another factor for SNOs is that stuff stored on SSDs can bring down the power bill… in my server’s case that’s basically a lost cause, because it’s an old power hog, which is why I’m trying to go for decent capacity and performance to attempt to offset the extra costs.

I could see a world where a storage node was a tiny box / SATA HAT like this on someone’s desk, with 100 TB capacity and basically no power bill.

From a logical point of view YES, but manufacturers will keep this disproportion as long as they can - it is like petrol cars vs electric ones. In the end it is cool that you have options at different price points; then you can choose the optimal one for a scenario.

If you are looking for cost optimisation, I would recommend looking into downgrading other components rather than upgrading disks to more expensive SSDs.

Recently I tried to downgrade components as much as I could. I got the cheapest CPU available - a 1.6 GHz Celeron (with one core LOL… my phone is faster than this) and it was enough. Then you could downsize your motherboard - going down in size costs more than a bigger version, but it saves on electricity in the long run.

A PSU performs most efficiently when utilised at around 50%, so size it at <YOUR-POWER-USAGE> times 2. Those are also cheap. I use a 350 W unit, which is still too much, but I got it cheap. RAM can be undervolted (same as the CPU).
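
As a trivial sketch of that sizing rule (the 35 W draw and the list of common PSU sizes are just assumed examples):

```python
# Sketch of the 50%-load sizing rule; the measured draw is an assumed example.
measured_draw_w = 35                   # substitute your own wall-meter reading
target_psu_w = measured_draw_w * 2     # load the PSU near its ~50% sweet spot

common_sizes_w = [150, 250, 300, 350, 450, 550]
pick = min(s for s in common_sizes_w if s >= target_psu_w)
print(f"target ~{target_psu_w} W -> pick a {pick} W unit")
```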

Minimise your OS processes. Linux and similar UNIX-based systems are best at this. As this machine is used for Storj only, you don’t need a GUI - unless Storj is a side project on your main workstation.

The length of Ethernet cables can waste a lot of energy, especially in large quantities (think datacenter). In a home environment it is still worth considering keeping your wires as short as possible; you can place your node next to the router, as this is the closest place to get to the internet :)

Hope it helps. Good Luck!

Nope, the SATA connectors are spaced way too narrowly to fit 3.5-inch drives directly on the board.

You can, you’d just have to use cables.

Plus the documentation says you’d need an external ATX PSU to support 3.5-inch HDDs, though…


Well, the case is most likely one of the more expensive parts of the SATA HAT kit… so turning it into a ghetto rig would be a bit of a pity.
Lots of cheaper options for that.

This board and similar ones don’t support 3.5" for a very simple reason - not enough power from USB, even if it is USB 3.0.
3.5" drives need both 5 V and 12 V, and when you take amps into account you are way over budget - USB 3.0 is 900 mA max. That is why 3.5" enclosures have an additional power supply that you plug into a power outlet.
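
A back-of-envelope budget makes the gap obvious (the per-drive figures below are typical assumptions, not specs for any particular model):

```python
# Back-of-envelope power budget; per-drive figures are typical assumptions,
# not datasheet values for any specific model.

usb3_budget_w = 5 * 0.9                    # USB 3.0 port: 5 V at 900 mA = 4.5 W

hdd_25_spinup_w = 5 * 1.0                  # typical 2.5" HDD spin-up: ~1 A on 5 V
hdd_35_spinup_w = 5 * 0.7 + 12 * 2.0       # typical 3.5": 5 V logic + ~2 A on 12 V

print(f"USB 3.0 budget:     {usb3_budget_w:.1f} W")      # 4.5 W
print(f'2.5" HDD spin-up:   {hdd_25_spinup_w:.1f} W')    # tight, but workable
print(f'3.5" HDD spin-up:   {hdd_35_spinup_w:.1f} W')    # far over budget - and USB has no 12 V rail anyway
```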
