Actually, it’s probably very overrated.
I would say an SSD array for these purposes most probably always is.
I’m running 17 HDDs over a single USB 3 5Gbps bus, for 17 nodes. It works totally fine, especially given that the workload is almost entirely random IO (far slower per drive than the 100-150 MB/s they can do when reading or writing sequentially). Besides, the 2.5Gbps Ethernet port is already a smaller bottleneck than the 5Gbps bus (except when it’s running full duplex and saturated).
Given an ingress of 100-500 GB/day and only a fraction of that as egress, Storj traffic seldom exceeds 50 Mbps / ~6.5 MB/s.
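To put those numbers in perspective, here’s a quick back-of-the-envelope calculation (Python; the ingress figure is the worst case quoted above, and the bus/NIC numbers are nominal link rates, not measured throughput):

```python
# Rough headroom check: worst-case daily ingress versus the NIC and USB bus.
GB = 1e9          # bytes
DAY = 24 * 3600   # seconds

ingress_bps = 500 * GB * 8 / DAY                      # 500 GB/day in bits/s
print(f"500 GB/day ~= {ingress_bps / 1e6:.0f} Mbit/s")  # ~46 Mbit/s

nic_bps = 2.5e9   # 2.5 GbE link rate
usb_bps = 5e9     # USB 3 Gen 1 signalling rate (usable payload is lower)

print(f"NIC headroom: ~{nic_bps / ingress_bps:.0f}x")   # ~54x
print(f"USB headroom: ~{usb_bps / ingress_bps:.0f}x")   # ~108x
```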
Why? See above.
The only reason I can think of is that my children can accidentally disconnect the drives, and that I need powered USB hubs to keep it all working.
The usually quoted reasons are lack of SMART and other side-channel communication support, excessive power and resource consumption by the USB mass storage stack, or garbage firmware in the USB enclosure that claims but doesn’t properly implement features like UAS. This can lead to data loss and/or bad performance.
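As a side note, if you want to test whether a particular enclosure actually passes SMART through, something like this works on Linux (a minimal sketch assuming smartmontools is installed and the bridge supports SAT passthrough; /dev/sda is just an example device):

```python
import subprocess

def smart_health(dev: str = "/dev/sda") -> str:
    """Ask smartctl for the overall health verdict through a USB-SATA bridge.

    '-d sat' tells smartctl to use SCSI-to-ATA translation, which many
    (but not all) USB enclosures support; if the bridge doesn't, smartctl
    reports an error instead of SMART data.
    """
    result = subprocess.run(
        ["smartctl", "-H", "-d", "sat", dev],
        capture_output=True, text=True,
    )
    for line in result.stdout.splitlines():
        if "overall-health" in line:
            return line.split(":")[-1].strip()   # e.g. "PASSED"
    return "no SMART data (bridge may not support SAT passthrough)"

print(smart_health())
```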
I see… It essentially boils down to how you see your task as a SNO.
So my stance is: why bother with them? Even the worst SMART data wouldn’t influence me, because if the drive dies, it dies. Storj already covers redundancy and repair.
Although all my appliances seem to be able to show SMART data anyway.
Alright, I probably got lucky then. Including the mini-PC, the whole setup uses about 100W.
I also can’t imagine why it would use too much power, especially since standby mode is useless in this context anyway.
I’ve found all the ones I’ve tested to be questionable, and the ones I’m using were the best of the lot. Yes, my nodes have worked with them for 3.5 years, but there have been issues along the way.
First of all, I had to disable UAS to get decent throughput; with UAS enabled I was only getting 10 MB/s in testing (a sketch of how to disable it is below this list).
Random drive resets, sometimes just from brushing against the shelf they’re sitting on. Then I have to log on, remount the drive, and restart the storagenode.
Many recoverable HDD errors in syslog that I initially thought were from a bad drive, but a web search said they were from a bad USB controller.
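For reference, disabling UAS on Linux is usually done through the usb-storage quirks mechanism (the `u` flag means “ignore UAS” for a given vendor:product ID). Below is a minimal sketch, assuming the standard sysfs layout, that prints a ready-made quirk line for every attached device currently bound to the uas driver; the printed option goes on the kernel command line (or into a modprobe option for usb-storage).

```python
from pathlib import Path

# Walk sysfs and build a usb-storage quirk entry ("u" = ignore UAS)
# for every USB device whose interfaces are currently using the uas driver.
quirks = []
for dev in Path("/sys/bus/usb/devices").iterdir():
    vid, pid = dev / "idVendor", dev / "idProduct"
    if not (vid.exists() and pid.exists()):
        continue  # interface entries and the like have no VID/PID
    uses_uas = any(
        (intf / "driver").resolve().name == "uas"
        for intf in dev.glob(f"{dev.name}:*")     # interface dirs, e.g. 1-4:1.0
        if (intf / "driver").exists()
    )
    if uses_uas:
        quirks.append(f"{vid.read_text().strip()}:{pid.read_text().strip()}:u")

if quirks:
    print("usb-storage.quirks=" + ",".join(quirks))
```

After adding the option and rebooting, those devices fall back to the plain usb-storage driver instead of UAS.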
The only issues I had were random resets, but they were always gone as soon as I made sure the drives were properly powered: roughly 5W per bus-powered HDD and 3W per bus-powered SSD.
So with all those issues, I’m wondering whether you used powered USB hubs or something.
Sometimes topics come by in which people seem to think that a Raspberry Pi can power several bus-powered drives, which is just a recipe for failure. But I’m quite sure that even a Raspberry Pi can handle at least 10 storagenodes. As far as my experience goes, if you just use ext4 or xfs (and not memory-hungry file systems like zfs), even a low-end device like a Raspberry Pi can be part of a rock-solid setup.
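The power budget makes that concrete. The figures below use the rough 5W/3W per-drive numbers quoted earlier plus the nominal USB per-port limits, not measurements:

```python
# Nominal per-port budget on the 5 V rail: USB 2.0 allows 500 mA (2.5 W),
# USB 3.x allows 900 mA, i.e. about 4.5 W.
USB3_PORT_W = 5 * 0.9

drives = {"HDD": 5.0, "SSD": 3.0}   # rough steady-state draw quoted above;
                                    # HDD spin-up peaks are noticeably higher

for name, draw in drives.items():
    verdict = "within budget" if draw <= USB3_PORT_W else "over budget"
    print(f"{name}: ~{draw} W vs {USB3_PORT_W} W per USB 3 port -> {verdict}")

# Several drives on one unpowered hub share a single port's budget:
print(f"4 HDDs on one unpowered hub: ~{4 * drives['HDD']} W "
      f"vs {USB3_PORT_W} W available")
```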
I haven’t tried a powered hub. The USB SATA adapters I have each have a 12V input and provide their own 5V supply. The wall warts that came with my adapters weren’t powerful enough to start my drives, so I switched to a 12V 10A PSU.
I saw those; that’s why I was curious about the model. It’s very nice to have the power cables from the adapters free, with no connectors on them, so you can connect them to whatever power source you want. I have a similar PSU for my surveillance system, and it’s very DIY style, with screw terminals and a tuning screw to adjust the output voltage.
Phew, almost finished with the grand house move. I need to rethink this rack location; it looks like shit.
For clarification, because someone will surely ask: yes, all of the drive bays in the bottom disk shelf are indeed empty. Before, I ran a bunch of different disks in all bays: some 4TB, some 20TB, and a lot in between. I kept having performance issues no matter how nice the disk arrays I built were, so I bit the bullet and did three things.
I admitted to myself that instead of being smug, I should just go ahead and follow the documentation: no RAID arrays, single disks for single nodes.
Sell all small disks and use the funds to buy larger disks.
Move the storage of each node onto its own disk.
In the picture below, I’ve finished decommissioning the middle disk shelf, have moved all data from both shelves to the top unit (now with 8x 20TB disks and 4x 2TB caching disks), and am in the process of selling the disks in the lowest disk shelf. The 20TB disks each have a 400GB read/write cache in front of them, and everything is grand. Power usage is much lower because there are fewer disks, noise is much lower, both because there are fewer disks and because the ones that remain don’t work as hard, aaaand performance is through the roof.
Hello, I got the opportunity to host a new node at my workshop, so I went all out and built a server. One node for now, more coming once it fills up though.
I have a similar server case and found that I had problems with vibrations when mounting the hard disks with those plastic strips in the cages. The hard disks would lock up randomly.
At first I thought it was the controller or the cables, but after buying these backplanes from Icybox the problem went away.