Post pictures of your storagenode rig(s)

Actually, it’s probably very overrated.
I would say an SSD array for these purposes almost always is.

I’m running 17 HDDs over one shared USB 3 5 Gbps bus, for 17 nodes. It’s totally fine, especially given that it’s all random IO (far slower per drive than the 100-150 MB/s they’re capable of when reading or writing sequentially). Besides, the 2.5 Gbps Ethernet port is already a narrower bottleneck than the 5 Gbps bus (except when full duplex and saturated).

Given ingress of 100-500 GB/day and a fraction of that as egress, Storj seldom exceeds 50 Mbps (about 6 MB/s).
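A quick back-of-the-envelope check of those numbers (the 100-500 GB/day figure is from the post above; the rest is just unit conversion):

```python
# Back-of-the-envelope: what does 100-500 GB/day of ingress mean in Mbps?
SECONDS_PER_DAY = 24 * 60 * 60

for gb_per_day in (100, 500):            # daily ingress range quoted above
    megabits = gb_per_day * 8000         # 1 GB = 8000 megabits (decimal units)
    print(f"{gb_per_day} GB/day ≈ {megabits / SECONDS_PER_DAY:.1f} Mbps sustained")

# Prints ~9.3 and ~46.3 Mbps -- far below both a 2.5 Gbps NIC and a shared
# 5 Gbps USB 3 bus, so per-drive random IOPS is the limit, not bandwidth.
```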

Why? See above.
The only reason I can think of is that my children can accidentally disconnect the drives, and I need powered USB hubs to keep everything working.

The usually quoted reasons are lack of SMART and other side-channel communication support, excessive power and resource consumption by the USB mass storage stack, or garbage firmware in your USB enclosure that claims features like UAS but does not implement them properly. This can lead to data loss and/or bad performance.
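Whether a given bridge passes SMART through is easy to test. A minimal sketch, assuming Linux with smartmontools installed (the device path is a placeholder, not from this thread); many bridges only answer when the SAT pass-through device type is requested explicitly:

```python
#!/usr/bin/env python3
"""Check whether a USB-SATA bridge passes SMART through (hypothetical sketch)."""
import subprocess

DEVICE = "/dev/sdX"  # placeholder -- point this at your USB-attached disk

for dev_type in ("auto", "sat"):
    result = subprocess.run(
        ["smartctl", "-H", "-d", dev_type, DEVICE],
        capture_output=True, text=True,
    )
    print(f"--- smartctl -d {dev_type} (exit code {result.returncode}) ---")
    print(result.stdout or result.stderr)
```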

What do you mean by excessive power consumption of the USB mass storage stack?

I see… It essentially boils down to how you see your task as an SNO.
My stance is: why bother with them? Not even the worst SMART data would influence me, because if the drive dies, then it dies. Storj already covers the redundancy and repair.

Although, all my appliances seem to be able to show SMART data.

Alright, I probably got lucky then. Including the mini-PC, the whole setup uses about 100 W.
I also can’t imagine why it would use too much power, especially since standby mode is useless in this context anyway.

What brand and model are your USB SATA adapters?

I’ve found all the ones I’ve tested to be questionable and the ones I’m using were the best of those. Yes, my nodes have worked with them for 3.5 years, but there have been issues along the way.

  • First of all, I had to disable UAS to get decent throughput; with UAS enabled I was only getting 10 MB/s in testing (the sketch after this list shows how to check which driver is actually in use).
  • Random drive resets, sometimes just from brushing against the shelf they’re sitting on. I then have to log on, remount the drive, and restart the storagenode.
  • Many recoverable HDD errors in syslog that I initially thought were from a bad drive, but a web search suggested they came from a bad USB controller.
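On the UAS point: on Linux, UAS is typically disabled per bridge with a usb-storage quirk (for example adding usb-storage.quirks=<vid>:<pid>:u to the kernel command line, with the vid:pid taken from lsusb). A small sketch, assuming the usual sysfs layout, to see which driver actually got bound to each USB disk:

```python
#!/usr/bin/env python3
"""Report whether each USB-attached disk uses the 'uas' or 'usb-storage' driver.

A sketch assuming Linux and the usual sysfs layout.
"""
import glob
import os

for blockdev in sorted(glob.glob("/sys/block/sd*")):
    devpath = os.path.realpath(blockdev)
    if "/usb" not in devpath:
        continue                      # skip non-USB disks (SATA, NVMe, ...)
    driver = "unknown"
    path = devpath
    while path != "/":
        link = os.path.join(path, "driver")
        if os.path.islink(link):
            name = os.path.basename(os.path.realpath(link))
            if name in ("uas", "usb-storage"):
                driver = name
                break
        path = os.path.dirname(path)  # walk up towards the USB interface
    print(f"{os.path.basename(blockdev)}: {driver}")
```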

Adapters like:

The only issues I had were random resets, but they always went away as soon as I made sure the drives were properly powered. Figure roughly 5 W per bus-powered HDD and 3 W per bus-powered SSD.

So, I’m wondering with all those issues whether you used powered USB-hubs or something.

Sometimes topics pass by in which people seem to think that a Raspberry Pi can power several bus-powered drives, which is just a recipe for failure. But I’m quite sure that even a Raspberry Pi can handle at least 10 storagenodes. As far as my experience stretches, if you just use ext4 or xfs (and not memory-hungry file systems like ZFS), even low-end devices like a Raspberry Pi can be part of a rock-solid setup.
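The bus-power point is easy to put numbers on. A rough sketch, using the per-drive wattages quoted above (the port rating is the nominal USB 3 spec value, added here as context):

```python
# Rough 5 V budget for bus-powered drives, using the per-drive figures above.
HDD_WATTS, SSD_WATTS, VOLTS = 5.0, 3.0, 5.0

def amps(n_hdd, n_ssd=0):
    return (n_hdd * HDD_WATTS + n_ssd * SSD_WATTS) / VOLTS

print(f"3 bus-powered HDDs need ~{amps(3):.1f} A at 5 V")  # ~3.0 A
# A single USB 3 port is nominally rated for 0.9 A, so a few spinning drives
# on one Pi or unpowered hub are well over budget -- hence the powered hub.
```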

I haven’t tried a powered hub. The USB SATA adapters I have each have a 12v input and provide their own 5v supply. The wall warts that came with my adapters weren’t powerful enough to start my drives so I switched to a 12v 10A PSU.
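That sizing sounds plausible; a rough check (the ~2 A spin-up figure below is a typical 3.5" datasheet value, an assumption rather than something measured in this thread):

```python
# How many 3.5" drives can a shared 12 V, 10 A supply spin up at once?
SPINUP_AMPS_PER_DRIVE = 2.0   # assumed typical datasheet spin-up draw at 12 V
PSU_AMPS = 10.0

print(f"~{int(PSU_AMPS // SPINUP_AMPS_PER_DRIVE)} drives can spin up simultaneously")
# A 1-2 A wall wart cannot reliably start even one drive plus the adapter's
# own 5 V conversion, which matches the experience above.
```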

I saw those; that’s why I was curious about the model. It’s very nice to have the power cables from the adapters free, with no connectors on them, so you can connect them to whatever power source you want. I have a similar PSU for my surveillance system, and it’s very DIY style, with screw terminals and a trim screw to adjust the voltage.

Phew, almost finished with the grand house move. I need to rethink this rack location; it looks like shit.

For clarification, because someone is sure to ask: yes, all of the drive bays in the bottom disk shelf are indeed empty. Previously I ran a bunch of different disks in all bays, some 4 TB, some 20 TB, and a lot in between. I kept having performance issues no matter how nice the disk arrays I built were, so I bit the bullet and did three things.

  1. I admitted to myself that, instead of being smug, I should just go ahead and follow the documentation: no RAID arrays, one disk per node.
  2. Sell all the small disks and use the funds to buy larger ones.
  3. Move each node's storage onto its own disk.

In the picture below, I’ve finished decommissioning the middle disk shelf, have moved all data from both shelves to the top unit, now with 8x 20 TB disks and 4x 2 TB caching disks, and am in the process of selling the disks in the lowest disk shelf. The 20 TB disks each have a 400 GB read/write cache in front of them, and everything is grand. Power usage is much lower because there are fewer disks, noise is much lower, both because there are fewer disks and because the ones that remain don’t work as hard, aaaand performance is through the roof.

13 Likes

RPI 3 Model B
10TB HDD
makes about $10/month and only uses ~10 W of power
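For scale, the electricity cost is a quick calculation (the price per kWh below is an assumed example, not from the post):

```python
# Rough monthly power cost of a ~10 W Pi + HDD node.
WATTS = 10
PRICE_PER_KWH = 0.30                      # assumed example price in $/kWh

kwh_per_month = WATTS * 24 * 30 / 1000    # ~7.2 kWh
print(f"~{kwh_per_month:.1f} kWh/month ≈ ${kwh_per_month * PRICE_PER_KWH:.2f}")
# ~$2.16/month in power vs ~$10/month earned, under these assumptions.
```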

6 Likes

Isn’t that too much for Storj? Is all the space used?

No, I use my homelab for much more than just StorJ

1 Like

You can’t have too much. You have no idea :slight_smile:

4 Likes

Hello, I got the opportunity to host a new node at my workshop, so I went all out and built a server :slight_smile: One node for now, more coming when it fills up, though.

8 Likes

stor-jay :stuck_out_tongue:

20 characters

3 Likes

Haha, sorry, the Danish can get in the way :smiley:

1 Like

I have a similar server case and found that I had problems with vibration when mounting the hard disks with those plastic strips in the cages. The hard disks would lock up randomly.

At first I thought it was the controller or the cables, but after buying these Icybox backplanes the problem went away:

https://www.amazon.co.uk/gp/product/B0193X3FYC

Just keep that in mind in case you run into similar problems.

2 Likes

Oh damn, that’s good to know, thanks a lot!

1 Like

Thanks a lot for the tips!

A few points:
The OS is Windows, as I have another need for it besides Storj :slight_smile:

Docker will not be used for hosting the node; I will use the Windows service.

Docker is only for small stuff like watching stats.

Vibration is an issue, yes.
Currently the server rests on some PC foam from some GPU boxes.

I will set things up and watch the temps of the HDDs; I can always increase the airflow if needed.

RAID I don't really see the need for in such a simple setup.

IPMI would be nice, but I've got a lot of custom stuff that means once I'm booted into Windows I can monitor everything from anywhere in the world.

And I live 800 m from my workshop where the server will be, so not too bad.

I hope the drives don't die so fast. Is it because of temperature that you think they will die fast, or vibration?

Thanks again!

2 Likes