Post pictures of your storagenode rig(s)


1x 14 TB on a SATA II controller (soon to be on a USB 3 PCIe card), OS on a PCIe NVMe 512 GB SSD, Xeon 3.3 GHz, 32 GB ECC RAM, circa late 2010, original owner, very fast SSD. This is one of 3 nodes: two are behind the same WAN IP, and the pictured node is on a different WAN IP off the Jersey Turnpike. Lighter is required for party favorites.

6 Likes

I had already posted picture(s) a while back of my first node and the first few iterations, experimenting with different cases to keep the RPi cool.

But thought I’d share a recent picture. I have two nodes, both running on Raspberry Pi 4 (4GB versions) each with a WD Elements HDD attached. Node 1 has a 10TB HDD and Node 2 has a 12TB HDD.

The RPis are in the following cases, and they work super well at keeping the temperatures at ~35 °C:

The WD Elements HDDs have SMART capabilities, and looking at the temperatures of those drives, they were averaging around 44-49 °C, which I was a little concerned about. So just recently I purchased the little fan and set it up behind them, pointing at the back air vents. It's a little 8 cm USB fan that I have plugged into a USB hub on my desk. So far it has brought the temperatures of both HDDs down below 40 °C, and they've been averaging between 35-38 °C over the past day or two since I've had the fan going.

Here’s a link to the fan:

11 Likes

Little upgrade for the home balcony rack. Storj is running on the middle server, with 4 nodes and 8 more slots that are waiting to be populated with 3 TB drives. Same setup at the office.

12 Likes

This fan comes straight from another century…and I don’t mean the upcoming one :wink:

3 Likes


Using some rubber grommets I randomly found, in combination with the plastic HDD decoupler standoffs, more than halved the noise of this 7200 rpm drive.

The drive is now freely standing on the floor of the case :slight_smile:

11 Likes

Hah, that thing looks like it’s going to walk away. I like it! :slight_smile:

7 Likes

The “eyes” (rubbers on the front face of the drive) actually prevent it from touching the motherboard tray/aluminum panel, since it is installed in that direction and might slip closer to it after some days of vibration :smiley:

2 Likes

Don’t forget about the “grounding” cable

You can easily run both nodes on the same rpi :slight_smile:
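
Roughly, that just means two containers, each with its own identity, storage directory, and forwarded port. A minimal sketch, assuming the standard storjlabs/storagenode image (the wallets, addresses, paths, sizes, and ports below are placeholders, and the one-time SETUP=true run is omitted):

```sh
# Node 1 - adjust wallet, address, paths, and ports to your own setup
docker run -d --restart unless-stopped --name storagenode1 \
  -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
  -e WALLET="0xYourWallet" -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.example.com:28967" -e STORAGE="9TB" \
  --mount type=bind,source=/mnt/node1/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/node1/data,destination=/app/config \
  storjlabs/storagenode:latest

# Node 2 - same image, but its own identity, data directory, and a different external port
docker run -d --restart unless-stopped --name storagenode2 \
  -p 28968:28967/tcp -p 28968:28967/udp -p 14003:14002 \
  -e WALLET="0xYourWallet" -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.example.com:28968" -e STORAGE="11TB" \
  --mount type=bind,source=/mnt/node2/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/node2/data,destination=/app/config \
  storjlabs/storagenode:latest
```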

1 Like

Have you checked whether they will turn on after a power failure?

I can't confirm that. I have the RPis and HDD power supplies connected to a relatively large UPS that is also shared with a lot of other devices (NAS, etc.). Luckily I've been home and awake the past few times we've lost power, so when that happens my UPS beeps loudly and I typically just come in and shut down the nodes manually. The USB comm port of the UPS is connected to the NAS, so the NAS will shut itself down automatically after a certain amount of time on battery power.

Although, I can say that when I reboot the RPis, the HDDs typically come back online by themselves (i.e. it doesn't require me to press the power button on the back of the HDD). However, I have seen a few instances where my Node 2 RPi (Rev 1.2) has required me to press the power button on the HDD, whereas I never seem to have to do that on the Node 1 RPi 4 (Rev 1.1). I know I've discussed this on another thread before, but I believe it has to do with some of the minor changes made between the revs of the board with respect to the USB power supply compatibility issues. Here's an article about it, but there are quite a few if you Google it: Raspberry Pi 4 Rev 1.2 Fixes USB-C Power Issues, Improves SD Card Resilience - CNX Software - Embedded Systems News

Here are a couple of images showing the minor hardware changes between the two different revs of the board:


1 Like

“STEERAGE Node”, started 08/28/2020. This is my second node, finished with proceeds from STORJ tokens:
Gigabyte Z370-HD3P, Intel Core i5-9400 (9th gen), 16 GB RAM, PNY SSD for the OS (Windows 10 Pro 1909), data dir on a WD DC HC530 14 TB SATA III spindle, and here is the ugly part,

As soon as I can, I will kill that RGB stuff. Almost forgot: it's a Docker node.

1 Like

You’re not building a new system for every node though, are you? You could easily run several nodes in that system. Time to get some good use out of all those sata ports and drive bays. Speaking of which, please screw in that HDD. It just hanging there is making me nervous. :wink:

1 Like

No I'm not. I have a total of 3 nodes that were started spread out over the years; this last one and the first one I posted are behind the same WAN, and the Mac node is on a separate subnet. My goal is to have high-capacity nodes. As for the Dell R710 node: if I can't affordably overcome the PERC H700 1GB controller's 6 TB limitation, I will GE it and just have two 40 TB nodes on separate subnets. PS: I had 1880 tokens when the price hit 0.60xxx.

You’ve got 2 PCIe bays!

That board should fly… but I wouldn’t slow it down with MS Windows :slight_smile:

Add a bunch more RAM, fill the 2 PCIe drive bays… and you could earn some nice HIVE:

https://hive.blog/steemit/@khanhsang/a-quick-guide-on-how-to-mine-steem-on-windows

1 Like

I would not run Windows for nodes… especially to run Docker… Linux is so much more stable.

3 Likes

You could sell this combo and replace it with something less energy consuming (and make 200 USD in the process)

Okay, I finally got my racks cleaned up a little bit after some upgrade/reorg!

Top Part: Computing & Routing
Rack_Top

  • 1 of 2 firewalls that provide highly available access to the WWW
  • UBNT 48-port PoE switch, provides management and connectivity to the house.
  • 3x HP DL380p G8 (2x 2680 v2, 10× 2.8 GHz, 256 GB RAM, 2x 40 Gbit/s networking each)
  • 3x HP DL380 G9 (2x 2680 v4, 14× 2.4 GHz, 256 GB RAM, 2x 40 Gbit/s networking each)

Bottom Part: Storage/SAN and Power Backup
Rack_Bottom

The two 4U servers share the HGST 4U60G2 as shared storage. The HGST chassis is filled with 56x 12 TB SAS drives, as well as 4 SAS SSDs for caching. Together they form a highly available ZFS cluster that provides iSCSI storage volumes to the 6 HP servers, which run VMware (a rough sketch of that kind of pool layout is below). In addition, each of the two 4U servers has a couple of older arrays of drives that I reuse for various things.
There is also an older 48-port PoE switch as a replacement for the UBNT in case of failure, as well as 2 lower-spec HP servers for testing.
The 2 UPSes at the bottom are APC SMT3000s, which feed into two PDUs at the back of the chassis.
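
A minimal sketch of what such a pool and an iSCSI export can look like (device names, sizes, and target names are made up for illustration; the actual layout has more vdevs, and the HA/failover part between the two heads is not shown):

```sh
# Hypothetical example: one raidz2 data vdev, SAS SSDs as L2ARC, and a zvol exported over iSCSI
zpool create tank raidz2 /dev/disk/by-id/scsi-drive1 /dev/disk/by-id/scsi-drive2 \
  /dev/disk/by-id/scsi-drive3 /dev/disk/by-id/scsi-drive4                  # repeat for additional vdevs
zpool add tank cache /dev/disk/by-id/scsi-ssd1 /dev/disk/by-id/scsi-ssd2   # SSD read cache (L2ARC)
zfs create -V 10T -o volblocksize=64K tank/vmware-lun0                     # block volume for the VMware hosts
targetcli /backstores/block create vmware-lun0 /dev/zvol/tank/vmware-lun0  # expose the zvol via LIO
targetcli /iscsi create iqn.2020-01.local.san:vmware-lun0                  # create the iSCSI target
```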

What I missed in the picture is the Arista 40 Gbit/s switch in the back of the rack, which, together with another one in the rack on the other side of the wall, provides my backbone. Each server is connected with at least 1x 40 Gbit/s to each of those switches.

This is 1/3 of my setup; as mentioned, another rack similar to this is on the other side of the wall, and another one is in a colocation.
My personal storagenode (only 1!) is running on the pictured setup and is currently willing to take up to 60 TB of data. Once that node is nearly full, I'll add the next one, etc.

17 Likes

Impressive, and now I don't have to ask or read about capacity limitations :wink:. If I may ask, how does one overcome backup ISPs with different WAN IPs without the use of a DDNS provider?

BGP is the keyword here :+1:
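
In other words: with your own ASN and a provider-independent prefix, you announce the same prefix to both ISPs, so the node stays reachable at the same address no matter which uplink is up. A minimal sketch using FRR's vtysh (the ASNs and addresses below are placeholders from documentation ranges, not a working config for any real ISP):

```sh
# Hypothetical FRR sketch: announce one provider-independent prefix to two upstream ISPs
vtysh \
  -c 'configure terminal' \
  -c 'router bgp 64512' \
  -c 'neighbor 203.0.113.1 remote-as 64500' \
  -c 'neighbor 198.51.100.1 remote-as 64510' \
  -c 'address-family ipv4 unicast' \
  -c 'network 192.0.2.0/24' \
  -c 'exit-address-family'
```

If one session drops, the other ISP keeps carrying the prefix, so no DDNS update is needed.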

1 Like