What OS is this based on?
I am using Windows 10 Pro on all my nodes
One node per PC, or more?
One node per PC
That is some serious networking level, sir.
Two self-assembled storagenode servers. Each is a 5-bay HDD chassis with a 120mm fan on the front, four 6TB drives, and an RPi4 mounted on a 2.5" to 3.5" adapter plate. Each is powered by a single 12V 10A power supply. Each drive has its own node, so each Pi4 is running 4 nodes. Watchtower is configured to update the storagenode Docker image automatically.
Finding decent USB3 SATA adapters and hubs was a challenge. Some cheaper SATA adapters spew errors to syslog, which makes it seem like your disk is going bad, but it's just the cheap adapter. These USB3 "gator" brand adapters off eBay work very well. I only had to disable UAS with a kernel boot parameter.
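For anyone wanting to do the same, disabling UAS for a specific adapter is usually done with a `usb-storage.quirks` kernel parameter. The vendor:product ID below is a placeholder; the poster didn't say which ID their adapters use, so substitute whatever `lsusb` reports on your system:

```shell
# Find the adapter's vendor:product ID (152d:0578 below is only an example):
lsusb

# Append a quirk to the kernel command line (/boot/cmdline.txt on
# Raspberry Pi OS; the whole file must remain a single line):
#   usb-storage.quirks=152d:0578:u
# The ":u" flag tells the kernel to skip the uas driver for that device
# and fall back to plain usb-storage.

# After a reboot, verify which driver is bound:
lsusb -t   # the adapter should now show "Driver=usb-storage", not "uas"
```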
These have been operational for over a year, and I luckily haven't had to perform any maintenance in months. The Raspbian OS runs on an SD card, which gave me some trouble last summer. The storagenodes write a lot of logs, which wore out the first set of SD cards. I disabled all logging from being written to disk in the Docker configuration.
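One way to stop Docker from writing container logs to the SD card (not necessarily the poster's exact configuration) is to disable the logging driver per container. The container name, volume path, and image tag here are placeholders:

```shell
# Hypothetical storagenode run command -- the point is the logging flag;
# --log-driver none discards stdout/stderr instead of journaling it to disk.
docker run -d --name storagenode \
  --log-driver none \
  -v /mnt/disk1/storagenode:/app/config \
  storjlabs/storagenode:latest
```

Alternatively, `"log-driver": "none"` in `/etc/docker/daemon.json` applies the same setting to all containers on the host.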
In my basement, which stays cool, the disks stay between 36 and 40 °C.
Really cool!
I have some questions:
Is even the Pi powered from the 12V supply?
The Pi already has 4 USB ports, so why add the hub? And isn't it better to keep UAS enabled?
Can't one of the disks be used to boot the OS, to get rid of the SD card?
The Pi4 is powered with a 12V to 5V USB Type-C converter.
The Pi4 only has 2 USB 3 ports; the other two are USB 2, which transfers much more slowly.
Yes, UAS would be better, but either the adapters or the Pi4 caused UAS to perform worse than usb-storage and to log a lot of read errors to the syslog.
I did consider switching to booting from a disk after the first SD card corruption, but it would have been more work than replacing the card and turning off logging. The Pi4 also supports network PXE boot! If I ever build more of these, I'll look into switching to that, and I won't need any boot disk, just a boot server.
I'm not totally without logging, because I did set Docker to send logs over UDP; I just haven't configured a permanent log server to store them yet.
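Shipping logs over UDP like that is done with Docker's syslog logging driver. A sketch, with a placeholder collector address since the poster didn't share theirs:

```shell
# Forward container logs to a remote syslog collector over UDP instead of
# writing them locally. Name, image, and address are placeholders.
docker run -d --name storagenode \
  --log-driver syslog \
  --log-opt syslog-address=udp://192.168.1.50:514 \
  storjlabs/storagenode:latest
```

Note that with UDP transport, logs are silently dropped if no collector is listening, which matches the "no permanent log server yet" situation above.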
You can also just write them to their disks: How do I redirect my logs to a file? | Storj Docs
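Per those docs, the node's log destination is controlled by the `log.output` option in its `config.yaml`. A rough sketch for a Docker setup, with placeholder paths (edit the file with an editor rather than appending if the key already exists):

```shell
# Stop the node, point log.output at a file on the node's own disk,
# then start it again. /mnt/disk1/storagenode is a placeholder host path.
docker stop storagenode
echo 'log.output: "/app/config/node.log"' >> /mnt/disk1/storagenode/config.yaml
docker start storagenode
```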
My setup is currently based on a single CL3100 server hosting ten nodes, each on a dedicated HDD whose size varies between 2 and 6 TB:
Until November 2021, these drives were hosted by my old tower computer, with a very reliable drive mounting system:
The raw drives were then exposed to my main server through iSCSI.
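The poster didn't say which iSCSI target software they used; on Linux, one common choice is the in-kernel LIO target managed with `targetcli`. A sketch of exposing one raw drive (all names, IQNs, and device paths are placeholders):

```shell
# Register the raw disk as a block backstore, create a target, and map
# the backstore as a LUN on its default portal group.
targetcli /backstores/block create name=node1disk dev=/dev/sdb
targetcli /iscsi create iqn.2021-11.local.example:node1
targetcli /iscsi/iqn.2021-11.local.example:node1/tpg1/luns \
    create /backstores/block/node1disk
targetcli saveconfig
```

The initiator on the main server then logs in to that target and sees the drive as a local block device.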
I am now exploring the possibility of migrating my nodes to a new Ceph cluster (no overengineering at all), allowing me to significantly increase the storage capacity allocated to Storj and to allocate some of it to other projects.
afaik Ceph doesn't have the best IOPS, but I am very interested in hearing how that turns out.
IOPS is the bane of running multiple nodes…
+1
I like creative and reliable drive mounting systems.
You should not "overcool" your hard drives, so you could remove the extra fan if they are below 30 °C anyway.
Here is my current rig, running two different storage-providing software packages. Storj has 1 TB allocated, and I'm ready to add more when it gets filled.
I paid 40 dollars for the whole machine (including a 2TB drive and an SSD) plus a screen, keyboard and mouse. I sold the screen for over 60 dollars.

So Storj has already been a win for you
An old broken Lenovo laptop that took a fall, breaking the plastic case and LCD. It sat in storage for a while and was recommissioned 6 months ago for Storj and Plex. It's a 5th-gen Intel i7, 16GB RAM, 3TB WD, 5TB WD My Passport. CAREFULLY screwed onto a piece of plywood.
As expansion goes, the WiFi module can still be swapped for USB storage (via an NVMe-to-USB converter), and the DVD SATA drive for an HDD. One USB port is taken by an Ethernet adapter, since the plastic case used to hold the network cable in place, so maybe later I'll solder the cable to the port to free up that USB port.
Pros:
Recycled, 3 hours of battery backup, cheap to run
Cons:
SMR, Filewalker
You win most interesting storagenodes yet.
Nice HDDs you have there in the list. Can you tell us which application produces this overview and how to run it?
Thanks and kind regards,
That HDD list has been updated a bit since I posted it about a year ago.
You can find the updated list on my public site, Th3Van.dk (scroll down a bit for the HDD list).
The public site is meant to give me a technical overview of all nodes, so I can quickly spot issues.
Some may find it a bit messy, since it's best viewed on a 27" monitor.
There are more ways to produce such a list, but since I run all my 103 Storj nodes on Ubuntu, I've made a little script that transforms the output of the df command.
A modified version of the script I'm using (handling up to 26 HDDs) looks like this:
#!/bin/bash
echo "------------------------------------------ HDD overview ----------------------------------------------"
echo "Dev Available hdd space Used hdd space Free hdd space Mount point"
echo "------------------------------------------------------------------------------------------------------"
for slice in {a..z}; do echo -n "/dev/sd$slice"1 ; df --block-size=1 /dev/sd"$slice"1 | grep "/dev/sd$slice"1 | awk '{printf " %21\47d %21\47d (%6.4f %%) %21\47d %13s \n" , $2, $3, (($3/$2)*100), $2-$3, $6"/" }' ; done
echo "------------------------------------------------------------------------------------------------------"
df --block-size=1 | grep "^/dev/sd[a-z]1" | awk '{totalavailable=totalavailable+$2; totalused=totalused+$3; totalfree=totalfree+$4 } END {printf "Totals : %21\47d %21\47d (%6.4f %%) %21\47d \n", totalavailable, totalused, ((totalused/totalavailable)*100), totalavailable-totalused }'
The script is then run from cron, so the list on the main web site updates every hour.
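The hourly update described above can be wired up with a crontab entry along these lines; the script path and web root are placeholders, since the poster didn't share theirs:

```shell
# crontab -e entry: regenerate the HDD overview at the top of every hour
# and publish it where the web server can serve it.
0 * * * * /usr/local/bin/hdd-overview.sh > /var/www/html/hdd-overview.txt
```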
The full (non-modified) script produces a list like this :
Th3Van.dk
Wow, you have a very nice machine there, running 100+ nodes with an estimated 2500+ USD for 2022-12.
Are all HDDs and nodes running on one machine? AMD EPYC or Intel Ice Lake/Cooper Lake, which one are you using?
I am also running multiple nodes, but I have to distribute them across multiple Alder Lake machines, as the load is too high for one machine.
So you're just using df and a script. Very nice, and the website is also very nice. You could also try out duf.
Unfortunately you did buy the Seagate Exos… Are you always running a preclear when receiving new HDDs?
Thanks and kind regards,