The ideal Storj rig - Linux version

I know that "ideal" varies by person and environment - so people have discussed NAS servers, Raspberry Pi systems, and cloud-based systems. I have experience with both Windows and Linux systems as a GPU miner and have learned to repurpose them (cheaper than buying new). In the crypto mining world everything is about efficiency, especially regarding power, such as hashes per watt at the wall.

I am interested in a conversation about components and ideas to be successful with Storj.
I’ll throw some ideas out and see what others think.

Hard drives - durability and efficiency seem key, and perhaps performance.
I am enamored of the large WD (HGST) UltraStar drives (12-14 TB, helium filled, rated for 24x7 service, ~6 watts max draw)

UPS power? If I had unreliable power I would consider several aspects:

  1. Power filtering (to smooth out poor quality power)
  2. Graceful shutdown (to protect the databases during outages)
  3. Extended power (power during extended outages - really a backup generator)
    Since I am not (yet) worried about these today, they seem too costly to justify

Motherboard - low power consumption and high bandwidth to the HDDs
Today I use an AMD X370-based chipset with a PCIe 3 bus and SATA 3 (6 Gbps)
I have 4 SATA connectors but can add a PCIe 3 card for 6-8 more SATA connectors.
I am assuming I will want to expand to 10 HDDs (no RAID)
I want the lowest-power GPU (the current rig does not support onboard graphics), so I just downgraded to a Radeon EAH6450 (circa 2011), which pulls 9 watts.
Minimal DRAM (not sure how much that is yet)
CPU - a minimal, low-power CPU that supports a high-speed bus and has enough cores for Storj
Today: a Ryzen 5 1600, 6 cores/12 threads (barely idling), rated at 65 watts
Linux OS - today running Ubuntu 18.04
1 Gbps Ethernet port (for the day the demand is there)
PSU - high efficiency, running at 240 V (again, overkill for such small power consumption). Power cost today is about $6/mo, potentially $12/mo with a fully loaded rig. My GPU crypto rigs run around $200/mo.
60 GB SSD for the OS and apps like Docker (dedicate each HDD to one storagenode; see the sketch below)

Internet connectivity - ATT 1 Gbps (up and down with no cap)
Probably overkill but very addictive - connected with gigabit switches
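
To make the "one storagenode per HDD" idea concrete, here is a rough sketch of what each node might look like on this box. The paths, wallet, address, and image tag are placeholders (not real values), and the flags are based on the Storj setup docs as I recall them, so check the current docs before copying:

```bash
# sketch: each HDD gets its own mount point, identity, and container
sudo mkdir -p /mnt/hdd1
sudo mkfs.ext4 /dev/sdb1            # whichever disk this node will own
sudo mount /dev/sdb1 /mnt/hdd1      # add an /etc/fstab entry for real use

docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967 \
  -e WALLET="0xYOURWALLET" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.external.address:28967" \
  -e STORAGE="12TB" \
  --mount type=bind,source=/mnt/hdd1/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/hdd1/storagenode,destination=/app/config \
  --name storagenode1 storjlabs/storagenode:latest
# further nodes: new mount point, new identity, a different host port
# (e.g. -p 28968:28967) and a different --name
```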

What would my ideal rig look like, assuming my conditions and a long term view?

Storj is really not that resource-intensive. I think a self-built system is quickly going to be more of a power draw than it needs to be. The best options are always: first, a system that you already have that is on 24/7; second, a low-power NAS or RPi.

It feels like you need to get out of the mining mindset, where you have high costs and high rewards. This is more of a low-cost, medium-rewards thing. You generally have a much higher ROI relative to energy costs, as long as you keep those costs down.

Thanks for the response. Crypto mining has been devastated in the last few years, and very few have found much profit in it aside from large commercial operations running ASICs with access to very low power costs, hence the focus on efficiency to keep costs down. I don't know very much about the operational models for NAS and RPi, but it would be a great topic for other posts. At first blush, NAS seems high cost and RPi very low cost, almost a throwaway model where, if the system goes down, you are not out much. If so, probably a good model for Storj.

I agree that existing hardware is ideal if the systems meet the requirements to run economically. For instance, I have some 12-PCIe-slot motherboards, each running a single dual-core Celeron 3900 (2 cores/2 threads, LGA1151 socket), but from the Storj guidelines it seems that each node should have one core, or at least one thread.

This is not about getting rich quick, but about figuring out the ideal parameters for efficiency, for an ideal rig (or various ideal rigs) that can return a profit over the long run. For instance, one rig that can only drive one node is probably far less efficient than a rig that can run many nodes, due to the mobo overhead (both in power consumption and capital cost).

Ultimately this is about trying to understand what the real bottlenecks are (cores, drives, bandwidth, etc.) as you scale up. At some point, at least in a typical residence, there is a practical limit to scale.


I guess they do say 1 core per node. I think that's complete overkill, especially considering that multiple nodes on the same IP subnet share traffic anyway. A decent low-voltage quad core should be able to run 10 nodes just fine. However, you shouldn't be looking to spin up 10 nodes right away. You mentioned 12-14 TB drives; at that size I would start out with 2. Don't spin them up at the same time, since vetting would take way too long. Wait until the first one is vetted, then start the second one. That 24 TB will take a long time to fill already. My node has 12 TB of data after about 10 months since the last network wipe, so don't expect to need 10 of those any time soon.
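
If you want to see how much a node has actually accumulated, the web dashboard shows it, or you can just ask the filesystem. The path below is an assumption based on a typical bind-mounted config directory, where the blobs live under storage/:

```bash
# rough on-disk size of a node's stored data (path is an example,
# matching a bind mount like /mnt/hdd1/storagenode -> /app/config)
du -sh /mnt/hdd1/storagenode/storage
```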


If I check my storagenode, which runs on 8 cores/16 threads with access to all of them, then at peak egress spikes it will jump to 6.8% or 9% utilization, though the step from 6.8% to 9% isn't the storagenode but the OS. I haven't really seen a ton of CPU usage, and I'm only at about 2.14 GHz with a 2.5 GHz turbo.
I don't know if the CPU pressure gets lower with more nodes, as VM RAM and processing often seem to be merged into shared processing tasks when possible, meaning the more VMs one gets up and running, the more efficient they tend to become.

But if I look at my storagenode right now, and say my 6.8% is 1/16th of the total, then one of my cores would have been into the turbo range to keep up. So I don't think it's totally unrealistic to at least consider having some headroom, but how much is needed is, I think, very difficult to guess without testing it.

Of course, node size might also play into that; I would suppose more processing could be needed for bigger nodes dealing with bigger databases, and RAM usage and the like might go up too.
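
For what it's worth, an easy way to see what the node itself is using, rather than guessing from overall CPU, is something like this (the container name storagenode is an assumption, and mpstat comes from the sysstat package):

```bash
# one-shot CPU / memory / network / block-IO snapshot for the container
docker stats --no-stream storagenode

# watch per-core load on the host every 5 seconds while the node is busy
mpstat -P ALL 5
```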

With 12 PCIe slots you could hook up a hell of a lot of HBAs xD, then do a custom HDD mounting solution using cables running directly from the HBAs into the drives. 8 drives per $30 HBA x 12 slots gives 96 drives for about $360 in HBAs. I will assume you have a nice powerful PSU; of course you will also need cables and drives.
And then, if all goes well, I'm pretty sure you can get at least a 4-core/8-thread CPU for the LGA1151 socket for less than $100, in case you decide you do need more CPU power and don't want to tear everything apart. I even think there is an 8-core/16-thread option, but its price is ridiculous.

The HGST helium drives are great. I can't say about the 12-14 TB models; I have 1st-generation 6 TB drives with 45k hours on them and am very happy with them. When I first heard about helium drives I was a bit: WHO WOULD WANT A DRIVE THAT CAN RUN OUT OF GAS!!! Apparently it's not a problem, and it does save some amount of power.

Though really, the HDD's power often ends up being the smallest cost here. You need to go beyond, I think it was, 5-6+ years before the electrical cost of running the drive equals the purchase cost of the drive, and that goes up to the 20-year range for drives in the 16 TB range (I forget the exact numbers I got). So the electricity cost is really secondary for most SNOs, as long as one stays away from small drives.

So if it takes 20 years of electricity to equal the cost of the drive, then the 30-40% power saving from helium could change that number, but the main cost will still be the price of the drive, and in 10 years a drive is past it anyway. You might get good use from it for 5-6 years before it becomes somewhat outdated technology and the profit on it reduces greatly. So really, IMO, you can almost disregard the power issue once you get close to the 10 TB HDD range.
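
As a sanity check on that, here is the rough break-even math with made-up but plausible numbers; the 6 W draw, $0.30/kWh power price, and $300 drive price are all assumptions, not figures from this thread:

```bash
watts=6              # assumed: large helium drive drawing ~6 W continuously
price_per_kwh=0.30   # assumed: fairly expensive electricity; use your own rate
drive_price=300      # assumed: rough purchase price of a 12 TB drive

yearly_kwh=$(echo "$watts * 24 * 365 / 1000" | bc -l)        # ~52.6 kWh/yr
yearly_cost=$(echo "$yearly_kwh * $price_per_kwh" | bc -l)   # ~$15.8/yr
years_to_match=$(echo "$drive_price / $yearly_cost" | bc -l) # ~19 yr

printf "electricity ~\$%.2f/yr, matches the drive price after ~%.0f years\n" \
  "$yearly_cost" "$years_to_match"
```

With cheaper power or a cheaper drive the number swings a lot, which is the point: the purchase price tends to dominate unless the drive is small.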

Of course, if you've got a datacenter, which has a maximum number of drives and a maximum cooling capacity, then how much wattage you use might be a lot more relevant, and you might spend extra to make stuff fit in your cramped datacenter case, which is really a grand computer warehouse. lol

For us mortals, cooling capacity just means moving the air a few cm in the right direction, and maybe out of a box.
