Anyone see anything wrong with this ODYSSEY build?


M.2 Adapter to 5 SATA

Will the 5-SATA adapter work with one of the Odyssey's M.2 ports? And would it support up to 5 HDDs?

I'd like to set up LVM on Ubuntu so I can add an additional HDD each time my node fills up. I realize the HDDs will need a separate power supply.
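For context, the grow-as-you-go LVM plan described above would look roughly like this. This is only a sketch: the device names (`/dev/sdb`, `/dev/sdc`) and the volume names are placeholders, and this creates a plain linear volume, not a stripe.

```shell
# Prepare the first HDD and build a volume group for the node
# (device and volume names are placeholders)
sudo pvcreate /dev/sdb
sudo vgcreate storj-vg /dev/sdb
sudo lvcreate -l 100%FREE -n storj-lv storj-vg
sudo mkfs.ext4 /dev/storj-vg/storj-lv

# Later, when the node fills up, fold another drive into the same volume
sudo pvcreate /dev/sdc
sudo vgextend storj-vg /dev/sdc
sudo lvextend -l +100%FREE /dev/storj-vg/storj-lv
sudo resize2fs /dev/storj-vg/storj-lv   # grow the ext4 filesystem online
```

Note that even a linear (non-striped) volume spanning several disks generally cannot survive losing one of them cleanly, which is what the replies below are warning about.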

You also should not use a simple volume (stripe), because with one disk failure the whole node is lost. If you want to use RAID, then you need to use something with redundancy, at least RAID6.
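To make the capacity/redundancy trade-off concrete, here is a quick illustrative calculation. The drive count and sizes are made-up example numbers, not from this thread:

```shell
# Illustrative capacity arithmetic for six 4TB drives (example numbers)
n=6        # number of drives
size=4     # TB per drive

raid0=$(( n * size ))        # stripe: all capacity usable, any single failure loses everything
raid6=$(( (n - 2) * size ))  # RAID6: two drives' worth of parity, survives two simultaneous failures

echo "RAID0: ${raid0} TB usable, 0 drive failures tolerated"
echo "RAID6: ${raid6} TB usable, 2 drive failures tolerated"
```

So with these example numbers you give up 8 TB of raw space for the ability to lose two drives without losing the node.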
It’s better to run a new node on a separate HDD instead. See How to add an additional drive? | Storj Docs
See also RAID vs No RAID choice thread.
By the way, if you plan to buy something, please take a look at the Realistic earnings estimator first. We do not recommend investing in anything solely for Storj - you may not see a ROI any time soon, so it's better to use what you already have.


Saw this post before a meeting and wanted to respond after, but there is no need to anymore. So now I just want to say I'm constantly amazed at how on point you always are @Alexey. You mentioned everything I would have said, but with more context and links, plus some things I would have forgotten to mention. So @Alexey, I just want to give you a big thanks again for what you do for this community!


@Alexey @BrightSilence I have read the RAID vs no RAID thread and have made up my mind on it.

It seems to me that the ability to string together a few drives is conducive to repurposing of old drives.

So there is a bit of a contradiction, at least IMHO, in saying "don't stripe" but also "don't go out and buy new hardware."

I imagine there are a lot more people out there with a handful of 500GB and 1TB drives lying around than, say, people with spare 4TB drives. I think a strategy of striping repurposed drives together early in the node's life and then moving it to a single larger drive (or a mirror of 2 drives) in the 2nd year is worth consideration. You risk little in the first year, but a lot by the 3rd year when your node is potentially at 10TB or more.

If you have a bunch of drives, you can still run nodes one by one, starting each new node when the previous one is almost full, or at least vetted.

You can also go with the RAID option if you want, but use RAID with redundancy. A striped volume (RAID0) without redundancy is especially dangerous with older drives, so don't do it.
With separate nodes, if a drive dies you will lose only that node, which is only part of your common data (all nodes behind the same /24 subnet of public IPs are treated as one node for ingress, and as separate ones for egress, audit, and repair traffic). With RAID0 you will lose all data at once.
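For anyone picturing what "one node per disk" looks like in practice, here is a rough sketch of starting a second Docker node alongside the first. Every path, port, and value below is a placeholder; the exact flags and setup steps are in the official Storj node docs, and each node needs its own identity.

```shell
# Hypothetical second node: its own identity, its own disk, its own ports
# (all values below are illustrative placeholders)
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28968:28967/tcp -p 28968:28967/udp \
  -p 14003:14002 \
  -e WALLET="0xYOUR_WALLET" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.external.address:28968" \
  -e STORAGE="900GB" \
  --mount type=bind,source=/path/to/identity2,destination=/app/identity \
  --mount type=bind,source=/mnt/disk2/storagenode,destination=/app/config \
  --name storagenode2 storjlabs/storagenode:latest
```

The key points are simply a distinct host port mapping, a distinct identity, and a mount that points at the dedicated disk, so a single drive failure takes down only that one container.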

Probably, but there is a good solution, like Alexey explained: just run separate nodes. I even have nodes running on a 160GB drive and on 2x320GB drives. In my case, those are mostly to warm up a node and get through vetting before I have a larger drive available. You would have to consider whether the income from small drives is worth the power costs to begin with.

So you've probably read a lot of my posts in that discussion already. I should add that I have slightly weakened my stance against using RAID with redundancy, but mostly for people who have more HDD space than they will ever be able to fill up. In that case, using redundant RAID may be worth the sacrifice of HDD space. You need to be careful, though: RAID5-like solutions often fail during rebuild when even a single unrecoverable read error is encountered. There are systems that just accept the file corruption and move forward with the rebuild, which would be fine for Storj, since a single corrupted file is not going to kill your node, but is unwise for most other uses. I've personally moved to using separate disks, or at least dual-disk redundancy depending on the use case, as I view RAID5 as mostly wasting space while trading one risk for another.

If you have only a bunch of small HDDs, I absolutely still advise running a separate node on each disk. This is also why I now have one large node on a RAID6-like array, which I already had for other purposes, and a lot of nodes on separate disks that are single-purpose for Storj only.

JBOD or RAID0 are just never a good idea. You get the worst of all worlds and then some. Instead of protecting against failure, you amplify failure by losing everything if anything fails.

If you want to use HDDs of 500GB or less, first look into whether that is worth it at all. There is a setting that allows you to assign less than 500GB; look for storage2.monitor.minimum-disk-space. You can use the earnings calculator to determine whether it is worth it to you: Earnings calculator (Update 2022-04-14: v11.1.0 - Detailed earnings info and health status of your node, including vetting progress)
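For reference, overriding that minimum is a one-line change in the node's config.yaml (the 300 GB value below is just an illustrative example; pick whatever your small disk actually holds):

```yaml
# config.yaml - allow the node to run with an allocation below the default 500GB minimum
# (example value; adjust to your disk)
storage2.monitor.minimum-disk-space: 300 GB
```

After editing the config, the node needs to be restarted for the new minimum to take effect.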

You made me finally find a way to memorize which Raid is which: Raid0 is the w0rst.