Make running a quality node easy and viable

  1. Make software parity / RAID 1 an option in the config

This should be relatively easy to implement: you could just set two drive destinations to get two copies (this could even be a factor in node selection when data is available).

  2. Make all the DB / blob temp / install files run from a single folder by default, so the node can run from an SSD, with a separate folder storing the data on an HDD to minimise IOPS (this could also have an option for a RAID 1 install / DB)
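The two options above could be sketched as config entries like these (a hypothetical sketch only — none of these keys exist in the current storagenode config):

```yaml
# Hypothetical config sketch -- these keys are illustrative, not real storagenode options.

# Option 1: software mirroring -- write every blob to two destinations
storage.paths:
  - /mnt/hdd1/storj/blobs
  - /mnt/hdd2/storj/blobs    # second copy, i.e. software RAID 1

# Option 2: split the IOPS-heavy files from the bulk blob storage
database.path: /mnt/ssd/storj/db       # DB, temp and install files on SSD
storage.blobs: /mnt/hdd1/storj/blobs   # bulk data on HDD
```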

I feel that 2 would make it very easy to run multiple nodes and to keep the files optimally placed: you could run, say, 4 nodes on one computer with the DB and IO-intensive files on a small SSD and the blobs saved on their own HDDs. (Obviously one large node with RAID 10 would be best, but it's not really viable at this time, and to maximise return people will just run many small nodes to minimise the loss from a drive failure.)

As for 1, I think that once large nodes are viable, making redundancy something that is encouraged would benefit both the network and the node operator: there would be an incentive to care about the quality of the storage rather than just the cost per TB of the node. Obviously 2 would help far more in the short term, as 1 can already be done by anyone with basic RAID knowledge.

The way things are structured, there is near-zero benefit.

It’s not a good idea to include OS functions in the application. We don’t have enough resources to make it as robust as the dedicated teams that implement RAID do. So, if you want, you can configure a RAID yourself and use it as storage.
Take a look at these threads as well:

The RAID, mail server, Telegram client, S.M.A.R.T. checker, etc. will not be part of the storagenode binary.
All those functions are implemented as external services for a reason.
The storagenode binary should be small and efficient, so it can run on small SoC systems like a Raspberry Pi or even a router: Running node on OpenWRT router?

Already implemented


The RAID part is not an issue for me, but it's something lots of people will need as everything matures. It's not so bad if a 1–4 TB drive fails and data is lost, but drives are now hitting 18 TB, and that's not something a host can fill back up in a couple of months.

Not everyone wants to use Docker, and if everything were kept in two folders from install, with the option to number each install, it should work on all platforms and be very easy to manage.

I won't lie: you have several competitors, and yours is the hardest to install. There are loads of ways to improve.

Also, there are numerous posts stating that Storj does not recommend RAID and that one node per drive is fine. But if you do the calculations, running a node on a single drive until it is vetted, then putting it in a RAID 1, comes out ahead on the cost / risk ratio.
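One way to frame that cost / risk calculation is below. Every number here is a placeholder assumption, not a Storj figure, and the break-even point shifts entirely with the drive size, payout rate, failure rate and refill time you plug in:

```python
# Skeleton of the single-drive vs RAID 1 cost/risk comparison.
# All numbers are placeholder assumptions -- substitute your own.

annual_failure_rate = 0.03   # assumed AFR for one drive
monthly_income = 11.50       # assumed payout for a full node, $/month
refill_months = 36           # assumed time to re-fill a large node after loss
drive_cost = 368.0           # assumed drive price, $
horizon_years = 10

# Single drive: each failure forfeits roughly half the income while refilling.
expected_failures = annual_failure_rate * horizon_years
loss_per_failure = monthly_income * refill_months / 2
single_drive_risk = expected_failures * loss_per_failure

# RAID 1: one extra drive up front, but a single failure costs nothing.
raid1_extra_cost = drive_cost

print(f"expected refill loss, single drive: ${single_drive_risk:.2f}")
print(f"extra hardware cost, RAID 1:        ${raid1_extra_cost:.2f}")
```

Note this sketch ignores factors that favour RAID 1, such as losing the held amount and the vetting period on a replacement node; whether the mirror pays for itself depends heavily on those inputs.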

Personally I don't want to lose any data or have any downtime, and I think that planning to lose nodes is not a good thing to encourage: it increases repair traffic on the network, and it costs node operators in the long run, with a real $ cost attached.

Is it not much better / cheaper to reward people who run redundancy than it is to rebuild large drives?

The cost to the network of redundancy is just disk space, which is minimal per TB. A full 16 TB drive would earn $11.50 a month if it were paid 50% of the normal storage rate, since the redundant copy has no associated upload and download costs. Given that a 16 TB drive would take 32 months to cover its cost at that rate, running redundancy would then be viable.
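Making that arithmetic explicit: the $11.50/month and 32-month figures come from the paragraph above; everything else is derived from them.

```python
# Worked version of the payout arithmetic (figures as stated in the post).

capacity_tb = 16
monthly_payout = 11.50           # $/month at 50% of the normal storage rate
payback_months = 32              # stated payback period for the drive

rate_per_tb = monthly_payout / capacity_tb
implied_drive_cost = monthly_payout * payback_months

print(f"effective rate: ${rate_per_tb:.2f}/TB/month")    # -> $0.72/TB/month
print(f"implied drive cost: ${implied_drive_cost:.2f}")  # -> $368.00
```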

Personally I would be playing the long game, so redundancy is a must. In fact, when you work it out as a 10-year setup, the electricity cost is going to be nearly 50% of the total expenditure.
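A quick sanity check of that 50% figure. The 30 W draw and $0.15/kWh price are illustrative assumptions, not figures from the post; the hardware cost uses the drive price implied by the post's payback numbers ($11.50 x 32 months):

```python
# Rough check: share of electricity in 10-year total cost.
# Wattage and electricity price are assumptions, not figures from the post.

watts = 30             # assumed average draw of a small node (drive + board)
price_per_kwh = 0.15   # assumed electricity price, $/kWh
years = 10
hardware_cost = 368.0  # drive price implied by the post's payback figures

kwh = watts / 1000 * 24 * 365 * years
electricity_cost = kwh * price_per_kwh
share = electricity_cost / (electricity_cost + hardware_cost)

print(f"10-year electricity: ${electricity_cost:.2f}")  # -> $394.20
print(f"share of total cost: {share:.0%}")              # -> 52%
```

Under these assumptions electricity lands at roughly half of the 10-year spend, consistent with the claim; a lower-power board or cheaper power would shift it down.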

We will not force you to use RAID, and we have no plans to reinvent the wheel (a brand-new RAID implementation) and include it in the storagenode code. However, you are free to use RAID yourself if you believe it's economically viable.

If we increased payouts for SNOs who choose to invest in unnecessary hardware, we would have to increase prices for customers, and we could become uncompetitive with centralized cloud providers like AWS, Google or Azure, so that is a last resort. There is no benefit for customers in your super-reliable setup; the network itself handles reliability, independently of the reliability of each node.

I would not start this discussion again; the thread RAID vs No RAID choice is waiting for you :slight_smile: