Just run 1 node per hard drive, or you could run RAID 0 to get the most space possible, because 320 GB really isn't worth anything… These old drives aren't worth running in any other RAID level since they're probably close to end of life anyway. But at least you could see how long they last.
Which kind of RAID should I use to set up my Storj node? Which level is best: RAID 0, 1, 2, 3, 4, 5, 6, etc.?
RAID 0 seems pretty dangerous on old drives.
I mean if you have old drives and a computer that runs 24/7 anyway it doesn’t hurt to plug them in and run a few small nodes. Zero investment means that even if they die right away you haven’t lost anything.
I was actually thinking about running that kind of setup just to give a second life to old hard drives.
I have an old mining motherboard with 8 PCIe x1 slots, so I could plug in quite a few drives using pretty inexpensive SATA controllers. Then just run the drives until they die.
The only downside is that it creates node churn on the network, and at the moment Storj pays a lot to repair data (@Alexey explained it in another thread a while ago; the price is pretty ridiculous if I remember correctly, in the hundreds of dollars for a single TB), so it would kind of hurt the network…
It doesn't hurt the network that much unless you're storing a lot of data. Let's say my one 8 TB node dies tomorrow, or my 2 nodes with 8 TB of data die; that would be worse than losing 3 × 320 GB drives in RAID 0.
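To put rough numbers on that comparison, here's a tiny sketch using the thread's ballpark of "hundreds of dollars for a single TB" of repair. The exact rate below is an assumption picked purely for illustration, not an official Storj figure:

```python
# Illustrative only: repair cost scales with how much data a dying node held.
# REPAIR_COST_PER_TB is an assumed figure based on the "hundreds of dollars
# for a single TB" recollection above, not an official number.
REPAIR_COST_PER_TB = 300.0

def churn_cost(stored_tb: float) -> float:
    """Rough repair cost triggered when a node holding stored_tb dies."""
    return stored_tb * REPAIR_COST_PER_TB

print(churn_cost(0.96))  # 3 x 320 GB drives in RAID 0 dying at once
print(churn_cost(8.0))   # a full 8 TB node dying
```

Under those assumptions, losing a full 8 TB node would cost the network far more repair than three full 320 GB drives ever could.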
Unless electricity is free which seems to be @Pietro.anon’s case, running a very small node would make it hard to be profitable because an old 300GB disk probably consumes as much (if not more) electricity than a more or less modern 2+TB one, for instance.
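A quick back-of-the-envelope check of that point. Every number below is an assumption chosen for illustration (power draw, electricity price, and payout rate are not official figures):

```python
# All values are assumptions for illustration, not official Storj numbers.
WATTS_OLD_DRIVE = 8.0        # assumed draw of an old 3.5" 300 GB drive (W)
PRICE_PER_KWH = 0.30         # assumed electricity price ($/kWh)
PAYOUT_PER_TB_MONTH = 1.50   # assumed storage payout ($/TB/month)
STORED_TB = 0.3              # a completely full 300 GB node

HOURS_PER_MONTH = 730
electricity_cost = WATTS_OLD_DRIVE / 1000 * HOURS_PER_MONTH * PRICE_PER_KWH
earnings = STORED_TB * PAYOUT_PER_TB_MONTH

print(f"electricity: ${electricity_cost:.2f}/month")
print(f"earnings:    ${earnings:.2f}/month")
```

With those assumed numbers the old drive burns more in power than it can ever earn, while a modern multi-TB drive drawing similar wattage flips that easily.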
That said… Apparently running a node on disks smaller than 500 GB (550 GB in fact, to allow for the recommended 10% overhead) isn't possible anyway, so… in this specific case a RAID setup seems like the only way it makes sense for a Storj node.
It is possible, you just need to change the configs accordingly:
# how much disk space a node at minimum has to advertise
storage2.monitor.minimum-disk-space: 500.0 GB
Changing this should allow you to allocate whatever you want. (I have not personally tested it)
Anything is possible, but as per the SNO Terms and Conditions (which are currently down) you are NOT ALLOWED to operate a node smaller than 500 GB.
Apparently that's also more of a guideline… like so many other supposed rules.
The rules are… there are no rules… but we will write some anyway to keep people from doing stupid stuff.
seems like a slippery slope to me
What I meant is that old drives will fail much faster than new ones. You might fill up your node in a couple of months and then it dies with a very small held amount.
What he said. His /24 IP would have more than 500 GB in total.
Ah I see, so… wait what?
If there's an option to override the default minimum so we can set whatever allocated size we want, why does the storage2.monitor.minimum-disk-space option exist in the first place? I don't get it. The node could simply accept any size by default, period.
Sometimes I’ve got that feeling too.
That contradicts the ToS, which is so weird. So according to the ToS we are NOT allowed to do that, but in practice it's fine?
Maybe the ToS should be “eased” a bit by removing these hard limits then, and just say that Storj Labs highly recommends this and that instead?
Why not… The software is open-source anyway so you could easily fork it, change the minimum required size and use your own version.
As littleskunk said, “in your special case 3 nodes with 320GB each should be fine for us”. The reason for the 500 GB minimum in the ToS is probably that a few single nodes with less than 500 GB are just not viable for the network, since the transaction fees would regularly cost a lot more than those nodes earn. But if you run multiple smaller nodes on the same IP using the same wallet, that is not much different from running one bigger node, except for a little overhead on the satellites, which have to manage data for 3 different nodes instead of 1 bigger one.
But you are right that the ToS is not reliable anymore (it actually hasn't been for a long time), because it also says 1 core per node and 1 node per /24 subnet, and neither of those is true.
But let's not make this thread about the ToS. Storj Labs is aware of the problems with it, but changing it has to go through legal etc. and that takes ages…
But we wanted to solve the OP's problem of having lots of smaller drives. And the solution is that he can change the config and just use his 3 little HDDs for 3 smaller nodes.
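Side note on the /24 grouping mentioned above: as far as I understand it, the satellites treat all nodes whose public IPs fall in the same /24 block as one location when spreading pieces. A quick sketch of that grouping (the IPs are documentation example addresses):

```python
import ipaddress

def subnet_24(ip: str) -> ipaddress.IPv4Network:
    """The /24 block an IP belongs to; nodes sharing it count as one location."""
    return ipaddress.ip_network(f"{ip}/24", strict=False)

# Three small nodes behind one home connection share a /24...
print(subnet_24("203.0.113.10") == subnet_24("203.0.113.200"))  # True
# ...while a node at another location lands in a different block.
print(subnet_24("203.0.113.10") == subnet_24("198.51.100.7"))   # False
```

So from the satellite's point of view, 3 small nodes behind one IP already look a lot like one bigger node.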
I just changed the config.yaml file to allow for a smaller disk size and restarted my node, which originally had 500 GB of files. Now the dashboard is going crazy, but the node seems to be working fine, so I'll let it run and see what happens. The size of the node should gradually decrease as pieces get deleted.
@kevink Alright, fair enough, all that sounds pretty sensible to me