Which kind of RAID should I use to set up my STORJ node? Which level is the best? RAID 0, 1, 2, 3, 4, 5, 6, etc.?

So, do you think I can also use the stuff I find in the dump, 5 to 15 years old?

If it’s been sitting outside in the sun or with other stuff piled on top of it, then I would be very surprised if it’s in a usable condition. You can always try and see, though. :slight_smile:

Usually I find fairly good stuff (for a dump), because people throw away PCs that usually still work. They get rid of them because they don’t know how to update them, or they are just told that it’s a bad PC and they should buy a new one.


I am using raidz2, which is the zfs equivalent of raid6 - two drives for parity. I am using 6 drive vdevs, which means that out of 6 drives I get the capacity of 4 drives and can tolerate a failure of 2 drives.
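For anyone curious what that looks like in practice, a 6-wide raidz2 vdev is a single command. This is just a sketch: the pool name and the device names below are made up, and you’d normally use /dev/disk/by-id paths for your own disks.

# create a pool with one 6-disk raidz2 vdev (hypothetical devices):
# 2 disks of parity, roughly 4 disks of usable capacity, survives 2 failures
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# check the layout and the usable space
zpool status tank
zpool list tank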

A DQ (disqualification) would mean the loss of about a year, so I’d rather use more drives than do something I know would be less reliable, and pointless to do in the first place.

If I had the drives and the bays for it I would run raidz2… but I also want good raw IOPS, which means a raidz2 setup would require a minimum of 12 drives, 18 for similar IOPS, and 24 for better capacity utilization at the same IOPS…
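What I mean by 12 drives minimum, roughly: the pool would be two 6-wide raidz2 vdevs striped together, since ZFS spreads work across vdevs and each raidz2 vdev only delivers about one disk’s worth of random IOPS. A sketch with made-up device and pool names:

# 12 disks as two raidz2 vdevs in one pool: ~2 vdevs worth of IOPS, 8 disks of usable capacity
zpool create mypool raidz2 sda sdb sdc sdd sde sdf raidz2 sdg sdh sdi sdj sdk sdl

# or grow a single-vdev pool later by adding a second vdev
zpool add mypool raidz2 sdg sdh sdi sdj sdk sdl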

I know it would be stupid to do something like a 6-drive raidz1, because the more drives I add and the more TB each of them has, the more difficult resilvering becomes. That’s really where raid6 / raidz2 shines, since it doesn’t get corrupted if there are issues during the process…

I’m still new to ZFS, so maybe eventually I will only run raidz2… but for now, with a 12-bay server and a few drives that have been behaving differently in testing, I’m also trying to take into account how I upgrade.

Luckily I found out my onboard 6-port SATA controller can handle drives larger than 2TB, so that gave me a good, though ghetto-looking, upgrade option.

Of course my pool is mainly VMs and the primary storage node, plus whatever else I find to put on it when I get some more room in the near future; my own data is on a mirror… not the performance pool.
Mirrors are supposedly so safe it’s almost ridiculous, though of course RAID 6 is in theory better for reliability, because one wouldn’t have any exposure to the chance of bit corruption during resilvering.

But it’s of course a trade-off, because a RAID 6 array only has one disk’s worth of IOPS.
RAID 5, however, one should never use… because it sucks… RAID 6 or mirrors are essentially the only standard RAID setups that should be used when redundancy is the goal…

Not counting the hybrid RAID 5s, which I’m sure are very safe if they have checksums and whatnot.
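To make the mirror IOPS trade-off above concrete: a pool of striped mirrors gets you roughly one vdev’s worth of IOPS per mirror pair, so six disks as three mirrors is around three disks of random IOPS versus one for a single raidz2/RAID 6 vdev, at the cost of only ~50% usable capacity. Again just a sketch with made-up names:

# six disks as three striped 2-way mirrors: ~3 vdevs of IOPS, 50% usable capacity
zpool create fastpool mirror sda sdb mirror sdc sdd mirror sde sdf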

I’m too stubborn and too cheap to run raidz2, I’ll have to learn the hard way lol

I used to work at my university’s IT department and was “forced” to demagnetize and throw away a few hundred 2TB hard drives. They tore down the old storage servers and didn’t want to resell the hard drives (which were used but still in good condition) because they were concerned about people recovering “sensitive” data from the drives.
Now that I’ve learned a bit more about hard drives, I know that they might have been near the end of their life. But thinking about those hundreds of destroyed hard drives still hurts my heart.


Just run 1 node per hard drive, or you could run RAID 0 to get the most space possible, because 320 gigs really isn’t worth anything… These old drives aren’t worth running in any other RAID level since they’re probably close to end of life anyway. But at least you’d get to see how long they last.


Raid 0 seems pretty dangerous on old drives.
I mean if you have old drives and a computer that runs 24/7 anyway it doesn’t hurt to plug them in and run a few small nodes. Zero investment means that even if they die right away you haven’t lost anything.
I was actually thinking about running that kind of setup just to give a second life to old hard drives.
I have an old mining motherboard that has 8 PCIe x1 slots, so I could plug in quite a few drives using pretty inexpensive SATA controllers. Then just run the drives until they die.
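If anyone wants to try that, running one container per drive is fairly straightforward. The sketch below is abridged and the paths, ports, and sizes are placeholders: each node still needs its own identity plus the usual WALLET/EMAIL/ADDRESS variables from the official setup docs, and ADDRESS has to use whatever host port you map.

# node 1 on the first old drive (placeholder paths and ports)
docker run -d --name storagenode1 -p 28967:28967 -p 14002:14002 \
  --mount type=bind,source=/mnt/olddrive1/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/olddrive1/storj,destination=/app/config \
  -e STORAGE=250GB storjlabs/storagenode:latest

# node 2 on the second drive: same image, its own ports and mount points
docker run -d --name storagenode2 -p 28968:28967 -p 14003:14002 \
  --mount type=bind,source=/mnt/olddrive2/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/olddrive2/storj,destination=/app/config \
  -e STORAGE=250GB storjlabs/storagenode:latest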

The only downside is that it creates node churn on the network, and at the moment Storj pays a lot to repair data (@Alexey explained it in another thread a while ago, and the price is pretty ridiculous if I remember correctly, in the hundreds of dollars for a single TB), so it would kind of hurt the network…

It doesn’t hurt the network that much unless you’re storing a lot of data. Let’s say my one 8TB node dies tomorrow, or my two nodes holding 8TB of data die; that would be worse than three 320-gig drives in RAID 0.

Unless electricity is free, which seems to be @Pietro.anon’s case, running a very small node would make it hard to be profitable, because an old 300GB disk probably consumes as much (if not more) electricity than a more or less modern 2+TB one, for instance.
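As a rough, illustrative calculation (the wattage and the price are just assumptions): a drive drawing about 7 W around the clock uses roughly 61 kWh per year (7 W × 8760 h ≈ 61 kWh), so at $0.20/kWh that’s around $12 per year per drive, whether it holds 300GB or 8TB. Spread over 300GB that eats a big chunk of whatever such a node can earn; spread over several TB it’s barely noticeable.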

That said… apparently running a node on disks smaller than 500GB (550GB in fact, to allow for the recommended 10% overhead) isn’t possible anyway, so… in this specific case a RAID setup seems like the only way it makes sense for a Storj node.


It is possible, you just need to change the configs accordingly:

# how much disk space a node at minimum has to advertise
storage2.monitor.minimum-disk-space: 500.0 GB

Changing this should allow you to allocate whatever you want. (I have not personally tested it)
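For a Docker-based node, the whole change would look roughly like this; the values are just an example (and, as said, untested), and the container name is assumed to be the default storagenode:

# in config.yaml (example values only):
#   storage2.monitor.minimum-disk-space: 250.0 GB
#   storage.allocated-disk-space: 280.00 GB
# then restart the container so the node picks up the change:
docker restart -t 300 storagenode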

Anything is possible, but as per the SNO Terms and Conditions (which are currently down) you are NOT ALLOWED to operate a node < 500GB.


Apparently that’s also more of a guideline… like so many other supposed rules.

Storj club.
The rules are… there are no rules… but we’ll write some anyway to keep people from doing stupid stuff.

seems like a slippery slope to me

What I meant is that old drives will fail much faster than new ones. You might fill up your node in a couple of months and then it dies with a very small held amount.

What he said. His /24 IP would have more than 500GB.

Ah I see, so… wait what?

If there’s an option to override the default minimum so we can set any allocated size we want for the node, why exactly does this storage2.monitor.minimum-disk-space option exist in the first place? I don’t get it. The node could simply accept any size by default, period.

Sometimes I’ve got that feeling too.

That contradicts the ToS, which is so weird. So according to the ToS we are NOT allowed to do that, but in practice it’s fine :no_mouth:

Maybe the ToS should be “eased” a bit by removing these hard limits then, and just say that StorjLabs highly recommends this and that instead? :slight_smile:


Why not… The software is open-source anyway so you could easily fork it, change the minimum required size and use your own version.

As littleskunk said, “in your special case 3 nodes with 320GB each should be fine for us”, since the reason for the 500GB in the ToS is probably that a few single nodes with less than 500GB are just not viable for the network, as the transaction fees would regularly cost a lot more than those nodes earn. But if you run multiple smaller nodes on the same IP using the same wallet, that is not much different from running one bigger node, except for a little overhead on the satellites, which have to manage data for 3 different nodes instead of 1 bigger one.

But you are right that the ToS is not reliable anymore (it actually hasn’t been for a long time), because it also says 1 core per node and 1 node per /24 subnet, and neither of those is true.
But let’s not make this thread about the ToS :smiley: Storjlabs is aware of the problems with the ToS, but changing it has to go through legal etc. and that takes ages…
We wanted to solve the OP’s problem of having lots of smaller drives, and the solution is that he can change the config and just use his 3 little HDDs for 3 smaller nodes.


I just changed the config.yaml file to allow for a smaller disk size and restarted my node, which originally had 500GB of files, and now the dashboard is going crazy. The node seems to be working fine, so I’ll let it run and see what happens. The size of the node should gradually decrease as pieces get deleted.


@kevink Alright, fair enough, all that sounds pretty sensible to me :+1:

See here:
