Which kind of RAID should I use to set up my STORJ node? Which level is the best? RAID 0, 1, 2, 3, 4, 5, 6, etc.?

Right, and if you aren’t using redundancy in the first place then the ability to transparently repair from a good copy doesn’t do anything for you, and loudly reporting corruption is pointless (assuming you know the drive is on its way out) since you don’t really care to replace it. So for a single-drive node, such configurations don’t make much sense. (Though I do love btrfs’ ability to shrink a volume while it’s mounted! HFS+ is the only other filesystem in common usage that can do that.)

zfs can take a while to fix a drive… took me 14+ scrubs on my sas mirror before the bad drive stopped throwing errors… now it’s been running fine… i know the one drive is dying, which is why i put them in a mirror…

haven’t had any problems, zfs is great… tho maybe i should try putting in a horribly damaged drive and see what it says… still fairly new to zfs

i’m never going back from copy-on-write and checksummed file systems… so for now it’s zfs all the way, even for one-drive setups

I am running 2 very old 1 TB drives. They must be 10 years old. No issues so far. They keep running. Time to place some bets on which one fails first.

they have plenty of that “they don’t build them like they used to” factor… so they will most likely last until the sun grows cold… lol

my old enterprise sata drives have like 6 years of spinning time… lol

I have a lot of HDDs because I recover them from non-working PCs; in most of them the HDD works perfectly, so I thought it was a good idea to make money running a storj node… I have a constant flow of free HDDs, so if one breaks it’s no problem for me

wow, you are a crazy man. i like the idea of making money running old pc stuff

below 200gb i might not bother… electricity costs will come into play around there, if not sooner, depending on what you take into account…

my base assumption is storage, which pays $1.50 per TB per month… so that’s my base profit, meaning the minimum hdd i will use is 1tb+

but i pay a lot for electricity here and my server is a power hog

i know it can make much more than that… but i need some easy-to-work-with numbers… i mean it’s fun to think about having 100 hdds, but that’s like 700-1000 watts + controller and psu wattage… so it maybe ends up being 1400 watts. that’s a lot of juice, even if they’re all 10tb drives…
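
just to make that math concrete, here’s a rough sketch in python… the $1.50/TB/month payout and the wattage figures are the ones from above, while the $0.30/kWh electricity price is a placeholder assumption, plug in your own:

```python
# back-of-the-envelope: storage payout vs electricity cost (a sketch, not official numbers)
HOURS_PER_MONTH = 730  # ~ 24 * 365 / 12

def monthly_profit(stored_tb, watts, price_per_kwh=0.30, payout_per_tb=1.50):
    """Income from stored data minus the cost of keeping the hardware powered on."""
    income = stored_tb * payout_per_tb
    power_cost = (watts / 1000) * HOURS_PER_MONTH * price_per_kwh
    return income - power_cost

# one full 1 TB drive at ~7 W (700 W / 100 drives) roughly breaks even at these rates:
print(round(monthly_profit(stored_tb=1, watts=7), 2))        # -0.03
# the 100-drive, ~1400 W scenario, if every drive were a full 10 TB:
print(round(monthly_profit(stored_tb=1000, watts=1400), 2))  # 1193.4
```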

I’m running a 1TB external drive by itself (no redundancy). I got the drive in maybe… 2006 or 2007? I don’t recall exactly. It’s slow as hell but still managed to fill up with data and is making me a few bucks a month. Mostly I want the dang thing to die already so I can throw it away without feeling like I’m wasting something that still works… but it just refuses to die. :slight_smile:

And I used to take it everywhere so it even sustained a fair bit of physical abuse in my backpack in college. No idea how it’s survived this long.

I would even start with a smaller drive to get my node vetted and later migrate the node to a bigger drive.

perfect little drive to start new nodes :smiley:

There is a minimum of 500GB.
I tried to spin up a node with a 400GB disk out of curiosity and the log gives this error:

I don’t have any electricity cost because I have 22 solar panels

Is the SNO T&C being updated to reflect the same?

In Italy there is a lot of sun all year round

So, do you think I can also use the stuff I find in the dump, 5 to 15 years old?

If it’s been sitting outside in the sun or with other stuff piled on top of it then I would be very surprised if it’s in a usable condition. You can always try and see, though. :slight_smile:

usually i find pretty good stuff (for a dump) because people throw away pcs that usually work. they get rid of them because they don’t know how to update them, or they are just told that it is a bad pc and they should buy a new one.

I am using raidz2, which is the zfs equivalent of raid6 - two drives for parity. I am using 6 drive vdevs, which means that out of 6 drives I get the capacity of 4 drives and can tolerate a failure of 2 drives.
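
The arithmetic is simple enough to sketch (this ignores zfs padding and metadata overhead, so real usable space comes out a bit lower; the 10 TB drive size is just an example):

```python
def raidz_vdev(drives, parity, drive_tb):
    """Rough numbers for one raidz vdev (ignores zfs metadata/padding overhead)."""
    return {
        "usable_tb": (drives - parity) * drive_tb,
        "tolerated_failures": parity,
    }

# 6-drive raidz2 vdev of 10 TB drives: capacity of 4 drives, survives any 2 failures
print(raidz_vdev(drives=6, parity=2, drive_tb=10))
# {'usable_tb': 40, 'tolerated_failures': 2}
```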

A DQ (disqualification) would mean the loss of about a year of progress, so I’d rather use more drives than do something I know would be less reliable and therefore pointless to do in the first place.

if i had the drives and the bays for it i would run raidz2… but i also want good raw iops, which means a raidz2 setup would require a minimum of 12 drives, 18 for similar iops, and 24 for better capacity utilization at the same iops…
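
roughly speaking, the usual rule of thumb is that each vdev delivers about one drive’s worth of random iops… so assuming 6-wide raidz2 vdevs like above vs 2-way mirrors, the drive counts compare something like this (just a sketch of the rule of thumb, not a benchmark):

```python
# rule of thumb: random iops of a pool scales with vdev count,
# and each raidz/mirror vdev gives roughly one drive's worth of random iops
def pool_layout(total_drives, vdev_width, parity_per_vdev):
    vdevs = total_drives // vdev_width
    usable_drives = vdevs * (vdev_width - parity_per_vdev)
    return {"iops_units": vdevs, "usable_drives": usable_drives}

print(pool_layout(12, 2, 1))  # 2-way mirrors: {'iops_units': 6, 'usable_drives': 6}
print(pool_layout(12, 6, 2))  # raidz2:        {'iops_units': 2, 'usable_drives': 8}
print(pool_layout(18, 6, 2))  # raidz2:        {'iops_units': 3, 'usable_drives': 12}
print(pool_layout(24, 6, 2))  # raidz2:        {'iops_units': 4, 'usable_drives': 16}
```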

i know it would be stupid to do something like a 6-drive raidz1, because the more drives i add and the more tb of capacity each has, the more difficult resilvering becomes… and that’s really where raid6 / raidz2 shines, since it doesn’t get corruption if there are issues during the process…

still new to zfs, so maybe eventually i will only run raidz2… but for now, with a 12-bay server and a few drives that are behaving differently from the rest, i’m also trying to take into account how i upgrade.

luckily i found out my onboard 6-port sata controller can handle 2tb+ drives, so that gave me a good, tho ghetto-looking, upgrade option.

ofc my pool is mainly vm’s and the primary storagenode, plus whatever else i find to put on it when i get some more room in the near future… my own data is on a mirror, not the performance pool.
mirrors are supposedly so safe it’s almost ridiculous… ofc raid6 is in theory better for reliability, because one wouldn’t have any exposure to the chance of bit corruption during resilvering.

but it’s ofc a trade-off, because a raid6 array has 1 disk’s worth of iops.
raid5, however, one should never use… with today’s drive sizes the odds of hitting another error during a rebuild are just too high… raid6 or mirrors are essentially the only standard raid setups that should be used when considering redundancy…

not taking into account the hybrid raid5s, which i’m sure are very safe if they have checksums and whatnot

i’m too stubborn and too cheap to run raidz2, i’ll have to learn the hard way lol

I used to work at my university’s IT department and was “forced” to demagnetize and throw away a few hundred 2TB hard drives. They tore down the old storage servers and didn’t want to resell the hard drives (which were used but still in good condition) because they were concerned about people recovering “sensitive” data from the drives.
Now that I’ve learned a bit more about hard drives, I know they might have been at the end of their life. But thinking about those hundreds of destroyed hard drives still hurts my heart.
