RAID vs No RAID choice

Nice table. Nice try. Especially because, after fixing the initial values to more realistic ones, it begins to show the profitability of the RAID. And it still does not take into account the vetting process and the lost egress profit.

What I mean about the initial values: 200GB/day from one IP, sustained over years, is absolutely unrealistic. Change this to 15-20. Moreover, ingress does not mean storage; some of the data gets deleted again.

But the apologists for individual disks make the biggest logical mistake: why, when you show the profit in your calculations, do you continue to assume that a moment will come when I cannot add new disks? Once you correct that mistake of yours, there will be nothing left to calculate. It turns out that you lose data and, accordingly, money, but I do not.

1 Like

I currently have a NAS with 2 HDDs in a Synology SHR-1 configuration, with 2 empty bays still available for future expansion. These drives are not cheap, and 100% of the NAS is being used just for Storj. This was a hefty investment to begin with, but I am in it for the long haul. So I want to protect my node from an HDD failure and retain the existing reputation, to ensure that after 10 months I am getting the full benefit rather than starting over and making 1/4 of the income again, which most people here seem to think is a good idea.

Both HDDs are new enterprise 12TB drives and have been running since the V3 alpha launched.
I also have read & write caching enabled, using 2x 512GB NVMe drives.

Sure, I could have created 2 separate nodes on the same NAS, one per drive, but that increases CPU and memory usage and doubles the headache if something goes wrong, all for a small gain in the first 10 months and a higher risk after 10 months…

Also, if you're hosting from the same external IP, doesn't Storj block this and still treat you like 1 node anyway? So what's the point?

1 Like

Dear Krey,

thanks for reviewing my calculations. There are some things I agree with, some I disagree with, and some things of yours I do not understand.

Agree:

  • The vetting process is not included yet, as I mentioned myself already. You can run the vetting process for new nodes at any time in parallel, using just 500GB of space, which might be available while your other nodes are still filling up. Vetting takes 1-2 months at the longest.
  • Ingress is not equal to stored data. Correct. And indeed, the fill rate of the node is an important variable in this calculation; the slower the node fills, the more important it becomes to protect your “hard-earned” ingress.

Disagree:

  • Ingress: in the first months of this year I constantly had 200-250GB of ingress a day on both of my nodes. Since the network entered the production phase this has indeed dropped, but I currently still see 150GB/day. Guys, please tell me if this is way too much to use as an average value.
  • I really have to drop the numbers (in my table) to 10GB/day of ingress for the RAID5 setup to be more profitable (and only after 55 months!). Everything above 10GB/day makes the non-RAID setup better; a rough sketch of this comparison follows below the list.
  • Disagree on lost egress profit. I am not sure where you picked this up, but I certainly did implement the loss of the held-back amount (escrow) when your non-RAID node drops dead. That is kind of the whole idea of the comparison!
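
To make that break-even claim concrete, here is a minimal sketch of the comparison logic in Python. All parameters (payout rate, fill rate, failure month) are illustrative assumptions, not the values from the actual spreadsheet; the shape of the calculation is the point: RAID5 permanently gives up one disk of capacity, while the no-RAID setup loses one disk's worth of data once and refills.

```python
# Minimal sketch of the RAID5 vs. no-RAID comparison -- assumed numbers,
# not the spreadsheet's. Ingress is shared per IP, so both setups fill
# at the same rate; they differ in usable capacity and in what a single
# disk failure costs.
DISKS = 8                   # spreadsheet default
DISK_TB = 12                # capacity per drive, TB
INGRESS_TB_MONTH = 4.5      # assumed ~150GB/day
PAY_USD_TB_MONTH = 1.5      # assumed storage payout

def earnings(capacity_tb, months, fail_month=None):
    """Cumulative storage earnings while filling; fail_month crudely
    models losing one disk's worth of data without RAID."""
    stored = total = 0.0
    for m in range(months):
        if m == fail_month:
            stored = max(0.0, stored - DISK_TB)
        stored = min(capacity_tb, stored + INGRESS_TB_MONTH)
        total += stored * PAY_USD_TB_MONTH
    return total

raid5   = earnings((DISKS - 1) * DISK_TB, 55)           # parity costs 1 disk
no_raid = earnings(DISKS * DISK_TB, 55, fail_month=30)  # 1 disk dies mid-way
print(f"RAID5 over 55 months:   ${raid5:.0f}")
print(f"no-RAID over 55 months: ${no_raid:.0f}")
```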

Things I do not understand:

  • But the apologists for individual disks make the biggest logical mistake: why, when you show the profit in your calculations, do you continue to assume that a moment will come when I cannot add new disks?

Maybe there is a language barrier; I do not get this sentence.

  • It turns out that you lose data and, accordingly, money, but I do not.

And that is exactly what I tried to show with the calculations: you lose data = you lose money, but you have earned, and will earn again (after a disk replacement), more due to the larger available storage…

But hey, since you found so many logical and implementation mistakes in the spreadsheet, please feel free to correct them and post your proof here. I would be eager to see it. So far I still see no numbers from you. It is not for nothing that I named the file v01… please let there be a v02 in the near future (since I have to decide whether to rearrange my NAS setup)…

2 Likes

@Sahsa:

Also, if you're hosting from the same external IP, doesn't Storj block this and still treat you like 1 node anyway? So what's the point?

No. As I understand it, the wording is: two nodes on one IP do not earn more ingress than one (big) node, so please just run one node instead of many. Moreover, multiple nodes will disturb each other and the vetting process takes longer.
But this does not take the RAID discussion into account.

Please take a look at my calculation to see whether it really is not worth it to run two nodes on your two disks, even if one breaks before the 10 months are over.

Question to the forum: do disks really break that quickly? Usually a disk runs for 4-5 years (see the statistics that the big storage centers often publish). Why don't you trust your disk (even in a 24/7 workload) with more lifetime than a few months…
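
For a rough answer: assume a constant annualized failure rate of about 1.5%, which is in the range the big operators publish (an assumption, not a spec for any particular drive). Then a single disk almost certainly outlives its first 10 months:

```python
# Constant-hazard survival estimate for one drive. The 1.5% AFR is an
# illustrative figure in the range large storage operators report.
AFR = 0.015
months = 10
p_survive = (1 - AFR) ** (months / 12)
print(f"P(drive survives {months} months) ~ {p_survive:.3f}")  # ~0.987
```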

1 Like

May I draw your attention to this thread?

2 Likes

Because those were the unique months of huge test traffic in the whole v3 period; that ingress is gone now.
Today my nodes get about 2GB of ingress each. Not 20, and not 200!

But does the escrow in your table include the $20 per TB of income from egress?

This is the mistake I am talking about. I always have free space. My free space does not depend on the number of disks. When Storj fills the first disks, I add more disks or a disk shelf, or replace them with higher-capacity disks.

1 Like

Nothing to calculate. When a disk fails or starts to accumulate pending sectors, I replace it. So I have had zero loss across 3 replaced HDDs, and I have 30TB of free space right now with 75TB used.

1 Like

But we're not talking about you; we're talking about the average SNO. We get it by now that you have an almost unlimited supply of hard disks. For you it makes sense to have a RAID. But you generalize your case and say RAID is better for everyone. IT IS NOT.

2 Likes

That is not true; I never said this. On the contrary, I say that RAID is a must-have for people who want to earn serious money from this project. For others, I merely suggest using RAID.

Moreover, the default in this table is 8 HDDs. Did you mean that 8 HDDs is the average SNO, or not?

And by the way, the title says nothing about the average SNO.

Your remark does not make any sense at all, since many SNOs have only one drive, i.e. any RAID is useless there.

1 Like

Hey Wolf, thanks for the sheet. I would just suggest that you could maybe convert it (by importing) and publish it on Google Sheets for easier access.

1 Like

Lost egress profit is the amount of money I would have made if my node had survived.

Let's say I have a 10TB node; it takes a year to fill up to 10TB and then it is DQed. I set up a new node, which also takes a year to fill up to 10TB.
During the year while my new node is filling up:

  1. I would most likely have earned more from egress on my old (full) node, and
  2. my old node had a lower escrow percentage.

Also, it would be nice if ingress stayed at 200GB/day all the time, but it is usually much lower. (A rough version of this arithmetic follows below.)
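
A back-of-the-envelope version of that argument, with made-up but plausible rates (the storage payout, egress volume, and egress price are all assumptions here, not official figures):

```python
# Compare one year of income from a full 10TB node vs. a replacement
# node that spends the year refilling. All rates are assumptions.
STORAGE_RATE = 1.5       # $/TB stored per month (assumed)
EGRESS_PRICE = 20.0      # $/TB downloaded (assumed)
EGRESS_SHARE = 0.05      # assumed: 5% of stored data egressed per month

def year_income(start_tb, fill_tb_month, cap_tb=10.0):
    stored, total = start_tb, 0.0
    for _ in range(12):
        stored = min(cap_tb, stored + fill_tb_month)
        total += stored * (STORAGE_RATE + EGRESS_SHARE * EGRESS_PRICE)
    return total

surviving = year_income(10.0, 0.0)         # old node, already full
replacement = year_income(0.0, 10.0 / 12)  # new node, filling all year
print(f"surviving node: ${surviving:.2f}")
print(f"replacement:    ${replacement:.2f}")
print(f"lost profit:    ${surviving - replacement:.2f}")
```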

1 Like

@Floxit: Nice suggestion. Feel free to do it yourself and go ahead :wink:
@Krey: regardless of the spreadsheet having 8 disks predefined, the number of disks matters as follows: the more disks you have in your RAID5 array, the smaller the impact compared to a non-RAID setup, because the one-disk parity overhead is spread across more drives.

@Krey + Pentium100:
The game changer is the ingress speed. Assuming 2GB/day changes everything. If your fill-up time is a year or more and you lose income over that whole period, it surely is essential to take care that your node does not go down. The quick fill-time arithmetic below shows the spread.
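
The fill-time spread is easy to see with plain division (10TB target, no deletes modeled, which is optimistic):

```python
# Days to fill a 10TB node at the two ingress rates discussed above.
for gb_per_day in (150, 2):
    days = 10_000 / gb_per_day   # 10TB == 10,000GB
    print(f"{gb_per_day:3d}GB/day -> {days:6.0f} days (~{days / 365:.1f} years)")
```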

1 Like

Please add egress income to your spreadsheet, say 10% of ingress by default, and we will all see that it is a game changer.

1 Like

While @Krey's setup makes financial sense (cheap $20 used drives keep the cost down even though they are more likely to fail, and raidz1 is additionally more resilient during rebuilds), yours absolutely doesn't. Unfortunately SHR-1 is based on RAID5 and is very likely to fail a rebuild with drives of that size. The upside is that you're using enterprise drives, which do slightly better, but I believe the failure rate of a rebuild would still be around 50%, and in that case you would lose your entire node. With 2 disks, as you are using right now, that's a 50% chance of losing the entire node, versus a 100% chance of losing half your nodes if you ran 2 separate nodes on those two HDDs; except the latter would also allow you to share twice as much space. The equation becomes much worse when you start adding disks: a single failure could still lose you everything with about a 50% chance, while a single failure when running 4 nodes on 4 disks would only lose you 25%.

Even in your current situation you ignore that with a drive failure you would only lose 1 node, not both. You won't be making 1/4th; you'd be making 100% on your first node and 25% on the second. The more HDDs you add, the better that option becomes.
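
The rebuild-failure figure presumably comes from unrecoverable-read-error (URE) math. Here is a sketch of that calculation, assuming independent bit errors; real errors cluster and vendor URE specs are conservative, so treat the output as a rough bound whose exact value depends entirely on the assumed URE rate.

```python
# URE math for a RAID5/SHR-1 rebuild: every remaining drive must be read
# without a single unrecoverable error. Assumes independent bit errors.
import math

def p_rebuild_ok(drives_read, tb_each, ure_per_bit):
    bits = drives_read * tb_each * 1e12 * 8
    return math.exp(-bits * ure_per_bit)   # (1 - p)^n ~ exp(-n * p)

for drives_read in (1, 3):                 # 2-drive and 4-drive arrays
    for label, ure in (("consumer 1e-14", 1e-14), ("enterprise 1e-15", 1e-15)):
        p = p_rebuild_ok(drives_read, 12, ure)
        print(f"{drives_read + 1}-drive array, {label}: P(ok) ~ {p:.2f}")
```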

2 Likes

It's not mining. I could have a 10G connection, but if nobody is uploading data to my node, it won't fill up.
Right now my node is filling up at about 278MB/hour (roughly 6.7GB/day) averaged over the last day.

Here's how the actual data stored on my node looks over time (yes, there are gaps):

[graph: stored data over time]
The drop to zero was a network wipe, not DQ.

OTOH, if my node were filling up at 200Mbps constantly, I would not mind DQ that much. I could keep a spare node that is already vetted and ready to take over if the main one gets DQed.

I would also be setting up more nodes, because my server does not have an infinite number of drive slots.

2 Likes

Running multiple nodes from the same external IP will not net you more profit, since Storj will still treat you as 1 node. I don't see the benefit of running 2 nodes (1 per HDD) if Storj still sees you as just 1 node, not 2.

1 Like

How about sharing 24TB instead of 12TB?

2 Likes

As I said above… if Storj sees 2 nodes from the same external IP, it will treat you as 1 node. If you have 2x 12TB, you will probably only get 6TB on each node, or less traffic overall, to limit the risk you pose to the network, since you're just 1 node / entity (external IP).

1 Like

Yeah, you're misinterpreting that. Node selection picks an IP subnet first, then picks a node within that subnet. So you get the same total amount of ingress no matter how many nodes you are running, but the network won't suddenly stop sending you data once one 12TB drive is full: each node can fill its full 12TB, for a total of 24TB.
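
A toy model of that selection behavior (this is not the actual satellite code, just an illustration of "subnet first, then node"):

```python
# Toy /24-subnet selection: a piece lands on a subnet first, then on one
# node inside it. Total ingress per subnet stays the same; it just splits.
import random

def pieces_received(nodes_in_subnet, pieces=100_000, subnets=1_000):
    counts = [0] * nodes_in_subnet
    for _ in range(pieces):
        if random.randrange(subnets) == 0:          # our subnet was picked
            counts[random.randrange(nodes_in_subnet)] += 1
    return counts

random.seed(1)
print("1 node: ", pieces_received(1))   # e.g. [~100]
print("2 nodes:", pieces_received(2))   # e.g. [~50, ~50] -- same total
```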

3 Likes

What?

Some more characters…

1 Like