2nd node or just increase the 1st node?

Hi

I run a node that is now at 5TB. It's built on a Synology NAS with RAID 5.
I have 2 options and I’d like your input. I can increase the size of the node (I have the space) or I can create a new volume and run a 2nd node on this new volume.
Are there any advantages to any of the options?
I remember reading a rule against multiple nodes using the same IP, but everyone seems to be doing it… so, where are we on that subject?
Also, in case I create a 2nd node, the hardware will be exactly the same. Will this be acknowledged by the satellites? If not, are there any disadvantages during the vetting period?

thanks

Personally, I would prefer to create a second node: the HDD is fresh, and the nodes stay independent in case one of them dies. Very large nodes mean a higher risk of losing everything. Just my thoughts on it, not confirmed yet, as it hasn't been the case for my node so far. I guess I'll have this question by the end of this year, too.


More than one node behind the same IP address is a violation of the T&C. While it is not currently enforced, Storj Inc. may start enforcing it at any time, so I'd advise against it.

You can, however, rent a separate IP address (e.g. by getting another line from your ISP), and then it will be fine.

Are you sure? I have not read it, but I'm not aware of that. Traffic is handled as if there were one node; but if the existing node cannot take the "normal" traffic because there is not enough space available, that's exactly the question to be answered: increase the space, or run a second node on the same network.

You are a node operator and you have not read the Node Operator’s Terms and Conditions? That’s curious.

you will not […] * Operate more than one (1) Storage Node behind the same IP address;


haha, no :sunglasses:

“We recommend to run a new node, if you want to add an additional hard drive.”


That doesn’t contradict NOTaC. You can operate multiple nodes on the same hardware, as long as they’re behind different IPs. So when you get a second hard drive, you’re also supposed to get a new IP.

I've not seen a clear statement from Storj on this yet. :v:t2:
Like @humbfig, I would be very happy to have one.


While it is true that this is still there, it is extremely outdated. It dates back to when there was no IP-based node selection. At the time you could technically spin up 10 nodes and get 10x as much data. That hasn't been the case for a long time now: you get the same total amount of data no matter how many nodes you run. Ever since that was implemented, Storj Labs has said it is OK to run more than one node, though they still recommend against running multiple nodes on the same storage device. They've promised for years that updated T&C were being worked on, but the terms were never updated to fix this.
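To illustrate what IP-based selection implies in practice, here is a minimal sketch (my own illustration with made-up node IDs and addresses, not Storj's actual selection code): if candidates are grouped by /24 subnet and at most one node per subnet is picked, extra nodes behind the same IP share the same ingress rather than multiplying it.

```python
# Minimal sketch of /24-subnet-based node selection (illustrative only).
import ipaddress
import random

def pick_nodes(nodes, count):
    """nodes: list of (node_id, ip). Returns up to `count` nodes,
    at most one per /24 subnet."""
    by_subnet = {}
    for node_id, ip in nodes:
        subnet = ipaddress.ip_network(f"{ip}/24", strict=False)
        by_subnet.setdefault(subnet, []).append(node_id)
    # one random representative per subnet, then sample among subnets
    candidates = [random.choice(ids) for ids in by_subnet.values()]
    return random.sample(candidates, min(count, len(candidates)))

nodes = [("node-a", "203.0.113.10"),
         ("node-b", "203.0.113.11"),   # same /24 as node-a
         ("node-c", "198.51.100.7")]
print(pick_nodes(nodes, 2))  # node-a and node-b never both get picked
```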

However, they can't possibly enforce this clause now, after years of precedent of them saying it is fine to run multiple nodes. I don't like telling people it's fine to ignore certain terms, but at this point they've brought it on themselves. You can safely ignore this specific term. There are others that seem outdated to me… but I would stick to the rest of them.

@StorjLabs, for the peace of mind of node operators, please finally fix the terms now… This will only lead to confusion, and you don't want to set a precedent of SNOs ignoring the T&C en masse.


Not the point. My drives are in raid 5. There are no fresh drives to add.

It's not recommended to use a large RAID5 node because of the growing I/O. It's better and cheaper to use single drives with no RAID. I would suggest anything between 3 and 8 TB per disk.

As you have RAID resiliency, you have reduced the chance of your node failing due to a disk error, so I would increase the allocated space for the node. That is what I've done… gradually increased the available space for the one node on my RAID6 array.


With the existing setup and scenario, I agree this is the best solution.

Just increasing the size of the existing node is the easiest, plus you don’t need to wait for a second node to finish vetting.

Also I really wouldn’t use RAID5.

Well, I already had the NAS in raid5…
Anyway, why not?
The thing is, if I had 10 nodes of 5TB and one of them crashed, that would be ok… I would just start another one…
But I only have one 5TB node. It took 1.5 years to get there. If the disk crashed, I wouldn't even start all over again… too painful…


I see the point of RAID5 not being that safe. But my NAS has 6 bays and only 3 are occupied. I can still grow the RAID5 and at some point switch to RAID6… I believe RAID6 is pretty much safe in terms of disk failure (if you use the right disks, meaning Japanese). Maybe not so much in terms of NAS hardware failure… but nothing is 100% infallible.
Losing my 5TB node, now that it's starting to pay off, would be heartbreaking…

I know that this topic has probably been answered a long time ago, but I would like to comment: I personally recommend having one super node with the best possible redundancy and continuously adding disks / increasing disk capacity.
That’s right, if one big node fails, you’ve lost everything, that’s why the redundancy.
If a small node fails, you haven't lost everything, but are you willing to wait for a replacement to fill up before it becomes profitable again?
I personally use RAID1 with two identical disks, above which I have LVM, which connects these smaller arrays into a large logical volume.
If one disk fails, the data is on the other.
If I need to enlarge the disks, I will move the PV to another available space and replace the smaller disks / one array with larger ones…
Of course, this is just my observation. Each SNO can decide as they see fit :slight_smile: .


You mean, you have several disk pairs in raid 1 and combine them all in a large volume? That seems a bit overkill…
I believe the typical SNO who hates RAID has something like 10 nodes on 10 separate disks. Sometimes they lose 1 node to a disk failure, start another one, and still have 9 at full throttle. I could take that. But I just have one 5TB node. I can't afford to lose it, because I have no will to wait another 1.5 years to reach the same point. That's why I keep it on a pretty safe RAID (not perfect!).
I think I will start another node on a single, not-too-large disk, and so on as each fills up… I just don't want to lose my very first node, so I will keep it "RAID protected". Eventually, if I'm lucky enough to build up to 10 nodes, I will become a RAID hater too…

That article, and other derivatives of the original one that first appeared on ZDNet (I think), are flawed and massively overestimate the probability of failure; the results contradict common sense. If their conclusion (the last sentence in the quote above) were even remotely plausible, we would be finding new bad/rotten sectors and seeing checksum failures on almost every scrub (which, as a side note, should be scheduled periodically to keep data viable!): a scrub involves exactly the same steps as a rebuild (read data from all disks, compute and compare checksums). This, of course, does not happen in reality.
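For a sense of the numbers, here is the back-of-the-envelope calculation those articles rely on (my own figures, not from the article: the commonly quoted 1-URE-per-1e14-bits spec and 10 TB read per rebuild or scrub). The naive math predicts read errors on roughly half of all full-array reads, which is exactly what regular scrubs fail to show.

```python
# Naive URE math only; spec-sheet rate, not a measured one.
import math

ure_per_bit = 1e-14              # common consumer-drive spec figure (assumed)
tb_read = 10                     # data read during one rebuild/scrub (assumed)
bits_read = tb_read * 1e12 * 8

# P(at least one URE) = 1 - (1 - p)^n, computed via log1p/expm1 for stability
p_any_ure = -math.expm1(bits_read * math.log1p(-ure_per_bit))
print(f"Naive prediction: {p_any_ure:.0%} chance of a URE per {tb_read} TB read")
# Prints roughly 55%. If that were real, monthly scrubs (which read the same
# amount of data) would report errors nearly every time, and they don't.
```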

I recommend that RAID5 owners (and especially Synology owners, who run btrfs on top of md raid5, where a scrub scans the entire disk surface regardless of utilization) look at their (hopefully at least monthly) scrub logs for the past few years and see how many corrections were actually made. You'll be surprised how close it is to 0 and how far that is from the predicted mayhem.
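On a stock Linux md setup you can check this directly: after a "check"/"repair" scrub, md records the number of mismatched sectors in sysfs. A tiny sketch (standard Linux sysfs layout assumed; Synology's DSM may lay things out differently):

```python
# Minimal sketch: report the mismatch count md records after a scrub.
from pathlib import Path

for md in sorted(Path("/sys/block").glob("md*")):
    cnt = md / "md" / "mismatch_cnt"
    if cnt.exists():
        print(f"{md.name}: mismatch_cnt = {cnt.read_text().strip()}")
```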

Bottom line: RAID5, RaidZ1, and other single-disk fault tolerance arrangements with a reasonably small number of disks (I'd say up to 12-16) are totally fine, provided they are scrubbed periodically. If the array has not been scrubbed for years, then sure, I would not trust it to rebuild either, nor the data to be viable.

(Note: RaidZ1, unlike RAID5, is special in that its replace operation on the vdev preserves access to the redundancy afforded by all the disks present, including the one being replaced, for the duration of the replacement; so it's not a fair comparison, as it is much safer.)