Virtualization will slow things down a bit, but generally it doesn't matter.
Multiple nodes behind the same IP act as one big node, so if you plan to use them to increase your income - that will not happen. This is useful only if you have several separate HDDs or separate devices. All these nodes act like a RAID at the network level.
Each new node must be vetted; while unvetted, a node receives only 5% of customers' uploads. To be vetted on one satellite, the node must pass 100 audits from it. For a single node in a /24 subnet of public IPs this should take at least a month.
If you start multiple nodes at once, vetting can take roughly as many times longer as there are unvetted nodes. So it's better to start a new node only when the previous one is almost full, or at least vetted.
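To make the scaling concrete, here is a rough back-of-the-envelope estimate (my sketch using the numbers above - ~100 audits and about a month for one node - not an official Storj formula): since audits arrive roughly in proportion to the data each node holds, splitting the unvetted ingress across N unvetted nodes in one /24 subnet stretches each node's vetting time by about N.

```python
# Rough vetting-time estimate (illustrative, based on the figures above:
# ~100 audits per satellite, ~1 month for a single unvetted node).
def vetting_months(unvetted_nodes: int, base_months: float = 1.0) -> float:
    """Estimated months to vet each node when several unvetted nodes
    share one /24 subnet and therefore split the 5% unvetted ingress."""
    return base_months * unvetted_nodes

print(vetting_months(1))  # -> 1.0  (single node: about a month)
print(vetting_months(3))  # -> 3.0  (three started at once: ~3 months each)
```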
We always recommend using existing hardware that would be online anyway - with Storj or without. So start small, then increase capacity as your node grows. If the disk is full and you have another one - start the next node.
Wow, thank you so much.
What an awesome community here.
You have really helped me wrap my head around this.
A little more about my setup - if you have the time, maybe you or others could make further suggestions.
I was given my family's servers. Lots of space, not so much personal data - just a few services running, nothing serious.
~1 server with 14x 2TB SAS
~1 server with 8x 4TB SAS
~1 server with 4x 1TB SAS
SSDs for the hosts' OSes - a Proxmox cluster
Then I have this old Fibre Channel SAN:
~50x 148GB 15K SAS in JBOD mode
And a couple of disk shelves in JBOD:
~2x 14x 1TB
Got to do something with this extra space. My father did Chia… I'm slowly moving away from it. Or perhaps do a little of both.
Another odd question
Say I have a node running with the minimum specs for Storj - an Ubuntu LXC. Get it all set up and running well, fully vetted.
What if I were to make a perfect clone of this machine - change the IP address and give it a new MAC address/device identifiers? Identical filesystems…
Would that equal 2 vetted nodes?
Or can I clone a node before vetting - a template -
and, as you say, let one vet, then clone the template to make another node to vet separately?
Can I use a VPN to give each node its own IP address, at the router level?
It's an attempt to game the system.
The filtering by /24 subnet of public IPs exists for a reason - we want to be as decentralized as possible: different hardware, different locations, different ISPs and, desirably, different SNOs.
We do this to achieve one goal: no single event can take down the customers' data.
Uptime seems like a big deal.
My 3 servers share the SAN.
I could set up ZFS storage replication so that if a server goes down, the node will automatically jump to a different server.
Would the network notice the change?
It would be identical.
Would the ~10 seconds of downtime be counted against me
if a server were to fail?
When the server came back up, the node would jump back to the original server,
possibly causing small gaps in the data it held.
Not sure how that would affect an end user.
Is that a no-no?
Has anyone attempted this for maximum HA/uptime/availability?
Not needed. I would strongly recommend against replication, and especially against automatic failover, unless the servers use shared storage (please note, the storagenode is not compatible with network filesystems like SMB or NFS; the only compatible network storage protocol is iSCSI).
You can easily lose your node if replication doesn't keep up: when you switch over, the node may be missing pieces because replication is unfinished or stuck, and your node will quickly be disqualified for losing pieces.
So please do not overcomplicate things. Make the setup as simple as possible.
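"As simple as possible" here usually means one node per disk, run directly with the official Docker image. A sketch along the lines of the documented run command - the wallet, email, address, storage size, and mount paths below are all placeholders you would replace with your own values:

```shell
# Minimal single-node setup, one disk per node (all values are placeholders).
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967/tcp -p 28967:28967/udp \
  -p 14002:14002 \
  -e WALLET="0xYOURWALLETADDRESS" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.external.address:28967" \
  -e STORAGE="2TB" \
  --mount type=bind,source=/mnt/disk1/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disk1/storagenode,destination=/app/config \
  --name storagenode storjlabs/storagenode:latest
```

The identity and data live on the local disk the node serves; no replication layer sits between the node and its storage.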
And no, the main factor is data integrity, not the online score.
If your node loses pieces, it will be disqualified within 4 hours. If you use a cloned identity (more than one node online with the same identity but its own storage), disqualification will happen within an hour or less.
Your node can be disqualified for downtime only if it is offline for more than 30 days. If the online score drops below 60%, the node is suspended (no ingress) until the score grows back above 60%. To fully recover, the node must be online for the next 30 days; each downtime requires another 30 days online.
While it's offline, pieces are recovered to other, more reliable nodes and will slowly be deleted from your node by the garbage collector when you bring it back online.
So an HA setup is not only a waste of resources - it can also lead to disqualification in case of data loss.
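To put the downtime numbers in perspective, here is a simplified model (my sketch, not Storj's exact scoring formula): if the online score behaves roughly like the fraction of a 30-day window the node was reachable, a 10-second failover blip is negligible, while close to 12 days offline in a month is what it takes to hit the 60% suspension line.

```python
# Back-of-the-envelope online-score model (illustrative approximation only).
WINDOW_DAYS = 30
SUSPENSION_THRESHOLD = 0.60

def online_score(offline_days: float) -> float:
    """Approximate online score as the online fraction of a 30-day window."""
    return (WINDOW_DAYS - offline_days) / WINDOW_DAYS

# A 10-second failover blip barely moves the score:
print(online_score(10 / 86400))          # ~0.999996
# Roughly 12 days offline in 30 reaches the 60% suspension threshold:
print(online_score(12))                  # 0.6
print(online_score(12) <= SUSPENSION_THRESHOLD)  # True
```

The takeaway matches the advice above: brief restarts are harmless, and no realistic HA gain offsets the risk of disqualification from an out-of-sync replica.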