I just read that you can only assign 24 TB per node, and now my question is: how can I contribute more than 24 TB of storage to the Storj network without having a second location? Would it work if I just opened several nodes with different identities and different DDNS domain names, or is the Storj team working on any kind of solution for bigger nodes? Thank you all in advance for your help, guys.
Take a look at this thread.
Thank you for that tip. I saw some guys are running multiple nodes from one location with different identities, so that is the answer to my problem, right?
Hello, you can have multiple nodes on one IP/… So you can have 5x 10 TB nodes and one public IP address.
But filling 5x 10 TB nodes takes a very long time.
You can check this: How to configure multiple nodes on a single server?
And I see in that tutorial there is no hard maximum; it is only recommended not to have a node with an allocated capacity of more than 24 TB.
Thank you for your help @Krystof, I'm gonna check that out.
You can have several nodes behind the same IP address (or DDNS domain name). If you are using Linux, you don't have to set up another DDNS entry, just change the external port of your Docker container (it is possible on Windows too, but I don't know how since I never used Storj on Windows).
In this case, just allocate part of your whole disk to your node and the rest to the other node(s).
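For example (a sketch only: the wallet, email, domain, paths, and container names below are placeholders, not real values), a second node behind the same IP can reuse the standard `storjlabs/storagenode` run command with a different external port and its own identity and storage directories:

```shell
# Node 1 listens on the default external port 28967.
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967/tcp -p 28967:28967/udp \
  -e ADDRESS="mynode.ddns.example:28967" \
  -e WALLET="0xWALLET" -e EMAIL="me@example.com" -e STORAGE="10TB" \
  --mount type=bind,source=/mnt/disk1/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disk1/storage,destination=/app/config \
  --name storagenode1 storjlabs/storagenode:latest

# Node 2: same container port (28967), different external port (28968),
# and a separate identity and storage directory on a second disk.
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28968:28967/tcp -p 28968:28967/udp \
  -e ADDRESS="mynode.ddns.example:28968" \
  -e WALLET="0xWALLET" -e EMAIL="me@example.com" -e STORAGE="10TB" \
  --mount type=bind,source=/mnt/disk2/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disk2/storage,destination=/app/config \
  --name storagenode2 storjlabs/storagenode:latest
```

Only the forwarded external port and the `ADDRESS` setting change; each node still listens on 28967 inside its own container.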
Just keep in mind that having a huge disk is more risky. Indeed, if your disk fails (and one day or another, it will), all your nodes using this disk may be disqualified.
It is not recommended, and there is absolutely no advantage to running more than one node on the same physical drive. You should only run multiple nodes on multiple independent drives.
I didn't want to run multiple nodes on one drive. My idea was to put multiple 12 TB drives in RAID and use them as one big node with 36 or 48 TB, depending on how many drives I can get.
1 node per disk, 1 disk per node is the golden rule.
I don't get why you would want to aggregate disks to make a bigger one; it increases the risk of being DQed for a disk failure.
Can't you run a second node on your machine? I run 3 nodes on 3 separate HDDs without any issues on low-end hardware.
I just think that it's more complicated to manage like 20 or 30 nodes if I want to scale it a little bit. That's why I would prefer one big node; with the right RAID configuration a disk can fail and I can just replace it without any data loss.
I see, but you talked about 48 TB on 3 disks, not 30 disks, hence my irrelevant question.
If you start building beyond one drive then you will need to include redundancy, which is fine… so long as you know what you are doing…
However, RAID is slow on the IOPS side of things; of course it can be good for data integrity if done right… but if you are new to RAID you may well end up doing something you will regret…
Also, you do know that nodes don't just magically fill up with data, right… over the last 3 months we've gotten maybe a few TB at best… hopefully that will change soon…
I run 3 pools, with 1 of them being a 2x4-drive raidz1 and my two other new nodes being on mirrors, because I kinda hated the IOPS I was getting from the RAID setup…
You think you want RAID… because it's enterprise and fancy… but it will often cause you more trouble than it solves, because you will have a ton to learn about it.
If you do decide to do RAID, you should only use RAID1 or RAID6… or you could do RAID10 and RAID60 (but that may be a bit larger than what you are thinking of, I suspect), or maybe hybrid RAID5 solutions which incorporate checksums, or you can use something like ZFS, maybe Btrfs, but I cannot say how reliable Btrfs is; some people use it…
That doesn't make it reliable, though.
Straight-up RAID0 or RAID5 should be avoided; it's just plain bad long term.
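To put rough numbers on those levels, here is a quick sketch of usable capacity with equal-size disks (the function and its RAID-level strings are mine for illustration, not from any Storj tooling):

```python
def usable_capacity(level: str, n_disks: int, disk_tb: int) -> int:
    """Usable space in TB for common RAID levels, all disks the same size."""
    if level == "raid0":
        return n_disks * disk_tb          # pure stripe, no redundancy
    if level == "raid1":
        return disk_tb                    # every disk is a full mirror
    if level == "raid5":
        return (n_disks - 1) * disk_tb    # one disk's worth of parity
    if level == "raid6":
        return (n_disks - 2) * disk_tb    # two disks' worth of parity
    if level == "raid10":
        return n_disks // 2 * disk_tb     # striped pairs of mirrors
    raise ValueError(f"unknown RAID level: {level}")

# 4x 12 TB drives, as in the scenario above:
print(usable_capacity("raid6", 4, 12))   # prints 24
print(usable_capacity("raid5", 4, 12))   # prints 36
```

So the safer levels (RAID6, RAID10) cost you half the raw 48 TB, which is part of the trade-off being argued here.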
So just set up a couple of nodes, one on each drive, and see how that goes… if you've got that much hardware to throw at it, you can always set up more later, when you get a bit more involved with the project.
You might also want to do other projects on the hardware, so keeping some space free until you know your plans might be very wise.
Sure, but at the trade-off of permanent loss of storage space. Running multiple nodes is a little bit more setup, but after it’s set up watchtower takes care of everything. I would definitely recommend 1 node per disk.
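For reference, one common Watchtower invocation looks like the sketch below (the container names `storagenode1`/`storagenode2` are assumptions, and the Storj docs may recommend a specific Watchtower image rather than the upstream one):

```shell
# Watchtower watches the named containers and recreates them with the
# latest image whenever an update is published.
docker run -d --restart unless-stopped --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower storagenode1 storagenode2 --stop-timeout 300s
```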
I second you on this point. The more I read on this forum, the more I see that compromises have to be made to content every kind of SNO, regardless of their investment. Being a small SNO myself, I can say it has been a flawless experience for many months. On top of the actual infrastructure, medium businesses might need higher-level tools to monitor and manage their nodes. All in all, the network needs motivated people, whatever they can afford, as long as it brings them income. Every SNO tries to optimize their setup just like the Storj devs do on their side, and this dynamic is promising.
It would be very cool if this could become a source of revenue, but because I took way too long to get a setup I deemed worth the time, and to ensure that I can scale beyond reason… my setup is, after 7 months, still barely breaking even on the power bill… but I've learned a lot and will most likely try to expand into other similar projects so I can grow my setup and hopefully start to actually turn a profit doing this.
I really like Storj because they don't force their storage node operators to fill the space with arbitrary data, nor burn value to prove their nodes are real… because that seems very wasteful to me… I've been in the process of looking at what else is out there, and the likes of Filecoin and Sia seem to use very different approaches which look like vastly inferior concepts, IMO…
Though having loads of free capacity, I will most likely at least try out Sia… Filecoin is just out of reach with my current hardware… so far as I can tell, Storj Labs is well positioned in the distributed storage market and seems to have some well-thought-out fundamentals…
Even if I give them a hard time, lol, when I don't understand their reasons or don't agree…
It might well still be the best distributed storage project around… which I guess is why I picked this project to begin with… it was just so long ago I can't remember what my thought process was at the time…