8 TB and a maximum of 24 TB of available space per node
Minimum of 500 GB with no maximum of available space per node
Well you found the right page… But you didn’t read it all.
Surprisingly, by that same reading you should also conclude that the minimum requirement is 8 TB… but nobody seems to think that.
There is no maximum limit. 24 TB is listed only as the recommended maximum, and that recommendation predates any real-life experience with the platform. Once you approach that kind of size, I think you can determine for yourself whether it’s worthwhile to expand the node or not.
Regarding the recommendation not to run more than 24 TB in one place…
Decentralization. If 24 TB were lost all at once, that would be a noticeable amount of lost data. It’s much better to have those 24 TB spread across small nodes around the globe.
It also makes no sense to bring that much space online at once in one physical place. My 10 TB (three nodes) filled up after a year. How long would it take to fill 24 TB? Take non-linear usage into consideration.
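The arithmetic behind that question can be sketched quickly. The 10 TB per year figure comes from the post above; the constant (linear) fill rate is my simplifying assumption, and as noted, real usage is non-linear, so treat the result as a ballpark only:

```python
# Rough fill-time estimate under a constant-rate assumption.
# The 10 TB / 12 months rate is from the post above; real ingress is
# non-linear, so this is only a ballpark figure.

observed_tb = 10.0      # space filled so far
observed_months = 12.0  # time it took to fill it
capacity_tb = 24.0      # capacity being considered

rate = observed_tb / observed_months   # TB per month
months_to_fill = capacity_tb / rate
print(f"{months_to_fill:.1f} months (~{months_to_fill / 12:.1f} years) to fill {capacity_tb:.0f} TB")
```

At that observed rate, 24 TB would take well over two years to fill, which is the point of the recommendation: there is little reason to provision it all up front.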
Of course you can bring up more, if you wish, it’s just not recommended yet.
I think this is the most important argument. By the time you get up to 24TB, you probably have some idea about how useful it is to expand more. But spinning up more than that right away really makes no sense.
The decentralization is a decent enough argument, but shouldn’t really be the concern of SNOs. You can’t assume SNOs will be altruistic like that. If there is more money to be made by sharing more space, it will happen.
well i’m at 1/3 full after only 10 weeks… started out with 6 TB dedicated to the node, but it was growing at a decent pace so i decided to give it some room to grow… good to know it’s only a recommendation…
i did catch that 8 TB was a recommendation though… i just figured when it said max, then it was the max…
will most likely split it up for logistical reasons anyway… but again… it’s most likely all going to live in one big zfs pool… so from a redundancy standpoint, whether i run two nodes or 10 nodes doesn’t really make any difference; if the pool dies, they all die…
these last few weeks have been slow going… but it gives me time to experiment and test a bit.
just managed to copy 1 million files internally in less than 1 minute on the storage pool… got my HBAs moved so they’re only on my northbridge, my SLOG and L2ARC SSDs have been split so they now each have their own 3 Gbit SATA controller, and switched the really old SATA cables on the SSDs to 6 Gbit ones.
woop woop imma gonna get a ticket… still running into some weird 50 MB/s limit on network access to it… but it seems like it must be the NFS server i use for local file sharing from the pool to my VMs
also managed nearly sustained reads of 1.14 GB/s during my scrub last night
the storagenode is baremetal
well the rationale behind having 1 or 2 big pools rather than many smaller ones is that any one storagenode will then have the full bandwidth and IO of the pool if needed; it also allows me to focus more on having some decent redundancy without it taking away too much of the capacity.
and it will allow me to use the pool locally for other network-related stuff and experiments, like trying to run my workstation directly off the pool instead of having local hard drives.
long term i plan to build it into a cluster, so that no matter what breaks the node will survive, maybe even get a second internet connection for redundancy, if this ends up being profitable.
oh and they have an earning calculator so you can see exactly how much you will earn… that is so cool… xD
i bet it’s really accurate, i’m sorry that i assumed that maximum meant maximum…
not like it matters much atm anyway… but if it isn’t a max, then what’s the point of writing it…
for new storagenode operators who might be unfamiliar with running servers, it’s not helpful if they look at the earnings calculator and think they can buy a shit ton of drives and make money…
lol it’s most likely there because of their lawyers… so they can’t get sued straight into hell lol that would be the corporate thing to do… lol well done storj
not totally unprofessional…
Take a breath man, it’s all good. We all overlook things from time to time.
I think Alexey already explained why that is the recommendation. If you have a more specific question, please state it.
There are already other discussions about the earnings estimator in other places, no need to get this conversation even further off topic. Please discuss that in the relevant topics. I even made a suggestion to improve it here. If you want to discuss this there, go ahead. More realistic earnings estimator
@Sasha the idea is valid, but it’s just very difficult to apply.
@BrightSilence nah i’m good thanks, i could just see how having a big sign promising free money if one buys the Kool-Aid could require a limit of 24 purchases for legal reasons, before one has to make that extra effort of standing in the back of the line again… xD
on another note, so they said they were going to like 4x the amount of data on the network, or was it 8x… no matter, a significant multiple at least, over the next couple of months…
6k nodes, that’s right isn’t it… i mean that seems kinda close to what i remember it being during v2
pretty sure the scoreboard we had back then was many thousands, if not 10k.
anyhow… i suppose the easy way is just to say a 400% increase in my data… i should start looking at more drives, though even if we say 4x the regular speed, that gives me like 14 days of reaction time before my pool fills…
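That 14-day headroom estimate can be sanity-checked with a one-liner. The free-space and baseline-ingress numbers below are hypothetical placeholders (the post doesn’t state the actual pool state); only the 4x speedup comes from the discussion above:

```python
# Reaction time before a pool fills if ingress jumps by some factor.
# free_tb and baseline_tb_per_day are HYPOTHETICAL placeholders --
# the post doesn't state the actual free space or current fill rate.

def days_until_full(free_tb: float, baseline_tb_per_day: float, speedup: float) -> float:
    """Days of headroom left if ingress jumps to speedup * baseline rate."""
    return free_tb / (baseline_tb_per_day * speedup)

free_tb = 2.0                # hypothetical free space left in the pool
baseline_tb_per_day = 0.035  # hypothetical current ingress (~1 TB/month)
print(f"{days_until_full(free_tb, baseline_tb_per_day, 4):.0f} days at 4x ingress")
```

With numbers in that ballpark, a 4x ingress jump does leave only about two weeks to react, which is why watching free space matters more once traffic ramps up.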
HDDs are also so unreasonably priced right now… either corona or storj or both xD
think i will go check drive prices though… before they all of a sudden get around to uploading
my parents taught me not to believe what you see in TV advertisements.
if anyone looks at a website that says you can get rich and believes it without careful investigation, i guess they get what they deserve
but yes, i absolutely agree that the earnings mis-calculator should be looked into
Not sure it’s fair being DQ’d forever as you are proposing. You can carefully nurture a node for 17 months, and then have a catastrophic power failure on the grid, a problem with the ISP that lasts 6h (rare, but not impossible), an HDD failure, a fan failure that boils your CPU… a lot of things can happen that don’t mean you are careless.
If you are out of the game for things like that, only professional SNOs with tons of infrastructure could remain: huge UPSes, multiple internet lines, doubled disks for RAID 1 insurance… and I think that’s not the spirit of Tardigrade, is it?
Somehow this has never felt restrictive in any way to me. I don’t know how I manage.
Here’s the thing, if you follow the golden rule of “don’t be a jerk” none of these questions even come up. Most entire sets of forum rules can basically be summarized by that single line. And you know what, it’s a pretty great guideline for life in general.
it’s why i often use plurals or non-descriptives like “we” and “one”, so that when making an argument for a point, the people one is trying to make that point to don’t feel targeted and thus are less likely to get their panties in a bunch.
we can be stupid like that sometimes… all because of “you” … see xD
“You shouldn’t do that” feels much more individually targeted than “one shouldn’t do that”.
because “one” is more inclusive… but yeah, words are powerful tools for our minds, though people often ascribe them more meaning than is required, because most of us see words through our own individual understandings / perspectives.
on another note… think i’m going to try using this on my server
lockstep to improve memory reliability between VMs, and sparing to provide a cheap layer of extra redundancy for my server.
didn’t quite get how much capacity i lose by going to sparing, but i assume just 1 DIMM’s worth… but not totally sure…
If a member’s posts are flagged repeatedly by multiple members of the forum as lessening the value of the conversation for everybody, then it’s possible for a post to be muted.
When the posts are being flagged by other community members, then it is not initiated by the mods, nor is it individually targeted. The community is letting itself know what it does and doesn’t want to see.