I have two Synology NAS units and one public internet IP with 1000M bandwidth. I host 10 Storj nodes in Docker, started 32 days ago, announcing 20+ TB of space in total. There are no errors in the logs, except for one 10-minute Storj update where I stopped Docker normally, and one 5-minute physical machine restart without stopping Docker. After the physical machine restart, one node's audit score dropped below 70; the other 9 nodes are still at 100%. But until now there has been very little data stored on my nodes (about 230 MB of ingress per day). Is that normal? Based on these numbers, it could take several years to reach 1 TB of data. I'm thinking about giving up. Can somebody give me any suggestions? Thanks.
I would check the logs carefully if the audit score dropped to 70 because if it drops below 60, that node is disqualified.
Until your node is fully vetted, it only gets around 5% of the traffic, so it can take a while.
After the vetting period, all ingress in a /24 subnet (one public IP) is divided among the nodes, so you won't get more traffic with 10 nodes than you would with 1 node. Current ingress for vetted nodes is ~35GB/day (per public IP).
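As a rough sketch of how the /24 rule plays out (assuming, as a simplification, that ingress is split evenly among nodes behind the same public IP; the actual per-node split varies with node selection):

```python
# Simplified illustration of the /24 ingress-sharing rule:
# all nodes behind one public IP share the subnet's ingress.

def per_node_ingress(subnet_ingress_gb_per_day: float, node_count: int) -> float:
    """Approximate ingress each node receives when sharing one public IP."""
    return subnet_ingress_gb_per_day / node_count

# ~35 GB/day for the whole /24, whether you run 1 node or 10:
print(per_node_ingress(35, 1))   # a single node collects the full subnet share
print(per_node_ingress(35, 10))  # ten nodes each get a tenth of that share
```

The total across all your nodes stays the same either way, which is why adding nodes behind one IP doesn't increase earnings.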
Thanks for the reply. How do I know if it is fully vetted? Is there a page or somewhere to check?
Hello @Leo,
Welcome to the forum!
See also: Search results for 'check vetting' - Storj Community Forum (official)
Yes, I installed the dashboard when creating the node, and it shows like this.
After updating to version 1.14.7, how do I know the progress of vetting?
Either via the API, or with an Earnings Calculator.
The dashboard does not show this information.
See a feature request:
Got the vetting progress both via the API and the script, thanks.
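For reference, a minimal sketch of the API approach (assumptions: the node's dashboard API listening locally on port 14002, the commonly cited threshold of 100 successful audits per satellite to get vetted, and a hypothetical JSON shape for the audit counts; the exact endpoint paths and field names differ between node versions):

```python
import json

# A node exposes a local dashboard API (commonly on port 14002), e.g.:
#   curl -s http://localhost:14002/api/sno/satellite/<satellite-id>
# The sample below is a hypothetical response fragment shaped like the
# audit data people parse on the forum; check your version's actual JSON.
sample = json.loads("""
{
  "audits": {"successCount": 42, "totalCount": 45}
}
""")

VETTING_AUDITS = 100  # commonly cited successful-audit count needed per satellite

success = sample["audits"]["successCount"]
progress = min(100.0, 100.0 * success / VETTING_AUDITS)
print(f"vetting progress on this satellite: {progress:.0f}%")
```

Note that vetting is per satellite, so you would repeat this for each satellite your node talks to.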
Wouldn't 10 nodes take the better part of a year to vet?
I don’t know. Should I turn off 9 nodes?
I believe this changed recently:
So it shouldn’t take that long.
The question is more: is there any advantage to run that many nodes.
It is recommended to run one node per drive.
Having 10 nodes on one drive doesn’t really add any value, it’s even going to be way more complicated to administer for checking their status, scores, etc. until the multi-node dashboard StorjLabs is working on is ready.
The most common and recommended way to start with Storj I believe is simply to start a single node on a disk, and once it is almost full, start a second one if you have another disk with spare space, and so on.
Even if the vetting process might not take forever with 10 nodes, the sure thing is that each of them will get 10 times less ingress than a single node would, so you won't receive any more data with 10 nodes than with 1 node.
If you have 10 disks within your NASes (I’m not using any NAS myself so I’m not clear on how they make space available when they have many disks), and if they all run anyways for other things, I guess it does not hurt to start 10 nodes.
On a side note, you might want to make a copy of @BrightSilence’s Realistic earnings estimator on your own GDrive and play with numbers to see how relevant or not it is to create a node with as much as 20TB of space, as it may take 2 to 3 years to fill up (based on current activity).
In my humble personal opinion, it does not hurt to start more than one node, like 2 or 3 maybe, with a total shared space of around 8TB to begin with. But starting with 10 nodes sounds a bit overkill to me.
Yes, I was trying to find out if multiple nodes could shorten the vetting time, because I have 12 disks in my NAS.
If filling up 20TB takes 2-3 years, 10 nodes don't make sense; 2-3 nodes would be better.
I don't know much about Storj's rules. It would have been simpler if I had started a topic in the forum first.
my node is creeping up towards 15TB, at 14.5 atm
it's barely 9 months old now, but the last 3 months have been very slow on ingress… for various reasons… one being that I ended up sharing a subnet with another SNO for a while, but I think most will agree that the last 3 months have been very quiet on the ingress side…
so if you take the average ingress of the last 3 months as your expected average, then it will take years… last month was 500GB… so that's like 3½ years to fill 20TB… so yeah, we cannot know what amounts of ingress we will see, and we can surely expect it to go up as Tardigrade ages and becomes used by more and more people and corporations.
on top of that, the internet grows at an amazing speed… but for now it sure does seem like there are plenty of storage nodes, and not enough data to go around.
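The back-of-the-envelope estimate above can be written out explicitly (numbers are the ones quoted in this thread, not official figures, and real ingress fluctuates month to month):

```python
# Rough fill-time estimate using the figures mentioned in this thread.
capacity_tb = 20
ingress_tb_per_month = 0.5  # ~500 GB/month observed last month

months = capacity_tb / ingress_tb_per_month
print(f"{months:.0f} months ≈ {months / 12:.1f} years")  # 40 months ≈ 3.3 years
```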
Good experience. I would like to get my first 1 TB of data in the next month. That would be more fun than earning $1-2.
well with a little luck we will get some test data for xmas…
How long did it take to receive your first 1TB of data?
2 to 3 months on average.
Check Realistic earnings estimator
I don’t know yet, it’s just my expectation.
@Alexey I want to close 7 nodes and tried to use Graceful Exit, but it shows "Error: You are not allowed to graceful exit on some of provided satellites". It's probably because these nodes don't meet the Graceful Exit requirements (they were only started a few days ago and are not vetted yet). If I delete them directly, does it affect the other nodes I want to keep working? They use the same IP and the same email/wallet. Is it possible a score or traffic reduction would happen?
Because your node should be older than 6 months to be able to call a Graceful Exit. Per the ToS it's 15 months, but it has been temporarily reduced to 6 months.
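The age check described above amounts to something like this (an illustrative sketch, not the actual satellite-side logic; the start date here is a made-up example):

```python
from datetime import date

# Minimum node age for Graceful Exit: temporarily 6 months (15 per the ToS).
MIN_AGE_MONTHS = 6

def months_between(start: date, today: date) -> int:
    """Whole calendar months elapsed between two dates."""
    return (today.year - start.year) * 12 + (today.month - start.month)

node_started = date(2020, 11, 1)  # hypothetical start date, a few weeks old
today = date(2020, 12, 15)
eligible = months_between(node_started, today) >= MIN_AGE_MONTHS
print(eligible)  # a node only about a month old cannot call Graceful Exit
```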
That's one thing I never really understood… how exactly are we supposed to get rid of a node if we change our minds for whatever reason… I mean, sure, we can just crash and burn it… but the whole GE thing is about limiting repair, and though newer nodes will have less data, they would still cause earlier repair when nodes get wrongly created like this…
I don't think his request is unreasonable; why can't one delete nodes…? I mean, that would be a "beneficial feature" for the network, and one that will only be more and more requested as the network grows…
Is there any reason there isn't a clear way to delete a node?
I'm guessing it's an oversight… so I made this