Yes, this is normal. My new nodes are doing just as badly. There are too many TBs available for the current network demand.
It doesn’t seem out of the ordinary.
Keep in mind that if you have 3 nodes, they will share incoming traffic. The earnings estimator shows what you can expect for traffic on a single /24 IP subnet, not per individual node.
In the first month(s), vetting determines to a large extent how much traffic you get. The estimator tries to take that into account, but uses a rough estimate by subtracting 75% of traffic in the first month. This is only accurate if vetting takes less than a month on the majority of satellites; if it takes a little longer, the estimator might be a bit optimistic. Due to the nature of how vetting works, if you started 2 new nodes at the same time, vetting will take roughly twice as long as well, which could explain why your nodes have been in vetting for longer. This isn’t necessarily a problem, you just need a little more patience before you get more traffic.
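A rough sketch of that logic in Python, with placeholder numbers (the actual estimator uses measured network stats, and the function names here are mine, not the spreadsheet’s formulas):

```python
# Rough sketch of first-month ingress estimation. Assumptions, not the
# actual spreadsheet internals:
# - nodes behind one /24 subnet share the subnet's ingress evenly
# - unvetted traffic is approximated by cutting 75% of month-1 ingress
# - vetting time scales roughly linearly with the number of new nodes,
#   since simultaneous nodes split the traffic that generates audits

SUBNET_INGRESS_TB = 1.5   # hypothetical monthly ingress for one /24 subnet
VETTING_CUT = 0.75        # estimator subtracts 75% of traffic in month 1
BASE_VETTING_MONTHS = 1.0 # assumed vetting time for a single new node

def first_month_ingress_per_node(nodes: int) -> float:
    """Expected month-1 ingress (TB) for each node on the subnet."""
    share = SUBNET_INGRESS_TB / nodes  # nodes share the subnet's traffic
    return share * (1 - VETTING_CUT)   # rough vetting reduction

def estimated_vetting_months(new_nodes: int) -> float:
    """Vetting takes roughly N times as long for N simultaneous new nodes."""
    return BASE_VETTING_MONTHS * new_nodes

if __name__ == "__main__":
    for n in (1, 2, 3):
        print(f"{n} node(s): ~{first_month_ingress_per_node(n):.2f} TB each "
              f"in month 1, vetting ~{estimated_vetting_months(n):.1f} months")
```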
If you want to make sure these nodes are still working fine, just check whether they have any incoming traffic at all. If not, Alexey is probably right that you’re running an old version of the software, so make sure to check that.
Thank you for your answers
I am on version 1.19.6
How do I know when vetting is complete?
Make sure you have watchtower set up so your nodes are updated on a regular schedule. I think you should still get data on that version right now, but your nodes should already have updated. Refer to documentation.storj.io for details on how to update and set up auto-updates.
New theoretical maximum: 20TB
Over the past months I have made continuous small adjustments to the network behavior stats used in the estimator. Most importantly, we’ve seen an increase in delete behavior in recent months. Initially I didn’t want to immediately adjust the delete percentage to account for this, but it turned out to be a frequently recurring pattern that in the end deserves to be taken into account. It’s still not entirely accounted for in the numbers, as it has proven to be intermittent and has slowed down somewhat recently.
I don’t have the exact changes, but here’s a list of what I’ve seen change.
- Increased delete traffic
- Slightly decreased egress
- Offset by an increase in repair egress
- Ingress total had dropped for a while but saw a recent slight increase again
- Vetting takes a little longer on certain satellites
With all of this accounted for, the bottom line is that due to the increase in deletes, the theoretical maximum potential for nodes on a single IP has dropped significantly. Even though not all of the deletes have been incorporated into the new numbers, the maximum potential storage comes out to around 20TB. My nodes store a total of just under 18TB and have actually shown a slight drop in total stored recently.
This leads me to reiterate my by now well-known advice: start with the storage you already have. If you want to buy, don’t go crazy. It’ll take almost 2 years to fill up 8TB and 5 years to fill up 16TB, so don’t buy more than that for now. In the meantime, keep an eye on the estimator, as I update it frequently and network behavior can change at any time. I’ll be sure to let you know when I think more investment than this could become valuable.
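If you want to see where a ceiling and those fill times come from, here’s a simplified model in Python. The ingress and delete numbers are placeholders picked to land near the figures above; the actual estimator uses measured stats that change month to month, so it won’t match this toy model exactly:

```python
import math

# Illustrative model, not the spreadsheet's actual formulas. Assume roughly
# constant monthly ingress I (TB/month) and deletes removing a fixed
# fraction d of stored data each month:
#
#   s[t+1] = s[t] + I - d * s[t]
#
# Stored data then saturates at the ceiling s* = I / d.

INGRESS_TB_PER_MONTH = 0.42  # hypothetical net monthly ingress per subnet
DELETE_FRACTION = 0.021      # hypothetical monthly delete fraction

def ceiling_tb(ingress: float, delete_frac: float) -> float:
    """Steady-state storage where monthly deletes equal monthly ingress."""
    return ingress / delete_frac

def months_to_fill(target_tb: float, ingress: float, delete_frac: float) -> float:
    """Solve (I/d) * (1 - (1 - d)**t) = target for t (months)."""
    cap = ceiling_tb(ingress, delete_frac)
    if target_tb >= cap:
        return math.inf  # never reached under this model
    return math.log(1 - target_tb / cap) / math.log(1 - delete_frac)

if __name__ == "__main__":
    print(f"ceiling: ~{ceiling_tb(INGRESS_TB_PER_MONTH, DELETE_FRACTION):.0f} TB")
    for target in (8, 16):
        t = months_to_fill(target, INGRESS_TB_PER_MONTH, DELETE_FRACTION)
        print(f"{target} TB reached after ~{t / 12:.1f} years")
```

The key takeaway from the shape of this curve: the closer you get to the ceiling, the slower growth becomes, which is why 16TB takes far more than twice as long as 8TB.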
For the past 4 months my node has been lingering around 14.5 - 14.7TB, so even getting to 20TB might be a challenge, but it’s certainly a good guess of what the max might be at present.
My node has also had some negative factors affecting it, so maybe that’s partly why it hasn’t gotten higher.
Just to add to the stats: my nodes have been stuck at around 12TB total for the past 3-4 months. That seems to be my ceiling right now, even though I still have 5TB free. 20TB also seems high to me as a maximum.
You’re right, the past two months have been worse. But I wanted to present a somewhat longer-term view of what we’ve seen, and I’ve noticed improvement in the last few weeks. Hence why I didn’t account for all the deletes in the worst months. Hopefully we were dealing with a temporary spike in deletes, but if this kind of traffic persists, I will eventually adjust the stats further to reflect it.
… and we’re gonna get sadder and sadder as SNOs
I finally got tired of the constant notifications from people requesting edit access. Everyone now has edit access, but the entire sheet is protected against edits with the exception of the input fields. The upside is that anyone can now enter their own inputs to see the results without copying the sheet. The downside is… well… that anyone can enter their own inputs without copying the file. In short, someone may overwrite the inputs while you’re looking at your results. Please be respectful and give everyone the chance to have a look, or copy the sheet to your own account if someone is already using it.
Without getting into specifics on how, I know this new approach opens the file up to vandalism. I trust this community not to take advantage, but I have backups just in case.
Worked on an update today that implements some fixes and adds some new features. Now that I can see what people are filling in, I noticed that quite frequently people enter either very low numbers for network speeds or very high numbers for available storage space. So I wanted to implement some errors and warnings for when values are unrealistic. And… then I kind of fell down a rabbit hole of wanting to provide more useful information. The estimator has always taken speeds into account, but it was never all that clear whether speeds were impacting performance. So I implemented some dynamic info lines that tell you which parts might be a bottleneck and what the potential benefit of resolving that is.
Here’s an example
Making this work was actually kind of a PITA, since these recommendations have a lot of interdependencies. The amount of storage impacts whether upload is fast enough, but how much you can store is itself a function of ingress speed. Fix one thing and you may need to fix something else as well to get optimal performance. I think I got it to work in such a way that it provides all the useful hints without showing anything that isn’t relevant. Let me know what you think!
Here’s an error and some warning examples.
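For the technically curious, the checks work roughly along these lines. This is a simplified Python illustration with made-up thresholds and names, not the sheet’s actual formulas:

```python
from dataclasses import dataclass

# Toy illustration of interdependent bottleneck hints, in the spirit of the
# estimator's new info lines. All thresholds and constants are invented.

@dataclass
class Inputs:
    down_mbit: float   # connection download speed (ingress side)
    up_mbit: float     # connection upload speed (egress side)
    storage_tb: float  # storage space offered

MIN_DOWN_MBIT = 25     # hypothetical: below this, ingress suffers
UP_MBIT_PER_TB = 0.5   # hypothetical: egress need grows with data stored
MIN_STORAGE_TB = 0.5   # hypothetical minimum useful storage

def hints(i: Inputs) -> list[str]:
    out = []
    if i.storage_tb < MIN_STORAGE_TB:
        out.append("ERROR: storage too low to be useful")
    if i.down_mbit < MIN_DOWN_MBIT:
        out.append("WARNING: download speed limits ingress; "
                   "you may never fill the space you offered")
    # The interdependency: how much upload you need depends on how much
    # data you can actually store, which itself depends on ingress speed.
    effective_tb = (i.storage_tb if i.down_mbit >= MIN_DOWN_MBIT
                    else i.storage_tb / 2)
    needed_up = effective_tb * UP_MBIT_PER_TB
    if i.up_mbit < needed_up:
        out.append(f"INFO: upload may bottleneck egress once you store "
                   f"~{effective_tb:.1f} TB; ~{needed_up:.0f} Mbit up "
                   f"would remove this limit")
    return out

if __name__ == "__main__":
    for line in hints(Inputs(down_mbit=20, up_mbit=5, storage_tb=24)):
        print(line)
```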
Changelog
- Fixed a bug that resulted in an incorrect ingress calculation for slow speeds. The value was missing a multiplication by the max ingress, but this went unnoticed because at the time the max ingress was set to 1TB. It’s been fixed and now correctly shows the lower ingress (see the sketch after this changelog).
- Changed the way max potential storage is calculated to also take ingress performance on slower connections into account
- Added errors, warnings and info lines with recommendations
- Tweaked network performance values to better match recent performance
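To illustrate the first fix: roughly, the bug looked like the following (a simplified sketch with placeholder names and numbers, not the sheet’s actual cell formulas):

```python
MAX_INGRESS_TB = 1.5  # hypothetical max monthly ingress per subnet

def speed_factor(down_mbit: float, needed_mbit: float = 25.0) -> float:
    """Fraction of offered ingress a slow connection can actually accept."""
    return min(1.0, down_mbit / needed_mbit)

def ingress_buggy(down_mbit: float) -> float:
    # Old behavior: the factor was used on its own. With max ingress set
    # to 1 TB this happened to look correct, so the bug went unnoticed.
    return speed_factor(down_mbit)

def ingress_fixed(down_mbit: float) -> float:
    # The fix: scale the max ingress by the speed factor.
    return speed_factor(down_mbit) * MAX_INGRESS_TB

if __name__ == "__main__":
    for speed in (10, 25, 100):
        print(f"{speed} Mbit: buggy={ingress_buggy(speed):.2f} TB, "
              f"fixed={ingress_fixed(speed):.2f} TB")
```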
Such dedication!
As it keeps getting more and more complex, you’ll be better off coding an actual web application at some point
Nice work!
Awesome, I appreciate your work maintaining this and providing transparency into what one can expect @BrightSilence
I can actually volunteer to make a website for this together with BrightSilence; it wouldn’t be that much work since I have a background in website development. But would people actually bother to go and use it, @BrightSilence?
Quick question: would it help fill the node faster to have 4x faster up/download? Meaning going from 50/10 Mbit to 250/40 Mbit in my particular case. Do you think it’s worth the investment? I’m not convinced, as the spreadsheet doesn’t show a difference when I modify the bandwidth options.
If the sheet doesn’t change, then that is my best estimation. Upload starts to matter more the more data you store. If your upload were limiting the performance of a full node given the data you entered, you’d see a message indicating that.
These numbers are of course estimates, but if you don’t see this message, you almost certainly won’t see a significant impact from upgrading.
OK, there’s absolutely no change, but I wonder why: 250Mbit vs. 50Mbit is a huge difference in performance (ingress), as is 40 vs. 10 Mbit (egress). I understand that minimum requirements need to be met, but why doesn’t it scale upwards too?
Because at the moment, 50Mbit down is enough to achieve near-perfect success rates on transfers. Higher speeds will not get you more data, because you are already receiving all the data you could receive.
Something similar is true for upload speeds, except that how much egress you get depends on how much data you store, so at some point upload may become relevant. But you still don’t need high speeds to succeed on basically all egress transfers.
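Put differently, your effective traffic is roughly the minimum of what the network offers your subnet and what your connection can carry. A toy illustration with made-up numbers (not measured network stats):

```python
# Why upgrading from 50 to 250 Mbit down may change nothing: your node can
# only receive what the network actually sends to your subnet.

OFFERED_INGRESS_MBIT = 30  # hypothetical average ingress offered

def effective_ingress(down_mbit: float) -> float:
    return min(down_mbit, OFFERED_INGRESS_MBIT)

# Egress demand grows with stored data, so upload can become the limit later.
EGRESS_MBIT_PER_TB = 0.5   # hypothetical egress demand per TB stored

def effective_egress(up_mbit: float, stored_tb: float) -> float:
    return min(up_mbit, stored_tb * EGRESS_MBIT_PER_TB)

if __name__ == "__main__":
    for down in (50, 250):
        print(f"{down} Mbit down -> {effective_ingress(down):.0f} Mbit used")
    for up in (10, 40):
        print(f"{up} Mbit up, 18 TB stored -> "
              f"{effective_egress(up, 18):.0f} Mbit used")
```

With these placeholder numbers, both connection tiers end up moving the same amount of data, which is exactly why the spreadsheet output doesn’t change.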
This was of course intentionally designed this way. Individual node speeds don’t matter as much, since the aggregated speed is what the customer benefits from. That way everyone can participate, which is great for decentralization.