Node stuck at 108 US1 audits - Rate limiting or connectivity issue? (v1.141.2)

Hello,

I’m running a Storj node (v1.141.2) on Ubuntu and have encountered an issue where my audit progression has completely stalled. I’d appreciate help determining if this is normal rate limiting or a connectivity problem.

Node Status (Current):

  • Version: v1.141.2
  • Runtime: ~13 days (312 hours)
  • Dashboard Status: Online :white_check_mark:, Audit Score: 100%, Suspension Score: 100%, Online Score: 100%
  • Last Contact: 2 minutes ago
  • Bandwidth Used This Month: 8.6 GB
  • Node ID: 12TFts8NwkZLAJq2eKdYiczWGqJuPLMCUaUFjESxtmSBT1Uc7

Audit Progression by Satellite:

Satellite                            Audits    Status
US1 (us1.storj.io)                   108/100   :white_check_mark: Target reached
EU1 (eu1.storj.io)                    33/100   :hourglass_not_done: In progress
AP1 (ap1.storj.io)                     5/100   :snail: Very slow
Salt Lake (saltlake.tardigrade.io)     0/100   :cross_mark: No audits yet

The Problem:

  • Sunday Nov 30 (5 PM): US1=108, EU1=33, AP1=5, Salt Lake=0
  • Monday Dec 1 (11:53 PM): US1=108, EU1=33, AP1=5, Salt Lake=0 (NO CHANGE for 30+ hours)

Progression Pattern Before Stall:

  • US1: ~10.8 audits/day (108 in 10 days)
  • EU1: ~3.3 audits/day (started rapidly, then slowed)
  • AP1: ~0.5 audits/day
  • Salt Lake: 0 audits from day 1

My Questions:

  1. Is this normal rate limiting? Does Storj intentionally pause audits after a node receives ~100 audits on one satellite while waiting for other satellites to reach 100?

  2. Must I wait for ALL satellites to reach 100 audits to be marked “VETTED”? Or can a node be “VETTED” on a per-satellite basis?

  3. Why has Salt Lake sent 0 audits in 13 days? Is this normal for European nodes, or is there a configuration issue?

  4. What’s the expected timeline for a node to reach full vetting with all 4 satellites at 100 audits?

  5. Should I investigate connectivity or is this stall expected behavior?

Additional Context:

  • I have 18 TB allocated for Storj across 2 nodes (currently only one 4 TB node active for this test)
  • Port 28967 is properly forwarded (checked as sketched below)
  • Node is receiving ingress traffic (8.6 GB this month)
  • All audit/suspension/online scores are at 100%
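
For reference, here's roughly how I checked the port and pulled these numbers from the node's local dashboard API (a sketch using my ports; requires curl and jq, and the exact JSON fields can vary between storagenode versions):

```bash
# Is anything listening on the forwarded node port?
ss -tlnp | grep 28967

# Node summary from the local dashboard API (14002 = this node's dashboard port)
curl -s http://localhost:14002/api/sno | jq .

# Per-satellite details, including audit/suspension/online scores
curl -s http://localhost:14002/api/sno/satellites | jq .
```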

I appreciate any insight into whether this is normal or requires troubleshooting!

Vetting used to take “too long”. Then they changed it and it was “too fast”. Now they’re fiddling with it again… and I don’t know what’s going on.

But I wouldn’t even bother to look into it until after at least a month. And Salt Lake is the dev satellite… so it isn’t important.

Hello @sebulbasdm,
Welcome to the forum!

Yes, vetting happens individually on each satellite. While the node is unvetted it should receive 1%-3% of the satellite’s customer uploads until it gets vetted. For one node in the same /24 subnet of public IPs it should take at least a month (or more).
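
To put rough numbers on that (an illustration only, assuming audits arrive in proportion to each node’s share of ingress; the ~30-day baseline is hypothetical):

```bash
# Illustration only: n nodes sharing one /24 split the same ingress,
# so each collects audits roughly n times slower than a lone node would.
n=5; days_alone=30
echo $((n * days_alone))   # ~150 days for each of 5 nodes to reach 100 audits
```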

Thank you so much @Roxor and @Alexey for your responses!
I now understand that I probably designed my initial setup incorrectly. I think this is the root cause of the problem.
What I did initially (mistake):
I launched 5 Storj nodes simultaneously ~14 days ago (2 weeks):
• Aspire machine (same public IP): 2 nodes (ports 14002 + 14003)
• P5K machine (same public IP): 3 nodes (ports 14004 + 14005 + 14006)
Total: 18 TB allocated across the 5 nodes
The problem I caused:
All nodes shared the same public IP address (Proximus Belgium ISP, /24 subnet). I believe this divided the audit traffic among the 5 nodes, which slowed down the vetting of ALL nodes.
Result after 14 days:
• Node 14002: US1=108, EU1=33, AP1=5, Salt Lake=0 (the best)
• Node 14003: US1=18, EU1=4, AP1=0, Salt Lake=0 (very slow)
• Nodes 14004/14005/14006: Even slower
What I did to correct this (yesterday):
I stopped 4 out of 5 nodes and kept only node 14002 active (the one with the most audits).
I also kept only port 28967 open (for node 14002).
My questions:

  1. Did I make the right decision by stopping the 4 other nodes to let node 14002 vet faster?
  2. How long should I wait before restarting the other nodes? Should I wait until node 14002 is “VETTED” on all satellites (maybe 1-2 months according to your answers)?
  3. Did launching 5 nodes simultaneously on the same IP “waste” these first 14 days, or do the accumulated audits still count toward each node’s individual vetting?
  4. When I restart the other nodes later, should I bring them back online ONE BY ONE (with a few weeks between each) or can I restart all of them at once after node 14002 is fully vetted?
Additional context:

  • I don’t pay for electricity, so keeping the machines running is not an issue
  • I also have 5 TB on Sia which is working well
  • My goal: maximize revenue with the 18 TB available

Thank you again for your patience and advice! :folded_hands:

What you did wasn’t really wrong, it was just spreading any traffic across a few more nodes. Given that many people run nodes for years: a couple weeks of anything doesn’t hurt.

However, now that the Storj network has heard of all five nodes: it’s going to be trying to track them to make sure they’re alive… and if they’re offline for too long eventually they get disqualified (and they’ll never be sent data again: you’d have to reinstall them). Maybe none of them really have any data yet, so you don’t care. However it can be worth leaving the extra ones running (with minimal space) just so they complete vetting and the withholding period.

To do that, leave your first one running as normal, and adjust the space given to the other four before starting them too. In your docker-compose.yml it would look something like this:

  • STORAGE=600GB

(Or adjust the size in your Docker CLI command. I think the minimum space without altering defaults is actually 550GB)
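
As a concrete sketch of the CLI form (placeholders only: the wallet, e-mail, paths, and DDNS name below are made up, and each node needs its own forwarded external port):

```bash
# A capped second node; adjust every placeholder to your own setup.
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28968:28967/tcp -p 28968:28967/udp \
  -p 127.0.0.1:14003:14002 \
  -e WALLET="0xYOUR_WALLET_ADDRESS" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.example.com:28968" \
  -e STORAGE="600GB" \
  --mount type=bind,source=/mnt/node2/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/node2/data,destination=/app/config \
  --name storagenode2 storjlabs/storagenode:latest
# 28968 = this node's own external port; 14003 = its dashboard;
# STORAGE=600GB is the low cap discussed above.
```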

Those four nodes will hit a low cap (like 600GB) fairly quickly… then go idle-but-online… but they’ll still pass vetting and withholding. And your first node will get all the traffic and grow faster. Some SNOs do that intentionally (leave one or more small extra nodes running)… so when they DO need them (like if a disk fills)… they already have one on-the-shelf-and-ready-to-go. Just uncap its space and it will start to grow again!

If you only have one IP… it’s best to have only one node growing as fast as it can (and any extra nodes size-capped so they’re just completing vetting/withholding). Your four extra nodes are probably still tiny so you won’t miss them by deleting them - your choice!

(But to be clear: it’s true if you only have one IP (one /24)… adding more nodes won’t help fill your space quicker: because you’re correct that Storj will divide uploads across them all)

There is nothing to gain from finishing vetting. Unvetted nodes still get more ingress than vetted nodes.

Thank you @alpharabbit @Roxor for that insight!

So if unvetted nodes can receive ingress similar to vetted nodes, I’m reconsidering my strategy:

Instead of having 1 huge node (24TB) trying to get fully vetted, I could run 2-3 medium nodes (8-12TB each) on the same IP.

The trade-off:

  • 1 big node = faster vetting, but ingress still limited to 1 IP

  • 2-3 nodes = slower vetting but more resilience + possibly similar revenue if unvetted nodes get good ingress

Question 1: Should I restart my Node2 (600GB) alongside Node1 (4TB)? Or is there a downside to having multiple unvetted nodes on the same IP?


One last important question: At what point will my node start receiving traffic and generating revenue?

Currently:

  • Node1 (14002, 4 TB): US1 vetted :white_check_mark:, EU1 33/100, AP1 5/100, Salt Lake 0/100

  • Node2 (14003, 600 GB): Not yet restarted

Question 2: Can an unvetted node (like my Node2 currently) already receive traffic and generate revenue? Or must I wait until it is at least partially vetted?

Question 3: Can my Node1 with US1 vetted but EU1/AP1/Salt Lake incomplete already earn money? Or does it wait until all satellites reach 100?

Question 4: How long after launch before seeing first revenue? (days? weeks?)

Question 5: If I restart Node2 alongside Node1 on the same IP, will revenue be split between them? Or does each node receive its allocation independently?

Thank you for your help! :folded_hands:

Your nodes will begin storing data (that you’re paid for) from the moment you start them: even unvetted. Payouts are sent to your ETH address within the first few days of the month (usually around the 3rd or 4th). So if you started your nodes a couple weeks ago, you’ll probably see you earned something in the next couple of days (and there’s usually a forum post when payouts go out).

However… you do need to earn enough value in tokens to greatly exceed the ETH fees required to send them. If you haven’t earned enough… it will keep adding up until the next month… you can also see this in your Node UI. And for the first nine months nodes have some held-back earnings.

Basically you don’t reach 100% monthly payouts until you’re past those nine months, BUT you do get the held-back earnings returned if you run long enough to graceful-exit when you leave.
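
To put a number on the held-back part (my illustration; the published schedule holds 75% in months 1-3, 50% in months 4-6, 25% in months 7-9, and 0% after that, but check the current payout terms):

```bash
# Illustration only: a node earning a flat $10/month under that schedule.
# Held = 3 months at 75% + 3 months at 50% + 3 months at 25% of $10.
echo "scale=2; 3*10*0.75 + 3*10*0.50 + 3*10*0.25" | bc   # 45.00 held over months 1-9
```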

Revenue is mostly based on how much data you store… and although Storj will try to spread it over many-nodes-behind-the-same-IP… there’s no guarantee: so nodes sharing an IP will still grow at different rates.

It already does, vetting will change nothing.

Immediately. But keep in mind, with Storj it’s:

  • Set up the node
  • Forget about it for a year
  • Oh look! Ten bucks!

Yes. But unvetted nodes receive 5% of traffic, the rest goes to vetted nodes, IIRC. Makes no difference when the node is young.

Satellites are independent. Salt Lake is a test satellite. There are no tests running now, so your node is not going to get vetted there. No traffic.

Months. You will get paid once the earned amount is significantly higher than the Ethereum gas fee. (Don’t bother with L2 solutions, btw.)

For ingress purposes, nodes on the same /24 are treated as one node. For egress purposes, they’re separate.
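
If you want to confirm which public IP (and therefore which /24) each of your machines is behind, a quick check from each host (ipify is just one example service):

```bash
# Nodes whose public IPv4 addresses match in the first three octets share a /24.
curl -s https://api.ipify.org; echo
```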

Thanks for your help ;o)