What happens when bandwidth limit is reached?

I currently have my bandwidth limit set to 0.6TB. I think the limit will be reached in about two days if Stefan keeps going. Will all requests stop at that point? Are there any exceptions for emergency data repair or audits?

Also, does the node tally all data traffic, or just traffic from transfers that were successful (i.e. did not lose the long-tail race against other nodes)?

How can you use 0.6TB as bandwidth if the minimum requirement is set at 2TB?

https://documentation.storj.io/before-you-begin/prerequisites#hardware-requirements-recommended

As of now, if the bandwidth limit is reached the node will be disqualified.

How? It just let me. I suppose I’ll have to consider increasing the limit to 2TB. Strange that it allows a setting if it plans on disqualifying me for it later. I’ve tried low limits for hard drive storage and it refuses to even start the node if it’s not set to at least 500GB, but it didn’t seem to have a problem with my bandwidth setting.

Because it’s beta and the bandwidth limit isn’t checked yet, IMO, but better to fix it now before you get disqualified.

You can follow this:

docker stop -t 300 storagenode
docker rm storagenode

Edit the docker run command so the bandwidth is set to the 2TB minimum, then start your node.
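For reference, here’s a rough sketch of what the full run command could look like, based on the standard setup instructions at the time (the wallet, email, address, and mount paths are placeholders you’d replace with your own values, and the image tag may differ for your setup; only the BANDWIDTH value needs to change here):

docker run -d --restart unless-stopped -p 28967:28967 \
    -e WALLET="0xYOUR_WALLET_ADDRESS" \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.external.address:28967" \
    -e BANDWIDTH="2TB" \
    -e STORAGE="2TB" \
    --mount type=bind,source="<identity-dir>",destination=/app/identity \
    --mount type=bind,source="<storage-dir>",destination=/app/config \
    --name storagenode storjlabs/storagenode:beta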

I’m now set at 2.0 TB. In theory, what happens if a customer on one satellite uses up all my bandwidth? Are the customers on the other satellites out of luck, or is the bandwidth setting per satellite?

The 2TB definitely used to be checked. A friend of mine tried setting it to 0.5TB when he started his node but it refused to run that way. He ended up setting it to 2TB as well.

You have 384 B left on disk? :slight_smile:


543GB of traffic this month?
How did you do that? ^^

What? That can’t be right. If the bandwidth limit is reached, the node should just stop getting new data. Of course the customer should still be able to download their data. What else would be the purpose of setting the limit? If you’re right, then the only way to avoid disqualification would be setting the limit to a value higher than what your line could possibly serve in a month. But then, why set a limit at all?

Wow, I think you took that out of context. In the link they were talking about Graceful Exit. That’s a completely different thing.


I hadn’t considered this before, but nodes with a bandwidth limit will effectively be offline when the bandwidth limit is reached: customers won’t be able to send data to or receive data from the node. The 99.3% uptime requirement is very strict, but this bandwidth setting basically invalidates that. You can be “offline” without affecting reputation. If many SNOs decide to set bandwidth limits, would file availability go down towards the end of the month as bandwidth is used up by more nodes?

You will still get audit checks, so you won’t count as offline. These audits are ~1KB.

Yeah, that’s why I put “offline” in quotes. My point was that even though the node is not offline, the pieces stored on that node are essentially unavailable. If many nodes set bandwidth limits that get reached during the month, files may become inaccessible.

It’s not possible. There are measures in place: once a certain threshold is reached, repair is triggered. A file is sent to 80 nodes, and only 29 of those pieces are needed to rebuild it.

Reference
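To put rough numbers on that (using the 80/29 figures above; I don’t know the exact repair threshold, only that it sits somewhere in between):

pieces stored per segment: n = 80
pieces needed to rebuild: k = 29
pieces that can be unavailable: n - k = 80 - 29 = 51

So even if 51 of the 80 nodes holding pieces were unreachable at once, the file could still be rebuilt, and repair would kick in well before it gets that close.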

I think you’re making a leap there, assuming repair would be triggered if a node reaches its bandwidth limit. I think it’s a valid question. Though it could easily be avoided if uploads stopped when, say, 75% of the bandwidth limit is reached.

I’m not saying repair would be triggered. Just that, say, it’s almost the end of the month, and 50 of the 80 nodes storing pieces of a file have reached their bandwidth limit. That could cause issues. I don’t think this is a realistic scenario though, since hopefully most users wouldn’t have bandwidth caps that low.

EDIT: Sorry, didn’t notice nerdatnetwork’s comment


Guess who I was responding to :wink: