Bandwidth parameters


I’m new here and I’m beginning to host multiple nodes.
I wanted to know if bandwidth rate limits could be implemented as parameters, to facilitate QoS among different applications?
If there are none, are they planned at some point?
Thanks in advance

From what I understand, limiting the bandwidth is against the ToS now: there is no way to set it, the minimum is 2TB, and I'm pretty sure it's not planned to be added back, because it was once part of the software but was removed later. If you're talking about the speed of the storage nodes, you can limit them with QoS. It tells you what the requirements are when you sign up.

Hardware Requirements

  • 1 Processor Core
  • Minimum 550GB of available disk space
  • Minimum of 2TB of available bandwidth a month
  • Minimum upstream bandwidth of 5 Mbps
  • Minimum download bandwidth of 25 Mbps
  • Keep your node online 24/7
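As a quick back-of-the-envelope check (my own arithmetic, not an official figure), the 5 Mbps upstream minimum already bounds how much egress a node can physically serve in a month:

```python
# Back-of-the-envelope: how much egress can 5 Mbps upstream serve in a month?
mbps = 5
seconds_per_month = 30 * 24 * 3600                  # 2,592,000 s
bits_per_month = mbps * 1_000_000 * seconds_per_month
terabytes = bits_per_month / 8 / 1e12               # decimal TB

print(f"{terabytes:.2f} TB/month")                  # about 1.62 TB at best
```

So even a node saturating the minimum uplink around the clock stays under the 2TB monthly bandwidth figure.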

I was talking about the speed, not the total amount of data usage, which I totally understand.
And it would be better if it were set in the app and not enforced by network equipment, no?
The goal is to have a lot of nodes so the data can be duplicated, no? So if you make managing that easier, it would be adaptable to more customers, or in this case more providers.

If you're running more nodes on the same network/IP, they all share the amount of data and split the traffic anyway. There wouldn't be any duplicate data within the same network; they would all hold different files. So it would be kind of pointless to limit the bandwidth per node if you're planning on running a lot of them on the same IP.


There is no duplication, not even of pieces of the same file; they are all unique.
The goal is to have transfer speeds as fast as possible and to have nodes distributed across the globe.


Guys, I understand your points of view, but I hope (though I could be wrong) that the whole node system works like a big RAID array with a lot of duplication, because my god... it would be kind of a disaster if data got corrupted on one of the nodes, which could happen very easily with HDDs failing.

And I don't have a big internet connection, guys. So please don't treat people as if they should have fiber optics just to join; that would be totally counterproductive in terms of participation, and so for the whole point of this project, in my opinion.

So yes, I would like a speed limiter in the app itself, because people need a good ping nowadays for everything. The upload shouldn't use the maximum speed and should be limited for that specific reason: the more upload bandwidth is used, the higher your ping.
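To make the ping point concrete, here is a rough queueing calculation (illustrative numbers of my own, not measurements): on a 5 Mbps uplink, every packet queued ahead of yours adds its serialization delay to your round trip.

```python
# Why a saturated uplink hurts ping: every packet queued in the modem's
# buffer ahead of yours must be serialized first (illustrative numbers).
uplink_bps = 5_000_000          # 5 Mbps upstream
packet_bytes = 1500             # a typical MTU-sized packet

per_packet_ms = packet_bytes * 8 / uplink_bps * 1000
print(f"{per_packet_ms:.1f} ms per queued packet")     # 2.4 ms

# A modest 64-packet buffer kept full by a bulk uploader:
queue_ms = 64 * per_packet_ms
print(f"{queue_ms:.0f} ms added to every round trip")  # ~154 ms
```

That is the bufferbloat effect: the uploader itself doesn't need low latency, but everything else behind the same buffer pays for it.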

And to get speeds as fast as possible, the team should rely on duplication of data, so that several nodes hold the same data and actually deliver the desired speed. That's another reason why each host around the world shouldn't get unique data.

But again, people need their internet connection available for things other than this project too. And that's why I still think a limiter should be implemented, to avoid putting an extra toll on network equipment.

Bandwidth (speed) limiting is available in pretty much every router on the market today. It doesn't make sense for Storj to reinvent a feature that almost every end user already has access to.

With respect to duplication, you should check out this blog post:


Take a look at the requirements before signing up; if you don't have the minimum speed, you shouldn't proceed to sign up.
Also, if you look at how Storj works, it doesn't store the same files, or pieces of files, in one place, so you're still likely to be able to get all the data back. If, say, your network held the main files and your internet went down, no one would be able to pull their files back to completion, because too many pieces would be in the same place.
Anyone with the minimum requirements is welcome to join, but if you limit the bandwidth below that minimum, you shouldn't be running a node in the first place.

Then you don't understand my point, guys, and you don't know, or don't want to consider, that people need their latency.
It's absolutely not about scaling down the requirements. The requirements can be met.
But don't come telling me that the people running your Storj process on a Raspberry Pi are getting datacenter-range speeds... they probably have a PS4 and are playing Overwatch or Warzone or whatever else, so they need their latency.
If you don't take that into account, well, shame on you, because it's a basic thing to keep in mind when developing this kind of app or process, if you want as many people as possible to join. Or are we discussing making this project elite-only here?

Because let's stay honest: your project here can only be hosted at home. No company whatsoever, unless it has a big interest in it, will allow several terabytes to be stored on hardware it bought.
And you obviously can't host this kind of app in a datacenter, because of what each terabyte costs there.
So obviously you want as many people as possible to join with their own home resources.

Besides this debate, I will look into the blog post, but it doesn't answer the big problem: what happens when several bad blocks appear, and unfortunately they're in your set of files? I don't see any technical response to that.

And about the network equipment: it's not because they all have this feature now that it actually works well, or that their processing power is optimized enough to handle that kind of limiting.
And I'm sorry, guys, but this is not a whole new feature to implement. Syncthing is fully open source, and complete libraries for this purpose could easily be reused. So don't be so melodramatic about having to reinvent the wheel; it's already available, freely and open source.
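For the sake of argument, here is a minimal token-bucket sketch of what such an in-app limiter could look like. This is my own illustration, not Storj's or Syncthing's actual code:

```python
import time

class TokenBucket:
    """Throttle a sender to roughly rate_bps bytes per second."""

    def __init__(self, rate_bps, burst=None):
        self.rate = float(rate_bps)                  # refill rate, bytes/s
        self.capacity = burst if burst is not None else self.rate
        self.tokens = self.capacity                  # start with a full bucket
        self.last = time.monotonic()

    def consume(self, nbytes):
        """Account for nbytes sent; return seconds the caller should sleep."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        self.tokens -= nbytes
        if self.tokens < 0:
            return -self.tokens / self.rate          # time for deficit to refill
        return 0.0

# Usage sketch: cap uploads at ~625 KB/s (= 5 Mbps) around each send:
#     delay = bucket.consume(len(chunk))
#     if delay: time.sleep(delay)
#     sock.send(chunk)
bucket = TokenBucket(rate_bps=625_000)
```

The whole mechanism is a couple of counters updated before each send, which is why rate limiters like this add negligible overhead.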

Every single cloud data storage project offers this feature, and with reason. Every team that has worked on a project like this has implemented it, yet you don't take it into account and don't see it as useful.

So I don't want to scale down the requirements! But it would be nice to be able to limit the bandwidth to the 5 Mbps that is the smallest requirement.

And by the way, guys, please learn to listen, and don't send people off to read the requirements again, etc.; I'm fully aware of the available docs. Just be a little more considerate and a bit more respectful. Thanks in advance.

If you're aware of the requirements, why bother coming to lecture us on what Storj needs and should do? You're pretty much saying Storj needs to change everything for people who don't really need to host, but who want the ability to disable it whenever they want so they can play video games...
You can have an opinion about it, but what you're talking about just isn't what Storj built their entire network around.


Then you didn't read me at all, which is proof that you aren't even trying to be considerate, or understanding, or open-minded.

Did I say that Storj needs to change anything? No, since I'm agreeing with the requirements.
I was surprised there is no duplication policy, but that's another problem.

What I asked is whether a limiter could be implemented so people can slide their speed up and down according to their latency needs.
And to give that more weight, I made the point that duplication, if well implemented, could speed things up, if the problem is speed.

Well, if you had read, you would know that you can use your router to limit the bandwidth.

Could you please explain what you mean by this? If you mean what happens when several pieces of a file are corrupted, this is not an issue, because of the repair mechanisms built into the network. Since launch, the Storj network has not lost a single byte of data. That is 100% reliability, and it is achieved through erasure coding; I would never be able to explain it as well as the link.
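To illustrate the idea with a toy scheme (a single XOR parity piece, not Storj's actual Reed-Solomon parameters), erasure coding lets a missing piece be rebuilt from the others without storing any duplicate:

```python
from functools import reduce

def xor(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data, k=4):
    """Split data into k equal pieces and append one XOR parity piece."""
    padded = data.ljust(-(-len(data) // k) * k, b"\0")   # pad to multiple of k
    size = len(padded) // k
    pieces = [padded[i * size:(i + 1) * size] for i in range(k)]
    return pieces + [reduce(xor, pieces)]                # k + 1 pieces total

def recover(pieces):
    """Rebuild the single missing piece (marked None) from the survivors."""
    lost = pieces.index(None)
    pieces[lost] = reduce(xor, [p for p in pieces if p is not None])
    return pieces

stored = encode(b"hello storj network!")   # 5 pieces, stored on 5 nodes
stored[2] = None                           # one node's disk fails
repaired = recover(stored)
assert b"".join(repaired[:4]) == b"hello storj network!"
```

Real Reed-Solomon codes generalize this so that any sufficient subset of pieces can rebuild the file, which is why losing several nodes still loses no data.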

I’m not sure if there is a language barrier here, but I feel like everyone in this thread has been respectful and considerate.

Is there a reason you can’t use your router to limit bandwidth?

I've read it, and you didn't read my point: it's not because it's implemented in routers that they actually have the processing power to make it work properly.

Can you explain to me what processing power it takes to limit the bandwidth on your router? If routers have the setting, they were made to be able to do it, so I don't understand why you can't just use the router.
There is no need for a setting in the program to limit the bandwidth when every single modern router is made to do it.

@baker I'm going to respond to you in a moment, but @deathlessdd is being so aggressive that apparently I need to answer him first.

But it will give you an answer too.
I don't know if you have actually tried consumer routers, but I can say that plenty of them don't handle QoS well and don't prioritize traffic correctly, mostly because of their weak processing power and, I guess, their lack of buffer memory.
Asus, Netgear, TP-Link: oh yes, they all have this feature. Does it work well? Really not. Maybe on high-end models, but certainly not on mid-range ones.
And that's not even considering what we call 'boxes' here in Europe, the ISP-provided routers, which are even worse. They too implement QoS.
So what's the result? You get packets dropping all over your network.

So again, good for you if you have fiber, but should we take the Eurostat spreadsheets on internet speeds in Europe and look at the average? Plenty of countries can actually meet the 5 Mbps upload, but if the Storj daemon decides to use it all, without a limiter they will definitely see heavy packet drops and higher latencies (these are simple, well-known consequences in network management).
And Storj is not alone in the world in needing bandwidth. There is also the big, expanding market of remote gaming, whether Xbox cloud gaming or Stadia, etc. All of those need good latency to work. And just try to actually use QoS on the ISP boxes in France, Belgium, the Netherlands, or Switzerland; I can assure anybody of the laughable results you will get. And again, guys, I'm sorry, but you have people wanting to put Storj on a Raspberry Pi 4, so please don't act as if everyone should have fiber optics and a datacenter in their basement to join. Again, please, a bit of open-mindedness; address the problem.
If you have never experienced problems with ISP boxes or even mid-range consumer routers, I can give you models/references/brands so we can set up a test lab. I can even send samples.

I'm getting the feeling you're a gamer, maybe? So I just want to say there are plenty of programs that can limit bandwidth per application. As I said before, there is no need to add another thing to the storagenode program and create more overhead. I personally run 2 nodes on an RPi4, and I can't say I have ever had latency issues because of it. I do have a personal Asus router for my second network, which I do in fact use to limit bandwidth for one of my desktops, and it works perfectly fine; I've never had any kind of issue. The router never even hits 100% CPU or 100% RAM.

So I'm still confused about why we're even having this conversation; you're making this a bigger issue than it actually is. You can (a) sign up for Storj, (b) not sign up for it, or (c) create a voting post for something to add to the storagenode software, creating even more overhead on the systems on top of running a storagenode.

I probably won't vote for it, because there are a lot more important things that need to be updated before considering this, and I don't have any issues using my router to limit bandwidth.


@deathlessdd I'm actually not a gamer, but I use Shadow tech, for example, for a lot of work, and latency is crucial.
So it would actually be bad programming to let a piece of software run at full upload speed at all times, or even for hours, to get what? 60-90 ms instead of the 9-17 ms you should get? Even the speed of DNS queries is crucial; just ask an actual network architect or web developer.

I can give you plenty of Asus references where you get dropped packets just because you've enabled QoS, and the Broadcom CPUs, if I remember correctly, manage it badly. The AC66 and AC68: I can demonstrate that easily.

And again with the melodrama about overhead? Seriously? Jeez, is Syncthing burdened with overhead because of its speed limiter? To my knowledge, the thing barely uses 16 MB of RAM.

And sorry, mate, but of course with your little RPi4 you will never reach the breaking point of your internet line... I'm hosting right now on Ryzen and Intel servers, which are much bigger, and if the daemon wants to use it all, well, that would be bad.
But again, thanks for your input.