Option for SNO node to opt out of DEV load data

So this is probably going to be downvoted if that’s a thing, but I thought I would post following the past 48 hours of data ingress to my node in Feb 2021.

I appreciate it won’t be on the list anytime soon, but could we have an opt-out of dev data loads added to the storagenode config parameters, so that when test data is running the node isn’t a major part of it - maybe prioritise that traffic to nodes older than 6 months, so that they are proven stable.

My reason is that the dev load, from what were GitBackup.org workers and now seems to be coming from gitbackupreplacement00.loadtest.dev.storj.io through gitbackupreplacement08.loadtest.dev.storj.io, is:

  • Using up 99% of the allocated network bandwidth for my storagenode
  • Causing piece downloads to fail as the link is saturated with dev load - some of this looks like valid customer IPs and I’m getting dropped on egress due to slow response (it won’t be an issue with UDP, but TCP ACKs and retransmissions make my firewall sad)
  • Causing the SMR drive in the storagenode to cover its 2-year duty cycle in a matter of months - the write rate isn’t sustainable and my node will fail. I liken it to a targeted denial of service on smaller nodes (I know that’s not the intention, but that’s the effect I am seeing, especially from the gitbackupreplacement traffic - maybe Dev could take a look)

I know the TOS, no need to quote them :rofl: so I appreciate it’s all my problem - but the level of traffic was very aggressive 24 hours ago, to the point that my firewall decided it was a denial of service and blocked the us-central-1 satellite - again, my problem :stuck_out_tongue: but I only just noticed when my online scores were dropping and I found that satellite blocked.

I’m just going to say that the impression from the forum is to use old kit, not buy anything, and that SNOs don’t need redundancy (there’s a huge split in opinions) - but the data loads expect enterprise-class setups, or expect SNO nodes to die within 15 months :frowning: Reading the forum, it’s creating a three-tiered SNO structure: the pro hosters, the enthusiasts, then the hobbyists :heart:

Having the opt-out parameter would make it easier for SNOs to choose how their node is utilised - I’m sure many will be very grateful for the GBs of data load, but some like myself would sooner see a few GBs of real data and their hard drive living for more than a year :slight_smile:

Edit: Please understand I’m not complaining - I know I can leave the project tomorrow. I wanted to highlight the issue, as it might impact other small SNOs.

Thanks

CP

If your downloads fail because your link is saturated, your internet is too slow…

The write load is rather boringly low… it’s at most 6 TB of ingress per year, and every HDD will survive that (typically, on average). Sure, your SMR drive might have had some trouble with the past 24 hours of ingress and deletes, which is a problem specific to SMR drives…

Now that’s kind of funny… what kind of firewall are you using? Haven’t heard of anyone with this problem yet, might be good to know more.

You can opt out of satellites (or get yourself disqualified on them :rofl:). Officially only saltlake, europe-north and us2 are test satellites. The rest are (supposedly) customer-only satellites with (supposedly) no test data traffic.
So, looking at the past 24 hours, there’s nothing that should have changed for you (again: supposedly, because personally I don’t believe there is no test data traffic on us-central).

A different question however: Why would you want to opt out of test data traffic? Your node would get almost no traffic and probably yield earnings after 2 years that the average node could yield after a few months.

3 Likes

For statistical measurements, making test traffic behave differently from actual traffic is a big problem: Storj would no longer be able to measure the actual performance of the network, and having correct measurements is crucially important. Imagine a situation where only good nodes opted in to test traffic: real traffic would have a much bigger chance of being directed to slow nodes.

Yes, indeed so. But there must be some lower limit, and you’re apparently hitting it. You wouldn’t host a node on a 486…

2 Likes

Your only problem is that the SMR drive is not keeping up. Storj definitely doesn’t require enterprise-class setups, but SMR drives are exceptionally bad at sustained writes. That doesn’t mean consumer drives without SMR wouldn’t work - they would work perfectly fine. Your problem is not that you have a consumer HDD, but that you have a consumer HDD made for mostly static storage.

Now here’s where you get things wrong in your suggestion:

  1. GitBackup is a legitimate customer of the Storj network. It provides an actual service to end users. Just because it’s a Storj project doesn’t mean this is garbage data. It’s not.
  2. If you think Tardigrade customers aren’t going to do load tests of their own to test whether the network can handle their loads, you’re dreaming.
  3. Peaks in upload don’t just happen due to load tests. They can happen when onboarding large customers and running large backups.
  4. Even if such a switch were implemented, it would require anyone doing load testing to mark that traffic as load testing. But doing so would mean it is no longer a valid production load test, so nobody would actually want to do that.

So, instead of suggesting a patch that doesn’t work for any of the scenarios you’re trying to prevent, I suggest you look into how you can make your node cope with the load. An option would be to add an additional node on the same IP, on a different HDD. This would cut the load on the SMR drive in half, and most SMR drives are able to keep up if they only have to deal with half the load. No guarantees though; you may need to add a third one.
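If you happen to run your node in Docker, a second instance on the same host could look roughly like the sketch below. The wallet, email, address, storage size, paths and ports are placeholders, the second node needs its own generated and authorized identity, and the exact flags should be checked against the official setup docs.

```
# Rough sketch only: a second storagenode container on the same host,
# pointed at a different HDD and published on a different external port.
# All values and paths below are placeholders.
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28968:28967/tcp \
  -e WALLET="0xYOUR_WALLET" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.external.address:28968" \
  -e STORAGE="2TB" \
  --mount type=bind,source=/path/to/identity2,destination=/app/identity \
  --mount type=bind,source=/mnt/second-hdd/storagenode,destination=/app/config \
  --name storagenode2 \
  storjlabs/storagenode:latest
```

Because both nodes share one external IP, the satellites treat them as a single location when selecting nodes for uploads, so the total ingress stays roughly the same while each disk only has to absorb part of it.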

As for the problem you’re running into: search for SMR on this forum and you will find this particular issue is already widely discussed. It’s not something Storj Labs can fix. You’ll have to do something about it yourself.

If you have at least the minimum listed, your connection is not your problem. What kind of speeds are we talking about here?

As long as you are using SMR, resist this temptation forever. SMR and ZFS don’t mix. Google it.

2 Likes

We are just trying to help you and discuss your proposal. What’s wrong with that?

1 Like

Please understand: just like you, others have opinions, and they are just trying to help you understand whether what you are asking for is possible or not - if it’s possible, how you can do it, and if it’s not, what you should do instead.

Just because you might not like their reply doesn’t mean they are arguing with you or want you to stay quiet. This is not their intention at all. Everyone replying to you wants to help you - that’s the great thing about our community. They could easily have kept quiet, but they want to help and share their knowledge and experience with you so YOU can make the right decision.

Again, speak your mind and express how you feel, bad or good, but with respect. Make suggestions and leave it up to Storj whether to accept or reject them.

3 Likes

Your topic is helpful. If you have these questions/suggestions, others might too. It’s good to keep this around in case someone else searches for it. Please know that we weren’t trying to attack you, but simply to point out why it’s not feasible.
I attempted to at least point out something you could do to fix it. It’s just that, unfortunately, some SMR drives simply won’t work on their own. There is not much you can do about that.

Isn’t it possible to decrease the max number of connections to help with SMR drives?
I think the best way to avoid getting test traffic is to gracefully exit the official test satellites, as @kevinkn said (a rough sketch of the commands follows below).
But I do get the point: I have a feeling that part of the test data is used to keep SNOs happy while waiting for customers to join the network. (It’s just a feeling!)
I get that some people want to store “real” data (even though test data from GitHub is also “useful data”). However, I don’t think the developers should spend time on that; it’s way easier to just GE or get disqualified on the test satellites.
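For reference, graceful exit is started from the storagenode CLI itself. The sketch below follows the documented Docker workflow, assuming a container named "storagenode" with the default paths; verify the exact command names and the eligibility rules (such as minimum node age) against the official docs. On the connection-limit question: older node versions exposed a storage2.max-concurrent-requests option in config.yaml, but as noted further down the thread it is no longer a supported knob, so firewall-level limits may be more reliable.

```
# Sketch: start a graceful exit (interactive satellite selection),
# assuming a Docker container named "storagenode" with default paths.
docker exec -it storagenode /app/storagenode exit-satellite \
  --config-dir /app/config --identity-dir /app/identity

# Check progress afterwards:
docker exec -it storagenode /app/storagenode exit-status \
  --config-dir /app/config --identity-dir /app/identity
```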

1 Like

I don’t get this at all. All data is paid the same, egress is paid the same, so it does not matter to me how my node is used, as long as I get a lot of egress.

If the hard drive cannot keep up, then maybe you should get a better drive :). If the drive cannot keep up with 3 Mbps of ingress, it would take you forever to fill the node and get anything resembling a normal payment.

However, since the OP has that drive and it gets overloaded by 3 Mbps, he should limit incoming data by limiting downloads using tc or some other traffic-shaping method. He could also limit simultaneous connections to the node (the node used to have this as an option, but it can be done with iptables).
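As a rough illustration of the tc/iptables idea (the interface, port and numbers are placeholders for whatever the setup actually uses):

```
# Assumes the node listens on TCP port 28967 behind eth0; adjust to taste.

# Police inbound traffic to the node port to ~5 Mbit/s. This is a blunt
# instrument - redirecting to an IFB device with a shaping qdisc is
# gentler on TCP - but it keeps the example short.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match ip dport 28967 0xffff \
    police rate 5mbit burst 256k drop flowid :1

# Cap simultaneous TCP connections to the node port at 20 in total
# (--connlimit-mask 0 counts all source addresses together).
iptables -A INPUT -p tcp --syn --dport 28967 \
    -m connlimit --connlimit-above 20 --connlimit-mask 0 -j REJECT
```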

7 Likes

Hi @CutieePie, just seeing your tag now. I am sorry that you’re feeling like your conversation went a bit sideways. I’ll PM you with a few of my thoughts. It’s important that people feel good about the time they spend here in this community, so I hope we can chat a bit.

I’m pretty sure we can iron this out without any hard feelings from anybody. :slight_smile:

P.S. – Regarding deletion: I’m not sure why it’s not allowing you to delete, but I suspect it’s either because there are too many replies to the original thread, or because it is a voting-enabled thread. I will check the documentation and see what I can learn.

2 Likes