Yeah, same here, but all upload and download activity has now stopped too. I went from a consistent average of around 20 Mbps download for weeks to about 0.5 Mbps, just instantly, like someone flipped a switch. In other words, from roughly 10 GB/hr to roughly 150 MB/hr for about the last 20 hours now. I hope that’s normal too.
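As a quick sanity check on those figures, a simple Mbps-to-MB/hr conversion (8 bits per byte, ignoring protocol overhead) puts 20 Mbps at about 9 GB/hr of raw line rate and 0.5 Mbps at about 225 MB/hr, which is in the same ballpark as the numbers quoted. A minimal sketch:

```python
def mbps_to_mb_per_hr(mbps: float) -> float:
    """Convert a line rate in megabits per second to megabytes per hour.

    Divide by 8 (bits -> bytes), multiply by 3600 (seconds -> hours).
    Real observed throughput will be somewhat lower due to overhead.
    """
    return mbps / 8 * 3600

print(mbps_to_mb_per_hr(20))   # 9000.0 MB/hr, i.e. ~9 GB/hr
print(mbps_to_mb_per_hr(0.5))  # 225.0 MB/hr
```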
I definitely wouldn’t call this “normal” from a SNO’s perspective, as it’s totally inconsistent with how nodes have behaved over the course of many months. That doesn’t mean it isn’t OK, or even perfectly normal according to the people more knowledgeable about how the network operates and about the testing schedules. Obviously data is going to be deleted and bandwidth usage can vary drastically, but I’m going to assume this is mainly related to testing, since normal customer usage wouldn’t all just drop off suddenly while deletions start. If this is test related, maybe an announcement from Storj whenever a major swing like this is coming would help alleviate a lot of the panic.
This should be taken as real-life experience of how customers behave when uploading and deleting files. Also, it’s not always possible for Storj to post an update before, during, or after each test. There is nothing to panic about as long as your node is alive with no audit failures.
As Storj isn’t mining, you can’t expect a regular flow of traffic, and you certainly can’t compare last month’s traffic to this or any other month’s.
Storj will probably not send updates about this, as you say. But when there are (hopefully) a lot of customers on the network, this will not be normal behavior: even if one customer deletes its data or stops using the network, it will not be this noticeable to SNOs, because hundreds or thousands of others will keep working as usual.
This would not be normal behavior with real customers, unless, say, 1,000 or 10,000 people managed to agree “OK, let’s all delete our data from the network at exactly 15:04 GMT,” which is not possible. Traffic would be somewhat predictable day-to-day.
I do agree with you, but I also think you’re slightly missing my point. Real-life behavior assumes you have more than one, or more than a small handful of, customers. That would make for a fluctuating but largely consistent flow of traffic, which is what it has looked like for months. When you throw this into the mix (see image), though, it makes you worry just a little. Traffic had been pretty consistent (while gradually increasing) for months, with only a couple of quick dips here and there, though since I moved the node I don’t have all the history.

From the perspective of a SNO this is, let’s call it, totally out of the ordinary, and it can be cause for concern, as many people appear to monitor their nodes by watching bandwidth activity, as I primarily do. When it looks like everything just stops, people get concerned. That’s all I’m trying to point out. To me this looks like it’s either from testing, or all of the data is coming from one customer, though I find it hard to believe it’s all from just one customer. It’s perfectly reasonable to say you can’t necessarily compare one month with another, but you typically don’t see swings this drastic either.
I was never really concerned about files being deleted; I was more just confused as to why everything was being deleted at the same time all the bandwidth dropped to nothing. Now that data is flowing again, I’m sure it’ll overtake what’s being deleted relatively soon. I have to assume this is mostly testing. I’m not complaining, though… I’m in this for the long haul, so I’m more interested in simply understanding what’s going on and why, and making sure the abnormal is still, umm, normal.
I believe it is not normal; it is part of the recovery action for the saltlake satellite, which could no longer account for the disk space used on storage nodes.
If you filter by saltlake in your dashboard, you will see that it is struggling to produce sensible daily accounting of disk space used in TBh. You might see May 3rd or 4th at zero.
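If you want to spot those gaps without eyeballing the dashboard graph, a minimal sketch like the one below flags zero-usage days in a series of daily TBh figures. The dates and values here are illustrative only, not real satellite data, and this is not a Storj API, just plain Python over numbers you copy out of the dashboard:

```python
# Hypothetical daily "disk space used" figures in TBh, e.g. read off the
# dashboard with the saltlake filter applied. Values are made up.
daily_tbh = {
    "2020-05-01": 95.2,
    "2020-05-02": 96.0,
    "2020-05-03": 0.0,   # a hard zero like this suggests missing accounting
    "2020-05-04": 0.0,
    "2020-05-05": 94.8,
}

# Collect the days where the satellite reported no usage at all.
zero_days = [day for day, tbh in sorted(daily_tbh.items()) if tbh == 0.0]
print(zero_days)  # ['2020-05-03', '2020-05-04']
```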