1.104.1 Huge Ingress load on US1?

Anyone else seeing hourly bursts on US1 since upgrading to 1.104.1 on a single node?

I’ve dialled allocation back from ~50 Mb/s to ~25 Mb/s; is that going to be OK for node selection? Just trying to get an idea of how much of my 1 Gb/s link needs to be allocated - you can’t have all of it, that’s just greedy :joy:

Also, is this testing data or real data? If it’s real data I’ll need to grow disks, as it’s already done a day’s worth of ingress in a few hours…

Sorry, just trying to plan to see if I need to buy more disks quickly :dollar:

CP :heart:

Why are you allocating 25Mbps out of 1000Mbps? At least give that thing half of the available bandwidth.


Now would be a good time to “shields up, Scotty”, because my photon torpedoes are on their way to mess your nodes up. :rage:


Send everything to me. All 30 PB of it; I can handle it, both space- and bandwidth-wise.


All Storj testing and space reservation is done on Saltlake. But it depends on your definition of real data. It’s all paid and that’s real enough for me. It’s customer data at least, but it’s highly likely for some customers to also do their own testing of performance. So what even is real data?


I think this is the result of the bandwidth rollups change in the code, where commits to the node’s SQLite database are now done in batches instead of continuously. Someone already mentioned it in a different thread, and I see a similar thing, but there is no such load on the interface itself.
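To illustrate why batched rollup commits would look like periodic disk bursts rather than steady load, here is a minimal sketch (an assumption for illustration, not the actual storagenode code): rollup rows accumulate in memory and are flushed to SQLite in a single transaction, so all the write I/O lands at once.

```python
import sqlite3

# In-memory DB for the sketch; the real node writes to a file-backed DB.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bandwidth_rollups (interval_start TEXT, action TEXT, amount INTEGER)"
)

pending = []  # buffered (interval_start, action, amount) rows, not yet on disk


def flush():
    """Write the whole batch in one transaction -> one burst of disk activity."""
    with conn:
        conn.executemany("INSERT INTO bandwidth_rollups VALUES (?, ?, ?)", pending)
    pending.clear()


def record(interval_start, action, amount, flush_at=100):
    """Buffer a rollup row; only flush once the batch is large enough."""
    pending.append((interval_start, action, amount))
    if len(pending) >= flush_at:
        flush()


# Simulate 250 transfer samples: flushes happen at 100 and 200 rows,
# and the remaining 50 are flushed explicitly at the end.
for _ in range(250):
    record("2024-06-01T00:00:00Z", "GET", 4096)
flush()

print(conn.execute("SELECT COUNT(*), SUM(amount) FROM bandwidth_rollups").fetchone())
# -> (250, 1024000)
```

With per-sample commits you would instead see many tiny transactions spread evenly over time, which is the steady-write pattern the older releases showed.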


It wouldn’t be polite to allow Mitsos to bear such a burden alone. I selflessly offer my services as well to store both ones and/or zeroes in any arrangement or quantity. :money_mouth_face:


Ah, you’re right, the actual firewall interface hasn’t seen much change; it’s just that the API data scraper I use to plot usage stats in Grafana is now broken.

Will have to update the setup to use a Telegraf agent to scrape the interface stats instead.
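For anyone wanting to do the same, a minimal Telegraf config using the stock `inputs.net` plugin could look like this (interface name, output URL, and credentials are placeholders, not my actual setup):

```toml
[agent]
  interval = "10s"

# Sample NIC byte/packet counters directly from the OS.
[[inputs.net]]
  interfaces = ["eth0"]   # replace with your actual interface

# Ship the metrics somewhere Grafana can read them, e.g. InfluxDB v2.
[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "$INFLUX_TOKEN"
  organization = "home"
  bucket = "storj"
```

This reads the kernel’s interface counters, so it keeps working regardless of what the node API or dashboard does.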

Thanks for spotting this :slight_smile: panic over.