I’ve dialled allocation back from ~50 Mb/s to ~25 Mb/s; is that going to be OK for node selection? Just trying to get an idea of how much of my 1 G link needs to be allocated. You can’t have all of it, that’s just greedy!
Also, is this testing data or real data? If it’s real data I’ll need to grow my disks, as it’s already done a day’s worth of ingress in a few hours…
Sorry, just trying to plan ahead and see if I need to buy more disks quickly.
All Storj testing and space reservation is done on Saltlake. But it depends on your definition of real data. It’s all paid, and that’s real enough for me. It’s customer data at least, though it’s highly likely that some customers are also doing their own performance testing. So what even is real data?
I think this is the result of the bandwidth rollups in the code, where commits to the node’s SQLite database are now done in batches instead of continuously. It was mentioned by someone already in a different thread and I see a similar thing, but there is no such load on the interface itself.
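The batching idea above can be sketched in a few lines. This is an illustrative toy, not the actual storagenode code: the table layout, `BATCH_SIZE`, and function names are all assumptions, but it shows why buffering rollup rows and committing them in one transaction produces far fewer disk writes than committing every event.

```python
import sqlite3

# Toy sketch of batched bandwidth-rollup writes (schema and names are
# hypothetical, not taken from the real storagenode database).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bandwidth_rollups (interval_start TEXT, action INTEGER, amount INTEGER)"
)

pending = []      # in-memory buffer of (interval_start, action, amount) rows
BATCH_SIZE = 3    # flush threshold; real code would also flush on a timer

def flush():
    # One transaction per batch instead of one commit per event.
    conn.executemany("INSERT INTO bandwidth_rollups VALUES (?, ?, ?)", pending)
    conn.commit()
    pending.clear()

def record(interval_start, action, amount):
    pending.append((interval_start, action, amount))
    if len(pending) >= BATCH_SIZE:
        flush()

# Simulate 7 bandwidth events; flushes happen at 3, 6, and the final flush().
for i in range(7):
    record("2024-05-01T00:00", 1, 1024 * i)
flush()  # flush the remainder

count = conn.execute("SELECT COUNT(*) FROM bandwidth_rollups").fetchone()[0]
print(count)  # 7 rows stored, but only 3 commits hit the disk
```

The upshot is that the SQLite file sees bursts of writes at flush time rather than a steady trickle, which matches the bursty disk-load pattern people are reporting.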
It wouldn’t be polite to allow Mitsos to bear such a burden alone. I selflessly offer my services as well to store both ones and/or zeroes in any arrangement or quantity.
Ah, you are right, the actual firewall interface hasn’t seen much change; it’s just the API and data scraper that are now broken, which I use to plot usage stats in Grafana.
I’ll have to update the setup to use a Telegraf agent to scrape the interface stats instead.
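For anyone wanting to do the same, a minimal Telegraf config using its `net` input plugin would look something like the sketch below. The interface name `eth0` and the InfluxDB URL/database are placeholders for your own setup:

```toml
# Collect per-interface network counters (bytes/packets sent and received)
[[inputs.net]]
  interfaces = ["eth0"]   # placeholder; set to your actual interface

# Ship the metrics to InfluxDB for Grafana to query
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]   # placeholder address
  database = "telegraf"              # placeholder database name
```

Grafana can then graph `net.bytes_recv` / `net.bytes_sent` deltas directly, without depending on the firewall’s API.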