Over the past few days my average used space has gone down significantly. How would I approach this?
No information is required for this specific issue. These are missed reports from US1: calculating those days took more than 24 hours, and the node does not yet have the functionality to make a forecast based on the non-daily reports it receives (the graph still shows the average, not the exact value, as it did before when usage was shown in GB*h). High accuracy is not required for the average usage.
The team is aware of the graph gaps, and the issue is on our GitHub, waiting to be estimated and scheduled into one of the sprints, or for a Pull Request from the Community.
This time it’s not related to a disk usage discrepancy: they reported that the Average Disk Space Used This Month is way below the usage on the pie chart. That’s because the graph is not smooth, so the average is likely calculated wrong (some days from US1 and SLC have had zero usage reported, or, more likely, the records for those dates are completely missing for these satellites).
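To illustrate how those missing records drag the number down, here is a minimal sketch (with hypothetical values, not taken from any real node) comparing an average computed over every day of the period, where the gaps count as zeros, against one computed only over the days that actually reported:

```go
package main

import "fmt"

func main() {
	// Hypothetical daily "disk space used" reports in TB over a 10-day window.
	// A zero means the satellite has no record at all for that day.
	daily := []float64{5.1, 5.2, 0, 0, 5.3, 5.2, 0, 5.4, 5.3, 5.2}

	var sum, reported float64
	for _, v := range daily {
		sum += v
		if v > 0 {
			reported++
		}
	}

	// Dividing by every day treats the gaps as real zero usage and pulls
	// the average well below what the pie chart shows for current usage.
	fmt.Printf("average over all days:      %.2f TB\n", sum/float64(len(daily)))

	// Dividing only by the days that actually reported matches intuition.
	fmt.Printf("average over reported days: %.2f TB\n", sum/reported)
}
```

With these numbers, the first average comes out around 3.67 TB against roughly 5.24 TB for the second, which is exactly the kind of gap being described.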
…and there is zero commitment from storj devs to fix this. That’s the ugly truth.
No no, I don’t mean this specific problem, just thinking ahead. It just occurred to me that this team might also have a team of testers who check fresh releases. Sure, there are risks in that, but there must be upsides too.
My US1 chart is now receiving updates on some days but not others. I was comparing the segment count for the satellites and US1 seems to have the most by far. Is this why it struggles to keep up? Is this the new normal until they start filling in the gaps with estimates?
Edit: Off topic, but now I’m wondering what will happen if Storj acquires their new mystery client. Will they be assigned to an existing satellite, or do they add a new satellite to scale up?
Second edit: Hope this post doesn’t sound critical of the Storj project. There is enough negativity and I’m not trying to add to it. I don’t operate a node for the money. It’s just a fun little side hobby and looking at dashboard graphs is part of the fun. Overall I’m enjoying it despite the occasional bug.
Yes, we have a QA team; most of the tests are automated. But you probably meant a group in the Community? We have that too:
The tally is taking longer than 24 hours, and when that happens the report is not sent in time. If it runs longer than 28-30 hours, consecutive runs may overlap, so the next one is either skipped or delayed even further.
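For intuition, here is a toy Go sketch of that overlap (purely illustrative, not the satellite’s actual scheduler): the 24-hour interval is scaled down to one second, and any tick that fires while the previous run is still in progress is skipped, which is one way a day can end up with no record at all:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// runTally stands in for the real tally job; it just sleeps to simulate
// a run that takes longer than its scheduled interval.
func runTally(d time.Duration) {
	time.Sleep(d)
}

func main() {
	interval := time.Second                  // stands in for 24h
	tallyDuration := 1300 * time.Millisecond // stands in for a 28-30h run

	var running int32
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for i := 0; i < 5; i++ {
		<-ticker.C
		// If the previous run is still going, skip this tick entirely.
		if !atomic.CompareAndSwapInt32(&running, 0, 1) {
			fmt.Println("tick: previous tally still running, skipped")
			continue
		}
		go func() {
			defer atomic.StoreInt32(&running, 0)
			fmt.Println("tick: tally started")
			runTally(tallyDuration)
		}()
	}
	time.Sleep(2 * time.Second) // let the last run finish
}
```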
There is no good solution for this at the moment (except manually filling in the missing records for the specific days), so I would expect this to keep happening. For now, the simplest option would be to estimate the missing days on the node instead, since it’s an average anyway.
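A minimal sketch of what such an estimation could look like on the node (my own illustration, not a committed design; `fillMissing` is a hypothetical helper): replace each missing day with the mean of the days that did report, which is harmless here precisely because the graph only shows an average:

```go
package main

import "fmt"

// fillMissing replaces zero-valued (missing) days with the mean of the
// days that actually reported. Hypothetical helper, not real node code.
func fillMissing(daily []float64) []float64 {
	var sum, n float64
	for _, v := range daily {
		if v > 0 {
			sum += v
			n++
		}
	}
	if n == 0 {
		return daily // nothing to estimate from
	}
	mean := sum / n
	out := make([]float64, len(daily))
	for i, v := range daily {
		if v == 0 {
			out[i] = mean // estimated value for a missing day
		} else {
			out[i] = v
		}
	}
	return out
}

func main() {
	daily := []float64{5.1, 5.2, 0, 0, 5.3}
	fmt.Println(fillMissing(daily)) // [5.1 5.2 5.2 5.2 5.3]
}
```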
There’s no way to speed up the calculation process?
Perhaps. I do not have details of the implementation, but I would guess it’s costly.
How does it have a cost?
Does Storj not have unmetered satellite servers?
Or do you mean upgrading CPU/RAM on these servers?
Would Storj not have the facility to place their own boxes in data centres (a rented rack)?
Apologies, I’m not familiar with the infrastructure that actually supports the satellites… not sure anyone else is…
Thanks
Hass
You need to run a heavy process to summarize all the data, so I would assume the best way is to use either a replicated instance or a backup; otherwise it would affect customers.
We have… However, it’s not free.
Me too (I’m not part of the infrastructure team), but I would guess that it’s costly, based on my personal experience in a similar situation: we wanted to offload all the related costs to the customer who really needed the feature, and as it turned out, that was the right move. My personal experience doesn’t necessarily apply to the current situation, but I would assume our team has the same problem.