Efficient algorithm for summing storage node usage over a given period (satellite-side)

“Incorrect Uncollected Garbage” is a major pain point for SNOs. The gist of it is the slow/incomplete calculation of storage node usage for a given period, and I don’t think any generic database has what it takes to do this task efficiently.

I thought of an algorithm to calculate the sum of storage for a node over a range of time; it can answer correctly how much data is on a node, at light speed. Here is how it works:

Assume we want to query: how much storage did node A have between Aug 4 2019 (2019-08-04 09:44:04) and Aug 4 2024 (2024-08-04 09:10:05)?

We need 2 engines:

1. A time-splitting engine, able to split that time range into:

• from 9h:44:04 (Aug 4 2019) until 00:00 (Aug 5 2019) # will discuss granular level below
• from 00:00 Aug 5 2019 until 00:00 Aug 6 2019 # daily
• from 00:00 Aug 6 2019 until 00:00 Aug 7 2019 # daily
• …
• from 00:00 Aug 31 2019 until 00:00 Sep 1 2019 # daily
• from 00:00 Sep 1 2019 until 00:00 Oct 1 2019 # monthly
• from 00:00 Oct 1 2019 until 00:00 Nov 1 2019 # monthly
• from 00:00 Nov 1 2019 until 00:00 Dec 1 2019 # monthly
• from 00:00 Dec 1 2019 until 00:00 Jan 1 2020 # monthly
• from 00:00 Jan 1 2020 until 00:00 Jan 1 2021 # yearly
• from 00:00 Jan 1 2021 until 00:00 Jan 1 2022 # yearly
• from 00:00 Jan 1 2022 until 00:00 Jan 1 2023 # yearly
• from 00:00 Jan 1 2023 until 00:00 Jan 1 2024 # yearly
• from 00:00 Jan 1 2024 until 00:00 Feb 1 2024 # monthly
• from 00:00 Feb 1 2024 until 00:00 Mar 1 2024 # monthly
• from 00:00 Mar 1 2024 until 00:00 Apr 1 2024 # monthly
• …
• from 00:00 Jul 1 2024 until 00:00 Aug 1 2024 # monthly
• from 00:00 Aug 1 2024 until 00:00 Aug 2 2024 # daily
• from 00:00 Aug 2 2024 until 00:00 Aug 3 2024 # daily
• from 00:00 Aug 3 2024 until 00:00 Aug 4 2024 # daily
• from 00:00 Aug 4 2024 until 09:10:05 Aug 4 2024 # will discuss granular level below

2. A sum-of-storage engine for those time ranges above:
No comment needed, as it’s pretty self-explanatory.
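The time-splitting step (engine 1) can be sketched like this. This is a minimal illustration, not Storj code; all names are invented. One difference from the list above: the post merges the leading/trailing partial day into a single granular bucket, while this sketch drills down through hourly buckets before falling back to a raw sub-hour remainder.

```python
from datetime import datetime, timedelta

# All names below are invented for illustration; this is not Storj code.

def next_boundary(t, unit):
    """Start of the next hour/day/month/year boundary strictly after t."""
    if unit == "hour":
        return t.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
    if unit == "day":
        return t.replace(hour=0, minute=0, second=0, microsecond=0) + timedelta(days=1)
    if unit == "month":
        y, m = (t.year + 1, 1) if t.month == 12 else (t.year, t.month + 1)
        return datetime(y, m, 1)
    return datetime(t.year + 1, 1, 1)  # "year"

def aligned(t, unit):
    """True if t sits exactly on a boundary of the given unit."""
    if unit == "hour":
        return (t.minute, t.second, t.microsecond) == (0, 0, 0)
    if unit == "day":
        return t.hour == 0 and aligned(t, "hour")
    if unit == "month":
        return t.day == 1 and aligned(t, "day")
    return t.month == 1 and aligned(t, "month")  # "year"

def split_range(start, end):
    """Greedily cover [start, end) with the largest calendar-aligned buckets."""
    out, cur = [], start
    while cur < end:
        for unit in ("year", "month", "day", "hour"):
            if aligned(cur, unit) and next_boundary(cur, unit) <= end:
                nxt = next_boundary(cur, unit)
                out.append((cur, nxt, unit))
                break
        else:
            # sub-hour remainder at either edge (the "granular level")
            nxt = min(next_boundary(cur, "hour"), end)
            out.append((cur, nxt, "raw"))
        cur = nxt
    return out
```

The greedy choice (largest aligned bucket that still fits) is what keeps the number of lookups tiny: any multi-year range collapses to a handful of year/month/day/hour buckets plus at most two raw remainders.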

The trick here is the space-time tradeoff: we precompute those values.

Another example: if we want to report how much storage a node used in a given month, that is pretty much instant because we already precomputed that value - an O(1) read operation.
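For instance, with a precomputed monthly table the lookup reduces to a single keyed read. A toy sketch (the dict and names are invented; a real system would read one row from a table):

```python
# Hypothetical precomputed monthly totals, keyed by (node_id, "YYYY-MM").
# In a real system this would be one row per node per month.
monthly_tbh = {("nodeA", "2024-07"): 744.0}  # 1 TB held for all 744 h of July

def monthly_usage(node_id, month):
    # One keyed read: O(1), no scan over raw tally rows.
    return monthly_tbh.get((node_id, month), 0.0)

print(monthly_usage("nodeA", "2024-07"))  # 744.0
```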

We have to decide how granular we want to go versus how much storage we can tolerate (I assume hourly is good enough here?): each hour rolls up into a day, each day into a month, each month into a year.

If some raw input data is wrong and we want to update it, simply open a transaction that also updates the parent hour, day, month, and year as needed.

P/s: edited to make this section clearer. For example, if an original data point was 13 and now it becomes 42, the delta is +29; just add +29 to the hour/day/month/year above it in one transaction.
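The delta propagation could look like this. Every name here is invented, and a plain dict stands in for what would be four tables updated inside one database transaction:

```python
from datetime import datetime

# Illustrative in-memory rollup store; every name here is invented.
# In production these would be four tables updated in one DB transaction.
rollups = {"hour": {}, "day": {}, "month": {}, "year": {}}

def parent_buckets(node_id, ts):
    """The hour/day/month/year buckets containing timestamp ts."""
    return {
        "hour":  (node_id, ts.replace(minute=0, second=0, microsecond=0)),
        "day":   (node_id, ts.replace(hour=0, minute=0, second=0, microsecond=0)),
        "month": (node_id, ts.replace(day=1, hour=0, minute=0, second=0, microsecond=0)),
        "year":  (node_id, ts.replace(month=1, day=1, hour=0, minute=0, second=0, microsecond=0)),
    }

def apply_correction(node_id, ts, old_value, new_value):
    """Fix a raw data point by pushing the delta into every parent bucket."""
    delta = new_value - old_value  # e.g. 42 - 13 = +29
    for level, key in parent_buckets(node_id, ts).items():
        rollups[level][key] = rollups[level].get(key, 0) + delta

# The example from the post: 13 becomes 42, so +29 everywhere above it.
apply_correction("nodeA", datetime(2024, 8, 4, 9, 44), 13, 42)
```

Because only deltas are applied, a correction never requires recomputing any bucket from scratch.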

Another issue I currently see is the shift to Google Spanner as the database. Storj is an open-source project, but by using Spanner it pretty much ties itself to GCP with no escape. If either Google or Storj no longer exists, the community cannot inherit much from it. Would Storj reconsider it…


We already have time-based information about the usage in the database, and the tally does the same thing - it calculates the usage for the previous day.
But you seem to be suggesting storing it in a real-time database after the calculation, and I do not know how that would change the fact that you still need to calculate the usage for the previous day (this is what the tally does, and sometimes it cannot complete within the 24h).
Or do you suggest running the tally hourly for the previous hour? If so, it would put a high load on the production database. It’s better to do that on a backup; however, the backup is a long-running process, so short intervals are not a solution.

Hi @Alexey, thanks for the “tally” keyword. What is the size of the calculation we are talking about here? Could you provide some insight?

P/s: apologies in advance, I don’t know much about the Storj source, so maybe most of my assumptions will be wrong, though I’m sure that “Uncollected Garbage” is felt by most SNOs.

If you could share the process of doing a tally (and some numbers), maybe someone could suggest something, and the community would understand more about how the software works.

You can get the numbers here: https://stats.storjshare.io/; I do not have any other insights.

Unfortunately I do not have details; here we need developers, I believe, or maybe @littleskunk has more knowledge about it.
I only know that it runs every 24h for the previous 24h, and then the reports are sent to the nodes.
I think it’s here: storj/satellite/accounting/nodetally/observer.go at daa2ddb7758b56a8cdbb9cee2f4a9ffb9c582663 · storj/storj · GitHub

Yes, tally is the job that calculates the used space for each node × the time since the previous tally run.

So if we were to run 1 tally per hour and your node is 1 TB in size, we would have 24 entries with 1 TBh each = 24 TBh total for the day.

Once per day a rollup job will sum up these TBh values and combine them with the data from settled bandwidth. The rollup job will do this for the previous 3 days or so, because a node can submit orders for the previous day. So we run the rollup job more than once to make sure all the settled bandwidth makes it into the sum.
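To make the arithmetic concrete, here is the example above as a worked calculation (the hourly tally cadence is hypothetical, per the post; this is not how the real rollup job is implemented):

```python
node_size_tb = 1.0
tallies_per_day = 24                      # hypothetical: one tally per hour
hours_per_tally = 24 / tallies_per_day    # each entry covers 1 h of storage

# One tally entry = node size * hours since the previous tally run.
entries = [node_size_tb * hours_per_tally for _ in range(tallies_per_day)]
daily_tbh = sum(entries)                  # what the rollup sums per node
print(daily_tbh)                          # 24.0 TBh: 1 TB held for 24 h
```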

I can lookup the table names if that helps. You should be able to reproduce this with storj-up. I can provide a short script to setup a storj-up environment, upload some data and you can take a look at the tally and rollup results yourself, change code and repeat it as many times as you want.


Can you also share the specific bottleneck that prevents satellites from sending the right estimates on time?

The expectation is wrong. Let’s say we have the same 1 TB node but only one tally run every 3 days. The result would still be the same payout, so there is no need to have a tally result per day. The problem is more on the storage node side. The storage node is already able to understand that a 3×24 TBh value should be displayed as 1 TB in the graph. That’s great. Just don’t show 0 for the other 2 days. The satellite has reported the tally result for 3 days, not just 1 day.
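The payout-equivalence point can be checked with a line of arithmetic (a sketch of the reasoning above, not Storj code):

```python
node_tb = 1.0
hourly_tallies = [node_tb * 1.0 for _ in range(72)]  # one tally per hour for 3 days
single_tally = node_tb * 72.0                        # one tally covering all 3 days

# Same total TBh either way, so the payout is identical.
assert sum(hourly_tallies) == single_tally == 72.0

# Display side: 72 TBh covering 3 days should render as 1 TB held on each
# day, not as 72 TBh on one day and 0 on the other two.
per_day_tb = single_tally / 72.0                     # 1.0 TB average held
```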


Hi @littleskunk, I have trouble understanding new concepts. I’ve tried to look through Home · storj/storj Wiki · GitHub, but the last commit is from “Tue Apr 18 17:48:10 2023”. Is this the entire documentation? Do we have docs hosted for developers somewhere else?

I think it is partially in the docs folder in each repository, or just a `README.md` in the root.
For example,
