While the number of nodes seems close to the actual count, the capacity doesn't match what Grafana currently reports as the size of the network.
yeah displaying live data would be much more awesome, static numbers are boring…
also helps inspire confidence when people realize it’s continually growing.
Not sure why you both assumed the numbers are static; since they've changed since the screenshots were taken, I'm pretty sure they're not. I can't explain the capacity number though, maybe they only wanted to show customer data. Not sure.
Right, it's currently displaying 12301, so the node number seems to reflect the correct count. But the other figures seem to be incorrect. If I read the Grafana page correctly, the network size is a little over 15 PB. So overall, the display of network stats is at least not consistent.
I don’t know where this data is coming from, but I would expect customer-side numbers without the storage expansion factor and storage node stats including the expansion factor. So I would expect the numbers to differ depending on the context.
I mean this here: Grafana
Free reported space: 6.12 PB
Stored customer data: 9.21 PB
For me this reads as roughly 15 PB capacity.
I would understand these numbers as the storage node perspective, including the storage expansion factor. A customer wouldn’t be able to store 15 PB of raw data, because from their perspective that figure would exclude the storage expansion factor. Something around 5-6 PB of raw data sounds more realistic from the customer's point of view.
Ok, but what if somebody reads the display as 5.4 PB / 2.7 = 2 PB raw data capacity?
This seems to be exactly it. 80/29 = 2.76 expansion factor.
15 / 2.76 = 5.4
It may be a slight underestimation as the average number of healthy pieces per segment is a little lower than 80 (around 70), but I guess you would rather underestimate than overestimate. And in theory you want to be able to store the target number of healthy pieces for all segments.
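The arithmetic above can be sketched quickly. This assumes the 80 target pieces and 29 required pieces per segment implied by the 80/29 ratio mentioned earlier:

```python
# Expansion-factor arithmetic from the thread (assumed piece counts:
# 80 target pieces per segment, 29 needed to reconstruct a segment).
target_pieces = 80
required_pieces = 29

expansion_factor = target_pieces / required_pieces   # ~2.76

raw_capacity_pb = 15.0                               # node-side total from Grafana (free + stored)
usable_pb = raw_capacity_pb / expansion_factor       # ~5.4 PB from the customer's perspective

print(f"expansion factor: {expansion_factor:.2f}")
print(f"usable capacity:  {usable_pb:.1f} PB")
```

Using the average healthy-piece count of ~70 instead of 80 would give a factor of ~2.4 and a slightly higher usable estimate, which is why the 80-piece figure is the conservative choice.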
I actually appreciate not going for the biggest possible number, but for the most reasonable one from a customer's perspective. I think that settles it, thanks for clarifying @littleskunk
We get the maximum value across satellites and convert it to Petabytes.
[5990398717825150, 6004403070856310, 5987762550486050, 5943161678351343, 5990544872599332, 5976919219625497]
6004403070856310 bytes to petabytes ~ 5.3 PB (current value on storj.io at the time I write this response)
We do use a conservative approach to calculate storage_free_capacity_estimate_bytes: since it is an estimate, we base the calculation on the actual free-disk distribution, removing the extremes.
We also offer the median value in the same endpoint: "median_available_bytes"
I’ll ask to replace “Capacity” with “Free Capacity” on the storj.io homepage to avoid confusion.
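If I understand the explanation above correctly, the selection step can be sketched like this, using the per-satellite byte values posted earlier (the median shown here is my own illustration of the `median_available_bytes` field, not the exact server-side code):

```python
import statistics

# Per-satellite free-capacity estimates (bytes), as posted above.
free_capacity_bytes = [
    5990398717825150, 6004403070856310, 5987762550486050,
    5943161678351343, 5990544872599332, 5976919219625497,
]

# The homepage shows the maximum value across satellites.
max_bytes = max(free_capacity_bytes)           # 6004403070856310

# The same endpoint also exposes a median value.
median_bytes = statistics.median(free_capacity_bytes)

print(f"max:    {max_bytes}")
print(f"median: {median_bytes}")
```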
Ahh, looks like we came to the wrong conclusion after all. Thanks for explaining.
While you’re requesting changes, may I suggest another one?
Storj consistently uses correct SI decimal units and notation across all software, pricing, billing and node payouts, yet this is the only place where binary Pebibytes seem to be used (to make it worse, they’re displayed with the incorrect decimal unit PB instead of binary PiB).
I suggest aligning with the standards set by Storj in other components and using actual decimal units, so 6004403070856310 bytes should just be displayed as 6.0PB.
I would also suggest changing units or displaying more decimal places, so the numbers change more frequently and make it obvious that these are live numbers.
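The unit discrepancy is easy to check directly. Dividing the quoted byte value by a binary pebibyte reproduces the ~5.3 shown on the site, while the SI petabyte gives ~6.0:

```python
# Byte value quoted from the stats endpoint.
value = 6004403070856310

decimal_pb = value / 1e15     # SI petabytes  -> ~6.0
binary_pib = value / 2**50    # pebibytes     -> ~5.3 (what the site shows, labeled "PB")

print(f"{decimal_pb:.2f} PB")
print(f"{binary_pib:.2f} PiB")
```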
We can see now that this change has been made:
But why only display the free capacity? As a potential customer I certainly would be interested to see how much capacity is already being used as well.
Maybe they don’t want to show that information, as it could be misinterpreted as stalled growth when looking at the past 90 days?
But that would not really fit the Storj claims of transparency.
Sia, for example, shows that number quite openly: