For about a month now I have been a proud storage node operator, currently running five nodes with a total capacity of about 15TB. However, every time I start a new node, I notice a big gap between ingress and the increase in used storage.
Therefore I wrote a script in order to confirm or refute my suspicion.
This is the dashboard of my newest node:
As you can see, total ingress this month is about 250GB, but total storage is only about 96GB — roughly 38% efficiency.
Daily ingress is 34-48GB, yet storage only increases by about 16GB per day, which boils down to an efficiency of 33-50%.
This is the script:
echo "$( wget -qO - localhost:14002/api/sno | jq -r '.satellites[].id' | while read -r sNode; do echo ','; wget -qO - "localhost:14002/api/sno/satellite/$sNode"; done )]" | sed -z 's/^,/[/' |
jq -r '
  "Satellite per day: ",
  (
    .[] |
    (
      " - " + .id + " (" + .audits.satelliteName + ")",
      .bandwidthDaily[] as $bw |
      (.storageDaily | map(select(.intervalStart == $bw.intervalStart)) | .[0].atRestTotalBytes) as $space |
      (.storageDaily | map(select(.intervalStart > $bw.intervalStart)) | .[0]?.atRestTotalBytes) as $spaceTomorrow |
      (($spaceTomorrow // $space) - $space) as $spaceInc |
      ($bw.ingress | .repair + .usage) as $ing |
      (
        " * Date: " + $bw.intervalStart,
        " # IN: " + ($ing / 100000 | round | . / 10 | tostring) + "MB",
        " # Increase used space: " + ($spaceInc / 100000 | round | . / 10 | tostring) + "MB",
        " # Efficiency: " + ($spaceInc / $ing * 100 | round | tostring) + "%"
      )
    ),
    "",
    " * Total : ",
    (
      ([.bandwidthDaily[].ingress | (.repair + .usage)] | add) as $ing |
      ([.bandwidthDaily[].intervalStart] | min) as $mindate |
      ([.bandwidthDaily[].intervalStart] | max) as $maxdate |
      ((.storageDaily | map(select(.intervalStart == $maxdate)) | .[0].atRestTotalBytes) - (.storageDaily | map(select(.intervalStart == $mindate)) | .[0].atRestTotalBytes)) as $space |
      " # IN: " + ($ing / 100000 | round | . / 10 | tostring) + "MB",
      " # Total used space: " + ($space / 100000 | round | . / 10 | tostring) + "MB",
      " # Efficiency: " + ($space / $ing * 100 | round | tostring) + "%",
      ""
    )
  ),
  "",
  "Per day: ",
  (
    [.[].bandwidthDaily[]] as $bw |
    [.[].storageDaily[]] as $stor |
    (
      $bw | group_by(.intervalStart)[] | (
        .[0].intervalStart as $today |
        ([$stor[].intervalStart] | map(select(. > $today)) | min) as $tomorrow |
        ($stor | map(select(.intervalStart == $today).atRestTotalBytes) | add) as $space |
        ($stor | map(select(.intervalStart == $tomorrow).atRestTotalBytes) | add) as $spaceTomorrow |
        ( [.[].ingress | (.repair + .usage)] | add ) as $ing |
        (($spaceTomorrow // $space) - $space) as $spaceInc |
        " - Date: " + $today,
        " * IN: " + ($ing / 100000 | round | . / 10 | tostring) + "MB",
        " * Increase used space: " + ($spaceInc / 100000 | round | . / 10 | tostring) + "MB",
        " * Efficiency: " + ($spaceInc / $ing * 100 | round | tostring) + "%"
      )
    ),
    "",
    "Overall: ",
    (
      ( [$bw[].ingress | (.repair + .usage)] | add ) as $ing |
      ( [$bw[].intervalStart] | max) as $maxdate |
      ( [$bw[].intervalStart] | min) as $mindate |
      (($stor | map(select(.intervalStart == $maxdate).atRestTotalBytes) | add) - ($stor | map(select(.intervalStart == $mindate).atRestTotalBytes) | add)) as $space |
      " - IN: " + ($ing / 100000 | round | . / 10 | tostring) + "MB",
      " - Total used space: " + ($space / 100000 | round | . / 10 | tostring) + "MB",
      " - Efficiency: " + ($space / $ing * 100 | round | tostring) + "%"
    )
  )'
The output is:
Satellite per day:
- 12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo (us2.storj.io:7777)
* Date: 2023-05-07T00:00:00Z
# IN: 1.1MB
# Increase used space: 0.5MB
# Efficiency: 48%
* Date: 2023-05-08T00:00:00Z
# IN: 2.9MB
# Increase used space: 0.5MB
# Efficiency: 18%
* Date: 2023-05-09T00:00:00Z
# IN: 5.4MB
# Increase used space: 0.7MB
# Efficiency: 13%
* Date: 2023-05-10T00:00:00Z
# IN: 3.7MB
# Increase used space: 3.1MB
# Efficiency: 83%
* Date: 2023-05-11T00:00:00Z
# IN: 10.8MB
# Increase used space: 1.8MB
# Efficiency: 17%
* Date: 2023-05-12T00:00:00Z
# IN: 6MB
# Increase used space: 1.5MB
# Efficiency: 25%
* Date: 2023-05-13T00:00:00Z
# IN: 6.3MB
# Increase used space: 0MB
# Efficiency: 0%
* Total :
# IN: 36.1MB
# Total used space: 8.1MB
# Efficiency: 23%
- 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE (saltlake.tardigrade.io:7777)
* Date: 2023-05-07T00:00:00Z
# IN: 88.9MB
# Increase used space: 335.5MB
# Efficiency: 378%
* Date: 2023-05-08T00:00:00Z
# IN: 599.8MB
# Increase used space: 691MB
# Efficiency: 115%
* Date: 2023-05-09T00:00:00Z
# IN: 694.4MB
# Increase used space: 629.3MB
# Efficiency: 91%
* Date: 2023-05-10T00:00:00Z
# IN: 626.1MB
# Increase used space: 658.7MB
# Efficiency: 105%
* Date: 2023-05-11T00:00:00Z
# IN: 566.5MB
# Increase used space: 684.4MB
# Efficiency: 121%
* Date: 2023-05-12T00:00:00Z
# IN: 760.3MB
# Increase used space: 390.3MB
# Efficiency: 51%
* Date: 2023-05-13T00:00:00Z
# IN: 504.3MB
# Increase used space: 0MB
# Efficiency: 0%
* Total :
# IN: 3840.2MB
# Total used space: 3389.3MB
# Efficiency: 88%
- 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 (ap1.storj.io:7777)
* Date: 2023-05-07T00:00:00Z
# IN: 100.4MB
# Increase used space: 321.2MB
# Efficiency: 320%
* Date: 2023-05-08T00:00:00Z
# IN: 765MB
# Increase used space: 652.4MB
# Efficiency: 85%
* Date: 2023-05-09T00:00:00Z
# IN: 1046.8MB
# Increase used space: 624.8MB
# Efficiency: 60%
* Date: 2023-05-10T00:00:00Z
# IN: 969.9MB
# Increase used space: 664MB
# Efficiency: 68%
* Date: 2023-05-11T00:00:00Z
# IN: 957.6MB
# Increase used space: 760.2MB
# Efficiency: 79%
* Date: 2023-05-12T00:00:00Z
# IN: 1251MB
# Increase used space: 425MB
# Efficiency: 34%
* Date: 2023-05-13T00:00:00Z
# IN: 1013.4MB
# Increase used space: 0MB
# Efficiency: 0%
* Total :
# IN: 6104.1MB
# Total used space: 3447.6MB
# Efficiency: 56%
- 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S (us1.storj.io:7777)
* Date: 2023-05-07T00:00:00Z
# IN: 5756.6MB
# Increase used space: 3943.7MB
# Efficiency: 69%
* Date: 2023-05-08T00:00:00Z
# IN: 32038.4MB
# Increase used space: 7477.1MB
# Efficiency: 23%
* Date: 2023-05-09T00:00:00Z
# IN: 24973.4MB
# Increase used space: 7300.6MB
# Efficiency: 29%
* Date: 2023-05-10T00:00:00Z
# IN: 35875.6MB
# Increase used space: 10192.4MB
# Efficiency: 28%
* Date: 2023-05-11T00:00:00Z
# IN: 27796.6MB
# Increase used space: 12180.8MB
# Efficiency: 44%
* Date: 2023-05-12T00:00:00Z
# IN: 35726.9MB
# Increase used space: 12377.4MB
# Efficiency: 35%
* Date: 2023-05-13T00:00:00Z
# IN: 30903.6MB
# Increase used space: 0MB
# Efficiency: 0%
* Total :
# IN: 193071.1MB
# Total used space: 53472MB
# Efficiency: 28%
- 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs (eu1.storj.io:7777)
* Date: 2023-05-07T00:00:00Z
# IN: 1016.8MB
# Increase used space: 2698.1MB
# Efficiency: 265%
* Date: 2023-05-08T00:00:00Z
# IN: 6900.1MB
# Increase used space: 4161.8MB
# Efficiency: 60%
* Date: 2023-05-09T00:00:00Z
# IN: 7822.3MB
# Increase used space: 5942.5MB
# Efficiency: 76%
* Date: 2023-05-10T00:00:00Z
# IN: 10169.1MB
# Increase used space: 5409.1MB
# Efficiency: 53%
* Date: 2023-05-11T00:00:00Z
# IN: 7957.3MB
# Increase used space: 4329.4MB
# Efficiency: 54%
* Date: 2023-05-12T00:00:00Z
# IN: 9272MB
# Increase used space: 6831.8MB
# Efficiency: 74%
* Date: 2023-05-13T00:00:00Z
# IN: 7556.7MB
# Increase used space: 0MB
# Efficiency: 0%
* Total :
# IN: 50694.3MB
# Total used space: 29372.7MB
# Efficiency: 58%
- 12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB (europe-north-1.tardigrade.io:7777)
* Date: 2023-05-07T00:00:00Z
# IN: 80.3MB
# Increase used space: 288MB
# Efficiency: 358%
* Date: 2023-05-08T00:00:00Z
# IN: 453MB
# Increase used space: 426MB
# Efficiency: 94%
* Date: 2023-05-09T00:00:00Z
# IN: 367.2MB
# Increase used space: 404.5MB
# Efficiency: 110%
* Date: 2023-05-10T00:00:00Z
# IN: 468.8MB
# Increase used space: 436.7MB
# Efficiency: 93%
* Date: 2023-05-11T00:00:00Z
# IN: 493.6MB
# Increase used space: 534.1MB
# Efficiency: 108%
* Date: 2023-05-12T00:00:00Z
# IN: 425.3MB
# Increase used space: 481.8MB
# Efficiency: 113%
* Date: 2023-05-13T00:00:00Z
# IN: 291.6MB
# Increase used space: 0MB
# Efficiency: 0%
* Total :
# IN: 2579.8MB
# Total used space: 2571.1MB
# Efficiency: 100%
Per day:
- Date: 2023-05-07T00:00:00Z
* IN: 7044.1MB
* Increase used space: 7587.1MB
* Efficiency: 108%
- Date: 2023-05-08T00:00:00Z
* IN: 40759.2MB
* Increase used space: 13408.9MB
* Efficiency: 33%
- Date: 2023-05-09T00:00:00Z
* IN: 34909.5MB
* Increase used space: 14902.4MB
* Efficiency: 43%
- Date: 2023-05-10T00:00:00Z
* IN: 48113.2MB
* Increase used space: 17364MB
* Efficiency: 36%
- Date: 2023-05-11T00:00:00Z
* IN: 37782.3MB
* Increase used space: 18490.6MB
* Efficiency: 49%
- Date: 2023-05-12T00:00:00Z
* IN: 47441.6MB
* Increase used space: 20507.8MB
* Efficiency: 43%
- Date: 2023-05-13T00:00:00Z
* IN: 40275.9MB
* Increase used space: 0MB
* Efficiency: 0%
Overall:
- IN: 256325.8MB
- Total used space: 92260.8MB
- Efficiency: 36%
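For anyone reading the script: the core trick is that a day's space increase is the next day's atRestTotalBytes minus that day's, matched via intervalStart (with `//` falling back to today's value when there is no next day yet). A toy reproduction on made-up sample data:

```shell
# Hypothetical two-day storageDaily sample; the jq expression mirrors the
# $spaceTomorrow / $spaceInc logic from the script above.
echo '[
  {"intervalStart":"2023-05-07T00:00:00Z","atRestTotalBytes":100000000},
  {"intervalStart":"2023-05-08T00:00:00Z","atRestTotalBytes":160000000}
]' | jq '
  .[0] as $today |
  (map(select(.intervalStart > $today.intervalStart)) | .[0]?.atRestTotalBytes) as $spaceTomorrow |
  (($spaceTomorrow // $today.atRestTotalBytes) - $today.atRestTotalBytes)'
# prints 60000000, i.e. 60MB gained on 2023-05-07
```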
So, the overall efficiency is only 36%. Although this is the worst figure of all my nodes, no node exceeds 50% overall efficiency, and the mean is about 40-45%. It is also interesting that on some days the efficiency exceeds 100% (recoveries? a different cut-off time for the bandwidth than for the storage-at-rest measurement?). There is also a big difference in efficiency between the satellites; the US satellites turn out to be the worst for me.
I suspected it might have something to do with not winning races (especially since I'm located in Europe, and the US satellites show the worst efficiency for me). So I checked the logs:
root@Storj-node4:~# docker logs storagenode 2>&1 | grep -c "uploaded"
255340
root@Storj-node4:~# docker logs storagenode 2>&1 | grep -c "upload started"
256718
root@Storj-node4:~# docker logs storagenode 2>&1 | grep -c "piecedeleter"
25745
root@Storj-node4:~# docker logs storagenode 2>&1 | grep -c "sent to trash"
25743
However, the logs don't seem to support this: the node appears to win over 99.4% of its races.
Likewise, the trash folder holds only 17GB and there are relatively few piecedeleter messages, so deletions don't explain the discrepancy either.
Any other thoughts / explanations from your side concerning this gap?
Any reason to make some adjustments to the settings?
Is this also a focus point for the developers?
Post-script:
- My first post on this subject seems to have been removed and not reposted after review by the staff? An explanation would be great.
- I doubted whether this topic should have been posted in the developer section. Since it is of more importance to SNOs, and I'm an SNO myself, I decided to post it here.