Some discrepancy in the graphs

Yes, the Storj dashboard uses MB/GB (decimal SI prefixes), not MiB/GiB (binary prefixes). So it looks like your node overestimates ingress traffic even more, at around 300-350% of the actual traffic. My nodes only overstate it by about 160-200% of the actual ingress.
So I guess the bug is related to how the traffic itself is counted? For example, counting the maximum size of an uploaded data piece instead of its actual size.
And the smaller the average size of the real pieces a particular node receives from the satellites, the more the node software would overestimate the incoming traffic statistics.
Or something like that.
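
Just to illustrate what that would mean in numbers (these piece sizes are made-up example values, not taken from any node): if the node credited the maximum size announced for an upload instead of the bytes it actually received, the overstatement would simply be max/actual, and it grows as the real pieces get smaller.

# Hypothetical example values, not real node data
advertised_max_bytes=2319872   # size announced when the upload is offered
actual_piece_bytes=724608      # bytes actually written to disk
echo "scale=2; $advertised_max_bytes * 100 / $actual_piece_bytes" | bc
# => 320.15, i.e. the graph would show ~320% of the real ingress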

The difference is that my nodes are very old (about 3 years on the network), while you measured on a new, freshly created one. The nature of incoming traffic in these two extreme cases is very different.

No, it looks like you are confusing trash and garbage. What you described (pieces of data that were deleted on the satellite while the node was unaware of it, for example because it was offline at the time of deletion) are placed in the garbage folder when/if the garbage collector finds them. This, by the way, is not displayed on the dashboard at all, but you can check it manually by opening the stored data folder; the folder is called “garbage”. Normal delete commands received and processed by the node move the deleted pieces to the “trash” folder, where they are kept for exactly one more week by default and only then finally deleted from disk. This is what is called “trash” in Storj terminology and is what is displayed on the dashboard.
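
If you want to check both folders on your own node, something like this should work (the storage path below is just an example, adjust it to your setup):

STORAGE=/mnt/storagenode/storage        # example path, adjust to your node

# size of the two folders
du -sh "$STORAGE/garbage" "$STORAGE/trash"

# trash pieces older than the default 7-day retention (normally few or none)
find "$STORAGE/trash" -type f -mtime +7 | wc -l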

1 Like

It also might be that the node is summing the size that is advertised when the satellite/uplink opens the connection, when in fact the upload ends up being cancelled with 0 bytes uploaded.
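
If that were the case, cancelled and failed uploads should show up in the logs. A rough way to compare the different outcomes, using the same grep approach as elsewhere in this thread (container name assumed to be storagenode):

docker logs storagenode 2>&1 | grep -c "upload started"
docker logs storagenode 2>&1 | grep -c "uploaded"
docker logs storagenode 2>&1 | grep -c "upload canceled"
docker logs storagenode 2>&1 | grep -c "upload failed"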

I’ve been monitoring this behaviour for a couple of hours.
Storagenode reported ingress: 16.38 GB
Logged at the network level: 4.662 GB
Successful ingress from SN logs: 98.63%

I will add another log at the OS level to capture the next 24 hours.
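
For reference, the OS-level logging can be as simple as sampling the interface counters from /proc/net/dev at a fixed interval, roughly like this (the interface name is just an example, adjust it to whatever carries your node traffic):

IFACE=eth0   # adjust to the interface your node traffic goes through
while true; do
    awk -v ts="$(date -Is)" -v ifc="$IFACE:" \
        '$1 == ifc {print ts, "rx_bytes=" $2, "tx_bytes=" $10}' /proc/net/dev >> netstats.log
    sleep 300   # one sample every 5 minutes
done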

The garbage collector places data into the trash folder. The garbage folder is not used, as far as I know.

1 Like

I see some files in the “garbage” folder on my nodes. Not many, just a few files usually. But all my nodes have > 99% uptime, so they should not accumulate a lot of garbage anyway, and the garbage collector shouldn’t have much work to do here.

And I have checked the code and found that the garbage folder is still in use by the storagenode: storj/dir.go at f5020de57c9200e61aaa842013263075de229333 · storj/storj · GitHub

// garbagedir contains files that failed to delete but should be deleted.
func (dir *Dir) garbagedir() string { return filepath.Join(dir.path, "garbage") }

// trashdir contains files staged for deletion for a period of time.
func (dir *Dir) trashdir() string { return filepath.Join(dir.path, "trash") }

Although I didn’t delve into the code details there.

I have actually observed the garbage folder being used whenever garbage collection is underway. You can verify this by checking the modified date of the garbage folder under Windows.

I have never seen any files there, but I did not track it closely.

The folder is empty most of the time, but during GC files are moved there and deleted immediately. I have seen it.

the ratio seems very close to the expansion factor…
maybe there is a mistake somewhere
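
For reference, the expansion factor people usually quote comes from the Reed-Solomon settings (roughly 29 pieces needed out of 80 uploaded, as far as I know):

echo "scale=4; 80 * 100 / 29" | bc
# => 275.8620, i.e. roughly 2.76x - in the same ballpark as the 300-350% reported above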

I believe I found the root cause of this and filed an issue in our bug tracker: Ingress graphs are skewed by upload orders · Issue #5853 · storj/storj · GitHub. I can’t guarantee when this will get resolved, but the team dealing with the development of storage node software is already aware of this.

11 Likes

Personally I’m not that fussed, but thanks for looking into it :slightly_smiling_face:

2 Likes

I also noticed on my nodes that the data ingress in the dashboard graph does not match the actual received volume; it is greatly inflated. It’s the same in the multinode dashboard.

For about a month now I’ve been a proud storage node operator, running five storage nodes with a total capacity of about 15TB. However, every time I start a new node I notice a big gap between the ingress and the increase in used storage.
Therefore I wrote a script to confirm or refute my suspicion.

This is the dashboard of my newest node:


As you can see, total ingress this month is about 250GB, but total storage is about 96GB, so roughly 38% efficiency.
Daily ingress is 34-48GB/24h, while storage increases by only about 16GB per day, boiling down to an efficiency of 33-50%.
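
Just to make that arithmetic explicit:

echo "scale=2; 96 * 100 / 250" | bc    # => 38.40 (% of reported ingress that actually ends up stored)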

I even wrote a script:

echo "$( wget -qO - localhost:14002/api/sno | jq -r .satellites[].id | while read -r sNode; do echo ',';  wget -qO - localhost:14002/api/sno/satellite/$sNode; done )]" | sed -z 's/^,/[/' |
jq -r '
	"Satellite per day: ",
	(
		.[] |
		(
			" - " + .id + " (" + .audits.satelliteName + ")",
			.bandwidthDaily[] as $bw |
			(.storageDaily | map(select(.intervalStart == $bw.intervalStart)) | .[0].atRestTotalBytes) as $space | 
			(.storageDaily | map(select(.intervalStart > $bw.intervalStart)) | .[0]?.atRestTotalBytes) as $spaceTomorrow |
			(($spaceTomorrow // $space) - $space) as $spaceInc |
			($bw.ingress | .repair + .usage) as $ing |
			(
				"  * Date: " + $bw.intervalStart,
				"   # IN: " + ($ing / 100000 | round | . / 10 | tostring) + "MB",
				"   # Increase used space: " + ($spaceInc / 100000 | round | . / 10 | tostring) + "MB",
				"   # Efficiency: " + ($spaceInc / $ing * 100 | round | tostring) + "%"
			)			
		),
		"",
		"  * Total : ",
		(
			([.bandwidthDaily[].ingress | (.repair + .usage)] | add) as $ing |
			([.bandwidthDaily[].intervalStart] | min) as $mindate |
			([.bandwidthDaily[].intervalStart] | max) as $maxdate |
			((.storageDaily | map(select(.intervalStart == $maxdate)) | .[0].atRestTotalBytes) - (.storageDaily | map(select(.intervalStart == $mindate)) | .[0].atRestTotalBytes)) as $space |
			"   # IN: " + ($ing / 100000 | round | . / 10 | tostring) + "MB",
			"   # Total used space: " + ($space / 100000 | round | . / 10 | tostring) + "MB",
			"   # Efficiency: " + ($space / $ing * 100 | round | tostring) + "%",
			""
		)
	),
	"",
	"Per day: ",
	(
		[.[].bandwidthDaily[]] as $bw | 
		[.[].storageDaily[]] as $stor | 
		(
			$bw | group_by(.intervalStart)[] | (
				.[0].intervalStart as $today |
				([$stor[].intervalStart] | map(select(. > $today)) | min) as $tomorrow |
				($stor | map(select(.intervalStart == $today).atRestTotalBytes) | add) as $space |
				($stor | map(select(.intervalStart == $tomorrow).atRestTotalBytes) | add) as $spaceTomorrow |
				( [.[].ingress | (.repair + .usage)] | add ) as $ing | 
				(($spaceTomorrow // $space) - $space) as $spaceInc |
				" - Date: " + $today,
				"  * IN: " + ($ing / 100000 | round | . / 10 | tostring) + "MB",
				"  * Increase used space: " + ($spaceInc / 100000 | round | . / 10 | tostring) + "MB",
				"  * Efficiency: " + ($spaceInc / $ing * 100 | round | tostring) + "%"				
			)
		),
		"",
		"Overall: ",
		(
			( [$bw[].ingress | (.repair + .usage)] | add ) as $ing |
			( [$bw[].intervalStart] | max) as $maxdate |
			( [$bw[].intervalStart] | min) as $mindate |
			(($stor | map(select(.intervalStart == $maxdate).atRestTotalBytes) | add) - ($stor | map(select(.intervalStart == $mindate).atRestTotalBytes) | add)) as $space |
			" - IN: " + ($ing / 100000 | round | . / 10 | tostring) + "MB",
			" - Total used space: " + ($space / 100000 | round | . / 10 | tostring) + "MB",
			" - Efficiency: " + ($space / $ing * 100 | round | tostring) + "%"	
		)
	)'

The output is:

Satellite per day:
 - 12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo (us2.storj.io:7777)
  * Date: 2023-05-07T00:00:00Z
   # IN: 1.1MB
   # Increase used space: 0.5MB
   # Efficiency: 48%
  * Date: 2023-05-08T00:00:00Z
   # IN: 2.9MB
   # Increase used space: 0.5MB
   # Efficiency: 18%
  * Date: 2023-05-09T00:00:00Z
   # IN: 5.4MB
   # Increase used space: 0.7MB
   # Efficiency: 13%
  * Date: 2023-05-10T00:00:00Z
   # IN: 3.7MB
   # Increase used space: 3.1MB
   # Efficiency: 83%
  * Date: 2023-05-11T00:00:00Z
   # IN: 10.8MB
   # Increase used space: 1.8MB
   # Efficiency: 17%
  * Date: 2023-05-12T00:00:00Z
   # IN: 6MB
   # Increase used space: 1.5MB
   # Efficiency: 25%
  * Date: 2023-05-13T00:00:00Z
   # IN: 6.3MB
   # Increase used space: 0MB
   # Efficiency: 0%

  * Total :
   # IN: 36.1MB
   # Total used space: 8.1MB
   # Efficiency: 23%

 - 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE (saltlake.tardigrade.io:7777)
  * Date: 2023-05-07T00:00:00Z
   # IN: 88.9MB
   # Increase used space: 335.5MB
   # Efficiency: 378%
  * Date: 2023-05-08T00:00:00Z
   # IN: 599.8MB
   # Increase used space: 691MB
   # Efficiency: 115%
  * Date: 2023-05-09T00:00:00Z
   # IN: 694.4MB
   # Increase used space: 629.3MB
   # Efficiency: 91%
  * Date: 2023-05-10T00:00:00Z
   # IN: 626.1MB
   # Increase used space: 658.7MB
   # Efficiency: 105%
  * Date: 2023-05-11T00:00:00Z
   # IN: 566.5MB
   # Increase used space: 684.4MB
   # Efficiency: 121%
  * Date: 2023-05-12T00:00:00Z
   # IN: 760.3MB
   # Increase used space: 390.3MB
   # Efficiency: 51%
  * Date: 2023-05-13T00:00:00Z
   # IN: 504.3MB
   # Increase used space: 0MB
   # Efficiency: 0%

  * Total :
   # IN: 3840.2MB
   # Total used space: 3389.3MB
   # Efficiency: 88%

 - 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 (ap1.storj.io:7777)
  * Date: 2023-05-07T00:00:00Z
   # IN: 100.4MB
   # Increase used space: 321.2MB
   # Efficiency: 320%
  * Date: 2023-05-08T00:00:00Z
   # IN: 765MB
   # Increase used space: 652.4MB
   # Efficiency: 85%
  * Date: 2023-05-09T00:00:00Z
   # IN: 1046.8MB
   # Increase used space: 624.8MB
   # Efficiency: 60%
  * Date: 2023-05-10T00:00:00Z
   # IN: 969.9MB
   # Increase used space: 664MB
   # Efficiency: 68%
  * Date: 2023-05-11T00:00:00Z
   # IN: 957.6MB
   # Increase used space: 760.2MB
   # Efficiency: 79%
  * Date: 2023-05-12T00:00:00Z
   # IN: 1251MB
   # Increase used space: 425MB
   # Efficiency: 34%
  * Date: 2023-05-13T00:00:00Z
   # IN: 1013.4MB
   # Increase used space: 0MB
   # Efficiency: 0%

  * Total :
   # IN: 6104.1MB
   # Total used space: 3447.6MB
   # Efficiency: 56%

 - 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S (us1.storj.io:7777)
  * Date: 2023-05-07T00:00:00Z
   # IN: 5756.6MB
   # Increase used space: 3943.7MB
   # Efficiency: 69%
  * Date: 2023-05-08T00:00:00Z
   # IN: 32038.4MB
   # Increase used space: 7477.1MB
   # Efficiency: 23%
  * Date: 2023-05-09T00:00:00Z
   # IN: 24973.4MB
   # Increase used space: 7300.6MB
   # Efficiency: 29%
  * Date: 2023-05-10T00:00:00Z
   # IN: 35875.6MB
   # Increase used space: 10192.4MB
   # Efficiency: 28%
  * Date: 2023-05-11T00:00:00Z
   # IN: 27796.6MB
   # Increase used space: 12180.8MB
   # Efficiency: 44%
  * Date: 2023-05-12T00:00:00Z
   # IN: 35726.9MB
   # Increase used space: 12377.4MB
   # Efficiency: 35%
  * Date: 2023-05-13T00:00:00Z
   # IN: 30903.6MB
   # Increase used space: 0MB
   # Efficiency: 0%

  * Total :
   # IN: 193071.1MB
   # Total used space: 53472MB
   # Efficiency: 28%

 - 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs (eu1.storj.io:7777)
  * Date: 2023-05-07T00:00:00Z
   # IN: 1016.8MB
   # Increase used space: 2698.1MB
   # Efficiency: 265%
  * Date: 2023-05-08T00:00:00Z
   # IN: 6900.1MB
   # Increase used space: 4161.8MB
   # Efficiency: 60%
  * Date: 2023-05-09T00:00:00Z
   # IN: 7822.3MB
   # Increase used space: 5942.5MB
   # Efficiency: 76%
  * Date: 2023-05-10T00:00:00Z
   # IN: 10169.1MB
   # Increase used space: 5409.1MB
   # Efficiency: 53%
  * Date: 2023-05-11T00:00:00Z
   # IN: 7957.3MB
   # Increase used space: 4329.4MB
   # Efficiency: 54%
  * Date: 2023-05-12T00:00:00Z
   # IN: 9272MB
   # Increase used space: 6831.8MB
   # Efficiency: 74%
  * Date: 2023-05-13T00:00:00Z
   # IN: 7556.7MB
   # Increase used space: 0MB
   # Efficiency: 0%

  * Total :
   # IN: 50694.3MB
   # Total used space: 29372.7MB
   # Efficiency: 58%

 - 12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB (europe-north-1.tardigrade.io:7777)
  * Date: 2023-05-07T00:00:00Z
   # IN: 80.3MB
   # Increase used space: 288MB
   # Efficiency: 358%
  * Date: 2023-05-08T00:00:00Z
   # IN: 453MB
   # Increase used space: 426MB
   # Efficiency: 94%
  * Date: 2023-05-09T00:00:00Z
   # IN: 367.2MB
   # Increase used space: 404.5MB
   # Efficiency: 110%
  * Date: 2023-05-10T00:00:00Z
   # IN: 468.8MB
   # Increase used space: 436.7MB
   # Efficiency: 93%
  * Date: 2023-05-11T00:00:00Z
   # IN: 493.6MB
   # Increase used space: 534.1MB
   # Efficiency: 108%
  * Date: 2023-05-12T00:00:00Z
   # IN: 425.3MB
   # Increase used space: 481.8MB
   # Efficiency: 113%
  * Date: 2023-05-13T00:00:00Z
   # IN: 291.6MB
   # Increase used space: 0MB
   # Efficiency: 0%

  * Total :
   # IN: 2579.8MB
   # Total used space: 2571.1MB
   # Efficiency: 100%


Per day:
 - Date: 2023-05-07T00:00:00Z
  * IN: 7044.1MB
  * Increase used space: 7587.1MB
  * Efficiency: 108%
 - Date: 2023-05-08T00:00:00Z
  * IN: 40759.2MB
  * Increase used space: 13408.9MB
  * Efficiency: 33%
 - Date: 2023-05-09T00:00:00Z
  * IN: 34909.5MB
  * Increase used space: 14902.4MB
  * Efficiency: 43%
 - Date: 2023-05-10T00:00:00Z
  * IN: 48113.2MB
  * Increase used space: 17364MB
  * Efficiency: 36%
 - Date: 2023-05-11T00:00:00Z
  * IN: 37782.3MB
  * Increase used space: 18490.6MB
  * Efficiency: 49%
 - Date: 2023-05-12T00:00:00Z
  * IN: 47441.6MB
  * Increase used space: 20507.8MB
  * Efficiency: 43%
 - Date: 2023-05-13T00:00:00Z
  * IN: 40275.9MB
  * Increase used space: 0MB
  * Efficiency: 0%

Overall:
 - IN: 256325.8MB
 - Total used space: 92260.8MB
 - Efficiency: 36%

So, the efficiency is only 36%. Although this is the worst figure of all my nodes, no node exceeds 50% overall efficiency and the mean is about 40-45%. It is also interesting that on some days the efficiency exceeds 100% (recoveries? a different cut-off time for bandwidth than for storage-at-rest?). There is also a big difference in efficiency between the satellites; the US ones in particular turn out to be the worst.

I suspected that it might have something to do with not winning the races (especially because I’m located in Europe, and the US satellites have the worst efficiency for me). So I checked the logs:

root@Storj-node4:~# docker logs storagenode 2>&1 | grep -c "uploaded"
255340
root@Storj-node4:~# docker logs storagenode 2>&1 | grep -c "upload started"
256718
root@Storj-node4:~# docker logs storagenode 2>&1 | grep -c "piecedeleter"
25745
root@Storj-node4:~# docker logs storagenode 2>&1 | grep -c "sent to trash"
25743

However, these logs don’t seem to support this, because this node appears to win over 99.4% of the races.
Also, the fact that the trash is only 17GB, and the small number of piecedeleter messages, don’t support this as the source of the discrepancy either.
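
For completeness, that win rate follows directly from the two counters above:

echo "scale=2; 255340 * 100 / 256718" | bc    # => 99.46 (% of started uploads that completed)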

Any other thoughts / explanations from your side concerning this gap?
Any reason to make some adjustments to the settings?
Is this also a focus point for the developers?

Post-script:

  • My first post on this subject seems to have been removed and not reposted after review by the staff? An explanation would be appreciated.
  • I wasn’t sure whether this topic should have been posted in the developer section. Since it is of more interest to SNOs, and I’m an SNO myself, I decided to post it here.

Thank you for pointing me in the right direction; indeed, I can confirm this is most probably a big overestimation of the real traffic. Considering that the daily traffic according to the dashboard is about 35GB, the interfaces should show at least 190GB in this case, yet they only show 118GB:

root@Storj-node4:~# uptime
 22:43:50 up 5 days, 13:23,  2 users,  load average: 0,00, 0,01, 0,00
root@Storj-node4:~# cat /proc/net/dev
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo: 7624789   20648    0    0    0     0          0         0  7624789   20648    0    0    0     0       0          0
  ens3: 123635689946 116960457    0    7    0     0          0         0 18325589070 52649344    0    0    0     0       0          0
docker0: 13899024335 48299283    0    0    0     0          0         0 116431320995 88342120    0    0    0     0       0          0
   wg0: 118509901132 89686551    0    0    0     0          0         0 16059603700 52535606    0    0    0     0       0          0
vethc51e7a2: 14451690075 47671656    0    0    0     0          0         0 115008567543 87294201    0    0    0     0       0          0

I’m using a VPN to overcome multiple levels of NAT, so wg0 is the interface of interest here.

The bad news is that it’s still considerably more than expected: I would have expected about 90GB, meaning at least 20% overhead.
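
For reference, converting the wg0 counter above into a per-day figure (assuming the counter started at boot; uptime is roughly 5.56 days):

# average received bytes per day on wg0 since boot
awk -v up_days=5.56 '$1 == "wg0:" {printf "%.1f GB/day\n", $2 / 1e9 / up_days}' /proc/net/dev
# with the numbers above: 118509901132 bytes / 5.56 days ≈ 21.3 GB/day, versus ~35 GB/day on the dashboard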

Please note that this is an overestimation of races won, see this post: Short-lived data - #9 by BrightSilence

this is because of this bug

I’ve been running 5 nodes for about a month. Every time I look at the dashboard I think: that’s a big difference between ingress and the increase in storage. So in the end I wrote a script to confirm or refute my suspicion, and I’m hoping you can help me with your thoughts and insights.

So this is my dashboard:


Essentially, the ingress is about 34-48GB a day. However, the increase in storage is only about 16GB per day (about one third to one half of the ingress).

So I wrote this jq-script in bash (clean version at: echo "$( wget -qO - localhost:14002/api/sno | jq -r .satellites[].id | while rea - Pastebin.com ):
echo "$( wget -qO - localhost:14002/api/sno | jq -r .satellites[].id | while read -r sNode; do echo ','; wget -qO - localhost:14002/api/sno/satellite/$sNode; done )]" | sed -z 's/^,/[/' |
jq -r '
    "Satellite per day: ",
    (
        .[] |
        (
            " - " + .id + " (" + .audits.satelliteName + ")",
            .bandwidthDaily[] as $bw |
            (.storageDaily | map(select(.intervalStart == $bw.intervalStart)) | .[0].atRestTotalBytes) as $space |
            (.storageDaily | map(select(.intervalStart > $bw.intervalStart)) | .[0]?.atRestTotalBytes) as $spaceTomorrow |
            (($spaceTomorrow // $space) - $space) as $spaceInc |
            ($bw.ingress | .repair + .usage) as $ing |
            (
                "  * Date: " + $bw.intervalStart,
                "   # IN: " + ($ing / 100000 | round | . / 10 | tostring) + "MB",
                "   # Increase used space: " + ($spaceInc / 100000 | round | . / 10 | tostring) + "MB",
                "   # Efficiency: " + ($spaceInc / $ing * 100 | round | tostring) + "%"
            )
        )
    ),
    "",
    "Per day: ",
    (
        [.[].bandwidthDaily[]] as $bw |
        [.[].storageDaily[]] as $stor |
        (
            $bw | group_by(.intervalStart)[] | (
                .[0].intervalStart as $today |
                ([$stor[].intervalStart] | map(select(. > $today)) | min) as $tomorrow |
                ($stor | map(select(.intervalStart == $today).atRestTotalBytes) | add) as $space |
                ($stor | map(select(.intervalStart == $tomorrow).atRestTotalBytes) | add) as $spaceTomorrow |
                ( [.[].ingress | (.repair + .usage)] | add ) as $ing |
                (($spaceTomorrow // $space) - $space) as $spaceInc |
                " - Date: " + $today,
                "  * IN: " + ($ing / 100000 | round | . / 10 | tostring) + "MB",
                "  * Increase used space: " + ($spaceInc / 100000 | round | . / 10 | tostring) + "MB",
                "  * Efficiency: " + ($spaceInc / $ing * 100 | round | tostring) + "%"
            )
        ),
        "",
        "Total: ",
        (
            ( [$bw[].ingress | (.repair + .usage)] | add ) as $ing |
            ( [$stor[].intervalStart] | max) as $maxdate |
            ( [$stor[].intervalStart] | min) as $mindate |
            (($stor | map(select(.intervalStart == $maxdate).atRestTotalBytes) | add) - ($stor | map(select(.intervalStart == $mindate).atRestTotalBytes) | add)) as $space |
            " - IN: " + ($ing / 100000 | round | . / 10 | tostring) + "MB",
            " - Total used space: " + ($space / 100000 | round | . / 10 | tostring) + "MB",
            " - Efficiency: " + ($space / $ing * 100 | round | tostring) + "%"
        )
    )'

Showing this output (full version: Showing this output:> Satellite per day:> - 12tRQrMTWUWwzwGh18i7Fqs67kmdhH9 - Pastebin.com, because the whole layout was distorted):

Per day:

 - Date: 2023-05-01T00:00:00Z
  * IN: 42113.5MB
  * Increase used space: 21750.4MB
  * Efficiency: 52%
 - Date: 2023-05-02T00:00:00Z
  * IN: 39941MB
  * Increase used space: 4858.4MB
  * Efficiency: 12%
 - Date: 2023-05-03T00:00:00Z
  * IN: 38560.3MB
  * Increase used space: 17694.3MB
  * Efficiency: 46%
 - Date: 2023-05-04T00:00:00Z
  * IN: 37709.4MB
  * Increase used space: 21885.2MB
  * Efficiency: 58%
 - Date: 2023-05-05T00:00:00Z
  * IN: 35734MB
  * Increase used space: 10888.4MB
  * Efficiency: 30%
 - Date: 2023-05-06T00:00:00Z
  * IN: 41103.6MB
  * Increase used space: 28519.2MB
  * Efficiency: 69%
 - Date: 2023-05-07T00:00:00Z
  * IN: 54274.2MB
  * Increase used space: -4460.8MB
  * Efficiency: -8%
 - Date: 2023-05-08T00:00:00Z
  * IN: 32035.5MB
  * Increase used space: 14108.8MB
  * Efficiency: 44%
 - Date: 2023-05-09T00:00:00Z
  * IN: 24599.2MB
  * Increase used space: 9432.8MB
  * Efficiency: 38%
 - Date: 2023-05-10T00:00:00Z
  * IN: 51574.6MB
  * Increase used space: 12455.4MB
  * Efficiency: 24%
 - Date: 2023-05-11T00:00:00Z
  * IN: 40622.4MB
  * Increase used space: 20890.4MB
  * Efficiency: 51%
 - Date: 2023-05-12T00:00:00Z
  * IN: 49170.3MB
  * Increase used space: 76446.4MB
  * Efficiency: 155%
 - Date: 2023-05-13T00:00:00Z
  * IN: 34594.7MB
  * Increase used space: 0MB
  * Efficiency: 0%

Total:

 - IN: 249129.4MB
 - Total used space: 92275.6MB
 - Efficiency: 37%

Indeed, it shows an overall efficiency of 37%.
It is also interesting that some days seem to show an efficiency of >100% (restorations? a different cut-off time for bandwidth than for storage-at-rest?).

I thought maybe a lot of the ingress traffic could be explained by not winning the races against other nodes.
So…

root@Storj-node4:~# docker logs storagenode 2>&1 | grep -c "upload started"
239037
root@Storj-node4:~# docker logs storagenode 2>&1 | grep -c "uploaded"
238371

So I seem to win 99.7% of the races. Not bad, so to speak, but that doesn’t seem to explain the gap between ingress and storage.

The fact that the trash is only 17GB, and that this node is my newest (less than 30 days old), doesn’t explain this part either, I would say.

So, any thoughts on the matter? Any explanation for why this protocol seems to be so inefficient? And if so, are there tuning options or developer initiatives to improve it?

Post-script: I wasn’t sure whether this shouldn’t go in the developer section. But because I’m not a developer and it’s an interesting question for storage node operators, I decided to post it here.

Why did you create several topics with the same content?