This month I got 75TB of ingress, even with some performance problems (or rather a lack of performance) at times.
Your success rate from at least saltlake should be higher.
OTOH, today my ingress was hovering around 120mbps, but it was higher in the past:
(some dips are there because of how the test was performed and some are definitely because my node was not fast enough)
I am still trying to figure out whether this is because the traffic has dropped or because my node is too slow. I just restarted my node VM (I needed to expand the virtual disk) and changed some settings, so we will see how it runs now.
Right now it is difficult to tell whether the traffic is low because the node is too slow or just because nobody is uploading. Storj said that at some point they will add a way to know how often a node is selected by the new algorithm; hopefully that will improve things. However, I'm sure they have more important things to do right now.
My multinode dashboard says 52TB of ingress, but this is way off from what my router tells me. I think this should be fixed in version 105.x or 106.x? Something about measuring a different value (if I remember correctly).
I took the 75TB from the node dashboard, apparently it's wrong. If I had thought about it for a minute I would have realized that.
117mbps * 30 days would be 37TB
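A quick sanity check of that figure (an illustrative sketch, assuming decimal units where 1 TB = 10^12 bytes and a roughly constant rate over the month):

```python
# Rough back-of-the-envelope: sustained link rate -> monthly volume.
# Assumes decimal units (1 TB = 10**12 bytes) and a constant rate.

def monthly_tb(mbps: float, days: int = 30) -> float:
    bytes_per_second = mbps * 1_000_000 / 8       # megabits/s -> bytes/s
    total_bytes = bytes_per_second * days * 86_400  # seconds in `days`
    return total_bytes / 1e12                      # bytes -> TB

print(round(monthly_tb(117), 1))  # ~37.9 TB over 30 days
```

So a sustained ~117 Mbps does indeed work out to roughly 37-38TB per month, nowhere near 75TB.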
You suggested there was an internal (Storj team) debate about stopping the upload of test data. That does not make sense to me after the Storj network growth plan announcement, which includes massive test data uploads. Surely I misunderstood something.
Ok, now I get it. The announcement is more about the growth we are expecting with or without signed deals. If they don't get signed, we have to keep uploading test data for the next big customers. If the deals do get signed, there is no need to allocate twice as much capacity.
Interesting, which version is this problem on? Alexey said this problem was in 104.x and was fixed in 105.x, but the filewalker needs to go over the data on start to update the databases. On one node I am waiting until it finishes updating, because my trash is not updating. I think the data was deleted with TTL, then cleared from trash by TTL, but the used space was not updated. I have almost 48TB of trash like this, though probably not all of the 48TB is from this.
This problem really hurts speed, because a lot of my nodes falsely report as full because of it.
I'll re-run used-space once more after the 1.107 update, but after that it's on Storj to figure out a way to correctly track used space; it really is getting tiring.
That is correct but for a different bug. It was for trash cleanup. This new bug is for TTL cleanup.
Edit: And I thought it was already fixed in 104, but for some reason some people believe it wasn't included in the 104 release. We have a changelog here on the forum to look it up. To my knowledge it was in 104.
I see this trash problem from the time when you overwrote some TTL test data, so I have 0.7-0.9 TB of trash, with some GB-sized files in it. Previously I had the filewalker running with the WARN log level, so I don't have proof that it finished its work; yesterday I restarted it with the info level, but it is not done yet. I have a 3.9TB node with 0.7TB of virtual trash.
Edit: I have something like this on a lot of nodes, up to 48TB of trash all together.
I have quite a few log lines that show a count of 600k every hour. I had stopped the ingress, so the free space on the system should have been updated/increased, but it didn't change. The only time I saw any increase in free space was during trash cleanup.