Updates on Test Data

This month I got 75 TB of ingress, even with some performance problems (or rather, a lack of performance) at times.
Your success rate from at least saltlake should be higher.
OTOH, today my ingress was hovering around 120 Mbps, but it was higher in the past:
[ingress bandwidth graph]
(some dips are there because of how the test was performed and some are definitely because my node was not fast enough)

I am still trying to figure out whether this is because the traffic has dropped or because my node is too slow. I just restarted my node VM (I needed to expand the virtual disk) and changed some settings; we will see how it runs now.

Right now it is difficult to tell whether traffic is low because the node is too slow or just because nobody is uploading. Storj said that at some point they will add a way to know how often a node is selected by the new algorithm; hopefully that will improve things. However, I'm sure they have more important things to do right now.

This doesn't fit your chart. Are there other network interfaces? :thinking:

This is my network graph for this month, also with some dips, but never more than 150 Mbps.

My multinode dashboard says 52 TB of ingress, but this is way off from what my router tells me. I think this should be fixed in version 105.x or 106.x? Something about measuring a different value (if I remember correctly).


1.106 is the version that brings back reliable ingress numbers.


I took the 75 TB from the node dashboard; apparently it's wrong. If I had thought about it for a minute I would have realized that.
117 Mbps * 30 days would be about 38 TB
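The conversion behind that figure can be sketched explicitly (a back-of-the-envelope check, using decimal terabytes, i.e. 10^12 bytes):

```python
# Back-of-the-envelope: sustained ingress rate -> total volume over 30 days.
rate_mbps = 117                     # megabits per second (sustained average)
seconds = 30 * 24 * 60 * 60         # seconds in 30 days

total_bytes = rate_mbps * 1_000_000 / 8 * seconds  # megabits/s -> bytes over the period
total_tb = total_bytes / 1e12                      # decimal terabytes

print(f"{total_tb:.1f} TB")  # -> 37.9 TB
```

Note that the dashboard's 52 TB quoted above would still be well above this, which fits the thread's point that the pre-1.106 ingress numbers were unreliable.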

I am not that far off with my new Pi 5 setup; I am hovering around 100 Mbps.


A debate about stopping the test data uploads, after the Storj network growth plan and capacity targets announcement?



Is that a question for me? I don't understand. I am not debating it.

You suggested there was an internal (Storj team) debate about stopping the upload of (test) data. That does not make sense to me after the Storj network growth plan announcement, which includes massive test data uploads. Surely I misunderstood something.

OK, now I get it. The announcement is more about the growth we are expecting with or without signed deals. If they don't get signed, we have to keep uploading test data for the next big customers. If the deals do get signed, there is no need to allocate twice as much capacity.


Looks like the TTL on the test data has started to hit:
Storj Network Statistics - Grafana (storjstats.info)
We can see used space slowly starting to go down.

ā€¦and Europe is getting a spring cleaning.

Not on the nodes…

It seems deleting expired TTL pieces does not update the used space stats.
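One plausible way this kind of drift happens (a purely hypothetical sketch, not the actual storagenode code): if the node keeps a cached used-space counter that only the upload and trash paths adjust, a TTL-expiry path that deletes pieces directly leaves the counter stale until a full filewalker rescan recomputes it.

```python
# Hypothetical sketch of a cached used-space counter that the TTL-delete
# path forgets to update, mirroring the symptom reported in this thread.
class Store:
    def __init__(self):
        self.pieces = {}   # piece_id -> (size, expiry)
        self.used = 0      # cached used-space counter

    def upload(self, piece_id, size, expiry=None):
        self.pieces[piece_id] = (size, expiry)
        self.used += size  # accounted for on the upload path

    def ttl_collect(self, now):
        for pid, (size, expiry) in list(self.pieces.items()):
            if expiry is not None and expiry <= now:
                del self.pieces[pid]  # piece removed from disk...
                # BUG: the matching `self.used -= size` is missing here

    def filewalker(self):
        # a full rescan repairs the cached counter
        self.used = sum(size for size, _ in self.pieces.values())

s = Store()
s.upload("a", 100, expiry=10)
s.upload("b", 50)
s.ttl_collect(now=20)
print(s.used)   # still 150, although only 50 bytes remain on disk
s.filewalker()
print(s.used)   # 50 after the rescan
```

The startup used-space filewalker discussed below plays a role analogous to `filewalker()` here: it can repair the counter, but only once it actually completes a full pass.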


Interesting which version this problem is on. Alexey said this problem was in 104.x and was fixed in 105.x, but the filewalker needs to run on startup to update the databases. On one node I am waiting until it updates, because my trash is not updating. I think the TTL data was deleted, then cleared from trash by TTL, but the space was not updated. I have almost 48 TB of trash like this, though not all of the 48 TB is from this.
This problem is really hurting speed, because a lot of my nodes falsely report as full because of it.


I'll re-run used-space once more after the 1.107 update, but after that it's on Storj to figure out a way to correctly track used space; it really is getting tiring.


That is correct but for a different bug. It was for trash cleanup. This new bug is for TTL cleanup.

Edit: I thought it was already fixed in 104, and for some reason some people believe it wasn't included in the 104 release. We have a change log here in the forum to look it up. To my knowledge it was in 104.

I mentioned this elsewhere

I see this problem with trash from the time when you overwrote some of the TTL test data, so I have 0.7-0.9 TB of trash with some GB-sized files in it. Previously I had the filewalker running at WARN log level, so I don't have proof that it finished its work; yesterday I restarted it at INFO level, but it is not done yet. I have a 3.9 TB node with 0.7 TB of virtual trash.

Edit: I have something like this on a lot of nodes, up to 48 TB of trash all together.

Now we know why it is like this.

I have quite a few log lines that show a count of 600k every hour. I had stopped the ingress, so the free space on the system should have been updated/increased, but it wasn't. The only time I saw any increase in free space was during trash cleanup.