Updates on Test Data

No, you have every right. However, all things have priorities…
And the quick solution is to use other monitoring software until the integrated one is fixed, as already suggested.

1 Like

It seems you need to do the reverse - reduce the priority for the storagenodes, not change the priority for the Receiver.

I agree. I don’t know if this was intentionally delayed, but it takes way too long to roll out these fixes to the Docker nodes. Unfortunately, it’s the same with Storage node performance for filewalker is still extremely slow on large nodes · Issue #6998 · storj/storj · GitHub

1 Like

Guys, over the last two days both of my nodes have become almost full… I can still give a few more TB to the smaller ones. Is it worth it, or is this just trial data that will be deleted soon?

As it didn’t help anyway, I limited the node on the same WLAN as the receiver.
Yes, a cable is not an option (the other node on cable will likely go to WLAN as well after moving next month) and won’t be for a long time.

Perhaps QoS doesn’t work properly, or is it overloading your router?

I will keep an eye out for this, but so far I still have a 100% audit score on all satellites. It is possible that, as I have not yet reached vetted status on Saltlake, any audit errors are not impacting the audit score yet.

No databases have been deleted. This appears to be due to the error in the generation of the test data as described by littleskunk in the previous post.

Unlikely. GC works on data deleted the normal way, i.e. when the customer deleted it explicitly. TTL data is deleted by the node itself, without a GC run and without sitting in the trash.

As far as I know, GC does not distinguish between TTL and standard data; instead it simply checks every blob against the bloom filter.
If for whatever reason a file was deleted before the TTL came into effect, as was the case with the test data, then the garbage collector will check the TTL blob against the bloom filter, detect that this blob is not part of it, and therefore move it to trash.
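
In other words, roughly like this (a minimal sketch in Go of the idea, not Storj’s actual garbage-collection code; all names are made up for illustration):

// Toy sketch: a retain pass checks every piece against the bloom filter and
// never asks whether the piece carries a TTL.
package main

import "fmt"

// bloomFilter stands in for the filter the satellite sends with a retain request.
type bloomFilter interface {
	Contains(pieceID string) bool
}

type piece struct {
	ID     string
	HasTTL bool // irrelevant to the check below
}

// retain moves every piece that is not in the bloom filter to trash, TTL or not.
// This is why a TTL piece that was deleted (e.g. overwritten) on the satellite
// side ends up in trash instead of simply expiring on the node.
func retain(pieces []piece, bf bloomFilter) (kept, trashed []piece) {
	for _, p := range pieces {
		if bf.Contains(p.ID) {
			kept = append(kept, p)
		} else {
			trashed = append(trashed, p)
		}
	}
	return kept, trashed
}

// setFilter is a toy stand-in for a real bloom filter (no false positives here).
type setFilter map[string]bool

func (s setFilter) Contains(id string) bool { return s[id] }

func main() {
	pieces := []piece{{"a", false}, {"b", true}, {"c", true}}
	bf := setFilter{"a": true, "c": true} // the satellite still knows about a and c
	kept, trashed := retain(pieces, bf)
	fmt.Println("kept:", kept, "trashed:", trashed) // "b" goes to trash despite its TTL
}

The only question the pass asks is “is this piece in the bloom filter?”, never “does this piece have a TTL?”.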

Clearly, TTL data being deleted via GC was an issue with the generation of the test data and not an issue specific to my nodes, given that:
A) There was an issue with test data being overwritten, as reported by littleskunk.
B) Multiple SNO’s have reported an increase in trash size for saltlake.
C) No databases have been deleted on my part, and this affects multiple nodes on multiple machines.

2 Likes

The BF exists only for data deleted by the customers in the first place. If the customer deleted data before the TTL expired, of course it will be collected by GC.

And that is exactly what happened with the test data, and is the cause of the test data being deleted by GC before its TTL.
Please see:

If the data is overwritten, then yes, that means a DELETE, then an UPLOAD.

Since it’s a DELETE, then, well, GC is willing to help…

This is a very important question for me. Since I do not use RAID, but instead create new nodes on new disks (realizing that I am losing a lot of money on retention), I would like to offer several solutions for consideration:

  1. Allow operators who have nodes in one /24 network to set a priority between their nodes, thereby regulating and redistributing the load during maintenance.
  2. Allow operators to specify the minimum TTL they would like to receive; this would also relieve the load if a node (or several nodes in the /24 subnet) cannot cope, or during maintenance.

I want to emphasize that this is the WISH of the operator, and not a hard and fast rule. The wish would be known to the satellite, which decides whether to send data or not. If everything is fine in the network, the node’s wishes are honored - this would help remove the long tail; if there is not enough space in the network, the satellite sends all the data, ignoring the WISH of the operator.
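
Roughly what I have in mind, as a purely hypothetical sketch in Go (none of these fields or functions exist on the satellite today; they only illustrate the “wish” idea):

// Hypothetical sketch of wish-based selection among nodes behind one IP.
package main

import "fmt"

type nodeWish struct {
	NodeID      string
	Priority    int // wish #1: operator-set priority among their own nodes
	MinTTLHours int // wish #2: minimum TTL the operator would like to receive
}

// selectForUpload picks candidates for a segment with the given TTL (in hours).
// If the network has spare capacity, the wishes are honored; otherwise they are ignored.
func selectForUpload(candidates []nodeWish, ttlHours int, networkHasSpareCapacity bool) []nodeWish {
	if !networkHasSpareCapacity {
		return candidates // the satellite overrides all wishes when space is tight
	}
	best := -1
	for _, n := range candidates {
		if n.MinTTLHours > ttlHours {
			continue // the node wished for longer-lived data only
		}
		if n.Priority > best {
			best = n.Priority
		}
	}
	var picked []nodeWish
	for _, n := range candidates {
		if n.MinTTLHours <= ttlHours && n.Priority == best {
			picked = append(picked, n)
		}
	}
	return picked
}

func main() {
	nodes := []nodeWish{{"node-1", 2, 0}, {"node-2", 1, 0}, {"node-3", 1, 24 * 90}}
	fmt.Println(selectForUpload(nodes, 24*30, true))  // only node-1: highest priority, accepts 30-day TTL
	fmt.Println(selectForUpload(nodes, 24*30, false)) // all nodes: wishes ignored
}

The wishes only take effect when the network has spare capacity; otherwise the satellite behaves exactly as it does now.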

Thus, this will allow us to balance the load, and the satellites will decide for themselves, as before, to whom and how much to send.
My proposals are an addition to the concept of a “slow” node, which the satellite already tracks and knows is losing the race.

Please consider my suggestions.

How would Storj know one SNO controls many Identities in a /24 - would their node-selection process now have to look at email or ETH addresses too? And a /24 covers 250+ IPs… could you bump your priority over your neighbors?

Don’t we already control the priority between our nodes: by determining which is available for ingress or not? If you want to fill one node before the others… have the others refuse new data. (Many SNOs migrating/repairing nodes alter their STORAGE flag to do this all the time)

This seems bad for the customer. One who uses a 3-month TTL gets lower upload performance because some SNOs reject them… but the customer that has automation on their side to simply delete data after 3 months gets full speed? When a customer chooses to delete their paid data should have nothing to do with which SNOs are available to them.

I like the direction the new node-selection is heading. It limits traffic to nodes that start to appear overloaded, and sends it to nodes it detects are faster instead. Everyone can still participate: but SNOs that (for whatever reason) have faster configs get more data (thus higher rewards). Equality of opportunity, not equality of outcome :+1:

4 Likes

Sorry, maybe I missed something.
Are you saying that for maintenance it is possible to indicate in the configuration that the node only stores and is not ready to receive new data?
How do I write this correctly in the configuration?

I accept that; consider my proposal only for nodes on one IP address.

Yes: you know how in your “docker run” command you can specify the max space, like:

-e STORAGE="2TB"

(or in your config.yaml). SNOs that want their nodes to be idle-but-online (like while copying to a new HDD) will set their STORAGE to the same size the node is already using (or less). That way it will reject new uploads… but still be running to service downloads and pass audit/online checks.

You can just set it arbitrarily low (like 600GB or something) to stop new ingress. The node doesn’t actively start to remove data or anything: it doesn’t try to get smaller: it will just not take anything new, and be fine if it naturally gets smaller over time as the trash gets taken out.

When you’re done your maintenance, just set that STORAGE flag back to the proper size and the node will advertise it has free space again!

1 Like

I think the TV stream is already via IGMPv3.
It all points to the WiFi bottleneck; my LAN-connected node makes no difference and has never been limited.
At the moment, I’m busy renovating the new house I will move to, and I don’t mind the quick fix. I will make a new judgement when the infrastructure is moved and running.

Do you mean service-quality management? I have no idea where or how to set this up.
Devices are: Fritzbox 7590 AX (slow node on a mini PC connected via LAN), Fritz!Repeater 6000 (PC with a node connected to it via LAN), Fritzbox 7490 connected via LAN to the MediaReceiver 401.

Clearly, this is the old method of simply reducing the declared volume.
I thought something new had appeared.

see

The next best option is updating the mainboard drivers of my PC, but for now it works flawlessly again.

1 Like

@daki82
One possible solution for the Fritzbox router might be to move the storj nodes to the guest network, then set a percentage of bandwidth reserved for the home network, so the nodes can only consume what is left.

4 Likes