Reduce storage size of one node

Hello
I have a node running in Docker on a QNAP. The capacity I initially configured is
-e STORAGE="80TB".
However, after 1 year, the space consumed is only about 10TB.

I would like to know how I can reduce the STORAGE parameter to 20TB.

Also, I would like to know if I can configure another node with Docker on the same QNAP, taking advantage of the space freed up from the current node's 80TB.

Regards

yes, no problem.

Yes. For optimum effect, take the other IP from the second provider and fix it to the matching node, where the IP is configured in the yaml or docker run command. There is no failover to the other IP address unless you reconfigure the node's config with that IP by hand…

Note you need a second, newly generated and signed identity. Get a new auth token.
Use different ports for security.
And god forbid the IPs ever switch: with the same port that is instant disqualification. So do not use the same port, increase it by one or so.
Then, in case of failover, they will coexist, but ingress will be shared.
Use the same payout Ethereum address and email.
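
For the second node's docker run that means roughly the following, assuming the standard storjlabs/storagenode image; the paths, wallet, email and IP here are only placeholders, use your own:

    # second node: its own identity/data folders, host ports bumped by one
    # (all paths and addresses below are example values)
    docker run -d --restart unless-stopped --stop-timeout 300 \
        -p 28968:28967/tcp -p 28968:28967/udp \
        -p 14003:14002 \
        -e WALLET="0xYOUR_PAYOUT_ADDRESS" \
        -e EMAIL="you@example.com" \
        -e ADDRESS="second.public.ip:28968" \
        -e STORAGE="20TB" \
        --mount type=bind,source=/share/storj2/identity,destination=/app/identity \
        --mount type=bind,source=/share/storj2/data,destination=/app/config \
        --name storagenode2 storjlabs/storagenode:latest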

Regards

To decrease the space, simply change the value from
-e STORAGE="80TB" to -e STORAGE="20TB".
Is there any document or link with the procedure to change this parameter?

Regarding the second node, from what I understand, I should configure each node to go out on a different public IP than the first one. Is this correct?
If I do this configuration, is there any benefit when it comes to generating payments?

In the case that both nodes have to go out through the same public IP (changing the ports), is there any benefit when it comes to generating payments?

This is not rocket science… change it, restart, done. Check it on the node dashboard.
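
For docker the usual sequence looks roughly like this (the container name is an example; the data stays on the mounted share, so nothing is lost):

    docker stop -t 300 storagenode
    docker rm storagenode
    # re-run your original docker run command, changing only the allocation:
    #   -e STORAGE="20TB"
    # then check the new value on the node dashboard (port 14002 by default)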

I don't know Docker, but it's the same as on Windows, where you just edit a .yaml file in a text editor. You can even specify less than the 10TB already used and the node will slowly shrink via customer deletes. Just do not advertise more than what physically exists minus 10%, or less than 500GB, and you are fine.
I set my own nodes a bit under what is used, to leave room for defragmentation.
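
On the yaml side it is a single line; if I remember right the key is storage.allocated-disk-space (the value below is just an example):

    # config.yaml - adjust the allocation, then restart the node
    storage.allocated-disk-space: 20.0 TB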

Maybe

Yes, and use different ports anyway, in case of a failover maybe far in the future.
If it's on a separate drive (maybe you can extract one from the RAID, I assume), do so. It gets more ingress / counts as two single nodes.

They count together as one node regarding data ingress. It's unnecessary, and do not run them on the same drive/RAID; it will have no benefits and will double the IOPS.

Benefits: ability to eject individual small nodes to reclaim space.

Drawbacks: none.

Where does the double IO come from?

I would read the ToS carefully around circumventing the /24 rules. If you run 15 nodes on the same hardware, they should be and are counted as one node for the purposes of data distribution for availability and durability, as long as they are on the same /24 network. If you spread them over multiple separate networks you may be breaking that.

There is no good reason to do that other than to violate the ToS, so don't do it.

What about filewalkers, GC, and r/w checks?

Data that would be hosted on a single node now would be distributed among multiple nodes. It’s the same total amount of data.

My guess is that multiple checks, filewalks, and simultaneous GC could clog the system more than just one node would.

The r/w checks are multiplied for sure. The others are shorter, but maybe simultaneous.

Like fat people on a slide: why should they form a line when they could all slide at once, just by making themselves smaller??

That's exactly the reason, let's wait for the new ToS…

If nodes are on separate disks/pools they do not affect each other. The system load should be negligible for almost any CPU, including ARM, especially if you followed the recommendation to run no more than one node per CPU core.