Allow merging two nodes into one

Hello,

It would be useful to have a function to merge 2 or more nodes into one.

As a node operator I had 3 nodes, each on its own hard drive, as recommended.

I have now migrated from 3 separate disks to a RAID-Z1 (ZFS) pool, so having 3 separate nodes on the same volume no longer makes sense.

Would it be possible to create something like Graceful Exit, but where you specify exactly which node you want to transfer your data to?

Thank you

A split would also be good.
Also a multi-disk node where the audit is per disk.
A node merge makes a lot of sense with 26 TB disk sizes becoming available soon.


Yes, it is not necessarily about RAID-Z; with a single larger disk it would be the same situation.

Say I had 3 x 4TB drives and now I have a 20TB drive: I no longer need to have 3 nodes on 1 drive.

Having 3 nodes on one disk increases the I/O delay much more than having only 1 node with the same amount of data.

Likewise, whether you have 1 node or 3, they will receive the same total amount of ingress from the satellites.

thanks

Yeah, but 3 disks serve data 3 times faster than 1 big disk.

I would like to bring this topic up, as today's prices make a lot of small nodes unprofitable, so it would be good to give people more flexibility. A lot of people have some small and some big nodes, and the ability to merge nodes would let people save on electricity. It could work just like GE, but towards one specific node, so there would be no need for big code changes.


If you think it’s a small code change - your PR is welcome!

There are several issues:

  1. Using random nodes for transfer is by design. Data should not be centralized.
  2. There is a rule of one node per /24 subnet for the same reason.
  3. Your node must be at least 6 months old to call a Graceful Exit. I do not know whether a small node would still be small after 6 months.

Due to the restrictions above I do not think it would be implemented anytime soon.
At the moment, it looks like you may set the allocated storage to zero so the node does not accept new data (it will still be paid for the used storage and egress, and since you would use the same wallet address, it will be added to a common payout), or you may call a GE.

Hello Alexey,
I started Storj as a "learn something new" project.
I had an HP N40L MicroServer and 4x 1.0TB HDDs.
Now I have 2x 6TB HDDs. I know how to migrate 1:1, but how do I migrate 2:1?
Migrating 1:1 and doing GE on 2 nodes would mean 3 months of growth is lost…

Hello @dancekid,
Welcome to the forum!

You may call a Graceful Exit for the smallest one; the nodes will then no longer split the ingress, and all ingress would go to the remaining one, so nothing is lost. You would still have one node and would use only one HDD; the second one can be used for something else.

However, you cannot call a GE if the node is younger than 6 months. So in your case you have several options:

  • leave it as is;
  • reduce the allocation to zero on one of the nodes; it would shrink over time (but slowly). It would still have egress, and will be paid for used space, but will not grow anymore (see the sketch after this list);
  • shut down the second node and lose its held amount.
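
For a docker node, the allocation is normally set with the STORAGE variable in the run command (or the corresponding environment entry in a compose file). A minimal sketch of what the second option could look like; every value below is a placeholder, not taken from your setup:

    docker run -d --restart unless-stopped --stop-timeout 300 \
        -e WALLET="0x0000000000000000000000000000000000000000" \
        -e EMAIL="you@example.com" \
        -e ADDRESS="your.ddns.example.com:28967" \
        -e STORAGE="0TB" \
        --mount type=bind,source=/volume1/storj/identity,destination=/app/identity \
        --mount type=bind,source=/volume1/storj/data,destination=/app/config \
        -p 28967:28967/tcp -p 28967:28967/udp \
        --name storagenode storjlabs/storagenode:latest

With STORAGE="0TB" the node keeps serving (and being paid for) the data it already holds, but should not accept new ingress.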

I tried setting the node size/allocation to 0TB and now it is suspended on one satellite. What did I do wrong?

Maybe set it to 0.55 TB, the minimal node size.
Suspended at what percentage?

Check logs

Now 2 of my top-performing nodes have various degrees of suspension after following Alexey's guidance; how can I get them back in compliance? The suspension is between 50-90%. Alexey said to set it to 0TB, not 0.55TB, so that over time only my 3rd node on the same IP would stay active/grow.

What exactly did you do?
What is the setup? Windows? Docker? Linux?
I can't find anything here.

Hello @zaphod007,
Welcome to the forum!

Could you please show your node’s scores from the dashboard? The suspension may happen for two reasons:

  1. Your node is answering audit requests with an unexpected response; in this case the suspension score is affected, and if it drops below 60% your node becomes suspended. You need to fix the root cause of the failing audits (a log-check sketch follows this list).
  2. Your node was offline (did not respond to audit requests at all); in this case the online score is affected, and if it drops below 60% your node becomes suspended. You need to fix the offline issue. You may check when your node was offline with these scripts.
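
If the suspension score is the one affected, a quick way to look for the failing audits on a docker node is to filter the log for audit downloads that did not finish cleanly. A rough sketch, assuming the container is named storagenode and logs go to the docker log driver:

    docker logs storagenode 2>&1 | grep GET_AUDIT | grep -E "failed|error"

Whatever errors show up there (file not found, permission denied, database locked, and so on) point at the root cause that needs to be fixed.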

The suspension score magically went back to 100% overnight, so all seems fine.

What did I do? I reduced the allocation to zero on one of the nodes, so STORAGE=0TB.

The nodes are running on a Synology NAS (Linux), managed via Portainer/docker-compose, with all software up to date.

Is there another way to merge 3 nodes into 1 node? Or is there anything else I need to consider before setting allocation to 0TB?

You may also call a Graceful Exit (see the Graceful Exit Guide), but it is also advisable to set the allocation either to zero or to any other number below the current storage usage to prevent ingress to these nodes.
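
For reference, on a docker node the Graceful Exit is started from inside the container with the exit-satellite command. A sketch, assuming the container is named storagenode (the binary path can differ between image versions; the Graceful Exit Guide has the exact invocation for each setup):

    docker exec -it storagenode /app/storagenode exit-satellite --config-dir /app/config --identity-dir /app/identity

The command then interactively asks which satellites you want to exit.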

The drop in the suspension score is unlikely to have happened because you set the allocation to zero. Only the logs can show the reason, as described in the guide in my previous post.

Hi,

I had a lot of 1TB nodes; they have now shrunk to 500GB, but if I configure STORAGE=0G the node doesn't start. There is a minimum STORAGE requirement of 500GB.

2023-08-17T04:32:04Z ERROR piecestore:monitor Total disk space is less than required minimum {"process": "storagenode", "bytes": 500000000000}

I think the minimum required size is a good thing, but it would also be nice to allow a 0 size configuration, to allow shrinking a node to 0 or close to 0 and then calling a GE on that node.

Thanks

See

storagenode setup --help | grep minimum-disk-space
      --storage2.monitor.minimum-disk-space memory.Size          how much disk space a node at minimum has to advertise (default 500.00 GB)

So, you may change this option to 0B too.
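
On a docker node this option can be set either in config.yaml or by appending it after the image name so it is passed straight to the storagenode process. A sketch of the latter, with the rest of the run command left as a placeholder:

    docker run -d <your usual parameters> --name storagenode storjlabs/storagenode:latest \
        --storage2.monitor.minimum-disk-space=0B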

Oooooh!

Thanks! I’m going to try it today.


By the way, I think we could detect nodes with 0B advertised space and treat them as unhealthy, to allow the repair worker to move data off them if the number of healthy pieces drops below the repair threshold.
This could basically work as a partial GE.
