Node with LVM, how to replace disk?

My node is using LVM on an Ubuntu server with 3 disks of 1TB each. Since the node is almost full, I would like to replace/migrate one of the disks with a larger one (4TB).

The disks setup is the following:
sdb 8:16 0 931.5G 0 disk
└─jbod_group-storj_node 253:0 0 2.7T 0 lvm /storj_jbod
sdc 8:32 0 931.5G 0 disk
└─sdc1 8:33 0 931.5G 0 part
  └─jbod_group-storj_node 253:0 0 2.7T 0 lvm /storj_jbod
sdd 8:48 0 931.5G 0 disk
└─jbod_group-storj_node 253:0 0 2.7T 0 lvm /storj_jbod
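
For reference, the same layout from LVM's point of view can be listed with the commands below (output omitted since it depends on the system); jbod_group is the VG name visible in the lsblk output:

pvs -o pv_name,vg_name,pv_size,pv_free
vgs jbod_group
lvs -o +devices jbod_group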

Is there any simple way of replacing one of the above disks?

I have read about online pvmove / vgreduce, but I am worried about how long the process will take and about corrupting the data.

I am also thinking of stopping the node, putting one of the disks in another system, using “dd” to copy it onto the new disk, and starting the node with the new disk. This is faster, but it requires downtime and it is also somewhat risky.
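
Something like this is what I have in mind, assuming the old disk is /dev/sdb (a whole-disk PV) and the new 4TB disk shows up as /dev/sde on the other system (device names are just placeholders; a PV that sits on a partition like sdc1 would also need the partition grown first):

dd if=/dev/sdb of=/dev/sde bs=64M status=progress conv=fsync
# after the new disk is back in the node, grow the copied PV to fill the 4TB disk
pvresize /dev/sde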

The third option is to mirror everything onto the new disk; I have not explored this option enough to know the exact steps.
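
From what I understand it would look roughly like the following, but I have not tested it and the exact lvconvert invocations are my assumption (again with /dev/sde as a placeholder for the new disk):

# add a second copy of the LV on the new disk
lvconvert -m 1 jbod_group/storj_node /dev/sde
# wait for the copy to sync (e.g. watch lvs -a -o name,copy_percent),
# then drop the copy that lives on the old disks
lvconvert -m 0 jbod_group/storj_node /dev/sdb /dev/sdc1 /dev/sdd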

“pvmove” is quite safe. It essentially does the same thing as mirroring a complete disk and then taking the original offline, except not all at once.

Any estimate of how long it would take to pvmove, say, 900 GB? System specs, since they obviously matter:
AMD E-350 dual core (similar performance to an Intel Atom), 4 GB DDR3, SATA 3.
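
(Rough math on my side, assuming pvmove sustains something like 80 MB/s on this hardware: 900 GB / 80 MB/s ≈ 11,250 s, so a bit over 3 hours; at 50 MB/s it would be closer to 5 hours. I have no idea what throughput is realistic here.)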

What if a power outage occurs during pvmove and the system is restarted? Can it be resumed / does it resume automatically, or is the data lost?

Are you aware that with one drive failure, all the data in your LVM will be gone? Just asking…


I understand the point that the risk of one of the 3 LVM disks failing is higher than with a single-disk setup, but in both cases the data is gone if a failure occurs.

No, if you set up 1 node per disk, then only one node will be lost. It is recommended to run one node per disk unless you have a RAID setup.

Well, this is what I actually intend to do: migrate everything to one bigger disk and retire the 3 x 1TB disks.


The way you’d usually do this is:

  1. pvcreate on the new disk.
  2. vgextend the VG to add the new disk.
  3. pvmove $sourcepv $destpv for each PV you want to vacate. All allocations on the source will be moved to the destination. (You may want to run lvs -o +devices to see which slices of the LV live on each PV and move them in order, so the LV ends up contiguous on the new PV.)
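
Concretely, using the VG name from the lsblk output above (jbod_group) and assuming the new 4TB disk shows up as /dev/sde (a placeholder), it would look something like:

pvcreate /dev/sde
vgextend jbod_group /dev/sde
# check which PV holds which segments of the LV
lvs -o +devices jbod_group
# vacate the old PVs one at a time
pvmove /dev/sdb /dev/sde
pvmove /dev/sdc1 /dev/sde
pvmove /dev/sdd /dev/sde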

As others have said, pvmove is safe; we use it on production systems. If the move fails or is aborted, the origin will still be fully intact and available (assuming the failure wasn't the origin disk itself dying). It is safe to use on a mounted volume, though note that disk I/O will increase, so you will likely see higher latency and lower throughput on the volume until the move completes. This should not affect your node's reputation or disqualify it, but you may see more upload/download failures than usual while the volume is being moved.
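
For what it's worth, an in-progress move can also be abandoned or picked up again:

# abort: segments already copied stay on the destination, the rest stay on the origin
pvmove --abort
# after an interruption (crash, reboot), restart unfinished moves from the last checkpoint
pvmove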

Once this is done, the old PVs can be removed from the VG (vgreduce) or you can allocate new LVs on them.
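
For example, with the same placeholder device names as above:

vgreduce jbod_group /dev/sdb /dev/sdc1 /dev/sdd
# optionally wipe the LVM labels so the disks can be reused elsewhere
pvremove /dev/sdb /dev/sdc1 /dev/sdd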