Replacing smaller disks with larger disks in Linux

Hello everyone. In anticipation of the launch of a new cohort of the Linux Administrator course, we are publishing useful material written by our graduate and course mentor Roman Travin, a corporate product technical support specialist at REG.RU.



This article covers two cases of replacing disks and transferring data to new disks of a larger size, followed by expanding the array and the file system. The first case concerns replacing disks that keep the same partition table type (MBR to MBR, or GPT to GPT); the second concerns replacing MBR disks with disks larger than 2 TB, which require a GPT partition table with a bios_grub partition. In both cases, the disks to which the data is transferred are already installed in the server. The file system used on the root partition is ext4.



Case 1: Replacing smaller drives with larger drives (up to 2TB)


Task: replace the current drives with larger drives (up to 2 TB) and transfer the data. We have two 240 GB SSDs (RAID-1) with the installed system, and two 1 TB SATA disks to which the system needs to be transferred.

Consider the current disk layout.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sda2           8:2    0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0 931,5G  0 disk  
sdd              8:48   0 931,5G  0 disk  

Check the current file system space used.

[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs                32G            0   32G            0% /dev
tmpfs                   32G            0   32G            0% /dev/shm
tmpfs                   32G         9,6M   32G            1% /run
tmpfs                   32G            0   32G            0% /sys/fs/cgroup
/dev/mapper/vg0-root   204G         1,3G  192G            1% /
/dev/md126            1007M         120M  837M           13% /boot
tmpfs                  6,3G            0  6,3G            0% /run/user/0

Before replacing the disks, the file system is 204 GB. Two software RAID arrays are in use: md126, which is mounted at /boot, and md127, which is used as the physical volume for the VG group vg0.

1. Removing disk partitions from arrays


Check the state of the arrays.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sda1[0] sdb1[1]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sda2[0] sdb2[1]
      233206784 blocks super 1.2 [2/2] [UU]
      bitmap: 0/2 pages [0KB], 65536KB chunk

unused devices: <none>

The system uses 2 arrays: md126 (mount point /boot), consisting of the partitions /dev/sda1 and /dev/sdb1, and md127 (LVM for swap and the root of the file system), consisting of /dev/sda2 and /dev/sdb2.

Mark the first disk's partitions, which are used in each array, as failed.

mdadm /dev/md126 --fail /dev/sda1

mdadm /dev/md127 --fail /dev/sda2

Remove the partitions of the block device /dev/sda from the arrays.

mdadm /dev/md126 --remove /dev/sda1

mdadm /dev/md127 --remove /dev/sda2
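As an aside, mdadm accepts several manage-mode operations in a single invocation, so marking a partition failed and removing it can also be done in one command, for example:

mdadm /dev/md126 --fail /dev/sda1 --remove /dev/sda1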

After removing the disk's partitions from the arrays, the block device information will look like this.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0 931,5G  0 disk  
sdd              8:48   0 931,5G  0 disk  

The state of the arrays after removing the disk's partitions.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdb1[1]
      1047552 blocks super 1.2 [2/1] [_U]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdb2[1]
      233206784 blocks super 1.2 [2/1] [_U]
      bitmap: 1/2 pages [4KB], 65536KB chunk

unused devices: <none>

2. Copying the partition table to a new disk


You can check the partition table used on the disk with the following command.

fdisk -l /dev/sdb | grep 'Disk label type'

The output for the MBR will be:

Disk label type: dos

for GPT:

Disk label type: gpt
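Note that the exact fdisk wording varies between util-linux versions (newer releases print "Disklabel type"). A version-independent check can be done with parted, which is used later in this article anyway:

parted -s /dev/sdb print | grep 'Partition Table'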

Copying the partition table (MBR):

sfdisk -d /dev/sdb | sfdisk /dev/sdc

In this command, the disk from which the layout is copied is specified first, and the disk to which it is copied second.

NOTE: For GPT, the disk to which the layout is being copied is specified first, and the disk from which it is copied second. If you mix up the disks, the initially healthy partition table will be overwritten and destroyed.
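To keep the argument order straight, it may help to remember the general form of the command, where the disk given with -R is the target and the trailing disk is the source:

sgdisk -R /dev/<target> /dev/<source>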

Copying the partition table (GPT):

sgdisk -R /dev/sdc /dev/sdb

Next, assign a random UUID to the disk (for GPT).


sgdisk -G /dev/sdc

After the command is executed, partitions should appear on the disk /dev/sdc.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
└─sdc2           8:34   0 222,5G  0 part  
sdd              8:48   0 931,5G  0 disk  

If, after this, the system does not detect the partitions on the disk /dev/sdc, run the command to re-read the partition table.

sfdisk -R /dev/sdc
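If the kernel still does not pick up the new partitions, the partprobe utility from the parted package can also be used to ask the kernel to re-read the partition table:

partprobe /dev/sdc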

If the current disks use an MBR partition table and the data needs to be transferred to disks larger than 2 TB, then the new disks will need to be manually partitioned with a GPT table and a bios_grub partition. This is covered in the second case of this article.

3. Adding partitions of the new disk to the array


Add disk partitions to the corresponding arrays.

mdadm /dev/md126 --add /dev/sdc1

mdadm /dev/md127 --add /dev/sdc2

Check that the partitions have been added.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  

After that, wait for the arrays to synchronize.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdc1[2] sdb1[1]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdc2[2] sdb2[1]
      233206784 blocks super 1.2 [2/1] [_U]
      [==>..................]  recovery = 10.6% (24859136/233206784) finish=29.3min speed=118119K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>

You can monitor the synchronization process continuously with the watch utility.

watch -n 2 cat /proc/mdstat

The -n parameter specifies the interval, in seconds, at which the command is re-run to check the progress.
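If you would rather block until the rebuild completes than watch it, mdadm also has a wait mode, which returns once any resync or recovery on the listed arrays has finished:

mdadm --wait /dev/md126 /dev/md127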

4. Replacing the second drive


Repeat steps 1 - 3 for the second old drive.

Mark the second disk's partitions, which are used in each array, as failed.

mdadm /dev/md126 --fail /dev/sdb1

mdadm /dev/md127 --fail /dev/sdb2

Remove the partitions of the block device /dev/sdb from the arrays.

mdadm /dev/md126 --remove /dev/sdb1

mdadm /dev/md127 --remove /dev/sdb2

After removing the disk's partitions from the arrays, the block device information will look like this.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  

The state of the arrays after removing the disk's partitions.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdc1[2]
      1047552 blocks super 1.2 [2/1] [U_]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdc2[2]
      233206784 blocks super 1.2 [2/1] [U_]
      bitmap: 1/2 pages [4KB], 65536KB chunk

unused devices: <none>

Copy the MBR partition table from the disk /dev/sdc to the disk /dev/sdd.

sfdisk -d /dev/sdc | sfdisk /dev/sdd

After the command is executed, partitions should appear on the disk /dev/sdd.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  
├─sdd1           8:49   0     1G  0 part  
└─sdd2           8:50   0 222,5G  0 part  

Add disk partitions to arrays.

mdadm /dev/md126 --add /dev/sdd1

mdadm /dev/md127 --add /dev/sdd2

Check that the partitions have been added.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  
├─sdd1           8:49   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdd2           8:50   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

After that, wait for the arrays to synchronize.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdd1[3] sdc1[2]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdd2[3] sdc2[2]
      233206784 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  0.5% (1200000/233206784) finish=35.4min speed=109090K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>

5. Installing GRUB on the new drives


For CentOS:

grub2-install /dev/sdX

For Debian / Ubuntu:

grub-install /dev/sdX

where X is the letter of the block device. In this case, install GRUB on /dev/sdc and /dev/sdd.
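For example, on the CentOS system used in this article, that means running:

grub2-install /dev/sdc

grub2-install /dev/sdd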

6. Expanding the file system (ext4) on the root partition


The new disks /dev/sdc and /dev/sdd are 931.5 GB each. However, because the partition table was copied from the smaller disks, the partitions /dev/sdc2 and /dev/sdd2 only provide 222.5 GB.

sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  
├─sdd1           8:49   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdd2           8:50   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

It is necessary:

  1. Extend partition 2 on each drive,
  2. Extend the md127 array,
  3. Expand the PV (physical volume),
  4. Extend the LV (logical volume) vg0-root,
  5. Extend the file system.

Using the parted utility, expand the partition /dev/sdc2 to its maximum size. Run parted /dev/sdc and view the current partition table with the command p.

As you can see, partition 2 ends at 240 GB. Expand it with the command resizepart 2, where 2 is the partition number. The new end can be specified as an absolute value, for example 1000GB, or as a share of the disk, 100%. Check again that the partition has the new size.
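An abridged transcript of such a session (with parted's output omitted) looks like this:

[root@localhost ~]# parted /dev/sdc
(parted) p
(parted) resizepart 2 100%
(parted) p
(parted) quit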

Repeat the above steps for the disk /dev/sdd. After the expansion, the partitions /dev/sdc2 and /dev/sdd2 are 930.5 GB each.

[root@localhost ~]# lsblk                                                 
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 930,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  
├─sdd1           8:49   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdd2           8:50   0 930,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

After that, we expand the md127 array to the maximum.

mdadm --grow /dev/md127 --size=max

Check that the array has expanded. Its size is now 930.4 GB.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 930,5G  0 part  
  └─md127        9:127  0 930,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  
├─sdd1           8:49   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdd2           8:50   0 930,5G  0 part  
  └─md127        9:127  0 930,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

Now we expand the physical volume. Before expanding, check the current state of the PV.

[root@localhost ~]# pvscan
  PV /dev/md127   VG vg0             lvm2 [222,40 GiB / 0    free]
  Total: 1 [222,40 GiB] / in use: 1 [222,40 GiB] / in no VG: 0 [0   ]

As you can see, PV /dev/md127 uses 222.4 GB of space.

Expand PV with the following command.

pvresize /dev/md127

Check the result of the PV extension.

[root@localhost ~]# pvscan
  PV /dev/md127   VG vg0             lvm2 [930,38 GiB / 707,98 GiB free]
  Total: 1 [930,38 GiB] / in use: 1 [930,38 GiB] / in no VG: 0 [0   ]

Expanding the logical volume. Before extending, check the current state of the LV.

[root@localhost ~]# lvscan
  ACTIVE            '/dev/vg0/swap' [<16,00 GiB] inherit
  ACTIVE            '/dev/vg0/root' [<206,41 GiB] inherit

LV /dev/vg0/root uses 206.41 GB.

Extend the LV with the following command.

lvextend -l +100%FREE /dev/mapper/vg0-root


Check the result.

[root@localhost ~]# lvscan 
  ACTIVE            '/dev/vg0/swap' [<16,00 GiB] inherit
  ACTIVE            '/dev/vg0/root' [<914,39 GiB] inherit

As you can see, after extending the LV, its size is 914.39 GB. The logical volume has grown, but the file system still occupies 204 GB.

Finally, extend the file system.

resize2fs /dev/mapper/vg0-root

Check the file system size after running the command.

[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs                32G            0   32G            0% /dev
tmpfs                   32G            0   32G            0% /dev/shm
tmpfs                   32G         9,5M   32G            1% /run
tmpfs                   32G            0   32G            0% /sys/fs/cgroup
/dev/mapper/vg0-root   900G         1,3G  860G            1% /
/dev/md126            1007M         120M  837M           13% /boot
tmpfs                  6,3G            0  6,3G            0% /run/user/0

The root file system has grown to 900 GB. After completing these steps, the old disks can be removed.
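As a side note: on systems with a reasonably recent lvm2, the last two steps can be combined, since lvextend can resize the file system in the same call (via fsadm):

lvextend -r -l +100%FREE /dev/mapper/vg0-root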

Case 2: Replacing smaller drives with larger drives (more than 2TB)


Task: replace the current disks with larger disks (2 x 3 TB) while preserving the data. We have two 240 GB SSDs (RAID-1) with the installed system, and two 3 TB SATA disks to which the system needs to be transferred. The current drives use an MBR partition table. Since the new disks are larger than 2 TB, they must use a GPT partition table, because MBR cannot address disks larger than 2 TB.

View the current disk layout.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sda2           8:2    0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0   2,7T  0 disk  
sdd              8:48   0   2,7T  0 disk  

Check the partition table used on the disk /dev/sda.

[root@localhost ~]# fdisk -l /dev/sda | grep 'Disk label type'
Disk label type: dos

The disk /dev/sdb uses the same partition table type. Check the disk space used in the system.

[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs                16G            0   16G            0% /dev
tmpfs                   16G            0   16G            0% /dev/shm
tmpfs                   16G         9,5M   16G            1% /run
tmpfs                   16G            0   16G            0% /sys/fs/cgroup
/dev/mapper/vg0-root   204G         1,3G  192G            1% /
/dev/md126            1007M         120M  837M           13% /boot
tmpfs                  3,2G            0  3,2G            0% /run/user/0

As you can see, the root file system occupies 204 GB.

1. Creating a GPT partition table and partitioning the disk


Check the current disk layout and sector sizes.

[root@localhost ~]# parted /dev/sda print
Model: ATA KINGSTON SVP200S (scsi)
Disk /dev/sda: 240GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1076MB  1075MB  primary               boot, raid
 2      1076MB  240GB   239GB   primary               raid

On each new 3 TB drive we need to create 3 partitions:

  1. A bios_grub partition, 2 MiB in size, for BIOS compatibility with GPT,
  2. A partition for the RAID array that will be mounted at /boot,
  3. A partition for the RAID array that will hold the LV root and LV swap.

Install the parted utility with the command yum install -y parted (for CentOS) or apt install -y parted (for Debian/Ubuntu).

Using parted, execute the following commands to partition the disk.

Run parted /dev/sdc to enter the editing mode for the disk layout.

Create a GPT partition table.

(parted) mktable gpt

Create partition 1, the bios_grub partition, and set its flag.

(parted) mkpart primary 1MiB 3MiB
(parted) set 1 bios_grub on  

Create partition 2 and set its flag. This partition will be used as a building block for the RAID array that will be mounted at /boot.

(parted) mkpart primary ext2 3MiB 1028MiB
(parted) set 2 boot on

Create partition 3, which will also be used as a building block for the array that will hold the LVM.

(parted) mkpart primary 1028MiB 100% 

In this case setting the flag is not required, but if needed it can be set with the following command.

(parted) set 3 raid on

Check the created table.

(parted) p                                                                
Model: ATA TOSHIBA DT01ACA3 (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3146kB  2097kB               primary  bios_grub
 2      3146kB  1077MB  1074MB               primary  boot
 3      1077MB  3001GB  3000GB               primary
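The same layout can also be created non-interactively using parted's script mode (-s); the following sketch is equivalent to the interactive session above:

parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc mkpart primary 1MiB 3MiB
parted -s /dev/sdc set 1 bios_grub on
parted -s /dev/sdc mkpart primary ext2 3MiB 1028MiB
parted -s /dev/sdc set 2 boot on
parted -s /dev/sdc mkpart primary 1028MiB 100%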

Assign the drive a new random GUID.

sgdisk -G /dev/sdd

2. Removing partitions of the first disk from arrays


Check the state of the arrays.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sda1[0] sdb1[1]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sda2[0] sdb2[1]
      233206784 blocks super 1.2 [2/2] [UU]
      bitmap: 0/2 pages [0KB], 65536KB chunk

unused devices: <none>

The system uses 2 arrays: md126 (mount point /boot), consisting of /dev/sda1 and /dev/sdb1, and md127 (LVM for swap and the root of the file system), consisting of /dev/sda2 and /dev/sdb2.

Mark the first disk's partitions, which are used in each array, as failed.

mdadm /dev/md126 --fail /dev/sda1

mdadm /dev/md127 --fail /dev/sda2

Remove the partitions of the block device /dev/sda from the arrays.

mdadm /dev/md126 --remove /dev/sda1

mdadm /dev/md127 --remove /dev/sda2

Check the state of the arrays after removing the disk's partitions.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdb1[1]
      1047552 blocks super 1.2 [2/1] [_U]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdb2[1]
      233206784 blocks super 1.2 [2/1] [_U]
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>

3. Adding partitions of the new disk to the array


The next step is to add the partitions of the new disk to the arrays for synchronization. Look at the current state of the disk layout.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0   2,7T  0 disk  
├─sdc1           8:33   0     2M  0 part  
├─sdc2           8:34   0     1G  0 part  
└─sdc3           8:35   0   2,7T  0 part  
sdd              8:48   0   2,7T  0 disk  

The partition /dev/sdc1 is the bios_grub partition and does not take part in the arrays. Only /dev/sdc2 and /dev/sdc3 will be used. Add these partitions to the corresponding arrays.

mdadm /dev/md126 --add /dev/sdc2

mdadm /dev/md127 --add /dev/sdc3

Then wait for the arrays to synchronize.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdc2[2] sdb1[1]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdc3[2] sdb2[1]
      233206784 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.2% (619904/233206784) finish=31.2min speed=123980K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>

The disk layout after adding the partitions to the arrays.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0   2,7T  0 disk  
├─sdc1           8:33   0     2M  0 part  
├─sdc2           8:34   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc3           8:35   0   2,7T  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0   2,7T  0 disk  

4. Removing partitions of the second disk from arrays


Mark the second disk's partitions, which are used in each array, as failed.

mdadm /dev/md126 --fail /dev/sdb1

mdadm /dev/md127 --fail /dev/sdb2

Remove the partitions of the block device /dev/sdb from the arrays.

mdadm /dev/md126 --remove /dev/sdb1

mdadm /dev/md127 --remove /dev/sdb2

5. Copying the GPT partition table and synchronizing the arrays


To copy the GPT partition table we will use the sgdisk utility, which is part of gdisk, a package for working with disk partitions and GPT tables.

Installing gdisk on CentOS:

yum install -y gdisk

Installing gdisk on Debian/Ubuntu:

apt install -y gdisk

ATTENTION: For GPT, the disk to which the partition table is copied is specified first, and the disk from which it is copied second. If you mix up the disks, the initially healthy partition table will be overwritten and destroyed.

Copy the GPT partition table.

sgdisk -R /dev/sdd /dev/sdc
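Keep in mind that sgdisk -R copies the table verbatim, including GUIDs; as in the first case, new random GUIDs can be assigned to the copy afterwards:

sgdisk -G /dev/sdd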

The disk layout after transferring the table to the disk /dev/sdd.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0   2,7T  0 disk  
├─sdc1           8:33   0     2M  0 part  
├─sdc2           8:34   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc3           8:35   0   2,7T  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0   2,7T  0 disk  
├─sdd1           8:49   0     2M  0 part  
├─sdd2           8:50   0     1G  0 part  
└─sdd3           8:51   0   2,7T  0 part  

Next, add each of the partitions that take part in the software RAID to the arrays.

mdadm /dev/md126 --add /dev/sdd2

mdadm /dev/md127 --add /dev/sdd3

Wait for the arrays to synchronize.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdd2[3] sdc2[2]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active raid1 sdd3[3] sdc3[2]
      233206784 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  0.0% (148224/233206784) finish=26.2min speed=148224K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>

After copying the GPT table to the second new disk, the layout will look like this.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0   2,7T  0 disk  
├─sdc1           8:33   0     2M  0 part  
├─sdc2           8:34   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc3           8:35   0   2,7T  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0   2,7T  0 disk  
├─sdd1           8:49   0     2M  0 part  
├─sdd2           8:50   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdd3           8:51   0   2,7T  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

Next, install GRUB on the new drives.

Installation for CentOS:

grub2-install /dev/sdX

Installation for Debian / Ubuntu:

grub-install /dev/sdX

where X is the drive letter; in our case, the drives /dev/sdc and /dev/sdd.

Updating the array information.

For CentOS:

mdadm --detail --scan --verbose > /etc/mdadm.conf

For Debian / Ubuntu:

echo "DEVICE partitions" > /etc/mdadm/mdadm.conf

mdadm --detail --scan --verbose | awk '/ARRAY/ {print}' >> /etc/mdadm/mdadm.conf
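For reference, the appended ARRAY lines typically look like the following (the name and UUID values here are placeholders, not values from this system; the verbose form also appends a devices= continuation line):

ARRAY /dev/md/126 metadata=1.2 name=<hostname>:126 UUID=<array-uuid>
ARRAY /dev/md/127 metadata=1.2 name=<hostname>:127 UUID=<array-uuid>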

Updating the initrd image.

For CentOS:

dracut -f -v --regenerate-all

For Debian / Ubuntu:

update-initramfs -u -k all

Updating the GRUB configuration.

For CentOS:

grub2-mkconfig -o /boot/grub2/grub.cfg

For Debian / Ubuntu:

update-grub

After completing these steps, the old disks can be removed.

6. Expanding the file system (ext4) on the root partition


The disk layout after moving the system to the 2 x 3 TB (RAID-1) disks, before expanding the file system.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
sdb              8:16   0 223,6G  0 disk  
sdc              8:32   0   2,7T  0 disk  
├─sdc1           8:33   0     2M  0 part  
├─sdc2           8:34   0     1G  0 part  
│ └─md127        9:127  0  1023M  0 raid1 /boot
└─sdc3           8:35   0   2,7T  0 part  
  └─md126        9:126  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0   2,7T  0 disk  
├─sdd1           8:49   0     2M  0 part  
├─sdd2           8:50   0     1G  0 part  
│ └─md127        9:127  0  1023M  0 raid1 /boot
└─sdd3           8:51   0   2,7T  0 part  
  └─md126        9:126  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

The partitions /dev/sdc3 and /dev/sdd3 now occupy 2.7 TB. Since the disks were partitioned from scratch with a GPT table, partition 3 was immediately created with the maximum possible size, so in this case the partitions do not need to be expanded.

It is necessary:

  1. Extend the md126 array,
  2. Expand the PV (physical volume),
  3. Extend the LV (logical volume) vg0-root,
  4. Extend the file system.

1. Expand the md126 array to the maximum.

mdadm --grow /dev/md126 --size=max

After expanding the array, md126 occupies 2.7 TB.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
sdb              8:16   0 223,6G  0 disk  
sdc              8:32   0   2,7T  0 disk  
├─sdc1           8:33   0     2M  0 part  
├─sdc2           8:34   0     1G  0 part  
│ └─md127        9:127  0  1023M  0 raid1 /boot
└─sdc3           8:35   0   2,7T  0 part  
  └─md126        9:126  0   2,7T  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0   2,7T  0 disk  
├─sdd1           8:49   0     2M  0 part  
├─sdd2           8:50   0     1G  0 part  
│ └─md127        9:127  0  1023M  0 raid1 /boot
└─sdd3           8:51   0   2,7T  0 part  
  └─md126        9:126  0   2,7T  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

Expanding the physical volume.

Before expanding, check the current size of PV /dev/md126.

[root@localhost ~]# pvs
  PV         VG  Fmt  Attr PSize   PFree
  /dev/md126 vg0 lvm2 a--  222,40g    0 

Expand PV with the following command.

pvresize /dev/md126

Check the completed action.

[root@localhost ~]# pvs
  PV         VG  Fmt  Attr PSize  PFree
  /dev/md126 vg0 lvm2 a--  <2,73t 2,51t

Expanding the logical volume vg0-root.

After expanding PV, we check the occupied space of VG.

[root@localhost ~]# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  vg0   1   2   0 wz--n- <2,73t 2,51t

Check the space occupied by LV.

[root@localhost ~]# lvs
  LV   VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root vg0 -wi-ao---- <206,41g                                                    
  swap vg0 -wi-ao----  <16,00g            

The vg0-root volume takes up 206.41 GB.

Expand LV to maximum disk space.

lvextend -l +100%FREE /dev/mapper/vg0-root 

Checking the LV space after expansion.

[root@localhost ~]# lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root vg0 -wi-ao----   2,71t                                                    
  swap vg0 -wi-ao---- <16,00g

Extending the file system (ext4).

Check the current file system size.

[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs                16G            0   16G            0% /dev
tmpfs                   16G            0   16G            0% /dev/shm
tmpfs                   16G         9,6M   16G            1% /run
tmpfs                   16G            0   16G            0% /sys/fs/cgroup
/dev/mapper/vg0-root   204G         1,4G  192G            1% /
/dev/md127            1007M         141M  816M           15% /boot
tmpfs                  3,2G            0  3,2G            0% /run/user/0

The volume /dev/mapper/vg0-root still occupies 204 GB after the LV extension, because the file system has not been resized yet.

Expanding the file system.

resize2fs /dev/mapper/vg0-root 

Check the size of the file system after its expansion.

[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs                16G            0   16G            0% /dev
tmpfs                   16G            0   16G            0% /dev/shm
tmpfs                   16G         9,6M   16G            1% /run
tmpfs                   16G            0   16G            0% /sys/fs/cgroup
/dev/mapper/vg0-root   2,7T         1,4G  2,6T            1% /
/dev/md127            1007M         141M  816M           15% /boot
tmpfs                  3,2G            0  3,2G            0% /run/user/0

The file system size has increased to fill the entire logical volume.

Source: https://habr.com/ru/post/undefined/
