
Creating a RAID array

This is the ‘old’ way of creating a software RAID array on Linux with mdadm. These days, ZFS is usually the better choice.

source: https://www.makeuseof.com/tag/configure-raid-hdd-array-linux/

Find the drives to be put into a RAID array

$ lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0   32G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    2G  0 part /boot
└─sda3                      8:3    0   30G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   30G  0 lvm  /
sdb                         8:16   0   32G  0 disk
sdc                         8:32   0   32G  0 disk
sdd                         8:48   0   32G  0 disk
sde                         8:64   0   32G  0 disk

In this case, we want to use sdb, sdc, sdd, and sde.
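
Before partitioning, it can be worth checking that the disks carry no leftover filesystem or RAID signatures from earlier use. With no options, wipefs only lists signatures; adding -a would erase them, which is destructive:

$ sudo wipefs /dev/sdb /dev/sdc /dev/sdd /dev/sde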

Creating RAID partitions

On each of the four drives, we need to create a partition of type ‘Linux raid autodetect’. The interactive session below shows /dev/sdb; a scripted alternative for the remaining drives follows it.

$ sudo fdisk /dev/sdb

Welcome to fdisk (util-linux 2.37.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x8afa48f9.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-67108863, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-67108863, default 67108863):

Created a new partition 1 of type 'Linux' and of size 32 GiB.

Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
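
The same layout is needed on /dev/sdc, /dev/sdd, and /dev/sde. Rather than repeating the interactive session three times, you can script it with sfdisk, which reads a partition description on stdin (a sketch; ‘type=fd’ creates a single full-disk partition of type Linux raid autodetect):

$ for d in sdc sdd sde; do printf 'label: dos\ntype=fd\n' | sudo sfdisk "/dev/$d"; done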

Notify the OS about the new partitions

$ sudo partprobe /dev/sdb /dev/sdc /dev/sdd /dev/sde

You won’t see any output.
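
To confirm the kernel now sees the partitions, you can list just those disks; each should show a single sdX1 partition:

$ lsblk /dev/sdb /dev/sdc /dev/sdd /dev/sde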

Create the array

$ sudo mdadm -C /dev/md0 --level=raid5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
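
Here -C is short for --create. The same command in long form:

$ sudo mdadm --create /dev/md0 --level=raid5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1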

Monitor the new RAID array

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Dec  6 16:48:01 2022
        Raid Level : raid5
        Array Size : 100608000 (95.95 GiB 103.02 GB)
     Used Dev Size : 33536000 (31.98 GiB 34.34 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Dec  6 16:48:54 2022
             State : clean, degraded, recovering
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 34% complete

              Name : resize:0  (local to host resize)
              UUID : fff2299a:3f00f698:70b90131:4b077662
            Events : 6

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      spare rebuilding   /dev/sde1
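
The ‘clean, degraded, recovering’ state and the spare device are normal here: mdadm builds a new RAID5 array as a degraded array plus a spare, then syncs onto it. You can also follow the rebuild through /proc/mdstat:

$ watch cat /proc/mdstat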

Once the array is done rebuilding, you will see this.

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Dec  6 16:48:01 2022
        Raid Level : raid5
        Array Size : 100608000 (95.95 GiB 103.02 GB)
     Used Dev Size : 33536000 (31.98 GiB 34.34 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Dec  6 16:50:49 2022
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : resize:0  (local to host resize)
              UUID : fff2299a:3f00f698:70b90131:4b077662
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      active sync   /dev/sde1
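
To have the array assembled under the same name after a reboot, record it in mdadm’s configuration and refresh the initramfs. This sketch assumes Debian/Ubuntu, where the file lives at /etc/mdadm/mdadm.conf; other distros use /etc/mdadm.conf:

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
$ sudo update-initramfs -u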

Make the file system

You could also put LVM on top of the array at this point instead of formatting it directly.

$ sudo mkfs.ext4 /dev/md0
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 25152000 4k blocks and 6291456 inodes
Filesystem UUID: 3afba9e8-a786-47d2-820a-1ee88f8340b1
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done

Mount the RAID array

The mount point has to exist first:

$ sudo mkdir -p /mnt/raid5
$ sudo mount /dev/md0 /mnt/raid5

Look at the new filesystem

$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv   30G  4.7G   24G  17% /
/dev/sda2                          2.0G  127M  1.7G   7% /boot
/dev/md0                            94G   24K   90G   1% /mnt/raid5
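
To mount the array automatically at boot, add an fstab entry keyed on the filesystem UUID that mkfs.ext4 printed earlier (a sketch; adjust the mount options to taste):

$ echo 'UUID=3afba9e8-a786-47d2-820a-1ee88f8340b1 /mnt/raid5 ext4 defaults 0 2' | sudo tee -a /etc/fstab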