mdadm is a utility for creating, managing, and monitoring software RAID arrays on Linux.
# Installation
sudo apt-get update
sudo apt-get install mdadm
Example: creating a RAID 1 array (mirroring)
# Identify the disks to be used in the array
lsblk
# Example output
pac-man@lab-vm:~/labs/scripts$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 63.3M 1 loop /snap/core20/1828
loop1 7:1 0 91.9M 1 loop /snap/lxd/24061
loop2 7:2 0 49.9M 1 loop /snap/snapd/18357
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1M 0 part
└─vda2 252:2 0 10G 0 part /
vdb 252:16 0 1G 0 disk
vdc 252:32 0 1G 0 disk
vdd 252:48 0 1G 0 disk
vde 252:64 0 1G 0 disk
vdf 252:80 0 1G 0 disk
vdg 252:96 0 1G 0 disk
pac-man@lab-vm:~/labs/scripts$
# Clean superblock on /dev/vdb and /dev/vdc
sudo mdadm --zero-superblock --force /dev/vd{b,c}
# Example output (these warnings are harmless if the disks were never RAID members)
pac-man@lab-vm:~/labs/scripts$ sudo mdadm --zero-superblock --force /dev/vd{b,c}
mdadm: Unrecognised md component device - /dev/vdb
mdadm: Unrecognised md component device - /dev/vdc
pac-man@lab-vm:~/labs/scripts$
# Wipe all metadata and signatures for /dev/vdb and /dev/vdc
sudo wipefs --all --force /dev/vd{b,c}
# Create RAID 1
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/vdb /dev/vdc
# Example output
pac-man@lab-vm:~$ sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/vdb /dev/vdc
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 1046528K
Continue creating array? y
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
pac-man@lab-vm:~$
# Checking the build status
sudo watch -n 5 cat /proc/mdstat
# Example output
pac-man@lab-vm:~$ sudo cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 vdc[1] vdb[0]
1046528 blocks super 1.2 [2/2] [UU]
unused devices: <none>
pac-man@lab-vm:~$
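In the mdstat output, `[2/2] [UU]` means 2 of 2 members are up; an underscore (e.g. `[U_]`) marks a missing member. A small sketch for scripting this check (`mdstat_degraded` is a hypothetical helper name, not an mdadm tool):

```shell
# Hypothetical helper: read /proc/mdstat on stdin and report whether any
# array has a missing member (an "_" inside the [UU...] status field).
mdstat_degraded() {
  grep -Eo '\[[U_]+\]' | grep -q '_'
}

# Usage on a live system:
#   if mdstat_degraded < /proc/mdstat; then echo "some array is degraded"; fi
```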
# Take a look again
pac-man@lab-vm:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 63.3M 1 loop /snap/core20/1828
loop1 7:1 0 91.9M 1 loop /snap/lxd/24061
loop2 7:2 0 49.9M 1 loop /snap/snapd/18357
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1M 0 part
└─vda2 252:2 0 10G 0 part /
vdb 252:16 0 1G 0 disk
└─md0 9:0 0 1022M 0 raid1
vdc 252:32 0 1G 0 disk
└─md0 9:0 0 1022M 0 raid1
vdd 252:48 0 1G 0 disk
vde 252:64 0 1G 0 disk
vdf 252:80 0 1G 0 disk
vdg 252:96 0 1G 0 disk
pac-man@lab-vm:~$
# Create a filesystem on the new RAID array
sudo mkfs.ext4 /dev/md0
# Example output
pac-man@lab-vm:~$ sudo mkfs.ext4 /dev/md0
mke2fs 1.45.5 (07-Jan-2020)
Discarding device blocks: done
Creating filesystem with 261632 4k blocks and 65408 inodes
Filesystem UUID: c853f28b-870a-4676-858f-bf0e2ba5bdd5
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
pac-man@lab-vm:~$
# Mount the RAID array:
sudo mkdir -p /mnt/raid1
sudo mount /dev/md0 /mnt/raid1
# Add an entry to /etc/fstab to mount the array automatically at boot
echo '/dev/md0 /mnt/raid1 ext4 defaults,nofail 0 0' | sudo tee -a /etc/fstab
# Example output
pac-man@lab-vm:~$ echo '/dev/md0 /mnt/raid1 ext4 defaults,nofail 0 0' | sudo tee -a /etc/fstab
/dev/md0 /mnt/raid1 ext4 defaults,nofail 0 0
pac-man@lab-vm:~$
# Better to use the filesystem UUID instead of /dev/md0 (device names can change between boots)
# Get the UUID with: sudo blkid /dev/md0
# echo 'UUID="c853f28b-870a-4676-858f-bf0e2ba5bdd5" /mnt/raid1 ext4 defaults,nofail 0 0' | sudo tee -a /etc/fstab
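A small sketch of scripting that fstab entry (the UUID below is the example value from the mkfs.ext4 output above; on a live system fetch it with `blkid`):

```shell
# On a live system: UUID=$(sudo blkid -s UUID -o value /dev/md0)
UUID="c853f28b-870a-4676-858f-bf0e2ba5bdd5"   # example value from above
printf 'UUID=%s /mnt/raid1 ext4 defaults,nofail 0 0\n' "$UUID"
# Append the printed line to /etc/fstab by piping it into: sudo tee -a /etc/fstab
```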
Fields shown by mdadm --detail:
- Version — the metadata version.
- Creation Time — when the array was created.
- Raid Level — the RAID level.
- Array Size — the usable capacity of the array.
- Used Dev Size — the amount of space used on each member device. Usable capacity depends on the level: in RAID 1 it equals the size of a single disk; in RAID 5 one disk's worth of capacity goes to parity.
- Raid Devices — the number of active devices the array is built from.
- Total Devices — the total number of devices added to the array.
- Update Time — when the array was last modified.
- State — the current state; clean means everything is fine.
- Active Devices — the number of devices actively running in the array.
- Working Devices — the number of devices in working condition (active plus spare).
- Failed Devices — the number of failed devices.
- Spare Devices — the number of spare devices.
- Consistency Policy — how consistency is restored after an unexpected failure. The default is resync (full resynchronization after recovery); bitmap, journal, and ppl are also possible.
- Name — the array name (by default hostname:index).
- UUID — the identifier of the array.
- Events — the event counter, incremented on updates.
- Chunk Size (for RAID 5 and similar levels) — the size in kilobytes of the block written to each disk in turn.
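For scripting, individual fields can be pulled out of `mdadm --detail` output; a sketch (`raid_field` is a hypothetical helper, not part of mdadm):

```shell
# Hypothetical helper: read `mdadm --detail` output on stdin and print one field.
raid_field() {
  # match "    <Field> : <value>" and strip everything up to the first colon
  grep "^ *$1 :" | sed 's/^[^:]*: *//'
}

# Usage on a live system:
#   sudo mdadm --detail /dev/md0 | raid_field "State"        # e.g. "clean"
#   sudo mdadm --detail /dev/md0 | raid_field "Raid Level"   # e.g. "raid1"
```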
Managing the RAID Array
# Get details from mdadm
sudo mdadm --detail --scan --verbose
# Example output
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=lab-vm:0 UUID=8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01
devices=/dev/vdb,/dev/vdc
# Checking the array status
sudo cat /proc/mdstat
# Example output
pac-man@lab-vm:~$ sudo cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 vdc[1] vdb[0]
1046528 blocks super 1.2 [2/2] [UU] # looks good [2/2] [UU]
# Checking the array status
sudo mdadm --detail /dev/md0
# Example output
pac-man@lab-vm:~$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Jul 11 08:43:19 2024
Raid Level : raid1
Array Size : 1046528 (1022.00 MiB 1071.64 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Thu Jul 11 08:50:25 2024
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : lab-vm:0 (local to host lab-vm)
UUID : 8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01
Events : 17
Number Major Minor RaidDevice State
0 252 16 0 active sync /dev/vdb
1 252 32 1 active sync /dev/vdc
# Failure simulation. For example, mark /dev/vdc as failed:
sudo mdadm --fail /dev/md0 /dev/vdc
# Example output
pac-man@lab-vm:~$ sudo cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 vdc[1](F) vdb[0]
1046528 blocks super 1.2 [2/1] [U_] # <--- [2/1] [U_] = not good
pac-man@lab-vm:~$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Jul 11 08:43:19 2024
Raid Level : raid1
Array Size : 1046528 (1022.00 MiB 1071.64 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Thu Jul 11 09:26:02 2024
State : clean, degraded # <--degraded = not good
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Consistency Policy : resync
Name : lab-vm:0 (local to host lab-vm)
UUID : 8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01
Events : 19
Number Major Minor RaidDevice State
0 252 16 0 active sync /dev/vdb
- 0 0 1 removed
1 252 32 - faulty /dev/vdc # <--fail
# Adding a new disk to the array
sudo mdadm --add /dev/md0 /dev/vde
# Example output
pac-man@lab-vm:~$ sudo mdadm --add /dev/md0 /dev/vde
mdadm: added /dev/vde
pac-man@lab-vm:~$
# Check now
# Example output
pac-man@lab-vm:~$ sudo cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 vde[2] vdc[1](F) vdb[0]
1046528 blocks super 1.2 [2/2] [UU] # <--good [2/2] [UU]
unused devices: <none>
pac-man@lab-vm:~$
pac-man@lab-vm:~$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Jul 11 08:43:19 2024
Raid Level : raid1
Array Size : 1046528 (1022.00 MiB 1071.64 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Thu Jul 11 09:38:51 2024
State : clean # <--looks good
Active Devices : 2
Working Devices : 2
Failed Devices : 1 # <--take a look, but RAID is OK
Spare Devices : 0
Consistency Policy : resync
Name : lab-vm:0 (local to host lab-vm)
UUID : 8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01
Events : 38
Number Major Minor RaidDevice State
0 252 16 0 active sync /dev/vdb
2 252 64 1 active sync /dev/vde
1 252 32 - faulty /dev/vdc
pac-man@lab-vm:~$
# Now you can delete /dev/vdc from the array
sudo mdadm --remove /dev/md0 /dev/vdc
# Example output
pac-man@lab-vm:~$ sudo mdadm --remove /dev/md0 /dev/vdc
mdadm: hot removed /dev/vdc from /dev/md0
pac-man@lab-vm:~$
# Check again
pac-man@lab-vm:~$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Jul 11 08:43:19 2024
Raid Level : raid1
Array Size : 1046528 (1022.00 MiB 1071.64 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Thu Jul 11 09:44:40 2024
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : lab-vm:0 (local to host lab-vm)
UUID : 8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01
Events : 39
Number Major Minor RaidDevice State
0 252 16 0 active sync /dev/vdb
2 252 64 1 active sync /dev/vde
pac-man@lab-vm:~$
# Everything looks OK
# Summary: replacing a failed disk (substitute your actual device names)
sudo mdadm --fail /dev/md0 /dev/vdX
sudo mdadm --remove /dev/md0 /dev/vdX
# Add a new disk to the array
sudo mdadm --add /dev/md0 /dev/vdY
Reassembling the array
# To bring back an array that was previously stopped, let mdadm detect and assemble it automatically from the disks' superblocks:
sudo mdadm --assemble --scan
# You can also specify the member disks explicitly
sudo mdadm --assemble /dev/md0 /dev/vdb /dev/vdc
Hot Spare disk
If a hot spare disk is present in the array, it automatically takes the place of a primary disk that fails.
Any disk added to the array beyond the raid-devices count becomes a hot spare.
# Add a Hot spare disk in the array
sudo mdadm /dev/md0 --add /dev/vdf
# Example output
pac-man@lab-vm:~$ sudo mdadm /dev/md0 --add /dev/vdf
mdadm: added /dev/vdf
pac-man@lab-vm:~$
# Check now
pac-man@lab-vm:~$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Jul 11 08:43:19 2024
Raid Level : raid1
Array Size : 1046528 (1022.00 MiB 1071.64 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Thu Jul 11 09:58:06 2024
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Consistency Policy : resync
Name : lab-vm:0 (local to host lab-vm)
UUID : 8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01
Events : 40
Number Major Minor RaidDevice State
0 252 16 0 active sync /dev/vdb
2 252 64 1 active sync /dev/vde
3 252 80 - spare /dev/vdf # <---OK
pac-man@lab-vm:~$
# Now simulate a disk fail
pac-man@lab-vm:~$ sudo mdadm /dev/md0 --fail /dev/vdb
mdadm: set /dev/vdb faulty in /dev/md0
pac-man@lab-vm:~$
# Check again
pac-man@lab-vm:~$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Jul 11 08:43:19 2024
Raid Level : raid1
Array Size : 1046528 (1022.00 MiB 1071.64 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Thu Jul 11 10:02:02 2024
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
Consistency Policy : resync
Name : lab-vm:0 (local to host lab-vm)
UUID : 8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01
Events : 59
Number Major Minor RaidDevice State
3 252 80 0 active sync /dev/vdf
2 252 64 1 active sync /dev/vde
0 252 16 - faulty /dev/vdb
pac-man@lab-vm:~$
# Well done: the hot spare disk /dev/vdf substituted the failed disk /dev/vdb
Extra fault tolerance (more than two mirrors)
For RAID 1, two disks are enough, but you can mirror across more. Add the extra disk(s) to the array and then increase raid-devices. Note that to shrink the mirror back later, you must first fail and remove the extra disk and then lower raid-devices with --grow.
# Add one more disk in the array
sudo mdadm /dev/md0 --add /dev/vdd
# Example output
pac-man@lab-vm:~$ sudo mdadm /dev/md0 --add /dev/vdd
mdadm: added /dev/vdd
# Increase the number of active devices to 3
sudo mdadm -G /dev/md0 --raid-devices=3
# Example output
pac-man@lab-vm:~$ sudo mdadm -G /dev/md0 --raid-devices=3
raid_disks for /dev/md0 set to 3
pac-man@lab-vm:~$
pac-man@lab-vm:~$ sudo cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 vdd[4] vdf[3] vde[2] vdb[0](F)
1046528 blocks super 1.2 [3/2] [UU_]
[=================>...] recovery = 87.5% (916608/1046528) finish=0.0min speed=229152K/sec
# Check again
pac-man@lab-vm:~$ sudo cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 vdd[4] vdf[3] vde[2] vdb[0](F)
1046528 blocks super 1.2 [3/3] [UUU] # <---[3/3] [UUU] -Ok
Delete the Array
If we need to completely disassemble the RAID, first unmount and stop it:
# unmount the array (see fstab)
sudo umount /mnt/raid1
# If you can't unmount because the filesystem is busy, list the processes using it:
# sudo fuser -vm /mnt/raid1
# then kill those processes
# Stop the array
sudo mdadm -S /dev/md0
# If you can't stop the array because it is busy, find the processes holding it:
# lsof -f -- /dev/md0
# kill processes
# Example output
pac-man@lab-vm:~$ sudo mdadm -S /dev/md0
mdadm: stopped /dev/md0
# Erase the md superblock on each member disk (repeat for every disk)
sudo mdadm --zero-superblock /dev/vdX
# Example output
pac-man@lab-vm:~$ sudo mdadm --zero-superblock /dev/vdb
pac-man@lab-vm:~$ sudo mdadm --zero-superblock /dev/vdc
pac-man@lab-vm:~$ sudo mdadm --zero-superblock /dev/vdd
pac-man@lab-vm:~$ sudo mdadm --zero-superblock /dev/vdf
pac-man@lab-vm:~$ sudo mdadm --zero-superblock /dev/vde
# Delete the metadata and the signature
sudo wipefs --all --force /dev/vd{b,c,d,e,f}
# Example output
pac-man@lab-vm:~$ sudo wipefs --all --force /dev/vd{b,c,d,e,f}
pac-man@lab-vm:~$
# Check
lsblk
# Example output
pac-man@lab-vm:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 63.3M 1 loop /snap/core20/1828
loop1 7:1 0 91.9M 1 loop /snap/lxd/24061
loop2 7:2 0 49.9M 1 loop /snap/snapd/18357
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1M 0 part
└─vda2 252:2 0 10G 0 part /
vdb 252:16 0 1G 0 disk
vdc 252:32 0 1G 0 disk
vdd 252:48 0 1G 0 disk
vde 252:64 0 1G 0 disk
vdf 252:80 0 1G 0 disk
vdg 252:96 0 1G 0 disk
# Delete or comment out the array's record in /etc/fstab
sudo nano /etc/fstab
# Example
GNU nano 4.8 /etc/fstab Modified
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/vda2 during curtin installation
/dev/disk/by-uuid/6bb4d62f-ca54-45a0-a8cc-16f8fb1c3c10 / ext4 defaults 0 1
# /dev/md0 /mnt/raid1 ext4 defaults,nofail 0 0 <--commented
Create a RAID5 array from three or more disks
# Create the RAID5 array md0, using 3 disks (vdb,vdc,vdd)
sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/vd[b-d]
# Example output
pac-man@lab-vm:~$ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/vd[b-d]
[sudo] password for pac-man:
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 1046528K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
pac-man@lab-vm:~$
# Check
sudo cat /proc/mdstat
# Example output
pac-man@lab-vm:~$ sudo cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 vdd[3] vdc[1] vdb[0]
2093056 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
# Make a file system (ext4)
sudo mkfs.ext4 /dev/md0
# Example output
pac-man@lab-vm:~$ sudo mkfs.ext4 /dev/md0
mke2fs 1.45.5 (07-Jan-2020)
/dev/md0 contains a ext4 file system
last mounted on Thu Jul 11 08:49:18 2024
Proceed anyway? (y,N) y
Creating filesystem with 523264 4k blocks and 130816 inodes
Filesystem UUID: 089269bb-d93d-46e1-9d39-be6ac38ec89a
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
pac-man@lab-vm:~$
# Create a mount point for the array
sudo mkdir -p /mnt/raid5
# Get the array's filesystem UUID
sudo blkid /dev/md0
# Example output
pac-man@lab-vm:/$ sudo blkid /dev/md0
/dev/md0: UUID="089269bb-d93d-46e1-9d39-be6ac38ec89a" TYPE="ext4"
# Add an entry to /etc/fstab to mount the array automatically at boot
echo 'UUID="089269bb-d93d-46e1-9d39-be6ac38ec89a" /mnt/raid5 ext4 defaults,nofail 0 0' | sudo tee -a /etc/fstab
# Example output
pac-man@lab-vm:/$ echo 'UUID="089269bb-d93d-46e1-9d39-be6ac38ec89a" /mnt/raid5 ext4 defaults,nofail 0 0' | sudo tee -a /etc/fstab
UUID="089269bb-d93d-46e1-9d39-be6ac38ec89a" /mnt/raid5 ext4 defaults,nofail 0 0
# Mounting and checking
sudo mount -a
sudo lsblk /dev/md0
# Example output
pac-man@lab-vm:/$ lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
md0 9:0 0 2G 0 raid5 /mnt/raid5
pac-man@lab-vm:/$
# Getting more information about the array
sudo mdadm --detail /dev/md0
# Example output
pac-man@lab-vm:/$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Jul 11 22:17:03 2024
Raid Level : raid5
Array Size : 2093056 (2044.00 MiB 2143.29 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Thu Jul 11 22:31:03 2024
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : lab-vm:0 (local to host lab-vm)
UUID : 0936a567:8f0ddceb:41af9de7:ed9323a3
Events : 18
Number Major Minor RaidDevice State
0 252 16 0 active sync /dev/vdb
1 252 32 1 active sync /dev/vdc
3 252 48 2 active sync /dev/vdd
pac-man@lab-vm:/$
# Getting information about file systems
lsblk --fs
# Example output
pac-man@lab-vm:/$ lsblk --fs
NAME FSTYPE LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
loop0 squashfs 0 100% /snap/core20/1828
loop1 squashfs 0 100% /snap/lxd/24061
loop2 squashfs 0 100% /snap/snapd/18357
loop3 squashfs 0 100% /snap/snapd/21759
loop4 squashfs 0 100% /snap/core20/2318
sr0
vda
├─vda1
└─vda2 ext4 6bb4d62f-ca54-45a0-a8cc-16f8fb1c3c10 6.4G 29% /
vdb linux_raid_member lab-vm:0 0936a567-8f0d-dceb-41af-9de7ed9323a3
└─md0 ext4 089269bb-d93d-46e1-9d39-be6ac38ec89a 1.8G 0% /mnt/raid5
vdc linux_raid_member lab-vm:0 0936a567-8f0d-dceb-41af-9de7ed9323a3
└─md0 ext4 089269bb-d93d-46e1-9d39-be6ac38ec89a 1.8G 0% /mnt/raid5
vdd linux_raid_member lab-vm:0 0936a567-8f0d-dceb-41af-9de7ed9323a3
└─md0 ext4 089269bb-d93d-46e1-9d39-be6ac38ec89a 1.8G 0% /mnt/raid5
vde
vdf
vdg
pac-man@lab-vm:/$
# Adding a Hot spare disk
sudo mdadm /dev/md0 --add /dev/vde
# Example output
pac-man@lab-vm:/$ sudo mdadm /dev/md0 --add /dev/vde
mdadm: added /dev/vde
# Checking
sudo mdadm --detail /dev/md0
# Example output
pac-man@lab-vm:/$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Jul 11 22:17:03 2024
Raid Level : raid5
Array Size : 2093056 (2044.00 MiB 2143.29 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Jul 11 22:40:52 2024
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : lab-vm:0 (local to host lab-vm)
UUID : 0936a567:8f0ddceb:41af9de7:ed9323a3
Events : 19
Number Major Minor RaidDevice State
0 252 16 0 active sync /dev/vdb
1 252 32 1 active sync /dev/vdc
3 252 48 2 active sync /dev/vdd
4 252 64 - spare /dev/vde # <---OK
pac-man@lab-vm:/$
Creating a file mdadm.conf
The mdadm.conf file stores information about RAID arrays and their components so they are assembled consistently at boot. To create or update it, run (plain `>>` redirection would fail because the redirect runs in your non-root shell, so pipe through tee):
sudo mdadm --detail --scan --verbose | sudo tee -a /etc/mdadm/mdadm.conf
# On Debian/Ubuntu, also run `sudo update-initramfs -u` so the initramfs picks up the new config
# Example content of /etc/mdadm/mdadm.conf
ARRAY /dev/md127 level=raid5 num-devices=3 metadata=1.2 name=lab-vm:127 UUID=1e2f1734:8fdb85e8:c6ddfad4:060a2b78
devices=/dev/vdc,/dev/vdd,/dev/vde
How the OS logs mdadm events
sudo dmesg | grep md0
# Example output
pac-man@lab-vm:~$ sudo dmesg | grep md0
[ 188.517629] md/raid1:md0: not clean -- starting background reconstruction
[ 188.517631] md/raid1:md0: active with 2 out of 2 mirrors
[ 188.517674] md0: detected capacity change from 0 to 1071644672
[ 188.517853] md: resync of RAID array md0
[ 194.062962] md: md0: resync done.
[ 547.886627] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[ 2752.037737] md/raid1:md0: Disk failure on vdc, disabling device.
md/raid1:md0: Operation continuing on 1 devices.
[ 3514.977636] md: recovery of RAID array md0
[ 3520.490807] md: md0: recovery done.
[ 4906.492683] md/raid1:md0: Disk failure on vdb, disabling device.
md/raid1:md0: Operation continuing on 1 devices.
[ 4906.539679] md: recovery of RAID array md0
[ 4911.759835] md: md0: recovery done.
[ 5189.557582] md: recovery of RAID array md0
[ 5194.927427] md: md0: recovery done.
[ 6491.652590] md0: detected capacity change from 1071644672 to 0
[ 6491.652672] md: md0 stopped.
Example Script for Monitoring RAID Array
This script checks the status of the RAID array and sends an email notification if the array is degraded, failed, or not started. Customize the email address and other details as needed.
Install sendmail and mailutils first:
sudo apt install sendmail
sudo apt-get install mailutils
#!/bin/bash
# Log file, date and time
LOG_FILE="/var/log/check_raid.log"
LOG_DATE=$(date +"%A / %F %H:%M")
# Capture the state of the RAID array
MDADM_DETAIL=$(sudo mdadm --detail /dev/md0)
# Count occurrences of the bad states we care about
RAID_STATUS=$(echo "$MDADM_DETAIL" | grep -c degraded)
RAID_STATUS2=$(echo "$MDADM_DETAIL" | grep -c FAILED)
RAID_STATUS3=$(echo "$MDADM_DETAIL" | grep -c "Not Started")
# If the state is degraded, FAILED, or Not Started, log it and send an email
if [ "$RAID_STATUS" -ge 1 ] || [ "$RAID_STATUS2" -ge 1 ] || [ "$RAID_STATUS3" -ge 1 ]; then
    echo "$LOG_DATE -----> THE RAID ARRAY HAS GOT A TROUBLE" >> "$LOG_FILE"
    echo "$MDADM_DETAIL" | mail -s "THE RAID ARRAY HAS GOT A TROUBLE" radik.m@ocitec.us -aFrom:ITB-HOST-01
else
    echo "$LOG_DATE -----> THE STATE OF RAID ARRAY IS CLEAN" >> "$LOG_FILE"
fi
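To run the script on a schedule, a cron entry can be used; a sketch assuming the script was saved as /usr/local/bin/check_raid.sh (a hypothetical path, adjust to yours):

```shell
# /etc/cron.d/check_raid (hypothetical file) — run the check every 15 minutes as root
*/15 * * * * root /usr/local/bin/check_raid.sh
```

Alternatively, mdadm ships its own monitor mode, which can mail alerts without a custom script, e.g. `sudo mdadm --monitor --scan --daemonise --mail=you@example.com` (address is a placeholder).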