{"id":179,"date":"2024-07-11T09:36:28","date_gmt":"2024-07-11T06:36:28","guid":{"rendered":"https:\/\/pac-man.ocitec.us\/?page_id=179"},"modified":"2024-08-13T10:18:24","modified_gmt":"2024-08-13T07:18:24","slug":"0003_mdadm","status":"publish","type":"page","link":"https:\/\/itgen.itbumper.com\/?page_id=179","title":{"rendered":"0003_mdadm"},"content":{"rendered":"\n<p><strong>mdadm<\/strong> is a utility for creating, managing, and monitoring software RAID arrays on Linux.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; first-line: 1; title: ; notranslate\" title=\"\">\n# Installation\n\nsudo apt-get update\nsudo apt-get install mdadm\n<\/pre><\/div>\n\n\n<p>&nbsp;<\/p>\n\n\n\n<p class=\"has-text-align-center\">&nbsp;<strong>Example of creating RAID 1 (mirroring)&nbsp;<\/strong><\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Identify the disks to be used in the array\nlsblk\n\n# Example output\npac-man@lab-vm:~\/labs\/scripts$ lsblk\nNAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT\nloop0    7:0    0 63.3M  1 loop \/snap\/core20\/1828\nloop1    7:1    0 91.9M  1 loop \/snap\/lxd\/24061\nloop2    7:2    0 49.9M  1 loop \/snap\/snapd\/18357\nsr0     11:0    1 1024M  0 rom\nvda    252:0    0   10G  0 disk\n\u251c\u2500vda1 252:1    0    1M  0 part\n\u2514\u2500vda2 252:2    0   10G  0 part \/\nvdb    252:16   0    1G  0 disk\nvdc    252:32   0    1G  0 disk\nvdd    252:48   0    1G  0 disk\nvde    252:64   0    1G  0 disk\nvdf    252:80   0    1G  0 disk\nvdg    252:96   0    1G  0 disk\npac-man@lab-vm:~\/labs\/scripts$\n<\/pre><\/div>\n\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Clean the superblocks on \/dev\/vdb and \/dev\/vdc\nsudo mdadm --zero-superblock --force \/dev\/vd{b,c}\n\n# Example output\npac-man@lab-vm:~\/labs\/scripts$ sudo mdadm --zero-superblock --force 
\/dev\/vd{b,c}\nmdadm: Unrecognised md component device - \/dev\/vdb\nmdadm: Unrecognised md component device - \/dev\/vdc\npac-man@lab-vm:~\/labs\/scripts$\n\n# Wipe all metadata and signatures for \/dev\/vdb and \/dev\/vdc\nsudo wipefs --all --force \/dev\/vd{b,c}\n\n# Create RAID 1\nsudo mdadm --create --verbose \/dev\/md0 --level=1 --raid-devices=2 \/dev\/vdb \/dev\/vdc\n\n# Example output\npac-man@lab-vm:~$ sudo mdadm --create --verbose \/dev\/md0 --level=1 --raid-devices=2 \/dev\/vdb \/dev\/vdc\nmdadm: Note: this array has metadata at the start and\n    may not be suitable as a boot device.  If you plan to\n    store '\/boot' on this device please ensure that\n    your boot-loader understands md\/v1.x metadata, or use\n    --metadata=0.90\nmdadm: size set to 1046528K\nContinue creating array? y\nmdadm: Fail create md0 when using \/sys\/module\/md_mod\/parameters\/new_array\nmdadm: Defaulting to version 1.2 metadata\nmdadm: array \/dev\/md0 started.\npac-man@lab-vm:~$\n\n\n# Checking the initial build (sync) status\nsudo watch -n 5 cat \/proc\/mdstat\n\n# Example output\npac-man@lab-vm:~$ sudo cat \/proc\/mdstat\nPersonalities : &#x5B;linear] &#x5B;multipath] &#x5B;raid0] &#x5B;raid1] &#x5B;raid6] &#x5B;raid5] &#x5B;raid4] &#x5B;raid10]\nmd0 : active raid1 vdc&#x5B;1] vdb&#x5B;0]\n      1046528 blocks super 1.2 &#x5B;2\/2] &#x5B;UU]\n\nunused devices: &lt;none&gt;\npac-man@lab-vm:~$\n\n# Take a look again\npac-man@lab-vm:~$ lsblk\nNAME   MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT\nloop0    7:0    0 63.3M  1 loop  \/snap\/core20\/1828\nloop1    7:1    0 91.9M  1 loop  \/snap\/lxd\/24061\nloop2    7:2    0 49.9M  1 loop  \/snap\/snapd\/18357\nsr0     11:0    1 1024M  0 rom\nvda    252:0    0   10G  0 disk\n\u251c\u2500vda1 252:1    0    1M  0 part\n\u2514\u2500vda2 252:2    0   10G  0 part  \/\nvdb    252:16   0    1G  0 disk\n\u2514\u2500md0    9:0    0 1022M  0 raid1\nvdc    252:32   0    1G  0 disk\n\u2514\u2500md0    9:0    0 1022M  0 raid1\nvdd    252:48   0    1G  0 disk\nvde    
252:64   0    1G  0 disk\nvdf    252:80   0    1G  0 disk\nvdg    252:96   0    1G  0 disk\npac-man@lab-vm:~$\n\n\n<\/pre><\/div>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Create a filesystem on the new RAID array\n\nsudo mkfs.ext4 \/dev\/md0\n\n# Example output\npac-man@lab-vm:~$ sudo mkfs.ext4 \/dev\/md0\nmke2fs 1.45.5 (07-Jan-2020)\nDiscarding device blocks: done\nCreating filesystem with 261632 4k blocks and 65408 inodes\nFilesystem UUID: c853f28b-870a-4676-858f-bf0e2ba5bdd5\nSuperblock backups stored on blocks:\n        32768, 98304, 163840, 229376\n\nAllocating group tables: done\nWriting inode tables: done\nCreating journal (4096 blocks): done\nWriting superblocks and filesystem accounting information: done\n\npac-man@lab-vm:~$\n\n<\/pre><\/div>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Mount the RAID array:\n\nsudo mkdir -p \/mnt\/raid1\nsudo mount \/dev\/md0 \/mnt\/raid1\n<\/pre><\/div>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Add an entry to \/etc\/fstab to mount the array automatically at boot\n\nsudo echo '\/dev\/md0 \/mnt\/raid1 ext4 defaults,nofail 0 0' | sudo tee -a \/etc\/fstab\n\n# Example output\npac-man@lab-vm:~$ sudo echo '\/dev\/md0 \/mnt\/raid1 ext4 defaults,nofail 0 0' | sudo tee -a \/etc\/fstab\n\/dev\/md0 \/mnt\/raid1 ext4 defaults,nofail 0 0\npac-man@lab-vm:~$\n\n# It is better to use a UUID instead of \/dev\/md0\n# Get the UUID with sudo blkid, then add the entry:\n# sudo echo 'UUID=&quot;c853f28b-870a-4676-858f-bf0e2ba5bdd5&quot; \/mnt\/raid1 ext4 defaults,nofail 0 0' | sudo tee -a \/etc\/fstab\n\n\n<\/pre><\/div>\n\n<p>The fields reported by <strong>sudo mdadm --detail<\/strong>:<\/p>\n\n<ul>\n<li data-tadv-p=\"keep\"><strong>Version<\/strong> \u2014 the metadata version.<\/li>\n<li data-tadv-p=\"keep\"><strong>Creation Time<\/strong> \u2014 the date and time the array was created.<\/li>\n<li 
data-tadv-p=\"keep\"><strong>Raid Level<\/strong> \u2014 the RAID level.<\/li>\n<li data-tadv-p=\"keep\"><strong>Array Size<\/strong> \u2014 the usable capacity of the RAID.<\/li>\n<li data-tadv-p=\"keep\"><strong>Used Dev Size<\/strong> \u2014 the amount of space used on each member device. Its relation to the array size depends on the level: for RAID1 it equals the array size, while for RAID5 one device's worth of capacity is spent on parity.<\/li>\n<li data-tadv-p=\"keep\"><strong>Raid Devices<\/strong> \u2014 the number of devices used for the RAID.<\/li>\n<li data-tadv-p=\"keep\"><strong>Total Devices<\/strong> \u2014 the number of devices added to the RAID.<\/li>\n<li data-tadv-p=\"keep\"><strong>Update Time<\/strong> \u2014 the date and time when the array was last modified.<\/li>\n<li data-tadv-p=\"keep\"><strong>State<\/strong> \u2014 the current state; clean means everything is fine.<\/li>\n<li data-tadv-p=\"keep\"><strong>Active Devices<\/strong> \u2014 the number of devices running in the array.<\/li>\n<li data-tadv-p=\"keep\"><strong>Working Devices<\/strong> \u2014 the number of devices added to the array in working condition.<\/li>\n<li data-tadv-p=\"keep\"><strong>Failed Devices<\/strong> \u2014 the number of failed devices.<\/li>\n<li data-tadv-p=\"keep\"><strong>Spare Devices<\/strong> \u2014 the number of spare devices.<\/li>\n<li data-tadv-p=\"keep\"><strong>Consistency Policy<\/strong> \u2014 the consistency policy of the active array (in case of an unexpected failure). By default, resync is used \u2014 full resynchronization after recovery. 
There may also be bitmap, journal, ppl.<\/li>\n<li data-tadv-p=\"keep\"><strong>Name<\/strong> \u2014 the array name, by default the hostname plus the array number (here lab-vm:0).<\/li>\n<li data-tadv-p=\"keep\"><strong>UUID<\/strong> \u2014 the identifier of the array.<\/li>\n<li data-tadv-p=\"keep\"><strong>Events<\/strong> \u2014 the number of update events.<\/li>\n<li data-tadv-p=\"keep\"><strong>Chunk Size<\/strong> (for RAID5) \u2014 the size, in kilobytes, of the data chunk written to each disk in a stripe.<\/li>\n<\/ul>\n<p style=\"text-align: center;\" data-tadv-p=\"keep\"><strong>Managing the RAID Array<\/strong><\/p>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Get details from mdadm\nsudo mdadm --detail --scan --verbose\n\n# Example output\nARRAY \/dev\/md0 level=raid1 num-devices=2 metadata=1.2 name=lab-vm:0 UUID=8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01\n   devices=\/dev\/vdb,\/dev\/vdc\n\n# Checking the array status\nsudo cat \/proc\/mdstat\n\n# Example output\npac-man@lab-vm:~$ sudo cat \/proc\/mdstat\nPersonalities : &#x5B;linear] &#x5B;multipath] &#x5B;raid0] &#x5B;raid1] &#x5B;raid6] &#x5B;raid5] &#x5B;raid4] &#x5B;raid10]\nmd0 : active raid1 vdc&#x5B;1] vdb&#x5B;0]\n      1046528 blocks super 1.2 &#x5B;2\/2] &#x5B;UU] # looks good &#x5B;2\/2] &#x5B;UU]\n\n# Getting detailed array information\nsudo mdadm --detail \/dev\/md0\n\n# Example output\npac-man@lab-vm:~$ sudo mdadm --detail \/dev\/md0\n\/dev\/md0:\n           Version : 1.2\n     Creation Time : Thu Jul 11 08:43:19 2024\n        Raid Level : raid1\n        Array Size : 1046528 (1022.00 MiB 1071.64 MB)\n     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)\n      Raid Devices : 2\n     Total Devices : 2\n       Persistence : Superblock is persistent\n\n       Update Time : Thu Jul 11 08:50:25 2024\n             State : clean\n    Active Devices : 2\n   Working Devices : 2\n    Failed Devices : 0\n     Spare Devices : 0\n\nConsistency Policy : resync\n\n              Name : lab-vm:0  (local to host 
lab-vm)\n              UUID : 8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01\n            Events : 17\n\n    Number   Major   Minor   RaidDevice State\n       0     252       16        0      active sync   \/dev\/vdb\n       1     252       32        1      active sync   \/dev\/vdc\n\n\n\n# Failure simulation. For example, disk \/dev\/vdc fails or is accidentally removed.\n# To simulate this, I used the command: sudo mdadm --fail \/dev\/md0 \/dev\/vdc\n\n# Example output\npac-man@lab-vm:~$ sudo cat \/proc\/mdstat\nPersonalities : &#x5B;linear] &#x5B;multipath] &#x5B;raid0] &#x5B;raid1] &#x5B;raid6] &#x5B;raid5] &#x5B;raid4] &#x5B;raid10]\nmd0 : active raid1 vdc&#x5B;1](F) vdb&#x5B;0]\n      1046528 blocks super 1.2 &#x5B;2\/1] &#x5B;U_]  # &lt;--- &#x5B;2\/1] &#x5B;U_] = not good\n\npac-man@lab-vm:~$ sudo mdadm --detail  \/dev\/md0\n\/dev\/md0:\n           Version : 1.2\n     Creation Time : Thu Jul 11 08:43:19 2024\n        Raid Level : raid1\n        Array Size : 1046528 (1022.00 MiB 1071.64 MB)\n     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)\n      Raid Devices : 2\n     Total Devices : 2\n       Persistence : Superblock is persistent\n\n       Update Time : Thu Jul 11 09:26:02 2024\n             State : clean, degraded    # &lt;--degraded = not good\n    Active Devices : 1\n   Working Devices : 1\n    Failed Devices : 1\n     Spare Devices : 0\n\nConsistency Policy : resync\n\n              Name : lab-vm:0  (local to host lab-vm)\n              UUID : 8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01\n            Events : 19\n\n    Number   Major   Minor   RaidDevice State\n       0     252       16        0      active sync   \/dev\/vdb\n       -       0        0        1      removed\n\n       1     252       32        -      faulty   \/dev\/vdc # &lt;--failed\n\n<\/pre><\/div>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Adding a new disk to the array\n\nsudo mdadm --add \/dev\/md0 \/dev\/vde\n\n# Example output\npac-man@lab-vm:~$ sudo mdadm --add 
\/dev\/md0 \/dev\/vde\nmdadm: added \/dev\/vde\npac-man@lab-vm:~$\n\n# Check now\n# Example output\npac-man@lab-vm:~$ sudo cat \/proc\/mdstat\nPersonalities : &#x5B;linear] &#x5B;multipath] &#x5B;raid0] &#x5B;raid1] &#x5B;raid6] &#x5B;raid5] &#x5B;raid4] &#x5B;raid10]\nmd0 : active raid1 vde&#x5B;2] vdc&#x5B;1](F) vdb&#x5B;0]\n      1046528 blocks super 1.2 &#x5B;2\/2] &#x5B;UU] # &lt;--good &#x5B;2\/2] &#x5B;UU]\n\nunused devices: &lt;none&gt;\npac-man@lab-vm:~$\n\npac-man@lab-vm:~$ sudo mdadm --detail  \/dev\/md0\n\/dev\/md0:\n           Version : 1.2\n     Creation Time : Thu Jul 11 08:43:19 2024\n        Raid Level : raid1\n        Array Size : 1046528 (1022.00 MiB 1071.64 MB)\n     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)\n      Raid Devices : 2\n     Total Devices : 3\n       Persistence : Superblock is persistent\n\n       Update Time : Thu Jul 11 09:38:51 2024\n             State : clean   # &lt;--looks good\n    Active Devices : 2\n   Working Devices : 2\n    Failed Devices : 1       # &lt;--take a look, but RAID is OK\n     Spare Devices : 0\n\nConsistency Policy : resync\n\n              Name : lab-vm:0  (local to host lab-vm)\n              UUID : 8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01\n            Events : 38\n\n    Number   Major   Minor   RaidDevice State\n       0     252       16        0      active sync   \/dev\/vdb\n       2     252       64        1      active sync   \/dev\/vde\n\n       1     252       32        -      faulty   \/dev\/vdc\npac-man@lab-vm:~$\n\n\n\n# Now you can delete \/dev\/vdc from the array\nsudo mdadm --remove \/dev\/md0 \/dev\/vdc\n\n# Example output\npac-man@lab-vm:~$ sudo mdadm --remove \/dev\/md0 \/dev\/vdc\nmdadm: hot removed \/dev\/vdc from \/dev\/md0\npac-man@lab-vm:~$\n\n# Check again\npac-man@lab-vm:~$ sudo mdadm --detail  \/dev\/md0\n\/dev\/md0:\n           Version : 1.2\n     Creation Time : Thu Jul 11 08:43:19 2024\n        Raid Level : raid1\n        Array Size : 1046528 (1022.00 MiB 1071.64 MB)\n     
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)\n      Raid Devices : 2\n     Total Devices : 2\n       Persistence : Superblock is persistent\n\n       Update Time : Thu Jul 11 09:44:40 2024\n             State : clean\n    Active Devices : 2\n   Working Devices : 2\n    Failed Devices : 0\n     Spare Devices : 0\n\nConsistency Policy : resync\n\n              Name : lab-vm:0  (local to host lab-vm)\n              UUID : 8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01\n            Events : 39\n\n    Number   Major   Minor   RaidDevice State\n       0     252       16        0      active sync   \/dev\/vdb\n       2     252       64        1      active sync   \/dev\/vde\npac-man@lab-vm:~$\n# Everything looks OK\n\n<\/pre><\/div>\n\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Remove the failed disk from the array\n\nsudo mdadm --fail \/dev\/md0 \/dev\/sda\nsudo mdadm --remove \/dev\/md0 \/dev\/sda\n<\/pre><\/div>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Add a new disk to the array\n\nsudo mdadm --add \/dev\/md0 \/dev\/sde\n<\/pre><\/div>\n\n<p style=\"text-align: center;\"><strong>Reassembling the array<\/strong><\/p>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# To automatically re-assemble a previously stopped or fallen-apart array from disks that were already RAID members, do\n\nsudo mdadm --assemble --scan\n\n# You can also specify the disks explicitly\nsudo mdadm --assemble \/dev\/md0 \/dev\/vdb \/dev\/vdc\n<\/pre><\/div>\n\n<p><\/p>\n<p style=\"text-align: center;\"><strong>Hot Spare disk<\/strong><\/p>\n<p>If a hot-spare disk is present in the array, it will automatically take the place of a primary disk that fails.<\/p>\n<p>The Hot 
Spare disk will be the one that will be added to the array<\/p>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Add a Hot spare disk in the array\nsudo mdadm \/dev\/md0 --add \/dev\/vdf\n\n# Example output\npac-man@lab-vm:~$ sudo mdadm \/dev\/md0 --add \/dev\/vdf\nmdadm: added \/dev\/vdf\npac-man@lab-vm:~$\n\n# Check now\npac-man@lab-vm:~$ sudo mdadm --detail  \/dev\/md0\n\/dev\/md0:\n           Version : 1.2\n     Creation Time : Thu Jul 11 08:43:19 2024\n        Raid Level : raid1\n        Array Size : 1046528 (1022.00 MiB 1071.64 MB)\n     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)\n      Raid Devices : 2\n     Total Devices : 3\n       Persistence : Superblock is persistent\n\n       Update Time : Thu Jul 11 09:58:06 2024\n             State : clean\n    Active Devices : 2\n   Working Devices : 3\n    Failed Devices : 0\n     Spare Devices : 1\n\nConsistency Policy : resync\n\n              Name : lab-vm:0  (local to host lab-vm)\n              UUID : 8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01\n            Events : 40\n\n    Number   Major   Minor   RaidDevice State\n       0     252       16        0      active sync   \/dev\/vdb\n       2     252       64        1      active sync   \/dev\/vde\n\n       3     252       80        -      spare   \/dev\/vdf  # &lt;---OK\npac-man@lab-vm:~$\n\n\n\n# Now simulate a disk fail\npac-man@lab-vm:~$ sudo mdadm \/dev\/md0 --fail \/dev\/vdb\nmdadm: set \/dev\/vdb faulty in \/dev\/md0\npac-man@lab-vm:~$\n\n# Check again\npac-man@lab-vm:~$ sudo mdadm --detail  \/dev\/md0\n\/dev\/md0:\n           Version : 1.2\n     Creation Time : Thu Jul 11 08:43:19 2024\n        Raid Level : raid1\n        Array Size : 1046528 (1022.00 MiB 1071.64 MB)\n     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)\n      Raid Devices : 2\n     Total Devices : 3\n       Persistence : Superblock is persistent\n\n       Update Time : Thu Jul 11 10:02:02 2024\n             State : 
clean\n    Active Devices : 2\n   Working Devices : 2\n    Failed Devices : 1\n     Spare Devices : 0\n\nConsistency Policy : resync\n\n              Name : lab-vm:0  (local to host lab-vm)\n              UUID : 8bcf87ec:3bd0c8f9:37ebf8bd:3fa8fc01\n            Events : 59\n\n    Number   Major   Minor   RaidDevice State\n       3     252       80        0      active sync   \/dev\/vdf\n       2     252       64        1      active sync   \/dev\/vde\n\n       0     252       16        -      faulty   \/dev\/vdb\npac-man@lab-vm:~$\n\n# Well done: the hot-spare disk \/dev\/vdf substituted the failed disk \/dev\/vdb\n<\/pre><\/div>\n\n<p><\/p>\n<p style=\"text-align: center;\"><strong>RAID1 with extra mirrors<\/strong><\/p>\n<p>For RAID1, two disks are enough, but you can use more.&nbsp; Just add a disk (or disks) to the array and then raise raid-devices. Be aware that reducing raid-devices again later is not a trivial operation!<\/p>\n\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Add one more disk to the array\nsudo mdadm \/dev\/md0 --add \/dev\/vdd\n\n# Example output\npac-man@lab-vm:~$ sudo mdadm \/dev\/md0 --add \/dev\/vdd\nmdadm: added \/dev\/vdd\n\n# Grow the number of active devices to 3\nsudo mdadm -G \/dev\/md0 --raid-devices=3\n\n# Example output\npac-man@lab-vm:~$ sudo mdadm -G \/dev\/md0 --raid-devices=3\nraid_disks for \/dev\/md0 set to 3\npac-man@lab-vm:~$\npac-man@lab-vm:~$ sudo cat \/proc\/mdstat\nPersonalities : &#x5B;linear] &#x5B;multipath] &#x5B;raid0] &#x5B;raid1] &#x5B;raid6] &#x5B;raid5] &#x5B;raid4] &#x5B;raid10]\nmd0 : active raid1 vdd&#x5B;4] vdf&#x5B;3] vde&#x5B;2] vdb&#x5B;0](F)\n      1046528 blocks super 1.2 &#x5B;3\/2] &#x5B;UU_]\n      &#x5B;=================&gt;...]  
recovery = 87.5% (916608\/1046528) finish=0.0min speed=229152K\/sec\n\n\n# Check again\npac-man@lab-vm:~$ sudo cat \/proc\/mdstat\nPersonalities : &#x5B;linear] &#x5B;multipath] &#x5B;raid0] &#x5B;raid1] &#x5B;raid6] &#x5B;raid5] &#x5B;raid4] &#x5B;raid10]\nmd0 : active raid1 vdd&#x5B;4] vdf&#x5B;3] vde&#x5B;2] vdb&#x5B;0](F)\n      1046528 blocks super 1.2 &#x5B;3\/3] &#x5B;UUU]    # &lt;--- &#x5B;3\/3] &#x5B;UUU] = OK\n\n\n<\/pre><\/div>\n\n<p><\/p>\n<p style=\"text-align: center;\"><strong>Delete the Array<\/strong><\/p>\n<p>If we need to completely disassemble the RAID, we first unmount and stop it, then do:<\/p>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Unmount the array (see fstab)\nsudo umount \/mnt\/raid1\n\n# If you can't unmount because the mount point is busy, then do:\n# sudo fuser -vm \/mnt\/folder_name\n# kill processes\n\n# Stop the array\nsudo mdadm -S \/dev\/md0\n\n# If you can't stop the array because it is busy, then do:\n# sudo lsof -f -- \/dev\/md0\n# kill processes\n\n\n# Example output\npac-man@lab-vm:~$ sudo mdadm -S \/dev\/md0\nmdadm: stopped \/dev\/md0\n\n\n# Erase 
superblocks for each disk\nsudo mdadm --zero-superblock \/dev\/vdx\n\n# Example output\npac-man@lab-vm:~$ sudo mdadm --zero-superblock \/dev\/vdb\npac-man@lab-vm:~$ sudo mdadm --zero-superblock \/dev\/vdc\npac-man@lab-vm:~$ sudo mdadm --zero-superblock \/dev\/vdd\npac-man@lab-vm:~$ sudo mdadm --zero-superblock \/dev\/vdf\npac-man@lab-vm:~$ sudo mdadm --zero-superblock \/dev\/vde\n\n\n# Delete the metadata and the signature\nsudo wipefs --all --force \/dev\/vd{b,c,d,e,f}\n\n# Example output\npac-man@lab-vm:~$ sudo wipefs --all --force \/dev\/vd{b,c,d,e,f}\npac-man@lab-vm:~$\n\n# Check\nlsblk\n\n#Example output\npac-man@lab-vm:~$ lsblk\nNAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT\nloop0    7:0    0 63.3M  1 loop \/snap\/core20\/1828\nloop1    7:1    0 91.9M  1 loop \/snap\/lxd\/24061\nloop2    7:2    0 49.9M  1 loop \/snap\/snapd\/18357\nsr0     11:0    1 1024M  0 rom\nvda    252:0    0   10G  0 disk\n\u251c\u2500vda1 252:1    0    1M  0 part\n\u2514\u2500vda2 252:2    0   10G  0 part \/\nvdb    252:16   0    1G  0 disk\nvdc    252:32   0    1G  0 disk\nvdd    252:48   0    1G  0 disk\nvde    252:64   0    1G  0 disk\nvdf    252:80   0    1G  0 disk\nvdg    252:96   0    1G  0 disk\n\n# Delete or comment a record in fstab\nsudo nano \/etc\/fstab\n\n# Example \n  GNU nano 4.8                                                         \/etc\/fstab                                                          Modified\n# \/etc\/fstab: static file system information.\n#\n# Use 'blkid' to print the universally unique identifier for a\n# device; this may be used with UUID= as a more robust way to name devices\n# that works even if disks are added and removed. 
See fstab(5).\n#\n# &lt;file system&gt; &lt;mount point&gt;   &lt;type&gt;  &lt;options&gt;       &lt;dump&gt;  &lt;pass&gt;\n# \/ was on \/dev\/vda2 during curtin installation\n\/dev\/disk\/by-uuid\/6bb4d62f-ca54-45a0-a8cc-16f8fb1c3c10 \/ ext4 defaults 0 1\n# \/dev\/md0 \/mnt\/raid1 ext4 defaults,nofail 0 0  &lt;--commented\n<\/pre><\/div>\n\n<p><\/p>\n<p style=\"text-align: center;\"><strong>Create a RAID5 array from three or more disks<\/strong><\/p>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Create the RAID5 array md0, using 3 disks (vdb,vdc,vdd)\n\nsudo mdadm --create --verbose \/dev\/md0 --level=5 --raid-devices=3 \/dev\/vd&#x5B;b-d]\n\n# Example output\npac-man@lab-vm:~$ sudo mdadm --create --verbose \/dev\/md0 --level=5 --raid-devices=3 \/dev\/vd&#x5B;b-d]\n&#x5B;sudo] password for pac-man:\nmdadm: layout defaults to left-symmetric\nmdadm: layout defaults to left-symmetric\nmdadm: chunk size defaults to 512K\nmdadm: size set to 1046528K\nmdadm: Defaulting to version 1.2 metadata\nmdadm: array \/dev\/md0 started.\npac-man@lab-vm:~$\n\n# Check\nsudo cat \/proc\/mdstat\n\n# Example output\npac-man@lab-vm:~$ sudo cat \/proc\/mdstat\nPersonalities : &#x5B;linear] &#x5B;multipath] &#x5B;raid0] &#x5B;raid1] &#x5B;raid6] &#x5B;raid5] &#x5B;raid4] &#x5B;raid10]\nmd0 : active raid5 vdd&#x5B;3] vdc&#x5B;1] vdb&#x5B;0]\n      2093056 blocks super 1.2 level 5, 512k chunk, algorithm 2 &#x5B;3\/3] &#x5B;UUU]\n\nunused devices: &lt;none&gt;\n\n# Make a file system (ext4)\nsudo mkfs.ext4 \/dev\/md0\n\n# Example output\npac-man@lab-vm:~$ sudo mkfs.ext4 \/dev\/md0\nmke2fs 1.45.5 (07-Jan-2020)\n\/dev\/md0 contains a ext4 file system\n        
last mounted on Thu Jul 11 08:49:18 2024\nProceed anyway? (y,N) y\nCreating filesystem with 523264 4k blocks and 130816 inodes\nFilesystem UUID: 089269bb-d93d-46e1-9d39-be6ac38ec89a\nSuperblock backups stored on blocks:\n        32768, 98304, 163840, 229376, 294912\n\nAllocating group tables: done\nWriting inode tables: done\nCreating journal (8192 blocks): done\nWriting superblocks and filesystem accounting information: done\n\npac-man@lab-vm:~$\n\n# Create a mount point \/mnt\/raid5\nsudo mkdir -p \/mnt\/raid5\n\n\n# Get the array's UUID\nsudo blkid \/dev\/md0\n\n# Example output\npac-man@lab-vm:\/$ sudo blkid \/dev\/md0\n\/dev\/md0: UUID=&quot;089269bb-d93d-46e1-9d39-be6ac38ec89a&quot; TYPE=&quot;ext4&quot;\n\n\n# Add an entry to \/etc\/fstab to mount the array automatically at boot\nsudo echo 'UUID=&quot;089269bb-d93d-46e1-9d39-be6ac38ec89a&quot; \/mnt\/raid5 ext4 defaults,nofail 0 0' | sudo tee -a \/etc\/fstab\n\n# Example output\npac-man@lab-vm:\/$ sudo echo 'UUID=&quot;089269bb-d93d-46e1-9d39-be6ac38ec89a&quot; \/mnt\/raid5 ext4 defaults,nofail 0 0' | sudo tee -a \/etc\/fstab\nUUID=&quot;089269bb-d93d-46e1-9d39-be6ac38ec89a&quot; \/mnt\/raid5 ext4 defaults,nofail 0 0\n\n# Mounting and checking\nsudo mount -a\nsudo lsblk \/dev\/md0\n\n# Example output\npac-man@lab-vm:\/$ lsblk \/dev\/md0\nNAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT\nmd0    9:0    0   2G  0 raid5 \/mnt\/raid5\npac-man@lab-vm:\/$\n\n\n# Getting more information about the array\nsudo mdadm --detail \/dev\/md0\n\n# Example output\npac-man@lab-vm:\/$ sudo mdadm --detail \/dev\/md0\n\/dev\/md0:\n           Version : 1.2\n     Creation Time : Thu Jul 11 22:17:03 2024\n        Raid Level : raid5\n        Array Size : 2093056 (2044.00 MiB 2143.29 MB)\n     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)\n      Raid Devices : 3\n     Total Devices : 3\n       Persistence : Superblock is persistent\n\n       Update Time : Thu Jul 11 22:31:03 2024\n             State : clean\n    Active Devices : 3\n   Working Devices : 
3\n    Failed Devices : 0\n     Spare Devices : 0\n\n            Layout : left-symmetric\n        Chunk Size : 512K\n\nConsistency Policy : resync\n\n              Name : lab-vm:0  (local to host lab-vm)\n              UUID : 0936a567:8f0ddceb:41af9de7:ed9323a3\n            Events : 18\n\n    Number   Major   Minor   RaidDevice State\n       0     252       16        0      active sync   \/dev\/vdb\n       1     252       32        1      active sync   \/dev\/vdc\n       3     252       48        2      active sync   \/dev\/vdd\npac-man@lab-vm:\/$\n\n\n# Getting information about file systems\nlsblk --fs\n\n# Example output\npac-man@lab-vm:\/$ lsblk --fs\nNAME   FSTYPE            LABEL    UUID                                 FSAVAIL FSUSE% MOUNTPOINT\nloop0  squashfs                                                              0   100% \/snap\/core20\/1828\nloop1  squashfs                                                              0   100% \/snap\/lxd\/24061\nloop2  squashfs                                                              0   100% \/snap\/snapd\/18357\nloop3  squashfs                                                              0   100% \/snap\/snapd\/21759\nloop4  squashfs                                                              0   100% \/snap\/core20\/2318\nsr0\nvda\n\u251c\u2500vda1\n\u2514\u2500vda2 ext4                       6bb4d62f-ca54-45a0-a8cc-16f8fb1c3c10    6.4G    29% \/\nvdb    linux_raid_member lab-vm:0 0936a567-8f0d-dceb-41af-9de7ed9323a3\n\u2514\u2500md0  ext4                       089269bb-d93d-46e1-9d39-be6ac38ec89a    1.8G     0% \/mnt\/raid5\nvdc    linux_raid_member lab-vm:0 0936a567-8f0d-dceb-41af-9de7ed9323a3\n\u2514\u2500md0  ext4                       089269bb-d93d-46e1-9d39-be6ac38ec89a    1.8G     0% \/mnt\/raid5\nvdd    linux_raid_member lab-vm:0 0936a567-8f0d-dceb-41af-9de7ed9323a3\n\u2514\u2500md0  ext4                       089269bb-d93d-46e1-9d39-be6ac38ec89a    1.8G     0% 
\/mnt\/raid5\nvde\nvdf\nvdg\npac-man@lab-vm:\/$\n\n# Adding a Hot spare disk\nsudo mdadm \/dev\/md0 --add \/dev\/vde\n\n# Example output\npac-man@lab-vm:\/$ sudo mdadm \/dev\/md0 --add \/dev\/vde\nmdadm: added \/dev\/vde\n\n# Checking\nsudo mdadm --detail \/dev\/md0\n\n# Example output\npac-man@lab-vm:\/$ sudo mdadm --detail \/dev\/md0\n\/dev\/md0:\n           Version : 1.2\n     Creation Time : Thu Jul 11 22:17:03 2024\n        Raid Level : raid5\n        Array Size : 2093056 (2044.00 MiB 2143.29 MB)\n     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)\n      Raid Devices : 3\n     Total Devices : 4\n       Persistence : Superblock is persistent\n\n       Update Time : Thu Jul 11 22:40:52 2024\n             State : clean\n    Active Devices : 3\n   Working Devices : 4\n    Failed Devices : 0\n     Spare Devices : 1\n\n            Layout : left-symmetric\n        Chunk Size : 512K\n\nConsistency Policy : resync\n\n              Name : lab-vm:0  (local to host lab-vm)\n              UUID : 0936a567:8f0ddceb:41af9de7:ed9323a3\n            Events : 19\n\n    Number   Major   Minor   RaidDevice State\n       0     252       16        0      active sync   \/dev\/vdb\n       1     252       32        1      active sync   \/dev\/vdc\n       3     252       48        2      active sync   \/dev\/vdd\n\n       4     252       64        -      spare   \/dev\/vde # &lt;---OK\npac-man@lab-vm:\/$\n\n<\/pre><\/div>\n\n<p style=\"text-align: center;\" data-tadv-p=\"keep\"><strong>Creating a file mdadm.conf<\/strong><\/p>\n<p>The mdadm.conf file contains information about RAID arrays and their components. 
To create it, run the following command:<\/p>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# Note: a plain &quot;sudo mdadm ... &gt;&gt; \/etc\/mdadm\/mdadm.conf&quot; would fail, because the redirection runs as the regular user\nsudo mdadm --detail --scan --verbose | sudo tee -a \/etc\/mdadm\/mdadm.conf\n\n# Resulting entry in \/etc\/mdadm\/mdadm.conf\nARRAY \/dev\/md127 level=raid5 num-devices=3 metadata=1.2 name=lab-vm:127 UUID=1e2f1734:8fdb85e8:c6ddfad4:060a2b78\n   devices=\/dev\/vdc,\/dev\/vdd,\/dev\/vde\n<\/pre><\/div>\n\n<p><\/p>\n<p style=\"text-align: center;\"><strong>How the OS logs mdadm events<\/strong><\/p>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo dmesg | grep md0\n\n# Example output\npac-man@lab-vm:~$ sudo dmesg | grep md0\n&#x5B;  188.517629] md\/raid1:md0: not clean -- starting background reconstruction\n&#x5B;  188.517631] md\/raid1:md0: active with 2 out of 2 mirrors\n&#x5B;  188.517674] md0: detected capacity change from 0 to 1071644672\n&#x5B;  188.517853] md: resync of RAID array md0\n&#x5B;  194.062962] md: md0: resync done.\n&#x5B;  547.886627] EXT4-fs (md0): mounted filesystem with ordered data mode. 
Opts: (null)\n&#x5B; 2752.037737] md\/raid1:md0: Disk failure on vdc, disabling device.\n               md\/raid1:md0: Operation continuing on 1 devices.\n&#x5B; 3514.977636] md: recovery of RAID array md0\n&#x5B; 3520.490807] md: md0: recovery done.\n&#x5B; 4906.492683] md\/raid1:md0: Disk failure on vdb, disabling device.\n               md\/raid1:md0: Operation continuing on 1 devices.\n&#x5B; 4906.539679] md: recovery of RAID array md0\n&#x5B; 4911.759835] md: md0: recovery done.\n&#x5B; 5189.557582] md: recovery of RAID array md0\n&#x5B; 5194.927427] md: md0: recovery done.\n&#x5B; 6491.652590] md0: detected capacity change from 1071644672 to 0\n&#x5B; 6491.652672] md: md0 stopped.\n\n<\/pre><\/div>\n\n<p><\/p>\n<p style=\"text-align: center;\"><strong>Example Script for Monitoring RAID Array<\/strong><\/p>\n<p style=\"text-align: justify;\">This script will check the status of the RAID array and send an email notification if the array is not in a &#8220;clean&#8221; state. You can customize the email address and other details as needed.<\/p>\n<p>You need to install sendmail and mailutils<\/p>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo apt install sendmail\nsudo apt-get install mailutils\n<\/pre><\/div>\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n#!\/bin\/bash\n\n# LOG file, date and time\nLOG_FILE=&quot;\/var\/log\/check_raid.log&quot;\nLOG_DATE=`date +&quot;%A \/ %F %H:%M&quot;`\n\n# Checking a state of RAID Array\nMDADM_DETAIL=$(sudo mdadm --detail \/dev\/md0)\nRAID_STATUS=$(echo &quot;$MDADM_DETAIL&quot; | grep degraded | wc -l)\nRAID_STATUS2=$(echo &quot;$MDADM_DETAIL&quot; | grep FAILED | wc -l)\nRAID_STATUS3=$(echo &quot;$MDADM_DETAIL&quot; | grep &quot;Not Started&quot; | wc -l)\n\n# If the state degraded or FAILED or Not Started send an email\nif &#x5B; $RAID_STATUS -ge 1 ] || &#x5B; $RAID_STATUS2 -ge 1 ] || 
&#x5B; $RAID_STATUS3 -ge 1 ] ; then\n    echo &quot;$LOG_DATE -----&gt; THE RAID ARRAY HAS A PROBLEM&quot; &gt;&gt; &quot;$LOG_FILE&quot;\n    echo &quot;$MDADM_DETAIL&quot; | mail -s &quot;THE RAID ARRAY HAS A PROBLEM&quot; radik.m@ocitec.us -aFrom:ITB-HOST-01\nelse\n    echo &quot;$LOG_DATE -----&gt; THE RAID ARRAY STATE IS CLEAN&quot; &gt;&gt; &quot;$LOG_FILE&quot;\nfi\n\n<\/pre><\/div>","protected":false},"excerpt":{"rendered":"<p>&nbsp; &nbsp;Example of creating RAID 1 (mirroring)&nbsp; Version &#8211; is the metadata version. Creation Time \u2014 the date at the time the array was created. Raid Level \u2014 the RAID level. Array Size \u2014 the amount of disk space for the RAID. Used Dev Size \u2014 the volume used for devices. There will be an &hellip; <a href=\"https:\/\/itgen.itbumper.com\/?page_id=179\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;0003_mdadm&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"categories":[],"tags":[],"_links":{"self":[{"href":"https:\/\/itgen.itbumper.com\/index.php?rest_route=\/wp\/v2\/pages\/179"}],"collection":[{"href":"https:\/\/itgen.itbumper.com\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/itgen.itbumper.com\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/itgen.itbumper.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/itgen.itbumper.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=179"}],"version-history":[{"count":33,"href":"https:\/\/itgen.itbumper.com\/index.php?rest_route=\/wp\/v2\/pages\/179\/revisions"}],"predecessor-version":[{"id":269,"href":"https:\/\/itgen.itbumper.com\/index.php?rest_route=\/wp\/v2\/pages\/179\/revisions\/269"}],"wp:attachment":[{"href":"https:\/\/itgen.itbumper.com\/index.php?rest_route=%
2Fwp%2Fv2%2Fmedia&parent=179"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/itgen.itbumper.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=179"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/itgen.itbumper.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=179"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}