{"id":201,"date":"2017-03-14T12:06:56","date_gmt":"2017-03-14T11:06:56","guid":{"rendered":"http:\/\/owncloud.gonscak.sk\/?p=201"},"modified":"2017-03-14T12:06:56","modified_gmt":"2017-03-14T11:06:56","slug":"how-to-create-software-raid-1-with-mdadm-witsh-spare","status":"publish","type":"post","link":"https:\/\/www.gonscak.sk\/?p=201","title":{"rendered":"How to create software RAID 1 with mdadm with a spare"},"content":{"rendered":"<p>First, we must create partitions of the SAME size in blocks on both disks:<\/p>\n<pre><strong>fdisk \/dev\/sdc<\/strong>\n&gt; n (new partition)\n&gt; p (primary type of partition)\n&gt; 1  (partition number)\n&gt; 2048 (first sector: default)\n&gt; 1953525167 (last sector: default)\n&gt; t (change partition type) - selected partition nb. 1\n&gt; fd (set it to Linux raid autodetect)\n&gt; w (write and exit)<\/pre>\n<pre><strong>fdisk \/dev\/sdd<\/strong>\n&gt; n (new partition)\n&gt; p (primary type of partition)\n&gt; 1  (partition number)\n&gt; 2048 (first sector: default)\n&gt; 1953525167 (last sector: default)\n&gt; t (change partition type) - selected partition nb. 
1\n&gt; fd (set it to Linux raid autodetect)\n&gt; w (write and exit)<\/pre>\n<pre><strong>fdisk -l \/dev\/sdc<\/strong>\nDisk \/dev\/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors\nUnits: sectors of 1 * 512 = 512 bytes\nDevice\u00a0\u00a0\u00a0\u00a0 Boot Start\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 End\u00a0\u00a0\u00a0 Sectors\u00a0\u00a0 Size Id Type\n\/dev\/sdc1\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 2048 1953525167 1953523120 931.5G fd Linux raid autodetect\nroot@cl3-amd-node2:~# fdisk -l \/dev\/sdd\nDisk \/dev\/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors\nUnits: sectors of 1 * 512 = 512 bytes\nDevice\u00a0\u00a0\u00a0\u00a0 Boot Start\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 End\u00a0\u00a0\u00a0 Sectors\u00a0\u00a0 Size Id Type\n\/dev\/sdd1\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 2048 1953525167 1953523120 931.5G fd Linux raid autodetect\n<\/pre>\n<p>Now we can create the RAID using mdadm. The parameter --level=1 defines RAID 1.<\/p>\n<pre>mdadm --create \/dev\/md1 --level=1 --raid-devices=2 \/dev\/sdc1 \/dev\/sdd1<\/pre>\n<p>We can watch the progress of building the raid:<\/p>\n<pre><strong>cat \/proc\/mdstat<\/strong>\nmd1 : active raid1 sdd1[1] sdc1[0]\n\u00a0\u00a0\u00a0\u00a0\u00a0 976630464 blocks super 1.2 [2\/2] [UU]\n\u00a0\u00a0\u00a0\u00a0\u00a0 [&gt;....................]\u00a0 resync =\u00a0 1.8% (17759616\/976630464) finish=110.0min speed=145255K\/sec\n\u00a0\u00a0\u00a0\u00a0\u00a0 bitmap: 8\/8 pages [32KB], 65536KB chunk<\/pre>\n<p>Now we can add a spare disk:<\/p>\n<pre><strong>fdisk \/dev\/sde<\/strong>\n&gt; n (new partition)\n&gt; p (primary type of partition)\n&gt; 1  (partition number)\n&gt; 2048 (first sector: default)\n&gt; 1953525167 (last sector: default)\n&gt; t (change partition type) - selected partition nb. 
1\n&gt; fd (set it to Linux raid autodetect)\n&gt; w (write and exit)<\/pre>\n<pre>mdadm --add-spare \/dev\/md1 \/dev\/sde1<\/pre>\n<p>And now we can see the details of the raid:<\/p>\n<pre><strong>mdadm --detail \/dev\/md1<\/strong>\n\/dev\/md1:\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Version : 1.2\n\u00a0 Creation Time : Tue Mar 14 11:56:28 2017\n\u00a0\u00a0\u00a0\u00a0 Raid Level : raid1\n\u00a0\u00a0\u00a0\u00a0 Array Size : 976630464 (931.39 GiB 1000.07 GB)\n\u00a0 Used Dev Size : 976630464 (931.39 GiB 1000.07 GB)\n\u00a0\u00a0 Raid Devices : 2\n\u00a0 Total Devices : 3\n\u00a0\u00a0\u00a0 Persistence : Superblock is persistent\n\u00a0 Intent Bitmap : Internal\n\u00a0\u00a0\u00a0 Update Time : Tue Mar 14 12:00:49 2017\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 State : clean, resyncing\n\u00a0Active Devices : 2\nWorking Devices : 3\n\u00a0Failed Devices : 0\n\u00a0 Spare Devices : 1\n\u00a0 Resync Status : 3% complete\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Name : cl3-amd-node2:1\u00a0 (local to host cl3-amd-node2)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 UUID : 919632d4:74908819:4f43bba3:33b89328\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Events : 52\n\u00a0\u00a0\u00a0 Number\u00a0\u00a0 Major\u00a0\u00a0 Minor\u00a0\u00a0 RaidDevice State\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 8\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 33\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 active sync\u00a0\u00a0 \/dev\/sdc1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 1\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 8\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 49\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 1\u00a0\u00a0\u00a0\u00a0\u00a0 active sync\u00a0\u00a0 \/dev\/sdd1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 2\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 8\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 65\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 -\u00a0\u00a0\u00a0\u00a0\u00a0 spare\u00a0\u00a0 
\/dev\/sde1<\/pre>\n<p>And we can see it here too:<\/p>\n<pre><strong>cat \/proc\/mdstat<\/strong>\nmd1 : active raid1 sde1[2](S) sdd1[1] sdc1[0]\n\u00a0\u00a0\u00a0\u00a0\u00a0 976630464 blocks super 1.2 [2\/2] [UU]\n\u00a0\u00a0\u00a0\u00a0\u00a0 [=&gt;...................]\u00a0 resync =\u00a0 7.5% (73929920\/976630464) finish=103.3min speed=145533K\/sec\n\u00a0\u00a0\u00a0\u00a0\u00a0 bitmap: 8\/8 pages [32KB], 65536KB chunk\nunused devices: &lt;none&gt;<\/pre>\n<p>After a reboot, our md1 device may be missing from \/proc\/mdstat:<\/p>\n<pre>root@cl3-amd-node2:\/etc\/drbd.d# cat \/proc\/mdstat\nPersonalities : [raid1]\nmd0 : active raid1 sda1[0] sde1[2](S) sdb1[1]\n\u00a0\u00a0\u00a0\u00a0\u00a0 976629760 blocks super 1.2 [2\/2] [UU]\n\u00a0\u00a0\u00a0\u00a0\u00a0 bitmap: 1\/8 pages [4KB], 65536KB chunk\nunused devices: &lt;none&gt;<\/pre>\n<p>We can assemble it again with this command, without a resync:<\/p>\n<pre>mdadm --assemble \/dev\/md1 \/dev\/sdc1 \/dev\/sdd1\nmdadm: \/dev\/md1 has been started with 2 drives.\nroot@cl3-amd-node2:\/etc\/drbd.d# cat \/proc\/mdstat\nPersonalities : [raid1]\nmd1 : active raid1 sdc1[0] sdd1[1]\n\u00a0\u00a0\u00a0\u00a0\u00a0 976630464 blocks super 1.2 [2\/2] [UU]\n\u00a0\u00a0\u00a0\u00a0\u00a0 bitmap: 0\/8 pages [0KB], 65536KB chunk\nmd0 : active raid1 sda1[0] sde1[2](S) sdb1[1]\n\u00a0\u00a0\u00a0\u00a0\u00a0 976629760 blocks super 1.2 [2\/2] [UU]\n\u00a0\u00a0\u00a0\u00a0\u00a0 bitmap: 1\/8 pages [4KB], 65536KB chunk\nunused devices: &lt;none&gt;<\/pre>\n<p>If we want this raid to start automatically at boot, we must add the array to mdadm.conf. 
First, we scan for our arrays and add the missing one to <em>\/etc\/mdadm\/mdadm.conf<\/em>.<\/p>\n<pre>root@cl3-amd-node2:\/etc\/drbd.d# mdadm --examine --scan\n...\nARRAY \/dev\/md\/1\u00a0 metadata=1.2 UUID=94e2df50:43dbed78:b3075927:401a9b65 name=cl3-amd-node2:1\nARRAY \/dev\/md\/0\u00a0 metadata=1.2 UUID=2c29b20a:0f2d8abf:c2c9e150:070adaba name=cl3-amd-node2:0\n\u00a0\u00a0 spares=1<\/pre>\n<pre>cat \/etc\/mdadm\/mdadm.conf\n...\n# definitions of existing MD arrays\nARRAY \/dev\/md\/0\u00a0 metadata=1.2 UUID=2c29b20a:0f2d8abf:c2c9e150:070adaba name=cl3-amd-node2:0\n\u00a0\u00a0 spares=1<\/pre>\n<pre>echo \"ARRAY \/dev\/md\/1\u00a0 metadata=1.2 UUID=94e2df50:43dbed78:b3075927:401a9b65 name=cl3-amd-node2:1\" &gt;&gt; \/etc\/mdadm\/mdadm.conf<\/pre>\n<p>The last step is to update the initramfs so it contains the new mdadm.conf:<\/p>\n<pre><code>update-initramfs -u<\/code><\/pre>\n<p>If we need to replace a bad or missing disk, we must create a partition on the new disk with the same size.<\/p>\n<pre>fdisk -l \/dev\/sdb\nDisk \/dev\/sdb: 233.8 GiB, 251000193024 bytes, 490234752 sectors\nUnits: sectors of 1 * 512 = 512 bytes\nSector size (logical\/physical): 512 bytes \/ 512 bytes\nDevice\u00a0\u00a0\u00a0\u00a0 Boot Start\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 End\u00a0\u00a0 Sectors\u00a0\u00a0 Size Id Type\n\/dev\/sdb1\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 2048 488397167 488395120 232.9G fd Linux raid autodetect<\/pre>\n<p>The degraded array (a raid5 in this example, but the procedure is the same):<\/p>\n<pre>mdadm --detail \/dev\/md1\n\/dev\/md1:\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Version : 1.2\n\u00a0 Creation Time : Fri May 27 09:08:25 2016\n\u00a0\u00a0\u00a0\u00a0 Raid Level : raid5\n\u00a0\u00a0\u00a0\u00a0 Array Size : 488132608 (465.52 GiB 499.85 GB)\n\u00a0 Used Dev Size : 244066304 (232.76 GiB 249.92 GB)\n\u00a0\u00a0 Raid Devices : 3\n\u00a0 Total Devices : 2\n\u00a0\u00a0\u00a0 Persistence : Superblock is persistent\n\u00a0\u00a0\u00a0 Update Time : Thu Apr 20 11:33:11 
2017\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 State : clean, degraded\n\u00a0Active Devices : 2\nWorking Devices : 2\n\u00a0Failed Devices : 0\n\u00a0 Spare Devices : 0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Layout : left-symmetric\n\u00a0\u00a0\u00a0\u00a0 Chunk Size : 512K\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Name : cl2-sm-node3:1\u00a0 (local to host cl2-sm-node3)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 UUID : 827b1c8a:5a1a1e7c:1bb5624f:9aa491b1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Events : 692\n\u00a0\u00a0\u00a0 Number\u00a0\u00a0 Major\u00a0\u00a0 Minor\u00a0\u00a0 RaidDevice State\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 removed\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 1\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 8\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 65\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 1\u00a0\u00a0\u00a0\u00a0\u00a0 active sync\u00a0\u00a0 \/dev\/sde1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 3\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 8\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 49\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 2\u00a0\u00a0\u00a0\u00a0\u00a0 active sync\u00a0\u00a0 \/dev\/sdd1<\/pre>\n<p>Now we can add the new disk to this array:<\/p>\n<pre>mdadm --manage \/dev\/md1 --add \/dev\/sdb1\n   mdadm: added \/dev\/sdb1<\/pre>\n<p>And it's done; the array is rebuilding:<\/p>\n<pre>cat \/proc\/mdstat\nPersonalities : [raid1] [raid6] [raid5] [raid4]\nmd1 : active raid5 sdb1[4] sde1[1] sdd1[3]\n\u00a0\u00a0\u00a0\u00a0\u00a0 488132608 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3\/2] [_UU]\n\u00a0\u00a0\u00a0\u00a0\u00a0 [&gt;....................]\u00a0 recovery =\u00a0 0.3% (869184\/244066304) finish=197.5min speed=20515K\/sec\n\u00a0\u00a0\u00a0\u00a0\u00a0 bitmap: 0\/2 pages [0KB], 65536KB chunk\n<\/pre>\n<p>If we have a problem with some disk, we can 
remove it while the array is running. First, we must mark it as failed. Here is a good, working raid-1:<\/p>\n<pre>mdadm --detail \/dev\/md0\n\/dev\/md0:\n Raid Level : raid1\n Array Size : 976629760 (931.39 GiB 1000.07 GB)\n Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)\n Raid Devices : 2\n Total Devices : 3\n State : clean\n Active Devices : 2\n Working Devices : 3\n Failed Devices : 0\n Spare Devices : 1\nactive sync \/dev\/sda1\nactive sync \/dev\/sdb1\nspare \/dev\/sde1<\/pre>\n<p>Now mark disk sda1 as faulty:<\/p>\n<pre>mdadm \/dev\/md0 -f \/dev\/sda1<\/pre>\n<pre>mdadm --detail \/dev\/md0\n\/dev\/md0:\n Array Size : 976629760 (931.39 GiB 1000.07 GB)\n Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)\n Raid Devices : 2\n Total Devices : 3\n Persistence : Superblock is persistent\n State : clean, degraded, recovering\n Active Devices : 1\nWorking Devices : 2\n Failed Devices : 1\n Spare Devices : 1\nRebuild Status : 0% complete\nspare rebuilding \/dev\/sde1\nactive sync \/dev\/sdb1\nfaulty \/dev\/sda1<\/pre>\n<pre>cat \/proc\/mdstat\nmd0 : active raid1 sda1[0](F) sde1[2] sdb1[1]\n 976629760 blocks super 1.2 [2\/1] [_U]\n [&gt;....................] recovery = 0.2% (2292928\/976629760) finish=169.9min speed=95538K\/sec<\/pre>\n<p>I waited until this operation finished. Then I halted the server, removed the failed drive and inserted a new one. After power-on, we create a new partition table on \/dev\/sda exactly like the old one, or like the currently active disks. 
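A quick way to clone the partition layout (a sketch, assuming \/dev\/sdb is the surviving healthy member of the mirror) is to dump it with sfdisk and write it to the new disk:<\/p>\n<pre># dump the partition table of the healthy disk and apply it to the new one\nsfdisk -d \/dev\/sdb | sfdisk \/dev\/sda<\/pre>\n<p>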
Then we re-add it as a spare to the raid:<\/p>\n<pre> mdadm \/dev\/md0 -a \/dev\/sda1<\/pre>\n<pre>mdadm --detail \/dev\/md0\n\/dev\/md0:\n Raid Level : raid1\n Array Size : 976629760 (931.39 GiB 1000.07 GB)\n Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)\n Raid Devices : 2\n Total Devices : 3\n Active Devices : 2\nWorking Devices : 3\n Failed Devices : 0\n Spare Devices : 1\nactive sync \/dev\/sde1\nactive sync \/dev\/sdb1\n<strong>spare \/dev\/sda1<\/strong><\/pre>\n<pre>cat \/proc\/mdstat\nmd0 : active raid1 sda1[3](S) sde1[2] sdb1[1]\n 976629760 blocks super 1.2 [2\/2] [UU]\n bitmap: 1\/8 pages [4KB], 65536KB chunk<\/pre>\n ","protected":false},"excerpt":{"rendered":"<p>At first, we must create partitions on disks with the SAME size in blocks: fdisk \/dev\/sdc &gt; n (new partition) &gt; p (primary type of partition) &gt; 1 (partition number) &gt; 2048 (first sector: default) &gt; 1953525167 (last sector: default) &gt; t (change partition type) &#8211; selected partition nb. 1 &gt; fd (set it to &hellip; <a href=\"https:\/\/www.gonscak.sk\/?p=201\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">How to create software raid 1 with mdadm with 
spare<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[39],"tags":[40,41,42,43,44,45,46,47,48],"class_list":["post-201","post","type-post","status-publish","format-standard","hentry","category-debian-jessie","tag-devices","tag-examine","tag-fail","tag-faulty","tag-mdadm","tag-mdstat","tag-raid1","tag-scan","tag-spare"],"_links":{"self":[{"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=\/wp\/v2\/posts\/201","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=201"}],"version-history":[{"count":0,"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=\/wp\/v2\/posts\/201\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=201"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=201"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.gonscak.sk\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=201"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}