A useful command to check the status is:

cat /proc/mdstat

Example output:

Personalities : [raid1] [raid6] [raid5] [raid4]
...
md6 : active raid5 sdc1 sde1 sdd1
      976767872 blocks level 5, 64k chunk, algorithm 2

From this information you can see that the available personalities on this machine are "raid1, raid6, raid4 and raid5", which means this machine is set up to use RAID devices configured in a raid1, raid6, raid4 or raid5 configuration.

You can also see that two of the three example meta devices are raid1 mirrored meta devices. md5 is a raid1 array made up of /dev/sda partition 7 and /dev/sdb partition 7, containing 62685504 blocks, with 2 out of 2 disks available and both in sync. The same can be said of md0, only it is smaller (as you can see from the blocks parameter) and is made up of /dev/sda1 and /dev/sdb1. md6 is different in that it is a raid5 array, striped across three disks: /dev/sdc1, /dev/sde1 and /dev/sdd1, with a 64k "chunk", or write, size. Algorithm 2 is the write pattern, which is "left disk to right disk" writing across the array. You can see that all three disks are present and in sync.

Note: You can add or remove disks, or set them as faulty, without stopping an array.

1. To stop an array, type: sudo mdadm --stop /dev/md0
2. To remove a disk from an array: sudo mdadm --remove /dev/md0 /dev/sda1
Where /dev/md0 is the array device and /dev/sda1 is the faulty disk.
3. To add a disk to an array: sudo mdadm --add /dev/md0 /dev/sda1
Where /dev/md0 is the array device and /dev/sda1 is the new disk. Note: This is not the same as "growing" the array!
4. To start (reassemble) an array that was previously created: sudo mdadm --assemble --scan
mdadm will scan for defined arrays and start assembling them.
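The [UU]/[U_] status strings in /proc/mdstat also lend themselves to scripted checks. Below is a minimal sketch; the sample file and its contents are illustrative stand-ins, and on a real machine you would grep /proc/mdstat itself.

```shell
# Minimal degraded-array check. In mdstat output, [UU] means every
# member is up; an underscore, as in [U_], marks a missing member.
# A sample file stands in for /proc/mdstat so the logic can be tried
# anywhere.
cat > mdstat.sample <<'EOF'
md0 : active raid1 sda1[0] sdb1[1]
      62685504 blocks [2/2] [UU]
EOF
if grep -qE '\[U*_+U*\]' mdstat.sample; then
  echo "DEGRADED"
else
  echo "OK"
fi
```

Swapping /proc/mdstat in for mdstat.sample gives the real check; a match there is worth an alert.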
If the default HDD fails, the RAID will ask you whether to boot from a degraded disk. If your server is located in a remote area, the best practice may be to configure this to occur automatically:

* In /etc/initramfs-tools/conf.d/mdadm, change "BOOT_DEGRADED=false" to "BOOT_DEGRADED=true" (this option is not supported from mdadm-3.2.5-5ubuntu3 / Ubuntu 14.04 onwards).
* Additionally, this can be specified on the kernel boot line with the bootdegraded=true option.
* You can also use dpkg-reconfigure mdadm (as root) rather than the CLI.

To test it, remove the power and data cables from your first drive, then start your server and see if it can boot from a degraded disk.

If swap space doesn't come up and there is an error message in dmesg, then, provided the RAID is working fine, this can be fixed with: sudo update-initramfs -k all -u

For those who want full control over the RAID configuration, the mdadm CLI provides this.
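The BOOT_DEGRADED change can also be scripted. The sketch below edits a demo copy so the effect is visible without touching the system; /etc/initramfs-tools/conf.d/mdadm is the conventional Ubuntu location for the real file.

```shell
# Flip BOOT_DEGRADED with sed. This works on a demo copy; on a real
# system run the same sed (as root) against
# /etc/initramfs-tools/conf.d/mdadm, then rebuild the initramfs:
#   sudo update-initramfs -k all -u
printf 'BOOT_DEGRADED=false\n' > mdadm-conf.demo
sed -i 's/^BOOT_DEGRADED=false$/BOOT_DEGRADED=true/' mdadm-conf.demo
cat mdadm-conf.demo
```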
Select the RAID type: RAID 0, RAID 1, RAID 5 or RAID 6.
Repeat steps 3 to 7 with each pair of partitions you have created.
Filesystem and mount points will need to be specified for each RAID device.

In case your other HDD won't boot, simply install GRUB on that drive as well: sudo grub-install /dev/sdb
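Rather than discovering after a failure that the second drive has no bootloader, GRUB can be installed on both mirror members up front. A sketch, assuming the members are /dev/sda and /dev/sdb (adjust for your hardware); with DRY_RUN left at its default of 1 it only prints the commands.

```shell
# Install GRUB on every disk of the RAID1 that holds /boot so the
# machine can boot from either one. DISKS is an assumption about the
# hardware; set DRY_RUN=0 to actually run grub-install (as root).
DISKS="/dev/sda /dev/sdb"
for d in $DISKS; do
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "grub-install $d"
  else
    sudo grub-install "$d"
  fi
done
```

With the default dry run this prints one grub-install line per disk, which is a cheap way to review the plan before running it for real.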
You will need enough drives to meet the requirements of the RAID. Install Ubuntu until you get to partitioning the disks.

Warning: the /boot filesystem cannot use any softRAID level other than 1 with the stock Ubuntu bootloader. If you want to use some other RAID level for most things, you'll need to create separate partitions and make a RAID1 device for /boot.

Warning: this will remove all data on the hard drives.

1. Select "Manual" as your partition method.
2. Select your hard drive, and agree to "Create a new empty partition table on this device?".
3. Select the "FREE SPACE" on the first drive, then select "Automatically partition the free space".
4. Ubuntu will create two partitions: / and swap.
5. On the / partition, select "bootable flag" and set it to "on".
6. Repeat steps 2 to 5 for the other hard drive.

Once you have completed your partitioning, in the main "Partition Disks" page select "Configure Software RAID".
Read "Getting Ubuntu Alternate Install disk" and "How to do a Ubuntu Alternate Install".
If you're building a desktop, you need the "Alternate" install ISO for Ubuntu.