mdadm is the Linux utility used to manage and monitor software RAID devices. The name stands for MD (multiple devices) administration. It replaced the earlier mdctl as the default software RAID utility on Linux. I'll show you how to manage a software RAID array on Linux with mdadm.

Package installation

On RHEL/CentOS and other yum-based distributions:

yum install mdadm
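
On Debian/Ubuntu-based distributions the package has the same name and can be installed with apt:

apt-get install mdadm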

Basic syntax

Here is its basic syntax:

       mdadm [mode] <raiddevice> [options] <component-devices>

Major modes of operations

The first argument in the syntax is the mode; mdadm has several modes:

Create

Create a new array with per-device metadata

Assemble

Assemble the components of a previously created array into an active array

Build

Build an array that doesn't have per-device metadata (superblocks)
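
For example, a simple two-device stripe without superblocks might be built like this (the device names here are only illustrative):

#mdadm --build /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1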

Follow or monitor

Monitor one or more md devices and act on any state changes

Grow 

Grow (or shrink) an array, or otherwise reshape it in some way
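
For example, assuming a RAID-5 array /dev/md0 that already has a spare disk attached and a kernel that supports reshaping, the spare could be turned into an active member with:

#mdadm --grow /dev/md0 --raid-devices=4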

Manage

This is for doing things to specific components of an array such as adding new spares and removing faulty devices

Misc

This is an 'everything else' mode that supports operations on active arrays

Incremental Assembly

Incremental assembly adds a single device to an appropriate array
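
This mode is typically invoked from a udev rule as disks appear; for example (illustrative device name):

#mdadm --incremental /dev/sdb1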

Auto-detect

This mode does not act on a specific device or array, but rather it requests the Linux Kernel to activate any auto-detected arrays
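
For example (kernel auto-detect only works for 0.90-metadata arrays on partitions of type 0xfd, "Linux raid autodetect", and only when the md driver is built into the kernel):

#mdadm --auto-detect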

Creating an Array

In this example I use mdadm to create a RAID-0 array /dev/md1 made up of /dev/sdd1 and /dev/sdc1:

#mdadm --create --verbose /dev/md1 --level=0 \
        --raid-devices=2 /dev/sdd1 /dev/sdc1
mdadm: chunk size defaults to 64K
mdadm: array /dev/md1 started.

The --level option specifies which type of RAID to create, in the same way that raidtools uses the raid-level configuration line. Valid choices include linear, 0, 1, 4, 5, 6, and 10.

Other options: -C, -v, -l, -n and -c

-C selects Create mode, and -v turns on verbose output; -l and -n specify the RAID level and the number of member devices; -c specifies the chunk size.
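
So the create command above could equivalently be written with the short options:

#mdadm -Cv /dev/md1 -l0 -n2 /dev/sdd1 /dev/sdc1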

Linear mode

Linear mode appends two or more partitions/block devices end to end; they do not need to be the same size.

#mdadm --create --verbose /dev/md1 --level=linear --raid-devices=2 /dev/sda3 /dev/sdb4

Raid0

The following command creates /dev/md0 as RAID-0 (striping) from two disk partitions of the same size, with a 128K chunk size (the default is 64K) and verbose output; the two partitions are accessed in parallel.

# mdadm -Cv /dev/md0 -l0 -n2 -c128 /dev/sdd1 /dev/sdc1

Raid1 with spare

Two disk partitions mirror each other; the third partition (/dev/sdd1) is a spare that can take over if a member fails.

# mdadm -Cv /dev/md0 -l1 -n2 /dev/sdb1 /dev/sdc1 --spare-devices=1 /dev/sdd1

Raid4

mdadm doesn't support RAID-3. RAID-4 is rarely used; it provides block-level striping with a dedicated parity disk.

 #mdadm --create --verbose /dev/md0 --level=4 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1

Raid5

Block-level striping with parity distributed across all member partitions. The following command creates a RAID-5 array with a 128K chunk size, using the devices /dev/sda1, /dev/sdb1, /dev/sdc1, /dev/sdd1 and /dev/sde1:

#mdadm -Cv /dev/md0 -l5 -n5 -c128 /dev/sd{a,b,c,d,e}1
mdadm: layout defaults to left-symmetric
mdadm: array /dev/md0 started.

Raid6

Like RAID-5, but with two parity segments in each stripe.

#mdadm -Cv /dev/md1 -l6 -n6 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

Raid10

mdadm's RAID-10 takes a number of mirror sets and stripes across them, RAID-0 style.

# mdadm -Cv /dev/md0 -l10 -n2 -c128 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1

Note: mdadm's RAID-10 is a single md level and is not the same as nested RAID 1+0.

Save the configuration

/etc/mdadm.conf is the mdadm configuration file. mdadm will function properly without a configuration file, but the file is useful for keeping track of arrays and member disks.

Use the following command to save/update mdadm.conf:

mdadm --detail --scan >> /etc/mdadm.conf

A simple mdadm.conf file might look like this; newer versions typically identify the array by UUID instead:

MAILADDR root
DEVICE       /dev/sdc1 /dev/sdd1
ARRAY        /dev/md1 devices=/dev/sdc1,/dev/sdd1

or
ARRAY /dev/md1 level=raid10 num-devices=2 metadata=0.90 UUID=a0d44a20:b41dda0a:e9c548a1:a3e98bee

Generally speaking, it's best to create an /etc/mdadm.conf file after you have created an array and to update the file when new arrays are created. Without an /etc/mdadm.conf file you'd need to specify more detailed information about an array on the command line in order to activate it.

If there are multiple arrays running on the system, then mdadm --detail --scan would generate an array line for each one.

Check current array status

The output of /proc/mdstat below shows that the array is in good condition:

 #cat /proc/mdstat 
Personalities : [raid10]
md1 : active raid10 sdc1[0] sdd1[1]
      244195904 blocks 2 near-copies [2/2] [UU]
      
unused devices: <none>
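
By contrast, if a member had failed, the same array would show something like the following (illustrative output: the failed device is flagged with (F), and the [2/1] [_U] counters show one of the two members missing):

md1 : active raid10 sdc1[0](F) sdd1[1]
      244195904 blocks 2 near-copies [2/1] [_U]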

Misc mode, get and examine array info from a device

Query Array             

Examine a device to see (1) if it is an md device and (2) if it is a component of an md array. Information about what is discovered is presented.

#mdadm --query /dev/md1
/dev/md1: 232.88GiB raid10 2 devices, 0 spares. Use mdadm --detail for more detail.

#mdadm --query /dev/sdd1
/dev/sdd1: is not an md array
/dev/sdd1: device 1 in 2 device active raid10 /dev/md1.  Use mdadm --examine for more detail.

Examine array

The examine option (-E or --examine) allows you to print the md superblock (if present) from a block device that could be an array component.

#mdadm -E /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : a0d44a20:b41dda0a:e9c548a1:a3e98bee
  Creation Time : Mon Jun 20 16:08:13 2011
     Raid Level : raid10
  Used Dev Size : 244195904 (232.88 GiB 250.06 GB)
     Array Size : 244195904 (232.88 GiB 250.06 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1

    Update Time : Fri Jul  4 16:03:17 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 3c12785e - correct
         Events : 866

         Layout : near=2
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       33        0      active sync   /dev/sdc1

   0     0       8       33        0      active sync   /dev/sdc1
   1     1       8       49        1      active sync   /dev/sdd1

Array detail info

Check an array's detail info

#mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Mon Jun 20 16:08:13 2011
     Raid Level : raid10
     Array Size : 244195904 (232.88 GiB 250.06 GB)
  Used Dev Size : 244195904 (232.88 GiB 250.06 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Jul  4 16:03:17 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 64K

           UUID : a0d44a20:b41dda0a:e9c548a1:a3e98bee
         Events : 0.866

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1

Assemble mode, start, stop, rename, and check an array

Assemble mode is used to start an array that already exists

Stop an array

Use the --stop (-S) option to stop a running array.

#mdadm -S /dev/md1

Start an array

If you created the array as shown below

#mdadm -Cv /dev/md0 -l5 -n5 -c128 /dev/sd{a,b,c,d,e}1

then, run

#mdadm --assemble /dev/md0 /dev/sd{a,b,c,d,e}1

Or, run

# mdadm -As /dev/md0
where
-A stands for Assemble
-s stands for scan

Or, for the simple case, scan and assemble all arrays:

#mdadm --assemble --scan

In a more complex situation, if you want to start only one specific array, run

#mdadm --scan --assemble --uuid=a26bf396:31389f83:0df1722d:f404fe4c

Note: mdadm --run /dev/md0 by itself does not work here; the array has to be assembled first.

Rename an Array

Option 1, on the fly

#mdadm --detail /dev/md1    ## note which physical devices belong to the array
#mdadm -S /dev/md1
#mdadm --assemble /dev/md2 --super-minor=1 --update=super-minor /dev/sdc1 /dev/sdd1

Option 2, reboot the machine

Change the array name in /etc/mdadm.conf, then reboot; the array will come up under the new name. (Remember to update the /etc/fstab entry if the device name is used there.)
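
For example, the ARRAY line from the earlier sample configuration would change from /dev/md1 to:

ARRAY /dev/md2 level=raid10 num-devices=2 metadata=0.90 UUID=a0d44a20:b41dda0a:e9c548a1:a3e98bee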

Managing Arrays

Using Manage mode you can add disks to and remove disks from a running array. This is useful for removing failed disks, adding spare disks, or adding replacement disks.

For example, if a hard drive has failed, here are the steps for disk replacement.

1. Check the md disk status
    You can use either mdadm -E or cat /proc/mdstat.
2. Fail the disk partition first; repeat the steps below if the disk has more partitions.
      #mdadm /dev/md1 --fail /dev/sdc1
   followed by the remove command, or put them together:
      #mdadm /dev/md1 --fail /dev/sdc1 --remove /dev/sdc1
3. Replace the physical disk and partition the new disk (here the layout is copied from the surviving disk).
      #sfdisk -d /dev/sdd | sfdisk /dev/sdc
4. Add the new disk partition(s) back to the array.
      #mdadm /dev/md1 --add /dev/sdc1
5. Then monitor the rebuild (see the example below).
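
While the new partition resyncs, the rebuild progress appears in /proc/mdstat and can be watched with, for example:

      #watch cat /proc/mdstat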

For more detail about disk replacement, see how to replace a failed disk for a Linux software array.

Follow, or Monitor, mode

It provides some of mdadm's best and most distinctive features. Using Follow/Monitor mode you can daemonize mdadm and configure it to send email alerts to system administrators when arrays encounter errors or fail.
 
The following command will monitor /dev/md0 (polling every 300 seconds) for critical events. When a fatal error occurs, mdadm will send an email to the sysadmin. You can tailor the polling interval and email address to your needs.

      #mdadm --monitor --mail=root --delay=300 /dev/md0

You can also specify the -y (--syslog) option to make mdadm write its log to syslog.
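
For example (the mail address and polling interval here are just illustrative values):

      #mdadm --monitor --mail=root --delay=300 --syslog /dev/md0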
 
On Linux, people commonly just enable the mdmonitor service, which runs mdadm in the background in daemon mode.

          chkconfig mdmonitor on
          chkconfig --list mdmonitor
          mdmonitor          0:off    1:off    2:on    3:on    4:on    5:on    6:off

By default, it uses the following options:

          OPTIONS="--monitor --scan -f --pid-file=$PIDFILE"

To test the monitoring function, try

          mdadm --monitor  -t /dev/md1

I have another article that discusses Linux Software Array Performance Tuning. It covers large arrays on heavy-I/O disk servers; take a look if you are interested.