In this article, I'll show you how to use the mdadm utility to create different types of RAID arrays on Linux. My testbed runs RHEL 6.5, kernel 2.6.32-504.23.4.el6.x86_64, and mdadm 3.3.

 

The basic syntax of mdadm for RAID creation is:

mdadm --create <raiddevice> [options] <component-devices>
or
mdadm -C <raiddevice> [options] <component-devices>

Where the essential options are:

-l, --level, specifies which type of RAID to create. Currently, Linux supports LINEAR md devices, RAID0 (striping), RAID1 (mirroring), RAID4, RAID5, RAID6, RAID10, MULTIPATH, FAULTY, and CONTAINER.

-n, --raid-devices, the number of active devices in the array.

Other commonly used options are -c, -x, and -v:

-c, --chunk, specifies the chunk size; the default is 512K.
-x, --spare-devices, specifies the number of spare devices in the initial array. Spares can also be added and removed later.
-v, --verbose, turn on verbose output.
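
Whichever level you create below, the first sanity check afterwards is the same; a minimal sketch, assuming the array came up as /dev/md0:

# cat /proc/mdstat
# mdadm --detail /dev/md0

/proc/mdstat lists all active md arrays and any resync/rebuild progress, while mdadm --detail prints the level, chunk size, member devices and their states for a single array.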

Linear mode

Two or more partitions, which do not need to be the same size, are appended to each other. Writes fill the first partition first, then the second, and so on; there is no parallel access.

#mdadm --create --verbose /dev/md1 --level=linear --raid-devices=2 /dev/sda3 /dev/sdb4
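
A linear array can later be extended by appending another partition in grow mode; a rough sketch, assuming /dev/sdc5 is an unused partition:

# mdadm --grow /dev/md1 --add /dev/sdc5

The new partition is appended to the end of the array, so the extra capacity only becomes usable once the filesystem on top is resized as well.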

Raid0

Also called striping: two or more devices of approximately the same size combine their storage capacity, and also combine their performance by being accessed in parallel.

The following command creates md0 as RAID0 (stripe) over two disk partitions of the same size, with a 128K chunk size (the default is 512K) and verbose output; the two partitions are accessed in parallel.

# mdadm -Cv /dev/md0 -l0 -n2 -c128 /dev/sdd1 /dev/sdc1
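
The resulting array is a normal block device, so the usual next step is to put a filesystem on it and mount it; a quick sketch, assuming ext4 and a /mnt/raid0 mount point:

# mkfs.ext4 /dev/md0
# mkdir -p /mnt/raid0
# mount /dev/md0 /mnt/raid0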

Raid1 with spare

Mirror: two disk partitions mirror each other. You can also add a disk partition as a stand-by spare disk, which will automatically become part of the mirror if one of the active devices breaks.

# mdadm -Cv /dev/md0 -l1 -n2 /dev/sdd1 /dev/sdc1 --spare-devices=1 /dev/sde1
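
To watch the spare take over on a test array, you can deliberately mark one active member as faulty (this temporarily removes redundancy, so don't try it on production data):

# mdadm /dev/md0 --fail /dev/sdd1
# cat /proc/mdstat
# mdadm /dev/md0 --remove /dev/sdd1
# mdadm /dev/md0 --add /dev/sdd1

After the --fail, /proc/mdstat should show the spare being rebuilt into the mirror; the failed partition can then be removed and re-added, where it becomes the new spare.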

Raid4

mdadm doesn't support RAID3. RAID4 is similar to RAID3: it supports block-level striping with a dedicated parity disk, and is rarely used in practice. It requires three or more devices; a spare disk can be added or removed later.

With N devices where the smallest has size S, the size of the entire array will be (N-1)*S. This "missing" space is used for parity (redundancy) information. Thus, if any single disk fails, all data stays intact; but if two disks fail, all data is lost.

 #mdadm --create --verbose /dev/md0 --level=4 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1
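
You can confirm the (N-1)*S capacity from the array details; for example, with three 1 GiB members the usable size should come out at roughly 2 GiB (minus a little metadata overhead):

# mdadm --detail /dev/md0 | grep 'Array Size'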

Raid5

Block-level striping with parity distributed across all disk partitions. Use three or more devices of roughly the same size when you want to combine them into a larger device but still maintain a degree of redundancy for data safety.
With N devices where the smallest has size S, the size of the entire array will be (N-1)*S. This "missing" space is used for parity (redundancy) information. Thus, if any single disk fails, all data stays intact; but if two disks fail, all data is lost.

#mdadm -Cv /dev/md0 -l5 -n5 -c128 /dev/sd{a,b,c,d,e}1
mdadm: layout defaults to left-symmetric
mdadm: array /dev/md0 started.
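
To have the array reassembled automatically at boot, it is usually worth recording it in the mdadm config file; a minimal sketch (on RHEL 6 the file is /etc/mdadm.conf):

# mdadm --detail --scan >> /etc/mdadm.conf

mdadm --detail --scan prints one ARRAY line per running array, including its UUID, which is enough for auto-assembly at boot.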

Raid6

Like RAID5, RAID6 provides block-level striping, but with two parity segments in each stripe. Four or more partitions are required, and the array can tolerate the failure of any two partitions.

With N devices where the smallest has size S, the size of the entire array will be (N-2)*S.

#mdadm -Cv /dev/md1 -l6 -n6 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

Raid10

Takes a number of mirror sets and stripes across them, as in RAID0.

# mdadm -Cv /dev/md0 -l10 -n2 -c128 /dev/sdd1 /dev/sdc1 --spare-devices=1 /dev/sde1

Note: Linux md RAID10 is not the same as nested RAID 1+0 (a RAID0 built on top of separate RAID1 arrays).
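
Unlike nested RAID 1+0, md RAID10 also lets you pick the data layout with -p/--layout: n (near), f (far) or o (offset), followed by the number of copies. A rough sketch, assuming four spare partitions and the far-2 layout, which tends to favor sequential reads:

# mdadm -Cv /dev/md2 -l10 -n4 -p f2 /dev/sd{b,c,d,e}1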

 

Note: The three types below are more specialized and I have never used them in production; I would suggest using them only if you have special needs.

MULTIPATH

MULTIPATH is not a Software RAID mechanism, but does involve multiple devices: each device is a path to one common physical storage device. New installations should not use md/multipath as it is not well supported and has no ongoing development. Use the Device Mapper based multipath-tools instead.

In the example below, sda1, sdb1, sdc1, and sdd1 are four devices that point to the same physical storage device via four different SCSI channels.

mdadm -C /dev/md0 --level=multipath --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
Continue creating array? yes
mdadm: array /dev/md0 started.

To me, md/multipath just duplicates what the Device Mapper multipath-tools already provide, and in practice it is harder to manage and to identify devices.

FAULTY

FAULTY is also not true RAID, and it only involves one device. It provides a layer over a true device that can be used to inject faults.
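
As a rough sketch, a FAULTY device wraps a single partition (assuming /dev/sdc1 is a scratch partition here); the fault-injection mode is then chosen via the --layout option, as described in the mdadm man page:

# mdadm --create /dev/md3 --level=faulty --raid-devices=1 /dev/sdc1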

CONTAINER

A CONTAINER is a collection of devices that are managed as a set. This is similar to the set of devices connected to a hardware RAID controller. The set of devices may contain a number of different RAID arrays each utilising some (or all) of the blocks from a number of the devices in the set. With a CONTAINER, there is one set of metadata that describes all of the arrays in the container. So when mdadm creates a CONTAINER device, the device just represents the metadata. Other normal arrays (RAID1 etc) can be created inside the container.

It is only used with particular external metadata formats.

Starting with Linux kernel v2.6.27 and mdadm v3.0, external metadata is supported. These formats have long been supported by DMRAID and, depending on the vendor, allow booting RAID volumes from the option ROM.

The first format is DDF (Disk Data Format), defined by SNIA as the "industry standard" RAID metadata format. When a DDF array is constructed, a CONTAINER is created, and normal RAID arrays can then be created within that container.

The second format is the Intel(R) Matrix Storage Manager metadata format. This also creates a CONTAINER that is managed similarly to DDF. On some platforms (depending on the vendor), this format is supported by the option ROM in order to allow booting.

To create RAID volumes that use external metadata, we must first create a container:

   mdadm --create --verbose /dev/md/imsm /dev/sd[b-e] --raid-devices 4 --metadata=imsm

In this example we created an IMSM based container for 4 RAID devices. Now we can create volumes within the container.

   mdadm --create --verbose /dev/md/vol0 /dev/md/imsm --raid-devices 4 --level 5

Of course, the --size option can be used to limit how much disk space each volume uses at creation time, in order to create multiple volumes within the container. One important note is that the various volumes within the container MUST span the same disks, e.g. a RAID10 volume and a RAID5 volume spanning the same set of disks.
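
As a rough sketch, assuming the /dev/md/imsm container created above and disks large enough for the sizes used, the container could be carved into two volumes spanning the same four disks:

   mdadm --create --verbose /dev/md/vol0 /dev/md/imsm --raid-devices 4 --level 5 --size=100G
   mdadm --create --verbose /dev/md/vol1 /dev/md/imsm --raid-devices 4 --level 10

Here --size limits how much space vol0 takes from each member disk (it is a per-device amount, not a total); vol1 is then created from the space that remains on the same disks.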