Creating a Linux Software RAID with mdadm

Oct 13, 2018

At home I have an HP ProLiant G7 N54L MicroServer. I've performed the BIOS update, swapped out the CD-ROM for a trayless hot-swap drive bay, and installed four 500GB SATA drives. I did attempt to upgrade to 2.5" SSDs, but I couldn't find 2.5"-to-3.5" adapters that would line the drives up properly in the trays.

Because this server has no hardware RAID, I use mdadm, which has worked great for me on various servers for years.

I decided to create this blog post to capture the commands used, since I always seem to lose them and end up re-googling every year or so when I do maintenance or upgrades.

Create /dev/md0

My OS drive is /dev/sde, so the command below uses the first four drives for my new RAID10.

mdadm --create /dev/md0 --level=10 --metadata=1.2 --raid-devices=4 /dev/sd[abcd]
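
Before running --create, double-check which device name maps to which physical drive so the OS disk doesn't accidentally end up in the array. lsblk makes this easy (the column list here is just my preference):

lsblk -o NAME,SIZE,MODEL,SERIAL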

Scan arrays

Running the command below will provide a line to place into /etc/mdadm/mdadm.conf.

root@zeus:~# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=zeus:0 UUID=67e2603c:16e77c2c:21ffd468:df5f4cc5

Append that ARRAY line to the definitions section of /etc/mdadm/mdadm.conf.
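
If you'd rather not copy and paste, appending the scan output directly also works; just review the file afterwards, since this appends everything the scan prints:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

After appending, the file looks like this: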

root@zeus:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 name=zeus:0 UUID=67e2603c:16e77c2c:21ffd468:df5f4cc5

# This file was auto-generated on Sat, 02 Jul 2016 14:25:21 -0700
# by mkconf $Id$
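
Since MAILADDR is set, it's worth verifying that alert mail actually gets delivered (this assumes a working local MTA). mdadm can generate a test alert for each array:

mdadm --monitor --scan --oneshot --test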

Fix device name

If you got to this point and rebooted, you'll probably notice you now have a /dev/md127 instead of /dev/md0. This apparently comes from the name=zeus:0 field: when mdadm can't match the array's name to the local host during assembly, it treats the array as foreign and falls back to a high unused device number, counting down from 127. To fix this, just remove the name=zeus:0 portion from the ARRAY line in /etc/mdadm/mdadm.conf.

root@zeus:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 UUID=67e2603c:16e77c2c:21ffd468:df5f4cc5

# This file was auto-generated on Sat, 02 Jul 2016 14:25:21 -0700
# by mkconf $Id$
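
If you'd rather not reboot to pick up the corrected name, one option (assuming nothing on the array is mounted or otherwise in use) is to stop it and reassemble from the config:

mdadm --stop /dev/md127
mdadm --assemble --scan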

Update initramfs

I wasn't sure whether this step was strictly required, but I ran it anyway so the updated /etc/mdadm/mdadm.conf gets copied into the initramfs used at boot.

update-initramfs -u
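
On Debian and Ubuntu you can confirm the config actually made it into the image with lsinitramfs (adjust the initrd path for your kernel version):

lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm.conf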

Validate

After rebooting, both /proc/mdstat and mdadm --detail /dev/md0 show a healthy array.

root@zeus:~# cat /proc/mdstat
Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid10 sda[0] sdb[1] sdc[2] sdd[3]
      976510976 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>
root@zeus:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Jul 15 19:35:01 2016
     Raid Level : raid10
     Array Size : 976510976 (931.27 GiB 999.95 GB)
  Used Dev Size : 488255488 (465.64 GiB 499.97 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Jul 15 22:16:01 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : zeus:0  (local to host zeus)
           UUID : 67e2603c:16e77c2c:21ffd468:df5f4cc5
         Events : 1808

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync set-A   /dev/sda
       1       8       16        1      active sync set-B   /dev/sdb
       2       8       32        2      active sync set-A   /dev/sdc
       3       8       48        3      active sync set-B   /dev/sdd

What wasn't shown

I did not show any output from the building phase because, by the time I created this post, the array had already finished building. However, when you first create your md device, you can watch the build progress with the command below and see the sync percentage tick up.
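
watch -n 1 cat /proc/mdstat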
