mdadm RAID 1 Migration

I’m migrating our work server over to a complete software RAID setup and needed a way to migrate the system partitions over to RAID 1 with a minimum of hassle. I found this how-to and was able to follow most of it verbatim for my test run, but I thought I’d post the actual steps I followed here for future reference.

I had a spare drive that exactly matched the drive currently in my office machine (a Seagate ST3200822A), so I just did a ‘parted /dev/hda print’ and copied the exact start/end of each partition over to the new drive. Edit: Make sure you set the ‘Linux Raid’ flag on the partitions being used for the arrays. At boot (or module insertion), the kernel will automagically set up the arrays and the needed devices, making the Gentoo init patch mentioned below completely unneeded. Doh!
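
For reference, the two parted invocations that matter here, after recreating the same layout on the new drive with mkpart (the partition number is from my box; adjust for yours):

parted /dev/hda print            # note the exact start/end of each partition to copy
parted /dev/hdb set 5 raid on    # flag each array partition as 'Linux raid autodetect'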

I then created a degraded RAID 1 array by running ‘mdadm --create /dev/md0 --level 1 --raid-devices=2 missing /dev/hdb5’ and verified it had been set up with ‘cat /proc/mdstat’.
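
For the record, here’s that create step along with an illustrative (not pasted) /proc/mdstat for a degraded mirror; the underscore marks the missing half:

mdadm --create /dev/md0 --level 1 --raid-devices=2 missing /dev/hdb5
cat /proc/mdstat
# md0 : active raid1 hdb5[1]
#       39061952 blocks [2/1] [_U]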

I created my XFS filesystem on the new array and mounted it on /mnt/tmp so I could copy over my data from the live partition with ‘cd /home && tar cf - . | tar -C /mnt/tmp -xf -’, and grabbed a smoke while I waited for that to finish.
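
Putting that step together (the mkfs and mount point are just what I used; season to taste):

mkfs.xfs /dev/md0
mkdir -p /mnt/tmp
mount /dev/md0 /mnt/tmp
cd /home && tar cf - . | tar -C /mnt/tmp -xf -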

The final step in creating the array was to add the old partition to the new array and rebuild it using ‘mdadm /dev/md0 -a /dev/hda5’. The rebuild took about 15 minutes for a 38G partition, and a successful mount verified all had gone well.
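
And the rebuild step, with a couple of ways to keep an eye on the resync (the watch interval is arbitrary):

mdadm /dev/md0 -a /dev/hda5
watch -n 5 cat /proc/mdstat    # follow the resync progress
mdadm --detail /dev/md0        # both devices should show 'active sync' when it's done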

Edit:
Just make sure your array partitions are marked as above and the kernel md drivers do the rest…
I decided to reboot to test that things had indeed gone as smoothly as I thought and had a rude awakening. I had forgotten to edit /etc/mdadm.conf and add my arrays, so after a quick edit I tried rebooting again and watched in horror as md0 was activated but md1 and md2 failed. A bit of digging on the system revealed that udev was creating the device nodes as /dev/md/x with a symlink from /dev/mdx at activation. mdadm currently has two issues with this: first, it won’t activate an array from a scan if the device node doesn’t exist, and second, the ‘--auto’ param pukes (instead of assuming it’s already set up) if it sees a symlink. I started to edit the checkfs init script to correct the issue manually, but found a patch on the Gentoo bugzilla that fixed everything…
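
For anyone else who trips over the forgotten config: ‘mdadm --detail --scan’ prints ARRAY lines with the right UUIDs, so appending its output is the quick fix (the DEVICE pattern below is just an example):

echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' >> /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf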


Comments
  1. I was wondering how your SIIG card has been performing. I’m in the market for a 4-port SATA150 card and would be building a RAID 5 using software RAID. Would you mind posting/emailing me some hdparm -tT results on your RAID device? Also, how has the reliability been? Thanks!

  2. Matthew Schick

    Hi Dave,

    Somehow your comment got flagged as spam, so I’m just now seeing it… Anyway, here are the results on the RAID device (four disks, RAID 5):
    hdparm -tT /dev/md8

    /dev/md8:
    Timing cached reads: 960 MB in 2.04 seconds = 471.35 MB/sec
    Timing buffered disk reads: 146 MB in 3.03 seconds = 48.18 MB/sec

    /dev/md8:
    Timing cached reads: 1000 MB in 2.00 seconds = 499.33 MB/sec
    Timing buffered disk reads: 136 MB in 3.04 seconds = 44.67 MB/sec

    /dev/md8:
    Timing cached reads: 896 MB in 2.00 seconds = 447.84 MB/sec
    Timing buffered disk reads: 142 MB in 3.03 seconds = 46.86 MB/sec

  3. excentral » RAID 5 via mdadm - pingback on 10/27/2007 at 1:32 am
