(original article)

Re: A Nasty md/raid bug

05 August 2012, 06:57 UTC

Hi Neil!

I am experiencing a perhaps new, but similarly rather nasty bug.

My goal: create a RAID 1 array between a partition on the hard drive and a ramdisk, and mount this md array at /usr. Once populated, the vast majority of my programs will run out of RAM and be very fast! I have 8 GB of RAM, so why not?
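For reference, a minimal sketch of how I set this up. The device names (/dev/sda2, /dev/ram0, /dev/md127) are the ones from my system and will differ elsewhere; the --write-mostly flag is an assumption on my part about steering reads to the ramdisk member:

```shell
# Sketch only; requires root and real devices, device names are from my setup.
# Mark the disk partition --write-mostly so reads prefer the ramdisk member.
sudo mdadm --create /dev/md127 --level=1 --raid-devices=2 \
    /dev/ram0 --write-mostly /dev/sda2
sudo mkfs.ext4 /dev/md127
sudo mount /dev/md127 /usr
```

Since the ramdisk is volatile, the array necessarily comes up with only the disk member after every reboot, which is where the trouble below starts.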

On startup, the partition, which is a linux_raid_member, is detected as such, and /dev/md127 is automatically created.

The problem: /dev/md127 is not usable. "sudo blkid /dev/md127" yields nothing.

Since this is RAID _one_, it should be able to function just fine with only one member, but it doesn't.

In fact, to get it going again, I have to do:

sudo mdadm --stop /dev/md127
sudo mdadm -A /dev/md127 -f /dev/sda2
[it works at this point; now to add in the ramdisk:]
sudo mdadm /dev/md127 -a /dev/ram0

Then "sudo blkid /dev/md127" says that it's ext4 formatted, that it has a UUID, etc., and it is usable. After fixing and mounting it, all my data is there.

Problem restated: /usr needs to be mounted so early in startup that no script of mine can automate fixing the /usr RAID device in time for it to be mounted when it is needed.
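The kind of check I would want to run early in boot can be sketched like this; `array_is_active` is a hypothetical helper I made up for illustration, and the commented-out recovery commands are just my workaround from above:

```shell
# Hypothetical helper: report whether a given md array shows as
# "active" in /proc/mdstat text. $1 = mdstat contents, $2 = array
# name (e.g. md127). Returns grep's exit status.
array_is_active() {
    printf '%s\n' "$1" | grep -q "^$2 : active"
}

# Intended use early in boot (shown as comments; needs root):
#   mdstat=$(cat /proc/mdstat)
#   if ! array_is_active "$mdstat" md127; then
#       mdadm --stop /dev/md127
#       mdadm -A /dev/md127 /dev/sda2
#   fi
```

On Arch, I assume something like this would have to live in the initramfs (a mkinitcpio hook) to run before /usr is mounted, which is exactly what I can't easily do by hand.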

I have a thread going on about this here: http://bbs.archbang.org/viewtopic.php?id=3179 .

I have even tried renaming /sbin/mdadm to /sbin/mdadm.moved and running "sudo /sbin/mdadm.moved --stop --scan" before shutdown. After restarting, the automatically created md device for the detected linux_raid_member partition is still unusable.

I have tried both kernel 3.3.4-1-ARCH with mdadm 3.2.3-2 and kernel 3.4.7-1 with mdadm 3.2.5-2.