
Re: Converting RAID5 to RAID6 and other shape changing in md/raid

25 September 2010, 00:38 UTC

Thank you Neil for all your effort and continued improvements on a wonderful tool! The "new" raid level reshaping abilities are truly impressive.

*edit* I resolved my problem (below), so I guess it can be ignored ;D (let me know if you'd rather I edit it down)

*edit2* Spoke too soon: now I've got massive drive corruption... I've slowed the resync/reshape to 0K while I back up some files, in case it's the reshape causing the corruption.
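(For anyone wondering how to slow a reshape like that: the md speed-limit sysctls are one way to do it. A rough sketch only - the exact values are whatever suits your system:)

# throttle the resync/reshape to a crawl while the backup runs
sysctl -w dev.raid.speed_limit_min=0
sysctl -w dev.raid.speed_limit_max=100

# put the usual defaults back once the backup is done
sysctl -w dev.raid.speed_limit_min=1000
sysctl -w dev.raid.speed_limit_max=200000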

Sorry if this is the wrong place to post since it's technically not a raid5->raid6 problem (I've been reading this blog for a while now).

On a Fedora 12 system I recently grew the raid5 to raid6; then, while adding a 7th 2TB drive to the new raid6, a drive was kicked offline (the new drive, I think) 6-8 hrs into the reshape.
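(For context, the grow was along these general lines - a sketch from memory, assuming a 5-disk raid5 picking up a 6th disk for the level change; device names and the backup-file path are placeholders, not my exact command history:)

# raid5 -> raid6: the data-disk count stays the same, so a backup file is needed
mdadm /dev/md126 --add /dev/sdX1
mdadm --grow /dev/md126 --level=6 --raid-devices=6 --backup-file=/root/md126-grow.bak

# then grow the raid6 from 6 to 7 devices with the new 2TB drive
mdadm /dev/md126 --add /dev/sdY1
mdadm --grow /dev/md126 --raid-devices=7 --backup-file=/root/md126-grow.bak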

Unfortunately, while diagnosing the drive for reliability (booting Knoppix, etc.), the OS drive became corrupted! I installed Fedora 13 (so I wouldn't need to upgrade mdadm to 3.1.2) on a new drive, and now I see that the raid6 array is... "degraded" but not started, with a "removed" drive: "State : active, degraded, Not Started".

Sorry if I'm being overly cautious in asking for advice, but I wanted to confirm my next steps.

# The 1st thing I tried
1) mdadm /dev/md126 --re-add /dev/sdi1
   mdadm: --re-add for /dev/sdi1 to /dev/md126 is not possible

# Try without the "removed" disk
2) mdadm --stop /dev/md126
   mdadm --assemble /dev/md126 /dev/sdb1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdj1
   mdadm: /dev/md126 assembled from 6 drives - not enough to start the array while not clean - consider --force.

# Try with the "removed" disk
3) mdadm --stop /dev/md126
   mdadm --assemble /dev/md126 /dev/sdb1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1
   mdadm: /dev/md126 assembled from 6 drives and 1 spare - not enough to start the array while not clean - consider --force.

Q: Should I use --force?

-------------------------

mdadm --detail /dev/md126
/dev/md126:
        Version : 1.2
  Creation Time : Mon Jul 19 16:27:34 2010
     Raid Level : raid6
  Used Dev Size : 1927798784 (1838.49 GiB 1974.07 GB)
   Raid Devices : 7
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Wed Sep 22 01:24:28 2010
          State : active, degraded, Not Started
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

  Delta Devices : 1, (6->7)

           Name : midori:128  (local to host midori)
           UUID : b2945795:978d8c97:9451e9f5:3191ab23
         Events : 49541

    Number   Major   Minor   RaidDevice State
       0       8       97        0      active sync   /dev/sdg1
       1       8      113        1      active sync   /dev/sdh1
       2       8       81        2      active sync   /dev/sdf1
       4       8       65        3      active sync   /dev/sde1
       6       8       17        4      active sync   /dev/sdb1
       5       0        0        5      removed
       8       8      145        6      active sync   /dev/sdj1

*UPDATE*

mdadm --assemble /dev/md128 /dev/sdf1 /dev/sdg1 /dev/sdd1 /dev/sde1 /dev/sdb1 /dev/sdi1 /dev/sdh1
mdadm: /dev/md128 assembled from 6 drives and 1 spare - not enough to start the array while not clean - consider --force.

mdadm --run /dev/md128
mdadm: failed to run array /dev/md128: Input/output error

Still afraid to run '--force'

OK I finally ran '--force'

mdadm --assemble /dev/md128 /dev/sdf1 /dev/sdg1 /dev/sdd1 /dev/sde1 /dev/sdb1 /dev/sdi1 /dev/sdh1 --force
mdadm: cannot open device /dev/sdf1: Device or resource busy
mdadm: /dev/sdf1 has no superblock - assembly aborted

*edit* Oops, raid not "stopped" - let's try that again

mdadm --assemble /dev/md128 /dev/sdf1 /dev/sdg1 /dev/sdd1 /dev/sde1 /dev/sdb1 /dev/sdi1 /dev/sdh1 --force
mdadm: /dev/md128 has been started with 6 drives (out of 7) and 1 spare.

cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md128 : active raid6 sdf1[0] sdi1[8] sdb1[6] sde1[4] sdd1[2] sdg1[1]
      7711195136 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/6] [UUUUU_U]
      [=>...................]  reshape =  5.1% (99625660/1927798784) finish=2456.1min speed=12404K/sec

Guess I resolved this on my own :D
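(Still on my to-do list, roughly: un-throttle the reshape once the file backup is safe and keep an eye on it, then check whether the spare rebuilds into the missing slot on its own or needs another --add:)

# restore the default speed ceiling after the backup finishes
sysctl -w dev.raid.speed_limit_max=200000

# watch the reshape / rebuild progress
watch -n 60 cat /proc/mdstat
mdadm --detail /dev/md128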






