mdadm 2.6.1 released

22 February 2007, 04:22 UTC

Yes, I forgot to announce 2.6 here, sorry about that.

2.6.1 is just some minor bug fixes. The release is motivated primarily by the fact that I have implemented raid6 reshape (i.e. adding one or more devices to a raid6 while it is online). For the moment you need to collect the patches from the linux-raid mailing list or wait for the next -mm release; they will hopefully be in 2.6.21-rc2. Earlier versions of mdadm can start a raid6 reshape with a new kernel, but there is one small case where they didn't quite do the right thing, so I wanted to get that fix out.
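Once the kernel side is in place, the reshape is driven with the usual --grow syntax. Roughly (device and array names are just examples):

  mdadm /dev/md0 --add /dev/sdf1            # add the new disk as a spare
  mdadm --grow /dev/md0 --raid-devices=5    # then grow the raid6 onto it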

2.6 introduced --incremental mode. This is intended for interfacing with 'udev'. When a new device is discovered it is passed to "mdadm --incremental" and mdadm tries to include it in an md array if that is appropriate. As soon as all devices become available, the array is ready. Of course if one device is missing, we have a problem. Do we start the array degraded as soon as possible, or wait for the missing device to appear, possibly waiting forever... No good answers to this question yet. mdadm allows you to try either.
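To give the idea, the hand-off looks something like this (only a sketch; the exact udev rule depends on your udev version and distribution):

  # hypothetical udev rule: hand each newly appearing block device to mdadm
  ACTION=="add", SUBSYSTEM=="block", RUN+="/sbin/mdadm --incremental $env{DEVNAME}"

  # or by hand, for a single device:
  mdadm --incremental /dev/sdb1
  # add --run if you would rather start a partly-assembled array degraded
  mdadm --incremental --run /dev/sdb1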


Comments...

Comment (12 March 2007, 21:09 UTC)
It would be nice if you could add links to mdadm graphical front-ends. Thanks.


Re: mdadm 2.6.1 released (12 March 2007, 21:43 UTC)

I am not aware of any graphical front-end for mdadm. Are you?


Re: mdadm 2.6.1 released (19 May 2007, 08:41 UTC)
Question: I know that mdadm is supposed to start the array after it has been created. In my case I tried reusing a couple of old hard drives that had old Linux RAID Autodetect (fd) partitions on them. At this point I have done everything short of letting badblocks run on them overnight to get rid of the old md autodetect stuff. I want to create a new array without having mdadm run a recovery on the array. Is this even possible in this scenario?
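For the old metadata itself, I gather mdadm can wipe it directly (device name just an example):

  mdadm --zero-superblock /dev/sdb1

but my main question is about avoiding the recovery.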

Jeff Means meaje(a) meanspc dot com


Re: mdadm 2.6.1 released (19 May 2007, 08:50 UTC)

I'm not sure what you mean. When you create an array, it will normally resync to make sure the array is entirely consistent.

You can use --assume-clean to avoid this. This is dangerous for raid5, but is OK for raid1.
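For example (device names are just examples):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 --assume-clean /dev/sdb1 /dev/sdc1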

NeilBrown


Re: mdadm 2.6.1 released (24 May 2007, 04:58 UTC)

When trying to build an RPM from the SRPM, I get the following

(from 2.6)

gcc -Wall -Werror -Wstrict-prototypes -O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m32 -march=i386 -mtune=pentium4 -fasynchronous-unwind-tables -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -c -o super1.o super1.c
cc1: warnings being treated as errors
super1.c: In function 'add_internal_bitmap1':
super1.c:1146: warning: 'offset' may be used uninitialized in this function
make: *** {super1.o} Error 1
error: Bad exit status from /var/tmp/rpm-tmp.7227 (%build)

Note, I had to change the brackets due to the way this blog uses the square ones.

I get similar results from 2.6.2 (just for Detail.c)

Compiling directly from the .tgz works fine, and the only difference I can find is the extra "-Wp,-D_FORTIFY_SOURCE=2" option to gcc in the RPM build.

This is on a FC4 box (which only has mdadm-1.11 in the repo).

I hacked around it by removing -Werror from the CWFLAGS in the SPEC file, but wanted to let you know anyway. I'm not so worried about the warnings as it compiles clean direct from the tgz.

heath


Re: mdadm 2.6.1 released (04 June 2007, 17:07 UTC)
Hi Neil,

I'm experimenting with mdadm and nbd. I have no idea if this is a good or a really bad idea, but here are my observations.

mdadm.conf:

DEVICE /dev/nbd1 /dev/nbd2 /dev/nbd3
ARRAY /dev/md0 level=raid5 num-devices=3 devices=/dev/nbd1,/dev/nbd2,/dev/nbd3
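For reference, the array over those three nbd devices would typically be created with something along these lines (only a sketch, the exact options may have differed):

  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/nbd1 /dev/nbd2 /dev/nbd3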

hdparm timings:

/dev/nbd1:
 Timing cached reads:   1248 MB in  2.00 seconds = 623.38 MB/sec
 Timing buffered disk reads:   30 MB in  3.12 seconds =   9.63 MB/sec

/dev/nbd2:
 Timing cached reads:   1196 MB in  2.00 seconds = 597.05 MB/sec
 Timing buffered disk reads:   22 MB in  3.00 seconds =   7.33 MB/sec

/dev/nbd3:
 Timing cached reads:   1200 MB in  2.01 seconds = 598.47 MB/sec
 Timing buffered disk reads:   30 MB in  3.19 seconds =   9.39 MB/sec

/dev/md0:
 Timing cached reads:   1204 MB in  2.00 seconds = 600.53 MB/sec
 Timing buffered disk reads:    2 MB in  4.00 seconds = 511.88 kB/sec

Is this caused by the following?

mythtv@server etc$ cat /proc/mdstat
Personalities : [raid5] [raid4]
md0 : active raid5 nbd3[3] nbd2[1] nbd1[0]
      594870656 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  1.4% (4243452/297435328) finish=2421.2min speed=2016K/sec

unused devices: <none>

The recovery also seems rather slow at 2016K/sec.

Any comments or advice on how to improve this would be appreciated.

Henk Schoneveld The Netherlands


Re: mdadm 2.6.1 released (07 June 2007, 01:14 UTC)

I'm afraid I can't think of any obvious causes of the slowdown. You could possibly try changing the readahead setting for md0, using the 'blockdev' command. But that wouldn't affect the resync speed.
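Something along these lines (the readahead value is only an example, in 512-byte sectors):

  blockdev --getra /dev/md0        # show the current readahead
  blockdev --setra 4096 /dev/md0   # try a larger value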

It might be interesting to watch the network to see what was happening, but I'm not sure what exactly to look for.

NeilBrown


md / mdadm handling of read errors (13 October 2007, 12:23 UTC)

I am expanding a raid5 array from 4 disks to 5. One of the older devices is causing traces like this to appear in /var/log/messages:


Jul 17 18:31:46 as kernel: sd 6:0:0:0: SCSI error: return code = 0x08000002
Jul 17 18:31:46 as kernel: sde: Current: sense key: Medium Error
Jul 17 18:31:46 as kernel: Additional sense: Unrecovered read error
Jul 17 18:31:46 as kernel: end_request: I/O error, dev sde, sector 800998880
Jul 17 18:31:46 as kernel: raid5:md0: read error corrected (8 sectors at 800998880 on sde)
Jul 17 18:31:46 as kernel: raid5:md0: read error corrected (8 sectors at 800998888 on sde)
Jul 17 18:31:46 as kernel: raid5:md0: read error corrected (8 sectors at 800998896 on sde)

The reshape process continues unabated and no problems seem to be indicated.

cat /sys/block/md0/md/dev-sd?/errors

Yields:


0
0
6584
0
0

The drive with the error is of course sde. Is this something I need to be mildly worried about or extremely worried about?

Otherwise, this is a wonderful wonderful tool. Would like to convert to raid6 though ;-)


Re: mdadm 2.6.1 released (19 July 2007, 05:27 UTC)

You should be worried enough that you replace the drive.

If this sort of thing (a Medium Error that md could correct) occurred once, it probably isn't a big problem. If it occurred every few months, you would want to look at replacing your drive. If it occurs several times during a rebuild, then it is definitely time that the drive was retired.


Re: mdadm 2.6.1 released (19 July 2007, 07:27 UTC)

Thanks. While I RMA to get a replacement drive, am I better off running degraded or better off leaving the drive in the array?

Also, is there any way to force the resize to run faster? At one point it spontaneously jumped from 4000K to 8000K, but now it is back down to 4000. I've tried the min and max speed settings with no observable change.
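(By min and max speed settings I mean the usual md knobs, e.g.:

  cat /proc/sys/dev/raid/speed_limit_min
  cat /proc/sys/dev/raid/speed_limit_max
  echo 50000 > /proc/sys/dev/raid/speed_limit_min

— the values here are just examples.)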

Thanks!


Re: mdadm 2.6.1 released (20 July 2007, 05:34 UTC)

Probably safe enough to leave it in. You will have a problem if you hit a read error on the bad drive while the array is degraded, or at the same position as a read error on another drive. So as long as the other drives hold up, you are fine. If another drive fails though, you will need to be very careful. Probably keep the array read-only until you are back with everything working.
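If you do want to hold the array read-only while you wait for the replacement, that can be done directly (assuming the array is /dev/md0 and nothing has it open for writing):

  mdadm --readonly /dev/md0    # mark the array read-only
  mdadm --readwrite /dev/md0   # switch back once the new drive is in place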

Reshape is an inherently slow operation as it reads from the drives, then goes back and writes again. As such it is much slower than e.g. recovery, which reads sequentially from some drives and writes sequentially to the other. You'll just have to wait.


Re: mdadm 2.6.1 released (29 July 2007, 18:41 UTC)

I'd email you this but I couldn't find an email :)

Does mdadm RAID5 array growing work like this? http://docs.sun.com/app/docs/doc/806-6111/6jf2ve3il?a=view

If you scroll down, you see "Concatenated (Expanded) RAID 5 Volume" and it appears that they don't actually reshape the array, they just add a disk onto the end of the existing array and somehow deal with the need for extra parity... I'm not sure how.

Is what mdadm does more advanced than this? I thought mdadm would be rebuilding the array bit by bit with the existing data.

Thank you for your work!


Re: mdadm 2.6.1 released (01 August 2007, 15:04 UTC)

Hi,

Re: Software Raid Problem
-------------------------

Sorry to post here... I can't seem to find an email address.

Background
----------
OS = OpenSUSE 10.2
2 x 250GB SATA drives

Drive 1
-------
/dev/sda1 = swap
/dev/sda2 = linux raid (OS & data is here)
/dev/sda3 = linux raid

Drive 2
-------
/dev/sdb1 = swap
/dev/sdb2 = linux reiser
/dev/sdb3 = linux reiser

Using mdadm I managed to create /dev/md0 in raid 1 (mirror). /dev/sda3 & /dev/sdb3 were added to this set.

Good so far....now for the problem....

Problem area
------------
Using mdadm I failed to create /dev/md1 in raid 1 (mirror). /dev/sda2 & /dev/sdb2 were part of this set. I get this error:

mdadm: cannot open device /dev/sda2: Device or resource busy

I booted from a rescue CD, and ran the mdadm command to create /dev/md1 ... and it worked! I let the raid sync.

When I reboot, I get this error:

mdadm: cannot open device /dev/sda2: Device or resource busy
mdadm: /dev/sda2 has no superblock - assembly aborted

How do I solve this?

Your help will be appreciated.

Regards

Basheer subs@nmcexquisite.com South Africa


Advice: Sharing a hot spare device in software RAID! (22 August 2007, 01:22 UTC)
Dear Neil Brown :

A global (shared) spare disk is important and economical. For example, we create two arrays, md0 and md1, and set a global (shared) spare disk; whenever any disk of md0 or md1 becomes faulty, mdadm adds the global spare disk to that array so it can recover automatically. A global (shared) spare disk is a safe and economical strategy!

Steps:

1. Modify /etc/mdadm.conf to add a spare-group, as follows:

ARRAY /dev/md0 level=raid6 num-devices=4 UUID=9b2c49f3:aab3a59f:319fd604:8d84ab53 devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd,spare-group=global

ARRAY /dev/md1 level=raid5 num-devices=3 spares=1 UUID=ccfde74d:ddbf8e63:4079f0e2:0c9a0cbc devices=/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,spare-group=global


2. Run mdadm as a monitoring daemon:

mdadm -F -s -m root@localhost -f -d 30

3. Test by marking a disk as faulty:

mdadm /dev/md0 -f /dev/sdc

>>>>>>>> not complete

Reference: http://winsonz.spaces.live.com/blog/cns!10221e373b076bc9!169.entry


Zhonghua Jiang jzh800@126.com 2007-8-21

Re: mdadm 2.6.1 released (21 August 2007, 05:25 UTC)

Shared spares are already supported and have been for a long time.

You need to mark the /etc/mdadm.conf entry with something like "spare-group=global" (you choose the name; there can be a number of independent spare-groups) and you need to have "mdadm --monitor" running. When a drive fails, mdadm will see if another array in the same spare-group has a spare, and will move it across if appropriate.

Read the man-page again.


Re: mdadm 2.6.1 released (22 August 2007, 21:06 UTC)

Zhonghua Jiang : editing a comment with substantial new information, after it has been replied to, breaks the flow of the conversation.

In your example mdadm.conf, you need to replace the ',' before "spare-group=global" with a space.
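i.e. the first entry would become:

  ARRAY /dev/md0 level=raid6 num-devices=4 UUID=9b2c49f3:aab3a59f:319fd604:8d84ab53 devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd spare-group=global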

Also there is a bug in mdadm-2.6.2 which stops spare migration from working. It is fixed in 2.6.3, and not present before 2.6 (I think).


Re: mdadm 2.6.1 released (05 November 2007, 20:38 UTC)

Hi Neil:

I'm trying to track down a weird booting issue. My / is on /dev/md0, which is a mirrored pair of SCSI drives. The system has 2 SCSI drives, 6 SATA drives, and one USB drive. All show up as sd? devices.

Each time the system boots, the drives will get different sd device names. Once in a while, the system will refuse to boot with mdadm complaining it can't find /dev/md0 and / is not available. Rebooting a few times (to get the drives in different order) will work and eventually mdadm finds /dev/md0.

I've never had trouble with this before, I'm not really sure if it's an mdadm problem, and I have no idea how to address it.

Can you shed any light on this?


trouble with raid (14 November 2007, 15:47 UTC)

Hello Neil :-),

I have a problem with linux-raid and maybe you can help me, or know where to ask... Please :-)

I have a disk from an old RAID-1 and I want to get my old data back. What can I do to recover it?

Thanks, Patrick


Re: mdadm 2.6.1 released (15 November 2007, 01:12 UTC)

I found a solution: http://ubuntuforums.org/showthread.php?p=3770804#post3770804

thanks and good work :-)


Re: graphical frontend (19 November 2007, 07:01 UTC)

Are you looking for a graphical front end, something like this?


resizing almost works (19 November 2007, 07:09 UTC)

Using --grow to add a new hard drive to an array appears to work fine.

As far as I can tell, the "auto-assemble" at boot time never works on arrays that have been "grown".

Details: Linux questions: adding to a RAID.

Is there anything else I can do to improve RAID on Linux, other than whining about little bugs like this?


Re: mdadm 2.6.1 released (14 February 2008, 19:34 UTC)

Is there any way I can convert a raid6 to raid5 with mdadm?

Thanks,

G


Re: mdadm 2.6.1 released (18 February 2008, 04:37 UTC)

No, raid6 -> raid5 conversion is not currently possible. Maybe one day ... maybe not.



Re: mdadm 2.6.1 released (09 June 2008, 11:20 UTC)
Hi!

I am looking for a way to change the stripe size on an existing RAID 5. Can this be done with mdadm, or do I have to remove and re-create the whole RAID?

Thanks Arne K.


Re: mdadm 2.6.1 released (03 August 2008, 19:59 UTC)
Hello Neil,

I have to thank you for the work you've done on md. I use it daily, 24/7, and so far I am very satisfied! I have a few questions though:

- Is it possible to reshape raid5 -> raid6? If so, what would be the mdadm syntax and how many spares do I need for the process? (Currently I have 11 1TB drives in raid5 (10G array) + 1x 1TB spare - unused.) 'mdadm --grow --level 6 --raid-devices 12 /dev/md1' tells me: Cannot reshape array without increasing size (yet).

- If I mark a drive as failed and remove it, or accidentally initialize the array without one drive, is there a way to re-add the drive immediately without going into a re-sync? mdadm --re-add does not seem to have any effect (a resync is initiated) and --assume-clean cannot be used when re-adding a drive.

thank you, Roman D.


Re: mdadm 2.6.1 released (25 September 2008, 15:36 UTC)

Hi Neil,

I've come across some strange data that I hope you might be able to explain:

I have a 4-disk raid10 setup. When I measure disk I/O throughput using bonnie++, it shows reads to be about 20% faster during the initial raid resync.

Is this something you recognize and/or can explain?

Best regards,

Daniel


Re: mdadm 2.6.1 released (26 September 2008, 18:41 UTC)
Hi,

I have one simple question: why the limit of 255 arrays?

I have machines hosting tens of databases on SAN storage, and I am getting close to the limit of 255 arrays.

Regards


Re: mdadm 2.6.1 released (25 April 2009, 09:06 UTC)
Hi,

Is there a way to make mdadm display /dev/mapper/XXX devices instead of /dev/dm-XX when using DM-multipathed devices to build md arrays?

Or is there a roadmap to make MD and DM more "integrated"?

Regards

Brem


Re: mdadm 2.6.1 released (26 April 2009, 05:58 UTC)

Mdadm looks for names of devices in /dev and always chooses the shortest name. So it will always prefer /dev/dm-XXX over /dev/mapper/XXX. While I appreciate that the latter could be preferable, finding a generic way to prefer it would be a challenge.

There is no road map for "dm/md integration". Different people suggest it at different times, and they probably all mean something slightly different. Sometimes people try to work towards it, but so far without much progress. I'm not sure it would answer the above issue anyway: mdadm equally prefers /dev/md0 over /dev/md/0, and there is no integration issue there.


Re: mdadm 2.6.1 released (13 October 2009, 21:31 UTC)

Just wanted to say thanks for this fine piece of software. I LOVE YOU!

-Phil
