
Comment

22 February 2010, 16:34 UTC

Thank you very much for your answers, everything was as I expected. I decided to add another 1TB disk instead of a 1.5TB disk, as I hadn't considered that the 1.5TB disk would be much slower than all the other preexisting 1TB disks. So here is my situation again: I want to add another 1TB disk to an existing raid6 array consisting of seven 1TB disks. Unfortunately, mdadm gives me some strange errors. At first I added the new disk (sdh) to the existing array (md11), which went fine. So now I have a 7-disk array with a single spare:

md11 : active raid6 sdh[9](S) sdc[0] sdi[8] sdf[7] sde[4] sdk[3] sdj[2] sdd[1]
      4883811840 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
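For reference, the add itself was just the standard spare add, roughly:

mdadm /dev/md11 --add /dev/sdh    # add the new 1TB disk as a spare
cat /proc/mdstat                  # gives the output shown above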

If I now issue 'mdadm /dev/md11 --grow --raid-devices=8 --backup-file=/root/backup_md11.bak', I get

mdadm: this change will reduce the size of the array. use --grow --array-size first to truncate array. e.g. mdadm --grow /dev/md11 --array-size 1565606912

Why would adding a disk REDUCE the size? Am I missing something here? If I add the '--size=max' switch to the command line, i.e. 'mdadm --grow /dev/md11 --raid-devices=8 --level=6 --backup-file=/root/backup_md11.bak --size=max', I get

mdadm: cannot change component size at the same time as other changes. Change size first, then check data is intact before making other changes.
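If I read that message right, mdadm wants the component-size change done on its own before the reshape, so something like the following two-step sequence, but I haven't dared to run it yet, which is partly why I'm asking:

mdadm --grow /dev/md11 --size=max    # step 1: change only the component size
# ...check that the data is intact...
mdadm --grow /dev/md11 --raid-devices=8 --backup-file=/root/backup_md11.bak    # step 2: the actual reshape to 8 devices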

I really don't understand this, as the new disk is exactly the same model as the other ones. 'hdparm -I' on the new disk gives me a device size identical to that of the other disks. Does anyone have a clue what is going on? Does it have something to do with the non-standard chunk size?
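In case it helps with diagnosing this, I can also compare the disks beyond 'hdparm -I', e.g. with something like the following (the exact field names may vary between mdadm versions):

blockdev --getsize64 /dev/sdh                            # raw size of the new disk in bytes
blockdev --getsize64 /dev/sdc                            # raw size of an existing member for comparison
mdadm --examine /dev/sdh | grep -iE 'dev size|offset'    # what the superblock reports for the new disk
mdadm --examine /dev/sdc | grep -iE 'dev size|offset'    # ...and for an existing member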

Thanks for helping,

rman



