mdadm

07 June 2004, 12:38 UTC

mdadm is a tool for managing Linux Software RAID arrays.

It can create, assemble, report on, and monitor arrays.

It can also move spares between raid arrays when needed.

It can be found at http://www.kernel.org/pub/linux/utils/raid/mdadm/ or any of the kernel.org mirrors.

There is a git repository at http://neil.brown.name/git/mdadm or git://neil.brown.name/mdadm.

FAQ and other documentation can be found at http://www.linuxfoundation.org/collaborate/workgroups/linux-raid







Comments...

Re: mdadm (31 August 2006, 17:46 UTC)
Hi Neil, there are a few of us out here using the great mdadm multipath feature; however, we are having problems with failback on repair/replug of paths. We have also seen situations where both paths to devices are active & sync. Is this correct, or should one be active & sync and the other be standby (I suppose active-active versus active-standby is what I'm getting at)?

Is there any chance you could help us out with this or maybe show us where we are going wrong in the config of this feature?

Please have a look at the following URL for our issues:

http://www.linuxquestions.org/questions/showthread.php?p=2403645#post2403645

Thanks

Tom

t.beardsell@tiscali.co.uk


Re: mdadm and multipath (01 September 2006, 05:07 UTC)

Hi Tom.

I'm afraid that multipath isn't something I have much time for (you have to draw the line somewhere....).

The multipath implementation in md was written by Ingo Molnar some years ago and apparently left to rot. I have tried to make sure it didn't rot too much while maintaining other parts of md, but I haven't been in a position to improve it. I don't use multipath myself, don't know the fine details or what the important issues are, and don't have any hardware that would allow me to test it.

It is my understanding that the 'dm' based multipath is seeing active development and it should be quite usable and reliable, but again - I have never used it and so cannot comment directly.

I will not be putting any effort into multipath on md. However if you or anyone else would like to work on the code, either in the kernel or in mdadm, I would be happy to review any changes and get them included in mdadm or Linux as appropriate.

Sorry I cannot be more helpful.


Comment (20 February 2007, 00:31 UTC)
Hi Neil,

Great package. If I should address this question somewhere else please let me know, and sorry to bother you.

I am using the Ubuntu version and did an upgrade from Breezy to Edgy. The upgrade managed to erase the superblocks on the RAID devices I'm using. If I do a build of the array by hand


mdadm --build /dev/md0 --level=1 --raid-devices=2 /dev/hd[ac]5
mdadm --build /dev/md1 --level=1 --raid-devices=2 /dev/hd[ac]6
mdadm --build /dev/md2 --level=1 --raid-devices=2 /dev/hd[ac]7

The array seems to get built, but when I do an examine it notes that the superblocks are not persistent. Thus when I reboot, the arrays will not reassemble themselves. How do I fix this?

The output of: mdadm --examine /dev/md2

looks like:

/dev/md2:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : bf0fdc1c:7831246c:cd2a4a74:fa419930
  Creation Time : Sat Feb 7 13:42:02 2004
     Raid Level : raid1
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 4
    Update Time : Mon May 15 07:36:13 2006
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 31ef920e - correct
         Events : 0.1577961

      Number   Major   Minor   RaidDevice State
this     0      253       7        0      active sync   /dev/evms/.nodes/hda7
   0     0      253       7        0      active sync   /dev/evms/.nodes/hda7
   1     1      253       8        1      active sync   /dev/evms/.nodes/hdc7

The version of mdadm reports itself as:

mdadm --version
mdadm - v1.12.0 - 14 June 2005

My email address is rcheetham@varco.com.

Robert Cheetham


Re: mdadm (20 February 2007, 00:36 UTC)

Hi Robert,

It looks like the metadata is still intact.

What do you get if you "--assemble" instead of "--create"?

Maybe all you need is to put relevant information in /etc/mdadm.conf (or /etc/mdadm/mdadm.conf) and they will be assembled automatically. Try:

mdadm -Es >> /etc/mdadm/mdadm.conf

and see if that helps.


Comment (21 February 2007, 04:36 UTC)
Hi

Thanks for the reply. I tried the --assemble command, not the --create command. If I run create I am afraid I will destroy the existing array (such as it is).

I have no idea what is going on. My mdadm.conf file looks like:

cat /etc/mdadm/mdadm.conf
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=bf0fdc1c:7831246c:cd2a4a74:fa419930
ARRAY /dev/md1 level=raid1 num-devices=2 /dev/hd[ac]6
ARRAY /dev/md0 level=raid1 num-devices=2 /dev/hd[ac]7 UUID=bd37878c:e24b3b3d:fcb4335e:9761b852

when I do mdadm -Es I get:

mdadm: only give one device per ARRAY line: /dev/md1 and /dev/hd[ac]6
mdadm: ARRAY line /dev/md1 has no identity information.
mdadm: only give one device per ARRAY line: /dev/md0 and /dev/hd[ac]7
mdadm: No devices listed in /etc/mdadm/mdadm.conf

However when I do:

mdadm --detail /dev/md2
/dev/md2:
        Version : 00.90.03
  Creation Time : Mon Feb 19 11:28:16 2007
     Raid Level : raid1
     Array Size : 89747072 (85.59 GiB 91.90 GB)
    Device Size : 89747072 (85.59 GiB 91.90 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is not persistent

    Update Time : Tue Feb 20 22:12:33 2007
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

    Number   Major   Minor   RaidDevice State
       0       3        7        0      active sync   /dev/hda7
       1      22        7        1      active sync   /dev/hdc7

which is what I expect!

But mdadm -Es -vv /dev/md2

/dev/md2:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : bf0fdc1c:7831246c:cd2a4a74:fa419930
  Creation Time : Sat Feb 7 13:42:02 2004
     Raid Level : raid1
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 4
    Update Time : Wed Jan 31 05:47:51 2007
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 33961742 - correct
         Events : 0.4149720

      Number   Major   Minor   RaidDevice State
this     1      22        7        1      active sync   /dev/hdc7

   0     0       0        0        0      removed
   1     1      22        7        1      active sync   /dev/hdc7

tells me something different.

Now looking at df it tells me that md2 is mounted.

# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hdb5              5162796   2399004   2501536  49% /
varrun                  518076        92    517984   1% /var/run
varlock                 518076         4    518072   1% /var/lock
udev                    518076       168    517908   1% /dev
devshm                  518076         0    518076   0% /dev/shm
lrm                     518076     18856    499220   4% /lib/modules/2.6.15-27-386/volatile
/dev/hdb1                37604     22531     13067  64% /boot
/dev/hdb6              4134900    131260   3793592   4% /opt
/dev/hdb7              2925300    731416   2045288  27% /var
/dev/md2              88338256  27919288  55931620  34% /home

I suspect that the array is functioning with only one drive; how do I fix this so it will work with two drives? It is crucial that I don't delete any of the data on the drive.

Would mdadm /dev/md2 -a /dev/hda7

add /dev/hda7 back into the array and bring it up to date?

----------------------------------

On a slightly different subject: reading the manual, I don't understand what is meant by partitions in this context. A few more examples would be very helpful, for instance: what an mdadm.conf file looks like and how to produce one; what the difference is between create, assemble, and build; and why you sometimes get a persistent superblock and sometimes not, and why this makes any difference.

Your attention to this is very much appreciated, if you need any more detail please let me know.


many thanks

Robert


Re: mdadm (21 February 2007, 12:18 UTC)

Hi Neil,

Is this still the place to look for mdadm and linux md news? I only ask since I cannot see any announcement here about mdadm 2.6. Where should I go to stay informed?

Many thanks, Paul


Re: mdadm (22 February 2007, 01:45 UTC)

Yes, I have been a bit slack about keeping this website up-to-date. Sorry about that.

I always announce new mdadm releases on linux-raid@vger.kernel.org, sometimes announce them through freshmeat.net, and occasionally put something on this website. I'll try to remember to put more stuff on this website, as people obviously read it!!

NeilBrown


Re: mdadm (07 April 2007, 08:10 UTC)
Hi Neil

This package is great :)

I was going to use the "fake" RAID provided by Intel (on the motherboard), but it doesn't quite work in Gentoo... No worries, a quick hunt around revealed that this was a better option anyway.

I just needed simple RAID1 and this has come up trumps!

Cheers,

Nick


Comment (13 June 2007, 13:55 UTC)
Hi Neil,

Great work man! We're trying to push mdadm to the limit :)

One quick question wrt the RAID5 calculations, without having reviewed the code: it seems that the RAID5 calculations are performed on only one core/CPU. Is there any work going on to improve this?

OOPS: it was a test involving NTFS, and that must have been the bottleneck, not the RAID calculation, so never mind :)

Greetings, Jasper.


Re: mdadm (13 July 2007, 11:04 UTC)

Hello, would there be any problem with adding a feature so that mdadm in Monitor mode could send e-mail alerts to multiple addresses, not only one as it does now?

Michal Kovacik


Re: mdadm (13 July 2007, 12:53 UTC)

If you want mdadm --monitor to send mail to multiple addresses, you have two options.


1/ Create an alias (e.g. in /etc/aliases) that will forward the mail to a list of people, and have mdadm send to that address
2/ Write a shell script which composes whatever mail message you like, and does whatever you want with it, and give that script to mdadm as a 'program'.
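
As an illustration of option 2, here is a minimal sketch (the script path and addresses are made up; mdadm runs the program with the event name, the array device, and sometimes a component device as arguments):

#!/bin/sh
# /usr/local/bin/md-alert.sh (hypothetical path)
# mdadm --monitor invokes this as: md-alert.sh <event> <md-device> [<component-device>]
EVENT="$1"
ARRAY="$2"
COMPONENT="$3"
{
  echo "mdadm event:  $EVENT"
  echo "array:        $ARRAY"
  echo "component:    $COMPONENT"
} | mail -s "mdadm: $EVENT on $ARRAY" admin1@example.com admin2@example.com

It can then be passed to the monitor with something like "mdadm --monitor --scan --program /usr/local/bin/md-alert.sh", or named on a PROGRAM line in mdadm.conf.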


Re: mdadm (06 November 2007, 09:59 UTC)

Greetings!

Fantastic app!

I have noticed, through the wonder that is smartd, that the temperature of my hda is always 2-3°C higher than hdc.

These 2 drives are mirrored. Is it possible that hda is being used as the primary read drive or something, hence the higher temperature? Is there any way to 'share the love'?

Cheers!


Re: mdadm (13 November 2007, 16:51 UTC)

So, if I am having a problem with mdadm, where/how can I get assistance? What is the best channel?

thanks Bob


Re: mdadm (16 November 2007, 01:41 UTC)

The best way to get assistance with mdadm or md/raid is to send mail to linux-raid@vger.kernel.org.


Re: mdadm (05 February 2009, 01:09 UTC)
Sir,

1. This tool totally ROCKS! On my "Hot Smokin' Weapon!" usefulness-O-meter, this pins the needle at "DUDE!!"

One of the things I really like about messing with Linux is that there are so many cool things you can do with it - because folks like you take the time to make it useful. Many thanks, moy Obligado, muchos Gracias, bolshoi Spasiba, merci beaucoup, etc. etc.

2. (the inevitable) Question: Assume I am creating a raid-5 array using a bunch of 1T drives as the array members. I fire off the mdadm command, it takes the parameters, chews for a millisecond or two, and then immediately returns with a success message indicating that the array is "running".

Normally, when a command like this returns (formatting a disk, for example), this means it's totally done, and the device is golden. However with mdadm, the command returns, but the array is still "building". Top shows "md0_raid5" and "md0_resync" in response to my create command using mdadm.

(a) Why does it do this?

(b) Besides looking at the drive lights (which can fool you) or leaving an instance of top running, how would I determine that the process has finished, and how do I know it finished correctly?

(c) If you gave an answer to the previous question about "partitions" in the mdadm context - could you re-post it? I'd sure like to know that as well.

Again thanks for everything!

Jim


Re: mdadm (05 February 2009, 06:25 UTC)

1/ Thanks.

2/a/ Because it can. With "mkfs" you cannot use the filesystem until the mkfs has finished. With md/raid you can start using the array immediately. The initial sync continues in the background.

2/b/ The command "mdadm --wait /dev/mdXX" will wait for mdXX to finish any resync. Or just look at "cat /proc/mdstat".

2/c/ I don't really understand the question about "partitions".

A device such as /dev/sda has partitions like /dev/sda1 or /dev/sda2. An md array can be created over whole devices (/dev/sda and /dev/sdb) or over partitions (/dev/sda1 and /dev/sdb1). Does that help?
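
For instance (an illustration only, with placeholder device names), the same mirror could be created either way:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb     # whole devices
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # partitions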



Comment (18 February 2009, 01:45 UTC)
[a stupid question...]

What I'd like to do is something I know is possible with ENBD (Enhanced NBD) but the author is still tinkering with it so I'm not sure I entirely trust it. So I wondered if it's possible with iSCSI.

Basically, I want a hot-failover fileserver. Failover is easy enough with the VRRP component of keepalived. Mirroring data is easy enough with rsync, but there is a big performance hit from syncing frequently, and infrequent syncing is bound to lead to loss of important changes.

So the basic idea is two servers, each with one disk that is part of a networked software RAID array. Only the live server gets to read/write the array. If the disk on the live server dies, the server gives up the virtual IP. If the live server dies, the failover server mounts its disk locally. Some tinkering with keepalived scripting should take care of that. But are mdadm and iSCSI compatible in that scenario? Anyone know? Or willing to opine? If I had enough spare kit I'd just test it, but I don't have that luxury. :(

I've googled, but keep getting pages about a server with two disks in s/w raid being accessed by other servers (use GFS, fencing, blah, blah). I don't need the throughput of two live servers, I just want to be reasonably sure I have something that works even if one disk or one server dies and the only down-time is a couple of minutes for VRRP to kick in and for Windows users to re-map network drives.


Re: mdadm (18 February 2009, 01:55 UTC)
What I'd like to do is something I know is possible with ENBD (Enhanced NBD) but the author is still tinkering with it so I'm not sure I entirely trust it. So I wondered if it's possible with iSCSI.

Wouldn't this come under the banner of Storage Area Network, aka SAN?

The idea is that many hardware nodes act as a single software node, and clients connect to the software node. If a hardware node dies, nothing happens as far as the client is concerned...


Re: mdadm, ntfs-3g (06 March 2009, 20:02 UTC)

I've greatly enjoyed using mdadm, but have been puzzled by one piece of its behaviour. If it is used to build a pre-existing Windows RAID array with --build, e.g.:

mdadm -B -c 64 -l 1 -n 2 /dev/md2 /dev/hda1 /dev/hdb1

where Windows has built a healthy RAID1 array on the first partitions (/dev/hda1 and /dev/hdb1), all seems to go well.

The array is mountable with the kernel ntfs driver, ro or rw. Using ntfs-3g, however, is impossible, as apparently the device size is smaller than the volume size (as found by ntfsresize -i /dev/md2, for example; see the thread below).

Thinking this to be an ntfs-3g problem, I contacted the developer,

http://forum.ntfs-3g.org/viewtopic.php?f=3&t=1125

but apparently they feel this is a mdadm problem. I did some testing, and the problem is independent of whether dmraid is in use or not. Searching around the web yields similar problems with Windows Raid-0 and Raid-5 arrays, all with the same issue.

Any clues?



Re: mdadm (06 March 2009, 21:18 UTC)

hda1 is 61440561K in size (I think). The NTFS filesystem is 61440560K in size.

You told mdadm to use a chunk size of 64K. This rounds the size down to a multiple of 64K, which will be 61440512K. Clearly too small.

Chunk size is not meaningful for RAID1. Simply remove the "-c 64" or make it something that divides into the filesystem size, such as "-c 16".
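
Based on the command quoted above, the adjusted invocation would look something like this (a sketch only, not tested here):

mdadm -B -l 1 -n 2 /dev/md2 /dev/hda1 /dev/hdb1

or, keeping an explicit chunk size, mdadm -B -c 16 -l 1 -n 2 /dev/md2 /dev/hda1 /dev/hdb1.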

Then it should "just work".

You should be aware that when using -B, md is not able to record device failures. So if one device gets unplugged, md will continue to write to the other but will not record anywhere that the device has failed. If you then shut down and start up again having plugged the drive back in, md (or Windows) will find the other drive, assume it is a working part of the array, and trust the data on it. This might not be what you want.

So it will work, but be careful if you get a drive failure.


Re: mdadm (22 March 2009, 21:50 UTC)


Hello Neil,


I have been using mdadm for some time now and it has definitely proved to be most useful. Your work on this package is very much appreciated.

I have a question about the --create command. If I have an existing array, and I run the create command against that array, will the create command destroy the data on that array?

I have lost three of the four superblocks on my raid array and I am struggling to bring it back up. Because the superblocks are not there, and because I have moved the array to another computer, I am unsure about the physical order that the drives are connected in. (ie. I am not sure if the drive currently connected as sdb is the drive that was originally connected as sdb). Will this have an impact on the create command? Is there a better way to recreate the superblocks without running the --create command?


I hate to ask this question here as I realize it is not the correct place to do so, and I assume you have better things to do than answer every question about mdadm. Unfortunately I was unable to find this information via other channels. Any input is appreciated.

thanks


Michael


Re: mdadm (23 March 2009, 03:35 UTC)

Hi.

The best way to ask questions about mdadm is to send mail to linux-raid@vger.kernel.org. However I do sometimes reply to questions posted here.

When you --create an array, pre-existing data is not destroyed, at least not necessarily. If the 'resync' process that usually happens on array creation starts, that could destroy data, so while experimenting you want to avoid that.

For a raid5, the best thing to do is to create the array with one device listed as "missing" rather than "/dev/sdwhatever". That way the array will appear degraded and no resync will happen (for raid6 you would want 2 'missing' devices).

So I suggest you try creating the array with various combinations of drives, always having one 'missing', and try "fsck -fn" on the array until you find a combination that fsck doesn't complain about.

The order of the drives is very important, so try different orders until it works.
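
A sketch of that trial-and-error loop, using hypothetical device names for a four-drive raid5 (substitute your own drives and keep one slot 'missing' each time):

mdadm --create /dev/md0 --level=5 --raid-devices=4 missing /dev/sdb1 /dev/sdc1 /dev/sdd1
fsck -fn /dev/md0       # read-only check; makes no changes
mdadm --stop /dev/md0   # stop the array before trying the next device order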


Re: mdadm (23 March 2009, 12:50 UTC)

Thank you very much, Neil. That is very helpful information. I will direct future inquiries to the linux-raid@vger.kernel.org mailing list.


Thanks for your time.


Michael


bug mdadm-df? (22 April 2009, 14:26 UTC)

Hi Neil,

I am using "mdadm" and with 4 partitions in my system. The problem is that "df" shows that root partition is full when it's not true. I guess it is counting /var and /home which i have in different partitions.

Those applications which use "df" (such system updates) do not work because they find the disk full.

I've spent many time searching a solution with no success... is there a solution for this?

many thanks!


Re: mdadm (22 April 2009, 23:30 UTC)

If 'df' shows that the filesystem is full, then it is full. This is completely unrelated to raid.

Why do you think that the filesystem is not full??

I would suggest running "du -x / | sort -n" to find out where all the space is used.


Re: mdadm (23 April 2009, 09:10 UTC)

Thank you, it's solved!

The problem was that I am doing backups with rsync to an external drive mounted on /media/backup, but if the drive is not present, rsync synchronizes with the local folder, filling the root partition (the -x option helped).

thanks again!


Re: mdadm (21 August 2009, 18:54 UTC)
Hi Neil, I apologize for bothering you with this, but maybe it'll help others as well. I am using the latest mdadm (3.0), and when I did a grow of a 3-partition RAID5 to 4, the machine shut down gracefully on the UPS signal during the critical section. However, 3 of the 4 drives no longer have the superblock and they refuse to assemble.

I know you've had the foresight to think of this so I'm really hoping this can be fixed.

Here is the mdadm output on the last remaining spare:

          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x4
     Array UUID : 495f6668:f1e12d10:99520f92:7619b487
           Name : GATEWAY:raid5_280G  (local to host GATEWAY)
  Creation Time : Fri Jul 31 23:05:48 2009
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 586099060 (279.47 GiB 300.08 GB)
     Array Size : 1758296832 (838.42 GiB 900.25 GB)
  Used Dev Size : 586098944 (279.47 GiB 300.08 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 754ae1cf:bbee0582:f660ec89:a88800d3

  Reshape pos'n : 0
  Delta Devices : 1 (3->4)

    Update Time : Fri Aug 21 09:55:38 2009
       Checksum : e18481fb - correct
         Events : 13581

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : spare
    Array State : AAAA ('A' == active, '.' == missing)

I'll keep checking this page for your input...appreciate it very much

Also, on running mdadm --assemble --scan, I get:

mdadm: Failed to restore critical section for reshape, sorry.

Any help will be great...

Thanks, Anshuman


Comment (28 September 2009, 08:57 UTC)
Dear Neil,

First of all, thanks for the great support you show the Linux community. It's people like you that made Linux an entirely viable alternative to M$.

I am running ubuntu 9.04. Until recently I used the mdadm version that exists in the ubuntu repositories, with perfect success. However, since that version is fairly old (2.6, I think), I decided to install the latest version 3.0.2.

On a freshly formatted system with no mdadm I downloaded http://www.kernel.org/pub/linux/utils/raid/mdadm/mdadm-3.0.2.tar.gz, unzipped it and ran the "sudo make" and "sudo make install" commands. According to mdadm --version, I successfully installed version 3.0.2 of the program. I was also successful in creating and mounting a fresh raid 5 array.

However, the installation didn't create either the /etc/mdadm/mdadm.conf or the /etc/mdadm.conf file, where I need to specify the array details so that it is assembled during boot time.

I manually created (with gedit) the mdadm.conf file in both locations and added the "DEVICE partitions" line and a line with the result from mdadm --examine --scan

Even though this was enough on the old version of mdadm for auto-assembly during the boot sequence (so that /etc/fstab could mount /dev/md0), on version 3.0.2 it made no difference at all, as if the mdadm I compiled and installed manually wouldn't read the mdadm.conf file, either in /etc/mdadm/mdadm.conf or in /etc/mdadm.conf.

Is there anything I could do to address this problem? Any way to permanently point mdadm to its mdadm.conf file? Is there an mdadm.conf file created in some other location on the disk, where I can add the array information?

Thanks in advance

Kind regards

Angelos Kyritsis.


Re: mdadm (25 October 2009, 19:42 UTC)

Hello Neil.

Can you please comment on the following idea? (please add whatever general or specific feedback you feel like).

"Asymmetric Performance Mirroring - Taking advantage of Gen 1 SSDs in home computers." I'd like to realize the benefits of solid state drives (SSDs) while not losing the advantage of HDDs and also not falling victim to the serious drawbacks of SSDs. My goal is to create a cheap but ultraresponsive PC for home or business use. (Summary of SSD performance: Very good random read, moderate sequential read, adequate to poor random write, "stuttering").

What if an SSD was paired with an HDD in software RAID 1 with smart multiple device optimisation bespoke designed for this asymmetry? It seems like mdadm is there already, or at least close!

Ideally, a [typical] user would install an SSD of size X on a sata channel, and another HDD of size Y>X on a different sata channel. The SSD would be a single primary partition (sda0). The HDD would be two partitions, the first of which is sized X (sdb0). sda0+sdb0 would be used to create the RAID 1 redundant array (sdl0) and this logical drive would have the OS and "some" applications installed. It would be considered the "performance" partition. sdb1 would have media files, and larger applications if necessary.

Read Races
----------
A read operation to any logical block of sdl0 would initially be sent to the SSD, then the same request sent to the HDD. The manager would then supply data from the read of the SSD without waiting for the HDD to complete (as I think the mdadm driver already does). A subsequent read operation can look at prior completion times. If it notes that the SSD won the last "race", then it can continue with the same order: issue the command to the SSD first, then to the HDD. It can also cancel the first request to the HDD if possible. If the user is performing some sustained operations, or if the SSD gets into one of the stuttering states where completions are made to wait for internal garbage collection or erasure events, then the HDD might win the time-to-completion race. At this point, it would be nice for the manager to further optimize by inverting the order of devices to which the manager issues identical commands. At this point, the HDD is picking up the slack for the SSD and the end user is seeing the total benefit. After a sustained bandwidth-sensitive user has stopped requesting and the queue has drained, the manager should see that the SSD is winning the individual races and should invert the command issue priority again (favoring random reads).

Filtering Writes
----------------
I'm not sure how this works now, but perhaps the manager can be adapted to help the SSD a little bit. It's been reported on other forums that a utility for Windows which caches smaller writes can help the "stuttering" of SSDs. Unfortunately the utility hurts larger write performance. Perhaps the same race analogy can be used: the manager times write completion and, if it sees that the HDD starts to win, it simply queues up the writes to the SSD without blocking the user. Data is safe on the HDD if power is removed. Maybe the end user ensures data with a UPS system. Maybe the manager has to write to a log in memory to indicate when a repair is needed? The desire here is that the SSD does not slow the system down. My general prediction is that sometimes the SSD will complete a write first (there are spare flash blocks ready to be written in a well-maintained flash system) and sometimes the HDD will complete a write first (heads are located ideally and/or HDD cache is not full, and/or the SSD requires internal flash read-modify-write management).

Regards, and thanks for reading, Tom


Re: mdadm (13 December 2009, 18:06 UTC)

The link to the FAQ and documentation is broken (http://linux-raid.osdl.org/index.php/Main_Page).

Thanks for a great tool!


Re: mdadm (14 December 2009, 04:13 UTC)

Thanks. I've updated the link.


Re: mdadm (29 December 2009, 21:00 UTC)
Hello Neil,

I have read through the comments until I have read the sentence 'The order of the drives is very important, so try different orders until it works.'

Is this correct? From my perspective, the order of the raid storage devices is important for managing, but only to the internal view of the raid. Isn't it easy to configure the raid (from the view of mdadm) and its components from the information you get from the superblocks?

This means, if you have a raid array containing 3 disks sda, sdb and sdc marked with numbers ranging from 0 to 2 (in the superblocks), then if this array comes up as sdg (formerly sdc, #2), sdh (sda, #0), sdi (sdb, #1), it would be easy to automatically remap it, wouldn't it?


Re: mdadm (30 December 2009, 11:30 UTC)

Yes, the statement is correct. But your perspective is also correct.

The statement was made in the context that the superblock had been destroyed by an incorrect --create. Your perspective assumes that the superblocks are still valid.



Re: mdadm (10 May 2010, 19:50 UTC)

I tried e-mailing the address you mentioned above but got a permanent failure message. If there is a better place to pose this question, please feel free to redirect me.



After years of trial and error I am finally transferring my entire RAID to linux as a software RAID. I have wanted to do it for some time, but just did not have the linux expertise to get it done. My RAID 6 is building as I write this, but this is one feature I have not been able to locate. Most of the hardware RAID I have used in the past had something called "Dynamic Sector Repair" where it would do a low impact check through the raid to confirm and dynamically repair any damaged sectors or data. This is an amazing capability since it drastically reduces the likelihood of a catastrophic failure during a drive rebuild or some similar action. Does mdadm have this capability at all? Would it be possible to add it? If not, are you aware of any apps that could?

If this is written in the documentation and I just missed it I apologize. My understanding of linux is still on the low end. Thank you for your time!


Re: mdadm (07 February 2011, 14:36 UTC)

First off, thank you so much for the MD software. It is fantastic stuff and has saved my bacon many a time.

I have a minor question. On one of my MD RAIDs (level 6) I had a drive failure and replaced the failed 1TB drive with a 2TB drive until I could get a new 1TB drive. When I got the new drive I did a --fail, --remove on the 2TB drive and --add on the new 1TB drive. Of course that means that for the duration of the rebuild, the system was running at reduced parity. Is there any way to tell the MD system that I want to replace a drive so it should copy the contents of one to the other and mirror writes to both, then once complete, fail out the original drive?
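
For reference, the replacement sequence described here looks something like this (a sketch with placeholder device names):

mdadm /dev/md0 --fail /dev/sdX --remove /dev/sdX   # take the stand-in 2TB drive out of the array
mdadm /dev/md0 --add /dev/sdY                      # add the new 1TB drive; the rebuild runs at reduced parity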

Thanks, -ben


Re: mdadm (09 February 2011, 22:08 UTC)

Sorry, but no. It is a feature that is often requested but hasn't been implemented yet.


Re: mdadm (26 March 2011, 18:55 UTC)
I assume that dd if=/dev/olddrive of=/dev/newdrive doesn't do the job then?


Re: mdadm (26 March 2011, 19:10 UTC)

Device by ID

My biggest problems with mdadm revolve around the fact that its point of reference for drives is the controller they are attached to instead of the actual disks.

i.e. when I create an array, you end up saying "create me a raid5 array using the disks on controllers /dev/sdc1, /dev/sdd1, /dev/sde1", which doesn't seem a problem.

However, if I then add another disk to the system it can sometimes bump all the device names, so /dev/sdc1 becomes /dev/sdd1.

It would be much safer to say create my array using /dev/devicebyid/harddisk1,/dev/devicebyid/harddisk2,/dev/devicebyid/harddisk3.

Which means I can then hang the disks off any controller, and the system will cope and reassemble the arrays on boot should a new drive be added.

Does this make any sense ?

I have tried creating arrays using device-by-id device names, and although the device is created successfully, the array is not reassembled on reboot, even with the device names in mdadm.conf. Further investigation using mdadm -vv --assemble --scan shows that when assembling, mdadm uses only controller names, so you get lots of errors that say something like /dev/sda1 does not match /dev/devicebyid/harddisk1.

If you rewrite mdadm.conf to use controller device names then the array is successfully assembled.






Re: mdadm (26 March 2011, 22:00 UTC)

It is best not to use device names in mdadm.conf at all.

Just identify the array by uuid (i.e. use the output of "mdadm -Db /dev/md0" in mdadm.conf). Then mdadm will hunt through all attached devices to find the ones with the given uuid and use those devices.
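
For illustration only (the UUID below is a placeholder; take the real ARRAY line from "mdadm -Db /dev/md0"), such an mdadm.conf needs little more than:

DEVICE partitions
ARRAY /dev/md0 UUID=01234567:89abcdef:01234567:89abcdef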

If that isn't clear enough - the best thing to do is post to linux-raid@vger.kernel.org (you don't need to subscribe first) with lots of details about what you are doing and how it doesn't work.



Re: mdadm (27 March 2011, 01:10 UTC)

In my experience with troubleshooting, it is wise to copy-paste the original mdadm --create command as a comment into the mdadm.conf file as well; it can sometimes be practical for recovery.

For example, in the case of trying to recover a (partial) loss of 2 drives out of a raid-5 array (by recreating it): if, in the recreation process, a different chunk size is used, it's a no-go...
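
Something like this at the top of mdadm.conf would do (a sketch with made-up devices and parameters; mdadm.conf treats lines starting with '#' as comments):

# created with: mdadm --create /dev/md0 --level=5 --chunk=64 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1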

The best way to get used to recovery of a lost raid array is to use 3 or more USB keys (simulating a failed drive can easily be done by unplugging one :-D). Getting experience here is advised prior to using this in production... especially because it works very well, as long as you don't mess it up ;-D

Greetings, Jasper

