My raid10 module has recently appeared in Linus' source tree and should be in the next release of 2.6 - 2.6.9.
I started writing this module about 3 years ago, hit a hiccup, and took 2 and a half years to get back to it. There was a difficulty getting the resync code to work sensibly. As often happens, I didn't look for an easy way out that maybe wasn't so complete, but tried to find a completely "right" solution, and ended up with none. "Perfect" was the enemy of "good" once again.
But I finally got back to it earlier this year and got most of it written in a couple of days, most of it working in another couple of days a few weeks later, and the final bugs out about two weeks ago.
RAID10 is a raid module that combines features of both raid0 (striping) and raid1 (mirroring or plexing). There are multiple copies of all data blocks, and they are arranged on multiple drives following a striping discipline. It is similar in some ways to what some people call "RAID 0+1" which is a raid0 array built over a collection of raid1 arrays.
There are two distinct layouts that md/raid10 can use, which I call "near" and "far". The layouts can actually be combined if you want 4 or more copies of all data, but that is not likely to be useful.
With the near layout, copies of any one block of data are, not surprisingly, near each other. They will often be at the same address on different devices, though possibly some copies will be one chunk further into the device (if, e.g., you have an odd number of devices and 2 copies of each data block).
This yields read and write performance similar to raid0 over half the number of drives.
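The near placement described above can be sketched as a small model. This is purely illustrative and not the kernel's actual code: each logical chunk is written `copies` times into consecutive slots, and the slots wrap round-robin across the drives.

```python
def near_layout(chunk, copies, disks):
    """Return a list of (device, chunk_offset) pairs, one per copy of
    the given logical chunk, under a simplified "near" layout model:
    copies occupy consecutive slots, filled row by row across devices."""
    slots = [chunk * copies + c for c in range(copies)]
    return [(s % disks, s // disks) for s in slots]

# With 3 disks and 2 copies (an odd number of devices), chunk 1's
# second copy lands one chunk further into its device:
# near_layout(1, 2, 3) -> [(2, 0), (0, 1)]
```

With an even number of disks (e.g. 4 disks, 2 copies) every copy of a chunk sits at the same offset, matching the "same address on different devices" case above.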
The "far" layout lays all the data out in a raid0 like arrangement over the first half of all drives, and then a second copy in a similar layout over the second half of all drives - making sure that all copies of a block are on different drives.
This would be expected to yield read performance which is similar to raid0 over the full number of drives, but write performance that is substantially poorer as there will be more seeking of the drive heads.
I don't have any useful performance measurements yet.
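The far layout can be modelled in the same illustrative way. In this sketch (again, a simplification rather than the kernel's actual code), each copy is a raid0-style stripe over its own zone of the drives, with the device rotated by one per copy so that no two copies of a block share a drive.

```python
def far_layout(chunk, copies, disks, disk_chunks):
    """Return a list of (device, chunk_offset) pairs, one per copy of
    the given logical chunk, under a simplified "far" layout model:
    copy j is striped raid0-style over zone j (each zone being
    disk_chunks // copies chunks of every drive), with the device
    rotated by j so each copy lands on a different drive."""
    zone = disk_chunks // copies
    placements = []
    for j in range(copies):
        device = (chunk + j) % disks
        offset = j * zone + chunk // disks
        placements.append((device, offset))
    return placements

# 4 disks, 2 copies, 100 chunks per disk: the second copy of chunk 0
# sits in the second half of the next drive:
# far_layout(0, 2, 4, 100) -> [(0, 0), (1, 50)]
```

Reads of either copy stripe across all the drives, which is why read performance resembles raid0 over the full set, while a write must touch both halves of the disks, hence the extra seeking.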
The main differences between raid10 and raid0+1 configurations are:
- management is easier, as it is one complete array rather than a combination of multiple arrays. In particular, hot spare management is more straightforward
- the "far" data layout is easier to obtain
- it is simple to have a raid10 with an odd number of drives and still have two copies of each data block
You need at least version 1.7.0 of mdadm to work with raid10 arrays.
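Creating such an array looks something like the following. The device names here are only examples; the layout argument selects "near" or "far" and the number of copies:

```shell
# Create a 4-drive raid10 array with 2 "near" copies of each block.
# Use --layout=f2 instead for the "far" layout with 2 copies.
mdadm --create /dev/md0 --level=raid10 --raid-devices=4 \
      --layout=n2 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```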