Hardware: AMD X2 3600, 3 GB RAM, Gigabyte mainboard.
Disks: 1 OS disk and 8 SATA/ATA disks of 250 and 300 GB.

Test: dd and IOzone on ZFS (stripe, raidz, raid10, raid50) and SVM (raid5, stripe).

I might also test the Veritas filesystem, which is supposed to be quite good.

The 8 disks are all roughly the same type and speed, somewhere between 50 and 60 MB/s read/write. Each disk has a 200GB partition at the beginning of the disk.

Solaris 10, updated to the latest version as of January 2008.

First: results with 8 disks.

Creating the ZFS pools takes a few seconds; creating the 8-disk SVM raid5 took 1.5 hours, and an 8-disk raid10 on Linux took 2.5 hours.

When results seemed strange, like the Linux raid10 read performance, the test was run again to verify the result.

dd test: running with 4 or 8 GB doesn't make much difference. The Linux MD dd runs were done with 8 GB.

           ZFS raidz stripe8  raid10    raid50    SVM raid5 stripe   LVM-i8   MD0      MD10
dd (w/r):  92.0/150  132/169  71.5/163  73.8/137  14.3/86.6 148/112  183/137  188/259  159/90.8

iozone -R -r16M -s 8g -i 0 -i 1 -i 2 -f /mnt/foo
iozone -R -r16M -s 16g -i 0 -i 1 -i 2 -f /mnt/foo
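# (-R: excel-style report, -r: record size, -s: file size, -i 0/1/2: write/rewrite, read/reread, random read/write, -f: test file)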

Note on Linux write performance: I suspect it does something nasty with write caching.
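
One way to check this (a hedged sketch; oflag=direct/iflag=direct and drop_caches bypass or empty the page cache, the file name is just the test file used above):
dd if=/dev/zero of=/mnt/foo bs=1024k count=4000 oflag=direct conv=fsync
sync; echo 3 > /proc/sys/vm/drop_caches    # flush dirty pages and drop the page cache before the read pass
dd if=/mnt/foo of=/dev/null bs=1024k iflag=direct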

          DD MB/s|  IOzone-16M-8GB (MB/s)  random random
       write read| write rewr read rerd read write 
raidz:    73 144 |   74   63   144  144   43   59 Freebsd (unstable)
vin5:     20  43 |    ?    ?     ?    ?    ?    ? FreeBSD vinum (unstable) 
raidz:    92 150 |   95   91   143  141   57  108 Solaris10 ZFS raidz
SVMr5:    14  87 |   13   15    63   66   74   13 Solaris10 SVM raid5 (1.5 hours to initialize)
vrts5:    35 163 |   38   39   178  178  152   39 Solaris veritas4, raid5 8G, log on data-disk
vrts5:    35 163 |   38   39   178  178  152   39 Solaris veritas4, raid5 8G, no log
vrts5-7d: 37 169 |   41   42   162  162  151   40 Solaris veritas4, raid5 8G 7 disks, log on disk 8
lxmd5:   133 165 |  144  119   154  158  151  106 Fedora7 MD raid5 (4 hours to initialize)
lxmd-16G 130 164 |  141  115   156  156  129   98 Fedora7 MD raid5, with 16GB file instead of 8GB
vrts5:    32 120 |   38   38   150  149  120   36 CentOS4.6, veritas 4, raid5

zfs0:    155 236 |  183  175   225  226   60  162 FreeBSD7rc1 zfs (16G)
vin0:    177  61 |  193  186    61   61   71  181 FreeBSD vinum stripe, no softupdates.
vrtstr:  263 172 |  305  310   185  183  162  307 Solaris10, veritas 4, stripe 8 disks. (8G and 16G, identical results)
zfs0:    132 169 |  139  141   175  202   57  139 Solaris10 ZFS raid0
SVMr0:   148 112 |  136  161    86   83   73  164 Solaris10 SVM raid0
lxmd0:   188 259 |  202  135   246  248  197  142 Fedora7 MD raid0
lxlvmi8: 183 137 |  192  175    85   85  114  167 Fedora7 lvm -i "raid0"
vrtstr:  270 236 |  301  304   187  182  169  303 CentOS4.6, veritas 4, 8 disk stripe

zfs10:   100 201 |  107  110   199  201   56  102 Freebsd7rc1 (8G)
vin10:   131  31 |  137  136    31   31   42  129 Freebsd7rc1 (8G) with 2 stripes of 4 200g partitions, as per handbook
zfs10:    72 163 |  108  100   175  218   57  104 Solaris10 ZFS raid10
vrts10:  259 171 |  304  308   183  183  163  305 Solaris veritas4, striped mirror 8G
lxmd10:  159  91 |  172  103    96   97  111  100 Fedora7 MD raid10 (2.5 hours to initialize)
vrtstr:  120 148 |  114  108   146  141  127  111 CentOS4.6, veritas 4, striped mirror

1-dsk vxf 62  63 |   59   60    60   60   58   58 Solaris10 veritas4, vxfs on single disk
1-dsk zfs 48  21 |   50   47    20   20    7   47 Solaris10 zfs on single disk (ran twice to verify)
1-dsk ufs 53  61 |   53   56    73   74   72   54 Solaris10 standard ufs on single disk

zfs: 
+ nice and simple to configure
+ no waiting for initialization
- can't resize raidz's
- can't remove disks (see the sketch after this list)
- performance is so-so
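
To make the raidz limitations concrete, a hedged sketch (the extra device names are made up, and as far as I can tell "zpool remove" on this release only works for hot spares):
zpool add mypool raidz c7d0s0 c7d1s0 c8d0s0 c8d1s0   # growing means adding a whole new raidz vdev
zpool remove mypool c0d0s0                           # refused: you can't take a disk back out of a raidz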

veritas:
- doesn't ship with OS
- quite big and complex
- version 5 wasn't stable
+ very flexible
+ filesystems/volumes can be modified in any possible way
+ very high performance

SVM:
+ ships with Solaris
- raid5 takes 1.5 hours to initialize
- raid5 write performance is very poor

configuration:
solaris:
D0=c0d0s0; D1=c0d1s0; D2=c1d1s0; D3=c2d0s0
D4=c3d0s0; D5=c4d0s0; D6=c5d0s0; D7=c6d0s0

freebsd:
D0=ad1s1; D1=ad2s1; D2=ad3s1; D3=ad6s1
D4=ad8s1; D5=ad10s1; D6=ad12s1; D7=ad14s1

zfs:
raidz: zpool create mypool raidz $D0 $D1 $D2 $D3 $D4 $D5 $D6 $D7
raid10: zpool create mypool  mirror $D0 $D1 mirror $D2 $D3 mirror $D4 $D5 mirror $D6 $D7
stripe8: zpool create mypool $D0 $D1 $D2 $D3 $D4 $D5 $D6 $D7
raid50: zpool create mypool raidz $D0 $D1 $D2 $D6 raidz $D3 $D4 $D5 $D7   # two 4-disk raidz vdevs, striped

SVM:
raid5:
# metadb -a -f c0d0s0
# the next one takes about 1.5 hours...
# metainit d0 -r c0d0s0 c0d1s0 c1d1s0 c2d0s0 c3d0s0 c4d0s0 c5d0s0 c6d0s0
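# metastat d0   # (aside, hedged, not part of the original run) metastat should show the raid5 init progress while the metainit runs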
# newfs -v /dev/md/rdsk/d0
# mount /dev/md/dsk/d0 /mnt
metaclear -a
stripe8: (1 stripe, 8 slices)
metainit d0 1 8 c0d0s0 c0d1s0 c1d1s0 c2d0s0 c3d0s0 c4d0s0 c5d0s0 c6d0s0

raid10: 4 mirror pairs
metainit d0 1 1 $D0
metainit d1 1 1 $D1
metainit d2 1 1 $D2
metainit d3 1 1 $D3
metainit d4 1 1 $D4
metainit d5 1 1 $D5
metainit d6 1 1 $D6
metainit d7 1 1 $D7
metainit d10 -m d0 #d10 is a 1 disk mirror consisting of d0
metainit d12 -m d2
metainit d14 -m d4
metainit d16 -m d6
metattach d10 d1 # add d1 to d10 as a mirror slave
metattach d12 d3
metattach d14 d5
metattach d16 d7
metainit d100 1 4 d10 d12 d14 d16 #d100 is a stripe of 4 slices
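
As with the raid5 metadevice above, d100 would then be newfs'd and mounted (a sketch following the same pattern):
newfs /dev/md/rdsk/d100
mount /dev/md/dsk/d100 /mnt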

linux:  
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
vgcreate vgtest /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
lvcreate -n lvtest vgtest -L 381480 -i 8
mke2fs -j /dev/vgtest/lvtest
mount /dev/vgtest/lvtest /mnt

umount /mnt
lvremove vgtest/lvtest
vgremove vgtest
pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
mdadm --create /dev/md0  --level=0 --raid-devices=8 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
mdadm --detail /dev/md0
mke2fs -j /dev/md0
mount /dev/md0 /mnt
dd if=/dev/zero of=/mnt/foo bs=1024k count=4000 conv=fsync
umount /mnt;mount /dev/md0 /mnt
dd if=/mnt/foo of=/dev/null bs=1024k
iozone -R -r16M -s 8g -i 0 -i 1 -i 2 -f /mnt/foo
umount /mnt
mdadm --stop /dev/md0

mdadm --create /dev/md0  --level=10 --raid-devices=8 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
mke2fs -j /dev/md0
# array is synchronizing in the background... started around 11:15, ready around 13:45 = 2.5 hours

mdadm --create /dev/md0  --level=5  --raid-devices=8 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
#started at 14:18; don't worry about the spare disk it shows, that is just part of the initialization process. Took until about 18:00 (roughly 4 hours).
# watch cat /proc/mdstat

freebsd:
zpool create mypool ad1s1 ad2s1 ad3s1 ad6s1 ad8s1 ad10s1 ad12s1 ad14s1
zfs create mypool/vol1
dd if=/dev/zero of=/mypool/vol1/foo bs=1M count=8000 conv=sync   # note: conv=sync only pads short blocks; the GNU dd runs below use conv=fsync to flush
zfs umount mypool/vol1; zfs mount mypool/vol1
dd of=/dev/null if=/mypool/vol1/foo bs=1M

#zpool create mypool raidz ad1s1 ad2s1 ad3s1 ad6s1 ad8s1 ad10s1 ad12s1 ad14s1
zpool create mypool raidz1 ad1 ad2 ad3 ad6 ad8 ad10 ad12 ad14
zfs create mypool/vol1
dd if=/dev/zero of=/mypool/vol1/foo bs=1M count=8000 conv=sync
zfs umount mypool/vol1; zfs mount mypool/vol1
dd of=/dev/null if=/mypool/vol1/foo bs=1M

zpool create mypool mirror ad1 ad2 mirror ad3 ad6 mirror ad8 ad10 mirror ad12 ad14
zfs create mypool/vol1
dd if=/dev/zero of=/mypool/vol1/foo bs=1M count=8000 conv=sync
#8388608000 bytes transferred in 79.738018 secs (105202114 bytes/sec)
zfs umount mypool/vol1; zfs mount mypool/vol1
dd of=/dev/null if=/mypool/vol1/foo bs=1M

freebsd vinum:

gvinum create 

drive a device ad1
drive b device ad2
drive c device ad3
drive d device ad6
drive e device ad8
drive f device ad10
drive g device ad12
drive h device ad14

volume stripe
plex org striped 512k
 sd length 200g drive a
 sd length 200g drive b
 sd length 200g drive c
 sd length 200g drive d
 sd length 200g drive e
 sd length 200g drive f
 sd length 200g drive g
 sd length 200g drive h

newfs /dev/gvinum/stripe
mount /dev/gvinum/stripe /mnt
dd if=/dev/zero of=/mnt/foo bs=1M count=8000 conv=sync
umount /mnt; mount /dev/gvinum/stripe /mnt
dd of=/dev/null if=/mnt/foo bs=1M

tunefs -n enable /dev/gvinum/stripe   # enable soft updates (run on the unmounted filesystem)

iostat ad1 ad2 ad3 ad6 ad8 ad10 ad12 ad14 1

gvinum create

drive a device ad1
drive b device ad2
drive c device ad3
drive d device ad6
drive e device ad8
drive f device ad10
drive g device ad12
drive h device ad14

volume raid10
#  note: a layout with 8 half-disks in each plex is really slow
plex org striped 512k
 sd length 200g drive a
 sd length 200g drive b
 sd length 200g drive c
 sd length 200g drive d
plex org striped 512k
 sd length 200g drive e
 sd length 200g drive f
 sd length 200g drive g
 sd length 200g drive h
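
Presumably the raid10 volume was then tested the same way as the stripe (a sketch following the pattern above):
newfs /dev/gvinum/raid10
mount /dev/gvinum/raid10 /mnt
dd if=/dev/zero of=/mnt/foo bs=1M count=8000 conv=sync
umount /mnt; mount /dev/gvinum/raid10 /mnt
dd of=/dev/null if=/mnt/foo bs=1M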


gvinum create

drive a device ad1
drive b device ad2
drive c device ad3
drive d device ad6
drive e device ad8
drive f device ad10
drive g device ad12
drive h device ad14

volume raid5
plex org raid5 512k
 sd length 200g drive a
 sd length 200g drive b
 sd length 200g drive c
 sd length 200g drive d
 sd length 200g drive e
 sd length 200g drive f
 sd length 200g drive g
 sd length 200g drive h
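
And the raid5 volume the same way (a sketch; the raid5 plex may need to finish initializing, check "gvinum list", before newfs):
newfs /dev/gvinum/raid5
mount /dev/gvinum/raid5 /mnt
dd if=/dev/zero of=/mnt/foo bs=1M count=8000 conv=sync
umount /mnt; mount /dev/gvinum/raid5 /mnt
dd of=/dev/null if=/mnt/foo bs=1M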



test commands: 

dd:
/usr/local/bin/dd if=/dev/zero of=/mypool/vol1/foo bs=1024k count=4000 conv=fsync
/usr/local/bin/dd if=/dev/zero of=/mnt/foo bs=1024k count=4000 conv=fsync
/usr/local/bin/dd if=/mnt/foo of=/dev/null bs=1024k conv=fsync

After the write dd: sleep, unmount, remount, sleep, then run the read dd.
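
Spelled out for the /mnt-based setups (a sketch; the sleep length is arbitrary and the device name varies per setup):
/usr/local/bin/dd if=/dev/zero of=/mnt/foo bs=1024k count=4000 conv=fsync
sleep 30
umount /mnt; mount /dev/md0 /mnt    # remount so the read dd doesn't hit cached data
sleep 30
/usr/local/bin/dd if=/mnt/foo of=/dev/null bs=1024k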

iozone -R -r16M -s 8g -i 0 -i 1 -i 2 -f /mypool/vol1/foo

Veritas notes: I couldn't get Veritas 5 working on Solaris 10; it kept crashing the system. I downgraded to version 4 and had no more problems. Veritas is actually kind of cool: you can change the layout of your volume on the fly, online, from raid5 to stripe to mirror, whatever you want.
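
For example (a sketch from memory of the vxassist syntax; the disk group and volume names are placeholders):
vxassist -g testdg relayout testvol layout=stripe ncol=8   # convert the volume, e.g. from raid5, to an 8-column stripe while it stays online
vxrelayout -g testdg status testvol                        # watch the relayout progress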

actual results (raw IOzone output: KB and reclen columns in KB, throughput in KB/s):
            DD MB/s                                                  random  random
        write read     KB  reclen   write rewrite    read    reread    read   write
raidz:     92 150 8388608   16384   97099   93598   146704   144393   58573  110302
SVMr5:     14  87 8388608   16384   13451   15220    64856    67210   75733   13467
lxmd5:

zfs0:     132 169 8388608   16384  142652  144664   178769   206688   58522  142501
SVMr0:    148 112 8388608   16384  139245  165368    87621    84610   75062  167966
lxmd0:    188 259 8388608   16384  206662  138024   251398   253499  201266  144970
lxlvmi8:  183 137 8388608   16384  197144  179597    86744    86990  116836  171285

zfs10:     72 163 8388608   16384  110972  101984   179740   223007   57987  106952
lxmd10:   159  91 8388608   16384  176241  105471    98059    99237  113373  102535