User:River/ms6 vxvm
I am testing Veritas Storage Foundation on ms6 as an alternative to ZFS. I will install 5.0MP3RP2 and use filebench to test various configurations:
- straight mirror
- RAID-5, various stripe widths
After benchmarking Vx I will recreate the pool with ZFS and run the same benchmarks to compare performance.
If benchmarking produces reasonable results, I will test VxFS's File Change Log feature for replication between image servers.
Installing VX
- cd /root/sf_50mp3_x86
- ./installer
add patches:
127336 02 < 03 --- 138 VM 5.0_MP3RP2_x86: Rolling Patch 02 for VM 5.0MP3Sun5.10_x86
127337 02 < 03 --- 138 VRTSvxfs 5.0MP3RP2_x86: Maintenance Patch for File System 5.0-Sun5
127362 01 < 03 --- 138 VRTSddlpr 5.0MP3RP2_x86: Rolling Patch 02 for VRTSddlpr 5.0 MP3
139355 -- < 01 --- 346 VRTSvmman 5.0MP3RP1: Rolling Patch 01 for Volume Manager 5.0MP3_x8
139738 -- < 01 --- 346 VRTSdcli 5.0MP3RP1_x86: Rolling Patch 01 for VRTSdcli 5.0MP3
139740 -- < 01 --- 346 VRTSvmpro 5.0MP3RP1_x86: Rolling Patch 01 for VRTSvmpro 5.0
139746 -- < 02 --- 138 VRTSobc33_x86 5.0MP3RP2: Maintenance Patch for VEA Server
139745 -- < 02 --- 138 VRTSob_x86 5.0MP3RP2: Maintenance Patch for VEA Server
139747 -- < 01 --- 346 VRTSaa_x86 5.0MP3RP1: Maintenance Patch for VRTSaa
139748 -- < 01 --- 346 VRTSccg_x86 5.0MP3RP1: Maintenance Patch for VRTSccg
140658 -- < 01 --- 137 VRTSdsa 5.0MP3RP2_x86: Maintenance Patch for VRTSdsa 5.0
140662 -- < 01 --- 138 VRTSobgui_x86 5.0MP3RP2: Maintenance Patch for VEA GUI
141280 -- < 01 --- 138 VRTSmapro 5.0MP3RP2_x86: Rolling Patch for Solaris 10
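The listing above can be applied with a patchadd loop. This is only a sketch: it echoes the commands rather than running them, the target revision is assumed to be the one to the right of the `<` column, and the spool path /var/spool/patch is an assumption about where the patch files were unpacked. Pipe the output to sh to apply for real.

```shell
# Dry run: echo a patchadd command for each patch in the listing above.
# Revisions and the /var/spool/patch path are assumptions; verify both
# against the actual patch bundle before piping this to sh.
patches="127336-03 127337-03 127362-03 139355-01 139738-01 139740-01
139746-02 139745-02 139747-01 139748-01 140658-01 140662-01 141280-01"
for p in $patches; do
    echo "patchadd /var/spool/patch/$p"
done
```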
Creating a RAID50 volume
We will create a single RAID-50 volume over all disks.
Create five data subdisks and two log subdisks for each leg:
root@ms6:~# vxmake sd test-plex1-disk1 disks01,0,976691152
root@ms6:~# vxmake sd test-plex1-disk2 disks02,0,976691152
root@ms6:~# vxmake sd test-plex1-disk3 disks03,0,976691152
root@ms6:~# vxmake sd test-plex1-disk4 disks04,0,976691152
root@ms6:~# vxmake sd test-plex1-disk5 disks05,0,976691152
root@ms6:~# vxmake sd test-plex1-log1 disks45,0,4000
root@ms6:~# vxmake sd test-plex1-log2 disks46,0,4000
root@ms6:~# vxmake sd test-plex2-disk1 disks06,0,976691152
root@ms6:~# vxmake sd test-plex2-disk2 disks07,0,976691152
root@ms6:~# vxmake sd test-plex2-disk3 disks08,0,976691152
root@ms6:~# vxmake sd test-plex2-disk4 disks09,0,976691152
root@ms6:~# vxmake sd test-plex2-disk5 disks10,0,976691152
root@ms6:~# vxmake sd test-plex2-log1 disks45,4000,4000
root@ms6:~# vxmake sd test-plex2-log2 disks46,4000,4000
root@ms6:~# vxmake sd test-plex3-disk1 disks11,0,976691152
root@ms6:~# vxmake sd test-plex3-disk2 disks12,0,976691152
root@ms6:~# vxmake sd test-plex3-disk3 disks13,0,976691152
root@ms6:~# vxmake sd test-plex3-disk4 disks14,0,976691152
root@ms6:~# vxmake sd test-plex3-disk5 disks15,0,976691152
root@ms6:~# vxmake sd test-plex3-log1 disks45,8000,4000
root@ms6:~# vxmake sd test-plex3-log2 disks46,8000,4000
root@ms6:~# vxmake sd test-plex4-disk1 disks16,0,976691152
root@ms6:~# vxmake sd test-plex4-disk2 disks17,0,976691152
root@ms6:~# vxmake sd test-plex4-disk3 disks18,0,976691152
root@ms6:~# vxmake sd test-plex4-disk4 disks19,0,976691152
root@ms6:~# vxmake sd test-plex4-disk5 disks20,0,976691152
root@ms6:~# vxmake sd test-plex4-log1 disks45,12000,4000
root@ms6:~# vxmake sd test-plex4-log2 disks46,12000,4000
root@ms6:~# vxmake sd test-plex5-disk1 disks21,0,976691152
root@ms6:~# vxmake sd test-plex5-disk2 disks22,0,976691152
root@ms6:~# vxmake sd test-plex5-disk3 disks23,0,976691152
root@ms6:~# vxmake sd test-plex5-disk4 disks24,0,976691152
root@ms6:~# vxmake sd test-plex5-disk5 disks25,0,976691152
root@ms6:~# vxmake sd test-plex5-log1 disks45,16000,4000
root@ms6:~# vxmake sd test-plex5-log2 disks46,16000,4000
root@ms6:~# vxmake sd test-plex6-disk1 disks26,0,976691152
root@ms6:~# vxmake sd test-plex6-disk2 disks27,0,976691152
root@ms6:~# vxmake sd test-plex6-disk3 disks28,0,976691152
root@ms6:~# vxmake sd test-plex6-disk4 disks29,0,976691152
root@ms6:~# vxmake sd test-plex6-disk5 disks30,0,976691152
root@ms6:~# vxmake sd test-plex6-log1 disks45,20000,4000
root@ms6:~# vxmake sd test-plex6-log2 disks46,20000,4000
root@ms6:~# vxmake sd test-plex7-disk1 disks31,0,976691152
root@ms6:~# vxmake sd test-plex7-disk2 disks32,0,976691152
root@ms6:~# vxmake sd test-plex7-disk3 disks33,0,976691152
root@ms6:~# vxmake sd test-plex7-disk4 disks34,0,976691152
root@ms6:~# vxmake sd test-plex7-disk5 disks35,0,976691152
root@ms6:~# vxmake sd test-plex7-log1 disks45,24000,4000
root@ms6:~# vxmake sd test-plex7-log2 disks46,24000,4000
root@ms6:~# vxmake sd test-plex8-disk1 disks36,0,976691152
root@ms6:~# vxmake sd test-plex8-disk2 disks37,0,976691152
root@ms6:~# vxmake sd test-plex8-disk3 disks38,0,976691152
root@ms6:~# vxmake sd test-plex8-disk4 disks39,0,976691152
root@ms6:~# vxmake sd test-plex8-disk5 disks40,0,976691152
root@ms6:~# vxmake sd test-plex8-log1 disks45,28000,4000
root@ms6:~# vxmake sd test-plex8-log2 disks46,28000,4000
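The subdisk creation above follows a regular pattern (8 legs, 5 data disks each, logs packed onto disks45/46 at 4000-sector intervals), so it can be generated with a small script instead of typed by hand. This is a dry run that only echoes the same vxmake commands; pipe the output to sh to actually create the subdisks.

```shell
# Generate the 56 vxmake sd commands above (8 legs x (5 data + 2 log)
# subdisks). Echoes the commands only; pipe to sh to execute them.
gen_sds() {
    disk=1
    plex=1
    while [ "$plex" -le 8 ]; do
        d=1
        while [ "$d" -le 5 ]; do
            printf 'vxmake sd test-plex%d-disk%d disks%02d,0,976691152\n' \
                "$plex" "$d" "$disk"
            disk=$((disk + 1))
            d=$((d + 1))
        done
        # Log subdisks for each leg are stacked on disks45/46 at
        # consecutive 4000-sector offsets.
        off=$(( (plex - 1) * 4000 ))
        printf 'vxmake sd test-plex%d-log1 disks45,%d,4000\n' "$plex" "$off"
        printf 'vxmake sd test-plex%d-log2 disks46,%d,4000\n' "$plex" "$off"
        plex=$((plex + 1))
    done
}
gen_sds
```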
assemble the subdisks into RAID-5 plexes:
root@ms6:~# for plex in {1..5}; do \
vxmake plex test-plex$plex layout=raid5 \
sd=test-plex$plex-disk1,test-plex$plex-disk2,test-plex$plex-disk3,test-plex$plex-disk4,test-plex$plex-disk5 \
stwidth=64; \
done
create simple (concat) plexes from each log subdisk:
root@ms6:/# for plex in {1..5}; \
do vxmake plex test-vol$plex-log1 sd=test-plex$plex-log1; \
vxmake plex test-vol$plex-log2 sd=test-plex$plex-log2; \
done
create five RAID-5 volumes, each from one RAID-5 plex and its two log plexes; start each volume, then convert them into layered volumes:
root@ms6:/# for vol in {1..5}; do \
vxmake -Uraid5 vol test-vol$vol plex=test-plex$vol,test-vol$vol-log1,test-vol$vol-log2; \
done
root@ms6:/# for vol in {1..5}; do vxvol start test-vol$vol; done
root@ms6:/# for vol in {1..5}; do vxedit set layered=on test-vol$vol; done
create a single subdisk on each volume:
root@ms6:/# for vol in {1..5}; do \
vxmake sd test-sd$vol test-vol$vol,0,3906764544; \
done
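The subdisk length 3906764544 is not arbitrary: each 5-column RAID-5 leg exposes 4 columns of data, and the usable length appears to be 4 × 976691152 rounded down to a whole number of full data stripes (4 columns × the 64-sector stripe unit). That rounding rule is an inference from the numbers, but the arithmetic checks out:

```shell
# Check where 3906764544 comes from (assumption: VxVM rounds the RAID-5
# volume length down to a whole number of full data stripes).
disk_len=976691152            # length of each data subdisk, in sectors
ncol=5                        # RAID-5 columns per leg
stwidth=64                    # stripe unit, in sectors
full_stripe=$(( (ncol - 1) * stwidth ))   # data sectors per full stripe
raw=$(( (ncol - 1) * disk_len ))          # raw capacity: 3906764608
usable=$(( raw - raw % full_stripe ))
echo "$usable"                # 3906764544, the length used above
```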
use the subdisks to build the RAID-50 volume: stripe a plex over the five RAID-5 volumes, then create the top-level volume on that plex:
root@ms6:/# vxmake plex test-plex layout=stripe ncolumns=5 stwidth=128 sd=test-sd1,test-sd2,test-sd3,test-sd4,test-sd5
root@ms6:/# vxmake -Ufsgen vol test plex=test-plex
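For reference, the finished RAID-50 volume stripes five subdisks of 3906764544 sectors each, so (assuming 512-byte sectors) the total should come out at roughly 9 TiB:

```shell
# Total size of the RAID-50 volume: 5 striped subdisks of equal length,
# 512-byte sectors assumed.
sd_len=3906764544
sectors=$(( 5 * sd_len ))
bytes=$(( sectors * 512 ))
tib=$(( bytes / 1024 / 1024 / 1024 / 1024 ))
echo "$sectors sectors, ~$tib TiB"
```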
Performance
In the configuration above, write performance was limited entirely by the speed of the two log disks (~70MB/s), with the other disks remaining mostly idle. This is no good for our synchronous NFS workload, so I recreated the RAID with a log volume striped over all disks:
root@ms6:~# vxassist make test-logvol 1g layout=stripe-mirror ncols=23
With a single raid5 plex for now: