RAID 10 vs. RAID 5 Performance Comparison
Three file systems:

- /u01: HDD 4 TB × 4, RAID 10, RAID cache disabled
- /u02: HDD 4 TB × 4, RAID 5, RAID cache disabled
- /u03: SSD 447 GB × 1

Test command (output captured to a timestamped log):

```shell
./testdd.sh /u01 /u02 /u03 > testdd.log.$(date +%Y%m%d%H%M) 2>&1
```

Results:

```
vgraid10_local-lv01  7.3T  100G  7.2T   2%  /u01
  direct write: 38.7 MB/s    direct read: 151 MB/s
  cached write: 328 MB/s     cached read: 511 MB/s

vgraid5_local-lv01    11T   88G   11T   1%  /u02
  direct write: 7.4 MB/s     direct read: 128 MB/s
  cached write: 77.6 MB/s    cached read: 700 MB/s

vgssd_local-lv01     447G   65G  382G  15%  /u03
  direct write: 147 MB/s     direct read: 189 MB/s
  cached write: 387 MB/s     cached read: 512 MB/s
```

Conclusion: with no cache (e.g. the RAID controller cache disabled, the controller set to Write-Through mode, or a controller with no cache at all), on a four-disk configuration RAID 10's direct-write performance (38.7 MB/s) beats RAID 5's (7.4 MB/s) across the board and by a wide margin.

2. Verifying the RAID cache mode

In the tests above, /u01 and /u02 were in no-cache mode:

```
[root@host2 ~]# lsblk
NAME                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdc                       8:32   0   7.3T  0 disk
└─vgraid10_local-lv01   253:3    0   7.3T  0 lvm  /u01
sdd                       8:48   0  10.9T  0 disk
└─vgraid5_local-lv01    253:4    0  10.9T  0 lvm  /u02
sdb                       8:16   0 447.1G  0 disk
└─vgssd_local-lv01      253:5    0   447G  0 lvm  /u03
```

Running `arcconf getconfig 1 ld` shows the configuration of Logical Device number 0 and 1:

```
[root@host2 ~]# arcconf getconfig 1 ld
Controllers found: 1
--------------------------------------------------------
Logical device information
--------------------------------------------------------
Logical Device number 0
   Logical Device name             : vd1
   Disk Name                       : /dev/sdc (Disk0) (Bus: 1, Target: 0, Lun: 0)
   Block Size of member drives     : 512 Bytes
   Array                           : 0
   RAID level                      : 10
   Status of Logical Device        : Optimal
   Size                            : 7630830 MB
   Stripe-unit size                : 256 KB
   Full Stripe Size                : 512 KB
   Interface Type                  : Serial ATA
   Device Type                     : Data
   Boot Type                       : None
   Heads                           : 255
   Sectors Per Track               : 32
   Cylinders                       : 65535
   Caching                         : Disabled
   Mount Points                    : Not Mounted
   LD Acceleration Method          : None
   SED Encryption                  : Disabled
   Volume Unique Identifier        : 600508B1001CF6173057FB8A85255004
   --------------------------------------------------------
   Logical Device segment information
   --------------------------------------------------------
   Segment : Availability (SizeMB, Protocol, Type, Connector ID, Location) Serial Number
```
```
   --------------------------------------------------------
   Group 0, Segment 0 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:2)  WQB0BYF0
   Group 0, Segment 1 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:4)  WQB0B5PV
   Group 1, Segment 0 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:3)  WQB0B5V7
   Group 1, Segment 1 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:5)  V302WXYF

Logical Device number 1
   Logical Device name             : vd2
   Disk Name                       : /dev/sdd (Disk0) (Bus: 1, Target: 0, Lun: 1)
   Block Size of member drives     : 512 Bytes
   Array                           : 1
   RAID level                      : 5
   Status of Logical Device        : Optimal
   Parity Initialization Status    : Completed
   Size                            : 11446245 MB
   Stripe-unit size                : 256 KB
   Full Stripe Size                : 768 KB
   Interface Type                  : Serial ATA
   Device Type                     : Data
   Boot Type                       : None
   Heads                           : 255
   Sectors Per Track               : 32
   Cylinders                       : 65535
   Caching                         : Disabled
   Mount Points                    : Not Mounted
   LD Acceleration Method          : None
   SED Encryption                  : Disabled
   Volume Unique Identifier        : 600508B1001CE4C11BEB914107DF0141
   --------------------------------------------------------
   Array Physical Device Information
   --------------------------------------------------------
   Device ID : Availability (SizeMB, Protocol, Type, Connector ID, Location) Serial Number
   --------------------------------------------------------
   Device 14 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:6)  V3039ZHF
   Device 15 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:7)  WQB0BY59
   Device 16 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:8)  WQB0AW76
   Device 17 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:9)  VB00EL3F

Command completed successfully.
```

The key line in both listings is:

```
   Caching                         : Disabled
```

Use `arcconf` to set the logical drives' Caching to write-back (WB) mode:

```shell
arcconf SETCACHE 1 LOGICALDRIVE 0 con
arcconf SETCACHE 1 LOGICALDRIVE 1 con
```

Querying again confirms the cache mode has been switched on:

```
   Caching                         : Enabled
```

Re-test disk I/O performance:

```shell
./testdd.sh /u01 /u02 > testdd.log.$(date +%Y%m%d%H%M) 2>&1
```

```
vgraid10_local-lv01  7.3T  100G  7.2T   2%  /u01
```
```
  direct write: 38.3 MB/s    direct read: 400 MB/s
  cached write: 303 MB/s     cached read: 526 MB/s

vgraid5_local-lv01    11T   88G   11T   1%  /u02
  direct write: 7.1 MB/s     direct read: 357 MB/s
  cached write: 33.7 MB/s    cached read: 350 MB/s
```

Conclusion: with the RAID controller cache enabled, RAID 5's direct-write performance is still very poor (7.1 MB/s), but its direct-read performance improves dramatically (128 MB/s → 357 MB/s). Reads soar while writes stay sluggish, which matches the theoretical prediction exactly.

Finally, revert to the original setting, because this RAID controller has no backup battery and write-back caching therefore risks data loss on power failure:

```shell
arcconf SETCACHE 1 LOGICALDRIVE 0 coff
arcconf SETCACHE 1 LOGICALDRIVE 1 coff
```

3. Test script

```shell
vim testdd.sh
```

```bash
#!/bin/bash
# Run direct and cached dd read/write tests against each target directory.
if [ $# -lt 1 ]; then
    echo "usage: $0 /target1 /target2 /target3 ..."
    exit 1
fi
while [ $# -gt 0 ]; do
    target=$1

    echo "${target} direct write"
    sync
    echo 3 > /proc/sys/vm/drop_caches
    time dd if=/dev/zero of=${target}/dd.out bs=8k count=200000 oflag=direct

    echo "${target} direct read"
    sync
    echo 3 > /proc/sys/vm/drop_caches
    time dd if=${target}/dd.out of=/dev/null bs=8k count=200000 iflag=direct

    echo "${target} cached write"
    sync
    echo 3 > /proc/sys/vm/drop_caches
    time dd if=/dev/zero of=${target}/dd.out bs=8k count=200000

    echo "${target} cached read"
    sync
    echo 3 > /proc/sys/vm/drop_caches
    time dd if=${target}/dd.out of=/dev/null bs=8k count=200000

    shift
done
```

```shell
chmod +x testdd.sh
./testdd.sh /u01 /u02 > testdd.log.$(date +%Y%m%d%H%M) 2>&1
```
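The direct-write gap follows from the classic RAID 5 small-write penalty: with the cache disabled, every partial-stripe write must read the old data block and old parity, then write the new data and new parity (four physical I/Os), whereas RAID 10 only writes the two mirror copies. A back-of-envelope sketch of that model (the per-disk IOPS figure here is an illustrative assumption, not a measurement from this test):

```python
DISKS = 4

def physical_ios_per_write(level):
    """Physical I/Os generated by one small logical write, no controller cache.
    RAID 10: one write to each mirror side = 2.
    RAID 5 partial-stripe: read old data + read old parity
                           + write new data + write new parity = 4 (read-modify-write).
    """
    return {"raid10": 2, "raid5": 4}[level]

DISK_IOPS = 150  # assumed small-I/O IOPS of a single 7200 rpm SATA HDD

for level in ("raid10", "raid5"):
    array_iops = DISKS * DISK_IOPS / physical_ios_per_write(level)
    mb_per_s = array_iops * 8 / 1024  # 8 KiB per write, as in the dd test
    print(f"{level}: ~{array_iops:.0f} write IOPS, ~{mb_per_s:.1f} MB/s at 8 KiB writes")
```

Note the model alone predicts only a 2:1 ratio; the measured gap (38.7 vs. 7.4 MB/s, about 5:1) is wider because RAID 10's sequential direct writes can stream, while uncached RAID 5 pays the full read-modify-write round trip, including extra rotational latency, on every 8 KiB write.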