
Testing disk IOPS on Linux with FIO

2015-03-24 14:18
FIO is an excellent tool for measuring IOPS, used to stress-test and validate hardware. It supports 13 different I/O engines,

including sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more.

fio project page: http://freshmeat.net/projects/fio/

1. Installing FIO

wget http://brick.kernel.dk/snaps/fio-2.0.7.tar.gz
yum install libaio-devel

tar -zxvf fio-2.0.7.tar.gz

cd fio-2.0.7

make

make install
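
Once make install finishes, a quick sanity check is to ask the binary for its version (the exact string depends on the tarball you downloaded; 2.0.7 is simply what the steps above fetch):

fio --version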

2. Random read test:

Random read:

fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=200G -numjobs=10 -runtime=1000 -group_reporting -name=mytest

Option descriptions:

filename=/dev/sdb1  name of the file or device to test; usually a file on the data directory of the disk under test (here the raw partition is used directly).

direct=1  bypass the machine's own buffering during the test, so the results are closer to the real device.

rw=randwrite  test random write I/O.

rw=randrw  test mixed random read and write I/O.

bs=16k  the block size of a single I/O is 16 KB.

bsrange=512-2048  same as above, but specifies a range of block sizes.

size=5g  the test file size for this run is 5 GB, tested with 4 KB I/Os at a time.

numjobs=30  run 30 threads for this test.

runtime=1000  run the test for 1000 seconds; if omitted, fio keeps going until the whole 5 GB file has been written 4 KB at a time.

ioengine=psync  use the psync I/O engine.

rwmixwrite=30  in mixed read/write mode, writes make up 30%.

group_reporting  controls how results are displayed: aggregate the statistics of all jobs into one summary.

In addition:

lockmem=1g  use only 1 GB of memory for the test.

zero_buffers  initialize the I/O buffers with zeros.

nrfiles=8  the number of files generated per job.
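
Instead of one long command line, the same options can also go into an ini-style fio job file. A minimal sketch (the job name mytest and the values simply mirror the random-read command above; adjust the device to your own):

cat > mytest.fio <<'EOF'
[global]
ioengine=psync
direct=1
thread
bs=16k
size=200G
runtime=1000
numjobs=10
group_reporting

[mytest]
filename=/dev/sdb1
rw=randread
EOF

fio mytest.fio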

Sequential read:

fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest

Random write:

fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest

Sequential write:

fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest

Mixed random read/write:

fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=100 -group_reporting -name=mytest -ioscheduler=noop
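
The last command also passes -ioscheduler=noop, which tells fio to try to switch the device hosting the test file to the given I/O scheduler before the run. To check (or set) the scheduler by hand, the usual sysfs interface looks like this (sdb matches the example device; on newer kernels with the multi-queue block layer the equivalent choice is "none" rather than "noop"):

cat /sys/block/sdb/queue/scheduler
echo noop > /sys/block/sdb/queue/scheduler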

3. A real test example:

[root@localhost ~]# fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=100 -group_reporting -name=mytest1

mytest1: (g=0): rw=randrw, bs=16K-16K/16K-16K, ioengine=psync, iodepth=1

...

mytest1: (g=0): rw=randrw, bs=16K-16K/16K-16K, ioengine=psync, iodepth=1

fio 2.0.7

Starting 30 threads

Jobs: 1 (f=1): [________________m_____________] [3.5% done] [6935K/3116K /s] [423 /190 iops] [eta 48m:20s]

mytest1: (groupid=0, jobs=30): err= 0: pid=23802

read : io=1853.4MB, bw=18967KB/s, iops=1185 , runt=100058msec

clat (usec): min=60 , max=871116 , avg=25227.91, stdev=31653.46

lat (usec): min=60 , max=871117 , avg=25228.08, stdev=31653.46

clat percentiles (msec):

| 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8],

| 30.00th=[ 10], 40.00th=[ 12], 50.00th=[ 15], 60.00th=[ 19],

| 70.00th=[ 26], 80.00th=[ 37], 90.00th=[ 57], 95.00th=[ 79],

| 99.00th=[ 151], 99.50th=[ 202], 99.90th=[ 338], 99.95th=[ 383],

| 99.99th=[ 523]

bw (KB/s) : min= 26, max= 1944, per=3.36%, avg=636.84, stdev=189.15

write: io=803600KB, bw=8031.4KB/s, iops=501 , runt=100058msec

clat (usec): min=52 , max=9302 , avg=146.25, stdev=299.17

lat (usec): min=52 , max=9303 , avg=147.19, stdev=299.17

clat percentiles (usec):

| 1.00th=[ 62], 5.00th=[ 65], 10.00th=[ 68], 20.00th=[ 74],

| 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 90],

| 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 120], 95.00th=[ 370],

| 99.00th=[ 1688], 99.50th=[ 2128], 99.90th=[ 3088], 99.95th=[ 3696],

| 99.99th=[ 5216]

bw (KB/s) : min= 20, max= 1117, per=3.37%, avg=270.27, stdev=133.27

lat (usec) : 100=24.32%, 250=3.83%, 500=0.33%, 750=0.28%, 1000=0.27%

lat (msec) : 2=0.64%, 4=3.08%, 10=20.67%, 20=19.90%, 50=17.91%

lat (msec) : 100=6.87%, 250=1.70%, 500=0.19%, 750=0.01%, 1000=0.01%

cpu : usr=1.70%, sys=2.41%, ctx=5237835, majf=0, minf=6344162

IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%

submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

issued : total=r=118612/w=50225/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
READ: io=1853.4MB, aggrb=18966KB/s, minb=18966KB/s, maxb=18966KB/s, mint=100058msec, maxt=100058msec

WRITE: io=803600KB, aggrb=8031KB/s, minb=8031KB/s, maxb=8031KB/s, mint=100058msec, maxt=100058msec

Disk stats (read/write):

sdb: ios=118610/50224, merge=0/0, ticks=2991317/6860, in_queue=2998169, util=99.77%

The key values to look at are the read and write iops in the output above (the "read :" and "write:" lines).
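
When running the same test repeatedly, it can help to save the report to a file and pull those two lines out afterwards; a small sketch (--output is a standard fio option, and the log file name is arbitrary):

fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=100 -group_reporting -name=mytest1 --output=mytest1.log
grep -E 'read :|write:' mytest1.log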

**Analyzing the two main bottlenecks of a disk array: throughput and IOPS**

1. Throughput

  Throughput depends mainly on the array architecture, the size of the Fibre Channel links (arrays today are generally FC arrays; SCSI-style SSA arrays are not discussed here), and the number of disks. The architecture differs from array to array, and each has its own internal bandwidth (similar to a PC's system bus), but internal bandwidth is usually provisioned generously and is not where the bottleneck lies.

  The Fibre Channel links have a fairly large impact. In a data warehouse environment, for example, the demand for data throughput is high, yet a single 2 Gb FC HBA can sustain at most 2 Gb / 8 = 250 MB/s (bytes) of real throughput, so it takes four such cards to reach 1 GB/s. A data warehouse environment should therefore consider moving to 4 Gb HBAs.

  Finally, the disk limit, which is the most important factor: once the bottlenecks above are out of the way, it comes down to the number of disks. Below are the throughputs that different kinds of disk can sustain:

  10K rpm    15K rpm    ATA
  -------    -------    ---
  10 MB/s    13 MB/s    8 MB/s

  So, assuming an array with 120 15K rpm FC disks, the maximum throughput the disks can sustain is 120 * 13 = 1560 MB/s. With 2 Gb HBAs it would take about six cards to carry that, while three or four 4 Gb HBAs are enough.

2. IOPS

  IOPS is determined mainly by the array's algorithms, the cache hit ratio, and the number of disks. The algorithms differ from array to array; for example, we recently saw on an HDS USP that the IOPS of a single LDEV (LUN) would not go up, possibly because of per-LDEV queue or resource limits. So before putting a storage system to use, it is worth understanding its algorithmic rules and limits.

  The cache hit ratio depends on the data distribution, the cache size, the data access pattern, and the cache algorithm; a full discussion would get very complicated and could fill a whole day. Here I only stress one point: the higher an array's read cache hit ratio, the better, because it generally means the array can support more IOPS. Why? That is tied to the per-disk IOPS limit discussed next.

  Then there is the disk limit: each physical disk can handle only a limited number of IOPS, for example:

  10K rpm    15K rpm    ATA
  -------    -------    ---
  100        150        50

  Likewise, if an array has 120 15K rpm FC disks, the maximum IOPS it can sustain is 120 * 150 = 18000. This is the theoretical limit imposed by the hardware; beyond it, disk response times can become so slow that the service no longer runs normally.

  For reads there is no IOPS difference between RAID5 and RAID10, but for the same application write IOPS, the IOPS that finally land on the disks differ, and it is precisely the disk IOPS we are evaluating. Once the disk limit is reached, performance cannot go any higher.

  So let's take a case: application IOPS of 10000, a read cache hit ratio of 30%, 60% reads and 40% writes, and 120 disks. Let's compute the per-disk IOPS under RAID5 and RAID10 (a small script version of the same arithmetic follows the worked numbers below).

  RAID5:

  IOPS per disk = (10000*(1-0.3)*0.6 + 4 * (10000*0.4)) / 120

  = (4200 + 16000) / 120

  = 168

  Here 10000*(1-0.3)*0.6 is the read IOPS: the read share is 0.6, and after subtracting the cache hits only 4200 read IOPS actually reach the disks.

  And 4 * (10000*0.4) is the write IOPS: in RAID5 each write actually causes 4 I/Os on the disks, so the write IOPS is 16000.

  To account for the fact that the 2 read operations inside a RAID5 write may themselves hit the cache, a more precise calculation is:

  IOPS per disk = (10000*(1-0.3)*0.6 + 2 * (10000*0.4)*(1-0.3) + 2 * (10000*0.4)) / 120

  = (4200 + 5600 + 8000) / 120

  = 148

  That works out to 148 IOPS per disk, essentially at the disk's limit.

  RAID10:

  IOPS per disk = (10000*(1-0.3)*0.6 + 2 * (10000*0.4)) / 120

  = (4200 + 8000) / 120

  = 102

  As you can see, because a write on RAID10 causes only 2 I/Os, the same load on the same disks works out to only 102 IOPS per disk, which is still well below the disk's limit.
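
  The same arithmetic is easy to wrap in a small shell/awk sketch if you want to plug in your own workload figures (the values below are just the assumptions from this example: 10000 IOPS, a 30% read cache hit ratio, a 60/40 read/write split, 120 disks):

# Per-disk IOPS under RAID5 (write penalty 4) and RAID10 (write penalty 2)
awk -v total=10000 -v hit=0.3 -v rd=0.6 -v wr=0.4 -v disks=120 'BEGIN {
    read_iops = total * (1 - hit) * rd            # reads that miss the cache
    raid5  = (read_iops + 4 * total * wr) / disks
    raid10 = (read_iops + 2 * total * wr) / disks
    printf "RAID5:  %.0f IOPS per disk\n", raid5
    printf "RAID10: %.0f IOPS per disk\n", raid10
}'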

  In one real case, a standby database under heavy recovery pressure (mostly small writes) was deployed on RAID5 and performed very poorly. Analysis showed that per-disk IOPS was approaching 200 at peak, so response times were extremely slow. After rebuilding it as RAID10 the problem went away, and per-disk IOPS dropped to around 100.

For a detailed breakdown of the test output, see: http://tobert.github.io/post/2014-04-17-fio-output-explained.html
Fio Output Explained

Previously, I blogged about setting up my benchmarking machine. Now that it's
up and running, I've started exploring the fio benchmarking tool.

fio - the Flexible IO Tester is an application written by Jens
Axboe, who may be better known as the maintainer of the Linux kernel's block IO subsystem. It resembles the older ffsb tool in a few ways, but doesn't seem to have any relation to it. As power tools go, it's capable of generating pretty much arbitrary
load. The tradeoff is that it's difficult to learn and that's exactly what I've been doing.


Here's a section-by-section breakdown of the default output. I'll look at other output options in future posts. The data displayed is from a Samsung 840 Pro SSD.

The explanation for each section can be found below the output text.
read : io=10240MB, bw=63317KB/s, iops=15829, runt=165607msec


The first line is pretty easy to read. fio did a total of 10GB of IO at 63.317MB/s for a total of 15829 IOPS (at the default 4k block size), and ran for 2 minutes and 45 seconds.

The first latency metric you'll see is the 'slat' or submission latency. It is pretty much what it sounds like, meaning "how long did it take to submit this IO to the kernel for processing?"
slat (usec): min=3, max=335, avg= 9.73, stdev= 5.76


I originally thought that submission latency would be useless for tuning, but the numbers below changed my mind. 269usec or 1/4 of a millisecond seems to be noise, but check it out. I haven't tuned anything yet, so I suspect that changing the scheduler and
telling the kernel it's not a rotating device will help.

Here are some more examples from the other devices:
slat (usec): min=3, max=335, avg= 9.73, stdev= 5.76 (SATA SSD)
slat (usec): min=5, max=68,  avg=26.21, stdev= 5.97 (SAS 7200)
slat (usec): min=5, max=63,  avg=25.86, stdev= 6.12 (SATA 7200)
slat (usec): min=3, max=269, avg= 9.78, stdev= 2.85 (SATA SSD)
slat (usec): min=6, max=66,  avg=27.74, stdev= 6.12 (MDRAID0/SAS)

clat (usec): min=1, max=18600, avg=51.29, stdev=16.79


Next up is completion latency. This is the time that passes between submission to the kernel and when the IO is complete, not including submission latency. In older versions of fio, this was the best metric for approximating application-level latency.
lat (usec): min=44, max=18627, avg=61.33, stdev=17.91


From what I can see, the 'lat' metric is fairly new. It's not documented in the man page or docs. Looking at the C code, it seems that this metric starts the moment the IO struct is created in fio and is completed right after clat, making this the one that
best represents what applications will experience. This is the one that I will graph.
clat percentiles (usec):
|  1.00th=[   42],  5.00th=[   45], 10.00th=[   45], 20.00th=[   46],
| 30.00th=[   47], 40.00th=[   47], 50.00th=[   49], 60.00th=[   51],
| 70.00th=[   53], 80.00th=[   56], 90.00th=[   60], 95.00th=[   67],
| 99.00th=[   78], 99.50th=[   81], 99.90th=[   94], 99.95th=[  101],
| 99.99th=[  112]


Completion latency percentiles are fairly self-explanatory and probably the most useful bit of info in the output. I looked at the source code and this is not slat + clat; it is tracked in its own struct.

The buckets are configurable in the config file. In the terse output, this is 20 fields of %f=%d;%f=%d;... which makes parsing more fun than it should be.

For comparison, here's the same section from a 7200 RPM SAS drive running the exact same load.
clat percentiles (usec):
|  1.00th=[ 3952],  5.00th=[ 5792], 10.00th=[ 7200], 20.00th=[ 8896],
| 30.00th=[10304], 40.00th=[11456], 50.00th=[12608], 60.00th=[13760],
| 70.00th=[15168], 80.00th=[16768], 90.00th=[18816], 95.00th=[20608],
| 99.00th=[23424], 99.50th=[24192], 99.90th=[26752], 99.95th=[28032],
| 99.99th=[30080]

bw (KB  /s): min=52536, max=75504, per=67.14%, avg=63316.81, stdev=4057.09


Bandwidth is pretty self-explanatory except for the per= part. The docs say it's meant for testing a single device with multiple workloads, so you can see how much of the IO was consumed by each process. When fio is run against multiple devices, as I did for
this output, it doesn't provide much meaning but is amusing when SSDs are mixed with spinning rust.

And here's the SAS drive again with 0.36% of the total IO out of 4 devices being tested.
bw (KB  /s): min=   71, max=  251, per=0.36%, avg=154.84, stdev=18.29

lat (usec) :   2= 0.01%,   4=0.01%,  10=0.01%,   20=0.01%, 50=51.41%
lat (usec) : 100=48.53%, 250=0.06%, 500=0.01%, 1000=0.01%
lat (msec) :   2= 0.01%,   4=0.01%,  10=0.01%,   20=0.01%


The latency distribution section took me a couple passes to understand. This is one series of metrics. Instead of using the same units for all three lines, the third line switches to milliseconds to keep the text width under control. Read the last line as 2000,
4000, 10,000, and 20,000usec and it makes more sense.

As this is a latency distribution, it's saying that 51.41% of requests took less than 50usec, 48.53% took less than 100usec and so on.
lat (msec) : 4=1.07%, 10=27.04%, 20=65.43%, 50=6.46%, 100=0.01%


In case you were thinking of parsing this madness with a quick script, you might want to know that the lat section will omit entries and whole lines if there is no data. For example, the SAS drive I've been referencing didn't manage to do any IO faster than
a millisecond, so this is the only line.
cpu          : usr=5.32%, sys=21.95%, ctx=2829095, majf=0, minf=21


Here's the user/system CPU percentages followed by context switches then major and minor page faults. Since the test
is configured to use direct IO, there should be very few page faults.
IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%


Fio has an iodepth setting that controls how many IOs it issues to the OS at any given time. This is entirely application-side, meaning it is not the same thing as the device's IO queue. In this case, iodepth was set to 1 so the IO depth was always 1 100% of
the time.
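
A deeper queue needs an asynchronous engine; a minimal sketch of what that looks like (the device, sizes, and job name here are only placeholders, not the config used for this output):

fio -filename=/dev/sdb1 -direct=1 -rw=randread -ioengine=libaio -iodepth=32 -bs=4k -size=1G -runtime=60 -name=depth-test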
submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%


submit and complete represent the number of submitted IOs at a time by fio and the number completed at a time. In the case of the thrashing test used to generate this output, the iodepth is at the default value of 1, so 100% of IOs were submitted 1 at a time
placing the results in the 1-4 bucket. Basically these only matter if iodepth is greater than 1.

These will get much more interesting when I get around to testing the various schedulers.
issued    : total=r=2621440/w=0/d=0, short=r=0/w=0/d=0


The number of IOs issued. Something is weird here since this was a 50/50 read/write load, so there should have been an equal number of writes. I suspect having unified_rw_reporting enabled
is making fio count all IOs as reads.

If you see short IOs in a direct IO test something has probably gone wrong. The reference I found in the Linux
kernel indicates that this happens at EOF and likely end of device.
latency   : target=0, window=0, percentile=100.00%, depth=1


Fio can be configured with a latency target, which will cause it to adjust
throughput until it can consistently hit the configured latency target. I haven't messed with this much yet. In time or size-based tests, this line will always look the same. All four of these values represent the configuration settings latency_target, latency_window, latency_percentile,
and iodepth.
Run status group 0 (all jobs):


fio supports grouping different tests for aggregation. For example, I can have one config for SSDs and HDDs mixed in the same file, but set up groups to report the IO separately. I'm not doing this for now, but future configs will need this functionality.
MIXED: io=12497MB, aggrb=42653KB/s, minb=277KB/s, maxb=41711KB/s, mint=300000msec, maxt=300012msec


And finally, the total throughput and time. io= indicates the amount of IO done in total. It will be variable for timed tests and should match the size parameter
for sized tests. aggrb is the aggregate bandwidth across all processes / devices. minb/maxb show minimum/maximum observed bandwidth. mint/maxt show the shortest & longest times for tests. Similar to the io= parameter, these should match the runtime parameter
for time-based tests and will vary in size-based tests.

Since I ran this test with unified_rw_reporting enabled, we only see
a line for MIXED. If it's disabled there will be separate lines for READ and WRITE.

Simple, right? I'll be spending a lot more time with fio for the next few weeks and will post more examples of configs, output, and graphing code.