CentOS 7.1: NVMe interrupts bound to the nearest NUMA node
2016-01-05 18:39
On CentOS 7.1, the NVMe queue interrupts are bound across all the cores of the NUMA node nearest the device, as the fio run and /proc/interrupts output below show.
[root@memblaze-lyk1 lyk]# fio --ioengine=libaio --bs=4k --numjobs=10 --iodepth=32 --direct=1 --runtime=60 --filename=/dev/nvme0n1 --name=randread --rw=randread --size=60GB --group_reporting
randread: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
...
fio-2.2.9
Starting 10 processes
^Cbs: 10 (f=10): [r(10)] [18.0% done] [2747MB/0KB/0KB /s] [703K/0/0 iops] [eta 00m:50s]
fio: terminating on signal 2
randread: (groupid=0, jobs=10): err= 0: pid=14047: Tue Jan 5 18:37:04 2016
read : io=27048MB, bw=2729.7MB/s, iops=698785, runt= 9909msec
slat (usec): min=1, max=1416, avg= 3.72, stdev= 2.49
clat (usec): min=45, max=6551, avg=452.54, stdev=301.52
lat (usec): min=58, max=6554, avg=456.43, stdev=301.52
clat percentiles (usec):
| 1.00th=[ 157], 5.00th=[ 179], 10.00th=[ 195], 20.00th=[ 221],
| 30.00th=[ 258], 40.00th=[ 306], 50.00th=[ 358], 60.00th=[ 422],
| 70.00th=[ 510], 80.00th=[ 628], 90.00th=[ 844], 95.00th=[ 1048],
| 99.00th=[ 1560], 99.50th=[ 1768], 99.90th=[ 2256], 99.95th=[ 2480],
| 99.99th=[ 3024]
bw (KB /s): min=271808, max=283056, per=10.00%, avg=279537.18, stdev=2089.00
lat (usec) : 50=0.01%, 100=0.01%, 250=28.14%, 500=40.91%, 750=17.41%
lat (usec) : 1000=7.55%
lat (msec) : 2=5.75%, 4=0.24%, 10=0.01%
cpu : usr=16.84%, sys=33.91%, ctx=2429989, majf=0, minf=22168
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=6924266/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: io=27048MB, aggrb=2729.7MB/s, minb=2729.7MB/s, maxb=2729.7MB/s, mint=9909msec, maxt=9909msec
Disk stats (read/write):
nvme0n1: ios=6916456/0, merge=0/0, ticks=3037118/0, in_queue=3198207, util=100.00%
[root@memblaze-lyk1 lyk]# cat /proc/interrupts | grep nvme
121: 200 0 0 0 0 0 0 0 988572 0 0 0 IR-PCI-MSI-edge nvme0q0, nvme0q1
122: 0 0 0 0 0 0 0 0 0 0 63551 0 IR-PCI-MSI-edge nvme0q2
123: 1002556 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge nvme0q3
124: 0 0 0 0 307 0 0 0 0 0 0 0 IR-PCI-MSI-edge nvme0q4
125: 8 0 0 0 0 0 825539 0 214358 0 0 0 IR-PCI-MSI-edge nvme0q5
126: 0 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge nvme0q6
127: 8 0 0 0 0 0 0 0 0 0 921260 0 IR-PCI-MSI-edge nvme0q7
128: 16477 0 0 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge nvme0q8
129: 0 0 980071 0 0 0 0 0 0 0 0 0 IR-PCI-MSI-edge nvme0q9
130: 0 0 0 0 303 0 0 0 0 0 0 0 IR-PCI-MSI-edge nvme0q10
131: 0 0 0 0 0 0 949187 0 0 0 0 0 IR-PCI-MSI-edge nvme0q11
132: 0 0 0 0 0 0 0 0 1 0 0 0 IR-PCI-MSI-edge nvme0q12
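A quick way to confirm this binding is to compare the controller's NUMA node with the CPU list each queue IRQ is allowed on. A minimal sketch, assuming the PCI address 06:00.0 reported by lspci on this machine (run as root; the final smp_affinity_list write is commented out and only illustrates how a queue could be re-pinned):

```shell
#!/bin/sh
# NUMA node the NVMe controller is attached to
# (0000:06:00.0 matches the lspci output on this machine).
cat /sys/bus/pci/devices/0000:06:00.0/numa_node

# CPUs belonging to that node, e.g. node 0:
cat /sys/devices/system/node/node0/cpulist

# Allowed CPUs for every nvme queue interrupt.
for irq in $(awk '/nvme/ {sub(":", "", $1); print $1}' /proc/interrupts); do
    printf 'IRQ %s -> CPUs %s\n' "$irq" "$(cat /proc/irq/$irq/smp_affinity_list)"
done

# Re-pin one queue's interrupt by writing a CPU list
# (example only; pick cores from the node's cpulist above):
# echo 0-5 > /proc/irq/123/smp_affinity_list
```

Note that irqbalance, if running, may rewrite these masks, so it is usually stopped or told to ban these IRQs before pinning them by hand.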
[root@memblaze-lyk1 lyk]# lspci | grep Non
06:00.0 Non-Volatile memory controller: Device 1c5f:0540 (rev 05)
[root@memblaze-lyk1 lyk]# uname -a
Linux memblaze-lyk1 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@memblaze-lyk1 lyk]# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
[root@memblaze-lyk1 lyk]#