Network Performance Comparison of Virtualization Modes
2018-01-14 16:59
Performance comparison of dpdk+ovs, bare metal, sriov (pci passthrough), and a plain Linux bridge.
CPU: E5-2680 v2 @ 2.8 GHz; NIC: Intel 82599 (ixgbe).
Test method: routed forwarding.
Topology: A------R------B
R actually has two NICs, so packets flow in four directions; this test samples only one of them. Forwarding on R uses the Linux routing stack, so the numbers should not be read as a baseline benchmark of each technology, only as relative values.
A runs netserver, R is replaced in turn by a machine in each of the modes above, and B runs the netperf client.
netperf TCP_RR is run with 20, 40, and 80 concurrent streams, each pinned to a CPU.
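The description above can be sketched as a driver script for the client side B. This is a minimal sketch, assuming netserver is already listening on A; the server IP, the 60 s test length, and the 64-byte request/response size are illustrative assumptions, not the author's exact parameters.

```shell
# Launch N concurrent TCP_RR streams, each pinned to a CPU round-robin.
run_tcp_rr() {
  local concurrency=$1 server_ip=$2
  for i in $(seq 0 $((concurrency - 1))); do
    # Pin each netperf instance to a CPU (wraps around at nproc).
    taskset -c $((i % $(nproc))) \
      netperf -H "$server_ip" -t TCP_RR -l 60 -- -r 64,64 &
  done
  wait
}

# Run the three concurrency levels used in the table below, e.g.:
# run_tcp_rr 20 10.0.0.2; run_tcp_rr 40 10.0.0.2; run_tcp_rr 80 10.0.0.2
```

The aggregate pps is then the sum of the per-stream transaction rates reported by netperf.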
| Mode \ concurrency (pps) | 20 | 40 | 80 |
| --- | --- | --- | --- |
| bare metal | 97k | 173k | 165k |
| dpdk + ovs + vhost-user | 185k | 308k | 480k |
| sriov (pci passthrough) | 175k | 290k | 480k |
| linux-bridge | 100k | 167k | 200k |
Domain XML for dpdk-ovs-vhost-user:
<domain type='kvm'>
  <name>vm1</name>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1024' unit='M' nodeset='0'/>
    </hugepages>
  </memoryBacking>
  <cpu mode='host-model'>
    <model fallback='allow'/>
    <numa>
      <cell id='0' cpus='0-1' memory='4194304' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  <vcpu placement='static' current='2'>16</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='localtime'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/home/vm_workspace/vm1.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <interface type='vhostuser'>
      <mac address='52:54:00:00:06:00'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user-0' mode='client'/>
      <model type='virtio'/>
      <driver name='vhost' queues='2'>
        <host mrg_rxbuf='on'/>
      </driver>
    </interface>
    <interface type='vhostuser'>
      <mac address='52:54:00:00:06:01'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user-1' mode='client'/>
      <model type='virtio'/>
      <driver name='vhost' queues='2'>
        <host mrg_rxbuf='on'/>
      </driver>
    </interface>
    <serial type='pty'/>
    <input type='tablet' bus='usb'/>
    <graphics type='vnc' autoport='yes' keymap='en-us' listen='0.0.0.0'/>
    <video>
      <model type='cirrus'/>
    </video>
    <memballoon model='virtio'>
      <stats period='10'/>
    </memballoon>
  </devices>
</domain>
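The XML above expects OVS-DPDK to serve the vhost-user sockets at the paths shown. A minimal sketch of that host-side setup follows; the bridge name `br0`, the port name `dpdk0`, and the PCI address `0000:02:00.0` are assumptions, not details from the original test.

```shell
# Sketch: create a netdev (userspace/DPDK) bridge and the two
# dpdkvhostuser ports; OVS creates the sockets under
# /var/run/openvswitch/, matching the paths in the domain XML.
setup_ovs_dpdk() {
  ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
  # Bind the physical 82599 port to the bridge (PCI address assumed).
  ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
      options:dpdk-devargs=0000:02:00.0
  ovs-vsctl add-port br0 vhost-user-0 \
      -- set Interface vhost-user-0 type=dpdkvhostuser
  ovs-vsctl add-port br0 vhost-user-1 \
      -- set Interface vhost-user-1 type=dpdkvhostuser
}
# Run on the host after ovs-vswitchd has been started with DPDK enabled:
# setup_ovs_dpdk
```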
Domain XML for sriov-passthrough:
<domain type='kvm'>
  <name>vm1</name>
  <memory>1024000</memory>
  <cpu mode='host-passthrough'>
    <cache mode='passthrough'/>
  </cpu>
  <vcpu placement='static' cpuset='0-3'>3</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='localtime'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/home/vm_workspace/vm1.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0' bus='0x02' slot='0x10' function='0x00'/>
      </source>
      <mac address='52:54:00:6d:90:00'/>
      <vlan>
        <tag id='4000'/>
      </vlan>
    </interface>
    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0' bus='0x02' slot='0x10' function='0x01'/>
      </source>
      <mac address='52:54:00:6d:90:01'/>
      <vlan>
        <tag id='4001'/>
      </vlan>
    </interface>
    <input type='tablet' bus='usb'/>
    <graphics type='vnc' autoport='yes' keymap='en-us' listen='0.0.0.0'/>
    <video>
      <model type='cirrus'/>
    </video>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <memballoon model='virtio'>
      <stats period='10'/>
    </memballoon>
  </devices>
</domain>
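Before this domain can start, the 82599 PF must expose the virtual functions that the `hostdev` interfaces reference. A minimal sketch, assuming the PF is named `enp2s0f0` (the interface name varies by host):

```shell
# Sketch: create 2 VFs on the ixgbe PF and give them the MACs
# used in the domain XML. Requires root and SR-IOV-capable firmware.
create_ixgbe_vfs() {
  local pf=${1:-enp2s0f0}   # assumed PF name
  echo 2 > "/sys/class/net/$pf/device/sriov_numvfs"
  ip link set "$pf" vf 0 mac 52:54:00:6d:90:00
  ip link set "$pf" vf 1 mac 52:54:00:6d:90:01
  # The new VFs should appear at the PCI addresses in the XML:
  lspci | grep -i "82599.*virtual function"
}
# create_ixgbe_vfs enp2s0f0
```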
Domain XML for linux-bridge:
<domain type='kvm'>
  <name>vm1</name>
  <memory>1024000</memory>
  <cpu mode='host-passthrough'>
    <cache mode='passthrough'/>
  </cpu>
  <vcpu placement='static' cpuset='0-3'>3</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='localtime'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/home/vm_workspace/vm1.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <model type='virtio'/>
      <source bridge='br-ext'/>
    </interface>
    <interface type='bridge'>
      <model type='virtio'/>
      <source bridge='br-int'/>
    </interface>
    <input type='tablet' bus='usb'/>
    <graphics type='vnc' autoport='yes' keymap='en-us' listen='0.0.0.0'/>
    <video>
      <model type='cirrus'/>
    </video>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <memballoon model='virtio'>
      <stats period='10'/>
    </memballoon>
  </devices>
</domain>
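The bridges `br-ext` and `br-int` referenced above must exist on the host before the domain starts. A minimal sketch; the uplink NIC names `eth0`/`eth1` are assumptions:

```shell
# Sketch: create the two Linux bridges and enslave one physical
# NIC to each, so the VM's two virtio interfaces each reach a NIC.
setup_bridges() {
  ip link add br-ext type bridge
  ip link add br-int type bridge
  ip link set eth0 master br-ext   # assumed uplink NIC names
  ip link set eth1 master br-int
  ip link set br-ext up
  ip link set br-int up
}
# Run as root on the host:
# setup_bridges
```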