
OpenStack Tacker Introduction - 7. Creating a Multi-NIC SFC Service Chain: Plumbing the Traffic Through the VMs

2017-10-16 14:49

1. Overview

    This article describes how to configure the VMs inside a multi-NIC SFC service chain created with Tacker, so that traffic actually flows through the chain end to end.

   Create a Tacker template for the SFC layout below.
   The service chain links three NFV virtual machines, vnf1_001, vnf2_001, and vnf3_001, and packets are injected into the chain from VM CP0:

            +----------------+    +----------------+    +----------------+
            |    vnf1_001    |    |    vnf2_001    |    |    vnf3_001    |
            +----------------+    +----------------+    +----------------+
             CP11 CP12  CP13       CP21 CP22  CP23       CP31 CP32  CP33
                   |      |              |      |              |      |
VM CP0 ->----------+      +--------------+      +--------------+      +--->
For how the SFC itself is created and configured, see: http://blog.csdn.net/linshenyuan1213/article/details/78224855

2. Configuring VM CP0

1) eth3 is the NIC that communicates with the SFC
[root@lyh-public ~]# ifconfig eth3
eth3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
inet 1.0.0.8  netmask 255.255.255.0  broadcast 1.0.0.255
inet6 fe80::f816:3eff:fe51:7c12  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:51:7c:12  txqueuelen 1000  (Ethernet)

2) The only default route is via 1.0.0.1; traffic that matches no more specific route leaves through it
[root@lyh-public ~]# ip route
default via 1.0.0.1 dev eth3
1.0.0.0/24 dev eth3  proto kernel  scope link  src 1.0.0.8  metric 100
169.254.169.254 via 172.16.30.1 dev eth1  proto dhcp  metric 100
172.16.30.0/24 dev eth1  proto kernel  scope link  src 172.16.30.3  metric 100
172.16.40.0/24 dev eth2  proto kernel  scope link  src 172.16.40.8  metric 100
192.168.6.0/24 dev eth0  proto kernel  scope link  src 192.168.6.114  metric 100

3) Install a static ARP entry for the gateway
arp -s 1.0.0.1 fa:16:3e:ee:ee:ee
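
The CP0-side preparation boils down to the two commands above. A minimal sketch that guards them so the script can be re-run safely; the interface name, gateway IP, and gateway MAC are taken from the outputs above, and the MAC-format check helper is my own addition:

```shell
#!/bin/sh
# Sketch: prepare VM CP0 to send traffic into the chain.
SFC_IF=eth3                 # NIC facing the SFC network (from the ifconfig output above)
GW_IP=1.0.0.1               # default gateway on that network
GW_MAC=fa:16:3e:ee:ee:ee    # static MAC for the gateway; no real host answers ARP for it

# Basic sanity check on the MAC before installing the static ARP entry.
valid_mac() {
    echo "$1" | grep -Eq '^([0-9a-f]{2}:){5}[0-9a-f]{2}$'
}

# Only touch the routing/ARP tables if the SFC-facing NIC actually exists.
if ip link show "$SFC_IF" >/dev/null 2>&1 && valid_mac "$GW_MAC"; then
    ip route replace default via "$GW_IP" dev "$SFC_IF"
    arp -s "$GW_IP" "$GW_MAC"   # or: ip neigh replace "$GW_IP" lladdr "$GW_MAC" dev "$SFC_IF"
fi
```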


3. Configuring vnf1_001, vnf2_001, and vnf3_001 (identical on all three)

1) Environment preparation:

   Install and start Open vSwitch

# rpm -ivh openvswitch-2.5.0-2.el7.x86_64.rpm
# systemctl restart openvswitch
# systemctl enable openvswitch
2) Create a bridge and steer traffic in through eth1 and out through eth2

2.1) Create the bridge and add the chain-facing NICs
# ovs-vsctl add-br br-int
# ovs-vsctl add-port br-int eth1
# ovs-vsctl add-port br-int eth2

2.2) Inspect the bridge and its ports
# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000da1bcbbbfc45
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(eth1): addr:fa:16:3e:15:3d:0d
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
2(eth2): addr:fa:16:3e:21:bf:08
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
LOCAL(br-int): addr:da:1b:cb:bb:fc:45
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

2.3) Look up the MAC address of eth2
# ifconfig eth2 |grep ether
ether fa:16:3e:21:bf:08  txqueuelen 1000  (Ethernet)

2.4) Install flow rules: traffic entering on port 1 (eth1) leaves on port 2 (eth2), with its source MAC rewritten to eth2's address, fa:16:3e:21:bf:08
ovs-ofctl add-flow br-int "in_port=1 actions=mod_dl_src:fa:16:3e:21:bf:08,output:2"
ovs-ofctl add-flow br-int "priority=0 actions=NORMAL"
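
Since the same configuration is repeated on all three VNFs, steps 2.1 through 2.4 can be collected into one script. This is a sketch under the section's assumptions (eth1 = chain ingress on OpenFlow port 1, eth2 = chain egress on port 2); the `mk_flow` helper is my own, and the egress MAC is read from sysfs instead of being hard-coded:

```shell
#!/bin/sh
# Sketch: one-shot forwarder setup for a VNF VM in the chain.
BR=br-int
IN_IF=eth1    # chain ingress, assumed to become OpenFlow port 1
OUT_IF=eth2   # chain egress, assumed to become OpenFlow port 2

# Build the OpenFlow rule string: rewrite the source MAC to the egress NIC's
# MAC and forward from the ingress port to the egress port.
mk_flow() {   # mk_flow <in_port_no> <src_mac> <out_port_no>
    echo "in_port=$1 actions=mod_dl_src:$2,output:$3"
}

# Only run the OVS commands where Open vSwitch is actually installed.
if command -v ovs-vsctl >/dev/null 2>&1; then
    ovs-vsctl --may-exist add-br "$BR"
    ovs-vsctl --may-exist add-port "$BR" "$IN_IF"
    ovs-vsctl --may-exist add-port "$BR" "$OUT_IF"
    OUT_MAC=$(cat /sys/class/net/"$OUT_IF"/address)
    ovs-ofctl add-flow "$BR" "$(mk_flow 1 "$OUT_MAC" 2)"
    ovs-ofctl add-flow "$BR" "priority=0 actions=NORMAL"
fi
```

Verify the resulting port numbering with `ovs-ofctl show br-int` as in step 2.2; if the ports come up with different numbers, adjust the `mk_flow 1 ... 2` arguments accordingly.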

Appendix:
1) Clear the flow table
# ovs-ofctl del-flows br-int
2) Dump the flow table
# ovs-ofctl dump-flows br-int
3) Add a flow rule
# ovs-ofctl add-flow br-int "in_port=1 actions=mod_dl_src:fa:16:3e:21:bf:08,output:2"
# To also rewrite the destination MAC on the way out (it can be set to the MAC of the next service):
# ovs-ofctl add-flow br-int "in_port=1 actions=mod_dl_src:fa:16:3e:21:bf:08,mod_dl_dst:fa:16:3e:63:b9:83,output:2"
2.5) Verify the installed flow rules
# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=673.964s, table=0, n_packets=716, n_bytes=67816, idle_age=0, in_port=1 actions=mod_dl_src:fa:16:3e:21:bf:08,output:2
cookie=0x0, duration=641.176s, table=0, n_packets=132, n_bytes=5544, idle_age=422, priority=0 actions=NORMAL


4. Verify That Traffic Is Forwarded Along the Chain

1) From VM CP0, ping 4.0.0.9 (the IP of CP33 on vnf3_001)
# ping 4.0.0.9
PING 4.0.0.9 (4.0.0.9) 56(84) bytes of data.

2) Capture on vnf1_001
# tcpdump -i eth2 -nne
listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
14:34:49.304449 fa:16:3e:21:bf:08 > fa:16:3e:15:3d:0d, ethertype IPv4 (0x0800), length 98: 1.0.0.8 > 4.0.0.9: ICMP echo request, id 16139, seq 12980, length 64
14:34:49.453383 fa:16:3e:21:bf:08 > fa:16:3e:15:3d:0d, ethertype IPv4 (0x0800), length 98: 1.0.0.8 > 4.0.0.9: ICMP echo request, id 16234, seq 150, length 64

3) Capture on vnf2_001
# tcpdump -i eth2 -nne
listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
14:32:26.007022 fa:16:3e:21:bf:08 > fa:16:3e:01:8e:c4, ethertype IPv4 (0x0800), length 98: 1.0.0.8 > 4.0.0.9: ICMP echo request, id 16234, seq 6, length 64
14:32:26.858004 fa:16:3e:21:bf:08 > fa:16:3e:01:8e:c4, ethertype IPv4 (0x0800), length 98: 1.0.0.8 > 4.0.0.9: ICMP echo request, id 16139, seq 12837, length 64

4) Capture on vnf3_001
# tcpdump -i eth2 -nne
listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
14:34:34.679680 fa:16:3e:b8:0e:d0 > fa:16:3e:9b:f3:cd, ethertype IPv4 (0x0800), length 98: 1.0.0.8 > 4.0.0.9: ICMP echo request, id 16139, seq 12965, length 64
14:34:34.828632 fa:16:3e:b8:0e:d0 > fa:16:3e:9b:f3:cd, ethertype IPv4 (0x0800), length 98: 1.0.0.8 > 4.0.0.9: ICMP echo request, id 16234, seq 135, length 64


5. On the Compute Host, Verify That the Traffic Leaves From eth2 of vnf3_001

1) Find the Neutron port ID for 4.0.0.9
# neutron port-list |grep 4.0.0.9
| 0d31d060-50bb-40df-b6db-3822086819a3 |fa:16:3e:b8:0e:d0 | "ip_address": "4.0.0.9"}       |

2) Derive the port name
The corresponding port name is "qvo" plus the first 11 characters of the port ID: qvo0d31d060-50
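
That derivation is a simple string operation; a small sketch (the helper name `qvo_name` is my own, and it assumes the usual Neutron convention of "qvo" plus the first 11 characters of the port UUID, as used above):

```shell
#!/bin/sh
# Sketch: derive the OVS qvo device name from a Neutron port UUID.
qvo_name() {
    uuid=$1
    # "qvo" + first 11 characters of the UUID
    echo "qvo$(echo "$uuid" | cut -c1-11)"
}

# Usage on the controller, combining it with the port lookup from step 1:
#   PORT_ID=$(neutron port-list | grep 4.0.0.9 | awk '{print $2}')
#   tcpdump -i "$(qvo_name "$PORT_ID")" -nne
```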

3) Capture the traffic finally leaving the chain
# tcpdump -i qvo0d31d060-50 -nne
listening on qvo0d31d060-50, link-type EN10MB (Ethernet), capture size 65535 bytes
14:41:42.900677 fa:16:3e:b8:0e:d0 > fa:16:3e:9b:f3:cd, ethertype IPv4 (0x0800), length 98: 1.0.0.8 > 4.0.0.9: ICMP echo request, id 16139, seq 13398, length 64
14:41:43.049561 fa:16:3e:b8:0e:d0 > fa:16:3e:9b:f3:cd, ethertype IPv4 (0x0800), length 98: 1.0.0.8 > 4.0.0.9: ICMP echo request, id 16234, seq 568, length 64

4) Check the traffic on each port pair
The flow counters below confirm that traffic is indeed passing through every port pair:
# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0x5f227dbcf67192cd, duration=18787.243s, table=0, n_packets=14421, n_bytes=1413258, idle_age=0, priority=30,ip,in_port=88,nw_src=1.0.0.8,nw_dst=4.0.0.0/24 actions=group:1
cookie=0x5f227dbcf67192cd, duration=18787.133s, table=0, n_packets=14367, n_bytes=1407966, idle_age=0, priority=30,ip,in_port=128,nw_src=1.0.0.8,nw_dst=4.0.0.0/24 actions=group:2
cookie=0x5f227dbcf67192cd, duration=18787.025s, table=0, n_packets=14340, n_bytes=1405320, idle_age=0, priority=30,ip,in_port=131,nw_src=1.0.0.8,nw_dst=4.0.0.0/24 actions=group:3
cookie=0x5f227dbcf67192cd, duration=18786.985s, table=0, n_packets=12324, n_bytes=1207752, idle_age=0, priority=30,ip,in_port=134,nw_src=1.0.0.8,nw_dst=4.0.0.0/24 actions=NORMAL
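
The chain can also be verified numerically: the `n_packets` counter of each port-pair flow should keep increasing while the ping runs. A sketch that pulls the per-flow counters out of a `dump-flows` listing, so two consecutive runs can be compared; the helper name `flow_counters` is my own, and the field names match the output above:

```shell
#!/bin/sh
# Sketch: print "<actions> <n_packets>" for each flow in an
# "ovs-ofctl dump-flows" listing read from stdin.
flow_counters() {
    awk 'match($0, /n_packets=[0-9]+/) {
        # extract the digits after "n_packets=" (10 characters long)
        pkts = substr($0, RSTART + 10, RLENGTH - 10)
        # the action (e.g. group:1 or NORMAL) follows "actions=" at line end
        n = split($0, f, "actions=")
        if (n > 1) print f[2], pkts
    }'
}

# Usage on the compute host (run twice and compare the counts):
#   ovs-ofctl dump-flows br-int | flow_counters
```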