OVS Source Code Analysis: bridge_run
2018-01-12 15:24
bridge_run
bridge_run consists of three main parts: bridge_init_ofproto, bridge_run__, and bridge_reconfigure.
bridge_init_ofproto
This registers ofproto_dpif_class, which is the concrete implementation of struct ofproto_class.
void
ofproto_init(const struct shash *iface_hints)
{
    ...
    ofproto_class_register(&ofproto_dpif_class);
    ...
}
bridge_run__
This consists of two parts: ofproto_type_run and ofproto_run.
ofproto_type_run /* Let each datapath type do the work that it needs to do. */
Both dpif_poll_threads_set and dpif_run ultimately lead into the pmd_thread_main flow.
dpif_recv_set and udpif_set_threads mainly serve the netlink path; the netdev path used by OVS+DPDK does not involve them.
static int
type_run(const char *type)
{
    struct dpif_backer *backer;

    /* Look up the backer by datapath type: there is only one backer per
     * datapath type. */
    backer = shash_find_data(&all_dpif_backers, type);
    ...
    /* The dpif creates its own I/O polling threads; updating the polling
     * threads here prepares for dpif_run.  Implemented by
     * dpif_netdev_class. */
    dpif_poll_threads_set(backer->dpif, pmd_cpu_mask);
    if (dpif_run(backer->dpif)) {
        backer->need_revalidate = REV_RECONFIGURE;
    }
    udpif_run(backer->udpif);
    ...
    return 0;
}
ofproto_run /* Let each bridge do the work that it needs to do. */
run invokes a series of ofproto_dpif-related run functions, including netflow_run, dpif_sflow_run, dpif_ipfix_run, port_run, bundle_run, stp_run, rstp_run, mac_learning_run, mcast_snooping_run, and so on.
process_port_change handles port addition, port removal, and changes to port attributes.
connmgr_run calls 'handle_openflow' on each message received over an OpenFlow connection to process it, and keeps the OpenFlow connections themselves and their messages moving along.
int
ofproto_run(struct ofproto *p)
{
    int error;
    uint64_t new_seq;

    error = p->ofproto_class->run(p);
    ...
    if (p->ofproto_class->port_poll) {
        char *devname;

        while ((error = p->ofproto_class->port_poll(p, &devname)) != EAGAIN) {
            process_port_change(p, error, devname);
        }
    }
    ...
    connmgr_run(p->connmgr, handle_openflow);

    return error;
}
bridge_reconfigure pushes the updated bridge configuration down layer by layer: first it updates the bridge layer by comparing ovs_cfg against the configuration held in the global variable all_bridges, then it updates the ofproto layer, and finally it updates the ofproto_dpif layer by calling bridge_run__ -> type_run.
void
bridge_reconfigure(const struct ovsrec_open_vswitch *ovs_cfg)
{
    struct sockaddr_in *managers;
    struct bridge *br, *next;
    int sflow_bridge_number;
    size_t n_managers;
    ...
    /* Destroy "struct bridge"s, "struct port"s, and "struct iface"s according
     * to 'ovs_cfg', with only very minimal configuration otherwise.
     *
     * This is mostly an update to bridge data structures.  Nothing is pushed
     * down to ofproto or lower layers. */

    /* First, add every br_cfg in the configuration to new_br.
     * 'br' is a node in the global variable all_bridges.
     * Look up br->cfg in new_br by br->name; if it is missing, or br->type
     * has changed, delete that br.
     * Look up br in all_bridges by br_cfg->name; if absent, add br_cfg to
     * all_bridges. */
    add_del_bridges(ovs_cfg);
    HMAP_FOR_EACH (br, node, &all_bridges) {
        /* wanted_ports contains all of the ports in
         * ovsrec_bridge->cfg->ports[i], plus the local port. */
        bridge_collect_wanted_ports(br, &br->wanted_ports);
        /* Handle ports and interfaces; the logic is similar to the bridge
         * handling above. */
        bridge_del_ports(br, &br->wanted_ports);
    }

    /* Start pushing configuration changes down to the ofproto layer:
     *
     *   - Delete ofprotos that are no longer configured.
     *
     *   - Delete ports that are no longer configured.
     *
     *   - Reconfigure existing ports to their desired configurations, or
     *     delete them if not possible.
     *
     * We have to do all the deletions before we can do any additions, because
     * the ports to be added might require resources that will be freed up by
     * deletions (they might especially overlap in name). */
    bridge_delete_ofprotos();
    HMAP_FOR_EACH (br, node, &all_bridges) {
        if (br->ofproto) {
            bridge_delete_or_reconfigure_ports(br);
        }
    }

    /* Finish pushing configuration changes to the ofproto layer:
     *
     *   - Create ofprotos that are missing.
     *
     *   - Add ports that are missing. */
    HMAP_FOR_EACH_SAFE (br, next, node, &all_bridges) {
        if (!br->ofproto) {
            int error;

            error = ofproto_create(br->name, br->type, &br->ofproto);
            if (error) {
                VLOG_ERR("failed to create bridge %s: %s", br->name,
                         ovs_strerror(error));
                shash_destroy(&br->wanted_ports);
                bridge_destroy(br, true);
            } else {
                /* Trigger storing datapath version. */
                seq_change(connectivity_seq_get());
            }
        }
    }
    HMAP_FOR_EACH (br, node, &all_bridges) {
        bridge_add_ports(br, &br->wanted_ports);
        shash_destroy(&br->wanted_ports);
    }

    reconfigure_system_stats(ovs_cfg);

    /* Complete the configuration. */
    sflow_bridge_number = 0;
    collect_in_band_managers(ovs_cfg, &managers, &n_managers);
    HMAP_FOR_EACH (br, node, &all_bridges) {
        struct port *port;

        /* We need the datapath ID early to allow LACP ports to use it as the
         * default system ID. */
        bridge_configure_datapath_id(br);

        HMAP_FOR_EACH (port, hmap_node, &br->ports) {
            struct iface *iface;

            port_configure(port);

            LIST_FOR_EACH (iface, port_elem, &port->ifaces) {
                iface_set_ofport(iface->cfg, iface->ofp_port);
                /* Clear eventual previous errors */
                ovsrec_interface_set_error(iface->cfg, NULL);
                iface_configure_cfm(iface);
                iface_configure_qos(iface, port->cfg->qos);
                iface_set_mac(br, port, iface);
                ofproto_port_set_bfd(br->ofproto, iface->ofp_port,
                                     &iface->cfg->bfd);
                ofproto_port_set_lldp(br->ofproto, iface->ofp_port,
                                      &iface->cfg->lldp);
                ofproto_port_set_config(br->ofproto, iface->ofp_port,
                                        &iface->cfg->other_config);
            }
        }
        bridge_configure_mirrors(br);
        bridge_configure_forward_bpdu(br);
        bridge_configure_mac_table(br);
        bridge_configure_mcast_snooping(br);
        bridge_configure_remotes(br, managers, n_managers);
        bridge_configure_netflow(br);
        bridge_configure_sflow(br, &sflow_bridge_number);
        bridge_configure_ipfix(br);
        bridge_configure_spanning_tree(br);
        bridge_configure_tables(br);
        bridge_configure_dp_desc(br);
        bridge_configure_aa(br);
    }
    free(managers);

    /* The ofproto-dpif provider does some final reconfiguration in its
     * ->type_run() function.  We have to call it before notifying the
     * database client that reconfiguration is complete, otherwise there is a
     * very narrow race window in which e.g. ofproto/trace will not recognize
     * the new configuration (sometimes this causes unit test failures). */
    bridge_run__();
}