
A General Optimization Framework for OpenFlow Rule Allocation and Endpoint Policy Enforcement (Part 4)

2015-11-05 09:01 · 369 views
Preface: This post covers a foreign paper, not especially well known, that I consulted for my graduation project, titled "OFFICER: A general optimization framework for OpenFlow Rule Allocation and Endpoint Policy Enforcement". Interested readers can find and download it by searching for the title. For convenience, I am working through it part by part; today is the fourth part.

IV. EVALUATION

In this section, we evaluate our model and heuristic for the particular case of memory-constrained networks as defined in Sec. III, for Internet Service Provider (ISP) and Data Center (DC) networks. We selected these two deployment scenarios of OpenFlow because they are antithetical. On the one hand, ISP networks tend to be built organically and follow the evolution of their customers [12]. On the other hand, DC networks are methodically structured and often present a high degree of symmetry [13]. Moreover, while the workload in both cases is heavy-tailed, with a few flows accounting for most of the traffic, DCs exhibit more locality in their traffic, with most communications remaining confined between servers of the same rack [11].


A. Methodology

We use numerical simulations to evaluate the costs and benefits of relaxing the routing policy in a memory-constrained OpenFlow network. There are four main factors that can influence the allocation matrix: the topology, the traffic workload, the controller placement, and the allocation algorithm.


1) Topologies: For both the ISP and DC cases we consider two topologies, a small one and a large one. As the small ISP topology we use the Abilene [14] network with 100 servers attached randomly (labeled Abilene in the remainder of the paper). For the large one we use a synthetic scale-free topology composed of 100 switches with 1000 servers attached randomly (labeled ScaleFree).


The topologies for DC consist of a synthetic fat tree with 8 pods and 128 servers (labeled FatTree8) for the small one, and a synthetic fat tree with 16 pods and 1024 servers (labeled FatTree16) for the large one. Both synthetic topologies are randomly produced by the generator proposed by Saino et al. in [15]. Details of the topologies are summarized in Table II. To concentrate on the effect of memory on the allocation matrix, we consider infinite-bandwidth links in all four topologies.
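As a sanity check on these sizes, a standard k-ary fat tree with k pods hosts k·(k/2)² servers, which matches 128 servers for 8 pods and 1024 for 16. A minimal sketch (the formulas are the standard fat-tree ones; the paper itself only states the pod and server counts):

```python
def fat_tree_sizes(k):
    """Element counts for a standard k-ary fat tree with k pods.

    Each pod has k/2 aggregation and k/2 edge switches; each edge switch
    serves k/2 servers; the core layer has (k/2)^2 switches.
    """
    half = k // 2
    return {
        "core": half ** 2,          # core switches
        "aggregation": k * half,    # aggregation switches across all pods
        "edge": k * half,           # edge (top-of-rack) switches
        "servers": k * half ** 2,   # servers (hosts)
    }
```

With k = 8 this yields the 128 servers of FatTree8, and with k = 16 the 1024 servers of FatTree16.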




2) Workloads: For each topology, we randomly produce 24 workloads using publicly available workload generators [15], [16], each representing the traffic in one hour. For each workload, we extract the set F of origin-destination flows together with their assigned source and destination servers. We then use the volume of a flow as its normalized value for the objective function (11) (i.e., for any f ∈ F and any l ∈ E(f): w_{f,l} = p_f). A flow f ∈ F starts from the ingress link of the source server and asks to exit at the egress link of the destination server.
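The weight assignment above can be sketched directly: every candidate egress link of a flow receives the flow's volume as its weight. The `flows` structure below is a hypothetical representation, not from the paper:

```python
def objective_weights(flows):
    """Build w[f, l] = p_f for every l in E(f), as used in objective (11).

    `flows` maps a flow id to (volume p_f, set of candidate egress links E(f)).
    """
    w = {}
    for f, (p_f, egress_links) in flows.items():
        for l in egress_links:
            w[(f, l)] = p_f  # the flow's volume is its normalized value
    return w
```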


3) Controller placement: The controller placement and the default path towards it are two major factors influencing the allocation matrix. In the evaluation, we consider two extreme controller positions in the topology: the most centralized position (i.e., the node that has the minimum total distance to the other nodes, denoted MIN), and the least centralized position (i.e., the node that has the maximum total distance to the other nodes, denoted MAX). In all cases, the default path is constituted by the minimum shortest-path tree from all ingress links to the controller. The most centralized position limits the default path's length and hence the number of possible deflection points. On the contrary, the least centralized position allows a longer default path and more choices for the deflection point.
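Selecting the MIN and MAX positions can be sketched with a breadth-first search over an unweighted topology (hop-count distances are an assumption; the paper does not specify the distance metric):

```python
from collections import deque

def total_distance(adj, src):
    """Sum of hop distances from src to all reachable nodes (BFS)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(dist.values())

def controller_positions(adj):
    """Return (MIN, MAX): the most and least centralized nodes,
    i.e. the nodes with minimum and maximum total distance to the others."""
    totals = {n: total_distance(adj, n) for n in adj}
    return min(totals, key=totals.get), max(totals, key=totals.get)
```

On a five-node path a–b–c–d–e, the middle node c is MIN and an endpoint is MAX.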


4) Allocation algorithms: To evaluate the quality of the heuristic defined in Sec. III-C, we compare it with the following two allocation algorithms:

• Random Placement (RP): a variant of OFFICER where flow sets are randomly ranked and deflection points are randomly selected.

• Optimum (OP): the allocation matrix corresponds to the optimal one as defined in Sec. III-B and is computed using CPLEX. Unfortunately, as computing the optimum is NP-hard, it is impossible to apply it to the large ISP and large DC topologies.
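The RP baseline can be sketched as follows; the helper name and input shapes are hypothetical, since the paper gives only the high-level description:

```python
import random

def random_placement(flow_sets, deflection_candidates, seed=None):
    """Random Placement (RP): shuffle the ranking of flow sets and pick a
    deflection point uniformly at random for each one."""
    rng = random.Random(seed)        # seeded for reproducible experiments
    order = list(flow_sets)
    rng.shuffle(order)               # random ranking of the flow sets
    return [(fs, rng.choice(deflection_candidates)) for fs in order]
```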


Because of space constraints, we only present results for the CE strategy for choosing the deflection point. Nevertheless, extensive evaluations showed that this strategy outperforms the other two by consuming less memory.


B. Results

In this section, we compare the rule allocation obtained with OFFICER against the optimal allocation and the random allocation. We also study the impact of the controller placement on the allocation. The benefit of OFFICER is identified as the amount of traffic able to strictly respect the endpoint policy, while the drawback is expressed as the path stretch. We also link the number of flows passing through nodes with their topological location.






In Fig. 3 and Fig. 4, the x-axis gives the normalized total memory capacity, computed as the ratio of the total number of forwarding entries to install in the network to the number of flows (e.g., a capacity of 2 means that on average a flow consumes two forwarding entries). Thin curves refer to results obtained with the controller placed at the most centralized location (i.e., MIN), while thick curves refer to results for the least centralized location (i.e., MAX). The y-axis indicates the average value and standard deviation over the 24 workloads for the metric of interest. Curves are labeled by the concatenation of their allocation algorithm acronym (i.e., CE, RP, and OP) and their controller location (i.e., MIN and MAX).
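The x-axis normalization is simply entries per flow; as a trivial sketch:

```python
def normalized_capacity(total_forwarding_entries, num_flows):
    """Normalized total memory capacity: forwarding entries per flow.
    A value of 2 means each flow consumes two entries on average."""
    return total_forwarding_entries / num_flows
```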


Reference points indicate the value of the metric of interest if all flows are delivered to their egress link (i) when strictly following the shortest path, denoted with a square, and (ii), whenever computable, when minimizing memory usage as formulated in Sec. III-A, denoted with a circle. For a fair comparison with OFFICER, we also use the aggregation with the default path for these reference points. It is worth noting that the squares lie to the right of the circles, confirming that by relaxing the routing policy it is possible to deliver all the flows with less memory capacity.


Fig. 3 evaluates the proportion of the volume of traffic that can be delivered to an egress point satisfying the endpoint policy, as a function of the capacity. In all situations, OFFICER is able to satisfy 100% of the traffic with less capacity than a strict shortest-path routing policy. In addition, when the optimum can be computed, we note that OFFICER is nearly optimal and is even able to satisfy 100% of the traffic with the optimal minimum capacity. This happens because there are neither link-bandwidth nor per-switch memory limitations, and because in our two examples flows never cross the default path twice. On the contrary, the random allocation behaves poorly in all situations and requires up to 150% more memory than OFFICER to cover the same proportion of traffic.


Also, with only 50% of the minimal memory capacity required to satisfy 100% of the traffic, OFFICER satisfies from 75% to 95% of the traffic. The marginal gain of increasing the memory is hence limited, and the choice of how much memory to put in a network is a tradeoff between memory costs and the loss of revenue induced by using the default path.


Relaxing the routing policy makes it possible to deliver more traffic as path diversity is increased, but comes at the cost of longer paths. Fig. 4 depicts the average path stretch (compared to the shortest path in the case of infinite memory) as a function of the capacity. Fig. 4 shows that the path stretch induced by the optimal placement is negligible in all types of topologies and is kept small for OFFICER using the CE strategy (i.e., less than 5%). On the contrary, random placement significantly increases path length. In DC topologies, the average path stretch is virtually equal to 1 (Fig. 4(c) and Fig. 4(d)). The reason is that in DC networks there is a high diversity of shortest paths between node pairs, so it is more likely to find a shortest path satisfying all constraints than in ISP topologies. It is also worth noting that in DCs there are many in-rack communications that consume less memory than out-of-rack communications, thus reducing the risk of overloading the memory of inter-rack switches. Interestingly, even though there is a path stretch, the overall memory consumption is reduced, indicating that it is compensated by the aggregation with the default rule.
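The path-stretch metric of Fig. 4 can be sketched as the mean ratio of actual to shortest-path hop counts over all flows (the per-flow representation below is an assumption):

```python
def average_stretch(actual_hops, shortest_hops):
    """Average path stretch over flows.

    Both arguments map a flow id to a hop count: the path actually taken
    and the shortest path under infinite memory. A stretch of 1 means no
    flow was deflected onto a longer path.
    """
    ratios = [actual_hops[f] / shortest_hops[f] for f in actual_hops]
    return sum(ratios) / len(ratios)
```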


For ISP networks, when the optimal allocation is computed or approximated with OFFICER, there is a high correlation (i.e., over 0.9) between the memory required on a switch and its topological location (e.g., betweenness centrality and node degree). On the contrary, no significant correlation is observed in DCs, where there is much more in-rack communication than out-of-rack communication [16]. This suggests placing the switches with the highest memory capacity at the most central locations in ISPs, and within racks in DCs.
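A correlation of this kind can be measured with a plain Pearson coefficient between per-switch memory demand and a topological metric such as node degree; a self-contained sketch (the paper does not state which correlation estimator was used):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples,
    e.g. per-switch memory demand vs. node degree."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```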


Even though the controller placement is important in OFFICER, as it leverages the default path, Fig. 3 and Fig. 4 do not exhibit a significant impact of the location of the controller. Nevertheless, no strong conclusion can be drawn from our evaluation. Actually, so many factors drive the placement of the controller [17] that we believe it is better to consider the controller placement as an input of the rule allocation problem, and we leave its full study for future work.
