
Linux Traffic Control Tool: TC


Network traffic control (shaping)

How TC (traffic control) works

By configuring different types of queues on a network interface, TC changes the rate and priority at which packets are sent, which is how traffic control is achieved. Whenever the kernel needs to send a packet out through an interface, it enqueues the packet according to the qdisc (queueing discipline) configured for that interface; the kernel then dequeues as many packets as possible from the qdisc and hands them to the network adapter driver.
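
You can inspect the qdisc currently attached to an interface with tc qdisc show. The output below is only illustrative; the default qdisc depends on the kernel and distribution (pfifo_fast here, but fq_codel is common on newer systems):

# tc qdisc show dev eth0
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1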

tc has been included in the Linux kernel since version 2.2, so it is a service a Linux server can provide out of the box. See the man pages for the detailed command reference.

Traffic is handled by three kinds of objects: qdisc (queueing discipline), class, and filter.
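
A minimal sketch of how the three objects fit together (eth0, the rates, and the address 192.168.1.100 are placeholders, not values from the examples later in this article):

tc qdisc add dev eth0 root handle 1: htb default 20            # qdisc: classful root queue
tc class add dev eth0 parent 1: classid 1:10 htb rate 2mbit    # class: a 2 Mbit class
tc class add dev eth0 parent 1: classid 1:20 htb rate 1mbit    # class: the default 1 Mbit class
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dst 192.168.1.100/32 flowid 1:10                  # filter: steer traffic to 192.168.1.100 into 1:10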

Traffic control methods

Traffic control covers the following methods:

SHAPING

When traffic is shaped, its transmission rate is kept below a configured value. The limit can be set well below the available bandwidth, which smooths out traffic bursts and makes the network more stable. Shaping applies only to outgoing (egress) traffic.
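
For example, a single tbf (token bucket filter) qdisc shapes all outgoing traffic on an interface; the values below are illustrative:

tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms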


SCHEDULING

By scheduling packet transmission, bandwidth can be allocated by priority within the available capacity. Scheduling likewise applies only to outgoing traffic.
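
As a sketch (not part of the original examples), the prio qdisc schedules purely by priority: its band 1:1 is always served before 1:2, and 1:2 before 1:3. The filter shown steers SSH traffic into the highest-priority band:

tc qdisc add dev eth0 root handle 1: prio
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 22 0xffff flowid 1:1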


POLICING

While shaping handles outgoing traffic, policing handles received (ingress) traffic.
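
A minimal sketch of ingress policing, which drops received traffic above a configured rate (the interface and the rate are placeholders):

tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match u32 0 0 police rate 1mbit burst 10k drop flowid :1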


DROPPING

If traffic exceeds a configured bandwidth, packets are dropped, whether inbound or outbound.


Examples

Controlling network latency: ping times

#ping google.com
PING google.com (74.125.224.230) 56(84) bytes of data.
64 bytes from lax04s08-in-f6.1e100.net (74.125.224.230): icmp_req=1 ttl=56 time=24.1 ms
64 bytes from lax04s08-in-f6.1e100.net (74.125.224.230): icmp_req=2 ttl=56 time=24.2 ms
64 bytes from lax04s08-in-f6.1e100.net (74.125.224.230): icmp_req=3 ttl=56 time=21.9 ms


The average latency is around 23 ms. To raise it to roughly 120 ms, add a 97 ms delay:

tc qdisc add dev eth0 root netem delay 97ms


Note: the interface name (eth0 here) depends on the system; use ip addr or ifconfig to check.

Use tc -s qdisc to verify the rule:

# tc -s qdisc
qdisc netem 8002: dev eth0 root refcnt 2 limit 1000 delay 97.0ms

# ping google.com
PING google.com (74.125.239.8) 56(84) bytes of data.
64 bytes from lax04s09-in-f8.1e100.net (74.125.239.8): icmp_req=1 ttl=56 time=122 ms
64 bytes from lax04s09-in-f8.1e100.net (74.125.239.8): icmp_req=2 ttl=56 time=120 ms
64 bytes from lax04s09-in-f8.1e100.net (74.125.239.8): icmp_req=3 ttl=56 time=120 ms


Deleting the rule

tc qdisc del dev eth0 root netem


Controlling traffic rates

tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:1 htb rate 6mbit burst 15k
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2mbit ceil 3mbit burst 15k
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1mbit ceil 2mbit burst 15k
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 0.5mbit ceil 1mbit burst 15k

tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev eth0 parent 1:30 handle 30: sfq perturb 10


The first block creates the root handle 1: on eth0 using HTB, a classful qdisc, with unclassified packets sent by default to the child class 1:30. It then attaches class 1:1 directly under the root 1: (which can also be written 1:0) with a total rate of 6 Mbit and a burst allowance of 15k. Next it creates a child class 1:10 with a rate of 2 Mbit and a ceiling of 3 Mbit (meaning it may borrow up to 1 Mbit when spare bandwidth is available), a child class 1:20 with a rate of 1 Mbit, a ceiling of 2 Mbit and a burst of 15k, and the default child class 1:30 with a rate of 0.5 Mbit and a ceiling of 1 Mbit.

The second block is about scheduling, i.e. making sure no single flow hogs the bandwidth: each of the classes 1:10, 1:20 and 1:30 gets an SFQ qdisc whose hash is perturbed every 10 seconds, so that no particular connection can stay permanently ahead of the others simply because of how it happens to hash.
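
Note that the commands above attach no filters, so with default 30 all traffic ends up in class 1:30. To actually make use of 1:10 and 1:20, classification filters are needed; a hypothetical example (the port numbers are placeholders):

tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 \
    match ip dport 80 0xffff flowid 1:10
tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 \
    match ip sport 22 0xffff flowid 1:20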

Controlling network packet loss and other impairments

The goal here is mainly to emulate behavior under different network conditions:

Latency and jitter

Using the netem qdisc we can emulate network latency and jitter on all outgoing packets (see the netem man page for details). Some examples:

$ tc qdisc add dev eth0 root netem delay 100ms
<delay packets for 100ms>

$ tc qdisc add dev eth0 root netem delay 100ms 10ms
<delay packets with value from uniform [90ms-110ms] distribution>

$ tc qdisc add dev eth0 root netem delay 100ms 10ms 25%
<delay packets with value from uniform [90ms-110ms] distribution and 25% \
correlated with value of previous packet>

$ tc qdisc add dev eth0 root netem delay 100ms 10ms distribution normal
<delay packets with value from normal distribution (mean 100ms, jitter 10ms)>

$ tc qdisc add dev eth0 root netem delay 100ms 10ms 25% distribution normal
<delay packets with value from normal distribution (mean 100ms, jitter 10ms) \
and 25% correlated with value of previous packet>
Packet loss

Packet loss can be emulated with the netem qdisc as well (see the netem man page). Some simple examples:


$ tc qdisc add dev eth0 root netem loss 0.1%
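<drop packets randomly with a probability of 0.1%>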

$ tc qdisc add dev eth0 root netem loss 0.3% 25%
<drop packets randomly with probability of 0.3% and 25% correlated with drop \
decision for previous packet>
But netem can emulate even more complex loss mechanisms, such as the Gilbert-Elliot scheme. This scheme defines two states, Good (or drop Gap) and Bad (or drop Burst); the drop probability in each state and the probabilities of switching between the states are all configurable. See section 3 of this paper for more info.


$ tc qdisc add dev eth0 root netem loss gemodel 1% 10% 70% 0.1%
<drop packets using Gilbert-Elliot scheme with probabilities \
move-to-burstmode (p) of 1%, move-to-gapmode (r) of 10%, \
drop-in-burstmode (1-h) of 70% and drop-in-gapmode (1-k) of 0.1%>
Duplication and corruption

Packet duplication and corruption are also possible with netem (see the netem man page). The examples below are self-explanatory.


$ tc qdisc add dev eth0 root netem duplicate <chance> [<correlation>]

$ tc qdisc add dev eth0 root netem corrupt <chance> [<correlation>]
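
For instance, with illustrative values (not from the original text):

$ tc qdisc add dev eth0 root netem duplicate 1%
<duplicate 1% of the packets>

$ tc qdisc add dev eth0 root netem corrupt 0.1%
<introduce a single-bit error into 0.1% of the packets>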


Reordering

Finally, netem allows packets to be reordered (see the netem man page). This is achieved by holding some packets back for a specified amount of time. In other words, reordering will only occur if the interval between packets is smaller than the configured delay!
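
A simple sketch with illustrative values:

$ tc qdisc add dev eth0 root netem delay 10ms reorder 25% 50%
<send 25% of packets (with 50% correlation) immediately, delay the rest by 10ms>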

Combining multiple impairments

Configuring a single impairment is useful for debugging, but to emulate real networks, multiple impairments often need to be active at the same time.

Multiple netem impairments can be combined into a single qdisc, as shown in the following example.

$ tc qdisc add dev eth0 root netem delay 10ms reorder 25% 50% loss 0.2%


To combine these impairments with rate limitation, we need to chain the tbf and netem qdiscs. This process is described in the first post of this series. For example, it would lead to the following commands:

$ tc qdisc del dev eth0 root
$ tc qdisc add dev eth0 root handle 1: netem delay 10ms reorder 25% 50% loss 0.2%
$ tc qdisc add dev eth0 parent 1: handle 2: tbf rate 1mbit burst 32kbit latency 400ms
$ tc qdisc show dev eth0
qdisc netem 1: root refcnt 2 limit 1000 delay 10.0ms loss 0.2% reorder 25% 50% gap 1
qdisc tbf 2: parent 1: rate 1000Kbit burst 4Kb lat 400.0ms


Simply switch the two qdiscs to have the packet’s rate limited first and the impairment applied afterwards.
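
A sketch of the reversed chain, reusing the same illustrative values:

$ tc qdisc del dev eth0 root
$ tc qdisc add dev eth0 root handle 1: tbf rate 1mbit burst 32kbit latency 400ms
$ tc qdisc add dev eth0 parent 1: handle 2: netem delay 10ms reorder 25% 50% loss 0.2%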

References: https://www.excentis.com/blog/use-linux-traffic-control-impairment-node-test-environment-part-2

http://bencane.com/2012/07/16/tc-adding-simulated-network-latency-to-your-linux-server/