
[Troubleshooting Series] Preventing Network Attacks with Flow Control

2010-06-28 00:35

Sailor_forever sailing_9806#163.com

(This original article was published on Sailor_forever's personal blog and may not be used commercially without the author's permission. No individual, media outlet, or website may plagiarize it; online media reprinting it must credit the source and link to the original, otherwise it constitutes infringement. For any questions, leave a comment or email sailing_9806#163.com.)

http://blog.csdn.net/sailor_8318/archive/2010/06/28/5698353.aspx

[Abstract] Starting from the experience of being attacked on a shared LAN, this article briefly explains how ARP attacks work, and then describes how to defend an embedded Linux device against network attacks. It details how to rate-limit traffic in the NIC driver so as to mitigate certain classes of attack, and presents the corresponding test data.

[Keywords] LAN attack, ARP firewall, Gbit, receive interrupt, BD (buffer descriptor) ring

Have you ever shared an apartment and split one broadband line, only to find that some housemates' connections constantly act up: QQ keeps dropping offline, while someone else streams video without a single stutter? Same roof, yet some feast on meat while others sip porridge. Why? There is usually a trick behind it: most likely someone on the LAN is running attack software that keeps flooding your machine with hostile traffic and degrades your connection.

These attacks usually rely on ARP packets, because under normal conditions a host must respond to ARP, and such packets only work within a LAN segment. When a machine has to answer a flood of ARP packets, its network performance drops, and often its overall performance collapses as well, which is exactly the attacker's goal.

Typical attack tools include P2P-management software such as NetCut ("Network Scissors") and Network Law Enforcer; correspondingly there are defenses such as ARP firewalls. As the saying goes, every measure breeds a countermeasure, and we cannot just sit and wait to be hit.

Recently the I&V team filed a nasty bug: they pointed a traffic generator at our device to check whether it keeps working under abnormal conditions. It is like sleeping with your phone on and being forced to answer a call every few minutes; how could you get any rest?

For a network device, the NIC can normally be configured to accept only broadcast frames and frames whose destination MAC is its own, with everything else filtered automatically in hardware. But on a LAN, anyone who learns a device's IP and MAC addresses can send it arbitrary attack packets that pass this filter.

In Linux, when the NIC receives a packet it raises the receive interrupt; under normal conditions the actual data processing happens in a softirq. Softirqs run at higher priority than other kernel tasks, let alone user-space processes, so when the NIC receives packets at a high rate, overall system performance degrades sharply.

The point of this I&V test case is precisely to check whether our device can withstand such an attack. Because the attack packets can be arbitrary, and the port under attack is also the management port carrying legitimate traffic, filtering by packet content is not an option.

The essence of the attack is to keep the CPU busy processing packets, driving its load up until it has no time left for normal services. The defense is therefore to rate-limit the NIC, and there are two options:

1) Force the Gbit NIC down to 10M or 100M. The limit is then enforced in hardware, and no matter how heavy the attack traffic is, an 800 MHz CPU can keep up.

But that is giving up eating for fear of choking, or firing a cannon at a mosquito: permanently crippling a Gbit NIC is too wasteful.

2) Limit the number of packets processed per unit time.

Under normal conditions the management port's traffic stays within a known range; only under attack does it spike. By capping the number of packets the CPU processes per unit time, the CPU keeps enough idle time for its other work, which prevents the device from being paralyzed by the attack.

The drawback is that during an attack some legitimate packets may be dropped as well. Kill a thousand enemies, lose eight hundred of your own; but there is no way around it: for the peace of the realm, better to kill a thousand by mistake than to let one slip through.

The concrete implementation counts, in the receive service routine, the packets received each second; once the count exceeds a configured threshold, subsequent packets are dropped. After one second the counter is cleared and reception resumes, which achieves the flow control.

The receive call chain is: gfar_interrupt -> gfar_receive -> __netif_rx_schedule -> gfar_poll -> gfar_clean_rx_ring
http://lxr.linux.no/#linux+v2.6.25/drivers/net/gianfar.c#L1637
/* The interrupt handler for devices with one interrupt */
static irqreturn_t gfar_interrupt(int irq, void *dev_id)
{
    struct net_device *dev = dev_id;
    struct gfar_private *priv = netdev_priv(dev);

    /* Save ievent for future reference */
    u32 events = gfar_read(&priv->regs->ievent);

    /* Check for reception */
    if (events & IEVENT_RX_MASK)
        gfar_receive(irq, dev_id);

    /* Check for transmit completion */
    if (events & IEVENT_TX_MASK)
        gfar_transmit(irq, dev_id);

    /* Check for errors */
    if (events & IEVENT_ERR_MASK)
        gfar_error(irq, dev_id);

    return IRQ_HANDLED;
}

irqreturn_t gfar_receive(int irq, void *dev_id)
{
    struct net_device *dev = (struct net_device *) dev_id;
    struct gfar_private *priv = netdev_priv(dev);
#ifdef CONFIG_GFAR_NAPI
    u32 tempval;
#else
    unsigned long flags;
#endif

    /* Clear IEVENT, so rx interrupt isn't called again
     * because of this interrupt */
    gfar_write(&priv->regs->ievent, IEVENT_RX_MASK);

    /* support NAPI */
#ifdef CONFIG_GFAR_NAPI
    if (netif_rx_schedule_prep(dev, &priv->napi)) {
        tempval = gfar_read(&priv->regs->imask);
        tempval &= IMASK_RX_DISABLED;
        gfar_write(&priv->regs->imask, tempval);

        __netif_rx_schedule(dev, &priv->napi);
    } else {
        if (netif_msg_rx_err(priv))
            printk(KERN_DEBUG "%s: receive called twice (%x)[%x]\n",
                   dev->name, gfar_read(&priv->regs->ievent),
                   gfar_read(&priv->regs->imask));
    }
#else
    spin_lock_irqsave(&priv->rxlock, flags);
    gfar_clean_rx_ring(dev, priv->rx_ring_size);

    /* If we are coalescing interrupts, update the timer */
    /* Otherwise, clear it */
    if (priv->rxcoalescing)
        gfar_write(&priv->regs->rxic,
                   mk_ic_value(priv->rxcount, priv->rxtime));
    else
        gfar_write(&priv->regs->rxic, 0);

    spin_unlock_irqrestore(&priv->rxlock, flags);
#endif

    return IRQ_HANDLED;
}

#ifdef CONFIG_GFAR_NAPI
static int gfar_poll(struct napi_struct *napi, int budget)
{
    struct gfar_private *priv = container_of(napi, struct gfar_private, napi);
    struct net_device *dev = priv->dev;
    int howmany;

    howmany = gfar_clean_rx_ring(dev, budget);

    if (howmany < budget) {
        netif_rx_complete(dev, napi);

        /* Clear the halt bit in RSTAT */
        gfar_write(&priv->regs->rstat, RSTAT_CLEAR_RHALT);

        gfar_write(&priv->regs->imask, IMASK_DEFAULT);

        /* If we are coalescing interrupts, update the timer */
        /* Otherwise, clear it */
        if (priv->rxcoalescing)
            gfar_write(&priv->regs->rxic,
                       mk_ic_value(priv->rxcount, priv->rxtime));
        else
            gfar_write(&priv->regs->rxic, 0);
    }

    return howmany;
}
#endif

/* gfar_clean_rx_ring() -- Processes each frame in the rx ring
 * until the budget/quota has been reached. Returns the number
 * of frames handled
 */
int gfar_clean_rx_ring(struct net_device *dev, int rx_work_limit)
{
    struct rxbd8 *bdp;
    struct sk_buff *skb;
    u16 pkt_len;
    int howmany = 0;
    struct gfar_private *priv = netdev_priv(dev);
#ifdef CFG_FLOW_CTRL
    static unsigned long rx_pkt_per_sec = 0;
    static unsigned long rx_pkt_limit_per_sec = CFG_FLOW_CTRL_RX_LIMIT;
    static unsigned long rx_pkt_time_start = 0;
#endif

    /* Get the first full descriptor */
    bdp = priv->cur_rx;

    while (!((bdp->status & RXBD_EMPTY) || (--rx_work_limit < 0))) {
        struct sk_buff *newskb;

        rmb();

        /* Add another skb for the future */
        newskb = gfar_new_skb(dev);

        skb = priv->rx_skbuff[priv->skb_currx];

        /* We drop the frame if we failed to allocate a new buffer */
        if (unlikely(!newskb || !(bdp->status & RXBD_LAST) ||
                     bdp->status & RXBD_ERR)) {
            count_errors(bdp->status, dev);

            if (unlikely(!newskb))
                newskb = skb;

            if (skb) {
                dma_unmap_single(&priv->dev->dev,
                                 bdp->bufPtr,
                                 priv->rx_buffer_size,
                                 DMA_FROM_DEVICE);
                dev_kfree_skb_any(skb);
            }
        } else {
            /* Increment the number of packets */
            dev->stats.rx_packets++;

            /* Remove the FCS from the packet length */
            pkt_len = bdp->length - 4;

#ifdef CFG_FLOW_CTRL
            if ((unsigned long)(jiffies - rx_pkt_time_start) > 1 * HZ) {
                rx_pkt_per_sec = 0;           /* clear the per-second counter */
                rx_pkt_time_start = jiffies;  /* start a new window */
            }

            rx_pkt_per_sec++;

            /* Within the flow-control limit: receive; otherwise discard */
            if (rx_pkt_per_sec < rx_pkt_limit_per_sec) {
#endif
                howmany++;
                gfar_process_frame(dev, skb, pkt_len);
                dev->stats.rx_bytes += pkt_len;
#ifdef CFG_FLOW_CTRL
            } else {
                /* Increment the number of dropped packets */
                dev->stats.rx_dropped++;
                /* Free the skb, since it never reaches the TCP/IP
                 * stack; otherwise every dropped packet leaks memory */
                kfree_skb(skb);
            }
#endif
        }

        dev->last_rx = jiffies;

        priv->rx_skbuff[priv->skb_currx] = newskb;

        /* Setup the new bdp */
        gfar_new_rxbdp(dev, bdp, newskb);

        /* Update to the next pointer */
        if (bdp->status & RXBD_WRAP)
            bdp = priv->rx_bd_base;
        else
            bdp++;

        /* update to point at the next skb */
        priv->skb_currx =
            (priv->skb_currx + 1) &
            RX_RING_MOD_MASK(priv->rx_ring_size);
    }

    /* Update the current rxbd pointer to be the next one */
    priv->cur_rx = bdp;

    return howmany;
}

The key point is that when a packet is dropped, its BD must still be released and a fresh one allocated, otherwise subsequent packets can no longer be received. The skb must also be freed on drop, or the TCP/IP stack's memory will eventually be exhausted.

The test data below shows the behavior.

1. While traffic is being injected, the flood of received packets drives the CPU load up sharply; the packets raise softirqs faster than they can be handled, and the work falls to the ksoftirqd kernel thread:

Mem: 548524K used, 485956K free, 0K shrd, 0K buff, 308564K cached

CPU: 3.8% usr 3.6% sys 0.0% nice 0.0% idle 0.0% io 8.0% irq 84.5% softirq

Load average: 4.05 2.96 1.41

PID PPID USER STAT VSZ %MEM %CPU COMMAND

3 2 root RW< 0 0.0 85.7 [ksoftirqd/0]

1004 974 root S 493m 48.7 5.1 swch

936 2 root SW 0 0.0 3.0 [dispatch_timer]

946 944 root S 5176 0.5 1.7 nets

988 963 root S 37668 3.6 0.7 mpmo

timer: Delayed timer issued. EvId = ffff, rec = 10bf, dups = 7

1490 852 root R 3060 0.3 0.7 top

973 960 root S 62568 6.0 0.5 faum

1048 985 root S 8136 0.7 0.3 rifa

1105 965 root S 6528 0.6 0.3 tsag

1333 1308 root S 6224 0.6 0.3 /usr/local/esw/l2-protocol/rci_process

4 2 root SW< 0 0.0 0.3 [events/0]

918 1 root S 123m 12.1 0.1 supr -dh

976 957 root S 44168 4.2 0.1 dxc

965 918 root S 7096 0.6 0.1 mana

982 965 root S 163m 16.1 0.0 xsup

timer: Delayed timer issued. EvId = a9cf, rec = 10c0, dups = 4

970 956 root S 123m 12.2 0.0 caco

966 956 root S 99516 9.6 0.0 capo

943 918 root S 61912 5.9 0.0 cdbm

1012 979 root S 58652 5.6 0.0 csss

2. While traffic is being injected, the device cannot be pinged:

From 150.236.56.76 icmp_seq=2460 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2461 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2462 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2463 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2465 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2466 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2467 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2469 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2470 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2471 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2472 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2473 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2474 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2476 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2477 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2478 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2480 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2481 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2482 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2483 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2484 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2485 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2486 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2487 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2488 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2489 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2490 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2491 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2492 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2493 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2494 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2495 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2496 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2497 Destination Host Unreachable

From 150.236.56.76 icmp_seq=2498 Destination Host Unreachable

3. As soon as the traffic is stopped, the device answers ping again:

64 bytes from 150.236.56.124: icmp_seq=2499 ttl=60 time=2000 ms

64 bytes from 150.236.56.124: icmp_seq=2500 ttl=60 time=1000 ms

64 bytes from 150.236.56.124: icmp_seq=2501 ttl=60 time=0.465 ms

64 bytes from 150.236.56.124: icmp_seq=2502 ttl=60 time=0.325 ms

64 bytes from 150.236.56.124: icmp_seq=2503 ttl=60 time=0.328 ms

64 bytes from 150.236.56.124: icmp_seq=2504 ttl=60 time=0.255 ms

64 bytes from 150.236.56.124: icmp_seq=2505 ttl=60 time=0.322 ms

64 bytes from 150.236.56.124: icmp_seq=2506 ttl=60 time=0.327 ms

64 bytes from 150.236.56.124: icmp_seq=2507 ttl=60 time=0.321 ms

64 bytes from 150.236.56.124: icmp_seq=2508 ttl=60 time=0.314 ms

64 bytes from 150.236.56.124: icmp_seq=2509 ttl=60 time=0.319 ms

64 bytes from 150.236.56.124: icmp_seq=2510 ttl=60 time=0.324 ms

64 bytes from 150.236.56.124: icmp_seq=2511 ttl=60 time=0.322 ms

64 bytes from 150.236.56.124: icmp_seq=2512 ttl=60 time=0.320 ms

64 bytes from 150.236.56.124: icmp_seq=2513 ttl=60 time=0.324 ms

64 bytes from 150.236.56.124: icmp_seq=2514 ttl=60 time=0.252 ms

64 bytes from 150.236.56.124: icmp_seq=2515 ttl=60 time=0.327 ms

64 bytes from 150.236.56.124: icmp_seq=2516 ttl=60 time=0.251 ms

64 bytes from 150.236.56.124: icmp_seq=2517 ttl=60 time=0.325 ms

64 bytes from 150.236.56.124: icmp_seq=2518 ttl=60 time=0.323 ms

64 bytes from 150.236.56.124: icmp_seq=2519 ttl=60 time=0.335 ms

64 bytes from 150.236.56.124: icmp_seq=2520 ttl=60 time=0.251 ms

64 bytes from 150.236.56.124: icmp_seq=2521 ttl=60 time=0.319 ms

64 bytes from 150.236.56.124: icmp_seq=2522 ttl=60 time=0.319 ms

64 bytes from 150.236.56.124: icmp_seq=2523 ttl=60 time=0.252 ms

64 bytes from 150.236.56.124: icmp_seq=2524 ttl=60 time=0.325 ms

64 bytes from 150.236.56.124: icmp_seq=2525 ttl=60 time=0.319 ms

64 bytes from 150.236.56.124: icmp_seq=2526 ttl=60 time=0.327 ms

--- 150.236.56.124 ping statistics ---

2526 packets transmitted, 166 received, +2022 errors, 93% packet loss, time 2528922ms

rtt min/avg/max/mdev = 0.210/42.823/2352.017/270.895 ms, pipe 4

4. After the test, the device is healthy and the CPU load returns to normal:

Mem: 548896K used, 485584K free, 0K shrd, 0K buff, 308560K cached

CPU: 0.0% usr 9.0% sys 0.0% nice 90.9% idle 0.0% io 0.0% irq 0.0% softirq

Load average: 0.27 1.91 1.45

PID PPID USER STAT VSZ %MEM %CPU COMMAND

1601 852 root R 2948 0.2 6.0 top

1004 974 root S 493m 48.7 3.0 swch

982 965 root S 163m 16.1 0.0 xsup

970 956 root S 123m 12.2 0.0 caco

918 1 root S 123m 12.1 0.0 supr -dh

966 956 root S 99516 9.6 0.0 capo

973 960 root S 62568 6.0 0.0 faum

943 918 root S 61912 5.9 0.0 cdbm

1012 979 root S 58652 5.6 0.0 csss

975 957 root S 48932 4.7 0.0 alrm

955 918 root S 48000 4.6 0.0 chkp

987 963 root S 46740 4.5 0.0 dcnm

945 944 root S 46484 4.4 0.0 ipif

968 956 root S 46292 4.4 0.0 shac

976 957 root S 44168 4.2 0.0 dxc

977 957 root S 43668 4.2 0.0 conf

991 918 root S 42488 4.1 0.0 ces_sc

1000 964 root S 41580 4.0 0.0 misc

971 956 root S 39244 3.7 0.0 impo

952 918 root S 39048 3.7 0.0 swdl

SAILING:root:# ping 150.236.70.1

PING 150.236.70.1 (150.236.70.1): 56 data bytes

64 bytes from 150.236.70.1: seq=0 ttl=252 time=4.101 ms

64 bytes from 150.236.70.1: seq=1 ttl=252 time=1.542 ms

64 bytes from 150.236.70.1: seq=2 ttl=252 time=1.532 ms

^C

--- 150.236.70.1 ping statistics ---

3 packets transmitted, 3 packets received, 0% packet loss

round-trip min/avg/max = 1.532/2.391/4.101 ms

SAILING:root:#

Appendix:

On the dangers of LAN ARP attacks and how to guard against them
http://www.duote.com/tech/1/2703_1.html
What is a LAN ARP attack?
http://hi.baidu.com/seowzyh/blog/item/99242c2d821005331f3089e6.html