Ethernet Channel Bonding on RHEL 6
2011-09-13 16:17
Introduction
Ethernet Channel Bonding is what we usually call NIC bonding. Here is a definition I found on Wikipedia:
Channel bonding is a technique in computer networking for arranging and scheduling two or more network interfaces on a single machine, either to increase throughput or to combine the interfaces into a redundancy (failover) mechanism.
Note
Unlike RHEL 5.4, RHEL 6 no longer ships /etc/modprobe.conf. You have to create a file of your own under /etc/modprobe.d/, such as the bond0.conf used below.
Load the bonding module
modprobe bonding
Configure bonding
# cat /etc/modprobe.d/bond0.conf
alias bond0 bonding

cd /etc/sysconfig/network-scripts/

# grep -v "^#" ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.5.88
NETMASK=255.255.255.0
GATEWAY=192.168.5.1
USERCTL=no

# grep -v "^#" ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no

# grep -v "^#" ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

# echo 100 > /sys/class/net/bond0/bonding/miimon
# echo 6 > /sys/class/net/bond0/bonding/mode

Mode of operation: 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb
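The two echo commands into /sys take effect immediately but do not survive a reboot. On RHEL 6 the usual way to persist them (an assumption on my part, not part of the original setup) is a BONDING_OPTS line in ifcfg-bond0, which the ifup scripts apply when the bond comes up. A minimal sketch, writing to a scratch file here purely for illustration:

```shell
# Sketch, assuming RHEL 6's network scripts honor BONDING_OPTS in ifcfg-bond0.
# mktemp stands in for /etc/sysconfig/network-scripts/ifcfg-bond0 so this
# can be tried without touching a real system.
ifcfg=$(mktemp)
cat > "$ifcfg" <<'EOF'
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.5.88
NETMASK=255.255.255.0
GATEWAY=192.168.5.1
USERCTL=no
BONDING_OPTS="mode=6 miimon=100"
EOF
grep '^BONDING_OPTS' "$ifcfg"
```

With this in place, the per-boot echoes into /sys/class/net/bond0/bonding/ become unnecessary.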
Restart the network
# /etc/init.d/network restart
Verify (note that the driver output below still reports round-robin mode and a miimon of 0, so the echoes into /sys had apparently not taken effect when this snapshot was taken)
ifconfig
bond0     Link encap:Ethernet  HWaddr 00:18:8B:8D:D6:07
          inet addr:192.168.5.88  Bcast:192.168.5.255  Mask:255.255.255.0
          inet6 addr: fe80::218:8bff:fe8d:d607/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:24448 errors:0 dropped:0 overruns:0 frame:0
          TX packets:353 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1697415 (1.6 MiB)  TX bytes:51100 (49.9 KiB)

eth0      Link encap:Ethernet  HWaddr 00:18:8B:8D:D6:07
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:13320 errors:0 dropped:0 overruns:0 frame:0
          TX packets:168 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:982133 (959.1 KiB)  TX bytes:24306 (23.7 KiB)
          Interrupt:19

eth1      Link encap:Ethernet  HWaddr 00:18:8B:8D:D6:07
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:11128 errors:0 dropped:0 overruns:0 frame:0
          TX packets:185 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:715282 (698.5 KiB)  TX bytes:26794 (26.1 KiB)
          Interrupt:17
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: e0:05:c5:f2:43:85

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:18:8b:8d:d6:07
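For scripted health checks, the /proc file above is easy to parse. A sketch that counts slaves reporting an MII link, run here against a here-doc copy of the output so it works without a live bond:

```shell
# Sketch: count slaves whose MII link is up, as a monitoring script might.
# A here-doc stands in for /proc/net/bonding/bond0 so this runs anywhere.
status=$(mktemp)
cat > "$status" <<'EOF'
Bonding Mode: load balancing (round-robin)
MII Status: up
Slave Interface: eth1
MII Status: up
Slave Interface: eth0
MII Status: up
EOF
# The first "MII Status" line belongs to bond0 itself, so look only at the
# line that follows each "Slave Interface" header.
up_slaves=$(grep -A1 '^Slave Interface' "$status" | grep -c '^MII Status: up')
echo "slaves up: $up_slaves"   # prints "slaves up: 2"
```

On a real system you would point this at /proc/net/bonding/bond0 and alert when the count drops below the number of configured slaves.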
Test: transferring a 500 MB file averaged 25.71 MB/s, so with load balancing the bond reached about 200 Mbit/s.
I forgot to mention: both of my NICs are 100 Mbit/s.
rsync -avz --progress test root@192.168.5.82:/root
Address 192.168.5.82 maps to localhost, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!
root@192.168.5.82's password:
building file list ...
1 file to consider
test
   512000000 100%   25.71MB/s    0:00:13 (xfer#1, to-check=0/1)

sent 97 bytes  received 158459 bytes  7046.93 bytes/sec
total size is 512000000  speedup is 3229.14
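As a sanity check on the claim above, converting rsync's rate to bits (8 bits per byte, and ignoring whether rsync's "MB" is 10^6 or 2^20 bytes) lands right around the combined ceiling of two 100 Mbit/s NICs:

```shell
# 25.71 megabytes/s * 8 bits/byte = 205.68, i.e. roughly 200 Mbit/s
awk 'BEGIN { printf "%.1f Mbit/s\n", 25.71 * 8 }'   # prints "205.7 Mbit/s"
```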
For more details, see here, or here.
Questions are welcome in the discussion group: 37275208