
Building a Cross-Host Docker Network on AWS EC2 with Flannel

2018-08-04

I. Pitfalls Encountered on AWS EC2

       I had previously set up a Docker Swarm cluster on AWS EC2. Since the project was not yet in production, I was free to try whatever technology looked promising. Docker Swarm is easy to pick up, and a cluster comes together quickly. In practice, however, after publishing a service with Swarm and exposing its port, clients saw brief timeouts on first connection, and access was noticeably slower than running the same image as a standalone Docker container with the port exposed directly. I went through plenty of material but could not resolve the slow client access to Swarm services, so I decided to drop Docker Swarm. Before that I had also tried Kubernetes, but a highly available, multi-master Kubernetes setup simply would not come together on EC2; there were problems and pitfalls everywhere, and I found no good solutions online. The only remaining option there would have been to buy AWS's managed Kubernetes service.

      With cost in mind, I decided to build the Docker cluster another way. The first problem any Docker cluster has to solve is network access between containers across hosts. I first tried Calico for this. Calico also ran into stubborn problems on AWS EC2: backed by the etcd cluster, calicoctl node status made it look as though Calico was up and running, but when I actually used Calico as the Docker network, containers could not communicate either on the same host or across hosts. Not being familiar with Calico, I tried every fix I could find online, yet never got Calico networking to work on EC2.

   The successful examples published online were probably not run on a public cloud. Public cloud providers have their own products and impose various restrictions, so a setup that works on physical machines may fail in the cloud. After all these pitfalls, I tried building a flannel network on AWS EC2 instead; it came together very smoothly and solved the cross-host container networking problem.

II. Preparing the Machines

Three machines are used here:

    IP Address           Hostname     Installed Services
    172.31.72.142        master1      etcd, master node, Docker, Flannel
    172.31.82.187        master2      etcd, node, Docker, Flannel
    172.31.11.86         master3      etcd, node, Docker, Flannel

III. Building the etcd Cluster

For reasons of space this is not covered here; see my article on building an etcd cluster.

Link: https://blog.csdn.net/QFYJ_TL/article/details/81395543
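
Before moving on, it is worth confirming that the etcd cluster is actually healthy. With the etcd v2 etcdctl used throughout this post, a quick check looks like this (run on any member, adjusting the endpoints to your own):

etcdctl cluster-health
etcdctl member list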

IV. Building the Flannel Cluster

1. Install Flannel and Docker on every EC2 instance

yum install flannel -y

For docker-ce on CentOS, follow the official installation guide: https://docs.docker.com/install/linux/docker-ce/centos/#os-requirements
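
At the time of writing, the repository setup from that guide looks roughly like the following (check the official page for the current steps):

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo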

yum install docker-ce -y 

2. Set the Flannel etcd key

Run the following on any one of the machines to create the etcd key for Flannel:

etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
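
To confirm the key was written, read it back with the same etcd v2 API:

etcdctl get /coreos.com/network/config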

3. Edit the Flannel configuration file on every EC2 instance:

vi /etc/sysconfig/flanneld 

The file should look like this:
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://172.31.72.142:2379,http://172.31.82.187:2379,http://172.31.61.130:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS="-iface=ens5"
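
If flannel picks the wrong network interface on your instance type (newer EC2 instance types often name the primary interface ens5), uncomment FLANNEL_OPTIONS and point -iface at the correct device; the available interfaces can be listed with:

ip -o link show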

4. Start Flannel

systemctl enable flanneld
systemctl start flanneld
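
A quick sanity check after starting the service (the tunnel interface name depends on the backend; with the default udp backend it is usually flannel0):

systemctl status flanneld
ip addr show flannel0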

5. Edit the Docker configuration on master1 (172.31.72.142)

[root@master1 centos]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.86.1/24
FLANNEL_MTU=8973
FLANNEL_IPMASQ=false

Edit /etc/docker/daemon.json; if the file does not exist, create it.

[root@master1 centos]# vi /etc/docker/daemon.json
{
  "cluster-store" : "etcd://172.31.72.142:2379,172.31.82.187:2379,172.31.61.130:2379",
  "host" : "fd://",
  "bip" : "10.1.86.1/24",
  "mtu" : 8973,
  "ip-masq" : false
}

Note that the values in daemon.json must match those in /run/flannel/subnet.env.
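
Rather than copying these values by hand, the bip/mtu/ip-masq entries can be derived from subnet.env with a small shell snippet (a minimal sketch, not part of the original procedure; it only prints the fragments for you to paste into daemon.json):

# read flannel's allocated subnet and MTU, then print matching daemon.json fragments
source /run/flannel/subnet.env
echo "\"bip\" : \"${FLANNEL_SUBNET}\","
echo "\"mtu\" : ${FLANNEL_MTU},"
echo "\"ip-masq\" : ${FLANNEL_IPMASQ}"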

6. Edit the Docker configuration on master2 (172.31.82.187)

[root@master2 centos]#  cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.74.1/24
FLANNEL_MTU=8973
FLANNEL_IPMASQ=false

[root@master2 centos]# vi /etc/docker/daemon.json
{
  "cluster-store" : "etcd://172.31.72.142:2379,172.31.82.187:2379,172.31.61.130:2379",
  "host" : "fd://",
  "bip" : "10.1.74.1/24",
  "mtu" : 8973,
  "ip-masq" : false
}

Again, make sure the values in daemon.json match those in /run/flannel/subnet.env on this host.

7. Edit the Docker configuration on master3 (172.31.11.86)

[root@master3 centos]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.71.1/24
FLANNEL_MTU=8973
FLANNEL_IPMASQ=false

[root@master3 centos]# vi /etc/docker/daemon.json
{
  "cluster-store" : "etcd://172.31.72.142:2379,172.31.82.187:2379,172.31.61.130:2379",
  "host" : "fd://",
  "bip" : "10.1.71.1/24",
  "mtu" : 8973,
  "ip-masq" : false
}

8. Start Docker on every EC2 instance

systemctl enable docker
systemctl start docker
systemctl status docker
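
Once Docker is up with the new bip, the docker0 bridge on each host should sit inside that host's flannel subnet; a quick way to confirm:

ip addr show docker0
docker network inspect bridge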

9. Testing

Run the test below on every EC2 instance: start a busybox container and use it to verify cross-host container connectivity.

[root@master1 centos]# docker run -it --name=busybox busybox sh

/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
328: eth0@if329: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 8973 qdisc noqueue
    link/ether 02:42:0a:01:56:03 brd ff:ff:ff:ff:ff:ff
    inet 10.1.86.3/24 brd 10.1.86.255 scope global eth0
       valid_lft forever preferred_lft forever

/ # ping -c2 10.1.74.3
PING 10.1.74.3 (10.1.74.3): 56 data bytes
64 bytes from 10.1.74.3: seq=0 ttl=60 time=0.932 ms
64 bytes from 10.1.74.3: seq=1 ttl=60 time=0.829 ms
^C
--- 10.1.74.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.829/0.880/0.932 ms
/ # ping -c2 10.1.71.4
PING 10.1.71.4 (10.1.71.4): 56 data bytes
64 bytes from 10.1.71.4: seq=0 ttl=60 time=0.714 ms
64 bytes from 10.1.71.4: seq=1 ttl=60 time=0.593 ms
^C
--- 10.1.71.4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.593/0.653/0.714 ms

 

[root@master2 centos]# docker attach busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
737: eth0@if738: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 8973 qdisc noqueue
    link/ether 02:42:0a:01:4a:03 brd ff:ff:ff:ff:ff:ff
    inet 10.1.74.3/24 brd 10.1.74.255 scope global eth0
       valid_lft forever preferred_lft forever

/ # ping -c2 10.1.86.3
PING 10.1.86.3 (10.1.86.3): 56 data bytes
64 bytes from 10.1.86.3: seq=0 ttl=60 time=0.846 ms
64 bytes from 10.1.86.3: seq=1 ttl=60 time=0.756 ms

--- 10.1.86.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.756/0.801/0.846 ms
/ # ping -c2 10.1.71.4
PING 10.1.71.4 (10.1.71.4): 56 data bytes
64 bytes from 10.1.71.4: seq=0 ttl=60 time=0.623 ms
64 bytes from 10.1.71.4: seq=1 ttl=60 time=0.559 ms

--- 10.1.71.4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.559/0.591/0.623 ms

[root@master3 centos]# docker attach busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
726: eth0@if727: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 8973 qdisc noqueue
    link/ether 02:42:0a:01:47:04 brd ff:ff:ff:ff:ff:ff
    inet 10.1.71.4/24 brd 10.1.71.255 scope global eth0
       valid_lft forever preferred_lft forever

/ # ping -c2 10.1.86.3
PING 10.1.86.3 (10.1.86.3): 56 data bytes
64 bytes from 10.1.86.3: seq=0 ttl=60 time=0.694 ms
64 bytes from 10.1.86.3: seq=1 ttl=60 time=0.586 ms

--- 10.1.86.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.586/0.640/0.694 ms
/ # ping -c2 10.1.74.3
PING 10.1.74.3 (10.1.74.3): 56 data bytes
64 bytes from 10.1.74.3: seq=0 ttl=60 time=0.608 ms
64 bytes from 10.1.74.3: seq=1 ttl=60 time=0.544 ms

--- 10.1.74.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.544/0.576/0.608 ms

10. Summary

As the tests above show, every ping reports 0% packet loss, so cross-host container connectivity is working.
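
To see the per-host subnet leases that flannel has recorded, you can also list the keys it writes under the etcd prefix configured earlier (etcd v2 API, as before):

etcdctl ls /coreos.com/network/subnets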

For how the Flannel network works under the hood, see: https://www.geek-share.com/detail/2682225832.html
