
Deploying a Swarm cluster in production, with swarm + overlay + Portainer (web UI) in practice

2017-08-30 15:17
References:
Scheduler filters: filter
https://docs.docker.com/swarm/scheduler/filter/#how-to-write-filter-expressions
Scheduler strategies: strategy
https://docs.docker.com/swarm/scheduler/strategy/#spread-strategy-example
Deploying a production Swarm cluster
https://docs.docker.com/swarm/install-manual/#step-5-create-swarm-cluster
Web UI: Portainer
https://portainer.readthedocs.io/en/latest/deployment.html
Building the overlay network
http://note.youdao.com/noteshare?id=e02172789830ac387b8ad0216e984b8a

一、Installation Steps

1. Set up a discovery backend:

Start consul on 172.19.9.10:
docker run --restart=always -d -p 8500:8500 -h consul --name consul progrium/consul \
-server -bootstrap -advertise 172.19.9.10
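Before wiring Swarm to the discovery backend, it is worth a quick sanity check that consul has bootstrapped (a sketch, assuming consul's HTTP API is reachable on port 8500):

```shell
# A non-empty leader address means the consul server has bootstrapped
# and can accept Swarm's registrations.
curl -s http://172.19.9.10:8500/v1/status/leader

# List the keys Swarm writes under its discovery prefix; this stays
# empty until managers and nodes have registered themselves.
curl -s 'http://172.19.9.10:8500/v1/kv/docker/swarm/nodes?recurse'
```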

2. Start the Swarm managers:
Start manage01 on 172.19.9.20 and manage02 on 172.19.9.30:
docker run -d --restart=always --name swarm-manage01 -p 4000:4000 172.19.9.10:5000/swarm \
manage -H :4000 --replication --advertise 172.19.9.20:4000 consul://172.19.9.10:8500

docker run -d --restart=always --name=swarm-manage02 -p 4000:4000 172.19.9.10:5000/swarm \
manage -H :4000 --replication --advertise 172.19.9.30:4000 consul://172.19.9.10:8500
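With --replication, one manager is elected primary and the other becomes a replica that forwards requests to it. A quick way to see which is which (a sketch; classic Swarm's `docker info` reports a Role line plus the primary's address):

```shell
# Ask each manager for its role; one should report "Role: primary",
# the other "Role: replica" with a "Primary:" line pointing at it.
docker -H tcp://172.19.9.20:4000 info | grep -iE 'role|primary'
docker -H tcp://172.19.9.30:4000 info | grep -iE 'role|primary'
```

Killing the primary's container is an easy way to verify that the replica takes over the primary role.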

3. Connect to node01 and node02 in turn and join them to the cluster:
Start node01 on 172.19.9.40 and node02 on 172.19.9.50:
docker run -d --restart=always --name=swarm-node01 172.19.9.10:5000/swarm \
join --advertise=172.19.9.40:2375 consul://172.19.9.10:8500

docker run -d --restart=always --name=swarm-node02 172.19.9.10:5000/swarm \
join --advertise=172.19.9.50:2375 consul://172.19.9.10:8500
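Discovery can take a few heartbeat intervals to propagate; to confirm both engines actually joined (a sketch, run from any host that can reach a manager):

```shell
# `docker info` against a manager lists every registered engine,
# its status, and its labels under the "Nodes:" section.
docker -H tcp://172.19.9.20:4000 info

# If a node does not show up, the join container's logs show its
# heartbeats and any discovery errors. Run on 172.19.9.40:
docker logs swarm-node01
```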

4. Add a --label to each node's Docker daemon:

On 172.19.9.40:
#docker daemon --label region=huilongguan
vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --graph=/data1/docker -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 \
    --cluster-store=consul://172.19.9.10:8500/network --cluster-advertise=em1:2375 \
    --insecure-registry=172.19.9.10:5000 --label region=huilongguan

On 172.19.9.50:
#docker daemon --label region=shangdi
vim /usr/lib/systemd/system/docker.service
ExecStart= ...... --label region=shangdi
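systemd caches unit files, so the edited docker.service only takes effect after a reload and a daemon restart (run on each node that was edited):

```shell
# Pick up the edited unit file and restart the engine.
systemctl daemon-reload
systemctl restart docker

# The new label should now appear under "Labels:" in the local
# engine's info output.
docker info | grep -A1 -i labels
```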

二、Test Results
Node filters
Use a constraint filter

docker -H :4000 run -d --net=internaloverlay --name swarm-test-redis-0 \
    -e constraint:node==server3.riskdetection 172.19.9.10:5000/redis
docker -H :4000 run -d --net=internaloverlay --name swarm-test-redis-1 \
    -e constraint:node==server4.riskdetection 172.19.9.10:5000/redis
docker -H :4000 exec -ti swarm-test-redis-1 ping swarm-test-redis-0
PING swarm-test-redis-0 (10.10.10.10): 56 data bytes
64 bytes from 10.10.10.10: icmp_seq=0 ttl=64 time=1.261 ms
64 bytes from 10.10.10.10: icmp_seq=1 ttl=64 time=0.314 ms
64 bytes from 10.10.10.10: icmp_seq=2 ttl=64 time=0.292 ms
64 bytes from 10.10.10.10: icmp_seq=3 ttl=64 time=0.293 ms
64 bytes from 10.10.10.10: icmp_seq=4 ttl=64 time=0.284 ms
64 bytes from 10.10.10.10: icmp_seq=5 ttl=64 time=0.273 ms
64 bytes from 10.10.10.10: icmp_seq=6 ttl=64 time=0.259 ms

docker -H :4000 run -d --net=internaloverlay --name swarm-test-redis-hlg \
    -e constraint:region==huilongguan 172.19.9.10:5000/redis
docker -H :4000 run -d --net=internaloverlay --name swarm-test-redis-sd \
    -e constraint:region==shangdi 172.19.9.10:5000/redis
docker -H :4000 exec -ti swarm-test-redis-sd ping swarm-test-redis-hlg
PING swarm-test-redis-hlg (10.10.10.9): 56 data bytes
64 bytes from 10.10.10.9: icmp_seq=0 ttl=64 time=0.889 ms
64 bytes from 10.10.10.9: icmp_seq=1 ttl=64 time=0.268 ms
64 bytes from 10.10.10.9: icmp_seq=2 ttl=64 time=0.297 ms
64 bytes from 10.10.10.9: icmp_seq=3 ttl=64 time=0.254 ms
64 bytes from 10.10.10.9: icmp_seq=4 ttl=64 time=0.270 ms
64 bytes from 10.10.10.9: icmp_seq=5 ttl=64 time=0.283 ms
64 bytes from 10.10.10.9: icmp_seq=6 ttl=64 time=0.277 ms

docker -H :4000 ps | grep swarm-test
ea8cf21ae46f  172.19.9.10:5000/redis  "docker-entrypoint.sh"  40 minutes ago     Up 40 minutes  server3.riskdetection/swarm-test-redis-hlg
c93c6054f5aa  172.19.9.10:5000/redis  "docker-entrypoint.sh"  42 minutes ago     Up 42 minutes  server4.riskdetection/swarm-test-redis-sd
24a092af0e45  172.19.9.10:5000/redis  "docker-entrypoint.sh"  About an hour ago  Up 40 minutes  server3.riskdetection/swarm-test-redis-0
eda41c6d1b86  172.19.9.10:5000/redis  "docker-entrypoint.sh"  About an hour ago  Up 41 minutes  server4.riskdetection/swarm-test-redis-1

Summary:
With the overlay network, containers on different hosts can reach each other, and the node and region constraints placed each container on the intended node.
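The tests above assume the internaloverlay network already exists. It only needs to be created once, through a manager; with --cluster-store pointing at consul, the definition is shared by every engine. A sketch (the subnet is an illustrative choice consistent with the 10.10.10.x addresses in the ping output, not taken from the original setup):

```shell
# Create a cluster-wide overlay network via the Swarm manager.
docker -H tcp://172.19.9.20:4000 network create -d overlay \
    --subnet 10.10.10.0/24 internaloverlay

# Verify it is visible across the cluster.
docker -H tcp://172.19.9.20:4000 network ls | grep internaloverlay
```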

三、Notes
Besides constraint, node filters include the following two filters:
Use the health filter
The node health filter prevents the scheduler from running containers on unhealthy nodes. A node is considered unhealthy if it is down or cannot communicate with the cluster store.

Use the containerslots filter
--label containerslots=3
Swarm will run up to 3 containers on this node; if all nodes are "full", an error is thrown indicating that no suitable node can be found. If the value cannot be cast to an integer or is not present, there is no limit on the number of containers.
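containerslots is set the same way as the region label in step 4 above, since engine labels cannot be changed at runtime (a sketch):

```shell
# On the node to be capped, append the label to the daemon flags in
# /usr/lib/systemd/system/docker.service, same pattern as the region
# label:
#   ExecStart=/usr/bin/dockerd ... --label containerslots=3
systemctl daemon-reload && systemctl restart docker

# The cap is then visible as a node label from the manager.
docker -H tcp://172.19.9.20:4000 info | grep containerslots
```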

Container filters
affinity
dependency
port
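Of these, the affinity filter is the most commonly useful: it co-locates a new container with an existing container or image. A hedged sketch in the same style as the constraint tests above (swarm-test-redis-buddy is a hypothetical name introduced here for illustration):

```shell
# Schedule the new container on whichever node already runs
# swarm-test-redis-0 (affinity to a container name).
docker -H :4000 run -d --net=internaloverlay --name swarm-test-redis-buddy \
    -e affinity:container==swarm-test-redis-0 172.19.9.10:5000/redis

# Or restrict scheduling to nodes that have already pulled an image:
#   -e affinity:image==172.19.9.10:5000/redis
```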

四、Install the management web UI (Portainer)
docker run -d --restart=always --name portainer -p 9000:9000 portainer/portainer -H tcp://172.19.9.20:4000
http://172.19.9.10:9000/
Set a password and log in.
1. Cluster info



2. Container management



3. Image management



4. Network management



5. Volume management



6. Swarm cluster management
