Learning Kubernetes 2 -- Cluster Deployment and Setup
2018-01-02 17:04
Following the previous post's introduction to the core concepts, this post walks through setting up a k8s cluster. Six virtual machines were prepared as the deployment environment.
I. Environment Preparation and VM Information
1. Virtual machine environment
2. The six VMs (configure the /etc/hosts file on each machine yourself; see the sketch after the table)
| Role | Hostname | IP |
| --- | --- | --- |
| master1, etcd1 | master1 | 192.168.8.224 |
| master2, etcd2 | master2 | 192.168.8.225 |
| master3, etcd3 | master3 | 192.168.8.226 |
| node1 | node1 | 192.168.8.227 |
| node2 | node2 | 192.168.8.228 |
| node3 | node3 | 192.168.8.229 |
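A minimal /etc/hosts sketch matching the table above (append these lines on every machine):

```
192.168.8.224 master1
192.168.8.225 master2
192.168.8.226 master3
192.168.8.227 node1
192.168.8.228 node2
192.168.8.229 node3
```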
3. Disable the firewall on all six VMs:
```
systemctl disable firewalld
systemctl stop firewalld
```
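To avoid logging into each machine by hand, the same commands can be pushed over SSH; a convenience sketch, assuming passwordless root SSH to all six hosts:

```
# disable and stop firewalld on every host in the cluster
for h in master1 master2 master3 node1 node2 node3; do
  ssh root@"$h" "systemctl disable firewalld && systemctl stop firewalld"
done
```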
II. Building the etcd Cluster
1. Install etcd
```
yum install etcd -y
```
2. Configure etcd
When installed via yum, etcd's default configuration file is /etc/etcd/etcd.conf. The configuration for each of the three nodes is listed below; note where they differ. Port 2379 is the default client port; port 4001 is added as a spare to guard against port conflicts.
master1:

```
# [member]
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd/test.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://master1:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://master1:2380,etcd2=http://master2:2380,etcd3=http://master3:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-baby"
ETCD_ADVERTISE_CLIENT_URLS="http://master1:2379,http://master1:4001"
```
master2:

```
# [member]
ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/test.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://master2:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://master1:2380,etcd2=http://master2:2380,etcd3=http://master3:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-baby"
ETCD_ADVERTISE_CLIENT_URLS="http://master2:2379,http://master2:4001"
```
master3:

```
# [member]
ETCD_NAME=etcd3
ETCD_DATA_DIR="/var/lib/etcd/test.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://master3:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://master1:2380,etcd2=http://master2:2380,etcd3=http://master3:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-baby"
ETCD_ADVERTISE_CLIENT_URLS="http://master3:2379,http://master3:4001"
```

After making these changes, start the etcd service on each node and verify the cluster health:
```
systemctl start etcd
systemctl enable etcd
etcdctl -C http://etcd:2379 cluster-health
etcdctl -C http://etcd:4001 cluster-health
```

(In the etcdctl commands, etcd is a placeholder for the address of any cluster member, e.g. master1.) Once the cluster reports healthy, continue with the next steps.
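As an optional extra check, the member list can also be inspected; a minimal sketch, again using master1 in place of the etcd placeholder:

```
# lists each etcd member with its ID, peer URLs, and client URLs
etcdctl -C http://master1:2379 member list
```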
III. Deploying the Masters
1. Install docker, enable it at boot, and start the service
```
yum install docker -y
chkconfig docker on
service docker start
```
2. Install kubernetes
```
yum install kubernetes -y
```

On the master VMs, three components need to run: the Kubernetes API Server, the Kubernetes Controller Manager, and the Kubernetes Scheduler.
First, edit the /etc/kubernetes/apiserver file (in KUBE_ETCD_SERVERS, etcd stands for the address of an etcd member, i.e. one of master1/2/3):
```
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
# KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
```

Next, edit the /etc/kubernetes/config file (in the last line, set masterX:8080 to whichever of master1/2/3 the machine is):
```
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://master1:8080"
```

Once edited, start the services and enable them at boot:
```
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl enable kube-scheduler
systemctl start kube-scheduler
```
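To confirm the control-plane components came up, a quick sketch of a check run on the master itself (kubectl talks to the local apiserver on the insecure 8080 port configured above):

```
# should report scheduler, controller-manager, and the etcd members as Healthy
kubectl get componentstatuses
```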
IV. Deploying the Nodes
1. Install docker, enable it at boot, and start the service
```
yum install docker -y
chkconfig docker on
service docker start
```
2. Install kubernetes
```
yum install kubernetes -y
```

On the node VMs, two components need to run: the Kubelet and the Kubernetes Proxy (kube-proxy).
First, edit the /etc/kubernetes/config file (note: the etcd hostname configured here is a placeholder for one of the master1/2/3 addresses):
```
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://etcd:8080"
```
Next, edit the /etc/kubernetes/kubelet file (note: --hostname-override= should match the node machine in question; a per-node sketch follows the service-start commands below):
```
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://etcd:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
```

Once edited, start the services and enable them at boot:
```
systemctl enable kubelet
systemctl start kubelet
systemctl enable kube-proxy
systemctl start kube-proxy
```
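Editing --hostname-override by hand on every node is easy to get wrong; the sketch below, run on each node, substitutes the machine's own hostname into the file (assumes the exact KUBELET_HOSTNAME line shown above):

```
# replace whatever hostname is currently set with this node's own hostname
sed -i "s|--hostname-override=.*\"|--hostname-override=$(hostname)\"|" /etc/kubernetes/kubelet
systemctl restart kubelet
```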
3. Check the cluster status
On any master, list the nodes in the cluster and their status:

```
kubectl get node
```
At this point a Kubernetes cluster has been set up, but it cannot yet work properly, because pod networking across the cluster still needs to be managed in a unified way.
V. Creating the flannel Overlay Network
1. Install flannel by running the following on every master and node
```
yum install flannel -y
```
2. On every master and node, edit the /etc/sysconfig/flanneld file
```
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
```
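FLANNEL_ETCD_ENDPOINTS above again uses the etcd placeholder for a single member. Since three etcd members are available, listing all of them lets flanneld fall back to another member if one is down; a sketch using the hostnames from section I:

```
# comma-separated list of all etcd client endpoints
FLANNEL_ETCD_ENDPOINTS="http://master1:2379,http://master2:2379,http://master3:2379"
```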
3. Configure flannel's key in etcd
flannel stores its configuration in etcd, which keeps the configuration consistent across all flannel instances, so the following key needs to be set in etcd:

```
etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
```
(The key /atomic.io/network/config corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if they do not match, flanneld will fail on startup.)
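As a quick verification, the key can be read back:

```
# should print the JSON network config written above
etcdctl get /atomic.io/network/config
```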
4. Start the reconfigured flannel, then restart docker and the kubernetes services in turn
On the master VMs, run:

```
systemctl enable flanneld
systemctl start flanneld
service docker restart
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
```
On the node VMs, run:
```
systemctl enable flanneld
systemctl start flanneld
service docker restart
systemctl restart kubelet
systemctl restart kube-proxy
```
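After the restarts, every machine should have a flannel interface, and docker0 should have moved into a subnet that flannel allocated out of 10.0.0.0/16; that shared address space is what makes pod traffic routable across hosts. A hedged spot check (the interface is flannel0 with the default UDP backend, flannel.1 with the vxlan backend):

```
# docker0's address should fall inside the subnet shown on the flannel interface
ip addr show flannel0
ip addr show docker0
```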
References:
Building an etcd cluster on CentOS 7: http://www.cnblogs.com/zhenyuyaodidiao/p/6237019.html
Deploying a Kubernetes cluster on CentOS 7: http://www.cnblogs.com/zhenyuyaodidiao/p/6500830.html
Docker networking and an introduction to flannel: http://blog.csdn.net/zhaoguoguang/article/details/51161957