Manually Deploying Kubernetes on a Multi-Host Ubuntu 16.04 Cluster, with a Private Docker Registry and the Kubernetes Dashboard Web UI
2017-03-17 20:51
Main steps:
- Deploy the etcd cluster
- Deploy the K8s master
- Configure the flannel service
- Deploy the K8s nodes
- Deploy DNS
- Deploy the Dashboard
Environment
Component | Version |
---|---|
Etcd | 3.1.2 |
docker | 17.03.0-ce |
flannel | v0.7.0 |
Kubernetes | v1.4.9 |
Host | IP | OS |
---|---|---|
master | 10.107.20.5 | Ubuntu 16.04.2 LTS |
node1 | 10.107.20.6 | Ubuntu 16.04.2 LTS |
node2 | 10.107.20.7 | Ubuntu 16.04.2 LTS |
node3 | 10.107.20.8 | Ubuntu 16.04.2 LTS |
node4 | 10.107.20.9 | Ubuntu 16.04.2 LTS |
Deploy the etcd cluster
We will install and run etcd on all five hosts to form the etcd cluster.

Download etcd

On the deployment machine, download etcd and distribute the binaries to every host:

```bash
ETCD_VERSION=${ETCD_VERSION:-"3.1.2"}
ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz
tar xzf etcd.tar.gz -C /tmp
cd /tmp/etcd-v${ETCD_VERSION}-linux-amd64
for h in master node1 node2 node3 node4; do
  ssh user@$h 'mkdir -p $HOME/kube' && scp -r etcd* user@$h:~/kube
done
for h in master node1 node2 node3 node4; do
  ssh user@$h 'sudo mkdir -p /opt/bin && sudo mv $HOME/kube/* /opt/bin && rm -rf $HOME/kube/*'
done
```
Alternatively, you can download an etcd release .tar.gz from GitHub (https://github.com/coreos/etcd/releases/) and extract it by hand. Copy the etcd and etcdctl binaries to each host with scp (every host needs SSH access configured; see the author's other posts), then move them into /opt/bin.
Configure the etcd service
On every host, create /opt/config/etcd.conf and /lib/systemd/system/etcd.service (remember to adjust the IP addresses and the node name).
/opt/config/etcd.conf
```bash
sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /opt/config/
cat <<EOF | sudo tee /opt/config/etcd.conf
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME=etcd5
ETCD_INITIAL_CLUSTER=etcd5=http://10.107.20.5:2380,etcd6=http://10.107.20.6:2380,etcd7=http://10.107.20.7:2380,etcd8=http://10.107.20.8:2380,etcd9=http://10.107.20.9:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://10.107.20.5:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.107.20.5:2380
ETCD_ADVERTISE_CLIENT_URLS=http://10.107.20.5:2379
ETCD_LISTEN_CLIENT_URLS=http://10.107.20.5:2379,http://127.0.0.1:2379
GOMAXPROCS=$(nproc)
EOF
```
Here the five hosts are named etcd5 through etcd9; you may pick your own names for ETCD_NAME (adjust the five entries in ETCD_INITIAL_CLUSTER to match). On each host, set ETCD_LISTEN_PEER_URLS, ETCD_INITIAL_ADVERTISE_PEER_URLS, ETCD_ADVERTISE_CLIENT_URLS, and ETCD_LISTEN_CLIENT_URLS to that host's own IP.
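To avoid editing the file by hand five times, the per-host configs can also be generated from the deployment machine. The loop below is only a sketch under this post's conventions (names etcd5 through etcd9 derived from the last IP octet); GOMAXPROCS is left out because $(nproc) would be evaluated locally here rather than on the remote host:

```bash
# Sketch: generate each host's /opt/config/etcd.conf remotely.
# Assumes etcd names etcd5..etcd9 follow the last octet of the IP.
CLUSTER=etcd5=http://10.107.20.5:2380,etcd6=http://10.107.20.6:2380,etcd7=http://10.107.20.7:2380,etcd8=http://10.107.20.8:2380,etcd9=http://10.107.20.9:2380
for ip in 10.107.20.5 10.107.20.6 10.107.20.7 10.107.20.8 10.107.20.9; do
  octet=${ip##*.}   # last octet of the IP, e.g. 5
  ssh user@$ip "sudo mkdir -p /var/lib/etcd /opt/config && sudo tee /opt/config/etcd.conf" <<EOF
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME=etcd${octet}
ETCD_INITIAL_CLUSTER=${CLUSTER}
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://${ip}:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://${ip}:2380
ETCD_ADVERTISE_CLIENT_URLS=http://${ip}:2379
ETCD_LISTEN_CLIENT_URLS=http://${ip}:2379,http://127.0.0.1:2379
EOF
done
```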
/lib/systemd/system/etcd.service
```
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
User=root
Type=simple
EnvironmentFile=-/opt/config/etcd.conf
ExecStart=/opt/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target
```
Then run the following on every host to enable etcd at boot and start it:
```bash
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
```
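Once etcd has been started on all five hosts, verify the cluster before continuing:

```bash
# All five members should be listed and the cluster reported healthy
/opt/bin/etcdctl --endpoints="http://10.107.20.5:2379" member list
/opt/bin/etcdctl --endpoints="http://10.107.20.5:2379" cluster-health
```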
Deploy the K8s Master
Download Flannel
```bash
FLANNEL_VERSION=${FLANNEL_VERSION:-"v0.7.0"}
curl -L https://github.com/coreos/flannel/releases/download/${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz -o flannel.tar.gz
tar xzf flannel.tar.gz -C /tmp
```
As with etcd, you can instead download and extract the release manually from GitHub.
Download the K8s release tarball and extract it (https://github.com/kubernetes/kubernetes/releases).
Copy the binaries:

```bash
cd /tmp
scp kubernetes/server/bin/kube-apiserver \
    kubernetes/server/bin/kube-controller-manager \
    kubernetes/server/bin/kube-scheduler \
    kubernetes/server/bin/kubelet \
    kubernetes/server/bin/kube-proxy user@10.107.20.5:~/kube
scp flannel-${FLANNEL_VERSION}/flanneld user@10.107.20.5:~/kube
ssh -t user@10.107.20.5 'sudo mv ~/kube/* /opt/bin/'
```
This copies kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, and flanneld to the master host.
Create certificates
On the master host, run the following to create the certificates:

```bash
mkdir -p /srv/kubernetes/
cd /srv/kubernetes
export MASTER_IP=10.107.20.5
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000
```
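As an optional sanity check, confirm the server certificate chains to the CA and carries the expected CN:

```bash
# server.crt should verify against ca.crt and show CN=10.107.20.5
openssl verify -CAfile ca.crt server.crt
openssl x509 -in server.crt -noout -subject -dates
```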
Configure the kube-apiserver service
We use the following Service CIDR and Flannel network:

```
SERVICE_CLUSTER_IP_RANGE=172.18.0.0/16
FLANNEL_NET=192.168.0.0/16
```
On the master host, create /lib/systemd/system/kube-apiserver.service with the following content:
```
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
  --insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  --etcd-servers=http://10.107.20.5:2379,http://10.107.20.6:2379,http://10.107.20.7:2379,http://10.107.20.8:2379,http://10.107.20.9:2379 \
  --logtostderr=true \
  --allow-privileged=false \
  --service-cluster-ip-range=172.18.0.0/16 \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
  --service-node-port-range=30000-32767 \
  --advertise-address=10.107.20.5 \
  --client-ca-file=/srv/kubernetes/ca.crt \
  --tls-cert-file=/srv/kubernetes/server.crt \
  --tls-private-key-file=/srv/kubernetes/server.key
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
- --etcd-servers: the URLs of the etcd servers.
- --insecure-bind-address: the insecure address to bind; 0.0.0.0 binds all interfaces.
- --insecure-port: the insecure port the apiserver binds, 8080 by default.
- --service-cluster-ip-range: the virtual IP range for Kubernetes Services, in CIDR notation; it must not overlap with the physical hosts' real IP range.
Configure the kube-controller-manager service
On the master host, create /lib/systemd/system/kube-controller-manager.service with the following content:

```
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
  --master=127.0.0.1:8080 \
  --root-ca-file=/srv/kubernetes/ca.crt \
  --service-account-private-key-file=/srv/kubernetes/server.key \
  --logtostderr=true
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
Configure the kube-scheduler service
On the master host, create /lib/systemd/system/kube-scheduler.service with the following content:

```
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
  --logtostderr=true \
  --master=127.0.0.1:8080
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
Configure the flanneld service

On the master host, create /lib/systemd/system/flanneld.service with the following content:
```
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service

[Service]
User=root
ExecStart=/opt/bin/flanneld \
  --etcd-endpoints="http://10.107.20.5:2379" \
  --iface=10.107.20.5 \
  --ip-masq
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
Start the services
```bash
/opt/bin/etcdctl --endpoints="http://10.107.20.5:2379,http://10.107.20.6:2379,http://10.107.20.7:2379,http://10.107.20.8:2379,http://10.107.20.9:2379" \
  mk /coreos.com/network/config '{"Network":"192.168.0.0/16", "Backend": {"Type": "vxlan"}}'
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl enable kube-controller-manager
sudo systemctl enable kube-scheduler
sudo systemctl enable flanneld
sudo systemctl start kube-apiserver
sudo systemctl start kube-controller-manager
sudo systemctl start kube-scheduler
sudo systemctl start flanneld
```
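With everything started, a few quick checks confirm the master is healthy (kubectl here is the binary from the release tarball; put it in your PATH or call it by full path):

```bash
# The apiserver health endpoint should return "ok"
curl http://10.107.20.5:8080/healthz
# Flannel's network config should read back from etcd
/opt/bin/etcdctl --endpoints="http://10.107.20.5:2379" get /coreos.com/network/config
# Scheduler, controller-manager, and etcd members should report Healthy
kubectl -s http://10.107.20.5:8080 get componentstatuses
```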
Modify the Docker service
```bash
source /run/flannel/subnet.env
sudo sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" /lib/systemd/system/docker.service
rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
  sudo ip link set dev docker0 down
  sudo ip link delete docker0
fi
sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl restart docker
```
If Docker was installed manually, you also need to fetch the docker.service and docker.socket files from GitHub and place them in /lib/systemd/system/.
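After the restart, docker0 should sit inside the flannel subnet assigned to this host; one way to confirm:

```bash
# The values flannel wrote for this host
cat /run/flannel/subnet.env
# docker0 should now carry an address from FLANNEL_SUBNET
ip addr show docker0
```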
Deploy the K8s Nodes
Copy the binaries
```bash
cd /tmp
for h in master node1 node2 node3 node4; do
  scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@$h:~/kube
done
for h in master node1 node2 node3 node4; do
  scp flannel-${FLANNEL_VERSION}/flanneld user@$h:~/kube
done
for h in master node1 node2 node3 node4; do
  ssh -t user@$h 'sudo mkdir -p /opt/bin && sudo mv ~/kube/* /opt/bin/'
done
```
This copies kubelet and kube-proxy to every node.
Configure Flannel and modify the Docker service
See the corresponding steps in the master section: configure the flanneld service, start it, and modify the Docker service, remembering to change the --iface address to each node's own IP.

Configure the kubelet service

Configure kubelet and kube-proxy on the master as well.
Create /lib/systemd/system/kubelet.service; change the --hostname-override IP to each host's own address:

```
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
ExecStart=/opt/bin/kubelet \
  --hostname-override=10.107.20.5 \
  --api-servers=http://10.107.20.5:8080 \
  --logtostderr=true
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
```
Start the service
```bash
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
```
Configure the kube-proxy service
Create /lib/systemd/system/kube-proxy.service; again, adjust the IP addresses:

```
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/bin/kube-proxy \
  --hostname-override=10.107.20.5 \
  --master=http://10.107.20.5:8080 \
  --logtostderr=true
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Start the service
```bash
sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy
```
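With kubelet and kube-proxy running on every host, the nodes should have registered with the apiserver:

```bash
# All five hosts should appear with STATUS Ready
kubectl -s http://10.107.20.5:8080 get nodes
```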
Deploy DNS
We use DNS_SERVER_IP="172.18.8.8", DNS_DOMAIN="cluster.local", and DNS_REPLICAS=1. On the master, create skydns.yml:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v17.1
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v17.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v17.1
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v17.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubedns
        image: 10.107.20.5:5000/mritd/kubedns-amd64
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube-dns"
        - --domain=cluster.local
        - --dns-port=10053
        - --kube-master-url=http://10.107.20.5:8080
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
      - name: dnsmasq
        image: 10.107.20.5:5000/kube-dnsmasq-amd64
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: 10.107.20.5:5000/exechealthz-amd64
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        - -quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 172.18.8.8
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
```
Create the pod and service
```bash
kubectl create -f skydns.yml
```
Then edit kubelet.service on every node and add --cluster-dns=172.18.8.8 and --cluster-domain=cluster.local.
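After the change, each node's ExecStart would look like the following sketch (keep that node's own --hostname-override; 10.107.20.6 is just the example for node1):

```
ExecStart=/opt/bin/kubelet \
  --hostname-override=10.107.20.6 \
  --api-servers=http://10.107.20.5:8080 \
  --cluster-dns=172.18.8.8 \
  --cluster-domain=cluster.local \
  --logtostderr=true
```

Apply it with `sudo systemctl daemon-reload` followed by `sudo systemctl restart kubelet` on each node.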
A problem can arise here: when creating a pod, Kubernetes also starts an image named google_containers/pause to implement the pod abstraction, so in an offline environment the pause image must be made available first. The author used a machine that could reach the Internet to pull a docker registry image, saved it as a tarball with docker save -o, copied it to the master host, and loaded it as a local image with docker load.
Run it to bring up a private registry. Then pull the mritd/pause-amd64 image from Docker Hub, tag it as 10.107.20.5:5000/mritd/pause-amd64 with docker tag, and push it to the private registry.
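Concretely, the transfer might look like this sketch (the registry:2 tag and the tarball names are assumptions; substitute whatever you actually saved):

```bash
# On the Internet-connected machine
docker pull registry:2
docker save -o registry.tar registry:2
docker pull mritd/pause-amd64
docker save -o pause.tar mritd/pause-amd64
scp registry.tar pause.tar user@10.107.20.5:~

# On the master host
docker load -i registry.tar
docker load -i pause.tar
docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag mritd/pause-amd64 10.107.20.5:5000/mritd/pause-amd64
docker push 10.107.20.5:5000/mritd/pause-amd64
```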
Next, add the --pod-infra-container-image flag to each node's kubelet startup arguments, pointing at the pause image in the private Docker registry: --pod-infra-container-image=10.107.20.5:5000/mritd/pause-amd64.
In addition, the images used by kube-dns (kubedns-amd64, kube-dnsmasq-amd64, exechealthz-amd64) were likewise downloaded offline and pushed to the private registry; you will need to perform these steps yourself.
If clients cannot pull from or push to the private registry, configure /etc/docker/daemon.json on each client and add {"insecure-registries":["<registry ip>:5000"]}, e.g. {"insecure-registries":["10.107.20.5:5000"]}.
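A minimal sketch of that configuration on a client host:

```bash
# Mark the private registry as insecure (HTTP), then restart Docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "insecure-registries": ["10.107.20.5:5000"]
}
EOF
sudo systemctl restart docker
```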
Deploy the Dashboard
On the master, create kube-dashboard.yml:

```yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 10.107.20.5:5000/mritd/kubernetes-dashboard-amd64
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --apiserver-host=http://10.107.20.5:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
```
```bash
kubectl create -f kube-dashboard.yml
```
As with the DNS deployment, the dashboard image must first be pushed to the private registry.
Run kubectl describe -f kube-dashboard.yml to see which node the pod was scheduled to and which port was mapped; the UI is then reachable at that node's IP and port.
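The allocated NodePort can also be read directly from the Service:

```bash
# The PORT(S) column shows 80:<nodeport>/TCP for the dashboard Service
kubectl --namespace=kube-system get svc kubernetes-dashboard
# Then browse to http://<node-ip>:<nodeport>/
```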