
[Kubernetes] Setting Up a Kubernetes Cluster and a Private Docker Registry (CentOS 7)

After consulting the following documents, I successfully set up a private Docker registry and a Kubernetes cluster that uses it:

The official Kubernetes installation guide for CentOS 7
[Recommended] Installing Kubernetes Cluster with 3 minions on CentOS 7 to manage pods and services
The official Docker registry deployment guide
The official Docker insecure registry guide
The official Kubernetes Web UI documentation
I ran into plenty of problems along the way, but nearly all of them could be solved with intuition plus Google. Readers who want to follow this article to build the same environment should be proficient with Linux; some networking and security knowledge is also essential.

Environment Planning

My environment consists of four hosts running CentOS 7, planned as follows:

Kubernetes master node: 192.168.169.120
Kubernetes nodes: 192.168.169.121, 192.168.169.124
Private Docker registry node: 192.168.169.125
The following commands were run on every host to disable the firewall and enable NTP:

# systemctl stop firewalld
# systemctl disable firewalld
# yum -y install ntp
# systemctl start ntpd
# systemctl enable ntpd
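
Before moving on, it is worth a quick sanity check that each host is in the expected state. A minimal sketch (ntpq ships with the ntp package installed above):

# systemctl is-active firewalld   # should print "inactive"
# systemctl is-active ntpd        # should print "active"
# ntpq -p                         # lists the NTP peers being polled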


Installing and Configuring the Kubernetes Master Node

Install etcd, docker, and kubernetes on the Kubernetes master node:

# yum -y install etcd docker kubernetes


Configure etcd by editing /etc/etcd/etcd.conf as follows:

ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"


Here, ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" makes etcd listen on port 2379 on all network interfaces.
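
If you want to verify that etcd is reachable on that port before continuing, a quick check such as the following should work from the master or any other host (the /version endpoint and the cluster-health subcommand are both part of etcd v2):

# curl http://192.168.169.120:2379/version
# etcdctl cluster-health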

Configure Kubernetes on the master node by editing /etc/kubernetes/config:

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.169.120:8080"


KUBE_MASTER="--master=http://192.168.169.120:8080" tells the Kubernetes controller-manager, scheduler, and proxy processes the service address of the apiserver.

Edit the configuration file /etc/kubernetes/apiserver:

KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""


This configuration makes the apiserver listen on port 8080 on all network interfaces and tells it the address of the etcd service.

Now start the etcd, docker, apiserver, controller-manager, and scheduler services on the master node and check their status:

# for SERVICES in etcd docker kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
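
Once the services are up, the apiserver can be smoke-tested with the two commands below (componentstatuses reports the health of the scheduler, controller-manager, and etcd as seen by the apiserver):

# curl http://localhost:8080/version
# kubectl get componentstatuses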


Define the flannel network configuration in etcd:

# etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'


When we set up the Kubernetes nodes later, we will see that the /atomic.io/network/config key in etcd is read by flannel on the nodes to set up the overlay network and its iptables rules.
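
You can read the key back to confirm it was stored as expected:

# etcdctl get /atomic.io/network/config
{"Network":"172.17.0.0/16"}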

We can now check the cluster with kubectl get nodes. Of course, no nodes have joined the cluster yet, so the output is empty:

# kubectl get nodes
NAME              STATUS    AGE


Installing and Configuring the Kubernetes Nodes

Install flannel, docker, and kubernetes on each Kubernetes node:

# yum -y install flannel docker kubernetes


Configure flannel by editing /etc/sysconfig/flanneld as follows:

FLANNEL_ETCD="http://192.168.169.120:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"


This tells the flanneld process where to find the etcd service and under which etcd key the network configuration is stored.

Configure Kubernetes on the nodes. The /etc/kubernetes/config file on both nodes (192.168.169.121 and 192.168.169.124) is identical to the one on the master (192.168.169.120):

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.169.120:8080"


As on the master, KUBE_MASTER="--master=http://192.168.169.120:8080" points the Kubernetes controller-manager, scheduler, and proxy processes at the apiserver.

The /etc/kubernetes/kubelet configuration files on the two nodes, however, differ slightly.

On node 192.168.169.121, /etc/kubernetes/kubelet:

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.169.121"
KUBELET_API_SERVER="--api-servers=http://192.168.169.120:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""


On node 192.168.169.124, /etc/kubernetes/kubelet:

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.169.124"
KUBELET_API_SERVER="--api-servers=http://192.168.169.120:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""


On each of the two Kubernetes nodes, start the kube-proxy, kubelet, docker, and flanneld services and check their status:

# for SERVICES in kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
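
To check that flanneld obtained a subnet lease from etcd on each node, you can inspect the subnet file it writes (by default /run/flannel/subnet.env) and the network interfaces. This is a sketch; with the default udp backend the overlay interface is flannel0 (the vxlan backend would create flannel.1 instead):

# cat /run/flannel/subnet.env   # FLANNEL_SUBNET should be a /24 inside 172.17.0.0/16
# ip -4 addr show flannel0      # the flannel overlay interface
# ip -4 addr show docker0       # docker0 should fall inside the flannel subnet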


Now run kubectl get nodes on the master again; the two nodes that joined are visible:

# kubectl get nodes
NAME              STATUS    AGE
192.168.169.121   Ready     2d
192.168.169.124   Ready     2d


At this point the Kubernetes cluster is complete, but my story does not end here.

Setting Up the Private Docker Registry

With the cluster in place, I happily went to create some Pods, and failed. Tracking the cause down with kubectl describe and kubectl logs, I found that my cluster could not pull images from gcr.io (Google Container Registry), while pulling from Docker Hub worked fine. That gave me the idea of setting up a private Docker registry. After consulting the documentation, the process is described below.

To make the private registry more secure, I generated a self-signed certificate and configured TLS. First, edit /etc/pki/tls/openssl.cnf and add one line under [ v3_ca ]:

[ v3_ca ]
subjectAltName = IP:192.168.169.125


Then create a self-signed certificate in a certs directory under the current path with the openssl command:

# mkdir -p certs && openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt


During certificate creation you are asked for the country, province, city, organization, department, and common name; for the common name I entered the host's IP, 192.168.169.125. Once the certificate is created, two files appear in the certs directory: the certificate domain.crt and the private key domain.key.
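
Before wiring the certificate into docker, it is worth confirming that the subjectAltName actually made it into the certificate:

# openssl x509 -in certs/domain.crt -noout -text | grep -A 1 'Subject Alternative Name'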

Install docker on 192.168.169.125:

# yum -y install docker


Copy the domain.crt file generated above into the /etc/docker/certs.d/192.168.169.125:5000 directory, then restart the docker daemon:

# mkdir -p /etc/docker/certs.d/192.168.169.125:5000
# cp certs/domain.crt /etc/docker/certs.d/192.168.169.125:5000/ca.crt
# systemctl restart docker


Run the registry container on the private registry node 192.168.169.125, publishing the container's port 5000:

# docker run -d -p 5000:5000 --restart=always --name registry \
    -v `pwd`/certs:/certs \
    -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
    -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
    registry:2
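
At this point the registry can be smoke-tested locally. The sketch below assumes the small hello-world image is available (it can be pulled from Docker Hub, which is reachable in my environment); /v2/_catalog is part of the registry v2 API, and the final curl should list hello-world in the catalog:

# docker pull hello-world
# docker tag hello-world 192.168.169.125:5000/hello-world
# docker push 192.168.169.125:5000/hello-world
# curl --cacert certs/domain.crt https://192.168.169.125:5000/v2/_catalog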


Finally, copy domain.crt into the /etc/docker/certs.d/192.168.169.125:5000 directory on every node of the Kubernetes cluster and restart each node's docker daemon. For example, on node 192.168.169.121:

# mkdir -p /etc/docker/certs.d/192.168.169.125:5000
# scp root@192.168.169.125:~/certs/domain.crt /etc/docker/certs.d/192.168.169.125:5000/ca.crt
# systemctl restart docker
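
Each node can verify that it now trusts the registry, for example by listing the catalog with the freshly installed CA file, or by pulling the hello-world test image if you pushed one earlier:

# curl --cacert /etc/docker/certs.d/192.168.169.125:5000/ca.crt https://192.168.169.125:5000/v2/_catalog
# docker pull 192.168.169.125:5000/hello-world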


With that, the private Docker registry is complete.

Setting Up the Kubernetes Web UI

In this section I use the Kubernetes Web UI (kubernetes-dashboard) as a brief demonstration of how to use the private Docker registry.

Since my Kubernetes cluster cannot pull the kubernetes-dashboard image from gcr.io directly, I downloaded the image file beforehand and loaded it with docker load:

# docker load < kubernetes-dashboard-amd64_v1.1.0.tar.gz
# docker images
REPOSITORY                                        TAG                 IMAGE ID            CREATED             SIZE
registry                                          2                   c6c14b3960bd        3 days ago          33.28 MB
ubuntu                                            latest              42118e3df429        9 days ago          124.8 MB
hello-world                                       latest              c54a2cc56cbb        4 weeks ago         1.848 kB
172.28.80.11:5000/kubernetes-dashboard-amd64      v1.1.0              20b7531358be        5 weeks ago         58.52 MB
registry                                          2                   8ff6a4aae657        7 weeks ago         171.5 MB


Tag the loaded kubernetes-dashboard image for the private registry and push it:

# docker tag 20b7531358be 192.168.169.125:5000/kubernetes-dashboard-amd64
# docker push 192.168.169.125:5000/kubernetes-dashboard-amd64
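
The push can be confirmed by querying the repository's tag list, run wherever a copy of domain.crt (or the installed ca.crt) is at hand; the tags/list endpoint is part of the registry v2 API. Note that docker tag without an explicit tag applies :latest, which is what the Deployment below will pull:

# curl --cacert certs/domain.crt https://192.168.169.125:5000/v2/kubernetes-dashboard-amd64/tags/list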


I fetched the kubernetes-dashboard configuration file from the Kubernetes project at https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml and edited it as follows:

# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f <this_file>

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: v1.1.0
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.169.125:5000/kubernetes-dashboard-amd64
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=192.168.169.120:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard


Pay particular attention to two points: 1) the image the Pods pull is 192.168.169.125:5000/kubernetes-dashboard-amd64 from the private Docker registry; 2) the apiserver-host argument is 192.168.169.120:8080, the address of the apiserver on the Kubernetes master node.

Save the edited kubernetes-dashboard.yaml on the Kubernetes master node 192.168.169.120 and create kubernetes-dashboard there with kubectl create:

# kubectl create -f kubernetes-dashboard.yaml


After creation, inspect the details of the Pods and the Service:

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
default       nginx                                   1/1       Running   0          3h
kube-system   kubernetes-dashboard-4164430742-lqhcg   1/1       Running   0          2h


# kubectl describe pods/kubernetes-dashboard-4164430742-lqhcg --namespace="kube-system"
Name:        kubernetes-dashboard-4164430742-lqhcg
Namespace:    kube-system
Node:        192.168.169.124/192.168.169.124
Start Time:    Mon, 01 Aug 2016 16:12:02 +0800
Labels:        app=kubernetes-dashboard,pod-template-hash=4164430742
Status:        Running
IP:        172.17.17.3
Controllers:    ReplicaSet/kubernetes-dashboard-4164430742
Containers:
  kubernetes-dashboard:
    Container ID:    docker://40ab377c5b8a333487f251547e5de51af63570c31f9ba05fe3030a02cbb3660c
    Image:        192.168.169.125:5000/kubernetes-dashboard-amd64
    Image ID:        docker://sha256:20b7531358be693a34eafdedee2954f381a95db469457667afd4ceeb7146cd1f
    Port:        9090/TCP
    Args:
      --apiserver-host=192.168.169.120:8080
    QoS Tier:
      cpu:        BestEffort
      memory:        BestEffort
    State:        Running
      Started:        Mon, 01 Aug 2016 16:12:03 +0800
    Ready:        True
    Restart Count:    0
    Liveness:        http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment Variables:
Conditions:
  Type        Status
  Ready     True
No volumes.
No events.


# kubectl describe service/kubernetes-dashboard --namespace="kube-system"
Name:            kubernetes-dashboard
Namespace:        kube-system
Labels:            app=kubernetes-dashboard
Selector:        app=kubernetes-dashboard
Type:            NodePort
IP:            10.254.213.209
Port:            <unset>    80/TCP
NodePort:        <unset>    31482/TCP
Endpoints:        172.17.17.3:9090
Session Affinity:    None
No events.


The service details show that kubernetes-dashboard is exposed on port 31482 of the nodes. Pointing a browser at that port on either node brings up the Kubernetes Web UI.
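
The NodePort can also be looked up and tested from the command line; 31482 is the randomly assigned port shown above, and either node IP should work because kube-proxy listens on every node:

# kubectl get service kubernetes-dashboard --namespace=kube-system
# curl -s -o /dev/null -w '%{http_code}\n' http://192.168.169.121:31482/   # 200 means the UI is up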
