
Deploying Kubernetes on CentOS 7 Without Network Access

2015-11-12 14:28
Note: the "kubernetes 1.1" packages used in this article are in practice kubernetes 0.8. For the latest deployment procedure, see the next article: Kubernetes + Flannel deployment on CentOS.

I. Deployment environment

1) Master host

IP: 10.11.150.74; hostname: tc_150_74; hostname in the DNS configuration: tc-150-74; kernel: Linux version 3.10.0-229.11.1.el7.x86_64

2) Node host

IP: 10.11.150.73; hostname: tc_150_73; hostname in the DNS configuration: tc-150-73; kernel: Linux version 3.10.0-123.el7.x86_64

The deployment process mainly follows the official Kubernetes getting-started guide.

II. Preparations

1) Download the individual RPM packages (backed up on Baidu Pan): cadvisor-0.14.0, docker-1.7.1, etcd-0.4.6, kubernetes-client-1.1.0, kubernetes-master-1.1.0, kubernetes-node-1.1.0, and etcdctl. The main download source is the Fedora mirror repositories.
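Since the machines being set up have no outside network access, the RPMs have to be downloaded on a host that does and then copied over. A minimal sketch, assuming the packages were collected in ./rpms on such a host (paths and user are placeholders):

scp ./rpms/*.rpm root@10.11.150.74:/root/rpms/
scp ./rpms/*.rpm root@10.11.150.73:/root/rpms/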

2) Install docker on both 73 and 74. Installing docker on the master is not strictly required, but docker is a dependency of kubernetes-node, so docker must be present on any host where kubernetes-node is to be installed. The official guide recommends docker-1.6.2 or docker-1.7.1; in practice, installing kubernetes-node on top of any docker version other than 1.7.1 produces a conflict. For example, installing docker-1.8.2 first and then kubernetes-node fails with:

Error: docker-engine conflicts with docker-1.8.2-7.el7.centos.x86_64
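If a newer docker (such as 1.8.2 here) is already on the machine, back it out before switching to 1.7.1. A sketch (check the exact package names on your system first):

rpm -qa | grep docker
sudo yum remove docker -y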


Install docker with:

sudo yum localinstall docker-1.7.1-115.el7.x86_64.rpm -y


If an error like the following appears while installing docker:

Error: Package: docker-1.7.1-108.el7.centos.x86_64 (/docker-1.7.1-108.el7.centos.x86_64)
Requires: docker-selinux >= 1.7.1-108.el7.centos
Available: docker-selinux-1.7.1-108.el7.x86_64 (7ASU1-updates)
docker-selinux = 1.7.1-108.el7


then download and install the matching version of docker-selinux first.
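One way to satisfy the dependency is to install the matching docker-selinux RPM in the same yum transaction as docker (the file names below are assumptions; use whatever versions your mirror provides):

sudo yum localinstall docker-selinux-1.7.1-108.el7.x86_64.rpm docker-1.7.1-108.el7.centos.x86_64.rpm -y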

3) Install etcd

sudo yum localinstall etcd-0.4.6-7.el7.centos.x86_64.rpm -y


etcd only needs to be installed on the master host.

4) Install cAdvisor (optional)

sudo yum localinstall cadvisor-0.14.0-1.el7.x86_64.rpm -y


cAdvisor only needs to be installed on the node host.

5) Install Kubernetes

The client package must be installed before the master and node packages.

sudo yum localinstall kubernetes-client-1.1.0-0.17.git388061f.fc23.x86_64.rpm -y
sudo yum localinstall kubernetes-master-1.1.0-0.17.git388061f.fc23.x86_64.rpm -y
sudo yum localinstall kubernetes-node-1.1.0-0.17.git388061f.fc23.x86_64.rpm -y
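A quick way to confirm that everything landed is to query the installed packages afterwards (a sketch; on each host list only the packages that were actually installed there):

rpm -q docker etcd kubernetes-client kubernetes-master kubernetes-node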


III. Configuration and starting the services

1) Starting etcd

sudo etcd -peer-addr 10.11.150.74:7001 -addr 10.11.150.74:4001 -peer-bind-addr 0.0.0.0:7001 -bind-addr 0.0.0.0:4001 &
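The trailing ampersand just backgrounds the process. Before continuing it is worth confirming that the client port answers; a minimal check against etcd's HTTP API (the /version endpoint should return the etcd version string):

curl http://10.11.150.74:4001/version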


etcd must be up and reachable before anything else is started; otherwise kube-apiserver fails at startup with errors such as:

I1111 13:25:42.451759    7611 plugins.go:69] No cloud provider specified.
I1111 13:25:42.452027    7611 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I1111 13:25:42.453609    7611 master.go:295] Will report 10.11.150.74 as public IP address.
[restful] 2015/11/11 13:25:42 log.go:30: [restful/swagger] listing is available at https://10.11.150.74:6443/swaggerapi/ [restful] 2015/11/11 13:25:42 log.go:30: [restful/swagger] https://10.11.150.74:6443/swaggerui/ is mapped to folder /swagger-ui/
F1111 13:25:52.516153    7611 controller.go:80] Unable to perform initial IP allocation check: unable to refresh the service IP block: no kind "RangeAllocation" is registered for version "v1beta3"


On 73 and 74, use etcdctl to check the state of etcd. The following command returns the existing key hierarchy when everything is working:

./etcdctl --peers="http://10.11.150.74:7001" ls
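Writing a test key and reading it back is another quick sanity check (the key name is arbitrary):

./etcdctl --peers="http://10.11.150.74:7001" set /test hello
./etcdctl --peers="http://10.11.150.74:7001" get /test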


2) /etc/hosts configuration

Edit /etc/hosts on both 73 and 74 and add the following entries:

10.11.150.73 tc-150-73
10.11.150.74 tc-150-74


Note that the hostnames must be well-formed: a name such as "tc_150_73" is invalid, and using it causes the following error later when the node is created from node.json (a sketch of which is shown below):

The Node "tc_150_73" is invalid:metadata.name: invalid value 'tc_150_73': must be a DNS subdomain (at most 253 characters, matching regex [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*): e.g. "example.com"
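For reference, the node registration referred to above looks roughly like the following. This is a sketch only: the apiVersion and field layout are assumptions based on the API generation this package era speaks, and the label matches the one that shows up in kubectl get nodes at the end of the article.

cat > node.json <<'EOF'
{
  "apiVersion": "v1beta3",
  "kind": "Node",
  "metadata": {
    "name": "tc-150-73",
    "labels": { "name": "node-label" }
  },
  "spec": {
    "externalID": "tc-150-73"
  }
}
EOF
kubectl create -f node.json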


3) The config file

Edit /etc/kubernetes/config on both 73 and 74 so that it reads:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://tc-150-74:8080"


4) Disable the firewall

systemctl disable iptables firewalld
systemctl stop iptables firewalld
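A quick check that the firewall is really off (expect "inactive", or "unknown" if the unit is not installed):

systemctl is-active firewalld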


5) Configure the apiserver

Edit /etc/kubernetes/apiserver on the master (74) so that it reads:

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet_port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--insecure-bind-address=0.0.0.0 --insecure-port=8080"
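Once the master services are started in step 7 below, the insecure port configured here gives a quick way to verify that the apiserver is serving (treat the exact output as an assumption for this old version; /healthz normally just returns ok):

curl http://tc-150-74:8080/healthz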


6) Configure the kubelet

Edit /etc/kubernetes/kubelet on the node (73) so that it reads:

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=tc-150-73"

# location of the api-server
KUBELET_API_SERVER="--api_servers=http://tc-150-74:8080"

# Add your own!
KUBELET_ARGS="--pod-infra-container-image=10.11.150.76:5000/kubernetes/pause:latest"


Note the --pod-infra-container-image setting in KUBELET_ARGS. The kubelet help text describes it as:

--pod-infra-container-image="gcr.io/google_containers/pause:0.8.0": The image whose network/ipc namespaces containers in each pod will use.


That is, every pod needs the pause base image at creation time, and by default it is pulled from Google's container registry. Because that registry is unreachable from behind the Great Firewall, pull the image on a machine that can reach it, push it into a registry of your own, and point the kubelet at that copy (in this article it lives in the private registry on host 76).
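A sketch of staging the image, assuming a machine with outside access whose docker daemon is allowed to talk to the private registry at 10.11.150.76:5000 (the source tag 0.8.0 comes from the help text above; the target tag matches the kubelet setting):

docker pull gcr.io/google_containers/pause:0.8.0
docker tag gcr.io/google_containers/pause:0.8.0 10.11.150.76:5000/kubernetes/pause:latest
docker push 10.11.150.76:5000/kubernetes/pause:latest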

7) Start the master services

On 74, create the following script to restart and enable kube-apiserver, kube-controller-manager, and kube-scheduler as systemd services:

#!/bin/bash

for SERVICES in kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES -l
done
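The node services are started the same way. A minimal sketch for the node host (73), covering the services whose startup output appears below (kube-proxy, kubelet, and docker):

#!/bin/bash

for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES -l
done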


Run the scripts. On a successful start each service reports active (running); the output below was captured on the node host (tc_150_73) and shows kube-proxy, kubelet, and docker coming up:

kube-proxy.service - Kubernetes Kube-Proxy Server
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled)
Active: active (running) since 四 2015-11-12 13:30:01 CST; 85ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes Main PID: 20164 (kube-proxy)
CGroup: /system.slice/kube-proxy.service
└─20164 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http...

11月 12 13:30:01 tc_150_73 systemd[1]: Started Kubernetes Kube-Proxy Server.
kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled)
Active: active (running) since 四 2015-11-12 13:30:01 CST; 124ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes Main PID: 20207 (kubelet)
CGroup: /system.slice/kubelet.service
└─20207 /usr/bin/kubelet --logtostderr=true --v=0 --api_servers=ht...

11月 12 13:30:01 tc_150_73 systemd[1]: Started Kubernetes Kubelet Server.
11月 12 13:30:01 tc_150_73 kubelet[20207]: I1112 13:30:01.478089   20207 ma..."
11月 12 13:30:01 tc_150_73 kubelet[20207]: I1112 13:30:01.479634   20207 fs...r
11月 12 13:30:01 tc_150_73 kubelet[20207]: f48ee5c424bbed5 major:253 minor:...]
11月 12 13:30:01 tc_150_73 kubelet[20207]: I1112 13:30:01.489689   20207 ma...4
11月 12 13:30:01 tc_150_73 kubelet[20207]: Scheduler:none} 253:15:{Name:dm-...
11月 12 13:30:01 tc_150_73 kubelet[20207]: :32768 Type:Instruction Level:1} ...
11月 12 13:30:01 tc_150_73 kubelet[20207]: I1112 13:30:01.529029   20207 ma...}
11月 12 13:30:01 tc_150_73 kubelet[20207]: I1112 13:30:01.529852   20207 pl....
Hint: Some lines were ellipsized, use -l to show in full.
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
Active: active (running) since 四 2015-11-12 13:30:03 CST; 81ms ago
Docs: http://docs.docker.com Main PID: 20264 (docker)
CGroup: /system.slice/docker.service
└─20264 /usr/bin/docker -d --selinux-enabled --add-registry regist...

11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.4491017..."
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.4532426..."
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.4562807..."
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.6940850..."
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.6944571..."
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.6944879..."
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.6945164...1
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.6953038..."
11月 12 13:30:03 tc_150_73 systemd[1]: Started Docker Application Container....
11月 12 13:30:03 tc_150_73 docker[20264]: time="2015-11-12T13:30:03.7360503..."
Hint: Some lines were ellipsized, use -l to show in full.


At this point, run kubectl get nodes on 74; if everything above is configured correctly, the node's status shows as Ready:

NAME        LABELS            STATUS
tc-150-73   name=node-label   Ready