
Setting up a Kubernetes (v1.3.3) cluster and its management components

2017-01-23 15:01
Contents

1. Overview
1.1. Background
1.2. Purpose
1.3. Scope
2. Cluster environment
3. Installing Docker
3.1. Unpack the install files
3.2. Configure docker.service
3.3. Start docker.service
4. Master configuration
4.1. Configure TLS
4.2. Install calico-etcd
4.3. Install Calico
4.4. Install Kubernetes
5. Node configuration
5.1. Configure TLS
5.2. Configure the worker kubelet
5.3. Install Calico
5.4. Install Kubernetes
6. Configure remote access with kubectl
7. Install the DNS add-on
8. Install the Kubernetes UI add-on (optional)
Appendices
Appendix 1 calico-etcd.manifest
Appendix 2 network-environment
Appendix 3 calico-node.service
Appendix 4 kubelet.service
Appendix 5 kubernetes-master.manifest
Appendix 6 network-environment
Appendix 7 kubelet.service
Appendix 8 kube-proxy.manifest
Appendix 9 skydns.yaml
Appendix 10 kubernetes-dashboard.yaml

1. Overview

1.1. Background

Kubernetes is a leading new distributed-architecture solution built on container technology. It provides complete cluster-management capabilities, including multi-level security and admission control, multi-tenant application support, transparent service registration and discovery, a built-in intelligent load balancer, strong failure detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduler, and fine-grained resource quota management. Kubernetes also ships with a full set of management tools covering development, deployment, testing, and operations monitoring.

1.2. Purpose

This document explains how to build a Kubernetes cluster and how to use its companion components. The deployment described here includes Calico, DNS, and the dashboard.

1.3. Scope

This document guides the setup of a Kubernetes cluster on hosts running RHEL 7.2.

2. Cluster environment

The machines in a Kubernetes cluster are divided into one Master node and a group of worker nodes (Nodes).

The Master runs the cluster-management processes kube-apiserver, kube-controller-manager, and kube-scheduler. Together these implement resource management, Pod management, elastic scaling, security control, monitoring, and error correction for the whole cluster, all automatically.

Nodes are the workers that run the actual applications; the smallest unit Kubernetes manages on a Node is the Pod. Each Node runs the kubelet and kube-proxy service processes, which create, start, monitor, restart, and destroy Pods, and implement a software-mode load balancer.

3. Installing Docker

3.1. Unpack the install files

Unpack the Docker binaries and move them to /usr/bin:

tar -zxf docker-1.12.1.tgz

mv docker/* /usr/bin

3.2. Configure docker.service

[Unit]
Description=Docker Daemon

[Service]
ExecStart=/usr/bin/dockerd \
--insecure-registry=192.168.2.100:5000
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target


3.3. Start docker.service

systemctl start docker.service

systemctl enable docker.service

systemctl status docker.service

4. Master configuration

4.1. Configure TLS

The Master needs the root CA public certificate ca.pem, the apiserver certificate apiserver.pem, and its private key apiserver-key.pem.

1. Create openssl.cnf as follows

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
IP.1 = 10.100.0.1
IP.2 = ${MASTER_IPV4}

2. Define the MASTER_IPV4 variable, setting it to the master node's address:

export MASTER_IPV4=

3. Generate the root CA key:

openssl genrsa -out ca-key.pem 2048

4. Create the self-signed root certificate, specifying the subject information:

openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

5. Generate the apiserver key pair:

openssl genrsa -out apiserver-key.pem 2048

openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf

openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf

You should now have ca.pem, apiserver.pem, and apiserver-key.pem.

6. Move the files to /etc/kubernetes/ssl and make the private key readable by root only

mkdir -p /etc/kubernetes/ssl/

mv -t /etc/kubernetes/ssl/ ca.pem apiserver.pem apiserver-key.pem

Set permissions:

chmod 600 /etc/kubernetes/ssl/apiserver-key.pem

chown root:root /etc/kubernetes/ssl/apiserver-key.pem
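As a sanity check, the certificate flow above can be exercised end-to-end in a throwaway directory. The sketch below uses 192.0.2.10 as an example master address (substitute your own) and only verifies that the resulting apiserver certificate chains to the CA and carries the expected SANs:

```shell
# Sketch: run the section 4.1 certificate flow in a temp dir and verify it.
# 192.0.2.10 is an example master address.
set -e
cd "$(mktemp -d)"
export MASTER_IPV4=192.0.2.10

# openssl.cnf as above; MASTER_IPV4 is expanded by the shell heredoc
cat > openssl.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
IP.1 = 10.100.0.1
IP.2 = ${MASTER_IPV4}
EOF

# Root CA key and self-signed certificate
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

# apiserver key pair, signed by the CA with the v3_req SANs
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf

# The certificate should chain to the CA and list the SANs
openssl verify -CAfile ca.pem apiserver.pem
openssl x509 -in apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"
```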

4.2. Install calico-etcd

Calico needs an etcd instance to store its state; here we install a single-node etcd on the master.

Note: for production deployments a distributed etcd cluster is recommended; this document uses a simple single-node etcd.
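For reference only, a clustered deployment would run one etcd member per machine and extend the command list in Appendix 1 with clustering flags along these lines (the member names and the <NODEn_IP> placeholders are illustrative, not part of this document's setup):

```yaml
# Illustrative flags for one member of a three-node calico-etcd cluster
- "--name=calico-etcd-1"
- "--initial-advertise-peer-urls=http://<NODE1_IP>:6660"
- "--initial-cluster=calico-etcd-1=http://<NODE1_IP>:6660,calico-etcd-2=http://<NODE2_IP>:6660,calico-etcd-3=http://<NODE3_IP>:6660"
- "--initial-cluster-state=new"
```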

1. Download the template manifest file (see Appendix 1 calico-etcd.manifest)

wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/calico-etcd.manifest

2. In calico-etcd.manifest, replace every <MASTER_IPV4> with the master's address.

3. Move the file to /etc/kubernetes/manifests; it will not start until the kubelet does.

mv -f calico-etcd.manifest /etc/kubernetes/manifests

4.3. Install Calico

Install Calico on the master so that the master can forward packets between the nodes.

1. Install the calicoctl tool

wget https://github.com/projectcalico/calico-containers/releases/download/v0.15.0/calicoctl

chmod +x calicoctl

mv calicoctl /usr/bin

2. Pull the Calico image

docker pull calico/node:v0.15.0

3. Download the sample network-environment file from the calico-kubernetes repository (see Appendix 2 network-environment)

wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/network-environment-template

4. Edit network-environment

Replace <KUBERNETES_MASTER> with the master's IP address; this IP must be reachable from the worker nodes.

export ETCD_AUTHORITY=<KUBERNETES_MASTER>:6666

5. Move network-environment to /etc

mv -f network-environment /etc

6. Install and start calico-node (see Appendix 3 calico-node.service)

wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/common/calico-node.service

systemctl enable /etc/systemd/calico-node.service

systemctl start calico-node.service

4.4. Install Kubernetes

We use the kubelet to bootstrap the Kubernetes services.

1. Download and install the kubelet and kubectl binaries

wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl

wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubelet

chmod +x /usr/bin/kubelet /usr/bin/kubectl

2. Install the kubelet unit file and enable it (see Appendix 4 kubelet.service)

sudo wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kubelet.service

sudo systemctl enable /etc/systemd/kubelet.service

Start the service:

sudo systemctl start kubelet.service

3. Download and install the master manifest file; the Kubernetes master services will start automatically.

(see Appendix 5 kubernetes-master.manifest)

mkdir -p /etc/kubernetes/manifests

wget -N -P /etc/kubernetes/manifests https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kubernetes-master.manifest

4. Check with docker ps that the processes are running; after a short while you should see the etcd, apiserver, controller-manager, scheduler, and kube-proxy containers.

Note: it may take some time for all containers to start; don't worry about the order in which docker ps shows them.

5. Node configuration

Perform the following steps on each node.

5.1. Configure TLS

Each worker needs three files: ca.pem, worker.pem, and worker-key.pem. We already have ca.pem and ca-key.pem on the master; a key pair must be generated for each worker node.

1. Create the worker-openssl.cnf file

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = $ENV::WORKER_IP

2. Export the worker's IP address and generate its key pair

export WORKER_IP=

Generate the keys:

openssl genrsa -out worker-key.pem 2048

openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=worker-key" -config worker-openssl.cnf

openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf

3. Send the key pair to the worker node

Copy the three files (ca.pem, worker.pem, and worker-key.pem) to the worker node.

4. Move the keys into /etc/kubernetes/ssl

sudo mkdir -p /etc/kubernetes/ssl/

sudo mv -t /etc/kubernetes/ssl/ ca.pem worker.pem worker-key.pem

Set permissions:

sudo chmod 600 /etc/kubernetes/ssl/worker-key.pem

sudo chown root:root /etc/kubernetes/ssl/worker-key.pem
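Because worker-openssl.cnf reads the address through $ENV::WORKER_IP, exporting the variable is all the per-node customization needed. The sketch below runs the flow in a temp dir with an example address and a throwaway CA (standing in for the cluster CA from section 4.1) and checks the SAN:

```shell
# Sketch: worker certificate flow driven by $ENV::WORKER_IP, in a temp dir.
set -e
cd "$(mktemp -d)"
export WORKER_IP=192.0.2.21   # example node address

cat > worker-openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = $ENV::WORKER_IP
EOF

# Throwaway CA for the sketch; the real flow uses the CA from section 4.1
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

# Worker key pair; the SAN is pulled from the WORKER_IP environment variable
openssl genrsa -out worker-key.pem 2048
openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=worker-key" -config worker-openssl.cnf
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf

openssl x509 -in worker.pem -noout -text | grep "IP Address"
```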

5.2. Configure the worker kubelet

With the certificates in place, create a kubeconfig file for the worker at /etc/kubernetes/worker-kubeconfig.yaml, replacing <KUBERNETES_MASTER> with the master's address:

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://<KUBERNETES_MASTER>:443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context

5.3. Install Calico

On your compute nodes, it is important to install Calico before Kubernetes. We install Calico using the provided calico-node.service unit file.

1. Install the calicoctl binary

wget https://github.com/projectcalico/calico-containers/releases/download/v0.15.0/calicoctl

chmod +x calicoctl

mv calicoctl /usr/bin

2. Pull the calico/node container image

docker pull calico/node:v0.15.0

3. Download the network-environment template from the calico-cni repository (see Appendix 6 network-environment)

wget -O network-environment https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/network-environment-template

4. Edit network-environment with this node's settings

① Replace <DEFAULT_IPV4> with this node's address.

② Replace <KUBERNETES_MASTER> with the master's hostname or IP address.

5. Move network-environment to /etc

mv -f network-environment /etc

6. Install and start the calico-node service

wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/common/calico-node.service

systemctl enable /etc/systemd/calico-node.service

systemctl start calico-node.service

7. Install the Calico CNI plugin

mkdir -p /opt/cni/bin/

wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico

wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.0.0/calico-ipam

chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam

8. Create the Calico CNI network configuration

This configuration tells Kubernetes to create a network named calico-k8s-network and to use the Calico plugin for it. Create /etc/cni/net.d/10-calico.conf as follows, replacing <KUBERNETES_MASTER> with the master's address (the file should be identical on every node).

mkdir -p /etc/cni/net.d

Write the Calico configuration file:

cat >/etc/cni/net.d/10-calico.conf <<EOF
{
"name": "calico-k8s-network",
"type": "calico",
"etcd_authority": "<KUBERNETES_MASTER>:6666",
"log_level": "info",
"ipam": {
"type": "calico-ipam"
}
}
EOF
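A malformed 10-calico.conf can be rejected at pod-setup time with little feedback, so it is worth validating the JSON before the kubelet picks it up. A minimal check, writing to a temp file with an example master address:

```shell
# Sketch: validate that the CNI config is well-formed JSON before installing it.
# 192.0.2.10 stands in for the master address.
set -e
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
    "name": "calico-k8s-network",
    "type": "calico",
    "etcd_authority": "192.0.2.10:6666",
    "log_level": "info",
    "ipam": {
        "type": "calico-ipam"
    }
}
EOF
python3 -m json.tool < "$conf" > /dev/null && echo "CNI config: valid JSON"
```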


Since this is the only network we create, it becomes the kubelet's default network.

9. Confirm that Calico started correctly

calicoctl status

You should see Felix (the Calico agent on each node) in the running state, and the BGP section should list the other nodes configured against the master, with "Established" in the "Info" column:

$ calicoctl status
calico-node container is running. Status: Up 15 hours
Running felix version 1.3.0rc5

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
|  Peer address |     Peer type     | State |  Since   |     Info    |
+---------------+-------------------+-------+----------+-------------+
| 172.18.203.41 | node-to-node mesh |   up  | 17:32:26 | Established |
| 172.18.203.42 | node-to-node mesh |   up  | 17:32:25 | Established |
+---------------+-------------------+-------+----------+-------------+

IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer address | Peer type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+


If the "Info" column shows "Active" or anything else, Calico cannot connect to the other hosts. Check that the peer addresses are correct and review the settings in network-environment.

5.4. Install Kubernetes

1. Download and install the kubelet binary

wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubelet

chmod +x /usr/bin/kubelet

2. Install the kubelet unit file (see Appendix 7 kubelet.service)

wget -N -P /etc/systemd https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/kubelet.service

3. Start the kubelet

systemctl enable /etc/systemd/kubelet.service

systemctl start kubelet.service

4. Download the kube-proxy manifest (see Appendix 8 kube-proxy.manifest)

wget https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/node/kube-proxy.manifest

5. Edit the sample file

Replace <KUBERNETES_MASTER> in the file with the master's address, then move it into place.

mkdir -p /etc/kubernetes/manifests/

mv kube-proxy.manifest /etc/kubernetes/manifests/

6. Configure remote access with kubectl

To manage your cluster from a separate host (for example, your laptop), you need the root CA certificate generated earlier plus an admin key pair (ca.pem, admin.pem, admin-key.pem). Perform the following steps on the machine you will use to manage the cluster remotely.

1. Download the kubectl binary

wget -N -P /usr/bin http://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl

chmod +x /usr/bin/kubectl

2. Generate an admin public/private key pair.

3. Export the required variables, substituting your hosts' values:

export CA_CERT_PATH=

export ADMIN_CERT_PATH=

export ADMIN_KEY_PATH=

export MASTER_IPV4=

4. Configure the admin credentials for kubectl on your host

kubectl config set-cluster calico-cluster --server=https://${MASTER_IPV4} --certificate-authority=${CA_CERT_PATH}

kubectl config set-credentials calico-admin --certificate-authority=${CA_CERT_PATH} --client-key=${ADMIN_KEY_PATH} --client-certificate=${ADMIN_CERT_PATH}

kubectl config set-context calico --cluster=calico-cluster --user=calico-admin

kubectl config use-context calico

Check the result of kubectl get nodes; it should list each node.

7. Install the DNS add-on

Most Kubernetes deployments require the DNS add-on for service discovery. Create the skydns service and replication controller with the command below; this relies on the kubectl configuration from the previous section. (see Appendix 9 skydns.yaml)

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/dns/skydns.yaml

8. Install the Kubernetes UI add-on (optional)

The Kubernetes UI can be installed by applying the following file with kubectl:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-cni/k8s-1.1-docs/samples/kubernetes/master/kube-ui/kube-ui.yaml

Note: the Kubernetes UI add-on has been deprecated and replaced by the Kubernetes dashboard, which you can install with the following command (see Appendix 10 kubernetes-dashboard.yaml):

kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

See also: Kubernetes Dashboard.

At this point the Kubernetes cluster is fully deployed; you can follow the standard documentation to set up other services on your cluster.

Appendices

Appendix 1 calico-etcd.manifest

apiVersion: v1
kind: Pod
metadata:
  name: calico-etcd
  namespace: calico-system
spec:
  hostNetwork: true
  containers:
    - name: calico-etcd-container
      image: 192.168.2.100:5000/etcd:3.0.3
      command:
        - "/usr/local/bin/etcd"
        - "--name=calico-etcd"
        - "--data-dir=/var/etcd/calico-data"
        - "--advertise-client-urls=http://<MASTER_IPV4>:6666"
        - "--listen-client-urls=http://0.0.0.0:6666"
        - "--listen-peer-urls=http://0.0.0.0:6660"
      securityContext:
        privileged: true
      ports:
        - name: clientport
          containerPort: 6666
          hostPort: 6666
      volumeMounts:
        - mountPath: /var/etcd
          name: varetcd
  volumes:
    - name: "varetcd"
      hostPath:
        path: "/mnt/master-pd/var/etcd"


Appendix 2 network-environment

# This host's IPv4 address (the source IP address used to reach other nodes
# in the Kubernetes cluster).
DEFAULT_IPV4=<KUBERNETES_MASTER>

# IP and port of etcd instance used by Calico
ETCD_AUTHORITY=<KUBERNETES_MASTER>:6666


Appendix 3 calico-node.service

[Unit]
Description=Calico per-node agent
Documentation=https://github.com/projectcalico/calico-docker
Requires=docker.service
After=docker.service

[Service]
User=root
EnvironmentFile=/etc/network-environment
PermissionsStartOnly=true
ExecStart=/usr/bin/calicoctl node --ip=${DEFAULT_IPV4} --detach=false
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target


Appendix 4 kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
Requires=docker.service
After=docker.service

[Service]
ExecStart=/usr/bin/kubelet \
--register-node=false \
--allow-privileged=true \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.100.0.10 \
--cluster_domain=cluster.local \
--logtostderr=true
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target


Appendix 5 kubernetes-master.manifest

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller
  namespace: kube-system
  labels:
    k8s-app: kube-infra
spec:
  hostNetwork: true
  volumes:
    - name: "etc-kubernetes"
      hostPath:
        path: "/etc/kubernetes"
    - name: ssl-certs-kubernetes
      hostPath:
        path: /etc/kubernetes/ssl
    - name: "ssl-certs-host"
      hostPath:
        path: "/usr/share/ca-certificates"
    - name: "var-run-kubernetes"
      hostPath:
        path: "/var/run/kubernetes"
    - name: "etcd-datadir"
      hostPath:
        path: "/var/lib/etcd"
    - name: "usr"
      hostPath:
        path: "/usr"
    - name: "lib64"
      hostPath:
        path: "/lib64"
  containers:
    - name: etcd
      image: 192.168.2.100:5000/etcd:3.0.3
      command:
        - "/usr/local/bin/etcd"
        - "--data-dir=/var/lib/etcd"
        - "--advertise-client-urls=http://127.0.0.1:2379"
        - "--listen-client-urls=http://127.0.0.1:2379"
        - "--listen-peer-urls=http://127.0.0.1:2380"
        - "--name=etcd"
      volumeMounts:
        - mountPath: /var/lib/etcd
          name: "etcd-datadir"

    - name: kube-apiserver
      image: 192.168.2.100:5000/kube-apiserver:1.3.5
      command:
        - /usr/local/bin/kube-apiserver
        - --allow-privileged=true
        - --bind-address=0.0.0.0
        - --insecure-bind-address=0.0.0.0
        - --secure-port=443
        - --insecure-port=8080
        - --etcd-servers=http://127.0.0.1:2379
        - --service-cluster-ip-range=10.100.0.0/16
        - --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
        - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
        - --client-ca-file=/etc/kubernetes/ssl/ca.pem
        - --logtostderr=true
      ports:
        - containerPort: 443
          hostPort: 443
          name: https
        - containerPort: 8080
          hostPort: 8080
          name: local
      volumeMounts:
        - mountPath: /etc/kubernetes/ssl
          name: ssl-certs-kubernetes
          readOnly: true
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true
        - mountPath: /etc/kubernetes
          name: "etc-kubernetes"
        - mountPath: /var/run/kubernetes
          name: "var-run-kubernetes"

    - name: kube-controller-manager
      image: 192.168.2.100:5000/kube-controller-manager:1.3.5
      command:
        - /usr/local/bin/kube-controller-manager
        - --master=http://127.0.0.1:8080
        - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --root-ca-file=/etc/kubernetes/ssl/ca.pem
      livenessProbe:
        httpGet:
          host: 127.0.0.1
          path: /healthz
          port: 10252
        initialDelaySeconds: 15
        timeoutSeconds: 1
      volumeMounts:
        - mountPath: /etc/kubernetes/ssl
          name: ssl-certs-kubernetes
          readOnly: true
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true

    - name: kube-scheduler
      image: 192.168.2.100:5000/kube-scheduler:1.3.5
      command:
        - /usr/local/bin/kube-scheduler
        - --master=http://127.0.0.1:8080
      livenessProbe:
        httpGet:
          host: 127.0.0.1
          path: /healthz
          port: 10251
        initialDelaySeconds: 15
        timeoutSeconds: 1

#    - name: kube-proxy
#      image: 192.168.2.100:5000/kube-proxy:1.3.5
#      command:
#      - /usr/local/bin/kube-proxy
#      - --master=http://127.0.0.1:8080
#      - --proxy-mode=iptables
#      securityContext:
#        privileged: true
#      volumeMounts:
#      - mountPath: /etc/ssl/certs
#        name: ssl-certs-host
#        readOnly: true


Appendix 6 network-environment

# This host's IPv4 address (the source IP address used to reach other nodes
# in the Kubernetes cluster).
DEFAULT_IPV4=<DEFAULT_IPV4>

# The Kubernetes master IP
KUBERNETES_MASTER=<KUBERNETES_MASTER>

# IP and port of etcd instance used by Calico
ETCD_AUTHORITY=<KUBERNETES_MASTER>:6666


Appendix 7 kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=calico-node.service
Requires=calico-node.service

[Service]
EnvironmentFile=/etc/network-environment
ExecStart=/usr/bin/kubelet \
--address=0.0.0.0 \
--allow-privileged=true \
--cluster-dns=10.100.0.10 \
--cluster-domain=cluster.local \
--config=/etc/kubernetes/manifests \
--hostname-override=${DEFAULT_IPV4} \
--api-servers=https://${KUBERNETES_MASTER}:443 \
--kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
--tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem \
--tls-cert-file=/etc/kubernetes/ssl/worker.pem \
--logtostderr=true \
--network-plugin=cni \
--network-plugin-dir=/etc/cni/net.d
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target


Appendix 8 kube-proxy.manifest

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
spec:
  hostNetwork: true
  containers:
    - name: kube-proxy
      image: 192.168.2.100:5000/kube-proxy:v1.3.5
      command:
        - /hyperkube
        - proxy
        - --master=https://<KUBERNETES_MASTER>
        - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
        - --proxy-mode=iptables
      securityContext:
        privileged: true
      volumeMounts:
        - mountPath: /etc/ssl/certs
          name: "ssl-certs"
        - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
          name: "kubeconfig"
          readOnly: true
        - mountPath: /etc/kubernetes/ssl
          name: "etc-kube-ssl"
          readOnly: true
  volumes:
    - name: "ssl-certs"
      hostPath:
        path: "/usr/share/ca-certificates"
    - name: "kubeconfig"
      hostPath:
        path: "/etc/kubernetes/worker-kubeconfig.yaml"
    - name: "etc-kube-ssl"
      hostPath:
        path: "/etc/kubernetes/ssl"


Appendix 9 skydns.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: kube-system

---

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.10
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP

---

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v9
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v9
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v9
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v9
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - name: etcd
          image: 192.168.2.100:5000/google_containers/etcd:2.0.9
          resources:
            limits:
              cpu: 100m
              memory: 50Mi
          command:
            - /usr/local/bin/etcd
            - -data-dir
            - /var/etcd/data
            - -listen-client-urls
            - http://127.0.0.1:2379,http://127.0.0.1:4001
            - -advertise-client-urls
            - http://127.0.0.1:2379,http://127.0.0.1:4001
            - -initial-cluster-token
            - skydns-etcd
          volumeMounts:
            - name: etcd-storage
              mountPath: /var/etcd/data
        - name: kube2sky
          image: 192.168.2.100:5000/google_containers/kube2sky:1.11
          resources:
            limits:
              cpu: 100m
              memory: 50Mi
          args:
            # command = "/kube2sky"
            - -domain=cluster.local
            - -kubecfg_file=/etc/kubernetes/worker-kubeconfig.yaml
          volumeMounts:
            - mountPath: /etc/ssl/certs
              name: "ssl-certs"
            - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
              name: "worker-kubeconfig"
              readOnly: true
            - mountPath: /etc/kubernetes/ssl
              name: "etc-kube-ssl"
              readOnly: true
        - name: skydns
          image: 192.168.2.100:5000/google_containers/skydns:2015-03-11-001
          resources:
            limits:
              cpu: 100m
              memory: 50Mi
          args:
            # command = "/skydns"
            - -machines=http://localhost:4001
            - -addr=0.0.0.0:53
            - -domain=cluster.local.
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 1
            timeoutSeconds: 5
        - name: healthz
          image: 192.168.2.100:5000/google_containers/exechealthz:1.0
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
          args:
            - -cmd=nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
            - -port=8080
          ports:
            - containerPort: 8080
              protocol: TCP
      volumes:
        - name: etcd-storage
          emptyDir: {}
        - name: "ssl-certs"
          hostPath:
            path: "/usr/share/ca-certificates"
        - name: "worker-kubeconfig"
          hostPath:
            path: "/etc/kubernetes/worker-kubeconfig.yaml"
        - name: "etc-kube-ssl"
          hostPath:
            path: "/etc/kubernetes/ssl"
      dnsPolicy: Default  # Don't use cluster DNS.


Appendix 10 kubernetes-dashboard.yaml

# This file should be kept in sync with cluster/gce/coreos/kube-manifests/addons/dashboard/dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: kubernetes-dashboard
ports:
- port: 80
targetPort: 9090
---
apiVersion: v1
kind: ReplicationController
metadata:
name: kubernetes-dashboard-v1.1.1
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
version: v1.1.0
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
version: v1.1.0
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: kubernetes-dashboard
image: 192.168.2.100:5000/kubernetes-dashboard-amd64:v1.1.1
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 9090
args:
- --apiserver-host=http://192.168.31.68:8080
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30


Reference: official documentation at http://kubernetes.io/docs/getting-started-guides/ubuntu-calico/