kubernetes_01_kubeadm方式安装_03_control-plane_node_20190917
kube-apiserver-k8s-master01
etcd-k8s-master01
kube-controller-manager-k8s-master01
kube-scheduler-k8s-master01
kubelet
kube-proxy-lcgr4
Principle: master-node high availability is mainly a matter of making the API server highly available (keepalived/heartbeat + nginx/haproxy)
Solution: the Wise2C (睿云) Breeze approach — keepalived + haproxy
Four configuration/script files are involved:
/data/lb/etc/haproxy.cfg
/data/lb/start-haproxy.sh
/data/lb/start-keepalived.sh
kubeadm-config.yaml
1. Start the HAProxy and Keepalived containers on the master node
1) Start the HAProxy container
1.1) Import the images
$ scp haproxy-keepalived.zip root@k8s-ha-master01:~/k8s-install
[root@k8s-ha-master01 ~]# yum install unzip
[root@k8s-ha-master01 k8s-install]# unzip haproxy-keepalived.zip
[root@k8s-ha-master01 k8s-install]# docker load -i haproxy.tar
[root@k8s-ha-master01 k8s-install]# docker load -i keepalived.tar
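As an optional sanity check, the two imported images can be listed by the names the start scripts below expect (wise2c/haproxy-k8s and wise2c/keepalived-k8s):

```shell
# Optional check: confirm both load-balancer images were imported.
# Defined as a function so it can be reused on each HA master.
check_lb_images() {
  docker images | grep -E 'wise2c/(haproxy|keepalived)-k8s'
}
```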
1.2) Edit the HAProxy configuration file
[root@k8s-ha-master01 etc]# vim /data/lb/etc/haproxy.cfg
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    #chroot /usr/share/haproxy
    #user haproxy
    #group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend stats-front
    bind *:8081
    mode http
    default_backend stats-back

# keypoint_01 bind *:6444
frontend fe_k8s_6444
    bind *:6444
    mode tcp
    timeout client 1h
    log global
    option tcplog
    default_backend be_k8s_6443
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws

backend stats-back
    mode http
    balance roundrobin
    stats uri /haproxy/stats
    stats auth pxcstats:secret

backend be_k8s_6443
    mode tcp
    timeout queue 1h
    timeout server 1h
    timeout connect 1h
    log global
    balance roundrobin
    # keypoint_02 IP of the first node to be started
    server rancher01 192.168.43.110:6443
    # server rancher02 192.168.43.120:6443
    # server rancher03 192.168.43.130:6443
1.3) Edit the HAProxy container start script
[root@k8s-ha-master01 lb]# vim /data/lb/start-haproxy.sh
#!/bin/bash
MasterIP1=192.168.43.110
MasterIP2=192.168.43.120
MasterIP3=192.168.43.130
MasterPort=6443

docker run -d --restart=always --name HAProxy-K8S -p 6444:6444 \
  -e MasterIP1=$MasterIP1 \
  -e MasterIP2=$MasterIP2 \
  -e MasterIP3=$MasterIP3 \
  -e MasterPort=$MasterPort \
  -v /data/lb/etc/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg \
  wise2c/haproxy-k8s
1.4) Run the HAProxy container start script
[root@k8s-ha-master01 lb]# bash /data/lb/start-haproxy.sh
[root@k8s-ha-master01 lb]# netstat -antpu | grep 6444
tcp6 0 0 :::6444 :::* LISTEN 1795/docker-proxy
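HAProxy also serves the statistics page configured in the stats-front/stats-back sections above; as an optional check, it can be fetched with the credentials taken straight from haproxy.cfg:

```shell
# Optional: fetch the HAProxy statistics page configured on port 8081.
# Credentials come from the "stats auth pxcstats:secret" line in haproxy.cfg.
check_haproxy_stats() {
  curl -s -u pxcstats:secret http://127.0.0.1:8081/haproxy/stats | head -n 5
}
```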
2) Start Keepalived
2.1) Edit the Keepalived container start script
[root@k8s-ha-master01 lb]# vim /data/lb/start-keepalived.sh
#!/bin/bash
# keypoint_01
VIRTUAL_IP=192.168.43.100
INTERFACE=ens34
NETMASK_BIT=24
CHECK_PORT=6444
RID=10
VRID=160
MCAST_GROUP=224.0.0.18

docker run -itd --restart=always --name=Keepalived-K8S \
  --net=host --cap-add=NET_ADMIN \
  -e VIRTUAL_IP=$VIRTUAL_IP \
  -e INTERFACE=$INTERFACE \
  -e CHECK_PORT=$CHECK_PORT \
  -e RID=$RID \
  -e VRID=$VRID \
  -e NETMASK_BIT=$NETMASK_BIT \
  -e MCAST_GROUP=$MCAST_GROUP \
  wise2c/keepalived-k8s
2.2) Start it
[root@k8s-ha-master01 lb]# ./start-keepalived.sh
[root@k8s-ha-master01 lb]# ip addr show
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:50:56:2c:0b:2f brd ff:ff:ff:ff:ff:ff
inet 192.168.43.110/24 brd 192.168.43.255 scope global noprefixroute ens34
valid_lft forever preferred_lft forever
inet 192.168.43.100/24 scope global secondary ens34
valid_lft forever preferred_lft forever
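The secondary address 192.168.43.100 on ens34 is the VIP, which keepalived moves between the HA masters on failover. A small helper (a sketch; VIP and interface name as configured in start-keepalived.sh) shows whether the current node holds it:

```shell
# Sketch: report whether this node currently holds the keepalived VIP.
# VIP and interface name match the values in start-keepalived.sh.
VIP=192.168.43.100
IFACE=ens34
has_vip() {
  if ip addr show "$IFACE" | grep -q "inet ${VIP}/"; then
    echo "this node holds the VIP ${VIP}"
  else
    echo "VIP ${VIP} is on another node"
  fi
}
```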
2. Initialize the master node (note: initialize only one master node, k8s-ha-master01)
[root@k8s-ha-master01 ~]# cd /root/k8s_install
1) Confirm that NICs not designated for Kubernetes are down
2) Generate the default kubeadm init configuration file
[root@k8s-ha-master01 ~]# kubeadm config print init-defaults > kubeadm-config.yaml
3) Edit the configuration
[root@k8s-ha-master01 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # keypoint_01 IP of the current host
  advertiseAddress: 192.168.43.110
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  # keypoint_02 hostname of this node
  name: k8s-ha-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
# keypoint_03 the VIP plus the HAProxy port
controlPlaneEndpoint: "192.168.43.100:6444"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
# keypoint_04 Kubernetes version in use
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  # keypoint_05 default pod subnet for the flannel network plugin
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
# keypoint_06 use IPVS forwarding for kube-proxy
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
4) Initialize the master node
[root@k8s-ha-master01 ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
5) Run the following commands, as printed in kubeadm-init.log
[root@k8s-ha-master01 k8s-install]# mkdir -p $HOME/.kube
[root@k8s-ha-master01 k8s-install]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-ha-master01 k8s-install]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
6) Verify
[root@k8s-ha-master01 k8s-install]# vim $HOME/.kube/config
server: https://192.168.43.110:6443
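Note that admin.conf points kubectl at this node's own apiserver (192.168.43.110:6443). Optionally, and as an adjustment not made in the original steps, the kubeconfig can be pointed at the VIP so kubectl keeps working if this master goes down:

```shell
# Optional tweak (not part of the original procedure): make the local
# kubeconfig talk to the apiserver through the HAProxy VIP instead of
# this node's own endpoint.
use_vip_endpoint() {
  sed -i 's#https://192.168.43.110:6443#https://192.168.43.100:6444#' "$HOME/.kube/config"
}
# use_vip_endpoint   # run only once the VIP answers on port 6444
```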
[root@k8s-ha-master01 ~]# kubectl edit configmaps -n kube-system kubeadm-config
[root@k8s-ha-master01 k8s-install]# kubectl get node
[root@k8s-ha-master01 k8s-install]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-5x5hj 0/1 Pending 0 3m7s
coredns-5c98db65d4-b6mzd 0/1 Pending 0 3m7s
etcd-k8s-ha-master01 1/1 Running 0 2m4s
kube-apiserver-k8s-ha-master01 1/1 Running 0 2m22s
kube-controller-manager-k8s-ha-master01 1/1 Running 0 2m20s
kube-proxy-n2knf 1/1 Running 0 3m7s
kube-scheduler-k8s-ha-master01 1/1 Running 0 2m16s
[root@k8s-ha-master01 k8s-install]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 4m50s
[root@k8s-ha-master01 k8s-install]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4m59s
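At this point the apiserver should also be reachable through the HAProxy VIP; a quick sanity check (the /healthz endpoint normally returns "ok"):

```shell
# Sanity check: hit the apiserver /healthz endpoint through the VIP.
# -k skips TLS verification since we connect by IP rather than the cert's name.
check_apiserver_vip() {
  curl -k -s https://192.168.43.100:6444/healthz && echo
}
```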
3. Join node k8s-ha-master02 to the cluster
1) On k8s-ha-master02, complete step 1 above: start the HAProxy and Keepalived containers
2) Confirm that NICs not designated for Kubernetes are down
[root@k8s-ha-master02 ~]# ip a
3) Check the hosts file; it must contain an entry for this host
4) Join the cluster using the command recorded in kubeadm-init.log
[root@k8s-ha-master02 ~]# kubeadm join 192.168.43.100:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:59c2072cf06f03d0bf9992987416309146632f8124115ccf171eca1cbe87df39 \
--control-plane --certificate-key 49c93125e0a23c4e0323b0590abe043b51ce9d0555b42a03da92925b49b36d1a
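If kubeadm-init.log has been lost, the token, CA hash and certificate key can be regenerated on the first master. A sketch (kubeadm subcommands as of v1.15; on some versions the certs flag is spelled --experimental-upload-certs):

```shell
# Sketch: regenerate join credentials on k8s-ha-master01 when the
# original kubeadm-init.log is no longer available.
print_join_info() {
  # Prints a fresh worker join command (new token + CA cert hash).
  kubeadm token create --print-join-command
  # Re-uploads control-plane certificates and prints a new certificate key,
  # to be appended as: --control-plane --certificate-key <key>
  kubeadm init phase upload-certs --upload-certs
}
```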
5) Run the commands shown in the join output
[root@k8s-ha-master02 ~]# mkdir -p $HOME/.kube
[root@k8s-ha-master02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-ha-master02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
6) Verify
[root@k8s-ha-master02 k8s-install]# vim $HOME/.kube/config
server: https://192.168.43.120:6443
[root@k8s-ha-master02 ~]# kubectl get node
[root@k8s-ha-master02 ~]# kubectl get pod -n kube-system
[root@k8s-ha-master02 ~]# kubectl edit configmaps -n kube-system kubeadm-config
4. Join node k8s-ha-master03 to the cluster (exactly the same as for k8s-ha-master02)
[root@k8s-ha-master03 k8s-install]# vim $HOME/.kube/config
server: https://192.168.43.130:6443
5. Complete the HAProxy configuration (all 3 ha-master nodes must be updated)
The lines to change are the backend server entries (uncomment the second and third):
[root@k8s-ha-master01 k8s-install]# vim /data/lb/etc/haproxy.cfg
*********
server rancher01 192.168.43.110:6443
server rancher02 192.168.43.120:6443
server rancher03 192.168.43.130:6443
[root@k8s-ha-master01 etc]# docker rm -f HAProxy-K8S && bash /data/lb/start-haproxy.sh
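The same edit and container restart must be repeated on master02 and master03. Assuming passwordless root SSH from master01 to the other two nodes (an assumption, not configured above), this can be scripted:

```shell
# Sketch: push the updated haproxy.cfg to the other HA masters and
# recreate their HAProxy containers. Assumes root SSH trust is in place.
OTHER_MASTERS="192.168.43.120 192.168.43.130"
sync_haproxy() {
  for ip in $OTHER_MASTERS; do
    scp /data/lb/etc/haproxy.cfg "root@${ip}:/data/lb/etc/haproxy.cfg"
    ssh "root@${ip}" 'docker rm -f HAProxy-K8S && bash /data/lb/start-haproxy.sh'
  done
}
# sync_haproxy   # run after verifying SSH access
```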
6. Deploy the flannel network
Bring up the external NIC on all three master nodes
[root@k8s-ha-master01 etc]# ifup ens33
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@k8s-ha-master01 etc]# ping -c 10 www.baidu.com
Perform the following on any one master node
[root@k8s-ha-master01 k8s-install]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Find the image version and pull the required image
[root@k8s-ha-master01 k8s-install]# cat kube-flannel.yml | grep image
[root@k8s-ha-master01 k8s-install]# docker pull quay.io/coreos/flannel:v0.11.0-amd64
[root@k8s-ha-master01 k8s_install]# kubectl apply -f kube-flannel.yml
[root@k8s-ha-master01 k8s_install]# kubectl get pod -n kube-system
[root@k8s-master01 k8s_documents]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-ha-master01 Ready master 50m v1.15.1
[root@k8s-master01 k8s_documents]# ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
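Besides the flannel.1 interface, each node records the pod subnet flannel leased to it in /run/flannel/subnet.env (a file written by the flannel daemon), which should fall inside the 10.244.0.0/16 range set in kubeadm-config.yaml:

```shell
# Show the subnet flannel leased to this node; it should be a slice of
# the 10.244.0.0/16 podSubnet configured at kubeadm init time.
show_flannel_subnet() {
  cat /run/flannel/subnet.env
}
```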
7. View cluster information:
1) Check the controller-manager leader
[root@k8s-ha-master02 ~]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
2) Check the scheduler leader
[root@k8s-ha-master02 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
3) Query etcd cluster health
[root@k8s-ha-master02 ~]# kubectl -n kube-system exec etcd-k8s-ha-master03 -- etcdctl --endpoints=https://192.168.43.110:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.crt --cert-file=/etc/kubernetes/pki/etcd/server.crt --key-file=/etc/kubernetes/pki/etcd/server.key cluster-health
member 10260bfec73117a0 is healthy: got healthy result from https://192.168.43.110:2379
member 258562861e8b997b is healthy: got healthy result from https://192.168.43.130:2379
member 8b0e262770f25357 is healthy: got healthy result from https://192.168.43.120:2379
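The cluster-health syntax above uses the etcdctl v2 API. With the v3 API (the default in newer etcd releases), and noting the renamed TLS flags, the equivalent check is:

```shell
# Same health check via the etcdctl v3 API (--cacert/--cert/--key
# replace the v2 flags --ca-file/--cert-file/--key-file).
etcd_v3_health() {
  kubectl -n kube-system exec etcd-k8s-ha-master01 -- sh -c \
    'ETCDCTL_API=3 etcdctl \
       --endpoints=https://192.168.43.110:2379 \
       --cacert=/etc/kubernetes/pki/etcd/ca.crt \
       --cert=/etc/kubernetes/pki/etcd/server.crt \
       --key=/etc/kubernetes/pki/etcd/server.key \
       endpoint health'
}
```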
4) View the kubeadm-config ConfigMap
[root@k8s-ha-master03 ~]# kubectl edit configmaps -n kube-system kubeadm-config
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    # keypoint_01
    controlPlaneEndpoint: 192.168.43.100:6444
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: k8s.gcr.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.15.1
    networking:
      dnsDomain: cluster.local
      # keypoint_02
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
  ClusterStatus: |
    apiEndpoints:
      # keypoint_03
      k8s-ha-master01:
        advertiseAddress: 192.168.43.110
        bindPort: 6443
      k8s-ha-master02:
        advertiseAddress: 192.168.43.120
        bindPort: 6443
      k8s-ha-master03:
        advertiseAddress: 192.168.43.130
        bindPort: 6443
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterStatus
kind: ConfigMap
metadata:
  creationTimestamp: "2020-03-11T07:38:32Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "4496"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: c75e3ed8-c857-408c-96c8-0b7d176ddb2b