Docker Series 4: k8s Cluster [kubeadm Install]
2018-12-19 00:09
References:
http://dockone.io/article/950
https://www.geek-share.com/detail/2721302245.html
http://blog.51cto.com/devingeng/2096495
https://www.geek-share.com/detail/2720246660.html
1. Installing the k8s components
- Environment dependencies: etcd, docker
- Master node: kube-apiserver, kube-controller-manager, kube-scheduler
- Minion (worker) node: kubelet, kube-proxy
The final set of running services should match the lists above; a quick way to verify it is sketched below.
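A minimal verification sketch, assuming the components run as systemd services (the exact unit names depend on how the components were installed):

```bash
# On the master: etcd and the control-plane components should be active.
systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler --no-pager

# On a minion: kubelet and kube-proxy should be active.
systemctl status kubelet kube-proxy --no-pager

# Or simply look for the processes themselves.
ps -ef | grep -E 'etcd|kube-apiserver|kube-controller|kube-scheduler|kubelet|kube-proxy'
```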
Alibaba Docker image registry
Alibaba k8s image acceleration: https://blog.csdn.net/eyeofeagle/article/details/85015303
Installing k8s on Ubuntu
```bash
apt-get install -y curl apt-transport-https
# note: this imports Docker's GPG key; no key is imported for the Kubernetes repo,
# which is why --allow-unauthenticated is needed below
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
echo "deb [arch=amd64] https://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-$(lsb_release -cs) main" >> /etc/apt/sources.list.d/kubernetes.list
# alternative Aliyun repo:
#curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg
#deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
apt-get update && apt-get install -y kubelet kubeadm kubectl kubernetes-cni --allow-unauthenticated
```
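To keep apt from silently upgrading the cluster components later, the packages can optionally be pinned (apt-mark is standard apt; this step is an addition, not part of the original setup):

```bash
# Optional: hold the k8s packages at their installed versions.
apt-mark hold kubelet kubeadm kubectl kubernetes-cni
```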
Installing k8s on CentOS 7
```bash
# 1. System prep: disable swap, bridge-networking sysctls, firewall, SELinux
swapoff -a
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter
ls /proc/sys/net/bridge
sysctl -p /etc/sysctl.d/k8s.conf
service firewalld stop
systemctl disable firewalld
setenforce 0
sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config

# 2. Configure a yum repo for the k8s components
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

# 3. Install and start the components
yum install -y etcd kubernetes
systemctl enable kubelet && systemctl start kubelet
```

A problem hit when starting services with kubectl: pods get stuck in ContainerCreating.

```
[root@cent7-1 ~]# kubectl get pods
NAME               READY  STATUS             RESTARTS  AGE
mysql-f61wb        0/1    ContainerCreating  0         54m
tomcat-web-kc2rk   0/1    ContainerCreating  0         37m
[root@cent7-1 ~]# kubectl describe pod mysql
Name:   mysql-f61wb
Labels: app=mysql
Status: Pending
Events:
  FirstSeen LastSeen Count From                SubObjectPath Type    Reason     Message
  --------- -------- ----- ----                ------------- ----    ------     -------
  55m       55m      1     {default-scheduler}               Normal  Scheduled  Successfully assigned mysql-f61wb to 127.0.0.1
  55m       3m       15    {kubelet 127.0.0.1}               Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
  55m       5s       241   {kubelet 127.0.0.1}               Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
```

Cause: the file /etc/rhsm/ca/redhat-uep.pem does not exist, so the certificate symlink Docker uses for registry.access.redhat.com is dangling.

```
[root@cent7-1 ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Trying to pull repository registry.access.redhat.com/rhel7/pod-infrastructure ...
open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory
[root@cent7-1 ~]# ll /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt
lrwxrwxrwx. 1 root root 27 Dec 22 21:12 /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt -> /etc/rhsm/ca/redhat-uep.pem
```

Fix:

```bash
# 1. Generate the missing pem file by extracting it from the python-rhsm-certificates rpm
wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem

# 2. Pull the image manually
docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
```
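Once the pem file is in place, the stuck pods should recover on their own after the infrastructure image pulls; a quick check (sketch):

```bash
# The symlink should now resolve to a real file...
ls -lL /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt

# ...and the pods should leave ContainerCreating shortly.
kubectl get pods -w
```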
2. Pulling the docker images k8s needs
Using a registry mirror to speed up pulls:

```bash
echo "OPTIONS='--selinux-enabled --log-driver=journald --registry-mirror=http://yywkvob3.mirror.aliyuncs.com' " >> /etc/sysconfig/docker
# or: --insecure-registry gcr.io
```
Check which versions of the k8s components are installed:
```
root@wang-GA-MA770T-UD3P:~/sh-docker# apt list |grep kube
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
cri-tools/kubernetes-xenial,now 1.12.0-00 amd64 [installed,automatic]
docker-engine/kubernetes-xenial 1.11.2-0~xenial amd64
kubeadm/kubernetes-xenial,now 1.13.1-00 amd64 [installed]
kubectl/kubernetes-xenial,now 1.13.1-00 amd64 [installed]
kubelet/kubernetes-xenial,now 1.13.1-00 amd64 [installed]
kubernetes-cni/kubernetes-xenial,now 0.6.0-00 amd64 [installed]
rkt/kubernetes-xenial 1.29.0-1 amd64
```
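If a specific version is required (for example to match the v1.13.1 control plane used below), apt can install pinned package versions; a sketch, assuming the mirror still carries these exact revisions from the `apt list` output above:

```bash
# Install one pinned version of each component.
apt-get install -y kubelet=1.13.1-00 kubeadm=1.13.1-00 kubectl=1.13.1-00
```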
Use kubeadm to install all the Kubernetes services
Initialize && pull the image files
The first `kubeadm init` attempt fails because v1.10.0 is older than this kubeadm supports; retrying with v1.13.1 succeeds:

```
root@wang-GA-MA770T-UD3P:~/sh-docker# kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16
this version of kubeadm only supports deploying clusters with the control plane version >= 1.12.0. Current version: v1.10.0

root@wang-GA-MA770T-UD3P:~/sh-docker# kubeadm init --kubernetes-version=v1.13.1
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
################################################
#root@wang-GA-MA770T-UD3P:/home/wang# kubeadm config images pull
#[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.13.1
#[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.13.1
#[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.13.1
#[config/images] Pulled k8s.gcr.io/kube-proxy:v1.13.1
#[config/images] Pulled k8s.gcr.io/pause:3.1
#[config/images] Pulled k8s.gcr.io/etcd:3.2.24
#[config/images] Pulled k8s.gcr.io/coredns:1.2.6
################################################
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [wang-ga-ma770t-ud3p kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.12]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [wang-ga-ma770t-ud3p localhost] and IPs [192.168.1.12 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [wang-ga-ma770t-ud3p localhost] and IPs [192.168.1.12 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 36.005486 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "wang-ga-ma770t-ud3p" as an annotation
[mark-control-plane] Marking the node wang-ga-ma770t-ud3p as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node wang-ga-ma770t-ud3p as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: d6q5e5.j1qzgg38ct6z2vo9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.1.12:6443 --token d6q5e5.j1qzgg38ct6z2vo9 --discovery-token-ca-cert-hash sha256:d9d9880d1855f3ec0acf96f74161b11214f742c74c9ea6f7042a4f378726f0df
```

Make kubectl usable for root:

```bash
root@wang-GA-MA770T-UD3P:~/sh-docker# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
```

Then set up the flannel pod network (the CNI conf must use plain ASCII quotes):

```bash
#================= install the flannel network
mkdir -p /etc/cni/net.d/
cat <<EOF > /etc/cni/net.d/10-flannel.conf
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}
EOF
mkdir -p /usr/share/oci-umount/oci-umount.d
mkdir -p /run/flannel/
cat <<EOF > /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
### clusterrole.rbac.authorization.k8s.io/flannel created
### clusterrolebinding.rbac.authorization.k8s.io/flannel created
### serviceaccount/flannel created
### configmap/kube-flannel-cfg created
### daemonset.extensions/kube-flannel-ds created
```
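After the flannel manifest is applied, the master should turn Ready once the network pods come up; a quick check (sketch):

```bash
# The node should report Ready once the CNI is working.
kubectl get nodes

# CoreDNS and the kube-flannel daemonset should be Running in kube-system.
kubectl get pods -n kube-system -o wide
```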
3. Saving the images to an Alibaba Cloud registry
For easier management, push the pulled images into a private Alibaba Cloud repository for safekeeping.
```bash
################ 1. Tag the k8s.gcr.io images for the private registry
images=(
#=========== master =================
k8s.gcr.io/kube-apiserver:v1.13.1
k8s.gcr.io/kube-apiserver-amd64:v1.10.0
k8s.gcr.io/kube-controller-manager:v1.13.1
k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
k8s.gcr.io/kube-scheduler:v1.13.1
k8s.gcr.io/kube-scheduler-amd64:v1.10.0
#=========== node =================
k8s.gcr.io/kube-proxy:v1.13.1
k8s.gcr.io/kube-proxy-amd64:v1.10.0
k8s.gcr.io/pause-amd64:3.1
k8s.gcr.io/pause:3.1
#=========== env soft =================
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/etcd-amd64:3.1.12
k8s.gcr.io/coredns:1.2.6
k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
)
for i in ${images[*]}
do
  my_tag=`echo $i |cut -d '/' -f2`
  echo "$i ==== > registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/$my_tag"
  docker tag $i registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/$my_tag
done

################ 2. Tag the gcr.io/google_containers images
images2=(
#=========== master =================
gcr.io/google_containers/kube-apiserver-amd64:v1.8.7
gcr.io/google_containers/kube-scheduler-amd64:v1.8.7
#=========== node =================
gcr.io/google_containers/pause-amd64:3.0
#=========== env soft =================
gcr.io/google_containers/etcd-amd64:3.0.17
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.1
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
)
for i in ${images2[*]}
do
  my_tag=`echo $i |cut -d '/' -f3`
  echo "$i ==== > registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/$my_tag"
  docker tag $i registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/$my_tag
done
```

################ 3. Push the retagged images

```
root@wang-GA-MA770T-UD3P:/home/wang/txt# docker images |grep registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/ |awk '{print $1":"$2}'
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kube-proxy:v1.13.1
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kube-scheduler:v1.13.1
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kube-apiserver:v1.13.1
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kube-controller-manager:v1.13.1
root@wang-GA-MA770T-UD3P:/home/wang/txt# docker images |grep registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/ |awk '{print $1":"$2}'|xargs -n 1 docker push
The push refers to repository [registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kube-proxy]
f9cdaf1489a0: Pushed
e5a609b37e16: Pushed
5fe6d025ca50: Pushed
```

############ 4. Test: pull from the private registry and retag back to k8s.gcr.io

```bash
images=(
#=========== master =================
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kube-apiserver:v1.13.1
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kube-apiserver-amd64:v1.10.0
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kube-controller-manager:v1.13.1
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kube-controller-manager-amd64:v1.10.0
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kube-scheduler:v1.13.1
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kube-scheduler-amd64:v1.10.0
#=========== node =================
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kube-proxy:v1.13.1
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kube-proxy-amd64:v1.10.0
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/pause-amd64:3.1
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/pause:3.1
#=========== env soft =================
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/etcd:3.2.24
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/etcd-amd64:3.1.12
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/coredns:1.2.6
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/kubernetes-dashboard-amd64:v1.8.3
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/k8s-dns-dnsmasq-nanny-amd64:1.14.8
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/k8s-dns-sidecar-amd64:1.14.8
registry.cn-beijing.aliyuncs.com/kube_eyeofeagle/k8s-dns-kube-dns-amd64:1.14.8
)
for i in ${images[*]}
do
  docker pull $i
  my_tag=`echo $i |cut -d '/' -f3`
  echo "$i ==== > k8s.gcr.io/$my_tag"
  docker tag $i k8s.gcr.io/$my_tag
  docker rmi $i
done
```
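Note that pushing to the private repository only works after authenticating; `docker login` is the standard mechanism (the registry account itself is assumed to already exist):

```bash
# One-time login; credentials are cached in ~/.docker/config.json.
docker login registry.cn-beijing.aliyuncs.com
```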