
Installing Kubernetes 1.12.0 with kubeadm

2018-10-09 16:55

Contents

  • Deploying Kubernetes with kubeadm
  • References

    Kubernetes, Google's open-source container platform, has been enthusiastically adopted. Standing up a complete Kubernetes cluster is a hurdle anyone who wants to try the platform has to clear. Up to and including Kubernetes 1.5, installation was fairly straightforward: the official docs covered installing Kubernetes on CentOS 7 straight from a yum repository. From 1.6 onward, however, installation became much more involved, requiring certificates and several kinds of authentication, which is unfriendly to newcomers.

    Architecture:

    Two hosts:

    18.16.202.35 master (node1)
    18.16.202.36 worker (node2)
    [root@localhost /]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    18.16.202.35 node1
    18.16.202.36 node2

    System configuration:

    1.1 Disable the firewall

    systemctl stop firewalld
    systemctl disable firewalld

    1.2 Disable SELinux

    setenforce 0

    Edit /etc/selinux/config and change SELINUX to disabled, for example:

    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
    
    # SELINUX=disabled

    1.3 Disable swap

    Starting with Kubernetes 1.8, swap must be disabled; with the default configuration, kubelet will not start while swap is on. Option one: lift the restriction with the kubelet startup flag --fail-swap-on=false. Option two: turn off swap.

    swapoff -a

    Edit /etc/fstab and comment out the swap entry so it is not mounted automatically, then use free -m to confirm swap is off.

    # comment out the swap partition
    [root@localhost /]# sed -i 's/.*swap.*/#&/' /etc/fstab
    
    #/dev/mapper/centos-swap swap                    swap    defaults        0 0
    
    [root@localhost /]# free -m
                      total        used        free      shared  buff/cache   available
    Mem:            962         154         446           6         361         612
    Swap:             0           0           0

    1.4 Install Docker

    sudo yum install -y yum-utils device-mapper-persistent-data lvm2
    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    sudo yum makecache fast
    
    sudo yum -y install docker-ce
    systemctl enable docker.service
    systemctl restart docker

    The version installed here is docker-ce 18.06.
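    If the repo has already moved on to a newer release and you want to stay on 18.06, you can list the available builds and install a specific one; a sketch, where the exact version string is an example of what the repo typically carries:

    # list available docker-ce builds in the repo
    yum list docker-ce --showduplicates | sort -r
    # install a specific build (example version string)
    sudo yum install -y docker-ce-18.06.1.ce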

    Docker's iptables settings:

    With Docker 1.13.1, the iptables rules look like this:

    [root@localhost /]# iptables -nvL
    Chain INPUT (policy ACCEPT 423 packets, 66469 bytes)
    pkts bytes target     prot opt in     out     source               destination
    423 66469 KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
    
    Chain FORWARD (policy DROP 0 packets, 0 bytes)
    pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-ISOLATION  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0
    
    Chain OUTPUT (policy ACCEPT 385 packets, 63638 bytes)
    pkts bytes target     prot opt in     out     source               destination
    385 63638 KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
    
    Chain DOCKER (1 references)
    pkts bytes target     prot opt in     out     source               destination
    
    Chain DOCKER-ISOLATION (1 references)
    pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
    
    Chain KUBE-SERVICES (2 references)
    pkts bytes target     prot opt in     out     source               destination

    With Docker 18.06, the iptables rules look like this:

    [root@localhost /]# iptables -nvL
    Chain INPUT (policy ACCEPT 12218 packets, 1299K bytes)
    pkts bytes target     prot opt in     out     source               destination
    
    Chain FORWARD (policy DROP 0 packets, 0 bytes)
    pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0
    
    Chain OUTPUT (policy ACCEPT 608 packets, 56787 bytes)
    pkts bytes target     prot opt in     out     source               destination
    
    Chain DOCKER (1 references)
    pkts bytes target     prot opt in     out     source               destination
    
    Chain DOCKER-ISOLATION-STAGE-1 (1 references)
    pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
    
    Chain DOCKER-ISOLATION-STAGE-2 (1 references)
    pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
    
    Chain DOCKER-USER (1 references)
    pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

    Starting with Docker 1.13, the default firewall rules changed: the FORWARD chain in the iptables filter table is set to DROP, which breaks pod-to-pod traffic across nodes in a Kubernetes cluster. Installing Docker 18.06 here, though, the default policy appears to be back to ACCEPT; it is unclear in which release that changed, because on our production clusters running 17.06 this policy still has to be adjusted by hand.

    For other Docker versions:

    # enable forwarding
    # Docker changed its default firewall rules starting with 1.13:
    # the FORWARD chain in the iptables filter table is set to DROP,
    # which breaks pod-to-pod traffic across nodes in a Kubernetes cluster
    $ iptables -P FORWARD ACCEPT
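    Note that restarting Docker resets this policy. One way to keep it in place is a systemd drop-in that re-applies the rule whenever Docker starts; a sketch (the drop-in filename is arbitrary):

    mkdir -p /etc/systemd/system/docker.service.d
    cat > /etc/systemd/system/docker.service.d/10-forward-accept.conf <<EOF
    [Service]
    # re-open the FORWARD chain after the docker daemon has set up its own rules
    ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
    EOF
    systemctl daemon-reload
    systemctl restart docker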

    Deploying Kubernetes with kubeadm:

    2.1 Install kubeadm and kubelet

    Install kubeadm and kubelet on every node:

    # configure the yum repo
    $ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    # install
    $ yum makecache fast
    $ yum install -y kubelet kubeadm kubectl ipvsadm
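    The command above pulls whatever the mirror currently considers latest. If the repo has already moved past 1.12, you can pin the packages to the version this article targets; a sketch, assuming the mirror still carries the 1.12.0 builds:

    $ yum install -y kubelet-1.12.0 kubeadm-1.12.0 kubectl-1.12.0 ipvsadm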

    Configuration:

    # configure forwarding-related kernel parameters, otherwise later steps may fail
    $ cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    vm.swappiness=0
    EOF
    
    # apply the settings
    $ sysctl --system
    
    # if sysctl complains about net.bridge.bridge-nf-call-iptables, load the br_netfilter module first
    $ modprobe br_netfilter
    $ sysctl -p /etc/sysctl.d/k8s.conf
    
    # load the ipvs kernel modules
    # they must be reloaded after every reboot (e.g. from /etc/rc.local, or see the sketch after this block)
    $ modprobe ip_vs
    $ modprobe ip_vs_rr
    $ modprobe ip_vs_wrr
    $ modprobe ip_vs_sh
    $ modprobe nf_conntrack_ipv4
    # verify the modules are loaded
    $ lsmod | grep ip_vs
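    Instead of /etc/rc.local, a cleaner way to reload these modules on boot is a systemd modules-load drop-in; a minimal sketch (the filename is arbitrary):

    cat > /etc/modules-load.d/ipvs.conf <<EOF
    # modules loaded at boot by systemd-modules-load.service
    ip_vs
    ip_vs_rr
    ip_vs_wrr
    ip_vs_sh
    nf_conntrack_ipv4
    br_netfilter
    EOF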

    Configure and start kubelet (all nodes)

    # configure kubelet to use a domestic (Aliyun) pause image
    # configure kubelet's cgroup driver to match Docker's
    # get Docker's cgroup driver
    DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f3)
    echo $DOCKER_CGROUPS
    cat >/etc/sysconfig/kubelet<<EOF
    KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
    EOF
    
    # start kubelet
    $ systemctl daemon-reload
    $ systemctl enable kubelet && systemctl restart kubelet

    At this point systemctl status kubelet reports an error:

    Oct 11 00:26:43 node1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
    Oct 11 00:26:43 node1 systemd[1]: Unit kubelet.service entered failed state.
    Oct 11 00:26:43 node1 systemd[1]: kubelet.service failed.

    Running

    journalctl -xefu kubelet

    to inspect the systemd journal shows that the real error is:

    unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory

    This error goes away automatically once kubeadm init has generated the CA certificate, so it can be ignored for now. In short, kubelet will keep restarting until kubeadm init is run.

    2.2 Configure the master node

    Either run the command directly:

    kubeadm init \
    --kubernetes-version=v1.12.0 \
    --pod-network-cidr=10.244.0.0/16 \
    --apiserver-advertise-address=18.16.202.35 \
    --ignore-preflight-errors=Swap

    Or use a kubeadm-master.config configuration file and work in the /etc/kubernetes/ directory:

    # on CentOS, ipvs mode has problems with Kubernetes 1.11
    # see https://github.com/kubernetes/kubernetes/issues/65461
    
    # generate the configuration file
    cat >kubeadm-master.config<<EOF
    apiVersion: kubeadm.k8s.io/v1alpha2
    kind: MasterConfiguration
    kubernetesVersion: v1.12.0
    imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
    api:
      advertiseAddress: 18.16.202.35
    
    controllerManagerExtraArgs:
      node-monitor-grace-period: 10s
      pod-eviction-timeout: 10s
    
    networking:
      podSubnet: 10.244.0.0/16
    
    kubeProxy:
      config:
        mode: ipvs
        # mode: iptables
    EOF
    
    # pull the required images ahead of time
    # if this fails, it can safely be run again
    kubeadm config images pull --config /etc/kubernetes/kubeadm-master.config
    
    # initialize
    kubeadm init --config /etc/kubernetes/kubeadm-master.config
    # or
    kubeadm init --config /etc/kubernetes/kubeadm-master.config --ignore-preflight-errors=all

    An error hit during installation:

    [preflight] Some fatal errors occurred:
    [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty

    Simply delete the /var/lib/etcd directory.

    If something goes wrong during initialization, reset with the following commands:

    kubeadm reset

    rm -rf /var/lib/cni/ $HOME/.kube/config
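    If you need a more thorough cleanup before re-running kubeadm init, the following sketch also clears leftover etcd data, CNI configuration, and iptables/ipvs rules (paths and commands assume the setup used in this article):

    kubeadm reset
    rm -rf /var/lib/cni/ /var/lib/etcd /etc/cni/net.d $HOME/.kube/config
    # flush rules left behind by kube-proxy and flannel
    iptables -F && iptables -t nat -F && iptables -t mangle -F
    ipvsadm --clear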

    2.3 Initialize the master node:

    [root@localhost kubernetes]#  kubeadm init --config kubeadm-master.config
    [init] using Kubernetes version: v1.12.0
    [preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
    you can solve this problem with following methods:
    1. Run 'modprobe -- ' to load missing kernel modules;
    2. Provide the missing builtin kernel ipvs support
    
    [preflight/images] Pulling images required for setting up a Kubernetes cluster
    [preflight/images] This might take a minute or two, depending on the speed of your internet connection
    [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [preflight] Activating the kubelet service
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] Generated etcd/ca certificate and key.
    [certificates] Generated etcd/server certificate and key.
    [certificates] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [127.0.0.1 ::1]
    [certificates] Generated etcd/peer certificate and key.
    [certificates] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [18.16.202.35 127.0.0.1 ::1]
    [certificates] Generated etcd/healthcheck-client certificate and key.
    [certificates] Generated apiserver-etcd-client certificate and key.
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 18.16.202.35]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
    [certificates] Generated sa key and public key.
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
    [init] this might take a minute or longer if the control plane images have to be pulled
    [apiclient] All control plane components are healthy after 40.510372 seconds
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
    [markmaster] Marking the node localhost.localdomain as master by adding the label "node-role.kubernetes.io/master=''"
    [markmaster] Marking the node localhost.localdomain as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
    [bootstraptoken] using token: xc9gpo.mmv1mmsjhq6tzhdc
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
    kubeadm join 18.16.202.35:6443 --token ccxrk8.myui0xu4syp99gxu --discovery-token-ca-cert-hash sha256:e3c90ace969aa4d62143e7da6202f548662866dfe33c140095b020031bff2986

    The complete initialization output is recorded above; from it you can see the key steps kubeadm performs to set up a Kubernetes cluster.

    The key items are:

    • [kubelet]
      writes the kubelet configuration file "/var/lib/kubelet/config.yaml"

    • [certificates]
      generates all the required certificates

    • [kubeconfig]
      generates the kubeconfig files

    • [bootstraptoken]
      generates the bootstrap token; note it down, it is needed later when adding nodes to the cluster with
      kubeadm join

    • The following commands set up kubectl access to the cluster for a regular user:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    • Finally, the command for joining nodes to the cluster is printed (if the token expires later, see the note after this list):

      kubeadm join 18.16.202.35:6443 --token ccxrk8.myui0xu4syp99gxu --discovery-token-ca-cert-hash sha256:e3c90ace969aa4d62143e7da6202f548662866dfe33c140095b020031bff2986
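    The bootstrap token is only valid for 24 hours by default. If it expires or you lose the join command, a new one can be printed on the master; a sketch:

    kubeadm token create --print-join-command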

    If the installation reports that kubelet cannot start even though kubelet is actually running, the kubelet logs show:

    journalctl -xeu kubelet
    
    Oct 11 21:29:14 node1 kubelet[5351]: W1011 21:29:14.012763    5351 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
    Oct 11 21:29:14 node1 kubelet[5351]: E1011 21:29:14.012853    5351 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:Ne
    ....
    Oct 11 21:29:15 node1 kubelet[5351]: E1011 21:29:15.473163    5351 event.go:212] Unable to write event: 'Post https://18.16.202.35:6443/api/v1/namesp
    On a server outside mainland China, run:

    docker pull quay.io/coreos/flannel:v0.10.0-amd64
    docker tag quay.io/coreos/flannel:v0.10.0-amd64 ${username}/flannel:v0.10.0-amd64
    docker push ${username}/flannel:v0.10.0-amd64
    docker rmi quay.io/coreos/flannel:v0.10.0-amd64
    docker rmi ${username}/flannel:v0.10.0-amd64

    On the server inside China, run:

    sudo docker pull ${username}/flannel:v0.10.0-amd64
    sudo docker tag ${username}/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
    sudo docker rmi ${username}/flannel:v0.10.0-amd64

    Or simply use the yqfwind/flannel image directly.

    After re-running init and applying flannel, check that the config file 10-flannel.conflist exists under /etc/cni/net.d/.

    Finally, the other nodes also need to pull the relevant images.

    If /etc/cni/net.d/10-flannel.conflist is missing on a node, first check whether the flannel image is present there; alternatively, copy the file from the master into the same directory.

    I used the following commands:

    docker pull quay.io/coreos/flannel:v0.10.0-amd64
    mkdir -p /etc/cni/net.d/
    cat <<EOF> /etc/cni/net.d/10-flannel.conf
    {"name":"cbr0","type":"flannel","delegate": {"isDefaultGateway": true}}
    EOF
    mkdir /usr/share/oci-umount/oci-umount.d -p
    mkdir /run/flannel/
    cat <<EOF> /run/flannel/subnet.env
    FLANNEL_NETWORK=172.100.0.0/16
    FLANNEL_SUBNET=172.100.1.0/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true
    EOF
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
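    After applying the manifest, you can confirm that the flannel DaemonSet is running and that the node has a CNI config; a quick check (the app=flannel label is the one the stock manifest uses):

    ls /etc/cni/net.d/
    kubectl get pods -n kube-system -l app=flannel -o wide
    kubectl get nodes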

    2.4 Configure kubectl

    Run the following on the master node:

    $ rm -rf $HOME/.kube
    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    # list the nodes
    $ kubectl get nodes
    NAME    STATUS     ROLES    AGE     VERSION
    node1   NotReady   master   6m19s   v1.12.0
    
    # nodes only show Ready after the network plugin has been installed and configured
    # allow the master to schedule application pods and take workload, so that other system
    # components such as dashboard, heapster, efk, etc. can be deployed
    $ kubectl taint nodes --all node-role.kubernetes.io/master-
    node/node1 untainted

    If you see:

    node "master" untainted

    or

    error: taint "node-role.kubernetes.io/master:" not found

    it usually means a previous installation was not cleaned up completely.

    2.5 Configure the network plugin

    Run the following on the master node:

    # download the manifest
    $ cd ~ && mkdir flannel && cd flannel
    $ wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

    Edit kube-flannel.yml:

    # edit the following settings in kube-flannel.yml
    # the Network value must match the --pod-network-cidr passed to kubeadm above
    net-conf.json: |
      {
        "Network": "10.244.0.0/16",
        "Backend": {
          "Type": "vxlan"
        }
      }
    
    # The default image is quay.io/coreos/flannel:v0.10.0-amd64; if you can pull it, keep it.
    # Otherwise, change the image in the yml to an Aliyun mirror:
    image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64
    
    # If a node has more than one network interface, see kubernetes issue 39701:
    # https://github.com/kubernetes/kubernetes/issues/39701
    # Currently you need to add the --iface argument in kube-flannel.yml to name the interface on the
    # cluster's internal network, otherwise DNS may fail to resolve and containers may be unable to
    # communicate. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld args:
    containers:
    - name: kube-flannel
      image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64
      command:
      - /opt/bin/flanneld
      args:
      - --ip-masq
      - --kube-subnet-mgr
      - --iface=eth1
    
    ⚠️⚠️⚠️ The value of --iface=eth1 must be the name of your node's actual interface (see the sketch below for how to find it).
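    To find the right interface name for --iface, look for the interface that carries the node's cluster-facing IP (18.16.202.x in this setup); a sketch:

    # show which interface owns the node's internal IP
    ip -o -4 addr show | grep 18.16.202
    # or ask the routing table which interface reaches the other node
    ip route get 18.16.202.36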

    Apply it:

    # apply
    $ kubectl apply -f kube-flannel.yml
    
    # check
    $ kubectl get pods --namespace kube-system
    $ kubectl get svc --namespace kube-system
    
    # nodes only show Ready after the network plugin has been installed and configured
    # allow the master to schedule application pods and take workload, so that other system
    # components such as dashboard, heapster, efk, etc. can be deployed
    # kubectl taint nodes --all node-role.kubernetes.io/master-

    Output:

    [root@localhost flannel]#  kubectl apply -f kube-flannel.yml
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.extensions/kube-flannel-ds created
    [root@localhost flannel]# kubectl get pods --namespace kube-system
    NAME                                            READY   STATUS    RESTARTS   AGE
    coredns-6c66ffc55b-ggsgx                        0/1     Pending   0          26m
    coredns-6c66ffc55b-m457x                        0/1     Pending   0          26m
    etcd-localhost.localdomain                      1/1     Running   0          25m
    kube-apiserver-localhost.localdomain            1/1     Running   0          25m
    kube-controller-manager-localhost.localdomain   1/1     Running   0          25m
    kube-proxy-9jqwm                                1/1     Running   0          26m
    kube-scheduler-localhost.localdomain            1/1     Running   0          25m
    [root@localhost flannel]#  kubectl get svc --namespace kube-system
    NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
    kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   26m
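    kube-proxy was configured with mode: ipvs; if the ip_vs modules were actually loaded before it started (note the preflight warning in the init output above), you can verify that the IPVS proxier is in use. A sketch, where the exact log wording may vary between versions:

    # the kubernetes service VIP (10.96.0.1) and kube-dns (10.96.0.10) should appear as virtual servers
    ipvsadm -Ln
    # check the kube-proxy log for which proxier it chose
    kubectl -n kube-system logs $(kubectl -n kube-system get pods -o name | grep kube-proxy | head -1) | grep -i proxier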

    2.6 Join the worker nodes to the cluster

    Run the following on every worker node:

    # this is the command printed when the master initialization succeeded
    $ kubeadm join 18.16.202.35:6443 --token ccxrk8.myui0xu4syp99gxu --discovery-token-ca-cert-hash sha256:e3c90ace969aa4d62143e7da6202f548662866dfe33c140095b020031bff2986
    
    Error encountered:
    [preflight] running pre-flight checks
    [discovery] Trying to connect to API Server "18.16.202.35:6443"
    [discovery] Created cluster-info discovery client, requesting info from "https://18.16.202.35:6443"
    [discovery] Requesting info from "https://18.16.202.35:6443" again to validate TLS against the pinned public key
    [discovery] Failed to request cluster info, will try again: [Get https://18.16.202.35:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
    [discovery] Failed to request cluster info, will try again: [Get https://18.16.202.35:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]

    In my case this was caused by the server clock being wrong; after correcting the time it worked (see the sketch below).
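    A quick way to bring the clocks back in line on all nodes; a sketch (the NTP server is just an example):

    yum install -y ntpdate
    ntpdate ntp.aliyun.com
    # or keep time synced continuously
    systemctl enable chronyd && systemctl start chronyd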

    Output:

    [root@node1 flannel]# kubectl get nodes
    NAME    STATUS     ROLES    AGE   VERSION
    node1   NotReady   master   19m   v1.12.0
    node2   NotReady   <none>   14s   v1.12.0
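    Both nodes report NotReady until the flannel pod is running on them; a quick way to watch them come up (run on the master):

    kubectl get pods -n kube-system -o wide
    kubectl get nodes -w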

    References:

    https://www.geek-share.com/detail/2713768340.html

    https://my.oschina.net/binges/blog/1615955?p=2&temp=1521445654544

    https://blog.frognew.com/2018/10/kubeadm-install-kubernetes-1.12.html

    https://www.jianshu.com/p/31bee0cecaf2

    https://www.zybuluo.com/ncepuwanghui/note/953929

    https://www.kubernetes.org.cn/4256.html

    https://note.youdao.com/share/?id=31d9d5db79cc3ae27e72c029b09ac4ab&type=note#/

    https://juejin.im/post/5b45d4185188251ac062f27c

    https://www.jianshu.com/p/02dc13d2f651

    https://blog.csdn.net/qq_34857250/article/details/82562514
