Installing Kubernetes on Linux with kubeadm
2017-01-19 16:46
Reposted from: https://kubernetes.io/docs/getting-started-guides/kubeadm/
Overview
This quickstart shows you how to easily install a secure Kubernetes cluster on machines running Ubuntu 16.04, CentOS 7 or HypriotOS v1.0.1+. The installation uses a tool called kubeadm, which is part of Kubernetes.
This process works with local VMs, physical servers and/or cloud servers. It is simple enough that you can easily integrate its use into your own automation (Terraform, Chef, Puppet, etc).
See the full kubeadm reference for information on all kubeadm command-line flags and for advice on automating kubeadm itself.
The kubeadm tool is currently in alpha, but please try it out and give us feedback! Be sure to read the limitations;
in particular note that kubeadm doesn’t have great support for automatically configuring cloud providers. Please refer to the specific cloud provider documentation or use another provisioning system.
kubeadm assumes you have a set of machines (virtual or real) that are up and running. It is designed to be part of a large provisioning system - or just for easy manual provisioning. kubeadm is a great choice where you have your own infrastructure (e.g. bare
metal), or where you have an existing orchestration system (e.g. Puppet) that you have to integrate with.
If you are not constrained, there are some other tools built to give you complete clusters:
On GCE, Google Container Engine gives you one-click Kubernetes clusters
On AWS, kops makes cluster installation and management easy (and supports high availability)
Prerequisites
One or more machines running Ubuntu 16.04+, CentOS 7 or HypriotOS v1.0.1+
1GB or more of RAM per machine (any less will leave little room for your apps)
Full network connectivity between all machines in the cluster (public or private network is fine)
Objectives
Install a secure Kubernetes cluster on your machines
Install a pod network on the cluster so that application components (pods) can talk to each other
Install a sample microservices application (a socks shop) on the cluster
Instructions
(1/4) Installing kubelet and kubeadm on your hosts
You will install the following packages on all the machines:
docker: the container runtime, which Kubernetes depends on. v1.11.2 is recommended, but v1.10.3 and v1.12.1 are known to work as well.
kubelet: the core component of Kubernetes. It runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl: the command to control the cluster once it’s running. You will only need this on the master, but it can be useful to have on the other nodes as well.
kubeadm: the command to bootstrap the cluster.
NOTE: If you already have kubeadm installed, you should run apt-get update && apt-get upgrade or yum update to get the latest version of kubeadm. See the reference doc if you want to read about the different kubeadm releases.
For each host in turn:
SSH into the machine and become root if you are not already (for example, run sudo su -).
If the machine is running Ubuntu or HypriotOS, run:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
# Install docker if you don't have it already.
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
If the machine is running CentOS, run:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do.
Note: Disabling SELinux by running setenforce 0 is required in order to allow containers to access the host filesystem, which is required by pod networks, for example. You have to do this until kubelet can handle SELinux better.
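If you also want the relaxed SELinux mode to survive a reboot, a minimal sketch (assuming the stock CentOS 7 /etc/selinux/config layout) is to switch the configured mode to permissive in addition to running setenforce 0:

```shell
# Switch the boot-time SELinux mode from enforcing to permissive
# (setenforce 0 only changes the mode until the next reboot).
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Verify the configured mode.
grep '^SELINUX=' /etc/selinux/config
```

setenforce 0 takes effect immediately; the config edit keeps the setting across reboots.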
(2/4) Initializing your master
The master is the machine where the “control plane” components run, including etcd (the cluster database) and the API server (which the kubectl CLI communicates with). All of these components run in pods started by the kubelet.
Right now you can’t run kubeadm init twice without tearing down the cluster in between; see Tear down.
If you try to run kubeadm init and your machine is in a state that is incompatible with starting a Kubernetes cluster, kubeadm will warn you about things that might not work or it will error out for unsatisfied mandatory requirements.
To initialize the master, pick one of the machines you previously installed kubelet and kubeadm on, and run:
# kubeadm init
Note: this will autodetect the network interface to advertise the master on as the interface with the default gateway. If you want to use a different interface, specify the --api-advertise-addresses=<ip-address> argument to kubeadm init.
If you want to use flannel as the pod network, specify --pod-network-cidr=10.244.0.0/16 if you’re using the daemonset manifest below. However, please note that this is not required for any other networks besides Flannel.
Please refer to the kubeadm reference doc if you want to read more about the flags kubeadm init provides.
This will download and install the cluster database and “control plane” components. This may take several minutes.
The output should look like:
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "064158.548b9ddb1d3fad3e"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 61.317580 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 6.556101 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 6.020980 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=<token> <master-ip>
Make a record of the kubeadm join command that kubeadm init outputs. You will need this in a moment. The key included here is secret; keep it safe, as anyone with this key can add authenticated nodes to your cluster. The key is used for mutual authentication between the master and the joining nodes.
By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, for example if you want a single-machine Kubernetes cluster for development, run:
# kubectl taint nodes --all dedicated-
node "test-01" tainted
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.
This will remove the “dedicated” taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.
(3/4) Installing a pod network
You must install a pod network add-on so that your pods can communicate with each other.
It is necessary to do this before you try to deploy any applications to your cluster, and before kube-dns will start up. Note also that kubeadm only supports CNI-based networks, and therefore kubenet-based networks will not work.
Several projects provide Kubernetes pod networks using CNI, some of which also support Network Policy. See
the add-ons page for a complete list of available network add-ons.
You can install a pod network add-on with the following command:
# kubectl apply -f <add-on.yaml>
Please refer to the specific add-on installation guide for exact details. You should only install one pod network per cluster.
If you are on an architecture other than amd64, you should use the flannel overlay network as described in the multi-platform section.
NOTE: You can install only one pod network per cluster.
Once a pod network has been installed, you can confirm that it is working by checking that the kube-dns pod is Running in the output of kubectl get pods --all-namespaces.
And once the kube-dns pod is up and running, you can continue by joining your nodes.
(4/4) Joining your nodes
The nodes are where your workloads (containers and pods, etc) run. If you want to add any new machines as nodes to your cluster, then for each machine: SSH to that machine, become root (e.g. sudo su -) and run the command that was output by kubeadm init. For example:
# kubeadm join --token <token> <master-ip>
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://192.168.x.y:9898/cluster-info/v1/?token-id=f11877"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://192.168.x.y:6443]
[bootstrap] Trying to connect to endpoint https://192.168.x.y:6443
[bootstrap] Detected server version: v1.5.1
[bootstrap] Successfully established connection with endpoint "https://192.168.x.y:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:yournode | CA: false
Not before: 2016-12-15 19:44:00 +0000 UTC Not After: 2017-12-15 19:44:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
A few seconds later, you should notice that running kubectl get nodes on the master shows a cluster with as many machines as you created.
(Optional) Controlling your cluster from machines other than the master
In order to get kubectl on your laptop (for example) to talk to your cluster, you need to copy the KubeConfig file from your master to your laptop like this:
# scp root@<master ip>:/etc/kubernetes/admin.conf .
# kubectl --kubeconfig ./admin.conf get nodes
(Optional) Connecting to the API Server
If you want to connect to the API Server from outside the cluster, for example to view the dashboard (note: the dashboard isn’t deployed by default), you can use kubectl proxy:
# scp root@<master ip>:/etc/kubernetes/admin.conf .
# kubectl --kubeconfig ./admin.conf proxy
You can now access the API Server locally at http://localhost:8001/api/v1.
(Optional) Installing a sample application
As an example, install a sample microservices application, a socks shop, to put your cluster through its paces. Note that this demo only works on amd64.
To learn more about the sample microservices app, see the GitHub README.
# kubectl create namespace sock-shop
# kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"
You can then find out the port that the NodePort feature of services allocated for the front-end service by running:
# kubectl describe svc front-end -n sock-shop
Name:                   front-end
Namespace:              sock-shop
Labels:                 name=front-end
Selector:               name=front-end
Type:                   NodePort
IP:                     100.66.88.176
Port:                   <unset> 80/TCP
NodePort:               <unset> 31869/TCP
Endpoints:              <none>
Session Affinity:       None
It takes several minutes to download and start all the containers; watch the output of kubectl get pods -n sock-shop to see when they’re all up and running.
Then go to the IP address of your cluster’s master node in your browser, and specify the given port, for example http://<master_ip>:<port>. In the example above, this was 31869, but it will be a different port for you.
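If you want to capture just the port number in a script, one hedged option (a sketch that assumes the NodePort: line format shown in the describe output above) is to parse that output:

```shell
# Extract the NodePort number of the front-end service.
# awk picks the third field of the "NodePort: <unset> 31869/TCP" line,
# and cut strips the trailing "/TCP" protocol suffix.
PORT=$(kubectl describe svc front-end -n sock-shop \
  | awk '/NodePort:/ {print $3}' | cut -d/ -f1)
echo "front-end NodePort: ${PORT}"
```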
If there is a firewall, make sure it exposes this port to the internet before you try to access it.
Tear down
To uninstall the socks shop, run kubectl delete namespace sock-shop on the master.
To undo what kubeadm did, simply run:
# kubeadm reset
If you wish to start over, run systemctl start kubelet followed by kubeadm init or kubeadm join.
Explore other add-ons
See the list of add-ons to explore other add-ons, including tools for logging, monitoring, network policy, visualization & control of your Kubernetes cluster.
What’s next
Learn about kubeadm’s advanced usage on the advanced reference doc
Learn more about Kubernetes concepts and kubectl in Kubernetes 101.
Feedback
Slack Channel: #sig-cluster-lifecycle
Mailing List: kubernetes-sig-cluster-lifecycle
GitHub Issues in the kubeadm repository
kubeadm is multi-platform
kubeadm deb packages and binaries are built for amd64, arm and arm64, following the multi-platform proposal.
deb-packages are released for ARM and ARM 64-bit, but not RPMs (yet, reach out if there’s interest).
Currently, only the pod network flannel is working on multiple architectures. You can install it this way:
# export ARCH=amd64
# curl -sSL "https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml?raw=true" | sed "s/amd64/${ARCH}/g" | kubectl create -f -
Replace ARCH=amd64 with ARCH=arm or ARCH=arm64 depending on the platform you’re running on. Note that the Raspberry Pi 3 is in ARM 32-bit mode, so for RPi 3 you should set ARCH to arm, not arm64.
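Rather than setting ARCH by hand, you could derive it from uname -m. The following is only a sketch covering the three architectures the packages are built for; the machine-name mapping is an assumption about your kernel:

```shell
# Map the kernel's machine name to the architecture names used in the
# flannel manifest. armv7l covers the Raspberry Pi 3 in ARM 32-bit mode.
case "$(uname -m)" in
  x86_64)        ARCH=amd64 ;;
  armv6l|armv7l) ARCH=arm ;;
  aarch64)       ARCH=arm64 ;;
  *) echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;
esac
export ARCH
echo "Using ARCH=${ARCH}"
```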
Cloudprovider integrations (experimental)
Enabling specific cloud providers is a common request; this currently requires manual configuration and is therefore not yet supported. If you wish to do so, edit the kubeadm dropin for the kubelet service (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) on all nodes, including the master. If your cloud provider requires any extra packages installed on the host, for example for volume mounting/unmounting, install those packages.
Specify the --cloud-provider flag to the kubelet and set it to the cloud of your choice. If your cloud provider requires a configuration file, create the file /etc/kubernetes/cloud-config on every node. The exact format and content of that file depends on the requirements imposed by your cloud provider. If you use the /etc/kubernetes/cloud-config file, you must append it to the kubelet arguments as follows:
--cloud-config=/etc/kubernetes/cloud-config
Lastly, run kubeadm init --cloud-provider=xxx to bootstrap your cluster with cloud provider features.
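Putting the steps above together, the edited dropin might look roughly like the following sketch. This is not the literal file: the existing ExecStart flags vary between kubeadm versions, and aws is only an example value for the provider:

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt, hypothetical)
[Service]
ExecStart=/usr/bin/kubelet <existing version-specific flags> \
  --cloud-provider=aws \
  --cloud-config=/etc/kubernetes/cloud-config
```

After editing the dropin, reload systemd and restart the kubelet (systemctl daemon-reload && systemctl restart kubelet) so the new flags take effect.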
This workflow is not yet fully supported; however, we hope to make it extremely easy to spin up clusters with cloud providers in the future. (See this proposal for more information.) The Kubelet Dynamic Settings feature may also help to fully automate this process in the future.
Limitations
Please note: kubeadm is a work in progress and these limitations will be addressed in due course.
The cluster created here has a single master, with a single etcd database running on it. This means that if the master fails, your cluster loses its configuration data and will need to be recreated from scratch. Adding HA support (multiple etcd servers, multiple API servers, etc) to kubeadm is still a work in progress.
Workaround: regularly back up etcd. The etcd data directory configured by kubeadm is at /var/lib/etcd on the master.
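A minimal backup sketch follows, assuming the default /var/lib/etcd path named above. The /var/backups/etcd destination is an arbitrary choice, and a file-level copy may be inconsistent unless etcd is briefly stopped first:

```shell
# Create a timestamped tarball of the etcd data directory on the master.
BACKUP_DIR=/var/backups/etcd
SRC=/var/lib/etcd
mkdir -p "$BACKUP_DIR"
tar -czf "${BACKUP_DIR}/etcd-$(date +%Y%m%d-%H%M%S).tar.gz" \
  -C "$(dirname "$SRC")" "$(basename "$SRC")"
ls -lh "$BACKUP_DIR"
```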
The HostPort and HostIP functionality does not work with kubeadm, because CNI networking is used; see issue #31307.
Workaround: use the NodePort feature of services instead, or use HostNetwork.
Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.:
# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
There is no built-in way of fetching the token easily once the cluster is up and running, but here is a kubectl command you can copy and paste that will print out the token for you:
# kubectl -n kube-system get secret clusterinfo -o yaml | grep token-map | awk '{print $2}' | base64 -d | sed "s|{||g;s|}||g;s|:|.|g;s/\"//g;" | xargs echo
If you are using VirtualBox (directly or via Vagrant), you will need to ensure that hostname -i returns a routable IP address (i.e. one on the second network interface, not the first one). By default, it doesn’t do this and the kubelet ends up using the first non-loopback network interface, which is usually NATed. Workaround: modify /etc/hosts; take a look at this Vagrantfile for how this can be achieved.
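As a sketch of the /etc/hosts workaround: map the machine's hostname to the routable address of its second (host-only/private) interface. Both the address and the hostname below are made-up examples:

```
# /etc/hosts (example entry; 192.168.33.10 and node-01 are hypothetical)
# With this mapping, `hostname -i` resolves to the routable private address
# instead of the NATed first interface.
192.168.33.10   node-01
```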