Deploy Ceph and start using it: end to end tutorial – Installation (part 1/3)
2014-06-27 19:52
http://blog.zhaw.ch/icclab/deploy-ceph-and-start-using-it-end-to-end-tutorial-installation-part-13/
Ceph is one of the most interesting distributed storage systems available, with very active development and a complete set of features that make it a valuable candidate for cloud storage services. This tutorial goes through the steps (and some related troubleshooting) required to set up a Ceph cluster and access it with a simple client using librados. Please refer to the Ceph documentation for detailed insights on Ceph components.
(Part 2/3 – Troubleshooting – Part 3/3 – librados client)
Assumptions
Ceph version: 0.79
Installation with ceph-deploy
Operating system for the Ceph nodes: Ubuntu 14.04
Cluster architecture
In a minimal Ceph deployment, a Ceph cluster includes one Ceph monitor (MON) and a number of Object Storage Devices (OSD). Administrative and control operations are issued from an admin node, which does not necessarily have to be separate from the Ceph cluster (e.g., the monitor node can also act as the admin node). Metadata server nodes (MDS) are required only for the Ceph Filesystem (Ceph Block Devices and Ceph Object Storage do not use MDS).
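For reference, this is a sketch of the layout assumed throughout this tutorial; the hostnames mon0, osd0 and osd1 are just the example names used in the commands below, so adapt them to your environment:
# mon0 - admin node + monitor daemon (runs ceph-deploy)
# osd0 - storage node, one OSD daemon + journal partition
# osd1 - storage node, one OSD daemon + journal partition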
Preparing the storage
WARNING: preparing the storage for Ceph means deleting a disk’s partition table and losing all its data. Proceed only if you know exactly what you are doing!
Ceph needs some physical storage to be used for Object Storage Devices (OSD) and journals. As the project documentation recommends, for better performance the journal should be on a separate drive from the OSD. Ceph supports ext4, btrfs and xfs.
I tried setting up clusters with both btrfs and xfs, but I could achieve stable results only with xfs, so the rest of this tutorial uses xfs.
Prepare a GPT partition table (I have observed stability issues when using a DOS partition table)
$ sudo parted /dev/sd<x>
(parted) mklabel gpt
(parted) mkpart primary xfs 0 100%
(parted) quit
If parted complains about alignment issues (“Warning: The resulting partition is not properly aligned for best performance”), check these two links for a solution: 1 and 2.
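A workaround that usually silences the alignment warning (my own suggestion, not taken from the links above) is to ask parted for optimal alignment and to specify the partition boundaries as percentages, so parted can pick properly aligned sector boundaries itself:
$ sudo parted -a optimal /dev/sd<x>
(parted) mklabel gpt
(parted) mkpart primary xfs 0% 100%
(parted) quit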
Format the disk with xfs (you might need to install the xfs tools with sudo apt-get install xfsprogs)
$ sudo mkfs.xfs /dev/sd<x>1
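Optionally, you can double-check that the filesystem was created as expected; blkid and lsblk are standard utilities, not part of the original steps:
$ sudo blkid /dev/sd<x>1
$ lsblk -f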
Create a Journal partition (raw/unformatted)
$ sudo parted /dev/sd<y>
(parted) mklabel gpt
(parted) mkpart primary 0 100%
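To confirm that the journal partition exists and carries no filesystem (this check is my own addition), you can print the partition table:
$ sudo parted /dev/sd<y> print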
Install ceph-deploy
The ceph-deploy tool must be installed only on the admin node. Access to the other nodes for configuration purposes will be handled by ceph-deploy over SSH (with keys).
Add the Ceph repository to your apt configuration, replacing {ceph-stable-release} with the Ceph release name that you want to install (e.g., emperor, firefly, …)
$ echo deb http://ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
Install the trusted key with
$ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
If there is no repository for your Ubuntu version, you can try to select the newest one available by manually editing the file /etc/apt/sources.list.d/ceph.list and changing the Ubuntu codename (e.g., trusty -> raring)
deb http://ceph.com/debian-emperor raring main
Install ceph-deploy
$ sudo apt-get update
$ sudo apt-get install ceph-deploy
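To confirm that the package came from the Ceph repository rather than the stock Ubuntu one (an optional check of mine), inspect apt's view of the package:
$ apt-cache policy ceph-deploy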
Setup the admin node
Each Ceph node will be set up with a user having passwordless sudo permissions, and each node will store the public key of the admin node to allow passwordless SSH access. With this configuration, ceph-deploy will be able to install and configure every node of the cluster.
NOTE: the hostnames (i.e., the output of hostname -s) must match the Ceph node names!
[optional] Create a dedicated user for cluster administration (this is particularly useful if the admin node is part of the Ceph cluster)
$ sudo useradd -d /home/cluster-admin -m cluster-admin -s /bin/bash
then set a password and switch to the new user
$ sudo passwd cluster-admin
$ su cluster-admin
Install SSH server on all the cluster nodes (even if a cluster node is also an admin node)
$ sudo apt-get install openssh-server
Add a ceph user on each Ceph cluster node (even if a cluster node is also an admin node) and give it passwordless sudo permissions
$ sudo useradd -d /home/ceph -m ceph -s /bin/bash
$ sudo passwd ceph
<Enter password>
$ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
$ sudo chmod 0440 /etc/sudoers.d/ceph
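A quick way to verify the sudoers entry on each node (my own check, not part of the original post) is to switch to the ceph user and run a no-op command with sudo in non-interactive mode; it should succeed without asking for a password:
$ su ceph
$ sudo -n true && echo "passwordless sudo OK"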
Edit the /etc/hosts file to add mappings to the cluster nodes. Example:
$ cat /etc/hosts
127.0.0.1 localhost
192.168.58.2 mon0
192.168.58.3 osd0
192.168.58.4 osd1
To enable DNS resolution with the hosts file, install dnsmasq
$ sudo apt-get install dnsmasq
Generate a public key for the admin user and install it on every Ceph node
$ ssh-keygen
$ ssh-copy-id ceph@mon0
$ ssh-copy-id ceph@osd0
$ ssh-copy-id ceph@osd1
Set up an SSH access configuration by editing the .ssh/config file. Example:
Host osd0
   Hostname osd0
   User ceph
Host osd1
   Hostname osd1
   User ceph
Host mon0
   Hostname mon0
   User ceph
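With the keys and the SSH configuration in place you can verify, from the admin node, that passwordless login works and that each node reports the expected short hostname (this combined check is my own suggestion):
$ ssh mon0 hostname -s
$ ssh osd0 hostname -s
$ ssh osd1 hostname -s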
Before proceeding, check that the ping and host commands work for each node
$ ping mon0
$ ping osd0
...
$ host osd0
$ host osd1
Setup the cluster
Administration of the cluster is done entirely from the admin node. Move to a dedicated directory to collect the files that ceph-deploy will generate. This will be the working directory for any further use of ceph-deploy.
$ mkdir ceph-cluster
$ cd ceph-cluster
Deploy the monitor node(s) – replace mon0 with the list of hostnames of the initial monitor nodes
$ ceph-deploy new mon0
[ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy new mon0
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][DEBUG ] Resolving host mon0
[ceph_deploy.new][DEBUG ] Monitor mon0 at 192.168.58.2
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph_deploy.new][DEBUG ] Monitor initial members are ['mon0']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.58.2']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
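The generated ceph.conf should contain roughly the following; this is only an indicative sketch, since the fsid is a randomly generated UUID and the exact set of options can differ between ceph-deploy versions:
[global]
fsid = <random uuid>
mon_initial_members = mon0
mon_host = 192.168.58.2
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx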
Add a public network entry in the ceph.conf file if you have separate public and cluster networks (check the network configuration reference)
public network = {ip-address}/{netmask}
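For example, with the addresses used earlier in this tutorial (192.168.58.x, assuming a /24 netmask, which is my assumption here), the entry in the [global] section of ceph.conf would be:
public network = 192.168.58.0/24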
Install Ceph on all the nodes of the cluster. Use the --no-adjust-repos option if you are using different apt configurations for Ceph. NOTE: you may need to confirm the authenticity of the hosts if you are accessing them over SSH for the first time!
Example (replace mon0 osd0 osd1 with your node names):
$ ceph-deploy install --no-adjust-repos mon0 osd0 osd1
Create monitor and gather keys
$ ceph-deploy mon create-initial
The content of the working directory after this step should look like this:
cadm@mon0:~/my-cluster$ ls
ceph.bootstrap-mds.keyring
ceph.bootstrap-osd.keyring
ceph.client.admin.keyring
ceph.conf
ceph.log
ceph.mon.keyring
release.asc
Prepare OSDs and OSD Daemons
When deploying OSDs, consider that a single node can run multiple OSD daemons and that, for better performance, the journal partition should be on a separate drive from the OSD. List the disks on a node (replace osd0 with the name of your storage node(s))
$ ceph-deploy disk list osd0
This command is also useful for diagnostics: when an OSD is correctly mounted on Ceph, you should see entries similar to this one in the output:
[ceph-osd1][DEBUG ] /dev/sdb :
[ceph-osd1][DEBUG ]  /dev/sdb1 other, xfs, mounted on /var/lib/ceph/osd/ceph-0
If you haven’t already prepared your storage, or if you want to reformat a partition, use the zap command (WARNING: this
will erase the partition)
$ ceph-deploy disk zap --fs-type xfs osd0:/dev/sd<x>1
Prepare and activate the disks (ceph-deploy also has a create command that should combine these two operations, but for some reason it was not working for me). In this example, we are using /dev/sd<x>1 as the OSD and /dev/sd<y>2 as the journal on two different nodes, osd0 and osd1
$ ceph-deploy osd prepare osd0:/dev/sd<x>1:/dev/sd<y>2 osd1:/dev/sd<x>1:/dev/sd<y>2
$ ceph-deploy osd activate osd0:/dev/sd<x>1:/dev/sd<y>2 osd1:/dev/sd<x>1:/dev/sd<y>2
Final steps
Now we need to copy the cluster configuration to all nodes and check the operational status of our Ceph deployment. Copy the keys and configuration files (replace mon0 osd0 osd1 with the names of your Ceph nodes)
$ ceph-deploy admin mon0 osd0 osd1
Ensure proper permissions for admin keyring
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
Check the Ceph status and health
$ ceph health
$ ceph status
If, at this point, the reported health of your cluster is HEALTH_OK, then most of the work is done. Otherwise, check the troubleshooting part of this tutorial.
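A couple of additional commands I find useful at this stage (not part of the original steps, but both are standard Ceph CLI subcommands) show the OSD tree and the monitor quorum:
$ ceph osd tree          # both OSDs should be listed as "up"
$ ceph quorum_status     # monitor quorum information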
Revert installation
There are useful commands to purge the Ceph installation and configuration from every node, so that one can start over again from a clean state. This will remove the Ceph configuration and keys
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
This will also remove Ceph packages
ceph-deploy purge {ceph-node} [{ceph-node}]
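As a concrete example for the hostnames used in this tutorial, a full reset issued from the admin node could look like this (simply the commands above with the node names filled in):
$ ceph-deploy purge mon0 osd0 osd1
$ ceph-deploy purgedata mon0 osd0 osd1
$ ceph-deploy forgetkeys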
Before getting a healthy Ceph cluster I had to purge and reinstall several times, cycling between the “Setup the cluster”, “Prepare OSDs and OSD Daemons” and “Final steps” parts, while addressing every warning that ceph-deploy was reporting.