Using GlusterFS as KVM Backend Storage
2013-08-18 11:06
1. Test environment
CentOS 6.4 x86-64, GlusterFS 3.4
qemu-1.5.2
Machines:
192.168.1.100: GlusterFS + KVM
192.168.1.101-103: GlusterFS
2. Deploying the GlusterFS storage cluster
First deploy the GlusterFS cluster (a deployment tutorial is referenced at /article/3522187.html). Once the cluster is up, create a volume to hold the VM images:
gluster volume create vm-images stripe 2 replica 2 192.168.1.{100,101,102,103}:/data/vm-images
gluster volume start vm-images
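After starting the volume it is worth confirming that it came up as expected; a quick check with the volume name used above:
gluster volume info vm-images    # type should be Striped-Replicate, status Started, 4 bricks listed
gluster volume status vm-images  # all brick processes should be online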
3. Installing QEMU on the virtualization host
The stock QEMU on CentOS 6 does not support GlusterFS (native gluster support only landed in QEMU 1.3), so we compile a newer QEMU here. Before building, install glusterfs-devel:
rpm -ivh http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/CentOS/epel-6Server/x86_64/glusterfs-devel-3.4.0-8.el6.x86_64.rpm
Install the build dependencies:
yum install zlib-devel glib2-devel -y
Now build QEMU, passing --enable-glusterfs to configure:
wget http://wiki.qemu-project.org/download/qemu-1.5.2.tar.bz2
tar jxvf qemu-1.5.2.tar.bz2
cd qemu-1.5.2
./configure --enable-glusterfs   # enable the GlusterFS block driver
make; make install
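A quick sanity check that the freshly built binary (installed under the default /usr/local prefix) actually has the gluster driver compiled in; "gluster" should show up in the supported-formats list:
/usr/local/bin/qemu-img --help | grep gluster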
4. Usage
With the environment in place, use qemu-img to create a VM disk on the volume:
/usr/local/bin/qemu-img create -f qcow2 gluster://192.168.1.100/vm-images/disk1 10G
Then create a VM that boots the installer from it:
qemu-system-x86_64 --enable-kvm -m 1024 -drive file=gluster://192.168.1.100/vm-images/disk1 -vnc :15 -cdrom /data/CentOS-6.4-i386-minimal.iso
You can now connect over VNC (display :15, i.e. TCP port 5915) and install the system.
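Once the installation finishes, the guest can be booted straight from the gluster-backed image without the installer ISO; a minimal sketch using the same addresses and volume as above:
qemu-system-x86_64 --enable-kvm -m 1024 -drive file=gluster://192.168.1.100/vm-images/disk1 -vnc :15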
5. Postscript
QEMU accepts several URL formats for connecting to GlusterFS (an example follows the list):
gluster://1.2.3.4/testvol/a.img
gluster+tcp://1.2.3.4/testvol/a.img
gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
gluster+tcp://server.domain.com:24007/testvol/dir/a.img
gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
gluster+rdma://1.2.3.4:24007/testvol/a.img
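For instance, pointing qemu-img at the volume created earlier through the explicit-port TCP form:
/usr/local/bin/qemu-img info gluster+tcp://192.168.1.100:24007/vm-images/disk1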
Testing also shows that QEMU's gluster connection is highly available: with gluster://1.2.3.4/testvol/a.img, taking 1.2.3.4 down does not affect the running VM. The host named in the URL is only used to fetch the volume layout; once the guest is running, libgfapi talks to the bricks directly.
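A rough way to reproduce this check (run on 192.168.1.100 while a VM started with the URL above is running; service names per CentOS 6, and the pkill also takes down the brick processes on that node):
service glusterd stop
pkill glusterfsd
# the guest keeps running; I/O is served by the replicas on the other nodes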
Another thing that came up in testing: gluster striped volumes did not play well with XFS bricks, with the reported used space sometimes being wrong. The fix is:
gluster volume set <volname> cluster.stripe-coalesce enable
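Applied to the volume from this setup, plus a way to confirm that the option took effect:
gluster volume set vm-images cluster.stripe-coalesce enable
gluster volume info vm-images   # the option appears under "Options Reconfigured"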
6. Appendix: an upstream performance test
The results show that accessing GlusterFS through the native API nearly doubles throughput compared with the FUSE mount, coming close to local-disk speed. The following FIO benchmark numbers show the performance advantage of using QEMU's GlusterFS block driver instead of the usual FUSE mount when accessing the VM image.
Test setup
Host | Dual-core x86_64 system running Fedora 17 kernel (3.5.6-1.fc17.x86_64)
Guest | Fedora 17 image, 4-way SMP, 2 GB RAM, using virtio and cache=none QEMU options
FUSE mount | qemu-system-x86_64 --enable-kvm --nographic -smp 4 -m 2048 -drive file=/mnt/F17,if=virtio,cache=none (/mnt is the GlusterFS FUSE mount point)
GlusterFS block driver in QEMU (FUSE bypass) | qemu-system-x86_64 --enable-kvm --nographic -smp 4 -m 2048 -drive file=gluster://bharata/test/F17,if=virtio,cache=none
Base (VM image accessed directly from the brick) | qemu-system-x86_64 --enable-kvm --nographic -smp 4 -m 2048 -drive file=/test/F17,if=virtio,cache=none (/test is the brick directory)
Sequential read, direct IO:
; Read 4 files with aio at different depths
[global]
ioengine=libaio
direct=1
rw=read
bs=128k
size=512m
directory=/data1
[file1]
iodepth=4
[file2]
iodepth=32
[file3]
iodepth=8
[file4]
iodepth=16

Sequential write, direct IO:
; Write 4 files with aio at different depths
[global]
ioengine=libaio
direct=1
rw=write
bs=128k
size=512m
directory=/data1
[file1]
iodepth=4
[file2]
iodepth=32
[file3]
iodepth=8
[file4]
iodepth=16
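Assuming the read job above is saved as seq-read.fio (the filename is ours), it is run simply as:
fio seq-read.fio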
Sequential read results
Setup | aggrb (KB/s) | minb (KB/s) | maxb (KB/s)
FUSE mount | 15219 | 3804 | 5792
QEMU's GlusterFS block driver (FUSE bypass) | 39357 | 9839 | 12946
Base | 43802 | 10950 | 12918
Sequential write results
Setup | aggrb (KB/s) | minb (KB/s) | maxb (KB/s)
FUSE mount | 24579 | 6144 | 8423
QEMU's GlusterFS block driver (FUSE bypass) | 42707 | 10676 | 17262
Base | 42393 | 10598 | 15646
Updated numbers
Here are the recent FIO numbers averaged from 5 runs using latest QEMU (git commit: 03a36f17d77) and GlusterFS (git commit: cee1b62d01). The test environment remains the same as above with the following two changes:
- The GlusterFS volume has the write-behind translator turned off
- The host kernel is upgraded to 3.6.7-4.fc17.x86_64
FIO READ numbers
Setup | aggrb (KB/s) | % Reduction from Base
Base | 44464 | 0
FUSE mount | 21637 | -51
QEMU's GlusterFS block driver (FUSE bypass) | 38847 | -12.6
FIO WRITE numbers
Setup | aggrb (KB/s) | % Reduction from Base
Base | 45824 | 0
FUSE mount | 40919 | -10.7
QEMU's GlusterFS block driver (FUSE bypass) | 45627 | -0.43