
[Beginner Notes: LVM Logical Volume Management and Software RAID Arrays]

2014-06-22 18:23
Exercise 1: Creating a volume group

Prepare three 10 GB free partitions and change their type ID to 8e (Linux LVM).

[root@localhost ~]# fdisk /dev/sdb

In fdisk's interactive mode, create each partition with n -> p -> partition number -> start -> end (partition size), then use t -> partition number -> 8e to change the type. The partition table should show:

Device Boot    Start    End    Blocks    Id  System
/dev/sdb1      1        1217   9775521   8e  Linux LVM

Save with w and exit. Use two of the partitions to build a volume group named myvg, then view the volume group's information.
First check which physical volumes exist:
[root@localhost ~]# pvscan
No matching physical volumes found

Convert the two free partitions into physical volumes, for example:
[root@localhost ~]# pvcreate /dev/sdb1
Writing physical volume data to disk "/dev/sdb1"
Physical volume "/dev/sdb1" successfully created

Check again which physical volumes exist, and view the details of one of them:
[root@localhost ~]# pvscan
PV /dev/sdb1    lvm2 [9.32 GB]
PV /dev/sdb2    lvm2 [9.32 GB]
Total: 2 [18.65 GB] / in use: 0 [0 ] / in no VG: 2 [18.65 GB]
[root@localhost ~]# pvdisplay /dev/sdb1
"/dev/sdb1" is a new physical volume of "9.32 GB"
--- NEW Physical volume ---
PV Name /dev/sdb1
VG Name
PV Size 9.32 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 9QuHkE-pXKI-tlWM-vJdv-2qmt-Sd3A-p8Sbwq
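As a quick cross-check of the sizes above: fdisk reported 9,775,521 one-kilobyte blocks for /dev/sdb1, which is exactly the 9.32 GB that pvscan and pvdisplay show. A minimal sketch of the conversion, assuming binary gigabytes (1 GB = 1024 * 1024 KB):

```shell
# Convert fdisk's 1 KB block count to binary gigabytes
awk 'BEGIN { printf "%.2f GB\n", 9775521 / (1024 * 1024) }'
```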

First check which volume groups exist:
[root@localhost ~]# vgdisplay
No volume groups found

Combine the two physical volumes into volume group myvg:
[root@localhost ~]# vgcreate myvg /dev/sdb1 /dev/sdb2
Volume group "myvg" successfully created

Check again which volume groups exist, and view the details of myvg:
[root@localhost ~]# vgdisplay
--- Volume group ---
VG Name               myvg
System ID
Format                lvm2
Metadata Areas        2
Metadata Sequence No  1
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                0
Open LV               0
Max PV                0
Cur PV                2
Act PV                2
VG Size               18.64 GB
PE Size               4.00 MB
Total PE              4772
Alloc PE / Size       0 / 0
Free  PE / Size       4772 / 18.64 GB
VG UUID               oSPZlv-Gt6D-gTQA-Gmw6-OsRd-TRqD-gcfbr0
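The VG Size reported by vgdisplay follows directly from the extent figures: 4772 physical extents (PEs) of 4 MB each. A quick sketch of the arithmetic:

```shell
# VG capacity = Total PE x PE Size
total_pe=4772
pe_size_mb=4
vg_size_mb=$((total_pe * pe_size_mb))
awk -v mb="$vg_size_mb" 'BEGIN { printf "%d MB = %.2f GB\n", mb, mb / 1024 }'
```

The result matches the "VG Size 18.64 GB" line above; LVM always allocates space in whole extents.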

Exercise 2: Creating, using, and extending a logical volume

Carve out a 16 GB logical volume named lvmox and view its information:
[root@localhost ~]# lvcreate -L 16G -n lvmox myvg
Logical volume "lvmox" created
[root@localhost ~]# lvdisplay
--- Logical volume ---
LV Name                /dev/myvg/lvmox
VG Name                myvg
LV UUID                r22EGe-Cvg5-D1Qf-Q6lt-s3SJ-XuL1-gIALQD
LV Write Access        read/write
LV Status              available
# open                 0
LV Size                16.00 GB
Current LE             4096
Segments               2
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:0

Format this logical volume as an ext3 file system and mount it at /mbox.
Format the logical volume:
[root@localhost ~]# mkfs.ext3 /dev/myvg/lvmox
Mount it:
[root@localhost ~]# mkdir /mbox
[root@localhost ~]# mount /dev/myvg/lvmox /mbox/
Verify with the mount command:
/dev/mapper/myvg-lvmox on /mbox type ext3 (rw)

Enter the /mbox directory and test reading and writing.
Write:
[root@localhost mbox]# ifconfig > 121.txt
[root@localhost mbox]# ls
121.txt  lost+found
Read:
[root@localhost mbox]# cat 121.txt
eth0      Link encap:Ethernet  HWaddr 00:0C:29:19:BB:76
Extend the logical volume from 16 GB to 24 GB, making sure the size reported by df is accurate.

First extend the volume group (add a 10 GB physical volume), then extend the logical volume:
[root@localhost mbox]# vgextend myvg /dev/sdb3
No physical volume label read from /dev/sdb3
Writing physical volume data to disk "/dev/sdb3"
Physical volume "/dev/sdb3" successfully created
Volume group "myvg" successfully extended
Extend the logical volume:
[root@localhost mbox]# lvextend -L +8G /dev/myvg/lvmox
Extending logical volume lvmox to 24.00 GB
Logical volume lvmox successfully resized
Let resize2fs recognize the file system's new size:
[root@localhost mbox]# resize2fs /dev/myvg/lvmox

Create a 250 MB logical volume lvtest. First change the PE size so a 250 MB volume fits exactly:
[root@localhost mbox]# vgchange -s 1M myvg
Volume group "myvg" successfully changed
Check:
[root@localhost mbox]# vgdisplay
--- Volume group ---
VG Name               myvg
System ID
Format                lvm2
Metadata Areas        3
Metadata Sequence No  5
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               1
Max PV                0
Cur PV                3
Act PV                3
VG Size               27.96 GB
PE Size               1.00 MB
Total PE              28632
Alloc PE / Size       24576 / 24.00 GB
Free  PE / Size       4056 / 3.96 GB
VG UUID               oSPZlv-Gt6D-gTQA-Gmw6-OsRd-TRqD-gcfbr0
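The reason for vgchange -s 1M shows up in the arithmetic: LVM allocates whole extents, so with the default 4 MB PE a 250 MB request would be rounded up, while a 1 MB PE fits it exactly. A small sketch of the rounding:

```shell
# Round a 250 MB request up to whole physical extents for two PE sizes
size_mb=250
for pe in 4 1; do
  extents=$(( (size_mb + pe - 1) / pe ))
  echo "PE=${pe}MB: ${extents} extents = $((extents * pe)) MB"
done

# Cross-check against Exercise 2: the 16 GB lvmox with 4 MB PEs
# occupies 16*1024/4 extents, matching its "Current LE 4096" line.
echo "lvmox extents: $(( 16 * 1024 / 4 ))"
```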
Exercise 3: Putting logical volumes to work

Delete the volume group myvg built in the previous exercise. Make sure nothing is in use or mounted before removing it:
[root@localhost ~]# vgremove myvg
Do you really want to remove volume group "myvg" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume lvmox? [y/n]: y
Logical volume "lvmox" successfully removed
Volume group "myvg" successfully removed

Use two of the physical volumes to form volume group vgnsd, and the remaining one to form volume group vgdata.
[root@localhost ~]# vgcreate vgnsd /dev/sdb1 /dev/sdb2
Volume group "vgnsd" successfully created
[root@localhost ~]# vgcreate vgdata /dev/sdb3
Volume group "vgdata" successfully created

Create a 16 GB logical volume lvhome from volume group vgnsd:
[root@localhost ~]# lvcreate -L 16G -n lvhome vgnsd
Logical volume "lvhome" created
Create a 4 GB logical volume lvswap from volume group vgdata:
[root@localhost ~]# lvcreate -L 4G -n lvswap vgdata
Logical volume "lvswap" created

Migrate the /home directory to logical volume lvhome.
[root@localhost ~]# mkfs.ext3 /dev/vgnsd/lvhome
[root@localhost ~]# mkdir /1
[root@localhost ~]# mv /home/* /1
[root@localhost ~]# mount /dev/vgnsd/lvhome /home
/dev/mapper/vgnsd-lvhome on /home type ext3 (rw)

Extend the swap space with logical volume lvswap.
Format lvswap as swap and enable it:
[root@localhost ~]# mkswap /dev/vgdata/lvswap
Setting up swapspace version 1, size = 4294963 kB
[root@localhost ~]# swapon /dev/vgdata/lvswap
[root@localhost ~]# swapon -s
Filename                       Type       Size     Used  Priority
/dev/sda3                      partition  200804   0     -1
/dev/mapper/vgdata-lvswap      partition  4194296  0     -2
Configure the mounts from the previous steps to be applied automatically at boot, and verify after a reboot. Add the entries via:
[root@localhost ~]# vim /etc/fstab
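A minimal sketch of what the /etc/fstab entries could look like for the volumes created above. The device paths come from this exercise; the mount options and dump/pass fields are typical defaults, not taken from the original:

```
/dev/vgnsd/lvhome    /home    ext3    defaults    0 0
/dev/vgdata/lvswap   swap     swap    defaults    0 0
```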
Exercise 4: Creating a software RAID array

1) Preparation
Add four empty disks of 20 GB each.
Partition the first and second disks into a single primary partition each.
Change the type ID of those partitions to fd:
Device Boot Start End Blocks Id System
/dev/sdb1 1 2610 20964793+ fd Linux raid autodetect
2) Array creation practice

a) Create a RAID0 device /dev/md0 and a RAID1 device /dev/md1:
[root@localhost ~]# mdadm -C /dev/md0 -l0 -n2 /dev/sdb1 /dev/sdc1
mdadm: array /dev/md0 started.
[root@localhost ~]# mdadm -C /dev/md1 -l1 -n2 /dev/sdd /dev/sde
mdadm: array /dev/md1 started.

b) Check the capacity and number of member disks of each array (-Q / -D):
[root@localhost ~]# mdadm -D/dev/md0
/dev/md0:
Version : 0.90
Creation Time : Wed Jun 4 19:04:41 2014
Raid Level : raid0
Array Size : 41929344 (39.99 GiB 42.94 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Jun 4 19:04:41 2014
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Chunk Size : 64K

UUID : 923d3722:10437de4:f871f97a:b358ef7b
Events : 0.1

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1

[root@localhost ~]# mdadm -D/dev/md1
/dev/md1:
Version : 0.90
Creation Time : Wed Jun 4 19:05:15 2014
Raid Level : raid1
Array Size : 20971456 (20.00 GiB 21.47 GB)
Used Dev Size : 20971456 (20.00 GiB 21.47 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Wed Jun 4 19:06:59 2014
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : 1a6e3772:e4b55604:dbe09f01:b78a3faa
Events : 0.4

Number Major Minor RaidDevice State
0 8 48 0 active sync /dev/sdd
1 8 64 1 active sync /dev/sde
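The two Array Size figures above follow from how each level uses its members: RAID0 stripes across both disks and gets their combined capacity, while RAID1 mirrors them and gets only one member's worth. A quick sketch with the roughly 20 GB members used here:

```shell
member_gb=20
echo "RAID0 (stripe, 2 disks): $(( 2 * member_gb )) GB usable"
echo "RAID1 (mirror, 2 disks): ${member_gb} GB usable"
```

This matches md0's ~40 GB and md1's ~20 GB in the mdadm -D output.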
c) Stop and delete the array devices /dev/md0 and /dev/md1 (-S):
[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@localhost ~]# rm -rf /dev/md0

d) Create a RAID5 software array device /dev/md0:
- use a partition for the first member disk
- use whole disks for the other three member disks
- check the partition tables of the first and second disks with fdisk
[root@localhost ~]# mdadm -C /dev/md0 -l5 -n4 /dev/sdb1 /dev/sd[c-e]
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid0 devices=2 ctime=Wed Jun 4 19:04:41 2014
mdadm: /dev/sdd appears to be part of a raid array:
    level=raid1 devices=2 ctime=Wed Jun 4 19:05:15 2014
mdadm: /dev/sde appears to be part of a raid array:
    level=raid1 devices=2 ctime=Wed Jun 4 19:05:15 2014
Continue creating array? y
mdadm: array /dev/md0 started.
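For RAID5, one member's worth of capacity goes to parity, so usable space is (n - 1) times the member size. A sketch with the four roughly 20 GB members used here:

```shell
n=4
member_gb=20
echo "RAID5 usable: $(( (n - 1) * member_gb )) GB"
```

This is consistent with the ~60 GB (59.98 GiB) array size that mdadm -D reports for this array later on.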

Exercise 5: Formatting and using the array

Format the RAID5 array /dev/md0 as an EXT3 file system:
[root@localhost ~]# mkfs.ext3 /dev/md0

Mount the array device /dev/md0 at the /mymd directory:
[root@localhost ~]# mkdir /mymd
[root@localhost ~]# mount /dev/md0 /mymd/
Check with mount:
/dev/md0 on /mymd type ext3 (rw)

Enter /mymd and test reading and writing.
Write:
[root@localhost mymd]# ls > 12.txt
[root@localhost mymd]# ls
12.txt  lost+found
Read:
[root@localhost mymd]# cat 12.txt
12.txt
lost+found
Exercise 6: RAID5 array failure testing

1) Use the VMware settings to pull out the last member disk of array /dev/md0, then inspect the array:
[root@localhost mymd]# mdadm -D /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Wed Jun 4 19:10:30 2014
Raid Level : raid5
Array Size : 62894016 (59.98 GiB 64.40 GB)
Used Dev Size : 20964672 (19.99 GiB 21.47 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Jun 4 19:16:02 2014
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

UUID : 8a0dd0eb:2fdf8913:00f9e8e9:972e8b80
Events : 0.14

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 0 0 3 removed
4 8 64 - faulty spare /dev/sde

2) Access /mymd again and test reading and writing: both still work normally.

3) Replacing the failed disk in the RAID5 array
Mark the failed member disk as faulty:
[root@localhost mymd]# mdadm /dev/md0 -f /dev/sde
mdadm: set /dev/sde faulty in /dev/md0
Remove the failed member disk:
[root@localhost mymd]# mdadm /dev/md0 -r /dev/sde
mdadm: hot removed /dev/sde
Re-add a good member disk (the same size as the other members):
[root@localhost mymd]# mdadm /dev/md0 -a /dev/sde
mdadm: added /dev/sde

Watch the array status to observe the rebuild:
[root@localhost mymd]# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat          Wed Jun 4 19:22:10 2014
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb1[0]
      62894016 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
      [================>....]  recovery = 82.3% (17257344/20964672) finish=0.3min speed=196343K/sec
unused devices: <none>
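The finish=0.3min estimate in the mdstat output is simply the remaining blocks divided by the rebuild speed. A sketch using the figures shown above:

```shell
total_kb=20964672    # per-device blocks to rebuild
done_kb=17257344     # blocks already rebuilt (82.3%)
speed_kbs=196343     # reported rebuild speed in K/sec
echo "about $(( (total_kb - done_kb) / speed_kbs )) seconds left"
```

Roughly 19 seconds, which mdstat rounds to 0.3 minutes.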
Exercise 7: Saving and reassembling an array

Query the currently running array configuration:
[root@localhost mymd]# mdadm -vDs
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=0.90 UUID=8a0dd0eb:2fdf8913:00f9e8e9:972e8b80
   devices=/dev/sdb1,/dev/sdc,/dev/sdd,/dev/sde

Save the running array configuration to /etc/mdadm.conf:
[root@localhost mymd]# mdadm -vDs > /etc/mdadm.conf

Stop and delete the array /dev/md0:
[root@localhost ~]# umount /dev/md0
[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@localhost ~]# rm -rf /dev/md0

Reassemble the array /dev/md0 and mount it to test:
[root@localhost ~]# mdadm -A /dev/md0
mdadm: /dev/md0 has been started with 4 drives.
[root@localhost ~]# mount /dev/md0 /mymd/
/dev/md0 on /mymd type ext3 (rw)