
Implementing disk quotas (quota) and software RAID on a Linux system

2014-01-09 18:37
1. Create test users and adjust the mount options
[root@localhost ~]# useradd user1 -- create two test users
[root@localhost ~]# useradd user2
[root@localhost ~]# mount -o remount,usrquota,grpquota /mnt/sdb -- remount with the quota options
[root@localhost ~]# mount -l -- verify the mount options
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/sdb1 on /mnt/sdb type ext4 (rw,usrquota,grpquota)
[root@localhost ~]# quotacheck -avug -mf -- generate the two quota files
quotacheck: Your kernel probably supports journaled quota but you are not using it. Consider switching to journaled quota to avoid running quotacheck after an unclean shutdown.
quotacheck: Scanning /dev/sdb1 [/mnt/sdb] done
quotacheck: Cannot stat old user quota file: No such file or directory
quotacheck: Cannot stat old group quota file: No such file or directory
quotacheck: Cannot stat old user quota file: No such file or directory
quotacheck: Cannot stat old group quota file: No such file or directory
quotacheck: Checked 2 directories and 0 files
quotacheck: Old file not found.
quotacheck: Old file not found.
[root@localhost ~]# ll /mnt/sdb -- check the two generated files
total 26
-rw-------. 1 root root 6144 Jan 9 17:59 aquota.group
-rw-------. 1 root root 6144 Jan 9 17:59 aquota.user
drwx------. 2 root root 12288 Jan 9 17:55 lost+found
[root@localhost ~]# quotaon -avug -- turn quotas on
/dev/sdb1 [/mnt/sdb]: group quotas turned on
/dev/sdb1 [/mnt/sdb]: user quotas turned on
[root@localhost ~]# edquota -u user1 -- set block limits (units are 1 KB blocks, so 20000 ≈ 20 MB)
Disk quotas for user user1 (uid 500):
Filesystem blocks soft hard inodes soft hard
/dev/sdb1 0 10000 20000 0 0 0
[root@localhost ~]# edquota -u user2
Disk quotas for user user2 (uid 501):
Filesystem blocks soft hard inodes soft hard
/dev/sdb1 0 5000 10000 0 0 0
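
The same limits can be set without going through edquota's editor: setquota, from the same quota package, takes the values on the command line (block-soft block-hard inode-soft inode-hard), which is handy for scripting:

[root@localhost ~]# setquota -u user1 10000 20000 0 0 /dev/sdb1
[root@localhost ~]# setquota -u user2 5000 10000 0 0 /dev/sdb1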

2. Verify the quotas
[root@localhost ~]# su - user1
[user1@localhost ~]$ cd /mnt/sdb
[user1@localhost sdb]$ dd if=/dev/zero of=12 bs=1M count=5 -- a 5 MB file is created with no warning, as expected
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.0525754 s, 99.7 MB/s
[user1@localhost sdb]$ ll -h 12
-rw-rw-r--. 1 user1 user1 5.0M Jan 9 18:16 12
[user1@localhost sdb]$ dd if=/dev/zero of=123 bs=1M count=21 -- writing a 21 MB file trips the quota: a warning is printed and the write fails
sdb1: warning, user block quota exceeded.
sdb1: write failed, user block limit reached.
dd: writing `123': Disk quota exceeded
20+0 records in
19+0 records out
20475904 bytes (20 MB) copied, 0.20094 s, 102 MB/s
[user1@localhost sdb]$ ll -h 123
-rw-rw-r--. 1 user1 user1 0 Jan 9 18:17 123
[user1@localhost sdb]$ exit
logout
[root@localhost ~]# su - user2 -- now test as user2
[user2@localhost ~]$ cd /mnt/sdb
[user2@localhost sdb]$ dd if=/dev/zero of=23 bs=1M count=8 -- an 8 MB write succeeds, but it is past the 5 MB soft limit, so a warning appears
sdb1: warning, user block quota exceeded.
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.0923618 s, 90.8 MB/s
[user2@localhost sdb]$ ll -h 23 -- check the file size
-rw-rw-r--. 1 user2 user2 8.0M Jan 9 18:23 23
[user2@localhost sdb]$
[user2@localhost sdb]$ dd if=/dev/zero of=23 bs=1M count=11 -- an 11 MB write fails: it exceeds the 10 MB hard limit
sdb1: warning, user block quota exceeded.
sdb1: write failed, user block limit reached.
dd: writing `23': Disk quota exceeded
10+0 records in
9+0 records out
10235904 bytes (10 MB) copied, 0.106298 s, 96.3 MB/s
[user2@localhost sdb]$
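
As a side note, a user can check their own usage at any point; quota -s prints it in human-readable units:

[user2@localhost sdb]$ quota -s -- show my own usage and limits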

3. Review the quota settings, change the grace period, and disable quotas
[root@localhost ~]# quota -vu user1 user2 -- show quota info for specific users
Disk quotas for user user1 (uid 500):
Filesystem blocks quota limit grace files quota limit grace
/dev/sdb1 0 10000 20000 0 0 0
Disk quotas for user user2 (uid 501):
Filesystem blocks quota limit grace files quota limit grace
/dev/sdb1 8193* 5000 10000 6days 1 0 0
[root@localhost ~]# repquota -av -- report usage and quotas for all users
*** Report for user quotas on device /dev/sdb1
Block grace time: 7days; Inode grace time: 7days
Block limits File limits
User used soft hard grace used soft hard grace
----------------------------------------------------------------------
root -- 13 0 0 2 0 0
user1 -- 0 10000 20000 0 0 0
user2 +- 8193 5000 10000 6days 1 0 0
Statistics:
Total blocks: 7
Data blocks: 1
Entries: 3
Used average: 3.000000
[root@localhost ~]# edquota -t -- change the grace periods (block grace and inode grace)
Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
Filesystem Block grace period Inode grace period
/dev/sdb1 7days 7days
[root@localhost ~]# vim /etc/warnquota.conf -- edit the warning-mail configuration
[root@localhost ~]# quotaoff /mnt/sdb -- turn quotas off
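
Note that the remount from step 1 does not survive a reboot. To make the quota options permanent, add them to /etc/fstab; the quotacheck warning earlier also suggests journaled quota, which (assuming this kernel supports jqfmt=vfsv0, as that warning implies) would look like the second variant:

/dev/sdb1 /mnt/sdb ext4 defaults,usrquota,grpquota 0 0
# or, with journaled quota:
/dev/sdb1 /mnt/sdb ext4 defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 0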

4. Partition the disks and set the partition type for the software arrays
[root@localhost ~]# sfdisk -l -- list all disks in the system
Disk /dev/sda: 1044 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/sda1 * 0+ 63- 64- 512000 83 Linux
/dev/sda2 63+ 1044- 981- 7875584 8e Linux LVM
/dev/sda3 0 - 0 0 0 Empty
/dev/sda4 0 - 0 0 0 Empty
Disk /dev/sdb: 74 cylinders, 255 heads, 63 sectors/track -- second disk
Disk /dev/sdc: 79 cylinders, 255 heads, 63 sectors/track -- third disk
Disk /dev/sdd: 74 cylinders, 255 heads, 63 sectors/track -- fourth disk
Disk /dev/mapper/VolGroup-lv_root: 849 cylinders, 255 heads, 63 sectors/track
Disk /dev/mapper/VolGroup-lv_swap: 130 cylinders, 255 heads, 63 sectors/track
[root@localhost ~]# fdisk -cu /dev/sdb -- partition and set the partition type (repeat for the other disks; their output is omitted here)
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x2255ec93.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-1196031, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-1196031, default 1196031): +100M
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First sector (206848-1196031, default 206848):
Using default value 206848
Last sector, +sectors or +size{K,M,G} (206848-1196031, default 1196031): +100M
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First sector (411648-1196031, default 411648):
Using default value 411648
Last sector, +sectors or +size{K,M,G} (411648-1196031, default 1196031): +100M
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)
Command (m for help): p
Disk /dev/sdb: 612 MB, 612368384 bytes
255 heads, 63 sectors/track, 74 cylinders, total 1196032 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2255ec93
Device Boot Start End Blocks Id System
/dev/sdb1 2048 206847 102400 fd Linux raid autodetect
/dev/sdb2 206848 411647 102400 fd Linux raid autodetect
/dev/sdb3 411648 616447 102400 fd Linux raid autodetect
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost ~]# partx -a /dev/sdb -- force the kernel to re-read the partition table
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
BLKPG: Device or resource busy
error adding partition 3
[root@localhost ~]# partx -a /dev/sdc -- force the kernel to re-read the partition table
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
BLKPG: Device or resource busy
error adding partition 3
[root@localhost ~]# partx -a /dev/sdd -- force the kernel to re-read the partition table
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
BLKPG: Device or resource busy
error adding partition 3
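
The BLKPG "Device or resource busy" errors above are usually harmless: they mean the kernel already registered those partitions when fdisk wrote the table. If the kernel genuinely has a stale view, partprobe (from the parted package) is an alternative way to trigger a re-read:

[root@localhost ~]# partprobe /dev/sdb -- ask the kernel to re-read sdb's partition table
[root@localhost ~]# grep sdb /proc/partitions -- confirm sdb1-sdb3 are visible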

5. Build a RAID 0 array from the first partitions of the second and third disks
[root@localhost ~]# mdadm --create /dev/md0 --raid-devices=2 --level=0 /dev/sd{b,c}1 -- RAID 0 from sdb1 and sdc1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# cat /proc/mdstat -- check the RAID status
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
224256 blocks super 1.2 512k chunks
unused devices: <none>
[root@localhost ~]# mkfs.ext4 /dev/md0 -- create the filesystem
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1024 blocks
56224 inodes, 224256 blocks
11212 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
28 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@localhost ~]# mount /dev/md0 /mnt/sdb
[root@localhost ~]#
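
To keep the array name stable across reboots, record it in mdadm's config file; mdadm --detail --scan prints ARRAY lines in the right format (run this again after creating md1 and md2 below):

[root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf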

6. Build a RAID 1 array from the second partitions of the second and third disks
[root@localhost ~]# mdadm --create /dev/md1 --raid-devices=2 --level=1 /dev/sd{b,c}2
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? (y/n) y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@localhost ~]# mkfs.ext4 /dev/md1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
28112 inodes, 112320 blocks
5616 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
14 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@localhost ~]# mount /dev/md1 /mnt/sdb1/
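
Besides /proc/mdstat, mdadm --detail shows per-member state, which is useful for confirming that the RAID 1 mirror has finished its initial sync:

[root@localhost ~]# mdadm --detail /dev/md1 -- per-device state, sync status, UUID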

7. Build a RAID 5 array from the third partitions of the second, third, and fourth disks
[root@localhost ~]# mdadm --create /dev/md2 --raid-devices=3 --level=5 /dev/sd{b,c,d}3
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
[root@localhost ~]# mkfs.ext4 /dev/md2
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1024 blocks
56224 inodes, 224256 blocks
11212 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
28 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@localhost ~]# mount /dev/md2 /mnt/sdb2/
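
A RAID 5 array can also be created with a hot spare from the start, so mdadm pulls the spare in automatically when a member fails. A sketch only; /dev/sde3 here is a hypothetical extra partition, not one of the disks used above:

[root@localhost ~]# mdadm --create /dev/md2 --raid-devices=3 --level=5 --spare-devices=1 /dev/sd{b,c,d}3 /dev/sde3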

8. Check the RAID status
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdd3[3] sdc3[1] sdb3[0]
224256 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid1 sdc2[1] sdb2[0]
112320 blocks super 1.2 [2/2] [UU]
md0 : active raid0 sdc1[1] sdb1[0]
224256 blocks super 1.2 512k chunks
unused devices: <none>
[root@localhost ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
ext4 6.9G 6.4G 166M 98% /
tmpfs tmpfs 262M 0 262M 0% /dev/shm
/dev/sda1 ext4 508M 48M 435M 10% /boot
/dev/md0 ext4 223M 6.4M 205M 3% /mnt/sdb
/dev/md1 ext4 112M 5.8M 100M 6% /mnt/sdb1
/dev/md2 ext4 223M 6.4M 205M 3% /mnt/sdb2
[root@localhost ~]#
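
To mount the three arrays automatically at boot (after recording them in /etc/mdadm.conf as in step 5), fstab entries along these lines would work:

/dev/md0 /mnt/sdb ext4 defaults 0 0
/dev/md1 /mnt/sdb1 ext4 defaults 0 0
/dev/md2 /mnt/sdb2 ext4 defaults 0 0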

9. RAID failure recovery and running LVM on top of RAID
[root@localhost ~]# mdadm -a /dev/md2 /dev/sdd1 -- add a partition to the RAID 5 array (it joins as a spare)
mdadm: added /dev/sdd1
[root@localhost ~]# mdadm -f /dev/md2 /dev/sdd3 -- mark /dev/sdd3 in the RAID 5 array as faulty
mdadm: set /dev/sdd3 faulty in /dev/md2
[root@localhost ~]# mdadm -r /dev/md2 /dev/sdd3 -- remove the faulty partition from the array
mdadm: hot removed /dev/sdd3 from /dev/md2
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdd1[4] sdc3[1] sdb3[0] -- the spare sdd1 has taken over for the failed sdd3
224256 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid1 sdc2[1] sdb2[0]
112320 blocks super 1.2 [2/2] [UU]
md0 : active raid0 sdc1[1] sdb1[0]
224256 blocks super 1.2 512k chunks
unused devices: <none>
[root@localhost ~]# pvcreate /dev/md2 -- initialize the RAID 5 device as an LVM physical volume (unmount /mnt/sdb2 first if md2 is still mounted)
Physical volume "/dev/md2" successfully created
[root@localhost ~]# vgcreate vg0 /dev/md2 -- build a volume group from the physical volume
Volume group "vg0" successfully created
[root@localhost ~]# lvcreate -L 150M -n test /dev/vg0 -- carve a logical volume out of the volume group
Rounding up size to full physical extent 152.00 MiB
Logical volume "test" created
[root@localhost ~]# mkfs.ext4 /dev/vg0/test -- create a filesystem on the logical volume
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1024 blocks
38912 inodes, 155648 blocks
7782 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
19 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@localhost ~]# mount /dev/vg0/test /mnt/sdb2/ -- mount the logical volume
[root@localhost ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
ext4 6.9G 6.4G 166M 98% /
tmpfs tmpfs 262M 0 262M 0% /dev/shm
/dev/sda1 ext4 508M 48M 435M 10% /boot
/dev/md0 ext4 223M 6.4M 205M 3% /mnt/sdb
/dev/md1 ext4 112M 5.8M 100M 6% /mnt/sdb1
/dev/mapper/vg0-test
ext4 155M 5.8M 141M 4% /mnt/sdb2
[root@localhost ~]#
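
One payoff of layering LVM on top of the array: the logical volume can be grown later without repartitioning. A sketch, assuming vg0 still has free extents:

[root@localhost ~]# lvextend -L +20M /dev/vg0/test -- grow the logical volume
[root@localhost ~]# resize2fs /dev/vg0/test -- grow the ext4 filesystem to match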
