
OpenVZ Study Notes

2016-06-20 22:46
OpenVZ is operating-system-level virtualization built on the Linux kernel. It lets one physical server run multiple isolated operating system instances, known as Virtual Private Servers (VPS) or Virtual Environments (VE).
Compared with full virtualization such as VMware or paravirtualization such as Xen, OpenVZ requires both the host OS and the guest OS to be Linux (although different virtual environments may run different Linux distributions). In exchange, OpenVZ claims a performance advantage: according to the OpenVZ website, the overhead versus a standalone server is only 1-3%.
OpenVZ is the basis of Virtuozzo, the proprietary product of SWsoft, Inc. OpenVZ itself is licensed under the GPLv2.

1. Installation

wget http://download.openvz.org/openvz.repo -O /etc/yum.repos.d/openvz.repo
rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ
yum search ovzkernel
yum install ovzkernel ovzkernel-devel vzctl vzquota


2. Configure IP forwarding and the kernel SysRq key, and disable SELinux
 vi /etc/sysctl.conf 

net.ipv4.ip_forward = 1
kernel.sysrq = 1


 vi /etc/sysconfig/selinux 

SELINUX=disabled


3. Reboot into the OpenVZ kernel

reboot


4. Check the kernel version

uname -r
2.6.32-042stab116.1


5. Check the service status

service vz status
OpenVZ is running...


6. Download a VE (OS) template

cd /vz/template/cache
wget http://download.openvz.org/template/precreated/centos-6-x86.tar.gz
7. Create a container

[root@lsn-linux ~]# vzctl create 101 --ostemplate centos-6-x86 --config basic
Creating container private area (centos-6-x86)
Performing postcreate actions
CT configuration saved to /etc/vz/conf/101.conf
Container private area was created
Note 1: the default container layout is ploop. Creation failed with the error below; changing VE_LAYOUT from ploop to simfs in /etc/vz/vz.conf resolved it.
Can't load ploop library: libploop.so: cannot open shared object file: No such file or directory
Please install ploop packages!
Alternatively, if you can't or don't want to use ploop, please
add --layout simfs option, or set VE_LAYOUT=simfs in /etc/vz/vz.conf
Creation of container private area failed
Note 2: the template and config can be given on the command line as above, or defaults can be set in vz.conf, after which `vzctl create 101` alone is enough.
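As a sketch of Note 2, defaults can be declared in /etc/vz/vz.conf. The variable names below are the standard vz.conf ones as I recall them; verify them against your own vz.conf, and treat the values as illustrative:

```shell
# /etc/vz/vz.conf (excerpt) -- defaults used when vzctl create is run
# without --ostemplate/--config
DEF_OSTEMPLATE="centos-6-x86"
CONFIGFILE="basic"
```

With these set, `vzctl create 101` creates the container from the centos-6-x86 template using the basic sample config.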

8. Container configuration

Start automatically when the host boots

[root@lsn-linux ~]# vzctl set 101 --onboot yes --save
CT configuration saved to /etc/vz/conf/101.conf
Set the hostname, IP address, and DNS server
[root@lsn-linux ~]# vzctl set 101 --hostname vm101.lsn.com --save
CT configuration saved to /etc/vz/conf/101.conf
[root@lsn-linux ~]# vzctl set 101 --ipadd 10.0.0.1 --save
CT configuration saved to /etc/vz/conf/101.conf
[root@lsn-linux ~]# vzctl set 101 --nameserver 192.168.1.1 --save
CT configuration saved to /etc/vz/conf/101.conf

Set the container root password

[root@lsn-linux ~]# vzctl set 101 --userpasswd root:123456
Changing password for user root.
passwd: all authentication tokens updated successfully.

Note: these values can also be set directly in /etc/vz/conf/101.conf.

9. Start the container

[root@lsn-linux ~]# vzctl start 101
Starting container...
Container is mounted
Adding IP address(es): 10.0.0.1
Setting CPU units: 1000
Container start in progress...


10. Run commands inside a container with vzctl exec

[root@lsn-linux ~]# vzctl exec 101 service sshd status
openssh-daemon (pid  531) is running...
[root@lsn-linux ~]# vzctl exec 101 hostname
vm101.lsn.com

Run a command in every container:

for CT in $(vzlist -H -o ctid); do echo "== CT $CT =="; vzctl exec $CT command; done

[root@single-coremail vz]# for CT in $(vzlist -H -o ctid); do echo "== CT $CT =="; vzctl exec $CT uptime; done
== CT 101 ==
02:22:23 up 20 min,  0 users,  load average: 0.00, 0.00, 0.00
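A variant of the loop above that targets only running containers (a sketch: the awk filter assumes the two-column output of `vzlist -H -o ctid,status`):

```shell
# Print the CTIDs of running containers from `vzlist -H -o ctid,status`
# output; pipe real vzlist output into this in place of the sample.
running_cts() {
  awk '$2 == "running" { print $1 }'
}

# Demonstrated here on captured sample output:
printf '101 running\n102 stopped\n' | running_cts
```

The result can feed the same `for CT in ...` loop, so stopped containers no longer produce `vzctl exec` errors.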


11. Check container status

vzctl status ctid

vzlist ctid

[root@localhost cache]# vzctl status 101
CTID 101 exist mounted running
[root@localhost cache]# cat /proc/vz/veinfo   # lists running containers
101     0    19    10.100.100.1
0     0   140
[root@localhost cache]# vzlist 101
CTID      NPROC STATUS    IP_ADDR         HOSTNAME
101         19 running   10.100.100.1    vm101.lsn.com

[root@localhost cache]# vzctl stop 101
Stopping container ...
Container was stopped
Container is unmounted
[root@localhost cache]# vzlist 101
CTID      NPROC STATUS    IP_ADDR         HOSTNAME
101          - stopped   10.100.100.1    vm101.lsn.com
[root@localhost cache]# cat /proc/vz/veinfo
0     0   140
[root@localhost cache]# vzctl status 101
CTID 101 exist unmounted down
[root@localhost cache]# vzctl restart 101
Restarting container
Starting container...
Container is mounted
Adding IP address(es): 10.100.100.1
arpsend: 10.100.100.1 is detected on another computer : 74:26:ac:3b:17:c2
vps-net_add WARNING: arpsend -c 1 -w 1 -D -e 10.100.100.1 eth0 FAILED
Setting CPU units: 1000
Container start in progress...
[root@localhost cache]# vzctl status 101
CTID 101 exist mounted running


12. Set a name alias

vzctl set ctid --name name --save

[root@localhost cache]# vzctl set 101 --name vm01 --save
Name vm01 assigned
CT configuration saved to /etc/vz/conf/101.conf
[root@localhost cache]# vzctl status vm01
CTID 101 exist mounted running
[root@localhost cache]# vzlist vm01
CTID      NPROC STATUS    IP_ADDR         HOSTNAME
101         19 running   10.100.100.1    vm101.lsn.com

Note: the same can be done via the config file:
1. Add NAME="vm01" to /etc/vz/conf/101.conf
2. ln --symbolic /etc/vz/conf/101.conf /etc/vz/names/vm01

13. Add a container description

vzctl set 101 --description "vm 101 owner - Mr.lin purpose - hosting the test server" --save

[root@localhost cache]# vzlist -o description 101
DESCRIPTION
vm 101 owner - Mr.lin purpose - hosting the test server


14. Enter a container
[root@single-coremail /]# vzctl enter 101
entered into CT 101
[root@vm101 /]# exit
logout
exited from CT 101


15. Migrate a container with vzmigrate

[root@localhost /]# vzmigrate 192.168.208.84 101
Locked CT 101
Starting migration of CT 101 to 192.168.208.84
Preparing remote node
Initializing remote quota
Syncing private
Stopping container
Syncing 2nd level quota
Starting container
Cleaning up

By default the source container is deleted after migration; `-r no` keeps it:
[root@single-coremail .ssh]# vzmigrate -r no  192.168.208.68 101
Locked CT 101
Starting migration of CT 101 to 192.168.208.68
Preparing remote node
Initializing remote quota
Syncing private
Stopping container
Syncing 2nd level quota
Starting container
Cleaning up
[root@localhost .ssh]# vzlist -a
CTID      NPROC STATUS    IP_ADDR         HOSTNAME
[root@localhost .ssh]# cp /etc/vz/conf/101.conf.migrated /etc/vz/conf/101.conf    # the config file must be copied back
[root@localhost .ssh]# vzlist -a
CTID      NPROC STATUS    IP_ADDR         HOSTNAME
101          - stopped   10.100.100.1    vm101.lsn.com
Live (online) migration:
[root@single-coremail vz]# vzmigrate --online 192.168.208.68 101
Locked CT 101
Starting live migration of CT 101 to 192.168.208.68
Preparing remote node
Initializing remote quota
Syncing private
Live migrating container...
Syncing 2nd level quota
Cleaning up

16. Clone a container locally
[root@localhost ~]# vzctl stop 101  # stop the source first
Stopping container ...
Container was stopped
Container is unmounted
[root@localhost ~]# cp -r /vz/private/101 /vz/private/102    # copy the private area
[root@localhost ~]# cp /etc/vz/conf/101.conf /etc/vz/conf/102.conf	# copy the config file
[root@localhost ~]# vzctl set 102 --hostname vm102.lsn.com --save   # change the hostname, IP, etc.
CT configuration saved to /etc/vz/conf/102.conf
[root@localhost ~]# vzctl set 102 --ipdel 10.100.100.1 --save
CT configuration saved to /etc/vz/conf/102.conf
[root@localhost ~]# vzctl set 102 --ipadd 10.100.100.2 --save
CT configuration saved to /etc/vz/conf/102.conf
[root@localhost ~]# vzctl start 102    # start the clone
Starting container...
Initializing quota ...
Container is mounted
Adding IP address(es): 10.100.100.2
arpsend: 10.100.100.2 is detected on another computer : 74:26:ac:3b:17:c2
vps-net_add WARNING: arpsend -c 1 -w 1 -D -e 10.100.100.2 eth0 FAILED
Setting CPU units: 1000
Container start in progress...
[root@localhost ~]# vzlist -a
CTID      NPROC STATUS    IP_ADDR         HOSTNAME
101          - mounted   10.100.100.1    vm101.lsn.com
102         19 running   10.100.100.2    vm102.lsn.com
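The manual clone steps above can be wrapped in a small script. This is a dry-run sketch: `run` only prints each command so the sequence can be reviewed (drop the echo to execute for real), and it assumes the simfs layout used in this article, since a ploop image would be cloned differently:

```shell
# Dry-run clone of CT 101 -> CT 102; `run` echoes instead of executing.
run() { echo "$@"; }

SRC=101; DST=102; NEWIP=10.100.100.2
run vzctl stop $SRC
run cp -r /vz/private/$SRC /vz/private/$DST
run cp /etc/vz/conf/$SRC.conf /etc/vz/conf/$DST.conf
run vzctl set $DST --hostname vm$DST.lsn.com --save
run vzctl set $DST --ipdel 10.100.100.1 --save
run vzctl set $DST --ipadd $NEWIP --save
run vzctl start $DST
```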


17. Destroy a container

[root@localhost .ssh]# vzctl destroy 101
Container is currently running. Stop it first.
[root@localhost .ssh]# vzctl stop 101
Stopping container ...
Container was stopped
Container is unmounted
[root@localhost .ssh]# vzctl destroy 101
Destroying container private area: /vz/private/101
Container private area was destroyed
[root@localhost .ssh]# vzlist -a
CTID      NPROC STATUS    IP_ADDR         HOSTNAME
[root@localhost .ssh]#


18. Suspend (checkpoint) and restore a container
[root@single-coremail vz]# vzctl chkpnt 101   # suspend
Setting up checkpoint...
suspend...
dump...
kill...
Checkpointing completed successfully
Container is unmounted
[root@single-coremail vz]# vzlist -a
CTID      NPROC STATUS    IP_ADDR         HOSTNAME
101          - suspended 10.100.100.1    vm101.lsn.com

[root@single-coremail vz]# vzctl restore 101   # restore
Restoring container ...
Container is mounted
undump...
Adding IP address(es): 10.100.100.1
arpsend: 10.100.100.1 is detected on another computer : 74:26:ac:3b:17:c2
vps-net_add WARNING: arpsend -c 1 -w 1 -D -e 10.100.100.1 eth0 FAILED
Setting CPU units: 1000
resume...
Container start in progress...
Restoring completed successfully
[root@single-coremail vz]# vzlist -a
CTID      NPROC STATUS    IP_ADDR         HOSTNAME
101         19 running   10.100.100.1    vm101.lsn.com

19. Resource management: disk parameters
DISK_QUOTA enables disk quotas and is on globally by default. With quotas disabled, a container can use the entire /vz partition of the hardware node.

It is generally better not to disable quotas globally, but to do so per container in its own config file. Stop the container before changing these parameters.

[root@single-coremail /]# grep DISK_QUOTA /etc/vz/vz.conf
DISK_QUOTA=yes

DISKSPACE: disk space, as a soft and a hard limit.

DISKINODES: number of inodes, as a soft and a hard limit.

QUOTATIME: when the soft and hard limits differ, the number of seconds a container may temporarily exceed the soft limit; the hard limit can never be exceeded.

# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="2G:2.2G"
DISKINODES="131072:144179"
QUOTATIME="0"

The same can be set from the command line:

vzctl set 101 --diskspace 1000000:1100000 --save   # values are the size divided by the block size (4096)
vzctl set 101 --diskinodes 90000:91000 --save
vzctl set 101 --quotatime 600 --save
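A sketch of the arithmetic from the note above, converting a target size in bytes to the soft/hard block counts passed to `--diskspace` (the 10% hard-limit headroom is an illustrative choice, not from the original):

```shell
# Convert bytes to blocks by dividing by the 4096-byte block size, as
# described above, then add 10% headroom for the hard limit.
size_to_blocks() {
  echo $(( $1 / 4096 ))
}

soft=$(size_to_blocks $(( 4 * 1024 * 1024 * 1024 )))   # 4 GB
hard=$(( soft + soft / 10 ))
echo "--diskspace ${soft}:${hard}"
```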

QUOTAUGIDLIMIT
QUOTAUGIDLIMIT is off by default, and it cannot be used unless DISK_QUOTA is enabled for that VE.

Taking a Red Hat-based container as an example, its /etc/passwd and /etc/group together contain about 80 entries, so the value must be greater than 80 and is commonly set to 100. If more users will be added inside the container, raise the value accordingly: once the number of entries reaches the limit, newly added users cannot own their own files. Note that bigger is not better here; an oversized setting wastes the VPS's memory, so just leave enough headroom.

QUOTAUGIDLIMIT="100"    # or: vzctl set 101 --quotaugidlimit 100 --save  (takes effect after a restart)
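A quick way to check how many UID/GID entries a template actually has (run it inside the container; counts vary by distribution, which is why this is only a sizing aid):

```shell
# QUOTAUGIDLIMIT must exceed the combined number of passwd and group
# entries, so count them and leave headroom.
entries=$(( $(wc -l < /etc/passwd) + $(wc -l < /etc/group) ))
echo "set QUOTAUGIDLIMIT above ${entries}"
```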

20. Resource management: CPU parameters

Check the available CPU:

[root@single-coremail /]# vzcpucheck
Current CPU utilization: 2000  # CPU time currently allocated (the containers' CPUUNITS plus VE0CPUUNITS)
Power of the node: 84803   # total CPU time of the physical node

VE0CPUUNITS, set in the global config file /etc/vz/vz.conf, defines the minimum CPU time guaranteed to VPS 0, i.e. the host itself. A value of 5-10% of the total CPU time is recommended.
# grep VE0CPUUNITS /etc/vz/vz.conf
VE0CPUUNITS=1000

CPUUNITS: the minimum guaranteed CPU time for a container

CPULIMIT: the percentage of CPU time the container may not exceed

/etc/vz/conf/101.conf
CPUUNITS="2500"
CPULIMIT="5"

or:
# vzctl set 101 --cpuunits 2500 --cpulimit 5 --save
1) Even when the node's CPUs are fully loaded, i.e. the current CPU utilization equals the power of the node, container 101 is still guaranteed about 3% (2500/84803) of the CPU time;
2) but even when the CPUs are idle, container 101 cannot get more than 5% of the CPU time;
3) in other words, under normal conditions container 101 gets between roughly 3% and 5%;
4) without a cpulimit, container 101 may exceed the 3% guarantee when no other VE is competing and resources allow, bounded only by the node's capacity.
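The guarantee in point 1 can be checked with a one-liner (a sketch: the numbers are the CPUUNITS and node power shown in this section):

```shell
# Guaranteed minimum share = container CPUUNITS / power of the node.
awk 'BEGIN { printf "%.1f%%\n", 2500 / 84803 * 100 }'
```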

[root@single-coremail /]# vzcpucheck
Current CPU utilization: 2000
Power of the node: 84803
[root@single-coremail /]# vzctl set 101 --cpuunits 2500 --cpulimit 5 --save
Setting CPU limit: 5
Setting CPU units: 2500
CT configuration saved to /etc/vz/conf/101.conf
[root@single-coremail /]# vzctl restart 101
Restarting container
Stopping container ...
Container was stopped
Container is unmounted
Starting container...
Container is mounted
Adding IP address(es): 10.100.100.1
arpsend: 10.100.100.1 is detected on another computer : 74:26:ac:3b:17:c2
vps-net_add WARNING: arpsend -c 1 -w 1 -D -e 10.100.100.1 eth0 FAILED
Setting CPU limit: 5
Setting CPU units: 2500
Container start in progress...
[root@single-coremail /]# vzcpucheck
Current CPU utilization: 3500
Power of the node: 84803


21. Resource management: memory parameters
# RAM
PHYSPAGES="0:32G"    # physical memory

# Swap
SWAPPAGES="0:8G"     # swap space

# vzctl set 101 --physpages 0:512M --swappages 0:512M --save

[root@single-coremail /]# vzctl exec 101 free -m
total       used       free     shared    buffers     cached
Mem:           256         42        213          0          0         18
-/+ buffers/cache:         24        231
Swap:            0          0          0
[root@single-coremail /]# vzctl set 101 --physpages 0:512M --swappages 0:512M --save
UB limits were set successfully
CT configuration saved to /etc/vz/conf/101.conf
[root@single-coremail /]# vzctl restart 101
Restarting container
Stopping container ...
Container was stopped
Container is unmounted
Starting container...
Container is mounted
Adding IP address(es): 10.100.100.1
arpsend: 10.100.100.1 is detected on another computer : 74:26:ac:3b:17:c2
vps-net_add WARNING: arpsend -c 1 -w 1 -D -e 10.100.100.1 eth0 FAILED
Setting CPU limit: 5
Setting CPU units: 2500
Container start in progress...
[root@single-coremail /]# vzctl exec 101 free -m
total       used       free     shared    buffers     cached
Mem:           512         19        492          0          0         11
-/+ buffers/cache:          8        503
Swap:          512          0        512
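Without a unit suffix, PHYSPAGES and SWAPPAGES are counted in 4 KB pages; the M/G suffixes used above are a vzctl convenience (my understanding of vzctl's units, so verify against your man page). A sketch of the conversion:

```shell
# Convert megabytes to 4 KB pages for PHYSPAGES/SWAPPAGES.
mb_to_pages() {
  echo $(( $1 * 1024 * 1024 / 4096 ))
}

echo "512M = $(mb_to_pages 512) pages"
```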