
KVM Virtualization

2015-08-12 21:01
http://blog.chinaunix.net/xmlrpc.php?r=blog/article&uid=20776139&id=3279579

KVM notes!

Learning how KVM works:

Linux as a Hypervisor:

By adding virtualization capabilities to a standard Linux kernel, we can enjoy all the fine-tuning work that has gone (and is going) into the kernel, and bring that benefit into a virtualized environment. Under this model, every virtual machine is a regular Linux process scheduled by the standard Linux scheduler. Its memory is allocated by the Linux memory allocator, with its knowledge of NUMA and integration into the scheduler.

A normal Linux process has two modes of execution: kernel and user. KVM adds a third mode: guest mode (which has its own kernel and user modes, but these do not interest the hypervisor at all).

The three working modes (kernel, user, and guest) are shown in a figure in the original post.

kvm Components:

The simplicity of kvm is exemplified by its structure; there are two components:

● A device driver for managing the virtualization hardware; this driver exposes its capabilities via a character device /dev/kvm

● A user-space component for emulating PC hardware; this is a lightly modified QEMU process. QEMU is a well-known processor emulator written by French computer wizard Fabrice Bellard.

See the attachment!



What is the difference between kvm and QEMU?  ## how KVM and QEMU differ

Qemu uses emulation; kvm uses processor extensions for virtualization.

What is the difference between kvm and Xen?   ## how Xen and KVM differ

Xen is an external hypervisor; it assumes control of the machine and divides resources among guests. On the other hand, kvm is part of Linux and uses the regular Linux scheduler and memory management. This means that kvm is much smaller and simpler to use.

On the other hand, Xen supports both full virtualization and a technique called paravirtualization, which allows better performance for modified guests. kvm does not at present support paravirtualization.

The approach KVM takes is to turn the Linux kernel into a hypervisor simply by loading a kernel module. The module exports a device named /dev/kvm, which enables a guest mode in the kernel (in addition to the traditional kernel and user modes). With /dev/kvm, a VM gets an address space of its own, independent of the kernel's and of any other running VM's. Devices in the device tree (/dev) are common to all user-space processes, but each process that opens /dev/kvm sees a different mapping (to support isolation between VMs). Once KVM is installed, you can boot guest operating systems from user space; each guest OS is a single process of the host operating system (the hypervisor).

KVM stands for Kernel-based Virtual Machine, a virtualization solution first developed in October 2006 by the Israeli company Qumranet. Linux kernel 2.6.20, released in February 2007, was the first kernel to include KVM. Adding KVM to the Linux kernel was an important milestone in Linux development: it was the first virtualization technology merged into the mainline kernel.

KVM adds virtualization capabilities to the standard Linux kernel, so virtualization benefits from all the tuning already in the kernel. In the KVM model, every virtual machine is a standard process managed by the Linux scheduler, and you start guest operating systems from user space.

A normal Linux process has two execution modes: kernel and user. KVM adds a third: guest mode (which has its own kernel and user modes).

As a VMM, KVM has two parts: the KVM module running in kernel mode and the QEMU component running in user mode. Here kernel mode and user mode actually refer to privilege levels 0 and 3 of VMX root mode; KVM calls the mode the virtual machine itself runs in "guest mode". Working together, KVM manages CPU and memory access while QEMU emulates the hardware (disk, sound card, USB, and so on). qemu-kvm is a QEMU branch modified and optimized specifically for KVM.

 

All of KVM's I/O virtualization is done with the help of QEMU, which greatly reduces the implementation effort. This is one of KVM's advantages! ## (a figure in the original post illustrates this)

KVM itself is a Linux kernel module that you can load with modprobe. Only after the module is loaded can you create virtual machines with other tools, but the module alone is far from enough, because users cannot drive a kernel module directly: a user-space tool is also required. For that tool the developers chose the mature open-source virtualization software QEMU. QEMU is itself virtualization software; its distinguishing feature is that it can emulate different CPUs. For example, on an x86 CPU it can emulate a Power CPU and use it to run programs compiled for Power. KVM took part of QEMU, modified it slightly, and turned it into the user-space tool that controls KVM.





   

There are two ways to connect a KVM guest to the network:

User Networking: the simple way to let the virtual machine reach the host, the Internet, or resources on the local network; however, the guest cannot be reached from the network or from other guests, and performance needs significant tuning.

Virtual Bridge: more complex to set up than user networking, but once configured, communication between the guest and the Internet, and between the guest and the host, is easy.
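For the bridged approach, a minimal sketch of the host-side files on RHEL/CentOS (assuming the physical NIC is eth0 and the bridge is named br0, matching the transcripts later in these notes; adjust BOOTPROTO/addresses for your network):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0  (enslave the NIC to the bridge)
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0  (the bridge carries the host IP)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
```

After writing both files, run "service network restart" for the bridge to come up.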



After a guest is started, an extra tap0 virtual network device appears on the host; this is the TAP device that qemu-kvm creates for the guest. Inspecting the bridge shows that tap0 has joined the bridge br0; the guest reaches the outside network through the bridge.
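You can verify this on the host (a sketch, assuming bridge-utils is installed and the bridge is named br0; the tap device may be called tap0 or vnet0 depending on the setup):

```shell
# list bridges and their member interfaces; the tap device that
# qemu-kvm created for the guest should appear under br0
brctl show

# inspect the tap device itself
ip link show tap0
```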

1:[root@h4 ~]# sh install.sh 

ERROR    Guest name 'linuxvm02' is already in use.

ERROR    A name is required for the virtual machine. (use --prompt to run interactively)

[root@h4 ~]# cat install.sh 

virt-install -n linuxvm02 -r 1024 --vcpus=1 -l http://192.168.1.66 --nographics --os-type=linux --os-variant=rhel5 -f /data/linuxvm02.img -s 20 -w bridge:br0   --extra-args='console=tty0 console=ttyS0,115200n8'   --connect qemu:///system

Clearly linuxvm02 has already been installed. If the previous installation did not succeed, delete the virtual machine and re-run the script!

There are two ways to delete a virtual machine:

1.1: Graphically: on the physical machine, run virt-manager, find the VM to delete, shut it down first if it is in the active state, then click the delete button.

1.2: virsh undefine linuxvm02 removes the configuration file from /etc/libvirt/qemu; then delete the img file belonging to linuxvm02 by hand.
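The command-line variant as one sketch (the name and image path are taken from install.sh above):

```shell
# completely remove a guest: undefine it, then delete the disk image by hand
vm=linuxvm02
img=/data/linuxvm02.img

virsh destroy  "$vm" 2>/dev/null || true  # power it off first if still running
virsh undefine "$vm"                      # deletes /etc/libvirt/qemu/linuxvm02.xml
rm -f "$img"                              # the image is not removed automatically
```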

2: Installing KVM on CentOS

Installation on CentOS 5:

[root@h4 CentOS]# yum grouplist | grep  -i kvm

   KVM

[root@h4 CentOS]# cat /etc/issue

CentOS release 5.8 (Final)

Kernel \r on an \m

So on CentOS 5 you need to run:

# yum -y groupinstall KVM   ## install the KVM package group

Required Packages

You must install the following packages:

kmod-kvm : kvm kernel module(s)

kvm : Kernel-based Virtual Machine

kvm-qemu-img : Qemu disk image utility

kvm-tools : KVM debugging and diagnostics tools

python-virtinst : Python modules and utilities for installing virtual machines

virt-manager : Virtual Machine Manager (GUI app, to install and configure VMs)

virt-viewer: Virtual Machine Viewer (another lightweight app to view VM console and/or install VMs)

bridge-utils : Utilities for configuring the Linux Ethernet bridge (this is recommended for KVM networking)
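After the group install, a quick check that the key packages actually landed (a sketch; the package names are the ones listed above):

```shell
# rpm -q prints the installed version, or an "is not installed" message
for p in kmod-kvm kvm kvm-qemu-img kvm-tools python-virtinst \
         virt-manager virt-viewer bridge-utils; do
    rpm -q "$p" || echo "MISSING: $p"
done
```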

 [root@h4 CentOS]# yum groupinfo KVM

Loaded plugins: fastestmirror, security

Setting up Group Process

Loading mirror speeds from cached hostfile

 * base: centos.ustc.edu.cn

 * extras: centos.ustc.edu.cn

 * updates: centos.ustc.edu.cn

Group: KVM

 Description: Virtualization Support with KVM

 Mandatory Packages:

   etherboot-zroms

   etherboot-zroms-kvm

   kmod-kvm

   kvm

   kvm-qemu-img

 Default Packages:

   Virtualization-en-US

   libvirt

   virt-manager

   virt-viewer

   virt-who

 Optional Packages:

   etherboot-pxes

   etherboot-roms

   etherboot-roms-kvm

   gpxe-roms-qemu

   iasl

   kvm-tools

   libcmpiutil

   libvirt-cim

   qspice

   qspice-libs-devel

A well-written article by a foreign author:

A Note About libvirt:

libvirt is an open source API and management tool for managing platform virtualization. It is used to manage Linux KVM and Xen virtual machines through graphical interfaces such as Virtual Machine Manager and higher level tools such as oVirt. See the official website for more information.

A Note About QEMU:

QEMU is a processor emulator that relies on dynamic binary translation to achieve a reasonable speed while being easy to port on new host CPU architectures. When used as a virtualizer, QEMU achieves near native performances by executing the guest code directly on the host CPU. QEMU supports virtualization when executing under the Xen hypervisor or using the KVM kernel module in Linux. When using KVM, QEMU can virtualize x86, server and embedded PowerPC, and S390 guests. See the official website for more information.

A Note About Virtio Drivers:

Virtio provides paravirtualized drivers for KVM/Linux. With it you can run multiple virtual machines running unmodified Linux or Windows. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc. According to Red Hat:

Para-virtualized drivers enhance the performance of fully virtualized guests. With the para-virtualized drivers guest I/O latency decreases and throughput increases to near bare-metal levels. It is recommended to use the para-virtualized drivers for fully virtualized guests running I/O heavy tasks and applications.

Host Operating System

Your main operating system, such as CentOS or RHEL, is known as the host operating system. KVM is a Linux kernel module that enables a modified QEMU program to use hardware virtualization. You only need to install KVM on the host operating system.

KVM Domains

It is nothing but a guest operating system running under the host operating system. Each KVM domain must have a unique name and an ID (assigned by the system).

Guest Operating Systems

KVM supports various guest operating systems such as

MS-Windows 2008 / 2000 / 2003 Server

MS-Windows 7 / Vista / XP

FreeBSD

OpenBSD

Sun Solaris

Various Linux distributions.

NetBSD

MINIX

QNX

MS DOS

FreeDOS

Haiku

Amiga Research OS

Important Configuration And Log Files (Directories) Location

The following files are required to manage and debug KVM problems:

/etc/libvirt/ - Main configuration directory.

/etc/libvirt/qemu/ - Virtual machine configuration directory. All xml files regarding VMs are stored here. You can edit them manually or via virt-manager.

/etc/libvirt/qemu/networks/ - Networking for your KVM including default NAT. NAT is only recommended for small setups or desktops. I strongly suggest you use bridge-based networking for performance.

/etc/libvirt/qemu/networks/default.xml - The default NAT configuration used by NAT device virbr0.

/var/log/libvirt/ - The default log file directory. All VM specific logs files are stored here.

/etc/libvirt/libvirtd.conf - Master libvirtd configuration file.

/etc/libvirt/qemu.conf - Master configuration file for the QEMU driver.

TCP/UDP Ports

By default libvirt does not open any TCP or UDP ports. However, you can configure this by editing the /etc/libvirt/libvirtd.conf file. Also, VNC is configured to listen on 127.0.0.1 by default. To make it listen on all public interfaces, edit the /etc/libvirt/qemu.conf file.
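A sketch of the relevant settings (the option names below are the ones shipped commented-out in the stock libvirtd.conf and qemu.conf; restart libvirtd afterwards with /etc/init.d/libvirtd restart):

```shell
# /etc/libvirt/libvirtd.conf  (allow remote TCP connections, off by default)
listen_tcp = 1
tcp_port = "16509"
# note: the daemon must also be started with --listen
# (set LIBVIRTD_ARGS="--listen" in /etc/sysconfig/libvirtd)

# /etc/libvirt/qemu.conf  (make VNC listen on all interfaces, not just 127.0.0.1)
vnc_listen = "0.0.0.0"
```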

Architecture:



(Fig. 01: our sample server setup)

Where,

Host Configuration

OS - RHEL / CentOS v5.5 is our host operating system.

Host has two interfaces, eth0 and eth1

LAN - eth0 with private ip

Internet - eth1 with public IPv4/IPv6 address.

Disk - 4 x 73GB 15k SAS disks in hardware RAID 10. All VMs are stored on the same server (later I will cover SAN/NFS/NAS configuration with live migration).

RAM - 16GB ECC

CPU - two dual-core Intel Xeon L5320 CPUs @ 1.86GHz with VT enabled in the BIOS.

Virtual Machine Configuration

Bridged mode networking (eth0 == br0 and eth1 == br1) with full access to both LAN and Internet.

Accelerator virtio drivers used for networking (model=virtio)

Accelerator virtio drivers for disk (if=virtio) and disk will show up as /dev/vd[a-z][1-9] in VM.

Various virtual machines running different guest operating systems as per requirements.
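As a sketch, those virtio settings map onto virt-install options like this (using the newer --disk/--network syntax; the older python-virtinst on RHEL 5 uses -f/-s/-w as in install.sh above; names, paths, and the install URL are placeholders):

```shell
# bus=virtio    -> the disk shows up as /dev/vda inside the guest
# model=virtio  -> paravirtualized network card
virt-install \
    --name vm01 --ram 1024 --vcpus=1 \
    --disk path=/data/vm01.img,size=20,bus=virtio \
    --network bridge=br0,model=virtio \
    --location http://192.168.1.66 --nographics \
    --extra-args='console=tty0 console=ttyS0,115200n8'
```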

 

3: Verify that the KVM modules are loaded

After booting you should see that the KVM kernel modules have been loaded:

[root@h5 ~]# lsmod |grep kvm 

kvm_intel 85256 1 

kvm 224800 2 ksm,kvm_intel

You can check whether KVM is actually running with the following command:

[root@h5 ~]# virsh -c qemu:///system list 

Id Name State
----------------------------------

4: Make a virtual machine start automatically with the system:

[root@host1 qemu]# chkconfig --list | grep libvirtd  ## confirm libvirtd itself starts at boot

libvirtd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

 

[root@host1 ~]# virsh autostart linuxvm02

Domain linuxvm02 marked as autostarted

[root@host1 ~]# cd /etc/libvirt/

[root@host1 libvirt]# ls

cim libvirtd.conf nwfilter qemu qemu.conf storage

[root@host1 libvirt]# cd qemu

[root@host1 qemu]# ls

autostart kvm01.xml linuxvm02.xml networks

[root@host1 qemu]# cd autostart/

[root@host1 autostart]# ls  ## after the command above, a symlink appears in the autostart directory!

linuxvm02.xml

[root@host1 autostart]# ll

total 4

lrwxrwxrwx 1 root root 31 Aug 9 16:47 linuxvm02.xml -> /etc/libvirt/qemu/linuxvm02.xml

After rebooting the machine, linuxvm02 started with the system as expected!

To stop the VM we just configured from starting with the system:

[root@host1 qemu]# cd autostart/

[root@host1 autostart]# ls  ## the file is clearly there

linuxvm02.xml

[root@host1 autostart]# ll

total 4

lrwxrwxrwx 1 root root 31 Aug  9 16:47 linuxvm02.xml -> /etc/libvirt/qemu/linuxvm02.xml

[root@host1 autostart]# virsh autostart linuxvm02 --disable

Domain linuxvm02 unmarked as autostarted 

[root@host1 autostart]# ls ## after this command the linuxvm02.xml symlink has been deleted

[root@host1 autostart]# ll

5: Every virtual machine started under KVM shows up as an extra qemu-kvm process in top!

Explanation: under KVM's model, every virtual machine is a regular Linux process scheduled by the standard Linux scheduler.



When only the kvm01 virtual machine was running there was a single qemu-kvm process; after I also started linuxvm02, top showed two qemu-kvm processes!
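You can confirm the one-process-per-VM model from the host; libvirt passes the guest name to qemu-kvm via its -name option, so something like this works (a sketch; the [q] bracket keeps grep from matching its own process):

```shell
# list the qemu-kvm process for each running guest, with its -name value
ps -ef | grep '[q]emu-kvm'

# count running guests by counting processes
ps -ef | grep -c '[q]emu-kvm'
```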

6: Viewing VM and node information:

[root@host1 autostart]# virsh dominfo linuxvm02

Id: 1

Name: linuxvm02

UUID: d340f42e-8611-52c5-949c-8013181e73d4

OS Type: hvm

State: running

CPU(s): 1

CPU time: 251.7s

Max memory: 1048576 kB

Used memory: 1048576 kB

Persistent: yes

Autostart: disable

[root@host1 autostart]# virsh nodeinfo 

CPU model: x86_64

CPU(s): 2

CPU frequency: 1603 MHz

CPU socket(s): 1

Core(s) per socket: 1

Thread(s) per core: 2

NUMA cell(s): 1

Memory size: 4008652 kB

[root@host1 autostart]#

7:[root@host1 ~]# virsh resume linuxvm02

error: Failed to resume domain linuxvm02

error: Timed out during operation: cannot acquire state change lock

Cause: most likely linuxvm02 is already in the middle of starting and simply has not finished booting yet; just wait a few minutes! If the operation really is stuck, restart libvirtd:

Log in as the root user and kill libvirtd.

killall -9 libvirtd

Remove the libvirtd pid file.

rm /var/run/libvirtd.pid

Restart libvirtd.

/etc/init.d/libvirtd  restart
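The three recovery steps above as one sketch (destructive: this kills the daemon, so only use it when waiting did not help):

```shell
# force-restart libvirtd to clear a stuck state-change lock
killall -9 libvirtd              # kill the daemon
rm -f /var/run/libvirtd.pid      # remove the stale pid file
/etc/init.d/libvirtd restart     # bring it back up
```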

8: Each time a virtual machine starts, an extra vnet* network interface appears on the host, for example:

[root@host1 ~]# ifconfig

br0 Link encap:Ethernet HWaddr 00:E0:6F:12:AC:8A

......

eth0 Link encap:Ethernet HWaddr 00:E0:6F:12:AC:8A

......

lo Link encap:Local Loopback

......

virbr0 Link encap:Ethernet HWaddr 00:00:00:00:00:00

......

[root@host1 ~]# virsh start kvm01

Domain kvm01 started

[root@host1 ~]# virsh start linuxvm02

Domain linuxvm02 started

[root@host1 ~]# ifconfig

br0 Link encap:Ethernet HWaddr 00:E0:6F:12:AC:8A

......

eth0 Link encap:Ethernet HWaddr 00:E0:6F:12:AC:8A

......

lo Link encap:Local Loopback

.....

virbr0 Link encap:Ethernet HWaddr 00:00:00:00:00:00

.....

vnet0 Link encap:Ethernet HWaddr FE:52:00:7F:0D:CB

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:500

RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

vnet1 Link encap:Ethernet HWaddr FE:52:00:3B:54:33

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:500

RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

Clearly two new virtual NICs have appeared on the physical machine! In short, every virtual machine gets a corresponding vnet interface on the host!

9:error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': 

[root@localhost network-scripts]# virsh list

error: Failed to reconnect to the hypervisor

error: no valid connection

error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory

        

[root@localhost network-scripts]# lsmod  | grep kvm

kvm_intel              50380  0 

kvm                   305081  1 kvm_intel

[root@localhost network-scripts]# /etc/init.d/libvirt

libvirtd        libvirt-guests  

[root@localhost network-scripts]# /etc/init.d/libvirtd  start

Starting libvirtd daemon:                                  [  OK  ]

[root@localhost network-scripts]# virsh list

 Id    Name                           State

----------------------------------------------------

[root@localhost network-scripts]# chkconfig  --list | grep  libvirt

libvirt-guests  0:off   1:off   2:on    3:on    4:on    5:on    6:off

libvirtd        0:off   1:off   2:off   3:on    4:on    5:on    6:off

[root@localhost network-scripts]# 

10: Sometimes 'virsh shutdown domain' does not work from the physical host

Why doesn't 'shutdown' seem to work?

If you are using Xen HVM or QEMU/KVM, ACPI must be enabled in the guest for a graceful shutdown to work. To check if ACPI is enabled, run:

virsh dumpxml $your-vm-name | grep acpi

If nothing is printed, ACPI is not enabled for your machine. Use 'virsh edit' to add <acpi/> under the <features> element of the domain XML.

If your VM is running Linux, the VM additionally needs to be running acpid to receive the ACPI events.
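Putting the steps above together for the linuxvm02 guest from earlier (a sketch):

```shell
# 1) check whether ACPI is enabled in the guest definition
virsh dumpxml linuxvm02 | grep acpi     # should print <acpi/>

# 2) if nothing is printed, run 'virsh edit linuxvm02' and add
#    <acpi/> inside the <features> element

# 3) inside a Linux guest, acpid must be running so the power-button
#    event sent by 'virsh shutdown' is actually handled
service acpid status || service acpid start
```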
