
Linux Memory Monitoring Tools

2010-07-13 11:57
This article is a repost. Original source:

http://www.opensolution.org.cn/archives/502.html

1. free



This tool mainly shows how much memory in the system is used and how much is available.



Linux uses caching heuristics to keep frequently used data in buffers and cached so that user programs can access system resources faster. In the free output, buffers holds file-system metadata, while cached holds the actual file contents.

From the free -k output (the screenshot in the original post):

The total physical memory (total) is 4144656K (about 4G).

The used physical memory (used on the Mem line) is 3871932K (about 3.8G). Note that this figure still includes the system's buffers, 152460K (about 152M), and cached, 2253060K (about 2.2G).

The used value on the -/+ buffers/cache line is 1466412K (about 1.4G), i.e. Mem used (3871932K) - Mem buffers (152460K) - Mem cached (2253060K) = 1466412K (about 1.4G). So the physical memory actually available for allocation (free on the -/+ buffers/cache line) is 2678244K (about 2.6G).
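These relationships can be checked directly against the command output; a minimal sketch, assuming the procps free of that era with its -/+ buffers/cache line:

free -k
# Mem:                total = used + free   (used here still contains buffers and cached)
# -/+ buffers/cache:  used  = Mem used - buffers - cached
#                     free  = Mem free + buffers + cached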



Shared: the man page says this column should be ignored (man free: "The shared memory column should be ignored; it is obsolete.").


The free value on the Mem line is 274220K (about 274M). This free value is actually bounded: it cannot drop below min_free_kbytes.

min_free_kbytes is used to compute the reserved pages for each lowmem zone (on 32-bit x86, the zone covering physical memory between 0 and 896MB): "This is used to force the Linux VM to keep a minimum number of kilobytes free. The VM uses this number to compute a pages_min value for each lowmem zone in the system. Each lowmem zone gets a number of reserved free pages based proportionally on its size."

The calculation can be found in mm/page_alloc.c: min_free_kbytes = sqrt(lowmem_kbytes * 16).

On this system, lowmem is 872656KB:

[root@crm_10 /root]grep LowTotal /proc/meminfo
LowTotal: 872656

so min_free_kbytes = sqrt(872656 * 16) ≈ 3736.
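A quick way to reproduce this calculation on a machine (a sketch; it assumes the kernel exposes LowTotal in /proc/meminfo, which is the case on 32-bit 2.6-era kernels):

low=$(awk '/^LowTotal/ {print $2}' /proc/meminfo)   # lowmem in KB
awk -v l="$low" 'BEGIN {printf "estimated min_free_kbytes: %d\n", sqrt(l * 16)}'
cat /proc/sys/vm/min_free_kbytes                     # the value the kernel actually uses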

2. ps, top



These two tools are quite similar when it comes to monitoring memory, so they are covered together:

VIRT in top is equivalent to VSZ in ps: the total virtual memory used by the task (virtual memory spans both physical RAM and the swap partition), including all code, data, shared libraries, and pages that have been swapped out. /* The total amount of virtual memory used by the task. It includes all code, data and shared libraries plus pages that have been swapped out. */

RES in top is equivalent to RSS in ps: the total physical memory used by the task that has not been swapped out. /* resident set size, the non-swapped physical memory that a task has used */

%MEM in top: the task's RES as a percentage of total physical memory. /* Memory usage (RES): A task's currently used share of available physical memory. */
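For per-process inspection, something like the following works (a sketch; the --sort option assumes GNU/procps ps):

ps -eo pid,vsz,rss,pmem,comm --sort=-rss | head   # VSZ and RSS in KB, pmem is %MEM
top -b -n 1 | head -20                            # one batch-mode snapshot of top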

3. vmstat



The values it reports are similar to those shown by free. As a rule of thumb, the si/so values under the swap columns should not exceed 1024.

Swap
si: Amount of memory swapped in from disk (/s).
so: Amount of memory swapped to disk (/s).
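To watch these columns over time (an illustrative invocation; the interval and count are arbitrary):

vmstat 2 5   # report every 2 seconds, 5 times; check the si/so columns under swap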

4. meminfo information in the VFS (/proc/meminfo):





Dirty: the amount of data that has been written to memory but not yet synced to backing storage (disk).

Slab: a kernel memory allocator introduced because page-sized allocation is unsuitable for some calls that only need small amounts of memory; it is built around the idea of object pools (caches).

Vmalloc: an allocation mechanism (tracked via a linked list) that provides virtually contiguous memory backed by physically non-contiguous pages.

CommitLimit: the amount of virtual memory that can currently be allocated to programs (CommitLimit is only meaningful when vm.overcommit_memory is set to 2).

CommitLimit: Based on the overcommit ratio ('vm.overcommit_ratio'), this is the total amount of memory currently available to be allocated on the system. This limit is only adhered to if strict overcommit accounting is enabled (mode 2 in 'vm.overcommit_memory').

The CommitLimit is calculated with the following formula:

CommitLimit = ('vm.overcommit_ratio' * Physical RAM) + Swap

For example, on a system with 1G of physical RAM and 7G of swap with a `vm.overcommit_ratio` of 30 it would yield a CommitLimit of 7.3G.

For more details, see the memory overcommit documentation in vm/overcommit-accounting.
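To check the current settings and the resulting limit (a sketch for kernels of that era; the values shown in the comments are only examples):

cat /proc/sys/vm/overcommit_memory     # 2 = strict overcommit accounting
cat /proc/sys/vm/overcommit_ratio      # percentage of RAM counted toward the limit, e.g. 50
grep -E '^(CommitLimit|Committed_AS)' /proc/meminfo
# With mode 2: CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100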

Committed_AS: the total virtual memory currently allocated to programs (including memory that has been allocated to processes but not yet used).

Committed_AS: The amount of memory presently allocated on the system. The committed memory is a sum of all of the memory which has been allocated by processes, even if it has not been "used" by them as of yet. A process which malloc()'s 1G of memory, but only touches 300M of it will only show up as using 300M of memory even if it has the address space allocated for the entire 1G. This 1G is memory which has been "committed" to by the VM and can be used at any time by the allocating application. With strict overcommit enabled on the system (mode 2 in 'vm.overcommit_memory'), allocations which would exceed the CommitLimit (detailed above) will not be permitted. This is useful if one needs to guarantee that processes will not fail due to lack of memory once that memory has been successfully allocated.

Hugepagesize: On x86, Linux normally uses a default page size of 4KB. Some applications manage memory themselves (databases, Java VMs, etc.), so larger pages can be used instead (on 32-bit IA-32: 4K and 4M, or 2M in PAE mode; on IA-64: 4K, 8K, 64K, 256K, 1M, 4M, 16M, 256M). With larger pages, each TLB entry (the TLB is a cache that maps linear addresses to physical addresses) covers more memory, which reduces the work of translating linear addresses to physical ones. Huge pages can be tuned with the vm.hugetlb_shm_group and vm.nr_hugepages parameters; for details see /usr/share/doc/kernel-doc-`uname -r|cut -d- -f1`/Documentation/vm/hugetlbpage.txt (if kernel-doc is installed on your machine).
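A hedged example of reserving and inspecting huge pages (the page count and the group id are purely illustrative):

sysctl -w vm.nr_hugepages=128          # reserve 128 huge pages
sysctl -w vm.hugetlb_shm_group=1001    # gid allowed to use hugetlb shared memory
grep Huge /proc/meminfo                # HugePages_Total / HugePages_Free / Hugepagesize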

The intent of this file is to give a brief summary of hugetlbpage support in the Linux kernel. This support is built on top of multiple page size support that is provided by most of modern architectures. For example, IA-32 architecture supports 4K and 4M (2M in PAE mode) page sizes, IA-64 architecture supports multiple page sizes 4K, 8K, 64K, 256K, 1M, 4M, 16M, 256M. A TLB is a cache of virtual-to-physical translations. Typically this is a very scarce resource on processor. Operating systems try to make best use of limited number of TLB resources. This optimization is more critical now as bigger and bigger physical memories (several GBs) are more readily available.
内容来自用户分享和网络整理,不保证内容的准确性,如有侵权内容,可联系管理员处理 点击这里给我发消息
标签: