
Linux Memory Management (from Understanding the Linux Kernel)

2013-12-30
Linux memory management relies on the 80x86 hardware segmentation and paging circuitry to translate logical addresses into physical addresses.

Part of physical memory is permanently mapped for the kernel's use; it holds the kernel's code and the kernel's static data structures. These contents are mapped permanently because they are referenced continuously throughout the lifetime of the operating system: maintaining their positions in physical memory through dynamic allocation and translation would cost far too much CPU time.

This can be understood as a strategy that trades space for time.

The rest of physical memory is dynamic memory.

Dynamic memory is a precious resource, needed not only by user-mode processes but also by the kernel itself.

Page Frame Management

Memory Area Management

These are the two techniques for managing physically contiguous memory areas.

Noncontiguous Memory Area Management

This is the technique for handling physically noncontiguous memory areas.
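As a point of reference (a minimal sketch of my own; contig_vs_noncontig is a made-up name, not from the original article): kmalloc() hands out physically contiguous memory, while vmalloc() builds an area that is contiguous only in the kernel's virtual address space.

#include <linux/slab.h>
#include <linux/vmalloc.h>

static void contig_vs_noncontig(void)
{
	/* kmalloc(): physically contiguous page frames. */
	void *a = kmalloc(64 * 1024, GFP_KERNEL);

	/* vmalloc(): possibly scattered page frames stitched together
	 * through page tables into a contiguous virtual range. */
	void *b = vmalloc(64 * 1024);

	kfree(a);	/* kfree() tolerates NULL */
	vfree(b);	/* vfree() tolerates NULL */
}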

Page Descriptor

The kernel must keep track of the current state of every page.

For example, it must be able to tell who is currently using a given physical page:

1. a user-mode process

2. kernel code

3. kernel data structures

Likewise, it must be able to tell which state a page frame allocated from dynamic memory is in:

1. free

2. holding data of a user-mode process

3. holding a software cache

4. holding dynamically allocated kernel data structures

5. holding buffered data of a device driver

6. holding code of a kernel module

The descriptor of each page is stored as an instance of struct page, and all these instances are kept in the mem_map array.

Each struct page is 32 bytes, so roughly 1% (32 / 4096, about 0.8%) of physical memory goes to this array; with 4 GB of RAM, for example, the 2^20 page descriptors occupy 32 MB.

The kernel provides the following macros for locating a page's descriptor:

#define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)

/*
 * supports 3 memory models.
 */
#if defined(CONFIG_FLATMEM)

#define __pfn_to_page(pfn)	(mem_map + ((pfn) - ARCH_PFN_OFFSET))
#define __page_to_pfn(page)	((unsigned long)((page) - mem_map) + \
				 ARCH_PFN_OFFSET)

#elif defined(CONFIG_DISCONTIGMEM)

#define __pfn_to_page(pfn)						\
({	unsigned long __pfn = (pfn);					\
	unsigned long __nid = arch_pfn_to_nid(__pfn);			\
	NODE_DATA(__nid)->node_mem_map + arch_local_page_offset(__pfn, __nid); \
})

#define __page_to_pfn(pg)						\
({	struct page *__pg = (pg);					\
	struct pglist_data *__pgdat = NODE_DATA(page_to_nid(__pg));	\
	(unsigned long)(__pg - __pgdat->node_mem_map) +			\
	 __pgdat->node_start_pfn;					\
})

#elif defined(CONFIG_SPARSEMEM_VMEMMAP)

/* memmap is virtually contiguous. */
#define __pfn_to_page(pfn)	(vmemmap + (pfn))
#define __page_to_pfn(page)	(unsigned long)((page) - vmemmap)

#elif defined(CONFIG_SPARSEMEM)
/*
 * Note: section's mem_map is encoded to reflect its start_pfn.
 * section[i].section_mem_map == mem_map's address - start_pfn;
 */
#define __page_to_pfn(pg)						\
({	struct page *__pg = (pg);					\
	int __sec = page_to_section(__pg);				\
	(unsigned long)(__pg - __section_mem_map_addr(__nr_to_section(__sec))); \
})

#define __pfn_to_page(pfn)						\
({	unsigned long __pfn = (pfn);					\
	struct mem_section *__sec = __pfn_to_section(__pfn);		\
	__section_mem_map_addr(__sec) + __pfn;				\
})
#endif /* CONFIG_FLATMEM/DISCONTIGMEM/SPARSEMEM */

#define page_to_pfn __page_to_pfn
#define pfn_to_page __pfn_to_page
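To see how these macros fit together, here is a minimal sketch of my own (pfn_round_trip is a made-up name, not from the original article) that converts a directly mapped kernel address into its page descriptor and page frame number:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/printk.h>

static void pfn_round_trip(void)
{
	unsigned long addr = __get_free_page(GFP_KERNEL);
	struct page *page;
	unsigned long pfn;

	if (!addr)
		return;

	page = virt_to_page(addr);	/* kernel virtual address -> struct page */
	pfn  = page_to_pfn(page);	/* struct page -> page frame number */

	/* pfn_to_page(pfn) would return the same descriptor again. */
	pr_debug("page %p has pfn %lu\n", page, pfn);

	free_page(addr);
}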

The struct page structure

/*
 * Each physical page in the system has a struct page associated with
 * it to keep track of whatever it is we are using the page for at the
 * moment. Note that we have no way to track which tasks are using
 * a page, though if it is a pagecache page, rmap structures can tell us
 * who is mapping it.
 */
struct page {
	unsigned long flags;		/* Atomic flags, some possibly
					 * updated asynchronously */
	atomic_t _count;		/* Usage count, see below. */
	union {
		/*
		 * Count of ptes mapped in mms, to show when page is
		 * mapped & limit reverse map searches.
		 *
		 * Used also for tail pages refcounting instead of
		 * _count. Tail pages cannot be mapped and keeping the
		 * tail page _count zero at all times guarantees
		 * get_page_unless_zero() will never succeed on tail
		 * pages.
		 */
		atomic_t _mapcount;
		struct {		/* SLUB */
			u16 inuse;
			u16 objects;
		};
	};
	union {
		struct {
			unsigned long private;	/* Mapping-private opaque data:
						 * usually used for buffer_heads
						 * if PagePrivate set; used for
						 * swp_entry_t if PageSwapCache;
						 * indicates order in the buddy
						 * system if PG_buddy is set.
						 */
			struct address_space *mapping;	/* If low bit clear, points to
							 * inode address_space, or NULL.
							 * If page mapped as anonymous
							 * memory, low bit is set, and
							 * it points to anon_vma object:
							 * see PAGE_MAPPING_ANON below.
							 */
		};
#if USE_SPLIT_PTLOCKS
		spinlock_t ptl;
#endif
		struct kmem_cache *slab;	/* SLUB: Pointer to slab */
		struct page *first_page;	/* Compound tail pages */
	};
	union {
		pgoff_t index;		/* Our offset within mapping. */
		void *freelist;		/* SLUB: freelist req. slab lock */
	};
	struct list_head lru;		/* Pageout list, eg. active_list
					 * protected by zone->lru_lock !
					 */
	/*
	 * On machines where all RAM is mapped into kernel address space,
	 * we can simply calculate the virtual address. On machines with
	 * highmem some memory is mapped into kernel virtual memory
	 * dynamically, so we need a place to store that address.
	 * Note that this field could be 16 bits on x86 ... ;)
	 *
	 * Architectures with slow multiplication can define
	 * WANT_PAGE_VIRTUAL in asm/page.h
	 */
#if defined(WANT_PAGE_VIRTUAL)
	void *virtual;			/* Kernel virtual address (NULL if
					   not kmapped, ie. highmem) */
#endif /* WANT_PAGE_VIRTUAL */
#ifdef CONFIG_WANT_PAGE_DEBUG_FLAGS
	unsigned long debug_flags;	/* Use atomic bitops on this */
#endif
#ifdef CONFIG_KMEMCHECK
	/*
	 * kmemcheck wants to track the status of each byte in a page; this
	 * is a pointer to such a status block. NULL if not tracked.
	 */
	void *shadow;
#endif
};

1. flags: flag bits describing the current state of the page; the field also encodes the number of the zone to which the page frame belongs.

The individual flags are defined as follows:

enum pageflags {
	PG_locked,		/* Page is locked. Don't touch. */
	PG_error,
	PG_referenced,
	PG_uptodate,
	PG_dirty,
	PG_lru,
	PG_active,
	PG_slab,
	PG_owner_priv_1,	/* Owner use. If pagecache, fs may use */
	PG_arch_1,
	PG_reserved,
	PG_private,		/* If pagecache, has fs-private data */
	PG_private_2,		/* If pagecache, has fs aux data */
	PG_writeback,		/* Page is under writeback */
#ifdef CONFIG_PAGEFLAGS_EXTENDED
	PG_head,		/* A head page */
	PG_tail,		/* A tail page */
#else
	PG_compound,		/* A compound page */
#endif
	PG_swapcache,		/* Swap page: swp_entry_t in private */
	PG_mappedtodisk,	/* Has blocks allocated on-disk */
	PG_reclaim,		/* To be reclaimed asap */
	PG_swapbacked,		/* Page is backed by RAM/swap */
	PG_unevictable,		/* Page is "unevictable" */
#ifdef CONFIG_MMU
	PG_mlocked,		/* Page is vma mlocked */
#endif
#ifdef CONFIG_ARCH_USES_PG_UNCACHED
	PG_uncached,		/* Page has been mapped as uncached */
#endif
#ifdef CONFIG_MEMORY_FAILURE
	PG_hwpoison,		/* hardware poisoned page. Don't touch */
#endif
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	PG_compound_lock,
#endif
	__NR_PAGEFLAGS,

	/* Filesystems */
	PG_checked = PG_owner_priv_1,

	/* Two page bits are conscripted by FS-Cache to maintain local caching
	 * state.  These bits are set on pages belonging to the netfs's inodes
	 * when those inodes are being locally cached.
	 */
	PG_fscache = PG_private_2,	/* page backed by cache */

	/* XEN */
	PG_pinned = PG_owner_priv_1,
	PG_savepinned = PG_dirty,

	/* SLOB */
	PG_slob_free = PG_private,

	/* SLUB */
	PG_slub_frozen = PG_active,
};

The kernel also defines convenience macros for working with these flags:

PageXXX()
SetPageXXX()
ClearPageXXX()

which query, set, and clear the corresponding flag bit, respectively.
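As a minimal usage sketch of my own (flag_macro_demo is a made-up name, not from the original article):

#include <linux/page-flags.h>

static void flag_macro_demo(struct page *page)
{
	if (PageUptodate(page))			/* test PG_uptodate */
		SetPageReferenced(page);	/* atomically set PG_referenced */

	if (PageReferenced(page))
		ClearPageReferenced(page);	/* atomically clear PG_referenced */
}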

2. _count: the page's usage reference counter.

page_count()

returns the current value of this counter.
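For kernels of roughly this vintage, page_count() is defined in include/linux/mm.h along these lines (for a tail page of a compound page, the counter of the head page is read):

static inline int page_count(struct page *page)
{
	return atomic_read(&compound_head(page)->_count);
}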

The Pool of Reserved Page Frames

When page frames are allocated, one of two situations occurs:

1. there are enough free page frames, and the allocation succeeds immediately;

2. there are not enough free page frames; memory reclaiming must take place, and the kernel control path requesting the page frames is blocked until enough free page frames become available.

However, some kernel control paths must not be blocked, for example:

1. handlers that are servicing an interrupt;

2. code executing inside a critical section.

When such kernel control paths request page frames, they should use the GFP_ATOMIC flag, which indicates that the request must not block: if enough free page frames are not available, the allocation simply fails and returns.
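As an illustration, here is a minimal sketch of my own (handler_alloc_demo is a made-up name, not from the original article) of how code running in interrupt context allocates memory:

#include <linux/slab.h>

static void handler_alloc_demo(void)
{
	/* Interrupt context must not sleep, so GFP_ATOMIC is mandatory
	 * and failure has to be handled gracefully. */
	void *buf = kmalloc(256, GFP_ATOMIC);

	if (!buf)
		return;		/* low on memory: fail fast instead of blocking */

	/* ... use buf ... */
	kfree(buf);
}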

The kernel must nevertheless try its best to let GFP_ATOMIC requests succeed, so it keeps a reserve of page frames that is used only to satisfy GFP_ATOMIC allocations issued under low-on-memory conditions.

Normally min_free_kbytes kilobytes of memory are set aside as this pool.

It is computed by the formula

reserved_pool_size = floor(sqrt(16 * (ZONE_DMA + ZONE_NORMAL)))

where the zone sizes are expressed in KB, and the result is clamped to the range 128 KB to 65536 KB.
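A worked example: on a machine with 512 MB of directly mapped RAM (ZONE_DMA + ZONE_NORMAL = 524288 KB),

reserved_pool_size = floor(sqrt(16 * 524288)) = floor(sqrt(8388608)) = 2896 KB

which already lies within the 128~65536 KB bounds, so no clamping is applied.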

The reserve is split between ZONE_DMA and ZONE_NORMAL in proportion to their respective sizes.

The range of linear addresses that the Linux kernel can map directly runs from 3 GB to 3 GB + 896 MB.

If the allocated page frame falls within this range, the kernel can simply return its linear address. If it falls outside, no kernel-space linear address corresponds to it, so the kernel cannot return a linear address directly; it can, however, return the address of the page descriptor (struct page), since the descriptors of all page frames are stored in mem_map.

The limitation of this scheme is that a page frame outside the 3 GB ~ 3 GB + 896 MB window must first be mapped into the linear address space, i.e. the kernel page tables must be updated, before it can be used.

These extra page-table operations make allocating such memory less efficient than allocating from the permanently mapped part.
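A minimal sketch of my own (highmem_demo is a made-up name, not from the original article) of allocating a page frame that may live in high memory and mapping it on demand with kmap():

#include <linux/highmem.h>
#include <linux/gfp.h>
#include <linux/string.h>

static void highmem_demo(void)
{
	/* The frame may come from ZONE_HIGHMEM: only its descriptor
	 * (struct page) is returned, not a linear address. */
	struct page *page = alloc_pages(GFP_HIGHUSER, 0);
	void *vaddr;

	if (!page)
		return;

	vaddr = kmap(page);		/* set up a kernel mapping (page table update) */
	memset(vaddr, 0, PAGE_SIZE);
	kunmap(page);			/* tear the mapping down again */

	__free_pages(page, 0);
}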

For more on high-memory mapping, see: http://linux.chinaitlab.com/administer/831348.html