Reposted from: http://blog.csdn.net/bullbat/article/details/7106094
This article describes the data structures the Linux kernel uses to represent a process address space, namely mm_struct and vm_area_struct, and walks through the allocation of a linear address interval for a process, with comments added to the relevant source code.

Functions inside the kernel obtain dynamic memory in a fairly straightforward way. When memory is allocated to a user-mode process, the situation is entirely different. A process's requests for dynamic memory are considered non-urgent: as a rule, the kernel tries to defer the allocation as long as it can. Moreover, because user processes cannot be trusted, the kernel must be ready to catch any addressing error a user-mode process causes. When a user-mode process requests dynamic memory, it does not receive page frames; it merely gains the right to use a new interval of linear addresses, and that interval becomes part of its address space.

A process address space consists of all the linear addresses the process is allowed to use. The kernel can modify it dynamically by adding or removing intervals of linear addresses. The kernel represents such an interval with a resource called a memory region, described by a starting linear address, a length, and a set of access rights. Typical situations in which a process acquires new memory regions:
1. When the user types a command at the console, the shell creates a new process to execute it. As a result, a brand-new address space, that is, a set of memory regions, is assigned to the new process.
2. A running process may decide to load an entirely different program. The process descriptor stays the same, but all the memory regions in place before the load are released, and a new set is assigned to the process.
3. A running process may perform a memory mapping on a file.
4. A process may keep pushing data onto its user-mode stack until the region mapping the stack is exhausted; the kernel may then decide to expand that region.
5. A process may create an IPC shared memory region to share data with cooperating processes; the kernel assigns it a new memory region to implement this.
6. A process may expand its dynamic heap by calling a function such as malloc(); the kernel may then decide to expand the region assigned to the heap. (A user-space sketch of cases 3 and 6 follows this list.)
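A minimal user-space sketch, added here for illustration (it is not part of the original post): each of these calls only asks the kernel for a new linear interval; the physical frames arrive later, on first touch, through the page-fault handler.

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
	/* case 3/5 style: one new anonymous memory region (one new VMA) */
	void *region = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (region == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* case 6: malloc() may grow the heap region via brk(), or use mmap() */
	void *heap = malloc(4096);
	printf("mmap region at %p, heap chunk at %p\n", region, heap);
	free(heap);
	munmap(region, 1 << 20);
	return 0;
}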
Data structures

The mm field of the process descriptor task_struct describes the process address space:
struct mm_struct {
	struct vm_area_struct * mmap;		/* list of VMAs */
	struct rb_root mm_rb;
	struct vm_area_struct * mmap_cache;	/* last find_vma result */
	unsigned long (*get_unmapped_area) (struct file *filp,
				unsigned long addr, unsigned long len,
				unsigned long pgoff, unsigned long flags);
	void (*unmap_area) (struct mm_struct *mm, unsigned long addr);
	unsigned long mmap_base;		/* base of mmap area */
	unsigned long task_size;		/* size of task vm space */
	unsigned long cached_hole_size;		/* if non-zero, the largest hole below free_area_cache */
	unsigned long free_area_cache;		/* first hole of size cached_hole_size or larger */
	pgd_t * pgd;
	atomic_t mm_users;			/* How many users with user space? */
	atomic_t mm_count;			/* How many references to "struct mm_struct" (users count as 1) */
	int map_count;				/* number of VMAs */
	struct rw_semaphore mmap_sem;
	spinlock_t page_table_lock;		/* Protects page tables and some counters */
	struct list_head mmlist;		/* List of maybe swapped mm's. These are globally strung
						 * together off init_mm.mmlist, and are protected
						 * by mmlist_lock
						 */
	/* Special counters, in some configurations protected by the
	 * page_table_lock, in other configurations by being atomic.
	 */
	mm_counter_t _file_rss;
	mm_counter_t _anon_rss;
	unsigned long hiwater_rss;	/* High-watermark of RSS usage */
	unsigned long hiwater_vm;	/* High-water virtual memory usage */
	unsigned long total_vm, locked_vm, shared_vm, exec_vm;
	unsigned long stack_vm, reserved_vm, def_flags, nr_ptes;
	unsigned long start_code, end_code, start_data, end_data;
	unsigned long start_brk, brk, start_stack;
	unsigned long arg_start, arg_end, env_start, env_end;
	unsigned long saved_auxv[AT_VECTOR_SIZE]; /* for /proc/PID/auxv */
	struct linux_binfmt *binfmt;
	cpumask_t cpu_vm_mask;		/* bitmask for lazy TLB switching */
	/* Architecture-specific MM context */
	mm_context_t context;
	/* Swap token stuff */
	/*
	 * Last value of global fault stamp as seen by this process.
	 * In other words, this value gives an indication of how long
	 * it has been since this task got the token.
	 * Look at mm/thrash.c
	 */
	unsigned int faultstamp;
	unsigned int token_priority;
	unsigned int last_interval;
	unsigned long flags;		/* Must use atomic bitops to access the bits */
	struct core_state *core_state;	/* coredumping support */
#ifdef CONFIG_AIO
	spinlock_t ioctx_lock;
	struct hlist_head ioctx_list;	/* list of async I/O contexts */
#endif
#ifdef CONFIG_MM_OWNER
	/*
	 * "owner" points to a task that is regarded as the canonical
	 * user/owner of this mm. All of the following must be true in
	 * order for it to be changed:
	 *
	 * current == mm->owner
	 * current->mm != mm
	 * new_owner->mm == mm
	 * new_owner->alloc_lock is held
	 */
	struct task_struct *owner;
#endif
#ifdef CONFIG_PROC_FS
	/* store ref to file /proc/<pid>/exe symlink points to */
	struct file *exe_file;
	unsigned long num_exe_file_vmas;
#endif
#ifdef CONFIG_MMU_NOTIFIER
	struct mmu_notifier_mm *mmu_notifier_mm;
#endif
};
The mm_users and mm_count fields

The mm_users field stores the number of lightweight processes sharing the mm_struct. The mm_count field is the primary usage counter of the memory descriptor: all the users counted in mm_users amount to a single unit of mm_count. Each time mm_count is decremented, the kernel checks whether it has dropped to zero; if so, the memory descriptor is freed, because nobody is using it anymore.

An example clarifies the difference between mm_users and mm_count. Consider a memory descriptor shared by two lightweight processes. Normally its mm_users field holds 2, while mm_count holds 1 (the two owner processes count as one). If the memory descriptor must not be released in the middle of a lengthy operation, it is mm_users, not mm_count, that should be incremented. The end result is the same, because the raised mm_users keeps mm_count from reaching zero even if all the lightweight processes owning the descriptor die. A toy simulation of this accounting follows.
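The simulation below uses made-up names (toy_mm, toy_mmput) and plain integers instead of atomic_t; it roughly mirrors what the kernel's mmput()/mmdrop() pair does, with all locking omitted.

#include <stdio.h>

struct toy_mm {
	int mm_users;	/* lightweight processes sharing the mm */
	int mm_count;	/* references to the descriptor itself */
};

/* All mm_users together hold exactly one mm_count reference. */
static void toy_mmput(struct toy_mm *mm)
{
	if (--mm->mm_users == 0) {
		printf("last user gone: tear down the memory regions\n");
		if (--mm->mm_count == 0)
			printf("last reference gone: free the descriptor\n");
	}
}

int main(void)
{
	struct toy_mm mm = { .mm_users = 2, .mm_count = 1 }; /* two threads */
	toy_mmput(&mm);	/* first thread exits: the descriptor survives */
	toy_mmput(&mm);	/* second thread exits: everything is freed */
	return 0;
}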
Kernel threads run only in kernel mode, so they never access addresses below TASK_SIZE (equal to PAGE_OFFSET, usually 0xc0000000). Unlike ordinary processes, kernel threads do not use memory regions, so most fields of a memory descriptor are meaningless for them. In other words, when a kernel thread is created, its active_mm shares the parent process's mm, but only part of the data and variables in that mm is actually used.

Memory regions

Linux implements a memory region by means of an object of type vm_area_struct, whose fields are:
/*
 * This struct defines a memory VMM memory area. There is one of these
 * per VM-area/task. A VM area is any part of the process virtual memory
 * space that has a special rule for the page-fault handlers (ie a shared
 * library, the executable area etc).
 */
struct vm_area_struct {
	struct mm_struct * vm_mm;	/* The address space we belong to. */
	unsigned long vm_start;		/* Our start address within vm_mm. */
	unsigned long vm_end;		/* The first byte after our end address
					   within vm_mm. */
	/* linked list of VM areas per task, sorted by address */
	struct vm_area_struct *vm_next;
	pgprot_t vm_page_prot;		/* Access permissions of this VMA. */
	unsigned long vm_flags;		/* Flags, see mm.h. */
	struct rb_node vm_rb;
	/*
	 * For areas with an address space and backing store,
	 * linkage into the address_space->i_mmap prio tree, or
	 * linkage to the list of like vmas hanging off its node, or
	 * linkage of vma in the address_space->i_mmap_nonlinear list.
	 */
	union {
		struct {
			struct list_head list;
			void *parent;	/* aligns with prio_tree_node parent */
			struct vm_area_struct *head;
		} vm_set;
		struct raw_prio_tree_node prio_tree_node;
	} shared;
	/*
	 * A file's MAP_PRIVATE vma can be in both i_mmap tree and anon_vma
	 * list, after a COW of one of the file pages. A MAP_SHARED vma
	 * can only be in the i_mmap tree. An anonymous MAP_PRIVATE, stack
	 * or brk vma (with NULL file) can only be in an anon_vma list.
	 */
	struct list_head anon_vma_node;	/* Serialized by anon_vma->lock */
	struct anon_vma *anon_vma;	/* Serialized by page_table_lock */
	/* Function pointers to deal with this struct. */
	const struct vm_operations_struct *vm_ops;
	/* Information about our backing store: */
	unsigned long vm_pgoff;		/* Offset (within vm_file) in PAGE_SIZE
					   units, *not* PAGE_CACHE_SIZE */
	struct file * vm_file;		/* File we map to (can be NULL). */
	void * vm_private_data;		/* was vm_pte (shared mem) */
	unsigned long vm_truncate_count;/* truncate_count or restart_addr */
#ifndef CONFIG_MMU
	struct vm_region *vm_region;	/* NOMMU mapping region */
#endif
#ifdef CONFIG_NUMA
	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
#endif
};
The memory regions owned by a process never overlap, and the kernel tries hard to merge a newly allocated region with the adjacent existing ones: if the access rights of two adjacent regions match, they can be merged into one. A small demo of this behavior follows.
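The merge can be observed from user space. The hedged demo below (not from the original post; it assumes nothing else claims the freed range between the munmap() and the two MAP_FIXED mappings) creates two adjacent anonymous pages with identical protections through two separate mmap() calls, yet /proc/self/maps shows a single VMA covering both.

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	/* reserve two pages, then rebuild the range with two mappings */
	char *base = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (base == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	munmap(base, 2 * page);
	mmap(base, page, PROT_READ | PROT_WRITE,
	     MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	mmap(base + page, page, PROT_READ | PROT_WRITE,
	     MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	/* both pages should appear as one line (one VMA) in the dump */
	printf("two mappings starting at %p:\n", (void *)base);
	system("cat /proc/self/maps");
	return 0;
}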
Operations

Handling memory regions

As a representative example, take the commonly used find_vma function, which looks a memory region up in the red-black tree. The other helpers are not shown here.
/* Look up the first VMA which satisfies addr < vm_end, NULL if none. */
// Deals with searching the virtual address space for mapped and free regions.
// The two parameters are the top-level mm_struct that is to be searched and
// the address the caller is interested in.
struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
{
	// Defaults to returning NULL for address not found.
	struct vm_area_struct *vma = NULL;
	// Makes sure the caller does not try to search a bogus mm.
	if (mm) {
		/* Check the cache first. */
		/* (Cache hit rate is typically around 35%.) */
		// mmap_cache has the result of the last call to find_vma().
		// This has a chance of not having to search at all through
		// the red-black tree.
		vma = mm->mmap_cache;
		// If a valid VMA is being examined, this checks whether the
		// address being searched is contained within it. If it is,
		// the VMA was the mmap_cache one, so it can be returned.
		// Otherwise, the tree is searched.
		if (!(vma && vma->vm_end > addr && vma->vm_start <= addr)) {
			// Starts at the root of the tree.
			struct rb_node * rb_node;
			rb_node = mm->mm_rb.rb_node;
			vma = NULL;
			// This block is the tree walk.
			while (rb_node) {
				struct vm_area_struct * vma_tmp;
				// The macro, as the name suggests, returns
				// the VMA that this tree node points to.
				vma_tmp = rb_entry(rb_node,
						struct vm_area_struct, vm_rb);
				// Decides whether to descend into the left
				// or the right subtree.
				if (vma_tmp->vm_end > addr) {
					vma = vma_tmp;
					// If the current VMA is what is
					// required, this exits the while loop.
					if (vma_tmp->vm_start <= addr)
						break;
					rb_node = rb_node->rb_left;
				} else
					rb_node = rb_node->rb_right;
			}
			// If the VMA is valid, this sets the mmap_cache for
			// the next call to find_vma().
			if (vma)
				mm->mmap_cache = vma;
		}
	}
	// Returns the VMA that contains the address or, as a side effect of
	// the tree walk, the VMA that is closest above the requested address.
	return vma;
}
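Note the contract: find_vma() returns the first VMA satisfying addr < vm_end, which does not necessarily contain addr, so callers must still compare against vm_start. The self-contained user-space analogue below illustrates the same contract with a binary search over a sorted array standing in for the red-black tree (the toy_* names are invented).

#include <stdio.h>

struct toy_vma {
	unsigned long vm_start, vm_end;
};

/* First interval with addr < vm_end, or NULL: the contract of find_vma(). */
static struct toy_vma *toy_find_vma(struct toy_vma *v, int n,
				    unsigned long addr)
{
	struct toy_vma *found = NULL;
	int lo = 0, hi = n - 1;
	while (lo <= hi) {
		int mid = (lo + hi) / 2;
		if (v[mid].vm_end > addr) {	/* candidate: remember it, go left */
			found = &v[mid];
			if (v[mid].vm_start <= addr)
				break;		/* addr lies inside this VMA */
			hi = mid - 1;
		} else {
			lo = mid + 1;		/* go right */
		}
	}
	return found;
}

int main(void)
{
	struct toy_vma vmas[] = { { 0x1000, 0x2000 }, { 0x4000, 0x5000 } };
	struct toy_vma *v = toy_find_vma(vmas, 2, 0x3000);
	/* 0x3000 is unmapped, so we get the next VMA, [0x4000, 0x5000) */
	printf("[%lx, %lx)\n", v->vm_start, v->vm_end);
	return 0;
}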
Allocating a linear address interval

The do_mmap function creates and initializes a new memory region for the current process. After a successful allocation, however, the new region may be merged with other regions the process already owns.
/*
 * Creates and initializes a new linear address interval; after a
 * successful allocation, the new interval may be merged with the
 * process's existing memory regions.
 * file, offset: if the new interval maps a file into memory, the file
 *               pointer file and the file offset offset are used
 * addr: the linear address from which the search for a free interval starts
 * len:  the length of the interval
 * prot: the access rights of the pages in the region,
 *       e.g. read, write, execute
 * flag: the other flags of the interval
 */
static inline unsigned long do_mmap(struct file *file, unsigned long addr,
	unsigned long len, unsigned long prot,
	unsigned long flag, unsigned long offset)
{
	unsigned long ret = -EINVAL;
	/* some preliminary checks on the value of offset */
	if ((offset + PAGE_ALIGN(len)) < offset)
		goto out;
	if (!(offset & ~PAGE_MASK))
		ret = do_mmap_pgoff(file, addr, len, prot, flag, offset >> PAGE_SHIFT);
out:
	return ret;
}
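The two checks guard against offset + len wrapping around and against a file offset that is not page aligned: offset & ~PAGE_MASK extracts the sub-page bits, so a nonzero result means misalignment. A small demo of the arithmetic, assuming 4 KiB pages as on x86:

#include <stdio.h>

#define PAGE_SHIFT 12			/* assumes 4 KiB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)

int main(void)
{
	unsigned long off_ok = 0x3000, off_bad = 0x3400, len = 5000;
	/* offset & ~PAGE_MASK keeps the low 12 bits: nonzero = misaligned */
	printf("0x%lx aligned? %s\n", off_ok,
	       (off_ok & ~PAGE_MASK) ? "no" : "yes");	/* yes */
	printf("0x%lx aligned? %s\n", off_bad,
	       (off_bad & ~PAGE_MASK) ? "no" : "yes");	/* no */
	/* PAGE_ALIGN rounds a length up to a whole number of pages */
	printf("PAGE_ALIGN(%lu) = %lu\n", len, PAGE_ALIGN(len));	/* 8192 */
	return 0;
}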
Let us look at the actual work done by do_mmap_pgoff:
unsigned long do_mmap_pgoff(struct file *file, unsigned long addr,
			unsigned long len, unsigned long prot,
			unsigned long flags, unsigned long pgoff)
{
	struct mm_struct * mm = current->mm;
	struct inode *inode;
	unsigned int vm_flags;
	int error;
	unsigned long reqprot = prot;
	/* what follows is mostly basic sanity checking of the parameters:
	   can the request be satisfied at all? */
	/*
	 * Does the application expect PROT_READ to imply PROT_EXEC?
	 *
	 * (the exception is when the underlying filesystem is noexec
	 *  mounted, in which case we dont add PROT_EXEC.)
	 */
	if ((prot & PROT_READ) && (current->personality & READ_IMPLIES_EXEC))
		if (!(file && (file->f_path.mnt->mnt_flags & MNT_NOEXEC)))
			prot |= PROT_EXEC;
	if (!len)
		return -EINVAL;
	if (!(flags & MAP_FIXED))
		addr = round_hint_to_min(addr);
	error = arch_mmap_check(addr, len, flags);
	if (error)
		return error;
	/* Careful about overflows.. */
	len = PAGE_ALIGN(len);
	if (!len || len > TASK_SIZE)
		return -ENOMEM;
	/* offset overflow? */
	if ((pgoff + (len >> PAGE_SHIFT)) < pgoff)
		return -EOVERFLOW;
	/* Too many mappings? */
	if (mm->map_count > sysctl_max_map_count)
		return -ENOMEM;
	if (flags & MAP_HUGETLB) {
		struct user_struct *user = NULL;
		if (file)
			return -EINVAL;
		/*
		 * VM_NORESERVE is used because the reservations will be
		 * taken when vm_ops->mmap() is called
		 * A dummy user value is used because we are not locking
		 * memory so no accounting is necessary
		 */
		len = ALIGN(len, huge_page_size(&default_hstate));
		file = hugetlb_file_setup(HUGETLB_ANON_FILE, len, VM_NORESERVE,
						&user, HUGETLB_ANONHUGE_INODE);
		if (IS_ERR(file))
			return PTR_ERR(file);
	}
	/* Obtain the address to map to. we verify (or select) it and ensure
	 * that it represents a valid section of the address space.
	 */
	/* obtain a linear address interval for the new region */
	addr = get_unmapped_area(file, addr, len, pgoff, flags);
	if (addr & ~PAGE_MASK)
		return addr;
	/* Do simple checking here so the lower-level routines won't have
	 * to. we assume access permissions have been handled by the open
	 * of the memory object, so we don't do any here.
	 */
	/* compute the flags of the new region descriptor by combining
	   the values stored in the prot and flags parameters */
	vm_flags = calc_vm_prot_bits(prot) | calc_vm_flag_bits(flags) |
			mm->def_flags | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC;
	if (flags & MAP_LOCKED)
		if (!can_do_mlock())
			return -EPERM;
	/* mlock MCL_FUTURE? */
	if (vm_flags & VM_LOCKED) {
		unsigned long locked, lock_limit;
		locked = len >> PAGE_SHIFT;
		locked += mm->locked_vm;
		lock_limit = current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur;
		lock_limit >>= PAGE_SHIFT;
		if (locked > lock_limit && !capable(CAP_IPC_LOCK))
			return -EAGAIN;
	}
	inode = file ? file->f_path.dentry->d_inode : NULL;
	if (file) {
		switch (flags & MAP_TYPE) {
		case MAP_SHARED:
			if ((prot&PROT_WRITE) && !(file->f_mode&FMODE_WRITE))
				return -EACCES;
			/*
			 * Make sure we don't allow writing to an append-only
			 * file..
			 */
			if (IS_APPEND(inode) && (file->f_mode & FMODE_WRITE))
				return -EACCES;
			/*
			 * Make sure there are no mandatory locks on the file.
			 */
			if (locks_verify_locked(inode))
				return -EAGAIN;
			vm_flags |= VM_SHARED | VM_MAYSHARE;
			if (!(file->f_mode & FMODE_WRITE))
				vm_flags &= ~(VM_MAYWRITE | VM_SHARED);
			/* fall through */
		case MAP_PRIVATE:
			if (!(file->f_mode & FMODE_READ))
				return -EACCES;
			if (file->f_path.mnt->mnt_flags & MNT_NOEXEC) {
				if (vm_flags & VM_EXEC)
					return -EPERM;
				vm_flags &= ~VM_MAYEXEC;
			}
			if (!file->f_op || !file->f_op->mmap)
				return -ENODEV;
			break;
		default:
			return -EINVAL;
		}
	} else {
		switch (flags & MAP_TYPE) {
		case MAP_SHARED:
			/*
			 * Ignore pgoff.
			 */
			pgoff = 0;
			vm_flags |= VM_SHARED | VM_MAYSHARE;
			break;
		case MAP_PRIVATE:
			/*
			 * Set pgoff according to addr for anon_vma.
			 */
			pgoff = addr >> PAGE_SHIFT;
			break;
		default:
			return -EINVAL;
		}
	}
	error = security_file_mmap(file, reqprot, prot, flags, addr, 0);
	if (error)
		return error;
	error = ima_file_mmap(file, prot);
	if (error)
		return error;
	/* the actual work */
	return mmap_region(file, addr, len, flags, vm_flags, pgoff);
}
The get_unmapped_area function obtains the linear address interval for the new region:
/*
 * The parameters passed are the following:
 * file  The file or device being mapped
 * addr  The requested address to map to
 * len   The length of the mapping
 * pgoff The offset within the file being mapped
 * flags Protection flags
 */
// When a new area is to be memory mapped, a free region has to be found
// that is large enough to contain the new mapping.
/* Searches the process address space for a usable linear address
   interval. Depending on whether the interval is to be used for a file
   memory mapping or an anonymous mapping, one of two methods is invoked:
   the get_unmapped_area file operation, or the memory descriptor's
   get_unmapped_area method. */
unsigned long
get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
		unsigned long pgoff, unsigned long flags)
{
	unsigned long (*get_area)(struct file *, unsigned long,
				  unsigned long, unsigned long, unsigned long);
	get_area = current->mm->get_unmapped_area;
	if (file && file->f_op && file->f_op->get_unmapped_area)
		get_area = file->f_op->get_unmapped_area;
	addr = get_area(file, addr, len, pgoff, flags);	/* call the selected method */
	if (IS_ERR_VALUE(addr))
		return addr;
	if (addr > TASK_SIZE - len)
		return -ENOMEM;
	if (addr & ~PAGE_MASK)
		return -EINVAL;
	/* on x86/IA-32, arch_rebalance_pgtables simply returns addr */
	return arch_rebalance_pgtables(addr, len);
}
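From user space, this is the path that decides whether a non-MAP_FIXED address hint passed to mmap() is honored. A quick hedged experiment (the hint value is arbitrary, and the kernel is free to ignore it if the range is unavailable):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	void *hint = (void *)0x40000000UL;	/* arbitrary demo value */
	void *got = mmap(hint, 4096, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (got == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("hint %p, kernel chose %p (%s)\n", hint, got,
	       got == hint ? "hint honored" : "hint ignored");
	return 0;
}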
Here we look at the variant that does not involve a file; the file-related variant will be analyzed together with the filesystem code.

The memory descriptor's get_unmapped_area method is set in the following function:
/*
 * This function, called very early during the creation of a new
 * process VM image, sets up which VM layout function to use:
 */
void arch_pick_mmap_layout(struct mm_struct *mm)
{
	if (mmap_is_legacy()) {
		mm->mmap_base = mmap_legacy_base();
		mm->get_unmapped_area = arch_get_unmapped_area;
		mm->unmap_area = arch_unmap_area;
	} else {
		mm->mmap_base = mmap_base();
		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
		mm->unmap_area = arch_unmap_area_topdown;
	}
}
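Which layout a process received can be guessed from user space: under the top-down layout, successive anonymous mappings get descending addresses; under the legacy layout, ascending ones. A rough heuristic (it assumes no other mapping activity happens between the two calls):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	void *a = mmap(NULL, 4096, PROT_READ,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	void *b = mmap(NULL, 4096, PROT_READ,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	printf("first %p, second %p: %s layout\n", a, b,
	       b < a ? "top-down (arch_get_unmapped_area_topdown)"
		     : "legacy bottom-up (arch_get_unmapped_area)");
	return 0;
}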
Let us look directly at arch_get_unmapped_area; the top-down variant is similar.
unsigned long
arch_get_unmapped_area(struct file *filp, unsigned long addr,
		unsigned long len, unsigned long pgoff, unsigned long flags)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;
	unsigned long start_addr;
	if (len > TASK_SIZE)
		return -ENOMEM;
	if (flags & MAP_FIXED)
		return addr;
	if (addr) {
		addr = PAGE_ALIGN(addr);
		/* look the hint up in the existing address space */
		vma = find_vma(mm, addr);
		/* the hint is usable if it is legal and either no VMA covers
		   it or [addr, addr + len) ends before the next VMA starts */
		if (TASK_SIZE - len >= addr &&
		    (!vma || addr + len <= vma->vm_start))
			return addr;	/* use the hint as-is */
	}
	/* reaching here means addr was 0 or the search above failed */
	/* cached_hole_size is the largest hole below free_area_cache; a
	   request larger than it can start the search at free_area_cache
	   and skip addresses that cannot possibly fit, which makes the
	   search more efficient */
	if (len > mm->cached_hole_size) {
		start_addr = addr = mm->free_area_cache;
	} else {
		/* otherwise start at TASK_UNMAPPED_BASE, one third of the
		   way into the user address space */
		start_addr = addr = TASK_UNMAPPED_BASE;
		mm->cached_hole_size = 0;
	}
full_search:
	/* walk the VMAs one by one starting from addr */
	for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
		/* At this point:  (!vma || addr < vma->vm_end). */
		if (TASK_SIZE - len < addr) {
			/*
			 * Start a new search - just in case we missed
			 * some holes.
			 */
			if (start_addr != TASK_UNMAPPED_BASE) {
				addr = TASK_UNMAPPED_BASE;
				start_addr = addr;
				mm->cached_hole_size = 0;
				goto full_search;
			}
			return -ENOMEM;
		}
		/* a hole large enough for the request */
		if (!vma || addr + len <= vma->vm_start) {
			/*
			 * Remember the place where we stopped the search:
			 */
			mm->free_area_cache = addr + len;
			return addr;
		}
		/* update cached_hole_size: the scan runs from low to high
		   addresses, so when the first fit is eventually found, no
		   hole below it can be larger than this value; this pairs
		   with the free_area_cache update above */
		if (addr + mm->cached_hole_size < vma->vm_start)
			mm->cached_hole_size = vma->vm_start - addr;
		addr = vma->vm_end;	/* continue the search past this region */
	}
}
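The core of the loop is a first-fit scan over the sorted VMA list. Stripped of the red-black tree, the TASK_SIZE checks and the free_area_cache bookkeeping, it reduces to the self-contained sketch below (the toy_* names are invented):

#include <stdio.h>

struct toy_vma {
	unsigned long vm_start, vm_end;
};

/* First-fit search for a hole of len bytes, walking sorted VMAs upward
 * from base: the essence of arch_get_unmapped_area's full_search loop. */
static unsigned long toy_get_unmapped_area(struct toy_vma *v, int n,
					   unsigned long base,
					   unsigned long len)
{
	unsigned long addr = base;
	for (int i = 0; i < n; i++) {
		if (addr + len <= v[i].vm_start)	/* hole before this VMA fits */
			return addr;
		if (v[i].vm_end > addr)			/* skip past the VMA */
			addr = v[i].vm_end;
	}
	return addr;					/* hole above the last VMA */
}

int main(void)
{
	struct toy_vma vmas[] = { { 0x10000, 0x12000 }, { 0x13000, 0x20000 } };
	/* a 0x1000-byte request fits the hole at 0x12000... */
	printf("0x%lx\n", toy_get_unmapped_area(vmas, 2, 0x10000, 0x1000));
	/* ...but a 0x2000-byte request must go above the last VMA */
	printf("0x%lx\n", toy_get_unmapped_area(vmas, 2, 0x10000, 0x2000));
	return 0;
}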
Continuing from above, do_mmap_pgoff then calls mmap_region:
unsigned long mmap_region(struct file *file, unsigned long addr,
			  unsigned long len, unsigned long flags,
			  unsigned int vm_flags, unsigned long pgoff)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma, *prev;
	int correct_wcount = 0;
	int error;
	struct rb_node **rb_link, *rb_parent;
	unsigned long charged = 0;
	struct inode *inode = file ? file->f_path.dentry->d_inode : NULL;
	/* Clear old maps */
	error = -ENOMEM;
munmap_back:
	/* locate the region preceding the new interval and the position
	   the new region will occupy in the red-black tree */
	vma = find_vma_prepare(mm, addr, &prev, &rb_link, &rb_parent);
	/* check whether a region overlapping the new interval still exists */
	if (vma && vma->vm_start < addr + len) {
		if (do_munmap(mm, addr, len))	/* unmap the overlapping range */
			return -ENOMEM;
		goto munmap_back;
	}
	/* Check against address space limit. */
	/* check that inserting the new region does not push the size of
	   the process address space over its limit */
	if (!may_expand_vm(mm, len >> PAGE_SHIFT))
		return -ENOMEM;
	/*
	 * Set 'VM_NORESERVE' if we should not account for the
	 * memory use of this mapping.
	 */
	if ((flags & MAP_NORESERVE)) {
		/* We honor MAP_NORESERVE if allowed to overcommit */
		if (sysctl_overcommit_memory != OVERCOMMIT_NEVER)
			vm_flags |= VM_NORESERVE;
		/* hugetlb applies strict overcommit unless MAP_NORESERVE */
		if (file && is_file_hugepages(file))
			vm_flags |= VM_NORESERVE;
	}
	/*
	 * Private writable mapping: check memory availability
	 */
	if (accountable_mapping(file, vm_flags)) {
		charged = len >> PAGE_SHIFT;
		if (security_vm_enough_memory(charged))
			return -ENOMEM;
		vm_flags |= VM_ACCOUNT;
	}
	/*
	 * Can we just expand an old mapping?
	 */
	/* check whether the interval can be merged with the previous region */
	vma = vma_merge(mm, prev, addr, addr + len, vm_flags, NULL, file, pgoff, NULL);
	if (vma)	/* merge succeeded */
		goto out;
	/*
	 * Determine the object being mapped and call the appropriate
	 * specific mapper. the address has already been validated, but
	 * not unmapped, but the maps are removed from the list.
	 */
	/* reaching here means a brand-new region must be created:
	   allocate a vma structure for it */
	vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
	if (!vma) {
		error = -ENOMEM;
		goto unacct_error;
	}
	/* initialize the new region object */
	vma->vm_mm = mm;
	vma->vm_start = addr;
	vma->vm_end = addr + len;
	vma->vm_flags = vm_flags;
	vma->vm_page_prot = vm_get_page_prot(vm_flags);
	vma->vm_pgoff = pgoff;
	if (file) {
		error = -EINVAL;
		if (vm_flags & (VM_GROWSDOWN|VM_GROWSUP))
			goto free_vma;
		if (vm_flags & VM_DENYWRITE) {
			error = deny_write_access(file);
			if (error)
				goto free_vma;
			correct_wcount = 1;
		}
		vma->vm_file = file;
		get_file(file);
		error = file->f_op->mmap(file, vma);
		if (error)
			goto unmap_and_free_vma;
		if (vm_flags & VM_EXECUTABLE)
			added_exe_file_vma(mm);
		/* Can addr have changed??
		 *
		 * Answer: Yes, several device drivers can do it in their
		 *         f_op->mmap method. -DaveM
		 */
		addr = vma->vm_start;
		pgoff = vma->vm_pgoff;
		vm_flags = vma->vm_flags;
	} else if (vm_flags & VM_SHARED) {
		/* the interval is a shared anonymous region, used mainly
		   for interprocess communication: set it up */
		error = shmem_zero_setup(vma);
		if (error)
			goto free_vma;
	}
	if (vma_wants_writenotify(vma))
		vma->vm_page_prot = vm_get_page_prot(vm_flags & ~VM_SHARED);
	/* insert the new region into the process address space */
	vma_link(mm, vma, prev, rb_link, rb_parent);
	file = vma->vm_file;
	/* Once vma denies write, undo our temporary denial count */
	if (correct_wcount)
		atomic_inc(&inode->i_writecount);
out:
	perf_event_mmap(vma);
	/* grow the total_vm counter */
	mm->total_vm += len >> PAGE_SHIFT;
	vm_stat_account(mm, vm_flags, file, len >> PAGE_SHIFT);
	if (vm_flags & VM_LOCKED) {
		/*
		 * makes pages present; downgrades, drops, reacquires mmap_sem
		 */
		/* allocate all the pages of the region right away and
		   lock them in RAM */
		long nr_pages = mlock_vma_pages_range(vma, addr, addr + len);
		if (nr_pages < 0)
			return nr_pages;	/* vma gone! */
		mm->locked_vm += (len >> PAGE_SHIFT) - nr_pages;
	} else if ((flags & MAP_POPULATE) && !(flags & MAP_NONBLOCK))
		/* allocate all the pages of the region right away */
		make_pages_present(addr, addr + len);
	return addr;	/* return the address of the new region */
unmap_and_free_vma:
	if (correct_wcount)
		atomic_inc(&inode->i_writecount);
	vma->vm_file = NULL;
	fput(file);
	/* Undo any partial mapping done by a device driver. */
	unmap_region(mm, vma, prev, vma->vm_start, vma->vm_end);
	charged = 0;
free_vma:
	kmem_cache_free(vm_area_cachep, vma);
unacct_error:
	if (charged)
		vm_unacct_memory(charged);
	return error;
}
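The tail of mmap_region() is where VM_LOCKED and MAP_POPULATE mappings receive their page frames immediately instead of on first touch. From user space this corresponds to passing MAP_POPULATE (or MAP_LOCKED, which also needs an adequate RLIMIT_MEMLOCK) to mmap(); a hedged demo, not from the original post:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	/* MAP_POPULATE asks the kernel to fault in every page up front
	 * (the make_pages_present() branch above), so the first access
	 * to the region takes no page fault */
	void *p = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("1 MiB mapped and pre-faulted at %p\n", p);
	munmap(p, 1 << 20);
	return 0;
}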
This completes the allocation of a linear address interval. The work performed, in order: based on the requested address and length, find a not-yet-mapped linear interval in the process address space; if the interval can be merged with one of the process's existing memory regions, merge it; otherwise allocate a vm_area_struct for a new region and insert that vma into the process's existing address space as a new part of it. Finally, where required, the actual physical pages are allocated for the interval, and its base address is returned.