From: Haggai Eran
Date: Fri, 28 Sep 2012 00:19:57 +0000 (+1000)
Subject: mm: call invalidate_range_end in do_wp_page even for zero pages
X-Git-Tag: next-20121008~2^2~27
X-Git-Url: https://git.kernelconcepts.de/?a=commitdiff_plain;ds=sidebyside;h=54c334505c072d30ca044fe79d4432f3b9f67448;p=karo-tx-linux.git

mm: call invalidate_range_end in do_wp_page even for zero pages

The previous patch "mm: wrap calls to set_pte_at_notify with
invalidate_range_start and invalidate_range_end" only called the
invalidate_range_end mmu notifier function in do_wp_page when the
new_page variable wasn't NULL.  This was done in order to only call
invalidate_range_end after invalidate_range_start was called.

Unfortunately, there are situations where invalidate_range_start is
called and new_page still ends up NULL, most notably the zero-page case
named in the subject: old_page is NULL there, and new_page is set to
old_page once the replacement page has been installed.  The result was an
invalidate_range_start call without a matching invalidate_range_end,
which made kvm loop indefinitely on the first page fault.

This patch adds a flag variable to do_wp_page that records whether the
invalidate_range_start notifier was called.  invalidate_range_end is then
called only if the flag is true.

Reported-by: Jiri Slaby
Signed-off-by: Haggai Eran
Cc: Andrea Arcangeli
Cc: Sagi Grimberg
Cc: Peter Zijlstra
Cc: Xiao Guangrong
Cc: Or Gerlitz
Cc: Haggai Eran
Cc: Shachar Raindel
Cc: Liran Liss
Cc: Christoph Lameter
Cc: Avi Kivity
Cc: Hugh Dickins
Signed-off-by: Andrew Morton
---

diff --git a/mm/memory.c b/mm/memory.c
index 7d5238884dff..05204312c7e5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2534,6 +2534,7 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	struct page *dirty_page = NULL;
 	unsigned long mmun_start;	/* For mmu_notifiers */
 	unsigned long mmun_end;		/* For mmu_notifiers */
+	bool mmun_called = false;	/* For mmu_notifiers */
 
 	old_page = vm_normal_page(vma, address, orig_pte);
 	if (!old_page) {
@@ -2711,8 +2712,9 @@ gotten:
 	if (mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))
 		goto oom_free_new;
 
-	mmun_start = address & PAGE_MASK;
-	mmun_end = (address & PAGE_MASK) + PAGE_SIZE;
+	mmun_start  = address & PAGE_MASK;
+	mmun_end    = (address & PAGE_MASK) + PAGE_SIZE;
+	mmun_called = true;
 	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
 
 	/*
@@ -2781,8 +2783,7 @@ gotten:
 		page_cache_release(new_page);
 unlock:
 	pte_unmap_unlock(page_table, ptl);
-	if (new_page)
-		/* Only call the end notifier if the begin was called. */
+	if (mmun_called)
 		mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
 	if (old_page) {
 		/*
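
The idea the fix relies on is simply to remember, in a local flag, whether
the start notifier actually ran, and to key the end notifier on that flag
rather than on new_page.  Below is a minimal, illustrative user-space
sketch of that pairing discipline; it is not kernel code, and the names
fault_path, notify_start and notify_end are invented for the example.

	/* Illustrative stand-in for the start/end mmu notifier pair. */
	#include <stdbool.h>
	#include <stdio.h>

	static void notify_start(void) { puts("invalidate_range_start"); }
	static void notify_end(void)   { puts("invalidate_range_end"); }

	/* Simplified stand-in for the flow around the notifiers. */
	static void fault_path(bool replacing_page)
	{
		bool notified = false;	/* plays the role of mmun_called */

		if (replacing_page) {
			notify_start();
			notified = true;
			/* ... copy the page, install the new PTE, etc. ... */
		}

		/*
		 * Balance the start call only if it was actually made,
		 * regardless of what happened to any "new page" pointer
		 * in between.
		 */
		if (notified)
			notify_end();
	}

	int main(void)
	{
		fault_path(true);	/* start and end are both issued */
		fault_path(false);	/* neither is issued; calls stay paired */
		return 0;
	}

Keying the end call on a flag set right next to the start call keeps the
two notifiers paired even when later logic reuses or clears the pointer
that used to guard the end call.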