mm: prevent double decrease of nr_reserved_highatomic
author Minchan Kim <minchan@kernel.org>
Tue, 13 Dec 2016 00:42:08 +0000 (16:42 -0800)
committer Linus Torvalds <torvalds@linux-foundation.org>
Tue, 13 Dec 2016 02:55:07 +0000 (18:55 -0800)
commit 4855e4a7f29d6d10b0b9c84e189c770c9a94e91e
tree eb75748238b9fd7e2be6b7f2e885f0526f24796a
parent 88ed365ea227aa28841a8d6e196c9a261c76fffd

There is a race between the page freeing path and unreserving of a
highatomic pageblock.

 CPU 0     CPU 1

    free_hot_cold_page
      mt = get_pfnblock_migratetype
      set_pcppage_migratetype(page, mt)
         unreserve_highatomic_pageblock
         spin_lock_irqsave(&zone->lock)
         move_freepages_block
         set_pageblock_migratetype(page)
         spin_unlock_irqrestore(&zone->lock)
      free_pcppages_bulk
        __free_one_page(mt) <- mt is stale

Because of this race, a page freed on CPU 0 can end up on a
non-highatomic free list once the pageblock's migratetype has been
changed. The highatomic unreserve logic can then decrease the reserved
count for the same pageblock several times, leaving
nr_reserved_highatomic out of sync with the actual number of reserved
pageblocks.
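
For reference, a condensed sketch of the CPU 0 side, assuming the
4.9-era free path in mm/page_alloc.c (heavily simplified; only the
lines relevant to the race are shown):

	void free_hot_cold_page(struct page *page, bool cold)
	{
		unsigned long pfn = page_to_pfn(page);
		int migratetype;

		/* Sample the pageblock type once and cache it in the page. */
		migratetype = get_pfnblock_migratetype(page, pfn);
		set_pcppage_migratetype(page, migratetype);
		/* The page then sits on the per-cpu list until drained. */
		...
	}

	static void free_pcppages_bulk(struct zone *zone, int count,
				       struct per_cpu_pages *pcp)
	{
		...
		/*
		 * The cached type is reused without rechecking, so if
		 * unreserve_highatomic_pageblock() changed the pageblock
		 * type in the meantime, mt is stale here.
		 */
		int mt = get_pcppage_migratetype(page);
		__free_one_page(page, page_to_pfn(page), zone, 0, mt);
		...
	}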

So, this patch verifies whether the pageblock is still highatomic and
decreases the count only if it is.
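
A minimal sketch of the added check, assuming the
unreserve_highatomic_pageblock() loop in mm/page_alloc.c at the time of
this patch (simplified; zone->lock is held around this region, and ac
is the caller's struct alloc_context):

	page = list_first_entry_or_null(
			&zone->free_area[order].free_list[MIGRATE_HIGHATOMIC],
			struct page, lru);
	if (!page)
		continue;

	/*
	 * Decrement the reserve count only while the pageblock is still
	 * MIGRATE_HIGHATOMIC: a racing free may add pages to the
	 * highatomic free list after the block's type has changed, and
	 * decrementing for those again would underflow
	 * zone->nr_reserved_highatomic.
	 */
	if (get_pageblock_migratetype(page) == MIGRATE_HIGHATOMIC)
		zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
				zone->nr_reserved_highatomic);

	set_pageblock_migratetype(page, ac->migratetype);
	move_freepages_block(zone, page, ac->migratetype);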

Link: http://lkml.kernel.org/r/1476259429-18279-3-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Sangseok Lee <sangseok.lee@lge.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/page_alloc.c