From: Andrew Morton
Date: Thu, 8 Dec 2011 04:41:46 +0000 (+1100)
Subject: mm-more-intensive-memory-corruption-debug-fix
X-Git-Tag: next-20111213~1^2~152
X-Git-Url: https://git.kernelconcepts.de/?a=commitdiff_plain;h=19224577f74a79c7cecc83067728c5989caca9d3;p=karo-tx-linux.git

mm-more-intensive-memory-corruption-debug-fix

tweak documentation, s/flg/flag/

Cc: "Rafael J. Wysocki"
Cc: Andrea Arcangeli
Cc: Christoph Lameter
Cc: Mel Gorman
Cc: Stanislaw Gruszka
Cc: Pekka Enberg
Signed-off-by: Andrew Morton
---

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 9906afe52b65..7ad82479f4e9 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -625,21 +625,22 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 
 	debug_guardpage_minorder=
 			[KNL] When CONFIG_DEBUG_PAGEALLOC is set, this
-			parameter allows control order of pages that will be
-			intentionally kept free (and hence protected) by buddy
-			allocator. Bigger value increase probability of
-			catching random memory corruption, but reduce amount
-			of memory for normal system use. Maximum possible
-			value is MAX_ORDER/2. Setting this parameter to 1 or 2,
-			should be enough to identify most random memory
-			corruption problems caused by bugs in kernel/drivers
-			code when CPU write to (or read from) random memory
-			location. Note that there exist class of memory
-			corruptions problems caused by buggy H/W or F/W or by
-			drivers badly programing DMA (basically when memory is
-			written at bus level and CPU MMU is bypassed), which
-			are not detectable by CONFIG_DEBUG_PAGEALLOC, hence this
-			option would not help tracking down these problems too.
+			parameter allows control of the order of pages that
+			will be intentionally kept free (and hence protected)
+			by the buddy allocator. A bigger value increases the
+			probability of catching random memory corruption, but
+			reduces the amount of memory for normal system use. The
+			maximum possible value is MAX_ORDER/2. Setting this
+			parameter to 1 or 2 should be enough to identify most
+			random memory corruption problems caused by bugs in
+			kernel or driver code when a CPU writes to (or reads
+			from) a random memory location. Note that there exists
+			a class of memory corruption problems caused by buggy
+			H/W or F/W or by drivers badly programming DMA
+			(basically when memory is written at bus level and the
+			CPU MMU is bypassed) which are not detectable by
+			CONFIG_DEBUG_PAGEALLOC, hence this option will not help
+			track down these problems.
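For illustration, a minimal userspace sketch (not the kernel implementation) of how a debug_guardpage_minorder= handler might parse and clamp its argument to MAX_ORDER/2, as the text above describes; strtoul stands in for the kernel's kstrtoul(), the __setup() registration is omitted, and the MAX_ORDER value is an assumption of the mock:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_ORDER 11	/* assumed here; a common kernel default */

static unsigned long _debug_guardpage_minorder;

/* Parse "debug_guardpage_minorder=<n>"; reject junk and values above MAX_ORDER/2. */
static int debug_guardpage_minorder_setup(const char *buf)
{
	char *end;
	unsigned long res;

	errno = 0;
	res = strtoul(buf, &end, 10);
	if (errno || end == buf || *end != '\0' || res > MAX_ORDER / 2) {
		fprintf(stderr, "Bad debug_guardpage_minorder value\n");
		return 0;
	}
	_debug_guardpage_minorder = res;
	printf("Setting debug_guardpage_minorder to %lu\n", res);
	return 0;
}

int main(void)
{
	debug_guardpage_minorder_setup("2");	/* as if booted with debug_guardpage_minorder=2 */
	return 0;
}

As the documentation notes, booting with a value of 1 or 2 trades a portion of usable memory for a higher chance of trapping stray reads and writes.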
 
 	debugpat	[X86] Enable PAT debugging
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8b13233f851f..e84aba456112 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -419,18 +419,18 @@ static int __init debug_guardpage_minorder_setup(char *buf)
 }
 __setup("debug_guardpage_minorder=", debug_guardpage_minorder_setup);
 
-static inline void set_page_guard_flg(struct page *page)
+static inline void set_page_guard_flag(struct page *page)
 {
 	__set_bit(PAGE_DEBUG_FLAG_GUARD, &page->debug_flags);
 }
 
-static inline void clear_page_guard_flg(struct page *page)
+static inline void clear_page_guard_flag(struct page *page)
 {
 	__clear_bit(PAGE_DEBUG_FLAG_GUARD, &page->debug_flags);
 }
 #else
-static inline void set_page_guard_flg(struct page *page) { }
-static inline void clear_page_guard_flg(struct page *page) { }
+static inline void set_page_guard_flag(struct page *page) { }
+static inline void clear_page_guard_flag(struct page *page) { }
 #endif
 
 static inline void set_page_order(struct page *page, int order)
@@ -556,7 +556,7 @@ static inline void __free_one_page(struct page *page,
 	 * merge with it and move up one order.
 	 */
 	if (page_is_guard(buddy)) {
-		clear_page_guard_flg(buddy);
+		clear_page_guard_flag(buddy);
 		set_page_private(page, 0);
 		__mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
 	} else {
@@ -799,7 +799,7 @@ static inline void expand(struct zone *zone, struct page *page,
 	 * pages will stay not present in virtual address space
 	 */
 	INIT_LIST_HEAD(&page[size].lru);
-	set_page_guard_flg(&page[size]);
+	set_page_guard_flag(&page[size]);
 	set_page_private(&page[size], high);
 	/* Guard pages are not available for any usage */
 	__mod_zone_page_state(zone, NR_FREE_PAGES, -(1 << high));
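The renamed helpers each toggle a single bit in page->debug_flags. A standalone sketch of the same pattern, with struct page mocked up and the kernel's __set_bit()/__clear_bit() replaced by plain C bit operations (the bit position is an assumption of the mock):

#include <assert.h>
#include <stdio.h>

#define PAGE_DEBUG_FLAG_GUARD	0	/* assumed bit position for this mock */

struct page {
	unsigned long debug_flags;	/* stands in for the kernel's page->debug_flags */
};

static inline void set_page_guard_flag(struct page *page)
{
	page->debug_flags |= 1UL << PAGE_DEBUG_FLAG_GUARD;
}

static inline void clear_page_guard_flag(struct page *page)
{
	page->debug_flags &= ~(1UL << PAGE_DEBUG_FLAG_GUARD);
}

static inline int page_is_guard(struct page *page)
{
	return !!(page->debug_flags & (1UL << PAGE_DEBUG_FLAG_GUARD));
}

int main(void)
{
	struct page page = { .debug_flags = 0 };

	/* Mirrors expand(): a split-off buddy page is flagged as a guard page. */
	set_page_guard_flag(&page);
	assert(page_is_guard(&page));

	/* Mirrors __free_one_page(): a guard buddy is unflagged before merging. */
	clear_page_guard_flag(&page);
	assert(!page_is_guard(&page));

	printf("guard flag set/clear behave as expected\n");
	return 0;
}

As the expand() hunk shows, pages flagged this way are also subtracted from NR_FREE_PAGES, so they stay out of circulation until the flag is cleared on the free path.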