mm: soft-offline: don't free target page in successful page migration
Stress testing that injects soft-offline events into a process iterating an
"mmap-pagefault-munmap" loop can trigger a BUG_ON in __free_one_page()
due to the PageHWPoison flag.
If page migration succeeds, the source page is supposed to be freed afterward.
But there seems to be a strange page state in which the page is almost free,
yet still reachable via drain_pages_zone(), maybe due to a race between
__pagevec_lru_add_fn() and putback_lru_page(). When that happens, the
following BUG fires:

  [   14.025761] Soft offlining page 0x70fe1 at 0x70100008d000
  [   14.029400] Soft offlining page 0x705fb at 0x70300008d000
  [   14.030208] page:ffffea0001c3f840 count:0 mapcount:0 mapping:          (null) index:0x2
  [   14.031186] flags: 0x1fffff80800000(hwpoison)
  [   14.031186] page dumped because: VM_BUG_ON_PAGE(page->flags & ((1 << 25) - 1))
  [   14.031186] ------------[ cut here ]------------
  [   14.031186] kernel BUG at /src/linux-dev/mm/page_alloc.c:585!
  [   14.031186] invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
  [   14.031186] Modules linked in: cfg80211 rfkill crc32c_intel microcode ppdev parport_pc pcspkr serio_raw virtio_balloon parport i2c_piix4 virtio_blk virtio_net ata_generic pata_acpi floppy
  [   14.031186] CPU: 3 PID: 1779 Comm: test_base_madv_ Not tainted 4.0.0-v4.0-150511-1451-00009-g82360a3730e6 #139
  [   14.031186] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
  [   14.031186] task: ffff88007d33bae0 ti: ffff88007a114000 task.ti: ffff88007a114000
  [   14.031186] RIP: 0010:[<ffffffff811a806a>]  [<ffffffff811a806a>] free_pcppages_bulk+0x52a/0x6f0
  [   14.031186] RSP: 0018:ffff88007a117d28  EFLAGS: 00010096
  [   14.031186] RAX: 0000000000000042 RBX: ffffea0001c3f860 RCX: 0000000000000006
  [   14.031186] RDX: 0000000000000007 RSI: 0000000000000000 RDI: ffff88011f50d3d0
  [   14.031186] RBP: ffff88007a117da8 R08: 000000000000000a R09: 00000000fffffffe
  [   14.031186] R10: 0000000000001d3e R11: 0000000000000002 R12: 0000000000070fe1
  [   14.031186] R13: 0000000000000000 R14: 0000000000000000 R15: ffffea0001c3f840
  [   14.031186] FS:  00007f8a8e3e1740(0000) GS:ffff88011f500000(0000) knlGS:0000000000000000
  [   14.031186] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
  [   14.031186] CR2: 00007f78c7341258 CR3: 000000007bb08000 CR4: 00000000000007e0
  [   14.031186] Stack:
  [   14.031186]  ffff88011f5189c8 ffff88011f5189b8 ffffea0001c3f840 ffff88011f518998
  [   14.031186]  ffffea0001d30cc0 0000001200000002 0000000200000012 0000000000000003
  [   14.031186]  ffff88007ffda6c0 000000000000000a ffff88007a117dd8 ffff88011f518998
  [   14.031186] Call Trace:
  [   14.031186]  [<ffffffff811a8380>] ? page_alloc_cpu_notify+0x50/0x50
  [   14.031186]  [<ffffffff811a82bd>] drain_pages_zone+0x3d/0x50
  [   14.031186]  [<ffffffff811a839d>] drain_local_pages+0x1d/0x30
  [   14.031186]  [<ffffffff81122a96>] on_each_cpu_mask+0x46/0x80
  [   14.031186]  [<ffffffff811a5e8b>] drain_all_pages+0x14b/0x1e0
  [   14.031186]  [<ffffffff812151a2>] soft_offline_page+0x432/0x6e0
  [   14.031186]  [<ffffffff811e2dac>] SyS_madvise+0x73c/0x780
  [   14.031186]  [<ffffffff810dcb3f>] ? put_prev_task_fair+0x2f/0x50
  [   14.031186]  [<ffffffff81143f74>] ? __audit_syscall_entry+0xc4/0x120
  [   14.031186]  [<ffffffff8105bdac>] ? do_audit_syscall_entry+0x6c/0x70
  [   14.031186]  [<ffffffff8105cc63>] ? syscall_trace_enter_phase1+0x103/0x170
  [   14.031186]  [<ffffffff816f908e>] ? int_check_syscall_exit_work+0x34/0x3d
  [   14.031186]  [<ffffffff816f8e72>] system_call_fastpath+0x12/0x17
  [   14.031186] Code: ff 89 45 b4 48 8b 45 c0 48 83 b8 a8 00 00 00 00 0f 85 e3 fb ff ff 0f 1f 00 0f 0b 48 8b 7d 90 48 c7 c6 e8 95 a6 81 e8 e6 32 02 00 <0f> 0b 8b 45 cc 49 89 47 30 41 8b 47 18 83 f8 ff 0f 85 10 ff ff
  [   14.031186] RIP  [<ffffffff811a806a>] free_pcppages_bulk+0x52a/0x6f0
  [   14.031186]  RSP <ffff88007a117d28>
  [   14.031186] ---[ end trace 53926436e76d1f35 ]---

This looks strange to me, because soft offline makes the target page free
first and then marks it PageHWPoison; we call drain_all_pages() precisely so
that the page goes back to the buddy allocator before the flag is set.

Currently hard offline (memory_failure()) and soft offline isolate an in-use
target page differently. Hard offline keeps its refcount (so it intentionally
leaks error pages), while soft offline frees the page and then marks it
PageHWPoison. This "free then mark PageHWPoison" behavior sometimes fails
because of the pcplists (commit 9ab3b59 "mm: hwpoison: drop
lru_add_drain_all() in __soft_offline_page()" refers to this, but
unfortunately did not solve the problem). That's because the page freeing run
from drain_pages_zone() can't handle a hwpoisoned page, so after page
migration the page can still sit on a pcplist in "freed" state, where
drain_all_pages() doesn't help. I don't have a clear idea for a fix on the
freeing side, but we can avoid freeing a hwpoisoned page in the first place,
as the hard offline code already does. This patch does exactly that.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Naoya Horiguchi committed May 11, 2015
1 parent 82360a3 commit 8f9f877
Showing 2 changed files with 5 additions and 25 deletions.
22 changes: 0 additions & 22 deletions mm/memory-failure.c
@@ -1626,20 +1626,7 @@ static int __soft_offline_page(struct page *page, int flags)
 		if (ret > 0)
 			ret = -EIO;
 	} else {
-		/*
-		 * After page migration succeeds, the source page can
-		 * be trapped in pagevec and actual freeing is delayed.
-		 * Freeing code works differently based on PG_hwpoison,
-		 * so there's a race. We need to make sure that the
-		 * source page should be freed back to buddy before
-		 * setting PG_hwpoison.
-		 */
-		if (!is_free_buddy_page(page))
-			drain_all_pages(page_zone(page));
 		SetPageHWPoison(page);
-		if (!is_free_buddy_page(page))
-			pr_info("soft offline: %#lx: page leaked\n",
-				pfn);
 		atomic_long_inc(&num_poisoned_pages);
 	}
 } else {
@@ -1691,14 +1678,6 @@ int soft_offline_page(struct page *page, int flags)
 
 	get_online_mems();
 
-	/*
-	 * Isolate the page, so that it doesn't get reallocated if it
-	 * was free. This flag should be kept set until the source page
-	 * is freed and PG_hwpoison on it is set.
-	 */
-	if (get_pageblock_migratetype(page) != MIGRATE_ISOLATE)
-		set_migratetype_isolate(page, true);
-
 	ret = get_any_page(page, pfn, flags);
 	put_online_mems();
 	if (ret > 0) { /* for in-use pages */
@@ -1717,6 +1696,5 @@ int soft_offline_page(struct page *page, int flags)
 			atomic_long_inc(&num_poisoned_pages);
 		}
 	}
-	unset_migratetype_isolate(page, MIGRATE_MOVABLE);
 	return ret;
 }
8 changes: 5 additions & 3 deletions mm/migrate.c
@@ -906,7 +906,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
  */
 static int unmap_and_move(new_page_t get_new_page, free_page_t put_new_page,
 			unsigned long private, struct page *page, int force,
-			enum migrate_mode mode)
+			enum migrate_mode mode, enum migrate_reason reason)
 {
 	int rc = 0;
 	int *result = NULL;
@@ -937,7 +937,8 @@ static int unmap_and_move(new_page_t get_new_page, free_page_t put_new_page,
 		list_del(&page->lru);
 		dec_zone_page_state(page, NR_ISOLATED_ANON +
 				page_is_file_cache(page));
-		putback_lru_page(page);
+		if (reason != MR_MEMORY_FAILURE)
+			putback_lru_page(page);
 	}
 
 	/*
@@ -1110,7 +1111,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 					pass > 2, mode);
 		else
 			rc = unmap_and_move(get_new_page, put_new_page,
-					private, page, pass > 2, mode);
+					private, page, pass > 2, mode,
+					reason);
 
 		switch(rc) {
 		case -ENOMEM:
