mm/page_alloc: remotely drain per-cpu lists
Some setups, notably NOHZ_FULL CPUs, are too busy to handle the per-cpu
drain work queued by __drain_all_pages().  So introduce a new mechanism to
drain the per-cpu lists remotely.  It is made possible by remotely taking
the new per-cpu spinlocks in 'struct per_cpu_pages'.  A benefit of this new
scheme is that drain operations are now migration safe.
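
To make the idea concrete, here is a minimal sketch (illustration only, not
code from this patch) of what a remote drain boils down to once
'struct per_cpu_pages' carries its own spinlock; the 'lock' field comes from
an earlier patch in this series, and the helper name and exact call signatures
here are assumptions:

	/* Illustrative sketch only; field and helper names are assumptions. */
	static void remote_drain_pages_zone(unsigned int cpu, struct zone *zone)
	{
		struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
		unsigned long flags;

		/* Any CPU may take the remote pcp lock, so no work item or IPI is needed. */
		spin_lock_irqsave(&pcp->lock, flags);
		if (pcp->count)
			/* free_pcppages_bulk() as in this era of mm/page_alloc.c; signature may differ */
			free_pcppages_bulk(zone, pcp->count, pcp, 0);
		spin_unlock_irqrestore(&pcp->lock, flags);
	}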

There was no observed performance degradation vs. the previous scheme.
Both netperf and hackbench were run while the __drain_all_pages(NULL, true)
code path was being triggered in parallel around 100 times per second.  The
new scheme performs a bit better (~5%), although the important point here
is that there are no performance regressions vs. the previous mechanism.
Per-cpu list draining happens only in slow paths.

Minchan Kim tested this independently and reported:

	My workload does not run on NOHZ CPUs, but it runs apps under heavy
	memory pressure, so they go into direct reclaim and get stuck on
	drain_all_pages until the work on the workqueue runs.

	unit: nanosecond
	max(dur)        avg(dur)                count(dur)
	166713013       487511.77786438033      1283

	From traces, the system encountered drain_all_pages 1283 times; the
	worst case was 166ms and the average was 487us.

	The other problem was alloc_contig_range in CMA.  The PCP draining
	sometimes takes several hundred milliseconds even though there is no
	memory pressure and only a few pages need to be migrated out, because
	the CPUs are fully booked.

	Your patch completely removed that wasted time.

Link: https://lore.kernel.org/r/20211103170512.2745765-4-nsaenzju@redhat.com
Link: https://lkml.kernel.org/r/20220512085043.5234-7-mgorman@techsingularity.net
Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Tested-by: Minchan Kim <minchan@kernel.org>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
vianpl authored and akpm00 committed May 27, 2022
1 parent 3dfdcac commit e9d7597
Showing 1 changed file with 5 additions and 54 deletions: mm/page_alloc.c

@@ -165,13 +165,7 @@ DEFINE_PER_CPU(int, _numa_mem_); /* Kernel "local memory" node */
 EXPORT_PER_CPU_SYMBOL(_numa_mem_);
 #endif
 
-/* work_structs for global per-cpu drains */
-struct pcpu_drain {
-	struct zone *zone;
-	struct work_struct work;
-};
 static DEFINE_MUTEX(pcpu_drain_mutex);
-static DEFINE_PER_CPU(struct pcpu_drain, pcpu_drain);
 
 #ifdef CONFIG_GCC_PLUGIN_LATENT_ENTROPY
 volatile unsigned long latent_entropy __latent_entropy;
@@ -3101,9 +3095,6 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
  * Called from the vmstat counter updater to drain pagesets of this
  * currently executing processor on remote nodes after they have
  * expired.
- *
- * Note that this function must be called with the thread pinned to
- * a single processor.
  */
 void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
 {
@@ -3128,10 +3119,6 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
 
 /*
  * Drain pcplists of the indicated processor and zone.
- *
- * The processor must either be the current processor and the
- * thread pinned to the current processor or a processor that
- * is not online.
  */
 static void drain_pages_zone(unsigned int cpu, struct zone *zone)
 {
@@ -3150,10 +3137,6 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
 
 /*
  * Drain pcplists of all zones on the indicated processor.
- *
- * The processor must either be the current processor and the
- * thread pinned to the current processor or a processor that
- * is not online.
  */
 static void drain_pages(unsigned int cpu)
 {
@@ -3166,9 +3149,6 @@ static void drain_pages(unsigned int cpu)
 
 /*
  * Spill all of this CPU's per-cpu pages back into the buddy allocator.
- *
- * The CPU has to be pinned. When zone parameter is non-NULL, spill just
- * the single zone's pages.
  */
 void drain_local_pages(struct zone *zone)
 {
@@ -3180,24 +3160,6 @@ void drain_local_pages(struct zone *zone)
 	drain_pages(cpu);
 }
 
-static void drain_local_pages_wq(struct work_struct *work)
-{
-	struct pcpu_drain *drain;
-
-	drain = container_of(work, struct pcpu_drain, work);
-
-	/*
-	 * drain_all_pages doesn't use proper cpu hotplug protection so
-	 * we can race with cpu offline when the WQ can move this from
-	 * a cpu pinned worker to an unbound one. We can operate on a different
-	 * cpu which is alright but we also have to make sure to not move to
-	 * a different one.
-	 */
-	migrate_disable();
-	drain_local_pages(drain->zone);
-	migrate_enable();
-}
-
 /*
  * The implementation of drain_all_pages(), exposing an extra parameter to
  * drain on all cpus.
@@ -3218,13 +3180,6 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
 	 */
 	static cpumask_t cpus_with_pcps;
 
-	/*
-	 * Make sure nobody triggers this path before mm_percpu_wq is fully
-	 * initialized.
-	 */
-	if (WARN_ON_ONCE(!mm_percpu_wq))
-		return;
-
 	/*
 	 * Do not drain if one is already in progress unless it's specific to
 	 * a zone. Such callers are primarily CMA and memory hotplug and need
@@ -3274,14 +3229,12 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
 	}
 
 	for_each_cpu(cpu, &cpus_with_pcps) {
-		struct pcpu_drain *drain = per_cpu_ptr(&pcpu_drain, cpu);
-
-		drain->zone = zone;
-		INIT_WORK(&drain->work, drain_local_pages_wq);
-		queue_work_on(cpu, mm_percpu_wq, &drain->work);
+		if (zone) {
+			drain_pages_zone(cpu, zone);
+		} else {
+			drain_pages(cpu);
+		}
 	}
-	for_each_cpu(cpu, &cpus_with_pcps)
-		flush_work(&per_cpu_ptr(&pcpu_drain, cpu)->work);
 
 	mutex_unlock(&pcpu_drain_mutex);
 }
@@ -3290,8 +3243,6 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
  * Spill all the per-cpu pages from all CPUs back into the buddy allocator.
  *
  * When zone parameter is non-NULL, spill just the single zone's pages.
- *
- * Note that this can be extremely slow as the draining happens in a workqueue.
  */
 void drain_all_pages(struct zone *zone)
 {
