mm-oom-rework-oom-detection-checkpatch-fixes

WARNING: line over 80 characters
#99: FILE: mm/page_alloc.c:2965:
+ * zone list (with a backoff mechanism which is a function of no_progress_loops).

WARNING: line over 80 characters
#129: FILE: mm/page_alloc.c:2995:
+	 * Keep reclaiming pages while there is a chance this will lead somewhere.

WARNING: line over 80 characters
#134: FILE: mm/page_alloc.c:3000:
+	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {

WARNING: line over 80 characters
#138: FILE: mm/page_alloc.c:3004:
+		available -= DIV_ROUND_UP(no_progress_loops * available, MAX_RECLAIM_RETRIES);

WARNING: line over 80 characters
#142: FILE: mm/page_alloc.c:3008:
+		 * Would the allocation succeed if we reclaimed the whole available?

WARNING: line over 80 characters
#146: FILE: mm/page_alloc.c:3012:
+			/* Wait for some write requests to complete then retry */

total: 0 errors, 6 warnings, 202 lines checked

./patches/mm-oom-rework-oom-detection.patch has style problems, please review.

NOTE: If any of the errors are false positives, please report
      them to the maintainer, see CHECKPATCH in MAINTAINERS.

Please run checkpatch prior to sending patches
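For reference, the report above can be reproduced by running the checkpatch
script from a kernel tree over the patch file, along the lines of (whether
any extra options were passed is not recorded here):

  ./scripts/checkpatch.pl ./patches/mm-oom-rework-oom-detection.patch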

Cc: David Rientjes <rientjes@google.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
akpm00 authored and sfrothwell committed Feb 25, 2016
1 parent f9ec43a commit 1f42de1
Showing 1 changed file with 13 additions and 9 deletions.
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3101,7 +3101,8 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
  * the last reclaim round), pages_reclaimed (cumulative number of reclaimed
  * pages) and no_progress_loops (number of reclaim rounds without any progress
  * in a row) is considered as well as the reclaimable pages on the applicable
- * zone list (with a backoff mechanism which is a function of no_progress_loops).
+ * zone list (with a backoff mechanism which is a function of
+ * no_progress_loops).
  *
  * Returns true if a retry is viable or false to enter the oom path.
  */
@@ -3131,24 +3132,27 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 	}
 
 	/*
-	 * Keep reclaiming pages while there is a chance this will lead somewhere.
-	 * If none of the target zones can satisfy our allocation request even
-	 * if all reclaimable pages are considered then we are screwed and have
-	 * to go OOM.
+	 * Keep reclaiming pages while there is a chance this will lead
+	 * somewhere. If none of the target zones can satisfy our allocation
+	 * request even if all reclaimable pages are considered then we are
+	 * screwed and have to go OOM.
 	 */
-	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
+					ac->high_zoneidx, ac->nodemask) {
 		unsigned long available;
 
 		available = zone_reclaimable_pages(zone);
-		available -= DIV_ROUND_UP(no_progress_loops * available, MAX_RECLAIM_RETRIES);
+		available -= DIV_ROUND_UP(no_progress_loops * available,
+					  MAX_RECLAIM_RETRIES);
 		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
 
 		/*
-		 * Would the allocation succeed if we reclaimed the whole available?
+		 * Would the allocation succeed if we reclaimed the whole
+		 * available?
 		 */
 		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
 				ac->high_zoneidx, alloc_flags, available)) {
-			/* Wait for some write requests to complete then retry */
+			/* Wait for some writes to complete then retry */
 			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
 			return true;
 		}
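For context, the long line flagged at #138 is the backoff step: each reclaim
round that makes no progress discounts the reclaimable-page estimate by a
further 1/MAX_RECLAIM_RETRIES, so after MAX_RECLAIM_RETRIES fruitless rounds
only the free pages still count toward the watermark check. A minimal
standalone sketch of that arithmetic (plain C99, not kernel code;
MAX_RECLAIM_RETRIES is 16 in the parent patch, and the page counts below are
invented):

  #include <stdio.h>

  #define MAX_RECLAIM_RETRIES 16	/* value from the parent patch */
  #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

  int main(void)
  {
  	/* Invented numbers standing in for the per-zone counters. */
  	unsigned long reclaimable = 1000;	/* zone_reclaimable_pages() */
  	unsigned long free_pages = 50;		/* NR_FREE_PAGES snapshot */

  	for (int loops = 0; loops <= MAX_RECLAIM_RETRIES; loops++) {
  		unsigned long available = reclaimable;

  		/* The backoff from the flagged line: discount the
  		 * reclaimable estimate by loops/MAX_RECLAIM_RETRIES. */
  		available -= DIV_ROUND_UP(loops * available,
  					  MAX_RECLAIM_RETRIES);
  		available += free_pages;
  		printf("no_progress_loops=%2d available=%lu\n",
  		       loops, available);
  	}
  	return 0;
  }

Compiled and run, this shows available decaying linearly from 1050 to 50 (the
free pages alone) as no_progress_loops reaches 16; the kernel feeds this
decayed estimate into __zone_watermark_ok() as shown in the hunk above.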
