Andy Whitcroft [Tue, 10 Jan 2012 23:10:03 +0000 (15:10 -0800)]
checkpatch: only apply kconfig help checks for options which prompt
The intent of this check is to catch the options which the user will see
and ensure they are properly described. It is also common for internal-only
options to have a brief description. Allow this form.
Reported-by: Steven Rostedt <rostedt@goodmis.org> Tested-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Andy Whitcroft <apw@canonical.com> Cc: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andy Whitcroft [Tue, 10 Jan 2012 23:10:01 +0000 (15:10 -0800)]
checkpatch: optimise statement scanner when mid-statement
In the middle of a long definition or similar, there is no possibility of
finding a smaller sub-statement. Optimise this case by skipping statement
acquisition where there are no starts of statement (open brace '{' or
semicolon ';'). We are still likely to scan slightly more than needed,
but this is safest.
Signed-off-by: Andy Whitcroft <apw@canonical.com> Cc: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andy Whitcroft [Tue, 10 Jan 2012 23:10:00 +0000 (15:10 -0800)]
checkpatch: ## is not a valid modifier
Inserting a # into the modifiers list will incorrectly add the null string
to the modifiers list, leading to an infinite loop. As neither # nor ## is
a valid modifier form, simply ignore them.
Signed-off-by: Andy Whitcroft <apw@canonical.com> Reported-by: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joe Perches [Tue, 10 Jan 2012 23:09:58 +0000 (15:09 -0800)]
checkpatch: improve memset and min/max with cast checking
Improve the checking of arguments to memset and the min/max tests.
Move the checking of min/max to statement blocks instead of single lines.
Change $Constant to allow a 0x prefix and a trailing ul specifier in any
case. Add a $FuncArg type for any function argument with or without a
cast. Print the whole statement when showing memset or min/max messages.
Improve the error message for memset with 0 as the 3rd argument.
There are still weaknesses in the $FuncArg and $Constant code, as
arbitrary parentheses and negative signs are not generically supported.
[akpm@linux-foundation.org: fix per Andy] Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Andy Whitcroft <apw@canonical.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
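For illustration only (hypothetical code, not taken from the patch), the min/max part of the check now matches casted arguments via $FuncArg and suggests the typed helper:

        len = min((size_t)avail, len);          /* flagged: cast inside min() */
        len = min_t(size_t, avail, len);        /* preferred form */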
Andy Whitcroft [Tue, 10 Jan 2012 23:09:57 +0000 (15:09 -0800)]
checkpatch: check for common memset parameter issues against statements
Move the memset checks over to work against the statement. Also add
checks for 0 and 1 used as lengths. Generally these indicate badly
ordered parameters.
Signed-off-by: Andy Whitcroft <apw@canonical.com> Cc: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
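As a hedged illustration of what the 0/1-length checks catch ("struct foo" is made up, not from the patch):

        struct foo f;

        memset(&f, sizeof(f), 0);       /* flagged: length 0, value/length swapped */
        memset(&f, 0, sizeof(f));       /* intended form */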
Andy Whitcroft [Tue, 10 Jan 2012 23:09:54 +0000 (15:09 -0800)]
checkpatch: correctly track the end of preprocessor commands in context
When looking for a statement we currently run on through preprocessor
commands. This means that a header file containing only definitions is
parsed over and over again, combining all of the lines from the current
line to the end of the file and leading to severe performance issues.
Fix up context accumulation to track preprocessor commands and stop when
reaching the end of them. At the same time vastly simplify the #define
handling.
Signed-off-by: Andy Whitcroft <apw@canonical.com> Cc: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joe Perches [Tue, 10 Jan 2012 23:09:50 +0000 (15:09 -0800)]
checkpatch: update signature "might be better as" warning
Email header lines can look like signature tags. It is valid to have
multiple email recipients on a single line, but not valid to have multiple
signatures on a single line.
Validate signatures only when not in the email headers.
Clear the $in_commit_log flag when the patch filename appears.
Add '-' to the valid chars in a message header for headers
like "Message-Id:" and "In-Reply-To:".
Signed-off-by: Joe Perches <joe@perches.com> Reported-by: Julia Lawall <julia.lawall@lip6.fr> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Steve Hodgson [Tue, 10 Jan 2012 23:09:47 +0000 (15:09 -0800)]
btree: export btree_get_prev() so modules can use btree_for_each
The btree_for_each API is implemented with macros that internally call
btree_get_prev(), so if btree_get_prev() isn't exported then modules fail
to link if they try to use one of the btree_for_each macros. Since the
rest of the btree API is exported, we should keep things orthogonal and
make this work too.
Signed-off-by: Roland Dreier <roland@purestorage.com> Signed-off-by: Steve Hodgson <steve@purestorage.com> Acked-by: Joern Engel <joern@logfs.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
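A minimal sketch, assuming an unsigned long key and the btree_geo32 geometry, of the module-side walk that the btree_for_each macros expand into; it is the btree_get_prev() call that needs the export for the module to link:

#include <linux/btree.h>
#include <linux/module.h>

static void walk_backwards(struct btree_head *head)
{
        unsigned long key;
        void *val;

        /* iterate from the largest key downwards, as the
         * btree_for_each-style macros do internally */
        for (val = btree_last(head, &btree_geo32, &key);
             val;
             val = btree_get_prev(head, &btree_geo32, &key))
                pr_info("key %lu -> %p\n", key, val);
}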
Mark Brown [Tue, 10 Jan 2012 23:09:46 +0000 (15:09 -0800)]
leds: convert wm8350 driver to devm_kzalloc()
Saves a small amount of code and systematically eliminates leaks.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Cc: Richard Purdie <rpurdie@rpsys.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
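The general shape of such a conversion, as a hedged sketch with generic names rather than the wm8350 driver's own:

        /* before: manual allocation, must be kfree()d in every error path
         * and again in remove() */
        led = kzalloc(sizeof(*led), GFP_KERNEL);
        if (!led)
                return -ENOMEM;

        /* after: device-managed allocation, freed automatically when the
         * device is unbound, so the kfree() calls simply go away */
        led = devm_kzalloc(&pdev->dev, sizeof(*led), GFP_KERNEL);
        if (!led)
                return -ENOMEM;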
Mark Brown [Tue, 10 Jan 2012 23:09:45 +0000 (15:09 -0800)]
leds: convert wm831x status driver to devm_kzalloc()
Saves a small amount of code and systematically eliminates leaks.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Cc: Richard Purdie <rpurdie@rpsys.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Axel Lin [Tue, 10 Jan 2012 23:09:43 +0000 (15:09 -0800)]
drivers/leds/leds-mc13783.c: fix off-by-one for checking num_leds
The LED id begins at 0, so the maximum number of LEDs should be
MC13783_LED_MAX + 1.
Signed-off-by: Axel Lin <axel.lin@gmail.com> Acked-by: Philippe Retornaz <philippe.retornaz@epfl.ch> Cc: Richard Purdie <rpurdie@rpsys.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
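A hedged sketch of the corrected bound check; only the MC13783_LED_MAX + 1 relationship comes from the changelog, the surrounding probe code is made up:

        /* LED ids run from 0 to MC13783_LED_MAX, so up to
         * MC13783_LED_MAX + 1 LEDs are valid */
        if (pdata->num_leds < 1 || pdata->num_leds > MC13783_LED_MAX + 1) {
                dev_err(&pdev->dev, "invalid num_leds %d\n", pdata->num_leds);
                return -EINVAL;
        }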
Axel Lin [Tue, 10 Jan 2012 23:09:37 +0000 (15:09 -0800)]
drivers/leds/leds-netxbig.c: use gpio_request_one()
Use gpio_request_one() instead of multiple gpiolib calls. This also
simplifies error handling a bit.
Signed-off-by: Axel Lin <axel.lin@gmail.com> Cc: Simon Guinot <sguinot@lacie.com> Cc: Richard Purdie <rpurdie@rpsys.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
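A sketch of the pattern being collapsed (GPIO number and label are placeholders):

        /* before: request, configure, and unwind by hand */
        err = gpio_request(gpio, "netxbig-led");
        if (err)
                return err;
        err = gpio_direction_output(gpio, 0);
        if (err) {
                gpio_free(gpio);
                return err;
        }

        /* after: request and configure as output-low in one call */
        err = gpio_request_one(gpio, GPIOF_OUT_INIT_LOW, "netxbig-led");
        if (err)
                return err;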
Axel Lin [Tue, 10 Jan 2012 23:09:35 +0000 (15:09 -0800)]
drivers/leds/leds-bd2802.c: use gpio_request_one()
Use gpio_request_one() instead of multiple gpiolib calls.
Signed-off-by: Axel Lin <axel.lin@gmail.com> Cc: Kim Kyuwon <q1.kim@samsung.com> Cc: Richard Purdie <rpurdie@rpsys.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Axel Lin [Tue, 10 Jan 2012 23:09:30 +0000 (15:09 -0800)]
leds: convert leds-dac124s085 to module_spi_driver
Factor out some boilerplate code for spi driver registration into
module_spi_driver.
Signed-off-by: Axel Lin <axel.lin@gmail.com> Cc: Haojian Zhuang <hzhuang1@marvell.com> Cc: Mark Brown <broonie@opensource.wolfsonmicro.com> Cc: Richard Purdie <rpurdie@rpsys.net> Cc: Michael Hennerich <hennerich@blackfin.uclinux.org> Cc: Mike Rapoport <mike@compulab.co.il> Acked-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Axel Lin [Tue, 10 Jan 2012 23:09:27 +0000 (15:09 -0800)]
leds: convert led i2c drivers to module_i2c_driver
Factor out some boilerplate code for i2c driver registration
into module_i2c_driver.
Signed-off-by: Axel Lin <axel.lin@gmail.com> Cc: Haojian Zhuang <hzhuang1@marvell.com> Cc: Mark Brown <broonie@opensource.wolfsonmicro.com> Cc: Richard Purdie <rpurdie@rpsys.net> Cc: Michael Hennerich <hennerich@blackfin.uclinux.org> Cc: Mike Rapoport <mike@compulab.co.il> Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Axel Lin [Tue, 10 Jan 2012 23:09:24 +0000 (15:09 -0800)]
leds: convert led platform drivers to module_platform_driver
Factor out some boilerplate code for platform driver registration into
module_platform_driver.
Signed-off-by: Axel Lin <axel.lin@gmail.com> Acked-by: Haojian Zhuang <hzhuang1@marvell.com> [led-88pm860x.c] Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Cc: Richard Purdie <rpurdie@rpsys.net> Cc: Michael Hennerich <hennerich@blackfin.uclinux.org> Cc: Mike Rapoport <mike@compulab.co.il> Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
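The boilerplate removed by these conversions looks roughly like the following ("foo" names are placeholders); module_i2c_driver() and module_spi_driver() in the two patches above work the same way for their bus types:

        /* before: hand-written init/exit that only (un)register the driver */
        static int __init foo_led_init(void)
        {
                return platform_driver_register(&foo_led_driver);
        }
        module_init(foo_led_init);

        static void __exit foo_led_exit(void)
        {
                platform_driver_unregister(&foo_led_driver);
        }
        module_exit(foo_led_exit);

        /* after: one macro expands to the same init/exit pair */
        module_platform_driver(foo_led_driver);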
Jingoo Han [Tue, 10 Jan 2012 23:09:18 +0000 (15:09 -0800)]
drivers/video/backlight/ep93xx_bl.c: remove duplicated header include
module.h is included twice.
Signed-off-by: Jingoo Han <jg1.han@samsung.com> Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com> Cc: Ryan Mallon <rmallon@gmail.com> Cc: Richard Purdie <rpurdie@rpsys.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Donghwa Lee [Tue, 10 Jan 2012 23:09:15 +0000 (15:09 -0800)]
backlight/ld9040.c: regulator control in the driver
This patch adds regulator-based power control to the driver. Previously,
the ld9040 driver's power on/off sequence was driven by a callback
registered in the board file; with this change there is no need to
register an LCD power on/off callback in the board file.
Signed-off-by: Donghwa Lee <dh09.lee@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Inki Dae <inki.dae@samsung.com> Cc: Richard Purdie <rpurdie@rpsys.net> Cc: Florian Tobias Schandinat <FlorianSchandinat@gmx.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Axel Lin [Tue, 10 Jan 2012 23:09:11 +0000 (15:09 -0800)]
backlight: convert drivers/video/backlight/* to use module_platform_driver()
Convert the drivers in drivers/video/backlight/* to use the
module_platform_driver() macro which makes the code smaller and a bit
simpler.
Signed-off-by: Axel Lin <axel.lin@gmail.com> Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com> [ep93xx_bl.c] Cc: Mike Rapoport <mike@compulab.co.il> Cc: Richard Purdie <rpurdie@rpsys.net> Acked-by: Michael Hennerich <michael.hennerich@analog.com> Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Paul Bolle [Tue, 10 Jan 2012 23:09:10 +0000 (15:09 -0800)]
backlight: remove ADX backlight device support
Support for the Avionic Design Xanthos backlight device got added in
commit 3b96ea9ef8 ("backlight: Add support for the Avionic Design Xanthos
backlight device."). That support depends on ARCH_PXA_ADX. The code that
should have provided that Kconfig symbol never got submitted. It has
never been possible to even build this driver. Remove it.
Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Acked-by: Thierry Reding <thierry.reding@avionic-design.de> Cc: Richard Purdie <rpurdie@rpsys.net> Cc: Wim Van Sebroeck <wim@iguana.be> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Kyungmin Park [Tue, 10 Jan 2012 23:09:09 +0000 (15:09 -0800)]
devfreq: add devfreq maintainer entry
Now that devfreq has been merged into mainline, add a maintainer entry
for it.
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Cc: Kevin Hilman <khilman@ti.com> Cc: MyungJoo Ham <myungjoo.ham@samsung.com> Acked-by: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joe Perches [Tue, 10 Jan 2012 23:08:56 +0000 (15:08 -0800)]
MAINTAINERS: update tulip F: patterns
Commit a88394cfb58 ("ewrk3/tulip: Move the DEC - Tulip drivers") moved the
files; update the patterns to match.
Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Grant Grundler <grundler@parisc-linux.org> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: Tobias Ringstrom <tori@unhappy.mine.nu> Cc: Grant Grundler <grundler@parisc-linux.org> Cc: David Davies <davies@maniac.ultranet.com> Cc: David Miller <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joe Perches [Tue, 10 Jan 2012 23:08:46 +0000 (15:08 -0800)]
MAINTAINERS: update bt8xx gpio F: patterns
Commit c103de240439d ("gpio: reorganize drivers") renamed the file; update
the pattern to match.
Signed-off-by: Joe Perches <joe@perches.com> Cc: Grant Likely <grant.likely@secretlab.ca> Cc: Michael Buesch <m@bues.ch> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joe Perches [Tue, 10 Jan 2012 23:08:44 +0000 (15:08 -0800)]
MAINTAINERS: update adp gpio F: patterns
Commit c103de240439df ("gpio: reorganize drivers") renamed the files;
update the patterns to match.
Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Grant Likely <grant.likely@secretlab.ca> Acked-by: Michael Hennerich <michael.hennerich@analog.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joe Perches [Tue, 10 Jan 2012 23:08:42 +0000 (15:08 -0800)]
MAINTAINERS: update various arm F: patterns
Track renames and missing or deleted files.
Signed-off-by: Joe Perches <joe@perches.com> Cc: Russell King <linux@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Ian Campbell [Tue, 10 Jan 2012 23:08:41 +0000 (15:08 -0800)]
get_maintainers.pl: follow renames when looking up commit signers
I happen to have had a commit to various network drivers since the big
renaming/reorg that recently happened to drivers/net. This means that I
now appear to be in the top few commit signers (by percentage) for many of
them, so I am getting sent all sorts of stuff while the people actually
involved with the driver are not. For example (to pick one at random):
$ ./scripts/get_maintainer.pl -f drivers/net/ethernet/nvidia/forcedeth.c
"David S. Miller" <davem@davemloft.net> (commit_signer:5/7=71%)
Ian Campbell <ian.campbell@citrix.com> (commit_signer:2/7=29%)
Eric Dumazet <eric.dumazet@gmail.com> (commit_signer:1/7=14%)
Jeff Kirsher <jeffrey.t.kirsher@intel.com> (commit_signer:1/7=14%)
Jiri Pirko <jpirko@redhat.com> (commit_signer:1/7=14%)
netdev@vger.kernel.org (open list:NETWORKING DRIVERS)
linux-kernel@vger.kernel.org (open list)
With the following patch the renames are followed and the result appears
much more sensible:
Minchan Kim [Tue, 10 Jan 2012 23:08:39 +0000 (15:08 -0800)]
mm/vmalloc.c: change void* into explict vm_struct*
vmap_area->private is a void *, but the field is only ever used to store a
vm_struct pointer. Change it to an explicitly named vm_struct * to improve
readability and type checking.
Signed-off-by: Minchan Kim <minchan@kernel.org> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Tejun Heo [Tue, 10 Jan 2012 23:08:28 +0000 (15:08 -0800)]
mempool: fix first round failure behavior
mempool modifies gfp_mask so that the backing allocator doesn't try too
hard or trigger warning messages when there's a pool to fall back on. In
addition, for the first try, it removes __GFP_WAIT and IO, so that it
doesn't trigger reclaim or wait when the allocation can be fulfilled from
the pool; however, when that allocation fails and the pool is empty too, it
waits for the pool to be replenished before retrying.
An allocation which could have succeeded after a bit of reclaim thus has to
wait on the reserved items, and it's not as if mempool doesn't retry with
__GFP_WAIT and IO. It just does that *after* someone returns an element,
pointlessly delaying things.
Fix it by retrying immediately if the first round of allocation attempts
w/o __GFP_WAIT and IO fails.
[akpm@linux-foundation.org: shorten the lock hold time] Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Tejun Heo [Tue, 10 Jan 2012 23:08:26 +0000 (15:08 -0800)]
mempool: drop unnecessary and incorrect BUG_ON() from mempool_destroy()
mempool_destroy() is a thin wrapper around free_pool(). The only thing it
adds is BUG_ON(pool->curr_nr != pool->min_nr). The intention seems to be
to enforce that all allocated elements are freed; however, the BUG_ON()
can't achieve that (it doesn't know anything about objects above min_nr)
and is incorrect, as mempool_resize() is allowed to leave the pool extended
but not filled. Furthermore, panicking is far worse than any memory leak,
and there are better debug tools for tracking memory leaks.
Drop the BUG_ON() from mempool_destroy() and, as that leaves the function
identical to free_pool(), replace it.
Tejun Heo [Tue, 10 Jan 2012 23:08:23 +0000 (15:08 -0800)]
mempool: fix and document synchronization and memory barrier usage
mempool_alloc/free() use undocumented smp_mb()'s. The code is slightly
broken and misleading.
The lockless part is in mempool_free(). It wants to determine whether the
item being freed needs to be returned to the pool or backing allocator
without grabbing pool->lock. Two things need to be guaranteed for correct
operation.
1. pool->curr_nr + #allocated should never dip below pool->min_nr.
2. Waiters shouldn't be left dangling.
For #1, the only necessary condition is that curr_nr visible at free is
from after the allocation of the element being freed (details in the
comment). For most cases, this is true without any barrier, but there can
be fringe cases where the allocated pointer is passed to the freeing task
without going through memory barriers. To cover this case, wmb is
necessary before returning from allocation and rmb is necessary before
reading curr_nr. IOW,
        ALLOCATING TASK                         FREEING TASK

        update pool state after alloc;
        wmb();
        pass pointer to freeing task;
                                                read pointer;
                                                rmb();
                                                read pool state to free;
The current code doesn't have wmb after pool update during allocation and
may theoretically, on machines where unlock doesn't behave as full wmb,
lead to pool depletion and deadlock. smp_wmb() needs to be added after
successful allocation from reserved elements and smp_mb() in
mempool_free() can be replaced with smp_rmb().
For #2, the waiter needs to add itself to waitqueue and then check the
wait condition and the waker needs to update the wait condition and then
wake up. Because waitqueue operations always go through full spinlock
synchronization, there is no need for extra memory barriers.
Furthermore, mempool_alloc() is already holding pool->lock when it decides
that it needs to wait. There is no reason to do unlock - add waitqueue -
test condition again. It can simply add itself to waitqueue while holding
pool->lock and then unlock and sleep.
This patch adds smp_wmb() after successful allocation from the reserved
pool, replaces smp_mb() in mempool_free() with smp_rmb(), and extends
pool->lock over the waitqueue addition. More importantly, it explains what
the memory barriers do and how the lockless testing is correct.
-v2: Oleg pointed out that unlock doesn't imply wmb. Added explicit
smp_wmb() after successful allocation from reserved pool and
updated comments accordingly.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: David Howells <dhowells@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
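Reduced to a standalone sketch (not the mempool code itself; the helper names are invented), the pairing described above is the usual publish/consume pattern:

        /* allocating task */
        take_element_from_reserve(pool);        /* pool->curr_nr-- under pool->lock */
        smp_wmb();                              /* publish pool state before the pointer */
        shared->elem = elem;                    /* pass pointer to the freeing task */

        /* freeing task */
        elem = shared->elem;                    /* read pointer */
        smp_rmb();                              /* only then look at pool state */
        if (pool->curr_nr < pool->min_nr)
                return_element_to_pool(pool, elem);
        else
                free_to_backing_allocator(pool, elem);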
KOSAKI Motohiro [Tue, 10 Jan 2012 23:08:21 +0000 (15:08 -0800)]
mm/mempolicy.c: mpol_equal(): use bool
mpol_equal() logically returns a boolean. Use a bool type to slightly
improve readability.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Stephen Wilson <wilsons@start.ca> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Tue, 10 Jan 2012 23:08:18 +0000 (15:08 -0800)]
mm/vmscan.c: consider swap space when deciding whether to continue reclaim
It's pointless to continue reclaiming when we have no swap space and lots
of anon pages in the inactive list.
Without this patch, it is possible when swap is disabled to continue
trying to reclaim when there are only anonymous pages in the system even
though that will not make any progress.
Signed-off-by: Minchan Kim <minchan@kernel.org> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Acked-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Johannes Weiner <jweiner@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Tue, 10 Jan 2012 23:08:15 +0000 (15:08 -0800)]
mm: bootmem: try harder to free pages in bulk
The loop that frees pages to the page allocator while bootstrapping tries
to free higher-order blocks only when the starting address is aligned to
that block size. Otherwise it will free all pages on that node
one-by-one.
Change it to free individual pages up to the first aligned block and then
try higher-order frees from there.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Cc: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Tue, 10 Jan 2012 23:08:13 +0000 (15:08 -0800)]
mm: bootmem: drop superfluous range check when freeing pages in bulk
The area node_bootmem_map represents is aligned to BITS_PER_LONG, and all
bits in any aligned word of that map are valid. When the represented area
extends beyond the end of the node, the non-existent pages will be marked
as reserved.
As a result, when freeing a page block, doing an explicit range check for
whether that block is within the node's range is redundant as the bitmap
is consulted anyway to see whether all pages in the block are unreserved.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Cc: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Tue, 10 Jan 2012 23:08:10 +0000 (15:08 -0800)]
mm: page_alloc: generalize order handling in __free_pages_bootmem()
__free_pages_bootmem() used to special-case higher-order frees to save
individual page checking with free_pages_bulk().
Nowadays, both zero order and non-zero order frees use free_pages(), which
checks each individual page anyway, and so there is little point in making
the distinction anymore. The higher-order loop will work just fine for
zero order pages.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Cc: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
tracepoint: add tracepoints for debugging oom_score_adj
oom_score_adj is used to guard processes from the OOM killer. One problem
is that it is inherited at fork(): when a daemon sets oom_score_adj and
then creates children, it is hard to know where the value was set.
This patch adds three tracepoints useful for debugging:
- creating a new task
- renaming a task (exec)
- setting oom_score_adj
To debug, users need to enable these tracepoints; filtering may also be
useful.
KOSAKI Motohiro [Tue, 10 Jan 2012 23:08:07 +0000 (15:08 -0800)]
mm: simplify find_vma_prev()
Commit 297c5eee37 ("mm: make the vma list be doubly linked") added the
vm_prev member to vm_area_struct. We can simplify find_vma_prev() by
using it. This change also helps improve page fault performance because
the simplified code has stronger locality of reference.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Hugh Dickins <hughd@google.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Shaohua Li <shaohua.li@intel.com> Cc: Michal Hocko <mhocko@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
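A hedged sketch of the simplified lookup (close to, but not necessarily identical to, the resulting mm/mmap.c code):

struct vm_area_struct *
find_vma_prev(struct mm_struct *mm, unsigned long addr,
              struct vm_area_struct **pprev)
{
        struct vm_area_struct *vma;

        vma = find_vma(mm, addr);
        /* the doubly linked VMA list makes the predecessor free to fetch */
        *pprev = vma ? vma->vm_prev : NULL;
        return vma;
}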
Andrea Arcangeli [Tue, 10 Jan 2012 23:08:05 +0000 (15:08 -0800)]
mremap: enforce rmap src/dst vma ordering in case of vma_merge() succeeding in copy_vma()
migrate was doing an rmap_walk with speculative lock-less access on
pagetables. That could lead it to not serialize properly against mremap
PT locks. But a second problem remains in the ordering of vmas in the
same_anon_vma list used by the rmap_walk.
If vma_merge() succeeds in copy_vma(), the src vma could be placed after
the dst vma in the same_anon_vma list. That could still lead to migrate
missing some ptes.
This patch adds an anon_vma_moveto_tail() function to force the dst vma at
the end of the list before mremap starts to solve the problem.
If the mremap is very large and there are a lot of parents or children
sharing the anon_vma root lock, this should still scale better than taking
the anon_vma root lock around every pte copy for practically the whole
duration of the mremap.
Update: Hugh noticed that special care is needed in the error path, where
move_page_tables() goes in the reverse direction; a second
anon_vma_moveto_tail() call is needed in the error path.
This program exercises the anon_vma_moveto_tail:
===
int main()
{
        static struct timeval oldstamp, newstamp;
        long diffsec;
        char *p, *p2, *p3, *p4;

        if (posix_memalign((void **)&p, 2*1024*1024, SIZE))
                perror("memalign"), exit(1);
        if (posix_memalign((void **)&p2, 2*1024*1024, SIZE))
                perror("memalign"), exit(1);
        if (posix_memalign((void **)&p3, 2*1024*1024, SIZE))
                perror("memalign"), exit(1);
Michal Hocko [Tue, 10 Jan 2012 23:08:02 +0000 (15:08 -0800)]
mm: fix off-by-two in __zone_watermark_ok()
Commit 88f5acf88ae6 ("mm: page allocator: adjust the per-cpu counter
threshold when memory is low") changed the way free_pages is calculated,
but it forgot that we used to subtract ((1 << order) - 1), so we ended up
with an off-by-two when calculating free_pages.
Reported-by: Wang Sheng-Hui <shhuiw@gmail.com> Signed-off-by: Michal Hocko <mhocko@suse.cz> Acked-by: Mel Gorman <mgorman@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
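The restored adjustment is essentially one subtraction; a hedged sketch of the relevant line in __zone_watermark_ok():

        /* account for the rest of the pages an order-N request consumes */
        free_pages -= (1 << order) - 1;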
The first entry of bdata->node_bootmem_map holds the data for
bdata->node_min_pfn up to bdata->node_min_pfn + BITS_PER_LONG - 1. So the
test for freeing all pages of a single map entry can be slightly relaxed.
Moreover use DIV_ROUND_UP in another place instead of open coding it.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Cc: Johannes Weiner <hannes@saeurebad.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hillf Danton [Tue, 10 Jan 2012 23:07:59 +0000 (15:07 -0800)]
mm: compaction: push isolate search base of compact control one pfn ahead
Once isolated, the current pfn will not need to be scanned and isolated
again if a next round is necessary, so push the isolate_migratepages
search base of the given compact_control one pfn ahead.
Signed-off-by: Hillf Danton <dhillf@gmail.com> Reviewed-by: Andrea Arcangeli <aarcange@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Tue, 10 Jan 2012 23:07:49 +0000 (15:07 -0800)]
mm: try to distribute dirty pages fairly across zones
The maximum number of dirty pages that exist in the system at any time is
determined by a number of pages considered dirtyable and a user-configured
percentage of those, or an absolute number in bytes.
This number of dirtyable pages is the sum of memory provided by all the
zones in the system minus their lowmem reserves and high watermarks, so
that the system can retain a healthy number of free pages without having
to reclaim dirty pages.
But there is a flaw in that we have a zoned page allocator which does not
care about the global state but rather the state of individual memory
zones. And right now there is nothing that prevents one zone from filling
up with dirty pages while other zones are spared, which frequently leads
to situations where kswapd, in order to restore the watermark of free
pages, does indeed have to write pages from that zone's LRU list. This
can interfere so badly with IO from the flusher threads that major
filesystems (btrfs, xfs, ext4) mostly ignore write requests from reclaim
already, taking away the VM's only possibility to keep such a zone
balanced, aside from hoping the flushers will soon clean pages from that
zone.
Enter per-zone dirty limits. They are to a zone's dirtyable memory what
the global limit is to the global amount of dirtyable memory, and try to
make sure that no single zone receives more than its fair share of the
globally allowed dirty pages in the first place. As the number of pages
considered dirtyable excludes the zones' lowmem reserves and high
watermarks, the maximum number of dirty pages in a zone is such that the
zone can always be balanced without requiring page cleaning.
As this is a placement decision in the page allocator and pages are
dirtied only after the allocation, this patch allows allocators to pass
__GFP_WRITE when they know in advance that the page will be written to and
become dirty soon. The page allocator will then attempt to allocate from
the first zone of the zonelist - which on NUMA is determined by the task's
NUMA memory policy - that has not exceeded its dirty limit.
At first glance, it would appear that the diversion to lower zones can
increase pressure on them, but this is not the case. With a full high
zone, allocations will be diverted to lower zones eventually, so it is
more of a shift in timing of the lower zone allocations. Workloads that
previously could fit their dirty pages completely in the higher zone may
be forced to allocate from lower zones, but the amount of pages that
"spill over" are limited themselves by the lower zones' dirty constraints,
and thus unlikely to become a problem.
For now, the problem of unfair dirty page distribution remains for NUMA
configurations where the zones allowed for allocation are in sum not big
enough to trigger the global dirty limits, wake up the flusher threads and
remedy the situation. Because of this, an allocation that could not
succeed on any of the considered zones is allowed to ignore the dirty
limits before going into direct reclaim or even failing the allocation,
until a future patch changes the global dirty throttling and flusher
thread activation so that they take individual zone states into account.
Test results
15M DMA + 3246M DMA32 + 504 Normal = 3765M memory
40% dirty ratio
16G USB thumb drive
10 runs of dd if=/dev/zero of=disk/zeroes bs=32k count=$((10 << 15))
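A hedged sketch of the caller side: a page-cache allocation known to be dirtied shortly can pass the new flag so the allocator applies the per-zone dirty placement described above (the surrounding context is illustrative):

        /* user data is about to be copied in, so this page will be dirtied */
        gfp_t gfp = mapping_gfp_mask(mapping) | __GFP_WRITE;

        page = __page_cache_alloc(gfp);
        if (!page)
                return -ENOMEM;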
Johannes Weiner [Tue, 10 Jan 2012 23:07:44 +0000 (15:07 -0800)]
mm: writeback: cleanups in preparation for per-zone dirty limits
The next patch will introduce per-zone dirty limiting functions in
addition to the traditional global dirty limiting.
Rename determine_dirtyable_memory() to global_dirtyable_memory() before
adding the zone-specific version, and fix up its documentation.
Also, move the functions to determine the dirtyable memory and the
function to calculate the dirty limit based on that together so that their
relationship is more apparent and that they can be commented on as a
group.
Signed-off-by: Johannes Weiner <jweiner@redhat.com> Reviewed-by: Minchan Kim <minchan.kim@gmail.com> Acked-by: Mel Gorman <mel@suse.de> Reviewed-by: Michal Hocko <mhocko@suse.cz> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Christoph Hellwig <hch@infradead.org> Cc: Wu Fengguang <fengguang.wu@intel.com> Cc: Dave Chinner <david@fromorbit.com> Cc: Jan Kara <jack@suse.cz> Cc: Shaohua Li <shaohua.li@intel.com> Cc: Rik van Riel <riel@redhat.com> Cc: Chris Mason <chris.mason@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Tue, 10 Jan 2012 23:07:42 +0000 (15:07 -0800)]
mm: exclude reserved pages from dirtyable memory
Per-zone dirty limits try to distribute page cache pages allocated for
writing across zones in proportion to the individual zone sizes, to reduce
the likelihood of reclaim having to write back individual pages from the
LRU lists in order to make progress.
This patch:
The amount of dirtyable pages should not include the full number of free
pages: there is a number of reserved pages that the page allocator and
kswapd always try to keep free.
The closer (reclaimable pages - dirty pages) is to the number of reserved
pages, the more likely it becomes for reclaim to run into dirty pages.
This patch introduces a per-zone dirty reserve that takes both the lowmem
reserve as well as the high watermark of the zone into account, and a
global sum of those per-zone values that is subtracted from the global
amount of dirtyable pages. The lowmem reserve is unavailable to page
cache allocations and kswapd tries to keep the high watermark free. We
don't want to end up in a situation where reclaim has to clean pages in
order to balance zones.
Not treating reserved pages as dirtyable on a global level is only a
conceptual fix. In reality, dirty pages are not distributed equally
across zones and reclaim runs into dirty pages on a regular basis.
But it is important to get this right before tackling the problem on a
per-zone level, where the distance between reclaim and the dirty pages is
mostly much smaller in absolute numbers.
[akpm@linux-foundation.org: fix highmem build] Signed-off-by: Johannes Weiner <jweiner@redhat.com> Reviewed-by: Rik van Riel <riel@redhat.com> Reviewed-by: Michal Hocko <mhocko@suse.cz> Reviewed-by: Minchan Kim <minchan.kim@gmail.com> Acked-by: Mel Gorman <mgorman@suse.de> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Christoph Hellwig <hch@infradead.org> Cc: Wu Fengguang <fengguang.wu@intel.com> Cc: Dave Chinner <david@fromorbit.com> Cc: Jan Kara <jack@suse.cz> Cc: Shaohua Li <shaohua.li@intel.com> Cc: Chris Mason <chris.mason@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David Rientjes [Tue, 10 Jan 2012 23:07:38 +0000 (15:07 -0800)]
mm, debug: test for online nid when allocating on single node
Calling alloc_pages_exact_node() means the allocation only passes the
zonelist of a single node into the page allocator. If that node isn't
online, its zonelist may never have been initialized, causing a strange
oops that may not immediately be clear.
I recently debugged an issue where node 0 wasn't online and an allocator
was passing 0 to alloc_pages_exact_node() and it resulted in a NULL
pointer on zonelist->_zoneref. If CONFIG_DEBUG_VM is enabled, though, it
would be nice to catch this a bit earlier.
Signed-off-by: David Rientjes <rientjes@google.com> Acked-by: Mel Gorman <mgorman@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
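A rough sketch of the kind of CONFIG_DEBUG_VM check described (not necessarily the exact gfp.h hunk):

static inline struct page *alloc_pages_exact_node(int nid, gfp_t gfp_mask,
                                                  unsigned int order)
{
        /* catch a bogus or offline node id before the allocator
         * dereferences an uninitialized zonelist */
        VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES || !node_online(nid));

        return __alloc_pages(gfp_mask, order, node_zonelist(nid, gfp_mask));
}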
Shawn Bohrer [Tue, 10 Jan 2012 23:07:35 +0000 (15:07 -0800)]
fadvise: only initiate writeback for specified range with FADV_DONTNEED
Previously POSIX_FADV_DONTNEED would start writeback for the entire file
when the bdi was not write congested. This negatively impacts performance
if the file contains dirty pages outside of the requested range. This
change uses __filemap_fdatawrite_range() to only initiate writeback for
the requested range.
Signed-off-by: Shawn Bohrer <sbohrer@rgmadvisors.com> Acked-by: Johannes Weiner <jweiner@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
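A hedged sketch of the before/after in the POSIX_FADV_DONTNEED case (the byte-range variable names are illustrative):

        /* before: kicked off writeback for the whole file */
        filemap_flush(mapping);

        /* after: only start writeback on the byte range the caller named */
        __filemap_fdatawrite_range(mapping, offset, endbyte, WB_SYNC_NONE);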
Disable slub debug facilities and allocate slabs at minimal order when
debug_guardpage_minorder > 0, to increase the probability of catching
random memory corruption via a CPU exception.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Cc: "Rafael J. Wysocki" <rjw@sisk.pl> Cc: Andrea Arcangeli <aarcange@redhat.com> Acked-by: Christoph Lameter <cl@linux.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Stanislaw Gruszka <sgruszka@redhat.com> Cc: Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When debugging with CONFIG_DEBUG_PAGEALLOC and debug_guardpage_minorder >
0, we have a lot of free pages that are not marked as such. The snapshot
code accounts them as saveable, which causes hibernate memory
preallocation to fail.
It is pretty hard to make hibernate allocation succeed with
debug_guardpage_minorder=1. This change at least makes it possible when
the system has a relatively large amount of RAM.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Acked-by: Rafael J. Wysocki <rjw@sisk.pl> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Christoph Lameter <cl@linux-foundation.org> Cc: Mel Gorman <mgorman@suse.de> Cc: Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
With CONFIG_DEBUG_PAGEALLOC configured, the CPU will generate an exception
on access (read,write) to an unallocated page, which permits us to catch
code which corrupts memory. However, the kernel tries to maximise memory
usage, hence there are usually few free pages in the system and buggy code
usually corrupts some crucial data.
This patch changes the buddy allocator to keep more free/protected pages
and to interlace free/protected and allocated pages to increase the
probability of catching corruption.
When the kernel is compiled with CONFIG_DEBUG_PAGEALLOC,
debug_guardpage_minorder defines the minimum order used by the page
allocator to grant a request. The requested size will be returned with
the remaining pages used as guard pages.
The default value of debug_guardpage_minorder is zero: no change from
current behaviour.
[akpm@linux-foundation.org: tweak documentation, s/flg/flag/] Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: "Rafael J. Wysocki" <rjw@sisk.pl> Cc: Christoph Lameter <cl@linux-foundation.org> Cc: Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David Daney [Tue, 10 Jan 2012 23:07:25 +0000 (15:07 -0800)]
kernel.h: add BUILD_BUG() macro
We can place this in definitions that we expect the compiler to remove by
dead code elimination. If this assertion fails, we get a nice error
message at build time.
The GCC function attribute error("message") was added in version 4.3, so
we define a new macro __linktime_error(message) to expand to this for
GCC-4.3 and later. This will give us an error diagnostic from the
compiler on the line that fails. For other compilers
__linktime_error(message) expands to nothing, and we have to be content
with a link time error, but at least we will still get a build error.
BUILD_BUG() expands to a call to the undefined function
__build_bug_failed() and will fail at link time if the compiler ever emits
code for it. On GCC 4.3 and later, __attribute__((error(...))) is used so
that the failure is reported at compile time instead.
Signed-off-by: David Daney <david.daney@cavium.com> Acked-by: David Rientjes <rientjes@google.com> Cc: DM <dm.n9107@gmail.com> Cc: Ralf Baechle <ralf@linux-mips.org> Acked-by: David Howells <dhowells@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
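A usage sketch (the helper is hypothetical): BUILD_BUG() goes in a branch that constant folding is expected to eliminate; if the compiler ever has to emit it, the build (or, before GCC 4.3, the link) fails:

static inline unsigned int bytes_to_bits(unsigned int bytes)
{
        switch (bytes) {
        case 1: return 8;
        case 2: return 16;
        case 4: return 32;
        default:
                /* only emitted if the compiler cannot prove that
                 * "bytes" is 1, 2 or 4 - then the build breaks */
                BUILD_BUG();
                return 0;
        }
}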
mm/hugetlb.c: fix virtual address handling in hugetlb fault
handle_mm_fault() passes the 'faulted' address to hugetlb_fault(). This
address is not aligned to a hugepage boundary.
Most of the functions for hugetlb pages are aware of that and calculate the
alignment themselves. However, some functions such as
copy_user_huge_page() and clear_huge_page() don't handle the alignment
themselves.
This patch makes hugetlb_fault() fix the alignment and pass an aligned
address (the address of the faulted hugepage) to those functions.
[akpm@linux-foundation.org: use &=] Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
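Per the "[use &=]" note above, the shape of the fix is roughly a single masking step early in hugetlb_fault() (hedged sketch):

        struct hstate *h = hstate_vma(vma);

        /* operate on the hugepage-aligned address from here on, so callees
         * such as copy_user_huge_page() and clear_huge_page() see an
         * aligned address */
        address &= huge_page_mask(h);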
Let's make it clear that we cannot race with other fault handlers thanks
to the hugetlb (global) mutex. Also make it clear that we want to keep the
pte_same checks anyway, to make a future transition away from the global
mutex easier.
Signed-off-by: Michal Hocko <mhocko@suse.cz> Cc: Hillf Danton <dhillf@gmail.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <jweiner@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hillf Danton [Tue, 10 Jan 2012 23:07:20 +0000 (15:07 -0800)]
hugetlb: detect race upon page allocation failure during COW
Currently we are not rechecking pte_same in hugetlb_cow after we take the
ptl lock again in the page allocation failure code path; we simply retry.
This is not an issue at the moment because the hugetlb fault path is
protected by hugetlb_instantiation_mutex, so we cannot race.
The original page is locked and so we cannot race even with the page
migration.
Let's add the pte_same check anyway as we want to be consistent with the
other check later in this function and be safe if we ever remove the
mutex.
[mhocko@suse.cz: reworded the changelog] Signed-off-by: Hillf Danton <dhillf@gmail.com> Signed-off-by: Michal Hocko <mhocko@suse.cz> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <jweiner@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm: account reaped page cache on inode cache pruning
Inode cache pruning indirectly reclaims page cache by invalidating mapping
pages. Account them in reclaim_state so that the memory reclaimer notices
this progress.
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org> Cc: Dave Chinner <david@fromorbit.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mel Gorman [Tue, 10 Jan 2012 23:07:15 +0000 (15:07 -0800)]
mm: avoid livelock on !__GFP_FS allocations
Colin Cross reported:
Under the following conditions, __alloc_pages_slowpath can loop forever:
gfp_mask & __GFP_WAIT is true
gfp_mask & __GFP_FS is false
reclaim and compaction make no progress
order <= PAGE_ALLOC_COSTLY_ORDER
These conditions happen very often during suspend and resume,
when pm_restrict_gfp_mask() effectively converts all GFP_KERNEL
allocations into __GFP_WAIT.
The oom killer is not run because gfp_mask & __GFP_FS is false,
but should_alloc_retry will always return true when order is less
than PAGE_ALLOC_COSTLY_ORDER.
In his fix, he avoided retrying the allocation if reclaim made no progress
and __GFP_FS was not set. The problem is that this would result in
GFP_NOIO allocations failing where they previously succeeded, which would
be very unfortunate.
The big difference between GFP_NOIO and suspend converting GFP_KERNEL to
behave like GFP_NOIO is that normally flushers will be cleaning pages and
kswapd reclaims pages allowing GFP_NOIO to succeed after a short delay.
The same does not necessarily apply during suspend as the storage device
may be suspended.
This patch special cases the suspend case to fail the page allocation if
reclaim cannot make progress and adds some documentation on how
gfp_allowed_mask is currently used. Failing allocations like this may
cause suspend to abort but that is better than a livelock.
[mgorman@suse.de: Rework fix to be suspend specific]
[rientjes@google.com: Move suspended device check to should_alloc_retry] Reported-by: Colin Cross <ccross@android.com> Signed-off-by: Mel Gorman <mgorman@suse.de> Acked-by: David Rientjes <rientjes@google.com> Cc: Minchan Kim <minchan.kim@gmail.com> Cc: Pekka Enberg <penberg@cs.helsinki.fi> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mel Gorman [Tue, 10 Jan 2012 23:07:14 +0000 (15:07 -0800)]
mm: reduce the amount of work done when updating min_free_kbytes
When min_free_kbytes is updated, some pageblocks are marked
MIGRATE_RESERVE. Ordinarily, this work is unnoticeable as it happens early
in boot, but on large machines with 1TB of memory it has been reported to
delay boot times, probably due to the NUMA distances involved.
The bulk of the work is due to calling pageblock_is_reserved() an
unnecessary number of times and accessing far more struct page metadata
than is necessary. This patch significantly reduces the amount of work
done by setup_zone_migrate_reserve(), improving boot times on 1TB machines.
Jacobo Giralt [Tue, 10 Jan 2012 23:07:11 +0000 (15:07 -0800)]
mm: migrate: one less atomic operation
migrate_page_move_mapping() drops a reference from the old page after
unfreezing its counter. Both operations can be merged into a single
atomic operation by directly unfreezing to one less reference.
The same applies to migrate_huge_page_move_mapping().
Signed-off-by: Jacobo Giralt <jacobo.giralt@gmail.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Minchan Kim <minchan.kim@gmail.com> Cc: Johannes Weiner <jweiner@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Rename mm_page_free_direct into mm_page_free and mm_pagevec_free into
mm_page_free_batched
Since v2.6.33-5426-gc475dab the kernel triggers mm_page_free_direct for
all freed pages, not only for directly freed ones. So let's name it
properly. For pages freed via page-list we also trigger the
mm_page_free_batched event.
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org> Cc: Mel Gorman <mel@csn.ul.ie> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Reviewed-by: Minchan Kim <minchan.kim@gmail.com> Cc: Hugh Dickins <hughd@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch adds a helper, free_hot_cold_page_list(), to free a list of
0-order pages. It frees pages directly from the list, without a temporary
page-vector. It also calls trace_mm_pagevec_free() to simulate
pagevec_free() behaviour.
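A hedged sketch of what such a helper looks like (the real one also emits the trace event mentioned, later renamed to mm_page_free_batched by the entry above):

void free_hot_cold_page_list(struct list_head *list, int cold)
{
        struct page *page, *next;

        list_for_each_entry_safe(page, next, list, lru) {
                trace_mm_pagevec_free(page, cold);
                free_hot_cold_page(page, cold);
        }
}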
vmscan: activate executable pages after first usage
Logic added in commit 8cab4754d24a0 ("vmscan: make mapped executable pages
the first class citizen") was noticeably weakened in commit 645747462435d84 ("vmscan: detect mapped file pages used only once").
Currently these pages can become "first class citizens" only after their
second usage. After this patch page_check_references() will activate them
after the first usage, and executable code gets a yet better chance to
stay in memory.
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Minchan Kim <minchan.kim@gmail.com> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Wu Fengguang <fengguang.wu@intel.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Shaohua Li <shaohua.li@intel.com> Cc: Rik van Riel <riel@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit 645747462435 ("vmscan: detect mapped file pages used only once")
greatly decreases the lifetime of single-used mapped file pages.
Unfortunately it also decreases the lifetime of all shared mapped file
pages, because after commit bf3f3bc5e7347 ("mm: don't mark_page_accessed
in fault path") the page-fault handler does not mark the page active or
even referenced.
Thus page_check_references() activates a file page only if it was used
twice while it sat on the inactive list, whereas it activates anon pages
after the first access. The inactive list can be small enough that the
reclaimer accidentally throws away a widely used page if it wasn't used
twice within a short period.
After this patch page_check_references() also activates a file-mapped page
at the first inactive list scan if the page is already used multiple times
via several ptes.
I found this while trying to fix a degradation in rhel6 (~2.6.32) relative
to rhel5 (~2.6.18). There is a complete mess with >100 web/mail/spam/ftp
containers: they share all their files, but there are a lot of anonymous
pages: ~500mb of shared file-mapped memory and 15-20Gb of non-shared
anonymous memory. In this situation major page faults are very costly,
because all containers share the same pages. Under my load the kernel
created disproportionate pressure on the file memory compared with the
anonymous memory; they only equalized if I raised swappiness up to 150 =)
These patches didn't actually help a lot with my problem, but I saw a
noticeable (10-20x) reduction in the count and average time of major page
faults in file-mapped areas.
Actually both patches are fixes for commit v2.6.33-5448-g6457474, because
it was aimed at one scenario (singly used pages) but breaks the logic in
other scenarios (shared and/or executable pages).
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org> Acked-by: Pekka Enberg <penberg@kernel.org> Acked-by: Minchan Kim <minchan.kim@gmail.com> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Wu Fengguang <fengguang.wu@intel.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Shaohua Li <shaohua.li@intel.com> Cc: Rik van Riel <riel@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Tue, 10 Jan 2012 01:37:37 +0000 (17:37 -0800)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
vfs: new helper - d_make_root()
dcache: use a dispose list in select_parent
ceph: d_alloc_root() may fail
ext4: fix failure exits
isofs: inode leak on mount failure
Dave Chinner [Tue, 23 Aug 2011 08:56:24 +0000 (18:56 +1000)]
dcache: use a dispose list in select_parent
select_parent() currently abuses the dentry cache LRU to provide
cleanup features for child dentries that need to be freed. It moves
them to the tail of the LRU, then tells shrink_dcache_parent() to
call __shrink_dcache_sb() to unconditionally move them to a dispose
list (as DCACHE_REFERENCED is ignored). __shrink_dcache_sb() has to
relock the dentries to move them off the LRU onto the dispose list,
but otherwise does not touch the dentries that select_parent() moved
to the tail of the LRU. It then passes the dispose list to
shrink_dentry_list(), which tries to free the dentries.
IOWs, the use of __shrink_dcache_sb() is superfluous - we can build
exactly the same list of dentries for disposal directly in
select_parent() and call shrink_dentry_list() instead of calling
__shrink_dcache_sb() to do that. This means that we avoid long holds
on the lru lock walking the LRU moving dentries to the dispose list.
We also avoid the need to relock each dentry just to move it off the
LRU, reducing the number of times we lock each dentry to dispose of
them in shrink_dcache_parent() from 3 to 2.
Further, we remove one of the two callers of __shrink_dcache_sb().
This also means that __shrink_dcache_sb() can be moved back into
prune_dcache_sb() and we no longer have to handle referenced
dentries conditionally, simplifying the code.
Signed-off-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
igmp: Avoid zero delay when receiving odd mixture of IGMP queries
netdev: make net_device_ops const
bcm63xx: make ethtool_ops const
usbnet: make ethtool_ops const
net: Fix build with INET disabled.
net: introduce netif_addr_lock_nested() and call if when appropriate
net: correct lock name in dev_[uc/mc]_sync documentations.
net: sk_update_clone is only used in net/core/sock.c
8139cp: fix missing napi_gro_flush.
pktgen: set correct max and min in pktgen_setup_inject()
smsc911x: Unconditionally include linux/smscphy.h in smsc911x.h
asix: fix infinite loop in rx_fixup()
net: Default UDP and UNIX diag to 'n'.
r6040: fix typo in use of MCR0 register bits
net: fix sock_clone reference mismatch with tcp memcontrol
Linus Torvalds [Mon, 9 Jan 2012 22:44:15 +0000 (14:44 -0800)]
Merge tag 'clk' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
clock management changes for i.MX
Another simple series related to clock management, this time only for
imx.
* tag 'clk' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: mxs: select HAVE_CLK_PREPARE for clock
clk: add config option HAVE_CLK_PREPARE into Kconfig
ASoC: mxs-saif: convert to clk_prepare/clk_unprepare
video: mxsfb: convert to clk_prepare/clk_unprepare
serial: mxs-auart: convert to clk_prepare/clk_unprepare
net: flexcan: convert to clk_prepare/clk_unprepare
mtd: gpmi-lib: convert to clk_prepare/clk_unprepare
mmc: mxs-mmc: convert to clk_prepare/clk_unprepare
dma: mxs-dma: convert to clk_prepare/clk_unprepare
net: fec: add clk_prepare/clk_unprepare
ARM: mxs: convert platform code to clk_prepare/clk_unprepare
clk: add helper functions clk_prepare_enable and clk_disable_unprepare
Fix up trivial conflicts in drivers/net/ethernet/freescale/fec.c due to
commit 0ebafefcaa7a ("net: fec: add clk_prepare/clk_unprepare") clashing
trivially with commit e163cc97f9ac ("net/fec: fix the .remove code").
Linus Torvalds [Mon, 9 Jan 2012 22:40:48 +0000 (14:40 -0800)]
Merge tag 'timer' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
timer changes for msm
A very simple series. We used to have more churn in the timer
area, so this is kept separate. Will probably put this into the
drivers series next time.
* tag 'timer' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
msm: timer: Use clockevents_config_and_register()
msm: timer: Setup interrupt after registering clockevent
msm: timer: Remove SoC specific #ifdefs
msm: timer: Remove msm_clocks[] and simplify code
msm: timer: Fix ONESHOT mode interrupts
msm: timer: Use GPT for clockevents and DGT for clocksource
msm: timer: Cleanup #includes and #defines
msm: timer: Tighten #ifdef for local timer support
Linus Torvalds [Mon, 9 Jan 2012 22:39:59 +0000 (14:39 -0800)]
Merge tag 'pm' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
power management changes for omap and imx
A significant part of the changes for these two platforms went into
power management, so they are split out into a separate branch.
* tag 'pm' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (65 commits)
ARM: imx6: remove __CPUINIT annotation from v7_invalidate_l1
ARM: imx6: fix v7_invalidate_l1 by adding I-Cache invalidation
ARM: imx6q: resume PL310 only when CACHE_L2X0 defined
ARM: imx6q: build pm code only when CONFIG_PM selected
ARM: mx5: use generic irq chip pm interface for pm functions on
ARM: omap: pass minimal SoC/board data for UART from dt
arm/dts: Add minimal device tree support for omap2420 and omap2430
omap-serial: Add minimal device tree support
omap-serial: Use default clock speed (48Mhz) if not specified
omap-serial: Get rid of all pdev->id usage
ARM: OMAP2+: hwmod: Add a new flag to handle hwmods left enabled at init
ARM: OMAP4: PRM: use PRCM interrupt handler
ARM: OMAP3: pm: use prcm chain handler
ARM: OMAP: hwmod: add support for selecting mpu_irq for each wakeup pad
ARM: OMAP2+: mux: add support for PAD wakeup interrupts
ARM: OMAP: PRCM: add suspend prepare / finish support
ARM: OMAP: PRCM: add support for chain interrupt handler
ARM: OMAP3/4: PRM: add functions to read pending IRQs, PRM barrier
ARM: OMAP2+: hwmod: Add API to enable IO ring wakeup
ARM: OMAP2+: mux: add wakeup-capable hwmod mux entries to dynamic list
...
Linus Torvalds [Mon, 9 Jan 2012 22:39:22 +0000 (14:39 -0800)]
Merge tag 'drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Driver specific changes
Again, a lot of platforms have changes in here: pxa, samsung, omap,
at91, imx, ...
* tag 'drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (54 commits)
ARM: sa1100: clean up of the clock support
ARM: pxa: add dummy clock for sa1100-rtc
RTC: sa1100: support sa1100, pxa and mmp soc families
RTC: sa1100: remove redundant code of setting alarm
RTC: sa1100: Clean out ost register
Input: zylonite-wm97xx - replace IRQ_GPIO() with gpio_to_irq()
pcmcia: pxa: replace IRQ_GPIO() with gpio_to_irq()
ARM: EXYNOS: Modified files for SPI consolidation work
ARM: S5P64X0: Enable SDHCI support
ARM: S5P64X0: Add lookup of sdhci-s3c clocks using generic names
ARM: S5P64X0: Add HSMMC setup for host Controller
ARM: EXYNOS: Add USB OHCI support to ORIGEN board
USB: Add Samsung Exynos OHCI diver
ARM: EXYNOS: Add USB OHCI support to SMDKV310 board
ARM: EXYNOS: Add USB OHCI device
net: macb: fix build break with !CONFIG_OF
i2c: tegra: Support DVC controller in device tree
i2c: tegra: Add __devinit/exit to probe/remove
net/at91_ether: use gpio_is_valid for phy IRQ line
ARM: at91/net: add macb ethernet controller in 9g45/9g20 DT
...
Linus Torvalds [Mon, 9 Jan 2012 22:38:51 +0000 (14:38 -0800)]
Merge tag 'devel' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
New feature development
This adds support for new features, and contains stuff from most
platforms. A number of these patches could have fit into other
branches, too, but were small enough not to cause too much
confusion here.
* tag 'devel' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (28 commits)
mfd/db8500-prcmu: remove support for early silicon revisions
ARM: ux500: fix the smp_twd clock calculation
ARM: ux500: remove support for early silicon revisions
ARM: ux500: update register files
ARM: ux500: register DB5500 PMU dynamically
ARM: ux500: update ASIC detection for U5500
ARM: ux500: support DB8520
ARM: picoxcell: implement watchdog restart
ARM: OMAP3+: hwmod data: Add the default clockactivity for I2C
ARM: OMAP3: hwmod data: disable multiblock reads on MMC1/2 on OMAP34xx/35xx <= ES2.1
ARM: OMAP: USB: EHCI and OHCI hwmod structures for OMAP4
ARM: OMAP: USB: EHCI and OHCI hwmod structures for OMAP3
ARM: OMAP: hwmod data: Add support for AM35xx UART4/ttyO3
ARM: Orion: Remove address map info from all platform data structures
ARM: Orion: Get address map from plat-orion instead of via platform_data
ARM: Orion: mbus_dram_info consolidation
ARM: Orion: Consolidate the address map setup
ARM: Kirkwood: Add configuration for MPP12 as GPIO
ARM: Kirkwood: Recognize A1 revision of 6282 chip
ARM: ux500: update the MOP500 GPIO assignments
...
Linus Torvalds [Mon, 9 Jan 2012 22:37:41 +0000 (14:37 -0800)]
Merge tag 'boards' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Board-level changes
This adds and extends support for specific boards on a number of
ARM platforms: omap, imx, samsung, tegra, ...
* tag 'boards' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (49 commits)
Enable 32 bit flash support for iMX21ADS board
ARM: mx31pdk: Add MC13783 RTC support
iomux-mx25: configuration to support CSPI3 on CSI pins
MX1:apf9328: Add i2c support
mioa701: add newly available DoC G3 chip
arm/tegra: remove __initdata annotation from pinmux tables
arm/tegra: Use bus notifiers to trigger pinmux setup
arm/tegra: Refactor board-*-pinmux.c to share code
arm/tegra: Fix mistake in Trimslice's pinmux
arm/tegra: Rework Seaboard-vs-Ventana pinmux table
arm/tegra: Remove useless entries from ventana_pinmux[]
arm/tegra: PCIe: Remove include of mach/pinmux.h
arm/tegra: Harmony PCIe: Don't touch pinmux
arm/tegra: Add AUXDATA for tegra-pinmux and tegra-gpio
arm/tegra: Split Seaboard GPIO table to allow for Ventana
ARM: imx6q: generate imx6q dtb files
arm/imx6q: Rename Sabreauto to Armadillo2
arm/imx6q-sabrelite: add enet phy ksz9021rn fixup
arm/imx6: add imx6q sabrelite board support
dts/imx: rename uart labels to consistent with hw spec
...
Linus Torvalds [Mon, 9 Jan 2012 22:33:17 +0000 (14:33 -0800)]
Merge tag 'soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
SoC-level changes for tegra and omap
This adds support for the new tegra30 SoC, as well as small
changes to support minor variations of existing omap SoCs.
* tag 'soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (26 commits)
arm/tegra: Compile tegra_dt_init_irq only when CONFIG_OF
arm/tegra: Make MACH_TEGRA_DT depend on ARCH_TEGRA_2x_SOC
arm/tegra: Delete tegra_init_clock()
arm/tegra: Fix section mismatch errors in tegra30 pinmux
arm/tegra: Fix section mismatch errors in tegra20 pinmux
arm/tegra: refresh defconfig for tegra30
arm/tegra: add support for tegra30 based board cardhu
arm/tegra: implement support for tegra30
arm/tegra: pinmux tables and definitions for tegra30
arm/tegra: add new fields to struct tegra_pingroup_desc
arm/tegra: prepare pinmux code for multiple tegra variants
arm/tegra: rename tegra20 pinmux files
arm/tegra: generalize L2 cache initialization
arm/tegra: use PMC reset
arm/tegra: rename board-dt.c to board-dt-tegra20.c
arm/tegra: prepare early init for multiple tegra variants
arm/tegra: don't export clk_measure_input_freq
arm/tegra: prepare clock code for multiple tegra variants
arm/tegra: cleanup tegra20 support
arm/tegra: clk_get should not be fatal
...
Fix up trivial conflict in arch/arm/mach-tegra/board-dt-tegra20.c