Daniel Wagner [Thu, 22 Sep 2016 11:54:29 +0000 (13:54 +0200)]
NFS: cache_lib: use complete() instead of complete_all()
There is only one waiter for the completion, therefore there
is no need to use complete_all(). Let's make that clear by
using complete() instead of complete_all().
The generic caching code from sunrpc calls revisit() only once.
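For illustration only, the single-waiter pattern this relies on looks roughly like
this (a sketch, not the actual cache_lib code):

    /* Sketch: one sleeper, one wake-up -- complete() is sufficient. */
    struct completion done;

    init_completion(&done);

    /* caller side: the only task that ever sleeps on this completion */
    wait_for_completion(&done);

    /* revisit() side, runs once: complete() wakes that single waiter;
     * complete_all() would be overkill here */
    complete(&done);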
Daniel Wagner [Thu, 22 Sep 2016 11:54:28 +0000 (13:54 +0200)]
NFS: direct: use complete() instead of complete_all()
There is only one waiter for the completion, therefore there
is no need to use complete_all(). Let's make that clear by
using complete() instead of complete_all().
nfs_file_direct_write() or nfs_file_direct_read() allocates a request
object via nfs_direct_req_alloc(), which initializes the
completion. The request object is then freed later in the exit path.
Between the initialization and the release, either
nfs_direct_write_schedule_iovec() or
nfs_direct_read_schedule_iovec() is called, which will asynchronously
process the request. The calling function waits via nfs_direct_wait()
until the async work has been done. Thus there is only one waiter on
the completion.
nfs_direct_pgio_init() and nfs_direct_read_completion() are passed via
function pointers to nfs pageio. The first function does reference
counting (get_dreq() and put_dreq()), which ensures that
nfs_direct_read_completion() and nfs_direct_read_schedule_iovec()
call the completion path only once.
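A simplified sketch of that single-shot guarantee (hypothetical names modelled on
get_dreq()/put_dreq(), not the actual NFS structures):

    #include <linux/atomic.h>
    #include <linux/completion.h>

    struct dreq_sketch {                    /* stand-in for nfs_direct_req */
            atomic_t count;
            struct completion completion;
    };

    /* Both the schedule path and the completion path drop a reference;
     * only the final put reaches complete(), so it fires exactly once. */
    static void put_dreq_sketch(struct dreq_sketch *dreq)
    {
            if (atomic_dec_and_test(&dreq->count))
                    complete(&dreq->completion);
    }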
Before we try to stash it in the dcache, we need to at least check
that the filename passed to us by the server is non-empty and doesn't
contain any illegal '\0' or '/' characters.
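Something along these lines is all that's needed (an illustrative helper, not the
exact check that was added):

    #include <linux/string.h>
    #include <linux/types.h>

    /* Reject empty names and names containing '\0' or '/'. */
    static bool server_name_is_sane(const char *name, unsigned int len)
    {
            return len != 0 &&
                   memchr(name, '\0', len) == NULL &&
                   memchr(name, '/', len) == NULL;
    }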
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Jeff Layton [Sat, 17 Sep 2016 22:17:39 +0000 (18:17 -0400)]
nfs: allow blocking locks to be awoken by lock callbacks
Add a waitqueue head to the client structure. Have clients set a wait
on that queue prior to requesting a lock from the server. If the lock
is blocked, then we can use that to wait for wakeups.
Note that we do need to do this "manually" since we need to set the
wait on the waitqueue prior to requesting the lock, but requesting a
lock can involve activities that can block.
However, only do that for NFSv4.1 locks, either by compiling out
all of the waitqueue handling when CONFIG_NFS_V4_1 is disabled, or
skipping all of it at runtime if we're dealing with v4.0, or v4.1
servers that don't send lock callbacks.
Note too that even when we expect to get a lock callback, RFC5661
section 20.11.4 is pretty clear that we still need to poll for them,
so we do still sleep on a timeout. We do however always poll at the
longest interval in that case.
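Roughly, the manual wait described above looks like this (a sketch with
approximate names; issue_lock_request() and NFS4_LOCK_POLL_TIMEOUT are made up
for illustration):

    /* Register on the client's waitqueue *before* sending LOCK, so a
     * CB_NOTIFY_LOCK callback arriving early is not lost. */
    DEFINE_WAIT_FUNC(wait, woken_wake_function);
    int status;

    add_wait_queue(&clp->cl_lock_waitq, &wait);
    status = issue_lock_request(state, request);     /* may block internally */
    if (status == -EAGAIN)                           /* server says: blocked */
            wait_woken(&wait, TASK_INTERRUPTIBLE, NFS4_LOCK_POLL_TIMEOUT);
    remove_wait_queue(&clp->cl_lock_waitq, &wait);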
Signed-off-by: Jeff Layton <jlayton@redhat.com>
[Anna: nfs4_retry_setlk() "status" should default to -ERESTARTSYS] Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Jeff Layton [Sat, 17 Sep 2016 22:17:37 +0000 (18:17 -0400)]
nfs: move nfs4_set_lock_state call into caller
We need to have this info set up before adding the waiter to the
waitqueue, so move this out of _nfs4_proc_setlk() and into the
caller. That's more efficient anyway, since we don't need to do
this more than once if we end up waiting on the lock.
Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Jeff Layton [Sat, 17 Sep 2016 22:17:33 +0000 (18:17 -0400)]
nfs: use safe, interruptible sleeps when waiting to retry LOCK
We actually want to use TASK_INTERRUPTIBLE sleeps when we're in the
process of polling for an NFSv4 lock. If there is a signal pending when
the task wakes up, then we'll be returning an error anyway. So, we might
as well wake up immediately for non-fatal signals as well. That allows
us to return to userland more quickly in that case, but won't change the
error that userland sees.
Also, there is no need to use the *_unsafe sleep variants here, as no
vfs-layer locks should be held at this point.
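For illustration, the retry delay can then simply be (a sketch; NFS4_POLL_DELAY is
a placeholder, not the driver's real symbol):

    /* Interruptible poll between LOCK retries: any pending signal ends the
     * sleep early, and the caller returns the error it would return anyway. */
    schedule_timeout_interruptible(NFS4_POLL_DELAY);
    if (signal_pending(current))
            return -ERESTARTSYS;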
Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
David Vrabel [Mon, 19 Sep 2016 12:58:30 +0000 (13:58 +0100)]
sunrpc: fix write space race causing stalls
Write space becoming available may race with putting the task to sleep
in xprt_wait_for_buffer_space(). The existing mechanism to avoid the
race does not work.
This (edited) partial trace illustrates the problem:
[2] Space becomes available, xs_write_space() clears the
SOCKWQ_ASYNC_NOSPACE bit.
[3] xprt_write_space() attempts to wake xprt->snd_task (== 43546), but
this has not yet been queued and the wake up is lost.
[4] xs_nospace() is called which calls xprt_wait_for_buffer_space()
which queues task 43546.
[5] The call to sk->sk_write_space() at the end of xs_nospace() (which
is supposed to handle the above race) does not call
xprt_write_space() as the SOCKWQ_ASYNC_NOSPACE bit is clear and
thus the task is not woken.
Fix the race by resetting the SOCKWQ_ASYNC_NOSPACE bit in xs_nospace()
so the second call to sk->sk_write_space() calls xprt_write_space().
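As I read it, the fix boils down to re-arming the bit before the final wakeup
check, roughly (a sketch, not the literal patch):

    /* In xs_nospace(), after queueing the task: re-set the "async nospace"
     * bit, then call sk_write_space() again.  If space was freed in the
     * meantime, xprt_write_space() now actually runs and wakes snd_task. */
    if (ret == -EAGAIN) {
            struct socket_wq *wq;

            rcu_read_lock();
            wq = rcu_dereference(sk->sk_wq);
            set_bit(SOCKWQ_ASYNC_NOSPACE, &wq->flags);
            rcu_read_unlock();

            sk->sk_write_space(sk);
    }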
Suggested-by: Trond Myklebust <trondmy@primarydata.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
cc: stable@vger.kernel.org # 4.4 Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Jeff Layton [Thu, 15 Sep 2016 18:40:49 +0000 (14:40 -0400)]
pnfs: add a new mechanism to select a layout driver according to an ordered list
Currently, the layout driver selection code always chooses the first one
from the list. That's not really ideal however, as the server can send
the list of layout types in any order that it likes. It's up to the
client to select the best one for its needs.
This patch adds an ordered list of preferred driver types and has the
selection code sort the list of available layout drivers according to it.
Any unrecognized layout type is sorted to the end of the list.
For now, the order of preference is hardcoded, but it should be possible
to make this configurable in the future.
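A sketch of a hardcoded preference table and ranking helper (the order and names
here are illustrative, not the actual pnfs code):

    #include <linux/kernel.h>
    #include <linux/nfs4.h>            /* enum pnfs_layouttype */

    /* Lower index = more preferred (example ordering only). */
    static const u32 ld_prefs[] = {
            LAYOUT_FLEX_FILES, LAYOUT_SCSI, LAYOUT_BLOCK_VOLUME,
            LAYOUT_OSD2_OBJECTS, LAYOUT_NFSV4_1_FILES,
    };

    /* Unrecognized layout types rank past the end and so sort last. */
    static int ld_rank(u32 id)
    {
            unsigned int i;

            for (i = 0; i < ARRAY_SIZE(ld_prefs); i++)
                    if (ld_prefs[i] == id)
                            return i;
            return ARRAY_SIZE(ld_prefs);
    }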
Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: J. Bruce Fields <bfields@fieldses.org> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:57:32 +0000 (10:57 -0400)]
xprtrdma: Support larger inline thresholds
The Version One default inline threshold is still 1KB. But allow
testing with thresholds up to 64KB.
This maximum is somewhat arbitrary. There's no fundamental
architectural limit I'm aware of, but it's good to keep the size of
Receive buffers reasonable. Now that Send can use a s/g list, a
Send buffer is only as large as each RPC requires. Receive buffers
are always the size of the inline threshold, however.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:57:24 +0000 (10:57 -0400)]
xprtrdma: Use gathered Send for large inline messages
An RPC Call message that is sent inline but that has a data payload
(ie, one or more items in rq_snd_buf's page list) must be "pulled
up":
- call_allocate has to reserve enough RPC Call buffer space to
accommodate the data payload
- call_transmit has to memcpy the rq_snd_buf's page list and tail
into its head iovec before it is sent
As the inline threshold is increased beyond its current 1KB default,
however, this means data payloads of more than a few KB are copied
by the host CPU. For example, if the inline threshold is increased
just to 4KB, then NFS WRITE requests up to 4KB would involve a
memcpy of the NFS WRITE's payload data into the RPC Call buffer.
This is an undesirable amount of participation by the host CPU.
The inline threshold may be much larger than 4KB in the future,
after negotiation with a peer server.
Instead of copying the components of rq_snd_buf into its head iovec,
construct a gather list of these components, and send them all in
place. The same approach is already used in the Linux server's
RPC-over-RDMA reply path.
This mechanism also eliminates the need for rpcrdma_tail_pullup,
which is used to manage the XDR pad and trailing inline content when
a Read list is present.
This requires that the pages in rq_snd_buf's page list be DMA-mapped
during marshaling, and unmapped when a data-bearing RPC is
completed. This is slightly less efficient for very small I/O
payloads, but significantly more efficient as data payload size and
inline threshold increase past a kilobyte.
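At the verbs level, the gather is conceptually just a multi-SGE Send (a sketch;
head_addr, head_len, lkey, qp and rc are placeholders, and the real marshaling
code is more involved):

    /* Describe head, page list and tail as separate SGEs and post them in
     * one Send, instead of copying everything into the head iovec first. */
    struct ib_sge sge[3];
    struct ib_send_wr wr, *bad_wr;

    memset(&wr, 0, sizeof(wr));
    sge[0].addr   = head_addr;     /* DMA address of rq_snd_buf's head */
    sge[0].length = head_len;
    sge[0].lkey   = lkey;
    /* sge[1]: the DMA-mapped page list; sge[2]: the tail, set up likewise */

    wr.opcode  = IB_WR_SEND;
    wr.sg_list = sge;
    wr.num_sge = 3;
    rc = ib_post_send(qp, &wr, &bad_wr);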
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:57:07 +0000 (10:57 -0400)]
xprtrdma: Client-side support for rpcrdma_connect_private
Send an RDMA-CM private message on connect, and look for one during
a connection-established event.
Both sides can communicate their various implementation limits.
Implementations that don't support this sideband protocol ignore it.
Once the client knows the server's inline threshold maxima, it can
adjust the use of Reply chunks, and eliminate most use of Position
Zero Read chunks. Moderately-sized I/O can be done using a pure
inline RDMA Send instead of RDMA operations that require memory
registration.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:56:59 +0000 (10:56 -0400)]
rpcrdma: RDMA/CM private message data structure
Introduce data structure used by both client and server to exchange
implementation details during RDMA/CM connection establishment.
This is an experimental out-of-band exchange between Linux
RPC-over-RDMA Version One implementations, replacing the deprecated
CCP (see RFC 5666bis). The purpose of this extension is to enable
prototyping of features that might be introduced in a subsequent
version of RPC-over-RDMA.
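One plausible shape for such a CM private-data blob (field names and sizes here
are illustrative, not necessarily the exact structure introduced):

    /* Exchanged in the RDMA/CM private data on connect; peers that don't
     * recognize the magic simply ignore it. */
    struct rpcrdma_cm_private_sketch {
            __be32  cp_magic;        /* identifies this extension */
            u8      cp_version;      /* format version */
            u8      cp_flags;        /* e.g. remote-invalidation support */
            u8      cp_send_size;    /* encoded inline send threshold */
            u8      cp_recv_size;    /* encoded inline receive threshold */
    } __packed;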
Suggested by Christoph Hellwig and Devesh Sharma.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:56:51 +0000 (10:56 -0400)]
xprtrdma: Move recv_wr to struct rpcrdma_rep
Clean up: The fields in the recv_wr do not vary. There is no need to
initialize them before each ib_post_recv(). This removes a large-ish
data structure from the stack.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:56:43 +0000 (10:56 -0400)]
xprtrdma: Move send_wr to struct rpcrdma_req
Clean up: Most of the fields in each send_wr do not vary. There is
no need to initialize them before each ib_post_send(). This removes
a large-ish data structure from the stack.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:56:18 +0000 (10:56 -0400)]
xprtrdma: Delay DMA mapping Send and Receive buffers
Currently, each regbuf is allocated and DMA mapped at the same time.
This is done during transport creation.
When a device driver is unloaded, every DMA-mapped buffer in use by
a transport has to be unmapped, and then remapped to the new
device if the driver is loaded again. Remapping will have to be done
_after_ the connect worker has set up the new device.
But there's an ordering problem:
call_allocate, which invokes xprt_rdma_allocate which calls
rpcrdma_alloc_regbuf to allocate Send buffers, happens _before_
the connect worker can run to set up the new device.
Instead, at transport creation, allocate each buffer, but leave it
unmapped. Once the RPC carries these buffers into ->send_request, by
which time a transport connection should have been established,
check to see that the RPC's buffers have been DMA mapped. If not,
map them there.
When device driver unplug support is added, it will simply unmap all
the transport's regbufs, but it doesn't have to deallocate the
underlying memory.
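The check in the send path then looks roughly like this (a sketch; field names
are approximate):

    /* Map the regbuf the first time an RPC actually carries it into
     * ->send_request; by then the connect worker has set up the device. */
    if (!rb->rg_device) {
            rb->rg_iov.addr = ib_dma_map_single(device, rb->rg_base,
                                                rb->rg_iov.length,
                                                rb->rg_direction);
            if (ib_dma_mapping_error(device, rb->rg_iov.addr))
                    return false;
            rb->rg_device = device;
    }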
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:56:10 +0000 (10:56 -0400)]
xprtrdma: Replace DMA_BIDIRECTIONAL
The use of DMA_BIDIRECTIONAL is discouraged by DMA-API.txt.
Fortunately, xprtrdma now knows which direction I/O is going as
soon as it allocates each regbuf.
The RPC Call and Reply buffers are no longer the same regbuf. They
can each be labeled correctly now. The RPC Reply buffer is never
part of either a Send or Receive WR, but it can be part of a Reply
chunk, which is mapped and registered via ->ro_map. So it is not
DMA mapped when it is allocated (DMA_NONE), to avoid a
double-mapping.
Since Receive buffers are no longer DMA_BIDIRECTIONAL and their
contents are never modified by the host CPU, DMA-API-HOWTO.txt
suggests that a DMA sync before posting each buffer should be
unnecessary. (See my_card_interrupt_handler).
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:56:02 +0000 (10:56 -0400)]
xprtrdma: Use smaller buffers for RPC-over-RDMA headers
Commit 949317464bc2 ("xprtrdma: Limit number of RDMA segments in
RPC-over-RDMA headers") capped the number of chunks that may appear
in RPC-over-RDMA headers. The maximum header size can be estimated
and fixed to avoid allocating buffer space that is never used.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:55:53 +0000 (10:55 -0400)]
xprtrdma: Initialize separate RPC call and reply buffers
RPC-over-RDMA needs to separate its RPC call and reply buffers.
o When an RPC Call is sent, rq_snd_buf is DMA mapped for an RDMA
Send operation using DMA_TO_DEVICE
o If the client expects a large RPC reply, it DMA maps rq_rcv_buf
as part of a Reply chunk using DMA_FROM_DEVICE
The two mappings are for data movement in opposite directions.
DMA-API.txt suggests that if these mappings share a DMA cacheline,
bad things can happen. This could occur in the final bytes of
rq_snd_buf and the first bytes of rq_rcv_buf if the two buffers
happen to share a DMA cacheline.
On x86_64 the DMA cacheline size is typically 8 bytes, and RPC call
messages are usually much smaller than the send buffer, so this
hasn't been a noticeable problem. But the DMA cacheline size can be
larger on other platforms.
Also, often rq_rcv_buf starts most of the way into a page, thus
an additional RDMA segment is needed to map and register the end of
that buffer. Try to avoid that scenario to reduce the cost of
registering and invalidating Reply chunks.
Instead of carrying a single regbuf that covers both rq_snd_buf and
rq_rcv_buf, each struct rpcrdma_req now carries one regbuf for
rq_snd_buf and one regbuf for rq_rcv_buf.
Some incidental changes worth noting:
- To clear out some spaghetti, refactor xprt_rdma_allocate.
- The value stored in rg_size is the same as the value stored in
the iov.length field, so eliminate rg_size
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:55:45 +0000 (10:55 -0400)]
SUNRPC: Add a transport-specific private field in rpc_rqst
Currently there's a hidden and indirect mechanism for finding the
rpcrdma_req that goes with an rpc_rqst. It depends on getting from
the rq_buffer pointer in struct rpc_rqst to the struct
rpcrdma_regbuf that controls that buffer, and then to the struct
rpcrdma_req it goes with.
This was done back in the day to avoid the need to add a per-rqst
pointer or to alter the buf_free API when support for RPC-over-RDMA
was introduced.
I'm about to change the way regbuf's work to support larger inline
thresholds. Now is a good time to replace this indirect mechanism
with something that is more straightforward. I guess this should be
considered a clean up.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:55:37 +0000 (10:55 -0400)]
SUNRPC: Separate buffer pointers for RPC Call and Reply messages
For xprtrdma, the RPC Call and Reply buffers are involved in real
I/O operations.
To start with, the DMA direction of the I/O for a Call is opposite
that of a Reply.
In the current arrangement, the Reply buffer address is on a
four-byte alignment just past the Call buffer. It would be friendlier
on some platforms if it were at a DMA cache alignment instead.
Because the current arrangement allocates a single memory region
which contains both buffers, the RPC Reply buffer often contains a
page boundary in it when the Call buffer is large enough (which is
frequent).
It would be a little nicer for setting up DMA operations (and
possible registration of the Reply buffer) if the two buffers were
separated, well-aligned, and contained as few page boundaries as
possible.
Now, I could just pad out the single memory region used for the pair
of buffers. But frequently that would mean a lot of unused space to
ensure the Reply buffer did not have a page boundary.
Add a separate pointer to rpc_rqst that points right to the RPC
Reply buffer. This makes no difference to xprtsock, but it will help
xprtrdma in subsequent patches.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:55:29 +0000 (10:55 -0400)]
SUNRPC: Generalize the RPC buffer release API
xprtrdma needs to allocate the Call and Reply buffers separately.
TBH, the reliance on using a single buffer for the pair of XDR
buffers is transport implementation-specific.
Instead of passing just the rq_buffer into the buf_free method, pass
the task structure and let buf_free take care of freeing both
XDR buffers at once.
There's a micro-optimization here. In the common case, both
xprt_release and the transport's buf_free method were checking if
rq_buffer was NULL. Now the check is done only once per RPC.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Chuck Lever [Thu, 15 Sep 2016 14:55:20 +0000 (10:55 -0400)]
SUNRPC: Generalize the RPC buffer allocation API
xprtrdma needs to allocate the Call and Reply buffers separately.
TBH, the reliance on using a single buffer for the pair of XDR
buffers is transport implementation-specific.
Transports that want to allocate separate Call and Reply buffers
will ignore the "size" argument anyway. Don't bother passing it.
The buf_alloc method can't return two pointers. Instead, make the
method's return value an error code, and set the rq_buffer pointer
in the method itself.
This gives call_allocate an opportunity to terminate an RPC instead
of looping forever when a permanent problem occurs. If a request is
just bogus, or the transport is in a state where it can't allocate
resources for any request, there needs to be a way to kill the RPC
right there and not loop.
This immediately fixes a rare problem in the backchannel send path,
which loops if the server happens to send a CB request whose
call+reply size is larger than a page (which it shouldn't do yet).
One more issue: looks like xprt_inject_disconnect was incorrectly
placed in the failure path in call_allocate. It needs to be in the
success path, as it is for other call-sites.
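In outline, the method changes shape roughly like this (a sketch of the interface
described above, not a verbatim excerpt):

    /* Old (sketch): returns a buffer or NULL; call_allocate loops on NULL. */
    void *(*buf_alloc)(struct rpc_task *task, size_t size);

    /* New (sketch): sets task->tk_rqstp->rq_buffer itself and returns an
     * errno, letting call_allocate distinguish "try again later" (-ENOMEM)
     * from a permanent failure that should terminate the RPC. */
    int (*buf_alloc)(struct rpc_task *task);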
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Andy Adamson [Fri, 9 Sep 2016 13:22:29 +0000 (09:22 -0400)]
NFS pnfs data server multipath session trunking
Try all multipath addresses for a data server. The first address that
successfully connects and creates a session is the DS mount address.
All subsequent addresses are tested for session trunking and
added as aliases.
Signed-off-by: Andy Adamson <andros@netapp.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Andy Adamson [Fri, 9 Sep 2016 13:22:28 +0000 (09:22 -0400)]
NFS test session trunking with exchange id
Use an async exchange id call to test for session trunking.
To conform with RFC 5661 section 18.35.4, the Non-Update on
Existing Clientid case, save the exchange id verifier in
cl_confirm and use it for the session trunking exchange id test.
Signed-off-by: Andy Adamson <andros@netapp.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Andy Adamson [Fri, 9 Sep 2016 13:22:18 +0000 (09:22 -0400)]
NFS setup async exchange_id
Testing an rpc_xprt for session trunking should not delay application
progress over already established transports.
Set up exchange_id so that it can be issued as an async call to test
an rpc_xprt for session trunking use.
Signed-off-by: Andy Adamson <andros@netapp.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Jeff Layton [Wed, 10 Aug 2016 19:58:24 +0000 (15:58 -0400)]
pnfs: track multiple layout types in fsinfo structure
The current NFSv4.1/pNFS client assumes that the MDS supports only one
layout type. While that is true for most existing servers, it can
change in the near future.
For now, this patch just plumbs in the ability to track a list of
layouts in the fsinfo structure. The existing behavior of the client
is preserved, by having it just select the first entry in the list.
Signed-off-by: Tigran Mkrtchyan <tigran.mkrtchyan@desy.de> Signed-off-by: Jeff Layton <jlayton@poochiereds.net> Reviewed-by: J. Bruce Fields <bfields@fieldses.org> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
The variable `err` is never used; the function simply returns the
predefined value `0` at the end. Hence, remove the variable and
return 0 explicitly.
Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Merge tag 'usb-4.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Pull USB fixes from Greg KH:
"Here are two small fixes, and one new device id, for 4.8-rc7
The fixes solve a build error that was reported in your tree for the
blackfin arch, and resolve an issue with a number of broken USB
devices that reported the wrong interval rate. Included here is also
a new device id for the usb-serial driver.
All have been in linux-next with no reported issues"
* tag 'usb-4.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
USB: change bInterval default to 10 ms
usb: musb: Fix tusb6010 compile error on blackfin
USB: serial: simple: add support for another Infineon flashloader
Merge tag 'fixes-for-linus-v4.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging
Pull uaccess fixes from Guenter Roeck:
"Two patches fixing problems introduced with copy_from_user changes"
* tag 'fixes-for-linus-v4.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
openrisc: fix the fix of copy_from_user()
avr32: fix 'undefined reference to `___copy_from_user'
Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf fixes from Thomas Gleixner:
"A couple of small fixes to x86 perf drivers:
- Measure L2 for HW_CACHE* events on AMD
- Fix the address filter handling in the intel/pt driver
- Handle the BTS disabling at the proper place"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/x86/amd: Make HW_CACHE_REFERENCES and HW_CACHE_MISSES measure L2
perf/x86/intel/pt: Do validate the size of a kernel address filter
perf/x86/intel/pt: Fix kernel address filter's offset validation
perf/x86/intel/pt: Fix an off-by-one in address filter configuration
perf/x86/intel: Don't disable "intel_bts" around "intel" event batching
Merge branch 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq fixes from Thomas Gleixner:
"Two patches from Boris which address a potential deadlock in the atmel
irq chip driver"
* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip/atmel-aic: Fix potential deadlock in ->xlate()
genirq: Provide irq_gc_{lock_irqsave,unlock_irqrestore}() helpers
Since commit acb2505d0119 ("openrisc: fix copy_from_user()"),
copy_from_user() returns the number of bytes requested, not the
number of bytes not copied.
avr32: fix 'undefined reference to `___copy_from_user'
avr32 builds fail with:
arch/avr32/kernel/built-in.o: In function `arch_ptrace':
(.text+0x650): undefined reference to `___copy_from_user'
arch/avr32/kernel/built-in.o:(___ksymtab+___copy_from_user+0x0): undefined
reference to `___copy_from_user'
kernel/built-in.o: In function `proc_doulongvec_ms_jiffies_minmax':
(.text+0x5dd8): undefined reference to `___copy_from_user'
kernel/built-in.o: In function `proc_dointvec_minmax_sysadmin':
sysctl.c:(.text+0x6174): undefined reference to `___copy_from_user'
kernel/built-in.o: In function `ptrace_has_cap':
ptrace.c:(.text+0x69c0): undefined reference to `___copy_from_user'
kernel/built-in.o:ptrace.c:(.text+0x6b90): more undefined references to
`___copy_from_user' follow
Merge tag 'mmc-v4.8-rc6' of git://git.linaro.org/people/ulf.hansson/mmc
Pull MMC fixes from Ulf Hansson:
"MMC host:
- omap/omap_hsmmc: Initialize dma_slave_config to avoid random data
- sdhci-st: Handle interconnect clock"
* tag 'mmc-v4.8-rc6' of git://git.linaro.org/people/ulf.hansson/mmc:
mmc: omap: Initialize dma_slave_config to avoid random data in it's fields
mmc: omap_hsmmc: Initialize dma_slave_config to avoid random data
mmc: sdhci-st: Handle interconnect clock
dt-bindings: mmc: sdhci-st: Mention the discretionary "icn" clock
Merge tag 'powerpc-4.8-6' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fixes from Michael Ellerman:
"Fixes for code merged this cycle:
- Fix restore of SPRs upon wake up from hypervisor state loss from
Gautham R Shenoy
- Fix the state of root PE from Gavin Shan
- Detach from PE on releasing PCI device from Gavin Shan
- Fix size of NUM_CPU_FTR_KEYS on 32-bit
- Fix missed TCE invalidations that should fallback to OPAL"
* tag 'powerpc-4.8-6' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
powerpc/powernv/pci: Fix missed TCE invalidations that should fallback to OPAL
powerpc/powernv: Detach from PE on releasing PCI device
powerpc/powernv: Fix the state of root PE
powerpc/kernel: Fix size of NUM_CPU_FTR_KEYS on 32-bit
powerpc/powernv: Fix restore of SPRs upon wake up from hypervisor state loss
Merge branch 'for-next' of git://git.samba.org/sfrench/cifs-2.6
Pull cifs fixes from Steve French:
"Small set of cifs fixes"
* 'for-next' of git://git.samba.org/sfrench/cifs-2.6:
Move check for prefix path to within cifs_get_root()
Compare prepaths when comparing superblocks
Fix memory leaks in cifs_do_mount()
Merge tag 'drm-fixes-for-4.8-rc6' of git://people.freedesktop.org/~airlied/linux
Pull drm fixes from Dave Airlie:
"Two sets of i915 fixes, one set of vc4 crasher fixes, and a couple of
atmel fixes.
Nothing too out there at this stage, though I think some people are
holidaying so it's been quiet enough"
* tag 'drm-fixes-for-4.8-rc6' of git://people.freedesktop.org/~airlied/linux:
drm/i915: Ignore OpRegion panel type except on select machines
Revert "drm/i915/psr: Make idle_frames sensible again"
drm/i915: Restore lost "Initialized i915" welcome message
drm/vc4: mark vc4_bo_cache_purge() static
drm/i915: Add GEN7_PCODE_MIN_FREQ_TABLE_GT_RATIO_OUT_OF_RANGE to SNB
drm/i915: disable 48bit full PPGTT when vGPU is active
drm/i915: enable vGPU detection for all
drm/atmel-hlcdc: Make ->reset() implementation static
drm: atmel-hlcdc: Fix vertical scaling
drm/vc4: Allow some more signals to be packed with uniform resets.
drm/i915/dvo: Remove dangling call to drm_encoder_cleanup()
Merge tag 'pm-4.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management fix from Rafael Wysocki:
"More annotations of tracepoints in the runtime PM framework to prevent
RCU from complaining when that code is invoked from the idle path
(Paul McKenney)"
* tag 'pm-4.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
PM / runtime: Use _rcuidle for runtime suspend tracepoints
Dave Airlie [Fri, 16 Sep 2016 21:57:55 +0000 (07:57 +1000)]
Merge tag 'drm-vc4-fixes-2016-09-14' of https://github.com/anholt/linux into drm-fixes
This pull request brings in a fix for crashes in X on VC4.
* tag 'drm-vc4-fixes-2016-09-14' of https://github.com/anholt/linux:
drm/vc4: mark vc4_bo_cache_purge() static
drm/vc4: Allow some more signals to be packed with uniform resets.
Dave Airlie [Fri, 16 Sep 2016 21:57:21 +0000 (07:57 +1000)]
Merge tag 'drm-intel-fixes-2016-09-15' of git://anongit.freedesktop.org/drm-intel into drm-fixes
i915 fixes from Jani.
* tag 'drm-intel-fixes-2016-09-15' of git://anongit.freedesktop.org/drm-intel:
drm/i915: Ignore OpRegion panel type except on select machines
Revert "drm/i915/psr: Make idle_frames sensible again"
drm/i915: Restore lost "Initialized i915" welcome message
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma
Pull rdma fixes from Doug Ledford:
"Round three of 4.8 rc fixes.
This is likely the last rdma pull request this cycle. The new rxe
driver had a few issues (you probably saw the boot bot bug report) and
they should be addressed now. There are a couple other fixes here,
mainly mlx4. There are still two outstanding issues that need
resolved but I don't think their fix will make this kernel cycle.
Summary:
- Various fixes to rdmavt, ipoib, mlx5, mlx4, rxe"
* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma:
IB/rdmavt: Don't vfree a kzalloc'ed memory region
IB/rxe: Fix kmem_cache leak
IB/rxe: Fix race condition between requester and completer
IB/rxe: Fix duplicate atomic request handling
IB/rxe: Fix kernel panic in udp_setup_tunnel
IB/mlx5: Set source mac address in FTE
IB/mlx5: Enable MAD_IFC commands for IB ports only
IB/mlx4: Diagnostic HW counters are not supported in slave mode
IB/mlx4: Use correct subnet-prefix in QP1 mads under SR-IOV
IB/mlx4: Fix code indentation in QP1 MAD flow
IB/mlx4: Fix incorrect MC join state bit-masking on SR-IOV
IB/ipoib: Don't allow MC joins during light MC flush
IB/rxe: fix GFP_KERNEL in spinlock context
Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Arnd Bergmann:
"Here are a couple of bugfixes for v4.8-rc.
Most of them have actually been around for a while this time but for
some reason didn't get applied early on. The shmobile regulator fix
is the only one that isn't completely obvious.
Device tree changes:
- archtimer interrupts must be level triggered (multiple platforms)
- fix for USB and MMC clocks on STiH410
- fix split DT repository in case of raspberry-pi 3
- a new use of skeleton.dtsi on arm64 has crept in after that was
removed.
defconfig updates:
- xilinx vdma has a new Kconfig symbol name
- keystone requires CONFIG_NOP_USB_XCEIV since v4.8-rc1
Code fixes:
- fix regulator quirk on shmobile
- suspend-to-ram regression on EXYNOS
Maintainer updates:
- Javier Martinez Canillas is now a reviewer for Samsung EXYNOS"
* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: keystone: defconfig: Fix USB configuration
arm64: dts: Fix broken architected timer interrupt trigger
ARM: multi_v7_defconfig: update XILINX_VDMA
ARM64: dts: bcm: Use a symlink to R-Pi dtsi files from arch=arm
ARM: dts: Remove use of skeleton.dtsi from bcm283x.dtsi
ARM: dts: STiH407-family: Provide interconnect clock for consumption in ST SDHCI
ARM: dts: STiH410: Handle interconnect clock required by EHCI/OHCI (USB)
ARM: shmobile: fix regulator quirk for Gen2
ARM: EXYNOS: Clear OF_POPULATED flag from PMU node in IRQ init callback
MAINTAINERS: Add myself as reviewer for Samsung Exynos support
Merge branch 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM fixes from Russell King:
"Most of this update are fixes primarily discovered from testing on the
older StrongARM 1110 and PXA systems, as a result of recent interest
from several people in these platforms:
- Locomo interrupt handling incorrectly stores the handler data in
the chip's private data slot: when Locomo is combined with an
interrupt controller whose chip uses the chip private data, this
leads to an oops.
- SA1111 was missing a call to clk_disable() to clean up after a
failed probe.
- SA1111 and PCMCIA suspend/resume was broken:
The PCMCIA "ds" layer was using the legacy bus suspend/resume
methods, which the core PM code is no longer calling as a result of
device_pm_check_callbacks() introduced in commit aa8e54b559479
("PM / sleep: Go direct_complete if driver has no callbacks").
SA1111 was broken due to changes to PCMCIA which makes PCMCIA
suspend itself later than the SA1111 code expects, and resume
before the SA1111 code has initialised access to the pcmcia
sub-device.
- the default SA1111 interrupt mask polarity got messed up when it
was converted to use a dynamic interrupt base number for its
interrupts.
- fix platform_get_irq() error code propagation, which was causing
problems on platforms where the interrupt may not be available at
probe time in DT setups.
- fix the lack of clock to PCMCIA code on PXA platforms, which was
omitted in conversions of PXA to CCF.
- fix an oops in the PXA PCMCIA code caused by a previous commit not
realising that Lubbock is different from the rest of the PXA PCMCIA
drivers.
- ensure that SA1111 low-level PCMCIA drivers propagate their error
codes to the main probe function, rather than the driver silently
accepting a failure.
- fix the sa11xx debugfs reporting of timing information, which
always indicated zero due to the clock being a factor of 1000 out.
- fix the polarity of the status change signal reported from the
sockets.
Lastly, one ARM specific commit from Stefan Agner fixing the LPAE
cache attributes"
* 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm:
ARM: pxa/lubbock: add pcmcia clock
ARM: locomo: fix locomo irq handling
ARM: 8612/1: LPAE: initialize cache policy correctly
ARM: sa1111: fix missing clk_disable()
ARM: sa1111: fix pcmcia suspend/resume
ARM: sa1111: fix pcmcia interrupt mask polarity
ARM: sa1111: fix error code propagation in sa1111_probe()
pcmcia: lubbock: fix sockets configuration
pcmcia: sa1111: fix propagation of lowlevel board init return code
pcmcia: soc_common: fix SS_STSCHG polarity
pcmcia: sa11xx_base: add units to the timing information
pcmcia: sa11xx_base: fix reporting of timing information
pcmcia: ds: fix suspend/resume
Colin Ian King [Fri, 9 Sep 2016 07:15:37 +0000 (08:15 +0100)]
IB/rdmavt: Don't vfree a kzalloc'ed memory region
The userspace memory region 'mr' is allocated with kzalloc in
__rvt_alloc_mr; however, it is incorrectly being freed with vfree in
__rvt_free_mr. Fix this by using kfree to free it.
Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Acked-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
IB/rxe: Fix race condition between requester and completer
rxe_requester() sends a packet with rxe_xmit_packet() and
then calls rxe_update() to update the wqe and the qp's psn values.
But sometimes the response is received before the requester
has had time to update the wqe, in which case the completer
acts on erroneous wqe values.
This fix updates the wqe and qp before actually sending
the request, and rolls back when xmit fails.
Fixes: 8700e3e7c485 ("Soft RoCE driver") Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
When handling an ack for atomic opcodes like "fetch&add"
or "cmp&swp", the method send_atomic_ack() saves the ack
before sending it, in case it gets lost and never reaches the
requester. In that case the method duplicate_request()
will need to find it using the duplicated request's psn.
But send_atomic_ack() used a wrong psn value and thus
the above ack was never found.
This fix uses the ack's psn to locate the ack in case
it is needed.
This fix also copies the ack packet to the skb's control buffer,
since duplicate_request() will need it when calling rxe_xmit_packet().
Fixes: 8700e3e7c485 ("Soft RoCE driver") Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
Jack Morgenstein [Mon, 12 Sep 2016 16:16:20 +0000 (19:16 +0300)]
IB/mlx4: Use correct subnet-prefix in QP1 mads under SR-IOV
When sending QP1 MAD packets which use a GRH, the source GID
(which consists of the 64-bit subnet prefix, and the 64 bit port GUID)
must be included in the packet GRH.
For SR-IOV, a GID cache is used, since the source GID needs to be the
slave's source GID, and not the Hypervisor's GID. This cache also
included a subnet_prefix. Unfortunately, the subnet_prefix field in
the cache was never initialized (to the default subnet prefix 0xfe80::0).
As a result, this field remained all zeroes. Therefore, when SR-IOV
was active, all QP1 packets which included a GRH had a source GID
subnet prefix of all-zeroes.
However, the subnet-prefix should initially be 0xfe80::0 (the default
subnet prefix). In addition, if OpenSM modifies a port's subnet prefix,
the new subnet prefix must be used in the GRH when sending QP1 packets.
To fix this we now initialize the subnet prefix in the SR-IOV GID cache
to the default subnet prefix. We update the cached value if/when OpenSM
modifies the port's subnet prefix. We take this cached value when sending
QP1 packets when SR-IOV is active.
Note that the value is stored as an atomic64. This eliminates any need
for locking when the subnet prefix is being updated.
Note also that we depend on the FW generating the "port management change"
event for tracking subnet-prefix changes performed by OpenSM. If running
early FW (before 2.9.4630), subnet prefix changes will not be tracked (but
the default subnet prefix still will be stored in the cache; therefore
users who do not modify the subnet prefix will not have a problem).
If there is a need for such tracking also for early FW, we will add that
capability in a subsequent patch.
Fixes: 1ffeb2eb8be9 ("IB/mlx4: SR-IOV IB context objects and proxy/tunnel SQP support") Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
Alex Vesker [Mon, 12 Sep 2016 16:16:18 +0000 (19:16 +0300)]
IB/mlx4: Fix incorrect MC join state bit-masking on SR-IOV
Because of an incorrect bit-masking done on the join state bits, when
handling a join request we failed to detect a difference between the
group join state and the request join state when joining as a send-only
full member (0x8). This caused the MC join request not to be sent.
This issue is relevant only when SRIOV is enabled and the SM supports
send-only full member.
This fix separates the scope bits and the join state bits into a nibble each.
Fixes: b9c5d6a64358 ('IB/mlx4: Add multicast group (MCG) paravirtualization for SR-IOV') Signed-off-by: Alex Vesker <valex@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
Alex Vesker [Mon, 12 Sep 2016 06:55:28 +0000 (09:55 +0300)]
IB/ipoib: Don't allow MC joins during light MC flush
This fix solves a race between a light flush and on-the-fly joins.
A light flush doesn't set the device down or clear the IPOIB_OPER_UP
flag; this means that if a MC join is in progress while we are flushing
and the QP was attached to the BC MGID, we can have a mismatch when
re-attaching a QP to the BC MGID.
The light flush would set the broadcast group to NULL, causing an
on-the-fly join to rejoin and reattach to the BC MCG as well as adding
the BC MGID to the multicast list. The flush process would later on
remove the BC MGID and detach it from the QP. On the next flush
the BC MGID is present in the multicast list but not found when trying
to detach it, because of the previous double attach and single detach.
Fixes: ee1e2c82c245 ("IPoIB: Refresh paths instead of flushing them on SM change events") Signed-off-by: Alex Vesker <valex@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
Merge tag 'samsung-fixes-4.8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux into fixes
Pull "ARM: exynos: Fixes for v4.8, secound round" from Krzysztof Kozłowski:
1. A recent change in populating irqchip devices from Device Tree
broke Suspend to RAM on Exynos boards due to lack of probing of
PMU (Power Management Unit) driver. Multiple drivers attach to
the PMU's DT node: irqchip, clock controller and PMU platform
driver for handling suspend. The new irqchip code marked the
PMU's DT node as OF_POPULATED but we need to attach to this
node also PMU platform driver.
2. Add Javier as additional reviewer for Exynos patches.
* tag 'samsung-fixes-4.8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux:
ARM: EXYNOS: Clear OF_POPULATED flag from PMU node in IRQ init callback
MAINTAINERS: Add myself as reviewer for Samsung Exynos support
Alan Stern [Fri, 16 Sep 2016 14:24:26 +0000 (10:24 -0400)]
USB: change bInterval default to 10 ms
Some full-speed mceusb infrared transceivers contain invalid endpoint
descriptors for their interrupt endpoints, with bInterval set to 0.
In the past they have worked out okay with the mceusb driver, because
the driver sets the bInterval field in the descriptor to 1,
overwriting whatever value may have been there before. However, this
approach was never sanctioned by the USB core, and in fact it does not
work with xHCI controllers, because they use the bInterval value that
was present when the configuration was installed.
Currently usbcore uses 32 ms as the default interval if the value in
the endpoint descriptor is invalid. It turns out that these IR
transceivers don't work properly unless the interval is set to 10 ms
or below. To work around this mceusb problem, this patch changes the
endpoint-descriptor parsing routine, making the default interval value
be 10 ms rather than 32 ms.
Tony Lindgren [Fri, 16 Sep 2016 14:24:44 +0000 (09:24 -0500)]
usb: musb: Fix tusb6010 compile error on blackfin
We have a CONFIG_BLACKFIN ifdef redefining all musb registers in
musb_regs.h, and tusb6010.h is never included, causing a build
error with blackfin-allmodconfig and COMPILE_TEST.
Let's fix the issue by not building tusb6010 if CONFIG_BLACKFIN
is selected.
Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Bin Liu <b-liu@ti.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Matt Fleming [Wed, 24 Aug 2016 13:12:08 +0000 (14:12 +0100)]
perf/x86/amd: Make HW_CACHE_REFERENCES and HW_CACHE_MISSES measure L2
While the Intel PMU monitors the LLC when perf enables the
HW_CACHE_REFERENCES and HW_CACHE_MISSES events, these events monitor
L1 instruction cache fetches (0x0080) and instruction cache misses
(0x0081) on the AMD PMU.
This is extremely confusing when monitoring the same workload across
Intel and AMD machines, since commands like
$ perf stat -e cache-references,cache-misses
measure completely different things.
Instead, make the AMD PMU measure instruction/data cache and TLB fill
requests to the L2 and instruction/data cache and TLB misses in the L2
when HW_CACHE_REFERENCES and HW_CACHE_MISSES are enabled,
respectively. That way the events measure unified caches on both
platforms.
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk> Acked-by: Peter Zijlstra <peterz@infradead.org> Cc: <stable@vger.kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1472044328-21302-1-git-send-email-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf/x86/intel/pt: Do validate the size of a kernel address filter
Right now, the kernel address filters in PT are prone to an integer
overflow that may happen when adding a filter's size to its offset to
obtain the end of the range. Such an overflow would also throw a #GP
in the PT event configuration path.
Fix this by explicitly validating the result of this calculation.
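The validation amounts to an overflow guard on the computed range end, e.g.
(illustrative only):

    /* Reject a kernel address filter whose offset + size wraps around. */
    unsigned long start = filter->offset;
    unsigned long end   = filter->offset + filter->size;

    if (end < start)
            return -EINVAL;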
Reported-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: stable@vger.kernel.org # v4.7 Cc: stable@vger.kernel.org#v4.7 Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160915151352.21306-4-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
The kernel_ip() filter is used mostly by the DS/LBR code to look at the
branch addresses, but Intel PT also uses it to validate the address
filter offsets for kernel addresses, for which it is not sufficient:
supplying something in bits 64:48 that's not a sign extension of the lower
address bits (like 0xf00d000000000000) throws a #GP.
This patch adds address validation for the user supplied kernel filters.
Reported-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: stable@vger.kernel.org # v4.7 Cc: stable@vger.kernel.org#v4.7 Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160915151352.21306-3-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf/x86/intel/pt: Fix an off-by-one in address filter configuration
PT address filter configuration requires that a range is specified by
its first and last address, but at the moment we're obtaining the end
of the range by adding user specified size to its start, which is off
by one from what it actually needs to be.
Fix this and make sure that zero-sized filters don't pass the filter
validation.
Reported-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: stable@vger.kernel.org # v4.7 Cc: stable@vger.kernel.org#v4.7 Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160915151352.21306-2-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Paul E. McKenney [Tue, 26 Apr 2016 17:42:25 +0000 (10:42 -0700)]
PM / runtime: Use _rcuidle for runtime suspend tracepoints
Further testing with false negatives suppressed by commit 293e2421fe25
("rcu: Remove superfluous versions of rcu_read_lock_sched_held()")
identified a few more unprotected uses of RCU from the idle loop.
Because RCU actively ignores idle-loop code (for energy-efficiency
reasons, among other things), using RCU from the idle loop can result
in too-short grace periods, in turn resulting in arbitrary misbehavior.
RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 0
RCU used illegally from extended quiescent state!
1 lock held by swapper/0/0:
#0: (&(&dev->power.lock)->rlock){-.-...}, at: [<c052ee24>] __pm_runtime_suspend+0x54/0x84
stack backtrace:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.6.0-rc5-next-20160426+ #1112
Hardware name: Generic OMAP36xx (Flattened Device Tree)
[<c0110308>] (unwind_backtrace) from [<c010c3a8>] (show_stack+0x10/0x14)
[<c010c3a8>] (show_stack) from [<c047fec8>] (dump_stack+0xb0/0xe4)
[<c047fec8>] (dump_stack) from [<c052d7b4>] (rpm_suspend+0x604/0x7e4)
[<c052d7b4>] (rpm_suspend) from [<c052ee34>] (__pm_runtime_suspend+0x64/0x84)
[<c052ee34>] (__pm_runtime_suspend) from [<c04bf3bc>] (omap2_gpio_prepare_for_idle+0x5c/0x70)
[<c04bf3bc>] (omap2_gpio_prepare_for_idle) from [<c01255e8>] (omap_sram_idle+0x140/0x244)
[<c01255e8>] (omap_sram_idle) from [<c0126b48>] (omap3_enter_idle_bm+0xfc/0x1ec)
[<c0126b48>] (omap3_enter_idle_bm) from [<c0601db8>] (cpuidle_enter_state+0x80/0x3d4)
[<c0601db8>] (cpuidle_enter_state) from [<c0183c74>] (cpu_startup_entry+0x198/0x3a0)
[<c0183c74>] (cpu_startup_entry) from [<c0b00c0c>] (start_kernel+0x354/0x3c8)
[<c0b00c0c>] (start_kernel) from [<8000807c>] (0x8000807c)
Reported-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Tested-by: Tony Lindgren <tony@atomide.com> Tested-by: Guenter Roeck <linux@roeck-us.net>
[ rjw: Subject ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>