We allocate message buffers with GFP_KERNEL allocation flags if
possible. However, when an incoming request message is received we
can be in interrupt context, so we must use GFP_ATOMIC in that case.
The computation of gfp_flags in gb_operation_message_init() is
wrong. It needlessly uses GFP_ATOMIC when allocating inbound
response buffers, which are set up in process context when the
outgoing request is created. Fix the flawed logic.
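
For reference, the corrected selection reduces to a single predicate.
Below is a minimal sketch (the helper name is hypothetical, and the
table is inferred from the logic described above, not taken verbatim
from the driver):

	/* Sketch only; not part of this patch. */
	#include <linux/types.h>	/* bool */
	#include <linux/gfp.h>		/* gfp_t, GFP_KERNEL, GFP_ATOMIC */

	/*
	 * Only an inbound request message is allocated on the receive
	 * path, where we may be in interrupt context; the other three
	 * message kinds are set up in process context and may sleep.
	 *
	 *   request  outbound  message kind       flags
	 *   -------  --------  -----------------  ----------
	 *   true     false     incoming request   GFP_ATOMIC
	 *   true     true      outgoing request   GFP_KERNEL
	 *   false    false     inbound response   GFP_KERNEL
	 *   false    true      outbound response  GFP_KERNEL
	 */
	static gfp_t message_gfp_flags(bool request, bool outbound)
	{
		return (request && !outbound) ? GFP_ATOMIC : GFP_KERNEL;
	}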
Rename "data_out" to "outbound" to be consistent with usage
elsewhere. (Data/messages are "inbound" or "outbound"; requests
are "incoming" or "outgoing".)
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Greg Kroah-Hartman <greg@kroah.com>
static int gb_operation_message_init(struct gb_operation *operation,
					u8 type, size_t size,
-					bool request, bool data_out)
+					bool request, bool outbound)
{
	struct gb_connection *connection = operation->connection;
	struct greybus_host_device *hd = connection->hd;
	struct gb_message *message;
	struct gb_operation_msg_hdr *header;
	struct gbuf *gbuf;
-	gfp_t gfp_flags = data_out ? GFP_KERNEL : GFP_ATOMIC;
+	gfp_t gfp_flags = request && !outbound ? GFP_ATOMIC : GFP_KERNEL;
	u16 dest_cport_id;
	int ret;
	...
	gbuf = &message->gbuf;
-	if (data_out)
+	if (outbound)
		dest_cport_id = connection->interface_cport_id;
	else
		dest_cport_id = CPORT_ID_BAD;
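
For illustration, here is a hedged sketch of possible call sites; the
calls below are assumptions about how the initializer is used, not
lines from this patch:

	/* Outgoing operation, process context: allocations may sleep. */
	ret = gb_operation_message_init(operation, type, size, true, true);   /* outgoing request -> GFP_KERNEL */
	ret = gb_operation_message_init(operation, type, size, false, false); /* inbound response -> GFP_KERNEL */

	/* Receive path, possibly interrupt context: must not sleep. */
	ret = gb_operation_message_init(operation, type, size, true, false);  /* incoming request -> GFP_ATOMIC */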