bpf: Add schedule points in batch ops
Author:    Eric Dumazet <edumazet@google.com>
           Thu, 17 Feb 2022 18:19:02 +0000 (10:19 -0800)
Committer: Paolo Pisati <paolo.pisati@canonical.com>
           Mon, 7 Mar 2022 10:45:58 +0000 (11:45 +0100)
BugLink: https://bugs.launchpad.net/bugs/1963891
commit 75134f16e7dd0007aa474b281935c5f42e79f2c8 upstream.

syzbot reported various soft lockups caused by bpf batch operations.

 INFO: task kworker/1:1:27 blocked for more than 140 seconds.
 INFO: task hung in rcu_barrier

Nothing prevents batch ops from processing huge amounts of data,
so we need to add schedule points in them.
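
For context, a single batch syscall can carry an arbitrarily large
count, so the kernel-side loop does unbounded work. A minimal
userspace sketch using libbpf's bpf_map_delete_batch() (the map fd,
key type, and count below are illustrative assumptions):

  #include <bpf/bpf.h>
  #include <stdlib.h>

  /* Delete a very large number of keys in one batch syscall. Before
   * this fix, the kernel loop serving the request had no schedule
   * point, so a huge count could trigger soft lockups. */
  int delete_many(int map_fd)
  {
          __u32 count = 1000000;  /* hypothetical: 1M keys, one syscall */
          __u32 *keys = calloc(count, sizeof(*keys));
          LIBBPF_OPTS(bpf_map_batch_opts, opts);
          int err;

          if (!keys)
                  return -1;
          for (__u32 i = 0; i < count; i++)
                  keys[i] = i;

          /* One syscall; the kernel iterates count times internally. */
          err = bpf_map_delete_batch(map_fd, keys, &count, &opts);
          free(keys);
          return err;
  }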

Note that maybe_wait_bpf_programs(map) calls from
generic_map_delete_batch() can be factorized by moving
the call after the loop.

This will be done later in the -next tree once we get this fix merged,
unless there are strong opinions in favor of doing this optimization
sooner; see the sketch below.
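
For illustration, a condensed sketch of that factorization (assumed
shape only; the real generic_map_delete_batch() copies each key from
userspace and brackets the delete with rcu_read_lock(), all omitted
here):

  /* maybe_wait_bpf_programs() synchronizes with BPF programs that may
   * still see deleted elements; running it once for the whole batch
   * is sufficient, so it can move out of the loop. */
  static int delete_batch_sketch(struct bpf_map *map, void *keys,
                                 u32 max_count, u32 *countp)
  {
          u32 cp = 0;
          int err = 0;

          while (cp < max_count) {
                  err = map->ops->map_delete_elem(map,
                                  keys + cp * map->key_size);
                  if (err)
                          break;
                  cp++;
                  cond_resched();  /* schedule point added by this patch */
          }
          maybe_wait_bpf_programs(map);  /* hoisted: once per batch */

          *countp = cp;
          return err;
  }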

Fixes: aa2e93b8e58e ("bpf: Add generic support for update and delete batch ops")
Fixes: cb4d03ab499d ("bpf: Add generic support for lookup batch op")
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Brian Vazquez <brianvv@google.com>
Link: https://lore.kernel.org/bpf/20220217181902.808742-1-eric.dumazet@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Paolo Pisati <paolo.pisati@canonical.com>
kernel/bpf/syscall.c

index ecd51a8a8680c8151c9a348a1713effe3520ad64..53384622e8dac34a69c5d5e50067b30a1e96dfe8 100644 (file)
@@ -1337,6 +1337,7 @@ int generic_map_delete_batch(struct bpf_map *map,
                maybe_wait_bpf_programs(map);
                if (err)
                        break;
+               cond_resched();
        }
        if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp)))
                err = -EFAULT;
@@ -1394,6 +1395,7 @@ int generic_map_update_batch(struct bpf_map *map,
 
                if (err)
                        break;
+               cond_resched();
        }
 
        if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp)))
@@ -1491,6 +1493,7 @@ int generic_map_lookup_batch(struct bpf_map *map,
                swap(prev_key, key);
                retry = MAP_LOOKUP_RETRIES;
                cp++;
+               cond_resched();
        }
 
        if (err == -EFAULT)