git.proxmox.com Git - mirror_ubuntu-bionic-kernel.git/commitdiff
s390/crypto: Fix return code checking in cbc_paes_crypt()
author Ingo Franzki <ifranzki@linux.ibm.com>
Wed, 26 Sep 2018 14:37:00 +0000 (16:37 +0200)
committer Stefan Bader <stefan.bader@canonical.com>
Mon, 1 Oct 2018 14:54:12 +0000 (16:54 +0200)
BugLink: https://bugs.launchpad.net/bugs/1794294
In case of an error, the return code of cpacf_kmc() is less than
the number of bytes to process, not greater.
The crypt routines for the other cipher modes already perform
this check correctly.

Cc: stable@vger.kernel.org # v4.11+
Fixes: 279378430768 ("s390/crypt: Add protected key AES module")
Signed-off-by: Ingo Franzki <ifranzki@linux.ibm.com>
Acked-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
(cherry picked from commit b81126e01a8c6048249955feea46c8217ebefa91)
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Colin King <colin.king@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
arch/s390/crypto/paes_s390.c

index 80b27294c1de0844f07d01c5aecbb5410a38602e..ab9a0ebecc199b52507246b47db7b79dd0420058 100644
@@ -208,7 +208,7 @@ static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
                              walk->dst.virt.addr, walk->src.virt.addr, n);
                if (k)
                        ret = blkcipher_walk_done(desc, walk, nbytes - k);
-               if (n < k) {
+               if (k < n) {
                        if (__cbc_paes_set_key(ctx) != 0)
                                return blkcipher_walk_done(desc, walk, -EIO);
                        memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);