Commit d2631e61 authored by Jakub Kicinski, committed by Ziyang Xuan

tls: fix race between tx work scheduling and socket close

mainline inclusion
from mainline-v6.8-rc5
commit e01e3934a1b2d122919f73bc6ddbe1cdafc4bbdb
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I92REK
CVE: CVE-2024-26583

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e01e3934a1b2d122919f73bc6ddbe1cdafc4bbdb



--------------------------------

Similarly to previous commit, the submitting thread (recvmsg/sendmsg)
may exit as soon as the async crypto handler calls complete().
Reorder scheduling the work before calling complete().
This seems more logical in the first place, as it's
the inverse order of what the submitting thread will do.

Reported-by: valis <sec@valis.email>
Fixes: a42055e8 ("net/tls: Add support for async encryption of records for performance")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Simon Horman <horms@kernel.org>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Conflicts:
	net/tls/tls_sw.c
Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com>
parent 6267db2a
net/tls/tls_sw.c: +6 −10
@@ -447,7 +447,6 @@ static void tls_encrypt_done(struct crypto_async_request *req, int err)
 	struct scatterlist *sge;
 	struct sk_msg *msg_en;
 	struct tls_rec *rec;
-	bool ready = false;
 
 	rec = container_of(aead_req, struct tls_rec, aead_req);
 	msg_en = &rec->msg_encrypted;
@@ -478,19 +477,16 @@ static void tls_encrypt_done(struct crypto_async_request *req, int err)
 		/* If received record is at head of tx_list, schedule tx */
 		first_rec = list_first_entry(&ctx->tx_list,
 					     struct tls_rec, list);
-		if (rec == first_rec)
-			ready = true;
+		if (rec == first_rec) {
+			/* Schedule the transmission */
+			if (!test_and_set_bit(BIT_TX_SCHEDULED,
+					      &ctx->tx_bitmask))
+				schedule_delayed_work(&ctx->tx_work.work, 1);
+		}
 	}
 
 	if (atomic_dec_and_test(&ctx->encrypt_pending))
 		complete(&ctx->async_wait.completion);
-
-	if (!ready)
-		return;
-
-	/* Schedule the transmission */
-	if (!test_and_set_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask))
-		schedule_delayed_work(&ctx->tx_work.work, 1);
 }
 
 static int tls_encrypt_async_wait(struct tls_sw_context_tx *ctx)