From patchwork Tue Aug 10 17:30:17 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 494945
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Tyler Hicks,
 Jens Wiklander, Sumit Garg
Subject: [PATCH 5.13 109/175] optee: Clear stale cache entries during initialization
Date: Tue, 10 Aug 2021 19:30:17 +0200
Message-Id: <20210810173004.549410482@linuxfoundation.org>
In-Reply-To: <20210810173000.928681411@linuxfoundation.org>
References: <20210810173000.928681411@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Tyler Hicks

commit b5c10dd04b7418793517e3286cde5c04759a86de upstream.

The shm cache could contain invalid addresses if optee_disable_shm_cache()
was not called from the .shutdown hook of the previous kernel before a
kexec. These addresses could be unmapped, or they could point to mapped
but unintended locations in memory.

During driver initialization, clear the shared memory cache while being
careful not to translate the addresses returned from
OPTEE_SMC_DISABLE_SHM_CACHE. Once all pre-cached shm objects are removed,
proceed with enabling the cache so that we know we can handle cached shm
objects with confidence later in the .shutdown hook.
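As a rough, self-contained sketch of the pattern described above (all names,
types, and the local array below are invented for illustration only; the real
driver drains the cache through repeated OPTEE_SMC_DISABLE_SHM_CACHE calls
into the secure world, not a local list), the "drain without dereferencing,
then enable caching" sequence looks approximately like this:

	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>

	#define CACHE_SLOTS 4

	/* Stand-in for the secure world's cache of shared-memory references. */
	static void *shm_cache[CACHE_SLOTS];

	/* Models one OPTEE_SMC_DISABLE_SHM_CACHE call: hand back one cached entry. */
	static void *pop_cached_shm(void)
	{
		for (int i = 0; i < CACHE_SLOTS; i++) {
			if (shm_cache[i]) {
				void *p = shm_cache[i];

				shm_cache[i] = NULL;
				return p;
			}
		}
		return NULL;
	}

	/*
	 * Drain the cache. Returned references are freed only when they are
	 * known to be mappings created by the currently running kernel
	 * (is_mapped); otherwise the value is discarded without ever being
	 * dereferenced.
	 */
	static void drain_shm_cache(bool is_mapped)
	{
		void *p;

		while ((p = pop_cached_shm()) != NULL) {
			if (is_mapped)
				free(p);
			/* else: possibly stale pre-kexec address, never touch it */
		}
	}

	int main(void)
	{
		/* Pretend a previous kernel left a stale entry behind. */
		shm_cache[0] = (void *)0x1000;

		/* Probe-time ordering from the patch: purge unmapped entries first. */
		drain_shm_cache(false);

		/* Entries cached after this point were allocated by "us"... */
		shm_cache[0] = malloc(32);

		/* ...and can therefore be freed safely at shutdown time. */
		drain_shm_cache(true);

		puts("shm cache drained safely");
		return 0;
	}

The is_mapped split in the toy code is exactly what the new
__optee_disable_shm_cache() parameter expresses in the patch below: the same
drain loop is reused, and only the caller knows whether the returned addresses
are safe to dereference and free.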
Cc: stable@vger.kernel.org
Signed-off-by: Tyler Hicks
Reviewed-by: Jens Wiklander
Reviewed-by: Sumit Garg
Signed-off-by: Jens Wiklander
Signed-off-by: Greg Kroah-Hartman
---
 drivers/tee/optee/call.c          | 36 +++++++++++++++++++++++++++++++++---
 drivers/tee/optee/core.c          |  9 +++++++++
 drivers/tee/optee/optee_private.h |  1 +
 3 files changed, 43 insertions(+), 3 deletions(-)

--- a/drivers/tee/optee/call.c
+++ b/drivers/tee/optee/call.c
@@ -416,11 +416,13 @@ void optee_enable_shm_cache(struct optee
 }
 
 /**
- * optee_disable_shm_cache() - Disables caching of some shared memory allocation
- * in OP-TEE
+ * __optee_disable_shm_cache() - Disables caching of some shared memory
+ * allocation in OP-TEE
  * @optee:	main service struct
+ * @is_mapped:	true if the cached shared memory addresses were mapped by this
+ *		kernel, are safe to dereference, and should be freed
  */
-void optee_disable_shm_cache(struct optee *optee)
+static void __optee_disable_shm_cache(struct optee *optee, bool is_mapped)
 {
 	struct optee_call_waiter w;
 
@@ -439,6 +441,13 @@ void optee_disable_shm_cache(struct opte
 		if (res.result.status == OPTEE_SMC_RETURN_OK) {
 			struct tee_shm *shm;
 
+			/*
+			 * Shared memory references that were not mapped by
+			 * this kernel must be ignored to prevent a crash.
+			 */
+			if (!is_mapped)
+				continue;
+
 			shm = reg_pair_to_ptr(res.result.shm_upper32,
 					      res.result.shm_lower32);
 			tee_shm_free(shm);
@@ -449,6 +458,27 @@ void optee_disable_shm_cache(struct opte
 	optee_cq_wait_final(&optee->call_queue, &w);
 }
 
+/**
+ * optee_disable_shm_cache() - Disables caching of mapped shared memory
+ *			       allocations in OP-TEE
+ * @optee:	main service struct
+ */
+void optee_disable_shm_cache(struct optee *optee)
+{
+	return __optee_disable_shm_cache(optee, true);
+}
+
+/**
+ * optee_disable_unmapped_shm_cache() - Disables caching of shared memory
+ *					allocations in OP-TEE which are not
+ *					currently mapped
+ * @optee:	main service struct
+ */
+void optee_disable_unmapped_shm_cache(struct optee *optee)
+{
+	return __optee_disable_shm_cache(optee, false);
+}
+
 #define PAGELIST_ENTRIES_PER_PAGE				\
 	((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(u64)) - 1)
 
--- a/drivers/tee/optee/core.c
+++ b/drivers/tee/optee/core.c
@@ -686,6 +686,15 @@ static int optee_probe(struct platform_d
 	optee->memremaped_shm = memremaped_shm;
 	optee->pool = pool;
 
+	/*
+	 * Ensure that there are no pre-existing shm objects before enabling
+	 * the shm cache so that there's no chance of receiving an invalid
+	 * address during shutdown. This could occur, for example, if we're
+	 * kexec booting from an older kernel that did not properly cleanup the
+	 * shm cache.
+	 */
+	optee_disable_unmapped_shm_cache(optee);
+
 	optee_enable_shm_cache(optee);
 
 	if (optee->sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
--- a/drivers/tee/optee/optee_private.h
+++ b/drivers/tee/optee/optee_private.h
@@ -159,6 +159,7 @@ int optee_cancel_req(struct tee_context
 
 void optee_enable_shm_cache(struct optee *optee);
 void optee_disable_shm_cache(struct optee *optee);
+void optee_disable_unmapped_shm_cache(struct optee *optee);
 
 int optee_shm_register(struct tee_context *ctx, struct tee_shm *shm,
 		       struct page **pages, size_t num_pages,
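
For completeness, the mapped variant remains the one the driver's .shutdown
hook is expected to keep calling, so that a later kexec finds an empty cache.
Roughly (a sketch for orientation only, not part of this patch; the exact hook
body in the tree may differ):

	static void optee_shutdown(struct platform_device *pdev)
	{
		optee_disable_shm_cache(platform_get_drvdata(pdev));
	}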