From patchwork Mon Nov 9 12:55:18 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 322730
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jason Gunthorpe,
    Andrew Morton, Arnd Bergmann, Tom Lendacky, Thomas Gleixner,
    Andrey Ryabinin, Borislav Petkov, Brijesh Singh, Jonathan Corbet,
    Dmitry Vyukov, "Dave Young", Alexander Potapenko,
    Konrad Rzeszutek Wilk, Andy Lutomirski, Larry Woodman, Matt Fleming,
    Ingo Molnar, "Michael S. Tsirkin", Paolo Bonzini, Peter Zijlstra,
    Rik van Riel, Toshimitsu Kani, Linus Torvalds
Subject: [PATCH 5.9 056/133] mm: always have io_remap_pfn_range() set pgprot_decrypted()
Date: Mon, 9 Nov 2020 13:55:18 +0100
Message-Id: <20201109125033.417115637@linuxfoundation.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201109125030.706496283@linuxfoundation.org>
References: <20201109125030.706496283@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Jason Gunthorpe

commit f8f6ae5d077a9bdaf5cbf2ac960a5d1a04b47482 upstream.

The purpose of io_remap_pfn_range() is to map IO memory, such as a
memory mapped IO exposed through a PCI BAR.  IO devices do not
understand encryption, so this memory must always be decrypted.
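
[Editorial illustration, not part of the patch: a typical affected caller
is a driver mmap handler that exposes a PCI BAR to user space, roughly as
sketched below. The struct my_dev type and the my_driver_mmap name are
invented for the example; it assumes linux/mm.h and linux/pci.h.]

  static int my_driver_mmap(struct file *file, struct vm_area_struct *vma)
  {
  	struct my_dev *dev = file->private_data;	/* hypothetical driver state */
  	unsigned long size = vma->vm_end - vma->vm_start;
  	resource_size_t bar = pci_resource_start(dev->pdev, 0);

  	if (size > pci_resource_len(dev->pdev, 0))
  		return -EINVAL;

  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

  	/*
  	 * With this change, io_remap_pfn_range() applies pgprot_decrypted()
  	 * itself, so the BAR pages stay unencrypted even with AMD SME on.
  	 */
  	return io_remap_pfn_range(vma, vma->vm_start, bar >> PAGE_SHIFT,
  				  size, vma->vm_page_prot);
  }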
Automatically call pgprot_decrypted() as part of the generic
implementation.

This fixes a bug where enabling AMD SME causes subsystems, such as
RDMA, using io_remap_pfn_range() to expose BAR pages to user space to
fail.  The CPU will encrypt access to those BAR pages instead of
passing unencrypted IO directly to the device.

Places not mapping IO should use remap_pfn_range().

Fixes: aca20d546214 ("x86/mm: Add support to make use of Secure Memory Encryption")
Signed-off-by: Jason Gunthorpe
Signed-off-by: Andrew Morton
Cc: Arnd Bergmann
Cc: Tom Lendacky
Cc: Thomas Gleixner
Cc: Andrey Ryabinin
Cc: Borislav Petkov
Cc: Brijesh Singh
Cc: Jonathan Corbet
Cc: Dmitry Vyukov
Cc: "Dave Young"
Cc: Alexander Potapenko
Cc: Konrad Rzeszutek Wilk
Cc: Andy Lutomirski
Cc: Larry Woodman
Cc: Matt Fleming
Cc: Ingo Molnar
Cc: "Michael S. Tsirkin"
Cc: Paolo Bonzini
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: Toshimitsu Kani
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/0-v1-025d64bdf6c4+e-amd_sme_fix_jgg@nvidia.com
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/mm.h      |    9 +++++++++
 include/linux/pgtable.h |    4 ----
 2 files changed, 9 insertions(+), 4 deletions(-)

--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2735,6 +2735,15 @@ static inline vm_fault_t vmf_insert_page
 	return VM_FAULT_NOPAGE;
 }
 
+#ifndef io_remap_pfn_range
+static inline int io_remap_pfn_range(struct vm_area_struct *vma,
+				     unsigned long addr, unsigned long pfn,
+				     unsigned long size, pgprot_t prot)
+{
+	return remap_pfn_range(vma, addr, pfn, size, pgprot_decrypted(prot));
+}
+#endif
+
 static inline vm_fault_t vmf_error(int err)
 {
 	if (err == -ENOMEM)
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1399,10 +1399,6 @@ typedef unsigned int pgtbl_mod_mask;
 
 #endif /* !__ASSEMBLY__ */
 
-#ifndef io_remap_pfn_range
-#define io_remap_pfn_range remap_pfn_range
-#endif
-
 #ifndef has_transparent_hugepage
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define has_transparent_hugepage() 1
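
[Editorial note on the approach, not from the changelog: the #ifndef guard
keeps the existing override mechanism, so an architecture with special IO
mapping rules can still supply its own definition before linux/mm.h sees
the #ifndef check. A hypothetical arch header would look roughly like this;
the body shown is only a placeholder.]

  /* In a hypothetical arch's asm/pgtable.h -- illustrative sketch only. */
  static inline int io_remap_pfn_range(struct vm_area_struct *vma,
  				     unsigned long addr, unsigned long pfn,
  				     unsigned long size, pgprot_t prot)
  {
  	/* arch-specific fixups (e.g. PFN offset translation) would go here */
  	return remap_pfn_range(vma, addr, pfn, size, pgprot_decrypted(prot));
  }
  #define io_remap_pfn_range io_remap_pfn_range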