From patchwork Tue Mar 22 12:41:28 2022
X-Patchwork-Submitter: Halil Pasic
X-Patchwork-Id: 553730
From: Halil Pasic
To: stable@vger.kernel.org
Cc: Halil Pasic, Christoph Hellwig, Linus Torvalds, Doug Gilbert,
    Christian Borntraeger, Anshuman Khandual, Thiago Jung Bauermann,
    Tom Lendacky
Subject: [PATCH for 5.4.x 2/2] swiotlb: rework "fix info leak with DMA_FROM_DEVICE"
Date: Tue, 22 Mar 2022 13:41:28 +0100
Message-Id: <20220322124128.2232849-3-pasic@linux.ibm.com>
In-Reply-To: <20220322124128.2232849-1-pasic@linux.ibm.com>
References: <20220322124128.2232849-1-pasic@linux.ibm.com>

commit aa6f8dcbab473f3a3c7454b74caa46d36cdc5d13 upstream.

Unfortunately, we ended up merging an old version of the patch "fix info
leak with DMA_FROM_DEVICE" instead of merging the latest one. Christoph
(the swiotlb maintainer) asked me to create an incremental fix after I
pointed out the mix-up and asked him for guidance. So here we go.

The main differences between what we got and what was agreed are:
* swiotlb_sync_single_for_device is also required to do an extra bounce
* We decided not to introduce DMA_ATTR_OVERWRITE until we have exploiters
* The implementation of DMA_ATTR_OVERWRITE is flawed: DMA_ATTR_OVERWRITE
  must take precedence over DMA_ATTR_SKIP_CPU_SYNC

Thus this patch removes DMA_ATTR_OVERWRITE, and makes
swiotlb_sync_single_for_device() bounce unconditionally (that is, also
when dir == DMA_FROM_DEVICE) in order to avoid synchronising back stale
data from the swiotlb buffer.

Note that if the size used with the dma_sync_* API is smaller than the
size used with dma_[un]map_*, we may under certain circumstances still
end up with swiotlb not being transparent. In that sense, this is not a
perfect fix either. To make this bulletproof, we would have to bounce
the entire mapping/bounce buffer. For that we would have to figure out
the starting address and the size of the mapping in
swiotlb_sync_single_for_device(). While this does seem possible, there
seems to be no firm consensus on how things are supposed to work.

Signed-off-by: Halil Pasic
Fixes: ddbd89deb7d3 ("swiotlb: fix info leak with DMA_FROM_DEVICE")
Cc: stable@vger.kernel.org
Reviewed-by: Christoph Hellwig
Signed-off-by: Linus Torvalds
---
 Documentation/DMA-attributes.txt | 10 ----------
 include/linux/dma-mapping.h      |  8 --------
 kernel/dma/swiotlb.c             | 25 ++++++++++++++++---------
 3 files changed, 16 insertions(+), 27 deletions(-)

diff --git a/Documentation/DMA-attributes.txt b/Documentation/DMA-attributes.txt
index 7193505a98ca..8f8d97f65d73 100644
--- a/Documentation/DMA-attributes.txt
+++ b/Documentation/DMA-attributes.txt
@@ -156,13 +156,3 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
 subsystem that the buffer is fully accessible at the elevated privilege
 level (and ideally inaccessible or at least read-only at the
 lesser-privileged levels).
-
-DMA_ATTR_PRIVILEGED
--------------------
-
-Some advanced peripherals such as remote processors and GPUs perform
-accesses to DMA buffers in both privileged "supervisor" and unprivileged
-"user" modes.  This attribute is used to indicate to the DMA-mapping
-subsystem that the buffer is fully accessible at the elevated privilege
-level (and ideally inaccessible or at least read-only at the
-lesser-privileged levels).
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index da90f20e11c1..4d450672b7d6 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -70,14 +70,6 @@
  */
 #define DMA_ATTR_PRIVILEGED		(1UL << 9)
 
-/*
- * This is a hint to the DMA-mapping subsystem that the device is expected
- * to overwrite the entire mapped size, thus the caller does not require any
- * of the previous buffer contents to be preserved. This allows
- * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
- */
-#define DMA_ATTR_OVERWRITE		(1UL << 10)
-
 /*
  * A dma_addr_t can hold any valid DMA or bus address for the platform.
  * It can be given to a device to use as a DMA source or target. A CPU cannot
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f17b771856d1..76fb001dddd4 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -571,10 +571,14 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 	 */
 	for (i = 0; i < nslots; i++)
 		io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
-	    dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
+	/*
+	 * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
+	 * to the tlb buffer, if we knew for sure the device will
+	 * overwrite the entire current content. But we don't. Thus
+	 * unconditional bounce may prevent leaking swiotlb content (i.e.
+	 * kernel memory) to user-space.
+	 */
+	swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
 
 	return tlb_addr;
 }
@@ -649,11 +653,14 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 		BUG_ON(dir != DMA_TO_DEVICE);
 		break;
 	case SYNC_FOR_DEVICE:
-		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-			swiotlb_bounce(orig_addr, tlb_addr,
-				       size, DMA_TO_DEVICE);
-		else
-			BUG_ON(dir != DMA_FROM_DEVICE);
+		/*
+		 * Unconditional bounce is necessary to avoid corruption on
+		 * sync_*_for_cpu or dma_unmap_* when the device didn't
+		 * overwrite the whole length of the bounce buffer.
+		 */
+		swiotlb_bounce(orig_addr, tlb_addr,
+			       size, DMA_TO_DEVICE);
+		BUG_ON(!valid_dma_direction(dir));
 		break;
 	default:
 		BUG();
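
As an illustration of why the mapping-time bounce matters, below is a
minimal userspace model (hypothetical code, not part of the patch: the
tlb/orig arrays and the helper names are invented, and all of the real
swiotlb slot management is elided). It mimics a DMA_FROM_DEVICE mapping
in which the device overwrites only part of the bounce slot, and shows
that the unmap-time tlb-to-orig copy hands the slot's stale bytes to the
caller unless the map-time orig-to-tlb bounce is performed:

#include <stdio.h>
#include <string.h>

#define MAP_SZ 8

/* The bounce slot; its previous content stands in for stale kernel data. */
static char tlb[MAP_SZ];
/* The driver's DMA_FROM_DEVICE buffer. */
static char orig[MAP_SZ];

/* The device writes only the first two bytes of the mapping... */
static void device_dma_write(void)
{
	memcpy(tlb, "OK", 2);
}

/* ...then unmap bounces the whole slot back to the driver's buffer. */
static void unmap_bounce(void)
{
	memcpy(orig, tlb, MAP_SZ);
}

static void run(int bounce_on_map)
{
	memcpy(tlb, "SECRET!", MAP_SZ);   /* stale data left in the slot */
	memcpy(orig, "0000000", MAP_SZ);  /* caller's own buffer content */

	if (bounce_on_map)                /* the fix: bounce orig -> tlb */
		memcpy(tlb, orig, MAP_SZ);

	device_dma_write();
	unmap_bounce();
	printf("%s: %.7s\n",
	       bounce_on_map ? "with map-time bounce   " :
			       "without map-time bounce", orig);
}

int main(void)
{
	run(0);  /* prints "OKCRET!": stale slot bytes reach the caller  */
	run(1);  /* prints "OK00000": only the caller's own data remains */
	return 0;
}

Built with e.g. "cc -o swiotlb-model swiotlb-model.c", the first run
shows the stale tail of the slot leaking into the caller's buffer, while
the second shows the unconditional map-time bounce masking it with the
caller's own data, which is what the swiotlb_tbl_map_single() hunk above
achieves.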