From patchwork Tue May 18 22:18:12 2021
X-Patchwork-Submitter: Jianxiong Gao
X-Patchwork-Id: 441771
Date: Tue, 18 May 2021 22:18:12 +0000
In-Reply-To: <20210518221818.2963918-1-jxgao@google.com>
Message-Id: <20210518221818.2963918-3-jxgao@google.com>
References: <20210518221818.2963918-1-jxgao@google.com>
Subject: [PATCH 5.4 v2 2/9] swiotlb: add a IO_TLB_SIZE define
From: Jianxiong Gao
To: stable@vger.kernel.org, hch@lst.de, marcorr@google.com, sashal@kernel.org
Cc: Jianxiong Gao, Konrad Rzeszutek Wilk

Add a new IO_TLB_SIZE define instead of open coding it using
IO_TLB_SHIFT all over.

Signed-off-by: Christoph Hellwig
Acked-by: Jianxiong Gao
Tested-by: Jianxiong Gao
Signed-off-by: Konrad Rzeszutek Wilk
Upstream: b5d7ccb7aac3895c2138fe0980a109116ce15eff
Signed-off-by: Jianxiong Gao
---
 include/linux/swiotlb.h |  1 +
 kernel/dma/swiotlb.c    | 12 ++++++------
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 0a8fced6aaec..f7aadd297aa9 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -29,6 +29,7 @@ enum swiotlb_force {
  * controllable.
  */
 #define IO_TLB_SHIFT 11
+#define IO_TLB_SIZE (1 << IO_TLB_SHIFT)
 
 extern void swiotlb_init(int verbose);
 int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f99b79d7e123..af4130059202 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -479,20 +479,20 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 
 	tbl_dma_addr &= mask;
 
-	offset_slots = ALIGN(tbl_dma_addr, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	offset_slots = ALIGN(tbl_dma_addr, IO_TLB_SIZE) >> IO_TLB_SHIFT;
 
 	/*
 	 * Carefully handle integer overflow which can occur when mask == ~0UL.
 	 */
 	max_slots = mask + 1
-		? ALIGN(mask + 1, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT
+		? ALIGN(mask + 1, IO_TLB_SIZE) >> IO_TLB_SHIFT
 		: 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);
 
 	/*
 	 * For mappings greater than or equal to a page, we limit the stride
 	 * (and hence alignment) to a page size.
 	 */
-	nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	nslots = ALIGN(alloc_size, IO_TLB_SIZE) >> IO_TLB_SHIFT;
 	if (alloc_size >= PAGE_SIZE)
 		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
 	else
@@ -586,7 +586,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      enum dma_data_direction dir, unsigned long attrs)
 {
 	unsigned long flags;
-	int i, count, nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	int i, count, nslots = ALIGN(alloc_size, IO_TLB_SIZE) >> IO_TLB_SHIFT;
 	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = io_tlb_orig_addr[index];
 
@@ -637,7 +637,7 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 	if (orig_addr == INVALID_PHYS_ADDR)
 		return;
 
-	orig_addr += (unsigned long)tlb_addr & ((1 << IO_TLB_SHIFT) - 1);
+	orig_addr += (unsigned long)tlb_addr & (IO_TLB_SIZE - 1);
 
 	switch (target) {
 	case SYNC_FOR_CPU:
@@ -693,7 +693,7 @@ bool swiotlb_map(struct device *dev, phys_addr_t *phys, dma_addr_t *dma_addr,
 
 size_t swiotlb_max_mapping_size(struct device *dev)
 {
-	return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
+	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
 }
 
 bool is_swiotlb_active(void)
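For readers following the series, the define is purely a naming change: IO_TLB_SIZE is the existing 2048-byte slot size. The small standalone C sketch below (the bounce-buffer address is made up for illustration, nothing here is kernel code) shows the value the define names and the intra-slot offset computation that swiotlb_tbl_sync_single() now spells with it:

#include <stdio.h>

/* Same values as include/linux/swiotlb.h after this patch. */
#define IO_TLB_SHIFT 11
#define IO_TLB_SIZE  (1 << IO_TLB_SHIFT)

int main(void)
{
	unsigned long tlb_addr = 0x80001a80;	/* made-up bounce buffer address */

	/* Each slot is 2048 bytes; masking with IO_TLB_SIZE - 1 gives the
	 * offset of an address within its slot: 0x80001a80 & 0x7ff == 0x280. */
	printf("slot size = %d, offset in slot = %#lx\n",
	       IO_TLB_SIZE, tlb_addr & (IO_TLB_SIZE - 1));
	return 0;
}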
From patchwork Tue May 18 22:18:14 2021
X-Patchwork-Submitter: Jianxiong Gao
X-Patchwork-Id: 441770
Date: Tue, 18 May 2021 22:18:14 +0000
In-Reply-To: <20210518221818.2963918-1-jxgao@google.com>
Message-Id: <20210518221818.2963918-5-jxgao@google.com>
References: <20210518221818.2963918-1-jxgao@google.com>
Subject: [PATCH 5.4 v2 4/9] swiotlb: factor out a nr_slots helper
From: Jianxiong Gao
To: stable@vger.kernel.org, hch@lst.de, marcorr@google.com, sashal@kernel.org
Cc: Jianxiong Gao, Konrad Rzeszutek Wilk

Factor out a helper to find the number of slots for a given size.

Signed-off-by: Christoph Hellwig
Acked-by: Jianxiong Gao
Tested-by: Jianxiong Gao
Signed-off-by: Konrad Rzeszutek Wilk
Upstream: c32a77fd18780a5192dfb6eec69f239faebf28fd
Signed-off-by: Jianxiong Gao
---
 kernel/dma/swiotlb.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index db265dc324b9..b57e0741ce2f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -179,6 +179,11 @@ static inline unsigned long io_tlb_offset(unsigned long val)
 	return val & (IO_TLB_SEGSIZE - 1);
 }
 
+static inline unsigned long nr_slots(u64 val)
+{
+	return DIV_ROUND_UP(val, IO_TLB_SIZE);
+}
+
 /*
  * Early SWIOTLB allocation may be too early to allow an architecture to
  * perform the desired operations.  This function allows the architecture to
@@ -481,20 +486,20 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 
 	tbl_dma_addr &= mask;
 
-	offset_slots = ALIGN(tbl_dma_addr, IO_TLB_SIZE) >> IO_TLB_SHIFT;
+	offset_slots = nr_slots(tbl_dma_addr);
 
 	/*
 	 * Carefully handle integer overflow which can occur when mask == ~0UL.
 	 */
 	max_slots = mask + 1
-		? ALIGN(mask + 1, IO_TLB_SIZE) >> IO_TLB_SHIFT
+		? nr_slots(mask + 1)
 		: 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);
 
 	/*
 	 * For mappings greater than or equal to a page, we limit the stride
 	 * (and hence alignment) to a page size.
 	 */
-	nslots = ALIGN(alloc_size, IO_TLB_SIZE) >> IO_TLB_SHIFT;
+	nslots = nr_slots(alloc_size);
 	if (alloc_size >= PAGE_SIZE)
 		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
 	else
@@ -590,7 +595,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      enum dma_data_direction dir, unsigned long attrs)
 {
 	unsigned long flags;
-	int i, count, nslots = ALIGN(alloc_size, IO_TLB_SIZE) >> IO_TLB_SHIFT;
+	int i, count, nslots = nr_slots(alloc_size);
 	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = io_tlb_orig_addr[index];
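As a quick check that the helper is a pure simplification, the standalone sketch below (the ALIGN and DIV_ROUND_UP macros are re-typed here only for the demo, mirroring their kernel definitions for power-of-two alignments) compares nr_slots() with the old ALIGN-and-shift expression for a few sizes; both yield 1, 1, 2 and 32 slots:

#include <stdio.h>

#define IO_TLB_SHIFT 11
#define IO_TLB_SIZE  (1ull << IO_TLB_SHIFT)

#define ALIGN(x, a)        (((x) + (a) - 1) & ~((a) - 1))
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Mirrors the new nr_slots() helper. */
static unsigned long nr_slots(unsigned long long val)
{
	return DIV_ROUND_UP(val, IO_TLB_SIZE);
}

int main(void)
{
	unsigned long long sizes[] = { 1, 2048, 2049, 65536 };

	for (int i = 0; i < 4; i++)
		printf("%llu bytes -> %lu slots (old expression: %llu)\n",
		       sizes[i], nr_slots(sizes[i]),
		       ALIGN(sizes[i], IO_TLB_SIZE) >> IO_TLB_SHIFT);
	return 0;
}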
From patchwork Tue May 18 22:18:16 2021
X-Patchwork-Submitter: Jianxiong Gao
X-Patchwork-Id: 441769
Date: Tue, 18 May 2021 22:18:16 +0000
In-Reply-To: <20210518221818.2963918-1-jxgao@google.com>
Message-Id: <20210518221818.2963918-7-jxgao@google.com>
References: <20210518221818.2963918-1-jxgao@google.com>
Subject: [PATCH 5.4 v2 6/9] swiotlb: refactor swiotlb_tbl_map_single
From: Jianxiong Gao
To: stable@vger.kernel.org, hch@lst.de, marcorr@google.com, sashal@kernel.org
Cc: Jianxiong Gao, Konrad Rzeszutek Wilk

Split out a bunch of self-contained helpers to make the function easier
to follow.

Signed-off-by: Christoph Hellwig
Acked-by: Jianxiong Gao
Tested-by: Jianxiong Gao
Signed-off-by: Konrad Rzeszutek Wilk
Upstream: 26a7e094783d482f3e125f09945a5bb1d867b2e6
Signed-off-by: Jianxiong Gao
---
 kernel/dma/swiotlb.c | 184 +++++++++++++++++++++----------------------
 1 file changed, 91 insertions(+), 93 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index af22c3c5e488..d71f05a33aa4 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -453,133 +453,132 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 	}
 }
 
-phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
-				   dma_addr_t tbl_dma_addr,
-				   phys_addr_t orig_addr,
-				   size_t mapping_size,
-				   size_t alloc_size,
-				   enum dma_data_direction dir,
-				   unsigned long attrs)
+#define slot_addr(start, idx)	((start) + ((idx) << IO_TLB_SHIFT))
+/*
+ * Carefully handle integer overflow which can occur when boundary_mask == ~0UL.
+ */
+static inline unsigned long get_max_slots(unsigned long boundary_mask)
 {
-	unsigned long flags;
-	phys_addr_t tlb_addr;
-	unsigned int nslots, stride, index, wrap;
-	int i;
-	unsigned long mask;
-	unsigned long offset_slots;
-	unsigned long max_slots;
-	unsigned long tmp_io_tlb_used;
-
-	if (no_iotlb_memory)
-		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
-
-	if (mem_encrypt_active())
-		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
-
-	if (mapping_size > alloc_size) {
-		dev_warn_once(hwdev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
-			      mapping_size, alloc_size);
-		return (phys_addr_t)DMA_MAPPING_ERROR;
-	}
-
-	mask = dma_get_seg_boundary(hwdev);
+	if (boundary_mask == ~0UL)
+		return 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);
+	return nr_slots(boundary_mask + 1);
+}
 
-	tbl_dma_addr &= mask;
+static unsigned int wrap_index(unsigned int index)
+{
+	if (index >= io_tlb_nslabs)
+		return 0;
+	return index;
+}
 
-	offset_slots = nr_slots(tbl_dma_addr);
+/*
+ * Find a suitable number of IO TLB entries size that will fit this request and
+ * allocate a buffer from that IO TLB pool.
+ */
+static int find_slots(struct device *dev, size_t alloc_size)
+{
+	unsigned long boundary_mask = dma_get_seg_boundary(dev);
+	dma_addr_t tbl_dma_addr =
+		__phys_to_dma(dev, io_tlb_start) & boundary_mask;
+	unsigned long max_slots = get_max_slots(boundary_mask);
+	unsigned int nslots = nr_slots(alloc_size), stride = 1;
+	unsigned int index, wrap, count = 0, i;
+	unsigned long flags;
 
-	/*
-	 * Carefully handle integer overflow which can occur when mask == ~0UL.
-	 */
-	max_slots = mask + 1
-		? nr_slots(mask + 1)
-		: 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);
+	BUG_ON(!nslots);
 
 	/*
 	 * For mappings greater than or equal to a page, we limit the stride
 	 * (and hence alignment) to a page size.
 	 */
-	nslots = nr_slots(alloc_size);
 	if (alloc_size >= PAGE_SIZE)
-		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
-	else
-		stride = 1;
+		stride <<= (PAGE_SHIFT - IO_TLB_SHIFT);
 
-	BUG_ON(!nslots);
-
-	/*
-	 * Find suitable number of IO TLB entries size that will fit this
-	 * request and allocate a buffer from that IO TLB pool.
-	 */
 	spin_lock_irqsave(&io_tlb_lock, flags);
-
 	if (unlikely(nslots > io_tlb_nslabs - io_tlb_used))
 		goto not_found;
 
-	index = ALIGN(io_tlb_index, stride);
-	if (index >= io_tlb_nslabs)
-		index = 0;
-	wrap = index;
-
+	index = wrap = wrap_index(ALIGN(io_tlb_index, stride));
 	do {
-		while (iommu_is_span_boundary(index, nslots, offset_slots,
-					      max_slots)) {
-			index += stride;
-			if (index >= io_tlb_nslabs)
-				index = 0;
-			if (index == wrap)
-				goto not_found;
-		}
-
 		/*
 		 * If we find a slot that indicates we have 'nslots' number of
 		 * contiguous buffers, we allocate the buffers from that slot
 		 * and mark the entries as '0' indicating unavailable.
 		 */
-		if (io_tlb_list[index] >= nslots) {
-			int count = 0;
-
-			for (i = index; i < (int) (index + nslots); i++)
-				io_tlb_list[i] = 0;
-			for (i = index - 1;
-			     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
-			     io_tlb_list[i]; i--)
-				io_tlb_list[i] = ++count;
-			tlb_addr = io_tlb_start + (index << IO_TLB_SHIFT);
-
-			/*
-			 * Update the indices to avoid searching in the next
-			 * round.
-			 */
-			io_tlb_index = ((index + nslots) < io_tlb_nslabs
-					? (index + nslots) : 0);
-
-			goto found;
+		if (!iommu_is_span_boundary(index, nslots,
+					    nr_slots(tbl_dma_addr),
+					    max_slots)) {
+			if (io_tlb_list[index] >= nslots)
+				goto found;
 		}
-		index += stride;
-		if (index >= io_tlb_nslabs)
-			index = 0;
+		index = wrap_index(index + stride);
 	} while (index != wrap);
 
 not_found:
-	tmp_io_tlb_used = io_tlb_used;
-
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
-	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
-		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
-			 alloc_size, io_tlb_nslabs, tmp_io_tlb_used);
-	return (phys_addr_t)DMA_MAPPING_ERROR;
+	return -1;
+
 found:
-	io_tlb_used += nslots;
+	for (i = index; i < index + nslots; i++)
+		io_tlb_list[i] = 0;
+	for (i = index - 1;
+	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
+	     io_tlb_list[i]; i--)
+		io_tlb_list[i] = ++count;
+
+	/*
+	 * Update the indices to avoid searching in the next round.
+	 */
+	if (index + nslots < io_tlb_nslabs)
+		io_tlb_index = index + nslots;
+	else
+		io_tlb_index = 0;
+
+	io_tlb_used += nslots;
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	return index;
+}
+
+phys_addr_t swiotlb_tbl_map_single(struct device *dev, dma_addr_t dma_addr,
+		phys_addr_t orig_addr, size_t mapping_size,
+		size_t alloc_size,
+		enum dma_data_direction dir,
+		unsigned long attrs)
+{
+	unsigned int index, i;
+	phys_addr_t tlb_addr;
+
+	if (no_iotlb_memory)
+		panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");
+
+	if (mem_encrypt_active())
+		pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");
+
+	if (mapping_size > alloc_size) {
+		dev_warn_once(dev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)",
+			      mapping_size, alloc_size);
+		return (phys_addr_t)DMA_MAPPING_ERROR;
+	}
+
+	index = find_slots(dev, alloc_size);
+	if (index == -1) {
+		if (!(attrs & DMA_ATTR_NO_WARN))
+			dev_warn_ratelimited(dev,
+	"swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n",
+				 alloc_size, io_tlb_nslabs, io_tlb_used);
+		return (phys_addr_t)DMA_MAPPING_ERROR;
+	}
 
 	/*
 	 * Save away the mapping from the original address to the DMA address.
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nslots; i++)
-		io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
+	for (i = 0; i < nr_slots(alloc_size); i++)
+		io_tlb_orig_addr[index + i] = slot_addr(orig_addr, i);
+
+	tlb_addr = slot_addr(io_tlb_start, index);
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
 		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
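The behaviour of the two small helpers is easiest to see in isolation. Below is a rough standalone sketch (the pool base address and the eight-slab pool size are invented toy values, not kernel defaults): slot_addr() turns a slot index into an address within the pool, and wrap_index() restarts the search at slot 0 once it runs past the end:

#include <stdio.h>
#include <stdint.h>

#define IO_TLB_SHIFT 11

/* Toy stand-ins for the real pool globals. */
static const uint64_t io_tlb_start = 0x80000000;	/* assumed pool base */
static const unsigned long io_tlb_nslabs = 8;		/* tiny pool for the demo */

#define slot_addr(start, idx) ((start) + ((uint64_t)(idx) << IO_TLB_SHIFT))

/* Mirrors the new wrap_index() helper. */
static unsigned int wrap_index(unsigned int index)
{
	if (index >= io_tlb_nslabs)
		return 0;
	return index;
}

int main(void)
{
	/* Slot 3 starts 3 * 2048 bytes past the pool base ... */
	printf("slot_addr(io_tlb_start, 3) = %#llx\n",
	       (unsigned long long)slot_addr(io_tlb_start, 3));
	/* ... and probing slot 6 with a stride of 4 wraps back to slot 0. */
	printf("wrap_index(6 + 4) = %u\n", wrap_index(6 + 4));
	return 0;
}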
From patchwork Tue May 18 22:18:18 2021
X-Patchwork-Submitter: Jianxiong Gao
X-Patchwork-Id: 441768
Date: Tue, 18 May 2021 22:18:18 +0000
In-Reply-To: <20210518221818.2963918-1-jxgao@google.com>
Message-Id: <20210518221818.2963918-9-jxgao@google.com>
References: <20210518221818.2963918-1-jxgao@google.com>
Subject: [PATCH 5.4 v2 8/9] swiotlb: respect min_align_mask
From: Jianxiong Gao
To: stable@vger.kernel.org, hch@lst.de, marcorr@google.com, sashal@kernel.org
Cc: Jianxiong Gao, Konrad Rzeszutek Wilk

Respect the min_align_mask in struct device_dma_parameters in swiotlb.

There are two parts to it:
 1) for the lower bits of the alignment inside the io tlb slot, just
    extend the size of the allocation and leave the start of the slot
    empty
 2) for the high bits ensure we find a slot that matches the high bits
    of the alignment to avoid wasting too much memory

Based on an earlier patch from Jianxiong Gao.

Signed-off-by: Christoph Hellwig
Acked-by: Jianxiong Gao
Tested-by: Jianxiong Gao
Signed-off-by: Konrad Rzeszutek Wilk
Upstream: 1f221a0d0dbf0e48ef3a9c62871281d6a7819f05
Signed-off-by: Jianxiong Gao
---
 kernel/dma/swiotlb.c | 42 ++++++++++++++++++++++++++++++----------
 1 file changed, 32 insertions(+), 10 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f4e18ae33507..743bf7e36385 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -454,6 +454,15 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 }
 
 #define slot_addr(start, idx)	((start) + ((idx) << IO_TLB_SHIFT))
+
+/*
+ * Return the offset into a iotlb slot required to keep the device happy.
+ */
+static unsigned int swiotlb_align_offset(struct device *dev, u64 addr)
+{
+	return addr & dma_get_min_align_mask(dev) & (IO_TLB_SIZE - 1);
+}
+
 /*
  * Carefully handle integer overflow which can occur when boundary_mask == ~0UL.
  */
@@ -475,24 +484,29 @@ static unsigned int wrap_index(unsigned int index)
  * Find a suitable number of IO TLB entries size that will fit this request and
  * allocate a buffer from that IO TLB pool.
  */
-static int find_slots(struct device *dev, size_t alloc_size)
+static int find_slots(struct device *dev, phys_addr_t orig_addr,
+		      size_t alloc_size)
 {
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		__phys_to_dma(dev, io_tlb_start) & boundary_mask;
 	unsigned long max_slots = get_max_slots(boundary_mask);
-	unsigned int nslots = nr_slots(alloc_size), stride = 1;
+	unsigned int iotlb_align_mask =
+		dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
+	unsigned int nslots = nr_slots(alloc_size), stride;
 	unsigned int index, wrap, count = 0, i;
 	unsigned long flags;
 
 	BUG_ON(!nslots);
 
 	/*
-	 * For mappings greater than or equal to a page, we limit the stride
-	 * (and hence alignment) to a page size.
+	 * For mappings with an alignment requirement don't bother looping to
+	 * unaligned slots once we found an aligned one.  For allocations of
+	 * PAGE_SIZE or larger only look for page aligned allocations.
 	 */
+	stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
 	if (alloc_size >= PAGE_SIZE)
-		stride <<= (PAGE_SHIFT - IO_TLB_SHIFT);
+		stride = max(stride, stride << (PAGE_SHIFT - IO_TLB_SHIFT));
 
 	spin_lock_irqsave(&io_tlb_lock, flags);
 	if (unlikely(nslots > io_tlb_nslabs - io_tlb_used))
@@ -500,6 +514,12 @@ static int find_slots(struct device *dev, size_t alloc_size)
 
 	index = wrap = wrap_index(ALIGN(io_tlb_index, stride));
 	do {
+		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+		    (orig_addr & iotlb_align_mask)) {
+			index = wrap_index(index + 1);
+			continue;
+		}
+
 		/*
 		 * If we find a slot that indicates we have 'nslots' number of
 		 * contiguous buffers, we allocate the buffers from that slot
@@ -545,6 +565,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, dma_addr_t dma_addr,
 		size_t alloc_size,
 		enum dma_data_direction dir,
 		unsigned long attrs)
 {
+	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int index, i;
 	phys_addr_t tlb_addr;
 
@@ -560,7 +581,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, dma_addr_t dma_addr,
 		return (phys_addr_t)DMA_MAPPING_ERROR;
 	}
 
-	index = find_slots(dev, alloc_size);
+	index = find_slots(dev, orig_addr, alloc_size + offset);
 	if (index == -1) {
 		if (!(attrs & DMA_ATTR_NO_WARN))
 			dev_warn_ratelimited(dev,
@@ -574,10 +595,10 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, dma_addr_t dma_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size); i++)
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		io_tlb_orig_addr[index + i] = slot_addr(orig_addr, i);
 
-	tlb_addr = slot_addr(io_tlb_start, index);
+	tlb_addr = slot_addr(io_tlb_start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
 		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
@@ -593,8 +614,9 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      enum dma_data_direction dir, unsigned long attrs)
 {
 	unsigned long flags;
-	int i, count, nslots = nr_slots(alloc_size);
-	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	int i, count, nslots = nr_slots(alloc_size + offset);
+	int index = (tlb_addr - offset - io_tlb_start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = io_tlb_orig_addr[index];
 
 	/*
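To make the two parts of the description concrete, here is a standalone C sketch of the new arithmetic (the 4 KiB min_align_mask and the physical address are assumed example values, not something taken from the patch): the low bits of the original address become an offset into the first slot, and the remaining alignment bits turn into the search stride used by find_slots():

#include <stdio.h>
#include <stdint.h>

#define IO_TLB_SHIFT 11
#define IO_TLB_SIZE  (1u << IO_TLB_SHIFT)

/* Assumed device requirement: keep the buffer's offset within a 4 KiB page. */
static const unsigned int min_align_mask = 0xfff;

/* Mirrors swiotlb_align_offset(): bits below the slot size are preserved
 * as an offset inside the first allocated slot. */
static unsigned int align_offset(uint64_t addr)
{
	return addr & min_align_mask & (IO_TLB_SIZE - 1);
}

int main(void)
{
	uint64_t orig_addr = 0x12345a00;	/* made-up original physical address */
	unsigned int offset = align_offset(orig_addr);
	unsigned int iotlb_align_mask = min_align_mask & ~(IO_TLB_SIZE - 1);
	unsigned int stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;

	/* offset == 0x200: the bounce buffer starts 0x200 bytes into its first
	 * slot, so the device sees the same low address bits as before.
	 * stride == 2: only every other slot can match the remaining high
	 * alignment bit, so slots that cannot match are skipped. */
	printf("offset = %#x, stride = %u\n", offset, stride);
	return 0;
}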