From patchwork Wed May  5 12:05:17 2021
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 431597
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Christoph Hellwig,
    Jianxiong Gao, Konrad Rzeszutek Wilk
Subject: [PATCH 5.10 14/29] swiotlb: factor out an io_tlb_offset helper
Date: Wed, 5 May 2021 14:05:17 +0200
Message-Id: <20210505112326.666752898@linuxfoundation.org>
In-Reply-To: <20210505112326.195493232@linuxfoundation.org>
References: <20210505112326.195493232@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Jianxiong Gao

commit: c7fbeca757fe74135d8b6a4c8ddaef76f5775d68

Replace the very generically named OFFSET macro with a little inline
helper that hardcodes the alignment to the only value ever passed.

Signed-off-by: Christoph Hellwig
Acked-by: Jianxiong Gao
Tested-by: Jianxiong Gao
Signed-off-by: Konrad Rzeszutek Wilk
Signed-off-by: Jianxiong Gao
Signed-off-by: Greg Kroah-Hartman
---
 kernel/dma/swiotlb.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -50,9 +50,6 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/swiotlb.h>
 
-#define OFFSET(val,align) ((unsigned long)	\
-	( (val) & ( (align) - 1)))
-
 #define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
 
 /*
@@ -176,6 +173,11 @@ void swiotlb_print_info(void)
 	       bytes >> 20);
 }
 
+static inline unsigned long io_tlb_offset(unsigned long val)
+{
+	return val & (IO_TLB_SEGSIZE - 1);
+}
+
 /*
  * Early SWIOTLB allocation may be too early to allow an architecture to
  * perform the desired operations.  This function allows the architecture to
@@ -225,7 +227,7 @@ int __init swiotlb_init_with_tbl(char *t
 		      __func__, alloc_size, PAGE_SIZE);
 
 	for (i = 0; i < io_tlb_nslabs; i++) {
-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		io_tlb_list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i);
 		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
 	}
 	io_tlb_index = 0;
@@ -359,7 +361,7 @@ swiotlb_late_init_with_tbl(char *tlb, un
 		goto cleanup4;
 
 	for (i = 0; i < io_tlb_nslabs; i++) {
-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		io_tlb_list[i] = IO_TLB_SEGSIZE - io_tlb_offset(i);
 		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
 	}
 	io_tlb_index = 0;
@@ -530,7 +532,9 @@ phys_addr_t swiotlb_tbl_map_single(struc
 
 			for (i = index; i < (int) (index + nslots); i++)
 				io_tlb_list[i] = 0;
-			for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && io_tlb_list[i]; i--)
+			for (i = index - 1;
+			     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
+			     io_tlb_list[i]; i--)
 				io_tlb_list[i] = ++count;
 			tlb_addr = io_tlb_start + (index << IO_TLB_SHIFT);
 
@@ -616,7 +620,9 @@ void swiotlb_tbl_unmap_single(struct dev
 		 * Step 2: merge the returned slots with the preceding slots,
 		 * if available (non zero)
 		 */
-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+		for (i = index - 1;
+		     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
+		     io_tlb_list[i]; i--)
 			io_tlb_list[i] = ++count;
 
 		io_tlb_used -= nslots;
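
[Editor's note, not part of the patch] For readers who want to see the
refactoring in isolation, the minimal user-space sketch below places the
removed OFFSET() macro next to the new io_tlb_offset() helper and checks
that they compute the same value for IO_TLB_SEGSIZE, the only alignment
swiotlb ever passed to the macro.  IO_TLB_SEGSIZE is assumed to be 128,
its value in the mainline swiotlb headers.

/*
 * Editor's sketch: the removed macro and the new helper side by side,
 * checked for equivalence at the only alignment ever used.
 */
#include <assert.h>
#include <stdio.h>

#define IO_TLB_SEGSIZE	128

/* The generically named macro the patch removes. */
#define OFFSET(val, align) ((unsigned long)((val) & ((align) - 1)))

/* The replacement helper with the alignment hardcoded. */
static inline unsigned long io_tlb_offset(unsigned long val)
{
	return val & (IO_TLB_SEGSIZE - 1);
}

int main(void)
{
	/* Same result for every slot index across a few segments. */
	for (unsigned long i = 0; i < 4 * IO_TLB_SEGSIZE; i++)
		assert(io_tlb_offset(i) == OFFSET(i, IO_TLB_SEGSIZE));

	/* Offsets wrap at each 128-slot segment boundary. */
	printf("io_tlb_offset(127) = %lu\n", io_tlb_offset(127)); /* 127 */
	printf("io_tlb_offset(128) = %lu\n", io_tlb_offset(128)); /* 0   */
	printf("io_tlb_offset(129) = %lu\n", io_tlb_offset(129)); /* 1   */
	return 0;
}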
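
The two reformatted loops are where the helper earns its keep: they walk
backwards through the free list and stop as soon as io_tlb_offset(i) equals
IO_TLB_SEGSIZE - 1, i.e. when i reaches the last slot of the previous
segment (or a used slot), so free runs never grow across a segment
boundary.  The toy model below is an editor's sketch, not kernel code: it
shrinks the segment size to 8 and ignores locking, physical addresses and
alignment of the search itself, keeping only the free-list counts that
these loops maintain.

/*
 * Toy model of the swiotlb free-list bookkeeping touched by the two
 * reformatted loops.  Each free slot i stores how many contiguous free
 * slots start at i within its own segment.
 */
#include <stdio.h>

#define SEGSIZE	8
#define NSLABS	16
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

static unsigned int list[NSLABS];

static unsigned long io_tlb_offset(unsigned long val)
{
	return val & (SEGSIZE - 1);
}

static void dump(const char *what)
{
	printf("%-12s", what);
	for (int i = 0; i < NSLABS; i++)
		printf(" %u", list[i]);
	printf("\n");
}

int main(void)
{
	int i, count, index = 2, nslots = 4;

	/* All slots free: a run starting at i reaches the end of i's segment. */
	for (i = 0; i < NSLABS; i++)
		list[i] = SEGSIZE - io_tlb_offset(i);
	dump("initial");

	/*
	 * "Map" nslots at index: zero the used slots, then shorten the runs
	 * of the preceding free slots.  The offset test stops the walk at
	 * the last slot of the previous segment; when index sits at the very
	 * start, i becomes -1, whose offset is also SEGSIZE - 1, so the walk
	 * never reads out of bounds.
	 */
	for (i = index; i < index + nslots; i++)
		list[i] = 0;
	for (count = 0, i = index - 1;
	     io_tlb_offset(i) != SEGSIZE - 1 && list[i]; i--)
		list[i] = ++count;
	dump("after map");

	/*
	 * "Unmap" them again: step 1 merges the freed slots with the run
	 * that follows them (unless that run starts a new segment), step 2
	 * merges with the preceding free slots, again stopping at the
	 * segment boundary.
	 */
	count = (index + nslots < ALIGN_UP(index + 1, SEGSIZE)) ?
		(int)list[index + nslots] : 0;
	for (i = index + nslots - 1; i >= index; i--)
		list[i] = ++count;
	for (i = index - 1;
	     io_tlb_offset(i) != SEGSIZE - 1 && list[i]; i--)
		list[i] = ++count;
	dump("after unmap");	/* identical to "initial" again */

	return 0;
}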