From patchwork Sat Aug 14 21:16:58 2021
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 497547
From: Yury Norov
To: Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-perf-users@vger.kernel.org, kvm@vger.kernel.org,
 "James E.J. Bottomley", Alexander Lobakin, Alexander Shishkin,
 Alexey Klimov, Andrea Merello, Andy Shevchenko,
 Arnaldo Carvalho de Melo, Arnd Bergmann, Ben Gardon,
 Benjamin Herrenschmidt, Brian Cain, Catalin Marinas, Christoph Lameter,
 Daniel Bristot de Oliveira, David Hildenbrand, Dennis Zhou,
 Geert Uytterhoeven, Heiko Carstens, Ian Rogers, Ingo Molnar,
 Jaegeuk Kim, Jakub Kicinski, Jiri Olsa, Joe Perches, Jonas Bonn,
 Leo Yan, Mark Rutland, Namhyung Kim, Palmer Dabbelt, Paolo Bonzini,
 Peter Xu, Peter Zijlstra, Petr Mladek, Rasmus Villemoes, Rich Felker,
 Samuel Mendoza-Jonas, Sean Christopherson, Sergey Senozhatsky,
 Shuah Khan, Stefan Kristiansson, Steven Rostedt, Tejun Heo,
 Thomas Bogendoerfer, Ulf Hansson, Will Deacon, Wolfram Sang,
 Yoshinori Sato, Yury Norov
Subject: [PATCH 02/17] bitops: move find_bit_*_le functions from le.h to find.h
Date: Sat, 14 Aug 2021 14:16:58 -0700
Message-Id: <20210814211713.180533-3-yury.norov@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210814211713.180533-1-yury.norov@gmail.com>
References: <20210814211713.180533-1-yury.norov@gmail.com>

It's convenient to have all find_bit declarations in one place.

Signed-off-by: Yury Norov
Tested-by: Wolfram Sang
---
 include/asm-generic/bitops/find.h | 69 +++++++++++++++++++++++++++++++
 include/asm-generic/bitops/le.h   | 64 ----------------------------
 2 files changed, 69 insertions(+), 64 deletions(-)

diff --git a/include/asm-generic/bitops/find.h b/include/asm-generic/bitops/find.h
index 835f959a25f2..91b1b23f2b0c 100644
--- a/include/asm-generic/bitops/find.h
+++ b/include/asm-generic/bitops/find.h
@@ -190,4 +190,73 @@ extern unsigned long find_next_clump8(unsigned long *clump,
 #define find_first_clump8(clump, bits, size) \
 	find_next_clump8((clump), (bits), (size), 0)
 
+#if defined(__LITTLE_ENDIAN)
+
+static inline unsigned long find_next_zero_bit_le(const void *addr,
+		unsigned long size, unsigned long offset)
+{
+	return find_next_zero_bit(addr, size, offset);
+}
+
+static inline unsigned long find_next_bit_le(const void *addr,
+		unsigned long size, unsigned long offset)
+{
+	return find_next_bit(addr, size, offset);
+}
+
+static inline unsigned long find_first_zero_bit_le(const void *addr,
+		unsigned long size)
+{
+	return find_first_zero_bit(addr, size);
+}
+
+#elif defined(__BIG_ENDIAN)
+
+#ifndef find_next_zero_bit_le
+static inline
+unsigned long find_next_zero_bit_le(const void *addr, unsigned
+		long size, unsigned long offset)
+{
+	if (small_const_nbits(size)) {
+		unsigned long val = *(const unsigned long *)addr;
+
+		if (unlikely(offset >= size))
+			return size;
+
+		val = swab(val) | ~GENMASK(size - 1, offset);
+		return val == ~0UL ? size : ffz(val);
+	}
+
+	return _find_next_bit(addr, NULL, size, offset, ~0UL, 1);
+}
+#endif
+
+#ifndef find_next_bit_le
+static inline
+unsigned long find_next_bit_le(const void *addr, unsigned
+		long size, unsigned long offset)
+{
+	if (small_const_nbits(size)) {
+		unsigned long val = *(const unsigned long *)addr;
+
+		if (unlikely(offset >= size))
+			return size;
+
+		val = swab(val) & GENMASK(size - 1, offset);
+		return val ? __ffs(val) : size;
+	}
+
+	return _find_next_bit(addr, NULL, size, offset, 0UL, 1);
+}
+#endif
+
+#ifndef find_first_zero_bit_le
+#define find_first_zero_bit_le(addr, size) \
+	find_next_zero_bit_le((addr), (size), 0)
+#endif
+
+#else
+#error "Please fix <asm/byteorder.h>"
+#endif
+
 #endif /*_ASM_GENERIC_BITOPS_FIND_H_ */
diff --git a/include/asm-generic/bitops/le.h b/include/asm-generic/bitops/le.h
index 5a28629cbf4d..d51beff60375 100644
--- a/include/asm-generic/bitops/le.h
+++ b/include/asm-generic/bitops/le.h
@@ -2,83 +2,19 @@
 #ifndef _ASM_GENERIC_BITOPS_LE_H_
 #define _ASM_GENERIC_BITOPS_LE_H_
 
-#include <asm-generic/bitops/find.h>
 #include <asm/types.h>
 #include <asm/byteorder.h>
-#include <linux/swab.h>
 
 #if defined(__LITTLE_ENDIAN)
 
 #define BITOP_LE_SWIZZLE	0
 
-static inline unsigned long find_next_zero_bit_le(const void *addr,
-		unsigned long size, unsigned long offset)
-{
-	return find_next_zero_bit(addr, size, offset);
-}
-
-static inline unsigned long find_next_bit_le(const void *addr,
-		unsigned long size, unsigned long offset)
-{
-	return find_next_bit(addr, size, offset);
-}
-
-static inline unsigned long find_first_zero_bit_le(const void *addr,
-		unsigned long size)
-{
-	return find_first_zero_bit(addr, size);
-}
-
 #elif defined(__BIG_ENDIAN)
 
 #define BITOP_LE_SWIZZLE	((BITS_PER_LONG-1) & ~0x7)
 
-#ifndef find_next_zero_bit_le
-static inline
-unsigned long find_next_zero_bit_le(const void *addr, unsigned
-		long size, unsigned long offset)
-{
-	if (small_const_nbits(size)) {
-		unsigned long val = *(const unsigned long *)addr;
-
-		if (unlikely(offset >= size))
-			return size;
-
-		val = swab(val) | ~GENMASK(size - 1, offset);
-		return val == ~0UL ? size : ffz(val);
-	}
-
-	return _find_next_bit(addr, NULL, size, offset, ~0UL, 1);
-}
-#endif
-
-#ifndef find_next_bit_le
-static inline
-unsigned long find_next_bit_le(const void *addr, unsigned
-		long size, unsigned long offset)
-{
-	if (small_const_nbits(size)) {
-		unsigned long val = *(const unsigned long *)addr;
-
-		if (unlikely(offset >= size))
-			return size;
-
-		val = swab(val) & GENMASK(size - 1, offset);
-		return val ? __ffs(val) : size;
-	}
-
-	return _find_next_bit(addr, NULL, size, offset, 0UL, 1);
-}
 #endif
 
-#ifndef find_first_zero_bit_le
-#define find_first_zero_bit_le(addr, size) \
-	find_next_zero_bit_le((addr), (size), 0)
-#endif
-
-#else
-#error "Please fix <asm/byteorder.h>"
-#endif
 
 static inline int test_bit_le(int nr, const void *addr)
 {
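
[Editor's note, not part of the patch: for readers unfamiliar with the big-endian
fast path being moved here, below is a minimal userspace sketch of what the
small_const_nbits() branch of find_next_bit_le() computes. The helper names
(demo_swab, demo_genmask, demo_find_next_bit_le) and the use of GCC/Clang
builtins are this example's own stand-ins for swab()/GENMASK()/__ffs(), and a
64-bit unsigned long is assumed.]

#include <stdio.h>

#define BITS_PER_LONG 64

/* Stand-in for the kernel's swab(): reverse the byte order of a word. */
static unsigned long demo_swab(unsigned long x)
{
	return __builtin_bswap64(x);
}

/* Stand-in for GENMASK(h, l): a mask with bits l..h set. */
static unsigned long demo_genmask(unsigned int h, unsigned int l)
{
	return (~0UL >> (BITS_PER_LONG - 1 - h)) & (~0UL << l);
}

/*
 * Single-word model of the moved fast path: byte-swap the word so that
 * little-endian bit numbering matches the native numbering, mask off bits
 * below @offset and at/above @size, then return the lowest set bit.
 */
static unsigned long demo_find_next_bit_le(unsigned long word,
					   unsigned long size,
					   unsigned long offset)
{
	if (offset >= size)
		return size;

	word = demo_swab(word) & demo_genmask(size - 1, offset);
	return word ? (unsigned long)__builtin_ctzl(word) : size;
}

int main(void)
{
	/*
	 * A little-endian bitmap with bits 0 and 9 set stores bytes
	 * 0x01 0x02 0x00 ...; loaded as one word on a big-endian CPU,
	 * that memory reads as the register value 0x0102000000000000.
	 */
	unsigned long be_view = 0x0102000000000000UL;

	printf("first set bit:     %lu\n", demo_find_next_bit_le(be_view, 64, 0));  /* 0 */
	printf("next set bit >= 1: %lu\n", demo_find_next_bit_le(be_view, 64, 1));  /* 9 */
	printf("none in [10, 64):  %lu\n", demo_find_next_bit_le(be_view, 64, 10)); /* 64 */
	return 0;
}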