From patchwork Tue Aug 24 13:33:49 2021
X-Patchwork-Submitter: Andy Shevchenko
X-Patchwork-Id: 502026
From: Andy Shevchenko
To: Mauro Carvalho Chehab, Sakari Ailus, Andy Shevchenko,
    linux-media@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Yong Zhi, Bingbu Cao, Dan Scally, Tianshu Qiu, Mauro Carvalho Chehab
Subject: [PATCH v1 1/3] lib/sort: Split out choose_swap_func() local helper
Date: Tue, 24 Aug 2021 16:33:49 +0300
Message-Id: <20210824133351.88179-1-andriy.shevchenko@linux.intel.com>

Some new code will need the same functionality, so split the swap function
selection out of sort_r() into a choose_swap_func() local helper.

Signed-off-by: Andy Shevchenko
Acked-by: Sakari Ailus
---
 lib/sort.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/lib/sort.c b/lib/sort.c
index aa18153864d2..d9b2f5b73620 100644
--- a/lib/sort.c
+++ b/lib/sort.c
@@ -151,6 +151,18 @@ static int do_cmp(const void *a, const void *b, cmp_r_func_t cmp, const void *pr
 	return cmp(a, b, priv);
 }
 
+static swap_func_t choose_swap_func(swap_func_t swap_func, void *base, size_t size)
+{
+	if (swap_func)
+		return swap_func;
+
+	if (is_aligned(base, size, 8))
+		return SWAP_WORDS_64;
+	if (is_aligned(base, size, 4))
+		return SWAP_WORDS_32;
+	return SWAP_BYTES;
+}
+
 /**
  * parent - given the offset of the child, find the offset of the parent.
  * @i: the offset of the heap element whose parent is sought.  Non-zero.
@@ -208,14 +220,7 @@ void sort_r(void *base, size_t num, size_t size,
 	if (!a)		/* num < 2 || size == 0 */
 		return;
 
-	if (!swap_func) {
-		if (is_aligned(base, size, 8))
-			swap_func = SWAP_WORDS_64;
-		else if (is_aligned(base, size, 4))
-			swap_func = SWAP_WORDS_32;
-		else
-			swap_func = SWAP_BYTES;
-	}
+	swap_func = choose_swap_func(swap_func, base, size);
 
 	/*
 	 * Loop invariants:

From patchwork Tue Aug 24 13:33:50 2021
X-Patchwork-Submitter: Andy Shevchenko
X-Patchwork-Id: 502605
From: Andy Shevchenko
To: Mauro Carvalho Chehab, Sakari Ailus, Andy Shevchenko,
    linux-media@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Yong Zhi, Bingbu Cao, Dan Scally, Tianshu Qiu, Mauro Carvalho Chehab
Subject: [PATCH v1 2/3] lib/sort: Introduce rotate() to circular shift an array of elements
Date: Tue, 24 Aug 2021 16:33:50 +0300
Message-Id: <20210824133351.88179-2-andriy.shevchenko@linux.intel.com>
In-Reply-To: <20210824133351.88179-1-andriy.shevchenko@linux.intel.com>
References: <20210824133351.88179-1-andriy.shevchenko@linux.intel.com>

In some cases we want to circularly shift an array of elements. Introduce
the rotate() helper for that.
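To illustrate the intended semantics, a minimal sketch (not part of the diff
below; rotate_example() and the values are made up for the example).
Rotating a five-element array by two positions advances every element by
two, so {1, 2, 3, 4, 5} becomes {3, 4, 5, 1, 2}:

	#include <linux/kernel.h>	/* ARRAY_SIZE() */
	#include <linux/sort.h>		/* rotate(), declared by this patch */

	/* Illustrative caller only, not part of the patch */
	static void rotate_example(void)
	{
		int a[5] = { 1, 2, 3, 4, 5 };

		/* Advance every element by two positions: {3, 4, 5, 1, 2} */
		rotate(a, ARRAY_SIZE(a), sizeof(*a), 2, NULL);
	}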
Signed-off-by: Andy Shevchenko
---
 include/linux/sort.h |  3 +++
 lib/sort.c           | 61 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)

diff --git a/include/linux/sort.h b/include/linux/sort.h
index b5898725fe9d..c881acb12ffc 100644
--- a/include/linux/sort.h
+++ b/include/linux/sort.h
@@ -13,4 +13,7 @@ void sort(void *base, size_t num, size_t size,
 	  cmp_func_t cmp_func,
 	  swap_func_t swap_func);
 
+void rotate(void *base, size_t num, size_t size, size_t by,
+	    swap_func_t swap_func);
+
 #endif
diff --git a/lib/sort.c b/lib/sort.c
index d9b2f5b73620..b9243f8db34b 100644
--- a/lib/sort.c
+++ b/lib/sort.c
@@ -14,6 +14,7 @@
 #include 
 #include 
+#include 
 #include 
 
 /**
@@ -275,3 +276,63 @@ void sort(void *base, size_t num, size_t size,
 	return sort_r(base, num, size, _CMP_WRAPPER, swap_func, cmp_func);
 }
 EXPORT_SYMBOL(sort);
+
+/**
+ * rotate - rotate an array of elements by a number of elements
+ * @base: pointer to the data to rotate
+ * @num: number of elements
+ * @size: size of each element
+ * @by: number of elements to rotate by
+ * @swap_func: pointer to swap function or NULL
+ *
+ * Helper function to advance all the elements of a circular buffer by
+ * @by positions.
+ */
+void rotate(void *base, size_t num, size_t size, size_t by,
+	    swap_func_t swap_func)
+{
+	struct {
+		size_t begin, end;
+	} arr[2] = {
+		{ .begin = 0, .end = by - 1 },
+		{ .begin = by, .end = num - 1 },
+	};
+
+	swap_func = choose_swap_func(swap_func, base, size);
+
+#define CHUNK_SIZE(a) ((a)->end - (a)->begin + 1)
+
+	/* Loop as long as we have out-of-place entries */
+	while (CHUNK_SIZE(&arr[0]) && CHUNK_SIZE(&arr[1])) {
+		size_t size0, i;
+
+		/*
+		 * Find the number of entries that can be arranged on this
+		 * iteration.
+		 */
+		size0 = min(CHUNK_SIZE(&arr[0]), CHUNK_SIZE(&arr[1]));
+
+		/* Swap the entries in two parts of the array */
+		for (i = 0; i < size0; i++) {
+			void *a = base + size * (arr[0].begin + i);
+			void *b = base + size * (arr[1].begin + i);
+
+			do_swap(a, b, size, swap_func);
+		}
+
+		if (CHUNK_SIZE(&arr[0]) > CHUNK_SIZE(&arr[1])) {
+			/* The end of the first array remains unarranged */
+			arr[0].begin += size0;
+		} else {
+			/*
+			 * The first array is fully arranged so we proceed
+			 * handling the next one.
+			 */
+			arr[0].begin = arr[1].begin;
+			arr[0].end = arr[1].begin + size0 - 1;
+			arr[1].begin += size0;
+		}
+	}
+#undef CHUNK_SIZE
+}
+EXPORT_SYMBOL(rotate);

From patchwork Tue Aug 24 13:33:51 2021
X-Patchwork-Submitter: Andy Shevchenko
X-Patchwork-Id: 502027
From: Andy Shevchenko
To: Mauro Carvalho Chehab, Sakari Ailus, Andy Shevchenko,
    linux-media@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Yong Zhi, Bingbu Cao, Dan Scally, Tianshu Qiu, Mauro Carvalho Chehab
Subject: [RFT, PATCH v1 3/3] media: ipu3-cio2: Replace custom implementation of rotate()
Date: Tue, 24 Aug 2021 16:33:51 +0300
Message-Id: <20210824133351.88179-3-andriy.shevchenko@linux.intel.com>
In-Reply-To: <20210824133351.88179-1-andriy.shevchenko@linux.intel.com>
References: <20210824133351.88179-1-andriy.shevchenko@linux.intel.com>

The library rotate() helper is more efficient than the driver's custom
implementation. Replace the latter with the former.

Signed-off-by: Andy Shevchenko
---
This should be a copy'n'paste of the algorithm, with the slight difference
that the library helper copies 4 or 8 bytes at a time where the alignment
allows. Nonetheless it has to be tested. Hence, RFT.
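For reference, a sketch of the intended mapping between the old driver
helper and the new library call (illustrative only, not part of the diff
below; the argument order follows the rotate() prototype from patch 2/3):

	/* Old driver-local helper: arrange(ptr, elem_size, elems, start) */
	arrange(q->fbpt, sizeof(struct cio2_fbpt_entry) * CIO2_MAX_LOPS,
		CIO2_MAX_BUFFERS, j);

	/* New library helper: rotate(base, num, size, by, swap_func) */
	rotate(q->fbpt, CIO2_MAX_BUFFERS,
	       sizeof(struct cio2_fbpt_entry) * CIO2_MAX_LOPS, j, NULL);

Note that rotate() takes the element count (@num) before the element size
(@size), i.e. the opposite order from arrange(), and a NULL @swap_func lets
the library pick a word-wide swap when the alignment allows.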
(Obviously no hurry with this, we are close to the release.)

 drivers/media/pci/intel/ipu3/ipu3-cio2-main.c | 59 ++-----------------
 1 file changed, 5 insertions(+), 54 deletions(-)

diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c b/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
index 8bcba168cc57..0fd6040d2f2d 100644
--- a/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
+++ b/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1877,56 +1878,6 @@ static int __maybe_unused cio2_runtime_resume(struct device *dev)
 	return 0;
 }
 
-/*
- * Helper function to advance all the elements of a circular buffer by "start"
- * positions
- */
-static void arrange(void *ptr, size_t elem_size, size_t elems, size_t start)
-{
-	struct {
-		size_t begin, end;
-	} arr[2] = {
-		{ 0, start - 1 },
-		{ start, elems - 1 },
-	};
-
-#define CHUNK_SIZE(a) ((a)->end - (a)->begin + 1)
-
-	/* Loop as long as we have out-of-place entries */
-	while (CHUNK_SIZE(&arr[0]) && CHUNK_SIZE(&arr[1])) {
-		size_t size0, i;
-
-		/*
-		 * Find the number of entries that can be arranged on this
-		 * iteration.
-		 */
-		size0 = min(CHUNK_SIZE(&arr[0]), CHUNK_SIZE(&arr[1]));
-
-		/* Swap the entries in two parts of the array. */
-		for (i = 0; i < size0; i++) {
-			u8 *d = ptr + elem_size * (arr[1].begin + i);
-			u8 *s = ptr + elem_size * (arr[0].begin + i);
-			size_t j;
-
-			for (j = 0; j < elem_size; j++)
-				swap(d[j], s[j]);
-		}
-
-		if (CHUNK_SIZE(&arr[0]) > CHUNK_SIZE(&arr[1])) {
-			/* The end of the first array remains unarranged. */
-			arr[0].begin += size0;
-		} else {
-			/*
-			 * The first array is fully arranged so we proceed
-			 * handling the next one.
-			 */
-			arr[0].begin = arr[1].begin;
-			arr[0].end = arr[1].begin + size0 - 1;
-			arr[1].begin += size0;
-		}
-	}
-}
-
 static void cio2_fbpt_rearrange(struct cio2_device *cio2, struct cio2_queue *q)
 {
 	unsigned int i, j;
@@ -1940,10 +1891,10 @@ static void cio2_fbpt_rearrange(struct cio2_device *cio2, struct cio2_queue *q)
 		return;
 
 	if (j) {
-		arrange(q->fbpt, sizeof(struct cio2_fbpt_entry) * CIO2_MAX_LOPS,
-			CIO2_MAX_BUFFERS, j);
-		arrange(q->bufs, sizeof(struct cio2_buffer *),
-			CIO2_MAX_BUFFERS, j);
+		rotate(q->fbpt, CIO2_MAX_BUFFERS,
+		       sizeof(struct cio2_fbpt_entry) * CIO2_MAX_LOPS, j, NULL);
+		rotate(q->bufs, CIO2_MAX_BUFFERS, sizeof(struct cio2_buffer *),
+		       j, NULL);
 	}
 
 	/*