From patchwork Wed Oct 11 06:14:06 2023
X-Patchwork-Submitter: Herve Codina
X-Patchwork-Id: 732279
From: Herve Codina
To: Herve Codina, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Andrew Lunn, Rob Herring, Krzysztof Kozlowski,
    Conor Dooley, Lee Jones, Linus Walleij, Qiang Zhao, Li Yang,
    Liam Girdwood, Mark Brown, Jaroslav Kysela, Takashi Iwai,
    Shengjiu Wang, Xiubo Li, Fabio Estevam, Nicolin Chen,
    Christophe Leroy, Randy Dunlap
Cc: netdev@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-gpio@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    alsa-devel@alsa-project.org, Simon Horman, Christophe JAILLET,
    Thomas Petazzoni
Subject: [PATCH v8 02/30] soc: fsl: cpm1: qmc: Fix __iomem addresses declaration
Date: Wed, 11 Oct 2023 08:14:06 +0200
Message-ID: <20231011061437.64213-3-herve.codina@bootlin.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20231011061437.64213-1-herve.codina@bootlin.com>
References: <20231011061437.64213-1-herve.codina@bootlin.com>
X-Mailing-List: linux-gpio@vger.kernel.org

Running sparse (make C=1) on qmc.c raises a lot of warnings such as:
  ...
  warning: incorrect type in assignment (different address spaces)
     expected struct cpm_buf_desc [usertype] *[noderef] __iomem bd
     got struct cpm_buf_desc [noderef] [usertype] __iomem *txbd_free
  ...

Indeed, some variables were declared as 'type *__iomem var' instead of
'type __iomem *var'. Use the correct declaration to remove these
warnings.
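To illustrate the difference (a minimal sketch, not code taken from
qmc.c; the struct and function names below are made up): the __iomem
annotation binds to the type on its left, much like 'const', so
'void __iomem *p' declares a pointer into I/O memory while
'void *__iomem p' annotates the pointer variable itself. Sparse then
flags assignments and calls that mix the two address spaces, which is
exactly the warning quoted above:

  /* Illustration only, not part of this patch. */
  #include <linux/io.h>
  #include <linux/types.h>

  struct qmc_example {			/* hypothetical struct */
  	void __iomem *regs;		/* pointer to I/O memory: what the io helpers expect */
  	void *__iomem bad_regs;		/* annotates the pointer itself; uses trip sparse */
  };

  static u16 qmc_example_read(void __iomem *addr)
  {
  	return ioread16be(addr);	/* clean under 'make C=1' */
  }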
Fixes: 3178d58e0b97 ("soc: fsl: cpm1: Add support for QMC")
Signed-off-by: Herve Codina
Reviewed-by: Christophe Leroy
---
 drivers/soc/fsl/qe/qmc.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
index b3c292c9a14e..7ad0d77f1740 100644
--- a/drivers/soc/fsl/qe/qmc.c
+++ b/drivers/soc/fsl/qe/qmc.c
@@ -175,7 +175,7 @@ struct qmc_chan {
 	struct list_head list;
 	unsigned int id;
 	struct qmc *qmc;
-	void *__iomem s_param;
+	void __iomem *s_param;
 	enum qmc_mode mode;
 	u64 tx_ts_mask;
 	u64 rx_ts_mask;
@@ -203,9 +203,9 @@ struct qmc {
 	struct device *dev;
 	struct tsa_serial *tsa_serial;
-	void *__iomem scc_regs;
-	void *__iomem scc_pram;
-	void *__iomem dpram;
+	void __iomem *scc_regs;
+	void __iomem *scc_pram;
+	void __iomem *dpram;
 	u16 scc_pram_offset;
 	cbd_t __iomem *bd_table;
 	dma_addr_t bd_dma_addr;
@@ -218,37 +218,37 @@ struct qmc {
 	struct qmc_chan *chans[64];
 };
 
-static inline void qmc_write16(void *__iomem addr, u16 val)
+static inline void qmc_write16(void __iomem *addr, u16 val)
 {
 	iowrite16be(val, addr);
 }
 
-static inline u16 qmc_read16(void *__iomem addr)
+static inline u16 qmc_read16(void __iomem *addr)
 {
 	return ioread16be(addr);
 }
 
-static inline void qmc_setbits16(void *__iomem addr, u16 set)
+static inline void qmc_setbits16(void __iomem *addr, u16 set)
 {
 	qmc_write16(addr, qmc_read16(addr) | set);
 }
 
-static inline void qmc_clrbits16(void *__iomem addr, u16 clr)
+static inline void qmc_clrbits16(void __iomem *addr, u16 clr)
 {
 	qmc_write16(addr, qmc_read16(addr) & ~clr);
 }
 
-static inline void qmc_write32(void *__iomem addr, u32 val)
+static inline void qmc_write32(void __iomem *addr, u32 val)
 {
 	iowrite32be(val, addr);
 }
 
-static inline u32 qmc_read32(void *__iomem addr)
+static inline u32 qmc_read32(void __iomem *addr)
 {
 	return ioread32be(addr);
 }
 
-static inline void qmc_setbits32(void *__iomem addr, u32 set)
+static inline void qmc_setbits32(void __iomem *addr, u32 set)
 {
 	qmc_write32(addr, qmc_read32(addr) | set);
 }
@@ -318,7 +318,7 @@ int qmc_chan_write_submit(struct qmc_chan *chan, dma_addr_t addr, size_t length,
 {
 	struct qmc_xfer_desc *xfer_desc;
 	unsigned long flags;
-	cbd_t *__iomem bd;
+	cbd_t __iomem *bd;
 	u16 ctrl;
 	int ret;
 
@@ -374,7 +374,7 @@ static void qmc_chan_write_done(struct qmc_chan *chan)
 	void (*complete)(void *context);
 	unsigned long flags;
 	void *context;
-	cbd_t *__iomem bd;
+	cbd_t __iomem *bd;
 	u16 ctrl;
 
 	/*
@@ -425,7 +425,7 @@ int qmc_chan_read_submit(struct qmc_chan *chan, dma_addr_t addr, size_t length,
 {
 	struct qmc_xfer_desc *xfer_desc;
 	unsigned long flags;
-	cbd_t *__iomem bd;
+	cbd_t __iomem *bd;
 	u16 ctrl;
 	int ret;
 
@@ -488,7 +488,7 @@ static void qmc_chan_read_done(struct qmc_chan *chan)
 	void (*complete)(void *context, size_t size);
 	struct qmc_xfer_desc *xfer_desc;
 	unsigned long flags;
-	cbd_t *__iomem bd;
+	cbd_t __iomem *bd;
 	void *context;
 	u16 datalen;
 	u16 ctrl;
@@ -663,7 +663,7 @@ static void qmc_chan_reset_rx(struct qmc_chan *chan)
 {
 	struct qmc_xfer_desc *xfer_desc;
 	unsigned long flags;
-	cbd_t *__iomem bd;
+	cbd_t __iomem *bd;
 	u16 ctrl;
 
 	spin_lock_irqsave(&chan->rx_lock, flags);
@@ -694,7 +694,7 @@ static void qmc_chan_reset_tx(struct qmc_chan *chan)
 {
 	struct qmc_xfer_desc *xfer_desc;
 	unsigned long flags;
-	cbd_t *__iomem bd;
+	cbd_t __iomem *bd;
 	u16 ctrl;
 
 	spin_lock_irqsave(&chan->tx_lock, flags);