From patchwork Thu Feb 11 01:46:18 2021
X-Patchwork-Submitter: Geetha sowjanya
X-Patchwork-Id: 381613
From: Geetha sowjanya
Subject: [net-next v5 01/14] octeontx2-af: cn10k: Add mbox support for CN10K platform
Date: Thu, 11 Feb 2021 07:16:18 +0530
Message-ID: <20210211014631.9578-2-gakula@marvell.com>
In-Reply-To: <20210211014631.9578-1-gakula@marvell.com>
References: <20210211014631.9578-1-gakula@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Subbaraya Sundeep

Firmware allocates memory regions for PFs and VFs in DRAM. The PF's
memory region is used for the AF-PF and PF-VF mailboxes, and this mbox
facilitates communication between AF and PF and between PF and VF.

On the CN10K platform:
The DRAM region allocated to a PF is enumerated as PF BAR4 memory.
PF BAR4 contains the AF-PF mbox region followed by its VFs' mbox
regions. The AF-PF mbox region base address is configured at
RVU_AF_PFX_BAR4_ADDR, and the PF-VF mailbox base address is configured
at RVU_PF(x)_VF_MBOX_ADDR = RVU_AF_PF()_BAR4_ADDR + 64KB. A PF accesses
its mbox region via BAR4, whereas a VF accesses the PF-VF DRAM
mailboxes via BAR2 indirect access.

On the CN9XX platform:
The mailbox region in DRAM is divided into two parts, the AF-PF mbox
region and the PF-VF mbox region, i.e. all PFs' mbox regions are
contiguous, and similarly all VFs'. The base address of the AF-PF mbox
region is configured at RVU_AF_PF_BAR4_ADDR; the AF-PF1 mbox address,
for example, works out to RVU_AF_PF_BAR4_ADDR + 1 * mbox size. The base
address of the PF-VF mbox region for each PF is configured at
RVU_AF_PF(0..15)_VF_BAR4_ADDR. A PF accesses its own mbox region via
BAR4 and its VFs' mbox regions via the address in the
RVU_PF_VF_BAR4_ADDR register, whereas a VF accesses its mbox region
via BAR4.

This patch changes mbox initialization to support both the CN9XX and
CN10K platforms. It also adds the CN10K PTP subsystem and device IDs
to the PTP driver's ID table.

Signed-off-by: Subbaraya Sundeep
Signed-off-by: Geetha sowjanya
Signed-off-by: Sunil Goutham
---
 .../net/ethernet/marvell/octeontx2/af/mbox.c  |  59 +++++++--
 .../net/ethernet/marvell/octeontx2/af/mbox.h  |   4 +
 .../net/ethernet/marvell/octeontx2/af/ptp.c   |  12 ++
 .../net/ethernet/marvell/octeontx2/af/rvu.c   | 122 ++++++++++++++----
 .../net/ethernet/marvell/octeontx2/af/rvu.h   |  23 ++++
 .../ethernet/marvell/octeontx2/af/rvu_reg.h   |   7 +
 6 files changed, 191 insertions(+), 36 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
index bbabb8e64201..0a37ca96aab8 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
@@ -20,9 +20,9 @@ static const u16 msgs_offset = ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
 void __otx2_mbox_reset(struct otx2_mbox *mbox, int devid)
 {
-	void *hw_mbase = mbox->hwbase + (devid * MBOX_SIZE);
 	struct otx2_mbox_dev *mdev = &mbox->dev[devid];
 	struct mbox_hdr *tx_hdr, *rx_hdr;
+	void *hw_mbase = mdev->hwbase;

 	tx_hdr = hw_mbase + mbox->tx_start;
 	rx_hdr = hw_mbase + mbox->rx_start;
@@ -56,12 +56,9 @@ void otx2_mbox_destroy(struct otx2_mbox *mbox)
 }
 EXPORT_SYMBOL(otx2_mbox_destroy);

-int otx2_mbox_init(struct otx2_mbox *mbox, void *hwbase, struct pci_dev *pdev,
-		   void *reg_base, int direction, int ndevs)
+static int otx2_mbox_setup(struct otx2_mbox *mbox, struct pci_dev *pdev,
+			   void *reg_base, int direction, int ndevs)
 {
-	struct otx2_mbox_dev *mdev;
-	int devid;
-
 	switch (direction) {
 	case MBOX_DIR_AFPF:
 	case MBOX_DIR_PFVF:
@@ -121,7 +118,6 @@ int otx2_mbox_init(struct otx2_mbox *mbox, void *hwbase, struct pci_dev *pdev,
 	}

 	mbox->reg_base = reg_base;
-	mbox->hwbase = hwbase;
 	mbox->pdev = pdev;

 	mbox->dev = kcalloc(ndevs, sizeof(struct otx2_mbox_dev), GFP_KERNEL);
@@ -129,11 +125,27 @@ int otx2_mbox_init(struct otx2_mbox *mbox, void *hwbase, struct pci_dev *pdev,
 		otx2_mbox_destroy(mbox);
 		return -ENOMEM;
 	}
-
 	mbox->ndevs = ndevs;
+
+	return 0;
+}
+
+int otx2_mbox_init(struct otx2_mbox *mbox, void *hwbase, struct pci_dev *pdev,
+		   void *reg_base, int direction, int ndevs)
+{
+	struct otx2_mbox_dev
*mdev; + int devid, err; + + err = otx2_mbox_setup(mbox, pdev, reg_base, direction, ndevs); + if (err) + return err; + + mbox->hwbase = hwbase; + for (devid = 0; devid < ndevs; devid++) { mdev = &mbox->dev[devid]; mdev->mbase = mbox->hwbase + (devid * MBOX_SIZE); + mdev->hwbase = mdev->mbase; spin_lock_init(&mdev->mbox_lock); /* Init header to reset value */ otx2_mbox_reset(mbox, devid); @@ -143,6 +155,35 @@ int otx2_mbox_init(struct otx2_mbox *mbox, void *hwbase, struct pci_dev *pdev, } EXPORT_SYMBOL(otx2_mbox_init); +/* Initialize mailbox with the set of mailbox region addresses + * in the array hwbase. + */ +int otx2_mbox_regions_init(struct otx2_mbox *mbox, void **hwbase, + struct pci_dev *pdev, void *reg_base, + int direction, int ndevs) +{ + struct otx2_mbox_dev *mdev; + int devid, err; + + err = otx2_mbox_setup(mbox, pdev, reg_base, direction, ndevs); + if (err) + return err; + + mbox->hwbase = hwbase[0]; + + for (devid = 0; devid < ndevs; devid++) { + mdev = &mbox->dev[devid]; + mdev->mbase = hwbase[devid]; + mdev->hwbase = hwbase[devid]; + spin_lock_init(&mdev->mbox_lock); + /* Init header to reset value */ + otx2_mbox_reset(mbox, devid); + } + + return 0; +} +EXPORT_SYMBOL(otx2_mbox_regions_init); + int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid) { unsigned long timeout = jiffies + msecs_to_jiffies(MBOX_RSP_TIMEOUT); @@ -175,9 +216,9 @@ EXPORT_SYMBOL(otx2_mbox_busy_poll_for_rsp); void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid) { - void *hw_mbase = mbox->hwbase + (devid * MBOX_SIZE); struct otx2_mbox_dev *mdev = &mbox->dev[devid]; struct mbox_hdr *tx_hdr, *rx_hdr; + void *hw_mbase = mdev->hwbase; tx_hdr = hw_mbase + mbox->tx_start; rx_hdr = hw_mbase + mbox->rx_start; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index a0fa44941204..5a08f3e13933 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -52,6 +52,7 @@ struct otx2_mbox_dev { void *mbase; /* This dev's mbox region */ + void *hwbase; spinlock_t mbox_lock; u16 msg_size; /* Total msg size to be sent */ u16 rsp_size; /* Total rsp size to be sure the reply is ok */ @@ -98,6 +99,9 @@ void otx2_mbox_destroy(struct otx2_mbox *mbox); int otx2_mbox_init(struct otx2_mbox *mbox, void __force *hwbase, struct pci_dev *pdev, void __force *reg_base, int direction, int ndevs); +int otx2_mbox_regions_init(struct otx2_mbox *mbox, void __force **hwbase, + struct pci_dev *pdev, void __force *reg_base, + int direction, int ndevs); void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid); int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid); int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/ptp.c b/drivers/net/ethernet/marvell/octeontx2/af/ptp.c index f69f4f35ae48..1ee37853f338 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/ptp.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/ptp.c @@ -21,6 +21,9 @@ #define PCI_SUBSYS_DEVID_OCTX2_95XX_PTP 0xB300 #define PCI_SUBSYS_DEVID_OCTX2_LOKI_PTP 0xB400 #define PCI_SUBSYS_DEVID_OCTX2_95MM_PTP 0xB500 +#define PCI_SUBSYS_DEVID_CN10K_A_PTP 0xB900 +#define PCI_SUBSYS_DEVID_CNF10K_A_PTP 0xBA00 +#define PCI_SUBSYS_DEVID_CNF10K_B_PTP 0xBC00 #define PCI_DEVID_OCTEONTX2_RST 0xA085 #define PCI_PTP_BAR_NO 0 @@ -234,6 +237,15 @@ static const struct pci_device_id ptp_id_table[] = { { PCI_DEVICE_SUB(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_PTP, PCI_VENDOR_ID_CAVIUM, 
PCI_SUBSYS_DEVID_OCTX2_95MM_PTP) }, + { PCI_DEVICE_SUB(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_PTP, + PCI_VENDOR_ID_CAVIUM, + PCI_SUBSYS_DEVID_CN10K_A_PTP) }, + { PCI_DEVICE_SUB(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_PTP, + PCI_VENDOR_ID_CAVIUM, + PCI_SUBSYS_DEVID_CNF10K_A_PTP) }, + { PCI_DEVICE_SUB(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_PTP, + PCI_VENDOR_ID_CAVIUM, + PCI_SUBSYS_DEVID_CNF10K_B_PTP) }, { 0, } }; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index 50c2a1d800f4..1729a2ad093e 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -78,6 +78,9 @@ static void rvu_setup_hw_capabilities(struct rvu *rvu) if (is_rvu_96xx_A0(rvu)) hw->cap.nix_rx_multicast = false; } + + if (!is_rvu_otx2(rvu)) + hw->cap.per_pf_mbox_regs = true; } /* Poll a RVU block's register 'offset', for a 'zero' @@ -1936,41 +1939,105 @@ static inline void rvu_afvf_mbox_up_handler(struct work_struct *work) __rvu_mbox_up_handler(mwork, TYPE_AFVF); } +static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr, + int num, int type) +{ + struct rvu_hwinfo *hw = rvu->hw; + int region; + u64 bar4; + + /* For cn10k platform VF mailbox regions of a PF follows after the + * PF <-> AF mailbox region. Whereas for Octeontx2 it is read from + * RVU_PF_VF_BAR4_ADDR register. + */ + if (type == TYPE_AFVF) { + for (region = 0; region < num; region++) { + if (hw->cap.per_pf_mbox_regs) { + bar4 = rvu_read64(rvu, BLKADDR_RVUM, + RVU_AF_PFX_BAR4_ADDR(0)) + + MBOX_SIZE; + bar4 += region * MBOX_SIZE; + } else { + bar4 = rvupf_read64(rvu, RVU_PF_VF_BAR4_ADDR); + bar4 += region * MBOX_SIZE; + } + mbox_addr[region] = (void *)ioremap_wc(bar4, MBOX_SIZE); + if (!mbox_addr[region]) + goto error; + } + return 0; + } + + /* For cn10k platform AF <-> PF mailbox region of a PF is read from per + * PF registers. Whereas for Octeontx2 it is read from + * RVU_AF_PF_BAR4_ADDR register. 
+ */ + for (region = 0; region < num; region++) { + if (hw->cap.per_pf_mbox_regs) { + bar4 = rvu_read64(rvu, BLKADDR_RVUM, + RVU_AF_PFX_BAR4_ADDR(region)); + } else { + bar4 = rvu_read64(rvu, BLKADDR_RVUM, + RVU_AF_PF_BAR4_ADDR); + bar4 += region * MBOX_SIZE; + } + mbox_addr[region] = (void *)ioremap_wc(bar4, MBOX_SIZE); + if (!mbox_addr[region]) + goto error; + } + return 0; + +error: + while (region--) + iounmap((void __iomem *)mbox_addr[region]); + return -ENOMEM; +} + static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, int type, int num, void (mbox_handler)(struct work_struct *), void (mbox_up_handler)(struct work_struct *)) { - void __iomem *hwbase = NULL, *reg_base; - int err, i, dir, dir_up; + int err = -EINVAL, i, dir, dir_up; + void __iomem *reg_base; struct rvu_work *mwork; + void **mbox_regions; const char *name; - u64 bar4_addr; + + mbox_regions = kcalloc(num, sizeof(void *), GFP_KERNEL); + if (!mbox_regions) + return -ENOMEM; switch (type) { case TYPE_AFPF: name = "rvu_afpf_mailbox"; - bar4_addr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PF_BAR4_ADDR); dir = MBOX_DIR_AFPF; dir_up = MBOX_DIR_AFPF_UP; reg_base = rvu->afreg_base; + err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFPF); + if (err) + goto free_regions; break; case TYPE_AFVF: name = "rvu_afvf_mailbox"; - bar4_addr = rvupf_read64(rvu, RVU_PF_VF_BAR4_ADDR); dir = MBOX_DIR_PFVF; dir_up = MBOX_DIR_PFVF_UP; reg_base = rvu->pfreg_base; + err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFVF); + if (err) + goto free_regions; break; default: - return -EINVAL; + return err; } mw->mbox_wq = alloc_workqueue(name, WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM, num); - if (!mw->mbox_wq) - return -ENOMEM; + if (!mw->mbox_wq) { + err = -ENOMEM; + goto unmap_regions; + } mw->mbox_wrk = devm_kcalloc(rvu->dev, num, sizeof(struct rvu_work), GFP_KERNEL); @@ -1986,23 +2053,13 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, goto exit; } - /* Mailbox is a reserved memory (in RAM) region shared between - * RVU devices, shouldn't be mapped as device memory to allow - * unaligned accesses. 
- */ - hwbase = ioremap_wc(bar4_addr, MBOX_SIZE * num); - if (!hwbase) { - dev_err(rvu->dev, "Unable to map mailbox region\n"); - err = -ENOMEM; - goto exit; - } - - err = otx2_mbox_init(&mw->mbox, hwbase, rvu->pdev, reg_base, dir, num); + err = otx2_mbox_regions_init(&mw->mbox, mbox_regions, rvu->pdev, + reg_base, dir, num); if (err) goto exit; - err = otx2_mbox_init(&mw->mbox_up, hwbase, rvu->pdev, - reg_base, dir_up, num); + err = otx2_mbox_regions_init(&mw->mbox_up, mbox_regions, rvu->pdev, + reg_base, dir_up, num); if (err) goto exit; @@ -2015,25 +2072,36 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw, mwork->rvu = rvu; INIT_WORK(&mwork->work, mbox_up_handler); } - + kfree(mbox_regions); return 0; + exit: - if (hwbase) - iounmap((void __iomem *)hwbase); destroy_workqueue(mw->mbox_wq); +unmap_regions: + while (num--) + iounmap((void __iomem *)mbox_regions[num]); +free_regions: + kfree(mbox_regions); return err; } static void rvu_mbox_destroy(struct mbox_wq_info *mw) { + struct otx2_mbox *mbox = &mw->mbox; + struct otx2_mbox_dev *mdev; + int devid; + if (mw->mbox_wq) { flush_workqueue(mw->mbox_wq); destroy_workqueue(mw->mbox_wq); mw->mbox_wq = NULL; } - if (mw->mbox.hwbase) - iounmap((void __iomem *)mw->mbox.hwbase); + for (devid = 0; devid < mbox->ndevs; devid++) { + mdev = &mbox->dev[devid]; + if (mdev->hwbase) + iounmap((void __iomem *)mdev->hwbase); + } otx2_mbox_destroy(&mw->mbox); otx2_mbox_destroy(&mw->mbox_up); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index ce931d86600b..381055a6b6a4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -25,6 +25,7 @@ /* Subsystem Device ID */ #define PCI_SUBSYS_DEVID_96XX 0xB200 +#define PCI_SUBSYS_DEVID_CN10K_A 0xB900 /* PCI BAR nos */ #define PCI_AF_REG_BAR_NUM 0 @@ -303,6 +304,7 @@ struct hw_cap { bool nix_shaping; /* Is shaping and coloring supported */ bool nix_tx_link_bp; /* Can link backpressure TL queues ? */ bool nix_rx_multicast; /* Rx packet replication support */ + bool per_pf_mbox_regs; /* PF mbox specified in per PF registers ? */ }; struct rvu_hwinfo { @@ -472,6 +474,27 @@ static inline bool is_rvu_96xx_B0(struct rvu *rvu) (pdev->subsystem_device == PCI_SUBSYS_DEVID_96XX); } +/* REVID for PCIe devices. 
+ * Bits 0..1: minor pass, bit 3..2: major pass
+ * bits 7..4: midr id
+ */
+#define PCI_REVISION_ID_96XX 0x00
+#define PCI_REVISION_ID_95XX 0x10
+#define PCI_REVISION_ID_LOKI 0x20
+#define PCI_REVISION_ID_98XX 0x30
+#define PCI_REVISION_ID_95XXMM 0x40
+
+static inline bool is_rvu_otx2(struct rvu *rvu)
+{
+	struct pci_dev *pdev = rvu->pdev;
+
+	u8 midr = pdev->revision & 0xF0;
+
+	return (midr == PCI_REVISION_ID_96XX || midr == PCI_REVISION_ID_95XX ||
+		midr == PCI_REVISION_ID_LOKI || midr == PCI_REVISION_ID_98XX ||
+		midr == PCI_REVISION_ID_95XXMM);
+}
+
 /* Function Prototypes
  * RVU
  */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index 79a6dcf0e3c0..78395c740c39 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -44,6 +44,11 @@
 #define RVU_AF_PFME_INT_W1S		(0x28c8)
 #define RVU_AF_PFME_INT_ENA_W1S		(0x28d0)
 #define RVU_AF_PFME_INT_ENA_W1C		(0x28d8)
+#define RVU_AF_PFX_BAR4_ADDR(a)		(0x5000 | (a) << 4)
+#define RVU_AF_PFX_BAR4_CFG(a)		(0x5200 | (a) << 4)
+#define RVU_AF_PFX_VF_BAR4_ADDR(a)	(0x5400 | (a) << 4)
+#define RVU_AF_PFX_VF_BAR4_CFG(a)	(0x5600 | (a) << 4)
+#define RVU_AF_PFX_LMTLINE_ADDR(a)	(0x5800 | (a) << 4)

 /* Admin function's privileged PF/VF registers */
 #define RVU_PRIV_CONST			(0x8000000)
@@ -100,6 +105,8 @@
 #define RVU_PF_MSIX_VECX_ADDR(a)	(0x000 | (a) << 4)
 #define RVU_PF_MSIX_VECX_CTL(a)		(0x008 | (a) << 4)
 #define RVU_PF_MSIX_PBAX(a)		(0xF0000 | (a) << 3)
+#define RVU_PF_VF_MBOX_ADDR		(0xC40)
+#define RVU_PF_LMTLINE_ADDR		(0xC48)

 /* RVU VF registers */
 #define RVU_VF_VFPF_MBOX0		(0x00000)

From patchwork Thu Feb 11 01:46:19 2021
X-Patchwork-Submitter: Geetha sowjanya
X-Patchwork-Id: 382375
From: Geetha sowjanya
Subject: [net-next v5 02/14] octeontx2-pf: cn10k: Add mbox support for CN10K
Date: Thu, 11 Feb 2021 07:16:19 +0530
Message-ID: <20210211014631.9578-3-gakula@marvell.com>
In-Reply-To: <20210211014631.9578-1-gakula@marvell.com>
References: <20210211014631.9578-1-gakula@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Subbaraya Sundeep

Firmware allocates memory regions for PFs and VFs in DRAM. The PF's
memory region is used for the AF-PF and PF-VF mailboxes, and this mbox
facilitates communication between AF and PF and between PF and VF.

On the CN10K platform:
The DRAM region allocated to a PF is enumerated as PF BAR4 memory.
PF BAR4 contains the AF-PF mbox region followed by its VFs' mbox
regions. The AF-PF mbox region base address is configured at
RVU_AF_PFX_BAR4_ADDR, and the PF-VF mailbox base address is configured
at RVU_PF(x)_VF_MBOX_ADDR = RVU_AF_PF()_BAR4_ADDR + 64KB. A PF accesses
its mbox region via BAR4, whereas a VF accesses the PF-VF DRAM
mailboxes via BAR2 indirect access.

On the CN9XX platform:
The mailbox region in DRAM is divided into two parts, the AF-PF mbox
region and the PF-VF mbox region, i.e. all PFs' mbox regions are
contiguous, and similarly all VFs'. The base address of the AF-PF mbox
region is configured at RVU_AF_PF_BAR4_ADDR; the AF-PF1 mbox address,
for example, works out to RVU_AF_PF_BAR4_ADDR + 1 * mbox size. The base
address of the PF-VF mbox region for each PF is configured at
RVU_AF_PF(0..15)_VF_BAR4_ADDR. A PF accesses its own mbox region via
BAR4 and its VFs' mbox regions via the address in the
RVU_PF_VF_BAR4_ADDR register, whereas a VF accesses its mbox region
via BAR4.

This patch changes mbox initialization to support both the CN9XX and
CN10K platforms.
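To make the two layouts above concrete, here is a minimal sketch (an
illustration for review only, not part of the patch; the helper name
example_af_pf_mbox_base() is hypothetical) of how a given PF's AF-PF
mailbox base address is derived on each silicon family, mirroring what
rvu_get_mbox_regions() in patch 01 does:

	/* Illustration only: derive the AF-PF mbox base for PF 'pf'.
	 * Assumes the rvu context, rvu_read64() and the register
	 * offsets added by this series are available.
	 */
	static u64 example_af_pf_mbox_base(struct rvu *rvu, int pf,
					   bool per_pf_mbox_regs)
	{
		if (per_pf_mbox_regs)
			/* CN10K: one BAR4 address register per PF */
			return rvu_read64(rvu, BLKADDR_RVUM,
					  RVU_AF_PFX_BAR4_ADDR(pf));

		/* CN9XX: single base register; PF regions are laid
		 * out contiguously, MBOX_SIZE apart.
		 */
		return rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PF_BAR4_ADDR) +
		       pf * MBOX_SIZE;
	}

On CN10K a PF's VF mailboxes then follow directly after its AF-PF
region, i.e. at that base plus MBOX_SIZE plus vf * MBOX_SIZE, which is
why no separate VF BAR4 register read is needed there.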
The patch also adds a new hw_cap flag for setting hw features like TSO
etc., and removes the platform-specific name from the PF/VF driver
name to make it appropriate for all supported platforms.

Signed-off-by: Subbaraya Sundeep
Signed-off-by: Geetha sowjanya
Signed-off-by: Sunil Goutham
---
 .../ethernet/marvell/octeontx2/nic/Makefile   |  8 ++--
 .../marvell/octeontx2/nic/otx2_common.h       | 29 +++++++++++++--
 .../ethernet/marvell/octeontx2/nic/otx2_pf.c  | 18 ++++++---
 .../ethernet/marvell/octeontx2/nic/otx2_reg.h |  3 ++
 .../marvell/octeontx2/nic/otx2_txrx.c         |  5 ++-
 .../ethernet/marvell/octeontx2/nic/otx2_vf.c  | 37 ++++++++++++-------
 6 files changed, 73 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
index 4193ae3bde6b..29c82b94b2dc 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
@@ -3,11 +3,11 @@
 # Makefile for Marvell's OcteonTX2 ethernet device drivers
 #

-obj-$(CONFIG_OCTEONTX2_PF) += octeontx2_nicpf.o
-obj-$(CONFIG_OCTEONTX2_VF) += octeontx2_nicvf.o
+obj-$(CONFIG_OCTEONTX2_PF) += rvu_nicpf.o
+obj-$(CONFIG_OCTEONTX2_VF) += rvu_nicvf.o

-octeontx2_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \
-		otx2_ptp.o otx2_flows.o
-octeontx2_nicvf-y := otx2_vf.o
+rvu_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \
+		otx2_ptp.o otx2_flows.o
+rvu_nicvf-y := otx2_vf.o

 ccflags-y += -I$(srctree)/drivers/net/ethernet/marvell/octeontx2/af
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 143ae04c8ad5..e05a5d59fe0c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -190,7 +190,6 @@ struct otx2_hw {
 	u8 lso_tsov6_idx;
 	u8 lso_udpv4_idx;
 	u8 lso_udpv6_idx;
-	u8 hw_tso;

 	/* MSI-X */
 	u8 cint_cnt; /* CQ interrupt count */
@@ -206,6 +205,9 @@ struct otx2_hw {
 	u64 cgx_tx_stats[CGX_TX_STATS_COUNT];
 	u8 cgx_links;  /* No. of CGX links present in HW */
 	u8 lbk_links;  /* No. of LBK links present in HW */
+#define HW_TSO		BIT_ULL(0)
+#define CN10K_MBOX	BIT_ULL(1)
+	unsigned long cap_flag;
 };

 struct otx2_vf_config {
@@ -339,6 +341,25 @@ static inline bool is_96xx_B0(struct pci_dev *pdev)
 		(pdev->subsystem_device == PCI_SUBSYS_DEVID_96XX_RVU_PFVF);
 }

+/* REVID for PCIe devices.
+ * Bits 0..1: minor pass, bit 3..2: major pass + * bits 7..4: midr id + */ +#define PCI_REVISION_ID_96XX 0x00 +#define PCI_REVISION_ID_95XX 0x10 +#define PCI_REVISION_ID_LOKI 0x20 +#define PCI_REVISION_ID_98XX 0x30 +#define PCI_REVISION_ID_95XXMM 0x40 + +static inline bool is_dev_otx2(struct pci_dev *pdev) +{ + u8 midr = pdev->revision & 0xF0; + + return (midr == PCI_REVISION_ID_96XX || midr == PCI_REVISION_ID_95XX || + midr == PCI_REVISION_ID_LOKI || midr == PCI_REVISION_ID_98XX || + midr == PCI_REVISION_ID_95XXMM); +} + static inline void otx2_setup_dev_hw_settings(struct otx2_nic *pfvf) { struct otx2_hw *hw = &pfvf->hw; @@ -347,10 +368,10 @@ static inline void otx2_setup_dev_hw_settings(struct otx2_nic *pfvf) pfvf->hw.cq_ecount_wait = CQ_CQE_THRESH_DEFAULT; pfvf->hw.cq_qcount_wait = CQ_QCOUNT_DEFAULT; - hw->hw_tso = true; + __set_bit(HW_TSO, &hw->cap_flag); if (is_96xx_A0(pfvf->pdev)) { - hw->hw_tso = false; + __clear_bit(HW_TSO, &hw->cap_flag); /* Time based irq coalescing is not supported */ pfvf->hw.cq_qcount_wait = 0x0; @@ -361,6 +382,8 @@ static inline void otx2_setup_dev_hw_settings(struct otx2_nic *pfvf) pfvf->hw.rq_skid = 600; pfvf->qset.rqe_cnt = Q_COUNT(Q_SIZE_1K); } + if (!is_dev_otx2(pfvf->pdev)) + __set_bit(CN10K_MBOX, &hw->cap_flag); } /* Register read/write APIs */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 07ec85aebcca..55df8ad88985 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -24,8 +24,8 @@ #include "otx2_ptp.h" #include -#define DRV_NAME "octeontx2-nicpf" -#define DRV_STRING "Marvell OcteonTX2 NIC Physical Function Driver" +#define DRV_NAME "rvu_nicpf" +#define DRV_STRING "Marvell RVU NIC Physical Function Driver" /* Supported devices */ static const struct pci_device_id otx2_pf_id_table[] = { @@ -585,9 +585,17 @@ static int otx2_pfvf_mbox_init(struct otx2_nic *pf, int numvfs) if (!pf->mbox_pfvf_wq) return -ENOMEM; - base = readq((void __iomem *)((u64)pf->reg_base + RVU_PF_VF_BAR4_ADDR)); - hwbase = ioremap_wc(base, MBOX_SIZE * pf->total_vfs); + /* On CN10K platform, PF <-> VF mailbox region follows after + * PF <-> AF mailbox region. + */ + if (test_bit(CN10K_MBOX, &pf->hw.cap_flag)) + base = pci_resource_start(pf->pdev, PCI_MBOX_BAR_NUM) + + MBOX_SIZE; + else + base = readq((void __iomem *)((u64)pf->reg_base + + RVU_PF_VF_BAR4_ADDR)); + hwbase = ioremap_wc(base, MBOX_SIZE * pf->total_vfs); if (!hwbase) { err = -ENOMEM; goto free_wq; @@ -1039,7 +1047,7 @@ static int otx2_pfaf_mbox_init(struct otx2_nic *pf) * device memory to allow unaligned accesses. 
*/ hwbase = ioremap_wc(pci_resource_start(pf->pdev, PCI_MBOX_BAR_NUM), - pci_resource_len(pf->pdev, PCI_MBOX_BAR_NUM)); + MBOX_SIZE); if (!hwbase) { dev_err(pf->dev, "Unable to map PFAF mailbox region\n"); err = -ENOMEM; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h index 867f646e0802..1e052d76a580 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h @@ -44,6 +44,8 @@ #define RVU_PF_MSIX_VECX_ADDR(a) (0x000 | (a) << 4) #define RVU_PF_MSIX_VECX_CTL(a) (0x008 | (a) << 4) #define RVU_PF_MSIX_PBAX(a) (0xF0000 | (a) << 3) +#define RVU_PF_VF_MBOX_ADDR (0xC40) +#define RVU_PF_LMTLINE_ADDR (0xC48) /* RVU VF registers */ #define RVU_VF_VFPF_MBOX0 (0x00000) @@ -57,6 +59,7 @@ #define RVU_VF_MSIX_VECX_ADDR(a) (0x000 | (a) << 4) #define RVU_VF_MSIX_VECX_CTL(a) (0x008 | (a) << 4) #define RVU_VF_MSIX_PBAX(a) (0xF0000 | (a) << 3) +#define RVU_VF_MBOX_REGION (0xC0000) #define RVU_FUNC_BLKADDR_SHIFT 20 #define RVU_FUNC_BLKADDR_MASK 0x1FULL diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c index d0e25414f1a1..a7eb5ea0fb48 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c @@ -806,8 +806,6 @@ static bool is_hw_tso_supported(struct otx2_nic *pfvf, { int payload_len, last_seg_size; - if (!pfvf->hw.hw_tso) - return false; /* HW has an issue due to which when the payload of the last LSO * segment is shorter than 16 bytes, some header fields may not @@ -821,6 +819,9 @@ static bool is_hw_tso_supported(struct otx2_nic *pfvf, if (last_seg_size && last_seg_size < 16) return false; + if (!test_bit(HW_TSO, &pfvf->hw.cap_flag)) + return false; + return true; } diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c index d3e4cfd244e2..e7d8fef4f53d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c @@ -8,8 +8,8 @@ #include "otx2_common.h" #include "otx2_reg.h" -#define DRV_NAME "octeontx2-nicvf" -#define DRV_STRING "Marvell OcteonTX2 NIC Virtual Function Driver" +#define DRV_NAME "rvu_nicvf" +#define DRV_STRING "Marvell RVU NIC Virtual Function Driver" static const struct pci_device_id otx2_vf_id_table[] = { { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_AFVF) }, @@ -277,7 +277,7 @@ static void otx2vf_vfaf_mbox_destroy(struct otx2_nic *vf) vf->mbox_wq = NULL; } - if (mbox->mbox.hwbase) + if (mbox->mbox.hwbase && !test_bit(CN10K_MBOX, &vf->hw.cap_flag)) iounmap((void __iomem *)mbox->mbox.hwbase); otx2_mbox_destroy(&mbox->mbox); @@ -297,16 +297,25 @@ static int otx2vf_vfaf_mbox_init(struct otx2_nic *vf) if (!vf->mbox_wq) return -ENOMEM; - /* Mailbox is a reserved memory (in RAM) region shared between - * admin function (i.e PF0) and this VF, shouldn't be mapped as - * device memory to allow unaligned accesses. 
- */
-	hwbase = ioremap_wc(pci_resource_start(vf->pdev, PCI_MBOX_BAR_NUM),
-			    pci_resource_len(vf->pdev, PCI_MBOX_BAR_NUM));
-	if (!hwbase) {
-		dev_err(vf->dev, "Unable to map VFAF mailbox region\n");
-		err = -ENOMEM;
-		goto exit;
+	if (test_bit(CN10K_MBOX, &vf->hw.cap_flag)) {
+		/* For cn10k platform, VF mailbox region is in its BAR2
+		 * register space
+		 */
+		hwbase = vf->reg_base + RVU_VF_MBOX_REGION;
+	} else {
+		/* Mailbox is a reserved memory (in RAM) region shared between
+		 * admin function (i.e PF0) and this VF, shouldn't be mapped as
+		 * device memory to allow unaligned accesses.
+		 */
+		hwbase = ioremap_wc(pci_resource_start(vf->pdev,
+						       PCI_MBOX_BAR_NUM),
+				    pci_resource_len(vf->pdev,
+						     PCI_MBOX_BAR_NUM));
+		if (!hwbase) {
+			dev_err(vf->dev, "Unable to map VFAF mailbox region\n");
+			err = -ENOMEM;
+			goto exit;
+		}
 	}

 	err = otx2_mbox_init(&mbox->mbox, hwbase, vf->pdev, vf->reg_base,
@@ -329,6 +338,8 @@ static int otx2vf_vfaf_mbox_init(struct otx2_nic *vf)
 	return 0;
 exit:
+	if (hwbase && !test_bit(CN10K_MBOX, &vf->hw.cap_flag))
+		iounmap(hwbase);
 	destroy_workqueue(vf->mbox_wq);
 	return err;
 }

From patchwork Thu Feb 11 01:46:20 2021
X-Patchwork-Submitter: Geetha sowjanya
X-Patchwork-Id: 382374
From: Geetha sowjanya
Subject: [net-next v5 03/14] octeontx2-af: cn10k: Update NIX/NPA context structure
Date: Thu, 11 Feb 2021 07:16:20 +0530
Message-ID: <20210211014631.9578-4-gakula@marvell.com>
In-Reply-To: <20210211014631.9578-1-gakula@marvell.com>
References: <20210211014631.9578-1-gakula@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

The NIX hardware context structure has changed to accommodate new
features like bandwidth steering, L3/L4 outer/inner checksum
enable/disable etc. on the CN10K platform. This patch defines a new
mbox message, NIX_CN10K_AQ_ENQ, for the new NIX context
initialization. This patch also updates the NPA context structures to
accommodate the bit-field changes made for the CN10K platform, and
removes big-endian bit fields from the existing structures, as their
support is deprecated in current and upcoming silicons.

Signed-off-by: Geetha sowjanya
Signed-off-by: Sunil Goutham
---
 .../net/ethernet/marvell/octeontx2/af/mbox.h  |  35 +
 .../ethernet/marvell/octeontx2/af/rvu_nix.c   |   8 +
 .../marvell/octeontx2/af/rvu_struct.h         | 604 ++++++------
 3 files changed, 223 insertions(+), 424 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 5a08f3e13933..f2b37b3f18e6 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -241,6 +241,8 @@ M(NIX_BP_ENABLE, 0x8016, nix_bp_enable, nix_bp_cfg_req, \
 				nix_bp_cfg_rsp) \
 M(NIX_BP_DISABLE, 0x8017, nix_bp_disable, nix_bp_cfg_req, msg_rsp) \
 M(NIX_GET_MAC_ADDR, 0x8018, nix_get_mac_addr, msg_req, nix_get_mac_addr_rsp) \
+M(NIX_CN10K_AQ_ENQ, 0x8019, nix_cn10k_aq_enq, nix_cn10k_aq_enq_req, \
+				nix_cn10k_aq_enq_rsp)

 /* Messages initiated by AF (range 0xC00 - 0xDFF) */
 #define MBOX_UP_CGX_MESSAGES \
@@ -549,6 +551,39 @@ struct nix_lf_free_req {
 	u64 flags;
 };

+/* CN10K NIX AQ enqueue msg */
+struct nix_cn10k_aq_enq_req {
+	struct mbox_msghdr hdr;
+	u32 qidx;
+	u8 ctype;
+	u8 op;
+	union {
+		struct nix_cn10k_rq_ctx_s rq;
+		struct nix_cn10k_sq_ctx_s sq;
+		struct nix_cq_ctx_s cq;
+		struct nix_rsse_s rss;
+		struct nix_rx_mce_s mce;
+	};
+	union {
+		struct nix_cn10k_rq_ctx_s rq_mask;
+		struct nix_cn10k_sq_ctx_s sq_mask;
+		struct nix_cq_ctx_s cq_mask;
+		struct nix_rsse_s rss_mask;
+		struct nix_rx_mce_s mce_mask;
+	};
+};
+
+struct nix_cn10k_aq_enq_rsp {
+	struct mbox_msghdr hdr;
+	union {
+		struct nix_cn10k_rq_ctx_s rq;
+		struct nix_cn10k_sq_ctx_s sq;
+		struct nix_cq_ctx_s cq;
+		struct nix_rsse_s rss;
+		struct nix_rx_mce_s mce;
+	};
+};
+
 /* NIX AQ enqueue msg */
 struct nix_aq_enq_req {
 	struct mbox_msghdr hdr;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index b54753ef7d94..46da18d611d4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -1000,6 +1000,14 @@ int rvu_mbox_handler_nix_aq_enq(struct rvu *rvu, return rvu_nix_aq_enq_inst(rvu, req, rsp); } #endif +/* CN10K mbox handler */ +int rvu_mbox_handler_nix_cn10k_aq_enq(struct rvu *rvu, + struct nix_cn10k_aq_enq_req *req, + struct nix_cn10k_aq_enq_rsp *rsp) +{ + return rvu_nix_aq_enq_inst(rvu, (struct nix_aq_enq_req *)req, + (struct nix_aq_enq_rsp *)rsp); +} int rvu_mbox_handler_nix_hwctx_disable(struct rvu *rvu, struct hwctx_disable_req *req, diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h index 5e15f4fc11e3..5e5f45c7eab0 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h @@ -139,63 +139,29 @@ enum npa_inpq { /* NPA admin queue instruction structure */ struct npa_aq_inst_s { -#if defined(__BIG_ENDIAN_BITFIELD) - u64 doneint : 1; /* W0 */ - u64 reserved_44_62 : 19; - u64 cindex : 20; - u64 reserved_17_23 : 7; - u64 lf : 9; - u64 ctype : 4; - u64 op : 4; -#else - u64 op : 4; + u64 op : 4; /* W0 */ u64 ctype : 4; u64 lf : 9; u64 reserved_17_23 : 7; u64 cindex : 20; u64 reserved_44_62 : 19; u64 doneint : 1; -#endif u64 res_addr; /* W1 */ }; /* NPA admin queue result structure */ struct npa_aq_res_s { -#if defined(__BIG_ENDIAN_BITFIELD) - u64 reserved_17_63 : 47; /* W0 */ - u64 doneint : 1; - u64 compcode : 8; - u64 ctype : 4; - u64 op : 4; -#else - u64 op : 4; + u64 op : 4; /* W0 */ u64 ctype : 4; u64 compcode : 8; u64 doneint : 1; u64 reserved_17_63 : 47; -#endif u64 reserved_64_127; /* W1 */ }; struct npa_aura_s { u64 pool_addr; /* W0 */ -#if defined(__BIG_ENDIAN_BITFIELD) /* W1 */ - u64 avg_level : 8; - u64 reserved_118_119 : 2; - u64 shift : 6; - u64 aura_drop : 8; - u64 reserved_98_103 : 6; - u64 bp_ena : 2; - u64 aura_drop_ena : 1; - u64 pool_drop_ena : 1; - u64 reserved_93 : 1; - u64 avg_con : 9; - u64 pool_way_mask : 16; - u64 pool_caching : 1; - u64 reserved_65 : 2; - u64 ena : 1; -#else - u64 ena : 1; + u64 ena : 1; /* W1 */ u64 reserved_65 : 2; u64 pool_caching : 1; u64 pool_way_mask : 16; @@ -209,59 +175,24 @@ struct npa_aura_s { u64 shift : 6; u64 reserved_118_119 : 2; u64 avg_level : 8; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W2 */ - u64 reserved_189_191 : 3; - u64 nix1_bpid : 9; - u64 reserved_177_179 : 3; - u64 nix0_bpid : 9; - u64 reserved_164_167 : 4; - u64 count : 36; -#else - u64 count : 36; + u64 count : 36; /* W2 */ u64 reserved_164_167 : 4; u64 nix0_bpid : 9; u64 reserved_177_179 : 3; u64 nix1_bpid : 9; u64 reserved_189_191 : 3; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W3 */ - u64 reserved_252_255 : 4; - u64 fc_hyst_bits : 4; - u64 fc_stype : 2; - u64 fc_up_crossing : 1; - u64 fc_ena : 1; - u64 reserved_240_243 : 4; - u64 bp : 8; - u64 reserved_228_231 : 4; - u64 limit : 36; -#else - u64 limit : 36; + u64 limit : 36; /* W3 */ u64 reserved_228_231 : 4; u64 bp : 8; - u64 reserved_240_243 : 4; + u64 reserved_241_243 : 3; + u64 fc_be : 1; u64 fc_ena : 1; u64 fc_up_crossing : 1; u64 fc_stype : 2; u64 fc_hyst_bits : 4; u64 reserved_252_255 : 4; -#endif u64 fc_addr; /* W4 */ -#if defined(__BIG_ENDIAN_BITFIELD) /* W5 */ - u64 reserved_379_383 : 5; - u64 err_qint_idx : 7; - u64 reserved_371 : 1; - u64 thresh_qint_idx : 7; - u64 reserved_363 : 1; - u64 thresh_up : 1; - u64 thresh_int_ena : 1; - u64 thresh_int : 1; - u64 err_int_ena : 8; - u64 
err_int : 8; - u64 update_time : 16; - u64 pool_drop : 8; -#else - u64 pool_drop : 8; + u64 pool_drop : 8; /* W5 */ u64 update_time : 16; u64 err_int : 8; u64 err_int_ena : 8; @@ -273,31 +204,15 @@ struct npa_aura_s { u64 reserved_371 : 1; u64 err_qint_idx : 7; u64 reserved_379_383 : 5; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W6 */ - u64 reserved_420_447 : 28; - u64 thresh : 36; -#else - u64 thresh : 36; - u64 reserved_420_447 : 28; -#endif + u64 thresh : 36; /* W6*/ + u64 rsvd_423_420 : 4; + u64 fc_msh_dst : 11; + u64 reserved_435_447 : 13; u64 reserved_448_511; /* W7 */ }; struct npa_pool_s { u64 stack_base; /* W0 */ -#if defined(__BIG_ENDIAN_BITFIELD) /* W1 */ - u64 reserved_115_127 : 13; - u64 buf_size : 11; - u64 reserved_100_103 : 4; - u64 buf_offset : 12; - u64 stack_way_mask : 16; - u64 reserved_70_71 : 3; - u64 stack_caching : 1; - u64 reserved_66_67 : 2; - u64 nat_align : 1; - u64 ena : 1; -#else u64 ena : 1; u64 nat_align : 1; u64 reserved_66_67 : 2; @@ -308,36 +223,10 @@ struct npa_pool_s { u64 reserved_100_103 : 4; u64 buf_size : 11; u64 reserved_115_127 : 13; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W2 */ - u64 stack_pages : 32; - u64 stack_max_pages : 32; -#else u64 stack_max_pages : 32; u64 stack_pages : 32; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W3 */ - u64 reserved_240_255 : 16; - u64 op_pc : 48; -#else u64 op_pc : 48; u64 reserved_240_255 : 16; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W4 */ - u64 reserved_316_319 : 4; - u64 update_time : 16; - u64 reserved_297_299 : 3; - u64 fc_up_crossing : 1; - u64 fc_hyst_bits : 4; - u64 fc_stype : 2; - u64 fc_ena : 1; - u64 avg_con : 9; - u64 avg_level : 8; - u64 reserved_270_271 : 2; - u64 shift : 6; - u64 reserved_260_263 : 4; - u64 stack_offset : 4; -#else u64 stack_offset : 4; u64 reserved_260_263 : 4; u64 shift : 6; @@ -348,26 +237,13 @@ struct npa_pool_s { u64 fc_stype : 2; u64 fc_hyst_bits : 4; u64 fc_up_crossing : 1; - u64 reserved_297_299 : 3; + u64 fc_be : 1; + u64 reserved_298_299 : 2; u64 update_time : 16; u64 reserved_316_319 : 4; -#endif u64 fc_addr; /* W5 */ u64 ptr_start; /* W6 */ u64 ptr_end; /* W7 */ -#if defined(__BIG_ENDIAN_BITFIELD) /* W8 */ - u64 reserved_571_575 : 5; - u64 err_qint_idx : 7; - u64 reserved_563 : 1; - u64 thresh_qint_idx : 7; - u64 reserved_555 : 1; - u64 thresh_up : 1; - u64 thresh_int_ena : 1; - u64 thresh_int : 1; - u64 err_int_ena : 8; - u64 err_int : 8; - u64 reserved_512_535 : 24; -#else u64 reserved_512_535 : 24; u64 err_int : 8; u64 err_int_ena : 8; @@ -379,14 +255,10 @@ struct npa_pool_s { u64 reserved_563 : 1; u64 err_qint_idx : 7; u64 reserved_571_575 : 5; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W9 */ - u64 reserved_612_639 : 28; u64 thresh : 36; -#else - u64 thresh : 36; - u64 reserved_612_639 : 28; -#endif + u64 rsvd_615_612 : 4; + u64 fc_msh_dst : 11; + u64 reserved_627_639 : 13; u64 reserved_640_703; /* W10 */ u64 reserved_704_767; /* W11 */ u64 reserved_768_831; /* W12 */ @@ -414,6 +286,7 @@ enum nix_aq_ctype { NIX_AQ_CTYPE_MCE = 0x3, NIX_AQ_CTYPE_RSS = 0x4, NIX_AQ_CTYPE_DYNO = 0x5, + NIX_AQ_CTYPE_BAND_PROF = 0x6, }; /* NIX admin queue instruction opcodes */ @@ -428,59 +301,29 @@ enum nix_aq_instop { /* NIX admin queue instruction structure */ struct nix_aq_inst_s { -#if defined(__BIG_ENDIAN_BITFIELD) - u64 doneint : 1; /* W0 */ - u64 reserved_44_62 : 19; - u64 cindex : 20; - u64 reserved_15_23 : 9; - u64 lf : 7; - u64 ctype : 4; - u64 op : 4; -#else u64 op : 4; u64 ctype : 4; - u64 lf : 7; - u64 reserved_15_23 : 9; + u64 lf : 9; + u64 
reserved_17_23 : 7; u64 cindex : 20; u64 reserved_44_62 : 19; u64 doneint : 1; -#endif u64 res_addr; /* W1 */ }; /* NIX admin queue result structure */ struct nix_aq_res_s { -#if defined(__BIG_ENDIAN_BITFIELD) - u64 reserved_17_63 : 47; /* W0 */ - u64 doneint : 1; - u64 compcode : 8; - u64 ctype : 4; - u64 op : 4; -#else u64 op : 4; u64 ctype : 4; u64 compcode : 8; u64 doneint : 1; u64 reserved_17_63 : 47; -#endif u64 reserved_64_127; /* W1 */ }; /* NIX Completion queue context structure */ struct nix_cq_ctx_s { u64 base; -#if defined(__BIG_ENDIAN_BITFIELD) /* W1 */ - u64 wrptr : 20; - u64 avg_con : 9; - u64 cint_idx : 7; - u64 cq_err : 1; - u64 qint_idx : 7; - u64 rsvd_81_83 : 3; - u64 bpid : 9; - u64 rsvd_69_71 : 3; - u64 bp_ena : 1; - u64 rsvd_64_67 : 4; -#else u64 rsvd_64_67 : 4; u64 bp_ena : 1; u64 rsvd_69_71 : 3; @@ -491,31 +334,10 @@ struct nix_cq_ctx_s { u64 cint_idx : 7; u64 avg_con : 9; u64 wrptr : 20; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W2 */ - u64 update_time : 16; - u64 avg_level : 8; - u64 head : 20; - u64 tail : 20; -#else u64 tail : 20; u64 head : 20; u64 avg_level : 8; u64 update_time : 16; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W3 */ - u64 cq_err_int_ena : 8; - u64 cq_err_int : 8; - u64 qsize : 4; - u64 rsvd_233_235 : 3; - u64 caching : 1; - u64 substream : 20; - u64 rsvd_210_211 : 2; - u64 ena : 1; - u64 drop_ena : 1; - u64 drop : 8; - u64 bp : 8; -#else u64 bp : 8; u64 drop : 8; u64 drop_ena : 1; @@ -527,20 +349,161 @@ struct nix_cq_ctx_s { u64 qsize : 4; u64 cq_err_int : 8; u64 cq_err_int_ena : 8; -#endif +}; + +/* CN10K NIX Receive queue context structure */ +struct nix_cn10k_rq_ctx_s { + u64 ena : 1; + u64 sso_ena : 1; + u64 ipsech_ena : 1; + u64 ena_wqwd : 1; + u64 cq : 20; + u64 rsvd_36_24 : 13; + u64 lenerr_dis : 1; + u64 csum_il4_dis : 1; + u64 csum_ol4_dis : 1; + u64 len_il4_dis : 1; + u64 len_il3_dis : 1; + u64 len_ol4_dis : 1; + u64 len_ol3_dis : 1; + u64 wqe_aura : 20; + u64 spb_aura : 20; + u64 lpb_aura : 20; + u64 sso_grp : 10; + u64 sso_tt : 2; + u64 pb_caching : 2; + u64 wqe_caching : 1; + u64 xqe_drop_ena : 1; + u64 spb_drop_ena : 1; + u64 lpb_drop_ena : 1; + u64 pb_stashing : 1; + u64 ipsecd_drop_ena : 1; + u64 chi_ena : 1; + u64 rsvd_127_125 : 3; + u64 band_prof_id : 10; /* W2 */ + u64 rsvd_138 : 1; + u64 policer_ena : 1; + u64 spb_sizem1 : 6; + u64 wqe_skip : 2; + u64 rsvd_150_148 : 3; + u64 spb_ena : 1; + u64 lpb_sizem1 : 12; + u64 first_skip : 7; + u64 rsvd_171 : 1; + u64 later_skip : 6; + u64 xqe_imm_size : 6; + u64 rsvd_189_184 : 6; + u64 xqe_imm_copy : 1; + u64 xqe_hdr_split : 1; + u64 xqe_drop : 8; /* W3 */ + u64 xqe_pass : 8; + u64 wqe_pool_drop : 8; + u64 wqe_pool_pass : 8; + u64 spb_aura_drop : 8; + u64 spb_aura_pass : 8; + u64 spb_pool_drop : 8; + u64 spb_pool_pass : 8; + u64 lpb_aura_drop : 8; /* W4 */ + u64 lpb_aura_pass : 8; + u64 lpb_pool_drop : 8; + u64 lpb_pool_pass : 8; + u64 rsvd_291_288 : 4; + u64 rq_int : 8; + u64 rq_int_ena : 8; + u64 qint_idx : 7; + u64 rsvd_319_315 : 5; + u64 ltag : 24; /* W5 */ + u64 good_utag : 8; + u64 bad_utag : 8; + u64 flow_tagw : 6; + u64 ipsec_vwqe : 1; + u64 vwqe_ena : 1; + u64 vwqe_wait : 8; + u64 max_vsize_exp : 4; + u64 vwqe_skip : 2; + u64 rsvd_383_382 : 2; + u64 octs : 48; /* W6 */ + u64 rsvd_447_432 : 16; + u64 pkts : 48; /* W7 */ + u64 rsvd_511_496 : 16; + u64 drop_octs : 48; /* W8 */ + u64 rsvd_575_560 : 16; + u64 drop_pkts : 48; /* W9 */ + u64 rsvd_639_624 : 16; + u64 re_pkts : 48; /* W10 */ + u64 rsvd_703_688 : 16; + u64 rsvd_767_704; /* W11 */ + u64 rsvd_831_768; /* W12 */ 
+ u64 rsvd_895_832; /* W13 */ + u64 rsvd_959_896; /* W14 */ + u64 rsvd_1023_960; /* W15 */ +}; + +/* CN10K NIX Send queue context structure */ +struct nix_cn10k_sq_ctx_s { + u64 ena : 1; + u64 qint_idx : 6; + u64 substream : 20; + u64 sdp_mcast : 1; + u64 cq : 20; + u64 sqe_way_mask : 16; + u64 smq : 10; /* W1 */ + u64 cq_ena : 1; + u64 xoff : 1; + u64 sso_ena : 1; + u64 smq_rr_weight : 14; + u64 default_chan : 12; + u64 sqb_count : 16; + u64 rsvd_120_119 : 2; + u64 smq_rr_count_lb : 7; + u64 smq_rr_count_ub : 25; /* W2 */ + u64 sqb_aura : 20; + u64 sq_int : 8; + u64 sq_int_ena : 8; + u64 sqe_stype : 2; + u64 rsvd_191 : 1; + u64 max_sqe_size : 2; /* W3 */ + u64 cq_limit : 8; + u64 lmt_dis : 1; + u64 mnq_dis : 1; + u64 smq_next_sq : 20; + u64 smq_lso_segnum : 8; + u64 tail_offset : 6; + u64 smenq_offset : 6; + u64 head_offset : 6; + u64 smenq_next_sqb_vld : 1; + u64 smq_pend : 1; + u64 smq_next_sq_vld : 1; + u64 rsvd_255_253 : 3; + u64 next_sqb : 64; /* W4 */ + u64 tail_sqb : 64; /* W5 */ + u64 smenq_sqb : 64; /* W6 */ + u64 smenq_next_sqb : 64; /* W7 */ + u64 head_sqb : 64; /* W8 */ + u64 rsvd_583_576 : 8; /* W9 */ + u64 vfi_lso_total : 18; + u64 vfi_lso_sizem1 : 3; + u64 vfi_lso_sb : 8; + u64 vfi_lso_mps : 14; + u64 vfi_lso_vlan0_ins_ena : 1; + u64 vfi_lso_vlan1_ins_ena : 1; + u64 vfi_lso_vld : 1; + u64 rsvd_639_630 : 10; + u64 scm_lso_rem : 18; /* W10 */ + u64 rsvd_703_658 : 46; + u64 octs : 48; /* W11 */ + u64 rsvd_767_752 : 16; + u64 pkts : 48; /* W12 */ + u64 rsvd_831_816 : 16; + u64 rsvd_895_832 : 64; /* W13 */ + u64 dropped_octs : 48; + u64 rsvd_959_944 : 16; + u64 dropped_pkts : 48; + u64 rsvd_1023_1008 : 16; }; /* NIX Receive queue context structure */ struct nix_rq_ctx_s { -#if defined(__BIG_ENDIAN_BITFIELD) /* W0 */ - u64 wqe_aura : 20; - u64 substream : 20; - u64 cq : 20; - u64 ena_wqwd : 1; - u64 ipsech_ena : 1; - u64 sso_ena : 1; - u64 ena : 1; -#else u64 ena : 1; u64 sso_ena : 1; u64 ipsech_ena : 1; @@ -548,19 +511,6 @@ struct nix_rq_ctx_s { u64 cq : 20; u64 substream : 20; u64 wqe_aura : 20; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W1 */ - u64 rsvd_127_122 : 6; - u64 lpb_drop_ena : 1; - u64 spb_drop_ena : 1; - u64 xqe_drop_ena : 1; - u64 wqe_caching : 1; - u64 pb_caching : 2; - u64 sso_tt : 2; - u64 sso_grp : 10; - u64 lpb_aura : 20; - u64 spb_aura : 20; -#else u64 spb_aura : 20; u64 lpb_aura : 20; u64 sso_grp : 10; @@ -571,23 +521,7 @@ struct nix_rq_ctx_s { u64 spb_drop_ena : 1; u64 lpb_drop_ena : 1; u64 rsvd_127_122 : 6; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W2 */ - u64 xqe_hdr_split : 1; - u64 xqe_imm_copy : 1; - u64 rsvd_189_184 : 6; - u64 xqe_imm_size : 6; - u64 later_skip : 6; - u64 rsvd_171 : 1; - u64 first_skip : 7; - u64 lpb_sizem1 : 12; - u64 spb_ena : 1; - u64 rsvd_150_148 : 3; - u64 wqe_skip : 2; - u64 spb_sizem1 : 6; - u64 rsvd_139_128 : 12; -#else - u64 rsvd_139_128 : 12; + u64 rsvd_139_128 : 12; /* W2 */ u64 spb_sizem1 : 6; u64 wqe_skip : 2; u64 rsvd_150_148 : 3; @@ -600,18 +534,7 @@ struct nix_rq_ctx_s { u64 rsvd_189_184 : 6; u64 xqe_imm_copy : 1; u64 xqe_hdr_split : 1; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W3 */ - u64 spb_pool_pass : 8; - u64 spb_pool_drop : 8; - u64 spb_aura_pass : 8; - u64 spb_aura_drop : 8; - u64 wqe_pool_pass : 8; - u64 wqe_pool_drop : 8; - u64 xqe_pass : 8; - u64 xqe_drop : 8; -#else - u64 xqe_drop : 8; + u64 xqe_drop : 8; /* W3*/ u64 xqe_pass : 8; u64 wqe_pool_drop : 8; u64 wqe_pool_pass : 8; @@ -619,19 +542,7 @@ struct nix_rq_ctx_s { u64 spb_aura_pass : 8; u64 spb_pool_drop : 8; u64 spb_pool_pass : 8; -#endif 
-#if defined(__BIG_ENDIAN_BITFIELD) /* W4 */ - u64 rsvd_319_315 : 5; - u64 qint_idx : 7; - u64 rq_int_ena : 8; - u64 rq_int : 8; - u64 rsvd_291_288 : 4; - u64 lpb_pool_pass : 8; - u64 lpb_pool_drop : 8; - u64 lpb_aura_pass : 8; - u64 lpb_aura_drop : 8; -#else - u64 lpb_aura_drop : 8; + u64 lpb_aura_drop : 8; /* W4 */ u64 lpb_aura_pass : 8; u64 lpb_pool_drop : 8; u64 lpb_pool_pass : 8; @@ -640,55 +551,21 @@ struct nix_rq_ctx_s { u64 rq_int_ena : 8; u64 qint_idx : 7; u64 rsvd_319_315 : 5; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W5 */ - u64 rsvd_383_366 : 18; - u64 flow_tagw : 6; - u64 bad_utag : 8; - u64 good_utag : 8; - u64 ltag : 24; -#else - u64 ltag : 24; + u64 ltag : 24; /* W5 */ u64 good_utag : 8; u64 bad_utag : 8; u64 flow_tagw : 6; u64 rsvd_383_366 : 18; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W6 */ - u64 rsvd_447_432 : 16; - u64 octs : 48; -#else - u64 octs : 48; + u64 octs : 48; /* W6 */ u64 rsvd_447_432 : 16; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W7 */ - u64 rsvd_511_496 : 16; - u64 pkts : 48; -#else - u64 pkts : 48; + u64 pkts : 48; /* W7 */ u64 rsvd_511_496 : 16; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W8 */ + u64 drop_octs : 48; /* W8 */ u64 rsvd_575_560 : 16; - u64 drop_octs : 48; -#else - u64 drop_octs : 48; - u64 rsvd_575_560 : 16; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W9 */ - u64 rsvd_639_624 : 16; - u64 drop_pkts : 48; -#else - u64 drop_pkts : 48; + u64 drop_pkts : 48; /* W9 */ u64 rsvd_639_624 : 16; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W10 */ + u64 re_pkts : 48; /* W10 */ u64 rsvd_703_688 : 16; - u64 re_pkts : 48; -#else - u64 re_pkts : 48; - u64 rsvd_703_688 : 16; -#endif u64 rsvd_767_704; /* W11 */ u64 rsvd_831_768; /* W12 */ u64 rsvd_895_832; /* W13 */ @@ -711,30 +588,12 @@ enum nix_stype { /* NIX Send queue context structure */ struct nix_sq_ctx_s { -#if defined(__BIG_ENDIAN_BITFIELD) /* W0 */ - u64 sqe_way_mask : 16; - u64 cq : 20; - u64 sdp_mcast : 1; - u64 substream : 20; - u64 qint_idx : 6; - u64 ena : 1; -#else u64 ena : 1; u64 qint_idx : 6; u64 substream : 20; u64 sdp_mcast : 1; u64 cq : 20; u64 sqe_way_mask : 16; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W1 */ - u64 sqb_count : 16; - u64 default_chan : 12; - u64 smq_rr_quantum : 24; - u64 sso_ena : 1; - u64 xoff : 1; - u64 cq_ena : 1; - u64 smq : 9; -#else u64 smq : 9; u64 cq_ena : 1; u64 xoff : 1; @@ -742,37 +601,12 @@ struct nix_sq_ctx_s { u64 smq_rr_quantum : 24; u64 default_chan : 12; u64 sqb_count : 16; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W2 */ - u64 rsvd_191 : 1; - u64 sqe_stype : 2; - u64 sq_int_ena : 8; - u64 sq_int : 8; - u64 sqb_aura : 20; - u64 smq_rr_count : 25; -#else u64 smq_rr_count : 25; u64 sqb_aura : 20; u64 sq_int : 8; u64 sq_int_ena : 8; u64 sqe_stype : 2; u64 rsvd_191 : 1; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W3 */ - u64 rsvd_255_253 : 3; - u64 smq_next_sq_vld : 1; - u64 smq_pend : 1; - u64 smenq_next_sqb_vld : 1; - u64 head_offset : 6; - u64 smenq_offset : 6; - u64 tail_offset : 6; - u64 smq_lso_segnum : 8; - u64 smq_next_sq : 20; - u64 mnq_dis : 1; - u64 lmt_dis : 1; - u64 cq_limit : 8; - u64 max_sqe_size : 2; -#else u64 max_sqe_size : 2; u64 cq_limit : 8; u64 lmt_dis : 1; @@ -786,23 +620,11 @@ struct nix_sq_ctx_s { u64 smq_pend : 1; u64 smq_next_sq_vld : 1; u64 rsvd_255_253 : 3; -#endif u64 next_sqb : 64;/* W4 */ u64 tail_sqb : 64;/* W5 */ u64 smenq_sqb : 64;/* W6 */ u64 smenq_next_sqb : 64;/* W7 */ u64 head_sqb : 64;/* W8 */ -#if defined(__BIG_ENDIAN_BITFIELD) /* W9 */ - u64 rsvd_639_630 : 10; - u64 vfi_lso_vld : 
1; - u64 vfi_lso_vlan1_ins_ena : 1; - u64 vfi_lso_vlan0_ins_ena : 1; - u64 vfi_lso_mps : 14; - u64 vfi_lso_sb : 8; - u64 vfi_lso_sizem1 : 3; - u64 vfi_lso_total : 18; - u64 rsvd_583_576 : 8; -#else u64 rsvd_583_576 : 8; u64 vfi_lso_total : 18; u64 vfi_lso_sizem1 : 3; @@ -812,68 +634,28 @@ struct nix_sq_ctx_s { u64 vfi_lso_vlan1_ins_ena : 1; u64 vfi_lso_vld : 1; u64 rsvd_639_630 : 10; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W10 */ - u64 rsvd_703_658 : 46; - u64 scm_lso_rem : 18; -#else u64 scm_lso_rem : 18; u64 rsvd_703_658 : 46; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W11 */ - u64 rsvd_767_752 : 16; - u64 octs : 48; -#else u64 octs : 48; u64 rsvd_767_752 : 16; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W12 */ - u64 rsvd_831_816 : 16; - u64 pkts : 48; -#else u64 pkts : 48; u64 rsvd_831_816 : 16; -#endif u64 rsvd_895_832 : 64;/* W13 */ -#if defined(__BIG_ENDIAN_BITFIELD) /* W14 */ - u64 rsvd_959_944 : 16; - u64 dropped_octs : 48; -#else u64 dropped_octs : 48; u64 rsvd_959_944 : 16; -#endif -#if defined(__BIG_ENDIAN_BITFIELD) /* W15 */ - u64 rsvd_1023_1008 : 16; - u64 dropped_pkts : 48; -#else u64 dropped_pkts : 48; u64 rsvd_1023_1008 : 16; -#endif }; /* NIX Receive side scaling entry structure*/ struct nix_rsse_s { -#if defined(__BIG_ENDIAN_BITFIELD) - uint32_t reserved_20_31 : 12; - uint32_t rq : 20; -#else uint32_t rq : 20; uint32_t reserved_20_31 : 12; -#endif }; /* NIX receive multicast/mirror entry structure */ struct nix_rx_mce_s { -#if defined(__BIG_ENDIAN_BITFIELD) /* W0 */ - uint64_t next : 16; - uint64_t pf_func : 16; - uint64_t rsvd_31_24 : 8; - uint64_t index : 20; - uint64_t eol : 1; - uint64_t rsvd_2 : 1; - uint64_t op : 2; -#else uint64_t op : 2; uint64_t rsvd_2 : 1; uint64_t eol : 1; @@ -881,7 +663,6 @@ struct nix_rx_mce_s { uint64_t rsvd_31_24 : 8; uint64_t pf_func : 16; uint64_t next : 16; -#endif }; enum nix_lsoalg { @@ -900,15 +681,6 @@ enum nix_txlayer { }; struct nix_lso_format { -#if defined(__BIG_ENDIAN_BITFIELD) - u64 rsvd_19_63 : 45; - u64 alg : 3; - u64 rsvd_14_15 : 2; - u64 sizem1 : 2; - u64 rsvd_10_11 : 2; - u64 layer : 2; - u64 offset : 8; -#else u64 offset : 8; u64 layer : 2; u64 rsvd_10_11 : 2; @@ -916,24 +688,9 @@ struct nix_lso_format { u64 rsvd_14_15 : 2; u64 alg : 3; u64 rsvd_19_63 : 45; -#endif }; struct nix_rx_flowkey_alg { -#if defined(__BIG_ENDIAN_BITFIELD) - u64 reserved_35_63 :29; - u64 ltype_match :4; - u64 ltype_mask :4; - u64 sel_chan :1; - u64 ena :1; - u64 reserved_24_24 :1; - u64 lid :3; - u64 bytesm1 :5; - u64 hdr_offset :8; - u64 fn_mask :1; - u64 ln_mask :1; - u64 key_offset :6; -#else u64 key_offset :6; u64 ln_mask :1; u64 fn_mask :1; @@ -946,7 +703,6 @@ struct nix_rx_flowkey_alg { u64 ltype_mask :4; u64 ltype_match :4; u64 reserved_35_63 :29; -#endif }; /* NIX VTAG size */ From patchwork Thu Feb 11 01:46:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 381612 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 479F2C433E0 for ; Thu, 11 
Feb 2021 01:50:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1ECD064EC9 for ; Thu, 11 Feb 2021 01:50:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230232AbhBKBtl (ORCPT ); Wed, 10 Feb 2021 20:49:41 -0500 Received: from mx0a-0016f401.pphosted.com ([67.231.148.174]:10240 "EHLO mx0b-0016f401.pphosted.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S229870AbhBKBsK (ORCPT ); Wed, 10 Feb 2021 20:48:10 -0500 Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 11B1kKqx023671; Wed, 10 Feb 2021 17:47:24 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=Wo3zwsaIZKkzo1vsTZoWODzN9OLUxvKuGiUGo3GKseI=; b=Qvfw3synYGJuimdNUD1LM+LUCxaie5kCQEdj3xXqCWW6G1qFrPAcWtQ8s3V6Wb1XV0OB p+jcLykag34WHQfoM3QhnYkXzJLJs5yaba9+RLD1yiEidajBp+71OOBkPX9nNh2vX5XV qjHsvbD/JGxSJ8Xr6DyxqJIpj8d6ysrBLFYo+fTOLZzoNg+ty3PaSXc0e05SSZB/dIK0 vA29Sx1LnXqRl58NFdpsWVsEkwa3FUHcviEpeZBd1lSad+ivWyFBdxdBaiVjDDdq9XXO ExRJCnwXRY62JtKUN/0IjU+XBpVLvmctlELT0ktJu9EoJEo9Rjp9RkaD7AxBhj401Iy8 /g== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com with ESMTP id 36hsbrnttc-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 10 Feb 2021 17:47:24 -0800 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 10 Feb 2021 17:47:22 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 10 Feb 2021 17:47:22 -0800 Received: from hyd1soter3.marvell.com (unknown [10.29.37.12]) by maili.marvell.com (Postfix) with ESMTP id B18273F703F; Wed, 10 Feb 2021 17:47:18 -0800 (PST)
From: Geetha sowjanya To: , , CC: , , , , , , , , , , Geetha sowjanya Subject: [net-next v5 04/14] octeontx2-af: cn10k: Update NIX and NPA context in debugfs Date: Thu, 11 Feb 2021 07:16:21 +0530 Message-ID: <20210211014631.9578-5-gakula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210211014631.9578-1-gakula@marvell.com> References: <20210211014631.9578-1-gakula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.369, 18.0.737 definitions=2021-02-10_11:2021-02-10,2021-02-10 signatures=0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org
On the CN10K platform the NPA and NIX context structure bit fields have changed to support new features such as bandwidth steering. This patch dumps the appropriate context for the CN10K platform.
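The CN10K dump helpers added below reinterpret the same AQ response buffer through a cast (for example, print_nix_sq_ctx() casting struct nix_sq_ctx_s * to struct nix_cn10k_sq_ctx_s *). That cast is only safe while both views describe the same underlying context; a compile-time guard along these lines would make that explicit (a sketch for illustration only, not part of this patch):

	/* Sketch: both context views must overlay the same AQ response buffer. */
	static_assert(sizeof(struct nix_sq_ctx_s) ==
		      sizeof(struct nix_cn10k_sq_ctx_s),
		      "NIX SQ context views must be the same size");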
Signed-off-by: Geetha sowjanya Signed-off-by: Sunil Goutham --- .../marvell/octeontx2/af/rvu_debugfs.c | 177 +++++++++++++++++- 1 file changed, 175 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c index 80e964330de3..02775d49cf0f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c @@ -449,6 +449,7 @@ RVU_DEBUG_SEQ_FOPS(npa_qsize, npa_qsize_display, npa_qsize_write); static void print_npa_aura_ctx(struct seq_file *m, struct npa_aq_enq_rsp *rsp) { struct npa_aura_s *aura = &rsp->aura; + struct rvu *rvu = m->private; seq_printf(m, "W0: Pool addr\t\t%llx\n", aura->pool_addr); @@ -468,6 +469,9 @@ static void print_npa_aura_ctx(struct seq_file *m, struct npa_aq_enq_rsp *rsp) seq_printf(m, "W3: limit\t\t%llu\nW3: bp\t\t\t%d\nW3: fc_ena\t\t%d\n", (u64)aura->limit, aura->bp, aura->fc_ena); + + if (!is_rvu_otx2(rvu)) + seq_printf(m, "W3: fc_be\t\t%d\n", aura->fc_be); seq_printf(m, "W3: fc_up_crossing\t%d\nW3: fc_stype\t\t%d\n", aura->fc_up_crossing, aura->fc_stype); seq_printf(m, "W3: fc_hyst_bits\t%d\n", aura->fc_hyst_bits); @@ -485,12 +489,15 @@ static void print_npa_aura_ctx(struct seq_file *m, struct npa_aq_enq_rsp *rsp) seq_printf(m, "W5: err_qint_idx \t%d\n", aura->err_qint_idx); seq_printf(m, "W6: thresh\t\t%llu\n", (u64)aura->thresh); + if (!is_rvu_otx2(rvu)) + seq_printf(m, "W6: fc_msh_dst\t\t%d\n", aura->fc_msh_dst); } /* Dumps given NPA Pool's context */ static void print_npa_pool_ctx(struct seq_file *m, struct npa_aq_enq_rsp *rsp) { struct npa_pool_s *pool = &rsp->pool; + struct rvu *rvu = m->private; seq_printf(m, "W0: Stack base\t\t%llx\n", pool->stack_base); @@ -512,6 +519,8 @@ static void print_npa_pool_ctx(struct seq_file *m, struct npa_aq_enq_rsp *rsp) pool->avg_con, pool->fc_ena, pool->fc_stype); seq_printf(m, "W4: fc_hyst_bits\t%d\nW4: fc_up_crossing\t%d\n", pool->fc_hyst_bits, pool->fc_up_crossing); + if (!is_rvu_otx2(rvu)) + seq_printf(m, "W4: fc_be\t\t%d\n", pool->fc_be); seq_printf(m, "W4: update_time\t\t%d\n", pool->update_time); seq_printf(m, "W5: fc_addr\t\t%llx\n", pool->fc_addr); @@ -525,8 +534,10 @@ static void print_npa_pool_ctx(struct seq_file *m, struct npa_aq_enq_rsp *rsp) seq_printf(m, "W8: thresh_int\t\t%d\n", pool->thresh_int); seq_printf(m, "W8: thresh_int_ena\t%d\nW8: thresh_up\t\t%d\n", pool->thresh_int_ena, pool->thresh_up); - seq_printf(m, "W8: thresh_qint_idx\t%d\nW8: err_qint_idx\t\t%d\n", + seq_printf(m, "W8: thresh_qint_idx\t%d\nW8: err_qint_idx\t%d\n", pool->thresh_qint_idx, pool->err_qint_idx); + if (!is_rvu_otx2(rvu)) + seq_printf(m, "W8: fc_msh_dst\t\t%d\n", pool->fc_msh_dst); } /* Reads aura/pool's ctx from admin queue */ @@ -910,11 +921,78 @@ static int rvu_dbg_nix_ndc_tx_hits_miss_display(struct seq_file *filp, RVU_DEBUG_SEQ_FOPS(nix_ndc_tx_hits_miss, nix_ndc_tx_hits_miss_display, NULL); +static void print_nix_cn10k_sq_ctx(struct seq_file *m, + struct nix_cn10k_sq_ctx_s *sq_ctx) +{ + seq_printf(m, "W0: ena \t\t\t%d\nW0: qint_idx \t\t\t%d\n", + sq_ctx->ena, sq_ctx->qint_idx); + seq_printf(m, "W0: substream \t\t\t0x%03x\nW0: sdp_mcast \t\t\t%d\n", + sq_ctx->substream, sq_ctx->sdp_mcast); + seq_printf(m, "W0: cq \t\t\t\t%d\nW0: sqe_way_mask \t\t%d\n\n", + sq_ctx->cq, sq_ctx->sqe_way_mask); + + seq_printf(m, "W1: smq \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: xoff\t\t\t%d\n", + sq_ctx->smq, sq_ctx->cq_ena, sq_ctx->xoff); + seq_printf(m, "W1: sso_ena \t\t\t%d\nW1: 
smq_rr_weight\t\t%d\n", + sq_ctx->sso_ena, sq_ctx->smq_rr_weight); + seq_printf(m, "W1: default_chan\t\t%d\nW1: sqb_count\t\t\t%d\n\n", + sq_ctx->default_chan, sq_ctx->sqb_count); + + seq_printf(m, "W2: smq_rr_count_lb \t\t%d\n", sq_ctx->smq_rr_count_lb); + seq_printf(m, "W2: smq_rr_count_ub \t\t%d\n", sq_ctx->smq_rr_count_ub); + seq_printf(m, "W2: sqb_aura \t\t\t%d\nW2: sq_int \t\t\t%d\n", + sq_ctx->sqb_aura, sq_ctx->sq_int); + seq_printf(m, "W2: sq_int_ena \t\t\t%d\nW2: sqe_stype \t\t\t%d\n", + sq_ctx->sq_int_ena, sq_ctx->sqe_stype); + + seq_printf(m, "W3: max_sqe_size\t\t%d\nW3: cq_limit\t\t\t%d\n", + sq_ctx->max_sqe_size, sq_ctx->cq_limit); + seq_printf(m, "W3: lmt_dis \t\t\t%d\nW3: mnq_dis \t\t\t%d\n", + sq_ctx->mnq_dis, sq_ctx->lmt_dis); + seq_printf(m, "W3: smq_next_sq\t\t\t%d\nW3: smq_lso_segnum\t\t%d\n", + sq_ctx->smq_next_sq, sq_ctx->smq_lso_segnum); + seq_printf(m, "W3: tail_offset \t\t%d\nW3: smenq_offset\t\t%d\n", + sq_ctx->tail_offset, sq_ctx->smenq_offset); + seq_printf(m, "W3: head_offset\t\t\t%d\nW3: smenq_next_sqb_vld\t\t%d\n\n", + sq_ctx->head_offset, sq_ctx->smenq_next_sqb_vld); + + seq_printf(m, "W4: next_sqb \t\t\t%llx\n\n", sq_ctx->next_sqb); + seq_printf(m, "W5: tail_sqb \t\t\t%llx\n\n", sq_ctx->tail_sqb); + seq_printf(m, "W6: smenq_sqb \t\t\t%llx\n\n", sq_ctx->smenq_sqb); + seq_printf(m, "W7: smenq_next_sqb \t\t%llx\n\n", + sq_ctx->smenq_next_sqb); + + seq_printf(m, "W8: head_sqb\t\t\t%llx\n\n", sq_ctx->head_sqb); + + seq_printf(m, "W9: vfi_lso_total\t\t%d\n", sq_ctx->vfi_lso_total); + seq_printf(m, "W9: vfi_lso_sizem1\t\t%d\nW9: vfi_lso_sb\t\t\t%d\n", + sq_ctx->vfi_lso_sizem1, sq_ctx->vfi_lso_sb); + seq_printf(m, "W9: vfi_lso_mps\t\t\t%d\nW9: vfi_lso_vlan0_ins_ena\t%d\n", + sq_ctx->vfi_lso_mps, sq_ctx->vfi_lso_vlan0_ins_ena); + seq_printf(m, "W9: vfi_lso_vlan1_ins_ena\t%d\nW9: vfi_lso_vld \t\t%d\n\n", + sq_ctx->vfi_lso_vld, sq_ctx->vfi_lso_vlan1_ins_ena); + + seq_printf(m, "W10: scm_lso_rem \t\t%llu\n\n", + (u64)sq_ctx->scm_lso_rem); + seq_printf(m, "W11: octs \t\t\t%llu\n\n", (u64)sq_ctx->octs); + seq_printf(m, "W12: pkts \t\t\t%llu\n\n", (u64)sq_ctx->pkts); + seq_printf(m, "W14: dropped_octs \t\t%llu\n\n", + (u64)sq_ctx->dropped_octs); + seq_printf(m, "W15: dropped_pkts \t\t%llu\n\n", + (u64)sq_ctx->dropped_pkts); +} + /* Dumps given nix_sq's context */ static void print_nix_sq_ctx(struct seq_file *m, struct nix_aq_enq_rsp *rsp) { struct nix_sq_ctx_s *sq_ctx = &rsp->sq; + struct nix_hw *nix_hw = m->private; + struct rvu *rvu = nix_hw->rvu; + if (!is_rvu_otx2(rvu)) { + print_nix_cn10k_sq_ctx(m, (struct nix_cn10k_sq_ctx_s *)sq_ctx); + return; + } seq_printf(m, "W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d\n", sq_ctx->sqe_way_mask, sq_ctx->cq); seq_printf(m, "W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x\n", @@ -974,10 +1052,94 @@ static void print_nix_sq_ctx(struct seq_file *m, struct nix_aq_enq_rsp *rsp) (u64)sq_ctx->dropped_pkts); } +static void print_nix_cn10k_rq_ctx(struct seq_file *m, + struct nix_cn10k_rq_ctx_s *rq_ctx) +{ + seq_printf(m, "W0: ena \t\t\t%d\nW0: sso_ena \t\t\t%d\n", + rq_ctx->ena, rq_ctx->sso_ena); + seq_printf(m, "W0: ipsech_ena \t\t\t%d\nW0: ena_wqwd \t\t\t%d\n", + rq_ctx->ipsech_ena, rq_ctx->ena_wqwd); + seq_printf(m, "W0: cq \t\t\t\t%d\nW0: lenerr_dis \t\t\t%d\n", + rq_ctx->cq, rq_ctx->lenerr_dis); + seq_printf(m, "W0: csum_il4_dis \t\t%d\nW0: csum_ol4_dis \t\t%d\n", + rq_ctx->csum_il4_dis, rq_ctx->csum_ol4_dis); + seq_printf(m, "W0: len_il4_dis \t\t%d\nW0: len_il3_dis \t\t%d\n", + rq_ctx->len_il4_dis, rq_ctx->len_il3_dis); + 
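/* il = inner (tunnel) headers, ol = outer headers for these checks */ +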
seq_printf(m, "W0: len_ol4_dis \t\t%d\nW0: len_ol3_dis \t\t%d\n", + rq_ctx->len_ol4_dis, rq_ctx->len_ol3_dis); + seq_printf(m, "W0: wqe_aura \t\t\t%d\n\n", rq_ctx->wqe_aura); + + seq_printf(m, "W1: spb_aura \t\t\t%d\nW1: lpb_aura \t\t\t%d\n", + rq_ctx->spb_aura, rq_ctx->lpb_aura); + seq_printf(m, "W1: spb_aura \t\t\t%d\n", rq_ctx->spb_aura); + seq_printf(m, "W1: sso_grp \t\t\t%d\nW1: sso_tt \t\t\t%d\n", + rq_ctx->sso_grp, rq_ctx->sso_tt); + seq_printf(m, "W1: pb_caching \t\t\t%d\nW1: wqe_caching \t\t%d\n", + rq_ctx->pb_caching, rq_ctx->wqe_caching); + seq_printf(m, "W1: xqe_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d\n", + rq_ctx->xqe_drop_ena, rq_ctx->spb_drop_ena); + seq_printf(m, "W1: lpb_drop_ena \t\t%d\nW1: pb_stashing \t\t%d\n", + rq_ctx->lpb_drop_ena, rq_ctx->pb_stashing); + seq_printf(m, "W1: ipsecd_drop_ena \t\t%d\nW1: chi_ena \t\t\t%d\n\n", + rq_ctx->ipsecd_drop_ena, rq_ctx->chi_ena); + + seq_printf(m, "W2: band_prof_id \t\t%d\n", rq_ctx->band_prof_id); + seq_printf(m, "W2: policer_ena \t\t%d\n", rq_ctx->policer_ena); + seq_printf(m, "W2: spb_sizem1 \t\t\t%d\n", rq_ctx->spb_sizem1); + seq_printf(m, "W2: wqe_skip \t\t\t%d\nW2: sqb_ena \t\t\t%d\n", + rq_ctx->wqe_skip, rq_ctx->spb_ena); + seq_printf(m, "W2: lpb_size1 \t\t\t%d\nW2: first_skip \t\t\t%d\n", + rq_ctx->lpb_sizem1, rq_ctx->first_skip); + seq_printf(m, "W2: later_skip\t\t\t%d\nW2: xqe_imm_size\t\t%d\n", + rq_ctx->later_skip, rq_ctx->xqe_imm_size); + seq_printf(m, "W2: xqe_imm_copy \t\t%d\nW2: xqe_hdr_split \t\t%d\n\n", + rq_ctx->xqe_imm_copy, rq_ctx->xqe_hdr_split); + + seq_printf(m, "W3: xqe_drop \t\t\t%d\nW3: xqe_pass \t\t\t%d\n", + rq_ctx->xqe_drop, rq_ctx->xqe_pass); + seq_printf(m, "W3: wqe_pool_drop \t\t%d\nW3: wqe_pool_pass \t\t%d\n", + rq_ctx->wqe_pool_drop, rq_ctx->wqe_pool_pass); + seq_printf(m, "W3: spb_pool_drop \t\t%d\nW3: spb_pool_pass \t\t%d\n", + rq_ctx->spb_pool_drop, rq_ctx->spb_pool_pass); + seq_printf(m, "W3: spb_aura_drop \t\t%d\nW3: spb_aura_pass \t\t%d\n\n", + rq_ctx->spb_aura_pass, rq_ctx->spb_aura_drop); + + seq_printf(m, "W4: lpb_aura_drop \t\t%d\nW3: lpb_aura_pass \t\t%d\n", + rq_ctx->lpb_aura_pass, rq_ctx->lpb_aura_drop); + seq_printf(m, "W4: lpb_pool_drop \t\t%d\nW3: lpb_pool_pass \t\t%d\n", + rq_ctx->lpb_pool_drop, rq_ctx->lpb_pool_pass); + seq_printf(m, "W4: rq_int \t\t\t%d\nW4: rq_int_ena\t\t\t%d\n", + rq_ctx->rq_int, rq_ctx->rq_int_ena); + seq_printf(m, "W4: qint_idx \t\t\t%d\n\n", rq_ctx->qint_idx); + + seq_printf(m, "W5: ltag \t\t\t%d\nW5: good_utag \t\t\t%d\n", + rq_ctx->ltag, rq_ctx->good_utag); + seq_printf(m, "W5: bad_utag \t\t\t%d\nW5: flow_tagw \t\t\t%d\n", + rq_ctx->bad_utag, rq_ctx->flow_tagw); + seq_printf(m, "W5: ipsec_vwqe \t\t\t%d\nW5: vwqe_ena \t\t\t%d\n", + rq_ctx->ipsec_vwqe, rq_ctx->vwqe_ena); + seq_printf(m, "W5: vwqe_wait \t\t\t%d\nW5: max_vsize_exp\t\t%d\n", + rq_ctx->vwqe_wait, rq_ctx->max_vsize_exp); + seq_printf(m, "W5: vwqe_skip \t\t\t%d\n\n", rq_ctx->vwqe_skip); + + seq_printf(m, "W6: octs \t\t\t%llu\n\n", (u64)rq_ctx->octs); + seq_printf(m, "W7: pkts \t\t\t%llu\n\n", (u64)rq_ctx->pkts); + seq_printf(m, "W8: drop_octs \t\t\t%llu\n\n", (u64)rq_ctx->drop_octs); + seq_printf(m, "W9: drop_pkts \t\t\t%llu\n\n", (u64)rq_ctx->drop_pkts); + seq_printf(m, "W10: re_pkts \t\t\t%llu\n", (u64)rq_ctx->re_pkts); +} + /* Dumps given nix_rq's context */ static void print_nix_rq_ctx(struct seq_file *m, struct nix_aq_enq_rsp *rsp) { struct nix_rq_ctx_s *rq_ctx = &rsp->rq; + struct nix_hw *nix_hw = m->private; + struct rvu *rvu = nix_hw->rvu; + + if (!is_rvu_otx2(rvu)) { + 
print_nix_cn10k_rq_ctx(m, (struct nix_cn10k_rq_ctx_s *)rq_ctx); + return; + } seq_printf(m, "W0: wqe_aura \t\t\t%d\nW0: substream \t\t\t0x%03x\n", rq_ctx->wqe_aura, rq_ctx->substream); @@ -1551,6 +1713,9 @@ static void rvu_dbg_cgx_init(struct rvu *rvu) char dname[20]; void *cgx; + if (!cgx_get_cgxcnt_max()) + return; + rvu->rvu_dbg.cgx_root = debugfs_create_dir("cgx", rvu->rvu_dbg.root); for (i = 0; i < cgx_get_cgxcnt_max(); i++) { @@ -2128,9 +2293,17 @@ static void rvu_dbg_cpt_init(struct rvu *rvu, int blkaddr) &rvu_dbg_cpt_err_info_fops); } +static const char *rvu_get_dbg_dir_name(struct rvu *rvu) +{ + if (!is_rvu_otx2(rvu)) + return "cn10k"; + else + return "octeontx2"; +} + void rvu_dbg_init(struct rvu *rvu) { - rvu->rvu_dbg.root = debugfs_create_dir(DEBUGFS_DIR_NAME, NULL); + rvu->rvu_dbg.root = debugfs_create_dir(rvu_get_dbg_dir_name(rvu), NULL); debugfs_create_file("rsrc_alloc", 0444, rvu->rvu_dbg.root, rvu, &rvu_dbg_rsrc_status_fops); From patchwork Thu Feb 11 01:46:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 382373 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 649F7C433E0 for ; Thu, 11 Feb 2021 01:50:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2B55164E3B for ; Thu, 11 Feb 2021 01:50:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229454AbhBKBuF (ORCPT ); Wed, 10 Feb 2021 20:50:05 -0500 Received: from mx0b-0016f401.pphosted.com ([67.231.156.173]:19932 "EHLO mx0b-0016f401.pphosted.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229895AbhBKBsP (ORCPT ); Wed, 10 Feb 2021 20:48:15 -0500 Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 11B1kgKm000429; Wed, 10 Feb 2021 17:47:30 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=iofFx/zAMMFdYUnCy/3xCZSNAiEHCyI7beG84siPbww=; b=f92SdTOF/obvZFhQ9Jh2on8my+oJE6eNGIEXu9/4RoUu8c/ZPzX8eAavN0u7jXtanXzr BpyBH8qI0qrzgbefuw8z/lmnj/vOKPzwJiUNkdD8c+wTYjTk5pH6xtiNi7Vo0JBeMgy8 DrMmXiSSfpaV/VKdpPEdv+GlX6syNvdJ5W5q/tITnGhtE6pK77PIs3T/KuGp18HEWVyj UyhGYZlJIJA6CvyywlfuHiQqNKkNOaJqXJC0uo9hjbgMgvJPh/3G0zQyysNLYN9U1yGJ 8thiWOG+xMV4mTPbUJhOURTavZhc8Z9Wix1oBCHE5lX1nAbAOxle4oFWAb7QkELB6FEo 9A== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com with ESMTP id 36hugqdhbx-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 10 Feb 2021 17:47:30 -0800 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 10 Feb 2021 17:47:28 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend 
Transport; Wed, 10 Feb 2021 17:47:28 -0800 Received: from hyd1soter3.marvell.com (unknown [10.29.37.12]) by maili.marvell.com (Postfix) with ESMTP id AE4E73F703F; Wed, 10 Feb 2021 17:47:24 -0800 (PST)
From: Geetha sowjanya To: , , CC: , , , , , , , , , , Geetha sowjanya Subject: [net-next v5 05/14] octeontx2-pf: cn10k: Initialise NIX context Date: Thu, 11 Feb 2021 07:16:22 +0530 Message-ID: <20210211014631.9578-6-gakula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210211014631.9578-1-gakula@marvell.com> References: <20210211014631.9578-1-gakula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.369, 18.0.737 definitions=2021-02-10_11:2021-02-10,2021-02-10 signatures=0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org
On the CN10K platform the NIX RQ and SQ context structures have changed. This patch uses the new mbox message "NIX_CN10K_AQ_ENQ" for NIX context initialization on the CN10K platform. It also updates the nix_rx_parse_s and nix_sqe_sg_s structures to add the packet steering bit fields.
Signed-off-by: Geetha sowjanya Signed-off-by: Sunil Goutham --- .../marvell/octeontx2/nic/otx2_common.c | 101 +++++++++++++----- .../marvell/octeontx2/nic/otx2_struct.h | 10 +- 2 files changed, 80 insertions(+), 31 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index cbd68fa9f1d6..4a1a6b95d299 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -750,11 +750,79 @@ static int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura) return otx2_sync_mbox_msg(&pfvf->mbox); } +static int cn10k_sq_aq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura) +{ + struct nix_cn10k_aq_enq_req *aq; + + /* Get memory to put this msg */ + aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox); + if (!aq) + return -ENOMEM; + + aq->sq.cq = pfvf->hw.rx_queues + qidx; + aq->sq.max_sqe_size = NIX_MAXSQESZ_W16; /* 128 byte */ + aq->sq.cq_ena = 1; + aq->sq.ena = 1; + /* Only one SMQ is allocated, map all SQ's to that SMQ */ + aq->sq.smq = pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][0]; + /* FIXME: set based on NIX_AF_DWRR_RPM_MTU*/ + aq->sq.smq_rr_weight = OTX2_MAX_MTU; + aq->sq.default_chan = pfvf->hw.tx_chan_base; + aq->sq.sqe_stype = NIX_STYPE_STF; /* Cache SQB */ + aq->sq.sqb_aura = sqb_aura; + aq->sq.sq_int_ena = NIX_SQINT_BITS; + aq->sq.qint_idx = 0; + /* Due pipelining impact minimum 2000 unused SQ CQE's + * need to maintain to avoid CQ overflow.
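+ * (with the default 4K SQE ring this evaluates to (2000 * 256) / 4096 = 125)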
+ */ + aq->sq.cq_limit = ((SEND_CQ_SKID * 256) / (pfvf->qset.sqe_cnt)); + + /* Fill AQ info */ + aq->qidx = qidx; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_INIT; + + return otx2_sync_mbox_msg(&pfvf->mbox); +} + +static int otx2_sq_aq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura) +{ + struct nix_aq_enq_req *aq; + + /* Get memory to put this msg */ + aq = otx2_mbox_alloc_msg_nix_aq_enq(&pfvf->mbox); + if (!aq) + return -ENOMEM; + + aq->sq.cq = pfvf->hw.rx_queues + qidx; + aq->sq.max_sqe_size = NIX_MAXSQESZ_W16; /* 128 byte */ + aq->sq.cq_ena = 1; + aq->sq.ena = 1; + /* Only one SMQ is allocated, map all SQ's to that SMQ */ + aq->sq.smq = pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][0]; + aq->sq.smq_rr_quantum = DFLT_RR_QTM; + aq->sq.default_chan = pfvf->hw.tx_chan_base; + aq->sq.sqe_stype = NIX_STYPE_STF; /* Cache SQB */ + aq->sq.sqb_aura = sqb_aura; + aq->sq.sq_int_ena = NIX_SQINT_BITS; + aq->sq.qint_idx = 0; + /* Due pipelining impact minimum 2000 unused SQ CQE's + * need to maintain to avoid CQ overflow. + */ + aq->sq.cq_limit = ((SEND_CQ_SKID * 256) / (pfvf->qset.sqe_cnt)); + + /* Fill AQ info */ + aq->qidx = qidx; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_INIT; + + return otx2_sync_mbox_msg(&pfvf->mbox); +} + static int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura) { struct otx2_qset *qset = &pfvf->qset; struct otx2_snd_queue *sq; - struct nix_aq_enq_req *aq; struct otx2_pool *pool; int err; @@ -797,34 +865,11 @@ static int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura) sq->stats.bytes = 0; sq->stats.pkts = 0; - /* Get memory to put this msg */ - aq = otx2_mbox_alloc_msg_nix_aq_enq(&pfvf->mbox); - if (!aq) - return -ENOMEM; - - aq->sq.cq = pfvf->hw.rx_queues + qidx; - aq->sq.max_sqe_size = NIX_MAXSQESZ_W16; /* 128 byte */ - aq->sq.cq_ena = 1; - aq->sq.ena = 1; - /* Only one SMQ is allocated, map all SQ's to that SMQ */ - aq->sq.smq = pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][0]; - aq->sq.smq_rr_quantum = DFLT_RR_QTM; - aq->sq.default_chan = pfvf->hw.tx_chan_base; - aq->sq.sqe_stype = NIX_STYPE_STF; /* Cache SQB */ - aq->sq.sqb_aura = sqb_aura; - aq->sq.sq_int_ena = NIX_SQINT_BITS; - aq->sq.qint_idx = 0; - /* Due pipelining impact minimum 2000 unused SQ CQE's - * need to maintain to avoid CQ overflow. 
- */ - aq->sq.cq_limit = ((SEND_CQ_SKID * 256) / (sq->sqe_cnt)); - - /* Fill AQ info */ - aq->qidx = qidx; - aq->ctype = NIX_AQ_CTYPE_SQ; - aq->op = NIX_AQ_INSTOP_INIT; + if (is_dev_otx2(pfvf->pdev)) + return otx2_sq_aq_init(pfvf, qidx, sqb_aura); + else + return cn10k_sq_aq_init(pfvf, qidx, sqb_aura); - return otx2_sync_mbox_msg(&pfvf->mbox); } static int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h index cba59ddf71bb..1f49b3caf5d4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h @@ -142,7 +142,9 @@ struct nix_rx_parse_s { u64 vtag0_ptr : 8; /* W5 */ u64 vtag1_ptr : 8; u64 flow_key_alg : 5; - u64 rsvd_383_341 : 43; + u64 rsvd_359_341 : 19; + u64 color : 2; + u64 rsvd_383_362 : 22; u64 rsvd_447_384; /* W6 */ }; @@ -218,7 +220,8 @@ struct nix_sqe_ext_s { u64 vlan1_ins_tci : 16; u64 vlan0_ins_ena : 1; u64 vlan1_ins_ena : 1; - u64 rsvd_127_114 : 14; + u64 init_color : 2; + u64 rsvd_127_116 : 12; }; struct nix_sqe_sg_s { @@ -237,7 +240,8 @@ struct nix_sqe_sg_s { /* NIX send memory subdescriptor structure */ struct nix_sqe_mem_s { u64 offset : 16; /* W0 */ - u64 rsvd_52_16 : 37; + u64 rsvd_51_16 : 36; + u64 per_lso_seg : 1; u64 wmem : 1; u64 dsz : 2; u64 alg : 4; From patchwork Thu Feb 11 01:46:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 381611 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2999C4332D for ; Thu, 11 Feb 2021 01:50:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B9C1F64ECE for ; Thu, 11 Feb 2021 01:50:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230042AbhBKBuW (ORCPT ); Wed, 10 Feb 2021 20:50:22 -0500 Received: from mx0b-0016f401.pphosted.com ([67.231.156.173]:64390 "EHLO mx0b-0016f401.pphosted.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229854AbhBKBsW (ORCPT ); Wed, 10 Feb 2021 20:48:22 -0500 Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 11B1kgKt000423; Wed, 10 Feb 2021 17:47:37 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=F62G4dyQUfrDK+IyPBJmeTn9ilJhWVfUGcNeEcjLjnM=; b=P2sjk09LjIvG5ZyOLkS9vhOujm9OL5vOrGpbMW49dBJHFE7HaLylso93Bq4Whk8iXm4R /BHulzEtCz1fGOO0eJ6isCaUXGVp6LBjQJf7kesDMLnn8sxU6HKS7vtMzPx7yW36MrBw 1c5K5QHZgYjzjjJY76RD/g7MMIoeHJ79H0rYGxKYKnWZsNnFOP1ogcvMO2cQ3x+TKoKD a4YVCPvWEn/eYLnl6YNXNRVlV6Q4K4jeh2qW02ZsF5lXAKfyw4EZGg3OhgKz/C43pzmu xkdCIKv24jlWLoqPiPDy77U5La+aJ/05iXDAhlSX6g8J8FQoi0Pkotnxc3mmccidEX2s Kg== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com with ESMTP id 
36hugqdhca-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 10 Feb 2021 17:47:37 -0800 Received: from SC-EXCH01.marvell.com (10.93.176.81) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 10 Feb 2021 17:47:35 -0800 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 10 Feb 2021 17:47:34 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 10 Feb 2021 17:47:34 -0800 Received: from hyd1soter3.marvell.com (unknown [10.29.37.12]) by maili.marvell.com (Postfix) with ESMTP id B2D183F7041; Wed, 10 Feb 2021 17:47:30 -0800 (PST)
From: Geetha sowjanya To: , , CC: , , , , , , , , , , Geetha sowjanya Subject: [net-next v5 06/14] octeontx2-pf: cn10k: Map LMTST region Date: Thu, 11 Feb 2021 07:16:23 +0530 Message-ID: <20210211014631.9578-7-gakula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210211014631.9578-1-gakula@marvell.com> References: <20210211014631.9578-1-gakula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.369, 18.0.737 definitions=2021-02-10_11:2021-02-10,2021-02-10 signatures=0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org
On the CN10K platform, transmit/receive buffer alloc and free from/to hardware has changed to support burst operation, whereas previous silicons support freeing only a single buffer at a time. To support this, firmware allocates a DRAM region for each PF/VF for storing LMTLINES. These LMTLINES are used for NPA batch free and for flushing SQEs to the hardware. The PF/VF LMTST region is accessed via BAR4; a PF's LMTST region is followed by its VFs' mbox memory. The size of the region varies from 2KB to 256KB based on the number of LMTLINES configured. This patch adds support for - Mapping the PF/VF LMTST region. - Reserving LMTST lines 0-71 (RX + TX + XDP) for the NPA batch free operation. - Reserving LMTST lines 72-512 for NIX SQE flush.
Signed-off-by: Geetha sowjanya Signed-off-by: Sunil Goutham --- .../marvell/octeontx2/nic/otx2_common.h | 15 +++++- .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 52 ++++++++++++++++++- .../ethernet/marvell/octeontx2/nic/otx2_vf.c | 37 ++++++++++++- 3 files changed, 100 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index e05a5d59fe0c..80f892fd8800 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -207,7 +207,14 @@ struct otx2_hw { u8 lbk_links; /* No.
of LBK links present in HW */ #define HW_TSO BIT_ULL(0) #define CN10K_MBOX BIT_ULL(1) +#define CN10K_LMTST BIT_ULL(2) unsigned long cap_flag; + +#define LMT_LINE_SIZE 128 +#define NIX_LMTID_BASE 72 /* RX + TX + XDP */ + void __iomem *lmt_base; + u64 *npa_lmt_base; + u64 *nix_lmt_base; }; struct otx2_vf_config { @@ -317,6 +324,10 @@ struct otx2_nic { /* Block address of NIX either BLKADDR_NIX0 or BLKADDR_NIX1 */ int nix_blkaddr; + /* LMTST Lines info */ + u16 tot_lmt_lines; + u16 nix_lmt_lines; + u32 nix_lmt_size; struct otx2_ptp *ptp; struct hwtstamp_config tstamp; @@ -382,8 +393,10 @@ static inline void otx2_setup_dev_hw_settings(struct otx2_nic *pfvf) pfvf->hw.rq_skid = 600; pfvf->qset.rqe_cnt = Q_COUNT(Q_SIZE_1K); } - if (!is_dev_otx2(pfvf->pdev)) + if (!is_dev_otx2(pfvf->pdev)) { __set_bit(CN10K_MBOX, &hw->cap_flag); + __set_bit(CN10K_LMTST, &hw->cap_flag); + } } /* Register read/write APIs */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 55df8ad88985..997d1372703f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -46,6 +46,39 @@ enum { static int otx2_config_hw_tx_tstamp(struct otx2_nic *pfvf, bool enable); static int otx2_config_hw_rx_tstamp(struct otx2_nic *pfvf, bool enable); +static int cn10k_lmtst_init(struct otx2_nic *pf) +{ + int size, num_lines; + u64 base; + + if (!test_bit(CN10K_LMTST, &pf->hw.cap_flag)) + return 0; + + base = pci_resource_start(pf->pdev, PCI_MBOX_BAR_NUM) + + (MBOX_SIZE * (pf->total_vfs + 1)); + + size = pci_resource_len(pf->pdev, PCI_MBOX_BAR_NUM) - + (MBOX_SIZE * (pf->total_vfs + 1)); + + pf->hw.lmt_base = ioremap(base, size); + + if (!pf->hw.lmt_base) { + dev_err(pf->dev, "Unable to map PF LMTST region\n"); + return -ENOMEM; + } + + /* FIXME: Get the num of LMTST lines from LMT table */ + pf->tot_lmt_lines = size / LMT_LINE_SIZE; + num_lines = (pf->tot_lmt_lines - NIX_LMTID_BASE) / + pf->hw.tx_queues; + /* Number of LMT lines per SQ queues */ + pf->nix_lmt_lines = num_lines > 32 ? 
32 : num_lines; + + pf->nix_lmt_size = pf->nix_lmt_lines * LMT_LINE_SIZE; + + return 0; +} + static int otx2_change_mtu(struct net_device *netdev, int new_mtu) { bool if_up = netif_running(netdev); @@ -1495,6 +1528,14 @@ int otx2_open(struct net_device *netdev) if (!qset->rq) goto err_free_mem; + if (test_bit(CN10K_LMTST, &pf->hw.cap_flag)) { + /* Reserve LMT lines for NPA AURA batch free */ + pf->hw.npa_lmt_base = (__force u64 *)pf->hw.lmt_base; + /* Reserve LMT lines for NIX TX */ + pf->hw.nix_lmt_base = (__force u64 *)((u64)pf->hw.npa_lmt_base + + (NIX_LMTID_BASE * LMT_LINE_SIZE)); + } + err = otx2_init_hw_resources(pf); if (err) goto err_free_mem; @@ -2333,6 +2374,8 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) goto err_free_netdev; } + otx2_setup_dev_hw_settings(pf); + /* Init PF <=> AF mailbox stuff */ err = otx2_pfaf_mbox_init(pf); if (err) @@ -2358,7 +2401,9 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (err) goto err_detach_rsrc; - otx2_setup_dev_hw_settings(pf); + err = cn10k_lmtst_init(pf); + if (err) + goto err_detach_rsrc; /* Assign default mac address */ otx2_get_mac_from_af(netdev); @@ -2443,6 +2488,8 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) err_ptp_destroy: otx2_ptp_destroy(pf); err_detach_rsrc: + if (hw->lmt_base) + iounmap(hw->lmt_base); otx2_detach_resources(&pf->mbox); err_disable_mbox_intr: otx2_disable_mbox_intr(pf); @@ -2602,6 +2649,9 @@ static void otx2_remove(struct pci_dev *pdev) otx2_ptp_destroy(pf); otx2_mcam_flow_del(pf); otx2_detach_resources(&pf->mbox); + if (pf->hw.lmt_base) + iounmap(pf->hw.lmt_base); + otx2_disable_mbox_intr(pf); otx2_pfaf_mbox_destroy(pf); pci_free_irq_vectors(pf->pdev); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c index e7d8fef4f53d..9ed850b75d59 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c @@ -27,6 +27,31 @@ enum { RVU_VF_INT_VEC_MBOX = 0x0, }; +static int cn10k_lmtst_init(struct otx2_nic *vf) +{ + int size, num_lines; + + if (!test_bit(CN10K_LMTST, &vf->hw.cap_flag)) + return 0; + + size = pci_resource_len(vf->pdev, PCI_MBOX_BAR_NUM); + vf->hw.lmt_base = ioremap_wc(pci_resource_start(vf->pdev, + PCI_MBOX_BAR_NUM), + size); + if (!vf->hw.lmt_base) { + dev_err(vf->dev, "Unable to map VF LMTST region\n"); + return -ENOMEM; + } + + vf->tot_lmt_lines = size / LMT_LINE_SIZE; + /* LMTST lines per SQ */ + num_lines = (vf->tot_lmt_lines - NIX_LMTID_BASE) / + vf->hw.tx_queues; + vf->nix_lmt_lines = num_lines > 32 ? 
32 : num_lines; + vf->nix_lmt_size = vf->nix_lmt_lines * LMT_LINE_SIZE; + return 0; +} + static void otx2vf_process_vfaf_mbox_msg(struct otx2_nic *vf, struct mbox_msghdr *msg) { @@ -536,6 +561,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id) goto err_free_irq_vectors; } + otx2_setup_dev_hw_settings(vf); /* Init VF <=> PF mailbox stuff */ err = otx2vf_vfaf_mbox_init(vf); if (err) @@ -559,7 +585,9 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (err) goto err_detach_rsrc; - otx2_setup_dev_hw_settings(vf); + err = cn10k_lmtst_init(vf); + if (err) + goto err_detach_rsrc; /* Assign default mac address */ otx2_get_mac_from_af(netdev); @@ -611,6 +639,8 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id) return 0; err_detach_rsrc: + if (hw->lmt_base) + iounmap(hw->lmt_base); otx2_detach_resources(&vf->mbox); err_disable_mbox_intr: otx2vf_disable_mbox_intr(vf); @@ -639,8 +669,11 @@ static void otx2vf_remove(struct pci_dev *pdev) cancel_work_sync(&vf->reset_task); unregister_netdev(netdev); otx2vf_disable_mbox_intr(vf); - otx2_detach_resources(&vf->mbox); + + if (vf->hw.lmt_base) + iounmap(vf->hw.lmt_base); + otx2vf_vfaf_mbox_destroy(vf); pci_free_irq_vectors(vf->pdev); pci_set_drvdata(pdev, NULL); From patchwork Thu Feb 11 01:46:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 381610 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7C00EC433E0 for ; Thu, 11 Feb 2021 01:51:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2A1BD64DBD for ; Thu, 11 Feb 2021 01:51:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229913AbhBKBvT (ORCPT ); Wed, 10 Feb 2021 20:51:19 -0500 Received: from mx0a-0016f401.pphosted.com ([67.231.148.174]:30830 "EHLO mx0b-0016f401.pphosted.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S229974AbhBKBs1 (ORCPT ); Wed, 10 Feb 2021 20:48:27 -0500 Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 11B1kOc0023685; Wed, 10 Feb 2021 17:47:43 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=4Ef/u5RO2lmVf0NbzpbTYphb67yqe8ikUZcBFbTaucI=; b=CvOBCsPqMFkHkeXbxz3QXsrP/ItwI+IpUNSibv/f+AJsUQcD8xxf6F5gomFJFmcy98T0 WKgZa970gil9JJiAlQndd4Jv7zloleF+TuhCJOW+Bi7otr6jKJiOCaH2OlyhB8fly/PB 5h+dbrvGHvdiiplM1GARgOvYF/9SkuPriJd0Vyi5W64d/Ph4zUDDO+v22/jinlIozYab BIF0QyWF6wxSq6nCCB5Q1XYf+xqcK2YYhvu1SO+2zE9EL326ArVsMjyncbTXfBtbDgZi DHVltjLHXisq8uQLsD2WKZ/4a6a8IXUC6LmZnGrjzGGV84jPYwX2Rjm2/xmXTMORrgJ6 8A== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com with ESMTP id 36hsbrnttw-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); 
Wed, 10 Feb 2021 17:47:42 -0800 Received: from SC-EXCH02.marvell.com (10.93.176.82) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 10 Feb 2021 17:47:41 -0800 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 10 Feb 2021 17:47:41 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 10 Feb 2021 17:47:40 -0800 Received: from hyd1soter3.marvell.com (unknown [10.29.37.12]) by maili.marvell.com (Postfix) with ESMTP id C75913F703F; Wed, 10 Feb 2021 17:47:36 -0800 (PST)
From: Geetha sowjanya To: , , CC: , , , , , , , , , , Geetha sowjanya Subject: [net-next v5 07/14] octeontx2-pf: cn10k: Use LMTST lines for NPA/NIX operations Date: Thu, 11 Feb 2021 07:16:24 +0530 Message-ID: <20210211014631.9578-8-gakula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210211014631.9578-1-gakula@marvell.com> References: <20210211014631.9578-1-gakula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.369, 18.0.737 definitions=2021-02-10_11:2021-02-10,2021-02-10 signatures=0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org
This patch adds support to use the new LMTST lines for NPA batch free and burst SQE flush. It adds a new dev_hw_ops structure to hold platform-specific functions and creates the new files cn10k.c and cn10k.h.
Signed-off-by: Geetha sowjanya Signed-off-by: Sunil Goutham --- .../ethernet/marvell/octeontx2/nic/Makefile | 2 +- .../ethernet/marvell/octeontx2/nic/cn10k.c | 182 ++++++++++++++++++ .../ethernet/marvell/octeontx2/nic/cn10k.h | 17 ++ .../marvell/octeontx2/nic/otx2_common.c | 81 ++++---- .../marvell/octeontx2/nic/otx2_common.h | 67 ++++++- .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 36 +--- .../ethernet/marvell/octeontx2/nic/otx2_reg.h | 1 + .../marvell/octeontx2/nic/otx2_txrx.c | 39 ++-- .../marvell/octeontx2/nic/otx2_txrx.h | 7 + .../ethernet/marvell/octeontx2/nic/otx2_vf.c | 28 +-- include/linux/soc/marvell/octeontx2/asm.h | 8 + 11 files changed, 334 insertions(+), 134 deletions(-) create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile index 29c82b94b2dc..745aa8a19499 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile +++ b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile @@ -7,7 +7,7 @@ obj-$(CONFIG_OCTEONTX2_PF) += rvu_nicpf.o obj-$(CONFIG_OCTEONTX2_VF) += rvu_nicvf.o rvu_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \ - otx2_ptp.o otx2_flows.o + otx2_ptp.o otx2_flows.o cn10k.o rvu_nicvf-y := otx2_vf.o ccflags-y += -I$(srctree)/drivers/net/ethernet/marvell/octeontx2/af
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c new file mode 100644 index 000000000000..70548d12ff42 --- /dev/null +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c @@ -0,0 +1,182 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Marvell OcteonTx2 RVU Physical Function ethernet driver + * + * Copyright (C) 2020 Marvell.
+ */ + +#include "cn10k.h" +#include "otx2_reg.h" +#include "otx2_struct.h" + +static struct dev_hw_ops otx2_hw_ops = { + .sq_aq_init = otx2_sq_aq_init, + .sqe_flush = otx2_sqe_flush, + .aura_freeptr = otx2_aura_freeptr, + .refill_pool_ptrs = otx2_refill_pool_ptrs, +}; + +static struct dev_hw_ops cn10k_hw_ops = { + .sq_aq_init = cn10k_sq_aq_init, + .sqe_flush = cn10k_sqe_flush, + .aura_freeptr = cn10k_aura_freeptr, + .refill_pool_ptrs = cn10k_refill_pool_ptrs, +}; + +int cn10k_pf_lmtst_init(struct otx2_nic *pf) +{ + int size, num_lines; + u64 base; + + if (!test_bit(CN10K_LMTST, &pf->hw.cap_flag)) { + pf->hw_ops = &otx2_hw_ops; + return 0; + } + + pf->hw_ops = &cn10k_hw_ops; + base = pci_resource_start(pf->pdev, PCI_MBOX_BAR_NUM) + + (MBOX_SIZE * (pf->total_vfs + 1)); + + size = pci_resource_len(pf->pdev, PCI_MBOX_BAR_NUM) - + (MBOX_SIZE * (pf->total_vfs + 1)); + + pf->hw.lmt_base = ioremap(base, size); + + if (!pf->hw.lmt_base) { + dev_err(pf->dev, "Unable to map PF LMTST region\n"); + return -ENOMEM; + } + + /* FIXME: Get the num of LMTST lines from LMT table */ + pf->tot_lmt_lines = size / LMT_LINE_SIZE; + num_lines = (pf->tot_lmt_lines - NIX_LMTID_BASE) / + pf->hw.tx_queues; + /* Number of LMT lines per SQ queues */ + pf->nix_lmt_lines = num_lines > 32 ? 32 : num_lines; + + pf->nix_lmt_size = pf->nix_lmt_lines * LMT_LINE_SIZE; + return 0; +} + +int cn10k_vf_lmtst_init(struct otx2_nic *vf) +{ + int size, num_lines; + + if (!test_bit(CN10K_LMTST, &vf->hw.cap_flag)) { + vf->hw_ops = &otx2_hw_ops; + return 0; + } + + vf->hw_ops = &cn10k_hw_ops; + size = pci_resource_len(vf->pdev, PCI_MBOX_BAR_NUM); + vf->hw.lmt_base = ioremap_wc(pci_resource_start(vf->pdev, + PCI_MBOX_BAR_NUM), + size); + if (!vf->hw.lmt_base) { + dev_err(vf->dev, "Unable to map VF LMTST region\n"); + return -ENOMEM; + } + + vf->tot_lmt_lines = size / LMT_LINE_SIZE; + /* LMTST lines per SQ */ + num_lines = (vf->tot_lmt_lines - NIX_LMTID_BASE) / + vf->hw.tx_queues; + vf->nix_lmt_lines = num_lines > 32 ? 32 : num_lines; + vf->nix_lmt_size = vf->nix_lmt_lines * LMT_LINE_SIZE; + return 0; +} +EXPORT_SYMBOL(cn10k_vf_lmtst_init); + +int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura) +{ + struct nix_cn10k_aq_enq_req *aq; + struct otx2_nic *pfvf = dev; + struct otx2_snd_queue *sq; + + sq = &pfvf->qset.sq[qidx]; + sq->lmt_addr = (__force u64 *)((u64)pfvf->hw.nix_lmt_base + + (qidx * pfvf->nix_lmt_size)); + + /* Get memory to put this msg */ + aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox); + if (!aq) + return -ENOMEM; + + aq->sq.cq = pfvf->hw.rx_queues + qidx; + aq->sq.max_sqe_size = NIX_MAXSQESZ_W16; /* 128 byte */ + aq->sq.cq_ena = 1; + aq->sq.ena = 1; + /* Only one SMQ is allocated, map all SQ's to that SMQ */ + aq->sq.smq = pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][0]; + /* FIXME: set based on NIX_AF_DWRR_RPM_MTU*/ + aq->sq.smq_rr_weight = OTX2_MAX_MTU; + aq->sq.default_chan = pfvf->hw.tx_chan_base; + aq->sq.sqe_stype = NIX_STYPE_STF; /* Cache SQB */ + aq->sq.sqb_aura = sqb_aura; + aq->sq.sq_int_ena = NIX_SQINT_BITS; + aq->sq.qint_idx = 0; + /* Due pipelining impact minimum 2000 unused SQ CQE's + * need to maintain to avoid CQ overflow. 
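+ * (the CN10K path keeps the same 2000-entry send skid as otx2_sq_aq_init())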
+ */ + aq->sq.cq_limit = ((SEND_CQ_SKID * 256) / (pfvf->qset.sqe_cnt)); + + /* Fill AQ info */ + aq->qidx = qidx; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_INIT; + + return otx2_sync_mbox_msg(&pfvf->mbox); +} + +#define NPA_MAX_BURST 16 +void cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq) +{ + struct otx2_nic *pfvf = dev; + u64 ptrs[NPA_MAX_BURST]; + int num_ptrs = 1; + s64 bufptr; + + /* Refill pool with new buffers */ + while (cq->pool_ptrs) { + bufptr = otx2_alloc_buffer(pfvf, cq); + if (unlikely(bufptr <= 0)) { + if (num_ptrs--) + __cn10k_aura_freeptr(pfvf, cq->cq_idx, ptrs, + num_ptrs, + cq->rbpool->lmt_addr); + break; + } + cq->pool_ptrs--; + ptrs[num_ptrs] = (u64)bufptr + OTX2_HEAD_ROOM; + num_ptrs++; + if (num_ptrs == NPA_MAX_BURST || cq->pool_ptrs == 0) { + __cn10k_aura_freeptr(pfvf, cq->cq_idx, ptrs, + num_ptrs, + cq->rbpool->lmt_addr); + num_ptrs = 1; + } + } +} + +void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx) +{ + struct otx2_nic *pfvf = dev; + int lmt_id = NIX_LMTID_BASE + (qidx * pfvf->nix_lmt_lines); + u64 val = 0, tar_addr = 0; + + /* FIXME: val[0:10] LMT_ID. + * [12:15] no of LMTST - 1 in the burst. + * [19:63] data size of each LMTST in the burst except first. + */ + val = (lmt_id & 0x7FF); + /* Target address for LMTST flush tells HW how many 128bit + * words are present. + * tar_addr[6:4] size of first LMTST - 1 in units of 128b. + */ + tar_addr |= sq->io_addr | (((size / 16) - 1) & 0x7) << 4; + dma_wmb(); + memcpy(sq->lmt_addr, sq->sqe_base, size); + cn10k_lmt_flush(val, tar_addr); + + sq->head++; + sq->head &= (sq->sqe_cnt - 1); +} diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h new file mode 100644 index 000000000000..e0bc595cbb78 --- /dev/null +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0 + * Marvell OcteonTx2 RVU Ethernet driver + * + * Copyright (C) 2020 Marvell. 
+ */ + +#ifndef CN10K_H +#define CN10K_H + +#include "otx2_common.h" + +void cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq); +void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx); +int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura); +int cn10k_pf_lmtst_init(struct otx2_nic *pf); +int cn10k_vf_lmtst_init(struct otx2_nic *vf); +#endif /* CN10K_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index 4a1a6b95d299..49351a2d7629 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -15,6 +15,7 @@ #include "otx2_reg.h" #include "otx2_common.h" #include "otx2_struct.h" +#include "cn10k.h" static void otx2_nix_rq_op_stats(struct queue_stats *stats, struct otx2_nic *pfvf, int qidx) @@ -512,6 +513,25 @@ static dma_addr_t otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool) return addr; } +s64 otx2_alloc_buffer(struct otx2_nic *pfvf, struct otx2_cq_queue *cq) +{ + s64 bufptr; + + bufptr = __otx2_alloc_rbuf(pfvf, cq->rbpool); + if (unlikely(bufptr <= 0)) { + struct refill_work *work; + struct delayed_work *dwork; + + work = &pfvf->refill_wrk[cq->cq_idx]; + dwork = &work->pool_refill_work; + /* Schedule a task if no other task is running */ + if (!cq->refill_task_sched) { + cq->refill_task_sched = true; + schedule_delayed_work(dwork, msecs_to_jiffies(100)); + } + } + return bufptr; +} void otx2_tx_timeout(struct net_device *netdev, unsigned int txq) { struct otx2_nic *pfvf = netdev_priv(netdev); @@ -714,9 +734,6 @@ void otx2_sqb_flush(struct otx2_nic *pfvf) #define RQ_PASS_LVL_AURA (255 - ((95 * 256) / 100)) /* RED when 95% is full */ #define RQ_DROP_LVL_AURA (255 - ((99 * 256) / 100)) /* Drop when 99% is full */ -/* Send skid of 2000 packets required for CQ size of 4K CQEs. */ -#define SEND_CQ_SKID 2000 - static int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura) { struct otx2_qset *qset = &pfvf->qset; @@ -750,45 +767,14 @@ static int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura) return otx2_sync_mbox_msg(&pfvf->mbox); } -static int cn10k_sq_aq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura) -{ - struct nix_cn10k_aq_enq_req *aq; - - /* Get memory to put this msg */ - aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox); - if (!aq) - return -ENOMEM; - - aq->sq.cq = pfvf->hw.rx_queues + qidx; - aq->sq.max_sqe_size = NIX_MAXSQESZ_W16; /* 128 byte */ - aq->sq.cq_ena = 1; - aq->sq.ena = 1; - /* Only one SMQ is allocated, map all SQ's to that SMQ */ - aq->sq.smq = pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][0]; - /* FIXME: set based on NIX_AF_DWRR_RPM_MTU*/ - aq->sq.smq_rr_weight = OTX2_MAX_MTU; - aq->sq.default_chan = pfvf->hw.tx_chan_base; - aq->sq.sqe_stype = NIX_STYPE_STF; /* Cache SQB */ - aq->sq.sqb_aura = sqb_aura; - aq->sq.sq_int_ena = NIX_SQINT_BITS; - aq->sq.qint_idx = 0; - /* Due pipelining impact minimum 2000 unused SQ CQE's - * need to maintain to avoid CQ overflow. 
- */ - aq->sq.cq_limit = ((SEND_CQ_SKID * 256) / (pfvf->qset.sqe_cnt)); - - /* Fill AQ info */ - aq->qidx = qidx; - aq->ctype = NIX_AQ_CTYPE_SQ; - aq->op = NIX_AQ_INSTOP_INIT; - - return otx2_sync_mbox_msg(&pfvf->mbox); -} - -static int otx2_sq_aq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura) +int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura) { + struct otx2_nic *pfvf = dev; + struct otx2_snd_queue *sq; struct nix_aq_enq_req *aq; + sq = &pfvf->qset.sq[qidx]; + sq->lmt_addr = (__force u64 *)(pfvf->reg_base + LMT_LF_LMTLINEX(qidx)); /* Get memory to put this msg */ aq = otx2_mbox_alloc_msg_nix_aq_enq(&pfvf->mbox); if (!aq) @@ -859,16 +845,12 @@ static int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura) sq->sqe_thresh = ((sq->num_sqbs * sq->sqe_per_sqb) * 10) / 100; sq->aura_id = sqb_aura; sq->aura_fc_addr = pool->fc_addr->base; - sq->lmt_addr = (__force u64 *)(pfvf->reg_base + LMT_LF_LMTLINEX(qidx)); sq->io_addr = (__force u64)otx2_get_regaddr(pfvf, NIX_LF_OP_SENDX(0)); sq->stats.bytes = 0; sq->stats.pkts = 0; - if (is_dev_otx2(pfvf->pdev)) - return otx2_sq_aq_init(pfvf, qidx, sqb_aura); - else - return cn10k_sq_aq_init(pfvf, qidx, sqb_aura); + return pfvf->hw_ops->sq_aq_init(pfvf, qidx, sqb_aura); } @@ -1218,6 +1200,11 @@ static int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id, pool->rbsize = buf_size; + /* Set LMTST addr for NPA batch free */ + if (test_bit(CN10K_LMTST, &pfvf->hw.cap_flag)) + pool->lmt_addr = (__force u64 *)((u64)pfvf->hw.npa_lmt_base + + (pool_id * LMT_LINE_SIZE)); + /* Initialize this pool's context via AF */ aq = otx2_mbox_alloc_msg_npa_aq_enq(&pfvf->mbox); if (!aq) { @@ -1307,7 +1294,7 @@ int otx2_sq_aura_pool_init(struct otx2_nic *pfvf) bufptr = otx2_alloc_rbuf(pfvf, pool); if (bufptr <= 0) return bufptr; - otx2_aura_freeptr(pfvf, pool_id, bufptr); + pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr); sq->sqb_ptrs[sq->sqb_count++] = (u64)bufptr; } } @@ -1358,8 +1345,8 @@ int otx2_rq_aura_pool_init(struct otx2_nic *pfvf) bufptr = otx2_alloc_rbuf(pfvf, pool); if (bufptr <= 0) return bufptr; - otx2_aura_freeptr(pfvf, pool_id, - bufptr + OTX2_HEAD_ROOM); + pfvf->hw_ops->aura_freeptr(pfvf, pool_id, + bufptr + OTX2_HEAD_ROOM); } } diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index 80f892fd8800..52205cbd6ada 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -50,6 +50,9 @@ enum arua_mapped_qtypes { #define NIX_LF_ERR_VEC 0x81 #define NIX_LF_POISON_VEC 0x82 +/* Send skid of 2000 packets required for CQ size of 4K CQEs. 
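The skid keeps enough CQEs free for send completions still in flight.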
*/ +#define SEND_CQ_SKID 2000 + /* RSS configuration */ struct otx2_rss_ctx { u8 ind_tbl[MAX_RSS_INDIR_TBL_SIZE]; @@ -273,9 +276,18 @@ struct otx2_flow_config { struct list_head flow_list; }; +struct dev_hw_ops { + int (*sq_aq_init)(void *dev, u16 qidx, u16 sqb_aura); + void (*sqe_flush)(void *dev, struct otx2_snd_queue *sq, + int size, int qidx); + void (*refill_pool_ptrs)(void *dev, struct otx2_cq_queue *cq); + void (*aura_freeptr)(void *dev, int aura, s64 buf); +}; + struct otx2_nic { void __iomem *reg_base; struct net_device *netdev; + struct dev_hw_ops *hw_ops; void *iommu_domain; u16 max_frs; u16 rbsize; /* Receive buffer size */ @@ -505,10 +517,51 @@ static inline u64 otx2_atomic64_add(u64 incr, u64 *ptr) } #else -#define otx2_write128(lo, hi, addr) +#define otx2_write128(lo, hi, addr) writeq((hi) | (lo), addr) #define otx2_atomic64_add(incr, ptr) ({ *ptr += incr; }) #endif +static inline void __cn10k_aura_freeptr(struct otx2_nic *pfvf, u64 aura, + u64 *ptrs, u64 num_ptrs, + u64 *lmt_addr) +{ + u64 size = 0, count_eot = 0; + u64 tar_addr, val = 0; + + tar_addr = (__force u64)otx2_get_regaddr(pfvf, NPA_LF_AURA_BATCH_FREE0); + /* LMTID is same as AURA Id */ + val = (aura & 0x7FF) | BIT_ULL(63); + /* Set if [127:64] of last 128bit word has a valid pointer */ + count_eot = (num_ptrs % 2) ? 0ULL : 1ULL; + /* Set AURA ID to free pointer */ + ptrs[0] = (count_eot << 32) | (aura & 0xFFFFF); + /* Target address for LMTST flush tells HW how many 128bit + * words are valid from NPA_LF_AURA_BATCH_FREE0. + * + * tar_addr[6:4] is LMTST size-1 in units of 128b. + */ + if (num_ptrs > 2) { + size = (sizeof(u64) * num_ptrs) / 16; + if (!count_eot) + size++; + tar_addr |= ((size - 1) & 0x7) << 4; + } + memcpy(lmt_addr, ptrs, sizeof(u64) * num_ptrs); + /* Perform LMTST flush */ + cn10k_lmt_flush(val, tar_addr); +} + +static inline void cn10k_aura_freeptr(void *dev, int aura, s64 buf) +{ + struct otx2_nic *pfvf = dev; + struct otx2_pool *pool; + u64 ptrs[2]; + + pool = &pfvf->qset.pool[aura]; + ptrs[1] = (u64)buf; + __cn10k_aura_freeptr(pfvf, aura, ptrs, 2, pool->lmt_addr); +} + /* Alloc pointer from pool/aura */ static inline u64 otx2_aura_allocptr(struct otx2_nic *pfvf, int aura) { @@ -520,11 +573,12 @@ static inline u64 otx2_aura_allocptr(struct otx2_nic *pfvf, int aura) } /* Free pointer to a pool/aura */ -static inline void otx2_aura_freeptr(struct otx2_nic *pfvf, - int aura, s64 buf) +static inline void otx2_aura_freeptr(void *dev, int aura, s64 buf) { - otx2_write128((u64)buf, (u64)aura | BIT_ULL(63), - otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_FREE0)); + struct otx2_nic *pfvf = dev; + void __iomem *addr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_FREE0); + + otx2_write128((u64)buf, (u64)aura | BIT_ULL(63), addr); } static inline int otx2_get_pool_idx(struct otx2_nic *pfvf, int type, int idx) @@ -678,6 +732,9 @@ void otx2_ctx_disable(struct mbox *mbox, int type, bool npa); int otx2_nix_config_bp(struct otx2_nic *pfvf, bool enable); void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq); void otx2_cleanup_tx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq); +int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura); +int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura); +s64 otx2_alloc_buffer(struct otx2_nic *pfvf, struct otx2_cq_queue *cq); /* RSS configuration APIs*/ int otx2_rss_init(struct otx2_nic *pfvf); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 997d1372703f..8b6a012d0a88 100644 --- 
a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -22,6 +22,7 @@ #include "otx2_txrx.h" #include "otx2_struct.h" #include "otx2_ptp.h" +#include "cn10k.h" #include #define DRV_NAME "rvu_nicpf" @@ -46,39 +47,6 @@ enum { static int otx2_config_hw_tx_tstamp(struct otx2_nic *pfvf, bool enable); static int otx2_config_hw_rx_tstamp(struct otx2_nic *pfvf, bool enable); -static int cn10k_lmtst_init(struct otx2_nic *pf) -{ - int size, num_lines; - u64 base; - - if (!test_bit(CN10K_LMTST, &pf->hw.cap_flag)) - return 0; - - base = pci_resource_start(pf->pdev, PCI_MBOX_BAR_NUM) + - (MBOX_SIZE * (pf->total_vfs + 1)); - - size = pci_resource_len(pf->pdev, PCI_MBOX_BAR_NUM) - - (MBOX_SIZE * (pf->total_vfs + 1)); - - pf->hw.lmt_base = ioremap(base, size); - - if (!pf->hw.lmt_base) { - dev_err(pf->dev, "Unable to map PF LMTST region\n"); - return -ENOMEM; - } - - /* FIXME: Get the num of LMTST lines from LMT table */ - pf->tot_lmt_lines = size / LMT_LINE_SIZE; - num_lines = (pf->tot_lmt_lines - NIX_LMTID_BASE) / - pf->hw.tx_queues; - /* Number of LMT lines per SQ queues */ - pf->nix_lmt_lines = num_lines > 32 ? 32 : num_lines; - - pf->nix_lmt_size = pf->nix_lmt_lines * LMT_LINE_SIZE; - - return 0; -} - static int otx2_change_mtu(struct net_device *netdev, int new_mtu) { bool if_up = netif_running(netdev); @@ -2401,7 +2369,7 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (err) goto err_detach_rsrc; - err = cn10k_lmtst_init(pf); + err = cn10k_pf_lmtst_init(pf); if (err) goto err_detach_rsrc; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h index 1e052d76a580..21b811c6ee0f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h @@ -94,6 +94,7 @@ #define NPA_LF_QINTX_INT_W1S(a) (NPA_LFBASE | 0x318 | (a) << 12) #define NPA_LF_QINTX_ENA_W1S(a) (NPA_LFBASE | 0x320 | (a) << 12) #define NPA_LF_QINTX_ENA_W1C(a) (NPA_LFBASE | 0x330 | (a) << 12) +#define NPA_LF_AURA_BATCH_FREE0 (NPA_LFBASE | 0x400) /* NIX LF registers */ #define NIX_LFBASE (BLKTYPE_NIX << RVU_FUNC_BLKADDR_SHIFT) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c index a7eb5ea0fb48..690bb3a9f8f7 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c @@ -17,6 +17,7 @@ #include "otx2_struct.h" #include "otx2_txrx.h" #include "otx2_ptp.h" +#include "cn10k.h" #define CQE_ADDR(CQ, idx) ((CQ)->cqe_base + ((CQ)->cqe_size * (idx))) @@ -199,7 +200,8 @@ static void otx2_free_rcv_seg(struct otx2_nic *pfvf, struct nix_cqe_rx_s *cqe, sg = (struct nix_rx_sg_s *)start; seg_addr = &sg->seg_addr; for (seg = 0; seg < sg->segs; seg++, seg_addr++) - otx2_aura_freeptr(pfvf, qidx, *seg_addr & ~0x07ULL); + pfvf->hw_ops->aura_freeptr(pfvf, qidx, + *seg_addr & ~0x07ULL); start += sizeof(*sg); } } @@ -304,7 +306,6 @@ static int otx2_rx_napi_handler(struct otx2_nic *pfvf, { struct nix_cqe_rx_s *cqe; int processed_cqe = 0; - s64 bufptr; while (likely(processed_cqe < budget)) { cqe = (struct nix_cqe_rx_s *)CQE_ADDR(cq, cq->cq_head); @@ -330,29 +331,24 @@ static int otx2_rx_napi_handler(struct otx2_nic *pfvf, if (unlikely(!cq->pool_ptrs)) return 0; + pfvf->hw_ops->refill_pool_ptrs(pfvf, cq); + + return processed_cqe; +} + +void otx2_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq) +{ + 
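/* Pre-CN10K refill path: buffers are returned to the aura one pointer at a time. */ +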
struct otx2_nic *pfvf = dev; + s64 bufptr; /* Refill pool with new buffers */ while (cq->pool_ptrs) { - bufptr = __otx2_alloc_rbuf(pfvf, cq->rbpool); - if (unlikely(bufptr <= 0)) { - struct refill_work *work; - struct delayed_work *dwork; - - work = &pfvf->refill_wrk[cq->cq_idx]; - dwork = &work->pool_refill_work; - /* Schedule a task if no other task is running */ - if (!cq->refill_task_sched) { - cq->refill_task_sched = true; - schedule_delayed_work(dwork, - msecs_to_jiffies(100)); - } + bufptr = otx2_alloc_buffer(pfvf, cq); + if (unlikely(bufptr <= 0)) break; - } otx2_aura_freeptr(pfvf, cq->cq_idx, bufptr + OTX2_HEAD_ROOM); cq->pool_ptrs--; } - - return processed_cqe; } static int otx2_tx_napi_handler(struct otx2_nic *pfvf, @@ -439,7 +435,8 @@ int otx2_napi_handler(struct napi_struct *napi, int budget) return workdone; } -static void otx2_sqe_flush(struct otx2_snd_queue *sq, int size) +void otx2_sqe_flush(void *dev, struct otx2_snd_queue *sq, + int size, int qidx) { u64 status; @@ -797,7 +794,7 @@ static void otx2_sq_append_tso(struct otx2_nic *pfvf, struct otx2_snd_queue *sq, sqe_hdr->sizem1 = (offset / 16) - 1; /* Flush SQE to HW */ - otx2_sqe_flush(sq, offset); + pfvf->hw_ops->sqe_flush(pfvf, sq, offset, qidx); } } @@ -916,7 +913,7 @@ bool otx2_sq_append_skb(struct net_device *netdev, struct otx2_snd_queue *sq, netdev_tx_sent_queue(txq, skb->len); /* Flush SQE to HW */ - otx2_sqe_flush(sq, offset); + pfvf->hw_ops->sqe_flush(pfvf, sq, offset, qidx); return true; } diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h index 73af15685657..d2b26b3357f3 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h @@ -114,6 +114,7 @@ struct otx2_cq_poll { struct otx2_pool { struct qmem *stack; struct qmem *fc_addr; + u64 *lmt_addr; u16 rbsize; }; @@ -156,4 +157,10 @@ static inline u64 otx2_iova_to_phys(void *iommu_domain, dma_addr_t dma_addr) int otx2_napi_handler(struct napi_struct *napi, int budget); bool otx2_sq_append_skb(struct net_device *netdev, struct otx2_snd_queue *sq, struct sk_buff *skb, u16 qidx); +void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, + int size, int qidx); +void otx2_sqe_flush(void *dev, struct otx2_snd_queue *sq, + int size, int qidx); +void otx2_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq); +void cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq); #endif /* OTX2_TXRX_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c index 9ed850b75d59..31e03253e612 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c @@ -7,6 +7,7 @@ #include "otx2_common.h" #include "otx2_reg.h" +#include "cn10k.h" #define DRV_NAME "rvu_nicvf" #define DRV_STRING "Marvell RVU NIC Virtual Function Driver" @@ -27,31 +28,6 @@ enum { RVU_VF_INT_VEC_MBOX = 0x0, }; -static int cn10k_lmtst_init(struct otx2_nic *vf) -{ - int size, num_lines; - - if (!test_bit(CN10K_LMTST, &vf->hw.cap_flag)) - return 0; - - size = pci_resource_len(vf->pdev, PCI_MBOX_BAR_NUM); - vf->hw.lmt_base = ioremap_wc(pci_resource_start(vf->pdev, - PCI_MBOX_BAR_NUM), - size); - if (!vf->hw.lmt_base) { - dev_err(vf->dev, "Unable to map VF LMTST region\n"); - return -ENOMEM; - } - - vf->tot_lmt_lines = size / LMT_LINE_SIZE; - /* LMTST lines per SQ */ - num_lines = (vf->tot_lmt_lines - NIX_LMTID_BASE) / - vf->hw.tx_queues; - 
vf->nix_lmt_lines = num_lines > 32 ? 32 : num_lines; - vf->nix_lmt_size = vf->nix_lmt_lines * LMT_LINE_SIZE; - return 0; -} - static void otx2vf_process_vfaf_mbox_msg(struct otx2_nic *vf, struct mbox_msghdr *msg) { @@ -585,7 +561,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (err) goto err_detach_rsrc; - err = cn10k_lmtst_init(vf); + err = cn10k_vf_lmtst_init(vf); if (err) goto err_detach_rsrc; diff --git a/include/linux/soc/marvell/octeontx2/asm.h b/include/linux/soc/marvell/octeontx2/asm.h index ae2279fe830a..28c04d918f0f 100644 --- a/include/linux/soc/marvell/octeontx2/asm.h +++ b/include/linux/soc/marvell/octeontx2/asm.h @@ -22,8 +22,16 @@ : [rs]"r" (ioaddr)); \ (result); \ }) +#define cn10k_lmt_flush(val, addr) \ +({ \ + __asm__ volatile(".cpu generic+lse\n" \ + "steor %x[rf],[%[rs]]" \ + : [rf]"+r"(val) \ + : [rs]"r"(addr)); \ +}) #else #define otx2_lmt_flush(ioaddr) ({ 0; }) +#define cn10k_lmt_flush(val, addr) ({ addr = val; }) #endif #endif /* __SOC_OTX2_ASM_H */
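A note on the LMTST mechanics used in this patch: the pointers to be freed are copied into an LMT line and flushed to hardware with a single steor, and bits [6:4] of the target address encode the number of 128-bit words minus one. The following is a minimal user-space sketch of the size/count_eot arithmetic from __cn10k_aura_freeptr() above; it is illustrative only (no LMTST is issued, and 0x400 simply stands in for the NPA_LF_AURA_BATCH_FREE0 offset):

#include <stdio.h>
#include <stdint.h>

/* Mirror the tar_addr[6:4] encoding computed before cn10k_lmt_flush(). */
static uint64_t lmtst_tar_addr(uint64_t reg_addr, uint64_t num_ptrs)
{
	/* count_eot: set when [127:64] of the last 128-bit word is valid */
	uint64_t count_eot = (num_ptrs % 2) ? 0ULL : 1ULL;
	uint64_t tar_addr = reg_addr;
	uint64_t size;

	if (num_ptrs > 2) {
		size = (sizeof(uint64_t) * num_ptrs) / 16; /* 128-bit words */
		if (!count_eot)
			size++;			/* round up the odd tail */
		tar_addr |= ((size - 1) & 0x7) << 4; /* LMTST size-1 field */
	}
	return tar_addr;
}

int main(void)
{
	for (uint64_t n = 2; n <= 8; n++)
		printf("num_ptrs=%2llu -> tar_addr=0x%llx\n",
		       (unsigned long long)n,
		       (unsigned long long)lmtst_tar_addr(0x400, n));
	return 0;
}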
From patchwork Thu Feb 11 01:46:25 2021
X-Patchwork-Submitter: Geetha sowjanya
X-Patchwork-Id: 382372
From: Geetha sowjanya
Subject: [net-next v5 08/14] octeontx2-af: cn10k: Add RPM MAC support
Date: Thu, 11 Feb 2021 07:16:25 +0530
Message-ID: <20210211014631.9578-9-gakula@marvell.com>
In-Reply-To: <20210211014631.9578-1-gakula@marvell.com>
References: <20210211014631.9578-1-gakula@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Hariprasad Kelam

OcteonTx2's next-generation platform, CN10K, has an RPM MAC whose serdes differs from that of the CGX MAC. Although the underlying hardware is different, the CSR interface is designed largely in line with CGX, with a few exceptions, so the same CGX driver is reused for the RPM MAC, with a separate set of APIs for RPM wherever necessary. This patch adds initial support for CN10K's RPM MAC: driver registration, communication with firmware, and so on. For firmware communication, RPM provides a different IRQ than CGX does. The CGX and RPM blocks also support different feature sets; currently a few features such as PTP, flow control and HiGig are not supported by RPM. This patch therefore adds a new mailbox message, "CGX_FEATURES_GET", to fetch the list of features supported by the underlying MAC. RPM also implements RX/TX stats differently: unlike CGX, the BAR offsets of the stat registers differ. This patch adds support to access them and to dump the values in debugfs.
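To make the sharing concrete, here is a small self-contained sketch (not the driver's code) of the ops-table pattern this patch introduces with struct mac_ops: the table is selected by PCI device ID, and its per-MAC constants (csr_offset and lmac_offset, taken from the patch) parameterize one common register-access path. Everything else in the sketch is illustrative.

#include <stdio.h>
#include <stdint.h>

#define PCI_DEVID_OCTEONTX2_CGX 0xA059
#define PCI_DEVID_CN10K_RPM     0xA060

struct mac_ops {
	const char *name;
	uint64_t csr_offset; /* fixed BAR offset added to stat/DMAC CSRs */
	uint8_t lmac_offset; /* per-LMAC stride: addr += lmac << lmac_offset */
};

static const struct mac_ops cgx_mac_ops = { "cgx", 0x0,    18 };
static const struct mac_ops rpm_mac_ops = { "rpm", 0x4e00, 20 };

/* Select the ops table the way cgx_probe() does, by PCI device ID. */
static const struct mac_ops *get_ops(uint16_t devid)
{
	return devid == PCI_DEVID_CN10K_RPM ? &rpm_mac_ops : &cgx_mac_ops;
}

int main(void)
{
	const struct mac_ops *ops = get_ops(PCI_DEVID_CN10K_RPM);
	uint64_t rx_stat0 = 0x070 + ops->csr_offset; /* CGXX_CMRX_RX_STAT0 */
	uint64_t lmac2_rx_stat0 = ((uint64_t)2 << ops->lmac_offset) + rx_stat0;

	printf("%s: LMAC2 RX_STAT0 at BAR offset 0x%llx\n",
	       ops->name, (unsigned long long)lmac2_rx_stat0);
	return 0;
}

cgx_write() and cgx_read() in the patch apply exactly this stride and offset before touching the MMIO base, which is what lets one body of code serve both MACs.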
Signed-off-by: Hariprasad Kelam Signed-off-by: Subbaraya Sundeep Signed-off-by: Geetha sowjanya Signed-off-by: Sunil Goutham --- MAINTAINERS | 1 + .../ethernet/marvell/octeontx2/af/Makefile | 2 +- .../net/ethernet/marvell/octeontx2/af/cgx.c | 271 ++++++++++++------ .../net/ethernet/marvell/octeontx2/af/cgx.h | 16 +- .../marvell/octeontx2/af/lmac_common.h | 96 +++++++ .../net/ethernet/marvell/octeontx2/af/mbox.h | 15 +- .../net/ethernet/marvell/octeontx2/af/rpm.c | 39 +++ .../net/ethernet/marvell/octeontx2/af/rpm.h | 23 ++ .../net/ethernet/marvell/octeontx2/af/rvu.h | 2 + .../ethernet/marvell/octeontx2/af/rvu_cgx.c | 60 +++- .../marvell/octeontx2/af/rvu_debugfs.c | 54 +++- .../ethernet/marvell/octeontx2/af/rvu_nix.c | 6 +- 12 files changed, 479 insertions(+), 106 deletions(-) create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rpm.c create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rpm.h diff --git a/MAINTAINERS b/MAINTAINERS index d1b0057a9797..0c39376f58d7 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -10710,6 +10710,7 @@ M: Sunil Goutham M: Linu Cherian M: Geetha sowjanya M: Jerin Jacob +M: hariprasad L: netdev@vger.kernel.org S: Supported F: Documentation/networking/device_drivers/ethernet/marvell/octeontx2.rst diff --git a/drivers/net/ethernet/marvell/octeontx2/af/Makefile b/drivers/net/ethernet/marvell/octeontx2/af/Makefile index eb535c98ca38..a6afbdeeaf30 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/Makefile +++ b/drivers/net/ethernet/marvell/octeontx2/af/Makefile @@ -10,4 +10,4 @@ obj-$(CONFIG_OCTEONTX2_AF) += octeontx2_af.o octeontx2_mbox-y := mbox.o rvu_trace.o octeontx2_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \ rvu_reg.o rvu_npc.o rvu_debugfs.o ptp.o rvu_npc_fs.o \ - rvu_cpt.o rvu_devlink.o + rvu_cpt.o rvu_devlink.o rpm.o diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c index 84a91234ba8e..acbea44e369b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c @@ -20,47 +20,11 @@ #include #include "cgx.h" +#include "rvu.h" +#include "lmac_common.h" -#define DRV_NAME "octeontx2-cgx" -#define DRV_STRING "Marvell OcteonTX2 CGX/MAC Driver" - -/** - * struct lmac - * @wq_cmd_cmplt: waitq to keep the process blocked until cmd completion - * @cmd_lock: Lock to serialize the command interface - * @resp: command response - * @link_info: link related information - * @event_cb: callback for linkchange events - * @event_cb_lock: lock for serializing callback with unregister - * @cmd_pend: flag set before new command is started - * flag cleared after command response is received - * @cgx: parent cgx port - * @lmac_id: lmac port id - * @name: lmac port name - */ -struct lmac { - wait_queue_head_t wq_cmd_cmplt; - struct mutex cmd_lock; - u64 resp; - struct cgx_link_user_info link_info; - struct cgx_event_cb event_cb; - spinlock_t event_cb_lock; - bool cmd_pend; - struct cgx *cgx; - u8 lmac_id; - char *name; -}; - -struct cgx { - void __iomem *reg_base; - struct pci_dev *pdev; - u8 cgx_id; - u8 lmac_count; - struct lmac *lmac_idmap[MAX_LMAC_PER_CGX]; - struct work_struct cgx_cmd_work; - struct workqueue_struct *cgx_cmd_workq; - struct list_head cgx_list; -}; +#define DRV_NAME "Marvell-CGX/RPM" +#define DRV_STRING "Marvell CGX/RPM Driver" static LIST_HEAD(cgx_list); @@ -76,22 +40,45 @@ static int cgx_fwi_link_change(struct cgx *cgx, int lmac_id, bool en); /* Supported 
devices */ static const struct pci_device_id cgx_id_table[] = { { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_CGX) }, + { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN10K_RPM) }, { 0, } /* end of table */ }; MODULE_DEVICE_TABLE(pci, cgx_id_table); -static void cgx_write(struct cgx *cgx, u64 lmac, u64 offset, u64 val) +static bool is_dev_rpm(void *cgxd) +{ + struct cgx *cgx = cgxd; + + return (cgx->pdev->device == PCI_DEVID_CN10K_RPM); +} + +bool is_lmac_valid(struct cgx *cgx, int lmac_id) +{ + return cgx && test_bit(lmac_id, &cgx->lmac_bmap); +} + +struct mac_ops *get_mac_ops(void *cgxd) +{ + if (!cgxd) + return cgxd; + + return ((struct cgx *)cgxd)->mac_ops; +} + +void cgx_write(struct cgx *cgx, u64 lmac, u64 offset, u64 val) { - writeq(val, cgx->reg_base + (lmac << 18) + offset); + writeq(val, cgx->reg_base + (lmac << cgx->mac_ops->lmac_offset) + + offset); } -static u64 cgx_read(struct cgx *cgx, u64 lmac, u64 offset) +u64 cgx_read(struct cgx *cgx, u64 lmac, u64 offset) { - return readq(cgx->reg_base + (lmac << 18) + offset); + return readq(cgx->reg_base + (lmac << cgx->mac_ops->lmac_offset) + + offset); } -static inline struct lmac *lmac_pdata(u8 lmac_id, struct cgx *cgx) +struct lmac *lmac_pdata(u8 lmac_id, struct cgx *cgx) { if (!cgx || lmac_id >= MAX_LMAC_PER_CGX) return NULL; @@ -185,8 +172,10 @@ static u64 mac2u64 (u8 *mac_addr) int cgx_lmac_addr_set(u8 cgx_id, u8 lmac_id, u8 *mac_addr) { struct cgx *cgx_dev = cgx_get_pdata(cgx_id); + struct mac_ops *mac_ops; u64 cfg; + mac_ops = cgx_dev->mac_ops; /* copy 6bytes from macaddr */ /* memcpy(&cfg, mac_addr, 6); */ @@ -205,8 +194,11 @@ int cgx_lmac_addr_set(u8 cgx_id, u8 lmac_id, u8 *mac_addr) u64 cgx_lmac_addr_get(u8 cgx_id, u8 lmac_id) { struct cgx *cgx_dev = cgx_get_pdata(cgx_id); + struct mac_ops *mac_ops; u64 cfg; + mac_ops = cgx_dev->mac_ops; + cfg = cgx_read(cgx_dev, 0, CGXX_CMRX_RX_DMAC_CAM0 + lmac_id * 0x8); return cfg & CGX_RX_DMAC_ADR_MASK; } @@ -215,7 +207,7 @@ int cgx_set_pkind(void *cgxd, u8 lmac_id, int pkind) { struct cgx *cgx = cgxd; - if (!cgx || lmac_id >= cgx->lmac_count) + if (!is_lmac_valid(cgx, lmac_id)) return -ENODEV; cgx_write(cgx, lmac_id, CGXX_CMRX_RX_ID_MAP, (pkind & 0x3F)); @@ -237,7 +229,7 @@ int cgx_lmac_internal_loopback(void *cgxd, int lmac_id, bool enable) u8 lmac_type; u64 cfg; - if (!cgx || lmac_id >= cgx->lmac_count) + if (!is_lmac_valid(cgx, lmac_id)) return -ENODEV; lmac_type = cgx_get_lmac_type(cgx, lmac_id); @@ -262,11 +254,13 @@ int cgx_lmac_internal_loopback(void *cgxd, int lmac_id, bool enable) void cgx_lmac_promisc_config(int cgx_id, int lmac_id, bool enable) { struct cgx *cgx = cgx_get_pdata(cgx_id); + struct mac_ops *mac_ops; u64 cfg = 0; if (!cgx) return; + mac_ops = cgx->mac_ops; if (enable) { /* Enable promiscuous mode on LMAC */ cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_RX_DMAC_CTL0); @@ -298,6 +292,9 @@ void cgx_lmac_enadis_rx_pause_fwding(void *cgxd, int lmac_id, bool enable) struct cgx *cgx = cgxd; u64 cfg; + if (is_dev_rpm(cgx)) + return; + if (!cgx) return; @@ -322,9 +319,11 @@ void cgx_lmac_enadis_rx_pause_fwding(void *cgxd, int lmac_id, bool enable) int cgx_get_rx_stats(void *cgxd, int lmac_id, int idx, u64 *rx_stat) { + struct mac_ops *mac_ops; struct cgx *cgx = cgxd; - if (!cgx || lmac_id >= cgx->lmac_count) + mac_ops = cgx->mac_ops; + if (!is_lmac_valid(cgx, lmac_id)) return -ENODEV; *rx_stat = cgx_read(cgx, lmac_id, CGXX_CMRX_RX_STAT0 + (idx * 8)); return 0; @@ -332,20 +331,27 @@ int cgx_get_rx_stats(void *cgxd, int lmac_id, int idx, u64 *rx_stat) int 
cgx_get_tx_stats(void *cgxd, int lmac_id, int idx, u64 *tx_stat) { + struct mac_ops *mac_ops; struct cgx *cgx = cgxd; - if (!cgx || lmac_id >= cgx->lmac_count) + mac_ops = cgx->mac_ops; + if (!is_lmac_valid(cgx, lmac_id)) return -ENODEV; *tx_stat = cgx_read(cgx, lmac_id, CGXX_CMRX_TX_STAT0 + (idx * 8)); return 0; } +u64 cgx_features_get(void *cgxd) +{ + return ((struct cgx *)cgxd)->hw_features; +} + int cgx_lmac_rx_tx_enable(void *cgxd, int lmac_id, bool enable) { struct cgx *cgx = cgxd; u64 cfg; - if (!cgx || lmac_id >= cgx->lmac_count) + if (!is_lmac_valid(cgx, lmac_id)) return -ENODEV; cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_CFG); @@ -362,7 +368,7 @@ int cgx_lmac_tx_enable(void *cgxd, int lmac_id, bool enable) struct cgx *cgx = cgxd; u64 cfg, last; - if (!cgx || lmac_id >= cgx->lmac_count) + if (!is_lmac_valid(cgx, lmac_id)) return -ENODEV; cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_CFG); @@ -383,7 +389,10 @@ int cgx_lmac_get_pause_frm(void *cgxd, int lmac_id, struct cgx *cgx = cgxd; u64 cfg; - if (!cgx || lmac_id >= cgx->lmac_count) + if (is_dev_rpm(cgx)) + return 0; + + if (!is_lmac_valid(cgx, lmac_id)) return -ENODEV; cfg = cgx_read(cgx, lmac_id, CGXX_SMUX_RX_FRM_CTL); @@ -400,7 +409,10 @@ int cgx_lmac_set_pause_frm(void *cgxd, int lmac_id, struct cgx *cgx = cgxd; u64 cfg; - if (!cgx || lmac_id >= cgx->lmac_count) + if (is_dev_rpm(cgx)) + return 0; + + if (!is_lmac_valid(cgx, lmac_id)) return -ENODEV; cfg = cgx_read(cgx, lmac_id, CGXX_SMUX_RX_FRM_CTL); @@ -428,7 +440,10 @@ static void cgx_lmac_pause_frm_config(struct cgx *cgx, int lmac_id, bool enable) { u64 cfg; - if (!cgx || lmac_id >= cgx->lmac_count) + if (is_dev_rpm(cgx)) + return; + + if (!is_lmac_valid(cgx, lmac_id)) return; if (enable) { /* Enable receive pause frames */ @@ -486,6 +501,9 @@ void cgx_lmac_ptp_config(void *cgxd, int lmac_id, bool enable) if (!cgx) return; + if (is_dev_rpm(cgx)) + return; + if (enable) { /* Enable inbound PTP timestamping */ cfg = cgx_read(cgx, lmac_id, CGXX_GMP_GMI_RXX_FRM_CTL); @@ -508,7 +526,7 @@ void cgx_lmac_ptp_config(void *cgxd, int lmac_id, bool enable) } /* CGX Firmware interface low level support */ -static int cgx_fwi_cmd_send(u64 req, u64 *resp, struct lmac *lmac) +int cgx_fwi_cmd_send(u64 req, u64 *resp, struct lmac *lmac) { struct cgx *cgx = lmac->cgx; struct device *dev; @@ -556,8 +574,7 @@ static int cgx_fwi_cmd_send(u64 req, u64 *resp, struct lmac *lmac) return err; } -static inline int cgx_fwi_cmd_generic(u64 req, u64 *resp, - struct cgx *cgx, int lmac_id) +int cgx_fwi_cmd_generic(u64 req, u64 *resp, struct cgx *cgx, int lmac_id) { struct lmac *lmac; int err; @@ -686,12 +703,16 @@ static inline bool cgx_event_is_linkevent(u64 event) static irqreturn_t cgx_fwi_event_handler(int irq, void *data) { + u64 event, offset, clear_bit; struct lmac *lmac = data; struct cgx *cgx; - u64 event; cgx = lmac->cgx; + /* Clear SW_INT for RPM and CMR_INT for CGX */ + offset = cgx->mac_ops->int_register; + clear_bit = cgx->mac_ops->int_ena_bit; + event = cgx_read(cgx, lmac->lmac_id, CGX_EVENT_REG); if (!FIELD_GET(EVTREG_ACK, event)) @@ -727,7 +748,7 @@ static irqreturn_t cgx_fwi_event_handler(int irq, void *data) * Ack the interrupt register as well. 
*/ cgx_write(lmac->cgx, lmac->lmac_id, CGX_EVENT_REG, 0); - cgx_write(lmac->cgx, lmac->lmac_id, CGXX_CMRX_INT, FW_CGX_INT); + cgx_write(lmac->cgx, lmac->lmac_id, offset, clear_bit); return IRQ_HANDLED; } @@ -771,14 +792,16 @@ int cgx_get_fwdata_base(u64 *base) { u64 req = 0, resp; struct cgx *cgx; + int first_lmac; int err; cgx = list_first_entry_or_null(&cgx_list, struct cgx, cgx_list); if (!cgx) return -ENXIO; + first_lmac = find_first_bit(&cgx->lmac_bmap, MAX_LMAC_PER_CGX); req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_FWD_BASE, req); - err = cgx_fwi_cmd_generic(req, &resp, cgx, 0); + err = cgx_fwi_cmd_generic(req, &resp, cgx, first_lmac); if (!err) *base = FIELD_GET(RESP_FWD_BASE, resp); @@ -800,10 +823,11 @@ static int cgx_fwi_link_change(struct cgx *cgx, int lmac_id, bool enable) static inline int cgx_fwi_read_version(u64 *resp, struct cgx *cgx) { + int first_lmac = find_first_bit(&cgx->lmac_bmap, MAX_LMAC_PER_CGX); u64 req = 0; req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_FW_VER, req); - return cgx_fwi_cmd_generic(req, resp, cgx, 0); + return cgx_fwi_cmd_generic(req, resp, cgx, first_lmac); } static int cgx_lmac_verify_fwi_version(struct cgx *cgx) @@ -836,8 +860,8 @@ static void cgx_lmac_linkup_work(struct work_struct *work) struct device *dev = &cgx->pdev->dev; int i, err; - /* Do Link up for all the lmacs */ - for (i = 0; i < cgx->lmac_count; i++) { + /* Do Link up for all the enabled lmacs */ + for_each_set_bit(i, &cgx->lmac_bmap, MAX_LMAC_PER_CGX) { err = cgx_fwi_link_change(cgx, i, true); if (err) dev_info(dev, "cgx port %d:%d Link up command failed\n", @@ -857,12 +881,67 @@ int cgx_lmac_linkup_start(void *cgxd) return 0; } +static int cgx_configure_interrupt(struct cgx *cgx, struct lmac *lmac, + int cnt, bool req_free) +{ + struct mac_ops *mac_ops = cgx->mac_ops; + u64 offset, ena_bit; + unsigned int irq; + int err; + + irq = pci_irq_vector(cgx->pdev, mac_ops->lmac_fwi + + cnt * mac_ops->irq_offset); + offset = mac_ops->int_set_reg; + ena_bit = mac_ops->int_ena_bit; + + if (req_free) { + free_irq(irq, lmac); + return 0; + } + + err = request_irq(irq, cgx_fwi_event_handler, 0, lmac->name, lmac); + if (err) + return err; + + /* Enable interrupt */ + cgx_write(cgx, lmac->lmac_id, offset, ena_bit); + return 0; +} + +int cgx_get_nr_lmacs(void *cgxd) +{ + struct cgx *cgx = cgxd; + + return cgx_read(cgx, 0, CGXX_CMRX_RX_LMACS) & 0x7ULL; +} + +u8 cgx_get_lmacid(void *cgxd, u8 lmac_index) +{ + struct cgx *cgx = cgxd; + + return cgx->lmac_idmap[lmac_index]->lmac_id; +} + +unsigned long cgx_get_lmac_bmap(void *cgxd) +{ + struct cgx *cgx = cgxd; + + return cgx->lmac_bmap; +} + static int cgx_lmac_init(struct cgx *cgx) { struct lmac *lmac; + u64 lmac_list; int i, err; - cgx->lmac_count = cgx_read(cgx, 0, CGXX_CMRX_RX_LMACS) & 0x7; + cgx->lmac_count = cgx->mac_ops->get_nr_lmacs(cgx); + /* lmac_list specifies which lmacs are enabled + * when bit n is set to 1, LMAC[n] is enabled + */ + if (cgx->mac_ops->non_contiguous_serdes_lane) + lmac_list = cgx_read(cgx, 0, CGXX_CMRX_RX_LMACS) & 0xFULL; + if (cgx->lmac_count > MAX_LMAC_PER_CGX) cgx->lmac_count = MAX_LMAC_PER_CGX; @@ -876,24 +955,25 @@ static int cgx_lmac_init(struct cgx *cgx) goto err_lmac_free; } sprintf(lmac->name, "cgx_fwi_%d_%d", cgx->cgx_id, i); - lmac->lmac_id = i; + if (cgx->mac_ops->non_contiguous_serdes_lane) { + lmac->lmac_id = __ffs64(lmac_list); + lmac_list &= ~BIT_ULL(lmac->lmac_id); + } else { + lmac->lmac_id = i; + } + lmac->cgx = cgx; init_waitqueue_head(&lmac->wq_cmd_cmplt); mutex_init(&lmac->cmd_lock); 
spin_lock_init(&lmac->event_cb_lock); - err = request_irq(pci_irq_vector(cgx->pdev, - CGX_LMAC_FWI + i * 9), - cgx_fwi_event_handler, 0, lmac->name, lmac); + err = cgx_configure_interrupt(cgx, lmac, lmac->lmac_id, false); if (err) goto err_irq; - /* Enable interrupt */ - cgx_write(cgx, lmac->lmac_id, CGXX_CMRX_INT_ENA_W1S, - FW_CGX_INT); - /* Add reference */ - cgx->lmac_idmap[i] = lmac; - cgx_lmac_pause_frm_config(cgx, i, true); + cgx->lmac_idmap[lmac->lmac_id] = lmac; + cgx_lmac_pause_frm_config(cgx, lmac->lmac_id, true); + set_bit(lmac->lmac_id, &cgx->lmac_bmap); } return cgx_lmac_verify_fwi_version(cgx); @@ -917,12 +997,12 @@ static int cgx_lmac_exit(struct cgx *cgx) } /* Free all lmac related resources */ - for (i = 0; i < cgx->lmac_count; i++) { - cgx_lmac_pause_frm_config(cgx, i, false); + for_each_set_bit(i, &cgx->lmac_bmap, MAX_LMAC_PER_CGX) { lmac = cgx->lmac_idmap[i]; if (!lmac) continue; - free_irq(pci_irq_vector(cgx->pdev, CGX_LMAC_FWI + i * 9), lmac); + cgx_lmac_pause_frm_config(cgx, lmac->lmac_id, false); + cgx_configure_interrupt(cgx, lmac, lmac->lmac_id, true); kfree(lmac->name); kfree(lmac); } @@ -930,6 +1010,27 @@ static int cgx_lmac_exit(struct cgx *cgx) return 0; } +static void cgx_populate_features(struct cgx *cgx) +{ + if (is_dev_rpm(cgx)) + cgx->hw_features = RVU_MAC_RPM; + else + cgx->hw_features = (RVU_LMAC_FEAT_FC | RVU_LMAC_FEAT_PTP); +} + +static struct mac_ops cgx_mac_ops = { + .name = "cgx", + .csr_offset = 0, + .lmac_offset = 18, + .int_register = CGXX_CMRX_INT, + .int_set_reg = CGXX_CMRX_INT_ENA_W1S, + .irq_offset = 9, + .int_ena_bit = FW_CGX_INT, + .lmac_fwi = CGX_LMAC_FWI, + .non_contiguous_serdes_lane = false, + .get_nr_lmacs = cgx_get_nr_lmacs, +}; + static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct device *dev = &pdev->dev; @@ -943,6 +1044,12 @@ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id) pci_set_drvdata(pdev, cgx); + /* Use mac_ops to get MAC specific features */ + if (pdev->device == PCI_DEVID_CN10K_RPM) + cgx->mac_ops = rpm_get_mac_ops(); + else + cgx->mac_ops = &cgx_mac_ops; + err = pci_enable_device(pdev); if (err) { dev_err(dev, "Failed to enable PCI device\n"); @@ -964,7 +1071,7 @@ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id) goto err_release_regions; } - nvec = CGX_NVEC; + nvec = pci_msix_vec_count(cgx->pdev); err = pci_alloc_irq_vectors(pdev, nvec, nvec, PCI_IRQ_MSIX); if (err < 0 || err != nvec) { dev_err(dev, "Request for %d msix vectors failed, err %d\n", @@ -988,6 +1095,8 @@ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id) cgx_link_usertable_init(); + cgx_populate_features(cgx); + err = cgx_lmac_init(cgx); if (err) goto err_release_lmac; @@ -1011,8 +1120,10 @@ static void cgx_remove(struct pci_dev *pdev) { struct cgx *cgx = pci_get_drvdata(pdev); - cgx_lmac_exit(cgx); - list_del(&cgx->cgx_list); + if (cgx) { + cgx_lmac_exit(cgx); + list_del(&cgx->cgx_list); + } pci_free_irq_vectors(pdev); pci_release_regions(pdev); pci_disable_device(pdev); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h index bcfc3e5f66bb..7c76fd09ca61 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h @@ -13,6 +13,7 @@ #include "mbox.h" #include "cgx_fw_if.h" +#include "rpm.h" /* PCI device IDs */ #define PCI_DEVID_OCTEONTX2_CGX 0xA059 @@ -40,18 +41,18 @@ #define FW_CGX_INT BIT_ULL(1) #define CGXX_CMRX_INT_ENA_W1S 0x058 #define 
CGXX_CMRX_RX_ID_MAP 0x060 -#define CGXX_CMRX_RX_STAT0 0x070 +#define CGXX_CMRX_RX_STAT0 (0x070 + mac_ops->csr_offset) #define CGXX_CMRX_RX_LMACS 0x128 -#define CGXX_CMRX_RX_DMAC_CTL0 0x1F8 +#define CGXX_CMRX_RX_DMAC_CTL0 (0x1F8 + mac_ops->csr_offset) #define CGX_DMAC_CTL0_CAM_ENABLE BIT_ULL(3) #define CGX_DMAC_CAM_ACCEPT BIT_ULL(3) #define CGX_DMAC_MCAST_MODE BIT_ULL(1) #define CGX_DMAC_BCAST_MODE BIT_ULL(0) -#define CGXX_CMRX_RX_DMAC_CAM0 0x200 +#define CGXX_CMRX_RX_DMAC_CAM0 (0x200 + mac_ops->csr_offset) #define CGX_DMAC_CAM_ADDR_ENABLE BIT_ULL(48) #define CGXX_CMRX_RX_DMAC_CAM1 0x400 #define CGX_RX_DMAC_ADR_MASK GENMASK_ULL(47, 0) -#define CGXX_CMRX_TX_STAT0 0x700 +#define CGXX_CMRX_TX_STAT0 (0x700 + mac_ops->csr_offset) #define CGXX_SCRATCH0_REG 0x1050 #define CGXX_SCRATCH1_REG 0x1058 #define CGX_CONST 0x2000 @@ -81,7 +82,6 @@ #define CGX_CMD_TIMEOUT 2200 /* msecs */ #define DEFAULT_PAUSE_TIME 0x7FF -#define CGX_NVEC 37 #define CGX_LMAC_FWI 0 enum cgx_nix_stat_type { @@ -147,5 +147,9 @@ int cgx_lmac_set_pause_frm(void *cgxd, int lmac_id, u8 tx_pause, u8 rx_pause); void cgx_lmac_ptp_config(void *cgxd, int lmac_id, bool enable); u8 cgx_lmac_get_p2x(int cgx_id, int lmac_id); - +u64 cgx_features_get(void *cgxd); +struct mac_ops *get_mac_ops(void *cgxd); +int cgx_get_nr_lmacs(void *cgxd); +u8 cgx_get_lmacid(void *cgxd, u8 lmac_index); +unsigned long cgx_get_lmac_bmap(void *cgxd); #endif /* CGX_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h new file mode 100644 index 000000000000..ac36e1527d09 --- /dev/null +++ b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h @@ -0,0 +1,96 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Marvell OcteonTx2 RPM driver + * + * Copyright (C) 2020 Marvell. + */ + +#ifndef LMAC_COMMON_H +#define LMAC_COMMON_H + +#include "rvu.h" +#include "cgx.h" +/** + * struct lmac + * @wq_cmd_cmplt: waitq to keep the process blocked until cmd completion + * @cmd_lock: Lock to serialize the command interface + * @resp: command response + * @link_info: link related information + * @event_cb: callback for linkchange events + * @event_cb_lock: lock for serializing callback with unregister + * @cmd_pend: flag set before new command is started + * flag cleared after command response is received + * @cgx: parent cgx port + * @lmac_id: lmac port id + * @name: lmac port name + */ +struct lmac { + wait_queue_head_t wq_cmd_cmplt; + /* Lock to serialize the command interface */ + struct mutex cmd_lock; + u64 resp; + struct cgx_link_user_info link_info; + struct cgx_event_cb event_cb; + /* lock for serializing callback with unregister */ + spinlock_t event_cb_lock; + bool cmd_pend; + struct cgx *cgx; + u8 lmac_id; + char *name; +}; + +/* CGX & RPM has different feature set + * update the structure fields with different one + */ +struct mac_ops { + char *name; + /* Features like RXSTAT, TXSTAT, DMAC FILTER csrs differs by fixed + * bar offset for example + * CGX DMAC_CTL0 0x1f8 + * RPM DMAC_CTL0 0x4ff8 + */ + u64 csr_offset; + /* For ATF to send events to kernel, there is no dedicated interrupt + * defined hence CGX uses OVERFLOW bit in CMR_INT. 
RPM block supports + * SW_INT so that ATF triggers this interrupt after processing of + * requested command + */ + u64 int_register; + u64 int_set_reg; + /* lmac offset is different is RPM */ + u8 lmac_offset; + u8 irq_offset; + u8 int_ena_bit; + u8 lmac_fwi; + bool non_contiguous_serdes_lane; + /* Incase of RPM get number of lmacs from RPMX_CMR_RX_LMACS[LMAC_EXIST] + * number of setbits in lmac_exist tells number of lmacs + */ + int (*get_nr_lmacs)(void *cgx); +}; + +struct cgx { + void __iomem *reg_base; + struct pci_dev *pdev; + u8 cgx_id; + u8 lmac_count; + struct lmac *lmac_idmap[MAX_LMAC_PER_CGX]; + struct work_struct cgx_cmd_work; + struct workqueue_struct *cgx_cmd_workq; + struct list_head cgx_list; + u64 hw_features; + struct mac_ops *mac_ops; + unsigned long lmac_bmap; /* bitmap of enabled lmacs */ +}; + +typedef struct cgx rpm_t; + +/* Function Declarations */ +void cgx_write(struct cgx *cgx, u64 lmac, u64 offset, u64 val); +u64 cgx_read(struct cgx *cgx, u64 lmac, u64 offset); +struct lmac *lmac_pdata(u8 lmac_id, struct cgx *cgx); +int cgx_fwi_cmd_send(u64 req, u64 *resp, struct lmac *lmac); +int cgx_fwi_cmd_generic(u64 req, u64 *resp, struct cgx *cgx, int lmac_id); +bool is_lmac_valid(struct cgx *cgx, int lmac_id); +struct mac_ops *rpm_get_mac_ops(void); + +#endif /* LMAC_COMMON_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index f2b37b3f18e6..333fa5e20e8d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -153,6 +153,8 @@ M(CGX_PTP_RX_ENABLE, 0x20C, cgx_ptp_rx_enable, msg_req, msg_rsp) \ M(CGX_PTP_RX_DISABLE, 0x20D, cgx_ptp_rx_disable, msg_req, msg_rsp) \ M(CGX_CFG_PAUSE_FRM, 0x20E, cgx_cfg_pause_frm, cgx_pause_frm_cfg, \ cgx_pause_frm_cfg) \ +M(CGX_FEATURES_GET, 0x20F, cgx_features_get, msg_req, \ + cgx_features_info_msg) \ /* NPA mbox IDs (range 0x400 - 0x5FF) */ \ M(NPA_LF_ALLOC, 0x400, npa_lf_alloc, \ npa_lf_alloc_req, npa_lf_alloc_rsp) \ @@ -360,7 +362,7 @@ struct get_hw_cap_rsp { struct cgx_stats_rsp { struct mbox_msghdr hdr; -#define CGX_RX_STATS_COUNT 13 +#define CGX_RX_STATS_COUNT 9 #define CGX_TX_STATS_COUNT 18 u64 rx_stats[CGX_RX_STATS_COUNT]; u64 tx_stats[CGX_TX_STATS_COUNT]; @@ -397,6 +399,17 @@ struct cgx_pause_frm_cfg { u8 tx_pause; }; +#define RVU_LMAC_FEAT_FC BIT_ULL(0) /* pause frames */ +#define RVU_LMAC_FEAT_PTP BIT_ULL(1) /* precison time protocol */ +#define RVU_MAC_VERSION BIT_ULL(2) +#define RVU_MAC_CGX 0 +#define RVU_MAC_RPM 1 + +struct cgx_features_info_msg { + struct mbox_msghdr hdr; + u64 lmac_features; +}; + /* NPA mbox message formats */ /* NPA mailbox error codes diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c new file mode 100644 index 000000000000..e97baeea4e19 --- /dev/null +++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c @@ -0,0 +1,39 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Marvell OcteonTx2 RPM driver + * + * Copyright (C) 2020 Marvell. 
+ * + */ + +#include "cgx.h" +#include "lmac_common.h" + +static struct mac_ops rpm_mac_ops = { + .name = "rpm", + .csr_offset = 0x4e00, + .lmac_offset = 20, + .int_register = RPMX_CMRX_SW_INT, + .int_set_reg = RPMX_CMRX_SW_INT_ENA_W1S, + .irq_offset = 1, + .int_ena_bit = BIT_ULL(0), + .lmac_fwi = RPM_LMAC_FWI, + .non_contiguous_serdes_lane = true, + .get_nr_lmacs = rpm_get_nr_lmacs, +}; + +struct mac_ops *rpm_get_mac_ops(void) +{ + return &rpm_mac_ops; +} + +static u64 rpm_read(rpm_t *rpm, u64 lmac, u64 offset) +{ + return cgx_read(rpm, lmac, offset); +} + +int rpm_get_nr_lmacs(void *rpmd) +{ + rpm_t *rpm = rpmd; + + return hweight8(rpm_read(rpm, 0, CGXX_CMRX_RX_LMACS) & 0xFULL); +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h new file mode 100644 index 000000000000..7f45c1c645bf --- /dev/null +++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Marvell OcteonTx2 RPM driver + * + * Copyright (C) 2020 Marvell. + * + */ + +#ifndef RPM_H +#define RPM_H + +/* PCI device IDs */ +#define PCI_DEVID_CN10K_RPM 0xA060 + +/* Registers */ +#define RPMX_CMRX_SW_INT 0x180 +#define RPMX_CMRX_SW_INT_W1S 0x188 +#define RPMX_CMRX_SW_INT_ENA_W1S 0x198 + +#define RPM_LMAC_FWI 0xa + +/* Function Declarations */ +int rpm_get_nr_lmacs(void *cgxd); +#endif /* RPM_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 381055a6b6a4..e9f6c0af11c8 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -631,6 +631,8 @@ void npc_enable_mcam_entry(struct rvu *rvu, struct npc_mcam *mcam, void npc_read_mcam_entry(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr, u16 src, struct mcam_entry *entry, u8 *intf, u8 *ena); +bool is_mac_feature_supported(struct rvu *rvu, int pf, int feature); + /* CPT APIs */ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int lf, int slot); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c index 6c6b411e78fd..8ec9b04da073 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c @@ -42,6 +42,20 @@ static struct _req_type __maybe_unused \ MBOX_UP_CGX_MESSAGES #undef M +bool is_mac_feature_supported(struct rvu *rvu, int pf, int feature) +{ + u8 cgx_id, lmac_id; + void *cgxd; + + if (!is_pf_cgxmapped(rvu, pf)) + return 0; + + rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id); + cgxd = rvu_cgx_pdata(cgx_id, rvu); + + return (cgx_features_get(cgxd) & feature); +} + /* Returns bitmap of mapped PFs */ static u16 cgxlmac_to_pfmap(struct rvu *rvu, u8 cgx_id, u8 lmac_id) { @@ -92,9 +106,10 @@ static int rvu_map_cgx_lmac_pf(struct rvu *rvu) { struct npc_pkind *pkind = &rvu->hw->pkind; int cgx_cnt_max = rvu->cgx_cnt_max; - int cgx, lmac_cnt, lmac; int pf = PF_CGXMAP_BASE; + unsigned long lmac_bmap; int size, free_pkind; + int cgx, lmac, iter; if (!cgx_cnt_max) return 0; @@ -125,14 +140,17 @@ static int rvu_map_cgx_lmac_pf(struct rvu *rvu) for (cgx = 0; cgx < cgx_cnt_max; cgx++) { if (!rvu_cgx_pdata(cgx, rvu)) continue; - lmac_cnt = cgx_get_lmac_cnt(rvu_cgx_pdata(cgx, rvu)); - for (lmac = 0; lmac < lmac_cnt; lmac++, pf++) { + lmac_bmap = cgx_get_lmac_bmap(rvu_cgx_pdata(cgx, rvu)); + for_each_set_bit(iter, &lmac_bmap, MAX_LMAC_PER_CGX) { + lmac = cgx_get_lmacid(rvu_cgx_pdata(cgx, rvu), + iter); 
rvu->pf2cgxlmac_map[pf] = cgxlmac_id_to_bmap(cgx, lmac); rvu->cgxlmac2pf_map[CGX_OFFSET(cgx) + lmac] = 1 << pf; free_pkind = rvu_alloc_rsrc(&pkind->rsrc); pkind->pfchan_map[free_pkind] = ((pf) & 0x3F) << 16; rvu_map_cgx_nix_block(rvu, pf, cgx, lmac); rvu->cgx_mapped_pfs++; + pf++; } } return 0; @@ -154,8 +172,10 @@ static int rvu_cgx_send_link_info(int cgx_id, int lmac_id, struct rvu *rvu) &qentry->link_event.link_uinfo); qentry->link_event.cgx_id = cgx_id; qentry->link_event.lmac_id = lmac_id; - if (err) + if (err) { + kfree(qentry); goto skip_add; + } list_add_tail(&qentry->evq_node, &rvu->cgx_evq_head); skip_add: spin_unlock_irqrestore(&rvu->cgx_evq_lock, flags); @@ -251,6 +271,7 @@ static void cgx_evhandler_task(struct work_struct *work) static int cgx_lmac_event_handler_init(struct rvu *rvu) { + unsigned long lmac_bmap; struct cgx_event_cb cb; int cgx, lmac, err; void *cgxd; @@ -271,7 +292,8 @@ static int cgx_lmac_event_handler_init(struct rvu *rvu) cgxd = rvu_cgx_pdata(cgx, rvu); if (!cgxd) continue; - for (lmac = 0; lmac < cgx_get_lmac_cnt(cgxd); lmac++) { + lmac_bmap = cgx_get_lmac_bmap(cgxd); + for_each_set_bit(lmac, &lmac_bmap, MAX_LMAC_PER_CGX) { err = cgx_lmac_evh_register(&cb, cgxd, lmac); if (err) dev_err(rvu->dev, @@ -349,6 +371,7 @@ int rvu_cgx_init(struct rvu *rvu) int rvu_cgx_exit(struct rvu *rvu) { + unsigned long lmac_bmap; int cgx, lmac; void *cgxd; @@ -356,7 +379,8 @@ int rvu_cgx_exit(struct rvu *rvu) cgxd = rvu_cgx_pdata(cgx, rvu); if (!cgxd) continue; - for (lmac = 0; lmac < cgx_get_lmac_cnt(cgxd); lmac++) + lmac_bmap = cgx_get_lmac_bmap(cgxd); + for_each_set_bit(lmac, &lmac_bmap, MAX_LMAC_PER_CGX) cgx_lmac_evh_unregister(cgxd, lmac); } @@ -538,6 +562,9 @@ static int rvu_cgx_ptp_rx_cfg(struct rvu *rvu, u16 pcifunc, bool enable) u8 cgx_id, lmac_id; void *cgxd; + if (!is_mac_feature_supported(rvu, pf, RVU_LMAC_FEAT_PTP)) + return 0; + /* This msg is expected only from PFs that are mapped to CGX LMACs, * if received from other PF/VF simply ACK, nothing to do. */ @@ -624,6 +651,24 @@ int rvu_mbox_handler_cgx_get_linkinfo(struct rvu *rvu, struct msg_req *req, return err; } +int rvu_mbox_handler_cgx_features_get(struct rvu *rvu, + struct msg_req *req, + struct cgx_features_info_msg *rsp) +{ + int pf = rvu_get_pf(req->hdr.pcifunc); + u8 cgx_idx, lmac; + void *cgxd; + + if (!is_pf_cgxmapped(rvu, pf)) + return 0; + + rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_idx, &lmac); + cgxd = rvu_cgx_pdata(cgx_idx, rvu); + rsp->lmac_features = cgx_features_get(cgxd); + + return 0; +} + static int rvu_cgx_config_intlbk(struct rvu *rvu, u16 pcifunc, bool en) { int pf = rvu_get_pf(pcifunc); @@ -659,6 +704,9 @@ int rvu_mbox_handler_cgx_cfg_pause_frm(struct rvu *rvu, int pf = rvu_get_pf(req->hdr.pcifunc); u8 cgx_id, lmac_id; + if (!is_mac_feature_supported(rvu, pf, RVU_LMAC_FEAT_FC)) + return 0; + /* This msg is expected only from PF/VFs that are mapped to CGX LMACs, * if received from other PF/VF simply ACK, nothing to do. 
*/ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c index 02775d49cf0f..96ae6f79568d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c @@ -19,6 +19,7 @@ #include "rvu_reg.h" #include "rvu.h" #include "cgx.h" +#include "lmac_common.h" #include "npc.h" #define DEBUGFS_DIR_NAME "octeontx2" @@ -234,6 +235,8 @@ static int rvu_dbg_rvu_pf_cgx_map_display(struct seq_file *filp, void *unused) { struct rvu *rvu = filp->private; struct pci_dev *pdev = NULL; + struct mac_ops *mac_ops; + int rvu_def_cgx_id = 0; char cgx[10], lmac[10]; struct rvu_pfvf *pfvf; int pf, domain, blkid; @@ -241,7 +244,9 @@ static int rvu_dbg_rvu_pf_cgx_map_display(struct seq_file *filp, void *unused) u16 pcifunc; domain = 2; - seq_puts(filp, "PCI dev\t\tRVU PF Func\tNIX block\tCGX\tLMAC\n"); + mac_ops = get_mac_ops(rvu_cgx_pdata(rvu_def_cgx_id, rvu)); + seq_printf(filp, "PCI dev\t\tRVU PF Func\tNIX block\t%s\tLMAC\n", + mac_ops->name); for (pf = 0; pf < rvu->hw->total_pfs; pf++) { if (!is_pf_cgxmapped(rvu, pf)) continue; @@ -262,7 +267,7 @@ static int rvu_dbg_rvu_pf_cgx_map_display(struct seq_file *filp, void *unused) rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id); - sprintf(cgx, "CGX%d", cgx_id); + sprintf(cgx, "%s%d", mac_ops->name, cgx_id); sprintf(lmac, "LMAC%d", lmac_id); seq_printf(filp, "%s\t0x%x\t\tNIX%d\t\t%s\t%s\n", dev_name(&pdev->dev), pcifunc, blkid, cgx, lmac); @@ -1601,6 +1606,7 @@ static void rvu_dbg_npa_init(struct rvu *rvu) static int cgx_print_stats(struct seq_file *s, int lmac_id) { struct cgx_link_user_info linfo; + struct mac_ops *mac_ops; void *cgxd = s->private; u64 ucast, mcast, bcast; int stat = 0, err = 0; @@ -1612,6 +1618,11 @@ static int cgx_print_stats(struct seq_file *s, int lmac_id) if (!rvu) return -ENODEV; + mac_ops = get_mac_ops(cgxd); + + if (!mac_ops) + return 0; + /* Link status */ seq_puts(s, "\n=======Link Status======\n\n"); err = cgx_get_link_info(cgxd, lmac_id, &linfo); @@ -1621,7 +1632,8 @@ static int cgx_print_stats(struct seq_file *s, int lmac_id) linfo.link_up ? 
"UP" : "DOWN", linfo.speed); /* Rx stats */ - seq_puts(s, "\n=======NIX RX_STATS(CGX port level)======\n\n"); + seq_printf(s, "\n=======NIX RX_STATS(%s port level)======\n\n", + mac_ops->name); ucast = PRINT_CGX_CUML_NIXRX_STATUS(RX_UCAST, "rx_ucast_frames"); if (err) return err; @@ -1643,7 +1655,8 @@ static int cgx_print_stats(struct seq_file *s, int lmac_id) return err; /* Tx stats */ - seq_puts(s, "\n=======NIX TX_STATS(CGX port level)======\n\n"); + seq_printf(s, "\n=======NIX TX_STATS(%s port level)======\n\n", + mac_ops->name); ucast = PRINT_CGX_CUML_NIXTX_STATUS(TX_UCAST, "tx_ucast_frames"); if (err) return err; @@ -1662,7 +1675,7 @@ static int cgx_print_stats(struct seq_file *s, int lmac_id) return err; /* Rx stats */ - seq_puts(s, "\n=======CGX RX_STATS======\n\n"); + seq_printf(s, "\n=======%s RX_STATS======\n\n", mac_ops->name); while (stat < CGX_RX_STATS_COUNT) { err = cgx_get_rx_stats(cgxd, lmac_id, stat, &rx_stat); if (err) @@ -1673,7 +1686,7 @@ static int cgx_print_stats(struct seq_file *s, int lmac_id) /* Tx stats */ stat = 0; - seq_puts(s, "\n=======CGX TX_STATS======\n\n"); + seq_printf(s, "\n=======%s TX_STATS======\n\n", mac_ops->name); while (stat < CGX_TX_STATS_COUNT) { err = cgx_get_tx_stats(cgxd, lmac_id, stat, &tx_stat); if (err) @@ -1709,6 +1722,9 @@ RVU_DEBUG_SEQ_FOPS(cgx_stat, cgx_stat_display, NULL); static void rvu_dbg_cgx_init(struct rvu *rvu) { + struct mac_ops *mac_ops; + unsigned long lmac_bmap; + int rvu_def_cgx_id = 0; int i, lmac_id; char dname[20]; void *cgx; @@ -1716,17 +1732,24 @@ static void rvu_dbg_cgx_init(struct rvu *rvu) if (!cgx_get_cgxcnt_max()) return; - rvu->rvu_dbg.cgx_root = debugfs_create_dir("cgx", rvu->rvu_dbg.root); + mac_ops = get_mac_ops(rvu_cgx_pdata(rvu_def_cgx_id, rvu)); + if (!mac_ops) + return; + + rvu->rvu_dbg.cgx_root = debugfs_create_dir(mac_ops->name, + rvu->rvu_dbg.root); for (i = 0; i < cgx_get_cgxcnt_max(); i++) { cgx = rvu_cgx_pdata(i, rvu); if (!cgx) continue; + lmac_bmap = cgx_get_lmac_bmap(cgx); /* cgx debugfs dir */ - sprintf(dname, "cgx%d", i); + sprintf(dname, "%s%d", mac_ops->name, i); rvu->rvu_dbg.cgx = debugfs_create_dir(dname, rvu->rvu_dbg.cgx_root); - for (lmac_id = 0; lmac_id < cgx_get_lmac_cnt(cgx); lmac_id++) { + + for_each_set_bit(lmac_id, &lmac_bmap, MAX_LMAC_PER_CGX) { /* lmac debugfs dir */ sprintf(dname, "lmac%d", lmac_id); rvu->rvu_dbg.lmac = @@ -2307,9 +2330,18 @@ void rvu_dbg_init(struct rvu *rvu) debugfs_create_file("rsrc_alloc", 0444, rvu->rvu_dbg.root, rvu, &rvu_dbg_rsrc_status_fops); - debugfs_create_file("rvu_pf_cgx_map", 0444, rvu->rvu_dbg.root, rvu, - &rvu_dbg_rvu_pf_cgx_map_fops); + if (!cgx_get_cgxcnt_max()) + goto create; + + if (is_rvu_otx2(rvu)) + debugfs_create_file("rvu_pf_cgx_map", 0444, rvu->rvu_dbg.root, + rvu, &rvu_dbg_rvu_pf_cgx_map_fops); + else + debugfs_create_file("rvu_pf_cgx_map", 0444, rvu->rvu_dbg.root, + rvu, &rvu_dbg_rvu_pf_cgx_map_fops); + +create: rvu_dbg_npa_init(rvu); rvu_dbg_nix_init(rvu, BLKADDR_NIX0); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index 46da18d611d4..0c9b1c37a8c6 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -3631,10 +3631,14 @@ static int rvu_nix_lf_ptp_tx_cfg(struct rvu *rvu, u16 pcifunc, bool enable) { struct rvu_hwinfo *hw = rvu->hw; struct rvu_block *block; - int blkaddr; + int blkaddr, pf; int nixlf; u64 cfg; + pf = rvu_get_pf(pcifunc); + if (!is_mac_feature_supported(rvu, pf, 
RVU_LMAC_FEAT_PTP)) + return 0; + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); if (blkaddr < 0) return NIX_AF_ERR_AF_LF_INVALID;

From patchwork Thu Feb 11 01:46:26 2021
X-Patchwork-Submitter: Geetha sowjanya
X-Patchwork-Id: 382371
From: Geetha sowjanya
Subject: [net-next v5 09/14] octeontx2-af: cn10k: Add support for programmable channels
Date: Thu, 11 Feb 2021 07:16:26 +0530
Message-ID: <20210211014631.9578-10-gakula@marvell.com>
In-Reply-To: <20210211014631.9578-1-gakula@marvell.com>
References: <20210211014631.9578-1-gakula@marvell.com>
X-Mailing-List: netdev@vger.kernel.org
From: Subbaraya Sundeep

NIX uses unique channel numbers to identify packet sources/sinks such as CGX, LBK and SDP. On CN9xxx silicon the channel numbers assigned to each block are hardwired. The fixed CN9xxx channel numbers are:

0x0 | a << 8 | b - LBK(0..3)_CH(0..63)
0x0 | a << 8 - Reserved
0x700 | a - SDP_CH(0..255)
0x800 | a << 8 | b << 4 | c - CGX(0..7)_LMAC(0..3)_CH(0..15)

Not all channels in this fixed enumerator (sized for the maximum number of blocks) are needed, since some chips have fewer blocks. On CN10K silicon the channel numbers must instead be programmed by software into each block, as a base channel number plus a range of channels. This patch calculates and assigns channel numbers so as to distribute the channel number range (0-4095) efficiently among all the blocks: the assignment is based on the number of blocks actually present, and it is contiguous, leaving no holes. The channel numbers left over after this math are used for the new CPT replay channels present in CN10K. Also, since channel numbers are no longer fixed, the transmit channel link number needed by AF consumers is calculated by the AF and returned in the nix_lf_alloc mailbox response. (A small illustrative sketch of this channel assignment follows the cgx.c hunk below.)

Signed-off-by: Subbaraya Sundeep Signed-off-by: Geetha sowjanya Signed-off-by: Sunil Goutham --- MAINTAINERS | 1 + .../ethernet/marvell/octeontx2/af/Makefile | 2 +- .../net/ethernet/marvell/octeontx2/af/cgx.c | 14 + .../net/ethernet/marvell/octeontx2/af/cgx.h | 2 + .../ethernet/marvell/octeontx2/af/common.h | 3 + .../net/ethernet/marvell/octeontx2/af/rpm.h | 6 + .../net/ethernet/marvell/octeontx2/af/rvu.c | 8 +- .../net/ethernet/marvell/octeontx2/af/rvu.h | 44 +++ .../ethernet/marvell/octeontx2/af/rvu_cn10k.c | 261 ++++++++++++++++++ .../ethernet/marvell/octeontx2/af/rvu_nix.c | 16 +- .../ethernet/marvell/octeontx2/af/rvu_npc.c | 4 +- .../ethernet/marvell/octeontx2/af/rvu_reg.h | 16 ++ 12 files changed, 360 insertions(+), 17 deletions(-) create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c diff --git a/MAINTAINERS b/MAINTAINERS index 0c39376f58d7..1a4f56d31074 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -10711,6 +10711,7 @@ M: Linu Cherian M: Geetha sowjanya M: Jerin Jacob M: hariprasad +M: Subbaraya Sundeep L: netdev@vger.kernel.org S: Supported F: Documentation/networking/device_drivers/ethernet/marvell/octeontx2.rst diff --git a/drivers/net/ethernet/marvell/octeontx2/af/Makefile b/drivers/net/ethernet/marvell/octeontx2/af/Makefile index a6afbdeeaf30..621cc5bf106a 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/Makefile +++ b/drivers/net/ethernet/marvell/octeontx2/af/Makefile @@ -10,4 +10,4 @@ obj-$(CONFIG_OCTEONTX2_AF) += octeontx2_af.o octeontx2_mbox-y := mbox.o rvu_trace.o octeontx2_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \ rvu_reg.o rvu_npc.o rvu_debugfs.o ptp.o rvu_npc_fs.o \ - rvu_cpt.o rvu_devlink.o rpm.o + rvu_cpt.o rvu_devlink.o rpm.o rvu_cn10k.o diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c index acbea44e369b..3419dd3279d9 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c @@ -122,6 +122,20 @@ void *cgx_get_pdata(int cgx_id) return NULL; } +void cgx_lmac_write(int cgx_id, int lmac_id, u64 offset, u64 val) +{ + struct cgx *cgx_dev = cgx_get_pdata(cgx_id); + + cgx_write(cgx_dev, lmac_id, offset, val); +} + +u64 cgx_lmac_read(int cgx_id, int lmac_id, u64 offset) +{ + struct cgx *cgx_dev = cgx_get_pdata(cgx_id); + + return cgx_read(cgx_dev, lmac_id, offset); +} +
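The sketch referenced in the commit message: a self-contained model of how contiguous channel bases could be derived in the order LBK, SDP, CGX, CPT, with CPT forced to start at 0x800 so its MSB is always set. The link/channel counts below are plausible example values only; the real driver reads them per chip from NIX_AF_CONST.

#include <stdio.h>
#include <stdint.h>

/* Illustrative only: the driver derives these from NIX_AF_CONST. */
#define LBK_LINKS 4
#define LBK_CHANS 64
#define SDP_LINKS 1
#define SDP_CHANS 256		/* SDP_CHANNELS in the patch */
#define CGX_LINKS (8 * 4)	/* cgx blocks * lmacs per cgx */
#define CGX_CHANS 16

int main(void)
{
	uint16_t lbk_base = 0;
	uint16_t sdp_base = lbk_base + LBK_LINKS * LBK_CHANS;
	uint16_t cgx_base = sdp_base + SDP_LINKS * SDP_CHANS;
	uint16_t cpt_base = cgx_base + CGX_LINKS * CGX_CHANS;

	/* CPT replay channels start at 2048 so their MSB is always set;
	 * the patch rejects layouts where earlier blocks spill past that.
	 */
	if (cpt_base <= 0x800)
		cpt_base = 0x800;
	else
		fprintf(stderr, "CPT channels don't fit in 2048-4095\n");

	printf("LBK base %u, SDP base %u, CGX base %u, CPT base %u\n",
	       lbk_base, sdp_base, cgx_base, cpt_base);
	return 0;
}

int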
cgx_get_cgxid(void *cgxd) { struct cgx *cgx = cgxd; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h index 7c76fd09ca61..7589721a82d7 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h @@ -152,4 +152,6 @@ struct mac_ops *get_mac_ops(void *cgxd); int cgx_get_nr_lmacs(void *cgxd); u8 cgx_get_lmacid(void *cgxd, u8 lmac_index); unsigned long cgx_get_lmac_bmap(void *cgxd); +void cgx_lmac_write(int cgx_id, int lmac_id, u64 offset, u64 val); +u64 cgx_lmac_read(int cgx_id, int lmac_id, u64 offset); #endif /* CGX_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h index 17f6f42f4453..0100e7044619 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/common.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h @@ -191,6 +191,9 @@ enum nix_scheduler { #define NIX_LINK_LBK(a) (12 + (a)) #define NIX_CHAN_CGX_LMAC_CHX(a, b, c) (0x800 + 0x100 * (a) + 0x10 * (b) + (c)) #define NIX_CHAN_LBK_CHX(a, b) (0 + 0x100 * (a) + (b)) +#define NIX_CHAN_SDP_CH_START (0x700ull) + +#define SDP_CHANNELS 256 /* NIX LSO format indices. * As of now TSO is the only one using, so statically assigning indices. diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h index 7f45c1c645bf..1779e8b689e8 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h @@ -8,6 +8,8 @@ #ifndef RPM_H #define RPM_H +#include + /* PCI device IDs */ #define PCI_DEVID_CN10K_RPM 0xA060 @@ -15,6 +17,10 @@ #define RPMX_CMRX_SW_INT 0x180 #define RPMX_CMRX_SW_INT_W1S 0x188 #define RPMX_CMRX_SW_INT_ENA_W1S 0x198 +#define RPMX_CMRX_LINK_CFG 0x1070 + +#define RPMX_CMRX_LINK_RANGE_MASK GENMASK_ULL(19, 16) +#define RPMX_CMRX_LINK_BASE_MASK GENMASK_ULL(11, 0) #define RPM_LMAC_FWI 0xa diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index 1729a2ad093e..54844084dd1b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -1006,6 +1006,10 @@ static int rvu_setup_hw_resources(struct rvu *rvu) rvu_scan_block(rvu, block); } + err = rvu_set_channels_base(rvu); + if (err) + goto msix_err; + err = rvu_npc_init(rvu); if (err) goto npc_err; @@ -1025,6 +1029,8 @@ static int rvu_setup_hw_resources(struct rvu *rvu) if (err) goto nix_err; + rvu_program_channels(rvu); + return 0; nix_err: @@ -2721,8 +2727,6 @@ static void rvu_enable_afvf_intr(struct rvu *rvu) rvupf_write64(rvu, RVU_PF_VFME_INT_ENA_W1SX(1), INTR_MASK(vfs - 64)); } -#define PCI_DEVID_OCTEONTX2_LBK 0xA061 - int rvu_get_num_lbk_chans(void) { struct pci_dev *pdev; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index e9f6c0af11c8..bedccffa3f7f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -19,9 +19,11 @@ #include "common.h" #include "mbox.h" #include "npc.h" +#include "rvu_reg.h" /* PCI device IDs */ #define PCI_DEVID_OCTEONTX2_RVU_AF 0xA065 +#define PCI_DEVID_OCTEONTX2_LBK 0xA061 /* Subsystem Device ID */ #define PCI_SUBSYS_DEVID_96XX 0xB200 @@ -305,6 +307,7 @@ struct hw_cap { bool nix_tx_link_bp; /* Can link backpressure TL queues ? 
*/ bool nix_rx_multicast; /* Rx packet replication support */ bool per_pf_mbox_regs; /* PF mbox specified in per PF registers ? */ + bool programmable_chans; /* Channels programmable ? */ }; struct rvu_hwinfo { @@ -313,9 +316,14 @@ struct rvu_hwinfo { u16 max_vfs_per_pf; /* Max VFs that can be attached to a PF */ u8 cgx; u8 lmac_per_cgx; + u16 cgx_chan_base; /* CGX base channel number */ + u16 lbk_chan_base; /* LBK base channel number */ + u16 sdp_chan_base; /* SDP base channel number */ + u16 cpt_chan_base; /* CPT base channel number */ u8 cgx_links; u8 lbk_links; u8 sdp_links; + u8 cpt_links; /* Number of CPT links */ u8 npc_kpus; /* No of parser units */ u8 npc_pkinds; /* No of port kinds */ u8 npc_intfs; /* No of interfaces */ @@ -495,6 +503,38 @@ static inline bool is_rvu_otx2(struct rvu *rvu) midr == PCI_REVISION_ID_95XXMM); } +static inline u16 rvu_nix_chan_cgx(struct rvu *rvu, u8 cgxid, + u8 lmacid, u8 chan) +{ + u64 nix_const = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_CONST); + u16 cgx_chans = nix_const & 0xFFULL; + struct rvu_hwinfo *hw = rvu->hw; + + if (!hw->cap.programmable_chans) + return NIX_CHAN_CGX_LMAC_CHX(cgxid, lmacid, chan); + + return rvu->hw->cgx_chan_base + + (cgxid * hw->lmac_per_cgx + lmacid) * cgx_chans + chan; +} + +static inline u16 rvu_nix_chan_lbk(struct rvu *rvu, u8 lbkid, + u8 chan) +{ + u64 nix_const = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_CONST); + u16 lbk_chans = (nix_const >> 16) & 0xFFULL; + struct rvu_hwinfo *hw = rvu->hw; + + if (!hw->cap.programmable_chans) + return NIX_CHAN_LBK_CHX(lbkid, chan); + + return rvu->hw->lbk_chan_base + lbkid * lbk_chans + chan; +} + +static inline u16 rvu_nix_chan_cpt(struct rvu *rvu, u8 chan) +{ + return rvu->hw->cpt_chan_base + chan; +} + /* Function Prototypes * RVU */ @@ -636,6 +676,10 @@ bool is_mac_feature_supported(struct rvu *rvu, int pf, int feature); /* CPT APIs */ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int lf, int slot); +/* CN10K RVU */ +int rvu_set_channels_base(struct rvu *rvu); +void rvu_program_channels(struct rvu *rvu); + #ifdef CONFIG_DEBUG_FS void rvu_dbg_init(struct rvu *rvu); void rvu_dbg_exit(struct rvu *rvu); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c new file mode 100644 index 000000000000..7d9e71c6965f --- /dev/null +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c @@ -0,0 +1,261 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Marvell RPM CN10K driver + * + * Copyright (C) 2020 Marvell. + */ + +#include +#include +#include "rvu.h" +#include "cgx.h" +#include "rvu_reg.h" + +int rvu_set_channels_base(struct rvu *rvu) +{ + struct rvu_hwinfo *hw = rvu->hw; + u16 cpt_chan_base; + u64 nix_const; + int blkaddr; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, 0); + if (blkaddr < 0) + return blkaddr; + + nix_const = rvu_read64(rvu, blkaddr, NIX_AF_CONST); + + hw->cgx = (nix_const >> 12) & 0xFULL; + hw->lmac_per_cgx = (nix_const >> 8) & 0xFULL; + hw->cgx_links = hw->cgx * hw->lmac_per_cgx; + hw->lbk_links = (nix_const >> 24) & 0xFULL; + hw->cpt_links = (nix_const >> 44) & 0xFULL; + hw->sdp_links = 1; + + hw->cgx_chan_base = NIX_CHAN_CGX_LMAC_CHX(0, 0, 0); + hw->lbk_chan_base = NIX_CHAN_LBK_CHX(0, 0); + hw->sdp_chan_base = NIX_CHAN_SDP_CH_START; + + /* No Programmable channels */ + if (!(nix_const & BIT_ULL(60))) + return 0; + + hw->cap.programmable_chans = true; + + /* If programmable channels are present then configure + * channels such that all channel numbers are contiguous + * leaving no holes. 
This way the new CPT channels can be + * accommodated. The order of channel numbers assigned is + * LBK, SDP, CGX and CPT. + */ + hw->sdp_chan_base = hw->lbk_chan_base + hw->lbk_links * + ((nix_const >> 16) & 0xFFULL); + hw->cgx_chan_base = hw->sdp_chan_base + hw->sdp_links * SDP_CHANNELS; + + cpt_chan_base = hw->cgx_chan_base + hw->cgx_links * + (nix_const & 0xFFULL); + + /* Out of 4096 channels start CPT from 2048 so + * that MSB for CPT channels is always set + */ + if (cpt_chan_base <= 0x800) { + hw->cpt_chan_base = 0x800; + } else { + dev_err(rvu->dev, + "CPT channels could not fit in the range 2048-4095\n"); + return -EINVAL; + } + + return 0; +} + +#define LBK_CONNECT_NIXX(a) (0x0 + (a)) + +static void __rvu_lbk_set_chans(struct rvu *rvu, void __iomem *base, + u64 offset, int lbkid, u16 chans) +{ + struct rvu_hwinfo *hw = rvu->hw; + u64 cfg; + + cfg = readq(base + offset); + cfg &= ~(LBK_LINK_CFG_RANGE_MASK | + LBK_LINK_CFG_ID_MASK | LBK_LINK_CFG_BASE_MASK); + cfg |= FIELD_PREP(LBK_LINK_CFG_RANGE_MASK, ilog2(chans)); + cfg |= FIELD_PREP(LBK_LINK_CFG_ID_MASK, lbkid); + cfg |= FIELD_PREP(LBK_LINK_CFG_BASE_MASK, hw->lbk_chan_base); + + writeq(cfg, base + offset); +} + +static void rvu_lbk_set_channels(struct rvu *rvu) +{ + struct pci_dev *pdev = NULL; + void __iomem *base; + u64 lbk_const; + u8 src, dst; + u16 chans; + + /* To loopback packets between multiple NIX blocks + * multiple LBK blocks are needed. With two NIX blocks, + * four LBK blocks are needed and each LBK block + * source and destination are as follows: + * LBK0 - source NIX0 and destination NIX0 + * LBK1 - source NIX0 and destination NIX1 + * LBK2 - source NIX1 and destination NIX0 + * LBK3 - source NIX1 and destination NIX1 + * As per the HRM channel numbers should be programmed as: + * P2X and X2P of LBK0 as same + * P2X and X2P of LBK3 as same + * P2X of LBK1 and X2P of LBK2 as same + * P2X of LBK2 and X2P of LBK1 as same + */ + while (true) { + pdev = pci_get_device(PCI_VENDOR_ID_CAVIUM, + PCI_DEVID_OCTEONTX2_LBK, pdev); + if (!pdev) + return; + + base = pci_ioremap_bar(pdev, 0); + if (!base) + goto err_put; + + lbk_const = readq(base + LBK_CONST); + chans = FIELD_GET(LBK_CONST_CHANS, lbk_const); + dst = FIELD_GET(LBK_CONST_DST, lbk_const); + src = FIELD_GET(LBK_CONST_SRC, lbk_const); + + if (src == dst) { + if (src == LBK_CONNECT_NIXX(0)) { /* LBK0 */ + __rvu_lbk_set_chans(rvu, base, LBK_LINK_CFG_X2P, + 0, chans); + __rvu_lbk_set_chans(rvu, base, LBK_LINK_CFG_P2X, + 0, chans); + } else if (src == LBK_CONNECT_NIXX(1)) { /* LBK3 */ + __rvu_lbk_set_chans(rvu, base, LBK_LINK_CFG_X2P, + 1, chans); + __rvu_lbk_set_chans(rvu, base, LBK_LINK_CFG_P2X, + 1, chans); + } + } else { + if (src == LBK_CONNECT_NIXX(0)) { /* LBK1 */ + __rvu_lbk_set_chans(rvu, base, LBK_LINK_CFG_X2P, + 0, chans); + __rvu_lbk_set_chans(rvu, base, LBK_LINK_CFG_P2X, + 1, chans); + } else if (src == LBK_CONNECT_NIXX(1)) { /* LBK2 */ + __rvu_lbk_set_chans(rvu, base, LBK_LINK_CFG_X2P, + 1, chans); + __rvu_lbk_set_chans(rvu, base, LBK_LINK_CFG_P2X, + 0, chans); + } + } + iounmap(base); + } +err_put: + pci_dev_put(pdev); +} + +static void __rvu_nix_set_channels(struct rvu *rvu, int blkaddr) +{ + u64 nix_const = rvu_read64(rvu, blkaddr, NIX_AF_CONST); + u16 cgx_chans, lbk_chans, sdp_chans, cpt_chans; + struct rvu_hwinfo *hw = rvu->hw; + int link, nix_link = 0; + u16 start; + u64 cfg; + + cgx_chans = nix_const & 0xFFULL; + lbk_chans = (nix_const >> 16) & 0xFFULL; + sdp_chans = SDP_CHANNELS; + cpt_chans = (nix_const >> 32) & 0xFFFULL; + + start = 
hw->cgx_chan_base; + for (link = 0; link < hw->cgx_links; link++, nix_link++) { + cfg = rvu_read64(rvu, blkaddr, NIX_AF_LINKX_CFG(nix_link)); + cfg &= ~(NIX_AF_LINKX_BASE_MASK | NIX_AF_LINKX_RANGE_MASK); + cfg |= FIELD_PREP(NIX_AF_LINKX_RANGE_MASK, ilog2(cgx_chans)); + cfg |= FIELD_PREP(NIX_AF_LINKX_BASE_MASK, start); + rvu_write64(rvu, blkaddr, NIX_AF_LINKX_CFG(nix_link), cfg); + start += cgx_chans; + } + + start = hw->lbk_chan_base; + for (link = 0; link < hw->lbk_links; link++, nix_link++) { + cfg = rvu_read64(rvu, blkaddr, NIX_AF_LINKX_CFG(nix_link)); + cfg &= ~(NIX_AF_LINKX_BASE_MASK | NIX_AF_LINKX_RANGE_MASK); + cfg |= FIELD_PREP(NIX_AF_LINKX_RANGE_MASK, ilog2(lbk_chans)); + cfg |= FIELD_PREP(NIX_AF_LINKX_BASE_MASK, start); + rvu_write64(rvu, blkaddr, NIX_AF_LINKX_CFG(nix_link), cfg); + start += lbk_chans; + } + + start = hw->sdp_chan_base; + for (link = 0; link < hw->sdp_links; link++, nix_link++) { + cfg = rvu_read64(rvu, blkaddr, NIX_AF_LINKX_CFG(nix_link)); + cfg &= ~(NIX_AF_LINKX_BASE_MASK | NIX_AF_LINKX_RANGE_MASK); + cfg |= FIELD_PREP(NIX_AF_LINKX_RANGE_MASK, ilog2(sdp_chans)); + cfg |= FIELD_PREP(NIX_AF_LINKX_BASE_MASK, start); + rvu_write64(rvu, blkaddr, NIX_AF_LINKX_CFG(nix_link), cfg); + start += sdp_chans; + } + + start = hw->cpt_chan_base; + for (link = 0; link < hw->cpt_links; link++, nix_link++) { + cfg = rvu_read64(rvu, blkaddr, NIX_AF_LINKX_CFG(nix_link)); + cfg &= ~(NIX_AF_LINKX_BASE_MASK | NIX_AF_LINKX_RANGE_MASK); + cfg |= FIELD_PREP(NIX_AF_LINKX_RANGE_MASK, ilog2(cpt_chans)); + cfg |= FIELD_PREP(NIX_AF_LINKX_BASE_MASK, start); + rvu_write64(rvu, blkaddr, NIX_AF_LINKX_CFG(nix_link), cfg); + start += cpt_chans; + } +} + +static void rvu_nix_set_channels(struct rvu *rvu) +{ + int blkaddr = 0; + + blkaddr = rvu_get_next_nix_blkaddr(rvu, blkaddr); + while (blkaddr) { + __rvu_nix_set_channels(rvu, blkaddr); + blkaddr = rvu_get_next_nix_blkaddr(rvu, blkaddr); + } +} + +static void __rvu_rpm_set_channels(int cgxid, int lmacid, u16 base) +{ + u64 cfg; + + cfg = cgx_lmac_read(cgxid, lmacid, RPMX_CMRX_LINK_CFG); + cfg &= ~(RPMX_CMRX_LINK_BASE_MASK | RPMX_CMRX_LINK_RANGE_MASK); + + /* There is no read-only constant register from which to read + * the number of channels per LMAC; it is always 16. 
+ */ + cfg |= FIELD_PREP(RPMX_CMRX_LINK_RANGE_MASK, ilog2(16)); + cfg |= FIELD_PREP(RPMX_CMRX_LINK_BASE_MASK, base); + cgx_lmac_write(cgxid, lmacid, RPMX_CMRX_LINK_CFG, cfg); +} + +static void rvu_rpm_set_channels(struct rvu *rvu) +{ + struct rvu_hwinfo *hw = rvu->hw; + u16 base = hw->cgx_chan_base; + int cgx, lmac; + + for (cgx = 0; cgx < rvu->cgx_cnt_max; cgx++) { + for (lmac = 0; lmac < hw->lmac_per_cgx; lmac++) { + __rvu_rpm_set_channels(cgx, lmac, base); + base += 16; + } + } +} + +void rvu_program_channels(struct rvu *rvu) +{ + struct rvu_hwinfo *hw = rvu->hw; + + if (!hw->cap.programmable_chans) + return; + + rvu_nix_set_channels(rvu); + rvu_lbk_set_channels(rvu); + rvu_rpm_set_channels(rvu); +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index 0c9b1c37a8c6..cf9e88c6b89e 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -233,7 +233,7 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf) "PF_Func 0x%x: Invalid pkind\n", pcifunc); return -EINVAL; } - pfvf->rx_chan_base = NIX_CHAN_CGX_LMAC_CHX(cgx_id, lmac_id, 0); + pfvf->rx_chan_base = rvu_nix_chan_cgx(rvu, cgx_id, lmac_id, 0); pfvf->tx_chan_base = pfvf->rx_chan_base; pfvf->rx_chan_cnt = 1; pfvf->tx_chan_cnt = 1; @@ -262,10 +262,10 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf) * loopback channels. Therefore if an odd number of AF VFs are * enabled then the last VF remains with no pair. */ - pfvf->rx_chan_base = NIX_CHAN_LBK_CHX(lbkid, vf); + pfvf->rx_chan_base = rvu_nix_chan_lbk(rvu, lbkid, vf); pfvf->tx_chan_base = vf & 0x1 ? - NIX_CHAN_LBK_CHX(lbkid, vf - 1) : - NIX_CHAN_LBK_CHX(lbkid, vf + 1); + rvu_nix_chan_lbk(rvu, lbkid, vf - 1) : + rvu_nix_chan_lbk(rvu, lbkid, vf + 1); pfvf->rx_chan_cnt = 1; pfvf->tx_chan_cnt = 1; rvu_npc_install_promisc_entry(rvu, pcifunc, nixlf, @@ -3389,14 +3389,6 @@ static int rvu_nix_block_init(struct rvu *rvu, struct nix_hw *nix_hw) if (err) return err; - /* Set num of links of each type */ - cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST); - hw->cgx = (cfg >> 12) & 0xF; - hw->lmac_per_cgx = (cfg >> 8) & 0xF; - hw->cgx_links = hw->cgx * hw->lmac_per_cgx; - hw->lbk_links = (cfg >> 24) & 0xF; - hw->sdp_links = 1; - /* Initialize admin queue */ err = nix_aq_init(rvu, block); if (err) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c index 5cf9b7a907ae..04bb0803a5c5 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c @@ -102,9 +102,9 @@ int npc_mcam_verify_channel(struct rvu *rvu, u16 pcifunc, u8 intf, u16 channel) return -EINVAL; } else { rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id); - base = NIX_CHAN_CGX_LMAC_CHX(cgx_id, lmac_id, 0x0); + base = rvu_nix_chan_cgx(rvu, cgx_id, lmac_id, 0x0); /* CGX mapped functions have a maximum of 16 channels */ - end = NIX_CHAN_CGX_LMAC_CHX(cgx_id, lmac_id, 0xF); + end = rvu_nix_chan_cgx(rvu, cgx_id, lmac_id, 0xF); } if (channel < base || channel > end) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h index 78395c740c39..2460d71c4f87 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h @@ -406,12 +406,16 @@ #define NIX_AF_RX_NPC_MIRROR_RCV (0x4720) #define 
NIX_AF_RX_NPC_MIRROR_DROP (0x4730) #define NIX_AF_RX_ACTIVE_CYCLES_PCX(a) (0x4800 | (a) << 16) +#define NIX_AF_LINKX_CFG(a) (0x4010 | (a) << 17) #define NIX_PRIV_AF_INT_CFG (0x8000000) #define NIX_PRIV_LFX_CFG (0x8000010) #define NIX_PRIV_LFX_INT_CFG (0x8000020) #define NIX_AF_RVU_LF_CFG_DEBUG (0x8000030) +#define NIX_AF_LINKX_BASE_MASK GENMASK_ULL(11, 0) +#define NIX_AF_LINKX_RANGE_MASK GENMASK_ULL(19, 16) + /* SSO */ #define SSO_AF_CONST (0x1000) #define SSO_AF_CONST1 (0x1008) @@ -644,4 +648,16 @@ (0x00F00 | (a) << 5 | (b) << 4) #define NDC_AF_BANKX_HIT_PC(a) (0x01000 | (a) << 3) #define NDC_AF_BANKX_MISS_PC(a) (0x01100 | (a) << 3) + +/* LBK */ +#define LBK_CONST (0x10ull) +#define LBK_LINK_CFG_P2X (0x400ull) +#define LBK_LINK_CFG_X2P (0x408ull) +#define LBK_CONST_CHANS GENMASK_ULL(47, 32) +#define LBK_CONST_DST GENMASK_ULL(31, 28) +#define LBK_CONST_SRC GENMASK_ULL(27, 24) +#define LBK_LINK_CFG_RANGE_MASK GENMASK_ULL(19, 16) +#define LBK_LINK_CFG_ID_MASK GENMASK_ULL(11, 6) +#define LBK_LINK_CFG_BASE_MASK GENMASK_ULL(5, 0) + #endif /* RVU_REG_H */

From patchwork Thu Feb 11 01:46:27 2021 X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 381609 From: Geetha sowjanya Subject: [net-next v5 10/14] octeontx2-af: cn10K: Add MTU configuration Date: Thu, 11 Feb 2021 07:16:27 +0530 Message-ID: <20210211014631.9578-11-gakula@marvell.com> In-Reply-To: <20210211014631.9578-1-gakula@marvell.com>

From: Hariprasad Kelam OcteonTx3 CN10K silicon supports a bigger MTU than the 9216-byte MTU supported by OcteonTx2 silicon variants. The loopback interface supports up to 64K and RPM LMAC interfaces support up to 16K. This patch does the necessary configuration and adds support for PF/VF drivers to retrieve the max packet size supported via mbox. This patch also configures the tx link credits by considering the supported fifo size and max packet length for OcteonTx3 silicon. This patch also removes the platform-specific name from the driver name. Signed-off-by: Hariprasad Kelam Signed-off-by: Geetha sowjanya Signed-off-by: Sunil Goutham --- .../ethernet/marvell/octeontx2/af/Makefile | 8 +- .../net/ethernet/marvell/octeontx2/af/cgx.c | 10 +++ .../net/ethernet/marvell/octeontx2/af/cgx.h | 1 + .../ethernet/marvell/octeontx2/af/common.h | 2 + .../marvell/octeontx2/af/lmac_common.h | 1 + .../net/ethernet/marvell/octeontx2/af/mbox.h | 9 ++- .../net/ethernet/marvell/octeontx2/af/rvu.c | 29 +++++++- .../net/ethernet/marvell/octeontx2/af/rvu.h | 2 + .../ethernet/marvell/octeontx2/af/rvu_cgx.c | 13 ++++ .../ethernet/marvell/octeontx2/af/rvu_nix.c | 74 +++++++++++++++++-- .../ethernet/marvell/octeontx2/af/rvu_reg.h | 1 + 11 files changed, 138 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/Makefile b/drivers/net/ethernet/marvell/octeontx2/af/Makefile index 621cc5bf106a..1a3455620b38 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/Makefile +++ b/drivers/net/ethernet/marvell/octeontx2/af/Makefile @@ -4,10 +4,10 @@ # ccflags-y += -I$(src) -obj-$(CONFIG_OCTEONTX2_MBOX) += octeontx2_mbox.o -obj-$(CONFIG_OCTEONTX2_AF) += octeontx2_af.o +obj-$(CONFIG_OCTEONTX2_MBOX) += rvu_mbox.o +obj-$(CONFIG_OCTEONTX2_AF) += rvu_af.o -octeontx2_mbox-y := mbox.o rvu_trace.o -octeontx2_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \ +rvu_mbox-y := mbox.o rvu_trace.o +rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \ rvu_reg.o rvu_npc.o rvu_debugfs.o ptp.o rvu_npc_fs.o \ rvu_cpt.o rvu_devlink.o rpm.o rvu_cn10k.o diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c index 3419dd3279d9..5e7c7dd46ad7 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c @@ -895,6 +895,14 @@ int cgx_lmac_linkup_start(void *cgxd) return 0; } +static void cgx_lmac_get_fifolen(struct cgx *cgx) +{ + u64 cfg; + + cfg = 
cgx_read(cgx, 0, CGX_CONST); + cgx->mac_ops->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg); +} + static int cgx_configure_interrupt(struct cgx *cgx, struct lmac *lmac, int cnt, bool req_free) { @@ -949,6 +957,8 @@ static int cgx_lmac_init(struct cgx *cgx) u64 lmac_list; int i, err; + cgx_lmac_get_fifolen(cgx); + cgx->lmac_count = cgx->mac_ops->get_nr_lmacs(cgx); /* lmac_list specifies which lmacs are enabled * when bit n is set to 1, LMAC[n] is enabled diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h index 7589721a82d7..f2c77e05c5ab 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h @@ -56,6 +56,7 @@ #define CGXX_SCRATCH0_REG 0x1050 #define CGXX_SCRATCH1_REG 0x1058 #define CGX_CONST 0x2000 +#define CGX_CONST_RXFIFO_SIZE GENMASK_ULL(23, 0) #define CGXX_SPUX_CONTROL1 0x10000 #define CGXX_SPUX_CONTROL1_LBK BIT_ULL(14) #define CGXX_GMP_PCS_MRX_CTL 0x30000 diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h index 0100e7044619..e66109367487 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/common.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h @@ -155,6 +155,8 @@ enum nix_scheduler { #define NIC_HW_MIN_FRS 40 #define NIC_HW_MAX_FRS 9212 #define SDP_HW_MAX_FRS 65535 +#define CN10K_LMAC_LINK_MAX_FRS 16380 /* 16k - FCS */ +#define CN10K_LBK_LINK_MAX_FRS 65535 /* 64k */ /* NIX RX action operation*/ #define NIX_RX_ACTIONOP_DROP (0x0ull) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h index ac36e1527d09..9c5a9c58b2d7 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h @@ -61,6 +61,7 @@ struct mac_ops { u8 irq_offset; u8 int_ena_bit; u8 lmac_fwi; + u32 fifo_len; bool non_contiguous_serdes_lane; /* Incase of RPM get number of lmacs from RPMX_CMR_RX_LMACS[LMAC_EXIST] * number of setbits in lmac_exist tells number of lmacs diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index 333fa5e20e8d..fe36a0768582 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -244,7 +244,8 @@ M(NIX_BP_ENABLE, 0x8016, nix_bp_enable, nix_bp_cfg_req, \ M(NIX_BP_DISABLE, 0x8017, nix_bp_disable, nix_bp_cfg_req, msg_rsp) \ M(NIX_GET_MAC_ADDR, 0x8018, nix_get_mac_addr, msg_req, nix_get_mac_addr_rsp) \ M(NIX_CN10K_AQ_ENQ, 0x8019, nix_cn10k_aq_enq, nix_cn10k_aq_enq_req, \ - nix_cn10k_aq_enq_rsp) + nix_cn10k_aq_enq_rsp) \ +M(NIX_GET_HW_INFO, 0x801a, nix_get_hw_info, msg_req, nix_hw_info) /* Messages initiated by AF (range 0xC00 - 0xDFF) */ #define MBOX_UP_CGX_MESSAGES \ @@ -861,6 +862,12 @@ struct nix_bp_cfg_rsp { u8 chan_cnt; /* Number of channel for which bpids are assigned */ }; +struct nix_hw_info { + struct mbox_msghdr hdr; + u16 max_mtu; + u16 min_mtu; +}; + /* NPC mbox message structs */ #define NPC_MCAM_ENTRY_INVALID 0xFFFF diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index 54844084dd1b..d9a1a71c7ccc 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -22,7 +22,7 @@ #include "rvu_trace.h" -#define DRV_NAME "octeontx2-af" +#define DRV_NAME "rvu_af" #define DRV_STRING "Marvell OcteonTX2 RVU Admin Function 
Driver" static int rvu_get_hwvf(struct rvu *rvu, int pcifunc); @@ -855,6 +855,31 @@ static int rvu_setup_cpt_hw_resource(struct rvu *rvu, int blkaddr) return rvu_alloc_bitmap(&block->lf); } +static void rvu_get_lbk_bufsize(struct rvu *rvu) +{ + struct pci_dev *pdev = NULL; + void __iomem *base; + u64 lbk_const; + + pdev = pci_get_device(PCI_VENDOR_ID_CAVIUM, + PCI_DEVID_OCTEONTX2_LBK, pdev); + if (!pdev) + return; + + base = pci_ioremap_bar(pdev, 0); + if (!base) + goto err_put; + + lbk_const = readq(base + LBK_CONST); + + /* cache fifo size */ + rvu->hw->lbk_bufsize = FIELD_GET(LBK_CONST_BUF_SIZE, lbk_const); + + iounmap(base); +err_put: + pci_dev_put(pdev); +} + static int rvu_setup_hw_resources(struct rvu *rvu) { struct rvu_hwinfo *hw = rvu->hw; @@ -1025,6 +1050,8 @@ static int rvu_setup_hw_resources(struct rvu *rvu) if (err) goto npa_err; + rvu_get_lbk_bufsize(rvu); + err = rvu_nix_init(rvu); if (err) goto nix_err; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index bedccffa3f7f..651407ed1e41 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -329,6 +329,7 @@ struct rvu_hwinfo { u8 npc_intfs; /* No of interfaces */ u8 npc_kpu_entries; /* No of KPU entries */ u16 npc_counters; /* No of match stats counters */ + u32 lbk_bufsize; /* FIFO size supported by LBK */ bool npc_ext_set; /* Extended register set */ struct hw_cap cap; @@ -672,6 +673,7 @@ void npc_read_mcam_entry(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr, u16 src, struct mcam_entry *entry, u8 *intf, u8 *ena); bool is_mac_feature_supported(struct rvu *rvu, int pf, int feature); +u32 rvu_cgx_get_fifolen(struct rvu *rvu); /* CPT APIs */ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int lf, int slot); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c index 8ec9b04da073..5e514f27a22d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c @@ -14,6 +14,7 @@ #include "rvu.h" #include "cgx.h" +#include "lmac_common.h" #include "rvu_reg.h" #include "rvu_trace.h" @@ -669,6 +670,18 @@ int rvu_mbox_handler_cgx_features_get(struct rvu *rvu, return 0; } +u32 rvu_cgx_get_fifolen(struct rvu *rvu) +{ + struct mac_ops *mac_ops; + int rvu_def_cgx_id = 0; + u32 fifo_len; + + mac_ops = get_mac_ops(rvu_cgx_pdata(rvu_def_cgx_id, rvu)); + fifo_len = mac_ops ? 
mac_ops->fifo_len : 0; + + return fifo_len; +} + static int rvu_cgx_config_intlbk(struct rvu *rvu, u16 pcifunc, bool en) { int pf = rvu_get_pf(pcifunc); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index cf9e88c6b89e..ee27dddeff59 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -2543,6 +2543,43 @@ static int nix_af_mark_format_setup(struct rvu *rvu, struct nix_hw *nix_hw, return 0; } +static void rvu_get_lbk_link_max_frs(struct rvu *rvu, u16 *max_mtu) +{ + /* CN10K supports LBK FIFO size 72 KB */ + if (rvu->hw->lbk_bufsize == 0x12000) + *max_mtu = CN10K_LBK_LINK_MAX_FRS; + else + *max_mtu = NIC_HW_MAX_FRS; +} + +static void rvu_get_lmac_link_max_frs(struct rvu *rvu, u16 *max_mtu) +{ + /* RPM supports FIFO len 128 KB */ + if (rvu_cgx_get_fifolen(rvu) == 0x20000) + *max_mtu = CN10K_LMAC_LINK_MAX_FRS; + else + *max_mtu = NIC_HW_MAX_FRS; +} + +int rvu_mbox_handler_nix_get_hw_info(struct rvu *rvu, struct msg_req *req, + struct nix_hw_info *rsp) +{ + u16 pcifunc = req->hdr.pcifunc; + int blkaddr; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + if (blkaddr < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + if (is_afvf(pcifunc)) + rvu_get_lbk_link_max_frs(rvu, &rsp->max_mtu); + else + rvu_get_lmac_link_max_frs(rvu, &rsp->max_mtu); + + rsp->min_mtu = NIC_HW_MIN_FRS; + return 0; +} + int rvu_mbox_handler_nix_stats_rst(struct rvu *rvu, struct msg_req *req, struct msg_rsp *rsp) { @@ -3107,6 +3144,7 @@ int rvu_mbox_handler_nix_set_hw_frs(struct rvu *rvu, struct nix_frs_cfg *req, u64 cfg, lmac_fifo_len; struct nix_hw *nix_hw; u8 cgx = 0, lmac = 0; + u16 max_mtu; blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); if (blkaddr < 0) @@ -3116,7 +3154,12 @@ int rvu_mbox_handler_nix_set_hw_frs(struct rvu *rvu, struct nix_frs_cfg *req, if (!nix_hw) return -EINVAL; - if (!req->sdp_link && req->maxlen > NIC_HW_MAX_FRS) + if (is_afvf(pcifunc)) + rvu_get_lbk_link_max_frs(rvu, &max_mtu); + else + rvu_get_lmac_link_max_frs(rvu, &max_mtu); + + if (!req->sdp_link && req->maxlen > max_mtu) return NIX_AF_ERR_FRS_INVALID; if (req->update_minlen && req->minlen < NIC_HW_MIN_FRS) @@ -3176,7 +3219,8 @@ int rvu_mbox_handler_nix_set_hw_frs(struct rvu *rvu, struct nix_frs_cfg *req, /* Update transmit credits for CGX links */ lmac_fifo_len = - CGX_FIFO_LEN / cgx_get_lmac_cnt(rvu_cgx_pdata(cgx, rvu)); + rvu_cgx_get_fifolen(rvu) / + cgx_get_lmac_cnt(rvu_cgx_pdata(cgx, rvu)); cfg = rvu_read64(rvu, blkaddr, NIX_AF_TX_LINKX_NORM_CREDIT(link)); cfg &= ~(0xFFFFFULL << 12); cfg |= ((lmac_fifo_len - req->maxlen) / 16) << 12; @@ -3216,23 +3260,40 @@ int rvu_mbox_handler_nix_set_rx_cfg(struct rvu *rvu, struct nix_rx_cfg *req, return 0; } +static u64 rvu_get_lbk_link_credits(struct rvu *rvu, u16 lbk_max_frs) +{ + /* CN10k supports 72KB FIFO size and max packet size of 64k */ + if (rvu->hw->lbk_bufsize == 0x12000) + return (rvu->hw->lbk_bufsize - lbk_max_frs) / 16; + + return 1600; /* 16 * max LBK datarate = 16 * 100Gbps */ +} + static void nix_link_config(struct rvu *rvu, int blkaddr) { struct rvu_hwinfo *hw = rvu->hw; int cgx, lmac_cnt, slink, link; + u16 lbk_max_frs, lmac_max_frs; u64 tx_credits; + rvu_get_lbk_link_max_frs(rvu, &lbk_max_frs); + rvu_get_lmac_link_max_frs(rvu, &lmac_max_frs); + /* Set default min/max packet lengths allowed on NIX Rx links. 
* * With HW reset minlen value of 60 bytes, HW will treat ARP pkts * as undersize and report them to SW as error pkts, hence * setting it to 40 bytes. */ - for (link = 0; link < (hw->cgx_links + hw->lbk_links); link++) { + for (link = 0; link < hw->cgx_links; link++) { rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link), - NIC_HW_MAX_FRS << 16 | NIC_HW_MIN_FRS); + ((u64)lmac_max_frs << 16) | NIC_HW_MIN_FRS); } + for (link = hw->cgx_links; link < (hw->cgx_links + hw->lbk_links); link++) { + rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link), + ((u64)lbk_max_frs << 16) | NIC_HW_MIN_FRS); + } if (hw->sdp_links) { link = hw->cgx_links + hw->lbk_links; rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link), @@ -3244,7 +3305,8 @@ static void nix_link_config(struct rvu *rvu, int blkaddr) */ for (cgx = 0; cgx < hw->cgx; cgx++) { lmac_cnt = cgx_get_lmac_cnt(rvu_cgx_pdata(cgx, rvu)); - tx_credits = ((CGX_FIFO_LEN / lmac_cnt) - NIC_HW_MAX_FRS) / 16; + tx_credits = ((rvu_cgx_get_fifolen(rvu) / lmac_cnt) - + lmac_max_frs) / 16; /* Enable credits and set credit pkt count to max allowed */ tx_credits = (tx_credits << 12) | (0x1FF << 2) | BIT_ULL(1); slink = cgx * hw->lmac_per_cgx; @@ -3258,7 +3320,7 @@ static void nix_link_config(struct rvu *rvu, int blkaddr) /* Set Tx credits for LBK link */ slink = hw->cgx_links; for (link = slink; link < (slink + hw->lbk_links); link++) { - tx_credits = 1000; /* 10 * max LBK datarate = 10 * 100Gbps */ + tx_credits = rvu_get_lbk_link_credits(rvu, lbk_max_frs); /* Enable credits and set credit pkt count to max allowed */ tx_credits = (tx_credits << 12) | (0x1FF << 2) | BIT_ULL(1); rvu_write64(rvu, blkaddr, diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h index 2460d71c4f87..3e401fd8ac63 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h @@ -656,6 +656,7 @@ #define LBK_CONST_CHANS GENMASK_ULL(47, 32) #define LBK_CONST_DST GENMASK_ULL(31, 28) #define LBK_CONST_SRC GENMASK_ULL(27, 24) +#define LBK_CONST_BUF_SIZE GENMASK_ULL(23, 0) #define LBK_LINK_CFG_RANGE_MASK GENMASK_ULL(19, 16) #define LBK_LINK_CFG_ID_MASK GENMASK_ULL(11, 6) #define LBK_LINK_CFG_BASE_MASK GENMASK_ULL(5, 0)

From patchwork Thu Feb 11 01:46:28 2021 X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 382370 From: Geetha sowjanya Subject: [net-next v5 11/14] octeontx2-pf: cn10k: Get max mtu supported from admin function Date: Thu, 11 Feb 2021 07:16:28 +0530 Message-ID: <20210211014631.9578-12-gakula@marvell.com> In-Reply-To: <20210211014631.9578-1-gakula@marvell.com>

From: Hariprasad Kelam CN10K supports a max MTU of 16K on LMAC links and 64K on LBK links, while Octeontx2 silicon supports a 9K MTU on both links. Get the supported maximum from the nix_get_hw_info mbox message in netdev probe. This patch also calculates the receive buffer size required based on the MTU set. 
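As a rough worked example of the buffer sizing this patch introduces (a sketch only: the helper name rbuf_size_for_mtu is hypothetical, the skb_shared_info size assumes a typical 64-bit build, and the real logic is otx2_get_rbuf_size() in otx2_pf.c below):

	/* Sketch of the receive buffer sizing, mirroring otx2_get_rbuf_size().
	 * NIX spreads one received frame across up to six segments, and
	 * software reserves headroom plus skb_shared_info space in each.
	 */
	static int rbuf_size_for_mtu(int mtu)
	{
		/* e.g. mtu = 1500: 1500 + 22 (OTX2_ETH_HLEN) +
		 * 8 (OTX2_HW_TIMESTAMP_LEN) = 1530 bytes
		 */
		int frame_size = mtu + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN;
		/* plus 6 * (128 headroom + ~320 skb_shared_info) = 4218 bytes */
		int total_size = frame_size + (OTX2_HEAD_ROOM +
				 OTX2_DATA_ALIGN(sizeof(struct skb_shared_info))) * 6;

		/* 4218 / 6 = 703, rounded up to one 2048-byte buffer */
		return ALIGN(total_size / 6, 2048);
	}
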
Signed-off-by: Hariprasad Kelam Signed-off-by: Subbaraya Sundeep Signed-off-by: Geetha sowjanya Signed-off-by: Sunil Goutham --- .../ethernet/marvell/octeontx2/nic/cn10k.c | 2 +- .../marvell/octeontx2/nic/otx2_common.c | 45 +++++++++++++++++-- .../marvell/octeontx2/nic/otx2_common.h | 2 +- .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 35 +++++++++++++-- .../marvell/octeontx2/nic/otx2_txrx.c | 28 +++++++++--- .../marvell/octeontx2/nic/otx2_txrx.h | 1 - .../ethernet/marvell/octeontx2/nic/otx2_vf.c | 2 +- 7 files changed, 97 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c index 70548d12ff42..da82f9fd177a 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c @@ -108,7 +108,7 @@ int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura) /* Only one SMQ is allocated, map all SQ's to that SMQ */ aq->sq.smq = pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][0]; /* FIXME: set based on NIX_AF_DWRR_RPM_MTU*/ - aq->sq.smq_rr_weight = OTX2_MAX_MTU; + aq->sq.smq_rr_weight = pfvf->netdev->mtu; aq->sq.default_chan = pfvf->hw.tx_chan_base; aq->sq.sqe_stype = NIX_STYPE_STF; /* Cache SQB */ aq->sq.sqb_aura = sqb_aura; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index 49351a2d7629..3a72e8588d38 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -217,7 +217,6 @@ int otx2_hw_set_mtu(struct otx2_nic *pfvf, int mtu) return -ENOMEM; } - pfvf->max_frs = mtu + OTX2_ETH_HLEN; req->maxlen = pfvf->max_frs; err = otx2_sync_mbox_msg(&pfvf->mbox); @@ -591,8 +590,8 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl) /* Set topology e.t.c configuration */ if (lvl == NIX_TXSCH_LVL_SMQ) { req->reg[0] = NIX_AF_SMQX_CFG(schq); - req->regval[0] = ((OTX2_MAX_MTU + OTX2_ETH_HLEN) << 8) | - OTX2_MIN_MTU; + req->regval[0] = ((pfvf->netdev->max_mtu + OTX2_ETH_HLEN) << 8) + | OTX2_MIN_MTU; req->regval[0] |= (0x20ULL << 51) | (0x80ULL << 39) | (0x2ULL << 36); @@ -1618,6 +1617,46 @@ void otx2_set_cints_affinity(struct otx2_nic *pfvf) } } +u16 otx2_get_max_mtu(struct otx2_nic *pfvf) +{ + struct nix_hw_info *rsp; + struct msg_req *req; + u16 max_mtu; + int rc; + + mutex_lock(&pfvf->mbox.lock); + + req = otx2_mbox_alloc_msg_nix_get_hw_info(&pfvf->mbox); + if (!req) { + rc = -ENOMEM; + goto out; + } + + rc = otx2_sync_mbox_msg(&pfvf->mbox); + if (!rc) { + rsp = (struct nix_hw_info *) + otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); + + /* HW counts VLAN insertion bytes (8 for double tag) + * irrespective of whether SQE is requesting to insert VLAN + * in the packet or not. 
Hence these 8 bytes have to be + * discounted from max packet size otherwise HW will throw + * SMQ errors + */ + max_mtu = rsp->max_mtu - 8 - OTX2_ETH_HLEN; + } + +out: + mutex_unlock(&pfvf->mbox.lock); + if (rc) { + dev_warn(pfvf->dev, + "Failed to get MTU from hardware setting default value(1500)\n"); + max_mtu = 1500; + } + return max_mtu; +} +EXPORT_SYMBOL(otx2_get_max_mtu); + #define M(_name, _id, _fn_name, _req_type, _rsp_type) \ int __weak \ otx2_mbox_up_handler_ ## _fn_name(struct otx2_nic *pfvf, \ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index 52205cbd6ada..23e1c242175a 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -790,5 +790,5 @@ int otx2_del_macfilter(struct net_device *netdev, const u8 *mac); int otx2_add_macfilter(struct net_device *netdev, const u8 *mac); int otx2_enable_rxvlan(struct otx2_nic *pf, bool enable); int otx2_install_rxvlan_offload_flow(struct otx2_nic *pfvf); - +u16 otx2_get_max_mtu(struct otx2_nic *pfvf); #endif /* OTX2_COMMON_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 8b6a012d0a88..01458fe4e088 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -1285,6 +1285,33 @@ static void otx2_free_sq_res(struct otx2_nic *pf) } } +static int otx2_get_rbuf_size(struct otx2_nic *pf, int mtu) +{ + int frame_size; + int total_size; + int rbuf_size; + + /* The data transferred by NIX to memory consists of actual packet + * plus additional data which has timestamp and/or EDSA/HIGIG2 + * headers if interface is configured in corresponding modes. + * NIX transfers entire data using 6 segments/buffers and writes + * a CQE_RX descriptor with those segment addresses. First segment + * has additional data prepended to packet. Also software omits a + * headroom of 128 bytes and sizeof(struct skb_shared_info) in + * each segment. 
Hence the total size of memory needed + * to receive a packet with 'mtu' is: + * frame size = mtu + additional data; + * memory = frame_size + (headroom + struct skb_shared_info size) * 6; + * each receive buffer size = memory / 6; + */ + frame_size = mtu + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN; + total_size = frame_size + (OTX2_HEAD_ROOM + + OTX2_DATA_ALIGN(sizeof(struct skb_shared_info))) * 6; + rbuf_size = total_size / 6; + + return ALIGN(rbuf_size, 2048); +} + static int otx2_init_hw_resources(struct otx2_nic *pf) { struct nix_lf_free_req *free_req; @@ -1301,9 +1328,9 @@ static int otx2_init_hw_resources(struct otx2_nic *pf) hw->sqpool_cnt = hw->tx_queues; hw->pool_cnt = hw->rqpool_cnt + hw->sqpool_cnt; - /* Get the size of receive buffers to allocate */ - pf->rbsize = RCV_FRAG_LEN(OTX2_HW_TIMESTAMP_LEN + pf->netdev->mtu + - OTX2_ETH_HLEN); + pf->max_frs = pf->netdev->mtu + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN; + + pf->rbsize = otx2_get_rbuf_size(pf, pf->netdev->mtu); mutex_lock(&mbox->lock); /* NPA init */ @@ -2426,7 +2453,7 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) /* MTU range: 64 - 9190 */ netdev->min_mtu = OTX2_MIN_MTU; - netdev->max_mtu = OTX2_MAX_MTU; + netdev->max_mtu = otx2_get_max_mtu(pf); err = register_netdev(netdev); if (err) { diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c index 690bb3a9f8f7..9c53c789803e 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c @@ -257,12 +257,11 @@ static bool otx2_check_rcv_errors(struct otx2_nic *pfvf, /* For now ignore all the NPC parser errors and * pass the packets to stack. */ - if (cqe->sg.segs == 1) - return false; + return false; } /* If RXALL is enabled pass on packets to stack. 
*/ - if (cqe->sg.segs == 1 && (pfvf->netdev->features & NETIF_F_RXALL)) + if (pfvf->netdev->features & NETIF_F_RXALL) return false; /* Free buffer back to pool */ @@ -277,9 +276,14 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf, struct nix_cqe_rx_s *cqe) { struct nix_rx_parse_s *parse = &cqe->parse; + struct nix_rx_sg_s *sg = &cqe->sg; struct sk_buff *skb = NULL; + void *end, *start; + u64 *seg_addr; + u16 *seg_size; + int seg; - if (unlikely(parse->errlev || parse->errcode || cqe->sg.segs > 1)) { + if (unlikely(parse->errlev || parse->errcode)) { if (otx2_check_rcv_errors(pfvf, cqe, cq->cq_idx)) return; } @@ -288,9 +292,19 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf, if (unlikely(!skb)) return; - otx2_skb_add_frag(pfvf, skb, cqe->sg.seg_addr, cqe->sg.seg_size, parse); - cq->pool_ptrs++; - + start = (void *)sg; + end = start + ((cqe->parse.desc_sizem1 + 1) * 16); + while (start < end) { + sg = (struct nix_rx_sg_s *)start; + seg_addr = &sg->seg_addr; + seg_size = (void *)sg; + for (seg = 0; seg < sg->segs; seg++, seg_addr++) { + otx2_skb_add_frag(pfvf, skb, *seg_addr, seg_size[seg], + parse); + cq->pool_ptrs++; + } + start += sizeof(*sg); + } otx2_set_rxhash(pfvf, cqe, skb); skb_record_rx_queue(skb, cq->cq_idx); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h index d2b26b3357f3..52486c1f0973 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h @@ -24,7 +24,6 @@ #define OTX2_ETH_HLEN (VLAN_ETH_HLEN + VLAN_HLEN) #define OTX2_MIN_MTU 64 -#define OTX2_MAX_MTU (9212 - OTX2_ETH_HLEN) #define OTX2_MAX_GSO_SEGS 255 #define OTX2_MAX_FRAGS_IN_SQE 9 diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c index 31e03253e612..085be90a03eb 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c @@ -586,7 +586,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id) /* MTU range: 68 - 9190 */ netdev->min_mtu = OTX2_MIN_MTU; - netdev->max_mtu = OTX2_MAX_MTU; + netdev->max_mtu = otx2_get_max_mtu(vf); INIT_WORK(&vf->reset_task, otx2vf_reset_task);

From patchwork Thu Feb 11 01:46:29 2021 X-Patchwork-Submitter: Geetha sowjanya X-Patchwork-Id: 382369 From: Geetha sowjanya Subject: [net-next v5 12/14] octeontx2-af: cn10k: Add RPM LMAC pause frame support Date: Thu, 11 Feb 2021 07:16:29 +0530 Message-ID: <20210211014631.9578-13-gakula@marvell.com> In-Reply-To: <20210211014631.9578-1-gakula@marvell.com>

From: Rakesh Babu Flow control configuration is different for the CGX (OcteonTx2) and RPM (CN10K) functional blocks. This patch adds the necessary changes for RPM to support 802.3 pause frame configuration on CN10K platforms. 
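For orientation, a minimal sketch of the dispatch this patch sets up: flow-control requests now go through the per-silicon mac_ops table instead of calling CGX helpers directly (the wrapper name af_set_pause is hypothetical; the real call sites are in rvu_cgx.c below):

	/* CGX (OcteonTx2) and RPM (CN10K) each install their own struct
	 * mac_ops at probe time; get_mac_ops() returns the table that
	 * matches the underlying device.
	 */
	static void af_set_pause(struct rvu *rvu, int cgx_id, int lmac_id,
				 u8 tx_pause, u8 rx_pause)
	{
		void *cgxd = rvu_cgx_pdata(cgx_id, rvu); /* CGX or RPM instance */
		struct mac_ops *mac_ops = get_mac_ops(cgxd);

		/* Resolves to cgx_lmac_enadis_pause_frm() on OcteonTx2 or
		 * rpm_lmac_enadis_pause_frm() on CN10K.
		 */
		mac_ops->mac_enadis_pause_frm(cgxd, lmac_id, tx_pause, rx_pause);
	}
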
Signed-off-by: Rakesh Babu Signed-off-by: Geetha sowjanya Signed-off-by: Sunil Kovvuri Goutham --- .../net/ethernet/marvell/octeontx2/af/cgx.c | 27 ++-- .../marvell/octeontx2/af/lmac_common.h | 18 +++ .../net/ethernet/marvell/octeontx2/af/rpm.c | 134 ++++++++++++++++++ .../net/ethernet/marvell/octeontx2/af/rpm.h | 18 ++- .../ethernet/marvell/octeontx2/af/rvu_cgx.c | 19 ++- .../ethernet/marvell/octeontx2/af/rvu_nix.c | 8 +- 6 files changed, 201 insertions(+), 23 deletions(-) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c index 5e7c7dd46ad7..cc53aa1d6d3a 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c @@ -306,9 +306,6 @@ void cgx_lmac_enadis_rx_pause_fwding(void *cgxd, int lmac_id, bool enable) struct cgx *cgx = cgxd; u64 cfg; - if (is_dev_rpm(cgx)) - return; - if (!cgx) return; @@ -397,8 +394,8 @@ int cgx_lmac_tx_enable(void *cgxd, int lmac_id, bool enable) return !!(last & DATA_PKT_TX_EN); } -int cgx_lmac_get_pause_frm(void *cgxd, int lmac_id, - u8 *tx_pause, u8 *rx_pause) +static int cgx_lmac_get_pause_frm_status(void *cgxd, int lmac_id, + u8 *tx_pause, u8 *rx_pause) { struct cgx *cgx = cgxd; u64 cfg; @@ -417,8 +414,8 @@ int cgx_lmac_get_pause_frm(void *cgxd, int lmac_id, return 0; } -int cgx_lmac_set_pause_frm(void *cgxd, int lmac_id, - u8 tx_pause, u8 rx_pause) +static int cgx_lmac_enadis_pause_frm(void *cgxd, int lmac_id, + u8 tx_pause, u8 rx_pause) { struct cgx *cgx = cgxd; u64 cfg; @@ -450,13 +447,11 @@ int cgx_lmac_set_pause_frm(void *cgxd, int lmac_id, return 0; } -static void cgx_lmac_pause_frm_config(struct cgx *cgx, int lmac_id, bool enable) +static void cgx_lmac_pause_frm_config(void *cgxd, int lmac_id, bool enable) { + struct cgx *cgx = cgxd; u64 cfg; - if (is_dev_rpm(cgx)) - return; - if (!is_lmac_valid(cgx, lmac_id)) return; if (enable) { @@ -996,7 +991,7 @@ static int cgx_lmac_init(struct cgx *cgx) /* Add reference */ cgx->lmac_idmap[lmac->lmac_id] = lmac; - cgx_lmac_pause_frm_config(cgx, lmac->lmac_id, true); + cgx->mac_ops->mac_pause_frm_config(cgx, lmac->lmac_id, true); set_bit(lmac->lmac_id, &cgx->lmac_bmap); } @@ -1025,7 +1020,7 @@ static int cgx_lmac_exit(struct cgx *cgx) lmac = cgx->lmac_idmap[i]; if (!lmac) continue; - cgx_lmac_pause_frm_config(cgx, lmac->lmac_id, false); + cgx->mac_ops->mac_pause_frm_config(cgx, lmac->lmac_id, false); cgx_configure_interrupt(cgx, lmac, lmac->lmac_id, true); kfree(lmac->name); kfree(lmac); @@ -1037,7 +1032,7 @@ static int cgx_lmac_exit(struct cgx *cgx) static void cgx_populate_features(struct cgx *cgx) { if (is_dev_rpm(cgx)) - cgx->hw_features = RVU_MAC_RPM; + cgx->hw_features = (RVU_MAC_RPM | RVU_LMAC_FEAT_FC); else cgx->hw_features = (RVU_LMAC_FEAT_FC | RVU_LMAC_FEAT_PTP); } @@ -1053,6 +1048,10 @@ static struct mac_ops cgx_mac_ops = { .lmac_fwi = CGX_LMAC_FWI, .non_contiguous_serdes_lane = false, .get_nr_lmacs = cgx_get_nr_lmacs, + .mac_enadis_rx_pause_fwding = cgx_lmac_enadis_rx_pause_fwding, + .mac_get_pause_frm_status = cgx_lmac_get_pause_frm_status, + .mac_enadis_pause_frm = cgx_lmac_enadis_pause_frm, + .mac_pause_frm_config = cgx_lmac_pause_frm_config, }; static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h index 9c5a9c58b2d7..b4eb337ccbdd 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h +++ 
b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h @@ -67,6 +67,24 @@ struct mac_ops { * number of setbits in lmac_exist tells number of lmacs */ int (*get_nr_lmacs)(void *cgx); + + /* Enable LMAC Pause Frame Configuration */ + void (*mac_enadis_rx_pause_fwding)(void *cgxd, + int lmac_id, + bool enable); + int (*mac_get_pause_frm_status)(void *cgxd, + int lmac_id, + u8 *tx_pause, + u8 *rx_pause); + + int (*mac_enadis_pause_frm)(void *cgxd, + int lmac_id, + u8 tx_pause, + u8 rx_pause); + void (*mac_pause_frm_config)(void *cgxd, + int lmac_id, + bool enable); + }; struct cgx { diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c index e97baeea4e19..8a4241abaa2a 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c @@ -19,6 +19,10 @@ static struct mac_ops rpm_mac_ops = { .lmac_fwi = RPM_LMAC_FWI, .non_contiguous_serdes_lane = true, .get_nr_lmacs = rpm_get_nr_lmacs, + .mac_enadis_rx_pause_fwding = rpm_lmac_enadis_rx_pause_fwding, + .mac_get_pause_frm_status = rpm_lmac_get_pause_frm_status, + .mac_enadis_pause_frm = rpm_lmac_enadis_pause_frm, + .mac_pause_frm_config = rpm_lmac_pause_frm_config, }; struct mac_ops *rpm_get_mac_ops(void) @@ -26,6 +30,11 @@ struct mac_ops *rpm_get_mac_ops(void) return &rpm_mac_ops; } +static void rpm_write(rpm_t *rpm, u64 lmac, u64 offset, u64 val) +{ + cgx_write(rpm, lmac, offset, val); +} + static u64 rpm_read(rpm_t *rpm, u64 lmac, u64 offset) { return cgx_read(rpm, lmac, offset); @@ -37,3 +46,128 @@ int rpm_get_nr_lmacs(void *rpmd) return hweight8(rpm_read(rpm, 0, CGXX_CMRX_RX_LMACS) & 0xFULL); } + +void rpm_lmac_enadis_rx_pause_fwding(void *rpmd, int lmac_id, bool enable) +{ + struct cgx *rpm = rpmd; + u64 cfg; + + if (!rpm) + return; + + if (enable) { + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + cfg &= ~RPMX_MTI_MAC100X_COMMAND_CONFIG_PAUSE_IGNORE; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg); + } else { + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + cfg |= RPMX_MTI_MAC100X_COMMAND_CONFIG_PAUSE_IGNORE; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg); + } +} + +int rpm_lmac_get_pause_frm_status(void *rpmd, int lmac_id, + u8 *tx_pause, u8 *rx_pause) +{ + rpm_t *rpm = rpmd; + u64 cfg; + + if (!is_lmac_valid(rpm, lmac_id)) + return -ENODEV; + + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + *rx_pause = !(cfg & RPMX_MTI_MAC100X_COMMAND_CONFIG_RX_P_DISABLE); + + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + *tx_pause = !(cfg & RPMX_MTI_MAC100X_COMMAND_CONFIG_TX_P_DISABLE); + return 0; +} + +int rpm_lmac_enadis_pause_frm(void *rpmd, int lmac_id, u8 tx_pause, + u8 rx_pause) +{ + rpm_t *rpm = rpmd; + u64 cfg; + + if (!is_lmac_valid(rpm, lmac_id)) + return -ENODEV; + + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + cfg &= ~RPMX_MTI_MAC100X_COMMAND_CONFIG_RX_P_DISABLE; + cfg |= rx_pause ? 0x0 : RPMX_MTI_MAC100X_COMMAND_CONFIG_RX_P_DISABLE; + cfg &= ~RPMX_MTI_MAC100X_COMMAND_CONFIG_PAUSE_IGNORE; + cfg |= rx_pause ? 0x0 : RPMX_MTI_MAC100X_COMMAND_CONFIG_PAUSE_IGNORE; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg); + + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + cfg &= ~RPMX_MTI_MAC100X_COMMAND_CONFIG_TX_P_DISABLE; + cfg |= tx_pause ? 
0x0 : RPMX_MTI_MAC100X_COMMAND_CONFIG_TX_P_DISABLE; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg); + + cfg = rpm_read(rpm, 0, RPMX_CMR_RX_OVR_BP); + if (tx_pause) { + cfg &= ~RPMX_CMR_RX_OVR_BP_EN(lmac_id); + } else { + cfg |= RPMX_CMR_RX_OVR_BP_EN(lmac_id); + cfg &= ~RPMX_CMR_RX_OVR_BP_BP(lmac_id); + } + rpm_write(rpm, 0, RPMX_CMR_RX_OVR_BP, cfg); + return 0; +} + +void rpm_lmac_pause_frm_config(void *rpmd, int lmac_id, bool enable) +{ + rpm_t *rpm = rpmd; + u64 cfg; + + if (enable) { + /* Enable 802.3 pause frame mode */ + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + cfg &= ~RPMX_MTI_MAC100X_COMMAND_CONFIG_PFC_MODE; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg); + + /* Enable receive pause frames */ + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + cfg &= ~RPMX_MTI_MAC100X_COMMAND_CONFIG_RX_P_DISABLE; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg); + + /* Enable forward pause to TX block */ + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + cfg &= ~RPMX_MTI_MAC100X_COMMAND_CONFIG_PAUSE_IGNORE; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg); + + /* Enable pause frames transmission */ + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + cfg &= ~RPMX_MTI_MAC100X_COMMAND_CONFIG_TX_P_DISABLE; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg); + + /* Set pause time and interval */ + cfg = rpm_read(rpm, lmac_id, + RPMX_MTI_MAC100X_CL01_PAUSE_QUANTA); + cfg &= ~0xFFFFULL; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_CL01_PAUSE_QUANTA, + cfg | RPM_DEFAULT_PAUSE_TIME); + /* Set pause interval as the hardware default is too short */ + cfg = rpm_read(rpm, lmac_id, + RPMX_MTI_MAC100X_CL01_QUANTA_THRESH); + cfg &= ~0xFFFFULL; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_CL01_QUANTA_THRESH, + cfg | (RPM_DEFAULT_PAUSE_TIME / 2)); + + } else { + /* ALL pause frames received are completely ignored */ + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + cfg |= RPMX_MTI_MAC100X_COMMAND_CONFIG_RX_P_DISABLE; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg); + + /* Disable forward pause to TX block */ + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + cfg |= RPMX_MTI_MAC100X_COMMAND_CONFIG_PAUSE_IGNORE; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg); + + /* Disable pause frames transmission */ + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG); + cfg |= RPMX_MTI_MAC100X_COMMAND_CONFIG_TX_P_DISABLE; + rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg); + } +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h index 1779e8b689e8..0e8b4560b3c3 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h @@ -21,9 +21,25 @@ #define RPMX_CMRX_LINK_RANGE_MASK GENMASK_ULL(19, 16) #define RPMX_CMRX_LINK_BASE_MASK GENMASK_ULL(11, 0) - +#define RPMX_MTI_MAC100X_COMMAND_CONFIG 0x8010 +#define RPMX_MTI_MAC100X_COMMAND_CONFIG_RX_P_DISABLE BIT_ULL(29) +#define RPMX_MTI_MAC100X_COMMAND_CONFIG_TX_P_DISABLE BIT_ULL(28) +#define RPMX_MTI_MAC100X_COMMAND_CONFIG_PAUSE_IGNORE BIT_ULL(8) +#define RPMX_MTI_MAC100X_COMMAND_CONFIG_PFC_MODE BIT_ULL(19) +#define RPMX_MTI_MAC100X_CL01_PAUSE_QUANTA 0x80A8 +#define RPMX_MTI_MAC100X_CL01_QUANTA_THRESH 0x80C8 +#define RPM_DEFAULT_PAUSE_TIME 0xFFFF +#define RPMX_CMR_RX_OVR_BP 0x4120 +#define RPMX_CMR_RX_OVR_BP_EN(x) BIT_ULL((x) + 8) +#define 
RPMX_CMR_RX_OVR_BP_BP(x) BIT_ULL((x) + 4) #define RPM_LMAC_FWI 0xa /* Function Declarations */ +void rpm_lmac_enadis_rx_pause_fwding(void *rpmd, int lmac_id, bool enable); +int rpm_lmac_get_pause_frm_status(void *cgxd, int lmac_id, u8 *tx_pause, + u8 *rx_pause); +void rpm_lmac_pause_frm_config(void *rpmd, int lmac_id, bool enable); +int rpm_lmac_enadis_pause_frm(void *rpmd, int lmac_id, u8 tx_pause, + u8 rx_pause); int rpm_get_nr_lmacs(void *cgxd); #endif /* RPM_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c index 5e514f27a22d..1c980c8a13ee 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c @@ -406,6 +406,7 @@ static bool is_cgx_config_permitted(struct rvu *rvu, u16 pcifunc) void rvu_cgx_enadis_rx_bp(struct rvu *rvu, int pf, bool enable) { + struct mac_ops *mac_ops; u8 cgx_id, lmac_id; void *cgxd; @@ -415,11 +416,12 @@ void rvu_cgx_enadis_rx_bp(struct rvu *rvu, int pf, bool enable) rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id); cgxd = rvu_cgx_pdata(cgx_id, rvu); + mac_ops = get_mac_ops(cgxd); /* Set / clear CTL_BCK to control pause frame forwarding to NIX */ if (enable) - cgx_lmac_enadis_rx_pause_fwding(cgxd, lmac_id, true); + mac_ops->mac_enadis_rx_pause_fwding(cgxd, lmac_id, true); else - cgx_lmac_enadis_rx_pause_fwding(cgxd, lmac_id, false); + mac_ops->mac_enadis_rx_pause_fwding(cgxd, lmac_id, false); } int rvu_cgx_config_rxtx(struct rvu *rvu, u16 pcifunc, bool start) @@ -715,7 +717,9 @@ int rvu_mbox_handler_cgx_cfg_pause_frm(struct rvu *rvu, struct cgx_pause_frm_cfg *rsp) { int pf = rvu_get_pf(req->hdr.pcifunc); + struct mac_ops *mac_ops; u8 cgx_id, lmac_id; + void *cgxd; if (!is_mac_feature_supported(rvu, pf, RVU_LMAC_FEAT_FC)) return 0; @@ -727,13 +731,16 @@ int rvu_mbox_handler_cgx_cfg_pause_frm(struct rvu *rvu, return -ENODEV; rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id); + cgxd = rvu_cgx_pdata(cgx_id, rvu); + mac_ops = get_mac_ops(cgxd); if (req->set) - cgx_lmac_set_pause_frm(rvu_cgx_pdata(cgx_id, rvu), lmac_id, - req->tx_pause, req->rx_pause); + mac_ops->mac_enadis_pause_frm(cgxd, lmac_id, + req->tx_pause, req->rx_pause); else - cgx_lmac_get_pause_frm(rvu_cgx_pdata(cgx_id, rvu), lmac_id, - &rsp->tx_pause, &rsp->rx_pause); + mac_ops->mac_get_pause_frm_status(cgxd, lmac_id, + &rsp->tx_pause, + &rsp->rx_pause); return 0; } diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index ee27dddeff59..d3000194e2d3 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -16,6 +16,7 @@ #include "rvu.h" #include "npc.h" #include "cgx.h" +#include "lmac_common.h" static void nix_free_tx_vtag_entries(struct rvu *rvu, u16 pcifunc); static int rvu_nix_get_bpid(struct rvu *rvu, struct nix_bp_cfg_req *req, @@ -214,6 +215,7 @@ static bool is_valid_txschq(struct rvu *rvu, int blkaddr, static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf) { struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); + struct mac_ops *mac_ops; int pkind, pf, vf, lbkid; u8 cgx_id, lmac_id; int err; @@ -240,10 +242,12 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf) cgx_set_pkind(rvu_cgx_pdata(cgx_id, rvu), lmac_id, pkind); rvu_npc_set_pkind(rvu, pkind, pfvf); + mac_ops = get_mac_ops(rvu_cgx_pdata(cgx_id, rvu)); /* By default we enable pause frames */ 
if ((pcifunc & RVU_PFVF_FUNC_MASK) == 0) - cgx_lmac_set_pause_frm(rvu_cgx_pdata(cgx_id, rvu), - lmac_id, true, true); + mac_ops->mac_enadis_pause_frm(rvu_cgx_pdata(cgx_id, + rvu), + lmac_id, true, true); break; case NIX_INTF_TYPE_LBK: vf = (pcifunc & RVU_PFVF_FUNC_MASK) - 1;
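[Editor's note] The hunks above show the pattern this series applies throughout: direct cgx_* calls are replaced with calls through a per-silicon mac_ops table, so the same AF code can drive either the CGX (OcteonTx2) or RPM (CN10K) MAC. Below is a minimal standalone-C sketch of that dispatch; the *_sketch names, stub bodies, and the bool selector standing in for the driver's get_mac_ops() are hypothetical, not driver code.

#include <stdbool.h>

struct mac_ops_sketch {
	int (*mac_enadis_pause_frm)(void *macd, int lmac_id,
				    unsigned char tx_pause,
				    unsigned char rx_pause);
};

/* CGX (OcteonTx2) flavour -- stub standing in for the real helper */
static int cgx_pause_sketch(void *macd, int lmac_id, unsigned char tx_pause,
			    unsigned char rx_pause)
{
	return 0;
}

/* RPM (CN10K) flavour -- stub standing in for the real helper */
static int rpm_pause_sketch(void *macd, int lmac_id, unsigned char tx_pause,
			    unsigned char rx_pause)
{
	return 0;
}

static struct mac_ops_sketch cgx_ops = {
	.mac_enadis_pause_frm = cgx_pause_sketch,
};
static struct mac_ops_sketch rpm_ops = {
	.mac_enadis_pause_frm = rpm_pause_sketch,
};

/* The driver's get_mac_ops() keys off the device; a bool stands in here */
static struct mac_ops_sketch *get_mac_ops_sketch(bool is_rpm)
{
	return is_rpm ? &rpm_ops : &cgx_ops;
}

Common code then only ever writes mac_ops->mac_enadis_pause_frm(cgxd, lmac_id, ...), which is exactly the shape of the rvu_cgx.c changes above.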
From patchwork Thu Feb 11 01:46:30 2021
X-Patchwork-Submitter: Geetha sowjanya
X-Patchwork-Id: 381608
From: Geetha sowjanya
Subject: [net-next v5 13/14] octeontx2-af: cn10k: Add RPM Rx/Tx stats support
Date: Thu, 11 Feb 2021 07:16:30 +0530
Message-ID: <20210211014631.9578-14-gakula@marvell.com>
In-Reply-To: <20210211014631.9578-1-gakula@marvell.com>
References: <20210211014631.9578-1-gakula@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Hariprasad Kelam

RPM supports the counters listed below as an extension to the existing counters:
 * class-based flow control pause frames
 * VLAN/jabber/fragmented packets
 * FCS/alignment/oversized error packets

This patch adds support for displaying the supported RPM counters via debugfs and defines a new mailbox message, rpm_stats, to read all supported counters.

Signed-off-by: Hariprasad Kelam
Signed-off-by: Geetha sowjanya
Signed-off-by: Sunil Kovvuri Goutham
---
.../net/ethernet/marvell/octeontx2/af/cgx.c | 10 +- .../net/ethernet/marvell/octeontx2/af/cgx.h | 4 +- .../marvell/octeontx2/af/lmac_common.h | 16 ++- .../net/ethernet/marvell/octeontx2/af/mbox.h | 13 ++- .../net/ethernet/marvell/octeontx2/af/rpm.c | 57 ++++++++- .../net/ethernet/marvell/octeontx2/af/rpm.h | 9 +- .../ethernet/marvell/octeontx2/af/rvu_cgx.c | 36 ++++-- .../marvell/octeontx2/af/rvu_debugfs.c | 108 ++++++++++++++++-- 8 files changed, 227 insertions(+), 26 deletions(-) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c index cc53aa1d6d3a..7b03234567ca 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c @@ -330,10 +330,8 @@ void cgx_lmac_enadis_rx_pause_fwding(void *cgxd, int lmac_id, bool enable) int cgx_get_rx_stats(void *cgxd, int lmac_id, int idx, u64 *rx_stat) { - struct mac_ops *mac_ops; struct cgx *cgx = cgxd; - mac_ops = cgx->mac_ops; if (!is_lmac_valid(cgx, lmac_id)) return -ENODEV; *rx_stat = cgx_read(cgx, lmac_id, CGXX_CMRX_RX_STAT0 + (idx * 8)); @@ -342,10 +340,8 @@ int cgx_get_rx_stats(void *cgxd, int lmac_id, int idx, u64 *rx_stat) int cgx_get_tx_stats(void *cgxd, int lmac_id, int idx, u64 *tx_stat) { - struct mac_ops *mac_ops; struct cgx *cgx = cgxd; - mac_ops = cgx->mac_ops; if (!is_lmac_valid(cgx, lmac_id)) return -ENODEV; *tx_stat = cgx_read(cgx, lmac_id, CGXX_CMRX_TX_STAT0 + (idx * 8)); @@ -1047,7 +1043,11 @@ static struct mac_ops cgx_mac_ops = { .int_ena_bit = FW_CGX_INT, .lmac_fwi = CGX_LMAC_FWI, .non_contiguous_serdes_lane = false, + .rx_stats_cnt = 9, + .tx_stats_cnt = 18, .get_nr_lmacs = cgx_get_nr_lmacs, + .mac_get_rx_stats = cgx_get_rx_stats, + .mac_get_tx_stats = cgx_get_tx_stats, .mac_enadis_rx_pause_fwding = cgx_lmac_enadis_rx_pause_fwding, .mac_get_pause_frm_status = cgx_lmac_get_pause_frm_status, .mac_enadis_pause_frm = cgx_lmac_enadis_pause_frm, @@ -1120,6 +1120,8 @@ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id) cgx_populate_features(cgx); + mutex_init(&cgx->lock); + err = cgx_lmac_init(cgx); if (err) goto err_release_lmac; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h index f2c77e05c5ab..bb0087e540ff 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h @@ -41,7 +41,7 @@ #define FW_CGX_INT BIT_ULL(1) #define CGXX_CMRX_INT_ENA_W1S 0x058 #define CGXX_CMRX_RX_ID_MAP 0x060 -#define CGXX_CMRX_RX_STAT0 (0x070 + mac_ops->csr_offset) +#define CGXX_CMRX_RX_STAT0 0x070 #define CGXX_CMRX_RX_LMACS 0x128 #define CGXX_CMRX_RX_DMAC_CTL0 (0x1F8 + mac_ops->csr_offset) #define CGX_DMAC_CTL0_CAM_ENABLE
BIT_ULL(3) @@ -52,7 +52,7 @@ #define CGX_DMAC_CAM_ADDR_ENABLE BIT_ULL(48) #define CGXX_CMRX_RX_DMAC_CAM1 0x400 #define CGX_RX_DMAC_ADR_MASK GENMASK_ULL(47, 0) -#define CGXX_CMRX_TX_STAT0 (0x700 + mac_ops->csr_offset) +#define CGXX_CMRX_TX_STAT0 0x700 #define CGXX_SCRATCH0_REG 0x1050 #define CGXX_SCRATCH1_REG 0x1058 #define CGX_CONST 0x2000 diff --git a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h index b4eb337ccbdd..fea230393782 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h @@ -63,15 +63,25 @@ struct mac_ops { u8 lmac_fwi; u32 fifo_len; bool non_contiguous_serdes_lane; + /* RPM & CGX differs in number of Receive/transmit stats */ + u8 rx_stats_cnt; + u8 tx_stats_cnt; /* Incase of RPM get number of lmacs from RPMX_CMR_RX_LMACS[LMAC_EXIST] * number of setbits in lmac_exist tells number of lmacs */ int (*get_nr_lmacs)(void *cgx); + /* Register Stats related functions */ + int (*mac_get_rx_stats)(void *cgx, int lmac_id, + int idx, u64 *rx_stat); + int (*mac_get_tx_stats)(void *cgx, int lmac_id, + int idx, u64 *tx_stat); + /* Enable LMAC Pause Frame Configuration */ void (*mac_enadis_rx_pause_fwding)(void *cgxd, int lmac_id, bool enable); + int (*mac_get_pause_frm_status)(void *cgxd, int lmac_id, u8 *tx_pause, @@ -81,10 +91,10 @@ struct mac_ops { int lmac_id, u8 tx_pause, u8 rx_pause); + void (*mac_pause_frm_config)(void *cgxd, int lmac_id, bool enable); - }; struct cgx { @@ -99,6 +109,10 @@ struct cgx { u64 hw_features; struct mac_ops *mac_ops; unsigned long lmac_bmap; /* bitmap of enabled lmacs */ + /* Lock to serialize read/write of global csrs like + * RPMX_MTI_STAT_DATA_HI_CDC etc + */ + struct mutex lock; }; typedef struct cgx rpm_t; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index fe36a0768582..cbd66ada11ca 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -155,6 +155,7 @@ M(CGX_CFG_PAUSE_FRM, 0x20E, cgx_cfg_pause_frm, cgx_pause_frm_cfg, \ cgx_pause_frm_cfg) \ M(CGX_FEATURES_GET, 0x20F, cgx_features_get, msg_req, \ cgx_features_info_msg) \ +M(RPM_STATS, 0x21C, rpm_stats, msg_req, rpm_stats_rsp) \ /* NPA mbox IDs (range 0x400 - 0x5FF) */ \ M(NPA_LF_ALLOC, 0x400, npa_lf_alloc, \ npa_lf_alloc_req, npa_lf_alloc_rsp) \ @@ -403,14 +404,22 @@ struct cgx_pause_frm_cfg { #define RVU_LMAC_FEAT_FC BIT_ULL(0) /* pause frames */ #define RVU_LMAC_FEAT_PTP BIT_ULL(1) /* precison time protocol */ #define RVU_MAC_VERSION BIT_ULL(2) -#define RVU_MAC_CGX 0 -#define RVU_MAC_RPM 1 +#define RVU_MAC_CGX BIT_ULL(3) +#define RVU_MAC_RPM BIT_ULL(4) struct cgx_features_info_msg { struct mbox_msghdr hdr; u64 lmac_features; }; +struct rpm_stats_rsp { + struct mbox_msghdr hdr; +#define RPM_RX_STATS_COUNT 43 +#define RPM_TX_STATS_COUNT 34 + u64 rx_stats[RPM_RX_STATS_COUNT]; + u64 tx_stats[RPM_TX_STATS_COUNT]; +}; + /* NPA mbox message formats */ /* NPA mailbox error codes diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c index 8a4241abaa2a..3870cd436683 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c @@ -18,7 +18,11 @@ static struct mac_ops rpm_mac_ops = { .int_ena_bit = BIT_ULL(0), .lmac_fwi = RPM_LMAC_FWI, .non_contiguous_serdes_lane = true, + .rx_stats_cnt = 43, + .tx_stats_cnt = 34, .get_nr_lmacs = 
rpm_get_nr_lmacs, + .mac_get_rx_stats = rpm_get_rx_stats, + .mac_get_tx_stats = rpm_get_tx_stats, .mac_enadis_rx_pause_fwding = rpm_lmac_enadis_rx_pause_fwding, .mac_get_pause_frm_status = rpm_lmac_get_pause_frm_status, .mac_enadis_pause_frm = rpm_lmac_enadis_pause_frm, @@ -49,7 +53,7 @@ int rpm_get_nr_lmacs(void *rpmd) void rpm_lmac_enadis_rx_pause_fwding(void *rpmd, int lmac_id, bool enable) { - struct cgx *rpm = rpmd; + rpm_t *rpm = rpmd; u64 cfg; if (!rpm) @@ -171,3 +175,54 @@ void rpm_lmac_pause_frm_config(void *rpmd, int lmac_id, bool enable) rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg); } } + +int rpm_get_rx_stats(void *rpmd, int lmac_id, int idx, u64 *rx_stat) +{ + rpm_t *rpm = rpmd; + u64 val_lo, val_hi; + + if (!rpm || lmac_id >= rpm->lmac_count) + return -ENODEV; + + mutex_lock(&rpm->lock); + + /* Update idx to point per lmac Rx statistics page */ + idx += lmac_id * rpm->mac_ops->rx_stats_cnt; + + /* Read lower 32 bits of counter */ + val_lo = rpm_read(rpm, 0, RPMX_MTI_STAT_RX_STAT_PAGES_COUNTERX + + (idx * 8)); + + /* upon read of lower 32 bits, higher 32 bits are written + * to RPMX_MTI_STAT_DATA_HI_CDC + */ + val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC); + + *rx_stat = (val_hi << 32 | val_lo); + + mutex_unlock(&rpm->lock); + return 0; +} + +int rpm_get_tx_stats(void *rpmd, int lmac_id, int idx, u64 *tx_stat) +{ + rpm_t *rpm = rpmd; + u64 val_lo, val_hi; + + if (!rpm || lmac_id >= rpm->lmac_count) + return -ENODEV; + + mutex_lock(&rpm->lock); + + /* Update idx to point per lmac Tx statistics page */ + idx += lmac_id * rpm->mac_ops->tx_stats_cnt; + + val_lo = rpm_read(rpm, 0, RPMX_MTI_STAT_TX_STAT_PAGES_COUNTERX + + (idx * 8)); + val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC); + + *tx_stat = (val_hi << 32 | val_lo); + + mutex_unlock(&rpm->lock); + return 0; +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h index 0e8b4560b3c3..c939302308b5 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h @@ -18,6 +18,7 @@ #define RPMX_CMRX_SW_INT_W1S 0x188 #define RPMX_CMRX_SW_INT_ENA_W1S 0x198 #define RPMX_CMRX_LINK_CFG 0x1070 +#define RPMX_MTI_LPCSX_CONTROL(id) (0x30000 | ((id) * 0x100)) #define RPMX_CMRX_LINK_RANGE_MASK GENMASK_ULL(19, 16) #define RPMX_CMRX_LINK_BASE_MASK GENMASK_ULL(11, 0) @@ -32,14 +33,20 @@ #define RPMX_CMR_RX_OVR_BP 0x4120 #define RPMX_CMR_RX_OVR_BP_EN(x) BIT_ULL((x) + 8) #define RPMX_CMR_RX_OVR_BP_BP(x) BIT_ULL((x) + 4) +#define RPMX_MTI_STAT_RX_STAT_PAGES_COUNTERX 0x12000 +#define RPMX_MTI_STAT_TX_STAT_PAGES_COUNTERX 0x13000 +#define RPMX_MTI_STAT_DATA_HI_CDC 0x10038 + #define RPM_LMAC_FWI 0xa /* Function Declarations */ +int rpm_get_nr_lmacs(void *rpmd); void rpm_lmac_enadis_rx_pause_fwding(void *rpmd, int lmac_id, bool enable); int rpm_lmac_get_pause_frm_status(void *cgxd, int lmac_id, u8 *tx_pause, u8 *rx_pause); void rpm_lmac_pause_frm_config(void *rpmd, int lmac_id, bool enable); int rpm_lmac_enadis_pause_frm(void *rpmd, int lmac_id, u8 tx_pause, u8 rx_pause); -int rpm_get_nr_lmacs(void *cgxd); +int rpm_get_tx_stats(void *rpmd, int lmac_id, int idx, u64 *tx_stat); +int rpm_get_rx_stats(void *rpmd, int lmac_id, int idx, u64 *rx_stat); #endif /* RPM_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c index 1c980c8a13ee..e0f9414f8358 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c +++ 
b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c @@ -453,10 +453,11 @@ int rvu_mbox_handler_cgx_stop_rxtx(struct rvu *rvu, struct msg_req *req, return 0; } -int rvu_mbox_handler_cgx_stats(struct rvu *rvu, struct msg_req *req, - struct cgx_stats_rsp *rsp) +static int rvu_lmac_get_stats(struct rvu *rvu, struct msg_req *req, + void *rsp) { int pf = rvu_get_pf(req->hdr.pcifunc); + struct mac_ops *mac_ops; int stat = 0, err = 0; u64 tx_stat, rx_stat; u8 cgx_idx, lmac; @@ -467,28 +468,47 @@ int rvu_mbox_handler_cgx_stats(struct rvu *rvu, struct msg_req *req, rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_idx, &lmac); cgxd = rvu_cgx_pdata(cgx_idx, rvu); + mac_ops = get_mac_ops(cgxd); /* Rx stats */ - while (stat < CGX_RX_STATS_COUNT) { - err = cgx_get_rx_stats(cgxd, lmac, stat, &rx_stat); + while (stat < mac_ops->rx_stats_cnt) { + err = mac_ops->mac_get_rx_stats(cgxd, lmac, stat, &rx_stat); if (err) return err; - rsp->rx_stats[stat] = rx_stat; + if (mac_ops->rx_stats_cnt == RPM_RX_STATS_COUNT) + ((struct rpm_stats_rsp *)rsp)->rx_stats[stat] = rx_stat; + else + ((struct cgx_stats_rsp *)rsp)->rx_stats[stat] = rx_stat; stat++; } /* Tx stats */ stat = 0; - while (stat < CGX_TX_STATS_COUNT) { - err = cgx_get_tx_stats(cgxd, lmac, stat, &tx_stat); + while (stat < mac_ops->tx_stats_cnt) { + err = mac_ops->mac_get_tx_stats(cgxd, lmac, stat, &tx_stat); if (err) return err; - rsp->tx_stats[stat] = tx_stat; + if (mac_ops->tx_stats_cnt == RPM_TX_STATS_COUNT) + ((struct rpm_stats_rsp *)rsp)->tx_stats[stat] = tx_stat; + else + ((struct cgx_stats_rsp *)rsp)->tx_stats[stat] = tx_stat; stat++; } return 0; } +int rvu_mbox_handler_cgx_stats(struct rvu *rvu, struct msg_req *req, + struct cgx_stats_rsp *rsp) +{ + return rvu_lmac_get_stats(rvu, req, (void *)rsp); +} + +int rvu_mbox_handler_rpm_stats(struct rvu *rvu, struct msg_req *req, + struct rpm_stats_rsp *rsp) +{ + return rvu_lmac_get_stats(rvu, req, (void *)rsp); +} + int rvu_mbox_handler_cgx_mac_addr_set(struct rvu *rvu, struct cgx_mac_addr_set_or_get *req, struct cgx_mac_addr_set_or_get *rsp) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c index 96ae6f79568d..dfeea587a27e 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c @@ -110,6 +110,89 @@ static char *cgx_tx_stats_fields[] = { [CGX_STAT17] = "Control/PAUSE packets sent", }; +static char *rpm_rx_stats_fields[] = { + "Octets of received packets", + "Octets of received packets without error", + "Received packets with alignment errors", + "Control/PAUSE packets received", + "Packets received with Frame too long Errors", + "Packets received with in-range length Errors", + "Received packets", + "Packets received with FrameCheckSequenceErrors", + "Packets received with VLAN header", + "Error packets", + "Packets received with unicast DMAC", + "Packets received with multicast DMAC", + "Packets received with broadcast DMAC", + "Dropped packets", + "Total frames received on interface", + "Packets received with an octet count < 64", + "Packets received with an octet count == 64", + "Packets received with an octet count of 65-127", + "Packets received with an octet count of 128-255", + "Packets received with an octet count of 256-511", + "Packets received with an octet count of 512-1023", + "Packets received with an octet count of 1024-1518", + "Packets received with an octet count of > 1518", + "Oversized Packets", + "Jabber Packets", + "Fragmented
Packets", + "CBFC(class based flow control) pause frames received for class 0", + "CBFC pause frames received for class 1", + "CBFC pause frames received for class 2", + "CBFC pause frames received for class 3", + "CBFC pause frames received for class 4", + "CBFC pause frames received for class 5", + "CBFC pause frames received for class 6", + "CBFC pause frames received for class 7", + "CBFC pause frames received for class 8", + "CBFC pause frames received for class 9", + "CBFC pause frames received for class 10", + "CBFC pause frames received for class 11", + "CBFC pause frames received for class 12", + "CBFC pause frames received for class 13", + "CBFC pause frames received for class 14", + "CBFC pause frames received for class 15", + "MAC control packets received", +}; + +static char *rpm_tx_stats_fields[] = { + "Total octets sent on the interface", + "Total octets transmitted OK", + "Control/Pause frames sent", + "Total frames transmitted OK", + "Total frames sent with VLAN header", + "Error Packets", + "Packets sent to unicast DMAC", + "Packets sent to the multicast DMAC", + "Packets sent to a broadcast DMAC", + "Packets sent with an octet count == 64", + "Packets sent with an octet count of 65–127", + "Packets sent with an octet count of 128-255", + "Packets sent with an octet count of 256-511", + "Packets sent with an octet count of 512-1023", + "Packets sent with an octet count of 1024-1518", + "Packets sent with an octet count of > 1518", + "CBFC(class based flow control) pause frames transmitted for class 0", + "CBFC pause frames transmitted for class 1", + "CBFC pause frames transmitted for class 2", + "CBFC pause frames transmitted for class 3", + "CBFC pause frames transmitted for class 4", + "CBFC pause frames transmitted for class 5", + "CBFC pause frames transmitted for class 6", + "CBFC pause frames transmitted for class 7", + "CBFC pause frames transmitted for class 8", + "CBFC pause frames transmitted for class 9", + "CBFC pause frames transmitted for class 10", + "CBFC pause frames transmitted for class 11", + "CBFC pause frames transmitted for class 12", + "CBFC pause frames transmitted for class 13", + "CBFC pause frames transmitted for class 14", + "CBFC pause frames transmitted for class 15", + "MAC control packets sent", + "Total frames sent on the interface" +}; + enum cpt_eng_type { CPT_AE_TYPE = 1, CPT_SE_TYPE = 2, @@ -1676,23 +1759,34 @@ static int cgx_print_stats(struct seq_file *s, int lmac_id) /* Rx stats */ seq_printf(s, "\n=======%s RX_STATS======\n\n", mac_ops->name); - while (stat < CGX_RX_STATS_COUNT) { - err = cgx_get_rx_stats(cgxd, lmac_id, stat, &rx_stat); + while (stat < mac_ops->rx_stats_cnt) { + err = mac_ops->mac_get_rx_stats(cgxd, lmac_id, stat, &rx_stat); if (err) return err; - seq_printf(s, "%s: %llu\n", cgx_rx_stats_fields[stat], rx_stat); + if (is_rvu_otx2(rvu)) + seq_printf(s, "%s: %llu\n", cgx_rx_stats_fields[stat], + rx_stat); + else + seq_printf(s, "%s: %llu\n", rpm_rx_stats_fields[stat], + rx_stat); stat++; } /* Tx stats */ stat = 0; seq_printf(s, "\n=======%s TX_STATS======\n\n", mac_ops->name); - while (stat < CGX_TX_STATS_COUNT) { - err = cgx_get_tx_stats(cgxd, lmac_id, stat, &tx_stat); + while (stat < mac_ops->tx_stats_cnt) { + err = mac_ops->mac_get_tx_stats(cgxd, lmac_id, stat, &tx_stat); if (err) return err; - seq_printf(s, "%s: %llu\n", cgx_tx_stats_fields[stat], tx_stat); - stat++; + + if (is_rvu_otx2(rvu)) + seq_printf(s, "%s: %llu\n", cgx_tx_stats_fields[stat], + tx_stat); + else + seq_printf(s, "%s: %llu\n", 
From patchwork Thu Feb 11 01:46:31 2021
X-Patchwork-Submitter: Geetha sowjanya
X-Patchwork-Id: 381607
From: Geetha sowjanya
Subject: [net-next v5 14/14] octeontx2-af: cn10k: MAC internal loopback support
Date: Thu, 11 Feb 2021 07:16:31 +0530
Message-ID: <20210211014631.9578-15-gakula@marvell.com>
In-Reply-To: <20210211014631.9578-1-gakula@marvell.com>
References: <20210211014631.9578-1-gakula@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Hariprasad Kelam

The MAC on CN10K silicon supports loopback for selftest or debug
purposes. This patch performs the configuration needed to loop packets back when a request is received via the mailbox from the netdev of an LMAC-mapped RVU PF. The MAC (CGX) on OcteonTx2 silicon variants and the MAC (RPM) on OcteonTx3 CN10K are different, so loopback has to be configured differently, while the upper-layer interface between the RVU AF and the PF netdev is kept the same. Based on the silicon variant, the appropriate function pointer is called to configure the MAC.

Signed-off-by: Hariprasad Kelam
Signed-off-by: Geetha sowjanya
Signed-off-by: Sunil Goutham
---
.../net/ethernet/marvell/octeontx2/af/cgx.c | 9 ++-- .../ethernet/marvell/octeontx2/af/cgx_fw_if.h | 1 + .../marvell/octeontx2/af/lmac_common.h | 4 +- .../net/ethernet/marvell/octeontx2/af/rpm.c | 44 +++++++++++++++++++ .../net/ethernet/marvell/octeontx2/af/rpm.h | 5 +++ .../ethernet/marvell/octeontx2/af/rvu_cgx.c | 6 +-- 6 files changed, 62 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c index 7b03234567ca..fefefa4db726 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c @@ -228,8 +228,9 @@ int cgx_set_pkind(void *cgxd, u8 lmac_id, int pkind) return 0; } -static inline u8 cgx_get_lmac_type(struct cgx *cgx, int lmac_id) +static u8 cgx_get_lmac_type(void *cgxd, int lmac_id) { + struct cgx *cgx = cgxd; u64 cfg; cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_CFG); @@ -246,7 +247,7 @@ int cgx_lmac_internal_loopback(void *cgxd, int lmac_id, bool enable) if (!is_lmac_valid(cgx, lmac_id)) return -ENODEV; - lmac_type = cgx_get_lmac_type(cgx, lmac_id); + lmac_type = cgx->mac_ops->get_lmac_type(cgx, lmac_id); if (lmac_type == LMAC_MODE_SGMII || lmac_type == LMAC_MODE_QSGMII) { cfg = cgx_read(cgx, lmac_id, CGXX_GMP_PCS_MRX_CTL); if (enable) @@ -637,7 +638,7 @@ static inline void link_status_user_format(u64 lstat, linfo->link_up = FIELD_GET(RESP_LINKSTAT_UP, lstat); linfo->full_duplex = FIELD_GET(RESP_LINKSTAT_FDUPLEX, lstat); linfo->speed = cgx_speed_mbps[FIELD_GET(RESP_LINKSTAT_SPEED, lstat)]; - linfo->lmac_type_id = cgx_get_lmac_type(cgx, lmac_id); + linfo->lmac_type_id = FIELD_GET(RESP_LINKSTAT_LMAC_TYPE, lstat); lmac_string = cgx_lmactype_string[linfo->lmac_type_id]; strncpy(linfo->lmac_type, lmac_string, LMACTYPE_STR_LEN - 1); } @@ -1046,6 +1047,8 @@ static struct mac_ops cgx_mac_ops = { .rx_stats_cnt = 9, .tx_stats_cnt = 18, .get_nr_lmacs = cgx_get_nr_lmacs, + .get_lmac_type = cgx_get_lmac_type, + .mac_lmac_intl_lbk = cgx_lmac_internal_loopback, .mac_get_rx_stats = cgx_get_rx_stats, .mac_get_tx_stats = cgx_get_tx_stats, .mac_enadis_rx_pause_fwding = cgx_lmac_enadis_rx_pause_fwding, diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx_fw_if.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx_fw_if.h index c3702fa58b6b..c07a96ed6c61 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx_fw_if.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx_fw_if.h @@ -154,6 +154,7 @@ enum cgx_cmd_own { * CGX_STAT_SUCCESS */ #define RESP_FWD_BASE GENMASK_ULL(56, 9) +#define RESP_LINKSTAT_LMAC_TYPE GENMASK_ULL(35, 28) /* Response to cmd ID - CGX_CMD_LINK_BRING_UP/DOWN, event ID CGX_EVT_LINK_CHANGE * status can be either CGX_STAT_FAIL or CGX_STAT_SUCCESS diff --git a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h index fea230393782..45706fd87120 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h @@
-70,7 +70,9 @@ struct mac_ops { * number of setbits in lmac_exist tells number of lmacs */ int (*get_nr_lmacs)(void *cgx); - + u8 (*get_lmac_type)(void *cgx, int lmac_id); + int (*mac_lmac_intl_lbk)(void *cgx, int lmac_id, + bool enable); /* Register Stats related functions */ int (*mac_get_rx_stats)(void *cgx, int lmac_id, int idx, u64 *rx_stat); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c index 3870cd436683..a91ccdc59403 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c @@ -21,6 +21,8 @@ static struct mac_ops rpm_mac_ops = { .rx_stats_cnt = 43, .tx_stats_cnt = 34, .get_nr_lmacs = rpm_get_nr_lmacs, + .get_lmac_type = rpm_get_lmac_type, + .mac_lmac_intl_lbk = rpm_lmac_internal_loopback, .mac_get_rx_stats = rpm_get_rx_stats, .mac_get_tx_stats = rpm_get_tx_stats, .mac_enadis_rx_pause_fwding = rpm_lmac_enadis_rx_pause_fwding, @@ -226,3 +228,45 @@ int rpm_get_tx_stats(void *rpmd, int lmac_id, int idx, u64 *tx_stat) mutex_unlock(&rpm->lock); return 0; } + +u8 rpm_get_lmac_type(void *rpmd, int lmac_id) +{ + rpm_t *rpm = rpmd; + u64 req = 0, resp; + int err; + + req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_LINK_STS, req); + err = cgx_fwi_cmd_generic(req, &resp, rpm, 0); + if (!err) + return FIELD_GET(RESP_LINKSTAT_LMAC_TYPE, resp); + return err; +} + +int rpm_lmac_internal_loopback(void *rpmd, int lmac_id, bool enable) +{ + rpm_t *rpm = rpmd; + u8 lmac_type; + u64 cfg; + + if (!rpm || lmac_id >= rpm->lmac_count) + return -ENODEV; + lmac_type = rpm->mac_ops->get_lmac_type(rpm, lmac_id); + if (lmac_type == LMAC_MODE_100G_R) { + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1); + + if (enable) + cfg |= RPMX_MTI_PCS_LBK; + else + cfg &= ~RPMX_MTI_PCS_LBK; + rpm_write(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1, cfg); + } else { + cfg = rpm_read(rpm, lmac_id, RPMX_MTI_LPCSX_CONTROL1); + if (enable) + cfg |= RPMX_MTI_PCS_LBK; + else + cfg &= ~RPMX_MTI_PCS_LBK; + rpm_write(rpm, lmac_id, RPMX_MTI_LPCSX_CONTROL1, cfg); + } + + return 0; +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h index c939302308b5..d32e74bd5964 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h @@ -18,6 +18,9 @@ #define RPMX_CMRX_SW_INT_W1S 0x188 #define RPMX_CMRX_SW_INT_ENA_W1S 0x198 #define RPMX_CMRX_LINK_CFG 0x1070 +#define RPMX_MTI_PCS100X_CONTROL1 0x20000 +#define RPMX_MTI_LPCSX_CONTROL1 0x30000 +#define RPMX_MTI_PCS_LBK BIT_ULL(14) #define RPMX_MTI_LPCSX_CONTROL(id) (0x30000 | ((id) * 0x100)) #define RPMX_CMRX_LINK_RANGE_MASK GENMASK_ULL(19, 16) @@ -41,6 +44,8 @@ /* Function Declarations */ int rpm_get_nr_lmacs(void *rpmd); +u8 rpm_get_lmac_type(void *rpmd, int lmac_id); +int rpm_lmac_internal_loopback(void *rpmd, int lmac_id, bool enable); void rpm_lmac_enadis_rx_pause_fwding(void *rpmd, int lmac_id, bool enable); int rpm_lmac_get_pause_frm_status(void *cgxd, int lmac_id, u8 *tx_pause, u8 *rx_pause); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c index e0f9414f8358..b3dd89a4c322 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c @@ -706,15 +706,15 @@ u32 rvu_cgx_get_fifolen(struct rvu *rvu) static int rvu_cgx_config_intlbk(struct rvu *rvu, u16 pcifunc, bool en) { - int pf = rvu_get_pf(pcifunc); + struct mac_ops *mac_ops; u8 cgx_id, 
lmac_id; if (!is_cgx_config_permitted(rvu, pcifunc)) return -EPERM; - rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id); + mac_ops = get_mac_ops(rvu_cgx_pdata(cgx_id, rvu)); - return cgx_lmac_internal_loopback(rvu_cgx_pdata(cgx_id, rvu), + return mac_ops->mac_lmac_intl_lbk(rvu_cgx_pdata(cgx_id, rvu), lmac_id, en); }
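[Editor's note] rpm_lmac_internal_loopback() above boils down to a read-modify-write of one PCS control bit, with the register chosen by LMAC mode (100G-R LMACs sit behind the PCS100 block, the rest behind the low-speed PCS). A minimal standalone-C sketch of that sequence follows; read_reg()/write_reg() are hypothetical CSR accessors, while the offsets and the loopback bit come from rpm.h in this patch.

#include <stdbool.h>
#include <stdint.h>

#define RPMX_MTI_PCS100X_CONTROL1 0x20000ULL
#define RPMX_MTI_LPCSX_CONTROL1 0x30000ULL
#define RPMX_MTI_PCS_LBK (1ULL << 14)

extern uint64_t read_reg(uint64_t offset);              /* hypothetical */
extern void write_reg(uint64_t offset, uint64_t val);   /* hypothetical */

static void rpm_set_pcs_loopback(bool is_100g_r, bool enable)
{
	/* Pick the CONTROL1 register for this LMAC's PCS block */
	uint64_t reg = is_100g_r ? RPMX_MTI_PCS100X_CONTROL1
				 : RPMX_MTI_LPCSX_CONTROL1;
	uint64_t cfg = read_reg(reg);

	/* Set or clear the PCS loopback bit, leaving the rest intact */
	if (enable)
		cfg |= RPMX_MTI_PCS_LBK;
	else
		cfg &= ~RPMX_MTI_PCS_LBK;
	write_reg(reg, cfg);
}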