From patchwork Thu Apr 13 10:04:16 2023
X-Patchwork-Submitter: Bingbu Cao
X-Patchwork-Id: 674374
From: bingbu.cao@intel.com
To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com
Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com
Subject: [RFC PATCH 01/14] media: intel/ipu6: add Intel IPU6 PCI device driver
Date: Thu, 13 Apr 2023 18:04:16 +0800
Message-Id: <20230413100429.919622-2-bingbu.cao@intel.com>
In-Reply-To: <20230413100429.919622-1-bingbu.cao@intel.com>
References: <20230413100429.919622-1-bingbu.cao@intel.com>
X-Mailing-List: linux-media@vger.kernel.org

From: Bingbu Cao

The 6th generation Intel Image Processing Unit (IPU6) includes an input
system and a processing system, but the hardware presents itself as a
single PCI device in the system. The IPU6 PCI device driver performs the
PCI configuration, loads the firmware binary, initialises the IPU virtual
bus, and sets up platform-specific variants so that multiple IPU6 devices
are supported by a single device driver.
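To orient readers of the series, the platform-variant handling mentioned
above amounts to mapping the PCI device ID to a hardware version and a
firmware image before the common initialisation runs. The snippet below is
a minimal sketch of that mapping and is not part of the patch;
ipu6_select_fw() is a hypothetical helper that condenses the switch
statement in ipu6_pci_probe() further down, using the IDs, version enum and
firmware names that the patch defines in ipu6.h and ipu6-platform.h.

	/* Illustrative only: condensed from the ipu6_pci_probe() switch below. */
	static const char *ipu6_select_fw(unsigned int device, u8 *hw_ver)
	{
		switch (device) {
		case IPU6_PCI_ID:		/* TGL */
			*hw_ver = IPU6_VER_6;
			return IPU6_FIRMWARE_NAME;
		case IPU6SE_PCI_ID:		/* JSL */
			*hw_ver = IPU6_VER_6SE;
			return IPU6SE_FIRMWARE_NAME;
		case IPU6EP_ADL_P_PCI_ID:
		case IPU6EP_ADL_N_PCI_ID:
		case IPU6EP_RPL_P_PCI_ID:	/* ADL/RPL */
			*hw_ver = IPU6_VER_6EP;
			return IPU6EP_FIRMWARE_NAME;
		case IPU6EP_MTL_PCI_ID:		/* MTL */
			*hw_ver = IPU6_VER_6EP_MTL;
			return IPU6EPMTL_FIRMWARE_NAME;
		default:
			return NULL;		/* unsupported device */
		}
	}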
Signed-off-by: Bingbu Cao --- .../media/pci/intel/ipu6/ipu6-platform-regs.h | 177 ++++ drivers/media/pci/intel/ipu6/ipu6-platform.h | 31 + drivers/media/pci/intel/ipu6/ipu6.c | 969 ++++++++++++++++++ drivers/media/pci/intel/ipu6/ipu6.h | 344 +++++++ 4 files changed, 1521 insertions(+) create mode 100644 drivers/media/pci/intel/ipu6/ipu6-platform-regs.h create mode 100644 drivers/media/pci/intel/ipu6/ipu6-platform.h create mode 100644 drivers/media/pci/intel/ipu6/ipu6.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6.h diff --git a/drivers/media/pci/intel/ipu6/ipu6-platform-regs.h b/drivers/media/pci/intel/ipu6/ipu6-platform-regs.h new file mode 100644 index 000000000000..85752914a562 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-platform-regs.h @@ -0,0 +1,177 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2018 - 2023 Intel Corporation */ + +#ifndef IPU6_PLATFORM_REGS_H +#define IPU6_PLATFORM_REGS_H + +/* + * IPU6 uses uniform address within IPU6, therefore all subsystem registers + * locates in one single space starts from 0 but in different sctions with + * different addresses, the subsystem offsets are defined to 0 as the + * register definition will have the address offset to 0. + */ +#define IPU6_UNIFIED_OFFSET 0 + +#define IPU6_ISYS_IOMMU0_OFFSET 0x2e0000 +#define IPU6_ISYS_IOMMU1_OFFSET 0x2e0500 +#define IPU6_ISYS_IOMMUI_OFFSET 0x2e0a00 + +#define IPU6_PSYS_IOMMU0_OFFSET 0x1b0000 +#define IPU6_PSYS_IOMMU1_OFFSET 0x1b0700 +#define IPU6_PSYS_IOMMU1R_OFFSET 0x1b0e00 +#define IPU6_PSYS_IOMMUI_OFFSET 0x1b1500 + +/* the offset from IOMMU base register */ +#define IPU6_MMU_L1_STREAM_ID_REG_OFFSET 0x0c +#define IPU6_MMU_L2_STREAM_ID_REG_OFFSET 0x4c +#define IPU6_PSYS_MMU1W_L2_STREAM_ID_REG_OFFSET 0x8c + +#define IPU6_MMU_INFO_OFFSET 0x8 + +#define IPU6_ISYS_SPC_OFFSET 0x210000 + +#define IPU6SE_PSYS_SPC_OFFSET 0x110000 +#define IPU6_PSYS_SPC_OFFSET 0x118000 + +#define IPU6_ISYS_DMEM_OFFSET 0x200000 +#define IPU6_PSYS_DMEM_OFFSET 0x100000 + +#define IPU6_REG_ISYS_UNISPART_IRQ_EDGE 0x27c000 +#define IPU6_REG_ISYS_UNISPART_IRQ_MASK 0x27c004 +#define IPU6_REG_ISYS_UNISPART_IRQ_STATUS 0x27c008 +#define IPU6_REG_ISYS_UNISPART_IRQ_CLEAR 0x27c00c +#define IPU6_REG_ISYS_UNISPART_IRQ_ENABLE 0x27c010 +#define IPU6_REG_ISYS_UNISPART_IRQ_LEVEL_NOT_PULSE 0x27c014 +#define IPU6_REG_ISYS_UNISPART_SW_IRQ_REG 0x27c414 +#define IPU6_REG_ISYS_UNISPART_SW_IRQ_MUX_REG 0x27c418 +#define IPU6_ISYS_UNISPART_IRQ_CSI0 BIT(2) +#define IPU6_ISYS_UNISPART_IRQ_CSI1 BIT(3) +#define IPU6_ISYS_UNISPART_IRQ_SW BIT(22) + +#define IPU6_REG_ISYS_ISL_TOP_IRQ_EDGE 0x2b0200 +#define IPU6_REG_ISYS_ISL_TOP_IRQ_MASK 0x2b0204 +#define IPU6_REG_ISYS_ISL_TOP_IRQ_STATUS 0x2b0208 +#define IPU6_REG_ISYS_ISL_TOP_IRQ_CLEAR 0x2b020c +#define IPU6_REG_ISYS_ISL_TOP_IRQ_ENABLE 0x2b0210 +#define IPU6_REG_ISYS_ISL_TOP_IRQ_LEVEL_NOT_PULSE 0x2b0214 + +#define IPU6_REG_ISYS_CMPR_TOP_IRQ_EDGE 0x2d2100 +#define IPU6_REG_ISYS_CMPR_TOP_IRQ_MASK 0x2d2104 +#define IPU6_REG_ISYS_CMPR_TOP_IRQ_STATUS 0x2d2108 +#define IPU6_REG_ISYS_CMPR_TOP_IRQ_CLEAR 0x2d210c +#define IPU6_REG_ISYS_CMPR_TOP_IRQ_ENABLE 0x2d2110 +#define IPU6_REG_ISYS_CMPR_TOP_IRQ_LEVEL_NOT_PULSE 0x2d2114 + +/* CDC Burst collector thresholds for isys - 3 FIFOs i = 0..2 */ +#define IPU6_REG_ISYS_CDC_THRESHOLD(i) (0x27c400 + ((i) * 4)) + +#define IPU6_CSI_IRQ_NUM_PER_PIPE 4 +#define IPU6SE_ISYS_CSI_PORT_NUM 4 +#define IPU6_ISYS_CSI_PORT_NUM 8 + +#define IPU6_ISYS_CSI_PORT_IRQ(irq_num) (1 << (irq_num)) + +/* PKG DIR OFFSET in IMR in secure mode */ +#define 
IPU6_PKG_DIR_IMR_OFFSET 0x40 + +#define IPU6_ISYS_REG_SPC_STATUS_CTRL 0x0 + +#define IPU6_ISYS_SPC_STATUS_START BIT(1) +#define IPU6_ISYS_SPC_STATUS_RUN BIT(3) +#define IPU6_ISYS_SPC_STATUS_READY BIT(5) +#define IPU6_ISYS_SPC_STATUS_CTRL_ICACHE_INVALIDATE BIT(12) +#define IPU6_ISYS_SPC_STATUS_ICACHE_PREFETCH BIT(13) + +#define IPU6_PSYS_REG_SPC_STATUS_CTRL 0x0 +#define IPU6_PSYS_REG_SPC_START_PC 0x4 +#define IPU6_PSYS_REG_SPC_ICACHE_BASE 0x10 +#define IPU6_REG_PSYS_INFO_SEG_0_CONFIG_ICACHE_MASTER 0x14 + +#define IPU6_PSYS_SPC_STATUS_START BIT(1) +#define IPU6_PSYS_SPC_STATUS_RUN BIT(3) +#define IPU6_PSYS_SPC_STATUS_READY BIT(5) +#define IPU6_PSYS_SPC_STATUS_CTRL_ICACHE_INVALIDATE BIT(12) +#define IPU6_PSYS_SPC_STATUS_ICACHE_PREFETCH BIT(13) + +#define IPU6_PSYS_REG_SPP0_STATUS_CTRL 0x20000 + +#define IPU6_INFO_ENABLE_SNOOP BIT(0) +#define IPU6_INFO_DEC_FORCE_FLUSH BIT(1) +#define IPU6_INFO_DEC_PASS_THROUGH BIT(2) +#define IPU6_INFO_ZLW BIT(3) +#define IPU6_INFO_REQUEST_DESTINATION_IOSF BIT(9) +#define IPU6_INFO_IMR_BASE BIT(10) +#define IPU6_INFO_IMR_DESTINED BIT(11) + +#define IPU6_INFO_REQUEST_DESTINATION_PRIMARY IPU6_INFO_REQUEST_DESTINATION_IOSF + +/* + * s2m_pixel_soc_pixel_remapping is dedicated for the enabling of the + * pixel s2m remp ability.Remap here means that s2m rearange the order + * of the pixels in each 4 pixels group. + * For examle, mirroring remping means that if input's 4 first pixels + * are 1 2 3 4 then in output we should see 4 3 2 1 in this 4 first pixels. + * 0xE4 is from s2m MAS document. It means no remapping. + */ +#define S2M_PIXEL_SOC_PIXEL_REMAPPING_FLAG_NO_REMAPPING 0xE4 +/* + * csi_be_soc_pixel_remapping is for the enabling of the pixel remapping. + * This remapping is exactly like the stream2mmio remapping. + */ +#define CSI_BE_SOC_PIXEL_REMAPPING_FLAG_NO_REMAPPING 0xE4 + +#define IPU6_REG_DMA_TOP_AB_GROUP1_BASE_ADDR 0x1ae000 +#define IPU6_REG_DMA_TOP_AB_GROUP2_BASE_ADDR 0x1af000 +#define IPU6_REG_DMA_TOP_AB_RING_MIN_OFFSET(n) (0x4 + (n) * 0xc) +#define IPU6_REG_DMA_TOP_AB_RING_MAX_OFFSET(n) (0x8 + (n) * 0xc) +#define IPU6_REG_DMA_TOP_AB_RING_ACCESS_OFFSET(n) (0xc + (n) * 0xc) + +enum ipu6_device_ab_group1_target_id { + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R0_SPC_DMEM, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R1_SPC_DMEM, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R2_SPC_DMEM, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R3_SPC_STATUS_REG, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R4_SPC_MASTER_BASE_ADDR, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R5_SPC_PC_STALL, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R6_SPC_EQ, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R7_SPC_RESERVED, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R8_SPC_RESERVED, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R9_SPP0, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R10_SPP1, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R11_CENTRAL_R1, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R12_IRQ, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R13_CENTRAL_R2, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R14_DMA, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R15_DMA, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R16_GP, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R17_ZLW_INSERTER, + IPU6_DEVICE_AB_GROUP1_TARGET_ID_R18_AB, +}; + +enum nci_ab_access_mode { + NCI_AB_ACCESS_MODE_RW, /* read & write */ + NCI_AB_ACCESS_MODE_RO, /* read only */ + NCI_AB_ACCESS_MODE_WO, /* write only */ + NCI_AB_ACCESS_MODE_NA /* No access at all */ +}; + +/* IRQ-related registers in PSYS */ +#define IPU6_REG_PSYS_GPDEV_IRQ_EDGE 0x1aa200 +#define IPU6_REG_PSYS_GPDEV_IRQ_MASK 0x1aa204 +#define IPU6_REG_PSYS_GPDEV_IRQ_STATUS 0x1aa208 +#define IPU6_REG_PSYS_GPDEV_IRQ_CLEAR 0x1aa20c +#define 
IPU6_REG_PSYS_GPDEV_IRQ_ENABLE 0x1aa210 +#define IPU6_REG_PSYS_GPDEV_IRQ_LEVEL_NOT_PULSE 0x1aa214 +/* There are 8 FW interrupts, n = 0..7 */ +#define IPU6_PSYS_GPDEV_FWIRQ0 5 +#define IPU6_PSYS_GPDEV_FWIRQ1 6 +#define IPU6_PSYS_GPDEV_FWIRQ2 7 +#define IPU6_PSYS_GPDEV_FWIRQ3 8 +#define IPU6_PSYS_GPDEV_FWIRQ4 9 +#define IPU6_PSYS_GPDEV_FWIRQ5 10 +#define IPU6_PSYS_GPDEV_FWIRQ6 11 +#define IPU6_PSYS_GPDEV_FWIRQ7 12 +#define IPU6_PSYS_GPDEV_IRQ_FWIRQ(n) (1 << (n)) +#define IPU6_REG_PSYS_GPDEV_FWIRQ(n) (4 * (n) + 0x1aa100) + +#endif /* IPU6_PLATFORM_REGS_H */ diff --git a/drivers/media/pci/intel/ipu6/ipu6-platform.h b/drivers/media/pci/intel/ipu6/ipu6-platform.h new file mode 100644 index 000000000000..f730a6797a9c --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-platform.h @@ -0,0 +1,31 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2013 - 2023 Intel Corporation */ + +#ifndef IPU6_PLATFORM_H +#define IPU6_PLATFORM_H + +#include "ipu6-fw-isys.h" + +#define IPU6_NAME "intel-ipu6" + +#define IPU6SE_FIRMWARE_NAME "intel/ipu6se_fw.bin" +#define IPU6EP_FIRMWARE_NAME "intel/ipu6ep_fw.bin" +#define IPU6_FIRMWARE_NAME "intel/ipu6_fw.bin" +#define IPU6EPMTL_FIRMWARE_NAME "intel/ipu6epmtl_fw.bin" + +/* + * The following definitions are encoded to the media_device's model field so + * that the software components which uses IPU6 driver can get the hw stepping + * information. + */ +#define IPU6_MEDIA_DEV_MODEL_NAME "ipu6" + +#define IPU6SE_ISYS_NUM_STREAMS IPU6SE_NONSECURE_STREAM_ID_MAX +#define IPU6_ISYS_NUM_STREAMS IPU6_NONSECURE_STREAM_ID_MAX + +extern struct ipu6_isys_internal_pdata isys_ipdata; +extern struct ipu6_psys_internal_pdata psys_ipdata; +extern const struct ipu6_buttress_ctrl isys_buttress_ctrl; +extern const struct ipu6_buttress_ctrl psys_buttress_ctrl; + +#endif diff --git a/drivers/media/pci/intel/ipu6/ipu6.c b/drivers/media/pci/intel/ipu6/ipu6.c new file mode 100644 index 000000000000..0c8026768827 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6.c @@ -0,0 +1,969 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2013 - 2023 Intel Corporation + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-buttress.h" +#include "ipu6-cpd.h" +#include "ipu6-isys.h" +#include "ipu6-mmu.h" +#include "ipu6-platform.h" +#include "ipu6-platform-buttress-regs.h" +#include "ipu6-platform-isys-csi2-reg.h" +#include "ipu6-platform-regs.h" + +#define IPU6_PCI_BAR 0 + +struct ipu6_cell_program_t { + u32 magic_number; + + u32 blob_offset; + u32 blob_size; + + u32 start[3]; + + u32 icache_source; + u32 icache_target; + u32 icache_size; + + u32 pmem_source; + u32 pmem_target; + u32 pmem_size; + + u32 data_source; + u32 data_target; + u32 data_size; + + u32 bss_target; + u32 bss_size; + + u32 cell_id; + u32 regs_addr; + + u32 cell_pmem_data_bus_address; + u32 cell_dmem_data_bus_address; + u32 cell_pmem_control_bus_address; + u32 cell_dmem_control_bus_address; + + u32 next; + u32 dummy[2]; +} __packed; + +static u32 ipu6se_csi_offsets[] = { + IPU6_CSI_PORT_A_ADDR_OFFSET, + IPU6_CSI_PORT_B_ADDR_OFFSET, + IPU6_CSI_PORT_C_ADDR_OFFSET, + IPU6_CSI_PORT_D_ADDR_OFFSET, +}; + +/* + * Only IPU6 on TGL support maximum 8 csi2 ports + * JSL, ADL and MTL support 4 maximum csi2 ports + * However, in real world, there are maximum 4 hardware MIPI + * ports for cameras, and some reference board design uses the + * CSI2 port 2 ~ 5 not 0 ~ 3, this is the reason that 
we need register + * 8 csi2 sub-devices instead of 4. + */ +static u32 ipu6_csi_offsets[] = { + IPU6_CSI_PORT_A_ADDR_OFFSET, + IPU6_CSI_PORT_B_ADDR_OFFSET, + IPU6_CSI_PORT_C_ADDR_OFFSET, + IPU6_CSI_PORT_D_ADDR_OFFSET, + IPU6_CSI_PORT_E_ADDR_OFFSET, + IPU6_CSI_PORT_F_ADDR_OFFSET, + IPU6_CSI_PORT_G_ADDR_OFFSET, + IPU6_CSI_PORT_H_ADDR_OFFSET +}; + +struct ipu6_isys_internal_pdata isys_ipdata = { + .hw_variant = { + .offset = IPU6_UNIFIED_OFFSET, + .nr_mmus = 3, + .mmu_hw = { + { + .offset = IPU6_ISYS_IOMMU0_OFFSET, + .info_bits = IPU6_INFO_REQUEST_DESTINATION_IOSF, + .nr_l1streams = 16, + .l1_block_sz = { + 3, 8, 2, 2, 2, 2, 2, 2, 1, 1, + 1, 1, 1, 1, 1, 1 + }, + .nr_l2streams = 16, + .l2_block_sz = { + 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, + 2, 2, 2, 2, 2, 2 + }, + .insert_read_before_invalidate = false, + .l1_stream_id_reg_offset = + IPU6_MMU_L1_STREAM_ID_REG_OFFSET, + .l2_stream_id_reg_offset = + IPU6_MMU_L2_STREAM_ID_REG_OFFSET, + }, + { + .offset = IPU6_ISYS_IOMMU1_OFFSET, + .info_bits = 0, + .nr_l1streams = 16, + .l1_block_sz = { + 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, + 2, 2, 2, 1, 1, 4 + }, + .nr_l2streams = 16, + .l2_block_sz = { + 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, + 2, 2, 2, 2, 2, 2 + }, + .insert_read_before_invalidate = false, + .l1_stream_id_reg_offset = + IPU6_MMU_L1_STREAM_ID_REG_OFFSET, + .l2_stream_id_reg_offset = + IPU6_MMU_L2_STREAM_ID_REG_OFFSET, + }, + { + .offset = IPU6_ISYS_IOMMUI_OFFSET, + .info_bits = 0, + .nr_l1streams = 0, + .nr_l2streams = 0, + .insert_read_before_invalidate = false, + }, + }, + .cdc_fifos = 3, + .cdc_fifo_threshold = {6, 8, 2}, + .dmem_offset = IPU6_ISYS_DMEM_OFFSET, + .spc_offset = IPU6_ISYS_SPC_OFFSET, + }, + .isys_dma_overshoot = IPU6_ISYS_OVERALLOC_MIN, +}; + +struct ipu6_psys_internal_pdata psys_ipdata = { + .hw_variant = { + .offset = IPU6_UNIFIED_OFFSET, + .nr_mmus = 4, + .mmu_hw = { + { + .offset = IPU6_PSYS_IOMMU0_OFFSET, + .info_bits = + IPU6_INFO_REQUEST_DESTINATION_IOSF, + .nr_l1streams = 16, + .l1_block_sz = { + 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, + 2, 2, 2, 2, 2, 2 + }, + .nr_l2streams = 16, + .l2_block_sz = { + 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, + 2, 2, 2, 2, 2, 2 + }, + .insert_read_before_invalidate = false, + .l1_stream_id_reg_offset = + IPU6_MMU_L1_STREAM_ID_REG_OFFSET, + .l2_stream_id_reg_offset = + IPU6_MMU_L2_STREAM_ID_REG_OFFSET, + }, + { + .offset = IPU6_PSYS_IOMMU1_OFFSET, + .info_bits = 0, + .nr_l1streams = 32, + .l1_block_sz = { + 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, + 2, 2, 2, 2, 2, 10, + 5, 4, 14, 6, 4, 14, 6, 4, 8, + 4, 2, 1, 1, 1, 1, 14 + }, + .nr_l2streams = 32, + .l2_block_sz = { + 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, + 2, 2, 2, 2, 2, 2, + 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, + 2, 2, 2, 2, 2, 2 + }, + .insert_read_before_invalidate = false, + .l1_stream_id_reg_offset = + IPU6_MMU_L1_STREAM_ID_REG_OFFSET, + .l2_stream_id_reg_offset = + IPU6_PSYS_MMU1W_L2_STREAM_ID_REG_OFFSET, + }, + { + .offset = IPU6_PSYS_IOMMU1R_OFFSET, + .info_bits = 0, + .nr_l1streams = 16, + .l1_block_sz = { + 1, 4, 4, 4, 4, 16, 8, 4, 32, + 16, 16, 2, 2, 2, 1, 12 + }, + .nr_l2streams = 16, + .l2_block_sz = { + 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, + 2, 2, 2, 2, 2, 2 + }, + .insert_read_before_invalidate = false, + .l1_stream_id_reg_offset = + IPU6_MMU_L1_STREAM_ID_REG_OFFSET, + .l2_stream_id_reg_offset = + IPU6_MMU_L2_STREAM_ID_REG_OFFSET, + }, + { + .offset = IPU6_PSYS_IOMMUI_OFFSET, + .info_bits = 0, + .nr_l1streams = 0, + .nr_l2streams = 0, + .insert_read_before_invalidate = false, + }, + }, + .dmem_offset = IPU6_PSYS_DMEM_OFFSET, + }, +}; + +const struct ipu6_buttress_ctrl isys_buttress_ctrl = { + 
.ratio = IPU6_IS_FREQ_CTL_DEFAULT_RATIO, + .qos_floor = IPU6_IS_FREQ_CTL_DEFAULT_QOS_FLOOR_RATIO, + .freq_ctl = IPU6_BUTTRESS_REG_IS_FREQ_CTL, + .pwr_sts_shift = IPU6_BUTTRESS_PWR_STATE_IS_PWR_SHIFT, + .pwr_sts_mask = IPU6_BUTTRESS_PWR_STATE_IS_PWR_MASK, + .pwr_sts_on = IPU6_BUTTRESS_PWR_STATE_UP_DONE, + .pwr_sts_off = IPU6_BUTTRESS_PWR_STATE_DN_DONE, +}; + +const struct ipu6_buttress_ctrl psys_buttress_ctrl = { + .ratio = IPU6_PS_FREQ_CTL_DEFAULT_RATIO, + .qos_floor = IPU6_PS_FREQ_CTL_DEFAULT_QOS_FLOOR_RATIO, + .freq_ctl = IPU6_BUTTRESS_REG_PS_FREQ_CTL, + .pwr_sts_shift = IPU6_BUTTRESS_PWR_STATE_PS_PWR_SHIFT, + .pwr_sts_mask = IPU6_BUTTRESS_PWR_STATE_PS_PWR_MASK, + .pwr_sts_on = IPU6_BUTTRESS_PWR_STATE_UP_DONE, + .pwr_sts_off = IPU6_BUTTRESS_PWR_STATE_DN_DONE, +}; + +static void +ipu6_pkg_dir_configure_spc(struct ipu6_device *isp, + const struct ipu6_hw_variants *hw_variant, + int pkg_dir_idx, void __iomem *base, + u64 *pkg_dir, dma_addr_t pkg_dir_vied_address) +{ + u32 server_fw_addr; + struct ipu6_cell_program_t *prog; + void __iomem *spc_base; + dma_addr_t dma_addr; + + server_fw_addr = lower_32_bits(*(pkg_dir + (pkg_dir_idx + 1) * 2)); + if (pkg_dir_idx == IPU6_CPD_PKG_DIR_ISYS_SERVER_IDX) + dma_addr = sg_dma_address(isp->isys->fw_sgt.sgl); + else + dma_addr = sg_dma_address(isp->psys->fw_sgt.sgl); + + prog = (struct ipu6_cell_program_t *)((u64)isp->cpd_fw->data + + (server_fw_addr - + dma_addr)); + spc_base = base + prog->regs_addr; + if (spc_base != (base + hw_variant->spc_offset)) + dev_warn(&isp->pdev->dev, + "SPC reg addr %p not matching value from CPD %p\n", + base + hw_variant->spc_offset, spc_base); + writel(server_fw_addr + prog->blob_offset + + prog->icache_source, spc_base + IPU6_PSYS_REG_SPC_ICACHE_BASE); + writel(IPU6_INFO_REQUEST_DESTINATION_IOSF, + spc_base + IPU6_REG_PSYS_INFO_SEG_0_CONFIG_ICACHE_MASTER); + writel(prog->start[1], spc_base + IPU6_PSYS_REG_SPC_START_PC); + writel(pkg_dir_vied_address, base + hw_variant->dmem_offset); +} + +void ipu6_configure_spc(struct ipu6_device *isp, + const struct ipu6_hw_variants *hw_variant, + int pkg_dir_idx, void __iomem *base, u64 *pkg_dir, + dma_addr_t pkg_dir_dma_addr) +{ + void __iomem *dmem_base = base + hw_variant->dmem_offset; + void __iomem *spc_regs_base = base + hw_variant->spc_offset; + u32 val; + + val = readl(spc_regs_base + IPU6_PSYS_REG_SPC_STATUS_CTRL); + val |= IPU6_PSYS_SPC_STATUS_CTRL_ICACHE_INVALIDATE; + writel(val, spc_regs_base + IPU6_PSYS_REG_SPC_STATUS_CTRL); + + if (isp->secure_mode) + writel(IPU6_PKG_DIR_IMR_OFFSET, dmem_base); + else + ipu6_pkg_dir_configure_spc(isp, hw_variant, pkg_dir_idx, base, + pkg_dir, pkg_dir_dma_addr); +} +EXPORT_SYMBOL_NS_GPL(ipu6_configure_spc, INTEL_IPU6); + +static void ipu6_internal_pdata_init(struct ipu6_device *isp) +{ + u8 hw_ver = isp->hw_ver; + + isys_ipdata.num_parallel_streams = IPU6_ISYS_NUM_STREAMS; + isys_ipdata.sram_gran_shift = IPU6_SRAM_GRANULARITY_SHIFT; + isys_ipdata.sram_gran_size = IPU6_SRAM_GRANULARITY_SIZE; + isys_ipdata.max_sram_size = IPU6_MAX_SRAM_SIZE; + isys_ipdata.sensor_type_start = IPU6_FW_ISYS_SENSOR_TYPE_START; + isys_ipdata.sensor_type_end = IPU6_FW_ISYS_SENSOR_TYPE_END; + isys_ipdata.max_streams = IPU6_ISYS_NUM_STREAMS; + isys_ipdata.max_send_queues = IPU6_N_MAX_SEND_QUEUES; + isys_ipdata.max_sram_blocks = IPU6_NOF_SRAM_BLOCKS_MAX; + isys_ipdata.max_devq_size = IPU6_DEV_SEND_QUEUE_SIZE; + isys_ipdata.csi2.nports = ARRAY_SIZE(ipu6_csi_offsets); + isys_ipdata.csi2.offsets = ipu6_csi_offsets; + isys_ipdata.csi2.irq_mask = 
IPU6_CSI_RX_ERROR_IRQ_MASK; + isys_ipdata.csi2.ctrl0_irq_edge = IPU6_REG_ISYS_CSI_TOP_CTRL0_IRQ_EDGE; + isys_ipdata.csi2.ctrl0_irq_clear = + IPU6_REG_ISYS_CSI_TOP_CTRL0_IRQ_CLEAR; + isys_ipdata.csi2.ctrl0_irq_mask = IPU6_REG_ISYS_CSI_TOP_CTRL0_IRQ_MASK; + isys_ipdata.csi2.ctrl0_irq_enable = + IPU6_REG_ISYS_CSI_TOP_CTRL0_IRQ_ENABLE; + isys_ipdata.csi2.ctrl0_irq_status = + IPU6_REG_ISYS_CSI_TOP_CTRL0_IRQ_STATUS; + isys_ipdata.csi2.ctrl0_irq_lnp = + IPU6_REG_ISYS_CSI_TOP_CTRL0_IRQ_LEVEL_NOT_PULSE; + isys_ipdata.enhanced_iwake = is_ipu6ep_mtl(hw_ver) || is_ipu6ep(hw_ver); + psys_ipdata.hw_variant.spc_offset = IPU6_PSYS_SPC_OFFSET; + isys_ipdata.csi2.fw_access_port_ofs = CSI_REG_HUB_FW_ACCESS_PORT_OFS; + + if (is_ipu6ep(hw_ver)) { + isys_ipdata.ltr = IPU6EP_LTR_VALUE; + isys_ipdata.memopen_threshold = IPU6EP_MIN_MEMOPEN_TH; + } + + if (is_ipu6ep_mtl(hw_ver)) { + isys_ipdata.csi2.ctrl0_irq_edge = + IPU6V6_REG_ISYS_CSI_TOP_CTRL0_IRQ_EDGE; + isys_ipdata.csi2.ctrl0_irq_clear = + IPU6V6_REG_ISYS_CSI_TOP_CTRL0_IRQ_CLEAR; + isys_ipdata.csi2.ctrl0_irq_mask = + IPU6V6_REG_ISYS_CSI_TOP_CTRL0_IRQ_MASK; + isys_ipdata.csi2.ctrl0_irq_enable = + IPU6V6_REG_ISYS_CSI_TOP_CTRL0_IRQ_ENABLE; + isys_ipdata.csi2.ctrl0_irq_lnp = + IPU6V6_REG_ISYS_CSI_TOP_CTRL0_IRQ_LEVEL_NOT_PULSE; + isys_ipdata.csi2.ctrl0_irq_status = + IPU6V6_REG_ISYS_CSI_TOP_CTRL0_IRQ_STATUS; + isys_ipdata.csi2.fw_access_port_ofs = + CSI_REG_HUB_FW_ACCESS_PORT_V6OFS; + isys_ipdata.ltr = IPU6EP_MTL_LTR_VALUE; + isys_ipdata.memopen_threshold = IPU6EP_MTL_MIN_MEMOPEN_TH; + } + + if (is_ipu6se(hw_ver)) { + isys_ipdata.csi2.nports = ARRAY_SIZE(ipu6se_csi_offsets); + isys_ipdata.csi2.irq_mask = IPU6SE_CSI_RX_ERROR_IRQ_MASK; + isys_ipdata.csi2.offsets = ipu6se_csi_offsets; + isys_ipdata.num_parallel_streams = IPU6SE_ISYS_NUM_STREAMS; + isys_ipdata.sram_gran_shift = IPU6SE_SRAM_GRANULARITY_SHIFT; + isys_ipdata.sram_gran_size = IPU6SE_SRAM_GRANULARITY_SIZE; + isys_ipdata.max_sram_size = IPU6SE_MAX_SRAM_SIZE; + isys_ipdata.sensor_type_start = + IPU6SE_FW_ISYS_SENSOR_TYPE_START; + isys_ipdata.sensor_type_end = IPU6SE_FW_ISYS_SENSOR_TYPE_END; + isys_ipdata.max_streams = IPU6SE_ISYS_NUM_STREAMS; + isys_ipdata.max_send_queues = IPU6SE_N_MAX_SEND_QUEUES; + isys_ipdata.max_sram_blocks = IPU6SE_NOF_SRAM_BLOCKS_MAX; + isys_ipdata.max_devq_size = IPU6SE_DEV_SEND_QUEUE_SIZE; + psys_ipdata.hw_variant.spc_offset = IPU6SE_PSYS_SPC_OFFSET; + } +} + +static int ipu6_isys_check_fwnode_graph(struct fwnode_handle *fwnode) +{ + struct fwnode_handle *endpoint; + + if (IS_ERR_OR_NULL(fwnode)) + return -EINVAL; + + endpoint = fwnode_graph_get_next_endpoint(fwnode, NULL); + if (endpoint) { + fwnode_handle_put(endpoint); + return 0; + } + + return ipu6_isys_check_fwnode_graph(fwnode->secondary); +} + +static struct ipu6_bus_device * +ipu6_isys_init(struct pci_dev *pdev, struct device *parent, + struct ipu6_buttress_ctrl *ctrl, void __iomem *base, + const struct ipu6_isys_internal_pdata *ipdata) +{ + struct fwnode_handle *fwnode = dev_fwnode(&pdev->dev); + struct ipu6_bus_device *isys_adev; + struct ipu6_isys_pdata *pdata; + int ret; + + ret = ipu6_isys_check_fwnode_graph(fwnode); + if (ret) { + if (fwnode && !IS_ERR_OR_NULL(fwnode->secondary)) { + dev_err(&pdev->dev, + "fwnode graph has no endpoints connection\n"); + return ERR_PTR(-EINVAL); + } + /* TODO: use bridge to register software nodes */ + } + + pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); + if (!pdata) + return ERR_PTR(-ENOMEM); + + pdata->base = base; + pdata->ipdata = ipdata; + + isys_adev = 
ipu6_bus_initialize_device(pdev, parent, pdata, ctrl, + IPU6_ISYS_NAME); + if (IS_ERR(isys_adev)) { + dev_err_probe(&pdev->dev, PTR_ERR(isys_adev), + "ipu6_bus_add_device(isys_adev) failed\n"); + return ERR_CAST(isys_adev); + } + + isys_adev->mmu = ipu6_mmu_init(&pdev->dev, base, ISYS_MMID, + &ipdata->hw_variant); + if (IS_ERR(isys_adev->mmu)) { + dev_err_probe(&pdev->dev, PTR_ERR(isys_adev), + "ipu6_mmu_init(isys_adev->mmu) failed\n"); + return ERR_CAST(isys_adev->mmu); + } + + isys_adev->mmu->dev = &isys_adev->dev; + + ret = ipu6_bus_add_device(isys_adev); + + return ret ? ERR_PTR(ret) : isys_adev; +} + +static struct ipu6_bus_device * +ipu6_psys_init(struct pci_dev *pdev, struct device *parent, + struct ipu6_buttress_ctrl *ctrl, void __iomem *base, + const struct ipu6_psys_internal_pdata *ipdata) +{ + struct ipu6_bus_device *psys_adev; + struct ipu6_psys_pdata *pdata; + int ret; + + pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); + if (!pdata) + return ERR_PTR(-ENOMEM); + + pdata->base = base; + pdata->ipdata = ipdata; + + psys_adev = ipu6_bus_initialize_device(pdev, parent, pdata, ctrl, + IPU6_PSYS_NAME); + if (IS_ERR(psys_adev)) { + dev_err_probe(&pdev->dev, PTR_ERR(psys_adev), + "ipu6_bus_add_device(psys_adev) failed\n"); + return ERR_CAST(psys_adev); + } + + psys_adev->mmu = ipu6_mmu_init(&pdev->dev, base, PSYS_MMID, + &ipdata->hw_variant); + if (IS_ERR(psys_adev->mmu)) { + dev_err_probe(&pdev->dev, PTR_ERR(psys_adev), + "ipu6_mmu_init(psys_adev->mmu) failed\n"); + return ERR_CAST(psys_adev->mmu); + } + + psys_adev->mmu->dev = &psys_adev->dev; + + ret = ipu6_bus_add_device(psys_adev); + + return ret ? ERR_PTR(ret) : psys_adev; +} + +static int ipu6_pci_config_setup(struct pci_dev *dev, u8 hw_ver) +{ + u16 pci_command; + int ret; + + pci_read_config_word(dev, PCI_COMMAND, &pci_command); + pci_command |= PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER; + pci_write_config_word(dev, PCI_COMMAND, pci_command); + + /* No PCI msi capability for IPU6EP */ + if (hw_ver == IPU6_VER_6EP || hw_ver == IPU6_VER_6EP_MTL) { + /* likely do nothing as msi not enabled by default */ + pci_disable_msi(dev); + return 0; + } + + ret = pci_enable_msi(dev); + if (ret) + dev_err(&dev->dev, "Failed to enable msi (%d)\n", ret); + + return ret; +} + +static void ipu6_configure_vc_mechanism(struct ipu6_device *isp) +{ + u32 val = readl(isp->base + BUTTRESS_REG_BTRS_CTRL); + + if (IPU6_BTRS_ARB_STALL_MODE_VC0 == IPU6_BTRS_ARB_MODE_TYPE_STALL) + val |= BUTTRESS_REG_BTRS_CTRL_STALL_MODE_VC0; + else + val &= ~BUTTRESS_REG_BTRS_CTRL_STALL_MODE_VC0; + + if (IPU6_BTRS_ARB_STALL_MODE_VC1 == IPU6_BTRS_ARB_MODE_TYPE_STALL) + val |= BUTTRESS_REG_BTRS_CTRL_STALL_MODE_VC1; + else + val &= ~BUTTRESS_REG_BTRS_CTRL_STALL_MODE_VC1; + + writel(val, isp->base + BUTTRESS_REG_BTRS_CTRL); +} + +static int request_cpd_fw(const struct firmware **firmware_p, const char *name, + struct device *device) +{ + const struct firmware *fw; + struct firmware *dst; + int ret = 0; + + ret = request_firmware(&fw, name, device); + if (ret) + return ret; + + if (is_vmalloc_addr(fw->data)) { + *firmware_p = fw; + return 0; + } + + dst = kzalloc(sizeof(*dst), GFP_KERNEL); + if (!dst) { + ret = -ENOMEM; + goto release_firmware; + } + + dst->size = fw->size; + dst->data = vmalloc(fw->size); + if (!dst->data) { + kfree(dst); + ret = -ENOMEM; + goto release_firmware; + } + + memcpy((void *)dst->data, fw->data, fw->size); + *firmware_p = dst; + +release_firmware: + release_firmware(fw); + + return ret; +} + +static int ipu6_pci_probe(struct pci_dev 
*pdev, const struct pci_device_id *id) +{ + struct ipu6_buttress_ctrl *isys_ctrl = NULL, *psys_ctrl = NULL; + void __iomem *isys_base = NULL; + void __iomem *psys_base = NULL; + struct ipu6_device *isp; + phys_addr_t phys; + void __iomem *const *iomap; + int ret; + u32 val, version, sku_id; + + isp = devm_kzalloc(&pdev->dev, sizeof(*isp), GFP_KERNEL); + if (!isp) + return -ENOMEM; + + isp->pdev = pdev; + INIT_LIST_HEAD(&isp->devices); + + ret = pcim_enable_device(pdev); + if (ret) { + dev_err(&pdev->dev, "Failed to enable PCI device (%d)\n", ret); + return ret; + } + + dev_info(&pdev->dev, "Device 0x%x (rev: 0x%x)\n", + pdev->device, pdev->revision); + + phys = pci_resource_start(pdev, IPU6_PCI_BAR); + + ret = pcim_iomap_regions(pdev, 1 << IPU6_PCI_BAR, pci_name(pdev)); + if (ret) { + dev_err(&pdev->dev, "Failed to I/O mem remapping (%d)\n", ret); + return ret; + } + dev_dbg(&pdev->dev, "physical base address 0x%llx\n", phys); + + iomap = pcim_iomap_table(pdev); + if (!iomap) { + dev_err(&pdev->dev, "Failed to iomap table (%d)\n", ret); + return -ENODEV; + } + + isp->base = iomap[IPU6_PCI_BAR]; + + pci_set_drvdata(pdev, isp); + pci_set_master(pdev); + + isp->cpd_metadata_cmpnt_size = sizeof(struct ipu6_cpd_metadata_cmpnt); + switch (id->device) { + case IPU6_PCI_ID: + isp->hw_ver = IPU6_VER_6; + isp->cpd_fw_name = IPU6_FIRMWARE_NAME; + break; + case IPU6SE_PCI_ID: + isp->hw_ver = IPU6_VER_6SE; + isp->cpd_fw_name = IPU6SE_FIRMWARE_NAME; + isp->cpd_metadata_cmpnt_size = + sizeof(struct ipu6se_cpd_metadata_cmpnt); + break; + case IPU6EP_ADL_P_PCI_ID: + case IPU6EP_ADL_N_PCI_ID: + case IPU6EP_RPL_P_PCI_ID: + isp->hw_ver = IPU6_VER_6EP; + isp->cpd_fw_name = IPU6EP_FIRMWARE_NAME; + break; + case IPU6EP_MTL_PCI_ID: + isp->hw_ver = IPU6_VER_6EP_MTL; + isp->cpd_fw_name = IPU6EPMTL_FIRMWARE_NAME; + break; + default: + dev_err(&pdev->dev, "Unsupported IPU6 device %x\n", id->device); + return -ENODEV; + } + + ipu6_internal_pdata_init(isp); + + isys_base = isp->base + isys_ipdata.hw_variant.offset; + psys_base = isp->base + psys_ipdata.hw_variant.offset; + + ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(39)); + if (ret) { + dev_err(&pdev->dev, "Failed to set DMA mask (%d)\n", ret); + return ret; + } + + dma_set_max_seg_size(&pdev->dev, UINT_MAX); + + ret = ipu6_pci_config_setup(pdev, isp->hw_ver); + if (ret) + return ret; + + ret = devm_request_threaded_irq(&pdev->dev, pdev->irq, + ipu6_buttress_isr, + ipu6_buttress_isr_threaded, + IRQF_SHARED, IPU6_NAME, isp); + if (ret) { + dev_err(&pdev->dev, "Requesting irq failed(%d)\n", ret); + return ret; + } + + ret = ipu6_buttress_init(isp); + if (ret) + return ret; + + dev_info(&pdev->dev, "cpd file name: %s\n", isp->cpd_fw_name); + + ret = request_cpd_fw(&isp->cpd_fw, isp->cpd_fw_name, &pdev->dev); + if (ret) { + dev_err(&isp->pdev->dev, "Requesting signed firmware failed\n"); + goto buttress_exit; + } + + ret = ipu6_cpd_validate_cpd_file(isp, isp->cpd_fw->data, + isp->cpd_fw->size); + if (ret) { + dev_err(&isp->pdev->dev, "Failed to validate cpd\n"); + goto out_ipu6_bus_del_devices; + } + + isys_ctrl = devm_kzalloc(&pdev->dev, sizeof(*isys_ctrl), GFP_KERNEL); + if (!isys_ctrl) { + ret = -ENOMEM; + goto out_ipu6_bus_del_devices; + } + + memcpy(isys_ctrl, &isys_buttress_ctrl, sizeof(*isys_ctrl)); + + isp->isys = ipu6_isys_init(pdev, &pdev->dev, isys_ctrl, isys_base, + &isys_ipdata); + if (IS_ERR(isp->isys)) { + ret = PTR_ERR(isp->isys); + goto out_ipu6_bus_del_devices; + } + + psys_ctrl = devm_kzalloc(&pdev->dev, sizeof(*psys_ctrl), GFP_KERNEL); + 
if (!psys_ctrl) { + ret = -ENOMEM; + goto out_ipu6_bus_del_devices; + } + + memcpy(psys_ctrl, &psys_buttress_ctrl, sizeof(*psys_ctrl)); + + isp->psys = ipu6_psys_init(pdev, &isp->isys->dev, psys_ctrl, + psys_base, &psys_ipdata); + if (IS_ERR(isp->psys)) { + ret = PTR_ERR(isp->psys); + goto out_ipu6_bus_del_devices; + } + + ret = pm_runtime_get_sync(&isp->psys->dev); + if (ret < 0) { + dev_err(&isp->psys->dev, "Failed to get runtime PM\n"); + goto out_ipu6_bus_del_devices; + } + + ret = ipu6_mmu_hw_init(isp->psys->mmu); + if (ret) { + dev_err(&isp->pdev->dev, "Failed to set MMU hardware\n"); + goto out_ipu6_bus_del_devices; + } + + ret = ipu6_buttress_map_fw_image(isp->psys, isp->cpd_fw, + &isp->psys->fw_sgt); + if (ret) { + dev_err(&isp->pdev->dev, "failed to map fw image\n"); + goto out_ipu6_bus_del_devices; + } + + ret = ipu6_cpd_create_pkg_dir(isp->psys, isp->cpd_fw->data); + if (ret) { + dev_err(&isp->pdev->dev, "failed to create pkg dir\n"); + goto out_ipu6_bus_del_devices; + } + + ret = ipu6_buttress_authenticate(isp); + if (ret) { + dev_err(&isp->pdev->dev, "FW authentication failed(%d)\n", + ret); + goto out_ipu6_bus_del_devices; + } + + ipu6_mmu_hw_cleanup(isp->psys->mmu); + pm_runtime_put(&isp->psys->dev); + + /* Configure the arbitration mechanisms for VC requests */ + ipu6_configure_vc_mechanism(isp); + + val = readl(isp->base + BUTTRESS_REG_SKU); + sku_id = FIELD_GET(GENMASK(6, 4), val); + version = FIELD_GET(GENMASK(3, 0), val); + dev_info(&pdev->dev, "IPU%u-v%u hardware version %d\n", version, sku_id, + isp->hw_ver); + + pm_runtime_put_noidle(&pdev->dev); + pm_runtime_allow(&pdev->dev); + + isp->bus_ready_to_probe = true; + + return 0; + +out_ipu6_bus_del_devices: + if (isp->psys) { + ipu6_cpd_free_pkg_dir(isp->psys); + ipu6_buttress_unmap_fw_image(isp->psys, &isp->psys->fw_sgt); + } + if (!IS_ERR_OR_NULL(isp->psys) && !IS_ERR_OR_NULL(isp->psys->mmu)) + ipu6_mmu_cleanup(isp->psys->mmu); + if (!IS_ERR_OR_NULL(isp->isys) && !IS_ERR_OR_NULL(isp->isys->mmu)) + ipu6_mmu_cleanup(isp->isys->mmu); + if (!IS_ERR_OR_NULL(isp->psys)) + pm_runtime_put(&isp->psys->dev); + ipu6_bus_del_devices(pdev); + release_firmware(isp->cpd_fw); +buttress_exit: + ipu6_buttress_exit(isp); + + return ret; +} + +static void ipu6_pci_remove(struct pci_dev *pdev) +{ + struct ipu6_device *isp = pci_get_drvdata(pdev); + + ipu6_cpd_free_pkg_dir(isp->psys); + + ipu6_buttress_unmap_fw_image(isp->psys, &isp->psys->fw_sgt); + + ipu6_bus_del_devices(pdev); + + pm_runtime_forbid(&pdev->dev); + pm_runtime_get_noresume(&pdev->dev); + + pci_release_regions(pdev); + pci_disable_device(pdev); + + ipu6_buttress_exit(isp); + + release_firmware(isp->cpd_fw); + + ipu6_mmu_cleanup(isp->psys->mmu); + ipu6_mmu_cleanup(isp->isys->mmu); +} + +static void ipu6_pci_reset_prepare(struct pci_dev *pdev) +{ + struct ipu6_device *isp = pci_get_drvdata(pdev); + + dev_warn(&pdev->dev, "FLR prepare\n"); + pm_runtime_forbid(&isp->pdev->dev); + isp->flr_done = true; +} + +static void ipu6_pci_reset_done(struct pci_dev *pdev) +{ + struct ipu6_device *isp = pci_get_drvdata(pdev); + + ipu6_buttress_restore(isp); + if (isp->secure_mode) + ipu6_buttress_reset_authentication(isp); + + ipu6_bus_flr_recovery(); + isp->need_ipc_reset = true; + pm_runtime_allow(&isp->pdev->dev); + + dev_info(&pdev->dev, "IPU6 PCI FLR completed\n"); +} + +/* + * PCI base driver code requires driver to provide these to enable + * PCI device level PM state transitions (D0<->D3) + */ +static int ipu6_suspend(struct device *dev) +{ + struct pci_dev *pdev = 
to_pci_dev(dev); + struct ipu6_device *isp = pci_get_drvdata(pdev); + + isp->flr_done = false; + + return 0; +} + +static int ipu6_resume(struct device *dev) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct ipu6_device *isp = pci_get_drvdata(pdev); + struct ipu6_buttress *b = &isp->buttress; + int ret; + + /* Configure the arbitration mechanisms for VC requests */ + ipu6_configure_vc_mechanism(isp); + + isp->secure_mode = ipu6_buttress_get_secure_mode(isp); + dev_info(dev, "IPU6 in %s mode\n", + isp->secure_mode ? "secure" : "non-secure"); + + ipu6_buttress_restore(isp); + + ret = ipu6_buttress_ipc_reset(isp, &b->cse); + if (ret) + dev_err(&isp->pdev->dev, "IPC reset protocol failed!\n"); + + ret = pm_runtime_resume_and_get(&isp->psys->dev); + if (ret < 0) { + dev_err(&isp->psys->dev, "Failed to get runtime PM\n"); + return 0; + } + + ret = ipu6_buttress_authenticate(isp); + if (ret) + dev_err(&isp->pdev->dev, "FW authentication failed(%d)\n", ret); + + pm_runtime_put(&isp->psys->dev); + + return 0; +} + +static int ipu6_runtime_resume(struct device *dev) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct ipu6_device *isp = pci_get_drvdata(pdev); + int ret; + + ipu6_configure_vc_mechanism(isp); + ipu6_buttress_restore(isp); + + if (isp->need_ipc_reset) { + struct ipu6_buttress *b = &isp->buttress; + + isp->need_ipc_reset = false; + ret = ipu6_buttress_ipc_reset(isp, &b->cse); + if (ret) + dev_err(&isp->pdev->dev, "IPC reset protocol failed\n"); + } + + return 0; +} + +static const struct dev_pm_ops ipu6_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(&ipu6_suspend, &ipu6_resume) + SET_RUNTIME_PM_OPS(&ipu6_suspend, &ipu6_runtime_resume, NULL) +}; + +static const struct pci_device_id ipu6_pci_tbl[] = { + { PCI_VDEVICE(INTEL, IPU6_PCI_ID) }, + { PCI_VDEVICE(INTEL, IPU6SE_PCI_ID) }, + { PCI_VDEVICE(INTEL, IPU6EP_ADL_P_PCI_ID) }, + { PCI_VDEVICE(INTEL, IPU6EP_ADL_N_PCI_ID) }, + { PCI_VDEVICE(INTEL, IPU6EP_RPL_P_PCI_ID) }, + { PCI_VDEVICE(INTEL, IPU6EP_MTL_PCI_ID) }, + { } +}; +MODULE_DEVICE_TABLE(pci, ipu6_pci_tbl); + +static const struct pci_error_handlers pci_err_handlers = { + .reset_prepare = ipu6_pci_reset_prepare, + .reset_done = ipu6_pci_reset_done, +}; + +static struct pci_driver ipu6_pci_driver = { + .name = IPU6_NAME, + .id_table = ipu6_pci_tbl, + .probe = ipu6_pci_probe, + .remove = ipu6_pci_remove, + .driver = { + .pm = &ipu6_pm_ops, + }, + .err_handler = &pci_err_handlers, +}; + +static int __init ipu6_init(void) +{ + int ret = ipu6_bus_register(); + + if (ret) { + pr_warn("can't register IPU6 bus (%d)\n", ret); + return ret; + } + + ret = pci_register_driver(&ipu6_pci_driver); + if (ret) { + pr_warn("can't register PCI driver (%d)\n", ret); + goto out_bus_unregister; + } + + return 0; + +out_bus_unregister: + ipu6_bus_unregister(); + + return ret; +} + +static void __exit ipu6_exit(void) +{ + pci_unregister_driver(&ipu6_pci_driver); + ipu6_bus_unregister(); +} + +module_init(ipu6_init); +module_exit(ipu6_exit); + +MODULE_AUTHOR("Sakari Ailus "); +MODULE_AUTHOR("Tianshu Qiu "); +MODULE_AUTHOR("Bingbu Cao "); +MODULE_AUTHOR("Qingwu Zhang "); +MODULE_AUTHOR("Yunliang Ding "); +MODULE_AUTHOR("Hongju Wang "); +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Intel IPU6 PCI driver"); diff --git a/drivers/media/pci/intel/ipu6/ipu6.h b/drivers/media/pci/intel/ipu6/ipu6.h new file mode 100644 index 000000000000..95911a365c48 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6.h @@ -0,0 +1,344 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2013 - 2023 Intel Corporation */ + 
+#ifndef IPU6_H +#define IPU6_H + +#include +#include +#include +#include + +#include "ipu6-buttress.h" + +#define IPU6_PCI_ID 0x9a19 +#define IPU6SE_PCI_ID 0x4e19 +#define IPU6EP_ADL_P_PCI_ID 0x465d +#define IPU6EP_ADL_N_PCI_ID 0x462e +#define IPU6EP_RPL_P_PCI_ID 0xa75d +#define IPU6EP_MTL_PCI_ID 0x7d19 + +enum ipu6_version { + IPU6_VER_INVALID = 0, + IPU6_VER_6 = 1, + IPU6_VER_6SE = 3, + IPU6_VER_6EP = 5, + IPU6_VER_6EP_MTL = 6, +}; + +/* + * IPU6 - TGL + * IPU6SE - JSL + * IPU6EP - ADL/RPL + * IPU6EP_MTL - MTL + */ +static inline bool is_ipu6se(u8 hw_ver) +{ + return hw_ver == IPU6_VER_6SE; +} + +static inline bool is_ipu6ep(u8 hw_ver) +{ + return hw_ver == IPU6_VER_6EP; +} + +static inline bool is_ipu6ep_mtl(u8 hw_ver) +{ + return hw_ver == IPU6_VER_6EP_MTL; +} + +/* + * ISYS DMA can overshoot. For higher resolutions over allocation is one line + * but it must be at minimum 1024 bytes. Value could be different in + * different versions / generations thus provide it via platform data. + */ +#define IPU6_ISYS_OVERALLOC_MIN 1024 + +/* Physical pages in GDA is 128, page size is 2K for IPU6, 1K for others */ +#define IPU6_DEVICE_GDA_NR_PAGES 128 + +/* Virtualization factor to calculate the available virtual pages */ +#define IPU6_DEVICE_GDA_VIRT_FACTOR 32 + +#define NR_OF_MMU_RESOURCES 2 + +struct ipu6_device { + struct pci_dev *pdev; + struct list_head devices; + struct ipu6_bus_device *isys; + struct ipu6_bus_device *psys; + struct ipu6_buttress buttress; + + const struct firmware *cpd_fw; + const char *cpd_fw_name; + u32 cpd_metadata_cmpnt_size; + + void __iomem *base; + struct ipu6_trace *trace; + bool flr_done; + bool need_ipc_reset; + bool secure_mode; + u8 hw_ver; + bool bus_ready_to_probe; +}; + +#define IPU6_FW_CALL_TIMEOUT_MS 2000 + +#define IPU6_ISYS_NAME IPU6_NAME "-isys" +#define IPU6_PSYS_NAME IPU6_NAME "-psys" + +#define IPU6_MMU_MAX_DEVICES 4 +#define IPU6_MMU_ADDR_BITS 32 +/* The firmware is accessible within the first 2 GiB only in non-secure mode. */ +#define IPU6_MMU_ADDR_BITS_NON_SECURE 31 + +#define IPU6_MMU_MAX_TLB_L1_STREAMS 32 +#define IPU6_MMU_MAX_TLB_L2_STREAMS 32 +#define IPU6_MAX_LI_BLOCK_ADDR 128 +#define IPU6_MAX_L2_BLOCK_ADDR 64 + +#define IPU6_ISYS_MAX_CSI2_LEGACY_PORTS 4 +#define IPU6_ISYS_MAX_CSI2_COMBO_PORTS 2 + +#define IPU6_MAX_FRAME_COUNTER 0xff + +/* + * To maximize the IOSF utlization, IPU6 need to send requests in bursts. + * At the DMA interface with the buttress, there are CDC FIFOs with burst + * collection capability. CDC FIFO burst collectors have a configurable + * threshold and is configured based on the outcome of performance measurements. + * + * isys has 3 ports with IOSF interface for VC0, VC1 and VC2 + * psys has 4 ports with IOSF interface for VC0, VC1w, VC1r and VC2 + * + * Threshold values are pre-defined and are arrived at after performance + * evaluations on a type of IPU6 + */ +#define IPU6_MAX_VC_IOSF_PORTS 4 + +/* + * IPU6 must configure correct arbitration mechanism related to the IOSF VC + * requests. There are two options per VC0 and VC1 - > 0 means rearbitrate on + * stall and 1 means stall until the request is completed. 
+ */ +#define IPU6_BTRS_ARB_MODE_TYPE_REARB 0 +#define IPU6_BTRS_ARB_MODE_TYPE_STALL 1 + +/* Currently chosen arbitration mechanism for VC0 */ +#define IPU6_BTRS_ARB_STALL_MODE_VC0 \ + IPU6_BTRS_ARB_MODE_TYPE_REARB + +/* Currently chosen arbitration mechanism for VC1 */ +#define IPU6_BTRS_ARB_STALL_MODE_VC1 \ + IPU6_BTRS_ARB_MODE_TYPE_REARB + +/* + * MMU Invalidation HW bug workaround by ZLW mechanism + * + * Old IPU6 MMUV2 has a bug in the invalidation mechanism which might result in + * wrong translation or replication of the translation. This will cause data + * corruption. So we cannot directly use the MMU V2 invalidation registers + * to invalidate the MMU. Instead, whenever an invalidate is called, we need to + * clear the TLB by evicting all the valid translations by filling it with trash + * buffer (which is guaranteed not to be used by any other processes). ZLW is + * used to fill the L1 and L2 caches with the trash buffer translations. ZLW + * or Zero length write, is pre-fetch mechanism to pre-fetch the pages in + * advance to the L1 and L2 caches without triggering any memory operations. + * + * In MMU V2, L1 -> 16 streams and 64 blocks, maximum 16 blocks per stream + * One L1 block has 16 entries, hence points to 16 * 4K pages + * L2 -> 16 streams and 32 blocks. 2 blocks per streams + * One L2 block maps to 1024 L1 entries, hence points to 4MB address range + * 2 blocks per L2 stream means, 1 stream points to 8MB range + * + * As we need to clear the caches and 8MB being the biggest cache size, we need + * to have trash buffer which points to 8MB address range. As these trash + * buffers are not used for any memory transactions, we need only the least + * amount of physical memory. So we reserve 8MB IOVA address range but only + * one page is reserved from physical memory. Each of this 8MB IOVA address + * range is then mapped to the same physical memory page. + */ +/* One L2 entry maps 1024 L1 entries and one L1 entry per page */ +#define IPU6_MMUV2_L2_RANGE (1024 * PAGE_SIZE) +/* Max L2 blocks per stream */ +#define IPU6_MMUV2_MAX_L2_BLOCKS 2 +/* Max L1 blocks per stream */ +#define IPU6_MMUV2_MAX_L1_BLOCKS 16 +#define IPU6_MMUV2_TRASH_RANGE (IPU6_MMUV2_L2_RANGE * \ + IPU6_MMUV2_MAX_L2_BLOCKS) +/* Entries per L1 block */ +#define MMUV2_ENTRIES_PER_L1_BLOCK 16 +#define MMUV2_TRASH_L1_BLOCK_OFFSET (MMUV2_ENTRIES_PER_L1_BLOCK * \ + PAGE_SIZE) +#define MMUV2_TRASH_L2_BLOCK_OFFSET IPU6_MMUV2_L2_RANGE + +/* + * In some of the IPU6 MMUs, there is provision to configure L1 and L2 page + * table caches. Both these L1 and L2 caches are divided into multiple sections + * called streams. There is maximum 16 streams for both caches. Each of these + * sections are subdivided into multiple blocks. When nr_l1streams = 0 and + * nr_l2streams = 0, means the MMU is of type MMU_V1 and do not support + * L1/L2 page table caches. + * + * L1 stream per block sizes are configurable and varies per usecase. + * L2 has constant block sizes - 2 blocks per stream. + * + * MMU1 support pre-fetching of the pages to have less cache lookup misses. To + * enable the pre-fetching, MMU1 AT (Address Translator) device registers + * need to be configured. + * + * There are four types of memory accesses which requires ZLW configuration. + * ZLW(Zero Length Write) is a mechanism to enable VT-d pre-fetching on IOMMU. + * + * 1. 
Sequential Access or 1D mode + * Set ZLW_EN -> 1 + * set ZLW_PAGE_CROSS_1D -> 1 + * Set ZLW_N to "N" pages so that ZLW will be inserte N pages ahead where + * N is pre-defined and hardcoded in the platform data + * Set ZLW_2D -> 0 + * + * 2. ZLW 2D mode + * Set ZLW_EN -> 1 + * set ZLW_PAGE_CROSS_1D -> 1, + * Set ZLW_N -> 0 + * Set ZLW_2D -> 1 + * + * 3. ZLW Enable (no 1D or 2D mode) + * Set ZLW_EN -> 1 + * set ZLW_PAGE_CROSS_1D -> 0, + * Set ZLW_N -> 0 + * Set ZLW_2D -> 0 + * + * 4. ZLW disable + * Set ZLW_EN -> 0 + * set ZLW_PAGE_CROSS_1D -> 0, + * Set ZLW_N -> 0 + * Set ZLW_2D -> 0 + * + * To configure the ZLW for the above memory access, four registers are + * available. Hence to track these four settings, we have the following entries + * in the struct ipu6_mmu_hw. Each of these entries are per stream and + * available only for the L1 streams. + * + * a. l1_zlw_en -> To track zlw enabled per stream (ZLW_EN) + * b. l1_zlw_1d_mode -> Track 1D mode per stream. ZLW inserted at page boundary + * c. l1_ins_zlw_ahead_pages -> to track how advance the ZLW need to be inserted + * Insert ZLW request N pages ahead address. + * d. l1_zlw_2d_mode -> To track 2D mode per stream (ZLW_2D) + * + * + * Currently L1/L2 streams, blocks, AT ZLW configurations etc. are pre-defined + * as per the usecase specific calculations. Any change to this pre-defined + * table has to happen in sync with IPU6 FW. + */ +struct ipu6_mmu_hw { + union { + unsigned long offset; + void __iomem *base; + }; + u32 info_bits; + u8 nr_l1streams; + /* + * L1 has variable blocks per stream - total of 64 blocks and maximum of + * 16 blocks per stream. Configurable by using the block start address + * per stream. Block start address is calculated from the block size + */ + u8 l1_block_sz[IPU6_MMU_MAX_TLB_L1_STREAMS]; + /* Is ZLW is enabled in each stream */ + bool l1_zlw_en[IPU6_MMU_MAX_TLB_L1_STREAMS]; + bool l1_zlw_1d_mode[IPU6_MMU_MAX_TLB_L1_STREAMS]; + u8 l1_ins_zlw_ahead_pages[IPU6_MMU_MAX_TLB_L1_STREAMS]; + bool l1_zlw_2d_mode[IPU6_MMU_MAX_TLB_L1_STREAMS]; + + u32 l1_stream_id_reg_offset; + u32 l2_stream_id_reg_offset; + + u8 nr_l2streams; + /* + * L2 has fixed 2 blocks per stream. 
Block address is calculated + * from the block size + */ + u8 l2_block_sz[IPU6_MMU_MAX_TLB_L2_STREAMS]; + /* flag to track if WA is needed for successive invalidate HW bug */ + bool insert_read_before_invalidate; +}; + +struct ipu6_mmu_pdata { + u32 nr_mmus; + struct ipu6_mmu_hw mmu_hw[IPU6_MMU_MAX_DEVICES]; + int mmid; +}; + +struct ipu6_isys_csi2_pdata { + void __iomem *base; +}; + +struct ipu6_isys_internal_csi2_pdata { + u32 nports; + u32 irq_mask; + u32 *offsets; + u32 ctrl0_irq_edge; + u32 ctrl0_irq_clear; + u32 ctrl0_irq_mask; + u32 ctrl0_irq_enable; + u32 ctrl0_irq_lnp; + u32 ctrl0_irq_status; + u32 fw_access_port_ofs; +}; + +struct ipu6_isys_internal_tpg_pdata { + u32 ntpgs; + u32 *offsets; + u32 *sels; +}; + +struct ipu6_hw_variants { + unsigned long offset; + u32 nr_mmus; + struct ipu6_mmu_hw mmu_hw[IPU6_MMU_MAX_DEVICES]; + u8 cdc_fifos; + u8 cdc_fifo_threshold[IPU6_MAX_VC_IOSF_PORTS]; + u32 dmem_offset; + u32 spc_offset; +}; + +struct ipu6_isys_internal_pdata { + struct ipu6_isys_internal_csi2_pdata csi2; + struct ipu6_hw_variants hw_variant; + u32 num_parallel_streams; + u32 isys_dma_overshoot; + u32 sram_gran_shift; + u32 sram_gran_size; + u32 max_sram_size; + u32 max_streams; + u32 max_send_queues; + u32 max_sram_blocks; + u32 max_devq_size; + u32 sensor_type_start; + u32 sensor_type_end; + u32 ltr; + u32 memopen_threshold; + bool enhanced_iwake; +}; + +struct ipu6_isys_pdata { + void __iomem *base; + const struct ipu6_isys_internal_pdata *ipdata; +}; + +struct ipu6_psys_internal_pdata { + struct ipu6_hw_variants hw_variant; +}; + +struct ipu6_psys_pdata { + void __iomem *base; + const struct ipu6_psys_internal_pdata *ipdata; +}; + +int ipu6_fw_authenticate(void *data, u64 val); +void ipu6_configure_spc(struct ipu6_device *isp, + const struct ipu6_hw_variants *hw_variant, + int pkg_dir_idx, void __iomem *base, u64 *pkg_dir, + dma_addr_t pkg_dir_dma_addr); +int cio2_bridge_init(struct pci_dev *cio2); +#endif /* IPU6_H */ From patchwork Thu Apr 13 10:04:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bingbu Cao X-Patchwork-Id: 673048 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5B286C77B61 for ; Thu, 13 Apr 2023 09:54:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229930AbjDMJyq (ORCPT ); Thu, 13 Apr 2023 05:54:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44426 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230071AbjDMJyk (ORCPT ); Thu, 13 Apr 2023 05:54:40 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 94A1A93E2 for ; Thu, 13 Apr 2023 02:54:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1681379675; x=1712915675; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=XZhPM2AyRZtYbBg2TtgqnNirntlmEHLoQnRNFfxhIEo=; b=ZBqtmk9PcOP9dAtYMOp08pJ7PContgqz4HE70SaL1JtgBrGBX6KYsNHJ aoMHnru4hikVqiXRsd2KRyJz27tBBdLIwIUp8l4Vx8DK+oPY8EcTDYXHG C+mlLI0VRa/hT3+Xiew2xXQ8oyiz2bSGcK1UX4XNBb7WU075Al19zEo0P jNkI7WXklHveRwBkiAq7eUo4kqn3wDo9GwUP8/PizzyieWHFrKYyUGzrA x7aikVXBWThjeGw2k+kCi9QqQ3IlmmeVSAWEDSgraTh9/UNiFLenXjZ7b 
From: bingbu.cao@intel.com
To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com
Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com
Subject: [RFC PATCH 02/14] media: intel/ipu6: add IPU virtual bus driver
Date: Thu, 13 Apr 2023 18:04:17 +0800
Message-Id: <20230413100429.919622-3-bingbu.cao@intel.com>
In-Reply-To: <20230413100429.919622-1-bingbu.cao@intel.com>
References: <20230413100429.919622-1-bingbu.cao@intel.com>
X-Mailing-List: linux-media@vger.kernel.org

From: Bingbu Cao

Even though the IPU6 input system and processing system are exposed as a
single PCI device, each system has its own power sequence, and the
processing system can only be powered up after the input system. In
addition, the input system and the processing system each have their own
MMU hardware for IPU DMA address mapping. Define a virtual bus to
implement the power-sequence dependency and the DMA mapping requirements
on a per-device basis.
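The power-sequence dependency described above is expressed through the
device hierarchy rather than explicit ordering code: the psys bus device is
created with the isys bus device as its parent, so runtime PM resumes isys
before psys and suspends them in the opposite order, while each bus device
carries its own MMU pointer for DMA mapping. A minimal sketch of that
wiring, condensed from ipu6_pci_probe() in the previous patch (illustrative
only, not new code in this patch):

	isp->isys = ipu6_isys_init(pdev, &pdev->dev, isys_ctrl,
				   isys_base, &isys_ipdata);
	/*
	 * psys is parented to the isys bus device, not to the PCI device,
	 * so pm_runtime_resume_and_get(&isp->psys->dev) brings up isys
	 * first and keeps it up for as long as psys is active.
	 */
	isp->psys = ipu6_psys_init(pdev, &isp->isys->dev, psys_ctrl,
				   psys_base, &psys_ipdata);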
Signed-off-by: Bingbu Cao --- drivers/media/pci/intel/ipu6/ipu6-bus.c | 263 ++++++++++++++++++++++++ drivers/media/pci/intel/ipu6/ipu6-bus.h | 68 ++++++ 2 files changed, 331 insertions(+) create mode 100644 drivers/media/pci/intel/ipu6/ipu6-bus.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-bus.h diff --git a/drivers/media/pci/intel/ipu6/ipu6-bus.c b/drivers/media/pci/intel/ipu6/ipu6-bus.c new file mode 100644 index 000000000000..9a758d7ae73c --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-bus.c @@ -0,0 +1,263 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2013 - 2023 Intel Corporation + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-buttress.h" +#include "ipu6-dma.h" +#include "ipu6-platform.h" + +static struct bus_type ipu6_bus; + +static int bus_pm_runtime_suspend(struct device *dev) +{ + struct ipu6_bus_device *adev = to_ipu6_bus_device(dev); + int ret; + + ret = pm_generic_runtime_suspend(dev); + if (ret) + return ret; + + ret = ipu6_buttress_power(dev, adev->ctrl, false); + dev_dbg(dev, "buttress power down %d\n", ret); + if (!ret) + return 0; + + dev_err(dev, "power down failed!\n"); + + /* Powering down failed, attempt to resume device now */ + ret = pm_generic_runtime_resume(dev); + if (!ret) + return -EBUSY; + + return -EIO; +} + +static int bus_pm_runtime_resume(struct device *dev) +{ + struct ipu6_bus_device *adev = to_ipu6_bus_device(dev); + int ret; + + ret = ipu6_buttress_power(dev, adev->ctrl, true); + dev_dbg(dev, "buttress power up %d\n", ret); + if (ret) + return ret; + + ret = pm_generic_runtime_resume(dev); + if (ret) + goto out_err; + + return 0; + +out_err: + ipu6_buttress_power(dev, adev->ctrl, false); + + return -EBUSY; +} + +static const struct dev_pm_ops ipu6_bus_pm_ops = { + .runtime_suspend = bus_pm_runtime_suspend, + .runtime_resume = bus_pm_runtime_resume, +}; + +static int ipu6_bus_match(struct device *dev, struct device_driver *drv) +{ + struct ipu6_bus_driver *adrv = to_ipu6_bus_driver(drv); + struct pci_dev *pci_dev = to_pci_dev(dev->parent); + const struct pci_device_id *found_id; + + found_id = pci_match_id(adrv->id_table, pci_dev); + if (found_id) + dev_dbg(dev, "%s %x:%x matched\n", dev_name(dev), + found_id->vendor, found_id->device); + + return found_id ? 
1 : 0; +} + +static int ipu6_bus_probe(struct device *dev) +{ + struct ipu6_bus_device *adev = to_ipu6_bus_device(dev); + struct ipu6_bus_driver *adrv = to_ipu6_bus_driver(dev->driver); + int ret; + + if (!adev->isp->bus_ready_to_probe) + return -EPROBE_DEFER; + + dev_dbg(dev, "bus probe dev %s\n", dev_name(dev)); + + adev->adrv = adrv; + if (!adrv->probe) { + ret = -ENODEV; + goto out_err; + } + + ret = pm_runtime_resume_and_get(&adev->dev); + if (ret < 0) { + dev_err(&adev->dev, "Failed to get runtime PM\n"); + goto out_err; + } + + ret = adrv->probe(adev); + pm_runtime_put(&adev->dev); + + if (ret) + goto out_err; + + return 0; + +out_err: + ipu6_bus_set_drvdata(adev, NULL); + adev->adrv = NULL; + + return ret; +} + +static void ipu6_bus_remove(struct device *dev) +{ + struct ipu6_bus_device *adev = to_ipu6_bus_device(dev); + struct ipu6_bus_driver *adrv = to_ipu6_bus_driver(dev->driver); + + if (adrv->remove) + adrv->remove(adev); +} + +static struct bus_type ipu6_bus = { + .name = IPU6_BUS_NAME, + .match = ipu6_bus_match, + .probe = ipu6_bus_probe, + .remove = ipu6_bus_remove, + .pm = &ipu6_bus_pm_ops, +}; + +static DEFINE_MUTEX(ipu6_bus_mutex); + +static void ipu6_bus_release(struct device *dev) +{ + struct ipu6_bus_device *adev = to_ipu6_bus_device(dev); + + kfree(adev); +} + +struct ipu6_bus_device * +ipu6_bus_initialize_device(struct pci_dev *pdev, struct device *parent, + void *pdata, struct ipu6_buttress_ctrl *ctrl, + char *name) +{ + struct ipu6_bus_device *adev; + struct ipu6_device *isp = pci_get_drvdata(pdev); + + adev = kzalloc(sizeof(*adev), GFP_KERNEL); + if (!adev) + return ERR_PTR(-ENOMEM); + + adev->dev.parent = parent; + adev->dev.bus = &ipu6_bus; + adev->dev.release = ipu6_bus_release; + adev->dev.dma_ops = &ipu6_dma_ops; + adev->dma_mask = DMA_BIT_MASK(isp->secure_mode ? 
IPU6_MMU_ADDR_BITS : + IPU6_MMU_ADDR_BITS_NON_SECURE); + adev->dev.dma_mask = &adev->dma_mask; + adev->dev.dma_parms = pdev->dev.dma_parms; + adev->dev.coherent_dma_mask = adev->dma_mask; + adev->ctrl = ctrl; + adev->pdata = pdata; + adev->isp = isp; + dev_set_name(&adev->dev, "%s", name); + + device_initialize(&adev->dev); + pm_runtime_forbid(&adev->dev); + pm_runtime_enable(&adev->dev); + + return adev; +} + +int ipu6_bus_add_device(struct ipu6_bus_device *adev) +{ + int ret; + + ret = device_add(&adev->dev); + if (ret) { + put_device(&adev->dev); + return ret; + } + + mutex_lock(&ipu6_bus_mutex); + list_add(&adev->list, &adev->isp->devices); + mutex_unlock(&ipu6_bus_mutex); + + pm_runtime_allow(&adev->dev); + return 0; +} + +void ipu6_bus_del_devices(struct pci_dev *pdev) +{ + struct ipu6_device *isp = pci_get_drvdata(pdev); + struct ipu6_bus_device *adev, *save; + + mutex_lock(&ipu6_bus_mutex); + + list_for_each_entry_safe(adev, save, &isp->devices, list) { + pm_runtime_disable(&adev->dev); + list_del(&adev->list); + device_unregister(&adev->dev); + } + + mutex_unlock(&ipu6_bus_mutex); +} + +int ipu6_bus_register_driver(struct ipu6_bus_driver *adrv) +{ + adrv->drv.bus = &ipu6_bus; + return driver_register(&adrv->drv); +} +EXPORT_SYMBOL_NS_GPL(ipu6_bus_register_driver, INTEL_IPU6); + +int ipu6_bus_unregister_driver(struct ipu6_bus_driver *adrv) +{ + driver_unregister(&adrv->drv); + return 0; +} +EXPORT_SYMBOL_NS_GPL(ipu6_bus_unregister_driver, INTEL_IPU6); + +int ipu6_bus_register(void) +{ + return bus_register(&ipu6_bus); +} + +void ipu6_bus_unregister(void) +{ + return bus_unregister(&ipu6_bus); +} + +static int flr_rpm_recovery(struct device *dev, void *p) +{ + /* + * We are not necessarily going through device from child to + * parent. runtime PM refuses to change state for parent if the child + * is still active. At FLR (full reset for whole IPU6) that doesn't + * matter. Everything has been power gated by HW during the FLR cycle + * and we are just cleaning up SW state. Thus, ignore child during + * set_suspended. 
+ */ + dev_dbg(dev, "FLR recovery call\n"); + pm_suspend_ignore_children(dev, true); + pm_runtime_set_suspended(dev); + pm_suspend_ignore_children(dev, false); + + return 0; +} + +int ipu6_bus_flr_recovery(void) +{ + bus_for_each_dev(&ipu6_bus, NULL, NULL, flr_rpm_recovery); + return 0; +} diff --git a/drivers/media/pci/intel/ipu6/ipu6-bus.h b/drivers/media/pci/intel/ipu6/ipu6-bus.h new file mode 100644 index 000000000000..de01bd56e786 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-bus.h @@ -0,0 +1,68 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2013 - 2023 Intel Corporation */ + +#ifndef IPU6_BUS_H +#define IPU6_BUS_H + +#include +#include +#include +#include +#include + +#define IPU6_BUS_NAME IPU6_NAME "-bus" + +struct ipu6_buttress_ctrl; +struct ipu6_subsystem_trace_config; + +struct ipu6_bus_device { + struct device dev; + struct list_head list; + void *pdata; + struct ipu6_bus_driver *adrv; + struct ipu6_mmu *mmu; + struct ipu6_device *isp; + struct ipu6_subsystem_trace_config *trace_cfg; + struct ipu6_buttress_ctrl *ctrl; + u64 dma_mask; + + const struct firmware *fw; + struct sg_table fw_sgt; + u64 *pkg_dir; + dma_addr_t pkg_dir_dma_addr; + unsigned int pkg_dir_size; +}; + +#define to_ipu6_bus_device(_dev) container_of(_dev, struct ipu6_bus_device, dev) + +struct ipu6_bus_driver { + struct device_driver drv; + const struct pci_device_id *id_table; + int (*probe)(struct ipu6_bus_device *adev); + void (*remove)(struct ipu6_bus_device *adev); + irqreturn_t (*isr)(struct ipu6_bus_device *adev); + irqreturn_t (*isr_threaded)(struct ipu6_bus_device *adev); + bool wake_isr_thread; +}; + +#define to_ipu6_bus_driver(_drv) container_of(_drv, struct ipu6_bus_driver, drv) + +struct ipu6_bus_device * +ipu6_bus_initialize_device(struct pci_dev *pdev, struct device *parent, + void *pdata, struct ipu6_buttress_ctrl *ctrl, + char *name); +int ipu6_bus_add_device(struct ipu6_bus_device *adev); +void ipu6_bus_del_devices(struct pci_dev *pdev); + +int ipu6_bus_register_driver(struct ipu6_bus_driver *adrv); +int ipu6_bus_unregister_driver(struct ipu6_bus_driver *adrv); + +int ipu6_bus_register(void); +void ipu6_bus_unregister(void); + +#define ipu6_bus_set_drvdata(adev, data) dev_set_drvdata(&(adev)->dev, data) +#define ipu6_bus_get_drvdata(adev) dev_get_drvdata(&(adev)->dev) + +int ipu6_bus_flr_recovery(void); + +#endif From patchwork Thu Apr 13 10:04:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bingbu Cao X-Patchwork-Id: 674373 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E09BFC77B6C for ; Thu, 13 Apr 2023 09:54:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229947AbjDMJyz (ORCPT ); Thu, 13 Apr 2023 05:54:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44344 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229888AbjDMJyx (ORCPT ); Thu, 13 Apr 2023 05:54:53 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6B3A69753 for ; Thu, 13 Apr 2023 02:54:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1681379680; x=1712915680; h=from:to:cc:subject:date:message-id:in-reply-to: 
references:mime-version:content-transfer-encoding; bh=yu92RQSSPHii1q+dHuPxneIwLZYJBv9iOjuJVcii30I=; b=GYMM5r1o/IkIw5/EI+KnpRYwzhhQQsMV1WftRY3kJgAYDlJnV/lfCroq l21Adt9jiDhtTATJoR39haMIuGfDcI5nxeKwo9Iu5hMDPjLHeXfneCKET +5fbUN0thsbQK97MzUy25S3R2StXO+q0d5UHH/u1fLj+mO7fz5S+xJJNS tQ7po2UI0Np5CcKGptM42RmAyobB6usUzf58PWQUhx5GT+BhEoaP3ghh4 JZJh3o19zW0P5Ombh551lNHbNBOsnOmBC9R8cRqufMj8xGAvVTyujUEfv EdjewmYBQWpG/yiNLLKpcE999KfnfjTs2qnXzodXkJfitjSRK4vP/Ykth Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="371992924" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="371992924" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2023 02:54:39 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="639600018" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="639600018" Received: from icg-kernel3.bj.intel.com ([172.16.126.100]) by orsmga003.jf.intel.com with ESMTP; 13 Apr 2023 02:54:35 -0700 From: bingbu.cao@intel.com To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com Subject: [RFC PATCH 03/14] media: intel/ipu6: add IPU6 buttress interface driver Date: Thu, 13 Apr 2023 18:04:18 +0800 Message-Id: <20230413100429.919622-4-bingbu.cao@intel.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230413100429.919622-1-bingbu.cao@intel.com> References: <20230413100429.919622-1-bingbu.cao@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Bingbu Cao The IPU6 buttress is the interface between the IPU device (input system and processing system) and the rest of the SoC. It contains the overall IPU hardware control registers; these registers serve as the interface to the Intel Converged Security Engine (CSE) and the Punit for firmware authentication and power management.
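
To make the authentication flow described above easier to follow, here is a condensed sketch of the CSE handshake that ipu6_buttress_authenticate() in this patch implements. It is an illustration only, not part of the patch: the auth_mutex locking, the AUTH_FAILED check, the bootloader magic-number poll and most error handling are omitted, and it reuses the register and helper names introduced in the diff below.

/*
 * Condensed illustration of the CSE authentication handshake performed by
 * ipu6_buttress_authenticate() below (sketch only, not the driver code).
 */
static int buttress_auth_flow_sketch(struct ipu6_device *isp)
{
        u32 data, mask = BUTTRESS_SECURITY_CTL_FW_SETUP_MASK;
        int ret;

        /* 1. Tell the CSE where the signed firmware package (pkg_dir) lives. */
        writel(lower_32_bits(isp->psys->pkg_dir_dma_addr),
               isp->base + BUTTRESS_REG_FW_SOURCE_BASE_LO);
        writel(upper_32_bits(isp->psys->pkg_dir_dma_addr),
               isp->base + BUTTRESS_REG_FW_SOURCE_BASE_HI);

        /* 2. BOOT_LOAD doorbell IPC: CSE fetches and verifies the boot image. */
        ret = ipu6_buttress_ipc_send(isp, IPU6_BUTTRESS_IPC_CSE,
                                     BUTTRESS_IU2CSEDATA0_IPC_BOOT_LOAD, 1, true,
                                     BUTTRESS_CSE2IUDATA0_IPC_BOOT_LOAD_DONE);
        if (ret)
                return ret;

        /* 3. AUTHENTICATE_RUN IPC: CSE authenticates and starts the firmware. */
        ret = ipu6_buttress_ipc_send(isp, IPU6_BUTTRESS_IPC_CSE,
                                     BUTTRESS_IU2CSEDATA0_IPC_AUTH_RUN, 1, true,
                                     BUTTRESS_CSE2IUDATA0_IPC_AUTH_RUN_DONE);
        if (ret)
                return ret;

        /* 4. Poll SECURITY_CTL until the hardware reports AUTH_DONE. */
        return readl_poll_timeout(isp->base + BUTTRESS_REG_SECURITY_CTL, data,
                                  (data & mask) == BUTTRESS_SECURITY_CTL_AUTH_DONE,
                                  500, BUTTRESS_CSE_AUTHENTICATE_TIMEOUT_US);
}

In the patch itself the same sequence additionally polls for BUTTRESS_SECURITY_CTL_AUTH_FAILED and for the bootloader magic key between the two IPC commands, and the whole handshake is serialized under the buttress auth_mutex.
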
Signed-off-by: Bingbu Cao --- drivers/media/pci/intel/ipu6/ipu6-buttress.c | 916 ++++++++++++++++++ drivers/media/pci/intel/ipu6/ipu6-buttress.h | 109 +++ .../intel/ipu6/ipu6-platform-buttress-regs.h | 231 +++++ 3 files changed, 1256 insertions(+) create mode 100644 drivers/media/pci/intel/ipu6/ipu6-buttress.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-buttress.h create mode 100644 drivers/media/pci/intel/ipu6/ipu6-platform-buttress-regs.h diff --git a/drivers/media/pci/intel/ipu6/ipu6-buttress.c b/drivers/media/pci/intel/ipu6/ipu6-buttress.c new file mode 100644 index 000000000000..c42d26522858 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-buttress.c @@ -0,0 +1,916 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2013 - 2023 Intel Corporation + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-buttress.h" +#include "ipu6-cpd.h" +#include "ipu6-platform-buttress-regs.h" + +#define BOOTLOADER_STATUS_OFFSET 0x15c + +#define BOOTLOADER_MAGIC_KEY 0xb00710ad + +#define ENTRY BUTTRESS_IU2CSECSR_IPC_PEER_COMP_ACTIONS_RST_PHASE1 +#define EXIT BUTTRESS_IU2CSECSR_IPC_PEER_COMP_ACTIONS_RST_PHASE2 +#define QUERY BUTTRESS_IU2CSECSR_IPC_PEER_QUERIED_IP_COMP_ACTIONS_RST_PHASE + +#define BUTTRESS_TSC_SYNC_RESET_TRIAL_MAX 10 + +#define BUTTRESS_POWER_TIMEOUT_US (200 * USEC_PER_MSEC) + +#define BUTTRESS_CSE_BOOTLOAD_TIMEOUT_US (5 * USEC_PER_SEC) +#define BUTTRESS_CSE_AUTHENTICATE_TIMEOUT_US (10 * USEC_PER_SEC) +#define BUTTRESS_CSE_FWRESET_TIMEOUT_US (100 * USEC_PER_MSEC) + +#define BUTTRESS_IPC_TX_TIMEOUT_MS MSEC_PER_SEC +#define BUTTRESS_IPC_RX_TIMEOUT_MS MSEC_PER_SEC +#define BUTTRESS_IPC_VALIDITY_TIMEOUT_US (1 * USEC_PER_SEC) +#define BUTTRESS_TSC_SYNC_TIMEOUT_US (5 * USEC_PER_MSEC) + +#define BUTTRESS_IPC_RESET_RETRY 2000 +#define BUTTRESS_CSE_IPC_RESET_RETRY 4 +#define BUTTRESS_IPC_CMD_SEND_RETRY 1 + +#define BUTTRESS_MAX_CONSECUTIVE_IRQS 100 + +static const u32 ipu6_adev_irq_mask[] = { + BUTTRESS_ISR_IS_IRQ, BUTTRESS_ISR_PS_IRQ +}; + +int ipu6_buttress_ipc_reset(struct ipu6_device *isp, + struct ipu6_buttress_ipc *ipc) +{ + unsigned int retries = BUTTRESS_IPC_RESET_RETRY; + struct ipu6_buttress *b = &isp->buttress; + u32 val = 0, csr_in_clr; + + if (!isp->secure_mode) { + dev_info(&isp->pdev->dev, "Skip IPC reset for non-secure mode"); + return 0; + } + + mutex_lock(&b->ipc_mutex); + + /* Clear-by-1 CSR (all bits), corresponding internal states. */ + val = readl(isp->base + ipc->csr_in); + writel(val, isp->base + ipc->csr_in); + + /* Set peer CSR bit IPC_PEER_COMP_ACTIONS_RST_PHASE1 */ + writel(ENTRY, isp->base + ipc->csr_out); + /* + * Clear-by-1 all CSR bits EXCEPT following + * bits: + * A. IPC_PEER_COMP_ACTIONS_RST_PHASE1. + * B. IPC_PEER_COMP_ACTIONS_RST_PHASE2. + * C. Possibly custom bits, depending on + * their role. + */ + csr_in_clr = BUTTRESS_IU2CSECSR_IPC_PEER_DEASSERTED_REG_VALID_REQ | + BUTTRESS_IU2CSECSR_IPC_PEER_ACKED_REG_VALID | + BUTTRESS_IU2CSECSR_IPC_PEER_ASSERTED_REG_VALID_REQ | QUERY; + + do { + usleep_range(400, 500); + val = readl(isp->base + ipc->csr_in); + switch (val) { + case ENTRY | EXIT: + case ENTRY | EXIT | QUERY: + /* + * 1) Clear-by-1 CSR bits + * (IPC_PEER_COMP_ACTIONS_RST_PHASE1, + * IPC_PEER_COMP_ACTIONS_RST_PHASE2). + * 2) Set peer CSR bit + * IPC_PEER_QUERIED_IP_COMP_ACTIONS_RST_PHASE. 
+ */ + writel(ENTRY | EXIT, isp->base + ipc->csr_in); + writel(QUERY, isp->base + ipc->csr_out); + break; + case ENTRY: + case ENTRY | QUERY: + /* + * 1) Clear-by-1 CSR bits + * (IPC_PEER_COMP_ACTIONS_RST_PHASE1, + * IPC_PEER_QUERIED_IP_COMP_ACTIONS_RST_PHASE). + * 2) Set peer CSR bit + * IPC_PEER_COMP_ACTIONS_RST_PHASE1. + */ + writel(ENTRY | QUERY, isp->base + ipc->csr_in); + writel(ENTRY, isp->base + ipc->csr_out); + break; + case EXIT: + case EXIT | QUERY: + /* + * Clear-by-1 CSR bit + * IPC_PEER_COMP_ACTIONS_RST_PHASE2. + * 1) Clear incoming doorbell. + * 2) Clear-by-1 all CSR bits EXCEPT following + * bits: + * A. IPC_PEER_COMP_ACTIONS_RST_PHASE1. + * B. IPC_PEER_COMP_ACTIONS_RST_PHASE2. + * C. Possibly custom bits, depending on + * their role. + * 3) Set peer CSR bit + * IPC_PEER_COMP_ACTIONS_RST_PHASE2. + */ + writel(EXIT, isp->base + ipc->csr_in); + writel(0, isp->base + ipc->db0_in); + writel(csr_in_clr, isp->base + ipc->csr_in); + writel(EXIT, isp->base + ipc->csr_out); + + /* + * Read csr_in again to make sure if RST_PHASE2 is done. + * If csr_in is QUERY, it should be handled again. + */ + usleep_range(200, 300); + val = readl(isp->base + ipc->csr_in); + if (val & QUERY) { + dev_dbg(&isp->pdev->dev, + "RST_PHASE2 retry csr_in = %x\n", val); + break; + } + mutex_unlock(&b->ipc_mutex); + return 0; + case QUERY: + /* + * 1) Clear-by-1 CSR bit + * IPC_PEER_QUERIED_IP_COMP_ACTIONS_RST_PHASE. + * 2) Set peer CSR bit + * IPC_PEER_COMP_ACTIONS_RST_PHASE1 + */ + writel(QUERY, isp->base + ipc->csr_in); + writel(ENTRY, isp->base + ipc->csr_out); + break; + default: + dev_warn_ratelimited(&isp->pdev->dev, + "Unexpected CSR 0x%x\n", val); + break; + } + } while (retries--); + + mutex_unlock(&b->ipc_mutex); + dev_err(&isp->pdev->dev, "Timed out while waiting for CSE\n"); + + return -ETIMEDOUT; +} + +static void +ipu6_buttress_ipc_validity_close(struct ipu6_device *isp, + struct ipu6_buttress_ipc *ipc) +{ + writel(BUTTRESS_IU2CSECSR_IPC_PEER_DEASSERTED_REG_VALID_REQ, + isp->base + ipc->csr_out); +} + +static int +ipu6_buttress_ipc_validity_open(struct ipu6_device *isp, + struct ipu6_buttress_ipc *ipc) +{ + unsigned int mask = BUTTRESS_IU2CSECSR_IPC_PEER_ACKED_REG_VALID; + void __iomem *addr; + int ret; + u32 val; + + writel(BUTTRESS_IU2CSECSR_IPC_PEER_ASSERTED_REG_VALID_REQ, + isp->base + ipc->csr_out); + + addr = isp->base + ipc->csr_in; + ret = readl_poll_timeout(addr, val, val & mask, 200, + BUTTRESS_IPC_VALIDITY_TIMEOUT_US); + if (ret) { + dev_err(&isp->pdev->dev, "CSE validity timeout 0x%x\n", val); + ipu6_buttress_ipc_validity_close(isp, ipc); + } + + return ret; +} + +static void ipu6_buttress_ipc_recv(struct ipu6_device *isp, + struct ipu6_buttress_ipc *ipc, u32 *ipc_msg) +{ + if (ipc_msg) + *ipc_msg = readl(isp->base + ipc->data0_in); + writel(0, isp->base + ipc->db0_in); +} + +static int ipu6_buttress_ipc_send_bulk(struct ipu6_device *isp, + enum ipu6_buttress_ipc_domain ipc_domain, + struct ipu6_ipc_buttress_bulk_msg *msgs, + u32 size) +{ + unsigned long tx_timeout_jiffies, rx_timeout_jiffies; + unsigned int i, retry = BUTTRESS_IPC_CMD_SEND_RETRY; + struct ipu6_buttress *b = &isp->buttress; + struct ipu6_buttress_ipc *ipc; + u32 val; + int ret; + int tout; + + ipc = ipc_domain == IPU6_BUTTRESS_IPC_CSE ? 
&b->cse : &b->ish; + + mutex_lock(&b->ipc_mutex); + + ret = ipu6_buttress_ipc_validity_open(isp, ipc); + if (ret) { + dev_err(&isp->pdev->dev, "IPC validity open failed\n"); + goto out; + } + + tx_timeout_jiffies = msecs_to_jiffies(BUTTRESS_IPC_TX_TIMEOUT_MS); + rx_timeout_jiffies = msecs_to_jiffies(BUTTRESS_IPC_RX_TIMEOUT_MS); + + for (i = 0; i < size; i++) { + reinit_completion(&ipc->send_complete); + if (msgs[i].require_resp) + reinit_completion(&ipc->recv_complete); + + dev_dbg(&isp->pdev->dev, "bulk IPC command: 0x%x\n", + msgs[i].cmd); + writel(msgs[i].cmd, isp->base + ipc->data0_out); + + val = BUTTRESS_IU2CSEDB0_BUSY | msgs[i].cmd_size; + + writel(val, isp->base + ipc->db0_out); + + tout = wait_for_completion_timeout(&ipc->send_complete, + tx_timeout_jiffies); + if (!tout) { + dev_err(&isp->pdev->dev, "send IPC response timeout\n"); + if (!retry--) { + ret = -ETIMEDOUT; + goto out; + } + + /* Try again if CSE is not responding on first try */ + writel(0, isp->base + ipc->db0_out); + i--; + continue; + } + + retry = BUTTRESS_IPC_CMD_SEND_RETRY; + + if (!msgs[i].require_resp) + continue; + + tout = wait_for_completion_timeout(&ipc->recv_complete, + rx_timeout_jiffies); + if (!tout) { + dev_err(&isp->pdev->dev, "recv IPC response timeout\n"); + ret = -ETIMEDOUT; + goto out; + } + + if (ipc->nack_mask && + (ipc->recv_data & ipc->nack_mask) == ipc->nack) { + dev_err(&isp->pdev->dev, + "IPC NACK for cmd 0x%x\n", msgs[i].cmd); + ret = -EIO; + goto out; + } + + if (ipc->recv_data != msgs[i].expected_resp) { + dev_err(&isp->pdev->dev, + "expected resp: 0x%x, IPC response: 0x%x ", + msgs[i].expected_resp, ipc->recv_data); + ret = -EIO; + goto out; + } + } + + dev_dbg(&isp->pdev->dev, "bulk IPC commands done\n"); + +out: + ipu6_buttress_ipc_validity_close(isp, ipc); + mutex_unlock(&b->ipc_mutex); + return ret; +} + +static int +ipu6_buttress_ipc_send(struct ipu6_device *isp, + enum ipu6_buttress_ipc_domain ipc_domain, + u32 ipc_msg, u32 size, bool require_resp, + u32 expected_resp) +{ + struct ipu6_ipc_buttress_bulk_msg msg = { + .cmd = ipc_msg, + .cmd_size = size, + .require_resp = require_resp, + .expected_resp = expected_resp, + }; + + return ipu6_buttress_ipc_send_bulk(isp, ipc_domain, &msg, 1); +} + +static irqreturn_t ipu6_buttress_call_isr(struct ipu6_bus_device *adev) +{ + irqreturn_t ret = IRQ_WAKE_THREAD; + + if (!adev || !adev->adrv) + return IRQ_NONE; + + if (adev->adrv->isr) + ret = adev->adrv->isr(adev); + + if (ret == IRQ_WAKE_THREAD && !adev->adrv->isr_threaded) + ret = IRQ_NONE; + + adev->adrv->wake_isr_thread = (ret == IRQ_WAKE_THREAD); + + return ret; +} + +irqreturn_t ipu6_buttress_isr(int irq, void *isp_ptr) +{ + struct ipu6_device *isp = isp_ptr; + struct ipu6_bus_device *adev[] = { isp->isys, isp->psys }; + struct ipu6_buttress *b = &isp->buttress; + u32 reg_irq_sts = BUTTRESS_REG_ISR_STATUS; + irqreturn_t ret = IRQ_NONE; + u32 disable_irqs = 0; + u32 irq_status; + u32 i, count = 0; + + pm_runtime_get_noresume(&isp->pdev->dev); + + irq_status = readl(isp->base + reg_irq_sts); + if (!irq_status) { + pm_runtime_put_noidle(&isp->pdev->dev); + return IRQ_NONE; + } + + do { + writel(irq_status, isp->base + BUTTRESS_REG_ISR_CLEAR); + + for (i = 0; i < ARRAY_SIZE(ipu6_adev_irq_mask); i++) { + irqreturn_t r = ipu6_buttress_call_isr(adev[i]); + + if (!(irq_status & ipu6_adev_irq_mask[i])) + continue; + + if (r == IRQ_WAKE_THREAD) { + ret = IRQ_WAKE_THREAD; + disable_irqs |= ipu6_adev_irq_mask[i]; + } else if (ret == IRQ_NONE && r == IRQ_HANDLED) { + ret = IRQ_HANDLED; + } + } + 
+ if ((irq_status & BUTTRESS_EVENT) && ret == IRQ_NONE) + ret = IRQ_HANDLED; + + if (irq_status & BUTTRESS_ISR_IPC_FROM_CSE_IS_WAITING) { + dev_dbg(&isp->pdev->dev, + "BUTTRESS_ISR_IPC_FROM_CSE_IS_WAITING\n"); + ipu6_buttress_ipc_recv(isp, &b->cse, &b->cse.recv_data); + complete(&b->cse.recv_complete); + } + + if (irq_status & BUTTRESS_ISR_IPC_FROM_ISH_IS_WAITING) { + dev_dbg(&isp->pdev->dev, + "BUTTRESS_ISR_IPC_FROM_ISH_IS_WAITING\n"); + ipu6_buttress_ipc_recv(isp, &b->ish, &b->ish.recv_data); + complete(&b->ish.recv_complete); + } + + if (irq_status & BUTTRESS_ISR_IPC_EXEC_DONE_BY_CSE) { + dev_dbg(&isp->pdev->dev, + "BUTTRESS_ISR_IPC_EXEC_DONE_BY_CSE\n"); + complete(&b->cse.send_complete); + } + + if (irq_status & BUTTRESS_ISR_IPC_EXEC_DONE_BY_ISH) { + dev_dbg(&isp->pdev->dev, + "BUTTRESS_ISR_IPC_EXEC_DONE_BY_CSE\n"); + complete(&b->ish.send_complete); + } + + if (irq_status & BUTTRESS_ISR_SAI_VIOLATION && + ipu6_buttress_get_secure_mode(isp)) + dev_err(&isp->pdev->dev, + "BUTTRESS_ISR_SAI_VIOLATION\n"); + + if (irq_status & (BUTTRESS_ISR_IS_FATAL_MEM_ERR | + BUTTRESS_ISR_PS_FATAL_MEM_ERR)) + dev_err(&isp->pdev->dev, + "BUTTRESS_ISR_FATAL_MEM_ERR\n"); + + if (irq_status & BUTTRESS_ISR_UFI_ERROR) + dev_err(&isp->pdev->dev, "BUTTRESS_ISR_UFI_ERROR\n"); + + if (++count == BUTTRESS_MAX_CONSECUTIVE_IRQS) { + dev_err(&isp->pdev->dev, "too many consecutive IRQs\n"); + ret = IRQ_NONE; + break; + } + + irq_status = readl(isp->base + reg_irq_sts); + } while (irq_status && !isp->flr_done); + + if (disable_irqs) + writel(BUTTRESS_IRQS & ~disable_irqs, + isp->base + BUTTRESS_REG_ISR_ENABLE); + + pm_runtime_put(&isp->pdev->dev); + + return ret; +} + +irqreturn_t ipu6_buttress_isr_threaded(int irq, void *isp_ptr) +{ + struct ipu6_device *isp = isp_ptr; + struct ipu6_bus_device *adev[] = { isp->isys, isp->psys }; + irqreturn_t ret = IRQ_NONE; + unsigned int i; + + for (i = 0; i < ARRAY_SIZE(ipu6_adev_irq_mask); i++) { + if (adev[i] && adev[i]->adrv && + adev[i]->adrv->wake_isr_thread && + adev[i]->adrv->isr_threaded(adev[i]) == IRQ_HANDLED) + ret = IRQ_HANDLED; + } + + writel(BUTTRESS_IRQS, isp->base + BUTTRESS_REG_ISR_ENABLE); + + return ret; +} + +int ipu6_buttress_power(struct device *dev, struct ipu6_buttress_ctrl *ctrl, + bool on) +{ + struct ipu6_device *isp = to_ipu6_bus_device(dev)->isp; + u32 pwr_sts, val; + int ret = 0; + + if (!ctrl) + return 0; + + /* Until FLR completion nothing is expected to work */ + if (isp->flr_done) + return 0; + + mutex_lock(&isp->buttress.power_mutex); + + if (!on) { + val = 0; + pwr_sts = ctrl->pwr_sts_off << ctrl->pwr_sts_shift; + } else { + val = BUTTRESS_FREQ_CTL_START | + FIELD_PREP(BUTTRESS_FREQ_CTL_RATIO_MASK, + ctrl->ratio) | + FIELD_PREP(BUTTRESS_FREQ_CTL_QOS_FLOOR_MASK, + ctrl->qos_floor) | + BUTTRESS_FREQ_CTL_ICCMAX_LEVEL; + + pwr_sts = ctrl->pwr_sts_on << ctrl->pwr_sts_shift; + } + + writel(val, isp->base + ctrl->freq_ctl); + + ret = readl_poll_timeout(isp->base + BUTTRESS_REG_PWR_STATE, + val, (val & ctrl->pwr_sts_mask) == pwr_sts, + 100, BUTTRESS_POWER_TIMEOUT_US); + if (ret) + dev_err(&isp->pdev->dev, + "Change power status timeout with 0x%x\n", val); + + ctrl->started = !ret && on; + + mutex_unlock(&isp->buttress.power_mutex); + + return ret; +} + +bool ipu6_buttress_get_secure_mode(struct ipu6_device *isp) +{ + u32 val; + + val = readl(isp->base + BUTTRESS_REG_SECURITY_CTL); + + return val & BUTTRESS_SECURITY_CTL_FW_SECURE_MODE; +} + +bool ipu6_buttress_auth_done(struct ipu6_device *isp) +{ + u32 val; + + if (!isp->secure_mode) + return true; + + 
val = readl(isp->base + BUTTRESS_REG_SECURITY_CTL); + + return (val & BUTTRESS_SECURITY_CTL_FW_SETUP_MASK) == + BUTTRESS_SECURITY_CTL_AUTH_DONE; +} +EXPORT_SYMBOL_NS_GPL(ipu6_buttress_auth_done, INTEL_IPU6); + +int ipu6_buttress_reset_authentication(struct ipu6_device *isp) +{ + int ret; + u32 val; + + if (!isp->secure_mode) { + dev_dbg(&isp->pdev->dev, "Skip auth for non-secure mode\n"); + return 0; + } + + writel(BUTTRESS_FW_RESET_CTL_START, isp->base + + BUTTRESS_REG_FW_RESET_CTL); + + ret = readl_poll_timeout(isp->base + BUTTRESS_REG_FW_RESET_CTL, val, + val & BUTTRESS_FW_RESET_CTL_DONE, 500, + BUTTRESS_CSE_FWRESET_TIMEOUT_US); + if (ret) { + dev_err(&isp->pdev->dev, + "Time out while resetting authentication state\n"); + } else { + dev_info(&isp->pdev->dev, "FW reset for authentication done\n"); + writel(0, isp->base + BUTTRESS_REG_FW_RESET_CTL); + /* leave some time for HW restore */ + usleep_range(800, 1000); + } + + return ret; +} + +int ipu6_buttress_map_fw_image(struct ipu6_bus_device *sys, + const struct firmware *fw, struct sg_table *sgt) +{ + struct page **pages; + const void *addr; + unsigned long n_pages; + unsigned int i; + int ret; + + n_pages = PAGE_ALIGN(fw->size) >> PAGE_SHIFT; + + pages = kmalloc_array(n_pages, sizeof(*pages), GFP_KERNEL); + if (!pages) + return -ENOMEM; + + addr = fw->data; + for (i = 0; i < n_pages; i++) { + struct page *p = vmalloc_to_page(addr); + + if (!p) { + ret = -ENOMEM; + goto out; + } + pages[i] = p; + addr += PAGE_SIZE; + } + + ret = sg_alloc_table_from_pages(sgt, pages, n_pages, 0, fw->size, + GFP_KERNEL); + if (ret) { + ret = -ENOMEM; + goto out; + } + + ret = dma_map_sgtable(&sys->dev, sgt, DMA_TO_DEVICE, 0); + if (ret < 0) { + ret = -ENOMEM; + sg_free_table(sgt); + goto out; + } + + dma_sync_sgtable_for_device(&sys->dev, sgt, DMA_TO_DEVICE); + +out: + kfree(pages); + + return ret; +} +EXPORT_SYMBOL_NS_GPL(ipu6_buttress_map_fw_image, INTEL_IPU6); + +int ipu6_buttress_unmap_fw_image(struct ipu6_bus_device *sys, + struct sg_table *sgt) +{ + dma_unmap_sgtable(&sys->dev, sgt, DMA_TO_DEVICE, 0); + sg_free_table(sgt); + + return 0; +} +EXPORT_SYMBOL_NS_GPL(ipu6_buttress_unmap_fw_image, INTEL_IPU6); + +int ipu6_buttress_authenticate(struct ipu6_device *isp) +{ + struct ipu6_buttress *b = &isp->buttress; + struct ipu6_psys_pdata *psys_pdata; + u32 data, mask, done, fail; + int ret; + + if (!isp->secure_mode) { + dev_dbg(&isp->pdev->dev, "Skip auth for non-secure mode\n"); + return 0; + } + + psys_pdata = isp->psys->pdata; + + mutex_lock(&b->auth_mutex); + + if (ipu6_buttress_auth_done(isp)) { + ret = 0; + goto out_unlock; + } + + /* + * Write address of FIT table to FW_SOURCE register + * Let's use fw address. I.e. 
not using FIT table yet + */ + data = lower_32_bits(isp->psys->pkg_dir_dma_addr); + writel(data, isp->base + BUTTRESS_REG_FW_SOURCE_BASE_LO); + + data = upper_32_bits(isp->psys->pkg_dir_dma_addr); + writel(data, isp->base + BUTTRESS_REG_FW_SOURCE_BASE_HI); + + /* + * Write boot_load into IU2CSEDATA0 + * Write sizeof(boot_load) | 0x2 << CLIENT_ID to + * IU2CSEDB.IU2CSECMD and set IU2CSEDB.IU2CSEBUSY as + */ + dev_info(&isp->pdev->dev, "Sending BOOT_LOAD to CSE\n"); + + ret = ipu6_buttress_ipc_send(isp, IPU6_BUTTRESS_IPC_CSE, + BUTTRESS_IU2CSEDATA0_IPC_BOOT_LOAD, + 1, true, + BUTTRESS_CSE2IUDATA0_IPC_BOOT_LOAD_DONE); + if (ret) { + dev_err(&isp->pdev->dev, "CSE boot_load failed\n"); + goto out_unlock; + } + + mask = BUTTRESS_SECURITY_CTL_FW_SETUP_MASK; + done = BUTTRESS_SECURITY_CTL_FW_SETUP_DONE; + fail = BUTTRESS_SECURITY_CTL_AUTH_FAILED; + ret = readl_poll_timeout(isp->base + BUTTRESS_REG_SECURITY_CTL, data, + ((data & mask) == done || + (data & mask) == fail), 500, + BUTTRESS_CSE_BOOTLOAD_TIMEOUT_US); + if (ret) { + dev_err(&isp->pdev->dev, "CSE boot_load timeout\n"); + goto out_unlock; + } + + if ((data & mask) == fail) { + dev_err(&isp->pdev->dev, "CSE auth failed\n"); + ret = -EINVAL; + goto out_unlock; + } + + ret = readl_poll_timeout(psys_pdata->base + BOOTLOADER_STATUS_OFFSET, + data, data == BOOTLOADER_MAGIC_KEY, 500, + BUTTRESS_CSE_BOOTLOAD_TIMEOUT_US); + if (ret) { + dev_err(&isp->pdev->dev, "Unexpected magic number 0x%x\n", + data); + goto out_unlock; + } + + /* + * Write authenticate_run into IU2CSEDATA0 + * Write sizeof(boot_load) | 0x2 << CLIENT_ID to + * IU2CSEDB.IU2CSECMD and set IU2CSEDB.IU2CSEBUSY as + */ + dev_info(&isp->pdev->dev, "Sending AUTHENTICATE_RUN to CSE\n"); + ret = ipu6_buttress_ipc_send(isp, IPU6_BUTTRESS_IPC_CSE, + BUTTRESS_IU2CSEDATA0_IPC_AUTH_RUN, + 1, true, + BUTTRESS_CSE2IUDATA0_IPC_AUTH_RUN_DONE); + if (ret) { + dev_err(&isp->pdev->dev, "CSE authenticate_run failed\n"); + goto out_unlock; + } + + done = BUTTRESS_SECURITY_CTL_AUTH_DONE; + ret = readl_poll_timeout(isp->base + BUTTRESS_REG_SECURITY_CTL, data, + ((data & mask) == done || + (data & mask) == fail), 500, + BUTTRESS_CSE_AUTHENTICATE_TIMEOUT_US); + if (ret) { + dev_err(&isp->pdev->dev, "CSE authenticate timeout\n"); + goto out_unlock; + } + + if ((data & mask) == fail) { + dev_err(&isp->pdev->dev, "CSE boot_load failed\n"); + ret = -EINVAL; + goto out_unlock; + } + + dev_info(&isp->pdev->dev, "CSE authenticate_run done\n"); + +out_unlock: + mutex_unlock(&b->auth_mutex); + + return ret; +} + +static int ipu6_buttress_send_tsc_request(struct ipu6_device *isp) +{ + u32 val, mask, done; + int ret; + + mask = BUTTRESS_PWR_STATE_HH_STATUS_MASK; + + writel(BUTTRESS_FABRIC_CMD_START_TSC_SYNC, + isp->base + BUTTRESS_REG_FABRIC_CMD); + + val = readl(isp->base + BUTTRESS_REG_PWR_STATE); + val = FIELD_GET(mask, val); + if (val == BUTTRESS_PWR_STATE_HH_STATE_ERR) { + dev_err(&isp->pdev->dev, "Start tsc sync failed\n"); + return -EINVAL; + } + + done = BUTTRESS_PWR_STATE_HH_STATE_DONE; + ret = readl_poll_timeout(isp->base + BUTTRESS_REG_PWR_STATE, val, + FIELD_GET(mask, val) == done, 500, + BUTTRESS_TSC_SYNC_TIMEOUT_US); + if (ret) + dev_err(&isp->pdev->dev, "Start tsc sync timeout\n"); + + return ret; +} + +int ipu6_buttress_start_tsc_sync(struct ipu6_device *isp) +{ + unsigned int i; + + for (i = 0; i < BUTTRESS_TSC_SYNC_RESET_TRIAL_MAX; i++) { + u32 val; + int ret; + + ret = ipu6_buttress_send_tsc_request(isp); + if (ret != -ETIMEDOUT) + return ret; + + val = readl(isp->base + BUTTRESS_REG_TSW_CTL); + 
val = val | BUTTRESS_TSW_CTL_SOFT_RESET; + writel(val, isp->base + BUTTRESS_REG_TSW_CTL); + val = val & ~BUTTRESS_TSW_CTL_SOFT_RESET; + writel(val, isp->base + BUTTRESS_REG_TSW_CTL); + } + + dev_err(&isp->pdev->dev, "TSC sync failed (timeout)\n"); + + return -ETIMEDOUT; +} +EXPORT_SYMBOL_NS_GPL(ipu6_buttress_start_tsc_sync, INTEL_IPU6); + +int ipu6_buttress_tsc_read(struct ipu6_device *isp, u64 *val) +{ + u32 tsc_hi_1, tsc_hi_2, tsc_lo; + unsigned long flags; + + local_irq_save(flags); + tsc_hi_1 = readl(isp->base + BUTTRESS_REG_TSC_HI); + tsc_lo = readl(isp->base + BUTTRESS_REG_TSC_LO); + tsc_hi_2 = readl(isp->base + BUTTRESS_REG_TSC_HI); + if (tsc_hi_1 == tsc_hi_2) { + *val = (u64)tsc_hi_1 << 32 | tsc_lo; + } else { + /* Check if TSC has rolled over */ + if (tsc_lo & BIT(31)) + *val = (u64)tsc_hi_1 << 32 | tsc_lo; + else + *val = (u64)tsc_hi_2 << 32 | tsc_lo; + } + local_irq_restore(flags); + + return 0; +} +EXPORT_SYMBOL_NS_GPL(ipu6_buttress_tsc_read, INTEL_IPU6); + +u64 ipu6_buttress_tsc_ticks_to_ns(u64 ticks, const struct ipu6_device *isp) +{ + u64 ns = ticks * 10000; + + /* + * converting TSC tick count to ns is calculated by: + * Example (TSC clock frequency is 19.2MHz): + * ns = ticks * 1000 000 000 / 19.2Mhz + * = ticks * 1000 000 000 / 19200000Hz + * = ticks * 10000 / 192 ns + */ + return div_u64(ns, isp->buttress.ref_clk); +} +EXPORT_SYMBOL_NS_GPL(ipu6_buttress_tsc_ticks_to_ns, INTEL_IPU6); + +int ipu6_buttress_restore(struct ipu6_device *isp) +{ + struct ipu6_buttress *b = &isp->buttress; + + writel(BUTTRESS_IRQS, isp->base + BUTTRESS_REG_ISR_CLEAR); + writel(BUTTRESS_IRQS, isp->base + BUTTRESS_REG_ISR_ENABLE); + writel(b->wdt_cached_value, isp->base + BUTTRESS_REG_WDT); + + return 0; +} + +int ipu6_buttress_init(struct ipu6_device *isp) +{ + int ret, ipc_reset_retry = BUTTRESS_CSE_IPC_RESET_RETRY; + struct ipu6_buttress *b = &isp->buttress; + u32 val; + + mutex_init(&b->power_mutex); + mutex_init(&b->auth_mutex); + mutex_init(&b->cons_mutex); + mutex_init(&b->ipc_mutex); + init_completion(&b->ish.send_complete); + init_completion(&b->cse.send_complete); + init_completion(&b->ish.recv_complete); + init_completion(&b->cse.recv_complete); + + b->cse.nack = BUTTRESS_CSE2IUDATA0_IPC_NACK; + b->cse.nack_mask = BUTTRESS_CSE2IUDATA0_IPC_NACK_MASK; + b->cse.csr_in = BUTTRESS_REG_CSE2IUCSR; + b->cse.csr_out = BUTTRESS_REG_IU2CSECSR; + b->cse.db0_in = BUTTRESS_REG_CSE2IUDB0; + b->cse.db0_out = BUTTRESS_REG_IU2CSEDB0; + b->cse.data0_in = BUTTRESS_REG_CSE2IUDATA0; + b->cse.data0_out = BUTTRESS_REG_IU2CSEDATA0; + + /* no ISH on IPU6 */ + memset(&b->ish, 0, sizeof(b->ish)); + INIT_LIST_HEAD(&b->constraints); + + isp->secure_mode = ipu6_buttress_get_secure_mode(isp); + dev_info(&isp->pdev->dev, "IPU6 in %s mode\n", + isp->secure_mode ? 
"secure" : "non-secure"); + + dev_info(&isp->pdev->dev, "IPU6 secure touch = 0x%x\n", + readl(isp->base + BUTTRESS_REG_SECURITY_TOUCH)); + + dev_info(&isp->pdev->dev, "IPU6 camera mask = 0x%x\n", + readl(isp->base + BUTTRESS_REG_CAMERA_MASK)); + + b->wdt_cached_value = readl(isp->base + BUTTRESS_REG_WDT); + writel(BUTTRESS_IRQS, isp->base + BUTTRESS_REG_ISR_CLEAR); + writel(BUTTRESS_IRQS, isp->base + BUTTRESS_REG_ISR_ENABLE); + + /* get ref_clk frequency by reading the indication in btrs control */ + val = readl(isp->base + BUTTRESS_REG_BTRS_CTRL); + val = FIELD_GET(BUTTRESS_REG_BTRS_CTRL_REF_CLK_IND, val); + + switch (val) { + case 0x0: + b->ref_clk = 240; + break; + case 0x1: + b->ref_clk = 192; + break; + case 0x2: + b->ref_clk = 384; + break; + default: + dev_warn(&isp->pdev->dev, + "Unsupported ref clock, use 19.2Mhz by default.\n"); + b->ref_clk = 192; + break; + } + + /* Retry couple of times in case of CSE initialization is delayed */ + do { + ret = ipu6_buttress_ipc_reset(isp, &b->cse); + if (ret) { + dev_warn(&isp->pdev->dev, + "IPC reset protocol failed, retrying\n"); + } else { + dev_info(&isp->pdev->dev, "IPC reset done\n"); + return 0; + } + } while (ipc_reset_retry--); + + dev_err(&isp->pdev->dev, "IPC reset protocol failed\n"); + + mutex_destroy(&b->power_mutex); + mutex_destroy(&b->auth_mutex); + mutex_destroy(&b->cons_mutex); + mutex_destroy(&b->ipc_mutex); + + return ret; +} + +void ipu6_buttress_exit(struct ipu6_device *isp) +{ + struct ipu6_buttress *b = &isp->buttress; + + writel(0, isp->base + BUTTRESS_REG_ISR_ENABLE); + + mutex_destroy(&b->power_mutex); + mutex_destroy(&b->auth_mutex); + mutex_destroy(&b->cons_mutex); + mutex_destroy(&b->ipc_mutex); +} diff --git a/drivers/media/pci/intel/ipu6/ipu6-buttress.h b/drivers/media/pci/intel/ipu6/ipu6-buttress.h new file mode 100644 index 000000000000..c5a0b1d0c851 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-buttress.h @@ -0,0 +1,109 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2013 - 2023 Intel Corporation */ + +#ifndef IPU6_BUTTRESS_H +#define IPU6_BUTTRESS_H + +#include +#include + +struct firmware; +struct ipu6_device; +struct ipu6_bus_device; + +#define IPU6_BUTTRESS_NUM_OF_SENS_CKS 3 +#define IPU6_BUTTRESS_NUM_OF_PLL_CKS 3 + +#define BUTTRESS_PS_FREQ_STEP 25U +#define BUTTRESS_MIN_FORCE_PS_FREQ (BUTTRESS_PS_FREQ_STEP * 8) +#define BUTTRESS_MAX_FORCE_PS_FREQ (BUTTRESS_PS_FREQ_STEP * 32) + +#define BUTTRESS_IS_FREQ_STEP 25U +#define BUTTRESS_MIN_FORCE_IS_FREQ (BUTTRESS_IS_FREQ_STEP * 8) +#define BUTTRESS_MAX_FORCE_IS_FREQ (BUTTRESS_IS_FREQ_STEP * 16) + +struct ipu6_buttress_ctrl { + u32 freq_ctl, pwr_sts_shift, pwr_sts_mask, pwr_sts_on, pwr_sts_off; + unsigned int ratio; + unsigned int qos_floor; + bool started; +}; + +struct ipu6_buttress_fused_freqs { + unsigned int min_freq; + unsigned int max_freq; + unsigned int efficient_freq; +}; + +struct ipu6_buttress_ipc { + struct completion send_complete; + struct completion recv_complete; + u32 nack; + u32 nack_mask; + u32 recv_data; + u32 csr_out; + u32 csr_in; + u32 db0_in; + u32 db0_out; + u32 data0_out; + u32 data0_in; +}; + +struct ipu6_buttress { + struct mutex power_mutex, auth_mutex, cons_mutex, ipc_mutex; + struct ipu6_buttress_ipc cse; + struct ipu6_buttress_ipc ish; + struct list_head constraints; + u32 wdt_cached_value; + bool force_suspend; + u32 ref_clk; +}; + +struct ipu6_buttress_sensor_clk_freq { + unsigned int rate; + unsigned int val; +}; + +enum ipu6_buttress_ipc_domain { + IPU6_BUTTRESS_IPC_CSE, + IPU6_BUTTRESS_IPC_ISH, 
+}; + +struct ipu6_buttress_constraint { + struct list_head list; + unsigned int min_freq; +}; + +struct ipu6_ipc_buttress_bulk_msg { + u32 cmd; + u32 expected_resp; + bool require_resp; + u8 cmd_size; +}; + +int ipu6_buttress_ipc_reset(struct ipu6_device *isp, + struct ipu6_buttress_ipc *ipc); +int ipu6_buttress_map_fw_image(struct ipu6_bus_device *sys, + const struct firmware *fw, + struct sg_table *sgt); +int ipu6_buttress_unmap_fw_image(struct ipu6_bus_device *sys, + struct sg_table *sgt); +int ipu6_buttress_power(struct device *dev, struct ipu6_buttress_ctrl *ctrl, + bool on); +bool ipu6_buttress_get_secure_mode(struct ipu6_device *isp); +int ipu6_buttress_authenticate(struct ipu6_device *isp); +int ipu6_buttress_reset_authentication(struct ipu6_device *isp); +bool ipu6_buttress_auth_done(struct ipu6_device *isp); +int ipu6_buttress_start_tsc_sync(struct ipu6_device *isp); +int ipu6_buttress_tsc_read(struct ipu6_device *isp, u64 *val); +u64 ipu6_buttress_tsc_ticks_to_ns(u64 ticks, const struct ipu6_device *isp); + +irqreturn_t ipu6_buttress_isr(int irq, void *isp_ptr); +irqreturn_t ipu6_buttress_isr_threaded(int irq, void *isp_ptr); +int ipu6_buttress_debugfs_init(struct ipu6_device *isp); +int ipu6_buttress_init(struct ipu6_device *isp); +void ipu6_buttress_exit(struct ipu6_device *isp); +void ipu6_buttress_csi_port_config(struct ipu6_device *isp, + u32 legacy, u32 combo); +int ipu6_buttress_restore(struct ipu6_device *isp); +#endif /* IPU6_BUTTRESS_H */ diff --git a/drivers/media/pci/intel/ipu6/ipu6-platform-buttress-regs.h b/drivers/media/pci/intel/ipu6/ipu6-platform-buttress-regs.h new file mode 100644 index 000000000000..b460a750d293 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-platform-buttress-regs.h @@ -0,0 +1,231 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2023 Intel Corporation */ + +#ifndef IPU6_PLATFORM_BUTTRESS_REGS_H +#define IPU6_PLATFORM_BUTTRESS_REGS_H + +/* IS_WORKPOINT_REQ */ +#define IPU6_BUTTRESS_REG_IS_FREQ_CTL 0x34 +/* PS_WORKPOINT_REQ */ +#define IPU6_BUTTRESS_REG_PS_FREQ_CTL 0x38 + +#define IPU6_IS_FREQ_MAX 533 +#define IPU6_IS_FREQ_MIN 200 +#define IPU6_PS_FREQ_MAX 450 +#define IPU6_IS_FREQ_RATIO_BASE 25 +#define IPU6_PS_FREQ_RATIO_BASE 25 + +/* should be tuned for real silicon */ +#define IPU6_IS_FREQ_CTL_DEFAULT_RATIO 0x08 +#define IPU6SE_IS_FREQ_CTL_DEFAULT_RATIO 0x0a +#define IPU6_PS_FREQ_CTL_DEFAULT_RATIO 0x0d + +#define IPU6_IS_FREQ_CTL_DEFAULT_QOS_FLOOR_RATIO 0x10 +#define IPU6_PS_FREQ_CTL_DEFAULT_QOS_FLOOR_RATIO 0x0708 + +#define IPU6_BUTTRESS_PWR_STATE_IS_PWR_SHIFT 3 +#define IPU6_BUTTRESS_PWR_STATE_IS_PWR_MASK GENMASK(4, 3) + +#define IPU6_BUTTRESS_PWR_STATE_PS_PWR_SHIFT 6 +#define IPU6_BUTTRESS_PWR_STATE_PS_PWR_MASK GENMASK(7, 6) + +#define IPU6_BUTTRESS_PWR_STATE_DN_DONE 0x0 +#define IPU6_BUTTRESS_PWR_STATE_UP_PROCESS 0x1 +#define IPU6_BUTTRESS_PWR_STATE_DN_PROCESS 0x2 +#define IPU6_BUTTRESS_PWR_STATE_UP_DONE 0x3 + +#define IPU6_BUTTRESS_REG_FPGA_SUPPORT_0 0x270 +#define IPU6_BUTTRESS_REG_FPGA_SUPPORT_1 0x274 +#define IPU6_BUTTRESS_REG_FPGA_SUPPORT_2 0x278 +#define IPU6_BUTTRESS_REG_FPGA_SUPPORT_3 0x27c +#define IPU6_BUTTRESS_REG_FPGA_SUPPORT_4 0x280 +#define IPU6_BUTTRESS_REG_FPGA_SUPPORT_5 0x284 +#define IPU6_BUTTRESS_REG_FPGA_SUPPORT_6 0x288 +#define IPU6_BUTTRESS_REG_FPGA_SUPPORT_7 0x28c + +#define BUTTRESS_REG_WDT 0x8 +#define BUTTRESS_REG_BTRS_CTRL 0xc +#define BUTTRESS_REG_BTRS_CTRL_STALL_MODE_VC0 BIT(0) +#define BUTTRESS_REG_BTRS_CTRL_STALL_MODE_VC1 BIT(1) +#define BUTTRESS_REG_BTRS_CTRL_REF_CLK_IND 
GENMASK(9, 8) + +#define BUTTRESS_REG_FW_RESET_CTL 0x30 +#define BUTTRESS_FW_RESET_CTL_START BIT(0) +#define BUTTRESS_FW_RESET_CTL_DONE BIT(1) + +#define BUTTRESS_REG_IS_FREQ_CTL 0x34 +#define BUTTRESS_REG_PS_FREQ_CTL 0x38 + +#define BUTTRESS_FREQ_CTL_START BIT(31) +#define BUTTRESS_FREQ_CTL_ICCMAX_LEVEL GENMASK(19, 16) +#define BUTTRESS_FREQ_CTL_QOS_FLOOR_MASK GENMASK(15, 8) +#define BUTTRESS_FREQ_CTL_RATIO_MASK GENMASK(7, 0) + +#define BUTTRESS_REG_PWR_STATE 0x5c + +#define BUTTRESS_PWR_STATE_RESET 0x0 +#define BUTTRESS_PWR_STATE_PWR_ON_DONE 0x1 +#define BUTTRESS_PWR_STATE_PWR_RDY 0x3 +#define BUTTRESS_PWR_STATE_PWR_IDLE 0x4 + +#define BUTTRESS_PWR_STATE_HH_STATUS_MASK GENMASK(12, 11) + +enum { + BUTTRESS_PWR_STATE_HH_STATE_IDLE, + BUTTRESS_PWR_STATE_HH_STATE_IN_PRGS, + BUTTRESS_PWR_STATE_HH_STATE_DONE, + BUTTRESS_PWR_STATE_HH_STATE_ERR, +}; + +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_MASK GENMASK(23, 19) + +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_IDLE 0x0 +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_WAIT_4_PLL_CMP 0x1 +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_WAIT_4_CLKACK 0x2 +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_WAIT_4_PG_ACK 0x3 +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_RST_ASSRT_CYCLES 0x4 +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_STOP_CLK_CYCLES1 0x5 +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_STOP_CLK_CYCLES2 0x6 +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_RST_DEASSRT_CYCLES 0x7 +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_WAIT_4_FUSE_WR_CMP 0x8 +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_BRK_POINT 0x9 +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_IS_RDY 0xa +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_HALT_HALTED 0xb +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_RST_DURATION_CNT3 0xc +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_WAIT_4_CLKACK_PD 0xd +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_PD_BRK_POINT 0xe +#define BUTTRESS_PWR_STATE_IS_PWR_FSM_WAIT_4_PD_PG_ACK0 0xf + +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_MASK GENMASK(28, 24) + +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_IDLE 0x0 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WAIT_PU_PLL_IP_RDY 0x1 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WAIT_RO_PRE_CNT_EXH 0x2 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WAIT_PU_VGI_PWRGOOD 0x3 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WAIT_RO_POST_CNT_EXH 0x4 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WR_PLL_RATIO 0x5 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WAIT_PU_PLL_CMP 0x6 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WAIT_PU_CLKACK 0x7 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_RST_ASSRT_CYCLES 0x8 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_STOP_CLK_CYCLES1 0x9 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_STOP_CLK_CYCLES2 0xa +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_RST_DEASSRT_CYCLES 0xb +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_PU_BRK_PNT 0xc +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WAIT_FUSE_ACCPT 0xd +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_PS_PWR_UP 0xf +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WAIT_4_HALTED 0x10 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_RESET_CNT3 0x11 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WAIT_PD_CLKACK 0x12 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WAIT_PD_OFF_IND 0x13 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WAIT_DVFS_PH4 0x14 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WAIT_DVFS_PLL_CMP 0x15 +#define BUTTRESS_PWR_STATE_PS_PWR_FSM_WAIT_DVFS_CLKACK 0x16 + +#define BUTTRESS_REG_SECURITY_CTL 0x300 +#define BUTTRESS_REG_SKU 0x314 +#define BUTTRESS_REG_SECURITY_TOUCH 0x318 +#define BUTTRESS_REG_CAMERA_MASK 0x84 + +#define BUTTRESS_SECURITY_CTL_FW_SECURE_MODE BIT(16) +#define BUTTRESS_SECURITY_CTL_FW_SETUP_MASK GENMASK(4, 0) + +#define BUTTRESS_SECURITY_CTL_FW_SETUP_DONE BIT(0) +#define 
BUTTRESS_SECURITY_CTL_AUTH_DONE BIT(1) +#define BUTTRESS_SECURITY_CTL_AUTH_FAILED BIT(3) + +#define BUTTRESS_REG_FW_SOURCE_BASE_LO 0x78 +#define BUTTRESS_REG_FW_SOURCE_BASE_HI 0x7C +#define BUTTRESS_REG_FW_SOURCE_SIZE 0x80 + +#define BUTTRESS_REG_ISR_STATUS 0x90 +#define BUTTRESS_REG_ISR_ENABLED_STATUS 0x94 +#define BUTTRESS_REG_ISR_ENABLE 0x98 +#define BUTTRESS_REG_ISR_CLEAR 0x9C + +#define BUTTRESS_ISR_IS_IRQ BIT(0) +#define BUTTRESS_ISR_PS_IRQ BIT(1) +#define BUTTRESS_ISR_IPC_EXEC_DONE_BY_CSE BIT(2) +#define BUTTRESS_ISR_IPC_EXEC_DONE_BY_ISH BIT(3) +#define BUTTRESS_ISR_IPC_FROM_CSE_IS_WAITING BIT(4) +#define BUTTRESS_ISR_IPC_FROM_ISH_IS_WAITING BIT(5) +#define BUTTRESS_ISR_CSE_CSR_SET BIT(6) +#define BUTTRESS_ISR_ISH_CSR_SET BIT(7) +#define BUTTRESS_ISR_SPURIOUS_CMP BIT(8) +#define BUTTRESS_ISR_WATCHDOG_EXPIRED BIT(9) +#define BUTTRESS_ISR_PUNIT_2_IUNIT_IRQ BIT(10) +#define BUTTRESS_ISR_SAI_VIOLATION BIT(11) +#define BUTTRESS_ISR_HW_ASSERTION BIT(12) +#define BUTTRESS_ISR_IS_CORRECTABLE_MEM_ERR BIT(13) +#define BUTTRESS_ISR_IS_FATAL_MEM_ERR BIT(14) +#define BUTTRESS_ISR_IS_NON_FATAL_MEM_ERR BIT(15) +#define BUTTRESS_ISR_PS_CORRECTABLE_MEM_ERR BIT(16) +#define BUTTRESS_ISR_PS_FATAL_MEM_ERR BIT(17) +#define BUTTRESS_ISR_PS_NON_FATAL_MEM_ERR BIT(18) +#define BUTTRESS_ISR_PS_FAST_THROTTLE BIT(19) +#define BUTTRESS_ISR_UFI_ERROR BIT(20) + +#define BUTTRESS_REG_IU2CSEDB0 0x100 + +#define BUTTRESS_IU2CSEDB0_BUSY BIT(31) +#define BUTTRESS_IU2CSEDB0_IPC_CLIENT_ID_VAL 2 + +#define BUTTRESS_REG_IU2CSEDATA0 0x104 + +#define BUTTRESS_IU2CSEDATA0_IPC_BOOT_LOAD 1 +#define BUTTRESS_IU2CSEDATA0_IPC_AUTH_RUN 2 +#define BUTTRESS_IU2CSEDATA0_IPC_AUTH_REPLACE 3 +#define BUTTRESS_IU2CSEDATA0_IPC_UPDATE_SECURE_TOUCH 16 + +#define BUTTRESS_CSE2IUDATA0_IPC_BOOT_LOAD_DONE BIT(0) +#define BUTTRESS_CSE2IUDATA0_IPC_AUTH_RUN_DONE BIT(1) +#define BUTTRESS_CSE2IUDATA0_IPC_AUTH_REPLACE_DONE BIT(2) +#define BUTTRESS_CSE2IUDATA0_IPC_UPDATE_SECURE_TOUCH_DONE BIT(4) + +#define BUTTRESS_REG_IU2CSECSR 0x108 + +#define BUTTRESS_IU2CSECSR_IPC_PEER_COMP_ACTIONS_RST_PHASE1 BIT(0) +#define BUTTRESS_IU2CSECSR_IPC_PEER_COMP_ACTIONS_RST_PHASE2 BIT(1) +#define BUTTRESS_IU2CSECSR_IPC_PEER_QUERIED_IP_COMP_ACTIONS_RST_PHASE BIT(2) +#define BUTTRESS_IU2CSECSR_IPC_PEER_ASSERTED_REG_VALID_REQ BIT(3) +#define BUTTRESS_IU2CSECSR_IPC_PEER_ACKED_REG_VALID BIT(4) +#define BUTTRESS_IU2CSECSR_IPC_PEER_DEASSERTED_REG_VALID_REQ BIT(5) + +#define BUTTRESS_REG_CSE2IUDB0 0x304 +#define BUTTRESS_REG_CSE2IUCSR 0x30C +#define BUTTRESS_REG_CSE2IUDATA0 0x308 + +/* 0x20 == NACK, 0xf == unknown command */ +#define BUTTRESS_CSE2IUDATA0_IPC_NACK 0xf20 +#define BUTTRESS_CSE2IUDATA0_IPC_NACK_MASK GENMASK(15, 0) + +#define BUTTRESS_REG_ISH2IUCSR 0x50 +#define BUTTRESS_REG_ISH2IUDB0 0x54 +#define BUTTRESS_REG_ISH2IUDATA0 0x58 + +#define BUTTRESS_REG_IU2ISHDB0 0x10C +#define BUTTRESS_REG_IU2ISHDATA0 0x110 +#define BUTTRESS_REG_IU2ISHDATA1 0x114 +#define BUTTRESS_REG_IU2ISHCSR 0x118 + +#define BUTTRESS_REG_FABRIC_CMD 0x88 + +#define BUTTRESS_FABRIC_CMD_START_TSC_SYNC BIT(0) +#define BUTTRESS_FABRIC_CMD_IS_DRAIN BIT(4) + +#define BUTTRESS_REG_TSW_CTL 0x120 +#define BUTTRESS_TSW_CTL_SOFT_RESET BIT(8) + +#define BUTTRESS_REG_TSC_LO 0x164 +#define BUTTRESS_REG_TSC_HI 0x168 + +#define BUTTRESS_IRQS (BUTTRESS_ISR_IPC_FROM_CSE_IS_WAITING | \ + BUTTRESS_ISR_IPC_EXEC_DONE_BY_CSE | \ + BUTTRESS_ISR_IS_IRQ | \ + BUTTRESS_ISR_PS_IRQ) + +#define BUTTRESS_EVENT (BUTTRESS_ISR_IPC_FROM_CSE_IS_WAITING | \ + BUTTRESS_ISR_IPC_FROM_ISH_IS_WAITING | \ + 
BUTTRESS_ISR_IPC_EXEC_DONE_BY_CSE | \ + BUTTRESS_ISR_IPC_EXEC_DONE_BY_ISH | \ + BUTTRESS_ISR_SAI_VIOLATION) +#endif /* IPU6_PLATFORM_BUTTRESS_REGS_H */ From patchwork Thu Apr 13 10:04:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bingbu Cao X-Patchwork-Id: 673047 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EE3C0C77B6E for ; Thu, 13 Apr 2023 09:55:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230054AbjDMJzF (ORCPT ); Thu, 13 Apr 2023 05:55:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44962 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229989AbjDMJzA (ORCPT ); Thu, 13 Apr 2023 05:55:00 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 58E1C903D for ; Thu, 13 Apr 2023 02:54:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1681379685; x=1712915685; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=OIi/hVk/pkGaa2KaIfEyhvNfnYx1sNth5+0TXu1onzc=; b=cY2W3hJ7Bbt72DssTeIgU1Fr5OeSksB0vMXS1NJhDp6CCa021Oeykhm1 LEULAy7+o5zxlDDj9UiaVnQgt3aIrxY98QdGd+FcFS45likV91n3lxv/r PkMxIFibObry3bZHcfFWK34dpw3R1JPt2dXMVv3Z8eeQNHbCSfq6gRUbe dG0BWOKK3cpiEpmXwQEB/PteWHMMPmJK0zWXKY5jBeZTkORQTj1QL4Z/D WW3oXFqgFZjA0LaGxPLKVojajvMOeROX9hTxvoBvnTxYGHEBuXsv5Dcsx 4jNkRB6fK4CCsy4/x0NU9JxN2o+ZFTeG56JF3YgsEvY2IK6Kl8J5ruilz w==; X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="371992938" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="371992938" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2023 02:54:43 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="639600033" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="639600033" Received: from icg-kernel3.bj.intel.com ([172.16.126.100]) by orsmga003.jf.intel.com with ESMTP; 13 Apr 2023 02:54:39 -0700 From: bingbu.cao@intel.com To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com Subject: [RFC PATCH 04/14] media: intel/ipu6: CPD parsing for get firmware components Date: Thu, 13 Apr 2023 18:04:19 +0800 Message-Id: <20230413100429.919622-5-bingbu.cao@intel.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230413100429.919622-1-bingbu.cao@intel.com> References: <20230413100429.919622-1-bingbu.cao@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Bingbu Cao For IPU6, the firmware is generated and released as a signed Code Partition Directory (CPD) file, a format aligned with the SPI flash code partition definition. A CPD image consists of a CPD header, a manifest, metadata and module data. The driver parses these according to the CPD layout to locate each component.
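
As a reading aid for the CPD layout described above, here is a minimal sketch (not part of the patch) of how the fixed entry table is walked. It assumes the struct ipu6_cpd_hdr and struct ipu6_cpd_ent definitions added in ipu6-cpd.h below plus linux/firmware.h, and it leaves out all of the bounds and sanity checking that ipu6_cpd_validate_cpd_file() performs; the helper names used here are hypothetical.

/*
 * Sketch of walking the CPD entry table.  The $CPD header is followed by an
 * array of entries; by convention entry 0 is the manifest, entry 1 the
 * metadata and entry 2 the module data (the same indices encoded by the
 * ipu6_cpd_get_*() macros below).  Validation is omitted here.
 */
static const struct ipu6_cpd_ent *cpd_entry_sketch(const void *cpd, u8 idx)
{
        const struct ipu6_cpd_hdr *hdr = cpd;
        const struct ipu6_cpd_ent *tbl;

        /* Entry table starts right after the (version-dependent) header. */
        tbl = (const struct ipu6_cpd_ent *)((const u8 *)cpd + hdr->hdr_len);
        return &tbl[idx];
}

static const void *cpd_module_data_sketch(const struct firmware *fw, u32 *size)
{
        /* Index 2 == MODULEDATA_IDX in ipu6-cpd.c below. */
        const struct ipu6_cpd_ent *ent = cpd_entry_sketch(fw->data, 2);

        *size = ent->len;
        return fw->data + ent->offset;
}

ipu6_cpd_create_pkg_dir() then takes the module data located this way, combines it with the component IDs and versions read from the metadata, and packs everything into the 16-entry pkg_dir table that the buttress firmware authentication step consumes.
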
Signed-off-by: Bingbu Cao --- drivers/media/pci/intel/ipu6/ipu6-cpd.c | 359 ++++++++++++++++++++++++ drivers/media/pci/intel/ipu6/ipu6-cpd.h | 107 +++++++ 2 files changed, 466 insertions(+) create mode 100644 drivers/media/pci/intel/ipu6/ipu6-cpd.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-cpd.h diff --git a/drivers/media/pci/intel/ipu6/ipu6-cpd.c b/drivers/media/pci/intel/ipu6/ipu6-cpd.c new file mode 100644 index 000000000000..23eb0ed96686 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-cpd.c @@ -0,0 +1,359 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2015 - 2023 Intel Corporation + +#include +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-cpd.h" + +/* 15 entries + header*/ +#define MAX_PKG_DIR_ENT_CNT 16 +/* 2 qword per entry/header */ +#define PKG_DIR_ENT_LEN 2 +/* PKG_DIR size in bytes */ +#define PKG_DIR_SIZE ((MAX_PKG_DIR_ENT_CNT) * \ + (PKG_DIR_ENT_LEN) * sizeof(u64)) +/* _IUPKDR_ */ +#define PKG_DIR_HDR_MARK 0x5f4955504b44525f + +/* $CPD */ +#define CPD_HDR_MARK 0x44504324 + +#define MAX_MANIFEST_SIZE (SZ_2K * sizeof(u32)) +#define MAX_METADATA_SIZE SZ_64K + +#define MAX_COMPONENT_ID 127 +#define MAX_COMPONENT_VERSION 0xffff + +#define MANIFEST_IDX 0 +#define METADATA_IDX 1 +#define MODULEDATA_IDX 2 +/* + * PKG_DIR Entry (type == id) + * 63:56 55 54:48 47:32 31:24 23:0 + * Rsvd Rsvd Type Version Rsvd Size + */ +#define PKG_DIR_SIZE_MASK GENMASK(23, 0) +#define PKG_DIR_VERSION_MASK GENMASK(47, 32) +#define PKG_DIR_TYPE_MASK GENMASK(54, 48) + +static inline const struct ipu6_cpd_ent *ipu6_cpd_get_entry(const void *cpd, + u8 idx) +{ + const struct ipu6_cpd_hdr *cpd_hdr = cpd; + const struct ipu6_cpd_ent *ent; + + ent = (const struct ipu6_cpd_ent *)((const u8 *)cpd + cpd_hdr->hdr_len); + return ent + idx; +} + +#define ipu6_cpd_get_manifest(cpd) ipu6_cpd_get_entry(cpd, MANIFEST_IDX) +#define ipu6_cpd_get_metadata(cpd) ipu6_cpd_get_entry(cpd, METADATA_IDX) +#define ipu6_cpd_get_moduledata(cpd) ipu6_cpd_get_entry(cpd, MODULEDATA_IDX) + +static const struct ipu6_cpd_metadata_cmpnt_hdr * +ipu6_cpd_metadata_get_cmpnt(struct ipu6_device *isp, const void *metadata, + unsigned int metadata_size, u8 idx) +{ + size_t extn_size = sizeof(struct ipu6_cpd_metadata_extn); + size_t cmpnt_count = metadata_size - extn_size; + + cmpnt_count = div_u64(cmpnt_count, isp->cpd_metadata_cmpnt_size); + + if (idx > MAX_COMPONENT_ID || idx >= cmpnt_count) { + dev_err(&isp->pdev->dev, "Component index out of range (%d)\n", + idx); + return ERR_PTR(-EINVAL); + } + + return metadata + extn_size + idx * isp->cpd_metadata_cmpnt_size; +} + +static u32 ipu6_cpd_metadata_cmpnt_version(struct ipu6_device *isp, + const void *metadata, + unsigned int metadata_size, u8 idx) +{ + const struct ipu6_cpd_metadata_cmpnt_hdr *cmpnt = + ipu6_cpd_metadata_get_cmpnt(isp, metadata, metadata_size, idx); + + if (IS_ERR(cmpnt)) + return PTR_ERR(cmpnt); + + return cmpnt->ver; +} + +static int ipu6_cpd_metadata_get_cmpnt_id(struct ipu6_device *isp, + const void *metadata, + unsigned int metadata_size, u8 idx) +{ + const struct ipu6_cpd_metadata_cmpnt_hdr *cmpnt = + ipu6_cpd_metadata_get_cmpnt(isp, metadata, + metadata_size, idx); + + if (IS_ERR(cmpnt)) + return PTR_ERR(cmpnt); + + return cmpnt->id; +} + +static int ipu6_cpd_parse_module_data(struct ipu6_device *isp, + const void *module_data, + unsigned int module_data_size, + dma_addr_t dma_addr_module_data, + u64 *pkg_dir, const void *metadata, + unsigned int metadata_size) +{ + const struct ipu6_cpd_module_data_hdr 
*module_data_hdr; + const struct ipu6_cpd_hdr *dir_hdr; + const struct ipu6_cpd_ent *dir_ent; + unsigned int i; + u8 len; + + if (!module_data) + return -EINVAL; + + module_data_hdr = module_data; + dir_hdr = module_data + module_data_hdr->hdr_len; + len = dir_hdr->hdr_len; + dir_ent = (const struct ipu6_cpd_ent *)(((u8 *)dir_hdr) + len); + + pkg_dir[0] = PKG_DIR_HDR_MARK; + /* pkg_dir entry count = component count + pkg_dir header */ + pkg_dir[1] = dir_hdr->ent_cnt + 1; + + for (i = 0; i < dir_hdr->ent_cnt; i++, dir_ent++) { + u64 *p = &pkg_dir[PKG_DIR_ENT_LEN * (1 + i)]; + int ver, id; + + *p++ = dma_addr_module_data + dir_ent->offset; + + id = ipu6_cpd_metadata_get_cmpnt_id(isp, metadata, + metadata_size, i); + + if (id < 0 || id > MAX_COMPONENT_ID) { + dev_err(&isp->pdev->dev, "Invalid CPD component id\n"); + return -EINVAL; + } + + ver = ipu6_cpd_metadata_cmpnt_version(isp, metadata, + metadata_size, i); + + if (ver < 0 || ver > MAX_COMPONENT_VERSION) { + dev_err(&isp->pdev->dev, + "Invalid CPD component version\n"); + return -EINVAL; + } + + *p = FIELD_PREP(PKG_DIR_SIZE_MASK, dir_ent->len) | + FIELD_PREP(PKG_DIR_TYPE_MASK, id) | + FIELD_PREP(PKG_DIR_VERSION_MASK, ver); + } + + return 0; +} + +int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src) +{ + dma_addr_t dma_addr_src = sg_dma_address(adev->fw_sgt.sgl); + const struct ipu6_cpd_ent *ent, *man_ent, *met_ent; + struct ipu6_device *isp = adev->isp; + unsigned int man_sz, met_sz; + void *pkg_dir_pos; + int ret; + + man_ent = ipu6_cpd_get_manifest(src); + man_sz = man_ent->len; + + met_ent = ipu6_cpd_get_metadata(src); + met_sz = met_ent->len; + + adev->pkg_dir_size = PKG_DIR_SIZE + man_sz + met_sz; + adev->pkg_dir = dma_alloc_attrs(&adev->dev, adev->pkg_dir_size, + &adev->pkg_dir_dma_addr, GFP_KERNEL, 0); + if (!adev->pkg_dir) + return -ENOMEM; + + /* + * pkg_dir entry/header: + * qword | 63:56 | 55 | 54:48 | 47:32 | 31:24 | 23:0 + * N Address/Offset/"_IUPKDR_" + * N + 1 | rsvd | rsvd | type | ver | rsvd | size + * + * We can ignore other fields that size in N + 1 qword as they + * are 0 anyway. Just setting size for now. 
+ */ + + ent = ipu6_cpd_get_moduledata(src); + + ret = ipu6_cpd_parse_module_data(isp, src + ent->offset, + ent->len, dma_addr_src + ent->offset, + adev->pkg_dir, src + met_ent->offset, + met_ent->len); + if (ret) { + dev_err(&isp->pdev->dev, "Failed to parse module data\n"); + dma_free_attrs(&isp->psys->dev, adev->pkg_dir_size, + adev->pkg_dir, adev->pkg_dir_dma_addr, 0); + return -EINVAL; + } + + /* Copy manifest after pkg_dir */ + pkg_dir_pos = adev->pkg_dir + PKG_DIR_ENT_LEN * MAX_PKG_DIR_ENT_CNT; + memcpy(pkg_dir_pos, src + man_ent->offset, man_sz); + + /* Copy metadata after manifest */ + pkg_dir_pos += man_sz; + memcpy(pkg_dir_pos, src + met_ent->offset, met_sz); + + dma_sync_single_range_for_device(&adev->dev, adev->pkg_dir_dma_addr, + 0, adev->pkg_dir_size, DMA_TO_DEVICE); + + return 0; +} +EXPORT_SYMBOL_NS_GPL(ipu6_cpd_create_pkg_dir, INTEL_IPU6); + +void ipu6_cpd_free_pkg_dir(struct ipu6_bus_device *adev) +{ + dma_free_attrs(&adev->dev, adev->pkg_dir_size, adev->pkg_dir, + adev->pkg_dir_dma_addr, 0); +} +EXPORT_SYMBOL_NS_GPL(ipu6_cpd_free_pkg_dir, INTEL_IPU6); + +static int ipu6_cpd_validate_cpd(struct ipu6_device *isp, const void *cpd, + unsigned long cpd_size, + unsigned long data_size) +{ + const struct ipu6_cpd_hdr *cpd_hdr = cpd; + const struct ipu6_cpd_ent *ent; + unsigned int i; + u8 len; + + len = cpd_hdr->hdr_len; + + /* Ensure cpd hdr is within moduledata */ + if (cpd_size < len) { + dev_err(&isp->pdev->dev, "Invalid CPD moduledata size\n"); + return -EINVAL; + } + + /* Sanity check for CPD header */ + if ((cpd_size - len) / sizeof(*ent) < cpd_hdr->ent_cnt) { + dev_err(&isp->pdev->dev, "Invalid CPD header\n"); + return -EINVAL; + } + + /* Ensure that all entries are within moduledata */ + ent = (const struct ipu6_cpd_ent *)(((const u8 *)cpd_hdr) + len); + for (i = 0; i < cpd_hdr->ent_cnt; i++, ent++) { + if (data_size < ent->offset || + data_size - ent->offset < ent->len) { + dev_err(&isp->pdev->dev, "Invalid CPD entry (%d)\n", i); + return -EINVAL; + } + } + + return 0; +} + +static int ipu6_cpd_validate_moduledata(struct ipu6_device *isp, + const void *moduledata, + u32 moduledata_size) +{ + const struct ipu6_cpd_module_data_hdr *mod_hdr = moduledata; + int ret; + + /* Ensure moduledata hdr is within moduledata */ + if (moduledata_size < sizeof(*mod_hdr) || + moduledata_size < mod_hdr->hdr_len) { + dev_err(&isp->pdev->dev, "Invalid CPD moduledata size\n"); + return -EINVAL; + } + + dev_info(&isp->pdev->dev, "FW version: %x\n", mod_hdr->fw_pkg_date); + ret = ipu6_cpd_validate_cpd(isp, moduledata + mod_hdr->hdr_len, + moduledata_size - mod_hdr->hdr_len, + moduledata_size); + if (ret) { + dev_err(&isp->pdev->dev, "Invalid CPD in moduledata\n"); + return -EINVAL; + } + + return 0; +} + +static int ipu6_cpd_validate_metadata(struct ipu6_device *isp, + const void *metadata, u32 meta_size) +{ + const struct ipu6_cpd_metadata_extn *extn = metadata; + + /* Sanity check for metadata size */ + if (meta_size < sizeof(*extn) || meta_size > MAX_METADATA_SIZE) { + dev_err(&isp->pdev->dev, "Invalid CPD metadata\n"); + return -EINVAL; + } + + /* Validate extension and image types */ + if (extn->extn_type != IPU6_CPD_METADATA_EXTN_TYPE_IUNIT || + extn->img_type != IPU6_CPD_METADATA_IMAGE_TYPE_MAIN_FIRMWARE) { + dev_err(&isp->pdev->dev, + "Invalid CPD metadata descriptor img_type (%d)\n", + extn->img_type); + return -EINVAL; + } + + /* Validate metadata size multiple of metadata components */ + if ((meta_size - sizeof(*extn)) % isp->cpd_metadata_cmpnt_size) { + dev_err(&isp->pdev->dev, 
"Invalid CPD metadata size\n"); + return -EINVAL; + } + + return 0; +} + +int ipu6_cpd_validate_cpd_file(struct ipu6_device *isp, const void *cpd_file, + unsigned long cpd_file_size) +{ + const struct ipu6_cpd_hdr *hdr = cpd_file; + const struct ipu6_cpd_ent *ent; + int ret; + + ret = ipu6_cpd_validate_cpd(isp, cpd_file, cpd_file_size, + cpd_file_size); + if (ret) { + dev_err(&isp->pdev->dev, "Invalid CPD in file\n"); + return -EINVAL; + } + + /* Check for CPD file marker */ + if (hdr->hdr_mark != CPD_HDR_MARK) { + dev_err(&isp->pdev->dev, "Invalid CPD header\n"); + return -EINVAL; + } + + /* Sanity check for manifest size */ + ent = ipu6_cpd_get_manifest(cpd_file); + if (ent->len > MAX_MANIFEST_SIZE) { + dev_err(&isp->pdev->dev, "Invalid CPD manifest size\n"); + return -EINVAL; + } + + /* Validate metadata */ + ent = ipu6_cpd_get_metadata(cpd_file); + ret = ipu6_cpd_validate_metadata(isp, cpd_file + ent->offset, ent->len); + if (ret) { + dev_err(&isp->pdev->dev, "Invalid CPD metadata\n"); + return ret; + } + + /* Validate moduledata */ + ent = ipu6_cpd_get_moduledata(cpd_file); + ret = ipu6_cpd_validate_moduledata(isp, cpd_file + ent->offset, + ent->len); + if (ret) { + dev_err(&isp->pdev->dev, "Invalid CPD moduledata\n"); + return ret; + } + + return 0; +} diff --git a/drivers/media/pci/intel/ipu6/ipu6-cpd.h b/drivers/media/pci/intel/ipu6/ipu6-cpd.h new file mode 100644 index 000000000000..baa15faff9d6 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-cpd.h @@ -0,0 +1,107 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2015 - 2023 Intel Corporation */ + +#ifndef IPU6_CPD_H +#define IPU6_CPD_H + +#define IPU6_CPD_SIZE_OF_FW_ARCH_VERSION 7 +#define IPU6_CPD_SIZE_OF_SYSTEM_VERSION 11 +#define IPU6_CPD_SIZE_OF_COMPONENT_NAME 12 + +#define IPU6_CPD_METADATA_EXTN_TYPE_IUNIT 0x10 + +#define IPU6_CPD_METADATA_IMAGE_TYPE_RESERVED 0 +#define IPU6_CPD_METADATA_IMAGE_TYPE_BOOTLOADER 1 +#define IPU6_CPD_METADATA_IMAGE_TYPE_MAIN_FIRMWARE 2 + +#define IPU6_CPD_PKG_DIR_PSYS_SERVER_IDX 0 +#define IPU6_CPD_PKG_DIR_ISYS_SERVER_IDX 1 + +#define IPU6_CPD_PKG_DIR_CLIENT_PG_TYPE 3 + +#define IPU6_CPD_METADATA_HASH_KEY_SIZE 48 +#define IPU6SE_CPD_METADATA_HASH_KEY_SIZE 32 + +struct ipu6_cpd_module_data_hdr { + u32 hdr_len; + u32 endian; + u32 fw_pkg_date; + u32 hive_sdk_date; + u32 compiler_date; + u32 target_platform_type; + u8 sys_ver[IPU6_CPD_SIZE_OF_SYSTEM_VERSION]; + u8 fw_arch_ver[IPU6_CPD_SIZE_OF_FW_ARCH_VERSION]; + u8 rsvd[2]; +} __packed; + +/* + * ipu6_cpd_hdr structure updated as the chksum and + * sub_partition_name is unused on host side + * CSE layout version 1.6 for IPU6SE (hdr_len = 0x10) + * CSE layout version 1.7 for IPU6 (hdr_len = 0x14) + */ +struct ipu6_cpd_hdr { + u32 hdr_mark; + u32 ent_cnt; + u8 hdr_ver; + u8 ent_ver; + u8 hdr_len; +} __packed; + +struct ipu6_cpd_ent { + u8 name[IPU6_CPD_SIZE_OF_COMPONENT_NAME]; + u32 offset; + u32 len; + u8 rsvd[4]; +} __packed; + +struct ipu6_cpd_metadata_cmpnt_hdr { + u32 id; + u32 size; + u32 ver; +} __packed; + +struct ipu6_cpd_metadata_cmpnt { + struct ipu6_cpd_metadata_cmpnt_hdr hdr; + u8 sha2_hash[IPU6_CPD_METADATA_HASH_KEY_SIZE]; + u32 entry_point; + u32 icache_base_offs; + u8 attrs[16]; +} __packed; + +struct ipu6se_cpd_metadata_cmpnt { + struct ipu6_cpd_metadata_cmpnt_hdr hdr; + u8 sha2_hash[IPU6SE_CPD_METADATA_HASH_KEY_SIZE]; + u32 entry_point; + u32 icache_base_offs; + u8 attrs[16]; +} __packed; + +struct ipu6_cpd_metadata_extn { + u32 extn_type; + u32 len; + u32 img_type; + u8 rsvd[16]; +} __packed; + +struct 
ipu6_cpd_client_pkg_hdr { + u32 prog_list_offs; + u32 prog_list_size; + u32 prog_desc_offs; + u32 prog_desc_size; + u32 pg_manifest_offs; + u32 pg_manifest_size; + u32 prog_bin_offs; + u32 prog_bin_size; +} __packed; + +int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src); +void ipu6_cpd_free_pkg_dir(struct ipu6_bus_device *adev); +int ipu6_cpd_validate_cpd_file(struct ipu6_device *isp, const void *cpd_file, + unsigned long cpd_file_size); +unsigned int ipu6_cpd_pkg_dir_get_address(const u64 *pkg_dir, int pkg_dir_idx); +unsigned int ipu6_cpd_pkg_dir_get_num_entries(const u64 *pkg_dir); +unsigned int ipu6_cpd_pkg_dir_get_size(const u64 *pkg_dir, int pkg_dir_idx); +unsigned int ipu6_cpd_pkg_dir_get_type(const u64 *pkg_dir, int pkg_dir_idx); + +#endif /* IPU6_CPD_H */ From patchwork Thu Apr 13 10:04:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bingbu Cao X-Patchwork-Id: 674372 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1A2CBC77B78 for ; Thu, 13 Apr 2023 09:55:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229637AbjDMJzK (ORCPT ); Thu, 13 Apr 2023 05:55:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44962 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229992AbjDMJzI (ORCPT ); Thu, 13 Apr 2023 05:55:08 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 87D3C9EC6 for ; Thu, 13 Apr 2023 02:54:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1681379688; x=1712915688; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ldz8uUB9tFcaImr4Pd4AYsqtSgVolYdONo0/auhfmeM=; b=Pz96qzx0Vh2ILnCTAkN/FzKWRoWCSr3pM3Byc19K1DnFXvi/8ta3PYBu mBr6vz65t6aXms50INals25PDllwhPx5X2slOjVl5Oo0f5vsCj1T+XwOk /0HE8dsZ98NnLt3muT6uQ/pCOtgqWoEwxndxcLtCnVWK7c6mcFzljwrh+ PtqbldxgfKNc61zTxe6LK5qfNWdup1dJAoV4krBbTrI0r8Wa/dH5WXszb LwOfBAYY1UIJ9WaGvDH7WM5zr6N2/bDorwvRL53FRnzFIzVitxmEN+pqL z8yPnLNE6GRjJRwJTxhvG+Q3H5VyyFDrwEAGC/69tFNbQQ4h1aW2GDcZZ Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="371992965" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="371992965" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2023 02:54:47 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="639600036" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="639600036" Received: from icg-kernel3.bj.intel.com ([172.16.126.100]) by orsmga003.jf.intel.com with ESMTP; 13 Apr 2023 02:54:43 -0700 From: bingbu.cao@intel.com To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com Subject: [RFC PATCH 05/14] media: intel/ipu6: add IPU6 DMA mapping API and MMU table Date: Thu, 13 Apr 2023 18:04:20 +0800 Message-Id: <20230413100429.919622-6-bingbu.cao@intel.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: 
<20230413100429.919622-1-bingbu.cao@intel.com> References: <20230413100429.919622-1-bingbu.cao@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Bingbu Cao IPU has its own scalar processor which the firmware run at, it has internal 32-bits virtual address mapping, it allows that scalar process can access the IPU internal memory directly and also access the external system memory by IPU virtual address. So IPU DMA and MMU driver define its DMA mapping ops expose by IPU virtual bus. MMU driver do MMU hardware configuration and setup IPU MMU lookup table. IPU MMU and DMA mapping works behind the IOMMU hardware, it may do nested mapping - PCI(IOMMU) mapping and IPU-MMU mapping. Signed-off-by: Bingbu Cao --- drivers/media/pci/intel/ipu6/ipu6-dma.c | 497 ++++++++++++++ drivers/media/pci/intel/ipu6/ipu6-dma.h | 19 + drivers/media/pci/intel/ipu6/ipu6-mmu.c | 833 ++++++++++++++++++++++++ drivers/media/pci/intel/ipu6/ipu6-mmu.h | 65 ++ 4 files changed, 1414 insertions(+) create mode 100644 drivers/media/pci/intel/ipu6/ipu6-dma.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-dma.h create mode 100644 drivers/media/pci/intel/ipu6/ipu6-mmu.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-mmu.h diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.c b/drivers/media/pci/intel/ipu6/ipu6-dma.c new file mode 100644 index 000000000000..2ba2deb361e2 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-dma.c @@ -0,0 +1,497 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2013 - 2023 Intel Corporation + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-dma.h" +#include "ipu6-mmu.h" + +struct vm_info { + struct list_head list; + struct page **pages; + dma_addr_t ipu6_iova; + void *vaddr; + unsigned long size; +}; + +static struct vm_info *get_vm_info(struct ipu6_mmu *mmu, dma_addr_t iova) +{ + struct vm_info *info, *save; + + list_for_each_entry_safe(info, save, &mmu->vma_list, list) { + if (iova >= info->ipu6_iova && + iova < (info->ipu6_iova + info->size)) + return info; + } + + return NULL; +} + +static void __dma_clear_buffer(struct page *page, size_t size, + unsigned long attrs) +{ + void *ptr; + + if (!page) + return; + /* + * Ensure that the allocated pages are zeroed, and that any data + * lurking in the kernel direct-mapped region is invalidated. 
+ */ + ptr = page_address(page); + memset(ptr, 0, size); + if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0) + clflush_cache_range(ptr, size); +} + +static struct page **__dma_alloc_buffer(struct device *dev, size_t size, + gfp_t gfp, + unsigned long attrs) +{ + struct page **pages; + int count = size >> PAGE_SHIFT; + int array_size = count * sizeof(struct page *); + int i = 0; + + pages = kvzalloc(array_size, GFP_KERNEL); + if (!pages) + return NULL; + + gfp |= __GFP_NOWARN; + + while (count) { + int j, order = __fls(count); + + pages[i] = alloc_pages(gfp, order); + while (!pages[i] && order) + pages[i] = alloc_pages(gfp, --order); + if (!pages[i]) + goto error; + + if (order) { + split_page(pages[i], order); + j = 1 << order; + while (j--) + pages[i + j] = pages[i] + j; + } + + __dma_clear_buffer(pages[i], PAGE_SIZE << order, attrs); + i += 1 << order; + count -= 1 << order; + } + + return pages; +error: + while (i--) + if (pages[i]) + __free_pages(pages[i], 0); + kvfree(pages); + return NULL; +} + +static int __dma_free_buffer(struct device *dev, struct page **pages, + size_t size, + unsigned long attrs) +{ + int count = PHYS_PFN(size); + unsigned int i; + + for (i = 0; i < count && pages[i]; i++) { + __dma_clear_buffer(pages[i], PAGE_SIZE, attrs); + __free_pages(pages[i], 0); + } + + kvfree(pages); + return 0; +} + +static void ipu6_dma_sync_single_for_cpu(struct device *dev, + dma_addr_t dma_handle, + size_t size, + enum dma_data_direction dir) +{ + void *vaddr; + u32 offset; + struct vm_info *info; + struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; + + info = get_vm_info(mmu, dma_handle); + if (WARN_ON(!info)) + return; + + offset = dma_handle - info->ipu6_iova; + if (WARN_ON(size > (info->size - offset))) + return; + + vaddr = info->vaddr + offset; + clflush_cache_range(vaddr, size); +} + +static void ipu6_dma_sync_sg_for_cpu(struct device *dev, + struct scatterlist *sglist, + int nents, enum dma_data_direction dir) +{ + struct scatterlist *sg; + int i; + + for_each_sg(sglist, sg, nents, i) + clflush_cache_range(page_to_virt(sg_page(sg)), sg->length); +} + +static void *ipu6_dma_alloc(struct device *dev, size_t size, + dma_addr_t *dma_handle, gfp_t gfp, + unsigned long attrs) +{ + struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; + struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev; + dma_addr_t pci_dma_addr, ipu6_iova; + struct vm_info *info; + unsigned long count; + struct page **pages; + struct iova *iova; + unsigned int i; + int ret; + + info = kzalloc(sizeof(*info), GFP_KERNEL); + if (!info) + return NULL; + + size = PAGE_ALIGN(size); + count = size >> PAGE_SHIFT; + + iova = alloc_iova(&mmu->dmap->iovad, count, + dma_get_mask(dev) >> PAGE_SHIFT, 0); + if (!iova) + goto out_kfree; + + pages = __dma_alloc_buffer(dev, size, gfp, attrs); + if (!pages) + goto out_free_iova; + + dev_dbg(dev, "dma_alloc: iova low pfn %lu, high pfn %lu\n", + iova->pfn_lo, iova->pfn_hi); + for (i = 0; iova->pfn_lo + i <= iova->pfn_hi; i++) { + pci_dma_addr = dma_map_page_attrs(&pdev->dev, pages[i], 0, + PAGE_SIZE, DMA_BIDIRECTIONAL, + attrs); + dev_dbg(dev, "dma_alloc: mapped pci_dma_addr %pad\n", + &pci_dma_addr); + if (dma_mapping_error(&pdev->dev, pci_dma_addr)) { + dev_err(dev, "pci_dma_mapping for page[%d] failed", i); + goto out_unmap; + } + + ret = ipu6_mmu_map(mmu->dmap->mmu_info, + (iova->pfn_lo + i) << PAGE_SHIFT, + pci_dma_addr, PAGE_SIZE); + if (ret) { + dev_err(dev, "ipu6_mmu_map for pci_dma[%d] %pad failed", + i, &pci_dma_addr); + dma_unmap_page_attrs(&pdev->dev, pci_dma_addr, + 
PAGE_SIZE, DMA_BIDIRECTIONAL, + attrs); + goto out_unmap; + } + } + + info->vaddr = vmap(pages, count, VM_USERMAP, PAGE_KERNEL); + if (!info->vaddr) + goto out_unmap; + + *dma_handle = iova->pfn_lo << PAGE_SHIFT; + + info->pages = pages; + info->ipu6_iova = *dma_handle; + info->size = size; + list_add(&info->list, &mmu->vma_list); + + return info->vaddr; + +out_unmap: + for (i--; i >= 0; i--) { + ipu6_iova = (iova->pfn_lo + i) << PAGE_SHIFT; + pci_dma_addr = ipu6_mmu_iova_to_phys(mmu->dmap->mmu_info, + ipu6_iova); + dma_unmap_page_attrs(&pdev->dev, pci_dma_addr, PAGE_SIZE, + DMA_BIDIRECTIONAL, attrs); + + ipu6_mmu_unmap(mmu->dmap->mmu_info, ipu6_iova, PAGE_SIZE); + } + + __dma_free_buffer(dev, pages, size, attrs); + +out_free_iova: + __free_iova(&mmu->dmap->iovad, iova); +out_kfree: + kfree(info); + + return NULL; +} + +static void ipu6_dma_free(struct device *dev, size_t size, void *vaddr, + dma_addr_t dma_handle, + unsigned long attrs) +{ + struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; + struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev; + struct iova *iova = find_iova(&mmu->dmap->iovad, + dma_handle >> PAGE_SHIFT); + dma_addr_t pci_dma_addr, ipu6_iova; + struct vm_info *info; + struct page **pages; + unsigned int i; + + if (WARN_ON(!iova)) + return; + + info = get_vm_info(mmu, dma_handle); + if (WARN_ON(!info)) + return; + + if (WARN_ON(!info->vaddr)) + return; + + if (WARN_ON(!info->pages)) + return; + + list_del(&info->list); + + size = PAGE_ALIGN(size); + + pages = info->pages; + + vunmap(vaddr); + + for (i = 0; i < size >> PAGE_SHIFT; i++) { + ipu6_iova = (iova->pfn_lo + i) << PAGE_SHIFT; + pci_dma_addr = ipu6_mmu_iova_to_phys(mmu->dmap->mmu_info, + ipu6_iova); + dma_unmap_page_attrs(&pdev->dev, pci_dma_addr, PAGE_SIZE, + DMA_BIDIRECTIONAL, attrs); + } + + ipu6_mmu_unmap(mmu->dmap->mmu_info, iova->pfn_lo << PAGE_SHIFT, + iova_size(iova) << PAGE_SHIFT); + + __dma_free_buffer(dev, pages, size, attrs); + + mmu->tlb_invalidate(mmu); + + __free_iova(&mmu->dmap->iovad, iova); + + kfree(info); +} + +static int ipu6_dma_mmap(struct device *dev, struct vm_area_struct *vma, + void *addr, dma_addr_t iova, size_t size, + unsigned long attrs) +{ + struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; + size_t count = PAGE_ALIGN(size) >> PAGE_SHIFT; + struct vm_info *info; + size_t i; + + info = get_vm_info(mmu, iova); + if (!info) + return -EFAULT; + + if (!info->vaddr) + return -EFAULT; + + if (vma->vm_start & ~PAGE_MASK) + return -EINVAL; + + if (size > info->size) + return -EFAULT; + + for (i = 0; i < count; i++) + vm_insert_page(vma, vma->vm_start + (i << PAGE_SHIFT), + info->pages[i]); + + return 0; +} + +static void ipu6_dma_unmap_sg(struct device *dev, + struct scatterlist *sglist, + int nents, enum dma_data_direction dir, + unsigned long attrs) +{ + struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev; + struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; + struct iova *iova = find_iova(&mmu->dmap->iovad, + sg_dma_address(sglist) >> PAGE_SHIFT); + int i, npages, count; + struct scatterlist *sg; + dma_addr_t pci_dma_addr; + + if (!nents) + return; + + if (WARN_ON(!iova)) + return; + + if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0) + ipu6_dma_sync_sg_for_cpu(dev, sglist, nents, DMA_BIDIRECTIONAL); + + /* get the nents as orig_nents given by caller */ + count = 0; + npages = iova_size(iova); + for_each_sg(sglist, sg, nents, i) { + if (sg_dma_len(sg) == 0 || + sg_dma_address(sg) == DMA_MAPPING_ERROR) + break; + + npages -= PAGE_ALIGN(sg_dma_len(sg)) >> PAGE_SHIFT; + count++; 
+ if (npages <= 0) + break; + } + + /* Before IPU6 mmu unmap, return the pci dma address back to sg + * assume the nents is less than orig_nents as the least granule + * is 1 SZ_4K page + */ + dev_dbg(dev, "trying to unmap concatenated %u ents\n", count); + for_each_sg(sglist, sg, count, i) { + dev_dbg(dev, "ipu6_unmap sg[%d] %pad\n", + i, &sg_dma_address(sg)); + pci_dma_addr = ipu6_mmu_iova_to_phys(mmu->dmap->mmu_info, + sg_dma_address(sg)); + dev_dbg(dev, "return pci_dma_addr %pad back to sg[%d]\n", + &pci_dma_addr, i); + sg_dma_address(sg) = pci_dma_addr; + } + + dev_dbg(dev, "ipu6_mmu_unmap low pfn %lu high pfn %lu\n", + iova->pfn_lo, iova->pfn_hi); + ipu6_mmu_unmap(mmu->dmap->mmu_info, iova->pfn_lo << PAGE_SHIFT, + iova_size(iova) << PAGE_SHIFT); + + mmu->tlb_invalidate(mmu); + + dma_unmap_sg_attrs(&pdev->dev, sglist, nents, dir, attrs); + + __free_iova(&mmu->dmap->iovad, iova); +} + +static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist, + int nents, enum dma_data_direction dir, + unsigned long attrs) +{ + struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; + struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev; + struct scatterlist *sg; + struct iova *iova; + size_t npages = 0; + u32 iova_addr; + int i, count; + + dev_dbg(dev, "pci_dma_map_sg trying to map %d ents\n", nents); + count = dma_map_sg_attrs(&pdev->dev, sglist, nents, dir, attrs); + if (count <= 0) { + dev_err(dev, "pci_dma_map_sg %d ents failed\n", nents); + return 0; + } + + dev_dbg(dev, "pci_dma_map_sg %d ents mapped\n", count); + + for_each_sg(sglist, sg, count, i) + npages += PAGE_ALIGN(sg_dma_len(sg)) >> PAGE_SHIFT; + + iova = alloc_iova(&mmu->dmap->iovad, npages, + dma_get_mask(dev) >> PAGE_SHIFT, 0); + if (!iova) + return 0; + + dev_dbg(dev, "dmamap: iova low pfn %lu, high pfn %lu\n", iova->pfn_lo, + iova->pfn_hi); + + iova_addr = iova->pfn_lo; + for_each_sg(sglist, sg, count, i) { + int ret; + + dev_dbg(dev, "mapping entry %d: iova 0x%lx phy %pad size %d\n", + i, (unsigned long)iova_addr << PAGE_SHIFT, + &sg_dma_address(sg), sg_dma_len(sg)); + + dev_dbg(dev, "mapping entry %d: sg->length = %d\n", i, + sg->length); + + ret = ipu6_mmu_map(mmu->dmap->mmu_info, + iova_addr << PAGE_SHIFT, + sg_dma_address(sg), + PAGE_ALIGN(sg_dma_len(sg))); + if (ret) + goto out_fail; + + sg_dma_address(sg) = iova_addr << PAGE_SHIFT; + + iova_addr += PAGE_ALIGN(sg_dma_len(sg)) >> PAGE_SHIFT; + } + + if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0) + ipu6_dma_sync_sg_for_cpu(dev, sglist, nents, DMA_BIDIRECTIONAL); + + return count; + +out_fail: + ipu6_dma_unmap_sg(dev, sglist, i, dir, attrs); + + return 0; +} + +/* + * Create scatter-list for the already allocated DMA buffer + */ +static int ipu6_dma_get_sgtable(struct device *dev, struct sg_table *sgt, + void *cpu_addr, dma_addr_t handle, size_t size, + unsigned long attrs) +{ + struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu; + struct vm_info *info; + int n_pages; + int ret = 0; + + info = get_vm_info(mmu, handle); + if (!info) + return -EFAULT; + + if (!info->vaddr) + return -EFAULT; + + if (WARN_ON(!info->pages)) + return -ENOMEM; + + n_pages = PAGE_ALIGN(size) >> PAGE_SHIFT; + + ret = sg_alloc_table_from_pages(sgt, info->pages, n_pages, 0, size, + GFP_KERNEL); + if (ret) + dev_warn(dev, "IPU6 get sgt table failed\n"); + + return ret; +} + +const struct dma_map_ops ipu6_dma_ops = { + .alloc = ipu6_dma_alloc, + .free = ipu6_dma_free, + .mmap = ipu6_dma_mmap, + .map_sg = ipu6_dma_map_sg, + .unmap_sg = ipu6_dma_unmap_sg, + .sync_single_for_cpu = 
ipu6_dma_sync_single_for_cpu, + .sync_single_for_device = ipu6_dma_sync_single_for_cpu, + .sync_sg_for_cpu = ipu6_dma_sync_sg_for_cpu, + .sync_sg_for_device = ipu6_dma_sync_sg_for_cpu, + .get_sgtable = ipu6_dma_get_sgtable, +}; diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.h b/drivers/media/pci/intel/ipu6/ipu6-dma.h new file mode 100644 index 000000000000..934deddab9ba --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-dma.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2013 - 2023 Intel Corporation */ + +#ifndef IPU6_DMA_H +#define IPU6_DMA_H + +#include + +struct ipu6_mmu_info; + +struct ipu6_dma_mapping { + struct ipu6_mmu_info *mmu_info; + struct iova_domain iovad; + struct kref ref; +}; + +extern const struct dma_map_ops ipu6_dma_ops; + +#endif /* IPU6_DMA_H */ diff --git a/drivers/media/pci/intel/ipu6/ipu6-mmu.c b/drivers/media/pci/intel/ipu6/ipu6-mmu.c new file mode 100644 index 000000000000..dec16018458f --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-mmu.c @@ -0,0 +1,833 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2013 - 2023 Intel Corporation + +#include +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-buttress.h" +#include "ipu6-dma.h" +#include "ipu6-mmu.h" +#include "ipu6-platform.h" +#include "ipu6-platform-regs.h" + +#define ISP_PAGE_SHIFT 12 +#define ISP_PAGE_SIZE BIT(ISP_PAGE_SHIFT) +#define ISP_PAGE_MASK (~(ISP_PAGE_SIZE - 1)) + +#define ISP_L1PT_SHIFT 22 +#define ISP_L1PT_MASK (~((1U << ISP_L1PT_SHIFT) - 1)) + +#define ISP_L2PT_SHIFT 12 +#define ISP_L2PT_MASK (~(ISP_L1PT_MASK | (~(ISP_PAGE_MASK)))) + +#define ISP_L1PT_PTES 1024 +#define ISP_L2PT_PTES 1024 + +#define ISP_PADDR_SHIFT 12 + +#define REG_TLB_INVALIDATE 0x0000 + +#define REG_L1_PHYS 0x0004 /* 27-bit pfn */ +#define REG_INFO 0x0008 + +#define TBL_PHYS_ADDR(a) ((phys_addr_t)(a) << ISP_PADDR_SHIFT) + +static void tlb_invalidate(struct ipu6_mmu *mmu) +{ + unsigned long flags; + unsigned int i; + + spin_lock_irqsave(&mmu->ready_lock, flags); + if (!mmu->ready) { + spin_unlock_irqrestore(&mmu->ready_lock, flags); + return; + } + + for (i = 0; i < mmu->nr_mmus; i++) { + /* + * To avoid the HW bug induced dead lock in some of the IPU6 + * MMUs on successive invalidate calls, we need to first do a + * read to the page table base before writing the invalidate + * register. MMUs which need to implement this WA, will have + * the insert_read_before_invalidate flags set as true. + * Disregard the return value of the read. + */ + if (mmu->mmu_hw[i].insert_read_before_invalidate) + readl(mmu->mmu_hw[i].base + REG_L1_PHYS); + + writel(0xffffffff, mmu->mmu_hw[i].base + + REG_TLB_INVALIDATE); + /* + * The TLB invalidation is a "single cycle" (IOMMU clock cycles) + * When the actual MMIO write reaches the IPU6 TLB Invalidate + * register, wmb() will force the TLB invalidate out if the CPU + * attempts to update the IOMMU page table (or sooner). 
+ */ + wmb(); + } + spin_unlock_irqrestore(&mmu->ready_lock, flags); +} + +#ifdef DEBUG +static void page_table_dump(struct ipu6_mmu_info *mmu_info) +{ + u32 l1_idx; + + dev_dbg(mmu_info->dev, "begin IOMMU page table dump\n"); + + for (l1_idx = 0; l1_idx < ISP_L1PT_PTES; l1_idx++) { + u32 l2_idx; + u32 iova = (phys_addr_t)l1_idx << ISP_L1PT_SHIFT; + + if (mmu_info->l1_pt[l1_idx] == mmu_info->dummy_l2_pteval) + continue; + dev_dbg(mmu_info->dev, + "l1 entry %u; iovas 0x%8.8x-0x%8.8x, at %p\n", + l1_idx, iova, iova + ISP_PAGE_SIZE, + (void *)TBL_PHYS_ADDR(mmu_info->l1_pt[l1_idx])); + + for (l2_idx = 0; l2_idx < ISP_L2PT_PTES; l2_idx++) { + u32 *l2_pt = mmu_info->l2_pts[l1_idx]; + u32 iova2 = iova + (l2_idx << ISP_L2PT_SHIFT); + + if (l2_pt[l2_idx] == mmu_info->dummy_page_pteval) + continue; + + dev_dbg(mmu_info->dev, + "\tl2 entry %u; iova 0x%8.8x, phys %p\n", + l2_idx, iova2, + (void *)TBL_PHYS_ADDR(l2_pt[l2_idx])); + } + } + + dev_dbg(mmu_info->dev, "end IOMMU page table dump\n"); +} +#endif /* DEBUG */ + +static dma_addr_t map_single(struct ipu6_mmu_info *mmu_info, void *ptr) +{ + dma_addr_t dma; + + dma = dma_map_single(mmu_info->dev, ptr, PAGE_SIZE, DMA_BIDIRECTIONAL); + if (dma_mapping_error(mmu_info->dev, dma)) + return 0; + + return dma; +} + +static int get_dummy_page(struct ipu6_mmu_info *mmu_info) +{ + void *pt = (void *)get_zeroed_page(GFP_ATOMIC | GFP_DMA32); + dma_addr_t dma; + + if (!pt) + return -ENOMEM; + + dev_dbg(mmu_info->dev, "dummy_page: get_zeroed_page() == %p\n", pt); + + dma = map_single(mmu_info, pt); + if (!dma) { + dev_err(mmu_info->dev, "Failed to map dummy page\n"); + goto err_free_page; + } + + mmu_info->dummy_page = pt; + mmu_info->dummy_page_pteval = dma >> ISP_PAGE_SHIFT; + + return 0; + +err_free_page: + free_page((unsigned long)pt); + return -ENOMEM; +} + +static void free_dummy_page(struct ipu6_mmu_info *mmu_info) +{ + dma_unmap_single(mmu_info->dev, + TBL_PHYS_ADDR(mmu_info->dummy_page_pteval), + PAGE_SIZE, DMA_BIDIRECTIONAL); + free_page((unsigned long)mmu_info->dummy_page); +} + +static int alloc_dummy_l2_pt(struct ipu6_mmu_info *mmu_info) +{ + u32 *pt = (u32 *)get_zeroed_page(GFP_ATOMIC | GFP_DMA32); + dma_addr_t dma; + unsigned int i; + + if (!pt) + return -ENOMEM; + + dev_dbg(mmu_info->dev, "dummy_l2: get_zeroed_page() = %p\n", pt); + + dma = map_single(mmu_info, pt); + if (!dma) { + dev_err(mmu_info->dev, "Failed to map l2pt page\n"); + goto err_free_page; + } + + for (i = 0; i < ISP_L2PT_PTES; i++) + pt[i] = mmu_info->dummy_page_pteval; + + mmu_info->dummy_l2_pt = pt; + mmu_info->dummy_l2_pteval = dma >> ISP_PAGE_SHIFT; + + return 0; + +err_free_page: + free_page((unsigned long)pt); + return -ENOMEM; +} + +static void free_dummy_l2_pt(struct ipu6_mmu_info *mmu_info) +{ + dma_unmap_single(mmu_info->dev, + TBL_PHYS_ADDR(mmu_info->dummy_l2_pteval), + PAGE_SIZE, DMA_BIDIRECTIONAL); + free_page((unsigned long)mmu_info->dummy_l2_pt); +} + +static u32 *alloc_l1_pt(struct ipu6_mmu_info *mmu_info) +{ + u32 *pt = (u32 *)get_zeroed_page(GFP_ATOMIC | GFP_DMA32); + dma_addr_t dma; + unsigned int i; + + if (!pt) + return NULL; + + dev_dbg(mmu_info->dev, "alloc_l1: get_zeroed_page() = %p\n", pt); + + for (i = 0; i < ISP_L1PT_PTES; i++) + pt[i] = mmu_info->dummy_l2_pteval; + + dma = map_single(mmu_info, pt); + if (!dma) { + dev_err(mmu_info->dev, "Failed to map l1pt page\n"); + goto err_free_page; + } + + mmu_info->l1_pt_dma = dma >> ISP_PADDR_SHIFT; + dev_dbg(mmu_info->dev, "l1 pt %p mapped at %llx\n", pt, dma); + + return pt; + +err_free_page: + 
free_page((unsigned long)pt); + return NULL; +} + +static u32 *alloc_l2_pt(struct ipu6_mmu_info *mmu_info) +{ + u32 *pt = (u32 *)get_zeroed_page(GFP_ATOMIC | GFP_DMA32); + unsigned int i; + + if (!pt) + return NULL; + + dev_dbg(mmu_info->dev, "alloc_l2: get_zeroed_page() = %p\n", pt); + + for (i = 0; i < ISP_L1PT_PTES; i++) + pt[i] = mmu_info->dummy_page_pteval; + + return pt; +} + +static int l2_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, + phys_addr_t paddr, size_t size) +{ + u32 l1_idx = iova >> ISP_L1PT_SHIFT; + u32 iova_start = iova; + u32 *l2_pt, *l2_virt; + unsigned int l2_idx; + unsigned long flags; + dma_addr_t dma; + u32 l1_entry; + + dev_dbg(mmu_info->dev, + "mapping l2 page table for l1 index %u (iova %8.8x)\n", + l1_idx, (u32)iova); + + spin_lock_irqsave(&mmu_info->lock, flags); + l1_entry = mmu_info->l1_pt[l1_idx]; + if (l1_entry == mmu_info->dummy_l2_pteval) { + l2_virt = mmu_info->l2_pts[l1_idx]; + if (likely(!l2_virt)) { + l2_virt = alloc_l2_pt(mmu_info); + if (!l2_virt) { + spin_unlock_irqrestore(&mmu_info->lock, flags); + return -ENOMEM; + } + } + + dma = map_single(mmu_info, l2_virt); + if (!dma) { + dev_err(mmu_info->dev, "Failed to map l2pt page\n"); + free_page((unsigned long)l2_virt); + spin_unlock_irqrestore(&mmu_info->lock, flags); + return -EINVAL; + } + + l1_entry = dma >> ISP_PADDR_SHIFT; + + dev_dbg(mmu_info->dev, "page for l1_idx %u %p allocated\n", + l1_idx, l2_virt); + mmu_info->l1_pt[l1_idx] = l1_entry; + mmu_info->l2_pts[l1_idx] = l2_virt; + clflush_cache_range(&mmu_info->l1_pt[l1_idx], + sizeof(mmu_info->l1_pt[l1_idx])); + } + + l2_pt = mmu_info->l2_pts[l1_idx]; + + dev_dbg(mmu_info->dev, "l2_pt at %p with dma 0x%x\n", l2_pt, l1_entry); + + paddr = ALIGN(paddr, ISP_PAGE_SIZE); + + l2_idx = (iova_start & ISP_L2PT_MASK) >> ISP_L2PT_SHIFT; + + dev_dbg(mmu_info->dev, "l2_idx %u, phys 0x%8.8x\n", l2_idx, + l2_pt[l2_idx]); + if (l2_pt[l2_idx] != mmu_info->dummy_page_pteval) { + spin_unlock_irqrestore(&mmu_info->lock, flags); + return -EINVAL; + } + + l2_pt[l2_idx] = paddr >> ISP_PADDR_SHIFT; + + clflush_cache_range(&l2_pt[l2_idx], sizeof(l2_pt[l2_idx])); + spin_unlock_irqrestore(&mmu_info->lock, flags); + + dev_dbg(mmu_info->dev, "l2 index %u mapped as 0x%8.8x\n", l2_idx, + l2_pt[l2_idx]); + + return 0; +} + +static int __ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, + phys_addr_t paddr, size_t size) +{ + u32 iova_start = round_down(iova, ISP_PAGE_SIZE); + u32 iova_end = ALIGN(iova + size, ISP_PAGE_SIZE); + + dev_dbg(mmu_info->dev, + "mapping iova 0x%8.8x--0x%8.8x, size %zu at paddr 0x%10.10llx\n", + iova_start, iova_end, size, paddr); + + return l2_map(mmu_info, iova_start, paddr, size); +} + +static size_t l2_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, + phys_addr_t dummy, size_t size) +{ + u32 l1_idx = iova >> ISP_L1PT_SHIFT; + u32 iova_start = iova; + unsigned int l2_idx; + size_t unmapped = 0; + unsigned long flags; + u32 *l2_pt; + + dev_dbg(mmu_info->dev, "unmapping l2 page table for l1 index %u (iova 0x%8.8lx)\n", + l1_idx, iova); + + spin_lock_irqsave(&mmu_info->lock, flags); + if (mmu_info->l1_pt[l1_idx] == mmu_info->dummy_l2_pteval) { + spin_unlock_irqrestore(&mmu_info->lock, flags); + dev_err(mmu_info->dev, + "unmap iova 0x%8.8lx l1 idx %u which was not mapped\n", + iova, l1_idx); + return 0; + } + + for (l2_idx = (iova_start & ISP_L2PT_MASK) >> ISP_L2PT_SHIFT; + (iova_start & ISP_L1PT_MASK) + (l2_idx << ISP_PAGE_SHIFT) + < iova_start + size && l2_idx < ISP_L2PT_PTES; l2_idx++) { + l2_pt = 
mmu_info->l2_pts[l1_idx]; + dev_dbg(mmu_info->dev, + "unmap l2 index %u with pteval 0x%10.10llx\n", + l2_idx, TBL_PHYS_ADDR(l2_pt[l2_idx])); + l2_pt[l2_idx] = mmu_info->dummy_page_pteval; + + clflush_cache_range(&l2_pt[l2_idx], sizeof(l2_pt[l2_idx])); + unmapped++; + } + spin_unlock_irqrestore(&mmu_info->lock, flags); + + return unmapped << ISP_PAGE_SHIFT; +} + +static size_t __ipu6_mmu_unmap(struct ipu6_mmu_info *mmu_info, + unsigned long iova, size_t size) +{ + return l2_unmap(mmu_info, iova, 0, size); +} + +static int allocate_trash_buffer(struct ipu6_mmu *mmu) +{ + unsigned int n_pages = PAGE_ALIGN(IPU6_MMUV2_TRASH_RANGE) >> PAGE_SHIFT; + struct iova *iova; + unsigned int i; + dma_addr_t dma; + u32 iova_addr; + int ret; + + /* Allocate 8MB in iova range */ + iova = alloc_iova(&mmu->dmap->iovad, n_pages, + mmu->dmap->mmu_info->aperture_end >> PAGE_SHIFT, 0); + if (!iova) { + dev_err(mmu->dev, "cannot allocate iova range for trash\n"); + return -ENOMEM; + } + + dma = dma_map_page(mmu->dmap->mmu_info->dev, mmu->trash_page, 0, + PAGE_SIZE, DMA_BIDIRECTIONAL); + if (dma_mapping_error(mmu->dmap->mmu_info->dev, dma)) { + dev_err(mmu->dmap->mmu_info->dev, "Failed to map trash page\n"); + ret = -ENOMEM; + goto out_free_iova; + } + + mmu->pci_trash_page = dma; + + /* + * Map the 8MB iova address range to the same physical trash page + * mmu->trash_page which is already reserved at the probe + */ + iova_addr = iova->pfn_lo; + for (i = 0; i < n_pages; i++) { + ret = ipu6_mmu_map(mmu->dmap->mmu_info, iova_addr << PAGE_SHIFT, + mmu->pci_trash_page, PAGE_SIZE); + if (ret) { + dev_err(mmu->dev, + "mapping trash buffer range failed\n"); + goto out_unmap; + } + + iova_addr++; + } + + mmu->iova_trash_page = iova->pfn_lo << PAGE_SHIFT; + dev_dbg(mmu->dev, "iova trash buffer for MMUID: %d is %u\n", + mmu->mmid, (unsigned int)mmu->iova_trash_page); + return 0; + +out_unmap: + ipu6_mmu_unmap(mmu->dmap->mmu_info, iova->pfn_lo << PAGE_SHIFT, + (iova->pfn_hi - iova->pfn_lo + 1) << PAGE_SHIFT); + dma_unmap_page(mmu->dmap->mmu_info->dev, mmu->pci_trash_page, + PAGE_SIZE, DMA_BIDIRECTIONAL); +out_free_iova: + __free_iova(&mmu->dmap->iovad, iova); + return ret; +} + +int ipu6_mmu_hw_init(struct ipu6_mmu *mmu) +{ + struct ipu6_mmu_info *mmu_info; + unsigned long flags; + unsigned int i; + + mmu_info = mmu->dmap->mmu_info; + + /* Initialise the each MMU HW block */ + for (i = 0; i < mmu->nr_mmus; i++) { + struct ipu6_mmu_hw *mmu_hw = &mmu->mmu_hw[i]; + unsigned int j; + u16 block_addr; + + /* Write page table address per MMU */ + writel((phys_addr_t)mmu_info->l1_pt_dma, + mmu->mmu_hw[i].base + REG_L1_PHYS); + + /* Set info bits per MMU */ + writel(mmu->mmu_hw[i].info_bits, + mmu->mmu_hw[i].base + REG_INFO); + + /* Configure MMU TLB stream configuration for L1 */ + for (j = 0, block_addr = 0; j < mmu_hw->nr_l1streams; + block_addr += mmu->mmu_hw[i].l1_block_sz[j], j++) { + if (block_addr > IPU6_MAX_LI_BLOCK_ADDR) { + dev_err(mmu->dev, "invalid L1 configuration\n"); + return -EINVAL; + } + + /* Write block start address for each streams */ + writel(block_addr, mmu_hw->base + + mmu_hw->l1_stream_id_reg_offset + 4 * j); + } + + /* Configure MMU TLB stream configuration for L2 */ + for (j = 0, block_addr = 0; j < mmu_hw->nr_l2streams; + block_addr += mmu->mmu_hw[i].l2_block_sz[j], j++) { + if (block_addr > IPU6_MAX_L2_BLOCK_ADDR) { + dev_err(mmu->dev, "invalid L2 configuration\n"); + return -EINVAL; + } + + writel(block_addr, mmu_hw->base + + mmu_hw->l2_stream_id_reg_offset + 4 * j); + } + } + + if (!mmu->trash_page) { + 
int ret; + + mmu->trash_page = alloc_page(GFP_KERNEL); + if (!mmu->trash_page) { + dev_err(mmu->dev, "insufficient memory for trash buffer\n"); + return -ENOMEM; + } + + ret = allocate_trash_buffer(mmu); + if (ret) { + __free_page(mmu->trash_page); + mmu->trash_page = NULL; + dev_err(mmu->dev, "trash buffer allocation failed\n"); + return ret; + } + } + + spin_lock_irqsave(&mmu->ready_lock, flags); + mmu->ready = true; + spin_unlock_irqrestore(&mmu->ready_lock, flags); + + return 0; +} +EXPORT_SYMBOL_NS_GPL(ipu6_mmu_hw_init, INTEL_IPU6); + +static struct ipu6_mmu_info *ipu6_mmu_alloc(struct ipu6_device *isp) +{ + struct ipu6_mmu_info *mmu_info; + int ret; + + mmu_info = kzalloc(sizeof(*mmu_info), GFP_KERNEL); + if (!mmu_info) + return NULL; + + mmu_info->aperture_start = 0; + mmu_info->aperture_end = DMA_BIT_MASK(isp->secure_mode ? + IPU6_MMU_ADDR_BITS : + IPU6_MMU_ADDR_BITS_NON_SECURE); + mmu_info->pgsize_bitmap = SZ_4K; + mmu_info->dev = &isp->pdev->dev; + + ret = get_dummy_page(mmu_info); + if (ret) + goto err_free_info; + + ret = alloc_dummy_l2_pt(mmu_info); + if (ret) + goto err_free_dummy_page; + + mmu_info->l2_pts = vzalloc(ISP_L2PT_PTES * sizeof(*mmu_info->l2_pts)); + if (!mmu_info->l2_pts) + goto err_free_dummy_l2_pt; + + /* + * We always map the L1 page table (a single page as well as + * the L2 page tables). + */ + mmu_info->l1_pt = alloc_l1_pt(mmu_info); + if (!mmu_info->l1_pt) + goto err_free_l2_pts; + + spin_lock_init(&mmu_info->lock); + + dev_dbg(mmu_info->dev, "domain initialised\n"); + + return mmu_info; + +err_free_l2_pts: + vfree(mmu_info->l2_pts); +err_free_dummy_l2_pt: + free_dummy_l2_pt(mmu_info); +err_free_dummy_page: + free_dummy_page(mmu_info); +err_free_info: + kfree(mmu_info); + + return NULL; +} + +int ipu6_mmu_hw_cleanup(struct ipu6_mmu *mmu) +{ + unsigned long flags; + + spin_lock_irqsave(&mmu->ready_lock, flags); + mmu->ready = false; + spin_unlock_irqrestore(&mmu->ready_lock, flags); + + return 0; +} +EXPORT_SYMBOL_NS_GPL(ipu6_mmu_hw_cleanup, INTEL_IPU6); + +static struct ipu6_dma_mapping *alloc_dma_mapping(struct ipu6_device *isp) +{ + struct ipu6_dma_mapping *dmap; + + dmap = kzalloc(sizeof(*dmap), GFP_KERNEL); + if (!dmap) + return NULL; + + dmap->mmu_info = ipu6_mmu_alloc(isp); + if (!dmap->mmu_info) { + kfree(dmap); + return NULL; + } + init_iova_domain(&dmap->iovad, SZ_4K, 1); + dmap->mmu_info->dmap = dmap; + + kref_init(&dmap->ref); + + dev_dbg(&isp->pdev->dev, "alloc mapping\n"); + + iova_cache_get(); + + return dmap; +} + +phys_addr_t ipu6_mmu_iova_to_phys(struct ipu6_mmu_info *mmu_info, + dma_addr_t iova) +{ + phys_addr_t phy_addr; + unsigned long flags; + u32 *l2_pt; + + spin_lock_irqsave(&mmu_info->lock, flags); + l2_pt = mmu_info->l2_pts[iova >> ISP_L1PT_SHIFT]; + phy_addr = (phys_addr_t)l2_pt[(iova & ISP_L2PT_MASK) >> ISP_L2PT_SHIFT]; + phy_addr <<= ISP_PAGE_SHIFT; + spin_unlock_irqrestore(&mmu_info->lock, flags); + + return phy_addr; +} + +static size_t ipu6_mmu_pgsize(unsigned long pgsize_bitmap, + unsigned long addr_merge, size_t size) +{ + unsigned int pgsize_idx; + size_t pgsize; + + /* Max page size that still fits into 'size' */ + pgsize_idx = __fls(size); + + if (likely(addr_merge)) { + /* Max page size allowed by address */ + unsigned int align_pgsize_idx = __ffs(addr_merge); + + pgsize_idx = min(pgsize_idx, align_pgsize_idx); + } + + pgsize = (1UL << (pgsize_idx + 1)) - 1; + pgsize &= pgsize_bitmap; + + WARN_ON(!pgsize); + + /* pick the biggest page */ + pgsize_idx = __fls(pgsize); + pgsize = 1UL << pgsize_idx; + + return pgsize; +} + 
+size_t ipu6_mmu_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, + size_t size) +{ + size_t unmapped_page, unmapped = 0; + unsigned int min_pagesz; + + /* find out the minimum page size supported */ + min_pagesz = 1 << __ffs(mmu_info->pgsize_bitmap); + + /* + * The virtual address and the size of the mapping must be + * aligned (at least) to the size of the smallest page supported + * by the hardware + */ + if (!IS_ALIGNED(iova | size, min_pagesz)) { + dev_err(NULL, "unaligned: iova 0x%lx size 0x%zx min_pagesz 0x%x\n", + iova, size, min_pagesz); + return -EINVAL; + } + + /* + * Keep iterating until we either unmap 'size' bytes (or more) + * or we hit an area that isn't mapped. + */ + while (unmapped < size) { + size_t pgsize = ipu6_mmu_pgsize(mmu_info->pgsize_bitmap, + iova, size - unmapped); + + unmapped_page = __ipu6_mmu_unmap(mmu_info, iova, pgsize); + if (!unmapped_page) + break; + + dev_dbg(mmu_info->dev, "unmapped: iova 0x%lx size 0x%zx\n", + iova, unmapped_page); + + iova += unmapped_page; + unmapped += unmapped_page; + } + + return unmapped; +} + +int ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, + phys_addr_t paddr, size_t size) +{ + unsigned long orig_iova = iova; + unsigned int min_pagesz; + size_t orig_size = size; + int ret = 0; + + if (mmu_info->pgsize_bitmap == 0UL) + return -ENODEV; + + /* find out the minimum page size supported */ + min_pagesz = 1 << __ffs(mmu_info->pgsize_bitmap); + + /* + * both the virtual address and the physical one, as well as + * the size of the mapping, must be aligned (at least) to the + * size of the smallest page supported by the hardware + */ + if (!IS_ALIGNED(iova | paddr | size, min_pagesz)) { + dev_err(mmu_info->dev, + "unaligned: iova 0x%lx pa %pa size 0x%zx min_pagesz 0x%x\n", + iova, &paddr, size, min_pagesz); + return -EINVAL; + } + + dev_dbg(mmu_info->dev, "map: iova 0x%lx pa %pa size 0x%zx\n", + iova, &paddr, size); + + while (size) { + size_t pgsize = ipu6_mmu_pgsize(mmu_info->pgsize_bitmap, + iova | paddr, size); + + dev_dbg(mmu_info->dev, + "mapping: iova 0x%lx pa %pa pgsize 0x%zx\n", + iova, &paddr, pgsize); + + ret = __ipu6_mmu_map(mmu_info, iova, paddr, pgsize); + if (ret) + break; + + iova += pgsize; + paddr += pgsize; + size -= pgsize; + } + + /* unroll mapping in case something went wrong */ + if (ret) + ipu6_mmu_unmap(mmu_info, orig_iova, orig_size - size); + + return ret; +} + +static void ipu6_mmu_destroy(struct ipu6_mmu *mmu) +{ + struct ipu6_dma_mapping *dmap = mmu->dmap; + struct ipu6_mmu_info *mmu_info = dmap->mmu_info; + struct iova *iova; + u32 l1_idx; + + if (mmu->iova_trash_page) { + iova = find_iova(&dmap->iovad, + mmu->iova_trash_page >> PAGE_SHIFT); + if (iova) { + /* unmap and free the trash buffer iova */ + ipu6_mmu_unmap(mmu_info, iova->pfn_lo << PAGE_SHIFT, + (iova->pfn_hi - iova->pfn_lo + 1) << + PAGE_SHIFT); + __free_iova(&dmap->iovad, iova); + } else { + dev_err(mmu->dev, "trash buffer iova not found.\n"); + } + + mmu->iova_trash_page = 0; + dma_unmap_page(mmu_info->dev, mmu->pci_trash_page, + PAGE_SIZE, DMA_BIDIRECTIONAL); + mmu->pci_trash_page = 0; + __free_page(mmu->trash_page); + } + + for (l1_idx = 0; l1_idx < ISP_L1PT_PTES; l1_idx++) { + if (mmu_info->l1_pt[l1_idx] != mmu_info->dummy_l2_pteval) { + dma_unmap_single(mmu_info->dev, + TBL_PHYS_ADDR(mmu_info->l1_pt[l1_idx]), + PAGE_SIZE, DMA_BIDIRECTIONAL); + free_page((unsigned long)mmu_info->l2_pts[l1_idx]); + } + } + + free_dummy_page(mmu_info); + dma_unmap_single(mmu_info->dev, mmu_info->l1_pt_dma << ISP_PADDR_SHIFT, + 
PAGE_SIZE, DMA_BIDIRECTIONAL); + free_page((unsigned long)mmu_info->dummy_l2_pt); + free_page((unsigned long)mmu_info->l1_pt); + kfree(mmu_info); +} + +struct ipu6_mmu *ipu6_mmu_init(struct device *dev, + void __iomem *base, int mmid, + const struct ipu6_hw_variants *hw) +{ + struct ipu6_device *isp = pci_get_drvdata(to_pci_dev(dev)); + struct ipu6_mmu_pdata *pdata; + struct ipu6_mmu *mmu; + unsigned int i; + + if (hw->nr_mmus > IPU6_MMU_MAX_DEVICES) + return ERR_PTR(-EINVAL); + + pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL); + if (!pdata) + return ERR_PTR(-ENOMEM); + + for (i = 0; i < hw->nr_mmus; i++) { + struct ipu6_mmu_hw *pdata_mmu = &pdata->mmu_hw[i]; + const struct ipu6_mmu_hw *src_mmu = &hw->mmu_hw[i]; + + if (src_mmu->nr_l1streams > IPU6_MMU_MAX_TLB_L1_STREAMS || + src_mmu->nr_l2streams > IPU6_MMU_MAX_TLB_L2_STREAMS) + return ERR_PTR(-EINVAL); + + *pdata_mmu = *src_mmu; + pdata_mmu->base = base + src_mmu->offset; + } + + mmu = devm_kzalloc(dev, sizeof(*mmu), GFP_KERNEL); + if (!mmu) + return ERR_PTR(-ENOMEM); + + mmu->mmid = mmid; + mmu->mmu_hw = pdata->mmu_hw; + mmu->nr_mmus = hw->nr_mmus; + mmu->tlb_invalidate = tlb_invalidate; + mmu->ready = false; + INIT_LIST_HEAD(&mmu->vma_list); + spin_lock_init(&mmu->ready_lock); + + mmu->dmap = alloc_dma_mapping(isp); + if (!mmu->dmap) { + dev_err(dev, "can't alloc dma mapping\n"); + return ERR_PTR(-ENOMEM); + } + + return mmu; +} + +void ipu6_mmu_cleanup(struct ipu6_mmu *mmu) +{ + struct ipu6_dma_mapping *dmap = mmu->dmap; + + ipu6_mmu_destroy(mmu); + mmu->dmap = NULL; + iova_cache_put(); + put_iova_domain(&dmap->iovad); + kfree(dmap); +} diff --git a/drivers/media/pci/intel/ipu6/ipu6-mmu.h b/drivers/media/pci/intel/ipu6/ipu6-mmu.h new file mode 100644 index 000000000000..db62c00210cf --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-mmu.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2013 - 2023 Intel Corporation */ + +#ifndef IPU6_MMU_H +#define IPU6_MMU_H + +#define ISYS_MMID 1 +#define PSYS_MMID 0 + +struct ipu6_mmu_info { + struct device *dev; + + u32 __iomem *l1_pt; + u32 l1_pt_dma; + u32 **l2_pts; + + u32 *dummy_l2_pt; + u32 dummy_l2_pteval; + void *dummy_page; + u32 dummy_page_pteval; + + dma_addr_t aperture_start; + dma_addr_t aperture_end; + unsigned long pgsize_bitmap; + + spinlock_t lock; /* Serialize access to users */ + struct ipu6_dma_mapping *dmap; +}; + +struct ipu6_mmu { + struct list_head node; + + struct ipu6_mmu_hw *mmu_hw; + unsigned int nr_mmus; + int mmid; + + phys_addr_t pgtbl; + struct device *dev; + + struct ipu6_dma_mapping *dmap; + struct list_head vma_list; + + struct page *trash_page; + dma_addr_t pci_trash_page; /* IOVA from PCI DMA services (parent) */ + dma_addr_t iova_trash_page; /* IOVA for IPU6 child nodes to use */ + + bool ready; + spinlock_t ready_lock; /* Serialize access to bool ready */ + + void (*tlb_invalidate)(struct ipu6_mmu *mmu); +}; + +struct ipu6_mmu *ipu6_mmu_init(struct device *dev, + void __iomem *base, int mmid, + const struct ipu6_hw_variants *hw); +void ipu6_mmu_cleanup(struct ipu6_mmu *mmu); +int ipu6_mmu_hw_init(struct ipu6_mmu *mmu); +int ipu6_mmu_hw_cleanup(struct ipu6_mmu *mmu); +int ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova, + phys_addr_t paddr, size_t size); +size_t ipu6_mmu_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova, + size_t size); +phys_addr_t ipu6_mmu_iova_to_phys(struct ipu6_mmu_info *mmu_info, + dma_addr_t iova); +#endif From patchwork Thu Apr 13 10:04:21 2023 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bingbu Cao X-Patchwork-Id: 673046 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8D5ADC77B6F for ; Thu, 13 Apr 2023 09:55:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229575AbjDMJzL (ORCPT ); Thu, 13 Apr 2023 05:55:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45134 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229989AbjDMJzK (ORCPT ); Thu, 13 Apr 2023 05:55:10 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E5AEA93F8 for ; Thu, 13 Apr 2023 02:54:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1681379693; x=1712915693; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=9Ni0/Wk3qySOSs+L77QuGTEeRzjrCEjYV7Udi0X/m5Q=; b=AQIg37Wpf6stA9qpHYZWNFv2GWVHIfeXhw7HJh/zw21WU0FufOVDq3Ad aNwGUduwdzYmFNRGYeEr4P0Bz8wSp7BHeruwuYc8csR4FJ0B/SiqsvFmZ uH5f1ExYeEr0SvxmSBqHQ9hm1l2J4/Yi8klcdzyt0DiQ8RZeb2eB6RWJY SzUEuQdW5xaVNR3vmvYeUYbdbqhlQB4WQK3HuRaqZwZVkeByWrKUtBZLJ XeTCyjEDpBqDEE6p08IaMhYbKrq4EoTenrrpkLUhlsKQVzrhtdjHWhQ3P uoyDazakULTGA3D9A3NDmAiHqWKGUyIcwEa7fWOFo2wDb5jXJ9wbhjHOQ Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="371992981" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="371992981" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2023 02:54:52 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="639600042" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="639600042" Received: from icg-kernel3.bj.intel.com ([172.16.126.100]) by orsmga003.jf.intel.com with ESMTP; 13 Apr 2023 02:54:48 -0700 From: bingbu.cao@intel.com To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com Subject: [RFC PATCH 06/14] media: intel/ipu6: add syscom interfaces between firmware and driver Date: Thu, 13 Apr 2023 18:04:21 +0800 Message-Id: <20230413100429.919622-7-bingbu.cao@intel.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230413100429.919622-1-bingbu.cao@intel.com> References: <20230413100429.919622-1-bingbu.cao@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Bingbu Cao Syscom is an inter-process(or) communication mechanism between an IPU and host. Syscom uses message queues for message exchange between IPU and host. Each message queue has its consumer and producer, host queue messages to firmware as the producer and then firmware to dequeue the messages as consumer and vice versa. IPU and host use shared registers or memory to reside the read and write indices which are updated by consumer and producer. 
Signed-off-by: Bingbu Cao --- drivers/media/pci/intel/ipu6/ipu6-fw-com.c | 417 +++++++++++++++++++++ drivers/media/pci/intel/ipu6/ipu6-fw-com.h | 47 +++ 2 files changed, 464 insertions(+) create mode 100644 drivers/media/pci/intel/ipu6/ipu6-fw-com.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-fw-com.h diff --git a/drivers/media/pci/intel/ipu6/ipu6-fw-com.c b/drivers/media/pci/intel/ipu6/ipu6-fw-com.c new file mode 100644 index 000000000000..3f570fd12e0d --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-fw-com.c @@ -0,0 +1,417 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2013 - 2023 Intel Corporation + +#include +#include +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-fw-com.h" + +/* + * FWCOM layer is a shared resource between FW and driver. It consist + * of token queues to both send and receive directions. Queue is simply + * an array of structures with read and write indexes to the queue. + * There are 1...n queues to both directions. Queues locates in + * system RAM and are mapped to ISP MMU so that both CPU and ISP can + * see the same buffer. Indexes are located in ISP DMEM so that FW code + * can poll those with very low latency and cost. CPU access to indexes is + * more costly but that happens only at message sending time and + * interrupt triggered message handling. CPU doesn't need to poll indexes. + * wr_reg / rd_reg are offsets to those dmem location. They are not + * the indexes itself. + */ + +/* Shared structure between driver and FW - do not modify */ +struct ipu6_fw_sys_queue { + u64 host_address; + u32 vied_address; + u32 size; + u32 token_size; + u32 wr_reg; /* reg number in subsystem's regmem */ + u32 rd_reg; + u32 _align; +} __packed; + +struct ipu6_fw_sys_queue_res { + u64 host_address; + u32 vied_address; + u32 reg; +} __packed; + +enum syscom_state { + /* Program load or explicit host setting should init to this */ + SYSCOM_STATE_UNINIT = 0x57A7E000, + /* SP Syscom sets this when it is ready for use */ + SYSCOM_STATE_READY = 0x57A7E001, + /* SP Syscom sets this when no more syscom accesses will happen */ + SYSCOM_STATE_INACTIVE = 0x57A7E002 +}; + +enum syscom_cmd { + /* Program load or explicit host setting should init to this */ + SYSCOM_COMMAND_UNINIT = 0x57A7F000, + /* Host Syscom requests syscom to become inactive */ + SYSCOM_COMMAND_INACTIVE = 0x57A7F001 +}; + +/* firmware config: data that sent from the host to SP via DDR */ +/* Cell copies data into a context */ + +struct ipu6_fw_syscom_config { + u32 firmware_address; + + u32 num_input_queues; + u32 num_output_queues; + + /* ISP pointers to an array of ipu6_fw_sys_queue structures */ + u32 input_queue; + u32 output_queue; + + /* ISYS / PSYS private data */ + u32 specific_addr; + u32 specific_size; +} __packed; + +struct ipu6_fw_com_context { + struct ipu6_bus_device *adev; + void __iomem *dmem_addr; + int (*cell_ready)(struct ipu6_bus_device *adev); + void (*cell_start)(struct ipu6_bus_device *adev); + + void *dma_buffer; + dma_addr_t dma_addr; + unsigned int dma_size; + unsigned long attrs; + + struct ipu6_fw_sys_queue *input_queue; /* array of host to SP queues */ + struct ipu6_fw_sys_queue *output_queue; /* array of SP to host */ + + u32 config_vied_addr; + + unsigned int buttress_boot_offset; + void __iomem *base_addr; +}; + +#define FW_COM_WR_REG 0 +#define FW_COM_RD_REG 4 + +#define REGMEM_OFFSET 0 +#ifdef IPU_TRACE_SUPPORT +#define TUNIT_MAGIC_PATTERN 0x5a5a5a5a +#endif + +enum regmem_id { + /* pass pkg_dir address to SPC in 
non-secure mode */ + PKG_DIR_ADDR_REG = 0, + /* Tunit CFG blob for secure - provided by host.*/ + TUNIT_CFG_DWR_REG = 1, + /* syscom commands - modified by the host */ + SYSCOM_COMMAND_REG = 2, + /* Store interrupt status - updated by SP */ + SYSCOM_IRQ_REG = 3, + /* first syscom queue pointer register */ + SYSCOM_QPR_BASE_REG = 4 +}; + +enum message_direction { + DIR_RECV = 0, + DIR_SEND +}; + +#define BUTRESS_FW_BOOT_PARAMS_0 0x4000 +#define BUTTRESS_FW_BOOT_PARAM_REG(base, offset, id) \ + ((base) + BUTRESS_FW_BOOT_PARAMS_0 + ((offset) + (id)) * 4) + +enum buttress_syscom_id { + /* pass syscom configuration to SPC */ + SYSCOM_CONFIG_ID = 0, + /* syscom state - modified by SP */ + SYSCOM_STATE_ID = 1, + /* syscom vtl0 addr mask */ + SYSCOM_VTL0_ADDR_MASK_ID = 2, + SYSCOM_ID_MAX +}; + +static void ipu6_sys_queue_init(struct ipu6_fw_sys_queue *q, unsigned int size, + unsigned int token_size, + struct ipu6_fw_sys_queue_res *res) +{ + unsigned int buf_size = (size + 1) * token_size; + + q->size = size + 1; + q->token_size = token_size; + + /* acquire the shared buffer space */ + q->host_address = res->host_address; + res->host_address += buf_size; + q->vied_address = res->vied_address; + res->vied_address += buf_size; + + /* acquire the shared read and writer pointers */ + q->wr_reg = res->reg; + res->reg++; + q->rd_reg = res->reg; + res->reg++; +} + +void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg, + struct ipu6_bus_device *adev, void __iomem *base) +{ + size_t conf_size, inq_size, outq_size, specific_size; + struct ipu6_fw_syscom_config *config_host_addr; + unsigned int sizeinput = 0, sizeoutput = 0; + struct ipu6_fw_sys_queue_res res; + struct ipu6_fw_com_context *ctx; + size_t sizeall, offset; + unsigned long attrs = 0; + void *specific_host_addr; + unsigned int i; + + if (!cfg || !cfg->cell_start || !cfg->cell_ready) + return NULL; + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return NULL; + ctx->dmem_addr = base + cfg->dmem_addr + REGMEM_OFFSET; + ctx->adev = adev; + ctx->cell_start = cfg->cell_start; + ctx->cell_ready = cfg->cell_ready; + ctx->buttress_boot_offset = cfg->buttress_boot_offset; + ctx->base_addr = base; + + /* + * Allocate DMA mapped memory. Allocate one big chunk. 
+ */ + /* Base cfg for FW */ + conf_size = roundup(sizeof(struct ipu6_fw_syscom_config), 8); + /* Descriptions of the queues */ + inq_size = size_mul(cfg->num_input_queues, + sizeof(struct ipu6_fw_sys_queue)); + outq_size = size_mul(cfg->num_output_queues, + sizeof(struct ipu6_fw_sys_queue)); + /* FW specific information structure */ + specific_size = roundup(cfg->specific_size, 8); + + sizeall = conf_size + inq_size + outq_size + specific_size; + + for (i = 0; i < cfg->num_input_queues; i++) + sizeinput += size_mul(cfg->input[i].queue_size + 1, + cfg->input[i].token_size); + + for (i = 0; i < cfg->num_output_queues; i++) + sizeoutput += size_mul(cfg->output[i].queue_size + 1, + cfg->output[i].token_size); + + sizeall += sizeinput + sizeoutput; + + ctx->dma_buffer = dma_alloc_attrs(&ctx->adev->dev, sizeall, + &ctx->dma_addr, GFP_KERNEL, attrs); + ctx->attrs = attrs; + if (!ctx->dma_buffer) { + dev_err(&ctx->adev->dev, "failed to allocate dma memory\n"); + kfree(ctx); + return NULL; + } + + ctx->dma_size = sizeall; + + config_host_addr = ctx->dma_buffer; + ctx->config_vied_addr = ctx->dma_addr; + + offset = conf_size; + ctx->input_queue = ctx->dma_buffer + offset; + config_host_addr->input_queue = ctx->dma_addr + offset; + config_host_addr->num_input_queues = cfg->num_input_queues; + + offset += inq_size; + ctx->output_queue = ctx->dma_buffer + offset; + config_host_addr->output_queue = ctx->dma_addr + offset; + config_host_addr->num_output_queues = cfg->num_output_queues; + + /* copy firmware specific data */ + offset += outq_size; + specific_host_addr = ctx->dma_buffer + offset; + config_host_addr->specific_addr = ctx->dma_addr + offset; + config_host_addr->specific_size = cfg->specific_size; + if (cfg->specific_addr && cfg->specific_size) + memcpy(specific_host_addr, cfg->specific_addr, + cfg->specific_size); + + /* initialize input queues */ + offset += specific_size; + res.reg = SYSCOM_QPR_BASE_REG; + res.host_address = (u64)(ctx->dma_buffer + offset); + res.vied_address = ctx->dma_addr + offset; + for (i = 0; i < cfg->num_input_queues; i++) + ipu6_sys_queue_init(ctx->input_queue + i, + cfg->input[i].queue_size, + cfg->input[i].token_size, &res); + + /* initialize output queues */ + offset += sizeinput; + res.host_address = (u64)(ctx->dma_buffer + offset); + res.vied_address = ctx->dma_addr + offset; + for (i = 0; i < cfg->num_output_queues; i++) { + ipu6_sys_queue_init(ctx->output_queue + i, + cfg->output[i].queue_size, + cfg->output[i].token_size, &res); + } + + return ctx; +} +EXPORT_SYMBOL_NS_GPL(ipu6_fw_com_prepare, INTEL_IPU6); + +int ipu6_fw_com_open(struct ipu6_fw_com_context *ctx) +{ + /* Check if SP is in valid state */ + if (!ctx->cell_ready(ctx->adev)) + return -EIO; + + /* store syscom uninitialized command */ + writel(SYSCOM_COMMAND_UNINIT, ctx->dmem_addr + SYSCOM_COMMAND_REG * 4); + + /* store syscom uninitialized state */ + writel(SYSCOM_STATE_UNINIT, + BUTTRESS_FW_BOOT_PARAM_REG(ctx->base_addr, + ctx->buttress_boot_offset, + SYSCOM_STATE_ID)); + + /* store firmware configuration address */ + writel(ctx->config_vied_addr, + BUTTRESS_FW_BOOT_PARAM_REG(ctx->base_addr, + ctx->buttress_boot_offset, + SYSCOM_CONFIG_ID)); + ctx->cell_start(ctx->adev); + + return 0; +} +EXPORT_SYMBOL_NS_GPL(ipu6_fw_com_open, INTEL_IPU6); + +int ipu6_fw_com_close(struct ipu6_fw_com_context *ctx) +{ + int state; + + state = readl(BUTTRESS_FW_BOOT_PARAM_REG(ctx->base_addr, + ctx->buttress_boot_offset, + SYSCOM_STATE_ID)); + if (state != SYSCOM_STATE_READY) + return -EBUSY; + + /* set close 
request flag */ + writel(SYSCOM_COMMAND_INACTIVE, ctx->dmem_addr + + SYSCOM_COMMAND_REG * 4); + + return 0; +} +EXPORT_SYMBOL_NS_GPL(ipu6_fw_com_close, INTEL_IPU6); + +int ipu6_fw_com_release(struct ipu6_fw_com_context *ctx, unsigned int force) +{ + /* check if release is forced, an verify cell state if it is not */ + if (!force && !ctx->cell_ready(ctx->adev)) + return -EBUSY; + + dma_free_attrs(&ctx->adev->dev, ctx->dma_size, + ctx->dma_buffer, ctx->dma_addr, ctx->attrs); + kfree(ctx); + return 0; +} +EXPORT_SYMBOL_NS_GPL(ipu6_fw_com_release, INTEL_IPU6); + +bool ipu6_fw_com_ready(struct ipu6_fw_com_context *ctx) +{ + int state; + + state = readl(BUTTRESS_FW_BOOT_PARAM_REG(ctx->base_addr, + ctx->buttress_boot_offset, + SYSCOM_STATE_ID)); + + return state == SYSCOM_STATE_READY; +} +EXPORT_SYMBOL_NS_GPL(ipu6_fw_com_ready, INTEL_IPU6); + +void *ipu6_send_get_token(struct ipu6_fw_com_context *ctx, int q_nbr) +{ + struct ipu6_fw_sys_queue *q = &ctx->input_queue[q_nbr]; + void __iomem *q_dmem = ctx->dmem_addr + q->wr_reg * 4; + unsigned int wr, rd; + unsigned int packets; + unsigned int index; + + wr = readl(q_dmem + FW_COM_WR_REG); + rd = readl(q_dmem + FW_COM_RD_REG); + + if (WARN_ON_ONCE(wr >= q->size || rd >= q->size)) + return NULL; + + if (wr < rd) + packets = rd - wr - 1; + else + packets = q->size - (wr - rd + 1); + + if (!packets) + return NULL; + + index = readl(q_dmem + FW_COM_WR_REG); + + return (void *)(q->host_address + index * q->token_size); +} +EXPORT_SYMBOL_NS_GPL(ipu6_send_get_token, INTEL_IPU6); + +void ipu6_send_put_token(struct ipu6_fw_com_context *ctx, int q_nbr) +{ + struct ipu6_fw_sys_queue *q = &ctx->input_queue[q_nbr]; + void __iomem *q_dmem = ctx->dmem_addr + q->wr_reg * 4; + unsigned int wr = readl(q_dmem + FW_COM_WR_REG) + 1; + + if (wr >= q->size) + wr = 0; + + writel(wr, q_dmem + FW_COM_WR_REG); +} +EXPORT_SYMBOL_NS_GPL(ipu6_send_put_token, INTEL_IPU6); + +void *ipu6_recv_get_token(struct ipu6_fw_com_context *ctx, int q_nbr) +{ + struct ipu6_fw_sys_queue *q = &ctx->output_queue[q_nbr]; + void __iomem *q_dmem = ctx->dmem_addr + q->wr_reg * 4; + unsigned int wr, rd; + unsigned int packets; + void *addr; + + wr = readl(q_dmem + FW_COM_WR_REG); + rd = readl(q_dmem + FW_COM_RD_REG); + + if (WARN_ON_ONCE(wr >= q->size || rd >= q->size)) + return NULL; + + if (wr < rd) + wr += q->size; + + packets = wr - rd; + if (!packets) + return NULL; + + addr = (void *)(q->host_address + rd * q->token_size); + + return addr; +} +EXPORT_SYMBOL_NS_GPL(ipu6_recv_get_token, INTEL_IPU6); + +void ipu6_recv_put_token(struct ipu6_fw_com_context *ctx, int q_nbr) +{ + struct ipu6_fw_sys_queue *q = &ctx->output_queue[q_nbr]; + void __iomem *q_dmem = ctx->dmem_addr + q->wr_reg * 4; + unsigned int rd = readl(q_dmem + FW_COM_RD_REG) + 1; + + if (rd >= q->size) + rd = 0; + + writel(rd, q_dmem + FW_COM_RD_REG); +} +EXPORT_SYMBOL_NS_GPL(ipu6_recv_put_token, INTEL_IPU6); diff --git a/drivers/media/pci/intel/ipu6/ipu6-fw-com.h b/drivers/media/pci/intel/ipu6/ipu6-fw-com.h new file mode 100644 index 000000000000..660c406b3ac9 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-fw-com.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2013 - 2023 Intel Corporation */ + +#ifndef IPU6_FW_COM_H +#define IPU6_FW_COM_H + +struct ipu6_fw_com_context; +struct ipu6_bus_device; + +struct ipu6_fw_syscom_queue_config { + unsigned int queue_size; /* tokens per queue */ + unsigned int token_size; /* bytes per token */ +}; + +#define SYSCOM_BUTTRESS_FW_PARAMS_ISYS_OFFSET 0 + +struct 
ipu6_fw_com_cfg { + unsigned int num_input_queues; + unsigned int num_output_queues; + struct ipu6_fw_syscom_queue_config *input; + struct ipu6_fw_syscom_queue_config *output; + + unsigned int dmem_addr; + + /* firmware-specific configuration data */ + void *specific_addr; + unsigned int specific_size; + int (*cell_ready)(struct ipu6_bus_device *adev); + void (*cell_start)(struct ipu6_bus_device *adev); + + unsigned int buttress_boot_offset; +}; + +void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg, + struct ipu6_bus_device *adev, void __iomem *base); + +int ipu6_fw_com_open(struct ipu6_fw_com_context *ctx); +bool ipu6_fw_com_ready(struct ipu6_fw_com_context *ctx); +int ipu6_fw_com_close(struct ipu6_fw_com_context *ctx); +int ipu6_fw_com_release(struct ipu6_fw_com_context *ctx, unsigned int force); + +void *ipu6_recv_get_token(struct ipu6_fw_com_context *ctx, int q_nbr); +void ipu6_recv_put_token(struct ipu6_fw_com_context *ctx, int q_nbr); +void *ipu6_send_get_token(struct ipu6_fw_com_context *ctx, int q_nbr); +void ipu6_send_put_token(struct ipu6_fw_com_context *ctx, int q_nbr); + +#endif From patchwork Thu Apr 13 10:04:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bingbu Cao X-Patchwork-Id: 674371 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 39D19C77B61 for ; Thu, 13 Apr 2023 09:55:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230081AbjDMJzR (ORCPT ); Thu, 13 Apr 2023 05:55:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45264 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230013AbjDMJzQ (ORCPT ); Thu, 13 Apr 2023 05:55:16 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 690BF7692 for ; Thu, 13 Apr 2023 02:55:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1681379700; x=1712915700; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ey5aAi/bwzTcNK8dPW7z7D5aFasb7ZJ1gPWySyJXjvw=; b=PL1Il0/XUJ4UONGIvMCQg9aaf139cA58/59sh9W9/1JBoYRMqUdfcRgz jfX3B/S58ypB6ZYfgEtSEEaIvICKyUkIpxfDro7WXr6FGzJTad5SknK3v EoWkOo4xDi6iHwOPhIFITkDiiQG9SUd4h2zFr5d0nGmBUBNUM9df9Urvu AJhu6E8fkcEG1TvuZnVtsjZ2CdajvfxiH1NWOvEQ/VlJuWRP2YwLVFNvI Rx8LcU7h+BTBPUPMgYcfVzlSQbNoOksYqyiJA94jbt8WilJVZykQRUlam WJ9OIrRUx+clhKh4bhE73y/VW0BbD/jq9YmwYd6QqNpbYbeM3m4R5ejyw Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="371993000" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="371993000" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2023 02:54:56 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="639600048" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="639600048" Received: from icg-kernel3.bj.intel.com ([172.16.126.100]) by orsmga003.jf.intel.com with ESMTP; 13 Apr 2023 02:54:52 -0700 From: bingbu.cao@intel.com To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, 
bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com Subject: [RFC PATCH 07/14] media: intel/ipu6: input system ABI between firmware and driver Date: Thu, 13 Apr 2023 18:04:22 +0800 Message-Id: <20230413100429.919622-8-bingbu.cao@intel.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230413100429.919622-1-bingbu.cao@intel.com> References: <20230413100429.919622-1-bingbu.cao@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Bingbu Cao Implement the input system firmware ABIs between the firmware and driver - include stream configuration, control command, capture request and response, etc. Signed-off-by: Bingbu Cao --- drivers/media/pci/intel/ipu6/ipu6-fw-isys.c | 566 +++++++++++++++++++ drivers/media/pci/intel/ipu6/ipu6-fw-isys.h | 574 ++++++++++++++++++++ 2 files changed, 1140 insertions(+) create mode 100644 drivers/media/pci/intel/ipu6/ipu6-fw-isys.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-fw-isys.h diff --git a/drivers/media/pci/intel/ipu6/ipu6-fw-isys.c b/drivers/media/pci/intel/ipu6/ipu6-fw-isys.c new file mode 100644 index 000000000000..f5073b580066 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-fw-isys.c @@ -0,0 +1,566 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2013 - 2023 Intel Corporation + +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-fw-com.h" +#include "ipu6-fw-isys.h" +#include "ipu6-isys.h" +#include "ipu6-platform.h" +#include "ipu6-platform-isys-csi2-reg.h" +#include "ipu6-platform-regs.h" + +#define IPU6_FW_UNSUPPORTED_DATA_TYPE 0 +static const u8 extracted_bits_per_pixel_per_mipi_data_type[64] = { + 64, /* [0x00] MIPI_DATA_TYPE_FRAME_START_CODE */ + 64, /* [0x01] MIPI_DATA_TYPE_FRAME_END_CODE */ + 64, /* [0x02] MIPI_DATA_TYPE_LINE_START_CODE */ + 64, /* [0x03] MIPI_DATA_TYPE_LINE_END_CODE */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x04] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x05] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x06] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x07] */ + 64, /* [0x08] MIPI_DATA_TYPE_GENERIC_SHORT1 */ + 64, /* [0x09] MIPI_DATA_TYPE_GENERIC_SHORT2 */ + 64, /* [0x0A] MIPI_DATA_TYPE_GENERIC_SHORT3 */ + 64, /* [0x0B] MIPI_DATA_TYPE_GENERIC_SHORT4 */ + 64, /* [0x0C] MIPI_DATA_TYPE_GENERIC_SHORT5 */ + 64, /* [0x0D] MIPI_DATA_TYPE_GENERIC_SHORT6 */ + 64, /* [0x0E] MIPI_DATA_TYPE_GENERIC_SHORT7 */ + 64, /* [0x0F] MIPI_DATA_TYPE_GENERIC_SHORT8 */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x10] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x11] */ + 8, /* [0x12] MIPI_DATA_TYPE_EMBEDDED */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x13] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x14] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x15] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x16] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x17] */ + 12, /* [0x18] MIPI_DATA_TYPE_YUV420_8 */ + 15, /* [0x19] MIPI_DATA_TYPE_YUV420_10 */ + 12, /* [0x1A] MIPI_DATA_TYPE_YUV420_8_LEGACY */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x1B] */ + 12, /* [0x1C] MIPI_DATA_TYPE_YUV420_8_SHIFT */ + 15, /* [0x1D] MIPI_DATA_TYPE_YUV420_10_SHIFT */ + 16, /* [0x1E] MIPI_DATA_TYPE_YUV422_8 */ + 20, /* [0x1F] MIPI_DATA_TYPE_YUV422_10 */ + 16, /* [0x20] MIPI_DATA_TYPE_RGB_444 */ + 16, /* [0x21] MIPI_DATA_TYPE_RGB_555 */ + 16, /* [0x22] MIPI_DATA_TYPE_RGB_565 */ + 18, /* [0x23] MIPI_DATA_TYPE_RGB_666 */ + 24, /* [0x24] MIPI_DATA_TYPE_RGB_888 */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x25] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x26] */ + 
IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x27] */ + 6, /* [0x28] MIPI_DATA_TYPE_RAW_6 */ + 7, /* [0x29] MIPI_DATA_TYPE_RAW_7 */ + 8, /* [0x2A] MIPI_DATA_TYPE_RAW_8 */ + 10, /* [0x2B] MIPI_DATA_TYPE_RAW_10 */ + 12, /* [0x2C] MIPI_DATA_TYPE_RAW_12 */ + 14, /* [0x2D] MIPI_DATA_TYPE_RAW_14 */ + 16, /* [0x2E] MIPI_DATA_TYPE_RAW_16 */ + 8, /* [0x2F] MIPI_DATA_TYPE_BINARY_8 */ + 8, /* [0x30] MIPI_DATA_TYPE_USER_DEF1 */ + 8, /* [0x31] MIPI_DATA_TYPE_USER_DEF2 */ + 8, /* [0x32] MIPI_DATA_TYPE_USER_DEF3 */ + 8, /* [0x33] MIPI_DATA_TYPE_USER_DEF4 */ + 8, /* [0x34] MIPI_DATA_TYPE_USER_DEF5 */ + 8, /* [0x35] MIPI_DATA_TYPE_USER_DEF6 */ + 8, /* [0x36] MIPI_DATA_TYPE_USER_DEF7 */ + 8, /* [0x37] MIPI_DATA_TYPE_USER_DEF8 */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x38] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x39] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x3A] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x3B] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x3C] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x3D] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE, /* [0x3E] */ + IPU6_FW_UNSUPPORTED_DATA_TYPE /* [0x3F] */ +}; + +static const char send_msg_types[N_IPU6_FW_ISYS_SEND_TYPE][32] = { + "STREAM_OPEN", + "STREAM_START", + "STREAM_START_AND_CAPTURE", + "STREAM_CAPTURE", + "STREAM_STOP", + "STREAM_FLUSH", + "STREAM_CLOSE" +}; + +u8 ipu6_fw_isys_get_bpp_by_dt(u8 dt) +{ + return extracted_bits_per_pixel_per_mipi_data_type[dt]; +} + +static int handle_proxy_response(struct ipu6_isys *isys, unsigned int req_id) +{ + struct ipu6_fw_isys_proxy_resp_info_abi *resp; + int ret = -EIO; + + resp = (struct ipu6_fw_isys_proxy_resp_info_abi *) + ipu6_recv_get_token(isys->fwcom, IPU6_BASE_PROXY_RECV_QUEUES); + if (!resp) + return 1; + + dev_dbg(&isys->adev->dev, + "Proxy response: id 0x%x, error %d, details %d\n", + resp->request_id, resp->error_info.error, + resp->error_info.error_details); + + if (req_id == resp->request_id) + ret = 0; + + ipu6_recv_put_token(isys->fwcom, IPU6_BASE_PROXY_RECV_QUEUES); + return ret; +} + +int ipu6_fw_isys_send_proxy_token(struct ipu6_isys *isys, + unsigned int req_id, + unsigned int index, + unsigned int offset, u32 value) +{ + struct ipu6_fw_com_context *ctx = isys->fwcom; + struct ipu6_fw_proxy_send_queue_token *token; + unsigned int timeout = 1000; + int ret; + + dev_dbg(&isys->adev->dev, + "proxy send: req_id 0x%x, index %d, offset 0x%x, value 0x%x\n", + req_id, index, offset, value); + + token = ipu6_send_get_token(ctx, IPU6_BASE_PROXY_SEND_QUEUES); + if (!token) + return -EBUSY; + + token->request_id = req_id; + token->region_index = index; + token->offset = offset; + token->value = value; + ipu6_send_put_token(ctx, IPU6_BASE_PROXY_SEND_QUEUES); + + do { + usleep_range(100, 110); + ret = handle_proxy_response(isys, req_id); + if (!ret) + break; + if (ret == -EIO) { + dev_err(&isys->adev->dev, + "Proxy response received with unexpected id\n"); + break; + } + timeout--; + } while (ret && timeout); + + if (!timeout) + dev_err(&isys->adev->dev, "Proxy response timed out\n"); + + return ret; +} + +int ipu6_fw_isys_complex_cmd(struct ipu6_isys *isys, + const unsigned int stream_handle, + void *cpu_mapped_buf, + dma_addr_t dma_mapped_buf, + size_t size, u16 send_type) +{ + struct ipu6_fw_com_context *ctx = isys->fwcom; + struct ipu6_fw_send_queue_token *token; + + if (send_type >= N_IPU6_FW_ISYS_SEND_TYPE) + return -EINVAL; + + dev_dbg(&isys->adev->dev, "send_token: %s\n", + send_msg_types[send_type]); + + /* + * Time to flush cache in case we have some payload. 
Not all messages + * have that + */ + if (cpu_mapped_buf) + clflush_cache_range(cpu_mapped_buf, size); + + token = ipu6_send_get_token(ctx, + stream_handle + IPU6_BASE_MSG_SEND_QUEUES); + if (!token) + return -EBUSY; + + token->payload = dma_mapped_buf; + token->buf_handle = (unsigned long)cpu_mapped_buf; + token->send_type = send_type; + + ipu6_send_put_token(ctx, stream_handle + IPU6_BASE_MSG_SEND_QUEUES); + + return 0; +} + +int ipu6_fw_isys_simple_cmd(struct ipu6_isys *isys, + const unsigned int stream_handle, u16 send_type) +{ + return ipu6_fw_isys_complex_cmd(isys, stream_handle, NULL, 0, 0, + send_type); +} + +int ipu6_fw_isys_close(struct ipu6_isys *isys) +{ + struct device *dev = &isys->adev->dev; + int retry = IPU6_ISYS_CLOSE_RETRY; + unsigned long flags; + void *fwcom; + int ret; + + /* + * Stop the isys fw. Actual close takes + * some time as the FW must stop its actions including code fetch + * to SP icache. + * spinlock to wait the interrupt handler to be finished + */ + spin_lock_irqsave(&isys->power_lock, flags); + ret = ipu6_fw_com_close(isys->fwcom); + fwcom = isys->fwcom; + isys->fwcom = NULL; + spin_unlock_irqrestore(&isys->power_lock, flags); + if (ret) + dev_err(dev, "Device close failure: %d\n", ret); + + /* release probably fails if the close failed. Let's try still */ + do { + usleep_range(400, 500); + ret = ipu6_fw_com_release(fwcom, 0); + retry--; + } while (ret && retry); + + if (ret) { + dev_err(dev, "Device release time out %d\n", ret); + spin_lock_irqsave(&isys->power_lock, flags); + isys->fwcom = fwcom; + spin_unlock_irqrestore(&isys->power_lock, flags); + } + + return ret; +} + +void ipu6_fw_isys_cleanup(struct ipu6_isys *isys) +{ + int ret; + + ret = ipu6_fw_com_release(isys->fwcom, 1); + if (ret < 0) + dev_err(&isys->adev->dev, + "Device busy, fw_com release failed."); + isys->fwcom = NULL; +} + +static void start_sp(struct ipu6_bus_device *adev) +{ + struct ipu6_isys *isys = ipu6_bus_get_drvdata(adev); + void __iomem *spc_regs_base = isys->pdata->base + + isys->pdata->ipdata->hw_variant.spc_offset; + u32 val = IPU6_ISYS_SPC_STATUS_START | + IPU6_ISYS_SPC_STATUS_RUN | + IPU6_ISYS_SPC_STATUS_CTRL_ICACHE_INVALIDATE; + + val |= isys->icache_prefetch ? 
IPU6_ISYS_SPC_STATUS_ICACHE_PREFETCH : 0; + + writel(val, spc_regs_base + IPU6_ISYS_REG_SPC_STATUS_CTRL); +} + +static int query_sp(struct ipu6_bus_device *adev) +{ + struct ipu6_isys *isys = ipu6_bus_get_drvdata(adev); + void __iomem *spc_regs_base = isys->pdata->base + + isys->pdata->ipdata->hw_variant.spc_offset; + u32 val; + + val = readl(spc_regs_base + IPU6_ISYS_REG_SPC_STATUS_CTRL); + /* return true when READY == 1, START == 0 */ + val &= IPU6_ISYS_SPC_STATUS_READY | IPU6_ISYS_SPC_STATUS_START; + + return val == IPU6_ISYS_SPC_STATUS_READY; +} + +static int ipu6_isys_fwcom_cfg_init(struct ipu6_isys *isys, + struct ipu6_fw_com_cfg *fwcom, + unsigned int num_streams) +{ + unsigned int max_send_queues, max_sram_blocks, max_devq_size; + struct ipu6_fw_syscom_queue_config *input_queue_cfg; + struct ipu6_fw_syscom_queue_config *output_queue_cfg; + int type_proxy = IPU6_FW_ISYS_QUEUE_TYPE_PROXY; + int type_dev = IPU6_FW_ISYS_QUEUE_TYPE_DEV; + int type_msg = IPU6_FW_ISYS_QUEUE_TYPE_MSG; + int base_dev_send = IPU6_BASE_DEV_SEND_QUEUES; + int base_msg_send = IPU6_BASE_MSG_SEND_QUEUES; + int base_msg_recv = IPU6_BASE_MSG_RECV_QUEUES; + struct ipu6_fw_isys_fw_config *isys_fw_cfg; + u32 num_in_message_queues; + unsigned int max_streams; + unsigned int size; + unsigned int i; + + max_streams = isys->pdata->ipdata->max_streams; + max_send_queues = isys->pdata->ipdata->max_send_queues; + max_sram_blocks = isys->pdata->ipdata->max_sram_blocks; + max_devq_size = isys->pdata->ipdata->max_devq_size; + num_in_message_queues = clamp(num_streams, 1U, max_streams); + isys_fw_cfg = devm_kzalloc(&isys->adev->dev, sizeof(*isys_fw_cfg), + GFP_KERNEL); + if (!isys_fw_cfg) + return -ENOMEM; + + isys_fw_cfg->num_send_queues[IPU6_FW_ISYS_QUEUE_TYPE_PROXY] = + IPU6_N_MAX_PROXY_SEND_QUEUES; + isys_fw_cfg->num_send_queues[IPU6_FW_ISYS_QUEUE_TYPE_DEV] = + IPU6_N_MAX_DEV_SEND_QUEUES; + isys_fw_cfg->num_send_queues[IPU6_FW_ISYS_QUEUE_TYPE_MSG] = + num_in_message_queues; + isys_fw_cfg->num_recv_queues[IPU6_FW_ISYS_QUEUE_TYPE_PROXY] = + IPU6_N_MAX_PROXY_RECV_QUEUES; + /* Common msg/dev return queue */ + isys_fw_cfg->num_recv_queues[IPU6_FW_ISYS_QUEUE_TYPE_DEV] = 0; + isys_fw_cfg->num_recv_queues[IPU6_FW_ISYS_QUEUE_TYPE_MSG] = 1; + + size = sizeof(*input_queue_cfg) * max_send_queues; + input_queue_cfg = devm_kzalloc(&isys->adev->dev, size, GFP_KERNEL); + if (!input_queue_cfg) + return -ENOMEM; + + size = sizeof(*output_queue_cfg) * IPU6_N_MAX_RECV_QUEUES; + output_queue_cfg = devm_kzalloc(&isys->adev->dev, size, GFP_KERNEL); + if (!output_queue_cfg) + return -ENOMEM; + + fwcom->input = input_queue_cfg; + fwcom->output = output_queue_cfg; + + fwcom->num_input_queues = + isys_fw_cfg->num_send_queues[type_proxy] + + isys_fw_cfg->num_send_queues[type_dev] + + isys_fw_cfg->num_send_queues[type_msg]; + + fwcom->num_output_queues = + isys_fw_cfg->num_recv_queues[type_proxy] + + isys_fw_cfg->num_recv_queues[type_dev] + + isys_fw_cfg->num_recv_queues[type_msg]; + + /* SRAM partitioning. Equal partitioning is set. 
*/ + for (i = 0; i < max_sram_blocks; i++) { + if (i < num_in_message_queues) + isys_fw_cfg->buffer_partition.num_gda_pages[i] = + (IPU6_DEVICE_GDA_NR_PAGES * + IPU6_DEVICE_GDA_VIRT_FACTOR) / + num_in_message_queues; + else + isys_fw_cfg->buffer_partition.num_gda_pages[i] = 0; + } + + /* FW assumes proxy interface at fwcom queue 0 */ + for (i = 0; i < isys_fw_cfg->num_send_queues[type_proxy]; i++) { + input_queue_cfg[i].token_size = + sizeof(struct ipu6_fw_proxy_send_queue_token); + input_queue_cfg[i].queue_size = IPU6_ISYS_SIZE_PROXY_SEND_QUEUE; + } + + for (i = 0; i < isys_fw_cfg->num_send_queues[type_dev]; i++) { + input_queue_cfg[base_dev_send + i].token_size = + sizeof(struct ipu6_fw_send_queue_token); + input_queue_cfg[base_dev_send + i].queue_size = max_devq_size; + } + + for (i = 0; i < isys_fw_cfg->num_send_queues[type_msg]; i++) { + input_queue_cfg[base_msg_send + i].token_size = + sizeof(struct ipu6_fw_send_queue_token); + input_queue_cfg[base_msg_send + i].queue_size = + IPU6_ISYS_SIZE_SEND_QUEUE; + } + + for (i = 0; i < isys_fw_cfg->num_recv_queues[type_proxy]; i++) { + output_queue_cfg[i].token_size = + sizeof(struct ipu6_fw_proxy_resp_queue_token); + output_queue_cfg[i].queue_size = + IPU6_ISYS_SIZE_PROXY_RECV_QUEUE; + } + /* There is no recv DEV queue */ + for (i = 0; i < isys_fw_cfg->num_recv_queues[type_msg]; i++) { + output_queue_cfg[base_msg_recv + i].token_size = + sizeof(struct ipu6_fw_resp_queue_token); + output_queue_cfg[base_msg_recv + i].queue_size = + IPU6_ISYS_SIZE_RECV_QUEUE; + } + + fwcom->dmem_addr = isys->pdata->ipdata->hw_variant.dmem_offset; + fwcom->specific_addr = isys_fw_cfg; + fwcom->specific_size = sizeof(*isys_fw_cfg); + + return 0; +} + +int ipu6_fw_isys_init(struct ipu6_isys *isys, unsigned int num_streams) +{ + struct device *dev = &isys->adev->dev; + int retry = IPU6_ISYS_OPEN_RETRY; + struct ipu6_fw_com_cfg fwcom = { + .cell_start = start_sp, + .cell_ready = query_sp, + .buttress_boot_offset = SYSCOM_BUTTRESS_FW_PARAMS_ISYS_OFFSET, + }; + int ret; + + ipu6_isys_fwcom_cfg_init(isys, &fwcom, num_streams); + + isys->fwcom = ipu6_fw_com_prepare(&fwcom, isys->adev, + isys->pdata->base); + if (!isys->fwcom) { + dev_err(dev, "isys fw com prepare failed\n"); + return -EIO; + } + + ret = ipu6_fw_com_open(isys->fwcom); + if (ret) { + dev_err(dev, "isys fw com open failed %d\n", ret); + return ret; + } + + do { + usleep_range(400, 500); + if (ipu6_fw_com_ready(isys->fwcom)) + break; + retry--; + } while (retry > 0); + + if (!retry && ret) { + dev_err(dev, "isys port open ready failed %d\n", ret); + ipu6_fw_isys_close(isys); + } + + return ret; +} + +struct ipu6_fw_isys_resp_info_abi * +ipu6_fw_isys_get_resp(void *context, unsigned int queue) +{ + return (struct ipu6_fw_isys_resp_info_abi *) + ipu6_recv_get_token(context, queue); +} + +void ipu6_fw_isys_put_resp(void *context, unsigned int queue) +{ + ipu6_recv_put_token(context, queue); +} + +void +ipu6_fw_isys_dump_stream_cfg(struct device *dev, + struct ipu6_fw_isys_stream_cfg_data_abi *cfg) +{ + unsigned int i; + + dev_dbg(dev, "-----------------------------------------------------\n"); + dev_dbg(dev, "IPU6_FW_ISYS_STREAM_CFG_DATA\n"); + + dev_dbg(dev, "compfmt = %d\n", cfg->vc); + dev_dbg(dev, "src = %d\n", cfg->src); + dev_dbg(dev, "vc = %d\n", cfg->vc); + dev_dbg(dev, "isl_use = %d\n", cfg->isl_use); + dev_dbg(dev, "sensor_type = %d\n", cfg->sensor_type); + + dev_dbg(dev, "send_irq_sof_discarded = %d\n", + cfg->send_irq_sof_discarded); + dev_dbg(dev, "send_irq_eof_discarded = %d\n", + 
cfg->send_irq_eof_discarded); + dev_dbg(dev, "send_resp_sof_discarded = %d\n", + cfg->send_resp_sof_discarded); + dev_dbg(dev, "send_resp_eof_discarded = %d\n", + cfg->send_resp_eof_discarded); + + dev_dbg(dev, "crop:\n"); + dev_dbg(dev, "\t.left_top = [%d, %d]\n", cfg->crop.left_offset, + cfg->crop.top_offset); + dev_dbg(dev, "\t.right_bottom = [%d, %d]\n", cfg->crop.right_offset, + cfg->crop.bottom_offset); + + dev_dbg(dev, "nof_input_pins = %d\n", cfg->nof_input_pins); + for (i = 0; i < cfg->nof_input_pins; i++) { + dev_dbg(dev, "input pin[%d]:\n", i); + dev_dbg(dev, "\t.dt = 0x%0x\n", cfg->input_pins[i].dt); + dev_dbg(dev, "\t.mipi_store_mode = %d\n", + cfg->input_pins[i].mipi_store_mode); + dev_dbg(dev, "\t.bits_per_pix = %d\n", + cfg->input_pins[i].bits_per_pix); + dev_dbg(dev, "\t.mapped_dt = 0x%0x\n", + cfg->input_pins[i].mapped_dt); + dev_dbg(dev, "\t.input_res = %dx%d\n", + cfg->input_pins[i].input_res.width, + cfg->input_pins[i].input_res.height); + dev_dbg(dev, "\t.mipi_decompression = %d\n", + cfg->input_pins[i].mipi_decompression); + dev_dbg(dev, "\t.capture_mode = %d\n", + cfg->input_pins[i].capture_mode); + } + + dev_dbg(dev, "nof_output_pins = %d\n", cfg->nof_output_pins); + for (i = 0; i < cfg->nof_output_pins; i++) { + dev_dbg(dev, "output_pin[%d]:\n", i); + dev_dbg(dev, "\t.input_pin_id = %d\n", + cfg->output_pins[i].input_pin_id); + dev_dbg(dev, "\t.output_res = %dx%d\n", + cfg->output_pins[i].output_res.width, + cfg->output_pins[i].output_res.height); + dev_dbg(dev, "\t.stride = %d\n", cfg->output_pins[i].stride); + dev_dbg(dev, "\t.pt = %d\n", cfg->output_pins[i].pt); + dev_dbg(dev, "\t.payload_buf_size = %d\n", + cfg->output_pins[i].payload_buf_size); + dev_dbg(dev, "\t.ft = %d\n", cfg->output_pins[i].ft); + dev_dbg(dev, "\t.watermark_in_lines = %d\n", + cfg->output_pins[i].watermark_in_lines); + dev_dbg(dev, "\t.send_irq = %d\n", + cfg->output_pins[i].send_irq); + dev_dbg(dev, "\t.reserve_compression = %d\n", + cfg->output_pins[i].reserve_compression); + dev_dbg(dev, "\t.snoopable = %d\n", + cfg->output_pins[i].snoopable); + dev_dbg(dev, "\t.error_handling_enable = %d\n", + cfg->output_pins[i].error_handling_enable); + dev_dbg(dev, "\t.sensor_type = %d\n", + cfg->output_pins[i].sensor_type); + } + dev_dbg(dev, "-----------------------------------------------------\n"); +} + +void +ipu6_fw_isys_dump_frame_buff_set(struct device *dev, + struct ipu6_fw_isys_frame_buff_set_abi *buf, + unsigned int outputs) +{ + unsigned int i; + + dev_dbg(dev, "-----------------------------------------------------\n"); + dev_dbg(dev, "IPU6_FW_ISYS_FRAME_BUFF_SET\n"); + + for (i = 0; i < outputs; i++) { + dev_dbg(dev, "output_pin[%d]:\n", i); + dev_dbg(dev, "\t.out_buf_id = %llu\n", + buf->output_pins[i].out_buf_id); + dev_dbg(dev, "\t.addr = 0x%x\n", buf->output_pins[i].addr); + dev_dbg(dev, "\t.compress = %d\n", + buf->output_pins[i].compress); + } + + dev_dbg(dev, "send_irq_sof = 0x%x\n", buf->send_irq_sof); + dev_dbg(dev, "send_irq_eof = 0x%x\n", buf->send_irq_eof); + dev_dbg(dev, "send_resp_sof = 0x%x\n", buf->send_resp_sof); + dev_dbg(dev, "send_resp_eof = 0x%x\n", buf->send_resp_eof); + dev_dbg(dev, "send_irq_capture_ack = 0x%x\n", + buf->send_irq_capture_ack); + dev_dbg(dev, "send_irq_capture_done = 0x%x\n", + buf->send_irq_capture_done); + dev_dbg(dev, "send_resp_capture_ack = 0x%x\n", + buf->send_resp_capture_ack); + dev_dbg(dev, "send_resp_capture_done = 0x%x\n", + buf->send_resp_capture_done); + + dev_dbg(dev, "-----------------------------------------------------\n"); +} 
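(Illustration, not part of the patch.) The queue indexing used by ipu6_isys_fwcom_cfg_init() and ipu6_fw_isys_complex_cmd() above is spread between this file and ipu6-fw-isys.h below; a worked example for a hypothetical num_streams == 2, using the IPU6_BASE_* layout macros from that header:

	/*
	 * Worked example (hypothetical num_streams == 2):
	 *
	 *   send queues:  0     proxy           IPU6_BASE_PROXY_SEND_QUEUES
	 *                 1     dev             IPU6_BASE_DEV_SEND_QUEUES
	 *                 2..3  per-stream msg  IPU6_BASE_MSG_SEND_QUEUES + handle
	 *   recv queues:  0     proxy           IPU6_BASE_PROXY_RECV_QUEUES
	 *                 1     common msg      IPU6_BASE_MSG_RECV_QUEUES
	 *
	 * so ipu6_fw_isys_complex_cmd() queues a command for stream handle N on
	 * send queue IPU6_BASE_MSG_SEND_QUEUES + N, proxy responses arrive on
	 * recv queue 0, and all stream responses share recv queue 1.
	 */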
diff --git a/drivers/media/pci/intel/ipu6/ipu6-fw-isys.h b/drivers/media/pci/intel/ipu6/ipu6-fw-isys.h new file mode 100644 index 000000000000..7de0a34bb0f4 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-fw-isys.h @@ -0,0 +1,574 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2013 - 2023 Intel Corporation */ + +#ifndef IPU6_FW_ISYS_H +#define IPU6_FW_ISYS_H + +struct ipu6_isys; + +/* Max number of Input/Output Pins */ +#define IPU6_MAX_IPINS 4 + +#define IPU6_MAX_OPINS ((IPU6_MAX_IPINS) + 1) + +#define IPU6_STREAM_ID_MAX 16 +#define IPU6_NONSECURE_STREAM_ID_MAX 12 +#define IPU6_DEV_SEND_QUEUE_SIZE (IPU6_STREAM_ID_MAX) +#define IPU6_NOF_SRAM_BLOCKS_MAX (IPU6_STREAM_ID_MAX) +#define IPU6_N_MAX_MSG_SEND_QUEUES (IPU6_STREAM_ID_MAX) +#define IPU6SE_STREAM_ID_MAX 8 +#define IPU6SE_NONSECURE_STREAM_ID_MAX 4 +#define IPU6SE_DEV_SEND_QUEUE_SIZE (IPU6SE_STREAM_ID_MAX) +#define IPU6SE_NOF_SRAM_BLOCKS_MAX (IPU6SE_STREAM_ID_MAX) +#define IPU6SE_N_MAX_MSG_SEND_QUEUES (IPU6SE_STREAM_ID_MAX) + +/* Single return queue for all streams/commands type */ +#define IPU6_N_MAX_MSG_RECV_QUEUES 1 +/* Single device queue for high priority commands (bypass in-order queue) */ +#define IPU6_N_MAX_DEV_SEND_QUEUES 1 +/* Single dedicated send queue for proxy interface */ +#define IPU6_N_MAX_PROXY_SEND_QUEUES 1 +/* Single dedicated recv queue for proxy interface */ +#define IPU6_N_MAX_PROXY_RECV_QUEUES 1 +/* Send queues layout */ +#define IPU6_BASE_PROXY_SEND_QUEUES 0 +#define IPU6_BASE_DEV_SEND_QUEUES \ + (IPU6_BASE_PROXY_SEND_QUEUES + IPU6_N_MAX_PROXY_SEND_QUEUES) +#define IPU6_BASE_MSG_SEND_QUEUES \ + (IPU6_BASE_DEV_SEND_QUEUES + IPU6_N_MAX_DEV_SEND_QUEUES) +/* Recv queues layout */ +#define IPU6_BASE_PROXY_RECV_QUEUES 0 +#define IPU6_BASE_MSG_RECV_QUEUES \ + (IPU6_BASE_PROXY_RECV_QUEUES + IPU6_N_MAX_PROXY_RECV_QUEUES) +#define IPU6_N_MAX_RECV_QUEUES \ + (IPU6_BASE_MSG_RECV_QUEUES + IPU6_N_MAX_MSG_RECV_QUEUES) + +#define IPU6_N_MAX_SEND_QUEUES \ + (IPU6_BASE_MSG_SEND_QUEUES + IPU6_N_MAX_MSG_SEND_QUEUES) +#define IPU6SE_N_MAX_SEND_QUEUES \ + (IPU6_BASE_MSG_SEND_QUEUES + IPU6SE_N_MAX_MSG_SEND_QUEUES) + +/* Max number of supported input pins routed in ISL */ +#define IPU6_MAX_IPINS_IN_ISL 2 + +/* Max number of planes for frame formats supported by the FW */ +#define IPU6_PIN_PLANES_MAX 4 + +#define IPU6_FW_ISYS_SENSOR_TYPE_START 14 +#define IPU6_FW_ISYS_SENSOR_TYPE_END 19 +#define IPU6SE_FW_ISYS_SENSOR_TYPE_START 6 +#define IPU6SE_FW_ISYS_SENSOR_TYPE_END 11 +/* + * Device close takes some time from last ack message to actual stopping + * of the SP processor. As long as the SP processor runs we can't proceed with + * clean up of resources. 
+ */ +#define IPU6_ISYS_OPEN_RETRY 2000 +#define IPU6_ISYS_CLOSE_RETRY 2000 +#define IPU6_FW_CALL_TIMEOUT_JIFFIES \ + msecs_to_jiffies(IPU6_FW_CALL_TIMEOUT_MS) + +enum ipu6_fw_isys_resp_type { + IPU6_FW_ISYS_RESP_TYPE_STREAM_OPEN_DONE = 0, + IPU6_FW_ISYS_RESP_TYPE_STREAM_START_ACK, + IPU6_FW_ISYS_RESP_TYPE_STREAM_START_AND_CAPTURE_ACK, + IPU6_FW_ISYS_RESP_TYPE_STREAM_CAPTURE_ACK, + IPU6_FW_ISYS_RESP_TYPE_STREAM_STOP_ACK, + IPU6_FW_ISYS_RESP_TYPE_STREAM_FLUSH_ACK, + IPU6_FW_ISYS_RESP_TYPE_STREAM_CLOSE_ACK, + IPU6_FW_ISYS_RESP_TYPE_PIN_DATA_READY, + IPU6_FW_ISYS_RESP_TYPE_PIN_DATA_WATERMARK, + IPU6_FW_ISYS_RESP_TYPE_FRAME_SOF, + IPU6_FW_ISYS_RESP_TYPE_FRAME_EOF, + IPU6_FW_ISYS_RESP_TYPE_STREAM_START_AND_CAPTURE_DONE, + IPU6_FW_ISYS_RESP_TYPE_STREAM_CAPTURE_DONE, + IPU6_FW_ISYS_RESP_TYPE_PIN_DATA_SKIPPED, + IPU6_FW_ISYS_RESP_TYPE_STREAM_CAPTURE_SKIPPED, + IPU6_FW_ISYS_RESP_TYPE_FRAME_SOF_DISCARDED, + IPU6_FW_ISYS_RESP_TYPE_FRAME_EOF_DISCARDED, + IPU6_FW_ISYS_RESP_TYPE_STATS_DATA_READY, + N_IPU6_FW_ISYS_RESP_TYPE +}; + +enum ipu6_fw_isys_send_type { + IPU6_FW_ISYS_SEND_TYPE_STREAM_OPEN = 0, + IPU6_FW_ISYS_SEND_TYPE_STREAM_START, + IPU6_FW_ISYS_SEND_TYPE_STREAM_START_AND_CAPTURE, + IPU6_FW_ISYS_SEND_TYPE_STREAM_CAPTURE, + IPU6_FW_ISYS_SEND_TYPE_STREAM_STOP, + IPU6_FW_ISYS_SEND_TYPE_STREAM_FLUSH, + IPU6_FW_ISYS_SEND_TYPE_STREAM_CLOSE, + N_IPU6_FW_ISYS_SEND_TYPE +}; + +enum ipu6_fw_isys_queue_type { + IPU6_FW_ISYS_QUEUE_TYPE_PROXY = 0, + IPU6_FW_ISYS_QUEUE_TYPE_DEV, + IPU6_FW_ISYS_QUEUE_TYPE_MSG, + N_IPU6_FW_ISYS_QUEUE_TYPE +}; + +enum ipu6_fw_isys_stream_source { + IPU6_FW_ISYS_STREAM_SRC_PORT_0 = 0, + IPU6_FW_ISYS_STREAM_SRC_PORT_1, + IPU6_FW_ISYS_STREAM_SRC_PORT_2, + IPU6_FW_ISYS_STREAM_SRC_PORT_3, + IPU6_FW_ISYS_STREAM_SRC_PORT_4, + IPU6_FW_ISYS_STREAM_SRC_PORT_5, + IPU6_FW_ISYS_STREAM_SRC_PORT_6, + IPU6_FW_ISYS_STREAM_SRC_PORT_7, + IPU6_FW_ISYS_STREAM_SRC_PORT_8, + IPU6_FW_ISYS_STREAM_SRC_PORT_9, + IPU6_FW_ISYS_STREAM_SRC_PORT_10, + IPU6_FW_ISYS_STREAM_SRC_PORT_11, + IPU6_FW_ISYS_STREAM_SRC_PORT_12, + IPU6_FW_ISYS_STREAM_SRC_PORT_13, + IPU6_FW_ISYS_STREAM_SRC_PORT_14, + IPU6_FW_ISYS_STREAM_SRC_PORT_15, + IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_0, + IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_1, + IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_2, + IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_3, + IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_4, + IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_5, + IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_6, + IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_7, + IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_8, + IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_9, + N_IPU6_FW_ISYS_STREAM_SRC +}; + +#define IPU6_FW_ISYS_STREAM_SRC_CSI2_PORT0 IPU6_FW_ISYS_STREAM_SRC_PORT_0 +#define IPU6_FW_ISYS_STREAM_SRC_CSI2_PORT1 IPU6_FW_ISYS_STREAM_SRC_PORT_1 +#define IPU6_FW_ISYS_STREAM_SRC_CSI2_PORT2 IPU6_FW_ISYS_STREAM_SRC_PORT_2 +#define IPU6_FW_ISYS_STREAM_SRC_CSI2_PORT3 IPU6_FW_ISYS_STREAM_SRC_PORT_3 + +#define IPU6_FW_ISYS_STREAM_SRC_CSI2_3PH_PORTA IPU6_FW_ISYS_STREAM_SRC_PORT_4 +#define IPU6_FW_ISYS_STREAM_SRC_CSI2_3PH_PORTB IPU6_FW_ISYS_STREAM_SRC_PORT_5 +#define IPU6_FW_ISYS_STREAM_SRC_CSI2_3PH_CPHY_PORT0 \ + IPU6_FW_ISYS_STREAM_SRC_PORT_6 +#define IPU6_FW_ISYS_STREAM_SRC_CSI2_3PH_CPHY_PORT1 \ + IPU6_FW_ISYS_STREAM_SRC_PORT_7 +#define IPU6_FW_ISYS_STREAM_SRC_CSI2_3PH_CPHY_PORT2 \ + IPU6_FW_ISYS_STREAM_SRC_PORT_8 +#define IPU6_FW_ISYS_STREAM_SRC_CSI2_3PH_CPHY_PORT3 \ + IPU6_FW_ISYS_STREAM_SRC_PORT_9 + +#define IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_PORT0 IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_0 +#define IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_PORT1 IPU6_FW_ISYS_STREAM_SRC_MIPIGEN_1 + +/** + * enum 
ipu6_fw_isys_mipi_vc: MIPI csi2 spec + * supports up to 4 virtual per physical channel + */ +enum ipu6_fw_isys_mipi_vc { + IPU6_FW_ISYS_MIPI_VC_0 = 0, + IPU6_FW_ISYS_MIPI_VC_1, + IPU6_FW_ISYS_MIPI_VC_2, + IPU6_FW_ISYS_MIPI_VC_3, + N_IPU6_FW_ISYS_MIPI_VC +}; + +enum ipu6_fw_isys_frame_format_type { + IPU6_FW_ISYS_FRAME_FORMAT_NV11 = 0, /* 12 bit YUV 411, Y, UV plane */ + IPU6_FW_ISYS_FRAME_FORMAT_NV12, /* 12 bit YUV 420, Y, UV plane */ + IPU6_FW_ISYS_FRAME_FORMAT_NV12_16, /* 16 bit YUV 420, Y, UV plane */ + /* 12 bit YUV 420, Intel proprietary tiled format */ + IPU6_FW_ISYS_FRAME_FORMAT_NV12_TILEY, + + IPU6_FW_ISYS_FRAME_FORMAT_NV16, /* 16 bit YUV 422, Y, UV plane */ + IPU6_FW_ISYS_FRAME_FORMAT_NV21, /* 12 bit YUV 420, Y, VU plane */ + IPU6_FW_ISYS_FRAME_FORMAT_NV61, /* 16 bit YUV 422, Y, VU plane */ + IPU6_FW_ISYS_FRAME_FORMAT_YV12, /* 12 bit YUV 420, Y, V, U plane */ + IPU6_FW_ISYS_FRAME_FORMAT_YV16, /* 16 bit YUV 422, Y, V, U plane */ + IPU6_FW_ISYS_FRAME_FORMAT_YUV420, /* 12 bit YUV 420, Y, U, V plane */ + IPU6_FW_ISYS_FRAME_FORMAT_YUV420_10, /* yuv420, 10 bits per subpixel */ + IPU6_FW_ISYS_FRAME_FORMAT_YUV420_12, /* yuv420, 12 bits per subpixel */ + IPU6_FW_ISYS_FRAME_FORMAT_YUV420_14, /* yuv420, 14 bits per subpixel */ + IPU6_FW_ISYS_FRAME_FORMAT_YUV420_16, /* yuv420, 16 bits per subpixel */ + IPU6_FW_ISYS_FRAME_FORMAT_YUV422, /* 16 bit YUV 422, Y, U, V plane */ + IPU6_FW_ISYS_FRAME_FORMAT_YUV422_16, /* yuv422, 16 bits per subpixel */ + IPU6_FW_ISYS_FRAME_FORMAT_UYVY, /* 16 bit YUV 422, UYVY interleaved */ + IPU6_FW_ISYS_FRAME_FORMAT_YUYV, /* 16 bit YUV 422, YUYV interleaved */ + IPU6_FW_ISYS_FRAME_FORMAT_YUV444, /* 24 bit YUV 444, Y, U, V plane */ + /* Internal format, 2 y lines followed by a uvinterleaved line */ + IPU6_FW_ISYS_FRAME_FORMAT_YUV_LINE, + IPU6_FW_ISYS_FRAME_FORMAT_RAW8, /* RAW8, 1 plane */ + IPU6_FW_ISYS_FRAME_FORMAT_RAW10, /* RAW10, 1 plane */ + IPU6_FW_ISYS_FRAME_FORMAT_RAW12, /* RAW12, 1 plane */ + IPU6_FW_ISYS_FRAME_FORMAT_RAW14, /* RAW14, 1 plane */ + IPU6_FW_ISYS_FRAME_FORMAT_RAW16, /* RAW16, 1 plane */ + /** + * 16 bit RGB, 1 plane. Each 3 sub pixels are packed into one 16 bit + * value, 5 bits for R, 6 bits for G and 5 bits for B. + */ + IPU6_FW_ISYS_FRAME_FORMAT_RGB565, + IPU6_FW_ISYS_FRAME_FORMAT_PLANAR_RGB888, /* 24 bit RGB, 3 planes */ + IPU6_FW_ISYS_FRAME_FORMAT_RGBA888, /* 32 bit RGBA, 1 plane, A=Alpha */ + IPU6_FW_ISYS_FRAME_FORMAT_QPLANE6, /* Internal, for advanced ISP */ + IPU6_FW_ISYS_FRAME_FORMAT_BINARY_8, /* byte stream, used for jpeg. */ + N_IPU6_FW_ISYS_FRAME_FORMAT +}; + +#define IPU6_FW_ISYS_FRAME_FORMAT_RAW (IPU6_FW_ISYS_FRAME_FORMAT_RAW16) + +enum ipu6_fw_isys_pin_type { + /* captured as MIPI packets */ + IPU6_FW_ISYS_PIN_TYPE_MIPI = 0, + /* captured through the SoC path */ + IPU6_FW_ISYS_PIN_TYPE_RAW_SOC = 3, +}; + +/** + * enum ipu6_fw_isys_mipi_store_mode. Describes if long MIPI packets reach + * MIPI SRAM with the long packet header or + * if not, then only option is to capture it with pin type MIPI. 
+ */ +enum ipu6_fw_isys_mipi_store_mode { + IPU6_FW_ISYS_MIPI_STORE_MODE_NORMAL = 0, + IPU6_FW_ISYS_MIPI_STORE_MODE_DISCARD_LONG_HEADER, + N_IPU6_FW_ISYS_MIPI_STORE_MODE +}; + +enum ipu6_fw_isys_capture_mode { + IPU6_FW_ISYS_CAPTURE_MODE_REGULAR = 0, + IPU6_FW_ISYS_CAPTURE_MODE_BURST, + N_IPU6_FW_ISYS_CAPTURE_MODE, +}; + +enum ipu6_fw_isys_sensor_mode { + IPU6_FW_ISYS_SENSOR_MODE_NORMAL = 0, + IPU6_FW_ISYS_SENSOR_MODE_TOBII, + N_IPU6_FW_ISYS_SENSOR_MODE, +}; + +enum ipu6_fw_isys_error { + IPU6_FW_ISYS_ERROR_NONE = 0, /* No details */ + IPU6_FW_ISYS_ERROR_FW_INTERNAL_CONSISTENCY, /* enum */ + IPU6_FW_ISYS_ERROR_HW_CONSISTENCY, /* enum */ + IPU6_FW_ISYS_ERROR_DRIVER_INVALID_COMMAND_SEQUENCE, /* enum */ + IPU6_FW_ISYS_ERROR_DRIVER_INVALID_DEVICE_CONFIGURATION, /* enum */ + IPU6_FW_ISYS_ERROR_DRIVER_INVALID_STREAM_CONFIGURATION, /* enum */ + IPU6_FW_ISYS_ERROR_DRIVER_INVALID_FRAME_CONFIGURATION, /* enum */ + IPU6_FW_ISYS_ERROR_INSUFFICIENT_RESOURCES, /* enum */ + IPU6_FW_ISYS_ERROR_HW_REPORTED_STR2MMIO, /* HW code */ + IPU6_FW_ISYS_ERROR_HW_REPORTED_SIG2CIO, /* HW code */ + IPU6_FW_ISYS_ERROR_SENSOR_FW_SYNC, /* enum */ + IPU6_FW_ISYS_ERROR_STREAM_IN_SUSPENSION, /* FW code */ + IPU6_FW_ISYS_ERROR_RESPONSE_QUEUE_FULL, /* FW code */ + N_IPU6_FW_ISYS_ERROR +}; + +enum ipu6_fw_proxy_error { + IPU6_FW_PROXY_ERROR_NONE = 0, + IPU6_FW_PROXY_ERROR_INVALID_WRITE_REGION, + IPU6_FW_PROXY_ERROR_INVALID_WRITE_OFFSET, + N_IPU6_FW_PROXY_ERROR +}; + +/* firmware ABI structure below are aligned in firmware, no need pack */ +struct ipu6_fw_isys_buffer_partition_abi { + u32 num_gda_pages[IPU6_STREAM_ID_MAX]; +}; + +struct ipu6_fw_isys_fw_config { + struct ipu6_fw_isys_buffer_partition_abi buffer_partition; + u32 num_send_queues[N_IPU6_FW_ISYS_QUEUE_TYPE]; + u32 num_recv_queues[N_IPU6_FW_ISYS_QUEUE_TYPE]; +}; + +/** + * struct ipu6_fw_isys_resolution_abi: Generic resolution structure. 
+ */ +struct ipu6_fw_isys_resolution_abi { + u32 width; + u32 height; +}; + +/** + * struct ipu6_fw_isys_output_pin_payload_abi + * @out_buf_id: Points to output pin buffer - buffer identifier + * @addr: Points to output pin buffer - CSS Virtual Address + * @compress: Request frame compression (1), or not (0) + */ +struct ipu6_fw_isys_output_pin_payload_abi { + u64 out_buf_id; + u32 addr; + u32 compress; +}; + +/** + * struct ipu6_fw_isys_output_pin_info_abi + * @output_res: output pin resolution + * @stride: output stride in Bytes (not valid for statistics) + * @watermark_in_lines: pin watermark level in lines + * @payload_buf_size: minimum size in Bytes of all buffers that will be + * supplied for capture on this pin + * @send_irq: assert if pin event should trigger irq + * @pt: pin type -real format "enum ipu6_fw_isys_pin_type" + * @ft: frame format type -real format "enum ipu6_fw_isys_frame_format_type" + * @input_pin_id: related input pin id + * @reserve_compression: reserve compression resources for pin + */ +struct ipu6_fw_isys_output_pin_info_abi { + struct ipu6_fw_isys_resolution_abi output_res; + u32 stride; + u32 watermark_in_lines; + u32 payload_buf_size; + u32 ts_offsets[IPU6_PIN_PLANES_MAX]; + u32 s2m_pixel_soc_pixel_remapping; + u32 csi_be_soc_pixel_remapping; + u8 send_irq; + u8 input_pin_id; + u8 pt; + u8 ft; + u8 reserved; + u8 reserve_compression; + u8 snoopable; + u8 error_handling_enable; + u32 sensor_type; +}; + +/** + * struct ipu6_fw_isys_input_pin_info_abi + * @input_res: input resolution + * @dt: mipi data type ((enum ipu6_fw_isys_mipi_data_type) + * @mipi_store_mode: defines if legacy long packet header will be stored or + * discarded if discarded, output pin type for this + * input pin can only be MIPI + * (enum ipu6_fw_isys_mipi_store_mode) + * @bits_per_pix: native bits per pixel + * @mapped_dt: actual data type from sensor + * @mipi_decompression: defines which compression will be in mipi backend + * @crop_first_and_last_lines Control whether to crop the + * first and last line of the + * input image. Crop done by HW + * device. 
+ * @capture_mode: mode of capture, regular or burst, default value is regular + */ +struct ipu6_fw_isys_input_pin_info_abi { + struct ipu6_fw_isys_resolution_abi input_res; + u8 dt; + u8 mipi_store_mode; + u8 bits_per_pix; + u8 mapped_dt; + u8 mipi_decompression; + u8 crop_first_and_last_lines; + u8 capture_mode; + u8 reserved; +}; + +/** + * struct ipu6_fw_isys_cropping_abi - cropping coordinates + */ +struct ipu6_fw_isys_cropping_abi { + s32 top_offset; + s32 left_offset; + s32 bottom_offset; + s32 right_offset; +}; + +/** + * struct ipu6_fw_isys_stream_cfg_data_abi + * ISYS stream configuration data structure + * @crop: defines cropping resolution for the + * maximum number of input pins which can be cropped, + * it is directly mapped to the HW devices + * @input_pins: input pin descriptors + * @output_pins: output pin descriptors + * @compfmt: de-compression setting for User Defined Data + * @nof_input_pins: number of input pins + * @nof_output_pins: number of output pins + * @send_irq_sof_discarded: send irq on discarded frame sof response + * - if '1' it will override the send_resp_sof_discarded + * and send the response + * - if '0' the send_resp_sof_discarded will determine + * whether to send the response + * @send_irq_eof_discarded: send irq on discarded frame eof response + * - if '1' it will override the send_resp_eof_discarded + * and send the response + * - if '0' the send_resp_eof_discarded will determine + * whether to send the response + * @send_resp_sof_discarded: send response for discarded frame sof detected, + * used only when send_irq_sof_discarded is '0' + * @send_resp_eof_discarded: send response for discarded frame eof detected, + * used only when send_irq_eof_discarded is '0' + * @src: Stream source index e.g. MIPI_generator_0, CSI2-rx_1 + * @vc: MIPI Virtual Channel (up to 4 virtual per physical channel) + * @isl_use: indicates whether stream requires ISL and how + * @sensor_type: type of connected sensor, tobii or others, default is 0 + */ +struct ipu6_fw_isys_stream_cfg_data_abi { + struct ipu6_fw_isys_cropping_abi crop; + struct ipu6_fw_isys_input_pin_info_abi input_pins[IPU6_MAX_IPINS]; + struct ipu6_fw_isys_output_pin_info_abi output_pins[IPU6_MAX_OPINS]; + u32 compfmt; + u8 nof_input_pins; + u8 nof_output_pins; + u8 send_irq_sof_discarded; + u8 send_irq_eof_discarded; + u8 send_resp_sof_discarded; + u8 send_resp_eof_discarded; + u8 src; + u8 vc; + u8 isl_use; + u8 sensor_type; + u8 reserved; + u8 reserved2; +}; + +/** + * struct ipu6_fw_isys_frame_buff_set - frame buffer set + * @output_pins: output pin addresses + * @send_irq_sof: send irq on frame sof response + * - if '1' it will override the send_resp_sof and + * send the response + * - if '0' the send_resp_sof will determine whether to + * send the response + * @send_irq_eof: send irq on frame eof response + * - if '1' it will override the send_resp_eof and + * send the response + * - if '0' the send_resp_eof will determine whether to + * send the response + * @send_resp_sof: send response for frame sof detected, + * used only when send_irq_sof is '0' + * @send_resp_eof: send response for frame eof detected, + * used only when send_irq_eof is '0' + * @send_resp_capture_ack: send response for capture ack event + * @send_resp_capture_done: send response for capture done event + */ +struct ipu6_fw_isys_frame_buff_set_abi { + struct ipu6_fw_isys_output_pin_payload_abi output_pins[IPU6_MAX_OPINS]; + u8 send_irq_sof; + u8 send_irq_eof; + u8 send_irq_capture_ack; + u8 send_irq_capture_done; + u8 
send_resp_sof; + u8 send_resp_eof; + u8 send_resp_capture_ack; + u8 send_resp_capture_done; + u8 reserved[8]; +}; + +/** + * struct ipu6_fw_isys_error_info_abi + * @error: error code if something went wrong + * @error_details: depending on error code, it may contain additional error info + */ +struct ipu6_fw_isys_error_info_abi { + u32 error; + u32 error_details; +}; + +/** + * struct ipu6_fw_isys_resp_info_comm + * @pin: this var is only valid for pin event related responses, + * contains pin addresses + * @error_info: error information from the FW + * @timestamp: Time information for event if available + * @stream_handle: stream id the response corresponds to + * @type: response type (enum ipu6_fw_isys_resp_type) + * @pin_id: pin id that the pin payload corresponds to + */ +struct ipu6_fw_isys_resp_info_abi { + u64 buf_id; + struct ipu6_fw_isys_output_pin_payload_abi pin; + struct ipu6_fw_isys_error_info_abi error_info; + u32 timestamp[2]; + u8 stream_handle; + u8 type; + u8 pin_id; + u8 reserved; + u32 reserved2; +}; + +/** + * struct ipu6_fw_isys_proxy_error_info_comm + * @proxy_error: error code if something went wrong + * @proxy_error_details: depending on error code, it may contain additional + * error info + */ +struct ipu6_fw_isys_proxy_error_info_abi { + u32 error; + u32 error_details; +}; + +struct ipu6_fw_isys_proxy_resp_info_abi { + u32 request_id; + struct ipu6_fw_isys_proxy_error_info_abi error_info; +}; + +/** + * struct ipu6_fw_proxy_write_queue_token + * @request_id: update id for the specific proxy write request + * @region_index: Region id for the proxy write request + * @offset: Offset of the write request according to the base address + * of the region + * @value: Value that is requested to be written with the proxy write request + */ +struct ipu6_fw_proxy_write_queue_token { + u32 request_id; + u32 region_index; + u32 offset; + u32 value; +}; + +/** + * struct ipu6_fw_resp_queue_token + */ +struct ipu6_fw_resp_queue_token { + struct ipu6_fw_isys_resp_info_abi resp_info; +}; + +/** + * struct ipu6_fw_send_queue_token + */ +struct ipu6_fw_send_queue_token { + u64 buf_handle; + u32 payload; + u16 send_type; + u16 stream_id; +}; + +/** + * struct ipu6_fw_proxy_resp_queue_token + */ +struct ipu6_fw_proxy_resp_queue_token { + struct ipu6_fw_isys_proxy_resp_info_abi proxy_resp_info; +}; + +/** + * struct ipu6_fw_proxy_send_queue_token + */ +struct ipu6_fw_proxy_send_queue_token { + u32 request_id; + u32 region_index; + u32 offset; + u32 value; +}; + +u8 ipu6_fw_isys_get_bpp_by_dt(u8 dt); +void ipu6_fw_isys_dump_stream_cfg(struct device *dev, + struct ipu6_fw_isys_stream_cfg_data_abi + *stream_cfg); +void +ipu6_fw_isys_dump_frame_buff_set(struct device *dev, + struct ipu6_fw_isys_frame_buff_set_abi *buf, + unsigned int outputs); +int ipu6_fw_isys_init(struct ipu6_isys *isys, unsigned int num_streams); +int ipu6_fw_isys_close(struct ipu6_isys *isys); +int ipu6_fw_isys_simple_cmd(struct ipu6_isys *isys, + const unsigned int stream_handle, u16 send_type); +int ipu6_fw_isys_complex_cmd(struct ipu6_isys *isys, + const unsigned int stream_handle, + void *cpu_mapped_buf, dma_addr_t dma_mapped_buf, + size_t size, u16 send_type); +int ipu6_fw_isys_send_proxy_token(struct ipu6_isys *isys, + unsigned int req_id, + unsigned int index, + unsigned int offset, u32 value); +void ipu6_fw_isys_cleanup(struct ipu6_isys *isys); +struct ipu6_fw_isys_resp_info_abi * +ipu6_fw_isys_get_resp(void *context, unsigned int queue); +void ipu6_fw_isys_put_resp(void *context, unsigned int queue); +#endif 
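(Illustrative sketch, not part of the patch.) To show how the capture side of the ABI above is meant to be used, the hypothetical helper below fills one output pin of a frame buffer set and hands it to the firmware via ipu6_fw_isys_complex_cmd(). It assumes ipu6-isys.h and ipu6-fw-isys.h are included, the stream has already been opened and started, "set" sits in DMA-capable memory at "set_dma", and "css_addr" is the CSS virtual address of the destination buffer:

	static int example_queue_capture(struct ipu6_isys *isys, u8 stream_handle,
					 struct ipu6_fw_isys_frame_buff_set_abi *set,
					 dma_addr_t set_dma, u32 css_addr)
	{
		/* describe where the firmware should write the captured frame */
		set->output_pins[0].out_buf_id = 1;
		set->output_pins[0].addr = css_addr;
		set->output_pins[0].compress = 0;

		/* ask for an EOF interrupt and a "capture done" response token */
		set->send_irq_eof = 1;
		set->send_resp_capture_done = 1;

		/*
		 * ipu6_fw_isys_complex_cmd() flushes the CPU cache for "set"
		 * and pushes a token onto this stream's message send queue.
		 */
		return ipu6_fw_isys_complex_cmd(isys, stream_handle, set, set_dma,
						sizeof(*set),
						IPU6_FW_ISYS_SEND_TYPE_STREAM_CAPTURE);
	}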
From patchwork Thu Apr 13 10:04:23 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bingbu Cao
X-Patchwork-Id: 673045
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8D5D7C77B61 for ; Thu, 13 Apr 2023 09:55:27 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230094AbjDMJz0 (ORCPT ); Thu, 13 Apr 2023 05:55:26 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45364 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230092AbjDMJzW (ORCPT ); Thu, 13 Apr 2023 05:55:22 -0400
Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5A7CE5FE4 for ; Thu, 13 Apr 2023 02:55:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1681379708; x=1712915708; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=pshsbQp8n4e2kgfiifJrTN3lVRCL+hvE1bCTJt+yl7M=; b=gE5qiWnkRrQzV8qVRMrmf4uE3xT4WopfYLkaMnR85tWrHc+lzAhCFwZS x7pZogAkFWsQN3eQpS/k8bbbGJQ6TQLaVmtIfgb31UkzMKZQ4R9Xh36uo 98BjFwkQKOCfI/UBHO6Lxi0J7xY+wZMrsQvY2oPulAAyGsLp1nO8rrWPb JEhfh02MHuIb+BJHz7OmUkBbmXh/pLhYX8Xyyi3XQoeyqtBigC7ajPmVd 8DGuKrrJ/dgK4ELftW1HmvDIxl3L/e/Mdr7AadJO58edyHBqVld42A4ic gjqjPx+7T8PqCt5ID7KqYlAY0G+fa9QeSgRgcisoPCombaTavLr5jLZtA Q==;
X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="371993017"
X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="371993017"
Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2023 02:55:00 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="639600053"
X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="639600053"
Received: from icg-kernel3.bj.intel.com ([172.16.126.100]) by orsmga003.jf.intel.com with ESMTP; 13 Apr 2023 02:54:56 -0700
From: bingbu.cao@intel.com
To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com
Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com
Subject: [RFC PATCH 08/14] media: intel/ipu6: add IPU6 CSI2 receiver v4l2 sub-device
Date: Thu, 13 Apr 2023 18:04:23 +0800
Message-Id: <20230413100429.919622-9-bingbu.cao@intel.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230413100429.919622-1-bingbu.cao@intel.com>
References: <20230413100429.919622-1-bingbu.cao@intel.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: linux-media@vger.kernel.org

From: Bingbu Cao

The input system CSI2 receiver is exposed as a v4l2 sub-device. Each CSI2 sub-device represents a single CSI2 hardware port: an external sub-device such as a camera sensor is linked to the ISYS CSI2 sub-device's sink pad, and the CSI2 source pad is linked to the sink pad of the video capture device.
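For a single port the resulting media graph looks, schematically, like this:

    camera sensor sub-device (source pad)
                |
                v
    ISYS CSI2 sub-device (sink pad -> source pad)
                |
                v
    ISYS video capture device (sink pad)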
Signed-off-by: Bingbu Cao --- drivers/media/pci/intel/ipu6/ipu6-isys-csi2.c | 579 ++++++++++++++++++ drivers/media/pci/intel/ipu6/ipu6-isys-csi2.h | 75 +++ .../media/pci/intel/ipu6/ipu6-isys-subdev.c | 309 ++++++++++ .../media/pci/intel/ipu6/ipu6-isys-subdev.h | 70 +++ .../intel/ipu6/ipu6-platform-isys-csi2-reg.h | 187 ++++++ 5 files changed, 1220 insertions(+) create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys-csi2.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys-csi2.h create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys-subdev.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys-subdev.h create mode 100644 drivers/media/pci/intel/ipu6/ipu6-platform-isys-csi2-reg.h diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.c b/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.c new file mode 100644 index 000000000000..0cc41af0b552 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.c @@ -0,0 +1,579 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2013 - 2023 Intel Corporation + +#include +#include +#include +#include + +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-buttress.h" +#include "ipu6-isys.h" +#include "ipu6-isys-csi2.h" +#include "ipu6-isys-phy.h" +#include "ipu6-isys-subdev.h" +#include "ipu6-isys-video.h" +#include "ipu6-platform-buttress-regs.h" +#include "ipu6-platform-isys-csi2-reg.h" +#include "ipu6-platform-regs.h" + +static const u32 csi2_supported_codes[] = { + MEDIA_BUS_FMT_RGB565_1X16, + MEDIA_BUS_FMT_RGB888_1X24, + MEDIA_BUS_FMT_UYVY8_1X16, + MEDIA_BUS_FMT_YUYV8_1X16, + MEDIA_BUS_FMT_SBGGR10_1X10, + MEDIA_BUS_FMT_SGBRG10_1X10, + MEDIA_BUS_FMT_SGRBG10_1X10, + MEDIA_BUS_FMT_SRGGB10_1X10, + MEDIA_BUS_FMT_SBGGR12_1X12, + MEDIA_BUS_FMT_SGBRG12_1X12, + MEDIA_BUS_FMT_SGRBG12_1X12, + MEDIA_BUS_FMT_SRGGB12_1X12, + MEDIA_BUS_FMT_SBGGR8_1X8, + MEDIA_BUS_FMT_SGBRG8_1X8, + MEDIA_BUS_FMT_SGRBG8_1X8, + MEDIA_BUS_FMT_SRGGB8_1X8, + 0, +}; + +/* + * Strings corresponding to CSI-2 receiver errors are here. + * Corresponding macros are defined in the header file. 
+ */ +static struct ipu6_csi2_error dphy_rx_errors[] = { + {"Single packet header error corrected", true}, + {"Multiple packet header errors detected", true}, + {"Payload checksum (CRC) error", true}, + {"Transfer FIFO overflow", false}, + {"Reserved short packet data type detected", true}, + {"Reserved long packet data type detected", true}, + {"Incomplete long packet detected", false}, + {"Frame sync error", false}, + {"Line sync error", false}, + {"DPHY recoverable synchronization error", true}, + {"DPHY fatal error", false}, + {"DPHY elastic FIFO overflow", false}, + {"Inter-frame short packet discarded", true}, + {"Inter-frame long packet discarded", true}, + {"MIPI pktgen overflow", false}, + {"MIPI pktgen data loss", false}, + {"FIFO overflow", false}, + {"Lane deskew", false}, + {"SOT sync error", false}, + {"HSIDLE detected", false} +}; + +int ipu6_isys_csi2_get_link_freq(struct ipu6_isys_csi2 *csi2, s64 *link_freq) +{ + struct media_pad *src_pad; + struct v4l2_subdev *ext_sd; + unsigned int bpp, lanes; + struct device *dev; + s64 ret; + + if (!csi2 || !link_freq) + return -EINVAL; + + dev = &csi2->isys->adev->dev; + src_pad = media_entity_remote_source_pad_unique(&csi2->asd.sd.entity); + if (IS_ERR_OR_NULL(src_pad)) { + dev_err(dev, "can't get source pad of %s\n", csi2->asd.sd.name); + return -ENOLINK; + } + + ext_sd = media_entity_to_v4l2_subdev(src_pad->entity); + if (WARN(!ext_sd, "Failed to get subdev for %s\n", csi2->asd.sd.name)) + return -ENODEV; + + bpp = ipu6_isys_mbus_code_to_bpp(csi2->asd.ffmt->code); + lanes = csi2->nlanes; + + ret = v4l2_get_link_freq(ext_sd->ctrl_handler, bpp, lanes * 2); + if (ret < 0) { + dev_err(dev, "can't get link frequency (%lld)\n", ret); + return ret; + } + + dev_dbg(dev, "link freq of %s is %lld\n", ext_sd->name, ret); + *link_freq = ret; + + return 0; +} + +static int csi2_subscribe_event(struct v4l2_subdev *sd, struct v4l2_fh *fh, + struct v4l2_event_subscription *sub) +{ + struct ipu6_isys_csi2 *csi2 = to_ipu6_isys_csi2(sd); + + dev_dbg(&csi2->isys->adev->dev, "csi2 subscribe event(type %u id %u)\n", + sub->type, sub->id); + + switch (sub->type) { + case V4L2_EVENT_FRAME_SYNC: + return v4l2_event_subscribe(fh, sub, 10, NULL); + case V4L2_EVENT_CTRL: + return v4l2_ctrl_subscribe_event(fh, sub); + default: + return -EINVAL; + } +} + +static const struct v4l2_subdev_core_ops csi2_sd_core_ops = { + .subscribe_event = csi2_subscribe_event, + .unsubscribe_event = v4l2_event_subdev_unsubscribe, +}; + +/* + * The input system CSI2+ receiver has several + * parameters affecting the receiver timings. These depend + * on the MIPI bus frequency F in Hz (sensor transmitter rate) + * as follows: + * register value = (A/1e9 + B * UI) / COUNT_ACC + * where + * UI = 1 / (2 * F) in seconds + * COUNT_ACC = counter accuracy in seconds + * COUNT_ACC = 0.125 ns = 1 / 8 ns, ACCINV = 8. + * + * A and B are coefficients from the table below, + * depending whether the register minimum or maximum value is + * calculated. 
+ * Minimum Maximum + * Clock lane A B A B + * reg_rx_csi_dly_cnt_termen_clane 0 0 38 0 + * reg_rx_csi_dly_cnt_settle_clane 95 -8 300 -16 + * Data lanes + * reg_rx_csi_dly_cnt_termen_dlane0 0 0 35 4 + * reg_rx_csi_dly_cnt_settle_dlane0 85 -2 145 -6 + * reg_rx_csi_dly_cnt_termen_dlane1 0 0 35 4 + * reg_rx_csi_dly_cnt_settle_dlane1 85 -2 145 -6 + * reg_rx_csi_dly_cnt_termen_dlane2 0 0 35 4 + * reg_rx_csi_dly_cnt_settle_dlane2 85 -2 145 -6 + * reg_rx_csi_dly_cnt_termen_dlane3 0 0 35 4 + * reg_rx_csi_dly_cnt_settle_dlane3 85 -2 145 -6 + * + * We use the minimum values of both A and B. + */ + +#define DIV_SHIFT 8 +#define CSI2_ACCINV 8 + +static u32 calc_timing(s32 a, s32 b, s64 link_freq, u32 accinv) +{ + return accinv * a + (accinv * b * (500000000 >> DIV_SHIFT) + / (int32_t)(link_freq >> DIV_SHIFT)); +} + +static int +ipu6_isys_csi2_calc_timing(struct ipu6_isys_csi2 *csi2, + struct ipu6_isys_csi2_timing *timing, u32 accinv) +{ + s64 link_freq; + int ret; + + ret = ipu6_isys_csi2_get_link_freq(csi2, &link_freq); + if (ret) + return ret; + + timing->ctermen = calc_timing(CSI2_CSI_RX_DLY_CNT_TERMEN_CLANE_A, + CSI2_CSI_RX_DLY_CNT_TERMEN_CLANE_B, + link_freq, accinv); + timing->csettle = calc_timing(CSI2_CSI_RX_DLY_CNT_SETTLE_CLANE_A, + CSI2_CSI_RX_DLY_CNT_SETTLE_CLANE_B, + link_freq, accinv); + dev_dbg(&csi2->isys->adev->dev, "ctermen %u\n", timing->ctermen); + dev_dbg(&csi2->isys->adev->dev, "csettle %u\n", timing->csettle); + + timing->dtermen = calc_timing(CSI2_CSI_RX_DLY_CNT_TERMEN_DLANE_A, + CSI2_CSI_RX_DLY_CNT_TERMEN_DLANE_B, + link_freq, accinv); + timing->dsettle = calc_timing(CSI2_CSI_RX_DLY_CNT_SETTLE_DLANE_A, + CSI2_CSI_RX_DLY_CNT_SETTLE_DLANE_B, + link_freq, accinv); + dev_dbg(&csi2->isys->adev->dev, "dtermen %u\n", timing->dtermen); + dev_dbg(&csi2->isys->adev->dev, "dsettle %u\n", timing->dsettle); + + return 0; +} + +void ipu6_isys_register_errors(struct ipu6_isys_csi2 *csi2) +{ + u32 irq = readl(csi2->base + CSI_PORT_REG_BASE_IRQ_CSI + + CSI_PORT_REG_BASE_IRQ_STATUS_OFFSET); + struct ipu6_isys *isys = csi2->isys; + u32 mask; + + mask = isys->pdata->ipdata->csi2.irq_mask; + writel(irq & mask, csi2->base + CSI_PORT_REG_BASE_IRQ_CSI + + CSI_PORT_REG_BASE_IRQ_CLEAR_OFFSET); + csi2->receiver_errors |= irq & mask; +} + +void ipu6_isys_csi2_error(struct ipu6_isys_csi2 *csi2) +{ + struct ipu6_csi2_error *errors; + u32 status; + u32 i; + + /* register errors once more in case of interrupts are disabled */ + ipu6_isys_register_errors(csi2); + status = csi2->receiver_errors; + csi2->receiver_errors = 0; + errors = dphy_rx_errors; + + for (i = 0; i < CSI_RX_NUM_ERRORS_IN_IRQ; i++) { + if (status & BIT(i)) + dev_err_ratelimited(&csi2->isys->adev->dev, + "csi2-%i error: %s\n", + csi2->port, + errors[i].error_string); + } +} + +static int ipu6_isys_csi2_set_stream(struct v4l2_subdev *sd, + const struct ipu6_isys_csi2_timing *timing, + unsigned int nlanes, int enable) +{ + struct ipu6_isys_csi2 *csi2 = to_ipu6_isys_csi2(sd); + struct ipu6_isys *isys = csi2->isys; + struct ipu6_isys_stream *stream; + struct ipu6_isys_csi2_config cfg; + unsigned int port, nports; + int ret = 0; + u32 mask = 0; + u32 i; + + stream = ipu6_isys_query_stream_by_source(isys, csi2->asd.source); + if (!stream) { + dev_err(&isys->adev->dev, "no available stream\n"); + return -EINVAL; + } + + ipu6_isys_put_stream(stream); + + port = csi2->port; + dev_dbg(&isys->adev->dev, "for port %u with %u lanes\n", port, nlanes); + + cfg.port = port; + cfg.nlanes = nlanes; + + mask = isys->pdata->ipdata->csi2.irq_mask; + nports = 
isys->pdata->ipdata->csi2.nports; + + if (!enable) { + writel(0, csi2->base + CSI_REG_CSI_FE_ENABLE); + writel(0, csi2->base + CSI_REG_PPI2CSI_ENABLE); + + writel(0, + csi2->base + CSI_PORT_REG_BASE_IRQ_CSI + + CSI_PORT_REG_BASE_IRQ_ENABLE_OFFSET); + writel(mask, + csi2->base + CSI_PORT_REG_BASE_IRQ_CSI + + CSI_PORT_REG_BASE_IRQ_CLEAR_OFFSET); + writel(0, + csi2->base + CSI_PORT_REG_BASE_IRQ_CSI_SYNC + + CSI_PORT_REG_BASE_IRQ_ENABLE_OFFSET); + writel(0xffffffff, + csi2->base + CSI_PORT_REG_BASE_IRQ_CSI_SYNC + + CSI_PORT_REG_BASE_IRQ_CLEAR_OFFSET); + + isys->phy_set_power(isys, &cfg, timing, false); + + writel(0, isys->pdata->base + CSI_REG_HUB_FW_ACCESS_PORT + (isys->pdata->ipdata->csi2.fw_access_port_ofs, port)); + writel(0, isys->pdata->base + + CSI_REG_HUB_DRV_ACCESS_PORT(port)); + + return ret; + } + + /* reset port reset */ + writel(0x1, csi2->base + CSI_REG_PORT_GPREG_SRST); + usleep_range(100, 200); + writel(0x0, csi2->base + CSI_REG_PORT_GPREG_SRST); + + /* enable port clock */ + for (i = 0; i < nports; i++) { + writel(1, isys->pdata->base + CSI_REG_HUB_DRV_ACCESS_PORT(i)); + writel(1, isys->pdata->base + CSI_REG_HUB_FW_ACCESS_PORT + (isys->pdata->ipdata->csi2.fw_access_port_ofs, i)); + } + + /* enable all error related irq */ + writel(mask, + csi2->base + CSI_PORT_REG_BASE_IRQ_CSI + + CSI_PORT_REG_BASE_IRQ_STATUS_OFFSET); + writel(mask, + csi2->base + CSI_PORT_REG_BASE_IRQ_CSI + + CSI_PORT_REG_BASE_IRQ_MASK_OFFSET); + writel(mask, + csi2->base + CSI_PORT_REG_BASE_IRQ_CSI + + CSI_PORT_REG_BASE_IRQ_CLEAR_OFFSET); + writel(mask, + csi2->base + CSI_PORT_REG_BASE_IRQ_CSI + + CSI_PORT_REG_BASE_IRQ_LEVEL_NOT_PULSE_OFFSET); + writel(mask, + csi2->base + CSI_PORT_REG_BASE_IRQ_CSI + + CSI_PORT_REG_BASE_IRQ_ENABLE_OFFSET); + + /* + * Using event from firmware instead of irq to handle CSI2 sync event + * which can reduce system wakeups. If CSI2 sync irq enabled, we need + * disable the firmware CSI2 sync event to avoid duplicate handling. 
+ */ + writel(0xffffffff, csi2->base + CSI_PORT_REG_BASE_IRQ_CSI_SYNC + + CSI_PORT_REG_BASE_IRQ_STATUS_OFFSET); + writel(0, csi2->base + CSI_PORT_REG_BASE_IRQ_CSI_SYNC + + CSI_PORT_REG_BASE_IRQ_MASK_OFFSET); + writel(0xffffffff, csi2->base + CSI_PORT_REG_BASE_IRQ_CSI_SYNC + + CSI_PORT_REG_BASE_IRQ_CLEAR_OFFSET); + writel(0, csi2->base + CSI_PORT_REG_BASE_IRQ_CSI_SYNC + + CSI_PORT_REG_BASE_IRQ_LEVEL_NOT_PULSE_OFFSET); + writel(0xffffffff, csi2->base + CSI_PORT_REG_BASE_IRQ_CSI_SYNC + + CSI_PORT_REG_BASE_IRQ_ENABLE_OFFSET); + + /* configure to enable FE and PPI2CSI */ + writel(0, csi2->base + CSI_REG_CSI_FE_MODE); + writel(CSI_SENSOR_INPUT, csi2->base + CSI_REG_CSI_FE_MUX_CTRL); + writel(CSI_CNTR_SENSOR_LINE_ID | CSI_CNTR_SENSOR_FRAME_ID, + csi2->base + CSI_REG_CSI_FE_SYNC_CNTR_SEL); + writel(FIELD_PREP(PPI_INTF_CONFIG_NOF_ENABLED_DLANES_MASK, nlanes - 1), + csi2->base + CSI_REG_PPI2CSI_CONFIG_PPI_INTF); + + writel(1, csi2->base + CSI_REG_PPI2CSI_ENABLE); + writel(1, csi2->base + CSI_REG_CSI_FE_ENABLE); + + ret = isys->phy_set_power(isys, &cfg, timing, true); + if (ret) { + dev_err(&isys->adev->dev, "csi-%d phy power up failed %d\n", + port, ret); + return ret; + } + + return 0; +} + +static int set_stream(struct v4l2_subdev *sd, int enable) +{ + struct ipu6_isys_csi2 *csi2 = to_ipu6_isys_csi2(sd); + struct ipu6_isys_csi2_timing timing = {0}; + struct ipu6_isys_stream *stream; + unsigned int nlanes; + int ret; + + dev_dbg(&csi2->isys->adev->dev, "csi2 s_stream %d\n", enable); + + stream = ipu6_isys_query_stream_by_source(csi2->isys, csi2->asd.source); + if (!stream) { + dev_err(&csi2->isys->adev->dev, "no available stream\n"); + return -ENODEV; + } + + if (!stream->source_entity) { + dev_err(&csi2->isys->adev->dev, "source_entity is NULL\n"); + return -ENODEV; + } + + ipu6_isys_put_stream(stream); + + if (!enable) { + ipu6_isys_csi2_set_stream(sd, &timing, 0, enable); + return 0; + } + + nlanes = csi2->nlanes; + dev_dbg(&csi2->isys->adev->dev, "lane nr %d.\n", nlanes); + + ret = ipu6_isys_csi2_calc_timing(csi2, &timing, CSI2_ACCINV); + if (ret) + return ret; + + return ipu6_isys_csi2_set_stream(sd, &timing, nlanes, enable); +} + +static int ipu6_isys_csi2_set_sel(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state, + struct v4l2_subdev_selection *sel) +{ + struct ipu6_isys_subdev *asd = to_ipu6_isys_subdev(sd); + struct v4l2_mbus_framefmt *sink_ffmt = + ipu6_isys_get_ffmt(sd, state, CSI2_PAD_SINK, sel->which); + struct v4l2_mbus_framefmt *src_ffmt = + ipu6_isys_get_ffmt(sd, state, sel->pad, sel->which); + + if (sel->pad == CSI2_PAD_SINK || sel->target != V4L2_SEL_TGT_CROP) + return -EINVAL; + + mutex_lock(&asd->mutex); + /* Only vertical cropping is supported */ + sel->r.left = 0; + sel->r.width = sink_ffmt->width; + /* Non-bayer formats can't be single line cropped */ + if (!ipu6_isys_is_bayer_format(sink_ffmt->code)) + sel->r.top &= ~1; + sel->r.height = clamp(sel->r.height & ~1, IPU6_ISYS_MIN_HEIGHT, + sink_ffmt->height - sel->r.top); + *ipu6_isys_get_crop(sd, state, sel->pad, sel->which) = sel->r; + + /* update source pad format */ + src_ffmt->width = sel->r.width; + src_ffmt->height = sel->r.height; + if (ipu6_isys_is_bayer_format(sink_ffmt->code)) + src_ffmt->code = ipu6_isys_convert_bayer_order(sink_ffmt->code, + sel->r.left, + sel->r.top); + dev_dbg(&asd->isys->adev->dev, + "set crop for %s, sel: %d,%d,%d,%d code: 0x%x\n", sd->name, + sel->r.left, sel->r.top, sel->r.width, sel->r.height, + src_ffmt->code); + mutex_unlock(&asd->mutex); + + return 0; +} + +static int 
ipu6_isys_csi2_get_sel(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state, + struct v4l2_subdev_selection *sel) +{ + struct ipu6_isys_subdev *asd = to_ipu6_isys_subdev(sd); + struct v4l2_mbus_framefmt *sink_ffmt = + ipu6_isys_get_ffmt(sd, state, CSI2_PAD_SINK, sel->which); + struct v4l2_rect *crop = + ipu6_isys_get_crop(sd, state, sel->pad, sel->which); + int ret = 0; + + if (sd->entity.pads[sel->pad].flags & MEDIA_PAD_FL_SINK) + return -EINVAL; + + mutex_lock(&asd->mutex); + switch (sel->target) { + case V4L2_SEL_TGT_CROP_DEFAULT: + case V4L2_SEL_TGT_CROP_BOUNDS: + sel->r.left = 0; + sel->r.top = 0; + sel->r.width = sink_ffmt->width; + sel->r.height = sink_ffmt->height; + break; + case V4L2_SEL_TGT_CROP: + sel->r = *crop; + break; + default: + ret = -EINVAL; + } + mutex_unlock(&asd->mutex); + + return ret; +} + +static const struct v4l2_subdev_video_ops csi2_sd_video_ops = { + .s_stream = set_stream, +}; + +static const struct v4l2_subdev_pad_ops csi2_sd_pad_ops = { + .init_cfg = ipu6_isys_subdev_init_cfg, + .link_validate = v4l2_subdev_link_validate_default, + .get_fmt = ipu6_isys_subdev_get_fmt, + .set_fmt = ipu6_isys_subdev_set_fmt, + .get_selection = ipu6_isys_csi2_get_sel, + .set_selection = ipu6_isys_csi2_set_sel, + .enum_mbus_code = ipu6_isys_subdev_enum_mbus_code, +}; + +static const struct v4l2_subdev_ops csi2_sd_ops = { + .core = &csi2_sd_core_ops, + .video = &csi2_sd_video_ops, + .pad = &csi2_sd_pad_ops, +}; + +static struct media_entity_operations csi2_entity_ops = { + .link_validate = v4l2_subdev_link_validate, +}; + +void ipu6_isys_csi2_cleanup(struct ipu6_isys_csi2 *csi2) +{ + if (!csi2->isys) + return; + + v4l2_device_unregister_subdev(&csi2->asd.sd); + ipu6_isys_subdev_cleanup(&csi2->asd); + csi2->isys = NULL; +} + +int ipu6_isys_csi2_init(struct ipu6_isys_csi2 *csi2, + struct ipu6_isys *isys, + void __iomem *base, unsigned int index) +{ + struct v4l2_subdev_format fmt = { + .which = V4L2_SUBDEV_FORMAT_ACTIVE, + .pad = CSI2_PAD_SINK, + .format = { + .width = 4096, + .height = 3072, + .code = MEDIA_BUS_FMT_SGRBG10_1X10, + }, + }; + int ret; + + csi2->isys = isys; + csi2->base = base; + csi2->port = index; + + csi2->asd.sd.entity.ops = &csi2_entity_ops; + csi2->asd.isys = isys; + ret = ipu6_isys_subdev_init(&csi2->asd, &csi2_sd_ops, 0, + NR_OF_CSI2_PADS); + if (ret) + goto fail; + + csi2->asd.pad[CSI2_PAD_SINK].flags = MEDIA_PAD_FL_SINK + | MEDIA_PAD_FL_MUST_CONNECT; + csi2->asd.pad[CSI2_PAD_SOURCE].flags = MEDIA_PAD_FL_SOURCE; + + csi2->asd.source = IPU6_FW_ISYS_STREAM_SRC_CSI2_PORT0 + index; + csi2->asd.supported_codes = csi2_supported_codes; + ipu6_isys_subdev_set_fmt(&csi2->asd.sd, NULL, &fmt); + snprintf(csi2->asd.sd.name, sizeof(csi2->asd.sd.name), + IPU6_ISYS_ENTITY_PREFIX " CSI2 %u", index); + v4l2_set_subdevdata(&csi2->asd.sd, &csi2->asd); + + ret = v4l2_device_register_subdev(&isys->v4l2_dev, &csi2->asd.sd); + if (ret) { + dev_info(&isys->adev->dev, "can't register v4l2 subdev\n"); + goto fail; + } + + return 0; + +fail: + ipu6_isys_csi2_cleanup(csi2); + + return ret; +} + +void ipu6_isys_csi2_sof_event_by_stream(struct ipu6_isys_stream *stream) +{ + struct video_device *vdev = stream->csi2->asd.sd.devnode; + struct v4l2_event ev = { + .type = V4L2_EVENT_FRAME_SYNC, + }; + + ev.u.frame_sync.frame_sequence = atomic_inc_return(&stream->sequence); + + v4l2_event_queue(vdev, &ev); + dev_dbg(&stream->isys->adev->dev, + "sof_event::csi2-%i sequence: %i\n", + stream->csi2->port, ev.u.frame_sync.frame_sequence); +} + +void ipu6_isys_csi2_eof_event_by_stream(struct 
ipu6_isys_stream *stream) +{ + u32 frame_sequence = atomic_read(&stream->sequence); + + dev_dbg(&stream->isys->adev->dev, "eof_event::csi2-%i sequence: %i\n", + stream->csi2->port, frame_sequence); +} diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.h b/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.h new file mode 100644 index 000000000000..db932e9e8b0e --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.h @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2013 - 2023 Intel Corporation */ + +#ifndef IPU6_ISYS_CSI2_H +#define IPU6_ISYS_CSI2_H + +#include +#include + +#include "ipu6-isys-subdev.h" + +struct ipu6_isys; +struct ipu6_isys_csi2_pdata; +struct ipu6_isys_csi2_timing; +struct ipu6_isys_stream; + +#define NR_OF_CSI2_SINK_PADS 1 +#define CSI2_PAD_SINK 0 +#define NR_OF_CSI2_SOURCE_PADS 1 +#define CSI2_PAD_SOURCE 1 +#define NR_OF_CSI2_PADS (NR_OF_CSI2_SINK_PADS + NR_OF_CSI2_SOURCE_PADS) + +#define CSI2_CSI_RX_DLY_CNT_TERMEN_CLANE_A 0 +#define CSI2_CSI_RX_DLY_CNT_TERMEN_CLANE_B 0 +#define CSI2_CSI_RX_DLY_CNT_SETTLE_CLANE_A 95 +#define CSI2_CSI_RX_DLY_CNT_SETTLE_CLANE_B -8 + +#define CSI2_CSI_RX_DLY_CNT_TERMEN_DLANE_A 0 +#define CSI2_CSI_RX_DLY_CNT_TERMEN_DLANE_B 0 +#define CSI2_CSI_RX_DLY_CNT_SETTLE_DLANE_A 85 +#define CSI2_CSI_RX_DLY_CNT_SETTLE_DLANE_B -2 + +#define IPU6_EOF_TIMEOUT 300 +#define IPU6_EOF_TIMEOUT_JIFFIES msecs_to_jiffies(IPU6_EOF_TIMEOUT) + +#define CSI2_CROP_HOR BIT(0) +#define CSI2_CROP_VER BIT(1) +#define CSI2_CROP_MASK (CSI2_CROP_VER | CSI2_CROP_HOR) + +struct ipu6_isys_csi2 { + struct ipu6_isys_csi2_pdata *pdata; + struct ipu6_isys *isys; + struct ipu6_isys_subdev asd; + + void __iomem *base; + u32 receiver_errors; + unsigned int nlanes; + unsigned int port; +}; + +struct ipu6_isys_csi2_timing { + u32 ctermen; + u32 csettle; + u32 dtermen; + u32 dsettle; +}; + +struct ipu6_csi2_error { + const char *error_string; + bool is_info_only; +}; + +#define to_ipu6_isys_csi2(sd) container_of(to_ipu6_isys_subdev(sd), \ + struct ipu6_isys_csi2, asd) + +int ipu6_isys_csi2_get_link_freq(struct ipu6_isys_csi2 *csi2, s64 *link_freq); +int ipu6_isys_csi2_init(struct ipu6_isys_csi2 *csi2, struct ipu6_isys *isys, + void __iomem *base, unsigned int index); +void ipu6_isys_csi2_cleanup(struct ipu6_isys_csi2 *csi2); +void ipu6_isys_csi2_sof_event_by_stream(struct ipu6_isys_stream *stream); +void ipu6_isys_csi2_eof_event_by_stream(struct ipu6_isys_stream *stream); +void ipu6_isys_register_errors(struct ipu6_isys_csi2 *csi2); +void ipu6_isys_csi2_error(struct ipu6_isys_csi2 *csi2); + +#endif /* IPU6_ISYS_CSI2_H */ diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-subdev.c b/drivers/media/pci/intel/ipu6/ipu6-isys-subdev.c new file mode 100644 index 000000000000..35e0f38a4157 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-isys-subdev.c @@ -0,0 +1,309 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2013 - 2023 Intel Corporation + +#include +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-isys.h" +#include "ipu6-isys-subdev.h" +#include "ipu6-isys-video.h" + +unsigned int ipu6_isys_mbus_code_to_bpp(u32 code) +{ + switch (code) { + case MEDIA_BUS_FMT_RGB888_1X24: + return 24; + case MEDIA_BUS_FMT_RGB565_1X16: + case MEDIA_BUS_FMT_UYVY8_1X16: + case MEDIA_BUS_FMT_YUYV8_1X16: + return 16; + case MEDIA_BUS_FMT_SBGGR12_1X12: + case MEDIA_BUS_FMT_SGBRG12_1X12: + case MEDIA_BUS_FMT_SGRBG12_1X12: + case MEDIA_BUS_FMT_SRGGB12_1X12: + return 12; + case MEDIA_BUS_FMT_SBGGR10_1X10: + case 
MEDIA_BUS_FMT_SGBRG10_1X10: + case MEDIA_BUS_FMT_SGRBG10_1X10: + case MEDIA_BUS_FMT_SRGGB10_1X10: + return 10; + case MEDIA_BUS_FMT_SBGGR8_1X8: + case MEDIA_BUS_FMT_SGBRG8_1X8: + case MEDIA_BUS_FMT_SGRBG8_1X8: + case MEDIA_BUS_FMT_SRGGB8_1X8: + return 8; + default: + WARN_ON(1); + return 8; + } +} + +unsigned int ipu6_isys_mbus_code_to_mipi(u32 code) +{ + switch (code) { + case MEDIA_BUS_FMT_RGB565_1X16: + return MIPI_CSI2_DT_RGB565; + case MEDIA_BUS_FMT_RGB888_1X24: + return MIPI_CSI2_DT_RGB888; + case MEDIA_BUS_FMT_UYVY8_1X16: + case MEDIA_BUS_FMT_YUYV8_1X16: + return MIPI_CSI2_DT_YUV422_8B; + case MEDIA_BUS_FMT_SBGGR12_1X12: + case MEDIA_BUS_FMT_SGBRG12_1X12: + case MEDIA_BUS_FMT_SGRBG12_1X12: + case MEDIA_BUS_FMT_SRGGB12_1X12: + return MIPI_CSI2_DT_RAW12; + case MEDIA_BUS_FMT_SBGGR10_1X10: + case MEDIA_BUS_FMT_SGBRG10_1X10: + case MEDIA_BUS_FMT_SGRBG10_1X10: + case MEDIA_BUS_FMT_SRGGB10_1X10: + return MIPI_CSI2_DT_RAW10; + case MEDIA_BUS_FMT_SBGGR8_1X8: + case MEDIA_BUS_FMT_SGBRG8_1X8: + case MEDIA_BUS_FMT_SGRBG8_1X8: + case MEDIA_BUS_FMT_SRGGB8_1X8: + return MIPI_CSI2_DT_RAW8; + default: + /* return unavailable MIPI data type - 0x3f */ + WARN_ON(1); + return 0x3f; + } +} + +bool ipu6_isys_is_bayer_format(u32 code) +{ + switch (ipu6_isys_mbus_code_to_mipi(code)) { + case MIPI_CSI2_DT_RAW8: + case MIPI_CSI2_DT_RAW10: + case MIPI_CSI2_DT_RAW12: + return true; + } + return false; +} + +u32 ipu6_isys_convert_bayer_order(u32 code, int x, int y) +{ + static const u32 code_map[] = { + MEDIA_BUS_FMT_SRGGB8_1X8, + MEDIA_BUS_FMT_SGRBG8_1X8, + MEDIA_BUS_FMT_SGBRG8_1X8, + MEDIA_BUS_FMT_SBGGR8_1X8, + MEDIA_BUS_FMT_SRGGB10_1X10, + MEDIA_BUS_FMT_SGRBG10_1X10, + MEDIA_BUS_FMT_SGBRG10_1X10, + MEDIA_BUS_FMT_SBGGR10_1X10, + MEDIA_BUS_FMT_SRGGB12_1X12, + MEDIA_BUS_FMT_SGRBG12_1X12, + MEDIA_BUS_FMT_SGBRG12_1X12, + MEDIA_BUS_FMT_SBGGR12_1X12, + }; + u32 i; + + for (i = 0; i < ARRAY_SIZE(code_map); i++) + if (code_map[i] == code) + break; + + if (i == ARRAY_SIZE(code_map)) { + WARN_ON(1); + return code; + } + + return code_map[i ^ (((y & 1) << 1) | (x & 1))]; +} + +struct v4l2_mbus_framefmt *ipu6_isys_get_ffmt(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state, + unsigned int pad, + unsigned int which) +{ + struct ipu6_isys_subdev *asd = to_ipu6_isys_subdev(sd); + + if (which == V4L2_SUBDEV_FORMAT_ACTIVE) + return &asd->ffmt[pad]; + else + return v4l2_subdev_get_try_format(sd, state, pad); +} + +struct v4l2_rect *ipu6_isys_get_crop(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state, + unsigned int pad, + unsigned int which) +{ + struct ipu6_isys_subdev *asd = to_ipu6_isys_subdev(sd); + + if (which == V4L2_SUBDEV_FORMAT_ACTIVE) + return &asd->crop; + else + return v4l2_subdev_get_try_crop(sd, state, pad); +} + +int ipu6_isys_subdev_set_fmt(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state, + struct v4l2_subdev_format *fmt) +{ + struct ipu6_isys_subdev *asd = to_ipu6_isys_subdev(sd); + struct v4l2_mbus_framefmt *ffmt = + ipu6_isys_get_ffmt(sd, state, fmt->pad, fmt->which); + u32 code = asd->supported_codes[0]; + unsigned int i; + + mutex_lock(&asd->mutex); + if ((sd->entity.pads[fmt->pad].flags & MEDIA_PAD_FL_SOURCE) && + sd->entity.num_pads > 1) { + fmt->format = *ffmt; + mutex_unlock(&asd->mutex); + return 0; + } + fmt->format.width = clamp(fmt->format.width, IPU6_ISYS_MIN_WIDTH, + IPU6_ISYS_MAX_WIDTH); + fmt->format.height = clamp(fmt->format.height, + IPU6_ISYS_MIN_HEIGHT, IPU6_ISYS_MAX_HEIGHT); + for (i = 0; asd->supported_codes[i]; i++) { + if (asd->supported_codes[i] == 
fmt->format.code) { + code = asd->supported_codes[i]; + break; + } + } + fmt->format.code = code; + fmt->format.field = V4L2_FIELD_NONE; + *ffmt = fmt->format; + if (sd->entity.pads[fmt->pad].flags & MEDIA_PAD_FL_SINK) { + /* propagate format to following source pad */ + struct v4l2_rect *crop = + ipu6_isys_get_crop(sd, state, fmt->pad + 1, fmt->which); + + *ipu6_isys_get_ffmt(sd, state, fmt->pad + 1, fmt->which) = + fmt->format; + /* reset crop */ + crop->left = 0; + crop->top = 0; + crop->width = ffmt->width; + crop->height = ffmt->height; + } + mutex_unlock(&asd->mutex); + + return 0; +} + +int ipu6_isys_subdev_get_fmt(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state, + struct v4l2_subdev_format *fmt) +{ + struct ipu6_isys_subdev *asd = to_ipu6_isys_subdev(sd); + + mutex_lock(&asd->mutex); + fmt->format = *ipu6_isys_get_ffmt(sd, state, fmt->pad, fmt->which); + mutex_unlock(&asd->mutex); + + return 0; +} + +int ipu6_isys_subdev_enum_mbus_code(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state, + struct v4l2_subdev_mbus_code_enum *code) +{ + struct ipu6_isys_subdev *asd = to_ipu6_isys_subdev(sd); + const u32 *supported_codes = asd->supported_codes; + u32 index; + + for (index = 0; supported_codes[index]; index++) { + if (index == code->index) { + code->code = supported_codes[index]; + return 0; + } + } + + return -EINVAL; +} + +int ipu6_isys_subdev_init_cfg(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state) +{ + struct ipu6_isys_subdev *asd = to_ipu6_isys_subdev(sd); + unsigned int i; + + mutex_lock(&asd->mutex); + + for (i = 0; i < asd->sd.entity.num_pads; i++) { + struct v4l2_mbus_framefmt *try_fmt = + v4l2_subdev_get_try_format(sd, state, i); + struct v4l2_rect *try_crop = + v4l2_subdev_get_try_crop(sd, state, i); + + *try_fmt = asd->ffmt[i]; + *try_crop = asd->crop; + } + + mutex_unlock(&asd->mutex); + + return 0; +} + +int ipu6_isys_subdev_init(struct ipu6_isys_subdev *asd, + const struct v4l2_subdev_ops *ops, + unsigned int nr_ctrls, + unsigned int num_pads) +{ + int ret; + + v4l2_subdev_init(&asd->sd, ops); + + asd->sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE | V4L2_SUBDEV_FL_HAS_EVENTS; + asd->sd.owner = THIS_MODULE; + asd->sd.entity.function = MEDIA_ENT_F_VID_IF_BRIDGE; + + asd->pad = devm_kcalloc(&asd->isys->adev->dev, num_pads, + sizeof(*asd->pad), GFP_KERNEL); + + asd->ffmt = devm_kcalloc(&asd->isys->adev->dev, num_pads, + sizeof(*asd->ffmt), GFP_KERNEL); + + if (!asd->pad || !asd->ffmt) + return -ENOMEM; + + mutex_init(&asd->mutex); + + ret = media_entity_pads_init(&asd->sd.entity, num_pads, asd->pad); + if (ret) + goto out_mutex_destroy; + + if (asd->ctrl_init) { + ret = v4l2_ctrl_handler_init(&asd->ctrl_handler, nr_ctrls); + if (ret) + goto out_media_entity_cleanup; + + asd->ctrl_init(&asd->sd); + if (asd->ctrl_handler.error) { + ret = asd->ctrl_handler.error; + goto out_v4l2_ctrl_handler_free; + } + + asd->sd.ctrl_handler = &asd->ctrl_handler; + } + + asd->source = -1; + + return 0; + +out_v4l2_ctrl_handler_free: + v4l2_ctrl_handler_free(&asd->ctrl_handler); + +out_media_entity_cleanup: + media_entity_cleanup(&asd->sd.entity); + +out_mutex_destroy: + mutex_destroy(&asd->mutex); + + return ret; +} + +void ipu6_isys_subdev_cleanup(struct ipu6_isys_subdev *asd) +{ + media_entity_cleanup(&asd->sd.entity); + v4l2_ctrl_handler_free(&asd->ctrl_handler); + mutex_destroy(&asd->mutex); +} diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-subdev.h b/drivers/media/pci/intel/ipu6/ipu6-isys-subdev.h new file mode 100644 index 000000000000..c6197425d52f --- /dev/null 
+++ b/drivers/media/pci/intel/ipu6/ipu6-isys-subdev.h @@ -0,0 +1,70 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2013 - 2023 Intel Corporation */ + +#ifndef IPU6_ISYS_SUBDEV_H +#define IPU6_ISYS_SUBDEV_H + +#include + +#include +#include +#include +#include + +#define FMT_ENTRY (struct ipu6_isys_fmt_entry []) + +struct ipu6_isys; + +struct ipu6_isys_subdev { + /* Serialise access to any other field in the struct */ + struct mutex mutex; + struct v4l2_subdev sd; + struct ipu6_isys *isys; + u32 const *supported_codes; + struct media_pad *pad; + struct v4l2_mbus_framefmt *ffmt; + struct v4l2_rect crop; + struct v4l2_ctrl_handler ctrl_handler; + void (*ctrl_init)(struct v4l2_subdev *sd); + int source; /* SSI stream source; -1 if unset */ +}; + +#define to_ipu6_isys_subdev(__sd) \ + container_of(__sd, struct ipu6_isys_subdev, sd) + +unsigned int ipu6_isys_mbus_code_to_bpp(u32 code); +unsigned int ipu6_isys_mbus_code_to_mipi(u32 code); +bool ipu6_isys_is_bayer_format(u32 code); +u32 ipu6_isys_convert_bayer_order(u32 code, int x, int y); + +int ipu6_isys_subdev_set_fmt(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state, + struct v4l2_subdev_format *fmt); +int ipu6_isys_subdev_get_fmt(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state, + struct v4l2_subdev_format *fmt); +int ipu6_isys_subdev_enum_mbus_code(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state, + struct v4l2_subdev_mbus_code_enum + *code); +int ipu6_isys_subdev_link_validate(struct v4l2_subdev *sd, + struct media_link *link, + struct v4l2_subdev_format *source_fmt, + struct v4l2_subdev_format *sink_fmt); +struct v4l2_mbus_framefmt *ipu6_isys_get_ffmt(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state, + unsigned int pad, + unsigned int which); +struct v4l2_rect *ipu6_isys_get_crop(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state, + unsigned int pad, + unsigned int which); + +int ipu6_isys_subdev_init_cfg(struct v4l2_subdev *sd, + struct v4l2_subdev_state *state); +int ipu6_isys_subdev_init(struct ipu6_isys_subdev *asd, + const struct v4l2_subdev_ops *ops, + unsigned int nr_ctrls, + unsigned int num_pads); +void ipu6_isys_subdev_cleanup(struct ipu6_isys_subdev *asd); +#endif /* IPU6_ISYS_SUBDEV_H */ diff --git a/drivers/media/pci/intel/ipu6/ipu6-platform-isys-csi2-reg.h b/drivers/media/pci/intel/ipu6/ipu6-platform-isys-csi2-reg.h new file mode 100644 index 000000000000..60bc9147e25b --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-platform-isys-csi2-reg.h @@ -0,0 +1,187 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2023 Intel Corporation */ + +#ifndef IPU6_PLATFORM_ISYS_CSI2_REG_H +#define IPU6_PLATFORM_ISYS_CSI2_REG_H + +#define CSI_REG_BASE 0x220000 +#define CSI_REG_BASE_PORT(id) ((id) * 0x1000) + +#define IPU6_CSI_PORT_A_ADDR_OFFSET \ + (CSI_REG_BASE + CSI_REG_BASE_PORT(0)) +#define IPU6_CSI_PORT_B_ADDR_OFFSET \ + (CSI_REG_BASE + CSI_REG_BASE_PORT(1)) +#define IPU6_CSI_PORT_C_ADDR_OFFSET \ + (CSI_REG_BASE + CSI_REG_BASE_PORT(2)) +#define IPU6_CSI_PORT_D_ADDR_OFFSET \ + (CSI_REG_BASE + CSI_REG_BASE_PORT(3)) +#define IPU6_CSI_PORT_E_ADDR_OFFSET \ + (CSI_REG_BASE + CSI_REG_BASE_PORT(4)) +#define IPU6_CSI_PORT_F_ADDR_OFFSET \ + (CSI_REG_BASE + CSI_REG_BASE_PORT(5)) +#define IPU6_CSI_PORT_G_ADDR_OFFSET \ + (CSI_REG_BASE + CSI_REG_BASE_PORT(6)) +#define IPU6_CSI_PORT_H_ADDR_OFFSET \ + (CSI_REG_BASE + CSI_REG_BASE_PORT(7)) + +/* CSI Port Genral Purpose Registers */ +#define CSI_REG_PORT_GPREG_SRST 0x0 +#define CSI_REG_PORT_GPREG_CSI2_SLV_REG_SRST 0x4 
+#define CSI_REG_PORT_GPREG_CSI2_PORT_CONTROL 0x8 + +/* + * Port IRQs mapping events: + * IRQ0 - CSI_FE event + * IRQ1 - CSI_SYNC + * IRQ2 - S2M_SIDS0TO7 + * IRQ3 - S2M_SIDS8TO15 + */ +#define CSI_PORT_REG_BASE_IRQ_CSI 0x80 +#define CSI_PORT_REG_BASE_IRQ_CSI_SYNC 0xA0 +#define CSI_PORT_REG_BASE_IRQ_S2M_SIDS0TOS7 0xC0 +#define CSI_PORT_REG_BASE_IRQ_S2M_SIDS8TOS15 0xE0 + +#define CSI_PORT_REG_BASE_IRQ_EDGE_OFFSET 0x0 +#define CSI_PORT_REG_BASE_IRQ_MASK_OFFSET 0x4 +#define CSI_PORT_REG_BASE_IRQ_STATUS_OFFSET 0x8 +#define CSI_PORT_REG_BASE_IRQ_CLEAR_OFFSET 0xc +#define CSI_PORT_REG_BASE_IRQ_ENABLE_OFFSET 0x10 +#define CSI_PORT_REG_BASE_IRQ_LEVEL_NOT_PULSE_OFFSET 0x14 + +#define IPU6SE_CSI_RX_ERROR_IRQ_MASK GENMASK(18, 0) +#define IPU6_CSI_RX_ERROR_IRQ_MASK GENMASK(19, 0) + +#define CSI_RX_NUM_ERRORS_IN_IRQ 20 +#define CSI_RX_NUM_IRQ 32 + +#define IPU6_CSI_RX_IRQ_FS_VC 1 +#define IPU6_CSI_RX_IRQ_FE_VC 2 + +/* PPI2CSI */ +#define CSI_REG_PPI2CSI_ENABLE 0x200 +#define CSI_REG_PPI2CSI_CONFIG_PPI_INTF 0x204 +#define PPI_INTF_CONFIG_NOF_ENABLED_DLANES_MASK GENMASK(4, 3) +#define CSI_REG_PPI2CSI_CONFIG_CSI_FEATURE 0x208 + +enum CSI_PPI2CSI_CTRL { + CSI_PPI2CSI_DISABLE = 0, + CSI_PPI2CSI_ENABLE = 1, +}; + +/* CSI_FE */ +#define CSI_REG_CSI_FE_ENABLE 0x280 +#define CSI_REG_CSI_FE_MODE 0x284 +#define CSI_REG_CSI_FE_MUX_CTRL 0x288 +#define CSI_REG_CSI_FE_SYNC_CNTR_SEL 0x290 + +enum CSI_FE_ENABLE_TYPE { + CSI_FE_DISABLE = 0, + CSI_FE_ENABLE = 1, +}; + +enum CSI_FE_MODE_TYPE { + CSI_FE_DPHY_MODE = 0, + CSI_FE_CPHY_MODE = 1, +}; + +enum CSI_FE_INPUT_SELECTOR { + CSI_SENSOR_INPUT = 0, + CSI_MIPIGEN_INPUT = 1, +}; + +enum CSI_FE_SYNC_CNTR_SEL_TYPE { + CSI_CNTR_SENSOR_LINE_ID = BIT(0), + CSI_CNTR_INT_LINE_PKT_ID = ~CSI_CNTR_SENSOR_LINE_ID, + CSI_CNTR_SENSOR_FRAME_ID = BIT(1), + CSI_CNTR_INT_FRAME_PKT_ID = ~CSI_CNTR_SENSOR_FRAME_ID, +}; + +/* CSI HUB General Purpose Registers */ +#define CSI_REG_HUB_GPREG_SRST (CSI_REG_BASE + 0x18000) +#define CSI_REG_HUB_GPREG_SLV_REG_SRST (CSI_REG_BASE + 0x18004) + +#define CSI_REG_HUB_DRV_ACCESS_PORT(id) (CSI_REG_BASE + 0x18018 + (id) * 4) +#define CSI_REG_HUB_FW_ACCESS_PORT_OFS 0x17000 +#define CSI_REG_HUB_FW_ACCESS_PORT_V6OFS 0x16000 +#define CSI_REG_HUB_FW_ACCESS_PORT(ofs, id) (CSI_REG_BASE + (ofs) + \ + (id) * 4) + +enum CSI_PORT_CLK_GATING_SWITCH { + CSI_PORT_CLK_GATING_OFF = 0, + CSI_PORT_CLK_GATING_ON = 1, +}; + +#define CSI_REG_BASE_HUB_IRQ 0x18200 + +#define IPU6_REG_ISYS_CSI_TOP_CTRL0_IRQ_EDGE 0x238200 +#define IPU6_REG_ISYS_CSI_TOP_CTRL0_IRQ_MASK 0x238204 +#define IPU6_REG_ISYS_CSI_TOP_CTRL0_IRQ_STATUS 0x238208 +#define IPU6_REG_ISYS_CSI_TOP_CTRL0_IRQ_CLEAR 0x23820c +#define IPU6_REG_ISYS_CSI_TOP_CTRL0_IRQ_ENABLE 0x238210 +#define IPU6_REG_ISYS_CSI_TOP_CTRL0_IRQ_LEVEL_NOT_PULSE 0x238214 + +#define IPU6_REG_ISYS_CSI_TOP_CTRL1_IRQ_EDGE 0x238220 +#define IPU6_REG_ISYS_CSI_TOP_CTRL1_IRQ_MASK 0x238224 +#define IPU6_REG_ISYS_CSI_TOP_CTRL1_IRQ_STATUS 0x238228 +#define IPU6_REG_ISYS_CSI_TOP_CTRL1_IRQ_CLEAR 0x23822c +#define IPU6_REG_ISYS_CSI_TOP_CTRL1_IRQ_ENABLE 0x238230 +#define IPU6_REG_ISYS_CSI_TOP_CTRL1_IRQ_LEVEL_NOT_PULSE 0x238234 + +/* MTL IPU6V6 irq ctrl0 & ctrl1 */ +#define IPU6V6_REG_ISYS_CSI_TOP_CTRL0_IRQ_EDGE 0x238700 +#define IPU6V6_REG_ISYS_CSI_TOP_CTRL0_IRQ_MASK 0x238704 +#define IPU6V6_REG_ISYS_CSI_TOP_CTRL0_IRQ_STATUS 0x238708 +#define IPU6V6_REG_ISYS_CSI_TOP_CTRL0_IRQ_CLEAR 0x23870c +#define IPU6V6_REG_ISYS_CSI_TOP_CTRL0_IRQ_ENABLE 0x238710 +#define IPU6V6_REG_ISYS_CSI_TOP_CTRL0_IRQ_LEVEL_NOT_PULSE 0x238714 + +#define 
IPU6V6_REG_ISYS_CSI_TOP_CTRL1_IRQ_EDGE 0x238720 +#define IPU6V6_REG_ISYS_CSI_TOP_CTRL1_IRQ_MASK 0x238724 +#define IPU6V6_REG_ISYS_CSI_TOP_CTRL1_IRQ_STATUS 0x238728 +#define IPU6V6_REG_ISYS_CSI_TOP_CTRL1_IRQ_CLEAR 0x23872c +#define IPU6V6_REG_ISYS_CSI_TOP_CTRL1_IRQ_ENABLE 0x238730 +#define IPU6V6_REG_ISYS_CSI_TOP_CTRL1_IRQ_LEVEL_NOT_PULSE 0x238734 + +/* + * 3:0 CSI_PORT.irq_out[3:0] CSI_PORT_CTRL0 IRQ outputs (4bits) + * [0] CSI_PORT.IRQ_CTRL0_csi + * [1] CSI_PORT.IRQ_CTRL1_csi_sync + * [2] CSI_PORT.IRQ_CTRL2_s2m_sids0to7 + * [3] CSI_PORT.IRQ_CTRL3_s2m_sids8to15 + */ +#define IPU6_ISYS_UNISPART_IRQ_CSI2(port) \ + (0x3 << ((port) * IPU6_CSI_IRQ_NUM_PER_PIPE)) + +/* + * ipu6se support 2 front ends, 2 port per front end, 4 ports 0..3 + * sip0 - 0, 1 + * sip1 - 2, 3 + * 0 and 2 support 4 data lanes, 1 and 3 support 2 data lanes + * all offset are base from isys base address + */ + +#define CSI2_HUB_GPREG_SIP_SRST(sip) (0x238038 + (sip) * 4) +#define CSI2_HUB_GPREG_SIP_FB_PORT_CFG(sip) (0x238050 + (sip) * 4) + +#define CSI2_HUB_GPREG_DPHY_TIMER_INCR (0x238040) +#define CSI2_HUB_GPREG_HPLL_FREQ (0x238044) +#define CSI2_HUB_GPREG_IS_CLK_RATIO (0x238048) +#define CSI2_HUB_GPREG_HPLL_FREQ_ISCLK_RATE_OVERRIDE (0x23804c) +#define CSI2_HUB_GPREG_PORT_CLKGATING_DISABLE (0x238058) +#define CSI2_HUB_GPREG_SIP0_CSI_RX_A_CONTROL (0x23805c) +#define CSI2_HUB_GPREG_SIP0_CSI_RX_B_CONTROL (0x238088) +#define CSI2_HUB_GPREG_SIP1_CSI_RX_A_CONTROL (0x2380a4) +#define CSI2_HUB_GPREG_SIP1_CSI_RX_B_CONTROL (0x2380d0) + +#define CSI2_SIP_TOP_CSI_RX_BASE(sip) (0x23805c + (sip) * 0x48) +#define CSI2_SIP_TOP_CSI_RX_PORT_BASE_0(port) (0x23805c + ((port) / 2) * 0x48) +#define CSI2_SIP_TOP_CSI_RX_PORT_BASE_1(port) (0x238088 + ((port) / 2) * 0x48) + +/* offset from port base */ +#define CSI2_SIP_TOP_CSI_RX_PORT_CONTROL (0x0) +#define CSI2_SIP_TOP_CSI_RX_DLY_CNT_TERMEN_CLANE (0x4) +#define CSI2_SIP_TOP_CSI_RX_DLY_CNT_SETTLE_CLANE (0x8) +#define CSI2_SIP_TOP_CSI_RX_DLY_CNT_TERMEN_DLANE(lane) (0xc + (lane) * 8) +#define CSI2_SIP_TOP_CSI_RX_DLY_CNT_SETTLE_DLANE(lane) (0x10 + (lane) * 8) + +#endif /* IPU6_ISYS_CSI2_REG_H */ From patchwork Thu Apr 13 10:04:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bingbu Cao X-Patchwork-Id: 674370 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 177D2C77B6E for ; Thu, 13 Apr 2023 09:55:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230099AbjDMJz2 (ORCPT ); Thu, 13 Apr 2023 05:55:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45452 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230116AbjDMJzZ (ORCPT ); Thu, 13 Apr 2023 05:55:25 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 90C109745 for ; Thu, 13 Apr 2023 02:55:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1681379710; x=1712915710; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Wma92UDZXOQI52yqHibzC/VXC9T0JoRP7kam7MsFnxQ=; b=dLvZx+1nWf1QOnW/8XoAenWO620+JltCNTnzKMsCkQ2c6yohw3yY+07G sdaaEy8zUpDpU7cvycBDRm2LVnRcln6GECz+bVprchcpZM9BWocHb7cBi 
QwDITul0nlWBczqIhklxywJerEQx1ISWRvqJVkLfugv3jV5DcKC3VUoeo aI/logpvnpAJium4fbE+ZtThNHBuPt+KSOh3GpKnQuF1odlHKMNbbsYKA hpTmGffDO7zB4lOEmjVSVpM+TzNqt35cuKUZ5/ONF5T84gwFAdPwSHxD+ 2E4Geb9rcvH+0NIqsWChT2lRR8mVEdEh8OXB/BT+f8ef6przNKQTk61MD Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="371993046" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="371993046" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2023 02:55:05 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="639600102" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="639600102" Received: from icg-kernel3.bj.intel.com ([172.16.126.100]) by orsmga003.jf.intel.com with ESMTP; 13 Apr 2023 02:55:00 -0700 From: bingbu.cao@intel.com To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com Subject: [RFC PATCH 09/14] media: intel/ipu6: add the CSI2 DPHY implementation Date: Thu, 13 Apr 2023 18:04:24 +0800 Message-Id: <20230413100429.919622-10-bingbu.cao@intel.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230413100429.919622-1-bingbu.cao@intel.com> References: <20230413100429.919622-1-bingbu.cao@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Bingbu Cao IPU6 CSI2 DPHY hardware varies on different platforms: the current IPU6 has three DPHY hardware instances, which may be used on tigerlake, alderlake, meteorlake and jasperlake. MCD DPHY is shipped on tigerlake and alderlake, and DWC DPHY is shipped on meteorlake. Each PHY has its own register space; the input system driver calls the DPHY callback which is set at isys_probe().
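Editor's note: a minimal sketch of the callback dispatch described above, assuming the backend is chosen from the detected hardware generation at probe time. The enum, field names and helper below are illustrative; only the phy_set_power hook and the per-backend *_set_power entry points come from this series (the MCD symbol name is inferred from the file listing).

/*
 * Illustrative only -- isys_probe() is described as installing one of the
 * three PHY backends into isys->phy_set_power; the version enum and the
 * selection logic here are assumptions, not part of the patch.
 */
enum ipu6_phy_hw {
	IPU6_PHY_MCD,	/* tigerlake, alderlake */
	IPU6_PHY_DWC,	/* meteorlake */
	IPU6_PHY_JSL,	/* jasperlake */
};

static void isys_select_phy_backend(struct ipu6_isys *isys,
				    enum ipu6_phy_hw hw)
{
	switch (hw) {
	case IPU6_PHY_MCD:
		isys->phy_set_power = ipu6_isys_mcd_phy_set_power;
		break;
	case IPU6_PHY_DWC:
		isys->phy_set_power = ipu6_isys_dwc_phy_set_power;
		break;
	case IPU6_PHY_JSL:
		isys->phy_set_power = ipu6_isys_jsl_phy_set_power;
		break;
	}
}

The stream start/stop paths then only ever go through isys->phy_set_power(), as seen in ipu6_isys_csi2_set_stream() in the CSI2 patch.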
Signed-off-by: Bingbu Cao --- .../media/pci/intel/ipu6/ipu6-isys-dwc-phy.c | 549 +++++++++++++ .../media/pci/intel/ipu6/ipu6-isys-jsl-phy.c | 245 ++++++ .../media/pci/intel/ipu6/ipu6-isys-mcd-phy.c | 733 ++++++++++++++++++ drivers/media/pci/intel/ipu6/ipu6-isys-phy.h | 24 + 4 files changed, 1551 insertions(+) create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys-dwc-phy.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys-jsl-phy.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys-mcd-phy.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys-phy.h diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-dwc-phy.c b/drivers/media/pci/intel/ipu6/ipu6-isys-dwc-phy.c new file mode 100644 index 000000000000..2724b122f07d --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-isys-dwc-phy.c @@ -0,0 +1,549 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2013 - 2023 Intel Corporation + */ + +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-buttress.h" +#include "ipu6-isys.h" +#include "ipu6-isys-csi2.h" +#include "ipu6-isys-phy.h" +#include "ipu6-platform-isys-csi2-reg.h" +#include "ipu6-platform-regs.h" + +#define IPU6_DWC_DPHY_BASE(i) (0x238038 + 0x34 * (i)) +#define IPU6_DWC_DPHY_RSTZ 0x00 +#define IPU6_DWC_DPHY_SHUTDOWNZ 0x04 +#define IPU6_DWC_DPHY_HSFREQRANGE 0x08 +#define IPU6_DWC_DPHY_CFGCLKFREQRANGE 0x0c +#define IPU6_DWC_DPHY_TEST_IFC_ACCESS_MODE 0x10 +#define IPU6_DWC_DPHY_TEST_IFC_REQ 0x14 +#define IPU6_DWC_DPHY_TEST_IFC_REQ_COMPLETION 0x18 +#define IPU6_DWC_DPHY_DFT_CTRL0 0x28 +#define IPU6_DWC_DPHY_DFT_CTRL1 0x2c +#define IPU6_DWC_DPHY_DFT_CTRL2 0x30 + +/* + * test IFC request definition: + * - req: 0 for read, 1 for write + * - 12 bits address + * - 8bits data (will ignore for read) + * --24----16------4-----0 + * --|-data-|-addr-|-req-| + */ +#define IFC_REQ(req, addr, data) (FIELD_PREP(GENMASK(23, 16), data) | \ + FIELD_PREP(GENMASK(15, 4), addr) | \ + FIELD_PREP(GENMASK(1, 0), req)) + +#define TEST_IFC_REQ_READ 0 +#define TEST_IFC_REQ_WRITE 1 +#define TEST_IFC_REQ_RESET 2 + +#define TEST_IFC_ACCESS_MODE_FSM 0 +#define TEST_IFC_ACCESS_MODE_IFC_CTL 1 + +enum phy_fsm_state { + PHY_FSM_STATE_POWERON = 0, + PHY_FSM_STATE_BGPON = 1, + PHY_FSM_STATE_CAL_TYPE = 2, + PHY_FSM_STATE_BURNIN_CAL = 3, + PHY_FSM_STATE_TERMCAL = 4, + PHY_FSM_STATE_OFFSETCAL = 5, + PHY_FSM_STATE_OFFSET_LANE = 6, + PHY_FSM_STATE_IDLE = 7, + PHY_FSM_STATE_ULP = 8, + PHY_FSM_STATE_DDLTUNNING = 9, + PHY_FSM_STATE_SKEW_BACKWARD = 10, + PHY_FSM_STATE_INVALID, +}; + +static void dwc_dphy_write(struct ipu6_isys *isys, u32 phy_id, u32 addr, + u32 data) +{ + void __iomem *isys_base = isys->pdata->base; + void __iomem *base = isys_base + IPU6_DWC_DPHY_BASE(phy_id); + + dev_dbg(&isys->adev->dev, "write: reg 0x%lx = data 0x%x", + base + addr - isys_base, data); + writel(data, base + addr); +} + +static u32 dwc_dphy_read(struct ipu6_isys *isys, u32 phy_id, u32 addr) +{ + void __iomem *isys_base = isys->pdata->base; + void __iomem *base = isys_base + IPU6_DWC_DPHY_BASE(phy_id); + u32 data; + + data = readl(base + addr); + dev_dbg(&isys->adev->dev, "read: reg 0x%lx = data 0x%x", + base + addr - isys_base, data); + + return data; +} + +static void dwc_dphy_write_mask(struct ipu6_isys *isys, u32 phy_id, u32 addr, + u32 data, u8 shift, u8 width) +{ + u32 temp; + u32 mask; + + mask = (1 << width) - 1; + temp = dwc_dphy_read(isys, phy_id, addr); + temp &= ~(mask << shift); + temp |= (data & mask) << shift; + dwc_dphy_write(isys, phy_id, addr, temp); +} + 
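Editor's note: to make the test-IFC word layout above concrete, here is a small self-contained worked example of the encoding. It uses plain shifts in place of FIELD_PREP()/GENMASK() so it builds in userspace; the IFC_REQ_EXAMPLE name is hypothetical and nothing below is part of the patch.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Same layout as IFC_REQ() above: data[23:16] | addr[15:4] | req[1:0] */
#define IFC_REQ_EXAMPLE(req, addr, data) \
	((((uint32_t)(data) & 0xff) << 16) | \
	 (((uint32_t)(addr) & 0xfff) << 4) | \
	 ((uint32_t)(req) & 0x3))

int main(void)
{
	/* a write request (req = 1) of data 0xcc to IFC register 0xe2 */
	uint32_t word = IFC_REQ_EXAMPLE(1, 0xe2, 0xcc);

	assert(word == 0x00cc0e21);
	printf("IFC request word: 0x%08x\n", (unsigned int)word);

	return 0;
}

dwc_dphy_ifc_write() posts such a word to IPU6_DWC_DPHY_TEST_IFC_REQ and then polls the completion register, exactly as the dwc_dphy_ifc_read()/dwc_dphy_ifc_write() helpers that follow do.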
+static u32 __maybe_unused dwc_dphy_read_mask(struct ipu6_isys *isys, u32 phy_id, + u32 addr, u8 shift, u8 width) +{ + u32 val; + + val = dwc_dphy_read(isys, phy_id, addr) >> shift; + return val & ((1 << width) - 1); +} + +#define DWC_DPHY_TIMEOUT (5 * USEC_PER_SEC) +static int dwc_dphy_ifc_read(struct ipu6_isys *isys, u32 phy_id, u32 addr, + u32 *val) +{ + void __iomem *isys_base = isys->pdata->base; + void __iomem *base = isys_base + IPU6_DWC_DPHY_BASE(phy_id); + void __iomem *reg; + u32 completion; + int ret; + + dwc_dphy_write(isys, phy_id, IPU6_DWC_DPHY_TEST_IFC_REQ, + IFC_REQ(TEST_IFC_REQ_READ, addr, 0)); + reg = base + IPU6_DWC_DPHY_TEST_IFC_REQ_COMPLETION; + ret = readl_poll_timeout(reg, completion, !(completion & BIT(0)), + 10, DWC_DPHY_TIMEOUT); + if (ret) { + dev_err(&isys->adev->dev, "ifc request read timeout\n"); + return ret; + } + + *val = completion >> 8 & 0xff; + *val = FIELD_GET(GENMASK(15, 8), completion); + dev_dbg(&isys->adev->dev, "ifc read 0x%x = 0x%x", addr, *val); + + return 0; +} + +static int dwc_dphy_ifc_write(struct ipu6_isys *isys, u32 phy_id, u32 addr, + u32 data) +{ + void __iomem *isys_base = isys->pdata->base; + void __iomem *base = isys_base + IPU6_DWC_DPHY_BASE(phy_id); + void __iomem *reg; + u32 completion; + int ret; + + dwc_dphy_write(isys, phy_id, IPU6_DWC_DPHY_TEST_IFC_REQ, + IFC_REQ(TEST_IFC_REQ_WRITE, addr, data)); + completion = readl(base + IPU6_DWC_DPHY_TEST_IFC_REQ_COMPLETION); + reg = base + IPU6_DWC_DPHY_TEST_IFC_REQ_COMPLETION; + ret = readl_poll_timeout(reg, completion, !(completion & BIT(0)), + 10, DWC_DPHY_TIMEOUT); + if (ret) { + dev_err(&isys->adev->dev, "ifc request write timeout\n"); + return ret; + } + + return 0; +} + +static void dwc_dphy_ifc_write_mask(struct ipu6_isys *isys, u32 phy_id, + u32 addr, u32 data, u8 shift, u8 width) +{ + u32 temp, mask; + int ret; + + ret = dwc_dphy_ifc_read(isys, phy_id, addr, &temp); + if (ret) { + dev_err(&isys->adev->dev, + "dphy proxy read failed with %d", ret); + return; + } + + mask = (1 << width) - 1; + temp &= ~(mask << shift); + temp |= (data & mask) << shift; + ret = dwc_dphy_ifc_write(isys, phy_id, addr, temp); + if (ret) + dev_err(&isys->adev->dev, "dphy proxy write failed(%d)", ret); +} + +static u32 dwc_dphy_ifc_read_mask(struct ipu6_isys *isys, u32 phy_id, u32 addr, + u8 shift, u8 width) +{ + int ret; + u32 val; + + ret = dwc_dphy_ifc_read(isys, phy_id, addr, &val); + if (ret) { + dev_err(&isys->adev->dev, "dphy proxy read failed with %d", + ret); + return 0; + } + + return ((val >> shift) & ((1 << width) - 1)); +} + +static int dwc_dphy_pwr_up(struct ipu6_isys *isys, u32 phy_id) +{ + u32 fsm_state; + int ret; + + dwc_dphy_write(isys, phy_id, IPU6_DWC_DPHY_RSTZ, 1); + usleep_range(10, 20); + dwc_dphy_write(isys, phy_id, IPU6_DWC_DPHY_SHUTDOWNZ, 1); + + ret = read_poll_timeout(dwc_dphy_ifc_read_mask, fsm_state, + (fsm_state == PHY_FSM_STATE_IDLE || + fsm_state == PHY_FSM_STATE_ULP), + 100, DWC_DPHY_TIMEOUT, false, isys, + phy_id, 0x1e, 0, 4); + + if (ret) { + dev_err(&isys->adev->dev, "DPHY%d power up failed, state 0x%x", + phy_id, fsm_state); + return ret; + } + + return 0; +} + +struct dwc_dphy_freq_range { + u8 hsfreq; + u16 min; + u16 max; + u16 default_mbps; + u16 osc_freq_target; +}; + +#define DPHY_FREQ_RANGE_NUM (63) +#define DPHY_FREQ_RANGE_INVALID_INDEX (0xff) +const struct dwc_dphy_freq_range freqranges[DPHY_FREQ_RANGE_NUM] = { + {0x00, 80, 97, 80, 335}, + {0x10, 80, 107, 90, 335}, + {0x20, 84, 118, 100, 335}, + {0x30, 93, 128, 110, 335}, + {0x01, 103, 139, 120, 335}, + 
{0x11, 112, 149, 130, 335}, + {0x21, 122, 160, 140, 335}, + {0x31, 131, 170, 150, 335}, + {0x02, 141, 181, 160, 335}, + {0x12, 150, 191, 170, 335}, + {0x22, 160, 202, 180, 335}, + {0x32, 169, 212, 190, 335}, + {0x03, 183, 228, 205, 335}, + {0x13, 198, 244, 220, 335}, + {0x23, 212, 259, 235, 335}, + {0x33, 226, 275, 250, 335}, + {0x04, 250, 301, 275, 335}, + {0x14, 274, 328, 300, 335}, + {0x25, 297, 354, 325, 335}, + {0x35, 321, 380, 350, 335}, + {0x05, 369, 433, 400, 335}, + {0x16, 416, 485, 450, 335}, + {0x26, 464, 538, 500, 335}, + {0x37, 511, 590, 550, 335}, + {0x07, 559, 643, 600, 335}, + {0x18, 606, 695, 650, 335}, + {0x28, 654, 748, 700, 335}, + {0x39, 701, 800, 750, 335}, + {0x09, 749, 853, 800, 335}, + {0x19, 796, 905, 850, 335}, + {0x29, 844, 958, 900, 335}, + {0x3a, 891, 1010, 950, 335}, + {0x0a, 939, 1063, 1000, 335}, + {0x1a, 986, 1115, 1050, 335}, + {0x2a, 1034, 1168, 1100, 335}, + {0x3b, 1081, 1220, 1150, 335}, + {0x0b, 1129, 1273, 1200, 335}, + {0x1b, 1176, 1325, 1250, 335}, + {0x2b, 1224, 1378, 1300, 335}, + {0x3c, 1271, 1430, 1350, 335}, + {0x0c, 1319, 1483, 1400, 335}, + {0x1c, 1366, 1535, 1450, 335}, + {0x2c, 1414, 1588, 1500, 335}, + {0x3d, 1461, 1640, 1550, 208}, + {0x0d, 1509, 1693, 1600, 214}, + {0x1d, 1556, 1745, 1650, 221}, + {0x2e, 1604, 1798, 1700, 228}, + {0x3e, 1651, 1850, 1750, 234}, + {0x0e, 1699, 1903, 1800, 241}, + {0x1e, 1746, 1955, 1850, 248}, + {0x2f, 1794, 2008, 1900, 255}, + {0x3f, 1841, 2060, 1950, 261}, + {0x0f, 1889, 2113, 2000, 268}, + {0x40, 1936, 2165, 2050, 275}, + {0x41, 1984, 2218, 2100, 281}, + {0x42, 2031, 2270, 2150, 288}, + {0x43, 2079, 2323, 2200, 294}, + {0x44, 2126, 2375, 2250, 302}, + {0x45, 2174, 2428, 2300, 308}, + {0x46, 2221, 2480, 2350, 315}, + {0x47, 2269, 2500, 2400, 321}, + {0x48, 2316, 2500, 2450, 328}, + {0x49, 2364, 2500, 2500, 335}, +}; + +static u16 get_hsfreq_by_mbps(u32 mbps) +{ + unsigned int i; + + for (i = DPHY_FREQ_RANGE_NUM - 1; i >= 0; i--) { + if (freqranges[i].default_mbps == mbps || + (mbps >= freqranges[i].min && mbps <= freqranges[i].max)) + return i; + } + + return DPHY_FREQ_RANGE_INVALID_INDEX; +} + +static int ipu6_isys_dwc_phy_config(struct ipu6_isys *isys, + u32 phy_id, u32 mbps) +{ + struct ipu6_bus_device *adev = to_ipu6_bus_device(&isys->adev->dev); + struct ipu6_device *isp = adev->isp; + u32 cfg_clk_freqrange; + u32 osc_freq_target; + u32 index; + + dev_dbg(&isys->adev->dev, "config phy %u with %u mbps", phy_id, mbps); + + index = get_hsfreq_by_mbps(mbps); + if (index == DPHY_FREQ_RANGE_INVALID_INDEX) { + dev_err(&isys->adev->dev, "link freq not found for mbps %u", + mbps); + return -EINVAL; + } + + dwc_dphy_write_mask(isys, phy_id, IPU6_DWC_DPHY_HSFREQRANGE, + freqranges[index].hsfreq, 0, 7); + + /* Force termination Calibration */ + if (isys->phy_termcal_val) { + dwc_dphy_ifc_write_mask(isys, phy_id, 0x20a, 0x1, 0, 1); + dwc_dphy_ifc_write_mask(isys, phy_id, 0x209, 0x3, 0, 2); + dwc_dphy_ifc_write_mask(isys, phy_id, 0x209, + isys->phy_termcal_val, 4, 4); + } + + /* + * Enable override to configure the DDL target oscillation + * frequency on bit 0 of register 0xe4 + */ + dwc_dphy_ifc_write_mask(isys, phy_id, 0xe4, 0x1, 0, 1); + /* + * configure registers 0xe2, 0xe3 with the + * appropriate DDL target oscillation frequency + * 0x1cc(460) + */ + osc_freq_target = freqranges[index].osc_freq_target; + dwc_dphy_ifc_write_mask(isys, phy_id, 0xe2, + osc_freq_target & 0xff, 0, 8); + dwc_dphy_ifc_write_mask(isys, phy_id, 0xe3, + (osc_freq_target >> 8) & 0xf, 0, 4); + + if (mbps < 1500) { + /* 
deskew_polarity_rw, for < 1.5Gbps */ + dwc_dphy_ifc_write_mask(isys, phy_id, 0x8, 0x1, 5, 1); + } + + /* + * Set cfgclkfreqrange[5:0] = round[(Fcfg_clk(MHz)-17)*4] + * (38.4 - 17) * 4 = ~85 (0x55) + */ + cfg_clk_freqrange = (isp->buttress.ref_clk - 170) * 4 / 10; + dev_dbg(&isys->adev->dev, "ref_clk = %u clf_freqrange = %u", + isp->buttress.ref_clk, cfg_clk_freqrange); + dwc_dphy_write_mask(isys, phy_id, IPU6_DWC_DPHY_CFGCLKFREQRANGE, + cfg_clk_freqrange, 0, 8); + + dwc_dphy_write_mask(isys, phy_id, IPU6_DWC_DPHY_DFT_CTRL2, 0x1, 4, 1); + dwc_dphy_write_mask(isys, phy_id, IPU6_DWC_DPHY_DFT_CTRL2, 0x1, 8, 1); + + return 0; +} + +static void ipu6_isys_dwc_phy_aggr_setup(struct ipu6_isys *isys, u32 master, + u32 slave, u32 mbps) +{ + /* Config mastermacro */ + dwc_dphy_ifc_write_mask(isys, master, 0x133, 0x1, 0, 1); + dwc_dphy_ifc_write_mask(isys, slave, 0x133, 0x0, 0, 1); + + /* Config master PHY clk lane to drive long channel clk */ + dwc_dphy_ifc_write_mask(isys, master, 0x307, 0x1, 2, 1); + dwc_dphy_ifc_write_mask(isys, slave, 0x307, 0x0, 2, 1); + + /* Config both PHYs data lanes to get clk from long channel */ + dwc_dphy_ifc_write_mask(isys, master, 0x508, 0x1, 5, 1); + dwc_dphy_ifc_write_mask(isys, slave, 0x508, 0x1, 5, 1); + dwc_dphy_ifc_write_mask(isys, master, 0x708, 0x1, 5, 1); + dwc_dphy_ifc_write_mask(isys, slave, 0x708, 0x1, 5, 1); + + /* Config slave PHY clk lane to bypass long channel clk to DDR clk */ + dwc_dphy_ifc_write_mask(isys, master, 0x308, 0x0, 3, 1); + dwc_dphy_ifc_write_mask(isys, slave, 0x308, 0x1, 3, 1); + + /* Override slave PHY clk lane enable (DPHYRXCLK_CLL_demux module) */ + dwc_dphy_ifc_write_mask(isys, slave, 0xe0, 0x3, 0, 2); + + /* Override slave PHY DDR clk lane enable (DPHYHSRX_div124 module) */ + dwc_dphy_ifc_write_mask(isys, slave, 0xe1, 0x1, 1, 1); + dwc_dphy_ifc_write_mask(isys, slave, 0x307, 0x1, 3, 1); + + /* Turn off slave PHY LP-RX clk lane */ + dwc_dphy_ifc_write_mask(isys, slave, 0x304, 0x1, 7, 1); + dwc_dphy_ifc_write_mask(isys, slave, 0x305, 0xa, 0, 5); +} + +#define PHY_E 4 +static int ipu6_isys_dwc_phy_powerup_ack(struct ipu6_isys *isys, u32 phy_id) +{ + u32 rescal_done; + int ret; + + ret = dwc_dphy_pwr_up(isys, phy_id); + if (ret != 0) { + dev_err(&isys->adev->dev, "dphy%u power up failed(%d)", phy_id, + ret); + return ret; + } + + /* reset forcerxmode */ + dwc_dphy_write_mask(isys, phy_id, IPU6_DWC_DPHY_DFT_CTRL2, 0, 4, 1); + dwc_dphy_write_mask(isys, phy_id, IPU6_DWC_DPHY_DFT_CTRL2, 0, 8, 1); + + dev_dbg(&isys->adev->dev, "phy %u is ready!", phy_id); + + if (phy_id != PHY_E || isys->phy_termcal_val) + return 0; + + usleep_range(100, 200); + rescal_done = dwc_dphy_ifc_read_mask(isys, phy_id, 0x221, 7, 1); + if (rescal_done) { + isys->phy_termcal_val = dwc_dphy_ifc_read_mask(isys, phy_id, + 0x220, 2, 4); + dev_dbg(&isys->adev->dev, "termcal done with value = %u", + isys->phy_termcal_val); + } + + return 0; +} + +static void ipu6_isys_dwc_phy_reset(struct ipu6_isys *isys, u32 phy_id) +{ + dev_dbg(&isys->adev->dev, "Reset phy %u", phy_id); + + dwc_dphy_write(isys, phy_id, IPU6_DWC_DPHY_SHUTDOWNZ, 0); + dwc_dphy_write(isys, phy_id, IPU6_DWC_DPHY_RSTZ, 0); + dwc_dphy_write(isys, phy_id, IPU6_DWC_DPHY_TEST_IFC_ACCESS_MODE, + TEST_IFC_ACCESS_MODE_FSM); + dwc_dphy_write(isys, phy_id, IPU6_DWC_DPHY_TEST_IFC_REQ, + TEST_IFC_REQ_RESET); +} + +int ipu6_isys_dwc_phy_set_power(struct ipu6_isys *isys, + struct ipu6_isys_csi2_config *cfg, + const struct ipu6_isys_csi2_timing *timing, + bool on) +{ + void __iomem *isys_base = isys->pdata->base; + u32 
phy_id, primary, secondary; + u32 nlanes, port, mbps; + s64 link_freq; + int ret = 0; + + port = cfg->port; + + if (!isys_base || port >= isys->pdata->ipdata->csi2.nports) { + dev_warn(&isys->adev->dev, "invalid port ID %d\n", port); + return -EINVAL; + } + + nlanes = cfg->nlanes; + /* only port 0, 2 and 4 support 4 lanes */ + if (nlanes == 4 && port % 2) { + dev_err(&isys->adev->dev, "invalid csi-port %u with %u lanes\n", + port, nlanes); + return -EINVAL; + } + + ret = ipu6_isys_csi2_get_link_freq(&isys->csi2[port], &link_freq); + if (ret) { + dev_err(&isys->adev->dev, + "get link freq failed(%d).\n", ret); + return ret; + } + + mbps = div_u64(link_freq, 500000); + + phy_id = port; + primary = port & ~1; + secondary = primary + 1; + if (on) { + if (nlanes == 4) { + dev_dbg(&isys->adev->dev, + "config phy %u and %u in aggregation mode", + primary, secondary); + + ipu6_isys_dwc_phy_reset(isys, primary); + ipu6_isys_dwc_phy_reset(isys, secondary); + ipu6_isys_dwc_phy_aggr_setup(isys, primary, + secondary, mbps); + + ret = ipu6_isys_dwc_phy_config(isys, primary, mbps); + if (ret) + return ret; + ret = ipu6_isys_dwc_phy_config(isys, secondary, mbps); + if (ret) + return ret; + + ret = ipu6_isys_dwc_phy_powerup_ack(isys, primary); + if (ret) + return ret; + + ret = ipu6_isys_dwc_phy_powerup_ack(isys, secondary); + return ret; + } + + dev_dbg(&isys->adev->dev, + "config phy %u with %u lanes in non-aggr mode", + phy_id, nlanes); + + ipu6_isys_dwc_phy_reset(isys, phy_id); + ret = ipu6_isys_dwc_phy_config(isys, phy_id, mbps); + if (ret) + return ret; + + ret = ipu6_isys_dwc_phy_powerup_ack(isys, phy_id); + return ret; + } + + if (nlanes == 4) { + dev_dbg(&isys->adev->dev, + "Powerdown phy %u and phy %u for port %u", + primary, secondary, port); + ipu6_isys_dwc_phy_reset(isys, secondary); + ipu6_isys_dwc_phy_reset(isys, primary); + + return 0; + } + + dev_dbg(&isys->adev->dev, + "Powerdown phy %u with %u lanes", phy_id, nlanes); + + ipu6_isys_dwc_phy_reset(isys, phy_id); + + return 0; +} diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-jsl-phy.c b/drivers/media/pci/intel/ipu6/ipu6-isys-jsl-phy.c new file mode 100644 index 000000000000..118fc6d45095 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-isys-jsl-phy.c @@ -0,0 +1,245 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2013 - 2023 Intel Corporation + */ +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-isys.h" +#include "ipu6-isys-csi2.h" +#include "ipu6-isys-phy.h" +#include "ipu6-platform-isys-csi2-reg.h" +#include "ipu6-platform-regs.h" + +/* only use BB0, BB2, BB4, and BB6 on PHY0 */ +#define IPU6SE_ISYS_PHY_BB_NUM 4 +#define IPU6SE_ISYS_PHY_0_BASE 0x10000 + +#define PHY_CPHY_DLL_OVRD(x) (0x100 + 0x100 * (x)) +#define PHY_CPHY_RX_CONTROL1(x) (0x110 + 0x100 * (x)) +#define PHY_DPHY_CFG(x) (0x148 + 0x100 * (x)) +#define PHY_BB_AFE_CONFIG(x) (0x174 + 0x100 * (x)) + +/* + * use port_cfg to configure that which data lanes used + * +---------+ +------+ +-----+ + * | port0 x4<-----| | | | + * | | | port | | | + * | port1 x2<-----| | | | + * | | | <-| PHY | + * | port2 x4<-----| | | | + * | | |config| | | + * | port3 x2<-----| | | | + * +---------+ +------+ +-----+ + */ +const unsigned int csi2_port_cfg[][3] = { + {0, 0, 0x1f}, /* no link */ + {4, 0, 0x10}, /* x4 + x4 config */ + {2, 0, 0x12}, /* x2 + x2 config */ + {1, 0, 0x13}, /* x1 + x1 config */ + {2, 1, 0x15}, /* x2x1 + x2x1 config */ + {1, 1, 0x16}, /* x1x1 + x1x1 config */ + {2, 2, 0x18}, /* x2x2 + x2x2 config */ + {1, 2, 0x19}, /* x1x2 + x1x2 
config */ +}; + +/* port, nlanes, bbindex, portcfg */ +const unsigned int phy_port_cfg[][4] = { + /* sip0 */ + {0, 1, 0, 0x15}, + {0, 2, 0, 0x15}, + {0, 4, 0, 0x15}, + {0, 4, 2, 0x22}, + /* sip1 */ + {2, 1, 4, 0x15}, + {2, 2, 4, 0x15}, + {2, 4, 4, 0x15}, + {2, 4, 6, 0x22}, +}; + +static int ipu6_isys_csi2_phy_config_by_port(struct ipu6_isys *isys, + unsigned int port, + unsigned int nlanes) +{ + void __iomem *base = isys->adev->isp->base; + unsigned int bbnum; + u32 val, reg, i; + + dev_dbg(&isys->adev->dev, "port %u with %u lanes", port, nlanes); + + /* only support <1.5Gbps */ + for (i = 0; i < IPU6SE_ISYS_PHY_BB_NUM; i++) { + /* cphy_dll_ovrd.crcdc_fsm_dlane0 = 13 */ + reg = IPU6SE_ISYS_PHY_0_BASE + PHY_CPHY_DLL_OVRD(i); + val = readl(base + reg); + val |= FIELD_PREP(GENMASK(6, 1), 13); + writel(val, base + reg); + + /* cphy_rx_control1.en_crc1 = 1 */ + reg = IPU6SE_ISYS_PHY_0_BASE + PHY_CPHY_RX_CONTROL1(i); + val = readl(base + reg); + val |= BIT(31); + writel(val, base + reg); + + /* dphy_cfg.reserved = 1 + * dphy_cfg.lden_from_dll_ovrd_0 = 1 + */ + reg = IPU6SE_ISYS_PHY_0_BASE + PHY_DPHY_CFG(i); + val = readl(base + reg); + val |= BIT(25) | BIT(26); + writel(val, base + reg); + + /* cphy_dll_ovrd.lden_crcdc_fsm_dlane0 = 1 */ + reg = IPU6SE_ISYS_PHY_0_BASE + PHY_CPHY_DLL_OVRD(i); + val = readl(base + reg); + val |= BIT(0); + writel(val, base + reg); + } + + /* Front end config, use minimal channel loss */ + for (i = 0; i < ARRAY_SIZE(phy_port_cfg); i++) { + if (phy_port_cfg[i][0] == port && + phy_port_cfg[i][1] == nlanes) { + bbnum = phy_port_cfg[i][2] / 2; + reg = IPU6SE_ISYS_PHY_0_BASE + PHY_BB_AFE_CONFIG(bbnum); + val = readl(base + reg); + val |= phy_port_cfg[i][3]; + writel(val, base + reg); + } + } + + return 0; +} + +static void ipu6_isys_csi2_rx_control(struct ipu6_isys *isys) +{ + void __iomem *base = isys->adev->isp->base; + u32 val, reg; + + reg = CSI2_HUB_GPREG_SIP0_CSI_RX_A_CONTROL; + val = readl(base + reg); + val |= BIT(0); + writel(val, base + CSI2_HUB_GPREG_SIP0_CSI_RX_A_CONTROL); + + reg = CSI2_HUB_GPREG_SIP0_CSI_RX_B_CONTROL; + val = readl(base + reg); + val |= BIT(0); + writel(val, base + CSI2_HUB_GPREG_SIP0_CSI_RX_B_CONTROL); + + reg = CSI2_HUB_GPREG_SIP1_CSI_RX_A_CONTROL; + val = readl(base + reg); + val |= BIT(0); + writel(val, base + CSI2_HUB_GPREG_SIP1_CSI_RX_A_CONTROL); + + reg = CSI2_HUB_GPREG_SIP1_CSI_RX_B_CONTROL; + val = readl(base + reg); + val |= BIT(0); + writel(val, base + CSI2_HUB_GPREG_SIP1_CSI_RX_B_CONTROL); +} + +static int ipu6_isys_csi2_set_port_cfg(struct ipu6_isys *isys, + unsigned int port, unsigned int nlanes) +{ + unsigned int sip = port / 2; + unsigned int index; + + switch (nlanes) { + case 1: + index = 5; + break; + case 2: + index = 6; + break; + case 4: + index = 1; + break; + default: + dev_err(&isys->adev->dev, "lanes nr %u is unsupported\n", + nlanes); + return -EINVAL; + } + + dev_dbg(&isys->adev->dev, "port config for port %u with %u lanes\n", + port, nlanes); + + writel(csi2_port_cfg[index][2], + isys->pdata->base + CSI2_HUB_GPREG_SIP_FB_PORT_CFG(sip)); + + return 0; +} + +static void +ipu6_isys_csi2_set_timing(struct ipu6_isys *isys, + const struct ipu6_isys_csi2_timing *timing, + unsigned int port, unsigned int nlanes) +{ + void __iomem *reg; + u32 port_base; + u32 i; + + port_base = (port % 2) ? 
CSI2_SIP_TOP_CSI_RX_PORT_BASE_1(port) : + CSI2_SIP_TOP_CSI_RX_PORT_BASE_0(port); + + dev_dbg(&isys->adev->dev, + "set timing for port %u base 0x%x with %u lanes\n", + port, port_base, nlanes); + + reg = isys->pdata->base + port_base; + reg += CSI2_SIP_TOP_CSI_RX_DLY_CNT_TERMEN_CLANE; + + writel(timing->ctermen, reg); + + reg = isys->pdata->base + port_base; + reg += CSI2_SIP_TOP_CSI_RX_DLY_CNT_SETTLE_CLANE; + writel(timing->csettle, reg); + + for (i = 0; i < nlanes; i++) { + reg = isys->pdata->base + port_base; + reg += CSI2_SIP_TOP_CSI_RX_DLY_CNT_TERMEN_DLANE(i); + writel(timing->dtermen, reg); + + reg = isys->pdata->base + port_base; + reg += CSI2_SIP_TOP_CSI_RX_DLY_CNT_SETTLE_DLANE(i); + writel(timing->dsettle, reg); + } +} + +#define DPHY_TIMER_INCR 0x28 +int ipu6_isys_jsl_phy_set_power(struct ipu6_isys *isys, + struct ipu6_isys_csi2_config *cfg, + const struct ipu6_isys_csi2_timing *timing, + bool on) +{ + void __iomem *isys_base = isys->pdata->base; + int ret = 0; + u32 nlanes; + u32 port; + + if (!on) + return 0; + + port = cfg->port; + nlanes = cfg->nlanes; + + if (!isys_base || port >= isys->pdata->ipdata->csi2.nports) { + dev_warn(&isys->adev->dev, "invalid port ID %d\n", port); + return -EINVAL; + } + + ipu6_isys_csi2_phy_config_by_port(isys, port, nlanes); + + writel(DPHY_TIMER_INCR, + isys->pdata->base + CSI2_HUB_GPREG_DPHY_TIMER_INCR); + + /* set port cfg and rx timing */ + ipu6_isys_csi2_set_timing(isys, timing, port, nlanes); + + ret = ipu6_isys_csi2_set_port_cfg(isys, port, nlanes); + if (ret) + return ret; + + ipu6_isys_csi2_rx_control(isys); + + return 0; +} diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-mcd-phy.c b/drivers/media/pci/intel/ipu6/ipu6-isys-mcd-phy.c new file mode 100644 index 000000000000..231b360272ed --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-isys-mcd-phy.c @@ -0,0 +1,733 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2013 - 2023 Intel Corporation + */ +#include + +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-buttress.h" +#include "ipu6-isys.h" +#include "ipu6-isys-csi2.h" +#include "ipu6-isys-phy.h" +#include "ipu6-platform-isys-csi2-reg.h" +#include "ipu6-platform-regs.h" +#define LOOP (2000) + +#define CSI_REG_HUB_GPREG_PHY_CTL(id) (CSI_REG_BASE + 0x18008 + (id) * 0x8) +#define CSI_REG_HUB_GPREG_PHY_CTL_RESET BIT(4) +#define CSI_REG_HUB_GPREG_PHY_CTL_PWR_EN BIT(0) +#define CSI_REG_HUB_GPREG_PHY_STATUS(id) (CSI_REG_BASE + 0x1800c + (id) * 0x8) +#define CSI_REG_HUB_GPREG_PHY_STATUS_POWER_ACK BIT(0) +#define CSI_REG_HUB_GPREG_PHY_STATUS_PHY_READY BIT(4) + +/* + * bridge to phy in buttress reg map, each phy has 16 kbytes + * only 2 phys for TGL U and Y + */ +#define IPU6_ISYS_MCD_PHY_BASE(i) (0x10000 + (i) * 0x4000) + +/* + * There are 2 MCD DPHY instances on TGL and 1 MCD DPHY instance on ADL. + * Each MCD PHY has 12-lanes which has 8 data lanes and 4 clock lanes. + * CSI port 1, 3 (5, 7) can support max 2 data lanes. + * CSI port 0, 2 (4, 6) can support max 4 data lanes. + * PHY configurations are PPI based instead of port. 
+ * Left: + * +---------+---------+---------+---------+--------+---------+----------+ + * | | | | | | | | + * | PPI | PPI5 | PPI4 | PPI3 | PPI2 | PPI1 | PPI0 | + * +---------+---------+---------+---------+--------+---------+----------+ + * | | | | | | | | + * | x4 | unused | D3 | D2 | C0 | D0 | D1 | + * |---------+---------+---------+---------+--------+---------+----------+ + * | | | | | | | | + * | x2x2 | C1 | D0 | D1 | C0 | D0 | D1 | + * ----------+---------+---------+---------+--------+---------+----------+ + * | | | | | | | | + * | x2x1 | C1 | D0 | unused | C0 | D0 | D1 | + * +---------+---------+---------+---------+--------+---------+----------+ + * | | | | | | | | + * | x1x1 | C1 | D0 | unused | C0 | D0 | unused | + * +---------+---------+---------+---------+--------+---------+----------+ + * | | | | | | | | + * | x1x2 | C1 | D0 | D1 | C0 | D0 | unused | + * +---------+---------+---------+---------+--------+---------+----------+ + * + * Right: + * +---------+---------+---------+---------+--------+---------+----------+ + * | | | | | | | | + * | PPI | PPI6 | PPI7 | PPI8 | PPI9 | PPI10 | PPI11 | + * +---------+---------+---------+---------+--------+---------+----------+ + * | | | | | | | | + * | x4 | D1 | D0 | C2 | D2 | D3 | unused | + * |---------+---------+---------+---------+--------+---------+----------+ + * | | | | | | | | + * | x2x2 | D1 | D0 | C2 | D1 | D0 | C3 | + * ----------+---------+---------+---------+--------+---------+----------+ + * | | | | | | | | + * | x2x1 | D1 | D0 | C2 | unused | D0 | C3 | + * +---------+---------+---------+---------+--------+---------+----------+ + * | | | | | | | | + * | x1x1 | unused | D0 | C2 | unused | D0 | C3 | + * +---------+---------+---------+---------+--------+---------+----------+ + * | | | | | | | | + * | x1x2 | unused | D0 | C2 | D1 | D0 | C3 | + * +---------+---------+---------+---------+--------+---------+----------+ + * + * ppi mapping per phy : + * + * x4 + x4: + * Left : port0 - PPI range {0, 1, 2, 3, 4} + * Right: port2 - PPI range {6, 7, 8, 9, 10} + * + * x4 + x2x2: + * Left: port0 - PPI range {0, 1, 2, 3, 4} + * Right: port2 - PPI range {6, 7, 8}, port3 - PPI range {9, 10, 11} + * + * x2x2 + x4: + * Left: port0 - PPI range {0, 1, 2}, port1 - PPI range {3, 4, 5} + * Right: port2 - PPI range {6, 7, 8, 9, 10} + * + * x2x2 + x2x2: + * Left : port0 - PPI range {0, 1, 2}, port1 - PPI range {3, 4, 5} + * Right: port2 - PPI range {6, 7, 8}, port3 - PPI range {9, 10, 11} + */ + +struct phy_reg { + u32 reg; + u32 val; +}; + +static const struct phy_reg common_init_regs[] = { + /* for TGL-U, use 0x80000000 */ + {0x00000040, 0x80000000}, + {0x00000044, 0x00a80880}, + {0x00000044, 0x00b80880}, + {0x00000010, 0x0000078c}, + {0x00000344, 0x2f4401e2}, + {0x00000544, 0x924401e2}, + {0x00000744, 0x594401e2}, + {0x00000944, 0x624401e2}, + {0x00000b44, 0xfc4401e2}, + {0x00000d44, 0xc54401e2}, + {0x00000f44, 0x034401e2}, + {0x00001144, 0x8f4401e2}, + {0x00001344, 0x754401e2}, + {0x00001544, 0xe94401e2}, + {0x00001744, 0xcb4401e2}, + {0x00001944, 0xfa4401e2} +}; + +static const struct phy_reg x1_port0_config_regs[] = { + {0x00000694, 0xc80060fa}, + {0x00000680, 0x3d4f78ea}, + {0x00000690, 0x10a0140b}, + {0x000006a8, 0xdf04010a}, + {0x00000700, 0x57050060}, + {0x00000710, 0x0030001c}, + {0x00000738, 0x5f004444}, + {0x0000073c, 0x78464204}, + {0x00000748, 0x7821f940}, + {0x0000074c, 0xb2000433}, + {0x00000494, 0xfe6030fa}, + {0x00000480, 0x29ef5ed0}, + {0x00000490, 0x10a0540b}, + {0x000004a8, 0x7a01010a}, + {0x00000500, 0xef053460}, + {0x00000510, 
0xe030101c}, + {0x00000538, 0xdf808444}, + {0x0000053c, 0xc8422204}, + {0x00000540, 0x0180088c}, + {0x00000574, 0x00000000}, + {0x00000000, 0x00000000} +}; + +static const struct phy_reg x1_port1_config_regs[] = { + {0x00000c94, 0xc80060fa}, + {0x00000c80, 0xcf47abea}, + {0x00000c90, 0x10a0840b}, + {0x00000ca8, 0xdf04010a}, + {0x00000d00, 0x57050060}, + {0x00000d10, 0x0030001c}, + {0x00000d38, 0x5f004444}, + {0x00000d3c, 0x78464204}, + {0x00000d48, 0x7821f940}, + {0x00000d4c, 0xb2000433}, + {0x00000a94, 0xc91030fa}, + {0x00000a80, 0x5a166ed0}, + {0x00000a90, 0x10a0540b}, + {0x00000aa8, 0x5d060100}, + {0x00000b00, 0xef053460}, + {0x00000b10, 0xa030101c}, + {0x00000b38, 0xdf808444}, + {0x00000b3c, 0xc8422204}, + {0x00000b40, 0x0180088c}, + {0x00000b74, 0x00000000}, + {0x00000000, 0x00000000} +}; + +static const struct phy_reg x1_port2_config_regs[] = { + {0x00001294, 0x28f000fa}, + {0x00001280, 0x08130cea}, + {0x00001290, 0x10a0140b}, + {0x000012a8, 0xd704010a}, + {0x00001300, 0x8d050060}, + {0x00001310, 0x0030001c}, + {0x00001338, 0xdf008444}, + {0x0000133c, 0x78422204}, + {0x00001348, 0x7821f940}, + {0x0000134c, 0x5a000433}, + {0x00001094, 0x2d20b0fa}, + {0x00001080, 0xade75dd0}, + {0x00001090, 0x10a0540b}, + {0x000010a8, 0xb101010a}, + {0x00001100, 0x33053460}, + {0x00001110, 0x0030101c}, + {0x00001138, 0xdf808444}, + {0x0000113c, 0xc8422204}, + {0x00001140, 0x8180088c}, + {0x00001174, 0x00000000}, + {0x00000000, 0x00000000} +}; + +static const struct phy_reg x1_port3_config_regs[] = { + {0x00001894, 0xc80060fa}, + {0x00001880, 0x0f90fd6a}, + {0x00001890, 0x10a0840b}, + {0x000018a8, 0xdf04010a}, + {0x00001900, 0x57050060}, + {0x00001910, 0x0030001c}, + {0x00001938, 0x5f004444}, + {0x0000193c, 0x78464204}, + {0x00001948, 0x7821f940}, + {0x0000194c, 0xb2000433}, + {0x00001694, 0x3050d0fa}, + {0x00001680, 0x0ef6d050}, + {0x00001690, 0x10a0540b}, + {0x000016a8, 0xe301010a}, + {0x00001700, 0x69053460}, + {0x00001710, 0xa030101c}, + {0x00001738, 0xdf808444}, + {0x0000173c, 0xc8422204}, + {0x00001740, 0x0180088c}, + {0x00001774, 0x00000000}, + {0x00000000, 0x00000000} +}; + +static const struct phy_reg x2_port0_config_regs[] = { + {0x00000694, 0xc80060fa}, + {0x00000680, 0x3d4f78ea}, + {0x00000690, 0x10a0140b}, + {0x000006a8, 0xdf04010a}, + {0x00000700, 0x57050060}, + {0x00000710, 0x0030001c}, + {0x00000738, 0x5f004444}, + {0x0000073c, 0x78464204}, + {0x00000748, 0x7821f940}, + {0x0000074c, 0xb2000433}, + {0x00000494, 0xc80060fa}, + {0x00000480, 0x29ef5ed8}, + {0x00000490, 0x10a0540b}, + {0x000004a8, 0x7a01010a}, + {0x00000500, 0xef053460}, + {0x00000510, 0xe030101c}, + {0x00000538, 0xdf808444}, + {0x0000053c, 0xc8422204}, + {0x00000540, 0x0180088c}, + {0x00000574, 0x00000000}, + {0x00000294, 0xc80060fa}, + {0x00000280, 0xcb45b950}, + {0x00000290, 0x10a0540b}, + {0x000002a8, 0x8c01010a}, + {0x00000300, 0xef053460}, + {0x00000310, 0x8030101c}, + {0x00000338, 0x41808444}, + {0x0000033c, 0x32422204}, + {0x00000340, 0x0180088c}, + {0x00000374, 0x00000000}, + {0x00000000, 0x00000000} +}; + +static const struct phy_reg x2_port1_config_regs[] = { + {0x00000c94, 0xc80060fa}, + {0x00000c80, 0xcf47abea}, + {0x00000c90, 0x10a0840b}, + {0x00000ca8, 0xdf04010a}, + {0x00000d00, 0x57050060}, + {0x00000d10, 0x0030001c}, + {0x00000d38, 0x5f004444}, + {0x00000d3c, 0x78464204}, + {0x00000d48, 0x7821f940}, + {0x00000d4c, 0xb2000433}, + {0x00000a94, 0xc80060fa}, + {0x00000a80, 0x5a166ed8}, + {0x00000a90, 0x10a0540b}, + {0x00000aa8, 0x7a01010a}, + {0x00000b00, 0xef053460}, + {0x00000b10, 0xa030101c}, + {0x00000b38, 
0xdf808444}, + {0x00000b3c, 0xc8422204}, + {0x00000b40, 0x0180088c}, + {0x00000b74, 0x00000000}, + {0x00000894, 0xc80060fa}, + {0x00000880, 0x4d4f21d0}, + {0x00000890, 0x10a0540b}, + {0x000008a8, 0x5601010a}, + {0x00000900, 0xef053460}, + {0x00000910, 0x8030101c}, + {0x00000938, 0xdf808444}, + {0x0000093c, 0xc8422204}, + {0x00000940, 0x0180088c}, + {0x00000974, 0x00000000}, + {0x00000000, 0x00000000} +}; + +static const struct phy_reg x2_port2_config_regs[] = { + {0x00001294, 0xc80060fa}, + {0x00001280, 0x08130cea}, + {0x00001290, 0x10a0140b}, + {0x000012a8, 0xd704010a}, + {0x00001300, 0x8d050060}, + {0x00001310, 0x0030001c}, + {0x00001338, 0xdf008444}, + {0x0000133c, 0x78422204}, + {0x00001348, 0x7821f940}, + {0x0000134c, 0x5a000433}, + {0x00001094, 0xc80060fa}, + {0x00001080, 0xade75dd8}, + {0x00001090, 0x10a0540b}, + {0x000010a8, 0xb101010a}, + {0x00001100, 0x33053460}, + {0x00001110, 0x0030101c}, + {0x00001138, 0xdf808444}, + {0x0000113c, 0xc8422204}, + {0x00001140, 0x8180088c}, + {0x00001174, 0x00000000}, + {0x00000e94, 0xc80060fa}, + {0x00000e80, 0x0fbf16d0}, + {0x00000e90, 0x10a0540b}, + {0x00000ea8, 0x7a01010a}, + {0x00000f00, 0xf5053460}, + {0x00000f10, 0xc030101c}, + {0x00000f38, 0xdf808444}, + {0x00000f3c, 0xc8422204}, + {0x00000f40, 0x8180088c}, + {0x00000000, 0x00000000} +}; + +static const struct phy_reg x2_port3_config_regs[] = { + {0x00001894, 0xc80060fa}, + {0x00001880, 0x0f90fd6a}, + {0x00001890, 0x10a0840b}, + {0x000018a8, 0xdf04010a}, + {0x00001900, 0x57050060}, + {0x00001910, 0x0030001c}, + {0x00001938, 0x5f004444}, + {0x0000193c, 0x78464204}, + {0x00001948, 0x7821f940}, + {0x0000194c, 0xb2000433}, + {0x00001694, 0xc80060fa}, + {0x00001680, 0x0ef6d058}, + {0x00001690, 0x10a0540b}, + {0x000016a8, 0x7a01010a}, + {0x00001700, 0x69053460}, + {0x00001710, 0xa030101c}, + {0x00001738, 0xdf808444}, + {0x0000173c, 0xc8422204}, + {0x00001740, 0x0180088c}, + {0x00001774, 0x00000000}, + {0x00001494, 0xc80060fa}, + {0x00001480, 0xf9d34bd0}, + {0x00001490, 0x10a0540b}, + {0x000014a8, 0x7a01010a}, + {0x00001500, 0x1b053460}, + {0x00001510, 0x0030101c}, + {0x00001538, 0xdf808444}, + {0x0000153c, 0xc8422204}, + {0x00001540, 0x8180088c}, + {0x00001574, 0x00000000}, + {0x00000000, 0x00000000} +}; + +static const struct phy_reg x4_port0_config_regs[] = { + {0x00000694, 0xc80060fa}, + {0x00000680, 0x3d4f78fa}, + {0x00000690, 0x10a0140b}, + {0x000006a8, 0xdf04010a}, + {0x00000700, 0x57050060}, + {0x00000710, 0x0030001c}, + {0x00000738, 0x5f004444}, + {0x0000073c, 0x78464204}, + {0x00000748, 0x7821f940}, + {0x0000074c, 0xb2000433}, + {0x00000494, 0xfe6030fa}, + {0x00000480, 0x29ef5ed8}, + {0x00000490, 0x10a0540b}, + {0x000004a8, 0x7a01010a}, + {0x00000500, 0xef053460}, + {0x00000510, 0xe030101c}, + {0x00000538, 0xdf808444}, + {0x0000053c, 0xc8422204}, + {0x00000540, 0x0180088c}, + {0x00000574, 0x00000004}, + {0x00000294, 0x23e030fa}, + {0x00000280, 0xcb45b950}, + {0x00000290, 0x10a0540b}, + {0x000002a8, 0x8c01010a}, + {0x00000300, 0xef053460}, + {0x00000310, 0x8030101c}, + {0x00000338, 0x41808444}, + {0x0000033c, 0x32422204}, + {0x00000340, 0x0180088c}, + {0x00000374, 0x00000004}, + {0x00000894, 0x5620b0fa}, + {0x00000880, 0x4d4f21dc}, + {0x00000890, 0x10a0540b}, + {0x000008a8, 0x5601010a}, + {0x00000900, 0xef053460}, + {0x00000910, 0x8030101c}, + {0x00000938, 0xdf808444}, + {0x0000093c, 0xc8422204}, + {0x00000940, 0x0180088c}, + {0x00000974, 0x00000004}, + {0x00000a94, 0xc91030fa}, + {0x00000a80, 0x5a166ecc}, + {0x00000a90, 0x10a0540b}, + {0x00000aa8, 0x5d01010a}, + {0x00000b00, 
0xef053460}, + {0x00000b10, 0xa030101c}, + {0x00000b38, 0xdf808444}, + {0x00000b3c, 0xc8422204}, + {0x00000b40, 0x0180088c}, + {0x00000b74, 0x00000004}, + {0x00000000, 0x00000000} +}; + +static const struct phy_reg x4_port1_config_regs[] = { + {0x00000000, 0x00000000} +}; + +static const struct phy_reg x4_port2_config_regs[] = { + {0x00001294, 0x28f000fa}, + {0x00001280, 0x08130cfa}, + {0x00001290, 0x10c0140b}, + {0x000012a8, 0xd704010a}, + {0x00001300, 0x8d050060}, + {0x00001310, 0x0030001c}, + {0x00001338, 0xdf008444}, + {0x0000133c, 0x78422204}, + {0x00001348, 0x7821f940}, + {0x0000134c, 0x5a000433}, + {0x00001094, 0x2d20b0fa}, + {0x00001080, 0xade75dd8}, + {0x00001090, 0x10a0540b}, + {0x000010a8, 0xb101010a}, + {0x00001100, 0x33053460}, + {0x00001110, 0x0030101c}, + {0x00001138, 0xdf808444}, + {0x0000113c, 0xc8422204}, + {0x00001140, 0x8180088c}, + {0x00001174, 0x00000004}, + {0x00000e94, 0xd308d0fa}, + {0x00000e80, 0x0fbf16d0}, + {0x00000e90, 0x10a0540b}, + {0x00000ea8, 0x2c01010a}, + {0x00000f00, 0xf5053460}, + {0x00000f10, 0xc030101c}, + {0x00000f38, 0xdf808444}, + {0x00000f3c, 0xc8422204}, + {0x00000f40, 0x8180088c}, + {0x00000f74, 0x00000004}, + {0x00001494, 0x136850fa}, + {0x00001480, 0xf9d34bdc}, + {0x00001490, 0x10a0540b}, + {0x000014a8, 0x5a01010a}, + {0x00001500, 0x1b053460}, + {0x00001510, 0x0030101c}, + {0x00001538, 0xdf808444}, + {0x0000153c, 0xc8422204}, + {0x00001540, 0x8180088c}, + {0x00001574, 0x00000004}, + {0x00001694, 0x3050d0fa}, + {0x00001680, 0x0ef6d04c}, + {0x00001690, 0x10a0540b}, + {0x000016a8, 0xe301010a}, + {0x00001700, 0x69053460}, + {0x00001710, 0xa030101c}, + {0x00001738, 0xdf808444}, + {0x0000173c, 0xc8422204}, + {0x00001740, 0x0180088c}, + {0x00001774, 0x00000004}, + {0x00000000, 0x00000000} +}; + +static const struct phy_reg x4_port3_config_regs[] = { + {0x00000000, 0x00000000} +}; + +static const struct phy_reg *x1_config_regs[4] = { + x1_port0_config_regs, + x1_port1_config_regs, + x1_port2_config_regs, + x1_port3_config_regs +}; + +static const struct phy_reg *x2_config_regs[4] = { + x2_port0_config_regs, + x2_port1_config_regs, + x2_port2_config_regs, + x2_port3_config_regs +}; + +static const struct phy_reg *x4_config_regs[4] = { + x4_port0_config_regs, + x4_port1_config_regs, + x4_port2_config_regs, + x4_port3_config_regs +}; + +static const struct phy_reg **config_regs[3] = { + x1_config_regs, + x2_config_regs, + x4_config_regs, +}; + +static int ipu6_isys_mcd_phy_powerup_ack(struct ipu6_isys *isys, + unsigned int phy_id) +{ + void __iomem *isys_base = isys->pdata->base; + unsigned int i; + u32 val; + + val = readl(isys_base + CSI_REG_HUB_GPREG_PHY_CTL(phy_id)); + val |= CSI_REG_HUB_GPREG_PHY_CTL_PWR_EN; + writel(val, isys_base + CSI_REG_HUB_GPREG_PHY_CTL(phy_id)); + + for (i = 0; i < LOOP; i++) { + if (readl(isys_base + CSI_REG_HUB_GPREG_PHY_STATUS(phy_id)) & + CSI_REG_HUB_GPREG_PHY_STATUS_POWER_ACK) + return 0; + usleep_range(100, 200); + } + + dev_warn(&isys->adev->dev, "PHY%d powerup ack timeout", phy_id); + + return -ETIMEDOUT; +} + +static int ipu6_isys_mcd_phy_powerdown_ack(struct ipu6_isys *isys, + unsigned int phy_id) +{ + void __iomem *isys_base = isys->pdata->base; + unsigned int i; + u32 val; + + writel(0, isys_base + CSI_REG_HUB_GPREG_PHY_CTL(phy_id)); + for (i = 0; i < LOOP; i++) { + usleep_range(10, 20); + val = readl(isys_base + CSI_REG_HUB_GPREG_PHY_STATUS(phy_id)); + if (!(val & CSI_REG_HUB_GPREG_PHY_STATUS_POWER_ACK)) + return 0; + } + + dev_warn(&isys->adev->dev, "PHY %d poweroff ack timeout.\n", phy_id); + + return 
-ETIMEDOUT; +} + +static int ipu6_isys_mcd_phy_reset(struct ipu6_isys *isys, unsigned int phy_id, + bool assert) +{ + void __iomem *isys_base = isys->pdata->base; + u32 val; + + val = readl(isys_base + CSI_REG_HUB_GPREG_PHY_CTL(phy_id)); + if (assert) + val |= CSI_REG_HUB_GPREG_PHY_CTL_RESET; + else + val &= ~(CSI_REG_HUB_GPREG_PHY_CTL_RESET); + + writel(val, isys_base + CSI_REG_HUB_GPREG_PHY_CTL(phy_id)); + + return 0; +} + +static int ipu6_isys_mcd_phy_ready(struct ipu6_isys *isys, unsigned int phy_id) +{ + void __iomem *isys_base = isys->pdata->base; + unsigned int i; + u32 val; + + for (i = 0; i < LOOP; i++) { + val = readl(isys_base + CSI_REG_HUB_GPREG_PHY_STATUS(phy_id)); + dev_dbg(&isys->adev->dev, "PHY%d ready status 0x%x\n", + phy_id, val); + if (val & CSI_REG_HUB_GPREG_PHY_STATUS_PHY_READY) + return 0; + usleep_range(10, 20); + } + + dev_warn(&isys->adev->dev, "PHY%d ready timeout\n", phy_id); + + return -ETIMEDOUT; +} + +static int ipu6_isys_mcd_phy_common_init(struct ipu6_isys *isys) +{ + struct ipu6_bus_device *adev = to_ipu6_bus_device(&isys->adev->dev); + struct ipu6_device *isp = adev->isp; + void __iomem *isp_base = isp->base; + struct sensor_async_subdev *s_asd; + struct v4l2_async_subdev *asd; + void __iomem *phy_base; + unsigned int phy_id; + unsigned int i; + + list_for_each_entry(asd, &isys->notifier.asd_list, asd_list) { + s_asd = container_of(asd, struct sensor_async_subdev, asd); + phy_id = s_asd->csi2.port / 4; + phy_base = isp_base + IPU6_ISYS_MCD_PHY_BASE(phy_id); + + for (i = 0; i < ARRAY_SIZE(common_init_regs); i++) { + writel(common_init_regs[i].val, + phy_base + common_init_regs[i].reg); + } + } + + return 0; +} + +static int ipu6_isys_driver_port_to_phy_port(struct ipu6_isys_csi2_config *cfg) +{ + int phy_port; + int ret; + + if (!(cfg->nlanes == 4 || cfg->nlanes == 2 || cfg->nlanes == 1)) + return -EINVAL; + + /* B,F -> C0 A,E -> C1 C,G -> C2 D,H -> C3 */ + /* normalize driver port number */ + phy_port = cfg->port % 4; + + /* swap port number only for A and B */ + if (phy_port == 0) + phy_port = 1; + else if (phy_port == 1) + phy_port = 0; + + ret = phy_port; + + /* check validity per lane configuration */ + if (cfg->nlanes == 4 && + !(phy_port == 0 || phy_port == 2)) + ret = -EINVAL; + else if ((cfg->nlanes == 2 || cfg->nlanes == 1) && + !(phy_port >= 0 && phy_port <= 3)) + ret = -EINVAL; + + return ret; +} + +static int ipu6_isys_mcd_phy_config(struct ipu6_isys *isys) +{ + struct ipu6_bus_device *adev = to_ipu6_bus_device(&isys->adev->dev); + const struct phy_reg **phy_config_regs; + struct ipu6_device *isp = adev->isp; + void __iomem *isp_base = isp->base; + struct sensor_async_subdev *s_asd; + struct ipu6_isys_csi2_config cfg; + struct v4l2_async_subdev *asd; + unsigned int i, phy_id; + int phy_port; + void __iomem *phy_base; + + list_for_each_entry(asd, &isys->notifier.asd_list, asd_list) { + s_asd = container_of(asd, struct sensor_async_subdev, asd); + cfg.port = s_asd->csi2.port; + cfg.nlanes = s_asd->csi2.nlanes; + phy_port = ipu6_isys_driver_port_to_phy_port(&cfg); + if (phy_port < 0) { + dev_err(&isys->adev->dev, "invalid port %d for lane %d", + cfg.port, cfg.nlanes); + return -ENXIO; + } + + phy_id = cfg.port / 4; + phy_base = isp_base + IPU6_ISYS_MCD_PHY_BASE(phy_id); + dev_dbg(&isys->adev->dev, "port%d PHY%u lanes %u\n", + cfg.port, phy_id, cfg.nlanes); + + /* nlanes 1, 2 or 4 selects the x1, x2 or x4 register table */ + phy_config_regs = config_regs[cfg.nlanes / 2]; + cfg.port = phy_port; + for (i = 0; phy_config_regs[cfg.port][i].reg; i++) { + writel(phy_config_regs[cfg.port][i].val, + phy_base + 
phy_config_regs[cfg.port][i].reg); + } + } + + return 0; +} + +#define CSI_MCD_PHY_NUM 2 +static refcount_t phy_power_ref_count[CSI_MCD_PHY_NUM]; + +int ipu6_isys_mcd_phy_set_power(struct ipu6_isys *isys, + struct ipu6_isys_csi2_config *cfg, + const struct ipu6_isys_csi2_timing *timing, + bool on) +{ + void __iomem *isys_base = isys->pdata->base; + unsigned int port, phy_id; + refcount_t *ref; + int ret = 0; + + port = cfg->port; + phy_id = port / 4; + + ref = &phy_power_ref_count[phy_id]; + + dev_dbg(&isys->adev->dev, "for phy %d port %d, lanes: %d\n", + phy_id, port, cfg->nlanes); + + if (!isys_base || port >= isys->pdata->ipdata->csi2.nports) { + dev_warn(&isys->adev->dev, "invalid port ID %d\n", port); + return -EINVAL; + } + + if (on) { + if (refcount_read(ref)) { + dev_dbg(&isys->adev->dev, "for phy %d is already UP", + phy_id); + refcount_inc(ref); + return 0; + } + + ret = ipu6_isys_mcd_phy_powerup_ack(isys, phy_id); + if (ret) + return ret; + + ipu6_isys_mcd_phy_reset(isys, phy_id, 0); + ipu6_isys_mcd_phy_common_init(isys); + + ret = ipu6_isys_mcd_phy_config(isys); + if (ret) + return ret; + + ipu6_isys_mcd_phy_reset(isys, phy_id, 1); + ret = ipu6_isys_mcd_phy_ready(isys, phy_id); + if (ret) + return ret; + + refcount_set(ref, 1); + return 0; + } + + if (refcount_dec_and_test(ref)) + ret = ipu6_isys_mcd_phy_powerdown_ack(isys, phy_id); + if (ret) + dev_err(&isys->adev->dev, "phy poweroff failed!"); + + return ret; +} diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-phy.h b/drivers/media/pci/intel/ipu6/ipu6-isys-phy.h new file mode 100644 index 000000000000..3b7a160b96cd --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-isys-phy.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2013 - 2023 Intel Corporation + */ + +#ifndef IPU6_ISYS_PHY_H +#define IPU6_ISYS_PHY_H + +int ipu6_isys_mcd_phy_set_power(struct ipu6_isys *isys, + struct ipu6_isys_csi2_config *cfg, + const struct ipu6_isys_csi2_timing *timing, + bool on); + +int ipu6_isys_dwc_phy_set_power(struct ipu6_isys *isys, + struct ipu6_isys_csi2_config *cfg, + const struct ipu6_isys_csi2_timing *timing, + bool on); + +int ipu6_isys_jsl_phy_set_power(struct ipu6_isys *isys, + struct ipu6_isys_csi2_config *cfg, + const struct ipu6_isys_csi2_timing *timing, + bool on); + +#endif From patchwork Thu Apr 13 10:04:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bingbu Cao X-Patchwork-Id: 673044 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D2737C77B6C for ; Thu, 13 Apr 2023 09:55:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229992AbjDMJzc (ORCPT ); Thu, 13 Apr 2023 05:55:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45700 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230095AbjDMJza (ORCPT ); Thu, 13 Apr 2023 05:55:30 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 101176584 for ; Thu, 13 Apr 2023 02:55:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1681379716; x=1712915716; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; 
bh=d9aLcFNMyT8cnhH1MScqA3K4l3h+OUiIdv0gEpAf+q8=; b=VfDICy9zq08mggW7HjN/uSYQIngyzv8RRS+7ipTVkixkmhg+F1E77kTb Oy5kPQ+GMw8cU0RAi/R8PQPTXmC8/J0O6fFR0+9CR011oHNwME5hgOJqT UuyrBaTJ8mq0VDwAu3+b4LvhoehFZBbZpxIsNId+d+zLX0ggDu+M/K/Wt 3Y8o2Q0RlfuWFKzMV/a3FJ0ASC7NLiMTO+7P2C0wYTvAwf4lSHQyOgcil Kaf4syZc9pL15F2SjQC7VgpwSm36P7mCwSRbWnL+A2BbdmHaGKv5yWu6u JbG7rgZGLMkLEdRJVtccV3mTQgqt+WWzVkqsNgCn2SnGUv4fMuPp5mm39 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="371993078" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="371993078" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2023 02:55:09 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="639600153" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="639600153" Received: from icg-kernel3.bj.intel.com ([172.16.126.100]) by orsmga003.jf.intel.com with ESMTP; 13 Apr 2023 02:55:05 -0700 From: bingbu.cao@intel.com To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com Subject: [RFC PATCH 10/14] media: intel/ipu6: add input system driver Date: Thu, 13 Apr 2023 18:04:25 +0800 Message-Id: <20230413100429.919622-11-bingbu.cao@intel.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230413100429.919622-1-bingbu.cao@intel.com> References: <20230413100429.919622-1-bingbu.cao@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Bingbu Cao The input system driver does the basic ISYS hardware setup and IRQ handling, and works with fwnode and V4L2 to register the ISYS V4L2 devices.
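For reviewers less familiar with the v4l2-async plumbing, the registration flow described above can be condensed into the sketch below. It is not part of the patch: the example_* names are invented, the two media headers are the only assumed includes, and the body simply restates what isys_fwnode_parse() and isys_notifier_init() in this patch do with the standard v4l2-async helpers.

#include <media/v4l2-async.h>
#include <media/v4l2-fwnode.h>

static int example_fwnode_parse(struct device *dev,
				struct v4l2_fwnode_endpoint *vep,
				struct v4l2_async_subdev *asd)
{
	struct sensor_async_subdev *s_asd =
		container_of(asd, struct sensor_async_subdev, asd);

	/* remember which CSI-2 port the sensor sits on and its lane count */
	s_asd->csi2.port = vep->base.port;
	s_asd->csi2.nlanes = vep->bus.mipi_csi2.num_data_lanes;

	return 0;
}

static int example_notifier_init(struct ipu6_isys *isys)
{
	int ret;

	v4l2_async_nf_init(&isys->notifier);

	/* allocate one sensor_async_subdev per fwnode endpoint of the IPU */
	ret = v4l2_async_nf_parse_fwnode_endpoints(&isys->adev->isp->pdev->dev,
						   &isys->notifier,
						   sizeof(struct sensor_async_subdev),
						   example_fwnode_parse);
	if (ret < 0)
		return ret;

	/* .bound() later links each matched sensor to its CSI-2 receiver */
	isys->notifier.ops = &isys_async_ops;
	return v4l2_async_nf_register(&isys->v4l2_dev, &isys->notifier);
}

The real isys_notifier_init() below additionally tolerates an empty endpoint list so that probing can continue without sensors; the sketch leaves that out for brevity.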
Signed-off-by: Bingbu Cao --- drivers/media/pci/intel/ipu6/ipu6-isys.c | 1325 ++++++++++++++++++++++ drivers/media/pci/intel/ipu6/ipu6-isys.h | 190 ++++ 2 files changed, 1515 insertions(+) create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys.h diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys.c b/drivers/media/pci/intel/ipu6/ipu6-isys.c new file mode 100644 index 000000000000..cc3dd06d243d --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-isys.c @@ -0,0 +1,1325 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2013 - 2023 Intel Corporation + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-buttress.h" +#include "ipu6-cpd.h" +#include "ipu6-dma.h" +#include "ipu6-isys.h" +#include "ipu6-isys-csi2.h" +#include "ipu6-isys-phy.h" +#include "ipu6-isys-video.h" +#include "ipu6-mmu.h" +#include "ipu6-platform.h" +#include "ipu6-platform-buttress-regs.h" +#include "ipu6-platform-isys-csi2-reg.h" +#include "ipu6-platform-regs.h" + +#define IPU6_BUTTRESS_FABIC_CONTROL 0x68 +#define GDA_ENABLE_IWAKE_INDEX 2 +#define GDA_IWAKE_THRESHOLD_INDEX 1 +#define GDA_IRQ_CRITICAL_THRESHOLD_INDEX 0 +#define GDA_MEMOPEN_THRESHOLD_INDEX 3 +#define DEFAULT_DID_RATIO 90 +#define DEFAULT_IWAKE_THRESHOLD 0x42 +#define DEFAULT_MEM_OPEN_TIME 10 +#define ONE_THOUSAND_MICROSECOND 1000 +/* One page is 2KB, 8 x 16 x 16 = 2048B = 2KB */ +#define ISF_DMA_TOP_GDA_PROFERTY_PAGE_SIZE 0x800 + +/* LTR & DID value are 10 bit at most */ +#define LTR_DID_VAL_MAX 1023 +#define LTR_DEFAULT_VALUE 0x70503C19 +#define FILL_TIME_DEFAULT_VALUE 0xFFF0783C +#define LTR_DID_PKGC_2R 20 +#define LTR_SCALE_DEFAULT 5 +#define LTR_SCALE_1024NS 2 +#define DID_SCALE_1US 2 +#define DID_SCALE_32US 3 +#define REG_PKGC_PMON_CFG 0xB00 + +#define VAL_PKGC_PMON_CFG_RESET 0x38 +#define VAL_PKGC_PMON_CFG_START 0x7 + +#define IS_PIXEL_BUFFER_PAGES 0x80 +/* when iwake mode is disabled, the critical threshold is statically set to 75% + * of the IS pixel buffer criticalThreshold = (128 * 3) / 4 + */ +#define CRITICAL_THRESHOLD_IWAKE_DISABLE (IS_PIXEL_BUFFER_PAGES * 3 / 4) + +union fabric_ctrl { + struct { + u16 ltr_val : 10; + u16 ltr_scale : 3; + u16 reserved : 3; + u16 did_val : 10; + u16 did_scale : 3; + u16 reserved2 : 1; + u16 keep_power_in_D0 : 1; + u16 keep_power_override : 1; + } bits; + u32 value; +}; + +enum ltr_did_type { + LTR_IWAKE_ON, + LTR_IWAKE_OFF, + LTR_ISYS_ON, + LTR_ISYS_OFF, + LTR_ENHANNCE_IWAKE, + LTR_TYPE_MAX +}; + +#define ISYS_PM_QOS_VALUE 300 + +static int +isys_complete_ext_device_registration(struct ipu6_isys *isys, + struct v4l2_subdev *sd, + struct ipu6_isys_csi2_config *csi2) +{ + unsigned int i; + int ret; + + for (i = 0; i < sd->entity.num_pads; i++) { + if (sd->entity.pads[i].flags & MEDIA_PAD_FL_SOURCE) + break; + } + + if (i == sd->entity.num_pads) { + dev_warn(&isys->adev->dev, "no src pad in external entity\n"); + ret = -ENOENT; + goto unregister_subdev; + } + + ret = media_create_pad_link(&sd->entity, i, + &isys->csi2[csi2->port].asd.sd.entity, + 0, 0); + if (ret) { + dev_warn(&isys->adev->dev, "can't create link\n"); + goto unregister_subdev; + } + + isys->csi2[csi2->port].nlanes = csi2->nlanes; + + return 0; + +unregister_subdev: + v4l2_device_unregister_subdev(sd); + + return ret; +} + +static void isys_stream_init(struct ipu6_isys *isys) +{ + u32 i; + + for 
(i = 0; i < IPU6_ISYS_MAX_STREAMS; i++) { + mutex_init(&isys->streams[i].mutex); + init_completion(&isys->streams[i].stream_open_completion); + init_completion(&isys->streams[i].stream_close_completion); + init_completion(&isys->streams[i].stream_start_completion); + init_completion(&isys->streams[i].stream_stop_completion); + INIT_LIST_HEAD(&isys->streams[i].queues); + isys->streams[i].isys = isys; + isys->streams[i].stream_handle = i; + } +} + +static void isys_csi2_unregister_subdevices(struct ipu6_isys *isys) +{ + const struct ipu6_isys_internal_csi2_pdata *csi2 = + &isys->pdata->ipdata->csi2; + unsigned int i; + + for (i = 0; i < csi2->nports; i++) + ipu6_isys_csi2_cleanup(&isys->csi2[i]); +} + +static int isys_csi2_register_subdevices(struct ipu6_isys *isys) +{ + const struct ipu6_isys_internal_csi2_pdata *csi2_pdata = + &isys->pdata->ipdata->csi2; + unsigned int i; + int ret; + + isys->csi2 = devm_kcalloc(&isys->adev->dev, csi2_pdata->nports, + sizeof(*isys->csi2), GFP_KERNEL); + if (!isys->csi2) { + ret = -ENOMEM; + goto fail; + } + + for (i = 0; i < csi2_pdata->nports; i++) { + ret = ipu6_isys_csi2_init(&isys->csi2[i], isys, + isys->pdata->base + + csi2_pdata->offsets[i], i); + if (ret) + goto fail; + + isys->isr_csi2_bits |= IPU6_ISYS_UNISPART_IRQ_CSI2(i); + } + + return 0; + +fail: + while (i--) + ipu6_isys_csi2_cleanup(&isys->csi2[i]); + + return ret; +} + +static int isys_csi2_create_media_links(struct ipu6_isys *isys) +{ + const struct ipu6_isys_internal_csi2_pdata *csi2_pdata = + &isys->pdata->ipdata->csi2; + struct ipu6_isys_csi2 *csi2; + unsigned int i, j; + int ret; + + for (i = 0; i < csi2_pdata->nports; i++) { + for (j = 0; j < NR_OF_VIDEO_DEVICE; j++) { + csi2 = &isys->csi2[i]; + ret = media_create_pad_link(&csi2->asd.sd.entity, + CSI2_PAD_SOURCE, + &isys->av[j].vdev.entity, + 0, MEDIA_LNK_FL_DYNAMIC); + if (ret) { + dev_info(&isys->adev->dev, + "CSI2 can't create link\n"); + return ret; + } + } + } + + return 0; +} + +static void isys_unregister_video_devices(struct ipu6_isys *isys) +{ + unsigned int i; + + for (i = 0; i < NR_OF_VIDEO_DEVICE; i++) + ipu6_isys_video_cleanup(&isys->av[i]); +} + +static int isys_register_video_devices(struct ipu6_isys *isys) +{ + unsigned int i; + int ret; + + for (i = 0; i < NR_OF_VIDEO_DEVICE; i++) { + snprintf(isys->av[i].vdev.name, sizeof(isys->av[i].vdev.name), + IPU6_ISYS_ENTITY_PREFIX " ISYS Capture %u", i); + isys->av[i].isys = isys; + isys->av[i].aq.buf_prepare = ipu6_isys_buf_prepare; + isys->av[i].aq.fill_frame_buf_set = + ipu6_isys_buf_to_fw_frame_buf_pin; + isys->av[i].aq.link_fmt_validate = ipu6_isys_link_fmt_validate; + isys->av[i].aq.vbq.buf_struct_size = + sizeof(struct ipu6_isys_video_buffer); + isys->av[i].pfmt = &ipu6_isys_pfmts[0]; + + ret = ipu6_isys_video_init(&isys->av[i]); + if (ret) + goto fail; + } + + return 0; + +fail: + while (i--) + ipu6_isys_video_cleanup(&isys->av[i]); + + return ret; +} + +void isys_setup_hw(struct ipu6_isys *isys) +{ + void __iomem *base = isys->pdata->base; + const u8 *thd = isys->pdata->ipdata->hw_variant.cdc_fifo_threshold; + u32 irqs = 0; + unsigned int i, nports; + + nports = isys->pdata->ipdata->csi2.nports; + + /* Enable irqs for all MIPI ports */ + for (i = 0; i < nports; i++) + irqs |= IPU6_ISYS_UNISPART_IRQ_CSI2(i); + + writel(irqs, base + isys->pdata->ipdata->csi2.ctrl0_irq_edge); + writel(irqs, base + isys->pdata->ipdata->csi2.ctrl0_irq_lnp); + writel(irqs, base + isys->pdata->ipdata->csi2.ctrl0_irq_mask); + writel(irqs, base + isys->pdata->ipdata->csi2.ctrl0_irq_enable); + 
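+ /* clear any stale CSI2 ctrl0 IRQ status (bits 19:0) */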
writel(GENMASK(19, 0), + base + isys->pdata->ipdata->csi2.ctrl0_irq_clear); + + irqs = ISYS_UNISPART_IRQS; + writel(irqs, base + IPU6_REG_ISYS_UNISPART_IRQ_EDGE); + writel(irqs, base + IPU6_REG_ISYS_UNISPART_IRQ_LEVEL_NOT_PULSE); + writel(GENMASK(28, 0), base + IPU6_REG_ISYS_UNISPART_IRQ_CLEAR); + writel(irqs, base + IPU6_REG_ISYS_UNISPART_IRQ_MASK); + writel(irqs, base + IPU6_REG_ISYS_UNISPART_IRQ_ENABLE); + + writel(0, base + IPU6_REG_ISYS_UNISPART_SW_IRQ_REG); + writel(0, base + IPU6_REG_ISYS_UNISPART_SW_IRQ_MUX_REG); + + /* Write CDC FIFO threshold values for isys */ + for (i = 0; i < isys->pdata->ipdata->hw_variant.cdc_fifos; i++) + writel(thd[i], base + IPU6_REG_ISYS_CDC_THRESHOLD(i)); +} + +static void ipu6_isys_csi2_isr(struct ipu6_isys_csi2 *csi2) +{ + struct ipu6_isys_stream *stream; + u32 status; + + ipu6_isys_register_errors(csi2); + + status = readl(csi2->base + CSI_PORT_REG_BASE_IRQ_CSI_SYNC + + CSI_PORT_REG_BASE_IRQ_STATUS_OFFSET); + + writel(status, csi2->base + CSI_PORT_REG_BASE_IRQ_CSI_SYNC + + CSI_PORT_REG_BASE_IRQ_CLEAR_OFFSET); + + stream = ipu6_isys_query_stream_by_source(csi2->isys, csi2->asd.source); + if (!stream) + return; + + if (status & IPU6_CSI_RX_IRQ_FS_VC) + ipu6_isys_csi2_sof_event_by_stream(stream); + if (status & IPU6_CSI_RX_IRQ_FE_VC) + ipu6_isys_csi2_eof_event_by_stream(stream); + + ipu6_isys_put_stream(stream); +} + +irqreturn_t isys_isr(struct ipu6_bus_device *adev) +{ + struct ipu6_isys *isys = ipu6_bus_get_drvdata(adev); + void __iomem *base = isys->pdata->base; + u32 status_sw, status_csi; + u32 ctrl0_status, ctrl0_clear; + + spin_lock(&isys->power_lock); + if (!isys->power) { + spin_unlock(&isys->power_lock); + return IRQ_NONE; + } + + ctrl0_status = isys->pdata->ipdata->csi2.ctrl0_irq_status; + ctrl0_clear = isys->pdata->ipdata->csi2.ctrl0_irq_clear; + + status_csi = readl(isys->pdata->base + ctrl0_status); + status_sw = readl(isys->pdata->base + + IPU6_REG_ISYS_UNISPART_IRQ_STATUS); + + writel(ISYS_UNISPART_IRQS & ~IPU6_ISYS_UNISPART_IRQ_SW, + base + IPU6_REG_ISYS_UNISPART_IRQ_MASK); + + do { + writel(status_csi, isys->pdata->base + ctrl0_clear); + + writel(status_sw, isys->pdata->base + + IPU6_REG_ISYS_UNISPART_IRQ_CLEAR); + + if (isys->isr_csi2_bits & status_csi) { + unsigned int i; + + for (i = 0; i < isys->pdata->ipdata->csi2.nports; i++) { + /* irq from not enabled port */ + if (!isys->csi2[i].base) + continue; + if (status_csi & IPU6_ISYS_UNISPART_IRQ_CSI2(i)) + ipu6_isys_csi2_isr(&isys->csi2[i]); + } + } + + writel(0, base + IPU6_REG_ISYS_UNISPART_SW_IRQ_REG); + + if (!isys_isr_one(adev)) + status_sw = IPU6_ISYS_UNISPART_IRQ_SW; + else + status_sw = 0; + + status_csi = readl(isys->pdata->base + ctrl0_status); + status_sw |= readl(isys->pdata->base + + IPU6_REG_ISYS_UNISPART_IRQ_STATUS); + } while (((status_csi & isys->isr_csi2_bits) || + (status_sw & IPU6_ISYS_UNISPART_IRQ_SW)) && + !isys->adev->isp->flr_done); + + writel(ISYS_UNISPART_IRQS, base + IPU6_REG_ISYS_UNISPART_IRQ_MASK); + + spin_unlock(&isys->power_lock); + + return IRQ_HANDLED; +} + +static void get_lut_ltrdid(struct ipu6_isys *isys, struct ltr_did *pltr_did) +{ + struct isys_iwake_watermark *iwake_watermark = &isys->iwake_watermark; + struct ltr_did ltrdid_default; + + ltrdid_default.lut_ltr.value = LTR_DEFAULT_VALUE; + ltrdid_default.lut_fill_time.value = FILL_TIME_DEFAULT_VALUE; + + if (iwake_watermark->ltrdid.lut_ltr.value) + *pltr_did = iwake_watermark->ltrdid; + else + *pltr_did = ltrdid_default; +} + +static int set_iwake_register(struct ipu6_isys *isys, u32 index, u32 
value) +{ + u32 req_id = index; + u32 offset = 0; + int ret = 0; + + ret = ipu6_fw_isys_send_proxy_token(isys, req_id, index, offset, value); + if (ret) + dev_err(&isys->adev->dev, "write %d failed %d", index, ret); + + return ret; +} + +/* + * When input system is powered up and before enabling any new sensor capture, + * or after disabling any sensor capture the following values need to be set: + * LTR_value = LTR(usec) from calculation; + * LTR_scale = 2; + * DID_value = DID(usec) from calculation; + * DID_scale = 2; + * + * When input system is powered down, the LTR and DID values + * must be returned to the default values: + * LTR_value = 1023; + * LTR_scale = 5; + * DID_value = 1023; + * DID_scale = 2; + */ +static void set_iwake_ltrdid(struct ipu6_isys *isys, u16 ltr, u16 did, + enum ltr_did_type use) +{ + u16 ltr_val, ltr_scale = LTR_SCALE_1024NS; + u16 did_val, did_scale = DID_SCALE_1US; + struct ipu6_device *isp = isys->adev->isp; + union fabric_ctrl fc; + + switch (use) { + case LTR_IWAKE_ON: + ltr_val = min_t(u16, ltr, (u16)LTR_DID_VAL_MAX); + did_val = min_t(u16, did, (u16)LTR_DID_VAL_MAX); + ltr_scale = (ltr == LTR_DID_VAL_MAX && + did == LTR_DID_VAL_MAX) ? + LTR_SCALE_DEFAULT : LTR_SCALE_1024NS; + break; + case LTR_ISYS_ON: + case LTR_IWAKE_OFF: + ltr_val = LTR_DID_PKGC_2R; + did_val = LTR_DID_PKGC_2R; + break; + case LTR_ISYS_OFF: + ltr_val = LTR_DID_VAL_MAX; + did_val = LTR_DID_VAL_MAX; + ltr_scale = LTR_SCALE_DEFAULT; + break; + case LTR_ENHANNCE_IWAKE: + if (ltr == LTR_DID_VAL_MAX && did == LTR_DID_VAL_MAX) { + ltr_val = LTR_DID_VAL_MAX; + did_val = LTR_DID_VAL_MAX; + ltr_scale = LTR_SCALE_DEFAULT; + } else if (did < ONE_THOUSAND_MICROSECOND) { + ltr_val = ltr; + did_val = did; + } else { + ltr_val = ltr; + /* div 90% value by 32 to account for scale change */ + did_val = did / 32; + did_scale = DID_SCALE_32US; + } + break; + default: + ltr_val = LTR_DID_VAL_MAX; + did_val = LTR_DID_VAL_MAX; + ltr_scale = LTR_SCALE_DEFAULT; + break; + } + + fc.value = readl(isp->base + IPU6_BUTTRESS_FABIC_CONTROL); + fc.bits.ltr_val = ltr_val; + fc.bits.ltr_scale = ltr_scale; + fc.bits.did_val = did_val; + fc.bits.did_scale = did_scale; + dev_dbg(&isys->adev->dev, "ltr: %d did: %d", ltr_val, did_val); + writel(fc.value, isp->base + IPU6_BUTTRESS_FABIC_CONTROL); +} + +/* + * Driver may clear register GDA_ENABLE_IWAKE before FW configures the + * stream for debug purpose. Otherwise driver should not access this register. 
+ */ +static void enable_iwake(struct ipu6_isys *isys, bool enable) +{ + struct isys_iwake_watermark *iwake_watermark = &isys->iwake_watermark; + int ret; + + mutex_lock(&iwake_watermark->mutex); + + if (iwake_watermark->iwake_enabled == enable) { + mutex_unlock(&iwake_watermark->mutex); + return; + } + + ret = set_iwake_register(isys, GDA_ENABLE_IWAKE_INDEX, enable); + if (!ret) + iwake_watermark->iwake_enabled = enable; + + mutex_unlock(&iwake_watermark->mutex); +} + +void update_watermark_setting(struct ipu6_isys *isys) +{ + struct isys_iwake_watermark *iwake_watermark = &isys->iwake_watermark; + u32 iwake_threshold, iwake_critical_threshold, page_num; + u32 calc_fill_time_us = 0, ltr = 0, did = 0; + struct video_stream_watermark *p_watermark; + enum ltr_did_type ltr_did_type; + struct list_head *stream_node; + u64 isys_pb_datarate_mbs = 0; + u32 mem_open_threshold = 0; + struct ltr_did ltrdid; + u64 threshold_bytes; + u32 max_sram_size; + u32 shift; + + shift = isys->pdata->ipdata->sram_gran_shift; + max_sram_size = isys->pdata->ipdata->max_sram_size; + + mutex_lock(&iwake_watermark->mutex); + if (iwake_watermark->force_iwake_disable) { + set_iwake_ltrdid(isys, 0, 0, LTR_IWAKE_OFF); + set_iwake_register(isys, GDA_IRQ_CRITICAL_THRESHOLD_INDEX, + CRITICAL_THRESHOLD_IWAKE_DISABLE); + goto unlock_exit; + } + + if (list_empty(&iwake_watermark->video_list)) { + isys_pb_datarate_mbs = 0; + } else { + list_for_each(stream_node, &iwake_watermark->video_list) { + p_watermark = list_entry(stream_node, + struct video_stream_watermark, + stream_node); + isys_pb_datarate_mbs += p_watermark->stream_data_rate; + } + } + mutex_unlock(&iwake_watermark->mutex); + + if (!isys_pb_datarate_mbs) { + enable_iwake(isys, false); + set_iwake_ltrdid(isys, 0, 0, LTR_IWAKE_OFF); + mutex_lock(&iwake_watermark->mutex); + set_iwake_register(isys, GDA_IRQ_CRITICAL_THRESHOLD_INDEX, + CRITICAL_THRESHOLD_IWAKE_DISABLE); + goto unlock_exit; + } + + enable_iwake(isys, true); + calc_fill_time_us = max_sram_size / isys_pb_datarate_mbs; + + if (isys->pdata->ipdata->enhanced_iwake) { + ltr = isys->pdata->ipdata->ltr; + did = calc_fill_time_us * DEFAULT_DID_RATIO / 100; + ltr_did_type = LTR_ENHANNCE_IWAKE; + } else { + get_lut_ltrdid(isys, <rdid); + + if (calc_fill_time_us <= ltrdid.lut_fill_time.bits.th0) + ltr = 0; + else if (calc_fill_time_us <= ltrdid.lut_fill_time.bits.th1) + ltr = ltrdid.lut_ltr.bits.val0; + else if (calc_fill_time_us <= ltrdid.lut_fill_time.bits.th2) + ltr = ltrdid.lut_ltr.bits.val1; + else if (calc_fill_time_us <= ltrdid.lut_fill_time.bits.th3) + ltr = ltrdid.lut_ltr.bits.val2; + else + ltr = ltrdid.lut_ltr.bits.val3; + + did = calc_fill_time_us - ltr; + ltr_did_type = LTR_IWAKE_ON; + } + + set_iwake_ltrdid(isys, ltr, did, ltr_did_type); + + /* calculate iwake threshold with 2KB granularity pages */ + threshold_bytes = did * isys_pb_datarate_mbs; + iwake_threshold = max_t(u32, 1, threshold_bytes >> shift); + iwake_threshold = min_t(u32, iwake_threshold, max_sram_size); + + mutex_lock(&iwake_watermark->mutex); + if (isys->pdata->ipdata->enhanced_iwake) { + set_iwake_register(isys, GDA_IWAKE_THRESHOLD_INDEX, + DEFAULT_IWAKE_THRESHOLD); + /* calculate number of pages that will be filled in 10 usec */ + page_num = (DEFAULT_MEM_OPEN_TIME * isys_pb_datarate_mbs) / + ISF_DMA_TOP_GDA_PROFERTY_PAGE_SIZE; + page_num += ((DEFAULT_MEM_OPEN_TIME * isys_pb_datarate_mbs) % + ISF_DMA_TOP_GDA_PROFERTY_PAGE_SIZE) ? 
1 : 0; + mem_open_threshold = isys->pdata->ipdata->memopen_threshold; + mem_open_threshold = max_t(u32, mem_open_threshold, page_num); + dev_dbg(&isys->adev->dev, "mem_open_threshold: %u\n", + mem_open_threshold); + set_iwake_register(isys, GDA_MEMOPEN_THRESHOLD_INDEX, + mem_open_threshold); + } else { + set_iwake_register(isys, GDA_IWAKE_THRESHOLD_INDEX, + iwake_threshold); + } + + iwake_critical_threshold = iwake_threshold + + (IS_PIXEL_BUFFER_PAGES - iwake_threshold) / 2; + + dev_dbg(&isys->adev->dev, "threshold: %u critical: %u\n", + iwake_threshold, iwake_critical_threshold); + + set_iwake_register(isys, GDA_IRQ_CRITICAL_THRESHOLD_INDEX, + iwake_critical_threshold); + + writel(VAL_PKGC_PMON_CFG_RESET, + isys->adev->isp->base + REG_PKGC_PMON_CFG); + writel(VAL_PKGC_PMON_CFG_START, + isys->adev->isp->base + REG_PKGC_PMON_CFG); +unlock_exit: + mutex_unlock(&iwake_watermark->mutex); +} + +static int isys_iwake_watermark_init(struct ipu6_isys *isys) +{ + struct isys_iwake_watermark *iwake_watermark = &isys->iwake_watermark; + + INIT_LIST_HEAD(&iwake_watermark->video_list); + mutex_init(&iwake_watermark->mutex); + + iwake_watermark->ltrdid.lut_ltr.value = 0; + iwake_watermark->isys = isys; + iwake_watermark->iwake_enabled = false; + iwake_watermark->force_iwake_disable = false; + + return 0; +} + +static int isys_iwake_watermark_cleanup(struct ipu6_isys *isys) +{ + struct isys_iwake_watermark *iwake_watermark = &isys->iwake_watermark; + + mutex_lock(&iwake_watermark->mutex); + list_del(&iwake_watermark->video_list); + mutex_unlock(&iwake_watermark->mutex); + + mutex_destroy(&iwake_watermark->mutex); + + return 0; +} + +/* The .bound() notifier callback when a match is found */ +static int isys_notifier_bound(struct v4l2_async_notifier *notifier, + struct v4l2_subdev *sd, + struct v4l2_async_subdev *asd) +{ + struct ipu6_isys *isys = + container_of(notifier, struct ipu6_isys, notifier); + struct sensor_async_subdev *s_asd = + container_of(asd, struct sensor_async_subdev, asd); + + dev_dbg(&isys->adev->dev, "bind %s nlanes is %d port is %d\n", + sd->name, s_asd->csi2.nlanes, s_asd->csi2.port); + isys_complete_ext_device_registration(isys, sd, &s_asd->csi2); + + return v4l2_device_register_subdev_nodes(&isys->v4l2_dev); +} + +static int isys_notifier_complete(struct v4l2_async_notifier *notifier) +{ + struct ipu6_isys *isys = + container_of(notifier, struct ipu6_isys, notifier); + + return v4l2_device_register_subdev_nodes(&isys->v4l2_dev); +} + +static const struct v4l2_async_notifier_operations isys_async_ops = { + .bound = isys_notifier_bound, + .complete = isys_notifier_complete, +}; + +static int isys_fwnode_parse(struct device *dev, + struct v4l2_fwnode_endpoint *vep, + struct v4l2_async_subdev *asd) +{ + struct sensor_async_subdev *s_asd = + container_of(asd, struct sensor_async_subdev, asd); + + s_asd->csi2.port = vep->base.port; + s_asd->csi2.nlanes = vep->bus.mipi_csi2.num_data_lanes; + + return 0; +} + +static int isys_notifier_init(struct ipu6_isys *isys) +{ + size_t asd_struct_size = sizeof(struct sensor_async_subdev); + struct ipu6_device *isp = isys->adev->isp; + int ret; + + v4l2_async_nf_init(&isys->notifier); + ret = v4l2_async_nf_parse_fwnode_endpoints(&isp->pdev->dev, + &isys->notifier, + asd_struct_size, + isys_fwnode_parse); + if (ret < 0) { + dev_err(&isys->adev->dev, + "parse fwnode endpoints failed: %d\n", ret); + return ret; + } + + if (list_empty(&isys->notifier.asd_list)) { + /* isys probe could continue with async subdevs missing */ + dev_warn(&isys->adev->dev, "no 
subdev found in graph\n"); + return 0; + } + + isys->notifier.ops = &isys_async_ops; + ret = v4l2_async_nf_register(&isys->v4l2_dev, &isys->notifier); + if (ret) { + dev_err(&isys->adev->dev, + "failed to register async notifier : %d\n", ret); + v4l2_async_nf_cleanup(&isys->notifier); + } + + return ret; +} + +static void isys_notifier_cleanup(struct ipu6_isys *isys) +{ + v4l2_async_nf_unregister(&isys->notifier); + v4l2_async_nf_cleanup(&isys->notifier); +} + +static int isys_register_devices(struct ipu6_isys *isys) +{ + struct pci_dev *pdev = isys->adev->isp->pdev; + int ret; + + isys->media_dev.dev = &isys->adev->dev; + + media_device_pci_init(&isys->media_dev, + pdev, IPU6_MEDIA_DEV_MODEL_NAME); + + strscpy(isys->v4l2_dev.name, isys->media_dev.model, + sizeof(isys->v4l2_dev.name)); + + ret = media_device_register(&isys->media_dev); + if (ret < 0) { + dev_err(&isys->adev->dev, "can't register media device\n"); + goto out_media_device_unregister; + } + + isys->v4l2_dev.mdev = &isys->media_dev; + isys->v4l2_dev.ctrl_handler = NULL; + + ret = v4l2_device_register(&isys->adev->dev, &isys->v4l2_dev); + if (ret < 0) { + dev_err(&isys->adev->dev, "can't register v4l2 device\n"); + goto out_media_device_unregister; + } + + ret = isys_register_video_devices(isys); + if (ret) + goto out_v4l2_device_unregister; + + ret = isys_csi2_register_subdevices(isys); + if (ret) + goto out_isys_unregister_video_device; + + ret = isys_csi2_create_media_links(isys); + if (ret) + goto out_isys_unregister_subdevices; + + ret = isys_notifier_init(isys); + if (ret) + goto out_isys_unregister_subdevices; + + ret = v4l2_device_register_subdev_nodes(&isys->v4l2_dev); + if (ret) + goto out_isys_notifier_cleanup; + + return 0; + +out_isys_notifier_cleanup: + isys_notifier_cleanup(isys); + +out_isys_unregister_subdevices: + isys_csi2_unregister_subdevices(isys); + +out_isys_unregister_video_device: + isys_unregister_video_devices(isys); + +out_v4l2_device_unregister: + v4l2_device_unregister(&isys->v4l2_dev); + +out_media_device_unregister: + media_device_unregister(&isys->media_dev); + media_device_cleanup(&isys->media_dev); + + return ret; +} + +static void isys_unregister_devices(struct ipu6_isys *isys) +{ + isys_unregister_video_devices(isys); + isys_csi2_unregister_subdevices(isys); + v4l2_device_unregister(&isys->v4l2_dev); + media_device_unregister(&isys->media_dev); + media_device_cleanup(&isys->media_dev); +} + +static int isys_runtime_pm_resume(struct device *dev) +{ + struct ipu6_bus_device *adev = to_ipu6_bus_device(dev); + struct ipu6_isys *isys = ipu6_bus_get_drvdata(adev); + struct ipu6_device *isp = adev->isp; + unsigned long flags; + int ret; + + if (!isys) + return 0; + + ret = ipu6_mmu_hw_init(adev->mmu); + if (ret) + return ret; + + cpu_latency_qos_update_request(&isys->pm_qos, ISYS_PM_QOS_VALUE); + + ret = ipu6_buttress_start_tsc_sync(isp); + if (ret) + return ret; + + spin_lock_irqsave(&isys->power_lock, flags); + isys->power = 1; + spin_unlock_irqrestore(&isys->power_lock, flags); + + isys_setup_hw(isys); + + set_iwake_ltrdid(isys, 0, 0, LTR_ISYS_ON); + + return 0; +} + +static int isys_runtime_pm_suspend(struct device *dev) +{ + struct ipu6_bus_device *adev = to_ipu6_bus_device(dev); + struct ipu6_isys *isys = ipu6_bus_get_drvdata(adev); + unsigned long flags; + + if (!isys) + return 0; + + spin_lock_irqsave(&isys->power_lock, flags); + isys->power = 0; + spin_unlock_irqrestore(&isys->power_lock, flags); + + mutex_lock(&isys->mutex); + isys->need_reset = false; + mutex_unlock(&isys->mutex); + + 
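+ /* reset the cached DWC PHY termination calibration value */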
isys->phy_termcal_val = 0; + cpu_latency_qos_update_request(&isys->pm_qos, PM_QOS_DEFAULT_VALUE); + + set_iwake_ltrdid(isys, 0, 0, LTR_ISYS_OFF); + + ipu6_mmu_hw_cleanup(adev->mmu); + + return 0; +} + +static int isys_suspend(struct device *dev) +{ + struct ipu6_bus_device *adev = to_ipu6_bus_device(dev); + struct ipu6_isys *isys = ipu6_bus_get_drvdata(adev); + + /* If stream is open, refuse to suspend */ + if (isys->stream_opened) + return -EBUSY; + + return 0; +} + +static int isys_resume(struct device *dev) +{ + return 0; +} + +static const struct dev_pm_ops isys_pm_ops = { + .runtime_suspend = isys_runtime_pm_suspend, + .runtime_resume = isys_runtime_pm_resume, + .suspend = isys_suspend, + .resume = isys_resume, +}; + +static void isys_remove(struct ipu6_bus_device *adev) +{ + struct ipu6_isys *isys = ipu6_bus_get_drvdata(adev); + struct ipu6_device *isp = adev->isp; + struct isys_fw_msgs *fwmsg, *safe; + unsigned int i; + + for (i = 0; i < IPU6_ISYS_MAX_STREAMS; i++) + mutex_destroy(&isys->streams[i].mutex); + + list_for_each_entry_safe(fwmsg, safe, &isys->framebuflist, head) + dma_free_attrs(&adev->dev, sizeof(struct isys_fw_msgs), + fwmsg, fwmsg->dma_addr, 0); + + list_for_each_entry_safe(fwmsg, safe, &isys->framebuflist_fw, head) + dma_free_attrs(&adev->dev, sizeof(struct isys_fw_msgs), + fwmsg, fwmsg->dma_addr, 0); + + isys_iwake_watermark_cleanup(isys); + isys_notifier_cleanup(isys); + isys_unregister_devices(isys); + + cpu_latency_qos_remove_request(&isys->pm_qos); + + if (!isp->secure_mode) { + ipu6_cpd_free_pkg_dir(adev); + ipu6_buttress_unmap_fw_image(adev, &adev->fw_sgt); + release_firmware(adev->fw); + } + + mutex_destroy(&isys->stream_mutex); + mutex_destroy(&isys->mutex); +} + +static int alloc_fw_msg_bufs(struct ipu6_isys *isys, int amount) +{ + struct isys_fw_msgs *addr; + dma_addr_t dma_addr; + unsigned long flags; + unsigned int i; + + for (i = 0; i < amount; i++) { + addr = dma_alloc_attrs(&isys->adev->dev, + sizeof(struct isys_fw_msgs), + &dma_addr, GFP_KERNEL, 0); + if (!addr) + break; + addr->dma_addr = dma_addr; + + spin_lock_irqsave(&isys->listlock, flags); + list_add(&addr->head, &isys->framebuflist); + spin_unlock_irqrestore(&isys->listlock, flags); + } + + if (i == amount) + return 0; + + spin_lock_irqsave(&isys->listlock, flags); + while (!list_empty(&isys->framebuflist)) { + addr = list_first_entry(&isys->framebuflist, + struct isys_fw_msgs, head); + list_del(&addr->head); + spin_unlock_irqrestore(&isys->listlock, flags); + dma_free_attrs(&isys->adev->dev, + sizeof(struct isys_fw_msgs), + addr, addr->dma_addr, 0); + spin_lock_irqsave(&isys->listlock, flags); + } + spin_unlock_irqrestore(&isys->listlock, flags); + + return -ENOMEM; +} + +struct isys_fw_msgs *ipu6_get_fw_msg_buf(struct ipu6_isys_stream *stream) +{ + struct ipu6_isys *isys = stream->isys; + struct isys_fw_msgs *msg; + unsigned long flags; + int ret; + + spin_lock_irqsave(&isys->listlock, flags); + if (list_empty(&isys->framebuflist)) { + spin_unlock_irqrestore(&isys->listlock, flags); + dev_dbg(&isys->adev->dev, "Frame list empty - Allocate more"); + + ret = alloc_fw_msg_bufs(isys, 5); + if (ret < 0) + return NULL; + + spin_lock_irqsave(&isys->listlock, flags); + if (list_empty(&isys->framebuflist)) { + spin_unlock_irqrestore(&isys->listlock, flags); + dev_err(&isys->adev->dev, "Frame list empty"); + return NULL; + } + } + msg = list_last_entry(&isys->framebuflist, struct isys_fw_msgs, head); + list_move(&msg->head, &isys->framebuflist_fw); + spin_unlock_irqrestore(&isys->listlock, flags); + 
memset(&msg->fw_msg, 0, sizeof(msg->fw_msg)); + + return msg; +} + +void ipu6_cleanup_fw_msg_bufs(struct ipu6_isys *isys) +{ + struct isys_fw_msgs *fwmsg, *fwmsg0; + unsigned long flags; + + spin_lock_irqsave(&isys->listlock, flags); + list_for_each_entry_safe(fwmsg, fwmsg0, &isys->framebuflist_fw, head) + list_move(&fwmsg->head, &isys->framebuflist); + spin_unlock_irqrestore(&isys->listlock, flags); +} + +void ipu6_put_fw_msg_buf(struct ipu6_isys *isys, u64 data) +{ + struct isys_fw_msgs *msg; + unsigned long flags; + u64 *ptr = (u64 *)data; + + if (!ptr) + return; + + spin_lock_irqsave(&isys->listlock, flags); + msg = container_of(ptr, struct isys_fw_msgs, fw_msg.dummy); + list_move(&msg->head, &isys->framebuflist); + spin_unlock_irqrestore(&isys->listlock, flags); +} + +static int isys_probe(struct ipu6_bus_device *adev) +{ + struct ipu6_isys *isys; + struct ipu6_device *isp = adev->isp; + const struct firmware *fw; + unsigned int i; + int ret = 0; + + isys = devm_kzalloc(&adev->dev, sizeof(*isys), GFP_KERNEL); + if (!isys) + return -ENOMEM; + + ret = ipu6_mmu_hw_init(adev->mmu); + if (ret) + return ret; + + isys->adev = adev; + isys->pdata = adev->pdata; + + /* initial sensor type */ + isys->sensor_type = isys->pdata->ipdata->sensor_type_start; + + spin_lock_init(&isys->streams_lock); + spin_lock_init(&isys->power_lock); + isys->power = 0; + isys->phy_termcal_val = 0; + + mutex_init(&isys->mutex); + mutex_init(&isys->stream_mutex); + + spin_lock_init(&isys->listlock); + INIT_LIST_HEAD(&isys->framebuflist); + INIT_LIST_HEAD(&isys->framebuflist_fw); + + ipu6_bus_set_drvdata(adev, isys); + + isys->line_align = IPU6_ISYS_2600_MEM_LINE_ALIGN; + isys->icache_prefetch = 0; + + isys_stream_init(isys); + + if (!isp->secure_mode) { + fw = isp->cpd_fw; + ret = ipu6_buttress_map_fw_image(adev, fw, &adev->fw_sgt); + if (ret) + goto release_firmware; + + ret = ipu6_cpd_create_pkg_dir(adev, isp->cpd_fw->data); + if (ret) + goto remove_shared_buffer; + } + + cpu_latency_qos_add_request(&isys->pm_qos, PM_QOS_DEFAULT_VALUE); + + ret = alloc_fw_msg_bufs(isys, 20); + if (ret < 0) + goto out_remove_pkg_dir_shared_buffer; + + ret = isys_register_devices(isys); + if (ret) + goto out_remove_pkg_dir_shared_buffer; + + if (is_ipu6se(adev->isp->hw_ver)) + isys->phy_set_power = ipu6_isys_jsl_phy_set_power; + else if (is_ipu6ep_mtl(adev->isp->hw_ver)) + isys->phy_set_power = ipu6_isys_dwc_phy_set_power; + else + isys->phy_set_power = ipu6_isys_mcd_phy_set_power; + + ret = isys_iwake_watermark_init(isys); + if (ret) + goto out_unregister_devices; + + ipu6_mmu_hw_cleanup(adev->mmu); + + return 0; + +out_unregister_devices: + isys_unregister_devices(isys); +out_remove_pkg_dir_shared_buffer: + if (!isp->secure_mode) + ipu6_cpd_free_pkg_dir(adev); +remove_shared_buffer: + if (!isp->secure_mode) + ipu6_buttress_unmap_fw_image(adev, &adev->fw_sgt); +release_firmware: + if (!isp->secure_mode) + release_firmware(adev->fw); + + for (i = 0; i < IPU6_ISYS_MAX_STREAMS; i++) + mutex_destroy(&isys->streams[i].mutex); + + mutex_destroy(&isys->mutex); + mutex_destroy(&isys->stream_mutex); + + ipu6_mmu_hw_cleanup(adev->mmu); + + return ret; +} + +struct fwmsg { + int type; + char *msg; + bool valid_ts; +}; + +static const struct fwmsg fw_msg[] = { + {IPU6_FW_ISYS_RESP_TYPE_STREAM_OPEN_DONE, "STREAM_OPEN_DONE", 0}, + {IPU6_FW_ISYS_RESP_TYPE_STREAM_CLOSE_ACK, "STREAM_CLOSE_ACK", 0}, + {IPU6_FW_ISYS_RESP_TYPE_STREAM_START_ACK, "STREAM_START_ACK", 0}, + {IPU6_FW_ISYS_RESP_TYPE_STREAM_START_AND_CAPTURE_ACK, + 
"STREAM_START_AND_CAPTURE_ACK", 0}, + {IPU6_FW_ISYS_RESP_TYPE_STREAM_STOP_ACK, "STREAM_STOP_ACK", 0}, + {IPU6_FW_ISYS_RESP_TYPE_STREAM_FLUSH_ACK, "STREAM_FLUSH_ACK", 0}, + {IPU6_FW_ISYS_RESP_TYPE_PIN_DATA_READY, "PIN_DATA_READY", 1}, + {IPU6_FW_ISYS_RESP_TYPE_STREAM_CAPTURE_ACK, "STREAM_CAPTURE_ACK", 0}, + {IPU6_FW_ISYS_RESP_TYPE_STREAM_START_AND_CAPTURE_DONE, + "STREAM_START_AND_CAPTURE_DONE", 1}, + {IPU6_FW_ISYS_RESP_TYPE_STREAM_CAPTURE_DONE, "STREAM_CAPTURE_DONE", 1}, + {IPU6_FW_ISYS_RESP_TYPE_FRAME_SOF, "FRAME_SOF", 1}, + {IPU6_FW_ISYS_RESP_TYPE_FRAME_EOF, "FRAME_EOF", 1}, + {IPU6_FW_ISYS_RESP_TYPE_STATS_DATA_READY, "STATS_READY", 1}, + {-1, "UNKNOWN MESSAGE", 0}, +}; + +static int resp_type_to_index(int type) +{ + unsigned int i; + + for (i = 0; i < ARRAY_SIZE(fw_msg); i++) + if (fw_msg[i].type == type) + return i; + + return i - 1; +} + +int isys_isr_one(struct ipu6_bus_device *adev) +{ + struct ipu6_isys *isys = ipu6_bus_get_drvdata(adev); + struct ipu6_fw_isys_resp_info_abi *resp; + struct ipu6_isys_stream *stream; + u64 ts; + + if (!isys->fwcom) + return 0; + + resp = ipu6_fw_isys_get_resp(isys->fwcom, IPU6_BASE_MSG_RECV_QUEUES); + if (!resp) + return 1; + + ts = (u64)resp->timestamp[1] << 32 | resp->timestamp[0]; + + if (resp->error_info.error == IPU6_FW_ISYS_ERROR_STREAM_IN_SUSPENSION) + /* Suspension is kind of special case: not enough buffers */ + dev_dbg(&adev->dev, + "FW error resp %02d %s, stream %u, error SUSPENSION, details %d, timestamp 0x%16.16llx, pin %d\n", + resp->type, + fw_msg[resp_type_to_index(resp->type)].msg, + resp->stream_handle, + resp->error_info.error_details, + fw_msg[resp_type_to_index(resp->type)].valid_ts ? + ts : 0, resp->pin_id); + else if (resp->error_info.error) + dev_dbg(&adev->dev, + "FW error resp %02d %s, stream %u, error %d, details %d, timestamp 0x%16.16llx, pin %d\n", + resp->type, + fw_msg[resp_type_to_index(resp->type)].msg, + resp->stream_handle, + resp->error_info.error, resp->error_info.error_details, + fw_msg[resp_type_to_index(resp->type)].valid_ts ? + ts : 0, resp->pin_id); + else + dev_dbg(&adev->dev, + "FW resp %02d %s, stream %u, timestamp 0x%16.16llx, pin %d\n", + resp->type, + fw_msg[resp_type_to_index(resp->type)].msg, + resp->stream_handle, + fw_msg[resp_type_to_index(resp->type)].valid_ts ? 
+ ts : 0, resp->pin_id); + + if (resp->stream_handle >= IPU6_ISYS_MAX_STREAMS) { + dev_err(&adev->dev, "bad stream handle %u\n", + resp->stream_handle); + goto leave; + } + + stream = ipu6_isys_query_stream_by_handle(isys, resp->stream_handle); + if (!stream) { + dev_err(&adev->dev, "stream of stream_handle %u is unused\n", + resp->stream_handle); + goto leave; + } + stream->error = resp->error_info.error; + + switch (resp->type) { + case IPU6_FW_ISYS_RESP_TYPE_STREAM_OPEN_DONE: + complete(&stream->stream_open_completion); + break; + case IPU6_FW_ISYS_RESP_TYPE_STREAM_CLOSE_ACK: + complete(&stream->stream_close_completion); + break; + case IPU6_FW_ISYS_RESP_TYPE_STREAM_START_ACK: + complete(&stream->stream_start_completion); + break; + case IPU6_FW_ISYS_RESP_TYPE_STREAM_START_AND_CAPTURE_ACK: + complete(&stream->stream_start_completion); + break; + case IPU6_FW_ISYS_RESP_TYPE_STREAM_STOP_ACK: + complete(&stream->stream_stop_completion); + break; + case IPU6_FW_ISYS_RESP_TYPE_STREAM_FLUSH_ACK: + complete(&stream->stream_stop_completion); + break; + case IPU6_FW_ISYS_RESP_TYPE_PIN_DATA_READY: + /* + * firmware only release the capture msg until software + * get pin_data_ready event + */ + ipu6_put_fw_msg_buf(ipu6_bus_get_drvdata(adev), resp->buf_id); + if (resp->pin_id < IPU6_ISYS_OUTPUT_PINS && + stream->output_pins[resp->pin_id].pin_ready) + stream->output_pins[resp->pin_id].pin_ready(stream, + resp); + else + dev_err(&adev->dev, + "%d:No data pin ready handler for pin id %d\n", + resp->stream_handle, resp->pin_id); + if (stream->csi2) + ipu6_isys_csi2_error(stream->csi2); + + break; + case IPU6_FW_ISYS_RESP_TYPE_STREAM_CAPTURE_ACK: + break; + case IPU6_FW_ISYS_RESP_TYPE_STREAM_START_AND_CAPTURE_DONE: + case IPU6_FW_ISYS_RESP_TYPE_STREAM_CAPTURE_DONE: + break; + case IPU6_FW_ISYS_RESP_TYPE_FRAME_SOF: + if (stream->csi2) + ipu6_isys_csi2_sof_event_by_stream(stream); + + stream->seq[stream->seq_index].sequence = + atomic_read(&stream->sequence) - 1; + stream->seq[stream->seq_index].timestamp = ts; + dev_dbg(&adev->dev, + "sof: handle %d: (index %u), timestamp 0x%16.16llx\n", + resp->stream_handle, + stream->seq[stream->seq_index].sequence, ts); + stream->seq_index = (stream->seq_index + 1) + % IPU6_ISYS_MAX_PARALLEL_SOF; + break; + case IPU6_FW_ISYS_RESP_TYPE_FRAME_EOF: + if (stream->csi2) + ipu6_isys_csi2_eof_event_by_stream(stream); + + dev_dbg(&adev->dev, + "eof: handle %d: (index %u), timestamp 0x%16.16llx\n", + resp->stream_handle, + stream->seq[stream->seq_index].sequence, ts); + break; + case IPU6_FW_ISYS_RESP_TYPE_STATS_DATA_READY: + break; + default: + dev_err(&adev->dev, "%d:unknown response type %u\n", + resp->stream_handle, resp->type); + break; + } + + ipu6_isys_put_stream(stream); +leave: + ipu6_fw_isys_put_resp(isys->fwcom, IPU6_BASE_MSG_RECV_QUEUES); + return 0; +} + +static const struct pci_device_id ipu6_pci_tbl[] = { + { PCI_VDEVICE(INTEL, IPU6_PCI_ID) }, + { PCI_VDEVICE(INTEL, IPU6SE_PCI_ID) }, + { PCI_VDEVICE(INTEL, IPU6EP_ADL_P_PCI_ID) }, + { PCI_VDEVICE(INTEL, IPU6EP_ADL_N_PCI_ID) }, + { PCI_VDEVICE(INTEL, IPU6EP_RPL_P_PCI_ID) }, + { PCI_VDEVICE(INTEL, IPU6EP_MTL_PCI_ID) }, + { } +}; + +static struct ipu6_bus_driver isys_driver = { + .probe = isys_probe, + .remove = isys_remove, + .isr = isys_isr, + .id_table = ipu6_pci_tbl, + .drv = { + .name = IPU6_ISYS_NAME, + .owner = THIS_MODULE, + .pm = &isys_pm_ops, + }, +}; + +module_driver(isys_driver, ipu6_bus_register_driver, + ipu6_bus_unregister_driver); + +MODULE_DEVICE_TABLE(pci, ipu6_pci_tbl); + +MODULE_AUTHOR("Sakari 
Ailus "); +MODULE_AUTHOR("Tianshu Qiu "); +MODULE_AUTHOR("Bingbu Cao "); +MODULE_AUTHOR("Yunliang Ding "); +MODULE_AUTHOR("Hongju Wang "); +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Intel IPU6 input system driver"); +MODULE_IMPORT_NS(INTEL_IPU6); diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys.h b/drivers/media/pci/intel/ipu6/ipu6-isys.h new file mode 100644 index 000000000000..48629e0e8c52 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-isys.h @@ -0,0 +1,190 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2013 - 2023 Intel Corporation */ + +#ifndef IPU6_ISYS_H +#define IPU6_ISYS_H + +#include +#include + +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-fw-isys.h" +#include "ipu6-isys-csi2.h" +#include "ipu6-isys-video.h" + +#define IPU6_ISYS_ENTITY_PREFIX "Intel IPU6" +/* FW support max 16 streams */ +#define IPU6_ISYS_MAX_STREAMS 16 +#define ISYS_UNISPART_IRQS (IPU6_ISYS_UNISPART_IRQ_SW | \ + IPU6_ISYS_UNISPART_IRQ_CSI0 | \ + IPU6_ISYS_UNISPART_IRQ_CSI1) + +#define IPU6_ISYS_2600_MEM_LINE_ALIGN 64 + +/* + * Current message queue configuration. These must be big enough + * so that they never gets full. Queues are located in system memory + */ +#define IPU6_ISYS_SIZE_RECV_QUEUE 40 +#define IPU6_ISYS_SIZE_SEND_QUEUE 40 +#define IPU6_ISYS_SIZE_PROXY_RECV_QUEUE 5 +#define IPU6_ISYS_SIZE_PROXY_SEND_QUEUE 5 +#define IPU6_ISYS_NUM_RECV_QUEUE 1 + +#define IPU6_ISYS_MIN_WIDTH 1U +#define IPU6_ISYS_MIN_HEIGHT 1U +#define IPU6_ISYS_MAX_WIDTH 4672U +#define IPU6_ISYS_MAX_HEIGHT 3416U + +#define NR_OF_CSI2_BE_SOC_DEV 8 + +/* the threshold granularity is 2KB on IPU6 */ +#define IPU6_SRAM_GRANULARITY_SHIFT 11 +#define IPU6_SRAM_GRANULARITY_SIZE 2048 +/* the threshold granularity is 1KB on IPU6SE */ +#define IPU6SE_SRAM_GRANULARITY_SHIFT 10 +#define IPU6SE_SRAM_GRANULARITY_SIZE 1024 +/* IS pixel buffer is 256KB, MaxSRAMSize is 200KB on IPU6 */ +#define IPU6_MAX_SRAM_SIZE (200 << 10) +/* IS pixel buffer is 128KB, MaxSRAMSize is 96KB on IPU6SE */ +#define IPU6SE_MAX_SRAM_SIZE (96 << 10) + +#define IPU6EP_LTR_VALUE 200 +#define IPU6EP_MIN_MEMOPEN_TH 0x4 +#define IPU6EP_MTL_LTR_VALUE 1023 +#define IPU6EP_MTL_MIN_MEMOPEN_TH 0xc + +struct ltr_did { + union { + u32 value; + struct { + u8 val0; + u8 val1; + u8 val2; + u8 val3; + } bits; + } lut_ltr; + union { + u32 value; + struct { + u8 th0; + u8 th1; + u8 th2; + u8 th3; + } bits; + } lut_fill_time; +}; + +struct isys_iwake_watermark { + bool iwake_enabled; + bool force_iwake_disable; + u32 iwake_threshold; + u64 isys_pixelbuffer_datarate; + struct ltr_did ltrdid; + struct mutex mutex; /* protect whole struct */ + struct ipu6_isys *isys; + struct list_head video_list; +}; + +struct ipu6_isys_csi2_config { + u32 nlanes; + u32 port; +}; + +struct sensor_async_subdev { + struct v4l2_async_subdev asd; + struct ipu6_isys_csi2_config csi2; +}; + +/* + * struct ipu6_isys + * + * @media_dev: Media device + * @v4l2_dev: V4L2 device + * @adev: ISYS bus device + * @power: Is ISYS powered on or not? + * @isr_bits: Which bits does the ISR handle? 
+ * @power_lock: Serialise access to power (power state in general) + * @csi2_rx_ctrl_cached: cached shared value between all CSI2 receivers + * @streams_lock: serialise access to streams + * @streams: streams per firmware stream ID + * @fwcom: fw communication layer private pointer + * or optional external library private pointer + * @line_align: line alignment in memory + * @phy_termcal_val: the termination calibration value, only used for DWC PHY + * @need_reset: Isys requires d0i0->i3 transition + * @video_opened: total number of opened file handles on video nodes + * @mutex: serialise access isys video open/release related operations + * @stream_mutex: serialise stream start and stop, queueing requests + * @pdata: platform data pointer + * @csi2: CSI-2 receivers + */ +struct ipu6_isys { + struct media_device media_dev; + struct v4l2_device v4l2_dev; + struct ipu6_bus_device *adev; + + int power; + spinlock_t power_lock; + u32 isr_csi2_bits; + u32 csi2_rx_ctrl_cached; + spinlock_t streams_lock; + struct ipu6_isys_stream streams[IPU6_ISYS_MAX_STREAMS]; + int streams_ref_count[IPU6_ISYS_MAX_STREAMS]; + void *fwcom; + unsigned int line_align; + u32 phy_termcal_val; + bool need_reset; + bool icache_prefetch; + bool csi2_cse_ipc_not_supported; + unsigned int video_opened; + unsigned int stream_opened; + unsigned int sensor_type; + + struct mutex mutex; + struct mutex stream_mutex; + + struct ipu6_isys_pdata *pdata; + + int (*phy_set_power)(struct ipu6_isys *isys, + struct ipu6_isys_csi2_config *cfg, + const struct ipu6_isys_csi2_timing *timing, + bool on); + + struct ipu6_isys_csi2 *csi2; + struct ipu6_isys_video av[NR_OF_VIDEO_DEVICE]; + + struct pm_qos_request pm_qos; + spinlock_t listlock; /* Protect framebuflist */ + struct list_head framebuflist; + struct list_head framebuflist_fw; + struct v4l2_async_notifier notifier; + struct isys_iwake_watermark iwake_watermark; +}; + +struct isys_fw_msgs { + union { + u64 dummy; + struct ipu6_fw_isys_frame_buff_set_abi frame; + struct ipu6_fw_isys_stream_cfg_data_abi stream; + } fw_msg; + struct list_head head; + dma_addr_t dma_addr; +}; + +struct isys_fw_msgs *ipu6_get_fw_msg_buf(struct ipu6_isys_stream *stream); +void ipu6_put_fw_msg_buf(struct ipu6_isys *isys, u64 data); +void ipu6_cleanup_fw_msg_bufs(struct ipu6_isys *isys); + +extern const struct v4l2_ioctl_ops ipu6_isys_ioctl_ops; + +void isys_setup_hw(struct ipu6_isys *isys); +int isys_isr_one(struct ipu6_bus_device *adev); +irqreturn_t isys_isr(struct ipu6_bus_device *adev); +void update_watermark_setting(struct ipu6_isys *isys); + +#endif /* IPU6_ISYS_H */ From patchwork Thu Apr 13 10:04:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bingbu Cao X-Patchwork-Id: 673043 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BFC00C77B61 for ; Thu, 13 Apr 2023 09:55:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230106AbjDMJzj (ORCPT ); Thu, 13 Apr 2023 05:55:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45892 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230070AbjDMJzh (ORCPT ); Thu, 13 Apr 2023 05:55:37 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 95B6493DD for 
; Thu, 13 Apr 2023 02:55:22 -0700 (PDT)
From: bingbu.cao@intel.com
To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com
Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com
Subject: [RFC PATCH 11/14] media: intel/ipu6: input system video capture nodes
Date: Thu, 13 Apr 2023 18:04:26 +0800
Message-Id: <20230413100429.919622-12-bingbu.cao@intel.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230413100429.919622-1-bingbu.cao@intel.com>
References: <20230413100429.919622-1-bingbu.cao@intel.com>
MIME-Version: 1.0
Precedence: bulk
List-ID: X-Mailing-List: linux-media@vger.kernel.org
From: Bingbu Cao

Register the v4l2 video device and set up the vb2 queue to support basic
video capture. The video streaming callback triggers the input system
driver to construct an input system stream configuration for the firmware
based on the data type and stream ID, and then queues buffers to the
firmware for capture.
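As a rough illustration (not part of the patch itself), a capture node
registered by this driver can be exercised from userspace with the standard
V4L2 multi-planar ioctls roughly as below. The pixel format and resolution
are assumptions for the example, the device node path is whatever node this
driver registers, and error handling is omitted for brevity:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

static int capture_one_frame(const char *devnode)
{
	struct v4l2_plane planes[VIDEO_MAX_PLANES] = { 0 };
	struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE };
	struct v4l2_requestbuffers req = { 0 };
	struct v4l2_buffer buf = { 0 };
	int type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	int fd;

	fd = open(devnode, O_RDWR);
	if (fd < 0)
		return -1;

	/* Negotiate a raw Bayer format; the driver clamps and aligns it. */
	fmt.fmt.pix_mp.width = 1920;
	fmt.fmt.pix_mp.height = 1080;
	fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_SGRBG10;
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	/* Allocate one MMAP buffer; the queue also accepts DMABUF. */
	req.count = 1;
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	req.memory = V4L2_MEMORY_MMAP;
	ioctl(fd, VIDIOC_REQBUFS, &req);

	/* Queue the buffer, start streaming and wait for the filled frame. */
	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index = 0;
	buf.m.planes = planes;
	buf.length = 1;			/* single plane */
	ioctl(fd, VIDIOC_QBUF, &buf);
	ioctl(fd, VIDIOC_STREAMON, &type);
	ioctl(fd, VIDIOC_DQBUF, &buf);	/* blocks until the frame is captured */

	ioctl(fd, VIDIOC_STREAMOFF, &type);
	close(fd);
	return 0;
}

On the driver side this path ends up in the vb2 buf_queue() and
start_streaming() callbacks below, which build the firmware stream
configuration and frame buffer sets and hand them to the firmware.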
Signed-off-by: Bingbu Cao --- .../media/pci/intel/ipu6/ipu6-isys-queue.c | 869 +++++++++++++ .../media/pci/intel/ipu6/ipu6-isys-queue.h | 97 ++ .../media/pci/intel/ipu6/ipu6-isys-video.c | 1132 +++++++++++++++++ .../media/pci/intel/ipu6/ipu6-isys-video.h | 120 ++ 4 files changed, 2218 insertions(+) create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys-queue.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys-queue.h create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys-video.c create mode 100644 drivers/media/pci/intel/ipu6/ipu6-isys-video.h diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c new file mode 100644 index 000000000000..8aa125626fc5 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c @@ -0,0 +1,869 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2013 - 2023 Intel Corporation + +#include +#include +#include +#include + +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-buttress.h" +#include "ipu6-isys.h" +#include "ipu6-isys-csi2.h" +#include "ipu6-isys-video.h" + +static int queue_setup(struct vb2_queue *q, unsigned int *num_buffers, + unsigned int *num_planes, unsigned int sizes[], + struct device *alloc_devs[]) +{ + struct ipu6_isys_queue *aq = vb2_queue_to_ipu6_isys_queue(q); + struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq); + bool use_fmt = false; + unsigned int i; + u32 size; + + /* num_planes == 0: we're being called through VIDIOC_REQBUFS */ + if (!*num_planes) { + use_fmt = true; + *num_planes = av->mpix.num_planes; + } + + for (i = 0; i < *num_planes; i++) { + size = av->mpix.plane_fmt[i].sizeimage; + if (use_fmt) { + sizes[i] = size; + } else if (sizes[i] < size) { + dev_dbg(&av->isys->adev->dev, + "%s: queue setup: plane %d size %u < %u\n", + av->vdev.name, i, sizes[i], size); + return -EINVAL; + } + + alloc_devs[i] = aq->dev; + } + + return 0; +} + +int ipu6_isys_buf_prepare(struct vb2_buffer *vb) +{ + struct ipu6_isys_queue *aq = + vb2_queue_to_ipu6_isys_queue(vb->vb2_queue); + struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq); + + dev_dbg(&av->isys->adev->dev, + "buffer: %s: configured size %u, buffer size %lu\n", + av->vdev.name, + av->mpix.plane_fmt[0].sizeimage, vb2_plane_size(vb, 0)); + + if (av->mpix.plane_fmt[0].sizeimage > vb2_plane_size(vb, 0)) + return -EINVAL; + + vb2_set_plane_payload(vb, 0, av->mpix.plane_fmt[0].bytesperline * + av->mpix.height); + vb->planes[0].data_offset = 0; + + return 0; +} + +static int buf_prepare(struct vb2_buffer *vb) +{ + struct ipu6_isys_queue *aq = + vb2_queue_to_ipu6_isys_queue(vb->vb2_queue); + struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq); + + if (av->isys->adev->isp->flr_done) + return -EIO; + + return aq->buf_prepare(vb); +} + +/* + * Queue a buffer list back to incoming or active queues. The buffers + * are removed from the buffer list. 
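+ * op_flags selects the destination queue (IPU6_ISYS_BUFFER_LIST_FL_ACTIVE
+ * or IPU6_ISYS_BUFFER_LIST_FL_INCOMING, mutually exclusive) and, when
+ * IPU6_ISYS_BUFFER_LIST_FL_SET_STATE is set, each vb2 buffer is also
+ * completed with the given state.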
+ */ +void ipu6_isys_buffer_list_queue(struct ipu6_isys_buffer_list *bl, + unsigned long op_flags, + enum vb2_buffer_state state) +{ + struct ipu6_isys_buffer *ib, *ib_safe; + unsigned long flags; + bool first = true; + + if (!bl) + return; + + WARN_ON_ONCE(!bl->nbufs); + WARN_ON_ONCE(op_flags & IPU6_ISYS_BUFFER_LIST_FL_ACTIVE && + op_flags & IPU6_ISYS_BUFFER_LIST_FL_INCOMING); + + list_for_each_entry_safe(ib, ib_safe, &bl->head, head) { + struct ipu6_isys_video *av; + struct vb2_buffer *vb = ipu6_isys_buffer_to_vb2_buffer(ib); + struct ipu6_isys_queue *aq = + vb2_queue_to_ipu6_isys_queue(vb->vb2_queue); + + if (WARN_ON_ONCE(ib->type != IPU6_ISYS_VIDEO_BUFFER)) + continue; + + av = ipu6_isys_queue_to_video(aq); + spin_lock_irqsave(&aq->lock, flags); + list_del(&ib->head); + if (op_flags & IPU6_ISYS_BUFFER_LIST_FL_ACTIVE) + list_add(&ib->head, &aq->active); + else if (op_flags & IPU6_ISYS_BUFFER_LIST_FL_INCOMING) + list_add_tail(&ib->head, &aq->incoming); + spin_unlock_irqrestore(&aq->lock, flags); + + if (op_flags & IPU6_ISYS_BUFFER_LIST_FL_SET_STATE) + vb2_buffer_done(vb, state); + + if (first) { + dev_dbg(&av->isys->adev->dev, + "queue buf list %p flags %lx, s %d, %d bufs\n", + bl, op_flags, state, bl->nbufs); + first = false; + } + + bl->nbufs--; + } + + WARN_ON(bl->nbufs); +} + +/* + * flush_firmware_streamon_fail() - Flush in cases where requests may + * have been queued to firmware and the *firmware streamon fails for a + * reason or another. + */ +static void flush_firmware_streamon_fail(struct ipu6_isys_stream *stream) +{ + struct ipu6_isys_queue *aq; + unsigned long flags; + + lockdep_assert_held(&stream->mutex); + + list_for_each_entry(aq, &stream->queues, node) { + struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq); + struct ipu6_isys_buffer *ib, *ib_safe; + + spin_lock_irqsave(&aq->lock, flags); + list_for_each_entry_safe(ib, ib_safe, &aq->active, head) { + struct vb2_buffer *vb = + ipu6_isys_buffer_to_vb2_buffer(ib); + + list_del(&ib->head); + if (av->streaming) { + dev_dbg(&av->isys->adev->dev, + "%s: queue buffer %u back to incoming\n", + av->vdev.name, vb->index); + /* Queue already streaming, return to driver. */ + list_add(&ib->head, &aq->incoming); + continue; + } + /* Queue not yet streaming, return to user. */ + dev_dbg(&av->isys->adev->dev, + "%s: return %u back to videobuf2\n", + av->vdev.name, vb->index); + vb2_buffer_done(ipu6_isys_buffer_to_vb2_buffer(ib), + VB2_BUF_STATE_QUEUED); + } + spin_unlock_irqrestore(&aq->lock, flags); + } +} + +/* + * Attempt obtaining a buffer list from the incoming queues, a list of + * buffers that contains one entry from each video buffer queue. If + * all queues have no buffers, the buffers that were already dequeued + * are returned to their queues. 
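+ * Returns 0 on success, or -ENODATA if any queue has no buffer available,
+ * in which case the buffers collected so far are pushed back to their
+ * incoming queues.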
+ */ +static int buffer_list_get(struct ipu6_isys_stream *stream, + struct ipu6_isys_buffer_list *bl) +{ + struct ipu6_isys_queue *aq; + unsigned long flags; + int ret = 0; + + bl->nbufs = 0; + INIT_LIST_HEAD(&bl->head); + + list_for_each_entry(aq, &stream->queues, node) { + struct ipu6_isys_buffer *ib; + + spin_lock_irqsave(&aq->lock, flags); + if (list_empty(&aq->incoming)) { + spin_unlock_irqrestore(&aq->lock, flags); + ret = -ENODATA; + goto error; + } + + ib = list_last_entry(&aq->incoming, + struct ipu6_isys_buffer, head); + + dev_dbg(&stream->isys->adev->dev, "buffer: %s: buffer %u\n", + ipu6_isys_queue_to_video(aq)->vdev.name, + ipu6_isys_buffer_to_vb2_buffer(ib)->index); + list_del(&ib->head); + list_add(&ib->head, &bl->head); + spin_unlock_irqrestore(&aq->lock, flags); + + bl->nbufs++; + } + + dev_dbg(&stream->isys->adev->dev, "get buffer list %p, %u buffers\n", + bl, bl->nbufs); + + return ret; + +error: + if (!list_empty(&bl->head)) + ipu6_isys_buffer_list_queue(bl, + IPU6_ISYS_BUFFER_LIST_FL_INCOMING, + 0); + + return ret; +} + +void +ipu6_isys_buf_to_fw_frame_buf_pin(struct vb2_buffer *vb, + struct ipu6_fw_isys_frame_buff_set_abi *set) +{ + struct ipu6_isys_queue *aq = + vb2_queue_to_ipu6_isys_queue(vb->vb2_queue); + + set->output_pins[aq->fw_output].addr = + vb2_dma_contig_plane_dma_addr(vb, 0); + set->output_pins[aq->fw_output].out_buf_id = vb->index + 1; +} + +/* + * Convert a buffer list to a isys fw ABI framebuffer set. The + * buffer list is not modified. + */ +#define IPU6_ISYS_FRAME_NUM_THRESHOLD (30) +void +ipu6_isys_buf_to_fw_frame_buf(struct ipu6_fw_isys_frame_buff_set_abi *set, + struct ipu6_isys_stream *stream, + struct ipu6_isys_buffer_list *bl) +{ + struct ipu6_isys_buffer *ib; + + WARN_ON(!bl->nbufs); + + set->send_irq_sof = 1; + set->send_resp_sof = 1; + set->send_irq_eof = 0; + set->send_resp_eof = 0; + + if (stream->streaming) + set->send_irq_capture_ack = 0; + else + set->send_irq_capture_ack = 1; + set->send_irq_capture_done = 0; + + set->send_resp_capture_ack = 1; + set->send_resp_capture_done = 1; + if (atomic_read(&stream->sequence) >= IPU6_ISYS_FRAME_NUM_THRESHOLD) { + set->send_resp_capture_ack = 0; + set->send_resp_capture_done = 0; + } + + list_for_each_entry(ib, &bl->head, head) { + struct vb2_buffer *vb = + ipu6_isys_buffer_to_vb2_buffer(ib); + struct ipu6_isys_queue *aq = + vb2_queue_to_ipu6_isys_queue(vb->vb2_queue); + + if (WARN_ON_ONCE(ib->type != IPU6_ISYS_VIDEO_BUFFER)) + continue; + + if (aq->fill_frame_buf_set) + aq->fill_frame_buf_set(vb, set); + } +} + +/* Start streaming for real. The buffer list must be available. 
*/ +static int ipu6_isys_stream_start(struct ipu6_isys_video *av, + struct ipu6_isys_buffer_list *bl, bool error) +{ + struct ipu6_isys_stream *stream = av->stream; + struct ipu6_isys_buffer_list __bl; + int ret; + + mutex_lock(&stream->isys->stream_mutex); + ret = ipu6_isys_video_set_streaming(av, 1, bl); + mutex_unlock(&stream->isys->stream_mutex); + if (ret) + goto out_requeue; + + stream->streaming = 1; + + bl = &__bl; + + do { + struct ipu6_fw_isys_frame_buff_set_abi *buf = NULL; + struct isys_fw_msgs *msg; + enum ipu6_fw_isys_send_type send_type = + IPU6_FW_ISYS_SEND_TYPE_STREAM_CAPTURE; + + ret = buffer_list_get(stream, bl); + if (ret < 0) + break; + + msg = ipu6_get_fw_msg_buf(stream); + if (!msg) + return -ENOMEM; + + buf = &msg->fw_msg.frame; + + ipu6_isys_buf_to_fw_frame_buf(buf, stream, bl); + + ipu6_fw_isys_dump_frame_buff_set(&stream->isys->adev->dev, buf, + stream->nr_output_pins); + + ipu6_isys_buffer_list_queue(bl, + IPU6_ISYS_BUFFER_LIST_FL_ACTIVE, 0); + + ret = ipu6_fw_isys_complex_cmd(stream->isys, + stream->stream_handle, buf, + msg->dma_addr, sizeof(*buf), + send_type); + } while (!WARN_ON(ret)); + + return 0; + +out_requeue: + if (bl && bl->nbufs) + ipu6_isys_buffer_list_queue(bl, + IPU6_ISYS_BUFFER_LIST_FL_INCOMING | + error ? + IPU6_ISYS_BUFFER_LIST_FL_SET_STATE : + 0, error ? VB2_BUF_STATE_ERROR : + VB2_BUF_STATE_QUEUED); + flush_firmware_streamon_fail(stream); + + return ret; +} + +static void buf_queue(struct vb2_buffer *vb) +{ + struct ipu6_isys_queue *aq = + vb2_queue_to_ipu6_isys_queue(vb->vb2_queue); + struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq); + struct ipu6_isys_buffer *ib = vb2_buffer_to_ipu6_isys_buffer(vb); + struct media_pipeline *media_pipe = + media_entity_pipeline(&av->vdev.entity); + struct ipu6_fw_isys_frame_buff_set_abi *buf = NULL; + struct ipu6_isys_stream *stream = av->stream; + struct ipu6_isys_buffer_list bl; + struct isys_fw_msgs *msg; + unsigned long flags; + unsigned int i; + int ret; + + dev_dbg(&av->isys->adev->dev, "queue buffer %u for %s\n", + vb->index, av->vdev.name); + + for (i = 0; i < vb->num_planes; i++) + dev_dbg(&av->isys->adev->dev, "iova: plane %u iova 0x%x\n", i, + (u32)vb2_dma_contig_plane_dma_addr(vb, i)); + + spin_lock_irqsave(&aq->lock, flags); + list_add(&ib->head, &aq->incoming); + spin_unlock_irqrestore(&aq->lock, flags); + + if (!media_pipe || !vb->vb2_queue->start_streaming_called) { + dev_dbg(&av->isys->adev->dev, + "media pipeline is not ready for %s\n", av->vdev.name); + return; + } + + mutex_lock(&stream->mutex); + + if (stream->nr_streaming != stream->nr_queues) { + dev_dbg(&av->isys->adev->dev, + "not streaming yet, adding to incoming\n"); + goto out; + } + + /* + * We just put one buffer to the incoming list of this queue + * (above). Let's see whether all queues in the pipeline would + * have a buffer. 
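+	 * If they do, a complete frame buffer set can be built and handed
+	 * to the firmware as a single capture request.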
+ */ + ret = buffer_list_get(stream, &bl); + if (ret < 0) { + dev_warn(&av->isys->adev->dev, "No buffers available\n"); + goto out; + } + + msg = ipu6_get_fw_msg_buf(stream); + if (!msg) { + ret = -ENOMEM; + goto out; + } + + buf = &msg->fw_msg.frame; + + ipu6_isys_buf_to_fw_frame_buf(buf, stream, &bl); + + ipu6_fw_isys_dump_frame_buff_set(&stream->isys->adev->dev, buf, + stream->nr_output_pins); + + if (!stream->streaming) { + dev_dbg(&av->isys->adev->dev, + "got a buffer to start streaming!\n"); + ret = ipu6_isys_stream_start(av, &bl, true); + if (ret) + dev_err(&av->isys->adev->dev, + "stream start failed.\n"); + goto out; + } + + /* + * We must queue the buffers in the buffer list to the + * appropriate video buffer queues BEFORE passing them to the + * firmware since we could get a buffer event back before we + * have queued them ourselves to the active queue. + */ + ipu6_isys_buffer_list_queue(&bl, IPU6_ISYS_BUFFER_LIST_FL_ACTIVE, 0); + + ret = ipu6_fw_isys_complex_cmd(stream->isys, stream->stream_handle, + buf, msg->dma_addr, sizeof(*buf), + IPU6_FW_ISYS_SEND_TYPE_STREAM_CAPTURE); + if (ret < 0) + dev_err(&av->isys->adev->dev, "send stream capture failed\n"); + +out: + mutex_unlock(&stream->mutex); +} + +int ipu6_isys_link_fmt_validate(struct ipu6_isys_queue *aq) +{ + struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq); + struct v4l2_subdev_format fmt = { 0 }; + struct media_pad *pad = + media_pad_remote_pad_first(av->vdev.entity.pads); + struct v4l2_subdev *sd; + int ret; + + if (!pad) { + dev_dbg(&av->isys->adev->dev, + "video node %s pad not connected\n", av->vdev.name); + return -ENOTCONN; + } + + sd = media_entity_to_v4l2_subdev(pad->entity); + + fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE; + fmt.pad = pad->index; + ret = v4l2_subdev_call(sd, pad, get_fmt, NULL, &fmt); + if (ret) + return ret; + + if (fmt.format.width != av->mpix.width || + fmt.format.height != av->mpix.height) { + dev_dbg(&av->isys->adev->dev, + "wrong width or height %ux%u (%ux%u expected)\n", + av->mpix.width, av->mpix.height, + fmt.format.width, fmt.format.height); + return -EINVAL; + } + + if (fmt.format.field != av->mpix.field) { + dev_dbg(&av->isys->adev->dev, + "wrong field value 0x%8.8x (0x%8.8x expected)\n", + av->mpix.field, fmt.format.field); + return -EINVAL; + } + + if (fmt.format.code != av->pfmt->code) { + dev_dbg(&av->isys->adev->dev, + "wrong media bus code 0x%8.8x (0x%8.8x expected)\n", + av->pfmt->code, fmt.format.code); + return -EINVAL; + } + + return 0; +} + +static void return_buffers(struct ipu6_isys_queue *aq, + enum vb2_buffer_state state) +{ + struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq); + bool need_reset = false; + unsigned long flags; + + spin_lock_irqsave(&aq->lock, flags); + while (!list_empty(&aq->incoming)) { + struct ipu6_isys_buffer *ib = list_first_entry(&aq->incoming, + struct + ipu6_isys_buffer, + head); + struct vb2_buffer *vb = ipu6_isys_buffer_to_vb2_buffer(ib); + + list_del(&ib->head); + spin_unlock_irqrestore(&aq->lock, flags); + + vb2_buffer_done(vb, state); + + dev_dbg(&av->isys->adev->dev, + "%s: stop_streaming incoming %u\n", + ipu6_isys_queue_to_video(vb2_queue_to_ipu6_isys_queue + (vb->vb2_queue))->vdev.name, + vb->index); + + spin_lock_irqsave(&aq->lock, flags); + } + + /* + * Something went wrong (FW crash / HW hang / not all buffers + * returned from isys) if there are still buffers queued in active + * queue. We have to clean up places a bit. 
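+	 * Set need_reset so that the ISYS gets power cycled before it is
+	 * used again; video_open() fails with -EIO while the flag is set.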
+ */ + while (!list_empty(&aq->active)) { + struct ipu6_isys_buffer *ib = + list_first_entry(&aq->active, struct ipu6_isys_buffer, + head); + struct vb2_buffer *vb = ipu6_isys_buffer_to_vb2_buffer(ib); + + list_del(&ib->head); + spin_unlock_irqrestore(&aq->lock, flags); + + vb2_buffer_done(vb, state); + + dev_warn(&av->isys->adev->dev, "%s: cleaning active queue %u\n", + ipu6_isys_queue_to_video(vb2_queue_to_ipu6_isys_queue + (vb->vb2_queue))->vdev.name, + vb->index); + + spin_lock_irqsave(&aq->lock, flags); + need_reset = true; + } + + spin_unlock_irqrestore(&aq->lock, flags); + + if (need_reset) { + mutex_lock(&av->isys->mutex); + av->isys->need_reset = true; + mutex_unlock(&av->isys->mutex); + } +} + +static int start_streaming(struct vb2_queue *q, unsigned int count) +{ + struct ipu6_isys_queue *aq = vb2_queue_to_ipu6_isys_queue(q); + struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq); + struct ipu6_isys_buffer_list __bl, *bl = NULL; + struct ipu6_isys_stream *stream; + int ret; + + dev_dbg(&av->isys->adev->dev, + "stream: %s: width %u, height %u, css pixelformat %u\n", + av->vdev.name, av->mpix.width, av->mpix.height, + av->pfmt->css_pixelformat); + + ret = video_device_pipeline_alloc_start(&av->vdev); + if (ret < 0) { + dev_dbg(&av->isys->adev->dev, "media pipeline start failed\n"); + goto out_return_buffers; + } + + ret = aq->link_fmt_validate(aq); + if (ret) { + dev_dbg(&av->isys->adev->dev, + "%s: link format validation failed (%d)\n", + av->vdev.name, ret); + goto out_pipeline_stop; + } + + /* every ipu6_isys_stream is only enabled once */ + av->stream = ipu6_isys_get_stream(av->isys); + if (!av->stream) { + dev_err(&av->isys->adev->dev, + "no available stream for firmware\n"); + goto out_pipeline_stop; + } + + stream = av->stream; + mutex_lock(&stream->mutex); + if (!stream->nr_streaming) { + ret = ipu6_isys_video_prepare_streaming(av); + if (ret) + goto out_put_stream; + } + + stream->nr_streaming++; + dev_dbg(&av->isys->adev->dev, "queue %u of %u\n", stream->nr_streaming, + stream->nr_queues); + list_add(&aq->node, &stream->queues); + if (stream->nr_streaming != stream->nr_queues) + goto out; + + bl = &__bl; + ret = buffer_list_get(stream, bl); + if (ret < 0) { + dev_dbg(&av->isys->adev->dev, + "no buffer available, postponing streamon\n"); + goto out; + } + + ret = ipu6_isys_stream_start(av, bl, false); + if (ret) + goto out_stream_start; + +out: + mutex_unlock(&stream->mutex); + + return 0; + +out_stream_start: + list_del(&aq->node); + stream->nr_streaming--; + +out_put_stream: + mutex_unlock(&stream->mutex); + ipu6_isys_put_stream(stream); + av->stream = NULL; + +out_pipeline_stop: + video_device_pipeline_stop(&av->vdev); + +out_return_buffers: + return_buffers(aq, VB2_BUF_STATE_QUEUED); + + return ret; +} + +static void stop_streaming(struct vb2_queue *q) +{ + struct ipu6_isys_queue *aq = vb2_queue_to_ipu6_isys_queue(q); + struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq); + struct ipu6_isys_stream *stream = av->stream; + + mutex_lock(&stream->mutex); + + mutex_lock(&av->isys->stream_mutex); + if (stream->nr_streaming == stream->nr_queues && stream->streaming) + ipu6_isys_video_set_streaming(av, 0, NULL); + mutex_unlock(&av->isys->stream_mutex); + + video_device_pipeline_stop(&av->vdev); + av->stream = NULL; + + stream->nr_streaming--; + list_del(&aq->node); + stream->streaming = 0; + + mutex_unlock(&stream->mutex); + ipu6_isys_put_stream(stream); + + return_buffers(aq, VB2_BUF_STATE_ERROR); +} + +static unsigned int +get_sof_sequence_by_timestamp(struct 
ipu6_isys_stream *stream, + struct ipu6_fw_isys_resp_info_abi *info) +{ + u64 time = (u64)info->timestamp[1] << 32 | info->timestamp[0]; + struct ipu6_isys *isys = stream->isys; + unsigned int i; + + /* + * The timestamp is invalid as no TSC in some FPGA platform, + * so get the sequence from pipeline directly in this case. + */ + if (time == 0) + return atomic_read(&stream->sequence) - 1; + + for (i = 0; i < IPU6_ISYS_MAX_PARALLEL_SOF; i++) + if (time == stream->seq[i].timestamp) { + dev_dbg(&isys->adev->dev, + "sof: using seq nr %u for ts %llu\n", + stream->seq[i].sequence, time); + return stream->seq[i].sequence; + } + + dev_dbg(&isys->adev->dev, "SOF: looking for %llu\n", time); + for (i = 0; i < IPU6_ISYS_MAX_PARALLEL_SOF; i++) + dev_dbg(&isys->adev->dev, + "SOF: sequence %u, timestamp value %llu\n", + stream->seq[i].sequence, stream->seq[i].timestamp); + dev_dbg(&isys->adev->dev, "SOF sequence number not found\n"); + + return 0; +} + +static u64 get_sof_ns_delta(struct ipu6_isys_video *av, + struct ipu6_fw_isys_resp_info_abi *info) +{ + struct ipu6_bus_device *adev = to_ipu6_bus_device(&av->isys->adev->dev); + struct ipu6_device *isp = adev->isp; + u64 delta, tsc_now; + + if (!ipu6_buttress_tsc_read(isp, &tsc_now)) + delta = tsc_now - + ((u64)info->timestamp[1] << 32 | info->timestamp[0]); + else + delta = 0; + + return ipu6_buttress_tsc_ticks_to_ns(delta, isp); +} + +void +ipu6_isys_buf_calc_sequence_time(struct ipu6_isys_buffer *ib, + struct ipu6_fw_isys_resp_info_abi *info) +{ + struct vb2_buffer *vb = ipu6_isys_buffer_to_vb2_buffer(ib); + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + struct ipu6_isys_queue *aq = + vb2_queue_to_ipu6_isys_queue(vb->vb2_queue); + struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq); + struct device *dev = &av->isys->adev->dev; + struct ipu6_isys_stream *stream = av->stream; + u64 ns; + u32 sequence; + + ns = ktime_get_ns() - get_sof_ns_delta(av, info); + sequence = get_sof_sequence_by_timestamp(stream, info); + + vbuf->vb2_buf.timestamp = ns; + vbuf->sequence = sequence; + + dev_dbg(dev, "buf: %s: buffer done, CPU-timestamp:%lld, sequence:%d\n", + av->vdev.name, ktime_get_ns(), sequence); + dev_dbg(dev, "index:%d, vbuf timestamp:%lld, endl\n", + vb->index, vbuf->vb2_buf.timestamp); +} + +void ipu6_isys_queue_buf_done(struct ipu6_isys_buffer *ib) +{ + struct vb2_buffer *vb = ipu6_isys_buffer_to_vb2_buffer(ib); + + if (atomic_read(&ib->str2mmio_flag)) { + vb2_buffer_done(vb, VB2_BUF_STATE_ERROR); + /* + * Operation on buffer is ended with error and will be reported + * to the userspace when it is de-queued + */ + atomic_set(&ib->str2mmio_flag, 0); + } else { + vb2_buffer_done(vb, VB2_BUF_STATE_DONE); + } +} + +void ipu6_isys_queue_buf_ready(struct ipu6_isys_stream *stream, + struct ipu6_fw_isys_resp_info_abi *info) +{ + struct ipu6_isys_queue *aq = stream->output_pins[info->pin_id].aq; + struct ipu6_isys *isys = stream->isys; + struct ipu6_isys_buffer *ib; + struct vb2_buffer *vb; + unsigned long flags; + bool first = true; + struct vb2_v4l2_buffer *buf; + + dev_dbg(&isys->adev->dev, "buffer: %s: received buffer %8.8x\n", + ipu6_isys_queue_to_video(aq)->vdev.name, info->pin.addr); + + spin_lock_irqsave(&aq->lock, flags); + if (list_empty(&aq->active)) { + spin_unlock_irqrestore(&aq->lock, flags); + dev_err(&isys->adev->dev, "active queue empty\n"); + return; + } + + list_for_each_entry_reverse(ib, &aq->active, head) { + dma_addr_t addr; + + vb = ipu6_isys_buffer_to_vb2_buffer(ib); + addr = vb2_dma_contig_plane_dma_addr(vb, 0); + + if 
(info->pin.addr != addr) { + if (first) + dev_err(&isys->adev->dev, + "Unexpected buffer address %pad\n", + &addr); + first = false; + continue; + } + + if (info->error_info.error == + IPU6_FW_ISYS_ERROR_HW_REPORTED_STR2MMIO) { + /* + * Check for error message: + * 'IPU6_FW_ISYS_ERROR_HW_REPORTED_STR2MMIO' + */ + atomic_set(&ib->str2mmio_flag, 1); + } + dev_dbg(&isys->adev->dev, "buffer: found buffer %pad\n", &addr); + + buf = to_vb2_v4l2_buffer(vb); + buf->field = V4L2_FIELD_NONE; + + list_del(&ib->head); + spin_unlock_irqrestore(&aq->lock, flags); + + ipu6_isys_buf_calc_sequence_time(ib, info); + + ipu6_isys_queue_buf_done(ib); + + return; + } + + dev_err(&isys->adev->dev, + "WARNING: cannot find a matching video buffer!\n"); + + spin_unlock_irqrestore(&aq->lock, flags); +} + +static const struct vb2_ops ipu6_isys_queue_ops = { + .queue_setup = queue_setup, + .wait_prepare = vb2_ops_wait_prepare, + .wait_finish = vb2_ops_wait_finish, + .buf_prepare = buf_prepare, + .start_streaming = start_streaming, + .stop_streaming = stop_streaming, + .buf_queue = buf_queue, +}; + +int ipu6_isys_queue_init(struct ipu6_isys_queue *aq) +{ + struct ipu6_isys *isys = ipu6_isys_queue_to_video(aq)->isys; + struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq); + int ret; + + /* no support for userptr */ + if (!aq->vbq.io_modes) + aq->vbq.io_modes = VB2_MMAP | VB2_DMABUF; + + aq->vbq.drv_priv = aq; + aq->vbq.ops = &ipu6_isys_queue_ops; + aq->vbq.lock = &av->mutex; + aq->vbq.mem_ops = &vb2_dma_contig_memops; + aq->vbq.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE; + aq->vbq.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC; + + ret = vb2_queue_init(&aq->vbq); + if (ret) + return ret; + + aq->dev = &isys->adev->dev; + aq->vbq.dev = &isys->adev->dev; + spin_lock_init(&aq->lock); + INIT_LIST_HEAD(&aq->active); + INIT_LIST_HEAD(&aq->incoming); + + return 0; +} + +void ipu6_isys_queue_cleanup(struct ipu6_isys_queue *aq) +{ + vb2_queue_release(&aq->vbq); +} diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-queue.h b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.h new file mode 100644 index 000000000000..f57f198b1deb --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.h @@ -0,0 +1,97 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2013 - 2023 Intel Corporation */ + +#ifndef IPU6_ISYS_QUEUE_H +#define IPU6_ISYS_QUEUE_H + +#include +#include + +#include + +struct ipu6_isys_video; +struct ipu6_isys_stream; +struct ipu6_fw_isys_resp_info_abi; +struct ipu6_fw_isys_frame_buff_set_abi; + +enum ipu6_isys_buffer_type { + IPU6_ISYS_VIDEO_BUFFER, +}; + +struct ipu6_isys_queue { + struct list_head node; /* struct ipu6_isys_stream.queues */ + struct vb2_queue vbq; + struct device *dev; + /* + * @lock: serialise access to queued and pre_streamon_queued + */ + spinlock_t lock; + struct list_head active; + struct list_head incoming; + unsigned int fw_output; + int (*buf_prepare)(struct vb2_buffer *vb); + void (*fill_frame_buf_set)(struct vb2_buffer *vb, + struct ipu6_fw_isys_frame_buff_set_abi *set); + int (*link_fmt_validate)(struct ipu6_isys_queue *aq); +}; + +struct ipu6_isys_buffer { + struct list_head head; + enum ipu6_isys_buffer_type type; + atomic_t str2mmio_flag; +}; + +struct ipu6_isys_video_buffer { + struct vb2_v4l2_buffer vb_v4l2; + struct ipu6_isys_buffer ib; +}; + +#define IPU6_ISYS_BUFFER_LIST_FL_INCOMING BIT(0) +#define IPU6_ISYS_BUFFER_LIST_FL_ACTIVE BIT(1) +#define IPU6_ISYS_BUFFER_LIST_FL_SET_STATE BIT(2) + +struct ipu6_isys_buffer_list { + struct list_head head; + unsigned int 
nbufs; +}; + +#define vb2_queue_to_ipu6_isys_queue(__vb2) \ + container_of(__vb2, struct ipu6_isys_queue, vbq) + +#define ipu6_isys_to_isys_video_buffer(__ib) \ + container_of(__ib, struct ipu6_isys_video_buffer, ib) + +#define vb2_buffer_to_ipu6_isys_video_buffer(__vb) \ + container_of(to_vb2_v4l2_buffer(__vb), \ + struct ipu6_isys_video_buffer, vb_v4l2) + +#define ipu6_isys_buffer_to_vb2_buffer(__ib) \ + (&ipu6_isys_to_isys_video_buffer(__ib)->vb_v4l2.vb2_buf) + +#define vb2_buffer_to_ipu6_isys_buffer(__vb) \ + (&vb2_buffer_to_ipu6_isys_video_buffer(__vb)->ib) + +int ipu6_isys_buf_prepare(struct vb2_buffer *vb); + +void ipu6_isys_buffer_list_queue(struct ipu6_isys_buffer_list *bl, + unsigned long op_flags, + enum vb2_buffer_state state); +void +ipu6_isys_buf_to_fw_frame_buf_pin(struct vb2_buffer *vb, + struct ipu6_fw_isys_frame_buff_set_abi *set); +void +ipu6_isys_buf_to_fw_frame_buf(struct ipu6_fw_isys_frame_buff_set_abi *set, + struct ipu6_isys_stream *stream, + struct ipu6_isys_buffer_list *bl); +int ipu6_isys_link_fmt_validate(struct ipu6_isys_queue *aq); + +void +ipu6_isys_buf_calc_sequence_time(struct ipu6_isys_buffer *ib, + struct ipu6_fw_isys_resp_info_abi *info); +void ipu6_isys_queue_buf_done(struct ipu6_isys_buffer *ib); +void ipu6_isys_queue_buf_ready(struct ipu6_isys_stream *stream, + struct ipu6_fw_isys_resp_info_abi *info); +int ipu6_isys_queue_init(struct ipu6_isys_queue *aq); +void ipu6_isys_queue_cleanup(struct ipu6_isys_queue *aq); + +#endif /* IPU6_ISYS_QUEUE_H */ diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-video.c b/drivers/media/pci/intel/ipu6/ipu6-isys-video.c new file mode 100644 index 000000000000..22105dead119 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-isys-video.c @@ -0,0 +1,1132 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright (C) 2013 - 2023 Intel Corporation + +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include "ipu6.h" +#include "ipu6-bus.h" +#include "ipu6-cpd.h" +#include "ipu6-fw-com.h" +#include "ipu6-fw-isys.h" +#include "ipu6-isys.h" +#include "ipu6-isys-csi2.h" +#include "ipu6-isys-video.h" +#include "ipu6-platform.h" +#include "ipu6-platform-buttress-regs.h" +#include "ipu6-platform-isys-csi2-reg.h" +#include "ipu6-platform-regs.h" + + +const struct ipu6_isys_pixelformat ipu6_isys_pfmts[] = { + {V4L2_PIX_FMT_SBGGR12, 16, 12, MEDIA_BUS_FMT_SBGGR12_1X12, + IPU6_FW_ISYS_FRAME_FORMAT_RAW16}, + {V4L2_PIX_FMT_SGBRG12, 16, 12, MEDIA_BUS_FMT_SGBRG12_1X12, + IPU6_FW_ISYS_FRAME_FORMAT_RAW16}, + {V4L2_PIX_FMT_SGRBG12, 16, 12, MEDIA_BUS_FMT_SGRBG12_1X12, + IPU6_FW_ISYS_FRAME_FORMAT_RAW16}, + {V4L2_PIX_FMT_SRGGB12, 16, 12, MEDIA_BUS_FMT_SRGGB12_1X12, + IPU6_FW_ISYS_FRAME_FORMAT_RAW16}, + {V4L2_PIX_FMT_SBGGR10, 16, 10, MEDIA_BUS_FMT_SBGGR10_1X10, + IPU6_FW_ISYS_FRAME_FORMAT_RAW16}, + {V4L2_PIX_FMT_SGBRG10, 16, 10, MEDIA_BUS_FMT_SGBRG10_1X10, + IPU6_FW_ISYS_FRAME_FORMAT_RAW16}, + {V4L2_PIX_FMT_SGRBG10, 16, 10, MEDIA_BUS_FMT_SGRBG10_1X10, + IPU6_FW_ISYS_FRAME_FORMAT_RAW16}, + {V4L2_PIX_FMT_SRGGB10, 16, 10, MEDIA_BUS_FMT_SRGGB10_1X10, + IPU6_FW_ISYS_FRAME_FORMAT_RAW16}, + {V4L2_PIX_FMT_SBGGR8, 8, 8, MEDIA_BUS_FMT_SBGGR8_1X8, + IPU6_FW_ISYS_FRAME_FORMAT_RAW8}, + {V4L2_PIX_FMT_SGBRG8, 8, 8, MEDIA_BUS_FMT_SGBRG8_1X8, + IPU6_FW_ISYS_FRAME_FORMAT_RAW8}, + {V4L2_PIX_FMT_SGRBG8, 8, 8, MEDIA_BUS_FMT_SGRBG8_1X8, + IPU6_FW_ISYS_FRAME_FORMAT_RAW8}, + {V4L2_PIX_FMT_SRGGB8, 8, 8, MEDIA_BUS_FMT_SRGGB8_1X8, + IPU6_FW_ISYS_FRAME_FORMAT_RAW8}, + {V4L2_PIX_FMT_SBGGR12P, 12, 12, 
MEDIA_BUS_FMT_SBGGR12_1X12, + IPU6_FW_ISYS_FRAME_FORMAT_RAW12}, + {V4L2_PIX_FMT_SGBRG12P, 12, 12, MEDIA_BUS_FMT_SGBRG12_1X12, + IPU6_FW_ISYS_FRAME_FORMAT_RAW12}, + {V4L2_PIX_FMT_SGRBG12P, 12, 12, MEDIA_BUS_FMT_SGRBG12_1X12, + IPU6_FW_ISYS_FRAME_FORMAT_RAW12}, + {V4L2_PIX_FMT_SRGGB12P, 12, 12, MEDIA_BUS_FMT_SRGGB12_1X12, + IPU6_FW_ISYS_FRAME_FORMAT_RAW12}, + {V4L2_PIX_FMT_SBGGR10P, 10, 10, MEDIA_BUS_FMT_SBGGR10_1X10, + IPU6_FW_ISYS_FRAME_FORMAT_RAW10}, + {V4L2_PIX_FMT_SGBRG10P, 10, 10, MEDIA_BUS_FMT_SGBRG10_1X10, + IPU6_FW_ISYS_FRAME_FORMAT_RAW10}, + {V4L2_PIX_FMT_SGRBG10P, 10, 10, MEDIA_BUS_FMT_SGRBG10_1X10, + IPU6_FW_ISYS_FRAME_FORMAT_RAW10}, + {V4L2_PIX_FMT_SRGGB10P, 10, 10, MEDIA_BUS_FMT_SRGGB10_1X10, + IPU6_FW_ISYS_FRAME_FORMAT_RAW10}, + {V4L2_PIX_FMT_UYVY, 16, 16, MEDIA_BUS_FMT_UYVY8_1X16, + IPU6_FW_ISYS_FRAME_FORMAT_UYVY}, + {V4L2_PIX_FMT_YUYV, 16, 16, MEDIA_BUS_FMT_YUYV8_1X16, + IPU6_FW_ISYS_FRAME_FORMAT_YUYV}, + {V4L2_PIX_FMT_RGB565, 16, 16, MEDIA_BUS_FMT_RGB565_1X16, + IPU6_FW_ISYS_FRAME_FORMAT_RGB565}, + {V4L2_PIX_FMT_BGR24, 24, 24, MEDIA_BUS_FMT_RGB888_1X24, + IPU6_FW_ISYS_FRAME_FORMAT_RGBA888}, +}; + +static int video_open(struct file *file) +{ + struct ipu6_isys_video *av = video_drvdata(file); + struct ipu6_isys *isys = av->isys; + struct ipu6_bus_device *adev = to_ipu6_bus_device(&isys->adev->dev); + struct ipu6_device *isp = adev->isp; + const struct ipu6_isys_internal_pdata *ipdata; + int ret; + + mutex_lock(&isys->mutex); + + if (isys->need_reset || isp->flr_done) { + mutex_unlock(&isys->mutex); + dev_warn(&isys->adev->dev, "isys power cycle required\n"); + return -EIO; + } + mutex_unlock(&isys->mutex); + + ret = pm_runtime_resume_and_get(&isys->adev->dev); + if (ret < 0) + return ret; + + ret = v4l2_fh_open(file); + if (ret) + goto out_power_down; + + mutex_lock(&isys->mutex); + + if (isys->video_opened++) + goto unlock; + + ipdata = isys->pdata->ipdata; + ipu6_configure_spc(adev->isp, &ipdata->hw_variant, + IPU6_CPD_PKG_DIR_ISYS_SERVER_IDX, isys->pdata->base, + adev->pkg_dir, adev->pkg_dir_dma_addr); + + /* + * Buffers could have been left to wrong queue at last closure. + * Move them now back to empty buffer queue. + */ + ipu6_cleanup_fw_msg_bufs(isys); + + if (isys->fwcom) { + /* + * Something went wrong in previous shutdown. As we are now + * restarting isys we can safely delete old context. 
+ */ + dev_info(&isys->adev->dev, "Clearing old context\n"); + ipu6_fw_isys_cleanup(isys); + } + + ret = ipu6_fw_isys_init(av->isys, ipdata->num_parallel_streams); + if (ret < 0) + goto out_lib_init; +unlock: + mutex_unlock(&isys->mutex); + + return 0; + +out_lib_init: + isys->video_opened--; + mutex_unlock(&isys->mutex); + v4l2_fh_release(file); + +out_power_down: + pm_runtime_put(&isys->adev->dev); + + return ret; +} + +static int video_release(struct file *file) +{ + struct ipu6_isys_video *av = video_drvdata(file); + int ret = 0; + + vb2_fop_release(file); + + mutex_lock(&av->isys->mutex); + + if (!--av->isys->video_opened) { + ipu6_fw_isys_close(av->isys); + if (av->isys->fwcom) { + av->isys->need_reset = true; + ret = -EIO; + } + } + + mutex_unlock(&av->isys->mutex); + + if (av->isys->need_reset) + pm_runtime_put_sync(&av->isys->adev->dev); + else + pm_runtime_put(&av->isys->adev->dev); + + return ret; +} + +static const struct ipu6_isys_pixelformat * +ipu6_isys_get_pixelformat(struct ipu6_isys_video *av, u32 pixelformat) +{ + unsigned int i; + + for (i = 0; i < ARRAY_SIZE(ipu6_isys_pfmts); i++) { + const struct ipu6_isys_pixelformat *pfmt = + &ipu6_isys_pfmts[i]; + + if (pfmt->pixelformat == pixelformat) + return pfmt; + } + + return &ipu6_isys_pfmts[0]; +} + +int ipu6_isys_vidioc_querycap(struct file *file, void *fh, + struct v4l2_capability *cap) +{ + struct ipu6_isys_video *av = video_drvdata(file); + + strscpy(cap->driver, IPU6_ISYS_NAME, sizeof(cap->driver)); + strscpy(cap->card, av->isys->media_dev.model, sizeof(cap->card)); + + return 0; +} + +int ipu6_isys_vidioc_enum_fmt(struct file *file, void *fh, + struct v4l2_fmtdesc *f) +{ + if (f->index >= ARRAY_SIZE(ipu6_isys_pfmts)) + return -EINVAL; + + f->flags = 0; + f->pixelformat = ipu6_isys_pfmts[f->index].pixelformat; + f->mbus_code = ipu6_isys_pfmts[f->index].code; + + return 0; +} + +static int vidioc_g_fmt_vid_cap_mplane(struct file *file, void *fh, + struct v4l2_format *fmt) +{ + struct ipu6_isys_video *av = video_drvdata(file); + + fmt->fmt.pix_mp = av->mpix; + + return 0; +} + +static const struct ipu6_isys_pixelformat * +ipu6_isys_video_try_fmt_vid_mplane(struct ipu6_isys_video *av, + struct v4l2_pix_format_mplane *mpix) +{ + const struct ipu6_isys_pixelformat *pfmt = + ipu6_isys_get_pixelformat(av, mpix->pixelformat); + + mpix->pixelformat = pfmt->pixelformat; + mpix->num_planes = 1; + + mpix->width = clamp(mpix->width, IPU6_ISYS_MIN_WIDTH, + IPU6_ISYS_MAX_WIDTH); + mpix->height = clamp(mpix->height, IPU6_ISYS_MIN_HEIGHT, + IPU6_ISYS_MAX_HEIGHT); + + if (pfmt->bpp != pfmt->bpp_packed) + mpix->plane_fmt[0].bytesperline = + mpix->width * DIV_ROUND_UP(pfmt->bpp, BITS_PER_BYTE); + else + mpix->plane_fmt[0].bytesperline = + DIV_ROUND_UP((unsigned int)mpix->width * pfmt->bpp, + BITS_PER_BYTE); + + mpix->plane_fmt[0].bytesperline = ALIGN(mpix->plane_fmt[0].bytesperline, + av->isys->line_align); + + /* + * (height + 1) * bytesperline due to a hardware issue: the DMA unit + * is a power of two, and a line should be transferred as few units + * as possible. The result is that up to line length more data than + * the image size may be transferred to memory after the image. + * Another limitation is the GDA allocation unit size. For low + * resolution it gives a bigger number. Use larger one to avoid + * memory corruption. 
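+	 * The resulting sizeimage is therefore at least bytesperline * height
+	 * plus max(bytesperline, isys_dma_overshoot).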
+ */ + mpix->plane_fmt[0].sizeimage = + max(max(mpix->plane_fmt[0].sizeimage, + mpix->plane_fmt[0].bytesperline * mpix->height + + max(mpix->plane_fmt[0].bytesperline, + av->isys->pdata->ipdata->isys_dma_overshoot)), 1U); + + memset(mpix->plane_fmt[0].reserved, 0, + sizeof(mpix->plane_fmt[0].reserved)); + + mpix->field = V4L2_FIELD_NONE; + + mpix->colorspace = V4L2_COLORSPACE_RAW; + mpix->ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT; + mpix->quantization = V4L2_QUANTIZATION_DEFAULT; + mpix->xfer_func = V4L2_XFER_FUNC_DEFAULT; + + return pfmt; +} + +static int vidioc_s_fmt_vid_cap_mplane(struct file *file, void *fh, + struct v4l2_format *f) +{ + struct ipu6_isys_video *av = video_drvdata(file); + + if (av->aq.vbq.streaming) + return -EBUSY; + + av->pfmt = ipu6_isys_video_try_fmt_vid_mplane(av, &f->fmt.pix_mp); + av->mpix = f->fmt.pix_mp; + + return 0; +} + +static int vidioc_try_fmt_vid_cap_mplane(struct file *file, void *fh, + struct v4l2_format *f) +{ + struct ipu6_isys_video *av = video_drvdata(file); + + ipu6_isys_video_try_fmt_vid_mplane(av, &f->fmt.pix_mp); + + return 0; +} + +static int vidioc_enum_input(struct file *file, void *fh, + struct v4l2_input *input) +{ + if (input->index > 0) + return -EINVAL; + strscpy(input->name, "camera", sizeof(input->name)); + input->type = V4L2_INPUT_TYPE_CAMERA; + + return 0; +} + +static int vidioc_g_input(struct file *file, void *fh, unsigned int *input) +{ + *input = 0; + + return 0; +} + +static int vidioc_s_input(struct file *file, void *fh, unsigned int input) +{ + return input == 0 ? 0 : -EINVAL; +} + +static int link_validate(struct media_link *link) +{ + struct ipu6_isys_video *av = + container_of(link->sink, struct ipu6_isys_video, pad); + struct ipu6_isys_subdev *asd; + struct v4l2_mbus_framefmt *ffmt; + struct v4l2_subdev *sd; + + if (!link->source->entity) + return -EINVAL; + + sd = media_entity_to_v4l2_subdev(link->source->entity); + asd = to_ipu6_isys_subdev(sd); + ffmt = &asd->ffmt[link->source->index]; + if (ffmt->code != av->pfmt->code || ffmt->width != av->mpix.width || + ffmt->height != av->mpix.height) { + dev_err(&av->isys->adev->dev, + "vdev link validation failed. 
%dx%d,%x != %dx%d,%x\n", + ffmt->width, ffmt->height, ffmt->code, + av->mpix.width, av->mpix.height, av->pfmt->code); + return -EINVAL; + } + + return 0; +} + +static void get_stream_opened(struct ipu6_isys_video *av) +{ + unsigned long flags; + + spin_lock_irqsave(&av->isys->streams_lock, flags); + av->isys->stream_opened++; + spin_unlock_irqrestore(&av->isys->streams_lock, flags); +} + +static void put_stream_opened(struct ipu6_isys_video *av) +{ + unsigned long flags; + + spin_lock_irqsave(&av->isys->streams_lock, flags); + av->isys->stream_opened--; + spin_unlock_irqrestore(&av->isys->streams_lock, flags); +} + +static void ipu6_isys_fw_pin_cfg(struct ipu6_isys_video *av, + struct ipu6_fw_isys_stream_cfg_data_abi *cfg, + struct v4l2_mbus_framefmt *ffmt) +{ + struct ipu6_isys_stream *stream = av->stream; + struct ipu6_isys_queue *aq = &av->aq; + struct ipu6_fw_isys_input_pin_info_abi *input_pin; + struct ipu6_fw_isys_output_pin_info_abi *output_pin; + struct ipu6_isys *isys = av->isys; + int pin = cfg->nof_input_pins++; + + input_pin = &cfg->input_pins[pin]; + input_pin->input_res.width = ffmt->width; + input_pin->input_res.height = ffmt->height; + input_pin->dt = ipu6_isys_mbus_code_to_mipi(ffmt->code); + input_pin->bits_per_pix = ipu6_fw_isys_get_bpp_by_dt(input_pin->dt); + input_pin->mapped_dt = 0x40; /* invalid mipi data type */ + input_pin->mipi_decompression = 0; + input_pin->capture_mode = IPU6_FW_ISYS_CAPTURE_MODE_REGULAR; + input_pin->mipi_store_mode = av->pfmt->bpp == av->pfmt->bpp_packed ? + IPU6_FW_ISYS_MIPI_STORE_MODE_DISCARD_LONG_HEADER : + IPU6_FW_ISYS_MIPI_STORE_MODE_NORMAL; + input_pin->crop_first_and_last_lines = cfg->crop.top_offset & 1; + + pin = cfg->nof_output_pins++; + aq->fw_output = pin; + stream->output_pins[pin].pin_ready = ipu6_isys_queue_buf_ready; + stream->output_pins[pin].aq = aq; + + output_pin = &cfg->output_pins[pin]; + output_pin->input_pin_id = 0; + output_pin->output_res.width = av->mpix.width; + output_pin->output_res.height = av->mpix.height; + + output_pin->stride = av->mpix.plane_fmt[0].bytesperline; + if (av->pfmt->bpp != av->pfmt->bpp_packed) + output_pin->pt = IPU6_FW_ISYS_PIN_TYPE_RAW_SOC; + else + output_pin->pt = IPU6_FW_ISYS_PIN_TYPE_MIPI; + output_pin->ft = av->pfmt->css_pixelformat; + output_pin->send_irq = 1; + memset(output_pin->ts_offsets, 0, sizeof(output_pin->ts_offsets)); + output_pin->s2m_pixel_soc_pixel_remapping = + S2M_PIXEL_SOC_PIXEL_REMAPPING_FLAG_NO_REMAPPING; + output_pin->csi_be_soc_pixel_remapping = + CSI_BE_SOC_PIXEL_REMAPPING_FLAG_NO_REMAPPING; + + output_pin->snoopable = true; + output_pin->error_handling_enable = false; + output_pin->sensor_type = isys->sensor_type++; + if (isys->sensor_type > isys->pdata->ipdata->sensor_type_end) + isys->sensor_type = isys->pdata->ipdata->sensor_type_start; +} + +static int start_stream_firmware(struct ipu6_isys_video *av, + struct ipu6_isys_buffer_list *bl) +{ + struct media_pad *src_pad = media_pad_remote_pad_first(&av->pad); + struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(src_pad->entity); + struct ipu6_isys_subdev *asd = to_ipu6_isys_subdev(sd); + struct ipu6_fw_isys_stream_cfg_data_abi *stream_cfg; + struct ipu6_fw_isys_frame_buff_set_abi *buf = NULL; + struct ipu6_isys_stream *stream = av->stream; + struct device *dev = &av->isys->adev->dev; + struct ipu6_fw_isys_cropping_abi *crop; + struct isys_fw_msgs *msg = NULL; + struct ipu6_isys_queue *aq; + int ret, retout, tout; + u16 send_type; + + msg = ipu6_get_fw_msg_buf(stream); + if (!msg) + return -ENOMEM; + + stream_cfg 
= &msg->fw_msg.stream; + stream_cfg->src = stream->stream_source; + stream_cfg->vc = 0; + stream_cfg->isl_use = 0; + stream_cfg->sensor_type = IPU6_FW_ISYS_SENSOR_MODE_NORMAL; + + crop = &stream_cfg->crop; + crop->left_offset = asd->crop.left; + crop->top_offset = asd->crop.top; + crop->right_offset = asd->crop.left + asd->crop.width; + crop->bottom_offset = asd->crop.top + asd->crop.height; + + list_for_each_entry(aq, &stream->queues, node) { + struct ipu6_isys_video *__av = ipu6_isys_queue_to_video(aq); + + ipu6_isys_fw_pin_cfg(__av, stream_cfg, asd->ffmt); + } + + ipu6_fw_isys_dump_stream_cfg(dev, stream_cfg); + + stream->nr_output_pins = stream_cfg->nof_output_pins; + + reinit_completion(&stream->stream_open_completion); + + ret = ipu6_fw_isys_complex_cmd(av->isys, stream->stream_handle, + stream_cfg, msg->dma_addr, + sizeof(*stream_cfg), + IPU6_FW_ISYS_SEND_TYPE_STREAM_OPEN); + if (ret < 0) { + dev_err(dev, "can't open stream (%d)\n", ret); + ipu6_put_fw_msg_buf(av->isys, (u64)stream_cfg); + return ret; + } + + get_stream_opened(av); + + tout = wait_for_completion_timeout(&stream->stream_open_completion, + IPU6_FW_CALL_TIMEOUT_JIFFIES); + + ipu6_put_fw_msg_buf(av->isys, (u64)stream_cfg); + + if (!tout) { + dev_err(dev, "stream open time out\n"); + ret = -ETIMEDOUT; + goto out_put_stream_opened; + } + if (stream->error) { + dev_err(dev, "stream open error: %d\n", stream->error); + ret = -EIO; + goto out_put_stream_opened; + } + dev_dbg(dev, "start stream: open complete\n"); + + if (bl) { + msg = ipu6_get_fw_msg_buf(stream); + if (!msg) { + ret = -ENOMEM; + goto out_put_stream_opened; + } + buf = &msg->fw_msg.frame; + } + + if (bl) { + ipu6_isys_buf_to_fw_frame_buf(buf, stream, bl); + ipu6_isys_buffer_list_queue(bl, + IPU6_ISYS_BUFFER_LIST_FL_ACTIVE, 0); + } + + reinit_completion(&stream->stream_start_completion); + + if (bl) { + send_type = IPU6_FW_ISYS_SEND_TYPE_STREAM_START_AND_CAPTURE; + ipu6_fw_isys_dump_frame_buff_set(dev, buf, + stream_cfg->nof_output_pins); + ret = ipu6_fw_isys_complex_cmd(av->isys, stream->stream_handle, + buf, msg->dma_addr, + sizeof(*buf), send_type); + } else { + send_type = IPU6_FW_ISYS_SEND_TYPE_STREAM_START; + ret = ipu6_fw_isys_simple_cmd(av->isys, stream->stream_handle, + send_type); + } + + if (ret < 0) { + dev_err(dev, "can't start streaming (%d)\n", ret); + goto out_stream_close; + } + + tout = wait_for_completion_timeout(&stream->stream_start_completion, + IPU6_FW_CALL_TIMEOUT_JIFFIES); + if (!tout) { + dev_err(dev, "stream start time out\n"); + ret = -ETIMEDOUT; + goto out_stream_close; + } + if (stream->error) { + dev_err(dev, "stream start error: %d\n", stream->error); + ret = -EIO; + goto out_stream_close; + } + dev_dbg(dev, "start stream: complete\n"); + + return 0; + +out_stream_close: + reinit_completion(&stream->stream_close_completion); + + retout = ipu6_fw_isys_simple_cmd(av->isys, + stream->stream_handle, + IPU6_FW_ISYS_SEND_TYPE_STREAM_CLOSE); + if (retout < 0) { + dev_dbg(dev, "can't close stream (%d)\n", retout); + goto out_put_stream_opened; + } + + tout = wait_for_completion_timeout(&stream->stream_close_completion, + IPU6_FW_CALL_TIMEOUT_JIFFIES); + if (!tout) + dev_err(dev, "stream close time out\n"); + else if (stream->error) + dev_err(dev, "stream close error: %d\n", stream->error); + else + dev_dbg(dev, "stream close complete\n"); + +out_put_stream_opened: + put_stream_opened(av); + + return ret; +} + +static void stop_streaming_firmware(struct ipu6_isys_video *av) +{ + struct ipu6_isys_stream *stream = av->stream; + struct 
device *dev = &av->isys->adev->dev; + int ret, tout; + + reinit_completion(&stream->stream_stop_completion); + + ret = ipu6_fw_isys_simple_cmd(av->isys, stream->stream_handle, + IPU6_FW_ISYS_SEND_TYPE_STREAM_FLUSH); + + if (ret < 0) { + dev_err(dev, "can't stop stream (%d)\n", ret); + return; + } + + tout = wait_for_completion_timeout(&stream->stream_stop_completion, + IPU6_FW_CALL_TIMEOUT_JIFFIES); + if (!tout) + dev_err(dev, "stream stop time out\n"); + else if (stream->error) + dev_err(dev, "stream stop error: %d\n", stream->error); + else + dev_dbg(dev, "stop stream: complete\n"); +} + +static void close_streaming_firmware(struct ipu6_isys_video *av) +{ + struct ipu6_isys_stream *stream = av->stream; + struct device *dev = &av->isys->adev->dev; + int ret, tout; + + reinit_completion(&stream->stream_close_completion); + + ret = ipu6_fw_isys_simple_cmd(av->isys, stream->stream_handle, + IPU6_FW_ISYS_SEND_TYPE_STREAM_CLOSE); + if (ret < 0) { + dev_err(dev, "can't close stream (%d)\n", ret); + return; + } + + tout = wait_for_completion_timeout(&stream->stream_close_completion, + IPU6_FW_CALL_TIMEOUT_JIFFIES); + if (!tout) + dev_err(dev, "stream close time out\n"); + else if (stream->error) + dev_err(dev, "stream close error: %d\n", stream->error); + else + dev_dbg(dev, "close stream: complete\n"); + + put_stream_opened(av); +} + +int ipu6_isys_video_prepare_streaming(struct ipu6_isys_video *av) +{ + struct ipu6_isys_stream *stream = av->stream; + struct ipu6_isys *isys = av->isys; + struct device *dev = &isys->adev->dev; + struct media_pipeline *pipe = av->pad.pipe; + struct media_pipeline_entity_iter iter; + struct media_entity *entity; + struct v4l2_subdev *sd, *remote_sd; + struct media_pad *remote_pad; + int ret; + + WARN_ON(stream->nr_streaming); + stream->nr_queues = 1; + stream->source_entity = NULL; + atomic_set(&stream->sequence, 0); + + stream->csi2 = NULL; + stream->seq_index = 0; + memset(stream->seq, 0, sizeof(stream->seq)); + + WARN_ON(!list_empty(&stream->queues)); + + if (!pipe) { + dev_err(dev, "No media pipe for %s\n", av->vdev.name); + return -EINVAL; + } + + ret = media_pipeline_entity_iter_init(pipe, &iter); + if (ret) + return ret; + + media_pipeline_for_each_entity(pipe, &iter, entity) { + /* Non-subdev nodes can be ignored here. 
*/ + if (!is_media_entity_v4l2_subdev(entity)) + continue; + + if (stream->source_entity) + continue; + + sd = media_entity_to_v4l2_subdev(entity); + if (!sd || sd->owner != THIS_MODULE) + continue; + + remote_pad = media_pad_remote_pad_unique(&entity->pads[0]); + if (!remote_pad) + continue; + + remote_sd = media_entity_to_v4l2_subdev(remote_pad->entity); + if (!remote_sd || remote_sd->owner == THIS_MODULE) + continue; + + stream->csi2 = to_ipu6_isys_csi2(sd); + stream->csi2->receiver_errors = 0; + + stream->source_entity = remote_pad->entity; + stream->stream_source = to_ipu6_isys_subdev(sd)->source; + dev_dbg(dev, "prepare CSI2:%s stream\n", sd->name); + } + + media_pipeline_entity_iter_cleanup(&iter); + + if (WARN(!stream->source_entity, "no external source entity\n")) + return -EINVAL; + + dev_dbg(dev, "prepare stream: external entity %s\n", + stream->source_entity->name); + + return 0; +} + +static void configure_stream_watermark(struct ipu6_isys_video *av, bool state) +{ + struct ipu6_isys *isys = av->isys; + struct ipu6_isys_csi2 *csi2 = av->stream->csi2; + struct isys_iwake_watermark *iwake_watermark = &isys->iwake_watermark; + struct device *dev = &isys->adev->dev; + struct ipu6_isys_stream *isys_stream; + struct v4l2_subdev *esd; + struct v4l2_control hb = { .id = V4L2_CID_HBLANK, .value = 0 }; + unsigned int bpp, lanes; + s64 link_freq = 0; + u64 pixel_rate = 0; + int ret; + + if (!state) + return; + + isys_stream = av->stream; + if (!isys_stream->source_entity) + return; + + esd = media_entity_to_v4l2_subdev(isys_stream->source_entity); + + av->watermark->width = av->mpix.width; + av->watermark->height = av->mpix.height; + av->watermark->sram_gran_shift = isys->pdata->ipdata->sram_gran_shift; + av->watermark->sram_gran_size = isys->pdata->ipdata->sram_gran_size; + + ret = v4l2_g_ctrl(esd->ctrl_handler, &hb); + if (!ret && hb.value >= 0) + av->watermark->hblank = hb.value; + else + av->watermark->hblank = 0; + + ret = ipu6_isys_csi2_get_link_freq(csi2, &link_freq); + if (!ret) { + lanes = csi2->nlanes; + bpp = ipu6_isys_mbus_code_to_bpp(csi2->asd.ffmt->code); + pixel_rate = mul_u64_u32_div(link_freq, lanes * 2, bpp); + } + + av->watermark->pixel_rate = pixel_rate; + + if (!pixel_rate) { + mutex_lock(&iwake_watermark->mutex); + iwake_watermark->force_iwake_disable = true; + mutex_unlock(&iwake_watermark->mutex); + dev_err(dev, "unexpected pixel_rate from %s, disable iwake.\n", + isys_stream->source_entity->name); + } +} + +static void calculate_stream_datarate(struct ipu6_isys_video *av) +{ + struct video_stream_watermark *watermark = av->watermark; + u32 bpp = av->pfmt->bpp; + u64 pages_per_line, pb_bytes_per_line, stream_data_rate; + u64 pixels_per_line, bytes_per_line, line_time_ns; + u16 shift, size; + + shift = watermark->sram_gran_shift; + size = watermark->sram_gran_size; + pixels_per_line = watermark->width + watermark->hblank; + line_time_ns = + pixels_per_line * 1000 / (watermark->pixel_rate / 1000000); + + bytes_per_line = watermark->width * bpp / 8; + /* bytes to IS pixel buffer pages */ + pages_per_line = bytes_per_line >> shift; + + pages_per_line = DIV_ROUND_UP(bytes_per_line, size); + pb_bytes_per_line = pages_per_line << shift; + + stream_data_rate = (pb_bytes_per_line * 1000) / line_time_ns; + watermark->stream_data_rate = stream_data_rate; +} + +static void update_stream_watermark(struct ipu6_isys_video *av, bool state) +{ + struct isys_iwake_watermark *iwake_watermark = + &av->isys->iwake_watermark; + + if (!av->watermark->pixel_rate) + return; + + if (state) 
{ + calculate_stream_datarate(av); + mutex_lock(&iwake_watermark->mutex); + list_add(&av->watermark->stream_node, + &iwake_watermark->video_list); + mutex_unlock(&iwake_watermark->mutex); + } else { + av->watermark->stream_data_rate = 0; + mutex_lock(&iwake_watermark->mutex); + list_del(&av->watermark->stream_node); + mutex_unlock(&iwake_watermark->mutex); + } + + update_watermark_setting(av->isys); +} + +void ipu6_isys_put_stream(struct ipu6_isys_stream *stream) +{ + unsigned int i; + unsigned long flags; + + if (!stream) { + dev_err(&stream->isys->adev->dev, "no available stream\n"); + return; + } + + spin_lock_irqsave(&stream->isys->streams_lock, flags); + for (i = 0; i < IPU6_ISYS_MAX_STREAMS; i++) { + if (&stream->isys->streams[i] == stream) { + if (stream->isys->streams_ref_count[i] > 0) { + stream->isys->streams_ref_count[i]--; + } else { + dev_warn(&stream->isys->adev->dev, + "stream %d isn't used\n", i); + } + break; + } + } + spin_unlock_irqrestore(&stream->isys->streams_lock, flags); +} + +struct ipu6_isys_stream *ipu6_isys_get_stream(struct ipu6_isys *isys) +{ + struct ipu6_isys_stream *stream = NULL; + unsigned long flags; + unsigned int i; + + if (!isys) + return NULL; + + spin_lock_irqsave(&isys->streams_lock, flags); + for (i = 0; i < IPU6_ISYS_MAX_STREAMS; i++) { + if (!isys->streams_ref_count[i]) { + isys->streams_ref_count[i]++; + stream = &isys->streams[i]; + break; + } + } + spin_unlock_irqrestore(&isys->streams_lock, flags); + + return stream; +} + +struct ipu6_isys_stream * +ipu6_isys_query_stream_by_handle(struct ipu6_isys *isys, u8 stream_handle) +{ + unsigned long flags; + struct ipu6_isys_stream *stream = NULL; + + if (!isys) + return NULL; + + if (stream_handle >= IPU6_ISYS_MAX_STREAMS) { + dev_err(&isys->adev->dev, + "stream_handle %d is invalid\n", stream_handle); + return NULL; + } + + spin_lock_irqsave(&isys->streams_lock, flags); + if (isys->streams_ref_count[stream_handle] > 0) { + isys->streams_ref_count[stream_handle]++; + stream = &isys->streams[stream_handle]; + } + spin_unlock_irqrestore(&isys->streams_lock, flags); + + return stream; +} + +struct ipu6_isys_stream * +ipu6_isys_query_stream_by_source(struct ipu6_isys *isys, int source) +{ + struct ipu6_isys_stream *stream = NULL; + unsigned long flags; + unsigned int i; + + if (!isys) + return NULL; + + if (source < 0) { + dev_err(&stream->isys->adev->dev, + "query stream with invalid port number\n"); + return NULL; + } + + spin_lock_irqsave(&isys->streams_lock, flags); + for (i = 0; i < IPU6_ISYS_MAX_STREAMS; i++) { + if (!isys->streams_ref_count[i]) + continue; + + if (isys->streams[i].stream_source == source) { + stream = &isys->streams[i]; + isys->streams_ref_count[i]++; + break; + } + } + spin_unlock_irqrestore(&isys->streams_lock, flags); + + return stream; +} + +int ipu6_isys_video_set_streaming(struct ipu6_isys_video *av, int state, + struct ipu6_isys_buffer_list *bl) +{ + struct media_device *mdev = av->vdev.entity.graph_obj.mdev; + struct ipu6_isys_stream *stream = av->stream; + struct device *dev = &av->isys->adev->dev; + struct media_entity *entity; + struct media_pipeline *pipe = av->pad.pipe; + struct media_pipeline_entity_iter iter; + struct media_entity_enum entities; + struct v4l2_subdev *sd, *ssd; + int ret = 0; + + dev_dbg(dev, "set stream: %d\n", state); + + if (WARN(!stream->source_entity, "No source entity for stream\n")) + return -ENODEV; + + ssd = media_entity_to_v4l2_subdev(stream->source_entity); + if (!state) { + stop_streaming_firmware(av); + + /* stop external sub-device 
now. */ + dev_info(dev, "stream off %s\n", stream->source_entity->name); + + v4l2_subdev_call(ssd, video, s_stream, state); + } + + if (!pipe) { + dev_err(dev, "No media pipe for %s\n", av->vdev.name); + return -EINVAL; + } + + ret = media_entity_enum_init(&entities, mdev); + if (ret) + return ret; + + ret = media_pipeline_entity_iter_init(pipe, &iter); + if (ret) + goto out_media_entity_enum_cleanup; + + media_pipeline_for_each_entity(pipe, &iter, entity) { + sd = media_entity_to_v4l2_subdev(entity); + + /* external source entity and non-subdev nodes are ignored */ + if (!is_media_entity_v4l2_subdev(entity) || + sd->owner != THIS_MODULE) + continue; + + dev_dbg(dev, "stream %s entity %s\n", state ? "on" : "off", + entity->name); + ret = v4l2_subdev_call(sd, video, s_stream, state); + if (!state) + continue; + if (ret && ret != -ENOIOCTLCMD) + goto out_media_entity_stop_streaming; + + media_entity_enum_set(&entities, entity); + } + + configure_stream_watermark(av, state); + update_stream_watermark(av, state); + + if (state) { + ret = start_stream_firmware(av, bl); + if (ret) + goto out_clear_stream_watermark; + + dev_dbg(dev, "set stream: source %d, stream_handle %d\n", + stream->stream_source, stream->stream_handle); + + /* Start source sub-device now. */ + dev_info(dev, "stream on %s\n", stream->source_entity->name); + + ret = v4l2_subdev_call(ssd, video, s_stream, state); + if (ret) + goto out_media_entity_stop_streaming_firmware; + } else { + close_streaming_firmware(av); + } + + media_pipeline_entity_iter_cleanup(&iter); + media_entity_enum_cleanup(&entities); + av->streaming = state; + + return 0; + +out_media_entity_stop_streaming_firmware: + stop_streaming_firmware(av); + +out_clear_stream_watermark: + update_stream_watermark(av, 0); + +out_media_entity_stop_streaming: + if (state) { + media_entity_enum_zero(&iter.ent_enum); + media_pipeline_for_each_entity(pipe, &iter, entity) { + if (!media_entity_enum_test(&entities, entity)) + continue; + + sd = media_entity_to_v4l2_subdev(entity); + v4l2_subdev_call(sd, video, s_stream, 0); + } + } + + media_pipeline_entity_iter_cleanup(&iter); + +out_media_entity_enum_cleanup: + media_entity_enum_cleanup(&entities); + + return ret; +} + +static const struct v4l2_ioctl_ops ioctl_ops_mplane = { + .vidioc_querycap = ipu6_isys_vidioc_querycap, + .vidioc_enum_fmt_vid_cap = ipu6_isys_vidioc_enum_fmt, + .vidioc_g_fmt_vid_cap_mplane = vidioc_g_fmt_vid_cap_mplane, + .vidioc_s_fmt_vid_cap_mplane = vidioc_s_fmt_vid_cap_mplane, + .vidioc_try_fmt_vid_cap_mplane = vidioc_try_fmt_vid_cap_mplane, + .vidioc_reqbufs = vb2_ioctl_reqbufs, + .vidioc_create_bufs = vb2_ioctl_create_bufs, + .vidioc_prepare_buf = vb2_ioctl_prepare_buf, + .vidioc_querybuf = vb2_ioctl_querybuf, + .vidioc_qbuf = vb2_ioctl_qbuf, + .vidioc_dqbuf = vb2_ioctl_dqbuf, + .vidioc_streamon = vb2_ioctl_streamon, + .vidioc_streamoff = vb2_ioctl_streamoff, + .vidioc_expbuf = vb2_ioctl_expbuf, + .vidioc_enum_input = vidioc_enum_input, + .vidioc_g_input = vidioc_g_input, + .vidioc_s_input = vidioc_s_input, +}; + +static const struct media_entity_operations entity_ops = { + .link_validate = link_validate, +}; + +static const struct v4l2_file_operations isys_fops = { + .owner = THIS_MODULE, + .poll = vb2_fop_poll, + .unlocked_ioctl = video_ioctl2, + .mmap = vb2_fop_mmap, + .open = video_open, + .release = video_release, +}; + +/* + * Do everything that's needed to initialise things related to video + * buffer queue, video node, and the related media entity. 
The caller + * is expected to assign isys field and set the name of the video + * device. + */ +int ipu6_isys_video_init(struct ipu6_isys_video *av) +{ + const struct v4l2_ioctl_ops *ioctl_ops = NULL; + struct v4l2_format format = { + .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE, + .fmt.pix_mp = { + .width = 1920, + .height = 1080, + }, + }; + int ret; + + av->watermark = kzalloc(sizeof(*av->watermark), GFP_KERNEL); + if (!av->watermark) + return -ENOMEM; + + mutex_init(&av->mutex); + av->vdev.device_caps = V4L2_CAP_STREAMING; + ioctl_ops = &ioctl_ops_mplane; + av->vdev.device_caps |= V4L2_CAP_VIDEO_CAPTURE_MPLANE; + av->vdev.vfl_dir = VFL_DIR_RX; + + ret = ipu6_isys_queue_init(&av->aq); + if (ret) + goto out_free_watermark; + + av->pad.flags = MEDIA_PAD_FL_SINK | MEDIA_PAD_FL_MUST_CONNECT; + ret = media_entity_pads_init(&av->vdev.entity, 1, &av->pad); + if (ret) + goto out_ipu6_isys_queue_cleanup; + + av->vdev.entity.ops = &entity_ops; + av->vdev.release = video_device_release_empty; + av->vdev.fops = &isys_fops; + av->vdev.v4l2_dev = &av->isys->v4l2_dev; + if (!av->vdev.ioctl_ops) + av->vdev.ioctl_ops = ioctl_ops; + av->vdev.queue = &av->aq.vbq; + av->vdev.lock = &av->mutex; + + ipu6_isys_video_try_fmt_vid_mplane(av, &format.fmt.pix_mp); + av->mpix = format.fmt.pix_mp; + + set_bit(V4L2_FL_USES_V4L2_FH, &av->vdev.flags); + video_set_drvdata(&av->vdev, av); + + ret = video_register_device(&av->vdev, VFL_TYPE_VIDEO, -1); + if (ret) + goto out_media_entity_cleanup; + + return ret; + +out_media_entity_cleanup: + video_unregister_device(&av->vdev); + media_entity_cleanup(&av->vdev.entity); + +out_ipu6_isys_queue_cleanup: + ipu6_isys_queue_cleanup(&av->aq); + +out_free_watermark: + mutex_destroy(&av->mutex); + kfree(av->watermark); + + return ret; +} + +void ipu6_isys_video_cleanup(struct ipu6_isys_video *av) +{ + kfree(av->watermark); + video_unregister_device(&av->vdev); + media_entity_cleanup(&av->vdev.entity); + mutex_destroy(&av->mutex); + ipu6_isys_queue_cleanup(&av->aq); +} diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-video.h b/drivers/media/pci/intel/ipu6/ipu6-isys-video.h new file mode 100644 index 000000000000..55a6e09937cc --- /dev/null +++ b/drivers/media/pci/intel/ipu6/ipu6-isys-video.h @@ -0,0 +1,120 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2013 - 2023 Intel Corporation */ + +#ifndef IPU6_ISYS_VIDEO_H +#define IPU6_ISYS_VIDEO_H + +#include +#include +#include +#include +#include +#include + +#include "ipu6-isys-queue.h" + +#define IPU6_ISYS_OUTPUT_PINS 11 +#define IPU6_ISYS_MAX_PARALLEL_SOF 2 +#define NR_OF_VIDEO_DEVICE 4 + +struct ipu6_isys; +struct ipu6_fw_isys_stream_cfg_data_abi; + +struct ipu6_isys_pixelformat { + u32 pixelformat; + u32 bpp; + u32 bpp_packed; + u32 code; + u32 css_pixelformat; +}; + +struct sequence_info { + unsigned int sequence; + u64 timestamp; +}; + +struct output_pin_data { + void (*pin_ready)(struct ipu6_isys_stream *stream, + struct ipu6_fw_isys_resp_info_abi *info); + struct ipu6_isys_queue *aq; +}; + +/* + * Align with firmware stream. Each stream represents a CSI virtual channel. 
+ * May map to multiple video devices + */ +struct ipu6_isys_stream { + struct mutex mutex; + struct media_entity *source_entity; + atomic_t sequence; + unsigned int seq_index; + struct sequence_info seq[IPU6_ISYS_MAX_PARALLEL_SOF]; + int stream_source; + int stream_handle; + unsigned int nr_output_pins; + struct ipu6_isys_csi2 *csi2; + + int nr_queues; /* Number of capture queues */ + int nr_streaming; + int streaming; /* Has streaming been really started? */ + struct list_head queues; + struct completion stream_open_completion; + struct completion stream_close_completion; + struct completion stream_start_completion; + struct completion stream_stop_completion; + struct ipu6_isys *isys; + + struct output_pin_data output_pins[IPU6_ISYS_OUTPUT_PINS]; + int error; +}; + +struct video_stream_watermark { + u32 width; + u32 height; + u32 hblank; + u32 frame_rate; + u64 pixel_rate; + u64 stream_data_rate; + u16 sram_gran_shift; + u16 sram_gran_size; + struct list_head stream_node; +}; + +struct ipu6_isys_video { + /* Serialise access to other fields in the struct. */ + struct mutex mutex; + struct media_pad pad; + struct video_device vdev; + struct v4l2_pix_format_mplane mpix; + const struct ipu6_isys_pixelformat *pfmt; + struct ipu6_isys_queue aq; + struct ipu6_isys *isys; + struct ipu6_isys_stream *stream; + unsigned int streaming; + struct video_stream_watermark *watermark; +}; + +#define ipu6_isys_queue_to_video(__aq) \ + container_of(__aq, struct ipu6_isys_video, aq) + +extern const struct ipu6_isys_pixelformat ipu6_isys_pfmts[]; +extern const struct ipu6_isys_pixelformat ipu6_isys_pfmts_packed[]; + +int ipu6_isys_vidioc_querycap(struct file *file, void *fh, + struct v4l2_capability *cap); + +int ipu6_isys_vidioc_enum_fmt(struct file *file, void *fh, + struct v4l2_fmtdesc *f); +int ipu6_isys_video_prepare_streaming(struct ipu6_isys_video *av); +int ipu6_isys_video_set_streaming(struct ipu6_isys_video *av, int state, + struct ipu6_isys_buffer_list *bl); +int ipu6_isys_video_init(struct ipu6_isys_video *av); +void ipu6_isys_video_cleanup(struct ipu6_isys_video *av); +void ipu6_isys_put_stream(struct ipu6_isys_stream *stream); +struct ipu6_isys_stream *ipu6_isys_get_stream(struct ipu6_isys *isys); +struct ipu6_isys_stream * +ipu6_isys_query_stream_by_handle(struct ipu6_isys *isys, u8 stream_handle); +struct ipu6_isys_stream * +ipu6_isys_query_stream_by_source(struct ipu6_isys *isys, int source); + +#endif /* IPU6_ISYS_VIDEO_H */ From patchwork Thu Apr 13 10:04:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bingbu Cao X-Patchwork-Id: 674369 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2242DC77B6C for ; Thu, 13 Apr 2023 09:55:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230097AbjDMJzi (ORCPT ); Thu, 13 Apr 2023 05:55:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45874 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230095AbjDMJzg (ORCPT ); Thu, 13 Apr 2023 05:55:36 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4F4028A64 for ; Thu, 13 Apr 2023 02:55:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; 
s=Intel; t=1681379724; x=1712915724; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=atiALN3bP4RoJFTxBmhbHrRSYfbz9quUZu5k2iQBOGc=; b=WU/ZKyw1rFc5yqDwLsZbgz1aFgPfbDHOkZB9nl8Q1bx9tyZTLqIb6d5h KvZM33hRYcigiT6tLcWe7DWB4ylVZY8eVnfaioPJYTo24P0ZvRw00rRvl thmfBX7dH7nnDJ1DU0/C6UBs/GDuvHmD2II34Cex6d063l44WSCRsQWqU cfKhd/ib4HXT8hNhafnVrpeeu8hpdPMxC+v1DkRE+dmCPUDkz3zy9fHob QQT+5N3j12uTfxJb6Yx4CgsfuRuLAtHr6ANa7VsrEAFWohueFI/2XIDyz TDPlhoYlgZkHEi/BLjQAfJ2/d1bqVYmsmA7+dTZVBrWr5QqXVQ4SFQB6J Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="371993119" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="371993119" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2023 02:55:18 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="639600211" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="639600211" Received: from icg-kernel3.bj.intel.com ([172.16.126.100]) by orsmga003.jf.intel.com with ESMTP; 13 Apr 2023 02:55:14 -0700 From: bingbu.cao@intel.com To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com Subject: [RFC PATCH 12/14] media: add Kconfig and Makefile for IPU6 Date: Thu, 13 Apr 2023 18:04:27 +0800 Message-Id: <20230413100429.919622-13-bingbu.cao@intel.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230413100429.919622-1-bingbu.cao@intel.com> References: <20230413100429.919622-1-bingbu.cao@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Bingbu Cao Add IPU6 support in Kconfig and Makefile, with this patch you can build the Intel IPU6 and input system modules by select the CONFIG_VIDEO_INTEL_IPU6 in config. Signed-off-by: Bingbu Cao --- drivers/media/pci/Kconfig | 1 + drivers/media/pci/intel/Makefile | 3 ++- drivers/media/pci/intel/ipu6/Kconfig | 15 +++++++++++++++ drivers/media/pci/intel/ipu6/Makefile | 23 +++++++++++++++++++++++ 4 files changed, 41 insertions(+), 1 deletion(-) create mode 100644 drivers/media/pci/intel/ipu6/Kconfig create mode 100644 drivers/media/pci/intel/ipu6/Makefile diff --git a/drivers/media/pci/Kconfig b/drivers/media/pci/Kconfig index 480194543d05..38fb484f5c8e 100644 --- a/drivers/media/pci/Kconfig +++ b/drivers/media/pci/Kconfig @@ -74,6 +74,7 @@ config VIDEO_PCI_SKELETON when developing new drivers. 
source "drivers/media/pci/intel/ipu3/Kconfig" +source "drivers/media/pci/intel/ipu6/Kconfig" endif #MEDIA_PCI_SUPPORT endif #PCI diff --git a/drivers/media/pci/intel/Makefile b/drivers/media/pci/intel/Makefile index 0b4236c4db49..de2b73fef890 100644 --- a/drivers/media/pci/intel/Makefile +++ b/drivers/media/pci/intel/Makefile @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0-only # -# Makefile for the IPU3 cio2 and ImGU drivers +# Makefile for the Intel IPU drivers # obj-y += ipu3/ +obj-$(CONFIG_VIDEO_INTEL_IPU6) += ipu6/ diff --git a/drivers/media/pci/intel/ipu6/Kconfig b/drivers/media/pci/intel/ipu6/Kconfig new file mode 100644 index 000000000000..c88ef2f40f6d --- /dev/null +++ b/drivers/media/pci/intel/ipu6/Kconfig @@ -0,0 +1,15 @@ +config VIDEO_INTEL_IPU6 + tristate "Intel IPU6 driver" + depends on ACPI || COMPILE_TEST + depends on MEDIA_SUPPORT + depends on MEDIA_PCI_SUPPORT + depends on X86_64 + select IOMMU_IOVA + select VIDEOBUF2_DMA_CONTIG + select V4L2_FWNODE + help + This is the 6th Gen Intel Image Processing Unit, found in Intel SoCs + and used for capturing images and video from camera sensors. + + To compile this driver, say Y here! It contains 2 modules - + intel_ipu6 and intel_ipu6_isys. diff --git a/drivers/media/pci/intel/ipu6/Makefile b/drivers/media/pci/intel/ipu6/Makefile new file mode 100644 index 000000000000..6a6339c84ef4 --- /dev/null +++ b/drivers/media/pci/intel/ipu6/Makefile @@ -0,0 +1,23 @@ +# SPDX-License-Identifier: GPL-2.0-only + +intel-ipu6-objs += ipu6.o \ + ipu6-bus.o \ + ipu6-dma.o \ + ipu6-mmu.o \ + ipu6-buttress.o \ + ipu6-cpd.o \ + ipu6-fw-com.o + +obj-$(CONFIG_VIDEO_INTEL_IPU6) += intel-ipu6.o + +intel-ipu6-isys-objs += ipu6-isys.o \ + ipu6-isys-csi2.o \ + ipu6-fw-isys.o \ + ipu6-isys-video.o \ + ipu6-isys-queue.o \ + ipu6-isys-subdev.o \ + ipu6-isys-mcd-phy.o \ + ipu6-isys-jsl-phy.o \ + ipu6-isys-dwc-phy.o + +obj-$(CONFIG_VIDEO_INTEL_IPU6) += intel-ipu6-isys.o From patchwork Thu Apr 13 10:04:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bingbu Cao X-Patchwork-Id: 674368 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0E60AC77B6C for ; Thu, 13 Apr 2023 09:55:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230115AbjDMJzu (ORCPT ); Thu, 13 Apr 2023 05:55:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46092 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229722AbjDMJzt (ORCPT ); Thu, 13 Apr 2023 05:55:49 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7AB998A6A for ; Thu, 13 Apr 2023 02:55:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1681379730; x=1712915730; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=wBHU6pWtqCcZ+MLmEquS+uZP2DaNANFrhgCqLDUpPzs=; b=awNfQ4iNpFZB904/wKYsMgJyzERgaMSm1zYK/mg+O9+qYL3iNZYOYK0n i1hY1p6mRe36IvtcjXkTz7cXAI1gJuRgsDZzzwypz2z1bwyGx5q728Wcj 0wwGdAUvRffwysoIMoCW7rxDMn3ItZHbqI1X+UicTLFVWHqkHFUkaFKnx PofLV8Z8Mp5KlKp44HhxSGeM+MydwIJmyxAChJRlJX5CBzYY4X5gf70Ig PoOpRsD6KV/2CK8OBZNeos4j5woUlVLEbUKhqHWdQktXcAAkvcn91DfJP sfFUq9V8hvid4FOYJP4DASnvkuz2rcBz2kZa+VL2a43uzE7jsUtkUvUc/ 
g==;
X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="371993135"
X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="371993135"
Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2023 02:55:22 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="639600233"
X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="639600233"
Received: from icg-kernel3.bj.intel.com ([172.16.126.100]) by orsmga003.jf.intel.com with ESMTP; 13 Apr 2023 02:55:18 -0700
From: bingbu.cao@intel.com
To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com
Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com
Subject: [RFC PATCH 13/14] Documentation: add Intel IPU6 ISYS driver admin-guide doc
Date: Thu, 13 Apr 2023 18:04:28 +0800
Message-Id: <20230413100429.919622-14-bingbu.cao@intel.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230413100429.919622-1-bingbu.cao@intel.com>
References: <20230413100429.919622-1-bingbu.cao@intel.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: linux-media@vger.kernel.org

From: Bingbu Cao

This document mainly describes the functionality of the IPU6 and the IPU6
ISYS driver, and gives an example of how a user can capture images with
standard tools.

Signed-off-by: Bingbu Cao
---
 Documentation/admin-guide/media/ipu6-isys.rst | 128 +++++++
 .../admin-guide/media/ipu6_isys_graph.svg     | 338 ++++++++++++++++++
 .../admin-guide/media/v4l-drivers.rst         |   1 +
 3 files changed, 467 insertions(+)
 create mode 100644 Documentation/admin-guide/media/ipu6-isys.rst
 create mode 100644 Documentation/admin-guide/media/ipu6_isys_graph.svg

diff --git a/Documentation/admin-guide/media/ipu6-isys.rst b/Documentation/admin-guide/media/ipu6-isys.rst
new file mode 100644
index 000000000000..083b00449b86
--- /dev/null
+++ b/Documentation/admin-guide/media/ipu6-isys.rst
@@ -0,0 +1,128 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+.. include:: <isonum.txt>
+
+===============================================================
+Intel Image Processing Unit 6 (IPU6) Input System driver
+===============================================================
+
+Copyright |copy| 2023 Intel Corporation
+
+Introduction
+============
+
+This file documents the Intel IPU6 (6th generation Image Processing Unit)
+Input System (MIPI CSI2 receiver) drivers located under
+drivers/media/pci/intel/ipu6.
+
+The Intel IPU6 can be found in certain Intel chipsets but not in all SKUs:
+
+* TigerLake
+* JasperLake
+* AlderLake
+* RaptorLake
+* MeteorLake
+
+Intel IPU6 is made up of two components - Input System (ISYS) and Processing
+System (PSYS).
+
+The Input System mainly works as a MIPI CSI2 receiver which receives and
+processes the image data from the sensors and outputs the frames to memory.
+
+There are two driver modules - intel_ipu6 and intel_ipu6_isys. intel_ipu6 is
+the IPU6 common driver which handles PCI configuration, firmware loading and
+parsing, firmware authentication, DMA mapping and IPU MMU (internal Memory
+Mapping Unit) configuration. intel_ipu6_isys implements the V4L2, Media
+Controller and V4L2 sub-device interfaces. The IPU6 ISYS driver supports
+camera sensors connected to the IPU6 ISYS through V4L2 sub-device sensor
+drivers.
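+Once both modules are loaded, the resulting media graph can be inspected with
+the media-ctl tool. A minimal check, assuming the IPU6 ISYS registers as
+/dev/media0 on the target system:
+
+.. code-block:: none
+
+    media-ctl -d /dev/media0 -p
+
+This prints the ISYS entities (CSI2 receivers and capture video nodes) and
+the connected sensor sub-devices, matching the graph shown below.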
+
+Input system driver
+===================
+
+The Input System driver mainly configures the CSI-2 DPHY, constructs the
+firmware stream configuration, sends commands to the firmware, gets responses
+from the hardware and firmware, and then returns the buffers to the user.
+The ISYS is represented as several V4L2 sub-devices - 'Intel IPU6 CSI2 $port' -
+which provide the V4L2 sub-device interfaces to user space. There are also
+several video nodes for each CSI-2 stream capture - 'Intel IPU6 ISYS Capture
+$num' - which provide the interfaces to user space to set formats, queue
+buffers and start or stop streaming.
+
+.. kernel-figure:: ipu6_isys_graph.svg
+   :alt: ipu6 isys media graph
+
+Capturing frames by IPU6 ISYS
+-----------------------------
+
+IPU6 ISYS is used to capture frames from the camera sensors connected to the
+CSI2 ports. The supported ISYS input formats are listed in the table below:
+
+.. tabularcolumns:: |p{0.8cm}|p{4.0cm}|p{4.0cm}|
+
+.. flat-table::
+    :header-rows: 1
+
+    * - IPU6 ISYS supported input formats
+
+    * - RGB565, RGB888
+
+    * - UYVY8, YUYV8
+
+    * - RAW8, RAW10, RAW12
+
+Here is an example of IPU6 ISYS raw capture on a Dell XPS 9315 laptop. On this
+machine, the ov01a10 sensor is connected to IPU6 ISYS CSI2 port 2 and can
+generate images in SBGGR10 format at 1280x800 resolution.
+
+Using the media controller API, the ov01a10 sensor can be configured with
+media-ctl [#f1]_ and frames can be captured with yavta [#f2]_.
+
+.. code-block:: none
+
+    # This example assumes /dev/media0 as the IPU ISYS media device
+    export MDEV=/dev/media0
+
+    # Establish the link from the sensor to the CSI2 receiver
+    media-ctl -d $MDEV -l "\"ov01a10 3-0036\":0 -> \"Intel IPU6 CSI2 2\":0[1]"
+
+    # Set the format for the media devices
+    media-ctl -d $MDEV -V "ov01a10:0 [fmt:SBGGR10/1280x800]"
+    media-ctl -d $MDEV -V "Intel IPU6 CSI2 2:0 [fmt:SBGGR10/1280x800]"
+    media-ctl -d $MDEV -V "Intel IPU6 CSI2 2:1 [fmt:SBGGR10/1280x800]"
+
+    # Establish the link from the CSI2 receiver to the capture video node
+    media-ctl -d $MDEV -l "\"Intel IPU6 CSI2 2\":1 -> \"Intel IPU6 ISYS Capture 0\":0[5]"
+
+Once the media pipeline is configured, the desired sensor-specific settings
+(such as exposure and gain) can be set using the yavta tool, for example:
+
+.. code-block:: none
+
+    # The ov01a10 sensor is connected to i2c bus 3 with address 0x36
+    export SDEV=$(media-ctl -d $MDEV -e "ov01a10 3-0036")
+
+    yavta -w 0x009e0903 400 $SDEV
+    yavta -w 0x009e0913 1000 $SDEV
+    yavta -w 0x009e0911 2000 $SDEV
+
+Once the desired sensor settings are applied, frames can be captured as below:
+
+.. code-block:: none
+
+    yavta --data-prefix -u -c10 -n5 -I -s 1280x800 --file=/tmp/frame-#.bin \
+          -f SBGGR10 $(media-ctl -d $MDEV -e "Intel IPU6 ISYS Capture 0")
+
+With the above command, 10 frames are captured at 1280x800 resolution in
+SBGGR10 format. The captured frames are available as /tmp/frame-#.bin files.
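+Each captured frame can be sanity checked by its size. SBGGR10 data is
+normally stored in a 16-bit container, so with no extra line padding the
+expected frame size is:
+
+.. code-block:: none
+
+    1280 pixels/line x 2 bytes/pixel x 800 lines = 2048000 bytes
+
+The exact value depends on the bytesperline and sizeimage reported by the
+driver for the negotiated format.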
+
+References
+==========
+
+.. [#f1] https://git.ideasonboard.org/?p=media-ctl.git;a=summary
+.. [#f2] https://git.ideasonboard.org/yavta.git

diff --git a/Documentation/admin-guide/media/ipu6_isys_graph.svg b/Documentation/admin-guide/media/ipu6_isys_graph.svg
new file mode 100644
index 000000000000..661aee18dbe2
--- /dev/null
+++ b/Documentation/admin-guide/media/ipu6_isys_graph.svg
@@ -0,0 +1,338 @@
[338 lines of SVG markup omitted: the ipu6_isys_graph.svg media-graph diagram shows the eight "Intel IPU6 CSI2 0" .. "Intel IPU6 CSI2 7" sub-devices (/dev/v4l-subdev0 .. /dev/v4l-subdev7), each with sink pad 0 and source pad 1 linked to the four "Intel IPU6 ISYS Capture 0" .. "Intel IPU6 ISYS Capture 3" video nodes (/dev/video0 .. /dev/video3), and the "ov01a10 3-0036" sensor sub-device (/dev/v4l-subdev8) connected to pad 0 of "Intel IPU6 CSI2 2".]
diff --git a/Documentation/admin-guide/media/v4l-drivers.rst b/Documentation/admin-guide/media/v4l-drivers.rst
index 1c41f87c3917..f6328a242cbe 100644
--- a/Documentation/admin-guide/media/v4l-drivers.rst
+++ b/Documentation/admin-guide/media/v4l-drivers.rst
@@ -16,6 +16,7 @@ Video4Linux (V4L) driver-specific documentation
 	imx
 	imx7
 	ipu3
+	ipu6-isys
 	ivtv
 	omap3isp
 	omap4_camera

From patchwork Thu Apr 13 10:04:29 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bingbu Cao
X-Patchwork-Id: 673042
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7C84EC77B6E for ; Thu, 13 Apr 2023 09:55:54 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230070AbjDMJzx (ORCPT ); Thu, 13 Apr 2023 05:55:53 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46160 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
id S230117AbjDMJzw (ORCPT ); Thu, 13 Apr 2023 05:55:52 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 47D8559C6 for ; Thu, 13 Apr 2023 02:55:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1681379736; x=1712915736; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Gl7nJ3Rzb6W2l2zgujeWzAMr3YjQZ8evZiLtRnt2Iec=; b=OKtUX8uBSnhjDroIqjnZOb3cXqWuJHA0GrmX/vasCTni84rFJVd6Qc3X +d0zWjwBQs4YCOYy+1iR44U6ifsBFtGcre2jRFc77G8uz7ZuyPpIKuVDo j7r1v6Xc+rxVb97aN53qXVhmV2EsmweUauV+J9t67kK4dOL/QgJKSd/5e 1gr19gJ2RtfvsJkL61SfjUtW4hKHI9VtLE+MZfdIz38bU9UhhhcA6KGwE f7bCCkIKsHhBqEtnyxvyW/dmfJ/4w4wFVfev7Py/U94tACDP8oM638p89 vaZ8WMGAi/+Lk6sD9LbtmhLpjv4Eon9VTEM+ESCWAmtFZ6XukBSEiFE5p Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="371993147" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="371993147" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2023 02:55:25 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10678"; a="639600254" X-IronPort-AV: E=Sophos;i="5.98,341,1673942400"; d="scan'208";a="639600254" Received: from icg-kernel3.bj.intel.com ([172.16.126.100]) by orsmga003.jf.intel.com with ESMTP; 13 Apr 2023 02:55:22 -0700 From: bingbu.cao@intel.com To: linux-media@vger.kernel.org, sakari.ailus@linux.intel.com, laurent.pinchart@ideasonboard.com, ilpo.jarvinen@linux.intel.com Cc: tfiga@chromium.org, senozhatsky@chromium.org, hdegoede@redhat.com, bingbu.cao@intel.com, bingbu.cao@linux.intel.com, tian.shu.qiu@intel.com, hongju.wang@intel.com, daniel.h.kang@intel.com Subject: [RFC PATCH 14/14] MAINTAINERS: add maintainers for Intel IPU6 input system driver Date: Thu, 13 Apr 2023 18:04:29 +0800 Message-Id: <20230413100429.919622-15-bingbu.cao@intel.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230413100429.919622-1-bingbu.cao@intel.com> References: <20230413100429.919622-1-bingbu.cao@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Bingbu Cao Update MAINTAINERS file for Intel IPU6 input system driver. Signed-off-by: Bingbu Cao Reviewed-by: Laurent Pinchart --- MAINTAINERS | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index ec57c42ed544..22521c201c7b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -10376,6 +10376,16 @@ F: Documentation/admin-guide/media/ipu3_rcb.svg F: Documentation/userspace-api/media/v4l/pixfmt-meta-intel-ipu3.rst F: drivers/staging/media/ipu3/ +INTEL IPU6 INPUT SYSTEM DRIVER +M: Sakari Ailus +M: Bingbu Cao +R: Tianshu Qiu +L: linux-media@vger.kernel.org +S: Maintained +T: git git://linuxtv.org/media_tree.git +F: Documentation/admin-guide/media/ipu6-isys.rst +F: drivers/media/pci/intel/ipu6/ + INTEL IXP4XX CRYPTO SUPPORT M: Corentin Labbe L: linux-crypto@vger.kernel.org