From patchwork Mon Jul 29 08:43:19 2024
X-Patchwork-Submitter: Alexander Usyskin
X-Patchwork-Id: 815598
From: Alexander Usyskin
To: Mark Brown, Lucas De Marchi, Oded Gabbay, Thomas Hellström,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
    Daniel Vetter, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
    Tvrtko Ursulin
Cc: Tomas Winkler, Alexander Usyskin, Vitaly Lubart,
    intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
    linux-spi@vger.kernel.org, intel-gfx@lists.freedesktop.org
Subject: [PATCH v5 05/12] spi: intel-dg: implement mtd access handlers
Date: Mon, 29 Jul 2024 11:43:19 +0300
Message-Id: <20240729084326.2278014-6-alexander.usyskin@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240729084326.2278014-1-alexander.usyskin@intel.com>
References: <20240729084326.2278014-1-alexander.usyskin@intel.com>
X-Mailing-List: linux-spi@vger.kernel.org

From: Tomas Winkler

Implement the mtd read, erase, and write handlers.
For the erase operation, the address and size must be 4K aligned.
For the write operation, the address and size must be 4-byte aligned.

CC: Rodrigo Vivi
CC: Lucas De Marchi
Signed-off-by: Tomas Winkler
Signed-off-by: Vitaly Lubart
Signed-off-by: Alexander Usyskin
---
 drivers/spi/spi-intel-dg.c | 152 +++++++++++++++++++++++++++++++++++--
 1 file changed, 147 insertions(+), 5 deletions(-)

diff --git a/drivers/spi/spi-intel-dg.c b/drivers/spi/spi-intel-dg.c
index a936014f1a76..dfb457c43a5d 100644
--- a/drivers/spi/spi-intel-dg.c
+++ b/drivers/spi/spi-intel-dg.c
@@ -174,7 +174,6 @@ static int intel_dg_spi_is_valid(struct intel_dg_spi *spi)
 	return 0;
 }
 
-__maybe_unused
 static unsigned int spi_get_region(const struct intel_dg_spi *spi, loff_t from)
 {
 	unsigned int i;
@@ -206,7 +205,6 @@ static ssize_t spi_rewrite_partial(struct intel_dg_spi *spi, loff_t to,
 	return len;
 }
 
-__maybe_unused
 static ssize_t spi_write(struct intel_dg_spi *spi, u8 region,
 			 loff_t to, size_t len, const unsigned char *buf)
 {
@@ -265,7 +263,6 @@ static ssize_t spi_write(struct intel_dg_spi *spi, u8 region,
 	return len;
 }
 
-__maybe_unused
 static ssize_t spi_read(struct intel_dg_spi *spi, u8 region,
 			loff_t from, size_t len, unsigned char *buf)
 {
@@ -324,7 +321,6 @@ static ssize_t spi_read(struct intel_dg_spi *spi, u8 region,
 	return len;
 }
 
-__maybe_unused
 static ssize_t spi_erase(struct intel_dg_spi *spi, u8 region,
 			 loff_t from, u64 len, u64 *fail_addr)
 {
@@ -413,18 +409,164 @@ static int intel_dg_spi_init(struct intel_dg_spi *spi, struct device *device)
 
 static int intel_dg_spi_erase(struct mtd_info *mtd, struct erase_info *info)
 {
-	return 0;
+	struct intel_dg_spi *spi;
+	unsigned int idx;
+	u8 region;
+	u64 addr;
+	ssize_t bytes;
+	loff_t from;
+	size_t len;
+	size_t total_len;
+	int ret = 0;
+
+	if (!mtd || !info)
+		return -EINVAL;
+
+	spi = mtd->priv;
+	if (WARN_ON(!spi))
+		return -EINVAL;
+
+	if (!IS_ALIGNED(info->addr, SZ_4K) || !IS_ALIGNED(info->len, SZ_4K)) {
+		dev_err(&mtd->dev, "unaligned erase %llx %llx\n",
+			info->addr, info->len);
+		info->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
+		return -EINVAL;
+	}
+
+	total_len = info->len;
+	addr = info->addr;
+
+	mutex_lock(&spi->lock);
+
+	while (total_len > 0) {
+		if (!IS_ALIGNED(addr, SZ_4K) || !IS_ALIGNED(total_len, SZ_4K)) {
+			dev_err(&mtd->dev, "unaligned erase %llx %zx\n", addr, total_len);
+			info->fail_addr = addr;
+			ret = -ERANGE;
+			goto out;
+		}
+
+		idx = spi_get_region(spi, addr);
+		if (idx >= spi->nregions) {
+			dev_err(&mtd->dev, "out of range");
+			info->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
+			ret = -ERANGE;
+			goto out;
+		}
+
+		from = addr - spi->regions[idx].offset;
+		region = spi->regions[idx].id;
+		len = total_len;
+		if (len > spi->regions[idx].size - from)
+			len = spi->regions[idx].size - from;
+
+		dev_dbg(&mtd->dev, "erasing region[%d] %s from %llx len %zx\n",
+			region, spi->regions[idx].name, from, len);
+
+		bytes = spi_erase(spi, region, from, len, &info->fail_addr);
+		if (bytes < 0) {
+			dev_dbg(&mtd->dev,
"erase failed with %zd\n", bytes); + info->fail_addr += spi->regions[idx].offset; + ret = bytes; + goto out; + } + + addr += len; + total_len -= len; + } + +out: + mutex_unlock(&spi->lock); + return ret; } static int intel_dg_spi_read(struct mtd_info *mtd, loff_t from, size_t len, size_t *retlen, u_char *buf) { + struct intel_dg_spi *spi; + ssize_t ret; + unsigned int idx; + u8 region; + + if (!mtd) + return -EINVAL; + + spi = mtd->priv; + if (WARN_ON(!spi)) + return -EINVAL; + + idx = spi_get_region(spi, from); + + dev_dbg(&mtd->dev, "reading region[%d] %s from %lld len %zd\n", + spi->regions[idx].id, spi->regions[idx].name, from, len); + + if (idx >= spi->nregions) { + dev_err(&mtd->dev, "out of ragnge"); + return -ERANGE; + } + + from -= spi->regions[idx].offset; + region = spi->regions[idx].id; + if (len > spi->regions[idx].size - from) + len = spi->regions[idx].size - from; + + mutex_lock(&spi->lock); + + ret = spi_read(spi, region, from, len, buf); + if (ret < 0) { + dev_dbg(&mtd->dev, "read failed with %zd\n", ret); + mutex_unlock(&spi->lock); + return ret; + } + + *retlen = ret; + + mutex_unlock(&spi->lock); return 0; } static int intel_dg_spi_write(struct mtd_info *mtd, loff_t to, size_t len, size_t *retlen, const u_char *buf) { + struct intel_dg_spi *spi; + ssize_t ret; + unsigned int idx; + u8 region; + + if (!mtd) + return -EINVAL; + + spi = mtd->priv; + if (WARN_ON(!spi)) + return -EINVAL; + + idx = spi_get_region(spi, to); + + dev_dbg(&mtd->dev, "writing region[%d] %s to %lld len %zd\n", + spi->regions[idx].id, spi->regions[idx].name, to, len); + + if (idx >= spi->nregions) { + dev_err(&mtd->dev, "out of range"); + return -ERANGE; + } + + to -= spi->regions[idx].offset; + region = spi->regions[idx].id; + if (len > spi->regions[idx].size - to) + len = spi->regions[idx].size - to; + + mutex_lock(&spi->lock); + + ret = spi_write(spi, region, to, len, buf); + if (ret < 0) { + dev_dbg(&mtd->dev, "write failed with %zd\n", ret); + mutex_unlock(&spi->lock); + return ret; + } + + *retlen = ret; + + mutex_unlock(&spi->lock); return 0; }