From patchwork Fri Sep 1 17:22:41 2023
X-Patchwork-Submitter: Jeffrey Hugo
X-Patchwork-Id: 719548
From: Jeffrey Hugo
Subject: [PATCH 1/7] accel/qaic: Remove ->size field from struct qaic_bo
Date: Fri, 1 Sep 2023 11:22:41 -0600
Message-ID: <20230901172247.11410-2-quic_jhugo@quicinc.com>
In-Reply-To: <20230901172247.11410-1-quic_jhugo@quicinc.com>
References: <20230901172247.11410-1-quic_jhugo@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Pranjal Ramajor Asha Kanojiya

The ->size field in struct qaic_bo stores the user-requested buffer size on
the allocate path, or the size of the dmabuf on the import (PRIME) path. On
the allocate path the driver allocates a BO whose size is PAGE_SIZE aligned,
and that size is already stored in the base GEM object (struct
drm_gem_object). The only difference is that ->size in struct qaic_bo holds
the raw value coming from the user, while ->size in struct drm_gem_object
holds the PAGE_SIZE-aligned size.

Do not use ->size from struct qaic_bo for any validation or operation; use
->size from struct drm_gem_object instead, since that much memory has already
been allocated. Only validate that the user is not trying to use more than
the BO size. This makes the driver more flexible.

After this change the ->size field of struct qaic_bo is redundant. Remove it.

Signed-off-by: Pranjal Ramajor Asha Kanojiya
Reviewed-by: Jeffrey Hugo
Signed-off-by: Jeffrey Hugo
---
 drivers/accel/qaic/qaic.h      |  2 --
 drivers/accel/qaic/qaic_data.c | 10 +++-------
 include/uapi/drm/qaic_accel.h  | 12 ++++++------
 3 files changed, 9 insertions(+), 15 deletions(-)

diff --git a/drivers/accel/qaic/qaic.h b/drivers/accel/qaic/qaic.h
index f2bd637a0d4e..27cf66dbd5a5 100644
--- a/drivers/accel/qaic/qaic.h
+++ b/drivers/accel/qaic/qaic.h
@@ -158,8 +158,6 @@ struct qaic_bo {
 	struct drm_gem_object base;
 	/* Scatter/gather table for allocate/imported BO */
 	struct sg_table *sgt;
-	/* BO size requested by user. GEM object might be bigger in size. */
-	u64 size;
 	/* Head in list of slices of this BO */
 	struct list_head slices;
 	/* Total nents, for all slices of this BO */
diff --git a/drivers/accel/qaic/qaic_data.c b/drivers/accel/qaic/qaic_data.c
index a90b64b325b4..09b5c6a52cb3 100644
--- a/drivers/accel/qaic/qaic_data.c
+++ b/drivers/accel/qaic/qaic_data.c
@@ -579,7 +579,7 @@ static void qaic_gem_print_info(struct drm_printer *p, unsigned int indent,
 {
 	struct qaic_bo *bo = to_qaic_bo(obj);
 
-	drm_printf_indent(p, indent, "user requested size=%llu\n", bo->size);
+	drm_printf_indent(p, indent, "BO DMA direction %d\n", bo->dir);
 }
 
 static const struct vm_operations_struct drm_vm_ops = {
@@ -695,8 +695,6 @@ int qaic_create_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
 	if (ret)
 		goto free_bo;
 
-	bo->size = args->size;
-
 	ret = drm_gem_handle_create(file_priv, obj, &args->handle);
 	if (ret)
 		goto free_sgt;
@@ -828,7 +826,6 @@ static int qaic_prepare_import_bo(struct qaic_bo *bo, struct qaic_attach_slice_hdr *hdr)
 	}
 
 	bo->sgt = sgt;
-	bo->size = hdr->size;
 
 	return 0;
 }
@@ -838,7 +835,7 @@ static int qaic_prepare_export_bo(struct qaic_device *qdev, struct qaic_bo *bo,
 {
 	int ret;
 
-	if (bo->size != hdr->size)
+	if (bo->base.size < hdr->size)
 		return -EINVAL;
 
 	ret = dma_map_sgtable(&qdev->pdev->dev, bo->sgt, hdr->dir, 0);
@@ -868,7 +865,6 @@ static void qaic_unprepare_import_bo(struct qaic_bo *bo)
 {
 	dma_buf_unmap_attachment(bo->base.import_attach, bo->sgt, bo->dir);
 	bo->sgt = NULL;
-	bo->size = 0;
 }
 
 static void qaic_unprepare_export_bo(struct qaic_device *qdev, struct qaic_bo *bo)
@@ -1190,7 +1186,7 @@ static int send_bo_list_to_device(struct qaic_device *qdev, struct drm_file *file_priv,
 		goto failed_to_send_bo;
 	}
 
-	if (is_partial && pexec[i].resize > bo->size) {
+	if (is_partial && pexec[i].resize > bo->base.size) {
 		ret = -EINVAL;
 		goto failed_to_send_bo;
 	}
diff --git a/include/uapi/drm/qaic_accel.h b/include/uapi/drm/qaic_accel.h
index 2d348744a853..f89880b7bfb6 100644
--- a/include/uapi/drm/qaic_accel.h
+++ b/include/uapi/drm/qaic_accel.h
@@ -242,12 +242,12 @@ struct qaic_attach_slice_entry {
  * @dbc_id: In. Associate the sliced BO with this DBC.
  * @handle: In. GEM handle of the BO to slice.
  * @dir: In. Direction of data flow. 1 = DMA_TO_DEVICE, 2 = DMA_FROM_DEVICE
- * @size: In. Total length of the BO.
- *	  If BO is imported (DMABUF/PRIME) then this size
- *	  should not exceed the size of DMABUF provided.
- *	  If BO is allocated using DRM_IOCTL_QAIC_CREATE_BO
- *	  then this size should be exactly same as the size
- *	  provided during DRM_IOCTL_QAIC_CREATE_BO.
+ * @size: In. Total length of the BO being used. This should not exceed the
+ *	  base size of the BO (struct drm_gem_object.size).
+ *	  For BOs allocated with DRM_IOCTL_QAIC_CREATE_BO, the requested size
+ *	  is PAGE_SIZE aligned before allocation, so the allocated BO may be
+ *	  bigger than requested. The size used here must not exceed that
+ *	  PAGE_SIZE-aligned BO size.
 * @dev_addr: In. Device address this slice pushes to or pulls from.
 * @db_addr: In. Address of the doorbell to ring.
 * @db_data: In. Data to write to the doorbell.
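
The practical effect of the uapi change above: a BO created with
DRM_IOCTL_QAIC_CREATE_BO is backed by a PAGE_SIZE-aligned allocation
(drm_gem_object.size), and the driver now validates slice and partial-execute
sizes against that aligned size rather than the raw requested size. A minimal
sketch of the rounding, assuming 4 KiB pages; the helper and macro names are
illustrative only and not part of the driver:

#include <stdint.h>

#define EXAMPLE_PAGE_SIZE 4096ULL	/* assumption: 4 KiB pages */

/*
 * Illustrative only: mirrors the PAGE_ALIGN()-style rounding that determines
 * drm_gem_object.size for a freshly created BO.
 */
static inline uint64_t example_bo_backing_size(uint64_t requested)
{
	return (requested + EXAMPLE_PAGE_SIZE - 1) & ~(EXAMPLE_PAGE_SIZE - 1);
}

/*
 * e.g. a 1000-byte request is backed by 4096 bytes, so after this patch a
 * slice or partial-execute size of up to 4096 passes the bo->base.size checks.
 */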
From patchwork Fri Sep 1 17:22:45 2023
X-Patchwork-Submitter: Jeffrey Hugo
X-Patchwork-Id: 719547
From: Jeffrey Hugo
Subject: [PATCH 5/7] accel/qaic: Clean up BO during flushing of transfer list
Date: Fri, 1 Sep 2023 11:22:45 -0600
Message-ID: <20230901172247.11410-6-quic_jhugo@quicinc.com>
In-Reply-To: <20230901172247.11410-1-quic_jhugo@quicinc.com>
References: <20230901172247.11410-1-quic_jhugo@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Pranjal Ramajor Asha Kanojiya

Variables that are set when a BO is added to the transfer list should be
cleared when that BO is flushed out of the transfer list prematurely. With
this change, some of the cleanup done in release_dbc() is no longer needed.

This also paves the way for a central place to clean up a BO when something
goes wrong.

Signed-off-by: Pranjal Ramajor Asha Kanojiya
Reviewed-by: Jeffrey Hugo
Signed-off-by: Jeffrey Hugo
---
 drivers/accel/qaic/qaic_data.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/drivers/accel/qaic/qaic_data.c b/drivers/accel/qaic/qaic_data.c
index c4b8b4bf0200..6e44e00937af 100644
--- a/drivers/accel/qaic/qaic_data.c
+++ b/drivers/accel/qaic/qaic_data.c
@@ -1808,6 +1808,12 @@ static void empty_xfer_list(struct qaic_device *qdev, struct dma_bridge_chan *dbc)
 		bo->queued = false;
 		list_del(&bo->xfer_list);
 		spin_unlock_irqrestore(&dbc->xfer_lock, flags);
+		bo->nr_slice_xfer_done = 0;
+		bo->req_id = 0;
+		bo->perf_stats.req_received_ts = 0;
+		bo->perf_stats.req_submit_ts = 0;
+		bo->perf_stats.req_processed_ts = 0;
+		bo->perf_stats.queue_level_before = 0;
 		dma_sync_sgtable_for_cpu(&qdev->pdev->dev, bo->sgt, bo->dir);
 		complete_all(&bo->xfer_done);
 		drm_gem_object_put(&bo->base);
@@ -1876,16 +1882,8 @@ void release_dbc(struct qaic_device *qdev, u32 dbc_id)
 		qaic_unprepare_bo(qdev, bo);
 		bo->sliced = false;
 		INIT_LIST_HEAD(&bo->slices);
-		bo->nr_slice_xfer_done = 0;
-		bo->queued = false;
-		bo->req_id = 0;
 		init_completion(&bo->xfer_done);
-		complete_all(&bo->xfer_done);
 		list_del(&bo->bo_list);
-		bo->perf_stats.req_received_ts = 0;
-		bo->perf_stats.req_submit_ts = 0;
-		bo->perf_stats.req_processed_ts = 0;
-		bo->perf_stats.queue_level_before = 0;
 	}
 
 	dbc->in_use = false;

From patchwork Fri Sep 1 17:22:46 2023
X-Patchwork-Submitter: Jeffrey Hugo
X-Patchwork-Id: 719546
From: Jeffrey Hugo
Subject: [PATCH 6/7] accel/qaic: Create a function to initialize BO
Date: Fri, 1 Sep 2023 11:22:46 -0600
Message-ID: <20230901172247.11410-7-quic_jhugo@quicinc.com>
In-Reply-To: <20230901172247.11410-1-quic_jhugo@quicinc.com>
References: <20230901172247.11410-1-quic_jhugo@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Pranjal Ramajor Asha Kanojiya

This ensures there is a single place to initialize and re-initialize a BO.
Use the new helper to clean up release_dbc().

The next patch needs this in order to detach slicing from a BO.
Signed-off-by: Pranjal Ramajor Asha Kanojiya
Reviewed-by: Jeffrey Hugo
Signed-off-by: Jeffrey Hugo
---
 drivers/accel/qaic/qaic_data.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/drivers/accel/qaic/qaic_data.c b/drivers/accel/qaic/qaic_data.c
index 6e44e00937af..2acb9dbac88b 100644
--- a/drivers/accel/qaic/qaic_data.c
+++ b/drivers/accel/qaic/qaic_data.c
@@ -635,6 +635,18 @@ static const struct drm_gem_object_funcs qaic_gem_funcs = {
 	.vm_ops = &drm_vm_ops,
 };
 
+static void qaic_init_bo(struct qaic_bo *bo, bool reinit)
+{
+	if (reinit) {
+		bo->sliced = false;
+		reinit_completion(&bo->xfer_done);
+	} else {
+		init_completion(&bo->xfer_done);
+	}
+	complete_all(&bo->xfer_done);
+	INIT_LIST_HEAD(&bo->slices);
+}
+
 static struct qaic_bo *qaic_alloc_init_bo(void)
 {
 	struct qaic_bo *bo;
@@ -643,9 +655,7 @@ static struct qaic_bo *qaic_alloc_init_bo(void)
 	if (!bo)
 		return ERR_PTR(-ENOMEM);
 
-	INIT_LIST_HEAD(&bo->slices);
-	init_completion(&bo->xfer_done);
-	complete_all(&bo->xfer_done);
+	qaic_init_bo(bo, false);
 
 	return bo;
 }
@@ -1880,9 +1890,7 @@ void release_dbc(struct qaic_device *qdev, u32 dbc_id)
 	list_for_each_entry_safe(bo, bo_temp, &dbc->bo_lists, bo_list) {
 		qaic_free_slices_bo(bo);
 		qaic_unprepare_bo(qdev, bo);
-		bo->sliced = false;
-		INIT_LIST_HEAD(&bo->slices);
-		init_completion(&bo->xfer_done);
+		qaic_init_bo(bo, true);
 		list_del(&bo->bo_list);
 	}
 

From patchwork Fri Sep 1 17:22:47 2023
X-Patchwork-Submitter: Jeffrey Hugo
X-Patchwork-Id: 719545
From: Jeffrey Hugo
Subject: [PATCH 7/7] accel/qaic: Add QAIC_DETACH_SLICE_BO IOCTL
Date: Fri, 1 Sep 2023 11:22:47 -0600
Message-ID: <20230901172247.11410-8-quic_jhugo@quicinc.com>
In-Reply-To: <20230901172247.11410-1-quic_jhugo@quicinc.com>
References: <20230901172247.11410-1-quic_jhugo@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Pranjal Ramajor Asha Kanojiya

Once a BO is attached with a slicing configuration, it can only be used with
that particular configuration. With this new feature, userspace can detach
the slicing configuration from an already sliced BO and attach a new one
using QAIC_ATTACH_SLICE_BO. This enables BO recycling.

detach_slice_bo() detaches the slicing configuration from a BO. This new
helper can also be used in release_dbc(), which does exactly the same thing.

Signed-off-by: Pranjal Ramajor Asha Kanojiya
Reviewed-by: Jeffrey Hugo
[jhugo: add documentation for new ioctl]
Signed-off-by: Jeffrey Hugo
---
 Documentation/accel/qaic/qaic.rst |  10 +++
 drivers/accel/qaic/qaic.h         |   4 +-
 drivers/accel/qaic/qaic_data.c    | 119 +++++++++++++++++++++++++++---
 drivers/accel/qaic/qaic_drv.c     |   1 +
 include/uapi/drm/qaic_accel.h     |  12 +++
 5 files changed, 135 insertions(+), 11 deletions(-)

diff --git a/Documentation/accel/qaic/qaic.rst b/Documentation/accel/qaic/qaic.rst
index 72a70ab6e3a8..c88502383136 100644
--- a/Documentation/accel/qaic/qaic.rst
+++ b/Documentation/accel/qaic/qaic.rst
@@ -123,6 +123,16 @@ DRM_IOCTL_QAIC_PART_DEV
   AIC100 device and can be used for limiting a process to some subset of
   resources.
 
+DRM_IOCTL_QAIC_DETACH_SLICE_BO
+  This IOCTL allows userspace to remove the slicing information from a BO that
+  was originally provided by a call to DRM_IOCTL_QAIC_ATTACH_SLICE_BO. This
+  is the inverse of DRM_IOCTL_QAIC_ATTACH_SLICE_BO. The BO must be idle for
+  DRM_IOCTL_QAIC_DETACH_SLICE_BO to be called. After a successful detach slice
+  operation the BO may have new slicing information attached with a new call
+  to DRM_IOCTL_QAIC_ATTACH_SLICE_BO. After detach slice, the BO cannot be
+  executed until after a new attach slice operation. Combining attach slice
+  and detach slice calls allows userspace to use a BO with multiple workloads.
+
 Userspace Client Isolation
 ==========================
diff --git a/drivers/accel/qaic/qaic.h b/drivers/accel/qaic/qaic.h
index 27cf66dbd5a5..28f1e81a1465 100644
--- a/drivers/accel/qaic/qaic.h
+++ b/drivers/accel/qaic/qaic.h
@@ -219,7 +219,8 @@ struct qaic_bo {
 		 */
 		u32 queue_level_before;
 	} perf_stats;
-
+	/* Synchronizes BO operations */
+	struct mutex lock;
 };
 
 struct bo_slice {
@@ -275,6 +276,7 @@ int qaic_execute_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
 int qaic_partial_execute_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
 int qaic_wait_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
 int qaic_perf_stats_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
+int qaic_detach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
 void irq_polling_work(struct work_struct *work);
 
 #endif /* _QAIC_H_ */
diff --git a/drivers/accel/qaic/qaic_data.c b/drivers/accel/qaic/qaic_data.c
index 2acb9dbac88b..c90fa6a430f6 100644
--- a/drivers/accel/qaic/qaic_data.c
+++ b/drivers/accel/qaic/qaic_data.c
@@ -624,6 +624,7 @@ static void qaic_free_object(struct drm_gem_object *obj)
 		qaic_free_sgt(bo->sgt);
 	}
 
+	mutex_destroy(&bo->lock);
 	drm_gem_object_release(obj);
 	kfree(bo);
 }
@@ -641,6 +642,7 @@ static void qaic_init_bo(struct qaic_bo *bo, bool reinit)
 		bo->sliced = false;
 		reinit_completion(&bo->xfer_done);
 	} else {
+		mutex_init(&bo->lock);
 		init_completion(&bo->xfer_done);
 	}
 	complete_all(&bo->xfer_done);
@@ -1002,10 +1004,13 @@ int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
 	}
 
 	bo = to_qaic_bo(obj);
+	ret = mutex_lock_interruptible(&bo->lock);
+	if (ret)
+		goto put_bo;
 
 	if (bo->sliced) {
 		ret = -EINVAL;
-		goto put_bo;
+		goto unlock_bo;
 	}
 
 	dbc = &qdev->dbc[args->hdr.dbc_id];
@@ -1029,7 +1034,7 @@ int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
 	bo->sliced = true;
 	list_add_tail(&bo->bo_list, &bo->dbc->bo_lists);
 	srcu_read_unlock(&dbc->ch_lock, rcu_id);
-	drm_gem_object_put(obj);
+	mutex_unlock(&bo->lock);
 	srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id);
 	srcu_read_unlock(&usr->qddev_lock, usr_rcu_id);
 
@@ -1039,6 +1044,8 @@ int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
 	qaic_unprepare_bo(qdev, bo);
 unlock_ch_srcu:
 	srcu_read_unlock(&dbc->ch_lock, rcu_id);
+unlock_bo:
+	mutex_unlock(&bo->lock);
 put_bo:
 	drm_gem_object_put(obj);
 free_slice_ent:
@@ -1193,15 +1200,18 @@ static int send_bo_list_to_device(struct qaic_device *qdev, struct drm_file *file_priv,
 		}
 
 		bo = to_qaic_bo(obj);
+		ret = mutex_lock_interruptible(&bo->lock);
+		if (ret)
+			goto failed_to_send_bo;
 
 		if (!bo->sliced) {
 			ret = -EINVAL;
-			goto failed_to_send_bo;
+			goto unlock_bo;
 		}
 
 		if (is_partial && pexec[i].resize > bo->base.size) {
 			ret = -EINVAL;
-			goto failed_to_send_bo;
+			goto unlock_bo;
 		}
 
 		spin_lock_irqsave(&dbc->xfer_lock, flags);
@@ -1210,7 +1220,7 @@ static int send_bo_list_to_device(struct qaic_device *qdev, struct drm_file *file_priv,
 		if (queued) {
 			spin_unlock_irqrestore(&dbc->xfer_lock, flags);
 			ret = -EINVAL;
-			goto failed_to_send_bo;
+			goto unlock_bo;
 		}
 
 		bo->req_id = dbc->next_req_id++;
@@ -1241,17 +1251,20 @@ static int send_bo_list_to_device(struct qaic_device *qdev, struct drm_file *file_priv,
 			if (ret) {
 				bo->queued = false;
 				spin_unlock_irqrestore(&dbc->xfer_lock, flags);
-				goto failed_to_send_bo;
+				goto unlock_bo;
 			}
 		}
 		reinit_completion(&bo->xfer_done);
 		list_add_tail(&bo->xfer_list, &dbc->xfer_list);
 		spin_unlock_irqrestore(&dbc->xfer_lock, flags);
 		dma_sync_sgtable_for_device(&qdev->pdev->dev, bo->sgt, bo->dir);
+		mutex_unlock(&bo->lock);
 	}
 
 	return 0;
 
+unlock_bo:
+	mutex_unlock(&bo->lock);
 failed_to_send_bo:
 	if (likely(obj))
 		drm_gem_object_put(obj);
@@ -1807,6 +1820,91 @@ int qaic_perf_stats_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
 	return ret;
 }
 
+static void detach_slice_bo(struct qaic_device *qdev, struct qaic_bo *bo)
+{
+	qaic_free_slices_bo(bo);
+	qaic_unprepare_bo(qdev, bo);
+	qaic_init_bo(bo, true);
+	list_del(&bo->bo_list);
+	drm_gem_object_put(&bo->base);
+}
+
+int qaic_detach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
+{
+	struct qaic_detach_slice *args = data;
+	int rcu_id, usr_rcu_id, qdev_rcu_id;
+	struct dma_bridge_chan *dbc;
+	struct drm_gem_object *obj;
+	struct qaic_device *qdev;
+	struct qaic_user *usr;
+	unsigned long flags;
+	struct qaic_bo *bo;
+	int ret;
+
+	if (args->pad != 0)
+		return -EINVAL;
+
+	usr = file_priv->driver_priv;
+	usr_rcu_id = srcu_read_lock(&usr->qddev_lock);
+	if (!usr->qddev) {
+		ret = -ENODEV;
+		goto unlock_usr_srcu;
+	}
+
+	qdev = usr->qddev->qdev;
+	qdev_rcu_id = srcu_read_lock(&qdev->dev_lock);
+	if (qdev->in_reset) {
+		ret = -ENODEV;
+		goto unlock_dev_srcu;
+	}
+
+	obj = drm_gem_object_lookup(file_priv, args->handle);
+	if (!obj) {
+		ret = -ENOENT;
+		goto unlock_dev_srcu;
+	}
+
+	bo = to_qaic_bo(obj);
+	ret = mutex_lock_interruptible(&bo->lock);
+	if (ret)
+		goto put_bo;
+
+	if (!bo->sliced) {
+		ret = -EINVAL;
+		goto unlock_bo;
+	}
+
+	dbc = bo->dbc;
+	rcu_id = srcu_read_lock(&dbc->ch_lock);
+	if (dbc->usr != usr) {
+		ret = -EINVAL;
+		goto unlock_ch_srcu;
+	}
+
+	/* Check if BO is committed to H/W for DMA */
+	spin_lock_irqsave(&dbc->xfer_lock, flags);
+	if (bo->queued) {
+		spin_unlock_irqrestore(&dbc->xfer_lock, flags);
+		ret = -EBUSY;
+		goto unlock_ch_srcu;
+	}
+	spin_unlock_irqrestore(&dbc->xfer_lock, flags);
+
+	detach_slice_bo(qdev, bo);
+
+unlock_ch_srcu:
+	srcu_read_unlock(&dbc->ch_lock, rcu_id);
+unlock_bo:
+	mutex_unlock(&bo->lock);
+put_bo:
+	drm_gem_object_put(obj);
+unlock_dev_srcu:
+	srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id);
+unlock_usr_srcu:
+	srcu_read_unlock(&usr->qddev_lock, usr_rcu_id);
+	return ret;
+}
+
 static void empty_xfer_list(struct qaic_device *qdev, struct dma_bridge_chan *dbc)
 {
 	unsigned long flags;
@@ -1888,10 +1986,11 @@ void release_dbc(struct qaic_device *qdev, u32 dbc_id)
 	dbc->usr = NULL;
 
 	list_for_each_entry_safe(bo, bo_temp, &dbc->bo_lists, bo_list) {
-		qaic_free_slices_bo(bo);
-		qaic_unprepare_bo(qdev, bo);
-		qaic_init_bo(bo, true);
-		list_del(&bo->bo_list);
+		drm_gem_object_get(&bo->base);
+		mutex_lock(&bo->lock);
+		detach_slice_bo(qdev, bo);
+		mutex_unlock(&bo->lock);
+		drm_gem_object_put(&bo->base);
 	}
 
 	dbc->in_use = false;
diff --git a/drivers/accel/qaic/qaic_drv.c b/drivers/accel/qaic/qaic_drv.c
index b5de82e6eb4d..e2bfb4eaf852 100644
--- a/drivers/accel/qaic/qaic_drv.c
+++ b/drivers/accel/qaic/qaic_drv.c
@@ -150,6 +150,7 @@ static const struct drm_ioctl_desc qaic_drm_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(QAIC_PARTIAL_EXECUTE_BO, qaic_partial_execute_bo_ioctl, 0),
 	DRM_IOCTL_DEF_DRV(QAIC_WAIT_BO, qaic_wait_bo_ioctl, 0),
 	DRM_IOCTL_DEF_DRV(QAIC_PERF_STATS_BO, qaic_perf_stats_bo_ioctl, 0),
+	DRM_IOCTL_DEF_DRV(QAIC_DETACH_SLICE_BO, qaic_detach_slice_bo_ioctl, 0),
 };
 
 static const struct drm_driver qaic_accel_driver = {
diff --git a/include/uapi/drm/qaic_accel.h b/include/uapi/drm/qaic_accel.h
index f89880b7bfb6..43ac5d864512 100644
--- a/include/uapi/drm/qaic_accel.h
+++ b/include/uapi/drm/qaic_accel.h
@@ -372,6 +372,16 @@ struct qaic_perf_stats_entry {
 	__u32 pad;
 };
 
+/**
+ * struct qaic_detach_slice - Detaches slicing configuration from BO.
+ * @handle: In. GEM handle of the BO to detach slicing configuration.
+ * @pad: Structure padding. Must be 0.
+ */
+struct qaic_detach_slice {
+	__u32 handle;
+	__u32 pad;
+};
+
 #define DRM_QAIC_MANAGE			0x00
 #define DRM_QAIC_CREATE_BO		0x01
 #define DRM_QAIC_MMAP_BO		0x02
@@ -380,6 +390,7 @@ struct qaic_perf_stats_entry {
 #define DRM_QAIC_PARTIAL_EXECUTE_BO	0x05
 #define DRM_QAIC_WAIT_BO		0x06
 #define DRM_QAIC_PERF_STATS_BO		0x07
+#define DRM_QAIC_DETACH_SLICE_BO	0x08
 
 #define DRM_IOCTL_QAIC_MANAGE			DRM_IOWR(DRM_COMMAND_BASE + DRM_QAIC_MANAGE, struct qaic_manage_msg)
 #define DRM_IOCTL_QAIC_CREATE_BO		DRM_IOWR(DRM_COMMAND_BASE + DRM_QAIC_CREATE_BO, struct qaic_create_bo)
@@ -389,6 +400,7 @@ struct qaic_perf_stats_entry {
 #define DRM_IOCTL_QAIC_PARTIAL_EXECUTE_BO	DRM_IOW(DRM_COMMAND_BASE + DRM_QAIC_PARTIAL_EXECUTE_BO, struct qaic_execute)
 #define DRM_IOCTL_QAIC_WAIT_BO			DRM_IOW(DRM_COMMAND_BASE + DRM_QAIC_WAIT_BO, struct qaic_wait)
 #define DRM_IOCTL_QAIC_PERF_STATS_BO		DRM_IOWR(DRM_COMMAND_BASE + DRM_QAIC_PERF_STATS_BO, struct qaic_perf_stats)
+#define DRM_IOCTL_QAIC_DETACH_SLICE_BO		DRM_IOW(DRM_COMMAND_BASE + DRM_QAIC_DETACH_SLICE_BO, struct qaic_detach_slice)
 
 #if defined(__cplusplus)
 }
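
For context, the new ioctl is simple to drive from userspace. Below is a
minimal sketch (not part of the patch) of detaching the slicing configuration
from a BO so a different configuration can be attached with another
DRM_IOCTL_QAIC_ATTACH_SLICE_BO call. The file descriptor and handle are
assumed to come from the usual open()/DRM_IOCTL_QAIC_CREATE_BO flow, and the
uapi header is assumed to be installed from this series:

#include <string.h>
#include <sys/ioctl.h>
#include <drm/qaic_accel.h>	/* assumption: uapi header from this series */

/*
 * Illustrative only: fd is an open accel node, bo_handle is a GEM handle
 * that currently has slicing information attached.
 */
static int example_detach_slice(int fd, __u32 bo_handle)
{
	struct qaic_detach_slice detach;

	memset(&detach, 0, sizeof(detach));	/* pad must be 0 */
	detach.handle = bo_handle;

	/*
	 * Per the patch, the kernel returns -EBUSY while the BO is still
	 * queued to hardware and -EINVAL if no slicing is attached.
	 */
	return ioctl(fd, DRM_IOCTL_QAIC_DETACH_SLICE_BO, &detach);
}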