From patchwork Wed Oct 18 07:02:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ekansh Gupta X-Patchwork-Id: 735729 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9660DC46CA1 for ; Wed, 18 Oct 2023 07:03:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344528AbjJRHDD (ORCPT ); Wed, 18 Oct 2023 03:03:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34388 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234947AbjJRHDB (ORCPT ); Wed, 18 Oct 2023 03:03:01 -0400 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A7C0F100; Wed, 18 Oct 2023 00:02:56 -0700 (PDT) Received: from pps.filterd (m0279870.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 39I4vDju022978; Wed, 18 Oct 2023 07:02:54 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=qcppdkim1; bh=i8Se3s0n/KHRd73QipFoyeDw5ZqV+mJMhlUDlsmzk0Q=; b=MsCZE1wULtgqO9L3copjLuvjHhHl01ZTxTHha4s3AzLx9BUYgdVOnqB5JfLh75vlciLF 0NkuKFGXv0Hs5LnYGkIYEIrR0T9c/pFWxuNz8Ga0AUPj7cYEfHCbZLY+ZXzRNR+sDJN/ WAsd8f6AqW3olBaHjY2quvT81V8fz3LEPOpvCTHbWAsvvXNzU4Wyg1tyzj/d0vJsD8Qe 4gW9rzgg40l09KvsHvy2a99PpGBkPQxImaxnWEpskBoOY/hOpBaCzN1QxU2rZC8z+7VO R3dJKCxW/HokjKRZVGUC5fzK+tJo5lBQ4QlXECWIAdzNYLfcL2EI61hNXfqcux2kO0GH OQ== Received: from nalasppmta04.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3tsaf0v46a-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 18 Oct 2023 07:02:54 +0000 Received: from nalasex01b.na.qualcomm.com (nalasex01b.na.qualcomm.com [10.47.209.197]) by NALASPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 39I72rfD022605 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 18 Oct 2023 07:02:53 GMT Received: from ekangupt-linux.qualcomm.com (10.80.80.8) by nalasex01b.na.qualcomm.com (10.47.209.197) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.39; Wed, 18 Oct 2023 00:02:50 -0700 From: Ekansh Gupta To: , CC: Ekansh Gupta , , Subject: [PATCH v5 1/5] misc: fastrpc: Add fastrpc multimode invoke request support Date: Wed, 18 Oct 2023 12:32:36 +0530 Message-ID: <1697612560-9726-2-git-send-email-quic_ekangupt@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1697612560-9726-1-git-send-email-quic_ekangupt@quicinc.com> References: <1697612560-9726-1-git-send-email-quic_ekangupt@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01b.na.qualcomm.com (10.47.209.197) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: zMMUN8SdGVVQgKo6mTzPbLaS20Dgs2jL X-Proofpoint-ORIG-GUID: zMMUN8SdGVVQgKo6mTzPbLaS20Dgs2jL X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-10-18_04,2023-10-17_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam 
policy=outbound score=0 clxscore=1015 suspectscore=0 priorityscore=1501 bulkscore=0 adultscore=0 mlxlogscore=999 malwarescore=0 mlxscore=0 impostorscore=0 spamscore=0 lowpriorityscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2309180000 definitions=main-2310180058 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Multimode invocation request is intended to support multiple different type of requests. This will include enhanced invoke request to support CRC check and performance counter enablement. This will also support few driver level user controllable mechanisms like usage of shared context banks, wakelock support, etc. This IOCTL is also added with the aim to support few new fastrpc features like DSP PD notification framework, DSP Signalling mechanism etc. Signed-off-by: Ekansh Gupta --- Changes in v4: - Added padding member to IOCTL structure - Added checks for reserved members drivers/misc/fastrpc.c | 160 ++++++++++++++++++++++++++++++++------------ include/uapi/misc/fastrpc.h | 26 +++++++ 2 files changed, 145 insertions(+), 41 deletions(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index a66b7c1..e392e2a 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -573,7 +573,7 @@ static void fastrpc_get_buff_overlaps(struct fastrpc_invoke_ctx *ctx) static struct fastrpc_invoke_ctx *fastrpc_context_alloc( struct fastrpc_user *user, u32 kernel, u32 sc, - struct fastrpc_invoke_args *args) + struct fastrpc_enhanced_invoke *invoke) { struct fastrpc_channel_ctx *cctx = user->cctx; struct fastrpc_invoke_ctx *ctx = NULL; @@ -604,7 +604,7 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( kfree(ctx); return ERR_PTR(-ENOMEM); } - ctx->args = args; + ctx->args = (struct fastrpc_invoke_args *)invoke->inv.args; fastrpc_get_buff_overlaps(ctx); } @@ -1133,12 +1133,12 @@ static int fastrpc_invoke_send(struct fastrpc_session_ctx *sctx, } static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, - u32 handle, u32 sc, - struct fastrpc_invoke_args *args) + struct fastrpc_enhanced_invoke *invoke) { struct fastrpc_invoke_ctx *ctx = NULL; struct fastrpc_buf *buf, *b; - + struct fastrpc_invoke *inv = &invoke->inv; + u32 handle, sc; int err = 0; if (!fl->sctx) @@ -1147,12 +1147,14 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, if (!fl->cctx->rpdev) return -EPIPE; + handle = inv->handle; + sc = inv->sc; if (handle == FASTRPC_INIT_HANDLE && !kernel) { dev_warn_ratelimited(fl->sctx->dev, "user app trying to send a kernel RPC message (%d)\n", handle); return -EPERM; } - ctx = fastrpc_context_alloc(fl, kernel, sc, args); + ctx = fastrpc_context_alloc(fl, kernel, sc, invoke); if (IS_ERR(ctx)) return PTR_ERR(ctx); @@ -1238,6 +1240,7 @@ static int fastrpc_init_create_static_process(struct fastrpc_user *fl, { struct fastrpc_init_create_static init; struct fastrpc_invoke_args *args; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_phy_page pages[1]; char *name; int err; @@ -1246,7 +1249,6 @@ static int fastrpc_init_create_static_process(struct fastrpc_user *fl, u32 namelen; u32 pageslen; } inbuf; - u32 sc; args = kcalloc(FASTRPC_CREATE_STATIC_PROCESS_NARGS, sizeof(*args), GFP_KERNEL); if (!args) @@ -1313,10 +1315,11 @@ static int fastrpc_init_create_static_process(struct fastrpc_user *fl, args[2].length = sizeof(*pages); args[2].fd = -1; - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_STATIC, 3, 0); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = 
FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_STATIC, 3, 0); + ioctl.inv.args = (__u64)args; - err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, - sc, args); + err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) goto err_invoke; @@ -1356,6 +1359,7 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, { struct fastrpc_init_create init; struct fastrpc_invoke_args *args; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_phy_page pages[1]; struct fastrpc_map *map = NULL; struct fastrpc_buf *imem = NULL; @@ -1369,7 +1373,6 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, u32 attrs; u32 siglen; } inbuf; - u32 sc; bool unsigned_module = false; args = kcalloc(FASTRPC_CREATE_PROCESS_NARGS, sizeof(*args), GFP_KERNEL); @@ -1443,12 +1446,13 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, args[5].length = sizeof(inbuf.siglen); args[5].fd = -1; - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE, 4, 0); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE, 4, 0); if (init.attrs) - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_ATTR, 4, 0); + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_ATTR, 4, 0); + ioctl.inv.args = (__u64)args; - err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, - sc, args); + err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) goto err_invoke; @@ -1500,17 +1504,19 @@ static void fastrpc_session_free(struct fastrpc_channel_ctx *cctx, static int fastrpc_release_current_dsp_process(struct fastrpc_user *fl) { struct fastrpc_invoke_args args[1]; + struct fastrpc_enhanced_invoke ioctl; int tgid = 0; - u32 sc; tgid = fl->tgid; args[0].ptr = (u64)(uintptr_t) &tgid; args[0].length = sizeof(tgid); args[0].fd = -1; - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_RELEASE, 1, 0); - return fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, - sc, &args[0]); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_RELEASE, 1, 0); + ioctl.inv.args = (__u64)args; + + return fastrpc_internal_invoke(fl, true, &ioctl); } static int fastrpc_device_release(struct inode *inode, struct file *file) @@ -1646,22 +1652,25 @@ static int fastrpc_dmabuf_alloc(struct fastrpc_user *fl, char __user *argp) static int fastrpc_init_attach(struct fastrpc_user *fl, int pd) { struct fastrpc_invoke_args args[1]; + struct fastrpc_enhanced_invoke ioctl; int tgid = fl->tgid; - u32 sc; args[0].ptr = (u64)(uintptr_t) &tgid; args[0].length = sizeof(tgid); args[0].fd = -1; - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_ATTACH, 1, 0); fl->pd = pd; - return fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, - sc, &args[0]); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_ATTACH, 1, 0); + ioctl.inv.args = (__u64)args; + + return fastrpc_internal_invoke(fl, true, &ioctl); } static int fastrpc_invoke(struct fastrpc_user *fl, char __user *argp) { struct fastrpc_invoke_args *args = NULL; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_invoke inv; u32 nscalars; int err; @@ -1683,16 +1692,70 @@ static int fastrpc_invoke(struct fastrpc_user *fl, char __user *argp) } } - err = fastrpc_internal_invoke(fl, false, inv.handle, inv.sc, args); + ioctl.inv = inv; + ioctl.inv.args = (__u64)args; + + err = fastrpc_internal_invoke(fl, false, &ioctl); kfree(args); return err; } +static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) +{ + struct fastrpc_enhanced_invoke einv; + struct fastrpc_invoke_args *args = 
NULL; + struct fastrpc_ioctl_multimode_invoke invoke; + u32 nscalars; + int err, i; + + if (copy_from_user(&invoke, argp, sizeof(invoke))) + return -EFAULT; + + for (i = 0; i < 8; i++) { + if (invoke.reserved[i] != 0) + return -EINVAL; + } + if (invoke.rsvd != 0) + return -EINVAL; + + switch (invoke.req) { + case FASTRPC_INVOKE: + /* nscalars is truncated here to max supported value */ + if (copy_from_user(&einv, (void __user *)(uintptr_t)invoke.invparam, + invoke.size)) + return -EFAULT; + for (i = 0; i < 8; i++) { + if (einv.reserved[i] != 0) + return -EINVAL; + } + nscalars = REMOTE_SCALARS_LENGTH(einv.inv.sc); + if (nscalars) { + args = kcalloc(nscalars, sizeof(*args), GFP_KERNEL); + if (!args) + return -ENOMEM; + if (copy_from_user(args, (void __user *)(uintptr_t)einv.inv.args, + nscalars * sizeof(*args))) { + kfree(args); + return -EFAULT; + } + } + einv.inv.args = (__u64)args; + err = fastrpc_internal_invoke(fl, false, &einv); + kfree(args); + break; + default: + err = -ENOTTY; + break; + } + return err; +} + static int fastrpc_get_info_from_dsp(struct fastrpc_user *fl, uint32_t *dsp_attr_buf, uint32_t dsp_attr_buf_len) { struct fastrpc_invoke_args args[2] = { 0 }; + struct fastrpc_enhanced_invoke ioctl; /* Capability filled in userspace */ dsp_attr_buf[0] = 0; @@ -1705,8 +1768,11 @@ static int fastrpc_get_info_from_dsp(struct fastrpc_user *fl, uint32_t *dsp_attr args[1].fd = -1; fl->pd = USER_PD; - return fastrpc_internal_invoke(fl, true, FASTRPC_DSP_UTILITIES_HANDLE, - FASTRPC_SCALARS(0, 1, 1), args); + ioctl.inv.handle = FASTRPC_DSP_UTILITIES_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(0, 1, 1); + ioctl.inv.args = (__u64)args; + + return fastrpc_internal_invoke(fl, true, &ioctl); } static int fastrpc_get_info_from_kernel(struct fastrpc_ioctl_capability *cap, @@ -1793,10 +1859,10 @@ static int fastrpc_get_dsp_info(struct fastrpc_user *fl, char __user *argp) static int fastrpc_req_munmap_impl(struct fastrpc_user *fl, struct fastrpc_buf *buf) { struct fastrpc_invoke_args args[1] = { [0] = { 0 } }; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_munmap_req_msg req_msg; struct device *dev = fl->sctx->dev; int err; - u32 sc; req_msg.pgid = fl->tgid; req_msg.size = buf->size; @@ -1805,9 +1871,11 @@ static int fastrpc_req_munmap_impl(struct fastrpc_user *fl, struct fastrpc_buf * args[0].ptr = (u64) (uintptr_t) &req_msg; args[0].length = sizeof(req_msg); - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MUNMAP, 1, 0); - err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, sc, - &args[0]); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MUNMAP, 1, 0); + ioctl.inv.args = (__u64)args; + + err = fastrpc_internal_invoke(fl, true, &ioctl); if (!err) { dev_dbg(dev, "unmmap\tpt 0x%09lx OK\n", buf->raddr); spin_lock(&fl->lock); @@ -1851,6 +1919,7 @@ static int fastrpc_req_munmap(struct fastrpc_user *fl, char __user *argp) static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp) { struct fastrpc_invoke_args args[3] = { [0 ... 
2] = { 0 } }; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_buf *buf = NULL; struct fastrpc_mmap_req_msg req_msg; struct fastrpc_mmap_rsp_msg rsp_msg; @@ -1858,7 +1927,6 @@ static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp) struct fastrpc_req_mmap req; struct device *dev = fl->sctx->dev; int err; - u32 sc; if (copy_from_user(&req, argp, sizeof(req))) return -EFAULT; @@ -1901,9 +1969,11 @@ static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp) args[2].ptr = (u64) (uintptr_t) &rsp_msg; args[2].length = sizeof(rsp_msg); - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MMAP, 2, 1); - err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, sc, - &args[0]); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MMAP, 2, 1); + ioctl.inv.args = (__u64)args; + + err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) { dev_err(dev, "mmap error (len 0x%08llx)\n", buf->size); goto err_invoke; @@ -1951,10 +2021,10 @@ static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp) static int fastrpc_req_mem_unmap_impl(struct fastrpc_user *fl, struct fastrpc_mem_unmap *req) { struct fastrpc_invoke_args args[1] = { [0] = { 0 } }; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_map *map = NULL, *iter, *m; struct fastrpc_mem_unmap_req_msg req_msg = { 0 }; int err = 0; - u32 sc; struct device *dev = fl->sctx->dev; spin_lock(&fl->lock); @@ -1980,9 +2050,11 @@ static int fastrpc_req_mem_unmap_impl(struct fastrpc_user *fl, struct fastrpc_me args[0].ptr = (u64) (uintptr_t) &req_msg; args[0].length = sizeof(req_msg); - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MEM_UNMAP, 1, 0); - err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, sc, - &args[0]); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MEM_UNMAP, 1, 0); + ioctl.inv.args = (__u64)args; + + err = fastrpc_internal_invoke(fl, true, &ioctl); fastrpc_map_put(map); if (err) dev_err(dev, "unmmap\tpt fd = %d, 0x%09llx error\n", map->fd, map->raddr); @@ -2003,6 +2075,7 @@ static int fastrpc_req_mem_unmap(struct fastrpc_user *fl, char __user *argp) static int fastrpc_req_mem_map(struct fastrpc_user *fl, char __user *argp) { struct fastrpc_invoke_args args[4] = { [0 ... 
3] = { 0 } }; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_mem_map_req_msg req_msg = { 0 }; struct fastrpc_mmap_rsp_msg rsp_msg = { 0 }; struct fastrpc_mem_unmap req_unmap = { 0 }; @@ -2011,7 +2084,6 @@ static int fastrpc_req_mem_map(struct fastrpc_user *fl, char __user *argp) struct device *dev = fl->sctx->dev; struct fastrpc_map *map = NULL; int err; - u32 sc; if (copy_from_user(&req, argp, sizeof(req))) return -EFAULT; @@ -2047,8 +2119,11 @@ static int fastrpc_req_mem_map(struct fastrpc_user *fl, char __user *argp) args[3].ptr = (u64) (uintptr_t) &rsp_msg; args[3].length = sizeof(rsp_msg); - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MEM_MAP, 3, 1); - err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, sc, &args[0]); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MEM_MAP, 3, 1); + ioctl.inv.args = (__u64)args; + + err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) { dev_err(dev, "mem mmap error, fd %d, vaddr %llx, size %lld\n", req.fd, req.vaddrin, map->size); @@ -2088,6 +2163,9 @@ static long fastrpc_device_ioctl(struct file *file, unsigned int cmd, case FASTRPC_IOCTL_INVOKE: err = fastrpc_invoke(fl, argp); break; + case FASTRPC_IOCTL_MULTIMODE_INVOKE: + err = fastrpc_multimode_invoke(fl, argp); + break; case FASTRPC_IOCTL_INIT_ATTACH: err = fastrpc_init_attach(fl, ROOT_PD); break; diff --git a/include/uapi/misc/fastrpc.h b/include/uapi/misc/fastrpc.h index f33d914..88194c5 100644 --- a/include/uapi/misc/fastrpc.h +++ b/include/uapi/misc/fastrpc.h @@ -16,6 +16,7 @@ #define FASTRPC_IOCTL_INIT_CREATE_STATIC _IOWR('R', 9, struct fastrpc_init_create_static) #define FASTRPC_IOCTL_MEM_MAP _IOWR('R', 10, struct fastrpc_mem_map) #define FASTRPC_IOCTL_MEM_UNMAP _IOWR('R', 11, struct fastrpc_mem_unmap) +#define FASTRPC_IOCTL_MULTIMODE_INVOKE _IOWR('R', 12, struct fastrpc_ioctl_multimode_invoke) #define FASTRPC_IOCTL_GET_DSP_INFO _IOWR('R', 13, struct fastrpc_ioctl_capability) /** @@ -80,6 +81,31 @@ struct fastrpc_invoke { __u64 args; }; +struct fastrpc_enhanced_invoke { + struct fastrpc_invoke inv; + __u64 crc; + __u64 perf_kernel; + __u64 perf_dsp; + __u32 reserved[8]; // keeping reserved bits for new requirements +}; + +struct fastrpc_ioctl_multimode_invoke { + __u32 req; + __u32 rsvd; // padding field + __u64 invparam; + __u64 size; + __u32 reserved[8]; // keeping reserved bits for new requirements +}; + +enum fastrpc_multimode_invoke_type { + FASTRPC_INVOKE = 1, + FASTRPC_INVOKE_ENHANCED = 2, + FASTRPC_INVOKE_CONTROL = 3, + FASTRPC_INVOKE_DSPSIGNAL = 4, + FASTRPC_INVOKE_NOTIF = 5, + FASTRPC_INVOKE_MULTISESSION = 6, +}; + struct fastrpc_init_create { __u32 filelen; /* elf file length */ __s32 filefd; /* fd for the file */ From patchwork Wed Oct 18 07:02:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ekansh Gupta X-Patchwork-Id: 735258 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 54E1ACDB482 for ; Wed, 18 Oct 2023 07:03:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344559AbjJRHDE (ORCPT ); Wed, 18 Oct 2023 03:03:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34342 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344581AbjJRHDD (ORCPT ); Wed, 18 Oct 2023 03:03:03 
-0400 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 97F7E10E; Wed, 18 Oct 2023 00:02:59 -0700 (PDT) Received: from pps.filterd (m0279872.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 39I68UTG032369; Wed, 18 Oct 2023 07:02:56 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=qcppdkim1; bh=hsibUURuaOSyADWg6+k+o19Ya1zYqRn0cGAJHu9TGow=; b=i10mpj3S5mLnokRxuindevwLEA8/vN3mabT+UNr/HGl4GYGfRGVf20o88alNKseU0OSG WqgTlAJPLofZ0ICLlXISccq+33jf8wTcFYQ1RGcC/tA4uYV90EnlA9YSYApi1V9ZqXd5 j8ZhxEU8TDmwZ5m+5jEkc/dvsO9heFCBvjygGdIPV+IWzv5TkwGj+Rtg4MdnlAQmahA9 fnxG3qZZcRJQtpTDVEKX4TJ/2d61zVcPnbSghY9GkNHt8GAct2JR+AKu51/Aro92nuOR zs45FkSAzwQa6ZsbXmChzTuvyR3MEmJWd0I95MLL5oWo3pBOYlLGFRIMgGprpC+U+iGB tA== Received: from nalasppmta04.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3tsr7c2bht-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 18 Oct 2023 07:02:56 +0000 Received: from nalasex01b.na.qualcomm.com (nalasex01b.na.qualcomm.com [10.47.209.197]) by NALASPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 39I72tGH022654 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 18 Oct 2023 07:02:55 GMT Received: from ekangupt-linux.qualcomm.com (10.80.80.8) by nalasex01b.na.qualcomm.com (10.47.209.197) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.39; Wed, 18 Oct 2023 00:02:53 -0700 From: Ekansh Gupta To: , CC: Ekansh Gupta , , Subject: [PATCH v5 2/5] misc: fastrpc: Add CRC support for remote buffers Date: Wed, 18 Oct 2023 12:32:37 +0530 Message-ID: <1697612560-9726-3-git-send-email-quic_ekangupt@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1697612560-9726-1-git-send-email-quic_ekangupt@quicinc.com> References: <1697612560-9726-1-git-send-email-quic_ekangupt@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01b.na.qualcomm.com (10.47.209.197) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: Rm9LibQyPpEaRovXik-s6-r57_XPeJrd X-Proofpoint-GUID: Rm9LibQyPpEaRovXik-s6-r57_XPeJrd X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-10-18_04,2023-10-17_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 phishscore=0 lowpriorityscore=0 spamscore=0 clxscore=1015 bulkscore=0 adultscore=0 impostorscore=0 mlxscore=0 priorityscore=1501 mlxlogscore=999 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2309180000 definitions=main-2310180058 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org CRC check for input and output argument helps in ensuring data consistency over a remote call. If user intends to enable CRC check, first local user CRC is calculated at user end and a CRC buffer is passed to DSP to capture remote CRC values. DSP is expected to write to the remote CRC buffer which is then compared at user level with the local CRC values. 
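A minimal user-space sketch of how the CRC path could be exercised is given below; it assumes the multimode-invoke UAPI added earlier in this series. The device fd, remote handle, scalars, argument list and the locally computed CRC values are placeholders, and the 64-word result buffer mirrors the driver's FASTRPC_MAX_CRCLIST.

#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <misc/fastrpc.h>

static int invoke_with_crc(int fd, uint32_t handle, uint32_t sc,
			   struct fastrpc_invoke_args *args,
			   const uint32_t *local_crc, unsigned int nbufs)
{
	uint32_t remote_crc[64] = { 0 };	/* driver copies FASTRPC_MAX_CRCLIST words back */
	struct fastrpc_enhanced_invoke einv = { 0 };
	struct fastrpc_ioctl_multimode_invoke minv = { 0 };
	unsigned int i;
	int ret;

	einv.inv.handle = handle;
	einv.inv.sc = sc;
	einv.inv.args = (uint64_t)(uintptr_t)args;
	einv.crc = (uint64_t)(uintptr_t)remote_crc;	/* DSP-reported CRCs land here */

	minv.req = FASTRPC_INVOKE_ENHANCED;
	minv.invparam = (uint64_t)(uintptr_t)&einv;
	minv.size = sizeof(einv);

	ret = ioctl(fd, FASTRPC_IOCTL_MULTIMODE_INVOKE, &minv);
	if (ret)
		return ret;

	/* Compare the CRCs computed locally (by the user library) with the
	 * values the DSP wrote into the remote CRC buffer. */
	for (i = 0; i < nbufs && i < 64; i++) {
		if (local_crc[i] != remote_crc[i]) {
			fprintf(stderr, "CRC mismatch on buffer %u\n", i);
			return -1;
		}
	}
	return 0;
}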
Signed-off-by: Ekansh Gupta --- drivers/misc/fastrpc.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index e392e2a..825ff91 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -611,6 +611,7 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( /* Released in fastrpc_context_put() */ fastrpc_channel_ctx_get(cctx); + ctx->crc = (u32 *)(uintptr_t)invoke->crc; ctx->sc = sc; ctx->retval = -1; ctx->pid = current->pid; @@ -1066,6 +1067,7 @@ static int fastrpc_put_args(struct fastrpc_invoke_ctx *ctx, struct fastrpc_invoke_buf *list; struct fastrpc_phy_page *pages; u64 *fdlist; + u32 *crclist; int i, inbufs, outbufs, handles; inbufs = REMOTE_SCALARS_INBUFS(ctx->sc); @@ -1073,7 +1075,8 @@ static int fastrpc_put_args(struct fastrpc_invoke_ctx *ctx, handles = REMOTE_SCALARS_INHANDLES(ctx->sc) + REMOTE_SCALARS_OUTHANDLES(ctx->sc); list = fastrpc_invoke_buf_start(rpra, ctx->nscalars); pages = fastrpc_phy_page_start(list, ctx->nscalars); - fdlist = (uint64_t *)(pages + inbufs + outbufs + handles); + fdlist = (u64 *)(pages + inbufs + outbufs + handles); + crclist = (u32 *)(fdlist + FASTRPC_MAX_FDLIST); for (i = inbufs; i < ctx->nbufs; ++i) { if (!ctx->maps[i]) { @@ -1097,6 +1100,10 @@ static int fastrpc_put_args(struct fastrpc_invoke_ctx *ctx, fastrpc_map_put(mmap); } + if (ctx->crc && crclist && rpra) { + if (copy_to_user((void __user *)ctx->crc, crclist, FASTRPC_MAX_CRCLIST * sizeof(u32))) + return -EFAULT; + } return 0; } @@ -1721,6 +1728,7 @@ static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) switch (invoke.req) { case FASTRPC_INVOKE: + case FASTRPC_INVOKE_ENHANCED: /* nscalars is truncated here to max supported value */ if (copy_from_user(&einv, (void __user *)(uintptr_t)invoke.invparam, invoke.size)) From patchwork Wed Oct 18 07:02:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ekansh Gupta X-Patchwork-Id: 735257 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 96E35CDB47E for ; Wed, 18 Oct 2023 07:03:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235147AbjJRHDV (ORCPT ); Wed, 18 Oct 2023 03:03:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49226 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344646AbjJRHDL (ORCPT ); Wed, 18 Oct 2023 03:03:11 -0400 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 936B9126; Wed, 18 Oct 2023 00:03:02 -0700 (PDT) Received: from pps.filterd (m0279870.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 39I4m6pn005203; Wed, 18 Oct 2023 07:02:59 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=qcppdkim1; bh=tqqvxSf2BwZDIBBLod89GS6mURAg1BSoG2iU12URJk0=; b=gnCGleyrKy919PvYwLdDsYxXVWx5FFRsn6bY2sBzPEIFp+/b87hhh4B4ZiZgdhKQiDwa ZVFu04tHMNpBwoeWGIrR5s9EittWgi5erUpcCK0bDMMJl95JfqwAYZgsLm1W4dGfe3OT nRzVckpemMt4fC/bBRoPb82q0uWmd90qxm2j2mhBpn7SWsqH7hypM1x89uG9H86PYmGG jAkr4aSWICvJ80/xQUiDCwBQsTu/b3EFgymJm4QVRb2fllO6/7wR4DM4UISTvC9lQ3oD 
zkWWMG3ooubdEoA1FQda009KgcbSSrKYCE5gTgmUIAklbB9KprET3JtvoSkUz6doL6Gd lg== Received: from nalasppmta03.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3tsaf0v46e-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 18 Oct 2023 07:02:59 +0000 Received: from nalasex01b.na.qualcomm.com (nalasex01b.na.qualcomm.com [10.47.209.197]) by NALASPPMTA03.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 39I72w95007759 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 18 Oct 2023 07:02:58 GMT Received: from ekangupt-linux.qualcomm.com (10.80.80.8) by nalasex01b.na.qualcomm.com (10.47.209.197) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.39; Wed, 18 Oct 2023 00:02:55 -0700 From: Ekansh Gupta To: , CC: Ekansh Gupta , , Subject: [PATCH v5 3/5] misc: fastrpc: Capture kernel and DSP performance counters Date: Wed, 18 Oct 2023 12:32:38 +0530 Message-ID: <1697612560-9726-4-git-send-email-quic_ekangupt@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1697612560-9726-1-git-send-email-quic_ekangupt@quicinc.com> References: <1697612560-9726-1-git-send-email-quic_ekangupt@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01b.na.qualcomm.com (10.47.209.197) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: YwMHkqnYMUGapcLgz-Y_BDuTnZ7gMKUI X-Proofpoint-ORIG-GUID: YwMHkqnYMUGapcLgz-Y_BDuTnZ7gMKUI X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-10-18_04,2023-10-17_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 clxscore=1015 suspectscore=0 priorityscore=1501 bulkscore=0 adultscore=0 mlxlogscore=999 malwarescore=0 mlxscore=0 impostorscore=0 spamscore=0 lowpriorityscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2309180000 definitions=main-2310180058 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add support to capture kernel performance counters for different kernel level operations. These counters collects the information for remote call and copies the information to a buffer shared by user. Collection of DSP performance counters is also added as part of this change. DSP updates the performance information in the metadata which is then copied to a buffer passed by the user. 
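A rough user-space sketch of how these counters could be consumed is shown below, assuming the enhanced-invoke UAPI from this series. The remote handle, scalars and argument list are placeholders; the array sizes 10 and 12 mirror the driver's FASTRPC_KERNEL_PERF_LIST (PERF_KEY_MAX) and FASTRPC_DSP_PERF_LIST.

#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <misc/fastrpc.h>

static int invoke_with_perf(int fd, uint32_t handle, uint32_t sc,
			    struct fastrpc_invoke_args *args)
{
	uint64_t perf_kernel[10] = { 0 };	/* indexed by enum fastrpc_perfkeys, values in ns */
	uint64_t perf_dsp[12] = { 0 };		/* DSP-side counters from the call metadata */
	struct fastrpc_enhanced_invoke einv = { 0 };
	struct fastrpc_ioctl_multimode_invoke minv = { 0 };
	int ret;

	einv.inv.handle = handle;
	einv.inv.sc = sc;
	einv.inv.args = (uint64_t)(uintptr_t)args;
	/* A non-zero perf_kernel pointer switches the session into profile mode. */
	einv.perf_kernel = (uint64_t)(uintptr_t)perf_kernel;
	einv.perf_dsp = (uint64_t)(uintptr_t)perf_dsp;

	minv.req = FASTRPC_INVOKE_ENHANCED;
	minv.invparam = (uint64_t)(uintptr_t)&einv;
	minv.size = sizeof(einv);

	ret = ioctl(fd, FASTRPC_IOCTL_MULTIMODE_INVOKE, &minv);
	if (ret)
		return ret;

	printf("invocations %llu, getargs %llu ns, putargs %llu ns, invoke %llu ns\n",
	       (unsigned long long)perf_kernel[PERF_COUNT],
	       (unsigned long long)perf_kernel[PERF_GETARGS],
	       (unsigned long long)perf_kernel[PERF_PUTARGS],
	       (unsigned long long)perf_kernel[PERF_INVOKE]);
	return 0;
}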
Signed-off-by: Ekansh Gupta --- Changes in v2: - Fixed compile time warnings Changes in v3: - Squashed commits to get proper patch series drivers/misc/fastrpc.c | 140 +++++++++++++++++++++++++++++++++++++++++--- include/uapi/misc/fastrpc.h | 14 +++++ 2 files changed, 146 insertions(+), 8 deletions(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index 825ff91..b9822c1 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -19,6 +19,7 @@ #include #include #include +#include #include #include #include @@ -33,6 +34,8 @@ #define FASTRPC_ALIGN 128 #define FASTRPC_MAX_FDLIST 16 #define FASTRPC_MAX_CRCLIST 64 +#define FASTRPC_KERNEL_PERF_LIST (PERF_KEY_MAX) +#define FASTRPC_DSP_PERF_LIST 12 #define FASTRPC_PHYS(p) ((p) & 0xffffffff) #define FASTRPC_CTX_MAX (256) #define FASTRPC_INIT_HANDLE 1 @@ -105,6 +108,27 @@ #define miscdev_to_fdevice(d) container_of(d, struct fastrpc_device, miscdev) +#define PERF_END ((void)0) + +#define PERF(enb, cnt, ff) \ + {\ + struct timespec64 startT = {0};\ + uint64_t *counter = cnt;\ + if (enb && counter) {\ + ktime_get_real_ts64(&startT);\ + } \ + ff ;\ + if (enb && counter) {\ + *counter += getnstimediff(&startT);\ + } \ + } + +#define GET_COUNTER(perf_ptr, offset) \ + (perf_ptr != NULL ?\ + (((offset >= 0) && (offset < PERF_KEY_MAX)) ?\ + (uint64_t *)(perf_ptr + offset)\ + : (uint64_t *)NULL) : (uint64_t *)NULL) + static const char *domains[FASTRPC_DEV_MAX] = { "adsp", "mdsp", "sdsp", "cdsp"}; struct fastrpc_phy_page { @@ -228,6 +252,19 @@ struct fastrpc_map { struct kref refcount; }; +struct fastrpc_perf { + u64 count; + u64 flush; + u64 map; + u64 copy; + u64 link; + u64 getargs; + u64 putargs; + u64 invargs; + u64 invoke; + u64 tid; +}; + struct fastrpc_invoke_ctx { int nscalars; int nbufs; @@ -236,6 +273,8 @@ struct fastrpc_invoke_ctx { int tgid; u32 sc; u32 *crc; + u64 *perf_kernel; + u64 *perf_dsp; u64 ctxid; u64 msg_sz; struct kref refcount; @@ -250,6 +289,7 @@ struct fastrpc_invoke_ctx { struct fastrpc_invoke_args *args; struct fastrpc_buf_overlap *olaps; struct fastrpc_channel_ctx *cctx; + struct fastrpc_perf *perf; }; struct fastrpc_session_ctx { @@ -299,6 +339,7 @@ struct fastrpc_user { struct fastrpc_session_ctx *sctx; struct fastrpc_buf *init_mem; + u32 profile; int tgid; int pd; bool is_secure_dev; @@ -308,6 +349,17 @@ struct fastrpc_user { struct mutex mutex; }; +static inline int64_t getnstimediff(struct timespec64 *start) +{ + int64_t ns; + struct timespec64 ts, b; + + ktime_get_real_ts64(&ts); + b = timespec64_sub(ts, *start); + ns = timespec64_to_ns(&b); + return ns; +} + static void fastrpc_free_map(struct kref *ref) { struct fastrpc_map *map; @@ -493,6 +545,9 @@ static void fastrpc_context_free(struct kref *ref) if (ctx->buf) fastrpc_buf_free(ctx->buf); + if (ctx->fl->profile) + kfree(ctx->perf); + spin_lock_irqsave(&cctx->lock, flags); idr_remove(&cctx->ctx_idr, ctx->ctxid >> 4); spin_unlock_irqrestore(&cctx->lock, flags); @@ -612,6 +667,14 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( fastrpc_channel_ctx_get(cctx); ctx->crc = (u32 *)(uintptr_t)invoke->crc; + ctx->perf_dsp = (u64 *)(uintptr_t)invoke->perf_dsp; + ctx->perf_kernel = (u64 *)(uintptr_t)invoke->perf_kernel; + if (ctx->fl->profile) { + ctx->perf = kzalloc(sizeof(*(ctx->perf)), GFP_KERNEL); + if (!ctx->perf) + return ERR_PTR(-ENOMEM); + ctx->perf->tid = ctx->fl->tgid; + } ctx->sc = sc; ctx->retval = -1; ctx->pid = current->pid; @@ -875,7 +938,8 @@ static int fastrpc_get_meta_size(struct fastrpc_invoke_ctx *ctx) sizeof(struct 
fastrpc_invoke_buf) + sizeof(struct fastrpc_phy_page)) * ctx->nscalars + sizeof(u64) * FASTRPC_MAX_FDLIST + - sizeof(u32) * FASTRPC_MAX_CRCLIST; + sizeof(u32) * FASTRPC_MAX_CRCLIST + + sizeof(u32) + sizeof(u64) * FASTRPC_DSP_PERF_LIST; return size; } @@ -942,16 +1006,22 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx) int inbufs, i, oix, err = 0; u64 len, rlen, pkt_size; u64 pg_start, pg_end; + u64 *perf_counter = NULL; uintptr_t args; int metalen; + if (ctx->fl->profile) + perf_counter = (u64 *)ctx->perf + PERF_COUNT; + inbufs = REMOTE_SCALARS_INBUFS(ctx->sc); metalen = fastrpc_get_meta_size(ctx); pkt_size = fastrpc_get_payload_size(ctx, metalen); + PERF(ctx->fl->profile, GET_COUNTER(perf_counter, PERF_MAP), err = fastrpc_create_maps(ctx); if (err) return err; + PERF_END); ctx->msg_sz = pkt_size; @@ -983,6 +1053,7 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx) if (ctx->maps[i]) { struct vm_area_struct *vma = NULL; + PERF(ctx->fl->profile, GET_COUNTER(perf_counter, PERF_MAP), rpra[i].buf.pv = (u64) ctx->args[i].ptr; pages[i].addr = ctx->maps[i]->phys; @@ -997,9 +1068,9 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx) pg_end = ((ctx->args[i].ptr + len - 1) & PAGE_MASK) >> PAGE_SHIFT; pages[i].size = (pg_end - pg_start + 1) * PAGE_SIZE; - + PERF_END); } else { - + PERF(ctx->fl->profile, GET_COUNTER(perf_counter, PERF_COPY), if (ctx->olaps[oix].offset == 0) { rlen -= ALIGN(args, FASTRPC_ALIGN) - args; args = ALIGN(args, FASTRPC_ALIGN); @@ -1021,12 +1092,14 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx) pages[i].size = (pg_end - pg_start + 1) * PAGE_SIZE; args = args + mlen; rlen -= mlen; + PERF_END); } if (i < inbufs && !ctx->maps[i]) { void *dst = (void *)(uintptr_t)rpra[i].buf.pv; void *src = (void *)(uintptr_t)ctx->args[i].ptr; + PERF(ctx->fl->profile, GET_COUNTER(perf_counter, PERF_COPY), if (!kernel) { if (copy_from_user(dst, (void __user *)src, len)) { @@ -1036,6 +1109,7 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx) } else { memcpy(dst, src, len); } + PERF_END); } } @@ -1066,9 +1140,9 @@ static int fastrpc_put_args(struct fastrpc_invoke_ctx *ctx, struct fastrpc_map *mmap = NULL; struct fastrpc_invoke_buf *list; struct fastrpc_phy_page *pages; - u64 *fdlist; - u32 *crclist; - int i, inbufs, outbufs, handles; + u64 *fdlist, *perf_dsp_list; + u32 *crclist, *poll; + int i, inbufs, outbufs, handles, perferr; inbufs = REMOTE_SCALARS_INBUFS(ctx->sc); outbufs = REMOTE_SCALARS_OUTBUFS(ctx->sc); @@ -1077,6 +1151,8 @@ static int fastrpc_put_args(struct fastrpc_invoke_ctx *ctx, pages = fastrpc_phy_page_start(list, ctx->nscalars); fdlist = (u64 *)(pages + inbufs + outbufs + handles); crclist = (u32 *)(fdlist + FASTRPC_MAX_FDLIST); + poll = (u32 *)(crclist + FASTRPC_MAX_CRCLIST); + perf_dsp_list = (u64 *)(poll + 1); for (i = inbufs; i < ctx->nbufs; ++i) { if (!ctx->maps[i]) { @@ -1101,8 +1177,16 @@ static int fastrpc_put_args(struct fastrpc_invoke_ctx *ctx, } if (ctx->crc && crclist && rpra) { - if (copy_to_user((void __user *)ctx->crc, crclist, FASTRPC_MAX_CRCLIST * sizeof(u32))) + if (copy_to_user((void __user *)ctx->crc, crclist, + FASTRPC_MAX_CRCLIST * sizeof(u32))) { return -EFAULT; + } + } + if (ctx->perf_dsp && perf_dsp_list) { + perferr = copy_to_user((void __user *)ctx->perf_dsp, + perf_dsp_list, FASTRPC_DSP_PERF_LIST * sizeof(u64)); + if (perferr) + dev_info(fl->sctx->dev, "Warning: failed to copy perf data %d\n", perferr); } return 0; } @@ -1139,6 
+1223,20 @@ static int fastrpc_invoke_send(struct fastrpc_session_ctx *sctx, } +static void fastrpc_update_invoke_count(u32 handle, u64 *perf_counter, + struct timespec64 *invoket) +{ + u64 *invcount, *count; + + invcount = GET_COUNTER(perf_counter, PERF_INVOKE); + if (invcount) + *invcount += getnstimediff(invoket); + + count = GET_COUNTER(perf_counter, PERF_COUNT); + if (count) + *count += 1; +} + static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, struct fastrpc_enhanced_invoke *invoke) { @@ -1146,7 +1244,12 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, struct fastrpc_buf *buf, *b; struct fastrpc_invoke *inv = &invoke->inv; u32 handle, sc; - int err = 0; + u64 *perf_counter = NULL; + int err = 0, perferr = 0; + struct timespec64 invoket = {0}; + + if (fl->profile) + ktime_get_real_ts64(&invoket); if (!fl->sctx) return -EINVAL; @@ -1165,18 +1268,24 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, if (IS_ERR(ctx)) return PTR_ERR(ctx); + if (fl->profile) + perf_counter = (u64 *)ctx->perf + PERF_COUNT; + PERF(fl->profile, GET_COUNTER(perf_counter, PERF_GETARGS), if (ctx->nscalars) { err = fastrpc_get_args(kernel, ctx); if (err) goto bail; } + PERF_END); /* make sure that all CPU memory writes are seen by DSP */ dma_wmb(); + PERF(fl->profile, GET_COUNTER(perf_counter, PERF_LINK), /* Send invoke buffer to remote dsp */ err = fastrpc_invoke_send(fl->sctx, ctx, kernel, handle); if (err) goto bail; + PERF_END); if (kernel) { if (!wait_for_completion_timeout(&ctx->work, 10 * HZ)) @@ -1196,10 +1305,12 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, if (ctx->nscalars) { /* make sure that all memory writes by DSP are seen by CPU */ dma_rmb(); + PERF(fl->profile, GET_COUNTER(perf_counter, PERF_PUTARGS), /* populate all the output buffers with results */ err = fastrpc_put_args(ctx, kernel); if (err) goto bail; + PERF_END); } bail: @@ -1216,6 +1327,15 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, list_del(&buf->node); list_add_tail(&buf->node, &fl->cctx->invoke_interrupted_mmaps); } + } else if (ctx) { + if (fl->profile && !err) + fastrpc_update_invoke_count(handle, perf_counter, &invoket); + if (fl->profile && ctx->perf && ctx->perf_kernel) { + perferr = copy_to_user((void __user *)ctx->perf_kernel, + ctx->perf, FASTRPC_KERNEL_PERF_LIST * sizeof(u64)); + if (perferr) + dev_info(fl->sctx->dev, "Warning: failed to copy perf data %d\n", perferr); + } } if (err) @@ -1714,6 +1834,7 @@ static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) struct fastrpc_invoke_args *args = NULL; struct fastrpc_ioctl_multimode_invoke invoke; u32 nscalars; + u64 *perf_kernel; int err, i; if (copy_from_user(&invoke, argp, sizeof(invoke))) @@ -1748,6 +1869,9 @@ static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) return -EFAULT; } } + perf_kernel = (u64 *)(uintptr_t)einv.perf_kernel; + if (perf_kernel) + fl->profile = true; einv.inv.args = (__u64)args; err = fastrpc_internal_invoke(fl, false, &einv); kfree(args); diff --git a/include/uapi/misc/fastrpc.h b/include/uapi/misc/fastrpc.h index 88194c5..3a2ba59 100644 --- a/include/uapi/misc/fastrpc.h +++ b/include/uapi/misc/fastrpc.h @@ -166,4 +166,18 @@ struct fastrpc_ioctl_capability { __u32 reserved[4]; }; +enum fastrpc_perfkeys { + PERF_COUNT = 0, + PERF_RESERVED1 = 1, + PERF_MAP = 2, + PERF_COPY = 3, + PERF_LINK = 4, + PERF_GETARGS = 5, + PERF_PUTARGS = 6, + PERF_RESERVED2 = 7, + PERF_INVOKE = 8, 
+ PERF_RESERVED3 = 9, + PERF_KEY_MAX = 10, +}; + #endif /* __QCOM_FASTRPC_H__ */ From patchwork Wed Oct 18 07:02:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ekansh Gupta X-Patchwork-Id: 735728 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5D149CDB483 for ; Wed, 18 Oct 2023 07:03:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344634AbjJRHDY (ORCPT ); Wed, 18 Oct 2023 03:03:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50636 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235073AbjJRHDS (ORCPT ); Wed, 18 Oct 2023 03:03:18 -0400 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A7452138; Wed, 18 Oct 2023 00:03:04 -0700 (PDT) Received: from pps.filterd (m0279869.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 39I3x1AW016694; Wed, 18 Oct 2023 07:03:02 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=qcppdkim1; bh=rx6FW+24Mo6NqWGgS2mDznpnkXHuFSOKE1VD33tTvpQ=; b=AbM5HspVNf0DmOYMLX54KGdlit+PjPy/zxgKILwDO8pUKyi3Xk8E+NhUcYBiYgYTsqJm yfhMIgLlNQFjTnkmh6Qg83fTu9Llexfao0DmnhyfNZqkAWWI3t8gFZsomuSfM4ZAu6Mq Laq71yzwzhLbu2heiw9tqH7wZ1Hlid5UtRGddFJpZ1u28ttHqRip7m394mpEZ5D0wJbQ fDOxFlnU4cFRyfdjOy3rwRTjFPz+UzhrODUQPAmGTebQDzRmEogb+L6YK/Sd9rZ7krrr W2lQdv2JU6Yfkx49SWVluFyNzk61pSOTfYNMtm+eAPHn/vOh2RpxeDFBcc6zeM4xkuNp 7w== Received: from nalasppmta04.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3tsb7xkx0v-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 18 Oct 2023 07:03:02 +0000 Received: from nalasex01b.na.qualcomm.com (nalasex01b.na.qualcomm.com [10.47.209.197]) by NALASPPMTA04.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 39I731Bc022730 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 18 Oct 2023 07:03:01 GMT Received: from ekangupt-linux.qualcomm.com (10.80.80.8) by nalasex01b.na.qualcomm.com (10.47.209.197) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.39; Wed, 18 Oct 2023 00:02:58 -0700 From: Ekansh Gupta To: , CC: Ekansh Gupta , , Subject: [PATCH v5 4/5] misc: fastrpc: Add support to save and restore interrupted Date: Wed, 18 Oct 2023 12:32:39 +0530 Message-ID: <1697612560-9726-5-git-send-email-quic_ekangupt@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1697612560-9726-1-git-send-email-quic_ekangupt@quicinc.com> References: <1697612560-9726-1-git-send-email-quic_ekangupt@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01b.na.qualcomm.com (10.47.209.197) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: 805Gt3852qODVjTlAh1K1P92SydCA42r X-Proofpoint-GUID: 805Gt3852qODVjTlAh1K1P92SydCA42r X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 
definitions=2023-10-18_04,2023-10-17_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 mlxlogscore=999 suspectscore=0 lowpriorityscore=0 spamscore=0 bulkscore=0 mlxscore=0 phishscore=0 malwarescore=0 impostorscore=0 clxscore=1015 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2309180000 definitions=main-2310180058 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org For any remote call, driver sends a message to DSP using RPMSG framework. After message is sent, there is a wait on a completion object at driver which is completed when DSP response is received. There is a possibility that a signal is received while waiting causing the wait function to return -ERESTARTSYS. In this case the context should be saved and it should get restored for the next invocation for the thread. Adding changes to support saving and restoring of interrupted fastrpc contexts. Signed-off-by: Ekansh Gupta --- Changes in v2: - Fixed missing definition - Fixes compile time issue Changes in v5: - Removed Change-Id tag drivers/misc/fastrpc.c | 99 ++++++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 83 insertions(+), 16 deletions(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index b9822c1..9a481ac 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -333,6 +333,7 @@ struct fastrpc_user { struct list_head user; struct list_head maps; struct list_head pending; + struct list_head interrupted; struct list_head mmaps; struct fastrpc_channel_ctx *cctx; @@ -712,6 +713,40 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( return ERR_PTR(ret); } +static struct fastrpc_invoke_ctx *fastrpc_context_restore_interrupted( + struct fastrpc_user *fl, struct fastrpc_invoke *inv) +{ + struct fastrpc_invoke_ctx *ctx = NULL, *ictx = NULL, *n; + + spin_lock(&fl->lock); + list_for_each_entry_safe(ictx, n, &fl->interrupted, node) { + if (ictx->pid == current->pid) { + if (inv->sc != ictx->sc || ictx->fl != fl) { + dev_err(ictx->fl->sctx->dev, + "interrupted sc (0x%x) or fl (%pK) does not match with invoke sc (0x%x) or fl (%pK)\n", + ictx->sc, ictx->fl, inv->sc, fl); + spin_unlock(&fl->lock); + return ERR_PTR(-EINVAL); + } + ctx = ictx; + list_del(&ctx->node); + list_add_tail(&ctx->node, &fl->pending); + break; + } + } + spin_unlock(&fl->lock); + return ctx; +} + +static void fastrpc_context_save_interrupted( + struct fastrpc_invoke_ctx *ctx) +{ + spin_lock(&ctx->fl->lock); + list_del(&ctx->node); + list_add_tail(&ctx->node, &ctx->fl->interrupted); + spin_unlock(&ctx->fl->lock); +} + static struct sg_table * fastrpc_map_dma_buf(struct dma_buf_attachment *attachment, enum dma_data_direction dir) @@ -1264,6 +1299,14 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, return -EPERM; } + if (!kernel) { + ctx = fastrpc_context_restore_interrupted(fl, inv); + if (IS_ERR(ctx)) + return PTR_ERR(ctx); + if (ctx) + goto wait; + } + ctx = fastrpc_context_alloc(fl, kernel, sc, invoke); if (IS_ERR(ctx)) return PTR_ERR(ctx); @@ -1287,6 +1330,7 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, goto bail; PERF_END); +wait: if (kernel) { if (!wait_for_completion_timeout(&ctx->work, 10 * HZ)) err = -ETIMEDOUT; @@ -1323,6 +1367,9 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, } if (err == -ERESTARTSYS) { + if (ctx) + fastrpc_context_save_interrupted(ctx); + list_for_each_entry_safe(buf, b, &fl->mmaps, node) { 
list_del(&buf->node); list_add_tail(&buf->node, &fl->cctx->invoke_interrupted_mmaps); @@ -1444,7 +1491,7 @@ static int fastrpc_init_create_static_process(struct fastrpc_user *fl, ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_STATIC, 3, 0); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) @@ -1577,7 +1624,7 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE, 4, 0); if (init.attrs) ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_ATTR, 4, 0); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) @@ -1628,6 +1675,25 @@ static void fastrpc_session_free(struct fastrpc_channel_ctx *cctx, spin_unlock_irqrestore(&cctx->lock, flags); } +static void fastrpc_context_list_free(struct fastrpc_user *fl) +{ + struct fastrpc_invoke_ctx *ctx, *n; + + list_for_each_entry_safe(ctx, n, &fl->interrupted, node) { + spin_lock(&fl->lock); + list_del(&ctx->node); + spin_unlock(&fl->lock); + fastrpc_context_put(ctx); + } + + list_for_each_entry_safe(ctx, n, &fl->pending, node) { + spin_lock(&fl->lock); + list_del(&ctx->node); + spin_unlock(&fl->lock); + fastrpc_context_put(ctx); + } +} + static int fastrpc_release_current_dsp_process(struct fastrpc_user *fl) { struct fastrpc_invoke_args args[1]; @@ -1641,7 +1707,7 @@ static int fastrpc_release_current_dsp_process(struct fastrpc_user *fl) ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_RELEASE, 1, 0); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; return fastrpc_internal_invoke(fl, true, &ioctl); } @@ -1650,7 +1716,6 @@ static int fastrpc_device_release(struct inode *inode, struct file *file) { struct fastrpc_user *fl = (struct fastrpc_user *)file->private_data; struct fastrpc_channel_ctx *cctx = fl->cctx; - struct fastrpc_invoke_ctx *ctx, *n; struct fastrpc_map *map, *m; struct fastrpc_buf *buf, *b; unsigned long flags; @@ -1664,10 +1729,7 @@ static int fastrpc_device_release(struct inode *inode, struct file *file) if (fl->init_mem) fastrpc_buf_free(fl->init_mem); - list_for_each_entry_safe(ctx, n, &fl->pending, node) { - list_del(&ctx->node); - fastrpc_context_put(ctx); - } + fastrpc_context_list_free(fl); list_for_each_entry_safe(map, m, &fl->maps, node) fastrpc_map_put(map); @@ -1708,6 +1770,7 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp) spin_lock_init(&fl->lock); mutex_init(&fl->mutex); INIT_LIST_HEAD(&fl->pending); + INIT_LIST_HEAD(&fl->interrupted); INIT_LIST_HEAD(&fl->maps); INIT_LIST_HEAD(&fl->mmaps); INIT_LIST_HEAD(&fl->user); @@ -1789,7 +1852,7 @@ static int fastrpc_init_attach(struct fastrpc_user *fl, int pd) ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_ATTACH, 1, 0); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; return fastrpc_internal_invoke(fl, true, &ioctl); } @@ -1820,7 +1883,7 @@ static int fastrpc_invoke(struct fastrpc_user *fl, char __user *argp) } ioctl.inv = inv; - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, false, &ioctl); kfree(args); @@ -1872,7 +1935,7 @@ static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) perf_kernel = (u64 *)(uintptr_t)einv.perf_kernel; if (perf_kernel) fl->profile = true; - einv.inv.args = (__u64)args; + einv.inv.args = (u64)args; err = 
fastrpc_internal_invoke(fl, false, &einv); kfree(args); break; @@ -1902,7 +1965,7 @@ static int fastrpc_get_info_from_dsp(struct fastrpc_user *fl, uint32_t *dsp_attr ioctl.inv.handle = FASTRPC_DSP_UTILITIES_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(0, 1, 1); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; return fastrpc_internal_invoke(fl, true, &ioctl); } @@ -2005,7 +2068,7 @@ static int fastrpc_req_munmap_impl(struct fastrpc_user *fl, struct fastrpc_buf * ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MUNMAP, 1, 0); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, true, &ioctl); if (!err) { @@ -2103,7 +2166,7 @@ static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp) ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MMAP, 2, 1); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) { @@ -2184,7 +2247,7 @@ static int fastrpc_req_mem_unmap_impl(struct fastrpc_user *fl, struct fastrpc_me ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MEM_UNMAP, 1, 0); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, true, &ioctl); fastrpc_map_put(map); @@ -2253,7 +2316,7 @@ static int fastrpc_req_mem_map(struct fastrpc_user *fl, char __user *argp) ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MEM_MAP, 3, 1); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) { @@ -2574,6 +2637,10 @@ static void fastrpc_notify_users(struct fastrpc_user *user) ctx->retval = -EPIPE; complete(&ctx->work); } + list_for_each_entry(ctx, &user->interrupted, node) { + ctx->retval = -EPIPE; + complete(&ctx->work); + } spin_unlock(&user->lock); } From patchwork Wed Oct 18 07:02:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ekansh Gupta X-Patchwork-Id: 735256 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F381CCDB482 for ; Wed, 18 Oct 2023 07:03:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235026AbjJRHDb (ORCPT ); Wed, 18 Oct 2023 03:03:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49148 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235082AbjJRHDS (ORCPT ); Wed, 18 Oct 2023 03:03:18 -0400 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 28ED718B; Wed, 18 Oct 2023 00:03:07 -0700 (PDT) Received: from pps.filterd (m0279862.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 39I6h3a3005984; Wed, 18 Oct 2023 07:03:04 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=qcppdkim1; bh=KfDse0o37im4PvINpwe5KMZZsO/dvQxk6N0e0xEYiHE=; b=CECcd8DL2nt66r1uJEKYqtGMsKT5a+YuqisfQ2IQMG8z7DEysZgVXUfz7+VgTxntRv4u zsJiK7YK1x2J+WpgrToMbbJr1cGMAvTzVeYQo8bWr8TlaMjX3cTXy9T9EMTG7ZpCFr+q 
Xy0T6LYUbSX7bj4ifW9+jUhFp/RxDdywPs/iQKzpMQrH1+a3dPohKNCsYVWI8cgeGTHb oVgLOzTpl01a11RnxZ/FysF0L6dm4tqRx5hhi+UJI5bBHTofWKLB+On0QZ32OgnWihiy NZ6Nl0ngA7YblgZKUCHvj6kwujO2JNR2ZWr/LheG2gR2Dnxf7B45txNQYOmPRLhU2Xbc QA== Received: from nalasppmta02.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3tt14011ju-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 18 Oct 2023 07:03:04 +0000 Received: from nalasex01b.na.qualcomm.com (nalasex01b.na.qualcomm.com [10.47.209.197]) by NALASPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 39I733Zj021193 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 18 Oct 2023 07:03:03 GMT Received: from ekangupt-linux.qualcomm.com (10.80.80.8) by nalasex01b.na.qualcomm.com (10.47.209.197) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.39; Wed, 18 Oct 2023 00:03:00 -0700 From: Ekansh Gupta To: , CC: Ekansh Gupta , , Subject: [PATCH v5 5/5] misc: fastrpc: Add support to allocate shared context bank Date: Wed, 18 Oct 2023 12:32:40 +0530 Message-ID: <1697612560-9726-6-git-send-email-quic_ekangupt@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1697612560-9726-1-git-send-email-quic_ekangupt@quicinc.com> References: <1697612560-9726-1-git-send-email-quic_ekangupt@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01b.na.qualcomm.com (10.47.209.197) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: R3rUJ7b2EhMVvlvf9CLw8G31c6kYjfnY X-Proofpoint-ORIG-GUID: R3rUJ7b2EhMVvlvf9CLw8G31c6kYjfnY X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-10-18_04,2023-10-17_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 mlxlogscore=999 mlxscore=0 bulkscore=0 lowpriorityscore=0 clxscore=1015 priorityscore=1501 suspectscore=0 phishscore=0 adultscore=0 impostorscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2309180000 definitions=main-2310180058 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Context banks could be set as a shared one using a DT propery "qcom,nsessions". The property takes the number of session to be created of the context bank. This change provides a control mechanism for user to use shared context banks for light weight processes. The session is set as shared while its creation and if a user requests for shared context bank, the same will be allocated during process initialization. 
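For illustration, a user-space sketch of requesting a shared context bank is given below. Only enum fastrpc_control_type is exported through the UAPI header in this patch while struct fastrpc_internal_control stays driver-internal, so the mirrored payload layout, and issuing the control before FASTRPC_IOCTL_INIT_CREATE or FASTRPC_IOCTL_INIT_ATTACH, are assumptions based on where fastrpc_session_alloc() is called in this change.

#include <stdint.h>
#include <sys/ioctl.h>
#include <misc/fastrpc.h>

/*
 * Hypothetical user-space mirror of the driver-internal control payload
 * (struct fastrpc_internal_control); the field layout is an assumption
 * derived from this patch.
 */
struct fastrpc_user_ctrl_smmu {
	uint32_t sharedcb;		/* 1 = request a shared context bank */
};

struct fastrpc_user_control {
	uint32_t req;			/* one of enum fastrpc_control_type */
	struct fastrpc_user_ctrl_smmu smmu;
};

static int request_shared_cb(int fd)
{
	struct fastrpc_user_control ctrl = {
		.req = FASTRPC_CONTROL_SMMU,
		.smmu = { .sharedcb = 1 },
	};
	struct fastrpc_ioctl_multimode_invoke minv = { 0 };

	minv.req = FASTRPC_INVOKE_CONTROL;
	minv.invparam = (uint64_t)(uintptr_t)&ctrl;
	minv.size = sizeof(ctrl);

	/*
	 * Issue this before creating or attaching the remote process, since
	 * fl->sharedcb is only consulted when the session is allocated in
	 * fastrpc_init_create_process()/fastrpc_init_attach().
	 */
	return ioctl(fd, FASTRPC_IOCTL_MULTIMODE_INVOKE, &minv);
}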
Signed-off-by: Ekansh Gupta --- drivers/misc/fastrpc.c | 122 ++++++++++++++++++++++++++++++-------------- include/uapi/misc/fastrpc.h | 12 +++++ 2 files changed, 95 insertions(+), 39 deletions(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index 9a481ac..b6b1884c 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -297,6 +297,7 @@ struct fastrpc_session_ctx { int sid; bool used; bool valid; + bool sharedcb; }; struct fastrpc_channel_ctx { @@ -344,12 +345,22 @@ struct fastrpc_user { int tgid; int pd; bool is_secure_dev; + bool sharedcb; /* Lock for lists */ spinlock_t lock; /* lock for allocations */ struct mutex mutex; }; +struct fastrpc_ctrl_smmu { + u32 sharedcb; /* Set to SMMU share context bank */ +}; + +struct fastrpc_internal_control { + u32 req; + struct fastrpc_ctrl_smmu smmu; +}; + static inline int64_t getnstimediff(struct timespec64 *start) { int64_t ns; @@ -851,6 +862,37 @@ static const struct dma_buf_ops fastrpc_dma_buf_ops = { .release = fastrpc_release, }; +static struct fastrpc_session_ctx *fastrpc_session_alloc( + struct fastrpc_channel_ctx *cctx, bool sharedcb) +{ + struct fastrpc_session_ctx *session = NULL; + unsigned long flags; + int i; + + spin_lock_irqsave(&cctx->lock, flags); + for (i = 0; i < cctx->sesscount; i++) { + if (!cctx->session[i].used && cctx->session[i].valid && + cctx->session[i].sharedcb == sharedcb) { + cctx->session[i].used = true; + session = &cctx->session[i]; + break; + } + } + spin_unlock_irqrestore(&cctx->lock, flags); + + return session; +} + +static void fastrpc_session_free(struct fastrpc_channel_ctx *cctx, + struct fastrpc_session_ctx *session) +{ + unsigned long flags; + + spin_lock_irqsave(&cctx->lock, flags); + session->used = false; + spin_unlock_irqrestore(&cctx->lock, flags); +} + static int fastrpc_map_create(struct fastrpc_user *fl, int fd, u64 len, u32 attr, struct fastrpc_map **ppmap) { @@ -1449,6 +1491,10 @@ static int fastrpc_init_create_static_process(struct fastrpc_user *fl, goto err_name; } + fl->sctx = fastrpc_session_alloc(fl->cctx, fl->sharedcb); + if (!fl->sctx) + return -EBUSY; + if (!fl->cctx->remote_heap) { err = fastrpc_remote_heap_alloc(fl, fl->sctx->dev, init.memlen, &fl->cctx->remote_heap); @@ -1571,6 +1617,10 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, goto err; } + fl->sctx = fastrpc_session_alloc(fl->cctx, fl->sharedcb); + if (!fl->sctx) + return -EBUSY; + inbuf.pgid = fl->tgid; inbuf.namelen = strlen(current->comm) + 1; inbuf.filelen = init.filelen; @@ -1645,36 +1695,6 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, return err; } -static struct fastrpc_session_ctx *fastrpc_session_alloc( - struct fastrpc_channel_ctx *cctx) -{ - struct fastrpc_session_ctx *session = NULL; - unsigned long flags; - int i; - - spin_lock_irqsave(&cctx->lock, flags); - for (i = 0; i < cctx->sesscount; i++) { - if (!cctx->session[i].used && cctx->session[i].valid) { - cctx->session[i].used = true; - session = &cctx->session[i]; - break; - } - } - spin_unlock_irqrestore(&cctx->lock, flags); - - return session; -} - -static void fastrpc_session_free(struct fastrpc_channel_ctx *cctx, - struct fastrpc_session_ctx *session) -{ - unsigned long flags; - - spin_lock_irqsave(&cctx->lock, flags); - session->used = false; - spin_unlock_irqrestore(&cctx->lock, flags); -} - static void fastrpc_context_list_free(struct fastrpc_user *fl) { struct fastrpc_invoke_ctx *ctx, *n; @@ -1778,15 +1798,6 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp) fl->cctx 
= cctx; fl->is_secure_dev = fdevice->secure; - fl->sctx = fastrpc_session_alloc(cctx); - if (!fl->sctx) { - dev_err(&cctx->rpdev->dev, "No session available\n"); - mutex_destroy(&fl->mutex); - kfree(fl); - - return -EBUSY; - } - spin_lock_irqsave(&cctx->lock, flags); list_add_tail(&fl->user, &cctx->users); spin_unlock_irqrestore(&cctx->lock, flags); @@ -1845,6 +1856,10 @@ static int fastrpc_init_attach(struct fastrpc_user *fl, int pd) struct fastrpc_enhanced_invoke ioctl; int tgid = fl->tgid; + fl->sctx = fastrpc_session_alloc(fl->cctx, fl->sharedcb); + if (!fl->sctx) + return -EBUSY; + args[0].ptr = (u64)(uintptr_t) &tgid; args[0].length = sizeof(tgid); args[0].fd = -1; @@ -1891,11 +1906,33 @@ static int fastrpc_invoke(struct fastrpc_user *fl, char __user *argp) return err; } +static int fastrpc_internal_control(struct fastrpc_user *fl, + struct fastrpc_internal_control *cp) +{ + int err = 0; + + if (!fl) + return -EBADF; + if (!cp) + return -EINVAL; + + switch (cp->req) { + case FASTRPC_CONTROL_SMMU: + fl->sharedcb = cp->smmu.sharedcb; + break; + default: + err = -EBADRQC; + break; + } + return err; +} + static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) { struct fastrpc_enhanced_invoke einv; struct fastrpc_invoke_args *args = NULL; struct fastrpc_ioctl_multimode_invoke invoke; + struct fastrpc_internal_control cp = {0}; u32 nscalars; u64 *perf_kernel; int err, i; @@ -1939,6 +1976,12 @@ static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) err = fastrpc_internal_invoke(fl, false, &einv); kfree(args); break; + case FASTRPC_INVOKE_CONTROL: + if (copy_from_user(&cp, (void __user *)(uintptr_t)invoke.invparam, sizeof(cp))) + return -EFAULT; + + err = fastrpc_internal_control(fl, &cp); + break; default: err = -ENOTTY; break; @@ -2439,6 +2482,7 @@ static int fastrpc_cb_probe(struct platform_device *pdev) if (sessions > 0) { struct fastrpc_session_ctx *dup_sess; + sess->sharedcb = true; for (i = 1; i < sessions; i++) { if (cctx->sesscount >= FASTRPC_MAX_SESSIONS) break; diff --git a/include/uapi/misc/fastrpc.h b/include/uapi/misc/fastrpc.h index 3a2ba59..76dc5df 100644 --- a/include/uapi/misc/fastrpc.h +++ b/include/uapi/misc/fastrpc.h @@ -166,6 +166,18 @@ struct fastrpc_ioctl_capability { __u32 reserved[4]; }; +enum fastrpc_control_type { + FASTRPC_CONTROL_LATENCY = 1, + FASTRPC_CONTROL_SMMU = 2, + FASTRPC_CONTROL_KALLOC = 3, + FASTRPC_CONTROL_WAKELOCK = 4, + FASTRPC_CONTROL_PM = 5, + FASTRPC_CONTROL_DSPPROCESS_CLEAN = 6, + FASTRPC_CONTROL_RPC_POLL = 7, + FASTRPC_CONTROL_ASYNC_WAKE = 8, + FASTRPC_CONTROL_NOTIF_WAKE = 9, +}; + enum fastrpc_perfkeys { PERF_COUNT = 0, PERF_RESERVED1 = 1,