From patchwork Thu Oct 26 08:17:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ekansh Gupta X-Patchwork-Id: 738398 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D44A6C25B48 for ; Thu, 26 Oct 2023 08:18:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344542AbjJZIST (ORCPT ); Thu, 26 Oct 2023 04:18:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51482 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231304AbjJZISS (ORCPT ); Thu, 26 Oct 2023 04:18:18 -0400 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CD768CE; Thu, 26 Oct 2023 01:18:14 -0700 (PDT) Received: from pps.filterd (m0279872.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 39Q7Stw3016799; Thu, 26 Oct 2023 08:18:12 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=qcppdkim1; bh=7dhbu1/TaVZ4FkFFPUGAHtpdXIopcfL0P/UkUNCUDQE=; b=U++qfikom3l5m5aCGGnDdJkH4XMiffO8fJh3qxiapE2qVT+zh/dpdvcAjm/mUPG4eR99 e9POVmLrEsNl6FrEkjpdj7yAsCabwUSHU96B/IwHm6kWRfaiVF7/axn2cR8vx6bdM2Ix xr2ta6majtxIqtKN4obbafN451kR6dwj8dc0Ss5BEzRsd+x4yTJiDEiUcXJXuccWGB/M mX3lBgn70qF8ZL4kBBrcJ/OKJyIv4T84Z3oNnozUpmzBRxscSlj/uhbeXafKOsrVszMQ 2jBQxRZrdsfPm9262VKxIujPYx7zOa/ZPFNM++y4bBVHPlX4kYF7X8V2iwxdEQCkyuLo JQ== Received: from nalasppmta02.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3tye3trsg5-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 26 Oct 2023 08:18:12 +0000 Received: from nalasex01b.na.qualcomm.com (nalasex01b.na.qualcomm.com [10.47.209.197]) by NALASPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 39Q8IAi8023622 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 26 Oct 2023 08:18:10 GMT Received: from ekangupt-linux.qualcomm.com (10.80.80.8) by nalasex01b.na.qualcomm.com (10.47.209.197) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.39; Thu, 26 Oct 2023 01:18:08 -0700 From: Ekansh Gupta To: , CC: , Subject: [PATCH v6 1/5] misc: fastrpc: Add fastrpc multimode invoke request support Date: Thu, 26 Oct 2023 13:47:58 +0530 Message-ID: <1698308282-8648-2-git-send-email-quic_ekangupt@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1698308282-8648-1-git-send-email-quic_ekangupt@quicinc.com> References: <1698308282-8648-1-git-send-email-quic_ekangupt@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01b.na.qualcomm.com (10.47.209.197) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: XV75A7ZR36R3u5gM9lbgmcwERNszV2CS X-Proofpoint-ORIG-GUID: XV75A7ZR36R3u5gM9lbgmcwERNszV2CS X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-10-26_04,2023-10-25_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound 
score=0 priorityscore=1501 lowpriorityscore=0 suspectscore=0 spamscore=0 phishscore=0 malwarescore=0 impostorscore=0 mlxlogscore=999 adultscore=0 bulkscore=0 clxscore=1015 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2310170001 definitions=main-2310260068 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Multimode invocation request is intended to support multiple different type of requests. This will include enhanced invoke request to support CRC check and performance counter enablement. This will also support few driver level user controllable mechanisms like usage of shared context banks, wakelock support, etc. This IOCTL is also added with the aim to support few new fastrpc features like DSP PD notification framework, DSP Signalling mechanism etc. Signed-off-by: Ekansh Gupta --- Changes in v4: - Added padding member to IOCTL structure - Added checks for reserved members Changes in v6: - Added correct format of comments drivers/misc/fastrpc.c | 160 ++++++++++++++++++++++++++++++++------------ include/uapi/misc/fastrpc.h | 26 +++++++ 2 files changed, 145 insertions(+), 41 deletions(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index a66b7c1..e392e2a 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -573,7 +573,7 @@ static void fastrpc_get_buff_overlaps(struct fastrpc_invoke_ctx *ctx) static struct fastrpc_invoke_ctx *fastrpc_context_alloc( struct fastrpc_user *user, u32 kernel, u32 sc, - struct fastrpc_invoke_args *args) + struct fastrpc_enhanced_invoke *invoke) { struct fastrpc_channel_ctx *cctx = user->cctx; struct fastrpc_invoke_ctx *ctx = NULL; @@ -604,7 +604,7 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( kfree(ctx); return ERR_PTR(-ENOMEM); } - ctx->args = args; + ctx->args = (struct fastrpc_invoke_args *)invoke->inv.args; fastrpc_get_buff_overlaps(ctx); } @@ -1133,12 +1133,12 @@ static int fastrpc_invoke_send(struct fastrpc_session_ctx *sctx, } static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, - u32 handle, u32 sc, - struct fastrpc_invoke_args *args) + struct fastrpc_enhanced_invoke *invoke) { struct fastrpc_invoke_ctx *ctx = NULL; struct fastrpc_buf *buf, *b; - + struct fastrpc_invoke *inv = &invoke->inv; + u32 handle, sc; int err = 0; if (!fl->sctx) @@ -1147,12 +1147,14 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, if (!fl->cctx->rpdev) return -EPIPE; + handle = inv->handle; + sc = inv->sc; if (handle == FASTRPC_INIT_HANDLE && !kernel) { dev_warn_ratelimited(fl->sctx->dev, "user app trying to send a kernel RPC message (%d)\n", handle); return -EPERM; } - ctx = fastrpc_context_alloc(fl, kernel, sc, args); + ctx = fastrpc_context_alloc(fl, kernel, sc, invoke); if (IS_ERR(ctx)) return PTR_ERR(ctx); @@ -1238,6 +1240,7 @@ static int fastrpc_init_create_static_process(struct fastrpc_user *fl, { struct fastrpc_init_create_static init; struct fastrpc_invoke_args *args; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_phy_page pages[1]; char *name; int err; @@ -1246,7 +1249,6 @@ static int fastrpc_init_create_static_process(struct fastrpc_user *fl, u32 namelen; u32 pageslen; } inbuf; - u32 sc; args = kcalloc(FASTRPC_CREATE_STATIC_PROCESS_NARGS, sizeof(*args), GFP_KERNEL); if (!args) @@ -1313,10 +1315,11 @@ static int fastrpc_init_create_static_process(struct fastrpc_user *fl, args[2].length = sizeof(*pages); args[2].fd = -1; - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_STATIC, 3, 0); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + 
ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_STATIC, 3, 0); + ioctl.inv.args = (__u64)args; - err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, - sc, args); + err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) goto err_invoke; @@ -1356,6 +1359,7 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, { struct fastrpc_init_create init; struct fastrpc_invoke_args *args; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_phy_page pages[1]; struct fastrpc_map *map = NULL; struct fastrpc_buf *imem = NULL; @@ -1369,7 +1373,6 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, u32 attrs; u32 siglen; } inbuf; - u32 sc; bool unsigned_module = false; args = kcalloc(FASTRPC_CREATE_PROCESS_NARGS, sizeof(*args), GFP_KERNEL); @@ -1443,12 +1446,13 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, args[5].length = sizeof(inbuf.siglen); args[5].fd = -1; - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE, 4, 0); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE, 4, 0); if (init.attrs) - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_ATTR, 4, 0); + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_ATTR, 4, 0); + ioctl.inv.args = (__u64)args; - err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, - sc, args); + err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) goto err_invoke; @@ -1500,17 +1504,19 @@ static void fastrpc_session_free(struct fastrpc_channel_ctx *cctx, static int fastrpc_release_current_dsp_process(struct fastrpc_user *fl) { struct fastrpc_invoke_args args[1]; + struct fastrpc_enhanced_invoke ioctl; int tgid = 0; - u32 sc; tgid = fl->tgid; args[0].ptr = (u64)(uintptr_t) &tgid; args[0].length = sizeof(tgid); args[0].fd = -1; - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_RELEASE, 1, 0); - return fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, - sc, &args[0]); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_RELEASE, 1, 0); + ioctl.inv.args = (__u64)args; + + return fastrpc_internal_invoke(fl, true, &ioctl); } static int fastrpc_device_release(struct inode *inode, struct file *file) @@ -1646,22 +1652,25 @@ static int fastrpc_dmabuf_alloc(struct fastrpc_user *fl, char __user *argp) static int fastrpc_init_attach(struct fastrpc_user *fl, int pd) { struct fastrpc_invoke_args args[1]; + struct fastrpc_enhanced_invoke ioctl; int tgid = fl->tgid; - u32 sc; args[0].ptr = (u64)(uintptr_t) &tgid; args[0].length = sizeof(tgid); args[0].fd = -1; - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_ATTACH, 1, 0); fl->pd = pd; - return fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, - sc, &args[0]); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_ATTACH, 1, 0); + ioctl.inv.args = (__u64)args; + + return fastrpc_internal_invoke(fl, true, &ioctl); } static int fastrpc_invoke(struct fastrpc_user *fl, char __user *argp) { struct fastrpc_invoke_args *args = NULL; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_invoke inv; u32 nscalars; int err; @@ -1683,16 +1692,70 @@ static int fastrpc_invoke(struct fastrpc_user *fl, char __user *argp) } } - err = fastrpc_internal_invoke(fl, false, inv.handle, inv.sc, args); + ioctl.inv = inv; + ioctl.inv.args = (__u64)args; + + err = fastrpc_internal_invoke(fl, false, &ioctl); kfree(args); return err; } +static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) +{ + struct fastrpc_enhanced_invoke einv; + struct 
fastrpc_invoke_args *args = NULL; + struct fastrpc_ioctl_multimode_invoke invoke; + u32 nscalars; + int err, i; + + if (copy_from_user(&invoke, argp, sizeof(invoke))) + return -EFAULT; + + for (i = 0; i < 8; i++) { + if (invoke.reserved[i] != 0) + return -EINVAL; + } + if (invoke.rsvd != 0) + return -EINVAL; + + switch (invoke.req) { + case FASTRPC_INVOKE: + /* nscalars is truncated here to max supported value */ + if (copy_from_user(&einv, (void __user *)(uintptr_t)invoke.invparam, + invoke.size)) + return -EFAULT; + for (i = 0; i < 8; i++) { + if (einv.reserved[i] != 0) + return -EINVAL; + } + nscalars = REMOTE_SCALARS_LENGTH(einv.inv.sc); + if (nscalars) { + args = kcalloc(nscalars, sizeof(*args), GFP_KERNEL); + if (!args) + return -ENOMEM; + if (copy_from_user(args, (void __user *)(uintptr_t)einv.inv.args, + nscalars * sizeof(*args))) { + kfree(args); + return -EFAULT; + } + } + einv.inv.args = (__u64)args; + err = fastrpc_internal_invoke(fl, false, &einv); + kfree(args); + break; + default: + err = -ENOTTY; + break; + } + return err; +} + static int fastrpc_get_info_from_dsp(struct fastrpc_user *fl, uint32_t *dsp_attr_buf, uint32_t dsp_attr_buf_len) { struct fastrpc_invoke_args args[2] = { 0 }; + struct fastrpc_enhanced_invoke ioctl; /* Capability filled in userspace */ dsp_attr_buf[0] = 0; @@ -1705,8 +1768,11 @@ static int fastrpc_get_info_from_dsp(struct fastrpc_user *fl, uint32_t *dsp_attr args[1].fd = -1; fl->pd = USER_PD; - return fastrpc_internal_invoke(fl, true, FASTRPC_DSP_UTILITIES_HANDLE, - FASTRPC_SCALARS(0, 1, 1), args); + ioctl.inv.handle = FASTRPC_DSP_UTILITIES_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(0, 1, 1); + ioctl.inv.args = (__u64)args; + + return fastrpc_internal_invoke(fl, true, &ioctl); } static int fastrpc_get_info_from_kernel(struct fastrpc_ioctl_capability *cap, @@ -1793,10 +1859,10 @@ static int fastrpc_get_dsp_info(struct fastrpc_user *fl, char __user *argp) static int fastrpc_req_munmap_impl(struct fastrpc_user *fl, struct fastrpc_buf *buf) { struct fastrpc_invoke_args args[1] = { [0] = { 0 } }; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_munmap_req_msg req_msg; struct device *dev = fl->sctx->dev; int err; - u32 sc; req_msg.pgid = fl->tgid; req_msg.size = buf->size; @@ -1805,9 +1871,11 @@ static int fastrpc_req_munmap_impl(struct fastrpc_user *fl, struct fastrpc_buf * args[0].ptr = (u64) (uintptr_t) &req_msg; args[0].length = sizeof(req_msg); - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MUNMAP, 1, 0); - err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, sc, - &args[0]); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MUNMAP, 1, 0); + ioctl.inv.args = (__u64)args; + + err = fastrpc_internal_invoke(fl, true, &ioctl); if (!err) { dev_dbg(dev, "unmmap\tpt 0x%09lx OK\n", buf->raddr); spin_lock(&fl->lock); @@ -1851,6 +1919,7 @@ static int fastrpc_req_munmap(struct fastrpc_user *fl, char __user *argp) static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp) { struct fastrpc_invoke_args args[3] = { [0 ... 
2] = { 0 } }; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_buf *buf = NULL; struct fastrpc_mmap_req_msg req_msg; struct fastrpc_mmap_rsp_msg rsp_msg; @@ -1858,7 +1927,6 @@ static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp) struct fastrpc_req_mmap req; struct device *dev = fl->sctx->dev; int err; - u32 sc; if (copy_from_user(&req, argp, sizeof(req))) return -EFAULT; @@ -1901,9 +1969,11 @@ static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp) args[2].ptr = (u64) (uintptr_t) &rsp_msg; args[2].length = sizeof(rsp_msg); - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MMAP, 2, 1); - err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, sc, - &args[0]); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MMAP, 2, 1); + ioctl.inv.args = (__u64)args; + + err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) { dev_err(dev, "mmap error (len 0x%08llx)\n", buf->size); goto err_invoke; @@ -1951,10 +2021,10 @@ static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp) static int fastrpc_req_mem_unmap_impl(struct fastrpc_user *fl, struct fastrpc_mem_unmap *req) { struct fastrpc_invoke_args args[1] = { [0] = { 0 } }; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_map *map = NULL, *iter, *m; struct fastrpc_mem_unmap_req_msg req_msg = { 0 }; int err = 0; - u32 sc; struct device *dev = fl->sctx->dev; spin_lock(&fl->lock); @@ -1980,9 +2050,11 @@ static int fastrpc_req_mem_unmap_impl(struct fastrpc_user *fl, struct fastrpc_me args[0].ptr = (u64) (uintptr_t) &req_msg; args[0].length = sizeof(req_msg); - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MEM_UNMAP, 1, 0); - err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, sc, - &args[0]); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MEM_UNMAP, 1, 0); + ioctl.inv.args = (__u64)args; + + err = fastrpc_internal_invoke(fl, true, &ioctl); fastrpc_map_put(map); if (err) dev_err(dev, "unmmap\tpt fd = %d, 0x%09llx error\n", map->fd, map->raddr); @@ -2003,6 +2075,7 @@ static int fastrpc_req_mem_unmap(struct fastrpc_user *fl, char __user *argp) static int fastrpc_req_mem_map(struct fastrpc_user *fl, char __user *argp) { struct fastrpc_invoke_args args[4] = { [0 ... 
3] = { 0 } }; + struct fastrpc_enhanced_invoke ioctl; struct fastrpc_mem_map_req_msg req_msg = { 0 }; struct fastrpc_mmap_rsp_msg rsp_msg = { 0 }; struct fastrpc_mem_unmap req_unmap = { 0 }; @@ -2011,7 +2084,6 @@ static int fastrpc_req_mem_map(struct fastrpc_user *fl, char __user *argp) struct device *dev = fl->sctx->dev; struct fastrpc_map *map = NULL; int err; - u32 sc; if (copy_from_user(&req, argp, sizeof(req))) return -EFAULT; @@ -2047,8 +2119,11 @@ static int fastrpc_req_mem_map(struct fastrpc_user *fl, char __user *argp) args[3].ptr = (u64) (uintptr_t) &rsp_msg; args[3].length = sizeof(rsp_msg); - sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MEM_MAP, 3, 1); - err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, sc, &args[0]); + ioctl.inv.handle = FASTRPC_INIT_HANDLE; + ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MEM_MAP, 3, 1); + ioctl.inv.args = (__u64)args; + + err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) { dev_err(dev, "mem mmap error, fd %d, vaddr %llx, size %lld\n", req.fd, req.vaddrin, map->size); @@ -2088,6 +2163,9 @@ static long fastrpc_device_ioctl(struct file *file, unsigned int cmd, case FASTRPC_IOCTL_INVOKE: err = fastrpc_invoke(fl, argp); break; + case FASTRPC_IOCTL_MULTIMODE_INVOKE: + err = fastrpc_multimode_invoke(fl, argp); + break; case FASTRPC_IOCTL_INIT_ATTACH: err = fastrpc_init_attach(fl, ROOT_PD); break; diff --git a/include/uapi/misc/fastrpc.h b/include/uapi/misc/fastrpc.h index f33d914..45c15be 100644 --- a/include/uapi/misc/fastrpc.h +++ b/include/uapi/misc/fastrpc.h @@ -16,6 +16,7 @@ #define FASTRPC_IOCTL_INIT_CREATE_STATIC _IOWR('R', 9, struct fastrpc_init_create_static) #define FASTRPC_IOCTL_MEM_MAP _IOWR('R', 10, struct fastrpc_mem_map) #define FASTRPC_IOCTL_MEM_UNMAP _IOWR('R', 11, struct fastrpc_mem_unmap) +#define FASTRPC_IOCTL_MULTIMODE_INVOKE _IOWR('R', 12, struct fastrpc_ioctl_multimode_invoke) #define FASTRPC_IOCTL_GET_DSP_INFO _IOWR('R', 13, struct fastrpc_ioctl_capability) /** @@ -80,6 +81,31 @@ struct fastrpc_invoke { __u64 args; }; +struct fastrpc_enhanced_invoke { + struct fastrpc_invoke inv; + __u64 crc; + __u64 perf_kernel; + __u64 perf_dsp; + __u32 reserved[8]; /* keeping reserved bits for new requirements */ +}; + +struct fastrpc_ioctl_multimode_invoke { + __u32 req; + __u32 rsvd; /* padding field */ + __u64 invparam; + __u64 size; + __u32 reserved[8]; /* keeping reserved bits for new requirements */ +}; + +enum fastrpc_multimode_invoke_type { + FASTRPC_INVOKE = 1, + FASTRPC_INVOKE_ENHANCED = 2, + FASTRPC_INVOKE_CONTROL = 3, + FASTRPC_INVOKE_DSPSIGNAL = 4, + FASTRPC_INVOKE_NOTIF = 5, + FASTRPC_INVOKE_MULTISESSION = 6, +}; + struct fastrpc_init_create { __u32 filelen; /* elf file length */ __s32 filefd; /* fd for the file */ From patchwork Thu Oct 26 08:17:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ekansh Gupta X-Patchwork-Id: 738759 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A2F18C25B6F for ; Thu, 26 Oct 2023 08:18:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344550AbjJZIST (ORCPT ); Thu, 26 Oct 2023 04:18:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51484 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230283AbjJZISR (ORCPT ); Thu, 26 Oct 2023 
04:18:17 -0400 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E4A26DE; Thu, 26 Oct 2023 01:18:15 -0700 (PDT) Received: from pps.filterd (m0279873.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 39Q84QkD005464; Thu, 26 Oct 2023 08:18:14 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=qcppdkim1; bh=hsibUURuaOSyADWg6+k+o19Ya1zYqRn0cGAJHu9TGow=; b=bOlQ0wH8sduyri08tHaW3tjayKiPU4p7/byZYApA8L0En5eReb31Eil3CVDXDS8T5Spp p/n/yuapNsYHgFJf87TivWd4LJrhgHSSLxcj+wSagdAdwTCRKrAA0IIzeDhEQIz9jW0d nCQw+iau/GMK30P8rkF66Esgqyt9DGmkTx7fk3H7ktnaasRQadOU7WOAhDyOzVRQoSBK fADOedQSgkzWNApoiwsX3zKPo5VX8ZjNOYpp3hzK+VPyUnS1xh4/qeXJqEjuEvVGV/vY TRJpUfe8jFVSZtSqGwMSrRkWNagh5aVlxc1gphGWbAtOnKG2MyICKQB55fsaqNMncy/q 7g== Received: from nalasppmta05.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3txtw1k4rj-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 26 Oct 2023 08:18:13 +0000 Received: from nalasex01b.na.qualcomm.com (nalasex01b.na.qualcomm.com [10.47.209.197]) by NALASPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 39Q8ICIs017837 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 26 Oct 2023 08:18:12 GMT Received: from ekangupt-linux.qualcomm.com (10.80.80.8) by nalasex01b.na.qualcomm.com (10.47.209.197) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.39; Thu, 26 Oct 2023 01:18:10 -0700 From: Ekansh Gupta To: , CC: , Subject: [PATCH v6 2/5] misc: fastrpc: Add CRC support for remote buffers Date: Thu, 26 Oct 2023 13:47:59 +0530 Message-ID: <1698308282-8648-3-git-send-email-quic_ekangupt@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1698308282-8648-1-git-send-email-quic_ekangupt@quicinc.com> References: <1698308282-8648-1-git-send-email-quic_ekangupt@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01b.na.qualcomm.com (10.47.209.197) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: XRz_8GQ4-u_bW3nMSRy8ihqkK7RgC1WQ X-Proofpoint-GUID: XRz_8GQ4-u_bW3nMSRy8ihqkK7RgC1WQ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-10-26_05,2023-10-25_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 malwarescore=0 priorityscore=1501 adultscore=0 suspectscore=0 bulkscore=0 mlxlogscore=999 lowpriorityscore=0 clxscore=1015 impostorscore=0 mlxscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2310170001 definitions=main-2310260068 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org CRC check for input and output argument helps in ensuring data consistency over a remote call. If user intends to enable CRC check, first local user CRC is calculated at user end and a CRC buffer is passed to DSP to capture remote CRC values. DSP is expected to write to the remote CRC buffer which is then compared at user level with the local CRC values. 
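For reference, a minimal userspace sketch of the intended flow (illustrative only, not part of this patch): the caller passes a CRC buffer through the crc field of struct fastrpc_enhanced_invoke and, after the call returns, compares the values the DSP wrote back with its locally computed CRCs. The header include path and the 64-entry buffer size (mirroring the driver-internal FASTRPC_MAX_CRCLIST) are assumptions here; handle, sc and args are placeholders and error handling is minimal.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <misc/fastrpc.h>

#define CRC_ENTRIES 64  /* assumed to match FASTRPC_MAX_CRCLIST in the driver */

/* Issue an enhanced invocation and collect the DSP-side CRC list. */
static int invoke_with_crc(int fd, uint32_t handle, uint32_t sc,
                           struct fastrpc_invoke_args *args,
                           uint32_t remote_crc[CRC_ENTRIES])
{
        struct fastrpc_enhanced_invoke einv;
        struct fastrpc_ioctl_multimode_invoke minv;

        memset(&einv, 0, sizeof(einv));         /* reserved[] must be zero */
        einv.inv.handle = handle;
        einv.inv.sc = sc;
        einv.inv.args = (uint64_t)(uintptr_t)args;
        einv.crc = (uint64_t)(uintptr_t)remote_crc;  /* DSP CRCs are copied here */

        memset(&minv, 0, sizeof(minv));         /* rsvd/reserved[] must be zero */
        minv.req = FASTRPC_INVOKE_ENHANCED;
        minv.invparam = (uint64_t)(uintptr_t)&einv;
        minv.size = sizeof(einv);

        if (ioctl(fd, FASTRPC_IOCTL_MULTIMODE_INVOKE, &minv))
                return -1;

        /* Caller then compares remote_crc[] against the CRCs it computed
         * locally for the input/output buffers. */
        return 0;
}
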
Signed-off-by: Ekansh Gupta --- drivers/misc/fastrpc.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index e392e2a..825ff91 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -611,6 +611,7 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( /* Released in fastrpc_context_put() */ fastrpc_channel_ctx_get(cctx); + ctx->crc = (u32 *)(uintptr_t)invoke->crc; ctx->sc = sc; ctx->retval = -1; ctx->pid = current->pid; @@ -1066,6 +1067,7 @@ static int fastrpc_put_args(struct fastrpc_invoke_ctx *ctx, struct fastrpc_invoke_buf *list; struct fastrpc_phy_page *pages; u64 *fdlist; + u32 *crclist; int i, inbufs, outbufs, handles; inbufs = REMOTE_SCALARS_INBUFS(ctx->sc); @@ -1073,7 +1075,8 @@ static int fastrpc_put_args(struct fastrpc_invoke_ctx *ctx, handles = REMOTE_SCALARS_INHANDLES(ctx->sc) + REMOTE_SCALARS_OUTHANDLES(ctx->sc); list = fastrpc_invoke_buf_start(rpra, ctx->nscalars); pages = fastrpc_phy_page_start(list, ctx->nscalars); - fdlist = (uint64_t *)(pages + inbufs + outbufs + handles); + fdlist = (u64 *)(pages + inbufs + outbufs + handles); + crclist = (u32 *)(fdlist + FASTRPC_MAX_FDLIST); for (i = inbufs; i < ctx->nbufs; ++i) { if (!ctx->maps[i]) { @@ -1097,6 +1100,10 @@ static int fastrpc_put_args(struct fastrpc_invoke_ctx *ctx, fastrpc_map_put(mmap); } + if (ctx->crc && crclist && rpra) { + if (copy_to_user((void __user *)ctx->crc, crclist, FASTRPC_MAX_CRCLIST * sizeof(u32))) + return -EFAULT; + } return 0; } @@ -1721,6 +1728,7 @@ static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) switch (invoke.req) { case FASTRPC_INVOKE: + case FASTRPC_INVOKE_ENHANCED: /* nscalars is truncated here to max supported value */ if (copy_from_user(&einv, (void __user *)(uintptr_t)invoke.invparam, invoke.size)) From patchwork Thu Oct 26 08:18:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ekansh Gupta X-Patchwork-Id: 738757 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CCD3AC25B48 for ; Thu, 26 Oct 2023 08:18:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344676AbjJZISp (ORCPT ); Thu, 26 Oct 2023 04:18:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42606 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344584AbjJZISd (ORCPT ); Thu, 26 Oct 2023 04:18:33 -0400 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C243BD4F; Thu, 26 Oct 2023 01:18:25 -0700 (PDT) Received: from pps.filterd (m0279870.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 39Q7PSth002724; Thu, 26 Oct 2023 08:18:24 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=qcppdkim1; bh=CAU7NC61C++Dz389b3BW4iYZEVWhr33pur5URmR4PUQ=; b=krofsJxoyhxEuJHtZQpaVdz1VPO8ZEpHxEhCVKi5WtB7NAPSSyCYskXpGOHcCg9JgTV0 GWLc4Tt3Q42d2hkClL61x8pEv222yY0ZOzu4nAqHwO926EOLFjAEGbV5+2tZHQ8cM1Nm CvXjZOyGEu/yB7ouutTT3J8jKVBV7uqG+OiqEnVfR36QMIVvbfJTWit7+hfWIwXcu4fB N1jBLhlEBjEYG2V7lXC1OnzmtrxBWqgy2YsaA4II9dAJ0lRQwz1UzmOd0K6e0I3Mn5iH 
kLdEZIR+y6hDIISOL8Og5Ajcwh0H5OPwBCzEoju0FviYJ/+Yy9ndaulbbe0PvLv0X1p/ 0w== Received: from nalasppmta03.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3tyfm9gkrj-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 26 Oct 2023 08:18:16 +0000 Received: from nalasex01b.na.qualcomm.com (nalasex01b.na.qualcomm.com [10.47.209.197]) by NALASPPMTA03.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 39Q8IFrv025005 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 26 Oct 2023 08:18:15 GMT Received: from ekangupt-linux.qualcomm.com (10.80.80.8) by nalasex01b.na.qualcomm.com (10.47.209.197) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.39; Thu, 26 Oct 2023 01:18:13 -0700 From: Ekansh Gupta To: , CC: , Subject: [PATCH v6 3/5] misc: fastrpc: Capture kernel and DSP performance counters Date: Thu, 26 Oct 2023 13:48:00 +0530 Message-ID: <1698308282-8648-4-git-send-email-quic_ekangupt@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1698308282-8648-1-git-send-email-quic_ekangupt@quicinc.com> References: <1698308282-8648-1-git-send-email-quic_ekangupt@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01b.na.qualcomm.com (10.47.209.197) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: -MhvB3etGKT4LsYPUpdXUGJUR4J_9-yC X-Proofpoint-GUID: -MhvB3etGKT4LsYPUpdXUGJUR4J_9-yC X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-10-26_05,2023-10-25_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0 malwarescore=0 bulkscore=0 priorityscore=1501 clxscore=1015 phishscore=0 mlxlogscore=999 adultscore=0 suspectscore=0 mlxscore=0 spamscore=0 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2310170001 definitions=main-2310260068 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add support to capture kernel performance counters for different kernel level operations. These counters collects the information for remote call and copies the information to a buffer shared by user. Collection of DSP performance counters is also added as part of this change. DSP updates the performance information in the metadata which is then copied to a buffer passed by the user. 
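For reference, a minimal userspace sketch of how the counters are expected to be requested (illustrative only, not part of this patch). PERF_KEY_MAX comes from the uapi enum added below; DSP_PERF_ENTRIES mirrors the driver-internal FASTRPC_DSP_PERF_LIST (12) and, like the header path and the handle/sc/args placeholders, is an assumption here. Passing a non-zero perf_kernel pointer is what switches the file context into profiling mode.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <misc/fastrpc.h>

#define DSP_PERF_ENTRIES 12  /* assumed to match FASTRPC_DSP_PERF_LIST */

static int invoke_with_perf(int fd, uint32_t handle, uint32_t sc,
                            struct fastrpc_invoke_args *args,
                            uint64_t kperf[PERF_KEY_MAX],
                            uint64_t dperf[DSP_PERF_ENTRIES])
{
        struct fastrpc_enhanced_invoke einv;
        struct fastrpc_ioctl_multimode_invoke minv;

        memset(&einv, 0, sizeof(einv));         /* reserved[] must be zero */
        einv.inv.handle = handle;
        einv.inv.sc = sc;
        einv.inv.args = (uint64_t)(uintptr_t)args;
        einv.perf_kernel = (uint64_t)(uintptr_t)kperf;  /* enables profiling */
        einv.perf_dsp = (uint64_t)(uintptr_t)dperf;

        memset(&minv, 0, sizeof(minv));         /* rsvd/reserved[] must be zero */
        minv.req = FASTRPC_INVOKE_ENHANCED;
        minv.invparam = (uint64_t)(uintptr_t)&einv;
        minv.size = sizeof(einv);

        if (ioctl(fd, FASTRPC_IOCTL_MULTIMODE_INVOKE, &minv))
                return -1;

        /* kperf[] is indexed by enum fastrpc_perfkeys (e.g. PERF_GETARGS,
         * PERF_LINK, PERF_PUTARGS) and holds nanosecond timings; dperf[]
         * holds the raw DSP-side counters. */
        return 0;
}
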
Signed-off-by: Ekansh Gupta --- Changes in v2: - Fixed compile time warnings Changes in v3: - Squashed commits to get proper patch series drivers/misc/fastrpc.c | 140 +++++++++++++++++++++++++++++++++++++++++--- include/uapi/misc/fastrpc.h | 14 +++++ 2 files changed, 146 insertions(+), 8 deletions(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index 825ff91..b9822c1 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -19,6 +19,7 @@ #include #include #include +#include #include #include #include @@ -33,6 +34,8 @@ #define FASTRPC_ALIGN 128 #define FASTRPC_MAX_FDLIST 16 #define FASTRPC_MAX_CRCLIST 64 +#define FASTRPC_KERNEL_PERF_LIST (PERF_KEY_MAX) +#define FASTRPC_DSP_PERF_LIST 12 #define FASTRPC_PHYS(p) ((p) & 0xffffffff) #define FASTRPC_CTX_MAX (256) #define FASTRPC_INIT_HANDLE 1 @@ -105,6 +108,27 @@ #define miscdev_to_fdevice(d) container_of(d, struct fastrpc_device, miscdev) +#define PERF_END ((void)0) + +#define PERF(enb, cnt, ff) \ + {\ + struct timespec64 startT = {0};\ + uint64_t *counter = cnt;\ + if (enb && counter) {\ + ktime_get_real_ts64(&startT);\ + } \ + ff ;\ + if (enb && counter) {\ + *counter += getnstimediff(&startT);\ + } \ + } + +#define GET_COUNTER(perf_ptr, offset) \ + (perf_ptr != NULL ?\ + (((offset >= 0) && (offset < PERF_KEY_MAX)) ?\ + (uint64_t *)(perf_ptr + offset)\ + : (uint64_t *)NULL) : (uint64_t *)NULL) + static const char *domains[FASTRPC_DEV_MAX] = { "adsp", "mdsp", "sdsp", "cdsp"}; struct fastrpc_phy_page { @@ -228,6 +252,19 @@ struct fastrpc_map { struct kref refcount; }; +struct fastrpc_perf { + u64 count; + u64 flush; + u64 map; + u64 copy; + u64 link; + u64 getargs; + u64 putargs; + u64 invargs; + u64 invoke; + u64 tid; +}; + struct fastrpc_invoke_ctx { int nscalars; int nbufs; @@ -236,6 +273,8 @@ struct fastrpc_invoke_ctx { int tgid; u32 sc; u32 *crc; + u64 *perf_kernel; + u64 *perf_dsp; u64 ctxid; u64 msg_sz; struct kref refcount; @@ -250,6 +289,7 @@ struct fastrpc_invoke_ctx { struct fastrpc_invoke_args *args; struct fastrpc_buf_overlap *olaps; struct fastrpc_channel_ctx *cctx; + struct fastrpc_perf *perf; }; struct fastrpc_session_ctx { @@ -299,6 +339,7 @@ struct fastrpc_user { struct fastrpc_session_ctx *sctx; struct fastrpc_buf *init_mem; + u32 profile; int tgid; int pd; bool is_secure_dev; @@ -308,6 +349,17 @@ struct fastrpc_user { struct mutex mutex; }; +static inline int64_t getnstimediff(struct timespec64 *start) +{ + int64_t ns; + struct timespec64 ts, b; + + ktime_get_real_ts64(&ts); + b = timespec64_sub(ts, *start); + ns = timespec64_to_ns(&b); + return ns; +} + static void fastrpc_free_map(struct kref *ref) { struct fastrpc_map *map; @@ -493,6 +545,9 @@ static void fastrpc_context_free(struct kref *ref) if (ctx->buf) fastrpc_buf_free(ctx->buf); + if (ctx->fl->profile) + kfree(ctx->perf); + spin_lock_irqsave(&cctx->lock, flags); idr_remove(&cctx->ctx_idr, ctx->ctxid >> 4); spin_unlock_irqrestore(&cctx->lock, flags); @@ -612,6 +667,14 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( fastrpc_channel_ctx_get(cctx); ctx->crc = (u32 *)(uintptr_t)invoke->crc; + ctx->perf_dsp = (u64 *)(uintptr_t)invoke->perf_dsp; + ctx->perf_kernel = (u64 *)(uintptr_t)invoke->perf_kernel; + if (ctx->fl->profile) { + ctx->perf = kzalloc(sizeof(*(ctx->perf)), GFP_KERNEL); + if (!ctx->perf) + return ERR_PTR(-ENOMEM); + ctx->perf->tid = ctx->fl->tgid; + } ctx->sc = sc; ctx->retval = -1; ctx->pid = current->pid; @@ -875,7 +938,8 @@ static int fastrpc_get_meta_size(struct fastrpc_invoke_ctx *ctx) sizeof(struct 
fastrpc_invoke_buf) + sizeof(struct fastrpc_phy_page)) * ctx->nscalars + sizeof(u64) * FASTRPC_MAX_FDLIST + - sizeof(u32) * FASTRPC_MAX_CRCLIST; + sizeof(u32) * FASTRPC_MAX_CRCLIST + + sizeof(u32) + sizeof(u64) * FASTRPC_DSP_PERF_LIST; return size; } @@ -942,16 +1006,22 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx) int inbufs, i, oix, err = 0; u64 len, rlen, pkt_size; u64 pg_start, pg_end; + u64 *perf_counter = NULL; uintptr_t args; int metalen; + if (ctx->fl->profile) + perf_counter = (u64 *)ctx->perf + PERF_COUNT; + inbufs = REMOTE_SCALARS_INBUFS(ctx->sc); metalen = fastrpc_get_meta_size(ctx); pkt_size = fastrpc_get_payload_size(ctx, metalen); + PERF(ctx->fl->profile, GET_COUNTER(perf_counter, PERF_MAP), err = fastrpc_create_maps(ctx); if (err) return err; + PERF_END); ctx->msg_sz = pkt_size; @@ -983,6 +1053,7 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx) if (ctx->maps[i]) { struct vm_area_struct *vma = NULL; + PERF(ctx->fl->profile, GET_COUNTER(perf_counter, PERF_MAP), rpra[i].buf.pv = (u64) ctx->args[i].ptr; pages[i].addr = ctx->maps[i]->phys; @@ -997,9 +1068,9 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx) pg_end = ((ctx->args[i].ptr + len - 1) & PAGE_MASK) >> PAGE_SHIFT; pages[i].size = (pg_end - pg_start + 1) * PAGE_SIZE; - + PERF_END); } else { - + PERF(ctx->fl->profile, GET_COUNTER(perf_counter, PERF_COPY), if (ctx->olaps[oix].offset == 0) { rlen -= ALIGN(args, FASTRPC_ALIGN) - args; args = ALIGN(args, FASTRPC_ALIGN); @@ -1021,12 +1092,14 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx) pages[i].size = (pg_end - pg_start + 1) * PAGE_SIZE; args = args + mlen; rlen -= mlen; + PERF_END); } if (i < inbufs && !ctx->maps[i]) { void *dst = (void *)(uintptr_t)rpra[i].buf.pv; void *src = (void *)(uintptr_t)ctx->args[i].ptr; + PERF(ctx->fl->profile, GET_COUNTER(perf_counter, PERF_COPY), if (!kernel) { if (copy_from_user(dst, (void __user *)src, len)) { @@ -1036,6 +1109,7 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx) } else { memcpy(dst, src, len); } + PERF_END); } } @@ -1066,9 +1140,9 @@ static int fastrpc_put_args(struct fastrpc_invoke_ctx *ctx, struct fastrpc_map *mmap = NULL; struct fastrpc_invoke_buf *list; struct fastrpc_phy_page *pages; - u64 *fdlist; - u32 *crclist; - int i, inbufs, outbufs, handles; + u64 *fdlist, *perf_dsp_list; + u32 *crclist, *poll; + int i, inbufs, outbufs, handles, perferr; inbufs = REMOTE_SCALARS_INBUFS(ctx->sc); outbufs = REMOTE_SCALARS_OUTBUFS(ctx->sc); @@ -1077,6 +1151,8 @@ static int fastrpc_put_args(struct fastrpc_invoke_ctx *ctx, pages = fastrpc_phy_page_start(list, ctx->nscalars); fdlist = (u64 *)(pages + inbufs + outbufs + handles); crclist = (u32 *)(fdlist + FASTRPC_MAX_FDLIST); + poll = (u32 *)(crclist + FASTRPC_MAX_CRCLIST); + perf_dsp_list = (u64 *)(poll + 1); for (i = inbufs; i < ctx->nbufs; ++i) { if (!ctx->maps[i]) { @@ -1101,8 +1177,16 @@ static int fastrpc_put_args(struct fastrpc_invoke_ctx *ctx, } if (ctx->crc && crclist && rpra) { - if (copy_to_user((void __user *)ctx->crc, crclist, FASTRPC_MAX_CRCLIST * sizeof(u32))) + if (copy_to_user((void __user *)ctx->crc, crclist, + FASTRPC_MAX_CRCLIST * sizeof(u32))) { return -EFAULT; + } + } + if (ctx->perf_dsp && perf_dsp_list) { + perferr = copy_to_user((void __user *)ctx->perf_dsp, + perf_dsp_list, FASTRPC_DSP_PERF_LIST * sizeof(u64)); + if (perferr) + dev_info(fl->sctx->dev, "Warning: failed to copy perf data %d\n", perferr); } return 0; } @@ -1139,6 
+1223,20 @@ static int fastrpc_invoke_send(struct fastrpc_session_ctx *sctx, } +static void fastrpc_update_invoke_count(u32 handle, u64 *perf_counter, + struct timespec64 *invoket) +{ + u64 *invcount, *count; + + invcount = GET_COUNTER(perf_counter, PERF_INVOKE); + if (invcount) + *invcount += getnstimediff(invoket); + + count = GET_COUNTER(perf_counter, PERF_COUNT); + if (count) + *count += 1; +} + static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, struct fastrpc_enhanced_invoke *invoke) { @@ -1146,7 +1244,12 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, struct fastrpc_buf *buf, *b; struct fastrpc_invoke *inv = &invoke->inv; u32 handle, sc; - int err = 0; + u64 *perf_counter = NULL; + int err = 0, perferr = 0; + struct timespec64 invoket = {0}; + + if (fl->profile) + ktime_get_real_ts64(&invoket); if (!fl->sctx) return -EINVAL; @@ -1165,18 +1268,24 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, if (IS_ERR(ctx)) return PTR_ERR(ctx); + if (fl->profile) + perf_counter = (u64 *)ctx->perf + PERF_COUNT; + PERF(fl->profile, GET_COUNTER(perf_counter, PERF_GETARGS), if (ctx->nscalars) { err = fastrpc_get_args(kernel, ctx); if (err) goto bail; } + PERF_END); /* make sure that all CPU memory writes are seen by DSP */ dma_wmb(); + PERF(fl->profile, GET_COUNTER(perf_counter, PERF_LINK), /* Send invoke buffer to remote dsp */ err = fastrpc_invoke_send(fl->sctx, ctx, kernel, handle); if (err) goto bail; + PERF_END); if (kernel) { if (!wait_for_completion_timeout(&ctx->work, 10 * HZ)) @@ -1196,10 +1305,12 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, if (ctx->nscalars) { /* make sure that all memory writes by DSP are seen by CPU */ dma_rmb(); + PERF(fl->profile, GET_COUNTER(perf_counter, PERF_PUTARGS), /* populate all the output buffers with results */ err = fastrpc_put_args(ctx, kernel); if (err) goto bail; + PERF_END); } bail: @@ -1216,6 +1327,15 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, list_del(&buf->node); list_add_tail(&buf->node, &fl->cctx->invoke_interrupted_mmaps); } + } else if (ctx) { + if (fl->profile && !err) + fastrpc_update_invoke_count(handle, perf_counter, &invoket); + if (fl->profile && ctx->perf && ctx->perf_kernel) { + perferr = copy_to_user((void __user *)ctx->perf_kernel, + ctx->perf, FASTRPC_KERNEL_PERF_LIST * sizeof(u64)); + if (perferr) + dev_info(fl->sctx->dev, "Warning: failed to copy perf data %d\n", perferr); + } } if (err) @@ -1714,6 +1834,7 @@ static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) struct fastrpc_invoke_args *args = NULL; struct fastrpc_ioctl_multimode_invoke invoke; u32 nscalars; + u64 *perf_kernel; int err, i; if (copy_from_user(&invoke, argp, sizeof(invoke))) @@ -1748,6 +1869,9 @@ static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) return -EFAULT; } } + perf_kernel = (u64 *)(uintptr_t)einv.perf_kernel; + if (perf_kernel) + fl->profile = true; einv.inv.args = (__u64)args; err = fastrpc_internal_invoke(fl, false, &einv); kfree(args); diff --git a/include/uapi/misc/fastrpc.h b/include/uapi/misc/fastrpc.h index 45c15be..074675e 100644 --- a/include/uapi/misc/fastrpc.h +++ b/include/uapi/misc/fastrpc.h @@ -166,4 +166,18 @@ struct fastrpc_ioctl_capability { __u32 reserved[4]; }; +enum fastrpc_perfkeys { + PERF_COUNT = 0, + PERF_RESERVED1 = 1, + PERF_MAP = 2, + PERF_COPY = 3, + PERF_LINK = 4, + PERF_GETARGS = 5, + PERF_PUTARGS = 6, + PERF_RESERVED2 = 7, + PERF_INVOKE = 8, 
+ PERF_RESERVED3 = 9, + PERF_KEY_MAX = 10, +}; + #endif /* __QCOM_FASTRPC_H__ */ From patchwork Thu Oct 26 08:18:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ekansh Gupta X-Patchwork-Id: 738758 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 258A9C25B6F for ; Thu, 26 Oct 2023 08:18:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344632AbjJZIS2 (ORCPT ); Thu, 26 Oct 2023 04:18:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51530 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344603AbjJZISX (ORCPT ); Thu, 26 Oct 2023 04:18:23 -0400 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 71D17128; Thu, 26 Oct 2023 01:18:20 -0700 (PDT) Received: from pps.filterd (m0279873.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 39Q7PX2b025292; Thu, 26 Oct 2023 08:18:18 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=qcppdkim1; bh=rx6FW+24Mo6NqWGgS2mDznpnkXHuFSOKE1VD33tTvpQ=; b=ZS3fbeV0mDgTq34d/zS0MuPfyrF8546aHHHiixEwe1SbeAH5/GEp31LbHqSOmU9x7GGn nGInP22fih1vfxqDYoo8P8a5T3Wxd6OYCcOe2wAQnrlUpoLJwqrMMasSHARK7QJtThot N0doZTzQVhrgHQBdi4GIrsZoj2PVkN0ZJSCSIcT0RS3T1geU9T4lEHM9vEaEk/J2b1AN mCzoV+LqGzZfy4o4f/RlguLR2yYSNpU5ZKg7gMeA18Onj1frIuXrspbjAstb8DM/UC/z izKQZnWI2i1C74XTC4pRH8jQr8DyGJXtCt/Q9mEm+j5WHupr5iPES4dSorlZzBcFjC0X pA== Received: from nalasppmta05.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3txtw1k4rp-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 26 Oct 2023 08:18:18 +0000 Received: from nalasex01b.na.qualcomm.com (nalasex01b.na.qualcomm.com [10.47.209.197]) by NALASPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 39Q8IHx7017883 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 26 Oct 2023 08:18:17 GMT Received: from ekangupt-linux.qualcomm.com (10.80.80.8) by nalasex01b.na.qualcomm.com (10.47.209.197) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.39; Thu, 26 Oct 2023 01:18:15 -0700 From: Ekansh Gupta To: , CC: , Subject: [PATCH v6 4/5] misc: fastrpc: Add support to save and restore interrupted Date: Thu, 26 Oct 2023 13:48:01 +0530 Message-ID: <1698308282-8648-5-git-send-email-quic_ekangupt@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1698308282-8648-1-git-send-email-quic_ekangupt@quicinc.com> References: <1698308282-8648-1-git-send-email-quic_ekangupt@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01b.na.qualcomm.com (10.47.209.197) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: wXL9ImVyilZaYdRYGh3_ar0_m3NFvkMB X-Proofpoint-GUID: wXL9ImVyilZaYdRYGh3_ar0_m3NFvkMB X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 
definitions=2023-10-26_05,2023-10-25_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 malwarescore=0 priorityscore=1501 adultscore=0 suspectscore=0 bulkscore=0 mlxlogscore=999 lowpriorityscore=0 clxscore=1015 impostorscore=0 mlxscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2310170001 definitions=main-2310260068 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org For any remote call, driver sends a message to DSP using RPMSG framework. After message is sent, there is a wait on a completion object at driver which is completed when DSP response is received. There is a possibility that a signal is received while waiting causing the wait function to return -ERESTARTSYS. In this case the context should be saved and it should get restored for the next invocation for the thread. Adding changes to support saving and restoring of interrupted fastrpc contexts. Signed-off-by: Ekansh Gupta --- Changes in v2: - Fixed missing definition - Fixes compile time issue Changes in v5: - Removed Change-Id tag drivers/misc/fastrpc.c | 99 ++++++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 83 insertions(+), 16 deletions(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index b9822c1..9a481ac 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -333,6 +333,7 @@ struct fastrpc_user { struct list_head user; struct list_head maps; struct list_head pending; + struct list_head interrupted; struct list_head mmaps; struct fastrpc_channel_ctx *cctx; @@ -712,6 +713,40 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( return ERR_PTR(ret); } +static struct fastrpc_invoke_ctx *fastrpc_context_restore_interrupted( + struct fastrpc_user *fl, struct fastrpc_invoke *inv) +{ + struct fastrpc_invoke_ctx *ctx = NULL, *ictx = NULL, *n; + + spin_lock(&fl->lock); + list_for_each_entry_safe(ictx, n, &fl->interrupted, node) { + if (ictx->pid == current->pid) { + if (inv->sc != ictx->sc || ictx->fl != fl) { + dev_err(ictx->fl->sctx->dev, + "interrupted sc (0x%x) or fl (%pK) does not match with invoke sc (0x%x) or fl (%pK)\n", + ictx->sc, ictx->fl, inv->sc, fl); + spin_unlock(&fl->lock); + return ERR_PTR(-EINVAL); + } + ctx = ictx; + list_del(&ctx->node); + list_add_tail(&ctx->node, &fl->pending); + break; + } + } + spin_unlock(&fl->lock); + return ctx; +} + +static void fastrpc_context_save_interrupted( + struct fastrpc_invoke_ctx *ctx) +{ + spin_lock(&ctx->fl->lock); + list_del(&ctx->node); + list_add_tail(&ctx->node, &ctx->fl->interrupted); + spin_unlock(&ctx->fl->lock); +} + static struct sg_table * fastrpc_map_dma_buf(struct dma_buf_attachment *attachment, enum dma_data_direction dir) @@ -1264,6 +1299,14 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, return -EPERM; } + if (!kernel) { + ctx = fastrpc_context_restore_interrupted(fl, inv); + if (IS_ERR(ctx)) + return PTR_ERR(ctx); + if (ctx) + goto wait; + } + ctx = fastrpc_context_alloc(fl, kernel, sc, invoke); if (IS_ERR(ctx)) return PTR_ERR(ctx); @@ -1287,6 +1330,7 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, goto bail; PERF_END); +wait: if (kernel) { if (!wait_for_completion_timeout(&ctx->work, 10 * HZ)) err = -ETIMEDOUT; @@ -1323,6 +1367,9 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, } if (err == -ERESTARTSYS) { + if (ctx) + fastrpc_context_save_interrupted(ctx); + list_for_each_entry_safe(buf, b, &fl->mmaps, node) { 
list_del(&buf->node); list_add_tail(&buf->node, &fl->cctx->invoke_interrupted_mmaps); @@ -1444,7 +1491,7 @@ static int fastrpc_init_create_static_process(struct fastrpc_user *fl, ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_STATIC, 3, 0); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) @@ -1577,7 +1624,7 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE, 4, 0); if (init.attrs) ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_CREATE_ATTR, 4, 0); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) @@ -1628,6 +1675,25 @@ static void fastrpc_session_free(struct fastrpc_channel_ctx *cctx, spin_unlock_irqrestore(&cctx->lock, flags); } +static void fastrpc_context_list_free(struct fastrpc_user *fl) +{ + struct fastrpc_invoke_ctx *ctx, *n; + + list_for_each_entry_safe(ctx, n, &fl->interrupted, node) { + spin_lock(&fl->lock); + list_del(&ctx->node); + spin_unlock(&fl->lock); + fastrpc_context_put(ctx); + } + + list_for_each_entry_safe(ctx, n, &fl->pending, node) { + spin_lock(&fl->lock); + list_del(&ctx->node); + spin_unlock(&fl->lock); + fastrpc_context_put(ctx); + } +} + static int fastrpc_release_current_dsp_process(struct fastrpc_user *fl) { struct fastrpc_invoke_args args[1]; @@ -1641,7 +1707,7 @@ static int fastrpc_release_current_dsp_process(struct fastrpc_user *fl) ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_RELEASE, 1, 0); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; return fastrpc_internal_invoke(fl, true, &ioctl); } @@ -1650,7 +1716,6 @@ static int fastrpc_device_release(struct inode *inode, struct file *file) { struct fastrpc_user *fl = (struct fastrpc_user *)file->private_data; struct fastrpc_channel_ctx *cctx = fl->cctx; - struct fastrpc_invoke_ctx *ctx, *n; struct fastrpc_map *map, *m; struct fastrpc_buf *buf, *b; unsigned long flags; @@ -1664,10 +1729,7 @@ static int fastrpc_device_release(struct inode *inode, struct file *file) if (fl->init_mem) fastrpc_buf_free(fl->init_mem); - list_for_each_entry_safe(ctx, n, &fl->pending, node) { - list_del(&ctx->node); - fastrpc_context_put(ctx); - } + fastrpc_context_list_free(fl); list_for_each_entry_safe(map, m, &fl->maps, node) fastrpc_map_put(map); @@ -1708,6 +1770,7 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp) spin_lock_init(&fl->lock); mutex_init(&fl->mutex); INIT_LIST_HEAD(&fl->pending); + INIT_LIST_HEAD(&fl->interrupted); INIT_LIST_HEAD(&fl->maps); INIT_LIST_HEAD(&fl->mmaps); INIT_LIST_HEAD(&fl->user); @@ -1789,7 +1852,7 @@ static int fastrpc_init_attach(struct fastrpc_user *fl, int pd) ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_ATTACH, 1, 0); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; return fastrpc_internal_invoke(fl, true, &ioctl); } @@ -1820,7 +1883,7 @@ static int fastrpc_invoke(struct fastrpc_user *fl, char __user *argp) } ioctl.inv = inv; - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, false, &ioctl); kfree(args); @@ -1872,7 +1935,7 @@ static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) perf_kernel = (u64 *)(uintptr_t)einv.perf_kernel; if (perf_kernel) fl->profile = true; - einv.inv.args = (__u64)args; + einv.inv.args = (u64)args; err = 
fastrpc_internal_invoke(fl, false, &einv); kfree(args); break; @@ -1902,7 +1965,7 @@ static int fastrpc_get_info_from_dsp(struct fastrpc_user *fl, uint32_t *dsp_attr ioctl.inv.handle = FASTRPC_DSP_UTILITIES_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(0, 1, 1); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; return fastrpc_internal_invoke(fl, true, &ioctl); } @@ -2005,7 +2068,7 @@ static int fastrpc_req_munmap_impl(struct fastrpc_user *fl, struct fastrpc_buf * ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MUNMAP, 1, 0); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, true, &ioctl); if (!err) { @@ -2103,7 +2166,7 @@ static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp) ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MMAP, 2, 1); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) { @@ -2184,7 +2247,7 @@ static int fastrpc_req_mem_unmap_impl(struct fastrpc_user *fl, struct fastrpc_me ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MEM_UNMAP, 1, 0); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, true, &ioctl); fastrpc_map_put(map); @@ -2253,7 +2316,7 @@ static int fastrpc_req_mem_map(struct fastrpc_user *fl, char __user *argp) ioctl.inv.handle = FASTRPC_INIT_HANDLE; ioctl.inv.sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MEM_MAP, 3, 1); - ioctl.inv.args = (__u64)args; + ioctl.inv.args = (u64)args; err = fastrpc_internal_invoke(fl, true, &ioctl); if (err) { @@ -2574,6 +2637,10 @@ static void fastrpc_notify_users(struct fastrpc_user *user) ctx->retval = -EPIPE; complete(&ctx->work); } + list_for_each_entry(ctx, &user->interrupted, node) { + ctx->retval = -EPIPE; + complete(&ctx->work); + } spin_unlock(&user->lock); } From patchwork Thu Oct 26 08:18:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ekansh Gupta X-Patchwork-Id: 738397 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D2767C25B48 for ; Thu, 26 Oct 2023 08:18:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344696AbjJZISe (ORCPT ); Thu, 26 Oct 2023 04:18:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42482 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344620AbjJZIS1 (ORCPT ); Thu, 26 Oct 2023 04:18:27 -0400 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E3F461B2; Thu, 26 Oct 2023 01:18:22 -0700 (PDT) Received: from pps.filterd (m0279873.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 39Q7QAUT025592; Thu, 26 Oct 2023 08:18:21 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=qcppdkim1; bh=mSi9WBDCHFP67vFBaZpRzmem1kybzwSSx2XVp/2m/NY=; b=N9osCq0OQmCBPs1axmjXylwdXSjynAsnaRNPXHodpGerFXMQNCbAjFP0XEQ/hL7GKg1m hUmX48/cEOjhjw0lKgy+VJlHmMpgeoOEuYZ7EE4fP0JReoHdi257KOjndXvJ3HIGxgi5 
BSCJhBnl18fnTWbpgqQI2r7kqHxJJBcpQh1EQxGmPaFM97JTJVCh2u8sBJ0rF1wCfsyA VAmW4t4jdxQX5c065olee+lv3mzCS3KR0yKwk4i3YDVlabIlH67PjJpBGPL4oCnrLiZR ks3mD0KpaM0uDRnEiuKgqYsramFe+nR+5wtHWMjPBinqNPrOqlR3tc7RP/JVlH8ns0+M 2A== Received: from nalasppmta02.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3txtw1k4ru-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 26 Oct 2023 08:18:20 +0000 Received: from nalasex01b.na.qualcomm.com (nalasex01b.na.qualcomm.com [10.47.209.197]) by NALASPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 39Q8IJC3023726 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 26 Oct 2023 08:18:19 GMT Received: from ekangupt-linux.qualcomm.com (10.80.80.8) by nalasex01b.na.qualcomm.com (10.47.209.197) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.39; Thu, 26 Oct 2023 01:18:17 -0700 From: Ekansh Gupta To: , CC: , Subject: [PATCH v6 5/5] misc: fastrpc: Add support to allocate shared context bank Date: Thu, 26 Oct 2023 13:48:02 +0530 Message-ID: <1698308282-8648-6-git-send-email-quic_ekangupt@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1698308282-8648-1-git-send-email-quic_ekangupt@quicinc.com> References: <1698308282-8648-1-git-send-email-quic_ekangupt@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nalasex01b.na.qualcomm.com (10.47.209.197) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: mkGLnqAYt4bFlAHNRJiP93Pq1evxsIaj X-Proofpoint-GUID: mkGLnqAYt4bFlAHNRJiP93Pq1evxsIaj X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-10-26_05,2023-10-25_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 malwarescore=0 priorityscore=1501 adultscore=0 suspectscore=0 bulkscore=0 mlxlogscore=999 lowpriorityscore=0 clxscore=1015 impostorscore=0 mlxscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2310170001 definitions=main-2310260068 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Context banks could be set as a shared one using a DT propery "qcom,nsessions". The property takes the number of session to be created of the context bank. This change provides a control mechanism for user to use shared context banks for light weight processes. The session is set as shared while its creation and if a user requests for shared context bank, the same will be allocated during process initialization. 
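For reference, a minimal userspace sketch of how a process would opt in to a shared context bank before process creation (illustrative only, not part of this patch). The control payload below mirrors the driver-internal struct fastrpc_internal_control / fastrpc_ctrl_smmu layout and is an assumption; only FASTRPC_INVOKE_CONTROL and FASTRPC_CONTROL_SMMU themselves come from the uapi header, and the include path is assumed.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <misc/fastrpc.h>

/* Assumed userspace mirror of the driver's fastrpc_internal_control when
 * req == FASTRPC_CONTROL_SMMU (u32 req followed by u32 sharedcb). */
struct fastrpc_smmu_ctrl {
        uint32_t req;
        uint32_t sharedcb;
};

static int request_shared_cb(int fd)
{
        struct fastrpc_smmu_ctrl ctrl = {
                .req = FASTRPC_CONTROL_SMMU,
                .sharedcb = 1,          /* ask for a shared context bank */
        };
        struct fastrpc_ioctl_multimode_invoke minv;

        memset(&minv, 0, sizeof(minv)); /* rsvd/reserved[] must be zero */
        minv.req = FASTRPC_INVOKE_CONTROL;
        minv.invparam = (uint64_t)(uintptr_t)&ctrl;
        minv.size = sizeof(ctrl);

        /* Must run before FASTRPC_IOCTL_INIT_CREATE or INIT_ATTACH, since
         * the session (context bank) is now picked during process init. */
        return ioctl(fd, FASTRPC_IOCTL_MULTIMODE_INVOKE, &minv);
}
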
Signed-off-by: Ekansh Gupta --- drivers/misc/fastrpc.c | 122 ++++++++++++++++++++++++++++++-------------- include/uapi/misc/fastrpc.h | 12 +++++ 2 files changed, 95 insertions(+), 39 deletions(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index 9a481ac..b6b1884c 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -297,6 +297,7 @@ struct fastrpc_session_ctx { int sid; bool used; bool valid; + bool sharedcb; }; struct fastrpc_channel_ctx { @@ -344,12 +345,22 @@ struct fastrpc_user { int tgid; int pd; bool is_secure_dev; + bool sharedcb; /* Lock for lists */ spinlock_t lock; /* lock for allocations */ struct mutex mutex; }; +struct fastrpc_ctrl_smmu { + u32 sharedcb; /* Set to SMMU share context bank */ +}; + +struct fastrpc_internal_control { + u32 req; + struct fastrpc_ctrl_smmu smmu; +}; + static inline int64_t getnstimediff(struct timespec64 *start) { int64_t ns; @@ -851,6 +862,37 @@ static const struct dma_buf_ops fastrpc_dma_buf_ops = { .release = fastrpc_release, }; +static struct fastrpc_session_ctx *fastrpc_session_alloc( + struct fastrpc_channel_ctx *cctx, bool sharedcb) +{ + struct fastrpc_session_ctx *session = NULL; + unsigned long flags; + int i; + + spin_lock_irqsave(&cctx->lock, flags); + for (i = 0; i < cctx->sesscount; i++) { + if (!cctx->session[i].used && cctx->session[i].valid && + cctx->session[i].sharedcb == sharedcb) { + cctx->session[i].used = true; + session = &cctx->session[i]; + break; + } + } + spin_unlock_irqrestore(&cctx->lock, flags); + + return session; +} + +static void fastrpc_session_free(struct fastrpc_channel_ctx *cctx, + struct fastrpc_session_ctx *session) +{ + unsigned long flags; + + spin_lock_irqsave(&cctx->lock, flags); + session->used = false; + spin_unlock_irqrestore(&cctx->lock, flags); +} + static int fastrpc_map_create(struct fastrpc_user *fl, int fd, u64 len, u32 attr, struct fastrpc_map **ppmap) { @@ -1449,6 +1491,10 @@ static int fastrpc_init_create_static_process(struct fastrpc_user *fl, goto err_name; } + fl->sctx = fastrpc_session_alloc(fl->cctx, fl->sharedcb); + if (!fl->sctx) + return -EBUSY; + if (!fl->cctx->remote_heap) { err = fastrpc_remote_heap_alloc(fl, fl->sctx->dev, init.memlen, &fl->cctx->remote_heap); @@ -1571,6 +1617,10 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, goto err; } + fl->sctx = fastrpc_session_alloc(fl->cctx, fl->sharedcb); + if (!fl->sctx) + return -EBUSY; + inbuf.pgid = fl->tgid; inbuf.namelen = strlen(current->comm) + 1; inbuf.filelen = init.filelen; @@ -1645,36 +1695,6 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, return err; } -static struct fastrpc_session_ctx *fastrpc_session_alloc( - struct fastrpc_channel_ctx *cctx) -{ - struct fastrpc_session_ctx *session = NULL; - unsigned long flags; - int i; - - spin_lock_irqsave(&cctx->lock, flags); - for (i = 0; i < cctx->sesscount; i++) { - if (!cctx->session[i].used && cctx->session[i].valid) { - cctx->session[i].used = true; - session = &cctx->session[i]; - break; - } - } - spin_unlock_irqrestore(&cctx->lock, flags); - - return session; -} - -static void fastrpc_session_free(struct fastrpc_channel_ctx *cctx, - struct fastrpc_session_ctx *session) -{ - unsigned long flags; - - spin_lock_irqsave(&cctx->lock, flags); - session->used = false; - spin_unlock_irqrestore(&cctx->lock, flags); -} - static void fastrpc_context_list_free(struct fastrpc_user *fl) { struct fastrpc_invoke_ctx *ctx, *n; @@ -1778,15 +1798,6 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp) fl->cctx 
= cctx; fl->is_secure_dev = fdevice->secure; - fl->sctx = fastrpc_session_alloc(cctx); - if (!fl->sctx) { - dev_err(&cctx->rpdev->dev, "No session available\n"); - mutex_destroy(&fl->mutex); - kfree(fl); - - return -EBUSY; - } - spin_lock_irqsave(&cctx->lock, flags); list_add_tail(&fl->user, &cctx->users); spin_unlock_irqrestore(&cctx->lock, flags); @@ -1845,6 +1856,10 @@ static int fastrpc_init_attach(struct fastrpc_user *fl, int pd) struct fastrpc_enhanced_invoke ioctl; int tgid = fl->tgid; + fl->sctx = fastrpc_session_alloc(fl->cctx, fl->sharedcb); + if (!fl->sctx) + return -EBUSY; + args[0].ptr = (u64)(uintptr_t) &tgid; args[0].length = sizeof(tgid); args[0].fd = -1; @@ -1891,11 +1906,33 @@ static int fastrpc_invoke(struct fastrpc_user *fl, char __user *argp) return err; } +static int fastrpc_internal_control(struct fastrpc_user *fl, + struct fastrpc_internal_control *cp) +{ + int err = 0; + + if (!fl) + return -EBADF; + if (!cp) + return -EINVAL; + + switch (cp->req) { + case FASTRPC_CONTROL_SMMU: + fl->sharedcb = cp->smmu.sharedcb; + break; + default: + err = -EBADRQC; + break; + } + return err; +} + static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) { struct fastrpc_enhanced_invoke einv; struct fastrpc_invoke_args *args = NULL; struct fastrpc_ioctl_multimode_invoke invoke; + struct fastrpc_internal_control cp = {0}; u32 nscalars; u64 *perf_kernel; int err, i; @@ -1939,6 +1976,12 @@ static int fastrpc_multimode_invoke(struct fastrpc_user *fl, char __user *argp) err = fastrpc_internal_invoke(fl, false, &einv); kfree(args); break; + case FASTRPC_INVOKE_CONTROL: + if (copy_from_user(&cp, (void __user *)(uintptr_t)invoke.invparam, sizeof(cp))) + return -EFAULT; + + err = fastrpc_internal_control(fl, &cp); + break; default: err = -ENOTTY; break; @@ -2439,6 +2482,7 @@ static int fastrpc_cb_probe(struct platform_device *pdev) if (sessions > 0) { struct fastrpc_session_ctx *dup_sess; + sess->sharedcb = true; for (i = 1; i < sessions; i++) { if (cctx->sesscount >= FASTRPC_MAX_SESSIONS) break; diff --git a/include/uapi/misc/fastrpc.h b/include/uapi/misc/fastrpc.h index 074675e..3dfd8e9 100644 --- a/include/uapi/misc/fastrpc.h +++ b/include/uapi/misc/fastrpc.h @@ -166,6 +166,18 @@ struct fastrpc_ioctl_capability { __u32 reserved[4]; }; +enum fastrpc_control_type { + FASTRPC_CONTROL_LATENCY = 1, + FASTRPC_CONTROL_SMMU = 2, + FASTRPC_CONTROL_KALLOC = 3, + FASTRPC_CONTROL_WAKELOCK = 4, + FASTRPC_CONTROL_PM = 5, + FASTRPC_CONTROL_DSPPROCESS_CLEAN = 6, + FASTRPC_CONTROL_RPC_POLL = 7, + FASTRPC_CONTROL_ASYNC_WAKE = 8, + FASTRPC_CONTROL_NOTIF_WAKE = 9, +}; + enum fastrpc_perfkeys { PERF_COUNT = 0, PERF_RESERVED1 = 1,