From patchwork Wed Jan 26 03:09:21 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ming Qian
X-Patchwork-Id: 537050
From: Ming Qian
To: mchehab@kernel.org, shawnguo@kernel.org, robh+dt@kernel.org, s.hauer@pengutronix.de
Cc: hverkuil-cisco@xs4all.nl, kernel@pengutronix.de, festevam@gmail.com, linux-imx@nxp.com, aisheng.dong@nxp.com, linux-media@vger.kernel.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v15.1 02/13] media: add nv12m_8l128 and nv12m_10be_8l128 video format.
Date: Wed, 26 Jan 2022 11:09:21 +0800
Message-Id: <3fd0d09d382b827ddc7f97ca6e4d7b3e4c1bf4fc.1643165765.git.ming.qian@nxp.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To:
References:
Precedence: bulk
List-ID:
X-Mailing-List: linux-media@vger.kernel.org

nv12m_8l128 is an 8-bit tiled nv12 format used by the amphion decoder.
nv12m_10be_8l128 is a 10-bit tiled format used by the amphion decoder.
The tile size is 8x128.

Signed-off-by: Ming Qian
Signed-off-by: Shijie Qin
Signed-off-by: Zhou Peng
---
 .../media/v4l/pixfmt-yuv-planar.rst | 28 +++++++++++++++++--
 drivers/media/v4l2-core/v4l2-ioctl.c | 2 ++
 include/uapi/linux/videodev2.h | 2 ++
 3 files changed, 29 insertions(+), 3 deletions(-)

diff --git a/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst b/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst
index 3a09d93d405b..8b2ff1d29639 100644
--- a/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst
+++ b/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst
@@ -257,6 +257,8 @@ of the luma plane.
 .. _V4L2-PIX-FMT-NV12-4L4:
 .. _V4L2-PIX-FMT-NV12-16L16:
 .. _V4L2-PIX-FMT-NV12-32L32:
+.. _V4L2-PIX-FMT-NV12M-8L128:
+.. _V4L2-PIX-FMT-NV12M-10BE-8L128:
 
 Tiled NV12
 ----------
@@ -281,21 +283,41 @@ If the vertical resolution is an odd number of tiles, the last row of tiles
 is stored in linear order.
 The layouts of the luma and chroma planes are identical.
 
-``V4L2_PIX_FMT_NV12_4L4`` stores pixel in 4x4 tiles, and stores
+``V4L2_PIX_FMT_NV12_4L4`` stores pixels in 4x4 tiles, and stores
 tiles linearly in memory. The line stride and image height must be
 aligned to a multiple of 4. The layouts of the luma and chroma planes
 are identical.
 
-``V4L2_PIX_FMT_NV12_16L16`` stores pixel in 16x16 tiles, and stores
+``V4L2_PIX_FMT_NV12_16L16`` stores pixels in 16x16 tiles, and stores
 tiles linearly in memory. The line stride and image height must be
 aligned to a multiple of 16. The layouts of the luma and chroma planes
 are identical.
 
-``V4L2_PIX_FMT_NV12_32L32`` stores pixel in 32x32 tiles, and stores
+``V4L2_PIX_FMT_NV12_32L32`` stores pixels in 32x32 tiles, and stores
 tiles linearly in memory. The line stride and image height must be
 aligned to a multiple of 32. The layouts of the luma and chroma planes
 are identical.
 
+``V4L2_PIX_FMT_NV12M_8L128`` is similar to ``V4L2_PIX_FMT_NV12M`` but stores
+pixels in 2D 8x128 tiles, and stores tiles linearly in memory.
+The image height must be aligned to a multiple of 128.
+The layouts of the luma and chroma planes are identical.
+
+``V4L2_PIX_FMT_NV12M_10BE_8L128`` is similar to ``V4L2_PIX_FMT_NV12M`` but stores
+10-bit pixels in 2D 8x128 tiles, and stores tiles linearly in memory.
+The data is arranged in big-endian order.
+The image height must be aligned to a multiple of 128.
+The layouts of the luma and chroma planes are identical.
+Note that the tile size is 8 bytes by 128 bytes,
+so the low bits and high bits of one pixel may be stored in different tiles.
+The 10-bit pixels are packed, so 5 bytes contain four 10-bit pixels, laid out
+like this (for luma):
+byte 0: Y0(bits 9-2)
+byte 1: Y0(bits 1-0) Y1(bits 9-4)
+byte 2: Y1(bits 3-0) Y2(bits 9-6)
+byte 3: Y2(bits 5-0) Y3(bits 9-8)
+byte 4: Y3(bits 7-0)
+
 .. _nv12mt:
 
 .. kernel-figure:: nv12mt.svg
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index 9ac557b8e146..efbfda9ec3ce 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -1388,6 +1388,8 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
 	case V4L2_META_FMT_VIVID: descr = "Vivid Metadata"; break;
 	case V4L2_META_FMT_RK_ISP1_PARAMS: descr = "Rockchip ISP1 3A Parameters"; break;
 	case V4L2_META_FMT_RK_ISP1_STAT_3A: descr = "Rockchip ISP1 3A Statistics"; break;
+	case V4L2_PIX_FMT_NV12M_8L128: descr = "NV12M (8x128 Linear)"; break;
+	case V4L2_PIX_FMT_NV12M_10BE_8L128: descr = "10-bit NV12M (8x128 Linear, BE)"; break;
 
 	default:
 		/* Compressed formats */
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index df8b9c486ba1..3768a0a80830 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -632,6 +632,8 @@ struct v4l2_pix_format {
 /* Tiled YUV formats, non contiguous planes */
 #define V4L2_PIX_FMT_NV12MT v4l2_fourcc('T', 'M', '1', '2') /* 12 Y/CbCr 4:2:0 64x32 tiles */
 #define V4L2_PIX_FMT_NV12MT_16X16 v4l2_fourcc('V', 'M', '1', '2') /* 12 Y/CbCr 4:2:0 16x16 tiles */
+#define V4L2_PIX_FMT_NV12M_8L128 v4l2_fourcc('N', 'A', '1', '2') /* Y/CbCr 4:2:0 8x128 tiles */
+#define V4L2_PIX_FMT_NV12M_10BE_8L128 v4l2_fourcc_be('N', 'T', '1', '2') /* Y/CbCr 4:2:0 10-bit 8x128 tiles */
 
 /* Bayer formats - see http://www.siliconimaging.com/RGB%20Bayer.htm */
 #define V4L2_PIX_FMT_SBGGR8 v4l2_fourcc('B', 'A', '8', '1') /* 8 BGBG.. GRGR.. */
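As an illustration of the byte layout documented above, a minimal userspace sketch for unpacking one 5-byte group of V4L2_PIX_FMT_NV12M_10BE_8L128 luma data into four 10-bit samples could look as follows. The helper name is hypothetical and the walk over the 8x128 tiles is omitted; only the bit arithmetic implied by the byte-layout table is shown.

#include <stdint.h>

/* Unpack one big-endian 5-byte group into four 10-bit luma samples. */
static void unpack_nv12m_10be_group(const uint8_t b[5], uint16_t y[4])
{
	y[0] = ((uint16_t)b[0] << 2) | (b[1] >> 6);            /* byte 0, plus top 2 bits of byte 1 */
	y[1] = ((uint16_t)(b[1] & 0x3f) << 4) | (b[2] >> 4);   /* low 6 bits of byte 1, top 4 of byte 2 */
	y[2] = ((uint16_t)(b[2] & 0x0f) << 6) | (b[3] >> 2);   /* low 4 bits of byte 2, top 6 of byte 3 */
	y[3] = ((uint16_t)(b[3] & 0x03) << 8) | b[4];          /* low 2 bits of byte 3, plus byte 4 */
}

The chroma plane uses the same packing, with interleaved Cb/Cr samples, since the documentation states the layouts of the luma and chroma planes are identical.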
From patchwork Wed Jan 26 03:09:23 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ming Qian
X-Patchwork-Id: 537049
From: Ming Qian
To: mchehab@kernel.org, shawnguo@kernel.org, robh+dt@kernel.org, s.hauer@pengutronix.de
Cc: hverkuil-cisco@xs4all.nl, kernel@pengutronix.de, festevam@gmail.com, linux-imx@nxp.com, aisheng.dong@nxp.com, linux-media@vger.kernel.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v15.1 04/13] media: amphion: add vpu core driver
Date: Wed, 26 Jan 2022 11:09:23 +0800
Message-Id:
X-Mailer: git-send-email 2.33.0
In-Reply-To:
References:
Precedence: bulk
List-ID:
X-Mailing-List: linux-media@vger.kernel.org

The vpu supports both encoder and decoder. It needs a vpu core to handle
them; a core runs either the encoder or the decoder firmware. This driver
adds support for the vpu core.

Signed-off-by: Ming Qian
Signed-off-by: Shijie Qin
Signed-off-by: Zhou Peng
Tested-by: Nicolas Dufresne
---
 drivers/media/platform/amphion/vpu_codec.h | 68 ++
 drivers/media/platform/amphion/vpu_core.c | 871 +++++++++++++++++++++
 drivers/media/platform/amphion/vpu_core.h | 15 +
 drivers/media/platform/amphion/vpu_dbg.c | 495 ++++++++++++
 drivers/media/platform/amphion/vpu_rpc.c | 257 ++++++
 drivers/media/platform/amphion/vpu_rpc.h | 456 +++++++++++
 6 files changed, 2162 insertions(+)
 create mode 100644 drivers/media/platform/amphion/vpu_codec.h
 create mode 100644 drivers/media/platform/amphion/vpu_core.c
 create mode 100644 drivers/media/platform/amphion/vpu_core.h
 create mode 100644 drivers/media/platform/amphion/vpu_dbg.c
 create mode 100644 drivers/media/platform/amphion/vpu_rpc.c
 create mode 100644 drivers/media/platform/amphion/vpu_rpc.h

diff --git a/drivers/media/platform/amphion/vpu_codec.h b/drivers/media/platform/amphion/vpu_codec.h
new file mode 100644
index 000000000000..528a93f08ecd
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_codec.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_CODEC_H
+#define _AMPHION_VPU_CODEC_H
+
+struct vpu_encode_params {
+	u32 input_format;
+	u32 codec_format;
+	u32 profile;
+	u32 tier;
+	u32 level;
+	struct v4l2_fract frame_rate;
+	u32 src_stride;
+	u32 src_width;
+	u32 src_height;
+	struct v4l2_rect crop;
+	u32 out_width;
+	u32 out_height;
+
+	u32 gop_length;
+	u32 bframes;
+
+	u32 rc_enable;
+	u32 rc_mode;
+	u32 bitrate;
+	u32 bitrate_min;
+	u32 bitrate_max;
+
+	u32 i_frame_qp;
+	u32 p_frame_qp;
+	u32 b_frame_qp;
+	u32 qp_min;
+	u32 qp_max;
+	u32 qp_min_i;
+	u32 qp_max_i;
+
+	struct {
+		u32 enable;
+		u32 idc;
+		u32 width;
+		u32 height;
+	} sar;
+
+	struct {
+		u32 primaries;
+		u32 transfer;
+		u32 matrix;
+		u32 full_range;
+	} color;
+};
+
+struct vpu_decode_params {
+	u32 codec_format;
+	u32 output_format;
+	u32 b_dis_reorder;
+	u32 b_non_frame;
+	u32 frame_count;
+	u32 end_flag;
+	struct {
+		u32 base;
+		u32 size;
+	} udata;
+};
+
+#endif
diff --git a/drivers/media/platform/amphion/vpu_core.c b/drivers/media/platform/amphion/vpu_core.c
new file mode 100644
index 000000000000..07dcc8bb1c94
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_core.c
@@ -0,0 +1,871 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "vpu.h"
+#include "vpu_defs.h"
+#include "vpu_core.h"
+#include "vpu_mbox.h"
+#include "vpu_msgs.h"
+#include
"vpu_rpc.h" +#include "vpu_cmds.h" + +void csr_writel(struct vpu_core *core, u32 reg, u32 val) +{ + writel(val, core->base + reg); +} + +u32 csr_readl(struct vpu_core *core, u32 reg) +{ + return readl(core->base + reg); +} + +static int vpu_core_load_firmware(struct vpu_core *core) +{ + const struct firmware *pfw = NULL; + int ret = 0; + + if (!core->fw.virt) { + dev_err(core->dev, "firmware buffer is not ready\n"); + return -EINVAL; + } + + ret = request_firmware(&pfw, core->res->fwname, core->dev); + dev_dbg(core->dev, "request_firmware %s : %d\n", core->res->fwname, ret); + if (ret) { + dev_err(core->dev, "request firmware %s failed, ret = %d\n", + core->res->fwname, ret); + return ret; + } + + if (core->fw.length < pfw->size) { + dev_err(core->dev, "firmware buffer size want %zu, but %d\n", + pfw->size, core->fw.length); + ret = -EINVAL; + goto exit; + } + + memset_io(core->fw.virt, 0, core->fw.length); + memcpy(core->fw.virt, pfw->data, pfw->size); + core->fw.bytesused = pfw->size; + ret = vpu_iface_on_firmware_loaded(core); +exit: + release_firmware(pfw); + pfw = NULL; + + return ret; +} + +static int vpu_core_boot_done(struct vpu_core *core) +{ + u32 fw_version; + + fw_version = vpu_iface_get_version(core); + dev_info(core->dev, "%s firmware version : %d.%d.%d\n", + vpu_core_type_desc(core->type), + (fw_version >> 16) & 0xff, + (fw_version >> 8) & 0xff, + fw_version & 0xff); + core->supported_instance_count = vpu_iface_get_max_instance_count(core); + if (core->res->act_size) { + u32 count = core->act.length / core->res->act_size; + + core->supported_instance_count = min(core->supported_instance_count, count); + } + core->fw_version = fw_version; + core->state = VPU_CORE_ACTIVE; + + return 0; +} + +static int vpu_core_wait_boot_done(struct vpu_core *core) +{ + int ret; + + ret = wait_for_completion_timeout(&core->cmp, VPU_TIMEOUT); + if (!ret) { + dev_err(core->dev, "boot timeout\n"); + return -EINVAL; + } + return vpu_core_boot_done(core); +} + +static int vpu_core_boot(struct vpu_core *core, bool load) +{ + int ret; + + reinit_completion(&core->cmp); + if (load) { + ret = vpu_core_load_firmware(core); + if (ret) + return ret; + } + + vpu_iface_boot_core(core); + return vpu_core_wait_boot_done(core); +} + +static int vpu_core_shutdown(struct vpu_core *core) +{ + return vpu_iface_shutdown_core(core); +} + +static int vpu_core_restore(struct vpu_core *core) +{ + int ret; + + ret = vpu_core_sw_reset(core); + if (ret) + return ret; + + vpu_core_boot_done(core); + return vpu_iface_restore_core(core); +} + +static int __vpu_alloc_dma(struct device *dev, struct vpu_buffer *buf) +{ + gfp_t gfp = GFP_KERNEL | GFP_DMA32; + + if (!buf->length) + return 0; + + buf->virt = dma_alloc_coherent(dev, buf->length, &buf->phys, gfp); + if (!buf->virt) + return -ENOMEM; + + buf->dev = dev; + + return 0; +} + +void vpu_free_dma(struct vpu_buffer *buf) +{ + if (!buf->virt || !buf->dev) + return; + + dma_free_coherent(buf->dev, buf->length, buf->virt, buf->phys); + buf->virt = NULL; + buf->phys = 0; + buf->length = 0; + buf->bytesused = 0; + buf->dev = NULL; +} + +int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf) +{ + return __vpu_alloc_dma(core->dev, buf); +} + +static void vpu_core_check_hang(struct vpu_core *core) +{ + if (core->hang_mask) + core->state = VPU_CORE_HANG; +} + +static struct vpu_core *vpu_core_find_proper_by_type(struct vpu_dev *vpu, u32 type) +{ + struct vpu_core *core = NULL; + int request_count = INT_MAX; + struct vpu_core *c; + + list_for_each_entry(c, &vpu->cores, 
list) { + dev_dbg(c->dev, "instance_mask = 0x%lx, state = %d\n", c->instance_mask, c->state); + if (c->type != type) + continue; + if (c->state == VPU_CORE_DEINIT) { + core = c; + break; + } + vpu_core_check_hang(c); + if (c->state != VPU_CORE_ACTIVE) + continue; + if (c->request_count < request_count) { + request_count = c->request_count; + core = c; + } + if (!request_count) + break; + } + + return core; +} + +static bool vpu_core_is_exist(struct vpu_dev *vpu, struct vpu_core *core) +{ + struct vpu_core *c; + + list_for_each_entry(c, &vpu->cores, list) { + if (c == core) + return true; + } + + return false; +} + +static void vpu_core_get_vpu(struct vpu_core *core) +{ + core->vpu->get_vpu(core->vpu); + if (core->type == VPU_CORE_TYPE_ENC) + core->vpu->get_enc(core->vpu); + if (core->type == VPU_CORE_TYPE_DEC) + core->vpu->get_dec(core->vpu); +} + +static int vpu_core_register(struct device *dev, struct vpu_core *core) +{ + struct vpu_dev *vpu = dev_get_drvdata(dev); + int ret = 0; + + dev_dbg(core->dev, "register core %s\n", vpu_core_type_desc(core->type)); + if (vpu_core_is_exist(vpu, core)) + return 0; + + core->workqueue = alloc_workqueue("vpu", WQ_UNBOUND | WQ_MEM_RECLAIM, 1); + if (!core->workqueue) { + dev_err(core->dev, "fail to alloc workqueue\n"); + return -ENOMEM; + } + INIT_WORK(&core->msg_work, vpu_msg_run_work); + INIT_DELAYED_WORK(&core->msg_delayed_work, vpu_msg_delayed_work); + core->msg_buffer_size = roundup_pow_of_two(VPU_MSG_BUFFER_SIZE); + core->msg_buffer = vzalloc(core->msg_buffer_size); + if (!core->msg_buffer) { + dev_err(core->dev, "failed allocate buffer for fifo\n"); + ret = -ENOMEM; + goto error; + } + ret = kfifo_init(&core->msg_fifo, core->msg_buffer, core->msg_buffer_size); + if (ret) { + dev_err(core->dev, "failed init kfifo\n"); + goto error; + } + + list_add_tail(&core->list, &vpu->cores); + + vpu_core_get_vpu(core); + + if (vpu_iface_get_power_state(core)) + ret = vpu_core_restore(core); + if (ret) + goto error; + + return 0; +error: + if (core->msg_buffer) { + vfree(core->msg_buffer); + core->msg_buffer = NULL; + } + if (core->workqueue) { + destroy_workqueue(core->workqueue); + core->workqueue = NULL; + } + return ret; +} + +static void vpu_core_put_vpu(struct vpu_core *core) +{ + if (core->type == VPU_CORE_TYPE_ENC) + core->vpu->put_enc(core->vpu); + if (core->type == VPU_CORE_TYPE_DEC) + core->vpu->put_dec(core->vpu); + core->vpu->put_vpu(core->vpu); +} + +static int vpu_core_unregister(struct device *dev, struct vpu_core *core) +{ + list_del_init(&core->list); + + vpu_core_put_vpu(core); + core->vpu = NULL; + vfree(core->msg_buffer); + core->msg_buffer = NULL; + + if (core->workqueue) { + cancel_work_sync(&core->msg_work); + cancel_delayed_work_sync(&core->msg_delayed_work); + destroy_workqueue(core->workqueue); + core->workqueue = NULL; + } + + return 0; +} + +static int vpu_core_acquire_instance(struct vpu_core *core) +{ + int id; + + id = ffz(core->instance_mask); + if (id >= core->supported_instance_count) + return -EINVAL; + + set_bit(id, &core->instance_mask); + + return id; +} + +static void vpu_core_release_instance(struct vpu_core *core, int id) +{ + if (id < 0 || id >= core->supported_instance_count) + return; + + clear_bit(id, &core->instance_mask); +} + +struct vpu_inst *vpu_inst_get(struct vpu_inst *inst) +{ + if (!inst) + return NULL; + + atomic_inc(&inst->ref_count); + + return inst; +} + +void vpu_inst_put(struct vpu_inst *inst) +{ + if (!inst) + return; + if (atomic_dec_and_test(&inst->ref_count)) { + if (inst->release) + 
inst->release(inst); + } +} + +struct vpu_core *vpu_request_core(struct vpu_dev *vpu, enum vpu_core_type type) +{ + struct vpu_core *core = NULL; + int ret; + + mutex_lock(&vpu->lock); + + core = vpu_core_find_proper_by_type(vpu, type); + if (!core) + goto exit; + + mutex_lock(&core->lock); + pm_runtime_get_sync(core->dev); + + if (core->state == VPU_CORE_DEINIT) { + ret = vpu_core_boot(core, true); + if (ret) { + pm_runtime_put_sync(core->dev); + mutex_unlock(&core->lock); + core = NULL; + goto exit; + } + } + + core->request_count++; + + mutex_unlock(&core->lock); +exit: + mutex_unlock(&vpu->lock); + + return core; +} + +void vpu_release_core(struct vpu_core *core) +{ + if (!core) + return; + + mutex_lock(&core->lock); + pm_runtime_put_sync(core->dev); + if (core->request_count) + core->request_count--; + mutex_unlock(&core->lock); +} + +int vpu_inst_register(struct vpu_inst *inst) +{ + struct vpu_dev *vpu; + struct vpu_core *core; + int ret = 0; + + vpu = inst->vpu; + core = inst->core; + if (!core) { + core = vpu_request_core(vpu, inst->type); + if (!core) { + dev_err(vpu->dev, "there is no vpu core for %s\n", + vpu_core_type_desc(inst->type)); + return -EINVAL; + } + inst->core = core; + inst->dev = get_device(core->dev); + } + + mutex_lock(&core->lock); + if (inst->id >= 0 && inst->id < core->supported_instance_count) + goto exit; + + ret = vpu_core_acquire_instance(core); + if (ret < 0) + goto exit; + + vpu_trace(inst->dev, "[%d] %p\n", ret, inst); + inst->id = ret; + list_add_tail(&inst->list, &core->instances); + ret = 0; + if (core->res->act_size) { + inst->act.phys = core->act.phys + core->res->act_size * inst->id; + inst->act.virt = core->act.virt + core->res->act_size * inst->id; + inst->act.length = core->res->act_size; + } + vpu_inst_create_dbgfs_file(inst); +exit: + mutex_unlock(&core->lock); + + if (ret) + dev_err(core->dev, "register instance fail\n"); + return ret; +} + +int vpu_inst_unregister(struct vpu_inst *inst) +{ + struct vpu_core *core; + + if (!inst->core) + return 0; + + core = inst->core; + vpu_clear_request(inst); + mutex_lock(&core->lock); + if (inst->id >= 0 && inst->id < core->supported_instance_count) { + vpu_inst_remove_dbgfs_file(inst); + list_del_init(&inst->list); + vpu_core_release_instance(core, inst->id); + inst->id = VPU_INST_NULL_ID; + } + vpu_core_check_hang(core); + if (core->state == VPU_CORE_HANG && !core->instance_mask) { + dev_info(core->dev, "reset hang core\n"); + if (!vpu_core_sw_reset(core)) { + core->state = VPU_CORE_ACTIVE; + core->hang_mask = 0; + } + } + mutex_unlock(&core->lock); + + return 0; +} + +struct vpu_inst *vpu_core_find_instance(struct vpu_core *core, u32 index) +{ + struct vpu_inst *inst = NULL; + struct vpu_inst *tmp; + + mutex_lock(&core->lock); + if (!test_bit(index, &core->instance_mask)) + goto exit; + list_for_each_entry(tmp, &core->instances, list) { + if (tmp->id == index) { + inst = vpu_inst_get(tmp); + break; + } + } +exit: + mutex_unlock(&core->lock); + + return inst; +} + +const struct vpu_core_resources *vpu_get_resource(struct vpu_inst *inst) +{ + struct vpu_dev *vpu; + struct vpu_core *core = NULL; + const struct vpu_core_resources *res = NULL; + + if (!inst || !inst->vpu) + return NULL; + + if (inst->core && inst->core->res) + return inst->core->res; + + vpu = inst->vpu; + mutex_lock(&vpu->lock); + list_for_each_entry(core, &vpu->cores, list) { + if (core->type == inst->type) { + res = core->res; + break; + } + } + mutex_unlock(&vpu->lock); + + return res; +} + +static int vpu_core_parse_dt(struct 
vpu_core *core, struct device_node *np) +{ + struct device_node *node; + struct resource res; + int ret; + + if (of_count_phandle_with_args(np, "memory-region", NULL) < 2) { + dev_err(core->dev, "need 2 memory-region for boot and rpc\n"); + return -ENODEV; + } + + node = of_parse_phandle(np, "memory-region", 0); + if (!node) { + dev_err(core->dev, "boot-region of_parse_phandle error\n"); + return -ENODEV; + } + if (of_address_to_resource(node, 0, &res)) { + dev_err(core->dev, "boot-region of_address_to_resource error\n"); + return -EINVAL; + } + core->fw.phys = res.start; + core->fw.length = resource_size(&res); + + node = of_parse_phandle(np, "memory-region", 1); + if (!node) { + dev_err(core->dev, "rpc-region of_parse_phandle error\n"); + return -ENODEV; + } + if (of_address_to_resource(node, 0, &res)) { + dev_err(core->dev, "rpc-region of_address_to_resource error\n"); + return -EINVAL; + } + core->rpc.phys = res.start; + core->rpc.length = resource_size(&res); + + if (core->rpc.length < core->res->rpc_size + core->res->fwlog_size) { + dev_err(core->dev, "the rpc-region <%pad, 0x%x> is not enough\n", + &core->rpc.phys, core->rpc.length); + return -EINVAL; + } + + core->fw.virt = ioremap_wc(core->fw.phys, core->fw.length); + core->rpc.virt = ioremap_wc(core->rpc.phys, core->rpc.length); + memset_io(core->rpc.virt, 0, core->rpc.length); + + ret = vpu_iface_check_memory_region(core, core->rpc.phys, core->rpc.length); + if (ret != VPU_CORE_MEMORY_UNCACHED) { + dev_err(core->dev, "rpc region<%pad, 0x%x> isn't uncached\n", + &core->rpc.phys, core->rpc.length); + return -EINVAL; + } + + core->log.phys = core->rpc.phys + core->res->rpc_size; + core->log.virt = core->rpc.virt + core->res->rpc_size; + core->log.length = core->res->fwlog_size; + core->act.phys = core->log.phys + core->log.length; + core->act.virt = core->log.virt + core->log.length; + core->act.length = core->rpc.length - core->res->rpc_size - core->log.length; + core->rpc.length = core->res->rpc_size; + + return 0; +} + +static int vpu_core_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct vpu_core *core; + struct vpu_dev *vpu = dev_get_drvdata(dev->parent); + struct vpu_shared_addr *iface; + u32 iface_data_size; + int ret; + + dev_dbg(dev, "probe\n"); + if (!vpu) + return -EINVAL; + core = devm_kzalloc(dev, sizeof(*core), GFP_KERNEL); + if (!core) + return -ENOMEM; + + core->pdev = pdev; + core->dev = dev; + platform_set_drvdata(pdev, core); + core->vpu = vpu; + INIT_LIST_HEAD(&core->instances); + mutex_init(&core->lock); + mutex_init(&core->cmd_lock); + init_completion(&core->cmp); + init_waitqueue_head(&core->ack_wq); + core->state = VPU_CORE_DEINIT; + + core->res = of_device_get_match_data(dev); + if (!core->res) + return -ENODEV; + + core->type = core->res->type; + core->id = of_alias_get_id(dev->of_node, "vpu_core"); + if (core->id < 0) { + dev_err(dev, "can't get vpu core id\n"); + return core->id; + } + dev_info(core->dev, "[%d] = %s\n", core->id, vpu_core_type_desc(core->type)); + ret = vpu_core_parse_dt(core, dev->of_node); + if (ret) + return ret; + + core->base = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(core->base)) + return PTR_ERR(core->base); + + if (!vpu_iface_check_codec(core)) { + dev_err(core->dev, "is not supported\n"); + return -EINVAL; + } + + ret = vpu_mbox_init(core); + if (ret) + return ret; + + iface = devm_kzalloc(dev, sizeof(*iface), GFP_KERNEL); + if (!iface) + return -ENOMEM; + + iface_data_size = vpu_iface_get_data_size(core); + if (iface_data_size) { + 
iface->priv = devm_kzalloc(dev, iface_data_size, GFP_KERNEL); + if (!iface->priv) + return -ENOMEM; + } + + ret = vpu_iface_init(core, iface, &core->rpc, core->fw.phys); + if (ret) { + dev_err(core->dev, "init iface fail, ret = %d\n", ret); + return ret; + } + + vpu_iface_config_system(core, vpu->res->mreg_base, vpu->base); + vpu_iface_set_log_buf(core, &core->log); + + pm_runtime_enable(dev); + ret = pm_runtime_get_sync(dev); + if (ret) { + pm_runtime_put_noidle(dev); + pm_runtime_set_suspended(dev); + goto err_runtime_disable; + } + + ret = vpu_core_register(dev->parent, core); + if (ret) + goto err_core_register; + core->parent = dev->parent; + + pm_runtime_put_sync(dev); + vpu_core_create_dbgfs_file(core); + + return 0; + +err_core_register: + pm_runtime_put_sync(dev); +err_runtime_disable: + pm_runtime_disable(dev); + + return ret; +} + +static int vpu_core_remove(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct vpu_core *core = platform_get_drvdata(pdev); + int ret; + + vpu_core_remove_dbgfs_file(core); + ret = pm_runtime_get_sync(dev); + WARN_ON(ret < 0); + + vpu_core_shutdown(core); + pm_runtime_put_sync(dev); + pm_runtime_disable(dev); + + vpu_core_unregister(core->parent, core); + iounmap(core->fw.virt); + iounmap(core->rpc.virt); + mutex_destroy(&core->lock); + mutex_destroy(&core->cmd_lock); + + return 0; +} + +static int __maybe_unused vpu_core_runtime_resume(struct device *dev) +{ + struct vpu_core *core = dev_get_drvdata(dev); + + return vpu_mbox_request(core); +} + +static int __maybe_unused vpu_core_runtime_suspend(struct device *dev) +{ + struct vpu_core *core = dev_get_drvdata(dev); + + vpu_mbox_free(core); + return 0; +} + +static void vpu_core_cancel_work(struct vpu_core *core) +{ + struct vpu_inst *inst = NULL; + + cancel_work_sync(&core->msg_work); + cancel_delayed_work_sync(&core->msg_delayed_work); + + mutex_lock(&core->lock); + list_for_each_entry(inst, &core->instances, list) + cancel_work_sync(&inst->msg_work); + mutex_unlock(&core->lock); +} + +static void vpu_core_resume_work(struct vpu_core *core) +{ + struct vpu_inst *inst = NULL; + unsigned long delay = msecs_to_jiffies(10); + + queue_work(core->workqueue, &core->msg_work); + queue_delayed_work(core->workqueue, &core->msg_delayed_work, delay); + + mutex_lock(&core->lock); + list_for_each_entry(inst, &core->instances, list) + queue_work(inst->workqueue, &inst->msg_work); + mutex_unlock(&core->lock); +} + +static int __maybe_unused vpu_core_resume(struct device *dev) +{ + struct vpu_core *core = dev_get_drvdata(dev); + int ret = 0; + + mutex_lock(&core->lock); + pm_runtime_get_sync(dev); + vpu_core_get_vpu(core); + if (core->state != VPU_CORE_SNAPSHOT) + goto exit; + + if (!vpu_iface_get_power_state(core)) { + if (!list_empty(&core->instances)) { + ret = vpu_core_boot(core, false); + if (ret) { + dev_err(core->dev, "%s boot fail\n", __func__); + core->state = VPU_CORE_DEINIT; + goto exit; + } + } else { + core->state = VPU_CORE_DEINIT; + } + } else { + if (!list_empty(&core->instances)) { + ret = vpu_core_sw_reset(core); + if (ret) { + dev_err(core->dev, "%s sw_reset fail\n", __func__); + core->state = VPU_CORE_HANG; + goto exit; + } + } + core->state = VPU_CORE_ACTIVE; + } + +exit: + pm_runtime_put_sync(dev); + mutex_unlock(&core->lock); + + vpu_core_resume_work(core); + return ret; +} + +static int __maybe_unused vpu_core_suspend(struct device *dev) +{ + struct vpu_core *core = dev_get_drvdata(dev); + int ret = 0; + + mutex_lock(&core->lock); + if (core->state == VPU_CORE_ACTIVE) 
{
+		if (!list_empty(&core->instances)) {
+			ret = vpu_core_snapshot(core);
+			if (ret) {
+				mutex_unlock(&core->lock);
+				return ret;
+			}
+		}
+
+		core->state = VPU_CORE_SNAPSHOT;
+	}
+	mutex_unlock(&core->lock);
+
+	vpu_core_cancel_work(core);
+
+	mutex_lock(&core->lock);
+	vpu_core_put_vpu(core);
+	mutex_unlock(&core->lock);
+	return ret;
+}
+
+static const struct dev_pm_ops vpu_core_pm_ops = {
+	SET_RUNTIME_PM_OPS(vpu_core_runtime_suspend, vpu_core_runtime_resume, NULL)
+	SET_SYSTEM_SLEEP_PM_OPS(vpu_core_suspend, vpu_core_resume)
+};
+
+static struct vpu_core_resources imx8q_enc = {
+	.type = VPU_CORE_TYPE_ENC,
+	.fwname = "vpu/vpu_fw_imx8_enc.bin",
+	.stride = 16,
+	.max_width = 1920,
+	.max_height = 1920,
+	.min_width = 64,
+	.min_height = 48,
+	.step_width = 2,
+	.step_height = 2,
+	.rpc_size = 0x80000,
+	.fwlog_size = 0x80000,
+	.act_size = 0xc0000,
+};
+
+static struct vpu_core_resources imx8q_dec = {
+	.type = VPU_CORE_TYPE_DEC,
+	.fwname = "vpu/vpu_fw_imx8_dec.bin",
+	.stride = 256,
+	.max_width = 8188,
+	.max_height = 8188,
+	.min_width = 16,
+	.min_height = 16,
+	.step_width = 1,
+	.step_height = 1,
+	.rpc_size = 0x80000,
+	.fwlog_size = 0x80000,
+};
+
+static const struct of_device_id vpu_core_dt_match[] = {
+	{ .compatible = "nxp,imx8q-vpu-encoder", .data = &imx8q_enc },
+	{ .compatible = "nxp,imx8q-vpu-decoder", .data = &imx8q_dec },
+	{}
+};
+MODULE_DEVICE_TABLE(of, vpu_core_dt_match);
+
+static struct platform_driver amphion_vpu_core_driver = {
+	.probe = vpu_core_probe,
+	.remove = vpu_core_remove,
+	.driver = {
+		.name = "amphion-vpu-core",
+		.of_match_table = vpu_core_dt_match,
+		.pm = &vpu_core_pm_ops,
+	},
+};
+
+int __init vpu_core_driver_init(void)
+{
+	return platform_driver_register(&amphion_vpu_core_driver);
+}
+
+void __exit vpu_core_driver_exit(void)
+{
+	platform_driver_unregister(&amphion_vpu_core_driver);
+}
diff --git a/drivers/media/platform/amphion/vpu_core.h b/drivers/media/platform/amphion/vpu_core.h
new file mode 100644
index 000000000000..00a662997da4
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_core.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_CORE_H
+#define _AMPHION_VPU_CORE_H
+
+void csr_writel(struct vpu_core *core, u32 reg, u32 val);
+u32 csr_readl(struct vpu_core *core, u32 reg);
+int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf);
+void vpu_free_dma(struct vpu_buffer *buf);
+struct vpu_inst *vpu_core_find_instance(struct vpu_core *core, u32 index);
+
+#endif
diff --git a/drivers/media/platform/amphion/vpu_dbg.c b/drivers/media/platform/amphion/vpu_dbg.c
new file mode 100644
index 000000000000..23a01c0dbf9d
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_dbg.c
@@ -0,0 +1,495 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "vpu.h"
+#include "vpu_defs.h"
+#include "vpu_helpers.h"
+#include "vpu_cmds.h"
+#include "vpu_rpc.h"
+#include "vpu_v4l2.h"
+
+struct print_buf_desc {
+	u32 start_h_phy;
+	u32 start_h_vir;
+	u32 start_m;
+	u32 bytes;
+	u32 read;
+	u32 write;
+	char buffer[0];
+};
+
+static char *vb2_stat_name[] = {
+	[VB2_BUF_STATE_DEQUEUED] = "dequeued",
+	[VB2_BUF_STATE_IN_REQUEST] = "in_request",
+	[VB2_BUF_STATE_PREPARING] = "preparing",
+	[VB2_BUF_STATE_QUEUED] = "queued",
+	[VB2_BUF_STATE_ACTIVE] = "active",
+	[VB2_BUF_STATE_DONE] = "done",
+	[VB2_BUF_STATE_ERROR] = "error",
+};
+
+static char
*vpu_stat_name[] = { + [VPU_BUF_STATE_IDLE] = "idle", + [VPU_BUF_STATE_INUSE] = "inuse", + [VPU_BUF_STATE_DECODED] = "decoded", + [VPU_BUF_STATE_READY] = "ready", + [VPU_BUF_STATE_SKIP] = "skip", + [VPU_BUF_STATE_ERROR] = "error", +}; + +static int vpu_dbg_instance(struct seq_file *s, void *data) +{ + struct vpu_inst *inst = s->private; + char str[128]; + int num; + struct vb2_queue *vq; + int i; + + if (!inst || !inst->fh.m2m_ctx) + return 0; + + num = scnprintf(str, sizeof(str), "[%s]\n", vpu_core_type_desc(inst->type)); + if (seq_write(s, str, num)) + return 0; + + num = scnprintf(str, sizeof(str), "tgig = %d,pid = %d\n", inst->tgid, inst->pid); + if (seq_write(s, str, num)) + return 0; + num = scnprintf(str, sizeof(str), "state = %d\n", inst->state); + if (seq_write(s, str, num)) + return 0; + num = scnprintf(str, sizeof(str), + "min_buffer_out = %d, min_buffer_cap = %d\n", + inst->min_buffer_out, inst->min_buffer_cap); + if (seq_write(s, str, num)) + return 0; + + vq = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx); + num = scnprintf(str, sizeof(str), + "output (%2d, %2d): fmt = %c%c%c%c %d x %d, %d;", + vb2_is_streaming(vq), + vq->num_buffers, + inst->out_format.pixfmt, + inst->out_format.pixfmt >> 8, + inst->out_format.pixfmt >> 16, + inst->out_format.pixfmt >> 24, + inst->out_format.width, + inst->out_format.height, + vq->last_buffer_dequeued); + if (seq_write(s, str, num)) + return 0; + for (i = 0; i < inst->out_format.num_planes; i++) { + num = scnprintf(str, sizeof(str), " %d(%d)", + inst->out_format.sizeimage[i], + inst->out_format.bytesperline[i]); + if (seq_write(s, str, num)) + return 0; + } + if (seq_write(s, "\n", 1)) + return 0; + + vq = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx); + num = scnprintf(str, sizeof(str), + "capture(%2d, %2d): fmt = %c%c%c%c %d x %d, %d;", + vb2_is_streaming(vq), + vq->num_buffers, + inst->cap_format.pixfmt, + inst->cap_format.pixfmt >> 8, + inst->cap_format.pixfmt >> 16, + inst->cap_format.pixfmt >> 24, + inst->cap_format.width, + inst->cap_format.height, + vq->last_buffer_dequeued); + if (seq_write(s, str, num)) + return 0; + for (i = 0; i < inst->cap_format.num_planes; i++) { + num = scnprintf(str, sizeof(str), " %d(%d)", + inst->cap_format.sizeimage[i], + inst->cap_format.bytesperline[i]); + if (seq_write(s, str, num)) + return 0; + } + if (seq_write(s, "\n", 1)) + return 0; + num = scnprintf(str, sizeof(str), "crop: (%d, %d) %d x %d\n", + inst->crop.left, + inst->crop.top, + inst->crop.width, + inst->crop.height); + if (seq_write(s, str, num)) + return 0; + + vq = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx); + for (i = 0; i < vq->num_buffers; i++) { + struct vb2_buffer *vb = vq->bufs[i]; + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + + if (vb->state == VB2_BUF_STATE_DEQUEUED) + continue; + num = scnprintf(str, sizeof(str), + "output [%2d] state = %10s, %8s\n", + i, vb2_stat_name[vb->state], + vpu_stat_name[vpu_get_buffer_state(vbuf)]); + if (seq_write(s, str, num)) + return 0; + } + + vq = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx); + for (i = 0; i < vq->num_buffers; i++) { + struct vb2_buffer *vb = vq->bufs[i]; + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + + if (vb->state == VB2_BUF_STATE_DEQUEUED) + continue; + num = scnprintf(str, sizeof(str), + "capture[%2d] state = %10s, %8s\n", + i, vb2_stat_name[vb->state], + vpu_stat_name[vpu_get_buffer_state(vbuf)]); + if (seq_write(s, str, num)) + return 0; + } + + num = scnprintf(str, sizeof(str), "sequence = %d\n", inst->sequence); + if (seq_write(s, str, num)) + return 0; + + if 
(inst->use_stream_buffer) { + num = scnprintf(str, sizeof(str), "stream_buffer = %d / %d, <%pad, 0x%x>\n", + vpu_helper_get_used_space(inst), + inst->stream_buffer.length, + &inst->stream_buffer.phys, + inst->stream_buffer.length); + if (seq_write(s, str, num)) + return 0; + } + num = scnprintf(str, sizeof(str), "kfifo len = 0x%x\n", kfifo_len(&inst->msg_fifo)); + if (seq_write(s, str, num)) + return 0; + + num = scnprintf(str, sizeof(str), "flow :\n"); + if (seq_write(s, str, num)) + return 0; + + mutex_lock(&inst->core->cmd_lock); + for (i = 0; i < ARRAY_SIZE(inst->flows); i++) { + u32 idx = (inst->flow_idx + i) % (ARRAY_SIZE(inst->flows)); + + if (!inst->flows[idx]) + continue; + num = scnprintf(str, sizeof(str), "\t[%s]0x%x\n", + inst->flows[idx] >= VPU_MSG_ID_NOOP ? "M" : "C", + inst->flows[idx]); + if (seq_write(s, str, num)) { + mutex_unlock(&inst->core->cmd_lock); + return 0; + } + } + mutex_unlock(&inst->core->cmd_lock); + + i = 0; + while (true) { + num = call_vop(inst, get_debug_info, str, sizeof(str), i++); + if (num <= 0) + break; + if (seq_write(s, str, num)) + return 0; + } + + return 0; +} + +static int vpu_dbg_core(struct seq_file *s, void *data) +{ + struct vpu_core *core = s->private; + struct vpu_shared_addr *iface = core->iface; + char str[128]; + int num; + + num = scnprintf(str, sizeof(str), "[%s]\n", vpu_core_type_desc(core->type)); + if (seq_write(s, str, num)) + return 0; + + num = scnprintf(str, sizeof(str), "boot_region = <%pad, 0x%x>\n", + &core->fw.phys, core->fw.length); + if (seq_write(s, str, num)) + return 0; + num = scnprintf(str, sizeof(str), "rpc_region = <%pad, 0x%x> used = 0x%x\n", + &core->rpc.phys, core->rpc.length, core->rpc.bytesused); + if (seq_write(s, str, num)) + return 0; + num = scnprintf(str, sizeof(str), "fwlog_region = <%pad, 0x%x>\n", + &core->log.phys, core->log.length); + if (seq_write(s, str, num)) + return 0; + + num = scnprintf(str, sizeof(str), "state = %d\n", core->state); + if (seq_write(s, str, num)) + return 0; + if (core->state == VPU_CORE_DEINIT) + return 0; + num = scnprintf(str, sizeof(str), "fw version = %d.%d.%d\n", + (core->fw_version >> 16) & 0xff, + (core->fw_version >> 8) & 0xff, + core->fw_version & 0xff); + if (seq_write(s, str, num)) + return 0; + num = scnprintf(str, sizeof(str), "instances = %d/%d (0x%02lx), %d\n", + hweight32(core->instance_mask), + core->supported_instance_count, + core->instance_mask, + core->request_count); + if (seq_write(s, str, num)) + return 0; + num = scnprintf(str, sizeof(str), "kfifo len = 0x%x\n", kfifo_len(&core->msg_fifo)); + if (seq_write(s, str, num)) + return 0; + num = scnprintf(str, sizeof(str), + "cmd_buf:[0x%x, 0x%x], wptr = 0x%x, rptr = 0x%x\n", + iface->cmd_desc->start, + iface->cmd_desc->end, + iface->cmd_desc->wptr, + iface->cmd_desc->rptr); + if (seq_write(s, str, num)) + return 0; + num = scnprintf(str, sizeof(str), + "msg_buf:[0x%x, 0x%x], wptr = 0x%x, rptr = 0x%x\n", + iface->msg_desc->start, + iface->msg_desc->end, + iface->msg_desc->wptr, + iface->msg_desc->rptr); + if (seq_write(s, str, num)) + return 0; + + return 0; +} + +static int vpu_dbg_fwlog(struct seq_file *s, void *data) +{ + struct vpu_core *core = s->private; + struct print_buf_desc *print_buf; + int length; + u32 rptr; + u32 wptr; + int ret = 0; + + if (!core->log.virt || core->state == VPU_CORE_DEINIT) + return 0; + + print_buf = core->log.virt; + rptr = print_buf->read; + wptr = print_buf->write; + + if (rptr == wptr) + return 0; + else if (rptr < wptr) + length = wptr - rptr; + else + length = 
print_buf->bytes + wptr - rptr; + + if (s->count + length >= s->size) { + s->count = s->size; + return 0; + } + + if (rptr + length >= print_buf->bytes) { + int num = print_buf->bytes - rptr; + + if (seq_write(s, print_buf->buffer + rptr, num)) + ret = -1; + length -= num; + rptr = 0; + } + + if (length) { + if (seq_write(s, print_buf->buffer + rptr, length)) + ret = -1; + rptr += length; + } + if (!ret) + print_buf->read = rptr; + + return 0; +} + +static int vpu_dbg_inst_open(struct inode *inode, struct file *filp) +{ + return single_open(filp, vpu_dbg_instance, inode->i_private); +} + +static ssize_t vpu_dbg_inst_write(struct file *file, + const char __user *user_buf, size_t size, loff_t *ppos) +{ + struct seq_file *s = file->private_data; + struct vpu_inst *inst = s->private; + + vpu_session_debug(inst); + + return size; +} + +static ssize_t vpu_dbg_core_write(struct file *file, + const char __user *user_buf, size_t size, loff_t *ppos) +{ + struct seq_file *s = file->private_data; + struct vpu_core *core = s->private; + + pm_runtime_get_sync(core->dev); + mutex_lock(&core->lock); + if (core->state != VPU_CORE_DEINIT && !core->instance_mask) { + dev_info(core->dev, "reset\n"); + if (!vpu_core_sw_reset(core)) { + core->state = VPU_CORE_ACTIVE; + core->hang_mask = 0; + } + } + mutex_unlock(&core->lock); + pm_runtime_put_sync(core->dev); + + return size; +} + +static int vpu_dbg_core_open(struct inode *inode, struct file *filp) +{ + return single_open(filp, vpu_dbg_core, inode->i_private); +} + +static int vpu_dbg_fwlog_open(struct inode *inode, struct file *filp) +{ + return single_open(filp, vpu_dbg_fwlog, inode->i_private); +} + +static const struct file_operations vpu_dbg_inst_fops = { + .owner = THIS_MODULE, + .open = vpu_dbg_inst_open, + .release = single_release, + .read = seq_read, + .write = vpu_dbg_inst_write, +}; + +static const struct file_operations vpu_dbg_core_fops = { + .owner = THIS_MODULE, + .open = vpu_dbg_core_open, + .release = single_release, + .read = seq_read, + .write = vpu_dbg_core_write, +}; + +static const struct file_operations vpu_dbg_fwlog_fops = { + .owner = THIS_MODULE, + .open = vpu_dbg_fwlog_open, + .release = single_release, + .read = seq_read, +}; + +int vpu_inst_create_dbgfs_file(struct vpu_inst *inst) +{ + struct vpu_dev *vpu; + char name[64]; + + if (!inst || !inst->core || !inst->core->vpu) + return -EINVAL; + + vpu = inst->core->vpu; + if (!vpu->debugfs) + return -EINVAL; + + if (inst->debugfs) + return 0; + + scnprintf(name, sizeof(name), "instance.%d.%d", inst->core->id, inst->id); + inst->debugfs = debugfs_create_file((const char *)name, + VERIFY_OCTAL_PERMISSIONS(0644), + vpu->debugfs, + inst, + &vpu_dbg_inst_fops); + if (!inst->debugfs) { + dev_err(inst->dev, "vpu create debugfs %s fail\n", name); + return -EINVAL; + } + + return 0; +} + +int vpu_inst_remove_dbgfs_file(struct vpu_inst *inst) +{ + if (!inst) + return 0; + + debugfs_remove(inst->debugfs); + inst->debugfs = NULL; + + return 0; +} + +int vpu_core_create_dbgfs_file(struct vpu_core *core) +{ + struct vpu_dev *vpu; + char name[64]; + + if (!core || !core->vpu) + return -EINVAL; + + vpu = core->vpu; + if (!vpu->debugfs) + return -EINVAL; + + if (!core->debugfs) { + scnprintf(name, sizeof(name), "core.%d", core->id); + core->debugfs = debugfs_create_file((const char *)name, + VERIFY_OCTAL_PERMISSIONS(0644), + vpu->debugfs, + core, + &vpu_dbg_core_fops); + if (!core->debugfs) { + dev_err(core->dev, "vpu create debugfs %s fail\n", name); + return -EINVAL; + } + } + if 
(!core->debugfs_fwlog) { + scnprintf(name, sizeof(name), "fwlog.%d", core->id); + core->debugfs_fwlog = debugfs_create_file((const char *)name, + VERIFY_OCTAL_PERMISSIONS(0444), + vpu->debugfs, + core, + &vpu_dbg_fwlog_fops); + if (!core->debugfs_fwlog) { + dev_err(core->dev, "vpu create debugfs %s fail\n", name); + return -EINVAL; + } + } + + return 0; +} + +int vpu_core_remove_dbgfs_file(struct vpu_core *core) +{ + if (!core) + return 0; + debugfs_remove(core->debugfs); + core->debugfs = NULL; + debugfs_remove(core->debugfs_fwlog); + core->debugfs_fwlog = NULL; + + return 0; +} + +void vpu_inst_record_flow(struct vpu_inst *inst, u32 flow) +{ + if (!inst) + return; + + inst->flows[inst->flow_idx] = flow; + inst->flow_idx = (inst->flow_idx + 1) % (ARRAY_SIZE(inst->flows)); +} diff --git a/drivers/media/platform/amphion/vpu_rpc.c b/drivers/media/platform/amphion/vpu_rpc.c new file mode 100644 index 000000000000..16fb55927e0e --- /dev/null +++ b/drivers/media/platform/amphion/vpu_rpc.c @@ -0,0 +1,257 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2020-2021 NXP + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "vpu.h" +#include "vpu_rpc.h" +#include "vpu_imx8q.h" +#include "vpu_windsor.h" +#include "vpu_malone.h" + +u32 vpu_iface_check_memory_region(struct vpu_core *core, dma_addr_t addr, u32 size) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (!ops || !ops->check_memory_region) + return VPU_CORE_MEMORY_INVALID; + + return ops->check_memory_region(core->fw.phys, addr, size); +} + +static u32 vpu_rpc_check_buffer_space(struct vpu_rpc_buffer_desc *desc, bool write) +{ + u32 ptr1; + u32 ptr2; + u32 size; + + size = desc->end - desc->start; + if (write) { + ptr1 = desc->wptr; + ptr2 = desc->rptr; + } else { + ptr1 = desc->rptr; + ptr2 = desc->wptr; + } + + if (ptr1 == ptr2) { + if (!write) + return 0; + else + return size; + } + + return (ptr2 + size - ptr1) % size; +} + +static int vpu_rpc_send_cmd_buf(struct vpu_shared_addr *shared, struct vpu_rpc_event *cmd) +{ + struct vpu_rpc_buffer_desc *desc; + u32 space = 0; + u32 *data; + u32 wptr; + u32 i; + + desc = shared->cmd_desc; + space = vpu_rpc_check_buffer_space(desc, true); + if (space < (((cmd->hdr.num + 1) << 2) + 16)) + return -EINVAL; + wptr = desc->wptr; + data = (u32 *)(shared->cmd_mem_vir + desc->wptr - desc->start); + *data = 0; + *data |= ((cmd->hdr.index & 0xff) << 24); + *data |= ((cmd->hdr.num & 0xff) << 16); + *data |= (cmd->hdr.id & 0x3fff); + wptr += 4; + data++; + if (wptr >= desc->end) { + wptr = desc->start; + data = shared->cmd_mem_vir; + } + + for (i = 0; i < cmd->hdr.num; i++) { + *data = cmd->data[i]; + wptr += 4; + data++; + if (wptr >= desc->end) { + wptr = desc->start; + data = shared->cmd_mem_vir; + } + } + + /*update wptr after data is written*/ + mb(); + desc->wptr = wptr; + + return 0; +} + +static bool vpu_rpc_check_msg(struct vpu_shared_addr *shared) +{ + struct vpu_rpc_buffer_desc *desc; + u32 space = 0; + u32 msgword; + u32 msgnum; + + desc = shared->msg_desc; + space = vpu_rpc_check_buffer_space(desc, 0); + space = (space >> 2); + + if (space) { + msgword = *(u32 *)(shared->msg_mem_vir + desc->rptr - desc->start); + msgnum = (msgword & 0xff0000) >> 16; + if (msgnum <= space) + return true; + } + + return false; +} + +static int vpu_rpc_receive_msg_buf(struct vpu_shared_addr *shared, struct vpu_rpc_event *msg) +{ + struct vpu_rpc_buffer_desc *desc; + u32 *data; + u32 msgword; + u32 rptr; + u32 i; + + if 
(!vpu_rpc_check_msg(shared)) + return -EINVAL; + + desc = shared->msg_desc; + data = (u32 *)(shared->msg_mem_vir + desc->rptr - desc->start); + rptr = desc->rptr; + msgword = *data; + data++; + rptr += 4; + if (rptr >= desc->end) { + rptr = desc->start; + data = shared->msg_mem_vir; + } + + msg->hdr.index = (msgword >> 24) & 0xff; + msg->hdr.num = (msgword >> 16) & 0xff; + msg->hdr.id = msgword & 0x3fff; + + if (msg->hdr.num > ARRAY_SIZE(msg->data)) + return -EINVAL; + + for (i = 0; i < msg->hdr.num; i++) { + msg->data[i] = *data; + data++; + rptr += 4; + if (rptr >= desc->end) { + rptr = desc->start; + data = shared->msg_mem_vir; + } + } + + /*update rptr after data is read*/ + mb(); + desc->rptr = rptr; + + return 0; +} + +struct vpu_iface_ops imx8q_rpc_ops[] = { + [VPU_CORE_TYPE_ENC] = { + .check_codec = vpu_imx8q_check_codec, + .check_fmt = vpu_imx8q_check_fmt, + .boot_core = vpu_imx8q_boot_core, + .get_power_state = vpu_imx8q_get_power_state, + .on_firmware_loaded = vpu_imx8q_on_firmware_loaded, + .get_data_size = vpu_windsor_get_data_size, + .check_memory_region = vpu_imx8q_check_memory_region, + .init_rpc = vpu_windsor_init_rpc, + .set_log_buf = vpu_windsor_set_log_buf, + .set_system_cfg = vpu_windsor_set_system_cfg, + .get_version = vpu_windsor_get_version, + .send_cmd_buf = vpu_rpc_send_cmd_buf, + .receive_msg_buf = vpu_rpc_receive_msg_buf, + .pack_cmd = vpu_windsor_pack_cmd, + .convert_msg_id = vpu_windsor_convert_msg_id, + .unpack_msg_data = vpu_windsor_unpack_msg_data, + .config_memory_resource = vpu_windsor_config_memory_resource, + .get_stream_buffer_size = vpu_windsor_get_stream_buffer_size, + .config_stream_buffer = vpu_windsor_config_stream_buffer, + .get_stream_buffer_desc = vpu_windsor_get_stream_buffer_desc, + .update_stream_buffer = vpu_windsor_update_stream_buffer, + .set_encode_params = vpu_windsor_set_encode_params, + .input_frame = vpu_windsor_input_frame, + .get_max_instance_count = vpu_windsor_get_max_instance_count, + }, + [VPU_CORE_TYPE_DEC] = { + .check_codec = vpu_imx8q_check_codec, + .check_fmt = vpu_imx8q_check_fmt, + .boot_core = vpu_imx8q_boot_core, + .get_power_state = vpu_imx8q_get_power_state, + .on_firmware_loaded = vpu_imx8q_on_firmware_loaded, + .get_data_size = vpu_malone_get_data_size, + .check_memory_region = vpu_imx8q_check_memory_region, + .init_rpc = vpu_malone_init_rpc, + .set_log_buf = vpu_malone_set_log_buf, + .set_system_cfg = vpu_malone_set_system_cfg, + .get_version = vpu_malone_get_version, + .send_cmd_buf = vpu_rpc_send_cmd_buf, + .receive_msg_buf = vpu_rpc_receive_msg_buf, + .get_stream_buffer_size = vpu_malone_get_stream_buffer_size, + .config_stream_buffer = vpu_malone_config_stream_buffer, + .set_decode_params = vpu_malone_set_decode_params, + .pack_cmd = vpu_malone_pack_cmd, + .convert_msg_id = vpu_malone_convert_msg_id, + .unpack_msg_data = vpu_malone_unpack_msg_data, + .get_stream_buffer_desc = vpu_malone_get_stream_buffer_desc, + .update_stream_buffer = vpu_malone_update_stream_buffer, + .add_scode = vpu_malone_add_scode, + .input_frame = vpu_malone_input_frame, + .pre_send_cmd = vpu_malone_pre_cmd, + .post_send_cmd = vpu_malone_post_cmd, + .init_instance = vpu_malone_init_instance, + .get_max_instance_count = vpu_malone_get_max_instance_count, + }, +}; + +static struct vpu_iface_ops *vpu_get_iface(struct vpu_dev *vpu, enum vpu_core_type type) +{ + struct vpu_iface_ops *rpc_ops = NULL; + u32 size = 0; + + switch (vpu->res->plat_type) { + case IMX8QXP: + case IMX8QM: + rpc_ops = imx8q_rpc_ops; + size = ARRAY_SIZE(imx8q_rpc_ops); 
+ break; + default: + return NULL; + } + + if (type >= size) + return NULL; + + return &rpc_ops[type]; +} + +struct vpu_iface_ops *vpu_core_get_iface(struct vpu_core *core) +{ + return vpu_get_iface(core->vpu, core->type); +} + +struct vpu_iface_ops *vpu_inst_get_iface(struct vpu_inst *inst) +{ + if (inst->core) + return vpu_core_get_iface(inst->core); + + return vpu_get_iface(inst->vpu, inst->type); +} diff --git a/drivers/media/platform/amphion/vpu_rpc.h b/drivers/media/platform/amphion/vpu_rpc.h new file mode 100644 index 000000000000..c764ff52d026 --- /dev/null +++ b/drivers/media/platform/amphion/vpu_rpc.h @@ -0,0 +1,456 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2020-2021 NXP + */ + +#ifndef _AMPHION_VPU_RPC_H +#define _AMPHION_VPU_RPC_H + +#include +#include "vpu_codec.h" + +struct vpu_rpc_buffer_desc { + u32 wptr; + u32 rptr; + u32 start; + u32 end; +}; + +struct vpu_shared_addr { + void *iface; + struct vpu_rpc_buffer_desc *cmd_desc; + void *cmd_mem_vir; + struct vpu_rpc_buffer_desc *msg_desc; + void *msg_mem_vir; + + unsigned long boot_addr; + struct vpu_core *core; + void *priv; +}; + +struct vpu_rpc_event_header { + u32 index; + u32 id; + u32 num; +}; + +struct vpu_rpc_event { + struct vpu_rpc_event_header hdr; + u32 data[128]; +}; + +struct vpu_iface_ops { + bool (*check_codec)(enum vpu_core_type type); + bool (*check_fmt)(enum vpu_core_type type, u32 pixelfmt); + u32 (*get_data_size)(void); + u32 (*check_memory_region)(dma_addr_t base, dma_addr_t addr, u32 size); + int (*boot_core)(struct vpu_core *core); + int (*shutdown_core)(struct vpu_core *core); + int (*restore_core)(struct vpu_core *core); + int (*get_power_state)(struct vpu_core *core); + int (*on_firmware_loaded)(struct vpu_core *core); + void (*init_rpc)(struct vpu_shared_addr *shared, + struct vpu_buffer *rpc, dma_addr_t boot_addr); + void (*set_log_buf)(struct vpu_shared_addr *shared, + struct vpu_buffer *log); + void (*set_system_cfg)(struct vpu_shared_addr *shared, + u32 regs_base, void __iomem *regs, u32 index); + void (*set_stream_cfg)(struct vpu_shared_addr *shared, u32 index); + u32 (*get_version)(struct vpu_shared_addr *shared); + u32 (*get_max_instance_count)(struct vpu_shared_addr *shared); + int (*get_stream_buffer_size)(struct vpu_shared_addr *shared); + int (*send_cmd_buf)(struct vpu_shared_addr *shared, + struct vpu_rpc_event *cmd); + int (*receive_msg_buf)(struct vpu_shared_addr *shared, + struct vpu_rpc_event *msg); + int (*pack_cmd)(struct vpu_rpc_event *pkt, u32 index, u32 id, void *data); + int (*convert_msg_id)(u32 msg_id); + int (*unpack_msg_data)(struct vpu_rpc_event *pkt, void *data); + int (*input_frame)(struct vpu_shared_addr *shared, + struct vpu_inst *inst, struct vb2_buffer *vb); + int (*config_memory_resource)(struct vpu_shared_addr *shared, + u32 instance, + u32 type, + u32 index, + struct vpu_buffer *buf); + int (*config_stream_buffer)(struct vpu_shared_addr *shared, + u32 instance, + struct vpu_buffer *buf); + int (*update_stream_buffer)(struct vpu_shared_addr *shared, + u32 instance, u32 ptr, bool write); + int (*get_stream_buffer_desc)(struct vpu_shared_addr *shared, + u32 instance, + struct vpu_rpc_buffer_desc *desc); + int (*set_encode_params)(struct vpu_shared_addr *shared, + u32 instance, + struct vpu_encode_params *params, + u32 update); + int (*set_decode_params)(struct vpu_shared_addr *shared, + u32 instance, + struct vpu_decode_params *params, + u32 update); + int (*add_scode)(struct vpu_shared_addr *shared, + u32 instance, + struct vpu_buffer 
*stream_buffer, + u32 pixelformat, + u32 scode_type); + int (*pre_send_cmd)(struct vpu_shared_addr *shared, u32 instance); + int (*post_send_cmd)(struct vpu_shared_addr *shared, u32 instance); + int (*init_instance)(struct vpu_shared_addr *shared, u32 instance); +}; + +enum { + VPU_CORE_MEMORY_INVALID = 0, + VPU_CORE_MEMORY_CACHED, + VPU_CORE_MEMORY_UNCACHED +}; + +struct vpu_rpc_region_t { + dma_addr_t start; + dma_addr_t end; + dma_addr_t type; +}; + +struct vpu_iface_ops *vpu_core_get_iface(struct vpu_core *core); +struct vpu_iface_ops *vpu_inst_get_iface(struct vpu_inst *inst); +u32 vpu_iface_check_memory_region(struct vpu_core *core, dma_addr_t addr, u32 size); + +static inline bool vpu_iface_check_codec(struct vpu_core *core) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (ops && ops->check_codec) + return ops->check_codec(core->type); + + return true; +} + +static inline bool vpu_iface_check_format(struct vpu_inst *inst, u32 pixelfmt) +{ + struct vpu_iface_ops *ops = vpu_inst_get_iface(inst); + + if (ops && ops->check_fmt) + return ops->check_fmt(inst->type, pixelfmt); + + return true; +} + +static inline int vpu_iface_boot_core(struct vpu_core *core) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (ops && ops->boot_core) + return ops->boot_core(core); + return 0; +} + +static inline int vpu_iface_get_power_state(struct vpu_core *core) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (ops && ops->get_power_state) + return ops->get_power_state(core); + return 1; +} + +static inline int vpu_iface_shutdown_core(struct vpu_core *core) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (ops && ops->shutdown_core) + return ops->shutdown_core(core); + return 0; +} + +static inline int vpu_iface_restore_core(struct vpu_core *core) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (ops && ops->restore_core) + return ops->restore_core(core); + return 0; +} + +static inline int vpu_iface_on_firmware_loaded(struct vpu_core *core) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (ops && ops->on_firmware_loaded) + return ops->on_firmware_loaded(core); + + return 0; +} + +static inline u32 vpu_iface_get_data_size(struct vpu_core *core) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (!ops || !ops->get_data_size) + return 0; + + return ops->get_data_size(); +} + +static inline int vpu_iface_init(struct vpu_core *core, + struct vpu_shared_addr *shared, + struct vpu_buffer *rpc, + dma_addr_t boot_addr) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (!ops || !ops->init_rpc) + return -EINVAL; + + ops->init_rpc(shared, rpc, boot_addr); + core->iface = shared; + shared->core = core; + if (rpc->bytesused > rpc->length) + return -ENOSPC; + return 0; +} + +static inline int vpu_iface_set_log_buf(struct vpu_core *core, + struct vpu_buffer *log) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (!ops) + return -EINVAL; + + if (ops->set_log_buf) + ops->set_log_buf(core->iface, log); + + return 0; +} + +static inline int vpu_iface_config_system(struct vpu_core *core, u32 regs_base, void __iomem *regs) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (!ops) + return -EINVAL; + if (ops->set_system_cfg) + ops->set_system_cfg(core->iface, regs_base, regs, core->id); + + return 0; +} + +static inline int vpu_iface_get_stream_buffer_size(struct vpu_core *core) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (!ops || 
!ops->get_stream_buffer_size) + return 0; + + return ops->get_stream_buffer_size(core->iface); +} + +static inline int vpu_iface_config_stream(struct vpu_inst *inst) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core); + + if (!ops || inst->id < 0) + return -EINVAL; + if (ops->set_stream_cfg) + ops->set_stream_cfg(inst->core->iface, inst->id); + return 0; +} + +static inline int vpu_iface_send_cmd(struct vpu_core *core, struct vpu_rpc_event *cmd) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (!ops || !ops->send_cmd_buf) + return -EINVAL; + + return ops->send_cmd_buf(core->iface, cmd); +} + +static inline int vpu_iface_receive_msg(struct vpu_core *core, struct vpu_rpc_event *msg) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (!ops || !ops->receive_msg_buf) + return -EINVAL; + + return ops->receive_msg_buf(core->iface, msg); +} + +static inline int vpu_iface_pack_cmd(struct vpu_core *core, + struct vpu_rpc_event *pkt, + u32 index, u32 id, void *data) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (!ops || !ops->pack_cmd) + return -EINVAL; + return ops->pack_cmd(pkt, index, id, data); +} + +static inline int vpu_iface_convert_msg_id(struct vpu_core *core, u32 msg_id) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (!ops || !ops->convert_msg_id) + return -EINVAL; + + return ops->convert_msg_id(msg_id); +} + +static inline int vpu_iface_unpack_msg_data(struct vpu_core *core, + struct vpu_rpc_event *pkt, void *data) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (!ops || !ops->unpack_msg_data) + return -EINVAL; + + return ops->unpack_msg_data(pkt, data); +} + +static inline int vpu_iface_input_frame(struct vpu_inst *inst, + struct vb2_buffer *vb) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core); + + if (!ops || !ops->input_frame) + return -EINVAL; + + return ops->input_frame(inst->core->iface, inst, vb); +} + +static inline int vpu_iface_config_memory_resource(struct vpu_inst *inst, + u32 type, + u32 index, + struct vpu_buffer *buf) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core); + + if (!ops || !ops->config_memory_resource || inst->id < 0) + return -EINVAL; + + return ops->config_memory_resource(inst->core->iface, + inst->id, + type, index, buf); +} + +static inline int vpu_iface_config_stream_buffer(struct vpu_inst *inst, + struct vpu_buffer *buf) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core); + + if (!ops || !ops->config_stream_buffer || inst->id < 0) + return -EINVAL; + + return ops->config_stream_buffer(inst->core->iface, inst->id, buf); +} + +static inline int vpu_iface_update_stream_buffer(struct vpu_inst *inst, + u32 ptr, bool write) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core); + + if (!ops || !ops->update_stream_buffer || inst->id < 0) + return -EINVAL; + + return ops->update_stream_buffer(inst->core->iface, inst->id, ptr, write); +} + +static inline int vpu_iface_get_stream_buffer_desc(struct vpu_inst *inst, + struct vpu_rpc_buffer_desc *desc) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core); + + if (!ops || !ops->get_stream_buffer_desc || inst->id < 0) + return -EINVAL; + + if (!desc) + return 0; + + return ops->get_stream_buffer_desc(inst->core->iface, inst->id, desc); +} + +static inline u32 vpu_iface_get_version(struct vpu_core *core) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (!ops || !ops->get_version) + return 0; + + return ops->get_version(core->iface); +} + +static 
inline u32 vpu_iface_get_max_instance_count(struct vpu_core *core) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(core); + + if (!ops || !ops->get_max_instance_count) + return 0; + + return ops->get_max_instance_count(core->iface); +} + +static inline int vpu_iface_set_encode_params(struct vpu_inst *inst, + struct vpu_encode_params *params, u32 update) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core); + + if (!ops || !ops->set_encode_params || inst->id < 0) + return -EINVAL; + + return ops->set_encode_params(inst->core->iface, inst->id, params, update); +} + +static inline int vpu_iface_set_decode_params(struct vpu_inst *inst, + struct vpu_decode_params *params, u32 update) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core); + + if (!ops || !ops->set_decode_params || inst->id < 0) + return -EINVAL; + + return ops->set_decode_params(inst->core->iface, inst->id, params, update); +} + +static inline int vpu_iface_add_scode(struct vpu_inst *inst, u32 scode_type) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core); + + if (!ops || !ops->add_scode || inst->id < 0) + return -EINVAL; + + return ops->add_scode(inst->core->iface, inst->id, + &inst->stream_buffer, + inst->out_format.pixfmt, + scode_type); +} + +static inline int vpu_iface_pre_send_cmd(struct vpu_inst *inst) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core); + + if (ops && ops->pre_send_cmd && inst->id >= 0) + return ops->pre_send_cmd(inst->core->iface, inst->id); + return 0; +} + +static inline int vpu_iface_post_send_cmd(struct vpu_inst *inst) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core); + + if (ops && ops->post_send_cmd && inst->id >= 0) + return ops->post_send_cmd(inst->core->iface, inst->id); + return 0; +} + +static inline int vpu_iface_init_instance(struct vpu_inst *inst) +{ + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core); + + if (ops && ops->init_instance && inst->id >= 0) + return ops->init_instance(inst->core->iface, inst->id); + + return 0; +} + +#endif From patchwork Wed Jan 26 03:09:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Qian X-Patchwork-Id: 537048 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6E1A5C5AC75 for ; Wed, 26 Jan 2022 03:10:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236984AbiAZDKu (ORCPT ); Tue, 25 Jan 2022 22:10:50 -0500 Received: from mail-eopbgr00041.outbound.protection.outlook.com ([40.107.0.41]:62220 "EHLO EUR02-AM5-obe.outbound.protection.outlook.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S236895AbiAZDKd (ORCPT ); Tue, 25 Jan 2022 22:10:33 -0500 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=OA99oQTuKecrKfl1KKObq9IExcoF1m0iS3uWrgay+w/dZpbzZc/kWtVKIzo81HjPvv14+t9yiC2bpwmMqScA2y45yXTXSQp6eSkeY9rA6X2S5SbTIiw+8u+A4qCiW8lPTrMtm21GXreTp+aK3B03/kss0mjhuLhK3mOqiExX3EPM7U6tZQY3WuP0nkyqwwcH4Sl3UBmh3H5wobSVno0vgCDrNPx3dtZqKqsWBItivmgV1ddJut2NgWBz9qediWRl5w10OJTE2spO720qZ+DZXu81CCHXMmQXw7rO1SQNTJKronwAkgeBdyBFy/h6ZSAUuCSrjdE+WyEHKCZTge1/RA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; 
From: Ming Qian
To: mchehab@kernel.org, shawnguo@kernel.org, robh+dt@kernel.org, s.hauer@pengutronix.de
Cc: hverkuil-cisco@xs4all.nl, kernel@pengutronix.de, festevam@gmail.com, linux-imx@nxp.com, aisheng.dong@nxp.com, linux-media@vger.kernel.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v15.1 06/13] media: amphion: add vpu v4l2 m2m support
Date: Wed, 26 Jan 2022 11:09:25 +0800
Message-Id: <1b3df5788edc51cbedf0627965b10b3f7ad1a60a.1643165765.git.ming.qian@nxp.com>
X-Mailer: git-send-email 2.33.0
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: linux-media@vger.kernel.org

vpu_v4l2.c implements the v4l2 m2m driver methods.
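For context, the video nodes registered by this file are driven with the standard V4L2 stateful mem2mem flow. The short userspace sketch below is illustrative only; the device path, pixel formats and buffer count are example values and are not mandated by this patch.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Example: configure an m2m decoder node (the path passed in is an assumption). */
static int setup_m2m_decoder(const char *node)
{
	struct v4l2_format out = { .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE };
	struct v4l2_format cap = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE };
	struct v4l2_requestbuffers req = { 0 };
	int fd = open(node, O_RDWR);

	if (fd < 0)
		return -1;

	/* the compressed bitstream is queued on the OUTPUT side */
	out.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
	ioctl(fd, VIDIOC_S_FMT, &out);

	/* decoded frames are dequeued from the CAPTURE side */
	cap.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12M_8L128;
	ioctl(fd, VIDIOC_S_FMT, &cap);

	/* allocate OUTPUT buffers; CAPTURE setup follows the source-change event */
	req.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	req.memory = V4L2_MEMORY_MMAP;
	req.count = 4;
	ioctl(fd, VIDIOC_REQBUFS, &req);

	return fd;
}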
vpu_helpers.c implements the common helper functions vpu_color.c converts the v4l2 colorspace with the VUI parameters that specified by ITU-T | ISO/IEC Signed-off-by: Ming Qian Signed-off-by: Shijie Qin Signed-off-by: Zhou Peng Reported-by: kernel test robot Tested-by: Nicolas Dufresne --- drivers/media/platform/amphion/vpu_color.c | 183 +++++ drivers/media/platform/amphion/vpu_helpers.c | 413 +++++++++++ drivers/media/platform/amphion/vpu_helpers.h | 74 ++ drivers/media/platform/amphion/vpu_v4l2.c | 720 +++++++++++++++++++ drivers/media/platform/amphion/vpu_v4l2.h | 55 ++ 5 files changed, 1445 insertions(+) create mode 100644 drivers/media/platform/amphion/vpu_color.c create mode 100644 drivers/media/platform/amphion/vpu_helpers.c create mode 100644 drivers/media/platform/amphion/vpu_helpers.h create mode 100644 drivers/media/platform/amphion/vpu_v4l2.c create mode 100644 drivers/media/platform/amphion/vpu_v4l2.h diff --git a/drivers/media/platform/amphion/vpu_color.c b/drivers/media/platform/amphion/vpu_color.c new file mode 100644 index 000000000000..80b9a53fd1c1 --- /dev/null +++ b/drivers/media/platform/amphion/vpu_color.c @@ -0,0 +1,183 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2020-2021 NXP + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "vpu.h" +#include "vpu_helpers.h" + +static const u8 colorprimaries[] = { + 0, + V4L2_COLORSPACE_REC709, /*Rec. ITU-R BT.709-6*/ + 0, + 0, + V4L2_COLORSPACE_470_SYSTEM_M, /*Rec. ITU-R BT.470-6 System M*/ + V4L2_COLORSPACE_470_SYSTEM_BG, /*Rec. ITU-R BT.470-6 System B, G*/ + V4L2_COLORSPACE_SMPTE170M, /*SMPTE170M*/ + V4L2_COLORSPACE_SMPTE240M, /*SMPTE240M*/ + 0, /*Generic film*/ + V4L2_COLORSPACE_BT2020, /*Rec. ITU-R BT.2020-2*/ + 0, /*SMPTE ST 428-1*/ +}; + +static const u8 colortransfers[] = { + 0, + V4L2_XFER_FUNC_709, /*Rec. ITU-R BT.709-6*/ + 0, + 0, + 0, /*Rec. ITU-R BT.470-6 System M*/ + 0, /*Rec. ITU-R BT.470-6 System B, G*/ + V4L2_XFER_FUNC_709, /*SMPTE170M*/ + V4L2_XFER_FUNC_SMPTE240M, /*SMPTE240M*/ + V4L2_XFER_FUNC_NONE, /*Linear transfer characteristics*/ + 0, + 0, + 0, /*IEC 61966-2-4*/ + 0, /*Rec. ITU-R BT.1361-0 extended colour gamut*/ + V4L2_XFER_FUNC_SRGB, /*IEC 61966-2-1 sRGB or sYCC*/ + V4L2_XFER_FUNC_709, /*Rec. ITU-R BT.2020-2 (10 bit system)*/ + V4L2_XFER_FUNC_709, /*Rec. ITU-R BT.2020-2 (12 bit system)*/ + V4L2_XFER_FUNC_SMPTE2084, /*SMPTE ST 2084*/ + 0, /*SMPTE ST 428-1*/ + 0 /*Rec. ITU-R BT.2100-0 hybrid log-gamma (HLG)*/ +}; + +static const u8 colormatrixcoefs[] = { + 0, + V4L2_YCBCR_ENC_709, /*Rec. ITU-R BT.709-6*/ + 0, + 0, + 0, /*Title 47 Code of Federal Regulations*/ + V4L2_YCBCR_ENC_601, /*Rec. ITU-R BT.601-7 625*/ + V4L2_YCBCR_ENC_601, /*Rec. ITU-R BT.601-7 525*/ + V4L2_YCBCR_ENC_SMPTE240M, /*SMPTE240M*/ + 0, + V4L2_YCBCR_ENC_BT2020, /*Rec. ITU-R BT.2020-2*/ + V4L2_YCBCR_ENC_BT2020_CONST_LUM /*Rec. ITU-R BT.2020-2 constant*/ +}; + +u32 vpu_color_cvrt_primaries_v2i(u32 primaries) +{ + return vpu_helper_find_in_array_u8(colorprimaries, ARRAY_SIZE(colorprimaries), primaries); +} + +u32 vpu_color_cvrt_primaries_i2v(u32 primaries) +{ + return primaries < ARRAY_SIZE(colorprimaries) ? colorprimaries[primaries] : 0; +} + +u32 vpu_color_cvrt_transfers_v2i(u32 transfers) +{ + return vpu_helper_find_in_array_u8(colortransfers, ARRAY_SIZE(colortransfers), transfers); +} + +u32 vpu_color_cvrt_transfers_i2v(u32 transfers) +{ + return transfers < ARRAY_SIZE(colortransfers) ? 
colortransfers[transfers] : 0; +} + +u32 vpu_color_cvrt_matrix_v2i(u32 matrix) +{ + return vpu_helper_find_in_array_u8(colormatrixcoefs, ARRAY_SIZE(colormatrixcoefs), matrix); +} + +u32 vpu_color_cvrt_matrix_i2v(u32 matrix) +{ + return matrix < ARRAY_SIZE(colormatrixcoefs) ? colormatrixcoefs[matrix] : 0; +} + +u32 vpu_color_cvrt_full_range_v2i(u32 full_range) +{ + return (full_range == V4L2_QUANTIZATION_FULL_RANGE); +} + +u32 vpu_color_cvrt_full_range_i2v(u32 full_range) +{ + if (full_range) + return V4L2_QUANTIZATION_FULL_RANGE; + + return V4L2_QUANTIZATION_LIM_RANGE; +} + +int vpu_color_check_primaries(u32 primaries) +{ + return vpu_color_cvrt_primaries_v2i(primaries) ? 0 : -EINVAL; +} + +int vpu_color_check_transfers(u32 transfers) +{ + return vpu_color_cvrt_transfers_v2i(transfers) ? 0 : -EINVAL; +} + +int vpu_color_check_matrix(u32 matrix) +{ + return vpu_color_cvrt_matrix_v2i(matrix) ? 0 : -EINVAL; +} + +int vpu_color_check_full_range(u32 full_range) +{ + int ret = -EINVAL; + + switch (full_range) { + case V4L2_QUANTIZATION_FULL_RANGE: + case V4L2_QUANTIZATION_LIM_RANGE: + ret = 0; + break; + default: + break; + } + + return ret; +} + +int vpu_color_get_default(u32 primaries, u32 *ptransfers, u32 *pmatrix, u32 *pfull_range) +{ + u32 transfers; + u32 matrix; + u32 full_range; + + switch (primaries) { + case V4L2_COLORSPACE_REC709: + transfers = V4L2_XFER_FUNC_709; + matrix = V4L2_YCBCR_ENC_709; + break; + case V4L2_COLORSPACE_470_SYSTEM_M: + case V4L2_COLORSPACE_470_SYSTEM_BG: + case V4L2_COLORSPACE_SMPTE170M: + transfers = V4L2_XFER_FUNC_709; + matrix = V4L2_YCBCR_ENC_601; + break; + case V4L2_COLORSPACE_SMPTE240M: + transfers = V4L2_XFER_FUNC_SMPTE240M; + matrix = V4L2_YCBCR_ENC_SMPTE240M; + break; + case V4L2_COLORSPACE_BT2020: + transfers = V4L2_XFER_FUNC_709; + matrix = V4L2_YCBCR_ENC_BT2020; + break; + default: + transfers = V4L2_XFER_FUNC_DEFAULT; + matrix = V4L2_YCBCR_ENC_DEFAULT; + break; + } + full_range = V4L2_QUANTIZATION_LIM_RANGE; + + if (ptransfers) + *ptransfers = transfers; + if (pmatrix) + *pmatrix = matrix; + if (pfull_range) + *pfull_range = full_range; + + return 0; +} diff --git a/drivers/media/platform/amphion/vpu_helpers.c b/drivers/media/platform/amphion/vpu_helpers.c new file mode 100644 index 000000000000..768abf89e606 --- /dev/null +++ b/drivers/media/platform/amphion/vpu_helpers.c @@ -0,0 +1,413 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2020-2021 NXP + */ + +#include +#include +#include +#include +#include +#include +#include +#include "vpu.h" +#include "vpu_core.h" +#include "vpu_rpc.h" +#include "vpu_helpers.h" + +int vpu_helper_find_in_array_u8(const u8 *array, u32 size, u32 x) +{ + int i; + + for (i = 0; i < size; i++) { + if (array[i] == x) + return i; + } + + return 0; +} + +bool vpu_helper_check_type(struct vpu_inst *inst, u32 type) +{ + const struct vpu_format *pfmt; + + for (pfmt = inst->formats; pfmt->pixfmt; pfmt++) { + if (!vpu_iface_check_format(inst, pfmt->pixfmt)) + continue; + if (pfmt->type == type) + return true; + } + + return false; +} + +const struct vpu_format *vpu_helper_find_format(struct vpu_inst *inst, u32 type, u32 pixelfmt) +{ + const struct vpu_format *pfmt; + + if (!inst || !inst->formats) + return NULL; + + if (!vpu_iface_check_format(inst, pixelfmt)) + return NULL; + + for (pfmt = inst->formats; pfmt->pixfmt; pfmt++) { + if (pfmt->pixfmt == pixelfmt && (!type || type == pfmt->type)) + return pfmt; + } + + return NULL; +} + +const struct vpu_format *vpu_helper_enum_format(struct vpu_inst *inst, u32 type, int 
index) +{ + const struct vpu_format *pfmt; + int i = 0; + + if (!inst || !inst->formats) + return NULL; + + for (pfmt = inst->formats; pfmt->pixfmt; pfmt++) { + if (!vpu_iface_check_format(inst, pfmt->pixfmt)) + continue; + + if (pfmt->type == type) { + if (index == i) + return pfmt; + i++; + } + } + + return NULL; +} + +u32 vpu_helper_valid_frame_width(struct vpu_inst *inst, u32 width) +{ + const struct vpu_core_resources *res; + + if (!inst) + return width; + + res = vpu_get_resource(inst); + if (!res) + return width; + if (res->max_width) + width = clamp(width, res->min_width, res->max_width); + if (res->step_width) + width = ALIGN(width, res->step_width); + + return width; +} + +u32 vpu_helper_valid_frame_height(struct vpu_inst *inst, u32 height) +{ + const struct vpu_core_resources *res; + + if (!inst) + return height; + + res = vpu_get_resource(inst); + if (!res) + return height; + if (res->max_height) + height = clamp(height, res->min_height, res->max_height); + if (res->step_height) + height = ALIGN(height, res->step_height); + + return height; +} + +static u32 get_nv12_plane_size(u32 width, u32 height, int plane_no, + u32 stride, u32 interlaced, u32 *pbl) +{ + u32 bytesperline; + u32 size = 0; + + bytesperline = ALIGN(width, stride); + if (pbl) + bytesperline = max(bytesperline, *pbl); + height = ALIGN(height, 2); + if (plane_no == 0) + size = bytesperline * height; + else if (plane_no == 1) + size = bytesperline * height >> 1; + if (pbl) + *pbl = bytesperline; + + return size; +} + +static u32 get_tiled_8l128_plane_size(u32 fmt, u32 width, u32 height, int plane_no, + u32 stride, u32 interlaced, u32 *pbl) +{ + u32 ws = 3; + u32 hs = 7; + u32 bitdepth = 8; + u32 bytesperline; + u32 size = 0; + + if (interlaced) + hs++; + if (fmt == V4L2_PIX_FMT_NV12M_10BE_8L128) + bitdepth = 10; + bytesperline = DIV_ROUND_UP(width * bitdepth, BITS_PER_BYTE); + bytesperline = ALIGN(bytesperline, 1 << ws); + bytesperline = ALIGN(bytesperline, stride); + if (pbl) + bytesperline = max(bytesperline, *pbl); + height = ALIGN(height, 1 << hs); + if (plane_no == 0) + size = bytesperline * height; + else if (plane_no == 1) + size = (bytesperline * ALIGN(height, 1 << (hs + 1))) >> 1; + if (pbl) + *pbl = bytesperline; + + return size; +} + +static u32 get_default_plane_size(u32 width, u32 height, int plane_no, + u32 stride, u32 interlaced, u32 *pbl) +{ + u32 bytesperline; + u32 size = 0; + + bytesperline = ALIGN(width, stride); + if (pbl) + bytesperline = max(bytesperline, *pbl); + if (plane_no == 0) + size = bytesperline * height; + if (pbl) + *pbl = bytesperline; + + return size; +} + +u32 vpu_helper_get_plane_size(u32 fmt, u32 w, u32 h, int plane_no, + u32 stride, u32 interlaced, u32 *pbl) +{ + switch (fmt) { + case V4L2_PIX_FMT_NV12M: + return get_nv12_plane_size(w, h, plane_no, stride, interlaced, pbl); + case V4L2_PIX_FMT_NV12M_8L128: + case V4L2_PIX_FMT_NV12M_10BE_8L128: + return get_tiled_8l128_plane_size(fmt, w, h, plane_no, stride, interlaced, pbl); + default: + return get_default_plane_size(w, h, plane_no, stride, interlaced, pbl); + } +} + +u32 vpu_helper_copy_from_stream_buffer(struct vpu_buffer *stream_buffer, + u32 *rptr, u32 size, void *dst) +{ + u32 offset; + u32 start; + u32 end; + void *virt; + + if (!stream_buffer || !rptr || !dst) + return -EINVAL; + + if (!size) + return 0; + + offset = *rptr; + start = stream_buffer->phys; + end = start + stream_buffer->length; + virt = stream_buffer->virt; + + if (offset < start || offset > end) + return -EINVAL; + + if (offset + size <= end) { + 
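	/*
	 * The stream buffer is a ring buffer: when the requested range does
	 * not wrap past the end of the buffer it is copied with a single
	 * memcpy below, otherwise the copy is split at the wrap point and
	 * continues from the start of the buffer in the else branch.
	 */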
memcpy(dst, virt + (offset - start), size); + } else { + memcpy(dst, virt + (offset - start), end - offset); + memcpy(dst + end - offset, virt, size + offset - end); + } + + *rptr = vpu_helper_step_walk(stream_buffer, offset, size); + return size; +} + +u32 vpu_helper_copy_to_stream_buffer(struct vpu_buffer *stream_buffer, + u32 *wptr, u32 size, void *src) +{ + u32 offset; + u32 start; + u32 end; + void *virt; + + if (!stream_buffer || !wptr || !src) + return -EINVAL; + + if (!size) + return 0; + + offset = *wptr; + start = stream_buffer->phys; + end = start + stream_buffer->length; + virt = stream_buffer->virt; + if (offset < start || offset > end) + return -EINVAL; + + if (offset + size <= end) { + memcpy(virt + (offset - start), src, size); + } else { + memcpy(virt + (offset - start), src, end - offset); + memcpy(virt, src + end - offset, size + offset - end); + } + + *wptr = vpu_helper_step_walk(stream_buffer, offset, size); + + return size; +} + +u32 vpu_helper_memset_stream_buffer(struct vpu_buffer *stream_buffer, + u32 *wptr, u8 val, u32 size) +{ + u32 offset; + u32 start; + u32 end; + void *virt; + + if (!stream_buffer || !wptr) + return -EINVAL; + + if (!size) + return 0; + + offset = *wptr; + start = stream_buffer->phys; + end = start + stream_buffer->length; + virt = stream_buffer->virt; + if (offset < start || offset > end) + return -EINVAL; + + if (offset + size <= end) { + memset(virt + (offset - start), val, size); + } else { + memset(virt + (offset - start), val, end - offset); + memset(virt, val, size + offset - end); + } + + offset += size; + if (offset >= end) + offset -= stream_buffer->length; + + *wptr = offset; + + return size; +} + +u32 vpu_helper_get_free_space(struct vpu_inst *inst) +{ + struct vpu_rpc_buffer_desc desc; + + if (vpu_iface_get_stream_buffer_desc(inst, &desc)) + return 0; + + if (desc.rptr > desc.wptr) + return desc.rptr - desc.wptr; + else if (desc.rptr < desc.wptr) + return (desc.end - desc.start + desc.rptr - desc.wptr); + else + return desc.end - desc.start; +} + +u32 vpu_helper_get_used_space(struct vpu_inst *inst) +{ + struct vpu_rpc_buffer_desc desc; + + if (vpu_iface_get_stream_buffer_desc(inst, &desc)) + return 0; + + if (desc.wptr > desc.rptr) + return desc.wptr - desc.rptr; + else if (desc.wptr < desc.rptr) + return (desc.end - desc.start + desc.wptr - desc.rptr); + else + return 0; +} + +int vpu_helper_g_volatile_ctrl(struct v4l2_ctrl *ctrl) +{ + struct vpu_inst *inst = ctrl_to_inst(ctrl); + + switch (ctrl->id) { + case V4L2_CID_MIN_BUFFERS_FOR_CAPTURE: + ctrl->val = inst->min_buffer_cap; + break; + case V4L2_CID_MIN_BUFFERS_FOR_OUTPUT: + ctrl->val = inst->min_buffer_out; + break; + default: + return -EINVAL; + } + + return 0; +} + +int vpu_helper_find_startcode(struct vpu_buffer *stream_buffer, + u32 pixelformat, u32 offset, u32 bytesused) +{ + u32 start_code; + int start_code_size; + u32 val = 0; + int i; + int ret = -EINVAL; + + if (!stream_buffer || !stream_buffer->virt) + return -EINVAL; + + switch (pixelformat) { + case V4L2_PIX_FMT_H264: + start_code_size = 4; + start_code = 0x00000001; + break; + default: + return 0; + } + + for (i = 0; i < bytesused; i++) { + val = (val << 8) | vpu_helper_read_byte(stream_buffer, offset + i); + if (i < start_code_size - 1) + continue; + if (val == start_code) { + ret = i + 1 - start_code_size; + break; + } + } + + return ret; +} + +int vpu_find_dst_by_src(struct vpu_pair *pairs, u32 cnt, u32 src) +{ + u32 i; + + if (!pairs || !cnt) + return -EINVAL; + + for (i = 0; i < cnt; i++) { + if 
(pairs[i].src == src) + return pairs[i].dst; + } + + return -EINVAL; +} + +int vpu_find_src_by_dst(struct vpu_pair *pairs, u32 cnt, u32 dst) +{ + u32 i; + + if (!pairs || !cnt) + return -EINVAL; + + for (i = 0; i < cnt; i++) { + if (pairs[i].dst == dst) + return pairs[i].src; + } + + return -EINVAL; +} diff --git a/drivers/media/platform/amphion/vpu_helpers.h b/drivers/media/platform/amphion/vpu_helpers.h new file mode 100644 index 000000000000..3676cc83e85b --- /dev/null +++ b/drivers/media/platform/amphion/vpu_helpers.h @@ -0,0 +1,74 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2020-2021 NXP + */ + +#ifndef _AMPHION_VPU_HELPERS_H +#define _AMPHION_VPU_HELPERS_H + +struct vpu_pair { + u32 src; + u32 dst; +}; + +#define MAKE_TIMESTAMP(s, ns) (((s32)(s) * NSEC_PER_SEC) + (ns)) +#define VPU_INVALID_TIMESTAMP MAKE_TIMESTAMP(-1, 0) + +int vpu_helper_find_in_array_u8(const u8 *array, u32 size, u32 x); +bool vpu_helper_check_type(struct vpu_inst *inst, u32 type); +const struct vpu_format *vpu_helper_find_format(struct vpu_inst *inst, u32 type, u32 pixelfmt); +const struct vpu_format *vpu_helper_enum_format(struct vpu_inst *inst, u32 type, int index); +u32 vpu_helper_valid_frame_width(struct vpu_inst *inst, u32 width); +u32 vpu_helper_valid_frame_height(struct vpu_inst *inst, u32 height); +u32 vpu_helper_get_plane_size(u32 fmt, u32 width, u32 height, int plane_no, + u32 stride, u32 interlaced, u32 *pbl); +u32 vpu_helper_copy_from_stream_buffer(struct vpu_buffer *stream_buffer, + u32 *rptr, u32 size, void *dst); +u32 vpu_helper_copy_to_stream_buffer(struct vpu_buffer *stream_buffer, + u32 *wptr, u32 size, void *src); +u32 vpu_helper_memset_stream_buffer(struct vpu_buffer *stream_buffer, + u32 *wptr, u8 val, u32 size); +u32 vpu_helper_get_free_space(struct vpu_inst *inst); +u32 vpu_helper_get_used_space(struct vpu_inst *inst); +int vpu_helper_g_volatile_ctrl(struct v4l2_ctrl *ctrl); +void vpu_helper_get_kmp_next(const u8 *pattern, int *next, int size); +int vpu_helper_kmp_search(u8 *s, int s_len, const u8 *p, int p_len, int *next); +int vpu_helper_kmp_search_in_stream_buffer(struct vpu_buffer *stream_buffer, + u32 offset, int bytesused, + const u8 *p, int p_len, int *next); +int vpu_helper_find_startcode(struct vpu_buffer *stream_buffer, + u32 pixelformat, u32 offset, u32 bytesused); + +static inline u32 vpu_helper_step_walk(struct vpu_buffer *stream_buffer, u32 pos, u32 step) +{ + pos += step; + if (pos > stream_buffer->phys + stream_buffer->length) + pos -= stream_buffer->length; + + return pos; +} + +static inline u8 vpu_helper_read_byte(struct vpu_buffer *stream_buffer, u32 pos) +{ + u8 *pdata = (u8 *)stream_buffer->virt; + + return pdata[pos % stream_buffer->length]; +} + +int vpu_color_check_primaries(u32 primaries); +int vpu_color_check_transfers(u32 transfers); +int vpu_color_check_matrix(u32 matrix); +int vpu_color_check_full_range(u32 full_range); +u32 vpu_color_cvrt_primaries_v2i(u32 primaries); +u32 vpu_color_cvrt_primaries_i2v(u32 primaries); +u32 vpu_color_cvrt_transfers_v2i(u32 transfers); +u32 vpu_color_cvrt_transfers_i2v(u32 transfers); +u32 vpu_color_cvrt_matrix_v2i(u32 matrix); +u32 vpu_color_cvrt_matrix_i2v(u32 matrix); +u32 vpu_color_cvrt_full_range_v2i(u32 full_range); +u32 vpu_color_cvrt_full_range_i2v(u32 full_range); +int vpu_color_get_default(u32 primaries, u32 *ptransfers, u32 *pmatrix, u32 *pfull_range); + +int vpu_find_dst_by_src(struct vpu_pair *pairs, u32 cnt, u32 src); +int vpu_find_src_by_dst(struct vpu_pair *pairs, u32 cnt, u32 dst); +#endif diff 
--git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c new file mode 100644 index 000000000000..9d3176c0f5eb --- /dev/null +++ b/drivers/media/platform/amphion/vpu_v4l2.c @@ -0,0 +1,720 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2020-2021 NXP + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "vpu.h" +#include "vpu_core.h" +#include "vpu_v4l2.h" +#include "vpu_msgs.h" +#include "vpu_helpers.h" + +void vpu_inst_lock(struct vpu_inst *inst) +{ + mutex_lock(&inst->lock); +} + +void vpu_inst_unlock(struct vpu_inst *inst) +{ + mutex_unlock(&inst->lock); +} + +dma_addr_t vpu_get_vb_phy_addr(struct vb2_buffer *vb, u32 plane_no) +{ + if (plane_no >= vb->num_planes) + return 0; + return vb2_dma_contig_plane_dma_addr(vb, plane_no) + + vb->planes[plane_no].data_offset; +} + +unsigned int vpu_get_vb_length(struct vb2_buffer *vb, u32 plane_no) +{ + if (plane_no >= vb->num_planes) + return 0; + return vb2_plane_size(vb, plane_no) - vb->planes[plane_no].data_offset; +} + +void vpu_set_buffer_state(struct vb2_v4l2_buffer *vbuf, unsigned int state) +{ + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf); + + vpu_buf->state = state; +} + +unsigned int vpu_get_buffer_state(struct vb2_v4l2_buffer *vbuf) +{ + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf); + + return vpu_buf->state; +} + +void vpu_v4l2_set_error(struct vpu_inst *inst) +{ + struct vb2_queue *src_q; + struct vb2_queue *dst_q; + + if (!inst) + return; + + vpu_inst_lock(inst); + dev_err(inst->dev, "some error occurs in codec\n"); + if (inst->fh.m2m_ctx) { + src_q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx); + dst_q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx); + if (src_q) + src_q->error = 1; + if (dst_q) + dst_q->error = 1; + } + vpu_inst_unlock(inst); +} + +int vpu_notify_eos(struct vpu_inst *inst) +{ + static const struct v4l2_event ev = { + .id = 0, + .type = V4L2_EVENT_EOS + }; + + vpu_trace(inst->dev, "[%d]\n", inst->id); + v4l2_event_queue_fh(&inst->fh, &ev); + + return 0; +} + +int vpu_notify_source_change(struct vpu_inst *inst) +{ + static const struct v4l2_event ev = { + .id = 0, + .type = V4L2_EVENT_SOURCE_CHANGE, + .u.src_change.changes = V4L2_EVENT_SRC_CH_RESOLUTION + }; + + vpu_trace(inst->dev, "[%d]\n", inst->id); + v4l2_event_queue_fh(&inst->fh, &ev); + return 0; +} + +int vpu_set_last_buffer_dequeued(struct vpu_inst *inst) +{ + struct vb2_queue *q; + + if (!inst || !inst->fh.m2m_ctx) + return -EINVAL; + + q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx); + if (!list_empty(&q->done_list)) + return -EINVAL; + + if (q->last_buffer_dequeued) + return 0; + vpu_trace(inst->dev, "last buffer dequeued\n"); + q->last_buffer_dequeued = true; + wake_up(&q->done_wq); + vpu_notify_eos(inst); + return 0; +} + +const struct vpu_format *vpu_try_fmt_common(struct vpu_inst *inst, struct v4l2_format *f) +{ + struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp; + u32 type = f->type; + u32 stride = 1; + u32 bytesperline; + u32 sizeimage; + const struct vpu_format *fmt; + const struct vpu_core_resources *res; + int i; + + fmt = vpu_helper_find_format(inst, type, pixmp->pixelformat); + if (!fmt) { + fmt = vpu_helper_enum_format(inst, type, 0); + if (!fmt) + return NULL; + pixmp->pixelformat = fmt->pixfmt; + } + + res = vpu_get_resource(inst); + if (res) + stride = res->stride; + if (pixmp->width) + pixmp->width = vpu_helper_valid_frame_width(inst, pixmp->width); + if (pixmp->height) + 
pixmp->height = vpu_helper_valid_frame_height(inst, pixmp->height); + pixmp->flags = fmt->flags; + pixmp->num_planes = fmt->num_planes; + if (pixmp->field == V4L2_FIELD_ANY) + pixmp->field = V4L2_FIELD_NONE; + for (i = 0; i < pixmp->num_planes; i++) { + bytesperline = max_t(s32, pixmp->plane_fmt[i].bytesperline, 0); + sizeimage = vpu_helper_get_plane_size(pixmp->pixelformat, + pixmp->width, + pixmp->height, + i, + stride, + pixmp->field > V4L2_FIELD_NONE ? 1 : 0, + &bytesperline); + sizeimage = max_t(s32, pixmp->plane_fmt[i].sizeimage, sizeimage); + pixmp->plane_fmt[i].bytesperline = bytesperline; + pixmp->plane_fmt[i].sizeimage = sizeimage; + } + + return fmt; +} + +static bool vpu_check_ready(struct vpu_inst *inst, u32 type) +{ + if (!inst) + return false; + if (inst->state == VPU_CODEC_STATE_DEINIT || inst->id < 0) + return false; + if (!inst->ops->check_ready) + return true; + return call_vop(inst, check_ready, type); +} + +int vpu_process_output_buffer(struct vpu_inst *inst) +{ + struct v4l2_m2m_buffer *buf = NULL; + struct vb2_v4l2_buffer *vbuf = NULL; + + if (!inst || !inst->fh.m2m_ctx) + return -EINVAL; + + if (!vpu_check_ready(inst, inst->out_format.type)) + return -EINVAL; + + v4l2_m2m_for_each_src_buf(inst->fh.m2m_ctx, buf) { + vbuf = &buf->vb; + if (vpu_get_buffer_state(vbuf) == VPU_BUF_STATE_IDLE) + break; + vbuf = NULL; + } + + if (!vbuf) + return -EINVAL; + + dev_dbg(inst->dev, "[%d]frame id = %d / %d\n", + inst->id, vbuf->sequence, inst->sequence); + return call_vop(inst, process_output, &vbuf->vb2_buf); +} + +int vpu_process_capture_buffer(struct vpu_inst *inst) +{ + struct v4l2_m2m_buffer *buf = NULL; + struct vb2_v4l2_buffer *vbuf = NULL; + + if (!inst || !inst->fh.m2m_ctx) + return -EINVAL; + + if (!vpu_check_ready(inst, inst->cap_format.type)) + return -EINVAL; + + v4l2_m2m_for_each_dst_buf(inst->fh.m2m_ctx, buf) { + vbuf = &buf->vb; + if (vpu_get_buffer_state(vbuf) == VPU_BUF_STATE_IDLE) + break; + vbuf = NULL; + } + if (!vbuf) + return -EINVAL; + + return call_vop(inst, process_capture, &vbuf->vb2_buf); +} + +struct vb2_v4l2_buffer *vpu_find_buf_by_sequence(struct vpu_inst *inst, u32 type, u32 sequence) +{ + struct v4l2_m2m_buffer *buf = NULL; + struct vb2_v4l2_buffer *vbuf = NULL; + + if (!inst || !inst->fh.m2m_ctx) + return NULL; + + if (V4L2_TYPE_IS_OUTPUT(type)) { + v4l2_m2m_for_each_src_buf(inst->fh.m2m_ctx, buf) { + vbuf = &buf->vb; + if (vbuf->sequence == sequence) + break; + vbuf = NULL; + } + } else { + v4l2_m2m_for_each_dst_buf(inst->fh.m2m_ctx, buf) { + vbuf = &buf->vb; + if (vbuf->sequence == sequence) + break; + vbuf = NULL; + } + } + + return vbuf; +} + +struct vb2_v4l2_buffer *vpu_find_buf_by_idx(struct vpu_inst *inst, u32 type, u32 idx) +{ + struct v4l2_m2m_buffer *buf = NULL; + struct vb2_v4l2_buffer *vbuf = NULL; + + if (!inst || !inst->fh.m2m_ctx) + return NULL; + + if (V4L2_TYPE_IS_OUTPUT(type)) { + v4l2_m2m_for_each_src_buf(inst->fh.m2m_ctx, buf) { + vbuf = &buf->vb; + if (vbuf->vb2_buf.index == idx) + break; + vbuf = NULL; + } + } else { + v4l2_m2m_for_each_dst_buf(inst->fh.m2m_ctx, buf) { + vbuf = &buf->vb; + if (vbuf->vb2_buf.index == idx) + break; + vbuf = NULL; + } + } + + return vbuf; +} + +int vpu_get_num_buffers(struct vpu_inst *inst, u32 type) +{ + struct vb2_queue *q; + + if (!inst || !inst->fh.m2m_ctx) + return -EINVAL; + + if (V4L2_TYPE_IS_OUTPUT(type)) + q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx); + else + q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx); + + return q->num_buffers; +} + +static void vpu_m2m_device_run(void *priv) +{ +} + 
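/*
 * Note: vpu_m2m_device_run() is left empty on purpose. Output and capture
 * buffers are pushed to the firmware directly from vpu_vb2_buf_queue() via
 * vpu_process_output_buffer() and vpu_process_capture_buffer(), so the m2m
 * framework is only used for buffer bookkeeping, and job_abort below simply
 * finishes the pending job.
 */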
+static void vpu_m2m_job_abort(void *priv) +{ + struct vpu_inst *inst = priv; + struct v4l2_m2m_ctx *m2m_ctx = inst->fh.m2m_ctx; + + v4l2_m2m_job_finish(m2m_ctx->m2m_dev, m2m_ctx); +} + +static const struct v4l2_m2m_ops vpu_m2m_ops = { + .device_run = vpu_m2m_device_run, + .job_abort = vpu_m2m_job_abort +}; + +static int vpu_vb2_queue_setup(struct vb2_queue *vq, + unsigned int *buf_count, + unsigned int *plane_count, + unsigned int psize[], + struct device *allocators[]) +{ + struct vpu_inst *inst = vb2_get_drv_priv(vq); + struct vpu_format *cur_fmt; + int i; + + cur_fmt = vpu_get_format(inst, vq->type); + + if (*plane_count) { + if (*plane_count != cur_fmt->num_planes) + return -EINVAL; + for (i = 0; i < cur_fmt->num_planes; i++) { + if (psize[i] < cur_fmt->sizeimage[i]) + return -EINVAL; + } + return 0; + } + + *plane_count = cur_fmt->num_planes; + for (i = 0; i < cur_fmt->num_planes; i++) + psize[i] = cur_fmt->sizeimage[i]; + + return 0; +} + +static int vpu_vb2_buf_init(struct vb2_buffer *vb) +{ + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + + vpu_set_buffer_state(vbuf, VPU_BUF_STATE_IDLE); + return 0; +} + +static int vpu_vb2_buf_out_validate(struct vb2_buffer *vb) +{ + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + + vbuf->field = V4L2_FIELD_NONE; + + return 0; +} + +static int vpu_vb2_buf_prepare(struct vb2_buffer *vb) +{ + struct vpu_inst *inst = vb2_get_drv_priv(vb->vb2_queue); + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + struct vpu_format *cur_fmt; + u32 i; + + cur_fmt = vpu_get_format(inst, vb->type); + for (i = 0; i < cur_fmt->num_planes; i++) { + if (vpu_get_vb_length(vb, i) < cur_fmt->sizeimage[i]) { + dev_dbg(inst->dev, "[%d] %s buf[%d] is invalid\n", + inst->id, vpu_type_name(vb->type), vb->index); + vpu_set_buffer_state(vbuf, VPU_BUF_STATE_ERROR); + } + } + + return 0; +} + +static void vpu_vb2_buf_finish(struct vb2_buffer *vb) +{ + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + struct vpu_inst *inst = vb2_get_drv_priv(vb->vb2_queue); + struct vb2_queue *q = vb->vb2_queue; + + if (vbuf->flags & V4L2_BUF_FLAG_LAST) + vpu_notify_eos(inst); + + if (list_empty(&q->done_list)) + call_vop(inst, on_queue_empty, q->type); +} + +void vpu_vb2_buffers_return(struct vpu_inst *inst, unsigned int type, enum vb2_buffer_state state) +{ + struct vb2_v4l2_buffer *buf; + + if (!inst || !inst->fh.m2m_ctx) + return; + + if (V4L2_TYPE_IS_OUTPUT(type)) { + while ((buf = v4l2_m2m_src_buf_remove(inst->fh.m2m_ctx))) + v4l2_m2m_buf_done(buf, state); + } else { + while ((buf = v4l2_m2m_dst_buf_remove(inst->fh.m2m_ctx))) + v4l2_m2m_buf_done(buf, state); + } +} + +static int vpu_vb2_start_streaming(struct vb2_queue *q, unsigned int count) +{ + struct vpu_inst *inst = vb2_get_drv_priv(q); + struct vpu_format *fmt = vpu_get_format(inst, q->type); + int ret; + + vpu_inst_unlock(inst); + ret = vpu_inst_register(inst); + vpu_inst_lock(inst); + if (ret) { + vpu_vb2_buffers_return(inst, q->type, VB2_BUF_STATE_QUEUED); + return ret; + } + + vpu_trace(inst->dev, "[%d] %s %c%c%c%c %dx%d %u(%u) %u(%u) %u(%u) %d\n", + inst->id, vpu_type_name(q->type), + fmt->pixfmt, + fmt->pixfmt >> 8, + fmt->pixfmt >> 16, + fmt->pixfmt >> 24, + fmt->width, fmt->height, + fmt->sizeimage[0], fmt->bytesperline[0], + fmt->sizeimage[1], fmt->bytesperline[1], + fmt->sizeimage[2], fmt->bytesperline[2], + q->num_buffers); + call_vop(inst, start, q->type); + vb2_clear_last_buffer_dequeued(q); + + return 0; +} + +static void vpu_vb2_stop_streaming(struct vb2_queue *q) +{ + struct vpu_inst *inst 
= vb2_get_drv_priv(q); + + vpu_trace(inst->dev, "[%d] %s\n", inst->id, vpu_type_name(q->type)); + + call_vop(inst, stop, q->type); + vpu_vb2_buffers_return(inst, q->type, VB2_BUF_STATE_ERROR); + if (V4L2_TYPE_IS_OUTPUT(q->type)) + inst->sequence = 0; +} + +static void vpu_vb2_buf_queue(struct vb2_buffer *vb) +{ + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + struct vpu_inst *inst = vb2_get_drv_priv(vb->vb2_queue); + + if (!inst || !inst->fh.m2m_ctx) + return; + if (V4L2_TYPE_IS_OUTPUT(vb->type)) { + vbuf->sequence = inst->sequence++; + if ((s64)vb->timestamp < 0) + vb->timestamp = VPU_INVALID_TIMESTAMP; + } + + v4l2_m2m_buf_queue(inst->fh.m2m_ctx, vbuf); + vpu_process_output_buffer(inst); + vpu_process_capture_buffer(inst); +} + +static const struct vb2_ops vpu_vb2_ops = { + .queue_setup = vpu_vb2_queue_setup, + .buf_init = vpu_vb2_buf_init, + .buf_out_validate = vpu_vb2_buf_out_validate, + .buf_prepare = vpu_vb2_buf_prepare, + .buf_finish = vpu_vb2_buf_finish, + .start_streaming = vpu_vb2_start_streaming, + .stop_streaming = vpu_vb2_stop_streaming, + .buf_queue = vpu_vb2_buf_queue, + .wait_prepare = vb2_ops_wait_prepare, + .wait_finish = vb2_ops_wait_finish, +}; + +static int vpu_m2m_queue_init(void *priv, struct vb2_queue *src_vq, struct vb2_queue *dst_vq) +{ + struct vpu_inst *inst = priv; + int ret; + + src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE; + inst->out_format.type = src_vq->type; + src_vq->io_modes = VB2_MMAP | VB2_DMABUF; + src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY; + src_vq->ops = &vpu_vb2_ops; + src_vq->mem_ops = &vb2_dma_contig_memops; + if (inst->type == VPU_CORE_TYPE_DEC && inst->use_stream_buffer) + src_vq->mem_ops = &vb2_vmalloc_memops; + src_vq->drv_priv = inst; + src_vq->buf_struct_size = sizeof(struct vpu_vb2_buffer); + src_vq->min_buffers_needed = 1; + src_vq->dev = inst->vpu->dev; + src_vq->lock = &inst->lock; + ret = vb2_queue_init(src_vq); + if (ret) + return ret; + + dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE; + inst->cap_format.type = dst_vq->type; + dst_vq->io_modes = VB2_MMAP | VB2_DMABUF; + dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY; + dst_vq->ops = &vpu_vb2_ops; + dst_vq->mem_ops = &vb2_dma_contig_memops; + if (inst->type == VPU_CORE_TYPE_ENC && inst->use_stream_buffer) + dst_vq->mem_ops = &vb2_vmalloc_memops; + dst_vq->drv_priv = inst; + dst_vq->buf_struct_size = sizeof(struct vpu_vb2_buffer); + dst_vq->min_buffers_needed = 1; + dst_vq->dev = inst->vpu->dev; + dst_vq->lock = &inst->lock; + ret = vb2_queue_init(dst_vq); + if (ret) { + vb2_queue_release(src_vq); + return ret; + } + + return 0; +} + +static int vpu_v4l2_release(struct vpu_inst *inst) +{ + vpu_trace(inst->vpu->dev, "%p\n", inst); + + vpu_release_core(inst->core); + put_device(inst->dev); + + if (inst->workqueue) { + cancel_work_sync(&inst->msg_work); + destroy_workqueue(inst->workqueue); + inst->workqueue = NULL; + } + + v4l2_ctrl_handler_free(&inst->ctrl_handler); + mutex_destroy(&inst->lock); + v4l2_fh_del(&inst->fh); + v4l2_fh_exit(&inst->fh); + + call_vop(inst, cleanup); + + return 0; +} + +int vpu_v4l2_open(struct file *file, struct vpu_inst *inst) +{ + struct vpu_dev *vpu = video_drvdata(file); + struct vpu_func *func; + int ret = 0; + + if (!file || !inst || !inst->ops) + return -EINVAL; + + if (inst->type == VPU_CORE_TYPE_ENC) + func = &vpu->encoder; + else + func = &vpu->decoder; + + atomic_set(&inst->ref_count, 0); + vpu_inst_get(inst); + inst->vpu = vpu; + inst->core = vpu_request_core(vpu, inst->type); + if (inst->core) + inst->dev = 
get_device(inst->core->dev); + mutex_init(&inst->lock); + INIT_LIST_HEAD(&inst->cmd_q); + inst->id = VPU_INST_NULL_ID; + inst->release = vpu_v4l2_release; + inst->pid = current->pid; + inst->tgid = current->tgid; + inst->min_buffer_cap = 2; + inst->min_buffer_out = 2; + v4l2_fh_init(&inst->fh, func->vfd); + v4l2_fh_add(&inst->fh); + + ret = call_vop(inst, ctrl_init); + if (ret) + goto error; + + inst->fh.m2m_ctx = v4l2_m2m_ctx_init(func->m2m_dev, inst, vpu_m2m_queue_init); + if (IS_ERR(inst->fh.m2m_ctx)) { + dev_err(vpu->dev, "v4l2_m2m_ctx_init fail\n"); + ret = PTR_ERR(func->m2m_dev); + goto error; + } + + inst->fh.ctrl_handler = &inst->ctrl_handler; + file->private_data = &inst->fh; + inst->state = VPU_CODEC_STATE_DEINIT; + inst->workqueue = alloc_workqueue("vpu_inst", WQ_UNBOUND | WQ_MEM_RECLAIM, 1); + if (inst->workqueue) { + INIT_WORK(&inst->msg_work, vpu_inst_run_work); + ret = kfifo_init(&inst->msg_fifo, + inst->msg_buffer, + rounddown_pow_of_two(sizeof(inst->msg_buffer))); + if (ret) { + destroy_workqueue(inst->workqueue); + inst->workqueue = NULL; + } + } + vpu_trace(vpu->dev, "tgid = %d, pid = %d, type = %s, inst = %p\n", + inst->tgid, inst->pid, vpu_core_type_desc(inst->type), inst); + + return 0; +error: + vpu_inst_put(inst); + return ret; +} + +int vpu_v4l2_close(struct file *file) +{ + struct vpu_dev *vpu = video_drvdata(file); + struct vpu_inst *inst = to_inst(file); + + vpu_trace(vpu->dev, "tgid = %d, pid = %d, inst = %p\n", inst->tgid, inst->pid, inst); + + vpu_inst_lock(inst); + if (inst->fh.m2m_ctx) { + v4l2_m2m_ctx_release(inst->fh.m2m_ctx); + inst->fh.m2m_ctx = NULL; + } + vpu_inst_unlock(inst); + + call_vop(inst, release); + vpu_inst_unregister(inst); + vpu_inst_put(inst); + + return 0; +} + +int vpu_add_func(struct vpu_dev *vpu, struct vpu_func *func) +{ + struct video_device *vfd; + int ret; + + if (!vpu || !func) + return -EINVAL; + + if (func->vfd) + return 0; + + func->m2m_dev = v4l2_m2m_init(&vpu_m2m_ops); + if (IS_ERR(func->m2m_dev)) { + dev_err(vpu->dev, "v4l2_m2m_init fail\n"); + func->vfd = NULL; + return PTR_ERR(func->m2m_dev); + } + + vfd = video_device_alloc(); + if (!vfd) { + v4l2_m2m_release(func->m2m_dev); + dev_err(vpu->dev, "alloc vpu decoder video device fail\n"); + return -ENOMEM; + } + vfd->release = video_device_release; + vfd->vfl_dir = VFL_DIR_M2M; + vfd->v4l2_dev = &vpu->v4l2_dev; + vfd->device_caps = V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING; + if (func->type == VPU_CORE_TYPE_ENC) { + strscpy(vfd->name, "amphion-vpu-encoder", sizeof(vfd->name)); + vfd->fops = venc_get_fops(); + vfd->ioctl_ops = venc_get_ioctl_ops(); + } else { + strscpy(vfd->name, "amphion-vpu-decoder", sizeof(vfd->name)); + vfd->fops = vdec_get_fops(); + vfd->ioctl_ops = vdec_get_ioctl_ops(); + } + + ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1); + if (ret) { + video_device_release(vfd); + v4l2_m2m_release(func->m2m_dev); + return ret; + } + video_set_drvdata(vfd, vpu); + func->vfd = vfd; + + ret = v4l2_m2m_register_media_controller(func->m2m_dev, func->vfd, func->function); + if (ret) { + v4l2_m2m_release(func->m2m_dev); + func->m2m_dev = NULL; + video_unregister_device(func->vfd); + func->vfd = NULL; + return ret; + } + + return 0; +} + +void vpu_remove_func(struct vpu_func *func) +{ + if (!func) + return; + + if (func->m2m_dev) { + v4l2_m2m_unregister_media_controller(func->m2m_dev); + v4l2_m2m_release(func->m2m_dev); + func->m2m_dev = NULL; + } + if (func->vfd) { + video_unregister_device(func->vfd); + func->vfd = NULL; + } +} diff --git 
a/drivers/media/platform/amphion/vpu_v4l2.h b/drivers/media/platform/amphion/vpu_v4l2.h new file mode 100644 index 000000000000..90fa7ea67495 --- /dev/null +++ b/drivers/media/platform/amphion/vpu_v4l2.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2020-2021 NXP + */ + +#ifndef _AMPHION_VPU_V4L2_H +#define _AMPHION_VPU_V4L2_H + +#include + +void vpu_inst_lock(struct vpu_inst *inst); +void vpu_inst_unlock(struct vpu_inst *inst); +void vpu_set_buffer_state(struct vb2_v4l2_buffer *vbuf, unsigned int state); +unsigned int vpu_get_buffer_state(struct vb2_v4l2_buffer *vbuf); + +int vpu_v4l2_open(struct file *file, struct vpu_inst *inst); +int vpu_v4l2_close(struct file *file); + +const struct vpu_format *vpu_try_fmt_common(struct vpu_inst *inst, struct v4l2_format *f); +int vpu_process_output_buffer(struct vpu_inst *inst); +int vpu_process_capture_buffer(struct vpu_inst *inst); +struct vb2_v4l2_buffer *vpu_find_buf_by_sequence(struct vpu_inst *inst, u32 type, u32 sequence); +struct vb2_v4l2_buffer *vpu_find_buf_by_idx(struct vpu_inst *inst, u32 type, u32 idx); +void vpu_v4l2_set_error(struct vpu_inst *inst); +int vpu_notify_eos(struct vpu_inst *inst); +int vpu_notify_source_change(struct vpu_inst *inst); +int vpu_set_last_buffer_dequeued(struct vpu_inst *inst); +void vpu_vb2_buffers_return(struct vpu_inst *inst, unsigned int type, enum vb2_buffer_state state); +int vpu_get_num_buffers(struct vpu_inst *inst, u32 type); + +dma_addr_t vpu_get_vb_phy_addr(struct vb2_buffer *vb, u32 plane_no); +unsigned int vpu_get_vb_length(struct vb2_buffer *vb, u32 plane_no); +static inline struct vpu_format *vpu_get_format(struct vpu_inst *inst, u32 type) +{ + if (V4L2_TYPE_IS_OUTPUT(type)) + return &inst->out_format; + else + return &inst->cap_format; +} + +static inline char *vpu_type_name(u32 type) +{ + return V4L2_TYPE_IS_OUTPUT(type) ? "output" : "capture"; +} + +static inline int vpu_vb_is_codecconfig(struct vb2_v4l2_buffer *vbuf) +{ +#ifdef V4L2_BUF_FLAG_CODECCONFIG + return (vbuf->flags & V4L2_BUF_FLAG_CODECCONFIG) ? 
1 : 0; +#else + return 0; +#endif + +#endif
From patchwork Wed Jan 26 03:09:26 2022
X-Patchwork-Submitter: Ming Qian
X-Patchwork-Id: 537047
From: Ming Qian
To: mchehab@kernel.org, shawnguo@kernel.org, robh+dt@kernel.org, s.hauer@pengutronix.de
Cc: hverkuil-cisco@xs4all.nl, kernel@pengutronix.de, festevam@gmail.com, linux-imx@nxp.com, aisheng.dong@nxp.com, linux-media@vger.kernel.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v15.1 07/13] media: amphion: add v4l2 m2m vpu encoder stateful driver
Date: Wed, 26 Jan 2022 11:09:26 +0800
X-Mailer: git-send-email 2.33.0
List-ID: X-Mailing-List: linux-media@vger.kernel.org

This consists of video encoder implementation plus encoder controls.
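For reviewers who want to see the intended usage, below is a minimal userspace sketch of how this stateful encoder is exercised through the standard V4L2 ioctls handled in this patch. It is an illustration only, not part of the driver; the device node path and the bitrate value are assumptions.

/*
 * Minimal userspace sketch: configure the amphion stateful encoder.
 * The /dev/videoN path and the 2 Mbit/s bitrate are assumptions.
 */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int configure_encoder(const char *path)
{
	struct v4l2_format fmt;
	struct v4l2_control ctrl;
	int fd = open(path, O_RDWR);

	if (fd < 0)
		return -1;

	/* Raw NV12M frames are queued on the OUTPUT (to-device) queue. */
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12M;
	fmt.fmt.pix_mp.width = 1280;
	fmt.fmt.pix_mp.height = 720;
	if (ioctl(fd, VIDIOC_S_FMT, &fmt))
		goto err;

	/* Encoded H.264 is dequeued from the CAPTURE (from-device) queue. */
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
	fmt.fmt.pix_mp.width = 1280;
	fmt.fmt.pix_mp.height = 720;
	if (ioctl(fd, VIDIOC_S_FMT, &fmt))
		goto err;

	/* One of the controls registered by venc_ctrl_init(). */
	memset(&ctrl, 0, sizeof(ctrl));
	ctrl.id = V4L2_CID_MPEG_VIDEO_BITRATE;
	ctrl.value = 2000000;
	if (ioctl(fd, VIDIOC_S_CTRL, &ctrl))
		goto err;

	return fd;
err:
	close(fd);
	return -1;
}

The usual REQBUFS/QBUF/STREAMON sequence on both queues then follows, and V4L2_ENC_CMD_STOP drains the encoder at end of stream (handled here by venc_encoder_cmd()).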
Signed-off-by: Ming Qian Signed-off-by: Shijie Qin Signed-off-by: Zhou Peng Tested-by: Nicolas Dufresne --- drivers/media/platform/amphion/venc.c | 1365 +++++++++++++++++++++++++ 1 file changed, 1365 insertions(+) create mode 100644 drivers/media/platform/amphion/venc.c diff --git a/drivers/media/platform/amphion/venc.c b/drivers/media/platform/amphion/venc.c new file mode 100644 index 000000000000..3b96d6f91e6e --- /dev/null +++ b/drivers/media/platform/amphion/venc.c @@ -0,0 +1,1365 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2020-2021 NXP + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "vpu.h" +#include "vpu_defs.h" +#include "vpu_core.h" +#include "vpu_helpers.h" +#include "vpu_v4l2.h" +#include "vpu_cmds.h" +#include "vpu_rpc.h" + +#define VENC_OUTPUT_ENABLE BIT(0) +#define VENC_CAPTURE_ENABLE BIT(1) +#define VENC_ENABLE_MASK (VENC_OUTPUT_ENABLE | VENC_CAPTURE_ENABLE) +#define VENC_MAX_BUF_CNT 8 + +struct venc_t { + struct vpu_encode_params params; + u32 request_key_frame; + u32 input_ready; + u32 cpb_size; + bool bitrate_change; + + struct vpu_buffer enc[VENC_MAX_BUF_CNT]; + struct vpu_buffer ref[VENC_MAX_BUF_CNT]; + struct vpu_buffer act[VENC_MAX_BUF_CNT]; + struct list_head frames; + u32 frame_count; + u32 encode_count; + u32 ready_count; + u32 enable; + u32 stopped; + + u32 skipped_count; + u32 skipped_bytes; + + wait_queue_head_t wq; +}; + +struct venc_frame_t { + struct list_head list; + struct vpu_enc_pic_info info; + u32 bytesused; + s64 timestamp; +}; + +static const struct vpu_format venc_formats[] = { + { + .pixfmt = V4L2_PIX_FMT_NV12M, + .num_planes = 2, + .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE, + }, + { + .pixfmt = V4L2_PIX_FMT_H264, + .num_planes = 1, + .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE, + }, + {0, 0, 0, 0}, +}; + +static int venc_querycap(struct file *file, void *fh, struct v4l2_capability *cap) +{ + strscpy(cap->driver, "amphion-vpu", sizeof(cap->driver)); + strscpy(cap->card, "amphion vpu encoder", sizeof(cap->card)); + strscpy(cap->bus_info, "platform: amphion-vpu", sizeof(cap->bus_info)); + + return 0; +} + +static int venc_enum_fmt(struct file *file, void *fh, struct v4l2_fmtdesc *f) +{ + struct vpu_inst *inst = to_inst(file); + const struct vpu_format *fmt; + + memset(f->reserved, 0, sizeof(f->reserved)); + fmt = vpu_helper_enum_format(inst, f->type, f->index); + if (!fmt) + return -EINVAL; + + f->pixelformat = fmt->pixfmt; + f->flags = fmt->flags; + + return 0; +} + +static int venc_enum_framesizes(struct file *file, void *fh, struct v4l2_frmsizeenum *fsize) +{ + struct vpu_inst *inst = to_inst(file); + const struct vpu_core_resources *res; + + if (!fsize || fsize->index) + return -EINVAL; + + if (!vpu_helper_find_format(inst, 0, fsize->pixel_format)) + return -EINVAL; + + res = vpu_get_resource(inst); + if (!res) + return -EINVAL; + fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE; + fsize->stepwise.max_width = res->max_width; + fsize->stepwise.max_height = res->max_height; + fsize->stepwise.min_width = res->min_width; + fsize->stepwise.min_height = res->min_height; + fsize->stepwise.step_width = res->step_width; + fsize->stepwise.step_height = res->step_height; + + return 0; +} + +static int venc_enum_frameintervals(struct file *file, void *fh, struct v4l2_frmivalenum *fival) +{ + struct vpu_inst *inst = to_inst(file); + const struct vpu_core_resources *res; + + if (!fival || fival->index) + 
return -EINVAL; + + if (!vpu_helper_find_format(inst, 0, fival->pixel_format)) + return -EINVAL; + + if (!fival->width || !fival->height) + return -EINVAL; + + res = vpu_get_resource(inst); + if (!res) + return -EINVAL; + if (fival->width < res->min_width || fival->width > res->max_width || + fival->height < res->min_height || fival->height > res->max_height) + return -EINVAL; + + fival->type = V4L2_FRMIVAL_TYPE_CONTINUOUS; + fival->stepwise.min.numerator = 1; + fival->stepwise.min.denominator = USHRT_MAX; + fival->stepwise.max.numerator = USHRT_MAX; + fival->stepwise.max.denominator = 1; + fival->stepwise.step.numerator = 1; + fival->stepwise.step.denominator = 1; + + return 0; +} + +static int venc_g_fmt(struct file *file, void *fh, struct v4l2_format *f) +{ + struct vpu_inst *inst = to_inst(file); + struct venc_t *venc = inst->priv; + struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp; + struct vpu_format *cur_fmt; + int i; + + cur_fmt = vpu_get_format(inst, f->type); + + pixmp->pixelformat = cur_fmt->pixfmt; + pixmp->num_planes = cur_fmt->num_planes; + pixmp->width = cur_fmt->width; + pixmp->height = cur_fmt->height; + pixmp->field = cur_fmt->field; + pixmp->flags = cur_fmt->flags; + for (i = 0; i < pixmp->num_planes; i++) { + pixmp->plane_fmt[i].bytesperline = cur_fmt->bytesperline[i]; + pixmp->plane_fmt[i].sizeimage = cur_fmt->sizeimage[i]; + } + + f->fmt.pix_mp.colorspace = venc->params.color.primaries; + f->fmt.pix_mp.xfer_func = venc->params.color.transfer; + f->fmt.pix_mp.ycbcr_enc = venc->params.color.matrix; + f->fmt.pix_mp.quantization = venc->params.color.full_range; + + return 0; +} + +static int venc_try_fmt(struct file *file, void *fh, struct v4l2_format *f) +{ + struct vpu_inst *inst = to_inst(file); + + vpu_try_fmt_common(inst, f); + + return 0; +} + +static int venc_s_fmt(struct file *file, void *fh, struct v4l2_format *f) +{ + struct vpu_inst *inst = to_inst(file); + const struct vpu_format *fmt; + struct vpu_format *cur_fmt; + struct vb2_queue *q; + struct venc_t *venc = inst->priv; + struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp; + int i; + + if (!inst || !inst->fh.m2m_ctx) + return -EINVAL; + q = v4l2_m2m_get_vq(inst->fh.m2m_ctx, f->type); + if (!q) + return -EINVAL; + if (vb2_is_busy(q)) + return -EBUSY; + + fmt = vpu_try_fmt_common(inst, f); + if (!fmt) + return -EINVAL; + + cur_fmt = vpu_get_format(inst, f->type); + + cur_fmt->pixfmt = fmt->pixfmt; + cur_fmt->num_planes = fmt->num_planes; + cur_fmt->flags = fmt->flags; + cur_fmt->width = pix_mp->width; + cur_fmt->height = pix_mp->height; + for (i = 0; i < fmt->num_planes; i++) { + cur_fmt->sizeimage[i] = pix_mp->plane_fmt[i].sizeimage; + cur_fmt->bytesperline[i] = pix_mp->plane_fmt[i].bytesperline; + } + + if (pix_mp->field != V4L2_FIELD_ANY) + cur_fmt->field = pix_mp->field; + + if (V4L2_TYPE_IS_OUTPUT(f->type)) { + venc->params.input_format = cur_fmt->pixfmt; + venc->params.src_stride = cur_fmt->bytesperline[0]; + venc->params.src_width = cur_fmt->width; + venc->params.src_height = cur_fmt->height; + venc->params.crop.left = 0; + venc->params.crop.top = 0; + venc->params.crop.width = cur_fmt->width; + venc->params.crop.height = cur_fmt->height; + } else { + venc->params.codec_format = cur_fmt->pixfmt; + venc->params.out_width = cur_fmt->width; + venc->params.out_height = cur_fmt->height; + } + + if (V4L2_TYPE_IS_OUTPUT(f->type)) { + if (!vpu_color_check_primaries(pix_mp->colorspace)) { + venc->params.color.primaries = pix_mp->colorspace; + vpu_color_get_default(venc->params.color.primaries, + 
&venc->params.color.transfer, + &venc->params.color.matrix, + &venc->params.color.full_range); + } + if (!vpu_color_check_transfers(pix_mp->xfer_func)) + venc->params.color.transfer = pix_mp->xfer_func; + if (!vpu_color_check_matrix(pix_mp->ycbcr_enc)) + venc->params.color.matrix = pix_mp->ycbcr_enc; + if (!vpu_color_check_full_range(pix_mp->quantization)) + venc->params.color.full_range = pix_mp->quantization; + } + + pix_mp->colorspace = venc->params.color.primaries; + pix_mp->xfer_func = venc->params.color.transfer; + pix_mp->ycbcr_enc = venc->params.color.matrix; + pix_mp->quantization = venc->params.color.full_range; + + return 0; +} + +static int venc_g_parm(struct file *file, void *fh, struct v4l2_streamparm *parm) +{ + struct vpu_inst *inst = to_inst(file); + struct venc_t *venc = inst->priv; + struct v4l2_fract *timeperframe = &parm->parm.capture.timeperframe; + + if (!parm) + return -EINVAL; + + if (!vpu_helper_check_type(inst, parm->type)) + return -EINVAL; + + parm->parm.capture.capability = V4L2_CAP_TIMEPERFRAME; + parm->parm.capture.readbuffers = 0; + timeperframe->numerator = venc->params.frame_rate.numerator; + timeperframe->denominator = venc->params.frame_rate.denominator; + + return 0; +} + +static int venc_s_parm(struct file *file, void *fh, struct v4l2_streamparm *parm) +{ + struct vpu_inst *inst = to_inst(file); + struct venc_t *venc = inst->priv; + struct v4l2_fract *timeperframe = &parm->parm.capture.timeperframe; + unsigned long n, d; + + if (!parm) + return -EINVAL; + + if (!vpu_helper_check_type(inst, parm->type)) + return -EINVAL; + + if (!timeperframe->numerator) + timeperframe->numerator = venc->params.frame_rate.numerator; + if (!timeperframe->denominator) + timeperframe->denominator = venc->params.frame_rate.denominator; + + venc->params.frame_rate.numerator = timeperframe->numerator; + venc->params.frame_rate.denominator = timeperframe->denominator; + + rational_best_approximation(venc->params.frame_rate.numerator, + venc->params.frame_rate.denominator, + venc->params.frame_rate.numerator, + venc->params.frame_rate.denominator, + &n, &d); + venc->params.frame_rate.numerator = n; + venc->params.frame_rate.denominator = d; + + parm->parm.capture.capability = V4L2_CAP_TIMEPERFRAME; + memset(parm->parm.capture.reserved, 0, sizeof(parm->parm.capture.reserved)); + + return 0; +} + +static int venc_g_selection(struct file *file, void *fh, struct v4l2_selection *s) +{ + struct vpu_inst *inst = to_inst(file); + struct venc_t *venc = inst->priv; + + if (s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT && s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) + return -EINVAL; + + switch (s->target) { + case V4L2_SEL_TGT_CROP_DEFAULT: + case V4L2_SEL_TGT_CROP_BOUNDS: + s->r.left = 0; + s->r.top = 0; + s->r.width = inst->out_format.width; + s->r.height = inst->out_format.height; + break; + case V4L2_SEL_TGT_CROP: + s->r = venc->params.crop; + break; + default: + return -EINVAL; + } + + return 0; +} + +static int venc_valid_crop(struct venc_t *venc, const struct vpu_core_resources *res) +{ + struct v4l2_rect *rect = NULL; + u32 min_width; + u32 min_height; + u32 src_width; + u32 src_height; + + rect = &venc->params.crop; + min_width = res->min_width; + min_height = res->min_height; + src_width = venc->params.src_width; + src_height = venc->params.src_height; + + if (rect->width == 0 || rect->height == 0) + return -EINVAL; + if (rect->left > src_width - min_width || rect->top > src_height - min_height) + return -EINVAL; + + rect->width = min(rect->width, src_width - rect->left); + 
rect->width = max_t(u32, rect->width, min_width); + + rect->height = min(rect->height, src_height - rect->top); + rect->height = max_t(u32, rect->height, min_height); + + return 0; +} + +static int venc_s_selection(struct file *file, void *fh, struct v4l2_selection *s) +{ + struct vpu_inst *inst = to_inst(file); + const struct vpu_core_resources *res; + struct venc_t *venc = inst->priv; + + res = vpu_get_resource(inst); + if (!res) + return -EINVAL; + + if (s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT && s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) + return -EINVAL; + if (s->target != V4L2_SEL_TGT_CROP) + return -EINVAL; + + venc->params.crop.left = ALIGN(s->r.left, res->step_width); + venc->params.crop.top = ALIGN(s->r.top, res->step_height); + venc->params.crop.width = ALIGN(s->r.width, res->step_width); + venc->params.crop.height = ALIGN(s->r.height, res->step_height); + if (venc_valid_crop(venc, res)) { + venc->params.crop.left = 0; + venc->params.crop.top = 0; + venc->params.crop.width = venc->params.src_width; + venc->params.crop.height = venc->params.src_height; + } + + inst->crop = venc->params.crop; + + return 0; +} + +static int venc_drain(struct vpu_inst *inst) +{ + struct venc_t *venc = inst->priv; + int ret; + + if (!inst || !inst->fh.m2m_ctx) + return -EINVAL; + if (inst->state != VPU_CODEC_STATE_DRAIN) + return 0; + + if (v4l2_m2m_num_src_bufs_ready(inst->fh.m2m_ctx)) + return 0; + + if (!venc->input_ready) + return 0; + + venc->input_ready = false; + vpu_trace(inst->dev, "[%d]\n", inst->id); + ret = vpu_session_stop(inst); + if (ret) + return ret; + inst->state = VPU_CODEC_STATE_STOP; + wake_up_all(&venc->wq); + + return 0; +} + +static int venc_request_eos(struct vpu_inst *inst) +{ + inst->state = VPU_CODEC_STATE_DRAIN; + venc_drain(inst); + + return 0; +} + +static int venc_encoder_cmd(struct file *file, void *fh, struct v4l2_encoder_cmd *cmd) +{ + struct vpu_inst *inst = to_inst(file); + int ret; + + ret = v4l2_m2m_ioctl_try_encoder_cmd(file, fh, cmd); + if (ret) + return ret; + + vpu_inst_lock(inst); + if (cmd->cmd == V4L2_ENC_CMD_STOP) { + if (inst->state == VPU_CODEC_STATE_DEINIT) + vpu_set_last_buffer_dequeued(inst); + else + venc_request_eos(inst); + } + vpu_inst_unlock(inst); + + return 0; +} + +static int venc_subscribe_event(struct v4l2_fh *fh, const struct v4l2_event_subscription *sub) +{ + switch (sub->type) { + case V4L2_EVENT_EOS: + return v4l2_event_subscribe(fh, sub, 0, NULL); + case V4L2_EVENT_CTRL: + return v4l2_ctrl_subscribe_event(fh, sub); + default: + return -EINVAL; + } +} + +static const struct v4l2_ioctl_ops venc_ioctl_ops = { + .vidioc_querycap = venc_querycap, + .vidioc_enum_fmt_vid_cap = venc_enum_fmt, + .vidioc_enum_fmt_vid_out = venc_enum_fmt, + .vidioc_enum_framesizes = venc_enum_framesizes, + .vidioc_enum_frameintervals = venc_enum_frameintervals, + .vidioc_g_fmt_vid_cap_mplane = venc_g_fmt, + .vidioc_g_fmt_vid_out_mplane = venc_g_fmt, + .vidioc_try_fmt_vid_cap_mplane = venc_try_fmt, + .vidioc_try_fmt_vid_out_mplane = venc_try_fmt, + .vidioc_s_fmt_vid_cap_mplane = venc_s_fmt, + .vidioc_s_fmt_vid_out_mplane = venc_s_fmt, + .vidioc_g_parm = venc_g_parm, + .vidioc_s_parm = venc_s_parm, + .vidioc_g_selection = venc_g_selection, + .vidioc_s_selection = venc_s_selection, + .vidioc_try_encoder_cmd = v4l2_m2m_ioctl_try_encoder_cmd, + .vidioc_encoder_cmd = venc_encoder_cmd, + .vidioc_subscribe_event = venc_subscribe_event, + .vidioc_unsubscribe_event = v4l2_event_unsubscribe, + .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs, + .vidioc_querybuf = 
v4l2_m2m_ioctl_querybuf, + .vidioc_create_bufs = v4l2_m2m_ioctl_create_bufs, + .vidioc_prepare_buf = v4l2_m2m_ioctl_prepare_buf, + .vidioc_qbuf = v4l2_m2m_ioctl_qbuf, + .vidioc_expbuf = v4l2_m2m_ioctl_expbuf, + .vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf, + .vidioc_streamon = v4l2_m2m_ioctl_streamon, + .vidioc_streamoff = v4l2_m2m_ioctl_streamoff, +}; + +static int venc_op_s_ctrl(struct v4l2_ctrl *ctrl) +{ + struct vpu_inst *inst = ctrl_to_inst(ctrl); + struct venc_t *venc = inst->priv; + int ret = 0; + + vpu_inst_lock(inst); + switch (ctrl->id) { + case V4L2_CID_MPEG_VIDEO_H264_PROFILE: + venc->params.profile = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_H264_LEVEL: + venc->params.level = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_FRAME_RC_ENABLE: + venc->params.rc_enable = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_BITRATE_MODE: + venc->params.rc_mode = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_BITRATE: + if (ctrl->val != venc->params.bitrate) + venc->bitrate_change = true; + venc->params.bitrate = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_BITRATE_PEAK: + venc->params.bitrate_max = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_GOP_SIZE: + venc->params.gop_length = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_B_FRAMES: + venc->params.bframes = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_H264_I_FRAME_QP: + venc->params.i_frame_qp = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_H264_P_FRAME_QP: + venc->params.p_frame_qp = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_H264_B_FRAME_QP: + venc->params.b_frame_qp = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME: + venc->request_key_frame = 1; + break; + case V4L2_CID_MPEG_VIDEO_H264_CPB_SIZE: + venc->cpb_size = ctrl->val * 1024; + break; + case V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_ENABLE: + venc->params.sar.enable = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_IDC: + venc->params.sar.idc = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_H264_VUI_EXT_SAR_WIDTH: + venc->params.sar.width = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_H264_VUI_EXT_SAR_HEIGHT: + venc->params.sar.height = ctrl->val; + break; + case V4L2_CID_MPEG_VIDEO_HEADER_MODE: + break; + default: + ret = -EINVAL; + break; + } + vpu_inst_unlock(inst); + + return ret; +} + +static const struct v4l2_ctrl_ops venc_ctrl_ops = { + .s_ctrl = venc_op_s_ctrl, + .g_volatile_ctrl = vpu_helper_g_volatile_ctrl, +}; + +static int venc_ctrl_init(struct vpu_inst *inst) +{ + struct v4l2_ctrl *ctrl; + int ret; + + ret = v4l2_ctrl_handler_init(&inst->ctrl_handler, 20); + if (ret) + return ret; + + v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_H264_PROFILE, + V4L2_MPEG_VIDEO_H264_PROFILE_HIGH, + ~((1 << V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE) | + (1 << V4L2_MPEG_VIDEO_H264_PROFILE_MAIN) | + (1 << V4L2_MPEG_VIDEO_H264_PROFILE_HIGH)), + V4L2_MPEG_VIDEO_H264_PROFILE_HIGH); + + v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_H264_LEVEL, + V4L2_MPEG_VIDEO_H264_LEVEL_5_1, + 0x0, + V4L2_MPEG_VIDEO_H264_LEVEL_4_0); + + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_FRAME_RC_ENABLE, 0, 1, 1, 1); + + v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_BITRATE_MODE, + V4L2_MPEG_VIDEO_BITRATE_MODE_CBR, + ~((1 << V4L2_MPEG_VIDEO_BITRATE_MODE_VBR) | + (1 << V4L2_MPEG_VIDEO_BITRATE_MODE_CBR)), + V4L2_MPEG_VIDEO_BITRATE_MODE_CBR); + + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_BITRATE, + BITRATE_MIN, + 
BITRATE_MAX, + BITRATE_STEP, + BITRATE_DEFAULT); + + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_BITRATE_PEAK, + BITRATE_MIN, BITRATE_MAX, + BITRATE_STEP, + BITRATE_DEFAULT_PEAK); + + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_GOP_SIZE, 0, (1 << 16) - 1, 1, 30); + + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_B_FRAMES, 0, 4, 1, 0); + + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_H264_I_FRAME_QP, 1, 51, 1, 26); + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_H264_P_FRAME_QP, 1, 51, 1, 28); + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_H264_B_FRAME_QP, 1, 51, 1, 30); + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME, 0, 0, 0, 0); + ctrl = v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MIN_BUFFERS_FOR_CAPTURE, 1, 32, 1, 2); + if (ctrl) + ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE; + ctrl = v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MIN_BUFFERS_FOR_OUTPUT, 1, 32, 1, 2); + if (ctrl) + ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE; + + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_H264_CPB_SIZE, 64, 10240, 1, 1024); + + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_ENABLE, 0, 1, 1, 1); + v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_IDC, + V4L2_MPEG_VIDEO_H264_VUI_SAR_IDC_EXTENDED, + 0x0, + V4L2_MPEG_VIDEO_H264_VUI_SAR_IDC_1x1); + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_H264_VUI_EXT_SAR_WIDTH, + 0, USHRT_MAX, 1, 1); + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_H264_VUI_EXT_SAR_HEIGHT, + 0, USHRT_MAX, 1, 1); + v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops, + V4L2_CID_MPEG_VIDEO_HEADER_MODE, + V4L2_MPEG_VIDEO_HEADER_MODE_JOINED_WITH_1ST_FRAME, + ~(1 << V4L2_MPEG_VIDEO_HEADER_MODE_JOINED_WITH_1ST_FRAME), + V4L2_MPEG_VIDEO_HEADER_MODE_JOINED_WITH_1ST_FRAME); + + ret = v4l2_ctrl_handler_setup(&inst->ctrl_handler); + if (ret) { + dev_err(inst->dev, "[%d] setup ctrls fail, ret = %d\n", inst->id, ret); + v4l2_ctrl_handler_free(&inst->ctrl_handler); + return ret; + } + + return 0; +} + +static bool venc_check_ready(struct vpu_inst *inst, unsigned int type) +{ + struct venc_t *venc = inst->priv; + + if (V4L2_TYPE_IS_OUTPUT(type)) { + if (vpu_helper_get_free_space(inst) < venc->cpb_size) + return false; + return venc->input_ready; + } + + if (list_empty(&venc->frames)) + return false; + return true; +} + +static u32 venc_get_enable_mask(u32 type) +{ + if (V4L2_TYPE_IS_OUTPUT(type)) + return VENC_OUTPUT_ENABLE; + else + return VENC_CAPTURE_ENABLE; +} + +static void venc_set_enable(struct venc_t *venc, u32 type, int enable) +{ + u32 mask = venc_get_enable_mask(type); + + if (enable) + venc->enable |= mask; + else + venc->enable &= ~mask; +} + +static u32 venc_get_enable(struct venc_t *venc, u32 type) +{ + return venc->enable & venc_get_enable_mask(type); +} + +static void venc_input_done(struct vpu_inst *inst) +{ + struct venc_t *venc = inst->priv; + + vpu_inst_lock(inst); + venc->input_ready = true; + vpu_process_output_buffer(inst); + if (inst->state == VPU_CODEC_STATE_DRAIN) + venc_drain(inst); + vpu_inst_unlock(inst); +} + +/* + * It's hardware limitation, that there may be several bytes + * redundant data at the beginning of frame. 
+ * For android platform, the redundant data may cause cts test fail + * So driver will strip them + */ +static int venc_precheck_encoded_frame(struct vpu_inst *inst, struct venc_frame_t *frame) +{ + struct venc_t *venc; + int skipped; + + if (!inst || !frame || !frame->bytesused) + return -EINVAL; + + venc = inst->priv; + skipped = vpu_helper_find_startcode(&inst->stream_buffer, + inst->cap_format.pixfmt, + frame->info.wptr - inst->stream_buffer.phys, + frame->bytesused); + if (skipped > 0) { + frame->bytesused -= skipped; + frame->info.wptr = vpu_helper_step_walk(&inst->stream_buffer, + frame->info.wptr, skipped); + venc->skipped_bytes += skipped; + venc->skipped_count++; + } + + return 0; +} + +static int venc_get_one_encoded_frame(struct vpu_inst *inst, + struct venc_frame_t *frame, + struct vb2_v4l2_buffer *vbuf) +{ + struct venc_t *venc = inst->priv; + + if (!vbuf) + return -EAGAIN; + + if (!venc_get_enable(inst->priv, vbuf->vb2_buf.type)) { + v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR); + return 0; + } + if (frame->bytesused > vbuf->vb2_buf.planes[0].length) { + v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR); + return -ENOMEM; + } + + venc_precheck_encoded_frame(inst, frame); + + if (frame->bytesused) { + u32 rptr = frame->info.wptr; + void *dst = vb2_plane_vaddr(&vbuf->vb2_buf, 0); + + vpu_helper_copy_from_stream_buffer(&inst->stream_buffer, + &rptr, frame->bytesused, dst); + vpu_iface_update_stream_buffer(inst, rptr, 0); + } + vb2_set_plane_payload(&vbuf->vb2_buf, 0, frame->bytesused); + vbuf->sequence = frame->info.frame_id; + vbuf->vb2_buf.timestamp = frame->info.timestamp; + vbuf->field = inst->cap_format.field; + vbuf->flags |= frame->info.pic_type; + vpu_set_buffer_state(vbuf, VPU_BUF_STATE_IDLE); + dev_dbg(inst->dev, "[%d][OUTPUT TS]%32lld\n", inst->id, frame->info.timestamp); + v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE); + venc->ready_count++; + + if (vbuf->flags & V4L2_BUF_FLAG_KEYFRAME) + dev_dbg(inst->dev, "[%d][%d]key frame\n", inst->id, frame->info.frame_id); + + return 0; +} + +static int venc_get_encoded_frames(struct vpu_inst *inst) +{ + struct venc_t *venc; + struct venc_frame_t *frame; + struct venc_frame_t *tmp; + + if (!inst || !inst->priv || !inst->fh.m2m_ctx) + return -EINVAL; + + venc = inst->priv; + list_for_each_entry_safe(frame, tmp, &venc->frames, list) { + if (venc_get_one_encoded_frame(inst, frame, + v4l2_m2m_dst_buf_remove(inst->fh.m2m_ctx))) + break; + list_del_init(&frame->list); + vfree(frame); + } + + return 0; +} + +static int venc_frame_encoded(struct vpu_inst *inst, void *arg) +{ + struct vpu_enc_pic_info *info = arg; + struct venc_frame_t *frame; + struct venc_t *venc; + int ret = 0; + + if (!inst || !info) + return -EINVAL; + venc = inst->priv; + frame = vzalloc(sizeof(*frame)); + if (!frame) + return -ENOMEM; + + memcpy(&frame->info, info, sizeof(frame->info)); + frame->bytesused = info->frame_size; + + vpu_inst_lock(inst); + list_add_tail(&frame->list, &venc->frames); + venc->encode_count++; + venc_get_encoded_frames(inst); + vpu_inst_unlock(inst); + + return ret; +} + +static void venc_buf_done(struct vpu_inst *inst, struct vpu_frame_info *frame) +{ + struct vb2_v4l2_buffer *vbuf; + + if (!inst || !frame || !inst->fh.m2m_ctx) + return; + + vpu_inst_lock(inst); + if (!venc_get_enable(inst->priv, frame->type)) + goto exit; + vbuf = vpu_find_buf_by_sequence(inst, frame->type, frame->sequence); + if (!vbuf) { + dev_err(inst->dev, "[%d] can't find buf: type %d, sequence %d\n", + inst->id, frame->type, frame->sequence); + goto exit; + } + + 
vpu_set_buffer_state(vbuf, VPU_BUF_STATE_IDLE); + if (V4L2_TYPE_IS_OUTPUT(frame->type)) + v4l2_m2m_src_buf_remove_by_buf(inst->fh.m2m_ctx, vbuf); + else + v4l2_m2m_dst_buf_remove_by_buf(inst->fh.m2m_ctx, vbuf); + v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE); +exit: + vpu_inst_unlock(inst); +} + +static void venc_set_last_buffer_dequeued(struct vpu_inst *inst) +{ + struct venc_t *venc = inst->priv; + + if (venc->stopped && list_empty(&venc->frames)) + vpu_set_last_buffer_dequeued(inst); +} + +static void venc_stop_done(struct vpu_inst *inst) +{ + struct venc_t *venc = inst->priv; + + vpu_inst_lock(inst); + venc->stopped = true; + venc_set_last_buffer_dequeued(inst); + vpu_inst_unlock(inst); + + wake_up_all(&venc->wq); +} + +static void venc_event_notify(struct vpu_inst *inst, u32 event, void *data) +{ +} + +static void venc_release(struct vpu_inst *inst) +{ +} + +static void venc_cleanup(struct vpu_inst *inst) +{ + struct venc_t *venc; + + if (!inst) + return; + + venc = inst->priv; + if (venc) + vfree(venc); + inst->priv = NULL; + vfree(inst); +} + +static int venc_start_session(struct vpu_inst *inst, u32 type) +{ + struct venc_t *venc = inst->priv; + int stream_buffer_size; + int ret; + + venc_set_enable(venc, type, 1); + if ((venc->enable & VENC_ENABLE_MASK) != VENC_ENABLE_MASK) + return 0; + + vpu_iface_init_instance(inst); + stream_buffer_size = vpu_iface_get_stream_buffer_size(inst->core); + if (stream_buffer_size > 0) { + inst->stream_buffer.length = max_t(u32, stream_buffer_size, venc->cpb_size * 3); + ret = vpu_alloc_dma(inst->core, &inst->stream_buffer); + if (ret) + goto error; + + inst->use_stream_buffer = true; + vpu_iface_config_stream_buffer(inst, &inst->stream_buffer); + } + + ret = vpu_iface_set_encode_params(inst, &venc->params, 0); + if (ret) + goto error; + ret = vpu_session_configure_codec(inst); + if (ret) + goto error; + + inst->state = VPU_CODEC_STATE_CONFIGURED; + /*vpu_iface_config_memory_resource*/ + + /*config enc expert mode parameter*/ + ret = vpu_iface_set_encode_params(inst, &venc->params, 1); + if (ret) + goto error; + + ret = vpu_session_start(inst); + if (ret) + goto error; + inst->state = VPU_CODEC_STATE_STARTED; + + venc->bitrate_change = false; + venc->input_ready = true; + venc->frame_count = 0; + venc->encode_count = 0; + venc->ready_count = 0; + venc->stopped = false; + vpu_process_output_buffer(inst); + if (venc->frame_count == 0) + dev_err(inst->dev, "[%d] there is no input when starting\n", inst->id); + + return 0; +error: + venc_set_enable(venc, type, 0); + inst->state = VPU_CODEC_STATE_DEINIT; + + vpu_free_dma(&inst->stream_buffer); + return ret; +} + +static void venc_cleanup_mem_resource(struct vpu_inst *inst) +{ + struct venc_t *venc; + u32 i; + + venc = inst->priv; + + for (i = 0; i < ARRAY_SIZE(venc->enc); i++) + vpu_free_dma(&venc->enc[i]); + for (i = 0; i < ARRAY_SIZE(venc->ref); i++) + vpu_free_dma(&venc->ref[i]); + for (i = 0; i < ARRAY_SIZE(venc->act); i++) + vpu_free_dma(&venc->act[i]); +} + +static void venc_request_mem_resource(struct vpu_inst *inst, + u32 enc_frame_size, + u32 enc_frame_num, + u32 ref_frame_size, + u32 ref_frame_num, + u32 act_frame_size, + u32 act_frame_num) +{ + struct venc_t *venc; + u32 i; + int ret; + + venc = inst->priv; + if (enc_frame_num > ARRAY_SIZE(venc->enc)) { + dev_err(inst->dev, "[%d] enc num(%d) is out of range\n", inst->id, enc_frame_num); + return; + } + if (ref_frame_num > ARRAY_SIZE(venc->ref)) { + dev_err(inst->dev, "[%d] ref num(%d) is out of range\n", inst->id, ref_frame_num); + return; + } + 
if (act_frame_num > ARRAY_SIZE(venc->act)) { + dev_err(inst->dev, "[%d] act num(%d) is out of range\n", inst->id, act_frame_num); + return; + } + + for (i = 0; i < enc_frame_num; i++) { + venc->enc[i].length = enc_frame_size; + ret = vpu_alloc_dma(inst->core, &venc->enc[i]); + if (ret) { + venc_cleanup_mem_resource(inst); + return; + } + } + for (i = 0; i < ref_frame_num; i++) { + venc->ref[i].length = ref_frame_size; + ret = vpu_alloc_dma(inst->core, &venc->ref[i]); + if (ret) { + venc_cleanup_mem_resource(inst); + return; + } + } + if (act_frame_num != 1 || act_frame_size > inst->act.length) { + venc_cleanup_mem_resource(inst); + return; + } + venc->act[0].length = act_frame_size; + venc->act[0].phys = inst->act.phys; + venc->act[0].virt = inst->act.virt; + + for (i = 0; i < enc_frame_num; i++) + vpu_iface_config_memory_resource(inst, MEM_RES_ENC, i, &venc->enc[i]); + for (i = 0; i < ref_frame_num; i++) + vpu_iface_config_memory_resource(inst, MEM_RES_REF, i, &venc->ref[i]); + for (i = 0; i < act_frame_num; i++) + vpu_iface_config_memory_resource(inst, MEM_RES_ACT, i, &venc->act[i]); +} + +static void venc_cleanup_frames(struct venc_t *venc) +{ + struct venc_frame_t *frame; + struct venc_frame_t *tmp; + + list_for_each_entry_safe(frame, tmp, &venc->frames, list) { + list_del_init(&frame->list); + vfree(frame); + } +} + +static int venc_stop_session(struct vpu_inst *inst, u32 type) +{ + struct venc_t *venc = inst->priv; + + venc_set_enable(venc, type, 0); + if (venc->enable & VENC_ENABLE_MASK) + return 0; + + if (inst->state == VPU_CODEC_STATE_DEINIT) + return 0; + + if (inst->state != VPU_CODEC_STATE_STOP) + venc_request_eos(inst); + + call_vop(inst, wait_prepare); + if (!wait_event_timeout(venc->wq, venc->stopped, VPU_TIMEOUT)) { + set_bit(inst->id, &inst->core->hang_mask); + vpu_session_debug(inst); + } + call_vop(inst, wait_finish); + + inst->state = VPU_CODEC_STATE_DEINIT; + venc_cleanup_frames(inst->priv); + vpu_free_dma(&inst->stream_buffer); + venc_cleanup_mem_resource(inst); + + return 0; +} + +static int venc_process_output(struct vpu_inst *inst, struct vb2_buffer *vb) +{ + struct venc_t *venc = inst->priv; + struct vb2_v4l2_buffer *vbuf; + u32 flags; + + if (inst->state == VPU_CODEC_STATE_DEINIT) + return -EINVAL; + + vbuf = to_vb2_v4l2_buffer(vb); + if (inst->state == VPU_CODEC_STATE_STARTED) + inst->state = VPU_CODEC_STATE_ACTIVE; + + flags = vbuf->flags; + if (venc->request_key_frame) { + vbuf->flags |= V4L2_BUF_FLAG_KEYFRAME; + venc->request_key_frame = 0; + } + if (venc->bitrate_change) { + vpu_session_update_parameters(inst, &venc->params); + venc->bitrate_change = false; + } + dev_dbg(inst->dev, "[%d][INPUT TS]%32lld\n", inst->id, vb->timestamp); + vpu_iface_input_frame(inst, vb); + vbuf->flags = flags; + venc->input_ready = false; + venc->frame_count++; + vpu_set_buffer_state(vbuf, VPU_BUF_STATE_INUSE); + + return 0; +} + +static int venc_process_capture(struct vpu_inst *inst, struct vb2_buffer *vb) +{ + struct venc_t *venc; + struct venc_frame_t *frame = NULL; + struct vb2_v4l2_buffer *vbuf; + int ret; + + if (!inst || !inst->fh.m2m_ctx) + return -EINVAL; + + venc = inst->priv; + if (list_empty(&venc->frames)) + return -EINVAL; + + frame = list_first_entry(&venc->frames, struct venc_frame_t, list); + vbuf = to_vb2_v4l2_buffer(vb); + v4l2_m2m_dst_buf_remove_by_buf(inst->fh.m2m_ctx, vbuf); + ret = venc_get_one_encoded_frame(inst, frame, vbuf); + if (ret) + return ret; + + list_del_init(&frame->list); + vfree(frame); + return 0; +} + +static void 
venc_on_queue_empty(struct vpu_inst *inst, u32 type) +{ + struct venc_t *venc = inst->priv; + + if (V4L2_TYPE_IS_OUTPUT(type)) + return; + + if (venc->stopped) + venc_set_last_buffer_dequeued(inst); +} + +static int venc_get_debug_info(struct vpu_inst *inst, char *str, u32 size, u32 i) +{ + struct venc_t *venc = inst->priv; + int num = -1; + + switch (i) { + case 0: + num = scnprintf(str, size, "profile = %d\n", venc->params.profile); + break; + case 1: + num = scnprintf(str, size, "level = %d\n", venc->params.level); + break; + case 2: + num = scnprintf(str, size, "fps = %d/%d\n", + venc->params.frame_rate.numerator, + venc->params.frame_rate.denominator); + break; + case 3: + num = scnprintf(str, size, "%d x %d -> %d x %d\n", + venc->params.src_width, + venc->params.src_height, + venc->params.out_width, + venc->params.out_height); + break; + case 4: + num = scnprintf(str, size, "(%d, %d) %d x %d\n", + venc->params.crop.left, + venc->params.crop.top, + venc->params.crop.width, + venc->params.crop.height); + break; + case 5: + num = scnprintf(str, size, + "enable = 0x%x, input = %d, encode = %d, ready = %d, stopped = %d\n", + venc->enable, + venc->frame_count, venc->encode_count, + venc->ready_count, + venc->stopped); + break; + case 6: + num = scnprintf(str, size, "gop = %d\n", venc->params.gop_length); + break; + case 7: + num = scnprintf(str, size, "bframes = %d\n", venc->params.bframes); + break; + case 8: + num = scnprintf(str, size, "rc: %s, mode = %d, bitrate = %d(%d), qp = %d\n", + venc->params.rc_enable ? "enable" : "disable", + venc->params.rc_mode, + venc->params.bitrate, + venc->params.bitrate_max, + venc->params.i_frame_qp); + break; + case 9: + num = scnprintf(str, size, "sar: enable = %d, idc = %d, %d x %d\n", + venc->params.sar.enable, + venc->params.sar.idc, + venc->params.sar.width, + venc->params.sar.height); + + break; + case 10: + num = scnprintf(str, size, + "colorspace: primaries = %d, transfer = %d, matrix = %d, full_range = %d\n", + venc->params.color.primaries, + venc->params.color.transfer, + venc->params.color.matrix, + venc->params.color.full_range); + break; + case 11: + num = scnprintf(str, size, "skipped: count = %d, bytes = %d\n", + venc->skipped_count, venc->skipped_bytes); + break; + default: + break; + } + + return num; +} + +static struct vpu_inst_ops venc_inst_ops = { + .ctrl_init = venc_ctrl_init, + .check_ready = venc_check_ready, + .input_done = venc_input_done, + .get_one_frame = venc_frame_encoded, + .buf_done = venc_buf_done, + .stop_done = venc_stop_done, + .event_notify = venc_event_notify, + .release = venc_release, + .cleanup = venc_cleanup, + .start = venc_start_session, + .mem_request = venc_request_mem_resource, + .stop = venc_stop_session, + .process_output = venc_process_output, + .process_capture = venc_process_capture, + .on_queue_empty = venc_on_queue_empty, + .get_debug_info = venc_get_debug_info, + .wait_prepare = vpu_inst_unlock, + .wait_finish = vpu_inst_lock, +}; + +static void venc_init(struct file *file) +{ + struct vpu_inst *inst = to_inst(file); + struct venc_t *venc; + struct v4l2_format f; + struct v4l2_streamparm parm; + + venc = inst->priv; + venc->params.qp_min = 1; + venc->params.qp_max = 51; + venc->params.qp_min_i = 1; + venc->params.qp_max_i = 51; + venc->params.bitrate_min = BITRATE_MIN; + + memset(&f, 0, sizeof(f)); + f.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE; + f.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12M; + f.fmt.pix_mp.width = 1280; + f.fmt.pix_mp.height = 720; + f.fmt.pix_mp.field = V4L2_FIELD_NONE; + 
f.fmt.pix_mp.colorspace = V4L2_COLORSPACE_REC709; + venc_s_fmt(file, &inst->fh, &f); + + memset(&f, 0, sizeof(f)); + f.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE; + f.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264; + f.fmt.pix_mp.width = 1280; + f.fmt.pix_mp.height = 720; + f.fmt.pix_mp.field = V4L2_FIELD_NONE; + venc_s_fmt(file, &inst->fh, &f); + + memset(&parm, 0, sizeof(parm)); + parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE; + parm.parm.capture.timeperframe.numerator = 1; + parm.parm.capture.timeperframe.denominator = 30; + venc_s_parm(file, &inst->fh, &parm); +} + +static int venc_open(struct file *file) +{ + struct vpu_inst *inst; + struct venc_t *venc; + int ret; + + inst = vzalloc(sizeof(*inst)); + if (!inst) + return -ENOMEM; + + venc = vzalloc(sizeof(*venc)); + if (!venc) { + vfree(inst); + return -ENOMEM; + } + + inst->ops = &venc_inst_ops; + inst->formats = venc_formats; + inst->type = VPU_CORE_TYPE_ENC; + inst->priv = venc; + INIT_LIST_HEAD(&venc->frames); + init_waitqueue_head(&venc->wq); + + ret = vpu_v4l2_open(file, inst); + if (ret) + return ret; + + venc_init(file); + + return 0; +} + +static const struct v4l2_file_operations venc_fops = { + .owner = THIS_MODULE, + .open = venc_open, + .release = vpu_v4l2_close, + .unlocked_ioctl = video_ioctl2, + .poll = v4l2_m2m_fop_poll, + .mmap = v4l2_m2m_fop_mmap, +}; + +const struct v4l2_ioctl_ops *venc_get_ioctl_ops(void) +{ + return &venc_ioctl_ops; +} + +const struct v4l2_file_operations *venc_get_fops(void) +{ + return &venc_fops; +}
From patchwork Wed Jan 26 03:09:29 2022
X-Patchwork-Submitter: Ming Qian
X-Patchwork-Id: 537045
From: Ming Qian
To: mchehab@kernel.org, shawnguo@kernel.org, robh+dt@kernel.org, s.hauer@pengutronix.de
Cc: hverkuil-cisco@xs4all.nl, kernel@pengutronix.de, festevam@gmail.com, linux-imx@nxp.com, aisheng.dong@nxp.com, linux-media@vger.kernel.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v15.1 10/13] media: amphion: implement malone decoder rpc interface
Date: Wed, 26 Jan 2022 11:09:29 +0800
X-Mailer: git-send-email 2.33.0
List-ID: X-Mailing-List: linux-media@vger.kernel.org

This part implements the malone decoder rpc interface.
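To make the firmware configuration plumbing easier to follow, here is a minimal sketch (not part of the patch) of how a per-stream configuration word can be packed with the CONFIG_SET()/STREAM_CONFIG_*_SET() helpers that vpu_malone.c defines below; the values chosen are purely illustrative.

/*
 * Sketch only: assumes the STREAM_CONFIG_*_SET() macros and the
 * vpu_malone_format enum from vpu_malone.c below.
 */
static u32 make_avc_stream_config(void)
{
	u32 cfg = 0;

	/* bits 0..3: bitstream format, here H.264 (MALONE_FMT_AVC) */
	STREAM_CONFIG_FORMAT_SET(MALONE_FMT_AVC, &cfg);
	/* bits 8..9: which stream buffer this instance uses */
	STREAM_CONFIG_STRBUFIDX_SET(0, &cfg);
	/* bits 16..17: play mode */
	STREAM_CONFIG_PLAY_MODE_SET(0, &cfg);

	return cfg;
}

The packed word ends up in the per-stream stream_config[] slot of struct malone_iface, the control structure shared with the Malone firmware, which presumably reads it when the stream is started.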
Signed-off-by: Ming Qian Signed-off-by: Shijie Qin Signed-off-by: Zhou Peng Tested-by: Nicolas Dufresne --- drivers/media/platform/amphion/vpu_malone.c | 1625 +++++++++++++++++++ drivers/media/platform/amphion/vpu_malone.h | 44 + 2 files changed, 1669 insertions(+) create mode 100644 drivers/media/platform/amphion/vpu_malone.c create mode 100644 drivers/media/platform/amphion/vpu_malone.h diff --git a/drivers/media/platform/amphion/vpu_malone.c b/drivers/media/platform/amphion/vpu_malone.c new file mode 100644 index 000000000000..76085b69af6d --- /dev/null +++ b/drivers/media/platform/amphion/vpu_malone.c @@ -0,0 +1,1625 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2020-2021 NXP + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "vpu.h" +#include "vpu_rpc.h" +#include "vpu_defs.h" +#include "vpu_helpers.h" +#include "vpu_v4l2.h" +#include "vpu_cmds.h" +#include "vpu_imx8q.h" +#include "vpu_malone.h" + +#define CMD_SIZE 25600 +#define MSG_SIZE 25600 +#define CODEC_SIZE 0x1000 +#define JPEG_SIZE 0x1000 +#define SEQ_SIZE 0x1000 +#define GOP_SIZE 0x1000 +#define PIC_SIZE 0x1000 +#define QMETER_SIZE 0x1000 +#define DBGLOG_SIZE 0x10000 +#define DEBUG_SIZE 0x80000 +#define ENG_SIZE 0x1000 +#define MALONE_SKIPPED_FRAME_ID 0x555 + +#define MALONE_ALIGN_MBI 0x800 +#define MALONE_DCP_CHUNK_BIT 16 +#define MALONE_DCP_SIZE_MAX 0x3000000 +#define MALONE_DCP_SIZE_MIN 0x100000 +#define MALONE_DCP_FIXED_MB_ALLOC 250 + +#define CONFIG_SET(val, cfg, pos, mask) \ + (*(cfg) |= (((val) << (pos)) & (mask))) +//x means source data , y means destination data +#define STREAM_CONFIG_FORMAT_SET(x, y) CONFIG_SET(x, y, 0, 0x0000000F) +#define STREAM_CONFIG_STRBUFIDX_SET(x, y) CONFIG_SET(x, y, 8, 0x00000300) +#define STREAM_CONFIG_NOSEQ_SET(x, y) CONFIG_SET(x, y, 10, 0x00000400) +#define STREAM_CONFIG_DEBLOCK_SET(x, y) CONFIG_SET(x, y, 11, 0x00000800) +#define STREAM_CONFIG_DERING_SET(x, y) CONFIG_SET(x, y, 12, 0x00001000) +#define STREAM_CONFIG_IBWAIT_SET(x, y) CONFIG_SET(x, y, 13, 0x00002000) +#define STREAM_CONFIG_FBC_SET(x, y) CONFIG_SET(x, y, 14, 0x00004000) +#define STREAM_CONFIG_PLAY_MODE_SET(x, y) CONFIG_SET(x, y, 16, 0x00030000) +#define STREAM_CONFIG_ENABLE_DCP_SET(x, y) CONFIG_SET(x, y, 20, 0x00100000) +#define STREAM_CONFIG_NUM_STR_BUF_SET(x, y) CONFIG_SET(x, y, 21, 0x00600000) +#define STREAM_CONFIG_MALONE_USAGE_SET(x, y) CONFIG_SET(x, y, 23, 0x01800000) +#define STREAM_CONFIG_MULTI_VID_SET(x, y) CONFIG_SET(x, y, 25, 0x02000000) +#define STREAM_CONFIG_OBFUSC_EN_SET(x, y) CONFIG_SET(x, y, 26, 0x04000000) +#define STREAM_CONFIG_RC4_EN_SET(x, y) CONFIG_SET(x, y, 27, 0x08000000) +#define STREAM_CONFIG_MCX_SET(x, y) CONFIG_SET(x, y, 28, 0x10000000) +#define STREAM_CONFIG_PES_SET(x, y) CONFIG_SET(x, y, 29, 0x20000000) +#define STREAM_CONFIG_NUM_DBE_SET(x, y) CONFIG_SET(x, y, 30, 0x40000000) +#define STREAM_CONFIG_FS_CTRL_MODE_SET(x, y) CONFIG_SET(x, y, 31, 0x80000000) + +enum vpu_malone_stream_input_mode { + INVALID_MODE = 0, + FRAME_LVL, + NON_FRAME_LVL +}; + +enum vpu_malone_format { + MALONE_FMT_NULL = 0x0, + MALONE_FMT_AVC = 0x1, + MALONE_FMT_MP2 = 0x2, + MALONE_FMT_VC1 = 0x3, + MALONE_FMT_AVS = 0x4, + MALONE_FMT_ASP = 0x5, + MALONE_FMT_JPG = 0x6, + MALONE_FMT_RV = 0x7, + MALONE_FMT_VP6 = 0x8, + MALONE_FMT_SPK = 0x9, + MALONE_FMT_VP8 = 0xA, + MALONE_FMT_HEVC = 0xB, + MALONE_FMT_LAST = MALONE_FMT_HEVC +}; + +enum { + VID_API_CMD_NULL = 0x00, + VID_API_CMD_PARSE_NEXT_SEQ = 0x01, + 
VID_API_CMD_PARSE_NEXT_I = 0x02, + VID_API_CMD_PARSE_NEXT_IP = 0x03, + VID_API_CMD_PARSE_NEXT_ANY = 0x04, + VID_API_CMD_DEC_PIC = 0x05, + VID_API_CMD_UPDATE_ES_WR_PTR = 0x06, + VID_API_CMD_UPDATE_ES_RD_PTR = 0x07, + VID_API_CMD_UPDATE_UDATA = 0x08, + VID_API_CMD_GET_FSINFO = 0x09, + VID_API_CMD_SKIP_PIC = 0x0a, + VID_API_CMD_DEC_CHUNK = 0x0b, + VID_API_CMD_START = 0x10, + VID_API_CMD_STOP = 0x11, + VID_API_CMD_ABORT = 0x12, + VID_API_CMD_RST_BUF = 0x13, + VID_API_CMD_FS_RELEASE = 0x15, + VID_API_CMD_MEM_REGION_ATTACH = 0x16, + VID_API_CMD_MEM_REGION_DETACH = 0x17, + VID_API_CMD_MVC_VIEW_SELECT = 0x18, + VID_API_CMD_FS_ALLOC = 0x19, + VID_API_CMD_DBG_GET_STATUS = 0x1C, + VID_API_CMD_DBG_START_LOG = 0x1D, + VID_API_CMD_DBG_STOP_LOG = 0x1E, + VID_API_CMD_DBG_DUMP_LOG = 0x1F, + VID_API_CMD_YUV_READY = 0x20, + VID_API_CMD_TS = 0x21, + + VID_API_CMD_FIRM_RESET = 0x40, + + VID_API_CMD_SNAPSHOT = 0xAA, + VID_API_CMD_ROLL_SNAPSHOT = 0xAB, + VID_API_CMD_LOCK_SCHEDULER = 0xAC, + VID_API_CMD_UNLOCK_SCHEDULER = 0xAD, + VID_API_CMD_CQ_FIFO_DUMP = 0xAE, + VID_API_CMD_DBG_FIFO_DUMP = 0xAF, + VID_API_CMD_SVC_ILP = 0xBB, + VID_API_CMD_FW_STATUS = 0xF0, + VID_API_CMD_INVALID = 0xFF +}; + +enum { + VID_API_EVENT_NULL = 0x00, + VID_API_EVENT_RESET_DONE = 0x01, + VID_API_EVENT_SEQ_HDR_FOUND = 0x02, + VID_API_EVENT_PIC_HDR_FOUND = 0x03, + VID_API_EVENT_PIC_DECODED = 0x04, + VID_API_EVENT_FIFO_LOW = 0x05, + VID_API_EVENT_FIFO_HIGH = 0x06, + VID_API_EVENT_FIFO_EMPTY = 0x07, + VID_API_EVENT_FIFO_FULL = 0x08, + VID_API_EVENT_BS_ERROR = 0x09, + VID_API_EVENT_UDATA_FIFO_UPTD = 0x0A, + VID_API_EVENT_RES_CHANGE = 0x0B, + VID_API_EVENT_FIFO_OVF = 0x0C, + VID_API_EVENT_CHUNK_DECODED = 0x0D, + VID_API_EVENT_REQ_FRAME_BUFF = 0x10, + VID_API_EVENT_FRAME_BUFF_RDY = 0x11, + VID_API_EVENT_REL_FRAME_BUFF = 0x12, + VID_API_EVENT_STR_BUF_RST = 0x13, + VID_API_EVENT_RET_PING = 0x14, + VID_API_EVENT_QMETER = 0x15, + VID_API_EVENT_STR_FMT_CHANGE = 0x16, + VID_API_EVENT_FIRMWARE_XCPT = 0x17, + VID_API_EVENT_START_DONE = 0x18, + VID_API_EVENT_STOPPED = 0x19, + VID_API_EVENT_ABORT_DONE = 0x1A, + VID_API_EVENT_FINISHED = 0x1B, + VID_API_EVENT_DBG_STAT_UPDATE = 0x1C, + VID_API_EVENT_DBG_LOG_STARTED = 0x1D, + VID_API_EVENT_DBG_LOG_STOPPED = 0x1E, + VID_API_EVENT_DBG_LOG_UPDATED = 0x1F, + VID_API_EVENT_DBG_MSG_DEC = 0x20, + VID_API_EVENT_DEC_SC_ERR = 0x21, + VID_API_EVENT_CQ_FIFO_DUMP = 0x22, + VID_API_EVENT_DBG_FIFO_DUMP = 0x23, + VID_API_EVENT_DEC_CHECK_RES = 0x24, + VID_API_EVENT_DEC_CFG_INFO = 0x25, + VID_API_EVENT_UNSUPPORTED_STREAM = 0x26, + VID_API_EVENT_STR_SUSPENDED = 0x30, + VID_API_EVENT_SNAPSHOT_DONE = 0x40, + VID_API_EVENT_FW_STATUS = 0xF0, + VID_API_EVENT_INVALID = 0xFF +}; + +struct vpu_malone_buffer_desc { + struct vpu_rpc_buffer_desc buffer; + u32 low; + u32 high; +}; + +struct vpu_malone_str_buffer { + u32 wptr; + u32 rptr; + u32 start; + u32 end; + u32 lwm; +}; + +struct vpu_malone_picth_info { + u32 frame_pitch; +}; + +struct vpu_malone_table_desc { + u32 array_base; + u32 size; +}; + +struct vpu_malone_dbglog_desc { + u32 addr; + u32 size; + u32 level; + u32 reserved; +}; + +struct vpu_malone_frame_buffer { + u32 addr; + u32 size; +}; + +struct vpu_malone_udata { + u32 base; + u32 total_size; + u32 slot_size; +}; + +struct vpu_malone_buffer_info { + u32 stream_input_mode; + u32 stream_pic_input_count; + u32 stream_pic_parsed_count; + u32 stream_buffer_threshold; + u32 stream_pic_end_flag; +}; + +struct vpu_malone_encrypt_info { + u32 rec4key[8]; + u32 obfusc; +}; + +struct malone_iface { + u32 exec_base_addr; + u32 
exec_area_size; + struct vpu_malone_buffer_desc cmd_buffer_desc; + struct vpu_malone_buffer_desc msg_buffer_desc; + u32 cmd_int_enable[VID_API_NUM_STREAMS]; + struct vpu_malone_picth_info stream_pitch_info[VID_API_NUM_STREAMS]; + u32 stream_config[VID_API_NUM_STREAMS]; + struct vpu_malone_table_desc codec_param_tab_desc; + struct vpu_malone_table_desc jpeg_param_tab_desc; + u32 stream_buffer_desc[VID_API_NUM_STREAMS][VID_API_MAX_BUF_PER_STR]; + struct vpu_malone_table_desc seq_info_tab_desc; + struct vpu_malone_table_desc pic_info_tab_desc; + struct vpu_malone_table_desc gop_info_tab_desc; + struct vpu_malone_table_desc qmeter_info_tab_desc; + u32 stream_error[VID_API_NUM_STREAMS]; + u32 fw_version; + u32 fw_offset; + u32 max_streams; + struct vpu_malone_dbglog_desc dbglog_desc; + struct vpu_rpc_buffer_desc api_cmd_buffer_desc[VID_API_NUM_STREAMS]; + struct vpu_malone_udata udata_buffer[VID_API_NUM_STREAMS]; + struct vpu_malone_buffer_desc debug_buffer_desc; + struct vpu_malone_buffer_desc eng_access_buff_desc[VID_API_NUM_STREAMS]; + u32 encrypt_info[VID_API_NUM_STREAMS]; + struct vpu_rpc_system_config system_cfg; + u32 api_version; + struct vpu_malone_buffer_info stream_buff_info[VID_API_NUM_STREAMS]; +}; + +struct malone_jpg_params { + u32 rotation_angle; + u32 horiz_scale_factor; + u32 vert_scale_factor; + u32 rotation_mode; + u32 rgb_mode; + u32 chunk_mode; /* 0 ~ 1 */ + u32 last_chunk; /* 0 ~ 1 */ + u32 chunk_rows; /* 0 ~ 255 */ + u32 num_bytes; + u32 jpg_crop_x; + u32 jpg_crop_y; + u32 jpg_crop_width; + u32 jpg_crop_height; + u32 jpg_mjpeg_mode; + u32 jpg_mjpeg_interlaced; +}; + +struct malone_codec_params { + u32 disp_imm; + u32 fourcc; + u32 codec_version; + u32 frame_rate; + u32 dbglog_enable; + u32 bsdma_lwm; + u32 bbd_coring; + u32 bbd_s_thr_row; + u32 bbd_p_thr_row; + u32 bbd_s_thr_logo_row; + u32 bbd_p_thr_logo_row; + u32 bbd_s_thr_col; + u32 bbd_p_thr_col; + u32 bbd_chr_thr_row; + u32 bbd_chr_thr_col; + u32 bbd_uv_mid_level; + u32 bbd_excl_win_mb_left; + u32 bbd_excl_win_mb_right; +}; + +struct malone_padding_scode { + u32 scode_type; + u32 pixelformat; + u32 data[2]; +}; + +struct malone_fmt_mapping { + u32 pixelformat; + enum vpu_malone_format malone_format; +}; + +struct malone_scode_t { + struct vpu_inst *inst; + struct vb2_buffer *vb; + u32 wptr; + u32 need_data; +}; + +struct malone_scode_handler { + u32 pixelformat; + int (*insert_scode_seq)(struct malone_scode_t *scode); + int (*insert_scode_pic)(struct malone_scode_t *scode); +}; + +struct vpu_dec_ctrl { + struct malone_codec_params *codec_param; + struct malone_jpg_params *jpg; + void *seq_mem; + void *pic_mem; + void *gop_mem; + void *qmeter_mem; + void *dbglog_mem; + struct vpu_malone_str_buffer *str_buf[VID_API_NUM_STREAMS]; + u32 buf_addr[VID_API_NUM_STREAMS]; +}; + +u32 vpu_malone_get_data_size(void) +{ + return sizeof(struct vpu_dec_ctrl); +} + +void vpu_malone_init_rpc(struct vpu_shared_addr *shared, + struct vpu_buffer *rpc, dma_addr_t boot_addr) +{ + struct malone_iface *iface; + struct vpu_dec_ctrl *hc; + unsigned long base_phy_addr; + unsigned long phy_addr; + unsigned long offset; + unsigned int i; + + if (rpc->phys < boot_addr) + return; + + iface = rpc->virt; + base_phy_addr = rpc->phys - boot_addr; + hc = shared->priv; + + shared->iface = iface; + shared->boot_addr = boot_addr; + + iface->exec_base_addr = base_phy_addr; + iface->exec_area_size = rpc->length; + + offset = sizeof(struct malone_iface); + phy_addr = base_phy_addr + offset; + + shared->cmd_desc = &iface->cmd_buffer_desc.buffer; + 
shared->cmd_mem_vir = rpc->virt + offset; + iface->cmd_buffer_desc.buffer.start = + iface->cmd_buffer_desc.buffer.rptr = + iface->cmd_buffer_desc.buffer.wptr = phy_addr; + iface->cmd_buffer_desc.buffer.end = iface->cmd_buffer_desc.buffer.start + CMD_SIZE; + offset += CMD_SIZE; + phy_addr = base_phy_addr + offset; + + shared->msg_desc = &iface->msg_buffer_desc.buffer; + shared->msg_mem_vir = rpc->virt + offset; + iface->msg_buffer_desc.buffer.start = + iface->msg_buffer_desc.buffer.wptr = + iface->msg_buffer_desc.buffer.rptr = phy_addr; + iface->msg_buffer_desc.buffer.end = iface->msg_buffer_desc.buffer.start + MSG_SIZE; + offset += MSG_SIZE; + phy_addr = base_phy_addr + offset; + + iface->codec_param_tab_desc.array_base = phy_addr; + hc->codec_param = rpc->virt + offset; + offset += CODEC_SIZE; + phy_addr = base_phy_addr + offset; + + iface->jpeg_param_tab_desc.array_base = phy_addr; + hc->jpg = rpc->virt + offset; + offset += JPEG_SIZE; + phy_addr = base_phy_addr + offset; + + iface->seq_info_tab_desc.array_base = phy_addr; + hc->seq_mem = rpc->virt + offset; + offset += SEQ_SIZE; + phy_addr = base_phy_addr + offset; + + iface->pic_info_tab_desc.array_base = phy_addr; + hc->pic_mem = rpc->virt + offset; + offset += PIC_SIZE; + phy_addr = base_phy_addr + offset; + + iface->gop_info_tab_desc.array_base = phy_addr; + hc->gop_mem = rpc->virt + offset; + offset += GOP_SIZE; + phy_addr = base_phy_addr + offset; + + iface->qmeter_info_tab_desc.array_base = phy_addr; + hc->qmeter_mem = rpc->virt + offset; + offset += QMETER_SIZE; + phy_addr = base_phy_addr + offset; + + iface->dbglog_desc.addr = phy_addr; + iface->dbglog_desc.size = DBGLOG_SIZE; + hc->dbglog_mem = rpc->virt + offset; + offset += DBGLOG_SIZE; + phy_addr = base_phy_addr + offset; + + for (i = 0; i < VID_API_NUM_STREAMS; i++) { + iface->eng_access_buff_desc[i].buffer.start = + iface->eng_access_buff_desc[i].buffer.wptr = + iface->eng_access_buff_desc[i].buffer.rptr = phy_addr; + iface->eng_access_buff_desc[i].buffer.end = + iface->eng_access_buff_desc[i].buffer.start + ENG_SIZE; + offset += ENG_SIZE; + phy_addr = base_phy_addr + offset; + } + + for (i = 0; i < VID_API_NUM_STREAMS; i++) { + iface->encrypt_info[i] = phy_addr; + offset += sizeof(struct vpu_malone_encrypt_info); + phy_addr = base_phy_addr + offset; + } + + rpc->bytesused = offset; +} + +void vpu_malone_set_log_buf(struct vpu_shared_addr *shared, + struct vpu_buffer *log) +{ + struct malone_iface *iface = shared->iface; + + iface->debug_buffer_desc.buffer.start = + iface->debug_buffer_desc.buffer.wptr = + iface->debug_buffer_desc.buffer.rptr = log->phys - shared->boot_addr; + iface->debug_buffer_desc.buffer.end = iface->debug_buffer_desc.buffer.start + log->length; +} + +static u32 get_str_buffer_offset(u32 instance) +{ + return DEC_MFD_XREG_SLV_BASE + MFD_MCX + MFD_MCX_OFF * instance; +} + +void vpu_malone_set_system_cfg(struct vpu_shared_addr *shared, + u32 regs_base, void __iomem *regs, u32 core_id) +{ + struct malone_iface *iface = shared->iface; + struct vpu_rpc_system_config *config = &iface->system_cfg; + struct vpu_dec_ctrl *hc = shared->priv; + int i; + + vpu_imx8q_set_system_cfg_common(config, regs_base, core_id); + for (i = 0; i < VID_API_NUM_STREAMS; i++) { + u32 offset = get_str_buffer_offset(i); + + hc->buf_addr[i] = regs_base + offset; + hc->str_buf[i] = regs + offset; + } +} + +u32 vpu_malone_get_version(struct vpu_shared_addr *shared) +{ + struct malone_iface *iface = shared->iface; + + return iface->fw_version; +} + +int 
vpu_malone_get_stream_buffer_size(struct vpu_shared_addr *shared) +{ + return 0xc00000; +} + +int vpu_malone_config_stream_buffer(struct vpu_shared_addr *shared, + u32 instance, + struct vpu_buffer *buf) +{ + struct malone_iface *iface = shared->iface; + struct vpu_dec_ctrl *hc = shared->priv; + struct vpu_malone_str_buffer *str_buf = hc->str_buf[instance]; + + str_buf->start = buf->phys; + str_buf->rptr = buf->phys; + str_buf->wptr = buf->phys; + str_buf->end = buf->phys + buf->length; + str_buf->lwm = 0x1; + + iface->stream_buffer_desc[instance][0] = hc->buf_addr[instance]; + + return 0; +} + +int vpu_malone_get_stream_buffer_desc(struct vpu_shared_addr *shared, + u32 instance, + struct vpu_rpc_buffer_desc *desc) +{ + struct vpu_dec_ctrl *hc = shared->priv; + struct vpu_malone_str_buffer *str_buf = hc->str_buf[instance]; + + if (desc) { + desc->wptr = str_buf->wptr; + desc->rptr = str_buf->rptr; + desc->start = str_buf->start; + desc->end = str_buf->end; + } + + return 0; +} + +static void vpu_malone_update_wptr(struct vpu_malone_str_buffer *str_buf, u32 wptr) +{ + /*update wptr after data is written*/ + mb(); + str_buf->wptr = wptr; +} + +static void vpu_malone_update_rptr(struct vpu_malone_str_buffer *str_buf, u32 rptr) +{ + /*update rptr after data is read*/ + mb(); + str_buf->rptr = rptr; +} + +int vpu_malone_update_stream_buffer(struct vpu_shared_addr *shared, + u32 instance, u32 ptr, bool write) +{ + struct vpu_dec_ctrl *hc = shared->priv; + struct vpu_malone_str_buffer *str_buf = hc->str_buf[instance]; + + if (write) + vpu_malone_update_wptr(str_buf, ptr); + else + vpu_malone_update_rptr(str_buf, ptr); + + return 0; +} + +static struct malone_fmt_mapping fmt_mappings[] = { + {V4L2_PIX_FMT_H264, MALONE_FMT_AVC}, + {V4L2_PIX_FMT_H264_MVC, MALONE_FMT_AVC}, + {V4L2_PIX_FMT_HEVC, MALONE_FMT_HEVC}, + {V4L2_PIX_FMT_VC1_ANNEX_G, MALONE_FMT_VC1}, + {V4L2_PIX_FMT_VC1_ANNEX_L, MALONE_FMT_VC1}, + {V4L2_PIX_FMT_MPEG2, MALONE_FMT_MP2}, + {V4L2_PIX_FMT_MPEG4, MALONE_FMT_ASP}, + {V4L2_PIX_FMT_XVID, MALONE_FMT_ASP}, + {V4L2_PIX_FMT_H263, MALONE_FMT_ASP}, + {V4L2_PIX_FMT_JPEG, MALONE_FMT_JPG}, + {V4L2_PIX_FMT_VP8, MALONE_FMT_VP8}, +}; + +static enum vpu_malone_format vpu_malone_format_remap(u32 pixelformat) +{ + u32 i; + + for (i = 0; i < ARRAY_SIZE(fmt_mappings); i++) { + if (pixelformat == fmt_mappings[i].pixelformat) + return fmt_mappings[i].malone_format; + } + + return MALONE_FMT_NULL; +} + +static void vpu_malone_set_stream_cfg(struct vpu_shared_addr *shared, + u32 instance, + enum vpu_malone_format malone_format) +{ + struct malone_iface *iface = shared->iface; + u32 *curr_str_cfg = &iface->stream_config[instance]; + + *curr_str_cfg = 0; + STREAM_CONFIG_FORMAT_SET(malone_format, curr_str_cfg); + STREAM_CONFIG_STRBUFIDX_SET(0, curr_str_cfg); + STREAM_CONFIG_NOSEQ_SET(0, curr_str_cfg); + STREAM_CONFIG_DEBLOCK_SET(0, curr_str_cfg); + STREAM_CONFIG_DERING_SET(0, curr_str_cfg); + STREAM_CONFIG_PLAY_MODE_SET(0x3, curr_str_cfg); + STREAM_CONFIG_FS_CTRL_MODE_SET(0x1, curr_str_cfg); + STREAM_CONFIG_ENABLE_DCP_SET(1, curr_str_cfg); + STREAM_CONFIG_NUM_STR_BUF_SET(1, curr_str_cfg); + STREAM_CONFIG_MALONE_USAGE_SET(1, curr_str_cfg); + STREAM_CONFIG_MULTI_VID_SET(0, curr_str_cfg); + STREAM_CONFIG_OBFUSC_EN_SET(0, curr_str_cfg); + STREAM_CONFIG_RC4_EN_SET(0, curr_str_cfg); + STREAM_CONFIG_MCX_SET(1, curr_str_cfg); + STREAM_CONFIG_PES_SET(0, curr_str_cfg); + STREAM_CONFIG_NUM_DBE_SET(1, curr_str_cfg); +} + +static int vpu_malone_set_params(struct vpu_shared_addr *shared, + u32 instance, + struct 
vpu_decode_params *params) +{ + struct malone_iface *iface = shared->iface; + struct vpu_dec_ctrl *hc = shared->priv; + enum vpu_malone_format malone_format; + + malone_format = vpu_malone_format_remap(params->codec_format); + iface->udata_buffer[instance].base = params->udata.base; + iface->udata_buffer[instance].slot_size = params->udata.size; + + vpu_malone_set_stream_cfg(shared, instance, malone_format); + + if (malone_format == MALONE_FMT_JPG) { + //1:JPGD_MJPEG_MODE_A; 2:JPGD_MJPEG_MODE_B + hc->jpg[instance].jpg_mjpeg_mode = 1; + //0: JPGD_MJPEG_PROGRESSIVE + hc->jpg[instance].jpg_mjpeg_interlaced = 0; + } + + hc->codec_param[instance].disp_imm = params->b_dis_reorder ? 1 : 0; + hc->codec_param[instance].dbglog_enable = 0; + iface->dbglog_desc.level = 0; + + if (params->b_non_frame) + iface->stream_buff_info[instance].stream_input_mode = NON_FRAME_LVL; + else + iface->stream_buff_info[instance].stream_input_mode = FRAME_LVL; + iface->stream_buff_info[instance].stream_buffer_threshold = 0; + iface->stream_buff_info[instance].stream_pic_input_count = 0; + + return 0; +} + +static bool vpu_malone_is_non_frame_mode(struct vpu_shared_addr *shared, u32 instance) +{ + struct malone_iface *iface = shared->iface; + + if (iface->stream_buff_info[instance].stream_input_mode == NON_FRAME_LVL) + return true; + + return false; +} + +static int vpu_malone_update_params(struct vpu_shared_addr *shared, + u32 instance, + struct vpu_decode_params *params) +{ + struct malone_iface *iface = shared->iface; + + if (params->end_flag) + iface->stream_buff_info[instance].stream_pic_end_flag = params->end_flag; + params->end_flag = 0; + + return 0; +} + +int vpu_malone_set_decode_params(struct vpu_shared_addr *shared, + u32 instance, + struct vpu_decode_params *params, + u32 update) +{ + if (!params) + return -EINVAL; + + if (!update) + return vpu_malone_set_params(shared, instance, params); + else + return vpu_malone_update_params(shared, instance, params); +} + +static struct vpu_pair malone_cmds[] = { + {VPU_CMD_ID_START, VID_API_CMD_START}, + {VPU_CMD_ID_STOP, VID_API_CMD_STOP}, + {VPU_CMD_ID_ABORT, VID_API_CMD_ABORT}, + {VPU_CMD_ID_RST_BUF, VID_API_CMD_RST_BUF}, + {VPU_CMD_ID_SNAPSHOT, VID_API_CMD_SNAPSHOT}, + {VPU_CMD_ID_FIRM_RESET, VID_API_CMD_FIRM_RESET}, + {VPU_CMD_ID_FS_ALLOC, VID_API_CMD_FS_ALLOC}, + {VPU_CMD_ID_FS_RELEASE, VID_API_CMD_FS_RELEASE}, + {VPU_CMD_ID_TIMESTAMP, VID_API_CMD_TS}, + {VPU_CMD_ID_DEBUG, VID_API_CMD_FW_STATUS}, +}; + +static struct vpu_pair malone_msgs[] = { + {VPU_MSG_ID_RESET_DONE, VID_API_EVENT_RESET_DONE}, + {VPU_MSG_ID_START_DONE, VID_API_EVENT_START_DONE}, + {VPU_MSG_ID_STOP_DONE, VID_API_EVENT_STOPPED}, + {VPU_MSG_ID_ABORT_DONE, VID_API_EVENT_ABORT_DONE}, + {VPU_MSG_ID_BUF_RST, VID_API_EVENT_STR_BUF_RST}, + {VPU_MSG_ID_PIC_EOS, VID_API_EVENT_FINISHED}, + {VPU_MSG_ID_SEQ_HDR_FOUND, VID_API_EVENT_SEQ_HDR_FOUND}, + {VPU_MSG_ID_RES_CHANGE, VID_API_EVENT_RES_CHANGE}, + {VPU_MSG_ID_PIC_HDR_FOUND, VID_API_EVENT_PIC_HDR_FOUND}, + {VPU_MSG_ID_PIC_DECODED, VID_API_EVENT_PIC_DECODED}, + {VPU_MSG_ID_DEC_DONE, VID_API_EVENT_FRAME_BUFF_RDY}, + {VPU_MSG_ID_FRAME_REQ, VID_API_EVENT_REQ_FRAME_BUFF}, + {VPU_MSG_ID_FRAME_RELEASE, VID_API_EVENT_REL_FRAME_BUFF}, + {VPU_MSG_ID_FIFO_LOW, VID_API_EVENT_FIFO_LOW}, + {VPU_MSG_ID_BS_ERROR, VID_API_EVENT_BS_ERROR}, + {VPU_MSG_ID_UNSUPPORTED, VID_API_EVENT_UNSUPPORTED_STREAM}, + {VPU_MSG_ID_FIRMWARE_XCPT, VID_API_EVENT_FIRMWARE_XCPT}, +}; + +static void vpu_malone_pack_fs_alloc(struct vpu_rpc_event *pkt, + struct vpu_fs_info *fs) +{ + const u32 
fs_type[] = { + [MEM_RES_FRAME] = 0, + [MEM_RES_MBI] = 1, + [MEM_RES_DCP] = 2, + }; + + pkt->hdr.num = 7; + pkt->data[0] = fs->id | (fs->tag << 24); + pkt->data[1] = fs->luma_addr; + if (fs->type == MEM_RES_FRAME) { + /* + * if luma_addr equal to chroma_addr, + * means luma(plane[0]) and chromau(plane[1]) used the + * same fd -- usage of NXP codec2. Need to manually + * offset chroma addr. + */ + if (fs->luma_addr == fs->chroma_addr) + fs->chroma_addr = fs->luma_addr + fs->luma_size; + pkt->data[2] = fs->luma_addr + fs->luma_size / 2; + pkt->data[3] = fs->chroma_addr; + pkt->data[4] = fs->chroma_addr + fs->chromau_size / 2; + pkt->data[5] = fs->bytesperline; + } else { + pkt->data[2] = fs->luma_size; + pkt->data[3] = 0; + pkt->data[4] = 0; + pkt->data[5] = 0; + } + pkt->data[6] = fs_type[fs->type]; +} + +static void vpu_malone_pack_fs_release(struct vpu_rpc_event *pkt, + struct vpu_fs_info *fs) +{ + pkt->hdr.num = 1; + pkt->data[0] = fs->id | (fs->tag << 24); +} + +static void vpu_malone_pack_timestamp(struct vpu_rpc_event *pkt, + struct vpu_ts_info *info) +{ + pkt->hdr.num = 3; + if (info->timestamp < 0) { + pkt->data[0] = (u32)-1; + pkt->data[1] = 0; + } else { + pkt->data[0] = info->timestamp / NSEC_PER_SEC; + pkt->data[1] = info->timestamp % NSEC_PER_SEC; + } + pkt->data[2] = info->size; +} + +int vpu_malone_pack_cmd(struct vpu_rpc_event *pkt, u32 index, u32 id, void *data) +{ + int ret; + + ret = vpu_find_dst_by_src(malone_cmds, ARRAY_SIZE(malone_cmds), id); + if (ret < 0) + return ret; + + pkt->hdr.id = ret; + pkt->hdr.num = 0; + pkt->hdr.index = index; + + switch (id) { + case VPU_CMD_ID_FS_ALLOC: + vpu_malone_pack_fs_alloc(pkt, data); + break; + case VPU_CMD_ID_FS_RELEASE: + vpu_malone_pack_fs_release(pkt, data); + break; + case VPU_CMD_ID_TIMESTAMP: + vpu_malone_pack_timestamp(pkt, data); + break; + } + + pkt->hdr.index = index; + return 0; +} + +int vpu_malone_convert_msg_id(u32 id) +{ + return vpu_find_src_by_dst(malone_msgs, ARRAY_SIZE(malone_msgs), id); +} + +static void vpu_malone_fill_planes(struct vpu_dec_codec_info *info) +{ + u32 interlaced = info->progressive ? 
0 : 1; + + info->bytesperline[0] = 0; + info->sizeimage[0] = vpu_helper_get_plane_size(info->pixfmt, + info->decoded_width, + info->decoded_height, + 0, + info->stride, + interlaced, + &info->bytesperline[0]); + info->bytesperline[1] = 0; + info->sizeimage[1] = vpu_helper_get_plane_size(info->pixfmt, + info->decoded_width, + info->decoded_height, + 1, + info->stride, + interlaced, + &info->bytesperline[1]); +} + +static void vpu_malone_init_seq_hdr(struct vpu_dec_codec_info *info) +{ + u32 chunks = info->num_dfe_area >> MALONE_DCP_CHUNK_BIT; + + vpu_malone_fill_planes(info); + + info->mbi_size = (info->sizeimage[0] + info->sizeimage[1]) >> 2; + info->mbi_size = ALIGN(info->mbi_size, MALONE_ALIGN_MBI); + + info->dcp_size = MALONE_DCP_SIZE_MAX; + if (chunks) { + u32 mb_num; + u32 mb_w; + u32 mb_h; + + mb_w = DIV_ROUND_UP(info->decoded_width, 16); + mb_h = DIV_ROUND_UP(info->decoded_height, 16); + mb_num = mb_w * mb_h; + info->dcp_size = mb_num * MALONE_DCP_FIXED_MB_ALLOC * chunks; + info->dcp_size = clamp_t(u32, info->dcp_size, + MALONE_DCP_SIZE_MIN, MALONE_DCP_SIZE_MAX); + } +} + +static void vpu_malone_unpack_seq_hdr(struct vpu_rpc_event *pkt, + struct vpu_dec_codec_info *info) +{ + info->num_ref_frms = pkt->data[0]; + info->num_dpb_frms = pkt->data[1]; + info->num_dfe_area = pkt->data[2]; + info->progressive = pkt->data[3]; + info->width = pkt->data[5]; + info->height = pkt->data[4]; + info->decoded_width = pkt->data[12]; + info->decoded_height = pkt->data[11]; + info->frame_rate.numerator = 1000; + info->frame_rate.denominator = pkt->data[8]; + info->dsp_asp_ratio = pkt->data[9]; + info->level_idc = pkt->data[10]; + info->bit_depth_luma = pkt->data[13]; + info->bit_depth_chroma = pkt->data[14]; + info->chroma_fmt = pkt->data[15]; + info->color_primaries = vpu_color_cvrt_primaries_i2v(pkt->data[16]); + info->transfer_chars = vpu_color_cvrt_transfers_i2v(pkt->data[17]); + info->matrix_coeffs = vpu_color_cvrt_matrix_i2v(pkt->data[18]); + info->full_range = vpu_color_cvrt_full_range_i2v(pkt->data[19]); + info->vui_present = pkt->data[20]; + info->mvc_num_views = pkt->data[21]; + info->offset_x = pkt->data[23]; + info->offset_y = pkt->data[25]; + info->tag = pkt->data[27]; + if (info->bit_depth_luma > 8) + info->pixfmt = V4L2_PIX_FMT_NV12M_10BE_8L128; + else + info->pixfmt = V4L2_PIX_FMT_NV12M_8L128; + if (info->frame_rate.numerator && info->frame_rate.denominator) { + unsigned long n, d; + + rational_best_approximation(info->frame_rate.numerator, + info->frame_rate.denominator, + info->frame_rate.numerator, + info->frame_rate.denominator, + &n, &d); + info->frame_rate.numerator = n; + info->frame_rate.denominator = d; + } + vpu_malone_init_seq_hdr(info); +} + +static void vpu_malone_unpack_pic_info(struct vpu_rpc_event *pkt, + struct vpu_dec_pic_info *info) +{ + info->id = pkt->data[7]; + info->luma = pkt->data[0]; + info->start = pkt->data[10]; + info->end = pkt->data[12]; + info->pic_size = pkt->data[11]; + info->stride = pkt->data[5]; + info->consumed_count = pkt->data[13]; + if (info->id == MALONE_SKIPPED_FRAME_ID) + info->skipped = 1; + else + info->skipped = 0; +} + +static void vpu_malone_unpack_req_frame(struct vpu_rpc_event *pkt, + struct vpu_fs_info *info) +{ + info->type = pkt->data[1]; +} + +static void vpu_malone_unpack_rel_frame(struct vpu_rpc_event *pkt, + struct vpu_fs_info *info) +{ + info->id = pkt->data[0]; + info->type = pkt->data[1]; + info->not_displayed = pkt->data[2]; +} + +static void vpu_malone_unpack_buff_rdy(struct vpu_rpc_event *pkt, + struct vpu_dec_pic_info 
*info) +{ + info->id = pkt->data[0]; + info->luma = pkt->data[1]; + info->stride = pkt->data[3]; + if (info->id == MALONE_SKIPPED_FRAME_ID) + info->skipped = 1; + else + info->skipped = 0; + info->timestamp = MAKE_TIMESTAMP(pkt->data[9], pkt->data[10]); +} + +int vpu_malone_unpack_msg_data(struct vpu_rpc_event *pkt, void *data) +{ + if (!pkt || !data) + return -EINVAL; + + switch (pkt->hdr.id) { + case VID_API_EVENT_SEQ_HDR_FOUND: + vpu_malone_unpack_seq_hdr(pkt, data); + break; + case VID_API_EVENT_PIC_DECODED: + vpu_malone_unpack_pic_info(pkt, data); + break; + case VID_API_EVENT_REQ_FRAME_BUFF: + vpu_malone_unpack_req_frame(pkt, data); + break; + case VID_API_EVENT_REL_FRAME_BUFF: + vpu_malone_unpack_rel_frame(pkt, data); + break; + case VID_API_EVENT_FRAME_BUFF_RDY: + vpu_malone_unpack_buff_rdy(pkt, data); + break; + } + + return 0; +} + +static const struct malone_padding_scode padding_scodes[] = { + {SCODE_PADDING_EOS, V4L2_PIX_FMT_H264, {0x0B010000, 0}}, + {SCODE_PADDING_EOS, V4L2_PIX_FMT_H264_MVC, {0x0B010000, 0}}, + {SCODE_PADDING_EOS, V4L2_PIX_FMT_HEVC, {0x4A010000, 0x20}}, + {SCODE_PADDING_EOS, V4L2_PIX_FMT_VC1_ANNEX_G, {0x0a010000, 0x0}}, + {SCODE_PADDING_EOS, V4L2_PIX_FMT_VC1_ANNEX_L, {0x0a010000, 0x0}}, + {SCODE_PADDING_EOS, V4L2_PIX_FMT_MPEG2, {0xCC010000, 0x0}}, + {SCODE_PADDING_EOS, V4L2_PIX_FMT_MPEG4, {0xb1010000, 0x0}}, + {SCODE_PADDING_EOS, V4L2_PIX_FMT_XVID, {0xb1010000, 0x0}}, + {SCODE_PADDING_EOS, V4L2_PIX_FMT_H263, {0xb1010000, 0x0}}, + {SCODE_PADDING_EOS, V4L2_PIX_FMT_VP8, {0x34010000, 0x0}}, + {SCODE_PADDING_EOS, V4L2_PIX_FMT_JPEG, {0xefff0000, 0x0}}, + {SCODE_PADDING_ABORT, V4L2_PIX_FMT_H264, {0x0B010000, 0}}, + {SCODE_PADDING_ABORT, V4L2_PIX_FMT_H264_MVC, {0x0B010000, 0}}, + {SCODE_PADDING_ABORT, V4L2_PIX_FMT_HEVC, {0x4A010000, 0x20}}, + {SCODE_PADDING_ABORT, V4L2_PIX_FMT_VC1_ANNEX_G, {0x0a010000, 0x0}}, + {SCODE_PADDING_ABORT, V4L2_PIX_FMT_VC1_ANNEX_L, {0x0a010000, 0x0}}, + {SCODE_PADDING_ABORT, V4L2_PIX_FMT_MPEG2, {0xb7010000, 0x0}}, + {SCODE_PADDING_ABORT, V4L2_PIX_FMT_MPEG4, {0xb1010000, 0x0}}, + {SCODE_PADDING_ABORT, V4L2_PIX_FMT_XVID, {0xb1010000, 0x0}}, + {SCODE_PADDING_ABORT, V4L2_PIX_FMT_H263, {0xb1010000, 0x0}}, + {SCODE_PADDING_ABORT, V4L2_PIX_FMT_VP8, {0x34010000, 0x0}}, + {SCODE_PADDING_EOS, V4L2_PIX_FMT_JPEG, {0x0, 0x0}}, + {SCODE_PADDING_BUFFLUSH, V4L2_PIX_FMT_H264, {0x15010000, 0x0}}, + {SCODE_PADDING_BUFFLUSH, V4L2_PIX_FMT_H264_MVC, {0x15010000, 0x0}}, +}; + +static const struct malone_padding_scode padding_scode_dft = {0x0, 0x0}; + +static const struct malone_padding_scode *get_padding_scode(u32 type, u32 fmt) +{ + const struct malone_padding_scode *s; + int i; + + for (i = 0; i < ARRAY_SIZE(padding_scodes); i++) { + s = &padding_scodes[i]; + + if (s->scode_type == type && s->pixelformat == fmt) + return s; + } + + if (type != SCODE_PADDING_BUFFLUSH) + return &padding_scode_dft; + + return NULL; +} + +static int vpu_malone_add_padding_scode(struct vpu_buffer *stream_buffer, + struct vpu_malone_str_buffer *str_buf, + u32 pixelformat, u32 scode_type) +{ + u32 wptr; + u32 size; + u32 total_size = 0; + const struct malone_padding_scode *ps; + const u32 padding_size = 4096; + int ret; + + ps = get_padding_scode(scode_type, pixelformat); + if (!ps) + return -EINVAL; + + wptr = str_buf->wptr; + size = ALIGN(wptr, 4) - wptr; + if (size) + vpu_helper_memset_stream_buffer(stream_buffer, &wptr, 0, size); + total_size += size; + + size = sizeof(ps->data); + ret = vpu_helper_copy_to_stream_buffer(stream_buffer, &wptr, size, (void *)ps->data); + if (ret < 
size) + return -EINVAL; + total_size += size; + + size = padding_size - sizeof(ps->data); + vpu_helper_memset_stream_buffer(stream_buffer, &wptr, 0, size); + total_size += size; + + vpu_malone_update_wptr(str_buf, wptr); + return total_size; +} + +int vpu_malone_add_scode(struct vpu_shared_addr *shared, + u32 instance, + struct vpu_buffer *stream_buffer, + u32 pixelformat, + u32 scode_type) +{ + struct vpu_dec_ctrl *hc = shared->priv; + struct vpu_malone_str_buffer *str_buf = hc->str_buf[instance]; + int ret = -EINVAL; + + switch (scode_type) { + case SCODE_PADDING_EOS: + case SCODE_PADDING_ABORT: + case SCODE_PADDING_BUFFLUSH: + ret = vpu_malone_add_padding_scode(stream_buffer, str_buf, pixelformat, scode_type); + break; + default: + break; + } + + return ret; +} + +#define MALONE_PAYLOAD_HEADER_SIZE 16 +#define MALONE_CODEC_VERSION_ID 0x1 +#define MALONE_CODEC_ID_VC1_SIMPLE 0x10 +#define MALONE_CODEC_ID_VC1_MAIN 0x11 +#define MALONE_CODEC_ID_ARV8 0x28 +#define MALONE_CODEC_ID_ARV9 0x29 +#define MALONE_CODEC_ID_VP6 0x36 +#define MALONE_CODEC_ID_VP8 0x36 +#define MALONE_CODEC_ID_DIVX3 0x38 +#define MALONE_CODEC_ID_SPK 0x39 + +#define MALONE_VP8_IVF_SEQ_HEADER_LEN 32 +#define MALONE_VP8_IVF_FRAME_HEADER_LEN 8 + +#define MALONE_VC1_RCV_CODEC_V1_VERSION 0x85 +#define MALONE_VC1_RCV_CODEC_V2_VERSION 0xC5 +#define MALONE_VC1_RCV_NUM_FRAMES 0xFF +#define MALONE_VC1_RCV_SEQ_EXT_DATA_SIZE 4 +#define MALONE_VC1_RCV_SEQ_HEADER_LEN 20 +#define MALONE_VC1_RCV_PIC_HEADER_LEN 4 +#define MALONE_VC1_NAL_HEADER_LEN 4 +#define MALONE_VC1_CONTAIN_NAL(data) (((data) & 0x00FFFFFF) == 0x00010000) + +static void set_payload_hdr(u8 *dst, u32 scd_type, u32 codec_id, + u32 buffer_size, u32 width, u32 height) +{ + unsigned int payload_size; + /* payload_size = buffer_size + itself_size(16) - start_code(4) */ + payload_size = buffer_size + 12; + + dst[0] = 0x00; + dst[1] = 0x00; + dst[2] = 0x01; + dst[3] = scd_type; + + /* length */ + dst[4] = ((payload_size >> 16) & 0xff); + dst[5] = ((payload_size >> 8) & 0xff); + dst[6] = 0x4e; + dst[7] = ((payload_size >> 0) & 0xff); + + /* Codec ID and Version */ + dst[8] = codec_id; + dst[9] = MALONE_CODEC_VERSION_ID; + + /* width */ + dst[10] = ((width >> 8) & 0xff); + dst[11] = ((width >> 0) & 0xff); + dst[12] = 0x58; + + /* height */ + dst[13] = ((height >> 8) & 0xff); + dst[14] = ((height >> 0) & 0xff); + dst[15] = 0x50; +} + +static void set_vp8_ivf_seqhdr(u8 *dst, u32 width, u32 height) +{ + /* 0-3byte signature "DKIF" */ + dst[0] = 0x44; + dst[1] = 0x4b; + dst[2] = 0x49; + dst[3] = 0x46; + /* 4-5byte version: should be 0*/ + dst[4] = 0x00; + dst[5] = 0x00; + /* 6-7 length of Header */ + dst[6] = MALONE_VP8_IVF_SEQ_HEADER_LEN; + dst[7] = MALONE_VP8_IVF_SEQ_HEADER_LEN >> 8; + /* 8-11 VP8 fourcc */ + dst[8] = 0x56; + dst[9] = 0x50; + dst[10] = 0x38; + dst[11] = 0x30; + /* 12-13 width in pixels */ + dst[12] = width; + dst[13] = width >> 8; + /* 14-15 height in pixels */ + dst[14] = height; + dst[15] = height >> 8; + /* 16-19 frame rate */ + dst[16] = 0xe8; + dst[17] = 0x03; + dst[18] = 0x00; + dst[19] = 0x00; + /* 20-23 time scale */ + dst[20] = 0x01; + dst[21] = 0x00; + dst[22] = 0x00; + dst[23] = 0x00; + /* 24-27 number frames */ + dst[24] = 0xdf; + dst[25] = 0xf9; + dst[26] = 0x09; + dst[27] = 0x00; + /* 28-31 reserved */ +} + +static void set_vp8_ivf_pichdr(u8 *dst, u32 frame_size) +{ + /* + * firmware just parse 64-bit timestamp(8 bytes). + * As not transfer timestamp to firmware, use default value(ZERO). 
+ * No need to do anything here + */ +} + +static void set_vc1_rcv_seqhdr(u8 *dst, u8 *src, u32 width, u32 height) +{ + u32 frames = MALONE_VC1_RCV_NUM_FRAMES; + u32 ext_data_size = MALONE_VC1_RCV_SEQ_EXT_DATA_SIZE; + + /* 0-2 Number of frames, used default value 0xFF */ + dst[0] = frames; + dst[1] = frames >> 8; + dst[2] = frames >> 16; + + /* 3 RCV version, used V1 */ + dst[3] = MALONE_VC1_RCV_CODEC_V1_VERSION; + + /* 4-7 extension data size */ + dst[4] = ext_data_size; + dst[5] = ext_data_size >> 8; + dst[6] = ext_data_size >> 16; + dst[7] = ext_data_size >> 24; + /* 8-11 extension data */ + dst[8] = src[0]; + dst[9] = src[1]; + dst[10] = src[2]; + dst[11] = src[3]; + + /* height */ + dst[12] = height; + dst[13] = (height >> 8) & 0xff; + dst[14] = (height >> 16) & 0xff; + dst[15] = (height >> 24) & 0xff; + /* width */ + dst[16] = width; + dst[17] = (width >> 8) & 0xff; + dst[18] = (width >> 16) & 0xff; + dst[19] = (width >> 24) & 0xff; +} + +static void set_vc1_rcv_pichdr(u8 *dst, u32 buffer_size) +{ + dst[0] = buffer_size; + dst[1] = buffer_size >> 8; + dst[2] = buffer_size >> 16; + dst[3] = buffer_size >> 24; +} + +static void create_vc1_nal_pichdr(u8 *dst) +{ + /* need insert nal header: special ID */ + dst[0] = 0x0; + dst[1] = 0x0; + dst[2] = 0x01; + dst[3] = 0x0D; +} + +static int vpu_malone_insert_scode_seq(struct malone_scode_t *scode, u32 codec_id, u32 ext_size) +{ + u8 hdr[MALONE_PAYLOAD_HEADER_SIZE]; + int ret; + + set_payload_hdr(hdr, + SCODE_SEQUENCE, + codec_id, + ext_size, + scode->inst->out_format.width, + scode->inst->out_format.height); + ret = vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer, + &scode->wptr, + sizeof(hdr), + hdr); + return ret; +} + +static int vpu_malone_insert_scode_pic(struct malone_scode_t *scode, u32 codec_id, u32 ext_size) +{ + u8 hdr[MALONE_PAYLOAD_HEADER_SIZE]; + + set_payload_hdr(hdr, + SCODE_PICTURE, + codec_id, + ext_size + vb2_get_plane_payload(scode->vb, 0), + scode->inst->out_format.width, + scode->inst->out_format.height); + return vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer, + &scode->wptr, + sizeof(hdr), + hdr); +} + +static int vpu_malone_insert_scode_vc1_g_pic(struct malone_scode_t *scode) +{ + struct vb2_v4l2_buffer *vbuf; + u8 nal_hdr[MALONE_VC1_NAL_HEADER_LEN]; + u32 *data = NULL; + + vbuf = to_vb2_v4l2_buffer(scode->vb); + data = vb2_plane_vaddr(scode->vb, 0); + + if (vbuf->sequence == 0 || vpu_vb_is_codecconfig(vbuf)) + return 0; + if (MALONE_VC1_CONTAIN_NAL(*data)) + return 0; + + create_vc1_nal_pichdr(nal_hdr); + return vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer, + &scode->wptr, + sizeof(nal_hdr), + nal_hdr); +} + +static int vpu_malone_insert_scode_vc1_l_seq(struct malone_scode_t *scode) +{ + int ret; + int size = 0; + u8 rcv_seqhdr[MALONE_VC1_RCV_SEQ_HEADER_LEN]; + + scode->need_data = 0; + + ret = vpu_malone_insert_scode_seq(scode, MALONE_CODEC_ID_VC1_SIMPLE, + sizeof(rcv_seqhdr)); + if (ret < 0) + return ret; + size = ret; + + set_vc1_rcv_seqhdr(rcv_seqhdr, + vb2_plane_vaddr(scode->vb, 0), + scode->inst->out_format.width, + scode->inst->out_format.height); + ret = vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer, + &scode->wptr, + sizeof(rcv_seqhdr), + rcv_seqhdr); + + if (ret < 0) + return ret; + size += ret; + return size; +} + +static int vpu_malone_insert_scode_vc1_l_pic(struct malone_scode_t *scode) +{ + int ret; + int size = 0; + u8 rcv_pichdr[MALONE_VC1_RCV_PIC_HEADER_LEN]; + + ret = vpu_malone_insert_scode_pic(scode, MALONE_CODEC_ID_VC1_SIMPLE, + 
sizeof(rcv_pichdr)); + if (ret < 0) + return ret; + size = ret; + + set_vc1_rcv_pichdr(rcv_pichdr, vb2_get_plane_payload(scode->vb, 0)); + ret = vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer, + &scode->wptr, + sizeof(rcv_pichdr), + rcv_pichdr); + if (ret < 0) + return ret; + size += ret; + return size; +} + +static int vpu_malone_insert_scode_vp8_seq(struct malone_scode_t *scode) +{ + int ret; + int size = 0; + u8 ivf_hdr[MALONE_VP8_IVF_SEQ_HEADER_LEN]; + + ret = vpu_malone_insert_scode_seq(scode, MALONE_CODEC_ID_VP8, sizeof(ivf_hdr)); + if (ret < 0) + return ret; + size = ret; + + set_vp8_ivf_seqhdr(ivf_hdr, + scode->inst->out_format.width, + scode->inst->out_format.height); + ret = vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer, + &scode->wptr, + sizeof(ivf_hdr), + ivf_hdr); + if (ret < 0) + return ret; + size += ret; + + return size; +} + +static int vpu_malone_insert_scode_vp8_pic(struct malone_scode_t *scode) +{ + int ret; + int size = 0; + u8 ivf_hdr[MALONE_VP8_IVF_FRAME_HEADER_LEN] = {0}; + + ret = vpu_malone_insert_scode_pic(scode, MALONE_CODEC_ID_VP8, sizeof(ivf_hdr)); + if (ret < 0) + return ret; + size = ret; + + set_vp8_ivf_pichdr(ivf_hdr, vb2_get_plane_payload(scode->vb, 0)); + ret = vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer, + &scode->wptr, + sizeof(ivf_hdr), + ivf_hdr); + if (ret < 0) + return ret; + size += ret; + + return size; +} + +static const struct malone_scode_handler scode_handlers[] = { + { + /* fix me, need to swap return operation after gstreamer swap */ + .pixelformat = V4L2_PIX_FMT_VC1_ANNEX_L, + .insert_scode_seq = vpu_malone_insert_scode_vc1_l_seq, + .insert_scode_pic = vpu_malone_insert_scode_vc1_l_pic, + }, + { + .pixelformat = V4L2_PIX_FMT_VC1_ANNEX_G, + .insert_scode_pic = vpu_malone_insert_scode_vc1_g_pic, + }, + { + .pixelformat = V4L2_PIX_FMT_VP8, + .insert_scode_seq = vpu_malone_insert_scode_vp8_seq, + .insert_scode_pic = vpu_malone_insert_scode_vp8_pic, + }, +}; + +static const struct malone_scode_handler *get_scode_handler(u32 pixelformat) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(scode_handlers); i++) { + if (scode_handlers[i].pixelformat == pixelformat) + return &scode_handlers[i]; + } + + return NULL; +} + +static int vpu_malone_insert_scode(struct malone_scode_t *scode, u32 type) +{ + const struct malone_scode_handler *handler; + int ret = 0; + + if (!scode || !scode->inst || !scode->vb) + return 0; + + scode->need_data = 1; + handler = get_scode_handler(scode->inst->out_format.pixfmt); + if (!handler) + return 0; + + switch (type) { + case SCODE_SEQUENCE: + if (handler->insert_scode_seq) + ret = handler->insert_scode_seq(scode); + break; + case SCODE_PICTURE: + if (handler->insert_scode_pic) + ret = handler->insert_scode_pic(scode); + break; + default: + break; + } + + return ret; +} + +static int vpu_malone_input_frame_data(struct vpu_malone_str_buffer *str_buf, + struct vpu_inst *inst, struct vb2_buffer *vb, + u32 disp_imm) +{ + struct malone_scode_t scode; + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + u32 wptr = str_buf->wptr; + int size = 0; + int ret = 0; + + /*add scode: SCODE_SEQUENCE, SCODE_PICTURE, SCODE_SLICE*/ + scode.inst = inst; + scode.vb = vb; + scode.wptr = wptr; + scode.need_data = 1; + if (vbuf->sequence == 0 || vpu_vb_is_codecconfig(vbuf)) + ret = vpu_malone_insert_scode(&scode, SCODE_SEQUENCE); + + if (ret < 0) + return -ENOMEM; + size += ret; + wptr = scode.wptr; + if (!scode.need_data) { + vpu_malone_update_wptr(str_buf, wptr); + return size; + } + + ret = 
vpu_malone_insert_scode(&scode, SCODE_PICTURE); + if (ret < 0) + return -ENOMEM; + size += ret; + wptr = scode.wptr; + + ret = vpu_helper_copy_to_stream_buffer(&inst->stream_buffer, + &wptr, + vb2_get_plane_payload(vb, 0), + vb2_plane_vaddr(vb, 0)); + if (ret < vb2_get_plane_payload(vb, 0)) + return -ENOMEM; + size += ret; + + vpu_malone_update_wptr(str_buf, wptr); + + if (disp_imm && !vpu_vb_is_codecconfig(vbuf)) { + ret = vpu_malone_add_scode(inst->core->iface, + inst->id, + &inst->stream_buffer, + inst->out_format.pixfmt, + SCODE_PADDING_BUFFLUSH); + if (ret < 0) + return ret; + size += ret; + } + + return size; +} + +static int vpu_malone_input_stream_data(struct vpu_malone_str_buffer *str_buf, + struct vpu_inst *inst, struct vb2_buffer *vb) +{ + u32 wptr = str_buf->wptr; + int ret = 0; + + ret = vpu_helper_copy_to_stream_buffer(&inst->stream_buffer, + &wptr, + vb2_get_plane_payload(vb, 0), + vb2_plane_vaddr(vb, 0)); + if (ret < vb2_get_plane_payload(vb, 0)) + return -ENOMEM; + + vpu_malone_update_wptr(str_buf, wptr); + + return ret; +} + +static int vpu_malone_input_ts(struct vpu_inst *inst, s64 timestamp, u32 size) +{ + struct vpu_ts_info info; + + memset(&info, 0, sizeof(info)); + info.timestamp = timestamp; + info.size = size; + + return vpu_session_fill_timestamp(inst, &info); +} + +int vpu_malone_input_frame(struct vpu_shared_addr *shared, + struct vpu_inst *inst, struct vb2_buffer *vb) +{ + struct vpu_dec_ctrl *hc = shared->priv; + struct vb2_v4l2_buffer *vbuf; + struct vpu_malone_str_buffer *str_buf = hc->str_buf[inst->id]; + u32 disp_imm = hc->codec_param[inst->id].disp_imm; + u32 size; + int ret; + + if (vpu_malone_is_non_frame_mode(shared, inst->id)) + ret = vpu_malone_input_stream_data(str_buf, inst, vb); + else + ret = vpu_malone_input_frame_data(str_buf, inst, vb, disp_imm); + if (ret < 0) + return ret; + size = ret; + + /* + * if buffer only contain codec data, and the timestamp is invalid, + * don't put the invalid timestamp to resync + * merge the data to next frame + */ + vbuf = to_vb2_v4l2_buffer(vb); + if (vpu_vb_is_codecconfig(vbuf) && (s64)vb->timestamp < 0) { + inst->extra_size += size; + return 0; + } + if (inst->extra_size) { + size += inst->extra_size; + inst->extra_size = 0; + } + + ret = vpu_malone_input_ts(inst, vb->timestamp, size); + if (ret) + return ret; + + return 0; +} + +static bool vpu_malone_check_ready(struct vpu_shared_addr *shared, u32 instance) +{ + struct malone_iface *iface = shared->iface; + struct vpu_rpc_buffer_desc *desc = &iface->api_cmd_buffer_desc[instance]; + u32 size = desc->end - desc->start; + u32 rptr = desc->rptr; + u32 wptr = desc->wptr; + u32 used = (wptr + size - rptr) % size; + + if (!size || used < size / 2) + return true; + + return false; +} + +bool vpu_malone_is_ready(struct vpu_shared_addr *shared, u32 instance) +{ + u32 cnt = 0; + + while (!vpu_malone_check_ready(shared, instance)) { + if (cnt > 30) + return false; + mdelay(1); + cnt++; + } + return true; +} + +int vpu_malone_pre_cmd(struct vpu_shared_addr *shared, u32 instance) +{ + if (!vpu_malone_is_ready(shared, instance)) + return -EINVAL; + + return 0; +} + +int vpu_malone_post_cmd(struct vpu_shared_addr *shared, u32 instance) +{ + struct malone_iface *iface = shared->iface; + struct vpu_rpc_buffer_desc *desc = &iface->api_cmd_buffer_desc[instance]; + + desc->wptr++; + if (desc->wptr == desc->end) + desc->wptr = desc->start; + + return 0; +} + +int vpu_malone_init_instance(struct vpu_shared_addr *shared, u32 instance) +{ + struct malone_iface *iface = 
shared->iface;
+	struct vpu_rpc_buffer_desc *desc = &iface->api_cmd_buffer_desc[instance];
+
+	desc->wptr = desc->rptr;
+	if (desc->wptr == desc->end)
+		desc->wptr = desc->start;
+
+	return 0;
+}
+
+u32 vpu_malone_get_max_instance_count(struct vpu_shared_addr *shared)
+{
+	struct malone_iface *iface = shared->iface;
+
+	return iface->max_streams;
+}
diff --git a/drivers/media/platform/amphion/vpu_malone.h b/drivers/media/platform/amphion/vpu_malone.h
new file mode 100644
index 000000000000..e5a5cbe9843e
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_malone.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_MALONE_H
+#define _AMPHION_VPU_MALONE_H
+
+u32 vpu_malone_get_data_size(void);
+void vpu_malone_init_rpc(struct vpu_shared_addr *shared,
+			 struct vpu_buffer *rpc, dma_addr_t boot_addr);
+void vpu_malone_set_log_buf(struct vpu_shared_addr *shared,
+			    struct vpu_buffer *log);
+void vpu_malone_set_system_cfg(struct vpu_shared_addr *shared,
+			       u32 regs_base, void __iomem *regs, u32 core_id);
+u32 vpu_malone_get_version(struct vpu_shared_addr *shared);
+int vpu_malone_get_stream_buffer_size(struct vpu_shared_addr *shared);
+int vpu_malone_config_stream_buffer(struct vpu_shared_addr *shared,
+				    u32 instance, struct vpu_buffer *buf);
+int vpu_malone_get_stream_buffer_desc(struct vpu_shared_addr *shared,
+				      u32 instance,
+				      struct vpu_rpc_buffer_desc *desc);
+int vpu_malone_update_stream_buffer(struct vpu_shared_addr *shared,
+				    u32 instance, u32 ptr, bool write);
+int vpu_malone_set_decode_params(struct vpu_shared_addr *shared,
+				 u32 instance,
+				 struct vpu_decode_params *params, u32 update);
+int vpu_malone_pack_cmd(struct vpu_rpc_event *pkt, u32 index, u32 id, void *data);
+int vpu_malone_convert_msg_id(u32 msg_id);
+int vpu_malone_unpack_msg_data(struct vpu_rpc_event *pkt, void *data);
+int vpu_malone_add_scode(struct vpu_shared_addr *shared,
+			 u32 instance,
+			 struct vpu_buffer *stream_buffer,
+			 u32 pixelformat,
+			 u32 scode_type);
+int vpu_malone_input_frame(struct vpu_shared_addr *shared,
+			   struct vpu_inst *inst, struct vb2_buffer *vb);
+bool vpu_malone_is_ready(struct vpu_shared_addr *shared, u32 instance);
+int vpu_malone_pre_cmd(struct vpu_shared_addr *shared, u32 instance);
+int vpu_malone_post_cmd(struct vpu_shared_addr *shared, u32 instance);
+int vpu_malone_init_instance(struct vpu_shared_addr *shared, u32 instance);
+u32 vpu_malone_get_max_instance_count(struct vpu_shared_addr *shared);
+
+#endif

From patchwork Wed Jan 26 03:09:32 2022
X-Patchwork-Submitter: Ming Qian
X-Patchwork-Id: 537046
From: Ming Qian
To: mchehab@kernel.org, shawnguo@kernel.org, robh+dt@kernel.org, s.hauer@pengutronix.de
Cc: hverkuil-cisco@xs4all.nl, kernel@pengutronix.de, festevam@gmail.com, linux-imx@nxp.com, aisheng.dong@nxp.com, linux-media@vger.kernel.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v15.1 13/13] MAINTAINERS: add AMPHION VPU CODEC V4L2 driver entry
Date: Wed, 26 Jan 2022 11:09:32 +0800
Precedence: bulk
List-ID:
X-Mailing-List: linux-media@vger.kernel.org

Add AMPHION VPU CODEC v4l2 driver entry

Signed-off-by: Ming Qian
Signed-off-by: Shijie Qin
Signed-off-by: Zhou Peng
---
 MAINTAINERS | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index ea3e6c914384..c89dfc45c1c7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1030,6 +1030,15 @@ S:	Maintained
 F:	Documentation/hid/amd-sfh*
 F:	drivers/hid/amd-sfh-hid/
 
+AMPHION VPU CODEC V4L2 DRIVER
+M:	Ming Qian
+M:	Shijie Qin
+M:	Zhou Peng
+L:	linux-media@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/media/amphion,vpu.yaml
+F:	drivers/media/platform/amphion/
+
 AMS AS73211 DRIVER
 M:	Christian Eggers
 L:	linux-iio@vger.kernel.org