From patchwork Thu Aug 5 14:42:38 2021
X-Patchwork-Submitter: Andrzej Pietrasiewicz
X-Patchwork-Id: 492498
From: Andrzej Pietrasiewicz
To: linux-media@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-rockchip@lists.infradead.org,
 linux-staging@lists.linux.dev
Cc: Andrzej Pietrasiewicz, Benjamin Gaignard, Boris Brezillon,
 Ezequiel Garcia, Fabio Estevam, Greg Kroah-Hartman, Hans Verkuil,
 Heiko Stuebner, Jernej Skrabec, Mauro Carvalho Chehab, Nicolas Dufresne,
 NXP Linux Team, Pengutronix Kernel Team, Philipp Zabel, Sascha Hauer,
 Shawn Guo, kernel@collabora.com, Ezequiel Garcia
Subject: [PATCH v3 02/10] hantro: postproc: Introduce struct hantro_postproc_ops
Date: Thu, 5 Aug 2021 16:42:38 +0200
Message-Id: <20210805144246.11998-3-andrzej.p@collabora.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210805144246.11998-1-andrzej.p@collabora.com>
References: <20210805144246.11998-1-andrzej.p@collabora.com>
X-Mailing-List: linux-media@vger.kernel.org

From: Ezequiel Garcia

Turns out the post-processor block on the G2 core is substantially
different from the one on the G1 core. Introduce hantro_postproc_ops
with .enable and .disable methods, which will allow the G2
post-processor to be supported cleanly.

Signed-off-by: Ezequiel Garcia
Signed-off-by: Andrzej Pietrasiewicz
---
 drivers/staging/media/hantro/hantro.h | 5 +--
 drivers/staging/media/hantro/hantro_hw.h | 13 +++++++-
 .../staging/media/hantro/hantro_postproc.c | 33 ++++++++++++++-----
 drivers/staging/media/hantro/imx8m_vpu_hw.c | 2 +-
 .../staging/media/hantro/rockchip_vpu_hw.c | 6 ++--
 .../staging/media/hantro/sama5d4_vdec_hw.c | 2 +-
 6 files changed, 44 insertions(+), 17 deletions(-)

diff --git a/drivers/staging/media/hantro/hantro.h b/drivers/staging/media/hantro/hantro.h
index c2e2dca38628..c2e01959dc00 100644
--- a/drivers/staging/media/hantro/hantro.h
+++ b/drivers/staging/media/hantro/hantro.h
@@ -28,6 +28,7 @@

 struct hantro_ctx;
 struct hantro_codec_ops;
+struct hantro_postproc_ops;

 #define HANTRO_JPEG_ENCODER BIT(0)
 #define HANTRO_ENCODERS 0x0000ffff
@@ -59,6 +60,7 @@ struct hantro_irq {
 * @num_dec_fmts: Number of decoder formats.
 * @postproc_fmts: Post-processor formats.
* @num_postproc_fmts: Number of post-processor formats. + * @postproc_ops: Post-processor ops. * @codec: Supported codecs * @codec_ops: Codec ops. * @init: Initialize hardware, optional. @@ -69,7 +71,6 @@ struct hantro_irq { * @num_clocks: number of clocks in the array * @reg_names: array of register range names * @num_regs: number of register range names in the array - * @postproc_regs: &struct hantro_postproc_regs pointer */ struct hantro_variant { unsigned int enc_offset; @@ -80,6 +81,7 @@ struct hantro_variant { unsigned int num_dec_fmts; const struct hantro_fmt *postproc_fmts; unsigned int num_postproc_fmts; + const struct hantro_postproc_ops *postproc_ops; unsigned int codec; const struct hantro_codec_ops *codec_ops; int (*init)(struct hantro_dev *vpu); @@ -90,7 +92,6 @@ struct hantro_variant { int num_clocks; const char * const *reg_names; int num_regs; - const struct hantro_postproc_regs *postproc_regs; }; /** diff --git a/drivers/staging/media/hantro/hantro_hw.h b/drivers/staging/media/hantro/hantro_hw.h index df7b5e3a57b9..4323e63dfbfc 100644 --- a/drivers/staging/media/hantro/hantro_hw.h +++ b/drivers/staging/media/hantro/hantro_hw.h @@ -170,6 +170,17 @@ struct hantro_postproc_ctx { struct hantro_aux_buf dec_q[VB2_MAX_FRAME]; }; +/** + * struct hantro_postproc_ops - post-processor operations + * + * @enable: Enable the post-processor block. Optional. + * @disable: Disable the post-processor block. Optional. + */ +struct hantro_postproc_ops { + void (*enable)(struct hantro_ctx *ctx); + void (*disable)(struct hantro_ctx *ctx); +}; + /** * struct hantro_codec_ops - codec mode specific operations * @@ -217,7 +228,7 @@ extern const struct hantro_variant rk3328_vpu_variant; extern const struct hantro_variant rk3399_vpu_variant; extern const struct hantro_variant sama5d4_vdec_variant; -extern const struct hantro_postproc_regs hantro_g1_postproc_regs; +extern const struct hantro_postproc_ops hantro_g1_postproc_ops; extern const u32 hantro_vp8_dec_mc_filter[8][6]; diff --git a/drivers/staging/media/hantro/hantro_postproc.c b/drivers/staging/media/hantro/hantro_postproc.c index 07842152003f..882fb8bc5ddd 100644 --- a/drivers/staging/media/hantro/hantro_postproc.c +++ b/drivers/staging/media/hantro/hantro_postproc.c @@ -15,14 +15,14 @@ #define HANTRO_PP_REG_WRITE(vpu, reg_name, val) \ { \ hantro_reg_write(vpu, \ - &(vpu)->variant->postproc_regs->reg_name, \ + &hantro_g1_postproc_regs.reg_name, \ val); \ } #define HANTRO_PP_REG_WRITE_S(vpu, reg_name, val) \ { \ hantro_reg_write_s(vpu, \ - &(vpu)->variant->postproc_regs->reg_name, \ + &hantro_g1_postproc_regs.reg_name, \ val); \ } @@ -64,16 +64,13 @@ bool hantro_needs_postproc(const struct hantro_ctx *ctx, return fmt->fourcc != V4L2_PIX_FMT_NV12; } -void hantro_postproc_enable(struct hantro_ctx *ctx) +static void hantro_postproc_g1_enable(struct hantro_ctx *ctx) { struct hantro_dev *vpu = ctx->dev; struct vb2_v4l2_buffer *dst_buf; u32 src_pp_fmt, dst_pp_fmt; dma_addr_t dst_dma; - if (!vpu->variant->postproc_regs) - return; - /* Turn on pipeline mode. Must be done first. 
*/ HANTRO_PP_REG_WRITE_S(vpu, pipeline_en, 0x1); @@ -154,12 +151,30 @@ int hantro_postproc_alloc(struct hantro_ctx *ctx) return 0; } +static void hantro_postproc_g1_disable(struct hantro_ctx *ctx) +{ + struct hantro_dev *vpu = ctx->dev; + + HANTRO_PP_REG_WRITE_S(vpu, pipeline_en, 0x0); +} + void hantro_postproc_disable(struct hantro_ctx *ctx) { struct hantro_dev *vpu = ctx->dev; - if (!vpu->variant->postproc_regs) - return; + if (vpu->variant->postproc_ops && vpu->variant->postproc_ops->disable) + vpu->variant->postproc_ops->disable(ctx); +} - HANTRO_PP_REG_WRITE_S(vpu, pipeline_en, 0x0); +void hantro_postproc_enable(struct hantro_ctx *ctx) +{ + struct hantro_dev *vpu = ctx->dev; + + if (vpu->variant->postproc_ops && vpu->variant->postproc_ops->enable) + vpu->variant->postproc_ops->enable(ctx); } + +const struct hantro_postproc_ops hantro_g1_postproc_ops = { + .enable = hantro_postproc_g1_enable, + .disable = hantro_postproc_g1_disable, +}; diff --git a/drivers/staging/media/hantro/imx8m_vpu_hw.c b/drivers/staging/media/hantro/imx8m_vpu_hw.c index ea919bfb9891..22fa7d2f3b64 100644 --- a/drivers/staging/media/hantro/imx8m_vpu_hw.c +++ b/drivers/staging/media/hantro/imx8m_vpu_hw.c @@ -262,7 +262,7 @@ const struct hantro_variant imx8mq_vpu_variant = { .num_dec_fmts = ARRAY_SIZE(imx8m_vpu_dec_fmts), .postproc_fmts = imx8m_vpu_postproc_fmts, .num_postproc_fmts = ARRAY_SIZE(imx8m_vpu_postproc_fmts), - .postproc_regs = &hantro_g1_postproc_regs, + .postproc_ops = &hantro_g1_postproc_ops, .codec = HANTRO_MPEG2_DECODER | HANTRO_VP8_DECODER | HANTRO_H264_DECODER, .codec_ops = imx8mq_vpu_codec_ops, diff --git a/drivers/staging/media/hantro/rockchip_vpu_hw.c b/drivers/staging/media/hantro/rockchip_vpu_hw.c index d4f52957cc53..6c1ad5534ce5 100644 --- a/drivers/staging/media/hantro/rockchip_vpu_hw.c +++ b/drivers/staging/media/hantro/rockchip_vpu_hw.c @@ -460,7 +460,7 @@ const struct hantro_variant rk3036_vpu_variant = { .num_dec_fmts = ARRAY_SIZE(rk3066_vpu_dec_fmts), .postproc_fmts = rockchip_vpu1_postproc_fmts, .num_postproc_fmts = ARRAY_SIZE(rockchip_vpu1_postproc_fmts), - .postproc_regs = &hantro_g1_postproc_regs, + .postproc_ops = &hantro_g1_postproc_ops, .codec = HANTRO_MPEG2_DECODER | HANTRO_VP8_DECODER | HANTRO_H264_DECODER, .codec_ops = rk3036_vpu_codec_ops, @@ -485,7 +485,7 @@ const struct hantro_variant rk3066_vpu_variant = { .num_dec_fmts = ARRAY_SIZE(rk3066_vpu_dec_fmts), .postproc_fmts = rockchip_vpu1_postproc_fmts, .num_postproc_fmts = ARRAY_SIZE(rockchip_vpu1_postproc_fmts), - .postproc_regs = &hantro_g1_postproc_regs, + .postproc_ops = &hantro_g1_postproc_ops, .codec = HANTRO_JPEG_ENCODER | HANTRO_MPEG2_DECODER | HANTRO_VP8_DECODER | HANTRO_H264_DECODER, .codec_ops = rk3066_vpu_codec_ops, @@ -505,7 +505,7 @@ const struct hantro_variant rk3288_vpu_variant = { .num_dec_fmts = ARRAY_SIZE(rk3288_vpu_dec_fmts), .postproc_fmts = rockchip_vpu1_postproc_fmts, .num_postproc_fmts = ARRAY_SIZE(rockchip_vpu1_postproc_fmts), - .postproc_regs = &hantro_g1_postproc_regs, + .postproc_ops = &hantro_g1_postproc_ops, .codec = HANTRO_JPEG_ENCODER | HANTRO_MPEG2_DECODER | HANTRO_VP8_DECODER | HANTRO_H264_DECODER, .codec_ops = rk3288_vpu_codec_ops, diff --git a/drivers/staging/media/hantro/sama5d4_vdec_hw.c b/drivers/staging/media/hantro/sama5d4_vdec_hw.c index 9c3b8cd0b239..f3fecc7248c4 100644 --- a/drivers/staging/media/hantro/sama5d4_vdec_hw.c +++ b/drivers/staging/media/hantro/sama5d4_vdec_hw.c @@ -100,7 +100,7 @@ const struct hantro_variant sama5d4_vdec_variant = { .num_dec_fmts = 
ARRAY_SIZE(sama5d4_vdec_fmts), .postproc_fmts = sama5d4_vdec_postproc_fmts, .num_postproc_fmts = ARRAY_SIZE(sama5d4_vdec_postproc_fmts), - .postproc_regs = &hantro_g1_postproc_regs, + .postproc_ops = &hantro_g1_postproc_ops, .codec = HANTRO_MPEG2_DECODER | HANTRO_VP8_DECODER | HANTRO_H264_DECODER, .codec_ops = sama5d4_vdec_codec_ops, From patchwork Thu Aug 5 14:42:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrzej Pietrasiewicz X-Patchwork-Id: 492496 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, UNPARSEABLE_RELAY, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2AD7BC4338F for ; Thu, 5 Aug 2021 14:45:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 138FA61154 for ; Thu, 5 Aug 2021 14:45:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241639AbhHEOpy (ORCPT ); Thu, 5 Aug 2021 10:45:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45322 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241976AbhHEOnM (ORCPT ); Thu, 5 Aug 2021 10:43:12 -0400 Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e3e3]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 86045C061765; Thu, 5 Aug 2021 07:42:58 -0700 (PDT) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: andrzej.p) with ESMTPSA id 904A51F440AE From: Andrzej Pietrasiewicz To: linux-media@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-rockchip@lists.infradead.org, linux-staging@lists.linux.dev Cc: Andrzej Pietrasiewicz , Benjamin Gaignard , Boris Brezillon , Ezequiel Garcia , Fabio Estevam , Greg Kroah-Hartman , Hans Verkuil , Heiko Stuebner , Jernej Skrabec , Mauro Carvalho Chehab , Nicolas Dufresne , NXP Linux Team , Pengutronix Kernel Team , Philipp Zabel , Sascha Hauer , Shawn Guo , kernel@collabora.com, Ezequiel Garcia Subject: [PATCH v3 04/10] hantro: Add quirk for NV12/NV12_4L4 capture format Date: Thu, 5 Aug 2021 16:42:40 +0200 Message-Id: <20210805144246.11998-5-andrzej.p@collabora.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210805144246.11998-1-andrzej.p@collabora.com> References: <20210805144246.11998-1-andrzej.p@collabora.com> Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Ezequiel Garcia The G2 core decoder engine produces NV12_4L4 format, which is a simple NV12 4x4 tiled format. The driver currently hides this format by always enabling the post-processor engine, and therefore offering NV12 directly. This is done without using the logic in hantro_postproc.c and therefore makes it difficult to add VP9 cleanly. Since fixing this is not easy, add a small quirk to force NV12 if HEVC was configured, but otherwise declare NV12_4L4 as the pixel format in imx8mq_vpu_g2_variant.dec_fmts. This will be used by the VP9 decoder which will be added soon. 
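As an aside, the effect of this quirk can be checked from userspace by
enumerating CAPTURE-queue formats with VIDIOC_ENUM_FMT. A minimal sketch
follows; the /dev/video0 path is an assumption, substitute the G2 decoder's
actual video node:

	#include <stdio.h>
	#include <fcntl.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	int main(void)
	{
		int fd = open("/dev/video0", O_RDWR); /* assumed device node */
		struct v4l2_fmtdesc fmt;

		if (fd < 0)
			return 1;

		memset(&fmt, 0, sizeof(fmt));
		fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;

		/*
		 * With an HEVC_SLICE format set on the OUTPUT queue, this
		 * should list NV12 only; for other codecs the NV12_4L4
		 * format declared in imx8m_vpu_g2_dec_fmts shows up instead.
		 */
		for (fmt.index = 0; !ioctl(fd, VIDIOC_ENUM_FMT, &fmt); fmt.index++)
			printf("capture format %u: %.4s\n", fmt.index,
			       (char *)&fmt.pixelformat);
		return 0;
	}
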
Signed-off-by: Ezequiel Garcia Signed-off-by: Andrzej Pietrasiewicz --- drivers/staging/media/hantro/hantro_v4l2.c | 14 ++++++++++++++ drivers/staging/media/hantro/imx8m_vpu_hw.c | 2 +- 2 files changed, 15 insertions(+), 1 deletion(-) diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c index bcb0bdff4a9a..d1f060c55fed 100644 --- a/drivers/staging/media/hantro/hantro_v4l2.c +++ b/drivers/staging/media/hantro/hantro_v4l2.c @@ -150,6 +150,20 @@ static int vidioc_enum_fmt(struct file *file, void *priv, unsigned int num_fmts, i, j = 0; bool skip_mode_none; + /* + * The HEVC decoder on the G2 core needs a little quirk to offer NV12 + * only on the capture side. Once the post-processor logic is used, + * we will be able to expose NV12_4L4 and NV12 as the other cases, + * and therefore remove this quirk. + */ + if (capture && ctx->vpu_src_fmt->fourcc == V4L2_PIX_FMT_HEVC_SLICE) { + if (f->index == 0) { + f->pixelformat = V4L2_PIX_FMT_NV12; + return 0; + } + return -EINVAL; + } + /* * When dealing with an encoder: * - on the capture side we want to filter out all MODE_NONE formats. diff --git a/drivers/staging/media/hantro/imx8m_vpu_hw.c b/drivers/staging/media/hantro/imx8m_vpu_hw.c index 02e61438220a..a40b161e5956 100644 --- a/drivers/staging/media/hantro/imx8m_vpu_hw.c +++ b/drivers/staging/media/hantro/imx8m_vpu_hw.c @@ -134,7 +134,7 @@ static const struct hantro_fmt imx8m_vpu_dec_fmts[] = { static const struct hantro_fmt imx8m_vpu_g2_dec_fmts[] = { { - .fourcc = V4L2_PIX_FMT_NV12, + .fourcc = V4L2_PIX_FMT_NV12_4L4, .codec_mode = HANTRO_MODE_NONE, }, { From patchwork Thu Aug 5 14:42:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrzej Pietrasiewicz X-Patchwork-Id: 492495 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, UNPARSEABLE_RELAY, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DD0F0C43216 for ; Thu, 5 Aug 2021 14:45:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C031B61155 for ; Thu, 5 Aug 2021 14:45:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242014AbhHEOpz (ORCPT ); Thu, 5 Aug 2021 10:45:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241994AbhHEOnQ (ORCPT ); Thu, 5 Aug 2021 10:43:16 -0400 Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e3e3]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D1997C061799; Thu, 5 Aug 2021 07:43:01 -0700 (PDT) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: andrzej.p) with ESMTPSA id 92E6D1F440BE From: Andrzej Pietrasiewicz To: linux-media@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-rockchip@lists.infradead.org, linux-staging@lists.linux.dev Cc: Andrzej Pietrasiewicz , Benjamin Gaignard , Boris Brezillon , Ezequiel Garcia , Fabio Estevam , Greg Kroah-Hartman , Hans Verkuil , 
Heiko Stuebner , Jernej Skrabec , Mauro Carvalho Chehab , Nicolas Dufresne , NXP Linux Team , Pengutronix Kernel Team , Philipp Zabel , Sascha Hauer , Shawn Guo , kernel@collabora.com, Ezequiel Garcia , Adrian Ratiu Subject: [PATCH v3 07/10] media: rkvdec: Add the VP9 backend Date: Thu, 5 Aug 2021 16:42:43 +0200 Message-Id: <20210805144246.11998-8-andrzej.p@collabora.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210805144246.11998-1-andrzej.p@collabora.com> References: <20210805144246.11998-1-andrzej.p@collabora.com> Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Boris Brezillon The Rockchip VDEC supports VP9 profile 0 up to 4096x2304@30fps. Add a backend for this new format. Signed-off-by: Boris Brezillon Signed-off-by: Ezequiel Garcia Signed-off-by: Adrian Ratiu Co-developed-by: Andrzej Pietrasiewicz Signed-off-by: Andrzej Pietrasiewicz --- drivers/staging/media/rkvdec/Kconfig | 1 + drivers/staging/media/rkvdec/Makefile | 2 +- drivers/staging/media/rkvdec/rkvdec-vp9.c | 1078 +++++++++++++++++++++ drivers/staging/media/rkvdec/rkvdec.c | 52 +- drivers/staging/media/rkvdec/rkvdec.h | 12 +- 5 files changed, 1137 insertions(+), 8 deletions(-) create mode 100644 drivers/staging/media/rkvdec/rkvdec-vp9.c diff --git a/drivers/staging/media/rkvdec/Kconfig b/drivers/staging/media/rkvdec/Kconfig index c02199b5e0fd..dc7292f346fa 100644 --- a/drivers/staging/media/rkvdec/Kconfig +++ b/drivers/staging/media/rkvdec/Kconfig @@ -9,6 +9,7 @@ config VIDEO_ROCKCHIP_VDEC select VIDEOBUF2_VMALLOC select V4L2_MEM2MEM_DEV select V4L2_H264 + select V4L2_VP9 help Support for the Rockchip Video Decoder IP present on Rockchip SoCs, which accelerates video decoding. diff --git a/drivers/staging/media/rkvdec/Makefile b/drivers/staging/media/rkvdec/Makefile index c08fed0a39f9..cb86b429cfaa 100644 --- a/drivers/staging/media/rkvdec/Makefile +++ b/drivers/staging/media/rkvdec/Makefile @@ -1,3 +1,3 @@ obj-$(CONFIG_VIDEO_ROCKCHIP_VDEC) += rockchip-vdec.o -rockchip-vdec-y += rkvdec.o rkvdec-h264.o +rockchip-vdec-y += rkvdec.o rkvdec-h264.o rkvdec-vp9.o diff --git a/drivers/staging/media/rkvdec/rkvdec-vp9.c b/drivers/staging/media/rkvdec/rkvdec-vp9.c new file mode 100644 index 000000000000..e8b3a5c3d0f4 --- /dev/null +++ b/drivers/staging/media/rkvdec/rkvdec-vp9.c @@ -0,0 +1,1078 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Rockchip Video Decoder VP9 backend + * + * Copyright (C) 2019 Collabora, Ltd. + * Boris Brezillon + * Copyright (C) 2021 Collabora, Ltd. + * Andrzej Pietrasiewicz + * + * Copyright (C) 2016 Rockchip Electronics Co., Ltd. + * Alpha Lin + */ + +/* + * For following the vp9 spec please start reading this driver + * code from rkvdec_vp9_run() followed by rkvdec_vp9_done(). 
+ */ + +#include +#include +#include +#include + +#include "rkvdec.h" +#include "rkvdec-regs.h" + +#define RKVDEC_VP9_PROBE_SIZE 4864 +#define RKVDEC_VP9_COUNT_SIZE 13232 +#define RKVDEC_VP9_MAX_SEGMAP_SIZE 73728 + +struct rkvdec_vp9_intra_mode_probs { + u8 y_mode[105]; + u8 uv_mode[23]; +}; + +struct rkvdec_vp9_intra_only_frame_probs { + u8 coef_intra[4][2][128]; + struct rkvdec_vp9_intra_mode_probs intra_mode[10]; +}; + +struct rkvdec_vp9_inter_frame_probs { + u8 y_mode[4][9]; + u8 comp_mode[5]; + u8 comp_ref[5]; + u8 single_ref[5][2]; + u8 inter_mode[7][3]; + u8 interp_filter[4][2]; + u8 padding0[11]; + u8 coef[2][4][2][128]; + u8 uv_mode_0_2[3][9]; + u8 padding1[5]; + u8 uv_mode_3_5[3][9]; + u8 padding2[5]; + u8 uv_mode_6_8[3][9]; + u8 padding3[5]; + u8 uv_mode_9[9]; + u8 padding4[7]; + u8 padding5[16]; + struct { + u8 joint[3]; + u8 sign[2]; + u8 classes[2][10]; + u8 class0_bit[2]; + u8 bits[2][10]; + u8 class0_fr[2][2][3]; + u8 fr[2][3]; + u8 class0_hp[2]; + u8 hp[2]; + } mv; +}; + +struct rkvdec_vp9_probs { + u8 partition[16][3]; + u8 pred[3]; + u8 tree[7]; + u8 skip[3]; + u8 tx32[2][3]; + u8 tx16[2][2]; + u8 tx8[2][1]; + u8 is_inter[4]; + /* 128 bit alignment */ + u8 padding0[3]; + union { + struct rkvdec_vp9_inter_frame_probs inter; + struct rkvdec_vp9_intra_only_frame_probs intra_only; + }; +}; + +/* Data structure describing auxiliary buffer format. */ +struct rkvdec_vp9_priv_tbl { + struct rkvdec_vp9_probs probs; + u8 segmap[2][RKVDEC_VP9_MAX_SEGMAP_SIZE]; +}; + +struct rkvdec_vp9_refs_counts { + u32 eob[2]; + u32 coeff[3]; +}; + +struct rkvdec_vp9_inter_frame_symbol_counts { + u32 partition[16][4]; + u32 skip[3][2]; + u32 inter[4][2]; + u32 tx32p[2][4]; + u32 tx16p[2][4]; + u32 tx8p[2][2]; + u32 y_mode[4][10]; + u32 uv_mode[10][10]; + u32 comp[5][2]; + u32 comp_ref[5][2]; + u32 single_ref[5][2][2]; + u32 mv_mode[7][4]; + u32 filter[4][3]; + u32 mv_joint[4]; + u32 sign[2][2]; + /* add 1 element for align */ + u32 classes[2][11 + 1]; + u32 class0[2][2]; + u32 bits[2][10][2]; + u32 class0_fp[2][2][4]; + u32 fp[2][4]; + u32 class0_hp[2][2]; + u32 hp[2][2]; + struct rkvdec_vp9_refs_counts ref_cnt[2][4][2][6][6]; +}; + +struct rkvdec_vp9_intra_frame_symbol_counts { + u32 partition[4][4][4]; + u32 skip[3][2]; + u32 intra[4][2]; + u32 tx32p[2][4]; + u32 tx16p[2][4]; + u32 tx8p[2][2]; + struct rkvdec_vp9_refs_counts ref_cnt[2][4][2][6][6]; +}; + +struct rkvdec_vp9_run { + struct rkvdec_run base; + const struct v4l2_ctrl_vp9_frame *decode_params; +}; + +struct rkvdec_vp9_frame_info { + u32 valid : 1; + u32 segmapid : 1; + u32 frame_context_idx : 2; + u32 reference_mode : 2; + u32 tx_mode : 3; + u32 interpolation_filter : 3; + u32 flags; + u64 timestamp; + struct v4l2_vp9_segmentation seg; + struct v4l2_vp9_loop_filter lf; +}; + +struct rkvdec_vp9_ctx { + struct rkvdec_aux_buf priv_tbl; + struct rkvdec_aux_buf count_tbl; + struct v4l2_vp9_frame_symbol_counts inter_cnts; + struct v4l2_vp9_frame_symbol_counts intra_cnts; + struct v4l2_vp9_frame_context probability_tables; + struct v4l2_vp9_frame_context frame_context[4]; + struct rkvdec_vp9_frame_info cur; + struct rkvdec_vp9_frame_info last; +}; + +static void write_coeff_plane(const u8 coef[6][6][3], u8 *coeff_plane) +{ + unsigned int idx = 0, byte_count = 0; + int k, m, n; + u8 p; + + for (k = 0; k < 6; k++) { + for (m = 0; m < 6; m++) { + for (n = 0; n < 3; n++) { + p = coef[k][m][n]; + coeff_plane[idx++] = p; + byte_count++; + if (byte_count == 27) { + idx += 5; + byte_count = 0; + } + } + } + } +} + +static void 
init_intra_only_probs(struct rkvdec_ctx *ctx, + const struct rkvdec_vp9_run *run) +{ + const struct v4l2_ctrl_vp9_frame *dec_params; + struct rkvdec_vp9_ctx *vp9_ctx = ctx->priv; + struct rkvdec_vp9_priv_tbl *tbl = vp9_ctx->priv_tbl.cpu; + struct rkvdec_vp9_intra_only_frame_probs *rkprobs; + const struct v4l2_vp9_frame_context *probs; + unsigned int i, j, k, m; + + rkprobs = &tbl->probs.intra_only; + dec_params = run->decode_params; + probs = &vp9_ctx->probability_tables; + + /* + * intra only 149 x 128 bits ,aligned to 152 x 128 bits coeff related + * prob 64 x 128 bits + */ + for (i = 0; i < ARRAY_SIZE(probs->coef); i++) { + for (j = 0; j < ARRAY_SIZE(probs->coef[0]); j++) + write_coeff_plane(probs->coef[i][j][0], + rkprobs->coef_intra[i][j]); + } + + /* intra mode prob 80 x 128 bits */ + for (i = 0; i < ARRAY_SIZE(v4l2_vp9_kf_y_mode_prob); i++) { + unsigned int byte_count = 0; + int idx = 0; + + /* vp9_kf_y_mode_prob */ + for (j = 0; j < ARRAY_SIZE(v4l2_vp9_kf_y_mode_prob[0]); j++) { + for (k = 0; k < ARRAY_SIZE(v4l2_vp9_kf_y_mode_prob[0][0]); + k++) { + u8 val = v4l2_vp9_kf_y_mode_prob[i][j][k]; + + rkprobs->intra_mode[i].y_mode[idx++] = val; + byte_count++; + if (byte_count == 27) { + byte_count = 0; + idx += 5; + } + } + } + + idx = 0; + if (i < 4) { + for (m = 0; m < (i < 3 ? 23 : 21); m++) { + const u8 *ptr = (const u8 *)v4l2_vp9_kf_uv_mode_prob; + + rkprobs->intra_mode[i].uv_mode[idx++] = ptr[i * 23 + m]; + } + } + } +} + +static void init_inter_probs(struct rkvdec_ctx *ctx, + const struct rkvdec_vp9_run *run) +{ + const struct v4l2_ctrl_vp9_frame *dec_params; + struct rkvdec_vp9_ctx *vp9_ctx = ctx->priv; + struct rkvdec_vp9_priv_tbl *tbl = vp9_ctx->priv_tbl.cpu; + struct rkvdec_vp9_inter_frame_probs *rkprobs; + const struct v4l2_vp9_frame_context *probs; + unsigned int i, j, k; + + rkprobs = &tbl->probs.inter; + dec_params = run->decode_params; + probs = &vp9_ctx->probability_tables; + + /* + * inter probs + * 151 x 128 bits, aligned to 152 x 128 bits + * inter only + * intra_y_mode & inter_block info 6 x 128 bits + */ + + memcpy(rkprobs->y_mode, probs->y_mode, sizeof(rkprobs->y_mode)); + memcpy(rkprobs->comp_mode, probs->comp_mode, + sizeof(rkprobs->comp_mode)); + memcpy(rkprobs->comp_ref, probs->comp_ref, + sizeof(rkprobs->comp_ref)); + memcpy(rkprobs->single_ref, probs->single_ref, + sizeof(rkprobs->single_ref)); + memcpy(rkprobs->inter_mode, probs->inter_mode, + sizeof(rkprobs->inter_mode)); + memcpy(rkprobs->interp_filter, probs->interp_filter, + sizeof(rkprobs->interp_filter)); + + /* 128 x 128 bits coeff related */ + for (i = 0; i < ARRAY_SIZE(probs->coef); i++) { + for (j = 0; j < ARRAY_SIZE(probs->coef[0]); j++) { + for (k = 0; k < ARRAY_SIZE(probs->coef[0][0]); k++) + write_coeff_plane(probs->coef[i][j][k], + rkprobs->coef[k][i][j]); + } + } + + /* intra uv mode 6 x 128 */ + memcpy(rkprobs->uv_mode_0_2, &probs->uv_mode[0], + sizeof(rkprobs->uv_mode_0_2)); + memcpy(rkprobs->uv_mode_3_5, &probs->uv_mode[3], + sizeof(rkprobs->uv_mode_3_5)); + memcpy(rkprobs->uv_mode_6_8, &probs->uv_mode[6], + sizeof(rkprobs->uv_mode_6_8)); + memcpy(rkprobs->uv_mode_9, &probs->uv_mode[9], + sizeof(rkprobs->uv_mode_9)); + + /* mv related 6 x 128 */ + memcpy(rkprobs->mv.joint, probs->mv.joint, + sizeof(rkprobs->mv.joint)); + memcpy(rkprobs->mv.sign, probs->mv.sign, + sizeof(rkprobs->mv.sign)); + memcpy(rkprobs->mv.classes, probs->mv.classes, + sizeof(rkprobs->mv.classes)); + memcpy(rkprobs->mv.class0_bit, probs->mv.class0_bit, + sizeof(rkprobs->mv.class0_bit)); + memcpy(rkprobs->mv.bits, 
probs->mv.bits, + sizeof(rkprobs->mv.bits)); + memcpy(rkprobs->mv.class0_fr, probs->mv.class0_fr, + sizeof(rkprobs->mv.class0_fr)); + memcpy(rkprobs->mv.fr, probs->mv.fr, + sizeof(rkprobs->mv.fr)); + memcpy(rkprobs->mv.class0_hp, probs->mv.class0_hp, + sizeof(rkprobs->mv.class0_hp)); + memcpy(rkprobs->mv.hp, probs->mv.hp, + sizeof(rkprobs->mv.hp)); +} + +static void init_probs(struct rkvdec_ctx *ctx, + const struct rkvdec_vp9_run *run) +{ + const struct v4l2_ctrl_vp9_frame *dec_params; + struct rkvdec_vp9_ctx *vp9_ctx = ctx->priv; + struct rkvdec_vp9_priv_tbl *tbl = vp9_ctx->priv_tbl.cpu; + struct rkvdec_vp9_probs *rkprobs = &tbl->probs; + const struct v4l2_vp9_segmentation *seg; + const struct v4l2_vp9_frame_context *probs; + bool intra_only; + + dec_params = run->decode_params; + probs = &vp9_ctx->probability_tables; + seg = &dec_params->seg; + + memset(rkprobs, 0, sizeof(*rkprobs)); + + intra_only = !!(dec_params->flags & + (V4L2_VP9_FRAME_FLAG_KEY_FRAME | + V4L2_VP9_FRAME_FLAG_INTRA_ONLY)); + + /* sb info 5 x 128 bit */ + memcpy(rkprobs->partition, + intra_only ? v4l2_vp9_kf_partition_probs : probs->partition, + sizeof(rkprobs->partition)); + + memcpy(rkprobs->pred, seg->pred_probs, sizeof(rkprobs->pred)); + memcpy(rkprobs->tree, seg->tree_probs, sizeof(rkprobs->tree)); + memcpy(rkprobs->skip, probs->skip, sizeof(rkprobs->skip)); + memcpy(rkprobs->tx32, probs->tx32, sizeof(rkprobs->tx32)); + memcpy(rkprobs->tx16, probs->tx16, sizeof(rkprobs->tx16)); + memcpy(rkprobs->tx8, probs->tx8, sizeof(rkprobs->tx8)); + memcpy(rkprobs->is_inter, probs->is_inter, sizeof(rkprobs->is_inter)); + + if (intra_only) + init_intra_only_probs(ctx, run); + else + init_inter_probs(ctx, run); +} + +struct rkvdec_vp9_ref_reg { + u32 reg_frm_size; + u32 reg_hor_stride; + u32 reg_y_stride; + u32 reg_yuv_stride; + u32 reg_ref_base; +}; + +static struct rkvdec_vp9_ref_reg ref_regs[] = { + { + .reg_frm_size = RKVDEC_REG_VP9_FRAME_SIZE(0), + .reg_hor_stride = RKVDEC_VP9_HOR_VIRSTRIDE(0), + .reg_y_stride = RKVDEC_VP9_LAST_FRAME_YSTRIDE, + .reg_yuv_stride = RKVDEC_VP9_LAST_FRAME_YUVSTRIDE, + .reg_ref_base = RKVDEC_REG_VP9_LAST_FRAME_BASE, + }, + { + .reg_frm_size = RKVDEC_REG_VP9_FRAME_SIZE(1), + .reg_hor_stride = RKVDEC_VP9_HOR_VIRSTRIDE(1), + .reg_y_stride = RKVDEC_VP9_GOLDEN_FRAME_YSTRIDE, + .reg_yuv_stride = 0, + .reg_ref_base = RKVDEC_REG_VP9_GOLDEN_FRAME_BASE, + }, + { + .reg_frm_size = RKVDEC_REG_VP9_FRAME_SIZE(2), + .reg_hor_stride = RKVDEC_VP9_HOR_VIRSTRIDE(2), + .reg_y_stride = RKVDEC_VP9_ALTREF_FRAME_YSTRIDE, + .reg_yuv_stride = 0, + .reg_ref_base = RKVDEC_REG_VP9_ALTREF_FRAME_BASE, + } +}; + +static struct rkvdec_decoded_buffer * +get_ref_buf(struct rkvdec_ctx *ctx, struct vb2_v4l2_buffer *dst, u64 timestamp) +{ + struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx; + struct vb2_queue *cap_q = &m2m_ctx->cap_q_ctx.q; + int buf_idx; + + /* + * If a ref is unused or invalid, address of current destination + * buffer is returned. 
+ */ + buf_idx = vb2_find_timestamp(cap_q, timestamp, 0); + if (buf_idx < 0) + return vb2_to_rkvdec_decoded_buf(&dst->vb2_buf); + + return vb2_to_rkvdec_decoded_buf(vb2_get_buffer(cap_q, buf_idx)); +} + +static dma_addr_t get_mv_base_addr(struct rkvdec_decoded_buffer *buf) +{ + unsigned int aligned_pitch, aligned_height, yuv_len; + + aligned_height = round_up(buf->vp9.height, 64); + aligned_pitch = round_up(buf->vp9.width * buf->vp9.bit_depth, 512) / 8; + yuv_len = (aligned_height * aligned_pitch * 3) / 2; + + return vb2_dma_contig_plane_dma_addr(&buf->base.vb.vb2_buf, 0) + + yuv_len; +} + +static void config_ref_registers(struct rkvdec_ctx *ctx, + const struct rkvdec_vp9_run *run, + struct rkvdec_decoded_buffer *ref_buf, + struct rkvdec_vp9_ref_reg *ref_reg) +{ + unsigned int aligned_pitch, aligned_height, y_len, yuv_len; + struct rkvdec_dev *rkvdec = ctx->dev; + + aligned_height = round_up(ref_buf->vp9.height, 64); + writel_relaxed(RKVDEC_VP9_FRAMEWIDTH(ref_buf->vp9.width) | + RKVDEC_VP9_FRAMEHEIGHT(ref_buf->vp9.height), + rkvdec->regs + ref_reg->reg_frm_size); + + writel_relaxed(vb2_dma_contig_plane_dma_addr(&ref_buf->base.vb.vb2_buf, 0), + rkvdec->regs + ref_reg->reg_ref_base); + + if (&ref_buf->base.vb == run->base.bufs.dst) + return; + + aligned_pitch = round_up(ref_buf->vp9.width * ref_buf->vp9.bit_depth, 512) / 8; + y_len = aligned_height * aligned_pitch; + yuv_len = (y_len * 3) / 2; + + writel_relaxed(RKVDEC_HOR_Y_VIRSTRIDE(aligned_pitch / 16) | + RKVDEC_HOR_UV_VIRSTRIDE(aligned_pitch / 16), + rkvdec->regs + ref_reg->reg_hor_stride); + writel_relaxed(RKVDEC_VP9_REF_YSTRIDE(y_len / 16), + rkvdec->regs + ref_reg->reg_y_stride); + + if (!ref_reg->reg_yuv_stride) + return; + + writel_relaxed(RKVDEC_VP9_REF_YUVSTRIDE(yuv_len / 16), + rkvdec->regs + ref_reg->reg_yuv_stride); +} + +static void config_seg_registers(struct rkvdec_ctx *ctx, unsigned int segid) +{ + struct rkvdec_vp9_ctx *vp9_ctx = ctx->priv; + const struct v4l2_vp9_segmentation *seg; + struct rkvdec_dev *rkvdec = ctx->dev; + s16 feature_val; + int feature_id; + u32 val = 0; + + seg = vp9_ctx->last.valid ? 
&vp9_ctx->last.seg : &vp9_ctx->cur.seg; + feature_id = V4L2_VP9_SEG_LVL_ALT_Q; + if (v4l2_vp9_seg_feat_enabled(seg->feature_enabled, feature_id, segid)) { + feature_val = seg->feature_data[segid][feature_id]; + val |= RKVDEC_SEGID_FRAME_QP_DELTA_EN(1) | + RKVDEC_SEGID_FRAME_QP_DELTA(feature_val); + } + + feature_id = V4L2_VP9_SEG_LVL_ALT_L; + if (v4l2_vp9_seg_feat_enabled(seg->feature_enabled, feature_id, segid)) { + feature_val = seg->feature_data[segid][feature_id]; + val |= RKVDEC_SEGID_FRAME_LOOPFILTER_VALUE_EN(1) | + RKVDEC_SEGID_FRAME_LOOPFILTER_VALUE(feature_val); + } + + feature_id = V4L2_VP9_SEG_LVL_REF_FRAME; + if (v4l2_vp9_seg_feat_enabled(seg->feature_enabled, feature_id, segid)) { + feature_val = seg->feature_data[segid][feature_id]; + val |= RKVDEC_SEGID_REFERINFO_EN(1) | + RKVDEC_SEGID_REFERINFO(feature_val); + } + + feature_id = V4L2_VP9_SEG_LVL_SKIP; + if (v4l2_vp9_seg_feat_enabled(seg->feature_enabled, feature_id, segid)) + val |= RKVDEC_SEGID_FRAME_SKIP_EN(1); + + if (!segid && + (seg->flags & V4L2_VP9_SEGMENTATION_FLAG_ABS_OR_DELTA_UPDATE)) + val |= RKVDEC_SEGID_ABS_DELTA(1); + + writel_relaxed(val, rkvdec->regs + RKVDEC_VP9_SEGID_GRP(segid)); +} + +static void update_dec_buf_info(struct rkvdec_decoded_buffer *buf, + const struct v4l2_ctrl_vp9_frame *dec_params) +{ + buf->vp9.width = dec_params->frame_width_minus_1 + 1; + buf->vp9.height = dec_params->frame_height_minus_1 + 1; + buf->vp9.bit_depth = dec_params->bit_depth; +} + +static void update_ctx_cur_info(struct rkvdec_vp9_ctx *vp9_ctx, + struct rkvdec_decoded_buffer *buf, + const struct v4l2_ctrl_vp9_frame *dec_params) +{ + vp9_ctx->cur.valid = true; + vp9_ctx->cur.reference_mode = dec_params->reference_mode; + vp9_ctx->cur.tx_mode = dec_params->tx_mode; + vp9_ctx->cur.interpolation_filter = dec_params->interpolation_filter; + vp9_ctx->cur.flags = dec_params->flags; + vp9_ctx->cur.timestamp = buf->base.vb.vb2_buf.timestamp; + vp9_ctx->cur.seg = dec_params->seg; + vp9_ctx->cur.lf = dec_params->lf; +} + +static void update_ctx_last_info(struct rkvdec_vp9_ctx *vp9_ctx) +{ + vp9_ctx->last = vp9_ctx->cur; +} + +static void config_registers(struct rkvdec_ctx *ctx, + const struct rkvdec_vp9_run *run) +{ + unsigned int y_len, uv_len, yuv_len, bit_depth, aligned_height, aligned_pitch, stream_len; + const struct v4l2_ctrl_vp9_frame *dec_params; + struct rkvdec_decoded_buffer *ref_bufs[3]; + struct rkvdec_decoded_buffer *dst, *last, *mv_ref; + struct rkvdec_vp9_ctx *vp9_ctx = ctx->priv; + u32 val, last_frame_info = 0; + const struct v4l2_vp9_segmentation *seg; + struct rkvdec_dev *rkvdec = ctx->dev; + dma_addr_t addr; + bool intra_only; + unsigned int i; + + dec_params = run->decode_params; + dst = vb2_to_rkvdec_decoded_buf(&run->base.bufs.dst->vb2_buf); + ref_bufs[0] = get_ref_buf(ctx, &dst->base.vb, dec_params->last_frame_ts); + ref_bufs[1] = get_ref_buf(ctx, &dst->base.vb, dec_params->golden_frame_ts); + ref_bufs[2] = get_ref_buf(ctx, &dst->base.vb, dec_params->alt_frame_ts); + + if (vp9_ctx->last.valid) + last = get_ref_buf(ctx, &dst->base.vb, vp9_ctx->last.timestamp); + else + last = dst; + + update_dec_buf_info(dst, dec_params); + update_ctx_cur_info(vp9_ctx, dst, dec_params); + seg = &dec_params->seg; + + intra_only = !!(dec_params->flags & + (V4L2_VP9_FRAME_FLAG_KEY_FRAME | + V4L2_VP9_FRAME_FLAG_INTRA_ONLY)); + + writel_relaxed(RKVDEC_MODE(RKVDEC_MODE_VP9), + rkvdec->regs + RKVDEC_REG_SYSCTRL); + + bit_depth = dec_params->bit_depth; + aligned_height = round_up(ctx->decoded_fmt.fmt.pix_mp.height, 64); + + aligned_pitch 
= round_up(ctx->decoded_fmt.fmt.pix_mp.width * + bit_depth, + 512) / 8; + y_len = aligned_height * aligned_pitch; + uv_len = y_len / 2; + yuv_len = y_len + uv_len; + + writel_relaxed(RKVDEC_Y_HOR_VIRSTRIDE(aligned_pitch / 16) | + RKVDEC_UV_HOR_VIRSTRIDE(aligned_pitch / 16), + rkvdec->regs + RKVDEC_REG_PICPAR); + writel_relaxed(RKVDEC_Y_VIRSTRIDE(y_len / 16), + rkvdec->regs + RKVDEC_REG_Y_VIRSTRIDE); + writel_relaxed(RKVDEC_YUV_VIRSTRIDE(yuv_len / 16), + rkvdec->regs + RKVDEC_REG_YUV_VIRSTRIDE); + + stream_len = vb2_get_plane_payload(&run->base.bufs.src->vb2_buf, 0); + writel_relaxed(RKVDEC_STRM_LEN(stream_len), + rkvdec->regs + RKVDEC_REG_STRM_LEN); + + /* + * Reset count buffer, because decoder only output intra related syntax + * counts when decoding intra frame, but update entropy need to update + * all the probabilities. + */ + if (intra_only) + memset(vp9_ctx->count_tbl.cpu, 0, vp9_ctx->count_tbl.size); + + vp9_ctx->cur.segmapid = vp9_ctx->last.segmapid; + if (!intra_only && + !(dec_params->flags & V4L2_VP9_FRAME_FLAG_ERROR_RESILIENT) && + (!(seg->flags & V4L2_VP9_SEGMENTATION_FLAG_ENABLED) || + (seg->flags & V4L2_VP9_SEGMENTATION_FLAG_UPDATE_MAP))) + vp9_ctx->cur.segmapid++; + + for (i = 0; i < ARRAY_SIZE(ref_bufs); i++) + config_ref_registers(ctx, run, ref_bufs[i], &ref_regs[i]); + + for (i = 0; i < 8; i++) + config_seg_registers(ctx, i); + + writel_relaxed(RKVDEC_VP9_TX_MODE(dec_params->tx_mode) | + RKVDEC_VP9_FRAME_REF_MODE(dec_params->reference_mode), + rkvdec->regs + RKVDEC_VP9_CPRHEADER_CONFIG); + + if (!intra_only) { + const struct v4l2_vp9_loop_filter *lf; + s8 delta; + + if (vp9_ctx->last.valid) + lf = &vp9_ctx->last.lf; + else + lf = &vp9_ctx->cur.lf; + + val = 0; + for (i = 0; i < ARRAY_SIZE(lf->ref_deltas); i++) { + delta = lf->ref_deltas[i]; + val |= RKVDEC_REF_DELTAS_LASTFRAME(i, delta); + } + + writel_relaxed(val, + rkvdec->regs + RKVDEC_VP9_REF_DELTAS_LASTFRAME); + + for (i = 0; i < ARRAY_SIZE(lf->mode_deltas); i++) { + delta = lf->mode_deltas[i]; + last_frame_info |= RKVDEC_MODE_DELTAS_LASTFRAME(i, + delta); + } + } + + if (vp9_ctx->last.valid && !intra_only && + vp9_ctx->last.seg.flags & V4L2_VP9_SEGMENTATION_FLAG_ENABLED) + last_frame_info |= RKVDEC_SEG_EN_LASTFRAME; + + if (vp9_ctx->last.valid && + vp9_ctx->last.flags & V4L2_VP9_FRAME_FLAG_SHOW_FRAME) + last_frame_info |= RKVDEC_LAST_SHOW_FRAME; + + if (vp9_ctx->last.valid && + vp9_ctx->last.flags & + (V4L2_VP9_FRAME_FLAG_KEY_FRAME | V4L2_VP9_FRAME_FLAG_INTRA_ONLY)) + last_frame_info |= RKVDEC_LAST_INTRA_ONLY; + + if (vp9_ctx->last.valid && + last->vp9.width == dst->vp9.width && + last->vp9.height == dst->vp9.height) + last_frame_info |= RKVDEC_LAST_WIDHHEIGHT_EQCUR; + + writel_relaxed(last_frame_info, + rkvdec->regs + RKVDEC_VP9_INFO_LASTFRAME); + + writel_relaxed(stream_len - dec_params->compressed_header_size - + dec_params->uncompressed_header_size, + rkvdec->regs + RKVDEC_VP9_LASTTILE_SIZE); + + for (i = 0; !intra_only && i < ARRAY_SIZE(ref_bufs); i++) { + unsigned int refw = ref_bufs[i]->vp9.width; + unsigned int refh = ref_bufs[i]->vp9.height; + u32 hscale, vscale; + + hscale = (refw << 14) / dst->vp9.width; + vscale = (refh << 14) / dst->vp9.height; + writel_relaxed(RKVDEC_VP9_REF_HOR_SCALE(hscale) | + RKVDEC_VP9_REF_VER_SCALE(vscale), + rkvdec->regs + RKVDEC_VP9_REF_SCALE(i)); + } + + addr = vb2_dma_contig_plane_dma_addr(&dst->base.vb.vb2_buf, 0); + writel_relaxed(addr, rkvdec->regs + RKVDEC_REG_DECOUT_BASE); + addr = vb2_dma_contig_plane_dma_addr(&run->base.bufs.src->vb2_buf, 0); + writel_relaxed(addr, 
rkvdec->regs + RKVDEC_REG_STRM_RLC_BASE); + writel_relaxed(vp9_ctx->priv_tbl.dma + + offsetof(struct rkvdec_vp9_priv_tbl, probs), + rkvdec->regs + RKVDEC_REG_CABACTBL_PROB_BASE); + writel_relaxed(vp9_ctx->count_tbl.dma, + rkvdec->regs + RKVDEC_REG_VP9COUNT_BASE); + + writel_relaxed(vp9_ctx->priv_tbl.dma + + offsetof(struct rkvdec_vp9_priv_tbl, segmap) + + (RKVDEC_VP9_MAX_SEGMAP_SIZE * vp9_ctx->cur.segmapid), + rkvdec->regs + RKVDEC_REG_VP9_SEGIDCUR_BASE); + writel_relaxed(vp9_ctx->priv_tbl.dma + + offsetof(struct rkvdec_vp9_priv_tbl, segmap) + + (RKVDEC_VP9_MAX_SEGMAP_SIZE * (!vp9_ctx->cur.segmapid)), + rkvdec->regs + RKVDEC_REG_VP9_SEGIDLAST_BASE); + + if (!intra_only && + !(dec_params->flags & V4L2_VP9_FRAME_FLAG_ERROR_RESILIENT) && + vp9_ctx->last.valid) + mv_ref = last; + else + mv_ref = dst; + + writel_relaxed(get_mv_base_addr(mv_ref), + rkvdec->regs + RKVDEC_VP9_REF_COLMV_BASE); + + writel_relaxed(ctx->decoded_fmt.fmt.pix_mp.width | + (ctx->decoded_fmt.fmt.pix_mp.height << 16), + rkvdec->regs + RKVDEC_REG_PERFORMANCE_CYCLE); +} + +static int validate_dec_params(struct rkvdec_ctx *ctx, + const struct v4l2_ctrl_vp9_frame *dec_params) +{ + unsigned int aligned_width, aligned_height; + + /* We only support profile 0. */ + if (dec_params->profile != 0) { + dev_err(ctx->dev->dev, "unsupported profile %d\n", + dec_params->profile); + return -EINVAL; + } + + aligned_width = round_up(dec_params->frame_width_minus_1 + 1, 64); + aligned_height = round_up(dec_params->frame_height_minus_1 + 1, 64); + + /* + * Userspace should update the capture/decoded format when the + * resolution changes. + */ + if (aligned_width != ctx->decoded_fmt.fmt.pix_mp.width || + aligned_height != ctx->decoded_fmt.fmt.pix_mp.height) { + dev_err(ctx->dev->dev, + "unexpected bitstream resolution %dx%d\n", + dec_params->frame_width_minus_1 + 1, + dec_params->frame_height_minus_1 + 1); + return -EINVAL; + } + + return 0; +} + +static int rkvdec_vp9_run_preamble(struct rkvdec_ctx *ctx, + struct rkvdec_vp9_run *run) +{ + const struct v4l2_ctrl_vp9_frame *dec_params; + const struct v4l2_ctrl_vp9_compressed_hdr_probs *prob_updates; + struct rkvdec_vp9_ctx *vp9_ctx = ctx->priv; + struct v4l2_ctrl *ctrl; + unsigned int fctx_idx; + int ret; + + /* v4l2-specific stuff */ + rkvdec_run_preamble(ctx, &run->base); + + ctrl = v4l2_ctrl_find(&ctx->ctrl_hdl, + V4L2_CID_STATELESS_VP9_FRAME); + if (WARN_ON(!ctrl)) + return -EINVAL; + dec_params = ctrl->p_cur.p; + + ret = validate_dec_params(ctx, dec_params); + if (ret) + return ret; + + run->decode_params = dec_params; + + ctrl = v4l2_ctrl_find(&ctx->ctrl_hdl, V4L2_CID_STATELESS_VP9_COMPRESSED_HDR_PROBS); + if (WARN_ON(!ctrl)) + return -EINVAL; + prob_updates = ctrl->p_cur.p; + + /* + * vp9 stuff + * + * by this point the userspace has done all parts of 6.2 uncompressed_header() + * except this fragment: + * if ( FrameIsIntra || error_resilient_mode ) { + * setup_past_independence ( ) + * if ( frame_type == KEY_FRAME || error_resilient_mode == 1 || + * reset_frame_context == 3 ) { + * for ( i = 0; i < 4; i ++ ) { + * save_probs( i ) + * } + * } else if ( reset_frame_context == 2 ) { + * save_probs( frame_context_idx ) + * } + * frame_context_idx = 0 + * } + */ + fctx_idx = v4l2_vp9_reset_frame_ctx(dec_params, vp9_ctx->frame_context); + vp9_ctx->cur.frame_context_idx = fctx_idx; + + /* 6.1 frame(sz): load_probs() and load_probs2() */ + vp9_ctx->probability_tables = vp9_ctx->frame_context[fctx_idx]; + + /* + * The userspace has also performed 6.3 compressed_header(), but handling the + * 
probs in a special way. All probs which need updating, except MV-related, + * have been read from the bitstream and translated through inv_map_table[], + * but no 6.3.6 inv_recenter_nonneg(v, m) has been performed. The values passed + * by userspace are either translated values (there are no 0 values in + * inv_map_table[]), or zero to indicate no update. All MV-related probs which need + * updating have been read from the bitstream and (mv_prob << 1) | 1 has been + * performed. The values passed by userspace are either new values + * to replace old ones (the above mentioned shift and bitwise or never result in + * a zero) or zero to indicate no update. + * fw_update_probs() performs actual probs updates or leaves probs as-is + * for values for which a zero was passed from userspace. + */ + v4l2_vp9_fw_update_probs(&vp9_ctx->probability_tables, prob_updates, dec_params); + + return 0; +} + +static int rkvdec_vp9_run(struct rkvdec_ctx *ctx) +{ + struct rkvdec_dev *rkvdec = ctx->dev; + struct rkvdec_vp9_run run = { }; + int ret; + + ret = rkvdec_vp9_run_preamble(ctx, &run); + if (ret) { + rkvdec_run_postamble(ctx, &run.base); + return ret; + } + + /* Prepare probs. */ + init_probs(ctx, &run); + + /* Configure hardware registers. */ + config_registers(ctx, &run); + + rkvdec_run_postamble(ctx, &run.base); + + schedule_delayed_work(&rkvdec->watchdog_work, msecs_to_jiffies(2000)); + + writel(1, rkvdec->regs + RKVDEC_REG_PREF_LUMA_CACHE_COMMAND); + writel(1, rkvdec->regs + RKVDEC_REG_PREF_CHR_CACHE_COMMAND); + + writel(0xe, rkvdec->regs + RKVDEC_REG_STRMD_ERR_EN); + /* Start decoding! */ + writel(RKVDEC_INTERRUPT_DEC_E | RKVDEC_CONFIG_DEC_CLK_GATE_E | + RKVDEC_TIMEOUT_E | RKVDEC_BUF_EMPTY_E, + rkvdec->regs + RKVDEC_REG_INTERRUPT); + + return 0; +} + +#define copy_tx_and_skip(p1, p2) \ +do { \ + memcpy((p1)->tx8, (p2)->tx8, sizeof((p1)->tx8)); \ + memcpy((p1)->tx16, (p2)->tx16, sizeof((p1)->tx16)); \ + memcpy((p1)->tx32, (p2)->tx32, sizeof((p1)->tx32)); \ + memcpy((p1)->skip, (p2)->skip, sizeof((p1)->skip)); \ +} while (0) + +static void rkvdec_vp9_done(struct rkvdec_ctx *ctx, + struct vb2_v4l2_buffer *src_buf, + struct vb2_v4l2_buffer *dst_buf, + enum vb2_buffer_state result) +{ + struct rkvdec_vp9_ctx *vp9_ctx = ctx->priv; + unsigned int fctx_idx; + + /* v4l2-specific stuff */ + if (result == VB2_BUF_STATE_ERROR) + goto out_update_last; + + /* + * vp9 stuff + * + * 6.1.2 refresh_probs() + * + * In the spec a complementary condition goes last in 6.1.2 refresh_probs(), + * but it makes no sense to perform all the activities from the first "if" + * there if we actually are not refreshing the frame context. On top of that, + * because of 6.2 uncompressed_header() whenever error_resilient_mode == 1, + * refresh_frame_context == 0. Consequently, if we don't jump to out_update_last + * it means error_resilient_mode must be 0. 
+ */ + if (!(vp9_ctx->cur.flags & V4L2_VP9_FRAME_FLAG_REFRESH_FRAME_CTX)) + goto out_update_last; + + fctx_idx = vp9_ctx->cur.frame_context_idx; + + if (!(vp9_ctx->cur.flags & V4L2_VP9_FRAME_FLAG_PARALLEL_DEC_MODE)) { + /* error_resilient_mode == 0 && frame_parallel_decoding_mode == 0 */ + struct v4l2_vp9_frame_context *probs = &vp9_ctx->probability_tables; + bool frame_is_intra = vp9_ctx->cur.flags & + (V4L2_VP9_FRAME_FLAG_KEY_FRAME | V4L2_VP9_FRAME_FLAG_INTRA_ONLY); + struct tx_and_skip { + u8 tx8[2][1]; + u8 tx16[2][2]; + u8 tx32[2][3]; + u8 skip[3]; + } _tx_skip, *tx_skip = &_tx_skip; + struct v4l2_vp9_frame_symbol_counts *counts; + + /* buffer the forward-updated TX and skip probs */ + if (frame_is_intra) + copy_tx_and_skip(tx_skip, probs); + + /* 6.1.2 refresh_probs(): load_probs() and load_probs2() */ + *probs = vp9_ctx->frame_context[fctx_idx]; + + /* if FrameIsIntra then undo the effect of load_probs2() */ + if (frame_is_intra) + copy_tx_and_skip(probs, tx_skip); + + counts = frame_is_intra ? &vp9_ctx->intra_cnts : &vp9_ctx->inter_cnts; + v4l2_vp9_adapt_coef_probs(probs, counts, + !vp9_ctx->last.valid || + vp9_ctx->last.flags & V4L2_VP9_FRAME_FLAG_KEY_FRAME, + frame_is_intra); + if (!frame_is_intra) { + const struct rkvdec_vp9_inter_frame_symbol_counts *inter_cnts; + u32 classes[2][11]; + int i; + + inter_cnts = vp9_ctx->count_tbl.cpu; + for (i = 0; i < ARRAY_SIZE(classes); ++i) + memcpy(classes[i], inter_cnts->classes[i], sizeof(classes[0])); + counts->classes = &classes; + + /* load_probs2() already done */ + v4l2_vp9_adapt_noncoef_probs(&vp9_ctx->probability_tables, counts, + vp9_ctx->cur.reference_mode, + vp9_ctx->cur.interpolation_filter, + vp9_ctx->cur.tx_mode, vp9_ctx->cur.flags); + } + } + + /* 6.1.2 refresh_probs(): save_probs(fctx_idx) */ + vp9_ctx->frame_context[fctx_idx] = vp9_ctx->probability_tables; + +out_update_last: + update_ctx_last_info(vp9_ctx); +} + +static void rkvdec_init_v4l2_vp9_count_tbl(struct rkvdec_ctx *ctx) +{ + struct rkvdec_vp9_ctx *vp9_ctx = ctx->priv; + struct rkvdec_vp9_intra_frame_symbol_counts *intra_cnts = vp9_ctx->count_tbl.cpu; + struct rkvdec_vp9_inter_frame_symbol_counts *inter_cnts = vp9_ctx->count_tbl.cpu; + int i, j, k, l, m; + + vp9_ctx->inter_cnts.partition = &inter_cnts->partition; + vp9_ctx->inter_cnts.skip = &inter_cnts->skip; + vp9_ctx->inter_cnts.intra_inter = &inter_cnts->inter; + vp9_ctx->inter_cnts.tx32p = &inter_cnts->tx32p; + vp9_ctx->inter_cnts.tx16p = &inter_cnts->tx16p; + vp9_ctx->inter_cnts.tx8p = &inter_cnts->tx8p; + + vp9_ctx->intra_cnts.partition = (u32 (*)[16][4])(&intra_cnts->partition); + vp9_ctx->intra_cnts.skip = &intra_cnts->skip; + vp9_ctx->intra_cnts.intra_inter = &intra_cnts->intra; + vp9_ctx->intra_cnts.tx32p = &intra_cnts->tx32p; + vp9_ctx->intra_cnts.tx16p = &intra_cnts->tx16p; + vp9_ctx->intra_cnts.tx8p = &intra_cnts->tx8p; + + vp9_ctx->inter_cnts.y_mode = &inter_cnts->y_mode; + vp9_ctx->inter_cnts.uv_mode = &inter_cnts->uv_mode; + vp9_ctx->inter_cnts.comp = &inter_cnts->comp; + vp9_ctx->inter_cnts.comp_ref = &inter_cnts->comp_ref; + vp9_ctx->inter_cnts.single_ref = &inter_cnts->single_ref; + vp9_ctx->inter_cnts.mv_mode = &inter_cnts->mv_mode; + vp9_ctx->inter_cnts.filter = &inter_cnts->filter; + vp9_ctx->inter_cnts.mv_joint = &inter_cnts->mv_joint; + vp9_ctx->inter_cnts.sign = &inter_cnts->sign; + /* + * rk hardware actually uses "u32 classes[2][11 + 1];" + * instead of "u32 classes[2][11];", so this must be explicitly + * copied into vp9_ctx->classes when passing the data to the + * vp9 library function 
+ */ + vp9_ctx->inter_cnts.class0 = &inter_cnts->class0; + vp9_ctx->inter_cnts.bits = &inter_cnts->bits; + vp9_ctx->inter_cnts.class0_fp = &inter_cnts->class0_fp; + vp9_ctx->inter_cnts.fp = &inter_cnts->fp; + vp9_ctx->inter_cnts.class0_hp = &inter_cnts->class0_hp; + vp9_ctx->inter_cnts.hp = &inter_cnts->hp; + +#define INNERMOST_LOOP \ + do { \ + for (m = 0; m < ARRAY_SIZE(vp9_ctx->inter_cnts.coeff[0][0][0][0]); ++m) {\ + vp9_ctx->inter_cnts.coeff[i][j][k][l][m] = \ + &inter_cnts->ref_cnt[k][i][j][l][m].coeff; \ + vp9_ctx->inter_cnts.eob[i][j][k][l][m][0] = \ + &inter_cnts->ref_cnt[k][i][j][l][m].eob[0]; \ + vp9_ctx->inter_cnts.eob[i][j][k][l][m][1] = \ + &inter_cnts->ref_cnt[k][i][j][l][m].eob[1]; \ + \ + vp9_ctx->intra_cnts.coeff[i][j][k][l][m] = \ + &intra_cnts->ref_cnt[k][i][j][l][m].coeff; \ + vp9_ctx->intra_cnts.eob[i][j][k][l][m][0] = \ + &intra_cnts->ref_cnt[k][i][j][l][m].eob[0]; \ + vp9_ctx->intra_cnts.eob[i][j][k][l][m][1] = \ + &intra_cnts->ref_cnt[k][i][j][l][m].eob[1]; \ + } \ + } while (0) + + for (i = 0; i < ARRAY_SIZE(vp9_ctx->inter_cnts.coeff); ++i) + for (j = 0; j < ARRAY_SIZE(vp9_ctx->inter_cnts.coeff[0]); ++j) + for (k = 0; k < ARRAY_SIZE(vp9_ctx->inter_cnts.coeff[0][0]); ++k) + for (l = 0; l < ARRAY_SIZE(vp9_ctx->inter_cnts.coeff[0][0][0]); ++l) + INNERMOST_LOOP; +#undef INNERMOST_LOOP +} + +static int rkvdec_vp9_start(struct rkvdec_ctx *ctx) +{ + struct rkvdec_dev *rkvdec = ctx->dev; + struct rkvdec_vp9_priv_tbl *priv_tbl; + struct rkvdec_vp9_ctx *vp9_ctx; + unsigned char *count_tbl; + int ret; + + vp9_ctx = kzalloc(sizeof(*vp9_ctx), GFP_KERNEL); + if (!vp9_ctx) + return -ENOMEM; + + ctx->priv = vp9_ctx; + + priv_tbl = dma_alloc_coherent(rkvdec->dev, sizeof(*priv_tbl), + &vp9_ctx->priv_tbl.dma, GFP_KERNEL); + if (!priv_tbl) { + ret = -ENOMEM; + goto err_free_ctx; + } + + vp9_ctx->priv_tbl.size = sizeof(*priv_tbl); + vp9_ctx->priv_tbl.cpu = priv_tbl; + memset(priv_tbl, 0, sizeof(*priv_tbl)); + + count_tbl = dma_alloc_coherent(rkvdec->dev, RKVDEC_VP9_COUNT_SIZE, + &vp9_ctx->count_tbl.dma, GFP_KERNEL); + if (!count_tbl) { + ret = -ENOMEM; + goto err_free_priv_tbl; + } + + vp9_ctx->count_tbl.size = RKVDEC_VP9_COUNT_SIZE; + vp9_ctx->count_tbl.cpu = count_tbl; + memset(count_tbl, 0, sizeof(*count_tbl)); + rkvdec_init_v4l2_vp9_count_tbl(ctx); + + return 0; + +err_free_priv_tbl: + dma_free_coherent(rkvdec->dev, vp9_ctx->priv_tbl.size, + vp9_ctx->priv_tbl.cpu, vp9_ctx->priv_tbl.dma); + +err_free_ctx: + kfree(vp9_ctx); + return ret; +} + +static void rkvdec_vp9_stop(struct rkvdec_ctx *ctx) +{ + struct rkvdec_vp9_ctx *vp9_ctx = ctx->priv; + struct rkvdec_dev *rkvdec = ctx->dev; + + dma_free_coherent(rkvdec->dev, vp9_ctx->count_tbl.size, + vp9_ctx->count_tbl.cpu, vp9_ctx->count_tbl.dma); + dma_free_coherent(rkvdec->dev, vp9_ctx->priv_tbl.size, + vp9_ctx->priv_tbl.cpu, vp9_ctx->priv_tbl.dma); + kfree(vp9_ctx); +} + +static int rkvdec_vp9_adjust_fmt(struct rkvdec_ctx *ctx, + struct v4l2_format *f) +{ + struct v4l2_pix_format_mplane *fmt = &f->fmt.pix_mp; + + fmt->num_planes = 1; + if (!fmt->plane_fmt[0].sizeimage) + fmt->plane_fmt[0].sizeimage = fmt->width * fmt->height * 2; + return 0; +} + +const struct rkvdec_coded_fmt_ops rkvdec_vp9_fmt_ops = { + .adjust_fmt = rkvdec_vp9_adjust_fmt, + .start = rkvdec_vp9_start, + .stop = rkvdec_vp9_stop, + .run = rkvdec_vp9_run, + .done = rkvdec_vp9_done, +}; diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c index 7131156c1f2c..9553d66e6325 100644 --- a/drivers/staging/media/rkvdec/rkvdec.c +++ 
b/drivers/staging/media/rkvdec/rkvdec.c @@ -99,10 +99,30 @@ static const struct rkvdec_ctrls rkvdec_h264_ctrls = { .num_ctrls = ARRAY_SIZE(rkvdec_h264_ctrl_descs), }; -static const u32 rkvdec_h264_decoded_fmts[] = { +static const u32 rkvdec_h264_vp9_decoded_fmts[] = { V4L2_PIX_FMT_NV12, }; +static const struct rkvdec_ctrl_desc rkvdec_vp9_ctrl_descs[] = { + { + .cfg.id = V4L2_CID_STATELESS_VP9_FRAME, + }, + { + .cfg.id = V4L2_CID_STATELESS_VP9_COMPRESSED_HDR_PROBS, + }, + { + .cfg.id = V4L2_CID_MPEG_VIDEO_VP9_PROFILE, + .cfg.min = V4L2_MPEG_VIDEO_VP9_PROFILE_0, + .cfg.max = V4L2_MPEG_VIDEO_VP9_PROFILE_0, + .cfg.def = V4L2_MPEG_VIDEO_VP9_PROFILE_0, + }, +}; + +static const struct rkvdec_ctrls rkvdec_vp9_ctrls = { + .ctrls = rkvdec_vp9_ctrl_descs, + .num_ctrls = ARRAY_SIZE(rkvdec_vp9_ctrl_descs), +}; + static const struct rkvdec_coded_fmt_desc rkvdec_coded_fmts[] = { { .fourcc = V4L2_PIX_FMT_H264_SLICE, @@ -116,8 +136,23 @@ static const struct rkvdec_coded_fmt_desc rkvdec_coded_fmts[] = { }, .ctrls = &rkvdec_h264_ctrls, .ops = &rkvdec_h264_fmt_ops, - .num_decoded_fmts = ARRAY_SIZE(rkvdec_h264_decoded_fmts), - .decoded_fmts = rkvdec_h264_decoded_fmts, + .num_decoded_fmts = ARRAY_SIZE(rkvdec_h264_vp9_decoded_fmts), + .decoded_fmts = rkvdec_h264_vp9_decoded_fmts, + }, + { + .fourcc = V4L2_PIX_FMT_VP9_FRAME, + .frmsize = { + .min_width = 64, + .max_width = 4096, + .step_width = 64, + .min_height = 64, + .max_height = 2304, + .step_height = 64, + }, + .ctrls = &rkvdec_vp9_ctrls, + .ops = &rkvdec_vp9_fmt_ops, + .num_decoded_fmts = ARRAY_SIZE(rkvdec_h264_vp9_decoded_fmts), + .decoded_fmts = rkvdec_h264_vp9_decoded_fmts, } }; @@ -319,7 +354,7 @@ static int rkvdec_s_output_fmt(struct file *file, void *priv, struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx; const struct rkvdec_coded_fmt_desc *desc; struct v4l2_format *cap_fmt; - struct vb2_queue *peer_vq; + struct vb2_queue *peer_vq, *vq; int ret; /* @@ -331,6 +366,15 @@ static int rkvdec_s_output_fmt(struct file *file, void *priv, if (vb2_is_busy(peer_vq)) return -EBUSY; + /* + * Some codecs like VP9 can contain dynamic resolution changes which + * are currently not supported by the V4L2 API or driver, so return + * an error if userspace tries to reconfigure the output format. + */ + vq = v4l2_m2m_get_vq(m2m_ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE); + if (vb2_is_busy(vq)) + return -EINVAL; + ret = rkvdec_s_fmt(file, priv, f, rkvdec_try_output_fmt); if (ret) return ret; diff --git a/drivers/staging/media/rkvdec/rkvdec.h b/drivers/staging/media/rkvdec/rkvdec.h index 52ac3874c5e5..2f4ea1786b93 100644 --- a/drivers/staging/media/rkvdec/rkvdec.h +++ b/drivers/staging/media/rkvdec/rkvdec.h @@ -42,14 +42,18 @@ struct rkvdec_run { struct rkvdec_vp9_decoded_buffer_info { /* Info needed when the decoded frame serves as a reference frame. */ - u16 width; - u16 height; - u32 bit_depth : 4; + unsigned short width; + unsigned short height; + unsigned int bit_depth : 4; }; struct rkvdec_decoded_buffer { /* Must be the first field in this struct. 
*/ struct v4l2_m2m_buffer base; + + union { + struct rkvdec_vp9_decoded_buffer_info vp9; + }; }; static inline struct rkvdec_decoded_buffer * @@ -116,4 +120,6 @@ void rkvdec_run_preamble(struct rkvdec_ctx *ctx, struct rkvdec_run *run); void rkvdec_run_postamble(struct rkvdec_ctx *ctx, struct rkvdec_run *run); extern const struct rkvdec_coded_fmt_ops rkvdec_h264_fmt_ops; +extern const struct rkvdec_coded_fmt_ops rkvdec_vp9_fmt_ops; + #endif /* RKVDEC_H_ */ From patchwork Thu Aug 5 14:42:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrzej Pietrasiewicz X-Patchwork-Id: 492493 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, UNPARSEABLE_RELAY, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 81BD0C432BE for ; Thu, 5 Aug 2021 14:45:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5BC9961157 for ; Thu, 5 Aug 2021 14:45:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242048AbhHEOp7 (ORCPT ); Thu, 5 Aug 2021 10:45:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45356 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242001AbhHEOnT (ORCPT ); Thu, 5 Aug 2021 10:43:19 -0400 Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e3e3]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6B08BC06179A; Thu, 5 Aug 2021 07:43:04 -0700 (PDT) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: andrzej.p) with ESMTPSA id 2BE321F4409D From: Andrzej Pietrasiewicz To: linux-media@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-rockchip@lists.infradead.org, linux-staging@lists.linux.dev Cc: Andrzej Pietrasiewicz , Benjamin Gaignard , Boris Brezillon , Ezequiel Garcia , Fabio Estevam , Greg Kroah-Hartman , Hans Verkuil , Heiko Stuebner , Jernej Skrabec , Mauro Carvalho Chehab , Nicolas Dufresne , NXP Linux Team , Pengutronix Kernel Team , Philipp Zabel , Sascha Hauer , Shawn Guo , kernel@collabora.com Subject: [PATCH v3 09/10] media: hantro: Support VP9 on the G2 core Date: Thu, 5 Aug 2021 16:42:45 +0200 Message-Id: <20210805144246.11998-10-andrzej.p@collabora.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210805144246.11998-1-andrzej.p@collabora.com> References: <20210805144246.11998-1-andrzej.p@collabora.com> Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org VeriSilicon Hantro G2 core supports VP9 codec. 
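Once the driver registers the VP9 stateless controls (added to hantro_drv.c
below), the compound V4L2_CID_STATELESS_VP9_FRAME control becomes queryable,
which makes for a quick userspace smoke test. A minimal sketch, assuming a
kernel with the VP9 stateless uAPI and /dev/video0 as the G2 decoder node:

	#include <stdio.h>
	#include <fcntl.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	int main(void)
	{
		int fd = open("/dev/video0", O_RDWR); /* assumed device node */
		struct v4l2_query_ext_ctrl qec;

		if (fd < 0)
			return 1;

		memset(&qec, 0, sizeof(qec));
		qec.id = V4L2_CID_STATELESS_VP9_FRAME;

		/* Present only when the driver registered the VP9 controls. */
		if (!ioctl(fd, VIDIOC_QUERY_EXT_CTRL, &qec))
			printf("stateless VP9 interface present: %s\n", qec.name);
		else
			printf("no stateless VP9 interface\n");
		return 0;
	}
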
Signed-off-by: Andrzej Pietrasiewicz --- drivers/staging/media/hantro/Kconfig | 1 + drivers/staging/media/hantro/Makefile | 6 +- drivers/staging/media/hantro/hantro.h | 26 + drivers/staging/media/hantro/hantro_drv.c | 18 +- drivers/staging/media/hantro/hantro_g2_regs.h | 97 ++ .../staging/media/hantro/hantro_g2_vp9_dec.c | 978 ++++++++++++++++++ drivers/staging/media/hantro/hantro_hw.h | 67 ++ drivers/staging/media/hantro/hantro_v4l2.c | 6 + drivers/staging/media/hantro/hantro_vp9.c | 240 +++++ drivers/staging/media/hantro/hantro_vp9.h | 103 ++ drivers/staging/media/hantro/imx8m_vpu_hw.c | 22 +- 11 files changed, 1560 insertions(+), 4 deletions(-) create mode 100644 drivers/staging/media/hantro/hantro_g2_vp9_dec.c create mode 100644 drivers/staging/media/hantro/hantro_vp9.c create mode 100644 drivers/staging/media/hantro/hantro_vp9.h diff --git a/drivers/staging/media/hantro/Kconfig b/drivers/staging/media/hantro/Kconfig index 20b1f6d7b69c..00a57d88c92e 100644 --- a/drivers/staging/media/hantro/Kconfig +++ b/drivers/staging/media/hantro/Kconfig @@ -9,6 +9,7 @@ config VIDEO_HANTRO select VIDEOBUF2_VMALLOC select V4L2_MEM2MEM_DEV select V4L2_H264 + select V4L2_VP9 help Support for the Hantro IP based Video Processing Units present on Rockchip and NXP i.MX8M SoCs, which accelerate video and image diff --git a/drivers/staging/media/hantro/Makefile b/drivers/staging/media/hantro/Makefile index fe6d84871d07..28af0a1ee4bf 100644 --- a/drivers/staging/media/hantro/Makefile +++ b/drivers/staging/media/hantro/Makefile @@ -10,9 +10,10 @@ hantro-vpu-y += \ hantro_g1.o \ hantro_g1_h264_dec.o \ hantro_g1_mpeg2_dec.o \ - hantro_g2_hevc_dec.o \ hantro_g1_vp8_dec.o \ hantro_g2.o \ + hantro_g2_hevc_dec.o \ + hantro_g2_vp9_dec.o \ rockchip_vpu2_hw_jpeg_enc.o \ rockchip_vpu2_hw_h264_dec.o \ rockchip_vpu2_hw_mpeg2_dec.o \ @@ -21,7 +22,8 @@ hantro-vpu-y += \ hantro_h264.o \ hantro_hevc.o \ hantro_mpeg2.o \ - hantro_vp8.o + hantro_vp8.o \ + hantro_vp9.o hantro-vpu-$(CONFIG_VIDEO_HANTRO_IMX8M) += \ imx8m_vpu_hw.o diff --git a/drivers/staging/media/hantro/hantro.h b/drivers/staging/media/hantro/hantro.h index d91eb2b1c509..1e8c1a6e3eb0 100644 --- a/drivers/staging/media/hantro/hantro.h +++ b/drivers/staging/media/hantro/hantro.h @@ -36,6 +36,7 @@ struct hantro_postproc_ops; #define HANTRO_VP8_DECODER BIT(17) #define HANTRO_H264_DECODER BIT(18) #define HANTRO_HEVC_DECODER BIT(19) +#define HANTRO_VP9_DECODER BIT(20) #define HANTRO_DECODERS 0xffff0000 /** @@ -110,6 +111,7 @@ enum hantro_codec_mode { HANTRO_MODE_MPEG2_DEC, HANTRO_MODE_VP8_DEC, HANTRO_MODE_HEVC_DEC, + HANTRO_MODE_VP9_DEC, }; /* @@ -223,6 +225,7 @@ struct hantro_dev { * @mpeg2_dec: MPEG-2-decoding context. * @vp8_dec: VP8-decoding context. * @hevc_dec: HEVC-decoding context. + * @vp9_dec: VP9-decoding context. */ struct hantro_ctx { struct hantro_dev *dev; @@ -250,6 +253,7 @@ struct hantro_ctx { struct hantro_mpeg2_dec_hw_ctx mpeg2_dec; struct hantro_vp8_dec_hw_ctx vp8_dec; struct hantro_hevc_dec_hw_ctx hevc_dec; + struct hantro_vp9_dec_hw_ctx vp9_dec; }; }; @@ -299,6 +303,22 @@ struct hantro_postproc_regs { struct hantro_reg display_width; }; +struct hantro_vp9_decoded_buffer_info { + /* Info needed when the decoded frame serves as a reference frame. */ + unsigned short width; + unsigned short height; + u32 bit_depth : 4; +}; + +struct hantro_decoded_buffer { + /* Must be the first field in this struct. 
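+ * (The vb2 core allocates each CAPTURE buffer as a buf_struct_size-sized
+ * chunk and treats the start of that allocation as a struct vb2_buffer,
+ * and buf_struct_size is set to sizeof(struct hantro_decoded_buffer) in
+ * hantro_drv.c, so base must stay first for that cast to be valid.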
*/ + struct v4l2_m2m_buffer base; + + union { + struct hantro_vp9_decoded_buffer_info vp9; + }; +}; + /* Logging helpers */ /** @@ -436,6 +456,12 @@ hantro_get_dec_buf_addr(struct hantro_ctx *ctx, struct vb2_buffer *vb) return vb2_dma_contig_plane_dma_addr(vb, 0); } +static inline struct hantro_decoded_buffer * +vb2_to_hantro_decoded_buf(struct vb2_buffer *buf) +{ + return container_of(buf, struct hantro_decoded_buffer, base.vb.vb2_buf); +} + void hantro_postproc_disable(struct hantro_ctx *ctx); void hantro_postproc_enable(struct hantro_ctx *ctx); void hantro_postproc_free(struct hantro_ctx *ctx); diff --git a/drivers/staging/media/hantro/hantro_drv.c b/drivers/staging/media/hantro/hantro_drv.c index 8a2edd67f2c6..0d1f3914f670 100644 --- a/drivers/staging/media/hantro/hantro_drv.c +++ b/drivers/staging/media/hantro/hantro_drv.c @@ -232,7 +232,7 @@ queue_init(void *priv, struct vb2_queue *src_vq, struct vb2_queue *dst_vq) dst_vq->io_modes = VB2_MMAP | VB2_DMABUF; dst_vq->drv_priv = ctx; dst_vq->ops = &hantro_queue_ops; - dst_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer); + dst_vq->buf_struct_size = sizeof(struct hantro_decoded_buffer); dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY; dst_vq->lock = &ctx->dev->vpu_mutex; dst_vq->dev = ctx->dev->v4l2_dev.dev; @@ -266,6 +266,12 @@ static int hantro_try_ctrl(struct v4l2_ctrl *ctrl) if (sps->flags & V4L2_HEVC_SPS_FLAG_SCALING_LIST_ENABLED) /* No scaling support */ return -EINVAL; + } else if (ctrl->id == V4L2_CID_STATELESS_VP9_FRAME) { + const struct v4l2_ctrl_vp9_frame *dec_params = ctrl->p_new.p_vp9_frame; + + /* We only support profile 0 */ + if (dec_params->profile != 0) + return -EINVAL; } return 0; } @@ -459,6 +465,16 @@ static const struct hantro_ctrl controls[] = { .step = 1, .ops = &hantro_hevc_ctrl_ops, }, + }, { + .codec = HANTRO_VP9_DECODER, + .cfg = { + .id = V4L2_CID_STATELESS_VP9_FRAME, + }, + }, { + .codec = HANTRO_VP9_DECODER, + .cfg = { + .id = V4L2_CID_STATELESS_VP9_COMPRESSED_HDR_PROBS, + }, }, }; diff --git a/drivers/staging/media/hantro/hantro_g2_regs.h b/drivers/staging/media/hantro/hantro_g2_regs.h index 0ac0ba375e80..21ca21648614 100644 --- a/drivers/staging/media/hantro/hantro_g2_regs.h +++ b/drivers/staging/media/hantro/hantro_g2_regs.h @@ -28,6 +28,7 @@ #define G2_REG_INTERRUPT_DEC_E BIT(0) #define HEVC_DEC_MODE 0xc +#define VP9_DEC_MODE 0xd #define BUS_WIDTH_32 0 #define BUS_WIDTH_64 1 @@ -49,6 +50,7 @@ #define g2_pic_height_in_cbs G2_DEC_REG(4, 6, 0x1fff) #define g2_num_ref_frames G2_DEC_REG(4, 0, 0x1f) +#define g2_start_bit G2_DEC_REG(5, 25, 0x7f) #define g2_scaling_list_e G2_DEC_REG(5, 24, 0x1) #define g2_cb_qp_offset G2_DEC_REG(5, 19, 0x1f) #define g2_cr_qp_offset G2_DEC_REG(5, 14, 0x1f) @@ -84,6 +86,7 @@ #define g2_bit_depth_y_minus8 G2_DEC_REG(8, 6, 0x3) #define g2_bit_depth_c_minus8 G2_DEC_REG(8, 4, 0x3) #define g2_output_8_bits G2_DEC_REG(8, 3, 0x1) +#define g2_output_format G2_DEC_REG(8, 0, 0x7) #define g2_refidx1_active G2_DEC_REG(9, 19, 0x1f) #define g2_refidx0_active G2_DEC_REG(9, 14, 0x1f) @@ -96,6 +99,14 @@ #define g2_tile_e G2_DEC_REG(10, 1, 0x1) #define g2_entropy_sync_e G2_DEC_REG(10, 0, 0x1) +#define vp9_transform_mode G2_DEC_REG(11, 27, 0x7) +#define vp9_filt_sharpness G2_DEC_REG(11, 21, 0x7) +#define vp9_mcomp_filt_type G2_DEC_REG(11, 8, 0x7) +#define vp9_high_prec_mv_e G2_DEC_REG(11, 7, 0x1) +#define vp9_comp_pred_mode G2_DEC_REG(11, 4, 0x3) +#define vp9_gref_sign_bias G2_DEC_REG(11, 2, 0x1) +#define vp9_aref_sign_bias G2_DEC_REG(11, 0, 0x1) + #define g2_refer_lterm_e G2_DEC_REG(12, 
16, 0xffff) #define g2_min_cb_size G2_DEC_REG(12, 13, 0x7) #define g2_max_cb_size G2_DEC_REG(12, 10, 0x7) @@ -154,6 +165,50 @@ #define g2_partial_ctb_y G2_DEC_REG(20, 30, 0x1) #define g2_pic_width_4x4 G2_DEC_REG(20, 16, 0xfff) #define g2_pic_height_4x4 G2_DEC_REG(20, 0, 0xfff) + +#define vp9_qp_delta_y_dc G2_DEC_REG(13, 23, 0x3f) +#define vp9_qp_delta_ch_dc G2_DEC_REG(13, 17, 0x3f) +#define vp9_qp_delta_ch_ac G2_DEC_REG(13, 11, 0x3f) +#define vp9_last_sign_bias G2_DEC_REG(13, 10, 0x1) +#define vp9_lossless_e G2_DEC_REG(13, 9, 0x1) +#define vp9_comp_pred_var_ref1 G2_DEC_REG(13, 7, 0x3) +#define vp9_comp_pred_var_ref0 G2_DEC_REG(13, 5, 0x3) +#define vp9_comp_pred_fixed_ref G2_DEC_REG(13, 3, 0x3) +#define vp9_segment_temp_upd_e G2_DEC_REG(13, 2, 0x1) +#define vp9_segment_upd_e G2_DEC_REG(13, 1, 0x1) +#define vp9_segment_e G2_DEC_REG(13, 0, 0x1) + +#define vp9_filt_level G2_DEC_REG(14, 18, 0x3f) +#define vp9_refpic_seg0 G2_DEC_REG(14, 15, 0x7) +#define vp9_skip_seg0 G2_DEC_REG(14, 14, 0x1) +#define vp9_filt_level_seg0 G2_DEC_REG(14, 8, 0x3f) +#define vp9_quant_seg0 G2_DEC_REG(14, 0, 0xff) + +#define vp9_refpic_seg1 G2_DEC_REG(15, 15, 0x7) +#define vp9_skip_seg1 G2_DEC_REG(15, 14, 0x1) +#define vp9_filt_level_seg1 G2_DEC_REG(15, 8, 0x3f) +#define vp9_quant_seg1 G2_DEC_REG(15, 0, 0xff) + +#define vp9_refpic_seg2 G2_DEC_REG(16, 15, 0x7) +#define vp9_skip_seg2 G2_DEC_REG(16, 14, 0x1) +#define vp9_filt_level_seg2 G2_DEC_REG(16, 8, 0x3f) +#define vp9_quant_seg2 G2_DEC_REG(16, 0, 0xff) + +#define vp9_refpic_seg3 G2_DEC_REG(17, 15, 0x7) +#define vp9_skip_seg3 G2_DEC_REG(17, 14, 0x1) +#define vp9_filt_level_seg3 G2_DEC_REG(17, 8, 0x3f) +#define vp9_quant_seg3 G2_DEC_REG(17, 0, 0xff) + +#define vp9_refpic_seg4 G2_DEC_REG(18, 15, 0x7) +#define vp9_skip_seg4 G2_DEC_REG(18, 14, 0x1) +#define vp9_filt_level_seg4 G2_DEC_REG(18, 8, 0x3f) +#define vp9_quant_seg4 G2_DEC_REG(18, 0, 0xff) + +#define vp9_refpic_seg5 G2_DEC_REG(19, 15, 0x7) +#define vp9_skip_seg5 G2_DEC_REG(19, 14, 0x1) +#define vp9_filt_level_seg5 G2_DEC_REG(19, 8, 0x3f) +#define vp9_quant_seg5 G2_DEC_REG(19, 0, 0xff) + #define hevc_cur_poc_00 G2_DEC_REG(46, 24, 0xff) #define hevc_cur_poc_01 G2_DEC_REG(46, 16, 0xff) #define hevc_cur_poc_02 G2_DEC_REG(46, 8, 0xff) @@ -174,6 +229,44 @@ #define hevc_cur_poc_14 G2_DEC_REG(49, 8, 0xff) #define hevc_cur_poc_15 G2_DEC_REG(49, 0, 0xff) +#define vp9_refpic_seg6 G2_DEC_REG(31, 15, 0x7) +#define vp9_skip_seg6 G2_DEC_REG(31, 14, 0x1) +#define vp9_filt_level_seg6 G2_DEC_REG(31, 8, 0x3f) +#define vp9_quant_seg6 G2_DEC_REG(31, 0, 0xff) + +#define vp9_refpic_seg7 G2_DEC_REG(32, 15, 0x7) +#define vp9_skip_seg7 G2_DEC_REG(32, 14, 0x1) +#define vp9_filt_level_seg7 G2_DEC_REG(32, 8, 0x3f) +#define vp9_quant_seg7 G2_DEC_REG(32, 0, 0xff) + +#define vp9_lref_width G2_DEC_REG(33, 16, 0xffff) +#define vp9_lref_height G2_DEC_REG(33, 0, 0xffff) + +#define vp9_gref_width G2_DEC_REG(34, 16, 0xffff) +#define vp9_gref_height G2_DEC_REG(34, 0, 0xffff) + +#define vp9_aref_width G2_DEC_REG(35, 16, 0xffff) +#define vp9_aref_height G2_DEC_REG(35, 0, 0xffff) + +#define vp9_lref_hor_scale G2_DEC_REG(36, 16, 0xffff) +#define vp9_lref_ver_scale G2_DEC_REG(36, 0, 0xffff) + +#define vp9_gref_hor_scale G2_DEC_REG(37, 16, 0xffff) +#define vp9_gref_ver_scale G2_DEC_REG(37, 0, 0xffff) + +#define vp9_aref_hor_scale G2_DEC_REG(38, 16, 0xffff) +#define vp9_aref_ver_scale G2_DEC_REG(38, 0, 0xffff) + +#define vp9_filt_ref_adj_0 G2_DEC_REG(46, 24, 0x7f) +#define vp9_filt_ref_adj_1 G2_DEC_REG(46, 16, 0x7f) +#define vp9_filt_ref_adj_2 G2_DEC_REG(46, 
8, 0x7f) +#define vp9_filt_ref_adj_3 G2_DEC_REG(46, 0, 0x7f) + +#define vp9_filt_mb_adj_0 G2_DEC_REG(47, 24, 0x7f) +#define vp9_filt_mb_adj_1 G2_DEC_REG(47, 16, 0x7f) +#define vp9_filt_mb_adj_2 G2_DEC_REG(47, 8, 0x7f) +#define vp9_filt_mb_adj_3 G2_DEC_REG(47, 0, 0x7f) + #define g2_apf_threshold G2_DEC_REG(55, 0, 0xffff) #define g2_clk_gate_e G2_DEC_REG(58, 16, 0x1) @@ -186,6 +279,8 @@ #define G2_ADDR_DST (G2_SWREG(65)) #define G2_REG_ADDR_REF(i) (G2_SWREG(67) + ((i) * 0x8)) +#define VP9_ADDR_SEGMENT_WRITE (G2_SWREG(79)) +#define VP9_ADDR_SEGMENT_READ (G2_SWREG(81)) #define G2_ADDR_DST_CHR (G2_SWREG(99)) #define G2_REG_CHR_REF(i) (G2_SWREG(101) + ((i) * 0x8)) #define G2_ADDR_DST_MV (G2_SWREG(133)) @@ -193,6 +288,8 @@ #define G2_ADDR_TILE_SIZE (G2_SWREG(167)) #define G2_ADDR_STR (G2_SWREG(169)) #define HEVC_SCALING_LIST (G2_SWREG(171)) +#define VP9_ADDR_CTR (G2_SWREG(171)) +#define VP9_ADDR_PROBS (G2_SWREG(173)) #define G2_RASTER_SCAN (G2_SWREG(175)) #define G2_RASTER_SCAN_CHR (G2_SWREG(177)) #define G2_TILE_FILTER (G2_SWREG(179)) diff --git a/drivers/staging/media/hantro/hantro_g2_vp9_dec.c b/drivers/staging/media/hantro/hantro_g2_vp9_dec.c new file mode 100644 index 000000000000..45a7be4a43fa --- /dev/null +++ b/drivers/staging/media/hantro/hantro_g2_vp9_dec.c @@ -0,0 +1,978 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Hantro VP9 codec driver + * + * Copyright (C) 2021 Collabora Ltd. + */ +#include "media/videobuf2-core.h" +#include "media/videobuf2-dma-contig.h" +#include "media/videobuf2-v4l2.h" +#include +#include +#include +#include + +#include "hantro.h" +#include "hantro_vp9.h" +#include "hantro_g2_regs.h" + +#define G2_ALIGN 16 + +enum hantro_ref_frames { + INTRA_FRAME = 0, + LAST_FRAME = 1, + GOLDEN_FRAME = 2, + ALTREF_FRAME = 3, + MAX_REF_FRAMES = 4 +}; + +static int start_prepare_run(struct hantro_ctx *ctx, const struct v4l2_ctrl_vp9_frame **dec_params) +{ + const struct v4l2_ctrl_vp9_compressed_hdr_probs *prob_updates; + struct hantro_vp9_dec_hw_ctx *vp9_ctx = &ctx->vp9_dec; + struct v4l2_ctrl *ctrl; + unsigned int fctx_idx; + + /* v4l2-specific stuff */ + hantro_start_prepare_run(ctx); + + ctrl = v4l2_ctrl_find(&ctx->ctrl_handler, V4L2_CID_STATELESS_VP9_FRAME); + if (WARN_ON(!ctrl)) + return -EINVAL; + *dec_params = ctrl->p_cur.p; + + ctrl = v4l2_ctrl_find(&ctx->ctrl_handler, V4L2_CID_STATELESS_VP9_COMPRESSED_HDR_PROBS); + if (WARN_ON(!ctrl)) + return -EINVAL; + prob_updates = ctrl->p_cur.p; + + /* + * vp9 stuff + * + * by this point the userspace has done all parts of 6.2 uncompressed_header() + * except this fragment: + * if ( FrameIsIntra || error_resilient_mode ) { + * setup_past_independence ( ) + * if ( frame_type == KEY_FRAME || error_resilient_mode == 1 || + * reset_frame_context == 3 ) { + * for ( i = 0; i < 4; i ++ ) { + * save_probs( i ) + * } + * } else if ( reset_frame_context == 2 ) { + * save_probs( frame_context_idx ) + * } + * frame_context_idx = 0 + * } + */ + fctx_idx = v4l2_vp9_reset_frame_ctx(*dec_params, vp9_ctx->frame_context); + vp9_ctx->cur.frame_context_idx = fctx_idx; + + /* 6.1 frame(sz): load_probs() and load_probs2() */ + vp9_ctx->probability_tables = vp9_ctx->frame_context[fctx_idx]; + + /* + * The userspace has also performed 6.3 compressed_header(), but handling the + * probs in a special way. All probs which need updating, except MV-related, + * have been read from the bitstream and translated through inv_map_table[], + * but no 6.3.6 inv_recenter_nonneg(v, m) has been performed. 
The values passed + * by userspace are either translated values (there are no 0 values in + * inv_map_table[]), or zero to indicate no update. All MV-related probs which need + * updating have been read from the bitstream and (mv_prob << 1) | 1 has been + * performed. The values passed by userspace are either new values + * to replace old ones (the above mentioned shift and bitwise or never result in + * a zero) or zero to indicate no update. + * fw_update_probs() performs actual probs updates or leaves probs as-is + * for values for which a zero was passed from userspace. + */ + v4l2_vp9_fw_update_probs(&vp9_ctx->probability_tables, prob_updates, *dec_params); + + return 0; +} + +static size_t chroma_offset(const struct hantro_ctx *ctx, + const struct v4l2_ctrl_vp9_frame *dec_params) +{ + int bytes_per_pixel = dec_params->bit_depth == 8 ? 1 : 2; + + return ctx->src_fmt.width * ctx->src_fmt.height * bytes_per_pixel; +} + +static size_t mv_offset(const struct hantro_ctx *ctx, + const struct v4l2_ctrl_vp9_frame *dec_params) +{ + size_t cr_offset = chroma_offset(ctx, dec_params); + + return ALIGN((cr_offset * 3) / 2, G2_ALIGN); +} + +static struct hantro_decoded_buffer * +get_ref_buf(struct hantro_ctx *ctx, struct vb2_v4l2_buffer *dst, u64 timestamp) +{ + struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx; + struct vb2_queue *cap_q = &m2m_ctx->cap_q_ctx.q; + int buf_idx; + + /* + * If a ref is unused or invalid, address of current destination + * buffer is returned. + */ + buf_idx = vb2_find_timestamp(cap_q, timestamp, 0); + if (buf_idx < 0) + return vb2_to_hantro_decoded_buf(&dst->vb2_buf); + + return vb2_to_hantro_decoded_buf(vb2_get_buffer(cap_q, buf_idx)); +} + +static void update_dec_buf_info(struct hantro_decoded_buffer *buf, + const struct v4l2_ctrl_vp9_frame *dec_params) +{ + buf->vp9.width = dec_params->frame_width_minus_1 + 1; + buf->vp9.height = dec_params->frame_height_minus_1 + 1; + buf->vp9.bit_depth = dec_params->bit_depth; +} + +static void update_ctx_cur_info(struct hantro_vp9_dec_hw_ctx *vp9_ctx, + struct hantro_decoded_buffer *buf, + const struct v4l2_ctrl_vp9_frame *dec_params) +{ + vp9_ctx->cur.valid = true; + vp9_ctx->cur.reference_mode = dec_params->reference_mode; + vp9_ctx->cur.tx_mode = dec_params->tx_mode; + vp9_ctx->cur.interpolation_filter = dec_params->interpolation_filter; + vp9_ctx->cur.flags = dec_params->flags; + vp9_ctx->cur.timestamp = buf->base.vb.vb2_buf.timestamp; +} + +static void config_output(struct hantro_ctx *ctx, + struct hantro_decoded_buffer *dst, + const struct v4l2_ctrl_vp9_frame *dec_params) +{ + dma_addr_t luma_addr, chroma_addr, mv_addr; + + hantro_reg_write(ctx->dev, &g2_out_dis, 0); + hantro_reg_write(ctx->dev, &g2_output_format, 0); + + luma_addr = vb2_dma_contig_plane_dma_addr(&dst->base.vb.vb2_buf, 0); + hantro_write_addr(ctx->dev, G2_ADDR_DST, luma_addr); + + chroma_addr = luma_addr + chroma_offset(ctx, dec_params); + hantro_write_addr(ctx->dev, G2_ADDR_DST_CHR, chroma_addr); + + mv_addr = luma_addr + mv_offset(ctx, dec_params); + hantro_write_addr(ctx->dev, G2_ADDR_DST_MV, mv_addr); +} + +struct hantro_vp9_ref_reg { + const struct hantro_reg width; + const struct hantro_reg height; + const struct hantro_reg hor_scale; + const struct hantro_reg ver_scale; + u32 y_base; + u32 c_base; +}; + +static void config_ref(struct hantro_ctx *ctx, + struct hantro_decoded_buffer *dst, + const struct hantro_vp9_ref_reg *ref_reg, + const struct v4l2_ctrl_vp9_frame *dec_params, + u64 ref_ts) +{ + struct hantro_decoded_buffer *buf; + dma_addr_t luma_addr, 
chroma_addr; + u32 refw, refh; + + buf = get_ref_buf(ctx, &dst->base.vb, ref_ts); + refw = buf->vp9.width; + refh = buf->vp9.height; + + hantro_reg_write(ctx->dev, &ref_reg->width, refw); + hantro_reg_write(ctx->dev, &ref_reg->height, refh); + + hantro_reg_write(ctx->dev, &ref_reg->hor_scale, (refw << 14) / dst->vp9.width); + hantro_reg_write(ctx->dev, &ref_reg->ver_scale, (refh << 14) / dst->vp9.height); + + luma_addr = vb2_dma_contig_plane_dma_addr(&buf->base.vb.vb2_buf, 0); + hantro_write_addr(ctx->dev, ref_reg->y_base, luma_addr); + + chroma_addr = luma_addr + chroma_offset(ctx, dec_params); + hantro_write_addr(ctx->dev, ref_reg->c_base, chroma_addr); +} + +static void config_ref_registers(struct hantro_ctx *ctx, + const struct v4l2_ctrl_vp9_frame *dec_params, + struct hantro_decoded_buffer *dst, + struct hantro_decoded_buffer *mv_ref) +{ + static const struct hantro_vp9_ref_reg ref_regs[] = { + { + /* Last */ + .width = vp9_lref_width, + .height = vp9_lref_height, + .hor_scale = vp9_lref_hor_scale, + .ver_scale = vp9_lref_ver_scale, + .y_base = G2_REG_ADDR_REF(0), + .c_base = G2_REG_CHR_REF(0), + }, { + /* Golden */ + .width = vp9_gref_width, + .height = vp9_gref_height, + .hor_scale = vp9_gref_hor_scale, + .ver_scale = vp9_gref_ver_scale, + .y_base = G2_REG_ADDR_REF(4), + .c_base = G2_REG_CHR_REF(4), + }, { + /* Altref */ + .width = vp9_aref_width, + .height = vp9_aref_height, + .hor_scale = vp9_aref_hor_scale, + .ver_scale = vp9_aref_ver_scale, + .y_base = G2_REG_ADDR_REF(5), + .c_base = G2_REG_CHR_REF(5), + }, + }; + dma_addr_t mv_addr; + + config_ref(ctx, dst, &ref_regs[0], dec_params, dec_params->last_frame_ts); + config_ref(ctx, dst, &ref_regs[1], dec_params, dec_params->golden_frame_ts); + config_ref(ctx, dst, &ref_regs[2], dec_params, dec_params->alt_frame_ts); + + mv_addr = vb2_dma_contig_plane_dma_addr(&mv_ref->base.vb.vb2_buf, 0) + + mv_offset(ctx, dec_params); + hantro_write_addr(ctx->dev, G2_REG_DMV_REF(0), mv_addr); + + hantro_reg_write(ctx->dev, &vp9_last_sign_bias, + dec_params->ref_frame_sign_bias & V4L2_VP9_SIGN_BIAS_LAST ? 1 : 0); + + hantro_reg_write(ctx->dev, &vp9_gref_sign_bias, + dec_params->ref_frame_sign_bias & V4L2_VP9_SIGN_BIAS_GOLDEN ? 1 : 0); + + hantro_reg_write(ctx->dev, &vp9_aref_sign_bias, + dec_params->ref_frame_sign_bias & V4L2_VP9_SIGN_BIAS_ALT ? 
1 : 0); +} + +static void recompute_tile_info(unsigned short *tile_info, unsigned int tiles, unsigned int sbs) +{ + int i; + unsigned int accumulated = 0; + unsigned int next_accumulated; + + for (i = 1; i <= tiles; ++i) { + next_accumulated = i * sbs / tiles; + *tile_info++ = next_accumulated - accumulated; + accumulated = next_accumulated; + } +} + +static void +recompute_tile_rc_info(struct hantro_ctx *ctx, + unsigned int tile_r, unsigned int tile_c, + unsigned int sbs_r, unsigned int sbs_c) +{ + struct hantro_vp9_dec_hw_ctx *vp9_ctx = &ctx->vp9_dec; + + recompute_tile_info(vp9_ctx->tile_r_info, tile_r, sbs_r); + recompute_tile_info(vp9_ctx->tile_c_info, tile_c, sbs_c); + + vp9_ctx->last_tile_r = tile_r; + vp9_ctx->last_tile_c = tile_c; + vp9_ctx->last_sbs_r = sbs_r; + vp9_ctx->last_sbs_c = sbs_c; +} + +static inline unsigned int first_tile_row(unsigned int tile_r, unsigned int sbs_r) +{ + if (tile_r == sbs_r + 1) + return 1; + + if (tile_r == sbs_r + 2) + return 2; + + return 0; +} + +static void +fill_tile_info(struct hantro_ctx *ctx, + unsigned int tile_r, unsigned int tile_c, + unsigned int sbs_r, unsigned int sbs_c, + unsigned short *tile_mem) +{ + struct hantro_vp9_dec_hw_ctx *vp9_ctx = &ctx->vp9_dec; + unsigned int i, j; + bool first = true; + + for (i = first_tile_row(tile_r, sbs_r); i < tile_r; ++i) { + unsigned short r_info = vp9_ctx->tile_r_info[i]; + + if (first) { + if (i > 0) + r_info += vp9_ctx->tile_r_info[0]; + if (i == 2) + r_info += vp9_ctx->tile_r_info[1]; + first = false; + } + for (j = 0; j < tile_c; ++j) { + *tile_mem++ = vp9_ctx->tile_c_info[j]; + *tile_mem++ = r_info; + } + } +} + +static void +config_tiles(struct hantro_ctx *ctx, + const struct v4l2_ctrl_vp9_frame *dec_params, + struct hantro_decoded_buffer *dst) +{ + struct hantro_vp9_dec_hw_ctx *vp9_ctx = &ctx->vp9_dec; + struct hantro_aux_buf *misc = &vp9_ctx->misc; + struct hantro_aux_buf *tile_edge = &vp9_ctx->tile_edge; + dma_addr_t addr; + unsigned short *tile_mem; + + addr = misc->dma + vp9_ctx->tile_info_offset; + hantro_write_addr(ctx->dev, G2_ADDR_TILE_SIZE, addr); + + tile_mem = misc->cpu + vp9_ctx->tile_info_offset; + if (dec_params->tile_cols_log2 || dec_params->tile_rows_log2) { + unsigned int tile_r = (1 << dec_params->tile_rows_log2); + unsigned int tile_c = (1 << dec_params->tile_cols_log2); + unsigned int sbs_r = hantro_vp9_num_sbs(dst->vp9.height); + unsigned int sbs_c = hantro_vp9_num_sbs(dst->vp9.width); + + if (tile_r != vp9_ctx->last_tile_r || tile_c != vp9_ctx->last_tile_c || + sbs_r != vp9_ctx->last_sbs_r || sbs_c != vp9_ctx->last_sbs_c) + recompute_tile_rc_info(ctx, tile_r, tile_c, sbs_r, sbs_c); + + fill_tile_info(ctx, tile_r, tile_c, sbs_r, sbs_c, tile_mem); + + hantro_reg_write(ctx->dev, &g2_tile_e, 1); + hantro_reg_write(ctx->dev, &g2_num_tile_cols, tile_c); + hantro_reg_write(ctx->dev, &g2_num_tile_rows, tile_r); + + addr = tile_edge->dma; + hantro_write_addr(ctx->dev, G2_TILE_FILTER, addr); + + addr = tile_edge->dma + vp9_ctx->bsd_ctrl_offset; + hantro_write_addr(ctx->dev, G2_TILE_BSD, addr); + } else { + tile_mem[0] = hantro_vp9_num_sbs(dst->vp9.width); + tile_mem[1] = hantro_vp9_num_sbs(dst->vp9.height); + + hantro_reg_write(ctx->dev, &g2_tile_e, 0); + hantro_reg_write(ctx->dev, &g2_num_tile_cols, 1); + hantro_reg_write(ctx->dev, &g2_num_tile_rows, 1); + } +} + +static void +update_feat_and_flag(struct hantro_vp9_dec_hw_ctx *vp9_ctx, + const struct v4l2_vp9_segmentation *seg, + enum v4l2_vp9_segment_feature feature, + unsigned int segid) +{ + u8 mask = 
V4L2_VP9_SEGMENT_FEATURE_ENABLED(feature); + + vp9_ctx->feature_data[segid][feature] = seg->feature_data[segid][feature]; + vp9_ctx->feature_enabled[segid] &= ~mask; + vp9_ctx->feature_enabled[segid] |= (seg->feature_enabled[segid] & mask); +} + +static inline s16 clip3(s16 x, s16 y, s16 z) +{ + return (z < x) ? x : (z > y) ? y : z; +} + +static s16 feat_val_clip3(s16 feat_val, s16 feature_data, bool absolute, u8 clip) +{ + if (absolute) + return feature_data; + + return clip3(0, 255, feat_val + feature_data); +} + +static void config_segment(struct hantro_ctx *ctx, const struct v4l2_ctrl_vp9_frame *dec_params) +{ + struct hantro_vp9_dec_hw_ctx *vp9_ctx = &ctx->vp9_dec; + const struct v4l2_vp9_segmentation *seg; + s16 feat_val; + unsigned char feat_id; + unsigned int segid; + bool segment_enabled, absolute, update_data; + + static const struct hantro_reg seg_regs[8][V4L2_VP9_SEG_LVL_MAX] = { + { vp9_quant_seg0, vp9_filt_level_seg0, vp9_refpic_seg0, vp9_skip_seg0 }, + { vp9_quant_seg1, vp9_filt_level_seg1, vp9_refpic_seg1, vp9_skip_seg1 }, + { vp9_quant_seg2, vp9_filt_level_seg2, vp9_refpic_seg2, vp9_skip_seg2 }, + { vp9_quant_seg3, vp9_filt_level_seg3, vp9_refpic_seg3, vp9_skip_seg3 }, + { vp9_quant_seg4, vp9_filt_level_seg4, vp9_refpic_seg4, vp9_skip_seg4 }, + { vp9_quant_seg5, vp9_filt_level_seg5, vp9_refpic_seg5, vp9_skip_seg5 }, + { vp9_quant_seg6, vp9_filt_level_seg6, vp9_refpic_seg6, vp9_skip_seg6 }, + { vp9_quant_seg7, vp9_filt_level_seg7, vp9_refpic_seg7, vp9_skip_seg7 }, + }; + + segment_enabled = !!(dec_params->seg.flags & V4L2_VP9_SEGMENTATION_FLAG_ENABLED); + hantro_reg_write(ctx->dev, &vp9_segment_e, segment_enabled); + hantro_reg_write(ctx->dev, &vp9_segment_upd_e, + !!(dec_params->seg.flags & V4L2_VP9_SEGMENTATION_FLAG_UPDATE_MAP)); + hantro_reg_write(ctx->dev, &vp9_segment_temp_upd_e, + !!(dec_params->seg.flags & V4L2_VP9_SEGMENTATION_FLAG_TEMPORAL_UPDATE)); + + seg = &dec_params->seg; + absolute = !!(seg->flags & V4L2_VP9_SEGMENTATION_FLAG_ABS_OR_DELTA_UPDATE); + update_data = !!(seg->flags & V4L2_VP9_SEGMENTATION_FLAG_UPDATE_DATA); + + for (segid = 0; segid < 8; ++segid) { + /* Quantizer segment feature */ + feat_id = V4L2_VP9_SEG_LVL_ALT_Q; + feat_val = dec_params->quant.base_q_idx; + if (segment_enabled) { + if (update_data) + update_feat_and_flag(vp9_ctx, seg, feat_id, segid); + if (v4l2_vp9_seg_feat_enabled(vp9_ctx->feature_enabled, feat_id, segid)) + feat_val = feat_val_clip3(feat_val, + vp9_ctx->feature_data[segid][feat_id], + absolute, 255); + } + hantro_reg_write(ctx->dev, &seg_regs[segid][feat_id], feat_val); + + /* Loop filter segment feature */ + feat_id = V4L2_VP9_SEG_LVL_ALT_L; + feat_val = dec_params->lf.level; + if (segment_enabled) { + if (update_data) + update_feat_and_flag(vp9_ctx, seg, feat_id, segid); + if (v4l2_vp9_seg_feat_enabled(vp9_ctx->feature_enabled, feat_id, segid)) + feat_val = feat_val_clip3(feat_val, + vp9_ctx->feature_data[segid][feat_id], + absolute, 63); + } + hantro_reg_write(ctx->dev, &seg_regs[segid][feat_id], feat_val); + + /* Reference frame segment feature */ + feat_id = V4L2_VP9_SEG_LVL_REF_FRAME; + feat_val = 0; + if (segment_enabled) { + if (update_data) + update_feat_and_flag(vp9_ctx, seg, feat_id, segid); + if (!(dec_params->flags & V4L2_VP9_FRAME_FLAG_KEY_FRAME) && + v4l2_vp9_seg_feat_enabled(vp9_ctx->feature_enabled, feat_id, segid)) + feat_val = vp9_ctx->feature_data[segid][feat_id] + 1; + } + hantro_reg_write(ctx->dev, &seg_regs[segid][feat_id], feat_val); + + /* Skip segment feature */ + feat_id = V4L2_VP9_SEG_LVL_SKIP; + 
feat_val = 0; + if (segment_enabled) { + if (update_data) + update_feat_and_flag(vp9_ctx, seg, feat_id, segid); + feat_val = v4l2_vp9_seg_feat_enabled(vp9_ctx->feature_enabled, + feat_id, segid) ? 1 : 0; + } + hantro_reg_write(ctx->dev, &seg_regs[segid][feat_id], feat_val); + } +} + +static void config_loop_filter(struct hantro_ctx *ctx, const struct v4l2_ctrl_vp9_frame *dec_params) +{ + bool d = dec_params->lf.flags & V4L2_VP9_LOOP_FILTER_FLAG_DELTA_ENABLED; + + hantro_reg_write(ctx->dev, &vp9_filt_level, dec_params->lf.level); + hantro_reg_write(ctx->dev, &g2_out_filtering_dis, dec_params->lf.level == 0); + hantro_reg_write(ctx->dev, &vp9_filt_sharpness, dec_params->lf.sharpness); + + hantro_reg_write(ctx->dev, &vp9_filt_ref_adj_0, d ? dec_params->lf.ref_deltas[0] : 0); + hantro_reg_write(ctx->dev, &vp9_filt_ref_adj_1, d ? dec_params->lf.ref_deltas[1] : 0); + hantro_reg_write(ctx->dev, &vp9_filt_ref_adj_2, d ? dec_params->lf.ref_deltas[2] : 0); + hantro_reg_write(ctx->dev, &vp9_filt_ref_adj_3, d ? dec_params->lf.ref_deltas[3] : 0); + hantro_reg_write(ctx->dev, &vp9_filt_mb_adj_0, d ? dec_params->lf.mode_deltas[0] : 0); + hantro_reg_write(ctx->dev, &vp9_filt_mb_adj_1, d ? dec_params->lf.mode_deltas[1] : 0); +} + +static void config_picture_dimensions(struct hantro_ctx *ctx, struct hantro_decoded_buffer *dst) +{ + u32 pic_w_4x4, pic_h_4x4; + + hantro_reg_write(ctx->dev, &g2_pic_width_in_cbs, (dst->vp9.width + 7) / 8); + hantro_reg_write(ctx->dev, &g2_pic_height_in_cbs, (dst->vp9.height + 7) / 8); + pic_w_4x4 = roundup(dst->vp9.width, 8) >> 2; + pic_h_4x4 = roundup(dst->vp9.height, 8) >> 2; + hantro_reg_write(ctx->dev, &g2_pic_width_4x4, pic_w_4x4); + hantro_reg_write(ctx->dev, &g2_pic_height_4x4, pic_h_4x4); +} + +static void +config_bit_depth(struct hantro_ctx *ctx, const struct v4l2_ctrl_vp9_frame *dec_params) +{ + hantro_reg_write(ctx->dev, &g2_bit_depth_y_minus8, dec_params->bit_depth - 8); + hantro_reg_write(ctx->dev, &g2_bit_depth_c_minus8, dec_params->bit_depth - 8); +} + +static inline bool is_lossless(const struct v4l2_vp9_quantization *quant) +{ + return quant->base_q_idx == 0 && quant->delta_q_uv_ac == 0 && + quant->delta_q_uv_dc == 0 && quant->delta_q_y_dc == 0; +} + +static void +config_quant(struct hantro_ctx *ctx, const struct v4l2_ctrl_vp9_frame *dec_params) +{ + hantro_reg_write(ctx->dev, &vp9_qp_delta_y_dc, dec_params->quant.delta_q_y_dc); + hantro_reg_write(ctx->dev, &vp9_qp_delta_ch_dc, dec_params->quant.delta_q_uv_dc); + hantro_reg_write(ctx->dev, &vp9_qp_delta_ch_ac, dec_params->quant.delta_q_uv_ac); + hantro_reg_write(ctx->dev, &vp9_lossless_e, is_lossless(&dec_params->quant)); +} + +static u32 +hantro_interp_filter_from_v4l2(enum v4l2_vp9_interpolation_filter interpolation_filter) +{ + switch (interpolation_filter) { + case V4L2_VP9_INTERP_FILTER_EIGHTTAP: + return 0x1; + case V4L2_VP9_INTERP_FILTER_EIGHTTAP_SMOOTH: + return 0; + case V4L2_VP9_INTERP_FILTER_EIGHTTAP_SHARP: + return 0x2; + case V4L2_VP9_INTERP_FILTER_BILINEAR: + return 0x3; + case V4L2_VP9_INTERP_FILTER_SWITCHABLE: + return 0x4; + } + + return 0; +} + +static void +config_others(struct hantro_ctx *ctx, const struct v4l2_ctrl_vp9_frame *dec_params, + bool intra_only, bool resolution_change) +{ + struct hantro_vp9_dec_hw_ctx *vp9_ctx = &ctx->vp9_dec; + + hantro_reg_write(ctx->dev, &g2_idr_pic_e, intra_only); + + hantro_reg_write(ctx->dev, &vp9_transform_mode, dec_params->tx_mode); + + hantro_reg_write(ctx->dev, &vp9_mcomp_filt_type, intra_only ? 
+ 0 : hantro_interp_filter_from_v4l2(dec_params->interpolation_filter)); + + hantro_reg_write(ctx->dev, &vp9_high_prec_mv_e, + !!(dec_params->flags & V4L2_VP9_FRAME_FLAG_ALLOW_HIGH_PREC_MV)); + + hantro_reg_write(ctx->dev, &vp9_comp_pred_mode, dec_params->reference_mode); + + hantro_reg_write(ctx->dev, &g2_tempor_mvp_e, + !(dec_params->flags & V4L2_VP9_FRAME_FLAG_ERROR_RESILIENT) && + !(dec_params->flags & V4L2_VP9_FRAME_FLAG_KEY_FRAME) && + !(vp9_ctx->last.flags & V4L2_VP9_FRAME_FLAG_KEY_FRAME) && + !(dec_params->flags & V4L2_VP9_FRAME_FLAG_INTRA_ONLY) && + !resolution_change && + vp9_ctx->last.flags & V4L2_VP9_FRAME_FLAG_SHOW_FRAME + ); + + hantro_reg_write(ctx->dev, &g2_write_mvs_e, + !(dec_params->flags & V4L2_VP9_FRAME_FLAG_KEY_FRAME)); +} + +static void +config_compound_reference(struct hantro_ctx *ctx, + const struct v4l2_ctrl_vp9_frame *dec_params) +{ + u32 comp_fixed_ref, comp_var_ref[2]; + bool last_ref_frame_sign_bias; + bool golden_ref_frame_sign_bias; + bool alt_ref_frame_sign_bias; + bool comp_ref_allowed = 0; + + comp_fixed_ref = 0; + comp_var_ref[0] = 0; + comp_var_ref[1] = 0; + + last_ref_frame_sign_bias = dec_params->ref_frame_sign_bias & V4L2_VP9_SIGN_BIAS_LAST; + golden_ref_frame_sign_bias = dec_params->ref_frame_sign_bias & V4L2_VP9_SIGN_BIAS_GOLDEN; + alt_ref_frame_sign_bias = dec_params->ref_frame_sign_bias & V4L2_VP9_SIGN_BIAS_ALT; + + /* 6.3.12 Frame reference mode syntax */ + comp_ref_allowed |= golden_ref_frame_sign_bias != last_ref_frame_sign_bias; + comp_ref_allowed |= alt_ref_frame_sign_bias != last_ref_frame_sign_bias; + + if (comp_ref_allowed) { + if (last_ref_frame_sign_bias == + golden_ref_frame_sign_bias) { + comp_fixed_ref = ALTREF_FRAME; + comp_var_ref[0] = LAST_FRAME; + comp_var_ref[1] = GOLDEN_FRAME; + } else if (last_ref_frame_sign_bias == + alt_ref_frame_sign_bias) { + comp_fixed_ref = GOLDEN_FRAME; + comp_var_ref[0] = LAST_FRAME; + comp_var_ref[1] = ALTREF_FRAME; + } else { + comp_fixed_ref = LAST_FRAME; + comp_var_ref[0] = GOLDEN_FRAME; + comp_var_ref[1] = ALTREF_FRAME; + } + } + + hantro_reg_write(ctx->dev, &vp9_comp_pred_fixed_ref, comp_fixed_ref); + hantro_reg_write(ctx->dev, &vp9_comp_pred_var_ref0, comp_var_ref[0]); + hantro_reg_write(ctx->dev, &vp9_comp_pred_var_ref1, comp_var_ref[1]); +} + +#define INNER_LOOP \ +do { \ + for (m = 0; m < ARRAY_SIZE(adaptive->coef[0][0][0][0]); ++m) { \ + memcpy(adaptive->coef[i][j][k][l][m], \ + probs->coef[i][j][k][l][m], \ + sizeof(probs->coef[i][j][k][l][m])); \ + \ + adaptive->coef[i][j][k][l][m][3] = 0; \ + } \ +} while (0) + +static void config_probs(struct hantro_ctx *ctx, const struct v4l2_ctrl_vp9_frame *dec_params) +{ + struct hantro_vp9_dec_hw_ctx *vp9_ctx = &ctx->vp9_dec; + struct hantro_aux_buf *misc = &vp9_ctx->misc; + struct hantro_g2_all_probs *all_probs = misc->cpu; + struct hantro_g2_probs *adaptive; + struct hantro_g2_mv_probs *mv; + const struct v4l2_vp9_segmentation *seg = &dec_params->seg; + const struct v4l2_vp9_frame_context *probs = &vp9_ctx->probability_tables; + int i, j, k, l, m; + + for (i = 0; i < ARRAY_SIZE(all_probs->kf_y_mode_prob); ++i) + for (j = 0; j < ARRAY_SIZE(all_probs->kf_y_mode_prob[0]); ++j) { + memcpy(all_probs->kf_y_mode_prob[i][j], + v4l2_vp9_kf_y_mode_prob[i][j], + ARRAY_SIZE(all_probs->kf_y_mode_prob[i][j])); + + all_probs->kf_y_mode_prob_tail[i][j][0] = + v4l2_vp9_kf_y_mode_prob[i][j][8]; + } + + memcpy(all_probs->mb_segment_tree_probs, seg->tree_probs, + sizeof(all_probs->mb_segment_tree_probs)); + + memcpy(all_probs->segment_pred_probs, seg->pred_probs, + 
sizeof(all_probs->segment_pred_probs)); + + for (i = 0; i < ARRAY_SIZE(all_probs->kf_uv_mode_prob); ++i) { + memcpy(all_probs->kf_uv_mode_prob[i], v4l2_vp9_kf_uv_mode_prob[i], + ARRAY_SIZE(all_probs->kf_uv_mode_prob[i])); + + all_probs->kf_uv_mode_prob_tail[i][0] = v4l2_vp9_kf_uv_mode_prob[i][8]; + } + + adaptive = &all_probs->probs; + + for (i = 0; i < ARRAY_SIZE(adaptive->inter_mode); ++i) { + memcpy(adaptive->inter_mode[i], probs->inter_mode[i], + sizeof(probs->inter_mode)); + + adaptive->inter_mode[i][3] = 0; + } + + memcpy(adaptive->is_inter, probs->is_inter, sizeof(adaptive->is_inter)); + + for (i = 0; i < ARRAY_SIZE(adaptive->uv_mode); ++i) { + memcpy(adaptive->uv_mode[i], probs->uv_mode[i], + sizeof(adaptive->uv_mode[i])); + adaptive->uv_mode_tail[i][0] = probs->uv_mode[i][8]; + } + + memcpy(adaptive->tx8, probs->tx8, sizeof(adaptive->tx8)); + memcpy(adaptive->tx16, probs->tx16, sizeof(adaptive->tx16)); + memcpy(adaptive->tx32, probs->tx32, sizeof(adaptive->tx32)); + + for (i = 0; i < ARRAY_SIZE(adaptive->y_mode); ++i) { + memcpy(adaptive->y_mode[i], probs->y_mode[i], + ARRAY_SIZE(adaptive->y_mode[i])); + + adaptive->y_mode_tail[i][0] = probs->y_mode[i][8]; + } + + for (i = 0; i < ARRAY_SIZE(adaptive->partition[0]); ++i) { + memcpy(adaptive->partition[0][i], v4l2_vp9_kf_partition_probs[i], + sizeof(v4l2_vp9_kf_partition_probs[i])); + + adaptive->partition[0][i][3] = 0; + } + + for (i = 0; i < ARRAY_SIZE(adaptive->partition[1]); ++i) { + memcpy(adaptive->partition[1][i], probs->partition[i], + sizeof(probs->partition[i])); + + adaptive->partition[1][i][3] = 0; + } + + memcpy(adaptive->interp_filter, probs->interp_filter, + sizeof(adaptive->interp_filter)); + + memcpy(adaptive->comp_mode, probs->comp_mode, sizeof(adaptive->comp_mode)); + + memcpy(adaptive->skip, probs->skip, sizeof(adaptive->skip)); + + mv = &adaptive->mv; + + memcpy(mv->joint, probs->mv.joint, sizeof(mv->joint)); + memcpy(mv->sign, probs->mv.sign, sizeof(mv->sign)); + memcpy(mv->class0_bit, probs->mv.class0_bit, sizeof(mv->class0_bit)); + memcpy(mv->fr, probs->mv.fr, sizeof(mv->fr)); + memcpy(mv->class0_hp, probs->mv.class0_hp, sizeof(mv->class0_hp)); + memcpy(mv->hp, probs->mv.hp, sizeof(mv->hp)); + memcpy(mv->classes, probs->mv.classes, sizeof(mv->classes)); + memcpy(mv->class0_fr, probs->mv.class0_fr, sizeof(mv->class0_fr)); + memcpy(mv->bits, probs->mv.bits, sizeof(mv->bits)); + + memcpy(adaptive->single_ref, probs->single_ref, sizeof(adaptive->single_ref)); + + memcpy(adaptive->comp_ref, probs->comp_ref, sizeof(adaptive->comp_ref)); + + for (i = 0; i < ARRAY_SIZE(adaptive->coef); ++i) + for (j = 0; j < ARRAY_SIZE(adaptive->coef[0]); ++j) + for (k = 0; k < ARRAY_SIZE(adaptive->coef[0][0]); ++k) + for (l = 0; l < ARRAY_SIZE(adaptive->coef[0][0][0]); ++l) + INNER_LOOP; + + hantro_write_addr(ctx->dev, VP9_ADDR_PROBS, misc->dma); +} + +static void config_counts(struct hantro_ctx *ctx) +{ + struct hantro_vp9_dec_hw_ctx *vp9_dec = &ctx->vp9_dec; + struct hantro_aux_buf *misc = &vp9_dec->misc; + dma_addr_t addr = misc->dma + vp9_dec->ctx_counters_offset; + + hantro_write_addr(ctx->dev, VP9_ADDR_CTR, addr); +} + +static void config_seg_map(struct hantro_ctx *ctx, + const struct v4l2_ctrl_vp9_frame *dec_params, + bool intra_only, bool update_map) +{ + struct hantro_vp9_dec_hw_ctx *vp9_ctx = &ctx->vp9_dec; + struct hantro_aux_buf *segment_map = &vp9_ctx->segment_map; + dma_addr_t addr; + + if (intra_only || + (dec_params->flags & V4L2_VP9_FRAME_FLAG_ERROR_RESILIENT)) { + memset(segment_map->cpu, 0, segment_map->size); + 
memset(vp9_ctx->feature_data, 0, sizeof(vp9_ctx->feature_data)); + memset(vp9_ctx->feature_enabled, 0, sizeof(vp9_ctx->feature_enabled)); + } + + addr = segment_map->dma + vp9_ctx->active_segment * vp9_ctx->segment_map_size; + hantro_write_addr(ctx->dev, VP9_ADDR_SEGMENT_READ, addr); + + addr = segment_map->dma + (1 - vp9_ctx->active_segment) * vp9_ctx->segment_map_size; + hantro_write_addr(ctx->dev, VP9_ADDR_SEGMENT_WRITE, addr); + + if (update_map) + vp9_ctx->active_segment = 1 - vp9_ctx->active_segment; +} + +static void +config_source(struct hantro_ctx *ctx, const struct v4l2_ctrl_vp9_frame *dec_params, + struct vb2_v4l2_buffer *vb2_src) +{ + dma_addr_t stream_base, tmp_addr; + unsigned int headres_size; + u32 src_len, start_bit, src_buf_len; + + headres_size = dec_params->uncompressed_header_size + + dec_params->compressed_header_size; + + stream_base = vb2_dma_contig_plane_dma_addr(&vb2_src->vb2_buf, 0); + hantro_write_addr(ctx->dev, G2_ADDR_STR, stream_base); + + tmp_addr = stream_base + headres_size; + start_bit = (tmp_addr & 0xf) * 8; + hantro_reg_write(ctx->dev, &g2_start_bit, start_bit); + + src_len = vb2_get_plane_payload(&vb2_src->vb2_buf, 0); + src_len += start_bit / 8 - headres_size; + hantro_reg_write(ctx->dev, &g2_stream_len, src_len); + + tmp_addr &= ~0xf; + hantro_reg_write(ctx->dev, &g2_strm_start_offset, tmp_addr - stream_base); + src_buf_len = vb2_plane_size(&vb2_src->vb2_buf, 0); + hantro_reg_write(ctx->dev, &g2_strm_buffer_len, src_buf_len); +} + +static void +config_registers(struct hantro_ctx *ctx, const struct v4l2_ctrl_vp9_frame *dec_params, + struct vb2_v4l2_buffer *vb2_src, struct vb2_v4l2_buffer *vb2_dst) +{ + struct hantro_decoded_buffer *dst, *last, *mv_ref; + struct hantro_vp9_dec_hw_ctx *vp9_ctx = &ctx->vp9_dec; + const struct v4l2_vp9_segmentation *seg; + bool intra_only, resolution_change; + + /* vp9 stuff */ + dst = vb2_to_hantro_decoded_buf(&vb2_dst->vb2_buf); + + if (vp9_ctx->last.valid) + last = get_ref_buf(ctx, &dst->base.vb, vp9_ctx->last.timestamp); + else + last = dst; + + update_dec_buf_info(dst, dec_params); + update_ctx_cur_info(vp9_ctx, dst, dec_params); + seg = &dec_params->seg; + + intra_only = !!(dec_params->flags & + (V4L2_VP9_FRAME_FLAG_KEY_FRAME | + V4L2_VP9_FRAME_FLAG_INTRA_ONLY)); + + if (!intra_only && + !(dec_params->flags & V4L2_VP9_FRAME_FLAG_ERROR_RESILIENT) && + vp9_ctx->last.valid) + mv_ref = last; + else + mv_ref = dst; + + resolution_change = dst->vp9.width != last->vp9.width || + dst->vp9.height != last->vp9.height; + + /* configure basic registers */ + hantro_reg_write(ctx->dev, &g2_mode, VP9_DEC_MODE); + hantro_reg_write(ctx->dev, &g2_strm_swap, 0xf); + hantro_reg_write(ctx->dev, &g2_dirmv_swap, 0xf); + hantro_reg_write(ctx->dev, &g2_compress_swap, 0xf); + hantro_reg_write(ctx->dev, &g2_buswidth, BUS_WIDTH_128); + hantro_reg_write(ctx->dev, &g2_max_burst, 16); + hantro_reg_write(ctx->dev, &g2_apf_threshold, 8); + hantro_reg_write(ctx->dev, &g2_ref_compress_bypass, 1); + hantro_reg_write(ctx->dev, &g2_clk_gate_e, 1); + hantro_reg_write(ctx->dev, &g2_max_cb_size, 6); + hantro_reg_write(ctx->dev, &g2_min_cb_size, 3); + + config_output(ctx, dst, dec_params); + + if (!intra_only) + config_ref_registers(ctx, dec_params, dst, mv_ref); + + config_tiles(ctx, dec_params, dst); + config_segment(ctx, dec_params); + config_loop_filter(ctx, dec_params); + config_picture_dimensions(ctx, dst); + config_bit_depth(ctx, dec_params); + config_quant(ctx, dec_params); + config_others(ctx, dec_params, intra_only, resolution_change); + 
config_compound_reference(ctx, dec_params); + config_probs(ctx, dec_params); + config_counts(ctx); + config_seg_map(ctx, dec_params, intra_only, + seg->flags & V4L2_VP9_SEGMENTATION_FLAG_UPDATE_MAP); + config_source(ctx, dec_params, vb2_src); +} + +int hantro_g2_vp9_dec_run(struct hantro_ctx *ctx) +{ + const struct v4l2_ctrl_vp9_frame *decode_params; + struct vb2_v4l2_buffer *src; + struct vb2_v4l2_buffer *dst; + int ret; + + hantro_g2_check_idle(ctx->dev); + + ret = start_prepare_run(ctx, &decode_params); + if (ret) { + hantro_end_prepare_run(ctx); + return ret; + } + + src = hantro_get_src_buf(ctx); + dst = hantro_get_dst_buf(ctx); + + config_registers(ctx, decode_params, src, dst); + + hantro_end_prepare_run(ctx); + + vdpu_write(ctx->dev, G2_REG_INTERRUPT_DEC_E, G2_REG_INTERRUPT); + + return 0; +} + +#define copy_tx_and_skip(p1, p2) \ +do { \ + memcpy((p1)->tx8, (p2)->tx8, sizeof((p1)->tx8)); \ + memcpy((p1)->tx16, (p2)->tx16, sizeof((p1)->tx16)); \ + memcpy((p1)->tx32, (p2)->tx32, sizeof((p1)->tx32)); \ + memcpy((p1)->skip, (p2)->skip, sizeof((p1)->skip)); \ +} while (0) + +void hantro_g2_vp9_dec_done(struct hantro_ctx *ctx) +{ + struct hantro_vp9_dec_hw_ctx *vp9_ctx = &ctx->vp9_dec; + unsigned int fctx_idx; + + if (!(vp9_ctx->cur.flags & V4L2_VP9_FRAME_FLAG_REFRESH_FRAME_CTX)) + goto out_update_last; + + fctx_idx = vp9_ctx->cur.frame_context_idx; + + if (!(vp9_ctx->cur.flags & V4L2_VP9_FRAME_FLAG_PARALLEL_DEC_MODE)) { + /* error_resilient_mode == 0 && frame_parallel_decoding_mode == 0 */ + struct v4l2_vp9_frame_context *probs = &vp9_ctx->probability_tables; + bool frame_is_intra = vp9_ctx->cur.flags & + (V4L2_VP9_FRAME_FLAG_KEY_FRAME | V4L2_VP9_FRAME_FLAG_INTRA_ONLY); + struct tx_and_skip { + u8 tx8[2][1]; + u8 tx16[2][2]; + u8 tx32[2][3]; + u8 skip[3]; + } _tx_skip, *tx_skip = &_tx_skip; + struct v4l2_vp9_frame_symbol_counts *counts; + struct symbol_counts *hantro_cnts; + u32 tx16p[2][4]; + int i; + + /* buffer the forward-updated TX and skip probs */ + if (frame_is_intra) + copy_tx_and_skip(tx_skip, probs); + + /* 6.1.2 refresh_probs(): load_probs() and load_probs2() */ + *probs = vp9_ctx->frame_context[fctx_idx]; + + /* if FrameIsIntra then undo the effect of load_probs2() */ + if (frame_is_intra) + copy_tx_and_skip(probs, tx_skip); + + counts = &vp9_ctx->cnts; + hantro_cnts = vp9_ctx->misc.cpu + vp9_ctx->ctx_counters_offset; + for (i = 0; i < ARRAY_SIZE(tx16p); ++i) { + memcpy(tx16p[i], + hantro_cnts->tx16x16_count[i], + sizeof(hantro_cnts->tx16x16_count[0])); + tx16p[i][3] = 0; + } + counts->tx16p = &tx16p; + + v4l2_vp9_adapt_coef_probs(probs, counts, + !vp9_ctx->last.valid || + vp9_ctx->last.flags & V4L2_VP9_FRAME_FLAG_KEY_FRAME, + frame_is_intra); + + if (!frame_is_intra) { + /* load_probs2() already done */ + u32 mv_mode[7][4]; + + for (i = 0; i < ARRAY_SIZE(mv_mode); ++i) { + mv_mode[i][0] = hantro_cnts->inter_mode_counts[i][1][0]; + mv_mode[i][1] = hantro_cnts->inter_mode_counts[i][2][0]; + mv_mode[i][2] = hantro_cnts->inter_mode_counts[i][0][0]; + mv_mode[i][3] = hantro_cnts->inter_mode_counts[i][2][1]; + } + counts->mv_mode = &mv_mode; + v4l2_vp9_adapt_noncoef_probs(&vp9_ctx->probability_tables, counts, + vp9_ctx->cur.reference_mode, + vp9_ctx->cur.interpolation_filter, + vp9_ctx->cur.tx_mode, vp9_ctx->cur.flags); + } + } + + vp9_ctx->frame_context[fctx_idx] = vp9_ctx->probability_tables; + +out_update_last: + vp9_ctx->last = vp9_ctx->cur; +} diff --git a/drivers/staging/media/hantro/hantro_hw.h b/drivers/staging/media/hantro/hantro_hw.h index 42b3f3961f75..2961d399fd60 
100644 --- a/drivers/staging/media/hantro/hantro_hw.h +++ b/drivers/staging/media/hantro/hantro_hw.h @@ -12,6 +12,7 @@ #include #include #include +#include #include #define DEC_8190_ALIGN_MASK 0x07U @@ -161,6 +162,50 @@ struct hantro_vp8_dec_hw_ctx { struct hantro_aux_buf prob_tbl; }; +struct hantro_vp9_frame_info { + u32 valid : 1; + u32 frame_context_idx : 2; + u32 reference_mode : 2; + u32 tx_mode : 3; + u32 interpolation_filter : 3; + u32 flags; + u64 timestamp; +}; + +#define MAX_SB_COLS 64 +#define MAX_SB_ROWS 34 + +/** + * struct hantro_vp9_dec_hw_ctx + * + */ +struct hantro_vp9_dec_hw_ctx { + struct hantro_aux_buf tile_edge; + struct hantro_aux_buf segment_map; + struct hantro_aux_buf misc; + struct v4l2_vp9_frame_symbol_counts cnts; + struct v4l2_vp9_frame_context probability_tables; + struct v4l2_vp9_frame_context frame_context[4]; + struct hantro_vp9_frame_info cur; + struct hantro_vp9_frame_info last; + + unsigned int bsd_ctrl_offset; + unsigned int segment_map_size; + unsigned int ctx_counters_offset; + unsigned int tile_info_offset; + + unsigned short tile_r_info[MAX_SB_ROWS]; + unsigned short tile_c_info[MAX_SB_COLS]; + unsigned int last_tile_r; + unsigned int last_tile_c; + unsigned int last_sbs_r; + unsigned int last_sbs_c; + + unsigned int active_segment; + u8 feature_enabled[8]; + s16 feature_data[8][4]; +}; + /** * struct hantro_postproc_ctx * @@ -267,6 +312,24 @@ void hantro_hevc_ref_remove_unused(struct hantro_ctx *ctx); size_t hantro_hevc_chroma_offset(const struct v4l2_ctrl_hevc_sps *sps); size_t hantro_hevc_motion_vectors_offset(const struct v4l2_ctrl_hevc_sps *sps); +static inline unsigned short hantro_vp9_num_sbs(unsigned short dimension) +{ + return (dimension + 63) / 64; +} + +static inline size_t +hantro_vp9_mv_size(unsigned int width, unsigned int height) +{ + int num_ctbs; + + /* + * There can be up to (CTBs x 64) number of blocks, + * and the motion vector for each block needs 16 bytes. 
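+ * For the 3840x2160 maximum the G2 on i.MX8M advertises, for example,
+ * that is 60 x 34 = 2040 superblocks, so this helper returns
+ * 2040 * 64 * 16 = 2088960 bytes of motion vector space.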
+ */ + num_ctbs = hantro_vp9_num_sbs(width) * hantro_vp9_num_sbs(height); + return (num_ctbs * 64) * 16; +} + static inline size_t hantro_h264_mv_size(unsigned int width, unsigned int height) { @@ -308,6 +371,10 @@ void hantro_vp8_dec_exit(struct hantro_ctx *ctx); void hantro_vp8_prob_update(struct hantro_ctx *ctx, const struct v4l2_ctrl_vp8_frame *hdr); +int hantro_g2_vp9_dec_run(struct hantro_ctx *ctx); +void hantro_g2_vp9_dec_done(struct hantro_ctx *ctx); +int hantro_vp9_dec_init(struct hantro_ctx *ctx); +void hantro_vp9_dec_exit(struct hantro_ctx *ctx); void hantro_g2_check_idle(struct hantro_dev *vpu); #endif /* HANTRO_HW_H_ */ diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c index d1f060c55fed..e4b0645ba6fc 100644 --- a/drivers/staging/media/hantro/hantro_v4l2.c +++ b/drivers/staging/media/hantro/hantro_v4l2.c @@ -299,6 +299,11 @@ static int hantro_try_fmt(const struct hantro_ctx *ctx, pix_mp->plane_fmt[0].sizeimage += hantro_h264_mv_size(pix_mp->width, pix_mp->height); + else if (ctx->vpu_src_fmt->fourcc == V4L2_PIX_FMT_VP9_FRAME && + !hantro_needs_postproc(ctx, fmt)) + pix_mp->plane_fmt[0].sizeimage += + hantro_vp9_mv_size(pix_mp->width, + pix_mp->height); } else if (!pix_mp->plane_fmt[0].sizeimage) { /* * For coded formats the application can specify @@ -407,6 +412,7 @@ hantro_update_requires_request(struct hantro_ctx *ctx, u32 fourcc) case V4L2_PIX_FMT_VP8_FRAME: case V4L2_PIX_FMT_H264_SLICE: case V4L2_PIX_FMT_HEVC_SLICE: + case V4L2_PIX_FMT_VP9_FRAME: ctx->fh.m2m_ctx->out_q_ctx.q.requires_requests = true; break; default: diff --git a/drivers/staging/media/hantro/hantro_vp9.c b/drivers/staging/media/hantro/hantro_vp9.c new file mode 100644 index 000000000000..566cd376c097 --- /dev/null +++ b/drivers/staging/media/hantro/hantro_vp9.c @@ -0,0 +1,240 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Hantro VP9 codec driver + * + * Copyright (C) 2021 Collabora Ltd. 
+ */ + +#include +#include + +#include "hantro.h" +#include "hantro_hw.h" +#include "hantro_vp9.h" + +#define POW2(x) (1 << (x)) + +#define MAX_LOG2_TILE_COLUMNS 6 +#define MAX_NUM_TILE_COLS POW2(MAX_LOG2_TILE_COLUMNS) +#define MAX_TILE_COLS 20 +#define MAX_TILE_ROWS 22 + +static size_t hantro_vp9_tile_filter_size(unsigned int height) +{ + u32 h, height32, size; + + h = roundup(height, 8); + + height32 = roundup(h, 64); + size = 24 * height32 * (MAX_NUM_TILE_COLS - 1); /* luma: 8, chroma: 8 + 8 */ + + return size; +} + +static size_t hantro_vp9_bsd_control_size(unsigned int height) +{ + u32 h, height32; + + h = roundup(height, 8); + height32 = roundup(h, 64); + + return 16 * (height32 / 4) * (MAX_NUM_TILE_COLS - 1); +} + +static size_t hantro_vp9_segment_map_size(unsigned int width, unsigned int height) +{ + u32 w, h; + int num_ctbs; + + w = roundup(width, 8); + h = roundup(height, 8); + num_ctbs = ((w + 63) / 64) * ((h + 63) / 64); + + return num_ctbs * 32; +} + +static inline size_t hantro_vp9_prob_tab_size(void) +{ + return roundup(sizeof(struct hantro_g2_all_probs), 16); +} + +static inline size_t hantro_vp9_count_tab_size(void) +{ + return roundup(sizeof(struct symbol_counts), 16); +} + +static inline size_t hantro_vp9_tile_info_size(void) +{ + return roundup((MAX_TILE_COLS * MAX_TILE_ROWS * 4 * sizeof(u16) + 15 + 16) & ~0xf, 16); +} + +static void *get_coeffs_arr(struct symbol_counts *cnts, int i, int j, int k, int l, int m) +{ + if (i == 0) + return &cnts->count_coeffs[j][k][l][m]; + + if (i == 1) + return &cnts->count_coeffs8x8[j][k][l][m]; + + if (i == 2) + return &cnts->count_coeffs16x16[j][k][l][m]; + + if (i == 3) + return &cnts->count_coeffs32x32[j][k][l][m]; + + return NULL; +} + +static void *get_eobs1(struct symbol_counts *cnts, int i, int j, int k, int l, int m) +{ + if (i == 0) + return &cnts->count_coeffs[j][k][l][m][3]; + + if (i == 1) + return &cnts->count_coeffs8x8[j][k][l][m][3]; + + if (i == 2) + return &cnts->count_coeffs16x16[j][k][l][m][3]; + + if (i == 3) + return &cnts->count_coeffs32x32[j][k][l][m][3]; + + return NULL; +} + +#define INNER_LOOP \ + do { \ + for (m = 0; m < ARRAY_SIZE(vp9_ctx->cnts.coeff[i][0][0][0]); ++m) { \ + vp9_ctx->cnts.coeff[i][j][k][l][m] = \ + get_coeffs_arr(cnts, i, j, k, l, m); \ + vp9_ctx->cnts.eob[i][j][k][l][m][0] = \ + &cnts->count_eobs[i][j][k][l][m]; \ + vp9_ctx->cnts.eob[i][j][k][l][m][1] = \ + get_eobs1(cnts, i, j, k, l, m); \ + } \ + } while (0) + +static void init_v4l2_vp9_count_tbl(struct hantro_ctx *ctx) +{ + struct hantro_vp9_dec_hw_ctx *vp9_ctx = &ctx->vp9_dec; + struct symbol_counts *cnts = vp9_ctx->misc.cpu + vp9_ctx->ctx_counters_offset; + int i, j, k, l, m; + + vp9_ctx->cnts.partition = &cnts->partition_counts; + vp9_ctx->cnts.skip = &cnts->mbskip_count; + vp9_ctx->cnts.intra_inter = &cnts->intra_inter_count; + vp9_ctx->cnts.tx32p = &cnts->tx32x32_count; + /* + * g2 hardware uses tx16x16_count[2][3], while the api + * expects tx16p[2][4], so this must be explicitly copied + * into vp9_ctx->cnts.tx16p when passing the data to the + * vp9 library function + */ + vp9_ctx->cnts.tx8p = &cnts->tx8x8_count; + + vp9_ctx->cnts.y_mode = &cnts->sb_ymode_counts; + vp9_ctx->cnts.uv_mode = &cnts->uv_mode_counts; + vp9_ctx->cnts.comp = &cnts->comp_inter_count; + vp9_ctx->cnts.comp_ref = &cnts->comp_ref_count; + vp9_ctx->cnts.single_ref = &cnts->single_ref_count; + vp9_ctx->cnts.filter = &cnts->switchable_interp_counts; + vp9_ctx->cnts.mv_joint = &cnts->mv_counts.joints; + vp9_ctx->cnts.sign = &cnts->mv_counts.sign; + 
vp9_ctx->cnts.classes = &cnts->mv_counts.classes; + vp9_ctx->cnts.class0 = &cnts->mv_counts.class0; + vp9_ctx->cnts.bits = &cnts->mv_counts.bits; + vp9_ctx->cnts.class0_fp = &cnts->mv_counts.class0_fp; + vp9_ctx->cnts.fp = &cnts->mv_counts.fp; + vp9_ctx->cnts.class0_hp = &cnts->mv_counts.class0_hp; + vp9_ctx->cnts.hp = &cnts->mv_counts.hp; + + for (i = 0; i < ARRAY_SIZE(vp9_ctx->cnts.coeff); ++i) + for (j = 0; j < ARRAY_SIZE(vp9_ctx->cnts.coeff[i]); ++j) + for (k = 0; k < ARRAY_SIZE(vp9_ctx->cnts.coeff[i][0]); ++k) + for (l = 0; l < ARRAY_SIZE(vp9_ctx->cnts.coeff[i][0][0]); ++l) + INNER_LOOP; +} + +int hantro_vp9_dec_init(struct hantro_ctx *ctx) +{ + struct hantro_dev *vpu = ctx->dev; + const struct hantro_variant *variant = vpu->variant; + struct hantro_vp9_dec_hw_ctx *vp9_dec = &ctx->vp9_dec; + struct hantro_aux_buf *tile_edge = &vp9_dec->tile_edge; + struct hantro_aux_buf *segment_map = &vp9_dec->segment_map; + struct hantro_aux_buf *misc = &vp9_dec->misc; + u32 i, max_width, max_height, size; + + if (variant->num_dec_fmts < 1) + return -EINVAL; + + for (i = 0; i < variant->num_dec_fmts; ++i) + if (variant->dec_fmts[i].fourcc == V4L2_PIX_FMT_VP9_FRAME) + break; + + if (i == variant->num_dec_fmts) + return -EINVAL; + + max_width = vpu->variant->dec_fmts[i].frmsize.max_width; + max_height = vpu->variant->dec_fmts[i].frmsize.max_height; + + size = hantro_vp9_tile_filter_size(max_height); + vp9_dec->bsd_ctrl_offset = size; + size += hantro_vp9_bsd_control_size(max_height); + + tile_edge->cpu = dma_alloc_coherent(vpu->dev, size, &tile_edge->dma, GFP_KERNEL); + if (!tile_edge->cpu) + return -ENOMEM; + + tile_edge->size = size; + memset(tile_edge->cpu, 0, size); + + size = hantro_vp9_segment_map_size(max_width, max_height); + vp9_dec->segment_map_size = size; + size *= 2; /* we need two areas of this size, used alternately */ + + segment_map->cpu = dma_alloc_coherent(vpu->dev, size, &segment_map->dma, GFP_KERNEL); + if (!segment_map->cpu) + goto err_segment_map; + + segment_map->size = size; + memset(segment_map->cpu, 0, size); + + size = hantro_vp9_prob_tab_size(); + vp9_dec->ctx_counters_offset = size; + size += hantro_vp9_count_tab_size(); + vp9_dec->tile_info_offset = size; + size += hantro_vp9_tile_info_size(); + + misc->cpu = dma_alloc_coherent(vpu->dev, size, &misc->dma, GFP_KERNEL); + if (!misc->cpu) + goto err_misc; + + misc->size = size; + memset(misc->cpu, 0, size); + + init_v4l2_vp9_count_tbl(ctx); + + return 0; + +err_misc: + dma_free_coherent(vpu->dev, segment_map->size, segment_map->cpu, segment_map->dma); + +err_segment_map: + dma_free_coherent(vpu->dev, tile_edge->size, tile_edge->cpu, tile_edge->dma); + + return -ENOMEM; +} + +void hantro_vp9_dec_exit(struct hantro_ctx *ctx) +{ + struct hantro_dev *vpu = ctx->dev; + struct hantro_vp9_dec_hw_ctx *vp9_dec = &ctx->vp9_dec; + struct hantro_aux_buf *tile_edge = &vp9_dec->tile_edge; + struct hantro_aux_buf *segment_map = &vp9_dec->segment_map; + struct hantro_aux_buf *misc = &vp9_dec->misc; + + dma_free_coherent(vpu->dev, misc->size, misc->cpu, misc->dma); + dma_free_coherent(vpu->dev, segment_map->size, segment_map->cpu, segment_map->dma); + dma_free_coherent(vpu->dev, tile_edge->size, tile_edge->cpu, tile_edge->dma); +} diff --git a/drivers/staging/media/hantro/hantro_vp9.h b/drivers/staging/media/hantro/hantro_vp9.h new file mode 100644 index 000000000000..c7f4bd3ff8dd --- /dev/null +++ b/drivers/staging/media/hantro/hantro_vp9.h @@ -0,0 +1,103 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Hantro VP9 codec driver + * + * 
Copyright (C) 2021 Collabora Ltd. + */ + +struct hantro_g2_mv_probs { + u8 joint[3]; + u8 sign[2]; + u8 class0_bit[2][1]; + u8 fr[2][3]; + u8 class0_hp[2]; + u8 hp[2]; + u8 classes[2][10]; + u8 class0_fr[2][2][3]; + u8 bits[2][10]; +}; + +struct hantro_g2_probs { + u8 inter_mode[7][4]; + u8 is_inter[4]; + u8 uv_mode[10][8]; + u8 tx8[2][1]; + u8 tx16[2][2]; + u8 tx32[2][3]; + u8 y_mode_tail[4][1]; + u8 y_mode[4][8]; + u8 partition[2][16][4]; /* [keyframe][][], [inter][][] */ + u8 uv_mode_tail[10][1]; + u8 interp_filter[4][2]; + u8 comp_mode[5]; + u8 skip[3]; + + u8 pad1[1]; + + struct hantro_g2_mv_probs mv; + + u8 single_ref[5][2]; + u8 comp_ref[5]; + + u8 pad2[17]; + + u8 coef[4][2][2][6][6][4]; +}; + +struct hantro_g2_all_probs { + u8 kf_y_mode_prob[10][10][8]; + + u8 kf_y_mode_prob_tail[10][10][1]; + u8 ref_pred_probs[3]; + u8 mb_segment_tree_probs[7]; + u8 segment_pred_probs[3]; + u8 ref_scores[4]; + u8 prob_comppred[2]; + + u8 pad1[9]; + + u8 kf_uv_mode_prob[10][8]; + u8 kf_uv_mode_prob_tail[10][1]; + + u8 pad2[6]; + + struct hantro_g2_probs probs; +}; + +struct mv_counts { + u32 joints[4]; + u32 sign[2][2]; + u32 classes[2][11]; + u32 class0[2][2]; + u32 bits[2][10][2]; + u32 class0_fp[2][2][4]; + u32 fp[2][4]; + u32 class0_hp[2][2]; + u32 hp[2][2]; +}; + +struct symbol_counts { + u32 inter_mode_counts[7][3][2]; + u32 sb_ymode_counts[4][10]; + u32 uv_mode_counts[10][10]; + u32 partition_counts[16][4]; + u32 switchable_interp_counts[4][3]; + u32 intra_inter_count[4][2]; + u32 comp_inter_count[5][2]; + u32 single_ref_count[5][2][2]; + u32 comp_ref_count[5][2]; + u32 tx32x32_count[2][4]; + u32 tx16x16_count[2][3]; + u32 tx8x8_count[2][2]; + u32 mbskip_count[3][2]; + + struct mv_counts mv_counts; + + u32 count_coeffs[2][2][6][6][4]; + u32 count_coeffs8x8[2][2][6][6][4]; + u32 count_coeffs16x16[2][2][6][6][4]; + u32 count_coeffs32x32[2][2][6][6][4]; + + u32 count_eobs[4][2][2][6][6]; +}; + diff --git a/drivers/staging/media/hantro/imx8m_vpu_hw.c b/drivers/staging/media/hantro/imx8m_vpu_hw.c index a40b161e5956..455a107ffb02 100644 --- a/drivers/staging/media/hantro/imx8m_vpu_hw.c +++ b/drivers/staging/media/hantro/imx8m_vpu_hw.c @@ -150,6 +150,19 @@ static const struct hantro_fmt imx8m_vpu_g2_dec_fmts[] = { .step_height = MB_DIM, }, }, + { + .fourcc = V4L2_PIX_FMT_VP9_FRAME, + .codec_mode = HANTRO_MODE_VP9_DEC, + .max_depth = 2, + .frmsize = { + .min_width = 48, + .max_width = 3840, + .step_width = MB_DIM, + .min_height = 48, + .max_height = 2160, + .step_height = MB_DIM, + }, + }, }; static irqreturn_t imx8m_vpu_g1_irq(int irq, void *dev_id) @@ -241,6 +254,13 @@ static const struct hantro_codec_ops imx8mq_vpu_g2_codec_ops[] = { .init = hantro_hevc_dec_init, .exit = hantro_hevc_dec_exit, }, + [HANTRO_MODE_VP9_DEC] = { + .run = hantro_g2_vp9_dec_run, + .done = hantro_g2_vp9_dec_done, + .reset = imx8m_vpu_g2_reset, + .init = hantro_vp9_dec_init, + .exit = hantro_vp9_dec_exit, + }, }; /* @@ -281,7 +301,7 @@ const struct hantro_variant imx8mq_vpu_g2_variant = { .dec_offset = 0x0, .dec_fmts = imx8m_vpu_g2_dec_fmts, .num_dec_fmts = ARRAY_SIZE(imx8m_vpu_g2_dec_fmts), - .codec = HANTRO_HEVC_DECODER, + .codec = HANTRO_HEVC_DECODER | HANTRO_VP9_DECODER, .codec_ops = imx8mq_vpu_g2_codec_ops, .init = imx8mq_vpu_hw_init, .runtime_resume = imx8mq_runtime_resume, From patchwork Thu Aug 5 14:42:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrzej Pietrasiewicz X-Patchwork-Id: 492494 Return-Path: X-Spam-Checker-Version: 
SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org
From: Andrzej Pietrasiewicz
To: linux-media@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-rockchip@lists.infradead.org, linux-staging@lists.linux.dev
Cc: Andrzej Pietrasiewicz, Benjamin Gaignard, Boris Brezillon, Ezequiel Garcia, Fabio Estevam, Greg Kroah-Hartman, Hans Verkuil, Heiko Stuebner, Jernej Skrabec, Mauro Carvalho Chehab, Nicolas Dufresne, NXP Linux Team, Pengutronix Kernel Team, Philipp Zabel, Sascha Hauer, Shawn Guo, kernel@collabora.com, Ezequiel Garcia
Subject: [PATCH v3 10/10] media: hantro: Support NV12 on the G2 core
Date: Thu, 5 Aug 2021 16:42:46 +0200
Message-Id: <20210805144246.11998-11-andrzej.p@collabora.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210805144246.11998-1-andrzej.p@collabora.com>
References: <20210805144246.11998-1-andrzej.p@collabora.com>
Precedence: bulk
List-ID: X-Mailing-List: linux-media@vger.kernel.org

From: Ezequiel Garcia

The G2 decoder block produces the NV12 4x4 tiled format (NV12_4L4).
Enable the G2 post-processor block in order to produce regular NV12.
The logic in hantro_postproc.c is leveraged to allocate the extra
buffers and configure the post-processor, which is significantly
simpler on the G2 than on the G1.
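For context, whether the post-processor is engaged is driven entirely by
the pixel format negotiated on the CAPTURE queue: NV12_4L4 bypasses it,
while NV12 enables it via hantro_needs_postproc(). A minimal userspace
sketch of requesting the post-processed output follows; the device node
path and frame size are assumptions, and error handling is trimmed:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Ask the decoder for raster-order NV12 on the CAPTURE queue, so the
 * driver routes output through the G2 post-processor instead of
 * returning the native NV12_4L4 tiles. /dev/video0 is hypothetical.
 */
static int request_nv12_capture(void)
{
	struct v4l2_format fmt;
	int fd = open("/dev/video0", O_RDWR);

	if (fd < 0)
		return -1;

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	fmt.fmt.pix_mp.width = 3840;
	fmt.fmt.pix_mp.height = 2160;
	fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12;

	return ioctl(fd, VIDIOC_S_FMT, &fmt);
}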
Signed-off-by: Ezequiel Garcia Signed-off-by: Andrzej Pietrasiewicz --- .../staging/media/hantro/hantro_g2_vp9_dec.c | 6 ++-- drivers/staging/media/hantro/hantro_hw.h | 1 + .../staging/media/hantro/hantro_postproc.c | 31 +++++++++++++++++++ drivers/staging/media/hantro/imx8m_vpu_hw.c | 11 +++++++ 4 files changed, 46 insertions(+), 3 deletions(-) diff --git a/drivers/staging/media/hantro/hantro_g2_vp9_dec.c b/drivers/staging/media/hantro/hantro_g2_vp9_dec.c index 45a7be4a43fa..23463f2c10f4 100644 --- a/drivers/staging/media/hantro/hantro_g2_vp9_dec.c +++ b/drivers/staging/media/hantro/hantro_g2_vp9_dec.c @@ -152,7 +152,7 @@ static void config_output(struct hantro_ctx *ctx, hantro_reg_write(ctx->dev, &g2_out_dis, 0); hantro_reg_write(ctx->dev, &g2_output_format, 0); - luma_addr = vb2_dma_contig_plane_dma_addr(&dst->base.vb.vb2_buf, 0); + luma_addr = hantro_get_dec_buf_addr(ctx, &dst->base.vb.vb2_buf); hantro_write_addr(ctx->dev, G2_ADDR_DST, luma_addr); chroma_addr = luma_addr + chroma_offset(ctx, dec_params); @@ -191,7 +191,7 @@ static void config_ref(struct hantro_ctx *ctx, hantro_reg_write(ctx->dev, &ref_reg->hor_scale, (refw << 14) / dst->vp9.width); hantro_reg_write(ctx->dev, &ref_reg->ver_scale, (refh << 14) / dst->vp9.height); - luma_addr = vb2_dma_contig_plane_dma_addr(&buf->base.vb.vb2_buf, 0); + luma_addr = hantro_get_dec_buf_addr(ctx, &buf->base.vb.vb2_buf); hantro_write_addr(ctx->dev, ref_reg->y_base, luma_addr); chroma_addr = luma_addr + chroma_offset(ctx, dec_params); @@ -236,7 +236,7 @@ static void config_ref_registers(struct hantro_ctx *ctx, config_ref(ctx, dst, &ref_regs[1], dec_params, dec_params->golden_frame_ts); config_ref(ctx, dst, &ref_regs[2], dec_params, dec_params->alt_frame_ts); - mv_addr = vb2_dma_contig_plane_dma_addr(&mv_ref->base.vb.vb2_buf, 0) + + mv_addr = hantro_get_dec_buf_addr(ctx, &mv_ref->base.vb.vb2_buf) + mv_offset(ctx, dec_params); hantro_write_addr(ctx->dev, G2_REG_DMV_REF(0), mv_addr); diff --git a/drivers/staging/media/hantro/hantro_hw.h b/drivers/staging/media/hantro/hantro_hw.h index 2961d399fd60..3d4a5dc1e6d5 100644 --- a/drivers/staging/media/hantro/hantro_hw.h +++ b/drivers/staging/media/hantro/hantro_hw.h @@ -274,6 +274,7 @@ extern const struct hantro_variant rk3399_vpu_variant; extern const struct hantro_variant sama5d4_vdec_variant; extern const struct hantro_postproc_ops hantro_g1_postproc_ops; +extern const struct hantro_postproc_ops hantro_g2_postproc_ops; extern const u32 hantro_vp8_dec_mc_filter[8][6]; diff --git a/drivers/staging/media/hantro/hantro_postproc.c b/drivers/staging/media/hantro/hantro_postproc.c index 4549aec08feb..bc94bf46d218 100644 --- a/drivers/staging/media/hantro/hantro_postproc.c +++ b/drivers/staging/media/hantro/hantro_postproc.c @@ -11,6 +11,7 @@ #include "hantro.h" #include "hantro_hw.h" #include "hantro_g1_regs.h" +#include "hantro_g2_regs.h" #define HANTRO_PP_REG_WRITE(vpu, reg_name, val) \ { \ @@ -99,6 +100,21 @@ static void hantro_postproc_g1_enable(struct hantro_ctx *ctx) HANTRO_PP_REG_WRITE(vpu, display_width, ctx->dst_fmt.width); } +static void hantro_postproc_g2_enable(struct hantro_ctx *ctx) +{ + struct hantro_dev *vpu = ctx->dev; + struct vb2_v4l2_buffer *dst_buf; + size_t chroma_offset = ctx->dst_fmt.width * ctx->dst_fmt.height; + dma_addr_t dst_dma; + + dst_buf = hantro_get_dst_buf(ctx); + dst_dma = vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0); + + hantro_write_addr(vpu, G2_RASTER_SCAN, dst_dma); + hantro_write_addr(vpu, G2_RASTER_SCAN_CHR, dst_dma + chroma_offset); + hantro_reg_write(vpu, 
&g2_out_rs_e, 1); +} + void hantro_postproc_free(struct hantro_ctx *ctx) { struct hantro_dev *vpu = ctx->dev; @@ -127,6 +143,9 @@ int hantro_postproc_alloc(struct hantro_ctx *ctx) if (ctx->vpu_src_fmt->fourcc == V4L2_PIX_FMT_H264_SLICE) buf_size += hantro_h264_mv_size(ctx->dst_fmt.width, ctx->dst_fmt.height); + else if (ctx->vpu_src_fmt->fourcc == V4L2_PIX_FMT_VP9_FRAME) + buf_size += hantro_vp9_mv_size(ctx->dst_fmt.width, + ctx->dst_fmt.height); for (i = 0; i < num_buffers; ++i) { struct hantro_aux_buf *priv = &ctx->postproc.dec_q[i]; @@ -152,6 +171,13 @@ static void hantro_postproc_g1_disable(struct hantro_ctx *ctx) HANTRO_PP_REG_WRITE_S(vpu, pipeline_en, 0x0); } +static void hantro_postproc_g2_disable(struct hantro_ctx *ctx) +{ + struct hantro_dev *vpu = ctx->dev; + + hantro_reg_write(vpu, &g2_out_rs_e, 0); +} + void hantro_postproc_disable(struct hantro_ctx *ctx) { struct hantro_dev *vpu = ctx->dev; @@ -172,3 +198,8 @@ const struct hantro_postproc_ops hantro_g1_postproc_ops = { .enable = hantro_postproc_g1_enable, .disable = hantro_postproc_g1_disable, }; + +const struct hantro_postproc_ops hantro_g2_postproc_ops = { + .enable = hantro_postproc_g2_enable, + .disable = hantro_postproc_g2_disable, +}; diff --git a/drivers/staging/media/hantro/imx8m_vpu_hw.c b/drivers/staging/media/hantro/imx8m_vpu_hw.c index 455a107ffb02..1a43f6fceef9 100644 --- a/drivers/staging/media/hantro/imx8m_vpu_hw.c +++ b/drivers/staging/media/hantro/imx8m_vpu_hw.c @@ -132,6 +132,14 @@ static const struct hantro_fmt imx8m_vpu_dec_fmts[] = { }, }; +static const struct hantro_fmt imx8m_vpu_g2_postproc_fmts[] = { + { + .fourcc = V4L2_PIX_FMT_NV12, + .codec_mode = HANTRO_MODE_NONE, + .postprocessed = true, + }, +}; + static const struct hantro_fmt imx8m_vpu_g2_dec_fmts[] = { { .fourcc = V4L2_PIX_FMT_NV12_4L4, @@ -301,6 +309,9 @@ const struct hantro_variant imx8mq_vpu_g2_variant = { .dec_offset = 0x0, .dec_fmts = imx8m_vpu_g2_dec_fmts, .num_dec_fmts = ARRAY_SIZE(imx8m_vpu_g2_dec_fmts), + .postproc_fmts = imx8m_vpu_g2_postproc_fmts, + .num_postproc_fmts = ARRAY_SIZE(imx8m_vpu_g2_postproc_fmts), + .postproc_ops = &hantro_g2_postproc_ops, .codec = HANTRO_HEVC_DECODER | HANTRO_VP9_DECODER, .codec_ops = imx8mq_vpu_g2_codec_ops, .init = imx8mq_vpu_hw_init,
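Taken together, the entry points in hantro_postproc.c remain the only
interface the codec code sees; they simply dispatch through the variant's
hantro_postproc_ops, so a core without a post-processor just leaves
postproc_ops unset. Conceptually, the wrappers look like the sketch below;
the NULL guards are an assumption about the guard style, not a verbatim
copy of the driver:

void hantro_postproc_enable(struct hantro_ctx *ctx)
{
	struct hantro_dev *vpu = ctx->dev;

	/* Variants without a post-processor leave postproc_ops NULL */
	if (vpu->variant->postproc_ops && vpu->variant->postproc_ops->enable)
		vpu->variant->postproc_ops->enable(ctx);
}

void hantro_postproc_disable(struct hantro_ctx *ctx)
{
	struct hantro_dev *vpu = ctx->dev;

	if (vpu->variant->postproc_ops && vpu->variant->postproc_ops->disable)
		vpu->variant->postproc_ops->disable(ctx);
}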