From patchwork Mon Jul 5 01:21:10 2021
X-Patchwork-Submitter: Dmitry Baryshkov
X-Patchwork-Id: 470187
From: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
To: Bjorn Andersson, Rob Clark, Sean Paul, Abhinav Kumar
Cc: Jonathan Marek, Stephen Boyd, David Airlie, Daniel Vetter,
	linux-arm-msm@vger.kernel.org, dri-devel@lists.freedesktop.org,
	freedreno@lists.freedesktop.org
Subject: [PATCH v2 17/22] drm/msm/dpu: add support for SSPP allocation to RM
Date: Mon, 5 Jul 2021 04:21:10 +0300
Message-Id: <20210705012115.4179824-18-dmitry.baryshkov@linaro.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210705012115.4179824-1-dmitry.baryshkov@linaro.org>
References: <20210705012115.4179824-1-dmitry.baryshkov@linaro.org>
Add support for handling and allocating SSPP blocks through the resource
manager. The plane handling code is not yet converted to use it.

Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h | 10 +++
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h     |  1 +
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c   | 18 ++---
 drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c      | 81 +++++++++++++++++++++
 drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h      |  6 ++
 5 files changed, 104 insertions(+), 12 deletions(-)

-- 
2.30.2

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h
index 00098e33391e..c5ac8defa073 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h
@@ -387,6 +387,16 @@ struct dpu_hw_pipe {
 	struct dpu_hw_sspp_ops ops;
 };
 
+/**
+ * to_dpu_hw_pipe - convert base object dpu_hw_base to container
+ * @hw: Pointer to base hardware block
+ * return: Pointer to hardware block container
+ */
+static inline struct dpu_hw_pipe *to_dpu_hw_pipe(struct dpu_hw_blk *hw)
+{
+	return container_of(hw, struct dpu_hw_pipe, base);
+}
+
 /**
  * dpu_hw_sspp_init - initializes the sspp hw driver object.
  * Should be called once before accessing every pipe.
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
index ab65c817eb42..04a2ab548f54 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
@@ -159,6 +159,7 @@ struct dpu_global_state {
 	uint32_t ctl_to_enc_id[CTL_MAX - CTL_0];
 	uint32_t intf_to_enc_id[INTF_MAX - INTF_0];
 	uint32_t dspp_to_enc_id[DSPP_MAX - DSPP_0];
+	uint32_t pipe_to_plane_id[SSPP_MAX - SSPP_NONE];
 };
 
 struct dpu_global_state
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index 3850f2714bf3..61008e8afb0a 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -1234,8 +1234,6 @@ static void dpu_plane_destroy(struct drm_plane *plane)
 		/* this will destroy the states as well */
 		drm_plane_cleanup(plane);
 
-		dpu_hw_sspp_destroy(pdpu->pipe_hw);
-
 		kfree(pdpu);
 	}
 }
@@ -1389,14 +1387,13 @@ struct drm_plane *dpu_plane_init(struct drm_device *dev,
 	pdpu->pipe = pipe;
 
 	/* initialize underlying h/w driver */
-	pdpu->pipe_hw = dpu_hw_sspp_init(pipe, kms->mmio, kms->catalog);
-	if (IS_ERR(pdpu->pipe_hw)) {
-		DPU_ERROR("[%u]SSPP init failed\n", pipe);
-		ret = PTR_ERR(pdpu->pipe_hw);
+	if (!kms->rm.sspp_blks[pipe - SSPP_NONE])
 		goto clean_plane;
-	} else if (!pdpu->pipe_hw->cap || !pdpu->pipe_hw->cap->sblk) {
+	pdpu->pipe_hw = to_dpu_hw_pipe(kms->rm.sspp_blks[pipe - SSPP_NONE]);
+
+	if (!pdpu->pipe_hw->cap || !pdpu->pipe_hw->cap->sblk) {
 		DPU_ERROR("[%u]SSPP init returned invalid cfg\n", pipe);
-		goto clean_sspp;
+		goto clean_plane;
 	}
 
 	format_list = pdpu->pipe_hw->cap->sblk->format_list;
@@ -1406,7 +1403,7 @@ struct drm_plane *dpu_plane_init(struct drm_device *dev,
 				format_list, num_formats,
 				supported_format_modifiers, type, NULL);
 	if (ret)
-		goto clean_sspp;
+		goto clean_plane;
 
 	pdpu->catalog = kms->catalog;
 
@@ -1432,9 +1429,6 @@ struct drm_plane *dpu_plane_init(struct drm_device *dev,
 					pipe, plane->base.id);
 	return plane;
 
-clean_sspp:
-	if (pdpu && pdpu->pipe_hw)
-		dpu_hw_sspp_destroy(pdpu->pipe_hw);
 clean_plane:
 	kfree(pdpu);
 	return ERR_PTR(ret);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
index f9c83d6e427a..21c9e513f1f6 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
@@ -8,6 +8,7 @@
 #include "dpu_hw_lm.h"
 #include "dpu_hw_ctl.h"
 #include "dpu_hw_pingpong.h"
+#include "dpu_hw_sspp.h"
 #include "dpu_hw_intf.h"
 #include "dpu_hw_dspp.h"
 #include "dpu_hw_merge3d.h"
@@ -35,6 +36,14 @@ int dpu_rm_destroy(struct dpu_rm *rm)
 {
 	int i;
 
+	for (i = 0; i < ARRAY_SIZE(rm->sspp_blks); i++) {
+		struct dpu_hw_pipe *hw;
+
+		if (rm->sspp_blks[i]) {
+			hw = to_dpu_hw_pipe(rm->sspp_blks[i]);
+			dpu_hw_sspp_destroy(hw);
+		}
+	}
 	for (i = 0; i < ARRAY_SIZE(rm->pingpong_blks); i++) {
 		struct dpu_hw_pingpong *hw;
 
@@ -166,6 +175,24 @@ int dpu_rm_init(struct dpu_rm *rm,
 		rm->pingpong_blks[pp->id - PINGPONG_0] = &hw->base;
 	}
 
+	for (i = 0; i < cat->sspp_count; i++) {
+		struct dpu_hw_pipe *hw;
+		const struct dpu_sspp_cfg *sspp = &cat->sspp[i];
+
+		if (sspp->id <= SSPP_NONE || sspp->id >= SSPP_MAX) {
+			DPU_ERROR("skip sspp %d with invalid id\n", sspp->id);
+			continue;
+		}
+		hw = dpu_hw_sspp_init(sspp->id, mmio, cat);
+		if (IS_ERR_OR_NULL(hw)) {
+			rc = PTR_ERR(hw);
+			DPU_ERROR("failed sspp object creation: err %d\n",
+				rc);
+			goto fail;
+		}
+		rm->sspp_blks[sspp->id - SSPP_NONE] = &hw->base;
+	}
+
 	for (i = 0; i < cat->intf_count; i++) {
 		struct dpu_hw_intf *hw;
 		const struct dpu_intf_cfg *intf = &cat->intf[i];
@@ -660,3 +687,57 @@ int dpu_rm_get_assigned_resources(struct dpu_rm *rm,
 
 	return num_blks;
 }
+
+enum dpu_sspp dpu_rm_get_sspp(struct dpu_rm *rm, struct dpu_global_state *global_state, uint32_t plane_id, bool yuv, bool scale)
+{
+	int i;
+	enum dpu_sspp pipe = SSPP_NONE;
+	struct dpu_hw_pipe *pipe_hw;
+	bool retry = false;
+
+retry_loop:
+	for (i = 0; i < ARRAY_SIZE(rm->sspp_blks) && pipe == SSPP_NONE; i++) {
+		if (!rm->sspp_blks[i])
+			continue;
+		if (reserved_by_other(global_state->pipe_to_plane_id, i, plane_id))
+			continue;
+
+		pipe_hw = to_dpu_hw_pipe(rm->sspp_blks[i]);
+
+		/* skip incompatible planes */
+		if (scale && !(pipe_hw->cap->features & DPU_SSPP_SCALER))
+			continue;
+
+		if (yuv && (!(pipe_hw->cap->features & DPU_SSPP_SCALER) ||
+			    !(pipe_hw->cap->features & DPU_SSPP_CSC_ANY)))
+			continue;
+
+		/* For non-yuv, non-scaled planes try to find simple (DMA)
+		 * plane, fallback to VIG on a second try.
+		 *
+		 * This way we'd leave VIG pipes to be later used for YUV formats.
+		 */
+
+		if (!scale && !yuv && !retry &&
+		    (pipe_hw->cap->features & (DPU_SSPP_SCALER | DPU_SSPP_CSC_ANY)))
+			continue;
+
+		pipe = i + SSPP_NONE;
+	};
+
+	if (!scale && !yuv && !retry && pipe == SSPP_NONE) {
+		retry = true;
+		goto retry_loop;
+	}
+
+	if (pipe != SSPP_NONE)
+		global_state->pipe_to_plane_id[pipe - SSPP_NONE] = plane_id;
+
+	return pipe;
+}
+
+void dpu_rm_release_sspp(struct dpu_rm *rm, struct dpu_global_state *global_state, uint32_t plane_id)
+{
+	_dpu_rm_clear_mapping(global_state->pipe_to_plane_id,
+		ARRAY_SIZE(global_state->pipe_to_plane_id), plane_id);
+}
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h
index 1f12c8d5b8aa..b759fe39f6d6 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h
@@ -16,6 +16,7 @@ struct dpu_global_state;
 /**
  * struct dpu_rm - DPU dynamic hardware resource manager
  * @pingpong_blks: array of pingpong hardware resources
+ * @sspp_blks: array of sspp hardware resources
  * @mixer_blks: array of layer mixer hardware resources
  * @ctl_blks: array of ctl hardware resources
  * @intf_blks: array of intf hardware resources
@@ -25,6 +26,7 @@ struct dpu_global_state;
  */
 struct dpu_rm {
 	struct dpu_hw_blk *pingpong_blks[PINGPONG_MAX - PINGPONG_0];
+	struct dpu_hw_blk *sspp_blks[SSPP_MAX - SSPP_NONE];
 	struct dpu_hw_blk *mixer_blks[LM_MAX - LM_0];
 	struct dpu_hw_blk *ctl_blks[CTL_MAX - CTL_0];
 	struct dpu_hw_blk *intf_blks[INTF_MAX - INTF_0];
@@ -88,5 +90,9 @@ void dpu_rm_release(struct dpu_global_state *global_state,
 int dpu_rm_get_assigned_resources(struct dpu_rm *rm,
	struct dpu_global_state *global_state, uint32_t enc_id,
	enum dpu_hw_blk_type type, struct dpu_hw_blk **blks, int blks_size);
+
+enum dpu_sspp dpu_rm_get_sspp(struct dpu_rm *rm, struct dpu_global_state *global_state, uint32_t plane_id, bool yuv, bool scale);
+void dpu_rm_release_sspp(struct dpu_rm *rm, struct dpu_global_state *global_state, uint32_t plane_id);
+
 #endif /* __DPU_RM_H__ */
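
Note for reviewers (illustrative only, not part of this patch): a minimal sketch
of how a plane's reservation path might consume the new RM helpers once the
handling code is converted. The function names dpu_plane_try_reserve_sspp() /
dpu_plane_unreserve_sspp() and the way yuv/scale are derived are assumptions
for illustration, not the driver's actual conversion.

#include "dpu_rm.h"
#include "dpu_kms.h"

/* Hypothetical caller: reserve a pipe for a plane during atomic check. */
static int dpu_plane_try_reserve_sspp(struct dpu_kms *kms,
				      struct dpu_global_state *global_state,
				      uint32_t plane_id,
				      bool is_yuv, bool needs_scaling)
{
	enum dpu_sspp pipe;

	/*
	 * Ask the RM for a free pipe matching the plane's needs. Plain RGB,
	 * unscaled planes get DMA pipes first; VIG pipes are kept back for
	 * YUV/scaled planes and only handed out on the retry pass.
	 */
	pipe = dpu_rm_get_sspp(&kms->rm, global_state, plane_id,
			       is_yuv, needs_scaling);
	if (pipe == SSPP_NONE)
		return -ENOSPC;	/* no compatible pipe left */

	/* ... stash the pipe in the plane state and program it later ... */
	return 0;
}

/* Hypothetical caller: drop every pipe recorded for this plane. */
static void dpu_plane_unreserve_sspp(struct dpu_kms *kms,
				     struct dpu_global_state *global_state,
				     uint32_t plane_id)
{
	dpu_rm_release_sspp(&kms->rm, global_state, plane_id);
}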
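
Also illustrative only: a condensed view of the RM lifecycle into which the new
sspp_blks[] array slots. The surrounding dpu_kms init/teardown flow is
simplified here; example_rm_lifecycle() is not a real function in the driver.

#include "dpu_rm.h"
#include "dpu_kms.h"

static int example_rm_lifecycle(struct dpu_kms *dpu_kms)
{
	int rc;

	/*
	 * dpu_rm_init() now also creates one dpu_hw_pipe per SSPP listed in
	 * the catalog and parks it in rm->sspp_blks[], so planes no longer
	 * call dpu_hw_sspp_init() themselves.
	 */
	rc = dpu_rm_init(&dpu_kms->rm, dpu_kms->catalog, dpu_kms->mmio);
	if (rc)
		return rc;

	/* ... modeset, dpu_rm_get_sspp() / dpu_rm_release_sspp() calls ... */

	/* dpu_rm_destroy() now tears those pipes down via dpu_hw_sspp_destroy(). */
	dpu_rm_destroy(&dpu_kms->rm);
	return 0;
}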