From patchwork Tue Jul 23 16:06:33 2019
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 169547
Delivered-To: patch@linaro.org
From: Shameer Kolothum
Subject: [PATCH v8 2/6] vfio/type1: Check reserved region conflict and update iova list
Date: Tue, 23 Jul 2019 17:06:33 +0100
Message-ID: <20190723160637.8384-3-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20190723160637.8384-1-shameerali.kolothum.thodi@huawei.com>
References: <20190723160637.8384-1-shameerali.kolothum.thodi@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

This retrieves the reserved regions associated with the device group and checks for conflicts with any existing dma mappings.
The iova list is also updated to exclude the reserved regions. Reserved regions of type IOMMU_RESV_DIRECT_RELAXABLE are excluded from the above checks, as they are directly mapped regions known to be relaxable.

Signed-off-by: Shameer Kolothum
---
v7-->v8
 -Added check for iommu_get_group_resv_regions() error ret.
---
 drivers/vfio/vfio_iommu_type1.c | 98 +++++++++++++++++++++++++++++++++
 1 file changed, 98 insertions(+)

-- 
2.17.1

Reviewed-by: Eric Auger

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 6a69652b406b..a3c9794ccf83 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1496,6 +1496,88 @@ static int vfio_iommu_aper_resize(struct list_head *iova,
 	return 0;
 }
 
+/*
+ * Check reserved region conflicts with existing dma mappings
+ */
+static bool vfio_iommu_resv_conflict(struct vfio_iommu *iommu,
+				     struct list_head *resv_regions)
+{
+	struct iommu_resv_region *region;
+
+	/* Check for conflict with existing dma mappings */
+	list_for_each_entry(region, resv_regions, list) {
+		if (region->type == IOMMU_RESV_DIRECT_RELAXABLE)
+			continue;
+
+		if (vfio_find_dma(iommu, region->start, region->length))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Check iova region overlap with reserved regions and
+ * exclude them from the iommu iova range
+ */
+static int vfio_iommu_resv_exclude(struct list_head *iova,
+				   struct list_head *resv_regions)
+{
+	struct iommu_resv_region *resv;
+	struct vfio_iova *n, *next;
+
+	list_for_each_entry(resv, resv_regions, list) {
+		phys_addr_t start, end;
+
+		if (resv->type == IOMMU_RESV_DIRECT_RELAXABLE)
+			continue;
+
+		start = resv->start;
+		end = resv->start + resv->length - 1;
+
+		list_for_each_entry_safe(n, next, iova, list) {
+			int ret = 0;
+
+			/* No overlap */
+			if (start > n->end || end < n->start)
+				continue;
+			/*
+			 * Insert a new node if the current node overlaps with
+			 * the reserved region, to exclude that region from the
+			 * valid iova range. Note that the new node is inserted
+			 * before the current node and finally the current node
+			 * is deleted, keeping the list updated and sorted.
+			 */
+			if (start > n->start)
+				ret = vfio_iommu_iova_insert(&n->list, n->start,
+							     start - 1);
+			if (!ret && end < n->end)
+				ret = vfio_iommu_iova_insert(&n->list, end + 1,
+							     n->end);
+			if (ret)
+				return ret;
+
+			list_del(&n->list);
+			kfree(n);
+		}
+	}
+
+	if (list_empty(iova))
+		return -EINVAL;
+
+	return 0;
+}
+
+static void vfio_iommu_resv_free(struct list_head *resv_regions)
+{
+	struct iommu_resv_region *n, *next;
+
+	list_for_each_entry_safe(n, next, resv_regions, list) {
+		list_del(&n->list);
+		kfree(n);
+	}
+}
+
 static void vfio_iommu_iova_free(struct list_head *iova)
 {
 	struct vfio_iova *n, *next;
@@ -1547,6 +1629,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	phys_addr_t resv_msi_base;
 	struct iommu_domain_geometry geo;
 	LIST_HEAD(iova_copy);
+	LIST_HEAD(group_resv_regions);
 
 	mutex_lock(&iommu->lock);
 
@@ -1632,6 +1715,15 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 		goto out_detach;
 	}
 
+	ret = iommu_get_group_resv_regions(iommu_group, &group_resv_regions);
+	if (ret)
+		goto out_detach;
+
+	if (vfio_iommu_resv_conflict(iommu, &group_resv_regions)) {
+		ret = -EINVAL;
+		goto out_detach;
+	}
+
 	/*
 	 * We don't want to work on the original iova list as the list
 	 * gets modified and in case of failure we have to retain the
@@ -1646,6 +1738,10 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_detach;
 
+	ret = vfio_iommu_resv_exclude(&iova_copy, &group_resv_regions);
+	if (ret)
+		goto out_detach;
+
 	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
 
 	INIT_LIST_HEAD(&domain->group_list);
@@ -1706,6 +1802,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
 	mutex_unlock(&iommu->lock);
+	vfio_iommu_resv_free(&group_resv_regions);
 
 	return 0;
 
@@ -1714,6 +1811,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 out_domain:
 	iommu_domain_free(domain->domain);
 	vfio_iommu_iova_free(&iova_copy);
+	vfio_iommu_resv_free(&group_resv_regions);
 out_free:
 	kfree(domain);
 	kfree(group);
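As an aside for reviewers, the split-and-reinsert step performed by vfio_iommu_resv_exclude() above can be modelled as standalone userspace C. This is an illustrative sketch only: `struct range` and `exclude_range()` are hypothetical names, and a sorted array stands in for the kernel's linked list of vfio_iova nodes.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical userspace model of a vfio_iova range (inclusive bounds). */
struct range {
	uint64_t start, end;
};

/*
 * Copy the sorted ranges in[0..n) to out, carving out the reserved
 * region [r_start, r_end]: a range fully inside the region is dropped,
 * and a range straddling it is split in two. Returns the number of
 * ranges written to out (out must hold at least n + 1 entries).
 */
static size_t exclude_range(const struct range *in, size_t n,
			    struct range *out,
			    uint64_t r_start, uint64_t r_end)
{
	size_t m = 0;

	for (size_t i = 0; i < n; i++) {
		/* No overlap: keep the range as-is */
		if (r_start > in[i].end || r_end < in[i].start) {
			out[m++] = in[i];
			continue;
		}
		/* Keep the part below the reserved region, if any */
		if (r_start > in[i].start)
			out[m++] = (struct range){ in[i].start, r_start - 1 };
		/* Keep the part above the reserved region, if any */
		if (r_end < in[i].end)
			out[m++] = (struct range){ r_end + 1, in[i].end };
	}
	return m;
}
```

For example, excluding [0x100, 0x1ff] from [0x0, 0xffff] yields [0x0, 0xff] and [0x200, 0xffff] — the same result the patch achieves in-place by inserting the new sub-ranges before the overlapped node and then deleting it.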