Message ID: 20200619103636.11974-34-m.szyprowski@samsung.com
State: New
Series: DRM: fix struct sg_table nents vs. orig_nents misuse
Hi Marek,
I love your patch! Perhaps something to improve:
[auto build test WARNING on next-20200618]
[also build test WARNING on v5.8-rc1]
[cannot apply to linuxtv-media/master staging/staging-testing drm-exynos/exynos-drm-next drm-intel/for-linux-next linus/master v5.8-rc1 v5.7 v5.7-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
When submitting a patch, we suggest formatting it as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Marek-Szyprowski/DRM-fix-struct-sg_table-nents-vs-orig_nents-misuse/20200619-184302
base: ce2cc8efd7a40cbd17841add878cb691d0ce0bba
config: x86_64-randconfig-r005-20200621 (attached as .config)
compiler: clang version 11.0.0 (https://github.com/llvm/llvm-project ef455a55bcf2cfea04a99c361b182ad18b7f03f1)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install x86_64 cross compiling tool for clang build
# apt-get install binutils-x86-64-linux-gnu
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>, old ones prefixed by <<):
>> drivers/rapidio/devices/rio_mport_cdev.c:939:40: warning: variable 'nents' is uninitialized when used here [-Wuninitialized]
ret = do_dma_request(req, xfer, sync, nents);
^~~~~
drivers/rapidio/devices/rio_mport_cdev.c:816:11: note: initialize the variable 'nents' to silence this warning
int nents;
^
= 0
1 warning generated.
vim +/nents +939 drivers/rapidio/devices/rio_mport_cdev.c
e8de370188d098 Alexandre Bounine 2016-03-22 793
e8de370188d098 Alexandre Bounine 2016-03-22 794 /*
e8de370188d098 Alexandre Bounine 2016-03-22 795 * rio_dma_transfer() - Perform RapidIO DMA data transfer to/from
e8de370188d098 Alexandre Bounine 2016-03-22 796 * the remote RapidIO device
e8de370188d098 Alexandre Bounine 2016-03-22 797 * @filp: file pointer associated with the call
e8de370188d098 Alexandre Bounine 2016-03-22 798 * @transfer_mode: DMA transfer mode
e8de370188d098 Alexandre Bounine 2016-03-22 799 * @sync: synchronization mode
e8de370188d098 Alexandre Bounine 2016-03-22 800 * @dir: DMA transfer direction (DMA_MEM_TO_DEV = write OR
e8de370188d098 Alexandre Bounine 2016-03-22 801 * DMA_DEV_TO_MEM = read)
e8de370188d098 Alexandre Bounine 2016-03-22 802 * @xfer: data transfer descriptor structure
e8de370188d098 Alexandre Bounine 2016-03-22 803 */
e8de370188d098 Alexandre Bounine 2016-03-22 804 static int
4e1016dac1ccce Alexandre Bounine 2016-05-05 805 rio_dma_transfer(struct file *filp, u32 transfer_mode,
e8de370188d098 Alexandre Bounine 2016-03-22 806 enum rio_transfer_sync sync, enum dma_data_direction dir,
e8de370188d098 Alexandre Bounine 2016-03-22 807 struct rio_transfer_io *xfer)
e8de370188d098 Alexandre Bounine 2016-03-22 808 {
e8de370188d098 Alexandre Bounine 2016-03-22 809 struct mport_cdev_priv *priv = filp->private_data;
e8de370188d098 Alexandre Bounine 2016-03-22 810 unsigned long nr_pages = 0;
e8de370188d098 Alexandre Bounine 2016-03-22 811 struct page **page_list = NULL;
e8de370188d098 Alexandre Bounine 2016-03-22 812 struct mport_dma_req *req;
e8de370188d098 Alexandre Bounine 2016-03-22 813 struct mport_dev *md = priv->md;
e8de370188d098 Alexandre Bounine 2016-03-22 814 struct dma_chan *chan;
67446283d89467 John Hubbard 2020-06-04 815 int ret;
e8de370188d098 Alexandre Bounine 2016-03-22 816 int nents;
e8de370188d098 Alexandre Bounine 2016-03-22 817
e8de370188d098 Alexandre Bounine 2016-03-22 818 if (xfer->length == 0)
e8de370188d098 Alexandre Bounine 2016-03-22 819 return -EINVAL;
e8de370188d098 Alexandre Bounine 2016-03-22 820 req = kzalloc(sizeof(*req), GFP_KERNEL);
e8de370188d098 Alexandre Bounine 2016-03-22 821 if (!req)
e8de370188d098 Alexandre Bounine 2016-03-22 822 return -ENOMEM;
e8de370188d098 Alexandre Bounine 2016-03-22 823
e8de370188d098 Alexandre Bounine 2016-03-22 824 ret = get_dma_channel(priv);
e8de370188d098 Alexandre Bounine 2016-03-22 825 if (ret) {
e8de370188d098 Alexandre Bounine 2016-03-22 826 kfree(req);
e8de370188d098 Alexandre Bounine 2016-03-22 827 return ret;
e8de370188d098 Alexandre Bounine 2016-03-22 828 }
c5157b76869ba9 Ioan Nicu 2018-04-20 829 chan = priv->dmach;
c5157b76869ba9 Ioan Nicu 2018-04-20 830
c5157b76869ba9 Ioan Nicu 2018-04-20 831 kref_init(&req->refcount);
c5157b76869ba9 Ioan Nicu 2018-04-20 832 init_completion(&req->req_comp);
c5157b76869ba9 Ioan Nicu 2018-04-20 833 req->dir = dir;
c5157b76869ba9 Ioan Nicu 2018-04-20 834 req->filp = filp;
c5157b76869ba9 Ioan Nicu 2018-04-20 835 req->priv = priv;
c5157b76869ba9 Ioan Nicu 2018-04-20 836 req->dmach = chan;
c5157b76869ba9 Ioan Nicu 2018-04-20 837 req->sync = sync;
e8de370188d098 Alexandre Bounine 2016-03-22 838
e8de370188d098 Alexandre Bounine 2016-03-22 839 /*
e8de370188d098 Alexandre Bounine 2016-03-22 840 * If parameter loc_addr != NULL, we are transferring data from/to
e8de370188d098 Alexandre Bounine 2016-03-22 841 * data buffer allocated in user-space: lock in memory user-space
e8de370188d098 Alexandre Bounine 2016-03-22 842 * buffer pages and build an SG table for DMA transfer request
e8de370188d098 Alexandre Bounine 2016-03-22 843 *
e8de370188d098 Alexandre Bounine 2016-03-22 844 * Otherwise (loc_addr == NULL) contiguous kernel-space buffer is
e8de370188d098 Alexandre Bounine 2016-03-22 845 * used for DMA data transfers: build single entry SG table using
e8de370188d098 Alexandre Bounine 2016-03-22 846 * offset within the internal buffer specified by handle parameter.
e8de370188d098 Alexandre Bounine 2016-03-22 847 */
e8de370188d098 Alexandre Bounine 2016-03-22 848 if (xfer->loc_addr) {
c4860ad6056483 Tvrtko Ursulin 2017-07-31 849 unsigned int offset;
e8de370188d098 Alexandre Bounine 2016-03-22 850 long pinned;
e8de370188d098 Alexandre Bounine 2016-03-22 851
c4860ad6056483 Tvrtko Ursulin 2017-07-31 852 offset = lower_32_bits(offset_in_page(xfer->loc_addr));
e8de370188d098 Alexandre Bounine 2016-03-22 853 nr_pages = PAGE_ALIGN(xfer->length + offset) >> PAGE_SHIFT;
e8de370188d098 Alexandre Bounine 2016-03-22 854
e8de370188d098 Alexandre Bounine 2016-03-22 855 page_list = kmalloc_array(nr_pages,
e8de370188d098 Alexandre Bounine 2016-03-22 856 sizeof(*page_list), GFP_KERNEL);
e8de370188d098 Alexandre Bounine 2016-03-22 857 if (page_list == NULL) {
e8de370188d098 Alexandre Bounine 2016-03-22 858 ret = -ENOMEM;
e8de370188d098 Alexandre Bounine 2016-03-22 859 goto err_req;
e8de370188d098 Alexandre Bounine 2016-03-22 860 }
e8de370188d098 Alexandre Bounine 2016-03-22 861
67446283d89467 John Hubbard 2020-06-04 862 pinned = pin_user_pages_fast(
e8de370188d098 Alexandre Bounine 2016-03-22 863 (unsigned long)xfer->loc_addr & PAGE_MASK,
73b0140bf0fe9d Ira Weiny 2019-05-13 864 nr_pages,
73b0140bf0fe9d Ira Weiny 2019-05-13 865 dir == DMA_FROM_DEVICE ? FOLL_WRITE : 0,
73b0140bf0fe9d Ira Weiny 2019-05-13 866 page_list);
e8de370188d098 Alexandre Bounine 2016-03-22 867
e8de370188d098 Alexandre Bounine 2016-03-22 868 if (pinned != nr_pages) {
e8de370188d098 Alexandre Bounine 2016-03-22 869 if (pinned < 0) {
67446283d89467 John Hubbard 2020-06-04 870 rmcd_error("pin_user_pages_fast err=%ld",
369f2679f7e739 Lorenzo Stoakes 2017-02-27 871 pinned);
e8de370188d098 Alexandre Bounine 2016-03-22 872 nr_pages = 0;
e8de370188d098 Alexandre Bounine 2016-03-22 873 } else
e8de370188d098 Alexandre Bounine 2016-03-22 874 rmcd_error("pinned %ld out of %ld pages",
e8de370188d098 Alexandre Bounine 2016-03-22 875 pinned, nr_pages);
e8de370188d098 Alexandre Bounine 2016-03-22 876 ret = -EFAULT;
ffca476a0a8d26 John Hubbard 2020-05-22 877 /*
ffca476a0a8d26 John Hubbard 2020-05-22 878 * Set nr_pages up to mean "how many pages to unpin, in
ffca476a0a8d26 John Hubbard 2020-05-22 879 * the error handler:
ffca476a0a8d26 John Hubbard 2020-05-22 880 */
ffca476a0a8d26 John Hubbard 2020-05-22 881 nr_pages = pinned;
e8de370188d098 Alexandre Bounine 2016-03-22 882 goto err_pg;
e8de370188d098 Alexandre Bounine 2016-03-22 883 }
e8de370188d098 Alexandre Bounine 2016-03-22 884
e8de370188d098 Alexandre Bounine 2016-03-22 885 ret = sg_alloc_table_from_pages(&req->sgt, page_list, nr_pages,
e8de370188d098 Alexandre Bounine 2016-03-22 886 offset, xfer->length, GFP_KERNEL);
e8de370188d098 Alexandre Bounine 2016-03-22 887 if (ret) {
e8de370188d098 Alexandre Bounine 2016-03-22 888 rmcd_error("sg_alloc_table failed with err=%d", ret);
e8de370188d098 Alexandre Bounine 2016-03-22 889 goto err_pg;
e8de370188d098 Alexandre Bounine 2016-03-22 890 }
e8de370188d098 Alexandre Bounine 2016-03-22 891
e8de370188d098 Alexandre Bounine 2016-03-22 892 req->page_list = page_list;
e8de370188d098 Alexandre Bounine 2016-03-22 893 req->nr_pages = nr_pages;
e8de370188d098 Alexandre Bounine 2016-03-22 894 } else {
e8de370188d098 Alexandre Bounine 2016-03-22 895 dma_addr_t baddr;
e8de370188d098 Alexandre Bounine 2016-03-22 896 struct rio_mport_mapping *map;
e8de370188d098 Alexandre Bounine 2016-03-22 897
e8de370188d098 Alexandre Bounine 2016-03-22 898 baddr = (dma_addr_t)xfer->handle;
e8de370188d098 Alexandre Bounine 2016-03-22 899
e8de370188d098 Alexandre Bounine 2016-03-22 900 mutex_lock(&md->buf_mutex);
e8de370188d098 Alexandre Bounine 2016-03-22 901 list_for_each_entry(map, &md->mappings, node) {
e8de370188d098 Alexandre Bounine 2016-03-22 902 if (baddr >= map->phys_addr &&
e8de370188d098 Alexandre Bounine 2016-03-22 903 baddr < (map->phys_addr + map->size)) {
e8de370188d098 Alexandre Bounine 2016-03-22 904 kref_get(&map->ref);
e8de370188d098 Alexandre Bounine 2016-03-22 905 req->map = map;
e8de370188d098 Alexandre Bounine 2016-03-22 906 break;
e8de370188d098 Alexandre Bounine 2016-03-22 907 }
e8de370188d098 Alexandre Bounine 2016-03-22 908 }
e8de370188d098 Alexandre Bounine 2016-03-22 909 mutex_unlock(&md->buf_mutex);
e8de370188d098 Alexandre Bounine 2016-03-22 910
e8de370188d098 Alexandre Bounine 2016-03-22 911 if (req->map == NULL) {
e8de370188d098 Alexandre Bounine 2016-03-22 912 ret = -ENOMEM;
e8de370188d098 Alexandre Bounine 2016-03-22 913 goto err_req;
e8de370188d098 Alexandre Bounine 2016-03-22 914 }
e8de370188d098 Alexandre Bounine 2016-03-22 915
e8de370188d098 Alexandre Bounine 2016-03-22 916 if (xfer->length + xfer->offset > map->size) {
e8de370188d098 Alexandre Bounine 2016-03-22 917 ret = -EINVAL;
e8de370188d098 Alexandre Bounine 2016-03-22 918 goto err_req;
e8de370188d098 Alexandre Bounine 2016-03-22 919 }
e8de370188d098 Alexandre Bounine 2016-03-22 920
e8de370188d098 Alexandre Bounine 2016-03-22 921 ret = sg_alloc_table(&req->sgt, 1, GFP_KERNEL);
e8de370188d098 Alexandre Bounine 2016-03-22 922 if (unlikely(ret)) {
e8de370188d098 Alexandre Bounine 2016-03-22 923 rmcd_error("sg_alloc_table failed for internal buf");
e8de370188d098 Alexandre Bounine 2016-03-22 924 goto err_req;
e8de370188d098 Alexandre Bounine 2016-03-22 925 }
e8de370188d098 Alexandre Bounine 2016-03-22 926
e8de370188d098 Alexandre Bounine 2016-03-22 927 sg_set_buf(req->sgt.sgl,
e8de370188d098 Alexandre Bounine 2016-03-22 928 map->virt_addr + (baddr - map->phys_addr) +
e8de370188d098 Alexandre Bounine 2016-03-22 929 xfer->offset, xfer->length);
e8de370188d098 Alexandre Bounine 2016-03-22 930 }
e8de370188d098 Alexandre Bounine 2016-03-22 931
c99597eab54307 Marek Szyprowski 2020-06-19 932 ret = dma_map_sgtable(chan->device->dev, &req->sgt, dir, 0);
c99597eab54307 Marek Szyprowski 2020-06-19 933 if (ret) {
e8de370188d098 Alexandre Bounine 2016-03-22 934 rmcd_error("Failed to map SG list");
b1402dcb5643b7 Christophe JAILLET 2017-11-17 935 ret = -EFAULT;
b1402dcb5643b7 Christophe JAILLET 2017-11-17 936 goto err_pg;
e8de370188d098 Alexandre Bounine 2016-03-22 937 }
e8de370188d098 Alexandre Bounine 2016-03-22 938
e8de370188d098 Alexandre Bounine 2016-03-22 @939 ret = do_dma_request(req, xfer, sync, nents);
e8de370188d098 Alexandre Bounine 2016-03-22 940
e8de370188d098 Alexandre Bounine 2016-03-22 941 if (ret >= 0) {
bbd876adb8c729 Ioan Nicu 2018-04-10 942 if (sync == RIO_TRANSFER_ASYNC)
e8de370188d098 Alexandre Bounine 2016-03-22 943 return ret; /* return ASYNC cookie */
bbd876adb8c729 Ioan Nicu 2018-04-10 944 } else {
bbd876adb8c729 Ioan Nicu 2018-04-10 945 rmcd_debug(DMA, "do_dma_request failed with err=%d", ret);
e8de370188d098 Alexandre Bounine 2016-03-22 946 }
e8de370188d098 Alexandre Bounine 2016-03-22 947
e8de370188d098 Alexandre Bounine 2016-03-22 948 err_pg:
bbd876adb8c729 Ioan Nicu 2018-04-10 949 if (!req->page_list) {
67446283d89467 John Hubbard 2020-06-04 950 unpin_user_pages(page_list, nr_pages);
e8de370188d098 Alexandre Bounine 2016-03-22 951 kfree(page_list);
e8de370188d098 Alexandre Bounine 2016-03-22 952 }
e8de370188d098 Alexandre Bounine 2016-03-22 953 err_req:
bbd876adb8c729 Ioan Nicu 2018-04-10 954 kref_put(&req->refcount, dma_req_free);
e8de370188d098 Alexandre Bounine 2016-03-22 955 return ret;
e8de370188d098 Alexandre Bounine 2016-03-22 956 }
e8de370188d098 Alexandre Bounine 2016-03-22 957
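For context, the warning flagged at line 939 is a direct consequence of this patch: the old code assigned `nents` from dma_map_sg(), but the conversion to dma_map_sgtable() removes that assignment while the do_dma_request() call still reads the now-uninitialized local. One plausible follow-up (a sketch only, not necessarily the maintainer's chosen resolution) is to drop the `int nents;` declaration at line 816 and pass the mapped-entry count that dma_map_sgtable() stores in the table:

```diff
-	ret = do_dma_request(req, xfer, sync, nents);
+	ret = do_dma_request(req, xfer, sync, req->sgt.nents);
```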
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function
returns the number of created entries in the DMA address space. However, the
subsequent calls to dma_sync_sg_for_{device,cpu}() and dma_unmap_sg() must be
made with the original number of entries passed to dma_map_sg().

struct sg_table is a common structure used for describing a non-contiguous
memory buffer, used commonly in the DRM and graphics subsystems. It consists
of a scatterlist with memory pages and DMA addresses (sgl entry), as well as
the number of scatterlist entries: CPU pages (orig_nents entry) and DMA
mapped pages (nents entry).

It turned out that it was a common mistake to misuse the nents and orig_nents
entries, calling DMA-mapping functions with the wrong number of entries or
ignoring the number of mapped entries returned by dma_map_sg().

To avoid such issues, let's use the common dma-mapping wrappers that operate
directly on struct sg_table objects, and use scatterlist page iterators where
possible. This almost always hides references to the nents and orig_nents
entries, making the code robust, easier to follow, and copy/paste safe.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
 drivers/rapidio/devices/rio_mport_cdev.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
index 451608e960a1..98c572627c8c 100644
--- a/drivers/rapidio/devices/rio_mport_cdev.c
+++ b/drivers/rapidio/devices/rio_mport_cdev.c
@@ -573,8 +573,7 @@ static void dma_req_free(struct kref *ref)
 			refcount);
 	struct mport_cdev_priv *priv = req->priv;
 
-	dma_unmap_sg(req->dmach->device->dev,
-		     req->sgt.sgl, req->sgt.nents, req->dir);
+	dma_unmap_sgtable(req->dmach->device->dev, &req->sgt, req->dir, 0);
 	sg_free_table(&req->sgt);
 	if (req->page_list) {
 		unpin_user_pages(req->page_list, req->nr_pages);
@@ -930,9 +929,8 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
 			   xfer->offset, xfer->length);
 	}
 
-	nents = dma_map_sg(chan->device->dev,
-			   req->sgt.sgl, req->sgt.nents, dir);
-	if (nents == 0) {
+	ret = dma_map_sgtable(chan->device->dev, &req->sgt, dir, 0);
+	if (ret) {
 		rmcd_error("Failed to map SG list");
 		ret = -EFAULT;
 		goto err_pg;
-- 
2.17.1