From patchwork Tue Aug 18 12:47:31 2020
X-Patchwork-Submitter: Coly Li <colyli@suse.de>
X-Patchwork-Id: 262373
From: Coly Li <colyli@suse.de>
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
    netdev@vger.kernel.org, open-iscsi@googlegroups.com,
    linux-scsi@vger.kernel.org, ceph-devel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Coly Li <colyli@suse.de>,
    Chaitanya Kulkarni, Christoph Hellwig, Hannes Reinecke, Jan Kara,
    Jens Axboe, Mikhail Skorzhinskii, Philipp Reisner, Sagi Grimberg,
    Vlastimil Babka, stable@vger.kernel.org
Subject: [PATCH v6 1/6] net: introduce helper sendpage_ok() in include/linux/net.h
Date: Tue, 18 Aug 2020 20:47:31 +0800
Message-Id: <20200818124736.5790-2-colyli@suse.de>
In-Reply-To: <20200818124736.5790-1-colyli@suse.de>
References: <20200818124736.5790-1-colyli@suse.de>

The original problem came from the nvme-over-tcp code, which mistakenly
used kernel_sendpage() to send pages allocated by __get_free_pages()
without the __GFP_COMP flag. The tail pages of such high-order
allocations have no refcount (page_count() is 0), so sending them with
kernel_sendpage() may trigger a kernel panic from a corrupted kernel
heap: the network stack sees their refcount drop to zero and
incorrectly frees pages that are still in use.

This patch introduces a helper, sendpage_ok(), which returns true if
the page being checked,
- is not a slab page: PageSlab(page) is false, and
- has a page refcount: page_count(page) is not zero.

Any driver that wants to send a page to the remote end with
kernel_sendpage() can use this helper to check whether the page is safe
to hand over. If the helper returns false, the driver should fall back
to a non-sendpage method (e.g. sock_no_sendpage()) to handle the page.
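To make the failure mode concrete, here is a minimal sketch (not part
of this patch; sendpage_ok_demo() is a made-up name) of the allocation
pattern that triggers the problem:

  static void sendpage_ok_demo(void)
  {
  	/* order-1 allocation without __GFP_COMP, as nvme-tcp did */
  	unsigned long buf = __get_free_pages(GFP_KERNEL, 1);
  	struct page *second;

  	if (!buf)
  		return;

  	/* the second (tail) page of the order-1 allocation */
  	second = virt_to_page((void *)buf) + 1;

  	/*
  	 * Without __GFP_COMP only the first page gets a refcount;
  	 * page_count(second) is 0, so the put_page() done inside the
  	 * sendpage path would free memory that is still in use.
  	 * sendpage_ok() returns false for exactly this page.
  	 */
  	WARN_ON(page_count(second) != 0);
  	WARN_ON(sendpage_ok(second));

  	free_pages(buf, 1);
  }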
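And a sketch of the intended caller pattern (again illustrative only;
the function name is hypothetical, not something a driver must use):

  static ssize_t xmit_one_page(struct socket *sock, struct page *page,
  			       int offset, size_t size, int flags)
  {
  	/* safe to let the network stack take page references */
  	if (sendpage_ok(page))
  		return kernel_sendpage(sock, page, offset, size, flags);

  	/* slab page or zero refcount: use the copying path instead */
  	return sock_no_sendpage(sock, page, offset, size, flags);
  }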
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Chaitanya Kulkarni
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: Jan Kara
Cc: Jens Axboe
Cc: Mikhail Skorzhinskii
Cc: Philipp Reisner
Cc: Sagi Grimberg
Cc: Vlastimil Babka
Cc: stable@vger.kernel.org
---
 include/linux/net.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/include/linux/net.h b/include/linux/net.h
index d48ff1180879..a807fad31958 100644
--- a/include/linux/net.h
+++ b/include/linux/net.h
@@ -21,6 +21,7 @@
 #include <linux/rcupdate.h>
 #include <linux/once.h>
 #include <linux/fs.h>
+#include <linux/mm.h>
 #include <linux/sockptr.h>
 
 #include <uapi/linux/net.h>
@@ -286,6 +287,21 @@ do {								\
 #define net_get_random_once_wait(buf, nbytes)			\
 	get_random_once_wait((buf), (nbytes))
 
+/*
+ * E.g. XFS meta- & log-data is in slab pages, or bcache meta
+ * data pages, or other high order pages allocated by
+ * __get_free_pages() without __GFP_COMP, which have a page_count
+ * of 0 and/or have PageSlab() set. We cannot use send_page for
+ * those, as that does get_page(); put_page(); and would cause
+ * either a VM_BUG directly, or __page_cache_release a page that
+ * would actually still be referenced by someone, leading to some
+ * obscure delayed Oops somewhere else.
+ */
+static inline bool sendpage_ok(struct page *page)
+{
+	return (!PageSlab(page) && page_count(page) >= 1);
+}
+
 int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec,
 		   size_t num, size_t len);
 int kernel_sendmsg_locked(struct sock *sk, struct msghdr *msg,